{"abstract": "This paper addresses semantic image segmentation by incorporating rich\ninformation into Markov Random Field (MRF), including high-order relations and\nmixture of label contexts. Unlike previous works that optimized MRFs using\niterative algorithm, we solve MRF by proposing a Convolutional Neural Network\n(CNN), namely Deep Parsing Network (DPN), which enables deterministic\nend-to-end computation in a single forward pass. Specifically, DPN extends a\ncontemporary CNN architecture to model unary terms and additional layers are\ncarefully devised to approximate the mean field algorithm (MF) for pairwise\nterms. It has several appealing properties. First, different from the recent\nworks that combined CNN and MRF, where many iterations of MF were required for\neach training image during back-propagation, DPN is able to achieve high\nperformance by approximating one iteration of MF. Second, DPN represents\nvarious types of pairwise terms, making many existing works as its special\ncases. Third, DPN makes MF easier to be parallelized and speeded up in\nGraphical Processing Unit (GPU). DPN is thoroughly evaluated on the PASCAL VOC\n2012 dataset, where a single DPN model yields a new state-of-the-art\nsegmentation accuracy.", "field": [], "task": ["Semantic Segmentation"], "method": [], "dataset": ["Cityscapes test"], "metric": ["Mean IoU (class)"], "title": "Semantic Image Segmentation via Deep Parsing Network"} {"abstract": "We propose the Encoder-Recurrent-Decoder (ERD) model for recognition and\nprediction of human body pose in videos and motion capture. The ERD model is a\nrecurrent neural network that incorporates nonlinear encoder and decoder\nnetworks before and after recurrent layers. We test instantiations of ERD\narchitectures in the tasks of motion capture (mocap) generation, body pose\nlabeling and body pose forecasting in videos. Our model handles mocap training\ndata across multiple subjects and activity domains, and synthesizes novel\nmotions while avoid drifting for long periods of time. For human pose labeling,\nERD outperforms a per frame body part detector by resolving left-right body\npart confusions. For video pose forecasting, ERD predicts body joint\ndisplacements across a temporal horizon of 400ms and outperforms a first order\nmotion model based on optical flow. ERDs extend previous Long Short Term Memory\n(LSTM) models in the literature to jointly learn representations and their\ndynamics. Our experiments show such representation learning is crucial for both\nlabeling and prediction in space-time. We find this is a distinguishing feature\nbetween the spatio-temporal visual domain in comparison to 1D text, speech or\nhandwriting, where straightforward hard coded representations have shown\nexcellent results when directly combined with recurrent units.", "field": [], "task": ["Human Dynamics", "Human Pose Forecasting", "Motion Capture", "Optical Flow Estimation", "Representation Learning"], "method": [], "dataset": ["Human3.6M"], "metric": ["MAR, walking, 400ms", "MAR, walking, 1,000ms"], "title": "Recurrent Network Models for Human Dynamics"} {"abstract": "Zero-shot learning (ZSL) is a challenging problem that aims to recognize the target categories without seen data, where semantic information is leveraged to transfer knowledge from some source classes. 
Although ZSL has made great progress in recent years, most existing approaches tend to overfit the source classes in the generalized zero-shot learning (GZSL) task, which indicates that they learn little knowledge about target classes. To tackle this problem, we propose a novel Transferable Contrastive Network (TCN) that explicitly transfers knowledge from the source classes to the target classes. It automatically contrasts one image with different classes to judge whether they are consistent or not. By exploiting the class similarities to make knowledge transfer from source images to similar target classes, our approach is more robust in recognizing the target images. Experiments on five benchmark datasets show the superiority of our approach for GZSL.", "field": [], "task": ["Generalized Zero-Shot Learning", "Transfer Learning", "Zero-Shot Learning"], "method": [], "dataset": ["SUN Attribute", "CUB-200-2011"], "metric": ["average top-1 classification accuracy", "Harmonic mean"], "title": "Transferable Contrastive Network for Generalized Zero-Shot Learning"} {"abstract": "A novel algorithm to segment a primary object in a video sequence is proposed in this work. First, we generate candidate regions for the primary object using both color and motion edges. Second, we estimate initial primary object regions by exploiting the recurrence property of the primary object. Third, we repeatedly augment the initial regions with missing parts or reduce them by excluding noisy parts. This augmentation and reduction process (ARP) identifies the primary object region in each frame. Experimental results demonstrate that the proposed algorithm significantly outperforms the state-of-the-art conventional algorithms on recent benchmark datasets.\r", "field": [], "task": ["Semantic Segmentation", "Unsupervised Video Object Segmentation"], "method": [], "dataset": ["DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "Primary Object Segmentation in Videos Based on Region Augmentation and Reduction"} {"abstract": "Numerous models describing the human emotional states have been built by the\npsychology community. Alongside, Deep Neural Networks (DNN) are reaching\nexcellent performance and are becoming interesting feature extraction tools\nin many computer vision tasks. Inspired by works from the psychology community,\nwe first study the link between the compact two-dimensional representation of\nthe emotion known as arousal-valence, and discrete emotion classes (e.g. anger,\nhappiness, sadness, etc.) used in the computer vision community. It enables us to\nassess the benefits -- in terms of discrete emotion inference -- of adding an\nextra dimension to arousal-valence (usually named dominance). Building on these\nobservations, we propose CAKE, a 3-dimensional representation of emotion\nlearned in a multi-domain fashion, achieving accurate emotion recognition on\nseveral public datasets. Moreover, we visualize how emotion boundaries are\norganized inside DNN representations and show that DNNs are implicitly learning\narousal-valence-like descriptions of emotions.
Finally, we use the CAKE\nrepresentation to compare the quality of the annotations of different public\ndatasets.", "field": [], "task": ["Emotion Recognition", "Facial Expression Recognition"], "method": [], "dataset": ["AffectNet"], "metric": ["Accuracy (7 emotion)", "Accuracy (8 emotion)"], "title": "CAKE: Compact and Accurate K-dimensional representation of Emotion"} {"abstract": "We propose an octree guided neural network architecture and spherical\nconvolutional kernel for machine learning from arbitrary 3D point clouds. The\nnetwork architecture capitalizes on the sparse nature of irregular point\nclouds, and hierarchically coarsens the data representation with space\npartitioning. At the same time, the proposed spherical kernels systematically\nquantize point neighborhoods to identify local geometric structures in the\ndata, while maintaining the properties of translation-invariance and asymmetry.\nWe specify spherical kernels with the help of network neurons that in turn are\nassociated with spatial locations. We exploit this association to avert dynamic\nkernel generation during network training, which enables efficient learning with\nhigh resolution point clouds. The effectiveness of the proposed technique is\nestablished on the benchmark tasks of 3D object classification and\nsegmentation, achieving new state-of-the-art results on the ShapeNet and RueMonge2014\ndatasets.", "field": [], "task": ["3D Object Classification", "3D Part Segmentation", "Object Classification"], "method": [], "dataset": ["ShapeNet-Part"], "metric": ["Class Average IoU", "Instance Average IoU"], "title": "Octree guided CNN with Spherical Kernels for 3D Point Clouds"} {"abstract": "The Masked Language Model (MLM) framework has been widely adopted for self-supervised language pre-training. In this paper, we argue that randomly sampled masks in MLM would lead to undesirably large gradient variance. Thus, we theoretically quantify the gradient variance via correlating the gradient covariance with the Hamming distance between two different masks (given a certain text sequence). To reduce the variance due to the sampling of masks, we propose a fully-explored masking strategy, where a text sequence is divided into a certain number of non-overlapping segments. Thereafter, the tokens within one segment are masked for training. We prove, from a theoretical perspective, that the gradients derived from this new masking schema have a smaller variance and can lead to more efficient self-supervised training. We conduct extensive experiments on both continual pre-training and general pre-training from scratch. Empirical results confirm that this new masking strategy can consistently outperform standard random masking. Detailed efficiency analysis and ablation studies further validate the advantages of our fully-explored masking strategy under the MLM framework.", "field": [], "task": ["Language Modelling", "Sentence Classification"], "method": [], "dataset": ["ACL-ARC"], "metric": ["F1"], "title": "Improving Self-supervised Pre-training via a Fully-Explored Masked Language Model"} {"abstract": "Background\r\nAtrial fibrillation (AF) is the most common and debilitating abnormality among the arrhythmias worldwide, with a major impact on morbidity and mortality. The detection of AF becomes crucial in preventing both acute and chronic cardiac rhythm disorders.\r\n\r\nObjective\r\nOur objective is to devise a method for real-time, automated detection of AF episodes in electrocardiograms (ECGs).
This method utilizes RR intervals, and it involves several basic operations of nonlinear/linear integer filters, symbolic dynamics and the calculation of Shannon entropy. Using novel recursive algorithms, online analytical processing of this method can be achieved.\r\n\r\nResults\r\nFour publicly-accessible sets of clinical data (Long-Term AF, MIT-BIH AF, MIT-BIH Arrhythmia, and MIT-BIH Normal Sinus Rhythm Databases) were selected for investigation. The first database is used as a training set; in accordance with the receiver operating characteristic (ROC) curve, the best performance using this method was achieved at the discrimination threshold of 0.353: the sensitivity (Se), specificity (Sp), positive predictive value (PPV) and overall accuracy (ACC) were 96.72%, 95.07%, 96.61% and 96.05%, respectively. The other three databases are used as testing sets. Using the obtained threshold value (i.e., 0.353), for the second set, the obtained parameters were 96.89%, 98.25%, 97.62% and 97.67%, respectively; for the third database, these parameters were 97.33%, 90.78%, 55.29% and 91.46%, respectively; finally, for the fourth set, the Sp was 98.28%. The existing methods were also employed for comparison.\r\n\r\nConclusions\r\nOverall, in contrast to the other available techniques, the test results indicate that the newly developed approach outperforms traditional methods on these databases under the various experimental situations assessed, and suggest that our technique could be of practical use for clinicians in the future.", "field": [], "task": ["Atrial Fibrillation Detection"], "method": [], "dataset": ["MIT-BIH AF"], "metric": ["Accuracy"], "title": "Automatic online detection of atrial fibrillation based on symbolic dynamics and Shannon entropy"} {"abstract": "Recent research efforts enable the study of natural language grounded navigation in photo-realistic environments, e.g., following natural language instructions or dialog. However, existing methods tend to overfit training data in seen environments and fail to generalize well in previously unseen environments. To close the gap between seen and unseen environments, we aim at learning a generalized navigation model from two novel perspectives: (1) we introduce a multitask navigation model that can be seamlessly trained on both Vision-Language Navigation (VLN) and Navigation from Dialog History (NDH) tasks, which benefits from richer natural language guidance and effectively transfers knowledge across tasks; (2) we propose to learn environment-agnostic representations for the navigation policy that are invariant among the environments seen during training, thus generalizing better on unseen environments. Extensive experiments show that environment-agnostic multitask learning significantly reduces the performance gap between seen and unseen environments, and the navigation agent trained this way outperforms baselines on unseen environments by 16% (relative measure on success rate) on VLN and 120% (goal progress) on NDH. Our submission to the CVDN leaderboard establishes a new state-of-the-art for the NDH task on the holdout test set.
Code is available at https://github.com/google-research/valan.", "field": [], "task": ["Vision-Language Navigation"], "method": [], "dataset": ["VLN Challenge", "Cooperative Vision-and-Dialogue Navigation"], "metric": ["length", "spl", "oracle success", "dist_to_end_reduction", "success", "error"], "title": "Environment-agnostic Multitask Learning for Natural Language Grounded Navigation"} {"abstract": "Segmentation is a fundamental task in medical image analysis. However, most existing methods focus on primary region extraction and ignore edge information, which is useful for obtaining accurate segmentation. In this paper, we propose a generic medical segmentation method, called Edge-aTtention guidance Network (ET-Net), which embeds edge-attention representations to guide the segmentation network. Specifically, an edge guidance module is utilized to learn the edge-attention representations in the early encoding layers, which are then transferred to the multi-scale decoding layers, fused using a weighted aggregation module. The experimental results on four segmentation tasks (i.e., optic disc/cup and vessel segmentation in retinal images, and lung segmentation in chest X-Ray and CT images) demonstrate that preserving edge-attention representations contributes to the final segmentation accuracy, and our proposed method outperforms current state-of-the-art segmentation methods. The source code of our method is available at https://github.com/ZzzJzzZ/ETNet.", "field": [], "task": ["Medical Image Segmentation", "Optic Disc Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["Montgomery County", "LUNA", "DRIVE"], "metric": ["mIoU", "Accuracy"], "title": "ET-Net: A Generic Edge-aTtention Guidance Network for Medical Image Segmentation"} {"abstract": "We present a transition-based AMR parser that directly generates AMR parses\nfrom plain text. We use Stack-LSTMs to represent our parser state and make\ndecisions greedily. In our experiments, we show that our parser achieves very\ncompetitive scores on English using only AMR training data. Adding additional\ninformation, such as POS tags and dependency trees, improves the results\nfurther.", "field": [], "task": ["AMR Parsing"], "method": [], "dataset": ["LDC2014T12"], "metric": ["F1 Newswire", "F1 Full"], "title": "AMR Parsing using Stack-LSTMs"} {"abstract": "Neural architecture search (NAS) has shown promising results discovering models that are both accurate and fast. For NAS, training a one-shot model has become a popular strategy to rank the relative quality of different architectures (child models) using a single set of shared weights. However, while one-shot model weights can effectively rank different network architectures, the absolute accuracies from these shared weights are typically far below those obtained from stand-alone training. To compensate, existing methods assume that the weights must be retrained, finetuned, or otherwise post-processed after the search is completed. These steps significantly increase the compute requirements and complexity of the architecture search and model deployment. In this work, we propose BigNAS, an approach that challenges the conventional wisdom that post-processing of the weights is necessary to get good prediction accuracies. Without extra retraining or post-processing steps, we are able to train a single set of shared weights on ImageNet and use these weights to obtain child models whose sizes range from 200 to 1000 MFLOPs. 
Our discovered model family, BigNASModels, achieves top-1 accuracies ranging from 76.5% to 80.9%, surpassing state-of-the-art models in this range including EfficientNets and Once-for-All networks without extra retraining or post-processing. We present an ablative study and analysis to further understand the proposed BigNASModels.", "field": [], "task": ["Neural Architecture Search"], "method": [], "dataset": ["ImageNet"], "metric": ["Top-1 Error Rate", "MACs", "Params", "Accuracy"], "title": "BigNAS: Scaling Up Neural Architecture Search with Big Single-Stage Models"} {"abstract": "Estimating individual level treatment effects (ITE) from observational data is a challenging and important area in causal machine learning and is commonly considered in diverse mission-critical applications. In this paper, we propose an information theoretic approach in order to find more reliable representations for estimating ITE. We leverage the Information Bottleneck (IB) principle, which addresses the trade-off between conciseness and predictive power of representation. With the introduction of an extended graphical model for causal information bottleneck, we encourage the independence between the learned representation and the treatment type. We also introduce an additional form of a regularizer from the perspective of understanding ITE in the semi-supervised learning framework to ensure more reliable representations. Experimental results show that our model achieves state-of-the-art results and exhibits more reliable prediction performances with uncertainty information on real-world datasets.", "field": [], "task": ["Causal Inference"], "method": [], "dataset": ["IDHP"], "metric": ["Average Treatment Effect Error"], "title": "Reliable Estimation of Individual Treatment Effect with Causal Information Bottleneck"} {"abstract": "Accurately segmenting nuclei instances is a crucial step in computer-aided\nimage analysis to extract rich features for cellular estimation and subsequent\ndiagnosis as well as treatment. It still remains challenging because the\nwide existence of nuclei clusters, along with the large morphological variances\namong different organs, makes nuclei instance segmentation susceptible to\nover-/under-segmentation. Additionally, the inevitably subjective annotating\nand mislabeling prevent the network from learning from reliable samples and\neventually reduce the generalization capability for robustly segmenting unseen\norgan nuclei. To address these issues, we propose a novel deep neural network,\nnamely Contour-aware Informative Aggregation Network (CIA-Net), with a multi-level\ninformation aggregation module between two task-specific decoders. Rather than\nusing independent decoders, it leverages the merit of spatial and texture\ndependencies between nuclei and contour by bi-directionally aggregating\ntask-specific features. Furthermore, we propose a novel smooth truncated loss\nthat modulates losses to reduce the perturbation from outliers. Consequently,\nthe network can focus on learning from reliable and informative samples, which\ninherently improves the generalization capability.
Experiments on the 2018\nMICCAI challenge of Multi-Organ-Nuclei-Segmentation validated the effectiveness\nof our proposed method, surpassing all the other 35 competitive teams by a\nsignificant margin.", "field": [], "task": ["Instance Segmentation", "Multi-tissue Nucleus Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["Kumar"], "metric": ["Hausdorff Distance (mm)", "Dice"], "title": "CIA-Net: Robust Nuclei Instance Segmentation with Contour-aware Information Aggregation"} {"abstract": "Domain adaptation (DA) and domain generalization (DG) have emerged as a solution to the domain shift problem where the distribution of the source and target data is different. The task of DG is more challenging than DA as the target data is totally unseen during the training phase in DG scenarios. The current state-of-the-art employs adversarial techniques, however, these are rarely considered for the DG problem. Furthermore, these approaches do not consider correlation alignment which has been proven highly beneficial for minimizing domain discrepancy. In this paper, we propose a correlation-aware adversarial DA and DG framework where the features of the source and target data are minimized using correlation alignment along with adversarial learning. Incorporating the correlation alignment module along with adversarial learning helps to achieve a more domain agnostic model due to the improved ability to reduce domain discrepancy with unlabeled target data more effectively. Experiments on benchmark datasets serve as evidence that our proposed method yields improved state-of-the-art performance.", "field": [], "task": ["Domain Adaptation", "Domain Generalization"], "method": [], "dataset": ["Office-31", "Office-Home", "ImageCLEF-DA"], "metric": ["Average Accuracy", "Accuracy"], "title": "Correlation-aware Adversarial Domain Adaptation and Generalization"} {"abstract": "In this paper, we analyze neural network-based dialogue systems trained in an end-to-end manner using an updated version of the recent Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This dataset is interesting because of its size, long context lengths, and technical nature; thus, it can be used to train large models directly from data with minimal feature engineering. We provide baselines in two different environments: one where models are trained to select the correct next response from a list of candidate responses, and one where models are trained to maximize the loglikelihood of a generated utterance conditioned on the context of the conversation. These are both evaluated on a recall task that we call next utterance classification (NUC), and using vector-based metrics that capture the topicality of the responses. We observe that current end-to-end models are unable to completely solve these tasks; thus, we provide a qualitative error analysis to determine the primary causes of error for end-to-end models evaluated on NUC, and examine sample utterances from the generative models. 
As a result of this analysis, we suggest some promising directions for future research on the Ubuntu Dialogue Corpus, which can also be applied to end-to-end dialogue systems in general.", "field": [], "task": ["Conversation Disentanglement", "Feature Engineering"], "method": [], "dataset": ["Linux IRC (Ch2 Elsner)", "Linux IRC (Ch2 Kummerfeld)"], "metric": ["1-1", "Shen F-1", "Local"], "title": "Training End-to-End Dialogue Systems with the Ubuntu Dialogue Corpus"} {"abstract": "Object segmentation and structure localization are important steps in\nautomated image analysis pipelines for microscopy images. We present a\nconvolution neural network (CNN) based deep learning architecture for\nsegmentation of objects in microscopy images. The proposed network can be used\nto segment cells, nuclei and glands in fluorescence microscopy and histology\nimages after slight tuning of input parameters. The network trains at multiple\nresolutions of the input image, connects the intermediate layers for better\nlocalization and context and generates the output using multi-resolution\ndeconvolution filters. The extra convolutional layers which bypass the\nmax-pooling operation allow the network to train for variable input intensities\nand object size and make it robust to noisy data. We compare our results on\npublicly available data sets and show that the proposed network outperforms\nrecent deep learning algorithms.", "field": [], "task": ["Multi-tissue Nucleus Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["Kumar"], "metric": ["Hausdorff Distance (mm)", "Dice"], "title": "Micro-Net: A unified model for segmentation of various objects in microscopy images"} {"abstract": "We present our system for the WNUT 2017 Named Entity Recognition challenge on Twitter data. We describe two modifications of a basic neural network architecture for sequence tagging. First, we show how we exploit additional labeled data, where the Named Entity tags differ from the target task. Then, we propose a way to incorporate sentence level features. Our system uses both methods and ranked second for entity level annotations, achieving an F1-score of 40.78, and second for surface form annotations, achieving an F1-score of 39.33.", "field": [], "task": ["Named Entity Recognition", "Transfer Learning"], "method": [], "dataset": ["Long-tail emerging entities"], "metric": ["F1 (surface form)", "F1"], "title": "Transfer Learning and Sentence Level Features for Named Entity Recognition on Tweets"} {"abstract": "Recently, emotion detection in conversations becomes a hot research topic in the Natural Language Processing community. In this paper, we focus on emotion detection in multi-speaker conversations instead of traditional two-speaker conversations in existing studies. Different from non-conversation text, emotion detection in conversation text has one specific challenge in modeling the context-sensitive dependence. Besides, emotion detection in multi-speaker conversations endorses another specific challenge in modeling the speaker-sensitive dependence. To address above two challenges, we propose a conversational graph-based convolutional neural network. On the one hand, our approach represents each utterance and each speaker as a node. On the other hand, the context-sensitive dependence is represented by an undirected edge between two utterances nodes from the same conversation and the speaker-sensitive dependence is represented by an undirected edge between an utterance node and its speaker node. 
In this way, the entire conversational corpus can be symbolized as a large heterogeneous graph and the emotion detection task can be recast as a classification problem of the utterance nodes in the graph. The experimental results on a multi-modal and multi-speaker conversation corpus demonstrate the great effectiveness of the proposed approach.", "field": [], "task": ["Emotion Recognition in Conversation"], "method": [], "dataset": ["MELD"], "metric": ["Weighted Macro-F1"], "title": "Modeling both context- and speaker-sensitive dependence for emotion detection in multi-speaker conversations"} {"abstract": "Self-training is a competitive approach in domain adaptive segmentation, which trains the network with the pseudo labels on the target domain. However inevitably, the pseudo labels are noisy and the target features are dispersed due to the discrepancy between source and target domains. In this paper, we rely on representative prototypes, the feature centroids of classes, to address the two issues for unsupervised domain adaptation. In particular, we take one step further and exploit the feature distances from prototypes that provide richer information than mere prototypes. Specifically, we use it to estimate the likelihood of pseudo labels to facilitate online correction in the course of training. Meanwhile, we align the prototypical assignments based on relative feature distances for two different views of the same target, producing a more compact target feature space. Moreover, we find that distilling the already learned knowledge to a self-supervised pretrained model further boosts the performance. Our method shows tremendous performance advantage over state-of-the-art methods. We will make the code publicly available.", "field": [], "task": ["Domain Adaptation", "Image-to-Image Translation", "Semantic Segmentation", "Synthetic-to-Real Translation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["GTA5 to Cityscapes", "GTAV-to-Cityscapes Labels", "SYNTHIA-to-Cityscapes"], "metric": ["mIoU (13 classes)", "mIoU"], "title": "Prototypical Pseudo Label Denoising and Target Structure Learning for Domain Adaptive Semantic Segmentation"} {"abstract": "Deep learning based general language models have achieved state-of-the-art results in many popular tasks such as sentiment analysis and QA tasks. Text in domains like social media has its own salient characteristics. Domain knowledge should be helpful in domain relevant tasks. In this work, we devise a simple method to obtain domain knowledge and further propose a method to integrate domain knowledge with general knowledge based on deep language models to improve performance of emotion classification. Experiments on Twitter data show that even though a deep language model fine-tuned by a target domain data has attained comparable results to that of previous state-of-the-art models, this fine-tuned model can still benefit from our extracted domain knowledge to obtain more improvement. This highlights the importance of making use of domain knowledge in domain-specific applications.", "field": [], "task": ["Emotion Classification", "Language Modelling", "Sentiment Analysis"], "method": [], "dataset": ["SemEval 2018 Task 1E-c"], "metric": ["Micro-F1", "Macro-F1", "Accuracy"], "title": "Improving Multi-label Emotion Classification by Integrating both General and Domain-specific Knowledge"} {"abstract": "With the rise in popularity of machine and deep learning models, there is an increased focus on their vulnerability to malicious inputs. 
These adversarial examples drift model predictions away from the original intent of the network and are a growing concern in practical security. In order to combat these attacks, neural networks can leverage traditional image processing approaches or state-of-the-art defensive models to reduce perturbations in the data. Defensive approaches that take a global approach to noise reduction are effective against adversarial attacks, however their lossy approach often distorts important data within the image. In this work, we propose a visual saliency based approach to cleaning data affected by an adversarial attack. Our model leverages the salient regions of an adversarial image in order to provide a targeted countermeasure while comparatively reducing loss within the cleaned images. We measure the accuracy of our model by evaluating the effectiveness of state-of-the-art saliency methods prior to attack, under attack, and after application of cleaning methods. We demonstrate the effectiveness of our proposed approach in comparison with related defenses and against established adversarial attack methods, across two saliency datasets. Our targeted approach shows significant improvements in a range of standard statistical and distance saliency metrics, in comparison with both traditional and state-of-the-art approaches.", "field": [], "task": ["Adversarial Attack", "Music Genre Recognition"], "method": [], "dataset": ["1B Words"], "metric": ["10 Hops"], "title": "SAD: Saliency-based Defenses Against Adversarial Examples"} {"abstract": "3D semantic scene labeling is fundamental to agents operating in the real\nworld. In particular, labeling raw 3D point sets from sensors provides\nfine-grained semantics. Recent works leverage the capabilities of Neural\nNetworks (NNs), but are limited to coarse voxel predictions and do not\nexplicitly enforce global consistency. We present SEGCloud, an end-to-end\nframework to obtain 3D point-level segmentation that combines the advantages of\nNNs, trilinear interpolation(TI) and fully connected Conditional Random Fields\n(FC-CRF). Coarse voxel predictions from a 3D Fully Convolutional NN are\ntransferred back to the raw 3D points via trilinear interpolation. Then the\nFC-CRF enforces global consistency and provides fine-grained semantics on the\npoints. We implement the latter as a differentiable Recurrent NN to allow joint\noptimization. We evaluate the framework on two indoor and two outdoor 3D\ndatasets (NYU V2, S3DIS, KITTI, Semantic3D.net), and show performance\ncomparable or superior to the state-of-the-art on all datasets.", "field": [], "task": ["Semantic Segmentation"], "method": [], "dataset": ["Semantic3D", "S3DIS Area5"], "metric": ["mAcc", "mIoU"], "title": "SEGCloud: Semantic Segmentation of 3D Point Clouds"} {"abstract": "This paper addresses the problem of 3D human pose estimation from a single\nimage. We follow a standard two-step pipeline by first detecting the 2D\nposition of the $N$ body joints, and then using these observations to infer 3D\npose. For the first step, we use a recent CNN-based detector. For the second\nstep, most existing approaches perform 2$N$-to-3$N$ regression of the Cartesian\njoint coordinates. 
We show that more precise pose estimates can be obtained by\nrepresenting both the 2D and 3D human poses using $N\\times N$ distance\nmatrices, and formulating the problem as a 2D-to-3D distance matrix regression.\nFor learning such a regressor we leverage simple Neural Network\narchitectures, which, by construction, enforce positivity and symmetry of the\npredicted matrices. The approach also has the advantage of naturally handling\nmissing observations and allows hypothesizing the position of non-observed\njoints. Quantitative results on the HumanEva and Human3.6M datasets demonstrate\nconsistent performance gains over the state-of-the-art. Qualitative evaluation on\nthe in-the-wild images of the LSP dataset, using the regressor learned on\nHuman3.6M, reveals very promising generalization results.", "field": [], "task": ["3D Human Pose Estimation", "Pose Estimation", "Regression"], "method": [], "dataset": ["HumanEva-I"], "metric": ["Mean Reconstruction Error (mm)"], "title": "3D Human Pose Estimation from a Single Image via Distance Matrix Regression"} {"abstract": "Faster RCNN has achieved great success for generic object detection including\nPASCAL object detection and MS COCO object detection. In this report, we\npropose a carefully designed Faster RCNN method named FDNet1.0 for face\ndetection. Several techniques were employed including multi-scale training,\nmulti-scale testing, a light-designed RCNN, some tricks for inference and a\nvote-based ensemble method. Our method achieves two 1st places and one 2nd\nplace in three tasks on the WIDER FACE validation dataset (easy set, medium set,\nhard set).", "field": [], "task": ["Face Detection", "Object Detection"], "method": [], "dataset": ["WIDER Face (Hard)", "WIDER Face (Medium)", "WIDER Face (Easy)"], "metric": ["AP"], "title": "Face Detection Using Improved Faster RCNN"} {"abstract": "Point cloud data from 3D LiDAR sensors are one of the most crucial sensor modalities for versatile safety-critical applications such as self-driving vehicles. Since the annotation of point cloud data is an expensive and time-consuming process, the utilisation of simulated environments and 3D LiDAR sensors for this task has recently started to gain popularity. With simulated sensors and environments, the process of obtaining annotated synthetic point cloud data became much easier. However, the generated synthetic point cloud data are still missing the artefacts that usually exist in point cloud data from real 3D LiDAR sensors. As a result, the performance of the trained models on this data for perception tasks when tested on real point cloud data is degraded due to the domain shift between simulated and real environments. Thus, in this work, we are proposing a domain adaptation framework for bridging this gap between synthetic and real point cloud data. Our proposed framework is based on the deep cycle-consistent generative adversarial networks (CycleGAN) architecture. We have evaluated the performance of our proposed framework on the task of vehicle detection from bird's eye view (BEV) point cloud images coming from real 3D LiDAR sensors.
The framework has shown competitive results with an improvement of more than 7% in average precision score over other baseline approaches when tested on real BEV point cloud images.", "field": [], "task": ["Domain Adaptation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["PreSIL to KITTI"], "metric": ["AP@0.7"], "title": "Domain Adaptation for Vehicle Detection from Bird's Eye View LiDAR Point Cloud Data"} {"abstract": "Emotion recognition in conversations (ERC) has received much attention recently in the natural language processing community. Considering that the emotions of the utterances in conversations are interactive, previous works usually implicitly model the emotion interaction between utterances by modeling dialogue context, but the misleading emotion information from context often interferes with the emotion interaction. We noticed that the gold emotion labels of the context utterances can provide explicit and accurate emotion interaction, but it is impossible to input gold labels at inference time. To address this problem, we propose an iterative emotion interaction network, which uses iteratively predicted emotion labels instead of gold emotion labels to explicitly model the emotion interaction. This approach solves the above problem, and can effectively retain the performance advantages of explicit modeling. We conduct experiments on two datasets, and our approach achieves state-of-the-art performance.", "field": [], "task": ["Emotion Recognition", "Emotion Recognition in Conversation"], "method": [], "dataset": ["IEMOCAP", "MELD"], "metric": ["Weighted Macro-F1", "F1"], "title": "An Iterative Emotion Interaction Network for Emotion Recognition in Conversations"} {"abstract": "Three-dimensional objects are commonly represented as 3D boxes in a point-cloud. This representation mimics the well-studied image-based 2D bounding-box detection but comes with additional challenges. Objects in a 3D world do not follow any particular orientation, and box-based detectors have difficulties enumerating all orientations or fitting an axis-aligned bounding box to rotated objects. In this paper, we instead propose to represent, detect, and track 3D objects as points. Our framework, CenterPoint, first detects centers of objects using a keypoint detector and regresses to other attributes, including 3D size, 3D orientation, and velocity. In a second stage, it refines these estimates using additional point features on the object. In CenterPoint, 3D object tracking simplifies to greedy closest-point matching. The resulting detection and tracking algorithm is simple, efficient, and effective. CenterPoint achieved state-of-the-art performance on the nuScenes benchmark for both 3D detection and tracking, with 65.5 NDS and 63.8 AMOTA for a single model. On the Waymo Open Dataset, CenterPoint outperforms all previous single model method by a large margin and ranks first among all Lidar-only submissions. The code and pretrained models are available at https://github.com/tianweiy/CenterPoint.", "field": [], "task": ["3D Multi-Object Tracking", "3D Object Detection", "3D Object Tracking", "Object Detection", "Object Tracking"], "method": [], "dataset": ["waymo pedestrian", "waymo cyclist", "nuScenes", "waymo all_ns"], "metric": ["mAAE", "mAP", "APH/L2", "mAVE", "mASE", "mAOE", "NDS", "amota", "mATE"], "title": "Center-based 3D Object Detection and Tracking"} {"abstract": "This paper revisits the bilinear attention networks in the visual question answering task from a graph perspective. 
The classical bilinear attention networks build a bilinear attention map to extract the joint representation of words in the question and objects in the image but lack fully exploring the relationship between words for complex reasoning. In contrast, we develop bilinear graph networks to model the context of the joint embeddings of words and objects. Two kinds of graphs are investigated, namely image-graph and question-graph. The image-graph transfers features of the detected objects to their related query words, enabling the output nodes to have both semantic and factual information. The question-graph exchanges information between these output nodes from image-graph to amplify the implicit yet important relationship between objects. These two kinds of graphs cooperate with each other, and thus our resulting model can model the relationship and dependency between objects, which leads to the realization of multi-step reasoning. Experimental results on the VQA v2.0 validation dataset demonstrate the ability of our method to handle the complex questions. On the test-std set, our best single model achieves state-of-the-art performance, boosting the overall accuracy to 72.41%.", "field": [], "task": ["Question Answering", "Visual Question Answering"], "method": [], "dataset": ["VQA v2 test-std", "GQA Test2019"], "metric": ["Binary", "number", "overall", "other", "Validity", "Consistency", "Plausibility", "Distribution", "yes/no", "Accuracy", "Open"], "title": "Bilinear Graph Networks for Visual Question Answering"} {"abstract": "There is a natural correlation between the visual and auditive elements of a\nvideo. In this work we leverage this connection to learn general and effective\nmodels for both audio and video analysis from self-supervised temporal\nsynchronization. We demonstrate that a calibrated curriculum learning scheme, a\ncareful choice of negative examples, and the use of a contrastive loss are\ncritical ingredients to obtain powerful multi-sensory representations from\nmodels optimized to discern temporal synchronization of audio-video pairs.\nWithout further finetuning, the resulting audio features achieve performance\nsuperior or comparable to the state-of-the-art on established audio\nclassification benchmarks (DCASE2014 and ESC-50). At the same time, our visual\nsubnet provides a very effective initialization to improve the accuracy of\nvideo-based action recognition models: compared to learning from scratch, our\nself-supervised pretraining yields a remarkable gain of +19.9% in action\nrecognition accuracy on UCF101 and a boost of +17.7% on HMDB51.", "field": [], "task": ["Action Recognition", "Audio Classification", "Curriculum Learning", "Temporal Action Localization"], "method": [], "dataset": ["ESC-50"], "metric": ["Top-1 Accuracy"], "title": "Cooperative Learning of Audio and Video Models from Self-Supervised Synchronization"} {"abstract": "Obtaining large, human labelled speech datasets to train models for emotion\nrecognition is a notoriously challenging task, hindered by annotation cost and\nlabel ambiguity. In this work, we consider the task of learning embeddings for\nspeech classification without access to any form of labelled audio. We base our\napproach on a simple hypothesis: that the emotional content of speech\ncorrelates with the facial expression of the speaker. By exploiting this\nrelationship, we show that annotations of expression can be transferred from\nthe visual domain (faces) to the speech domain (voices) through cross-modal\ndistillation. 
We make the following contributions: (i) we develop a strong\nteacher network for facial emotion recognition that achieves the state of the\nart on a standard benchmark; (ii) we use the teacher to train a student, tabula\nrasa, to learn representations (embeddings) for speech emotion recognition\nwithout access to labelled audio data; and (iii) we show that the speech\nemotion embedding can be used for speech emotion recognition on external\nbenchmark datasets. Code, models and data are available.", "field": [], "task": ["Emotion Recognition", "Facial Expression Recognition", "Speech Emotion Recognition"], "method": [], "dataset": ["FERPlus"], "metric": ["Accuracy"], "title": "Emotion Recognition in Speech using Cross-Modal Transfer in the Wild"} {"abstract": "Conversational Emotion Recognition (CER) is a crucial task in Natural Language Processing (NLP) with wide applications. Prior works in CER generally focus on modeling emotion influences solely with utterance-level features, with little attention paid to the phrase-level semantic connection between utterances. Phrases carry sentiments when they refer to emotional events under certain topics, providing a global semantic connection between utterances throughout the entire conversation. In this work, we propose a two-stage Summarization and Aggregation Graph Inference Network (SumAggGIN), which seamlessly integrates inference for topic-related emotional phrases and local dependency reasoning over neighbouring utterances in a global-to-local fashion. Topic-related emotional phrases, which constitute the global topic-related emotional connections, are recognized by our proposed heterogeneous Summarization Graph. Local dependencies, which capture short-term emotional effects between neighbouring utterances, are further injected via an Aggregation Graph to distinguish the subtle differences between utterances containing emotional phrases. The two steps of graph inference are tightly coupled for a comprehensive understanding of emotional fluctuation. Experimental results on three CER benchmark datasets verify the effectiveness of our proposed model, which outperforms the state-of-the-art approaches.", "field": [], "task": ["Emotion Recognition", "Emotion Recognition in Conversation"], "method": [], "dataset": ["IEMOCAP", "MELD"], "metric": ["Weighted Macro-F1", "F1", "Accuracy"], "title": "Summarize before Aggregate: A Global-to-local Heterogeneous Graph Inference Network for Conversational Emotion Recognition"} {"abstract": "Temporal coherence is a valuable source of information in the context of\noptical flow estimation. However, finding a suitable motion model to leverage\nthis information is a non-trivial task. In this paper we propose an\nunsupervised online learning approach based on a convolutional neural network\n(CNN) that estimates such a motion model individually for each frame. By\nrelating forward and backward motion, these learned models not only allow us to\ninfer valuable motion information based on the backward flow, they also help to\nimprove the performance at occlusions, where a reliable prediction is\nparticularly useful. Moreover, our learned models are spatially variant and\nhence allow estimating non-rigid motion by construction. This, in turn,\nallows us to overcome the major limitation of recent rigidity-based approaches\nthat seek to improve the estimation by incorporating additional stereo/SfM\nconstraints. Experiments demonstrate the usefulness of our new approach.
They\nnot only show a consistent improvement of up to 27% for all major benchmarks\n(KITTI 2012, KITTI 2015, MPI Sintel) compared to a baseline without prediction,\nthey also show top results for the MPI Sintel benchmark -- the one of the three\nbenchmarks that contains the largest amount of non-rigid motion.", "field": [], "task": ["Optical Flow Estimation"], "method": [], "dataset": ["Sintel-clean"], "metric": ["Average End-Point Error"], "title": "ProFlow: Learning to Predict Optical Flow"} {"abstract": "Information selection is the most important component in document summarization task. In this paper, we propose to extend the basic neural encoding-decoding framework with an information selection layer to explicitly model and optimize the information selection process in abstractive document summarization. Specifically, our information selection layer consists of two parts: gated global information filtering and local sentence selection. Unnecessary information in the original document is first globally filtered, then salient sentences are selected locally while generating each summary sentence sequentially. To optimize the information selection process directly, distantly-supervised training guided by the golden summary is also imported. Experimental results demonstrate that the explicit modeling and optimizing of the information selection process improves document summarization performance significantly, which enables our model to generate more informative and concise summaries, and thus significantly outperform state-of-the-art neural abstractive methods.", "field": [], "task": ["Abstractive Text Summarization", "Document Summarization", "Machine Translation", "Text Generation"], "method": [], "dataset": ["CNN / Daily Mail"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Improving Neural Abstractive Document Summarization with Explicit Information Selection Modeling"} {"abstract": "Recent neural sequence-to-sequence models have shown significant progress on short text summarization. However, for document summarization, they fail to capture the long-term structure of both documents and multi-sentence summaries, resulting in information loss and repetitions. In this paper, we propose to leverage the structural information of both documents and multi-sentence summaries to improve the document summarization performance. Specifically, we import both structural-compression and structural-coverage regularization into the summarization process in order to capture the information compression and information coverage properties, which are the two most important structural properties of document summarization. Experimental results demonstrate that the structural regularization improves the document summarization performance significantly, which enables our model to generate more informative and concise summaries, and thus significantly outperforms state-of-the-art neural abstractive methods.", "field": [], "task": ["Abstractive Text Summarization", "Document Summarization", "Machine Translation", "Sentence Summarization", "Text Generation", "Text Summarization"], "method": [], "dataset": ["CNN / Daily Mail"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Improving Neural Abstractive Document Summarization with Structural Regularization"} {"abstract": "Existing named entity recognition (NER) systems rely on large amounts of human-labeled data for supervision. 
However, obtaining large-scale annotated data is challenging particularly in specific domains like health-care, e-commerce and so on. Given the availability of domain specific knowledge resources, (e.g., ontologies, dictionaries), distant supervision is a solution to generate automatically labeled training data to reduce human effort. The outcome of distant supervision for NER, however, is often noisy. False positive and false negative instances are the main issues that reduce performance on this kind of auto-generated data. In this paper, we explore distant supervision in a supervised setup. We adopt a technique of partial annotation to address false negative cases and implement a reinforcement learning strategy with a neural network policy to identify false positive instances. Our results establish a new state-of-the-art on four benchmark datasets taken from different domains and different languages. We then go on to show that our model reduces the amount of manually annotated data required to perform NER in a new domain.", "field": [], "task": ["Denoising", "Named Entity Recognition"], "method": [], "dataset": ["BC5CDR"], "metric": ["F1"], "title": "Reinforcement-based denoising of distantly supervised NER with partial annotation"} {"abstract": "Recently, anchor-free detection methods have been through great progress. The major two families, anchor-point detection and key-point detection, are at opposite edges of the speed-accuracy trade-off, with anchor-point detectors having the speed advantage. In this work, we boost the performance of the anchor-point detector over the key-point counterparts while maintaining the speed advantage. To achieve this, we formulate the detection problem from the anchor point's perspective and identify ineffective training as the main problem. Our key insight is that anchor points should be optimized jointly as a group both within and across feature pyramid levels. We propose a simple yet effective training strategy with soft-weighted anchor points and soft-selected pyramid levels to address the false attention issue within each pyramid level and the feature selection issue across all the pyramid levels, respectively. To evaluate the effectiveness, we train a single-stage anchor-free detector called Soft Anchor-Point Detector (SAPD). Experiments show that our concise SAPD pushes the envelope of speed/accuracy trade-off to a new level, outperforming recent state-of-the-art anchor-free and anchor-based detectors. Without bells and whistles, our best model can achieve a single-model single-scale AP of 47.4% on COCO.", "field": [], "task": ["Feature Selection", "Object Detection"], "method": [], "dataset": ["COCO test-dev"], "metric": ["APM", "box AP", "AP75", "APS", "APL", "AP50"], "title": "Soft Anchor-Point Object Detection"} {"abstract": "This work studies the problem of object goal navigation which involves navigating to an instance of the given object category in unseen environments. End-to-end learning-based navigation methods struggle at this task as they are ineffective at exploration and long-term planning. We propose a modular system called, `Goal-Oriented Semantic Exploration' which builds an episodic semantic map and uses it to explore the environment efficiently based on the goal object category. 
Empirical results in visually realistic simulation environments show that the proposed model outperforms a wide range of baselines including end-to-end learning-based methods as well as modular map-based methods, and led to the winning entry of the CVPR-2020 Habitat ObjectNav Challenge. Ablation analysis indicates that the proposed model learns semantic priors of the relative arrangement of objects in a scene, and uses them to explore efficiently. The domain-agnostic module design allows us to transfer our model to a mobile robot platform and achieve similar performance for object goal navigation in the real world.", "field": [], "task": ["Robot Navigation"], "method": [], "dataset": ["Habitat 2020 Object Nav test-std"], "metric": ["SOFT_SPL", "DISTANCE_TO_GOAL", "SUCCESS", "SPL"], "title": "Object Goal Navigation using Goal-Oriented Semantic Exploration"} {"abstract": "Depth estimation provides essential information to perform autonomous driving\nand driver assistance. In particular, Monocular Depth Estimation is interesting\nfrom a practical point of view, since using a single camera is cheaper than\nmany other options and avoids the need for continuous calibration strategies as\nrequired by stereo-vision approaches. State-of-the-art methods for Monocular\nDepth Estimation are based on Convolutional Neural Networks (CNNs). A promising\nline of work consists of introducing additional semantic information about the\ntraffic scene when training CNNs for depth estimation. In practice, this means\nthat the depth data used for CNN training is complemented with images having\npixel-wise semantic labels, which usually are difficult to annotate (e.g.\ncrowded urban images). Moreover, so far it is common practice to assume that\nthe same raw training data is associated with both types of ground truth, i.e.,\ndepth and semantic labels. The main contribution of this paper is to show that\nthis hard constraint can be circumvented, i.e., that we can train CNNs for\ndepth estimation by leveraging the depth and semantic information coming from\nheterogeneous datasets. In order to illustrate the benefits of our approach, we\ncombine the KITTI depth and Cityscapes semantic segmentation datasets,\noutperforming state-of-the-art results on Monocular Depth Estimation.", "field": [], "task": ["Autonomous Driving", "Depth Estimation", "Monocular Depth Estimation", "Semantic Segmentation"], "method": [], "dataset": ["KITTI Eigen split"], "metric": ["absolute relative error"], "title": "Monocular Depth Estimation by Learning from Heterogeneous Datasets"} {"abstract": "Progress in Sentence Simplification has been hindered by the lack of supervised data, particularly in languages other than English. Previous work has aligned sentences from original and simplified corpora such as English Wikipedia and Simple English Wikipedia, but this limits corpus size, domain, and language. In this work, we propose using unsupervised mining techniques to automatically create training corpora for simplification in multiple languages from raw Common Crawl web data. When coupled with a controllable generation mechanism that can flexibly adjust attributes such as length and lexical complexity, these mined paraphrase corpora can be used to train simplification systems in any language. We further incorporate multilingual unsupervised pretraining methods to create even stronger models and show that by training on mined data rather than supervised corpora, we outperform the previous best results.
We evaluate our approach on English, French, and Spanish simplification benchmarks and reach state-of-the-art performance with a fully unsupervised approach. We will release our models and code to mine the data in any language included in Common Crawl.", "field": [], "task": ["Text Simplification"], "method": [], "dataset": ["ASSET", "TurkCorpus"], "metric": ["BLEU", "SARI (EASSE>=0.2.1)"], "title": "Multilingual Unsupervised Sentence Simplification"} {"abstract": "We introduce GEM, a living benchmark for natural language Generation (NLG), its Evaluation, and Metrics. Measuring progress in NLG relies on a constantly evolving ecosystem of automated metrics, datasets, and human evaluation standards. However, due to this moving target, new models often still evaluate on divergent Anglo-centric corpora with well-established, but flawed, metrics. This disconnect makes it challenging to identify the limitations of current models and opportunities for progress. Addressing this limitation, GEM provides an environment in which models can easily be applied to a wide set of corpora and evaluation strategies can be tested. Regular updates to the benchmark will help NLG research become more multilingual and evolve the challenge alongside models. This paper serves as the description of the initial release for which we are organizing a shared task at our ACL 2021 Workshop and to which we invite the entire NLG community to participate.", "field": [], "task": ["Abstractive Text Summarization", "Cross-Lingual Abstractive Summarization", "Data-to-Text Generation", "Extreme Summarization", "Question Answering", "Task-Oriented Dialogue Systems", "Text Generation", "Text Simplification"], "method": [], "dataset": ["SGD", "Cleaned E2E NLG Challenge", "WebNLG en", "WebNLG ru", "MLSUM de", "MLSUM es", "ASSET", "Czech restaurant information", "TurkCorpus", "CommonGen", "DART", "ToTTo"], "metric": ["METEOR"], "title": "The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics"} {"abstract": "The intensive annotation cost and the rich but unlabeled data contained in videos motivate us to propose an unsupervised video-based person re-identification (re-ID) method. We start from two assumptions: 1) different video tracklets typically contain different persons, given that the tracklets are taken at distinct places or with long intervals; 2) within each tracklet, the frames are mostly of the same person. Based on these assumptions, this paper proposes a stepwise metric promotion approach to estimate the identities of training tracklets, which iterates between cross-camera tracklet association and feature learning. Specifically, we use each training tracklet as a query, and perform retrieval in the cross-camera training set. Our method is built on reciprocal nearest neighbor search and can eliminate the hard negative label matches, i.e., the cross-camera nearest neighbors of the false matches in the initial rank list. The tracklet that passes the reciprocal nearest neighbor check is considered to have the same ID as the query.
Experimental results on the PRID 2011, ILIDS-VID, and MARS datasets show that the proposed method achieves very competitive re-ID accuracy compared with its supervised counterparts.\r", "field": [], "task": ["Person Re-Identification", "Video-Based Person Re-Identification"], "method": [], "dataset": ["PRID2011"], "metric": ["Rank-1", "Rank-20", "Rank-5"], "title": "Stepwise Metric Promotion for Unsupervised Video Person Re-Identification"} {"abstract": "Sentence simplification aims to simplify the content and structure of complex\nsentences, and thus make them easier to interpret for human readers, and easier\nto process for downstream NLP applications. Recent advances in neural machine\ntranslation have paved the way for novel approaches to the task. In this paper,\nwe adapt an architecture with augmented memory capacities called Neural\nSemantic Encoders (Munkhdalai and Yu, 2017) for sentence simplification. Our\nexperiments demonstrate the effectiveness of our approach on different\nsimplification datasets, both in terms of automatic evaluation measures and\nhuman judgments.", "field": [], "task": ["Machine Translation", "Text Simplification"], "method": [], "dataset": ["PWKP / WikiSmall", "Newsela", "TurkCorpus"], "metric": ["BLEU", "SARI (EASSE>=0.2.1)", "SARI"], "title": "Sentence Simplification with Memory-Augmented Neural Networks"} {"abstract": "Sentence simplification aims to improve readability and understandability,\nbased on several operations such as splitting, deletion, and paraphrasing.\nHowever, a valid simplified sentence should also be logically entailed by its\ninput sentence. In this work, we first present a strong pointer-copy mechanism\nbased sequence-to-sequence sentence simplification model, and then improve its\nentailment and paraphrasing capabilities via multi-task learning with related\nauxiliary tasks of entailment and paraphrase generation. Moreover, we propose a\nnovel 'multi-level' layered soft sharing approach where each auxiliary task\nshares different (higher versus lower) level layers of the sentence\nsimplification model, depending on the task's semantic versus lexico-syntactic\nnature. We also introduce a novel multi-armed bandit based training approach\nthat dynamically learns how to effectively switch across tasks during\nmulti-task learning. Experiments on multiple popular datasets demonstrate that\nour model outperforms competitive simplification systems in SARI and FKGL\nautomatic metrics, and human evaluation. Further, we present several ablation\nanalyses on alternative layer sharing methods, soft versus hard sharing,\ndynamic multi-armed bandit sampling approaches, and our model's learned\nentailment and paraphrasing skills.", "field": [], "task": ["Multi-Task Learning", "Paraphrase Generation", "Text Simplification"], "method": [], "dataset": ["PWKP / WikiSmall", "Newsela", "TurkCorpus"], "metric": ["BLEU", "SARI (EASSE>=0.2.1)", "SARI"], "title": "Dynamic Multi-Level Multi-Task Learning for Sentence Simplification"} {"abstract": "Multi-person 3D human pose estimation from a single image is a challenging problem, especially for in-the-wild settings due to the lack of 3D annotated data. We propose HG-RCNN, a Mask-RCNN based network that also leverages the benefits of the Hourglass architecture for multi-person 3D Human Pose Estimation. A two-staged approach is presented that first estimates the 2D keypoints in every Region of Interest (RoI) and then lifts the estimated keypoints to 3D. 
Finally, the estimated 3D poses are placed in camera coordinates using a weak-perspective projection assumption and joint optimization of the focal length and root translations. The result is a simple and modular network for multi-person 3D human pose estimation that does not require any multi-person 3D pose dataset. Despite its simple formulation, HG-RCNN achieves state-of-the-art results on MuPoTS-3D while also approximating the 3D pose in the camera-coordinate system.", "field": [], "task": ["3D Human Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["MuPoTS-3D"], "metric": ["3DPCK"], "title": "Multi-Person 3D Human Pose Estimation from Monocular Images"} {"abstract": "Monocular depth estimation is a challenging task in scene understanding, with the goal of acquiring the geometric properties of 3D space from 2D images. Due to the lack of RGB-depth image pairs, unsupervised learning methods aim at deriving depth information with alternative supervision such as stereo pairs. However, most existing works fail to model the geometric structure of objects, which generally results from considering pixel-level objective functions during training. In this paper, we propose SceneNet to overcome this limitation with the aid of semantic understanding from segmentation. Moreover, our proposed model is able to perform region-aware depth estimation by enforcing semantics consistency between stereo pairs. In our experiments, we qualitatively and quantitatively verify the effectiveness and robustness of our model, which produces favorable results compared with state-of-the-art approaches.", "field": [], "task": ["Depth Estimation", "Monocular Depth Estimation", "Scene Understanding"], "method": [], "dataset": ["KITTI Eigen split"], "metric": ["absolute relative error"], "title": "Towards Scene Understanding: Unsupervised Monocular Depth Estimation With Semantic-Aware Representation"} {"abstract": "The key to Weakly Supervised Fine-grained Image Classification (WFGIC) is how to pick out the discriminative regions and learn the discriminative features from them. However, most recent WFGIC methods pick out the discriminative regions independently and utilize their features directly, while neglecting the fact that regions\u2019 features are mutually semantically correlated and region groups can be more discriminative. To address these issues, we propose an end-to-end Graph-propagation based Correlation Learning (GCL) model to fully mine and exploit the discriminative potentials of region correlations for WFGIC. Specifically, in the discriminative\r\nregion localization phase, a Criss-cross Graph Propagation (CGP) sub-network is proposed to learn region correlations, which establishes correlation between regions and then enhances each region by weighted aggregation of other regions in a criss-cross way. By this means, each region\u2019s representation encodes the global image-level context and local spatial context simultaneously, so the network is guided to implicitly discover more powerful discriminative region groups for WFGIC. In the discriminative feature representation phase, the Correlation Feature Strengthening (CFS) sub-network is proposed to explore the internal semantic correlation among the feature vectors of discriminative patches, to improve their discriminative power by iteratively enhancing informative elements while suppressing the useless ones.
Extensive experiments demonstrate the effectiveness of the proposed CGP and CFS sub-networks, and show that the GCL model achieves better performance in both accuracy and efficiency.", "field": [], "task": ["Fine-Grained Image Classification", "Image Classification"], "method": [], "dataset": [" CUB-200-2011", "Stanford Cars", "FGVC Aircraft"], "metric": ["Accuracy"], "title": "Graph-propagation based Correlation Learning for Weakly Supervised Fine-grained Image Classification"} {"abstract": "Unsupervised domain adaptation (UDA) transfers knowledge from a label-rich source domain to a fully-unlabeled target domain. To tackle this task, recent approaches resort to discriminative domain transfer by virtue of pseudo-labels to enforce the class-level distribution alignment across the source and target domains. These methods, however, are vulnerable to error accumulation and thus incapable of preserving cross-domain category consistency, as the pseudo-labeling accuracy is not guaranteed explicitly. In this paper, we propose the Progressive Feature Alignment Network (PFAN) to align the discriminative features across domains progressively and effectively, via exploiting the intra-class variation in the target domain. To be specific, we first develop an Easy-to-Hard Transfer Strategy (EHTS) and an Adaptive Prototype Alignment (APA) step to train our model iteratively and alternately. Moreover, upon observing that a good domain adaptation usually requires a non-saturated source classifier, we consider a simple yet efficient way to retard the convergence speed of the source classification loss by further introducing a temperature variable into the softmax function. The extensive experimental results reveal that the proposed PFAN exceeds the state-of-the-art performance on three UDA datasets.", "field": [], "task": ["Domain Adaptation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["SVHN-to-MNIST"], "metric": ["Accuracy"], "title": "Progressive Feature Alignment for Unsupervised Domain Adaptation"} {"abstract": "We present an efficient method for detecting anomalies in videos. Recent\napplications of convolutional neural networks have shown the promise of\nconvolutional layers for object detection and recognition, especially in\nimages. However, convolutional neural networks are supervised and require\nlabels as learning signals. We propose a spatiotemporal architecture for\nanomaly detection in videos including crowded scenes. Our architecture includes\ntwo main components, one for spatial feature representation, and one for\nlearning the temporal evolution of the spatial features. Experimental results\non Avenue, Subway and UCSD benchmarks confirm that the detection accuracy of\nour method is comparable to state-of-the-art methods at a considerable speed of\nup to 140 fps.", "field": [], "task": ["Anomaly Detection", "Object Detection"], "method": [], "dataset": ["UBI-Fights"], "metric": ["AUC"], "title": "Abnormal Event Detection in Videos using Spatiotemporal Autoencoder"} {"abstract": "Current methods for training convolutional neural networks depend on large amounts of labeled samples for supervised training. In this paper we present an approach for training a convolutional neural network using only unlabeled data. We train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch.
We find that this simple feature learning algorithm is surprisingly successful when applied to visual object recognition. The feature representation learned by our algorithm achieves classification results matching or outperforming the current state-of-the-art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101).", "field": [], "task": ["Image Classification", "Object Recognition"], "method": [], "dataset": ["STL-10", "CIFAR-10"], "metric": ["Percentage correct"], "title": "Discriminative Unsupervised Feature Learning with Convolutional Neural Networks"} {"abstract": "PyOD is an open-source Python toolbox for performing scalable outlier detection on multivariate data. Uniquely, it provides access to a wide range of outlier detection algorithms, including established outlier ensembles and more recent neural network-based approaches, under a single, well-documented API designed for use by both practitioners and researchers. With robustness and scalability in mind, best practices such as unit testing, continuous integration, code coverage, maintainability checks, interactive examples and parallelization are emphasized as core components in the toolbox's development. PyOD is compatible with both Python 2 and 3 and can be installed through Python Package Index (PyPI) or https://github.com/yzhao062/pyod.", "field": [], "task": ["Anomaly Detection", "Outlier Detection", "outlier ensembles"], "method": [], "dataset": [], "metric": [], "title": "PyOD: A Python Toolbox for Scalable Outlier Detection"} {"abstract": "Sentiment analysis (SA) is one of the most useful natural language processing applications. The literature is flooded with papers and systems addressing this task, but most of the work is focused on English. In this paper, we present 'Mazajak', an online system for Arabic SA. The system is based on a deep learning model, which achieves state-of-the-art results on many Arabic dialect datasets including SemEval 2017 and ASTD. The availability of such a system should assist various applications and research that rely on sentiment analysis as a tool.", "field": [], "task": ["Arabic Sentiment Analysis", "Sentiment Analysis", "Twitter Sentiment Analysis"], "method": [], "dataset": ["ArSAS", "SemEval 2017 Task 4-A", "ASTD"], "metric": ["Average Recall"], "title": "Mazajak: An Online Arabic Sentiment Analyser"} {"abstract": "In the absence of large labelled datasets, self-supervised learning techniques\r\ncan boost performance by learning useful representations from unlabelled data,\r\nwhich is often more readily available. However, there is often a domain shift\r\nbetween the unlabelled collection and the downstream target problem data. We\r\nshow that by learning Bayesian instance weights for the unlabelled data, we\r\ncan improve the downstream classification accuracy by prioritising the most\r\nuseful instances. Additionally, we show that the training time can be reduced by\r\ndiscarding unnecessary datapoints. Our method, BetaDataWeighter, is evaluated\r\nusing the popular self-supervised rotation prediction task on STL-10 and Visual\r\nDecathlon.
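As a usage note for the PyOD toolbox described above: its detectors share a scikit-learn-style interface, so a basic outlier-scoring run looks roughly like the following. The toy data and the choice of a kNN detector are arbitrary; any other PyOD detector exposes the same fit/predict calls.

```python
import numpy as np
from pyod.models.knn import KNN   # any PyOD detector exposes the same interface

rng = np.random.RandomState(0)
X_train = rng.randn(300, 2)       # mostly "normal" points
X_train[:15] += 6                 # a few injected outliers

clf = KNN(contamination=0.05)     # expected outlier fraction
clf.fit(X_train)

print(clf.labels_[:15])           # binary outlier labels on the training data
print(clf.decision_scores_[:5])   # raw outlier scores on the training data

X_test = rng.randn(4, 2)
print(clf.predict(X_test))            # 0 = inlier, 1 = outlier
print(clf.decision_function(X_test))  # outlier scores for unseen data
```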
We compare with related instance weighting schemes, both hand-designed\r\nheuristics and meta-learning, as well as conventional self-supervised learning.\r\nBetaDataWeighter achieves both the highest average accuracy and rank across\r\ndatasets, and on STL-10 it prunes up to 78% of unlabelled images without significant\r\nloss in accuracy, corresponding to an over 50% reduction in training time.", "field": [], "task": ["Image Classification", "Meta-Learning", "Self-Supervised Learning"], "method": [], "dataset": ["STL-10"], "metric": ["Percentage correct"], "title": "Don\u2019t Wait, Just Weight: Improving Unsupervised Representations by Learning Goal-Driven Instance Weights"} {"abstract": "Understanding a narrative requires reading between the lines and reasoning\nabout the unspoken but obvious implications about events and people's mental\nstates - a capability that is trivial for humans but remarkably hard for\nmachines. To facilitate research addressing this challenge, we introduce a new\nannotation framework to explain the naive psychology of story characters as\nfully-specified chains of mental states with respect to motivations and\nemotional reactions. Our work presents a new large-scale dataset with rich\nlow-level annotations and establishes baseline performance on several new\ntasks, suggesting avenues for future research.", "field": [], "task": ["Emotion Classification"], "method": [], "dataset": ["ROCStories"], "metric": ["F1"], "title": "Modeling Naive Psychology of Characters in Simple Commonsense Stories"} {"abstract": "In active learning, sampling bias can pose a serious inconsistency problem and hinder the algorithm from finding the optimal hypothesis. However, many methods for neural networks are hypothesis space agnostic and do not address this problem. We examine active learning with convolutional neural networks through the principled lens of version space reduction. We identify the connection between two approaches---prior mass reduction and diameter reduction---and propose a new diameter-based querying method---the minimum Gibbs-vote disagreement. By estimating version space diameter and bias, we illustrate how the version space of neural networks evolves and examine the realizability assumption. With experiments on MNIST, Fashion-MNIST, SVHN and STL-10 datasets, we demonstrate that diameter reduction methods reduce the version space more effectively and perform better than prior mass reduction and other baselines, and that the Gibbs-vote disagreement is on par with the best query method.", "field": [], "task": ["Active Learning", "Image Classification"], "method": [], "dataset": ["STL-10"], "metric": ["Percentage correct"], "title": "Effective Version Space Reduction for Convolutional Neural Networks"} {"abstract": "Recently, skeleton-based action recognition has gained popularity due to\ncost-effective depth sensors coupled with real-time skeleton estimation\nalgorithms. Traditional approaches based on handcrafted features are limited in\nrepresenting the complexity of motion patterns. Recent methods that use Recurrent\nNeural Networks (RNN) to handle raw skeletons only focus on the contextual\ndependency in the temporal domain and neglect the spatial configurations of\narticulated skeletons. In this paper, we propose a novel two-stream RNN\narchitecture to model both temporal dynamics and spatial configurations for\nskeleton-based action recognition. We explore two different structures for the\ntemporal stream: stacked RNN and hierarchical RNN.
Hierarchical RNN is designed\naccording to human body kinematics. We also propose two effective methods to\nmodel the spatial structure by converting the spatial graph into a sequence of\njoints. To improve generalization of our model, we further exploit 3D\ntransformation based data augmentation techniques including rotation and\nscaling transformation to transform the 3D coordinates of skeletons during\ntraining. Experiments on 3D action recognition benchmark datasets show that our\nmethod brings a considerable improvement for a variety of actions, i.e.,\ngeneric actions, interaction activities and gestures.", "field": [], "task": ["3D Action Recognition", "Action Recognition", "Data Augmentation", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "Modeling Temporal Dynamics and Spatial Configurations of Actions Using Two-Stream Recurrent Neural Networks"} {"abstract": "We address the unsupervised learning of several interconnected problems in\nlow-level vision: single view depth prediction, camera motion estimation,\noptical flow, and segmentation of a video into the static scene and moving\nregions. Our key insight is that these four fundamental vision problems are\ncoupled through geometric constraints. Consequently, learning to solve them\ntogether simplifies the problem because the solutions can reinforce each other.\nWe go beyond previous work by exploiting geometry more explicitly and\nsegmenting the scene into static and moving regions. To that end, we introduce\nCompetitive Collaboration, a framework that facilitates the coordinated\ntraining of multiple specialized neural networks to solve complex problems.\nCompetitive Collaboration works much like expectation-maximization, but with\nneural networks that act as both competitors to explain pixels that correspond\nto static or moving regions, and as collaborators through a moderator that\nassigns pixels to be either static or independently moving. Our novel method\nintegrates all these problems in a common framework and simultaneously reasons\nabout the segmentation of the scene into moving objects and the static\nbackground, the camera motion, depth of the static scene structure, and the\noptical flow of moving objects. Our model is trained without any supervision\nand achieves state-of-the-art performance among joint unsupervised methods on\nall sub-problems.", "field": [], "task": ["Depth Estimation", "Monocular Depth Estimation", "Motion Estimation", "Motion Segmentation", "Optical Flow Estimation"], "method": [], "dataset": ["KITTI Eigen split"], "metric": ["absolute relative error"], "title": "Competitive Collaboration: Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation"} {"abstract": "In this paper, we study abstractive summarization for open-domain videos. Unlike the traditional text news summarization, the goal is less to \"compress\" text information but rather to provide a fluent textual summary of information that has been collected and fused from different source modalities, in our case video and audio transcripts (or text). We show how a multi-source sequence-to-sequence model with hierarchical attention can integrate information from different modalities into a coherent output, compare various models trained with different modalities and present pilot experiments on the How2 corpus of instructional videos. 
We also propose a new evaluation metric (Content F1) for the abstractive summarization task that measures semantic adequacy rather than the fluency of the summaries, which is already covered by metrics like ROUGE and BLEU.", "field": [], "task": ["Abstractive Text Summarization", "Text Summarization"], "method": [], "dataset": ["How2"], "metric": ["ROUGE-L", "Content F1"], "title": "Multimodal Abstractive Summarization for How2 Videos"} {"abstract": "Text-to-image retrieval is an essential task in multi-modal information retrieval, i.e. retrieving relevant images from a large and unlabelled image dataset given textual queries. In this paper, we propose VisualSparta, a novel text-to-image retrieval model that shows substantial improvement over existing models on both accuracy and efficiency. We show that VisualSparta is capable of outperforming all previous scalable methods in MSCOCO and Flickr30K. It also shows substantial retrieval speed advantages, i.e. for an index with 1 million images, VisualSparta achieves an over 391x speed-up compared to standard vector search. Experiments show that this speed advantage grows even larger for larger datasets because VisualSparta can be efficiently implemented as an inverted index. To the best of our knowledge, VisualSparta is the first transformer-based text-to-image retrieval model that can achieve real-time search over very large datasets, with significant accuracy improvement compared to previous state-of-the-art methods.", "field": [], "task": ["Cross-Modal Retrieval", "Image Retrieval", "Information Retrieval", "Text-Image Retrieval", "Text-to-Image Retrieval"], "method": [], "dataset": ["MSCOCO-1k", "COCO 2014", "Flickr30k", "Flickr30K 1K test"], "metric": ["recall@5", "recall@10", "QPS", "recall@1", "R@10", "Text-to-image R@10", "Text-to-image R@1", "R@5", "R@1", "Text-to-image R@5"], "title": "VisualSparta: Sparse Transformer Fragment-level Matching for Large-scale Text-to-Image Search"} {"abstract": "(Unsupervised) Domain Adaptation (DA) seeks to classify target instances when solely provided with source labeled and target unlabeled examples for training. Learning domain-invariant features helps to achieve this goal, whereas it presupposes unlabeled samples drawn from a single or multiple explicit target domains (Multi-target DA). In this paper, we consider a more realistic transfer scenario: our target domain is comprised of multiple sub-targets implicitly blended with each other, so that learners cannot identify which sub-target each unlabeled sample belongs to. This Blending-target Domain Adaptation (BTDA) scenario commonly appears in practice and threatens the validity of most existing DA algorithms, due to the presence of domain gaps and categorical misalignments among these hidden sub-targets. To reap the transfer performance gains in this new scenario, we propose the Adversarial Meta-Adaptation Network (AMEAN). AMEAN entails two adversarial transfer learning processes. The first is a conventional adversarial transfer to bridge our source and mixed target domains. To circumvent the intra-target category misalignment, the second process presents as ``learning to adapt'': It deploys an unsupervised meta-learner receiving target data and their ongoing feature-learning feedback, to discover target clusters as our ``meta-sub-target'' domains. These meta-sub-targets auto-design our meta-sub-target DA loss, which empirically eliminates the implicit category mismatching in our mixed target.
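The Content F1 metric mentioned in the How2 summarization abstract above is, at its core, an F1 score over content words shared between a generated summary and a reference. The sketch below uses a tiny hand-written stopword list and plain token overlap, which only approximates the procedure used in the paper.

```python
from collections import Counter

STOPWORDS = {"a", "an", "the", "and", "or", "to", "of", "in", "is", "are", "this", "that"}

def content_f1(candidate, reference):
    """Rough content-word F1 between a candidate summary and a reference."""
    cand = Counter(w for w in candidate.lower().split() if w not in STOPWORDS)
    ref = Counter(w for w in reference.lower().split() if w not in STOPWORDS)
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(content_f1("the video shows how to tune a guitar",
                 "this video explains how to tune a guitar properly"))
```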
We evaluate AMEAN and a variety of DA algorithms on three benchmarks under the BTDA setup. Empirical results show that BTDA is a quite challenging transfer setup for most existing DA algorithms, yet AMEAN significantly outperforms these state-of-the-art baselines and effectively restrains the negative transfer effects in BTDA.", "field": [], "task": ["Domain Adaptation", "Transfer Learning", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["Office-31", "Office-Home"], "metric": ["Accuracy"], "title": "Blending-target Domain Adaptation by Adversarial Meta-Adaptation Networks"} {"abstract": "This work considers the problem of unsupervised domain adaptation in person re-identification (re-ID), which aims to transfer knowledge from the source domain to the target domain. Existing methods primarily aim to reduce the inter-domain shift between the domains, but usually overlook the relations among target samples. This paper investigates the intra-domain variations of the target domain and proposes a novel adaptation framework w.r.t. three types of underlying invariance, i.e., Exemplar-Invariance, Camera-Invariance, and Neighborhood-Invariance. Specifically, an exemplar memory is introduced to store features of samples, which can effectively and efficiently enforce the invariance constraints over the global dataset. We further present the Graph-based Positive Prediction (GPP) method to explore reliable neighbors for the target domain, which is built upon the memory and is trained on the source samples. Experiments demonstrate that 1) the three invariance properties are indispensable for effective domain adaptation, 2) the memory plays a key role in implementing invariance learning and improves the performance with limited extra computation cost, 3) GPP could facilitate the invariance learning and thus significantly improves the results, and 4) our approach produces new state-of-the-art adaptation accuracy on three large-scale re-ID benchmarks.", "field": [], "task": ["Domain Adaptation", "Person Re-Identification", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["Duke to Market", "Duke to MSMT", "Market to Duke", "Market to MSMT"], "metric": ["rank-10", "mAP", "rank-5", "rank-1"], "title": "Learning to Adapt Invariance in Memory for Person Re-identification"} {"abstract": "In Multi-Label Text Classification (MLTC), one sample can belong to more than one class. It is observed that in most MLTC tasks, there are dependencies or correlations among labels. Existing methods tend to ignore the relationship among labels. In this paper, a graph attention network-based model is proposed to capture the attentive dependency structure among the labels. The graph attention network uses a feature matrix and a correlation matrix to capture and explore the crucial dependencies between the labels and generate classifiers for the task. The generated classifiers are applied to sentence feature vectors obtained from the text feature extraction network (BiLSTM) to enable end-to-end training. Attention allows the system to assign different weights to neighbor nodes per label, thus allowing it to learn the dependencies among labels implicitly. The results of the proposed model are validated on five real-world MLTC datasets.
The proposed model achieves similar or better performance compared to the previous state-of-the-art models.", "field": [], "task": ["Document Classification", "Graph Representation Learning", "Multi-Label Text Classification", "Text Classification"], "method": [], "dataset": ["Slashdot", "RCV1-v2", "Reuters-21578", "RCV1", "AAPD"], "metric": ["Micro-F1", "F1", "Micro F1"], "title": "MAGNET: Multi-Label Text Classification using Attention-based Graph Neural Network"} {"abstract": "Document-level Relation Extraction (RE) requires extracting relations expressed within and across sentences. Recent works show that graph-based methods, usually constructing a document-level graph that captures document-aware interactions, can obtain useful entity representations, thus helping to tackle document-level RE. These methods either focus more on the entire graph, or pay more attention to a part of the graph, e.g., paths between the target entity pair. However, we find that document-level RE may benefit from focusing on both of them simultaneously. Therefore, to obtain more comprehensive entity representations, we propose the Coarse-to-Fine Entity Representation model (CFER) that adopts a coarse-to-fine strategy involving two phases. First, CFER uses graph neural networks to integrate global information in the entire graph at a coarse level. Next, CFER utilizes the global information as guidance to selectively aggregate path information between the target entity pair at a fine level. In classification, we combine the entity representations from both levels into more comprehensive representations for relation extraction. Experimental results on a large-scale document-level RE dataset show that CFER achieves better performance than previous baseline models. Further, we verify the effectiveness of our strategy through elaborate model analysis.", "field": [], "task": ["Relation Extraction"], "method": [], "dataset": ["DocRED"], "metric": ["Ign F1", "F1"], "title": "Coarse-to-Fine Entity Representations for Document-level Relation Extraction"} {"abstract": "A fundamental trade-off between effectiveness and efficiency needs to be\nbalanced when designing an online question answering system. Effectiveness\ncomes from sophisticated functions such as extractive machine reading\ncomprehension (MRC), while efficiency is obtained from improvements in\npreliminary retrieval components such as candidate document selection and\nparagraph ranking. Given the complexity of the real-world multi-document MRC\nscenario, it is difficult to jointly optimize both in an end-to-end system. To\naddress this problem, we develop a novel deep cascade learning model, which\nprogressively evolves from the document-level and paragraph-level ranking of\ncandidate texts to more precise answer extraction with machine reading\ncomprehension. Specifically, irrelevant documents and paragraphs are first\nfiltered out with simple functions for efficiency consideration. Then we\njointly train three modules on the remaining texts for better tracking the\nanswer: the document extraction, the paragraph extraction and the answer\nextraction. Experimental results show that the proposed method outperforms the\nprevious state-of-the-art methods on two large-scale multi-document benchmark\ndatasets, i.e., TriviaQA and DuReader.
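For the MAGNET-style model sketched in the multi-label classification abstract above, the core idea is an attention layer over a label co-occurrence graph that produces one classifier vector per label, which is then applied to the BiLSTM sentence feature. The PyTorch module below is a single-head, simplified rendering of that idea; the dimensions, initialization, and exact attention form are assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelGraphAttention(nn.Module):
    """Single attention hop over a label graph, yielding per-label classifier vectors."""
    def __init__(self, n_labels, label_dim, text_dim):
        super().__init__()
        self.label_emb = nn.Parameter(torch.randn(n_labels, label_dim))
        self.att = nn.Linear(2 * label_dim, 1)      # attention score for each label pair
        self.proj = nn.Linear(label_dim, text_dim)  # label vector -> classifier weights

    def forward(self, adj, text_feat):
        # adj: (L, L) label co-occurrence matrix with self-loops; text_feat: (B, text_dim)
        L, d = self.label_emb.shape
        pairs = torch.cat([self.label_emb.unsqueeze(1).expand(L, L, d),
                           self.label_emb.unsqueeze(0).expand(L, L, d)], dim=-1)
        scores = F.leaky_relu(self.att(pairs).squeeze(-1))       # (L, L)
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)                    # attention over neighbours
        label_repr = alpha @ self.label_emb                      # (L, label_dim)
        classifiers = self.proj(label_repr)                      # (L, text_dim)
        return text_feat @ classifiers.t()                       # (B, L) label logits

layer = LabelGraphAttention(n_labels=4, label_dim=8, text_dim=16)
adj = torch.eye(4); adj[0, 1] = adj[1, 0] = 1.0                  # toy label graph
logits = layer(adj, torch.randn(2, 16))                          # (2, 4) logits
```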
In addition, our online system can\nstably serve typical scenarios with millions of daily requests in less than\n50ms.", "field": [], "task": ["Machine Reading Comprehension", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["MS MARCO"], "metric": ["Rouge-L", "BLEU-1"], "title": "A Deep Cascade Model for Multi-Document Reading Comprehension"} {"abstract": "Recent advances in video super-resolution have shown that convolutional\nneural networks combined with motion compensation are able to merge information\nfrom multiple low-resolution (LR) frames to generate high-quality images.\nCurrent state-of-the-art methods process a batch of LR frames to generate a\nsingle high-resolution (HR) frame and run this scheme in a sliding window\nfashion over the entire video, effectively treating the problem as a large\nnumber of separate multi-frame super-resolution tasks. This approach has two\nmain weaknesses: 1) Each input frame is processed and warped multiple times,\nincreasing the computational cost, and 2) each output frame is estimated\nindependently conditioned on the input frames, limiting the system's ability to\nproduce temporally consistent results.\n In this work, we propose an end-to-end trainable frame-recurrent video\nsuper-resolution framework that uses the previously inferred HR estimate to\nsuper-resolve the subsequent frame. This naturally encourages temporally\nconsistent results and reduces the computational cost by warping only one image\nin each step. Furthermore, due to its recurrent nature, the proposed method has\nthe ability to assimilate a large number of previous frames without increased\ncomputational demands. Extensive evaluations and comparisons with previous\nmethods validate the strengths of our approach and demonstrate that the\nproposed framework is able to significantly outperform the current state of the\nart.", "field": [], "task": ["Motion Compensation", "Multi-Frame Super-Resolution", "Super-Resolution", "Video Super-Resolution"], "method": [], "dataset": ["Vid4 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Frame-Recurrent Video Super-Resolution"} {"abstract": "We present the first dataset targeted at end-to-end NLG in Czech in the restaurant domain, along with several strong baseline models using the sequence-to-sequence approach. While non-English NLG is under-explored in general, Czech, as a morphologically rich language, makes the task even harder: Since Czech requires inflecting named entities, delexicalization or copy mechanisms do not work out-of-the-box and lexicalizing the generated outputs is non-trivial. In our experiments, we present two different approaches to this problem: (1) using a neural language model to select the correct inflected form while lexicalizing, (2) a two-step generation setup: our sequence-to-sequence model generates an interleaved sequence of lemmas and morphological tags, which are then inflected by a morphological generator.", "field": [], "task": ["Data-to-Text Generation", "Language Modelling"], "method": [], "dataset": ["Czech Restaurant NLG"], "metric": ["CIDER", "BLEU score", "METEOR", "NIST"], "title": "Neural Generation for Czech: Data and Baselines"} {"abstract": "Neural models of dialog rely on generalized latent representations of language. This paper introduces a novel training procedure which explicitly learns multiple representations of language at several levels of granularity.
The multi-granularity training algorithm modifies the mechanism by which negative candidate responses are sampled in order to control the granularity of learned latent representations. Strong performance gains are observed on the next utterance retrieval task using both the MultiWOZ dataset and the Ubuntu dialog corpus. Analysis demonstrates that multiple granularities of representation are being learned, and that multi-granularity training facilitates better transfer to downstream tasks.", "field": [], "task": ["Conversational Response Selection"], "method": [], "dataset": ["Ubuntu Dialogue (v1, Ranking)"], "metric": ["R10@1", "R2@1"], "title": "Multi-Granularity Representations of Dialog"} {"abstract": "Generative adversarial networks (GANs) have achieved great success in synthesizing\ndata. However, the existing GANs restrict the discriminator to be a binary\nclassifier, and thus limit their learning capacity for tasks that need to\nsynthesize output with rich structures such as natural language descriptions.\nIn this paper, we propose a novel generative adversarial network, RankGAN, for\ngenerating high-quality language descriptions. Rather than training the\ndiscriminator to learn and assign an absolute binary predicate to an individual data\nsample, the proposed RankGAN is able to analyze and rank a collection of\nhuman-written and machine-written sentences given a reference group. By\nviewing a set of data samples collectively and evaluating their quality through\nrelative ranking scores, the discriminator is able to make a better assessment,\nwhich in turn helps to learn a better generator. The proposed RankGAN is\noptimized through the policy gradient technique. Experimental results on\nmultiple public datasets clearly demonstrate the effectiveness of the proposed\napproach.", "field": [], "task": ["Text Generation"], "method": [], "dataset": ["Chinese Poems", "EMNLP2017 WMT", "COCO Captions"], "metric": ["BLEU-3", "BLEU-4", "BLEU-2", "BLEU-5"], "title": "Adversarial Ranking for Language Generation"} {"abstract": "We achieve 3D semantic scene labeling by exploring the semantic relations between each point and its contextual neighbors through edges. Besides an encoder-decoder branch for predicting point labels, we construct an edge branch to hierarchically integrate point features and generate edge features. To incorporate point features in the edge branch, we establish a hierarchical graph framework, where the graph is initialized from a coarse layer and gradually enriched along the point decoding process. For each edge in the final graph, we predict a label to indicate the semantic consistency of the two connected points to enhance point prediction. At different layers, edge features are also fed into the corresponding point module to integrate contextual information for message passing enhancement in local regions. The two branches interact with each other and cooperate in segmentation. Decent experimental results on several 3D semantic labeling datasets demonstrate the effectiveness of our work.", "field": [], "task": ["Scene Labeling", "Semantic Segmentation"], "method": [], "dataset": ["S3DIS Area5"], "metric": ["oAcc", "mAcc", "mIoU"], "title": "Hierarchical Point-Edge Interaction Network for Point Cloud Semantic Segmentation"} {"abstract": "Head pose estimation, which computes the intrinsic Euler angles (yaw, pitch, roll) of the human head, is crucial for gaze estimation, face alignment, and 3D reconstruction.
Traditional approaches rely heavily on the accuracy of facial landmarks. This limits their performance, especially when the face is poorly visible. In this paper, to estimate head pose without facial landmarks, we combine coarse and fine regression outputs in a deep network. Utilizing more quantization units for the angles, a fine classifier is trained with the help of auxiliary coarse units. Integrated regression is then adopted to obtain the final prediction. The proposed approach is evaluated on three challenging benchmarks. It achieves the state of the art on AFLW2000 and BIWI and performs favorably on AFLW. The code has been released on GitHub.", "field": [], "task": ["3D Reconstruction", "Face Alignment", "Gaze Estimation", "Head Pose Estimation", "Pose Estimation", "Quantization", "Regression"], "method": [], "dataset": ["AFLW2000", "AFLW", "BIWI"], "metric": ["MAE", "MAE (trained with BIWI data)"], "title": "Hybrid coarse-fine classification for head pose estimation"} {"abstract": "Face alignment, which fits a face model to an image and extracts the semantic\nmeanings of facial pixels, has been an important topic in the CV community.\nHowever, most algorithms are designed for faces in small to medium poses (below\n45 degrees), lacking the ability to align faces in large poses up to 90 degrees.\nThe challenges are three-fold: Firstly, the commonly used landmark-based face\nmodel assumes that all the landmarks are visible and is therefore not suitable\nfor profile views. Secondly, the face appearance varies more dramatically\nacross large poses, ranging from frontal view to profile view. Thirdly,\nlabelling landmarks in large poses is extremely challenging since the invisible\nlandmarks have to be guessed. In this paper, we propose a solution to the three\nproblems in a new alignment framework, called 3D Dense Face Alignment (3DDFA),\nin which a dense 3D face model is fitted to the image via a convolutional neural\nnetwork (CNN). We also propose a method to synthesize large-scale training\nsamples in profile views to solve the third problem of data labelling.\nExperiments on the challenging AFLW database show that our approach achieves\nsignificant improvements over state-of-the-art methods.", "field": [], "task": ["3D Face Reconstruction", "Face Alignment", "Face Model", "Head Pose Estimation"], "method": [], "dataset": ["AFLW2000", "300W", "Florence", "AFLW2000-3D", "BIWI"], "metric": ["Error rate", "NME", "MAE (trained with other data)", "MAE", "Mean NME "], "title": "Face Alignment Across Large Poses: A 3D Solution"} {"abstract": "In this paper, we present Deep Graph Kernels (DGK), a unified framework to learn latent representations of sub-structures for graphs, inspired by the latest advancements in language modeling and deep learning. Our framework leverages the dependency information between sub-structures by learning their latent representations. We demonstrate instances of our framework on three popular graph kernels, namely Graphlet kernels, Weisfeiler-Lehman subtree kernels, and Shortest-Path graph kernels.
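A common way to realize the coarse-fine classification-plus-regression idea from the head-pose abstract above is to predict a distribution over angle bins and decode the angle as the expectation over bin centers. The bin layout below is an assumption chosen for illustration, not the exact configuration used in the paper.

```python
import numpy as np

def expected_angle(logits, bin_width=3.0, angle_min=-99.0):
    """Decode a continuous angle from per-bin logits via a softmax expectation."""
    logits = np.asarray(logits, dtype=np.float64)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    bin_centers = angle_min + bin_width * (np.arange(len(logits)) + 0.5)
    return float(np.sum(probs * bin_centers))   # expectation over bin centers

# 66 bins of 3 degrees covering roughly [-99, 99); a peak at bin 40 maps to about +22.5 degrees
logits = -0.5 * (np.arange(66) - 40.0) ** 2 / 4.0
print(expected_angle(logits))
```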
Our experiments on several benchmark datasets show that Deep Graph Kernels achieve significant improvements in classification accuracy over state-of-the-art graph kernels.", "field": [], "task": ["Graph Classification", "Language Modelling"], "method": [], "dataset": ["COLLAB", "RE-M12K", "IMDb-B", "ENZYMES", "Android Malware Dataset", "PROTEINS", "D&D", "NCI1", "MUTAG", "IMDb-M", "RE-M5K"], "metric": ["Accuracy"], "title": "Deep Graph Kernels"} {"abstract": "Can neural networks learn to compare graphs without feature engineering? In\nthis paper, we show that it is possible to learn representations for graph\nsimilarity with neither domain knowledge nor supervision (i.e., feature\nengineering or labeled graphs). We propose Deep Divergence Graph Kernels, an\nunsupervised method for learning representations over graphs that encodes a\nrelaxed notion of graph isomorphism. Our method consists of three parts. First,\nwe learn an encoder for each anchor graph to capture its structure. Second, for\neach pair of graphs, we train a cross-graph attention network which uses the\nnode representations of an anchor graph to reconstruct another graph. This\napproach, which we call isomorphism attention, captures how well the\nrepresentations of one graph can encode another. We use the attention-augmented\nencoder's predictions to define a divergence score for each pair of graphs.\nFinally, we construct an embedding space for all graphs using these pair-wise\ndivergence scores.\n Unlike previous work, much of which relies on 1) supervision, 2) domain\nspecific knowledge (e.g. a reliance on Weisfeiler-Lehman kernels), and 3) known\nnode alignment, our unsupervised method jointly learns node representations,\ngraph representations, and an attention-based alignment between graphs.\n Our experimental results show that Deep Divergence Graph Kernels can learn an\nunsupervised alignment between graphs, and that the learned representations\nachieve competitive results when used as features on a number of challenging\ngraph classification tasks. Furthermore, we illustrate how the learned\nattention allows insight into the alignment of sub-structures across\ngraphs.", "field": [], "task": ["Feature Engineering", "Graph Classification", "Graph Similarity"], "method": [], "dataset": ["MUTAG", "D&D", "PTC", "NCI1"], "metric": ["Accuracy"], "title": "DDGK: Learning Graph Representations for Deep Divergence Graph Kernels"} {"abstract": "Visual question answering (VQA) and image captioning require a shared body of general knowledge connecting language and vision. We present a novel approach to improve VQA performance that exploits this connection by jointly generating captions that are targeted to help answer a specific visual question. The model is trained using an existing caption dataset by automatically determining question-relevant captions using an online gradient-based method. Experimental results on the VQA v2 challenge demonstrate that our approach obtains state-of-the-art VQA performance (e.g. 68.4% on the Test-standard set using a single model) by simultaneously generating question-relevant captions.", "field": [], "task": ["Image Captioning", "Question Answering", "Visual Question Answering"], "method": [], "dataset": ["VQA v2 test-std"], "metric": ["overall"], "title": "Generating Question Relevant Captions to Aid Visual Question Answering"} {"abstract": "The success of deep supervised learning depends on its automatic data representation abilities.
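The Deep Graph Kernels abstract above essentially replaces the plain dot product between substructure-count vectors with a similarity-weighted bilinear form, K = Phi M Phi^T, where M is derived from learned substructure embeddings. The NumPy sketch below assumes the count vectors and embeddings have already been computed elsewhere; it only shows how the kernel matrix is assembled.

```python
import numpy as np

def deep_graph_kernel(counts, sub_emb):
    """counts: (n_graphs, n_substructures) substructure counts per graph.
    sub_emb: (n_substructures, dim) learned embeddings of the substructures.
    Returns the (n_graphs, n_graphs) kernel matrix K = Phi M Phi^T with M = E E^T."""
    M = sub_emb @ sub_emb.T            # pairwise similarity between substructures
    return counts @ M @ counts.T

rng = np.random.default_rng(0)
counts = rng.poisson(2.0, size=(5, 20)).astype(float)  # e.g. graphlet counts for 5 graphs
sub_emb = rng.normal(size=(20, 8))                      # e.g. word2vec-style embeddings
K = deep_graph_kernel(counts, sub_emb)
print(K.shape, np.allclose(K, K.T))                     # (5, 5) True
```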
A good representation of high-dimensional complex data should enjoy low dimensionality and disentanglement while losing as little information as possible.\nIn this work, we give a statistical understanding of how deep representation goals can be achieved with reproducing kernel Hilbert spaces (RKHS) and generative adversarial networks (GAN). At the population level, we formulate the ideal representation learning task as that of finding a nonlinear map that minimizes the sum of losses characterizing conditional independence (with RKHS) and disentanglement (with GAN). We estimate the target map at the sample level nonparametrically with deep neural networks. We prove the consistency in terms of the population objective function value. We validate the proposed methods via comprehensive numerical experiments and real data analysis in the context of regression and classification. The resulting prediction accuracies are better than those of state-of-the-art methods.", "field": [], "task": ["Image Classification", "Regression", "Representation Learning"], "method": [], "dataset": ["Kuzushiji-MNIST", "STL-10"], "metric": ["Percentage correct", "Accuracy"], "title": "Toward Understanding Supervised Representation Learning with RKHS and GAN"} {"abstract": "In this paper, we introduce a new model for leveraging unlabeled data to\nimprove the generalization performance of image classifiers: a two-branch\nencoder-decoder architecture called HybridNet. The first branch receives a\nsupervision signal and is dedicated to the extraction of invariant\nclass-related representations. The second branch is fully unsupervised and\ndedicated to modeling information discarded by the first branch to reconstruct\ninput data. To further support the expected behavior of our model, we propose\nan original training objective. It favors stability in the discriminative\nbranch and complementarity between the learned representations in the two\nbranches. HybridNet is able to outperform state-of-the-art results on CIFAR-10,\nSVHN and STL-10 in various semi-supervised settings. In addition,\nvisualizations and ablation studies validate our contributions and the behavior\nof the model on both CIFAR-10 and STL-10 datasets.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["STL-10"], "metric": ["Percentage correct"], "title": "HybridNet: Classification and Reconstruction Cooperation for Semi-Supervised Learning"} {"abstract": "Although deep learning has been applied to successfully address many data mining problems, relatively limited work has been done on deep learning for anomaly detection. Existing deep anomaly detection methods, which focus on learning new feature representations to enable downstream anomaly detection methods, perform indirect optimization of anomaly scores, leading to data-inefficient learning and suboptimal anomaly scoring. Also, they are typically designed for unsupervised learning due to the lack of large-scale labeled anomaly data. As a result, it is difficult for them to leverage prior knowledge (e.g., a few labeled anomalies) when such information is available, as in many real-world anomaly detection applications. This paper introduces a novel anomaly detection framework and its instantiation to address these problems.
Instead of representation learning, our method performs end-to-end learning of anomaly scores via neural deviation learning, in which we leverage a few (e.g., several to dozens of) labeled anomalies and a prior probability to enforce statistically significant deviations of the anomaly scores of anomalies from those of normal data objects in the upper tail. Extensive results show that our method can be trained substantially more data-efficiently and achieves significantly better anomaly scoring than state-of-the-art competing methods.", "field": [], "task": ["Anomaly Detection", "Cyber Attack Detection", "Fraud Detection", "Network Intrusion Detection", "Representation Learning"], "method": [], "dataset": ["Kaggle-Credit Card Fraud Dataset", "NB15-Backdoor", "Census", "Thyroid"], "metric": ["Average Precision", "AUC"], "title": "Deep Anomaly Detection with Deviation Networks"} {"abstract": "Temporal receptive fields of models play an important role in action segmentation. Large receptive fields facilitate long-term relations among video clips, while small receptive fields help capture local details. Existing methods construct models with hand-designed receptive fields in layers. Can we effectively search for receptive field combinations to replace hand-designed patterns? To answer this question, we propose to find better receptive field combinations through a global-to-local search scheme. Our search scheme exploits both a global search to find coarse combinations and a local search to further refine the receptive field combination patterns. The global search finds possible coarse combinations other than human-designed patterns. On top of the global search, we propose an expectation-guided iterative local search scheme to refine combinations effectively. Our global-to-local search can be plugged into existing action segmentation methods to achieve state-of-the-art performance.", "field": [], "task": ["Action Segmentation"], "method": [], "dataset": ["50 Salads"], "metric": ["Acc", "Edit", "F1@10%", "F1@25%", "F1@50%"], "title": "Global2Local: Efficient Structure Search for Video Action Segmentation"} {"abstract": "Neural networks are a powerful means of classifying object images. The proposed\r\nimage category classification method for object images combines convolutional neural\r\nnetworks (CNNs) and support vector machines (SVMs). A pre-trained CNN, called Alex-Net,\r\nis used as a pattern-feature extractor. Alex-Net is pre-trained on the large-scale object-image\r\ndataset ImageNet. Instead of training a new network, the ImageNet-pre-trained Alex-Net is used. An SVM is\r\nused as the trainable classifier. The feature vectors are passed to the SVM from Alex-Net. The\r\nSTL-10 dataset is used for the object images. The number of classes is ten. Training and test\r\nsamples are clearly split. The SVM is trained on STL-10 object images with data\r\naugmentation. We use the pattern transformation method with the cosine function. We also\r\napply other augmentation methods such as rotation, skewing and elastic distortion. By using the\r\ncosine function, the original patterns were left-justified, right-justified, top-justified, or bottom-justified. Patterns were also center-justified and enlarged. The test error rate is decreased by 0.435\r\npercentage points from 16.055% by augmentation with the cosine transformation. Error rates are\r\nincreased by the other augmentation methods such as rotation, skewing and elastic distortion,\r\ncompared with no augmentation.
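The deviation-learning idea in the Deviation Networks abstract above can be written down compactly: scores of normal points are pushed toward the mean of a Gaussian score prior, while scores of the few labeled anomalies are pushed at least a margin above it. The PyTorch loss below follows that reading; the margin value and reference-sample count are common choices, not guaranteed to match the paper's exact settings.

```python
import torch

def deviation_loss(scores, labels, margin=5.0, n_ref=5000):
    """scores: (B,) scalar anomaly scores; labels: (B,) with 0 = normal, 1 = anomaly."""
    ref = torch.randn(n_ref, device=scores.device)       # reference scores from a N(0, 1) prior
    dev = (scores - ref.mean()) / (ref.std() + 1e-8)      # z-score-style deviation
    normal_term = (1 - labels) * dev.abs()                # pull normal scores toward the prior mean
    anomaly_term = labels * torch.clamp(margin - dev, min=0.0)  # push anomalies into the upper tail
    return (normal_term + anomaly_term).mean()

scores = torch.tensor([0.1, -0.3, 4.8, 0.2])
labels = torch.tensor([0.0, 0.0, 1.0, 0.0])
print(deviation_loss(scores, labels))
```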
The number of augmented samples is 30 times that of the original\r\nSTL-10 5K training samples. The experimental test error rate for the 8K STL-10 test object images\r\nwas 15.620%, which shows that image augmentation is effective for image category\r\nclassification.", "field": [], "task": ["Data Augmentation", "Image Augmentation", "Image Classification"], "method": [], "dataset": ["STL-10"], "metric": ["Percentage correct"], "title": "Image Augmentation for Object Image Classification Based On Combination of PreTrained CNN and SVM"} {"abstract": "Inferring the depth of images is a fundamental inverse problem within the field of Computer Vision since depth information is obtained through 2D images, which can be generated from infinite possibilities of observed real scenes. Benefiting from the progress of Convolutional Neural Networks (CNNs) to explore structural features and spatial image information, Single Image Depth Estimation (SIDE) is often highlighted in the scope of scientific and technological innovation, as this concept provides advantages related to its low implementation cost and robustness to environmental conditions. In the context of autonomous vehicles, state-of-the-art CNNs optimize the SIDE task by producing high-quality depth maps, which are essential during the autonomous navigation process in different locations. However, such networks are usually supervised by sparse and noisy depth data, from Light Detection and Ranging (LiDAR) laser scans, and run at high computational cost, requiring high-performance Graphic Processing Units (GPUs). Therefore, we propose a new lightweight and fast supervised CNN architecture combined with novel feature extraction models which are designed for real-world autonomous navigation. We also introduce an efficient surface normals module, jointly with a simple geometric 2.5D loss function, to solve SIDE problems. We also innovate by incorporating multiple Deep Learning techniques, such as the use of densification algorithms and additional semantic, surface normals and depth information to train our framework. The method introduced in this work focuses on robotic applications in indoor and outdoor environments and its results are evaluated on the competitive and publicly available NYU Depth V2 and KITTI Depth datasets.", "field": [], "task": ["Autonomous Navigation", "Autonomous Vehicles", "Depth Completion", "Monocular Depth Estimation", "Semantic Segmentation", "Surface Normals Estimation", "Visual Odometry"], "method": [], "dataset": ["NYU-Depth V2", "NYU-Depth V2 Surface Normals", "KITTI Eigen split", "KITTI Depth Completion Eigen Split"], "metric": ["RMSE", "REL", "absolute relative error"], "title": "On Deep Learning Techniques to Boost Monocular Depth Estimation for Autonomous Navigation"} {"abstract": "Neural Machine Translation (NMT), though recently developed, has shown promising results for various language pairs. Despite that, NMT has mostly been applied to formal texts such as those in the WMT shared tasks. This work further explores the effectiveness of NMT in spoken language domains by participating in the MT track of the IWSLT 2015. We consider two scenarios: (a) how to adapt existing NMT systems to a new domain and (b) the generalization of NMT to low-resource language pairs. Our results demonstrate that using an existing NMT framework, we can achieve competitive results in the aforementioned scenarios when translating from English to German and Vietnamese.
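Setting aside the cosine-based augmentation itself, the pipeline in the Alex-Net/SVM abstract above (a frozen pre-trained CNN as feature extractor, a linear SVM as the classifier) can be sketched as follows. The torchvision/scikit-learn combination and the choice of the 4096-dimensional penultimate layer are assumptions about one reasonable implementation, not the authors' code.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import LinearSVC

# Frozen AlexNet feature extractor: drop the final 1000-way layer to get 4096-d features.
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
alexnet.classifier = torch.nn.Sequential(*list(alexnet.classifier.children())[:-1])
alexnet.eval()

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

def extract_features(pil_images):
    batch = torch.stack([preprocess(im) for im in pil_images])
    with torch.no_grad():
        return alexnet(batch).numpy()        # (N, 4096) feature vectors

# train_images / train_labels would be the (augmented) STL-10 training set loaded elsewhere:
# svm = LinearSVC(C=1.0).fit(extract_features(train_images), train_labels)
# test_error = 1.0 - svm.score(extract_features(test_images), test_labels)
```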
Notably, we have advanced state-of-the-art results in the IWSLT English-German MT track by up to 5.2 BLEU points.", "field": [], "task": ["Machine Translation"], "method": [], "dataset": ["IWSLT2015 English-Vietnamese"], "metric": ["BLEU"], "title": "Stanford Neural Machine Translation Systems for Spoken Language Domains"} {"abstract": "We study how to sample negative examples to automatically construct a training set for effective model learning in retrieval-based dialogue systems. Following the idea of dynamically adapting negative examples to matching models during learning, we consider four strategies: minimum sampling, maximum sampling, semi-hard sampling, and decay-hard sampling. Empirical studies on two benchmarks with three matching models indicate that, compared with the widely used random sampling strategy, the first two strategies lead to a performance drop, while the latter two bring consistent improvements to the performance of all the models on both benchmarks.", "field": [], "task": ["Conversational Response Selection"], "method": [], "dataset": ["Ubuntu Dialogue (v1, Ranking)"], "metric": ["R10@1", "R10@5", "R2@1", "R10@2"], "title": "Sampling Matters! An Empirical Study of Negative Sampling Strategies for Learning of Matching Models in Retrieval-based Dialogue Systems"} {"abstract": "While depth cameras and inertial sensors have been frequently leveraged for human action recognition, these sensing modalities are impractical in many scenarios where cost or environmental constraints prohibit their use. As such, there has been recent interest in human action recognition using low-cost, readily-available RGB cameras via deep convolutional neural networks. However, many of the deep convolutional neural networks proposed for action recognition thus far have relied heavily on learning global appearance cues directly from imaging data, resulting in highly complex network architectures that are computationally expensive and difficult to train. Motivated to reduce network complexity and achieve higher performance, we introduce the concept of spatio-temporal activation reprojection (STAR). More specifically, we reproject the spatio-temporal activations generated by human pose estimation layers in space and time using a stack of 3D convolutions. Experimental results on UTD-MHAD and J-HMDB demonstrate that an end-to-end architecture based on the proposed STAR framework (which we nickname STAR-Net) is proficient in single-environment and small-scale applications. On UTD-MHAD, STAR-Net outperforms several methods using richer data modalities such as depth and inertial sensors.", "field": [], "task": ["Action Recognition", "Multimodal Activity Recognition", "Pose Estimation", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["UTD-MHAD", "J-HMDB"], "metric": ["Accuracy (CS)", "Accuracy (RGB+pose)"], "title": "STAR-Net: Action Recognition using Spatio-Temporal Activation Reprojection"} {"abstract": "A new deep learning-based electroencephalography (EEG) signal analysis framework is proposed. While deep neural networks, specifically convolutional neural networks (CNNs), have gained remarkable attention recently, they still suffer from the high dimensionality of the training data. Two-dimensional image inputs to CNNs are more prone to redundancy than the one-dimensional time-series inputs of conventional neural networks.
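As an aside to the "Sampling Matters!" record above: a minimal, hypothetical sketch of semi-hard negative sampling for a retrieval-based matching model; score_fn, context and candidates are illustrative names, not the paper's API.

```python
# Semi-hard negative sampling: among candidate responses, pick the negative
# whose matching score is highest while still below the positive's score.
import numpy as np

def semi_hard_negative(score_fn, context, positive, candidates, rng=None):
    rng = rng or np.random.default_rng()
    pos_score = score_fn(context, positive)
    scores = np.array([score_fn(context, c) for c in candidates])
    below = np.where(scores < pos_score)[0]               # negatives scored lower than the positive
    if len(below) == 0:                                   # fall back to a random candidate
        return candidates[rng.integers(len(candidates))]
    return candidates[below[np.argmax(scores[below])]]    # hardest among those still "easier"
```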
In this study, we propose a new dimensionality reduction framework for reducing the dimension of CNN inputs based on the tensor decomposition of the time-frequency representation of EEG signals. The proposed tensor decomposition-based dimensionality reduction algorithm transforms a large set of slices of the input tensor into a concise set of slices, which are called super-slices. Employing super-slices not only handles the artifacts and redundancies of the EEG data but also reduces the dimension of the CNN training inputs. We also consider different time-frequency representation methods for EEG image generation and provide a comprehensive comparison among them. We test our proposed framework on the CHB-MIT dataset, and the results show that our approach outperforms previous studies.", "field": [], "task": ["Dimensionality Reduction", "EEG", "Image Generation", "Seizure Detection", "Time Series"], "method": [], "dataset": ["CHB-MIT"], "metric": ["Accuracy"], "title": "EEG Signal Dimensionality Reduction and Classification using Tensor Decomposition and Deep Convolutional Neural Networks"} {"abstract": "Real-world data often follow a long-tailed distribution, as the frequency of each class is typically different. For example, a dataset can have a large number of under-represented classes and a few classes with more than sufficient data. However, a model representing the dataset is usually expected to have reasonably homogeneous performance across classes. Introducing a class-balanced loss and advanced methods for data re-sampling and augmentation are among the best practices to alleviate the data imbalance problem. However, the other part of the problem, concerning the under-represented classes, has to rely on additional knowledge to recover the missing information. In this work, we present a novel approach to address the long-tailed problem by augmenting the under-represented classes in the feature space with the features learned from the classes with ample samples. In particular, we decompose the features of each class into a class-generic component and a class-specific component using class activation maps. Novel samples of under-represented classes are then generated on the fly during training by fusing the class-specific features from the under-represented classes with the class-generic features from confusing classes. Our results on different datasets such as iNaturalist, ImageNet-LT, Places-LT and a long-tailed version of CIFAR show state-of-the-art performance.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["iNaturalist 2018"], "metric": ["Top-1 Accuracy"], "title": "Feature Space Augmentation for Long-Tailed Data"} {"abstract": "We present our UWB system for the task of capturing discriminative attributes at SemEval 2018. Given two words and an attribute, the system decides whether this attribute is discriminative between the words or not. Assuming the Distributional Hypothesis, i.e., that a word's meaning is related to its distribution across contexts, we introduce several approaches to compare word contextual information. We experiment with state-of-the-art semantic spaces and with simple co-occurrence statistics. We show that the word distribution in the corpus has potential for detecting discriminative attributes.
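As an aside to the "EEG Signal Dimensionality Reduction and Classification using Tensor Decomposition..." record above: a loose sketch of compressing a stack of time-frequency slices into a few "super-slices". The paper uses a tensor decomposition; the truncated SVD of the mode-0 unfolding below is a simplified stand-in, and the tensor layout and k are assumptions.

```python
# Compress (n_slices, freq, time) into k "super-slices" that mix the original
# slices along their leading principal directions.
import numpy as np

def super_slices(tfr_tensor, k=4):
    n_slices, n_freq, n_time = tfr_tensor.shape
    unfolded = tfr_tensor.reshape(n_slices, n_freq * n_time)   # mode-0 unfolding
    u, s, vt = np.linalg.svd(unfolded, full_matrices=False)
    compact = vt[:k] * s[:k, None]                              # k leading components
    return compact.reshape(k, n_freq, n_time)                   # reduced CNN input

x = np.random.randn(23, 64, 128)            # e.g. 23 time-frequency slices
print(super_slices(x, k=4).shape)           # (4, 64, 128)
```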
Our system achieves an F1 score of 72.1% and is ranked #4 among 26 submitted systems.", "field": [], "task": ["Relation Extraction"], "method": [], "dataset": ["SemEval 2018 Task 10"], "metric": ["F1-Score"], "title": "UWB at SemEval-2018 Task 10: Capturing Discriminative Attributes from Word Distributions"} {"abstract": "Knowledge Graph Completion (KGC) aims at automatically predicting missing links for large-scale knowledge graphs. A vast number of state-of-the-art KGC techniques have been published at top conferences in several research fields, including data mining, machine learning, and natural language processing. However, we notice that several recent papers report very high performance, largely outperforming previous state-of-the-art methods. In this paper, we find that this can be attributed to the inappropriate evaluation protocol used by them, and we propose a simple evaluation protocol to address this problem. The proposed protocol is robust to bias in the model, which can substantially affect the final results. We conduct extensive experiments and report the performance of several existing methods using our protocol. The reproducible code has been made publicly available.", "field": [], "task": ["Knowledge Graph Completion", "Knowledge Graphs", "Link Prediction"], "method": [], "dataset": ["FB15k-237"], "metric": ["Hits@10", "MR", "MRR"], "title": "A Re-evaluation of Knowledge Graph Completion Methods"} {"abstract": "We present an online approach to efficiently and simultaneously detect and track the 2D pose of multiple people in a video sequence. We build upon the Part Affinity Field (PAF) representation designed for static images, and propose an architecture that can encode and predict Spatio-Temporal Affinity Fields (STAF) across a video sequence. In particular, we propose a novel temporal topology cross-linked across limbs which can consistently handle body motions of a wide range of magnitudes. Additionally, we make the overall approach recurrent in nature, where the network ingests STAF heatmaps from previous frames and estimates those for the current frame. Our approach uses only online inference and tracking, and is currently the fastest and the most accurate bottom-up approach that is runtime-invariant to the number of people in the scene and accuracy-invariant to the input frame rate of the camera. Running at ~30 fps on a single GPU at single scale, it achieves highly competitive results on the PoseTrack benchmarks.", "field": [], "task": ["Pose Tracking"], "method": [], "dataset": ["PoseTrack2017"], "metric": ["MOTA"], "title": "Efficient Online Multi-Person 2D Pose Tracking with Recurrent Spatio-Temporal Affinity Fields"} {"abstract": "In the financial domain, risk modeling and profit generation heavily rely on the sophisticated and intricate stock movement prediction task. Stock forecasting is complex, given the stochastic dynamics and non-stationary behavior of the market. Stock movements are influenced by varied factors beyond the conventionally studied historical prices, such as social media and correlations among stocks. The rising ubiquity of online content and knowledge mandates an exploration of models that factor in such multimodal signals for accurate stock forecasting. We introduce an architecture that achieves a potent blend of chaotic temporal signals from financial data, social media, and inter-stock relationships via a graph neural network in a hierarchical temporal fashion.
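As an aside to the "A Re-evaluation of Knowledge Graph Completion Methods" record above: a minimal sketch of a tie-aware ranking protocol in the spirit of the one proposed there; the record does not spell out the exact protocol, so the function below should be read as an illustrative assumption rather than the paper's definition.

```python
# Rank the gold candidate uniformly at random among candidates that receive
# exactly the same score, instead of always first (optimistic) or last (pessimistic).
import numpy as np

def tie_aware_rank(scores, gold_idx, rng=None):
    rng = rng or np.random.default_rng()
    gold = scores[gold_idx]
    higher = int(np.sum(scores > gold))              # strictly better-scored candidates
    ties = int(np.sum(scores == gold)) - 1           # other candidates with the same score
    return higher + rng.integers(0, ties + 1) + 1    # 1-based rank inside the tie block

scores = np.array([0.9, 0.7, 0.7, 0.7, 0.1])
print(tie_aware_rank(scores, gold_idx=2))            # 2, 3 or 4, each with equal probability
```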
Through experiments on real-world S&P 500 index data and English tweets, we show the practical applicability of our model as a tool for investment decision making and trading.", "field": [], "task": ["Decision Making", "Stock Market Prediction"], "method": [], "dataset": ["stocknet"], "metric": ["F1"], "title": "Deep Attentive Learning for Stock Movement Prediction From Social Media Text and Company Correlations"} {"abstract": "Attention operators have been widely applied in various fields, including computer vision, natural language processing, and network embedding learning. Attention operators on graph data enable learnable weights when aggregating information from neighboring nodes. However, graph attention operators (GAOs) consume excessive computational resources, preventing their application to large graphs. In addition, GAOs belong to the family of soft attention, instead of hard attention, which has been shown to yield better performance. In this work, we propose a novel hard graph attention operator (hGAO) and a channel-wise graph attention operator (cGAO). hGAO uses the hard attention mechanism by attending only to important nodes. Compared to GAO, hGAO improves performance and saves computational cost by attending only to important nodes. To further reduce the requirements on computational resources, we propose cGAO, which performs attention operations along channels. cGAO avoids the dependency on the adjacency matrix, leading to dramatic reductions in computational resource requirements. Experimental results demonstrate that our proposed deep models with the new operators achieve consistently better performance. Comparison results also indicate that hGAO achieves significantly better performance than GAO on both node and graph embedding tasks. Efficiency comparisons show that our cGAO leads to dramatic savings in computational resources, making it applicable to large graphs.", "field": [], "task": ["Graph Classification", "Graph Embedding", "Graph Representation Learning", "Network Embedding", "Representation Learning"], "method": [], "dataset": ["COLLAB", "PROTEINS", "D&D", "IMDb-M", "MUTAG", "PTC"], "metric": ["Accuracy"], "title": "Graph Representation Learning via Hard and Channel-Wise Attention Networks"} {"abstract": "We propose a general framework for self-supervised learning of transferable visual representations based on Video-Induced Visual Invariances (VIVI). We consider the implicit hierarchy present in the videos and make use of (i) frame-level invariances (e.g. stability to color and contrast perturbations), (ii) shot/clip-level invariances (e.g. robustness to changes in object orientation and lighting conditions), and (iii) video-level invariances (semantic relationships of scenes across shots/clips), to define a holistic self-supervised loss. Training models using different variants of the proposed framework on videos from the YouTube-8M (YT8M) data set, we obtain state-of-the-art self-supervised transfer learning results on the 19 diverse downstream tasks of the Visual Task Adaptation Benchmark (VTAB), using only 1000 labels per task.
We then show how to co-train our models jointly with labeled images, outperforming an ImageNet-pretrained ResNet-50 by 0.8 points with 10x fewer labeled images, as well as the previous best supervised model by 3.7 points using the full ImageNet data set.", "field": [], "task": ["Image Classification", "Self-Supervised Learning", "Transfer Learning"], "method": [], "dataset": ["VTAB-1k"], "metric": ["Top-1 Accuracy"], "title": "Self-Supervised Learning of Video-Induced Visual Invariances"} {"abstract": "The incorporation of pseudo data in the training of grammatical error correction models has been one of the main factors in improving the performance of such models. However, consensus is lacking on experimental configurations, namely, choosing how the pseudo data should be generated or used. In this study, these choices are investigated through extensive experiments, and state-of-the-art performance is achieved on the CoNLL-2014 test set ($F_{0.5}=65.0$) and the official test set of the BEA-2019 shared task ($F_{0.5}=70.2$) without making any modifications to the model architecture.", "field": [], "task": ["Grammatical Error Correction"], "method": [], "dataset": ["CoNLL-2014 Shared Task", "BEA-2019 (test)"], "metric": ["F0.5"], "title": "An Empirical Study of Incorporating Pseudo Data into Grammatical Error Correction"} {"abstract": "In this paper, we present a novel method named RECON, that automatically identifies relations in a sentence (sentential relation extraction) and aligns to a knowledge graph (KG). RECON uses a graph neural network to learn representations of both the sentence as well as facts stored in a KG, improving the overall extraction quality. These facts, including entity attributes (label, alias, description, instance-of) and factual triples, have not been collectively used in the state of the art methods. We evaluate the effect of various forms of representing the KG context on the performance of RECON. The empirical evaluation on two standard relation extraction datasets shows that RECON significantly outperforms all state of the art methods on NYT Freebase and Wikidata datasets. RECON reports 87.23 F1 score (Vs 82.29 baseline) on Wikidata dataset whereas on NYT Freebase, reported values are 87.5(P@10) and 74.1(P@30) compared to the previous baseline scores of 81.3(P@10) and 63.1(P@30).", "field": [], "task": ["Relation Extraction"], "method": [], "dataset": ["New York Times Corpus"], "metric": ["P@30%", "P@10%"], "title": "RECON: Relation Extraction using Knowledge Graph Context in a Graph Neural Network"} {"abstract": "To answer the question in machine comprehension (MC) task, the models need to\nestablish the interaction between the question and the context. To tackle the\nproblem that the single-pass model cannot reflect on and correct its answer, we\npresent Ruminating Reader. Ruminating Reader adds a second pass of attention\nand a novel information fusion component to the Bi-Directional Attention Flow\nmodel (BiDAF). We propose novel layer structures that construct an query-aware\ncontext vector representation and fuse encoding representation with\nintermediate representation on top of BiDAF model. We show that a multi-hop\nattention mechanism can be applied to a bi-directional attention structure. 
In\nexperiments on SQuAD, we find that the Reader outperforms the BiDAF baseline by\na substantial margin, and matches or surpasses the performance of all other\npublished systems.", "field": [], "task": ["Question Answering", "Reading Comprehension"], "method": [], "dataset": ["SQuAD1.1 dev", "SQuAD1.1"], "metric": ["EM", "F1"], "title": "Ruminating Reader: Reasoning with Gated Multi-Hop Attention"} {"abstract": "Graph classification is a significant problem in many scientific domains. It\naddresses tasks such as the classification of proteins and chemical compounds\ninto categories according to their functions, or chemical and structural\nproperties. In a supervised setting, this problem can be framed as learning the\nstructure, features and relationships between features within a set of labelled\ngraphs and being able to correctly predict the labels or categories of unseen\ngraphs.\n A significant difficulty in this task arises when attempting to apply\nestablished classification algorithms due to the requirement for fixed size\nmatrix or tensor representations of the graphs which may vary greatly in their\nnumbers of nodes and edges. Building on prior work combining explicit tensor\nrepresentations with a standard image-based classifier, we propose a model to\nperform graph classification by extracting fixed size tensorial information\nfrom each graph in a given set, and using a Capsule Network to perform\nclassification.\n The graphs we consider here are undirected and with categorical features on\nthe nodes. Using standard benchmarking chemical and protein datasets, we\ndemonstrate that our graph Capsule Network classification model using an\nexplicit tensorial representation of the graphs is competitive with current\nstate of the art graph kernels and graph neural network models despite only\nlimited hyper-parameter searching.", "field": [], "task": ["Graph Classification"], "method": [], "dataset": ["NCI109", "ENZYMES", "PROTEINS", "D&D", "NCI1", "MUTAG", "PTC"], "metric": ["Accuracy"], "title": "Capsule Neural Networks for Graph Classification using Explicit Tensorial Graph Representations"} {"abstract": "Objects moving at high speed along complex trajectories often appear in videos, especially videos of sports. Such objects elapse non-negligible distance during exposure time of a single frame and therefore their position in the frame is not well defined. They appear as semi-transparent streaks due to the motion blur and cannot be reliably tracked by standard trackers. We propose a novel approach called Tracking by Deblatting based on the observation that motion blur is directly related to the intra-frame trajectory of an object. Blur is estimated by solving two intertwined inverse problems, blind deblurring and image matting, which we call deblatting. The trajectory is then estimated by fitting a piecewise quadratic curve, which models physically justifiable trajectories. As a result, tracked objects are precisely localized with higher temporal resolution than by conventional trackers. The proposed TbD tracker was evaluated on a newly created dataset of videos with ground truth obtained by a high-speed camera using a novel Trajectory-IoU metric that generalizes the traditional Intersection over Union and measures the accuracy of the intra-frame trajectory. 
The proposed method outperforms baseline both in recall and trajectory accuracy.", "field": [], "task": ["Deblurring", "Image Matting", "Object Tracking"], "method": [], "dataset": ["Falling Objects", "TbD", "TbD-3D"], "metric": ["SSIM", "TIoU", "PSNR"], "title": "Intra-frame Object Tracking by Deblatting"} {"abstract": "Semantic segmentation of 3D point cloud data is essential for enhanced high-level perception in autonomous platforms. Furthermore, given the increasing deployment of LiDAR sensors onboard of cars and drones, a special emphasis is also placed on non-computationally intensive algorithms that operate on mobile GPUs. Previous efficient state-of-the-art methods relied on 2D spherical projection of point clouds as input for 2D fully convolutional neural networks to balance the accuracy-speed trade-off. This paper introduces a novel approach for 3D point cloud semantic segmentation that exploits multiple projections of the point cloud to mitigate the loss of information inherent in single projection methods. Our Multi-Projection Fusion (MPF) framework analyzes spherical and bird's-eye view projections using two separate highly-efficient 2D fully convolutional models then combines the segmentation results of both views. The proposed framework is validated on the SemanticKITTI dataset where it achieved a mIoU of 55.5 which is higher than state-of-the-art projection-based methods RangeNet++ and PolarNet while being 1.6x faster than the former and 3.1x faster than the latter.", "field": [], "task": ["Real-Time Semantic Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["SemanticKITTI"], "metric": ["Speed (FPS)", "mIOU", "mIoU"], "title": "Multi Projection Fusion for Real-time Semantic Segmentation of 3D LiDAR Point Clouds"} {"abstract": "Recognizing text from natural images is a hot research topic in computer\nvision due to its various applications. Despite the enduring research of\nseveral decades on optical character recognition (OCR), recognizing texts from\nnatural images is still a challenging task. This is because scene texts are\noften in irregular (e.g. curved, arbitrarily-oriented or seriously distorted)\narrangements, which have not yet been well addressed in the literature.\nExisting methods on text recognition mainly work with regular (horizontal and\nfrontal) texts and cannot be trivially generalized to handle irregular texts.\nIn this paper, we develop the arbitrary orientation network (AON) to directly\ncapture the deep features of irregular texts, which are combined into an\nattention-based decoder to generate character sequence. The whole network can\nbe trained end-to-end by using only images and word-level annotations.\nExtensive experiments on various benchmarks, including the CUTE80,\nSVT-Perspective, IIIT5k, SVT and ICDAR datasets, show that the proposed\nAON-based method achieves the-state-of-the-art performance in irregular\ndatasets, and is comparable to major existing methods in regular datasets.", "field": [], "task": ["Optical Character Recognition"], "method": [], "dataset": ["ICDAR2015", "ICDAR 2003"], "metric": ["Accuracy"], "title": "AON: Towards Arbitrarily-Oriented Text Recognition"} {"abstract": "We aim to detect all instances of a category in an image and, for each\ninstance, mark the pixels that belong to it. We call this task Simultaneous\nDetection and Segmentation (SDS). Unlike classical bounding box detection, SDS\nrequires a segmentation and not just a box. 
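As an aside to the "Intra-frame Object Tracking by Deblatting" record above: a toy sketch of fitting one quadratic segment to sub-frame object positions; the record describes a piecewise quadratic fit, so a single segment and the NumPy polyfit routine are simplifications chosen here purely for illustration.

```python
# Fit x(t) and y(t) as quadratics over one trajectory segment (needs >= 3 samples).
import numpy as np

def fit_quadratic_segment(t, xy):
    # t: (n,) sub-frame timestamps; xy: (n, 2) estimated object centers
    cx = np.polyfit(t, xy[:, 0], deg=2)     # x(t) = cx[0]*t**2 + cx[1]*t + cx[2]
    cy = np.polyfit(t, xy[:, 1], deg=2)
    return cx, cy

def eval_segment(cx, cy, t):
    # evaluate the fitted curve at (possibly denser) timestamps t
    return np.stack([np.polyval(cx, t), np.polyval(cy, t)], axis=1)
```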
Unlike classical semantic\nsegmentation, we require individual object instances. We build on recent work\nthat uses convolutional neural networks to classify category-independent region\nproposals (R-CNN [16]), introducing a novel architecture tailored for SDS. We\nthen use category-specific, top- down figure-ground predictions to refine our\nbottom-up proposals. We show a 7 point boost (16% relative) over our baselines\non SDS, a 5 point boost (10% relative) over state-of-the-art on semantic\nsegmentation, and state-of-the-art performance in object detection. Finally, we\nprovide diagnostic tools that unpack performance and provide directions for\nfuture work.", "field": [], "task": ["Object Detection", "Semantic Segmentation"], "method": [], "dataset": ["PASCAL VOC 2012", "PASCAL VOC 2012 test"], "metric": ["Mean IoU", "MAP"], "title": "Simultaneous Detection and Segmentation"} {"abstract": "We investigate a new commonsense inference task: given an event described in a short free-form text (\"X drinks coffee in the morning\"), a system reasons about the likely intents (\"X wants to stay awake\") and reactions (\"X feels alert\") of the event's participants. To support this study, we construct a new crowdsourced corpus of 25,000 event phrases covering a diverse range of everyday events and situations. We report baseline performance on this task, demonstrating that neural encoder-decoder models can successfully compose embedding representations of previously unseen events and reason about the likely intents and reactions of the event participants. In addition, we demonstrate how commonsense inference on people's intents and reactions can help unveil the implicit gender inequality prevalent in modern movie scripts.", "field": [], "task": ["Common Sense Reasoning"], "method": [], "dataset": ["Event2Mind test", "Event2Mind dev"], "metric": ["Average Cross-Ent"], "title": "Event2Mind: Commonsense Inference on Events, Intents, and Reactions"} {"abstract": "We combine two of the most popular approaches to automated Grammatical Error\nCorrection (GEC): GEC based on Statistical Machine Translation (SMT) and GEC\nbased on Neural Machine Translation (NMT). The hybrid system achieves new\nstate-of-the-art results on the CoNLL-2014 and JFLEG benchmarks. This GEC\nsystem preserves the accuracy of SMT output and, at the same time, generates\nmore fluent sentences as it typical for NMT. Our analysis shows that the\ncreated systems are closer to reaching human-level performance than any other\nGEC system reported so far.", "field": [], "task": ["Grammatical Error Correction", "Machine Translation"], "method": [], "dataset": ["CoNLL-2014 Shared Task (10 annotations)", "CoNLL-2014 Shared Task", "JFLEG"], "metric": ["GLEU", "F0.5"], "title": "Near Human-Level Performance in Grammatical Error Correction with Hybrid Machine Translation"} {"abstract": "Actionness was introduced to quantify the likelihood of containing a generic\naction instance at a specific location. Accurate and efficient estimation of\nactionness is important in video analysis and may benefit other relevant tasks\nsuch as action recognition and action detection. This paper presents a new deep\narchitecture for actionness estimation, called hybrid fully convolutional\nnetwork (H-FCN), which is composed of appearance FCN (A-FCN) and motion FCN\n(M-FCN). These two FCNs leverage the strong capacity of deep models to estimate\nactionness maps from the perspectives of static appearance and dynamic motion,\nrespectively. 
In addition, the fully convolutional nature of H-FCN allows it to\nefficiently process videos with arbitrary sizes. Experiments are conducted on\nthe challenging datasets of Stanford40, UCF Sports, and JHMDB to verify the\neffectiveness of H-FCN on actionness estimation, which demonstrate that our\nmethod achieves superior performance to previous ones. Moreover, we apply the\nestimated actionness maps on action proposal generation and action detection.\nOur actionness maps advance the current state-of-the-art performance of these\ntasks substantially.", "field": [], "task": ["Action Detection", "Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["J-HMDB-21"], "metric": ["Frame-mAP"], "title": "Actionness Estimation Using Hybrid Fully Convolutional Networks"} {"abstract": "We propose a novel multi-grained attention network (MGAN) model for aspect level sentiment classification. Existing approaches mostly adopt coarse-grained attention mechanism, which may bring information loss if the aspect has multiple words or larger context. We propose a fine-grained attention mechanism, which can capture the word-level interaction between aspect and context. And then we leverage the fine-grained and coarse-grained attention mechanisms to compose the MGAN framework. Moreover, unlike previous works which train each aspect with its context separately, we design an aspect alignment loss to depict the aspect-level interactions among the aspects that have the same context. We evaluate the proposed approach on three datasets: laptop and restaurant are from SemEval 2014, and the last one is a twitter dataset. Experimental results show that the multi-grained attention network consistently outperforms the state-of-the-art methods on all three datasets. We also conduct experiments to evaluate the effectiveness of aspect alignment loss, which indicates the aspect-level interactions can bring extra useful information and further improve the performance.", "field": [], "task": ["Aspect-Based Sentiment Analysis", "Sentiment Analysis"], "method": [], "dataset": ["SemEval 2014 Task 4 Sub Task 2"], "metric": ["Laptop (Acc)", "Restaurant (Acc)", "Mean Acc (Restaurant + Laptop)"], "title": "Multi-grained Attention Network for Aspect-Level Sentiment Classification"} {"abstract": "In this paper we present state-of-the-art (SOTA) performance on the LibriSpeech corpus with two novel neural network architectures, a multistream CNN for acoustic modeling and a self-attentive simple recurrent unit (SRU) for language modeling. In the hybrid ASR framework, the multistream CNN acoustic model processes an input of speech frames in multiple parallel pipelines where each stream has a unique dilation rate for diversity. Trained with the SpecAugment data augmentation method, it achieves relative word error rate (WER) improvements of 4% on test-clean and 14% on test-other. We further improve the performance via N-best rescoring using a 24-layer self-attentive SRU language model, achieving WERs of 1.75% on test-clean and 4.46% on test-other.", "field": [], "task": ["Data Augmentation", "Language Modelling", "Speech Recognition"], "method": [], "dataset": ["LibriSpeech test-other", "LibriSpeech test-clean"], "metric": ["Word Error Rate (WER)"], "title": "ASAPP-ASR: Multistream CNN and Self-Attentive SRU for SOTA Speech Recognition"} {"abstract": "We introduce a novel parameterized convolutional neural network for aspect level sentiment classification. 
Using parameterized filters and parameterized gates, we incorporate aspect information into convolutional neural networks (CNNs). Experiments demonstrate that our parameterized filters and parameterized gates effectively capture the aspect-specific features, and our CNN-based models achieve excellent results on the SemEval 2014 datasets.", "field": [], "task": ["Sentiment Analysis"], "method": [], "dataset": ["SemEval 2014 Task 4 Sub Task 2"], "metric": ["Laptop (Acc)", "Restaurant (Acc)", "Mean Acc (Restaurant + Laptop)"], "title": "Parameterized Convolutional Neural Networks for Aspect Level Sentiment Classification"} {"abstract": "Aspect sentiment classification (ASC) is a fundamental task in sentiment analysis. Given an aspect/target and a sentence, the task classifies the sentiment polarity expressed on the target in the sentence. Memory networks (MNs) have been used for this task recently and have achieved state-of-the-art results. In MNs, the attention mechanism plays a crucial role in detecting the sentiment context for the given target. However, we found an important problem with the current MNs in performing the ASC task, and simply improving the attention mechanism will not solve it. The problem is referred to as target-sensitive sentiment, which means that the sentiment polarity of the (detected) context depends on the given target and cannot be inferred from the context alone. To tackle this problem, we propose target-sensitive memory networks (TMNs). Several alternative techniques are designed for the implementation of TMNs, and their effectiveness is experimentally evaluated.", "field": [], "task": ["Aspect-Based Sentiment Analysis", "Sentiment Analysis"], "method": [], "dataset": ["SemEval 2014 Task 4 Sub Task 2"], "metric": ["Laptop (Acc)", "Restaurant (Acc)", "Mean Acc (Restaurant + Laptop)"], "title": "Target-Sensitive Memory Networks for Aspect Sentiment Classification"} {"abstract": "Annotation errors and bias are inevitable among different facial expression datasets due to the subjectiveness of annotating facial expressions. Owing to these inconsistent annotations, the performance of existing facial expression recognition (FER) methods cannot keep improving when the training set is enlarged by merging multiple datasets. To address the inconsistency, we propose an Inconsistent Pseudo Annotations to Latent Truth (IPA2LT) framework to train an FER model from multiple inconsistently labeled datasets and large-scale unlabeled data. In IPA2LT, we assign each sample more than one label through human annotations or model predictions. Then, we propose an end-to-end LTNet with a scheme for discovering the latent truth from the inconsistent pseudo labels and the input face images. To our knowledge, IPA2LT serves as the first work to solve the training problem with inconsistently labeled FER datasets. Experiments on synthetic data validate the effectiveness of the proposed method in learning from inconsistent labels.
We also conduct extensive experiments in FER and show that our method outperforms other state-of-the-art and optional methods under a rigorous evaluation protocol involving 7 FER datasets.", "field": [], "task": ["Facial Expression Recognition"], "method": [], "dataset": ["AffectNet"], "metric": ["Accuracy (7 emotion)", "Accuracy (8 emotion)"], "title": "Facial Expression Recognition with Inconsistently Annotated Datasets"} {"abstract": "ConvNets achieve good results when training from clean data, but learning from noisy labels significantly degrades performances and remains challenging. Unlike previous works constrained by many conditions, making them infeasible to real noisy cases, this work presents a novel deep self-learning framework to train a robust network on the real noisy datasets without extra supervision. The proposed approach has several appealing benefits. (1) Different from most existing work, it does not rely on any assumption on the distribution of the noisy labels, making it robust to real noises. (2) It does not need extra clean supervision or accessorial network to help training. (3) A self-learning framework is proposed to train the network in an iterative end-to-end manner, which is effective and efficient. Extensive experiments in challenging benchmarks such as Clothing1M and Food101-N show that our approach outperforms its counterparts in all empirical settings.", "field": [], "task": ["Image Classification", "Learning with noisy labels"], "method": [], "dataset": ["Food-101N", "Clothing1M"], "metric": ["Accuracy"], "title": "Deep Self-Learning From Noisy Labels"} {"abstract": "Label noise is increasingly prevalent in datasets acquired from noisy channels. Existing approaches that detect and remove label noise generally rely on some form of supervision, which is not scalable and error-prone. In this paper, we propose NoiseRank, for unsupervised label noise reduction using Markov Random Fields (MRF). We construct a dependence model to estimate the posterior probability of an instance being incorrectly labeled given the dataset, and rank instances based on their estimated probabilities. Our method 1) Does not require supervision from ground-truth labels, or priors on label or noise distribution. 2) It is interpretable by design, enabling transparency in label noise removal. 3) It is agnostic to classifier architecture/optimization framework and content modality. These advantages enable wide applicability in real noise settings, unlike prior works constrained by one or more conditions. NoiseRank improves state-of-the-art classification on Food101-N (~20% noise), and is effective on high noise Clothing-1M (~40% noise).", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["Clothing1M"], "metric": ["Accuracy"], "title": "NoiseRank: Unsupervised Label Noise Reduction with Dependence Models"} {"abstract": "Learning powerful data embeddings has become a center piece in machine learning, especially in natural language processing and computer vision domains. The crux of these embeddings is that they are pretrained on huge corpus of data in a unsupervised fashion, sometimes aided with transfer learning. However currently in the graph learning domain, embeddings learned through existing graph neural networks (GNNs) are task dependent and thus cannot be shared across different datasets. 
In this paper, we present the first powerful and theoretically guaranteed graph neural network designed to learn task-independent graph embeddings, hereafter referred to as deep universal graph embedding (DUGNN). Our DUGNN model incorporates a novel graph neural network (as a universal graph encoder) and leverages rich Graph Kernels (as a multi-task graph decoder) for both unsupervised learning and (task-specific) adaptive supervised learning. By learning task-independent graph embeddings across diverse datasets, DUGNN also reaps the benefits of transfer learning. Through extensive experiments and ablation studies, we show that the proposed DUGNN model consistently outperforms both the existing state-of-the-art GNN models and Graph Kernels, with an accuracy gain of 3%-8% on graph classification benchmark datasets.", "field": [], "task": ["Graph Classification", "Graph Embedding", "Graph Learning", "Transfer Learning"], "method": [], "dataset": ["COLLAB", "ENZYMES", "IMDb-B", "PROTEINS", "D&D", "IMDb-M", "PTC"], "metric": ["Accuracy"], "title": "Learning Universal Graph Neural Network Embeddings With Aid Of Transfer Learning"} {"abstract": "Graph learning is currently dominated by graph kernels, which, while powerful, suffer from some significant limitations. Convolutional Neural Networks (CNNs) offer a very appealing alternative, but processing graphs with CNNs is not trivial. To address this challenge, many sophisticated extensions of CNNs have recently been introduced. In this paper, we reverse the problem: rather than proposing yet another graph CNN model, we introduce a novel way to represent graphs as multi-channel image-like structures that allows them to be handled by vanilla 2D CNNs. Experiments reveal that our method is more accurate than state-of-the-art graph kernels and graph CNNs on 4 out of 6 real-world datasets (with and without continuous node attributes), and close elsewhere. Our approach is also preferable to graph kernels in terms of time complexity. Code and data are publicly available.", "field": [], "task": ["Graph Classification", "Graph Learning"], "method": [], "dataset": ["COLLAB", "RE-M12K", "IMDb-B", "RE-M5K"], "metric": ["Accuracy"], "title": "Graph Classification with 2D Convolutional Neural Networks"} {"abstract": "Multi-choice Machine Reading Comprehension (MMRC) aims to select the correct answer from a set of options based on a given passage and question. Due to the task-specific nature of MMRC, it is non-trivial to transfer knowledge from other MRC tasks such as SQuAD and DREAM. In this paper, we simply reconstruct multi-choice into single-choice by training a binary classifier to distinguish whether a certain answer is correct, and then select the option with the highest confidence score. We build our model upon the ALBERT-xxlarge model and evaluate it on the RACE dataset. During training, we adopt an AutoML strategy to tune the parameters. Experimental results show that the single-choice setting is better than multi-choice.
In addition, by transferring knowledge from other kinds of MRC tasks, our model achieves new state-of-the-art results in both single and ensemble settings.", "field": [], "task": ["AutoML", "Machine Reading Comprehension", "Reading Comprehension", "Transfer Learning"], "method": [], "dataset": ["RACE"], "metric": ["Accuracy"], "title": "Improving Machine Reading Comprehension with Single-choice Decision and Transfer Learning"} {"abstract": "Visual dialog is a challenging vision-language task, which requires the agent to answer multi-round questions about an image. It typically needs to address two major problems: (1) how to answer visually-grounded questions, which is the core challenge in visual question answering (VQA); (2) how to infer the co-reference between questions and the dialog history. An example of visual co-reference is: pronouns (e.g., \"they\") in the question (e.g., \"Are they on or off?\") are linked with nouns (e.g., \"lamps\") appearing in the dialog history (e.g., \"How many lamps are there?\") and the object grounded in the image. In this work, to resolve the visual co-reference for visual dialog, we propose a novel attention mechanism called Recursive Visual Attention (RvA). Specifically, our dialog agent browses the dialog history until the agent has sufficient confidence in the visual co-reference resolution, and refines the visual attention recursively. The quantitative and qualitative experimental results on the large-scale VisDial v0.9 and v1.0 datasets demonstrate that the proposed RvA not only outperforms the state-of-the-art methods, but also achieves reasonable recursion and interpretable attention maps without additional annotations. The code is available at https://github.com/yuleiniu/rva.", "field": [], "task": ["Question Answering", "Visual Dialog", "Visual Question Answering"], "method": [], "dataset": ["Visual Dialog v1.0 test-std", "VisDial v0.9 val"], "metric": ["MRR (x 100)", "R@10", "NDCG (x 100)", "R@5", "Mean Rank", "MRR", "Mean", "R@1"], "title": "Recursive Visual Attention in Visual Dialog"} {"abstract": "Open-domain question answering can be reformulated as a phrase retrieval problem, without the need for processing documents on-demand during inference (Seo et al., 2019). However, current phrase retrieval models heavily depend on their sparse representations while still underperforming retriever-reader approaches. In this work, we show for the first time that we can learn dense phrase representations alone that achieve much stronger performance in open-domain QA. Our approach includes (1) learning query-agnostic phrase representations via question generation and distillation; (2) novel negative-sampling methods for global normalization; (3) query-side fine-tuning for transfer learning. On five popular QA datasets, our model DensePhrases improves previous phrase retrieval models by 15%-25% absolute accuracy and matches the performance of state-of-the-art retriever-reader models. Our model is easy to parallelize due to pure dense representations and processes more than 10 questions per second on CPUs.
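As an aside to the "Improving Machine Reading Comprehension with Single-choice Decision and Transfer Learning" record above: a minimal sketch of recasting multi-choice reading comprehension as independent binary decisions; binary_scorer is a hypothetical callable standing in for the ALBERT-based classifier described in the record.

```python
# Score each (passage, question, option) triple independently with a binary
# classifier and answer with the highest-confidence option.
def predict_answer(binary_scorer, passage, question, options):
    # binary_scorer(passage, question, option) -> P(option is correct) in [0, 1]
    scores = [binary_scorer(passage, question, opt) for opt in options]
    return max(range(len(options)), key=lambda i: scores[i])
```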
Finally, we directly use our pre-indexed dense phrase representations for two slot filling tasks, showing the promise of utilizing DensePhrases as a dense knowledge base for downstream tasks.", "field": [], "task": ["Open-Domain Question Answering", "Question Answering", "Question Generation", "Slot Filling", "Transfer Learning"], "method": [], "dataset": ["Natural Questions (long)", "SQuAD1.1 dev", "KILT: Zero Shot RE", "KILT: T-REx"], "metric": ["R-Prec", "Recall@5", "F1", "KILT-F1", "Accuracy", "EM", "KILT-AC"], "title": "Learning Dense Representations of Phrases at Scale"} {"abstract": "Anomaly detection is a challenging task and usually formulated as an unsupervised learning problem for the unexpectedness of anomalies. This paper proposes a simple yet powerful approach to this issue, which is implemented in the student-teacher framework for its advantages but substantially extends it in terms of both accuracy and efficiency. Given a strong model pre-trained on image classification as the teacher, we distill the knowledge into a single student network with the identical architecture to learn the distribution of anomaly-free images and this one-step transfer preserves the crucial clues as much as possible. Moreover, we integrate the multi-scale feature matching strategy into the framework, and this hierarchical feature alignment enables the student network to receive a mixture of multi-level knowledge from the feature pyramid under better supervision, thus allowing to detect anomalies of various sizes. The difference between feature pyramids generated by the two networks serves as a scoring function indicating the probability of anomaly occurring. Due to such operations, our approach achieves accurate and fast pixel-level anomaly detection. Very competitive results are delivered on three major benchmarks, significantly superior to the state of the art ones. In addition, it makes inferences at a very high speed (with 100 FPS for images of the size at 256x256), at least dozens of times faster than the latest counterparts.", "field": [], "task": ["Anomaly Detection", "Image Classification", "Unsupervised Anomaly Detection"], "method": [], "dataset": ["MVTec AD"], "metric": ["Detection AUROC", "Segmentation AUROC"], "title": "Student-Teacher Feature Pyramid Matching for Unsupervised Anomaly Detection"} {"abstract": "We present deep communicating agents in an encoder-decoder architecture to\naddress the challenges of representing a long document for abstractive\nsummarization. With deep communicating agents, the task of encoding a long text\nis divided across multiple collaborating agents, each in charge of a subsection\nof the input text. These encoders are connected to a single decoder, trained\nend-to-end using reinforcement learning to generate a focused and coherent\nsummary. Empirical results demonstrate that multiple communicating encoders\nlead to a higher quality summary compared to several strong baselines,\nincluding those based on a single encoder or multiple non-communicating\nencoders.", "field": [], "task": ["Abstractive Text Summarization"], "method": [], "dataset": ["CNN / Daily Mail"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Deep Communicating Agents for Abstractive Summarization"} {"abstract": "Existing pose estimation approaches fall into two categories: single-stage and multi-stage methods. While multi-stage methods are seemingly more suited for the task, their performance in current practice is not as good as single-stage methods. 
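As an aside to the "Student-Teacher Feature Pyramid Matching for Unsupervised Anomaly Detection" record above: a hedged sketch of turning teacher/student feature pyramids into a pixel-wise anomaly map; the cosine-distance scoring, bilinear fusion and tensor shapes are assumptions for illustration, not the authors' implementation.

```python
# At each pyramid level, score pixels by how far the student's features drift
# from the frozen teacher's, then fuse the per-level maps at input resolution.
import torch
import torch.nn.functional as F

def anomaly_map(teacher_feats, student_feats, out_size):
    # teacher_feats / student_feats: lists of (B, C_l, H_l, W_l) tensors, one per level
    amap = torch.zeros(teacher_feats[0].shape[0], 1, *out_size,
                       device=teacher_feats[0].device)
    for t, s in zip(teacher_feats, student_feats):
        t = F.normalize(t, dim=1)
        s = F.normalize(s, dim=1)
        diff = 1.0 - (t * s).sum(dim=1, keepdim=True)        # 1 - cosine similarity
        amap += F.interpolate(diff, size=out_size,
                              mode="bilinear", align_corners=False)
    return amap / len(teacher_feats)                          # higher = more anomalous
```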
This work studies this issue. We argue that the current multi-stage methods' unsatisfactory performance comes from the insufficiency in various design choices. We propose several improvements, including the single-stage module design, cross stage feature aggregation, and coarse-to-fine supervision. The resulting method establishes the new state-of-the-art on both MS COCO and MPII Human Pose dataset, justifying the effectiveness of a multi-stage architecture. The source code is publicly available for further research.", "field": [], "task": ["Keypoint Detection", "Pose Estimation"], "method": [], "dataset": ["COCO", "COCO test-challenge", "COCO minival", "MPII Human Pose", "COCO test-dev"], "metric": ["ARM", "Test AP", "APM", "AR75", "PCKh-0.5", "AR50", "ARL", "AP75", "AP", "APL", "AP50", "AR"], "title": "Rethinking on Multi-Stage Networks for Human Pose Estimation"} {"abstract": "Numerous task-specific variants of conditional generative adversarial networks have been developed for image completion. Yet, a serious limitation remains that all existing algorithms tend to fail when handling large-scale missing regions. To overcome this challenge, we propose a generic new approach that bridges the gap between image-conditional and recent modulated unconditional generative architectures via co-modulation of both conditional and stochastic style representations. Also, due to the lack of good quantitative metrics for image completion, we propose the new Paired/Unpaired Inception Discriminative Score (P-IDS/U-IDS), which robustly measures the perceptual fidelity of inpainted images compared to real images via linear separability in a feature space. Experiments demonstrate superior performance in terms of both quality and diversity over state-of-the-art methods in free-form image completion and easy generalization to image-to-image translation. Code is available at https://github.com/zsyzzsoft/co-mod-gan.", "field": [], "task": [], "method": [], "dataset": ["Places2", "FFHQ 512 x 512"], "metric": ["FID", "P-IDS", "U-IDS"], "title": "Large Scale Image Completion via Co-Modulated Generative Adversarial Networks"} {"abstract": "Part-based approaches for fine-grained recognition do not show the expected performance gain over global methods, although being able to explicitly focus on small details that are relevant for distinguishing highly similar classes. We assume that part-based methods suffer from a missing representation of local features, which is invariant to the order of parts and can handle a varying number of visible parts appropriately. The order of parts is artificial and often only given by ground-truth annotations, whereas viewpoint variations and occlusions result in parts that are not observable. Therefore, we propose integrating a Fisher vector encoding of part features into convolutional neural networks. The parameters for this encoding are estimated jointly with those of the neural network in an end-to-end manner. 
Our approach improves state-of-the-art accuracies for bird species classification on CUB-200-2011 from 90.40\\% to 90.95\\%, on NA-Birds from 89.20\\% to 90.30\\%, and on Birdsnap from 84.30\\% to 86.97\\%.", "field": [], "task": ["Fine-Grained Image Classification"], "method": [], "dataset": [" CUB-200-2011"], "metric": ["Accuracy"], "title": "End-to-end Learning of a Fisher Vector Encoding for Part Features in Fine-grained Recognition"} {"abstract": "We propose OmniPose, a single-pass, end-to-end trainable framework, that achieves state-of-the-art results for multi-person pose estimation. Using a novel waterfall module, the OmniPose architecture leverages multi-scale feature representations that increase the effectiveness of backbone feature extractors, without the need for post-processing. OmniPose incorporates contextual information across scales and joint localization with Gaussian heatmap modulation at the multi-scale feature extractor to estimate human pose with state-of-the-art accuracy. The multi-scale representations, obtained by the improved waterfall module in OmniPose, leverage the efficiency of progressive filtering in the cascade architecture, while maintaining multi-scale fields-of-view comparable to spatial pyramid configurations. Our results on multiple datasets demonstrate that OmniPose, with an improved HRNet backbone and waterfall module, is a robust and efficient architecture for multi-person pose estimation that achieves state-of-the-art results.", "field": [], "task": [], "method": [], "dataset": ["COCO", "UPenn Action", "Leeds Sports Poses", "MPII Human Pose"], "metric": ["Validation AP", "PCKh-0.5", "Mean PCK@0.2", "AP", "PCK"], "title": "OmniPose: A Multi-Scale Framework for Multi-Person Pose Estimation"} {"abstract": "The recent research in semi-supervised learning (SSL) is mostly dominated by consistency regularization based methods which achieve strong performance. However, they heavily rely on domain-specific data augmentations, which are not easy to generate for all data modalities. Pseudo-labeling (PL) is a general SSL approach that does not have this constraint but performs relatively poorly in its original formulation. We argue that PL underperforms due to the erroneous high confidence predictions from poorly calibrated models; these predictions generate many incorrect pseudo-labels, leading to noisy training. We propose an uncertainty-aware pseudo-label selection (UPS) framework which improves pseudo labeling accuracy by drastically reducing the amount of noise encountered in the training process. Furthermore, UPS generalizes the pseudo-labeling process, allowing for the creation of negative pseudo-labels; these negative pseudo-labels can be used for multi-label classification as well as negative learning to improve the single-label classification. We achieve strong performance when compared to recent SSL methods on the CIFAR-10 and CIFAR-100 datasets. 
Also, we demonstrate the versatility of our method on the video dataset UCF-101 and the multi-label dataset Pascal VOC.", "field": [], "task": ["Multi-Label Classification", "Semi-Supervised Image Classification", "Semi-Supervised Video Classification"], "method": [], "dataset": ["CIFAR-100, 4000 Labels", "cifar-100, 10000 Labels", "CIFAR-10, 4000 Labels", "CIFAR-10, 1000 Labels"], "metric": ["Accuracy"], "title": "In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning"} {"abstract": "The paper discusses a pooling mechanism to induce subsampling in graph structured data and introduces it as a component of a graph convolutional neural network. The pooling mechanism builds on the Non-Negative Matrix Factorization (NMF) of a matrix representing node adjacency and node similarity as adaptively obtained through the vertices embedding learned by the model. Such mechanism is applied to obtain an incrementally coarser graph where nodes are adaptively pooled into communities based on the outcomes of the non-negative factorization. The empirical analysis on graph classification benchmarks shows how such coarsening process yields significant improvements in the predictive performance of the model with respect to its non-pooled counterpart.", "field": [], "task": ["Graph Classification"], "method": [], "dataset": ["COLLAB", "ENZYMES", "D&D", "PROTEINS", "NCI1"], "metric": ["Accuracy"], "title": "A Non-Negative Factorization approach to node pooling in Graph Convolutional Neural Networks"} {"abstract": "This paper presents results of our experiments for the next utterance ranking\non the Ubuntu Dialog Corpus -- the largest publicly available multi-turn dialog\ncorpus. First, we use an in-house implementation of previously reported models\nto do an independent evaluation using the same data. Second, we evaluate the\nperformances of various LSTMs, Bi-LSTMs and CNNs on the dataset. Third, we\ncreate an ensemble by averaging predictions of multiple models. The ensemble\nfurther improves the performance and it achieves a state-of-the-art result for\nthe next utterance ranking on this dataset. Finally, we discuss our future\nplans using this corpus.", "field": [], "task": ["Conversational Response Selection"], "method": [], "dataset": ["Ubuntu Dialogue (v1, Ranking)"], "metric": ["R10@1", "R10@5", "R2@1", "R10@2"], "title": "Improved Deep Learning Baselines for Ubuntu Corpus Dialogs"} {"abstract": "Graph kernels have attracted a lot of attention during the last decade, and\nhave evolved into a rapidly developing branch of learning on structured data.\nDuring the past 20 years, the considerable research activity that occurred in\nthe field resulted in the development of dozens of graph kernels, each focusing\non specific structural properties of graphs. Graph kernels have proven\nsuccessful in a wide range of domains, ranging from social networks to\nbioinformatics. The goal of this survey is to provide a unifying view of the\nliterature on graph kernels. In particular, we present a comprehensive overview\nof a wide range of graph kernels. Furthermore, we perform an experimental\nevaluation of several of those kernels on publicly available datasets, and\nprovide a comparative study. 
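As an aside to the "In Defense of Pseudo-Labeling" record above: a loose sketch of uncertainty-aware pseudo-label selection; the thresholds and the use of an MC-dropout standard deviation as the uncertainty estimate are illustrative assumptions, not the paper's exact settings.

```python
# Keep a positive pseudo-label only when a prediction is both confident and
# low-uncertainty; mark very unlikely, low-uncertainty classes as negative
# pseudo-labels for negative learning.
import numpy as np

def select_pseudo_labels(probs, uncert, tau_p=0.9, kappa_p=0.05,
                         tau_n=0.01, kappa_n=0.005):
    # probs: (N, C) mean predicted probabilities over stochastic forward passes
    # uncert: (N, C) per-class uncertainty, e.g. std over MC-dropout passes
    positive = (probs >= tau_p) & (uncert <= kappa_p)
    negative = (probs <= tau_n) & (uncert <= kappa_n)
    return positive, negative      # boolean masks of selected pseudo-labels
```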
Finally, we discuss key applications of graph\nkernels, and outline some challenges that remain to be addressed.", "field": [], "task": ["Graph Classification"], "method": [], "dataset": ["PROTEINS", "NCI1"], "metric": ["Accuracy"], "title": "Graph Kernels: A Survey"} {"abstract": "With the advantage of high mobility, Unmanned Aerial Vehicles (UAVs) are used\nto fuel numerous important applications in computer vision, delivering more\nefficiency and convenience than surveillance cameras with fixed camera angle,\nscale and view. However, very limited UAV datasets are proposed, and they focus\nonly on a specific task such as visual tracking or object detection in\nrelatively constrained scenarios. Consequently, it is of great importance to\ndevelop an unconstrained UAV benchmark to boost related researches. In this\npaper, we construct a new UAV benchmark focusing on complex scenarios with new\nlevel challenges. Selected from 10 hours raw videos, about 80,000\nrepresentative frames are fully annotated with bounding boxes as well as up to\n14 kinds of attributes (e.g., weather condition, flying altitude, camera view,\nvehicle category, and occlusion) for three fundamental computer vision tasks:\nobject detection, single object tracking, and multiple object tracking. Then, a\ndetailed quantitative study is performed using most recent state-of-the-art\nalgorithms for each task. Experimental results show that the current\nstate-of-the-art methods perform relative worse on our dataset, due to the new\nchallenges appeared in UAV based real scenes, e.g., high density, small object,\nand camera motion. To our knowledge, our work is the first time to explore such\nissues in unconstrained scenes comprehensively.", "field": [], "task": ["Multiple Object Tracking", "Object Detection", "Object Tracking", "Visual Tracking"], "method": [], "dataset": ["UAVDT"], "metric": ["mAP"], "title": "The Unmanned Aerial Vehicle Benchmark: Object Detection and Tracking"} {"abstract": "Retrieving information from an online search engine, is the first and most important step in many data mining tasks. Most of the search engines currently available on the web, including all social media platforms, are black-boxes (a.k.a opaque) supporting short keyword queries. In these settings, retrieving all posts and comments discussing a particular news item automatically and at large scales is a challenging task. In this paper, we propose a method for generating short keyword queries given a prototype document. The proposed iterative query selection algorithm (IQS) interacts with the opaque search engine to iteratively improve the query. It is evaluated on the Twitter TREC Microblog 2012 and TREC-COVID 2019 datasets showing superior performance compared to state-of-the-art. IQS is applied to automatically collect a large-scale fake news dataset of about 70K true and fake news items. The dataset, publicly available for research, includes more than 22M accounts and 61M tweets in Twitter approved format. We demonstrate the usefulness of the dataset for fake news detection task achieving state-of-the-art performance.", "field": [], "task": ["Fake News Detection"], "method": [], "dataset": ["LIAR"], "metric": ["10%"], "title": "Fake News Data Collection and Classification: Iterative Query Selection for Opaque Search Engines with Pseudo Relevance Feedback"} {"abstract": "Facial aging and facial rejuvenation analyze a given face photograph to\npredict a future look or estimate a past look of the person. 
To achieve this,\nit is critical to preserve human identity and the corresponding aging\nprogression and regression with high accuracy. However, existing methods cannot\nsimultaneously handle these two objectives well. We propose a novel generative\nadversarial network based approach, named the Conditional Multi-Adversarial\nAutoEncoder with Ordinal Regression (CMAAE-OR). It utilizes an age estimation\ntechnique to control the aging accuracy and takes a high-level feature\nrepresentation to preserve personalized identity. Specifically, the face is\nfirst mapped to a latent vector through a convolutional encoder. The latent\nvector is then projected onto the face manifold conditional on the age through\na deconvolutional generator. The latent vector preserves personalized face\nfeatures and the age controls facial aging and rejuvenation. A discriminator\nand an ordinal regression are imposed on the encoder and the generator in\ntandem, making the generated face images to be more photorealistic while\nsimultaneously exhibiting desirable aging effects. Besides, a high-level\nfeature representation is utilized to preserve personalized identity of the\ngenerated face. Experiments on two benchmark datasets demonstrate appealing\nperformance of the proposed method over the state-of-the-art.", "field": [], "task": ["Age Estimation", "Regression"], "method": [], "dataset": ["FGNET", "MORPH"], "metric": ["MAE"], "title": "Facial Aging and Rejuvenation by Conditional Multi-Adversarial Autoencoder with Ordinal Regression"} {"abstract": "The Neural Autoregressive Distribution Estimator (NADE) and its real-valued\nversion RNADE are competitive density models of multidimensional data across a\nvariety of domains. These models use a fixed, arbitrary ordering of the data\ndimensions. One can easily condition on variables at the beginning of the\nordering, and marginalize out variables at the end of the ordering, however\nother inference tasks require approximate inference. In this work we introduce\nan efficient procedure to simultaneously train a NADE model for each possible\nordering of the variables, by sharing parameters across all these models. We\ncan thus use the most convenient model for each inference task at hand, and\nensembles of such models with different orderings are immediately available.\nMoreover, unlike the original NADE, our training procedure scales to deep\nmodels. Empirically, ensembles of Deep NADE models obtain state of the art\ndensity estimation performance.", "field": [], "task": ["Density Estimation", "Image Generation"], "method": [], "dataset": ["Binarized MNIST"], "metric": ["nats"], "title": "A Deep and Tractable Density Estimator"} {"abstract": "In this paper we describe a method to perform sequence-discriminative training of neural network acoustic models without the need for frame-level cross-entropy pre-training. We use the lattice-free version of the maximum mutual information\r\n(MMI) criterion: LF-MMI. To make its computation feasible we use a phone n-gram language model, in place of the word language model. To further reduce its space and time complexity we compute the objective function using neural network outputs at one third the standard frame rate. These changes enable us to perform the computation for the forward-backward algorithm on GPUs. Further the reduced output frame-rate also provides a significant speed-up during decoding.\r\nWe present results on 5 different LVCSR tasks with training data ranging from 100 to 2100 hours. 
Models trained with LFMMI provide a relative word error rate reduction of \u223c11.5%, over those trained with cross-entropy objective function, and \u223c8%, over those trained with cross-entropy and sMBR objective functions. A further reduction of \u223c2.5%, relative, can be obtained by fine tuning these models with the word-lattice based sMBR objective function.", "field": [], "task": ["Language Modelling", "Large Vocabulary Continuous Speech Recognition", "Speech Recognition"], "method": [], "dataset": ["WSJ eval92"], "metric": ["Word Error Rate (WER)"], "title": "Purely sequence-trained neural networks for ASR based on lattice-free MMI"} {"abstract": "Recent findings indicate that over-parametrization, while crucial for\nsuccessfully training deep neural networks, also introduces large amounts of\nredundancy. Tensor methods have the potential to efficiently parametrize\nover-complete representations by leveraging this redundancy. In this paper, we\npropose to fully parametrize Convolutional Neural Networks (CNNs) with a single\nhigh-order, low-rank tensor. Previous works on network tensorization have\nfocused on parametrizing individual layers (convolutional or fully connected)\nonly, and perform the tensorization layer-by-layer separately. In contrast, we\npropose to jointly capture the full structure of a neural network by\nparametrizing it with a single high-order tensor, the modes of which represent\neach of the architectural design parameters of the network (e.g. number of\nconvolutional blocks, depth, number of stacks, input features, etc). This\nparametrization allows to regularize the whole network and drastically reduce\nthe number of parameters. Our model is end-to-end trainable and the low-rank\nstructure imposed on the weight tensor acts as an implicit regularization. We\nstudy the case of networks with rich structure, namely Fully Convolutional\nNetworks (FCNs), which we propose to parametrize with a single 8th-order\ntensor. We show that our approach can achieve superior performance with small\ncompression rates, and attain high compression rates with negligible drop in\naccuracy for the challenging task of human pose estimation.", "field": [], "task": ["Pose Estimation"], "method": [], "dataset": ["MPII Human Pose"], "metric": ["PCKh-0.5"], "title": "T-Net: Parametrizing Fully Convolutional Nets with a Single High-Order Tensor"} {"abstract": "Aggregating extra features has been considered as an effective approach to\nboost traditional pedestrian detection methods. However, there is still a lack\nof studies on whether and how CNN-based pedestrian detectors can benefit from\nthese extra features. The first contribution of this paper is exploring this\nissue by aggregating extra features into CNN-based pedestrian detection\nframework. Through extensive experiments, we evaluate the effects of different\nkinds of extra features quantitatively. Moreover, we propose a novel network\narchitecture, namely HyperLearner, to jointly learn pedestrian detection as\nwell as the given extra feature. By multi-task training, HyperLearner is able\nto utilize the information of given features and improve detection performance\nwithout extra inputs in inference. 
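The HyperLearner record above describes a detector that learns to predict an extra feature channel as an auxiliary task, so the extra channel is only needed during training. A minimal PyTorch-style sketch of that training pattern follows; the tiny backbone, the per-pixel detection head, the single auxiliary channel and the 0.5 loss weight are all placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiTaskDetectorSketch(nn.Module):
    """Shared backbone with a toy detection head and an auxiliary feature head."""
    def __init__(self, in_ch=3, feat_ch=64, aux_ch=1, num_classes=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )
        self.det_head = nn.Conv2d(feat_ch, num_classes, 1)  # per-pixel class scores
        self.aux_head = nn.Conv2d(feat_ch, aux_ch, 1)       # predicts the extra feature map

    def forward(self, x):
        f = self.backbone(x)
        return self.det_head(f), self.aux_head(f)

model = MultiTaskDetectorSketch()
images = torch.randn(2, 3, 64, 64)
det_target = torch.randint(0, 2, (2, 64, 64))   # toy per-pixel labels
aux_target = torch.randn(2, 1, 64, 64)          # e.g. an edge or segmentation channel
det_scores, aux_pred = model(images)
# The auxiliary target is used only in the loss, so inference needs images alone.
loss = nn.CrossEntropyLoss()(det_scores, det_target) + 0.5 * nn.MSELoss()(aux_pred, aux_target)
loss.backward()
```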
The experimental results on multiple\npedestrian benchmarks validate the effectiveness of the proposed HyperLearner.", "field": [], "task": ["Pedestrian Detection"], "method": [], "dataset": ["Caltech"], "metric": ["Reasonable Miss Rate"], "title": "What Can Help Pedestrian Detection?"} {"abstract": "Articulated human pose estimation is a fundamental yet challenging task in\ncomputer vision. The difficulty is particularly pronounced in scale variations\nof human body parts when camera view changes or severe foreshortening happens.\nAlthough pyramid methods are widely used to handle scale changes at inference\ntime, learning feature pyramids in deep convolutional neural networks (DCNNs)\nis still not well explored. In this work, we design a Pyramid Residual Module\n(PRMs) to enhance the invariance in scales of DCNNs. Given input features, the\nPRMs learn convolutional filters on various scales of input features, which are\nobtained with different subsampling ratios in a multi-branch network. Moreover,\nwe observe that it is inappropriate to adopt existing methods to initialize the\nweights of multi-branch networks, which achieve superior performance than plain\nnetworks in many tasks recently. Therefore, we provide theoretic derivation to\nextend the current weight initialization scheme to multi-branch network\nstructures. We investigate our method on two standard benchmarks for human pose\nestimation. Our approach obtains state-of-the-art results on both benchmarks.\nCode is available at https://github.com/bearpaw/PyraNet.", "field": [], "task": ["Pose Estimation"], "method": [], "dataset": ["Leeds Sports Poses", "MPII Human Pose"], "metric": ["PCK", "PCKh-0.5"], "title": "Learning Feature Pyramids for Human Pose Estimation"} {"abstract": "In this paper, we propose to incorporate convolutional neural networks with a\nmulti-context attention mechanism into an end-to-end framework for human pose\nestimation. We adopt stacked hourglass networks to generate attention maps from\nfeatures at multiple resolutions with various semantics. The Conditional Random\nField (CRF) is utilized to model the correlations among neighboring regions in\nthe attention map. We further combine the holistic attention model, which\nfocuses on the global consistency of the full human body, and the body part\nattention model, which focuses on the detailed description for different body\nparts. Hence our model has the ability to focus on different granularity from\nlocal salient regions to global semantic-consistent spaces. Additionally, we\ndesign novel Hourglass Residual Units (HRUs) to increase the receptive field of\nthe network. These units are extensions of residual units with a side branch\nincorporating filters with larger receptive fields, hence features with various\nscales are learned and combined within the HRUs. The effectiveness of the\nproposed multi-context attention mechanism and the hourglass residual units is\nevaluated on two widely used human pose estimation benchmarks. Our approach\noutperforms all existing methods on both benchmarks over all the body parts.", "field": [], "task": ["Pose Estimation"], "method": [], "dataset": ["Leeds Sports Poses", "MPII Human Pose"], "metric": ["PCK", "PCKh-0.5"], "title": "Multi-Context Attention for Human Pose Estimation"} {"abstract": "Random data augmentation is a critical technique to avoid overfitting in\ntraining deep neural network models. 
However, data augmentation and network\ntraining are usually treated as two isolated processes, limiting the\neffectiveness of network training. Why not jointly optimize the two? We propose\nadversarial data augmentation to address this limitation. The main idea is to\ndesign an augmentation network (generator) that competes against a target\nnetwork (discriminator) by generating `hard' augmentation operations online.\nThe augmentation network explores the weaknesses of the target network, while\nthe latter learns from `hard' augmentations to achieve better performance. We\nalso design a reward/penalty strategy for effective joint training. We\ndemonstrate our approach on the problem of human pose estimation and carry out\na comprehensive experimental analysis, showing that our method can\nsignificantly improve state-of-the-art models without additional data efforts.", "field": [], "task": ["Data Augmentation", "Pose Estimation"], "method": [], "dataset": ["Leeds Sports Poses", "MPII Human Pose"], "metric": ["PCK", "PCKh-0.5"], "title": "Jointly Optimize Data Augmentation and Network Training: Adversarial Data Augmentation in Human Pose Estimation"} {"abstract": "Human pose estimation using deep neural networks aims to map input images\nwith large variations into multiple body keypoints which must satisfy a set of\ngeometric constraints and inter-dependency imposed by the human body model.\nThis is a very challenging nonlinear manifold learning process in a very high\ndimensional feature space. We believe that the deep neural network, which is\ninherently an algebraic computation system, is not the most effecient way to\ncapture highly sophisticated human knowledge, for example those highly coupled\ngeometric characteristics and interdependence between keypoints in human poses.\nIn this work, we propose to explore how external knowledge can be effectively\nrepresented and injected into the deep neural networks to guide its training\nprocess using learned projections that impose proper prior. Specifically, we\nuse the stacked hourglass design and inception-resnet module to construct a\nfractal network to regress human pose images into heatmaps with no explicit\ngraphical modeling. We encode external knowledge with visual features which are\nable to characterize the constraints of human body models and evaluate the\nfitness of intermediate network output. We then inject these external features\ninto the neural network using a projection matrix learned using an auxiliary\ncost function. The effectiveness of the proposed inception-resnet module and\nthe benefit in guided learning with knowledge projection is evaluated on two\nwidely used benchmarks. Our approach achieves state-of-the-art performance on\nboth datasets.", "field": [], "task": ["Pose Estimation"], "method": [], "dataset": ["Leeds Sports Poses", "MPII Human Pose"], "metric": ["PCK", "PCKh-0.5"], "title": "Knowledge-Guided Deep Fractal Neural Networks for Human Pose Estimation"} {"abstract": "Existing human pose estimation approaches often only consider how to improve\nthe model generalisation performance, but putting aside the significant\nefficiency problem. This leads to the development of heavy models with poor\nscalability and cost-effectiveness in practical use. In this work, we\ninvestigate the under-studied but practically critical pose model efficiency\nproblem. To this end, we present a new Fast Pose Distillation (FPD) model\nlearning strategy. 
Specifically, the FPD trains a lightweight pose neural\nnetwork architecture capable of executing rapidly with low computational cost.\nIt is achieved by effectively transferring the pose structure knowledge of a\nstrong teacher network. Extensive evaluations demonstrate the advantages of our\nFPD method over a broad range of state-of-the-art pose estimation approaches in\nterms of model cost-effectiveness on two standard benchmark datasets, MPII\nHuman Pose and Leeds Sports Pose.", "field": [], "task": ["Pose Estimation"], "method": [], "dataset": ["Leeds Sports Poses", "MPII Human Pose"], "metric": ["PCK", "PCKh-0.5"], "title": "Fast Human Pose Estimation"} {"abstract": "In this paper we consider the problem of human pose estimation from a single\nstill image. We propose a novel approach where each location in the image votes\nfor the position of each keypoint using a convolutional neural net. The voting\nscheme allows us to utilize information from the whole image, rather than rely\non a sparse set of keypoint locations. Using dense, multi-target votes, not\nonly produces good keypoint predictions, but also enables us to compute\nimage-dependent joint keypoint probabilities by looking at consensus voting.\nThis differs from most previous methods where joint probabilities are learned\nfrom relative keypoint locations and are independent of the image. We finally\ncombine the keypoints votes and joint probabilities in order to identify the\noptimal pose configuration. We show our competitive performance on the MPII\nHuman Pose and Leeds Sports Pose datasets.", "field": [], "task": ["Pose Estimation"], "method": [], "dataset": ["MPII Human Pose"], "metric": ["PCKh-0.5"], "title": "Human Pose Estimation using Deep Consensus Voting"} {"abstract": "End-to-end automatic speech recognition (ASR) models with a single neural network have recently demonstrated state-of-the-art results compared to conventional hybrid speech recognizers. Specifically, recurrent neural network transducer (RNN-T) has shown competitive ASR performance on various benchmarks. In this work, we examine ways in which RNN-T can achieve better ASR accuracy via performing auxiliary tasks. We propose (i) using the same auxiliary task as primary RNN-T ASR task, and (ii) performing context-dependent graphemic state prediction as in conventional hybrid modeling. In transcribing social media videos with varying training data size, we first evaluate the streaming ASR performance on three languages: Romanian, Turkish and German. We find that both proposed methods provide consistent improvements. Next, we observe that both auxiliary tasks demonstrate efficacy in learning deep transformer encoders for RNN-T criterion, thus achieving competitive results - 2.0%/4.2% WER on LibriSpeech test-clean/other - as compared to prior top performing models.", "field": [], "task": ["Speech Recognition"], "method": [], "dataset": ["LibriSpeech test-other", "LibriSpeech test-clean"], "metric": ["Word Error Rate (WER)"], "title": "Improving RNN Transducer Based ASR with Auxiliary Tasks"} {"abstract": "Joint segmentation and classification of fine-grained actions is important\nfor applications of human-robot interaction, video surveillance, and human\nskill evaluation. However, despite substantial recent progress in large-scale\naction classification, the performance of state-of-the-art fine-grained action\nrecognition approaches remains low. 
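The Fast Pose Distillation (FPD) record above hinges on transferring heatmap knowledge from a strong teacher to a lightweight student. A minimal sketch of such a distillation loss is shown below, assuming both networks emit per-joint heatmaps; the plain MSE mimicry term and the 0.5 blending weight are illustrative choices rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def pose_distillation_loss(student_heatmaps, teacher_heatmaps, gt_heatmaps, alpha=0.5):
    """Blend ground-truth supervision with teacher mimicry on joint heatmaps."""
    gt_loss = F.mse_loss(student_heatmaps, gt_heatmaps)
    kd_loss = F.mse_loss(student_heatmaps, teacher_heatmaps.detach())  # teacher is frozen
    return alpha * gt_loss + (1.0 - alpha) * kd_loss

# Toy shapes: batch of 4, 16 joints, 64x64 heatmaps.
student = torch.randn(4, 16, 64, 64, requires_grad=True)
teacher = torch.randn(4, 16, 64, 64)
target = torch.randn(4, 16, 64, 64)
pose_distillation_loss(student, teacher, target).backward()
```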
We propose a model for action segmentation\nwhich combines low-level spatiotemporal features with a high-level segmental\nclassifier. Our spatiotemporal CNN is comprised of a spatial component that\nuses convolutional filters to capture information about objects and their\nrelationships, and a temporal component that uses large 1D convolutional\nfilters to capture information about how object relationships change across\ntime. These features are used in tandem with a semi-Markov model that models\ntransitions from one action to another. We introduce an efficient constrained\nsegmental inference algorithm for this model that is orders of magnitude faster\nthan the current approach. We highlight the effectiveness of our Segmental\nSpatiotemporal CNN on cooking and surgical action datasets for which we observe\nsubstantially improved performance relative to recent baseline methods.", "field": [], "task": ["Action Classification", "Action Classification ", "Action Recognition", "Action Segmentation", "Fine-grained Action Recognition", "Human robot interaction", "Temporal Action Localization"], "method": [], "dataset": ["GTEA"], "metric": ["Acc", "Edit", "F1@10%", "F1@25%", "F1@50%"], "title": "Segmental Spatiotemporal CNNs for Fine-grained Action Segmentation"} {"abstract": "The design of complexity-aware cascaded detectors, combining features of very\ndifferent complexities, is considered. A new cascade design procedure is\nintroduced, by formulating cascade learning as the Lagrangian optimization of a\nrisk that accounts for both accuracy and complexity. A boosting algorithm,\ndenoted as complexity aware cascade training (CompACT), is then derived to\nsolve this optimization. CompACT cascades are shown to seek an optimal\ntrade-off between accuracy and complexity by pushing features of higher\ncomplexity to the later cascade stages, where only a few difficult candidate\npatches remain to be classified. This enables the use of features of vastly\ndifferent complexities in a single detector. In result, the feature pool can be\nexpanded to features previously impractical for cascade design, such as the\nresponses of a deep convolutional neural network (CNN). This is demonstrated\nthrough the design of a pedestrian detector with a pool of features whose\ncomplexities span orders of magnitude. The resulting cascade generalizes the\ncombination of a CNN with an object proposal mechanism: rather than a\npre-processing stage, CompACT cascades seamlessly integrate CNNs in their\nstages. This enables state of the art performance on the Caltech and KITTI\ndatasets, at fairly fast speeds.", "field": [], "task": ["Pedestrian Detection"], "method": [], "dataset": ["Caltech"], "metric": ["Reasonable Miss Rate"], "title": "Learning Complexity-Aware Cascades for Deep Pedestrian Detection"} {"abstract": "Even with the advent of more sophisticated, data-hungry methods, boosted decision trees remain extraordinarily successful for fast rigid object detection, achieving top accuracy on numerous datasets. While effective, most boosted detectors use decision trees with orthogonal (single feature) splits, and the topology of the resulting decision boundary may not be well matched to the natural topology of the data. Given highly correlated data, decision trees with oblique (multiple feature) splits can be effective. Use of oblique splits, however, comes at considerable computational expense. 
Inspired by recent work on discriminative decorrelation of HOG features, we instead propose an efficient feature transform that removes correlations in local neighborhoods. The result is an overcomplete but locally decorrelated representation ideally suited for use with orthogonal decision trees. In fact, orthogonal trees with our locally decorrelated features outperform oblique trees trained over the original features at a fraction of the computational cost. The overall improvement in accuracy is dramatic: on the Caltech Pedestrian Dataset, we reduce false positives nearly tenfold over the previous state-of-the-art.", "field": [], "task": ["Object Detection", "Pedestrian Detection"], "method": [], "dataset": ["Caltech"], "metric": ["Reasonable Miss Rate"], "title": "Local Decorrelation For Improved Pedestrian Detection"} {"abstract": "The perception system in autonomous vehicles is responsible for detecting and tracking the surrounding objects. This is usually done by taking advantage of several sensing modalities to increase robustness and accuracy, which makes sensor fusion a crucial part of the perception system. In this paper, we focus on the problem of radar and camera sensor fusion and propose a middle-fusion approach to exploit both radar and camera data for 3D object detection. Our approach, called CenterFusion, first uses a center point detection network to detect objects by identifying their center points on the image. It then solves the key data association problem using a novel frustum-based method to associate the radar detections to their corresponding object's center point. The associated radar detections are used to generate radar-based feature maps to complement the image features, and regress to object properties such as depth, rotation and velocity. We evaluate CenterFusion on the challenging nuScenes dataset, where it improves the overall nuScenes Detection Score (NDS) of the state-of-the-art camera-based algorithm by more than 12%. We further show that CenterFusion significantly improves the velocity estimation accuracy without using any additional temporal information. The code is available at https://github.com/mrnabati/CenterFusion .", "field": [], "task": ["3D Object Detection", "Autonomous Vehicles", "Object Detection", "Sensor Fusion"], "method": [], "dataset": ["nuScenes"], "metric": ["mAP", "NDS"], "title": "CenterFusion: Center-based Radar and Camera Fusion for 3D Object Detection"} {"abstract": "Generative adversarial networks (GANs) are a framework for producing a\ngenerative model by way of a two-player minimax game. In this paper, we propose\nthe \\emph{Generative Multi-Adversarial Network} (GMAN), a framework that\nextends GANs to multiple discriminators. In previous work, the successful\ntraining of GANs requires modifying the minimax objective to accelerate\ntraining early on. In contrast, GMAN can be reliably trained with the original,\nuntampered objective. 
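The locally decorrelated channel features record above builds on removing correlations in local neighborhoods so that cheap orthogonal-split trees behave like oblique ones. The snippet below is an illustrative ZCA-style version of that idea for a single-channel feature map: it estimates a patch covariance, derives a whitening transform, and applies it to every patch. The patch size, the regulariser `eps` and the dense patch extraction are assumptions of the sketch, not the paper's exact filter construction.

```python
import numpy as np

def fit_local_whitener(feature_map, patch=5, eps=1e-3):
    """Estimate a ZCA whitening transform from all local patches of a 2D feature map."""
    H, W = feature_map.shape
    patches = np.stack([
        feature_map[i:i + patch, j:j + patch].ravel()
        for i in range(H - patch + 1)
        for j in range(W - patch + 1)
    ])                                            # (num_patches, patch * patch)
    patches -= patches.mean(axis=0, keepdims=True)
    cov = patches.T @ patches / len(patches)
    vals, vecs = np.linalg.eigh(cov)
    return vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T   # ZCA matrix

def decorrelate_patches(feature_map, zca, patch=5):
    """Return decorrelated descriptors for every patch location."""
    H, W = feature_map.shape
    out = [
        zca @ feature_map[i:i + patch, j:j + patch].ravel()
        for i in range(H - patch + 1)
        for j in range(W - patch + 1)
    ]
    return np.stack(out)

fmap = np.random.rand(32, 32)
zca = fit_local_whitener(fmap)
descriptors = decorrelate_patches(fmap, zca)
print(descriptors.shape)   # (784, 25)
```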
We explore a number of design perspectives with the\ndiscriminator role ranging from formidable adversary to forgiving teacher.\nImage generation tasks comparing the proposed framework to standard GANs\ndemonstrate GMAN produces higher quality samples in a fraction of the\niterations when measured by a pairwise GAM-type metric.", "field": [], "task": ["Image Generation"], "method": [], "dataset": ["CIFAR-10"], "metric": ["Inception score"], "title": "Generative Multi-Adversarial Networks"} {"abstract": "Studies have shown that a dominant class of questions asked by visually impaired users on images of their surroundings involves reading text in the image. But today's VQA models can not read! Our paper takes a first step towards addressing this problem. First, we introduce a new \"TextVQA\" dataset to facilitate progress on this important problem. Existing datasets either have a small proportion of questions about text (e.g., the VQA dataset) or are too small (e.g., the VizWiz dataset). TextVQA contains 45,336 questions on 28,408 images that require reasoning about text to answer. Second, we introduce a novel model architecture that reads text in the image, reasons about it in the context of the image and the question, and predicts an answer which might be a deduction based on the text and the image or composed of the strings found in the image. Consequently, we call our approach Look, Read, Reason & Answer (LoRRA). We show that LoRRA outperforms existing state-of-the-art VQA models on our TextVQA dataset. We find that the gap between human performance and machine performance is significantly larger on TextVQA than on VQA 2.0, suggesting that TextVQA is well-suited to benchmark progress along directions complementary to VQA 2.0.", "field": [], "task": ["Visual Question Answering"], "method": [], "dataset": ["VizWiz 2018", "VQA v2 test-dev"], "metric": ["overall", "Accuracy"], "title": "Towards VQA Models That Can Read"} {"abstract": "Human conversation is a complex mechanism with subtle nuances. It is hence an\nambitious goal to develop artificial intelligence agents that can participate\nfluently in a conversation. While we are still far from achieving this goal,\nrecent progress in visual question answering, image captioning, and visual\nquestion generation shows that dialog systems may be realizable in the not too\ndistant future. To this end, a novel dataset was introduced recently and\nencouraging results were demonstrated, particularly for question answering. In\nthis paper, we demonstrate a simple symmetric discriminative baseline, that can\nbe applied to both predicting an answer as well as predicting a question. We\nshow that this method performs on par with the state of the art, even memory\nnet based methods. In addition, for the first time on the visual dialog\ndataset, we assess the performance of a system asking questions, and\ndemonstrate how visual dialog can be generated from discriminative question\ngeneration and question answering.", "field": [], "task": ["Image Captioning", "Question Answering", "Question Generation", "Visual Dialog", "Visual Question Answering"], "method": [], "dataset": ["VisDial v0.9 val"], "metric": ["R@10", "R@5", "Mean Rank", "MRR", "R@1"], "title": "Two can play this Game: Visual Dialog with Discriminative Question Generation and Answering"} {"abstract": "We propose the Temporal Point Cloud Networks (TPCN), a novel and flexible framework with joint spatial and temporal learning for trajectory prediction. 
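The Generative Multi-Adversarial Network record above varies the discriminator role between a forgiving teacher and a formidable adversary. One common way to realise that spectrum, sketched below as an assumption rather than the paper's exact objective, is to combine the per-discriminator generator losses with a softmax weighting whose temperature interpolates between the mean and the max.

```python
import torch

def aggregate_discriminator_losses(losses, temperature=1.0):
    """Softmax-weighted combination of per-discriminator generator losses.

    temperature near 0 approaches the mean over discriminators (forgiving);
    large temperatures approach the max (a formidable adversary).
    """
    losses = torch.stack(losses)                                  # (num_discriminators,)
    weights = torch.softmax(temperature * losses.detach(), dim=0)  # weights carry no gradient
    return (weights * losses).sum()

# Toy usage with three discriminators judging the same generator batch.
per_d_losses = [torch.tensor(0.7, requires_grad=True),
                torch.tensor(1.2, requires_grad=True),
                torch.tensor(0.4, requires_grad=True)]
aggregate_discriminator_losses(per_d_losses, temperature=2.0).backward()
```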
Unlike existing approaches that rasterize agents and map information as 2D images or operate in a graph representation, our approach extends ideas from point cloud learning with dynamic temporal learning to capture both spatial and temporal information by splitting trajectory prediction into both spatial and temporal dimensions. In the spatial dimension, agents can be viewed as an unordered point set, and thus it is straightforward to apply point cloud learning techniques to model agents' locations. While the spatial dimension does not take kinematic and motion information into account, we further propose dynamic temporal learning to model agents' motion over time. Experiments on the Argoverse motion forecasting benchmark show that our approach achieves the state-of-the-art results.", "field": [], "task": ["Motion Forecasting", "Trajectory Prediction"], "method": [], "dataset": ["Argoverse CVPR 2020"], "metric": ["p-minADE (K=6)", "MR (K=1)", "DAC (K=6)", "DAC (K=1)", "minFDE (K=6)", "minADE (K=1)", "MR (K=6)", "minADE (K=6)", "minFDE (K=1)", "p-minFDE (K=6)"], "title": "TPCN: Temporal Point Cloud Networks for Motion Forecasting"} {"abstract": "Deep learning approaches to 3D shape segmentation are typically formulated as\na multi-class labeling problem. Existing models are trained for a fixed set of\nlabels, which greatly limits their flexibility and adaptivity. We opt for\ntop-down recursive decomposition and develop the first deep learning model for\nhierarchical segmentation of 3D shapes, based on recursive neural networks.\nStarting from a full shape represented as a point cloud, our model performs\nrecursive binary decomposition, where the decomposition network at all nodes in\nthe hierarchy share weights. At each node, a node classifier is trained to\ndetermine the type (adjacency or symmetry) and stopping criteria of its\ndecomposition. The features extracted in higher level nodes are recursively\npropagated to lower level ones. Thus, the meaningful decompositions in higher\nlevels provide strong contextual cues constraining the segmentations in lower\nlevels. Meanwhile, to increase the segmentation accuracy at each node, we\nenhance the recursive contextual feature with the shape feature extracted for\nthe corresponding part. Our method segments a 3D shape in point cloud into an\nunfixed number of parts, depending on the shape complexity, showing strong\ngenerality and flexibility. It achieves the state-of-the-art performance, both\nfor fine-grained and semantic segmentation, on the public benchmark and a new\nbenchmark of fine-grained segmentation proposed in this work. We also\ndemonstrate its application for fine-grained part refinements in image-to-shape\nreconstruction.", "field": [], "task": ["3D Instance Segmentation", "3D Part Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["ShapeNet-Part", "S3DIS"], "metric": ["Class Average IoU", "mRec"], "title": "PartNet: A Recursive Part Decomposition Network for Fine-grained and Hierarchical Shape Segmentation"} {"abstract": "Visual dialog (VisDial) is a task which requires an AI agent to answer a series of questions grounded in an image. Unlike in visual question answering (VQA), the series of questions should be able to capture a temporal context from a dialog history and exploit visually-grounded information. A problem called visual reference resolution involves these challenges, requiring the agent to resolve ambiguous references in a given question and find the references in a given image. 
In this paper, we propose Dual Attention Networks (DAN) for visual reference resolution. DAN consists of two kinds of attention networks, REFER and FIND. Specifically, REFER module learns latent relationships between a given question and a dialog history by employing a self-attention mechanism. FIND module takes image features and reference-aware representations (i.e., the output of REFER module) as input, and performs visual grounding via bottom-up attention mechanism. We qualitatively and quantitatively evaluate our model on VisDial v1.0 and v0.9 datasets, showing that DAN outperforms the previous state-of-the-art model by a significant margin.", "field": [], "task": ["Question Answering", "Visual Dialog", "Visual Grounding", "Visual Question Answering"], "method": [], "dataset": ["Visual Dialog v1.0 test-std", "VisDial v0.9 val"], "metric": ["MRR (x 100)", "R@10", "NDCG (x 100)", "R@5", "Mean Rank", "MRR", "Mean", "R@1"], "title": "Dual Attention Networks for Visual Reference Resolution in Visual Dialog"} {"abstract": "Solving grounded language tasks often requires reasoning about relationships between objects in the context of a given task. For example, to answer the question \"What color is the mug on the plate?\" we must check the color of the specific mug that satisfies the \"on\" relationship with respect to the plate. Recent work has proposed various methods capable of complex relational reasoning. However, most of their power is in the inference structure, while the scene is represented with simple local appearance features. In this paper, we take an alternate approach and build contextualized representations for objects in a visual scene to support relational reasoning. We propose a general framework of Language-Conditioned Graph Networks (LCGN), where each node represents an object, and is described by a context-aware representation from related objects through iterative message passing conditioned on the textual input. E.g., conditioning on the \"on\" relationship to the plate, the object \"mug\" gathers messages from the object \"plate\" to update its representation to \"mug on the plate\", which can be easily consumed by a simple classifier for answer prediction. We experimentally show that our LCGN approach effectively supports relational reasoning and improves performance across several tasks and datasets. Our code is available at http://ronghanghu.com/lcgn.", "field": [], "task": ["Referring Expression Comprehension", "Relational Reasoning", "Visual Question Answering"], "method": [], "dataset": ["GQA test-dev", "GQA test-std", "CLEVR"], "metric": ["Accuracy"], "title": "Language-Conditioned Graph Networks for Relational Reasoning"} {"abstract": "Knowledge graphs (KGs) have become popular structures for unifying real-world entities by modelling the relationships between them and their attributes. Entity alignment -- the task of identifying corresponding entities across different KGs -- has attracted a great deal of attention in both academia and industry. However, existing alignment techniques often require large amounts of labelled data, are unable to encode multi-modal data simultaneously, and enforce only few consistency constraints. In this paper, we propose an end-to-end, unsupervised entity alignment framework for cross-lingual KGs that fuses different types of information in order to fully exploit the richness of KG data. 
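The Dual Attention Networks record above resolves references by letting the current question attend over the dialog history (the REFER module). The sketch below shows the underlying scaled dot-product attention step in plain NumPy; the 128-dimensional toy vectors and the concatenation of question and attended context are illustrative assumptions, not the paper's full module.

```python
import numpy as np

def attend_history(question_vec, history_vecs):
    """Scaled dot-product attention: the question attends over dialog-history vectors."""
    d = question_vec.shape[-1]
    scores = history_vecs @ question_vec / np.sqrt(d)   # one score per previous round
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                            # softmax over rounds
    context = weights @ history_vecs                    # reference-aware history summary
    return np.concatenate([question_vec, context]), weights

question = np.random.rand(128)
history = np.random.rand(5, 128)          # five previous dialog rounds
representation, attention = attend_history(question, history)
print(representation.shape, attention.round(2))   # (256,) plus a distribution over rounds
```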
The model captures the relation-based correlation between entities by using a multi-order graph convolutional neural (GCN) model that is designed to satisfy the consistency constraints, while incorporating the attribute-based correlation via a translation machine. We adopt a late-fusion mechanism to combine all the information together, which allows these approaches to complement each other and thus enhances the final alignment result, and makes the model more robust to consistency violations. Empirical results show that our model is more accurate and orders of magnitude faster than existing baselines. We also demonstrate its sensitivity to hyper-parameters, effort saving in terms of labelling, and the robustness against adversarial conditions.", "field": [], "task": ["Entity Alignment", "Knowledge Graphs"], "method": [], "dataset": ["DBP15k zh-en", "dbp15k fr-en", "dbp15k ja-en"], "metric": ["Hits@1"], "title": "Entity Alignment for Knowledge Graphs with Multi-order Convolutional Networks"} {"abstract": "Reasoning about human motion is an important prerequisite to safe and socially-aware robotic navigation. As a result, multi-agent behavior prediction has become a core component of modern human-robot interactive systems, such as self-driving cars. While there exist many methods for trajectory forecasting, most do not enforce dynamic constraints and do not account for environmental information (e.g., maps). Towards this end, we present Trajectron++, a modular, graph-structured recurrent model that forecasts the trajectories of a general number of diverse agents while incorporating agent dynamics and heterogeneous data (e.g., semantic maps). Trajectron++ is designed to be tightly integrated with robotic planning and control frameworks; for example, it can produce predictions that are optionally conditioned on ego-agent motion plans. We demonstrate its performance on several challenging real-world trajectory forecasting datasets, outperforming a wide array of state-of-the-art deterministic and generative methods.", "field": [], "task": ["Motion Forecasting", "Self-Driving Cars", "Trajectory Forecasting", "Trajectory Prediction"], "method": [], "dataset": ["nuScenes"], "metric": ["MinADE_10", "MissRateTopK_2_10", "MinADE_5", "MissRateTopK_2_5", "MinFDE_1", "OffRoadRate"], "title": "Trajectron++: Dynamically-Feasible Trajectory Forecasting With Heterogeneous Data"} {"abstract": "Question Answering (QA) systems are used to provide proper responses to users' questions automatically. Sentence matching is an essential task in the QA systems and is usually reformulated as a Paraphrase Identification (PI) problem. Given a question, the aim of the task is to find the most similar question from a QA knowledge base. In this paper, we propose a Multi-task Sentence Encoding Model (MSEM) for the PI problem, wherein a connected graph is employed to depict the relation between sentences, and a multi-task learning model is applied to address both the sentence matching and sentence intent classification problem. In addition, we implement a general semantic retrieval framework that combines our proposed model and the Approximate Nearest Neighbor (ANN) technology, which enables us to find the most similar question from all available candidates very quickly during online serving. 
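The semantic retrieval record above pairs a sentence encoder with an Approximate Nearest Neighbor index for online question matching. The sketch below shows only the serving pattern: a deterministic hashing trick stands in for the trained multi-task encoder, and scikit-learn's exact `NearestNeighbors` stands in for a production ANN library such as FAISS or Annoy.

```python
import zlib
import numpy as np
from sklearn.neighbors import NearestNeighbors

def encode(texts, dim=64):
    """Stand-in sentence encoder: deterministic hashed bag-of-words vectors.

    In the described system this would be the trained multi-task sentence encoder.
    """
    vecs = np.zeros((len(texts), dim))
    for i, text in enumerate(texts):
        for word in text.lower().split():
            rng = np.random.RandomState(zlib.crc32(word.encode()))  # one fixed vector per word
            vecs[i] += rng.randn(dim)
    norms = np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-9
    return vecs / norms

knowledge_base = [
    "how do i reset my password",
    "what is the refund policy",
    "how can i change my password",
]
index = NearestNeighbors(n_neighbors=2, metric="cosine").fit(encode(knowledge_base))
distances, indices = index.kneighbors(encode(["forgot my password, how to reset it"]))
print([knowledge_base[i] for i in indices[0]])   # the two closest stored questions
```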
The experiments show the superiority of our proposed method as compared with the existing sentence matching models.", "field": [], "task": ["Intent Classification", "Multi-Task Learning", "Paraphrase Identification", "Question Answering", "Semantic Retrieval"], "method": [], "dataset": ["Quora Question Pairs"], "metric": ["Accuracy"], "title": "Multi-task Sentence Encoding Model for Semantic Retrieval in Question Answering Systems"} {"abstract": "Most multiple people tracking systems compute trajectories based on the tracking-by-detection paradigm. Consequently, the performance depends to a large extent on the quality of the employed input detections. However, despite enormous progress in recent years, partially occluded people are still often not recognized. Also, many correct detections are mistakenly discarded when the non-maximum suppression is performed. Improving the tracking performance thus requires augmenting the coarse input. Well-suited for this task are fine-grained body joint detections, as they allow locating even strongly occluded persons.\r\nThus in this work, we analyze the suitability of including joint detections for multiple people tracking. We introduce different affinities between the two detection types and evaluate their performances. Tracking is then performed within a near-online framework based on a min cost graph labeling formulation. As a result, our framework can recover heavily occluded persons and solve the data association efficiently. We evaluate our framework on the MOT16/17 benchmark. Experimental results demonstrate that our framework achieves state-of-the-art results.", "field": [], "task": ["Multi-Object Tracking", "Multiple People Tracking"], "method": [], "dataset": ["MOT17"], "metric": ["MOTA"], "title": "Multiple People Tracking using Body and Joint Detections"} {"abstract": "We present a solution to the problem of paraphrase identification of\nquestions. We focus on a recent dataset of question pairs annotated with binary\nparaphrase labels and show that a variant of the decomposable attention model\n(Parikh et al., 2016) results in accurate performance on this task, while being\nfar simpler than many competing neural architectures. Furthermore, when the\nmodel is pretrained on a noisy dataset of automatically collected question\nparaphrases, it obtains the best reported performance on the dataset.", "field": [], "task": ["Paraphrase Identification"], "method": [], "dataset": ["Quora Question Pairs"], "metric": ["Accuracy"], "title": "Neural Paraphrase Identification of Questions with Noisy Pretraining"} {"abstract": "Supervised learning techniques are at the center of many tasks in remote sensing. Unfortunately, these methods, especially recent deep learning methods, often require large amounts of labeled data for training. Even though satellites acquire large amounts of data, labeling the data is often tedious, expensive and requires expert knowledge. Hence, improved methods that require fewer labeled samples are needed. We present MSMatch, the first semi-supervised learning approach competitive with supervised methods on scene classification on the EuroSAT benchmark dataset. We test both RGB and multispectral images and perform various ablation studies to identify the critical parts of the model. The trained neural network achieves state-of-the-art results on EuroSAT with an accuracy that is between 1.98% and 19.76% better than previous methods depending on the number of labeled training examples. 
With just five labeled examples per class we reach 94.53% and 95.86% accuracy on the EuroSAT RGB and multispectral datasets, respectively. With 50 labels per class we reach 97.62% and 98.23% accuracy. Our results show that MSMatch is capable of greatly reducing the requirements for labeled data. It translates well to multispectral data and should enable various applications that are currently infeasible due to a lack of labeled data. We provide the source code of MSMatch online to enable easy reproduction and quick adoption.", "field": [], "task": [], "method": [], "dataset": ["EuroSAT"], "metric": ["Accuracy (%)"], "title": "MSMatch: Semi-Supervised Multispectral Scene Classification with Few Labels"} {"abstract": "Recently, deep neural networks (DNNs) have been successfully used for speech enhancement, and DNN-based speech enhancement is becoming an attractive research area. While time-frequency masking based on the short-time Fourier transform (STFT) has been widely used for DNN-based speech enhancement over the last years, time domain methods such as the time-domain audio separation network (TasNet) have also been proposed. The most suitable method depends on the scale of the dataset and the type of task. In this paper, we explore the best speech enhancement algorithm on two different datasets. We propose a STFT-based method and a loss function using problem-agnostic speech encoder (PASE) features to improve subjective quality for the smaller dataset. Our proposed methods are effective on the Voice Bank + DEMAND dataset and compare favorably to other state-of-the-art methods. We also implement a low-latency version of TasNet, which we submitted to the DSN Challenge and made public by open-sourcing it. Our model achieves excellent performance on the DNS Challenge dataset.", "field": [], "task": ["Speech Dereverberation", "Speech Enhancement"], "method": [], "dataset": ["Deep Noise Suppression (DNS) Challenge"], "metric": ["\u0394PESQ", "PESQ-WB", "PESQ"], "title": "Exploring the Best Loss Function for DNN-Based Low-latency Speech Enhancement with Temporal Convolutional Networks"} {"abstract": "We have created a large diverse set of cars from overhead images, which are\nuseful for training a deep learner to binary classify, detect and count them.\nThe dataset and all related material will be made publically available. The set\ncontains contextual matter to aid in identification of difficult targets. We\ndemonstrate classification and detection on this dataset using a neural network\nwe call ResCeption. This network combines residual learning with\nInception-style layers and is used to count cars in one look. This is a new way\nto count objects rather than by localization or density estimation. It is\nfairly accurate, fast and easy to implement. Additionally, the counting method\nis not car or scene specific. It would be easy to train this method to count\nother kinds of objects and counting over new scenes requires no extra set up or\nassumptions about object locations.", "field": [], "task": ["Density Estimation"], "method": [], "dataset": ["CARPK"], "metric": ["MAE", "RMSE"], "title": "A Large Contextual Dataset for Classification, Detection and Counting of Cars with Deep Learning"} {"abstract": "Semantic part localization can facilitate fine-grained categorization by\nexplicitly isolating subtle appearance differences associated with specific\nobject parts. 
Methods for pose-normalized representations have been proposed,\nbut generally presume bounding box annotations at test time due to the\ndifficulty of object detection. We propose a model for fine-grained\ncategorization that overcomes these limitations by leveraging deep\nconvolutional features computed on bottom-up region proposals. Our method\nlearns whole-object and part detectors, enforces learned geometric constraints\nbetween them, and predicts a fine-grained category from a pose-normalized\nrepresentation. Experiments on the Caltech-UCSD bird dataset confirm that our\nmethod outperforms state-of-the-art fine-grained categorization methods in an\nend-to-end evaluation without requiring a bounding box at test time.", "field": [], "task": ["Fine-Grained Image Classification", "Object Detection"], "method": [], "dataset": [" CUB-200-2011"], "metric": ["Accuracy"], "title": "Part-based R-CNNs for Fine-grained Category Detection"} {"abstract": "We propose a novel model to address the task of Visual Dialog which exhibits complex dialog structures. To obtain a reasonable answer based on the current question and the dialog history, the underlying semantic dependencies between dialog entities are essential. In this paper, we explicitly formalize this task as inference in a graphical model with partially observed nodes and unknown graph structures (relations in dialog). The given dialog entities are viewed as the observed nodes. The answer to a given question is represented by a node with missing value. We first introduce an Expectation Maximization algorithm to infer both the underlying dialog structures and the missing node values (desired answers). Based on this, we proceed to propose a differentiable graph neural network (GNN) solution that approximates this process. Experiment results on the VisDial and VisDial-Q datasets show that our model outperforms comparative methods. It is also observed that our method can infer the underlying dialog structure for better dialog reasoning.", "field": [], "task": ["Visual Dialog"], "method": [], "dataset": ["Visual Dialog v1.0 test-std", "VisDial v0.9 val"], "metric": ["MRR (x 100)", "R@10", "NDCG (x 100)", "R@5", "Mean Rank", "MRR", "Mean", "R@1"], "title": "Reasoning Visual Dialogs with Structural and Partial Observations"} {"abstract": "Deep neural networks have been exhibiting splendid accuracies in many of\nvisual pattern classification problems. Many of the state-of-the-art methods\nemploy a technique known as data augmentation at the training stage. This paper\naddresses an issue of decision rule for classifiers trained with augmented\ndata. Our method is named as APAC: the Augmented PAttern Classification, which\nis a way of classification using the optimal decision rule for augmented data\nlearning. Discussion of methods of data augmentation is not our primary focus.\nWe show clear evidences that APAC gives far better generalization performance\nthan the traditional way of class prediction in several experiments. Our\nconvolutional neural network model with APAC achieved a state-of-the-art\naccuracy on the MNIST dataset among non-ensemble classifiers. 
Even our\nmultilayer perceptron model beats some of the convolutional models with\nrecently invented stochastic regularization techniques on the CIFAR-10 dataset.", "field": [], "task": ["Data Augmentation", "Image Classification"], "method": [], "dataset": ["MNIST", "CIFAR-10"], "metric": ["Percentage error", "Percentage correct"], "title": "APAC: Augmented PAttern Classification with Neural Networks"} {"abstract": "In recent years, various shadow detection methods from a single image have\nbeen proposed and used in vision systems; however, most of them are not\nappropriate for the robotic applications due to the expensive time complexity.\nThis paper introduces a fast shadow detection method using a deep learning\nframework, with a time cost that is appropriate for robotic applications. In\nour solution, we first obtain a shadow prior map with the help of multi-class\nsupport vector machine using statistical features. Then, we use a semantic-\naware patch-level Convolutional Neural Network that efficiently trains on\nshadow examples by combining the original image and the shadow prior map.\nExperiments on benchmark datasets demonstrate the proposed method significantly\ndecreases the time complexity of shadow detection, by one or two orders of\nmagnitude compared with state-of-the-art methods, without losing accuracy.", "field": [], "task": ["Shadow Detection"], "method": [], "dataset": ["SBU"], "metric": ["BER"], "title": "Fast Shadow Detection from a Single Image Using a Patched Convolutional Neural Network"} {"abstract": "An important goal in visual recognition is to devise image representations\nthat are invariant to particular transformations. In this paper, we address\nthis goal with a new type of convolutional neural network (CNN) whose\ninvariance is encoded by a reproducing kernel. Unlike traditional approaches\nwhere neural networks are learned either to represent data or for solving a\nclassification task, our network learns to approximate the kernel feature map\non training data. Such an approach enjoys several benefits over classical ones.\nFirst, by teaching CNNs to be invariant, we obtain simple network architectures\nthat achieve a similar accuracy to more complex ones, while being easy to train\nand robust to overfitting. Second, we bridge a gap between the neural network\nliterature and kernels, which are natural tools to model invariance. We\nevaluate our methodology on visual recognition tasks where CNNs have proven to\nperform well, e.g., digit recognition with the MNIST dataset, and the more\nchallenging CIFAR-10 and STL-10 datasets, where our accuracy is competitive\nwith the state of the art.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["STL-10", "MNIST", "CIFAR-10"], "metric": ["Percentage error", "Percentage correct"], "title": "Convolutional Kernel Networks"} {"abstract": "In this paper, we investigate the problem of learning feature representation\nfrom unlabeled data using a single-layer K-means network. A K-means network\nmaps the input data into a feature representation by finding the nearest\ncentroid for each input point, which has attracted researchers' great attention\nrecently due to its simplicity, effectiveness, and scalability. However, one\ndrawback of this feature mapping is that it tends to be unreliable when the\ntraining data contains noise. 
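The APAC record above argues for a decision rule that aggregates predictions over augmented copies of the input rather than classifying the raw image once. A generic sketch of that test-time pattern is given below; the toy two-class scorer and the specific flips and shifts are placeholders, and averaging log-probabilities is one reasonable reading of the rule rather than the paper's exact derivation.

```python
import numpy as np

def predict_with_augmentation(model_logprob, image, augmentations):
    """Average class log-probabilities over augmented copies of the input."""
    scores = np.stack([model_logprob(aug(image)) for aug in augmentations])
    return scores.mean(axis=0).argmax()

def model_logprob(img):
    """Placeholder two-class scorer: hand-crafted features plus a log-softmax."""
    feats = np.array([img.mean(), img.std(), np.abs(img).sum()])
    logits = np.array([feats @ np.array([1.0, -0.5, 0.01]),
                       feats @ np.array([-1.0, 0.5, -0.01])])
    return logits - np.log(np.exp(logits).sum())

augmentations = [lambda x: x,
                 lambda x: np.fliplr(x),
                 lambda x: np.roll(x, 1, axis=0)]
image = np.random.rand(28, 28)
print(predict_with_augmentation(model_logprob, image, augmentations))
```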
To address this issue, we propose a SVDD based\nfeature learning algorithm that describes the density and distribution of each\ncluster from K-means with an SVDD ball for more robust feature representation.\nFor this purpose, we present a new SVDD algorithm called C-SVDD that centers\nthe SVDD ball towards the mode of local density of each cluster, and we show\nthat the objective of C-SVDD can be solved very efficiently as a linear\nprogramming problem. Additionally, traditional unsupervised feature learning\nmethods usually take an average or sum of local representations to obtain\nglobal representation which ignore spatial relationship among them. To use\nspatial information we propose a global representation with a variant of SIFT\ndescriptor. The architecture is also extended with multiple receptive field\nscales and multiple pooling sizes. Extensive experiments on several popular\nobject recognition benchmarks, such as STL-10, MINST, Holiday and Copydays\nshows that the proposed C-SVDDNet method yields comparable or better\nperformance than that of the previous state of the art methods.", "field": [], "task": ["Image Classification", "Object Recognition"], "method": [], "dataset": ["MNIST", "STL-10"], "metric": ["Percentage error", "Percentage correct"], "title": "Unsupervised Feature Learning with C-SVDDNet"} {"abstract": "Methods for unconstrained face alignment must satisfy two requirements: they must not rely on accurate initialisation/face detection and they should perform equally well for the whole spectrum of facial poses. To the best of our knowledge, there are no methods meeting these requirements to satisfactory extent, and in this paper, we propose Convolutional Aggregation of Local Evidence (CALE), a Convolutional Neural Network (CNN) architecture particularly designed for addressing both of them. In particular, to remove the requirement for accurate face detection, our system firstly performs facial part detection, providing confidence scores for the location of each of the facial landmarks (local evidence). Next, these score maps along with early CNN features are aggregated by our system through joint regression in order to refine the landmarks\u2019 location. Besides playing the role of a graphical model, CNN regression is a key feature of our system, guiding the network to rely on context for predicting the location of occluded landmarks, typically encountered in very large poses. The whole system is trained end-to-end with intermediate supervision. When applied to AFLW-PIFA, the most challenging human face alignment test set to date, our method provides more than 50% gain in localisation accuracy when compared to other recently published methods for large pose face alignment. Going beyond human faces, we also demonstrate that CALE is effective in dealing with very large changes in shape and appearance, typically encountered in animal faces.", "field": [], "task": ["Face Alignment", "Face Detection", "Regression"], "method": [], "dataset": ["AFLW-PIFA (34 points)", "AFLW-PIFA (21 points)"], "metric": ["NME"], "title": "Convolutional aggregation of local evidence for large pose face alignment"} {"abstract": "Sparseness is a useful regularizer for learning in a wide range of\napplications, in particular in neural networks. This paper proposes a model\ntargeted at classification tasks, where sparse activity and sparse connectivity\nare used to enhance classification capabilities. 
The tool for achieving this is\na sparseness-enforcing projection operator which finds the closest vector with\na pre-defined sparseness for any given vector. In the theoretical part of this\npaper, a comprehensive theory for such a projection is developed. In\nconclusion, it is shown that the projection is differentiable almost everywhere\nand can thus be implemented as a smooth neuronal transfer function. The entire\nmodel can hence be tuned end-to-end using gradient-based methods. Experiments\non the MNIST database of handwritten digits show that classification\nperformance can be boosted by sparse activity or sparse connectivity. With a\ncombination of both, performance can be significantly better compared to\nclassical non-sparse approaches.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["MNIST"], "metric": ["Percentage error"], "title": "Sparse Activity and Sparse Connectivity in Supervised Learning"} {"abstract": "This work presents a two-stage neural architecture for learning and refining structural correspondences between graphs. First, we use localized node embeddings computed by a graph neural network to obtain an initial ranking of soft correspondences between nodes. Secondly, we employ synchronous message passing networks to iteratively re-rank the soft correspondences to reach a matching consensus in local neighborhoods between graphs. We show, theoretically and empirically, that our message passing scheme computes a well-founded measure of consensus for corresponding neighborhoods, which is then used to guide the iterative re-ranking process. Our purely local and sparsity-aware architecture scales well to large, real-world inputs while still being able to recover global correspondences consistently. We demonstrate the practical effectiveness of our method on real-world tasks from the fields of computer vision and entity alignment between knowledge graphs, on which we improve upon the current state-of-the-art. Our source code is available under https://github.com/rusty1s/ deep-graph-matching-consensus.", "field": [], "task": ["Entity Alignment", "Graph Matching", "Knowledge Graphs"], "method": [], "dataset": ["DBP15k zh-en"], "metric": ["Hits@1"], "title": "Deep Graph Matching Consensus"} {"abstract": "SegBlocks reduces the computational cost of existing neural networks, by dynamically adjusting the processing resolution of image regions based on their complexity. Our method splits an image into blocks and downsamples blocks of low complexity, reducing the number of operations and memory consumption. A lightweight policy network, selecting the complex regions, is trained using reinforcement learning. In addition, we introduce several modules implemented in CUDA to process images in blocks. Most important, our novel BlockPad module prevents the feature discontinuities at block borders of which existing methods suffer, while keeping memory consumption under control. Our experiments on Cityscapes and Mapillary Vistas semantic segmentation show that dynamically processing images offers a better accuracy versus complexity trade-off compared to static baselines of similar complexity. 
For instance, our method reduces the number of floating-point operations of SwiftNet-RN18 by 60% and increases the inference speed by 50%, with only a 0.3% decrease in mIoU accuracy on Cityscapes.", "field": [], "task": ["Real-Time Semantic Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["Mapillary val", "Cityscapes test"], "metric": ["Frame (fps)", "mIoU"], "title": "SegBlocks: Block-Based Dynamic Resolution Networks for Real-Time Segmentation"} {"abstract": "In this paper, we study the problem of semantic annotation on 3D models that\nare represented as shape graphs. A functional view is taken to represent\nlocalized information on graphs, so that annotations such as part segment or\nkeypoint are nothing but 0-1 indicator vertex functions. Compared with images\nthat are 2D grids, shape graphs are irregular and non-isomorphic data\nstructures. To enable the prediction of vertex functions on them by\nconvolutional neural networks, we resort to the spectral CNN method, which enables\nweight sharing by parameterizing kernels in the spectral domain spanned by\ngraph Laplacian eigenbases. Under this setting, our network, named SyncSpecCNN,\nstrives to overcome two key challenges: how to share coefficients and conduct\nmulti-scale analysis in different parts of the graph for a single shape, and\nhow to share information across related but different shapes that may be\nrepresented by very different graphs. Towards these goals, we introduce a\nspectral parameterization of dilated convolutional kernels and a spectral\ntransformer network. Experimentally, we tested our SyncSpecCNN on various tasks,\nincluding 3D shape part segmentation and 3D keypoint prediction.\nState-of-the-art performance has been achieved on all benchmark datasets.", "field": [], "task": ["3D Part Segmentation"], "method": [], "dataset": ["ShapeNet-Part"], "metric": ["Class Average IoU", "Instance Average IoU"], "title": "SyncSpecCNN: Synchronized Spectral CNN for 3D Shape Segmentation"} {"abstract": "Modern neural networks have the capacity to overfit noisy labels frequently found in real-world datasets. Although great progress has been made, existing techniques are limited in providing theoretical guarantees for the performance of neural networks trained with noisy labels. Here we propose a novel approach with strong theoretical guarantees for robust training of deep networks trained with noisy labels. The key idea behind our method is to select weighted subsets (coresets) of clean data points that provide an approximately low-rank Jacobian matrix. We then prove that gradient descent applied to the subsets does not overfit the noisy labels. Our extensive experiments corroborate our theory and demonstrate that deep networks trained on our subsets achieve a significantly superior performance compared to the state of the art, e.g., a 6% increase in accuracy on CIFAR-10 with 80% noisy labels, and a 7% increase in accuracy on mini WebVision.", "field": [], "task": [], "method": [], "dataset": ["mini WebVision 1.0"], "metric": ["Top-5 Accuracy", "ImageNet Top-1 Accuracy", "ImageNet Top-5 Accuracy", "Top-1 Accuracy"], "title": "Coresets for Robust Training of Neural Networks against Noisy Labels"} {"abstract": "Disambiguating named entities in natural language text maps mentions of ambiguous names onto canonical entities like people or places, registered in a knowledge base such as DBpedia or YAGO.
This paper presents a robust method for collective disambiguation, by harnessing context from knowledge bases and using a new form of coherence graph. It unifies prior approaches into a comprehensive framework that combines three measures: the prior probability of an entity being mentioned, the similarity between the contexts of a mention and a candidate entity, as well as the coherence among candidate entities for all mentions together. The method builds a weighted graph of mentions and candidate entities, and computes a dense subgraph that approximates the best joint mention-entity mapping. Experiments show that the new method significantly outperforms prior methods in terms of accuracy, with robust behavior across a variety of inputs.", "field": [], "task": ["Entity Disambiguation", "Entity Linking"], "method": [], "dataset": ["AIDA-CoNLL"], "metric": ["Micro-F1 strong", "Macro-F1 strong", "In-KB Accuracy"], "title": "Robust Disambiguation of Named Entities in Text"} {"abstract": "Character-based neural models have recently proven very useful for many NLP\ntasks. However, there is a gap of sophistication between methods for learning\nrepresentations of sentences and words. While most character models for\nlearning representations of sentences are deep and complex, models for learning\nrepresentations of words are shallow and simple. Also, in spite of considerable\nresearch on learning character embeddings, it is still not clear which kind of\narchitecture is the best for capturing character-to-word representations. To\naddress these questions, we first investigate the gaps between methods for\nlearning word and sentence representations. We conduct detailed experiments and\ncomparisons of different state-of-the-art convolutional models, and also\ninvestigate the advantages and disadvantages of their constituents.\nFurthermore, we propose IntNet, a funnel-shaped wide convolutional neural\narchitecture with no down-sampling for learning representations of the internal\nstructure of words by composing their characters from limited, supervised\ntraining corpora. We evaluate our proposed model on six sequence labeling\ndatasets, including named entity recognition, part-of-speech tagging, and\nsyntactic chunking. Our in-depth analysis shows that IntNet significantly\noutperforms other character embedding models and obtains new state-of-the-art\nperformance without relying on any external knowledge or resources.", "field": [], "task": ["Chunking", "Named Entity Recognition", "Part-Of-Speech Tagging"], "method": [], "dataset": ["CoNLL 2003 (English)", "Penn Treebank"], "metric": ["F1", "F1 score", "Accuracy"], "title": "Learning Better Internal Structure of Words for Sequence Labeling"} {"abstract": "One of the most important factors which directly and significantly affects the quality of the neural sequence labeling is the selection and encoding the input features to generate rich semantic and grammatical representation vectors. In this paper, we propose a deep neural network model to address a particular task of sequence labeling problem, the task of Named Entity Recognition (NER). The model consists of three sub-networks to fully exploit character-level and capitalization features as well as word-level contextual representation. 
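As a small illustration of the capitalization features mentioned in the NER abstract above, here is a hedged sketch of a word-shape extractor; the category names and exact feature set are assumptions for illustration, not the paper's actual design.

```python
def capitalization_feature(token: str) -> str:
    """Map a token to a coarse capitalization class (illustrative categories)."""
    if token.isupper():
        return "ALL_CAPS"
    if token.istitle():
        return "INIT_CAP"
    if token.islower():
        return "LOWER"
    if any(c.isupper() for c in token):
        return "MIXED_CAP"
    return "NO_LETTERS"

print([capitalization_feature(t) for t in ["EU", "rejects", "German", "call", "1996"]])
# ['ALL_CAPS', 'LOWER', 'INIT_CAP', 'LOWER', 'NO_LETTERS']
```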
To show the ability of our model to generalize to different languages, we evaluated the model on Russian, Vietnamese, English and Chinese and obtained state-of-the-art performances: 91.10%, 94.43%, 91.22% and 92.95% F-measure on Gareev's dataset and the VLSP-2016, CoNLL-2003 and MSRA datasets, respectively. Besides that, our model also obtained a good performance (about 70% F1) using only 100 samples for the training and development sets.", "field": [], "task": ["Named Entity Recognition", "Named Entity Recognition In Vietnamese"], "method": [], "dataset": ["CoNLL 2003 (English)", "VLSP-2016"], "metric": ["F1"], "title": "A Deep Neural Network Model for the Task of Named Entity Recognition"} {"abstract": "Recent research on time-domain audio separation networks (TasNets) has brought great success to speech separation. Nevertheless, conventional TasNets struggle to satisfy the memory and latency constraints in industrial applications. In this regard, we design a low-cost, high-performance architecture, namely, the globally attentive locally recurrent (GALR) network. Like the dual-path RNN (DPRNN), we first split a feature sequence into 2D segments and then process the sequence along both the intra- and inter-segment dimensions. Our main innovation lies in that, on top of features recurrently processed along the intra-segment dimension, GALR applies a self-attention mechanism to the sequence along the inter-segment dimension, which aggregates context-aware information and also enables parallelization. Our experiments suggest that GALR is a notably more effective network than the prior work. On one hand, with only 1.5M parameters, it has achieved comparable separation performance at a much lower cost, with 36.1% less runtime memory and 49.4% fewer computational operations, relative to the DPRNN. On the other hand, at a model size comparable to DPRNN, GALR has consistently outperformed DPRNN on three datasets, in particular with a substantial margin of 2.4 dB absolute improvement in SI-SNRi on the benchmark WSJ0-2mix task.", "field": [], "task": ["Speech Separation"], "method": [], "dataset": ["wsj0-2mix"], "metric": ["SI-SDRi"], "title": "Effective Low-Cost Time-Domain Audio Separation Using Globally Attentive Locally Recurrent Networks"} {"abstract": "We present a joint model of three core tasks in the entity analysis stack: coreference resolution (within-document clustering), named entity recognition (coarse semantic typing), and entity linking (matching to Wikipedia entities). Our model is formally a structured conditional random field. Unary factors encode local features from strong baselines for each task. We then add binary and ternary factors to capture cross-task interactions, such as the constraint that coreferent mentions have the same semantic type. On the ACE 2005 and OntoNotes datasets, we achieve state-of-the-art results for all three tasks. Moreover, joint modeling improves performance on each task over strong independent baselines.", "field": [], "task": ["Coreference Resolution", "Entity Linking", "Named Entity Recognition"], "method": [], "dataset": ["Ontonotes v5 (English)"], "metric": ["F1"], "title": "A Joint Model for Entity Analysis: Coreference, Typing, and Linking"} {"abstract": "Most video-based action recognition approaches choose to extract features from the whole video to recognize actions. The cluttered background and non-action motions limit the performances of these methods, since they lack the explicit modeling of human body movements.
With recent advances of human pose estimation, this work presents a novel method to recognize human action as the evolution of pose estimation maps. Instead of relying on the inaccurate human poses estimated from videos, we observe that pose estimation maps, the byproduct of pose estimation, preserve richer cues of human body to benefit action recognition. Specifically, the evolution of pose estimation maps can be decomposed as an evolution of heatmaps, e.g., probabilistic maps, and an evolution of estimated 2D human poses, which denote the changes of body shape and body pose, respectively. Considering the sparse property of heatmap, we develop spatial rank pooling to aggregate the evolution of heatmaps as a body shape evolution image. As body shape evolution image does not differentiate body parts, we design body guided sampling to aggregate the evolution of poses as a body pose evolution image. The complementary properties between both types of images are explored by deep convolutional neural networks to predict action label. Experiments on NTU RGB+D, UTD-MHAD and PennAction datasets verify the effectiveness of our method, which outperforms most state-of-the-art methods.", "field": [], "task": ["Action Recognition", "Multimodal Activity Recognition", "Pose Estimation", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D", "UTD-MHAD", "NTU RGB+D 120"], "metric": ["Accuracy (Cross-Subject)", "Accuracy (Cross-Setup)", "Accuracy (CV)", "Accuracy (CS)"], "title": "Recognizing Human Actions as the Evolution of Pose Estimation Maps"} {"abstract": "Highly regularized LSTMs achieve impressive results on several benchmark\ndatasets in language modeling. We propose a new regularization method based on\ndecoding the last token in the context using the predicted distribution of the\nnext token. This biases the model towards retaining more contextual\ninformation, in turn improving its ability to predict the next token. With\nnegligible overhead in the number of parameters and training time, our Past\nDecode Regularization (PDR) method achieves a word level perplexity of 55.6 on\nthe Penn Treebank and 63.5 on the WikiText-2 datasets using a single softmax.\nWe also show gains by using PDR in combination with a mixture-of-softmaxes,\nachieving a word level perplexity of 53.8 and 60.5 on these datasets. In\naddition, our method achieves 1.169 bits-per-character on the Penn Treebank\nCharacter dataset for character level language modeling. These results\nconstitute a new state-of-the-art in their respective settings.", "field": [], "task": ["Language Modelling"], "method": [], "dataset": ["Penn Treebank (Word Level)", "WikiText-2", "Penn Treebank (Character Level)"], "metric": ["Number of params", "Bit per Character (BPC)", "Validation perplexity", "Test perplexity", "Params"], "title": "Improved Language Modeling by Decoding the Past"} {"abstract": "Single modality action recognition on RGB or depth sequences has been\nextensively explored recently. It is generally accepted that each of these two\nmodalities has different strengths and limitations for the task of action\nrecognition. Therefore, analysis of the RGB+D videos can help us to better\nstudy the complementary properties of these two types of modalities and achieve\nhigher levels of performance. In this paper, we propose a new deep autoencoder\nbased shared-specific feature factorization network to separate input\nmultimodal signals into a hierarchy of components. 
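The pose-evolution abstract above decomposes pose estimation maps into heatmaps and estimated 2D poses. For context, here is a minimal sketch of the standard way of reading 2D joint coordinates off per-joint heatmaps (argmax decoding); this is a common practice, not necessarily the paper's exact procedure.

```python
import numpy as np

def heatmaps_to_pose(heatmaps):
    """heatmaps: [num_joints, H, W] -> (coords [num_joints, 2] as (x, y), confidences)."""
    num_joints, h, w = heatmaps.shape
    flat = heatmaps.reshape(num_joints, -1)
    ys, xs = np.unravel_index(flat.argmax(axis=1), (h, w))
    return np.stack([xs, ys], axis=1), flat.max(axis=1)

coords, conf = heatmaps_to_pose(np.random.rand(17, 64, 48))  # e.g., 17 joints
print(coords.shape, conf.shape)  # (17, 2) (17,)
```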
Further, based on the\nstructure of the features, a structured sparsity learning machine is proposed\nwhich utilizes mixed norms to apply regularization within components and group\nselection between them for better classification performance. Our experimental\nresults show the effectiveness of our cross-modality feature analysis framework\nby achieving state-of-the-art accuracy for action classification on five\nchallenging benchmark datasets.", "field": [], "task": ["Action Classification", "Action Classification ", "Action Recognition", "Multimodal Activity Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D", "MSR Daily Activity3D dataset"], "metric": ["Accuracy (CS)", "Accuracy"], "title": "Deep Multimodal Feature Analysis for Action Recognition in RGB+D Videos"} {"abstract": "This paper presents a deep learning based approach to the problem of human\npose estimation. We employ generative adversarial networks as our learning\nparadigm in which we set up two stacked hourglass networks with the same\narchitecture, one as the generator and the other as the discriminator. The\ngenerator is used as a human pose estimator after the training is done. The\ndiscriminator distinguishes ground-truth heatmaps from generated ones, and\nback-propagates the adversarial loss to the generator. This process enables the\ngenerator to learn plausible human body configurations and is shown to be\nuseful for improving the prediction accuracy.", "field": [], "task": ["Pose Estimation"], "method": [], "dataset": ["MPII Human Pose"], "metric": ["PCKh-0.5"], "title": "Self Adversarial Training for Human Pose Estimation"} {"abstract": "Computational biology and bioinformatics provide vast data gold-mines from protein sequences, ideal for Language Models (LMs) taken from Natural Language Processing (NLP). These LMs reach for new prediction frontiers at low inference costs. Here, we trained two auto-regressive language models (Transformer-XL, XLNet) and two auto-encoder models (Bert, Albert) on data from UniRef and BFD containing up to 393 billion amino acids (words) from 2.1 billion protein sequences (22- and 112-times the entire English Wikipedia). The LMs were trained on the Summit supercomputer at Oak Ridge National Laboratory (ORNL), using 936 nodes (total 5616 GPUs) and one TPU Pod (V3-512 or V3-1024). We validated the advantage of up-scaling LMs to larger models supported by bigger data by predicting secondary structure (3-states: Q3=76-84, 8-states: Q8=65-73), sub-cellular localization for 10 cellular compartments (Q10=74) and whether a protein is membrane-bound or water-soluble (Q2=89). Dimensionality reduction revealed that the LM-embeddings from unlabeled data (only protein sequences) captured important biophysical properties governing protein shape. This implied learning some of the grammar of the language of life realized in protein sequences. The successful up-scaling of protein LMs through HPC to larger data sets slightly reduced the gap between models trained on evolutionary information and LMs. 
The official GitHub repository: https://github.com/agemagician/ProtTrans", "field": [], "task": ["Dimensionality Reduction", "Protein Secondary Structure Prediction"], "method": [], "dataset": ["CASP12", "CB513", "TS115"], "metric": ["Q8", "Q3"], "title": "ProtTrans: Towards Cracking the Language of Life's Code Through Self-Supervised Deep Learning and High Performance Computing"} {"abstract": "In this paper, we systematically analyze the connecting architectures of\nrecurrent neural networks (RNNs). Our main contribution is twofold: first, we\npresent a rigorous graph-theoretic framework describing the connecting\narchitectures of RNNs in general. Second, we propose three architecture\ncomplexity measures of RNNs: (a) the recurrent depth, which captures the RNN's\nover-time nonlinear complexity, (b) the feedforward depth, which captures the\nlocal input-output nonlinearity (similar to the \"depth\" in feedforward neural\nnetworks (FNNs)), and (c) the recurrent skip coefficient, which captures how\nrapidly the information propagates over time. We rigorously prove each\nmeasure's existence and computability. Our experimental results show that RNNs\nmight benefit from larger recurrent depth and feedforward depth. We further\ndemonstrate that increasing the recurrent skip coefficient offers performance\nboosts on long-term dependency problems.", "field": [], "task": ["Language Modelling"], "method": [], "dataset": ["Text8"], "metric": ["Bit per Character (BPC)"], "title": "Architectural Complexity Measures of Recurrent Neural Networks"} {"abstract": "We present a simple and effective blind image deblurring method based on the dark channel prior. Our work is inspired by the interesting observation that the dark channel of blurred images is less sparse. While most image patches in the clean image contain some dark pixels, these pixels are not dark when averaged with neighboring high-intensity pixels during the blur process. Our analysis shows that this change in the sparsity of the dark channel is an inherent property of the blur process, which we both prove mathematically and validate using training data. Therefore, enforcing the sparsity of the dark channel helps blind deblurring in various scenarios, including natural, face, text, and low-illumination images. However, enforcing sparsity of the dark channel introduces a non-convex non-linear optimization problem. We introduce a linear approximation of the min operator to compute the dark channel. Our look-up-table-based method converges fast in practice and can be directly extended to non-uniform deblurring. Extensive experiments show that our method achieves state-of-the-art results on deblurring natural images and compares favorably against methods that are well-engineered for specific scenarios.", "field": [], "task": ["Blind Image Deblurring", "Deblurring"], "method": [], "dataset": ["RealBlur-J (trained on GoPro)", "RealBlur-R (trained on GoPro)"], "metric": ["SSIM (sRGB)", "PSNR (sRGB)"], "title": "Blind Image Deblurring Using Dark Channel Prior"} {"abstract": "Images taken in low-light conditions with handheld cameras are often blurry due to the required long exposure time. Although significant progress has been made recently on image deblurring, state-of-the-art approaches often fail on low-light images, as these images do not contain a sufficient number of salient features that deblurring methods rely on.
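For the dark channel prior used in the deblurring abstract above, the dark channel itself has a standard definition: the per-pixel minimum over the color channels followed by a minimum filter over a local patch. A minimal sketch (the patch size is an arbitrary illustrative choice):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch_size=15):
    """Dark channel of an HxWx3 image in [0, 1]."""
    per_pixel_min = image.min(axis=2)                       # min over color channels
    return minimum_filter(per_pixel_min, size=patch_size)   # min over a local patch

# Blurred images tend to have a less sparse (brighter) dark channel than sharp ones.
print(dark_channel(np.random.rand(128, 128, 3)).shape)  # (128, 128)
```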
On the other hand, light streaks are common phenomena in low-light images that contain rich blur information, but have not been extensively explored in previous approaches. In this work, we propose a new method that utilizes light streaks to help deblur low-light images. We introduce a non-linear blur model that explicitly models light streaks and their underlying light sources, and poses them as constraints for estimating the blur kernel in an optimization framework. Our method also automatically detects useful light streaks in the input image. Experimental results show that our approach obtains good results on challenging real-world examples that no other methods could achieve before.", "field": [], "task": ["Deblurring"], "method": [], "dataset": ["RealBlur-J (trained on GoPro)", "RealBlur-R (trained on GoPro)"], "metric": ["SSIM (sRGB)", "PSNR (sRGB)"], "title": "Deblurring Low-light Images with Light Streaks"} {"abstract": "Electroencephalograph (EEG) emotion recognition is a significant task in the brain-computer interface field. Although many deep learning methods are proposed recently, it is still challenging to make full use of the information contained in different domains of EEG signals. In this paper, we present a novel method, called four-dimensional attention-based neural network (4D-aNN) for EEG emotion recognition. First, raw EEG signals are transformed into 4D spatial-spectral-temporal representations. Then, the proposed 4D-aNN adopts spectral and spatial attention mechanisms to adaptively assign the weights of different brain regions and frequency bands, and a convolutional neural network (CNN) is utilized to deal with the spectral and spatial information of the 4D representations. Moreover, a temporal attention mechanism is integrated into a bidirectional Long Short-Term Memory (LSTM) to explore temporal dependencies of the 4D representations. Our model achieves state-of-the-art performance on the SEED dataset under intra-subject splitting. The experimental results have shown the effectiveness of the attention mechanisms in different domains for EEG emotion recognition.", "field": [], "task": ["EEG", "Emotion Recognition"], "method": [], "dataset": ["SEED"], "metric": ["Accuracy"], "title": "4D Attention-based Neural Network for EEG Emotion Recognition"} {"abstract": "Pre-trained language models have proven their unique powers in capturing implicit language features. However, most pre-training approaches focus on the word-level training objective, while sentence-level objectives are rarely studied. In this paper, we propose Contrastive LEArning for sentence Representation (CLEAR), which employs multiple sentence-level augmentation strategies in order to learn a noise-invariant sentence representation. These augmentations include word and span deletion, reordering, and substitution. Furthermore, we investigate the key reasons that make contrastive learning effective through numerous experiments. We observe that different sentence augmentations during pre-training lead to different performance improvements on various downstream tasks. 
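The CLEAR abstract above lists word deletion, span deletion, reordering, and substitution as sentence-level augmentations. Below is a minimal sketch of three of them on tokenized text; the deletion rates and span choices are illustrative assumptions, not the paper's settings.

```python
import random

def word_deletion(tokens, p=0.1):
    """Drop each token independently with probability p (keep at least one token)."""
    kept = [t for t in tokens if random.random() > p]
    return kept if kept else [random.choice(tokens)]

def span_deletion(tokens, ratio=0.2):
    """Delete one contiguous span covering roughly `ratio` of the tokens."""
    n = max(1, int(len(tokens) * ratio))
    start = random.randrange(0, max(1, len(tokens) - n + 1))
    return tokens[:start] + tokens[start + n:]

def reordering(tokens):
    """Rotate the sentence around a random split point (a simple form of reordering)."""
    if len(tokens) < 2:
        return list(tokens)
    cut = random.randrange(1, len(tokens))
    return tokens[cut:] + tokens[:cut]

sent = "contrastive learning builds noise invariant sentence representations".split()
print(word_deletion(sent), span_deletion(sent), reordering(sent), sep="\n")
```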
Our approach is shown to outperform multiple existing methods on both SentEval and GLUE benchmarks.", "field": [], "task": ["Linguistic Acceptability", "Natural Language Inference", "Question Answering", "Semantic Textual Similarity", "Sentiment Analysis"], "method": [], "dataset": ["MultiNLI", "SST-2 Binary classification", "RTE", "MRPC", "STS Benchmark", "CoLA", "QNLI", "Quora Question Pairs"], "metric": ["Pearson Correlation", "Accuracy"], "title": "CLEAR: Contrastive Learning for Sentence Representation"} {"abstract": "Wearable devices that acquire photoplethysmographic (PPG) signals are becoming increasingly popular to monitor the heart rate during physical exercise. However, high accuracy and low computational complexity are conflicting requirements. We propose a method that provides highly accurate heart rate estimates at a very low computational cost in order to be implementable on wearables. To achieve the lowest possible complexity, only basic signal processing operations, i.e., correlation-based fundamental frequency estimation and spectral combination, harmonic noise damping and frequency domain tracking, are used. The proposed approach outperforms state-of-the-art methods on current benchmark data considerably in terms of computation time, while achieving a similar accuracy.", "field": [], "task": ["Heart rate estimation", "Photoplethysmography (PPG)"], "method": [], "dataset": ["PPG-DaLiA", "WESAD"], "metric": ["MAE [bpm, session-wise]"], "title": "Computationally efficient heart rate estimation during physical exercise using photoplethysmographic signals"} {"abstract": "In this work we propose a novel deep-learning approach for age estimation based on face images. We first introduce a dual image augmentation-aggregation approach based on attention. This allows the network to jointly utilize multiple face image augmentations whose embeddings are aggregated by a Transformer-Encoder. The resulting aggregated embedding is shown to better encode the face image attributes. We then propose a probabilistic hierarchical regression framework that combines a discrete probabilistic estimate of age labels, with a corresponding ensemble of regressors. Each regressor is particularly adapted and trained to refine the probabilistic estimate over a range of ages. Our scheme is shown to outperform contemporary schemes and provide a new state-of-the-art age estimation accuracy, when applied to the MORPH II dataset for age estimation. Last, we introduce a bias analysis of state-of-the-art age estimation results.", "field": [], "task": [], "method": [], "dataset": ["MORPH Album2 (RS)", "MORPH Album2 (SE)"], "metric": ["MAE"], "title": "Hierarchical Attention-based Age Estimation and Bias Estimation"} {"abstract": "Facial Expression Recognition (FER) is a classification task that points to face variants. Hence, there are certain intimate relationships between facial expressions. We call them affinity features, which are barely taken into account by current FER algorithms. Besides, to capture the edge information of the image, Convolutional Neural Networks (CNNs) generally utilize a host of edge paddings. Although they are desirable, the feature map is deeply eroded after multi-layer convolution. We name what has formed in this process the albino features, which definitely weaken the representation of the expression. To tackle these challenges, we propose a novel architecture named Amend Representation Module (ARM). ARM is a substitute for the pooling layer. 
Theoretically, it could be embedded in any CNN with a pooling layer. ARM efficiently enhances facial expression representation from two different directions: 1) reducing the weight of eroded features to offset the side effect of padding, and 2) sharing affinity features over mini-batch to strengthen the representation learning. In terms of data imbalance, we designed a minimal random resampling (MRR) scheme to suppress network overfitting. Experiments on public benchmarks prove that our ARM boosts the performance of FER remarkably. The validation accuracies are respectively 90.55% on RAF-DB, 64.49% on Affect-Net, and 71.38% on FER2013, exceeding current state-of-the-art methods.", "field": [], "task": [], "method": [], "dataset": ["RAF-DB", "AffectNet", "FER2013"], "metric": ["Overall Accuracy", "Accuracy (8 emotion)", "Accuracy (7 emotion)", "Avg. Accuracy", "Accuracy"], "title": "Learning to Amend Facial Expression Representation via De-albino and Affinity"} {"abstract": "Neural architecture search (NAS) has witnessed prevailing success in image classification and (very recently) segmentation tasks. In this paper, we present the first preliminary study on introducing the NAS algorithm to generative adversarial networks (GANs), dubbed AutoGAN. The marriage of NAS and GANs faces its unique challenges. We define the search space for the generator architectural variations and use an RNN controller to guide the search, with parameter sharing and dynamic-resetting to accelerate the process. Inception score is adopted as the reward, and a multi-level search strategy is introduced to perform NAS in a progressive way. Experiments validate the effectiveness of AutoGAN on the task of unconditional image generation. Specifically, our discovered architectures achieve highly competitive performance compared to current state-of-the-art hand-crafted GANs, e.g., setting new state-of-the-art FID scores of 12.42 on CIFAR-10, and 31.01 on STL-10, respectively. We also conclude with a discussion of the current limitations and future potential of AutoGAN. The code is available at https://github.com/TAMU-VITA/AutoGAN", "field": [], "task": ["Image Classification", "Image Generation", "Neural Architecture Search"], "method": [], "dataset": ["STL-10", "CIFAR-10"], "metric": ["Inception score", "FID"], "title": "AutoGAN: Neural Architecture Search for Generative Adversarial Networks"} {"abstract": "In interactive object segmentation a user collaborates with a computer vision model to segment an object. Recent works employ convolutional neural networks for this task: Given an image and a set of corrections made by the user as input, they output a segmentation mask. These approaches achieve strong performance by training on large datasets but they keep the model parameters unchanged at test time. Instead, we recognize that user corrections can serve as sparse training examples and we propose a method that capitalizes on that idea to update the model parameters on-the-fly to the data at hand. Our approach enables the adaptation to a particular object and its background, to distributions shifts in a test set, to specific object classes, and even to large domain changes, where the imaging modality changes between training and testing. 
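The ARM abstract above mentions a minimal random resampling (MRR) scheme for data imbalance; its exact definition is not given here, so the following is only a generic class-balancing sketch (randomly subsampling every class to the size of the smallest one), not the paper's MRR.

```python
import numpy as np

def balanced_subsample(labels, seed=0):
    """Indices of a random subsample in which every class has equal size
    (generic re-balancing sketch; the paper's exact MRR scheme may differ)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.min()
    idx = [rng.choice(np.flatnonzero(labels == c), size=target, replace=False)
           for c in classes]
    return rng.permutation(np.concatenate(idx))

labels = [0] * 500 + [1] * 80 + [2] * 20
print(np.bincount(np.asarray(labels)[balanced_subsample(labels)]))  # [20 20 20]
```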
We perform extensive experiments on 8 diverse datasets and show: Compared to a model with frozen parameters, our method reduces the required corrections (i) by 9%-30% when distribution shifts are small between training and testing; (ii) by 12%-44% when specializing to a specific class; (iii) and by 60% and 77% when we completely change domain between training and testing.", "field": [], "task": ["Interactive Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["DRIONS-DB", "Rooftop", "Berkeley", "DAVIS", "GrabCut"], "metric": ["NoC@90", "NoC@85", "NoC@80"], "title": "Continuous Adaptation for Interactive Object Segmentation by Learning from Corrections"} {"abstract": "Temporal action proposal generation is an important task, aiming to localize\nthe video segments containing human actions in an untrimmed video. In this\npaper, we propose a multi-granularity generator (MGG) to perform the temporal\naction proposal from different granularity perspectives, relying on the video\nvisual features equipped with the position embedding information. First, we\npropose to use a bilinear matching model to exploit the rich local information\nwithin the video sequence. Afterwards, two components, namely segment proposal\nproducer (SPP) and frame actionness producer (FAP), are combined to perform the\ntask of temporal action proposal at two distinct granularities. SPP considers\nthe whole video in the form of feature pyramid and generates segment proposals\nfrom one coarse perspective, while FAP carries out a finer actionness\nevaluation for each video frame. Our proposed MGG can be trained in an\nend-to-end fashion. By temporally adjusting the segment proposals with\nfine-grained frame actionness information, MGG achieves the superior\nperformance over state-of-the-art methods on the public THUMOS-14 and\nActivityNet-1.3 datasets. Moreover, we employ existing action classifiers to\nperform the classification of the proposals generated by MGG, leading to\nsignificant improvements compared against the competing methods for the video\ndetection task.", "field": [], "task": ["Action Recognition", "Temporal Action Proposal Generation"], "method": [], "dataset": ["ActivityNet-1.3", "THUMOS\u201914"], "metric": ["mAP@0.3", "AUC (val)", "mAP@0.4", "mAP@0.5", "AR@100"], "title": "Multi-granularity Generator for Temporal Action Proposal"} {"abstract": "We present a Temporal Context Network (TCN) for precise temporal localization\nof human activities. Similar to the Faster-RCNN architecture, proposals are\nplaced at equal intervals in a video which span multiple temporal scales. We\npropose a novel representation for ranking these proposals. Since pooling\nfeatures only inside a segment is not sufficient to predict activity\nboundaries, we construct a representation which explicitly captures context\naround a proposal for ranking it. For each temporal segment inside a proposal,\nfeatures are uniformly sampled at a pair of scales and are input to a temporal\nconvolutional neural network for classification. After ranking proposals,\nnon-maximum suppression is applied and classification is performed to obtain\nfinal detections. 
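The interactive segmentation abstract above treats user corrections as sparse training examples for on-the-fly parameter updates. A hedged PyTorch-style sketch of one such update step follows; the binary segmentation head, the loss choice, and the single-step schedule are assumptions, not the paper's exact training recipe.

```python
import torch
import torch.nn.functional as F

def adapt_to_corrections(model, optimizer, image, click_coords, click_labels, steps=1):
    """Run a few gradient steps supervised only at the pixels the user corrected.

    image:        [1, 3, H, W] input tensor
    click_coords: list of (y, x) pixel positions the user clicked
    click_labels: list of 0/1 labels (background / object) for those positions
    """
    model.train()
    ys = torch.tensor([y for y, _ in click_coords])
    xs = torch.tensor([x for _, x in click_coords])
    target = torch.tensor(click_labels, dtype=torch.float32)
    for _ in range(steps):
        logits = model(image)              # assumed output shape [1, 1, H, W]
        clicked = logits[0, 0, ys, xs]     # sparse supervision at the clicks only
        loss = F.binary_cross_entropy_with_logits(clicked, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model
```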
TCN outperforms state-of-the-art methods on the ActivityNet\ndataset and the THUMOS14 dataset.", "field": [], "task": ["Temporal Localization"], "method": [], "dataset": ["THUMOS\u201914"], "metric": ["mAP@0.4", "mAP@0.5"], "title": "Temporal Context Network for Activity Localization in Videos"} {"abstract": "In this paper, we address the problem of detecting unseen objects from RGB images and estimating their poses in 3D. We propose two mobile friendly networks: MobilePose-Base and MobilePose-Shape. The former is used when there is only pose supervision, and the latter is for the case when shape supervision is available, even a weak one. We revisit shape features used in previous methods, including segmentation and coordinate map. We explain when and why pixel-level shape supervision can improve pose estimation. Consequently, we add shape prediction as an intermediate layer in the MobilePose-Shape, and let the network learn pose from shape. Our models are trained on mixed real and synthetic data, with weak and noisy shape supervision. They are ultra lightweight that can run in real-time on modern mobile devices (e.g. 36 FPS on Galaxy S20). Comparing with previous single-shot solutions, our method has higher accuracy, while using a significantly smaller model (2~3% in model size or number of parameters).", "field": [], "task": ["Monocular 3D Object Detection", "Pose Estimation"], "method": [], "dataset": ["Google Objectron"], "metric": ["Average Precision at 0.5 3D IoU", "MPE", "AP at 10' Elevation error", "AP at 15' Azimuth error"], "title": "MobilePose: Real-Time Pose Estimation for Unseen Objects with Weak Shape Supervision"} {"abstract": "In this paper we describe the TurkuNLP entry at the CoNLL 2018 Shared Task on Multilingual Parsing from Raw Text to Universal Dependencies. Compared to the last year, this year the shared task includes two new main metrics to measure the morphological tagging and lemmatization accuracies in addition to syntactic trees. Basing our motivation into these new metrics, we developed an end-to-end parsing pipeline especially focusing on developing a novel and state-of-the-art component for lemmatization. Our system reached the highest aggregate ranking on three main metrics out of 26 teams by achieving 1st place on metric involving lemmatization, and 2nd on both morphological tagging and parsing.", "field": [], "task": ["Dependency Parsing", "Lemmatization", "Machine Translation", "Morphological Tagging", "Word Embeddings"], "method": [], "dataset": ["Universal Dependencies"], "metric": ["UAS", "BLEX", "LAS"], "title": "Turku Neural Parser Pipeline: An End-to-End System for the CoNLL 2018 Shared Task"} {"abstract": "Recent powerful pre-trained language models have achieved remarkable performance on most of the popular datasets for reading comprehension. It is time to introduce more challenging datasets to push the development of this field towards more comprehensive reasoning of text. In this paper, we introduce a new Reading Comprehension dataset requiring logical reasoning (ReClor) extracted from standardized graduate admission examinations. As earlier studies suggest, human-annotated datasets usually contain biases, which are often exploited by models to achieve high accuracy without truly understanding the text. In order to comprehensively evaluate the logical reasoning ability of models on ReClor, we propose to identify biased data points and separate them into EASY set while the rest as HARD set. 
Empirical results show that state-of-the-art models have an outstanding ability to capture biases contained in the dataset with high accuracy on EASY set. However, they struggle on HARD set with poor performance near that of random guess, indicating more research is needed to essentially enhance the logical reasoning ability of current models.", "field": [], "task": ["Logical Reasoning Question Answering", "Logical Reasoning Reading Comprehension", "Machine Reading Comprehension", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["ReClor"], "metric": ["Accuracy", "Accuracy (hard)", "Accuracy (easy)", "Test"], "title": "ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning"} {"abstract": "Robust detection and tracking of objects is crucial for the deployment of autonomous vehicle technology. Image based benchmark datasets have driven development in computer vision tasks such as object detection, tracking and segmentation of agents in the environment. Most autonomous vehicles, however, carry a combination of cameras and range sensors such as lidar and radar. As machine learning based methods for detection and tracking become more prevalent, there is a need to train and evaluate such methods on datasets containing range sensor data along with images. In this work we present nuTonomy scenes (nuScenes), the first dataset to carry the full autonomous vehicle sensor suite: 6 cameras, 5 radars and 1 lidar, all with full 360 degree field of view. nuScenes comprises 1000 scenes, each 20s long and fully annotated with 3D bounding boxes for 23 classes and 8 attributes. It has 7x as many annotations and 100x as many images as the pioneering KITTI dataset. We define novel 3D detection and tracking metrics. We also provide careful dataset analysis as well as baselines for lidar and image based detection and tracking. Data, development kit and more information are available online.", "field": [], "task": ["3D Object Detection", "Autonomous Driving", "Autonomous Vehicles", "Object Detection"], "method": [], "dataset": ["nuScenes"], "metric": ["NDS"], "title": "nuScenes: A multimodal dataset for autonomous driving"} {"abstract": "We present the Frontier Aware Search with backTracking (FAST) Navigator, a\ngeneral framework for action decoding, that achieves state-of-the-art results\non the Room-to-Room (R2R) Vision-and-Language navigation challenge of Anderson\net. al. (2018). Given a natural language instruction and photo-realistic image\nviews of a previously unseen environment, the agent was tasked with navigating\nfrom source to target location as quickly as possible. While all current\napproaches make local action decisions or score entire trajectories using beam\nsearch, ours balances local and global signals when exploring an unobserved\nenvironment. Importantly, this lets us act greedily but use global signals to\nbacktrack when necessary. Applying FAST framework to existing state-of-the-art\nmodels achieved a 17% relative gain, an absolute 6% gain on Success rate\nweighted by Path Length (SPL).", "field": [], "task": ["Vision and Language Navigation", "Vision-Language Navigation"], "method": [], "dataset": ["Room2Room", "VLN Challenge"], "metric": ["length", "spl", "oracle success", "success", "error"], "title": "Tactical Rewind: Self-Correction via Backtracking in Vision-and-Language Navigation"} {"abstract": "Commonsense reasoning is fundamental to natural language understanding. 
While\ntraditional methods rely heavily on human-crafted features and knowledge bases,\nwe explore learning commonsense knowledge from a large amount of raw text via\nunsupervised learning. We propose two neural network models based on the Deep\nStructured Semantic Models (DSSM) framework to tackle two classic commonsense\nreasoning tasks, Winograd Schema challenges (WSC) and Pronoun Disambiguation\n(PDP). Evaluation shows that the proposed models effectively capture contextual\ninformation in the sentence and co-reference information between pronouns and\nnouns, and achieve significant improvement over previous state-of-the-art\napproaches.", "field": [], "task": ["Common Sense Reasoning", "Natural Language Understanding"], "method": [], "dataset": ["PDP60", "Winograd Schema Challenge"], "metric": ["Score", "Accuracy"], "title": "Unsupervised Deep Structured Semantic Models for Commonsense Reasoning"} {"abstract": "Convolutional neural networks have witnessed remarkable improvements in computational efficiency in recent years. A key driving force has been the idea of trading-off model expressivity and efficiency through a combination of $1\\times 1$ and depth-wise separable convolutions in lieu of a standard convolutional layer. The price of the efficiency, however, is the sub-optimal flow of information across space and channels in the network. To overcome this limitation, we present MUXConv, a layer that is designed to increase the flow of information by progressively multiplexing channel and spatial information in the network, while mitigating computational complexity. Furthermore, to demonstrate the effectiveness of MUXConv, we integrate it within an efficient multi-objective evolutionary algorithm to search for the optimal model hyper-parameters while simultaneously optimizing accuracy, compactness, and computational efficiency. On ImageNet, the resulting models, dubbed MUXNets, match the performance (75.3% top-1 accuracy) and multiply-add operations (218M) of MobileNetV3 while being 1.6$\\times$ more compact, and outperform other mobile models in all the three criteria. MUXNet also performs well under transfer learning and when adapted to object detection. On the ChestX-Ray 14 benchmark, its accuracy is comparable to the state-of-the-art while being $3.3\\times$ more compact and $14\\times$ more efficient. Similarly, detection on PASCAL VOC 2007 is 1.2% more accurate, 28% faster and 6% more compact compared to MobileNetV2. Code is available from https://github.com/human-analysis/MUXConv", "field": [], "task": ["Image Classification", "Neural Architecture Search", "Object Detection", "Pneumonia Detection", "Semantic Segmentation", "Transfer Learning"], "method": [], "dataset": ["ChestX-ray14", "CIFAR-10 Image Classification", "ADE20K", "CIFAR-100", "CIFAR-10", "ImageNet"], "metric": ["Number of params", "AUROC", "Validation mIoU", "Top 1 Accuracy", "Percentage error", "MACs", "Percentage correct", "Top-1 Error Rate", "Params", "FLOPS", "Parameters", "PARAMS", "Top 5 Accuracy", "Accuracy", "Percentage Error"], "title": "MUXConv: Information Multiplexing in Convolutional Neural Networks"} {"abstract": "Learning to follow instructions is of fundamental importance to autonomous agents for vision-and-language navigation (VLN). In this paper, we study how an agent can navigate long paths when learning from a corpus that consists of shorter ones. We show that existing state-of-the-art agents do not generalize well. 
To this end, we propose BabyWalk, a new VLN agent that learns to navigate by decomposing long instructions into shorter ones (BabySteps) and completing them sequentially. A specially designed memory buffer is used by the agent to turn its past experiences into contexts for future steps. The learning process is composed of two phases. In the first phase, the agent uses imitation learning from demonstration to accomplish BabySteps. In the second phase, the agent uses curriculum-based reinforcement learning to maximize rewards on navigation tasks with increasingly longer instructions. We create two new benchmark datasets (of long navigation tasks) and use them in conjunction with existing ones to examine BabyWalk's generalization ability. Empirical results show that BabyWalk achieves state-of-the-art results on several metrics; in particular, it is able to follow long instructions better. The code and the datasets are released on our project page https://github.com/Sha-Lab/babywalk.", "field": [], "task": ["Imitation Learning", "Vision and Language Navigation"], "method": [], "dataset": ["Cooperative Vision-and-Dialogue Navigation"], "metric": ["spl", "dist_to_end_reduction"], "title": "BabyWalk: Going Farther in Vision-and-Language Navigation by Taking Baby Steps"} {"abstract": "Robots navigating in human environments should use language to ask for assistance and be able to understand human responses. To study this challenge, we introduce Cooperative Vision-and-Dialog Navigation, a dataset of over 2k embodied, human-human dialogs situated in simulated, photorealistic home environments. The Navigator asks questions to their partner, the Oracle, who has privileged access to the best next steps the Navigator should take according to a shortest path planner. To train agents that search an environment for a goal location, we define the Navigation from Dialog History task. An agent, given a target object and a dialog history between humans cooperating to find that object, must infer navigation actions towards the goal in unexplored environments. We establish an initial, multi-modal sequence-to-sequence model and demonstrate that looking farther back in the dialog history improves performance. Source code and a live interface demo can be found at https://cvdn.dev/", "field": [], "task": ["Visual Navigation"], "method": [], "dataset": ["Cooperative Vision-and-Dialogue Navigation"], "metric": ["spl", "dist_to_end_reduction"], "title": "Vision-and-Dialog Navigation"} {"abstract": "Different from the Visual Question Answering task, which requires answering only one question about an image, Visual Dialogue involves multiple questions that cover a broad range of visual content which could be related to any objects, relationships or semantics. The key challenge in the Visual Dialogue task is thus to learn a more comprehensive and semantic-rich image representation that can adaptively attend to the image for different questions. In this research, we propose a novel model to depict an image from both visual and semantic perspectives. Specifically, the visual view helps capture the appearance-level information, including objects and their relationships, while the semantic view enables the agent to understand high-level visual semantics from the whole image to the local regions. Furthermore, on top of such multi-view image features, we propose a feature selection framework which is able to adaptively capture question-relevant information hierarchically at a fine-grained level.
The proposed method achieved state-of-the-art results on benchmark Visual Dialogue datasets. More importantly, we can tell which modality (visual or semantic) has more contribution in answering the current question by visualizing the gate values. It gives us insights in understanding of human cognition in Visual Dialogue.", "field": [], "task": ["Feature Selection", "Question Answering", "Visual Dialog", "Visual Question Answering"], "method": [], "dataset": ["Visual Dialog v1.0 test-std", "VisDial v0.9 val"], "metric": ["MRR (x 100)", "R@10", "NDCG (x 100)", "R@5", "Mean Rank", "MRR", "Mean", "R@1"], "title": "DualVD: An Adaptive Dual Encoding Model for Deep Visual Understanding in Visual Dialogue"} {"abstract": "Robots navigating autonomously need to perceive and track the motion of objects and other agents in its surroundings. This information enables planning and executing robust and safe trajectories. To facilitate these processes, the motion should be perceived in 3D Cartesian space. However, most recent multi-object tracking (MOT) research has focused on tracking people and moving objects in 2D RGB video sequences. In this work we present JRMOT, a novel 3D MOT system that integrates information from RGB images and 3D point clouds to achieve real-time, state-of-the-art tracking performance. Our system is built with recent neural networks for re-identification, 2D and 3D detection and track description, combined into a joint probabilistic data-association framework within a multi-modal recursive Kalman architecture. As part of our work, we release the JRDB dataset, a novel large scale 2D+3D dataset and benchmark, annotated with over 2 million boxes and 3500 time consistent 2D+3D trajectories across 54 indoor and outdoor scenes. JRDB contains over 60 minutes of data including 360 degree cylindrical RGB video and 3D pointclouds in social settings that we use to develop, train and evaluate JRMOT. The presented 3D MOT system demonstrates state-of-the-art performance against competing methods on the popular 2D tracking KITTI benchmark and serves as first 3D tracking solution for our benchmark. Real-robot tests on our social robot JackRabbot indicate that the system is capable of tracking multiple pedestrians fast and reliably. We provide the ROS code of our tracker at https://sites.google.com/view/jrmot.", "field": [], "task": ["Autonomous Navigation", "Motion Planning", "Multi-Object Tracking", "Object Tracking"], "method": [], "dataset": ["KITTI Tracking test"], "metric": ["MOTA"], "title": "JRMOT: A Real-Time 3D Multi-Object Tracker and a New Large-Scale Dataset"} {"abstract": "This paper investigates the notion of learning user and item representations in non-Euclidean space. Specifically, we study the connection between metric learning in hyperbolic space and collaborative filtering by exploring Mobius gyrovector spaces where the formalism of the spaces could be utilized to generalize the most common Euclidean vector operations. Overall, this work aims to bridge the gap between Euclidean and hyperbolic geometry in recommender systems through metric learning approach. We propose HyperML (Hyperbolic Metric Learning), a conceptually simple but highly effective model for boosting the performance. 
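For the hyperbolic metric learning setting described in the HyperML abstract above, the geodesic distance on the Poincare ball (curvature -1) is the usual basic building block. A minimal sketch of that standard distance follows; it illustrates the geometry only and is not the paper's full model.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between two points strictly inside the unit Poincare ball."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    sq_diff = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_diff / (denom + eps))

print(poincare_distance([0.1, 0.0], [0.0, 0.2]))
print(poincare_distance([0.0, 0.0], [0.9, 0.0]))  # points near the boundary are far apart
```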
Via a series of extensive experiments, we show that our proposed HyperML not only outperforms its Euclidean counterparts, but also achieves state-of-the-art performance on multiple benchmark datasets, demonstrating the effectiveness of personalized recommendation in hyperbolic geometry.", "field": [], "task": ["Metric Learning", "Recommendation Systems", "Representation Learning"], "method": [], "dataset": ["MovieLens 1M", "MovieLens 20M"], "metric": ["nDCG@10", "HR@10"], "title": "HyperML: A Boosting Metric Learning Approach in Hyperbolic Space for Recommender Systems"} {"abstract": "Incorporating a knowledge graph (KG) into a recommender system is promising for\nimproving recommendation accuracy and explainability. However, existing\nmethods largely assume that a KG is complete and simply transfer the\n\"knowledge\" in the KG at the shallow level of entity raw data or embeddings. This\nmay lead to suboptimal performance, since a practical KG can hardly be\ncomplete, and it is common that a KG has missing facts, relations, and\nentities. Thus, we argue that it is crucial to consider the incomplete nature\nof the KG when incorporating it into a recommender system.\n In this paper, we jointly learn the model of recommendation and knowledge\ngraph completion. Distinct from previous KG-based recommendation methods, we\ntransfer the relation information in the KG, so as to understand the reasons that a\nuser likes an item. As an example, if a user has watched several movies\ndirected by (relation) the same person (entity), we can infer that the director\nrelation plays a critical role when the user makes the decision, thus helping to\nunderstand the user's preference at a finer granularity.\n Technically, we contribute a new translation-based recommendation model,\nwhich specially accounts for various preferences in translating a user to an\nitem, and then jointly train it with a KG completion model by combining several\ntransfer schemes. Extensive experiments on two benchmark datasets show that our\nmethod outperforms state-of-the-art KG-based recommendation methods. Further\nanalysis verifies the positive effect of joint training on both tasks of\nrecommendation and KG completion, and the advantage of our model in\nunderstanding user preferences. We publish our project at\nhttps://github.com/TaoMiner/joint-kg-recommender.", "field": [], "task": ["Graph Learning", "Knowledge Graph Completion", "Recommendation Systems"], "method": [], "dataset": ["MovieLens 1M", "DBbook2014"], "metric": ["NDCG", "HR@10", "Hits@10", "Mean Rank"], "title": "Unifying Knowledge Graph Learning and Recommendation: Towards a Better Understanding of User Preferences"} {"abstract": "Multimodal attentional networks are currently state-of-the-art models for\nVisual Question Answering (VQA) tasks involving real images. Although attention\nallows focusing on the visual content relevant to the question, this simple\nmechanism is arguably insufficient to model the complex reasoning required\nfor VQA and other high-level tasks.\n In this paper, we propose MuRel, a multimodal relational network which is\nlearned end-to-end to reason over real images.
Our first contribution is the\nintroduction of the MuRel cell, an atomic reasoning primitive representing\ninteractions between question and image regions by a rich vectorial\nrepresentation, and modeling region relations with pairwise combinations.\nSecondly, we incorporate the cell into a full MuRel network, which\nprogressively refines visual and question interactions, and can be leveraged to\ndefine visualization schemes finer than mere attention maps.\n We validate the relevance of our approach with various ablation studies, and\nshow its superiority to attention-based methods on three datasets: VQA 2.0,\nVQA-CP v2 and TDIUC. Our final MuRel network is competitive to or outperforms\nstate-of-the-art results in this challenging context.\n Our code is available: https://github.com/Cadene/murel.bootstrap.pytorch", "field": [], "task": ["Relational Reasoning", "Visual Question Answering"], "method": [], "dataset": ["VQA v2 test-std", "VQA v2 test-dev", "VQA-CP", "TDIUC"], "metric": ["Score", "overall", "Accuracy"], "title": "MUREL: Multimodal Relational Reasoning for Visual Question Answering"} {"abstract": "Low-rank matrix approximation (LRMA) methods have achieved excellent accuracy among today's collaborative filtering (CF) methods. In existing LRMA methods, the rank of user/item feature matrices is typically fixed, i.e., the same rank is adopted to describe all users/items. However, our studies show that submatrices with different ranks could coexist in the same user-item rating matrix, so that approximations with fixed ranks cannot perfectly describe the internal structures of the rating matrix, therefore leading to inferior recommendation accuracy. In this paper, a mixture-rank matrix approximation (MRMA) method is proposed, in which user-item ratings can be characterized by a mixture of LRMA models with different ranks. Meanwhile, a learning algorithm capitalizing on iterated condition modes is proposed to tackle the non-convex optimization problem pertaining to MRMA. Experimental studies on MovieLens and Netflix datasets demonstrate that MRMA can outperform six state-of-the-art LRMA-based CF methods in terms of recommendation accuracy.", "field": [], "task": [], "method": [], "dataset": ["MovieLens 10M"], "metric": ["RMSE"], "title": "Mixture-Rank Matrix Approximation for Collaborative Filtering"} {"abstract": "In interactive instance segmentation, users give feedback to iteratively refine segmentation masks. The user-provided clicks are transformed into guidance maps which provide the network with necessary cues on the whereabouts of the object of interest. Guidance maps used in current systems are purely distance-based and are either too localized or non-informative. We propose a novel transformation of user clicks to generate content-aware guidance maps that leverage the hierarchical structural information present in an image. Using our guidance maps, even the most basic FCNs are able to outperform existing approaches that require state-of-the-art segmentation networks pre-trained on large scale segmentation datasets. We demonstrate the effectiveness of our proposed transformation strategy through comprehensive experimentation in which we significantly raise state-of-the-art on four standard interactive segmentation benchmarks. 
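The interactive instance segmentation abstract above contrasts its content-aware guidance maps with the purely distance-based maps used by earlier systems. Below is a minimal sketch of that distance-based baseline (Euclidean distance to the nearest user click); the content-aware variant in the paper is more involved and is not reproduced here.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_guidance_map(height, width, clicks, truncate=255.0):
    """Per-pixel Euclidean distance to the nearest click, truncated for stability."""
    mask = np.ones((height, width), dtype=bool)
    for y, x in clicks:
        mask[y, x] = False                      # zeros at the clicked pixels
    return np.minimum(distance_transform_edt(mask), truncate)

print(distance_guidance_map(64, 64, [(10, 12), (40, 50)]).shape)  # (64, 64)
```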
\r", "field": [], "task": ["Instance Segmentation", "Interactive Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["Berkeley", "GrabCut"], "metric": ["NoC@90"], "title": "Content-Aware Multi-Level Guidance for Interactive Instance Segmentation"} {"abstract": "Accurately annotating large scale dataset is notoriously expensive both in time and in money. Although acquiring low-quality-annotated dataset can be much cheaper, it often badly damages the performance of trained models when using such dataset without particular treatment. Various methods have been proposed for learning with noisy labels. However, most methods only handle limited kinds of noise patterns, require auxiliary information or steps (e.g., knowing or estimating the noise transition matrix), or lack theoretical justification. In this paper, we propose a novel information-theoretic loss function, L_DMI, for training deep neural networks robust to label noise. The core of L_DMI is a generalized version of mutual information, termed Determinant based Mutual Information (DMI), which is not only information-monotone but also relatively invariant. To the best of our knowledge, L_DMI is the first loss function that is provably robust to instance-independent label noise, regardless of noise pattern, and it can be applied to any existing classification neural networks straightforwardly without any auxiliary information. In addition to theoretical justification, we also empirically show that using L_DMI outperforms all other counterparts in the classification task on both image dataset and natural language dataset include Fashion-MNIST, CIFAR-10, Dogs vs. Cats, MR with a variety of synthesized noise patterns and noise amounts, as well as a real-world dataset Clothing1M.", "field": [], "task": ["Image Classification", "Learning with noisy labels"], "method": [], "dataset": ["Clothing1M"], "metric": ["Accuracy"], "title": "L_DMI: A Novel Information-theoretic Loss Function for Training Deep Nets Robust to Label Noise"} {"abstract": "The interactive image segmentation model allows users to iteratively add new inputs for refinement until a satisfactory result is finally obtained. Therefore, an ideal interactive segmentation model should learn to capture the user's intention with minimal interaction. However, existing models fail to fully utilize the valuable user input information in the segmentation refinement process and thus offer an unsatisfactory user experience. In order to fully exploit the user-provided information, we propose a new deep framework, called Regional Interactive Segmentation Network (RIS-Net), to expand the field-of-view of the given inputs to capture the local regional information surrounding them for local refinement. Additionally, RIS-Net adopts multiscale global contextual information to augment each local region for improving feature representation. We also introduce click discount factors to develop a novel optimization strategy for more effective end-to-end training. Comprehensive evaluations on four challenging datasets well demonstrate the superiority of the proposed RIS-Net over other state-of-the-art approaches.\r", "field": [], "task": ["Interactive Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["GrabCut", "SBD"], "metric": ["NoC@90", "NoC@85"], "title": "Regional Interactive Image Segmentation Networks"} {"abstract": "Human action recognition based on the depth information provided by commodity depth sensors is an important yet challenging task. 
The noisy depth maps, different lengths of action sequences, and free styles in performing actions, may cause large intra-class variations. In this paper, a new framework based on sparse coding and temporal pyramid matching (TPM) is proposed for depth-based human action recognition. Especially, a discriminative class-specific dictionary learning algorithm is proposed for sparse coding. By adding the group sparsity and geometry constraints, features can be well reconstructed by the sub-dictionary belonging to the same class, and the geometry relationships among features are also kept in the calculated coefficients. The proposed approach is evaluated on two benchmark datasets captured by depth cameras. Experimental results show that the proposed algorithm repeatedly achieves superior performance to the state of the art algorithms. Moreover, the proposed dictionary learning method also outperforms classic dictionary learning approaches.", "field": [], "task": ["Action Recognition", "Dictionary Learning", "Multimodal Activity Recognition", "Temporal Action Localization"], "method": [], "dataset": ["MSR Daily Activity3D dataset"], "metric": ["Accuracy"], "title": "Group sparsity and geometry constrained dictionary learning for action recognition from depth maps."} {"abstract": "Since many safety-critical systems, such as surgical robots and autonomous driving cars, are in unstable environments with sensor noise and incomplete data, it is desirable for object detectors to take into account the confidence of localization prediction. There are three limitations of the prior uncertainty estimation methods for anchor-based object detection. 1) They model the uncertainty based on object properties having different characteristics, such as location (center point) and scale (width, height). 2) they model a box offset and ground-truth as Gaussian distribution and Dirac delta distribution, which leads to the model misspecification problem. Because the Dirac delta distribution is not exactly represented as Gaussian, i.e., for any $\\mu$ and $\\Sigma$. 3) Since anchor-based methods are sensitive to hyper-parameters of anchor, the localization uncertainty modeling is also sensitive to these parameters. Therefore, we propose a new localization uncertainty estimation method called Gaussian-FCOS for anchor-free object detection. Our method captures the uncertainty based on four directions of box offsets~(left, right, top, bottom) that have similar properties, which enables to capture which direction is uncertain and provide a quantitative value in range~[0, 1]. To this end, we design a new uncertainty loss, negative power log-likelihood loss, to measure uncertainty by weighting IoU to the likelihood loss, which alleviates the model misspecification problem. Experiments on COCO datasets demonstrate that our Gaussian-FCOS reduces false positives and finds more missing-objects by mitigating over-confidence scores with the estimated uncertainty. We hope Gaussian-FCOS serves as a crucial component for the reliability-required task.", "field": [], "task": ["Autonomous Driving", "Object Detection"], "method": [], "dataset": ["COCO test-dev"], "metric": ["box AP"], "title": "Localization Uncertainty Estimation for Anchor-Free Object Detection"} {"abstract": "Multi-choice Machine Reading Comprehension (MRC) requires model to decide the correct answer from a set of answer options when given a passage and a question. 
Thus, in addition to a powerful Pre-trained Language Model (PrLM) as encoder, multi-choice MRC especially relies on a matching network design which is supposed to effectively capture the relationships among the triplet of passage, question and answers. While the newer and more powerful PrLMs have shown their strength even without the support from a matching network, we propose a new DUal Multi-head Co-Attention (DUMA) model, which is inspired by human's transposition thinking process solving the multi-choice MRC problem: respectively considering each other's focus from the standpoint of passage and question. The proposed DUMA has been shown effective and is capable of generally promoting PrLMs. Our proposed method is evaluated on two benchmark multi-choice MRC tasks, DREAM and RACE, showing that in terms of powerful PrLMs, DUMA can still boost the model to reach new state-of-the-art performance.", "field": [], "task": ["Language Modelling", "Machine Reading Comprehension", "Reading Comprehension"], "method": [], "dataset": ["RACE"], "metric": ["Accuracy (High)", "Accuracy (Middle)", "Accuracy"], "title": "DUMA: Reading Comprehension with Transposition Thinking"} {"abstract": "Words in natural language follow a Zipfian distribution whereby some words\nare frequent but most are rare. Learning representations for words in the \"long\ntail\" of this distribution requires enormous amounts of data. Representations\nof rare words trained directly on end tasks are usually poor, requiring us to\npre-train embeddings on external data, or treat all rare words as\nout-of-vocabulary words with a unique representation. We provide a method for\npredicting embeddings of rare words on the fly from small amounts of auxiliary\ndata with a network trained end-to-end for the downstream task. We show that\nthis improves results against baselines where embeddings are trained on the end\ntask for reading comprehension, recognizing textual entailment and language\nmodeling.", "field": [], "task": ["Language Modelling", "Natural Language Inference", "Question Answering", "Reading Comprehension", "Word Embeddings"], "method": [], "dataset": ["SQuAD1.1 dev", "SQuAD1.1"], "metric": ["EM", "F1"], "title": "Learning to Compute Word Embeddings On the Fly"} {"abstract": "Click-Through Rate (CTR) estimation has become one of the most fundamental tasks in many real-world applications and it's important for ranking models to effectively capture complex high-order features. Shallow feed-forward networks are widely used in many state-of-the-art DNN models such as FNN, DeepFM and xDeepFM to implicitly capture high-order feature interactions. However, some research has proved that additive feature interaction, particularly in feed-forward neural networks, is inefficient in capturing common feature interactions. To resolve this problem, we introduce a specific multiplicative operation into the DNN ranking system by proposing an instance-guided mask which performs an element-wise product both on the feature embedding and feed-forward layers, guided by the input instance. We also turn the feed-forward layer in the DNN model into a mixture of additive and multiplicative feature interactions by proposing MaskBlock in this paper. MaskBlock combines the layer normalization, instance-guided mask, and feed-forward layer, and it is a basic building block that can be used to design new ranking models under various configurations.
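Following the MaskBlock description above (layer normalization, instance-guided mask, feed-forward layer), here is a hedged sketch of such a unit. Layer sizes, the ReLU activations, and the two-layer mask generator are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MaskBlockSketch(nn.Module):
    """Hedged sketch of a MaskBlock-style unit: LayerNorm -> instance-guided
    element-wise mask -> feed-forward layer."""

    def __init__(self, input_dim, hidden_dim, mask_hidden_dim):
        super().__init__()
        self.norm = nn.LayerNorm(input_dim)
        # Mask generator conditioned on the raw input instance (the "guide").
        self.mask_mlp = nn.Sequential(
            nn.Linear(input_dim, mask_hidden_dim),
            nn.ReLU(),
            nn.Linear(mask_hidden_dim, input_dim),
        )
        self.ffn = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())

    def forward(self, instance_emb, hidden):
        mask = self.mask_mlp(instance_emb)      # instance-guided mask
        masked = self.norm(hidden) * mask       # element-wise product
        return self.ffn(masked)

# Toy usage: flattened feature embeddings of a batch of 4 instances.
x = torch.randn(4, 64)
block = MaskBlockSketch(input_dim=64, hidden_dim=64, mask_hidden_dim=128)
print(block(x, x).shape)  # torch.Size([4, 64])
```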
The model consisting of MaskBlock is called MaskNet in this paper and two new MaskNet models are proposed to show the effectiveness of MaskBlock as basic building block for composing high performance ranking systems. The experiment results on three real-world datasets demonstrate that our proposed MaskNet models outperform state-of-the-art models such as DeepFM and xDeepFM significantly, which implies MaskBlock is an effective basic building unit for composing new high performance ranking systems.", "field": [], "task": ["Click-Through Rate Prediction", "Recommendation Systems"], "method": [], "dataset": ["Criteo"], "metric": ["AUC"], "title": "MaskNet: Introducing Feature-Wise Multiplication to CTR Ranking Models by Instance-Guided Mask"} {"abstract": "Spatial and temporal stream model has gained great success in video action recognition. Most existing works pay more attention to designing effective features fusion methods, which train the two-stream model in a separate way. However, it's hard to ensure discriminability and explore complementary information between different streams in existing works. In this work, we propose a novel cooperative cross-stream network that investigates the conjoint information in multiple different modalities. The jointly spatial and temporal stream networks feature extraction is accomplished by an end-to-end learning manner. It extracts this complementary information of different modality from a connection block, which aims at exploring correlations of different stream features. Furthermore, different from the conventional ConvNet that learns the deep separable features with only one cross-entropy loss, our proposed model enhances the discriminative power of the deeply learned features and reduces the undesired modality discrepancy by jointly optimizing a modality ranking constraint and a cross-entropy loss for both homogeneous and heterogeneous modalities. The modality ranking constraint constitutes intra-modality discriminative embedding and inter-modality triplet constraint, and it reduces both the intra-modality and cross-modality feature variations. Experiments on three benchmark datasets demonstrate that by cooperating appearance and motion feature extraction, our method can achieve state-of-the-art or competitive performance compared with existing results.", "field": [], "task": ["Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["UCF101", "HMDB-51", "Something-Something V2"], "metric": ["Top-5 Accuracy", "Average accuracy of 3 splits", "3-fold Accuracy", "Top-1 Accuracy"], "title": "Cooperative Cross-Stream Network for Discriminative Action Representation"} {"abstract": "Deep metric learning aims to learn an embedding function, modeled as deep\nneural network. This embedding function usually puts semantically similar\nimages close while dissimilar images far from each other in the learned\nembedding space. Recently, ensemble has been applied to deep metric learning to\nyield state-of-the-art results. As one important aspect of ensemble, the\nlearners should be diverse in their feature embeddings. To this end, we propose\nan attention-based ensemble, which uses multiple attention masks, so that each\nlearner can attend to different parts of the object. We also propose a\ndivergence loss, which encourages diversity among the learners. 
The proposed\nmethod is applied to the standard benchmarks of deep metric learning and\nexperimental results show that it outperforms the state-of-the-art methods by a\nsignificant margin on image retrieval tasks.", "field": [], "task": ["Image Retrieval", "Metric Learning"], "method": [], "dataset": [" CUB-200-2011", "In-Shop", "CARS196", "SOP"], "metric": ["R@1"], "title": "Attention-based Ensemble for Deep Metric Learning"} {"abstract": "Prediction of future states of the environment and interacting agents is a key competence required for autonomous agents to operate successfully in the real world. Prior work for structured sequence prediction based on latent variable models imposes a uni-modal standard Gaussian prior on the latent variables. This induces a strong model bias which makes it challenging to fully capture the multi-modality of the distribution of the future states. In this work, we introduce Conditional Flow Variational Autoencoders (CF-VAE) using our novel conditional normalizing flow based prior to capture complex multi-modal conditional distributions for effective structured sequence prediction. Moreover, we propose two novel regularization schemes which stabilizes training and deals with posterior collapse for stable training and better fit to the target data distribution. Our experiments on three multi-modal structured sequence prediction datasets -- MNIST Sequences, Stanford Drone and HighD -- show that the proposed method obtains state of art results across different evaluation metrics.", "field": [], "task": ["Latent Variable Models", "Trajectory Prediction"], "method": [], "dataset": ["Stanford Drone"], "metric": ["ADE-8/12 @K = 20", "FDE-8/12 @K= 20"], "title": "Conditional Flow Variational Autoencoders for Structured Sequence Prediction"} {"abstract": "Multi-agent interacting systems are prevalent in the world, from pure physical systems to complicated social dynamic systems. In many applications, effective understanding of the situation and accurate trajectory prediction of interactive agents play a significant role in downstream tasks, such as decision making and planning. In this paper, we propose a generic trajectory forecasting framework (named EvolveGraph) with explicit relational structure recognition and prediction via latent interaction graphs among multiple heterogeneous, interactive agents. Considering the uncertainty of future behaviors, the model is designed to provide multi-modal prediction hypotheses. Since the underlying interactions may evolve even with abrupt changes, and different modalities of evolution may lead to different outcomes, we address the necessity of dynamic relational reasoning and adaptively evolving the interaction graphs. We also introduce a double-stage training pipeline which not only improves training efficiency and accelerates convergence, but also enhances model performance. The proposed framework is evaluated on both synthetic physics simulations and multiple real-world benchmark datasets in various areas. 
The experimental results illustrate that our approach achieves state-of-the-art performance in terms of prediction accuracy.", "field": [], "task": ["Autonomous Driving", "Autonomous Vehicles", "Decision Making", "Relational Reasoning", "Trajectory Forecasting", "Trajectory Prediction"], "method": [], "dataset": ["Stanford Drone"], "metric": ["ADE-8/12 @K = 20", "FDE-8/12 @K= 20"], "title": "EvolveGraph: Multi-Agent Trajectory Prediction with Dynamic Relational Reasoning"} {"abstract": "Effective understanding of the environment and accurate trajectory prediction of surrounding dynamic obstacles are critical for intelligent systems such as autonomous vehicles and wheeled mobile robotics navigating in complex scenarios to achieve safe and high-quality decision making, motion planning and control. Due to the uncertain nature of the future, it is desired to make inference from a probability perspective instead of deterministic prediction. In this paper, we propose a conditional generative neural system (CGNS) for probabilistic trajectory prediction to approximate the data distribution, with which realistic, feasible and diverse future trajectory hypotheses can be sampled. The system combines the strengths of conditional latent space learning and variational divergence minimization, and leverages both static context and interaction information with soft attention mechanisms. We also propose a regularization method for incorporating soft constraints into deep neural networks with differentiable barrier functions, which can regulate and push the generated samples into the feasible regions. The proposed system is evaluated on several public benchmark datasets for pedestrian trajectory prediction and a roundabout naturalistic driving dataset collected by ourselves. The experimental results demonstrate that our model achieves better performance than various baseline approaches in terms of prediction accuracy.", "field": [], "task": ["Autonomous Vehicles", "Decision Making", "Motion Planning", "Trajectory Prediction"], "method": [], "dataset": ["Stanford Drone", "ETH/UCY"], "metric": ["ADE-8/12", "ADE-8/12 @K = 20", "FDE-8/12 @K= 20"], "title": "Conditional Generative Neural System for Probabilistic Trajectory Prediction"} {"abstract": "Humans navigate complex crowded environments based on social conventions: they respect personal space, yielding right-of-way and avoid collisions. In our work, we propose a data-driven approach to learn these human-human interactions for predicting their future trajectories. This is in contrast to traditional approaches which use hand-crafted functions such as Social forces. We present a new Long Short-Term Memory (LSTM) model which jointly reasons across multiple individuals in a scene. Different from the conventional LSTM, we share the information between multiple LSTMs through a new pooling layer. This layer pools the hidden representation from LSTMs corresponding to neighboring trajectories to capture interactions within this neighborhood. We demonstrate the performance of our method on several public datasets. Our model outperforms previous forecasting methods by more than 42% . 
We also analyze the trajectories predicted by our model to demonstrate social behaviours such as collision avoidance and group movement, learned by our model.", "field": [], "task": ["Trajectory Prediction"], "method": [], "dataset": ["Stanford Drone"], "metric": ["ADE (8/12) @K=5", "FDE(8/12) @K=5"], "title": "Social LSTM: Human Trajectory Prediction in Crowded Spaces"} {"abstract": "Connectionist temporal classification (CTC) is widely used for maximum\nlikelihood learning in end-to-end speech recognition models. However, there is\nusually a disparity between the negative maximum likelihood and the performance\nmetric used in speech recognition, e.g., word error rate (WER). This results in\na mismatch between the objective function and metric during training. We show\nthat the above problem can be mitigated by jointly training with maximum\nlikelihood and policy gradient. In particular, with policy learning we are able\nto directly optimize on the (otherwise non-differentiable) performance metric.\nWe show that joint training improves relative performance by 4% to 13% for our\nend-to-end model as compared to the same model learned through maximum\nlikelihood. The model achieves 5.53% WER on Wall Street Journal dataset, and\n5.42% and 14.70% on Librispeech test-clean and test-other set, respectively.", "field": [], "task": ["End-To-End Speech Recognition", "Speech Recognition"], "method": [], "dataset": ["LibriSpeech test-clean"], "metric": ["Word Error Rate (WER)"], "title": "Improving End-to-End Speech Recognition with Policy Learning"} {"abstract": "Skeleton-based human action recognition has become an active research area in recent years. The key to this task is to fully explore both spatial and temporal features. Recently, GCN-based methods modeling the human body skeletons as spatial-temporal graphs, have achieved remarkable performances. However, most GCN-based methods use a fixed adjacency matrix defined by the dataset, which can only capture the structural information provided by joints directly connected through bones and ignore the dependencies between distant joints that are not connected. In addition, such a fixed adjacency matrix used in all layers leads to the network failing to extract multi-level semantic features. In this paper we propose a pseudo graph convolutional network with temporal and channel-wise attention (PGCN-TCA) to solve this problem. The fixed normalized adjacent matrix is substituted with a learnable matrix. In this way, the matrix can learn the dependencies between connected joints and joints that are not physically connected. At the same time, learnable matrices in different layers can help the network capture multi-level features in spatial domain. Moreover, Since frames and input channels that contain outstanding characteristics play significant roles in distinguishing the action from others, we propose a mixed temporal and channel-wise attention. 
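To illustrate the PGCN-TCA idea above of replacing the fixed, dataset-defined adjacency matrix with a learnable one: a minimal graph-convolution layer whose adjacency is a trainable parameter, so the network can also link joints that are not physically connected. The softmax row-normalization and identity initialization are our assumptions.

```python
import torch
import torch.nn as nn

class LearnableAdjGraphConv(nn.Module):
    """Hedged sketch of a graph convolution with a learnable adjacency."""

    def __init__(self, in_channels, out_channels, num_joints, init_adj=None):
        super().__init__()
        adj = init_adj if init_adj is not None else torch.eye(num_joints)
        self.adj = nn.Parameter(adj.clone().float())   # learnable, can link distant joints
        self.proj = nn.Linear(in_channels, out_channels)

    def forward(self, x):
        # x: (batch, num_joints, in_channels)
        a = torch.softmax(self.adj, dim=-1)            # keep rows normalised
        return torch.relu(self.proj(a @ x))            # aggregate, then transform

# Toy usage: batch of 2 skeletons, 25 joints, 3-D coordinates per joint.
x = torch.randn(2, 25, 3)
layer = LearnableAdjGraphConv(3, 16, num_joints=25)
print(layer(x).shape)  # torch.Size([2, 25, 16])
```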
Our method achieves comparable performances to state-of-the-art methods on NTU-RGB+D and HDM05 datasets.", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "PGCN-TCA: Pseudo Graph Convolutional Network With Temporal and Channel-Wise Attention for Skeleton-Based Action Recognition"} {"abstract": "This paper extends the Spatial-Temporal Graph Convolutional Network (ST-GCN) for skeleton-based action recognition by introducing two novel modules, namely, the Graph Vertex Feature Encoder (GVFE) and the Dilated Hierarchical Temporal Convolutional Network (DH-TCN). On the one hand, the GVFE module learns appropriate vertex features for action recognition by encoding raw skeleton data into a new feature space. On the other hand, the DH-TCN module is capable of capturing both short-term and long-term temporal dependencies using a hierarchical dilated convolutional network. Experiments have been conducted on the challenging NTU RGB-D-60 and NTU RGB-D 120 datasets. The obtained results show that our method competes with state-of-the-art approaches while using a smaller number of layers and parameters; thus reducing the required training time and memory.", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition"], "method": [], "dataset": ["NTU RGB+D", "NTU RGB+D 120"], "metric": ["Accuracy (Cross-Subject)", "Accuracy (Cross-Setup)", "Accuracy (CV)", "Accuracy (CS)"], "title": "Vertex Feature Encoding and Hierarchical Temporal Modeling in a Spatial-Temporal Graph Convolutional Network for Action Recognition"} {"abstract": "Sentiment Analysis is an important algorithm in Natural Language Processing which is used to detect sentiment within some text. In our project, we had chosen to work on analyzing reviews of various drugs which have been reviewed in form of texts and have also been given a rating on a scale from 1-10. We had obtained this data set from the UCI machine learning repository which had 2 data sets: train and test (split as 75-25\\%). We had split the number rating for the drug into three classes in general: positive (7-10), negative (1-4) or neutral(4-7). There are multiple reviews for the drugs that belong to a similar condition and we decided to investigate how the reviews for different conditions use different words impact the ratings of the drugs. Our intention was mainly to implement supervised machine learning classification algorithms that predict the class of the rating using the textual review. We had primarily implemented different embeddings such as Term Frequency Inverse Document Frequency (TFIDF) and the Count Vectors (CV). We had trained models on the most popular conditions such as \"Birth Control\", \"Depression\" and \"Pain\" within the data set and obtained good results while predicting the test data sets.", "field": [], "task": ["Sentiment Analysis"], "method": [], "dataset": [], "metric": [], "title": "Sentiment Analysis in Drug Reviews using Supervised Machine Learning Algorithms"} {"abstract": "Recurrent neural networks (RNNs) are capable of modeling the temporal\ndynamics of complex sequential information. However, the structures of existing\nRNN neurons mainly focus on controlling the contributions of current and\nhistorical information but do not explore the different importance levels of\ndifferent elements in an input vector of a time slot. 
We propose adding a\nsimple yet effective Element-wiseAttention Gate (EleAttG) to an RNN block\n(e.g., all RNN neurons in a network layer) that empowers the RNN neurons to\nhave the attentiveness capability. For an RNN block, an EleAttG is added to\nadaptively modulate the input by assigning different levels of importance,\ni.e., attention, to each element/dimension of the input. We refer to an RNN\nblock equipped with an EleAttG as an EleAtt-RNN block. Specifically, the\nmodulation of the input is content adaptive and is performed at fine\ngranularity, being element-wise rather than input-wise. The proposed EleAttG,\nas an additional fundamental unit, is general and can be applied to any RNN\nstructures, e.g., standard RNN, Long Short-Term Memory (LSTM), or Gated\nRecurrent Unit (GRU). We demonstrate the effectiveness of the proposed\nEleAtt-RNN by applying it to the action recognition tasks on both 3D human\nskeleton data and RGB videos. Experiments show that adding attentiveness\nthrough EleAttGs to RNN blocks significantly boosts the power of RNNs.", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "Adding Attentiveness to the Neurons in Recurrent Neural Networks"} {"abstract": "This paper presents a new framework for human action recognition from a 3D\nskeleton sequence. Previous studies do not fully utilize the temporal\nrelationships between video segments in a human action. Some studies\nsuccessfully used very deep Convolutional Neural Network (CNN) models but often\nsuffer from the data insufficiency problem. In this study, we first segment a\nskeleton sequence into distinct temporal segments in order to exploit the\ncorrelations between them. The temporal and spatial features of a skeleton\nsequence are then extracted simultaneously by utilizing a fine-to-coarse (F2C)\nCNN architecture optimized for human skeleton sequences. We evaluate our\nproposed method on NTU RGB+D and SBU Kinect Interaction dataset. It achieves\n79.6% and 84.6% of accuracies on NTU RGB+D with cross-object and cross-view\nprotocol, respectively, which are almost identical with the state-of-the-art\nperformance. In addition, our method significantly improves the accuracy of the\nactions in two-person interactions.", "field": [], "task": ["3D Action Recognition", "Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "A Fine-to-Coarse Convolutional Neural Network for 3D Human Action Recognition"} {"abstract": "This letter presents SkeletonNet, a deep learning framework for skeleton-based 3-D action recognition. Given a skeleton sequence, the spatial structure of the skeleton joints in each frame and the temporal information between multiple frames are two important factors for action recognition. We first extract body-part-based features from each frame of the skeleton sequence. Compared to the original coordinates of the skeleton joints, the proposed features are translation, rotation, and scale invariant. 
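Returning to the EleAttG idea described above: a hedged sketch of an element-wise attention gate wrapped around a standard GRU cell, so each input dimension is rescaled before the recurrent update. The single-linear-layer gate is an assumption; the abstract states the gate applies to RNN, LSTM and GRU blocks more generally.

```python
import torch
import torch.nn as nn

class EleAttGRUCell(nn.Module):
    """Hedged sketch of an element-wise attention gate around a GRU cell."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.gate = nn.Linear(input_size + hidden_size, input_size)
        self.cell = nn.GRUCell(input_size, hidden_size)

    def forward(self, x_t, h_prev):
        a_t = torch.sigmoid(self.gate(torch.cat([x_t, h_prev], dim=-1)))
        return self.cell(a_t * x_t, h_prev)   # modulated input, standard GRU update

# Toy usage: one time step for a batch of 4 skeleton frames (75-D input).
cell = EleAttGRUCell(input_size=75, hidden_size=128)
h = torch.zeros(4, 128)
x = torch.randn(4, 75)
h = cell(x, h)
print(h.shape)  # torch.Size([4, 128])
```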
To learn robust temporal information, instead of treating the features of all frames as a time series, we transform the features into images and feed them to the proposed deep learning network, which contains two parts: one to extract general features from the input images, while the other to generate a discriminative and compact representation for action recognition. The proposed method is tested on the SBU kinect interaction dataset, the CMU dataset, and the large-scale NTU RGB+D dataset and achieves state-of-the-art performance.", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Time Series"], "method": [], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "Skeletonnet: Mining deep part features for 3-d action recognition"} {"abstract": "Recent advances on human motion analysis have made the extraction of human skeleton structure feasible, even from single depth images. This structure has been proven quite informative for discriminating actions in a recognition scenario. In this context, we propose a local skeleton descriptor that encodes the relative position of joint quadruples. Such a coding implies a similarity normalisation transform that leads to a compact (6D) view-invariant skeletal feature, referred to as skeletal quad. Further, the use of a Fisher kernel representation is suggested to describe the skeletal quads contained in a (sub)action. A Gaussian mixture model is learnt from training data, so that the generation of any set of quads is encoded by its Fisher vector. Finally, a multi-level representation of Fisher vectors leads to an action description that roughly carries the order of sub-action within each action sequence. Efficient classification is here achieved by linear SVMs. The proposed action representation is tested on widely used datasets, MSRAction3D and HDM05. The experimental evaluation shows that the proposed method outperforms state-of-the-art algorithms that rely only on joints, while it competes with methods that combine joints with extra cues.", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "Skeletal quads: Human action recognition using joint quadruples"} {"abstract": "Existing deep embedding methods in vision tasks are capable of learning a\ncompact Euclidean space from images, where Euclidean distances correspond to a\nsimilarity metric. To make learning more effective and efficient, hard sample\nmining is usually employed, with samples identified through computing the\nEuclidean feature distance. However, the global Euclidean distance cannot\nfaithfully characterize the true feature similarity in a complex visual feature\nspace, where the intraclass distance in a high-density region may be larger\nthan the interclass distance in low-density regions. In this paper, we\nintroduce a Position-Dependent Deep Metric (PDDM) unit, which is capable of\nlearning a similarity metric adaptive to local feature structure. The metric\ncan be used to select genuinely hard samples in a local neighborhood to guide\nthe deep embedding learning in an online and robust manner. The new layer is\nappealing in that it is pluggable to any convolutional networks and is trained\nend-to-end. 
Our local similarity-aware feature embedding not only demonstrates\nfaster convergence and boosted performance on two complex image retrieval\ndatasets, its large margin nature also leads to superior generalization results\nunder the large and open set scenarios of transfer learning and zero-shot\nlearning on ImageNet 2010 and ImageNet-10K datasets.", "field": [], "task": ["Image Retrieval", "Transfer Learning", "Zero-Shot Learning"], "method": [], "dataset": [" CUB-200-2011"], "metric": ["R@1"], "title": "Local Similarity-Aware Deep Feature Embedding"} {"abstract": "Even in the absence of any explicit semantic annotation, vast collections of\naudio recordings provide valuable information for learning the categorical\nstructure of sounds. We consider several class-agnostic semantic constraints\nthat apply to unlabeled nonspeech audio: (i) noise and translations in time do\nnot change the underlying sound category, (ii) a mixture of two sound events\ninherits the categories of the constituents, and (iii) the categories of events\nin close temporal proximity are likely to be the same or related. Without\nlabels to ground them, these constraints are incompatible with classification\nloss functions. However, they may still be leveraged to identify geometric\ninequalities needed for triplet loss-based training of convolutional neural\nnetworks. The result is low-dimensional embeddings of the input spectrograms\nthat recover 41% and 84% of the performance of their fully-supervised\ncounterparts when applied to downstream query-by-example sound retrieval and\nsound event classification tasks, respectively. Moreover, in\nlimited-supervision settings, our unsupervised embeddings double the\nstate-of-the-art classification performance.", "field": [], "task": ["Audio Classification"], "method": [], "dataset": ["AudioSet"], "metric": ["Test mAP"], "title": "Unsupervised Learning of Semantic Audio Representations"} {"abstract": "Convolutional layers in graph neural networks are a fundamental type of layer which output a representation or embedding of each graph vertex. The representation typically encodes information about the vertex in question and its neighbourhood. If one wishes to perform a graph centric task, such as graph classification, this set of vertex representations must be integrated or pooled to form a graph representation. In this article we propose a novel pooling method which maps a set of vertex representations to a function space representation. This method is distinct from existing pooling methods which perform a mapping to either a vector or sequence space. Experimental graph classification results demonstrate that the proposed method generally outperforms most baseline pooling methods and in some cases achieves best performance.", "field": [], "task": ["Graph Classification"], "method": [], "dataset": ["PROTEINS", "MUTAG"], "metric": ["Accuracy"], "title": "Function Space Pooling For Graph Convolutional Networks"} {"abstract": "Motion is a salient cue to recognize actions in video. Modern action recognition models leverage motion information either explicitly by using optical flow as input or implicitly by means of 3D convolutional filters that simultaneously capture appearance and motion information. This paper proposes an alternative approach based on a learnable correlation operator that can be used to establish frame-toframe matches over convolutional feature maps in the different layers of the network. 
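To make the correlation-operator idea above concrete (frame-to-frame matches over convolutional feature maps): a minimal sketch that, for each spatial position in one frame's features, computes dot products with a local neighbourhood in the next frame's features. The neighbourhood size and the channel normalization are our choices, not the paper's.

```python
import torch
import torch.nn.functional as F

def local_correlation(feat_a, feat_b, max_disp=3):
    """Hedged sketch of a frame-to-frame correlation operator.
    Shapes: (B, C, H, W) x (B, C, H, W) -> (B, (2*max_disp+1)^2, H, W)."""
    b, c, h, w = feat_a.shape
    padded = F.pad(feat_b, [max_disp] * 4)
    out = []
    for dy in range(2 * max_disp + 1):
        for dx in range(2 * max_disp + 1):
            shifted = padded[:, :, dy:dy + h, dx:dx + w]
            out.append((feat_a * shifted).sum(dim=1, keepdim=True) / c)
    return torch.cat(out, dim=1)

# Toy usage: features of two adjacent frames.
fa, fb = torch.randn(1, 64, 14, 14), torch.randn(1, 64, 14, 14)
print(local_correlation(fa, fb).shape)  # torch.Size([1, 49, 14, 14])
```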
The proposed architecture enables the fusion of this explicit temporal matching information with traditional appearance cues captured by 2D convolution. Our correlation network compares favorably with widely-used 3D CNNs for video modeling, and achieves competitive results over the prominent two-stream network while being much faster to train. We empirically demonstrate that correlation networks produce strong results on a variety of video datasets, and outperform the state of the art on four popular benchmarks for action recognition: Kinetics, Something-Something, Diving48 and Sports1M.", "field": [], "task": ["Action Classification", "Action Recognition", "Optical Flow Estimation"], "method": [], "dataset": ["Kinetics-400"], "metric": ["Vid acc@1"], "title": "Video Modeling with Correlation Networks"} {"abstract": "Two-stream network architecture has the ability to capture temporal and spatial features from videos simultaneously and has achieved excellent performance on video action recognition tasks. However, there is a fair amount of redundant information in both temporal and spatial dimensions in videos, which increases the complexity of network learning. To solve this problem, we propose residual spatial-temporal attention network (R-STAN), a feed-forward convolutional neural network using residual learning and spatial-temporal attention mechanism for video action recognition, which makes the network focus more on discriminative temporal and spatial features. In our R-STAN, each stream is constructed by stacking residual spatial-temporal attention blocks (R-STAB), the spatial-temporal attention modules integrated in the residual blocks have the ability to generate attention-aware features along temporal and spatial dimensions, which largely reduce the redundant information. Together with the specific characteristic of residual learning, we are able to construct a very deep network for learning spatial-temporal information in videos. With the layers going deeper, the attention-aware features from the different R-STABs can change adaptively. We validate our R-STAN through a large number of experiments on UCF101 and HMDB51 datasets. Our experiments show that our proposed network combined with residual learning and spatial-temporal attention mechanism contributes substantially to the performance of video action recognition.", "field": [], "task": ["Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["UCF101", "HMDB-51"], "metric": ["Average accuracy of 3 splits", "3-fold Accuracy"], "title": "R-STAN: Residual Spatial-Temporal Attention Network for Action Recognition"} {"abstract": "Spatio-temporal representations in frame sequences play an important role in\nthe task of action recognition. Previously, a method of using optical flow as a\ntemporal information in combination with a set of RGB images that contain\nspatial information has shown great performance enhancement in the action\nrecognition tasks. However, it has an expensive computational cost and requires\ntwo-stream (RGB and optical flow) framework. In this paper, we propose MFNet\n(Motion Feature Network) containing motion blocks which make it possible to\nencode spatio-temporal information between adjacent frames in a unified network\nthat can be trained end-to-end. The motion block can be attached to any\nexisting CNN-based action recognition frameworks with only a small additional\ncost. 
We evaluated our network on two of the action recognition datasets\n(Jester and Something-Something) and achieved competitive performances for both\ndatasets by training the networks from scratch.", "field": [], "task": ["Action Recognition", "Optical Flow Estimation", "Temporal Action Localization"], "method": [], "dataset": ["Jester", "Something-Something V1"], "metric": ["Val", "Top 1 Accuracy"], "title": "Motion Feature Network: Fixed Motion Filter for Action Recognition"} {"abstract": "Knowledge representation of graph-based systems is fundamental across many disciplines. To date, most existing methods for representation learning primarily focus on networks with simplex labels, yet real-world objects (nodes) are inherently complex in nature and often contain rich semantics or labels, e.g., a user may belong to diverse interest groups of a social network, resulting in multi-label networks for many applications. The multi-label network nodes not only have multiple labels for each node, such labels are often highly correlated making existing methods ineffective or fail to handle such correlation for node representation learning. In this paper, we propose a novel multi-label graph convolutional network (ML-GCN) for learning node representation for multi-label networks. To fully explore label-label correlation and network topology structures, we propose to model a multi-label network as two Siamese GCNs: a node-node-label graph and a label-label-node graph. The two GCNs each handle one aspect of representation learning for nodes and labels, respectively, and they are seamlessly integrated under one objective function. The learned label representations can effectively preserve the inner-label interaction and node label properties, and are then aggregated to enhance the node representation learning under a unified training framework. Experiments and comparisons on multi-label node classification validate the effectiveness of our proposed approach.", "field": [], "task": ["Multi-Label Classification", "Node Classification", "Representation Learning"], "method": [], "dataset": ["MS-COCO"], "metric": ["mAP"], "title": "Multi-Label Graph Convolutional Network Representation Learning"} {"abstract": "Differentiable rendering is a very successful technique that applies to a Single-View 3D Reconstruction. Current renderers use losses based on pixels between a rendered image of some 3D reconstructed object and ground-truth images from given matched viewpoints to optimise parameters of the 3D shape. These models require a rendering step, along with visibility handling and evaluation of the shading model. The main goal of this paper is to demonstrate that we can avoid these steps and still get reconstruction results as other state-of-the-art models that are equal or even better than existing category-specific reconstruction methods. First, we use the same CNN architecture for the prediction of a point cloud shape and pose prediction like the one used by Insafutdinov \\& Dosovitskiy. Secondly, we propose the novel effective loss function that evaluates how well the projections of reconstructed 3D point clouds cover the ground truth object's silhouette. Then we use Poisson Surface Reconstruction to transform the reconstructed point cloud into a 3D mesh. Finally, we perform a GAN-based texture mapping on a particular 3D mesh and produce a textured 3D mesh from a single 2D image. 
We evaluate our method on different datasets (including ShapeNet, CUB-200-2011, and Pascal3D+) and achieve state-of-the-art results, outperforming all the other supervised and unsupervised methods and 3D representations, all in terms of performance, accuracy, and training time.", "field": [], "task": ["3D Reconstruction", "Pose Prediction", "Single-View 3D Reconstruction"], "method": [], "dataset": ["ShapeNet"], "metric": ["Mean", "Mean IoU", "3DIoU"], "title": "An Effective Loss Function for Generating 3D Models from Single 2D Image without Rendering"} {"abstract": "Visual Question Answering (VQA) requires a fine-grained and simultaneous understanding of both the visual content of images and the textual content of questions. Therefore, designing an effective `co-attention' model to associate key words in questions with key objects in images is central to VQA performance. So far, most successful attempts at co-attention learning have been achieved by using shallow models, and deep co-attention models show little improvement over their shallow counterparts. In this paper, we propose a deep Modular Co-Attention Network (MCAN) that consists of Modular Co-Attention (MCA) layers cascaded in depth. Each MCA layer models the self-attention of questions and images, as well as the guided-attention of images jointly using a modular composition of two basic attention units. We quantitatively and qualitatively evaluate MCAN on the benchmark VQA-v2 dataset and conduct extensive ablation studies to explore the reasons behind MCAN's effectiveness. Experimental results demonstrate that MCAN significantly outperforms the previous state-of-the-art. Our best single model delivers 70.63$\\%$ overall accuracy on the test-dev set. Code is available at https://github.com/MILVLG/mcan-vqa.", "field": [], "task": ["Question Answering", "Visual Question Answering"], "method": [], "dataset": ["VQA v2 test-std", "VQA v2 test-dev"], "metric": ["overall", "Accuracy"], "title": "Deep Modular Co-Attention Networks for Visual Question Answering"} {"abstract": "Semantic segmentation is one of the key tasks in computer vision, which is to assign a category label to each pixel in an image. Despite significant progress achieved recently, most existing methods still suffer from two challenging issues: 1) the size of objects and stuff in an image can be very diverse, demanding for incorporating multi-scale features into the fully convolutional networks (FCNs); 2) the pixels close to or at the boundaries of object/stuff are hard to classify due to the intrinsic weakness of convolutional networks. To address the first issue, we propose a new Multi-Receptive Field Module (MRFM), explicitly taking multi-scale features into account. For the second issue, we design an edge-aware loss which is effective in distinguishing the boundaries of object/stuff. With these two designs, our Multi Receptive Field Network achieves new state-of-the-art results on two widely-used semantic segmentation benchmark datasets. Specifically, we achieve a mean IoU of 83.0 on the Cityscapes dataset and 88.4 mean IoU on the Pascal VOC2012 dataset.", "field": [], "task": ["Semantic Segmentation"], "method": [], "dataset": ["PASCAL VOC 2012 test", "Cityscapes test"], "metric": ["Mean IoU", "Mean IoU (class)"], "title": "Multi Receptive Field Network for Semantic Segmentation"} {"abstract": "Recently, remarkable advances have been achieved in 3D human pose estimation\nfrom monocular images because of the powerful Deep Convolutional Neural\nNetworks (DCNNs). 
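As an illustrative aside to the MCAN entry above: a hedged sketch of one modular co-attention step, i.e. question self-attention followed by image-region attention guided by the question, built on PyTorch's MultiheadAttention. The dimensions, head count, and the omitted residual/feed-forward parts are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GuidedAttentionSketch(nn.Module):
    """Hedged sketch of question self-attention plus question-guided image attention."""

    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.guided_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, question, image_regions):
        q, _ = self.self_attn(question, question, question)   # question self-attention
        v, _ = self.guided_attn(image_regions, q, q)           # image attends to question
        return q, v

# Toy usage: 14 question tokens and 36 image-region features, dim 512.
q_feats = torch.randn(2, 14, 512)
v_feats = torch.randn(2, 36, 512)
block = GuidedAttentionSketch()
q_out, v_out = block(q_feats, v_feats)
print(q_out.shape, v_out.shape)  # (2, 14, 512) (2, 36, 512)
```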
Despite their success on large-scale datasets collected in\nthe constrained lab environment, it is difficult to obtain the 3D pose\nannotations for in-the-wild images. Therefore, 3D human pose estimation in the\nwild is still a challenge. In this paper, we propose an adversarial learning\nframework, which distills the 3D human pose structures learned from the fully\nannotated dataset to in-the-wild images with only 2D pose annotations. Instead\nof defining hard-coded rules to constrain the pose estimation results, we\ndesign a novel multi-source discriminator to distinguish the predicted 3D poses\nfrom the ground-truth, which helps to enforce the pose estimator to generate\nanthropometrically valid poses even with images in the wild. We also observe\nthat a carefully designed information source for the discriminator is essential\nto boost the performance. Thus, we design a geometric descriptor, which\ncomputes the pairwise relative locations and distances between body joints, as\na new information source for the discriminator. The efficacy of our adversarial\nlearning framework with the new geometric descriptor has been demonstrated\nthrough extensive experiments on widely used public benchmarks. Our approach\nsignificantly improves the performance compared with previous state-of-the-art\napproaches.", "field": [], "task": ["3D Human Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["MPI-INF-3DHP"], "metric": ["3DPCK", "AUC"], "title": "3D Human Pose Estimation in the Wild by Adversarial Learning"} {"abstract": "We present a novel 3D object detection framework, named IPOD, based on raw\npoint cloud. It seeds object proposal for each point, which is the basic\nelement. This paradigm provides us with high recall and high fidelity of\ninformation, leading to a suitable way to process point cloud data. We design\nan end-to-end trainable architecture, where features of all points within a\nproposal are extracted from the backbone network and achieve a proposal feature\nfor final bounding inference. These features with both context information and\nprecise point cloud coordinates yield improved performance. We conduct\nexperiments on KITTI dataset, evaluating our performance in terms of 3D object\ndetection, Bird's Eye View (BEV) detection and 2D object detection. Our method\naccomplishes new state-of-the-art , showing great advantage on the hard set.", "field": [], "task": ["2D Object Detection", "3D Object Detection", "Object Detection"], "method": [], "dataset": ["KITTI Cars Hard", "KITTI Pedestrians Hard", "KITTI Cyclists Hard", "KITTI Cyclists Moderate", "KITTI Pedestrians Moderate", "KITTI Cars Moderate", "KITTI Pedestrians Easy", "KITTI Cyclists Easy", "KITTI Cars Easy"], "metric": ["AP"], "title": "IPOD: Intensive Point-based Object Detector for Point Cloud"} {"abstract": "When multiple conversations occur simultaneously, a listener must decide which conversation each utterance is part of in order to interpret and respond to it appropriately. We refer to this task as disentanglement. We present a corpus of Internet Relay Chat (IRC) dialogue in which the various conversations have been manually disentangled, and evaluate annotator reliability. This is, to our knowledge, the first such corpus for internet chat. We propose a graph-theoretic model for disentanglement, using discourse-based features which have not been previously applied to this task. 
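The adversarial 3D-pose entry above introduces a geometric descriptor built from pairwise relative locations and distances between body joints; the sketch below computes such a descriptor for a single pose. The flattening of unordered joint pairs into one vector is our assumption.

```python
import numpy as np

def pairwise_joint_descriptor(joints):
    """Hedged sketch of a pairwise geometric descriptor for a (J, 3) pose:
    relative offsets and Euclidean distances for every unordered joint pair."""
    diffs = joints[:, None, :] - joints[None, :, :]        # (J, J, 3) relative locations
    dists = np.linalg.norm(diffs, axis=-1)                 # (J, J)   pairwise distances
    iu = np.triu_indices(joints.shape[0], k=1)             # keep each pair once
    return np.concatenate([diffs[iu].ravel(), dists[iu]])

# Toy usage: a 17-joint 3D pose.
pose = np.random.randn(17, 3)
print(pairwise_joint_descriptor(pose).shape)  # (544,) = 136 pairs * 3 offsets + 136 distances
```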
The model\u2019s predicted disentanglements are highly correlated with manual annotations.", "field": [], "task": ["Conversation Disentanglement"], "method": [], "dataset": ["irc-disentanglement", "Linux IRC (Ch2 Elsner)", "Linux IRC (Ch2 Kummerfeld)"], "metric": ["F", "P", "Local", "1-1", "Shen F-1", "VI", "R"], "title": "You Talking to Me? A Corpus and Algorithm for Conversation Disentanglement"} {"abstract": "Currently, in Autonomous Driving (AD), most of the 3D object detection frameworks (either anchor- or anchor-free-based) consider the detection as a Bounding Box (BBox) regression problem. However, this compact representation is not sufficient to explore all the information of the objects. To tackle this problem, we propose a simple but practical detection framework to jointly predict the 3D BBox and instance segmentation. For instance segmentation, we propose a Spatial Embeddings (SEs) strategy to assemble all foreground points into their corresponding object centers. Base on the SE results, the object proposals can be generated based on a simple clustering strategy. For each cluster, only one proposal is generated. Therefore, the Non-Maximum Suppression (NMS) process is no longer needed here. Finally, with our proposed instance-aware ROI pooling, the BBox is refined by a second-stage network. Experimental results on the public KITTI dataset show that the proposed SEs can significantly improve the instance segmentation results compared with other feature embedding-based method. Meanwhile, it also outperforms most of the 3D object detectors on the KITTI testing benchmark.\r", "field": [], "task": ["3D Instance Segmentation", "3D Object Detection", "Autonomous Driving", "Instance Segmentation", "Object Detection", "Regression", "Semantic Segmentation"], "method": [], "dataset": ["KITTI Cars Hard", "KITTI Cars Moderate", "KITTI Cars Easy"], "metric": ["AP"], "title": "Joint 3D Instance Segmentation and Object Detection for Autonomous Driving"} {"abstract": "Accurate 3D object detection from point clouds has become a crucial component in autonomous driving. However, the volumetric representations and the projection methods in previous works fail to establish the relationships between the local point sets. In this paper, we propose Sparse Voxel-Graph Attention Network (SVGA-Net), a novel end-to-end trainable network which mainly contains voxel-graph module and sparse-to-dense regression module to achieve comparable 3D detection tasks from raw LIDAR data. Specifically, SVGA-Net constructs the local complete graph within each divided 3D spherical voxel and global KNN graph through all voxels. The local and global graphs serve as the attention mechanism to enhance the extracted features. In addition, the novel sparse-to-dense regression module enhances the 3D box estimation accuracy through feature maps aggregation at different levels. 
Experiments on KITTI detection benchmark demonstrate the efficiency of extending the graph representation to 3D object detection and the proposed SVGA-Net can achieve decent detection accuracy.", "field": [], "task": ["3D Object Detection", "Autonomous Driving", "Object Detection", "Regression"], "method": [], "dataset": ["KITTI Cars Hard", "KITTI Pedestrians Hard", "KITTI Cyclists Hard", "KITTI Cyclists Moderate", "KITTI Pedestrians Moderate", "KITTI Cars Hard val", "KITTI Cars Moderate val", "KITTI Cars Moderate", "KITTI Pedestrians Easy", "KITTI Cyclists Easy", "KITTI Cars Easy val", "KITTI Cars Easy"], "metric": ["AP"], "title": "SVGA-Net: Sparse Voxel-Graph Attention Network for 3D Object Detection from Point Clouds"} {"abstract": "Graph Neural Network (GNN) research has concentrated on improving convolutional layers, with little attention paid to developing graph pooling layers. Yet pooling layers can enable GNNs to reason over abstracted groups of nodes instead of single nodes. To close this gap, we propose a graph pooling layer relying on the notion of edge contraction: EdgePool learns a localized and sparse hard pooling transform. We show that EdgePool outperforms alternative pooling methods, can be easily integrated into most GNN models, and improves performance on both node and graph classification.", "field": [], "task": ["Graph Classification"], "method": [], "dataset": ["PROTEINS"], "metric": ["Accuracy"], "title": "Edge Contraction Pooling for Graph Neural Networks"} {"abstract": "In recent years, many works in the video action recognition literature have shown that two stream models (combining spatial and temporal input streams) are necessary for achieving state of the art performance. In this paper we show the benefits of including yet another stream based on human pose estimated from each frame -- specifically by rendering pose on input RGB frames. At first blush, this additional stream may seem redundant given that human pose is fully determined by RGB pixel values -- however we show (perhaps surprisingly) that this simple and flexible addition can provide complementary gains. Using this insight, we then propose a new model, which we dub PERF-Net (short for Pose Empowered RGB-Flow Net), which combines this new pose stream with the standard RGB and flow based input streams via distillation techniques and show that our model outperforms the state-of-the-art by a large margin in a number of human action recognition datasets while not requiring flow or pose to be explicitly computed at inference time.", "field": [], "task": ["Action Classification", "Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["HMDB-51", "UCF101", "Kinetics-600"], "metric": ["Top-5 Accuracy", "Average accuracy of 3 splits", "3-fold Accuracy", "Top-1 Accuracy"], "title": "PERF-Net: Pose Empowered RGB-Flow Net"} {"abstract": "Typical video classification methods often divide a video into short clips, do inference on each clip independently, then aggregate the clip-level predictions to generate the video-level results. However, processing visually similar clips independently ignores the temporal structure of the video sequence, and increases the computational cost at inference time. In this paper, we propose a novel framework named FASTER, i.e., Feature Aggregation for Spatio-TEmporal Redundancy. FASTER aims to leverage the redundancy between neighboring clips and reduce the computational cost by learning to aggregate the predictions from models of different complexities. 
The FASTER framework can integrate high-quality representations from expensive models to capture subtle motion information and lightweight representations from cheap models to cover scene changes in the video. A new recurrent network (i.e., FAST-GRU) is designed to aggregate the mixture of different representations. Compared with existing approaches, FASTER can reduce the FLOPs by over 10x while maintaining state-of-the-art accuracy across popular datasets, such as Kinetics, UCF-101 and HMDB-51.", "field": [], "task": ["Action Classification", "Action Recognition", "Video Classification"], "method": [], "dataset": ["Kinetics-400", "UCF101", "HMDB-51"], "metric": ["Average accuracy of 3 splits", "3-fold Accuracy", "Vid acc@1"], "title": "FASTER Recurrent Networks for Efficient Video Classification"} {"abstract": "Real-world networks exhibit prominent hierarchical and modular structures, with various subgraphs as building blocks. Most existing studies simply consider distinct subgraphs as motifs and use only their numbers to characterize the underlying network. Although such statistics can be used to describe a network model, or even to design some network algorithms, the role of subgraphs in such applications can be further explored so as to improve the results. In this paper, the concept of subgraph network (SGN) is introduced and then applied to network models, with algorithms designed for constructing the 1st-order and 2nd-order SGNs, which can be easily extended to build higher-order ones. Furthermore, these SGNs are used to expand the structural feature space of the underlying network, beneficial for network classification. Numerical experiments demonstrate that the network classification model based on the structural features of the original network together with the 1st-order and 2nd-order SGNs always performs the best as compared to the models based only on one or two of such networks. In other words, the structural features of SGNs can complement those of the original network for better network classification, regardless of the feature extraction method used, such as the handcrafted, network embedding and kernel-based methods.", "field": [], "task": ["Graph Classification", "Network Embedding"], "method": [], "dataset": ["NCI109", "IMDb-B", "PROTEINS", "NCI1", "MUTAG", "PTC"], "metric": ["Accuracy"], "title": "Subgraph Networks with Application to Structural Feature Space Expansion"} {"abstract": "Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (a.k.a. score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE.
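To illustrate the reverse-time SDE and predictor-corrector idea in the score-based generative modeling entry above: a hedged sketch of one predictor (Euler-Maruyama) step followed by one Langevin corrector step. The stand-in drift, diffusion, and score functions, the signal-to-noise ratio, and the step sizes are placeholders, not the paper's settings.

```python
import torch

def predictor_corrector_step(x, t, dt, score_fn, drift_fn, diffusion_fn, snr=0.16):
    """One hedged predictor-corrector update for a reverse-time SDE
    dx = [f(x,t) - g(t)^2 * score(x,t)] dt + g(t) dW-bar (dt > 0 is the step size)."""
    # Predictor: Euler-Maruyama step backwards in time.
    g = diffusion_fn(t)
    drift = drift_fn(x, t) - (g ** 2) * score_fn(x, t)
    x = x - drift * dt + g * torch.sqrt(torch.tensor(dt)) * torch.randn_like(x)

    # Corrector: one Langevin MCMC step at the new (earlier) time.
    score = score_fn(x, t - dt)
    noise = torch.randn_like(x)
    step = 2 * (snr * noise.norm() / (score.norm() + 1e-12)) ** 2
    x = x + step * score + torch.sqrt(2 * step) * noise
    return x

# Toy usage with stand-in functions (a trained score network would replace score_fn).
score_fn = lambda x, t: -x                     # score of a unit Gaussian (assumption)
drift_fn = lambda x, t: torch.zeros_like(x)    # f = 0 (VE-style assumption)
diffusion_fn = lambda t: 1.0                   # constant g (assumption)
x = torch.randn(4, 2)
x = predictor_corrector_step(x, t=1.0, dt=1e-3, score_fn=score_fn,
                             drift_fn=drift_fn, diffusion_fn=diffusion_fn)
print(x.shape)  # torch.Size([4, 2])
```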
We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.", "field": [], "task": ["Colorization", "Image Generation", "Image Inpainting"], "method": [], "dataset": ["CIFAR-10"], "metric": ["Inception score", "FID", "bits/dimension"], "title": "Score-Based Generative Modeling through Stochastic Differential Equations"} {"abstract": "In this paper, we revive the use of old-fashioned handcrafted video representations for action recognition and put new life into these techniques via a CNN-based hallucination step. Despite the use of RGB and optical flow frames, the I3D model (amongst others) thrives on combining its output with the Improved Dense Trajectory (IDT) and extracted with its low-level video descriptors encoded via Bag-of-Words (BoW) and Fisher Vectors (FV). Such a fusion of CNNs and handcrafted representations is time-consuming due to pre-processing, descriptor extraction, encoding and tuning parameters. Thus, we propose an end-to-end trainable network with streams which learn the IDT-based BoW/FV representations at the training stage and are simple to integrate with the I3D model. Specifically, each stream takes I3D feature maps ahead of the last 1D conv. layer and learns to 'translate' these maps to BoW/FV representations. Thus, our model can hallucinate and use such synthesized BoW/FV representations at the testing stage. We show that even features of the entire I3D optical flow stream can be hallucinated thus simplifying the pipeline. Our model saves 20-55h of computations and yields state-of-the-art results on four publicly available datasets.", "field": [], "task": ["Action Classification", "Action Recognition", "Optical Flow Estimation"], "method": [], "dataset": ["HMDB-51", "Charades"], "metric": ["Average accuracy of 3 splits", "MAP"], "title": "Hallucinating IDT Descriptors and I3D Optical Flow Features for Action Recognition with CNNs"} {"abstract": "In action recognition research, two primary types of information are appearance and motion information that is learned from RGB images through visual sensors. However, depending on the action characteristics, contextual information, such as the existence of specific objects or globally-shared information in the image, becomes vital information to define the action. For example, the existence of the ball is vital information distinguishing “kicking” from “running”. Furthermore, some actions share typical global abstract poses, which can be used as a key to classify actions. Based on these observations, we propose the multi-stream network model, which incorporates spatial, temporal, and contextual cues in the image for action recognition. We experimented on the proposed method using C3D or inflated 3D ConvNet (I3D) as a backbone network, regarding two different action recognition datasets.
As a result, we observed overall improvement in accuracy, demonstrating the effectiveness of our proposed method.", "field": [], "task": ["Action Recognition"], "method": [], "dataset": ["UCF101", "HMDB-51"], "metric": ["Average accuracy of 3 splits", "3-fold Accuracy"], "title": "Contextual Action Cues from Camera Sensor for Multi-Stream Action Recognition"} {"abstract": "Video recognition models have progressed significantly over the past few years, evolving from shallow classifiers trained on hand-crafted features to deep spatiotemporal networks. However, labeled video data required to train such models have not been able to keep up with the ever-increasing depth and sophistication of these networks. In this work, we propose an alternative approach to learning video representations that require no semantically labeled videos and instead leverages the years of effort in collecting and labeling large and clean still-image datasets. We do so by using state-of-the-art models pre-trained on image datasets as \"teachers\" to train video models in a distillation framework. We demonstrate that our method learns truly spatiotemporal features, despite being trained only using supervision from still-image networks. Moreover, it learns good representations across different input modalities, using completely uncurated raw video data sources and with different 2D teacher models. Our method obtains strong transfer performance, outperforming standard techniques for bootstrapping video architectures with image-based models by 16%. We believe that our approach opens up new approaches for learning spatiotemporal representations from unlabeled video data.", "field": [], "task": ["Action Recognition", "Temporal Action Localization", "Video Recognition"], "method": [], "dataset": ["UCF101", "HMDB-51"], "metric": ["Average accuracy of 3 splits", "3-fold Accuracy"], "title": "DistInit: Learning Video Representations Without a Single Labeled Video"} {"abstract": "The lack of fine-grained joints such as hand fingers is a fundamental performance bottleneck for state of the art skeleton action recognition models trained on the largest action recognition dataset, NTU-RGBD. To address this bottleneck, we introduce a new skeleton based human action dataset - NTU60-X. In addition to the 25 body joints for each skeleton as in NTU-RGBD, NTU60-X dataset includes finger and facial joints, enabling a richer skeleton representation. We appropriately modify the state of the art approaches to enable training using the introduced dataset. Our results demonstrate the effectiveness of NTU60-X in overcoming the aforementioned bottleneck and improve state of the art performance, overall and on hitherto worst performing action categories.", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition"], "method": [], "dataset": ["NTU60-X"], "metric": ["Accuracy (Body + Fingers + Face joints)", "Accuracy (Body joints)", "Accuracy (Body + Fingers joints)"], "title": "NTU60-X: Towards Skeleton-based Recognition of Subtle Human Actions"} {"abstract": "Deep learning models have enjoyed great success for image related computer\nvision tasks like image classification and object detection. For video related\ntasks like human action recognition, however, the advancements are not as\nsignificant yet. The main challenge is the lack of effective and efficient\nmodels in modeling the rich temporal spatial information in a video. 
We\nintroduce a simple yet effective operation, termed Temporal-Spatial Mapping\n(TSM), for capturing the temporal evolution of the frames by jointly analyzing\nall the frames of a video. We propose a video level 2D feature representation\nby transforming the convolutional features of all frames to a 2D feature map,\nreferred to as VideoMap. With each row being the vectorized feature\nrepresentation of a frame, the temporal-spatial features are compactly\nrepresented, while the temporal dynamic evolution is also well embedded. Based\non the VideoMap representation, we further propose a temporal attention model\nwithin a shallow convolutional neural network to efficiently exploit the\ntemporal-spatial dynamics. The experiment results show that the proposed scheme\nachieves the state-of-the-art performance, with 4.2% accuracy gain over\nTemporal Segment Network (TSN), a competing baseline method, on the challenging\nhuman action benchmark dataset HMDB51.", "field": [], "task": ["Action Recognition", "Image Classification", "Object Detection", "Temporal Action Localization"], "method": [], "dataset": ["UCF101"], "metric": ["3-fold Accuracy"], "title": "Temporal-Spatial Mapping for Action Recognition"} {"abstract": "Convolutional neural networks (CNNs) have been extensively applied for image\nrecognition problems giving state-of-the-art results on recognition, detection,\nsegmentation and retrieval. In this work we propose and evaluate several deep\nneural network architectures to combine image information across a video over\nlonger time periods than previously attempted. We propose two methods capable\nof handling full length videos. The first method explores various convolutional\ntemporal feature pooling architectures, examining the various design choices\nwhich need to be made when adapting a CNN for this task. The second proposed\nmethod explicitly models the video as an ordered sequence of frames. For this\npurpose we employ a recurrent neural network that uses Long Short-Term Memory\n(LSTM) cells which are connected to the output of the underlying CNN. Our best\nnetworks exhibit significant performance improvements over previously published\nresults on the Sports 1 million dataset (73.1% vs. 60.9%) and the UCF-101\ndatasets with (88.6% vs. 88.0%) and without additional optical flow information\n(82.6% vs. 72.8%).", "field": [], "task": ["Action Recognition", "Optical Flow Estimation", "Video Classification"], "method": [], "dataset": ["Sports-1M", "UCF101"], "metric": ["Video hit@5", "Video hit@1 ", "3-fold Accuracy"], "title": "Beyond Short Snippets: Deep Networks for Video Classification"} {"abstract": "Convolutional Neural Networks (CNNs) have been established as a powerful class of models for image recognition problems. Encouraged by these results, we provide an extensive empirical evaluation of CNNs on large-scale video classification using a new dataset of 1 million YouTube videos belonging to 487 classes. We study multiple approaches for extending the connectivity of a CNN in time domain to take advantage of local spatio-temporal information and suggest a multiresolution, foveated architecture as a promising way of speeding up the training. Our best spatio-temporal networks display significant performance improvements compared to strong feature-based baselines (55.3% to 63.9%), but only a surprisingly modest improvement compared to single-frame models (59.3% to 60.9%). 
We further study the generalization performance of our best model by retraining the top layers on the UCF-101 Action Recognition dataset and observe significant performance improvements compared to the UCF-101 baseline model (63.3% up from 43.9%).", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Video Classification"], "method": [], "dataset": ["Sports-1M", "UCF101"], "metric": ["Video hit@5", "Video hit@1 ", "3-fold Accuracy", "Clip Hit@1"], "title": "Large-Scale Video Classification with Convolutional Neural Networks"} {"abstract": "State-of-the-art image captioning methods mostly focus on improving visual features, less attention has been paid to utilizing the inherent properties of language to boost captioning performance. In this paper, we show that vocabulary coherence between words and syntactic paradigm of sentences are also important to generate high-quality image caption. Following the conventional encoder-decoder framework, we propose the Reflective Decoding Network (RDN) for image captioning, which enhances both the long-sequence dependency and position perception of words in a caption decoder. Our model learns to collaboratively attend on both visual and textual features and meanwhile perceive each word's relative position in the sentence to maximize the information delivered in the generated caption. We evaluate the effectiveness of our RDN on the COCO image captioning datasets and achieve superior performance over the previous methods. Further experiments reveal that our approach is particularly advantageous for hard cases with complex scenes to describe by captions.", "field": [], "task": ["Image Captioning"], "method": [], "dataset": ["COCO Captions"], "metric": ["CIDEr-D", "METEOR", "BLEU-1", "CIDER", "ROUGE-L", "BLEU-4"], "title": "Reflective Decoding Network for Image Captioning"} {"abstract": "The Vision-and-Language Navigation (VLN) task entails an agent following\nnavigational instruction in photo-realistic unknown environments. This\nchallenging task demands that the agent be aware of which instruction was\ncompleted, which instruction is needed next, which way to go, and its\nnavigation progress towards the goal. In this paper, we introduce a\nself-monitoring agent with two complementary components: (1) visual-textual\nco-grounding module to locate the instruction completed in the past, the\ninstruction required for the next action, and the next moving direction from\nsurrounding images and (2) progress monitor to ensure the grounded instruction\ncorrectly reflects the navigation progress. We test our self-monitoring agent\non a standard benchmark and analyze our proposed approach through a series of\nablation studies that elucidate the contributions of the primary components.\nUsing our proposed method, we set the new state of the art by a significant\nmargin (8% absolute increase in success rate on the unseen test set). Code is\navailable at https://github.com/chihyaoma/selfmonitoring-agent .", "field": [], "task": ["Natural Language Visual Grounding", "Vision and Language Navigation", "Vision-Language Navigation", "Visual Navigation"], "method": [], "dataset": ["VLN Challenge"], "metric": ["length", "spl", "oracle success", "success", "error"], "title": "Self-Monitoring Navigation Agent via Auxiliary Progress Estimation"} {"abstract": "We propose a novel deep learning architecture for regressing disparity from a\nrectified pair of stereo images. 
We leverage knowledge of the problem's\ngeometry to form a cost volume using deep feature representations. We learn to\nincorporate contextual information using 3-D convolutions over this volume.\nDisparity values are regressed from the cost volume using a proposed\ndifferentiable soft argmin operation, which allows us to train our method\nend-to-end to sub-pixel accuracy without any additional post-processing or\nregularization. We evaluate our method on the Scene Flow and KITTI datasets and\non KITTI we set a new state-of-the-art benchmark, while being significantly\nfaster than competing approaches.", "field": [], "task": ["Regression"], "method": [], "dataset": ["KITTI Depth Completion Validation"], "metric": ["RMSE"], "title": "End-to-End Learning of Geometry and Context for Deep Stereo Regression"} {"abstract": "As deep learning continues to make progress for challenging perception tasks,\nthere is increased interest in combining vision, language, and decision-making.\nSpecifically, the Vision and Language Navigation (VLN) task involves navigating\nto a goal purely from language instructions and visual information without\nexplicit knowledge of the goal. Recent successful approaches have made in-roads\nin achieving good success rates for this task but rely on beam search, which\nthoroughly explores a large number of trajectories and is unrealistic for\napplications such as robotics. In this paper, inspired by the intuition of\nviewing the problem as search on a navigation graph, we propose to use a\nprogress monitor developed in prior work as a learnable heuristic for search.\nWe then propose two modules incorporated into an end-to-end architecture: 1) A\nlearned mechanism to perform backtracking, which decides whether to continue\nmoving forward or roll back to a previous state (Regret Module) and 2) A\nmechanism to help the agent decide which direction to go next by showing\ndirections that are visited and their associated progress estimate (Progress\nMarker). Combined, the proposed approach significantly outperforms current\nstate-of-the-art methods using greedy action selection, with 5% absolute\nimprovement on the test server in success rates, and more importantly 8% on\nsuccess rates normalized by the path length. Our code is available at\nhttps://github.com/chihyaoma/regretful-agent .", "field": [], "task": ["Decision Making", "Vision and Language Navigation", "Vision-Language Navigation", "Visual Navigation"], "method": [], "dataset": ["VLN Challenge"], "metric": ["length", "spl", "oracle success", "success", "error"], "title": "The Regretful Agent: Heuristic-Aided Navigation through Progress Estimation"} {"abstract": "Recent deep learning models achieve impressive results on 3D scene analysis tasks by operating directly on unstructured point clouds. A lot of progress was made in the field of object classification and semantic segmentation. However, the task of instance segmentation is less explored. In this work, we present 3D-BEVIS, a deep learning framework for 3D semantic instance segmentation on point clouds. Following the idea of previous proposal-free instance segmentation approaches, our model learns a feature embedding and groups the obtained feature space into semantic instances. Current point-based methods scale linearly with the number of points by processing local sub-parts of a scene individually. However, to perform instance segmentation by clustering, globally consistent features are required. 
Therefore, we propose to combine local point geometry with global context information from an intermediate bird's-eye view representation.", "field": [], "task": ["3D Instance Segmentation", "3D Semantic Instance Segmentation", "Instance Segmentation", "Object Classification", "Semantic Segmentation"], "method": [], "dataset": ["ScanNetV2"], "metric": ["mAP@0.50"], "title": "3D-BEVIS: Bird's-Eye-View Instance Segmentation"} {"abstract": "The Flickr30k dataset has become a standard benchmark for sentence-based\nimage description. This paper presents Flickr30k Entities, which augments the\n158k captions from Flickr30k with 244k coreference chains, linking mentions of\nthe same entities across different captions for the same image, and associating\nthem with 276k manually annotated bounding boxes. Such annotations are\nessential for continued progress in automatic image description and grounded\nlanguage understanding. They enable us to define a new benchmark for\nlocalization of textual entity mentions in an image. We present a strong\nbaseline for this task that combines an image-text embedding, detectors for\ncommon objects, a color classifier, and a bias towards selecting larger\nobjects. While our baseline rivals in accuracy more complex state-of-the-art\nmodels, we show that its gains cannot be easily parlayed into improvements on\nsuch tasks as image-sentence retrieval, thus underlining the limitations of\ncurrent methods and the need for further research.", "field": [], "task": [], "method": [], "dataset": ["Flickr30K 1K test"], "metric": ["R@10", "R@1", "R@5"], "title": "Flickr30k Entities: Collecting Region-to-Phrase Correspondences for Richer Image-to-Sentence Models"} {"abstract": "Graph neural networks (GNNs) have recently made remarkable breakthroughs in the paradigm of learning with graph-structured data. However, most existing GNNs limit the receptive field of the node on each layer to its connected (one-hop) neighbors, which disregards the fact that large receptive field has been proven to be a critical factor in state-of-the-art neural networks. In this paper, we propose a novel approach to appropriately define a variable receptive field for GNNs by incorporating high-order proximity information extracted from the hierarchical topological structure of the input graph. Specifically, multiscale groups obtained from trainable hierarchical semi-nonnegative matrix factorization are used for adjusting the weights when aggregating one-hop neighbors. Integrated with the graph attention mechanism on attributes of neighboring nodes, the learnable parameters within the process of aggregation are optimized in an end-to-end manner. Extensive experiments show that the proposed method (hpGAT) outperforms state-of-the-art methods and demonstrate the importance of exploiting high-order proximity in handling noisy information of local neighborhood.", "field": [], "task": ["Node Classification"], "method": [], "dataset": ["Cora", "Citeseer"], "metric": ["Accuracy"], "title": "hpGAT: High-order Proximity Informed Graph Attention Network"} {"abstract": "Recent techniques in self-supervised monocular depth estimation are\napproaching the performance of supervised methods, but operate in low\nresolution only. We show that high resolution is key towards high-fidelity\nself-supervised monocular depth prediction. 
Inspired by recent deep learning\nmethods for Single-Image Super-Resolution, we propose a sub-pixel convolutional\nlayer extension for depth super-resolution that accurately synthesizes\nhigh-resolution disparities from their corresponding low-resolution\nconvolutional features. In addition, we introduce a differentiable\nflip-augmentation layer that accurately fuses predictions from the image and\nits horizontally flipped version, reducing the effect of left and right shadow\nregions generated in the disparity map due to occlusions. Both contributions\nprovide significant performance gains over the state-of-the-art in\nself-supervised depth and pose estimation on the public KITTI benchmark. A\nvideo of our approach can be found at https://youtu.be/jKNgBeBMx0I.", "field": [], "task": ["Depth Estimation", "Image Super-Resolution", "Monocular Depth Estimation", "Pose Estimation", "Super-Resolution"], "method": [], "dataset": ["KITTI Eigen split unsupervised"], "metric": ["absolute relative error"], "title": "SuperDepth: Self-Supervised, Super-Resolved Monocular Depth Estimation"} {"abstract": "We present an approach which takes advantage of both structure and semantics for unsupervised monocular learning of depth and ego-motion. More specifically, we model the motion of individual objects and learn their 3D motion vector jointly with depth and ego-motion. We obtain more accurate results, especially for challenging dynamic scenes not addressed by previous approaches. This is an extended version of Casser et al. [AAAI'19]. Code and models have been open sourced at https://sites.google.com/corp/view/struct2depth.", "field": [], "task": ["Depth And Camera Motion", "Depth Estimation", "Monocular Depth Estimation", "Motion Estimation"], "method": [], "dataset": ["KITTI Eigen split unsupervised"], "metric": ["absolute relative error"], "title": "Unsupervised Monocular Depth and Ego-motion Learning with Structure and Semantics"} {"abstract": "Estimating 3D poses from a monocular video is still a challenging task, despite the significant progress that has been made in recent years. Generally, the performance of existing methods drops when the target person is too small/large, or the motion is too fast/slow relative to the scale and speed of the training data. Moreover, to our knowledge, many of these methods are not designed or trained under severe occlusion explicitly, making their performance on handling occlusion compromised. Addressing these problems, we introduce a spatio-temporal network for robust 3D human pose estimation. As humans in videos may appear in different scales and have various motion speeds, we apply multi-scale spatial features for 2D joints or keypoints prediction in each individual frame, and multi-stride temporal convolutional net-works (TCNs) to estimate 3D joints or keypoints. Furthermore, we design a spatio-temporal discriminator based on body structures as well as limb motions to assess whether the predicted pose forms a valid pose and a valid movement. During training, we explicitly mask out some keypoints to simulate various occlusion cases, from minor to severe occlusion, so that our network can learn better and becomes robust to various degrees of occlusion. As there are limited 3D ground-truth data, we further utilize 2D video data to inject a semi-supervised learning capability to our network. 
Experiments on public datasets validate the effectiveness of our method, and our ablation studies show the strengths of our network's individual submodules.", "field": [], "task": ["3D Human Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["HumanEva-I", "Human3.6M", "3DPW"], "metric": ["Average MPJPE (mm)", "PA-MPJPE", "Using 2D ground-truth joints", "Mean Reconstruction Error (mm)", "Multi-View or Monocular"], "title": "3D Human Pose Estimation using Spatio-Temporal Networks with Explicit Occlusion Training"} {"abstract": "Circuits of biological neurons, such as in the functional parts of the brain, can be modeled as networks of coupled oscillators. Inspired by the ability of these systems to express a rich set of outputs while keeping (gradients of) state variables bounded, we propose a novel architecture for recurrent neural networks. Our proposed RNN is based on a time-discretization of a system of second-order ordinary differential equations, modeling networks of controlled nonlinear oscillators. We prove precise bounds on the gradients of the hidden states, leading to the mitigation of the exploding and vanishing gradient problem for this RNN. Experiments show that the proposed RNN is comparable in performance to the state of the art on a variety of benchmarks, demonstrating the potential of this architecture to provide stable and accurate RNNs for processing complex sequential data.", "field": [], "task": ["Sentiment Analysis", "Sequential Image Classification"], "method": [], "dataset": ["IMDb", "Sequential MNIST"], "metric": ["Permuted Accuracy", "Unpermuted Accuracy", "Accuracy"], "title": "Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and (gradient) stable architecture for learning long time dependencies"} {"abstract": "In this paper, we propose a novel method for a sentence-level answer-selection task that is a fundamental problem in natural language processing. First, we explore the effect of additional information by adopting a pretrained language model to compute the vector representation of the input text and by applying transfer learning from a large-scale corpus. Second, we enhance the compare-aggregate model by proposing a novel latent clustering method to compute additional information within the target corpus and by changing the objective function from listwise to pointwise. To evaluate the performance of the proposed approaches, experiments are performed with the WikiQA and TREC-QA datasets. The empirical results demonstrate the superiority of our proposed approach, which achieves state-of-the-art performance for both datasets.", "field": [], "task": ["Answer Selection", "Language Modelling", "Question Answering", "Transfer Learning"], "method": [], "dataset": ["TrecQA", "WikiQA"], "metric": ["MRR", "MAP"], "title": "A Compare-Aggregate Model with Latent Clustering for Answer Selection"} {"abstract": "We address the challenging problem of learning motion representations using deep models for video recognition. To this end, we make use of attention modules that learn to highlight regions in the video and aggregate features for recognition. Specifically, we propose to leverage output attention maps as a vehicle to transfer the learned representation from a motion (flow) network to an RGB network. We systematically study the design of attention modules, and develop a novel method for attention distillation.
Our method is evaluated on major action benchmarks, and consistently improves the performance of the baseline RGB network by a significant margin. Moreover, we demonstrate that our attention maps can leverage motion cues in learning to identify the location of actions in video frames. We believe our method provides a step towards learning motion-aware representations in deep models. Our project page is available at https://aptx4869lm.github.io/AttentionDistillation/", "field": [], "task": ["Action Recognition", "Video Recognition"], "method": [], "dataset": ["UCF101", "HMDB-51", "Something-Something V2"], "metric": ["Top-5 Accuracy", "Average accuracy of 3 splits", "3-fold Accuracy", "Top-1 Accuracy"], "title": "Attention Distillation for Learning Video Representations"} {"abstract": "Most online multi-object trackers perform object detection stand-alone in a neural net without any input from tracking. In this paper, we present a new online joint detection and tracking model, TraDeS (TRAck to DEtect and Segment), exploiting tracking clues to assist detection end-to-end. TraDeS infers object tracking offset by a cost volume, which is used to propagate previous object features for improving current object detection and segmentation. Effectiveness and superiority of TraDeS are shown on 4 datasets, including MOT (2D tracking), nuScenes (3D tracking), MOTS and Youtube-VIS (instance segmentation tracking). Project page: https://jialianwu.com/projects/TraDeS.html.", "field": [], "task": ["Instance Segmentation", "Object Detection", "Object Tracking", "Semantic Segmentation"], "method": [], "dataset": ["nuScenes", "MOT16", "MOT17", "YouTube-VIS validation"], "metric": ["MOTA", "amota", "AP75", "IDF1", "AP50", "mask AP"], "title": "Track to Detect and Segment: An Online Multi-Object Tracker"} {"abstract": "Joint object detection and semantic segmentation can be applied to many\nfields, such as self-driving cars and unmanned surface vessels. An initial and\nimportant progress towards this goal has been achieved by simply sharing the\ndeep convolutional features for the two tasks. However, this simple scheme is\nunable to make full use of the fact that detection and segmentation are\nmutually beneficial. To overcome this drawback, we propose a framework called\nTripleNet where triple supervisions including detection-oriented supervision,\nclass-aware segmentation supervision, and class-agnostic segmentation\nsupervision are imposed on each layer of the decoder network. Class-agnostic\nsegmentation supervision provides an objectness prior knowledge for both\nsemantic segmentation and object detection. Besides the three types of\nsupervisions, two light-weight modules (i.e., inner-connected module and\nattention skip-layer fusion) are also incorporated into each layer of the\ndecoder. In the proposed framework, detection and segmentation can sufficiently\nboost each other. Moreover, class-agnostic and class-aware segmentation on each\ndecoder layer are not performed at the test stage. Therefore, no extra\ncomputational costs are introduced at the test stage. 
Experimental results on\nthe VOC2007 and VOC2012 datasets demonstrate that the proposed TripleNet is\nable to improve both the detection and segmentation accuracies without adding\nextra computational costs.", "field": [], "task": ["Object Detection", "Self-Driving Cars", "Semantic Segmentation"], "method": [], "dataset": ["PASCAL VOC 2012 test"], "metric": ["Mean IoU"], "title": "Triply Supervised Decoder Networks for Joint Detection and Segmentation"} {"abstract": "Recent leading approaches to semantic segmentation rely on deep convolutional\nnetworks trained with human-annotated, pixel-level segmentation masks. Such\npixel-accurate supervision demands expensive labeling effort and limits the\nperformance of deep networks that usually benefit from more training data. In\nthis paper, we propose a method that achieves competitive accuracy but only\nrequires easily obtained bounding box annotations. The basic idea is to iterate\nbetween automatically generating region proposals and training convolutional\nnetworks. These two steps gradually recover segmentation masks for improving\nthe networks, and vise versa. Our method, called BoxSup, produces competitive\nresults supervised by boxes only, on par with strong baselines fully supervised\nby masks under the same setting. By leveraging a large amount of bounding\nboxes, BoxSup further unleashes the power of deep convolutional networks and\nyields state-of-the-art results on PASCAL VOC 2012 and PASCAL-CONTEXT.", "field": [], "task": ["Semantic Segmentation"], "method": [], "dataset": ["PASCAL Context", "PASCAL VOC 2012 test"], "metric": ["Mean IoU", "mIoU"], "title": "BoxSup: Exploiting Bounding Boxes to Supervise Convolutional Networks for Semantic Segmentation"} {"abstract": "Semantic segmentation is a task that traditionally requires a large dataset of pixel-level ground truth labels, which is time-consuming and expensive to obtain. Recent advancements in the weakly-supervised setting show that reasonable performance can be obtained by using only image-level labels. Classification is often used as a proxy task to train a deep neural network from which attention maps are extracted. However, the classification task needs only the minimum evidence to make predictions, hence it focuses on the most discriminative object regions. To overcome this problem, we propose a novel formulation of adversarial erasing of the attention maps. In contrast to previous adversarial erasing methods, we optimize two networks with opposing loss functions, which eliminates the requirement of certain suboptimal strategies; for instance, having multiple training steps that complicate the training process or a weight sharing policy between networks operating on different distributions that might be suboptimal for performance. The proposed solution does not require saliency masks, instead it uses a regularization loss to prevent the attention maps from spreading to less discriminative object regions. 
Our experiments on the Pascal VOC dataset demonstrate that our adversarial approach increases segmentation performance by 2.1 mIoU compared to our baseline and by 1.0 mIoU compared to previous adversarial erasing approaches.", "field": [], "task": ["Semantic Segmentation", "Weakly-Supervised Semantic Segmentation"], "method": [], "dataset": ["PASCAL VOC 2012 test", "PASCAL VOC 2012 val"], "metric": ["Mean IoU", "mIoU"], "title": "Find it if You Can: End-to-End Adversarial Erasing for Weakly-Supervised Semantic Segmentation"} {"abstract": "Sparse Neural Networks regained attention due to their potential for mathematical and computational advantages. We give motivation to study Artificial Neural Networks (ANNs) from a network science perspective, provide a technique to embed arbitrary Directed Acyclic Graphs into ANNs and report study results on predicting the performance of image classifiers based on the structural properties of the networks' underlying graph. Results could further progress neuroevolution and add explanations for the success of distinct architectures from a structural perspective.", "field": [], "task": ["Neural Architecture Search"], "method": [], "dataset": ["MNIST"], "metric": ["R2"], "title": "Structural Analysis of Sparse Neural Networks"} {"abstract": "A grand goal in AI is to build a robot that can accurately navigate based on\nnatural language instructions, which requires the agent to perceive the scene,\nunderstand and ground language, and act in the real-world environment. One key\nchallenge here is to learn to navigate in new environments that are unseen\nduring training. Most of the existing approaches perform dramatically worse in\nunseen environments as compared to seen ones. In this paper, we present a\ngeneralizable navigational agent. Our agent is trained in two stages. The first\nstage is training via mixed imitation and reinforcement learning, combining the\nbenefits from both off-policy and on-policy optimization. The second stage is\nfine-tuning via newly-introduced 'unseen' triplets (environment, path,\ninstruction). To generate these unseen triplets, we propose a simple but\neffective 'environmental dropout' method to mimic unseen environments, which\novercomes the problem of limited seen environment variability. Next, we apply\nsemi-supervised learning (via back-translation) on these dropped-out\nenvironments to generate new paths and instructions. Empirically, we show that\nour agent is substantially better at generalizability when fine-tuned with\nthese triplets, outperforming the state-of-art approaches by a large margin on\nthe private unseen test set of the Room-to-Room task, and achieving the top\nrank on the leaderboard.", "field": [], "task": ["Vision-Language Navigation"], "method": [], "dataset": ["Room2Room", "VLN Challenge"], "metric": ["length", "spl", "oracle success", "success", "error"], "title": "Learning to Navigate Unseen Environments: Back Translation with Environmental Dropout"} {"abstract": "Cross-lingual model transfer has been a promising approach for inducing dependency parsers for low-resource languages where annotated treebanks are not available. The major obstacles for the model transfer approach are two-fold: 1. Lexical features are not directly transferable across languages; 2. Target language-specific syntactic structures are difficult to be recovered. To address these two challenges, we present a novel representation learning framework for multi-source transfer parsing. 
Our framework allows multi-source transfer parsing using full lexical features straightforwardly. By evaluating on the Google universal dependency treebanks (v2.0), our best models yield an absolute improvement of 6.53% in averaged labeled attachment score, as compared with delexicalized multi-source transfer models. We also significantly outperform the state-of-the-art transfer system proposed most recently.", "field": [], "task": ["Cross-lingual zero-shot dependency parsing", "Representation Learning"], "method": [], "dataset": ["Universal Dependency Treebank"], "metric": ["UAS", "LAS"], "title": "A Representation Learning Framework for Multi-Source Transfer Parsing"} {"abstract": "We propose CRaWl (CNNs for Random Walks), a novel neural network architecture for graph learning. It is based on processing sequences of small subgraphs induced by random walks with standard 1D CNNs. Thus, CRaWl is fundamentally different from typical message passing graph neural network architectures. It is inspired by techniques counting small subgraphs, such as the graphlet kernel and motif counting, and combines them with random walk based techniques in a highly efficient and scalable neural architecture. We demonstrate empirically that CRaWl matches or outperforms state-of-the-art GNN architectures across a multitude of benchmark datasets for graph learning.", "field": [], "task": ["Graph Learning"], "method": [], "dataset": ["REDDIT-B", "ZINC-500k"], "metric": ["MAE", "Accuracy"], "title": "Graph Learning with 1D Convolutions on Random Walks"} {"abstract": "In this work, we tackle the problem of crowd counting in images. We present a\nConvolutional Neural Network (CNN) based density estimation approach to solve\nthis problem. Predicting a high resolution density map in one go is a\nchallenging task. Hence, we present a two branch CNN architecture for\ngenerating high resolution density maps, where the first branch generates a low\nresolution density map, and the second branch incorporates the low resolution\nprediction and feature maps from the first branch to generate a high resolution\ndensity map. We also propose a multi-stage extension of our approach where each\nstage in the pipeline utilizes the predictions from all the previous stages.\nEmpirical comparison with the previous state-of-the-art crowd counting methods\nshows that our method achieves the lowest mean absolute error on three\nchallenging crowd counting benchmarks: Shanghaitech, WorldExpo'10, and UCF\ndatasets.", "field": [], "task": ["Crowd Counting", "Density Estimation"], "method": [], "dataset": ["UCF CC 50", "ShanghaiTech A", "WorldExpo\u201910", "ShanghaiTech B"], "metric": ["MAE", "Average MAE"], "title": "Iterative Crowd Counting"} {"abstract": "Convnets have enabled significant progress in pedestrian detection recently,\nbut there are still open questions regarding suitable architectures and\ntraining data. We revisit CNN design and point out key adaptations, enabling\nplain FasterRCNN to obtain state-of-the-art results on the Caltech dataset.\n To achieve further improvement from more and better data, we introduce\nCityPersons, a new set of person annotations on top of the Cityscapes dataset.\nThe diversity of CityPersons allows us for the first time to train one single\nCNN model that generalizes well over multiple benchmarks. 
Moreover, with\nadditional training with CityPersons, we obtain top results using FasterRCNN on\nCaltech, improving especially for more difficult cases (heavy occlusion and\nsmall scale) and providing higher localization quality.", "field": [], "task": ["Pedestrian Detection"], "method": [], "dataset": ["CityPersons", "Caltech"], "metric": ["Medium MR^-2", "Small MR^-2", "Reasonable MR^-2", "Large MR^-2", "Reasonable Miss Rate"], "title": "CityPersons: A Diverse Dataset for Pedestrian Detection"} {"abstract": "In this paper, we propose a pose grammar to tackle the problem of 3D human\npose estimation. Our model directly takes 2D pose as input and learns a\ngeneralized 2D-3D mapping function. The proposed model consists of a base\nnetwork which efficiently captures pose-aligned features and a hierarchy of\nBi-directional RNNs (BRNN) on the top to explicitly incorporate a set of\nknowledge regarding human body configuration (i.e., kinematics, symmetry, motor\ncoordination). The proposed model thus enforces high-level constraints over\nhuman poses. In learning, we develop a pose sample simulator to augment\ntraining samples in virtual camera views, which further improves our model\ngeneralizability. We validate our method on public 3D human pose benchmarks and\npropose a new evaluation protocol working on cross-view setting to verify the\ngeneralization capability of different methods. We empirically observe that\nmost state-of-the-art methods encounter difficulty under such setting while our\nmethod can well handle such challenges.", "field": [], "task": ["3D Human Pose Estimation", "3D Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["Human3.6M"], "metric": ["MPJPE"], "title": "Learning Pose Grammar to Encode Human Body Configuration for 3D Pose Estimation"} {"abstract": "Self-supervised learning (SSL) is rapidly closing the gap with supervised methods on large computer vision benchmarks. A successful approach to SSL is to learn representations which are invariant to distortions of the input sample. However, a recurring issue with this approach is the existence of trivial constant representations. Most current methods avoid such collapsed solutions by careful implementation details. We propose an objective function that naturally avoids such collapse by measuring the cross-correlation matrix between the outputs of two identical networks fed with distorted versions of a sample, and making it as close to the identity matrix as possible. This causes the representation vectors of distorted versions of a sample to be similar, while minimizing the redundancy between the components of these vectors. The method is called Barlow Twins, owing to neuroscientist H. Barlow's redundancy-reduction principle applied to a pair of identical networks. Barlow Twins does not require large batches nor asymmetry between the network twins such as a predictor network, gradient stopping, or a moving average on the weight updates. It allows the use of very high-dimensional output vectors. 
Barlow Twins outperforms previous methods on ImageNet for semi-supervised classification in the low-data regime, and is on par with current state of the art for ImageNet classification with a linear classifier head, and for transfer tasks of classification and object detection.", "field": [], "task": ["Object Detection", "Self-Supervised Learning"], "method": [], "dataset": ["ImageNet - 1% labeled data", "ImageNet - 10% labeled data", "iNaturalist 2018", "Places205", "ImageNet"], "metric": ["Top-1 Accuracy", "Top 5 Accuracy", "Top 1 Accuracy"], "title": "Barlow Twins: Self-Supervised Learning via Redundancy Reduction"} {"abstract": "Non-uniform blind deblurring for general dynamic scenes is a challenging\ncomputer vision problem as blurs arise not only from multiple object motions\nbut also from camera shake, scene depth variation. To remove these complicated\nmotion blurs, conventional energy optimization based methods rely on simple\nassumptions such that blur kernel is partially uniform or locally linear.\nMoreover, recent machine learning based methods also depend on synthetic blur\ndatasets generated under these assumptions. This makes conventional deblurring\nmethods fail to remove blurs where blur kernel is difficult to approximate or\nparameterize (e.g. object motion boundaries). In this work, we propose a\nmulti-scale convolutional neural network that restores sharp images in an\nend-to-end manner where blur is caused by various sources. Together, we present\nmulti-scale loss function that mimics conventional coarse-to-fine approaches.\nFurthermore, we propose a new large-scale dataset that provides pairs of\nrealistic blurry image and the corresponding ground truth sharp image that are\nobtained by a high-speed camera. With the proposed model trained on this\ndataset, we demonstrate empirically that our method achieves the\nstate-of-the-art performance in dynamic scene deblurring not only\nqualitatively, but also quantitatively.", "field": [], "task": ["Deblurring"], "method": [], "dataset": ["RealBlur-J (trained on GoPro)", "GoPro", "RealBlur-R (trained on GoPro)", "HIDE (trained on GOPRO)"], "metric": ["SSIM", "SSIM (sRGB)", "PSNR", "PSNR (sRGB)"], "title": "Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring"} {"abstract": "LiDAR-based 3D object detection is an important task for autonomous driving and current approaches suffer from sparse and partial point clouds of distant and occluded objects. In this paper, we propose a novel two-stage approach, namely PC-RGNN, dealing with such challenges by two specific solutions. On the one hand, we introduce a point cloud completion module to recover high-quality proposals of dense points and entire views with original structures preserved. On the other hand, a graph neural network module is designed, which comprehensively captures relations among points through a local-global attention mechanism as well as multi-scale graph based context aggregation, substantially strengthening encoded features. 
Extensive experiments on the KITTI benchmark show that the proposed approach outperforms the previous state-of-the-art baselines by remarkable margins, highlighting its effectiveness.", "field": [], "task": ["3D Object Detection", "Autonomous Driving", "Object Detection", "Point Cloud Completion"], "method": [], "dataset": ["KITTI Cars Hard", "KITTI Cars Moderate", "KITTI Cars Moderate val", "KITTI Cars Hard val", "KITTI Cars Easy val", "KITTI Cars Easy"], "metric": ["AP"], "title": "PC-RGNN: Point Cloud Completion and Graph Neural Network for 3D Object Detection"} {"abstract": "Low-dimensional embeddings of nodes in large graphs have proved extremely\nuseful in a variety of prediction tasks, from content recommendation to\nidentifying protein functions. However, most existing approaches require that\nall nodes in the graph are present during training of the embeddings; these\nprevious approaches are inherently transductive and do not naturally generalize\nto unseen nodes. Here we present GraphSAGE, a general, inductive framework that\nleverages node feature information (e.g., text attributes) to efficiently\ngenerate node embeddings for previously unseen data. Instead of training\nindividual embeddings for each node, we learn a function that generates\nembeddings by sampling and aggregating features from a node's local\nneighborhood. Our algorithm outperforms strong baselines on three inductive\nnode-classification benchmarks: we classify the category of unseen nodes in\nevolving information graphs based on citation and Reddit post data, and we show\nthat our algorithm generalizes to completely unseen graphs using a multi-graph\ndataset of protein-protein interactions.", "field": [], "task": ["Graph Classification", "Graph Regression", "Link Prediction", "Node Classification", "Representation Learning"], "method": [], "dataset": ["PPI", "Reddit", "CIFAR10 100k", "Cora (0.5%)", "CiteSeer with Public Split: fixed 20 nodes per class", "Citeseer Full-supervised", "PubMed with Public Split: fixed 20 nodes per class", "PubMed (0.1%)", "ZINC-500k", "Cora (3%)", "Brazil Air-Traffic", "Europe Air-Traffic", "CiteSeer (0.5%)", "PubMed (0.03%)", "PubMed (0.05%)", "Pubmed Full-supervised", "Wiki-Vote", "PATTERN 100k", "CiteSeer (1%)", "Cora with Public Split: fixed 20 nodes per class", "Flickr", "Facebook", "USA Air-Traffic", "Cora Full-supervised", "Cora (1%)"], "metric": ["MAE", "Accuracy (%)", "F1", "Accuracy"], "title": "Inductive Representation Learning on Large Graphs"} {"abstract": "Traditional convolutional neural networks (CNN) are stationary and\nfeedforward. They neither change their parameters during evaluation nor use\nfeedback from higher to lower layers. Real brains, however, do. So does our\nDeep Attention Selective Network (dasNet) architecture. DasNets feedback\nstructure can dynamically alter its convolutional filter sensitivities during\nclassification. It harnesses the power of sequential processing to improve\nclassification performance, by allowing the network to iteratively focus its\ninternal attention on some of its convolutional filters. Feedback is trained\nthrough direct policy search in a huge million-dimensional parameter space,\nthrough scalable natural evolution strategies (SNES). 
On the CIFAR-10 and\nCIFAR-100 datasets, dasNet outperforms the previous state-of-the-art model.", "field": [], "task": ["Deep Attention"], "method": [], "dataset": ["CIFAR-10"], "metric": ["Percentage correct"], "title": "Deep Networks with Internal Selective Attention through Feedback Connections"} {"abstract": "Most existing methods determine relation types only after all the entities\nhave been recognized, thus the interaction between relation types and entity\nmentions is not fully modeled. This paper presents a novel paradigm to deal\nwith relation extraction by regarding the related entities as the arguments of\na relation. We apply a hierarchical reinforcement learning (HRL) framework in\nthis paradigm to enhance the interaction between entity mentions and relation\ntypes. The whole extraction process is decomposed into a hierarchy of two-level\nRL policies for relation detection and entity extraction respectively, so that\nit is more feasible and natural to deal with overlapping relations. Our model\nwas evaluated on public datasets collected via distant supervision, and results\nshow that it gains better performance than existing methods and is more\npowerful for extracting overlapping relations.", "field": [], "task": ["Entity Extraction using GAN", "Hierarchical Reinforcement Learning", "Relation Extraction"], "method": [], "dataset": ["NYT24", "NYT29"], "metric": ["F1"], "title": "A Hierarchical Framework for Relation Extraction with Reinforcement Learning"} {"abstract": "Convolutional Neural Networks (CNNs) are state-of-the-art models for document\nimage classification tasks. However, many of these approaches rely on\nparameters and architectures designed for classifying natural images, which\ndiffer from document images. We question whether this is appropriate and\nconduct a large empirical study to find what aspects of CNNs most affect\nperformance on document images. Among other results, we exceed the\nstate-of-the-art on the RVL-CDIP dataset by using shear transform data\naugmentation and an architecture designed for a larger input image.\nAdditionally, we analyze the learned features and find evidence that CNNs\ntrained on RVL-CDIP learn region-specific layout features.", "field": [], "task": ["Data Augmentation", "Document Image Classification", "Image Classification"], "method": [], "dataset": ["RVL-CDIP"], "metric": ["Accuracy"], "title": "Analysis of Convolutional Neural Networks for Document Image Classification"} {"abstract": "Learning to represent videos is a very challenging task both algorithmically and computationally. Standard video CNN architectures have been designed by directly extending architectures devised for image understanding to include the time dimension, using modules such as 3D convolutions, or by using two-stream design to capture both appearance and motion in videos. We interpret a video CNN as a collection of multi-stream convolutional blocks connected to each other, and propose the approach of automatically finding neural architectures with better connectivity and spatio-temporal interactions for video understanding. This is done by evolving a population of overly-connected architectures guided by connection weight learning. Architectures combining representations that abstract different input types (i.e., RGB and optical flow) at multiple temporal resolutions are searched for, allowing different types or sources of information to interact with each other. 
Our method, referred to as AssembleNet, outperforms prior approaches on public video datasets, in some cases by a great margin. We obtain 58.6% mAP on Charades and 34.27% accuracy on Moments-in-Time.", "field": [], "task": ["Action Classification", "Action Recognition", "Multimodal Activity Recognition", "Optical Flow Estimation", "Video Classification", "Video Understanding"], "method": [], "dataset": ["Charades", "Moments in Time Dataset", "Moments in Time"], "metric": ["Top 1 Accuracy", "MAP", "Top-5 (%)", "Top-1 (%)", "Top 5 Accuracy"], "title": "AssembleNet: Searching for Multi-Stream Neural Connectivity in Video Architectures"} {"abstract": "Most state-of-the-art methods for action recognition rely on a two-stream architecture that processes appearance and motion independently. In this paper, we claim that considering them jointly offers rich information for action recognition. We introduce a novel representation that gracefully encodes the movement of some semantic keypoints. We use the human joints as these keypoints and term our Pose moTion representation PoTion. Specifically, we first run a state-of-the-art human pose estimator and extract heatmaps for the human joints in each frame. We obtain our PoTion representation by temporally aggregating these probability maps. This is achieved by colorizing each of them depending on the relative time of the frames in the video clip and summing them. This fixed-size representation for an entire video clip is suitable to classify actions using a shallow convolutional neural network. Our experimental evaluation shows that PoTion outperforms other state-of-the-art pose representations. Furthermore, it is complementary to standard appearance and motion streams. When combining PoTion with the recent two-stream I3D approach [5], we obtain state-of-the-art performance on the JHMDB, HMDB and UCF101 datasets.", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["UCF101", "JHMDB (2D poses only)", "J-HMDB", "Charades"], "metric": ["3-fold Accuracy", "MAP", "Accuracy (pose)", "Average accuracy of 3 splits", "No. parameters", "Accuracy (RGB+pose)"], "title": "PoTion: Pose MoTion Representation for Action Recognition"} {"abstract": "How do humans recognize the action \"opening a book\" ? We argue that there are\ntwo important cues: modeling temporal shape dynamics and modeling functional\nrelationships between humans and objects. In this paper, we propose to\nrepresent videos as space-time region graphs which capture these two important\ncues. Our graph nodes are defined by the object region proposals from different\nframes in a long range video. These nodes are connected by two types of\nrelations: (i) similarity relations capturing the long range dependencies\nbetween correlated objects and (ii) spatial-temporal relations capturing the\ninteractions between nearby objects. We perform reasoning on this graph\nrepresentation via Graph Convolutional Networks. We achieve state-of-the-art\nresults on both Charades and Something-Something datasets. 
Especially for\nCharades, we obtain a huge 4.4% gain when our model is applied in complex\nenvironments.", "field": [], "task": ["Action Classification", "Action Recognition"], "method": [], "dataset": ["Something-Something V1", "Charades"], "metric": ["Top 1 Accuracy", "MAP"], "title": "Videos as Space-Time Region Graphs"} {"abstract": "Rain streaks can severely degrade the visibility, which causes many current\ncomputer vision algorithms fail to work. So it is necessary to remove the rain\nfrom images. We propose a novel deep network architecture based on deep\nconvolutional and recurrent neural networks for single image deraining. As\ncontextual information is very important for rain removal, we first adopt the\ndilated convolutional neural network to acquire large receptive field. To\nbetter fit the rain removal task, we also modify the network. In heavy rain,\nrain streaks have various directions and shapes, which can be regarded as the\naccumulation of multiple rain streak layers. We assign different alpha-values\nto various rain streak layers according to the intensity and transparency by\nincorporating the squeeze-and-excitation block. Since rain streak layers\noverlap with each other, it is not easy to remove the rain in one stage. So we\nfurther decompose the rain removal into multiple stages. Recurrent neural\nnetwork is incorporated to preserve the useful information in previous stages\nand benefit the rain removal in later stages. We conduct extensive experiments\non both synthetic and real-world datasets. Our proposed method outperforms the\nstate-of-the-art approaches under all evaluation metrics. Codes and\nsupplementary material are available at our project webpage:\nhttps://xialipku.github.io/RESCAN .", "field": [], "task": ["Rain Removal", "Single Image Deraining"], "method": [], "dataset": ["Test2800", "Rain100H", "Test100", "Test1200", "Rain100L"], "metric": ["SSIM", "PSNR"], "title": "Recurrent Squeeze-and-Excitation Context Aggregation Net for Single Image Deraining"} {"abstract": "Image generation has been successfully cast as an autoregressive sequence\ngeneration or transformation problem. Recent work has shown that self-attention\nis an effective way of modeling textual sequences. In this work, we generalize\na recently proposed model architecture based on self-attention, the\nTransformer, to a sequence modeling formulation of image generation with a\ntractable likelihood. By restricting the self-attention mechanism to attend to\nlocal neighborhoods we significantly increase the size of images the model can\nprocess in practice, despite maintaining significantly larger receptive fields\nper layer than typical convolutional neural networks. While conceptually\nsimple, our generative models significantly outperform the current state of the\nart in image generation on ImageNet, improving the best published negative\nlog-likelihood on ImageNet from 3.83 to 3.77. We also present results on image\nsuper-resolution with a large magnification ratio, applying an encoder-decoder\nconfiguration of our architecture. 
In a human evaluation study, we find that\nimages generated by our super-resolution model fool human observers three times\nmore often than the previous state of the art.", "field": [], "task": ["Image Generation", "Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["ImageNet 32x32", "CIFAR-10"], "metric": ["bits/dimension", "bpd"], "title": "Image Transformer"} {"abstract": "Modeling the distribution of natural images is challenging, partly because of\nstrong statistical dependencies which can extend over hundreds of pixels.\nRecurrent neural networks have been successful in capturing long-range\ndependencies in a number of problems but only recently have found their way\ninto generative image models. We here introduce a recurrent image model based\non multi-dimensional long short-term memory units which are particularly suited\nfor image modeling due to their spatial structure. Our model scales to images\nof arbitrary size and its likelihood is computationally tractable. We find that\nit outperforms the state of the art in quantitative comparisons on several\nimage datasets and produces promising results when used for texture synthesis\nand inpainting.", "field": [], "task": ["Image Generation", "Texture Synthesis"], "method": [], "dataset": ["CIFAR-10"], "metric": ["bits/dimension"], "title": "Generative Image Modeling Using Spatial LSTMs"} {"abstract": "Transfer learning is a widely used method to build high performing computer\nvision models. In this paper, we study the efficacy of transfer learning by\nexamining how the choice of data impacts performance. We find that more\npre-training data does not always help, and transfer performance depends on a\njudicious choice of pre-training data. These findings are important given the\ncontinued increase in dataset sizes. We further propose domain adaptive\ntransfer learning, a simple and effective pre-training method using importance\nweights computed based on the target dataset. Our method to compute importance\nweights follow from ideas in domain adaptation, and we show a novel application\nto transfer learning. Our methods achieve state-of-the-art results on multiple\nfine-grained classification datasets and are well-suited for use in practice.", "field": [], "task": ["Domain Adaptation", "Fine-Grained Image Classification", "Transfer Learning"], "method": [], "dataset": ["Stanford Cars"], "metric": ["Accuracy"], "title": "Domain Adaptive Transfer Learning with Specialist Models"} {"abstract": "Two optical flow estimation problems are addressed: i) occlusion estimation\nand handling, and ii) estimation from image sequences longer than two frames.\nThe proposed ContinualFlow method estimates occlusions before flow, avoiding\nthe use of flow corrupted by occlusions for their estimation. We show that\nproviding occlusion masks as an additional input to flow estimation improves\nthe standard performance metric by more than 25\\% on both KITTI and Sintel. As\na second contribution, a novel method for incorporating information from past\nframes into flow estimation is introduced. The previous frame flow serves as an\ninput to occlusion estimation and as a prior in occluded regions, i.e. those\nwithout visual correspondences. 
By continually using the previous frame flow,\nContinualFlow performance improves further by 18\\% on KITTI and 7\\% on Sintel,\nachieving top performance on KITTI and Sintel.", "field": [], "task": ["Occlusion Estimation", "Optical Flow Estimation"], "method": [], "dataset": ["Sintel-final"], "metric": ["Average End-Point Error"], "title": "Continual Occlusions and Optical Flow Estimation"} {"abstract": "Language Identification (LI) is an important first step in several speech processing systems. With a growing number of voice-based assistants, speech LI has emerged as a widely researched field. To approach the problem of identifying languages, we can either adopt an implicit approach where only the speech for a language is present or an explicit one where text is available with its corresponding transcript. This paper focuses on an implicit approach due to the absence of transcriptive data. This paper benchmarks existing models and proposes a new attention based model for language identification which uses log-Mel spectrogram images as input. We also present the effectiveness of raw waveforms as features to neural network models for LI tasks. For training and evaluation of models, we classified six languages (English, French, German, Spanish, Russian and Italian) with an accuracy of 95.4% and four languages (English, French, German, Spanish) with an accuracy of 96.3% obtained from the VoxForge dataset. This approach can further be scaled to incorporate more languages.", "field": [], "task": ["Language Identification", "Spoken language identification"], "method": [], "dataset": ["VoxForge European", "VoxForge Commonwealth"], "metric": ["Accuracy (%)"], "title": "Spoken Language Identification using ConvNets"} {"abstract": "Model-based optimization methods and discriminative learning methods have\nbeen the two dominant strategies for solving various inverse problems in\nlow-level vision. Typically, those two kinds of methods have their respective\nmerits and drawbacks, e.g., model-based optimization methods are flexible for\nhandling different inverse problems but are usually time-consuming with\nsophisticated priors for the purpose of good performance; in the meanwhile,\ndiscriminative learning methods have fast testing speed but their application\nrange is greatly restricted by the specialized task. Recent works have revealed\nthat, with the aid of variable splitting techniques, denoiser prior can be\nplugged in as a modular part of model-based optimization methods to solve other\ninverse problems (e.g., deblurring). Such an integration induces considerable\nadvantage when the denoiser is obtained via discriminative learning. However,\nthe study of integration with fast discriminative denoiser prior is still\nlacking. To this end, this paper aims to train a set of fast and effective CNN\n(convolutional neural network) denoisers and integrate them into model-based\noptimization method to solve other inverse problems. 
Experimental results\ndemonstrate that the learned set of denoisers not only achieve promising\nGaussian denoising results but also can be used as prior to deliver good\nperformance for various low-level vision applications.", "field": [], "task": ["Color Image Denoising", "Deblurring", "Denoising", "Image Denoising", "Image Restoration"], "method": [], "dataset": ["Set5 - 3x upscaling", "Set14 - 2x upscaling", "Set14 - 4x upscaling", "CBSD68 sigma50", "BSD68 sigma15", "BSD68 sigma50", "Set14 - 3x upscaling", "BSD68 sigma35", "Set5 - 4x upscaling", "BSD68 sigma25", "BSD68 sigma5", "Set5 - 2x upscaling"], "metric": ["PSNR"], "title": "Learning Deep CNN Denoiser Prior for Image Restoration"} {"abstract": "This paper investigates the use of automatically collected web audio data for the task of spoken language recognition. We generate semi-random search phrases from language-specific Wikipedia data that are then used to retrieve videos from YouTube for 107 languages. Speech activity detection and speaker diarization are used to extract segments from the videos that contain speech. Post-filtering is used to remove segments from the database that are likely not in the given language, increasing the proportion of correctly labeled segments to 98%, based on crowd-sourced verification. The size of the resulting training set (VoxLingua107) is 6628 hours (62 hours per language on the average) and it is accompanied by an evaluation set of 1609 verified utterances. We use the data to build language recognition models for several spoken language identification tasks. Experiments show that using the automatically retrieved training data gives competitive results to using hand-labeled proprietary datasets. The dataset is publicly available.", "field": [], "task": ["Action Detection", "Activity Detection", "Language Identification", "Speaker Diarization", "Spoken language identification"], "method": [], "dataset": ["LRE07", "VOXLINGUA107", "KALAKA-3"], "metric": ["PO", "3 sec", "0..5sec", "Average", "30 sec", "5..20sec", "PC", "EC", "10 sec", "EO"], "title": "VOXLINGUA107: A DATASET FOR SPOKEN LANGUAGE RECOGNITION"} {"abstract": "The fully connected layers of a deep convolutional neural network typically\ncontain over 90% of the network parameters, and consume the majority of the\nmemory required to store the network parameters. Reducing the number of\nparameters while preserving essentially the same predictive performance is\ncritically important for operating deep neural networks in memory constrained\nenvironments such as GPUs or embedded devices.\n In this paper we show how kernel methods, in particular a single Fastfood\nlayer, can be used to replace all fully connected layers in a deep\nconvolutional neural network. This novel Fastfood layer is also end-to-end\ntrainable in conjunction with convolutional layers, allowing us to combine them\ninto a new architecture, named deep fried convolutional networks, which\nsubstantially reduces the memory footprint of convolutional networks trained on\nMNIST and ImageNet with no drop in predictive performance.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["MNIST"], "metric": ["Percentage error"], "title": "Deep Fried Convnets"} {"abstract": "We present a generative model for the unsupervised learning of dependency structures. We also describe the multiplicative combination of this dependency model with a model of linear constituency. 
The product model outperforms both components on their respective evaluation metrics, giving the best published figures for unsupervised dependency parsing and unsupervised constituency parsing. We also demonstrate that the combined model works and is robust cross-linguistically, being able to exploit either attachment or distributional regularities that are salient in the data.", "field": [], "task": ["Constituency Parsing", "Dependency Parsing", "Unsupervised Dependency Parsing"], "method": [], "dataset": ["Penn Treebank"], "metric": ["UAS"], "title": "Corpus-Based Induction of Syntactic Structure: Models of Dependency and Constituency"} {"abstract": "We present Deep Graph Infomax (DGI), a general approach for learning node\nrepresentations within graph-structured data in an unsupervised manner. DGI\nrelies on maximizing mutual information between patch representations and\ncorresponding high-level summaries of graphs---both derived using established\ngraph convolutional network architectures. The learnt patch representations\nsummarize subgraphs centered around nodes of interest, and can thus be reused\nfor downstream node-wise learning tasks. In contrast to most prior approaches\nto unsupervised learning with GCNs, DGI does not rely on random walk\nobjectives, and is readily applicable to both transductive and inductive\nlearning setups. We demonstrate competitive performance on a variety of node\nclassification benchmarks, which at times even exceeds the performance of\nsupervised learning.", "field": [], "task": ["Node Classification"], "method": [], "dataset": ["Cora", "Pubmed", "Citeseer"], "metric": ["Accuracy"], "title": "Deep Graph Infomax"} {"abstract": "We present a self-training approach to unsupervised dependency parsing that\nreuses existing supervised and unsupervised parsing algorithms. Our approach,\ncalled `iterated reranking' (IR), starts with dependency trees generated by an\nunsupervised parser, and iteratively improves these trees using the richer\nprobability models used in supervised parsing that are in turn trained on these\ntrees. Our system achieves 1.8% accuracy higher than the state-of-the-art\nparser of Spitkovsky et al. (2013) on the WSJ corpus.", "field": [], "task": ["Dependency Parsing", "Unsupervised Dependency Parsing"], "method": [], "dataset": ["Penn Treebank"], "metric": ["UAS"], "title": "Unsupervised Dependency Parsing: Let's Use Supervised Parsers"} {"abstract": "Inducing a grammar directly from text is one of the oldest and most challenging tasks in Computational Linguistics. Significant progress has been made for inducing dependency grammars, however the models employed are overly simplistic, particularly in comparison to supervised parsing models. In this paper we present an approach to dependency grammar induction using tree substitution grammar which is capable of learning large dependency fragments and thereby better modelling the text. We define a hierarchical non-parametric Pitman-Yor Process prior which biases towards a small grammar with simple productions.
This approach significantly improves the state-of-the-art, when measured by head attachment accuracy.", "field": [], "task": ["Dependency Grammar Induction", "Dependency Parsing", "Unsupervised Dependency Parsing"], "method": [], "dataset": ["Penn Treebank"], "metric": ["UAS"], "title": "Unsupervised Induction of Tree Substitution Grammars for Dependency Parsing"} {"abstract": "We present a family of priors over probabilistic grammar weights, called the shared logistic normal distribution. This family extends the partitioned logistic normal distribution, enabling factored covariance between the probabilities of different derivation events in the probabilistic grammar, providing a new way to encode prior knowledge about an unknown grammar. We describe a variational EM algorithm for learning a probabilistic grammar based on this family of priors. We then experiment with unsupervised dependency grammar induction and show significant improvements using our model for both monolingual learning and bilingual learning with a non-parallel, multilingual corpus.", "field": [], "task": ["Dependency Grammar Induction", "Unsupervised Dependency Parsing"], "method": [], "dataset": ["Penn Treebank"], "metric": ["UAS"], "title": "Shared Logistic Normal Distributions for Soft Parameter Tying in Unsupervised Grammar Induction"} {"abstract": "The NOESIS II challenge, as the Track 2 of the 8th Dialogue System Technology Challenges (DSTC 8), is the extension of DSTC 7. This track incorporates new elements that are vital for the creation of a deployed task-oriented dialogue system. This paper describes our systems that are evaluated on all subtasks under this challenge. We study the problem of employing pre-trained attention-based network for multi-turn dialogue systems. Meanwhile, several adaptation methods are proposed to adapt the pre-trained language models for multi-turn dialogue systems, in order to keep the intrinsic property of dialogue systems. In the released evaluation results of Track 2 of DSTC 8, our proposed models ranked fourth in subtask 1, third in subtask 2, and first in subtask 3 and subtask 4 respectively.", "field": [], "task": ["Conversation Disentanglement", "Task-Oriented Dialogue Systems"], "method": [], "dataset": ["irc-disentanglement"], "metric": ["VI", "F", "P", "R"], "title": "Pre-Trained and Attention-Based Neural Networks for Building Noetic Task-Oriented Dialogue Systems"} {"abstract": "Autonomous driving requires 3D perception of vehicles and other objects in\nthe environment. Most of the current methods support 2D vehicle detection.\nThis paper proposes a flexible pipeline to adopt any 2D detection network and\nfuse it with a 3D point cloud to generate 3D information with minimum changes\nof the 2D detection networks. To identify the 3D box, an effective model\nfitting algorithm is developed based on generalised car models and score maps.\nA two-stage convolutional neural network (CNN) is proposed to refine the\ndetected 3D box. This pipeline is tested on the KITTI dataset using two\ndifferent 2D detection networks.
The 3D detection results based on these two\nnetworks are similar, demonstrating the flexibility of the proposed pipeline.\nThe results rank second among the 3D detection algorithms, indicating its\ncompetencies in 3D detection.", "field": [], "task": ["3D Object Detection", "Autonomous Driving"], "method": [], "dataset": ["KITTI Cars Hard", "KITTI Cars Moderate", "KITTI Cars Easy"], "metric": ["AP"], "title": "A General Pipeline for 3D Detection of Vehicles"} {"abstract": "Attention networks in multimodal learning provide an efficient way to utilize\ngiven visual information selectively. However, the computational cost to learn\nattention distributions for every pair of multimodal input channels is\nprohibitively expensive. To solve this problem, co-attention builds two\nseparate attention distributions for each modality neglecting the interaction\nbetween multimodal inputs. In this paper, we propose bilinear attention\nnetworks (BAN) that find bilinear attention distributions to utilize given\nvision-language information seamlessly. BAN considers bilinear interactions\namong two groups of input channels, while low-rank bilinear pooling extracts\nthe joint representations for each pair of channels. Furthermore, we propose a\nvariant of multimodal residual networks to exploit eight-attention maps of the\nBAN efficiently. We quantitatively and qualitatively evaluate our model on\nvisual question answering (VQA 2.0) and Flickr30k Entities datasets, showing\nthat BAN significantly outperforms previous methods and achieves new\nstate-of-the-arts on both datasets.", "field": [], "task": ["Visual Question Answering"], "method": [], "dataset": ["VQA v2 test-std", "Flickr30k Entities Test", "VQA v2 test-dev"], "metric": ["overall", "R@10", "R@5", "Accuracy", "R@1"], "title": "Bilinear Attention Networks"} {"abstract": "Network representation learning (NRL) has been widely used to help analyze\nlarge-scale networks through mapping original networks into a low-dimensional\nvector space. However, existing NRL methods ignore the impact of properties of\nrelations on the object relevance in heterogeneous information networks (HINs).\nTo tackle this issue, this paper proposes a new NRL framework, called\nEvent2vec, for HINs to consider both quantities and properties of relations\nduring the representation learning process. Specifically, an event (i.e., a\ncomplete semantic unit) is used to represent the relation among multiple\nobjects, and both event-driven first-order and second-order proximities are\ndefined to measure the object relevance according to the quantities and\nproperties of relations. We theoretically prove how event-driven proximities\ncan be preserved in the embedding space by Event2vec, which utilizes event\nembeddings to facilitate learning the object embeddings. Experimental studies\ndemonstrate the advantages of Event2vec over state-of-the-art algorithms on\nfour real-world datasets and three network analysis tasks (including network\nreconstruction, link prediction, and node classification).", "field": [], "task": ["Link Prediction", "Node Classification", "Representation Learning"], "method": [], "dataset": ["IMDb", "Yelp", "Douban", "DBLP"], "metric": ["AUC"], "title": "Representation Learning for Heterogeneous Information Networks via Embedding Events"} {"abstract": "In this work we present a self-supervised learning framework to\nsimultaneously train two Convolutional Neural Networks (CNNs) to predict depth\nand surface normals from a single image. 
In contrast to most existing\nframeworks which represent outdoor scenes as fronto-parallel planes at\npiece-wise smooth depth, we propose to predict depth with surface orientation\nwhile assuming that natural scenes have piece-wise smooth normals. We show that\na simple depth-normal consistency as a soft-constraint on the predictions is\nsufficient and effective for training both these networks simultaneously. The\ntrained normal network provides state-of-the-art predictions while the depth\nnetwork, relying on much realistic smooth normal assumption, outperforms the\ntraditional self-supervised depth prediction network by a large margin on the\nKITTI benchmark. Demo video: https://youtu.be/ZD-ZRsw7hdM", "field": [], "task": ["Depth Estimation", "Monocular Depth Estimation", "Self-Supervised Learning"], "method": [], "dataset": ["KITTI Eigen split"], "metric": ["absolute relative error"], "title": "Self-supervised Learning for Single View Depth and Surface Normal Estimation"} {"abstract": "The symmetry for the corners of a box, the continuity for the surfaces of a monitor, the linkage between the torso and other body parts --- it suggests that 3D objects may have common and underlying inner relations between local structures, and it is a fundamental ability for intelligent species to reason for them. In this paper, we propose an effective plug-and-play module called the structural relation network (SRN) to reason about the structural dependencies of local regions in 3D point clouds. Existing network architectures on point sets such as PointNet++ capture local structures individually, without considering their inner interactions. Instead, our SRN simultaneously exploits local information by modeling their geometrical and locational relations, which play critical roles for our humans to understand 3D objects. The proposed SRN module is simple, interpretable, and does not require any additional supervision signals, which can be easily equipped with the existing networks. Experimental results on benchmark datasets indicate promising boosts on the tasks of 3D point cloud classification and segmentation by capturing structural relations with the SRN module.\r", "field": [], "task": ["3D Part Segmentation", "3D Point Cloud Classification", "Relational Reasoning"], "method": [], "dataset": ["ShapeNet-Part"], "metric": ["Class Average IoU", "Instance Average IoU"], "title": "Structural Relational Reasoning of Point Clouds"} {"abstract": "This work presents a novel pipeline that demonstrates what is achievable with a combined effort of state-of-the-art approaches, surpassing the 50% exact match on NaturalQuestions and EfficientQA datasets. Specifically, it proposes the novel R2-D2 (Rank twice, reaD twice) pipeline composed of retriever, reranker, extractive reader, generative reader and a simple way to combine them. Furthermore, previous work often comes with a massive index of external documents that scales in the order of tens of GiB.
This work presents a simple approach for pruning the contents of a massive index such that the open-domain QA system altogether with index, OS, and library components fits into 6GiB docker image while retaining only 8% of original index contents and losing only 3% EM accuracy.", "field": [], "task": ["Open-Domain Question Answering"], "method": [], "dataset": ["Natural Questions"], "metric": ["Exact Match"], "title": "Pruning the Index Contents for Memory Efficient Open-Domain QA"} {"abstract": "Our work involves enriching the Stack-LSTM transition-based AMR parser (Ballesteros and Al-Onaizan, 2017) by augmenting training with Policy Learning and rewarding the Smatch score of sampled graphs. In addition, we also combined several AMR-to-text alignments with an attention mechanism and we supplemented the parser with pre-processed concept identification, named entities and contextualized embeddings. We achieve a highly competitive performance that is comparable to the best published results. We show an in-depth study ablating each of the new components of the parser", "field": [], "task": ["AMR Parsing"], "method": [], "dataset": ["LDC2017T10"], "metric": ["Smatch"], "title": "Rewarding Smatch: Transition-Based AMR Parsing with Reinforcement Learning"} {"abstract": "Video based action recognition is one of the important and challenging\nproblems in computer vision research. Bag of Visual Words model (BoVW) with\nlocal features has become the most popular method and obtained the\nstate-of-the-art performance on several realistic datasets, such as the HMDB51,\nUCF50, and UCF101. BoVW is a general pipeline to construct a global\nrepresentation from a set of local features, which is mainly composed of five\nsteps: (i) feature extraction, (ii) feature pre-processing, (iii) codebook\ngeneration, (iv) feature encoding, and (v) pooling and normalization. Many\nefforts have been made in each step independently in different scenarios and\ntheir effect on action recognition is still unknown. Meanwhile, video data\nexhibits different views of visual pattern, such as static appearance and\nmotion dynamics. Multiple descriptors are usually extracted to represent these\ndifferent views. Many feature fusion methods have been developed in other areas\nand their influence on action recognition has never been investigated before.\nThis paper aims to provide a comprehensive study of all steps in BoVW and\ndifferent fusion methods, and uncover some good practice to produce a\nstate-of-the-art action recognition system. Specifically, we explore two kinds\nof local features, ten kinds of encoding methods, eight kinds of pooling and\nnormalization strategies, and three kinds of fusion methods. We conclude that\nevery step is crucial for contributing to the final recognition rate.\nFurthermore, based on our comprehensive study, we propose a simple yet\neffective representation, called hybrid representation, by exploring the\ncomplementarity of different BoVW frameworks and local descriptors. 
Using this\nrepresentation, we obtain the state-of-the-art on the three challenging\ndatasets: HMDB51 (61.1%), UCF50 (92.3%), and UCF101 (87.9%).", "field": [], "task": ["Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["UCF101"], "metric": ["3-fold Accuracy"], "title": "Bag of Visual Words and Fusion Methods for Action Recognition: Comprehensive Study and Good Practice"} {"abstract": "Real-time semantic segmentation of LiDAR data is crucial for autonomously driving vehicles, which are usually equipped with an embedded platform and have limited computational resources. Approaches that operate directly on the point cloud use complex spatial aggregation operations, which are very expensive and difficult to optimize for embedded platforms. They are therefore not suitable for real-time applications with embedded systems. As an alternative, projection-based methods are more efficient and can run on embedded platforms. However, the current state-of-the-art projection-based methods do not achieve the same accuracy as point-based methods and use millions of parameters. In this paper, we therefore propose a projection-based method, called Multi-scale Interaction Network (MINet), which is very efficient and accurate. The network uses multiple paths with different scales and balances the computational resources between the scales. Additional dense interactions between the scales avoid redundant computations and make the network highly efficient. The proposed network outperforms point-based, image-based, and projection-based methods in terms of accuracy, number of parameters, and runtime. Moreover, the network processes more than 24 scans per second on an embedded platform, which is higher than the framerates of LiDAR sensors. The network is therefore suitable for autonomous vehicles.", "field": [], "task": ["3D Semantic Segmentation", "Autonomous Vehicles", "Real-Time 3D Semantic Segmentation", "Real-Time Semantic Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["SemanticKITTI"], "metric": ["Parameters (M)", "Speed (FPS)", "mIoU"], "title": "Multi-scale Interaction for Real-time LiDAR Data Segmentation on an Embedded Platform"} {"abstract": "Current state-of-the-art approaches for spatio-temporal action localization\nrely on detections at the frame level and model temporal context with 3D\nConvNets. Here, we go one step further and model spatio-temporal relations to\ncapture the interactions between human actors, relevant objects and scene\nelements essential to differentiate similar human actions. Our approach is\nweakly supervised and mines the relevant elements automatically with an\nactor-centric relational network (ACRN). ACRN computes and accumulates\npair-wise relation information from actor and global scene features, and\ngenerates relation features for action classification. It is implemented as\nneural networks and can be trained jointly with an existing action detection\nsystem. We show that ACRN outperforms alternative approaches which capture\nrelation information, and that the proposed framework improves upon the\nstate-of-the-art performance on JHMDB and AVA. 
A visualization of the learned\nrelation features confirms that our approach is able to attend to the relevant\nrelations for each action.", "field": [], "task": ["Action Classification", "Action Classification ", "Action Detection", "Action Localization", "Action Recognition", "Spatio-Temporal Action Localization", "Temporal Action Localization"], "method": [], "dataset": ["AVA v2.1"], "metric": ["mAP (Val)"], "title": "Actor-Centric Relation Network"} {"abstract": "The continually increasing number of complex datasets each year necessitates\never improving machine learning methods for robust and accurate categorization\nof these data. This paper introduces Random Multimodel Deep Learning (RMDL): a\nnew ensemble, deep learning approach for classification. Deep learning models\nhave achieved state-of-the-art results across many domains. RMDL solves the\nproblem of finding the best deep learning structure and architecture while\nsimultaneously improving robustness and accuracy through ensembles of deep\nlearning architectures. RDML can accept as input a variety data to include\ntext, video, images, and symbolic. This paper describes RMDL and shows test\nresults for image and text data including MNIST, CIFAR-10, WOS, Reuters, IMDB,\nand 20newsgroup. These test results show that RDML produces consistently better\nperformance than standard methods over a broad range of data types and\nclassification problems.", "field": [], "task": ["Document Classification", "Face Recognition", "Hierarchical Text Classification of Blurbs (GermEval 2019)", "Image Classification", "Multi-Label Text Classification", "Unsupervised Pre-training"], "method": [], "dataset": ["Measles", "LOCAL DATASET", "20NEWS", "CIFAR-10", "MNIST", "UCI measles"], "metric": ["Percentage error", "Percentage correct", "Sensitivity", "Accuracy (%)", "Accuracy", "Sensitivity (VEB)"], "title": "RMDL: Random Multimodel Deep Learning for Classification"} {"abstract": "This work proposes a general-purpose, fully-convolutional network\narchitecture for efficiently processing large-scale 3D data. One striking\ncharacteristic of our approach is its ability to process unorganized 3D\nrepresentations such as point clouds as input, then transforming them\ninternally to ordered structures to be processed via 3D convolutions. In\ncontrast to conventional approaches that maintain either unorganized or\norganized representations, from input to output, our approach has the advantage\nof operating on memory efficient input data representations while at the same\ntime exploiting the natural structure of convolutional operations to avoid the\nredundant computing and storing of spatial information in the network. The\nnetwork eliminates the need to pre- or post process the raw sensor data. This,\ntogether with the fully-convolutional nature of the network, makes it an\nend-to-end method able to process point clouds of huge spaces or even entire\nrooms with up to 200k points at once. Another advantage is that our network can\nproduce either an ordered output or map predictions directly onto the input\ncloud, thus making it suitable as a general-purpose point cloud descriptor\napplicable to many 3D tasks. 
We demonstrate our network's ability to\neffectively learn both low-level features as well as complex compositional\nrelationships by evaluating it on benchmark datasets for semantic voxel\nsegmentation, semantic part segmentation and 3D scene captioning.", "field": [], "task": ["Semantic Segmentation"], "method": [], "dataset": ["ScanNet"], "metric": ["3DIoU"], "title": "Fully-Convolutional Point Networks for Large-Scale Point Clouds"} {"abstract": "We propose a new layer design by adding a linear gating mechanism to shortcut\nconnections. By using a scalar parameter to control each gate, we provide a way\nto learn identity mappings by optimizing only one parameter. We build upon the\nmotivation behind Residual Networks, where a layer is reformulated in order to\nmake learning identity mappings less problematic to the optimizer. The\naugmentation introduces only one extra parameter per layer, and provides easier\noptimization by making degeneration into identity mappings simpler. We propose\na new model, the Gated Residual Network, which is the result when augmenting\nResidual Networks. Experimental results show that augmenting layers provides\nbetter optimization, increased performance, and more layer independence. We\nevaluate our method on MNIST using fully-connected networks, showing empirical\nindications that our augmentation facilitates the optimization of deep models,\nand that it provides high tolerance to full layer removal: the model retains\nover 90% of its performance even after half of its layers have been randomly\nremoved. We also evaluate our model on CIFAR-10 and CIFAR-100 using Wide Gated\nResNets, achieving 3.65% and 18.27% error, respectively.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["CIFAR-100", "CIFAR-10"], "metric": ["Percentage correct"], "title": "Learning Identity Mappings with Residual Gates"} {"abstract": "In this paper, we propose a novel Pattern-Affinitive Propagation (PAP) framework to jointly predict depth, surface normal and semantic segmentation. The motivation behind it comes from the statistic observation that pattern-affinitive pairs recur much frequently across different tasks as well as within a task. Thus, we can conduct two types of propagations, cross-task propagation and task-specific propagation, to adaptively diffuse those similar patterns. The former integrates cross-task affinity patterns to adapt to each task therein through the calculation on non-local relationships. Next the latter performs an iterative diffusion in the feature space so that the cross-task affinity patterns can be widely-spread within the task. Accordingly, the learning of each task can be regularized and boosted by the complementary task-level affinities. Extensive experiments demonstrate the effectiveness and the superiority of our method on the joint three tasks. Meanwhile, we achieve the state-of-the-art or competitive results on the three related datasets, NYUD-v2, SUN-RGBD and KITTI.", "field": [], "task": ["Monocular Depth Estimation", "Semantic Segmentation"], "method": [], "dataset": ["NYU-Depth V2"], "metric": ["RMSE"], "title": "Pattern-Affinitive Propagation across Depth, Surface Normal and Semantic Segmentation"} {"abstract": "Monocular depth estimation is an ill-posed problem, and as such critically relies on scene priors and semantics. Due to its complexity, we propose a deep neural network model based on a semantic divide-and-conquer approach. 
Our model decomposes a scene into semantic segments, such as object instances and background stuff classes, and then predicts a scale and shift invariant depth map for each semantic segment in a canonical space. Semantic segments of the same category share the same depth decoder, so the global depth prediction task is decomposed into a series of category-specific ones, which are simpler to learn and easier to generalize to new scene types. Finally, our model stitches each local depth segment by predicting its scale and shift based on the global context of the image. The model is trained end-to-end using a multi-task loss for panoptic segmentation and depth prediction, and is therefore able to leverage large-scale panoptic segmentation datasets to boost its semantic understanding. We validate the effectiveness of our approach and show state-of-the-art performance on three benchmark datasets.\r", "field": [], "task": ["Depth Estimation", "Monocular Depth Estimation", "Panoptic Segmentation"], "method": [], "dataset": ["NYU-Depth V2", "Cityscapes test"], "metric": ["RMSE"], "title": "SDC-Depth: Semantic Divide-and-Conquer Network for Monocular Depth Estimation"} {"abstract": "Pose Machines provide a sequential prediction framework for learning rich\nimplicit spatial models. In this work we show a systematic design for how\nconvolutional networks can be incorporated into the pose machine framework for\nlearning image features and image-dependent spatial models for the task of pose\nestimation. The contribution of this paper is to implicitly model long-range\ndependencies between variables in structured prediction tasks such as\narticulated pose estimation. We achieve this by designing a sequential\narchitecture composed of convolutional networks that directly operate on belief\nmaps from previous stages, producing increasingly refined estimates for part\nlocations, without the need for explicit graphical model-style inference. Our\napproach addresses the characteristic difficulty of vanishing gradients during\ntraining by providing a natural learning objective function that enforces\nintermediate supervision, thereby replenishing back-propagated gradients and\nconditioning the learning procedure. We demonstrate state-of-the-art\nperformance and outperform competing methods on standard benchmarks including\nthe MPII, LSP, and FLIC datasets.", "field": [], "task": ["3D Human Pose Estimation", "Pose Estimation", "Structured Prediction"], "method": [], "dataset": ["FLIC Wrists", "Leeds Sports Poses", "FLIC Elbows", "Total Capture", "J-HMDB", "MPII Human Pose"], "metric": ["Average MPJPE (mm)", "PCK@0.2", "PCKh-0.5", "Mean PCK@0.2", "PCK"], "title": "Convolutional Pose Machines"} {"abstract": "In this paper, we present Adaptive Computation Steps (ACS) algorithm, which\nenables end-to-end speech recognition models to dynamically decide how many\nframes should be processed to predict a linguistic output. The model that\napplies ACS algorithm follows the encoder-decoder framework, while unlike the\nattention-based models, it produces alignments independently at the encoder\nside using the correlation between adjacent frames. Thus, predictions can be\nmade as soon as sufficient acoustic information is received, which makes the\nmodel applicable in online cases. Besides, a small change is made to the\ndecoding stage of the encoder-decoder framework, which allows the prediction to\nexploit bidirectional contexts.
We verify the ACS algorithm on a Mandarin\nspeech corpus AIShell-1, and it achieves a 31.2% CER in the online occasion,\ncompared to the 32.4% CER of the attention-based model. To fully demonstrate\nthe advantage of ACS algorithm, offline experiments are conducted, in which our\nACS model achieves an 18.7% CER, outperforming the attention-based counterpart\nwith the CER of 22.0%.", "field": [], "task": ["End-To-End Speech Recognition", "Speech Recognition"], "method": [], "dataset": ["AISHELL-1"], "metric": ["Word Error Rate (WER)"], "title": "End-to-end Speech Recognition with Adaptive Computation Steps"} {"abstract": "Normalizing Flows are generative models which produce tractable distributions where both sampling and density evaluation can be efficient and exact. The goal of this survey article is to give a coherent and comprehensive review of the literature around the construction and use of Normalizing Flows for distribution learning. We aim to provide context and explanation of the models, review current state-of-the-art literature, and identify open questions and promising future directions.", "field": ["Distribution Approximation"], "task": [], "method": ["Normalizing Flows"], "dataset": [], "metric": [], "title": "Normalizing Flows: An Introduction and Review of Current Methods"} {"abstract": "Vision-and-Language Navigation (VLN) tasks such as Room-to-Room (R2R) require machine agents to interpret natural language instructions and learn to act in visually realistic environments to achieve navigation goals. The overall task requires competence in several perception problems: successful agents combine spatio-temporal, vision and language understanding to produce appropriate action sequences. Our approach adapts pre-trained vision and language representations to relevant in-domain tasks making them more effective for VLN. Specifically, the representations are adapted to solve both a cross-modal sequence alignment and sequence coherence task. In the sequence alignment task, the model determines whether an instruction corresponds to a sequence of visual frames. In the sequence coherence task, the model determines whether the perceptual sequences are predictive sequentially in the instruction-conditioned latent space. By transferring the domain-adapted representations, we improve competitive agents in R2R as measured by the success rate weighted by path length (SPL) metric.", "field": [], "task": ["Representation Learning", "Vision and Language Navigation"], "method": [], "dataset": ["VLN Challenge"], "metric": ["length", "spl", "oracle success", "success", "error"], "title": "Transferable Representation Learning in Vision-and-Language Navigation"} {"abstract": "Despite recent impressive results on single-object and single-domain image generation, the generation of complex scenes with multiple objects remains challenging. In this paper, we start with the idea that a model must be able to understand individual objects and relationships between objects in order to generate complex scenes well. Our layout-to-image-generation method, which we call Object-Centric Generative Adversarial Network (or OC-GAN), relies on a novel Scene-Graph Similarity Module (SGSM). The SGSM learns representations of the spatial relationships between objects in the scene, which lead to our model's improved layout-fidelity. We also propose changes to the conditioning mechanism of the generator that enhance its object instance-awareness. 
Apart from improving image quality, our contributions mitigate two failure modes in previous approaches: (1) spurious objects being generated without corresponding bounding boxes in the layout, and (2) overlapping bounding boxes in the layout leading to merged objects in images. Extensive quantitative evaluation and ablation studies demonstrate the impact of our contributions, with our model outperforming previous state-of-the-art approaches on both the COCO-Stuff and Visual Genome datasets. Finally, we address an important limitation of evaluation metrics used in previous works by introducing SceneFID -- an object-centric adaptation of the popular Fr{\\'e}chet Inception Distance metric, that is better suited for multi-object images.", "field": [], "task": ["Image Generation", "Layout-to-Image Generation"], "method": [], "dataset": ["COCO-Stuff 64x64", "COCO-Stuff 256x256", "Visual Genome 128x128", "Visual Genome 256x256", "Visual Genome 64x64", "COCO-Stuff 128x128"], "metric": ["Inception Score", "SceneFID", "FID"], "title": "Object-Centric Image Generation from Layouts"} {"abstract": "Despite data augmentation being a de facto technique for boosting the performance of deep neural networks, little attention has been paid to developing augmentation strategies for generative adversarial networks (GANs). To this end, we introduce a novel augmentation scheme designed specifically for GAN-based semantic image synthesis models. We propose to randomly warp object shapes in the semantic label maps used as an input to the generator. The local shape discrepancies between the warped and non-warped label maps and images enable the GAN to learn better the structural and geometric details of the scene and thus to improve the quality of generated images. While benchmarking the augmented GAN models against their vanilla counterparts, we discover that the quantification metrics reported in the previous semantic image synthesis studies are strongly biased towards specific semantic classes as they are derived via an external pre-trained segmentation network. We therefore propose to improve the established semantic image synthesis evaluation scheme by analyzing separately the performance of generated images on the biased and unbiased classes for the given segmentation network. Finally, we show strong quantitative and qualitative improvements obtained with our augmentation scheme, on both class splits, using state-of-the-art semantic image synthesis models across three different datasets. On average across COCO-Stuff, ADE20K and Cityscapes datasets, the augmented models outperform their vanilla counterparts by ~3 mIoU and ~10 FID points.", "field": [], "task": ["Data Augmentation", "Image Generation", "Image-to-Image Translation"], "method": [], "dataset": ["ADE20K Labels-to-Photos", "COCO-Stuff Labels-to-Photos", "Cityscapes Labels-to-Photo"], "metric": ["mIoU", "FID", "Accuracy"], "title": "Improving Augmentation and Evaluation Schemes for Semantic Image Synthesis"} {"abstract": "Supervised training of neural networks for classification is typically performed with a global loss function. The loss function provides a gradient for the output layer, and this gradient is back-propagated to hidden layers to dictate an update direction for the weights. An alternative approach is to train the network with layer-wise loss functions. In this paper we demonstrate, for the first time, that layer-wise training can approach the state-of-the-art on a variety of image datasets. 
We use single-layer sub-networks and two different supervised loss functions to generate local error signals for the hidden layers, and we show that the combination of these losses help with optimization in the context of local learning. Using local errors could be a step towards more biologically plausible deep learning because the global error does not have to be transported back to hidden layers. A completely backprop free variant outperforms previously reported results among methods aiming for higher biological plausibility. Code is available https://github.com/anokland/local-loss", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["Kuzushiji-MNIST", "CIFAR-100", "CIFAR-10", "MNIST", "STL-10", "SVHN", "Fashion-MNIST"], "metric": ["Error", "Percentage error", "Percentage correct", "Accuracy"], "title": "Training Neural Networks with Local Error Signals"} {"abstract": "Recent studies have used deep residual convolutional neural networks (CNNs)\nfor JPEG compression artifact reduction. This study proposes a scalable CNN\ncalled S-Net. Our approach effectively adjusts the network scale dynamically in\na multitask system for real-time operation with little performance loss. It\noffers a simple and direct technique to evaluate the performance gains obtained\nwith increasing network depth, and it is helpful for removing redundant network\nlayers to maximize the network efficiency. We implement our architecture using\nthe Keras framework with the TensorFlow backend on an NVIDIA K80 GPU server. We\ntrain our models on the DIV2K dataset and evaluate their performance on public\nbenchmark datasets. To validate the generality and universality of the proposed\nmethod, we created and utilized a new dataset, called WIN143, for\nover-processed images evaluation. Experimental results indicate that our\nproposed approach outperforms other CNN-based methods and achieves\nstate-of-the-art performance.", "field": [], "task": ["JPEG Artifact Correction", "Jpeg Compression Artifact Reduction"], "method": [], "dataset": ["LIVE1 (Quality 20 Grayscale)", "Live1 (Quality 10 Grayscale)", "LIVE1 (Quality 10 Color)", "LIVE1 (Quality 20 Color)"], "metric": ["SSIM", "PSNR", "PSNR-B"], "title": "S-Net: A Scalable Convolutional Neural Network for JPEG Compression Artifact Reduction"} {"abstract": "JPEG is one of the widely used lossy compression methods. JPEG-compressed\nimages usually suffer from compression artifacts including blocking and\nblurring, especially at low bit-rates. Soft decoding is an effective solution\nto improve the quality of compressed images without changing codec or\nintroducing extra coding bits. Inspired by the excellent performance of the\ndeep convolutional neural networks (CNNs) on both low-level and high-level\ncomputer vision problems, we develop a dual pixel-wavelet domain deep\nCNNs-based soft decoding network for JPEG-compressed images, namely DPW-SDNet.\nThe pixel domain deep network takes the four downsampled versions of the\ncompressed image to form a 4-channel input and outputs a pixel domain\nprediction, while the wavelet domain deep network uses the 1-level discrete\nwavelet transformation (DWT) coefficients to form a 4-channel input to produce\na DWT domain prediction. The pixel domain and wavelet domain estimates are\ncombined to generate the final soft decoded result. 
Experimental results\ndemonstrate the superiority of the proposed DPW-SDNet over several\nstate-of-the-art compression artifacts reduction algorithms.", "field": [], "task": ["JPEG Artifact Correction"], "method": [], "dataset": ["LIVE1 (Quality 20 Grayscale)", "Live1 (Quality 10 Grayscale)", "LIVE1 (Quality 10 Color)", "LIVE1 (Quality 20 Color)"], "metric": ["SSIM", "PSNR", "PSNR-B"], "title": "DPW-SDNet: Dual Pixel-Wavelet Domain Deep CNNs for Soft Decoding of JPEG-Compressed Images"} {"abstract": "Network embeddings have become very popular in learning effective feature\nrepresentations of networks. Motivated by the recent successes of embeddings in\nnatural language processing, researchers have tried to find network embeddings\nin order to exploit machine learning algorithms for mining tasks like node\nclassification and edge prediction. However, most of the work focuses on\nfinding distributed representations of nodes, which are inherently ill-suited\nto tasks such as community detection which are intuitively dependent on\nsubgraphs.\n Here, we propose sub2vec, an unsupervised scalable algorithm to learn feature\nrepresentations of arbitrary subgraphs. We provide means to characterize\nsimilarities between subgraphs and provide theoretical analysis of sub2vec and\ndemonstrate that it preserves the so-called local proximity. We also highlight\nthe usability of sub2vec by leveraging it for network mining tasks, like\ncommunity detection. We show that sub2vec gets significant gains over\nstate-of-the-art methods and node-embedding methods. In particular, sub2vec\noffers an approach to generate a richer vocabulary of features of subgraphs to\nsupport representation and reasoning.", "field": [], "task": ["Community Detection", "Node Classification"], "method": [], "dataset": ["Android Malware Dataset"], "metric": ["Accuracy"], "title": "Distributed Representation of Subgraphs"} {"abstract": "We present a neural encoder-decoder AMR parser that extends an attention-based model by predicting the alignment between graph nodes and sentence tokens explicitly with a pointer mechanism. Candidate lemmas are predicted as a pre-processing step so that the lemmas of lexical concepts, as well as constant strings, are factored out of the graph linearization and recovered through the predicted alignments. The approach does not rely on syntactic parses or extensive external resources. Our parser obtained 59{\\%} Smatch on the SemEval test set.", "field": [], "task": ["AMR Parsing", "Lemmatization"], "method": [], "dataset": ["LDC2017T10"], "metric": ["Smatch"], "title": "Oxford at SemEval-2017 Task 9: Neural AMR Parsing with Pointer-Augmented Attention"} {"abstract": "Panoptic segmentation aims at generating pixel-wise class and instance predictions for each pixel in the input image, which is a challenging task and far more complicated than naively fusing the semantic and instance segmentation results. Prediction fusion is therefore important to achieve accurate panoptic segmentation. In this paper, we present REFINE, pREdiction FusIon NEtwork for panoptic segmentation, to achieve high-quality panoptic segmentation by improving cross-task prediction fusion, and within-task prediction fusion. Our single-model ResNeXt-101 with DCN achieves PQ=51.5 on the COCO dataset, surpassing state-of-the-art performance by a convincing margin and is comparable with ensembled models.
Our smaller model with a ResNet-50 backbone achieves PQ=44.9, which is comparable with state-of-the-art methods with larger backbones.", "field": [], "task": ["Instance Segmentation", "Panoptic Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["COCO test-dev"], "metric": ["PQst", "PQ", "PQth"], "title": "REFINE: Prediction Fusion Network for Panoptic Segmentation"} {"abstract": "Panoptic segmentation that unifies instance segmentation and semantic segmentation has recently attracted increasing attention. While most existing methods focus on designing novel architectures, we steer toward a different perspective: performing automated multi-loss adaptation (named Ada-Segment) on the fly to flexibly adjust multiple training losses over the course of training using a controller trained to capture the learning dynamics. This offers a few advantages: it bypasses manual tuning of the sensitive loss combination, a decisive factor for panoptic segmentation; it allows to explicitly model the learning dynamics, and reconcile the learning of multiple objectives (up to ten in our experiments); with an end-to-end architecture, it generalizes to different datasets without the need of re-tuning hyperparameters or re-adjusting the training process laboriously. Our Ada-Segment brings 2.7% panoptic quality (PQ) improvement on COCO val split from the vanilla baseline, achieving the state-of-the-art 48.5% PQ on COCO test-dev split and 32.9% PQ on ADE20K dataset. The extensive ablation studies reveal the ever-changing dynamics throughout the training process, necessitating the incorporation of an automated and adaptive learning strategy as presented in this paper.", "field": [], "task": ["Instance Segmentation", "Panoptic Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["COCO test-dev"], "metric": ["PQst", "PQ", "PQth"], "title": "Ada-Segment: Automated Multi-loss Adaptation for Panoptic Segmentation"} {"abstract": "Person re-identification is an important task that requires learning\ndiscriminative visual features for distinguishing different person identities.\nDiverse auxiliary information has been utilized to improve the visual feature\nlearning. In this paper, we propose to exploit natural language description as\nadditional training supervisions for effective visual features. Compared with\nother auxiliary information, language can describe a specific person from more\ncompact and semantic visual aspects, thus is complementary to the pixel-level\nimage data. Our method not only learns better global visual feature with the\nsupervision of the overall description but also enforces semantic consistencies\nbetween local visual and linguistic features, which is achieved by building\nglobal and local image-language associations. The global image-language\nassociation is established according to the identity labels, while the local\nassociation is based upon the implicit correspondences between image regions\nand noun phrases. 
Extensive experiments demonstrate the effectiveness of\nemploying language as training supervisions with the two association schemes.\nOur method achieves state-of-the-art performance without utilizing any\nauxiliary information during testing and shows better performance than other\njoint embedding methods for the image-language association.", "field": [], "task": ["Person Re-Identification", "Text based Person Retrieval"], "method": [], "dataset": ["CUHK-PEDES"], "metric": ["R@10", "R@1", "R@5"], "title": "Improving Deep Visual Representation for Person Re-identification by Global and Local Image-language Association"} {"abstract": "We introduce the dense captioning task, which requires a computer vision\nsystem to both localize and describe salient regions in images in natural\nlanguage. The dense captioning task generalizes object detection when the\ndescriptions consist of a single word, and Image Captioning when one predicted\nregion covers the full image. To address the localization and description task\njointly we propose a Fully Convolutional Localization Network (FCLN)\narchitecture that processes an image with a single, efficient forward pass,\nrequires no external regions proposals, and can be trained end-to-end with a\nsingle round of optimization. The architecture is composed of a Convolutional\nNetwork, a novel dense localization layer, and Recurrent Neural Network\nlanguage model that generates the label sequences. We evaluate our network on\nthe Visual Genome dataset, which comprises 94,000 images and 4,100,000\nregion-grounded captions. We observe both speed and accuracy improvements over\nbaselines based on current state of the art approaches in both generation and\nretrieval settings.", "field": [], "task": ["Image Captioning", "Language Modelling", "Object Detection"], "method": [], "dataset": ["Visual Genome"], "metric": ["MAP"], "title": "DenseCap: Fully Convolutional Localization Networks for Dense Captioning"} {"abstract": "Video-based human action recognition is currently one of the most active research areas in computer vision. Various research studies indicate that the performance of action recognition is highly dependent on the type of features being extracted and how the actions are represented. Since the release of the Kinect camera, a large number of Kinect-based human action recognition techniques have been proposed in the literature. However, there still does not exist a thorough comparison of these Kinect-based techniques under the grouping of feature types, such as handcrafted versus deep learning features and depth-based versus skeleton-based features. In this paper, we analyze and compare ten recent Kinect-based algorithms for both cross-subject action recognition and cross-view action recognition using six benchmark datasets. In addition, we have implemented and improved some of these techniques and included their variants in the comparison. 
Our experiments show that the majority of methods perform better on cross-subject action recognition than cross-view action recognition, that skeleton-based features are more robust for cross-view recognition than depth-based features, and that deep learning features are suitable for large datasets.", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "A Comparative Review of Recent Kinect-based Action Recognition Algorithms"} {"abstract": "Skeleton-based human action recognition is becoming popular due to its computational efficiency and robustness. Since not all skeleton joints are informative for action recognition, attention mechanisms are adopted to extract informative joints and suppress the influence of irrelevant ones. However, existing attention frameworks usually ignore helpful scenario context information. In this paper, we propose a cross-attention module that consists of a self-attention branch and a cross-attention branch for skeleton-based action recognition. It helps to extract joints that are not only more informative but also highly correlated to the corresponding scenario context information. Moreover, the cross-attention module maintains input variables\u2019 size and can be flexibly incorporated into many existing frameworks without breaking their behaviors. To facilitate end-to-end training, we further develop a scenario context information extraction branch to extract context information from raw RGB video directly. We conduct comprehensive experiments on the NTU RGB+D and the Kinetics databases, and experimental results demonstrate the correctness and effectiveness of the proposed model.", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "Context-Aware Cross-Attention for Skeleton-Based Human Action Recognition"} {"abstract": "In this paper, we propose a three-stream convolutional neural network (3SCNN) for action recognition from skeleton sequences, which aims to thoroughly and fully exploit the skeleton data by extracting, learning, fusing and inferring multiple motion-related features, including 3D joint positions and joint displacements across adjacent frames as well as oriented bone segments. The proposed 3SCNN involves three sequential stages. The first stage enriches three independently extracted features by co-occurrence feature learning. The second stage involves multi-channel pairwise fusion to take advantage of the complementary and diverse nature among three features. The third stage is a multi-task and ensemble learning network to further improve the generalization ability of 3SCNN. Experimental results on the standard dataset show the effectiveness of our proposed multi-stream feature learning, fusion and inference method for skeleton-based 3D action recognition.", "field": [], "task": ["3D Action Recognition", "Action Recognition", "Skeleton Based Action Recognition"], "method": [], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "Three-Stream Convolutional Neural Network With Multi-Task and Ensemble Learning for 3D Action Recognition"} {"abstract": "Skeleton-based action recognition has recently attracted a lot of attention. 
Researchers are coming up with new approaches for extracting spatio-temporal relations and making considerable progress on large-scale skeleton-based datasets. Most of the architectures being proposed are based upon recurrent neural networks (RNNs), convolutional neural networks (CNNs) and graph-based CNNs. When it comes to skeleton-based action recognition, the importance of long term contextual information is central, which is not captured by the current architectures. In order to come up with a better representation and capturing of long term spatio-temporal relationships, we propose three variants of Self-Attention Network (SAN), namely, SAN-V1, SAN-V2 and SAN-V3. Our SAN variants have the impressive capability of extracting high-level semantics by capturing long-range correlations. We have also integrated the Temporal Segment Network (TSN) with our SAN variants which resulted in improved overall performance. Different configurations of Self-Attention Network (SAN) variants and Temporal Segment Network (TSN) are explored with extensive experiments. Our chosen configuration outperforms state-of-the-art Top-1 and Top-5 by 4.4% and 7.9% respectively on Kinetics and shows consistently better performance than state-of-the-art methods on NTU RGB+D.", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "Self-Attention Network for Skeleton-based Human Action Recognition"} {"abstract": "Recurrent neural networks (RNNs) are capable of modeling temporal dependencies of complex sequential data. In general, current available structures of RNNs tend to concentrate on controlling the contributions of current and previous information. However, the exploration of different importance levels of different elements within an input vector is always ignored. We propose a simple yet effective Element-wise-Attention Gate (EleAttG), which can be easily added to an RNN block (e.g. all RNN neurons in an RNN layer), to empower the RNN neurons to have attentiveness capability. For an RNN block, an EleAttG is used for adaptively modulating the input by assigning different levels of importance, i.e., attention, to each element/dimension of the input. We refer to an RNN block equipped with an EleAttG as an EleAtt-RNN block. Instead of modulating the input as a whole, the EleAttG modulates the input at fine granularity, i.e., element-wise, and the modulation is content adaptive. The proposed EleAttG, as an additional fundamental unit, is general and can be applied to any RNN structures, e.g., standard RNN, Long Short-Term Memory (LSTM), or Gated Recurrent Unit (GRU). We demonstrate the effectiveness of the proposed EleAtt-RNN by applying it to different tasks including the action recognition, from both skeleton-based data and RGB videos, gesture recognition, and sequential MNIST classification. Experiments show that adding attentiveness through EleAttGs to RNN blocks significantly improves the power of RNNs.", "field": [], "task": ["Action Recognition", "Gesture Recognition", "Skeleton Based Action Recognition"], "method": [], "dataset": ["NTU RGB+D", "N-UCLA", "SYSU 3D"], "metric": ["Accuracy (CS)", "Accuracy (CV)", "Accuracy"], "title": "EleAtt-RNN: Adding Attentiveness to Neurons in Recurrent Neural Networks"} {"abstract": "Human action recognition is an important task in computer vision.
Extracting\ndiscriminative spatial and temporal features to model the spatial and temporal\nevolutions of different actions plays a key role in accomplishing this task. In\nthis work, we propose an end-to-end spatial and temporal attention model for\nhuman action recognition from skeleton data. We build our model on top of the\nRecurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM), which\nlearns to selectively focus on discriminative joints of skeleton within each\nframe of the inputs and pays different levels of attention to the outputs of\ndifferent frames. Furthermore, to ensure effective training of the network, we\npropose a regularized cross-entropy loss to drive the model learning process\nand develop a joint training strategy accordingly. Experimental results\ndemonstrate the effectiveness of the proposed model,both on the small human\naction recognition data set of SBU and the currently largest NTU dataset.", "field": [], "task": ["Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "An End-to-End Spatio-Temporal Attention Model for Human Action Recognition from Skeleton Data"} {"abstract": "We propose a deep learning-based approach to the problem of premise\nselection: selecting mathematical statements relevant for proving a given\nconjecture. We represent a higher-order logic formula as a graph that is\ninvariant to variable renaming but still fully preserves syntactic and semantic\ninformation. We then embed the graph into a vector via a novel embedding method\nthat preserves the information of edge ordering. Our approach achieves\nstate-of-the-art results on the HolStep dataset, improving the classification\naccuracy from 83% to 90.3%.", "field": [], "task": ["Automated Theorem Proving", "Graph Embedding"], "method": [], "dataset": ["HolStep (Conditional)", "HolStep (Unconditional)"], "metric": ["Classification Accuracy"], "title": "Premise Selection for Theorem Proving by Deep Graph Embedding"} {"abstract": "In this paper we propose to exploit multiple related tasks for accurate multi-sensor 3D object detection. Towards this goal we present an end-to-end learnable architecture that reasons about 2D and 3D object detection as well as ground estimation and depth completion. Our experiments show that all these tasks are complementary and help the network learn better representations by fusing information at various levels. Importantly, our approach leads the KITTI benchmark on 2D, 3D and BEV object detection, while being real time.", "field": [], "task": ["3D Object Detection", "Depth Completion", "Object Detection", "Sensor Fusion"], "method": [], "dataset": ["KITTI Cars Hard", "KITTI Cars Moderate", "KITTI Cars Easy"], "metric": ["AP"], "title": "Multi-Task Multi-Sensor Fusion for 3D Object Detection"} {"abstract": "This paper introduces SelfMatch, a semi-supervised learning method that combines the power of contrastive self-supervised learning and consistency regularization. SelfMatch consists of two stages: (1) self-supervised pre-training based on contrastive learning and (2) semi-supervised fine-tuning based on augmentation consistency regularization. We empirically demonstrate that SelfMatch achieves the state-of-the-art results on standard benchmark datasets such as CIFAR-10 and SVHN. 
For example, for CIFAR-10 with 40 labeled examples, SelfMatch achieves 93.19% accuracy that outperforms the strong previous methods such as MixMatch (52.46%), UDA (70.95%), ReMixMatch (80.9%), and FixMatch (86.19%). We note that SelfMatch can close the gap between supervised learning (95.87%) and semi-supervised learning (93.19%) by using only a few labels for each class.", "field": [], "task": ["Self-Supervised Learning", "Semi-Supervised Image Classification"], "method": [], "dataset": ["CIFAR-10, 250 Labels", "CIFAR-10, 40 Labels", "CIFAR-10, 4000 Labels"], "metric": ["Percentage error", "Accuracy"], "title": "SelfMatch: Combining Contrastive Self-Supervision and Consistency for Semi-Supervised Learning"} {"abstract": "Most recent person re-identification approaches are based on the use of deep convolutional neural networks (CNNs). These networks, although effective in multiple tasks such as classification or object detection, tend to focus on the most discriminative part of an object rather than retrieving all its relevant features. This behavior penalizes the performance of a CNN for the re-identification task, since it should identify diverse and fine grained features. It is then essential to make the network learn a wide variety of finer characteristics in order to make the re-identification process of people effective and robust to finer changes. In this article, we introduce Deep Miner, a method that allows CNNs to \"mine\" richer and more diverse features about people for their re-identification. Deep Miner is specifically composed of three types of branches: a Global branch (G-branch), a Local branch (L-branch) and an Input-Erased branch (IE-branch). G-branch corresponds to the initial backbone which predicts global characteristics, while L-branch retrieves part level resolution features. The IE-branch for its part, receives partially suppressed feature maps as input thereby allowing the network to \"mine\" new features (those ignored by G-branch) as output. For this special purpose, a dedicated suppression procedure for identifying and removing features within a given CNN is introduced. This suppression procedure has the major benefit of being simple, while it produces a model that significantly outperforms state-of-the-art (SOTA) re-identification methods. Specifically, we conduct experiments on four standard person re-identification benchmarks and witness an absolute performance gain up to 6.5% mAP compared to SOTA.", "field": [], "task": ["Object Detection", "Person Re-Identification"], "method": [], "dataset": ["CUHK03 detected", "MSMT17", "CUHK03 labeled", "DukeMTMC-reID", "Market-1501"], "metric": ["Rank-1", "mAP", "MAP"], "title": "Deep Miner: A Deep and Multi-branch Network which Mines Rich and Diverse Features for Person Re-identification"} {"abstract": "This paper describes BomJi, a supervised system for capturing discriminative\nattributes in word pairs (e.g. yellow as discriminative for banana over\nwatermelon). The system relies on an XGB classifier trained on carefully\nengineered graph-, pattern- and word embedding based features. 
It participated\nin the SemEval-2018 Task 10 on Capturing Discriminative Attributes, achieving\nan F1 score of 0.73 and ranking 2nd out of 26 participant systems.", "field": [], "task": ["Relation Extraction"], "method": [], "dataset": ["SemEval 2018 Task 10"], "metric": ["F1-Score"], "title": "BomJi at SemEval-2018 Task 10: Combining Vector-, Pattern- and Graph-based Information to Identify Discriminative Attributes"} {"abstract": "Biological systems understand the world by simultaneously processing high-dimensional inputs from modalities as diverse as vision, audition, touch, proprioception, etc. The perception models used in deep learning on the other hand are designed for individual modalities, often relying on domain-specific assumptions such as the local grid structures exploited by virtually all existing vision models. These priors introduce helpful inductive biases, but also lock models to individual modalities. In this paper we introduce the Perceiver - a model that builds upon Transformers and hence makes few architectural assumptions about the relationship between its inputs, but that also scales to hundreds of thousands of inputs, like ConvNets. The model leverages an asymmetric attention mechanism to iteratively distill inputs into a tight latent bottleneck, allowing it to scale to handle very large inputs. We show that this architecture performs competitively or beyond strong, specialized models on classification tasks across various modalities: images, point clouds, audio, video and video+audio. The Perceiver obtains performance comparable to ResNet-50 on ImageNet without convolutions and by directly attending to 50,000 pixels. It also surpasses state-of-the-art results for all modalities in AudioSet.", "field": [], "task": [], "method": [], "dataset": ["AudioSet", "ImageNet", "ModelNet40"], "metric": ["Mean Accuracy", "Test mAP", "Top 1 Accuracy"], "title": "Perceiver: General Perception with Iterative Attention"} {"abstract": "We present a new method for finding video CNN architectures that capture rich spatio-temporal information in videos. Previous work, taking advantage of 3D convolutions, obtained promising results by manually designing video CNN architectures. We here develop a novel evolutionary search algorithm that automatically explores models with different types and combinations of layers to jointly learn interactions between spatial and temporal aspects of video representations. We demonstrate the generality of this algorithm by applying it to two meta-architectures, obtaining new architectures superior to manually designed architectures. Further, we propose a new component, the iTGM layer, which more efficiently utilizes its parameters to allow learning of space-time interactions over longer time horizons. The iTGM layer is often preferred by the evolutionary algorithm and allows building cost-efficient networks. The proposed approach discovers new and diverse video architectures that were previously unknown. More importantly they are both more accurate and faster than prior models, and outperform the state-of-the-art results on multiple datasets we test, including HMDB, Kinetics, and Moments in Time.
We will open source the code and models, to encourage future model development.", "field": [], "task": ["Action Classification", "Action Recognition", "Action Recognition In Videos"], "method": [], "dataset": ["Kinetics-400", "Charades", "Moments in Time"], "metric": ["MAP", "Vid acc@1", "Top 1 Accuracy"], "title": "Evolving Space-Time Neural Architectures for Videos"} {"abstract": "Conversations have an intrinsic one-to-many property, which means that multiple responses can be appropriate for the same dialog context. In task-oriented dialogs, this property leads to different valid dialog policies towards task completion. However, none of the existing task-oriented dialog generation approaches takes this property into account. We propose a Multi-Action Data Augmentation (MADA) framework to utilize the one-to-many property to generate diverse appropriate dialog responses. Specifically, we first use dialog states to summarize the dialog history, and then discover all possible mappings from every dialog state to its different valid system actions. During dialog system training, we enable the current dialog state to map to all valid system actions discovered in the previous process to create additional state-action pairs. By incorporating these additional pairs, the dialog policy learns a balanced action distribution, which further guides the dialog model to generate diverse responses. Experimental results show that the proposed framework consistently improves dialog policy diversity, and results in improved response diversity and appropriateness. Our model obtains state-of-the-art results on MultiWOZ.", "field": [], "task": ["Data Augmentation", "End-To-End Dialogue Modelling"], "method": [], "dataset": ["MULTIWOZ 2.0"], "metric": ["MultiWOZ (Inform)", "BLEU", "MultiWOZ (Success)"], "title": "Task-Oriented Dialog Systems that Consider Multiple Appropriate Responses under the Same Context"} {"abstract": "One of the most popular approaches to multi-target tracking is\ntracking-by-detection. Current min-cost flow algorithms which solve the data\nassociation problem optimally have three main drawbacks: they are\ncomputationally expensive, they assume that the whole video is given as a\nbatch, and they scale badly in memory and computation with the length of the\nvideo sequence. In this paper, we address each of these issues, resulting in a\ncomputationally and memory-bounded solution. First, we introduce a dynamic\nversion of the successive shortest-path algorithm which solves the data\nassociation problem optimally while reusing computation, resulting in\nsignificantly faster inference than standard solvers. Second, we address the\noptimal solution to the data association problem when dealing with an incoming\nstream of data (i.e., online setting). Finally, we present our main\ncontribution which is an approximate online solution with bounded memory and\ncomputation which is capable of handling videos of arbitrarily length while\nperforming tracking in real time. We demonstrate the effectiveness of our\nalgorithms on the KITTI and PETS2009 benchmarks and show state-of-the-art\nperformance, while being significantly faster than existing solvers.", "field": [], "task": [], "method": [], "dataset": ["KITTI Tracking test"], "metric": ["MOTA"], "title": "FollowMe: Efficient Online Min-Cost Flow Tracking with Bounded Memory and Computation"} {"abstract": "We propose a simple neural architecture for natural language inference. 
Our\napproach uses attention to decompose the problem into subproblems that can be\nsolved separately, thus making it trivially parallelizable. On the Stanford\nNatural Language Inference (SNLI) dataset, we obtain state-of-the-art results\nwith almost an order of magnitude fewer parameters than previous work and\nwithout relying on any word-order information. Adding intra-sentence attention\nthat takes a minimum amount of order into account yields further improvements.", "field": [], "task": ["Natural Language Inference"], "method": [], "dataset": ["SNLI"], "metric": ["Parameters", "% Train Accuracy", "% Test Accuracy"], "title": "A Decomposable Attention Model for Natural Language Inference"} {"abstract": "This paper designs a high-performance deep convolutional network (DeepID2+) for face recognition. It is learned with the identification-verification supervisory signal. By increasing the dimension of hidden representations and adding supervision to early convolutional layers, DeepID2+ achieves new state-of-the-art on LFW and YouTube Faces benchmarks. Through empirical studies, we have discovered three properties of its deep neural activations critical for the high performance: sparsity, selectiveness and robustness. (1) It is observed that neural activations are moderately sparse. Moderate sparsity maximizes the discriminative power of the deep net as well as the distance between images. It is surprising that DeepID2+ still can achieve high recognition accuracy even after the neural responses are binarized. (2) Its neurons in higher layers are highly selective to identities and identity-related attributes. We can identify different subsets of neurons which are either constantly excited or inhibited when different identities or attributes are present. Although DeepID2+ is not taught to distinguish attributes during training, it has implicitly learned such high-level concepts. (3) It is much more robust to occlusions, although occlusion patterns are not included in the training set.", "field": [], "task": ["Face Recognition"], "method": [], "dataset": ["YouTube Faces DB", "Labeled Faces in the Wild", "Oulu-CASIA"], "metric": ["Accuracy"], "title": "Deeply learned face representations are sparse, selective, and robust"} {"abstract": "Although Neural Machine Translation (NMT) models have advanced\nstate-of-the-art performance in machine translation, they face problems like\nthe inadequate translation. We attribute this to that the standard Maximum\nLikelihood Estimation (MLE) cannot judge the real translation quality due to\nits several limitations. In this work, we propose an adequacy-oriented learning\nmechanism for NMT by casting translation as a stochastic policy in\nReinforcement Learning (RL), where the reward is estimated by explicitly\nmeasuring translation adequacy. Benefiting from the sequence-level training of\nRL strategy and a more accurate reward designed specifically for translation,\nour model outperforms multiple strong baselines, including (1) standard and\ncoverage-augmented attention models with MLE-based training, and (2) advanced\nreinforcement and adversarial training strategies with rewards based on both\nword-level BLEU and character-level chrF3. 
Quantitative and qualitative\nanalyses on different language pairs and NMT architectures demonstrate the\neffectiveness and universality of the proposed approach.", "field": [], "task": ["Machine Translation"], "method": [], "dataset": ["WMT2014 English-German"], "metric": ["BLEU score"], "title": "Neural Machine Translation with Adequacy-Oriented Learning"} {"abstract": "The huge variance of human pose and the misalignment of detected human images\nsignificantly increase the difficulty of person Re-Identification (Re-ID).\nMoreover, efficient Re-ID systems are required to cope with the massive visual\ndata being produced by video surveillance systems. Targeting to solve these\nproblems, this work proposes a Global-Local-Alignment Descriptor (GLAD) and an\nefficient indexing and retrieval framework, respectively. GLAD explicitly\nleverages the local and global cues in human body to generate a discriminative\nand robust representation. It consists of part extraction and descriptor\nlearning modules, where several part regions are first detected and then deep\nneural networks are designed for representation learning on both the local and\nglobal regions. A hierarchical indexing and retrieval framework is designed to\neliminate the huge redundancy in the gallery set, and accelerate the online\nRe-ID procedure. Extensive experimental results show GLAD achieves competitive\naccuracy compared to the state-of-the-art methods. Our retrieval framework\nsignificantly accelerates the online Re-ID procedure without loss of accuracy.\nTherefore, this work has potential to work better on person Re-ID tasks in real\nscenarios.", "field": [], "task": ["Person Re-Identification", "Representation Learning"], "method": [], "dataset": ["Market-1501"], "metric": ["Rank-1", "MAP"], "title": "GLAD: Global-Local-Alignment Descriptor for Pedestrian Retrieval"} {"abstract": "Aspect-based sentiment analysis (ABSA) involves three subtasks, i.e., aspect term extraction, opinion term extraction, and aspect-level sentiment classification. Most existing studies focused on one of these subtasks only. Several recent researches made successful attempts to solve the complete ABSA problem with a unified framework. However, the interactive relations among three subtasks are still under-exploited. We argue that such relations encode collaborative signals between different subtasks. For example, when the opinion term is \\textit{{``}delicious{''}}, the aspect term must be \\textit{{``}food{''}} rather than \\textit{{``}place{''}}. In order to fully exploit these relations, we propose a Relation-Aware Collaborative Learning (RACL) framework which allows the subtasks to work coordinately via the multi-task learning and relation propagation mechanisms in a stacked multi-layer network. Extensive experiments on three real-world datasets demonstrate that RACL significantly outperforms the state-of-the-art methods for the complete ABSA task.", "field": [], "task": ["Aspect-Based Sentiment Analysis", "Multi-Task Learning", "Sentiment Analysis"], "method": [], "dataset": ["SemEval 2014 Task 4 Subtask 1+2", "SemEval 2014 Task 4 Laptop"], "metric": ["F1"], "title": "Relation-Aware Collaborative Learning for Unified Aspect-Based Sentiment Analysis"} {"abstract": "Relation extraction studies the issue of predicting semantic relations between pairs of entities in sentences. Attention mechanisms are often used in this task to alleviate the inner-sentence noise by performing soft selections of words independently. 
Based on the observation that information pertinent to relations is usually contained within segments (continuous words in a sentence), it is possible to make use of this phenomenon for better extraction. In this paper, we aim to incorporate such segment information into a neural relation extractor. Our approach views the attention mechanism as linear-chain conditional random fields over a set of latent variables whose edges encode the desired structure, and regards attention weight as the marginal distribution of each word being selected as a part of the relational expression. Experimental results show that our method can attend to continuous relational expressions without explicit annotations, and achieve the state-of-the-art performance on the large-scale TACRED dataset.", "field": [], "task": ["Relation Extraction"], "method": [], "dataset": ["TACRED"], "metric": ["F1"], "title": "Beyond Word Attention: Using Segment Attention in Neural Relation Extraction"} {"abstract": "Hyperspectral image (HSI) classification is widely used for the analysis of remotely sensed images. Hyperspectral imagery includes varying bands of images. Convolutional Neural Network (CNN) is one of the most frequently used deep learning based methods for visual data processing. The use of CNN for HSI classification is also visible in recent works. These approaches are mostly based on 2D CNN. Whereas, the HSI classification performance is highly dependent on both spatial and spectral information. Very few methods have utilized the 3D CNN because of increased computational complexity. This letter proposes a Hybrid Spectral Convolutional Neural Network (HybridSN) for HSI classification. Basically, the HybridSN is a spectral-spatial 3D-CNN followed by spatial 2D-CNN. The 3D-CNN facilitates the joint spatial-spectral feature representation from a stack of spectral bands. The 2D-CNN on top of the 3D-CNN further learns more abstract level spatial representation. Moreover, the use of hybrid CNNs reduces the complexity of the model compared to 3D-CNN alone. To test the performance of this hybrid approach, very rigorous HSI classification experiments are performed over Indian Pines, Pavia University and Salinas Scene remote sensing datasets. The results are compared with the state-of-the-art hand-crafted as well as end-to-end deep learning based methods. A very satisfactory performance is obtained using the proposed HybridSN for HSI classification. The source code can be found at \\url{https://github.com/gokriznastic/HybridSN}.", "field": [], "task": ["Hyperspectral Image Classification", "Image Classification"], "method": [], "dataset": ["Indian Pines", "Salinas Scene"], "metric": ["Overall Accuracy"], "title": "HybridSN: Exploring 3D-2D CNN Feature Hierarchy for Hyperspectral Image Classification"} {"abstract": "In this paper, we propose a new dataset for outdoor depth estimation from single and stereo RGB images. The dataset was acquired from the point of view of a pedestrian. Currently, the most novel approaches take advantage of deep learning-based techniques, which have proven to outperform traditional state-of-the-art computer vision methods. Nonetheless, these methods require large amounts of reliable ground-truth data. Although several datasets already exist that could be used for depth estimation, almost none of them are outdoor-oriented from an egocentric point of view. Our dataset introduces a large number of high-definition pairs of color frames and corresponding depth maps from a human perspective.
In addition, the proposed dataset also features human interaction and great variability of data, as shown in this work.", "field": [], "task": ["Depth Estimation", "Monocular Depth Estimation"], "method": [], "dataset": ["UASOL"], "metric": ["RMSE"], "title": "UASOL, a large-scale high-resolution outdoor stereo dataset"} {"abstract": "Multiple-choice reading comprehension (MCRC) is the task of selecting the\ncorrect answer from multiple options given a question and an article. Existing\nMCRC models typically either read each option independently or compute a\nfixed-length representation for each option before comparing them. However,\nhumans typically compare the options at multiple-granularity level before\nreading the article in detail to make reasoning more efficient. Mimicking\nhumans, we propose an option comparison network (OCN) for MCRC which compares\noptions at word-level to better identify their correlations to help reasoning.\nSpecially, each option is encoded into a vector sequence using a skimmer to\nretain fine-grained information as much as possible. An attention mechanism is\nleveraged to compare these sequences vector-by-vector to identify more subtle\ncorrelations between options, which is potentially valuable for reasoning.\nExperimental results on the human English exam MCRC dataset RACE show that our\nmodel outperforms existing methods significantly. Moreover, it is also the\nfirst model that surpasses Amazon Mechanical Turker performance on the whole\ndataset.", "field": [], "task": ["Question Answering", "Reading Comprehension"], "method": [], "dataset": ["RACE"], "metric": ["RACE-h", "RACE-m", "RACE"], "title": "Option Comparison Network for Multiple-choice Reading Comprehension"} {"abstract": "We show that the basic classification framework alone can be used to tackle some of the most challenging tasks in image synthesis. In contrast to other state-of-the-art approaches, the toolkit we develop is rather minimal: it uses a single, off-the-shelf classifier for all these tasks. The crux of our approach is that we train this classifier to be adversarially robust. It turns out that adversarial robustness is precisely what we need to directly manipulate salient features of the input. Overall, our findings demonstrate the utility of robustness in the broader machine learning context. Code and models for our experiments can be found at https://git.io/robust-apps.", "field": [], "task": ["Image Generation"], "method": [], "dataset": ["CIFAR-10"], "metric": ["Inception score"], "title": "Image Synthesis with a Single (Robust) Classifier"} {"abstract": "The term fine-grained visual classification (FGVC) refers to classification tasks where the classes are very similar and the classification model needs to be able to find subtle differences to make the correct prediction. State-of-the-art approaches often include a localization step designed to help a classification network by localizing the relevant parts of the input images. However, this usually requires multiple iterations or passes through a full classification network or complex training schedules. In this work we present an efficient localization module that can be fused with a classification network in an end-to-end setup. On the one hand the module is trained by the gradient flowing back from the classification network. On the other hand, two self-supervised loss functions are introduced to increase the localization accuracy. 
We evaluate the new model on the three benchmark datasets CUB200-2011, Stanford Cars and FGVC-Aircraft and are able to achieve competitive recognition performance.", "field": [], "task": ["Fine-Grained Image Classification"], "method": [], "dataset": [" CUB-200-2011", "Stanford Cars", "FGVC Aircraft"], "metric": ["Accuracy"], "title": "Fine-Grained Visual Classification with Efficient End-to-end Localization"} {"abstract": "Clustering is an important problem in various machine learning applications, but still a challenging task when dealing with complex real data. The existing clustering algorithms utilize either shallow models with insufficient capacity for capturing the non-linear nature of data, or deep models with a large number of parameters prone to overfitting. In this paper, we propose a deep Generative Adversarial Clustering Network (ClusterGAN), which tackles the problems of training of deep clustering models in an unsupervised manner. ClusterGAN consists of three networks, a discriminator, a generator and a clusterer (i.e. a clustering network). We employ an adversarial game between these three players to synthesize realistic samples given discriminative latent variables via the generator, and learn the inverse mapping of the real samples to the discriminative embedding space via the clusterer. Moreover, we utilize a conditional entropy minimization loss to increase/decrease the similarity of intra/inter cluster samples. Since the ground-truth similarities are unknown in clustering task, we propose a novel balanced self-paced learning algorithm to gradually include samples into training from easy to difficult, while considering the diversity of selected samples from all clusters. Therefore, our method makes it possible to efficiently train clusterers with large depth by leveraging the proposed adversarial game and balanced self-paced learning algorithm. According to our experiments, ClusterGAN achieves competitive results compared to the state-of-the-art clustering and hashing models on several datasets.", "field": [], "task": ["Deep Clustering", "Image Clustering", "Image Retrieval", "Unsupervised Spatial Clustering"], "method": [], "dataset": ["USPS", "MNIST-full"], "metric": ["NMI", "Accuracy"], "title": "Balanced Self-Paced Learning for Generative Adversarial Clustering Network"} {"abstract": "Sequence encoders are crucial components in many neural architectures for learning to read and comprehend. This paper presents a new compositional encoder for reading comprehension (RC). Our proposed encoder is not only aimed at being fast but also expressive. Specifically, the key novelty behind our encoder is that it explicitly models across multiple granularities using a new dilated composition mechanism. In our approach, gating functions are learned by modeling relationships and reasoning over multi-granular sequence information, enabling compositional learning that is aware of both long and short term information. We conduct experiments on three RC datasets, showing that our proposed encoder demonstrates very promising results both as a standalone encoder as well as a complementary building block.
Empirical results show that simple Bi-Attentive architectures augmented with our proposed encoder not only achieve state-of-the-art / highly competitive results but are also considerably faster than other published works.", "field": [], "task": ["Open-Domain Question Answering", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["SearchQA", "NarrativeQA"], "metric": ["METEOR", "BLEU-1", "N-gram F1", "Unigram Acc", "EM", "F1", "Rouge-L", "BLEU-4"], "title": "Multi-Granular Sequence Encoding via Dilated Compositional Units for Reading Comprehension"} {"abstract": "Recurrent Neural Networks (RNNs) with attention mechanisms have obtained\nstate-of-the-art results for many sequence processing tasks. Most of these\nmodels use a simple form of encoder with attention that looks over the entire\nsequence and assigns a weight to each token independently. We present a\nmechanism for focusing RNN encoders for sequence modelling tasks which allows\nthem to attend to key parts of the input as needed. We formulate this using a\nmulti-layer conditional sequence encoder that reads in one token at a time and\nmakes a discrete decision on whether the token is relevant to the context or\nquestion being asked. The discrete gating mechanism takes in the context\nembedding and the current hidden state as inputs and controls information flow\ninto the layer above. We train it using policy gradient methods. We evaluate\nthis method on several types of tasks with different attributes. First, we\nevaluate the method on synthetic tasks which allow us to evaluate the model for\nits generalization ability and probe the behavior of the gates in more\ncontrolled settings. We then evaluate this approach on large scale Question\nAnswering tasks including the challenging MS MARCO and SearchQA tasks. Our\nmodels show consistent improvements for both tasks over prior work and our\nbaselines. It has also been shown to generalize significantly better on synthetic\ntasks as compared to the baselines.", "field": [], "task": ["Open-Domain Question Answering", "Policy Gradient Methods", "Question Answering"], "method": [], "dataset": ["SearchQA"], "metric": ["Unigram Acc", "N-gram F1"], "title": "Focused Hierarchical RNNs for Conditional Sequence Processing"} {"abstract": "Existing weakly supervised fine-grained image recognition (WFGIR) methods usually pick out the discriminative regions from the high-level feature maps directly. We discover that due to the operation of stacking local receptive fields, Convolutional Neural Network causes the discriminative region diffusion in high-level feature maps, which leads to inaccurate discriminative region localization. In this paper, we propose an end-to-end Discriminative Feature-oriented Gaussian Mixture Model (DF-GMM), to address the problem of discriminative region diffusion and find better fine-grained details. Specifically, DF-GMM consists of 1) a low-rank representation mechanism (LRM), which learns a set of low-rank discriminative bases by Gaussian Mixture Model (GMM) in high-level semantic feature maps to improve discriminative ability of feature representation, 2) a low-rank representation reorganization mechanism (LR^2M) which resumes the space information corresponding to low-rank discriminative bases to reconstruct the low-rank feature maps. It alleviates the discriminative region diffusion problem and locates discriminative regions more precisely.
Extensive experiments verify that DF-GMM yields the best performance under the same settings as the most competitive approaches on the CUB-Bird, Stanford-Cars, and FGVC-Aircraft datasets.", "field": [], "task": ["Fine-Grained Image Classification", "Fine-Grained Image Recognition", "Image Classification"], "method": [], "dataset": [" CUB-200-2011", "Stanford Cars", "FGVC Aircraft"], "metric": ["Accuracy"], "title": "Weakly Supervised Fine-Grained Image Classification via Gaussian Mixture Model Oriented Discriminative Learning"} {"abstract": "Fine-grained visual classification (FGVC) is becoming an important research field, due to its wide applications and the rapid development of computer vision technologies. The current state-of-the-art (SOTA) methods in the FGVC usually employ attention mechanisms to first capture the semantic parts and then discover their subtle differences between distinct classes. The channel-spatial attention mechanisms, which focus on the discriminative channels and regions simultaneously, have significantly improved the classification performance. However, the existing attention modules are poorly guided since part-based detectors in the FGVC depend on the network learning ability without the supervision of part annotations. As obtaining such part annotations is labor-intensive, some visual localization and explanation methods, such as gradient-weighted class activation mapping (Grad-CAM), can be utilized for supervising the attention mechanism. We propose a Grad-CAM guided channel-spatial attention module for the FGVC, which employs the Grad-CAM to supervise and constrain the attention weights by generating the coarse localization maps. To demonstrate the effectiveness of the proposed method, we conduct comprehensive experiments on three popular FGVC datasets, including CUB-200-2011, Stanford Cars, and FGVC-Aircraft datasets. The proposed method outperforms the SOTA attention modules in the FGVC task. In addition, visualizations of feature maps also demonstrate the superiority of the proposed method against the SOTA approaches.", "field": [], "task": ["Fine-Grained Image Classification", "Visual Localization"], "method": [], "dataset": [" CUB-200-2011"], "metric": ["Accuracy"], "title": "Grad-CAM guided channel-spatial attention module for fine-grained visual classification"} {"abstract": "The current supervised relation classification (RC) task uses a single embedding to represent the relation between a pair of entities. We argue that a better approach is to treat the RC task as a Question answering (QA) like span prediction problem. We present a span-prediction based system for RC and evaluate its performance compared to the embedding based system. We achieve state-of-the-art results on the TACRED and SemEval task 8 datasets.", "field": [], "task": ["Question Answering", "Relation Classification", "Relation Extraction"], "method": [], "dataset": ["TACRED", "SemEval-2010 Task 8"], "metric": ["F1"], "title": "Relation Extraction as Two-way Span-Prediction"} {"abstract": "Previous research on relation classification has verified the effectiveness\nof using dependency shortest paths or subtrees. In this paper, we further\nexplore how to make full use of the combination of this dependency\ninformation. We first propose a new structure, termed augmented dependency path\n(ADP), which is composed of the shortest dependency path between two entities\nand the subtrees attached to the shortest path.
To exploit the semantic\nrepresentation behind the ADP structure, we develop dependency-based neural\nnetworks (DepNN): a recursive neural network designed to model the subtrees,\nand a convolutional neural network to capture the most important features on\nthe shortest path. Experiments on the SemEval-2010 dataset show that our\nproposed method achieves state-of-the-art results.", "field": [], "task": ["Relation Classification"], "method": [], "dataset": ["SemEval 2010 Task 8"], "metric": ["F1"], "title": "A Dependency-Based Neural Network for Relation Classification"} {"abstract": "Nowadays, neural networks play an important role in the task of relation classification. In this paper, we propose a novel attention-based convolutional neural network architecture for this task. Our model makes full use of word embedding, part-of-speech tag embedding and position embedding information. Word level attention mechanism is able to better determine which parts of the sentence are most influential with respect to the two entities of interest. This architecture enables learning some important features from task-specific labeled data, forgoing the need for external knowledge such as explicit dependency structures. Experiments on the SemEval-2010 Task 8 benchmark dataset show that our model achieves better performance than several state-of-the-art neural network models and can achieve a competitive performance just with minimal feature engineering.", "field": [], "task": ["Feature Engineering", "Relation Classification", "Relation Extraction"], "method": [], "dataset": ["SemEval-2010 Task 8"], "metric": ["F1"], "title": "Attention-Based Convolutional Neural Network for Semantic Relation Extraction"} {"abstract": "This paper describes a novel method of live keyword spotting using a two-stage time delay neural network. The model is trained using transfer learning: initial training with phone targets from a large speech corpus is followed by training with keyword targets from a smaller data set. The accuracy of the system is evaluated on two separate tasks. The first is the freely available Google Speech Commands dataset. The second is an in-house task specifically developed for keyword spotting. The results show significant improvements in false accept and false reject rates in both clean and noisy environments when compared with previously known techniques. Furthermore, we investigate various techniques to reduce computation in terms of multiplications per second of audio. Compared to recently published work, the proposed system provides up to 89% savings on computational complexity.", "field": [], "task": ["Keyword Spotting", "Transfer Learning"], "method": [], "dataset": ["Google Speech Commands"], "metric": ["10-keyword Speech Commands dataset"], "title": "Efficient keyword spotting using time delay neural networks"} {"abstract": "With the rise of low power speech-enabled devices, there is a growing demand to quickly produce models for recognizing arbitrary sets of keywords. As with many machine learning tasks, one of the most challenging parts in the model creation process is obtaining a sufficient amount of training data. In this paper, we explore the effectiveness of synthesized speech data in training small, spoken term detection models of around 400k parameters. Instead of training such models directly on the audio or low level features such as MFCCs, we use a pre-trained speech embedding model trained to extract useful features for keyword spotting models.
Using this speech embedding, we show that a model which detects 10 keywords when trained on only synthetic speech is equivalent to a model trained on over 500 real examples. We also show that a model without our speech embeddings would need to be trained on over 4000 real examples to reach the same accuracy.", "field": [], "task": ["Keyword Spotting"], "method": [], "dataset": ["Google Speech Commands"], "metric": ["Google Speech Commands V2 12"], "title": "Training Keyword Spotters with Limited and Synthesized Speech Data"} {"abstract": "Data augmentation is often used to enlarge datasets with synthetic samples generated in accordance with the underlying data distribution. To enable a wider range of augmentations, we explore negative data augmentation strategies (NDA)that intentionally create out-of-distribution samples. We show that such negative out-of-distribution samples provide information on the support of the data distribution, and can be leveraged for generative modeling and representation learning. We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator. We prove that under suitable conditions, optimizing the resulting objective still recovers the true data distribution but can directly bias the generator towards avoiding samples that lack the desired structure. Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities. Further, we incorporate the same negative data augmentation strategy in a contrastive learning framework for self-supervised representation learning on images and videos, achieving improved performance on downstream image classification, object detection, and action recognition tasks. These results suggest that prior knowledge on what does not constitute valid data is an effective form of weak supervision across a range of unsupervised learning tasks.", "field": [], "task": ["Action Recognition", "Anomaly Detection", "Conditional Image Generation", "Data Augmentation", "Image Classification", "Image Generation", "Object Detection", "Representation Learning"], "method": [], "dataset": ["CIFAR-100", "CIFAR-10"], "metric": ["FID"], "title": "Negative Data Augmentation"} {"abstract": "Who did what to whom is a major focus in natural language understanding, which is right the aim of semantic role labeling (SRL) task. Despite of sharing a lot of processing characteristics and even task purpose, it is surprisingly that jointly considering these two related tasks was never formally reported in previous work. Thus this paper makes the first attempt to let SRL enhance text comprehension and inference through specifying verbal predicates and their corresponding semantic roles. In terms of deep learning models, our embeddings are enhanced by explicit contextual semantic role labels for more fine-grained semantics. We show that the salient labels can be conveniently added to existing models and significantly improve deep learning models in challenging text comprehension tasks. 
Extensive experiments on benchmark machine reading comprehension and inference datasets verify that the proposed semantic learning helps our system reach new state-of-the-art over strong baselines which have been enhanced by well pretrained language models from the latest progress.", "field": [], "task": ["Machine Reading Comprehension", "Natural Language Understanding", "Reading Comprehension", "Semantic Role Labeling"], "method": [], "dataset": ["SNLI"], "metric": ["Parameters", "% Train Accuracy", "% Test Accuracy"], "title": "Explicit Contextual Semantics for Text Comprehension"} {"abstract": "We evaluate the character-level translation method for neural semantic\nparsing on a large corpus of sentences annotated with Abstract Meaning\nRepresentations (AMRs). Using a sequence-to-sequence model, and some trivial\npreprocessing and postprocessing of AMRs, we obtain a baseline accuracy of 53.1\n(F-score on AMR-triples). We examine five different approaches to improve this\nbaseline result: (i) reordering AMR branches to match the word order of the\ninput sentence increases performance to 58.3; (ii) adding part-of-speech tags\n(automatically produced) to the input shows improvement as well (57.2); (iii)\nSo does the introduction of super characters (conflating frequent sequences of\ncharacters to a single character), reaching 57.4; (iv) optimizing the training\nprocess by using pre-training and averaging a set of models increases\nperformance to 58.7; (v) adding silver-standard training data obtained by an\noff-the-shelf parser yields the biggest improvement, resulting in an F-score of\n64.0. Combining all five techniques leads to an F-score of 71.0 on holdout\ndata, which is state-of-the-art in AMR parsing. This is remarkable because of\nthe relative simplicity of the approach.", "field": [], "task": ["AMR Parsing", "Semantic Parsing"], "method": [], "dataset": ["LDC2017T10"], "metric": ["Smatch"], "title": "Neural Semantic Parsing by Character-based Translation: Experiments with Abstract Meaning Representations"} {"abstract": "Recurrent neural network grammars (RNNG) are a recently proposed\nprobabilistic generative modeling family for natural language. They show\nstate-of-the-art language modeling and parsing performance. We investigate what\ninformation they learn, from a linguistic perspective, through various\nablations to the model and the data, and by augmenting the model with an\nattention mechanism (GA-RNNG) to enable closer inspection. We find that\nexplicit modeling of composition is crucial for achieving the best performance.\nThrough the attention mechanism, we find that headedness plays a central role\nin phrasal representation (with the model's latent attention largely agreeing\nwith predictions made by hand-crafted head rules, albeit with some important\ndifferences). By training grammars without nonterminal labels, we find that\nphrasal representations depend minimally on nonterminals, providing support for\nthe endocentricity hypothesis.", "field": [], "task": ["Constituency Parsing", "Dependency Parsing", "Language Modelling"], "method": [], "dataset": ["Penn Treebank"], "metric": ["F1 score"], "title": "What Do Recurrent Neural Network Grammars Learn About Syntax?"} {"abstract": "Supervised object detection and semantic segmentation require object or even\npixel level annotations. 
When there exist image level labels only, it is\nchallenging for weakly supervised algorithms to achieve accurate predictions.\nThe accuracy achieved by top weakly supervised algorithms is still\nsignificantly lower than their fully supervised counterparts. In this paper, we\npropose a novel weakly supervised curriculum learning pipeline for multi-label\nobject recognition, detection and semantic segmentation. In this pipeline, we\nfirst obtain intermediate object localization and pixel labeling results for\nthe training images, and then use such results to train task-specific deep\nnetworks in a fully supervised manner. The entire process consists of four\nstages, including object localization in the training images, filtering and\nfusing object instances, pixel labeling for the training images, and\ntask-specific network training. To obtain clean object instances in the\ntraining images, we propose a novel algorithm for filtering, fusing and\nclassifying object instances collected from multiple solution mechanisms. In\nthis algorithm, we incorporate both metric learning and density-based\nclustering to filter detected object instances. Experiments show that our\nweakly supervised pipeline achieves state-of-the-art results in multi-label\nimage classification as well as weakly supervised object detection and very\ncompetitive results in weakly supervised semantic segmentation on MS-COCO,\nPASCAL VOC 2007 and PASCAL VOC 2012.", "field": [], "task": ["Curriculum Learning", "Image Classification", "Metric Learning", "Multi-Label Classification", "Object Detection", "Object Localization", "Object Recognition", "Semantic Segmentation", "Weakly Supervised Object Detection", "Weakly-Supervised Semantic Segmentation"], "method": [], "dataset": ["PASCAL VOC 2007"], "metric": ["MAP"], "title": "Multi-Evidence Filtering and Fusion for Multi-Label Classification, Object Detection and Semantic Segmentation Based on Weakly Supervised Learning"} {"abstract": "We consider addressing the major failures in weakly supervised object detectors. As most weakly supervised object detection methods are based on pre-generated proposals, they often show two false detections: (i) group multiple object instances with one bounding box, and (ii) focus on only parts rather than the whole objects. We propose an image segmentation framework to help correctly detect individual instances. The input images are first segmented into several sub-images based on the proposal overlaps to uncouple the grouping objects. Then the batch of sub-images are fed into the convolutional network to train an object detector. Within each sub-image, a partial aggregation strategy is adopted to dynamically select a portion of the proposal-level scores to produce the sub-image-level output. This regularizes the model to learn context knowledge about the object content. Finally, the outputs of the sub-images are pooled together as the model prediction. The ideas are implemented with VGG-D backbone to be comparable with recent state-of-the-art weakly supervised methods. Extensive experiments on PASCAL VOC datasets show the superiority of our design. 
The proposed model outperforms other alternatives on detection, localization, and classification tasks.", "field": [], "task": ["Object Detection", "Semantic Segmentation", "Weakly Supervised Object Detection"], "method": [], "dataset": ["PASCAL VOC 2007"], "metric": ["MAP"], "title": "Fewer is More: Image Segmentation Based Weakly Supervised Object Detection with Partial Aggregation"} {"abstract": "We consider the problem of weakly supervised object detection, where the\ntraining samples are annotated using only image-level labels that indicate the\npresence or absence of an object category. In order to model the uncertainty in\nthe location of the objects, we employ a dissimilarity coefficient based\nprobabilistic learning objective. The learning objective minimizes the\ndifference between an annotation agnostic prediction distribution and an\nannotation aware conditional distribution. The main computational challenge is\nthe complex nature of the conditional distribution, which consists of terms\nover hundreds or thousands of variables. The complexity of the conditional\ndistribution rules out the possibility of explicitly modeling it. Instead, we\nexploit the fact that deep learning frameworks rely on stochastic optimization.\nThis allows us to use a state of the art discrete generative model that can\nprovide annotation consistent samples from the conditional distribution.\nExtensive experiments on PASCAL VOC 2007 and 2012 data sets demonstrate the\nefficacy of our proposed approach.", "field": [], "task": ["Object Detection", "Stochastic Optimization", "Weakly Supervised Object Detection"], "method": [], "dataset": ["PASCAL VOC 2007", "PASCAL VOC 2012 test"], "metric": ["MAP"], "title": "Dissimilarity Coefficient based Weakly Supervised Object Detection"} {"abstract": "Most existing weakly supervised localization (WSL) approaches learn detectors\nby finding positive bounding boxes based on features learned with image-level\nsupervision. However, those features do not contain spatial location related\ninformation and usually provide poor-quality positive samples for training a\ndetector. To overcome this issue, we propose a deep self-taught learning\napproach, which makes the detector learn the object-level features reliable for\nacquiring tight positive samples and afterwards re-train itself based on them.\nConsequently, the detector progressively improves its detection ability and\nlocalizes more informative positive samples. To implement such self-taught\nlearning, we propose a seed sample acquisition method via image-to-object\ntransferring and dense subgraph discovery to find reliable positive samples for\ninitializing the detector. An online supportive sample harvesting scheme is\nfurther proposed to dynamically select the most confident tight positive\nsamples and train the detector in a mutual boosting way. To prevent the\ndetector from being trapped in poor optima due to overfitting, we propose a new\nrelative improvement of predicted CNN scores for guiding the self-taught\nlearning process. 
Extensive experiments on PASCAL 2007 and 2012 show that our\napproach outperforms the state-of-the-arts, strongly validating its\neffectiveness.", "field": [], "task": ["Object Localization", "Weakly Supervised Object Detection", "Weakly-Supervised Object Localization"], "method": [], "dataset": ["PASCAL VOC 2007", "PASCAL VOC 2012 test"], "metric": ["MAP"], "title": "Deep Self-Taught Learning for Weakly Supervised Object Localization"} {"abstract": "In this paper, we address the problem of weakly supervised object localization (WSL), which trains a detection network on the dataset with only image-level annotations. The proposed approach is built on the observation that the proposal set from the training dataset is a collection of background, object parts, and objects. Several strategies are taken to adaptively eliminate the noisy proposals and generate pseudo object-level annotations for the weakly labeled dataset. A multiple instance learning (MIL) algorithm enhanced by mask-out strategy is adopted to collect the class-specific object proposals, which are then utilized to adapt a pre-trained classification network to a detection network. In addition, the detection results from the detection network are re-weighted by jointly considering the detection scores and the overlap ratio of proposals in a proposal subset optimization framework. The optimal proposals work as object-level labels that enable a pseudo-strongly supervised dataset for training the detection network. Consequently, we establish a fully adaptive detection network. Extensive evaluations on the PASCAL VOC 2007 and 2012 datasets demonstrate a significant improvement compared with the state-of-the-art methods.", "field": [], "task": ["Denoising", "Multiple Instance Learning", "Object Localization", "Weakly Supervised Object Detection"], "method": [], "dataset": ["PASCAL VOC 2007", "PASCAL VOC 2012 test"], "metric": ["MAP"], "title": "Adaptively Denoising Proposal Collection for Weakly Supervised Object Localization"} {"abstract": "Deep learning has achieved excellent performance in various computer vision\ntasks, but requires a lot of training examples with clean labels. It is easy to\ncollect a dataset with noisy labels, but such noise makes networks overfit\nseriously and accuracies drop dramatically. To address this problem, we propose\nan end-to-end framework called PENCIL, which can update both network parameters\nand label estimations as label distributions. PENCIL is independent of the\nbackbone network structure and does not need an auxiliary clean dataset or\nprior information about noise, thus it is more general and robust than existing\nmethods and is easy to apply. PENCIL outperforms previous state-of-the-art\nmethods by large margins on both synthetic and real-world datasets with\ndifferent noise types and noise rates. Experiments show that PENCIL is robust\non clean datasets, too.", "field": [], "task": ["Image Classification", "Learning with noisy labels"], "method": [], "dataset": ["Clothing1M"], "metric": ["Accuracy"], "title": "Probabilistic End-to-end Noise Correction for Learning with Noisy Labels"} {"abstract": "The well-known word analogy experiments show that the recent word vectors\ncapture fine-grained linguistic regularities in words by linear vector offsets,\nbut it is unclear how well the simple vector offsets can encode visual\nregularities over words. We study a particular image-word relevance relation in\nthis paper.
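The central idea of the PENCIL abstract above, treating the label of each training sample as a learnable distribution that is updated by back-propagation alongside the network weights, can be illustrated with a minimal sketch. The loss terms and their weights here are simplified assumptions, not the exact formulation or hyper-parameters of the paper.

    import torch
    import torch.nn.functional as F

    num_samples, num_classes = 50_000, 10
    # one learnable label distribution (stored as logits) per training sample
    label_logits = torch.nn.Parameter(torch.zeros(num_samples, num_classes))

    def noise_correction_loss(pred_logits, noisy_labels, idx, alpha=0.1, beta=0.4):
        label_dist = F.softmax(label_logits[idx], dim=1)
        # fit the network to the current label estimates
        fit = F.kl_div(F.log_softmax(pred_logits, dim=1), label_dist, reduction="batchmean")
        # keep the label estimates compatible with the observed (noisy) labels
        compat = F.cross_entropy(label_logits[idx], noisy_labels)
        # entropy term that discourages degenerate, overly flat predictions
        log_p = F.log_softmax(pred_logits, dim=1)
        entropy = -(log_p.exp() * log_p).sum(dim=1).mean()
        return fit + alpha * compat + beta * entropy

    pred = torch.randn(16, num_classes, requires_grad=True)
    noisy = torch.randint(0, num_classes, (16,))
    idx = torch.randint(0, num_samples, (16,))
    loss = noise_correction_loss(pred, noisy, idx)   # label_logits also receives gradients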
Our results show that the word vectors of relevant tags for a given\nimage rank ahead of the irrelevant tags, along a principal direction in the\nword vector space. Inspired by this observation, we propose to solve image\ntagging by estimating the principal direction for an image. Particularly, we\nexploit linear mappings and nonlinear deep neural networks to approximate the\nprincipal direction from an input image. We arrive at a quite versatile tagging\nmodel. It runs fast given a test image, in constant time w.r.t.\\ the training\nset size. It not only gives superior performance for the conventional tagging\ntask on the NUS-WIDE dataset, but also outperforms competitive baselines on\nannotating images with previously unseen tags", "field": [], "task": ["Multi-label zero-shot learning", "Zero-Shot Learning"], "method": [], "dataset": ["NUS-WIDE"], "metric": ["mAP"], "title": "Fast Zero-Shot Image Tagging"} {"abstract": "Recently, there is rising interest in modelling the interactions of two\nsentences with deep neural networks. However, most of the existing methods\nencode two sequences with separate encoders, in which a sentence is encoded\nwith little or no information from the other sentence. In this paper, we\npropose a deep architecture to model the strong interaction of sentence pair\nwith two coupled-LSTMs. Specifically, we introduce two coupled ways to model\nthe interdependences of two LSTMs, coupling the local contextualized\ninteractions of two sentences. We then aggregate these interactions and use a\ndynamic pooling to select the most informative features. Experiments on two\nvery large datasets demonstrate the efficacy of our proposed architecture and\nits superiority to state-of-the-art methods.", "field": [], "task": [], "method": [], "dataset": ["SNLI"], "metric": ["Parameters", "% Train Accuracy", "% Test Accuracy"], "title": "Modelling Interaction of Sentence Pair with coupled-LSTMs"} {"abstract": "Recurrent neural networks (RNNs) process input text sequentially and model\nthe conditional transition between word tokens. In contrast, the advantages of\nrecursive networks include that they explicitly model the compositionality and\nthe recursive structure of natural language. However, the current recursive\narchitecture is limited by its dependence on syntactic tree. In this paper, we\nintroduce a robust syntactic parsing-independent tree structured model, Neural\nTree Indexers (NTI) that provides a middle ground between the sequential RNNs\nand the syntactic treebased recursive models. NTI constructs a full n-ary tree\nby processing the input text with its node function in a bottom-up fashion.\nAttention mechanism can then be applied to both structure and node function. We\nimplemented and evaluated a binarytree model of NTI, showing the model achieved\nthe state-of-the-art performance on three different NLP tasks: natural language\ninference, answer sentence selection, and sentence classification,\noutperforming state-of-the-art recurrent and recursive neural networks.", "field": [], "task": ["Natural Language Inference", "Sentence Classification"], "method": [], "dataset": ["SNLI"], "metric": ["Parameters", "% Train Accuracy", "% Test Accuracy"], "title": "Neural Tree Indexers for Text Understanding"} {"abstract": "Convolutional Neural Networks (CNN) conduct image classification by activating dominant features that correlated with labels. 
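The tagging rule in the Fast Zero-Shot Image Tagging abstract above, ranking tag word vectors by their projection onto a principal direction predicted from the image, reduces to a dot product and a sort once a mapping from image features to the word-vector space is assumed. The linear mapping and dimensions below are illustrative stand-ins for the learned regressor.

    import numpy as np

    def rank_tags(image_feature, mapping, tag_vectors, tag_names):
        # Predict a direction in word-vector space and score every tag against it.
        direction = mapping @ image_feature
        direction /= np.linalg.norm(direction) + 1e-8
        scores = tag_vectors @ direction
        order = np.argsort(-scores)              # relevant tags should rank first
        return [(tag_names[i], float(scores[i])) for i in order]

    img = np.random.randn(2048)                  # e.g. a CNN image feature
    W = 0.01 * np.random.randn(300, 2048)        # learned image-to-word-space mapping
    tags = np.random.randn(5, 300)               # word vectors of candidate tags
    print(rank_tags(img, W, tags, ["dog", "beach", "car", "tree", "person"])[:3])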
When the training and testing data are under similar distributions, their dominant features are similar, which usually facilitates decent performance on the testing data. The performance is nonetheless unmet when tested on samples from different distributions, leading to the challenges in cross-domain image classification. We introduce a simple training heuristic, Representation Self-Challenging (RSC), that significantly improves the generalization of CNN to the out-of-domain data. RSC iteratively challenges (discards) the dominant features activated on the training data, and forces the network to activate remaining features that correlates with labels. This process appears to activate feature representations applicable to out-of-domain data without prior knowledge of new domain and without learning extra network parameters. We present theoretical properties and conditions of RSC for improving cross-domain generalization. The experiments endorse the simple, effective and architecture-agnostic nature of our RSC method.", "field": [], "task": ["Domain Generalization", "Image Classification"], "method": [], "dataset": ["VLCS", "Office-Home", "PACS"], "metric": ["Average Accuracy"], "title": "Self-Challenging Improves Cross-Domain Generalization"} {"abstract": "We present a new two-stage 3D object detection framework, named sparse-to-dense 3D Object Detector (STD). The first stage is a bottom-up proposal generation network that uses raw point cloud as input to generate accurate proposals by seeding each point with a new spherical anchor. It achieves a high recall with less computation compared with prior works. Then, PointsPool is applied for generating proposal features by transforming their interior point features from sparse expression to compact representation, which saves even more computation time. In box prediction, which is the second stage, we implement a parallel intersection-over-union (IoU) branch to increase awareness of localization accuracy, resulting in further improved performance. We conduct experiments on KITTI dataset, and evaluate our method in terms of 3D object and Bird's Eye View (BEV) detection. Our method outperforms other state-of-the-arts by a large margin, especially on the hard set, with inference speed more than 10 FPS.", "field": [], "task": ["3D Object Detection", "Object Detection"], "method": [], "dataset": ["KITTI Cyclists Hard", "KITTI Pedestrians Hard", "KITTI Cars Moderate", "KITTI Cyclists Moderate", "KITTI Cars Hard", "KITTI Pedestrians Moderate", "KITTI Pedestrians Easy", "KITTI Cyclists Easy", "KITTI Cars Easy"], "metric": ["AP"], "title": "STD: Sparse-to-Dense 3D Object Detector for Point Cloud"} {"abstract": "This paper proposes dynamic chunk reader (DCR), an end-to-end neural reading\ncomprehension (RC) model that is able to extract and rank a set of answer\ncandidates from a given document to answer questions. DCR is able to predict\nanswers of variable lengths, whereas previous neural RC models primarily\nfocused on predicting single tokens or entities. DCR encodes a document and an\ninput question with recurrent neural networks, and then applies a word-by-word\nattention mechanism to acquire question-aware representations for the document,\nfollowed by the generation of chunk representations and a ranking module to\npropose the top-ranked chunk as the answer. 
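The Representation Self-Challenging heuristic summarized above, discarding the most dominant features so that the remaining ones must carry the label signal, is commonly realized by muting the feature channels with the largest gradient of the ground-truth score. The channel-wise variant below is a minimal sketch; the drop percentage is an assumed value and the published method also includes spatial-wise challenging and batch-level percentages.

    import torch
    import torch.nn.functional as F

    def rsc_loss(backbone, classifier, images, labels, drop_pct=0.33):
        feats = backbone(images)                              # [B, C] pooled features
        scores = classifier(feats)
        gt_score = scores.gather(1, labels[:, None]).sum()
        grads, = torch.autograd.grad(gt_score, feats, retain_graph=True)
        k = max(1, int(feats.size(1) * drop_pct))
        thresh = grads.topk(k, dim=1).values[:, -1:]          # per-sample cut-off
        mask = (grads < thresh).float()                       # mute the top-k channels
        return F.cross_entropy(classifier(feats * mask), labels)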
Experimental results show that DCR\nachieves state-of-the-art exact match and F1 scores on the SQuAD dataset.", "field": [], "task": ["Question Answering", "Reading Comprehension"], "method": [], "dataset": ["SQuAD1.1 dev", "SQuAD1.1"], "metric": ["EM", "F1"], "title": "End-to-End Answer Chunk Extraction and Ranking for Reading Comprehension"} {"abstract": "Keyword spotting (KWS) is a major component of human\u2013computer interaction for smart on-device terminals and service robots, the purpose of which is to maximize the detection accuracy while keeping footprint size small. In this paper, based on the powerful ability of DenseNet on extracting local feature-maps, we propose a new network architecture (DenseNet-BiLSTM) for KWS. In our DenseNet-BiLSTM, the DenseNet is primarily applied to obtain local features, while the BiLSTM is used to grab time series features. In general, the DenseNet is used in computer vision tasks, and it may corrupt contextual information for speech audios. In order to make DenseNet suitable for KWS, we propose a variant DenseNet, called DenseNet-Speech, which removes the pool on the time dimension in transition layers to preserve speech time series information. In addition, our DenseNet-Speech uses less dense blocks and filters to keep the model small, thereby reducing time consumption for mobile devices. The experimental results show that feature-maps from DenseNet-Speech maintain time series information well. Our method outperforms the state-of-the-art methods in terms of accuracy on Google Speech Commands dataset. DenseNet-BiLSTM is able to achieve the accuracy of 96.6% for the 20-commands recognition task with 223K trainable parameters.", "field": [], "task": ["Keyword Spotting", "Time Series"], "method": [], "dataset": ["Google Speech Commands"], "metric": ["Google Speech Commands V2 20"], "title": "Effective Combination of DenseNet and BiLSTM for Keyword Spotting"} {"abstract": "Self-attention networks have proven to be of profound value for its strength\nof capturing global dependencies. In this work, we propose to model localness\nfor self-attention networks, which enhances the ability of capturing useful\nlocal context. We cast localness modeling as a learnable Gaussian bias, which\nindicates the central and scope of the local region to be paid more attention.\nThe bias is then incorporated into the original attention distribution to form\na revised distribution. To maintain the strength of capturing long distance\ndependencies and enhance the ability of capturing short-range dependencies, we\nonly apply localness modeling to lower layers of self-attention networks.\nQuantitative and qualitative analyses on Chinese-English and English-German\ntranslation tasks demonstrate the effectiveness and universality of the\nproposed approach.", "field": [], "task": [], "method": [], "dataset": ["WMT2014 English-German"], "metric": ["BLEU score"], "title": "Modeling Localness for Self-Attention Networks"} {"abstract": "Change detection in high resolution remote sensing images is crucial to the understanding of land surface changes. As traditional change detection methods are not suitable for the task considering the challenges brought by the fine image details and complex texture features conveyed in high resolution images, a number of deep learning-based change detection methods have been proposed to improve the change detection performance.
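The learnable Gaussian bias from the localness-modeling abstract above can be added to ordinary dot-product attention in a few lines: each query gets a centre and a window, and key positions far from the centre are down-weighted before the softmax. In the paper the centre and window are predicted from the hidden states; the fixed values below are only for illustration.

    import torch
    import torch.nn.functional as F

    def local_attention(q, k, v, centers, windows):
        d = q.size(-1)
        logits = q @ k.transpose(-2, -1) / d ** 0.5               # [B, L, L]
        pos = torch.arange(k.size(1), device=q.device).float()    # key positions
        # Gaussian bias: -(j - centre_i)^2 / (2 * (window_i / 2)^2)
        bias = -((pos[None, None, :] - centers[..., None]) ** 2) / (
            2 * (windows[..., None] / 2) ** 2 + 1e-6)
        return F.softmax(logits + bias, dim=-1) @ v

    B, L, d = 2, 8, 16
    q, k, v = (torch.randn(B, L, d) for _ in range(3))
    centers = torch.full((B, L), 3.0)      # would be predicted per query in practice
    windows = torch.full((B, L), 2.0)
    out = local_attention(q, k, v, centers, windows)              # [B, L, d]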
Although the state-of-the-art deep feature based methods outperform all the other deep learning-based change detection methods, networks in the existing deep feature based methods are mostly modified from architectures that are originally proposed for single-image semantic segmentation. Transferring these networks for change detection task still poses some key issues. In this paper, we propose a deeply supervised image fusion network (IFN) for change detection in high resolution bi-temporal remote sensing images. Specifically, highly representative deep features of bi-temporal images are firstly extracted through a fully convolutional two-stream architecture. Then, the extracted deep features are fed into a deeply supervised difference discrimination network (DDN) for change detection. To improve boundary completeness and internal compactness of objects in the output change maps, multi-level deep features of raw images are fused with image difference features by means of attention modules for change map reconstruction. DDN is further enhanced by directly introducing change map losses to intermediate layers in the network, and the whole network is trained in an end-to-end manner. IFN is applied to a publicly available dataset, as well as a challenging dataset consisting of multi-source bi-temporal images from Google Earth covering different cities in China. Both visual interpretation and quantitative assessment confirm that IFN outperforms four benchmark methods derived from the literature, by returning changed areas with complete boundaries and high internal compactness compared to the state-of-the-art methods.", "field": [], "task": ["Change detection for remote sensing images", "Semantic Segmentation"], "method": [], "dataset": ["CDD Dataset (season-varying)"], "metric": ["F1-Score"], "title": "A deeply supervised image fusion network for change detection in high resolution bi-temporal remote sensing images"} {"abstract": "Implementation and experiments of graph embedding algorithms.deep walk,LINE(Large-scale Information Network Embedding),node2vec,SDNE(Structural Deep Network Embedding),struc2vec", "field": [], "task": ["Graph Embedding", "Network Embedding", "Node Classification"], "method": [], "dataset": ["BlogCatalog", "Wikipedia"], "metric": ["Macro-F1", "Accuracy"], "title": "struc2vec: Learning Node Representations from Structural Identity"} {"abstract": "In this paper, we propose the TBCNN-pair model to recognize entailment and\ncontradiction between two sentences. In our model, a tree-based convolutional\nneural network (TBCNN) captures sentence-level semantics; then heuristic\nmatching layers like concatenation, element-wise product/difference combine the\ninformation in individual sentences. Experimental results show that our model\noutperforms existing sentence encoding-based approaches by a large margin.", "field": [], "task": ["Natural Language Inference"], "method": [], "dataset": ["SNLI"], "metric": ["Parameters", "% Train Accuracy", "% Test Accuracy"], "title": "Natural Language Inference by Tree-Based Convolution and Heuristic Matching"} {"abstract": "In Word Sense Disambiguation (WSD), the predominant approach generally\ninvolves a supervised system trained on sense annotated corpora. The limited\nquantity of such corpora however restricts the coverage and the performance of\nthese systems. 
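The heuristic matching layer from the TBCNN-pair abstract above combines two sentence vectors through concatenation, element-wise product and element-wise difference before classification. A minimal sketch follows; the hidden size and classifier head are assumptions, and the sentence encoders themselves (tree-based CNNs in the paper) are left out.

    import torch
    import torch.nn as nn

    class HeuristicMatcher(nn.Module):
        def __init__(self, dim, num_classes=3, hidden=200):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(4 * dim, hidden), nn.Tanh(),
                                     nn.Linear(hidden, num_classes))

        def forward(self, h1, h2):
            # concatenation, element-wise product, element-wise difference
            features = torch.cat([h1, h2, h1 * h2, h1 - h2], dim=-1)
            return self.mlp(features)

    matcher = HeuristicMatcher(dim=300)
    logits = matcher(torch.randn(8, 300), torch.randn(8, 300))    # entail/contradict/neutral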
In this article, we propose a new method that solves these\nissues by taking advantage of the knowledge present in WordNet, and especially\nthe hypernymy and hyponymy relationships between synsets, in order to reduce\nthe number of different sense tags that are necessary to disambiguate all words\nof the lexical database. Our method leads to state of the art results on most\nWSD evaluation tasks, while improving the coverage of supervised systems,\nreducing the training time and the size of the models, without additional\ntraining data. In addition, we exhibit results that significantly outperform\nthe state of the art when our method is combined with an ensembling technique\nand the addition of the WordNet Gloss Tagged as training corpus.", "field": [], "task": ["Word Sense Disambiguation"], "method": [], "dataset": ["Supervised:", "SensEval 3 Task 1", "SemEval 2013 Task 12", "SemEval 2007 Task 17", "SemEval 2015 Task 13", "SemEval 2007 Task 7", "SensEval 2"], "metric": ["Senseval 2", "Senseval 3", "SemEval 2013", "F1", "SemEval 2007", "SemEval 2015"], "title": "Improving the Coverage and the Generalization Ability of Neural Word Sense Disambiguation through Hypernymy and Hyponymy Relationships"} {"abstract": "Most existing person re-identification (re-id) methods rely on supervised\nmodel learning on per-camera-pair manually labelled pairwise training data.\nThis leads to poor scalability in a practical re-id deployment, due to the lack\nof exhaustive identity labelling of positive and negative image pairs for every\ncamera-pair. In this work, we present an unsupervised re-id deep learning\napproach. It is capable of incrementally discovering and exploiting the\nunderlying re-id discriminative information from automatically generated person\ntracklet data end-to-end. We formulate an Unsupervised Tracklet Association\nLearning (UTAL) framework. This is by jointly learning within-camera tracklet\ndiscrimination and cross-camera tracklet association in order to maximise the\ndiscovery of tracklet identity matching both within and across camera views.\nExtensive experiments demonstrate the superiority of the proposed model over\nthe state-of-the-art unsupervised learning and domain adaptation person re-id\nmethods on eight benchmarking datasets.", "field": [], "task": ["Domain Adaptation", "Person Re-Identification"], "method": [], "dataset": ["PRID2011", "iLIDS-VID", "CUHK03", "DukeTracklet", "MSMT17", "DukeMTMC-reID", "MARS", "Market-1501"], "metric": ["mAP", "Rank-10", "MAP", "Rank-1", "Rank-20", "Rank-5"], "title": "Unsupervised Tracklet Person Re-Identification"} {"abstract": "We propose a novel crowd counting approach that leverages abundantly\navailable unlabeled crowd imagery in a learning-to-rank framework. To induce a\nranking of cropped images , we use the observation that any sub-image of a\ncrowded scene image is guaranteed to contain the same number or fewer persons\nthan the super-image. This allows us to address the problem of limited size of\nexisting datasets for crowd counting. We collect two crowd scene datasets from\nGoogle using keyword searches and query-by-example image retrieval,\nrespectively. We demonstrate how to efficiently learn from these unlabeled\ndatasets by incorporating learning-to-rank in a multi-task network which\nsimultaneously ranks images and estimates crowd density maps. 
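The self-supervised ranking signal in the crowd-counting abstract above, that a cropped sub-image can never contain more people than the image it was cropped from, translates into a hinge loss on the predicted counts of nested crops. The sketch below sums a predicted density map to obtain counts; the zero margin is an assumption.

    import torch

    def crop_ranking_loss(count_sub, count_super, margin=0.0):
        # penalise only when the nested crop is predicted to hold more people
        return torch.clamp(count_sub - count_super + margin, min=0).mean()

    density_super = torch.rand(4, 1, 64, 64)      # predicted density maps (toy values)
    density_sub = torch.rand(4, 1, 32, 32)        # densities of crops inside them
    loss = crop_ranking_loss(density_sub.sum(dim=(1, 2, 3)),
                             density_super.sum(dim=(1, 2, 3)))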
Experiments on\ntwo of the most challenging crowd counting datasets show that our approach\nobtains state-of-the-art results.", "field": [], "task": ["Crowd Counting", "Image Retrieval", "Learning-To-Rank"], "method": [], "dataset": ["UCF CC 50", "ShanghaiTech A", "ShanghaiTech B"], "metric": ["MAE"], "title": "Leveraging Unlabeled Data for Crowd Counting by Learning to Rank"} {"abstract": "Person Re-Identification aims to retrieve person identities from images captured by multiple cameras or the same cameras in different time instances and locations. Because of its importance in many vision applications from surveillance to human-machine interaction, person re-identification methods need to be reliable and fast. While more and more deep architectures are proposed for increasing performance, those methods also increase overall model complexity. This paper proposes a lightweight network that combines global, part-based, and channel features in a unified multi-branch architecture that builds on the resource-efficient OSNet backbone. Using a well-founded combination of training techniques and design choices, our final model achieves state-of-the-art results on CUHK03 labeled, CUHK03 detected, and Market-1501 with 85.1% mAP / 87.2% rank1, 82.4% mAP / 84.9% rank1, and 91.5% mAP / 96.3% rank1, respectively.", "field": [], "task": ["Person Re-Identification"], "method": [], "dataset": ["CUHK03 detected", "Market-1501", "CUHK03 labeled"], "metric": ["Rank-1", "MAP"], "title": "Lightweight Multi-Branch Network for Person Re-Identification"} {"abstract": "People live in a 3D world. However, existing works on person re-identification (re-id) mostly consider the semantic representation learning in a 2D space, intrinsically limiting the understanding of people. In this work, we address this limitation by exploring the prior knowledge of the 3D body structure. Specifically, we project 2D images to a 3D space and introduce a novel parameter-efficient Omni-scale Graph Network (OG-Net) to learn the pedestrian representation directly from 3D point clouds. OG-Net effectively exploits the local information provided by sparse 3D points and takes advantage of the structure and appearance information in a coherent manner. With the help of 3D geometry information, we can learn a new type of deep re-id feature free from noisy variants, such as scale and viewpoint. To our knowledge, we are among the first attempts to conduct person re-identification in the 3D space. We demonstrate through extensive experiments that the proposed method (1) eases the matching difficulty in the traditional 2D space, (2) exploits the complementary information of 2D appearance and 3D structure, (3) achieves competitive results with limited parameters on four large-scale person re-id datasets, and (4) has good scalability to unseen datasets.", "field": [], "task": ["3D Point Cloud Classification", "Person Re-Identification", "Representation Learning"], "method": [], "dataset": ["MSMT17", "DukeMTMC-reID->Market-1501", "ModelNet40", "DukeMTMC-reID", "Market-1501->DukeMTMC-reID", "Market-1501"], "metric": ["Overall Accuracy", "mAP", "MAP", "Rank-1", "Mean Accuracy"], "title": "Parameter-Efficient Person Re-identification in the 3D Space"} {"abstract": "Person re-identification (re-ID) has become increasingly popular in the\ncommunity due to its application and research significance. It aims at spotting\na person of interest in other cameras. In the early days, hand-crafted\nalgorithms and small-scale evaluation were predominantly reported. 
Recent years\nhave witnessed the emergence of large-scale datasets and deep learning systems\nwhich make use of large data volumes. Considering different tasks, we classify\nmost current re-ID methods into two classes, i.e., image-based and video-based;\nin both tasks, hand-crafted and deep learning systems will be reviewed.\nMoreover, two new re-ID tasks which are much closer to real-world applications\nare described and discussed, i.e., end-to-end re-ID and fast re-ID in very\nlarge galleries. This paper: 1) introduces the history of person re-ID and its\nrelationship with image classification and instance retrieval; 2) surveys a\nbroad selection of the hand-crafted systems and the large-scale methods in both\nimage- and video-based re-ID; 3) describes critical future directions in\nend-to-end re-ID and fast retrieval in large galleries; and 4) finally briefs\nsome important yet under-developed issues.", "field": [], "task": ["Image Classification", "Person Re-Identification"], "method": [], "dataset": ["DukeMTMC-reID", "Market-1501"], "metric": ["Rank-1", "MAP"], "title": "Person Re-identification: Past, Present and Future"} {"abstract": "We introduce associative embedding, a novel method for supervising\nconvolutional neural networks for the task of detection and grouping. A number\nof computer vision problems can be framed in this manner including multi-person\npose estimation, instance segmentation, and multi-object tracking. Usually the\ngrouping of detections is achieved with multi-stage pipelines, instead we\npropose an approach that teaches a network to simultaneously output detections\nand group assignments. This technique can be easily integrated into any\nstate-of-the-art network architecture that produces pixel-wise predictions. We\nshow how to apply this method to both multi-person pose estimation and instance\nsegmentation and report state-of-the-art performance for multi-person pose on\nthe MPII and MS-COCO datasets.", "field": [], "task": ["Instance Segmentation", "Keypoint Detection", "Multi-Person Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["COCO", "MPII Multi-Person", "COCO test-dev"], "metric": ["Test AP", "ARM", "APM", "AR75", "AR50", "ARL", "AP75", "AP", "APL", "mAP@0.5", "AP50", "AR"], "title": "Associative Embedding: End-to-End Learning for Joint Detection and Grouping"} {"abstract": "Discriminative model learning for image denoising has been recently\nattracting considerable attentions due to its favorable denoising performance.\nIn this paper, we take one step forward by investigating the construction of\nfeed-forward denoising convolutional neural networks (DnCNNs) to embrace the\nprogress in very deep architecture, learning algorithm, and regularization\nmethod into image denoising. Specifically, residual learning and batch\nnormalization are utilized to speed up the training process as well as boost\nthe denoising performance. Different from the existing discriminative denoising\nmodels which usually train a specific model for additive white Gaussian noise\n(AWGN) at a certain noise level, our DnCNN model is able to handle Gaussian\ndenoising with unknown noise level (i.e., blind Gaussian denoising). With the\nresidual learning strategy, DnCNN implicitly removes the latent clean image in\nthe hidden layers. This property motivates us to train a single DnCNN model to\ntackle with several general image denoising tasks such as Gaussian denoising,\nsingle image super-resolution and JPEG image deblocking. 
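The residual learning strategy in the DnCNN abstract above, predicting the noise and subtracting it from the input, is easy to show in miniature. The network below is far shallower than the published architecture and the layer sizes are illustrative only.

    import torch
    import torch.nn as nn

    class TinyResidualDenoiser(nn.Module):
        def __init__(self, channels=1, features=64, depth=5):
            super().__init__()
            layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
            for _ in range(depth - 2):
                layers += [nn.Conv2d(features, features, 3, padding=1),
                           nn.BatchNorm2d(features), nn.ReLU(inplace=True)]
            layers += [nn.Conv2d(features, channels, 3, padding=1)]
            self.body = nn.Sequential(*layers)

        def forward(self, noisy):
            residual = self.body(noisy)           # predicted noise map
            return noisy - residual               # denoised estimate

    model = TinyResidualDenoiser()
    noisy = torch.rand(2, 1, 40, 40)
    clean_hat = model(noisy)
    loss = nn.functional.mse_loss(clean_hat, torch.rand(2, 1, 40, 40))  # vs. ground truth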
Our extensive\nexperiments demonstrate that our DnCNN model can not only exhibit high\neffectiveness in several general image denoising tasks, but also be efficiently\nimplemented by benefiting from GPU computing.", "field": [], "task": ["Denoising", "Image Denoising", "Image Super-Resolution", "JPEG Artifact Correction", "Super-Resolution"], "method": [], "dataset": ["Darmstadt Noise Dataset", "Urban100 sigma15", "BSD100 - 4x upscaling", "Set14 - 2x upscaling", "BSD100 - 2x upscaling", "Urban100 - 3x upscaling", "LIVE1 (Quality 40 Grayscale)", "BSD68 sigma25", "Classic5 (Quality 20 Grayscale)", "Set5 - 2x upscaling", "Urban100 - 4x upscaling", "Set5 - 3x upscaling", "Urban100 sigma25", "Set14 - 4x upscaling", "Set14 - 3x upscaling", "Live1 (Quality 10 Grayscale)", "Classic5 (Quality 10 Grayscale)", "LIVE1 (Quality 20 Grayscale)", "Set5 - 4x upscaling", "LIVE1 (Quality 30 Grayscale)", "Classic5 (Quality 40 Grayscale)", "BSD68 sigma15", "BSD100 - 3x upscaling", "Urban100 - 2x upscaling", "CBSD68 sigma35", "Classic5 (Quality 30 Grayscale)"], "metric": ["SSIM", "PSNR"], "title": "Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising"} {"abstract": "Sparse matrix factorization is a popular tool to obtain interpretable data\ndecompositions, which are also effective to perform data completion or\ndenoising. Its applicability to large datasets has been addressed with online\nand randomized methods, that reduce the complexity in one of the matrix\ndimension, but not in both of them. In this paper, we tackle very large\nmatrices in both dimensions. We propose a new factoriza-tion method that scales\ngracefully to terabyte-scale datasets, that could not be processed by previous\nalgorithms in a reasonable amount of time. We demonstrate the efficiency of our\napproach on massive functional Magnetic Resonance Imaging (fMRI) data, and on\nmatrix completion problems for recommender systems, where we obtain significant\nspeed-ups compared to state-of-the art coordinate descent methods.", "field": [], "task": ["Dictionary Learning", "Matrix Completion", "Recommendation Systems"], "method": [], "dataset": ["MovieLens 1M", "MovieLens 10M"], "metric": ["RMSE"], "title": "Dictionary Learning for Massive Matrix Factorization"} {"abstract": "Multi-Target Multi-Camera Tracking (MTMCT) tracks many people through video\ntaken from several cameras. Person Re-Identification (Re-ID) retrieves from a\ngallery images of people similar to a person query image. We learn good\nfeatures for both MTMCT and Re-ID with a convolutional neural network. Our\ncontributions include an adaptive weighted triplet loss for training and a new\ntechnique for hard-identity mining. Our method outperforms the state of the art\nboth on the DukeMTMC benchmarks for tracking, and on the Market-1501 and\nDukeMTMC-ReID benchmarks for Re-ID. We examine the correlation between good\nRe-ID and good MTMCT scores, and perform ablation studies to elucidate the\ncontributions of the main components of our system. Code is available.", "field": [], "task": ["Person Re-Identification"], "method": [], "dataset": ["Market-1501"], "metric": ["Rank-1", "MAP"], "title": "Features for Multi-Target Multi-Camera Tracking and Re-Identification"} {"abstract": "Joint entity and relation extraction is to detect entity and relation using a single model. 
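The adaptive weighted triplet loss mentioned in the multi-target multi-camera tracking abstract above is not specified in detail there; one plausible soft-weighted formulation, where harder positives and negatives receive larger softmax weights, is sketched below. The exact weighting used by the authors may differ, and the sketch assumes PK sampling so every identity has at least two images in the batch.

    import torch
    import torch.nn.functional as F

    def soft_weighted_triplet_loss(dist, labels, margin=0.3):
        # dist: [B, B] pairwise embedding distances for the batch
        same = labels[:, None].eq(labels[None, :]).float()
        pos_mask = same - torch.eye(len(labels), device=dist.device)
        neg_mask = 1.0 - same
        w_pos = F.softmax(dist * pos_mask - 1e9 * (1 - pos_mask), dim=1)   # farther positives weigh more
        w_neg = F.softmax(-dist * neg_mask - 1e9 * (1 - neg_mask), dim=1)  # closer negatives weigh more
        d_pos = (w_pos * dist).sum(dim=1)
        d_neg = (w_neg * dist).sum(dim=1)
        return F.relu(d_pos - d_neg + margin).mean()

    emb = F.normalize(torch.randn(16, 128), dim=1)
    labels = torch.randint(0, 4, (16,))
    loss = soft_weighted_triplet_loss(torch.cdist(emb, emb), labels)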
In this paper, we present a novel unified joint extraction model which directly tags entity and relation labels according to a query word position p, i.e., detecting entity at p, and identifying entities at other positions that have relationship with the former. To this end, we first design a tagging scheme to generate n tag sequences for an n-word sentence. Then a position-attention mechanism is introduced to produce different sentence representations for every query position to model these n tag sequences. In this way, our method can simultaneously extract all entities and their type, as well as all overlapping relations. Experiment results show that our framework performances significantly better on extracting overlapping relations as well as detecting long-range relation, and thus we achieve state-of-the-art performance on two public datasets.", "field": [], "task": ["Joint Entity and Relation Extraction", "Relation Extraction"], "method": [], "dataset": ["NYT", "NYT-single"], "metric": ["F1"], "title": "Joint extraction of entities and overlapping relations using position-attentive sequence labeling"} {"abstract": "In the past few years, the field of computer vision has gone through a\nrevolution fueled mainly by the advent of large datasets and the adoption of\ndeep convolutional neural networks for end-to-end learning. The person\nre-identification subfield is no exception to this. Unfortunately, a prevailing\nbelief in the community seems to be that the triplet loss is inferior to using\nsurrogate losses (classification, verification) followed by a separate metric\nlearning step. We show that, for models trained from scratch as well as\npretrained ones, using a variant of the triplet loss to perform end-to-end deep\nmetric learning outperforms most other published methods by a large margin.", "field": [], "task": ["Metric Learning", "Person Re-Identification"], "method": [], "dataset": ["MARS", "DukeMTMC-reID", "Market-1501", "CUHK03"], "metric": ["Rank-1", "mAP", "Rank-5", "MAP"], "title": "In Defense of the Triplet Loss for Person Re-Identification"} {"abstract": "In this paper, we propose a novel method called AlignedReID that extracts a\nglobal feature which is jointly learned with local features. Global feature\nlearning benefits greatly from local feature learning, which performs an\nalignment/matching by calculating the shortest path between two sets of local\nfeatures, without requiring extra supervision. After the joint learning, we\nonly keep the global feature to compute the similarities between images. Our\nmethod achieves rank-1 accuracy of 94.4% on Market1501 and 97.8% on CUHK03,\noutperforming state-of-the-art methods by a large margin. We also evaluate\nhuman-level performance and demonstrate that our method is the first to surpass\nhuman-level performance on Market1501 and CUHK03, two widely used Person ReID\ndatasets.", "field": [], "task": ["Person Re-Identification"], "method": [], "dataset": ["CUHK-SYSU", "Market-1501", "CUHK03"], "metric": ["Rank-1", "Rank-10", "Rank-5", "MAP"], "title": "AlignedReID: Surpassing Human-Level Performance in Person Re-Identification"} {"abstract": "In this paper, we propose quantized densely connected U-Nets for efficient\nvisual landmark localization. The idea is that features of the same semantic\nmeanings are globally reused across the stacked U-Nets. 
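The local alignment in the AlignedReID abstract above computes a shortest path through the pairwise distance matrix of horizontal-stripe features, so that stripes of the two images are matched monotonically from top to bottom. A plain dynamic-programming sketch follows; the paper additionally normalizes the element-wise distances, which is omitted here.

    import numpy as np

    def shortest_path_distance(stripes_a, stripes_b):
        # stripes_*: [num_stripes, dim] local features of two images
        d = np.linalg.norm(stripes_a[:, None, :] - stripes_b[None, :, :], axis=-1)
        m, n = d.shape
        cost = np.zeros((m, n))
        for i in range(m):
            for j in range(n):
                if i == 0 and j == 0:
                    cost[i, j] = d[i, j]
                elif i == 0:
                    cost[i, j] = cost[i, j - 1] + d[i, j]
                elif j == 0:
                    cost[i, j] = cost[i - 1, j] + d[i, j]
                else:
                    cost[i, j] = min(cost[i - 1, j], cost[i, j - 1]) + d[i, j]
        return cost[-1, -1]          # total cost of the cheapest monotone alignment

    print(shortest_path_distance(np.random.randn(7, 128), np.random.randn(7, 128)))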
This dense connectivity\nlargely improves the information flow, yielding improved localization accuracy.\nHowever, a vanilla dense design would suffer from critical efficiency issue in\nboth training and testing. To solve this problem, we first propose order-K\ndense connectivity to trim off long-distance shortcuts; then, we use a\nmemory-efficient implementation to significantly boost the training efficiency\nand investigate an iterative refinement that may slice the model size in half.\nFinally, to reduce the memory consumption and high precision operations both in\ntraining and testing, we further quantize weights, inputs, and gradients of our\nlocalization network to low bit-width numbers. We validate our approach in two\ntasks: human pose estimation and face alignment. The results show that our\napproach achieves state-of-the-art localization accuracy, but using ~70% fewer\nparameters, ~98% less model size and saving ~75% training memory compared with\nother benchmark localizers. The code is available at\nhttps://github.com/zhiqiangdon/CU-Net.", "field": [], "task": ["Face Alignment", "Pose Estimation"], "method": [], "dataset": ["MPII Human Pose"], "metric": ["PCKh-0.5"], "title": "Quantized Densely Connected U-Nets for Efficient Landmark Localization"} {"abstract": "With the advances in capturing 2D or 3D skeleton data, skeleton-based action recognition has received an increasing interest over the last years. As skeleton data is commonly represented by graphs, graph convolutional networks have been proposed for this task. While current graph convolutional networks accurately recognize actions, they are too expensive for robotics applications where limited computational resources are available. In this paper, we therefore propose a highly efficient graph convolutional network that addresses the limitations of previous works. This is achieved by a parallel structure that gradually fuses motion and spatial information and by reducing the temporal resolution as early as possible. Furthermore, we explicitly address the issue that human poses can contain errors. To this end, the network first refines the poses before they are further processed to recognize the action. We therefore call the network Pose Refinement Graph Convolutional Network. Compared to other graph convolutional networks, our network requires 86\\%-93\\% less parameters and reduces the floating point operations by 89%-96% while achieving a comparable accuracy. It therefore provides a much better trade-off between accuracy, memory footprint and processing time, which makes it suitable for robotics applications.", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition"], "method": [], "dataset": ["NTU RGB+D", "Kinetics-Skeleton dataset"], "metric": ["Accuracy (CS)", "Accuracy (CV)", "Accuracy"], "title": "Pose Refinement Graph Convolutional Network for Skeleton-based Action Recognition"} {"abstract": "In this work, we propose a novel segmental hypergraph representation to model\noverlapping entity mentions that are prevalent in many practical datasets. We\nshow that our model built on top of such a new representation is able to\ncapture features and interactions that cannot be captured by previous models\nwhile maintaining a low time complexity for inference. We also present a\ntheoretical analysis to formally assess how our representation is better than\nalternative representations reported in the literature in terms of\nrepresentational power. 
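The low bit-width quantization mentioned in the quantized U-Nets abstract above can be conveyed with a toy uniform quantizer over a tensor's observed range. This is a generic illustration of quantizing to k bits, not the exact scheme applied to weights, inputs and gradients in that work.

    import torch

    def uniform_quantize(x: torch.Tensor, bits: int = 4) -> torch.Tensor:
        levels = 2 ** bits - 1
        lo, hi = x.min(), x.max()
        scale = (hi - lo) / levels + 1e-12
        q = torch.round((x - lo) / scale)        # integer code in [0, levels]
        return q * scale + lo                    # de-quantized value

    w = torch.randn(64, 64)
    w4 = uniform_quantize(w, bits=4)
    print((w - w4).abs().max())                  # error bounded by roughly scale / 2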
Coupled with neural networks for feature learning, our\nmodel achieves the state-of-the-art performance in three benchmark datasets\nannotated with overlapping mentions.", "field": [], "task": ["Named Entity Recognition", "Nested Mention Recognition", "Nested Named Entity Recognition", "Overlapping Mention Recognition"], "method": [], "dataset": ["GENIA", "ACE 2005", "ACE 2004"], "metric": ["F1"], "title": "Neural Segmental Hypergraphs for Overlapping Mention Recognition"} {"abstract": "Sentence splitting is a major simplification operator. Here we present a\nsimple and efficient splitting algorithm based on an automatic semantic parser.\nAfter splitting, the text is amenable for further fine-tuned simplification\noperations. In particular, we show that neural Machine Translation can be\neffectively used in this situation. Previous application of Machine Translation\nfor simplification suffers from a considerable disadvantage in that they are\nover-conservative, often failing to modify the source in any way. Splitting\nbased on semantic parsing, as proposed here, alleviates this issue. Extensive\nautomatic and human evaluation shows that the proposed method compares\nfavorably to the state-of-the-art in combined lexical and structural\nsimplification.", "field": [], "task": ["Machine Translation", "Semantic Parsing", "Text Simplification"], "method": [], "dataset": ["TurkCorpus"], "metric": ["BLEU", "SARI (EASSE>=0.2.1)"], "title": "Simple and Effective Text Simplification Using Semantic and Neural Methods"} {"abstract": "Referring object detection and referring image segmentation are important\ntasks that require joint understanding of visual information and natural\nlanguage. Yet there has been evidence that current benchmark datasets suffer\nfrom bias, and current state-of-the-art models cannot be easily evaluated on\ntheir intermediate reasoning process. To address these issues and complement\nsimilar efforts in visual question answering, we build CLEVR-Ref+, a synthetic\ndiagnostic dataset for referring expression comprehension. The precise\nlocations and attributes of the objects are readily available, and the\nreferring expressions are automatically associated with functional programs.\nThe synthetic nature allows control over dataset bias (through sampling\nstrategy), and the modular programs enable intermediate reasoning ground truth\nwithout human annotators.\n In addition to evaluating several state-of-the-art models on CLEVR-Ref+, we\nalso propose IEP-Ref, a module network approach that significantly outperforms\nother models on our dataset. In particular, we present two interesting and\nimportant findings using IEP-Ref: (1) the module trained to transform feature\nmaps into segmentation masks can be attached to any intermediate module to\nreveal the entire reasoning process step-by-step; (2) even if all training data\nhas at least one object referred, IEP-Ref can correctly predict no-foreground\nwhen presented with false-premise referring expressions. 
To the best of our\nknowledge, this is the first direct and quantitative proof that neural modules\nbehave in the way they are intended.", "field": [], "task": ["Object Detection", "Question Answering", "Referring Expression Comprehension", "Referring Expression Segmentation", "Semantic Segmentation", "Visual Question Answering", "Visual Reasoning"], "method": [], "dataset": ["CLEVR-Ref+"], "metric": ["IoU"], "title": "CLEVR-Ref+: Diagnosing Visual Reasoning with Referring Expressions"} {"abstract": "Siamese network based trackers formulate tracking as convolutional feature\ncross-correlation between target template and searching region. However,\nSiamese trackers still have accuracy gap compared with state-of-the-art\nalgorithms and they cannot take advantage of feature from deep networks, such\nas ResNet-50 or deeper. In this work we prove the core reason comes from the\nlack of strict translation invariance. By comprehensive theoretical analysis\nand experimental validations, we break this restriction through a simple yet\neffective spatial aware sampling strategy and successfully train a\nResNet-driven Siamese tracker with significant performance gain. Moreover, we\npropose a new model architecture to perform depth-wise and layer-wise\naggregations, which not only further improves the accuracy but also reduces the\nmodel size. We conduct extensive ablation studies to demonstrate the\neffectiveness of the proposed tracker, which obtains currently the best results\non four large tracking benchmarks, including OTB2015, VOT2018, UAV123, and\nLaSOT. Our model will be released to facilitate further studies based on this\nproblem.", "field": [], "task": ["Visual Object Tracking", "Visual Tracking"], "method": [], "dataset": ["TrackingNet", "VOT2017/18"], "metric": ["Normalized Precision", "Precision", "Expected Average Overlap (EAO)", "Accuracy"], "title": "SiamRPN++: Evolution of Siamese Visual Tracking with Very Deep Networks"} {"abstract": "The problem of 3D layout recovery in indoor scenes has been a core research\ntopic for over a decade. However, there are still several major challenges that\nremain unsolved. Among the most relevant ones, a major part of the\nstate-of-the-art methods make implicit or explicit assumptions on the scenes --\ne.g. box-shaped or Manhattan layouts. Also, current methods are computationally\nexpensive and not suitable for real-time applications like robot navigation and\nAR/VR. In this work we present CFL (Corners for Layout), the first end-to-end\nmodel for 3D layout recovery on 360 images. Our experimental results show that\nwe outperform the state of the art relaxing assumptions about the scene and at\na lower cost. We also show that our model generalizes better to camera position\nvariations than conventional approaches by using EquiConvs, a type of\nconvolution applied directly on the sphere projection and hence invariant to\nthe equirectangular distortions.\n CFL Webpage: https://cfernandezlab.github.io/CFL/", "field": [], "task": ["3D Room Layouts From A Single RGB Panorama", "Robot Navigation"], "method": [], "dataset": ["PanoContext"], "metric": ["3DIoU"], "title": "Corners for Layout: End-to-End Layout Recovery from 360 Images"} {"abstract": "Most state-of-the-art methods for action recognition consist of a two-stream architecture with 3D convolutions: an appearance stream for RGB frames and a motion stream for optical flow frames. 
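The depth-wise cross-correlation used for aggregation in the SiamRPN++ abstract above treats each channel of the template features as a convolution kernel for the matching channel of the search-region features. A compact sketch with illustrative feature sizes:

    import torch
    import torch.nn.functional as F

    def depthwise_xcorr(search, template):
        b, c, h, w = search.shape
        kernel = template.reshape(b * c, 1, *template.shape[2:])
        out = F.conv2d(search.reshape(1, b * c, h, w), kernel, groups=b * c)
        return out.reshape(b, c, out.shape[-2], out.shape[-1])

    search = torch.randn(2, 256, 31, 31)          # search-region features
    template = torch.randn(2, 256, 7, 7)          # target template features
    response = depthwise_xcorr(search, template)  # [2, 256, 25, 25] correlation maps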
Although combining flow with RGB improves the performance, the cost of computing accurate optical flow is high, and increases action recognition latency. This limits the usage of two-stream approaches in real-world applications requiring low latency. In this paper, we introduce two learning approaches to train a standard 3D CNN, operating on RGB frames, that mimics the motion stream, and as a result avoids flow computation at test time. First, by minimizing a feature-based loss compared to the Flow stream, we show that the network reproduces the motion stream with high fidelity. Second, to leverage both appearance and motion information effectively, we train with a linear combination of the feature-based loss and the standard cross-entropy loss for action recognition. We denote the stream trained using this combined loss as Motion-Augmented RGB Stream (MARS). As a single stream, MARS performs better than RGB or Flow alone, for instance with 72.7% accuracy on Kinetics compared to 72.0% and 65.6% with RGB and Flow streams respectively.\r", "field": [], "task": ["Action Classification", "Action Recognition", "Optical Flow Estimation", "Temporal Action Localization"], "method": [], "dataset": ["Kinetics-400", "UCF101", "MiniKinetics", "Something-Something V1", "HMDB-51"], "metric": ["3-fold Accuracy", "Top 1 Accuracy", "Top-1 Accuracy", "Average accuracy of 3 splits", "Vid acc@1"], "title": "MARS: Motion-Augmented RGB Stream for Action Recognition"} {"abstract": "The task of person re-identification (ReID) has attracted growing attention in recent years leading to improved performance, albeit with little focus on real-world applications. Most SotA methods are based on heavy pre-trained models, e.g. ResNet50 (~25M parameters), which makes them less practical and more tedious to explore architecture modifications. In this study, we focus on a small-sized randomly initialized model that enables us to easily introduce architecture and training modifications suitable for person ReID. The outcomes of our study are a compact network and a fitting training regime. We show the robustness of the network by outperforming the SotA on both Market1501 and DukeMTMC. Furthermore, we show the representation power of our ReID network via SotA results on a different task of multi-object tracking.", "field": [], "task": ["Multi-Object Tracking", "Person Re-Identification"], "method": [], "dataset": ["DukeMTMC-reID", "Market-1501"], "metric": ["Rank-1", "MAP"], "title": "Compact Network Training for Person ReID"} {"abstract": "In this work, we present several deep learning models for the automatic diacritization of Arabic text. Our models are built using two main approaches, viz. Feed-Forward Neural Network (FFNN) and Recurrent Neural Network (RNN), with several enhancements such as 100-hot encoding, embeddings, Conditional Random Field (CRF) and Block-Normalized Gradient (BNG). The models are tested on the only freely available benchmark dataset and the results show that our models are either better or on par with other models, which require language-dependent post-processing steps, unlike ours. 
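The MARS training objective described in the abstract above combines the usual cross-entropy with a feature-based loss that pushes the RGB stream towards the (frozen) flow stream's features, so flow is no longer needed at test time. The weighting factor below is an assumed placeholder, not the paper's value.

    import torch
    import torch.nn.functional as F

    def mars_style_loss(rgb_features, rgb_logits, flow_features, labels, weight=50.0):
        mimic = F.mse_loss(rgb_features, flow_features.detach())   # match the flow stream
        cls = F.cross_entropy(rgb_logits, labels)                  # standard recognition loss
        return cls + weight * mimic

    rgb_feat = torch.randn(4, 512, requires_grad=True)
    rgb_logits = torch.randn(4, 400, requires_grad=True)
    flow_feat = torch.randn(4, 512)               # from the pre-trained flow network
    labels = torch.randint(0, 400, (4,))
    loss = mars_style_loss(rgb_feat, rgb_logits, flow_feat, labels)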
Moreover, we show that diacritics in Arabic can be used to enhance the models of NLP tasks such as Machine Translation (MT) by proposing the Translation over Diacritization (ToD) approach.", "field": [], "task": ["Arabic Text Diacritization", "Machine Translation"], "method": [], "dataset": ["Tashkeela"], "metric": ["Diacritic Error Rate", "Word Error Rate (WER)"], "title": "Neural Arabic Text Diacritization: State of the Art Results and a Novel Approach for Machine Translation"} {"abstract": "In general, sufficient data is essential for the better performance and generalization of deep-learning models. However, lots of limitations(cost, resources, etc.) of data collection leads to lack of enough data in most of the areas. In addition, various domains of each data sources and licenses also lead to difficulties in collection of sufficient data. This situation makes us hard to utilize not only the pre-trained model, but also the external knowledge. Therefore, it is important to leverage small dataset effectively for achieving the better performance. We applied some techniques in three aspects: data, loss function, and prediction to enable training from scratch with less data. With these methods, we obtain high accuracy by leveraging ImageNet data which consist of only 50 images per class. Furthermore, our model is ranked 4th in Visual Inductive Printers for Data-Effective Computer Vision Challenge.", "field": [], "task": ["Data Augmentation", "Image Classification", "Object Classification"], "method": [], "dataset": ["ImageNet VIPriors subset"], "metric": ["Top-1"], "title": "Data-Efficient Deep Learning Method for Image Classification Using Data Augmentation, Focal Cosine Loss, and Ensemble"} {"abstract": "Click-Through Rate (CTR) prediction is one of the most important and challenging in calculating advertisements and recommendation systems. To build a machine learning system with these data, it is important to properly model the interaction among features. However, many current works calculate the feature interactions in a simple way such as inner product and element-wise product. This paper aims to fully utilize the information between features and improve the performance of deep neural networks in the CTR prediction task. In this paper, we propose a Feature Interaction based Neural Network (FINN) which is able to model feature interaction via a 3-dimention relation tensor. FINN provides representations for the feature interactions on the the bottom layer and the non-linearity of neural network in modelling higher-order feature interactions. We evaluate our models on CTR prediction tasks compared with classical baselines and show that our deep FINN model outperforms other state-of-the-art deep models such as PNN and DeepFM. Evaluation results demonstrate that feature interaction contains significant information for better CTR prediction. It also indicates that our models can effectively learn the feature interactions, and achieve better performances in real-world datasets.", "field": [], "task": ["Click-Through Rate Prediction", "Recommendation Systems"], "method": [], "dataset": ["Criteo"], "metric": ["Log Loss", "AUC"], "title": "Feature Interaction based Neural Network for Click-Through Rate Prediction"} {"abstract": "Temporal action detection in long videos is an important problem.\nState-of-the-art methods address this problem by applying action classifiers on\nsliding windows. 
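For context on the FINN abstract above: the "simple" feature interactions it contrasts itself with are pairwise inner products of per-field embeddings, as used in factorization-machine-style CTR models. The sketch below shows that baseline interaction layer only; FINN's 3-dimensional relation tensor is not reproduced here.

    import torch

    def pairwise_inner_products(field_embeddings):
        # field_embeddings: [B, F, D] one embedding per feature field
        b, f, d = field_embeddings.shape
        sims = field_embeddings @ field_embeddings.transpose(1, 2)   # [B, F, F]
        iu = torch.triu_indices(f, f, offset=1)
        return sims[:, iu[0], iu[1]]                                 # [B, F*(F-1)/2]

    emb = torch.randn(32, 10, 16)                 # 10 feature fields, 16-d embeddings
    interactions = pairwise_inner_products(emb)   # [32, 45], fed to the deep part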
Although sliding windows may contain an identifiable portion\nof the actions, they may not necessarily cover the entire action instance,\nwhich would lead to inferior performance. We adapt a two-stage temporal action\ndetection pipeline with Cascaded Boundary Regression (CBR) model.\nClass-agnostic proposals and specific actions are detected respectively in the\nfirst and the second stage. CBR uses temporal coordinate regression to refine\nthe temporal boundaries of the sliding windows. The salient aspect of the\nrefinement process is that, inside each stage, the temporal boundaries are\nadjusted in a cascaded way by feeding the refined windows back to the system\nfor further boundary refinement. We test CBR on THUMOS-14 and TVSeries, and\nachieve state-of-the-art performance on both datasets. The performance gain is\nespecially remarkable under high IoU thresholds, e.g. map@tIoU=0.5 on THUMOS-14\nis improved from 19.0% to 31.0%.", "field": [], "task": ["Action Detection", "Regression"], "method": [], "dataset": ["THUMOS\u201914"], "metric": ["mAP IOU@0.6", "mAP IOU@0.7", "mAP IOU@0.5", "mAP IOU@0.2", "mAP IOU@0.4", "mAP IOU@0.3", "mAP IOU@0.1"], "title": "Cascaded Boundary Regression for Temporal Action Detection"} {"abstract": "This work considers the problem of computing distances between structured objects such as undirected graphs, seen as probability distributions in a specific metric space. We consider a new transportation distance (i.e. that minimizes a total cost of transporting probability masses) that unveils the geometric nature of the structured objects space. Unlike Wasserstein or Gromov-Wasserstein metrics that focus solely and respectively on features (by considering a metric in the feature space) or structure (by seeing structure as a metric space), our new distance exploits jointly both information, and is consequently called Fused Gromov-Wasserstein (FGW). After discussing its properties and computational aspects, we show results on a graph classification task, where our method outperforms both graph kernels and deep graph convolutional networks. Exploiting further on the metric properties of FGW, interesting geometric objects such as Fr\\'echet means or barycenters of graphs are illustrated and discussed in a clustering context.", "field": [], "task": ["Graph Classification", "Graph Clustering", "Time Series"], "method": [], "dataset": ["PROTEINS", "MUTAG", "ENZYMES", "NCI1"], "metric": ["Accuracy"], "title": "Optimal Transport for structured data with application on graphs"} {"abstract": "We present models for encoding sentences into embedding vectors that\nspecifically target transfer learning to other NLP tasks. The models are\nefficient and result in accurate performance on diverse transfer tasks. Two\nvariants of the encoding models allow for trade-offs between accuracy and\ncompute resources. For both variants, we investigate and report the\nrelationship between model complexity, resource consumption, the availability\nof transfer task training data, and task performance. Comparisons are made with\nbaselines that use word level transfer learning via pretrained word embeddings\nas well as baselines do not use any transfer learning. We find that transfer\nlearning using sentence embeddings tends to outperform word level transfer.\nWith transfer learning via sentence embeddings, we observe surprisingly good\nperformance with minimal amounts of supervised training data for a transfer\ntask. 
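The cascade in the Cascaded Boundary Regression abstract above repeatedly pools features inside the current temporal window, predicts start/end offsets, applies them and feeds the refined window back in. The toy loop below compresses this into a few lines; the pooling, the offset parameterization and the two-stage class-agnostic/class-specific design are heavily simplified.

    import torch

    def cascaded_refine(snippet_features, window, regressor, steps=2):
        start, end = window
        for _ in range(steps):
            seg = snippet_features[int(start):max(int(start) + 1, int(end))]
            pooled = seg.mean(dim=0)                   # average-pool the window's snippets
            d_start, d_end = regressor(pooled)         # predicted boundary offsets (in snippets)
            start, end = start + float(d_start), end + float(d_end)
        return start, end

    feats = torch.randn(300, 256)                      # 300 video snippets, 256-d features
    reg = torch.nn.Linear(256, 2)                      # toy offset regressor
    refined = cascaded_refine(feats, (120.0, 180.0), reg)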
We obtain encouraging results on Word Embedding Association Tests (WEAT)\ntargeted at detecting model bias. Our pre-trained sentence encoding models are\nmade freely available for download and on TF Hub.", "field": [], "task": ["Conversational Response Selection", "Semantic Textual Similarity", "Sentence Embeddings", "Sentiment Analysis", "Subjectivity Analysis", "Text Classification", "Transfer Learning", "Word Embeddings"], "method": [], "dataset": ["CR", "SST-2 Binary classification", "PolyAI Reddit", "MR", "STS Benchmark", "TREC-6", "SUBJ", "MPQA"], "metric": ["Error", "1-of-100 Accuracy", "Pearson Correlation", "Accuracy"], "title": "Universal Sentence Encoder"} {"abstract": "Convolutional networks are the de-facto standard for analysing spatio-temporal\ndata such as images, videos, 3D shapes, etc. Whilst some of this data is\nnaturally dense (for instance, photos), many other data sources are inherently\nsparse. Examples include pen-strokes forming on a piece of paper, or (colored)\n3D point clouds that were obtained using a LiDAR scanner or RGB-D camera.\nStandard \"dense\" implementations of convolutional networks are very inefficient\nwhen applied on such sparse data. We introduce a sparse convolutional operation\ntailored to processing sparse data that differs from prior work on sparse\nconvolutional networks in that it operates strictly on submanifolds, rather\nthan \"dilating\" the observation with every layer in the network. Our empirical\nanalysis of the resulting submanifold sparse convolutional networks shows that\nthey perform on par with state-of-the-art methods whilst requiring\nsubstantially less computation.", "field": [], "task": ["3D Part Segmentation"], "method": [], "dataset": ["ShapeNet-Part"], "metric": ["Instance Average IoU"], "title": "Submanifold Sparse Convolutional Networks"} {"abstract": "Triple extraction is an essential task in information extraction for natural language processing and knowledge graph construction. In this paper, we revisit the end-to-end triple extraction task for sequence generation. Since generative triple extraction may struggle to capture long-term dependencies and may generate unfaithful triples, we introduce a novel model, contrastive triple extraction with a generative transformer. Specifically, we introduce a single shared transformer module for encoder-decoder-based generation. To generate faithful results, we propose a novel triplet contrastive training objective. Moreover, we introduce two mechanisms to further improve model performance (i.e., batch-wise dynamic attention-masking and triple-wise calibration). Experimental results on three datasets (i.e., NYT, WebNLG, and MIE) show that our approach achieves better performance than that of baselines.", "field": [], "task": ["graph construction", "Relation Extraction"], "method": [], "dataset": ["NYT", "WebNLG"], "metric": ["F1"], "title": "Contrastive Triple Extraction with Generative Transformer"} {"abstract": "Dialogue Act (DA) classification is a challenging problem in dialogue\ninterpretation, which aims to attach semantic labels to utterances and\ncharacterize the speaker's intention. Currently, many existing approaches\nformulate the DA classification problem ranging from multi-class classification to\nstructured prediction, which suffer from two limitations: a) these methods are\neither handcrafted feature-based or have limited memories. b) adversarial\nexamples can't be correctly classified by traditional training methods. 
To\naddress these issues, in this paper we first cast the problem as a question\nanswering problem and propose an improved dynamic memory network with a\nhierarchical pyramidal utterance encoder. Moreover, we apply adversarial\ntraining to train our proposed model. We evaluate our model on two public\ndatasets, i.e., Switchboard dialogue act corpus and the MapTask corpus.\nExtensive experiments show that our proposed model is not only robust, but also\nachieves better performance when compared with some state-of-the-art baselines.", "field": [], "task": ["Dialogue Act Classification", "Dialogue Interpretation", "Structured Prediction"], "method": [], "dataset": ["Switchboard corpus"], "metric": ["Accuracy"], "title": "Improved Dynamic Memory Network for Dialogue Act Classification with Adversarial Training"} {"abstract": "Multi-hop reading comprehension focuses on one type of factoid question,\nwhere a system needs to properly integrate multiple pieces of evidence to\ncorrectly answer a question. Previous work approximates global evidence with\nlocal coreference information, encoding coreference chains with DAG-styled GRU\nlayers within a gated-attention reader. However, coreference is limited in\nproviding information for rich inference. We introduce a new method for better\nconnecting global evidence, which forms more complex graphs compared to DAGs.\nTo perform evidence integration on our graphs, we investigate two recent graph\nneural networks, namely graph convolutional network (GCN) and graph recurrent\nnetwork (GRN). Experiments on two standard datasets show that richer global\ninformation leads to better answers. Our method performs better than all\npublished results on these datasets.", "field": [], "task": ["Multi-Hop Reading Comprehension", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["COMPLEXQUESTIONS", "WikiHop"], "metric": ["F1", "Test"], "title": "Exploring Graph-structured Passage Representation for Multi-hop Reading Comprehension with Graph Neural Networks"} {"abstract": "Most Reading Comprehension methods limit themselves to queries which can be\nanswered using a single sentence, paragraph, or document. Enabling models to\ncombine disjoint pieces of textual evidence would extend the scope of machine\ncomprehension methods, but currently there exist no resources to train and test\nthis capability. We propose a novel task to encourage the development of models\nfor text understanding across multiple documents and to investigate the limits\nof existing methods. In our task, a model learns to seek and combine evidence -\neffectively performing multi-hop (alias multi-step) inference. We devise a\nmethodology to produce datasets for this task, given a collection of\nquery-answer pairs and thematically linked documents. Two datasets from\ndifferent domains are induced, and we identify potential pitfalls and devise\ncircumvention strategies. We evaluate two previously proposed competitive\nmodels and find that one can integrate information across documents. However,\nboth models struggle to select relevant information, as providing documents\nguaranteed to be relevant greatly improves their performance. 
While the models\noutperform several strong baselines, their best accuracy reaches 42.9% compared\nto human performance at 74.0% - leaving ample room for improvement.", "field": [], "task": ["Multi-Hop Reading Comprehension", "Reading Comprehension"], "method": [], "dataset": ["WikiHop"], "metric": ["Test"], "title": "Constructing Datasets for Multi-hop Reading Comprehension Across Documents"} {"abstract": "In statistical relational learning, the link prediction problem is key to\nautomatically understand the structure of large knowledge bases. As in previous\nstudies, we propose to solve this problem through latent factorization.\nHowever, here we make use of complex valued embeddings. The composition of\ncomplex embeddings can handle a large variety of binary relations, among them\nsymmetric and antisymmetric relations. Compared to state-of-the-art models such\nas Neural Tensor Network and Holographic Embeddings, our approach based on\ncomplex embeddings is arguably simpler, as it only uses the Hermitian dot\nproduct, the complex counterpart of the standard dot product between real\nvectors. Our approach is scalable to large datasets as it remains linear in\nboth space and time, while consistently outperforming alternative approaches on\nstandard link prediction benchmarks.", "field": [], "task": ["Link Prediction", "Relational Reasoning"], "method": [], "dataset": ["WN18RR", "WN18", "FB15k-237"], "metric": ["Hits@10", "MRR", "Hits@3", "Hits@1"], "title": "Complex Embeddings for Simple Link Prediction"} {"abstract": "Text classification is a challenging problem which aims to identify the\ncategory of texts. Recently, Capsule Networks (CapsNets) are proposed for image\nclassification. It has been shown that CapsNets have several advantages over\nConvolutional Neural Networks (CNNs), while, their validity in the domain of\ntext has less been explored. An effective method named deep compositional code\nlearning has been proposed lately. This method can save many parameters about\nword embeddings without any significant sacrifices in performance. In this\npaper, we introduce the Compositional Coding (CC) mechanism between capsules,\nand we propose a new routing algorithm, which is based on k-means clustering\ntheory. Experiments conducted on eight challenging text classification datasets\nshow the proposed method achieves competitive accuracy compared to the\nstate-of-the-art approach with significantly fewer parameters.", "field": [], "task": ["Sentiment Analysis", "Text Classification"], "method": [], "dataset": ["Yelp Fine-grained classification", "Amazon Review Polarity", "Yelp Binary classification", "Yahoo! Answers", "DBpedia", "Amazon Review Full", "AG News", "Sogou News"], "metric": ["Error", "Accuracy"], "title": "Compositional coding capsule network with k-means routing for text classification"} {"abstract": "Most of the currently successful source separation techniques use the magnitude spectrogram as input, and are therefore by default omitting part of the signal: the phase. To avoid omitting potentially useful information, we study the viability of using end-to-end models for music source separation --- which take into account all the information available in the raw audio signal, including the phase. Although during the last decades end-to-end music source separation has been considered almost unattainable, our results confirm that waveform-based models can perform similarly (if not better) than a spectrogram-based deep learning model. 
Namely: a Wavenet-based model we propose and Wave-U-Net can outperform DeepConvSep, a recent spectrogram-based deep learning model.", "field": [], "task": ["Music Source Separation"], "method": [], "dataset": ["MUSDB18"], "metric": ["SDR (vocals)", "SDR (other)", "SDR (drums)", "SDR (bass)"], "title": "End-to-end music source separation: is it possible in the waveform domain?"} {"abstract": "Graph convolutional networks (GCNs) have been successfully applied in node classification tasks of network mining. However, most of these models based on neighborhood aggregation are usually shallow and lack the \"graph pooling\" mechanism, which prevents the model from obtaining adequate global information. In order to increase the receptive field, we propose a novel deep Hierarchical Graph Convolutional Network (H-GCN) for semi-supervised node classification. H-GCN first repeatedly aggregates structurally similar nodes to hyper-nodes and then refines the coarsened graph to the original to restore the representation for each node. Instead of merely aggregating one- or two-hop neighborhood information, the proposed coarsening procedure enlarges the receptive field for each node, hence more global information can be captured. The proposed H-GCN model shows strong empirical performance on various public benchmark graph datasets, outperforming state-of-the-art methods and acquiring up to 5.9% performance improvement in terms of accuracy. In addition, when only a few labeled samples are provided, our model gains substantial improvements.", "field": [], "task": ["Node Classification"], "method": [], "dataset": ["Cora with Public Split: fixed 20 nodes per class", "CiteSeer with Public Split: fixed 20 nodes per class", "PubMed with Public Split: fixed 20 nodes per class"], "metric": ["Accuracy"], "title": "Hierarchical Graph Convolutional Networks for Semi-supervised Node Classification"} {"abstract": "Collaborative filtering is widely used in modern recommender systems. Recent research shows that variational autoencoders (VAEs) yield state-of-the-art performance by integrating flexible representations from deep neural networks into latent variable models, mitigating limitations of traditional linear factor models. VAEs are typically trained by maximizing the likelihood (MLE) of users interacting with ground-truth items. While simple and often effective, MLE-based training does not directly maximize the recommendation-quality metrics one typically cares about, such as top-N ranking. In this paper we investigate new methods for training collaborative filtering models based on actor-critic reinforcement learning, to directly optimize the non-differentiable quality metrics of interest. Specifically, we train a critic network to approximate ranking-based metrics, and then update the actor network (represented here by a VAE) to directly optimize against the learned metrics. In contrast to traditional learning-to-rank methods that require to re-run the optimization procedure for new lists, our critic-based method amortizes the scoring process with a neural network, and can directly provide the (approximate) ranking scores for new lists. Empirically, we show that the proposed methods outperform several state-of-the-art baselines, including recently-proposed deep learning approaches, on three large-scale real-world datasets. 
The code to reproduce the experimental results and figure plots is on Github: https://github.com/samlobel/RaCT_CF", "field": [], "task": ["Latent Variable Models", "Learning-To-Rank", "Recommendation Systems"], "method": [], "dataset": ["Netflix", "MovieLens 20M", "Million Song Dataset"], "metric": ["Recall@50", "Recall@20", "nDCG@100"], "title": "Towards Amortized Ranking-Critical Training for Collaborative Filtering"} {"abstract": "In this paper, we propose a wide contextual residual network (WCRN) with active learning (AL) for remote sensing image (RSI)\r\nclassification. Although ResNets have achieved great success in various applications (e.g. RSI classification), their performance is limited by the requirement of abundant labeled samples. As it is very difficult and expensive to obtain class labels in the real world, we integrate the proposed WCRN with AL to improve its generalization by using the most informative training samples. Specifically, we first design\r\na wide contextual residual network for RSI classification. We then integrate it with AL to achieve good machine generalization with a\r\nlimited number of training samples. Experimental results on the University of Pavia and Flevoland datasets demonstrate that the proposed WCRN with AL can significantly reduce the need for labeled samples.", "field": [], "task": ["Active Learning", "Classification Of Hyperspectral Images", "Hyperspectral Image Classification", "Image Classification", "Remote Sensing Image Classification"], "method": [], "dataset": ["Pavia University"], "metric": ["Overall Accuracy", "Accuracy"], "title": "Wide Contextual Residual Network with Active Learning for Remote Sensing Image Classification"} {"abstract": "Named entity recognition (NER) is one of the best studied tasks in natural language processing. However, most approaches are not capable of handling nested structures which are common in many applications. In this paper we introduce a novel neural network architecture that first merges tokens and/or entities into entities forming nested structures, and then labels each of them independently. Unlike previous work, our merge and label approach predicts real-valued instead of discrete segmentation structures, which allows it to combine word and nested entity embeddings while maintaining differentiability, smoothly grouping entities into single vectors across multiple levels. We evaluate our approach using the ACE 2005 Corpus, where it achieves state-of-the-art F1 of 74.6, further improved with contextual embeddings (BERT) to 82.4, an overall improvement of close to 8 F1 points over previous approaches trained on the same data. Additionally we compare it against BiLSTM-CRFs, the dominant approach for flat NER structures, demonstrating that its ability to predict nested structures does not impact performance in simpler cases.", "field": [], "task": ["Entity Embeddings", "Named Entity Recognition", "Nested Mention Recognition", "Nested Named Entity Recognition"], "method": [], "dataset": ["ACE 2005"], "metric": ["F1"], "title": "Merge and Label: A novel neural network architecture for nested NER"} {"abstract": "We propose a novel method for unsupervised image-to-image translation, which incorporates a new attention module and a new learnable normalization function in an end-to-end manner. The attention module guides our model to focus on more important regions distinguishing between source and target domains based on the attention map obtained by the auxiliary classifier. 
Unlike previous attention-based method which cannot handle the geometric changes between domains, our model can translate both images requiring holistic changes and images requiring large shape changes. Moreover, our new AdaLIN (Adaptive Layer-Instance Normalization) function helps our attention-guided model to flexibly control the amount of change in shape and texture by learned parameters depending on datasets. Experimental results show the superiority of the proposed method compared to the existing state-of-the-art models with a fixed network architecture and hyper-parameters. Our code and datasets are available at https://github.com/taki0112/UGATIT or https://github.com/znxlwm/UGATIT-pytorch.", "field": [], "task": ["Fundus to Angiography Generation", "Image-to-Image Translation", "Unsupervised Image-To-Image Translation"], "method": [], "dataset": ["Fundus Fluorescein Angiogram Photographs & Colour Fundus Images of Diabetic Patients", "vangogh2photo", "photo2portrait", "portrait2photo", "photo2vangogh", "selfie-to-anime", "horse2zebra", "cat2dog", "zebra2horse", "dog2cat", "anime-to-selfie"], "metric": ["Kernel Inception Distance", "FID"], "title": "U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation"} {"abstract": "Generative adversarial networks conditioned on textual image descriptions are capable of generating realistic-looking images. However, current methods still struggle to generate images based on complex image captions from a heterogeneous domain. Furthermore, quantitatively evaluating these text-to-image models is challenging, as most evaluation metrics only judge image quality but not the conformity between the image and its caption. To address these challenges we introduce a new model that explicitly models individual objects within an image and a new evaluation metric called Semantic Object Accuracy (SOA) that specifically evaluates images given an image caption. The SOA uses a pre-trained object detector to evaluate if a generated image contains objects that are mentioned in the image caption, e.g. whether an image generated from \"a car driving down the street\" contains a car. We perform a user study comparing several text-to-image models and show that our SOA metric ranks the models the same way as humans, whereas other metrics such as the Inception Score do not. Our evaluation also shows that models which explicitly model objects outperform models which only model global image characteristics.", "field": [], "task": ["Image Captioning", "Image Generation", "Text-to-Image Generation"], "method": [], "dataset": ["COCO"], "metric": ["Inception score", "SOA-C", "FID"], "title": "Semantic Object Accuracy for Generative Text-to-Image Synthesis"} {"abstract": "Distributional Reinforcement Learning (RL) differs from traditional RL in that, rather than the expectation of total returns, it estimates distributions and has achieved state-of-the-art performance on Atari Games. The key challenge in practical distributional RL algorithms lies in how to parameterize estimated distributions so as to better approximate the true continuous distribution. Existing distributional RL algorithms parameterize either the probability side or the return value side of the distribution function, leaving the other side uniformly fixed as in C51, QR-DQN or randomly sampled as in IQN. 
In this paper, we propose fully parameterized quantile function that parameterizes both the quantile fraction axis (i.e., the x-axis) and the value axis (i.e., y-axis) for distributional RL. Our algorithm contains a fraction proposal network that generates a discrete set of quantile fractions and a quantile value network that gives corresponding quantile values. The two networks are jointly trained to find the best approximation of the true distribution. Experiments on 55 Atari Games show that our algorithm significantly outperforms existing distributional RL algorithms and creates a new record for the Atari Learning Environment for non-distributed agents.", "field": [], "task": ["Atari Games", "Distributional Reinforcement Learning"], "method": [], "dataset": ["Atari 2600 Amidar", "Atari 2600 River Raid", "Atari 2600 Alien", "Atari 2600 Space Invaders", "Atari 2600 Phoenix", "Atari 2600 Gravitar", "Atari 2600 Ice Hockey", "Atari 2600 Bowling", "Atari 2600 Berzerk", "Atari 2600 Asterix", "Atari 2600 Breakout", "Atari 2600 Crazy Climber", "Atari 2600 James Bond", "Atari 2600 Robotank", "Atari 2600 Asteroids", "Atari 2600 Fishing Derby", "Atari 2600 Ms. Pacman", "Atari 2600 Frostbite", "Atari 2600 Star Gunner", "Atari 2600 Battle Zone", "Atari 2600 Chopper Command", "Atari 2600 Kung-Fu Master", "Atari 2600 HERO", "Atari 2600 Wizard of Wor", "Atari 2600 Skiing"], "metric": ["Score"], "title": "Fully Parameterized Quantile Function for Distributional Reinforcement Learning"} {"abstract": "Face detection and alignment in unconstrained environment is always deployed on edge devices which have limited memory storage and low computing power. This paper proposes a one-stage method named CenterFace to simultaneously predict facial box and landmark location with real-time speed and high accuracy. The proposed method also belongs to the anchor free category. This is achieved by: (a) learning face existing possibility by the semantic maps, (b) learning bounding box, offsets and five landmarks for each position that potentially contains a face. Specifically, the method can run in real-time on a single CPU core and 200 FPS using NVIDIA 2080TI for VGA-resolution images, and can simultaneously achieve superior accuracy (WIDER FACE Val/Test-Easy: 0.935/0.932, Medium: 0.924/0.921, Hard: 0.875/0.873 and FDDB discontinuous: 0.980, continuous: 0.732). A demo of CenterFace can be available at https://github.com/Star-Clouds/CenterFace.", "field": [], "task": ["Face Detection"], "method": [], "dataset": ["WIDER Face (Hard)", "WIDER Face (Medium)", "WIDER Face (Easy)"], "metric": ["AP"], "title": "CenterFace: Joint Face Detection and Alignment Using Face as Point"} {"abstract": "The existing action tubelet detectors often depend on heuristic anchor design and placement, which might be computationally expensive and sub-optimal for precise localization. In this paper, we present a conceptually simple, computationally efficient, and more precise action tubelet detection framework, termed as MovingCenter Detector (MOC-detector), by treating an action instance as a trajectory of moving points. 
Based on the insight that movement information could simplify and assist action tubelet detection, our MOC-detector is composed of three crucial head branches: (1) Center Branch for instance center detection and action recognition, (2) Movement Branch for movement estimation at adjacent frames to form trajectories of moving points, (3) Box Branch for spatial extent detection by directly regressing bounding box size at each estimated center. These three branches work together to generate the tubelet detection results, which could be further linked to yield video-level tubes with a matching strategy. Our MOC-detector outperforms the existing state-of-the-art methods for both metrics of frame-mAP and video-mAP on the JHMDB and UCF101-24 datasets. The performance gap is more evident for higher video IoU, demonstrating that our MOC-detector is particularly effective for more precise action detection. We provide the code at https://github.com/MCG-NJU/MOC-Detector.", "field": [], "task": ["Action Detection", "Action Recognition"], "method": [], "dataset": ["UCF101-24"], "metric": ["Video-mAP 0.5", "Video-mAP 0.75", "mAP", "Video-mAP 0.2"], "title": "Actions as Moving Points"} {"abstract": "Learning diverse features is key to the success of person re-identification. Various part-based methods have been extensively proposed for learning local representations, which, however, are still inferior to the best-performing methods for person re-identification. This paper proposes to construct a strong lightweight network architecture, termed PLR-OSNet, based on the idea of Part-Level feature Resolution over the Omni-Scale Network (OSNet) for achieving feature diversity. The proposed PLR-OSNet has two branches, one branch for global feature representation and the other branch for local feature representation. The local branch employs a uniform partition strategy for part-level feature resolution but produces only a single identity-prediction loss, which is in sharp contrast to the existing part-based methods. Empirical evidence demonstrates that the proposed PLR-OSNet achieves state-of-the-art performance on popular person Re-ID datasets, including Market1501, DukeMTMC-reID and CUHK03, despite its small model size.", "field": [], "task": ["Person Re-Identification"], "method": [], "dataset": ["CUHK03 detected", "DukeMTMC-reID", "Market-1501", "CUHK03 labeled"], "metric": ["Rank-1", "MAP"], "title": "Learning Diverse Features with Part-Level Resolution for Person Re-Identification"} {"abstract": "Word Sense Disambiguation (WSD) has been a basic and on-going issue since its introduction in natural language processing (NLP) community. Its application lies in many different areas including sentiment analysis, Information Retrieval (IR), machine translation and knowledge graph construction. Solutions to WSD are mostly categorized into supervised and knowledge-based approaches. In this paper, a knowledge-based method is proposed, modeling the problem with semantic space and semantic path hidden behind a given sentence. The approach relies on the well-known Knowledge Base (KB) named WordNet and models the semantic space and semantic path by Latent Semantic Analysis (LSA) and PageRank respectively. 
Experiments have proven the method\u2019s effectiveness, achieving state-of-the-art performance on several WSD datasets.", "field": [], "task": ["graph construction", "Information Retrieval", "Machine Translation", "Sentiment Analysis", "Word Sense Disambiguation"], "method": [], "dataset": ["Knowledge-based:"], "metric": ["Senseval 2", "Senseval 3", "SemEval 2013", "All", "SemEval 2007", "SemEval 2015"], "title": "Word Sense Disambiguation: A comprehensive knowledge exploitation framework"} {"abstract": "Word Sense Disambiguation is a long-standing task in Natural Language Processing, lying at the core of human language understanding. However, the evaluation of automatic systems has been problematic, mainly due to the lack of a reliable evaluation framework. In this paper we develop a unified evaluation framework and analyze the performance of various Word Sense Disambiguation systems in a fair setup. The results show that supervised systems clearly outperform knowledge-based models. Among the supervised systems, a linear classifier trained on conventional local features still proves to be a hard baseline to beat. Nonetheless, recent approaches exploiting neural networks on unlabeled corpora achieve promising results, surpassing this hard baseline in most test sets.", "field": [], "task": ["Word Sense Disambiguation"], "method": [], "dataset": ["Knowledge-based:"], "metric": ["Senseval 2", "Senseval 3", "SemEval 2013", "All", "SemEval 2007", "SemEval 2015"], "title": "Word Sense Disambiguation: A Unified Evaluation Framework and Empirical Comparison"} {"abstract": "This paper exploits the intrinsic features of urban-scene images and proposes a general add-on module, called height-driven attention networks (HANet), for improving semantic segmentation for urban-scene images. It emphasizes informative features or classes selectively according to the vertical position of a pixel. The pixel-wise class distributions are significantly different from each other among horizontally segmented sections in the urban-scene images. Likewise, urban-scene images have their own distinct characteristics, but most semantic segmentation networks do not reflect such unique attributes in the architecture. The proposed network architecture incorporates the capability of exploiting the attributes to handle the urban scene dataset effectively. We validate the consistent performance (mIoU) increase of various semantic segmentation models on two datasets when HANet is adopted. This extensive quantitative analysis demonstrates that adding our module to existing models is easy and cost-effective. Our method achieves a new state-of-the-art performance on the Cityscapes benchmark with a large margin among ResNet-101 based segmentation models. Also, we show that the proposed model is coherent with the facts observed in the urban scene by visualizing and interpreting the attention map. Our code and trained models are publicly available at https://github.com/shachoi/HANet", "field": [], "task": ["Scene Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["Cityscapes test"], "metric": ["Mean IoU (class)"], "title": "Cars Can't Fly up in the Sky: Improving Urban-Scene Segmentation via Height-driven Attention Networks"} {"abstract": "We aim to provide a computationally cheap yet effective approach for fine-grained image classification (FGIC) in this letter. 
Unlike previous methods that rely on complex part localization modules, our approach learns fine-grained features by enhancing the semantics of sub-features of a global feature. Specifically, we first achieve the sub-feature semantic by arranging feature channels of a CNN into different groups through channel permutation. Meanwhile, to enhance the discriminability of sub-features, the groups are guided to be activated on object parts with strong discriminability by a weighted combination regularization. Our approach is parameter parsimonious and can be easily integrated into the backbone model as a plug-and-play module for end-to-end training with only image-level supervision. Experiments verified the effectiveness of our approach and validated its comparable performance to the state-of-the-art methods. Code is available at https://github.com/cswluo/SEF", "field": [], "task": ["Fine-Grained Image Classification", "Image Classification"], "method": [], "dataset": [" CUB-200-2011", "Stanford Dogs", "Stanford Cars", "FGVC Aircraft"], "metric": ["Accuracy"], "title": "Learning Semantically Enhanced Feature for Fine-Grained Image Classification"} {"abstract": "Multi-view stereo (MVS) is the golden mean between the accuracy of active depth sensing and the practicality of monocular depth estimation. Cost volume based approaches employing 3D convolutional neural networks (CNNs) have considerably improved the accuracy of MVS systems. However, this accuracy comes at a high computational cost which impedes practical adoption. Distinct from cost volume approaches, we propose an efficient depth estimation approach by first (a) detecting and evaluating descriptors for interest points, then (b) learning to match and triangulate a small set of interest points, and finally (c) densifying this sparse set of 3D points using CNNs. An end-to-end network efficiently performs all three steps within a deep learning framework and trained with intermediate 2D image and 3D geometric supervision, along with depth supervision. Crucially, our first step complements pose estimation using interest point detection and descriptor learning. We demonstrate state-of-the-art results on depth estimation with lower compute for different scene lengths. Furthermore, our method generalizes to newer environments and the descriptors output by our network compare favorably to strong baselines. Code is available at https://github.com/magicleap/DELTAS", "field": [], "task": ["Depth Estimation", "Interest Point Detection", "Monocular Depth Estimation", "Pose Estimation"], "method": [], "dataset": ["ScanNetV2"], "metric": ["Average mean absolute error", "absolute relative error"], "title": "DELTAS: Depth Estimation by Learning Triangulation And densification of Sparse points"} {"abstract": "In this paper, we tackle the new Cross-Domain Few-Shot Learning benchmark proposed by the CVPR 2020 Challenge. To this end, we build upon state-of-the-art methods in domain adaptation and few-shot learning to create a system that can be trained to perform both tasks. Inspired by the need to create models designed to be fine-tuned, we explore the integration of transfer-learning (fine-tuning) with meta-learning algorithms, to train a network that has specific layers that are designed to be adapted at a later fine-tuning stage. To do so, we modify the episodic training process to include a first-order MAML-based meta-learning algorithm, and use a Graph Neural Network model as the subsequent meta-learning module. 
We find that our proposed method helps to boost accuracy significantly, especially when combined with data augmentation. In our final results, we combine the novel method with the baseline method in a simple ensemble, and achieve an average accuracy of 73.78% on the benchmark. This is a 6.51% improvement over existing benchmarks that were trained solely on miniImagenet.", "field": [], "task": ["Cross-Domain Few-Shot", "cross-domain few-shot learning", "Data Augmentation", "Domain Adaptation", "Few-Shot Learning", "Meta-Learning", "Transfer Learning"], "method": [], "dataset": ["miniImagenet"], "metric": ["Accuracy (%)"], "title": "Cross-Domain Few-Shot Learning with Meta Fine-Tuning"} {"abstract": "In this paper, we demonstrate that by utilizing sparse word representations, it becomes possible to surpass the results of more complex task-specific models on the task of fine-grained all-words word sense disambiguation. Our proposed algorithm relies on an overcomplete set of semantic basis vectors that allows us to obtain sparse contextualized word representations. We introduce such an information theory-inspired synset representation based on the co-occurrence of word senses and non-zero coordinates for word forms which allows us to achieve an aggregated F-score of 78.8 over a combination of five standard word sense disambiguating benchmark datasets. We also demonstrate the general applicability of our proposed framework by evaluating it towards part-of-speech tagging on four different treebanks. Our results indicate a significant improvement over the application of the dense word representations.", "field": [], "task": ["Part-Of-Speech Tagging", "Word Sense Disambiguation"], "method": [], "dataset": ["Supervised:"], "metric": ["Senseval 2", "Senseval 3", "SemEval 2013", "SemEval 2007", "SemEval 2015"], "title": "Sparsity Makes Sense: Word Sense Disambiguation Using Sparse Contextualized Word Representations"} {"abstract": "A major obstacle in Word Sense Disambiguation (WSD) is that word senses are not uniformly distributed, causing existing models to generally perform poorly on senses that are either rare or unseen during training. We propose a bi-encoder model that independently embeds (1) the target word with its surrounding context and (2) the dictionary definition, or gloss, of each sense. The encoders are jointly optimized in the same representation space, so that sense disambiguation can be performed by finding the nearest sense embedding for each target word embedding. Our system outperforms previous state-of-the-art models on English all-words WSD; these gains predominantly come from improved performance on rare senses, leading to a 31.1{\\%} error reduction on less frequent senses over prior work. This demonstrates that rare senses can be more effectively disambiguated by modeling their definitions.", "field": [], "task": ["Word Sense Disambiguation"], "method": [], "dataset": ["Supervised:"], "metric": ["Senseval 2", "Senseval 3", "SemEval 2013", "SemEval 2007", "SemEval 2015"], "title": "Moving Down the Long Tail of Word Sense Disambiguation with Gloss Informed Bi-encoders"} {"abstract": "Neural network approaches to Named-Entity Recognition reduce the need for\ncarefully hand-crafted features. While some features do remain in\nstate-of-the-art systems, lexical features have been mostly discarded, with the\nexception of gazetteers. In this work, we show that this is unfair: lexical\nfeatures are actually quite useful. 
We propose to embed words and entity types\ninto a low-dimensional vector space we train from annotated data produced by\ndistant supervision thanks to Wikipedia. From this, we compute - offline - a\nfeature vector representing each word. When used with a vanilla recurrent\nneural network model, this representation yields substantial improvements. We\nestablish a new state-of-the-art F1 score of 87.95 on ONTONOTES 5.0, while\nmatching state-of-the-art performance with a F1 score of 91.73 on the\nover-studied CONLL-2003 dataset.", "field": [], "task": ["Named Entity Recognition"], "method": [], "dataset": ["Ontonotes v5 (English)", "CoNLL 2003 (English)"], "metric": ["F1"], "title": "Robust Lexical Features for Improved Neural Network Named-Entity Recognition"} {"abstract": "Recent advances in facial landmark detection achieve success by learning\ndiscriminative features from rich deformation of face shapes and poses. Besides\nthe variance of faces themselves, the intrinsic variance of image styles, e.g.,\ngrayscale vs. color images, light vs. dark, intense vs. dull, and so on, has\nconstantly been overlooked. This issue becomes inevitable as increasing web\nimages are collected from various sources for training neural networks. In this\nwork, we propose a style-aggregated approach to deal with the large intrinsic\nvariance of image styles for facial landmark detection. Our method transforms\noriginal face images to style-aggregated images by a generative adversarial\nmodule. The proposed scheme uses the style-aggregated image to maintain face\nimages that are more robust to environmental changes. Then the original face\nimages accompanying with style-aggregated ones play a duet to train a landmark\ndetector which is complementary to each other. In this way, for each face, our\nmethod takes two images as input, i.e., one in its original style and the other\nin the aggregated style. In experiments, we observe that the large variance of\nimage styles would degenerate the performance of facial landmark detectors.\nMoreover, we show the robustness of our method to the large variance of image\nstyles by comparing to a variant of our approach, in which the generative\nadversarial module is removed, and no style-aggregated images are used. Our\napproach is demonstrated to perform well when compared with state-of-the-art\nalgorithms on benchmark datasets AFLW and 300-W. Code is publicly available on\nGitHub: https://github.com/D-X-Y/SAN", "field": [], "task": ["Facial Landmark Detection"], "method": [], "dataset": ["300W", "AFLW-Full", "AFLW-Front"], "metric": ["NME", "Mean NME "], "title": "Style Aggregated Network for Facial Landmark Detection"} {"abstract": "We apply basic statistical reasoning to signal reconstruction by machine\nlearning -- learning to map corrupted observations to clean signals -- with a\nsimple and powerful conclusion: it is possible to learn to restore images by\nonly looking at corrupted examples, at performance at and sometimes exceeding\ntraining using clean data, without explicit image priors or likelihood models\nof the corruption. 
In practice, we show that a single model learns photographic\nnoise removal, denoising synthetic Monte Carlo images, and reconstruction of\nundersampled MRI scans -- all corrupted by different processes -- based on\nnoisy data only.", "field": [], "task": ["Denoising", "Image Restoration", "Salt-And-Pepper Noise Removal"], "method": [], "dataset": ["BSD300 Noise Level 50%", "Kodak24 Noise Level 30%", "BSD300 Noise Level 70%", "Kodak24 Noise Level 70%", "BSD300 Noise Level 30%", "Kodak24 Noise Level 50%"], "metric": ["PSNR"], "title": "Noise2Noise: Learning Image Restoration without Clean Data"} {"abstract": "Dense depth cues are important and have wide applications in various computer vision tasks. In autonomous driving, LIDAR sensors are adopted to acquire depth measurements around the vehicle to perceive the surrounding environments. However, depth maps obtained by LIDAR are generally sparse because of its hardware limitation. The task of depth completion attracts increasing attention, which aims at generating a dense depth map from an input sparse depth map. To effectively utilize multi-scale features, we propose three novel sparsity-invariant operations, based on which, a sparsity-invariant multi-scale encoder-decoder network (HMS-Net) for handling sparse inputs and sparse feature maps is also proposed. Additional RGB features could be incorporated to further improve the depth completion performance. Our extensive experiments and component analysis on two public benchmarks, KITTI depth completion benchmark and NYU-depth-v2 dataset, demonstrate the effectiveness of the proposed approach. As of Aug. 12th, 2018, on KITTI depth completion leaderboard, our proposed model without RGB guidance ranks first among all peer-reviewed methods without using RGB information, and our model with RGB guidance ranks second among all RGB-guided methods.", "field": [], "task": ["Autonomous Driving", "Depth Completion"], "method": [], "dataset": ["KITTI Depth Completion"], "metric": ["MAE", "RMSE"], "title": "HMS-Net: Hierarchical Multi-scale Sparsity-invariant Network for Sparse Depth Completion"} {"abstract": "Novice programmers often struggle with the formal syntax of programming\nlanguages. To assist them, we design a novel programming language correction\nframework amenable to reinforcement learning. The framework allows an agent to\nmimic human actions for text navigation and editing. We demonstrate that the\nagent can be trained through self-exploration directly from the raw input, that\nis, program text itself, without any knowledge of the formal syntax of the\nprogramming language. We leverage expert demonstrations for one tenth of the\ntraining data to accelerate training. The proposed technique is evaluated on\n6975 erroneous C programs with typographic errors, written by students during\nan introductory programming course. 
Our technique fixes 14% more programs and\n29% more compiler error messages relative to those fixed by a state-of-the-art\ntool, DeepFix, which uses a fully supervised neural machine translation\napproach.", "field": [], "task": ["Machine Translation", "Program Repair"], "method": [], "dataset": ["DeepFix"], "metric": ["Average Success Rate"], "title": "Deep Reinforcement Learning for Programming Language Correction"} {"abstract": "Semi-supervised learning methods based on generative adversarial networks\n(GANs) obtained strong empirical results, but it is not clear 1) how the\ndiscriminator benefits from joint training with a generator, and 2) why good\nsemi-supervised classification performance and a good generator cannot be\nobtained at the same time. Theoretically, we show that given the discriminator\nobjective, good semisupervised learning indeed requires a bad generator, and\npropose the definition of a preferred generator. Empirically, we derive a novel\nformulation based on our analysis that substantially improves over feature\nmatching GANs, obtaining state-of-the-art results on multiple benchmark\ndatasets.", "field": [], "task": ["Semi-Supervised Image Classification"], "method": [], "dataset": ["CIFAR-10, 4000 Labels"], "metric": ["Accuracy"], "title": "Good Semi-supervised Learning that Requires a Bad GAN"} {"abstract": "In this work, we introduce the challenging problem of joint multi-person pose\nestimation and tracking of an unknown number of persons in unconstrained\nvideos. Existing methods for multi-person pose estimation in images cannot be\napplied directly to this problem, since it also requires to solve the problem\nof person association over time in addition to the pose estimation for each\nperson. We therefore propose a novel method that jointly models multi-person\npose estimation and tracking in a single formulation. To this end, we represent\nbody joint detections in a video by a spatio-temporal graph and solve an\ninteger linear program to partition the graph into sub-graphs that correspond\nto plausible body pose trajectories for each person. The proposed approach\nimplicitly handles occlusion and truncation of persons. Since the problem has\nnot been addressed quantitatively in the literature, we introduce a challenging\n\"Multi-Person PoseTrack\" dataset, and also propose a completely unconstrained\nevaluation protocol that does not make any assumptions about the scale, size,\nlocation or the number of persons. Finally, we evaluate the proposed approach\nand several baseline methods on our new dataset.", "field": [], "task": ["Multi-Person Pose Estimation", "Multi-Person Pose Estimation and Tracking", "Pose Estimation", "Pose Tracking"], "method": [], "dataset": ["Multi-Person PoseTrack"], "metric": ["MOTA", "Mean mAP", "MOTP"], "title": "PoseTrack: Joint Multi-Person Pose Estimation and Tracking"} {"abstract": "We present an approach to training neural networks to generate sequences\nusing actor-critic methods from reinforcement learning (RL). Current\nlog-likelihood training methods are limited by the discrepancy between their\ntraining and testing modes, as models must generate tokens conditioned on their\nprevious guesses rather than the ground-truth tokens. We address this problem\nby introducing a \\textit{critic} network that is trained to predict the value\nof an output token, given the policy of an \\textit{actor} network. 
This results\nin a training procedure that is much closer to the test phase, and allows us to\ndirectly optimize for a task-specific score such as BLEU. Crucially, since we\nleverage these techniques in the supervised learning setting rather than the\ntraditional RL setting, we condition the critic network on the ground-truth\noutput. We show that our method leads to improved performance on both a\nsynthetic task, and for German-English machine translation. Our analysis paves\nthe way for such methods to be applied in natural language generation tasks,\nsuch as machine translation, caption generation, and dialogue modelling.", "field": [], "task": ["Machine Translation", "Spelling Correction", "Text Generation"], "method": [], "dataset": ["IWSLT2015 German-English", "IWSLT2014 German-English", "IWSLT2015 English-German"], "metric": ["BLEU score"], "title": "An Actor-Critic Algorithm for Sequence Prediction"} {"abstract": "In this paper, we study the problem of question answering when reasoning over\nmultiple facts is required. We propose Query-Reduction Network (QRN), a variant\nof Recurrent Neural Network (RNN) that effectively handles both short-term\n(local) and long-term (global) sequential dependencies to reason over multiple\nfacts. QRN considers the context sentences as a sequence of state-changing\ntriggers, and reduces the original query to a more informed query as it\nobserves each trigger (context sentence) through time. Our experiments show\nthat QRN produces the state-of-the-art results in bAbI QA and dialog tasks, and\nin a real goal-oriented dialog dataset. In addition, QRN formulation allows\nparallelization on RNN's time axis, saving an order of magnitude in time\ncomplexity for training and inference.", "field": [], "task": ["Goal-Oriented Dialog", "Question Answering"], "method": [], "dataset": ["bAbi"], "metric": ["Accuracy (trained on 1k)", "Mean Error Rate", "Accuracy (trained on 10k)"], "title": "Query-Reduction Networks for Question Answering"} {"abstract": "A long-standing challenge in coreference resolution has been the\nincorporation of entity-level information - features defined over clusters of\nmentions instead of mention pairs. We present a neural network based\ncoreference system that produces high-dimensional vector representations for\npairs of coreference clusters. Using these representations, our system learns\nwhen combining clusters is desirable. We train the system with a\nlearning-to-search algorithm that teaches it which local decisions (cluster\nmerges) will lead to a high-scoring final coreference partition. The system\nsubstantially outperforms the current state-of-the-art on the English and\nChinese portions of the CoNLL 2012 Shared Task dataset despite using few\nhand-engineered features.", "field": [], "task": ["Coreference Resolution"], "method": [], "dataset": ["OntoNotes"], "metric": ["F1"], "title": "Improving Coreference Resolution by Learning Entity-Level Distributed Representations"} {"abstract": "Unsupervised methods for learning distributed representations of words are\nubiquitous in today's NLP research, but far less is known about the best ways\nto learn distributed phrase or sentence representations from unlabelled data.\nThis paper is a systematic comparison of models that learn such\nrepresentations. We find that the optimal approach depends critically on the\nintended application. 
Deeper, more complex models are preferable for\nrepresentations to be used in supervised systems, but shallow log-linear models\nwork best for building representation spaces that can be decoded with simple\nspatial distance metrics. We also propose two new unsupervised\nrepresentation-learning objectives designed to optimise the trade-off between\ntraining time, domain portability and performance.", "field": [], "task": ["Representation Learning", "Unsupervised Representation Learning"], "method": [], "dataset": ["SUBJ"], "metric": ["Accuracy"], "title": "Learning Distributed Representations of Sentences from Unlabelled Data"} {"abstract": "Semantic matching is of central importance to many natural language tasks\n\\cite{bordes2014semantic,RetrievalQA}. A successful matching algorithm needs to\nadequately model the internal structures of language objects and the\ninteraction between them. As a step toward this goal, we propose convolutional\nneural network models for matching two sentences, by adapting the convolutional\nstrategy in vision and speech. The proposed models not only nicely represent\nthe hierarchical structures of sentences with their layer-by-layer composition\nand pooling, but also capture the rich matching patterns at different levels.\nOur models are rather generic, requiring no prior knowledge on language, and\ncan hence be applied to matching tasks of different nature and in different\nlanguages. The empirical study demonstrates the\nefficacy of the proposed model on a variety of matching tasks and its\nsuperiority to competitor models.", "field": [], "task": ["Question Answering"], "method": [], "dataset": ["SemEvalCQA"], "metric": ["P@1", "MAP"], "title": "Convolutional Neural Network Architectures for Matching Natural Language Sentences"} {"abstract": "We address the problem of generating images across two drastically different views, namely ground (street) and aerial (overhead) views. Image synthesis by itself is a very challenging computer vision task and is even more so when generation is conditioned on an image in another view. Due to the difference in viewpoints, there is a small overlapping field of view and little common content between these two views. Here, we try to preserve the pixel information between the views so that the generated image is a realistic representation of the cross-view input image. For this, we propose to use homography as a guide to map the images between the views based on the common field of view to preserve the details in the input image. We then use generative adversarial networks to inpaint the missing regions in the transformed image and add realism to it. Our exhaustive evaluation and model comparison demonstrate that utilizing geometry constraints adds fine details to the generated images and can be a better approach for cross view image synthesis than purely pixel based synthesis methods.", "field": [], "task": ["Cross-View Image-to-Image Translation", "Image Generation"], "method": [], "dataset": ["Dayton (256\u00d7256) - ground-to-aerial"], "metric": ["SSIM"], "title": "Cross-view image synthesis using geometry-guided conditional GANs"} {"abstract": "Understanding search queries is a hard problem as it involves dealing with\n\"word salad\" text ubiquitously issued by users. 
However, if a query resembles a\nwell-formed question, a natural language processing pipeline is able to perform\nmore accurate interpretation, thus reducing downstream compounding errors.\nHence, identifying whether or not a query is well formed can enhance query\nunderstanding. Here, we introduce a new task of identifying a well-formed\nnatural language question. We construct and release a dataset of 25,100\npublicly available questions classified into well-formed and non-wellformed\ncategories and report an accuracy of 70.7% on the test set. We also show that\nour classifier can be used to improve the performance of neural\nsequence-to-sequence models for generating questions for reading comprehension.", "field": [], "task": ["Query Wellformedness"], "method": [], "dataset": ["Query Wellformedness"], "metric": ["Accuracy"], "title": "Identifying Well-formed Natural Language Questions"} {"abstract": "In recent years, graph neural networks (GNNs) have emerged as a powerful neural architecture to learn vector representations of nodes and graphs in a supervised, end-to-end fashion. Up to now, GNNs have only been evaluated empirically---showing promising results. The following work investigates GNNs from a theoretical point of view and relates them to the $1$-dimensional Weisfeiler-Leman graph isomorphism heuristic ($1$-WL). We show that GNNs have the same expressiveness as the $1$-WL in terms of distinguishing non-isomorphic (sub-)graphs. Hence, both algorithms also have the same shortcomings. Based on this, we propose a generalization of GNNs, so-called $k$-dimensional GNNs ($k$-GNNs), which can take higher-order graph structures at multiple scales into account. These higher-order structures play an essential role in the characterization of social networks and molecule graphs. Our experimental evaluation confirms our theoretical findings as well as confirms that higher-order information is useful in the task of graph classification and regression.", "field": [], "task": ["Graph Classification", "Regression"], "method": [], "dataset": ["IMDb-B", "PROTEINS", "NCI1", "MUTAG", "IMDb-M"], "metric": ["Accuracy"], "title": "Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks"} {"abstract": "Inspired by the increasing desire to efficiently tune machine learning hyper-parameters, in this work we rigorously analyse conventional and non-conventional assumptions inherent to Bayesian optimisation. Across an extensive set of experiments we conclude that: 1) the majority of hyper-parameter tuning tasks exhibit heteroscedasticity and non-stationarity, 2) multi-objective acquisition ensembles with Pareto-front solutions significantly improve queried configurations, and 3) robust acquisition maximisation affords empirical advantages relative to its non-robust counterparts. We hope these findings may serve as guiding principles, both for practitioners and for further research in the field.", "field": [], "task": ["Bayesian Optimisation", "Hyperparameter Optimization"], "method": [], "dataset": ["Bayesmark"], "metric": ["Mean"], "title": "An Empirical Study of Assumptions in Bayesian Optimisation"} {"abstract": "Many face recognition systems boost the performance using deep learning\nmodels, but only a few researches go into the mechanisms for dealing with\nonline registration. Although we can obtain discriminative facial features\nthrough the state-of-the-art deep model training, how to decide the best\nthreshold for practical use remains a challenge. 
We develop a technique of\nadaptive threshold mechanism to improve the recognition accuracy. We also\ndesign a face recognition system along with the registering procedure to handle\nonline registration. Furthermore, we introduce a new evaluation protocol to\nbetter evaluate the performance of an algorithm for real-world scenarios. Under\nour proposed protocol, our method can achieve a 22\\% accuracy improvement on\nthe LFW dataset.", "field": [], "task": ["Face Recognition"], "method": [], "dataset": ["LFW (Online Open Set)", "Adience (Online Open Set)", "Color FERET (Online Open Set)"], "metric": ["Average Accuracy (10 times)"], "title": "Data-specific Adaptive Threshold for Face Recognition and Authentication"} {"abstract": "The explosive growth in video streaming gives rise to challenges on performing video understanding at high accuracy and low computation cost. Conventional 2D CNNs are computationally cheap but cannot capture temporal relationships; 3D CNN based methods can achieve good performance but are computationally intensive, making it expensive to deploy. In this paper, we propose a generic and effective Temporal Shift Module (TSM) that enjoys both high efficiency and high performance. Specifically, it can achieve the performance of 3D CNN but maintain 2D CNN's complexity. TSM shifts part of the channels along the temporal dimension; thus facilitate information exchanged among neighboring frames. It can be inserted into 2D CNNs to achieve temporal modeling at zero computation and zero parameters. We also extended TSM to online setting, which enables real-time low-latency online video recognition and video object detection. TSM is accurate and efficient: it ranks the first place on the Something-Something leaderboard upon publication; on Jetson Nano and Galaxy Note8, it achieves a low latency of 13ms and 35ms for online video recognition. The code is available at: https://github.com/mit-han-lab/temporal-shift-module.", "field": [], "task": ["Action Classification", "Action Recognition", "Object Detection", "Video Object Detection", "Video Recognition", "Video Understanding"], "method": [], "dataset": ["Kinetics-400", "ImageNet VID", "Something-Something V2", "Something-Something V1"], "metric": ["Top 1 Accuracy", "Top-5 Accuracy", "Top-1 Accuracy", "MAP", "Top 5 Accuracy", "Vid acc@1"], "title": "TSM: Temporal Shift Module for Efficient Video Understanding"} {"abstract": "Recognising dialogue acts (DA) is important for many natural language processing tasks such as dialogue generation and intention recognition. In this paper, we propose a dual-attention hierarchical recurrent neural network for DA classification. Our model is partially inspired by the observation that conversational utterances are normally associated with both a DA and a topic, where the former captures the social act and the latter describes the subject matter. However, such a dependency between DAs and topics has not been utilised by most existing systems for DA classification. With a novel dual task-specific attention mechanism, our model is able, for utterances, to capture information about both DAs and topics, as well as information about the interactions between them. 
Experimental results show that by modelling topic as an auxiliary task, our model can significantly improve DA classification, yielding better or comparable performance to the state-of-the-art method on three public datasets.", "field": [], "task": ["Dialogue Act Classification", "Dialogue Generation", "Intent Detection"], "method": [], "dataset": ["Switchboard corpus", "ICSI Meeting Recorder Dialog Act (MRDA) corpus"], "metric": ["Accuracy"], "title": "A Dual-Attention Hierarchical Recurrent Neural Network for Dialogue Act Classification"} {"abstract": "Dialogue Act Recognition (DAR) is a challenging problem in dialogue\ninterpretation, which aims to attach semantic labels to utterances and\ncharacterize the speaker's intention. Currently, many existing approaches\nformulate the DAR problem ranging from multi-classification to structured\nprediction, which suffer from handcrafted feature extensions and attentive\ncontextual structural dependencies. In this paper, we consider the problem of\nDAR from the viewpoint of extending richer Conditional Random Field (CRF)\nstructural dependencies without abandoning end-to-end training. We incorporate\nhierarchical semantic inference with memory mechanism on the utterance\nmodeling. We then extend structured attention network to the linear-chain\nconditional random field layer which takes into account both contextual\nutterances and corresponding dialogue acts. The extensive experiments on two\nmajor benchmark datasets Switchboard Dialogue Act (SWDA) and Meeting Recorder\nDialogue Act (MRDA) datasets show that our method achieves better performance\nthan other state-of-the-art solutions to the problem. It is a remarkable fact\nthat our method is nearly close to the human annotator's performance on SWDA\nwithin 2% gap.", "field": [], "task": ["Dialogue Act Classification", "Dialogue Interpretation", "Structured Prediction"], "method": [], "dataset": ["Switchboard corpus", "ICSI Meeting Recorder Dialog Act (MRDA) corpus"], "metric": ["Accuracy"], "title": "Dialogue Act Recognition via CRF-Attentive Structured Network"} {"abstract": "Weakly supervised object detection (WSOD) using only image-level annotations has attracted a growing attention over the past few years. Whereas such task is typically addressed with a domain-specific solution focused on natural images, we show that a simple multiple instance approach applied on pre-trained deep features yields excellent performances on non-photographic datasets, possibly including new classes. The approach does not include any fine-tuning or cross-domain learning and is therefore efficient and possibly applicable to arbitrary datasets and classes. We investigate several flavors of the proposed approach, some including multi-layers perceptron and polyhedral classifiers. Despite its simplicity, our method shows competitive results on a range of publicly available datasets, including paintings (People-Art, IconArt), watercolors, cliparts and comics and allows to quickly learn unseen visual categories.", "field": [], "task": ["Multiple Instance Learning", "Object Detection", "Weakly Supervised Object Detection"], "method": [], "dataset": ["PeopleArt", "Watercolor2k", "Clipart1k", "CASPAPaintings", "IconArt", "Comic2k"], "metric": ["Mean mAP", "MAP"], "title": "Multiple instance learning on deep features for weakly supervised object detection with extreme domain shifts"} {"abstract": "This paper analyzes the impact of higher-order inference (HOI) on the task of coreference resolution. 
HOI has been adapted by almost all recent coreference resolution models without taking much investigation on its true effectiveness over representation learning. To make a comprehensive analysis, we implement an end-to-end coreference system as well as four HOI approaches, attended antecedent, entity equalization, span clustering, and cluster merging, where the latter two are our original methods. We find that given a high-performing encoder such as SpanBERT, the impact of HOI is negative to marginal, providing a new perspective of HOI to this task. Our best model using cluster merging shows the Avg-F1 of 80.2 on the CoNLL 2012 shared task dataset in English.", "field": [], "task": ["Coreference Resolution", "Representation Learning"], "method": [], "dataset": ["CoNLL 2012"], "metric": ["Avg F1"], "title": "Revealing the Myth of Higher-Order Inference in Coreference Resolution"} {"abstract": "Recently, scene text detection has become an active research topic in\ncomputer vision and document analysis, because of its great importance and\nsignificant challenge. However, vast majority of the existing methods detect\ntext within local regions, typically through extracting character, word or line\nlevel candidates followed by candidate aggregation and false positive\nelimination, which potentially exclude the effect of wide-scope and long-range\ncontextual cues in the scene. To take full advantage of the rich information\navailable in the whole natural image, we propose to localize text in a holistic\nmanner, by casting scene text detection as a semantic segmentation problem. The\nproposed algorithm directly runs on full images and produces global, pixel-wise\nprediction maps, in which detections are subsequently formed. To better make\nuse of the properties of text, three types of information regarding text\nregion, individual characters and their relationship are estimated, with a\nsingle Fully Convolutional Network (FCN) model. With such predictions of text\nproperties, the proposed algorithm can simultaneously handle horizontal,\nmulti-oriented and curved text in real-world natural images. The experiments on\nstandard benchmarks, including ICDAR 2013, ICDAR 2015 and MSRA-TD500,\ndemonstrate that the proposed algorithm substantially outperforms previous\nstate-of-the-art approaches. Moreover, we report the first baseline result on\nthe recently-released, large-scale dataset COCO-Text.", "field": [], "task": ["Scene Text", "Scene Text Detection", "Semantic Segmentation"], "method": [], "dataset": ["COCO-Text"], "metric": ["F-Measure", "Recall", "Precision"], "title": "Scene Text Detection via Holistic, Multi-Channel Prediction"} {"abstract": "Pretext tasks and contrastive learning have been successful in self-supervised learning for video retrieval and recognition. In this study, we analyze their optimization targets and utilize the hyper-sphere feature space to explore the connections between them, indicating the compatibility and consistency of these two different learning methods. Based on the analysis, we propose a self-supervised training method, referred as Pretext-Contrastive Learning (PCL), to learn video representations. Extensive experiments based on different combinations of pretext task baselines and contrastive losses confirm the strong agreement with their self-supervised learning targets, demonstrating the effectiveness and the generality of PCL. 
The combination of pretext tasks and contrastive losses showed significant improvements in both video retrieval and recognition over the corresponding baselines. And we can also outperform current state-of-the-art methods in the same manner. Further, our PCL is flexible and can be applied to almost all existing pretext task methods.", "field": [], "task": ["Self-Supervised Action Recognition", "Self-Supervised Learning", "Self-supervised Video Retrieval", "Video Retrieval"], "method": [], "dataset": ["UCF101", "HMDB51"], "metric": ["3-fold Accuracy", "Pre-Training Dataset", "Top-1 Accuracy"], "title": "Self-Supervised Video Representation Using Pretext-Contrastive Learning"} {"abstract": "Speech enhancement is challenging because of the diversity of background noise types. Most of the existing methods are focused on modelling the speech rather than the noise. In this paper, we propose a novel idea to model speech and noise simultaneously in a two-branch convolutional neural network, namely SN-Net. In SN-Net, the two branches predict speech and noise, respectively. Instead of information fusion only at the final output layer, interaction modules are introduced at several intermediate feature domains between the two branches to benefit each other. Such an interaction can leverage features learned from one branch to counteract the undesired part and restore the missing component of the other and thus enhance their discrimination capabilities. We also design a feature extraction module, namely residual-convolution-and-attention (RA), to capture the correlations along temporal and frequency dimensions for both the speech and the noises. Evaluations on public datasets show that the interaction module plays a key role in simultaneous modeling and the SN-Net outperforms the state-of-the-art by a large margin on various evaluation metrics. The proposed SN-Net also shows superior performance for speaker separation.", "field": [], "task": ["Speaker Separation", "Speech Enhancement"], "method": [], "dataset": ["Deep Noise Suppression (DNS) Challenge"], "metric": ["SI-SDR", "PESQ-WB"], "title": "Interactive Speech and Noise Modeling for Speech Enhancement"} {"abstract": "Neural network applications generally benefit from larger-sized models, but for current speech enhancement models, larger scale networks often suffer from decreased robustness to the variety of real-world use cases beyond what is encountered in training data. We introduce several innovations that lead to better large neural networks for speech enhancement. The novel PoCoNet architecture is a convolutional neural network that, with the use of frequency-positional embeddings, is able to more efficiently build frequency-dependent features in the early layers. A semi-supervised method helps increase the amount of conversational training data by pre-enhancing noisy datasets, improving performance on real recordings. A new loss function biased towards preserving speech quality helps the optimization better match human perceptual opinions on speech quality. 
Ablation experiments and objective and human opinion metrics show the benefits of the proposed improvements.", "field": [], "task": ["Speech Enhancement"], "method": [], "dataset": ["Deep Noise Suppression (DNS) Challenge"], "metric": ["MOS (NRT, real recordings)", "MOS (NRT, no reverb)", "MOS (NRT)", "PESQ-WB", "MOS (NRT, reverb)"], "title": "PoCoNet: Better Speech Enhancement with Frequency-Positional Embeddings, Semi-Supervised Conversational Data, and Biased Loss"} {"abstract": "Over the past few years, speech enhancement methods based on deep learning have greatly surpassed traditional methods based on spectral subtraction and spectral estimation. Many of these new techniques operate directly in the short-time Fourier transform (STFT) domain, resulting in a high computational complexity. In this work, we propose PercepNet, an efficient approach that relies on human perception of speech by focusing on the spectral envelope and on the periodicity of the speech. We demonstrate high-quality, real-time enhancement of fullband (48 kHz) speech with less than 5% of a CPU core.", "field": [], "task": ["Speech Enhancement"], "method": [], "dataset": ["Deep Noise Suppression (DNS) Challenge"], "metric": ["MOS (RT, no reverb)", "MOS (RT)", "MOS (RT, real recordings)", "MOS (RT, reverb)"], "title": "A Perceptually-Motivated Approach for Low-Complexity, Real-Time Enhancement of Fullband Speech"} {"abstract": "Existing image generator networks rely heavily on spatial convolutions and, optionally, self-attention blocks in order to gradually synthesize images in a coarse-to-fine manner. Here, we present a new architecture for image generators, where the color value at each pixel is computed independently given the value of a random latent vector and the coordinate of that pixel. No spatial convolutions or similar operations that propagate information across pixels are involved during the synthesis. We analyze the modeling capabilities of such generators when trained in an adversarial fashion, and observe the new generators to achieve similar generation quality to state-of-the-art convolutional generators. We also investigate several interesting properties unique to the new architecture.", "field": [], "task": ["Image Generation"], "method": [], "dataset": ["Landscapes 256 x 256", "Satellite-Landscapes 256 x 256", "LSUN Churches 256 x 256", "Satellite-Buildings 256 x 256", "FFHQ 256 x 256"], "metric": ["FID"], "title": "Image Generators with Conditionally-Independent Pixel Synthesis"} {"abstract": "Recently, attempts have been made to collect millions of videos to train CNN\nmodels for action recognition in videos. However, curating such large-scale\nvideo datasets requires immense human labor, and training CNNs on millions of\nvideos demands huge computational resources. In contrast, collecting action\nimages from the Web is much easier and training on images requires much less\ncomputation. In addition, labeled web images tend to contain discriminative\naction poses, which highlight discriminative portions of a video's temporal\nprogression. We explore the question of whether we can utilize web action\nimages to train better CNN models for action recognition in videos. We collect\n23.8K manually filtered images from the Web that depict the 101 actions in the\nUCF101 action video dataset. We show that by utilizing web action images along\nwith videos in training, significant performance boosts of CNN models can be\nachieved.
We then investigate the scalability of the process by leveraging\ncrawled web images (unfiltered) for UCF101 and ActivityNet. We replace 16.2M\nvideo frames by 393K unfiltered images and get comparable performance.", "field": [], "task": ["Action Recognition", "Action Recognition In Videos", "Action Recognition In Videos ", "Temporal Action Localization"], "method": [], "dataset": ["ActivityNet"], "metric": ["mAP"], "title": "Do Less and Achieve More: Training CNNs for Action Recognition Utilizing Action Images from the Web"} {"abstract": "The recent advance in neural network architecture and training algorithms\nhave shown the effectiveness of representation learning. The neural\nnetwork-based models generate better representation than the traditional ones.\nThey have the ability to automatically learn the distributed representation for\nsentences and documents. To this end, we proposed a novel model that addresses\nseveral issues that are not adequately modeled by the previously proposed\nmodels, such as the memory problem and incorporating the knowledge of document\nstructure. Our model uses a hierarchical structured self-attention mechanism to\ncreate the sentence and document embeddings. This architecture mirrors the\nhierarchical structure of the document and in turn enables us to obtain better\nfeature representation. The attention mechanism provides extra source of\ninformation to guide the summary extraction. The new model treated the\nsummarization task as a classification problem in which the model computes the\nrespective probabilities of sentence-summary membership. The model predictions\nare broken up by several features such as information content, salience,\nnovelty and positional representation. The proposed model was evaluated on two\nwell-known datasets, the CNN / Daily Mail, and DUC 2002. The experimental\nresults show that our model outperforms the current extractive state-of-the-art\nby a considerable margin.", "field": [], "task": ["Document Summarization", "Extractive Text Summarization", "Hierarchical structure", "Representation Learning", "Text Summarization"], "method": [], "dataset": ["CNN / Daily Mail (Anonymized)"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "A Hierarchical Structured Self-Attentive Model for Extractive Document Summarization (HSSAS)"} {"abstract": "We propose a neural network based approach for extracting models from dynamic data using ordinary and partial differential equations. In particular, given a time-series or spatio-temporal dataset, we seek to identify an accurate governing system which respects the intrinsic differential structure. The unknown governing model is parameterized by using both (shallow) multilayer perceptrons and nonlinear differential terms, in order to incorporate relevant correlations between spatio-temporal samples. We demonstrate the approach on several examples where the data is sampled from various dynamical systems and give a comparison to recurrent networks and other data-discovery methods. 
In addition, we show that for MNIST and Fashion MNIST, our approach lowers the parameter cost as compared to other deep neural networks.", "field": [], "task": ["Image Classification", "Time Series"], "method": [], "dataset": ["MNIST", "Fashion-MNIST"], "metric": ["Percentage error"], "title": "NeuPDE: Neural Network Based Ordinary and Partial Differential Equations for Modeling Time-Dependent Data"} {"abstract": "Previous researchers have considered sentiment analysis as a document classification task, in which input documents are classified into predefined sentiment classes. Although there are sentences in a document that support important evidences for sentiment analysis and sentences that do not, they have treated the document as a bag of sentences. In other words, they have not considered the importance of each sentence in the document. To effectively determine polarity of a document, each sentence in the document should be dealt with different degrees of importance. To address this problem, we propose a document-level sentence classification model based on deep neural networks, in which the importance degrees of sentences in documents are automatically determined through gate mechanisms. To verify our new sentiment analysis model, we conducted experiments using the sentiment datasets in the four different domains such as movie reviews, hotel reviews, restaurant reviews, and music reviews. In the experiments, the proposed model outperformed previous state-of-the-art models that do not consider importance differences of sentences in a document. The experimental results show that the importance of sentences should be considered in a document-level sentiment classification task.", "field": [], "task": ["Document Classification", "Sentence Classification", "Sentiment Analysis", "Text Classification"], "method": [], "dataset": ["IMDb", "IMDb-M"], "metric": ["Accuracy (2 classes)", "Accuracy (10 classes)", "Accuracy"], "title": "Improving Document-Level Sentiment Classification Using Importance of Sentences"} {"abstract": "Transfer of pre-trained representations can improve sample efficiency and reduce computational requirements for new tasks. However, representations used for transfer are usually generic, and are not tailored to a particular distribution of downstream tasks. We explore the use of expert representations for transfer with a simple, yet effective, strategy. We train a diverse set of experts by exploiting existing label structures, and use cheap-to-compute performance proxies to select the relevant expert for each target task. This strategy scales the process of transferring to new tasks, since it does not revisit the pre-training data during transfer. Accordingly, it requires little extra compute per target task, and results in a speed-up of 2-3 orders of magnitude compared to competing approaches. Further, we provide an adapter-based architecture able to compress many experts into a single model. We evaluate our approach on two different data sources and demonstrate that it outperforms baselines on over 20 diverse vision tasks in both cases.", "field": [], "task": ["Image Classification", "Transfer Learning"], "method": [], "dataset": ["VTAB-1k"], "metric": ["Top-1 Accuracy"], "title": "Scalable Transfer Learning with Expert Models"} {"abstract": "Aspect based sentiment analysis (ABSA) involves three fundamental subtasks: aspect term extraction, opinion term extraction, and aspect-level sentiment classification. 
Early works only focused on solving one of these subtasks individually. Some recent work focused on solving a combination of two subtasks, e.g., extracting aspect terms along with sentiment polarities or extracting the aspect and opinion terms pair-wisely. More recently, the triple extraction task has been proposed, i.e., extracting the (aspect term, opinion term, sentiment polarity) triples from a sentence. However, previous approaches fail to solve all subtasks in a unified end-to-end framework. In this paper, we propose a complete solution for ABSA. We construct two machine reading comprehension (MRC) problems, and solve all subtasks by joint training two BERT-MRC models with parameters sharing. We conduct experiments on these subtasks and results on several benchmark datasets demonstrate the effectiveness of our proposed framework, which significantly outperforms existing state-of-the-art methods.", "field": [], "task": ["Aspect-Based Sentiment Analysis", "Aspect Sentiment Triplet Extraction", "Machine Reading Comprehension", "Reading Comprehension", "Sentiment Analysis"], "method": [], "dataset": ["SemEval"], "metric": ["F1"], "title": "A Joint Training Dual-MRC Framework for Aspect Based Sentiment Analysis"} {"abstract": "Following the success of deep convolutional networks, state-of-the-art\nmethods for 3d human pose estimation have focused on deep end-to-end systems\nthat predict 3d joint locations given raw image pixels. Despite their excellent\nperformance, it is often not easy to understand whether their remaining error\nstems from a limited 2d pose (visual) understanding, or from a failure to map\n2d poses into 3-dimensional positions. With the goal of understanding these\nsources of error, we set out to build a system that given 2d joint locations\npredicts 3d positions. Much to our surprise, we have found that, with current\ntechnology, \"lifting\" ground truth 2d joint locations to 3d space is a task\nthat can be solved with a remarkably low error rate: a relatively simple deep\nfeed-forward network outperforms the best reported result by about 30\\% on\nHuman3.6M, the largest publicly available 3d pose estimation benchmark.\nFurthermore, training our system on the output of an off-the-shelf\nstate-of-the-art 2d detector (\\ie, using images as input) yields state of the\nart results -- this includes an array of systems that have been trained\nend-to-end specifically for this task. Our results indicate that a large\nportion of the error of modern deep 3d pose estimation systems stems from their\nvisual analysis, and suggests directions to further advance the state of the\nart in 3d human pose estimation.", "field": [], "task": ["3D Human Pose Estimation", "3D Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["Human3.6M", "HumanEva-I", "Geometric Pose Affordance "], "metric": ["Average MPJPE (mm)", "MPJPE (CS)", "Mean Reconstruction Error (mm)", "Multi-View or Monocular", "PCK3D (CS)", "PCK3D (CA)", "MPJPE (CA)"], "title": "A simple yet effective baseline for 3d human pose estimation"} {"abstract": "Though tremendous strides have been made in object recognition, one of the\nremaining open challenges is detecting small objects. We explore three aspects\nof the problem in the context of finding small faces: the role of scale\ninvariance, image resolution, and contextual reasoning. While most recognition\napproaches aim to be scale-invariant, the cues for recognizing a 3px tall face\nare fundamentally different than those for recognizing a 300px tall face. 
We\ntake a different approach and train separate detectors for different scales. To\nmaintain efficiency, detectors are trained in a multi-task fashion: they make\nuse of features extracted from multiple layers of single (deep) feature\nhierarchy. While training detectors for large objects is straightforward, the\ncrucial challenge remains training detectors for small objects. We show that\ncontext is crucial, and define templates that make use of massively-large\nreceptive fields (where 99% of the template extends beyond the object of\ninterest). Finally, we explore the role of scale in pre-trained deep networks,\nproviding ways to extrapolate networks tuned for limited scales to rather\nextreme ranges. We demonstrate state-of-the-art results on\nmassively-benchmarked face datasets (FDDB and WIDER FACE). In particular, when\ncompared to prior art on WIDER FACE, our results reduce error by a factor of 2\n(our models produce an AP of 82% while prior art ranges from 29-64%).", "field": [], "task": ["Face Detection", "Object Recognition"], "method": [], "dataset": ["WIDER Face (Hard)", "WIDER Face (Medium)", "WIDER Face (Easy)"], "metric": ["AP"], "title": "Finding Tiny Faces"} {"abstract": "We study the problem of segmenting moving objects in unconstrained videos.\nGiven a video, the task is to segment all the objects that exhibit independent\nmotion in at least one frame. We formulate this as a learning problem and\ndesign our framework with three cues: (i) independent object motion between a\npair of frames, which complements object recognition, (ii) object appearance,\nwhich helps to correct errors in motion estimation, and (iii) temporal\nconsistency, which imposes additional constraints on the segmentation. The\nframework is a two-stream neural network with an explicit memory module. The\ntwo streams encode appearance and motion cues in a video sequence respectively,\nwhile the memory module captures the evolution of objects over time, exploiting\nthe temporal consistency. The motion stream is a convolutional neural network\ntrained on synthetic videos to segment independently moving objects in the\noptical flow field. The module to build a 'visual memory' in video, i.e., a\njoint representation of all the video frames, is realized with a convolutional\nrecurrent unit learned from a small number of training video sequences.\n For every pixel in a frame of a test video, our approach assigns an object or\nbackground label based on the learned spatio-temporal features as well as the\n'visual memory' specific to the video. We evaluate our method extensively on\nthree benchmarks, DAVIS, Freiburg-Berkeley motion segmentation dataset and\nSegTrack. In addition, we provide an extensive ablation study to investigate\nboth the choice of the training data and the influence of each component in the\nproposed framework.", "field": [], "task": ["Motion Estimation", "Motion Segmentation", "Object Recognition", "Optical Flow Estimation", "Unsupervised Video Object Segmentation"], "method": [], "dataset": ["DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "Learning to Segment Moving Objects"} {"abstract": "Interactive object selection is a very important research problem and has\nmany applications. Previous algorithms require substantial user interactions to\nestimate the foreground and background distributions. 
In this paper, we present\na novel deep learning based algorithm which has a much better understanding of\nobjectness and thus can reduce user interactions to just a few clicks. Our\nalgorithm transforms user-provided positive and negative clicks into two\nEuclidean distance maps which are then concatenated with the RGB channels of\nimages to compose (image, user interactions) pairs. We generate many such\npairs by combining several random sampling strategies to model user click\npatterns and use them to fine-tune deep Fully Convolutional Networks (FCNs).\nFinally, the output probability maps of our FCN-8s model are integrated with\ngraph cut optimization to refine the boundary segments. Our model is trained on\nthe PASCAL segmentation dataset and evaluated on other datasets with different\nobject classes. Experimental results on both seen and unseen objects clearly\ndemonstrate that our algorithm has a good generalization ability and is\nsuperior to all existing interactive object selection approaches.", "field": [], "task": ["Interactive Segmentation"], "method": [], "dataset": ["GrabCut", "DAVIS", "SBD"], "metric": ["NoC@90", "NoC@85"], "title": "Deep Interactive Object Selection"} {"abstract": "Online platforms can be divided into information-oriented and social-oriented\ndomains. The former refers to forums or E-commerce sites that emphasize\nuser-item interactions, like Trip.com and Amazon; whereas the latter refers to\nsocial networking services (SNSs) that have rich user-user connections, such as\nFacebook and Twitter. Despite their heterogeneity, these two domains can be\nbridged by a few overlapping users, dubbed as bridge users. In this work, we\naddress the problem of cross-domain social recommendation, i.e., recommending\nrelevant items of information domains to potential users of social networks. To\nour knowledge, this is a new problem that has rarely been studied before.\n Existing cross-domain recommender systems are unsuitable for this task since\nthey have either focused on homogeneous information domains or assumed that\nusers are fully overlapped. Towards this end, we present a novel Neural Social\nCollaborative Ranking (NSCR) approach, which seamlessly sews up the user-item\ninteractions in information domains and user-user connections in SNSs. In the\ninformation domain part, the attributes of users and items are leveraged to\nstrengthen the embedding learning of users and items. In the SNS part, the\nembeddings of bridge users are propagated to learn the embeddings of other\nnon-bridge users. Extensive experiments on two real-world datasets demonstrate\nthe effectiveness and rationality of our NSCR method.", "field": [], "task": ["Collaborative Ranking", "Recommendation Systems"], "method": [], "dataset": ["Epinions", "WeChat"], "metric": ["MAE", "P@10", "RMSE", "AUC"], "title": "Item Silk Road: Recommending Items from Information Domains to Social Users"} {"abstract": "In this paper, we propose our Correlation For Completion Network (CFCNet), an end-to-end deep learning model that uses the correlation between two data sources to perform sparse depth completion. CFCNet learns to capture, to the largest extent, the semantically correlated features between RGB and depth information. Through pairs of image pixels and the visible measurements in a sparse depth map, CFCNet facilitates feature-level mutual transformation of different data sources.
Such a transformation enables CFCNet to predict features and reconstruct data of missing depth measurements according to their corresponding, transformed RGB features. We extend canonical correlation analysis to a 2D domain and formulate it as one of our training objectives (i.e. 2d deep canonical correlation, or \"2D2CCA loss\"). Extensive experiments validate the ability and flexibility of our CFCNet compared to the state-of-the-art methods on both indoor and outdoor scenes with different real-life sparse patterns. Codes are available at: https://github.com/choyingw/CFCNet.", "field": [], "task": ["Depth Completion"], "method": [], "dataset": ["KITTI Depth Completion 500 points"], "metric": ["RMSE "], "title": "Deep RGB-D Canonical Correlation Analysis For Sparse Depth Completion"} {"abstract": "Inspired by the effectiveness of adversarial training in the area of\nGenerative Adversarial Networks we present a new approach for learning feature\nrepresentations in person re-identification. We investigate different types of\nbias that typically occur in re-ID scenarios, i.e., pose, body part and camera\nview, and propose a general approach to address them. We introduce an\nadversarial strategy for controlling bias, named Bias-controlled Adversarial\nframework (BCA), with two complementary branches to reduce or to enhance\nbias-related features. The results and comparison to the state of the art on\ndifferent benchmarks show that our framework is an effective strategy for\nperson re-identification. The performance improvements are in both full and\npartial views of persons.", "field": [], "task": ["Person Re-Identification"], "method": [], "dataset": ["DukeMTMC-reID", "Market-1501"], "metric": ["Rank-1", "MAP"], "title": "Person Re-identification with Bias-controlled Adversarial Training"} {"abstract": "Video object segmentation is an essential task in robot manipulation to\nfacilitate grasping and learning affordances. Incremental learning is important\nfor robotics in unstructured environments, since the total number of objects\nand their variations can be intractable. Inspired by the children learning\nprocess, human robot interaction (HRI) can be utilized to teach robots about\nthe world guided by humans similar to how children learn from a parent or a\nteacher. A human teacher can show potential objects of interest to the robot,\nwhich is able to self adapt to the teaching signal without providing manual\nsegmentation labels. We propose a novel teacher-student learning paradigm to\nteach robots about their surrounding environment. A two-stream motion and\nappearance \"teacher\" network provides pseudo-labels to adapt an appearance\n\"student\" network. The student network is able to segment the newly learned\nobjects in other scenes, whether they are static or in motion. We also\nintroduce a carefully designed dataset that serves the proposed HRI setup,\ndenoted as (I)nteractive (V)ideo (O)bject (S)egmentation. Our IVOS dataset\ncontains teaching videos of different objects, and manipulation tasks. Unlike\nprevious datasets, IVOS provides manipulation tasks sequences with segmentation\nannotation along with the waypoints for the robot trajectories. It also\nprovides segmentation annotation for the different transformations such as\ntranslation, scale, planar rotation, and out-of-plane rotation. Our proposed\nadaptation method outperforms the state-of-the-art on DAVIS and FBMS with 6.8%\nand 1.2% in F-measure respectively. 
It improves over the baseline on IVOS\ndataset with 46.1% and 25.9% in mIoU.", "field": [], "task": ["Human robot interaction", "Incremental Learning", "Semantic Segmentation", "Unsupervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "Video Object Segmentation using Teacher-Student Adaptation in a Human Robot Interaction (HRI) Setting"} {"abstract": "Recent BIO-tagging-based neural semantic role labeling models are very high\nperforming, but assume gold predicates as part of the input and cannot\nincorporate span-level features. We propose an end-to-end approach for jointly\npredicting all predicates, arguments spans, and the relations between them. The\nmodel makes independent decisions about what relationship, if any, holds\nbetween every possible word-span pair, and learns contextualized span\nrepresentations that provide rich, shared input features for each decision.\nExperiments demonstrate that this approach sets a new state of the art on\nPropBank SRL without gold predicates.", "field": [], "task": ["Semantic Role Labeling"], "method": [], "dataset": ["CoNLL 2005", "OntoNotes", "CoNLL 2012"], "metric": ["F1"], "title": "Jointly Predicting Predicates and Arguments in Neural Semantic Role Labeling"} {"abstract": "In single image deblurring, the \"coarse-to-fine\" scheme, i.e. gradually\nrestoring the sharp image on different resolutions in a pyramid, is very\nsuccessful in both traditional optimization-based methods and recent\nneural-network-based approaches. In this paper, we investigate this strategy\nand propose a Scale-recurrent Network (SRN-DeblurNet) for this deblurring task.\nCompared with the many recent learning-based approaches in [25], it has a\nsimpler network structure, a smaller number of parameters and is easier to\ntrain. We evaluate our method on large-scale deblurring datasets with complex\nmotion. Results show that our method can produce better quality results than\nstate-of-the-arts, both quantitatively and qualitatively.", "field": [], "task": ["Deblurring"], "method": [], "dataset": ["RealBlur-R", "RealBlur-J", "GoPro", "RealBlur-J (trained on GoPro)", "RealBlur-R (trained on GoPro)", "HIDE (trained on GOPRO)"], "metric": ["SSIM", "SSIM (sRGB)", "PSNR", "PSNR (sRGB)"], "title": "Scale-recurrent Network for Deep Image Deblurring"} {"abstract": "Face alignment algorithms locate a set of landmark points in images of faces taken in unrestricted situations. State-of-the-art approaches typically fail or lose accuracy in the presence of occlusions, strong deformations, large pose variations and ambiguous configurations. In this paper we present 3DDE, a robust and efficient face alignment algorithm based on a coarse-to-fine cascade of ensembles of regression trees. It is initialized by robustly fitting a 3D face model to the probability maps produced by a convolutional neural network. With this initialization we address self-occlusions and large face rotations. Further, the regressor implicitly imposes a prior face shape on the solution, addressing occlusions and ambiguous face configurations. Its coarse-to-fine structure tackles the combinatorial explosion of parts deformation. In the experiments performed, 3DDE improves the state-of-the-art in 300W, COFW, AFLW and WFLW data sets. 
Finally, we perform cross-dataset experiments that reveal the existence of a significant data set bias in these benchmarks.", "field": [], "task": ["Face Alignment", "Face Model", "Facial Landmark Detection", "Regression"], "method": [], "dataset": ["WFLW", "300W", "COFW", "AFLW-Full"], "metric": ["Mean NME", "Fullset (public)", "AUC@0.1 (all)", "NME", "ME (%, all) ", "FR@0.1(%, all)", "Mean Error Rate"], "title": "Face Alignment using a 3D Deeply-initialized Ensemble of Regression Trees"} {"abstract": "In recent years, Graph Neural Networks (GNNs), which can naturally integrate node information and topological structure, have been demonstrated to be powerful in learning on graph data. These advantages of GNNs provide great potential to advance social recommendation since data in social recommender systems can be represented as user-user social graph and user-item graph; and learning latent factors of users and items is the key. However, building social recommender systems based on GNNs faces challenges. For example, the user-item graph encodes both interactions and their associated opinions; social relations have heterogeneous strengths; users involve in two graphs (e.g., the user-user social graph and the user-item graph). To address the three aforementioned challenges simultaneously, in this paper, we present a novel graph neural network framework (GraphRec) for social recommendations. In particular, we provide a principled approach to jointly capture interactions and opinions in the user-item graph and propose the framework GraphRec, which coherently models two graphs and heterogeneous strengths. Extensive experiments on two real-world datasets demonstrate the effectiveness of the proposed framework GraphRec. Our code is available at \\url{https://github.com/wenqifan03/GraphRec-WWW19}", "field": [], "task": ["Recommendation Systems"], "method": [], "dataset": ["Epinions"], "metric": ["MAE", "RMSE"], "title": "Graph Neural Networks for Social Recommendation"} {"abstract": "To date, most of recent work under the retrieval-reader framework for open-domain QA focuses on either extractive or generative reader exclusively. In this paper, we study a hybrid approach for leveraging the strengths of both models. We apply novel techniques to enhance both extractive and generative readers built upon recent pretrained neural language models, and find that proper training methods can provide large improvement over previous state-of-the-art models. We demonstrate that a simple hybrid approach by combining answers from both readers can efficiently take advantages of extractive and generative answer inference strategies and outperforms single models as well as homogeneous ensembles. Our approach outperforms previous state-of-the-art models by 3.3 and 2.7 points in exact match on NaturalQuestions and TriviaQA respectively.", "field": [], "task": ["Open-Domain Question Answering", "Question Answering"], "method": [], "dataset": ["EfficientQA test", "TriviaQA", "EfficientQA dev", "Natural Questions (short)"], "metric": ["F1", "Accuracy"], "title": "UnitedQA: A Hybrid Approach for Open Domain Question Answering"} {"abstract": "AMR-to-text generation is a problem recently introduced to the NLP community, in which the goal is to generate sentences from Abstract Meaning Representation (AMR) graphs. Sequence-to-sequence models can be used to this end by converting the AMR graphs to strings. 
Approaching the problem while working directly with graphs requires the use of graph-to-sequence models that encode the AMR graph into a vector representation. Such encoding has been shown to be beneficial in the past, and unlike sequential encoding, it allows us to explicitly capture reentrant structures in the AMR graphs. We investigate the extent to which reentrancies (nodes with multiple parents) have an impact on AMR-to-text generation by comparing graph encoders to tree encoders, where reentrancies are not preserved. We show that improvements in the treatment of reentrancies and long-range dependencies contribute to higher overall scores for graph encoders. Our best model achieves 24.40 BLEU on LDC2015E86, outperforming the state of the art by 1.1 points and 24.54 BLEU on LDC2017T10, outperforming the state of the art by 1.24 points.", "field": [], "task": ["AMR-to-Text Generation", "Graph-to-Sequence", "Text Generation"], "method": [], "dataset": ["LDC2015E86:"], "metric": ["BLEU"], "title": "Structural Neural Encoders for AMR-to-text Generation"} {"abstract": "Common-sense and background knowledge is required to understand natural\nlanguage, but in most neural natural language understanding (NLU) systems, this\nknowledge must be acquired from training corpora during learning, and then it\nis static at test time. We introduce a new architecture for the dynamic\nintegration of explicit background knowledge in NLU models. A general-purpose\nreading module reads background knowledge in the form of free-text statements\n(together with task-specific text inputs) and yields refined word\nrepresentations to a task-specific NLU architecture that reprocesses the task\ninputs with these representations. Experiments on document question answering\n(DQA) and recognizing textual entailment (RTE) demonstrate the effectiveness\nand flexibility of the approach. Analysis shows that our model learns to\nexploit knowledge in a semantically appropriate way.", "field": [], "task": ["Common Sense Reasoning", "Natural Language Inference", "Natural Language Understanding", "Question Answering"], "method": [], "dataset": ["TriviaQA"], "metric": ["EM", "F1"], "title": "Dynamic Integration of Background Knowledge in Neural NLU Systems"} {"abstract": "We introduce $k$NN-LMs, which extend a pre-trained neural language model (LM) by linearly interpolating it with a $k$-nearest neighbors ($k$NN) model. The nearest neighbors are computed according to distance in the pre-trained LM embedding space, and can be drawn from any text collection, including the original LM training data. Applying this augmentation to a strong Wikitext-103 LM, with neighbors drawn from the original training set, our $k$NN-LM achieves a new state-of-the-art perplexity of 15.79 - a 2.9 point improvement with no additional training. We also show that this approach has implications for efficiently scaling up to larger training sets and allows for effective domain adaptation, by simply varying the nearest neighbor datastore, again without further training. Qualitatively, the model is particularly helpful in predicting rare patterns, such as factual knowledge. 
Together, these results strongly suggest that learning similarity between sequences of text is easier than predicting the next word, and that nearest neighbor search is an effective approach for language modeling in the long tail.", "field": [], "task": ["Domain Adaptation", "Language Modelling"], "method": [], "dataset": ["WikiText-103"], "metric": ["Number of params", "Validation perplexity", "Test perplexity"], "title": "Generalization through Memorization: Nearest Neighbor Language Models"} {"abstract": "Layer normalization (LayerNorm) is a technique to normalize the distributions of intermediate layers. It enables smoother gradients, faster training, and better generalization accuracy. However, it is still unclear where the effectiveness stems from. In this paper, our main contribution is to take a step further in understanding LayerNorm. Many of previous studies believe that the success of LayerNorm comes from forward normalization. Unlike them, we find that the derivatives of the mean and variance are more important than forward normalization by re-centering and re-scaling backward gradients. Furthermore, we find that the parameters of LayerNorm, including the bias and gain, increase the risk of over-fitting and do not work in most cases. Experiments show that a simple version of LayerNorm (LayerNorm-simple) without the bias and gain outperforms LayerNorm on four datasets. It obtains the state-of-the-art performance on En-Vi machine translation. To address the over-fitting problem, we propose a new normalization method, Adaptive Normalization (AdaNorm), by replacing the bias and gain with a new transformation function. Experiments show that AdaNorm demonstrates better results than LayerNorm on seven out of eight datasets.", "field": [], "task": ["Machine Translation"], "method": [], "dataset": ["IWSLT2015 English-Vietnamese"], "metric": ["BLEU"], "title": "Understanding and Improving Layer Normalization"} {"abstract": "Video super-resolution plays an important role in surveillance video analysis and ultra-high-definition video display, which has drawn much attention in both the research and industrial communities. Although many deep learning-based VSR methods have been proposed, it is hard to directly compare these methods since the different loss functions and training datasets have a significant impact on the super-resolution results. In this work, we carefully study and compare three temporal modeling methods (2D CNN with early fusion, 3D CNN with slow fusion and Recurrent Neural Network) for video super-resolution. We also propose a novel Recurrent Residual Network (RRN) for efficient video super-resolution, where residual learning is utilized to stabilize the training of RNN and meanwhile to boost the super-resolution performance. Extensive experiments show that the proposed RRN is highly computational efficiency and produces temporal consistent VSR results with finer details than other temporal modeling methods. Besides, the proposed method achieves state-of-the-art results on several widely used benchmarks.", "field": [], "task": ["Super-Resolution", "Video Super-Resolution"], "method": [], "dataset": ["Vid4 - 4x upscaling", "UDM10 - 4x upscaling", "SPMCS - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Revisiting Temporal Modeling for Video Super-resolution"} {"abstract": "This work studies the use of visual semantic representations to align entities in heterogeneous knowledge graphs (KGs). Images are natural components of many existing KGs. 
By combining visual knowledge with other auxiliary information, we show that the proposed new approach, EVA, creates a holistic entity representation that provides strong signals for cross-graph entity alignment. Besides, previous entity alignment methods require human labelled seed alignment, restricting availability. EVA provides a completely unsupervised solution by leveraging the visual similarity of entities to create an initial seed dictionary (visual pivots). Experiments on benchmark data sets DBP15k and DWY15k show that EVA offers state-of-the-art performance on both monolingual and cross-lingual entity alignment tasks. Furthermore, we discover that images are particularly useful to align long-tail KG entities, which inherently lack the structural contexts necessary for capturing the correspondences.", "field": [], "task": ["Entity Alignment", "Knowledge Graphs"], "method": [], "dataset": ["DBP15k zh-en", "dbp15k fr-en", "dbp15k ja-en"], "metric": ["Hits@1"], "title": "Visual Pivoting for (Unsupervised) Entity Alignment"} {"abstract": "Most object recognition approaches predominantly focus on learning discriminative visual patterns while overlooking the holistic object structure. Though important, structure modeling usually requires significant manual annotations and therefore is labor-intensive. In this paper, we propose to \"look into object\" (explicitly yet intrinsically model the object structure) through incorporating self-supervisions into the traditional framework. We show the recognition backbone can be substantially enhanced for more robust representation learning, without any cost of extra annotation and inference speed. Specifically, we first propose an object-extent learning module for localizing the object according to the visual patterns shared among the instances in the same category. We then design a spatial context learning module for modeling the internal structures of the object, through predicting the relative positions within the extent. These two modules can be easily plugged into any backbone networks during training and detached at inference time. Extensive experiments show that our look-into-object approach (LIO) achieves large performance gain on a number of benchmarks, including generic object recognition (ImageNet) and fine-grained object recognition tasks (CUB, Cars, Aircraft). We also show that this learning paradigm is highly generalizable to other tasks such as object detection and segmentation (MS COCO). Project page: https://github.com/JDAI-CV/LIO.", "field": [], "task": ["Fine-Grained Image Classification", "Image Recognition", "Instance Segmentation", "Object Detection", "Object Recognition", "Representation Learning", "Semantic Segmentation"], "method": [], "dataset": ["CUB-200-2011", "Stanford Cars", "ImageNet", "FGVC Aircraft"], "metric": ["Accuracy", "Top-1 Error Rate"], "title": "Look-into-Object: Self-supervised Structure Modeling for Object Recognition"} {"abstract": "Sentiment analysis in conversations has gained increasing attention in recent years for the growing amount of applications it can serve, e.g., sentiment analysis, recommender systems, and human-robot interaction. The main difference between conversational sentiment analysis and single sentence sentiment analysis is the existence of context information which may influence the sentiment of an utterance in a dialogue. How to effectively encode contextual information in dialogues, however, remains a challenge. 
Existing approaches employ complicated deep learning structures to distinguish different parties in a conversation and then model the context information. In this paper, we propose a fast, compact and parameter-efficient party-ignorant framework named bidirectional emotional recurrent unit for conversational sentiment analysis. In our system, a generalized neural tensor block followed by a two-channel classifier is designed to perform context compositionality and sentiment classification, respectively. Extensive experiments on three standard datasets demonstrate that our model outperforms the state of the art in most cases.", "field": [], "task": ["Emotion Recognition in Conversation", "Human robot interaction"], "method": [], "dataset": ["IEMOCAP", "MELD"], "metric": ["Weighted Macro-F1", "F1", "Accuracy"], "title": "BiERU: Bidirectional Emotional Recurrent Unit for Conversational Sentiment Analysis"} {"abstract": "One of the core components of modern spoken dialogue systems is the belief\ntracker, which estimates the user's goal at every step of the dialogue.\nHowever, most current approaches have difficulty scaling to larger, more\ncomplex dialogue domains. This is due to their dependency on either: a) Spoken\nLanguage Understanding models that require large amounts of annotated training\ndata; or b) hand-crafted lexicons for capturing some of the linguistic\nvariation in users' language. We propose a novel Neural Belief Tracking (NBT)\nframework which overcomes these problems by building on recent advances in\nrepresentation learning. NBT models reason over pre-trained word vectors,\nlearning to compose them into distributed representations of user utterances\nand dialogue context. Our evaluation on two datasets shows that this approach\nsurpasses past limitations, matching the performance of state-of-the-art models\nwhich rely on hand-crafted semantic lexicons and outperforming them when such\nlexicons are not provided.", "field": [], "task": ["Dialogue State Tracking", "Representation Learning", "Spoken Dialogue Systems", "Spoken Language Understanding"], "method": [], "dataset": ["Wizard-of-Oz", "Second dialogue state tracking challenge"], "metric": ["Joint", "Price", "Area", "Food", "Request"], "title": "Neural Belief Tracker: Data-Driven Dialogue State Tracking"} {"abstract": "Fine-Grained Visual Classification (FGVC) is an important computer vision problem that involves small diversity within the different classes, and often requires expert annotators to collect data. Utilizing this notion of small visual diversity, we revisit Maximum-Entropy learning in the context of fine-grained classification, and provide a training routine that maximizes the entropy of the output probability distribution for training convolutional neural networks on FGVC tasks. We provide a theoretical as well as empirical justification of our approach, and achieve state-of-the-art performance across a variety of classification tasks in FGVC, that can potentially be extended to any fine-tuning task. Our method is robust to different hyperparameter values, amount of training data and amount of training label noise and can hence be a valuable tool in many similar problems.", "field": [], "task": ["Fine-Grained Image Classification"], "method": [], "dataset": ["NABirds"], "metric": ["Accuracy"], "title": "Maximum-Entropy Fine Grained Classification"} {"abstract": "Coherence plays a critical role in producing a high-quality summary from a\ndocument. 
In recent years, neural extractive summarization has become\nincreasingly attractive. However, most existing methods ignore the coherence of\nsummaries when extracting sentences. As an effort towards extracting coherent\nsummaries, we propose a neural coherence model to capture the cross-sentence\nsemantic and syntactic coherence patterns. The proposed neural coherence model\nobviates the need for feature engineering and can be trained in an end-to-end\nfashion using unlabeled data. Empirical results show that the proposed neural\ncoherence model can efficiently capture the cross-sentence coherence patterns.\nUsing the combined output of the neural coherence model and the ROUGE package as\nthe reward, we design a reinforcement learning method to train the proposed\nneural extractive summarizer, named the Reinforced Neural Extractive\nSummarization (RNES) model. The RNES model learns to optimize the coherence and\ninformativeness of the summary simultaneously. Experimental results show\nthat the proposed RNES outperforms existing baselines and achieves\nstate-of-the-art performance in terms of ROUGE on the CNN/Daily Mail dataset. The\nqualitative evaluation indicates that summaries produced by RNES are more\ncoherent and readable.", "field": [], "task": ["Feature Engineering", "Text Summarization"], "method": [], "dataset": ["CNN / Daily Mail (Anonymized)"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Learning to Extract Coherent Summary via Deep Reinforcement Learning"} {"abstract": "Conversational emotion recognition (CER) has attracted increasing interest in the natural language processing (NLP) community. Different from vanilla emotion recognition, effective speaker-sensitive utterance representation is one major challenge for CER. In this paper, we exploit speaker identification (SI) as an auxiliary task to enhance the utterance representation in conversations. By this method, we can learn better speaker-aware contextual representations from the additional SI corpus. Experiments on two benchmark datasets demonstrate that the proposed architecture is highly effective for CER, obtaining new state-of-the-art results on both datasets.", "field": [], "task": ["Emotion Recognition in Conversation", "Multi-Task Learning", "Speaker Identification"], "method": [], "dataset": ["MELD", "EmoryNLP"], "metric": ["Weighted Macro-F1"], "title": "Multi-Task Learning with Auxiliary Speaker Identification for Conversational Emotion Recognition"} {"abstract": "Coreference resolution systems are typically trained with heuristic loss\nfunctions that require careful tuning. In this paper we instead apply\nreinforcement learning to directly optimize a neural mention-ranking model for\ncoreference evaluation metrics. We experiment with two approaches: the\nREINFORCE policy gradient algorithm and a reward-rescaled max-margin objective.\nWe find the latter to be more effective, resulting in significant improvements\nover the current state-of-the-art on the English and Chinese portions of the\nCoNLL 2012 Shared Task.", "field": [], "task": ["Coreference Resolution"], "method": [], "dataset": ["OntoNotes"], "metric": ["F1"], "title": "Deep Reinforcement Learning for Mention-Ranking Coreference Models"} {"abstract": "Semantic parses are directed acyclic graphs (DAGs), so semantic parsing should be modeled as graph prediction.
But predicting graphs presents difficult technical challenges, so it is simpler and more common to predict the linearized graphs found in semantic parsing datasets using well-understood sequence models. The cost of this simplicity is that the predicted strings may not be well-formed graphs. We present recurrent neural network DAG grammars, a graph-aware sequence model that ensures only well-formed graphs while sidestepping many difficulties in graph prediction. We test our model on the Parallel Meaning Bank---a multilingual semantic graphbank. Our approach yields competitive results in English and establishes the first results for German, Italian and Dutch.", "field": [], "task": ["DRS Parsing", "Semantic Parsing"], "method": [], "dataset": ["PMB-2.2.0"], "metric": ["F1"], "title": "Semantic Graph Parsing with Recurrent Neural Network DAG Grammars"} {"abstract": "Fine-Grained Visual Classification (FGVC) datasets contain small sample\nsizes, along with significant intra-class variation and inter-class similarity.\nWhile prior work has addressed intra-class variation using localization and\nsegmentation techniques, inter-class similarity may also affect feature\nlearning and reduce classification performance. In this work, we address this\nproblem using a novel optimization procedure for the end-to-end neural network\ntraining on FGVC tasks. Our procedure, called Pairwise Confusion (PC), reduces\noverfitting by intentionally {introducing confusion} in the activations. With\nPC regularization, we obtain state-of-the-art performance on six of the most\nwidely-used FGVC datasets and demonstrate improved localization ability. {PC}\nis easy to implement, does not need excessive hyperparameter tuning during\ntraining, and does not add significant overhead during test time.", "field": [], "task": ["Fine-Grained Image Classification"], "method": [], "dataset": [" CUB-200-2011", "Oxford 102 Flowers", "CUB-200-2011", "Stanford Dogs", "Stanford Cars", "NABirds"], "metric": ["Accuracy"], "title": "Pairwise Confusion for Fine-Grained Visual Classification"} {"abstract": "We present our submission to the IWCS 2019 shared task on semantic parsing, a transition-based parser that uses explicit word-meaning pairings, but no explicit representation of syntax. Parsing decisions are made based on vector representations of parser states, encoded via stack-LSTMs (Ballesteros et al., 2017), as well as some heuristic rules. Our system reaches 70.88{\%} f-score in the competition.", "field": [], "task": ["DRS Parsing", "Semantic Parsing"], "method": [], "dataset": ["PMB-2.2.0"], "metric": ["F1"], "title": "Transition-based DRS Parsing Using Stack-LSTMs"} {"abstract": "We aim to divide the problem space of fine-grained recognition into specific regions. To achieve this, we develop a unified framework based on a mixture of experts. Due to the limited data available for the fine-grained recognition problem, it is not feasible to learn diverse experts by using a data division strategy. To tackle this problem, we promote diversity among experts by combining an expert gradually-enhanced learning strategy and a Kullback-Leibler divergence based constraint. The strategy learns new experts on the dataset with the prior knowledge from former experts and adds them to the model sequentially, while the introduced constraint forces the experts to produce diverse prediction distributions. These drive the experts to learn the task from different aspects, making them specialized in different subspace problems.
Experiments show that the resulting model improves classification performance and achieves state-of-the-art results on several fine-grained benchmark datasets.", "field": [], "task": ["Fine-Grained Image Classification"], "method": [], "dataset": [" CUB-200-2011", "Stanford Cars", "NABirds"], "metric": ["Accuracy"], "title": "Learning a Mixture of Granularity-Specific Experts for Fine-Grained Categorization"} {"abstract": "The main requisite for the fine-grained recognition task is to focus on subtle discriminative details that make the subordinate classes different from each other. We note that existing methods implicitly address this requirement and leave it to a data-driven pipeline to figure out what makes a subordinate class different from the others. This results in two major limitations: First, the network focuses on the most obvious distinctions between classes and overlooks more subtle inter-class variations. Second, the chance of misclassifying a given sample in any of the negative classes is considered equal, while in fact, confusions generally occur among only the most similar classes. Here, we propose to explicitly force the network to find the subtle differences among closely related classes. In this pursuit, we introduce two key novelties that can be easily plugged into existing end-to-end deep learning pipelines. On one hand, we introduce a diversification block which masks the most salient features for an input to force the network to use more subtle cues for its correct classification. Concurrently, we introduce a gradient-boosting loss function that focuses only on the confusing classes for each sample and therefore moves swiftly along the direction on the loss surface that seeks to resolve these ambiguities. The synergy between these two blocks helps the network to learn more effective feature representations. Comprehensive experiments are performed on five challenging datasets. Our approach outperforms existing methods using a similar experimental setting on all five datasets.", "field": [], "task": ["Fine-Grained Image Classification"], "method": [], "dataset": [" CUB-200-2011", "Stanford Dogs", "Stanford Cars", "FGVC Aircraft"], "metric": ["Accuracy"], "title": "Fine-grained Recognition: Accounting for Subtle Differences between Similar Classes"} {"abstract": "Supervised relation extraction methods based on deep neural networks play an important role in the recent information extraction field. However, at present, their performance still falls short due to the existence of complicated relations. On the other hand, recently proposed pre-trained language models (PLMs) have achieved great success in multiple natural language processing tasks when fine-tuned in combination with downstream task models. However, the standard tasks of PLMs do not yet include relation extraction. We believe that PLMs can also be used to solve the relation extraction problem, but it is necessary to establish a specially designed downstream task model, or even a loss function, for dealing with complicated relations. In this paper, a new network architecture with a special loss function is designed to serve as a downstream model of PLMs for supervised relation extraction.
Experiments show that our method significantly outperforms the strongest baseline models across multiple public relation extraction datasets.", "field": [], "task": ["Language Modelling", "Relation Extraction"], "method": [], "dataset": ["SemEval-2010 Task 8"], "metric": ["F1"], "title": "Downstream Model Design of Pre-trained Language Model for Relation Extraction Task"} {"abstract": "Enabling effective and efficient machine learning (ML) over large-scale graph data (e.g., graphs with billions of edges) can have a huge impact on both industrial and scientific applications. However, community efforts to advance large-scale graph ML have been severely limited by the lack of a suitable public benchmark. For KDD Cup 2021, we present the OGB Large-Scale Challenge (OGB-LSC), a collection of three real-world datasets for advancing the state-of-the-art in large-scale graph ML. OGB-LSC provides graph datasets that are orders of magnitude larger than existing ones and covers three core graph learning tasks -- link prediction, graph regression, and node classification. Furthermore, OGB-LSC provides dedicated baseline experiments, scaling up expressive graph ML models to the massive datasets. We show that the expressive models significantly outperform simple scalable baselines, indicating an opportunity for dedicated efforts to further improve graph ML at scale. Our datasets and baseline code are released and maintained as part of our OGB initiative (Hu et al., 2020). We hope OGB-LSC at KDD Cup 2021 can empower the community to discover innovative solutions for large-scale graph ML.", "field": [], "task": ["Graph Learning", "Graph Regression", "Link Prediction", "Node Classification", "Regression"], "method": [], "dataset": ["MAG240M-LSC", "WikiKG90M-LSC", "PCQM4M-LSC"], "metric": ["Validation MAE", "Test Accuracy", "Test MAE", "Test MRR", "Validation MRR", "Validation Accuracy"], "title": "OGB-LSC: A Large-Scale Challenge for Machine Learning on Graphs"} {"abstract": "Single-view depth estimation using CNNs trained from unlabelled videos has shown significant promise. However, the excellent results have mostly been obtained in street-scene driving scenarios, and such methods often fail in other settings, particularly indoor videos taken by handheld devices, in which case the ego-motion is often degenerate, i.e., the rotation dominates the translation. In this work, we establish that the degenerate camera motions exhibited in handheld settings are a critical obstacle for unsupervised depth learning. A main contribution of our work is a fundamental analysis showing that the rotation behaves as noise during training, as opposed to the translation (baseline) which provides supervision signals. To capitalise on our findings, we propose a novel data pre-processing method for effective training, i.e., we search for image pairs with modest translation and remove their rotation via the proposed weak image rectification. With our pre-processing, existing unsupervised models can be trained well in challenging scenarios (e.g., NYUv2 dataset), and the results outperform the unsupervised SOTA by a large margin (0.147 vs.
0.189 in the AbsRel error).", "field": [], "task": ["Depth Estimation", "Monocular Depth Estimation", "Rectification", "Self-Supervised Learning"], "method": [], "dataset": ["NYU-Depth V2"], "metric": ["RMSE"], "title": "Unsupervised Depth Learning in Challenging Indoor Video: Weak Rectification to Rescue"} {"abstract": "This paper presents two variations of architecture referred to as RANet and BIRANet. The proposed architecture aims to use radar signal data along with RGB camera images to form a robust detection network that works efficiently, even in variable lighting and weather conditions such as rain, dust, fog, and others. First, radar information is fused in the feature extractor network. Second, radar points are used to generate guided anchors. Third, a method is proposed to improve region proposal network targets. BIRANet yields 72.3/75.3% average AP/AR on the NuScenes dataset, which is better than the performance of our base network Faster-RCNN with Feature pyramid network(FFPN). RANet gives 69.6/71.9% average AP/AR on the same dataset, which is reasonably acceptable performance. Also, both BIRANet and RANet are evaluated to be robust towards the noise.", "field": [], "task": ["2D Object Detection", "Autonomous Vehicles", "Object Detection", "Region Proposal", "Robust Object Detection", "Sensor Fusion"], "method": [], "dataset": ["nuScenes"], "metric": ["AR(m)", "AP(m)", "AR(s)", "MAP", "AP(s)", "AP75", "AP85", "AR(l)", "AP50", "AP(l)", "AR"], "title": "Radar+RGB Attentive Fusion for Robust Object Detection in Autonomous Vehicles"} {"abstract": "In this paper, we are concerned with the detection of a particular type of objects with extreme aspect ratios, namely slender objects. In real-world scenarios as well as widely-used datasets (such as COCO), slender objects are actually very common. However, this type of object has been largely overlooked by previous object detection algorithms. Upon our investigation, for a classical object detection method, a drastic drop of 18.9% mAP on COCO is observed, if solely evaluated on slender objects. Therefore, We systematically study the problem of slender object detection in this work. Accordingly, an analytical framework with carefully designed benchmark and evaluation protocols is established, in which different algorithms and modules can be inspected and compared. Our key findings include: 1) the essential role of anchors in label assignment; 2) the descriptive capability of the 2-point representation; 3) the crucial strategies for improving the detection of slender objects and regular objects. Our work identifies and extends the insights of existing methods that are previously underexploited. Furthermore, we propose a feature adaption strategy that achieves clear and consistent improvements over current representative object detection methods. In particular, a natural and effective extension of the center prior, which leads to a significant improvement on slender objects, is devised. We believe this work opens up new opportunities and calibrates ablation standards for future research in the field of object detection.", "field": [], "task": ["Object Detection"], "method": [], "dataset": ["COCO+"], "metric": ["mAR (COCO+ XS)"], "title": "Slender Object Detection: Diagnoses and Improvements"} {"abstract": "The goal of few-shot learning is to classify unseen categories with few labeled samples. 
Recently, low-level information metric-learning based methods have achieved satisfactory performance, since local representations (LRs) are more consistent between seen and unseen classes. However, most of these methods deal with each category in the support set independently, which is not sufficient to measure the relation between features, especially in a certain task. Moreover, low-level information-based metric learning methods suffer when dominant objects of different scales exist in a complex background. To address these issues, this paper proposes a novel Multi-scale Adaptive Task Attention Network (MATANet) for few-shot learning. Specifically, we first use a multi-scale feature generator to generate multiple features at different scales. Then, an adaptive task attention module is proposed to select the most important LRs among the entire task. Afterwards, a similarity-to-class module and a fusion layer are utilized to calculate a joint multi-scale similarity between the query image and the support set. Extensive experiments on popular benchmarks clearly show the effectiveness of the proposed MATANet compared with state-of-the-art methods.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Metric Learning"], "method": [], "dataset": ["Stanford Dogs 5-way (5-shot)", "CUB-200-2011 5-way (5-shot)", "Mini-Imagenet 5-way (1-shot)", "Stanford Cars 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "CUB-200-2011 5-way (1-shot)", "Stanford Cars 5-way (5-shot)", "Stanford Dogs 5-way (1-shot)"], "metric": ["Accuracy"], "title": "Multi-scale Adaptive Task Attention Network for Few-Shot Learning"} {"abstract": "In this paper, we propose a new paradigm for the task of entity-relation extraction. We cast the task as a multi-turn question answering problem, i.e., the extraction of entities and relations is transformed to the task of identifying answer spans from the context. This multi-turn QA formalization comes with several key advantages: firstly, the question query encodes important information for the entity/relation class we want to identify; secondly, QA provides a natural way of jointly modeling entity and relation; and thirdly, it allows us to exploit the well-developed machine reading comprehension (MRC) models. Experiments on the ACE and the CoNLL04 corpora demonstrate that the proposed paradigm significantly outperforms previous best models. We are able to obtain the state-of-the-art results on all of the ACE04, ACE05 and CoNLL04 datasets, increasing the SOTA results on the three datasets to 49.4 (+1.0), 60.2 (+0.6) and 68.9 (+2.1), respectively. Additionally, we construct a newly developed dataset RESUME in Chinese, which requires multi-step reasoning to construct entity dependencies, as opposed to the single-step dependency extraction in the triplet extraction in previous datasets. The proposed multi-turn QA model also achieves the best performance on the RESUME dataset.", "field": [], "task": ["Machine Reading Comprehension", "Question Answering", "Reading Comprehension", "Relation Extraction"], "method": [], "dataset": ["CoNLL04", "ACE 2005", "ACE 2004"], "metric": ["Sentence Encoder", "NER Micro F1", "RE+ Micro F1"], "title": "Entity-Relation Extraction as Multi-Turn Question Answering"} {"abstract": "We propose a self-supervised spatiotemporal learning technique which leverages the chronological order of videos. Our method can learn the spatiotemporal representation of the video by predicting the order of shuffled clips from the video.
The category of the video is not required, which gives our technique the potential to take advantage of infinite unannotated videos. There exist related works which use frames, while compared to frames, clips are more consistent with the video dynamics. Clips can help to reduce the uncertainty of orders and are more appropriate to learn a video representation. The 3D convolutional neural networks are utilized to extract features for clips, and these features are processed to predict the actual order. The learned representations are evaluated via nearest neighbor retrieval experiments. We also use the learned networks as the pre-trained models and finetune them on the action recognition task. Three types of 3D convolutional neural networks are tested in experiments, and we gain large improvements compared to existing self-supervised methods.\r", "field": [], "task": ["Action Recognition", "Self-Supervised Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["UCF101", "HMDB51"], "metric": ["3-fold Accuracy", "Pre-Training Dataset", "Top-1 Accuracy"], "title": "Self-Supervised Spatiotemporal Learning via Video Clip Order Prediction"} {"abstract": "We contribute HAA500, a manually annotated human-centric atomic action dataset for action recognition on 500 classes with over 591k labeled frames. Unlike existing atomic action datasets, where coarse-grained atomic actions were labeled with action-verbs, e.g., \"Throw\", HAA500 contains fine-grained atomic actions where only consistent actions fall under the same label, e.g., \"Baseball Pitching\" vs \"Free Throw in Basketball\", to minimize ambiguities in action classification. HAA500 has been carefully curated to capture the movement of human figures with less spatio-temporal label noises to greatly enhance the training of deep neural networks. The advantages of HAA500 include: 1) human-centric actions with a high average of 69.7% detectable joints for the relevant human poses; 2) each video captures the essential elements of an atomic action without irrelevant frames; 3) fine-grained atomic action classes. Our extensive experiments validate the benefits of human-centric and atomic characteristics of HAA, which enables the trained model to improve prediction by attending to atomic human poses. We detail the HAA500 dataset statistics and collection methodology, and compare quantitatively with existing action recognition datasets.", "field": [], "task": ["Action Classification", "Action Classification ", "Action Recognition"], "method": [], "dataset": ["HAA500"], "metric": ["Top-1 (%)"], "title": "HAA500: Human-Centric Atomic Action Dataset with Curated Videos"} {"abstract": "The paper presents a novel method of finding a fragment in a long temporal sequence similar to the set of shorter sequences. We are the first to propose an algorithm for such a search that does not rely on computing the average sequence from query examples. Instead, we use query examples as is, utilizing all of them simultaneously. The introduced method based on the Dynamic Time Warping (DTW) technique is suited explicitly for few-shot query-by-example retrieval tasks. We evaluate it on two different few-shot problems from the field of Natural Language Processing. 
The results show it either outperforms baselines and previous approaches or achieves comparable results when a low number of examples is available.", "field": [], "task": ["Semantic Retrieval"], "method": [], "dataset": ["Contract Discovery"], "metric": ["Soft-F1"], "title": "Dynamic Boundary Time Warping for Sub-sequence Matching with Few Examples"} {"abstract": "Clauses and sentences rarely stand on their own in an actual discourse; rather, the relationship\r\nbetween them carries important information that allows the discourse to express a meaning as a\r\nwhole beyond the sum of its individual parts. Rhetorical analysis seeks to uncover this coherence\r\nstructure. In this article, we present CODRA\u2014 a COmplete probabilistic Discriminative\r\nframework for performing Rhetorical Analysis in accordance with Rhetorical Structure Theory,\r\nwhich posits a tree representation of a discourse.\r\nCODRA comprises a discourse segmenter and a discourse parser. First, the discourse\r\nsegmenter, which is based on a binary classifier, identifies the elementary discourse units in a\r\ngiven text. Then the discourse parser builds a discourse tree by applying an optimal parsing\r\nalgorithm to probabilities inferred from two Conditional Random Fields: one for intra-sentential\r\nparsing and the other for multi-sentential parsing. We present two approaches to combine these\r\ntwo stages of parsing effectively. By conducting a series of empirical evaluations over two\r\ndifferent data sets, we demonstrate that CODRA significantly outperforms the state-of-the-art,\r\noften by a wide margin. We also show that a reranking of the k-best parse hypotheses generated\r\nby CODRA can potentially improve the accuracy even further", "field": [], "task": ["Discourse Parsing"], "method": [], "dataset": ["RST-DT"], "metric": ["RST-Parseval (Relation)", "RST-Parseval (Span)", "RST-Parseval (Nuclearity)"], "title": "Two-stage Discourse Parser with a Sliding Window"} {"abstract": "Discourse parsing is an integral part of understanding information flow and\nargumentative structure in documents. Most previous research has focused on\ninducing and evaluating models from the English RST Discourse Treebank.\nHowever, discourse treebanks for other languages exist, including Spanish,\nGerman, Basque, Dutch and Brazilian Portuguese. The treebanks share the same\nunderlying linguistic theory, but differ slightly in the way documents are\nannotated. In this paper, we present (a) a new discourse parser which is\nsimpler, yet competitive (significantly better on 2/3 metrics) to state of the\nart for English, (b) a harmonization of discourse treebanks across languages,\nenabling us to present (c) what to the best of our knowledge are the first\nexperiments on cross-lingual discourse parsing.", "field": [], "task": ["Discourse Parsing"], "method": [], "dataset": ["RST-DT"], "metric": ["RST-Parseval (Relation)", "RST-Parseval (Nuclearity)", "RST-Parseval (Span)", "RST-Parseval (Full)"], "title": "Cross-lingual RST Discourse Parsing"} {"abstract": "Current state-of-the-art approaches to skeleton-based action recognition are\nmostly based on recurrent neural networks (RNN). In this paper, we propose a\nnovel convolutional neural networks (CNN) based framework for both action\nclassification and detection. Raw skeleton coordinates as well as skeleton\nmotion are fed directly into CNN for label prediction. A novel skeleton\ntransformer module is designed to rearrange and select important skeleton\njoints automatically. 
With a simple 7-layer network, we obtain 89.3% accuracy\non validation set of the NTU RGB+D dataset. For action detection in untrimmed\nvideos, we develop a window proposal network to extract temporal segment\nproposals, which are further classified within the same network. On the recent\nPKU-MMD dataset, we achieve 93.7% mAP, surpassing the baseline by a large\nmargin.", "field": [], "task": ["Action Classification", "Action Classification ", "Action Detection", "Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D", "PKU-MMD"], "metric": ["Accuracy (CS)", "Accuracy (CV)", "mAP@0.50 (CV)", "mAP@0.50 (CS)"], "title": "Skeleton-based Action Recognition with Convolutional Neural Networks"} {"abstract": "We present an efficient method for the semi-supervised video object segmentation. Our method achieves accuracy competitive with state-of-the-art methods while running in a fraction of time compared to others. To this end, we propose a deep Siamese encoder-decoder network that is designed to take advantage of mask propagation and object detection while avoiding the weaknesses of both approaches. Our network, learned through a two-stage training process that exploits both synthetic and real data, works robustly without any online learning or post-processing. We validate our method on four benchmark sets that cover single and multiple object segmentation. On all the benchmark sets, our method shows comparable accuracy while having the order of magnitude faster runtime. We also provide extensive ablation and add-on studies to analyze and evaluate our framework.", "field": [], "task": ["Object Detection", "Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation", "Visual Object Tracking"], "method": [], "dataset": ["DAVIS 2017 (val)", "DAVIS 2017 (test-dev)", "DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "Fast Video Object Segmentation by Reference-Guided Mask Propagation"} {"abstract": "For the segmentation of moving objects in videos, the analysis of long-term point trajectories has been very popular recently. In this paper, we formulate the segmentation of a video sequence based on point trajectories as a minimum cost multicut problem. Unlike the commonly used spectral clustering formulation, the minimum cost multicut formulation gives natural rise to optimize not only for a cluster assignment but also for the number of clusters while allowing for varying cluster sizes. In this setup, we provide a method to create a long-term point trajectory graph with attractive and repulsive binary terms and outperform state-of-the-art methods based on spectral clustering on the FBMS-59 dataset and on the motion subtask of the VSB100 dataset.", "field": [], "task": ["Unsupervised Video Object Segmentation"], "method": [], "dataset": ["DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "Motion Trajectory Segmentation via Minimum Cost Multicuts"} {"abstract": "In this paper, we present a fast and strong neural approach for general purpose text matching applications. 
We explore what is sufficient to build a fast and well-performing text matching model and propose to keep three key features available for inter-sequence alignment: original point-wise features, previously aligned features, and contextual features, while simplifying all the remaining components. We conduct experiments on four well-studied benchmark datasets across tasks of natural language inference, paraphrase identification and answer selection. The performance of our model is on par with the state-of-the-art on all datasets with far fewer parameters, and the inference speed is at least 6 times faster than that of similarly performing models.", "field": [], "task": ["Answer Selection", "Natural Language Inference", "Paraphrase Identification", "Question Answering", "Text Matching"], "method": [], "dataset": ["SciTail", "SNLI", "Quora Question Pairs", "WikiQA"], "metric": ["% Test Accuracy", "MAP", "Parameters", "MRR", "Accuracy", "% Train Accuracy"], "title": "Simple and Effective Text Matching with Richer Alignment Features"} {"abstract": "In this paper, we describe two systems we developed for the three tracks in which we participated in the BEA-2019 GEC Shared Task. We investigate competitive classification models with bi-directional recurrent neural networks (Bi-RNN) and neural machine translation (NMT) models. For different tracks, we use ensemble systems to selectively combine the NMT models, the classification models, and some rules, and demonstrate that an ensemble solution can effectively improve GEC performance over single systems. Our GEC systems ranked first in the Unrestricted Track, and third in both the Restricted Track and the Low Resource Track.", "field": [], "task": ["Grammatical Error Correction", "Machine Translation"], "method": [], "dataset": ["BEA-2019 (test)"], "metric": ["F0.5"], "title": "The LAIX Systems in the BEA-2019 GEC Shared Task"} {"abstract": "We introduce Interpolation Consistency Training (ICT), a simple and computationally efficient algorithm for training Deep Neural Networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. In classification problems, ICT moves the decision boundary to low-density regions of the data distribution. Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets. Our theoretical analysis shows that ICT corresponds to a certain type of data-adaptive regularization with unlabeled points which reduces overfitting to labeled points under high confidence values.", "field": [], "task": ["Semi-Supervised Image Classification"], "method": [], "dataset": ["CIFAR-10, 2000 Labels", "CIFAR-10, 4000 Labels", "SVHN, 1000 labels", "CIFAR-10, 1000 Labels"], "metric": ["Accuracy"], "title": "Interpolation Consistency Training for Semi-Supervised Learning"} {"abstract": "This paper proposes a new neural architecture for collaborative ranking with\nimplicit feedback. Our model, LRML (\\textit{Latent Relational Metric Learning}),\nis a novel metric learning approach for recommendation. More specifically,\ninstead of simple push-pull mechanisms between user and item pairs, we propose\nto learn latent relations that describe each user-item interaction. This helps\nto alleviate the potential geometric inflexibility of existing metric learning\napproaches.
This enables not only better performance but also a greater extent\nof modeling capability, allowing our model to scale to a larger number of\ninteractions. In order to do so, we employ an augmented memory module and learn\nto attend over these memory blocks to construct latent relations. The\nmemory-based attention module is controlled by the user-item interaction,\nmaking the learned relation vector specific to each user-item pair. Hence, this\ncan be interpreted as learning an exclusive and optimal relational translation\nfor each user-item interaction. The proposed architecture demonstrates\nstate-of-the-art performance across multiple recommendation benchmarks. LRML\noutperforms other metric learning models by $6\%-7.5\%$ in terms of Hits@10 and\nnDCG@10 on large datasets such as Netflix and MovieLens20M. Moreover,\nqualitative studies also provide evidence that our proposed model is able\nto infer and encode explicit sentiment, temporal and attribute information\ndespite being only trained on implicit feedback. As such, this ascertains the\nability of LRML to uncover hidden relational structure within implicit\ndatasets.", "field": [], "task": ["Collaborative Ranking", "Metric Learning", "Recommendation Systems"], "method": [], "dataset": ["MovieLens 1M", "MovieLens 20M", "Netflix"], "metric": ["nDCG@10", "HR@10"], "title": "Latent Relational Metric Learning via Memory-based Attention for Collaborative Ranking"} {"abstract": "Interest in emotion recognition in conversations (ERC) has been increasing in various fields, because it can be used to analyze user behaviors and detect fake news. Many recent ERC methods use graph-based neural networks to take the relationships between the utterances of the speakers into account. In particular, the state-of-the-art method considers self- and inter-speaker dependencies in conversations by using relational graph attention networks (RGAT). However, graph-based neural networks do not take sequential information into account. In this paper, we propose relational position encodings that provide RGAT with sequential information reflecting the relational graph structure. Accordingly, our RGAT model can capture both the speaker dependency and the sequential information. Experiments on four ERC datasets show that our model is beneficial to recognizing emotions expressed in conversations. In addition, our approach empirically outperforms the state-of-the-art on all of the benchmark datasets.", "field": [], "task": ["Emotion Recognition", "Emotion Recognition in Conversation"], "method": [], "dataset": ["IEMOCAP", "MELD", "EmoryNLP", "DailyDialog"], "metric": ["Weighted Macro-F1", "F1", "Micro-F1"], "title": "Relation-aware Graph Attention Networks with Relational Position Encodings for Emotion Recognition in Conversations"} {"abstract": "We consider the task of 3D joint location and orientation prediction from a monocular video with the skinned multi-person linear (SMPL) model. We first infer 2D joint locations with an off-the-shelf pose estimation algorithm. We use the SPIN algorithm and estimate initial predictions of body pose, shape and camera parameters from a deep regression neural network. We then adhere to the SMPLify algorithm, which receives those initial parameters and optimizes them so that inferred 3D joints from the SMPL model would fit the 2D joint locations. This algorithm involves a projection step of 3D joints to the 2D image plane.
The conventional approach is to follow weak perspective assumptions which use ad-hoc focal length. Through experimentation on the 3D Poses in the Wild (3DPW) dataset, we show that using full perspective projection, with the correct camera center and an approximated focal length, provides favorable results. Our algorithm has resulted in a winning entry for the 3DPW Challenge, reaching first place in joints orientation accuracy.", "field": [], "task": ["3D Human Pose Estimation", "Pose Estimation", "Regression"], "method": [], "dataset": ["3D Poses in the Wild Challenge"], "metric": ["MPJPE", "MPJAE"], "title": "Beyond Weak Perspective for Monocular 3D Human Pose Estimation"} {"abstract": "Contrastive learning has nearly closed the gap between supervised and self-supervised learning of image representations. Existing extensions of contrastive learning to the domain of video data however do not explicitly attempt to represent the internal distinctiveness across the temporal dimension of video clips. We develop a new temporal contrastive learning framework consisting of two novel losses to improve upon existing contrastive self-supervised video representation learning methods. The first loss adds the task of discriminating between non-overlapping clips from the same video, whereas the second loss aims to discriminate between timesteps of the feature map of an input clip in order to increase the temporal diversity of the features. Temporal contrastive learning achieves significant improvement over the state-of-the-art results in downstream video understanding tasks such as action recognition, limited-label action classification, and nearest-neighbor video retrieval on video datasets across multiple 3D CNN architectures. With the commonly used 3D-ResNet-18 architecture, we achieve 82.4% (+5.1% increase over the previous best) top-1 accuracy on UCF101 and 52.9% (+5.4% increase) on HMDB51 action classification, and 56.2% (+11.7% increase) Top-1 Recall on UCF101 nearest neighbor video retrieval.", "field": [], "task": ["Action Classification", "Action Classification ", "Action Recognition", "Representation Learning", "Self-Supervised Action Recognition", "Self-Supervised Learning", "Self-supervised Video Retrieval", "Video Retrieval", "Video Understanding"], "method": [], "dataset": ["UCF101", "HMDB51"], "metric": ["3-fold Accuracy", "Pre-Training Dataset", "Top-1 Accuracy"], "title": "TCLR: Temporal Contrastive Learning for Video Representation"} {"abstract": "We capitalize on large amounts of unlabeled video in order to learn a model\nof scene dynamics for both video recognition tasks (e.g. action classification)\nand video generation tasks (e.g. future prediction). We propose a generative\nadversarial network for video with a spatio-temporal convolutional architecture\nthat untangles the scene's foreground from the background. Experiments suggest\nthis model can generate tiny videos up to a second at full frame rate better\nthan simple baselines, and we show its utility at predicting plausible futures\nof static images. Moreover, experiments and visualizations show the model\ninternally learns useful features for recognizing actions with minimal\nsupervision, suggesting scene dynamics are a promising signal for\nrepresentation learning. 
We believe generative video models can impact many\napplications in video understanding and simulation.", "field": [], "task": ["Action Classification", "Action Classification ", "Future prediction", "Representation Learning", "Self-Supervised Action Recognition", "Video Generation", "Video Recognition", "Video Understanding"], "method": [], "dataset": ["UCF-101 16 frames, 64x64, Unconditional", "UCF101", "UCF-101 16 frames, Unconditional, Single GPU"], "metric": ["Inception Score", "3-fold Accuracy", "Pre-Training Dataset"], "title": "Generating Videos with Scene Dynamics"} {"abstract": "Unseen Action Recognition (UAR) aims to recognise novel action categories\nwithout training examples. While previous methods focus on inner-dataset\nseen/unseen splits, this paper proposes a pipeline using a large-scale training\nsource to achieve a Universal Representation (UR) that can generalise to a more\nrealistic Cross-Dataset UAR (CD-UAR) scenario. We first address UAR as a\nGeneralised Multiple-Instance Learning (GMIL) problem and discover\n'building-blocks' from the large-scale ActivityNet dataset using distribution\nkernels. Essential visual and semantic components are preserved in a shared\nspace to achieve the UR that can efficiently generalise to new datasets.\nPredicted UR exemplars can be improved by a simple semantic adaptation, and\nthen an unseen action can be directly recognised using UR during the test.\nWithout further training, extensive experiments manifest significant\nimprovements over the UCF101 and HMDB51 benchmarks.", "field": [], "task": ["Action Recognition", "Multiple Instance Learning", "Temporal Action Localization"], "method": [], "dataset": ["UCF101", "HMDB-51", "ActivityNet"], "metric": ["Average accuracy of 3 splits", "mAP", "3-fold Accuracy"], "title": "Towards Universal Representation for Unseen Action Recognition"} {"abstract": "Text recognition has attracted considerable research interests because of its various applications. The cutting-edge text recognition methods are based on attention mechanisms. However, most of attention methods usually suffer from serious alignment problem due to its recurrency alignment operation, where the alignment relies on historical decoding results. To remedy this issue, we propose a decoupled attention network (DAN), which decouples the alignment operation from using historical decoding results. DAN is an effective, flexible and robust end-to-end text recognizer, which consists of three components: 1) a feature encoder that extracts visual features from the input image; 2) a convolutional alignment module that performs the alignment operation based on visual features from the encoder; and 3) a decoupled text decoder that makes final prediction by jointly using the feature map and attention maps. Experimental results show that DAN achieves state-of-the-art performance on multiple text recognition tasks, including offline handwritten text recognition and regular/irregular scene text recognition.", "field": [], "task": ["Scene Text", "Scene Text Recognition"], "method": [], "dataset": ["ICDAR2013", "ICDAR2015", "ICDAR 2003", "SVT"], "metric": ["Accuracy"], "title": "Decoupled Attention Network for Text Recognition"} {"abstract": "We seek to understand the arrow of time in videos -- what makes videos look like they are playing forwards or backwards? Can we visualize the cues? Can the arrow of time be a supervisory signal useful for activity analysis? 
To this end, we build three large-scale video datasets and apply a learning-based approach to these tasks. To learn the arrow of time efficiently and reliably, we design a ConvNet suitable for extended temporal footprints and for class activation visualization, and study the effect of artificial cues, such as cinematographic conventions, on learning. Our trained model achieves state-of-the-art performance on large-scale real-world video datasets. Through cluster analysis and localization of important regions for the prediction, we examine learned visual cues that are consistent among many samples and show when and where they occur. Lastly, we use the trained ConvNet for two applications: self-supervision for action recognition, and video forensics -- determining whether Hollywood film clips have been deliberately reversed in time, often used as special effects.", "field": [], "task": ["Action Recognition", "Self-Supervised Action Recognition", "Temporal Action Localization", "Video Forensics"], "method": [], "dataset": ["UCF101"], "metric": ["3-fold Accuracy", "Pre-Training Dataset"], "title": "Learning and Using the Arrow of Time"} {"abstract": "Most state-of-the-art semi-supervised video object segmentation methods rely\non a pixel-accurate mask of a target object provided for the first frame of a\nvideo. However, obtaining a detailed segmentation mask is expensive and\ntime-consuming. In this work we explore an alternative way of identifying a\ntarget object, namely by employing language referring expressions. Besides\nbeing a more practical and natural way of pointing out a target object, using\nlanguage specifications can help to avoid drift as well as make the system more\nrobust to complex dynamics and appearance variations. Leveraging recent\nadvances of language grounding models designed for images, we propose an\napproach to extend them to video data, ensuring temporally coherent\npredictions. To evaluate our method we augment the popular video object\nsegmentation benchmarks, DAVIS'16 and DAVIS'17 with language descriptions of\ntarget objects. We show that our language-supervised approach performs on par\nwith the methods which have access to a pixel-level mask of the target object\non DAVIS'16 and is competitive to methods using scribbles on the challenging\nDAVIS'17 dataset.", "field": [], "task": ["Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["DAVIS 2017 (val)", "DAVIS 2016", "DAVIS-2017"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "mIoU", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "Video Object Segmentation with Language Referring Expressions"} {"abstract": "Text-level discourse parsing remains a challenge. The current state-of-the-art overall accuracy in relation assignment is 55.73%, achieved by Joty et al. (2013). However, their model has a high order of time complexity, and thus cannot be applied in practice. In this work, we develop a much faster model whose time complexity is linear in the number of sentences. Our model adopts a greedy bottom-up approach, with two linear-chain CRFs applied in cascade as local classifiers. To enhance the accuracy of the pipeline, we add additional constraints in the Viterbi decoding of the first CRF. In addition to efficiency, our parser also significantly outperforms the state of the art. 
Moreover, our novel approach of post-editing, which modifies a fully-built tree by considering information from constituents on upper levels, can further improve the accuracy.", "field": [], "task": ["Discourse Parsing"], "method": [], "dataset": ["RST-DT"], "metric": ["RST-Parseval (Relation)", "RST-Parseval (Span)", "RST-Parseval (Nuclearity)"], "title": "A Linear-Time Bottom-Up Discourse Parser with Constraints and Post-Editing"} {"abstract": "The common approach to 3D human pose estimation is predicting the body joint\ncoordinates relative to the hip. This works well for a single person but is\ninsufficient in the case of multiple interacting people. Methods predicting\nabsolute coordinates first estimate a root-relative pose then calculate the\ntranslation via a secondary optimization task. We propose a neural network that\npredicts joints in a camera centered coordinate system instead of a\nroot-relative one. Unlike previous methods, our network works in a single step\nwithout any post-processing. Our network beats previous methods on the\nMuPoTS-3D dataset and achieves state-of-the-art results.", "field": [], "task": ["3D Human Pose Estimation", "Depth Estimation", "Pose Estimation"], "method": [], "dataset": ["MuPoTS-3D"], "metric": ["MPJPE"], "title": "Absolute Human Pose Estimation with Depth Prediction Network"} {"abstract": "In this paper, we propose a deep neural network architecture for object\nrecognition based on recurrent neural networks. The proposed network, called\nReNet, replaces the ubiquitous convolution+pooling layer of the deep\nconvolutional neural network with four recurrent neural networks that sweep\nhorizontally and vertically in both directions across the image. We evaluate\nthe proposed ReNet on three widely-used benchmark datasets; MNIST, CIFAR-10 and\nSVHN. The result suggests that ReNet is a viable alternative to the deep\nconvolutional neural network, and that further investigation is needed.", "field": [], "task": ["Image Classification", "Object Recognition"], "method": [], "dataset": ["SVHN", "MNIST", "CIFAR-10"], "metric": ["Percentage error", "Percentage correct"], "title": "ReNet: A Recurrent Neural Network Based Alternative to Convolutional Networks"} {"abstract": "Face recognition has evolved as a prominent biometric authentication modality. However, vulnerability to presentation attacks curtails its reliable deployment. Automatic detection of presentation attacks is essential for secure use of face recognition technology in unattended scenarios. In this work, we introduce a Convolutional Neural Network (CNN) based framework for presentation attack detection, with deep pixel-wise supervision. The framework uses only frame level information making it suitable for deployment in smart devices with minimal computational and time overhead. We demonstrate the effectiveness of the proposed approach in public datasets for both intra as well as cross-dataset experiments. The proposed approach achieves an HTER of 0% in Replay Mobile dataset and an ACER of 0.42% in Protocol-1 of OULU dataset outperforming state of the art methods.", "field": [], "task": ["Face Anti-Spoofing", "Face Presentation Attack Detection", "Face Recognition"], "method": [], "dataset": ["Replay Mobile"], "metric": ["HTER"], "title": "Deep Pixel-wise Binary Supervision for Face Presentation Attack Detection"} {"abstract": "Majority of the text modelling techniques yield only point-estimates of document embeddings and lack in capturing the uncertainty of the estimates. 
These uncertainties give a notion of how well the embeddings represent a document. We present the Bayesian subspace multinomial model (Bayesian SMM), a generative log-linear model that learns to represent documents in the form of Gaussian distributions, thereby encoding the uncertainty in their covariances. Additionally, in the proposed Bayesian SMM, we address a commonly encountered problem of intractability that appears during variational inference in mixed-logit models. We also present a generative Gaussian linear classifier for topic identification that exploits the uncertainty in document embeddings. Our intrinsic evaluation using the perplexity measure shows that the proposed Bayesian SMM fits the data better than the state-of-the-art neural variational document model on the Fisher speech and 20Newsgroups text corpora. Our topic identification experiments show that the proposed systems are robust to over-fitting on unseen test data. The topic ID results show that the proposed model outperforms state-of-the-art unsupervised topic models and achieves results comparable to the state-of-the-art fully supervised discriminative models.", "field": [], "task": ["Topic Models", "Variational Inference"], "method": [], "dataset": ["20 Newsgroups"], "metric": ["Test perplexity"], "title": "Learning document embeddings along with their uncertainties"} {"abstract": "It has been widely proven that modelling long-range dependencies in fully convolutional networks (FCNs) via global aggregation modules is critical for complex scene understanding tasks such as semantic segmentation and object detection. However, global aggregation is often dominated by features of large patterns and tends to oversmooth regions that contain small patterns (e.g., boundaries and small objects). To resolve this problem, we propose to first use \\emph{Global Aggregation} and then \\emph{Local Distribution}, which is called GALD, where long-range dependencies are more confidently used inside large pattern regions and vice versa. The size of each pattern at each position is estimated in the network as a per-channel mask map. GALD is end-to-end trainable and can be easily plugged into existing FCNs with various global aggregation modules for a wide range of vision tasks, and consistently improves the performance of state-of-the-art object detection and instance segmentation approaches. In particular, GALD used in semantic segmentation achieves new state-of-the-art performance on the Cityscapes test set with mIoU 83.3\\%. Code is available at: \\url{https://github.com/lxtGH/GALD-Net}", "field": [], "task": ["Instance Segmentation", "Object Detection", "Scene Understanding", "Semantic Segmentation"], "method": [], "dataset": ["PASCAL VOC 2007", "Cityscapes test"], "metric": ["Mean IoU", "Mean IoU (class)"], "title": "Global Aggregation then Local Distribution in Fully Convolutional Networks"} {"abstract": "Recently, Frankle & Carbin (2019) demonstrated that randomly-initialized dense networks contain subnetworks that, once found, can be trained to reach test accuracy comparable to the trained dense network. However, finding these high-performing trainable subnetworks is expensive, requiring an iterative process of training and pruning weights.
In this paper, we propose (and prove) a stronger Multi-Prize Lottery Ticket Hypothesis: A sufficiently over-parameterized neural network with random weights contains several subnetworks (winning tickets) that (a) have comparable accuracy to a dense target network with learned weights (prize 1), (b) do not require any further training to achieve prize 1 (prize 2), and (c) is robust to extreme forms of quantization (i.e., binary weights and/or activation) (prize 3). This provides a new paradigm for learning compact yet highly accurate binary neural networks simply by pruning and quantizing randomly weighted full precision neural networks. We also propose an algorithm for finding multi-prize tickets (MPTs) and test it by performing a series of experiments on CIFAR-10 and ImageNet datasets. Empirical results indicate that as models grow deeper and wider, multi-prize tickets start to reach similar (and sometimes even higher) test accuracy compared to their significantly larger and full-precision counterparts that have been weight-trained. Without ever updating the weight values, our MPTs-1/32 not only set new binary weight network state-of-the-art (SOTA) Top-1 accuracy -- 94.8% on CIFAR-10 and 74.03% on ImageNet -- but also outperform their full-precision counterparts by 1.78% and 0.76%, respectively. Further, our MPT-1/1 achieves SOTA Top-1 accuracy (91.9%) for binary neural networks on CIFAR-10. Code and pre-trained models are available at: https://github.com/chrundle/biprop.", "field": [], "task": ["Quantization"], "method": [], "dataset": ["ImageNet"], "metric": ["Top-1"], "title": "Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network"} {"abstract": "While building a text-to-speech system for the Arabic language, we found that the system synthesized speeches with many pronunciation errors. The primary source of these errors is the lack of diacritics in modern standard Arabic writing. These diacritics are small strokes that appear above or below each letter to provide pronunciation and grammatical information. We propose three deep learning models to recover Arabic text diacritics based on our work in a text-to-speech synthesis system using deep learning. The first model is a baseline model used to test how a simple deep learning model performs on the corpora. The second model is based on an encoder-decoder architecture, which resembles our text-to-speech synthesis model with many modifications to suit this problem. The last model is based on the encoder part of the text-to-speech model, which achieves state-of-the-art performances in both word error rate and diacritic error rate metrics. These models will benefit a wide range of natural language processing applications such as text-to-speech, part-of-speech tagging, and machine translation.", "field": [], "task": ["Arabic Text Diacritization", "Machine Translation", "Part-Of-Speech Tagging", "Speech Synthesis", "Text-To-Speech Synthesis"], "method": [], "dataset": ["Tashkeela"], "metric": ["Diacritic Error Rate", "Word Error Rate (WER)"], "title": "Effective Deep Learning Models for Automatic Diacritization of Arabic Text"} {"abstract": "Visual Question Answering (VQA) models have struggled with counting objects\nin natural images so far. We identify a fundamental problem due to soft\nattention in these models as a cause. 
To circumvent this problem, we propose a\nneural network component that allows robust counting from object proposals.\nExperiments on a toy task show the effectiveness of this component and we\nobtain state-of-the-art accuracy on the number category of the VQA v2 dataset\nwithout negatively affecting other categories, even outperforming ensemble\nmodels with our single model. On a difficult balanced pair metric, the\ncomponent gives a substantial improvement in counting over a strong baseline by\n6.6%.", "field": [], "task": ["Visual Question Answering"], "method": [], "dataset": ["VQA v2 test-std", "VQA v2 test-dev"], "metric": ["overall", "Accuracy"], "title": "Learning to Count Objects in Natural Images for Visual Question Answering"} {"abstract": "Video object segmentation is challenging due to fast moving objects, deforming shapes, and cluttered backgrounds. Optical flow can be used to propagate an object segmentation over time but, unfortunately, flow is often inaccurate, particularly around object boundaries. Such boundaries are precisely where we want our segmentation to be accurate. To obtain accurate segmentation across time, we propose an efficient algorithm that considers video segmentation and optical flow estimation simultaneously. For video segmentation, we formulate a principled, multi-scale, spatio-temporal objective function that uses optical flow to propagate information between frames. For optical flow estimation, particularly at object boundaries, we compute the flow independently in the segmented regions and recompose the results. We call the process object flow and demonstrate the effectiveness of jointly optimizing optical flow and video segmentation using an iterative scheme. Experiments on the SegTrack v2 and Youtube-Objects datasets show that the proposed algorithm performs favorably against the other state-of-the-art methods.", "field": [], "task": ["Optical Flow Estimation", "Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["DAVIS 2016", "YouTube"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "mIoU", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "Video Segmentation via Object Flow"} {"abstract": "We suggest a new idea of Editorial Network - a mixed extractive-abstractive\nsummarization approach, which is applied as a post-processing step over a given\nsequence of extracted sentences. Our network tries to imitate the decision\nprocess of a human editor during summarization. Within such a process, each\nextracted sentence may be either kept untouched, rephrased or completely\nrejected. We further suggest an effective way for training the \"editor\" based\non a novel soft-labeling approach. Using the CNN/DailyMail dataset we\ndemonstrate the effectiveness of our approach compared to state-of-the-art\nextractive-only or abstractive-only baseline methods.", "field": [], "task": ["Abstractive Text Summarization", "Document Summarization"], "method": [], "dataset": ["CNN / Daily Mail"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "An Editorial Network for Enhanced Document Summarization"} {"abstract": "Video Object Segmentation, and video processing in general, has been\nhistorically dominated by methods that rely on the temporal consistency and\nredundancy in consecutive video frames. 
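To make the reliance on temporal consistency concrete, here is a minimal and deliberately naive sketch of propagating a segmentation mask to the next frame by following a flow field with nearest-neighbour rounding; the mask, the constant flow, and the frame size are invented, and real systems would also handle occlusion, which is precisely where such propagation breaks down.

```python
import numpy as np

def warp_mask(mask, flow):
    """Propagate a binary mask to the next frame by following the flow (nearest neighbour)."""
    h, w = mask.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    tx = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, w - 1)
    ty = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, h - 1)
    nxt = np.zeros_like(mask)
    nxt[ty[mask > 0], tx[mask > 0]] = 1
    return nxt

mask = np.zeros((6, 6), dtype=np.uint8)
mask[2:4, 2:4] = 1                                 # object location in frame t
flow = np.full((6, 6, 2), 1.0)                     # everything moves one pixel right and down
print(warp_mask(mask, flow))                       # predicted location in frame t+1
```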
When the temporal smoothness is\nsuddenly broken, such as when an object is occluded, or some frames are missing\nin a sequence, the result of these methods can deteriorate significantly or\nthey may not even produce any result at all. This paper explores the orthogonal\napproach of processing each frame independently, i.e disregarding the temporal\ninformation. In particular, it tackles the task of semi-supervised video object\nsegmentation: the separation of an object from the background in a video, given\nits mask in the first frame. We present Semantic One-Shot Video Object\nSegmentation (OSVOS-S), based on a fully-convolutional neural network\narchitecture that is able to successively transfer generic semantic\ninformation, learned on ImageNet, to the task of foreground segmentation, and\nfinally to learning the appearance of a single annotated object of the test\nsequence (hence one shot). We show that instance level semantic information,\nwhen combined effectively, can dramatically improve the results of our previous\nmethod, OSVOS. We perform experiments on two recent video segmentation\ndatabases, which show that OSVOS-S is both the fastest and most accurate method\nin the state of the art.", "field": [], "task": ["Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["DAVIS 2017 (val)", "DAVIS 2017 (test-dev)", "DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "Video Object Segmentation Without Temporal Information"} {"abstract": "This paper tackles the problem of video object segmentation, given some user\nannotation which indicates the object of interest. The problem is formulated as\npixel-wise retrieval in a learned embedding space: we embed pixels of the same\nobject instance into the vicinity of each other, using a fully convolutional\nnetwork trained by a modified triplet loss as the embedding model. Then the\nannotated pixels are set as reference and the rest of the pixels are classified\nusing a nearest-neighbor approach. The proposed method supports different kinds\nof user input such as segmentation mask in the first frame (semi-supervised\nscenario), or a sparse set of clicked points (interactive scenario). In the\nsemi-supervised scenario, we achieve results competitive with the state of the\nart but at a fraction of computation cost (275 milliseconds per frame). In the\ninteractive scenario where the user is able to refine their input iteratively,\nthe proposed method provides instant response to each input, and reaches\ncomparable quality to competing methods with much less interaction.", "field": [], "task": ["Metric Learning", "Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation", "Visual Object Tracking"], "method": [], "dataset": ["DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "Blazingly Fast Video Object Segmentation with Pixel-Wise Metric Learning"} {"abstract": "We propose a novel video object segmentation algorithm based on pixel-level\nmatching using Convolutional Neural Networks (CNN). Our network aims to\ndistinguish the target area from the background on the basis of the pixel-level\nsimilarity between two object units. 
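A small numpy sketch of the pixel-level similarity idea described just above: the mean feature of the annotated target region is compared against every pixel's feature with cosine similarity to produce a foreground score map. The feature map, its shape, and the mask are random stand-ins, not the paper's network.

```python
import numpy as np

def similarity_map(feats, target_mask):
    """Cosine similarity between every pixel's feature and the mean target feature."""
    c, h, w = feats.shape
    f = feats.reshape(c, -1)                                   # (C, H*W)
    f = f / (np.linalg.norm(f, axis=0, keepdims=True) + 1e-8)
    target = f[:, target_mask.ravel() > 0].mean(axis=1)
    target = target / (np.linalg.norm(target) + 1e-8)
    return (target @ f).reshape(h, w)                          # high where a pixel looks like the target

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 5, 5))                 # stand-in for a CNN feature map
target_mask = np.zeros((5, 5))
target_mask[1:3, 1:3] = 1                          # annotated object region
print(similarity_map(feats, target_mask).round(2))
```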
The proposed network represents a target\nobject using features from different depth layers in order to take advantage of\nboth the spatial details and the category-level semantic information.\nFurthermore, we propose a feature compression technique that drastically\nreduces the memory requirements while maintaining the capability of feature\nrepresentation. Two-stage training (pre-training and fine-tuning) allows our\nnetwork to handle any target object regardless of its category (even if the\nobject's type does not belong to the pre-training data) or of variations in its\nappearance through a video sequence. Experiments on large datasets demonstrate\nthe effectiveness of our model - against related methods - in terms of\naccuracy, speed, and stability. Finally, we introduce the transferability of\nour network to different domains, such as the infrared data domain.", "field": [], "task": ["Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation", "Visual Object Tracking"], "method": [], "dataset": ["DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "Pixel-Level Matching for Video Object Segmentation using Convolutional Neural Networks"} {"abstract": "We present a novel approach to video segmentation using multiple object proposals. The problem is formulated as a minimization of a novel energy function defined over a fully connected graph of object proposals. Our model combines appearance with long-range point tracks, which is key to ensure robustness with respect to fast motion and occlusions over longer video sequences. As opposed to previous approaches based on object proposals, we do not seek the best per-frame object hypotheses to perform the segmentation. Instead, we combine multiple, potentially imperfect proposals to improve overall segmentation accuracy and ensure robustness to outliers. Overall, the basic algorithm consists of three steps. First, we generate a very large number of object proposals for each video frame using existing techniques. Next, we perform an SVM-based pruning step to retain only high quality proposals with sufficiently discriminative power. Finally, we determine the fore- and background classification by solving for the maximum a posteriori of a fully connected conditional random field, defined using our novel energy function. Experimental results on a well established dataset demonstrate that our method compares favorably to several recent state-of-the-art approaches.", "field": [], "task": ["Semi-Supervised Video Object Segmentation", "Video Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "Fully Connected Object Proposals for Video Segmentation"} {"abstract": "We propose a novel method for combining synthetic and real images when training networks to determine geometric information from a single image. We suggest a method for mapping both image types into a single, shared domain. This is connected to a primary network for end-to-end training. Ideally, this results in images from two domains that present shared information to the primary network. 
Our experiments demonstrate significant improvements over the state-of-the-art in two important domains, surface normal estimation of human faces and monocular depth estimation for outdoor scenes, both in an unsupervised setting.", "field": [], "task": ["Depth Estimation", "Monocular Depth Estimation", "Surface Normals Estimation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["Make3D", "KITTI Eigen split"], "metric": ["RMSE log", "Delta < 1.25^2", "Sq Rel", "Delta < 1.25^3", "Abs Rel", "RMSE", "absolute relative error", "Delta < 1.25"], "title": "SharinGAN: Combining Synthetic and Real Data for Unsupervised Geometry Estimation"} {"abstract": "This paper addresses the problem of video object segmentation, where the\ninitial object mask is given in the first frame of an input video. We propose a\nnovel spatio-temporal Markov Random Field (MRF) model defined over pixels to\nhandle this problem. Unlike conventional MRF models, the spatial dependencies\namong pixels in our model are encoded by a Convolutional Neural Network (CNN).\nSpecifically, for a given object, the probability of a labeling to a set of\nspatially neighboring pixels can be predicted by a CNN trained for this\nspecific object. As a result, higher-order, richer dependencies among pixels in\nthe set can be implicitly modeled by the CNN. With temporal dependencies\nestablished by optical flow, the resulting MRF model combines both spatial and\ntemporal cues for tackling video object segmentation. However, performing\ninference in the MRF model is very difficult due to the very high-order\ndependencies. To this end, we propose a novel CNN-embedded algorithm to perform\napproximate inference in the MRF. This algorithm proceeds by alternating\nbetween a temporal fusion step and a feed-forward CNN step. When initialized\nwith an appearance-based one-shot segmentation CNN, our model outperforms the\nwinning entries of the DAVIS 2017 Challenge, without resorting to model\nensembling or any dedicated detectors.", "field": [], "task": ["One-Shot Segmentation", "Optical Flow Estimation", "Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["DAVIS 2017 (val)", "DAVIS 2017 (test-dev)", "DAVIS 2016", "YouTube"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "mIoU", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "CNN in MRF: Video Object Segmentation via Inference in A CNN-Based Higher-Order Spatio-Temporal MRF"} {"abstract": "Question classification is the task of predicting the entity type of the answering sentence for a given question in natural language. It plays an important role in finding or constructing accurate answers and therefore helps to improve quality of automated question answering systems. Different lexical, syntactical and semantic features was extracted automatically from a question to serve the classification in previous studies. However, combining all those features doesn\u2019t always give the best results for all types of questions. Different from previous studies, this paper focuses on the problem of how to extract and select efficient features adapting to each different types of question. We first propose a method of using a feature selection algorithm to determine appropriate features corresponding to different question types. Secondly, we design a new type of features, which is based on question patterns. 
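A minimal scikit-learn sketch in the spirit of the feature-selection-plus-SVM setup described above: TF-IDF features are filtered with a chi-squared `SelectKBest` step before a linear SVM. The four example questions and coarse labels are invented, and the paper's pattern-based features and per-type selection are not reproduced here.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import LinearSVC

questions = ["who wrote hamlet", "where is the eiffel tower",
             "when did ww2 end", "who painted the mona lisa"]
labels = ["HUMAN", "LOCATION", "NUMERIC", "HUMAN"]   # coarse question types (toy data)

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("select", SelectKBest(chi2, k=10)),             # keep only the most informative features
    ("svm", LinearSVC()),
])
clf.fit(questions, labels)
print(clf.predict(["who discovered penicillin"]))
```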
We tested our proposed approach on the benchmark dataset TREC and using Support Vector Machines (SVM) for the classification algorithm. The experiment shows obtained results with the accuracies of 95.2% and 91.6% for coarse grain and fine grain data sets respectively, which are much better in comparison with the previous studies.", "field": [], "task": ["Feature Selection", "Question Answering", "Text Classification"], "method": [], "dataset": ["TREC-50"], "metric": ["Error"], "title": "Improving Question Classification by Feature Extraction and Selection"} {"abstract": "The uptake of deep learning in natural language generation (NLG) led to the release of both small and relatively large parallel corpora for training neural models. The existing data-to-text datasets are, however, aimed at task-oriented dialogue systems, and often thus limited in diversity and versatility. They are typically crowdsourced, with much of the noise left in them. Moreover, current neural NLG models do not take full advantage of large training data, and due to their strong generalizing properties produce sentences that look template-like regardless. We therefore present a new corpus of 7K samples, which (1) is clean despite being crowdsourced, (2) has utterances of 9 generalizable and conversational dialogue act types, making it more suitable for open-domain dialogue systems, and (3) explores the domain of video games, which is new to dialogue systems despite having excellent potential for supporting rich conversations.", "field": [], "task": ["Data-to-Text Generation", "Task-Oriented Dialogue Systems", "Text Generation"], "method": [], "dataset": ["ViGGO"], "metric": ["BLEU"], "title": "ViGGO: A Video Game Corpus for Data-To-Text Generation in Open-Domain Conversation"} {"abstract": "JPEG is one of the most commonly used standards among lossy image compression\nmethods. However, JPEG compression inevitably introduces various kinds of\nartifacts, especially at high compression rates, which could greatly affect the\nQuality of Experience (QoE). Recently, convolutional neural network (CNN) based\nmethods have shown excellent performance for removing the JPEG artifacts. Lots\nof efforts have been made to deepen the CNNs and extract deeper features, while\nrelatively few works pay attention to the receptive field of the network. In\nthis paper, we illustrate that the quality of output images can be\nsignificantly improved by enlarging the receptive fields in many cases. One\nstep further, we propose a Dual-domain Multi-scale CNN (DMCNN) to take full\nadvantage of redundancies on both the pixel and DCT domains. Experiments show\nthat DMCNN sets a new state-of-the-art for the task of JPEG artifact removal.", "field": [], "task": ["Image Compression", "JPEG Artifact Correction", "JPEG Artifact Removal"], "method": [], "dataset": ["ICB (Quality 10 Color)", "ICB (Quality 20 Color)", "ICB (Quality 20 Grayscale)", "ICB (Quality 10 Grayscale)"], "metric": ["SSIM", "PSNR", "PSNR-B"], "title": "DMCNN: Dual-Domain Multi-Scale Convolutional Neural Network for Compression Artifacts Removal"} {"abstract": "Semi-supervised learning has been an effective paradigm for leveraging unlabeled data to reduce the reliance on labeled data. We propose CoMatch, a new semi-supervised learning method that unifies dominant approaches and addresses their limitations. CoMatch jointly learns two representations of the training data, their class probabilities and low-dimensional embeddings. 
The two representations interact with each other to jointly evolve. The embeddings impose a smoothness constraint on the class probabilities to improve the pseudo-labels, whereas the pseudo-labels regularize the structure of the embeddings through graph-based contrastive learning. CoMatch achieves state-of-the-art performance on multiple datasets. It achieves substantial accuracy improvements on the label-scarce CIFAR-10 and STL-10. On ImageNet with 1% labels, CoMatch achieves a top-1 accuracy of 66.0%, outperforming FixMatch by 12.6%. Furthermore, CoMatch achieves better representation learning performance on downstream tasks, outperforming both supervised learning and self-supervised learning. Code and pre-trained models are available at https://github.com/salesforce/CoMatch.", "field": [], "task": ["Representation Learning", "Self-Supervised Learning", "Semi-Supervised Image Classification"], "method": [], "dataset": ["CIFAR-10, 40 Labels", "ImageNet - 10% labeled data", "CIFAR-10, 80 Labels", "CIFAR-10, 20 Labels", "STL-10, 1000 Labels", "ImageNet - 1% labeled data"], "metric": ["Percentage error", "Top 5 Accuracy", "Top 1 Accuracy", "Accuracy"], "title": "CoMatch: Semi-supervised Learning with Contrastive Graph Regularization"} {"abstract": "Gaze behavior is an important non-verbal cue in social signal processing and\nhuman-computer interaction. In this paper, we tackle the problem of person- and\nhead pose-independent 3D gaze estimation from remote cameras, using a\nmulti-modal recurrent convolutional neural network (CNN). We propose to combine\nface, eyes region, and face landmarks as individual streams in a CNN to\nestimate gaze in still images. Then, we exploit the dynamic nature of gaze by\nfeeding the learned features of all the frames in a sequence to a many-to-one\nrecurrent module that predicts the 3D gaze vector of the last frame. Our\nmulti-modal static solution is evaluated on a wide range of head poses and gaze\ndirections, achieving a significant improvement of 14.6% over the state of the\nart on EYEDIAP dataset, further improved by 4% when the temporal modality is\nincluded.", "field": [], "task": ["Gaze Estimation"], "method": [], "dataset": ["EYEDIAP (floating target)", "EYEDIAP (screen target)"], "metric": ["Angular Error"], "title": "Recurrent CNN for 3D Gaze Estimation using Appearance and Shape Cues"} {"abstract": "Several end-to-end deep learning approaches have been recently presented\nwhich extract either audio or visual features from the input images or audio\nsignals and perform speech recognition. However, research on end-to-end\naudiovisual models is very limited. In this work, we present an end-to-end\naudiovisual model based on residual networks and Bidirectional Gated Recurrent\nUnits (BGRUs). To the best of our knowledge, this is the first audiovisual\nfusion model which simultaneously learns to extract features directly from the\nimage pixels and audio waveforms and performs within-context word recognition\non a large publicly available dataset (LRW). The model consists of two streams,\none for each modality, which extract features directly from mouth regions and\nraw waveforms. The temporal dynamics in each stream/modality are modeled by a\n2-layer BGRU and the fusion of multiple streams/modalities takes place via\nanother 2-layer BGRU. A slight improvement in the classification rate over an\nend-to-end audio-only and MFCC-based model is reported in clean audio\nconditions and low levels of noise. 
In presence of high levels of noise, the\nend-to-end audiovisual model significantly outperforms both audio-only models.", "field": [], "task": ["Lipreading", "Speech Recognition"], "method": [], "dataset": ["Lip Reading in the Wild"], "metric": ["Top-1 Accuracy"], "title": "End-to-end Audiovisual Speech Recognition"} {"abstract": "A residual-networks family with hundreds or even thousands of layers\ndominates major image recognition tasks, but building a network by simply\nstacking residual blocks inevitably limits its optimization ability. This paper\nproposes a novel residual-network architecture, Residual networks of Residual\nnetworks (RoR), to dig the optimization ability of residual networks. RoR\nsubstitutes optimizing residual mapping of residual mapping for optimizing\noriginal residual mapping. In particular, RoR adds level-wise shortcut\nconnections upon original residual networks to promote the learning capability\nof residual networks. More importantly, RoR can be applied to various kinds of\nresidual networks (ResNets, Pre-ResNets and WRN) and significantly boost their\nperformance. Our experiments demonstrate the effectiveness and versatility of\nRoR, where it achieves the best performance in all residual-network-like\nstructures. Our RoR-3-WRN58-4+SD models achieve new state-of-the-art results on\nCIFAR-10, CIFAR-100 and SVHN, with test errors 3.77%, 19.73% and 1.59%,\nrespectively. RoR-3 models also achieve state-of-the-art results compared to\nResNets on ImageNet data set.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["SVHN"], "metric": ["Percentage error"], "title": "Residual Networks of Residual Networks: Multilevel Residual Networks"} {"abstract": "For many applications of question answering (QA), being able to explain why a given model chose an answer is critical. However, the lack of labeled data for answer justifications makes learning this difficult and expensive. Here we propose an approach that uses answer ranking as distant supervision for learning how to select informative justifications, where justifications serve as inferential connections between the question and the correct answer while often containing little lexical overlap with either. We propose a neural network architecture for QA that reranks answer justifications as an intermediate (and human-interpretable) step in answer selection. Our approach is informed by a set of features designed to combine both learned representations and explicit features to capture the connection between questions, answers, and answer justifications. We show that with this end-to-end approach we are able to significantly improve upon a strong IR baseline in both justification ranking (+9{\\%} rated highly relevant) and answer selection (+6{\\%} P@1).", "field": [], "task": ["Answer Selection", "Interpretable Machine Learning", "Question Answering"], "method": [], "dataset": ["AI2 Kaggle Dataset"], "metric": ["P@1"], "title": "Tell Me Why: Using Question Answering as Distant Supervision for Answer Justification"} {"abstract": "In recent years many deep neural networks have been proposed to solve Reading Comprehension (RC) tasks. Most of these models suffer from reasoning over long documents and do not trivially generalize to cases where the answer is not present as a span in a given document. We present a novel neural-based architecture that is capable of extracting relevant regions based on a given question-document pair and generating a well-formed answer. 
To show the effectiveness of our architecture, we conducted several experiments on the recently proposed and challenging RC dataset {`}NarrativeQA{'}. The proposed architecture outperforms state-of-the-art results by 12.62{\\%} (ROUGE-L) relative improvement.", "field": [], "task": ["Question Answering", "Reading Comprehension"], "method": [], "dataset": ["NarrativeQA"], "metric": ["Rouge-L", "BLEU-4", "METEOR", "BLEU-1"], "title": "Cut to the Chase: A Context Zoom-in Network for Reading Comprehension"} {"abstract": "An effective method to generate a large number of parallel sentences for training improved neural machine translation (NMT) systems is the use of the back-translations of the target-side monolingual data. The standard back-translation method has been shown to be unable to efficiently utilize the available huge amount of existing monolingual data because of the inability of translation models to differentiate between the authentic and synthetic parallel data during training. Tagging, or using gates, has been used to enable translation models to distinguish between synthetic and authentic data, improving standard back-translation and also enabling the use of iterative back-translation on language pairs that underperformed using standard back-translation. In this work, we approach back-translation as a domain adaptation problem, eliminating the need for explicit tagging. In the approach -- \\emph{tag-less back-translation} -- the synthetic and authentic parallel data are treated as out-of-domain and in-domain data respectively and, through pre-training and fine-tuning, the translation model is shown to be able to learn more efficiently from them during training. Experimental results have shown that the approach outperforms the standard and tagged back-translation approaches on low resource English-Vietnamese and English-German neural machine translation.", "field": [], "task": ["Domain Adaptation", "Machine Translation"], "method": [], "dataset": ["IWSLT2014 German-English"], "metric": ["BLEU score"], "title": "Tag-less Back-Translation"} {"abstract": "The relational facts in sentences are often complicated. Different relational triplets may have overlaps in a sentence. We divided the sentences into three types according to triplet overlap degree, including Normal, EntityPairOverlap and SingleEntiyOverlap. Existing methods mainly focus on Normal class and fail to extract relational triplets precisely. In this paper, we propose an end-to-end model based on sequence-to-sequence learning with copy mechanism, which can jointly extract relational facts from sentences of any of these classes. We adopt two different strategies in decoding process: employing only one united decoder or applying multiple separated decoders. We test our models in two public datasets and our model outperform the baseline method significantly.", "field": [], "task": ["Feature Engineering", "Relation Extraction"], "method": [], "dataset": ["NYT", "WebNLG"], "metric": ["F1"], "title": "Extracting Relational Facts by an End-to-End Neural Model with Copy Mechanism"} {"abstract": "Syntax has been a useful source of information for statistical RST discourse parsing. Under the neural setting, a common approach integrates syntax by a recursive neural network (RNN), requiring discrete output trees produced by a supervised syntax parser. In this paper, we propose an implicit syntax feature extraction approach, using hidden-layer vectors extracted from a neural syntax parser. 
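For readers unfamiliar with the transition-based parsing this record goes on to mention, here is a toy shift-reduce skeleton over a flat sequence of units (for instance EDUs): at each step the "model" either shifts the next unit onto the stack or reduces the top two stack items into a subtree. The scoring function is a trivial placeholder, not a trained classifier or a dynamic oracle.

```python
def parse(units, score_shift):
    """Toy shift-reduce loop: repeatedly SHIFT a unit or REDUCE the top two stack items."""
    stack, queue, nodes = [], list(units), []
    while queue or len(stack) > 1:
        if queue and (len(stack) < 2 or score_shift(stack, queue)):
            stack.append(queue.pop(0))                 # SHIFT
        else:
            right, left = stack.pop(), stack.pop()     # REDUCE into a binary subtree
            node = (left, right)
            nodes.append(node)
            stack.append(node)
    return stack[0], nodes

# a trivial "model": keep shifting while fewer than 3 items are on the stack
root, subtrees = parse(["e1", "e2", "e3", "e4"], lambda s, q: len(s) < 3)
print(root)
```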
In addition, we propose a simple transition-based model as the baseline, further enhancing it with dynamic oracle. Experiments on the standard dataset show that our baseline model with dynamic oracle is highly competitive. When implicit syntax features are integrated, we are able to obtain further improvements, better than using explicit Tree-RNN.", "field": [], "task": ["Discourse Parsing", "Word Embeddings"], "method": [], "dataset": ["RST-DT"], "metric": ["RST-Parseval (Relation)", "RST-Parseval (Nuclearity)", "RST-Parseval (Span)", "RST-Parseval (Full)"], "title": "Transition-based Neural RST Parsing with Implicit Syntax Features"} {"abstract": "Models trained in the context of continual learning (CL) should be able to learn from a stream of data over an undefined period of time. The main challenges herein are: 1) maintaining old knowledge while simultaneously benefiting from it when learning new tasks, and 2) guaranteeing model scalability with a growing amount of data to learn from. In order to tackle these challenges, we introduce Dynamic Generative Memory (DGM) - a synaptic plasticity driven framework for continual learning. DGM relies on conditional generative adversarial networks with learnable connection plasticity realized with neural masking. Specifically, we evaluate two variants of neural masking: applied to (i) layer activations and (ii) to connection weights directly. Furthermore, we propose a dynamic network expansion mechanism that ensures sufficient model capacity to accommodate for continually incoming tasks. The amount of added capacity is determined dynamically from the learned binary mask. We evaluate DGM in the continual class-incremental setup on visual classification tasks.", "field": [], "task": ["Continual Learning"], "method": [], "dataset": ["ImageNet-50 (5 tasks) "], "metric": ["Accuracy"], "title": "Learning to Remember: A Synaptic Plasticity Driven Framework for Continual Learning"} {"abstract": "Many localized languages struggle to reap the benefits of recent advancements\nin character recognition systems due to the lack of substantial amount of\nlabeled training data. This is due to the difficulty in generating large\namounts of labeled data for such languages and inability of deep learning\ntechniques to properly learn from small number of training samples. We solve\nthis problem by introducing a technique of generating new training samples from\nthe existing samples, with realistic augmentations which reflect actual\nvariations that are present in human hand writing, by adding random controlled\nnoise to their corresponding instantiation parameters. Our results with a mere\n200 training samples per class surpass existing character recognition results\nin the EMNIST-letter dataset while achieving the existing results in the three\ndatasets: EMNIST-balanced, EMNIST-digits, and MNIST. We also develop a strategy\nto effectively use a combination of loss functions to improve reconstructions.\nOur system is useful in character recognition for localized languages that lack\nmuch labeled training data and even in other related more general contexts such\nas object recognition.", "field": [], "task": ["Few-Shot Image Classification", "Image Classification", "Image Generation"], "method": [], "dataset": ["MNIST", "EMNIST-Letters", "Fashion-MNIST"], "metric": ["Percentage error", "Accuracy"], "title": "TextCaps : Handwritten Character Recognition with Very Small Datasets"} {"abstract": "Multi-person pose estimation is a challenging problem. 
Existing methods are mostly two-stage based--one stage for proposal generation and the other for allocating poses to corresponding persons. However, such two-stage methods generally suffer from low efficiency. In this work, we present the first single-stage model, Single-stage multi-person Pose Machine (SPM), to simplify the pipeline and improve the efficiency of multi-person pose estimation. To achieve this, we propose a novel Structured Pose Representation (SPR) that unifies person instance and body joint position representations. Based on SPR, we develop the SPM model that can directly predict structured poses for multiple persons in a single stage, and thus offer a more compact pipeline and an attractive efficiency advantage over two-stage methods. In particular, SPR introduces the root joints to indicate different person instances and human body joint positions are encoded into their displacements w.r.t. the roots. To better predict long-range displacements for some joints, SPR is further extended to hierarchical representations. Based on SPR, SPM can efficiently perform multi-person pose estimation by simultaneously predicting root joints (location of instances) and body joint displacements via CNNs. Moreover, to demonstrate the generality of SPM, we also apply it to multi-person 3D pose estimation. Comprehensive experiments on benchmarks MPII, extended PASCAL-Person-Part, MSCOCO and CMU Panoptic clearly demonstrate the state-of-the-art efficiency of SPM for multi-person 2D/3D pose estimation, together with outstanding accuracy.", "field": [], "task": ["3D Pose Estimation", "Keypoint Detection", "Multi-Person Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["MPII Multi-Person", "COCO test-dev"], "metric": ["APM", "AP75", "AP", "APL", "mAP@0.5", "AP50"], "title": "Single-Stage Multi-Person Pose Machines"} {"abstract": "Fine-grained image categorization is challenging due to the subtle inter-class differences. We posit that exploiting the rich relationships between channels can help capture such differences since different channels correspond to different semantics. In this paper, we propose a channel interaction network (CIN), which models the channel-wise interplay both within an image and across images. For a single image, a self-channel interaction (SCI) module is proposed to explore channel-wise correlation within the image. This allows the model to learn the complementary features from the correlated channels, yielding stronger fine-grained features. Furthermore, given an image pair, we introduce a contrastive channel interaction (CCI) module to model the cross-sample channel interaction with a metric learning framework, allowing the CIN to distinguish the subtle visual differences between images. Our model can be trained efficiently in an end-to-end fashion without the need for multi-stage training and testing. Finally, comprehensive experiments are conducted on three publicly available benchmarks, where the proposed method consistently outperforms the state-of-the-art approaches, such as DFL-CNN (Wang, Morariu, and Davis 2018) and NTS (Yang et al. 2018).", "field": [], "task": ["Image Categorization", "Metric Learning"], "method": [], "dataset": [" CUB-200-2011", "Stanford Cars", "FGVC Aircraft"], "metric": ["Accuracy"], "title": "Channel Interaction Networks for Fine-Grained Image Categorization"} {"abstract": "In this work, we present a novel data-driven method for robust 6DoF object pose estimation from a single RGBD image.
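A toy numpy decoding step loosely following the structured pose representation summarized above: person instances are read off a root-joint heatmap, and each body joint is recovered by adding that joint's displacement, sampled at the root, to the root location. The heatmap, threshold, and two-joint displacement fields are invented, and the hierarchical refinement is omitted.

```python
import numpy as np

def decode_poses(root_heatmap, displacements, thresh=0.5):
    """Read person roots off the heatmap, then add per-joint displacements at each root."""
    poses = []
    for y, x in zip(*np.where(root_heatmap > thresh)):          # candidate person instances
        joints = [(y + dy[y, x], x + dx[y, x]) for dy, dx in displacements]
        poses.append({"root": (int(y), int(x)), "joints": joints})
    return poses

heat = np.zeros((8, 8))
heat[2, 3] = 0.9                                                # first detected root
heat[6, 6] = 0.8                                                # second detected root
disp = [(np.full((8, 8), -1.0), np.full((8, 8), 0.0)),          # joint 0: one pixel up
        (np.full((8, 8), 1.0),  np.full((8, 8), 1.0))]          # joint 1: one pixel down-right
print(decode_poses(heat, disp))
```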
Unlike previous methods that directly regress pose parameters, we tackle this challenging task with a keypoint-based approach. Specifically, we propose a deep Hough voting network to detect 3D keypoints of objects and then estimate the 6D pose parameters in a least-squares fitting manner. Our method is a natural extension of 2D-keypoint approaches that successfully work on RGB based 6DoF estimation. It allows us to fully utilize the geometric constraint of rigid objects with the extra depth information and is easy for a network to learn and optimize. Extensive experiments were conducted to demonstrate the effectiveness of 3D-keypoint detection in the 6D pose estimation task. Experimental results also show our method outperforms the state-of-the-art methods by large margins on several benchmarks. Code and video are available at https://github.com/ethnhe/PVN3D.git.", "field": [], "task": ["6D Pose Estimation", "6D Pose Estimation using RGBD", "Keypoint Detection", "Pose Estimation"], "method": [], "dataset": ["LineMOD", "YCB-Video"], "metric": ["Mean ADD", "ADDS AUC", "Accuracy (ADD)", "Mean ADD-S"], "title": "PVN3D: A Deep Point-wise 3D Keypoints Voting Network for 6DoF Pose Estimation"} {"abstract": "The Word Mover's Distance (WMD) proposed by Kusner et al. is a distance between documents that takes advantage of semantic relations among words that are captured by their embeddings. This distance proved to be quite effective, obtaining state-of-the-art error rates for classification tasks, but it is also impractical for large collections/documents due to its computational complexity. To circumvent this problem, variants of WMD have been proposed. Among them, Relaxed Word Mover's Distance (RWMD) is one of the most successful due to its simplicity, effectiveness, and also because of its fast implementations. Relying on assumptions that are supported by empirical properties of the distances between embeddings, we propose an approach to speed up both WMD and RWMD. Experiments over 10 datasets suggest that our approach leads to a significant speed-up in document classification tasks while maintaining the same error rates.", "field": [], "task": ["Document Classification"], "method": [], "dataset": ["BBCSport", "Amazon", "Reuters-21578", "20NEWS", "Classic", "Recipe", "Twitter", "Ohsumed"], "metric": ["Accuracy"], "title": "Speeding up Word Mover's Distance and its variants via properties of distances between embeddings"} {"abstract": "Deep 3D CNNs for video action recognition are designed to learn powerful representations in the joint spatio-temporal feature space. In practice, however, because of the large number of parameters and computations involved, they may under-perform in the absence of sufficiently large datasets for training them at scale. In this paper we introduce spatial gating in spatial-temporal decomposition of 3D kernels. We implement this concept with Gate-Shift Module (GSM). GSM is lightweight and turns a 2D-CNN into a highly efficient spatio-temporal feature extractor. With GSM plugged in, a 2D-CNN learns to adaptively route features through time and combine them, at almost no additional parameters and computational overhead.
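As a very loose illustration of routing features through time, the snippet below shifts a fixed fraction of channels one frame backward and another fraction one frame forward on a (T, C, H, W) tensor. This is closer to a plain temporal shift than to GSM's learned spatial gating; the channel split and tensor sizes are arbitrary.

```python
import numpy as np

def gate_shift(x, shift_frac=0.25):
    """x: (T, C, H, W). Shift a fraction of channels across time; leave the rest untouched."""
    t, c, h, w = x.shape
    k = int(c * shift_frac)
    out = x.copy()
    out[1:, :k] = x[:-1, :k]             # these channels look one frame back
    out[:-1, k:2 * k] = x[1:, k:2 * k]   # these channels look one frame ahead
    out[0, :k] = 0                       # no past available for the first frame
    out[-1, k:2 * k] = 0                 # no future available for the last frame
    return out

x = np.arange(2 * 4 * 1 * 1, dtype=float).reshape(2, 4, 1, 1)
print(gate_shift(x).squeeze())           # channels now mix information across frames
```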
We perform an extensive evaluation of the proposed module to study its effectiveness in video action recognition, achieving state-of-the-art results on Something Something-V1 and Diving48 datasets, and obtaining competitive results on EPIC-Kitchens with far less model complexity.", "field": [], "task": ["Action Recognition"], "method": [], "dataset": ["Something-Something V1"], "metric": ["Top 1 Accuracy"], "title": "Gate-Shift Networks for Video Action Recognition"} {"abstract": "Recent work pre-training Transformers with self-supervised objectives on large text corpora has shown great success when fine-tuned on downstream NLP tasks including text summarization. However, pre-training objectives tailored for abstractive text summarization have not been explored. Furthermore there is a lack of systematic evaluation across diverse domains. In this work, we propose pre-training large Transformer-based encoder-decoder models on massive text corpora with a new self-supervised objective. In PEGASUS, important sentences are removed/masked from an input document and are generated together as one output sequence from the remaining sentences, similar to an extractive summary. We evaluated our best PEGASUS model on 12 downstream summarization tasks spanning news, science, stories, instructions, emails, patents, and legislative bills. Experiments demonstrate it achieves state-of-the-art performance on all 12 downstream datasets measured by ROUGE scores. Our model also shows surprising performance on low-resource summarization, surpassing previous state-of-the-art results on 6 datasets with only 1000 examples. Finally we validated our results using human evaluation and show that our model summaries achieve human performance on multiple datasets.", "field": [], "task": ["Abstractive Text Summarization", "Text Summarization"], "method": [], "dataset": ["arXiv", "CNN / Daily Mail", "GigaWord", "X-Sum", "Pubmed"], "metric": ["ROUGE-L", "ROUGE-3", "ROUGE-1", "ROUGE-2"], "title": "PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization"} {"abstract": "Current pre-training works in natural language generation pay little attention to the problem of exposure bias on downstream tasks. To address this issue, we propose an enhanced multi-flow sequence to sequence pre-training and fine-tuning framework named ERNIE-GEN, which bridges the discrepancy between training and inference with an infilling generation mechanism and a noise-aware generation method. To make generation closer to human writing patterns, this framework introduces a span-by-span generation flow that trains the model to predict semantically-complete spans consecutively rather than predicting word by word. Unlike existing pre-training methods, ERNIE-GEN incorporates multi-granularity target sampling to construct pre-training data, which enhances the correlation between encoder and decoder. 
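A toy version of the gap-sentence idea described in the PEGASUS record above: each sentence is scored by its word overlap with the rest of the document (a crude stand-in for ROUGE), the top-scoring sentences are replaced by a mask token on the input side, and they become the generation target. The document, mask token, and scoring are all illustrative.

```python
def gap_sentence_mask(sentences, n_mask=1, mask_token="<MASK>"):
    """Mask the sentences that overlap most with the rest of the document."""
    def overlap(i):
        rest = set(w for j, s in enumerate(sentences) if j != i for w in s.lower().split())
        words = sentences[i].lower().split()
        return sum(w in rest for w in words) / max(len(words), 1)

    ranked = sorted(range(len(sentences)), key=overlap, reverse=True)[:n_mask]
    inputs = [mask_token if i in ranked else s for i, s in enumerate(sentences)]
    targets = [sentences[i] for i in sorted(ranked)]
    return " ".join(inputs), " ".join(targets)

doc = ["The rover landed on Mars.", "It will study Martian rocks.",
       "The rover carries seven instruments.", "Weather on Mars is cold."]
print(gap_sentence_mask(doc, n_mask=1))
```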
Experimental results demonstrate that ERNIE-GEN achieves state-of-the-art results with a much smaller amount of pre-training data and parameters on a range of language generation tasks, including abstractive summarization (Gigaword and CNN/DailyMail), question generation (SQuAD), dialogue generation (Persona-Chat) and generative question answering (CoQA).", "field": [], "task": ["Abstractive Text Summarization", "Dialogue Generation", "Generative Question Answering", "Question Generation", "Text Generation", "Text Summarization"], "method": [], "dataset": ["CNN / Daily Mail", "GigaWord", "SQuAD1.1", "CoQA", "GigaWord-10k"], "metric": ["ROUGE-1", "F1-Score", "ROUGE-2", "ROUGE-L", "BLEU-4"], "title": "ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation"} {"abstract": "In real-world crowd counting applications, the crowd densities vary greatly\nin spatial and temporal domains. A detection based counting method will\nestimate crowds accurately in low density scenes, while its reliability in\ncongested areas is downgraded. A regression based approach, on the other hand,\ncaptures the general density information in crowded regions. Without knowing\nthe location of each person, it tends to overestimate the count in low density\nareas. Thus, exclusively using either one of them is not sufficient to handle\nall kinds of scenes with varying densities. To address this issue, a novel\nend-to-end crowd counting framework, named DecideNet (DEteCtIon and Density\nEstimation Network) is proposed. It can adaptively decide the appropriate\ncounting mode for different locations on the image based on its real density\nconditions. DecideNet starts with estimating the crowd density by generating\ndetection and regression based density maps separately. To capture inevitable\nvariation in densities, it incorporates an attention module, meant to\nadaptively assess the reliability of the two types of estimations. The final\ncrowd counts are obtained with the guidance of the attention module to adopt\nsuitable estimations from the two kinds of density maps. Experimental results\nshow that our method achieves state-of-the-art performance on three challenging\ncrowd counting datasets.", "field": [], "task": ["Crowd Counting", "Density Estimation", "Regression"], "method": [], "dataset": ["WorldExpo\u201910"], "metric": ["Average MAE"], "title": "DecideNet: Counting Varying Density Crowds Through Attention Guided Detection and Density Estimation"} {"abstract": "One of the key components for video deblurring is how to exploit neighboring frames. Recent state-of-the-art methods either used aligned adjacent frames to the center frame or propagated the information on past frames to the current frame recurrently. Here we propose multi-blur-to-deblur (MB2D), a novel concept to exploit neighboring frames for efficient video deblurring. Firstly, inspired by unsharp masking, we argue that using more blurred images with long exposures as additional inputs significantly improves performance. Secondly, we propose multi-blurring recurrent neural network (MBRNN) that can synthesize more blurred images from neighboring frames, yielding substantially improved performance with existing video deblurring methods. 
Lastly, we propose multi-scale deblurring with connecting recurrent feature map from MBRNN (MSDR) to achieve state-of-the-art performance on the popular GoPro and Su datasets in fast and memory efficient ways.", "field": [], "task": ["Deblurring"], "method": [], "dataset": ["GoPro", "DVD "], "metric": ["SSIM", "PSNR"], "title": "Blur More To Deblur Better: Multi-Blur2Deblur For Efficient Video Deblurring"} {"abstract": "Reaching the performance of fully supervised learning with unlabeled data and only labeling one sample per class might be ideal for deep learning applications. We demonstrate for the first time the potential for building one-shot semi-supervised (BOSS) learning on Cifar-10 and SVHN up to attain test accuracies that are comparable to fully supervised learning. Our method combines class prototype refining, class balancing, and self-training. A good prototype choice is essential and we propose a technique for obtaining iconic examples. In addition, we demonstrate that class balancing methods substantially improve accuracy results in semi-supervised learning to levels that allow self-training to reach the level of fully supervised learning performance. Rigorous empirical evaluations provide evidence that labeling large datasets is not necessary for training deep neural networks. We made our code available at https://github.com/lnsmith54/BOSS to facilitate replication and for use with future real-world applications.", "field": [], "task": ["Semi-Supervised Image Classification"], "method": [], "dataset": ["cifar-10, 10 Labels"], "metric": ["Accuracy (Test)"], "title": "Building One-Shot Semi-supervised (BOSS) Learning up to Fully Supervised Performance"} {"abstract": "Recovering sharp video sequence from a motion-blurred image is highly ill-posed due to the significant loss of motion information in the blurring process. For event-based cameras, however, fast motion can be captured as events at high time rate, raising new opportunities to exploring effective solutions. In this paper, we start from a sequential formulation of event-based motion deblurring, then show how its optimization can be unfolded with a novel end-to-end deep architecture. The proposed architecture is a convolutional recurrent neural network that integrates visual and temporal knowledge of both global and local scales in principled manner. To further improve the reconstruction, we propose a differentiable directional event filtering module to effectively extract rich boundary prior from the stream of events. We conduct extensive experiments on the synthetic GoPro dataset and a large newly introduced dataset captured by a DAVIS240C camera. The proposed approach achieves state-of-the-art reconstruction quality, and generalizes better to handling real-world motion blur.", "field": [], "task": ["Deblurring"], "method": [], "dataset": ["GoPro"], "metric": ["SSIM", "PSNR"], "title": "Learning Event-Based Motion Deblurring"} {"abstract": "Natural language generation lies at the core of generative dialogue systems\nand conversational agents. We describe an ensemble neural language generator,\nand present several novel methods for data representation and augmentation that\nyield improved results in our model. We test the model on three datasets in the\nrestaurant, TV and laptop domains, and report both objective and subjective\nevaluations of our best model. 
Using a range of automatic metrics, as well as\nhuman evaluators, we show that our approach achieves better results than\nstate-of-the-art models on the same datasets.", "field": [], "task": ["Data-to-Text Generation", "Text Generation"], "method": [], "dataset": ["E2E NLG Challenge"], "metric": ["ROUGE-L", "BLEU", "NIST", "METEOR"], "title": "A Deep Ensemble Model with Slot Alignment for Sequence-to-Sequence Natural Language Generation"} {"abstract": "Ever since the successful application of sequence to sequence learning for neural machine translation systems (Sutskever et al., 2014), interest has surged in its applicability towards language generation in other problem domains. In the area of natural language generation (NLG), there has been a great deal of interest in end-to-end (E2E) neural models that learn and generate natural language sentence realizations in one step. In this paper, we present TNT-NLG System 1, our first system submission to the E2E NLG Challenge, where we generate natural language (NL) realizations from meaning representations (MRs) in the restaurant domain by massively expanding the training dataset. We develop two models for this system, based on Dusek et al.\u2019s (2016a) open source baseline model and context-aware neural language generator. Starting with the MR and NL pairs from the E2E generation challenge dataset, we explode the size of the training set using PERSONAGE (Mairesse and Walker, 2010), a statistical generator able to produce varied realizations from MRs, and use our expanded data as contextual input into our models. We present evaluation results using automated and human evaluation metrics, and describe directions for future work.", "field": [], "task": ["Data-to-Text Generation", "Machine Translation", "Text Generation"], "method": [], "dataset": ["E2E NLG Challenge"], "metric": ["NIST", "METEOR", "CIDEr", "ROUGE-L", "BLEU"], "title": "TNT-NLG, System 1: Using a statistical NLG to massively augment crowd-sourced data for neural generation"} {"abstract": "Benefiting from deep learning research and large-scale datasets, saliency prediction has achieved significant success in the past decade. However, it still remains challenging to predict saliency maps on images in new domains that lack sufficient data for data-hungry models. To solve this problem, we propose a few-shot transfer learning paradigm for saliency prediction, which enables efficient transfer of knowledge learned from the existing large-scale saliency datasets to a target domain with limited labeled examples. Specifically, very few target domain examples are used as the reference to train a model with a source domain dataset such that the training process can converge to a local minimum in favor of the target domain. Then, the learned model is further fine-tuned with the reference. The proposed framework is gradient-based and model-agnostic. We conduct comprehensive experiments and ablation study on various source domain and target domain pairs. The results show that the proposed framework achieves a significant performance improvement. 
The code is publicly available at \\url{https://github.com/luoyan407/n-reference}.", "field": [], "task": ["Few-Shot Transfer Learning for Saliency Prediction", "Saliency Prediction", "Transfer Learning"], "method": [], "dataset": ["SALICON->WebpageSaliency - 1-shot", "SALICON->WebpageSaliency - EUB", "SALICON->WebpageSaliency - 10-shot ", "SALICON->WebpageSaliency - 5-shot "], "metric": ["NSS", "CC", "AUC"], "title": "$n$-Reference Transfer Learning for Saliency Prediction"} {"abstract": "Mining a set of meaningful topics organized into a hierarchy is intuitively appealing since topic correlations are ubiquitous in massive text corpora. To account for potential hierarchical topic structures, hierarchical topic models generalize flat topic models by incorporating latent topic hierarchies into their generative modeling process. However, due to their purely unsupervised nature, the learned topic hierarchy often deviates from users' particular needs or interests. To guide the hierarchical topic discovery process with minimal user supervision, we propose a new task, Hierarchical Topic Mining, which takes a category tree described by category names only, and aims to mine a set of representative terms for each category from a text corpus to help a user comprehend his/her interested topics. We develop a novel joint tree and text embedding method along with a principled optimization procedure that allows simultaneous modeling of the category tree structure and the corpus generative process in the spherical space for effective category-representative term discovery. Our comprehensive experiments show that our model, named JoSH, mines a high-quality set of hierarchical topics with high efficiency and benefits weakly-supervised hierarchical text classification tasks.", "field": [], "task": ["Text Classification", "Topic Models"], "method": [], "dataset": ["arXiv", "NYT"], "metric": ["Topic coherence@5", "MACC"], "title": "Hierarchical Topic Mining via Joint Spherical Tree and Text Embedding"} {"abstract": "This paper contributes a new high quality dataset for person re-identification, named \"Market-1501\". Generally, current datasets: 1) are limited in scale; 2) consist of hand-drawn bboxes, which are unavailable under realistic settings; 3) have only one ground truth and one query image for each identity (close environment). To tackle these problems, the proposed Market-1501 dataset is featured in three aspects. First, it contains over 32,000 annotated bboxes, plus a distractor set of over 500K images, making it the largest person re-id dataset to date. Second, images in Market-1501 dataset are produced using the Deformable Part Model (DPM) as pedestrian detector. Third, our dataset is collected in an open system, where each identity has multiple images under each camera. As a minor contribution, inspired by recent advances in large-scale image search, this paper proposes an unsupervised Bag-of-Words descriptor. We view person re-identification as a special task of image search. In experiment, we show that the proposed descriptor yields competitive accuracy on VIPeR, CUHK03, and Market-1501 datasets, and is scalable on the large-scale 500k dataset.", "field": [], "task": ["Image Retrieval", "Person Re-Identification"], "method": [], "dataset": ["DukeMTMC-reID", "Market-1501"], "metric": ["Rank-1", "MAP"], "title": "Scalable Person Re-Identification: A Benchmark"} {"abstract": "Temporal information extraction is a challenging task. 
Here we describe Chrono, a hybrid rule-based and machine learning system that identifies temporal expressions in text and normalizes them into the SCATE schema. After minor parsing logic adjustments, Chrono has emerged as the top performing system for SemEval 2018 Task 6: Parsing Time Normalizations.", "field": [], "task": ["Temporal Information Extraction", "Timex normalization"], "method": [], "dataset": ["PNT"], "metric": ["F1-Score"], "title": "Chrono at SemEval-2018 Task 6: A System for Normalizing Temporal Expressions"} {"abstract": "Research on human action classification has made significant progresses in the past few years. Most deep learning methods focus on improving performance by adding more network components. We propose, however, to better utilize auxiliary mechanisms, including hierarchical classification, network pruning, and skeleton-based preprocessing, to boost the model robustness and performance. We test the effectiveness of our method on four commonly used testing datasets: NTU RGB+D 60, NTU RGB+D 120, Northwestern-UCLA Multiview Action 3D, and UTD Multimodal Human Action Dataset. Our experiments show that our method can achieve either comparable or better performance on all four datasets. In particular, our method sets up a new baseline for NTU 120, the largest dataset among the four. We also analyze our method with extensive comparisons and ablation studies.", "field": [], "task": ["Action Classification", "Action Classification ", "Action Recognition", "Network Pruning", "Skeleton Based Action Recognition"], "method": [], "dataset": ["NTU RGB+D", "N-UCLA", "NTU RGB+D 120"], "metric": ["Accuracy (CS)", "Accuracy (Cross-Subject)", "Accuracy (CV)", "Accuracy (Cross-Setup)", "Accuracy"], "title": "Hierarchical Action Classification with Network Pruning"} {"abstract": "Learning graph representations via low-dimensional embeddings that preserve\nrelevant network properties is an important class of problems in machine\nlearning. We here present a novel method to embed directed acyclic graphs.\nFollowing prior work, we first advocate for using hyperbolic spaces which\nprovably model tree-like structures better than Euclidean geometry. Second, we\nview hierarchical relations as partial orders defined using a family of nested\ngeodesically convex cones. We prove that these entailment cones admit an\noptimal shape with a closed form expression both in the Euclidean and\nhyperbolic spaces, and they canonically define the embedding learning process.\nExperiments show significant improvements of our method over strong recent\nbaselines both in terms of representational capacity and generalization.", "field": [], "task": ["Graph Embedding", "Hypernym Discovery", "Link Prediction", "Representation Learning"], "method": [], "dataset": ["WordNet"], "metric": ["Accuracy"], "title": "Hyperbolic Entailment Cones for Learning Hierarchical Embeddings"} {"abstract": "Temporal expressions are words or phrases that describe a point, duration or recurrence in time. Automatically annotating these expressions is a research goal of increasing interest. Recognising them can be achieved with minimally supervised machine learning, but interpreting them accurately (normalisation) is a complex task requiring human knowledge. In this paper, we present TIMEN, a community-driven tool for temporal expression normalisation. TIMEN is derived from current best approaches and is an independent tool, enabling easy integration in existing systems. 
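To make the rule-plus-knowledge-base flavour of temporal normalisation concrete, here is a minimal standard-library sketch that maps a handful of relative expressions to ISO dates given a reference date; the rule table is invented and far smaller than what systems like TIMEN or Chrono encode.

```python
import re
from datetime import date, timedelta

RULES = [
    (re.compile(r"\btoday\b", re.I),     lambda ref: ref),
    (re.compile(r"\byesterday\b", re.I), lambda ref: ref - timedelta(days=1)),
    (re.compile(r"\btomorrow\b", re.I),  lambda ref: ref + timedelta(days=1)),
    (re.compile(r"\bnext week\b", re.I), lambda ref: ref + timedelta(weeks=1)),
]

def normalise(text, ref):
    """Return (expression, ISO value) pairs for every rule that fires in `text`."""
    return [(m.group(0), fn(ref).isoformat())
            for pattern, fn in RULES for m in pattern.finditer(text)]

print(normalise("The report is due tomorrow, not next week.", ref=date(2024, 3, 1)))
```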
We argue that temporal expression normalisation can only be effectively performed with a large knowledge base and set of rules. Our solution is a framework and system with which to capture this knowledge for different languages. Using both existing and newly-annotated data, we present results showing competitive performance and invite the IE community to contribute to a knowledge base in order to solve the temporal expression normalisation problem.", "field": [], "task": ["Information Retrieval", "Knowledge Base Population", "Question Answering", "Timex normalization"], "method": [], "dataset": ["TimeBank"], "metric": ["F1-Score"], "title": "TIMEN: An Open Temporal Expression Normalisation Resource"} {"abstract": "Skeleton-based human action recognition has recently drawn increasing\nattentions with the availability of large-scale skeleton datasets. The most\ncrucial factors for this task lie in two aspects: the intra-frame\nrepresentation for joint co-occurrences and the inter-frame representation for\nskeletons' temporal evolutions. In this paper we propose an end-to-end\nconvolutional co-occurrence feature learning framework. The co-occurrence\nfeatures are learned with a hierarchical methodology, in which different levels\nof contextual information are aggregated gradually. Firstly point-level\ninformation of each joint is encoded independently. Then they are assembled\ninto semantic representation in both spatial and temporal domains.\nSpecifically, we introduce a global spatial aggregation scheme, which is able\nto learn superior joint co-occurrence features over local aggregation. Besides,\nraw skeleton coordinates as well as their temporal difference are integrated\nwith a two-stream paradigm. Experiments show that our approach consistently\noutperforms other state-of-the-arts on action recognition and detection\nbenchmarks like NTU RGB+D, SBU Kinect Interaction and PKU-MMD.", "field": [], "task": ["Action Recognition", "RF-based Pose Estimation", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": [" RF-MMD", "NTU RGB+D", "PKU-MMD"], "metric": ["Accuracy (CS)", "mAP (@0.1, Visible)", "mAP@0.50 (CV)", "Accuracy (CV)", "mAP (@0.1, Through-wall)", "mAP@0.50 (CS)"], "title": "Co-occurrence Feature Learning from Skeleton Data for Action Recognition and Detection with Hierarchical Aggregation"} {"abstract": "The continually increasing number of documents produced each year\nnecessitates ever improving information processing methods for searching,\nretrieving, and organizing text. Central to these information processing\nmethods is document classification, which has become an important application\nfor supervised learning. Recently the performance of these traditional\nclassifiers has degraded as the number of documents has increased. This is\nbecause along with this growth in the number of documents has come an increase\nin the number of categories. This paper approaches this problem differently\nfrom current document classification methods that view the problem as\nmulti-class classification. Instead we perform hierarchical classification\nusing an approach we call Hierarchical Deep Learning for Text classification\n(HDLTex). 
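A toy scikit-learn sketch of the level-wise routing idea behind the HDLTex record above: a first classifier picks the parent category and a per-parent classifier picks the child. The linear models, the tiny corpus, and the category names are illustrative stand-ins, not the paper's stacked deep architectures.
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy documents with (parent, child) labels.
docs     = ["neural networks for vision", "sql query optimisation",
            "tumour growth modelling", "clinical trial statistics"]
parents  = ["CS", "CS", "Medical", "Medical"]
child_y  = ["ML", "DB", "Cancer", "Stats"]
children = {"CS": ["ML", "DB"], "Medical": ["Cancer", "Stats"]}

vec = TfidfVectorizer().fit(docs)
X = vec.transform(docs)

# Level 1: one classifier over parent categories.
level1 = LogisticRegression().fit(X, parents)

# Level 2: one specialised classifier per parent category.
level2 = {p: LogisticRegression().fit(
              X[[i for i, y in enumerate(parents) if y == p]],
              [c for c, y in zip(child_y, parents) if y == p])
          for p in children}

def classify(text):
    x = vec.transform([text])
    parent = level1.predict(x)[0]          # route to the matching child classifier
    return parent, level2[parent].predict(x)[0]

print(classify("convolutional networks for image classification"))
```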
HDLTex employs stacks of deep learning architectures to provide\nspecialized understanding at each level of the document hierarchy.", "field": [], "task": ["Document Classification", "Multi-class Classification", "Text Classification"], "method": [], "dataset": ["WOS-5736", "WOS-11967", "WOS-46985"], "metric": ["Accuracy"], "title": "HDLTex: Hierarchical Deep Learning for Text Classification"} {"abstract": "A large number of mainstream applications, like temporal search, event detection, and trend identification, assume knowledge of the timestamp of every document in a given textual collection. In many cases, however, the required timestamps are either unavailable or ambiguous. A characteristic instance of this problem emerges in the context of large repositories of old digitized documents. For such documents, the timestamp may be corrupted during the digitization process, or may simply be unavailable. In this paper, we study the task of approximating the timestamp of a document, so-called document dating. We propose a contentbased method and use recent advances in the domain of term burstiness, which allow it to overcome the drawbacks of previous document dating methods, e.g. the fix time partition strategy. We use an extensive experimental evaluation on different datasets to validate the efficacy and advantages of our methodology, showing that our method outperforms the state of the art methods on document dating.", "field": [], "task": ["Document Dating"], "method": [], "dataset": ["APW", "NYT"], "metric": ["Accuracy"], "title": "A Burstiness-aware Approach for Document Dating"} {"abstract": "Hyperspectral images (HSIs) provide rich spectral-spatial information with stacked hundreds of contiguous narrowbands. Due to the existence of noise and band correlation, the selection of informative spectral-spatial kernel features poses a challenge. This is often addressed by using convolutional neural networks (CNNs) with receptive field (RF) having fixed sizes. However, these solutions cannot enable neurons to effectively adjust RF sizes and cross-channel dependencies when forward and backward propagations are used to optimize the network. In this article, we present an attention-based adaptive spectral-spatial kernel improved residual network (A\u00b2S\u00b2K-ResNet) with spectral attention to capture discriminative spectral-spatial features for HSI classification in an end-to-end training fashion. In particular, the proposed network learns selective 3-D convolutional kernels to jointly extract spectral-spatial features using improved 3-D ResBlocks and adopts an efficient feature recalibration (EFR) mechanism to boost the classification performance. Extensive experiments are performed on three well-known hyperspectral data sets, i.e., IP, KSC, and UP, and the proposed A\u00b2S\u00b2K-ResNet can provide better classification results in terms of overall accuracy (OA), average accuracy (AA), and Kappa compared with the existing methods investigated. 
The source code will be made available at https://github.com/suvojit-0x55aa/A2S2K-ResNet.", "field": [], "task": ["Hyperspectral Image Classification", "Image Classification"], "method": [], "dataset": ["Indian Pines", "Kennedy Space Center", "Pavia University"], "metric": ["Overall Accuracy"], "title": "Attention-Based Adaptive Spectral-Spatial Kernel ResNet for Hyperspectral Image Classification"} {"abstract": "The recent explosive interest on transformers has suggested their potential to become powerful \"universal\" models for computer vision tasks, such as classification, detection, and segmentation. However, how further transformers can go - are they ready to take some more notoriously difficult vision tasks, e.g., generative adversarial networks (GANs)? Driven by that curiosity, we conduct the first pilot study in building a GAN \\textbf{completely free of convolutions}, using only pure transformer-based architectures. Our vanilla GAN architecture, dubbed \\textbf{TransGAN}, consists of a memory-friendly transformer-based generator that progressively increases feature resolution while decreasing embedding dimension, and a patch-level discriminator that is also transformer-based. We then demonstrate TransGAN to notably benefit from data augmentations (more than standard GANs), a multi-task co-training strategy for the generator, and a locally initialized self-attention that emphasizes the neighborhood smoothness of natural images. Equipped with those findings, TransGAN can effectively scale up with bigger models and high-resolution image datasets. Specifically, our best architecture achieves highly competitive performance compared to current state-of-the-art GANs based on convolutional backbones. Specifically, TransGAN sets \\textbf{new state-of-the-art} IS score of 10.10 and FID score of 25.32 on STL-10. It also reaches competitive 8.64 IS score and 11.89 FID score on Cifar-10, and 12.23 FID score on CelebA $64\\times64$, respectively. We also conclude with a discussion of the current limitations and future potential of TransGAN. The code is available at \\url{https://github.com/VITA-Group/TransGAN}.", "field": [], "task": ["Image Generation"], "method": [], "dataset": ["STL-10", "CelebA 64x64", "CIFAR-10"], "metric": ["Inception score", "FID"], "title": "TransGAN: Two Transformers Can Make One Strong GAN"} {"abstract": "We present a deep learning method for the interactive video object\nsegmentation. Our method is built upon two core operations, interaction and\npropagation, and each operation is conducted by Convolutional Neural Networks.\nThe two networks are connected both internally and externally so that the\nnetworks are trained jointly and interact with each other to solve the complex\nvideo object segmentation problem. We propose a new multi-round training scheme\nfor the interactive video object segmentation so that the networks can learn\nhow to understand the user's intention and update incorrect estimations during\nthe training. At the testing time, our method produces high-quality results and\nalso runs fast enough to work with users interactively. We evaluated the\nproposed method quantitatively on the interactive track benchmark at the DAVIS\nChallenge 2018. We outperformed other competing methods by a significant margin\nin both the speed and the accuracy. 
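A small PyTorch sketch of the patch-level tokenisation that a purely transformer-based discriminator such as the one in the TransGAN record above starts from; the patch size and embedding width here are illustrative assumptions.
```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Split an image into non-overlapping patches and project each to a token."""
    def __init__(self, img_size=32, patch=4, in_ch=3, dim=192):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        self.num_tokens = (img_size // patch) ** 2

    def forward(self, x):                         # x: (B, 3, H, W)
        tokens = self.proj(x)                     # (B, dim, H/p, W/p)
        return tokens.flatten(2).transpose(1, 2)  # (B, N, dim) sequence for a transformer

x = torch.randn(8, 3, 32, 32)
print(PatchEmbed()(x).shape)                      # torch.Size([8, 64, 192])
```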
We also demonstrated that our method works\nwell with real user interactions.", "field": [], "task": ["Interactive Video Object Segmentation", "Semantic Segmentation", "Video Object Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["DAVIS 2017"], "metric": ["AUC-J", "J@60s"], "title": "Fast User-Guided Video Object Segmentation by Interaction-and-Propagation Networks"} {"abstract": "Multi-label image and video classification are fundamental yet challenging tasks in computer vision. The main challenges lie in capturing spatial or temporal dependencies between labels and discovering the locations of discriminative features for each class. In order to overcome these challenges, we propose to use cross-modality attention with semantic graph embedding for multi label classification. Based on the constructed label graph, we propose an adjacency-based similarity graph embedding method to learn semantic label embeddings, which explicitly exploit label relationships. Then our novel cross-modality attention maps are generated with the guidance of learned label embeddings. Experiments on two multi-label image classification datasets (MS-COCO and NUS-WIDE) show our method outperforms other existing state-of-the-arts. In addition, we validate our method on a large multi-label video classification dataset (YouTube-8M Segments) and the evaluation results demonstrate the generalization capability of our method.", "field": [], "task": ["Graph Embedding", "Image Classification", "Multi-Label Classification", "Video Classification"], "method": [], "dataset": ["MS-COCO", "NUS-WIDE"], "metric": ["mAP", "MAP"], "title": "Cross-Modality Attention with Semantic Graph Embedding for Multi-Label Classification"} {"abstract": "We introduce dense relational captioning, a novel image captioning task which aims to generate multiple captions with respect to relational information between objects in a visual scene. Relational captioning provides explicit descriptions of each relationship between object combinations. This framework is advantageous in both diversity and amount of information, leading to a comprehensive image understanding based on relationships, e.g., relational proposal generation. For relational understanding between objects, the part-of-speech (POS, i.e., subject-object-predicate categories) can be a valuable prior information to guide the causal sequence of words in a caption. We enforce our framework to not only learn to generate captions but also predict the POS of each word. To this end, we propose the multi-task triple-stream network (MTTSNet) which consists of three recurrent units responsible for each POS which is trained by jointly predicting the correct captions and POS for each word. In addition, we found that the performance of MTTSNet can be improved by modulating the object embeddings with an explicit relational module. We demonstrate that our proposed model can generate more diverse and richer captions, via extensive experimental analysis on large scale datasets and several metrics. 
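A hedged sketch of the general pattern behind the cross-modality attention record above: spatial attention maps driven by learned semantic label embeddings, one per class. The single dot-product formulation and the tensor shapes are simplifying assumptions.
```python
import torch
import torch.nn.functional as F

def label_guided_attention(feat, label_emb):
    """feat: (B, D, H, W) image features; label_emb: (C, D) semantic label embeddings.
    Returns one attention-pooled feature vector per label, shape (B, C, D)."""
    flat = feat.flatten(2)                                   # (B, D, HW)
    scores = torch.einsum('cd,bdn->bcn', label_emb, flat)    # (B, C, HW)
    attn = F.softmax(scores, dim=-1)                         # where each label "looks"
    return torch.einsum('bcn,bdn->bcd', attn, flat)          # per-label pooled features

feat = torch.randn(2, 256, 14, 14)
label_emb = torch.randn(80, 256)
print(label_guided_attention(feat, label_emb).shape)         # torch.Size([2, 80, 256])
```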
We additionally extend analysis to an ablation study, applications on holistic image captioning, scene graph generation, and retrieval tasks.", "field": [], "task": ["Graph Generation", "Image Captioning", "Relational Captioning", "Scene Graph Generation"], "method": [], "dataset": ["relational captioning dataset"], "metric": ["Image-Level Recall"], "title": "Dense Relational Image Captioning via Multi-task Triple-Stream Networks"} {"abstract": "Nowadays, neural networks play an important role in the task of relation\nclassification. By designing different neural architectures, researchers have\nimproved the performance to a large extent in comparison with traditional\nmethods. However, existing neural networks for relation classification are\nusually of shallow architectures (e.g., one-layer convolutional neural networks\nor recurrent networks). They may fail to explore the potential representation\nspace in different abstraction levels. In this paper, we propose deep recurrent\nneural networks (DRNNs) for relation classification to tackle this challenge.\nFurther, we propose a data augmentation method by leveraging the directionality\nof relations. We evaluated our DRNNs on the SemEval-2010 Task~8, and achieve an\nF1-score of 86.1%, outperforming previous state-of-the-art recorded results.", "field": [], "task": ["Data Augmentation", "Relation Classification"], "method": [], "dataset": ["SemEval 2010 Task 8"], "metric": ["F1"], "title": "Improved Relation Classification by Deep Recurrent Neural Networks with Data Augmentation"} {"abstract": "The dominant neural architectures in question answer retrieval are based on\nrecurrent or convolutional encoders configured with complex word matching\nlayers. Given that recent architectural innovations are mostly new word\ninteraction layers or attention-based matching mechanisms, it seems to be a\nwell-established fact that these components are mandatory for good performance.\nUnfortunately, the memory and computation cost incurred by these complex\nmechanisms are undesirable for practical applications. As such, this paper\ntackles the question of whether it is possible to achieve competitive\nperformance with simple neural architectures. We propose a simple but novel\ndeep learning architecture for fast and efficient question-answer ranking and\nretrieval. More specifically, our proposed model, \\textsc{HyperQA}, is a\nparameter efficient neural network that outperforms other parameter intensive\nmodels such as Attentive Pooling BiLSTMs and Multi-Perspective CNNs on multiple\nQA benchmarks. The novelty behind \\textsc{HyperQA} is a pairwise ranking\nobjective that models the relationship between question and answer embeddings\nin Hyperbolic space instead of Euclidean space. This empowers our model with a\nself-organizing ability and enables automatic discovery of latent hierarchies\nwhile learning embeddings of questions and answers. 
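A minimal PyTorch sketch of the hyperbolic pairwise ranking the HyperQA record above describes (the same Poincaré-ball distance underlies the earlier entailment-cones record): question and answer embeddings are compared with the hyperbolic distance and trained with a margin ranking loss. The margin value and the clamping constant are assumptions.
```python
import torch

def poincare_distance(u, v, eps=1e-6):
    """Geodesic distance in the Poincare ball; u, v: (..., D) with norm < 1."""
    sq = ((u - v) ** 2).sum(-1)
    du = (1 - (u ** 2).sum(-1)).clamp(min=eps)
    dv = (1 - (v ** 2).sum(-1)).clamp(min=eps)
    return torch.acosh(1 + 2 * sq / (du * dv))

def qa_ranking_loss(q, a_pos, a_neg, margin=1.0):
    """Push the correct answer closer (in hyperbolic distance) than a wrong one."""
    return torch.relu(margin + poincare_distance(q, a_pos)
                             - poincare_distance(q, a_neg)).mean()

# Small norms keep the random points strictly inside the unit ball.
q, ap, an = [0.1 * torch.randn(32, 64) for _ in range(3)]
print(qa_ranking_loss(q, ap, an))
```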
Our model requires no\nfeature engineering, no similarity matrix matching, no complicated attention\nmechanisms nor over-parameterized layers and yet outperforms and remains\ncompetitive to many models that have these functionalities on multiple\nbenchmarks.", "field": [], "task": ["Feature Engineering", "Question Answering", "Representation Learning"], "method": [], "dataset": ["TrecQA", "YahooCQA", "SemEvalCQA", "WikiQA"], "metric": ["P@1", "MRR", "MAP"], "title": "Hyperbolic Representation Learning for Fast and Efficient Neural Question Answering"} {"abstract": "When considering person re-identification (re-ID) as a retrieval process,\nre-ranking is a critical step to improve its accuracy. Yet in the re-ID\ncommunity, limited effort has been devoted to re-ranking, especially those\nfully automatic, unsupervised solutions. In this paper, we propose a\nk-reciprocal encoding method to re-rank the re-ID results. Our hypothesis is\nthat if a gallery image is similar to the probe in the k-reciprocal nearest\nneighbors, it is more likely to be a true match. Specifically, given an image,\na k-reciprocal feature is calculated by encoding its k-reciprocal nearest\nneighbors into a single vector, which is used for re-ranking under the Jaccard\ndistance. The final distance is computed as the combination of the original\ndistance and the Jaccard distance. Our re-ranking method does not require any\nhuman interaction or any labeled data, so it is applicable to large-scale\ndatasets. Experiments on the large-scale Market-1501, CUHK03, MARS, and PRW\ndatasets confirm the effectiveness of our method.", "field": [], "task": ["Person Re-Identification"], "method": [], "dataset": ["CUHK03 detected", "Market-1501", "CUHK03 labeled", "CUHK03"], "metric": ["Rank-1", "MAP"], "title": "Re-ranking Person Re-identification with k-reciprocal Encoding"} {"abstract": "Matching pedestrians across multiple camera views, known as human\nre-identification, is a challenging research problem that has numerous\napplications in visual surveillance. With the resurgence of Convolutional\nNeural Networks (CNNs), several end-to-end deep Siamese CNN architectures have\nbeen proposed for human re-identification with the objective of projecting the\nimages of similar pairs (i.e. same identity) to be closer to each other and\nthose of dissimilar pairs to be distant from each other. However, current\nnetworks extract fixed representations for each image regardless of other\nimages which are paired with it and the comparison with other images is done\nonly at the final level. In this setting, the network is at risk of failing to\nextract finer local patterns that may be essential to distinguish positive\npairs from hard negative pairs. In this paper, we propose a gating function to\nselectively emphasize such fine common local patterns by comparing the\nmid-level features across pairs of images. This produces flexible\nrepresentations for the same image according to the images they are paired\nwith. We conduct experiments on the CUHK03, Market-1501 and VIPeR datasets and\ndemonstrate improved performance compared to a baseline Siamese CNN\narchitecture.", "field": [], "task": ["Person Re-Identification"], "method": [], "dataset": ["Market-1501"], "metric": ["Rank-1", "MAP"], "title": "Gated Siamese Convolutional Neural Network Architecture for Human Re-Identification"} {"abstract": "Learning automatically the structure of object categories remains an\nimportant open problem in computer vision. 
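A simplified NumPy sketch of the core of the k-reciprocal re-ranking record above: build k-reciprocal neighbour sets, measure a Jaccard distance between them, and blend it with the original distance. The full method also uses set expansion and local query expansion, which are omitted; k and the mixing weight are illustrative.
```python
import numpy as np

def k_reciprocal_rerank(dist, k=20, lam=0.3):
    """dist: (N, N) pairwise original distances over probe + gallery images."""
    n = dist.shape[0]
    knn = np.argsort(dist, axis=1)[:, :k]                  # k nearest neighbours per image
    neighbours = [set(knn[i]) for i in range(n)]

    # i and j are k-reciprocal neighbours if each appears in the other's k-NN list.
    recip = [set(j for j in neighbours[i] if i in neighbours[j]) for i in range(n)]

    jaccard = np.zeros_like(dist, dtype=float)
    for i in range(n):
        for j in range(n):
            inter = len(recip[i] & recip[j])
            union = len(recip[i] | recip[j])
            jaccard[i, j] = 1.0 - inter / union if union else 1.0

    return lam * dist + (1.0 - lam) * jaccard              # final re-ranked distance

d = np.random.rand(50, 50); d = (d + d.T) / 2; np.fill_diagonal(d, 0)
print(k_reciprocal_rerank(d).shape)
```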
In this paper, we propose a novel\nunsupervised approach that can discover and learn landmarks in object\ncategories, thus characterizing their structure. Our approach is based on\nfactorizing image deformations, as induced by a viewpoint change or an object\ndeformation, by learning a deep neural network that detects landmarks\nconsistently with such visual effects. Furthermore, we show that the learned\nlandmarks establish meaningful correspondences between different object\ninstances in a category without having to impose this requirement explicitly.\nWe assess the method qualitatively on a variety of object types, natural and\nman-made. We also show that our unsupervised landmarks are highly predictive of\nmanually-annotated landmarks in face benchmark datasets, and can be used to\nregress these with a high degree of accuracy.", "field": [], "task": ["Unsupervised Facial Landmark Detection"], "method": [], "dataset": ["AFLW-MTFL", "MAFL", "300W"], "metric": ["NME"], "title": "Unsupervised learning of object landmarks by factorized spatial embeddings"} {"abstract": "We are interested in the large-scale learning of Mahalanobis distances, with\na particular focus on person re-identification.\n We propose a metric learning formulation called Weighted Approximate Rank\nComponent Analysis (WARCA). WARCA optimizes the precision at top ranks by\ncombining the WARP loss with a regularizer that favors orthonormal linear\nmappings, and avoids rank-deficient embeddings. Using this new regularizer\nallows us to adapt the large-scale WSABIE procedure and to leverage the Adam\nstochastic optimization algorithm, which results in an algorithm that scales\ngracefully to very large data-sets. Also, we derive a kernelized version which\nallows to take advantage of state-of-the-art features for re-identification\nwhen data-set size permits kernel computation.\n Benchmarks on recent and standard re-identification data-sets show that our\nmethod beats existing state-of-the-art techniques both in term of accuracy and\nspeed. We also provide experimental analysis to shade lights on the properties\nof the regularizer we use, and how it improves performance.", "field": [], "task": ["Metric Learning", "Person Re-Identification", "Stochastic Optimization"], "method": [], "dataset": ["Market-1501"], "metric": ["Rank-1"], "title": "Scalable Metric Learning via Weighted Approximate Rank Component Analysis"} {"abstract": "The reading comprehension task, that asks questions about a given evidence\ndocument, is a central problem in natural language understanding. Recent\nformulations of this task have typically focused on answer selection from a set\nof candidates pre-defined manually or through the use of an external NLP\npipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset\nin which the answers can be arbitrary strings from the supplied text. In this\npaper, we focus on this answer extraction task, presenting a novel model\narchitecture that efficiently builds fixed length representations of all spans\nin the evidence document with a recurrent network. We show that scoring\nexplicit span representations significantly improves performance over other\napproaches that factor the prediction into separate predictions about words or\nstart and end markers. 
Our approach improves upon the best published results of\nWang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s\nbaseline by > 50%.", "field": [], "task": ["Answer Selection", "Natural Language Understanding", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["SQuAD1.1 dev", "SQuAD1.1"], "metric": ["EM", "F1"], "title": "Learning Recurrent Span Representations for Extractive Question Answering"} {"abstract": "We introduce a simple and effective method for regularizing large\nconvolutional neural networks. We replace the conventional deterministic\npooling operations with a stochastic procedure, randomly picking the activation\nwithin each pooling region according to a multinomial distribution, given by\nthe activities within the pooling region. The approach is hyper-parameter free\nand can be combined with other regularization approaches, such as dropout and\ndata augmentation. We achieve state-of-the-art performance on four image\ndatasets, relative to other approaches that do not utilize data augmentation.", "field": [], "task": ["Data Augmentation", "Image Classification"], "method": [], "dataset": ["SVHN", "CIFAR-100", "CIFAR-10"], "metric": ["Percentage error", "Percentage correct"], "title": "Stochastic Pooling for Regularization of Deep Convolutional Neural Networks"} {"abstract": "The primary aim of single-image super-resolution is to construct high-resolution (HR) images from corresponding low-resolution (LR) inputs. In previous approaches, which have generally been supervised, the training objective typically measures a pixel-wise average distance between the super-resolved (SR) and HR images. Optimizing such metrics often leads to blurring, especially in high variance (detailed) regions. We propose an alternative formulation of the super-resolution problem based on creating realistic SR images that downscale correctly. We present an algorithm addressing this problem, PULSE (Photo Upsampling via Latent Space Exploration), which generates high-resolution, realistic images at resolutions previously unseen in the literature. It accomplishes this in an entirely self-supervised fashion and is not confined to a specific degradation operator used during training, unlike previous methods (which require supervised training on databases of LR-HR image pairs). Instead of starting with the LR image and slowly adding detail, PULSE traverses the high-resolution natural image manifold, searching for images that downscale to the original LR image. This is formalized through the \"downscaling loss,\" which guides exploration through the latent space of a generative model. By leveraging properties of high-dimensional Gaussians, we restrict the search space to guarantee realistic outputs. PULSE thereby generates super-resolved images that both are realistic and downscale correctly. We show proof of concept of our approach in the domain of face super-resolution (i.e., face hallucination). We also present a discussion of the limitations and biases of the method as currently implemented with an accompanying model card with relevant metrics. 
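A NumPy sketch of the stochastic pooling record above: within each pooling window the kept activation is sampled from a multinomial whose probabilities are the (non-negative) activities themselves; the fallback for an all-zero window is an added assumption.
```python
import numpy as np

def stochastic_pool(region, rng=np.random.default_rng()):
    """region: non-negative activations of one pooling window (e.g. ReLU outputs)."""
    a = region.ravel()
    total = a.sum()
    if total == 0:                      # degenerate window: nothing to sample from
        return 0.0
    probs = a / total                   # multinomial given by the activities
    return float(rng.choice(a, p=probs))

window = np.array([[0.0, 1.0],
                   [3.0, 0.5]])
print(stochastic_pool(window))          # 3.0 is the most likely pick; 0.0 is never picked
```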
Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible.", "field": [], "task": ["Face Hallucination", "Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["FFHQ 256 x 256 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models"} {"abstract": "Hand action recognition is a special case of human action recognition with applications in human robot interaction, virtual reality or life-logging systems. Building action classifiers that are useful to recognize such heterogeneous set of activities is very challenging. There are very subtle changes across different actions from a given application but also large variations across domains (e.g. virtual reality vs life-logging). This work introduces a novel skeleton-based hand motion representation model that tackles this problem. The framework we propose is agnostic to the application domain or camera recording view-point. We demonstrate the performance of our proposed motion representation model both working for a single specific domain (intra-domain action classification) and working for different unseen domains (cross-domain action classification). For the intra-domain case, our approach gets better or similar performance than current state-of-the-art methods on well-known hand action recognition benchmarks. And when performing cross-domain hand action recognition (i.e., training our motion representation model in frontal-view recordings and testing it both for egocentric and third-person views), our approach achieves comparable results to the state-of-the-art methods that are trained intra-domain.", "field": [], "task": ["Action Classification", "Action Classification ", "Action Recognition", "Human robot interaction", "Temporal Action Localization"], "method": [], "dataset": ["SHREC 2017 track on 3D Hand Gesture Recognition", "First-Person Hand Action Benchmark"], "metric": ["3:1 Accuracy", "1:1 Accuracy", "14 gestures accuracy", "1:3 Accuracy", "Cross-person Accuracy", "28 gestures accuracy"], "title": "Domain and View-point Agnostic Hand Action Recognition"} {"abstract": "Existing metric learning losses can be categorized into two classes: pair-based and proxy-based losses. The former class can leverage fine-grained semantic relations between data points, but slows convergence in general due to its high training complexity. In contrast, the latter class enables fast and reliable convergence, but cannot consider the rich data-to-data relations. This paper presents a new proxy-based loss that takes advantages of both pair- and proxy-based methods and overcomes their limitations. Thanks to the use of proxies, our loss boosts the speed of convergence and is robust against noisy labels and outliers. At the same time, it allows embedding vectors of data to interact with each other in its gradients to exploit data-to-data relations. 
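A PyTorch sketch of a Proxy-Anchor-style loss matching the description above: each class proxy acts as an anchor, and all batch samples enter that proxy's log-sum-exp, so data-to-data relations appear through the shared gradients. The margin and scale values are common defaults, not taken from the abstract.
```python
import torch
import torch.nn.functional as F

def proxy_anchor_loss(emb, labels, proxies, margin=0.1, alpha=32.0):
    """emb: (B, D) embeddings, labels: (B,) class ids, proxies: (C, D) learnable."""
    sim = F.normalize(emb) @ F.normalize(proxies).t()          # cosine similarities (B, C)
    pos_mask = F.one_hot(labels, proxies.size(0)).float()      # (B, C)
    neg_mask = 1.0 - pos_mask

    pos = torch.exp(-alpha * (sim - margin)) * pos_mask
    neg = torch.exp( alpha * (sim + margin)) * neg_mask

    with_pos = pos_mask.sum(0) > 0                             # proxies present in this batch
    pos_term = torch.log1p(pos.sum(0))[with_pos].sum() / with_pos.sum()
    neg_term = torch.log1p(neg.sum(0)).sum() / proxies.size(0)
    return pos_term + neg_term

emb = torch.randn(16, 64)
proxies = torch.randn(10, 64, requires_grad=True)
labels = torch.randint(0, 10, (16,))
print(proxy_anchor_loss(emb, labels, proxies))
```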
Our method is evaluated on four public benchmarks, where a standard network trained with our loss achieves state-of-the-art performance and most quickly converges.", "field": [], "task": ["Fine-Grained Image Classification", "Fine-Grained Vehicle Classification", "Metric Learning"], "method": [], "dataset": [" CUB-200-2011", "CARS196", "Stanford Online Products"], "metric": ["R@1"], "title": "Proxy Anchor Loss for Deep Metric Learning"} {"abstract": "The goal of this work is to train strong models for visual speech recognition without requiring human annotated ground truth data. We achieve this by distilling from an Automatic Speech Recognition (ASR) model that has been trained on a large-scale audio-only corpus. We use a cross-modal distillation method that combines Connectionist Temporal Classification (CTC) with a frame-wise cross-entropy loss. Our contributions are fourfold: (i) we show that ground truth transcriptions are not necessary to train a lip reading system; (ii) we show how arbitrary amounts of unlabelled video data can be leveraged to improve performance; (iii) we demonstrate that distillation significantly speeds up training; and, (iv) we obtain state-of-the-art results on the challenging LRS2 and LRS3 datasets for training only on publicly available data.", "field": [], "task": ["Lipreading", "Lip Reading", "Speech Recognition", "Visual Speech Recognition"], "method": [], "dataset": ["LRS2"], "metric": ["Word Error Rate (WER)"], "title": "ASR is all you need: cross-modal distillation for lip reading"} {"abstract": "Most learning algorithms are not invariant to the scale of the function that\nis being approximated. We propose to adaptively normalize the targets used in\nlearning. This is useful in value-based reinforcement learning, where the\nmagnitude of appropriate value approximations can change over time when we\nupdate the policy of behavior. Our main motivation is prior work on learning to\nplay Atari games, where the rewards were all clipped to a predetermined range.\nThis clipping facilitates learning across many different games with a single\nlearning algorithm, but a clipped reward function can result in qualitatively\ndifferent behavior. Using the adaptive normalization we can remove this\ndomain-specific heuristic without diminishing overall performance.", "field": [], "task": ["Atari Games"], "method": [], "dataset": ["Atari 2600 Amidar", "Atari 2600 River Raid", "Atari 2600 Beam Rider", "Atari 2600 Video Pinball", "Atari 2600 Demon Attack", "Atari 2600 Enduro", "Atari 2600 Alien", "Atari 2600 Boxing", "Atari 2600 Bank Heist", "Atari 2600 Tutankham", "Atari 2600 Time Pilot", "Atari 2600 Space Invaders", "Atari 2600 Assault", "Atari 2600 Gravitar", "Atari 2600 Ice Hockey", "Atari 2600 Bowling", "Atari 2600 Private Eye", "Atari 2600 Berzerk", "Atari 2600 Asterix", "Atari 2600 Breakout", "Atari 2600 Name This Game", "Atari 2600 Crazy Climber", "Atari 2600 Pong", "Atari 2600 Krull", "Atari 2600 Freeway", "Atari 2600 James Bond", "Atari 2600 Robotank", "Atari 2600 Kangaroo", "Atari 2600 Venture", "Atari 2600 Asteroids", "Atari 2600 Fishing Derby", "Atari 2600 Ms. 
Pacman", "Atari 2600 Seaquest", "Atari 2600 Tennis", "Atari 2600 Zaxxon", "Atari 2600 Frostbite", "Atari 2600 Star Gunner", "Atari 2600 Double Dunk", "Atari 2600 Battle Zone", "Atari 2600 Gopher", "Atari 2600 Road Runner", "Atari 2600 Atlantis", "Atari 2600 Kung-Fu Master", "Atari 2600 Chopper Command", "Atari 2600 Up and Down", "Atari 2600 Wizard of Wor", "Atari 2600 Q*Bert", "Atari 2600 Centipede", "Atari 2600 HERO"], "metric": ["Score"], "title": "Learning values across many orders of magnitude"} {"abstract": "Exploring contextual information in the local region is important for shape\nunderstanding and analysis. Existing studies often employ hand-crafted or\nexplicit ways to encode contextual information of local regions. However, it is\nhard to capture fine-grained contextual information in hand-crafted or explicit\nmanners, such as the correlation between different areas in a local region,\nwhich limits the discriminative ability of learned features. To resolve this\nissue, we propose a novel deep learning model for 3D point clouds, named\nPoint2Sequence, to learn 3D shape features by capturing fine-grained contextual\ninformation in a novel implicit way. Point2Sequence employs a novel sequence\nlearning model for point clouds to capture the correlations by aggregating\nmulti-scale areas of each local region with attention. Specifically,\nPoint2Sequence first learns the feature of each area scale in a local region.\nThen, it captures the correlation between area scales in the process of\naggregating all area scales using a recurrent neural network (RNN) based\nencoder-decoder structure, where an attention mechanism is proposed to\nhighlight the importance of different area scales. Experimental results show\nthat Point2Sequence achieves state-of-the-art performance in shape\nclassification and segmentation tasks.", "field": [], "task": ["3D Part Segmentation", "3D Point Cloud Classification", "Shape Representation Of 3D Point Clouds"], "method": [], "dataset": ["ShapeNet-Part", "ModelNet40"], "metric": ["Overall Accuracy", "Instance Average IoU"], "title": "Point2Sequence: Learning the Shape Representation of 3D Point Clouds with an Attention-based Sequence to Sequence Network"} {"abstract": "A longstanding question in computer vision concerns the representation of 3D\nshapes for recognition: should 3D shapes be represented with descriptors\noperating on their native 3D formats, such as voxel grid or polygon mesh, or\ncan they be effectively represented with view-based descriptors? We address\nthis question in the context of learning to recognize 3D shapes from a\ncollection of their rendered views on 2D images. We first present a standard\nCNN architecture trained to recognize the shapes' rendered views independently\nof each other, and show that a 3D shape can be recognized even from a single\nview at an accuracy far higher than using state-of-the-art 3D shape\ndescriptors. Recognition rates further increase when multiple views of the\nshapes are provided. In addition, we present a novel CNN architecture that\ncombines information from multiple views of a 3D shape into a single and\ncompact shape descriptor offering even better recognition performance. The same\narchitecture can be applied to accurately recognize human hand-drawn sketches\nof shapes. 
We conclude that a collection of 2D views can be highly informative\nfor 3D shape recognition and is amenable to emerging CNN architectures and\ntheir derivatives.", "field": [], "task": ["3D Point Cloud Classification", "3D Shape Recognition"], "method": [], "dataset": ["ModelNet40"], "metric": ["Overall Accuracy"], "title": "Multi-view Convolutional Neural Networks for 3D Shape Recognition"} {"abstract": "To bridge the gap between Machine Reading Comprehension (MRC) models and human beings, which is mainly reflected in the hunger for data and the robustness to noise, in this paper, we explore how to integrate the neural networks of MRC models with the general knowledge of human beings. On the one hand, we propose a data enrichment method, which uses WordNet to extract inter-word semantic connections as general knowledge from each given passage-question pair. On the other hand, we propose an end-to-end MRC model named as Knowledge Aided Reader (KAR), which explicitly uses the above extracted general knowledge to assist its attention mechanisms. Based on the data enrichment method, KAR is comparable in performance with the state-of-the-art MRC models, and significantly more robust to noise than them. When only a subset (20%-80%) of the training examples are available, KAR outperforms the state-of-the-art MRC models by a large margin, and is still reasonably robust to noise.", "field": [], "task": ["Machine Reading Comprehension", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["SQuAD1.1 dev", "SQuAD1.1"], "metric": ["EM", "F1"], "title": "Explicit Utilization of General Knowledge in Machine Reading Comprehension"} {"abstract": "We present a joint model for entity-level relation extraction from documents. In contrast to other approaches - which focus on local intra-sentence mention pairs and thus require annotations on mention level - our model operates on entity level. To do so, a multi-task approach is followed that builds upon coreference resolution and gathers relevant signals via multi-instance learning with multi-level representations combining global entity and local mention information. We achieve state-of-the-art relation extraction results on the DocRED dataset and report the first entity-level end-to-end relation extraction results for future reference. Finally, our experimental results suggest that a joint approach is on par with task-specific learning, though more efficient due to shared parameters and training steps.", "field": [], "task": ["Coreference Resolution", "Named Entity Recognition", "Nested Named Entity Recognition", "Relation Extraction"], "method": [], "dataset": ["DocRED"], "metric": ["Ign F1", "F1"], "title": "An End-to-end Model for Entity-level Relation Extraction using Multi-instance Learning"} {"abstract": "Machine reading comprehension with unanswerable questions aims to abstain\nfrom answering when no answer can be inferred. In addition to extract answers,\nprevious works usually predict an additional \"no-answer\" probability to detect\nunanswerable cases. However, they fail to validate the answerability of the\nquestion by verifying the legitimacy of the predicted answer. To address this\nproblem, we propose a novel read-then-verify system, which not only utilizes a\nneural reader to extract candidate answers and produce no-answer probabilities,\nbut also leverages an answer verifier to decide whether the predicted answer is\nentailed by the input snippets. 
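A hedged NLTK sketch of the WordNet-based inter-word connection extraction described in the "Explicit Utilization of General Knowledge" record above; treating two words as connected when their synsets overlap or one's direct hypernyms touch the other's synsets is a simplification of the paper's data-enrichment rules.
```python
# Requires: pip install nltk, then nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def related_via_wordnet(w1, w2):
    """True if the two words share a synset, or one is a direct hypernym of the other."""
    s1, s2 = set(wn.synsets(w1)), set(wn.synsets(w2))
    if s1 & s2:
        return True
    hyper1 = {h for s in s1 for h in s.hypernyms()}
    hyper2 = {h for s in s2 for h in s.hypernyms()}
    return bool(hyper1 & s2 or hyper2 & s1)

# Mark passage tokens that are semantically connected to any question token.
question = ["dog", "bark"]
passage = ["the", "puppy", "slept", "on", "the", "car"]
print([tok for tok in passage if any(related_via_wordnet(tok, q) for q in question)])
```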
Moreover, we introduce two auxiliary losses to\nhelp the reader better handle answer extraction as well as no-answer detection,\nand investigate three different architectures for the answer verifier. Our\nexperiments on the SQuAD 2.0 dataset show that our system achieves a score of\n74.2 F1 on the test set, achieving state-of-the-art results at the time of\nsubmission (Aug. 28th, 2018).", "field": [], "task": ["Machine Reading Comprehension", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["SQuAD2.0 dev", "SQuAD2.0"], "metric": ["EM", "F1"], "title": "Read + Verify: Machine Reading Comprehension with Unanswerable Questions"} {"abstract": "Knowledge base completion (KBC) aims to automatically infer missing facts by exploiting information already present in a knowledge base (KB). A promising approach for KBC is to embed knowledge into latent spaces and make predictions from learned embeddings. However, existing embedding models are subject to at least one of the following limitations: (1) theoretical inexpressivity, (2) lack of support for prominent inference patterns (e.g., hierarchies), (3) lack of support for KBC over higher-arity relations, and (4) lack of support for incorporating logical rules. Here, we propose a spatio-translational embedding model, called BoxE, that simultaneously addresses all these limitations. BoxE embeds entities as points, and relations as a set of hyper-rectangles (or boxes), which spatially characterize basic logical properties. This seemingly simple abstraction yields a fully expressive model offering a natural encoding for many desired logical properties. BoxE can both capture and inject rules from rich classes of rule languages, going well beyond individual inference patterns. By design, BoxE naturally applies to higher-arity KBs. We conduct a detailed experimental analysis, and show that BoxE achieves state-of-the-art performance, both on benchmark knowledge graphs and on more general KBs, and we empirically show the power of integrating logical rules.", "field": [], "task": ["Knowledge Base Completion", "Knowledge Graphs", "Link Prediction"], "method": [], "dataset": ["FB-AUTO", "JF17K", "YAGO3-10", "FB15k-237"], "metric": ["Hits@1", "Hit@1", "MRR", "Hits@10", "Hit@10"], "title": "BoxE: A Box Embedding Model for Knowledge Base Completion"} {"abstract": "Deep neural networks for machine comprehension typically utilizes only word\nor character embeddings without explicitly taking advantage of structured\nlinguistic information such as constituency trees and dependency trees. In this\npaper, we propose structural embedding of syntactic trees (SEST), an algorithm\nframework to utilize structured information and encode them into vector\nrepresentations that can boost the performance of algorithms for the machine\ncomprehension. We evaluate our approach using a state-of-the-art neural\nattention model on the SQuAD dataset. Experimental results demonstrate that our\nmodel can accurately identify the syntactic boundaries of the sentences and\nextract answers that are syntactically coherent over the baseline methods.", "field": [], "task": ["Question Answering", "Reading Comprehension"], "method": [], "dataset": ["SQuAD1.1 dev", "SQuAD1.1"], "metric": ["EM", "F1"], "title": "Structural Embedding of Syntactic Trees for Machine Comprehension"} {"abstract": "We introduce a simple semi-supervised learning approach for images based on\nin-painting using an adversarial loss. 
Images with random patches removed are\npresented to a generator whose task is to fill in the hole, based on the\nsurrounding pixels. The in-painted images are then presented to a discriminator\nnetwork that judges if they are real (unaltered training images) or not. This\ntask acts as a regularizer for standard supervised training of the\ndiscriminator. Using our approach we are able to directly train large VGG-style\nnetworks in a semi-supervised fashion. We evaluate on STL-10 and PASCAL\ndatasets, where our approach obtains performance comparable or superior to\nexisting methods.", "field": [], "task": ["Image Classification", "Semi-Supervised Image Classification"], "method": [], "dataset": ["STL-10, 1000 Labels", "STL-10"], "metric": ["Percentage correct", "Accuracy"], "title": "Semi-Supervised Learning with Context-Conditional Generative Adversarial Networks"} {"abstract": "There is compelling evidence that coreference prediction would benefit from\nmodeling global information about entity-clusters. Yet, state-of-the-art\nperformance can be achieved with systems treating each mention prediction\nindependently, which we attribute to the inherent difficulty of crafting\ninformative cluster-level features. We instead propose to use recurrent neural\nnetworks (RNNs) to learn latent, global representations of entity clusters\ndirectly from their mentions. We show that such representations are especially\nuseful for the prediction of pronominal mentions, and can be incorporated into\nan end-to-end coreference system that outperforms the state of the art without\nrequiring any additional search.", "field": [], "task": ["Coreference Resolution"], "method": [], "dataset": ["OntoNotes"], "metric": ["F1"], "title": "Learning Global Features for Coreference Resolution"} {"abstract": "Learning to construct text representations in end-to-end systems can be\ndifficult, as natural languages are highly compositional and task-specific\nannotated datasets are often limited in size. Methods for directly supervising\nlanguage composition can allow us to guide the models based on existing\nknowledge, regularizing them towards more robust and interpretable\nrepresentations. In this paper, we investigate how objectives at different\ngranularities can be used to learn better language representations and we\npropose an architecture for jointly learning to label sentences and tokens. The\npredictions at each level are combined together using an attention mechanism,\nwith token-level labels also acting as explicit supervision for composing\nsentence-level representations. Our experiments show that by learning to\nperform these tasks jointly on multiple levels, the model achieves substantial\nimprovements for both sentence classification and sequence labeling.", "field": [], "task": ["Grammatical Error Detection", "Sentence Classification"], "method": [], "dataset": ["CoNLL-2014 A2", "FCE", "CoNLL-2014 A1", "JFLEG"], "metric": ["F0.5"], "title": "Jointly Learning to Label Sentences and Tokens"} {"abstract": "We address the problem of graph classification based only on structural information. Inspired by natural language processing techniques (NLP), our model sequentially embeds information to estimate class membership probabilities. Besides, we experiment with NLP-like variational regularization techniques, making the model predict the next node in the sequence as it reads it. We experimentally show that our model achieves state-of-the-art classification results on several standard molecular datasets. 
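A small PyTorch sketch of the input preparation the in-painting record above relies on: removing one random square patch from each training image before it is handed to the generator; the patch size is an illustrative assumption.
```python
import torch

def remove_random_patch(images, patch=16):
    """Zero out one random patch per image; returns the holed images and the masks."""
    B, C, H, W = images.shape
    masks = torch.ones_like(images)
    for b in range(B):
        top = torch.randint(0, H - patch + 1, (1,)).item()
        left = torch.randint(0, W - patch + 1, (1,)).item()
        masks[b, :, top:top + patch, left:left + patch] = 0
    holed = images * masks     # the generator fills the hole, the discriminator judges realism
    return holed, masks

imgs = torch.rand(4, 3, 64, 64)
holed, masks = remove_random_patch(imgs)
print(holed.shape, masks[0].mean().item() < 1.0)
```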
Finally, we perform a qualitative analysis and give some insights on whether the node prediction helps the model better classify graphs.", "field": [], "task": ["Graph Classification"], "method": [], "dataset": ["PROTEINS", "MUTAG", "ENZYMES", "NCI1"], "metric": ["Accuracy"], "title": "Variational Recurrent Neural Networks for Graph Classification"} {"abstract": "In this paper, we propose a novel model for high-dimensional data, called the\nHybrid Orthogonal Projection and Estimation (HOPE) model, which combines a\nlinear orthogonal projection and a finite mixture model under a unified\ngenerative modeling framework. The HOPE model itself can be learned\nunsupervised from unlabelled data based on the maximum likelihood estimation as\nwell as discriminatively from labelled data. More interestingly, we have shown\nthe proposed HOPE models are closely related to neural networks (NNs) in a\nsense that each hidden layer can be reformulated as a HOPE model. As a result,\nthe HOPE framework can be used as a novel tool to probe why and how NNs work,\nmore importantly, to learn NNs in either supervised or unsupervised ways. In\nthis work, we have investigated the HOPE framework to learn NNs for several\nstandard tasks, including image recognition on MNIST and speech recognition on\nTIMIT. Experimental results have shown that the HOPE framework yields\nsignificant performance gains over the current state-of-the-art methods in\nvarious types of NN learning problems, including unsupervised feature learning,\nsupervised or semi-supervised learning.", "field": [], "task": ["Image Classification", "Speech Recognition"], "method": [], "dataset": ["MNIST"], "metric": ["Percentage error"], "title": "Hybrid Orthogonal Projection and Estimation (HOPE): A New Framework to Probe and Learn Neural Networks"} {"abstract": "Dependency tree structures capture long-distance and syntactic relationships between words in a sentence. The syntactic relations (e.g., nominal subject, object) can potentially infer the existence of certain named entities. In addition, the performance of a named entity recognizer could benefit from the long-distance dependencies between the words in dependency trees. In this work, we propose a simple yet effective dependency-guided LSTM-CRF model to encode the complete dependency trees and capture the above properties for the task of named entity recognition (NER). The data statistics show strong correlations between the entity types and dependency relations. We conduct extensive experiments on several standard datasets and demonstrate the effectiveness of the proposed model in improving NER and achieving state-of-the-art performance. Our analysis reveals that the significant improvements mainly result from the dependency relations and long-distance interactions provided by dependency trees.", "field": [], "task": ["Named Entity Recognition"], "method": [], "dataset": ["Ontonotes v5 (English)", "ontontoes chinese v5"], "metric": ["F1"], "title": "Dependency-Guided LSTM-CRF for Named Entity Recognition"} {"abstract": "Objective functions for training of deep networks for face-related\nrecognition tasks, such as facial expression recognition (FER), usually\nconsider each sample independently. 
In this work, we present a novel\npeak-piloted deep network (PPDN) that uses a sample with peak expression (easy\nsample) to supervise the intermediate feature responses for a sample of\nnon-peak expression (hard sample) of the same type and from the same subject.\nThe expression evolving process from non-peak expression to peak expression can\nthus be implicitly embedded in the network to achieve the invariance to\nexpression intensities. A special purpose back-propagation procedure, peak\ngradient suppression (PGS), is proposed for network training. It drives the\nintermediate-layer feature responses of non-peak expression samples towards\nthose of the corresponding peak expression samples, while avoiding the inverse.\nThis avoids degrading the recognition capability for samples of peak expression\ndue to interference from their non-peak expression counterparts. Extensive\ncomparisons on two popular FER datasets, Oulu-CASIA and CK+, demonstrate the\nsuperiority of the PPDN over state-of-the-art FER methods, as well as the\nadvantages of both the network structure and the optimization strategy.\nMoreover, it is shown that PPDN is a general architecture, extensible to other\ntasks by proper definition of peak and non-peak samples. This is validated by\nexperiments that show state-of-the-art performance on pose-invariant face\nrecognition, using the Multi-PIE dataset.", "field": [], "task": ["Face Recognition", "Facial Expression Recognition", "Robust Face Recognition"], "method": [], "dataset": ["Oulu-CASIA"], "metric": ["Accuracy (10-fold)"], "title": "Peak-Piloted Deep Network for Facial Expression Recognition"} {"abstract": "Neural natural language generation (NNLG) systems are known for their pathological outputs, i.e. generating text which is unrelated to the input specification. In this paper, we show the impact of semantic noise on state-of-the-art NNLG models which implement different semantic control mechanisms. We find that cleaned data can improve semantic correctness by up to 97%, while maintaining fluency. We also find that the most common error is omitting information, rather than hallucination.", "field": [], "task": ["Data-to-Text Generation", "Text Generation"], "method": [], "dataset": ["Cleaned E2E NLG Challenge"], "metric": ["BLEU"], "title": "Semantic Noise Matters for Neural Natural Language Generation"} {"abstract": "Sentences produced by abstractive summarization systems can be ungrammatical and fail to preserve the original meanings, despite being locally fluent. In this paper we propose to remedy this problem by jointly generating a sentence and its syntactic dependency parse while performing abstraction. If generating a word can introduce an erroneous relation to the summary, the behavior must be discouraged. The proposed method thus holds promise for producing grammatical sentences and encouraging the summary to stay true-to-original. Our contributions in this work are twofold. First, we present a novel neural architecture for abstractive summarization that combines a sequential decoder with a tree-based decoder in a synchronized manner to generate a summary sentence and its syntactic parse. Secondly, we describe a novel human evaluation protocol to assess if, and to what extent, a summary remains true to its original meanings.
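One way to read the peak gradient suppression step in the PPDN record above is as an asymmetric feature-matching loss, sketched below in PyTorch: non-peak features are pulled towards peak features whose gradient path is cut, so the peak branch is never dragged the other way. The specific L2 form is an assumption on my part.
```python
import torch
import torch.nn.functional as F

def peak_piloted_loss(feat_nonpeak, feat_peak):
    """Pull intermediate features of the non-peak (hard) sample towards those of the
    peak (easy) sample of the same subject/expression, without back-propagating
    into the peak branch (detach plays the role of the gradient suppression)."""
    return F.mse_loss(feat_nonpeak, feat_peak.detach())

feat_peak = torch.randn(8, 256, requires_grad=True)
feat_nonpeak = torch.randn(8, 256, requires_grad=True)
loss = peak_piloted_loss(feat_nonpeak, feat_peak)
loss.backward()
print(feat_nonpeak.grad is not None, feat_peak.grad is None)   # True True
```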
We evaluate our method on a number of summarization datasets and demonstrate competitive results against strong baselines.", "field": [], "task": ["Abstractive Text Summarization", "Text Summarization"], "method": [], "dataset": ["GigaWord"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Joint Parsing and Generation for Abstractive Summarization"} {"abstract": "Convolutional Neural Nets (CNNs) have become the reference technology for many computer vision problems. Although CNNs for facial landmark detection are very robust, they still lack accuracy when processing images acquired in unrestricted conditions. In this paper we investigate the use of a cascade of Neural Net regressors to increase the accuracy of the estimated facial landmarks. To this end we append two encoder-decoder CNNs with the same architecture. The first net produces a set of heatmaps with a rough estimation of landmark locations. The second, trained with synthetically generated occlusions, refines the location of ambiguous and occluded landmarks. Finally, a densely connected layer with shared weights among all heatmaps, accurately regresses the landmark coordinates. The proposed approach achieves state-of-the-art results in 300W, COFW and WFLW that are widely considered the most challenging public data sets.", "field": [], "task": ["Face Alignment", "Facial Landmark Detection"], "method": [], "dataset": ["WFLW", "COFW", "300W"], "metric": ["Fullset (public)", "AUC@0.1 (all)", "NME", "ME (%, all) ", "FR@0.1(%, all)", "Mean Error Rate"], "title": "Cascade of Encoder-Decoder CNNs with Learned Coordinates Regressor for Robust Facial Landmarks Detection"} {"abstract": "This paper presents a robust multi-class multi-object tracking (MCMOT)\nformulated by a Bayesian filtering framework. Multi-object tracking for\nunlimited object classes is conducted by combining detection responses and\nchanging point detection (CPD) algorithm. The CPD model is used to observe\nabrupt or abnormal changes due to a drift and an occlusion based spatiotemporal\ncharacteristics of track states. The ensemble of convolutional neural network\n(CNN) based object detector and Lucas-Kanede Tracker (KLT) based motion\ndetector is employed to compute the likelihoods of foreground regions as the\ndetection responses of different object classes. Extensive experiments are\nperformed using lately introduced challenging benchmark videos; ImageNet VID\nand MOT benchmark dataset. The comparison to state-of-the-art video tracking\ntechniques shows very encouraging results.", "field": [], "task": ["Multi-Object Tracking", "Object Tracking"], "method": [], "dataset": ["KITTI Tracking test"], "metric": ["MOTA"], "title": "Multi-Class Multi-Object Tracking using Changing Point Detection"} {"abstract": "Inferring the 6DoF pose of an object from a single RGB image is an important but challenging task, especially under heavy occlusion. While recent approaches improve upon the two stage approaches by training an end-to-end pipeline, they do not leverage local and global constraints. In this paper, we propose pairwise feature extraction to integrate local constraints, and triplet regularization to integrate global constraints for improved 6DoF object pose estimation. 
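For the facial-landmark record above, a common way to turn per-landmark heatmaps into coordinates is a differentiable soft-argmax, sketched here in PyTorch; note the paper instead uses a densely connected regressor over the heatmaps, so this closed form is a stand-in purely for illustration.
```python
import torch
import torch.nn.functional as F

def soft_argmax(heatmaps):
    """heatmaps: (B, K, H, W) -> expected (x, y) per landmark, shape (B, K, 2)."""
    B, K, H, W = heatmaps.shape
    probs = F.softmax(heatmaps.flatten(2), dim=-1).view(B, K, H, W)
    ys = torch.linspace(0, H - 1, H, device=heatmaps.device)
    xs = torch.linspace(0, W - 1, W, device=heatmaps.device)
    exp_y = (probs.sum(dim=3) * ys).sum(dim=2)      # expectation over rows
    exp_x = (probs.sum(dim=2) * xs).sum(dim=2)      # expectation over columns
    return torch.stack([exp_x, exp_y], dim=-1)

coords = soft_argmax(torch.randn(2, 68, 64, 64))
print(coords.shape)                                  # torch.Size([2, 68, 2])
```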
Coupled with better augmentation, our approach achieves state of the art results on the challenging Occlusion Linemod dataset, with a 9% improvement over the previous state of the art, and achieves competitive results on the Linemod dataset.", "field": [], "task": ["6D Pose Estimation using RGB"], "method": [], "dataset": ["LineMOD", "Occlusion LineMOD"], "metric": ["Mean ADD"], "title": "End-to-End Differentiable 6DoF Object Pose Estimation with Local and Global Constraints"} {"abstract": "Part-level representations are important for robust person re-identification (ReID), but in practice feature quality suffers due to the body part misalignment problem. In this paper, we present a robust, compact, and easy-to-use method called the Multi-task Part-aware Network (MPN), which is designed to extract semantically aligned part-level features from pedestrian images. MPN solves the body part misalignment problem via multi-task learning (MTL) in the training stage. More specifically, it builds one main task (MT) and one auxiliary task (AT) for each body part on the top of the same backbone model. The ATs are equipped with a coarse prior of the body part locations for training images. ATs then transfer the concept of the body parts to the MTs via optimizing the MT parameters to identify part-relevant channels from the backbone model. Concept transfer is accomplished by means of two novel alignment strategies: namely, parameter space alignment via hard parameter sharing and feature space alignment in a class-wise manner. With the aid of the learned high-quality parameters, MTs can independently extract semantically aligned part-level features from relevant channels in the testing stage. MPN has three key advantages: 1) it does not need to conduct body part detection in the inference stage; 2) its model is very compact and efficient for both training and testing; 3) in the training stage, it requires only coarse priors of body part locations, which are easy to obtain. Systematic experiments on four large-scale ReID databases demonstrate that MPN consistently outperforms state-of-the-art approaches by significant margins.", "field": [], "task": ["Multi-Task Learning", "Person Re-Identification"], "method": [], "dataset": ["CUHK03 detected", "MSMT17", "CUHK03 labeled", "DukeMTMC-reID", "Market-1501"], "metric": ["Rank-1", "mAP", "MAP"], "title": "Multi-task Learning with Coarse Priors for Robust Part-aware Person Re-identification"} {"abstract": "Humans have a natural instinct to identify unknown object instances in their environments. The intrinsic curiosity about these unknown instances aids in learning about them, when the corresponding knowledge is eventually available. This motivates us to propose a novel computer vision problem called: `Open World Object Detection', where a model is tasked to: 1) identify objects that have not been introduced to it as `unknown', without explicit supervision to do so, and 2) incrementally learn these identified unknown categories without forgetting previously learned classes, when the corresponding labels are progressively received. We formulate the problem, introduce a strong evaluation protocol and provide a novel solution, which we call ORE: Open World Object Detector, based on contrastive clustering and energy based unknown identification. Our experimental evaluation and ablation studies analyze the efficacy of ORE in achieving Open World objectives. 
As an interesting by-product, we find that identifying and characterizing unknown instances helps to reduce confusion in an incremental object detection setting, where we achieve state-of-the-art performance, with no extra methodological effort. We hope that our work will attract further research into this newly identified, yet crucial research direction.", "field": [], "task": ["Open World Object Detection"], "method": [], "dataset": ["COCO 2017 (Electronic, Indoor, Kitchen, Furniture)", "COCO 2017 (Sports, Food)", "PASCAL VOC 2007", "COCO 2017 (Outdoor, Accessories, Appliance, Truck)"], "metric": ["A-OSE", "WI", "MAP"], "title": "Towards Open World Object Detection"} {"abstract": "Predicting human behavior is a difficult and crucial task required for motion planning. It is challenging in large part due to the highly uncertain and multi-modal set of possible outcomes in real-world domains such as autonomous driving. Beyond single MAP trajectory prediction, obtaining an accurate probability distribution of the future is an area of active interest. We present MultiPath, which leverages a fixed set of future state-sequence anchors that correspond to modes of the trajectory distribution. At inference, our model predicts a discrete distribution over the anchors and, for each anchor, regresses offsets from anchor waypoints along with uncertainties, yielding a Gaussian mixture at each time step. Our model is efficient, requiring only one forward inference pass to obtain multi-modal future distributions, and the output is parametric, allowing compact communication and analytical probabilistic queries. We show on several datasets that our model achieves more accurate predictions, and compared to sampling baselines, does so with an order of magnitude fewer trajectories.", "field": [], "task": ["Autonomous Driving", "Motion Planning", "Trajectory Prediction"], "method": [], "dataset": ["PAID"], "metric": ["minFDE3", "minADE3"], "title": "MultiPath: Multiple Probabilistic Anchor Trajectory Hypotheses for Behavior Prediction"} {"abstract": "Surveillance videos are able to capture a variety of realistic anomalies. In\nthis paper, we propose to learn anomalies by exploiting both normal and\nanomalous videos. To avoid annotating the anomalous segments or clips in\ntraining videos, which is very time consuming, we propose to learn anomaly\nthrough the deep multiple instance ranking framework by leveraging weakly\nlabeled training videos, i.e. the training labels (anomalous or normal) are at\nvideo-level instead of clip-level. In our approach, we consider normal and\nanomalous videos as bags and video segments as instances in multiple instance\nlearning (MIL), and automatically learn a deep anomaly ranking model that\npredicts high anomaly scores for anomalous video segments. Furthermore, we\nintroduce sparsity and temporal smoothness constraints in the ranking loss\nfunction to better localize anomaly during training. We also introduce a new\nlarge-scale first of its kind dataset of 128 hours of videos. It consists of\n1900 long and untrimmed real-world surveillance videos, with 13 realistic\nanomalies such as fighting, road accident, burglary, robbery, etc. as well as\nnormal activities. This dataset can be used for two tasks. First, general\nanomaly detection considering all anomalies in one group and all normal\nactivities in another group. Second, for recognizing each of 13 anomalous\nactivities. 
Our experimental results show that our MIL method for anomaly\ndetection achieves a significant improvement in anomaly detection performance\ncompared to state-of-the-art approaches. We provide the results of several\nrecent deep learning baselines on anomalous activity recognition. The low\nrecognition performance of these baselines reveals that our dataset is very\nchallenging and opens more opportunities for future work. The dataset is\navailable at: https://webpages.uncc.edu/cchen62/dataset.html", "field": [], "task": ["Activity Recognition", "Anomaly Detection", "Anomaly Detection In Surveillance Videos", "Multiple Instance Learning"], "method": [], "dataset": ["UBI-Fights"], "metric": ["AUC"], "title": "Real-world Anomaly Detection in Surveillance Videos"} {"abstract": "Most recently, there has been significant interest in learning contextual representations for various NLP tasks, by leveraging large scale text corpora to train large neural language models with self-supervised learning objectives, such as Masked Language Model (MLM). However, based on a pilot study, we observe three issues of existing general-purpose language models when they are applied to text-to-SQL semantic parsers: they fail to detect column mentions in the utterances, fail to infer column mentions from cell values, and fail to compose complex SQL queries. To mitigate these issues, we present a model pre-training framework, Generation-Augmented Pre-training (GAP), that jointly learns representations of natural language utterances and table schemas by leveraging generation models to generate pre-training data. GAP MODEL is trained on 2M utterance-schema pairs and 30K utterance-schema-SQL triples, whose utterances are produced by generative models. Based on experimental results, neural semantic parsers that leverage GAP MODEL as a representation encoder obtain new state-of-the-art results on both SPIDER and CRITERIA-TO-SQL benchmarks.", "field": [], "task": ["Language Modelling", "Self-Supervised Learning", "Semantic Parsing", "Text-To-Sql"], "method": [], "dataset": ["spider"], "metric": ["Accuracy (Test)", "Accuracy (Dev)", "Accuracy"], "title": "Learning Contextual Representations for Semantic Parsing with Generation-Augmented Pre-Training"} {"abstract": "Few-shot segmentation segments object regions of new classes with only a few manual annotations. Its key step is to establish a transformation module between support images (annotated images) and query images (unlabeled images), so that the segmentation cues of support images can guide the segmentation of query images. Existing methods form the transformation model based on global cues, which, however, ignores the local cues that this paper verifies to be very important for the transformation. This paper proposes a new transformation module based on local cues, where the relationship of the local features is used for transformation. To enhance the generalization performance of the network, the relationship matrix is calculated in a high-dimensional metric embedding space based on cosine distance. In addition, to handle the challenging mapping problem from the low-level local relationships to high-level semantic cues, we propose to apply the generalized inverse matrix of the annotation matrix of support images to transform the relationship matrix linearly, which is non-parametric and class-agnostic. 
The result of the matrix transformation can be regarded as an attention map with high-level semantic cues, based on which a transformation module can be simply built. The proposed transformation module is a general module that can be used to replace the transformation module in existing few-shot segmentation frameworks. We verify the effectiveness of the proposed method on the Pascal VOC 2012 dataset. The mIoU reaches 57.0% in the 1-shot setting and 60.6% in the 5-shot setting, outperforming the state-of-the-art method by 1.6% and 3.5%, respectively.", "field": [], "task": ["Few-Shot Semantic Segmentation"], "method": [], "dataset": ["PASCAL-5i (5-Shot)", "PASCAL-5i (1-Shot)"], "metric": ["Mean IoU"], "title": "A New Local Transformation Module for Few-shot Segmentation"} {"abstract": "Convolutional neural networks (CNNs) are very popular nowadays for image\nprocessing. CNNs allow one to learn optimal filters in a (mostly) supervised\nmachine learning context. However, this typically requires abundant labelled\ntraining data to estimate the filter parameters. Alternative strategies have\nbeen deployed for reducing the number of parameters and/or filters to be\nlearned and thus decrease overfitting. In the context of reverting to preset\nfilters, we propose here a computationally efficient harmonic block that uses\nDiscrete Cosine Transform (DCT) filters in CNNs. In this work we examine the\nperformance of harmonic networks in a limited training data scenario. We validate\nexperimentally that their performance compares well against scattering networks\nthat use wavelets as preset filters.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["STL-10"], "metric": ["Percentage correct"], "title": "Harmonic Networks with Limited Training Samples"} {"abstract": "Similarity learning has been recognized as a crucial step for object tracking. However, existing multiple object tracking methods only use sparse ground truth matching as the training objective, while ignoring the majority of the informative regions in the images. In this paper, we present Quasi-Dense Similarity Learning, which densely samples hundreds of region proposals on a pair of images for contrastive learning. We can directly combine this similarity learning with existing detection methods to build Quasi-Dense Tracking (QDTrack) without resorting to displacement regression or motion priors. We also find that the resulting distinctive feature space admits a simple nearest neighbor search at inference time. Despite its simplicity, QDTrack outperforms all existing methods on the MOT, BDD100K, Waymo, and TAO tracking benchmarks. It achieves 68.7 MOTA at 20.3 FPS on MOT17 without using external training data. Compared to methods with similar detectors, it boosts MOTA by almost 10 points and significantly decreases the number of ID switches on the BDD100K and Waymo datasets. The code is available at https://github.com/SysCV/qdtrack", "field": [], "task": ["Metric Learning", "Multiple Object Tracking", "Object Detection", "Object Tracking", "One-Shot Object Detection", "Regression"], "method": [], "dataset": ["KITTI Tracking test"], "metric": ["MOTA"], "title": "Quasi-Dense Similarity Learning for Multiple Object Tracking"} {"abstract": "Depth estimation and scene parsing are two particularly important tasks in\nvisual scene understanding. In this paper we tackle the problem of simultaneous\ndepth estimation and scene parsing in a joint CNN. The task can typically be\ntreated as a deep multi-task learning problem [42]. 
Different from previous\nmethods that directly optimize multiple tasks given the input training data, this\npaper proposes a novel multi-task guided prediction-and-distillation network\n(PAD-Net), which first predicts a set of intermediate auxiliary tasks ranging\nfrom low level to high level, and then the predictions from these intermediate\nauxiliary tasks are utilized as multi-modal input via our proposed multi-modal\ndistillation modules for the final tasks. During the joint learning, the\nintermediate tasks not only act as supervision for learning more robust deep\nrepresentations but also provide rich multi-modal information for improving the\nfinal tasks. Extensive experiments are conducted on two challenging datasets\n(i.e. NYUD-v2 and Cityscapes) for both the depth estimation and scene parsing\ntasks, demonstrating the effectiveness of the proposed approach.", "field": [], "task": ["Depth Estimation", "Multi-Task Learning", "Scene Parsing", "Scene Understanding"], "method": [], "dataset": ["NYU-Depth V2"], "metric": ["RMS"], "title": "PAD-Net: Multi-Tasks Guided Prediction-and-Distillation Network for Simultaneous Depth Estimation and Scene Parsing"} {"abstract": "In this paper, we propose a weakly supervised deep temporal encoding-decoding solution for anomaly detection in surveillance videos using multiple instance learning. The proposed approach uses both abnormal and normal video clips during the training phase, which is developed in a multiple instance framework in which we treat a video as a bag and video clips as instances in the bag. Our main contribution lies in the proposed novel approach for considering temporal relations between video instances. We treat video instances (clips) as sequential visual data rather than independent instances. We employ a deep temporal encoder network that is designed to capture the spatio-temporal evolution of video instances over time. We also propose a new loss function that is smoother than similar loss functions recently presented in the computer vision literature, and therefore enjoys faster convergence and improved tolerance to local minima during the training phase. The proposed temporal encoding-decoding approach with the modified loss is benchmarked against the state-of-the-art in simulation studies. The results show that the proposed method performs similarly to or better than the state-of-the-art solutions for anomaly detection in video surveillance applications.", "field": [], "task": ["Anomaly Detection", "Anomaly Detection In Surveillance Videos", "Multiple Instance Learning"], "method": [], "dataset": ["ShanghaiTech Weakly Supervised", "UCF-Crime"], "metric": ["ROC AUC", "AUC-ROC"], "title": "Multiple Instance-Based Video Anomaly Detection using Deep Temporal Encoding-Decoding"} {"abstract": "Motivations like domain adaptation, transfer learning, and feature learning\nhave fueled interest in inducing embeddings for rare or unseen words, n-grams,\nsynsets, and other textual features. This paper introduces a la carte\nembedding, a simple and general alternative to the usual word2vec-based\napproaches for building such representations that is based upon recent\ntheoretical results for GloVe-like embeddings. Our method relies mainly on a\nlinear transformation that is efficiently learnable using pretrained word\nvectors and linear regression. This transform is applicable on the fly in the\nfuture when a new text feature or rare word is encountered, even if only a\nsingle usage example is available. 
We introduce a new dataset showing how the a\nla carte method requires fewer examples of words in context to learn\nhigh-quality embeddings and we obtain state-of-the-art results on a nonce task\nand some unsupervised document classification tasks.", "field": [], "task": ["Document Classification", "Domain Adaptation", "Regression", "Text Classification", "Transfer Learning"], "method": [], "dataset": ["CR", "SST-2 Binary classification", "MR", "IMDb", "SST-5 Fine-grained classification", "TREC-6", "SUBJ", "MPQA"], "metric": ["Error", "Accuracy (2 classes)", "Accuracy (10 classes)", "Accuracy"], "title": "A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors"} {"abstract": "We present a neural network architecture and training method designed to\nenable very rapid training and low implementation complexity. Due to its\ntraining speed and very few tunable parameters, the method has strong potential\nfor applications requiring frequent retraining or online training. The approach\nis characterized by (a) convolutional filters based on biologically inspired\nvisual processing filters, (b) randomly-valued classifier-stage input weights,\n(c) use of least squares regression to train the classifier output weights in a\nsingle batch, and (d) linear classifier-stage output units. We demonstrate the\nefficacy of the method by applying it to image classification. Our results\nmatch existing state-of-the-art results on the MNIST (0.37% error) and\nNORB-small (2.2% error) image classification databases, but with very fast\ntraining times compared to standard deep network approaches. The network's\nperformance on the Google Street View House Number (SVHN) (4% error) database\nis also competitive with state-of-the art methods.", "field": [], "task": ["Image Classification", "Regression"], "method": [], "dataset": ["SVHN", "MNIST", "CIFAR-10"], "metric": ["Percentage error", "Percentage correct"], "title": "Enhanced Image Classification With a Fast-Learning Shallow Convolutional Neural Network"} {"abstract": "Face recognition capabilities have recently made extraordinary leaps. Though\nthis progress is at least partially due to ballooning training set sizes --\nhuge numbers of face images downloaded and labeled for identity -- it is not\nclear if the formidable task of collecting so many images is truly necessary.\nWe propose a far more accessible means of increasing training data sizes for\nface recognition systems. Rather than manually harvesting and labeling more\nfaces, we simply synthesize them. We describe novel methods of enriching an\nexisting dataset with important facial appearance variations by manipulating\nthe faces it contains. We further apply this synthesis approach when matching\nquery images represented using a standard convolutional neural network. 
The\neffect of training and testing with synthesized images is extensively tested on\nthe LFW and IJB-A (verification and identification) benchmarks and Janus CS2.\nThe performance obtained by our approach matches state-of-the-art results\nreported by systems trained on millions of downloaded images.", "field": [], "task": ["Face Recognition", "Face Verification"], "method": [], "dataset": ["IJB-A"], "metric": ["TAR @ FAR=0.01"], "title": "Do We Really Need to Collect Millions of Faces for Effective Face Recognition?"} {"abstract": "Linear Support Vector Machines (SVMs) have become very popular in vision as part of state-of-the-art object recognition and other classification tasks, but require high-dimensional feature spaces for good performance. Deep learning methods can find more compact representations, but current methods employ multilayer perceptrons that require solving a difficult, non-convex optimization problem. We propose a deep non-linear classifier whose layers are SVMs and which incorporates random projection as its core stacking element. Our method learns layers of linear SVMs, recursively transforming the original data manifold through a random projection of the weak prediction computed from each layer. Our method scales like linear SVMs, does not rely on any kernel computations or nonconvex optimization, and exhibits better generalization ability than kernel-based SVMs. This is especially true when the number of training samples is smaller than the dimensionality of the data, a common scenario in many real-world applications. The use of random projections is key to our method, as we show in the experiments section, in which we observe a consistent improvement over previous --often more complicated-- methods on several vision and speech benchmarks.", "field": [], "task": ["Image Classification", "Object Recognition"], "method": [], "dataset": ["CIFAR-10"], "metric": ["Percentage correct"], "title": "Learning with Recursive Perceptual Representations"} {"abstract": "6-DoF object pose estimation from a single RGB image is a fundamental and long-standing problem in computer vision. Current leading approaches solve it by training deep networks to either regress both rotation and translation from the image directly or to construct 2D-3D correspondences and further solve them via PnP indirectly. We argue that rotation and translation should be treated differently because of their significant differences. In this work, we propose a novel 6-DoF pose estimation approach: Coordinates-based Disentangled Pose Network (CDPN), which disentangles the pose to predict rotation and translation separately to achieve highly accurate and robust pose estimation. Our method is flexible, efficient, highly accurate and can deal with texture-less and occluded objects. Extensive experiments on the LINEMOD and Occlusion datasets are conducted and demonstrate the superiority of our approach. Concretely, our approach significantly exceeds the state-of-the-art RGB-based methods on commonly used metrics.", "field": [], "task": ["6D Pose Estimation using RGB", "Pose Estimation"], "method": [], "dataset": ["LineMOD"], "metric": ["Mean ADD", "Accuracy (ADD)", "Accuracy"], "title": "CDPN: Coordinates-Based Disentangled Pose Network for Real-Time RGB-Based 6-DoF Object Pose Estimation"} {"abstract": "Self-supervised learning based on instance discrimination has shown remarkable progress. 
In particular, contrastive learning, which regards each image as well as its augmentations as a separate class, and pushes all other images away, has been proved effective for pretraining. However, contrasting two images that are de facto similar in semantic space is hard for optimization and not applicable for general representations. In this paper, we tackle the representation inefficiency of contrastive learning and propose a hierarchical training strategy to explicitly model the invariance to semantic similar images in a bottom-up way. This is achieved by extending the contrastive loss to allow for multiple positives per anchor, and explicitly pulling semantically similar images/patches together at the earlier layers as well as the last embedding space. In this way, we are able to learn feature representation that is more discriminative throughout different layers, which we find is beneficial for fast convergence. The hierarchical semantic aggregation strategy produces more discriminative representation on several unsupervised benchmarks. Notably, on ImageNet with ResNet-50 as backbone, we reach $76.4\\%$ top-1 accuracy with linear evaluation, and $75.1\\%$ top-1 accuracy with only $10\\%$ labels.", "field": [], "task": ["Representation Learning", "Self-Supervised Image Classification", "Self-Supervised Learning"], "method": [], "dataset": ["ImageNet"], "metric": ["Top 1 Accuracy (kNN, k=20)", "Top 1 Accuracy"], "title": "Hierarchical Semantic Aggregation for Contrastive Representation Learning"} {"abstract": "Scientific article summarization is challenging: large, annotated corpora are not available, and the summary should ideally include the article's impacts on research community. This paper provides novel solutions to these two challenges. We 1) develop and release the first large-scale manually-annotated corpus for scientific papers (on computational linguistics) by enabling faster annotation, and 2) propose summarization methods that integrate the authors' original highlights (abstract) and the article's actual impacts on the community (citations), to create comprehensive, hybrid summaries. We conduct experiments to demonstrate the efficacy of our corpus in training data-driven models for scientific paper summarization and the advantage of our hybrid summaries over abstracts and traditional citation-based summaries. Our large annotated corpus and hybrid methods provide a new framework for scientific paper summarization research.", "field": [], "task": ["Scientific Document Summarization", "Text Summarization"], "method": [], "dataset": ["CL-SciSumm"], "metric": ["ROUGE-2"], "title": "ScisummNet: A Large Annotated Corpus and Content-Impact Models for Scientific Paper Summarization with Citation Networks"} {"abstract": "The image, question (combined with the history for de-referencing), and the\ncorresponding answer are three vital components of visual dialog. Classical\nvisual dialog systems integrate the image, question, and history to search for\nor generate the best matched answer, and so, this approach significantly\nignores the role of the answer. In this paper, we devise a novel\nimage-question-answer synergistic network to value the role of the answer for\nprecise visual dialog. We extend the traditional one-stage solution to a\ntwo-stage solution. In the first stage, candidate answers are coarsely scored\naccording to their relevance to the image and question pair. 
Afterward, in the\nsecond stage, answers with a high probability of being correct are re-ranked by\nsynergizing with the image and question. On the Visual Dialog v1.0 dataset, the\nproposed synergistic network boosts the discriminative visual dialog model to\nachieve a new state-of-the-art of 57.88\\% normalized discounted cumulative\ngain. A generative visual dialog model equipped with the proposed technique\nalso shows promising improvements.", "field": [], "task": ["Visual Dialog"], "method": [], "dataset": ["Visual Dialog v1.0 test-std"], "metric": ["MRR (x 100)", "NDCG (x 100)", "R@5", "Mean", "R@1"], "title": "Image-Question-Answer Synergistic Network for Visual Dialog"} {"abstract": "In aspect-based sentiment analysis, extracting aspect terms along with the\nopinions being expressed from user-generated content is one of the most\nimportant subtasks. Previous studies have shown that exploiting connections\nbetween aspect and opinion terms is promising for this task. In this paper, we\npropose a novel joint model that integrates recursive neural networks and\nconditional random fields into a unified framework for explicit aspect and\nopinion terms co-extraction. The proposed model learns high-level\ndiscriminative features and doubly propagates information between aspect and\nopinion terms simultaneously. Moreover, it can flexibly incorporate\nhand-crafted features to further boost its information\nextraction performance. Experimental results on the SemEval Challenge 2014\ndataset show the superiority of our proposed model over several baseline\nmethods as well as the winning systems of the challenge.", "field": [], "task": ["Aspect-Based Sentiment Analysis", "Sentiment Analysis"], "method": [], "dataset": ["SemEval 2014 Task 4 Sub Task 1"], "metric": ["Restaurant (F1)", "Laptop (F1)"], "title": "Recursive Neural Conditional Random Fields for Aspect-based Sentiment Analysis"} {"abstract": "Predicting the future behavior of moving agents is essential for real-world applications. It is challenging as the intent of the agent and the corresponding behavior are unknown and intrinsically multimodal. Our key insight is that for prediction within a moderate time horizon, the future modes can be effectively captured by a set of target states. This leads to our target-driven trajectory prediction (TNT) framework. TNT has three stages, which are trained end-to-end. It first predicts an agent's potential target states $T$ steps into the future, by encoding its interactions with the environment and the other agents. TNT then generates trajectory state sequences conditioned on targets. A final stage estimates trajectory likelihoods and a final compact set of trajectory predictions is selected. This is in contrast to previous work, which models agent intents as latent variables and relies on test-time sampling to generate diverse trajectories. 
We benchmark TNT on trajectory prediction of vehicles and pedestrians, where we outperform the state of the art on Argoverse Forecasting, INTERACTION, Stanford Drone and an in-house Pedestrian-at-Intersection dataset.", "field": [], "task": ["Trajectory Prediction"], "method": [], "dataset": ["Stanford Drone", "PAID", "Argoverse CVPR 2020"], "metric": ["p-minADE (K=6)", "FDE(8/12) @K=5", "MR (K=1)", "MR (K=6)", "minFDE (K=1)", "DAC (K=6)", "minFDE3", "DAC (K=1)", "minFDE (K=6)", "minADE (K=1)", "minADE3", "minADE (K=6)", "ADE (8/12) @K=5", "p-minFDE (K=6)"], "title": "TNT: Target-driveN Trajectory Prediction"} {"abstract": "Neural networks trained with backpropagation often struggle to identify\nclasses that have been observed a small number of times. In applications where\nmost class labels are rare, such as language modelling, this can become a\nperformance bottleneck. One potential remedy is to augment the network with a\nfast-learning non-parametric model which stores recent activations and class\nlabels in an external memory. We explore a simplified architecture where we\ntreat a subset of the model parameters as fast memory stores. This can help\nretain information over longer time intervals than a traditional memory, and\ndoes not require additional space or compute. In the case of image\nclassification, we demonstrate faster binding of novel classes on an Omniglot image\ncurriculum task. We also show improved performance for word-based language\nmodels on news reports (GigaWord), books (Project Gutenberg) and Wikipedia\narticles (WikiText-103) --- the latter achieving a state-of-the-art perplexity\nof 29.2.", "field": [], "task": ["Image Classification", "Language Modelling", "Omniglot"], "method": [], "dataset": ["WikiText-103"], "metric": ["Validation perplexity", "Test perplexity"], "title": "Fast Parametric Learning with Activation Memorization"} {"abstract": "We introduce a new convolutional layer named the Temporal Gaussian Mixture (TGM) layer and present how it can be used to efficiently capture longer-term temporal information in continuous activity videos. The TGM layer is a temporal convolutional layer governed by a much smaller set of parameters (e.g., location/variance of Gaussians) that are fully differentiable. We present our fully convolutional video models with multiple TGM layers for activity detection. Extensive experiments on multiple datasets, including Charades and MultiTHUMOS, confirm the effectiveness of TGM layers, with our models significantly outperforming the state of the art.", "field": [], "task": ["Action Detection", "Activity Detection"], "method": [], "dataset": ["Multi-THUMOS", "Charades"], "metric": ["mAP"], "title": "Temporal Gaussian Mixture Layer for Videos"} {"abstract": "This paper shows that simply prescribing \"none of the above\" labels to\nunlabeled data has a beneficial regularization effect on supervised learning.\nWe call this universum prescription because the prescribed labels cannot\nbe any of the supervised labels. In spite of its simplicity, universum\nprescription obtained competitive results in training deep convolutional\nnetworks on the CIFAR-10, CIFAR-100, STL-10 and ImageNet datasets. A qualitative\njustification of these approaches using Rademacher complexity is presented. 
The\neffect of a regularization parameter -- probability of sampling from unlabeled\ndata -- is also studied empirically.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["CIFAR-100", "CIFAR-10"], "metric": ["Percentage correct"], "title": "Universum Prescription: Regularization using Unlabeled Data"} {"abstract": "3D point cloud semantic and instance segmentation is crucial and fundamental for 3D scene understanding. Due to the complex structure, point sets are distributed off balance and diversely, which appears as both category imbalance and pattern imbalance. As a result, deep networks can easily forget the non-dominant cases during the learning process, resulting in unsatisfactory performance. Although re-weighting can reduce the influence of the well-classified examples, they cannot handle the non-dominant patterns during the dynamic training. In this paper, we propose a memory-augmented network to learn and memorize the representative prototypes that cover diverse samples universally. Specifically, a memory module is introduced to alleviate the forgetting issue by recording the patterns seen in mini-batch training. The learned memory items consistently reflect the interpretable and meaningful information for both dominant and non-dominant categories and cases. The distorted observations and rare cases can thus be augmented by retrieving the stored prototypes, leading to better performances and generalization. Exhaustive experiments on the benchmarks, i.e. S3DIS and ScanNetV2, reflect the superiority of our method on both effectiveness and efficiency. Not only the overall accuracy but also nondominant classes have improved substantially.", "field": [], "task": ["3D Instance Segmentation", "Instance Segmentation", "Scene Understanding", "Semantic Segmentation"], "method": [], "dataset": ["ScanNet(v2)"], "metric": ["Mean AP @ 0.5"], "title": "Learning and Memorizing Representative Prototypes for 3D Point Cloud Semantic and Instance Segmentation"} {"abstract": "Tracking in urban street scenes plays a central role in autonomous systems\nsuch as self-driving cars. Most of the current vision-based tracking methods\nperform tracking in the image domain. Other approaches, eg based on LIDAR and\nradar, track purely in 3D. While some vision-based tracking methods invoke 3D\ninformation in parts of their pipeline, and some 3D-based methods utilize\nimage-based information in components of their approach, we propose to use\nimage- and world-space information jointly throughout our method. We present\nour tracking pipeline as a 3D extension of image-based tracking. From enhancing\nthe detections with 3D measurements to the reported positions of every tracked\nobject, we use world-space 3D information at every stage of processing. We\naccomplish this by our novel coupled 2D-3D Kalman filter, combined with a\nconceptually clean and extendable hypothesize-and-select framework. Our\napproach matches the current state-of-the-art on the official KITTI benchmark,\nwhich performs evaluation in the 2D image domain only. Further experiments show\nsignificant improvements in 3D localization precision by enabling our coupled\n2D-3D tracking.", "field": [], "task": ["Self-Driving Cars"], "method": [], "dataset": ["KITTI Tracking test"], "metric": ["MOTA"], "title": "Combined Image- and World-Space Tracking in Traffic Scenes"} {"abstract": "Art is an expression of human creativity, skill and technology. An exceptionally rich source of visual content. 
In the context of AI image processing systems, artworks represent one of the most challenging domains conceivable: Properly perceiving art requires attention to detail, a huge generalization capacity, and recognizing both simple and complex visual patterns. To challenge the AI community, this work introduces a novel image classification task focused on museum art mediums, the MAMe dataset. Data is gathered from three different museums, and aggregated by art experts into 29 classes of mediums (i.e. materials and techniques). For each class, MAMe contains a minimum of 850 high-resolution and variable shape images (700 for training, 150 for test). The combination of volume, resolution and shape allows MAMe to fill a void in current image classification challenges, empowering research in aspects so far overseen by the research community. After reviewing the singularity of MAMe in the context of current image classification tasks, a thorough description of the task is provided, together with dataset statistics. Experiments are conducted to evaluate the impact of using high-resolution images, variable shape inputs and both of these properties together. Results illustrate the positive impact in performance when using high-resolution images, while highlighting the lack of solutions to exploit variable shapes. An additional experiment exposes the distinctiveness between the MAMe dataset and the prototypical ImageNet dataset. Finally, the baselines are inspected using explainability methods and expert knowledge, to gain insights on the challenges that remain ahead.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["MAMe"], "metric": ["Acc"], "title": "A Closer Look at Art Mediums: The MAMe Image Classification Dataset"} {"abstract": "We present KERMIT, a simple insertion-based approach to generative modeling for sequences and sequence pairs. KERMIT models the joint distribution and its decompositions (i.e., marginals and conditionals) using a single neural network and, unlike much prior work, does not rely on a prespecified factorization of the data distribution. During training, one can feed KERMIT paired data $(x, y)$ to learn the joint distribution $p(x, y)$, and optionally mix in unpaired data $x$ or $y$ to refine the marginals $p(x)$ or $p(y)$. During inference, we have access to the conditionals $p(x \\mid y)$ and $p(y \\mid x)$ in both directions. We can also sample from the joint distribution or the marginals. The model supports both serial fully autoregressive decoding and parallel partially autoregressive decoding, with the latter exhibiting an empirically logarithmic runtime. 
We demonstrate through experiments in machine translation, representation learning, and zero-shot cloze question answering that our unified approach is capable of matching or exceeding the performance of dedicated state-of-the-art systems across a wide range of tasks without the need for problem-specific architectural adaptation.", "field": [], "task": ["Machine Translation", "Question Answering", "Representation Learning"], "method": [], "dataset": ["WMT2014 English-German"], "metric": ["BLEU score"], "title": "KERMIT: Generative Insertion-Based Modeling for Sequences"} {"abstract": "Semantic video segmentation is challenging due to the sheer amount of data\nthat needs to be processed and labeled in order to construct accurate models.\nIn this paper we present a deep, end-to-end trainable methodology to video\nsegmentation that is capable of leveraging information present in unlabeled\ndata in order to improve semantic estimates. Our model combines a convolutional\narchitecture and a spatio-temporal transformer recurrent layer that are able to\ntemporally propagate labeling information by means of optical flow, adaptively\ngated based on its locally estimated uncertainty. The flow, the recognition and\nthe gated temporal propagation modules can be trained jointly, end-to-end. The\ntemporal, gated recurrent flow propagation component of our model can be\nplugged into any static semantic segmentation architecture and turn it into a\nweakly supervised video processing one. Our extensive experiments in the\nchallenging CityScapes and Camvid datasets, and based on multiple deep\narchitectures, indicate that the resulting model can leverage unlabeled\ntemporal frames, next to a labeled one, in order to improve both the video\nsegmentation accuracy and the consistency of its temporal labeling, at no\nadditional annotation cost and with little extra computation.", "field": [], "task": ["Optical Flow Estimation", "Semantic Segmentation", "Video Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["Cityscapes val"], "metric": ["mIoU"], "title": "Semantic Video Segmentation by Gated Recurrent Flow Propagation"} {"abstract": "Monocular estimation of 3d human pose has attracted increased attention with the availability of large ground-truth motion capture datasets. However, the diversity of training data available is limited and it is not clear to what extent methods generalize outside the specific datasets they are trained on. In this work we carry out a systematic study of the diversity and biases present in specific datasets and its effect on cross-dataset generalization across a compendium of 5 pose datasets. We specifically focus on systematic differences in the distribution of camera viewpoints relative to a body-centered coordinate frame. Based on this observation, we propose an auxiliary task of predicting the camera viewpoint in addition to pose. 
We find that models trained to jointly predict viewpoint and pose systematically show significantly improved cross-dataset generalization.", "field": [], "task": ["3D Human Pose Estimation", "Motion Capture", "Pose Estimation"], "method": [], "dataset": ["Surreal", "Human3.6M", "MPI-INF-3DHP", "3DPW"], "metric": ["Average MPJPE (mm)", "PCK3D", "PA-MPJPE", "Using 2D ground-truth joints", "Multi-View or Monocular", "MJPE", "3DPCK", "MPJPE"], "title": "Predicting Camera Viewpoint Improves Cross-dataset Generalization for 3D Human Pose Estimation"} {"abstract": "Generative flows are promising tractable models for density modeling that define probabilistic distributions with invertible transformations. However, tractability imposes architectural constraints on generative flows, making them less expressive than other types of generative models. In this work, we study a previously overlooked constraint that all the intermediate representations must have the same dimensionality as the original data due to invertibility, limiting the width of the network. We tackle this constraint by augmenting the data with some extra dimensions and jointly learning a generative flow for the augmented data as well as the distribution of the augmented dimensions under a variational inference framework. Our approach, VFlow, is a generalization of generative flows and therefore always performs better. Combined with existing generative flows, VFlow achieves a new state-of-the-art 2.98 bits per dimension on the CIFAR-10 dataset and is more compact than previous models that reach similar modeling quality.", "field": [], "task": ["Density Estimation", "Image Generation", "Normalising Flows", "Variational Inference"], "method": [], "dataset": ["CIFAR-10"], "metric": ["bits/dimension"], "title": "VFlow: More Expressive Generative Flows with Variational Data Augmentation"} {"abstract": "In this paper, a multichannel EEG emotion recognition method based on a novel dynamical graph convolutional neural network (DGCNN) is proposed. The basic idea of the proposed EEG emotion recognition method is to use a graph to model the multichannel EEG features and then perform EEG emotion classification based on this model. Different from traditional graph convolutional neural network (GCNN) methods, however, the proposed DGCNN method can dynamically learn the intrinsic relationships between different electroencephalogram (EEG) channels, represented by an adjacency matrix, via training a neural network, which benefits more discriminative EEG feature extraction. The learned adjacency matrix is then used to learn more discriminative features that improve EEG emotion recognition. We conduct extensive experiments on the SJTU emotion EEG dataset (SEED) and the DREAMER dataset. 
The experimental results demonstrate that the proposed method achieves better recognition performance than state-of-the-art methods: an average recognition accuracy of 90.4\\% is achieved for the subject-dependent experiment and 79.95\\% for the subject-independent cross-validation one on the SEED database, while average accuracies of 86.23\\%, 84.54\\% and 85.02\\% are obtained for valence, arousal and dominance classifications, respectively, on the DREAMER database.", "field": [], "task": ["EEG", "Emotion Classification", "Emotion Recognition"], "method": [], "dataset": ["SEED-IV"], "metric": ["Accuracy"], "title": "EEG emotion recognition using dynamical graph convolutional neural networks"} {"abstract": "In document-level sentiment classification, each document must be mapped to a fixed-length vector. Document embedding models map each document to a dense, low-dimensional vector in continuous vector space. This paper proposes training document embeddings using cosine similarity instead of dot product. Experiments on the IMDB dataset show that accuracy is improved when using cosine similarity compared to using dot product, while using feature combination with Naive Bayes weighted bag of n-grams achieves a new state-of-the-art accuracy of 97.42{\\%}. Code to reproduce all experiments is available at https://github.com/tanthongtan/dv-cosine", "field": [], "task": ["Document Embedding", "Sentiment Analysis"], "method": [], "dataset": ["IMDb"], "metric": ["Accuracy"], "title": "Sentiment Classification Using Document Embeddings Trained with Cosine Similarity"} {"abstract": "Recent years have seen remarkable progress in semantic segmentation. Yet, it\nremains a challenging task to apply segmentation techniques to video-based\napplications. Specifically, the high throughput of video streams, the sheer\ncost of running fully convolutional networks, together with the low-latency\nrequirements in many real-world applications, e.g. autonomous driving, present\na significant challenge to the design of the video segmentation framework. To\ntackle this combined challenge, we develop a framework for video semantic\nsegmentation, which incorporates two novel components: (1) a feature\npropagation module that adaptively fuses features over time via spatially\nvariant convolution, thus reducing the cost of per-frame computation; and (2)\nan adaptive scheduler that dynamically allocates computation based on accuracy\nprediction. Both components work together to ensure low latency while\nmaintaining high segmentation quality. On both Cityscapes and CamVid, the\nproposed framework obtains competitive performance compared to the state of\nthe art, while substantially reducing the latency, from 360 ms to 119 ms.", "field": [], "task": ["Autonomous Driving", "Semantic Segmentation", "Video Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["Cityscapes val"], "metric": ["mIoU"], "title": "Low-Latency Video Semantic Segmentation"} {"abstract": "The topological structure of skeleton data plays a significant role in human action recognition. Combining the topological structure with graph convolutional networks has achieved remarkable performance. In existing methods, modeling the topological structure of skeleton data only considers the connections between joints and bones, and directly uses physical information. However, it remains an open problem to identify the key joints, bones and body parts in each human action. 
In this paper, we propose centrality graph convolutional networks to uncover the overlooked topological information and to best exploit this information to distinguish key joints, bones, and body parts. A novel centrality graph convolutional network first highlights the effects of the key joints and bones, bringing a clear improvement. Besides, the topological information of the skeleton sequence is explored and combined in a four-channel framework to further enhance performance. Moreover, the reconstructed graph is learned adaptively during training, which yields further improvements. Our model is validated on two large-scale datasets, NTU-RGB+D and Kinetics, and outperforms the state-of-the-art methods.", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D", "Kinetics-Skeleton dataset"], "metric": ["Accuracy (CS)", "Accuracy (CV)", "Accuracy"], "title": "Centrality Graph Convolutional Networks for Skeleton-based Action Recognition"} {"abstract": "Semantic scene understanding is crucial for robust and safe autonomous navigation, particularly so in off-road environments. Recent deep learning advances for 3D semantic segmentation rely heavily on large sets of training data; however, existing autonomy datasets either represent urban environments or lack multimodal off-road data. We fill this gap with RELLIS-3D, a multimodal dataset collected in an off-road environment, which contains annotations for 13,556 LiDAR scans and 6,235 images. The data was collected on the Rellis Campus of Texas A&M University, and presents challenges to existing algorithms related to class imbalance and environmental topography. Additionally, we evaluate current state-of-the-art deep learning semantic segmentation models on this dataset. Experimental results show that RELLIS-3D presents challenges for algorithms designed for segmentation in urban environments. This novel dataset provides the resources needed by researchers to continue to develop more advanced algorithms and investigate new research directions to enhance autonomous navigation in off-road environments. RELLIS-3D will be published at https://github.com/unmannedlab/RELLIS-3D.", "field": [], "task": ["3D Semantic Segmentation", "Autonomous Navigation", "Scene Understanding", "Semantic Segmentation"], "method": [], "dataset": ["RELLIS-3D Dataset"], "metric": ["Mean IoU (class)"], "title": "RELLIS-3D Dataset: Data, Benchmarks and Analysis"} {"abstract": "Hypernymy, textual entailment, and image captioning can be seen as special\ncases of a single visual-semantic hierarchy over words, sentences, and images.\nIn this paper we advocate for explicitly modeling the partial order structure\nof this hierarchy. Towards this goal, we introduce a general method for\nlearning ordered representations, and show how it can be applied to a variety\nof tasks involving images and language. 
We show that the resulting\nrepresentations improve performance over current approaches for hypernym\nprediction and image-caption retrieval.", "field": [], "task": ["Cross-Modal Retrieval", "Image Captioning", "Natural Language Inference"], "method": [], "dataset": ["SNLI"], "metric": ["Parameters", "% Train Accuracy", "% Test Accuracy"], "title": "Order-Embeddings of Images and Language"} {"abstract": "One of the leading single-channel speech separation (SS) models is based on a TasNet with a dual-path segmentation technique, where the size of each segment remains unchanged throughout all layers. In contrast, our key finding is that multi-granularity features are essential for enhancing contextual modeling and computational efficiency. We introduce a self-attentive network with a novel sandglass-shape, namely Sandglasset, which advances the state-of-the-art (SOTA) SS performance at significantly smaller model size and computational cost. Forward along each block inside Sandglasset, the temporal granularity of the features gradually becomes coarser until reaching half of the network blocks, and then successively turns finer towards the raw signal level. We also unfold that residual connections between features with the same granularity are critical for preserving information after passing through the bottleneck layer. Experiments show our Sandglasset with only 2.3M parameters has achieved the best results on two benchmark SS datasets -- WSJ0-2mix and WSJ0-3mix, where the SI-SNRi scores have been improved by absolute 0.8 dB and 2.4 dB, respectively, comparing to the prior SOTA results.", "field": [], "task": ["Speech Separation"], "method": [], "dataset": ["wsj0-2mix", "WSJ0-3mix"], "metric": ["SI-SDRi"], "title": "Sandglasset: A Light Multi-Granularity Self-attentive Network For Time-Domain Speech Separation"} {"abstract": "Recent insights on language and vision with neural networks have been successfully applied to simple single-image visual question answering. However, to tackle real-life question answering problems on multimedia collections such as personal photo albums, we have to look at whole collections with sequences of photos. This paper proposes a new multimodal MemexQA task: given a sequence of photos from a user, the goal is to automatically answer questions that help users recover their memory about an event captured in these photos. In addition to a text answer, a few grounding photos are also given to justify the answer. The grounding photos are necessary as they help users quickly verifying the answer. Towards solving the task, we 1) present the MemexQA dataset, the first publicly available multimodal question answering dataset consisting of real personal photo albums; 2) propose an end-to-end trainable network that makes use of a hierarchical process to dynamically determine what media and what time to focus on in the sequential data to answer the question. Experimental results on the MemexQA dataset demonstrate that our model outperforms strong baselines and yields the most relevant grounding photos on this challenging task.", "field": [], "task": ["Memex Question Answering", "Question Answering", "Visual Question Answering"], "method": [], "dataset": ["MemexQA"], "metric": ["Accuracy"], "title": "Focal Visual-Text Attention for Memex Question Answering"} {"abstract": "Abstractive text summarization aims to shorten long text documents into a\nhuman readable form that contains the most important facts from the original\ndocument. 
However, the level of actual abstraction as measured by novel phrases\nthat do not appear in the source document remains low in existing approaches.\nWe propose two techniques to improve the level of abstraction of generated\nsummaries. First, we decompose the decoder into a contextual network that\nretrieves relevant parts of the source document, and a pretrained language\nmodel that incorporates prior knowledge about language generation. Second, we\npropose a novelty metric that is optimized directly through policy learning to\nencourage the generation of novel phrases. Our model achieves results\ncomparable to state-of-the-art models, as determined by ROUGE scores and human\nevaluations, while achieving a significantly higher level of abstraction as\nmeasured by n-gram overlap with the source document.", "field": [], "task": ["Abstractive Text Summarization", "Language Modelling", "Text Generation", "Text Summarization"], "method": [], "dataset": ["CNN / Daily Mail", "CNN / Daily Mail (Anonymized)"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Improving Abstraction in Text Summarization"} {"abstract": "Designing a lightweight and robust portrait segmentation algorithm is an important task for a wide range of face applications. However, the problem has been considered a subset of the object segmentation problem and has received less attention in the semantic segmentation field. Portrait segmentation has its own unique requirements. First, because portrait segmentation is performed as an intermediate step in the pipeline of many real-world applications, it requires extremely lightweight models. Second, there are no public datasets in this domain that contain a sufficient number of images with unbiased statistics. To solve the first problem, we introduce the new extremely lightweight portrait segmentation model SINet, containing an information blocking decoder and spatial squeeze modules. The information blocking decoder uses confidence estimates to recover local spatial information without spoiling global consistency. The spatial squeeze module uses multiple receptive fields to cope with various sizes of consistency in the image. To tackle the second problem, we propose a simple method to create additional portrait segmentation data, which can improve accuracy on the EG1800 dataset. In our qualitative and quantitative analysis on the EG1800 dataset, we show that our method outperforms various existing lightweight segmentation models. Our method reduces the number of parameters from 2.1M to 86.9K (around a 95.9% reduction), while keeping the accuracy within a 1% margin of the state-of-the-art portrait segmentation method. We also show that our model runs on a real mobile device at 100.6 FPS. In addition, we demonstrate that our method can be used for general semantic segmentation on the Cityscapes dataset. The code and dataset are available at https://github.com/HYOJINPARK/ExtPortraitSeg .", "field": [], "task": ["Portrait Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["Cityscapes test"], "metric": ["Mean IoU (class)"], "title": "SINet: Extreme Lightweight Portrait Segmentation Networks with Spatial Squeeze Modules and Information Blocking Decoder"} {"abstract": "Visual anomaly detection addresses the problem of classification or localization of regions in an image that deviate from their normal appearance. 
A popular approach trains an auto-encoder on anomaly-free images and performs anomaly detection by calculating the difference between the input and the reconstructed image. This approach assumes that the auto-encoder will be unable to accurately reconstruct anomalous regions. But in practice neural networks generalize well even to anomalies and reconstruct them sufficiently well, thus reducing the detection capability. Accurate reconstruction is far less likely if the anomaly pixels were not visible to the auto-encoder. We thus cast anomaly detection as a self-supervised reconstruction-by-inpainting problem. Our approach (RIAD) randomly removes partial image regions and reconstructs the image from partial inpaintings, thus addressing the drawbacks of auto-encoding methods. RIAD is extensively evaluated on several benchmarks and sets a new state of the art on a recent highly challenging anomaly detection benchmark.", "field": [], "task": ["Anomaly Detection"], "method": [], "dataset": ["MVTec AD"], "metric": ["Segmentation AUROC"], "title": "Reconstruction by Inpainting for Visual Anomaly Detection"} {"abstract": "Over the past years, deep learning has brought a big step forward in the performance of music source separation algorithms. A lot has been done on architecture optimisation, but training data remains an important source of bias in model comparison. In this work, we choose to work with the frugal and well-known original TasNet neural network and to focus on simple methods to exploit a relatively large dataset. Our results on the MUSDB test set outperform all previous state-of-the-art approaches that use extra data on the following source categories: vocals, accompaniment, drums, bass, and on average. We believe that our results on how to shape a training set can apply to any type of architecture.", "field": [], "task": ["Music Source Separation"], "method": [], "dataset": ["MUSDB18"], "metric": ["SDR (vocals)", "SDR (other)", "SDR (drums)", "SDR (bass)"], "title": "A frugal approach to music source separation"} {"abstract": "Current state-of-the-art speech recognition systems build on recurrent neural\nnetworks for acoustic and/or language modeling, and rely on feature extraction\npipelines to extract mel-filterbanks or cepstral coefficients. In this paper we\npresent an alternative approach based solely on convolutional neural networks,\nleveraging recent advances in acoustic models from the raw waveform and\nlanguage modeling. This fully convolutional approach is trained end-to-end to\npredict characters from the raw waveform, removing the feature extraction step\naltogether. An external convolutional language model is used to decode words.\nOn Wall Street Journal, our model matches the current state-of-the-art. On\nLibrispeech, we report state-of-the-art performance among end-to-end models,\nincluding Deep Speech 2 trained with 12 times more acoustic data and\nsignificantly more linguistic data.", "field": [], "task": ["End-To-End Speech Recognition", "Language Modelling", "Speech Recognition"], "method": [], "dataset": ["LibriSpeech test-other", "WSJ eval92", "LibriSpeech test-clean", "WSJ eval93"], "metric": ["Word Error Rate (WER)"], "title": "Fully Convolutional Speech Recognition"} {"abstract": "The ability to predict protein function from structure is becoming increasingly important as the number of structures resolved is growing more rapidly than our capacity to study function. 
Current methods for predicting protein function are mostly reliant on identifying a similar protein of known function. For proteins that are highly dissimilar or are only similar to proteins also lacking functional annotations, these methods fail. Here, we show that protein function can be predicted as enzymatic or not without resorting to alignments. We describe 1178 high-resolution proteins in a structurally non-redundant subset of the Protein Data Bank using simple features such as secondary-structure content, amino acid propensities, surface properties and ligands. The subset is split into two functional groupings, enzymes and non-enzymes. We use the support vector machine-learning algorithm to develop models that are capable of assigning the protein class. Validation of the method shows that the function can be predicted to an accuracy of 77% using 52 features to describe each protein. An adaptive search of possible subsets of features produces a simplified model based on 36 features that predicts at an accuracy of 80%. We compare the method to sequence-based methods that also avoid calculating alignments and predict a recently released set of unrelated proteins. The most useful features for distinguishing enzymes from non-enzymes are secondary-structure content, amino acid frequencies, number of disulphide bonds and size of the largest cleft. This method is applicable to any structure as it does not require the identification of sequence or structural similarity to a protein of known function.", "field": [], "task": ["Graph Classification"], "method": [], "dataset": ["PROTEINS"], "metric": ["Accuracy"], "title": "Distinguishing Enzyme Structures from Non-enzymes Without Alignments"} {"abstract": "High accuracy video label prediction (classification) models are attributed to large scale data. These data could be frame feature sequences extracted by a pre-trained convolutional neural network, which promote the efficiency of creating models. Unsupervised solutions such as feature average pooling, as a simple label-independent parameter-free method, have limited ability to represent the video. Supervised methods, like RNNs, can greatly improve the recognition accuracy. However, videos are usually long and there are hierarchical relationships between frames across events in the video, so the performance of RNN-based models is decreased. In this paper, we propose a novel video classification method based on a deep convolutional graph neural network (DCGN). The proposed method utilizes the characteristics of the hierarchical structure of the video, performs multi-level feature extraction on the video frame sequence through the graph network, and obtains a video representation reflecting the event semantics hierarchically. We test our model on the YouTube-8M Large-Scale Video Understanding dataset, and the result outperforms RNN-based benchmarks.", "field": [], "task": ["Hierarchical structure", "Video Classification", "Video Understanding"], "method": [], "dataset": ["YouTube-8M"], "metric": ["Hit@1"], "title": "Hierarchical Video Frame Sequence Representation with Deep Convolutional Graph Network"} {"abstract": "Visual signals in a video can be divided into content and motion. While\ncontent specifies which objects are in the video, motion describes their\ndynamics. Based on this prior, we propose the Motion and Content decomposed\nGenerative Adversarial Network (MoCoGAN) framework for video generation.
The\nproposed framework generates a video by mapping a sequence of random vectors to\na sequence of video frames. Each random vector consists of a content part and a\nmotion part. While the content part is kept fixed, the motion part is realized\nas a stochastic process. To learn motion and content decomposition in an\nunsupervised manner, we introduce a novel adversarial learning scheme utilizing\nboth image and video discriminators. Extensive experimental results on several\nchallenging datasets with qualitative and quantitative comparison to the\nstate-of-the-art approaches, verify effectiveness of the proposed framework. In\naddition, we show that MoCoGAN allows one to generate videos with same content\nbut different motion as well as videos with different content and same motion.", "field": [], "task": ["Video Generation"], "method": [], "dataset": ["UCF-101 16 frames, 64x64, Unconditional", "UCF-101 16 frames, Unconditional, Single GPU"], "metric": ["Inception Score"], "title": "MoCoGAN: Decomposing Motion and Content for Video Generation"} {"abstract": "Despite success on a wide range of problems related to vision, generative adversarial networks (GANs) often suffer from inferior performance due to unstable training, especially for text generation. To solve this issue, we propose a new variational GAN training framework which enjoys superior training stability. Our approach is inspired by a connection of GANs and reinforcement learning under a variational perspective. The connection leads to (1) probability ratio clipping that regularizes generator training to prevent excessively large updates, and (2) a sample re-weighting mechanism that improves discriminator training by downplaying bad-quality fake samples. Moreover, our variational GAN framework can provably overcome the training issue in many GANs that an optimal discriminator cannot provide any informative gradient to training generator. By plugging the training approach in diverse state-of-the-art GAN architectures, we obtain significantly improved performance over a range of tasks, including text generation, text style transfer, and image generation.", "field": [], "task": ["Image Generation", "Style Transfer", "Text Generation", "Text Style Transfer"], "method": [], "dataset": ["EMNLP2017 WMT", "CIFAR-10"], "metric": ["BLEU-5", "FID", "BLEU-2", "NLLgen", "BLEU-3", "BLEU-4", "Inception score"], "title": "Improving GAN Training with Probability Ratio Clipping and Sample Reweighting"} {"abstract": "This paper tackles the reduction of redundant repeating generation that is\noften observed in RNN-based encoder-decoder models. 
Our basic idea is to\njointly estimate the upper-bound frequency of each target vocabulary in the\nencoder and control the output words based on the estimation in the decoder.\nOur method shows significant improvement over a strong RNN-based\nencoder-decoder baseline and achieved its best results on an abstractive\nsummarization benchmark.", "field": [], "task": ["Abstractive Text Summarization"], "method": [], "dataset": ["GigaWord", "DUC 2004 Task 1"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Cutting-off Redundant Repeating Generations for Neural Abstractive Summarization"} {"abstract": "Predictive business process monitoring methods exploit logs of completed\ncases of a process in order to make predictions about running cases thereof.\nExisting methods in this space are tailor-made for specific prediction tasks.\nMoreover, their relative accuracy is highly sensitive to the dataset at hand,\nthus requiring users to engage in trial-and-error and tuning when applying them\nin a specific setting. This paper investigates Long Short-Term Memory (LSTM)\nneural networks as an approach to build consistently accurate models for a wide\nrange of predictive process monitoring tasks. First, we show that LSTMs\noutperform existing techniques to predict the next event of a running case and\nits timestamp. Next, we show how to use models for predicting the next task in\norder to predict the full continuation of a running case. Finally, we apply the\nsame approach to predict the remaining time, and show that this approach\noutperforms existing tailor-made methods.", "field": [], "task": ["Multivariate Time Series Forecasting", "Predictive Process Monitoring", "Time Series Prediction"], "method": [], "dataset": ["BPI challenge '12", "Helpdesk"], "metric": ["Accuracy"], "title": "Predictive Business Process Monitoring with LSTM Neural Networks"} {"abstract": "Recent advances in convolutional neural networks (CNN) have achieved\nremarkable results in locating objects in images. In these networks, the\ntraining procedure usually requires providing bounding boxes or the maximum\nnumber of expected objects. In this paper, we address the task of estimating\nobject locations without annotated bounding boxes which are typically\nhand-drawn and time consuming to label. We propose a loss function that can be\nused in any fully convolutional network (FCN) to estimate object locations.\nThis loss function is a modification of the average Hausdorff distance between\ntwo unordered sets of points. The proposed method has no notion of bounding\nboxes, region proposals, or sliding windows. We evaluate our method with three\ndatasets designed to locate people's heads, pupil centers and plant centers. We\noutperform state-of-the-art generic object detectors and methods fine-tuned for\npupil tracking.", "field": [], "task": ["Object Localization"], "method": [], "dataset": ["Plant", "Pupil", "Mall"], "metric": ["Precision", "F-Score", "Recall"], "title": "Locating Objects Without Bounding Boxes"} {"abstract": "Many applications of machine learning require a model to make accurate predictions on test examples that are distributionally different from training ones, while task-specific labels are scarce during training. An effective approach to this challenge is to pre-train a model on related tasks where data is abundant, and then fine-tune it on a downstream task of interest.
While pre-training has been effective in many language and vision domains, it remains an open question how to effectively use pre-training on graph datasets. In this paper, we develop a new strategy and self-supervised methods for pre-training Graph Neural Networks (GNNs). The key to the success of our strategy is to pre-train an expressive GNN at the level of individual nodes as well as entire graphs so that the GNN can learn useful local and global representations simultaneously. We systematically study pre-training on multiple graph classification datasets. We find that naive strategies, which pre-train GNNs at the level of either entire graphs or individual nodes, give limited improvement and can even lead to negative transfer on many downstream tasks. In contrast, our strategy avoids negative transfer and improves generalization significantly across downstream tasks, leading up to 9.4% absolute improvements in ROC-AUC over non-pre-trained models and achieving state-of-the-art performance for molecular property prediction and protein function prediction.", "field": [], "task": ["Graph Classification", "Molecular Property Prediction", "Protein Function Prediction", "Representation Learning"], "method": [], "dataset": ["MUV", "ToxCast", "HIV dataset", "ClinTox", "BACE", "Tox21", "BBBP", "SIDER"], "metric": ["AUC"], "title": "Strategies for Pre-training Graph Neural Networks"} {"abstract": "Each year, the treatment decisions for more than 230,000 breast cancer\npatients in the U.S. hinge on whether the cancer has metastasized away from the\nbreast. Metastasis detection is currently performed by pathologists reviewing\nlarge expanses of biological tissues. This process is labor intensive and\nerror-prone. We present a framework to automatically detect and localize tumors\nas small as 100 x 100 pixels in gigapixel microscopy images sized 100,000 x\n100,000 pixels. Our method leverages a convolutional neural network (CNN)\narchitecture and obtains state-of-the-art results on the Camelyon16 dataset in\nthe challenging lesion-level tumor detection task. At 8 false positives per\nimage, we detect 92.4% of the tumors, relative to 82.7% by the previous best\nautomated approach. For comparison, a human pathologist attempting exhaustive\nsearch achieved 73.2% sensitivity. We achieve image-level AUC scores above 97%\non both the Camelyon16 test set and an independent set of 110 slides. In\naddition, we discover that two slides in the Camelyon16 training set were\nerroneously labeled normal. Our approach could considerably reduce false\nnegative rates in metastasis detection.", "field": [], "task": ["Medical Object Detection"], "method": [], "dataset": ["Barrett\u2019s Esophagus"], "metric": ["Mean Accuracy"], "title": "Detecting Cancer Metastases on Gigapixel Pathology Images"} {"abstract": "Reinforcement learning algorithms rely on carefully engineering environment\nrewards that are extrinsic to the agent. However, annotating each environment\nwith hand-designed, dense rewards is not scalable, motivating the need for\ndeveloping reward functions that are intrinsic to the agent. Curiosity is a\ntype of intrinsic reward function which uses prediction error as reward signal.\nIn this paper: (a) We perform the first large-scale study of purely\ncuriosity-driven learning, i.e. without any extrinsic rewards, across 54\nstandard benchmark environments, including the Atari game suite. 
Our results\nshow surprisingly good performance, and a high degree of alignment between the\nintrinsic curiosity objective and the hand-designed extrinsic rewards of many\ngame environments. (b) We investigate the effect of using different feature\nspaces for computing prediction error and show that random features are\nsufficient for many popular RL game benchmarks, but learned features appear to\ngeneralize better (e.g. to novel game levels in Super Mario Bros.). (c) We\ndemonstrate limitations of the prediction-based rewards in stochastic setups.\nGame-play videos and code are at\nhttps://pathak22.github.io/large-scale-curiosity/", "field": [], "task": ["Atari Games", "SNES Games"], "method": [], "dataset": ["Atari 2600 Venture", "Atari 2600 Private Eye", "Atari 2600 Montezuma's Revenge", "Atari 2600 Freeway", "Atari 2600 Gravitar"], "metric": ["Score"], "title": "Large-Scale Study of Curiosity-Driven Learning"} {"abstract": "Recognizing emotions in conversations is a challenging task due to the presence of contextual dependencies governed by self- and inter-personal influences. Recent approaches have focused on modeling these dependencies primarily via supervised learning. However, purely supervised strategies demand large amounts of annotated data, which is lacking in most of the available corpora in this task. To tackle this challenge, we look at transfer learning approaches as a viable alternative. Given the large amount of available conversational data, we investigate whether generative conversational models can be leveraged to transfer affective knowledge for detecting emotions in context. We propose an approach, TL-ERC, where we pre-train a hierarchical dialogue model on multi-turn conversations (source) and then transfer its parameters to a conversational emotion classifier (target). In addition to the popular practice of using pre-trained sentence encoders, our approach also incorporates recurrent parameters that model inter-sentential context across the whole conversation. Based on this idea, we perform several experiments across multiple datasets and find improvement in performance and robustness against limited training data. TL-ERC also achieves better validation performances in significantly fewer epochs. Overall, we infer that knowledge acquired from dialogue generators can indeed help recognize emotions in conversations.", "field": [], "task": ["Emotion Recognition", "Emotion Recognition in Conversation", "Transfer Learning"], "method": [], "dataset": ["IEMOCAP", "DailyDialog"], "metric": ["Micro-F1", "F1"], "title": "Conversational Transfer Learning for Emotion Recognition"} {"abstract": "Learning based methods have shown very promising results for the task of\ndepth estimation in single images. However, most existing approaches treat\ndepth prediction as a supervised regression problem and as a result, require\nvast quantities of corresponding ground truth depth data for training. Just\nrecording quality depth data in a range of environments is a challenging\nproblem. In this paper, we innovate beyond existing approaches, replacing the\nuse of explicit depth data during training with easier-to-obtain binocular\nstereo footage.\n We propose a novel training objective that enables our convolutional neural\nnetwork to learn to perform single image depth estimation, despite the absence\nof ground truth depth data. Exploiting epipolar geometry constraints, we\ngenerate disparity images by training our network with an image reconstruction\nloss. 
We show that solving for image reconstruction alone results in poor\nquality depth images. To overcome this problem, we propose a novel training\nloss that enforces consistency between the disparities produced relative to\nboth the left and right images, leading to improved performance and robustness\ncompared to existing approaches. Our method produces state of the art results\nfor monocular depth estimation on the KITTI driving dataset, even outperforming\nsupervised methods that have been trained with ground truth depth.", "field": [], "task": ["Depth Estimation", "Image Reconstruction", "Monocular Depth Estimation", "Regression"], "method": [], "dataset": ["KITTI Eigen split unsupervised"], "metric": ["absolute relative error"], "title": "Unsupervised Monocular Depth Estimation with Left-Right Consistency"} {"abstract": "This paper investigates the construction of a strong baseline based on general purpose sequence-to-sequence models for constituency parsing. We incorporate several techniques that were mainly developed in natural language generation tasks, e.g., machine translation and summarization, and demonstrate that the sequence-to-sequence model achieves the current top-notch parsers' performance (almost) without requiring any explicit task-specific knowledge or architecture of constituent parsing.", "field": [], "task": ["Abstractive Text Summarization", "Constituency Parsing", "Machine Translation", "Text Generation"], "method": [], "dataset": ["Penn Treebank"], "metric": ["F1 score"], "title": "An Empirical Study of Building a Strong Baseline for Constituency Parsing"} {"abstract": "Cross-scene crowd counting is a challenging task where no laborious data annotation is required for counting people in new target surveillance crowd scenes unseen in the training set. The performance of most existing crowd counting methods drops significantly when they are applied to an unseen scene. To address this problem, we propose a deep convolutional neural network (CNN) for crowd counting, and it is trained alternatively with two related learning objectives, crowd density and crowd count. This proposed switchable learning approach is able to obtain a better local optimum for both objectives. To handle an unseen target crowd scene, we present a data-driven method to fine-tune the trained CNN model for the target scene. A new dataset including 108 crowd scenes with nearly 200,000 head annotations is introduced to better evaluate the accuracy of cross-scene crowd counting methods. Extensive experiments on the proposed and another two existing datasets demonstrate the effectiveness and reliability of our approach.", "field": [], "task": ["Crowd Counting"], "method": [], "dataset": ["UCF CC 50", "ShanghaiTech A", "WorldExpo\u201910", "ShanghaiTech B"], "metric": ["MAE", "Average MAE"], "title": "Cross-Scene Crowd Counting via Deep Convolutional Neural Networks"} {"abstract": "In this paper, we design a simple yet powerful deep network architecture, U$^2$-Net, for salient object detection (SOD). The architecture of our U$^2$-Net is a two-level nested U-structure. The design has the following advantages: (1) it is able to capture more contextual information from different scales thanks to the mixture of receptive fields of different sizes in our proposed ReSidual U-blocks (RSU), (2) it increases the depth of the whole architecture without significantly increasing the computational cost because of the pooling operations used in these RSU blocks.
This architecture enables us to train a deep network from scratch without using backbones from image classification tasks. We instantiate two models of the proposed architecture, U$^2$-Net (176.3 MB, 30 FPS on GTX 1080Ti GPU) and U$^2$-Net$^{\\dagger}$ (4.7 MB, 40 FPS), to facilitate the usage in different environments. Both models achieve competitive performance on six SOD datasets. The code is available: https://github.com/NathanUA/U-2-Net.", "field": [], "task": ["Image Classification", "Object Detection", "RGB Salient Object Detection", "Saliency Detection", "Salient Object Detection"], "method": [], "dataset": ["DUT-OMRON", "HKU-IS"], "metric": ["{max}F\u03b2", "MAE", "relaxFb\u03b2", "Sm", "Fw\u03b2"], "title": "U$^2$-Net: Going Deeper with Nested U-Structure for Salient Object Detection"} {"abstract": "Convolutional Neural Networks (CNNs) have been consistently proved state-of-the-art results in image Super-Resolution (SR), representing an exceptional opportunity for the remote sensing field to extract further information and knowledge from captured data. However, most of the works published in the literature have been focusing on the Single-Image Super-Resolution problem so far. At present, satellite based remote sensing platforms offer huge data availability with high temporal resolution and low spatial resolution. In this context, the presented research proposes a novel residual attention model (RAMS) that efficiently tackles the multi-image super-resolution task, simultaneously exploiting spatial and temporal correlations to combine multiple images. We introduce the mechanism of visual feature attention with 3D convolutions in order to obtain an aware data fusion and information extraction of the multiple low-resolution images, transcending limitations of the local region of convolutional operations. Moreover, having multiple inputs with the same scene, our representation learning network makes extensive use of nestled residual connections to let flow redundant low-frequency signals and focus the computation on more important high-frequency components. Extensive experimentation and evaluations against other available solutions, either for single or multi-image super-resolution, have demonstrated that the proposed deep learning-based solution can be considered state-of-the-art for Multi-Image Super-Resolution for remote sensing applications.", "field": [], "task": ["Image Super-Resolution", "Multi-Frame Super-Resolution", "Representation Learning", "Super-Resolution"], "method": [], "dataset": ["PROBA-V"], "metric": ["Normalized cPSNR"], "title": "Multi-image Super Resolution of Remotely Sensed Images using Residual Feature Attention Deep Neural Networks"} {"abstract": "Object grasping is critical for many applications, which is also a challenging computer vision problem. However, for cluttered scene, current researches suffer from the problems of insufficient training data and the lacking of evaluation benchmarks. In this work, we contribute a large-scale grasp pose detection dataset with a unified evaluation system. Our dataset contains 97,280 RGB-D image with over one billion grasp poses. Meanwhile, our evaluation system directly reports whether a grasping is successful by analytic computation, which is able to evaluate any kind of grasp poses without exhaustively labeling ground-truth. In addition, we propose an end-to-end grasp pose prediction network given point cloud inputs, where we learn approaching direction and operation parameters in a decoupled manner. 
A novel grasp affinity field is also designed to improve the grasping robustness. We conduct extensive experiments to show that our dataset and evaluation system can align well with real-world experiments and our proposed network achieves the state-of-the-art performance. Our dataset, source code and models are publicly available at www.graspnet.net.\r", "field": [], "task": ["Pose Prediction", "Robotic Grasping"], "method": [], "dataset": ["GraspNet-1Billion"], "metric": ["AP"], "title": "GraspNet-1Billion: A Large-Scale Benchmark for General Object Grasping"} {"abstract": "Not all people are equally easy to identify: color statistics might be enough\nfor some cases while others might require careful reasoning about high- and\nlow-level details. However, prevailing person re-identification(re-ID) methods\nuse one-size-fits-all high-level embeddings from deep convolutional networks\nfor all cases. This might limit their accuracy on difficult examples or makes\nthem needlessly expensive for the easy ones. To remedy this, we present a new\nperson re-ID model that combines effective embeddings built on multiple\nconvolutional network layers, trained with deep-supervision. On traditional\nre-ID benchmarks, our method improves substantially over the previous\nstate-of-the-art results on all five datasets that we evaluate on. We then\npropose two new formulations of the person re-ID problem under\nresource-constraints, and show how our model can be used to effectively trade\noff accuracy and computation in the presence of resource constraints. Code and\npre-trained models are available at https://github.com/mileyan/DARENet.", "field": [], "task": ["Person Re-Identification"], "method": [], "dataset": ["CUHK03 detected", "DukeMTMC-reID", "Market-1501", "CUHK03 labeled"], "metric": ["Rank-1", "MAP"], "title": "Resource Aware Person Re-identification across Multiple Resolutions"} {"abstract": "The modern digital world is becoming more and more multimodal. Looking on the internet, images are often associated with the text, so classification problems with these two modalities are very common.\r\nIn this paper, we examine multimodal classification using textual information and visual representations of the same concept.\r\nWe investigate two main basic methods to perform multimodal fusion and adapt them with stacking techniques to better handle this type of problem.\r\nHere, we use UPMC Food-101, which is a difficult and noisy multimodal dataset that well represents this category of multimodal problems.\r\nOur results show that the proposed early fusion technique combined with a stacking-based approach exceeds the state of the art on the dataset used.", "field": [], "task": ["Document Text Classification", "Image Classification", "Multimodal Deep Learning", "Multimodal Text and Image Classification"], "method": [], "dataset": ["Food-101"], "metric": ["Accuracy (%)"], "title": "Image and Text fusion for UPMC Food-101 \\\\using BERT and CNNs"} {"abstract": "Actions are more than just movements and trajectories: we cook to eat and we\nhold a cup to drink from it. A thorough understanding of videos requires going\nbeyond appearance modeling and necessitates reasoning about the sequence of\nactivities, as well as the higher-level constructs such as intentions. But how\ndo we model and reason about these? 
We propose a fully-connected temporal CRF\nmodel for reasoning over various aspects of activities that includes objects,\nactions, and intentions, where the potentials are predicted by a deep network.\nEnd-to-end training of such structured models is a challenging endeavor: For\ninference and learning we need to construct mini-batches consisting of whole\nvideos, leading to mini-batches with only a few videos. This causes\nhigh-correlation between data points leading to breakdown of the backprop\nalgorithm. To address this challenge, we present an asynchronous variational\ninference method that allows efficient end-to-end training. Our method achieves\na classification mAP of 22.4% on the Charades benchmark, outperforming the\nstate-of-the-art (17.2% mAP), and offers equal gains on the task of temporal\nlocalization.", "field": [], "task": ["Action Classification", "Action Recognition", "Temporal Action Localization", "Temporal Localization", "Variational Inference"], "method": [], "dataset": ["Charades"], "metric": ["mAP", "MAP"], "title": "Asynchronous Temporal Fields for Action Recognition"} {"abstract": "We present the Latvian Twitter Eater Corpus - a set of tweets in the narrow domain related to food, drinks, eating and drinking. The corpus has been collected over time-span of over 8 years and includes over 2 million tweets entailed with additional useful data. We also separate two sub-corpora of question and answer tweets and sentiment annotated tweets. We analyse contents of the corpus and demonstrate use-cases for the sub-corpora by training domain-specific question-answering and sentiment-analysis models using data from the corpus.", "field": [], "task": ["Question Answering", "Sentiment Analysis"], "method": [], "dataset": ["Latvian Twitter Eater Sentiment Dataset"], "metric": ["Accuracy"], "title": "What Can We Learn From Almost a Decade of Food Tweets"} {"abstract": "Person re-identification is an important technique towards automatic search\nof a person's presence in a surveillance video. Two fundamental problems are\ncritical for person re-identification, feature representation and metric\nlearning. An effective feature representation should be robust to illumination\nand viewpoint changes, and a discriminant metric should be learned to match\nvarious person images. In this paper, we propose an effective feature\nrepresentation called Local Maximal Occurrence (LOMO), and a subspace and\nmetric learning method called Cross-view Quadratic Discriminant Analysis\n(XQDA). The LOMO feature analyzes the horizontal occurrence of local features,\nand maximizes the occurrence to make a stable representation against viewpoint\nchanges. Besides, to handle illumination variations, we apply the Retinex\ntransform and a scale invariant texture operator. To learn a discriminant\nmetric, we propose to learn a discriminant low dimensional subspace by\ncross-view quadratic discriminant analysis, and simultaneously, a QDA metric is\nlearned on the derived subspace. We also present a practical computation method\nfor XQDA, as well as its regularization. 
Experiments on four challenging person\nre-identification databases, VIPeR, QMUL GRID, CUHK Campus, and CUHK03, show\nthat the proposed method improves the state-of-the-art rank-1 identification\nrates by 2.2%, 4.88%, 28.91%, and 31.55% on the four databases, respectively.", "field": [], "task": ["Metric Learning", "Person Re-Identification"], "method": [], "dataset": ["DukeMTMC-reID", "Market-1501"], "metric": ["Rank-1", "MAP"], "title": "Person Re-identification by Local Maximal Occurrence Representation and Metric Learning"} {"abstract": "Virtual assistants such as Google Assistant, Amazon Alexa, and Apple Siri enable users to interact with a large number of services and APIs on the web using natural language. In this work, we investigate two methods for Natural Language Generation (NLG) using a single domain-independent model across a large number of APIs. First, we propose a schema-guided approach which conditions the generation on a schema describing the API in natural language. Our second method investigates the use of a small number of templates, growing linearly in number of slots, to convey the semantics of the API. To generate utterances for an arbitrary slot combination, a few simple templates are first concatenated to give a semantically correct, but possibly incoherent and ungrammatical utterance. A pre-trained language model is subsequently employed to rewrite it into coherent, natural sounding text. Through automatic metrics and human evaluation, we show that our method improves over strong baselines, is robust to out-of-domain inputs and shows improved sample efficiency.", "field": [], "task": ["Data-to-Text Generation", "Language Modelling", "Text Generation"], "method": [], "dataset": ["MULTIWOZ 2.1"], "metric": ["BLEU"], "title": "Template Guided Text Generation for Task-Oriented Dialogue"} {"abstract": "Generative Adversarial Networks (GANs) coupled with self-supervised tasks have shown promising results in unconditional and semi-supervised image generation. We propose a self-supervised approach (LT-GAN) to improve the generation quality and diversity of images by estimating the GAN-induced transformation (i.e. transformation induced in the generated images by perturbing the latent space of generator). Specifically, given two pairs of images where each pair comprises of a generated image and its transformed version, the self-supervision task aims to identify whether the latent transformation applied in the given pair is same to that of the other pair. Hence, this auxiliary loss encourages the generator to produce images that are distinguishable by the auxiliary network, which in turn promotes the synthesis of semantically consistent images with respect to latent transformations. We show the efficacy of this pretext task by improving the image generation quality in terms of FID on state-of-the-art models for both conditional and unconditional settings on CIFAR-10, CelebA-HQ and ImageNet datasets. Moreover, we empirically show that LT-GAN helps in improving controlled image editing for CelebA-HQ and ImageNet over baseline models. We experimentally demonstrate that our proposed LT self-supervision task can be effectively combined with other state-of-the-art training techniques for added benefits. 
Consequently, we show that our approach achieves the new state-of-the-art FID score of 9.8 on conditional CIFAR-10 image generation.", "field": [], "task": ["Image Generation"], "method": [], "dataset": ["CelebA-HQ 128x128", "CIFAR-10"], "metric": ["FID"], "title": "LT-GAN: Self-Supervised GAN with Latent Transformation Detection"} {"abstract": "Deep learning methods have shown promise in unsupervised domain adaptation, which aims to leverage a labeled source domain to learn a classifier for the unlabeled target domain with a different distribution. However, such methods typically learn a domain-invariant representation space to match the marginal distributions of the source and target domains, while ignoring their fine-level structures. In this paper, we propose Cluster Alignment with a Teacher (CAT) for unsupervised domain adaptation, which can effectively incorporate the discriminative clustering structures in both domains for better adaptation. Technically, CAT leverages an implicit ensembling teacher model to reliably discover the class-conditional structure in the feature space for the unlabeled target domain. Then CAT forces the features of both the source and the target domains to form discriminative class-conditional clusters and aligns the corresponding clusters across domains. Empirical results demonstrate that CAT achieves state-of-the-art results in several unsupervised domain adaptation scenarios.", "field": [], "task": ["Domain Adaptation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["SVNH-to-MNIST", "ImageCLEF-DA", "USPS-to-MNIST", "Office-31", "MNIST-to-USPS"], "metric": ["Average Accuracy", "Accuracy"], "title": "Cluster Alignment with a Teacher for Unsupervised Domain Adaptation"} {"abstract": "Common object counting in a natural scene is a challenging problem in computer vision with numerous real-world applications. Existing image-level supervised common object counting approaches only predict the global object count and rely on additional instance-level supervision to also determine object locations. We propose an image-level supervised approach that provides both the global object count and the spatial distribution of object instances by constructing an object category density map. Motivated by psychological studies, we further reduce image-level supervision using a limited object count information (up to four). To the best of our knowledge, we are the first to propose image-level supervised density map estimation for common object counting and demonstrate its effectiveness in image-level supervised instance segmentation. Comprehensive experiments are performed on the PASCAL VOC and COCO datasets. Our approach outperforms existing methods, including those using instance-level supervision, on both datasets for common object counting. Moreover, our approach improves state-of-the-art image-level supervised instance segmentation with a relative gain of 17.8% in terms of average best overlap, on the PASCAL VOC 2012 dataset. Code link: https://github.com/GuoleiSun/CountSeg", "field": [], "task": ["Instance Segmentation", "Object Counting", "Semantic Segmentation"], "method": [], "dataset": ["Pascal VOC 2007 count-test", "COCO count-test"], "metric": ["m-reIRMSE", "mRMSE-nz", "m-reIRMSE-nz", "mRMSE", "m-relRMSE"], "title": "Object Counting and Instance Segmentation with Image-level Supervision"} {"abstract": "Pre-trained representations are becoming crucial for many NLP and perception tasks. 
While representation learning in NLP has transitioned to training on raw text without human annotations, visual and vision-language representations still rely heavily on curated training datasets that are expensive or require expert knowledge. For vision applications, representations are mostly learned using datasets with explicit class labels such as ImageNet or OpenImages. For vision-language, popular datasets like Conceptual Captions, MSCOCO, or CLIP all involve a non-trivial data collection (and cleaning) process. This costly curation process limits the size of datasets and hence hinders the scaling of trained models. In this paper, we leverage a noisy dataset of over one billion image alt-text pairs, obtained without expensive filtering or post-processing steps in the Conceptual Captions dataset. A simple dual-encoder architecture learns to align visual and language representations of the image and text pairs using a contrastive loss. We show that the scale of our corpus can make up for its noise and leads to state-of-the-art representations even with such a simple learning scheme. Our visual representation achieves strong performance when transferred to classification tasks such as ImageNet and VTAB. The aligned visual and language representations also set new state-of-the-art results on Flickr30K and MSCOCO benchmarks, even when compared with more sophisticated cross-attention models. The representations also enable cross-modality search with complex text and text + image queries.", "field": [], "task": ["Cross-Modal Retrieval", "Fine-Grained Image Classification", "Image Classification", "Representation Learning"], "method": [], "dataset": ["Flickr30k", "VTAB-1k", "Oxford-IIIT Pets", "COCO 2014", "Flowers-102", "Food-101", "Stanford Cars", "ImageNet"], "metric": ["Number of params", "Image-to-text R@5", "Top 1 Accuracy", "Image-to-text R@1", "Image-to-text R@10", "Text-to-image R@10", "Text-to-image R@1", "Top-1 Accuracy", "Accuracy", "Top 5 Accuracy", "Text-to-image R@5"], "title": "Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision"} {"abstract": "This paper tackles the problem of learning a finer representation than the one provided by training labels. This enables fine-grained category retrieval of images in a collection annotated with coarse labels only. Our network is learned with a nearest-neighbor classifier objective, and an instance loss inspired by self-supervised learning. By jointly leveraging the coarse labels and the underlying fine-grained latent space, it significantly improves the accuracy of category-level retrieval methods. Our strategy outperforms all competing methods for retrieving or classifying images at a finer granularity than that available at train time. 
It also improves the accuracy for transfer learning tasks to fine-grained datasets, thereby establishing the new state of the art on five public benchmarks, like iNaturalist-2018.", "field": [], "task": ["Fine-Grained Image Classification", "Image Classification", "Self-Supervised Learning", "Transfer Learning"], "method": [], "dataset": ["iNaturalist 2019", "CIFAR-100", "Oxford 102 Flowers", "iNaturalist 2018", "Flowers-102", "Food-101", "Stanford Cars", "ImageNet"], "metric": ["Top 1 Accuracy", "Percentage correct", "Accuracy", "Top-1 Accuracy"], "title": "Grafit: Learning fine-grained image representations with coarse labels"} {"abstract": "Although skeleton-based action recognition has achieved great success in recent years, most of the existing methods may suffer from a large model size and slow execution speed. To alleviate this issue, we analyze skeleton sequence properties to propose a Double-feature Double-motion Network (DD-Net) for skeleton-based action recognition. By using a lightweight network structure (i.e., 0.15 million parameters), DD-Net can reach a super fast speed, as 3,500 FPS on one GPU, or, 2,000 FPS on one CPU. By employing robust features, DD-Net achieves the state-of-the-art performance on our experimental datasets: SHREC (i.e., hand actions) and JHMDB (i.e., body actions). Our code will be released with this paper later.", "field": [], "task": ["Action Recognition", "Hand Gesture Recognition", "Skeleton Based Action Recognition"], "method": [], "dataset": ["SHREC 2017 track on 3D Hand Gesture Recognition", "JHMDB (2D poses only)", "J-HMDB"], "metric": ["Speed (FPS)", "14 gestures accuracy", "Accuracy (pose)", "Accuracy", "Average accuracy of 3 splits", "28 gestures accuracy", "No. parameters", "Accuracy (RGB+pose)"], "title": "Make Skeleton-based Action Recognition Model Smaller, Faster and Better"} {"abstract": "We present a multilingual Named Entity Recognition approach based on a robust\nand general set of features across languages and datasets. Our system combines\nshallow local information with clustering semi-supervised features induced on\nlarge amounts of unlabeled text. Understanding via empirical experimentation\nhow to effectively combine various types of clustering features allows us to\nseamlessly export our system to other datasets and languages. The result is a\nsimple but highly competitive system which obtains state of the art results\nacross five languages and twelve datasets. The results are reported on standard\nshared task evaluation data such as CoNLL for English, Spanish and Dutch.\nFurthermore, and despite the lack of linguistically motivated features, we also\nreport best results for languages such as Basque and German. In addition, we\ndemonstrate that our method also obtains very competitive results even when the\namount of supervised data is cut by half, alleviating the dependency on\nmanually annotated data. Finally, the results show that our emphasis on\nclustering features is crucial to develop robust out-of-domain models. 
The\nsystem and models are freely available to facilitate its use and guarantee the\nreproducibility of results.", "field": [], "task": ["Named Entity Recognition"], "method": [], "dataset": ["CoNLL 2003 (English)"], "metric": ["F1"], "title": "Robust Multilingual Named Entity Recognition with Shallow Semi-Supervised Features"} {"abstract": "Recently, image representation built upon Convolutional Neural Network (CNN)\nhas been shown to provide effective descriptors for image search, outperforming\npre-CNN features as short-vector representations. Yet such models are not\ncompatible with geometry-aware re-ranking methods and still outperformed, on\nsome particular object retrieval benchmarks, by traditional image search\nsystems relying on precise descriptor matching, geometric re-ranking, or query\nexpansion. This work revisits both retrieval stages, namely initial search and\nre-ranking, by employing the same primitive information derived from the CNN.\nWe build compact feature vectors that encode several image regions without the\nneed to feed multiple inputs to the network. Furthermore, we extend integral\nimages to handle max-pooling on convolutional layer activations, allowing us to\nefficiently localize matching objects. The resulting bounding box is finally\nused for image re-ranking. As a result, this paper significantly improves\nexisting CNN-based recognition pipeline: We report for the first time results\ncompeting with traditional methods on the challenging Oxford5k and Paris6k\ndatasets.", "field": [], "task": ["Image Retrieval"], "method": [], "dataset": ["Par106k", "Par6k", "Oxf105k"], "metric": ["mAP", "MAP"], "title": "Particular object retrieval with integral max-pooling of CNN activations"} {"abstract": "Word vectors and Language Models (LMs) pretrained on a large amount of unlabelled data can dramatically improve various Natural Language Processing (NLP) tasks. However, the measure and impact of similarity between pretraining data and target task data are left to intuition. We propose three cost-effective measures to quantify different aspects of similarity between source pretraining and target task data. We demonstrate that these measures are good predictors of the usefulness of pretrained models for Named Entity Recognition (NER) over 30 data pairs. Results also suggest that pretrained LMs are more effective and more predictable than pretrained word vectors, but pretrained word vectors are better when pretraining data is dissimilar.", "field": [], "task": ["Named Entity Recognition"], "method": [], "dataset": ["WetLab", "JNLPBA"], "metric": ["F1"], "title": "Using Similarity Measures to Select Pretraining Data for NER"} {"abstract": "Skeleton-based human action recognition has attracted great interest thanks to the easy accessibility of the human skeleton data. Recently, there is a trend of using very deep feedforward neural networks to model the 3D coordinates of joints without considering the computational efficiency. In this paper, we propose a simple yet effective semantics-guided neural network (SGN) for skeleton-based action recognition. We explicitly introduce the high level semantics of joints (joint type and frame index) into the network to enhance the feature representation capability. 
In addition, we exploit the relationship of joints hierarchically through two modules, i.e., a joint-level module for modeling the correlations of joints in the same frame and a frame-level module for modeling the dependencies of frames by taking the joints in the same frame as a whole. A strong baseline is proposed to facilitate the study of this field. With an order of magnitude smaller model size than most previous works, SGN achieves the state-of-the-art performance on the NTU60, NTU120, and SYSU datasets. The source code is available at https://github.com/microsoft/SGN.", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D", "N-UCLA", "SYSU 3D"], "metric": ["Accuracy (CS)", "Accuracy (CV)", "Accuracy"], "title": "Semantics-Guided Neural Networks for Efficient Skeleton-Based Human Action Recognition"} {"abstract": "Cross-view image translation is challenging because it involves images with\ndrastically different views and severe deformation. In this paper, we propose a\nnovel approach named Multi-Channel Attention SelectionGAN (SelectionGAN) that\nmakes it possible to generate images of natural scenes in arbitrary viewpoints,\nbased on an image of the scene and a novel semantic map. The proposed\nSelectionGAN explicitly utilizes the semantic information and consists of two\nstages. In the first stage, the condition image and the target semantic map are\nfed into a cycled semantic-guided generation network to produce initial coarse\nresults. In the second stage, we refine the initial results by using a\nmulti-channel attention selection mechanism. Moreover, uncertainty maps\nautomatically learned from attentions are used to guide the pixel loss for\nbetter network optimization. Extensive experiments on Dayton, CVUSA and Ego2Top\ndatasets show that our model is able to generate significantly better results\nthan the state-of-the-art methods. The source code, data and trained models are\navailable at https://github.com/Ha0Tang/SelectionGAN.", "field": [], "task": ["Bird View Synthesis", "Cross-View Image-to-Image Translation", "Image-to-Image Translation"], "method": [], "dataset": ["cvusa", "Dayton (256\u00d7256) - ground-to-aerial", "Dayton (64x64) - ground-to-aerial", "Dayton (64\u00d764) - aerial-to-ground", "Ego2Top", "Dayton (256\u00d7256) - aerial-to-ground"], "metric": ["SSIM"], "title": "Multi-Channel Attention Selection GAN with Cascaded Semantic Guidance for Cross-View Image Translation"} {"abstract": "Semi-supervised learning (SSL) provides an effective\r\nmeans of leveraging unlabeled data to improve a model\u2019s\r\nperformance. In this paper, we demonstrate the power of a\r\nsimple combination of two common SSL methods: consistency regularization and pseudo-labeling. Our algorithm,\r\nFixMatch, first generates pseudo-labels using the model\u2019s\r\npredictions on weakly-augmented unlabeled images. For a\r\ngiven image, the pseudo-label is only retained if the model\r\nproduces a high-confidence prediction. The model is then\r\ntrained to predict the pseudo-label when fed a strongly-augmented version of the same image. Despite its simplicity, we show that FixMatch achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks, including 94.93% accuracy on CIFAR-10\r\nwith 250 labels and 88.61% accuracy with 40 \u2013 just 4 labels per class.
Since FixMatch bears many similarities\r\nto existing SSL methods that achieve worse performance,\r\nwe carry out an extensive ablation study to tease apart\r\nthe experimental factors that are most important to FixMatch\u2019s success. We make our code available at https:\r\n//github.com/google-research/fixmatch.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["STL-10"], "metric": ["Percentage correct"], "title": "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence"} {"abstract": "Fine-Grained Visual Categorization (FGVC) is a challenging topic in computer vision. It is a problem characterized by large intra-class differences and subtle inter-class differences. In this paper, we tackle this problem in a weakly supervised manner, where neural network models are getting fed with additional data using a data augmentation technique through a visual attention mechanism. We perform domain adaptive knowledge transfer via fine-tuning on our base network model. We perform our experiment on six challenging and commonly used FGVC datasets, and we show competitive improvement on accuracies by using attention-aware data augmentation techniques with features derived from deep learning model InceptionV3, pre-trained on large scale datasets. Our method outperforms competitor methods on multiple FGVC datasets and showed competitive results on other datasets. Experimental studies show that transfer learning from large scale datasets can be utilized effectively with visual attention based data augmentation, which can obtain state-of-the-art results on several FGVC datasets. We present a comprehensive analysis of our experiments. Our method achieves state-of-the-art results in multiple fine-grained classification datasets including challenging CUB200-2011 bird, Flowers-102, and FGVC-Aircrafts datasets.", "field": [], "task": ["Data Augmentation", "Fine-Grained Image Classification", "Fine-Grained Visual Categorization", "Image Classification", "Transfer Learning"], "method": [], "dataset": ["FGVC Aircraft", "CUB-200-2011", "Flowers-102", "Food-101", "Stanford Dogs", "Stanford Cars"], "metric": ["Top-1", "Top 1 Accuracy", "Accuracy"], "title": "Domain Adaptive Transfer Learning on Visual Attention Aware Data Augmentation for Fine-grained Visual Categorization"} {"abstract": "Most existing subspace clustering methods hinge on self-expression of handcrafted representations and are unaware of potential clustering errors. Thus they perform unsatisfactorily on real data with complex underlying subspaces. To solve this issue, we propose a novel deep adversarial subspace clustering (DASC) model, which learns more favorable sample representations by deep learning for subspace clustering, and more importantly introduces adversarial learning to supervise sample representation learning and subspace clustering. Specifically, DASC consists of a subspace clustering generator and a quality-verifying discriminator, which learn against each other. The generator produces subspace estimation and sample clustering. The discriminator evaluates current clustering performance by inspecting whether the re-sampled data from estimated subspaces have consistent subspace properties, and supervises the generator to progressively improve subspace clustering. Experimental results on the handwritten recognition, face and object clustering tasks demonstrate the advantages of DASC over shallow and few deep subspace clustering models. 
Moreover, to our best knowledge, this is the first successful application of GAN-alike model for unsupervised subspace clustering, which also paves the way for deep learning to solve other unsupervised learning problems.", "field": [], "task": ["Image Clustering", "Representation Learning"], "method": [], "dataset": ["coil-40", "UMist"], "metric": ["NMI", "Accuracy"], "title": "Deep Adversarial Subspace Clustering"} {"abstract": "We introduce Wavesplit, an end-to-end source separation system. From a single mixture, the model infers a representation for each source and then estimates each source signal given the inferred representations. The model is trained to jointly perform both tasks from the raw waveform. Wavesplit infers a set of source representations via clustering, which addresses the fundamental permutation problem of separation. For speech separation, our sequence-wide speaker representations provide a more robust separation of long, challenging recordings compared to prior work. Wavesplit redefines the state-of-the-art on clean mixtures of 2 or 3 speakers (WSJ0-2/3mix), as well as in noisy and reverberated settings (WHAM/WHAMR). We also set a new benchmark on the recent LibriMix dataset. Finally, we show that Wavesplit is also applicable to other domains, by separating fetal and maternal heart rates from a single abdominal electrocardiogram.", "field": [], "task": ["Data Augmentation", "Speech Separation"], "method": [], "dataset": ["wsj0-2mix"], "metric": ["SI-SDRi"], "title": "Wavesplit: End-to-End Speech Separation by Speaker Clustering"} {"abstract": "Inspired by recent advances of deep learning in instance segmentation and\nobject tracking, we introduce video object segmentation problem as a concept of\nguided instance segmentation. Our model proceeds on a per-frame basis, guided\nby the output of the previous frame towards the object of interest in the next\nframe. We demonstrate that highly accurate object segmentation in videos can be\nenabled by using a convnet trained with static images only. The key ingredient\nof our approach is a combination of offline and online learning strategies,\nwhere the former serves to produce a refined mask from the previous frame\nestimate and the latter allows to capture the appearance of the specific object\ninstance. Our method can handle different types of input annotations: bounding\nboxes and segments, as well as incorporate multiple annotated frames, making\nthe system suitable for diverse applications. We obtain competitive results on\nthree different datasets, independently from the type of input annotation.", "field": [], "task": ["Instance Segmentation", "Object Tracking", "Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation", "Visual Object Tracking"], "method": [], "dataset": ["DAVIS 2016", "YouTube"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "mIoU", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "Learning Video Object Segmentation from Static Images"} {"abstract": "Multi-person pose estimation in images and videos is an important yet\nchallenging task with many applications. Despite the large improvements in\nhuman pose estimation enabled by the development of convolutional neural\nnetworks, there still exist a lot of difficult cases where even the\nstate-of-the-art models fail to correctly localize all body joints. 
This\nmotivates the need for an additional refinement step that addresses these\nchallenging cases and can be easily applied on top of any existing method. In\nthis work, we introduce a pose refinement network (PoseRefiner) which takes as\ninput both the image and a given pose estimate and learns to directly predict a\nrefined pose by jointly reasoning about the input-output space. In order for\nthe network to learn to refine incorrect body joint predictions, we employ a\nnovel data augmentation scheme for training, where we model \"hard\" human pose\ncases. We evaluate our approach on four popular large-scale pose estimation\nbenchmarks such as MPII Single- and Multi-Person Pose Estimation, PoseTrack\nPose Estimation, and PoseTrack Pose Tracking, and report systematic improvement\nover the state of the art.", "field": [], "task": ["Data Augmentation", "Keypoint Detection", "Multi-Person Pose Estimation", "Multi-Person Pose Estimation and Tracking", "Pose Estimation", "Pose Tracking"], "method": [], "dataset": ["PoseTrack2018", "MPII Single Person", "MPII Multi-Person"], "metric": ["PCKh@0.5", "MOTA", "MAP", "AP", "mAP@0.5"], "title": "Learning to Refine Human Pose Estimation"} {"abstract": "Processing point cloud data is an important component of many real-world systems. As such, a wide variety of point-based approaches have been proposed, reporting steady benchmark improvements over time. We study the key ingredients of this progress and uncover two critical results. First, we find that auxiliary factors like different evaluation schemes, data augmentation strategies, and loss functions, which are independent of the model architecture, make a large difference in performance. The differences are large enough that they obscure the effect of architecture. When these factors are controlled for, PointNet++, a relatively older network, performs competitively with recent methods. Second, a very simple projection-based method, which we refer to as SimpleView, performs surprisingly well. It achieves on par or better results than sophisticated state-of-the-art methods on ModelNet40, while being half the size of PointNet++. It also outperforms state-of-the-art methods on ScanObjectNN, a real-world point cloud benchmark, and demonstrates better cross-dataset generalization.\n", "field": [], "task": ["3D Point Cloud Classification"], "method": [], "dataset": ["ScanObjectNN", "ModelNet40"], "metric": ["Overall Accuracy"], "title": "Revisiting Point Cloud Classification with a Simple and Effective Baseline"} {"abstract": "Recent advances in adaptive object detection have achieved compelling results in virtue of adversarial feature adaptation to mitigate the distributional shifts along the detection pipeline. Whilst adversarial adaptation significantly enhances the transferability of feature representations, the feature discriminability of object detectors remains less investigated. Moreover, transferability and discriminability may come at a contradiction in adversarial adaptation given the complex combinations of objects and the differentiated scene layouts between domains. In this paper, we propose a Hierarchical Transferability Calibration Network (HTCN) that hierarchically (local-region/image/instance) calibrates the transferability of feature representations for harmonizing transferability and discriminability. 
The proposed model consists of three components: (1) Importance Weighted Adversarial Training with input Interpolation (IWAT-I), which strengthens the global discriminability by re-weighting the interpolated image-level features; (2) Context-aware Instance-Level Alignment (CILA) module, which enhances the local discriminability by capturing the underlying complementary effect between the instance-level feature and the global context information for the instance-level feature alignment; (3) local feature masks that calibrate the local transferability to provide semantic guidance for the following discriminative pattern alignment. Experimental results show that HTCN significantly outperforms the state-of-the-art methods on benchmark datasets.", "field": [], "task": ["Object Detection", "Weakly Supervised Object Detection"], "method": [], "dataset": ["Cityscapes-to-Foggy Cityscapes"], "metric": ["mAP"], "title": "Harmonizing Transferability and Discriminability for Adapting Object Detectors"} {"abstract": "Most existing Re-IDentification (Re-ID) methods are highly dependent on precise bounding boxes that enable images to be aligned with each other. However, due to the challenging practical scenarios, current detection models often produce inaccurate bounding boxes, which inevitably degenerate the performance of existing Re-ID algorithms. In this paper, we propose a novel coarse-to-fine pyramid model to relax the need of bounding boxes, which not only incorporates local and global information, but also integrates the gradual cues between them. The pyramid model is able to match at different scales and then search for the correct image of the same identity, even when the image pairs are not aligned. In addition, in order to learn discriminative identity representation, we explore a dynamic training scheme to seamlessly unify two losses and extract appropriate shared information between them. Experimental results clearly demonstrate that the proposed method achieves the state-of-the-art results on three datasets. Especially, our approach exceeds the current best method by 9.5% on the most challenging CUHK03 dataset.", "field": [], "task": ["Person Re-Identification"], "method": [], "dataset": ["CUHK03 detected", "DukeMTMC-reID", "Market-1501", "CUHK03 labeled"], "metric": ["Rank-1", "MAP"], "title": "Pyramidal Person Re-IDentification via Multi-Loss Dynamic Training"} {"abstract": "Describes an audio dataset of spoken words designed to help train and\nevaluate keyword spotting systems. Discusses why this task is an interesting\nchallenge, and why it requires a specialized dataset that is different from\nconventional datasets used for automatic speech recognition of full sentences.\nSuggests a methodology for reproducible and comparable accuracy metrics for\nthis task. Describes how the data was collected and verified, what it contains,\nprevious versions and properties. Concludes by reporting baseline results of\nmodels trained on this dataset.", "field": [], "task": ["Accuracy Metrics", "Keyword Spotting", "Speech Recognition"], "method": [], "dataset": ["TensorFlow"], "metric": ["TFMA"], "title": "Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition"} {"abstract": "Top-performing deep architectures are trained on massive amounts of labeled\ndata. In the absence of labeled data for a certain task, domain adaptation\noften provides an attractive option given that labeled data of similar nature\nbut from a different domain (e.g. synthetic images) are available. 
Here, we\npropose a new approach to domain adaptation in deep architectures that can be\ntrained on large amount of labeled data from the source domain and large amount\nof unlabeled data from the target domain (no labeled target-domain data is\nnecessary).\n As the training progresses, the approach promotes the emergence of \"deep\"\nfeatures that are (i) discriminative for the main learning task on the source\ndomain and (ii) invariant with respect to the shift between the domains. We\nshow that this adaptation behaviour can be achieved in almost any feed-forward\nmodel by augmenting it with few standard layers and a simple new gradient\nreversal layer. The resulting augmented architecture can be trained using\nstandard backpropagation.\n Overall, the approach can be implemented with little effort using any of the\ndeep-learning packages. The method performs very well in a series of image\nclassification experiments, achieving adaptation effect in the presence of big\ndomain shifts and outperforming previous state-of-the-art on Office datasets.", "field": [], "task": ["Domain Adaptation", "Image Classification", "Transfer Learning", "Unsupervised Domain Adaptation", "Unsupervised Image-To-Image Translation"], "method": [], "dataset": ["SVNH-to-MNIST", "Office-Home", "UCF-to-HMDBfull", "Olympic-to-HMDBsmall", "Office-31", "HMDBsmall-to-UCF", "HMDBfull-to-UCF", "UCF-to-Olympic", "UCF-to-HMDBsmall"], "metric": ["Classification Accuracy", "Accuracy"], "title": "Unsupervised Domain Adaptation by Backpropagation"} {"abstract": "In many real-world problems, collecting a large number of labeled samples is infeasible. Few-shot learning (FSL) is the dominant approach to address this issue, where the objective is to quickly adapt to novel categories in presence of a limited number of samples. FSL tasks have been predominantly solved by leveraging the ideas from gradient-based meta-learning and metric learning approaches. However, recent works have demonstrated the significance of powerful feature representations with a simple embedding network that can outperform existing sophisticated FSL algorithms. In this work, we build on this insight and propose a novel training mechanism that simultaneously enforces equivariance and invariance to a general set of geometric transformations. Equivariance or invariance has been employed standalone in the previous works; however, to the best of our knowledge, they have not been used jointly. Simultaneous optimization for both of these contrasting objectives allows the model to jointly learn features that are not only independent of the input transformation but also the features that encode the structure of geometric transformations. These complementary sets of features help generalize well to novel classes with only a few data samples. We achieve additional improvements by incorporating a novel self-supervised distillation objective. 
Our extensive experimentation shows that even without knowledge distillation, our proposed method can outperform current state-of-the-art FSL methods on five popular benchmark datasets.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Knowledge Distillation", "Meta-Learning", "Metric Learning"], "method": [], "dataset": ["FC100 5-way (1-shot)", "CIFAR-FS 5-way (5-shot)", "Meta-Dataset", "Mini-Imagenet 5-way (1-shot)", "Tiered ImageNet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "CIFAR-FS 5-way (1-shot)", "FC100 5-way (5-shot)", "Tiered ImageNet 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Exploring Complementary Strengths of Invariant and Equivariant Representations for Few-Shot Learning"} {"abstract": "Advancing research in the emerging field of deep graph learning requires new tools to support tensor computation over graphs. In this paper, we present the design principles and implementation of Deep Graph Library (DGL). DGL distills the computational patterns of GNNs into a few generalized sparse tensor operations suitable for extensive parallelization. By advocating graph as the central programming abstraction, DGL can perform optimizations transparently. By cautiously adopting a framework-neutral design, DGL allows users to easily port and leverage the existing components across multiple deep learning frameworks. Our evaluation shows that DGL significantly outperforms other popular GNN-oriented frameworks in both speed and memory consumption over a variety of benchmarks and has little overhead for small scale workloads.", "field": [], "task": ["Graph Learning", "Node Classification"], "method": [], "dataset": ["Cora"], "metric": ["Accuracy"], "title": "Deep Graph Library: A Graph-Centric, Highly-Performant Package for Graph Neural Networks"} {"abstract": "Previous approaches for scene text detection usually rely on manually defined sliding windows. This work presents an intuitive two-stage region-based method to detect multi-oriented text without any prior knowledge regarding the textual shape. In the first stage, we estimate the possible locations of text instances by detecting and linking corners instead of shifting a set of default anchors. The quadrilateral proposals are geometry adaptive, which allows our method to cope with various text aspect ratios and orientations. In the second stage, we design a new pooling layer named Dual-RoI Pooling which embeds data augmentation inside the region-wise subnetwork for more robust classification and regression over these proposals. Experimental results on public benchmarks confirm that the proposed method is capable of achieving comparable performance with state-of-the-art methods. The code is publicly available at https://github.com/xhzdeng/crpn", "field": [], "task": ["Data Augmentation", "Regression", "Robust classification", "Scene Text", "Scene Text Detection"], "method": [], "dataset": ["ICDAR 2013", "ICDAR 2015", "COCO-Text"], "metric": ["F-Measure", "Recall", "Precision"], "title": "Detecting Multi-Oriented Text with Corner-based Region Proposals"} {"abstract": "Many Few-Shot Learning research works have two stages: pre-training a base model and adapting it to a novel model. In this paper, we propose to use a closed-form base learner, which constrains the adapting stage with the pre-trained base model to obtain a better-generalized novel model. The subsequent theoretical analysis proves its rationality and indicates how to train a well-generalized base model. 
We then conduct experiments on four benchmarks and achieve state-of-the-art performance in all cases. Notably, we achieve an accuracy of 87.75% on 5-shot miniImageNet, which outperforms existing methods by approximately 10%.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning"], "method": [], "dataset": ["FC100 5-way (1-shot)", "CIFAR-FS 5-way (5-shot)", "Mini-Imagenet 5-way (1-shot)", "Tiered ImageNet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "CIFAR-FS 5-way (1-shot)", "FC100 5-way (5-shot)", "Tiered ImageNet 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Generalized Adaptation for Few-Shot Learning"} {"abstract": "We introduce a new and rigorously-formulated PAC-Bayes few-shot meta-learning algorithm that implicitly learns a prior distribution of the model of interest. Our proposed method extends the PAC-Bayes framework from a single task setting to the few-shot learning setting to upper-bound generalisation errors on unseen tasks and samples. We also propose a generative-based approach to model the shared prior and the posterior of task-specific model parameters more expressively compared to the usual diagonal Gaussian assumption. We show that the models trained with our proposed meta-learning algorithm are well calibrated and accurate, with state-of-the-art calibration and classification results on few-shot classification (mini-ImageNet and tiered-ImageNet) and regression (multi-modal task-distribution regression) benchmarks.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Meta-Learning", "Regression"], "method": [], "dataset": ["Mini-Imagenet 5-way (1-shot)", "Tiered ImageNet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "Tiered ImageNet 5-way (5-shot)"], "metric": ["Accuracy"], "title": "PAC-Bayesian Meta-learning with Implicit Prior and Posterior"} {"abstract": "The intrusion detection system (IDS) has become an essential layer in all the latest ICT systems due to the urge towards cyber safety in the day-to-day world. For reasons including the uncertainty in identifying the types of attacks and the increased complexity of advanced cyber attacks, IDS calls for the integration of Deep Neural Networks (DNNs). In this paper, DNNs have been utilized to predict attacks on a Network Intrusion Detection System (N-IDS). A DNN with a learning rate of 0.1 is applied and run for 1000 epochs, and the KDDCup-'99' dataset has been used for training and benchmarking the network. For comparison purposes, the training is done on the same dataset with several other classical machine learning algorithms and DNNs with layers ranging from 1 to 5. The results were compared, and it was concluded that a DNN with 3 layers has superior performance over all the other classical machine learning algorithms.", "field": [], "task": ["Intrusion Detection", "Network Intrusion Detection"], "method": [], "dataset": ["KDD "], "metric": ["Accuracy"], "title": "Evaluating Shallow and Deep Neural Networks for Network Intrusion Detection Systems in Cyber Security"} {"abstract": "Popularized as 'bottom-up' attention, bounding box (or region) based visual features have recently surpassed vanilla grid-based convolutional features as the de facto standard for vision and language tasks like visual question answering (VQA). However, it is not clear whether the advantages of regions (e.g. better localization) are the key reasons for the success of bottom-up attention. 
In this paper, we revisit grid features for VQA, and find they can work surprisingly well - running more than an order of magnitude faster with the same accuracy (e.g. if pre-trained in a similar fashion). Through extensive experiments, we verify that this observation holds true across different VQA models (reporting a state-of-the-art accuracy on VQA 2.0 test-std, 72.71), datasets, and generalizes well to other tasks like image captioning. As grid features make the model design and training process much simpler, this enables us to train them end-to-end and also use a more flexible network design. We learn VQA models end-to-end, from pixels directly to answers, and show that strong performance is achievable without using any region annotations in pre-training. We hope our findings help further improve the scientific understanding and the practical application of VQA. Code and features will be made available.", "field": [], "task": ["Image Captioning", "Question Answering", "Visual Question Answering"], "method": [], "dataset": ["VQA v2 test-std", "VQA v2 test-dev"], "metric": ["number", "overall", "other", "yes/no", "Accuracy"], "title": "In Defense of Grid Features for Visual Question Answering"} {"abstract": "Recently, a number of competitive methods have tackled unsupervised representation learning by maximising the mutual information between the representations produced from augmentations. The resulting representations are then invariant to stochastic augmentation strategies, and can be used for downstream tasks such as clustering or classification. Yet data augmentations preserve many properties of an image and so there is potential for a suboptimal choice of representation that relies on matching easy-to-find features in the data. We demonstrate that greedy or local methods of maximising mutual information (such as stochastic gradient optimisation) discover local optima of the mutual information criterion; the resulting representations are also less-ideally suited to complex downstream tasks. Earlier work has not specifically identified or addressed this issue. We introduce deep hierarchical object grouping (DHOG) that computes a number of distinct discrete representations of images in a hierarchical order, eventually generating representations that better optimise the mutual information objective. We also find that these representations align better with the downstream task of grouping into underlying object classes. We tested DHOG on unsupervised clustering, which is a natural downstream test as the target representation is a discrete labelling of the data. We achieved new state-of-the-art results on the three main benchmarks without any prefiltering or Sobel-edge detection that proved necessary for many previous methods to work. We obtain accuracy improvements of: 4.3% on CIFAR-10, 1.5% on CIFAR-100-20, and 7.2% on SVHN.", "field": [], "task": ["Edge Detection", "Representation Learning", "Unsupervised Representation Learning"], "method": [], "dataset": ["CIFAR-10"], "metric": ["Train set", "ARI", "Backbone", "NMI", "Accuracy"], "title": "DHOG: Deep Hierarchical Object Grouping"} {"abstract": "We introduce Data Diversification: a simple but effective strategy to boost neural machine translation (NMT) performance. It diversifies the training data by using the predictions of multiple forward and backward models and then merging them with the original dataset on which the final NMT model is trained. Our method is applicable to all NMT models. 
It does not require extra monolingual data like back-translation, nor does it add more computations and parameters like ensembles of models. Our method achieves state-of-the-art BLEU scores of 30.7 and 43.7 in the WMT'14 English-German and English-French translation tasks, respectively. It also substantially improves on 8 other translation tasks: 4 IWSLT tasks (English-German and English-French) and 4 low-resource translation tasks (English-Nepali and English-Sinhala). We demonstrate that our method is more effective than knowledge distillation and dual learning, it exhibits strong correlation with ensembles of models, and it trades perplexity off for better BLEU score. We have released our source code at https://github.com/nxphi47/data_diversification", "field": [], "task": ["Knowledge Distillation", "Machine Translation"], "method": [], "dataset": ["WMT2014 English-German", "IWSLT2014 German-English"], "metric": ["BLEU score"], "title": "Data Diversification: A Simple Strategy For Neural Machine Translation"} {"abstract": "Neural Architecture Search (NAS) achieved many breakthroughs in recent years. In spite of its remarkable progress, many algorithms are restricted to particular search spaces. They also lack efficient mechanisms to reuse knowledge when confronting multiple tasks. These challenges preclude their applicability, and motivate our proposal of CATCH, a novel Context-bAsed meTa reinforcement learning (RL) algorithm for transferrable arChitecture searcH. The combination of meta-learning and RL allows CATCH to efficiently adapt to new tasks while being agnostic to search spaces. CATCH utilizes a probabilistic encoder to encode task properties into latent context variables, which then guide CATCH's controller to quickly \"catch\" top-performing networks. The contexts also assist a network evaluator in filtering inferior candidates and speed up learning. Extensive experiments demonstrate CATCH's universality and search efficiency over many other widely-recognized algorithms. It is also capable of handling cross-domain architecture search as competitive networks on ImageNet, COCO, and Cityscapes are identified. This is the first work to our knowledge that proposes an efficient transferrable NAS solution while maintaining robustness across various settings.", "field": [], "task": ["Meta-Learning", "Meta Reinforcement Learning", "Neural Architecture Search"], "method": [], "dataset": ["NAS-Bench-201, ImageNet-16-120"], "metric": ["Search time (s)", "Accuracy (val)"], "title": "CATCH: Context-based Meta Reinforcement Learning for Transferrable Architecture Search"} {"abstract": "Existing RGB-D salient object detection (SOD) approaches concentrate on the cross-modal fusion between the RGB stream and the depth stream. They do not deeply explore the effect of the depth map itself. In this work, we design a single stream network to directly use the depth map to guide early fusion and middle fusion between RGB and depth, which saves the feature encoder of the depth stream and achieves a lightweight and real-time model. We tactfully utilize depth information from two perspectives: (1) Overcoming the incompatibility problem caused by the great difference between modalities, we build a single stream encoder to achieve the early fusion, which can take full advantage of ImageNet pre-trained backbone model to extract rich and discriminative features. 
(2) We design a novel depth-enhanced dual attention module (DEDA) to efficiently provide the fore-/back-ground branches with the spatially filtered features, which enables the decoder to optimally perform the middle fusion. Besides, we put forward a pyramidally attended feature extraction module (PAFE) to accurately localize the objects of different scales. Extensive experiments demonstrate that the proposed model performs favorably against most state-of-the-art methods under different evaluation metrics. Furthermore, this model is 55.5\\% lighter than the current lightest model and runs at a real-time speed of 32 FPS when processing a $384 \\times 384$ image.", "field": [], "task": ["Object Detection", "RGB-D Salient Object Detection", "RGB Salient Object Detection", "Salient Object Detection"], "method": [], "dataset": ["NJU2K"], "metric": ["Average MAE", "S-Measure", "max F-Measure"], "title": "A Single Stream Network for Robust and Real-time RGB-D Salient Object Detection"} {"abstract": "Recent years have witnessed the significant progress of\nconvolutional neural networks (CNNs) in dynamic scene\ndeblurring. While most of the CNN models are generally learned\nby the reconstruction loss defined on training data, incorporating\nsuitable image priors as well as regularization terms into the\nnetwork architecture could boost the deblurring performance.\nIn this work, we propose a Dark and Bright Channel Priors\nembedded Network (DBCPeNet) to plug the channel priors into\na neural network for effective dynamic scene deblurring. A\nnovel trainable dark and bright channel priors embedded layer\n(DBCPeL) is developed to aggregate both channel priors and\nblurry image representations, and a sparse regularization is\nintroduced to regularize the DBCPeNet model learning. Furthermore, we present an effective multi-scale network architecture,\nnamely image full scale exploitation (IFSE), which works in both\ncoarse-to-fine and fine-to-coarse manners for better exploiting\ninformation flow across scales. Experimental results on the GoPro\nand K\u00f6hler datasets show that our proposed DBCPeNet performs\nfavorably against state-of-the-art deep image deblurring methods\nin terms of both quantitative metrics and visual quality.", "field": [], "task": ["Deblurring"], "method": [], "dataset": ["GoPro"], "metric": ["SSIM", "PSNR"], "title": "Dark and Bright Channel Prior Embedded Network for Dynamic Scene Deblurring"} {"abstract": "Inspired by speech recognition, recent state-of-the-art algorithms mostly\nconsider scene text recognition as a sequence prediction problem. Though\nachieving excellent performance, these methods usually neglect an important\nfact that text in images is actually distributed in two-dimensional space. This\nnature is quite different from that of speech, which is essentially a\none-dimensional signal. In principle, directly compressing features of text\ninto a one-dimensional form may lose useful information and introduce extra\nnoise. In this paper, we approach scene text recognition from a two-dimensional\nperspective. A simple yet effective model, called Character Attention Fully\nConvolutional Network (CA-FCN), is devised for recognizing text of\narbitrary shapes. Scene text recognition is realized with a semantic\nsegmentation network, where an attention mechanism for characters is adopted.\nCombined with a word formation module, CA-FCN can simultaneously recognize the\nscript and predict the position of each character. 
Experiments demonstrate that\nthe proposed algorithm outperforms previous methods on both regular and\nirregular text datasets. Moreover, it is proven to be more robust to imprecise\nlocalizations in the text detection phase, which are very common in practice.", "field": [], "task": ["Scene Text", "Scene Text Recognition", "Semantic Segmentation", "Speech Recognition"], "method": [], "dataset": ["ICDAR2013", "SVT"], "metric": ["Accuracy"], "title": "Scene Text Recognition from Two-Dimensional Perspective"} {"abstract": "In this work, we establish dense correspondences between an RGB image and a\nsurface-based representation of the human body, a task we refer to as dense\nhuman pose estimation. We first gather dense correspondences for 50K persons\nappearing in the COCO dataset by introducing an efficient annotation pipeline.\nWe then use our dataset to train CNN-based systems that deliver dense\ncorrespondence 'in the wild', namely in the presence of background, occlusions\nand scale variations. We improve our training set's effectiveness by training\nan 'inpainting' network that can fill in missing groundtruth values and report\nclear improvements with respect to the best results that would be achievable in\nthe past. We experiment with fully-convolutional networks and region-based\nmodels and observe a superiority of the latter; we further improve accuracy\nthrough cascading, obtaining a system that delivers highly accurate results in\nreal time. Supplementary materials and videos are provided on the project page\nhttp://densepose.org", "field": [], "task": ["Pose Estimation"], "method": [], "dataset": ["DensePose-COCO"], "metric": ["AP"], "title": "DensePose: Dense Human Pose Estimation In The Wild"} {"abstract": "Many predictive tasks of web applications need to model categorical\nvariables, such as user IDs and demographics like genders and occupations. To\napply standard machine learning techniques, these categorical predictors are\nalways converted to a set of binary features via one-hot encoding, making the\nresultant feature vector highly sparse. To learn from such sparse data\neffectively, it is crucial to account for the interactions between features.\n Factorization Machines (FMs) are a popular solution for efficiently using the\nsecond-order feature interactions. However, FM models feature interactions in a\nlinear way, which can be insufficient for capturing the non-linear and complex\ninherent structure of real-world data. While deep neural networks have recently\nbeen applied to learn non-linear feature interactions in industry, such as the\nWide&Deep by Google and DeepCross by Microsoft, the deep structure meanwhile\nmakes them difficult to train.\n In this paper, we propose a novel model, Neural Factorization Machine (NFM),\nfor prediction under sparse settings. NFM seamlessly combines the linearity of\nFM in modelling second-order feature interactions and the non-linearity of\nneural networks in modelling higher-order feature interactions. Conceptually,\nNFM is more expressive than FM since FM can be seen as a special case of NFM\nwithout hidden layers. Empirical results on two regression tasks show that with\none hidden layer only, NFM significantly outperforms FM with a 7.3% relative\nimprovement. 
Compared to the recent deep learning methods Wide&Deep and\nDeepCross, our NFM uses a shallower structure but offers better performance,\nbeing much easier to train and tune in practice.", "field": [], "task": ["Link Prediction", "Regression"], "method": [], "dataset": ["MovieLens 25M", "Yelp"], "metric": ["Hits@10", "nDCG@10", "HR@10"], "title": "Neural Factorization Machines for Sparse Predictive Analytics"} {"abstract": "Histology images are inherently symmetric under rotation, where each orientation is equally as likely to appear. However, this rotational symmetry is not widely utilised as prior knowledge in modern Convolutional Neural Networks (CNNs), resulting in data hungry models that learn independent features at each orientation. Allowing CNNs to be rotation-equivariant removes the necessity to learn this set of transformations from the data and instead frees up model capacity, allowing more discriminative features to be learned. This reduction in the number of required parameters also reduces the risk of overfitting. In this paper, we propose Dense Steerable Filter CNNs (DSF-CNNs) that use group convolutions with multiple rotated copies of each filter in a densely connected framework. Each filter is defined as a linear combination of steerable basis filters, enabling exact rotation and decreasing the number of trainable parameters compared to standard filters. We also provide the first in-depth comparison of different rotation-equivariant CNNs for histology image analysis and demonstrate the advantage of encoding rotational symmetry into modern architectures. We show that DSF-CNNs achieve state-of-the-art performance, with significantly fewer parameters, when applied to three different tasks in the area of computational pathology: breast tumour classification, colon gland segmentation and multi-tissue nuclear segmentation.", "field": [], "task": ["Breast Tumour Classification", "Colorectal Gland Segmentation:", "Multi-tissue Nucleus Segmentation", "Nuclear Segmentation"], "method": [], "dataset": ["CRAG", "Kumar", "PCam"], "metric": ["F1-score", "Hausdorff Distance (mm)", "AUC", "Dice"], "title": "Dense Steerable Filter CNNs for Exploiting Rotational Symmetry in Histology Images"} {"abstract": "This paper proposes to tackle open- domain question answering using Wikipedia\nas the unique knowledge source: the answer to any factoid question is a text\nspan in a Wikipedia article. This task of machine reading at scale combines the\nchallenges of document retrieval (finding the relevant articles) with that of\nmachine comprehension of text (identifying the answer spans from those\narticles). Our approach combines a search component based on bigram hashing and\nTF-IDF matching with a multi-layer recurrent neural network model trained to\ndetect answers in Wikipedia paragraphs. 
Our experiments on multiple existing QA\ndatasets indicate that (1) both modules are highly competitive with respect to\nexisting counterparts and (2) multitask learning using distant supervision on\ntheir combination is an effective complete system on this challenging task.", "field": [], "task": ["Open-Domain Question Answering", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["SQuAD1.1", "SearchQA", "Natural Questions (long)", "Natural Questions (short)", "SQuAD1.1 dev", "Quasart-T"], "metric": ["EM", "F1"], "title": "Reading Wikipedia to Answer Open-Domain Questions"} {"abstract": "Surgical tool presence detection and surgical phase recognition are two fundamental yet challenging tasks in surgical video analysis and also very essential components in various applications in modern operating rooms. While these two analysis tasks are highly correlated in clinical practice as the surgical process is well-defined, most previous methods tackled them separately, without making full use of their relatedness. In this paper, we present a novel method by developing a multi-task recurrent convolutional network with correlation loss (MTRCNet-CL) to exploit their relatedness to simultaneously boost the performance of both tasks. Specifically, our proposed MTRCNet-CL model has an end-to-end architecture with two branches, which share earlier feature encoders to extract general visual features while holding respective higher layers targeting for specific tasks. Given that temporal information is crucial for phase recognition, long-short term memory (LSTM) is explored to model the sequential dependencies in the phase recognition branch. More importantly, a novel and effective correlation loss is designed to model the relatedness between tool presence and phase identification of each video frame, by minimizing the divergence of predictions from the two branches. Mutually leveraging both low-level feature sharing and high-level prediction correlating, our MTRCNet-CL method can encourage the interactions between the two tasks to a large extent, and hence can bring about benefits to each other. Extensive experiments on a large surgical video dataset (Cholec80) demonstrate outstanding performance of our proposed method, consistently exceeding the state-of-the-art methods by a large margin (e.g., 89.1% v.s. 81.0% for the mAP in tool presence detection and 87.4% v.s. 84.5% for F1 score in phase recognition). The code can be found on our project website.", "field": [], "task": ["Surgical tool detection"], "method": [], "dataset": ["Cholec80"], "metric": ["mAP"], "title": "Multi-Task Recurrent Convolutional Network with Correlation Loss for Surgical Video Analysis"} {"abstract": "We present a new approach to modeling sequential data: the deep equilibrium model (DEQ). Motivated by an observation that the hidden layers of many existing deep sequence models converge towards some fixed point, we propose the DEQ approach that directly finds these equilibrium points via root-finding. Such a method is equivalent to running an infinite depth (weight-tied) feedforward network, but has the notable advantage that we can analytically backpropagate through the equilibrium point using implicit differentiation. Using this approach, training and prediction in these networks require only constant memory, regardless of the effective \"depth\" of the network. We demonstrate how DEQs can be applied to two state-of-the-art deep sequence models: self-attention transformers and trellis networks. 
On large-scale language modeling tasks, such as the WikiText-103 benchmark, we show that DEQs 1) often improve performance over these state-of-the-art models (for similar parameter counts); 2) have similar computational requirements to existing models; and 3) vastly reduce memory consumption (often the bottleneck for training large sequence models), demonstrating an up-to 88% memory reduction in our experiments. The code is available at https://github.com/locuslab/deq .", "field": [], "task": ["Language Modelling"], "method": [], "dataset": ["Penn Treebank (Word Level)", "WikiText-103"], "metric": ["Number of params", "Test perplexity", "Params"], "title": "Deep Equilibrium Models"} {"abstract": "Facial landmark detection aims to localize the anatomically defined points of human faces. In this paper, we study facial landmark detection from partially labeled facial images. A typical approach is to (1) train a detector on the labeled images; (2) generate new training samples using this detector's prediction as pseudo labels of unlabeled images; (3) retrain the detector on the labeled samples and partial pseudo labeled samples. In this way, the detector can learn from both labeled and unlabeled data to become robust. In this paper, we propose an interaction mechanism between a teacher and two students to generate more reliable pseudo labels for unlabeled data, which are beneficial to semi-supervised facial landmark detection. Specifically, the two students are instantiated as dual detectors. The teacher learns to judge the quality of the pseudo labels generated by the students and filter out unqualified samples before the retraining stage. In this way, the student detectors get feedback from their teacher and are retrained by premium data generated by itself. Since the two students are trained by different samples, a combination of their predictions will be more robust as the final prediction compared to either prediction. Extensive experiments on 300-W and AFLW benchmarks show that the interactions between teacher and students contribute to better utilization of the unlabeled data and achieves state-of-the-art performance.", "field": [], "task": ["Facial Landmark Detection"], "method": [], "dataset": ["300W", "300W (Full)"], "metric": ["NME", "Mean NME "], "title": "Teacher Supervises Students How to Learn From Partially Labeled Images for Facial Landmark Detection"} {"abstract": "We present a new technique for learning visual-semantic embeddings for\ncross-modal retrieval. Inspired by hard negative mining, the use of hard\nnegatives in structured prediction, and ranking loss functions, we introduce a\nsimple change to common loss functions used for multi-modal embeddings. That,\ncombined with fine-tuning and use of augmented data, yields significant gains\nin retrieval performance. We showcase our approach, VSE++, on MS-COCO and\nFlickr30K datasets, using ablation studies and comparisons with existing\nmethods. On MS-COCO our approach outperforms state-of-the-art methods by 8.8%\nin caption retrieval and 11.3% in image retrieval (at R@1).", "field": [], "task": ["Cross-Modal Retrieval", "Image Retrieval", "Structured Prediction"], "method": [], "dataset": ["Flickr30k"], "metric": ["Image-to-text R@5", "Image-to-text R@1", "Image-to-text R@10", "Text-to-image R@10", "Text-to-image R@1", "Text-to-image R@5"], "title": "VSE++: Improving Visual-Semantic Embeddings with Hard Negatives"} {"abstract": "Tracking has traditionally been the art of following interest points through space and time. 
This changed with the rise of powerful deep networks. Nowadays, tracking is dominated by pipelines that perform object detection followed by temporal association, also known as tracking-by-detection. In this paper, we present a simultaneous detection and tracking algorithm that is simpler, faster, and more accurate than the state of the art. Our tracker, CenterTrack, applies a detection model to a pair of images and detections from the prior frame. Given this minimal input, CenterTrack localizes objects and predicts their associations with the previous frame. That's it. CenterTrack is simple, online (no peeking into the future), and real-time. It achieves 67.3% MOTA on the MOT17 challenge at 22 FPS and 89.4% MOTA on the KITTI tracking benchmark at 15 FPS, setting a new state of the art on both datasets. CenterTrack is easily extended to monocular 3D tracking by regressing additional 3D attributes. Using monocular video input, it achieves 28.3% AMOTA@0.2 on the newly released nuScenes 3D tracking benchmark, substantially outperforming the monocular baseline on this benchmark while running at 28 FPS.", "field": [], "task": ["Multi-Object Tracking", "Object Detection"], "method": [], "dataset": ["KITTI Tracking test"], "metric": ["MOTA"], "title": "Tracking Objects as Points"} {"abstract": "A great proportion of sequence-to-sequence (Seq2Seq) models for Neural\nMachine Translation (NMT) adopt Recurrent Neural Network (RNN) to generate\ntranslation word by word following a sequential order. As the studies of\nlinguistics have proved that language is not linear word sequence but sequence\nof complex structure, translation at each step should be conditioned on the\nwhole target-side context. To tackle the problem, we propose a new NMT model\nthat decodes the sequence with the guidance of its structural prediction of the\ncontext of the target sequence. Our model generates translation based on the\nstructural prediction of the target-side context so that the translation can be\nfreed from the bind of sequential order. Experimental results demonstrate that\nour model is more competitive compared with the state-of-the-art methods, and\nthe analysis reflects that our model is also robust to translating sentences of\ndifferent lengths and it also reduces repetition with the instruction from the\ntarget-side context for decoding.", "field": [], "task": ["Machine Translation"], "method": [], "dataset": ["IWSLT2015 English-Vietnamese"], "metric": ["BLEU"], "title": "Deconvolution-Based Global Decoding for Neural Machine Translation"} {"abstract": "Comprehensive visual understanding requires detection frameworks that can effectively learn and utilize object interactions while analyzing objects individually. This is the main objective in Human-Object Interaction (HOI) detection task. In particular, relative spatial reasoning and structural connections between objects are essential cues for analyzing interactions, which is addressed by the proposed Visual-Spatial-Graph Network (VSGNet) architecture. VSGNet extracts visual features from the human-object pairs, refines the features with spatial configurations of the pair, and utilizes the structural connections between the pair via graph convolutions. The performance of VSGNet is thoroughly evaluated using the Verbs in COCO (V-COCO) and HICO-DET datasets. 
Experimental results indicate that VSGNet outperforms state-of-the-art solutions by 8% or 4 mAP in V-COCO and 16% or 3 mAP in HICO-DET.", "field": [], "task": ["Human-Object Interaction Detection"], "method": [], "dataset": ["HICO-DET", "V-COCO"], "metric": ["Time Per Frame(ms)", "MAP"], "title": "VSGNet: Spatial Attention Network for Detecting Human Object Interactions Using Graph Convolutions"} {"abstract": "Speaker intent detection and semantic slot filling are two critical tasks in\nspoken language understanding (SLU) for dialogue systems. In this paper, we\ndescribe a recurrent neural network (RNN) model that jointly performs intent\ndetection, slot filling, and language modeling. The neural network model keeps\nupdating the intent estimation as word in the transcribed utterance arrives and\nuses it as contextual features in the joint model. Evaluation of the language\nmodel and online SLU model is made on the ATIS benchmarking data set. On\nlanguage modeling task, our joint model achieves 11.8% relative reduction on\nperplexity comparing to the independent training language model. On SLU tasks,\nour joint model outperforms the independent task training model by 22.3% on\nintent detection error rate, with slight degradation on slot filling F1 score.\nThe joint model also shows advantageous performance in the realistic ASR\nsettings with noisy speech input.", "field": [], "task": ["Intent Detection", "Language Modelling", "Slot Filling", "Spoken Language Understanding"], "method": [], "dataset": ["ATIS"], "metric": ["F1", "Accuracy"], "title": "Joint Online Spoken Language Understanding and Language Modeling with Recurrent Neural Networks"} {"abstract": "We propose to leverage multiple sources of information to compute an estimate of the number of individuals present in an extremely dense crowd visible in a single image. Due to problems including perspective, occlusion, clutter, and few pixels per person, counting by human detection in such images is almost impossible. Instead, our approach relies on multiple sources such as low confidence head detections, repetition of texture elements (using SIFT), and frequency-domain analysis to estimate counts, along with confidence associated with observing individuals, in an image region. Secondly, we employ a global consistency constraint on counts using Markov Random Field. This caters for disparity in counts in local neighborhoods and across scales. We tested our approach on a new dataset of fifty crowd images containing 64K annotated humans, with the head counts ranging from 94 to 4543. This is in stark contrast to datasets used for existing methods which contain not more than tens of individuals. We experimentally demonstrate the efficacy and reliability of the proposed approach by quantifying the counting performance.", "field": [], "task": ["Crowd Counting", "Human Detection"], "method": [], "dataset": ["UCF CC 50", "UCF-QNRF"], "metric": ["MAE"], "title": "Multi-source Multi-scale Counting in Extremely Dense Crowd Images"} {"abstract": "Knowledge graph embedding models have gained significant attention in AI research. Recent works have shown that the inclusion of background knowledge, such as logical rules, can improve the performance of embeddings in downstream machine learning tasks. However, so far, most existing models do not allow the inclusion of rules. We address the challenge of including rules and present a new neural based embedding model (LogicENN). 
We prove that LogicENN can learn every ground truth of encoded rules in a knowledge graph. To the best of our knowledge, this has not been proved so far for the neural based family of embedding models. Moreover, we derive formulae for the inclusion of various rules, including (anti-)symmetric, inverse, irreflexive and transitive, implication, composition, equivalence and negation. Our formulation allows to avoid grounding for implication and equivalence relations. Our experiments show that LogicENN outperforms the state-of-the-art models in link prediction.", "field": [], "task": ["Graph Embedding", "Knowledge Graph Embedding", "Knowledge Graphs", "Link Prediction"], "method": [], "dataset": [" FB15k", "WN18", "FB15k-237"], "metric": ["Hits@10", "MR", "MRR"], "title": "LogicENN: A Neural Based Knowledge Graphs Embedding Model with Logical Rules"} {"abstract": "Recognizing objects from subcategories with very subtle differences remains a challenging task due to the large intra-class and small inter-class variation. Recent work tackles this problem in a weakly-supervised manner: object parts are first detected and the corresponding part-specific features are extracted for fine-grained classification. However, these methods typically treat the part-specific features of each image in isolation while neglecting their relationships between different images. In this paper, we propose Cross-X learning, a simple yet effective approach that exploits the relationships between different images and between different network layers for robust multi-scale feature learning. Our approach involves two novel components: (i) a cross-category cross-semantic regularizer that guides the extracted features to represent semantic parts and, (ii) a cross-layer regularizer that improves the robustness of multi-scale features by matching the prediction distribution across multiple layers. Our approach can be easily trained end-to-end and is scalable to large datasets like NABirds. We empirically analyze the contributions of different components of our approach and demonstrate its robustness, effectiveness and state-of-the-art performance on five benchmark datasets. Code is available at \\url{https://github.com/cswluo/CrossX}.", "field": [], "task": ["Fine-Grained Image Classification", "Fine-Grained Visual Categorization"], "method": [], "dataset": [" CUB-200-2011", "Stanford Cars", "FGVC Aircraft", "NABirds"], "metric": ["Accuracy"], "title": "Cross-X Learning for Fine-Grained Visual Categorization"} {"abstract": "In graph neural networks (GNNs), pooling operators compute local summaries of input graphs to capture their global properties, and they are fundamental for building deep GNNs that learn hierarchical representations. In this work, we propose the Node Decimation Pooling (NDP), a pooling operator for GNNs that generates coarser graphs while preserving the overall graph topology. During training, the GNN learns new node representations and fits them to a pyramid of coarsened graphs, which is computed offline in a pre-processing stage. NDP consists of three steps. First, a node decimation procedure selects the nodes belonging to one side of the partition identified by a spectral algorithm that approximates the \\maxcut{} solution. Afterwards, the selected nodes are connected with Kron reduction to form the coarsened graph. Finally, since the resulting graph is very dense, we apply a sparsification procedure that prunes the adjacency matrix of the coarsened graph to reduce the computational cost in the GNN. 
Notably, we show that it is possible to remove many edges without significantly altering the graph structure. Experimental results show that NDP is more efficient compared to state-of-the-art graph pooling operators while reaching, at the same time, competitive performance on a significant variety of graph classification tasks.", "field": [], "task": ["Graph Classification", "Representation Learning"], "method": [], "dataset": ["COLLAB", "ENZYMES", "REDDIT-B", "PROTEINS", "D&D", "NCI1", "MUTAG", "Mutagenicity", "Bench-hard", "5pt. Bench-Easy"], "metric": ["Accuracy"], "title": "Hierarchical Representation Learning in Graph Neural Networks with Node Decimation Pooling"} {"abstract": "We introduce a globally normalized transition-based neural network model that\nachieves state-of-the-art part-of-speech tagging, dependency parsing and\nsentence compression results. Our model is a simple feed-forward neural network\nthat operates on a task-specific transition system, yet achieves comparable or\nbetter accuracies than recurrent models. We discuss the importance of global as\nopposed to local normalization: a key insight is that the label bias problem\nimplies that globally normalized models can be strictly more expressive than\nlocally normalized models.", "field": [], "task": ["Dependency Parsing", "Part-Of-Speech Tagging", "Sentence Compression"], "method": [], "dataset": ["Penn Treebank"], "metric": ["UAS", "POS", "LAS"], "title": "Globally Normalized Transition-Based Neural Networks"} {"abstract": "Recent advances in neural variational inference have spawned a renaissance in\ndeep latent variable models. In this paper we introduce a generic variational\ninference framework for generative and conditional models of text. While\ntraditional variational methods derive an analytic approximation for the\nintractable distributions over latent variables, here we construct an inference\nnetwork conditioned on the discrete text input to provide the variational\ndistribution. We validate this framework on two very different text modelling\napplications, generative document modelling and supervised question answering.\nOur neural variational document model combines a continuous stochastic document\nrepresentation with a bag-of-words generative model and achieves the lowest\nreported perplexities on two standard test corpora. The neural answer selection\nmodel employs a stochastic representation layer within an attention mechanism\nto extract the semantics between a question and answer pair. On two question\nanswering benchmarks this model exceeds all previous published benchmarks.", "field": [], "task": ["Answer Selection", "Latent Variable Models", "Question Answering", "Topic Models", "Variational Inference"], "method": [], "dataset": ["QASent", "20 Newsgroups", "WikiQA"], "metric": ["MRR", "Test perplexity", "MAP"], "title": "Neural Variational Inference for Text Processing"} {"abstract": "In this paper, we propose a novel end-to-end trainable Video Question\nAnswering (VideoQA) framework with three major components: 1) a new\nheterogeneous memory which can effectively learn global context information\nfrom appearance and motion features; 2) a redesigned question memory which\nhelps understand the complex semantics of question and highlights queried\nsubjects; and 3) a new multimodal fusion layer which performs multi-step\nreasoning by attending to relevant visual and textual hints with self-updated\nattention. 
Our VideoQA model first generates the global context-aware visual\nand textual features respectively by interacting current inputs with memory\ncontents. After that, it performs attentional fusion of the multimodal visual\nand textual representations to infer the correct answer. Multiple cycles of\nreasoning can be made to iteratively refine attention weights of the multimodal\ndata and improve the final representation of the QA pair. Experimental results\ndemonstrate that our approach achieves state-of-the-art performance on four VideoQA\nbenchmark datasets.", "field": [], "task": ["Question Answering", "Video Question Answering", "Visual Question Answering"], "method": [], "dataset": ["MSRVTT-QA", "MSVD-QA"], "metric": ["Accuracy"], "title": "Heterogeneous Memory Enhanced Multimodal Attention Model for Video Question Answering"} {"abstract": "Recent works on representation learning for graph structured data\npredominantly focus on learning distributed representations of graph\nsubstructures such as nodes and subgraphs. However, many graph analytics tasks\nsuch as graph classification and clustering require representing entire graphs\nas fixed length feature vectors. While the aforementioned approaches are\nnaturally unequipped to learn such representations, graph kernels remain the\nmost effective way of obtaining them. However, these graph kernels use\nhandcrafted features (e.g., shortest paths, graphlets, etc.) and hence are\nhampered by problems such as poor generalization. To address this limitation,\nin this work, we propose a neural embedding framework named graph2vec to learn\ndata-driven distributed representations of arbitrary sized graphs. graph2vec's\nembeddings are learnt in an unsupervised manner and are task agnostic. Hence,\nthey could be used for any downstream task such as graph classification,\nclustering and even seeding supervised representation learning approaches. Our\nexperiments on several benchmark and large real-world datasets show that\ngraph2vec achieves significant improvements in classification and clustering\naccuracies over substructure representation learning approaches and is\ncompetitive with state-of-the-art graph kernels.", "field": [], "task": ["Graph Classification", "Graph Embedding", "Graph Matching", "Representation Learning"], "method": [], "dataset": ["NCI109", "Android Malware Dataset", "PROTEINS", "NCI1", "MUTAG", "PTC"], "metric": ["Accuracy"], "title": "graph2vec: Learning Distributed Representations of Graphs"} {"abstract": "We combine supervised learning with unsupervised learning in deep neural\nnetworks. The proposed model is trained to simultaneously minimize the sum of\nsupervised and unsupervised cost functions by backpropagation, avoiding the\nneed for layer-wise pre-training. Our work builds on the Ladder network\nproposed by Valpola (2015), which we extend by combining the model with\nsupervision. We show that the resulting model reaches state-of-the-art\nperformance in semi-supervised MNIST and CIFAR-10 classification, in addition\nto permutation-invariant MNIST classification with all labels.", "field": [], "task": ["Semi-Supervised Image Classification"], "method": [], "dataset": ["CIFAR-10, 4000 Labels"], "metric": ["Accuracy"], "title": "Semi-Supervised Learning with Ladder Networks"} {"abstract": "Sentence embeddings have become an essential part of today's natural language processing (NLP) systems, especially together with advanced deep learning methods. 
Although pre-trained sentence encoders are available in the general domain, none exists for biomedical texts to date. In this work, we introduce BioSentVec: the first open set of sentence embeddings trained with over 30 million documents from both scholarly articles in PubMed and clinical notes in the MIMIC-III Clinical Database. We evaluate BioSentVec embeddings in two sentence pair similarity tasks in different text genres. Our benchmarking results demonstrate that the BioSentVec embeddings can better capture sentence semantics compared to the other competitive alternatives and achieve state-of-the-art performance in both tasks. We expect BioSentVec to facilitate the research and development in biomedical text mining and to complement the existing resources in biomedical word embeddings. BioSentVec is publicly available at https://github.com/ncbi-nlp/BioSentVec", "field": [], "task": ["Sentence Embeddings", "Sentence Embeddings For Biomedical Texts", "Word Embeddings"], "method": [], "dataset": ["BIOSSES", "MedSTS"], "metric": ["Pearson Correlation"], "title": "BioSentVec: creating sentence embeddings for biomedical texts"} {"abstract": "We propose a principled method for gradient-based regularization of the critic of GAN-like models trained by adversarially optimizing the kernel of a Maximum Mean Discrepancy (MMD). We show that controlling the gradient of the critic is vital to having a sensible loss function, and devise a method to enforce exact, analytical gradient constraints at no additional cost compared to existing approximate techniques based on additive regularizers. The new loss function is provably continuous, and experiments show that it stabilizes and accelerates training, giving image generation models that outperform state-of-the art methods on $160 \\times 160$ CelebA and $64 \\times 64$ unconditional ImageNet.", "field": [], "task": ["Image Generation"], "method": [], "dataset": ["CIFAR-10"], "metric": ["Inception score", "FID"], "title": "On gradient regularizers for MMD GANs"} {"abstract": "We present SlowFast networks for video recognition. Our model involves (i) a Slow pathway, operating at low frame rate, to capture spatial semantics, and (ii) a Fast pathway, operating at high frame rate, to capture motion at fine temporal resolution. The Fast pathway can be made very lightweight by reducing its channel capacity, yet can learn useful temporal information for video recognition. Our models achieve strong performance for both action classification and detection in video, and large improvements are pin-pointed as contributions by our SlowFast concept. We report state-of-the-art accuracy on major video recognition benchmarks, Kinetics, Charades and AVA. Code has been made available at: https://github.com/facebookresearch/SlowFast", "field": [], "task": ["Action Classification", "Action Classification ", "Action Detection", "Action Recognition", "Action Recognition In Videos", "Video Recognition"], "method": [], "dataset": ["Kinetics-400", "Something-Something V2", "Kinetics-600", "Diving-48", "AVA v2.1", "Charades"], "metric": ["Top-5 Accuracy", "Vid acc@5", "mAP (Val)", "MAP", "Top-1 Accuracy", "Accuracy", "Vid acc@1"], "title": "SlowFast Networks for Video Recognition"} {"abstract": "We propose an algorithm to predict room layout from a single image that\ngeneralizes across panoramas and perspective images, cuboid layouts and more\ngeneral layouts (e.g. L-shape room). 
Our method operates directly on the\npanoramic image, rather than decomposing into perspective images as do recent\nworks. Our network architecture is similar to that of RoomNet, but we show\nimprovements due to aligning the image based on vanishing points, predicting\nmultiple layout elements (corners, boundaries, size and translation), and\nfitting a constrained Manhattan layout to the resulting predictions. Our method\ncompares well in speed and accuracy to other existing work on panoramas,\nachieves among the best accuracy for perspective images, and can handle both\ncuboid-shaped and more general Manhattan layouts.", "field": [], "task": ["3D Room Layouts From A Single RGB Panorama"], "method": [], "dataset": ["Stanford 2D-3D", "PanoContext", "Realtor360"], "metric": ["3DIoU"], "title": "LayoutNet: Reconstructing the 3D Room Layout from a Single RGB Image"} {"abstract": "Neural network models recently proposed for question answering (QA) primarily\nfocus on capturing the passage-question relation. However, they have minimal\ncapability to link relevant facts distributed across multiple sentences which\nis crucial in achieving deeper understanding, such as performing multi-sentence\nreasoning, co-reference resolution, etc. They also do not explicitly focus on\nthe question and answer type which often plays a critical role in QA. In this\npaper, we propose a novel end-to-end question-focused multi-factor attention\nnetwork for answer extraction. Multi-factor attentive encoding using\ntensor-based transformation aggregates meaningful facts even when they are\nlocated in multiple sentences. To implicitly infer the answer type, we also\npropose a max-attentional question aggregation mechanism to encode a question\nvector based on the important words in a question. During prediction, we\nincorporate sequence-level encoding of the first wh-word and its immediately\nfollowing word as an additional source of question type information. Our\nproposed model achieves significant improvements over the best prior\nstate-of-the-art results on three large-scale challenging QA datasets, namely\nNewsQA, TriviaQA, and SearchQA.", "field": [], "task": ["Open-Domain Question Answering", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["NewsQA", "SearchQA"], "metric": ["EM", "Unigram Acc", "F1", "N-gram F1"], "title": "A Question-Focused Multi-Factor Attention Network for Question Answering"} {"abstract": "Semi-supervised learning has proven to be a powerful paradigm for leveraging unlabeled data to mitigate the reliance on large labeled datasets. In this work, we unify the current dominant approaches for semi-supervised learning to produce a new algorithm, MixMatch, that works by guessing low-entropy labels for data-augmented unlabeled examples and mixing labeled and unlabeled data using MixUp. We show that MixMatch obtains state-of-the-art results by a large margin across many datasets and labeled data amounts. For example, on CIFAR-10 with 250 labels, we reduce error rate by a factor of 4 (from 38% to 11%) and by a factor of 2 on STL-10. We also demonstrate how MixMatch can help achieve a dramatically better accuracy-privacy trade-off for differential privacy. 
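The MixMatch abstract above combines low-entropy label guessing with MixUp. The sketch below shows the two ingredients in isolation: temperature sharpening of a guessed label distribution and MixUp of two labeled examples. The temperature, Beta parameter, and the max(lam, 1-lam) convention are taken as assumptions for illustration; the full augmentation-and-averaging pipeline is omitted.

```python
import numpy as np

def sharpen(p, T=0.5):
    """Lower the entropy of a guessed label distribution (temperature T assumed)."""
    p = p ** (1.0 / T)
    return p / p.sum(axis=-1, keepdims=True)

def mixup(x1, y1, x2, y2, alpha=0.75, rng=None):
    """MixUp two (input, label) pairs; keeping max(lam, 1-lam) biases toward x1."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

guess = np.array([0.4, 0.35, 0.25])
print(sharpen(guess))                     # more peaked than the original guess
rng = np.random.default_rng(0)
x_mix, y_mix = mixup(np.ones(4), np.array([1., 0., 0.]),
                     np.zeros(4), np.array([0., 1., 0.]), rng=rng)
print(x_mix, y_mix)
```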
Finally, we perform an ablation study to tease apart which components of MixMatch are most important for its success.", "field": [], "task": ["Image Classification", "Semi-Supervised Image Classification"], "method": [], "dataset": ["CIFAR-10, 500 Labels", "CIFAR-100", "SVHN, 500 Labels", "CIFAR-10", "CIFAR-10, 4000 Labels", "CIFAR-10, 2000 Labels", "CIFAR-10, 250 Labels", "STL-10, 1000 Labels", "SVHN, 250 Labels", "SVHN, 1000 labels", "STL-10", "SVHN", "CIFAR-10, 1000 Labels", "STL-10, 5000 Labels", "SVHN, 2000 Labels", "SVHN, 4000 Labels"], "metric": ["Percentage error", "Percentage correct", "Accuracy"], "title": "MixMatch: A Holistic Approach to Semi-Supervised Learning"} {"abstract": "Recent development of large-scale question answering (QA) datasets triggered\na substantial amount of research into end-to-end neural architectures for QA.\nIncreasingly complex systems have been conceived without comparison to simpler\nneural baseline systems that would justify their complexity. In this work, we\npropose a simple heuristic that guides the development of neural baseline\nsystems for the extractive QA task. We find that there are two ingredients\nnecessary for building a high-performing neural QA system: first, the awareness\nof question words while processing the context and second, a composition\nfunction that goes beyond simple bag-of-words modeling, such as recurrent\nneural networks. Our results show that FastQA, a system that meets these two\nrequirements, can achieve very competitive performance compared with existing\nmodels. We argue that this surprising finding puts results of previous systems\nand the complexity of recent QA datasets into perspective.", "field": [], "task": ["Question Answering", "Reading Comprehension"], "method": [], "dataset": ["NewsQA", "SQuAD1.1 dev", "SQuAD1.1"], "metric": ["EM", "F1"], "title": "Making Neural QA as Simple as Possible but not Simpler"} {"abstract": "We address temporal action localization in untrimmed long videos. This is\nimportant because videos in real applications are usually unconstrained and\ncontain multiple action instances plus video content of background scenes or\nother activities. To address this challenging issue, we exploit the\neffectiveness of deep networks in temporal action localization via three\nsegment-based 3D ConvNets: (1) a proposal network identifies candidate segments\nin a long video that may contain actions; (2) a classification network learns\none-vs-all action classification model to serve as initialization for the\nlocalization network; and (3) a localization network fine-tunes on the learned\nclassification network to localize each action instance. We propose a novel\nloss function for the localization network to explicitly consider temporal\noverlap and therefore achieve high temporal localization accuracy. Only the\nproposal network and the localization network are used during prediction. 
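The localization network above is trained with a loss that explicitly considers temporal overlap. A small helper for the usual overlap measure, temporal IoU between a predicted and a ground-truth segment, is sketched below; the exact loss form used in the paper is not reproduced.

```python
def temporal_iou(seg_a, seg_b):
    """IoU of two temporal segments given as (start, end) in seconds."""
    start_a, end_a = seg_a
    start_b, end_b = seg_b
    inter = max(0.0, min(end_a, end_b) - max(start_a, start_b))
    union = (end_a - start_a) + (end_b - start_b) - inter
    return inter / union if union > 0 else 0.0

print(temporal_iou((2.0, 6.0), (4.0, 9.0)))  # 2 / 7 ≈ 0.286
```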
On\ntwo large-scale benchmarks, our approach achieves significantly superior\nperformances compared with other state-of-the-art systems: mAP increases from\n1.7% to 7.4% on MEXaction2 and increases from 15.0% to 19.0% on THUMOS 2014,\nwhen the overlap threshold for evaluation is set to 0.5.", "field": [], "task": ["Action Classification", "Action Classification ", "Action Localization", "Temporal Action Localization", "Temporal Localization"], "method": [], "dataset": ["MEXaction2", "THUMOS\u201914"], "metric": ["mAP@0.2", "mAP", "mAP@0.3", "mAP IOU@0.5", "mAP IOU@0.2", "mAP IOU@0.4", "mAP@0.4", "mAP@0.1", "mAP IOU@0.3", "mAP@0.5", "mAP IOU@0.1"], "title": "Temporal Action Localization in Untrimmed Videos via Multi-stage CNNs"} {"abstract": "Domain adaptation in person re-identification (re-ID) has always been a challenging task. In this work, we explore how to harness the naturally similar characteristics of the samples from the target domain for learning to conduct person re-ID in an unsupervised manner. Concretely, we propose a Self-similarity Grouping (SSG) approach, which exploits the potential similarity (from global body to local parts) of unlabeled samples to automatically build multiple clusters from different views. These independent clusters are then assigned labels, which serve as the pseudo identities to supervise the training process. We repeatedly and alternately conduct such a grouping and training process until the model is stable. Despite its apparent simplicity, our SSG outperforms the state of the art by more than 4.6% (DukeMTMC to Market1501) and 4.4% (Market1501 to DukeMTMC) in mAP, respectively. Upon our SSG, we further introduce a clustering-guided semi-supervised approach named SSG++ to conduct one-shot domain adaptation in an open-set setting (i.e. the number of independent identities from the target domain is unknown). Without spending much effort on labeling, our SSG++ can further promote the mAP upon SSG by 10.7% and 6.9%, respectively. Our code is available at: https://github.com/OasisYang/SSG .", "field": [], "task": ["Domain Adaptation", "One-Shot Learning", "Person Re-Identification", "Unsupervised Domain Adaptation", "Unsupervised Person Re-Identification"], "method": [], "dataset": ["Market to Duke", "Market to MSMT", "Market-1501->MSMT17", "DukeMTMC-reID->MSMT17", "DukeMTMC-reID", "Duke to MSMT", "Duke to Market", "Market-1501"], "metric": ["rank-10", "mAP", "Rank-10", "MAP", "Rank-1", "rank-1", "Rank-5", "rank-5"], "title": "Self-similarity Grouping: A Simple Unsupervised Cross Domain Adaptation Approach for Person Re-identification"} {"abstract": "Symmetry detection is a classical problem in computer graphics, and many existing approaches rely on traditional geometric methods. In recent years, however, the rise of deep learning has changed the landscape of computer graphics. In this paper, we aim to solve symmetry detection for occluded point clouds in a deep-learning fashion. To the best of our knowledge, we are the first to utilize deep learning to tackle such a problem. In this deep learning framework, two forms of supervision, points on the symmetry plane and normal vectors, are employed to help us pinpoint the symmetry plane.
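SSG above groups unlabeled target-domain samples into clusters and uses the cluster assignments as pseudo identities. A minimal sketch of that grouping step with scikit-learn's KMeans follows; the feature extractor, the multiple global/local views, and the alternating re-training loop are omitted, and the cluster count is an arbitrary assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def pseudo_labels(features, n_clusters=8, seed=0):
    """Group unlabeled embeddings and return cluster ids to use as pseudo identities."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    return km.fit_predict(features)

rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 64))          # stand-in for re-ID embeddings
labels = pseudo_labels(feats)
print(labels.shape, np.bincount(labels))    # one pseudo identity per sample
```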
We conducted experiments on the YCB- video dataset and demonstrate the efficacy of our method.", "field": [], "task": ["Occluded 3D Object Symmetry Detection", "Symmetry Detection"], "method": [], "dataset": ["YCB-Video"], "metric": ["PR AUC"], "title": "Symmetry Detection of Occluded Point Cloud Using Deep Learning"} {"abstract": "Regression via classification (RvC) is a common method used for regression problems in deep learning, where the target variable belongs to a set of continuous values. By discretizing the target into a set of non-overlapping classes, it has been shown that training a classifier can improve neural network accuracy compared to using a standard regression approach. However, it is not clear how the set of discrete classes should be chosen and how it affects the overall solution. In this work, we propose that using several discrete data representations simultaneously can improve neural network learning compared to a single representation. Our approach is end-to-end differentiable and can be added as a simple extension to conventional learning methods, such as deep neural networks. We test our method on three challenging tasks and show that our method reduces the prediction error compared to a baseline RvC approach while maintaining a similar model complexity.", "field": [], "task": ["Age Estimation", "Head Pose Estimation", "Historical Color Image Dating", "Regression"], "method": [], "dataset": ["HCI", "UTKFace", "BIWI"], "metric": ["MAE", "MAE (trained with BIWI data)"], "title": "Deep Ordinal Regression with Label Diversity"} {"abstract": "State-of-the-art navigation methods leverage a spatial memory to generalize to new environments, but their occupancy maps are limited to capturing the geometric structures directly observed by the agent. We propose occupancy anticipation, where the agent uses its egocentric RGB-D observations to infer the occupancy state beyond the visible regions. In doing so, the agent builds its spatial awareness more rapidly, which facilitates efficient exploration and navigation in 3D environments. By exploiting context in both the egocentric views and top-down maps our model successfully anticipates a broader map of the environment, with performance significantly better than strong baselines. Furthermore, when deployed for the sequential decision-making tasks of exploration and navigation, our model outperforms state-of-the-art methods on the Gibson and Matterport3D datasets. Our approach is the winning entry in the 2020 Habitat PointNav Challenge. Project page: http://vision.cs.utexas.edu/projects/occupancy_anticipation/", "field": [], "task": ["Decision Making", "Efficient Exploration", "Robot Navigation"], "method": [], "dataset": ["Habitat 2020 Point Nav test-std"], "metric": ["SOFT_SPL", "DISTANCE_TO_GOAL", "SUCCESS", "SPL"], "title": "Occupancy Anticipation for Efficient Exploration and Navigation"} {"abstract": "Deep learning has improved performance on many natural language processing\n(NLP) tasks individually. However, general NLP models cannot emerge within a\nparadigm that focuses on the particularities of a single metric, dataset, and\ntask. We introduce the Natural Language Decathlon (decaNLP), a challenge that\nspans ten tasks: question answering, machine translation, summarization,\nnatural language inference, sentiment analysis, semantic role labeling,\nzero-shot relation extraction, goal-oriented dialogue, semantic parsing, and\ncommonsense pronoun resolution. We cast all tasks as question answering over a\ncontext. 
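The label-diversity paper above trains several classification heads over different discretizations of the same continuous target and combines them. The numpy sketch below shows how one continuous prediction can be decoded from a softmax over bins as an expected value and then averaged across discretizations; the bin layouts and the averaging rule are illustrative assumptions.

```python
import numpy as np

def expected_value(probs, bin_centers):
    """Decode a continuous prediction from class probabilities over bins."""
    return float(np.dot(probs, bin_centers))

def combine_discretizations(logits_per_head, centers_per_head):
    """Average the decoded values of several heads, each with its own bin layout."""
    preds = []
    for logits, centers in zip(logits_per_head, centers_per_head):
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        preds.append(expected_value(probs, centers))
    return float(np.mean(preds))

# Two illustrative discretizations of an age range [0, 80].
centers_a = np.linspace(5, 75, 8)        # 8 coarse bins
centers_b = np.linspace(2.5, 77.5, 16)   # 16 finer bins
rng = np.random.default_rng(0)
print(combine_discretizations([rng.normal(size=8), rng.normal(size=16)],
                              [centers_a, centers_b]))
```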
Furthermore, we present a new Multitask Question Answering Network\n(MQAN) jointly learns all tasks in decaNLP without any task-specific modules or\nparameters in the multitask setting. MQAN shows improvements in transfer\nlearning for machine translation and named entity recognition, domain\nadaptation for sentiment analysis and natural language inference, and zero-shot\ncapabilities for text classification. We demonstrate that the MQAN's\nmulti-pointer-generator decoder is key to this success and performance further\nimproves with an anti-curriculum training strategy. Though designed for\ndecaNLP, MQAN also achieves state of the art results on the WikiSQL semantic\nparsing task in the single-task setting. We also release code for procuring and\nprocessing data, training and evaluating models, and reproducing all\nexperiments for decaNLP.", "field": [], "task": ["Domain Adaptation", "Machine Translation", "Named Entity Recognition", "Natural Language Inference", "Question Answering", "Relation Extraction", "Semantic Parsing", "Semantic Role Labeling", "Sentiment Analysis", "Text Classification", "Transfer Learning"], "method": [], "dataset": ["MultiNLI"], "metric": ["Accuracy"], "title": "The Natural Language Decathlon: Multitask Learning as Question Answering"} {"abstract": "This work proposes the continuous conditional generative adversarial network (CcGAN), the first generative model for image generation conditional on continuous, scalar conditions (termed regression labels). Existing conditional GANs (cGANs) are mainly designed for categorical conditions (eg, class labels); conditioning on regression labels is mathematically distinct and raises two fundamental problems:(P1) Since there may be very few (even zero) real images for some regression labels, minimizing existing empirical versions of cGAN losses (aka empirical cGAN losses) often fails in practice;(P2) Since regression labels are scalar and infinitely many, conventional label input methods are not applicable. The proposed CcGAN solves the above problems, respectively, by (S1) reformulating existing empirical cGAN losses to be appropriate for the continuous scenario; and (S2) proposing a naive label input (NLI) method and an improved label input (ILI) method to incorporate regression labels into the generator and the discriminator. The reformulation in (S1) leads to two novel empirical discriminator losses, termed the hard vicinal discriminator loss (HVDL) and the soft vicinal discriminator loss (SVDL) respectively, and a novel empirical generator loss. The error bounds of a discriminator trained with HVDL and SVDL are derived under mild assumptions in this work. Two new benchmark datasets (RC-49 and Cell-200) and a novel evaluation metric (Sliding Fr\\'echet Inception Distance) are also proposed for this continuous scenario. Our experiments on the Circular 2-D Gaussians, RC-49, UTKFace, Cell-200, and Steering Angle datasets show that CcGAN is able to generate diverse, high-quality samples from the image distribution conditional on a given regression label. 
Moreover, in these experiments, CcGAN substantially outperforms cGAN both visually and quantitatively.", "field": [], "task": ["Image Generation", "Regression"], "method": [], "dataset": ["RC-49"], "metric": ["Intra-FID"], "title": "Continuous Conditional Generative Adversarial Networks for Image Generation: Novel Losses and Label Input Mechanisms"} {"abstract": "Classification problems solved with deep neural networks (DNNs) typically rely on a closed world paradigm, and optimize over a single objective (e.g., minimization of the cross-entropy loss). This setup dismisses all kinds of supporting signals that can be used to reinforce the existence or absence of a particular pattern. The increasing need for models that are interpretable by design makes the inclusion of said contextual signals a crucial necessity. To this end, we introduce the notion of Self-Supervised Autogenous Learning (SSAL) models. A SSAL objective is realized through one or more additional targets that are derived from the original supervised classification task, following architectural principles found in multi-task learning. SSAL branches impose low-level priors into the optimization process (e.g., grouping). The ability of using SSAL branches during inference, allow models to converge faster, focusing on a richer set of class-relevant features. We show that SSAL models consistently outperform the state-of-the-art while also providing structured predictions that are more interpretable.", "field": [], "task": ["Image Classification", "Multi-Task Learning"], "method": [], "dataset": ["CIFAR-100", "ImageNet"], "metric": ["Percentage correct", "Top 1 Accuracy"], "title": "Contextual Classification Using Self-Supervised Auxiliary Models for Deep Neural Networks"} {"abstract": "The field of Automatic Facial Expression Analysis has grown rapidly in recent\nyears. However, despite progress in new approaches as well as benchmarking\nefforts, most evaluations still focus on either posed expressions, near-frontal\nrecordings, or both. This makes it hard to tell how existing expression\nrecognition approaches perform under conditions where faces appear in a wide\nrange of poses (or camera views), displaying ecologically valid expressions.\nThe main obstacle for assessing this is the availability of suitable data, and\nthe challenge proposed here addresses this limitation. The FG 2017 Facial\nExpression Recognition and Analysis challenge (FERA 2017) extends FERA 2015 to\nthe estimation of Action Units occurrence and intensity under different camera\nviews. In this paper we present the third challenge in automatic recognition of\nfacial expressions, to be held in conjunction with the 12th IEEE conference on\nFace and Gesture Recognition, May 2017, in Washington, United States. Two\nsub-challenges are defined: the detection of AU occurrence, and the estimation\nof AU intensity. In this work we outline the evaluation protocol, the data\nused, and the results of a baseline method for both sub-challenges.", "field": [], "task": ["Facial Action Unit Detection", "Facial Expression Recognition", "Gesture Recognition"], "method": [], "dataset": ["BP4D"], "metric": ["F1", "Average Accuracy"], "title": "FERA 2017 - Addressing Head Pose in the Third Facial Expression Recognition and Analysis Challenge"} {"abstract": "Recent reports suggest that a generic supervised deep CNN model trained on a\nlarge-scale dataset reduces, but does not remove, dataset bias on a standard\nbenchmark. 
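SSAL above derives auxiliary targets from the original classification task (for example, a grouping of the fine classes) and trains them jointly. The PyTorch sketch below illustrates that multi-task idea with a hypothetical coarse grouping and an auxiliary cross-entropy term; the grouping, the loss weight, and the placement of the auxiliary branch are all assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

n_fine, n_coarse, d = 10, 2, 32
fine_to_coarse = torch.tensor([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # hypothetical grouping

backbone = nn.Sequential(nn.Linear(d, 64), nn.ReLU())
fine_head = nn.Linear(64, n_fine)
coarse_head = nn.Linear(64, n_coarse)        # auxiliary, self-derived branch
criterion = nn.CrossEntropyLoss()

x = torch.randn(16, d)
y_fine = torch.randint(0, n_fine, (16,))
y_coarse = fine_to_coarse[y_fine]            # auxiliary target derived from y_fine

h = backbone(x)
loss = criterion(fine_head(h), y_fine) + 0.5 * criterion(coarse_head(h), y_coarse)
loss.backward()
print(float(loss))
```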
Fine-tuning deep models in a new domain can require a significant\namount of data, which for many applications is simply not available. We propose\na new CNN architecture which introduces an adaptation layer and an additional\ndomain confusion loss, to learn a representation that is both semantically\nmeaningful and domain invariant. We additionally show that a domain confusion\nmetric can be used for model selection to determine the dimension of an\nadaptation layer and the best position for the layer in the CNN architecture.\nOur proposed adaptation method offers empirical performance which exceeds\npreviously published results on a standard benchmark visual domain adaptation\ntask.", "field": [], "task": ["Domain Adaptation", "Model Selection"], "method": [], "dataset": ["Office-Caltech"], "metric": ["Average Accuracy"], "title": "Deep Domain Confusion: Maximizing for Domain Invariance"} {"abstract": "Conventional training of a deep CNN based object detector demands a large number of bounding box annotations, which may be unavailable for rare categories. In this work we develop a few-shot object detector that can learn to detect novel objects from only a few annotated examples. Our proposed model leverages fully labeled base classes and quickly adapts to novel classes, using a meta feature learner and a reweighting module within a one-stage detection architecture. The feature learner extracts meta features that are generalizable to detect novel object classes, using training data from base classes with sufficient samples. The reweighting module transforms a few support examples from the novel classes to a global vector that indicates the importance or relevance of meta features for detecting the corresponding objects. These two modules, together with a detection prediction module, are trained end-to-end based on an episodic few-shot learning scheme and a carefully designed loss function. Through extensive experiments we demonstrate that our model outperforms well-established baselines by a large margin for few-shot object detection, on multiple datasets and settings. We also present analysis on various aspects of our proposed model, aiming to provide some inspiration for future few-shot detection works.", "field": [], "task": ["Few-Shot Learning", "Few-Shot Object Detection", "Image Classification", "Meta-Learning", "Object Detection"], "method": [], "dataset": ["MS-COCO (30-shot)", "MS-COCO (10-shot)"], "metric": ["AP"], "title": "Few-shot Object Detection via Feature Reweighting"} {"abstract": "This paper presents RuSentiment, a new dataset for sentiment analysis of social media posts in Russian, and a new set of comprehensive annotation guidelines that are extensible to other languages. RuSentiment is currently the largest in its class for Russian, with 31,185 posts annotated with Fleiss{'} kappa of 0.58 (3 annotations per post). To diversify the dataset, 6,950 posts were pre-selected with an active learning-style strategy. We report baseline classification results, and we also release the best-performing embeddings trained on 3.2B tokens of Russian VKontakte posts.", "field": [], "task": ["Active Learning", "Sentiment Analysis", "Word Embeddings"], "method": [], "dataset": ["RuSentiment"], "metric": ["Weighted F1"], "title": "RuSentiment: An Enriched Sentiment Analysis Dataset for Social Media in Russian"} {"abstract": "Distantly supervised open-domain question answering (DS-QA) aims to find answers in collections of unlabeled text. 
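The few-shot detector above reweights meta features with class-specific vectors produced from support examples. Below is a minimal sketch of that channel-wise reweighting step; the shapes and the way the reweighting vectors are obtained are simplified assumptions, and the detection head is omitted.

```python
import torch

def reweight(meta_features, class_vectors):
    """Channel-wise reweighting: one class-specific feature map per novel class.

    meta_features : (B, C, H, W) output of the meta feature learner.
    class_vectors : (N, C) one reweighting vector per class (e.g. pooled from supports).
    Returns       : (B, N, C, H, W) class-specific feature maps.
    """
    return meta_features.unsqueeze(1) * class_vectors.view(1, -1, class_vectors.size(1), 1, 1)

meta = torch.randn(2, 8, 16, 16)
vecs = torch.randn(3, 8)            # stand-in for 3 novel-class vectors
print(reweight(meta, vecs).shape)   # torch.Size([2, 3, 8, 16, 16])
```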
Existing DS-QA models usually retrieve related paragraphs from a large-scale corpus and apply reading comprehension techniques to extract answers from the most relevant paragraph. They ignore the rich information contained in other paragraphs. Moreover, distant supervision data is inevitably accompanied by the wrong-labeling problem, and these noisy data will substantially degrade the performance of DS-QA. To address these issues, we propose a novel DS-QA model which employs a paragraph selector to filter out those noisy paragraphs and a paragraph reader to extract the correct answer from those denoised paragraphs. Experimental results on real-world datasets show that our model can capture useful information from noisy data and achieve significant improvements on DS-QA as compared to all baselines.", "field": [], "task": ["Denoising", "Information Retrieval", "Open-Domain Question Answering", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["SearchQA", "Quasar", "Quasart-T"], "metric": ["N-gram F1", "Unigram Acc", "F1", "EM", "EM (Quasar-T)", "F1 (Quasar-T)"], "title": "Denoising Distantly Supervised Open-Domain Question Answering"} {"abstract": "Hand pose estimation from 3D depth images has been explored widely using various kinds of techniques in the field of computer vision. Though deep learning based methods have greatly improved performance recently, this problem still remains unsolved due to the lack of large datasets, like ImageNet, or of effective data synthesis methods. In this paper, we propose HandAugment, a method to synthesize image data to augment the training process of the neural networks. Our method has two main parts: First, we propose a scheme of two-stage neural networks. This scheme can make the neural networks focus on the hand regions and thus improve the performance. Second, we introduce a simple and effective method to synthesize data by combining real and synthetic images in the image space. Finally, we show that our method achieves first place in the task of depth-based 3D hand pose estimation in the HANDS 2019 challenge.", "field": [], "task": ["3D Hand Pose Estimation", "Data Augmentation", "Hand Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["HANDS 2019"], "metric": ["Average 3D Error"], "title": "HandAugment: A Simple Data Augmentation Method for Depth-Based 3D Hand Pose Estimation"} {"abstract": "We present a convolution-free approach to video classification built exclusively on self-attention over space and time. Our method, named \"TimeSformer,\" adapts the standard Transformer architecture to video by enabling spatiotemporal feature learning directly from a sequence of frame-level patches. Our experimental study compares different self-attention schemes and suggests that \"divided attention,\" where temporal attention and spatial attention are separately applied within each block, leads to the best video classification accuracy among the design choices considered. Despite the radically different design compared to the prominent paradigm of 3D convolutional architectures for video, TimeSformer achieves state-of-the-art results on several major action recognition benchmarks, including the best reported accuracy on Kinetics-400 and Kinetics-600. Furthermore, our model is faster to train and has higher test-time efficiency compared to competing architectures.
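The "divided attention" scheme described above applies temporal attention and spatial attention separately within each block. A hedged PyTorch sketch of that factorization over a (batch, time, patches, dim) token tensor follows, using nn.MultiheadAttention with residual connections only; the MLP, class token, and positional embeddings of the full TimeSformer block are omitted and the sizes are illustrative.

```python
import torch
import torch.nn as nn

class DividedSpaceTimeAttention(nn.Module):
    """Temporal attention over frames, then spatial attention over patches."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.time_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.space_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                                 # x: (B, T, N, D)
        B, T, N, D = x.shape
        t = x.permute(0, 2, 1, 3).reshape(B * N, T, D)    # attend across time
        t = t + self.time_attn(t, t, t)[0]
        x = t.reshape(B, N, T, D).permute(0, 2, 1, 3)
        s = x.reshape(B * T, N, D)                        # attend across patches
        s = s + self.space_attn(s, s, s)[0]
        return s.reshape(B, T, N, D)

tokens = torch.randn(2, 8, 49, 64)                        # 8 frames, 7x7 patches, dim 64
print(DividedSpaceTimeAttention(64)(tokens).shape)        # torch.Size([2, 8, 49, 64])
```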
Code and pretrained models will be made publicly available.", "field": [], "task": ["Action Classification", "Action Recognition", "Video Classification", "Video Question Answering", "Video Understanding"], "method": [], "dataset": ["Kinetics-400", "Howto100M-QA", "Diving-48", "Something-Something V2"], "metric": ["Top-1 Accuracy", "Vid acc@5", "Vid acc@1", "Accuracy"], "title": "Is Space-Time Attention All You Need for Video Understanding?"} {"abstract": "Many new proposals for scene text recognition (STR) models have been introduced in recent years. While each claim to have pushed the boundary of the technology, a holistic and fair comparison has been largely missing in the field due to the inconsistent choices of training and evaluation datasets. This paper addresses this difficulty with three major contributions. First, we examine the inconsistencies of training and evaluation datasets, and the performance gap results from inconsistencies. Second, we introduce a unified four-stage STR framework that most existing STR models fit into. Using this framework allows for the extensive evaluation of previously proposed STR modules and the discovery of previously unexplored module combinations. Third, we analyze the module-wise contributions to performance in terms of accuracy, speed, and memory demand, under one consistent set of training and evaluation datasets. Such analyses clean up the hindrance on the current comparisons to understand the performance gain of the existing modules.", "field": [], "task": ["Scene Text", "Scene Text Recognition"], "method": [], "dataset": ["ICDAR2013", "ICDAR2015", "ICDAR 2003", "SVT"], "metric": ["Accuracy"], "title": "What Is Wrong With Scene Text Recognition Model Comparisons? Dataset and Model Analysis"} {"abstract": "One-class novelty detection is to identify anomalous instances that do not conform to the expected normal instances. In this paper, the Generative Adversarial Networks (GANs) based on encoder-decoder-encoder pipeline are used for detection and achieve state-of-the-art performance. However, deep neural networks are too over-parameterized to deploy on resource-limited devices. Therefore, Progressive Knowledge Distillation with GANs (PKDGAN) is proposed to learn compact and fast novelty detection networks. The P-KDGAN is a novel attempt to connect two standard GANs by the designed distillation loss for transferring knowledge from the teacher to the student. The progressive learning of knowledge distillation is a two-step approach that continuously improves the performance of the student GAN and achieves better performance than single step methods. In the first step, the student GAN learns the basic knowledge totally from the teacher via guiding of the pretrained teacher GAN with fixed weights. In the second step, joint fine-training is adopted for the knowledgeable teacher and student GANs to further improve the performance and stability. 
The experimental results on CIFAR-10, MNIST, and FMNIST show that our method improves the performance of the student GAN by 2.44%, 1.77%, and 1.73% when compressing the computation at ratios of 24.45:1, 311.11:1, and 700:1, respectively.", "field": [], "task": ["Anomaly Detection", "Knowledge Distillation", "Unsupervised Anomaly Detection"], "method": [], "dataset": ["MNIST", "\t Fashion-MNIST", "CIFAR-10"], "metric": ["ROC AUC", "AUC-ROC"], "title": "P-KDGAN: Progressive Knowledge Distillation with GANs for One-class Novelty Detection"} {"abstract": "Few-shot image classification aims to classify unseen classes with limited labelled samples. Recent works benefit from the meta-learning process with episodic tasks and can fast adapt to class from training to testing. Due to the limited number of samples for each task, the initial embedding network for meta-learning becomes an essential component and can largely affect the performance in practice. To this end, most of the existing methods highly rely on the efficient embedding network. Due to the limited labelled data, the scale of embedding network is constrained under a supervised learning(SL) manner which becomes a bottleneck of the few-shot learning methods. In this paper, we proposed to train a more generalized embedding network with self-supervised learning (SSL) which can provide robust representation for downstream tasks by learning from the data itself. We evaluate our work by extensive comparisons with previous baseline methods on two few-shot classification datasets ({\\em i.e.,} MiniImageNet and CUB) and achieve better performance over baselines. Tests on four datasets in cross-domain few-shot learning classification show that the proposed method achieves state-of-the-art results and further prove the robustness of the proposed model. Our code is available at \\hyperref[https://github.com/phecy/SSL-FEW-SHOT.]{https://github.com/phecy/SSL-FEW-SHOT.}", "field": [], "task": ["Cross-Domain Few-Shot", "cross-domain few-shot learning", "Few-Shot Image Classification", "Few-Shot Learning", "Image Classification", "Meta-Learning", "Self-Supervised Learning"], "method": [], "dataset": ["Mini-Imagenet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "Mini-ImageNet - 1-Shot Learning", "CUB 200 5-way 1-shot", "CUB 200 5-way 5-shot"], "metric": ["Accuracy"], "title": "Self-Supervised Learning For Few-Shot Image Classification"} {"abstract": "Simplicity is the ultimate sophistication. Differentiable Architecture Search (DARTS) has now become one of the mainstream paradigms of neural architecture search. However, it largely suffers from several disturbing factors of optimization process whose results are unstable to reproduce. FairDARTS points out that skip connections natively have an unfair advantage in exclusive competition which primarily leads to dramatic performance collapse. While FairDARTS turns the unfair competition into a collaborative one, we instead impede such unfair advantage by injecting unbiased random noise into skip operations' output. In effect, the optimizer should perceive this difficulty at each training step and refrain from overshooting on skip connections, but in a long run it still converges to the right solution area since no bias is added to the gradient. We name this novel approach as NoisyDARTS. Our experiments on CIFAR-10 and ImageNet attest that it can effectively break the skip connection's unfair advantage and yield better performance. 
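NoisyDARTS above injects unbiased random noise into the output of skip connections during search so the optimizer cannot overshoot on them. A hedged PyTorch sketch of a skip-operation wrapper that adds zero-mean Gaussian noise at training time only is shown below; the noise scale is an arbitrary assumption.

```python
import torch
import torch.nn as nn

class NoisySkip(nn.Module):
    """Identity (skip) candidate operation with zero-mean noise during search."""

    def __init__(self, std=0.2):
        super().__init__()
        self.std = std

    def forward(self, x):
        if self.training:
            # Unbiased: E[noise] = 0, so no bias is added to the gradient in the long run.
            return x + torch.randn_like(x) * self.std
        return x

op = NoisySkip()
op.train()
x = torch.randn(4, 16)
print(torch.allclose(op(x), x))   # False while searching (noise injected)
op.eval()
print(torch.allclose(op(x), x))   # True at evaluation time
```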
It generates a series of models that achieve state-of-the-art results on both datasets. Code will be made available at https://github.com/xiaomi-automl/NoisyDARTS.", "field": [], "task": ["AutoML", "Image Classification", "Neural Architecture Search"], "method": [], "dataset": ["ImageNet", "CIFAR-10"], "metric": ["Search Time (GPU days)", "MACs", "Percentage correct", "Top-1 Error Rate", "FLOPS", "Params", "Parameters", "Accuracy"], "title": "Noisy Differentiable Architecture Search"} {"abstract": "Entity alignment aims to identify equivalent entity pairs from different Knowledge Graphs (KGs), which is essential in integrating multi-source KGs. Recently, with the introduction of GNNs into entity alignment, the architectures of recent models have become more and more complicated. We even find two counter-intuitive phenomena within these methods: (1) The standard linear transformation in GNNs is not working well. (2) Many advanced KG embedding models designed for link prediction task perform poorly in entity alignment. In this paper, we abstract existing entity alignment methods into a unified framework, Shape-Builder & Alignment, which not only successfully explains the above phenomena but also derives two key criteria for an ideal transformation operation. Furthermore, we propose a novel GNNs-based method, Relational Reflection Entity Alignment (RREA). RREA leverages Relational Reflection Transformation to obtain relation specific embeddings for each entity in a more efficient way. The experimental results on real-world datasets show that our model significantly outperforms the state-of-the-art methods, exceeding by 5.8%-10.9% on Hits@1.", "field": [], "task": ["Entity Alignment", "Knowledge Graphs", "Link Prediction"], "method": [], "dataset": ["DBP15k zh-en"], "metric": ["Hits@1"], "title": "Relational Reflection Entity Alignment"} {"abstract": "We introduce a simple recurrent variational auto-encoder architecture that\nsignificantly improves image modeling. The system represents the\nstate-of-the-art in latent variable models for both the ImageNet and Omniglot\ndatasets. We show that it naturally separates global conceptual information\nfrom lower level details, thus addressing one of the fundamentally desired\nproperties of unsupervised learning. Furthermore, the possibility of\nrestricting ourselves to storing only global information about an image allows\nus to achieve high quality 'conceptual compression'.", "field": [], "task": ["Image Generation", "Latent Variable Models", "Omniglot"], "method": [], "dataset": ["CIFAR-10"], "metric": ["bits/dimension"], "title": "Towards Conceptual Compression"} {"abstract": "Effective and efficient mitigation of malware is a long-time endeavor in the\ninformation security community. The development of an anti-malware system that\ncan counteract an unknown malware is a prolific activity that may benefit\nseveral sectors. We envision an intelligent anti-malware system that utilizes\nthe power of deep learning (DL) models. Using such models would enable the\ndetection of newly-released malware through mathematical generalization. That\nis, finding the relationship between a given malware $x$ and its corresponding\nmalware family $y$, $f: x \\mapsto y$. To accomplish this feat, we used the\nMalimg dataset (Nataraj et al., 2011) which consists of malware images that\nwere processed from malware binaries, and then we trained the following DL\nmodels 1 to classify each malware family: CNN-SVM (Tang, 2013), GRU-SVM\n(Agarap, 2017), and MLP-SVM. 
Empirical evidence has shown that the GRU-SVM\nstands out among the DL models with a predictive accuracy of ~84.92%. This\nstands to reason for the mentioned model had the relatively most sophisticated\narchitecture design among the presented models. The exploration of an even more\noptimal DL-SVM model is the next stage towards the engineering of an\nintelligent anti-malware system.", "field": [], "task": ["Malware Classification"], "method": [], "dataset": ["Malimg Dataset"], "metric": ["Accuracy"], "title": "Towards Building an Intelligent Anti-Malware System: A Deep Learning Approach using Support Vector Machine (SVM) for Malware Classification"} {"abstract": "This paper introduces a new large-scale music dataset, MusicNet, to serve as\na source of supervision and evaluation of machine learning methods for music\nresearch. MusicNet consists of hundreds of freely-licensed classical music\nrecordings by 10 composers, written for 11 instruments, together with\ninstrument/note annotations resulting in over 1 million temporal labels on 34\nhours of chamber music performances under various studio and microphone\nconditions.\n The paper defines a multi-label classification task to predict notes in\nmusical recordings, along with an evaluation protocol, and benchmarks several\nmachine learning architectures for this task: i) learning from spectrogram\nfeatures; ii) end-to-end learning with a neural net; iii) end-to-end learning\nwith a convolutional neural net. These experiments show that end-to-end models\ntrained for note prediction learn frequency selective filters as a low-level\nrepresentation of audio.", "field": [], "task": ["Multi-Label Classification", "Music Transcription"], "method": [], "dataset": ["MusicNet"], "metric": ["APS"], "title": "Learning Features of Music from Scratch"} {"abstract": "Clustering is one of the most fundamental tasks in machine learning. Recently, deep clustering has become a major trend in clustering techniques. Representation learning often plays an important role in the effectiveness of deep clustering, and thus can be a principal cause of performance degradation. In this paper, we propose a clustering-friendly representation learning method using instance discrimination and feature decorrelation. Our deep-learning-based representation learning method is motivated by the properties of classical spectral clustering. Instance discrimination learns similarities among data and feature decorrelation removes redundant correlation among features. We utilize an instance discrimination method in which learning individual instance classes leads to learning similarity among instances. Through detailed experiments and examination, we show that the approach can be adapted to learning a latent space for clustering. We design novel softmax-formulated decorrelation constraints for learning. In evaluations of image clustering using CIFAR-10 and ImageNet-10, our method achieves accuracy of 81.5% and 95.4%, respectively, far above state-of-the-art values. 
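The clustering-friendly representation above pairs instance discrimination with a feature decorrelation constraint that removes redundant correlation among features. Below is a hedged PyTorch sketch of a generic decorrelation penalty on the off-diagonal of the batch correlation matrix; the paper's softmax-formulated constraint is not reproduced.

```python
import torch

def decorrelation_loss(features, eps=1e-5):
    """Penalize off-diagonal correlations between feature dimensions.

    features: (N, D) batch of embeddings.
    """
    z = features - features.mean(dim=0, keepdim=True)
    z = z / (z.std(dim=0, keepdim=True) + eps)
    corr = (z.t() @ z) / features.size(0)              # (D, D) correlation matrix
    off_diag = corr - torch.diag(torch.diagonal(corr))
    return (off_diag ** 2).sum() / features.size(1)

feats = torch.randn(128, 32, requires_grad=True)
loss = decorrelation_loss(feats)
loss.backward()
print(float(loss))
```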
We also show that the softmax-formulated constraints are compatible with various neural networks.", "field": [], "task": ["Deep Clustering", "Image Clustering", "Representation Learning"], "method": [], "dataset": ["Imagenet-dog-15", "CIFAR-100", "CIFAR-10", "ImageNet-10", "STL-10"], "metric": ["Train set", "Train Split", "ARI", "Backbone", "Train Set", "NMI", "Accuracy"], "title": "Clustering-friendly Representation Learning via Instance Discrimination and Feature Decorrelation"} {"abstract": "While knowledge distillation (transfer) has been attracting attentions from the research community, the recent development in the fields has heightened the need for reproducible studies and highly generalized frameworks to lower barriers to such high-quality, reproducible deep learning research. Several researchers voluntarily published frameworks used in their knowledge distillation studies to help other interested researchers reproduce their original work. Such frameworks, however, are usually neither well generalized nor maintained, thus researchers are still required to write a lot of code to refactor/build on the frameworks for introducing new methods, models, datasets and designing experiments. In this paper, we present our developed open-source framework built on PyTorch and dedicated for knowledge distillation studies. The framework is designed to enable users to design experiments by declarative PyYAML configuration files, and helps researchers complete the recently proposed ML Code Completeness Checklist. Using the developed framework, we demonstrate its various efficient training strategies, and implement a variety of knowledge distillation methods. We also reproduce some of their original experimental results on the ImageNet and COCO datasets presented at major machine learning conferences such as ICLR, NeurIPS, CVPR and ECCV, including recent state-of-the-art methods. All the source code, configurations, log files and trained model weights are publicly available at https://github.com/yoshitomo-matsubara/torchdistill .", "field": [], "task": ["Image Classification", "Instance Segmentation", "Knowledge Distillation", "Object Detection"], "method": [], "dataset": ["ImageNet", "COCO test-dev"], "metric": ["box AP", "mask AP", "Top 1 Accuracy"], "title": "torchdistill: A Modular, Configuration-Driven Framework for Knowledge Distillation"} {"abstract": "In 2D image processing, some attempts decompose images into high and low frequency components for describing edge and smooth parts respectively. Similarly, the contour and flat area of 3D objects, such as the boundary and seat area of a chair, describe different but also complementary geometries. However, such investigation is lost in previous deep networks that understand point clouds by directly treating all points or local patches equally. To solve this problem, we propose Geometry-Disentangled Attention Network (GDANet). GDANet introduces Geometry-Disentangle Module to dynamically disentangle point clouds into the contour and flat part of 3D objects, respectively denoted by sharp and gentle variation components. Then GDANet exploits Sharp-Gentle Complementary Attention Module that regards the features from sharp and gentle variation components as two holistic representations, and pays different attentions to them while fusing them respectively with original point cloud features. 
In this way, our method captures and refines the holistic and complementary 3D geometric semantics from two distinct disentangled components to supplement the local information. Extensive experiments on 3D object classification and segmentation benchmarks demonstrate that GDANet achieves the state-of-the-arts with fewer parameters. Code is released on https://github.com/mutianxu/GDANet.", "field": [], "task": ["3D Object Classification", "Object Classification"], "method": [], "dataset": ["ShapeNet-Part", "ModelNet40"], "metric": ["Overall Accuracy", "Class Average IoU", "Instance Average IoU"], "title": "Learning Geometry-Disentangled Representation for Complementary Understanding of 3D Object Point Cloud"} {"abstract": "Attention-based learning for fine-grained image recognition remains a\nchallenging task, where most of the existing methods treat each object part in\nisolation, while neglecting the correlations among them. In addition, the\nmulti-stage or multi-scale mechanisms involved make the existing methods less\nefficient and hard to be trained end-to-end. In this paper, we propose a novel\nattention-based convolutional neural network (CNN) which regulates multiple\nobject parts among different input images. Our method first learns multiple\nattention region features of each input image through the one-squeeze\nmulti-excitation (OSME) module, and then apply the multi-attention multi-class\nconstraint (MAMC) in a metric learning framework. For each anchor feature, the\nMAMC functions by pulling same-attention same-class features closer, while\npushing different-attention or different-class features away. Our method can be\neasily trained end-to-end, and is highly efficient which requires only one\ntraining stage. Moreover, we introduce Dogs-in-the-Wild, a comprehensive dog\nspecies dataset that surpasses similar existing datasets by category coverage,\ndata volume and annotation quality. This dataset will be released upon\nacceptance to facilitate the research of fine-grained image recognition.\nExtensive experiments are conducted to show the substantial improvements of our\nmethod on four benchmark datasets.", "field": [], "task": ["Fine-Grained Image Recognition", "Metric Learning"], "method": [], "dataset": ["Stanford Cars"], "metric": ["Accuracy"], "title": "Multi-Attention Multi-Class Constraint for Fine-grained Image Recognition"} {"abstract": "Depth Completion deals with the problem of converting a sparse depth map to a dense one, given the corresponding color image. Convolutional spatial propagation network (CSPN) is one of the state-of-the-art (SoTA) methods of depth completion, which recovers structural details of the scene. In this paper, we propose CSPN++, which further improves its effectiveness and efficiency by learning adaptive convolutional kernel sizes and the number of iterations for the propagation, thus the context and computational resources needed at each pixel could be dynamically assigned upon requests. Specifically, we formulate the learning of the two hyper-parameters as an architecture selection problem where various configurations of kernel sizes and numbers of iterations are first defined, and then a set of soft weighting parameters are trained to either properly assemble or select from the pre-defined configurations at each pixel. 
In our experiments, we find weighted assembling can lead to significant accuracy improvements, which we referred to as \"context-aware CSPN\", while weighted selection, \"resource-aware CSPN\" can reduce the computational resource significantly with similar or better accuracy. Besides, the resource needed for CSPN++ can be adjusted w.r.t. the computational budget automatically. Finally, to avoid the side effects of noise or inaccurate sparse depths, we embed a gated network inside CSPN++, which further improves the performance. We demonstrate the effectiveness of CSPN++on the KITTI depth completion benchmark, where it significantly improves over CSPN and other SoTA methods.", "field": [], "task": ["Depth Completion"], "method": [], "dataset": ["KITTI Depth Completion Validation"], "metric": ["RMSE"], "title": "CSPN++: Learning Context and Resource Aware Convolutional Spatial Propagation Networks for Depth Completion"} {"abstract": "When a large feedforward neural network is trained on a small training set,\nit typically performs poorly on held-out test data. This \"overfitting\" is\ngreatly reduced by randomly omitting half of the feature detectors on each\ntraining case. This prevents complex co-adaptations in which a feature detector\nis only helpful in the context of several other specific feature detectors.\nInstead, each neuron learns to detect a feature that is generally helpful for\nproducing the correct answer given the combinatorially large variety of\ninternal contexts in which it must operate. Random \"dropout\" gives big\nimprovements on many benchmark tasks and sets new records for speech and object\nrecognition.", "field": [], "task": ["Image Classification", "Object Recognition"], "method": [], "dataset": ["CIFAR-10"], "metric": ["Percentage correct"], "title": "Improving neural networks by preventing co-adaptation of feature detectors"} {"abstract": "For human pose estimation in monocular images, joint occlusions and\noverlapping upon human bodies often result in deviated pose predictions. Under\nthese circumstances, biologically implausible pose predictions may be produced.\nIn contrast, human vision is able to predict poses by exploiting geometric\nconstraints of joint inter-connectivity. To address the problem by\nincorporating priors about the structure of human bodies, we propose a novel\nstructure-aware convolutional network to implicitly take such priors into\naccount during training of the deep network. Explicit learning of such\nconstraints is typically challenging. Instead, we design discriminators to\ndistinguish the real poses from the fake ones (such as biologically implausible\nones). If the pose generator (G) generates results that the discriminator fails\nto distinguish from real ones, the network successfully learns the priors.", "field": [], "task": ["Pose Estimation"], "method": [], "dataset": ["MPII Human Pose"], "metric": ["PCKh-0.5"], "title": "Adversarial PoseNet: A Structure-aware Convolutional Network for Human Pose Estimation"} {"abstract": "Estimating crowd count in densely crowded scenes is an extremely challenging\ntask due to non-uniform scale variations. In this paper, we propose a novel\nend-to-end cascaded network of CNNs to jointly learn crowd count classification\nand density map estimation. Classifying crowd count into various groups is\ntantamount to coarsely estimating the total count in the image thereby\nincorporating a high-level prior into the density estimation network. 
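The dropout abstract above describes randomly omitting half of the feature detectors on each training case. A minimal numpy sketch of the (inverted) variant follows: units are zeroed with probability p and the survivors are rescaled so the expected activation matches test time, when no units are dropped.

```python
import numpy as np

def dropout(activations, p=0.5, rng=None, training=True):
    """Randomly omit units with probability p (inverted dropout scaling)."""
    if not training:
        return activations
    rng = rng or np.random.default_rng()
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))
print(dropout(h, p=0.5, rng=rng))   # roughly half the units are zeroed out
```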
This\nenables the layers in the network to learn globally relevant discriminative\nfeatures which aid in estimating highly refined density maps with lower count\nerror. The joint training is performed in an end-to-end fashion. Extensive\nexperiments on highly challenging publicly available datasets show that the\nproposed method achieves lower count error and better quality density maps as\ncompared to the recent state-of-the-art methods.", "field": [], "task": ["Crowd Counting", "Density Estimation", "Multi-Task Learning"], "method": [], "dataset": ["UCF CC 50", "UCF-QNRF", "ShanghaiTech A", "ShanghaiTech B"], "metric": ["MAE", "MSE"], "title": "CNN-based Cascaded Multi-task Learning of High-level Prior and Density Estimation for Crowd Counting"} {"abstract": "Text-based person search aims to retrieve the pedestrian images that best match a given textual description from gallery images. Previous methods utilize the soft-attention mechanism to infer the semantic alignments between image regions and the corresponding words in the sentence. However, these methods may fuse the irrelevant multi-modality features together, which causes a matching redundancy problem. In this work, we propose a novel hierarchical Gumbel attention network for text-based person search via a Gumbel top-k re-parameterization algorithm. Specifically, it adaptively selects the strongly semantically relevant image regions and words/phrases from images and texts for precise alignment and similarity calculation. This hard selection strategy is able to fuse the strongly relevant multi-modality features for alleviating the problem of matching redundancy. Meanwhile, a Gumbel top-k re-parameterization algorithm is designed as a low-variance, unbiased gradient estimator to handle the discreteness problem of the hard attention mechanism in an end-to-end manner. Moreover, a hierarchical adaptive matching strategy is employed by the model at three different granularities, i.e., word-level, phrase-level, and sentence-level, towards fine-grained matching. Extensive experimental results demonstrate the state-of-the-art performance. Compared with the best existing method, we achieve 8.24% Rank-1 and 7.6% mAP relative improvements in the text-to-image retrieval task, and 5.58% Rank-1 and 6.3% mAP relative improvements in the image-to-text retrieval task on the CUHK-PEDES dataset, respectively.", "field": [], "task": ["Image Retrieval", "Image-to-Text Retrieval", "Person Search", "Text based Person Retrieval", "Text-to-Image Retrieval"], "method": [], "dataset": ["CUHK-PEDES"], "metric": ["R@10", "R@1", "R@5"], "title": "Hierarchical Gumbel Attention Network for Text-based Person Search"} {"abstract": "Fine-grained classification is a challenging problem, due to subtle differences among highly-confused categories. Most approaches address this difficulty by learning discriminative representations of individual input images. On the other hand, humans can effectively identify contrastive clues by comparing image pairs. Inspired by this fact, this paper proposes a simple but effective Attentive Pairwise Interaction Network (API-Net), which can progressively recognize a pair of fine-grained images by interaction. Specifically, API-Net first learns a mutual feature vector to capture semantic differences in the input pair. It then compares this mutual vector with individual vectors to generate gates for each input image.
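The hierarchical Gumbel attention model above relies on a Gumbel top-k re-parameterization to make a hard selection of the most relevant regions and words. The numpy sketch below shows only the sampling step: perturb the attention logits with Gumbel noise and keep the top-k indices; the straight-through gradient estimator and the word/phrase/sentence hierarchy are omitted.

```python
import numpy as np

def gumbel_topk(logits, k, rng=None):
    """Sample k items without replacement via the Gumbel top-k trick."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(1e-9, 1.0, size=logits.shape)
    gumbel = -np.log(-np.log(u))                 # Gumbel(0, 1) noise
    perturbed = logits + gumbel
    return np.argsort(-perturbed)[:k]            # indices of the k selected items

rng = np.random.default_rng(0)
region_scores = np.array([2.0, 0.1, 1.5, -0.3, 0.8])
print(gumbel_topk(region_scores, k=2, rng=rng))  # the two selected region indices
```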
These distinct gate vectors inherit mutual context on semantic differences, which allow API-Net to attentively capture contrastive clues by pairwise interaction between two images. Additionally, we train API-Net in an end-to-end manner with a score ranking regularization, which can further generalize API-Net by taking feature priorities into account. We conduct extensive experiments on five popular benchmarks in fine-grained classification. API-Net outperforms the recent SOTA methods, i.e., CUB-200-2011 (90.0%), Aircraft(93.9%), Stanford Cars (95.3%), Stanford Dogs (90.3%), and NABirds (88.1%).", "field": [], "task": ["Fine-Grained Image Classification"], "method": [], "dataset": ["FGVC Aircraft", " CUB-200-2011", "Stanford Dogs", "Stanford Cars", "NABirds"], "metric": ["Accuracy"], "title": "Learning Attentive Pairwise Interaction for Fine-Grained Classification"} {"abstract": "Face detection and alignment in unconstrained environment are challenging due\nto various poses, illuminations and occlusions. Recent studies show that deep\nlearning approaches can achieve impressive performance on these two tasks. In\nthis paper, we propose a deep cascaded multi-task framework which exploits the\ninherent correlation between them to boost up their performance. In particular,\nour framework adopts a cascaded structure with three stages of carefully\ndesigned deep convolutional networks that predict face and landmark location in\na coarse-to-fine manner. In addition, in the learning process, we propose a new\nonline hard sample mining strategy that can improve the performance\nautomatically without manual sample selection. Our method achieves superior\naccuracy over the state-of-the-art techniques on the challenging FDDB and WIDER\nFACE benchmark for face detection, and AFLW benchmark for face alignment, while\nkeeps real time performance.", "field": [], "task": ["Face Alignment", "Face Detection"], "method": [], "dataset": ["WIDER Face (Hard)", "WIDER Face (Medium)", "WIDER Face (Easy)"], "metric": ["AP"], "title": "Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks"} {"abstract": "Timely assessment of compound toxicity is one of the biggest challenges\nfacing the pharmaceutical industry today. A significant proportion of compounds\nidentified as potential leads are ultimately discarded due to the toxicity they\ninduce. In this paper, we propose a novel machine learning approach for the\nprediction of molecular activity on ToxCast targets. We combine extreme\ngradient boosting with fully-connected and graph-convolutional neural network\narchitectures trained on QSAR physical molecular property descriptors, PubChem\nmolecular fingerprints, and SMILES sequences. Our ensemble predictor leverages\nthe strengths of each individual technique, significantly outperforming\nexisting state-of-the art models on the ToxCast and Tox21 toxicity-prediction\ndatasets. We provide free access to molecule toxicity prediction using our\nmodel at http://www.owkin.com/toxicblend.", "field": [], "task": ["Drug Discovery"], "method": [], "dataset": ["Tox21"], "metric": ["AUC"], "title": "ToxicBlend: Virtual Screening of Toxic Compounds with Ensemble Predictors"} {"abstract": "A reliable and accurate 3D tracking framework is essential for predicting future locations of surrounding objects and planning the observer's actions in numerous applications such as autonomous driving. 
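The face detection paper above trains its cascade with an online hard sample mining strategy. A hedged PyTorch sketch of the general idea follows: compute per-sample losses for a minibatch and backpropagate only through the highest-loss fraction; the 0.7 keep ratio and the binary face/non-face setup are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def ohem_loss(logits, targets, keep_ratio=0.7):
    """Backpropagate only through the hardest (highest-loss) samples in the batch."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    n_keep = max(1, int(keep_ratio * per_sample.numel()))
    hard, _ = torch.topk(per_sample, n_keep)
    return hard.mean()

logits = torch.randn(32, 2, requires_grad=True)   # e.g. face / non-face scores
targets = torch.randint(0, 2, (32,))
loss = ohem_loss(logits, targets)
loss.backward()
print(float(loss))
```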
We propose a framework that can effectively associate moving objects over time and estimate their full 3D bounding box information from a sequence of 2D images captured on a moving platform. The object association leverages quasi-dense similarity learning to identify objects in various poses and viewpoints with appearance cues only. After initial 2D association, we further utilize 3D bounding boxes depth-ordering heuristics for robust instance association and motion-based 3D trajectory prediction for re-identification of occluded vehicles. In the end, an LSTM-based object velocity learning module aggregates the long-term trajectory information for more accurate motion extrapolation. Experiments on our proposed simulation data and real-world benchmarks, including KITTI, nuScenes, and Waymo datasets, show that our tracking framework offers robust object association and tracking on urban-driving scenarios. On the Waymo Open benchmark, we establish the first camera-only baseline in the 3D tracking and 3D detection challenges. Our quasi-dense 3D tracking pipeline achieves impressive improvements on the nuScenes 3D tracking benchmark with near five times tracking accuracy of the best vision-only submission among all published methods. Our code, data and trained models are available at https://github.com/SysCV/qd-3dt.", "field": [], "task": ["3D Object Tracking", "Autonomous Driving", "Object Tracking", "Trajectory Prediction"], "method": [], "dataset": ["KITTI Tracking test"], "metric": ["MOTA"], "title": "Monocular Quasi-Dense 3D Object Tracking"} {"abstract": "Egocentric video recognition is a natural testbed for diverse interaction reasoning. Due to the large action vocabulary in egocentric video datasets, recent studies usually utilize a two-branch structure for action recognition, ie, one branch for verb classification and the other branch for noun classification. However, correlation studies between the verb and the noun branches have been largely ignored. Besides, the two branches fail to exploit local features due to the absence of a position-aware attention mechanism. In this paper, we propose a novel Symbiotic Attention framework leveraging Privileged information (SAP) for egocentric video recognition. Finer position-aware object detection features can facilitate the understanding of actor's interaction with the object. We introduce these features in action recognition and regard them as privileged information. Our framework enables mutual communication among the verb branch, the noun branch, and the privileged information. This communication process not only injects local details into global features but also exploits implicit guidance about the spatio-temporal position of an on-going action. We introduce novel symbiotic attention (SA) to enable effective communication. It first normalizes the detection guided features on one branch to underline the action-relevant information from the other branch. SA adaptively enhances the interactions among the three sources. To further catalyze this communication, spatial relations are uncovered for the selection of most action-relevant information. It identifies the most valuable and discriminative feature for classification. We validate the effectiveness of our SAP quantitatively and qualitatively. 
Notably, it achieves the state-of-the-art on two large-scale egocentric video datasets.", "field": [], "task": ["Action Recognition", "Egocentric Activity Recognition", "Object Detection", "Video Recognition"], "method": [], "dataset": ["EGTEA"], "metric": ["Mean class accuracy", "Average Accuracy"], "title": "Symbiotic Attention with Privileged Information for Egocentric Action Recognition"} {"abstract": "Generating novel, yet realistic, images of persons is a challenging task due\nto the complex interplay between the different image factors, such as the\nforeground, background and pose information. In this work, we aim at generating\nsuch images based on a novel, two-stage reconstruction pipeline that learns a\ndisentangled representation of the aforementioned image factors and generates\nnovel person images at the same time. First, a multi-branched reconstruction\nnetwork is proposed to disentangle and encode the three factors into embedding\nfeatures, which are then combined to re-compose the input image itself. Second,\nthree corresponding mapping functions are learned in an adversarial manner in\norder to map Gaussian noise to the learned embedding feature space, for each\nfactor respectively. Using the proposed framework, we can manipulate the\nforeground, background and pose of the input image, and also sample new\nembedding features to generate such targeted manipulations, that provide more\ncontrol over the generation process. Experiments on Market-1501 and Deepfashion\ndatasets show that our model does not only generate realistic person images\nwith new foregrounds, backgrounds and poses, but also manipulates the generated\nfactors and interpolates the in-between states. Another set of experiments on\nMarket-1501 shows that our model can also be beneficial for the person\nre-identification task.", "field": [], "task": ["Gesture-to-Gesture Translation", "Image Generation", "Person Re-Identification", "Pose Transfer"], "method": [], "dataset": ["Senz3D", "NTU Hand Digit", "Deep-Fashion"], "metric": ["SSIM", "PSNR", "AMT", "IS"], "title": "Disentangled Person Image Generation"} {"abstract": "Gradient-based meta-learning methods leverage gradient descent to learn the\ncommonalities among various tasks. While previous such methods have been\nsuccessful in meta-learning tasks, they resort to simple gradient descent\nduring meta-testing. Our primary contribution is the {\\em MT-net}, which\nenables the meta-learner to learn on each layer's activation space a subspace\nthat the task-specific learner performs gradient descent on. Additionally, a\ntask-specific learner of an {\\em MT-net} performs gradient descent with respect\nto a meta-learned distance metric, which warps the activation space to be more\nsensitive to task identity. We demonstrate that the dimension of this learned\nsubspace reflects the complexity of the task-specific learner's adaptation\ntask, and also that our model is less sensitive to the choice of initial\nlearning rates than previous gradient-based meta-learning methods. 
Our method\nachieves state-of-the-art or comparable performance on few-shot classification\nand regression tasks.", "field": [], "task": ["Few-Shot Image Classification", "Meta-Learning", "Regression"], "method": [], "dataset": ["OMNIGLOT - 1-Shot, 5-way", "Mini-Imagenet 5-way (1-shot)", "OMNIGLOT - 1-Shot, 20-way"], "metric": ["Accuracy"], "title": "Gradient-Based Meta-Learning with Learned Layerwise Metric and Subspace"} {"abstract": "Many classic methods have shown non-local self-similarity in natural images\nto be an effective prior for image restoration. However, it remains unclear and\nchallenging to make use of this intrinsic property via deep networks. In this\npaper, we propose a non-local recurrent network (NLRN) as the first attempt to\nincorporate non-local operations into a recurrent neural network (RNN) for\nimage restoration. The main contributions of this work are: (1) Unlike existing\nmethods that measure self-similarity in an isolated manner, the proposed\nnon-local module can be flexibly integrated into existing deep networks for\nend-to-end training to capture deep feature correlation between each location\nand its neighborhood. (2) We fully employ the RNN structure for its parameter\nefficiency and allow deep feature correlation to be propagated along adjacent\nrecurrent states. This new design boosts robustness against inaccurate\ncorrelation estimation due to severely degraded images. (3) We show that it is\nessential to maintain a confined neighborhood for computing deep feature\ncorrelation given degraded images. This is in contrast to existing practice\nthat deploys the whole image. Extensive experiments on both image denoising and\nsuper-resolution tasks are conducted. Thanks to the recurrent non-local\noperations and correlation propagation, the proposed NLRN achieves superior\nresults to state-of-the-art methods with much fewer parameters.", "field": [], "task": ["Denoising", "Image Denoising", "Image Restoration", "Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["Urban100 sigma25", "Darmstadt Noise Dataset", "Set14 - 4x upscaling", "BSD68 sigma50", "Set12 sigma50", "Set12 sigma15", "Urban100 sigma50", "BSD68 sigma25", "BSD100 - 4x upscaling", "BSD68 sigma15", "Urban100 sigma15", "Set12 sigma30", "Set5 - 4x upscaling", "BSD200 sigma50", "BSD200 sigma70", "BSD200 sigma30", "Urban100 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Non-Local Recurrent Network for Image Restoration"} {"abstract": "Many deep learning architectures have been proposed to model the\ncompositionality in text sequences, requiring a substantial number of\nparameters and expensive computations. However, there has not been a rigorous\nevaluation regarding the added value of sophisticated compositional functions.\nIn this paper, we conduct a point-by-point comparative study between Simple\nWord-Embedding-based Models (SWEMs), consisting of parameter-free pooling\noperations, relative to word-embedding-based RNN/CNN models. Surprisingly,\nSWEMs exhibit comparable or even superior performance in the majority of cases\nconsidered. Based upon this understanding, we propose two additional pooling\nstrategies over learned word embeddings: (i) a max-pooling operation for\nimproved interpretability; and (ii) a hierarchical pooling operation, which\npreserves spatial (n-gram) information within text sequences. 
We present\nexperiments on 17 datasets encompassing three tasks: (i) (long) document\nclassification; (ii) text sequence matching; and (iii) short text tasks,\nincluding classification and tagging. The source code and datasets can be\nobtained from https:// github.com/dinghanshen/SWEM.", "field": [], "task": ["Document Classification", "Named Entity Recognition", "Sentiment Analysis", "Subjectivity Analysis", "Text Classification", "Word Embeddings"], "method": [], "dataset": ["MultiNLI", "Yelp Fine-grained classification", "Yelp Binary classification", "Yahoo! Answers", "DBpedia", "SST-2 Binary classification", "MSRP", "SNLI", "WikiQA", "CoNLL 2000", "MR", "AG News", "CoNLL 2003 (English)", "SST-5 Fine-grained classification", "TREC-6", "SUBJ", "Quora Question Pairs"], "metric": ["% Test Accuracy", "Matched", "MAP", "Error", "MRR", "F1", "Accuracy", "Mismatched"], "title": "Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms"} {"abstract": "We present a network architecture for processing point clouds that directly\noperates on a collection of points represented as a sparse set of samples in a\nhigh-dimensional lattice. Naively applying convolutions on this lattice scales\npoorly, both in terms of memory and computational cost, as the size of the\nlattice increases. Instead, our network uses sparse bilateral convolutional\nlayers as building blocks. These layers maintain efficiency by using indexing\nstructures to apply convolutions only on occupied parts of the lattice, and\nallow flexible specifications of the lattice structure enabling hierarchical\nand spatially-aware feature learning, as well as joint 2D-3D reasoning. Both\npoint-based and image-based representations can be easily incorporated in a\nnetwork with such layers and the resulting model can be trained in an\nend-to-end manner. We present results on 3D segmentation tasks where our\napproach outperforms existing state-of-the-art techniques.", "field": [], "task": ["3D Part Segmentation", "3D Semantic Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["ShapeNet-Part", "SemanticKITTI", "ScanNet"], "metric": ["3DIoU", "Class Average IoU", "Instance Average IoU", "mIoU"], "title": "SPLATNet: Sparse Lattice Networks for Point Cloud Processing"} {"abstract": "The ability to consolidate information of different types is at the core of\nintelligence, and has tremendous practical value in allowing learning for one\ntask to benefit from generalizations learned for others. In this paper we\ntackle the challenging task of improving semantic parsing performance, taking\nUCCA parsing as a test case, and AMR, SDP and Universal Dependencies (UD)\nparsing as auxiliary tasks. We experiment on three languages, using a uniform\ntransition-based system and learning architecture for all parsing tasks.\nDespite notable conceptual, formal and domain differences, we show that\nmultitask learning significantly improves UCCA parsing in both in-domain and\nout-of-domain settings.", "field": [], "task": ["Semantic Parsing", "UCCA Parsing"], "method": [], "dataset": ["SemEval 2019 Task 1"], "metric": ["English-20K (open) F1", "English-Wiki (open) F1"], "title": "Multitask Parsing Across Semantic Representations"} {"abstract": "Interacting with relational databases through natural language helps users of\nany background easily query and analyze a vast amount of data. This requires a\nsystem that understands users' questions and converts them to SQL queries\nautomatically. 
In this paper we present a novel approach, TypeSQL, which views\nthis problem as a slot filling task. Additionally, TypeSQL utilizes type\ninformation to better understand rare entities and numbers in natural language\nquestions. We test this idea on the WikiSQL dataset and outperform the prior\nstate-of-the-art by 5.5% in much less time. We also show that accessing the\ncontent of databases can significantly improve the performance when users'\nqueries are not well-formed. TypeSQL gets 82.6% accuracy, a 17.5% absolute\nimprovement compared to the previous content-sensitive model.", "field": [], "task": ["Slot Filling", "Text-To-Sql"], "method": [], "dataset": ["WikiSQL"], "metric": ["Execution Accuracy"], "title": "TypeSQL: Knowledge-based Type-Aware Neural Text-to-SQL Generation"} {"abstract": "We propose a self-supervised approach for learning representations and\nrobotic behaviors entirely from unlabeled videos recorded from multiple\nviewpoints, and study how this representation can be used in two robotic\nimitation settings: imitating object interactions from videos of humans, and\nimitating human poses. Imitation of human behavior requires a\nviewpoint-invariant representation that captures the relationships between\nend-effectors (hands or robot grippers) and the environment, object attributes,\nand body pose. We train our representations using a metric learning loss, where\nmultiple simultaneous viewpoints of the same observation are attracted in the\nembedding space, while being repelled from temporal neighbors which are often\nvisually similar but functionally different. In other words, the model\nsimultaneously learns to recognize what is common between different-looking\nimages, and what is different between similar-looking images. This signal\ncauses our model to discover attributes that do not change across viewpoint,\nbut do change across time, while ignoring nuisance variables such as\nocclusions, motion blur, lighting and background. We demonstrate that this\nrepresentation can be used by a robot to directly mimic human poses without an\nexplicit correspondence, and that it can be used as a reward function within a\nreinforcement learning algorithm. While representations are learned from an\nunlabeled collection of task-related videos, robot behaviors such as pouring\nare learned by watching a single 3rd-person demonstration by a human. Reward\nfunctions obtained by following the human demonstrations under the learned\nrepresentation enable efficient reinforcement learning that is practical for\nreal-world robotic systems. Video results, open-source code and dataset are\navailable at https://sermanet.github.io/imitate", "field": [], "task": ["Metric Learning", "Self-Supervised Learning", "Video Alignment"], "method": [], "dataset": ["UPenn Action"], "metric": ["Kendall's Tau"], "title": "Time-Contrastive Networks: Self-Supervised Learning from Video"} {"abstract": "This work proposed a novel learning objective to train a deep neural network\nto perform end-to-end image pixel clustering. We applied the approach to\ninstance segmentation, which is at the intersection of image semantic\nsegmentation and object detection. We utilize the most fundamental property of\ninstance labeling -- the pairwise relationship between pixels -- as the\nsupervision to formulate the learning objective, then apply it to train a fully\nconvolutional network (FCN) for learning to perform pixel-wise clustering. The\nresulting clusters can be used as the instance labeling directly. 
To support\nlabeling of an unlimited number of instance, we further formulate ideas from\ngraph coloring theory into the proposed learning objective. The evaluation on\nthe Cityscapes dataset demonstrates strong performance and therefore proof of\nthe concept. Moreover, our approach won the second place in the lane detection\ncompetition of 2017 CVPR Autonomous Driving Challenge, and was the top\nperformer without using external data.", "field": [], "task": ["Autonomous Driving", "Instance Segmentation", "Lane Detection", "Object Detection", "Semantic Segmentation"], "method": [], "dataset": ["TuSimple"], "metric": ["F1 score", "Accuracy"], "title": "Learning to Cluster for Proposal-Free Instance Segmentation"} {"abstract": "Multi-person articulated pose tracking in unconstrained videos is an\nimportant while challenging problem. In this paper, going along the road of\ntop-down approaches, we propose a decent and efficient pose tracker based on\npose flows. First, we design an online optimization framework to build the\nassociation of cross-frame poses and form pose flows (PF-Builder). Second, a\nnovel pose flow non-maximum suppression (PF-NMS) is designed to robustly reduce\nredundant pose flows and re-link temporal disjoint ones. Extensive experiments\nshow that our method significantly outperforms best-reported results on two\nstandard Pose Tracking datasets by 13 mAP 25 MOTA and 6 mAP 3 MOTA\nrespectively. Moreover, in the case of working on detected poses in individual\nframes, the extra computation of pose tracker is very minor, guaranteeing\nonline 10FPS tracking. Our source codes are made publicly\navailable(https://github.com/YuliangXiu/PoseFlow).", "field": [], "task": ["Pose Tracking"], "method": [], "dataset": ["COCO test-challenge", "PoseTrack2017"], "metric": ["ARM", "MOTA", "AR"], "title": "Pose Flow: Efficient Online Pose Tracking"} {"abstract": "Domain adaptation is critical for success in new, unseen environments.\nAdversarial adaptation models applied in feature spaces discover domain\ninvariant representations, but are difficult to visualize and sometimes fail to\ncapture pixel-level and low-level domain shifts. Recent work has shown that\ngenerative adversarial networks combined with cycle-consistency constraints are\nsurprisingly effective at mapping images between domains, even without the use\nof aligned image pairs. We propose a novel discriminatively-trained\nCycle-Consistent Adversarial Domain Adaptation model. CyCADA adapts\nrepresentations at both the pixel-level and feature-level, enforces\ncycle-consistency while leveraging a task loss, and does not require aligned\npairs. Our model can be applied in a variety of visual recognition and\nprediction settings. 
We show new state-of-the-art results across multiple\nadaptation tasks, including digit classification and semantic segmentation of\nroad scenes demonstrating transfer from synthetic to real world domains.", "field": [], "task": ["Domain Adaptation", "Image-to-Image Translation", "Semantic Segmentation", "Synthetic-to-Real Translation", "Unsupervised Image-To-Image Translation"], "method": [], "dataset": ["GTAV-to-Cityscapes Labels", "SVNH-to-MNIST", "SYNTHIA Fall-to-Winter", "SVHN-to-MNIST"], "metric": ["Per-pixel Accuracy", "fwIOU", "mIoU", "Classification Accuracy", "Accuracy"], "title": "CyCADA: Cycle-Consistent Adversarial Domain Adaptation"} {"abstract": "We present a new method for synthesizing high-resolution photo-realistic\nimages from semantic label maps using conditional generative adversarial\nnetworks (conditional GANs). Conditional GANs have enabled a variety of\napplications, but the results are often limited to low-resolution and still far\nfrom realistic. In this work, we generate 2048x1024 visually appealing results\nwith a novel adversarial loss, as well as new multi-scale generator and\ndiscriminator architectures. Furthermore, we extend our framework to\ninteractive visual manipulation with two additional features. First, we\nincorporate object instance segmentation information, which enables object\nmanipulations such as removing/adding objects and changing the object category.\nSecond, we propose a method to generate diverse results given the same input,\nallowing users to edit the object appearance interactively. Human opinion\nstudies demonstrate that our method significantly outperforms existing methods,\nadvancing both the quality and the resolution of deep image synthesis and\nediting.", "field": [], "task": ["Conditional Image Generation", "Fundus to Angiography Generation", "Image Generation", "Image-to-Image Translation", "Instance Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["Fundus Fluorescein Angiogram Photographs & Colour Fundus Images of Diabetic Patients", "ADE20K Labels-to-Photos", "COCO-Stuff Labels-to-Photos", "ADE20K-Outdoor Labels-to-Photos", "Cityscapes Labels-to-Photo"], "metric": ["FID", "Per-pixel Accuracy", "Kernel Inception Distance", "mIoU", "Accuracy"], "title": "High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs"} {"abstract": "We propose MoNoise: a normalization model focused on generalizability and\nefficiency, it aims at being easily reusable and adaptable. Normalization is\nthe task of translating texts from a non- canonical domain to a more canonical\ndomain, in our case: from social media data to standard language. Our proposed\nmodel is based on a modular candidate generation in which each module is\nresponsible for a different type of normalization action. The most important\ngeneration modules are a spelling correction system and a word embeddings\nmodule. Depending on the definition of the normalization task, a static lookup\nlist can be crucial for performance. We train a random forest classifier to\nrank the candidates, which generalizes well to all different types of\nnormaliza- tion actions. Most features for the ranking originate from the\ngeneration modules; besides these features, N-gram features prove to be an\nimportant source of information. 
We show that MoNoise beats the\nstate-of-the-art on different normalization benchmarks for English and Dutch,\nwhich all define the task of normalization slightly different.", "field": [], "task": ["Lexical Normalization", "Spelling Correction", "Word Embeddings"], "method": [], "dataset": ["LexNorm"], "metric": ["Accuracy"], "title": "MoNoise: Modeling Noise Using a Modular Normalization System"} {"abstract": "We propose a novel deep learning model for joint document-level entity\ndisambiguation, which leverages learned neural representations. Key components\nare entity embeddings, a neural attention mechanism over local context windows,\nand a differentiable joint inference stage for disambiguation. Our approach\nthereby combines benefits of deep learning with more traditional approaches\nsuch as graphical models and probabilistic mention-entity maps. Extensive\nexperiments show that we are able to obtain competitive or state-of-the-art\naccuracy at moderate computational costs.", "field": [], "task": ["Entity Disambiguation"], "method": [], "dataset": ["AQUAINT", "WNED-WIKI", "MSNBC", "WNED-CWEB", "ACE2004", "AIDA-CoNLL"], "metric": ["Micro-F1", "In-KB Accuracy"], "title": "Deep Joint Entity Disambiguation with Local Neural Attention"} {"abstract": "Semantic segmentation of 3D point clouds is a challenging problem with\nnumerous real-world applications. While deep learning has revolutionized the\nfield of image semantic segmentation, its impact on point cloud data has been\nlimited so far. Recent attempts, based on 3D deep learning approaches\n(3D-CNNs), have achieved below-expected results. Such methods require\nvoxelizations of the underlying point cloud data, leading to decreased spatial\nresolution and increased memory consumption. Additionally, 3D-CNNs greatly\nsuffer from the limited availability of annotated datasets.\n In this paper, we propose an alternative framework that avoids the\nlimitations of 3D-CNNs. Instead of directly solving the problem in 3D, we first\nproject the point cloud onto a set of synthetic 2D-images. These images are\nthen used as input to a 2D-CNN, designed for semantic segmentation. Finally,\nthe obtained prediction scores are re-projected to the point cloud to obtain\nthe segmentation results. We further investigate the impact of multiple\nmodalities, such as color, depth and surface normals, in a multi-stream network\narchitecture. Experiments are performed on the recent Semantic3D dataset. Our\napproach sets a new state-of-the-art by achieving a relative gain of 7.9 %,\ncompared to the previous best approach.", "field": [], "task": ["Semantic Segmentation"], "method": [], "dataset": ["Semantic3D"], "metric": ["mIoU"], "title": "Deep Projective 3D Semantic Segmentation"} {"abstract": "Mechanical devices such as engines, vehicles, aircrafts, etc., are typically\ninstrumented with numerous sensors to capture the behavior and health of the\nmachine. However, there are often external factors or variables which are not\ncaptured by sensors leading to time-series which are inherently unpredictable.\nFor instance, manual controls and/or unmonitored environmental conditions or\nload may lead to inherently unpredictable time-series. Detecting anomalies in\nsuch scenarios becomes challenging using standard approaches based on\nmathematical models that rely on stationarity, or prediction models that\nutilize prediction errors to detect anomalies. 
We propose a Long Short Term\nMemory Networks based Encoder-Decoder scheme for Anomaly Detection (EncDec-AD)\nthat learns to reconstruct 'normal' time-series behavior, and thereafter uses\nreconstruction error to detect anomalies. We experiment with three publicly\navailable quasi predictable time-series datasets: power demand, space shuttle,\nand ECG, and two real-world engine datasets with both predictive and\nunpredictable behavior. We show that EncDec-AD is robust and can detect\nanomalies from predictable, unpredictable, periodic, aperiodic, and\nquasi-periodic time-series. Further, we show that EncDec-AD is able to detect\nanomalies from short time-series (length as small as 30) as well as long\ntime-series (length as large as 500).", "field": [], "task": ["Anomaly Detection", "Outlier Detection", "Time Series", "Time Series Classification"], "method": [], "dataset": ["ECG5000", "Physionet 2017 Atrial Fibrillation"], "metric": ["AUC", "Accuracy"], "title": "LSTM-based Encoder-Decoder for Multi-sensor Anomaly Detection"} {"abstract": "This article offers an empirical exploration on the use of character-level\nconvolutional networks (ConvNets) for text classification. We constructed\nseveral large-scale datasets to show that character-level convolutional\nnetworks could achieve state-of-the-art or competitive results. Comparisons are\noffered against traditional models such as bag of words, n-grams and their\nTFIDF variants, and deep learning models such as word-based ConvNets and\nrecurrent neural networks.", "field": [], "task": ["Sentiment Analysis", "Text Classification"], "method": [], "dataset": ["Yelp Fine-grained classification", "Yelp Binary classification", "AG News", "DBpedia"], "metric": ["Error"], "title": "Character-level Convolutional Networks for Text Classification"} {"abstract": "In this work, we present a novel neural network based architecture for\ninducing compositional crosslingual word representations. Unlike previously\nproposed methods, our method fulfills the following three criteria; it\nconstrains the word-level representations to be compositional, it is capable of\nleveraging both bilingual and monolingual data, and it is scalable to large\nvocabularies and large quantities of data. The key component of our approach is\nwhat we refer to as a monolingual inclusion criterion, that exploits the\nobservation that phrases are more closely semantically related to their\nsub-phrases than to other randomly sampled phrases. We evaluate our method on a\nwell-established crosslingual document classification task and achieve results\nthat are either comparable, or greatly improve upon previous state-of-the-art\nmethods. Concretely, our method reaches a level of 92.7% and 84.4% accuracy for\nthe English to German and German to English sub-tasks respectively. The former\nadvances the state of the art by 0.9% points of accuracy, the latter is an\nabsolute improvement upon the previous state of the art by 7.7% points of\naccuracy and an improvement of 33.0% in error reduction.", "field": [], "task": ["Document Classification"], "method": [], "dataset": ["Reuters RCV1/RCV2 English-to-German", "Reuters RCV1/RCV2 German-to-English"], "metric": ["Accuracy"], "title": "Leveraging Monolingual Data for Crosslingual Compositional Word Representations"} {"abstract": "Online Multi-Object Tracking (MOT) has wide applications in time-critical video analysis scenarios, such as robot navigation and autonomous driving. 
In tracking-by-detection, a major challenge of online MOT is how to robustly associate noisy object detections on a new video frame with previously tracked objects. In this work, we formulate the online MOT problem as decision making in Markov Decision Processes (MDPs), where the lifetime of an object is modeled with a MDP. Learning a similarity function for data association is equivalent to learning a policy for the MDP, and the policy learning is approached in a reinforcement learning fashion which benefits from both advantages of offline-learning and online-learning for data association. Moreover, our framework can naturally handle the birth/death and appearance/disappearance of targets by treating them as state transitions in the MDP while leveraging existing online single object tracking methods. We conduct experiments on the MOT Benchmark to verify the effectiveness of our method.", "field": [], "task": ["Autonomous Driving", "Decision Making", "Multi-Object Tracking", "Object Tracking", "Online Multi-Object Tracking", "Robot Navigation"], "method": [], "dataset": ["KITTI Tracking test"], "metric": ["MOTA"], "title": "Learning to Track: Online Multi-Object Tracking by Decision Making"} {"abstract": "Distributed representations of meaning are a natural way to encode covariance\nrelationships between words and phrases in NLP. By overcoming data sparsity\nproblems, as well as providing information about semantic relatedness which is\nnot available in discrete representations, distributed representations have\nproven useful in many NLP tasks. Recent work has shown how compositional\nsemantic representations can successfully be applied to a number of monolingual\napplications such as sentiment analysis. At the same time, there has been some\ninitial success in work on learning shared word-level representations across\nlanguages. We combine these two approaches by proposing a method for learning\ndistributed representations in a multilingual setup. Our model learns to assign\nsimilar embeddings to aligned sentences and dissimilar ones to sentence which\nare not aligned while not requiring word alignments. We show that our\nrepresentations are semantically informative and apply them to a cross-lingual\ndocument classification task where we outperform the previous state of the art.\nFurther, by employing parallel corpora of multiple language pairs we find that\nour model learns representations that capture semantic relationships across\nlanguages for which no parallel data was used.", "field": [], "task": ["Cross-Lingual Document Classification", "Document Classification", "Sentiment Analysis", "Word Alignment"], "method": [], "dataset": ["Reuters RCV1/RCV2 English-to-German", "Reuters RCV1/RCV2 German-to-English"], "metric": ["Accuracy"], "title": "Multilingual Distributed Representations without Word Alignment"} {"abstract": "Traditional methods of computer vision and machine learning cannot match\nhuman performance on tasks such as the recognition of handwritten digits or\ntraffic signs. Our biologically plausible deep artificial neural network\narchitectures can. Small (often minimal) receptive fields of convolutional\nwinner-take-all neurons yield large network depth, resulting in roughly as many\nsparsely connected neural layers as found in mammals between retina and visual\ncortex. Only winner neurons are trained. Several deep neural columns become\nexperts on inputs preprocessed in different ways; their predictions are\naveraged. Graphics cards allow for fast training. 
On the very competitive MNIST\nhandwriting benchmark, our method is the first to achieve near-human\nperformance. On a traffic sign recognition benchmark it outperforms humans by a\nfactor of two. We also improve the state-of-the-art on a plethora of common\nimage classification benchmarks.", "field": [], "task": ["Image Classification", "Traffic Sign Recognition"], "method": [], "dataset": ["GTSRB", "MNIST", "CIFAR-10"], "metric": ["Percentage error", "Percentage correct", "Accuracy"], "title": "Multi-column Deep Neural Networks for Image Classification"} {"abstract": "Deep learning methods have started to dominate the research progress of\nvideo-based person re-identification (re-id). However, existing methods mostly\nconsider supervised learning, which requires exhaustive manual efforts for\nlabelling cross-view pairwise data. Therefore, they severely lack scalability\nand practicality in real-world video surveillance applications. In this work,\nto address the video person re-id task, we formulate a novel Deep Association\nLearning (DAL) scheme, the first end-to-end deep learning method using none of\nthe identity labels in model initialisation and training. DAL learns a deep\nre-id matching model by jointly optimising two margin-based association losses\nin an end-to-end manner, which effectively constrains the association of each\nframe to the best-matched intra-camera representation and cross-camera\nrepresentation. Existing standard CNNs can be readily employed within our DAL\nscheme. Experiment results demonstrate that our proposed DAL significantly\noutperforms current state-of-the-art unsupervised video person re-id methods on\nthree benchmarks: PRID 2011, iLIDS-VID and MARS.", "field": [], "task": ["Person Re-Identification", "Unsupervised Person Re-Identification", "Unsupervised Representation Learning", "Video-Based Person Re-Identification"], "method": [], "dataset": ["PRID2011"], "metric": ["Rank-1", "Rank-20", "Rank-5"], "title": "Deep Association Learning for Unsupervised Video Person Re-identification"} {"abstract": "Background: Finding biomedical named entities is one of the most essential tasks in biomedical text mining. Recently, deep learning-based approaches have been applied to biomedical named entity recognition (BioNER) and showed promising results. However, as deep learning approaches need an abundant amount of training data, a lack of data can hinder performance. BioNER datasets are scarce resources and each dataset covers only a small subset of entity types. Furthermore, many bio entities are polysemous, which is one of the major obstacles in named entity recognition. Results: To address the lack of data and the entity type misclassification problem, we propose CollaboNet which utilizes a combination of multiple NER models. In CollaboNet, models trained on a different dataset are connected to each other so that a target model obtains information from other collaborator models to reduce false positives. Every model is an expert on their target entity type and takes turns serving as a target and a collaborator model during training time. The experimental results show that CollaboNet can be used to greatly reduce the number of false positives and misclassified entities including polysemous words. CollaboNet achieved state-of-the-art performance in terms of precision, recall and F1 score. Conclusions: We demonstrated the benefits of combining multiple models for BioNER. 
Our model has successfully reduced the number of misclassified entities and improved the performance by leveraging multiple datasets annotated for different entity types. Given the state-of-the-art performance of our model, we believe that CollaboNet can improve the accuracy of downstream biomedical text mining applications such as bio-entity relation extraction.", "field": [], "task": ["Named Entity Recognition", "Relation Extraction"], "method": [], "dataset": ["BC5CDR"], "metric": ["F1"], "title": "CollaboNet: collaboration of deep neural networks for biomedical named entity recognition"} {"abstract": "Directed graphs have been widely used in Community Question Answering\nservices (CQAs) to model asymmetric relationships among different types of\nnodes in CQA graphs, e.g., question, answer, user. Asymmetric transitivity is\nan essential property of directed graphs, since it can play an important role\nin downstream graph inference and analysis. Question difficulty and user\nexpertise follow the characteristic of asymmetric transitivity. Maintaining\nsuch properties, while reducing the graph to a lower dimensional vector\nembedding space, has been the focus of much recent research. In this paper, we\ntackle the challenge of directed graph embedding with asymmetric transitivity\npreservation and then leverage the proposed embedding method to solve a\nfundamental task in CQAs: how to appropriately route and assign newly posted\nquestions to users with the suitable expertise and interest in CQAs. The\ntechnique incorporates graph hierarchy and reachability information naturally\nby relying on a non-linear transformation that operates on the core\nreachability and implicit hierarchy within such graphs. Subsequently, the\nmethodology levers a factorization-based approach to generate two embedding\nvectors for each node within the graph, to capture the asymmetric transitivity.\nExtensive experiments show that our framework consistently and significantly\noutperforms the state-of-the-art baselines on two diverse real-world tasks:\nlink prediction, and question difficulty estimation and expert finding in\nonline forums like Stack Exchange. Particularly, our framework can support\ninductive embedding learning for newly posted questions (unseen nodes during\ntraining), and therefore can properly route and assign these kinds of questions\nto experts in CQAs.", "field": [], "task": ["Community Question Answering", "Graph Embedding", "Link Prediction", "Question Answering"], "method": [], "dataset": ["Gnutella", "Cit-HepPH", "Wiki-Vote"], "metric": ["AUC"], "title": "ATP: Directed Graph Embedding with Asymmetric Transitivity Preservation"} {"abstract": "Machine translation is highly sensitive to the size and quality of the training data, which has led to an increasing interest in collecting and filtering large parallel corpora. In this paper, we propose a new method for this task based on multilingual sentence embeddings. In contrast to previous approaches, which rely on nearest neighbor retrieval with a hard threshold over cosine similarity, our proposed method accounts for the scale inconsistencies of this measure, considering the margin between a given sentence pair and its closest candidates instead. Our experiments show large improvements over existing methods. We outperform the best published results on the BUCC mining task and the UN reconstruction task by more than 10 F1 and 30 precision points, respectively. 
Filtering the English-German ParaCrawl corpus with our approach, we obtain 31.2 BLEU points on newstest2014, an improvement of more than one point over the best official filtered version.", "field": [], "task": ["Cross-Lingual Bitext Mining", "Machine Translation", "Parallel Corpus Mining", "Sentence Embeddings"], "method": [], "dataset": ["BUCC German-to-English", "BUCC French-to-English"], "metric": ["F1 score"], "title": "Margin-based Parallel Corpus Mining with Multilingual Sentence Embeddings"} {"abstract": "Conditional text-to-image generation is an active area of research, with many possible applications. Existing research has primarily focused on generating a single image from available conditioning information in one step. One practical extension beyond one-step generation is a system that generates an image iteratively, conditioned on ongoing linguistic input or feedback. This is significantly more challenging than one-step generation tasks, as such a system must understand the contents of its generated images with respect to the feedback history, the current feedback, as well as the interactions among concepts present in the feedback history. In this work, we present a recurrent image generation model which takes into account both the generated output up to the current step as well as all past instructions for generation. We show that our model is able to generate the background, add new objects, and apply simple transformations to existing objects. We believe our approach is an important step toward interactive generation. Code and data is available at: https://www.microsoft.com/en-us/research/project/generative-neural-visual-artist-geneva/ .", "field": [], "task": ["Image Generation", "Text-to-Image Generation"], "method": [], "dataset": ["GeNeVA (CoDraw)", "GeNeVA (i-CLEVR)"], "metric": ["F1-score", "rsim"], "title": "Tell, Draw, and Repeat: Generating and Modifying Images Based on Continual Linguistic Instruction"} {"abstract": "Entity Linking (EL) systems aim to automatically map mentions of an entity in text to the corresponding entity in a Knowledge Graph (KG). Degree of connectivity of an entity in the KG directly affects an EL system{'}s ability to correctly link mentions in text to the entity in KG. This causes many EL systems to perform well for entities well connected to other entities in KG, bringing into focus the role of KG density in EL. In this paper, we propose Entity Linking using Densified Knowledge Graphs (ELDEN). ELDEN is an EL system which first densifies the KG with co-occurrence statistics from a large text corpus, and then uses the densified KG to train entity embeddings. Entity similarity measured using these trained entity embeddings result in improved EL. ELDEN outperforms state-of-the-art EL system on benchmark datasets. Due to such densification, ELDEN performs well for sparsely connected entities in the KG too. ELDEN{'}s approach is simple, yet effective. We have made ELDEN{'}s code and data publicly available.", "field": [], "task": ["Entity Disambiguation", "Entity Embeddings", "Entity Linking", "Knowledge Graphs"], "method": [], "dataset": ["AIDA-CoNLL"], "metric": ["In-KB Accuracy"], "title": "ELDEN: Improved Entity Linking Using Densified Knowledge Graphs"} {"abstract": "The high-quality node embeddings learned from the Graph Neural Networks (GNNs) have been applied to a wide range of node-based applications and some of them have achieved state-of-the-art (SOTA) performance. 
However, when applying node embeddings learned from GNNs to generate graph embeddings, the scalar node representation may not suffice to preserve the node/graph properties efficiently, resulting in sub-optimal graph embeddings.\n\nInspired by the Capsule Neural Network (CapsNet), we propose the Capsule Graph Neural Network (CapsGNN), which adopts the concept of capsules to address the weakness in existing GNN-based graph embeddings algorithms. By extracting node features in the form of capsules, routing mechanism can be utilized to capture important information at the graph level. As a result, our model generates multiple embeddings for each graph to capture graph properties from different aspects. The attention module incorporated in CapsGNN is used to tackle graphs with various sizes which also enables the model to focus on critical parts of the graphs.\n\nOur extensive evaluations with 10 graph-structured datasets demonstrate that CapsGNN has a powerful mechanism that operates to capture macroscopic properties of the whole graph by data-driven. It outperforms other SOTA techniques on several graph classification tasks, by virtue of the new instrument.", "field": [], "task": ["Graph Classification"], "method": [], "dataset": ["COLLAB", "RE-M12K", "IMDb-B", "ENZYMES", "PROTEINS", "D&D", "NCI1", "MUTAG", "IMDb-M", "RE-M5K"], "metric": ["Accuracy"], "title": "Capsule Graph Neural Network"} {"abstract": "Accurate depth estimation from images is a fundamental task in many\napplications including scene understanding and reconstruction. Existing\nsolutions for depth estimation often produce blurry approximations of low\nresolution. This paper presents a convolutional neural network for computing a\nhigh-resolution depth map given a single RGB image with the help of transfer\nlearning. Following a standard encoder-decoder architecture, we leverage\nfeatures extracted using high performing pre-trained networks when initializing\nour encoder along with augmentation and training strategies that lead to more\naccurate results. We show how, even for a very simple decoder, our method is\nable to achieve detailed high-resolution depth maps. Our network, with fewer\nparameters and training iterations, outperforms state-of-the-art on two\ndatasets and also produces qualitatively better results that capture object\nboundaries more faithfully. Code and corresponding pre-trained weights are made\npublicly available.", "field": [], "task": ["Depth Estimation", "Monocular Depth Estimation", "Transfer Learning"], "method": [], "dataset": ["NYU-Depth V2", "KITTI Eigen split"], "metric": ["RMSE", "absolute relative error"], "title": "High Quality Monocular Depth Estimation via Transfer Learning"} {"abstract": "The rapid progress in synthetic image generation and manipulation has now come to a point where it raises significant concerns for the implications towards society. At best, this leads to a loss of trust in digital content, but could potentially cause further harm by spreading false information or fake news. This paper examines the realism of state-of-the-art image manipulations, and how difficult it is to detect them, either automatically or by humans. To standardize the evaluation of detection methods, we propose an automated benchmark for facial manipulation detection. In particular, the benchmark is based on DeepFakes, Face2Face, FaceSwap and NeuralTextures as prominent representatives for facial manipulations at random compression level and size. 
The benchmark is publicly available and contains a hidden test set as well as a database of over 1.8 million manipulated images. This dataset is over an order of magnitude larger than comparable, publicly available, forgery datasets. Based on this data, we performed a thorough analysis of data-driven forgery detectors. We show that the use of additional domainspecific knowledge improves forgery detection to unprecedented accuracy, even in the presence of strong compression, and clearly outperforms human observers.", "field": [], "task": ["DeepFake Detection", "Face Swapping", "Fake Image Detection", "Image Generation"], "method": [], "dataset": ["FaceForensics"], "metric": ["Total Accuracy", "FSF", "NT", "FS", "DF", "Real"], "title": "FaceForensics++: Learning to Detect Manipulated Facial Images"} {"abstract": "We propose an effective deep learning approach to aesthetics quality\nassessment that relies on a new type of pre-trained features, and apply it to\nthe AVA data set, the currently largest aesthetics database. While previous\napproaches miss some of the information in the original images, due to taking\nsmall crops, down-scaling or warping the originals during training, we propose\nthe first method that efficiently supports full resolution images as an input,\nand can be trained on variable input sizes. This allows us to significantly\nimprove upon the state of the art, increasing the Spearman rank-order\ncorrelation coefficient (SRCC) of ground-truth mean opinion scores (MOS) from\nthe existing best reported of 0.612 to 0.756. To achieve this performance, we\nextract multi-level spatially pooled (MLSP) features from all convolutional\nblocks of a pre-trained InceptionResNet-v2 network, and train a custom shallow\nConvolutional Neural Network (CNN) architecture on these new features.", "field": [], "task": ["Aesthetics Quality Assessment", "Image Quality Assessment"], "method": [], "dataset": ["AVA"], "metric": ["Accuracy"], "title": "Effective Aesthetics Prediction with Multi-level Spatially Pooled Features"} {"abstract": "Domain adaptation for semantic image segmentation is very necessary since\nmanually labeling large datasets with pixel-level labels is expensive and time\nconsuming. Existing domain adaptation techniques either work on limited\ndatasets, or yield not so good performance compared with supervised learning.\nIn this paper, we propose a novel bidirectional learning framework for domain\nadaptation of segmentation. Using the bidirectional learning, the image\ntranslation model and the segmentation adaptation model can be learned\nalternatively and promote to each other. Furthermore, we propose a\nself-supervised learning algorithm to learn a better segmentation adaptation\nmodel and in return improve the image translation model. Experiments show that\nour method is superior to the state-of-the-art methods in domain adaptation of\nsegmentation with a big margin. The source code is available at\nhttps://github.com/liyunsheng13/BDL.", "field": [], "task": ["Domain Adaptation", "Image-to-Image Translation", "Self-Supervised Learning", "Semantic Segmentation", "Synthetic-to-Real Translation"], "method": [], "dataset": ["GTAV-to-Cityscapes Labels", "SYNTHIA-to-Cityscapes"], "metric": ["mIoU (13 classes)", "mIoU"], "title": "Bidirectional Learning for Domain Adaptation of Semantic Segmentation"} {"abstract": "In this paper, we propose a novel edge-labeling graph neural network (EGNN), which adapts a deep neural network on the edge-labeling graph, for few-shot learning. 
The previous graph neural network (GNN) approaches in few-shot learning have been based on the node-labeling framework, which implicitly models the intra-cluster similarity and the inter-cluster dissimilarity. In contrast, the proposed EGNN learns to predict the edge-labels rather than the node-labels on the graph that enables the evolution of an explicit clustering by iteratively updating the edge-labels with direct exploitation of both intra-cluster similarity and the inter-cluster dissimilarity. It is also well suited for performing on various numbers of classes without retraining, and can be easily extended to perform a transductive inference. The parameters of the EGNN are learned by episodic training with an edge-labeling loss to obtain a well-generalizable model for unseen low-data problem. On both of the supervised and semi-supervised few-shot image classification tasks with two benchmark datasets, the proposed EGNN significantly improves the performances over the existing GNNs.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Image Classification"], "method": [], "dataset": ["Mini-Imagenet 5-way (5-shot)", "Tiered ImageNet 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Edge-labeling Graph Neural Network for Few-shot Learning"} {"abstract": "Hyperbolic embeddings have recently gained attention in machine learning due to their ability to represent hierarchical data more accurately and succinctly than their Euclidean analogues. However, multi-relational knowledge graphs often exhibit multiple simultaneous hierarchies, which current hyperbolic models do not capture. To address this, we propose a model that embeds multi-relational graph data in the Poincar\\'e ball model of hyperbolic space. Our Multi-Relational Poincar\\'e model (MuRP) learns relation-specific parameters to transform entity embeddings by M\\\"obius matrix-vector multiplication and M\\\"obius addition. Experiments on the hierarchical WN18RR knowledge graph show that our Poincar\\'e embeddings outperform their Euclidean counterpart and existing embedding methods on the link prediction task, particularly at lower dimensionality.", "field": [], "task": ["Entity Embeddings", "Knowledge Graphs", "Link Prediction"], "method": [], "dataset": ["WN18RR", "FB15k-237"], "metric": ["Hits@10", "MRR", "Hits@3", "Hits@1"], "title": "Multi-relational Poincar\u00e9 Graph Embeddings"} {"abstract": "In this paper we are interested in recognizing human actions from sequences of 3D skeleton data. For this purpose we combine a 3D Convolutional Neural Network with body representations based on Euclidean Distance Matrices (EDMs), which have been recently shown to be very effective to capture the geometric structure of the human pose. One inherent limitation of the EDMs, however, is that they are defined up to a permutation of the skeleton joints, i.e., randomly shuffling the ordering of the joints yields many different representations. In oder to address this issue we introduce a novel architecture that simultaneously, and in an end-to-end manner, learns an optimal transformation of the joints, while optimizing the rest of parameters of the convolutional network. 
The proposed approach achieves state-of-the-art results on 3 benchmarks, including the recent NTU RGB-D dataset, for which we improve on previous LSTM-based methods by more than 10 percentage points, also surpassing other CNN-based methods while using almost 1000 times fewer parameters.", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "3D CNNs on Distance Matrices for Human Action Recognition"} {"abstract": "Few-shot learning is a challenging problem that has attracted more and more attention recently since abundant training samples are difficult to obtain in practical applications. Meta-learning has been proposed to address this issue, which focuses on quickly adapting a predictor as a base-learner to new tasks, given limited labeled samples. However, a critical challenge for meta-learning is the representation deficiency since it is hard to discover common information from a small number of training samples or even one, as is the representation of key features from such little information. As a result, a meta-learner cannot be trained well in a high-dimensional parameter space to generalize to new tasks. Existing methods mostly resort to extracting less expressive features so as to avoid the representation deficiency. Aiming at learning better representations, we propose a meta-learning approach with complemented representations network (MCRNet) for few-shot image classification. In particular, we embed a latent space, where latent codes are reconstructed with extra representation information to complement the representation deficiency. Furthermore, the latent space is established with variational inference, collaborating well with different base-learners, and can be extended to other models. Finally, our end-to-end framework achieves the state-of-the-art performance in image classification on three standard few-shot learning datasets.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Image Classification", "Meta-Learning", "Variational Inference"], "method": [], "dataset": ["FC100 5-way (1-shot)", "Mini-Imagenet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "CIFAR-FS 5-way (1-shot)", "FC100 5-way (5-shot)", "CIFAR-FS 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Complementing Representation Deficiency in Few-shot Image Classification: A Meta-Learning Approach"} {"abstract": "We present PARADE, an end-to-end Transformer-based model that considers document-level context for document reranking. PARADE leverages passage-level relevance representations to predict a document relevance score, overcoming the limitations of previous approaches that perform inference on passages independently. Experiments on two ad-hoc retrieval benchmarks demonstrate PARADE's effectiveness over such methods. We conduct extensive analyses on PARADE's efficiency, highlighting several strategies for improving it. When combined with knowledge distillation, a PARADE model with 72\\% fewer parameters achieves effectiveness competitive with previous approaches using BERT-Base. 
Our code is available at \\url{https://github.com/canjiali/PARADE}.", "field": [], "task": ["Ad-Hoc Information Retrieval", "Knowledge Distillation"], "method": [], "dataset": ["TREC Robust04"], "metric": ["P@20", "nDCG@20"], "title": "PARADE: Passage Representation Aggregation for Document Reranking"} {"abstract": "The task of retrieving video content relevant to natural language queries plays a critical role in effectively handling internet-scale datasets. Most of the existing methods for this caption-to-video retrieval problem do not fully exploit cross-modal cues present in video. Furthermore, they aggregate per-frame visual features with limited or no temporal information. In this paper, we present a multi-modal transformer to jointly encode the different modalities in video, which allows each of them to attend to the others. The transformer architecture is also leveraged to encode and model the temporal information. On the natural language side, we investigate the best practices to jointly optimize the language embedding together with the multi-modal transformer. This novel framework allows us to establish state-of-the-art results for video retrieval on three datasets. More details are available at http://thoth.inrialpes.fr/research/MMT.", "field": [], "task": ["Video Retrieval"], "method": [], "dataset": ["MSR-VTT-1kA", "LSMDC", "ActivityNet"], "metric": ["text-to-video Median Rank", "text-to-video R@5", "text-to-video R@50", "text-to-video R@1", "text-to-video Mean Rank", "text-to-video R@10"], "title": "Multi-modal Transformer for Video Retrieval"} {"abstract": "Efficiently modeling dynamic motion information in videos is crucial for action recognition task. Most state-of-the-art methods heavily rely on dense optical flow as motion representation. Although combining optical flow with RGB frames as input can achieve excellent recognition performance, the optical flow extraction is very time-consuming. This undoubtably will count against real-time action recognition. In this paper, we shed light on fast action recognition by lifting the reliance on optical flow. Our motivation lies in the observation that small displacements of motion boundaries are the most critical ingredients for distinguishing actions, so we design a novel motion cue called Persistence of Appearance (PA). In contrast to optical flow, our PA focuses more on distilling the motion information at boundaries. Also, it is more efficient by only accumulating pixel-wise differences in feature space, instead of using exhaustive patch-wise search of all the possible motion vectors. Our PA is over 1000x faster (8196fps vs. 8fps) than conventional optical flow in terms of motion modeling speed. To further aggregate the short-term dynamics in PA to long-term dynamics, we also devise a global temporal fusion strategy called Various-timescale Aggregation Pooling (VAP) that can adaptively model long-range temporal relationships across various timescales. We finally incorporate the proposed PA and VAP to form a unified framework called Persistent Appearance Network (PAN) with strong temporal modeling ability. Extensive experiments on six challenging action recognition benchmarks verify that our PAN outperforms recent state-of-the-art methods at low FLOPs. 
Codes and models are available at: https://github.com/zhang-can/PAN-PyTorch.", "field": [], "task": ["Action Recognition", "Optical Flow Estimation", "Video Understanding"], "method": [], "dataset": ["Jester", "Something-Something V2", "Something-Something V1"], "metric": ["Top 1 Accuracy", "Val", "Top-5 Accuracy", "Top-1 Accuracy", "Top 5 Accuracy"], "title": "PAN: Towards Fast Action Recognition via Learning Persistence of Appearance"} {"abstract": "In today's heavily overparameterized models, the value of the training loss provides few guarantees on model generalization ability. Indeed, optimizing only the training loss value, as is commonly done, can easily lead to suboptimal model quality. Motivated by the connection between geometry of the loss landscape and generalization -- including a generalization bound that we prove here -- we introduce a novel, effective procedure for instead simultaneously minimizing loss value and loss sharpness. In particular, our procedure, Sharpness-Aware Minimization (SAM), seeks parameters that lie in neighborhoods having uniformly low loss; this formulation results in a min-max optimization problem on which gradient descent can be performed efficiently. We present empirical results showing that SAM improves model generalization across a variety of benchmark datasets (e.g., CIFAR-{10, 100}, ImageNet, finetuning tasks) and models, yielding novel state-of-the-art performance for several. Additionally, we find that SAM natively provides robustness to label noise on par with that provided by state-of-the-art procedures that specifically target learning with noisy labels.", "field": [], "task": ["Fine-Grained Image Classification", "Image Classification", "Learning with noisy labels"], "method": [], "dataset": ["FGVC Aircraft", "Stanford Cars", "CIFAR-100", "CIFAR-10", "Oxford-IIIT Pets", "Flowers-102", "Food-101", "SVHN", "Fashion-MNIST", "ImageNet", "Birdsnap"], "metric": ["Number of params", "Top 1 Accuracy", "Percentage error", "Percentage correct", "Top-1 Error Rate", "Accuracy", "Top 5 Accuracy"], "title": "Sharpness-Aware Minimization for Efficiently Improving Generalization"} {"abstract": "Domain adaptive person Re-Identification (ReID) is challenging owing to the domain gap and shortage of annotations on target scenarios. To handle those two challenges, this paper proposes a coupling optimization method including the Domain-Invariant Mapping (DIM) method and the Global-Local distance Optimization (GLO), respectively. Different from previous methods that transfer knowledge in two stages, the DIM achieves a more efficient one-stage knowledge transfer by mapping images in labeled and unlabeled datasets to a shared feature space. GLO is designed to train the ReID model with unsupervised setting on the target domain. Instead of relying on existing optimization strategies designed for supervised training, GLO involves more images in distance optimization, and achieves better robustness to noisy label prediction. GLO also integrates distance optimizations in both the global dataset and local training batch, thus exhibits better training efficiency. Extensive experiments on three large-scale datasets, i.e., Market-1501, DukeMTMC-reID, and MSMT17, show that our coupling optimization outperforms state-of-the-art methods by a large margin. 
Our method also works well in unsupervised training, and even outperforms several recent domain adaptive methods.", "field": [], "task": ["Domain Adaptive Person Re-Identification", "Person Re-Identification", "Transfer Learning", "Unsupervised Person Re-Identification"], "method": [], "dataset": ["Market-1501->MSMT17", "DukeMTMC-reID->Market-1501", "DukeMTMC-reID->MSMT17", "Market-1501->DukeMTMC-reID"], "metric": ["Rank-1", "mAP"], "title": "Domain Adaptive Person Re-Identification via Coupling Optimization"} {"abstract": "In this paper, we present a novel two-pass approach to unify streaming and non-streaming end-to-end (E2E) speech recognition in a single model. Our model adopts the hybrid CTC/attention architecture, in which the conformer layers in the encoder are modified. We propose a dynamic chunk-based attention strategy to allow arbitrary right context length. At inference time, the CTC decoder generates n-best hypotheses in a streaming way. The inference latency could be easily controlled by only changing the chunk size. The CTC hypotheses are then rescored by the attention decoder to get the final result. This efficient rescoring process causes very little sentence-level latency. Our experiments on the open 170-hour AISHELL-1 dataset show that, the proposed method can unify the streaming and non-streaming model simply and efficiently. On the AISHELL-1 test set, our unified model achieves 5.60% relative character error rate (CER) reduction in non-streaming ASR compared to a standard non-streaming transformer. The same model achieves 5.42% CER with 640ms latency in a streaming ASR system.", "field": [], "task": ["Speech Recognition"], "method": [], "dataset": ["AISHELL-1"], "metric": ["Word Error Rate (WER)"], "title": "Unified Streaming and Non-streaming Two-pass End-to-end Model for Speech Recognition"} {"abstract": "Discriminative clustering has been successfully applied to a number of\nweakly-supervised learning tasks. Such applications include person and action\nrecognition, text-to-video alignment, object co-segmentation and colocalization\nin videos and images. One drawback of discriminative clustering, however, is\nits limited scalability. We address this issue and propose an online\noptimization algorithm based on the Block-Coordinate Frank-Wolfe algorithm. We\napply the proposed method to the problem of weakly supervised learning of\nactions and actors from movies together with corresponding movie scripts. The\nscaling up of the learning problem to 66 feature length movies enables us to\nsignificantly improve weakly supervised action recognition.", "field": [], "task": ["Action Recognition", "Temporal Action Localization", "Video Alignment", "Video Retrieval", "Weakly-Supervised Action Recognition"], "method": [], "dataset": ["LSMDC"], "metric": ["text-to-video R@1", "text-to-video R@10", "text-to-video Median Rank", "text-to-video R@5"], "title": "Learning from Video and Text via Large-Scale Discriminative Clustering"} {"abstract": "Monocular cameras are one of the most commonly used sensors in the automotive\nindustry for autonomous vehicles. One major drawback using a monocular camera\nis that it only makes observations in the two dimensional image plane and can\nnot directly measure the distance to objects. In this paper, we aim at filling\nthis gap by developing a multi-object tracking algorithm that takes an image as\ninput and produces trajectories of detected objects in a world coordinate\nsystem. 
We solve this by using a deep neural network trained to detect and\nestimate the distance to objects from a single input image. The detections from\na sequence of images are fed in to a state-of-the art Poisson multi-Bernoulli\nmixture tracking filter. The combination of the learned detector and the PMBM\nfilter results in an algorithm that achieves 3D tracking using only mono-camera\nimages as input. The performance of the algorithm is evaluated both in 3D world\ncoordinates, and 2D image coordinates, using the publicly available KITTI\nobject tracking dataset. The algorithm shows the ability to accurately track\nobjects, correctly handle data associations, even when there is a big overlap\nof the objects in the image, and is one of the top performing algorithms on the\nKITTI object tracking benchmark. Furthermore, the algorithm is efficient,\nrunning on average close to 20 frames per second.", "field": [], "task": ["3D Multi-Object Tracking", "Autonomous Vehicles", "Multi-Object Tracking", "Object Tracking"], "method": [], "dataset": ["KITTI Tracking test"], "metric": ["MOTA"], "title": "Mono-Camera 3D Multi-Object Tracking Using Deep Learning Detections and PMBM Filtering"} {"abstract": "Temporal action proposal generation is an important yet challenging problem,\nsince temporal proposals with rich action content are indispensable for\nanalysing real-world videos with long duration and high proportion irrelevant\ncontent. This problem requires methods not only generating proposals with\nprecise temporal boundaries, but also retrieving proposals to cover truth\naction instances with high recall and high overlap using relatively fewer\nproposals. To address these difficulties, we introduce an effective proposal\ngeneration method, named Boundary-Sensitive Network (BSN), which adopts \"local\nto global\" fashion. Locally, BSN first locates temporal boundaries with high\nprobabilities, then directly combines these boundaries as proposals. Globally,\nwith Boundary-Sensitive Proposal feature, BSN retrieves proposals by evaluating\nthe confidence of whether a proposal contains an action within its region. We\nconduct experiments on two challenging datasets: ActivityNet-1.3 and THUMOS14,\nwhere BSN outperforms other state-of-the-art temporal action proposal\ngeneration methods with high recall and high temporal precision. Finally,\nfurther experiments demonstrate that by combining existing action classifiers,\nour method significantly improves the state-of-the-art temporal action\ndetection performance.", "field": [], "task": ["Action Detection", "Temporal Action Localization", "Temporal Action Proposal Generation"], "method": [], "dataset": ["THUMOS' 14", "ActivityNet-1.3", "THUMOS\u201914"], "metric": ["AUC (test)", "mAP", "AR@200", "mAP@0.3", "mAP IOU@0.6", "mAP IOU@0.7", "mAP IOU@0.95", "AUC (val)", "mAP IOU@0.5", "mAP IOU@0.4", "mAP@0.4", "AR@500", "mAP IOU@0.3", "mAP@0.5", "mAP IOU@0.75", "AR@50", "AR@1000", "AR@100"], "title": "BSN: Boundary Sensitive Network for Temporal Action Proposal Generation"} {"abstract": "Due to the fast inference and good performance, discriminative learning\nmethods have been widely studied in image denoising. However, these methods\nmostly learn a specific model for each noise level, and require multiple models\nfor denoising images with different noise levels. They also lack flexibility to\ndeal with spatially variant noise, limiting their applications in practical\ndenoising. 
To address these issues, we present a fast and flexible denoising\nconvolutional neural network, namely FFDNet, with a tunable noise level map as\nthe input. The proposed FFDNet works on downsampled sub-images, achieving a\ngood trade-off between inference speed and denoising performance. In contrast\nto the existing discriminative denoisers, FFDNet enjoys several desirable\nproperties, including (i) the ability to handle a wide range of noise levels\n(i.e., [0, 75]) effectively with a single network, (ii) the ability to remove\nspatially variant noise by specifying a non-uniform noise level map, and (iii)\nfaster speed than benchmark BM3D even on CPU without sacrificing denoising\nperformance. Extensive experiments on synthetic and real noisy images are\nconducted to evaluate FFDNet in comparison with state-of-the-art denoisers. The\nresults show that FFDNet is effective and efficient, making it highly\nattractive for practical denoising applications.", "field": [], "task": ["Denoising", "Image Denoising"], "method": [], "dataset": ["Kodak25 sigma50", "Darmstadt Noise Dataset", "CBSD68 sigma15", "Kodak25 sigma25", "McMaster sigma15", "Clip300 sigma60", "CBSD68 sigma50", "BSD68 sigma50", "BSD68 sigma75", "Clip300 sigma35", "BSD68 sigma35", "Kodak25 sigma75", "BSD68 sigma25", "Kodak25 sigma35", "Clip300 sigma25", "CBSD68 sigma25", "Set12 sigma15", "McMaster sigma35", "McMaster sigma50", "Clip300 sigma15", "Clip300 sigma50", "Kodak25 sigma15", "BSD68 sigma15", "McMaster sigma25", "CBSD68 sigma35", "CBSD68 sigma75", "McMaster sigma75"], "metric": ["PSNR"], "title": "FFDNet: Toward a Fast and Flexible Solution for CNN based Image Denoising"} {"abstract": "Skeleton-based human action recognition has recently attracted increasing attention thanks to the accessibility and the popularity of 3D skeleton data. One of the key challenges in skeleton-based action recognition lies in the large view variations when capturing data. In order to alleviate the effects of view variations, this paper introduces a novel view adaptation scheme, which automatically determines the virtual observation viewpoints in a learning based data driven manner. We design two view adaptive neural networks, i.e., VA-RNN based on RNN, and VA-CNN based on CNN. For each network, a novel view adaptation module learns and determines the most suitable observation viewpoints, and transforms the skeletons to those viewpoints for the end-to-end recognition with a main classification network. Ablation studies find that the proposed view adaptive models are capable of transforming the skeletons of various viewpoints to much more consistent virtual viewpoints which largely eliminates the viewpoint influence. In addition, we design a two-stream scheme (referred to as VA-fusion) that fuses the scores of the two networks to provide the fused prediction. Extensive experimental evaluations on five challenging benchmarks demonstrate that the effectiveness of the proposed view-adaptive networks and superior performance over state-of-the-art approaches. 
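The view-adaptation module in the record above transforms skeletons to a learned virtual viewpoint before classification. A toy sketch of that transformation step, assuming a single learned yaw rotation plus translation (the paper learns its viewpoint parameters end-to-end inside VA-RNN/VA-CNN, and its exact parameterization is not given in the abstract):

```python
# Hedged sketch of a view-adaptation step: rotate and translate skeleton joints
# into a virtual observation viewpoint (illustrative parameterization only).
import numpy as np

def rotate_z(yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def view_adapt(joints, yaw, translation):
    """joints: (T, J, 3) skeleton sequence; yaw, translation: learned parameters."""
    return (joints - translation) @ rotate_z(yaw).T

seq = np.random.rand(30, 25, 3)         # 30 frames, 25 joints (dummy skeleton data)
print(view_adapt(seq, yaw=0.3, translation=np.zeros(3)).shape)
```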
The source code is available at https://github.com/microsoft/View-Adaptive-Neural-Networks-for-Skeleton-based-Human-Action-Recognition.", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["SYSU 3D", "NTU RGB+D", "N-UCLA", "SBU", "UWA3D"], "metric": ["Accuracy (CS)", "Accuracy (CV)", "Accuracy"], "title": "View Adaptive Neural Networks for High Performance Skeleton-based Human Action Recognition"} {"abstract": "We study 3D shape modeling from a single image and make contributions to it\nin three aspects. First, we present Pix3D, a large-scale benchmark of diverse\nimage-shape pairs with pixel-level 2D-3D alignment. Pix3D has wide applications\nin shape-related tasks including reconstruction, retrieval, viewpoint\nestimation, etc. Building such a large-scale dataset, however, is highly\nchallenging; existing datasets either contain only synthetic data, or lack\nprecise alignment between 2D images and 3D shapes, or only have a small number\nof images. Second, we calibrate the evaluation criteria for 3D shape\nreconstruction through behavioral studies, and use them to objectively and\nsystematically benchmark cutting-edge reconstruction algorithms on Pix3D.\nThird, we design a novel model that simultaneously performs 3D reconstruction\nand pose estimation; our multi-task learning approach achieves state-of-the-art\nperformance on both tasks.", "field": [], "task": ["3D Reconstruction", "3D Shape Modeling", "3D Shape Reconstruction", "Multi-Task Learning", "Pose Estimation", "Viewpoint Estimation"], "method": [], "dataset": ["Pix3D"], "metric": ["R@16", "R@8", "EMD", "R@2", "R@4", "R@1", "TIoU", "R@32", "CD"], "title": "Pix3D: Dataset and Methods for Single-Image 3D Shape Modeling"} {"abstract": "The Jaccard index, also referred to as the intersection-over-union score, is\ncommonly employed in the evaluation of image segmentation results given its\nperceptual qualities, scale invariance - which lends appropriate relevance to\nsmall objects, and appropriate counting of false negatives, in comparison to\nper-pixel losses. We present a method for direct optimization of the mean\nintersection-over-union loss in neural networks, in the context of semantic\nimage segmentation, based on the convex Lov\\'asz extension of submodular\nlosses. The loss is shown to perform better with respect to the Jaccard index\nmeasure than the traditionally used cross-entropy loss. We show quantitative\nand qualitative differences between optimizing the Jaccard index per image\nversus optimizing the Jaccard index taken over an entire dataset. We evaluate\nthe impact of our method in a semantic segmentation pipeline and show\nsubstantially improved intersection-over-union segmentation scores on the\nPascal VOC and Cityscapes datasets using state-of-the-art deep learning\nsegmentation architectures.", "field": [], "task": ["Semantic Segmentation"], "method": [], "dataset": ["PASCAL VOC 2012 test", "Cityscapes test"], "metric": ["Time (ms)", "Mean IoU", "mIoU", "Mean IoU (class)", "Frame (fps)"], "title": "The Lov\u00e1sz-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks"} {"abstract": "In this paper, we introduce the concept of learning latent super-events from\nactivity videos, and present how it benefits activity detection in continuous\nvideos. 
We define a super-event as a set of multiple events occurring together\nin videos with a particular temporal organization; it is the opposite concept\nof sub-events. Real-world videos contain multiple activities and are rarely\nsegmented (e.g., surveillance videos), and learning latent super-events allows\nthe model to capture how the events are temporally related in videos. We design\ntemporal structure filters that enable the model to focus on particular\nsub-intervals of the videos, and use them together with a soft attention\nmechanism to learn representations of latent super-events. Super-event\nrepresentations are combined with per-frame or per-segment CNNs to provide\nframe-level annotations. Our approach is designed to be fully differentiable,\nenabling end-to-end learning of latent super-event representations jointly with\nthe activity detector using them. Our experiments with multiple public video\ndatasets confirm that the proposed concept of latent super-event learning\nsignificantly benefits activity detection, advancing the state-of-the-arts.", "field": [], "task": ["Action Detection", "Activity Detection"], "method": [], "dataset": ["Multi-THUMOS", "Charades"], "metric": ["mAP"], "title": "Learning Latent Super-Events to Detect Multiple Activities in Videos"} {"abstract": "Current state-of-the-art solutions for motion capture from a single camera\nare optimization driven: they optimize the parameters of a 3D human model so\nthat its re-projection matches measurements in the video (e.g. person\nsegmentation, optical flow, keypoint detections etc.). Optimization models are\nsusceptible to local minima. This has been the bottleneck that forced using\nclean green-screen like backgrounds at capture time, manual initialization, or\nswitching to multiple cameras as input resource. In this work, we propose a\nlearning based motion capture model for single camera input. Instead of\noptimizing mesh and skeleton parameters directly, our model optimizes neural\nnetwork weights that predict 3D shape and skeleton configurations given a\nmonocular RGB video. Our model is trained using a combination of strong\nsupervision from synthetic data, and self-supervision from differentiable\nrendering of (a) skeletal keypoints, (b) dense 3D mesh motion, and (c)\nhuman-background segmentation, in an end-to-end framework. Empirically we show\nour model combines the best of both worlds of supervised learning and test-time\noptimization: supervised learning initializes the model parameters in the right\nregime, ensuring good pose and surface initialization at test time, without\nmanual effort. Self-supervision by back-propagating through differentiable\nrendering allows (unsupervised) adaptation of the model to the test data, and\noffers much tighter fit than a pretrained fixed model. We show that the\nproposed model improves with experience and converges to low-error solutions\nwhere previous optimization methods fail.", "field": [], "task": ["3D Human Pose Estimation", "Motion Capture", "Optical Flow Estimation", "Self-Supervised Learning"], "method": [], "dataset": ["Surreal"], "metric": ["MPJPE"], "title": "Self-supervised Learning of Motion Capture"} {"abstract": "In this paper, we address semantic segmentation of road-objects from 3D LiDAR\npoint clouds. In particular, we wish to detect and categorize instances of\ninterest, such as cars, pedestrians and cyclists. 
We formulate this problem as\na point- wise classification problem, and propose an end-to-end pipeline called\nSqueezeSeg based on convolutional neural networks (CNN): the CNN takes a\ntransformed LiDAR point cloud as input and directly outputs a point-wise label\nmap, which is then refined by a conditional random field (CRF) implemented as a\nrecurrent layer. Instance-level labels are then obtained by conventional\nclustering algorithms. Our CNN model is trained on LiDAR point clouds from the\nKITTI dataset, and our point-wise segmentation labels are derived from 3D\nbounding boxes from KITTI. To obtain extra training data, we built a LiDAR\nsimulator into Grand Theft Auto V (GTA-V), a popular video game, to synthesize\nlarge amounts of realistic training data. Our experiments show that SqueezeSeg\nachieves high accuracy with astonishingly fast and stable runtime (8.7 ms per\nframe), highly desirable for autonomous driving applications. Furthermore,\nadditionally training on synthesized data boosts validation accuracy on\nreal-world data. Our source code and synthesized data will be open-sourced.", "field": [], "task": ["3D Semantic Segmentation", "Autonomous Driving", "Semantic Segmentation"], "method": [], "dataset": ["SemanticKITTI"], "metric": ["mIoU"], "title": "SqueezeSeg: Convolutional Neural Nets with Recurrent CRF for Real-Time Road-Object Segmentation from 3D LiDAR Point Cloud"} {"abstract": "Previous work combines word-level and character-level representations using\nconcatenation or scalar weighting, which is suboptimal for high-level tasks\nlike reading comprehension. We present a fine-grained gating mechanism to\ndynamically combine word-level and character-level representations based on\nproperties of the words. We also extend the idea of fine-grained gating to\nmodeling the interaction between questions and paragraphs for reading\ncomprehension. Experiments show that our approach can improve the performance\non reading comprehension tasks, achieving new state-of-the-art results on the\nChildren's Book Test dataset. To demonstrate the generality of our gating\nmechanism, we also show improved results on a social media tag prediction task.", "field": [], "task": ["Question Answering", "Reading Comprehension"], "method": [], "dataset": ["SQuAD1.1 dev", "SQuAD1.1"], "metric": ["EM", "F1"], "title": "Words or Characters? Fine-grained Gating for Reading Comprehension"} {"abstract": "This paper introduces SC2LE (StarCraft II Learning Environment), a\nreinforcement learning environment based on the StarCraft II game. This domain\nposes a new grand challenge for reinforcement learning, representing a more\ndifficult class of problems than considered in most prior work. It is a\nmulti-agent problem with multiple players interacting; there is imperfect\ninformation due to a partially observed map; it has a large action space\ninvolving the selection and control of hundreds of units; it has a large state\nspace that must be observed solely from raw input feature planes; and it has\ndelayed credit assignment requiring long-term strategies over thousands of\nsteps. We describe the observation, action, and reward specification for the\nStarCraft II domain and provide an open source Python-based interface for\ncommunicating with the game engine. In addition to the main game maps, we\nprovide a suite of mini-games focusing on different elements of StarCraft II\ngameplay. For the main game maps, we also provide an accompanying dataset of\ngame replay data from human expert players. 
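The fine-grained gating record above combines word-level and character-level representations with a gate driven by word properties instead of concatenation or a fixed scalar weight. A minimal sketch of that gating idea; the module name, feature dimensions, and the choice of word-property features are assumptions, not the paper's code.

```python
# Hedged sketch of fine-grained gating between word- and character-level embeddings:
# a per-dimension gate computed from word features interpolates the two.
import torch
import torch.nn as nn

class FineGrainedGate(nn.Module):
    def __init__(self, feat_dim, emb_dim):
        super().__init__()
        self.gate = nn.Linear(feat_dim, emb_dim)   # gate from word properties (e.g. POS, frequency)

    def forward(self, word_emb, char_emb, word_feats):
        g = torch.sigmoid(self.gate(word_feats))    # (batch, emb_dim), values in [0, 1]
        return g * word_emb + (1.0 - g) * char_emb  # element-wise interpolation

gate = FineGrainedGate(feat_dim=20, emb_dim=300)
w, c, f = torch.randn(4, 300), torch.randn(4, 300), torch.randn(4, 20)
print(gate(w, c, f).shape)                          # (4, 300)
```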
We give initial baseline results\nfor neural networks trained from this data to predict game outcomes and player\nactions. Finally, we present initial baseline results for canonical deep\nreinforcement learning agents applied to the StarCraft II domain. On the\nmini-games, these agents learn to achieve a level of play that is comparable to\na novice player. However, when trained on the main game, these agents are\nunable to make significant progress. Thus, SC2LE offers a new and challenging\nenvironment for exploring deep reinforcement learning algorithms and\narchitectures.", "field": [], "task": ["Real-Time Strategy Games", "Starcraft", "Starcraft II"], "method": [], "dataset": ["CollectMineralShards", "MoveToBeacon"], "metric": ["Max Score"], "title": "StarCraft II: A New Challenge for Reinforcement Learning"} {"abstract": "We address the problem of activity detection in continuous, untrimmed video\nstreams. This is a difficult task that requires extracting meaningful\nspatio-temporal features to capture activities, accurately localizing the start\nand end times of each activity. We introduce a new model, Region Convolutional\n3D Network (R-C3D), which encodes the video streams using a three-dimensional\nfully convolutional network, then generates candidate temporal regions\ncontaining activities, and finally classifies selected regions into specific\nactivities. Computation is saved due to the sharing of convolutional features\nbetween the proposal and the classification pipelines. The entire model is\ntrained end-to-end with jointly optimized localization and classification\nlosses. R-C3D is faster than existing methods (569 frames per second on a\nsingle Titan X Maxwell GPU) and achieves state-of-the-art results on THUMOS'14.\nWe further demonstrate that our model is a general activity detection framework\nthat does not rely on assumptions about particular dataset properties by\nevaluating our approach on ActivityNet and Charades. Our code is available at\nhttp://ai.bu.edu/r-c3d/.", "field": [], "task": ["Action Detection", "Activity Detection"], "method": [], "dataset": ["Charades", "ActivityNet-1.3", "THUMOS\u201914"], "metric": ["mAP@0.2", "mAP", "mAP@0.3", "mAP IOU@0.5", "mAP IOU@0.2", "mAP IOU@0.4", "mAP@0.4", "mAP@0.1", "mAP IOU@0.3", "mAP@0.5", "mAP IOU@0.1"], "title": "R-C3D: Region Convolutional 3D Network for Temporal Activity Detection"} {"abstract": "Neural networks have proven effective at solving difficult problems but\ndesigning their architectures can be challenging, even for image classification\nproblems alone. Our goal is to minimize human participation, so we employ\nevolutionary algorithms to discover such networks automatically. Despite\nsignificant computational requirements, we show that it is now possible to\nevolve models with accuracies within the range of those published in the last\nyear. Specifically, we employ simple evolutionary techniques at unprecedented\nscales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting\nfrom trivial initial conditions and reaching accuracies of 94.6% (95.6% for\nensemble) and 77.0%, respectively. To do this, we use novel and intuitive\nmutation operators that navigate large search spaces; we stress that no human\nparticipation is required once evolution starts and that the output is a\nfully-trained model. 
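A toy loop in the spirit of the evolutionary search described above: keep a population, repeatedly compare two members, discard the worse one, and insert a mutated copy of the better one. The architecture encoding, the mutation choices, and the fitness stand-in are invented for illustration; the paper trains real models and uses its own mutation operators.

```python
# Hedged sketch of tournament-style evolution (illustrative stand-in only).
import random

def mutate(arch):
    arch = dict(arch)
    key = random.choice(["depth", "width", "lr"])
    arch[key] = max(1e-4, arch[key] * random.choice([0.5, 2.0]))
    return arch

def fitness(arch):   # stand-in for "train the model and report validation accuracy"
    return -abs(arch["depth"] - 8) - abs(arch["width"] - 64) / 64 - abs(arch["lr"] - 0.1)

population = [{"depth": 2, "width": 16, "lr": 0.01} for _ in range(20)]
for _ in range(500):
    a, b = random.sample(range(len(population)), 2)
    worse, better = (a, b) if fitness(population[a]) < fitness(population[b]) else (b, a)
    population[worse] = mutate(population[better])   # replace the loser with a mutated winner
print(max(population, key=fitness))
```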
Throughout this work, we place special emphasis on the\nrepeatability of results, the variability in the outcomes and the computational\nrequirements.", "field": [], "task": ["Hyperparameter Optimization", "Image Classification", "Neural Architecture Search"], "method": [], "dataset": ["CIFAR-100", "CIFAR-10"], "metric": ["Percentage correct"], "title": "Large-Scale Evolution of Image Classifiers"} {"abstract": "Relational reasoning is a central component of generally intelligent\nbehavior, but has proven difficult for neural networks to learn. In this paper\nwe describe how to use Relation Networks (RNs) as a simple plug-and-play module\nto solve problems that fundamentally hinge on relational reasoning. We tested\nRN-augmented networks on three tasks: visual question answering using a\nchallenging dataset called CLEVR, on which we achieve state-of-the-art,\nsuper-human performance; text-based question answering using the bAbI suite of\ntasks; and complex reasoning about dynamic physical systems. Then, using a\ncurated dataset called Sort-of-CLEVR we show that powerful convolutional\nnetworks do not have a general capacity to solve relational questions, but can\ngain this capacity when augmented with RNs. Our work shows how a deep learning\narchitecture equipped with an RN module can implicitly discover and learn to\nreason about entities and their relations.", "field": [], "task": ["Image Retrieval with Multi-Modal Query", "Question Answering", "Relational Reasoning", "Visual Question Answering"], "method": [], "dataset": ["CLEVR", "Fashion200k"], "metric": ["Recall@50", "Recall@1", "Recall@10", "Accuracy"], "title": "A simple neural network module for relational reasoning"} {"abstract": "The MNIST dataset has become a standard benchmark for learning,\nclassification and computer vision systems. Contributing to its widespread\nadoption are the understandable and intuitive nature of the task, its\nrelatively small size and storage requirements and the accessibility and\nease-of-use of the database itself. The MNIST database was derived from a\nlarger dataset known as the NIST Special Database 19 which contains digits,\nuppercase and lowercase handwritten letters. This paper introduces a variant of\nthe full NIST dataset, which we have called Extended MNIST (EMNIST), which\nfollows the same conversion paradigm used to create the MNIST dataset. The\nresult is a set of datasets that constitute a more challenging classification\ntasks involving letters and digits, and that shares the same image structure\nand parameters as the original MNIST task, allowing for direct compatibility\nwith all existing classifiers and systems. Benchmark results are presented\nalong with a validation of the conversion process through the comparison of the\nclassification results on converted NIST digits and the MNIST digits.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["EMNIST-Digits", "EMNIST-Letters", "EMNIST-Balanced"], "metric": ["Accuracy (%)", "Accuracy"], "title": "EMNIST: an extension of MNIST to handwritten letters"} {"abstract": "Directly reading documents and being able to answer questions from them is an\nunsolved challenge. To avoid its inherent difficulty, question answering (QA)\nhas been directed towards using Knowledge Bases (KBs) instead, which has proven\neffective. Unfortunately KBs often suffer from being too restrictive, as the\nschema cannot support certain types of answers, and too sparse, e.g. Wikipedia\ncontains much more information than Freebase. 
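The Relation Network record above plugs a pairwise reasoning module into a standard network. A compact sketch of that module, applying a shared MLP g to every pair of object embeddings and then a second MLP f to the summed pair features; layer sizes and names are assumptions, and the question embedding that the full model concatenates to each pair is omitted here.

```python
# Hedged sketch of a Relation Network module over a set of object embeddings.
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    def __init__(self, obj_dim, hidden, out_dim):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * obj_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU())
        self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, out_dim))

    def forward(self, objects):                        # (num_objects, obj_dim)
        n = objects.shape[0]
        oi = objects.unsqueeze(1).expand(n, n, -1)     # object i in each pair
        oj = objects.unsqueeze(0).expand(n, n, -1)     # object j in each pair
        pair_feats = self.g(torch.cat([oi, oj], dim=-1))
        return self.f(pair_feats.sum(dim=(0, 1)))      # aggregate over all pairs

objs = torch.randn(10, 32)                             # e.g. 10 CNN feature-map cells
print(RelationNetwork(32, 64, 10)(objs).shape)         # (10,) output logits
```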
In this work we introduce a new\nmethod, Key-Value Memory Networks, that makes reading documents more viable by\nutilizing different encodings in the addressing and output stages of the memory\nread operation. To compare using KBs, information extraction or Wikipedia\ndocuments directly in a single framework we construct an analysis tool,\nWikiMovies, a QA dataset that contains raw text alongside a preprocessed KB, in\nthe domain of movies. Our method reduces the gap between all three settings. It\nalso achieves state-of-the-art results on the existing WikiQA benchmark.", "field": [], "task": ["Question Answering"], "method": [], "dataset": ["WikiQA"], "metric": ["MRR", "MAP"], "title": "Key-Value Memory Networks for Directly Reading Documents"} {"abstract": "In this paper, we present supervision-by-registration, an unsupervised\napproach to improve the precision of facial landmark detectors on both images\nand video. Our key observation is that the detections of the same landmark in\nadjacent frames should be coherent with registration, i.e., optical flow.\nInterestingly, the coherency of optical flow is a source of supervision that\ndoes not require manual labeling, and can be leveraged during detector\ntraining. For example, we can enforce in the training loss function that a\ndetected landmark at frame$_{t-1}$ followed by optical flow tracking from\nframe$_{t-1}$ to frame$_t$ should coincide with the location of the detection\nat frame$_t$. Essentially, supervision-by-registration augments the training\nloss function with a registration loss, thus training the detector to have\noutput that is not only close to the annotations in labeled images, but also\nconsistent with registration on large amounts of unlabeled videos. End-to-end\ntraining with the registration loss is made possible by a differentiable\nLucas-Kanade operation, which computes optical flow registration in the forward\npass, and back-propagates gradients that encourage temporal coherency in the\ndetector. The output of our method is a more precise image-based facial\nlandmark detector, which can be applied to single images or video. With\nsupervision-by-registration, we demonstrate (1) improvements in facial landmark\ndetection on both images (300W, ALFW) and video (300VW, Youtube-Celebrities),\nand (2) significant reduction of jittering in video detections.", "field": [], "task": ["Facial Landmark Detection", "Optical Flow Estimation"], "method": [], "dataset": ["300-VW (C)"], "metric": ["AUC0.08 private"], "title": "Supervision-by-Registration: An Unsupervised Approach to Improve the Precision of Facial Landmark Detectors"} {"abstract": "High performance face detection remains a very challenging problem,\nespecially when there exists many tiny faces. This paper presents a novel\nsingle-shot face detector, named Selective Refinement Network (SRN), which\nintroduces novel two-step classification and regression operations selectively\ninto an anchor-based face detector to reduce false positives and improve\nlocation accuracy simultaneously. In particular, the SRN consists of two\nmodules: the Selective Two-step Classification (STC) module and the Selective\nTwo-step Regression (STR) module. The STC aims to filter out most simple\nnegative anchors from low level detection layers to reduce the search space for\nthe subsequent classifier, while the STR is designed to coarsely adjust the\nlocations and sizes of anchors from high level detection layers to provide\nbetter initialization for the subsequent regressor. 
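The key-value memory record above uses different encodings for the addressing and output stages of a memory read. A minimal sketch of that read operation; the dot-product addressing and the array shapes are assumptions for illustration, not the authors' code.

```python
# Hedged sketch of a key-value memory read: keys address the memory, values are read out.
import numpy as np

def kv_memory_read(query, keys, values):
    """query: (d,); keys, values: (num_slots, d). Softmax over query-key scores
    selects slots; the output is the score-weighted sum of the value encodings."""
    scores = keys @ query                      # addressing stage
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax attention over memory slots
    return weights @ values                    # output stage

q = np.random.rand(64)
K, V = np.random.rand(100, 64), np.random.rand(100, 64)
print(kv_memory_read(q, K, V).shape)           # (64,)
```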
Moreover, we design a\nReceptive Field Enhancement (RFE) block to provide more diverse receptive\nfield, which helps to better capture faces in some extreme poses. As a\nconsequence, the proposed SRN detector achieves state-of-the-art performance on\nall the widely used face detection benchmarks, including AFW, PASCAL face,\nFDDB, and WIDER FACE datasets. Codes will be released to facilitate further\nstudies on the face detection problem.", "field": [], "task": ["Face Detection", "Regression"], "method": [], "dataset": ["WIDER Face (Medium)", "WIDER Face (Easy)", "Annotated Faces in the Wild", "PASCAL Face", "WIDER Face (Hard)", "FDDB"], "metric": ["AP"], "title": "Selective Refinement Network for High Performance Face Detection"} {"abstract": "Multi-object tracking (MOT) becomes more challenging when objects of interest have similar appearances. In that case, the motion cues are particularly useful for discriminating multiple objects. However, for online 2D MOT in scenes acquired from moving cameras, observable motion cues are complicated by global camera movements and thus not always smooth or predictable. To deal with such unexpected camera motion for online 2D MOT, a structural motion constraint between objects has been utilized thanks to its robustness to camera motion. In this paper, we propose a new data association method that effectively exploits structural motion constraints in the presence of large camera motion. In addition, to further improve the robustness of data association against mis-detections and clutters, a novel event aggregation approach is developed to integrate structural constraints in assignment costs for online MOT. Experimental results on a large number of datasets demonstrate the effectiveness of the proposed algorithm for online 2D MOT.", "field": [], "task": ["Multi-Object Tracking", "Object Tracking", "Online Multi-Object Tracking"], "method": [], "dataset": ["KITTI Tracking test"], "metric": ["MOTA"], "title": "Online Multi-Object Tracking via Structural Constraint Event Aggregation"} {"abstract": "Word sense induction (WSI), or the task of automatically discovering multiple\nsenses or meanings of a word, has three main challenges: domain adaptability,\nnovel sense detection, and sense granularity flexibility. While current latent\nvariable models are known to solve the first two challenges, they are not\nflexible to different word sense granularities, which differ very much among\nwords, from aardvark with one sense, to play with over 50 senses. Current\nmodels either require hyperparameter tuning or nonparametric induction of the\nnumber of senses, which we find both to be ineffective. Thus, we aim to\neliminate these requirements and solve the sense granularity problem by\nproposing AutoSense, a latent variable model based on two observations: (1)\nsenses are represented as a distribution over topics, and (2) senses generate\npairings between the target word and its neighboring word. These observations\nalleviate the problem by (a) throwing garbage senses and (b) additionally\ninducing fine-grained word senses. Results show great improvements over the\nstate-of-the-art models on popular WSI datasets. We also show that AutoSense is\nable to learn the appropriate sense granularity of a word. Finally, we apply\nAutoSense to the unsupervised author name disambiguation task where the sense\ngranularity problem is more evident and show that AutoSense is evidently better\nthan competing models. 
We share our data and code here:\nhttps://github.com/rktamplayo/AutoSense.", "field": [], "task": ["Latent Variable Models", "Word Sense Induction"], "method": [], "dataset": ["SemEval 2013", "SemEval 2010 WSI"], "metric": ["F_NMI", "F-BC", "V-Measure", "AVG", "F-Score"], "title": "AutoSense Model for Word Sense Induction"} {"abstract": "Despite the success of deep neural networks (DNNs) in image classification\ntasks, the human-level performance relies on massive training data with\nhigh-quality manual annotations, which are expensive and time-consuming to\ncollect. There exist many inexpensive data sources on the web, but they tend to\ncontain inaccurate labels. Training on noisy labeled datasets causes\nperformance degradation because DNNs can easily overfit to the label noise. To\novercome this problem, we propose a noise-tolerant training algorithm, where a\nmeta-learning update is performed prior to conventional gradient update. The\nproposed meta-learning method simulates actual training by generating synthetic\nnoisy labels, and train the model such that after one gradient update using\neach set of synthetic noisy labels, the model does not overfit to the specific\nnoise. We conduct extensive experiments on the noisy CIFAR-10 dataset and the\nClothing1M dataset. The results demonstrate the advantageous performance of the\nproposed method compared to several state-of-the-art baselines.", "field": [], "task": ["Image Classification", "Learning with noisy labels", "Meta-Learning"], "method": [], "dataset": ["Clothing1M"], "metric": ["Accuracy"], "title": "Learning to Learn from Noisy Labeled Data"} {"abstract": "Computational models of visual attention are at the crossroad of disciplines like cognitive science, computational neuroscience, and computer vision. This paper proposes a model of attentional scanpath that is based on the principle that there are foundational laws that drive the emergence of visual attention. We devise variational laws of the eye-movement that rely on a generalized view of the Least Action Principle in physics. The potential energy captures details as well as peripheral visual features, while the kinetic energy corresponds with the classic interpretation in analytic mechanics. In addition, the Lagrangian contains a brightness invariance term, which characterizes significantly the scanpath trajectories. We obtain differential equations of visual attention as the stationary point of the generalized action, and we propose an algorithm to estimate the model parameters. Finally, we report experimental results to validate the model in tasks of saliency detection.", "field": [], "task": ["Saliency Detection", "Scanpath prediction"], "method": [], "dataset": ["CAT2000"], "metric": ["NSS", "AUC"], "title": "Variational Laws of Visual Attention for Dynamic Scenes"} {"abstract": "In many real-world prediction tasks, class labels include information about the relative ordering between labels, which is not captured by commonly-used loss functions such as multi-category cross-entropy. Recently, the deep learning community adopted ordinal regression frameworks to take such ordering information into account. Neural networks were equipped with ordinal regression capabilities by transforming ordinal targets into binary classification subtasks. However, this method suffers from inconsistencies among the different binary classifiers. 
To resolve these inconsistencies, we propose the COnsistent RAnk Logits (CORAL) framework with strong theoretical guarantees for rank-monotonicity and consistent confidence scores. Moreover, the proposed method is architecture-agnostic and can extend arbitrary state-of-the-art deep neural network classifiers for ordinal regression tasks. The empirical evaluation of the proposed rank-consistent method on a range of face-image datasets for age prediction shows a substantial reduction of the prediction error compared to the reference ordinal regression network.", "field": [], "task": ["Age And Gender Classification", "Age Estimation", "Gender Prediction", "Regression"], "method": [], "dataset": ["MORPH Album2", "UTKFace", "AFAD", "CACD"], "metric": ["MAE"], "title": "Rank consistent ordinal regression for neural networks with application to age estimation"} {"abstract": "Learning good feature embeddings for images often requires substantial\ntraining data. As a consequence, in settings where training data is limited\n(e.g., few-shot and zero-shot learning), we are typically forced to use a\ngeneric feature embedding across various tasks. Ideally, we want to construct\nfeature embeddings that are tuned for the given task. In this work, we propose\nTask-Aware Feature Embedding Networks (TAFE-Nets) to learn how to adapt the\nimage representation to a new task in a meta learning fashion. Our network is\ncomposed of a meta learner and a prediction network. Based on a task input, the\nmeta learner generates parameters for the feature layers in the prediction\nnetwork so that the feature embedding can be accurately adjusted for that task.\nWe show that TAFE-Net is highly effective in generalizing to new tasks or\nconcepts and evaluate the TAFE-Net on a range of benchmarks in zero-shot and\nfew-shot learning. Our model matches or exceeds the state-of-the-art on all\ntasks. In particular, our approach improves the prediction accuracy of unseen\nattribute-object pairs by 4 to 15 points on the challenging visual\nattribute-object composition task.", "field": [], "task": ["Few-Shot Learning", "Meta-Learning", "Zero-Shot Learning"], "method": [], "dataset": ["SUN - 0-Shot", "AWA2 - 0-Shot", "aPY - 0-Shot", "AWA1 - 0-Shot", "CUB-200 - 0-Shot Learning"], "metric": ["Accuracy"], "title": "TAFE-Net: Task-Aware Feature Embeddings for Low Shot Learning"} {"abstract": "Knowledge graphs capture structured information and relations between a set of entities or items. As such knowledge graphs represent an attractive source of information that could help improve recommender systems. However, existing approaches in this domain rely on manual feature engineering and do not allow for an end-to-end training. Here we propose Knowledge-aware Graph Neural Networks with Label Smoothness regularization (KGNN-LS) to provide better recommendations. Conceptually, our approach computes user-specific item embeddings by first applying a trainable function that identifies important knowledge graph relationships for a given user. This way we transform the knowledge graph into a user-specific weighted graph and then apply a graph neural network to compute personalized item embeddings. To provide better inductive bias, we rely on label smoothness assumption, which posits that adjacent items in the knowledge graph are likely to have similar user relevance labels/scores. Label smoothness provides regularization over the edge weights and we prove that it is equivalent to a label propagation scheme on a graph. 
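The label-smoothness regularizer above is stated to be equivalent to label propagation on a graph. A small sketch of plain label propagation for reference: unobserved nodes repeatedly take the weighted average of their neighbours while observed labels stay clamped. The adjacency matrix, iteration count, and clamping scheme here are generic textbook choices, not the paper's setup.

```python
# Hedged sketch of label propagation on a graph (generic version, for illustration).
import numpy as np

def propagate_labels(adj, labels, mask, n_iters=50):
    """adj: (N, N) edge weights; labels: (N,) scores; mask: True where labels are observed."""
    deg = adj.sum(axis=1, keepdims=True) + 1e-12
    p = adj / deg                       # row-normalized transition matrix
    y = labels.astype(float).copy()
    for _ in range(n_iters):
        y = p @ y                       # smooth scores over neighbours
        y[mask] = labels[mask]          # clamp the observed labels
    return y

A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
y0 = np.array([1.0, 0.0, 0.0, 0.0])
observed = np.array([True, False, False, True])
print(propagate_labels(A, y0, observed))
```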
We also develop an efficient implementation that shows strong scalability with respect to the knowledge graph size. Experiments on four datasets show that our method outperforms state of the art baselines. KGNN-LS also achieves strong performance in cold-start scenarios where user-item interactions are sparse.", "field": [], "task": ["Feature Engineering", "Knowledge Graphs", "Recommendation Systems"], "method": [], "dataset": ["Last.FM", "MovieLens 20M", "Book-Crossing", "Dianping-Food"], "metric": ["Recall@100", "Recall@50", "Recall@2", "Recall@10"], "title": "Knowledge-aware Graph Neural Networks with Label Smoothness Regularization for Recommender Systems"} {"abstract": "Over the past few years, we have witnessed the success of deep learning in image recognition thanks to the availability of large-scale human-annotated datasets such as PASCAL VOC, ImageNet, and COCO. Although these datasets have covered a wide range of object categories, there are still a significant number of objects that are not included. Can we perform the same task without a lot of human annotations? In this paper, we are interested in few-shot object segmentation where the number of annotated training examples are limited to 5 only. To evaluate and validate the performance of our approach, we have built a few-shot segmentation dataset, FSS-1000, which consists of 1000 object classes with pixelwise annotation of ground-truth segmentation. Unique in FSS-1000, our dataset contains significant number of objects that have never been seen or annotated in previous datasets, such as tiny daily objects, merchandise, cartoon characters, logos, etc. We build our baseline model using standard backbone networks such as VGG-16, ResNet-101, and Inception. To our surprise, we found that training our model from scratch using FSS-1000 achieves comparable and even better results than training with weights pre-trained by ImageNet which is more than 100 times larger than FSS-1000. Both our approach and dataset are simple, effective, and easily extensible to learn segmentation of new object classes given very few annotated training examples. Dataset is available at https://github.com/HKUSTCV/FSS-1000.", "field": [], "task": ["Few-Shot Semantic Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["FSS-1000"], "metric": ["Mean IoU"], "title": "FSS-1000: A 1000-Class Dataset for Few-Shot Segmentation"} {"abstract": "The rapid growth of video on the internet has made searching for video content using natural language queries a significant challenge. Human-generated queries for video datasets `in the wild' vary a lot in terms of degree of specificity, with some queries describing specific details such as the names of famous identities, content from speech, or text available on the screen. Our goal is to condense the multi-modal, extremely high dimensional information from videos into a single, compact video representation for the task of video retrieval using free-form text queries, where the degree of specificity is open-ended. For this we exploit existing knowledge in the form of pre-trained semantic embeddings which include 'general' features such as motion, appearance, and scene features from visual content. We also explore the use of more 'specific' cues from ASR and OCR which are intermittently available for videos and find that these signals remain challenging to use effectively for retrieval. 
We propose a collaborative experts model to aggregate information from these different pre-trained experts and assess our approach empirically on five retrieval benchmarks: MSR-VTT, LSMDC, MSVD, DiDeMo, and ActivityNet. Code and data can be found at www.robots.ox.ac.uk/~vgg/research/collaborative-experts/. This paper contains a correction to results reported in the previous version.", "field": [], "task": ["Video Retrieval"], "method": [], "dataset": ["MSR-VTT-1kA", "MSVD", "LSMDC", "ActivityNet", "MSR-VTT", "DiDeMo"], "metric": ["text-to-video Median Rank", "text-to-video R@5", "video-to-text Mean Rank", "video-to-text R@10", "text-to-video R@50", "text-to-video R@1", "text-to-video Mean Rank", "video-to-text Median Rank", "video-to-text R@1", "text-to-video R@10", "video-to-text R@5"], "title": "Use What You Have: Video Retrieval Using Representations From Collaborative Experts"} {"abstract": "Background: Despite recent significant progress in the development of automatic sleep staging methods, building a good model still remains a big challenge for sleep studies with a small cohort due to the data-variability and data-inefficiency issues. This work presents a deep transfer learning approach to overcome these issues and enable transferring knowledge from a large dataset to a small cohort for automatic sleep staging. Methods: We start from a generic end-to-end deep learning framework for sequence-to-sequence sleep staging and derive two networks as the means for transfer learning. The networks are first trained in the source domain (i.e. the large database). The pretrained networks are then finetuned in the target domain (i.e. the small cohort) to complete knowledge transfer. We employ the Montreal Archive of Sleep Studies (MASS) database consisting of 200 subjects as the source domain and study deep transfer learning on three different target domains: the Sleep Cassette subset and the Sleep Telemetry subset of the Sleep-EDF Expanded database, and the Surrey-cEEGrid database. The target domains are purposely adopted to cover different degrees of data mismatch to the source domains. Results: Our experimental results show significant performance improvement on automatic sleep staging on the target domains achieved with the proposed deep transfer learning approach. Conclusions: These results suggest the efficacy of the proposed approach in addressing the above-mentioned data-variability and data-inefficiency issues. Significance: As a consequence, it would enable one to improve the quality of automatic sleep staging models when the amount of data is relatively small. The source code and the pretrained models are available at http://github.com/pquochuy/sleep_transfer_learning.", "field": [], "task": ["Automatic Sleep Stage Classification", "Multimodal Sleep Stage Detection", "Sleep Stage Detection", "Transfer Learning"], "method": [], "dataset": ["Surrey-PSG", "Surrey-cEEGGrid", "Sleep-EDF-ST", "Sleep-EDF-SC"], "metric": ["Accuracy"], "title": "Towards More Accurate Automatic Sleep Staging via Deep Transfer Learning"} {"abstract": "This paper studies learning the representations of whole graphs in both unsupervised and semi-supervised scenarios. Graph-level representations are critical in a variety of real-world applications such as predicting the properties of molecules and community analysis in social networks. 
Traditional graph kernel based methods are simple, yet effective for obtaining fixed-length representations for graphs but they suffer from poor generalization due to hand-crafted designs. There are also some recent methods based on language models (e.g. graph2vec) but they tend to only consider certain substructures (e.g. subtrees) as graph representatives. Inspired by recent progress of unsupervised representation learning, in this paper we proposed a novel method called InfoGraph for learning graph-level representations. We maximize the mutual information between the graph-level representation and the representations of substructures of different scales (e.g., nodes, edges, triangles). By doing so, the graph-level representations encode aspects of the data that are shared across different scales of substructures. Furthermore, we further propose InfoGraph*, an extension of InfoGraph for semi-supervised scenarios. InfoGraph* maximizes the mutual information between unsupervised graph representations learned by InfoGraph and the representations learned by existing supervised methods. As a result, the supervised encoder learns from unlabeled data while preserving the latent semantic space favored by the current supervised task. Experimental results on the tasks of graph classification and molecular property prediction show that InfoGraph is superior to state-of-the-art baselines and InfoGraph* can achieve performance competitive with state-of-the-art semi-supervised models.", "field": [], "task": ["Graph Classification", "Molecular Property Prediction", "Representation Learning", "Unsupervised Representation Learning"], "method": [], "dataset": ["IMDb-M", "PTC", "IMDb-B", "MUTAG"], "metric": ["Accuracy"], "title": "InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation Learning via Mutual Information Maximization"} {"abstract": "In matrix factorization, available graph side-information may not be well suited for the matrix completion problem, having edges that disagree with the latent-feature relations learnt from the incomplete data matrix. We show that removing these $\\textit{contested}$ edges improves prediction accuracy and scalability. We identify the contested edges through a highly-efficient graphical lasso approximation. The identification and removal of contested edges adds no computational complexity to state-of-the-art graph-regularized matrix factorization, remaining linear with respect to the number of non-zeros. Computational load even decreases proportional to the number of edges removed. Formulating a probabilistic generative model and using expectation maximization to extend graph-regularised alternating least squares (GRALS) guarantees convergence. Rich simulated experiments illustrate the desired properties of the resulting algorithm. On real data experiments we demonstrate improved prediction accuracy with fewer graph edges (empirical evidence that graph side-information is often inaccurate). 
A 300 thousand dimensional graph with three million edges (Yahoo music side-information) can be analyzed in under ten minutes on a standard laptop computer demonstrating the efficiency of our graph update.", "field": [], "task": ["Matrix Completion", "Recommendation Systems"], "method": [], "dataset": ["YahooMusic", "Flixster Monti", "MovieLens 20M", "Douban Monti", "MovieLens 100K"], "metric": ["RMSE (u1 Splits)", "RMSE"], "title": "Scalable Probabilistic Matrix Factorization with Graph-Based Priors"} {"abstract": "Generating diverse sequences is important in many NLP applications such as question generation or summarization that exhibit semantically one-to-many relationships between source and the target sequences. We present a method to explicitly separate diversification from generation using a general plug-and-play module (called SELECTOR) that wraps around and guides an existing encoder-decoder model. The diversification stage uses a mixture of experts to sample different binary masks on the source sequence for diverse content selection. The generation stage uses a standard encoder-decoder model given each selected content from the source sequence. Due to the non-differentiable nature of discrete sampling and the lack of ground truth labels for binary mask, we leverage a proxy for ground truth mask and adopt stochastic hard-EM for training. In question generation (SQuAD) and abstractive summarization (CNN-DM), our method demonstrates significant improvements in accuracy, diversity and training efficiency, including state-of-the-art top-1 accuracy in both datasets, 6% gain in top-5 accuracy, and 3.7 times faster training over a state of the art model. Our code is publicly available at https://github.com/clovaai/FocusSeq2Seq.", "field": [], "task": ["Abstractive Text Summarization", "Document Summarization", "Question Generation"], "method": [], "dataset": ["CNN / Daily Mail", "SQuAD1.1"], "metric": ["ROUGE-L", "BLEU-4", "ROUGE-1", "ROUGE-2"], "title": "Mixture Content Selection for Diverse Sequence Generation"} {"abstract": "Despite the recent success of end-to-end learned representations,\nhand-crafted optical flow features are still widely used in video analysis\ntasks. To fill this gap, we propose TVNet, a novel end-to-end trainable neural\nnetwork, to learn optical-flow-like features from data. TVNet subsumes a\nspecific optical flow solver, the TV-L1 method, and is initialized by unfolding\nits optimization iterations as neural layers. TVNet can therefore be used\ndirectly without any extra learning. Moreover, it can be naturally concatenated\nwith other task-specific networks to formulate an end-to-end architecture, thus\nmaking our method more efficient than current multi-stage approaches by\navoiding the need to pre-compute and store features on disk. Finally, the\nparameters of the TVNet can be further fine-tuned by end-to-end training. This\nenables TVNet to learn richer and task-specific patterns beyond exact optical\nflow. Extensive experiments on two action recognition benchmarks verify the\neffectiveness of the proposed approach. 
Our TVNet achieves better accuracies\nthan all compared methods, while being competitive with the fastest counterpart\nin terms of features extraction time.", "field": [], "task": ["Action Recognition", "Optical Flow Estimation", "Video Understanding"], "method": [], "dataset": ["UCF101", "HMDB-51"], "metric": ["Average accuracy of 3 splits", "3-fold Accuracy"], "title": "End-to-End Learning of Motion Representation for Video Understanding"} {"abstract": "In Visual Question Answering (VQA), answers have a great correlation with question meaning and visual contents. Thus, to selectively utilize image, question and answer information, we propose a novel trilinear interaction model which simultaneously learns high level associations between these three inputs. In addition, to overcome the interaction complexity, we introduce a multimodal tensor-based PARALIND decomposition which efficiently parameterizes trilinear interaction between the three inputs. Moreover, knowledge distillation is first time applied in Free-form Opened-ended VQA. It is not only for reducing the computational cost and required memory but also for transferring knowledge from trilinear interaction model to bilinear interaction model. The extensive experiments on benchmarking datasets TDIUC, VQA-2.0, and Visual7W show that the proposed compact trilinear interaction model achieves state-of-the-art results when using a single model on all three datasets.", "field": [], "task": ["Knowledge Distillation", "Question Answering", "Visual Question Answering"], "method": [], "dataset": ["VQA v2 test-dev", "Visual7W", "TDIUC"], "metric": ["Percentage correct", "Accuracy"], "title": "Compact Trilinear Interaction for Visual Question Answering"} {"abstract": "Recent studies in deep learning-based speech separation have proven the superiority of time-domain approaches to conventional time-frequency-based methods. Unlike the time-frequency domain approaches, the time-domain separation systems often receive input sequences consisting of a huge number of time steps, which introduces challenges for modeling extremely long sequences. Conventional recurrent neural networks (RNNs) are not effective for modeling such long sequences due to optimization difficulties, while one-dimensional convolutional neural networks (1-D CNNs) cannot perform utterance-level sequence modeling when its receptive field is smaller than the sequence length. In this paper, we propose dual-path recurrent neural network (DPRNN), a simple yet effective method for organizing RNN layers in a deep structure to model extremely long sequences. DPRNN splits the long sequential input into smaller chunks and applies intra- and inter-chunk operations iteratively, where the input length can be made proportional to the square root of the original sequence length in each operation. Experiments show that by replacing 1-D CNN with DPRNN and apply sample-level modeling in the time-domain audio separation network (TasNet), a new state-of-the-art performance on WSJ0-2mix is achieved with a 20 times smaller model than the previous best system.", "field": [], "task": ["Speech Separation"], "method": [], "dataset": ["wsj0-2mix"], "metric": ["SI-SDRi"], "title": "Dual-path RNN: efficient long sequence modeling for time-domain single-channel speech separation"} {"abstract": "The task of named entity recognition (NER) is normally divided into nested NER and flat NER depending on whether named entities are nested or not. 
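The DPRNN abstract above describes folding a very long sequence into short chunks and alternating intra-chunk and inter-chunk recurrent passes. The block below is an assumption-laden sketch of that idea (layer normalization and the exact projections in the published TasNet/DPRNN code are omitted).

```python
import torch
import torch.nn as nn

class DualPathBlock(nn.Module):
    """One intra-/inter-chunk pass in the spirit of DPRNN (a sketch, not the
    released implementation): the long sequence is folded into
    (n_chunks, chunk_len) so each RNN only ever sees short sequences."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.intra = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.inter = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.intra_proj = nn.Linear(2 * hidden, dim)
        self.inter_proj = nn.Linear(2 * hidden, dim)

    def forward(self, x):                      # x: (batch, n_chunks, chunk_len, dim)
        b, k, s, d = x.shape
        h, _ = self.intra(x.reshape(b * k, s, d))                  # model within each chunk
        x = x + self.intra_proj(h).reshape(b, k, s, d)
        h, _ = self.inter(x.transpose(1, 2).reshape(b * s, k, d))  # model across chunks
        x = x + self.inter_proj(h).reshape(b, s, k, d).transpose(1, 2)
        return x
```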
Models are usually separately developed for the two tasks, since sequence labeling models, the most widely used backbone for flat NER, are only able to assign a single label to a particular token, which is unsuitable for nested NER, where a token may be assigned several labels. In this paper, we propose a unified framework that is capable of handling both flat and nested NER tasks. Instead of treating the task of NER as a sequence labeling problem, we propose to formulate it as a machine reading comprehension (MRC) task. For example, extracting entities with the \\textsc{per} label is formalized as extracting answer spans to the question \"{\\it which person is mentioned in the text?}\". This formulation naturally tackles the entity overlapping issue in nested NER: the extraction of two overlapping entities for different categories requires answering two independent questions. Additionally, since the query encodes informative prior knowledge, this strategy facilitates the process of entity extraction, leading to better performance for not only nested NER, but also flat NER. We conduct experiments on both {\\em nested} and {\\em flat} NER datasets. Experimental results demonstrate the effectiveness of the proposed formulation. We are able to achieve a substantial performance boost over current SOTA models on nested NER datasets, i.e., +1.28, +2.55, +5.44, +6.37, respectively, on ACE04, ACE05, GENIA and KBP17, along with SOTA results on flat NER datasets, i.e., +0.24, +1.95, +0.21, +1.49, respectively, on English CoNLL 2003, English OntoNotes 5.0, Chinese MSRA, Chinese OntoNotes 4.0.", "field": [], "task": ["Chinese Named Entity Recognition", "Entity Extraction using GAN", "Machine Reading Comprehension", "Named Entity Recognition", "Nested Mention Recognition", "Nested Named Entity Recognition", "Reading Comprehension"], "method": [], "dataset": ["GENIA", "OntoNotes 4", "ACE 2004", "MSRA", "ACE 2005", "Ontonotes v5 (English)", "CoNLL 2003 (English)"], "metric": ["F1"], "title": "A Unified MRC Framework for Named Entity Recognition"} {"abstract": "In this work, we describe a new, general, and efficient method for unstructured point cloud labeling. As the question of efficiently using deep Convolutional Neural Networks (CNNs) on 3D data is still a pending issue, we propose a framework which applies CNNs on multiple 2D image views (or snapshots) of the point cloud. The approach consists of three core ideas. (i) We pick many suitable snapshots of the point cloud. We generate two types of images: a Red-Green-Blue (RGB) view and a depth composite view containing geometric features. (ii) We then perform a pixel-wise labeling of each pair of 2D snapshots using fully convolutional networks. Different architectures are tested to achieve a profitable fusion of our heterogeneous inputs. (iii) Finally, we perform fast back-projection of the label predictions in the 3D space using efficient buffering to label every 3D point. Experiments show that our method is suitable for various types of point clouds such as Lidar or photogrammetric data.", "field": [], "task": ["Semantic Segmentation"], "method": [], "dataset": ["Semantic3D"], "metric": ["mIoU"], "title": "Unstructured point cloud semantic labeling using deep segmentation networks"} {"abstract": "Visual and audio modalities are highly correlated, yet they contain different information. Their strong correlation makes it possible to predict the semantics of one from the other with good accuracy.
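The MRC reformulation of NER described above boils down to pairing each sentence with one natural-language query per entity type and treating extraction as span prediction. The tiny data-preparation illustration below shows that pairing; the query strings and the helper function are hypothetical, since the paper defines its own query templates.

```python
# Hypothetical query templates, one per entity type (not the paper's exact wording).
LABEL_QUERIES = {
    "PER": "which person is mentioned in the text?",
    "ORG": "which organization is mentioned in the text?",
    "LOC": "which location is mentioned in the text?",
}

def build_mrc_examples(sentence: str):
    """Turn one sentence into one (query, context) pair per entity type,
    so a span-prediction model can answer each query independently."""
    return [
        {"query": query, "context": sentence, "label": label}
        for label, query in LABEL_QUERIES.items()
    ]

print(build_mrc_examples("Barack Obama visited Microsoft in Seattle."))
```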
Their intrinsic differences make cross-modal prediction a potentially more rewarding pretext task for self-supervised learning of video and audio representations compared to within-modality learning. Based on this intuition, we propose Cross-Modal Deep Clustering (XDC), a novel self-supervised method that leverages unsupervised clustering in one modality (e.g., audio) as a supervisory signal for the other modality (e.g., video). This cross-modal supervision helps XDC utilize the semantic correlation and the differences between the two modalities. Our experiments show that XDC outperforms single-modality clustering and other multi-modal variants. XDC achieves state-of-the-art accuracy among self-supervised methods on multiple video and audio benchmarks. Most importantly, our video model pretrained on large-scale unlabeled data significantly outperforms the same model pretrained with full-supervision on ImageNet and Kinetics for action recognition on HMDB51 and UCF101. To the best of our knowledge, XDC is the first self-supervised learning method that outperforms large-scale fully-supervised pretraining for action recognition on the same architecture.", "field": [], "task": ["Action Recognition", "Audio Classification", "Deep Clustering", "Representation Learning", "Self-Supervised Action Recognition", "Self-Supervised Audio Classification", "Self-Supervised Learning"], "method": [], "dataset": ["DCASE", "UCF101", "HMDB51", "ESC-50"], "metric": ["3-fold Accuracy", "PRE-TRAINING DATASET", "Pre-Training Dataset", "Top-1 Accuracy"], "title": "Self-Supervised Learning by Cross-Modal Audio-Video Clustering"} {"abstract": "The ability to identify the same person from multiple camera views without the explicit use of facial recognition is receiving commercial and academic interest. The current status-quo solutions are based on attention neural models. In this paper, we propose Attention and CL loss, which is a hybrid of center and Online Soft Mining (OSM) loss added to the attention loss on top of a temporal attention-based neural network. The proposed loss function applied with bag-of-tricks for training surpasses the state of the art on the common person Re-ID datasets, MARS and PRID 2011. Our source code is publicly available on github.", "field": [], "task": ["Person Re-Identification", "Video-Based Person Re-Identification"], "method": [], "dataset": ["MARS", "PRID2011"], "metric": ["Rank-1", "mAP"], "title": "Video Person Re-ID: Fantastic Techniques and Where to Find Them"} {"abstract": "Recent strategies achieved ensembling \"for free\" by fitting concurrently diverse subnetworks inside a single base network. The main idea during training is that each subnetwork learns to classify only one of the multiple inputs simultaneously provided. However, the question of how to best mix these multiple inputs has not been studied so far. In this paper, we introduce MixMo, a new generalized framework for learning multi-input multi-output deep subnetworks. Our key motivation is to replace the suboptimal summing operation hidden in previous approaches by a more appropriate mixing mechanism. For that purpose, we draw inspiration from successful mixed sample data augmentations. We show that binary mixing in features - particularly with rectangular patches from CutMix - enhances results by making subnetworks stronger and more diverse. We improve state of the art for image classification on CIFAR-100 and Tiny ImageNet datasets. 
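As a rough sketch of the cross-modal supervision idea in the XDC abstract above (not the authors' pipeline; the feature extractors, the number of clusters, and the swap schedule are assumptions), one modality's clustering assignments can serve as classification targets for the other modality's encoder:

```python
import numpy as np
from sklearn.cluster import KMeans

def cross_modal_pseudo_labels(audio_feats, video_feats, k=256, seed=0):
    """Cluster each modality's features and swap the assignments: the audio
    clusters become the targets used to train the video encoder, and vice
    versa. audio_feats, video_feats: (num_clips, feat_dim) numpy arrays."""
    audio_labels = KMeans(n_clusters=k, random_state=seed).fit_predict(audio_feats)
    video_labels = KMeans(n_clusters=k, random_state=seed).fit_predict(video_feats)
    return {"targets_for_video": audio_labels, "targets_for_audio": video_labels}
```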
Our easy to implement models notably outperform data augmented deep ensembles, without the inference and memory overheads. As we operate in features and simply better leverage the expressiveness of large networks, we open a new line of research complementary to previous works.", "field": ["Image Data Augmentation", "Graph Embeddings"], "task": [], "method": ["CutMix", "LINE", "Large-scale Information Network Embedding"], "dataset": ["Tiny ImageNet Classification", "CIFAR-100", "CIFAR-10"], "metric": ["Validation Acc", "Percentage correct"], "title": "MixMo: Mixing Multiple Inputs for Multiple Outputs via Deep Subnetworks"} {"abstract": "Recommender systems need to mirror the complexity of the environment they are applied in. The more we know about what might benefit the user, the more objectives the recommender system has. In addition there may be multiple stakeholders - sellers, buyers, shareholders - in addition to legal and ethical constraints. Simultaneously optimizing for a multitude of objectives, correlated and not correlated, having the same scale or not, has proven difficult so far. We introduce a stochastic multi-gradient descent approach to recommender systems (MGDRec) to solve this problem. We show that this exceeds state-of-the-art methods in traditional objective mixtures, like revenue and recall. Not only that, but through gradient normalization we can combine fundamentally different objectives, having diverse scales, into a single coherent framework. We show that uncorrelated objectives, like the proportion of quality products, can be improved alongside accuracy. Through the use of stochasticity, we avoid the pitfalls of calculating full gradients and provide a clear setting for its applicability.", "field": [], "task": ["Recommendation Systems"], "method": [], "dataset": ["Amazon Books", "MovieLens 20M"], "metric": ["Recall@20"], "title": "Multi-Gradient Descent for Multi-Objective Recommender Systems"} {"abstract": "Click-through rate (CTR) prediction is a crucial task in online display advertising. The embedding-based neural networks have been proposed to learn both explicit feature interactions through a shallow component and deep feature interactions using a deep neural network (DNN) component. These sophisticated models, however, slow down the prediction inference by at least hundreds of times. To address the issue of significantly increased serving delay and high memory usage for ad serving in production, this paper presents \\emph{DeepLight}: a framework to accelerate the CTR predictions in three aspects: 1) accelerate the model inference via explicitly searching informative feature interactions in the shallow component; 2) prune redundant layers and parameters at intra-layer and inter-layer level in the DNN component; 3) promote the sparsity of the embedding layer to preserve the most discriminant signals. By combining the above efforts, the proposed approach accelerates the model inference by 46X on Criteo dataset and 27X on Avazu dataset without any loss on the prediction accuracy. 
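The MixMo abstract above replaces the usual summing of the two embedded inputs with binary, CutMix-like patch mixing in feature space. Below is a minimal sketch of that single mixing step under assumed tensor shapes; the two-output head and the label reweighting used for training are omitted.

```python
import torch

def cutmix_style_feature_mix(f1, f2, lam):
    """Binary feature mixing: a rectangular patch of f2 overwrites the
    corresponding region of f1, with patch area roughly (1 - lam).
    f1, f2: (B, C, H, W) feature maps; lam in (0, 1)."""
    _, _, h, w = f1.shape
    cut_h, cut_w = int(h * (1 - lam) ** 0.5), int(w * (1 - lam) ** 0.5)
    cy, cx = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    mixed = f1.clone()
    mixed[:, :, y1:y2, x1:x2] = f2[:, :, y1:y2, x1:x2]
    return mixed
```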
This paves the way for successfully deploying complicated embedding-based neural networks in production for ad serving.", "field": [], "task": ["Click-Through Rate Prediction"], "method": [], "dataset": ["Avazu", "Criteo"], "metric": ["Log Loss", "LogLoss", "AUC"], "title": "DeepLight: Deep Lightweight Feature Interactions for Accelerating CTR Predictions in Ad Serving"} {"abstract": "Estimating 3D poses of multiple humans in real-time is a classic but still challenging task in computer vision. Its major difficulty lies in the ambiguity in cross-view association of 2D poses and the huge state space when there are multiple people in multiple views. In this paper, we present a novel solution for multi-human 3D pose estimation from multiple calibrated camera views. It takes 2D poses in different camera coordinates as inputs and aims for the accurate 3D poses in the global coordinate. Unlike previous methods that associate 2D poses among all pairs of views from scratch at every frame, we exploit the temporal consistency in videos to match the 2D inputs with 3D poses directly in 3-space. More specifically, we propose to retain the 3D pose for each person and update them iteratively via the cross-view multi-human tracking. This novel formulation improves both accuracy and efficiency, as we demonstrated on widely-used public datasets. To further verify the scalability of our method, we propose a new large-scale multi-human dataset with 12 to 28 camera views. Without bells and whistles, our solution achieves 154 FPS on 12 cameras and 34 FPS on 28 cameras, indicating its ability to handle large-scale real-world applications. The proposed dataset will be released soon.", "field": [], "task": ["3D Multi-Person Pose Estimation", "3D Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["Campus", "Shelf"], "metric": ["PCP3D"], "title": "Cross-View Tracking for Multi-Human 3D Pose Estimation at over 100 FPS"} {"abstract": "A popular method for anomaly detection is to use the generator of an adversarial network to formulate anomaly scores over reconstruction loss of input. Due to the rare occurrence of anomalies, optimizing such networks can be a cumbersome task. Another possible approach is to use both generator and discriminator for anomaly detection. However, attributed to the involvement of adversarial training, this model is often unstable in a way that the performance fluctuates drastically with each training step. In this study, we propose a framework that effectively generates stable results across a wide range of training steps and allows us to use both the generator and the discriminator of an adversarial model for efficient and robust anomaly detection. Our approach transforms the fundamental role of a discriminator from identifying real and fake data to distinguishing between good and bad quality reconstructions. To this end, we prepare training examples for the good quality reconstruction by employing the current generator, whereas poor quality examples are obtained by utilizing an old state of the same generator. This way, the discriminator learns to detect subtle distortions that often appear in reconstructions of the anomaly inputs. Extensive experiments performed on Caltech-256 and MNIST image datasets for novelty detection show superior results. 
Furthermore, on UCSD Ped2 video dataset for anomaly detection, our model achieves a frame-level AUC of 98.1%, surpassing recent state-of-the-art methods.", "field": [], "task": ["Anomaly Detection", "One-class classifier"], "method": [], "dataset": ["MNIST-test"], "metric": ["F1 score"], "title": "Old is Gold: Redefining the Adversarially Learned One-Class Classifier Training Paradigm"} {"abstract": "We study the problem of semi-supervised learning on graphs, for which graph neural networks (GNNs) have been extensively explored. However, most existing GNNs inherently suffer from the limitations of over-smoothing, non-robustness, and weak-generalization when labeled nodes are scarce. In this paper, we propose a simple yet effective framework---GRAPH RANDOM NEURAL NETWORKS (GRAND)---to address these issues. In GRAND, we first design a random propagation strategy to perform graph data augmentation. Then we leverage consistency regularization to optimize the prediction consistency of unlabeled nodes across different data augmentations. Extensive experiments on graph benchmark datasets suggest that GRAND significantly outperforms state-of-the-art GNN baselines on semi-supervised node classification. Finally, we show that GRAND mitigates the issues of over-smoothing and non-robustness, exhibiting better generalization behavior than existing GNNs. The source code of GRAND is publicly available at https://github.com/Grand20/grand.", "field": [], "task": ["Data Augmentation", "Graph Learning", "Node Classification"], "method": [], "dataset": ["Cora with Public Split: fixed 20 nodes per class", "CiteSeer with Public Split: fixed 20 nodes per class", "PubMed with Public Split: fixed 20 nodes per class"], "metric": ["Accuracy"], "title": "Graph Random Neural Network for Semi-Supervised Learning on Graphs"} {"abstract": "Recovering the 3D shape of an object from single or multiple images with deep neural networks has been attracting increasing attention in the past few years. Mainstream works (e.g. 3D-R2N2) use recurrent neural networks (RNNs) to sequentially fuse feature maps of input images. However, RNN-based approaches are unable to produce consistent reconstruction results when given the same input images with different orders. Moreover, RNNs may forget important features from early input images due to long-term memory loss. To address these issues, we propose a novel framework for single-view and multi-view 3D object reconstruction, named Pix2Vox++. By using a well-designed encoder-decoder, it generates a coarse 3D volume from each input image. A multi-scale context-aware fusion module is then introduced to adaptively select high-quality reconstructions for different parts from all coarse 3D volumes to obtain a fused 3D volume. To further correct the wrongly recovered parts in the fused 3D volume, a refiner is adopted to generate the final output. Experimental results on the ShapeNet, Pix3D, and Things3D benchmarks show that Pix2Vox++ performs favorably against state-of-the-art methods in terms of both accuracy and efficiency.", "field": [], "task": ["3D Object Reconstruction", "Object Reconstruction"], "method": [], "dataset": ["Data3D\u2212R2N2"], "metric": ["3DIoU"], "title": "Pix2Vox++: Multi-scale Context-aware 3D Object Reconstruction from Single and Multiple Images"} {"abstract": "We present SwapNet, a framework to transfer garments across images of people with arbitrary body pose, shape, and clothing. 
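The GRAND abstract above hinges on a random-propagation augmentation of the node features. The toy numpy sketch below illustrates that ingredient under stated assumptions (a row-normalized adjacency matrix A_hat, dense features, and whole-row "DropNode" masking); the consistency-regularization loss over multiple augmentations is not shown.

```python
import numpy as np

def random_propagation(X, A_hat, drop_prob=0.5, order=2):
    """Randomly zero entire node feature rows, then average the perturbed
    features with their propagated versions A_hat @ H, A_hat^2 @ H, ...
    X: (N, F) node features, A_hat: (N, N) row-normalized adjacency."""
    mask = (np.random.rand(X.shape[0], 1) > drop_prob).astype(X.dtype)
    H = X * mask / (1.0 - drop_prob)        # rescale so the expectation is unchanged
    out, P = H.copy(), H.copy()
    for _ in range(order):
        P = A_hat @ P                       # one more propagation step
        out += P
    return out / (order + 1)
```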
Garment transfer is a challenging task that requires (i) disentangling the features of the clothing from the body pose and shape and (ii) realistic synthesis of the garment texture on the new body. We present a neural network architecture that tackles these sub-problems with two task-specific sub-networks. Since acquiring pairs of images showing the same clothing on different bodies is difficult, we propose a novel weakly-supervised approach that generates training pairs from a single image via data augmentation. We present the first fully automatic method for garment transfer in unconstrained images without solving the difficult 3D reconstruction problem. We demonstrate a variety of transfer results and highlight our advantages over traditional image-to-image and analogy pipelines.", "field": [], "task": ["Virtual Try-on"], "method": [], "dataset": ["FashionIQ"], "metric": ["10 fold Cross validation"], "title": "SwapNet: Garment Transfer in Single View Images"} {"abstract": "Motion plays a crucial role in understanding videos and most state-of-the-art neural models for video classification incorporate motion information typically using optical flows extracted by a separate off-the-shelf method. As the frame-by-frame optical flows require heavy computation, incorporating motion information has remained a major computational bottleneck for video understanding. In this work, we replace external and heavy computation of optical flows with internal and light-weight learning of motion features. We propose a trainable neural module, dubbed MotionSqueeze, for effective motion feature extraction. Inserted in the middle of any neural network, it learns to establish correspondences across frames and convert them into motion features, which are readily fed to the next downstream layer for better prediction. We demonstrate that the proposed method provides a significant gain on four standard benchmarks for action recognition with only a small amount of additional cost, outperforming the state of the art on Something-Something-V1&V2 datasets.", "field": [], "task": ["Action Classification", "Action Recognition", "Video Classification", "Video Understanding"], "method": [], "dataset": ["Kinetics-400", "HMDB-51", "Something-Something V2", "Something-Something V1"], "metric": ["Top 1 Accuracy", "Top-5 Accuracy", "Top-1 Accuracy", "Average accuracy of 3 splits", "Top 5 Accuracy", "Vid acc@1"], "title": "MotionSqueeze: Neural Motion Feature Learning for Video Understanding"} {"abstract": "In this paper, we introduce a new reinforcement learning (RL) based neural architecture search (NAS) methodology for effective and efficient generative adversarial network (GAN) architecture search. The key idea is to formulate the GAN architecture search problem as a Markov decision process (MDP) for smoother architecture sampling, which enables a more effective RL-based search algorithm by targeting the potential global optimal architecture. To improve efficiency, we exploit an off-policy GAN architecture search algorithm that makes efficient use of the samples generated by previous policies. Evaluation on two standard benchmark datasets (i.e., CIFAR-10 and STL-10) demonstrates that the proposed method is able to discover highly competitive architectures for generally better image generation results with a considerably reduced computational burden: 7 GPU hours. 
Our code is available at https://github.com/Yuantian013/E2GAN.", "field": [], "task": ["Image Generation", "Neural Architecture Search"], "method": [], "dataset": ["STL-10", "CIFAR-10"], "metric": ["Inception score", "FID"], "title": "Off-Policy Reinforcement Learning for Efficient and Effective GAN Architecture Search"} {"abstract": "We introduce Transductive Infomation Maximization (TIM) for few-shot learning. Our method maximizes the mutual information between the query features and their label predictions for a given few-shot task, in conjunction with a supervision loss based on the support set. Furthermore, we propose a new alternating-direction solver for our mutual-information loss, which substantially speeds up transductive-inference convergence over gradient-based optimization, while yielding similar accuracy. TIM inference is modular: it can be used on top of any base-training feature extractor. Following standard transductive few-shot settings, our comprehensive experiments demonstrate that TIM outperforms state-of-the-art methods significantly across various datasets and networks, while used on top of a fixed feature extractor trained with simple cross-entropy on the base classes, without resorting to complex meta-learning schemes. It consistently brings between 2% and 5% improvement in accuracy over the best performing method, not only on all the well-established few-shot benchmarks but also on more challenging scenarios,with domain shifts and larger numbers of classes.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Meta-Learning"], "method": [], "dataset": ["Mini-ImageNet - 5-Shot Learning", "Mini-Imagenet 5-way (1-shot)", "Tiered ImageNet 5-way (1-shot)", "Tiered ImageNet 5-way (5-shot)", "Mini-Imagenet 20-way (1-shot)", "Mini-Imagenet 10-way (1-shot)", "Mini-ImageNet to CUB - 5 shot learning", "CUB 200 5-way 1-shot", "Mini-Imagenet 20-way (5-shot)", "CUB 200 5-way 5-shot", "Mini-Imagenet 10-way (5-shot)"], "metric": ["Accuracy"], "title": "Transductive Information Maximization For Few-Shot Learning"} {"abstract": "We study the multi-round response generation in visual dialog, where a\nresponse is generated according to a visually grounded conversational history.\nGiven a triplet: an image, Q&A history, and current question, all the\nprevailing methods follow a codec (i.e., encoder-decoder) fashion in a\nsupervised learning paradigm: a multimodal encoder encodes the triplet into a\nfeature vector, which is then fed into the decoder for the current answer\ngeneration, supervised by the ground-truth. However, this conventional\nsupervised learning does NOT take into account the impact of imperfect history,\nviolating the conversational nature of visual dialog and thus making the codec\nmore inclined to learn history bias but not contextual reasoning. To this end,\ninspired by the actor-critic policy gradient in reinforcement learning, we\npropose a novel training paradigm called History Advantage Sequence Training\n(HAST). Specifically, we intentionally impose wrong answers in the history,\nobtaining an adverse critic, and see how the historic error impacts the codec's\nfuture behavior by History Advantage-a quantity obtained by subtracting the\nadverse critic from the gold reward of ground-truth history. Moreover, to make\nthe codec more sensitive to the history, we propose a novel attention network\ncalled History-Aware Co-Attention Network (HACAN) which can be effectively\ntrained by using HAST. 
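The TIM abstract above combines a support-set cross-entropy with a mutual-information term over the query predictions. A compact sketch of such an objective is given below; the weighting and numerical details are assumptions, and the paper's alternating-direction solver is not shown.

```python
import torch
import torch.nn.functional as F

def tim_style_loss(logits_support, y_support, logits_query, w_mi=0.1):
    """Cross-entropy on the labeled support set plus a mutual-information
    term on unlabeled query predictions: high marginal entropy (use all
    classes) and low conditional entropy (confident per-query predictions)."""
    p_q = F.softmax(logits_query, dim=1)
    cond_ent = -(p_q * torch.log(p_q + 1e-12)).sum(1).mean()   # H(Y|X) over queries
    marginal = p_q.mean(0)
    marg_ent = -(marginal * torch.log(marginal + 1e-12)).sum() # H(Y) of the marginal
    ce = F.cross_entropy(logits_support, y_support)
    return ce + w_mi * (cond_ent - marg_ent)   # minimizing this maximizes MI
```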
Experimental results on three benchmarks: VisDial\nv0.9&v1.0 and GuessWhat?!, show that the proposed HAST strategy consistently\noutperforms the state-of-the-art supervised counterparts.", "field": [], "task": ["Visual Dialog", "Visual Reasoning"], "method": [], "dataset": ["Visual Dialog v1.0 test-std", "VisDial v0.9 val"], "metric": ["MRR (x 100)", "R@10", "NDCG (x 100)", "R@5", "Mean Rank", "MRR", "Mean", "R@1"], "title": "Making History Matter: History-Advantage Sequence Training for Visual Dialog"} {"abstract": "Joint extraction of entities and relations is an important task in natural language processing (NLP), which aims to capture all relational triplets from plain texts. This is a big challenge due to some of the triplets extracted from one sentence may have overlapping entities. Most existing methods perform entity recognition followed by relation detection between every possible entity pairs, which usually suffers from numerous redundant operations. In this paper, we propose a relation-specific attention network (RSAN) to handle the issue. Our RSAN utilizes relation-aware attention mechanism to construct specific sentence representations for each relation, and then performs sequence labeling to extract its corresponding head and tail entities. Experiments on two public datasets show that our model can effectively extract overlapping triplets and achieve state-of-the-art performance. Our code is available at https://github.com/Anery/RSAN", "field": [], "task": ["Joint Entity and Relation Extraction", "Relation Extraction"], "method": [], "dataset": ["NYT", "WebNLG"], "metric": ["F1"], "title": "A Relation-Specific Attention Network for Joint Entity and Relation Extraction"} {"abstract": "It is well known that human gaze carries significant information about visual attention. However, there are three main difficulties in incorporating the gaze data in an attention mechanism of deep neural networks: 1) the gaze fixation points are likely to have measurement errors due to blinking and rapid eye movements; 2) it is unclear when and how much the gaze data is correlated with visual attention; and 3) gaze data is not always available in many real-world situations. In this work, we introduce an effective probabilistic approach to integrate human gaze into spatiotemporal attention for egocentric activity recognition. Specifically, we represent the locations of gaze fixation points as structured discrete latent variables to model their uncertainties. In addition, we model the distribution of gaze fixations using a variational method. The gaze distribution is learned during the training process so that the ground-truth annotations of gaze locations are no longer needed in testing situations since they are predicted from the learned gaze distribution. The predicted gaze locations are used to provide informative attentional cues to improve the recognition performance. Our method outperforms all the previous state-of-the-art approaches on EGTEA, which is a large-scale dataset for egocentric activity recognition provided with gaze measurements. 
We also perform an ablation study and qualitative analysis to demonstrate that our attention mechanism is effective.", "field": [], "task": ["Action Recognition", "Egocentric Activity Recognition"], "method": [], "dataset": ["EGTEA"], "metric": ["Mean class accuracy", "Average Accuracy"], "title": "Integrating Human Gaze into Attention for Egocentric Activity Recognition"} {"abstract": "Previous work introduced transition-based algorithms to form a unified architecture of parsing rhetorical structures (including span, nuclearity and relation), but did not achieve satisfactory performance. In this paper, we propose that transition-based model is more appropriate for parsing the naked discourse tree (i.e., identifying span and nuclearity) due to data sparsity. At the same time, we argue that relation labeling can benefit from naked tree structure and should be treated elaborately with consideration of three kinds of relations including within-sentence, across-sentence and across-paragraph relations. Thus, we design a pipelined two-stage parsing method for generating an RST tree from text. Experimental results show that our method achieves state-of-the-art performance, especially on span and nuclearity identification.", "field": [], "task": ["Discourse Parsing"], "method": [], "dataset": ["RST-DT"], "metric": ["RST-Parseval (Relation)", "RST-Parseval (Span)", "RST-Parseval (Nuclearity)"], "title": "A Two-Stage Parsing Method for Text-Level Discourse Analysis"} {"abstract": "Recurrent neural networks are powerful models for processing sequential data,\nbut they are generally plagued by vanishing and exploding gradient problems.\nUnitary recurrent neural networks (uRNNs), which use unitary recurrence\nmatrices, have recently been proposed as a means to avoid these issues.\nHowever, in previous experiments, the recurrence matrices were restricted to be\na product of parameterized unitary matrices, and an open question remains: when\ndoes such a parameterization fail to represent all unitary matrices, and how\ndoes this restricted representational capacity limit what can be learned? To\naddress this question, we propose full-capacity uRNNs that optimize their\nrecurrence matrix over all unitary matrices, leading to significantly improved\nperformance over uRNNs that use a restricted-capacity recurrence matrix. Our\ncontribution consists of two main components. First, we provide a theoretical\nargument to determine if a unitary parameterization has restricted capacity.\nUsing this argument, we show that a recently proposed unitary parameterization\nhas restricted capacity for hidden state dimension greater than 7. Second, we\nshow how a complete, full-capacity unitary recurrence matrix can be optimized\nover the differentiable manifold of unitary matrices. The resulting\nmultiplicative gradient step is very simple and does not require gradient\nclipping or learning rate adaptation. 
We confirm the utility of our claims by\nempirically evaluating our new full-capacity uRNNs on both synthetic and\nnatural data, achieving superior performance compared to both LSTMs and the\noriginal restricted-capacity uRNNs.", "field": [], "task": ["Sequential Image Classification"], "method": [], "dataset": ["Sequential MNIST"], "metric": ["Permuted Accuracy", "Unpermuted Accuracy"], "title": "Full-Capacity Unitary Recurrent Neural Networks"} {"abstract": "Clustering is central to many data-driven application domains and has been\nstudied extensively in terms of distance functions and grouping algorithms.\nRelatively little work has focused on learning representations for clustering.\nIn this paper, we propose Deep Embedded Clustering (DEC), a method that\nsimultaneously learns feature representations and cluster assignments using\ndeep neural networks. DEC learns a mapping from the data space to a\nlower-dimensional feature space in which it iteratively optimizes a clustering\nobjective. Our experimental evaluations on image and text corpora show\nsignificant improvement over state-of-the-art methods.", "field": [], "task": ["Image Clustering", "Unsupervised Image Classification"], "method": [], "dataset": ["CMU-PIE", "Imagenet-dog-15", "YouTube Faces DB", "CIFAR-100", "CIFAR-10", "Tiny-ImageNet", "ImageNet-10", "STL-10", "SVHN"], "metric": ["Acc", "Train set", "Train Split", "ARI", "# of clusters (k)", "Backbone", "Train Set", "NMI", "Accuracy"], "title": "Unsupervised Deep Embedding for Clustering Analysis"} {"abstract": "Disentangling conversations mixed together in a single stream of messages is a difficult task, made harder by the lack of large manually annotated datasets. We created a new dataset of 77,563 messages manually annotated with reply-structure graphs that both disentangle conversations and define internal conversation structure. Our dataset is 16 times larger than all previously released datasets combined, the first to include adjudication of annotation disagreements, and the first to include context. We use our data to re-examine prior work, in particular, finding that 80% of conversations in a widely used dialogue corpus are either missing messages or contain extra messages. Our manually-annotated data presents an opportunity to develop robust data-driven methods for conversation disentanglement, which will help advance dialogue research.", "field": [], "task": ["Conversation Disentanglement"], "method": [], "dataset": ["irc-disentanglement", "Linux IRC (Ch2 Elsner)", "Linux IRC (Ch2 Kummerfeld)"], "metric": ["F", "P", "Local", "1-1", "Shen F-1", "VI", "R"], "title": "A Large-Scale Corpus for Conversation Disentanglement"} {"abstract": "Most conditional generation tasks expect diverse outputs given a single conditional context. However, conditional generative adversarial networks (cGANs) often focus on the prior conditional information and ignore the input noise vectors, which contribute to the output variations. Recent attempts to resolve the mode collapse issue for cGANs are usually task-specific and computationally expensive. In this work, we propose a simple yet effective regularization term to address the mode collapse issue for cGANs. The proposed method explicitly maximizes the ratio of the distance between generated images with respect to the corresponding latent codes, thus encouraging the generators to explore more minor modes during training. 
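The full-capacity uRNN abstract above mentions a simple multiplicative gradient step that keeps the recurrence matrix unitary. One standard way to realize such a step is a Cayley-transform retraction along a skew-Hermitian direction, sketched below as a generic illustration under one common convention, not necessarily the exact update used in the paper.

```python
import numpy as np

def cayley_unitary_update(W, G, lr):
    """Multiplicative update that stays on the unitary manifold.
    W: current unitary matrix (complex ndarray), G: Euclidean gradient dL/dW.
    The Cayley transform of a skew-Hermitian matrix is unitary, so the
    returned matrix remains unitary up to numerical error."""
    A = G @ W.conj().T - W @ G.conj().T            # skew-Hermitian direction
    I = np.eye(W.shape[0], dtype=W.dtype)
    return np.linalg.solve(I + (lr / 2) * A, (I - (lr / 2) * A) @ W)
```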
This mode seeking regularization term is readily applicable to various conditional generation tasks without imposing training overhead or modifying the original network structures. We validate the proposed algorithm on three conditional image synthesis tasks including categorical generation, image-to-image translation, and text-to-image synthesis with different baseline models. Both qualitative and quantitative results demonstrate the effectiveness of the proposed regularization method for improving diversity without loss of quality.", "field": [], "task": ["Image Generation", "Image-to-Image Translation", "Multimodal Unsupervised Image-To-Image Translation"], "method": [], "dataset": ["AFHQ", "CelebA-HQ", "CIFAR-10"], "metric": ["FID"], "title": "Mode Seeking Generative Adversarial Networks for Diverse Image Synthesis"} {"abstract": "Recent success of semantic segmentation approaches on demanding road driving\ndatasets has spurred interest in many related application fields. Many of these\napplications involve real-time prediction on mobile platforms such as cars,\ndrones and various kinds of robots. Real-time setup is challenging due to\nextraordinary computational complexity involved. Many previous works address\nthe challenge with custom lightweight architectures which decrease\ncomputational complexity by reducing depth, width and layer capacity with\nrespect to general purpose architectures. We propose an alternative approach\nwhich achieves a significantly better performance across a wide range of\ncomputing budgets. First, we rely on a light-weight general purpose\narchitecture as the main recognition engine. Then, we leverage light-weight\nupsampling with lateral connections as the most cost-effective solution to\nrestore the prediction resolution. Finally, we propose to enlarge the receptive\nfield by fusing shared features at multiple resolutions in a novel fashion.\nExperiments on several road driving datasets show a substantial advantage of\nthe proposed approach, either with ImageNet pre-trained parameters or when we\nlearn from scratch. Our Cityscapes test submission entitled SwiftNetRN-18\ndelivers 75.5% MIoU and achieves 39.9 Hz on 1024x2048 images on GTX1080Ti.", "field": [], "task": ["Real-Time Semantic Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["Cityscapes test"], "metric": ["Mean IoU (class)", "Frame (fps)", "mIoU"], "title": "In Defense of Pre-trained ImageNet Architectures for Real-time Semantic Segmentation of Road-driving Images"} {"abstract": "Semantic segmentation generates comprehensive understanding of scenes through densely predicting the category for each pixel. High-level features from Deep Convolutional Neural Networks already demonstrate their effectiveness in semantic segmentation tasks, however the coarse resolution of high-level features often leads to inferior results for small/thin objects where detailed information is important. It is natural to consider importing low level features to compensate for the lost detailed information in high-level features.Unfortunately, simply combining multi-level features suffers from the semantic gap among them. In this paper, we propose a new architecture, named Gated Fully Fusion (GFF), to selectively fuse features from multiple levels using gates in a fully connected way. 
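The mode-seeking term in the MSGAN abstract above is essentially a ratio of an image-space distance to a latent-space distance that the generator should maximize. A minimal sketch follows; the generator signature G(c, z) and the L1 distances are assumptions made for illustration.

```python
import torch

def mode_seeking_regularizer(G, c, z1, z2, eps=1e-5):
    """Compute a term whose minimization maximizes the ratio
    d(G(c, z1), G(c, z2)) / d(z1, z2), encouraging distinct latent codes
    to produce distinct images. Add the returned value to the generator loss."""
    img1, img2 = G(c, z1), G(c, z2)
    ratio = torch.mean(torch.abs(img1 - img2)) / (torch.mean(torch.abs(z1 - z2)) + eps)
    return 1.0 / (ratio + eps)
```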
Specifically, features at each level are enhanced by higher-level features with stronger semantics and lower-level features with more details, and gates are used to control the propagation of useful information which significantly reduces the noises during fusion. We achieve the state of the art results on four challenging scene parsing datasets including Cityscapes, Pascal Context, COCO-stuff and ADE20K.", "field": [], "task": ["Scene Parsing", "Scene Understanding", "Semantic Segmentation"], "method": [], "dataset": ["Cityscapes test"], "metric": ["Mean IoU (class)"], "title": "GFF: Gated Fully Fusion for Semantic Segmentation"} {"abstract": "We show that Neural Ordinary Differential Equations (ODEs) learn representations that preserve the topology of the input space and prove that this implies the existence of functions Neural ODEs cannot represent. To address these limitations, we introduce Augmented Neural ODEs which, in addition to being more expressive models, are empirically more stable, generalize better and have a lower computational cost than Neural ODEs.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["SVHN", "MNIST", "CIFAR-10"], "metric": ["Percentage error", "Percentage correct", "Accuracy"], "title": "Augmented Neural ODEs"} {"abstract": "Node classification and graph classification are two graph learning problems\nthat predict the class label of a node and the class label of a graph\nrespectively. A node of a graph usually represents a real-world entity, e.g., a\nuser in a social network, or a protein in a protein-protein interaction\nnetwork. In this work, we consider a more challenging but practically useful\nsetting, in which a node itself is a graph instance. This leads to a\nhierarchical graph perspective which arises in many domains such as social\nnetwork, biological network and document collection. For example, in a social\nnetwork, a group of people with shared interests forms a user group, whereas a\nnumber of user groups are interconnected via interactions or common members. We\nstudy the node classification problem in the hierarchical graph where a `node'\nis a graph instance, e.g., a user group in the above example. As labels are\nusually limited in real-world data, we design two novel semi-supervised\nsolutions named \\underline{SE}mi-supervised gr\\underline{A}ph\nc\\underline{L}assification via \\underline{C}autious/\\underline{A}ctive\n\\underline{I}teration (or SEAL-C/AI in short). SEAL-C/AI adopt an iterative\nframework that takes turns to build or update two classifiers, one working at\nthe graph instance level and the other at the hierarchical graph level. To\nsimplify the representation of the hierarchical graph, we propose a novel\nsupervised, self-attentive graph embedding method called SAGE, which embeds\ngraph instances of arbitrary size into fixed-length vectors. 
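As a toy sketch of gate-controlled feature fusion in the spirit of the GFF abstract above (greatly simplified: only two levels, a single sigmoid gate, and matched resolutions are assumed, whereas the actual module fuses all levels in a fully connected way):

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Fuse a detailed low-level map with a semantically stronger high-level
    map, letting a learned per-pixel gate decide how much context to admit."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x_low, x_high):          # both: (B, C, H, W)
        g = self.gate(x_low)                   # per-pixel gate in [0, 1]
        return x_low + g * x_high              # gated propagation of useful context
```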
Through\nexperiments on synthetic data and Tencent QQ group data, we demonstrate that\nSEAL-C/AI not only outperform competing methods by a significant margin in\nterms of accuracy/Macro-F1, but also generate meaningful interpretations of the\nlearned representations.", "field": [], "task": ["Graph Classification", "Graph Embedding", "Graph Learning", "Node Classification"], "method": [], "dataset": ["D&D", "PROTEINS"], "metric": ["Accuracy"], "title": "Semi-Supervised Graph Classification: A Hierarchical Graph Perspective"} {"abstract": "Emotion is intrinsic to humans and consequently emotion understanding is a key part of human-like artificial intelligence (AI). Emotion recognition in conversation (ERC) is becoming increasingly popular as a new research frontier in natural language processing (NLP) due to its ability to mine opinions from the plethora of publicly available conversational data in platforms such as Facebook, Youtube, Reddit, Twitter, and others. Moreover, it has potential applications in health-care systems (as a tool for psychological analysis), education (understanding student frustration) and more. Additionally, ERC is also extremely important for generating emotion-aware dialogues that require an understanding of the user's emotions. Catering to these needs calls for effective and scalable conversational emotion-recognition algorithms. However, it is a strenuous problem to solve because of several research challenges. In this paper, we discuss these challenges and shed light on the recent research in this field. We also describe the drawbacks of these approaches and discuss the reasons why they fail to successfully overcome the research challenges in ERC.", "field": [], "task": ["Emotion Recognition", "Emotion Recognition in Conversation"], "method": [], "dataset": ["EC"], "metric": ["Micro-F1"], "title": "Emotion Recognition in Conversation: Research Challenges, Datasets, and Recent Advances"} {"abstract": "As the complexity of neural network models has grown, it has become increasingly important to optimize their design automatically through metalearning. Methods for discovering hyperparameters, topologies, and learning rate schedules have lead to significant increases in performance. This paper shows that loss functions can be optimized with metalearning as well, and result in similar improvements. The method, Genetic Loss-function Optimization (GLO), discovers loss functions de novo, and optimizes them for a target task. Leveraging techniques from genetic programming, GLO builds loss functions hierarchically from a set of operators and leaf nodes. These functions are repeatedly recombined and mutated to find an optimal structure, and then a covariance-matrix adaptation evolutionary strategy (CMA-ES) is used to find optimal coefficients. Networks trained with GLO loss functions are found to outperform the standard cross-entropy loss on standard image classification tasks. Training with these new loss functions requires fewer steps, results in lower test error, and allows for smaller datasets to be used. 
Loss-function optimization thus provides a new dimension of metalearning, and constitutes an important step towards AutoML.", "field": [], "task": ["AutoML", "Image Classification"], "method": [], "dataset": ["MNIST"], "metric": ["Percentage error"], "title": "Improved Training Speed, Accuracy, and Data Utilization Through Loss Function Optimization"} {"abstract": "This study proposes a Neural Attentive Bag-of-Entities model, which is a neural network model that performs text classification using entities in a knowledge base. Entities provide unambiguous and relevant semantic signals that are beneficial for capturing semantics in texts. We combine simple high-recall entity detection based on a dictionary, to detect entities in a document, with a novel neural attention mechanism that enables the model to focus on a small number of unambiguous and relevant entities. We tested the effectiveness of our model using two standard text classification datasets (i.e., the 20 Newsgroups and R8 datasets) and a popular factoid question answering dataset based on a trivia quiz game. As a result, our model achieved state-of-the-art results on all datasets. The source code of the proposed model is available online at https://github.com/wikipedia2vec/wikipedia2vec.", "field": [], "task": ["Question Answering", "Text Classification"], "method": [], "dataset": ["R8", "20NEWS"], "metric": ["F-measure", "Accuracy"], "title": "Neural Attentive Bag-of-Entities Model for Text Classification"} {"abstract": "In this paper, we aim to solve for unsupervised domain adaptation of classifiers where we have access to label information for the source domain while these are not available for a target domain. While various methods have been proposed for solving these including adversarial discriminator based methods, most approaches have focused on the entire image based domain adaptation. In an image, there would be regions that can be adapted better, for instance, the foreground object may be similar in nature. To obtain such regions, we propose methods that consider the probabilistic certainty estimate of various regions and specify focus on these during classification for adaptation. We observe that just by incorporating the probabilistic certainty of the discriminator while training the classifier, we are able to obtain state of the art results on various datasets as compared against all the recent methods. We provide a thorough empirical analysis of the method by providing ablation analysis, statistical significance test, and visualization of the attention maps and t-SNE embeddings. These evaluations convincingly demonstrate the effectiveness of the proposed approach.", "field": [], "task": ["Domain Adaptation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["Office-31", "Office-Home", "ImageCLEF-DA"], "metric": ["Average Accuracy", "Accuracy"], "title": "Attending to Discriminative Certainty for Domain Adaptation"} {"abstract": "The need for fine-grained perception in autonomous driving systems has resulted in recently increased research on online semantic segmentation of single-scan LiDAR. Despite the emerging datasets and technological advancements, it remains challenging due to three reasons: (1) the need for near-real-time latency with limited hardware; (2) uneven or even long-tailed distribution of LiDAR points across space; and (3) an increasing number of extremely fine-grained semantic classes. 
In an attempt to jointly tackle all the aforementioned challenges, we propose a new LiDAR-specific, nearest-neighbor-free segmentation algorithm - PolarNet. Instead of using common spherical or bird's-eye-view projection, our polar bird's-eye-view representation balances the points across grid cells in a polar coordinate system, indirectly aligning a segmentation network's attention with the long-tailed distribution of the points along the radial axis. We find that our encoding scheme greatly increases the mIoU in three drastically different segmentation datasets of real urban LiDAR single scans while retaining near real-time throughput.", "field": [], "task": ["3D Semantic Segmentation", "Autonomous Driving", "Semantic Segmentation"], "method": [], "dataset": ["SemanticKITTI"], "metric": ["mIoU"], "title": "PolarNet: An Improved Grid Representation for Online LiDAR Point Clouds Semantic Segmentation"} {"abstract": "Scene text recognition is a hot research topic in computer vision. Recently, many recognition methods based on the encoder-decoder framework have been proposed, and they can handle scene texts of perspective distortion and curve shape. Nevertheless, they still face lots of challenges like image blur, uneven illumination, and incomplete characters. We argue that most encoder-decoder methods are based on local visual features without explicit global semantic information. In this work, we propose a semantics enhanced encoder-decoder framework to robustly recognize low-quality scene texts. The semantic information is used both in the encoder module for supervision and in the decoder module for initializing. In particular, the state-of-the art ASTER method is integrated into the proposed framework as an exemplar. Extensive experiments demonstrate that the proposed framework is more robust for low-quality text images, and achieves state-of-the-art results on several benchmark datasets.", "field": [], "task": ["Scene Text", "Scene Text Recognition"], "method": [], "dataset": ["ICDAR2013", "ICDAR2015", "SVT"], "metric": ["Accuracy"], "title": "SEED: Semantics Enhanced Encoder-Decoder Framework for Scene Text Recognition"} {"abstract": "In this work, we address the challenging issue of scene segmentation. To increase the feature similarity of the same object while keeping the feature discrimination of different objects, we explore to propagate information throughout the image under the control of objects' boundaries. To this end, we first propose to learn the boundary as an additional semantic class to enable the network to be aware of the boundary layout. Then, we propose unidirectional acyclic graphs (UAGs) to model the function of undirected cyclic graphs (UCGs), which structurize the image via building graphic pixel-by-pixel connections, in an efficient and effective way. Furthermore, we propose a boundary-aware feature propagation (BFP) module to harvest and propagate the local features within their regions isolated by the learned boundaries in the UAG-structured image. The proposed BFP is capable of splitting the feature propagation into a set of semantic groups via building strong connections among the same segment region but weak connections between different segment regions. 
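The key step in the PolarNet abstract above is quantizing LiDAR points on a polar, rather than Cartesian, bird's-eye-view grid so that cells hold more balanced point counts along the radial axis. The small numpy sketch below shows that binning; the grid sizes and range are made-up defaults.

```python
import numpy as np

def polar_bev_indices(points, r_max=50.0, n_r=480, n_theta=360):
    """Map each point's (x, y) to a (ring, sector) cell of a polar BEV grid.
    points: (N, 3+) array whose first two columns are x and y in meters."""
    x, y = points[:, 0], points[:, 1]
    r = np.sqrt(x ** 2 + y ** 2)
    theta = np.arctan2(y, x)                                   # angle in [-pi, pi)
    r_idx = np.clip((r / r_max * n_r).astype(int), 0, n_r - 1)
    t_idx = ((theta + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
    return r_idx, t_idx
```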
Without bells and whistles, our approach achieves new state-of-the-art segmentation performance on three challenging semantic segmentation datasets, i.e., PASCAL-Context, CamVid, and Cityscapes.", "field": [], "task": ["Scene Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["PASCAL Context", "Cityscapes test"], "metric": ["Mean IoU (class)", "mIoU"], "title": "Boundary-Aware Feature Propagation for Scene Segmentation"} {"abstract": "Commonsense reasoning aims to empower machines with the human ability to make presumptions about ordinary situations in our daily life. In this paper, we propose a textual inference framework for answering commonsense questions, which effectively utilizes external, structured commonsense knowledge graphs to perform explainable inferences. The framework first grounds a question-answer pair from the semantic space to the knowledge-based symbolic space as a schema graph, a related sub-graph of external knowledge graphs. It represents schema graphs with a novel knowledge-aware graph network module named KagNet, and finally scores answers with graph representations. Our model is based on graph convolutional networks and LSTMs, with a hierarchical path-based attention mechanism. The intermediate attention scores make it transparent and interpretable, thus producing trustworthy inferences. Using ConceptNet as the only external resource for Bert-based models, we achieved state-of-the-art performance on the CommonsenseQA, a large-scale dataset for commonsense reasoning.", "field": [], "task": ["Common Sense Reasoning", "Knowledge Base Question Answering", "Knowledge Graphs", "Natural Language Inference"], "method": [], "dataset": ["CommonsenseQA"], "metric": ["Accuracy"], "title": "KagNet: Knowledge-Aware Graph Networks for Commonsense Reasoning"} {"abstract": "We introduce propagation kernels, a general graph-kernel framework for efficiently measuring the similarity of structured data. Propagation kernels are based on monitoring how information spreads through a set of given graphs. They leverage early-stage distributions from propagation schemes such as random walks to capture structural information encoded in node labels, attributes, and edge information. This has two benefits. First, off-the-shelf propagation schemes can be used to naturally construct kernels for many graph types, including labeled, partially labeled, unlabeled, directed, and attributed graphs. Second, by leveraging existing efficient and informative propagation schemes, propagation kernels can be considerably faster than state-of-the-art approaches without sacrificing predictive performance. We will also show that if the graphs at hand have a regular structure, for instance when modeling image or video data, one can exploit this regularity to scale the kernel computation to large databases of graphs with thousands of nodes. We support our contributions by exhaustive experiments on a number of real-world graphs from a variety of application domains.", "field": [], "task": ["Graph Classification"], "method": [], "dataset": ["NCI109", "D&D", "MUTAG", "NCI1"], "metric": ["Accuracy"], "title": "Propagation kernels: efficient graph kernels from propagated information"} {"abstract": "We consider that the importance of different utterances in the context for selecting the response usually depends on the current query. In this paper, we propose the model TripleNet to fully model the task with the triple (context, query, response) instead of only (context, response) as in previous works.
The heart of TripleNet is a novel attention mechanism named triple attention to model the relationships within the triple at four levels. The new mechanism updates the representation for each element based on the attention with the other two concurrently and symmetrically. We match the triple centered on the response from char to context level for prediction. Experimental results on two large-scale multi-turn response selection datasets show that the proposed model can significantly outperform the state-of-the-art methods. TripleNet source code is available at https://github.com/wtma/TripleNet", "field": [], "task": ["Conversational Response Selection"], "method": [], "dataset": ["Ubuntu Dialogue (v1, Ranking)"], "metric": ["R10@1", "R10@5", "R2@1", "R10@2"], "title": "TripleNet: Triple Attention Network for Multi-Turn Response Selection in Retrieval-based Chatbots"} {"abstract": "We present GraphMix, a regularization method for Graph Neural Network based semi-supervised object classification, whereby we propose to train a fully-connected network jointly with the graph neural network via parameter sharing and interpolation-based regularization. Further, we provide a theoretical analysis of how GraphMix improves the generalization bounds of the underlying graph neural network, without making any assumptions about the \"aggregation\" layer or the depth of the graph neural networks. We experimentally validate this analysis by applying GraphMix to various architectures such as Graph Convolutional Networks, Graph Attention Networks and Graph-U-Net. Despite its simplicity, we demonstrate that GraphMix can consistently improve or closely match state-of-the-art performance using even simpler architectures such as Graph Convolutional Networks, across three established graph benchmarks: Cora, Citeseer and Pubmed citation network datasets, as well as three newly proposed datasets: Cora-Full, Co-author-CS and Co-author-Physics.", "field": [], "task": ["Generalization Bounds", "Node Classification", "Object Classification"], "method": [], "dataset": ["Coauthor CS", "Coauthor Physics", "Pubmed random partition", "Cora: fixed 5 node per class", "Bitcoin-OTC", "CiteSeer with Public Split: fixed 5 nodes per class", "Citeseer random partition", "Cora: fixed 10 node per class", "Bitcoin-Alpha", "Cora with Public Split: fixed 20 nodes per class", "Cora random partition", "Cora Full-supervised", "CiteSeer with Public Split: fixed 20 nodes per class", "PubMed with Public Split: fixed 20 nodes per class"], "metric": ["F1-score", "Accuracy"], "title": "GraphMix: Improved Training of GNNs for Semi-Supervised Learning"} {"abstract": "To minimize the annotation costs associated with the training of semantic segmentation models, researchers have extensively investigated weakly-supervised segmentation approaches. In the current weakly-supervised segmentation methods, the most widely adopted approach is based on visualization. However, the visualization results are not generally equal to semantic segmentation. Therefore, to perform accurate semantic segmentation under the weakly supervised condition, it is necessary to consider the mapping functions that convert the visualization results into semantic segmentation. For such mapping functions, the conditional random field and iterative re-training using the outputs of a segmentation model are usually used. 
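The interpolation-based regularization in the GraphMix abstract above follows the mixup recipe applied to node features and their (one-hot) targets. The sketch below shows that ingredient alone; the parameter sharing between the fully-connected branch and the GNN, and the consistency terms, are omitted.

```python
import torch

def mixup_nodes(x, y_onehot, alpha=1.0):
    """Mixup-style interpolation over a batch of nodes: each node is blended
    with a randomly permuted partner, and the targets are blended with the
    same coefficient. x: (N, F) features, y_onehot: (N, C) targets."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix
```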
However, these methods do not always guarantee improvements in accuracy; therefore, if we apply these mapping functions iteratively multiple times, eventually the accuracy will not improve or will decrease. In this paper, to make the most of such mapping functions, we assume that the results of the mapping function include noise, and we improve the accuracy by removing noise. To achieve our aim, we propose the self-supervised difference detection module, which estimates noise from the results of the mapping functions by predicting the difference between the segmentation masks before and after the mapping. We verified the effectiveness of the proposed method by performing experiments on the PASCAL Visual Object Classes 2012 dataset, and we achieved 64.9\\% in the val set and 65.5\\% in the test set. Both of the results become new state-of-the-art under the same setting of weakly supervised semantic segmentation.", "field": [], "task": ["Semantic Segmentation", "Weakly-Supervised Semantic Segmentation"], "method": [], "dataset": ["PASCAL VOC 2012 test", "PASCAL VOC 2012 val"], "metric": ["Mean IoU", "mIoU"], "title": "Self-Supervised Difference Detection for Weakly-Supervised Semantic Segmentation"} {"abstract": "Human-motion generation is a long-standing challenging task due to the requirement of accurately modeling complex and diverse dynamic patterns. Most existing methods adopt sequence models such as RNN to directly model transitions in the original action space. Due to high dimensionality and potential noise, such modeling of action transitions is particularly challenging. In this paper, we focus on skeleton-based action generation and propose to model smooth and diverse transitions on a latent space of action sequences with much lower dimensionality. Conditioned on a latent sequence, actions are generated by a frame-wise decoder shared by all latent action-poses. Specifically, an implicit RNN is defined to model smooth latent sequences, whose randomness (diversity) is controlled by noise from the input. Different from standard action-prediction methods, our model can generate action sequences from pure noise without any conditional action poses. Remarkably, it can also generate unseen actions from mixed classes during training. Our model is learned with a bi-directional generative-adversarial-net framework, which not only can generate diverse action sequences of a particular class or mix classes, but also learns to classify action sequences within the same model. Experimental results show the superiority of our method in both diverse action-sequence generation and classification, relative to existing methods.", "field": [], "task": ["Action Generation"], "method": [], "dataset": ["NTU RGB+D", "Human3.6M"], "metric": ["MMD"], "title": "Learning Diverse Stochastic Human-Action Generators by Learning Smooth Latent Transitions"} {"abstract": "Pre-training techniques have been verified successfully in a variety of NLP tasks in recent years. Despite the widespread use of pre-training models for NLP applications, they almost exclusively focus on text-level manipulation, while neglecting layout and style information that is vital for document image understanding. In this paper, we propose the \\textbf{LayoutLM} to jointly model interactions between text and layout information across scanned document images, which is beneficial for a great number of real-world document image understanding tasks such as information extraction from scanned documents. 
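The self-supervised difference detection described above trains a module to predict where a mapping function (e.g. a CRF) changes a segmentation mask, so the natural training target is the pixel-wise disagreement between the masks before and after the mapping. A minimal sketch of constructing that target, with hypothetical array names and an assumed ignore label:

```python
import numpy as np

def difference_target(mask_before, mask_after, ignore_label=255):
    """Binary map of pixels where two label maps (H, W) of class ids disagree.

    This is the target a difference-detection module would be trained to
    predict; pixels carrying the ignore label are masked out.
    """
    valid = (mask_before != ignore_label) & (mask_after != ignore_label)
    diff = (mask_before != mask_after) & valid
    return diff.astype(np.float32)

# toy usage: a 4x4 mask where the mapping function flipped two pixels
before = np.zeros((4, 4), dtype=np.int64)
after = before.copy()
after[1, 1] = 1
after[2, 3] = 1
print(difference_target(before, after).sum())  # 2.0
```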
Furthermore, we also leverage image features to incorporate words' visual information into LayoutLM. To the best of our knowledge, this is the first time that text and layout are jointly learned in a single framework for document-level pre-training. It achieves new state-of-the-art results in several downstream tasks, including form understanding (from 70.72 to 79.27), receipt understanding (from 94.02 to 95.24) and document image classification (from 93.07 to 94.42). The code and pre-trained LayoutLM models are publicly available at \\url{https://aka.ms/layoutlm}.", "field": [], "task": ["Document Image Classification", "Document Layout Analysis", "Image Classification"], "method": [], "dataset": ["RVL-CDIP"], "metric": ["Accuracy"], "title": "LayoutLM: Pre-training of Text and Layout for Document Image Understanding"} {"abstract": "We consider the problem of cross-view geo-localization. The primary challenge of this task is to learn the robust feature against large viewpoint changes. Existing benchmarks can help, but are limited in the number of viewpoints. Image pairs, containing two viewpoints, e.g., satellite and ground, are usually provided, which may compromise the feature learning. Besides phone cameras and satellites, in this paper, we argue that drones could serve as the third platform to deal with the geo-localization problem. In contrast to the traditional ground-view images, drone-view images meet fewer obstacles, e.g., trees, and could provide a comprehensive view when flying around the target place. To verify the effectiveness of the drone platform, we introduce a new multi-view multi-source benchmark for drone-based geo-localization, named University-1652. University-1652 contains data from three platforms, i.e., synthetic drones, satellites and ground cameras of 1,652 university buildings around the world. To our knowledge, University-1652 is the first drone-based geo-localization dataset and enables two new tasks, i.e., drone-view target localization and drone navigation. As the name implies, drone-view target localization intends to predict the location of the target place via drone-view images. On the other hand, given a satellite-view query image, drone navigation is to drive the drone to the area of interest in the query. We use this dataset to analyze a variety of off-the-shelf CNN features and propose a strong CNN baseline on this challenging dataset. The experiments show that University-1652 helps the model to learn the viewpoint-invariant features and also has good generalization ability in the real-world scenario.", "field": [], "task": ["Drone navigation", "Drone-view target localization", "Image-Based Localization"], "method": [], "dataset": ["cvusa", "University-1652"], "metric": ["recall@5", "recall@top1%", "recall@1", "Recall@10", "AP"], "title": "University-1652: A Multi-view Multi-source Benchmark for Drone-based Geo-localization"} {"abstract": "Direct prediction of 3D body pose and shape remains a challenge even for\nhighly parameterized deep learning models. Mapping from the 2D image space to\nthe prediction space is difficult: perspective ambiguities make the loss\nfunction noisy and training data is scarce. In this paper, we propose a novel\napproach (Neural Body Fitting (NBF)). It integrates a statistical body model\nwithin a CNN, leveraging reliable bottom-up semantic body part segmentation and\nrobust top-down body model constraints. NBF is fully differentiable and can be\ntrained using 2D and 3D annotations. 
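A minimal sketch of the layout-aware input described in the LayoutLM abstract above: each token embedding is summed with embeddings of its quantized bounding-box coordinates. The vocabulary size, coordinate range, shared x/y tables, and the choice of summation are assumptions for illustration, not the released model's configuration.

```python
import torch
import torch.nn as nn

class TextLayoutEmbedding(nn.Module):
    """Sum word embeddings with 2-D layout embeddings of each token's bounding box."""
    def __init__(self, vocab_size=30522, hidden=128, max_coord=1000):
        super().__init__()
        self.word = nn.Embedding(vocab_size, hidden)
        self.x_emb = nn.Embedding(max_coord + 1, hidden)   # x0 and x1 share one table
        self.y_emb = nn.Embedding(max_coord + 1, hidden)   # y0 and y1 share one table

    def forward(self, token_ids, boxes):
        """token_ids: (B, L); boxes: (B, L, 4) with coordinates scaled to [0, 1000]."""
        x0, y0, x1, y1 = boxes.unbind(dim=-1)
        return (self.word(token_ids)
                + self.x_emb(x0) + self.y_emb(y0)
                + self.x_emb(x1) + self.y_emb(y1))

# toy usage: a batch of 2 documents with 5 tokens each
ids = torch.randint(0, 30522, (2, 5))
boxes = torch.randint(0, 1001, (2, 5, 4))
print(TextLayoutEmbedding()(ids, boxes).shape)   # torch.Size([2, 5, 128])
```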
In detailed experiments, we analyze how\nthe components of our model affect performance, especially the use of part\nsegmentations as an explicit intermediate representation, and present a robust,\nefficiently trainable framework for 3D human pose estimation from 2D images\nwith competitive results on standard benchmarks. Code will be made available at\nhttp://github.com/mohomran/neural_body_fitting", "field": [], "task": ["3D Human Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["Human3.6M"], "metric": ["Average MPJPE (mm)"], "title": "Neural Body Fitting: Unifying Deep Learning and Model-Based Human Pose and Shape Estimation"} {"abstract": "Entity linking is a standard component in modern retrieval system that is often performed by third-party toolkits. Despite the plethora of open source options, it is difficult to find a single system that has a modular architecture where certain components may be replaced, does not depend on external sources, can easily be updated to newer Wikipedia versions, and, most important of all, has state-of-the-art performance. The REL system presented in this paper aims to fill that gap. Building on state-of-the-art neural components from natural language processing research, it is provided as a Python package as well as a web API. We also report on an experimental comparison against both well-established systems and the current state-of-the-art on standard entity linking benchmarks.", "field": [], "task": ["Entity Linking"], "method": [], "dataset": ["AIDA-CoNLL"], "metric": ["Micro-F1 strong", "Macro-F1 strong"], "title": "REL: An Entity Linker Standing on the Shoulders of Giants"} {"abstract": "We propose a new class of implicit networks, the multiscale deep equilibrium model (MDEQ), suited to large-scale and highly hierarchical pattern recognition domains. An MDEQ directly solves for and backpropagates through the equilibrium points of multiple feature resolutions simultaneously, using implicit differentiation to avoid storing intermediate states (and thus requiring only $O(1)$ memory consumption). These simultaneously-learned multi-resolution features allow us to train a single model on a diverse set of tasks and loss functions, such as using a single MDEQ to perform both image classification and semantic segmentation. We illustrate the effectiveness of this approach on two large-scale vision tasks: ImageNet classification and semantic segmentation on high-resolution images from the Cityscapes dataset. In both settings, MDEQs are able to match or exceed the performance of recent competitive computer vision models: the first time such performance and scale have been achieved by an implicit deep learning approach. The code and pre-trained models are at https://github.com/locuslab/mdeq .", "field": [], "task": ["Image Classification", "Semantic Segmentation"], "method": [], "dataset": ["Cityscapes val", "ImageNet"], "metric": ["Top 1 Accuracy", "mIoU"], "title": "Multiscale Deep Equilibrium Models"} {"abstract": "Deep Convolutional Neural Networks (DCNNs) are currently the method of choice both for generative, as well as for discriminative learning in computer vision and machine learning. The success of DCNNs can be attributed to the careful selection of their building blocks (e.g., residual blocks, rectifiers, sophisticated normalization schemes, to mention but a few). In this paper, we propose $\\Pi$-Nets, a new class of function approximators based on polynomial expansions. 
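The equilibrium models described in the MDEQ record above solve z* = f(z*, x) for a weight-tied transformation f instead of stacking layers. The sketch below finds the equilibrium by naive fixed-point iteration on a single-resolution toy block; the multiscale design, the root solver, and the implicit differentiation of the actual paper are all omitted, and the final differentiable step is only a crude stand-in for the latter.

```python
import torch
import torch.nn as nn

class TiedBlock(nn.Module):
    """A weight-tied transformation f(z, x) whose fixed point defines the output."""
    def __init__(self, dim):
        super().__init__()
        self.lin_z = nn.Linear(dim, dim)
        self.lin_x = nn.Linear(dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, z, x):
        return self.norm(torch.tanh(self.lin_z(z) + self.lin_x(x)))

def solve_equilibrium(f, x, max_iter=50, tol=1e-4):
    """Naive fixed-point iteration z <- f(z, x) until the update is small."""
    z = torch.zeros_like(x)
    with torch.no_grad():
        for _ in range(max_iter):
            z_next = f(z, x)
            if (z_next - z).norm() < tol * (z.norm() + 1e-8):
                z = z_next
                break
            z = z_next
    # one differentiable application so gradients reach f's parameters
    return f(z, x)

f = TiedBlock(dim=64)
x = torch.randn(8, 64)
print(solve_equilibrium(f, x).shape)   # torch.Size([8, 64])
```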
$\\Pi$-Nets are polynomial neural networks, i.e., the output is a high-order polynomial of the input. The unknown parameters, which are naturally represented by high-order tensors, are estimated through a collective tensor factorization with factors sharing. We introduce three tensor decompositions that significantly reduce the number of parameters and show how they can be efficiently implemented by hierarchical neural networks. We empirically demonstrate that $\\Pi$-Nets are very expressive and they even produce good results without the use of non-linear activation functions in a large battery of tasks and signals, i.e., images, graphs, and audio. When used in conjunction with activation functions, $\\Pi$-Nets produce state-of-the-art results in three challenging tasks, i.e. image generation, face verification and 3D mesh representation learning. The source code is available at \\url{https://github.com/grigorisg9gr/polynomial_nets}.", "field": [], "task": ["Conditional Image Generation", "Face Identification", "Face Recognition", "Face Verification", "Image Classification", "Image Generation", "Representation Learning"], "method": [], "dataset": ["MegaFace", "LFW", "CFP-FF", "CIFAR-10", "CFP-FP", "ImageNet"], "metric": ["FID", "Top 1 Accuracy", "Percentage correct", "Accuracy", "Top 5 Accuracy", "Inception score"], "title": "Deep Polynomial Neural Networks"} {"abstract": "Generative Adversarial Networks (GANs) have made great progress in synthesizing realistic images in recent years. However, they are often trained on image datasets with either too few samples or too many classes belonging to different data distributions. Consequently, GANs are prone to underfitting or overfitting, making the analysis of them difficult and constrained. Therefore, in order to conduct a thorough study on GANs while obviating unnecessary interferences introduced by the datasets, we train them on artificial datasets where there are infinitely many samples and the real data distributions are simple, high-dimensional and have structured manifolds. Moreover, the generators are designed such that optimal sets of parameters exist. Empirically, we find that under various distance measures, the generator fails to learn such parameters with the GAN training procedure. We also find that training mixtures of GANs leads to more performance gain compared to increasing the network depth or width when the model complexity is high enough. Our experimental results demonstrate that a mixture of generators can discover different modes or different classes automatically in an unsupervised setting, which we attribute to the distribution of the generation and discrimination tasks across multiple generators and discriminators. As an example of the generalizability of our conclusions to realistic datasets, we train a mixture of GANs on the CIFAR-10 dataset and our method significantly outperforms the state-of-the-art in terms of popular metrics, i.e., Inception Score (IS) and Fr\\'echet Inception Distance (FID).", "field": [], "task": ["Conditional Image Generation", "Image Generation"], "method": [], "dataset": ["CIFAR-10"], "metric": ["Inception score", "FID"], "title": "Lessons Learned from the Training of GANs on Artificial Datasets"} {"abstract": "In this article, we propose a Dual Relation-aware Attention Network (DRANet) to handle the task of scene segmentation. How to efficiently exploit context is essential for pixel-level recognition. 
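The polynomial networks described above produce outputs that are high-order polynomials of the input, with the high-order coefficient tensors factorized. A minimal sketch of one second-degree block in that spirit, using a Hadamard product of two linear projections added to a linear term; the specific factorization, layer widths, and the way blocks are stacked here are assumptions rather than any of the paper's exact decompositions.

```python
import torch
import torch.nn as nn

class SecondDegreeBlock(nn.Module):
    """y = W1 x + (W2 x) * (W3 x): a factorized degree-2 polynomial of the input."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, out_dim)
        self.w2 = nn.Linear(in_dim, out_dim, bias=False)
        self.w3 = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x):
        # no non-linear activation is required: expressiveness comes from the product term
        return self.w1(x) + self.w2(x) * self.w3(x)

# stacking blocks multiplies the polynomial degree (2, then 4, ...)
net = nn.Sequential(SecondDegreeBlock(16, 32), SecondDegreeBlock(32, 10))
print(net(torch.randn(4, 16)).shape)   # torch.Size([4, 10])
```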
To address the issue, we adaptively capture contextual information based on the relation-aware attention mechanism. Especially, we append two types of attention modules on the top of the dilated fully convolutional network (FCN), which model the contextual dependencies in spatial and channel dimensions, respectively. In the attention modules, we adopt a self-attention mechanism to model semantic associations between any two pixels or channels. Each pixel or channel can adaptively aggregate context from all pixels or channels according to their correlations. To reduce the high cost of computation and memory caused by the abovementioned pairwise association computation, we further design two types of compact attention modules. In the compact attention modules, each pixel or channel is built into association only with a few numbers of gathering centers and obtains corresponding context aggregation over these gathering centers. Meanwhile, we add a cross-level gating decoder to selectively enhance spatial details that boost the performance of the network. We conduct extensive experiments to validate the effectiveness of our network and achieve new state-of-the-art segmentation performance on four challenging scene segmentation data sets, i.e., Cityscapes, ADE20K, PASCAL Context, and COCO Stuff data sets. In particular, a Mean IoU score of 82.9% on the Cityscapes test set is achieved without using extra coarse annotated data.", "field": [], "task": ["Scene Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["ADE20K", "COCO-Stuff test", "PASCAL Context", "Cityscapes test"], "metric": ["Mean IoU (class)", "Validation mIoU", "mIoU"], "title": "Scene Segmentation with Dual Relation-aware Attention Network"} {"abstract": "Oriented object detection in aerial images is a challenging task as the objects in aerial images are displayed in arbitrary directions and are usually densely packed. Current oriented object detection methods mainly rely on two-stage anchor-based detectors. However, the anchor-based detectors typically suffer from a severe imbalance issue between the positive and negative anchor boxes. To address this issue, in this work we extend the horizontal keypoint-based object detector to the oriented object detection task. In particular, we first detect the center keypoints of the objects, based on which we then regress the box boundary-aware vectors (BBAVectors) to capture the oriented bounding boxes. The box boundary-aware vectors are distributed in the four quadrants of a Cartesian coordinate system for all arbitrarily oriented objects. To relieve the difficulty of learning the vectors in the corner cases, we further classify the oriented bounding boxes into horizontal and rotational bounding boxes. In the experiment, we show that learning the box boundary-aware vectors is superior to directly predicting the width, height, and angle of an oriented bounding box, as adopted in the baseline method. Besides, the proposed method competes favorably with state-of-the-art methods. Code is available at https://github.com/yijingru/BBAVectors-Oriented-Object-Detection.", "field": [], "task": ["Object Detection", "Object Detection In Aerial Images"], "method": [], "dataset": ["DOTA"], "metric": ["mAP"], "title": "Oriented Object Detection in Aerial Images with Box Boundary-Aware Vectors"} {"abstract": "Objective Semi-supervised video object segmentation refers to segmenting the object in subsequent frames given the object label in the first frame. 
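The relation-aware attention described in the DRANet abstract above lets every pixel aggregate context from all other pixels, weighted by pairwise feature similarity. Below is a minimal sketch of such a position (spatial) self-attention module over a CNN feature map; the reduced projection width and the learnable residual scale are assumptions, and the compact gathering-center variant and the channel branch are not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionAttention(nn.Module):
    """Each pixel aggregates context from all pixels, weighted by feature similarity."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, 1)
        self.key = nn.Conv2d(channels, channels // reduction, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))   # learnable residual scale

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)      # (B, HW, C/r)
        k = self.key(x).flatten(2)                         # (B, C/r, HW)
        attn = F.softmax(q @ k, dim=-1)                    # (B, HW, HW) pairwise weights
        v = self.value(x).flatten(2).transpose(1, 2)       # (B, HW, C)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return self.gamma * out + x

print(PositionAttention(64)(torch.randn(2, 64, 16, 16)).shape)  # (2, 64, 16, 16)
```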
Existing algorithms are mostly based on matching and propagation strategies, which often make use of the previous frame together with its mask or optical flow. This paper explores a new propagation method that uses a short-term matching module to extract information from the previous frame and apply it during propagation, and proposes a Long-Short-Term similarity matching network for video object segmentation (LSMVOS). Method: By conducting pixel-level matching and correlation in the long-term matching module (against the first frame) and the short-term matching module (against the previous frame), a global similarity map and a local similarity map are obtained, together with the feature map of the current frame and the mask of the previous frame. After two refinement networks, the final result is produced by a segmentation network. Results: In experiments on the DAVIS 2016 and 2017 datasets, the method achieves a favorable average of region similarity and contour accuracy without online fine-tuning, reaching 86.5% for single-object and 77.4% for multi-object segmentation while segmenting 21 frames per second. Conclusion: The proposed short-term matching module extracts more information from the previous frame than the mask alone, and combining the long-term and short-term matching modules allows the whole network to achieve efficient video object segmentation without online fine-tuning", "field": [], "task": ["Optical Flow Estimation", "Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["DAVIS 2017 (val)", "DAVIS 2017 (test-dev)", "DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "LSMVOS: Long-Short-Term Similarity Matching for Video Object"} {"abstract": "Malicious programs have grown both in number and in sophistication. Analyzing the malicious intent of\r\nvast amounts of data requires huge resources and thus, effective categorization of malware is required. In this paper,\r\nthe content of a malicious program is represented as an entropy stream, where each value describes the amount of entropy of a small chunk of code in a specific location of the file. Wavelet transforms are then applied to this entropy signal to\r\ndescribe the variation in the entropic energy. Motivated by the visual similarity between streams of entropy of malicious\r\nsoftware belonging to the same family, we propose a file-agnostic deep learning approach for categorization of malware.\r\nOur method exploits the fact that most variants are generated by using common obfuscation techniques and that compression and encryption algorithms retain some properties present in the original code. This allows us to find discriminative patterns that almost all variants in a family share.
Our method has been evaluated using the data provided by Microsoft for the BigData Innovators Gathering Anti-Malware Prediction Challenge, and achieved promising results in comparison with the State of the Art.", "field": [], "task": ["Malware Classification"], "method": [], "dataset": ["Microsoft Malware Classification Challenge"], "metric": ["Accuracy (10-fold)", "Macro F1 (10-fold)", "LogLoss"], "title": "Classification of Malware by Using Structural Entropy on Convolutional Neural Networks"} {"abstract": "In this work, we perform an extensive investigation of two state-of-the-art (SotA) methods for the task of Entity Alignment in Knowledge Graphs. Therefore, we first carefully examine the benchmarking process and identify several shortcomings, which make the results reported in the original works not always comparable. Furthermore, we suspect that it is a common practice in the community to make the hyperparameter optimization directly on a test set, reducing the informative value of reported performance. Thus, we select a representative sample of benchmarking datasets and describe their properties. We also examine different initializations for entity representations since they are a decisive factor for model performance. Furthermore, we use a shared train/validation/test split for a fair evaluation setting in which we evaluate all methods on all datasets. In our evaluation, we make several interesting findings. While we observe that most of the time SotA approaches perform better than baselines, they have difficulties when the dataset contains noise, which is the case in most real-life applications. Moreover, we find out in our ablation study that often different features of SotA methods are crucial for good performance than previously assumed. The code is available at https://github.com/mberr/ea-sota-comparison.", "field": [], "task": ["Entity Alignment", "Hyperparameter Optimization", "Knowledge Graphs"], "method": [], "dataset": ["DBP15k zh-en", "dbp15k fr-en", "dbp15k ja-en"], "metric": ["Hits@1"], "title": "A Critical Assessment of State-of-the-Art in Entity Alignment"} {"abstract": "Neural architecture search has proven to be highly effective in the design of computationally efficient, task-specific convolutional neural networks across several areas of computer vision. In 2D human pose estimation, however, its application has been limited by high computational demands. Hypothesizing that neural architecture search holds great potential for 2D human pose estimation, we propose a new weight transfer scheme that relaxes function-preserving mutations, enabling us to accelerate neuroevolution in a flexible manner. Our method produces 2D human pose network designs that are more efficient and more accurate than state-of-the-art hand-designed networks. In fact, the generated networks can process images at higher resolutions using less computation than previous networks at lower resolutions, permitting us to push the boundaries of 2D human pose estimation. Our baseline network designed using neuroevolution, which we refer to as EvoPose2D-S, provides comparable accuracy to SimpleBaseline while using 4.9x fewer floating-point operations and 13.5x fewer parameters. 
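The malware-classification record above represents a binary as a stream of chunk-wise entropy values before applying wavelet transforms. A minimal sketch of computing such an entropy stream from raw bytes; the chunk size and the use of Shannon entropy over byte histograms are assumptions made for illustration.

```python
import numpy as np

def entropy_stream(data: bytes, chunk_size: int = 256) -> np.ndarray:
    """Shannon entropy (bits per byte) of consecutive fixed-size chunks of a file."""
    values = []
    for start in range(0, len(data), chunk_size):
        chunk = np.frombuffer(data[start:start + chunk_size], dtype=np.uint8)
        counts = np.bincount(chunk, minlength=256)
        p = counts[counts > 0] / len(chunk)
        values.append(float(-(p * np.log2(p)).sum()))
    return np.array(values)

# toy usage: low-entropy padding followed by pseudo-random (high-entropy) bytes
rng = np.random.default_rng(0)
blob = b"\x00" * 1024 + rng.integers(0, 256, 1024, dtype=np.uint8).tobytes()
print(entropy_stream(blob).round(2))   # 0.0 for the padding chunks, close to 8.0 for the random ones
```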
Our largest network, EvoPose2D-L, achieves new state-of-the-art accuracy on the Microsoft COCO Keypoints benchmark while using 2.0x fewer operations and 4.3x fewer parameters than its nearest competitor.", "field": [], "task": ["2D Human Pose Estimation", "Keypoint Detection", "Multi-Person Pose Estimation", "Neural Architecture Search", "Pose Estimation"], "method": [], "dataset": ["COCO", "COCO test-dev"], "metric": ["Test AP", "Validation AP", "APM", "AP75", "AP", "APL", "AP50", "AR"], "title": "EvoPose2D: Pushing the Boundaries of 2D Human Pose Estimation using Neuroevolution"} {"abstract": "This work considers the problem of domain shift in person re-identification.Being trained on one dataset, a re-identification model usually performs much worse on unseen data. Partially this gap is caused by the relatively small scale of person re-identification datasets (compared to face recognition ones, for instance), but it is also related to training objectives. We propose to use the metric learning objective, namely AM-Softmax loss, and some additional training practices to build well-generalizing, yet, computationally efficient models. We use recently proposed Omni-Scale Network (OSNet) architecture combined with several training tricks and architecture adjustments to obtain state-of-the art results in cross-domain generalization problem on a large-scale MSMT17 dataset in three setups: MSMT17-all->DukeMTMC, MSMT17-train->Market1501 and MSMT17-all->Market1501.", "field": [], "task": ["Domain Generalization", "Face Recognition", "Metric Learning", "Person Re-Identification"], "method": [], "dataset": ["MSMT17"], "metric": ["Rank-1", "mAP"], "title": "Building Computationally Efficient and Well-Generalizing Person Re-Identification Models with Metric Learning"} {"abstract": "We present a self-supervised learning approach for optical flow. Our method\ndistills reliable flow estimations from non-occluded pixels, and uses these\npredictions as ground truth to learn optical flow for hallucinated occlusions.\nWe further design a simple CNN to utilize temporal information from multiple\nframes for better flow estimation. These two principles lead to an approach\nthat yields the best performance for unsupervised optical flow learning on the\nchallenging benchmarks including MPI Sintel, KITTI 2012 and 2015. More notably,\nour self-supervised pre-trained model provides an excellent initialization for\nsupervised fine-tuning. Our fine-tuned models achieve state-of-the-art results\non all three datasets. At the time of writing, we achieve EPE=4.26 on the\nSintel benchmark, outperforming all submitted methods.", "field": [], "task": ["Optical Flow Estimation", "Self-Supervised Learning"], "method": [], "dataset": ["KITTI 2012", "Sintel-final", "Sintel-clean", "KITTI 2015"], "metric": ["Average End-Point Error", "Fl-all"], "title": "SelFlow: Self-Supervised Learning of Optical Flow"} {"abstract": "Network embedding (or graph embedding) has been widely used in many real-world applications. However, existing methods mainly focus on networks with single-typed nodes/edges and cannot scale well to handle large networks. Many real-world networks consist of billions of nodes and edges of multiple types, and each node is associated with different attributes. In this paper, we formalize the problem of embedding learning for the Attributed Multiplex Heterogeneous Network and propose a unified framework to address this problem. The framework supports both transductive and inductive learning. 
We also give the theoretical analysis of the proposed framework, showing its connection with previous works and proving its better expressiveness. We conduct systematical evaluations for the proposed framework on four different genres of challenging datasets: Amazon, YouTube, Twitter, and Alibaba. Experimental results demonstrate that with the learned embeddings from the proposed framework, we can achieve statistically significant improvements (e.g., 5.99-28.23% lift by F1 scores; p<<0.01, t-test) over previous state-of-the-art methods for link prediction. The framework has also been successfully deployed on the recommendation system of a worldwide leading e-commerce company, Alibaba Group. Results of the offline A/B tests on product recommendation further confirm the effectiveness and efficiency of the framework in practice.", "field": [], "task": ["Graph Embedding", "Link Prediction", "Network Embedding", "Product Recommendation", "Representation Learning"], "method": [], "dataset": ["Alibaba-S", "Amazon", "YouTube", "Twitter", "Alibaba"], "metric": ["ROC AUC", "PR AUC", "F1-Score"], "title": "Representation Learning for Attributed Multiplex Heterogeneous Network"} {"abstract": "In the last years, a big interest of both the scientific community and the market has been devoted to the design of audio surveillance systems, able to analyse the audio stream and to identify events of interest; this is particularly true in security applications, in which the audio analytics can be profitably used as an alternative to video analytics systems, but also combined with them. Within this context, in this paper we propose a novel recurrent convolutional neural network architecture, named DENet; it is based on a new layer that we call denoising-enhancement (DE) layer, which performs denoising and enhancement of the original signal by applying an attention map on the components of the band-filtered signal. Differently from state-of-the-art methodologies, DENet takes as input the lossless raw waveform and is able to automatically learn the evolution of the frequencies-of-interest over time, by combining the proposed layer with a bidirectional gated recurrent unit. Using the feedbacks coming from classifications related to consecutive frames (i.e. that belong to the same event), the proposed method is able to drastically reduce the misclassifications. We carried out experiments on the MIVIA Audio Events and MIVIA Road Events public datasets, confirming the effectiveness of our approach with respect to other state-of-the-art methodologies.", "field": [], "task": ["Denoising", "Sound Event Detection"], "method": [], "dataset": ["Mivia Road Events", "Mivia Audio Events"], "metric": ["Rank-1 Recognition Rate"], "title": "DENet: a deep architecture for audio surveillance applications"} {"abstract": "Recently, the connectionist temporal classification (CTC) model coupled with\nrecurrent (RNN) or convolutional neural networks (CNN), made it easier to train\nspeech recognition systems in an end-to-end fashion. However in real-valued\nmodels, time frame components such as mel-filter-bank energies and the cepstral\ncoefficients obtained from them, together with their first and second order\nderivatives, are processed as individual elements, while a natural alternative\nis to process such components as composed entities. We propose to group such\nelements in the form of quaternions and to process these quaternions using the\nestablished quaternion algebra. 
Quaternion numbers and quaternion neural\nnetworks have shown their efficiency to process multidimensional inputs as\nentities, to encode internal dependencies, and to solve many tasks with less\nlearning parameters than real-valued models. This paper proposes to integrate\nmultiple feature views in quaternion-valued convolutional neural network\n(QCNN), to be used for sequence-to-sequence mapping with the CTC model.\nPromising results are reported using simple QCNNs in phoneme recognition\nexperiments with the TIMIT corpus. More precisely, QCNNs obtain a lower phoneme\nerror rate (PER) with less learning parameters than a competing model based on\nreal-valued CNNs.", "field": [], "task": ["Speech Recognition"], "method": [], "dataset": ["TIMIT"], "metric": ["Percentage error"], "title": "Quaternion Convolutional Neural Networks for End-to-End Automatic Speech Recognition"} {"abstract": "Target-oriented sentiment classification aims at classifying sentiment\npolarities over individual opinion targets in a sentence. RNN with attention\nseems a good fit for the characteristics of this task, and indeed it achieves\nthe state-of-the-art performance. After re-examining the drawbacks of attention\nmechanism and the obstacles that block CNN to perform well in this\nclassification task, we propose a new model to overcome these issues. Instead\nof attention, our model employs a CNN layer to extract salient features from\nthe transformed word representations originated from a bi-directional RNN\nlayer. Between the two layers, we propose a component to generate\ntarget-specific representations of words in the sentence, meanwhile incorporate\na mechanism for preserving the original contextual information from the RNN\nlayer. Experiments show that our model achieves a new state-of-the-art\nperformance on a few benchmarks.", "field": [], "task": ["Aspect-Based Sentiment Analysis"], "method": [], "dataset": ["SemEval 2014 Task 4 Sub Task 2"], "metric": ["Laptop (Acc)", "Restaurant (Acc)", "Mean Acc (Restaurant + Laptop)"], "title": "Transformation Networks for Target-Oriented Sentiment Classification"} {"abstract": "This paper introduces the pipeline to scale the largest dataset in egocentric vision EPIC-KITCHENS. The effort culminates in EPIC-KITCHENS-100, a collection of 100~hours, 20M frames, 90K actions in 700 variable-length videos, capturing long-term unscripted activities in 45 environments, using head-mounted cameras. Compared to its previous version, EPIC-KITCHENS-100 has been annotated using a novel pipeline that allows denser (54\\% more actions per minute) and more complete annotations of fine-grained actions (+128\\% more action segments). This collection also enables evaluating the \"test of time\" - i.e. whether models trained on data collected in 2018 can generalise to new footage collected under the same hypotheses albeit \"two years on\". The dataset is aligned with 6 challenges: action recognition (full and weak supervision), action detection, action anticipation, cross-modal retrieval (from captions), as well as unsupervised domain adaptation for action recognition. 
For each challenge, we define the task, provide baselines and evaluation metrics.", "field": [], "task": ["Action Anticipation", "Action Detection", "Action Recognition", "Cross-Modal Retrieval", "Domain Adaptation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["EPIC-KITCHENS-100"], "metric": ["Accuracy"], "title": "Rescaling Egocentric Vision"} {"abstract": "In this work, we propose Adversarial Complementary Learning (ACoL) to\nautomatically localize integral objects of semantic interest with weak\nsupervision. We first mathematically prove that class localization maps can be\nobtained by directly selecting the class-specific feature maps of the last\nconvolutional layer, which paves a simple way to identify object regions. We\nthen present a simple network architecture including two parallel-classifiers\nfor object localization. Specifically, we leverage one classification branch to\ndynamically localize some discriminative object regions during the forward\npass. Although it is usually responsive to sparse parts of the target objects,\nthis classifier can drive the counterpart classifier to discover new and\ncomplementary object regions by erasing its discovered regions from the feature\nmaps. With such an adversarial learning, the two parallel-classifiers are\nforced to leverage complementary object regions for classification and can\nfinally generate integral object localization together. The merits of ACoL are\nmainly two-fold: 1) it can be trained in an end-to-end manner; 2) dynamically\nerasing enables the counterpart classifier to discover complementary object\nregions more effectively. We demonstrate the superiority of our ACoL approach\nin a variety of experiments. In particular, the Top-1 localization error rate\non the ILSVRC dataset is 45.14%, which is the new state-of-the-art.", "field": [], "task": ["Object Localization", "Weakly-Supervised Object Localization"], "method": [], "dataset": ["ILSVRC 2016"], "metric": ["Top-5 Error"], "title": "Adversarial Complementary Learning for Weakly Supervised Object Localization"} {"abstract": "This paper introduces geometry and object shape and pose costs for\nmulti-object tracking in urban driving scenarios. Using images from a monocular\ncamera alone, we devise pairwise costs for object tracks, based on several 3D\ncues such as object pose, shape, and motion. The proposed costs are agnostic to\nthe data association method and can be incorporated into any optimization\nframework to output the pairwise data associations. These costs are easy to\nimplement, can be computed in real-time, and complement each other to account\nfor possible errors in a tracking-by-detection framework. We perform an\nextensive analysis of the designed costs and empirically demonstrate consistent\nimprovement over the state-of-the-art under varying conditions that employ a\nrange of object detectors, exhibit a variety in camera and object motions, and,\nmore importantly, are not reliant on the choice of the association framework.\nWe also show that, by using the simplest of associations frameworks (two-frame\nHungarian assignment), we surpass the state-of-the-art in multi-object-tracking\non road scenes. 
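The adversarial complementary learning record above erases the regions discovered by one classification branch from the shared feature maps, forcing the second branch to find complementary object regions. A minimal sketch of that erasing step, thresholding a class activation map at a fraction of its maximum; the threshold value and tensor shapes are assumptions.

```python
import torch

def erase_discovered_regions(features, cam, thresh_ratio=0.6):
    """Zero out feature-map locations where the first branch's CAM is high.

    features: (B, C, H, W) shared feature maps
    cam:      (B, H, W) class activation map from branch A for the target class
    """
    cam_max = cam.flatten(1).max(dim=1).values.view(-1, 1, 1)
    keep = (cam < thresh_ratio * cam_max).float().unsqueeze(1)   # (B, 1, H, W)
    return features * keep   # branch B only sees the complementary regions

features = torch.rand(2, 512, 14, 14)
cam = torch.rand(2, 14, 14)
erased = erase_discovered_regions(features, cam)
print((erased == 0).float().mean())   # fraction of activations that were erased
```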
More qualitative and quantitative results can be found at the\nfollowing URL: https://junaidcs032.github.io/Geometry_ObjectShape_MOT/.", "field": [], "task": ["Multi-Object Tracking", "Object Tracking", "Online Multi-Object Tracking"], "method": [], "dataset": ["KITTI Tracking test", "KITTI"], "metric": ["MOTA", "MOTP"], "title": "Beyond Pixels: Leveraging Geometry and Shape Cues for Online Multi-Object Tracking"} {"abstract": "In this paper, we propose an interactive matching network (IMN) for the multi-turn response selection task. First, IMN constructs word representations from three aspects to address the challenge of out-of-vocabulary (OOV) words. Second, an attentive hierarchical recurrent encoder (AHRE), which is capable of encoding sentences hierarchically and generating more descriptive representations by aggregating with an attention mechanism, is designed. Finally, the bidirectional interactions between whole multi-turn contexts and response candidates are calculated to derive the matching information between them. Experiments on four public datasets show that IMN outperforms the baseline models on all metrics, achieving a new state-of-the-art performance and demonstrating compatibility across domains for multi-turn response selection.", "field": [], "task": ["Conversational Response Selection"], "method": [], "dataset": ["Ubuntu Dialogue (v1, Ranking)"], "metric": ["R10@1", "R10@5", "R2@1", "R10@2"], "title": "Interactive Matching Network for Multi-Turn Response Selection in Retrieval-Based Chatbots"} {"abstract": "We propose a method for human activity recognition from RGB data that does\nnot rely on any pose information during test time and does not explicitly\ncalculate pose information internally. Instead, a visual attention module\nlearns to predict glimpse sequences in each frame. These glimpses correspond to\ninterest points in the scene that are relevant to the classified activities. No\nspatial coherence is forced on the glimpse locations, which gives the module\nliberty to explore different points at each frame and better optimize the\nprocess of scrutinizing visual information. Tracking and sequentially\nintegrating this kind of unstructured data is a challenge, which we address by\nseparating the set of glimpses from a set of recurrent tracking/recognition\nworkers. These workers receive glimpses, jointly performing subsequent motion\ntracking and activity prediction. The glimpses are soft-assigned to the\nworkers, optimizing coherence of the assignments in space, time and feature\nspace using an external memory module. No hard decisions are taken, i.e. each\nglimpse point is assigned to all existing workers, albeit with different\nimportance. Our methods outperform state-of-the-art methods on the largest\nhuman activity recognition dataset available to-date; NTU RGB+D Dataset, and on\na smaller human action recognition dataset Northwestern-UCLA Multiview Action\n3D Dataset. Our code is publicly available at\nhttps://github.com/fabienbaradel/glimpse_clouds.", "field": [], "task": ["Action Recognition", "Activity Prediction", "Activity Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D", "N-UCLA"], "metric": ["Accuracy (CS)", "Accuracy (CV)", "Accuracy"], "title": "Glimpse Clouds: Human Activity Recognition from Unstructured Feature Points"} {"abstract": "Modern cars are incorporating an increasing number of driver assist features,\namong which automatic lane keeping. 
The latter allows the car to properly\nposition itself within the road lanes, which is also crucial for any subsequent\nlane departure or trajectory planning decision in fully autonomous cars.\nTraditional lane detection methods rely on a combination of highly-specialized,\nhand-crafted features and heuristics, usually followed by post-processing\ntechniques, that are computationally expensive and prone to scalability due to\nroad scene variations. More recent approaches leverage deep learning models,\ntrained for pixel-wise lane segmentation, even when no markings are present in\nthe image due to their big receptive field. Despite their advantages, these\nmethods are limited to detecting a pre-defined, fixed number of lanes, e.g.\nego-lanes, and can not cope with lane changes. In this paper, we go beyond the\naforementioned limitations and propose to cast the lane detection problem as an\ninstance segmentation problem - in which each lane forms its own instance -\nthat can be trained end-to-end. To parametrize the segmented lane instances\nbefore fitting the lane, we further propose to apply a learned perspective\ntransformation, conditioned on the image, in contrast to a fixed \"bird's-eye\nview\" transformation. By doing so, we ensure a lane fitting which is robust\nagainst road plane changes, unlike existing approaches that rely on a fixed,\npre-defined transformation. In summary, we propose a fast lane detection\nalgorithm, running at 50 fps, which can handle a variable number of lanes and\ncope with lane changes. We verify our method on the tuSimple dataset and\nachieve competitive results.", "field": [], "task": ["Instance Segmentation", "Lane Detection", "Semantic Segmentation"], "method": [], "dataset": ["TuSimple"], "metric": ["F1 score", "Accuracy"], "title": "Towards End-to-End Lane Detection: an Instance Segmentation Approach"} {"abstract": "Learning individual-level causal effects from observational data, such as\ninferring the most effective medication for a specific patient, is a problem of\ngrowing importance for policy makers. The most important aspect of inferring\ncausal effects from observational data is the handling of confounders, factors\nthat affect both an intervention and its outcome. A carefully designed\nobservational study attempts to measure all important confounders. However,\neven if one does not have direct access to all confounders, there may exist\nnoisy and uncertain measurement of proxies for confounders. We build on recent\nadvances in latent variable modeling to simultaneously estimate the unknown\nlatent space summarizing the confounders and the causal effect. Our method is\nbased on Variational Autoencoders (VAE) which follow the causal structure of\ninference with proxies. We show our method is significantly more robust than\nexisting methods, and matches the state-of-the-art on previous benchmarks\nfocused on individual treatment effects.", "field": [], "task": ["Causal Inference", "Latent Variable Models"], "method": [], "dataset": ["IDHP"], "metric": ["Average Treatment Effect Error"], "title": "Causal Effect Inference with Deep Latent-Variable Models"} {"abstract": "Most of the unsupervised dependency parsers are based on first-order probabilistic generative models that only consider local parent-child information. Inspired by second-order supervised dependency parsing, we proposed a second-order extension of unsupervised neural dependency models that incorporate grandparent-child or sibling information. 
We also propose a novel design of the neural parameterization and optimization methods of the dependency models. In second-order models, the number of grammar rules grows cubically with the increase of vocabulary size, making it difficult to train lexicalized models that may contain thousands of words. To circumvent this problem while still benefiting from both second-order parsing and lexicalization, we use the agreement-based learning framework to jointly train a second-order unlexicalized model and a first-order lexicalized model. Experiments on multiple datasets show the effectiveness of our second-order models compared with recent state-of-the-art methods. Our joint model achieves a 10% improvement over the previous state-of-the-art parser on the full WSJ test set", "field": [], "task": ["Dependency Grammar Induction", "Dependency Parsing"], "method": [], "dataset": ["WSJ10", "WSJ"], "metric": ["UAS"], "title": "Second-Order Unsupervised Neural Dependency Parsing"} {"abstract": "Pedestrian analysis plays a vital role in intelligent video surveillance and\nis a key component for security-centric computer vision systems. Despite that\nthe convolutional neural networks are remarkable in learning discriminative\nfeatures from images, the learning of comprehensive features of pedestrians for\nfine-grained tasks remains an open problem. In this study, we propose a new\nattention-based deep neural network, named as HydraPlus-Net (HP-net), that\nmulti-directionally feeds the multi-level attention maps to different feature\nlayers. The attentive deep features learned from the proposed HP-net bring\nunique advantages: (1) the model is capable of capturing multiple attentions\nfrom low-level to semantic-level, and (2) it explores the multi-scale\nselectiveness of attentive features to enrich the final feature representations\nfor a pedestrian image. We demonstrate the effectiveness and generality of the\nproposed HP-net for pedestrian analysis on two tasks, i.e. pedestrian attribute\nrecognition and person re-identification. Intensive experimental results have\nbeen provided to prove that the HP-net outperforms the state-of-the-art methods\non various datasets.", "field": [], "task": ["Pedestrian Attribute Recognition", "Person Re-Identification"], "method": [], "dataset": ["RAP", "PA-100K", "PETA"], "metric": ["Accuracy"], "title": "HydraPlus-Net: Attentive Deep Features for Pedestrian Analysis"} {"abstract": "We consider the question: what can be learnt by looking at and listening to a\nlarge number of unlabelled videos? There is a valuable, but so far untapped,\nsource of information contained in the video itself -- the correspondence\nbetween the visual and the audio streams, and we introduce a novel\n\"Audio-Visual Correspondence\" learning task that makes use of this. Training\nvisual and audio networks from scratch, without any additional supervision\nother than the raw unconstrained videos themselves, is shown to successfully\nsolve this task, and, more interestingly, result in good visual and audio\nrepresentations. These features set the new state-of-the-art on two sound\nclassification benchmarks, and perform on par with the state-of-the-art\nself-supervised approaches on ImageNet classification. 
We also demonstrate that\nthe network is able to localize objects in both modalities, as well as perform\nfine-grained recognition tasks.", "field": [], "task": ["Audio Classification"], "method": [], "dataset": ["AudioSet", "ESC-50"], "metric": ["Test mAP", "Top-1 Accuracy"], "title": "Look, Listen and Learn"} {"abstract": "Multi-label image classification is a fundamental but challenging task in\ncomputer vision. Great progress has been achieved by exploiting semantic\nrelations between labels in recent years. However, conventional approaches are\nunable to model the underlying spatial relations between labels in multi-label\nimages, because spatial annotations of the labels are generally not provided.\nIn this paper, we propose a unified deep neural network that exploits both\nsemantic and spatial relations between labels with only image-level\nsupervisions. Given a multi-label image, our proposed Spatial Regularization\nNetwork (SRN) generates attention maps for all labels and captures the\nunderlying relations between them via learnable convolutions. By aggregating\nthe regularized classification results with original results by a ResNet-101\nnetwork, the classification performance can be consistently improved. The whole\ndeep neural network is trained end-to-end with only image-level annotations,\nthus requires no additional efforts on image annotations. Extensive evaluations\non 3 public datasets with different types of labels show that our approach\nsignificantly outperforms state-of-the-arts and has strong generalization\ncapability. Analysis of the learned SRN model demonstrates that it can\neffectively capture both semantic and spatial relations of labels for improving\nclassification performance.", "field": [], "task": ["Image Classification", "Multi-Label Classification"], "method": [], "dataset": ["MS-COCO", "NUS-WIDE"], "metric": ["mAP", "MAP"], "title": "Learning Spatial Regularization with Image-level Supervisions for Multi-label Image Classification"} {"abstract": "Learning sophisticated feature interactions behind user behaviors is critical\nin maximizing CTR for recommender systems. Despite great progress, existing\nmethods seem to have a strong bias towards low- or high-order interactions, or\nrequire expertise feature engineering. In this paper, we show that it is\npossible to derive an end-to-end learning model that emphasizes both low- and\nhigh-order feature interactions. The proposed model, DeepFM, combines the power\nof factorization machines for recommendation and deep learning for feature\nlearning in a new neural network architecture. Compared to the latest Wide \\&\nDeep model from Google, DeepFM has a shared input to its \"wide\" and \"deep\"\nparts, with no need of feature engineering besides raw features. Comprehensive\nexperiments are conducted to demonstrate the effectiveness and efficiency of\nDeepFM over the existing models for CTR prediction, on both benchmark data and\ncommercial data.", "field": [], "task": ["Click-Through Rate Prediction", "Feature Engineering", "Recommendation Systems"], "method": [], "dataset": ["Bing News", "Amazon", "MovieLens 20M", "Criteo", "Company*", "Dianping"], "metric": ["Log Loss", "AUC"], "title": "DeepFM: A Factorization-Machine based Neural Network for CTR Prediction"} {"abstract": "Recently, several deep learning-based image super-resolution methods have\nbeen developed by stacking massive numbers of layers. 
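The DeepFM record above combines a factorization-machine term with an MLP over a shared set of field embeddings. Below is a compact sketch for purely categorical fields; the field sizes, embedding dimension, and MLP shape are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class DeepFM(nn.Module):
    """FM (first- and second-order) plus an MLP, both reading shared field embeddings."""
    def __init__(self, field_sizes, embed_dim=8, hidden=(64, 32)):
        super().__init__()
        self.embeds = nn.ModuleList(nn.Embedding(n, embed_dim) for n in field_sizes)
        self.firsts = nn.ModuleList(nn.Embedding(n, 1) for n in field_sizes)
        layers, in_dim = [], embed_dim * len(field_sizes)
        for h in hidden:
            layers += [nn.Linear(in_dim, h), nn.ReLU()]
            in_dim = h
        self.mlp = nn.Sequential(*layers, nn.Linear(in_dim, 1))

    def forward(self, x):                          # x: (B, num_fields) of category ids
        emb = torch.stack([e(x[:, i]) for i, e in enumerate(self.embeds)], dim=1)  # (B, F, D)
        first = sum(f(x[:, i]) for i, f in enumerate(self.firsts))                 # (B, 1)
        square_of_sum = emb.sum(dim=1).pow(2)                                      # (B, D)
        sum_of_square = emb.pow(2).sum(dim=1)                                      # (B, D)
        second = 0.5 * (square_of_sum - sum_of_square).sum(dim=1, keepdim=True)    # (B, 1)
        deep = self.mlp(emb.flatten(1))                                            # (B, 1)
        return torch.sigmoid(first + second + deep).squeeze(-1)

model = DeepFM(field_sizes=[1000, 50, 10])
clicks = model(torch.randint(0, 10, (4, 3)))   # ids must be smaller than each field size
print(clicks.shape)                            # torch.Size([4])
```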
However, this leads to overly large model sizes and high computational complexities, so some recursive parameter-sharing methods have also been proposed. Nevertheless, their designs do not properly utilize the potential of the recursive operation. In this paper, we propose a novel, lightweight, and efficient super-resolution method that maximizes the usefulness of the recursive architecture by introducing a block state-based recursive network. By utilizing the block state, the recursive part of our model can easily track the status of the current image features. We show the benefits of the proposed method in terms of model size, speed, and efficiency. In addition, we show that our method outperforms other state-of-the-art methods.", "field": [], "task": ["Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["Set5 - 4x upscaling", "Urban100 - 4x upscaling", "BSD100 - 4x upscaling", "Set14 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Lightweight and Efficient Image Super-Resolution with Block State-based Recursive Network"} {"abstract": "The fundamental role of hypernymy in NLP has motivated the development of\nmany methods for the automatic identification of this relation, most of which\nrely on word distribution. We investigate an extensive number of such\nunsupervised measures, using several distributional semantic models that differ\nby context type and feature weighting. We analyze the performance of the\ndifferent methods based on their linguistic motivation. Comparison to the\nstate-of-the-art supervised methods shows that while supervised methods\ngenerally outperform the unsupervised ones, the former are sensitive to the\ndistribution of training instances, hurting their reliability. Being based on\ngeneral linguistic hypotheses and independent from training data, unsupervised\nmeasures are more robust, and therefore are still useful artillery for\nhypernymy detection.", "field": [], "task": ["Hypernym Discovery"], "method": [], "dataset": ["Medical domain", "Music domain", "General"], "metric": ["P@5", "MRR", "MAP"], "title": "Hypernyms under Siege: Linguistically-motivated Artillery for Hypernymy Detection"} {"abstract": "Weakly supervised learning of object detection is an important problem in\nimage understanding that still does not have a satisfactory solution. In this\npaper, we address this problem by exploiting the power of deep convolutional\nneural networks pre-trained on large-scale image-level classification tasks. We\npropose a weakly supervised deep detection architecture that modifies one such\nnetwork to operate at the level of image regions, performing simultaneously\nregion selection and classification. Trained as an image classifier, the\narchitecture implicitly learns object detectors that are better than\nalternative weakly supervised detection systems on the PASCAL VOC data.
The\nmodel, which is a simple and elegant end-to-end architecture, outperforms\nstandard data augmentation and fine-tuning techniques for the task of\nimage-level classification as well.", "field": [], "task": ["Data Augmentation", "Object Detection", "Weakly Supervised Object Detection"], "method": [], "dataset": ["HICO-DET", "Watercolor2k", "COCO test-dev", "PASCAL VOC 2007", "Charades"], "metric": ["AP50", "MAP"], "title": "Weakly Supervised Deep Detection Networks"} {"abstract": "Named Entity Disambiguation (NED) refers to the task of resolving multiple\nnamed entity mentions in a document to their correct references in a knowledge\nbase (KB) (e.g., Wikipedia). In this paper, we propose a novel embedding method\nspecifically designed for NED. The proposed method jointly maps words and\nentities into the same continuous vector space. We extend the skip-gram model\nby using two models. The KB graph model learns the relatedness of entities\nusing the link structure of the KB, whereas the anchor context model aims to\nalign vectors such that similar words and entities occur close to one another\nin the vector space by leveraging KB anchors and their context words. By\ncombining contexts based on the proposed embedding with standard NED features,\nwe achieved state-of-the-art accuracy of 93.1% on the standard CoNLL dataset\nand 85.2% on the TAC 2010 dataset.", "field": [], "task": ["Entity Disambiguation", "Entity Linking"], "method": [], "dataset": ["TAC2010", "AIDA-CoNLL"], "metric": ["Micro Precision", "In-KB Accuracy"], "title": "Joint Learning of the Embedding of Words and Entities for Named Entity Disambiguation"} {"abstract": "We introduce a new representation learning approach for domain adaptation, in\nwhich data at training and test time come from similar but different\ndistributions. Our approach is directly inspired by the theory on domain\nadaptation suggesting that, for effective domain transfer to be achieved,\npredictions must be made based on features that cannot discriminate between the\ntraining (source) and test (target) domains. The approach implements this idea\nin the context of neural network architectures that are trained on labeled data\nfrom the source domain and unlabeled data from the target domain (no labeled\ntarget-domain data is necessary). As the training progresses, the approach\npromotes the emergence of features that are (i) discriminative for the main\nlearning task on the source domain and (ii) indiscriminate with respect to the\nshift between the domains. We show that this adaptation behaviour can be\nachieved in almost any feed-forward model by augmenting it with few standard\nlayers and a new gradient reversal layer. The resulting augmented architecture\ncan be trained using standard backpropagation and stochastic gradient descent,\nand can thus be implemented with little effort using any of the deep learning\npackages. We demonstrate the success of our approach for two distinct\nclassification problems (document sentiment analysis and image classification),\nwhere state-of-the-art domain adaptation performance on standard benchmarks is\nachieved. 
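The weakly supervised deep detection architecture described above scores each image region with two parallel streams, a per-region softmax over classes and a per-class softmax over regions, multiplies them, and sums over regions to obtain image-level scores trained with image-level labels only. A minimal sketch of that scoring head; the feature dimensionality and single-image batching are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamHead(nn.Module):
    """WSDDN-style head: region scores = softmax over classes x softmax over regions."""
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.cls_stream = nn.Linear(feat_dim, num_classes)
        self.det_stream = nn.Linear(feat_dim, num_classes)

    def forward(self, region_feats):                            # (R, feat_dim) for one image
        cls = F.softmax(self.cls_stream(region_feats), dim=1)   # which class, per region
        det = F.softmax(self.det_stream(region_feats), dim=0)   # which region, per class
        region_scores = cls * det                               # (R, num_classes)
        image_scores = region_scores.sum(dim=0)                 # supervised by image labels
        return image_scores, region_scores

head = TwoStreamHead(feat_dim=4096, num_classes=20)
image_scores, region_scores = head(torch.randn(300, 4096))
print(image_scores.shape, region_scores.shape)   # (20,) (300, 20)
```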
We also validate the approach for descriptor learning task in the\ncontext of person re-identification application.", "field": [], "task": ["Domain Adaptation", "Image Classification", "Person Re-Identification", "Representation Learning", "Sentiment Analysis"], "method": [], "dataset": ["SVNH-to-MNIST", "Synth Digits-to-SVHN", "Office-Home", "Multi-Domain Sentiment Dataset", "MNIST-to-MNIST-M", "Syn2Real-C"], "metric": ["DVD", "Average", "Kitchen", "Electronics", "Accuracy", "Books"], "title": "Domain-Adversarial Training of Neural Networks"} {"abstract": "Deep Recurrent Neural Network architectures, though remarkably capable at\nmodeling sequences, lack an intuitive high-level spatio-temporal structure.\nThat is while many problems in computer vision inherently have an underlying\nhigh-level structure and can benefit from it. Spatio-temporal graphs are a\npopular tool for imposing such high-level intuitions in the formulation of real\nworld problems. In this paper, we propose an approach for combining the power\nof high-level spatio-temporal graphs and sequence learning success of Recurrent\nNeural Networks~(RNNs). We develop a scalable method for casting an arbitrary\nspatio-temporal graph as a rich RNN mixture that is feedforward, fully\ndifferentiable, and jointly trainable. The proposed method is generic and\nprincipled as it can be used for transforming any spatio-temporal graph through\nemploying a certain set of well defined steps. The evaluations of the proposed\napproach on a diverse set of problems, ranging from modeling human motion to\nobject interactions, shows improvement over the state-of-the-art with a large\nmargin. We expect this method to empower new approaches to problem formulation\nthrough high-level spatio-temporal graphs and Recurrent Neural Networks.", "field": [], "task": ["Human Pose Forecasting", "Skeleton Based Action Recognition"], "method": [], "dataset": ["Human3.6M", "CAD-120"], "metric": ["MAR, walking, 400ms", "MAR, walking, 1,000ms", "Accuracy"], "title": "Structural-RNN: Deep Learning on Spatio-Temporal Graphs"} {"abstract": "In this work we explore recent advances in Recurrent Neural Networks for\nlarge scale Language Modeling, a task central to language understanding. We\nextend current models to deal with two key challenges present in this task:\ncorpora and vocabulary sizes, and complex, long term structure of language. We\nperform an exhaustive study on techniques such as character Convolutional\nNeural Networks or Long-Short Term Memory, on the One Billion Word Benchmark.\nOur best single model significantly improves state-of-the-art perplexity from\n51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20),\nwhile an ensemble of models sets a new record by improving perplexity from 41.0\ndown to 23.7. We also release these models for the NLP and ML community to\nstudy and improve upon.", "field": [], "task": ["Language Modelling"], "method": [], "dataset": ["One Billion Word"], "metric": ["Number of params", "PPL"], "title": "Exploring the Limits of Language Modeling"} {"abstract": "The JPEG image compression algorithm is the most popular method of image compression because of its ability for large compression ratios. However, to achieve such high compression, information is lost. For aggressive quantization settings, this leads to a noticeable reduction in image quality. 
Artifact correction has been studied in the context of deep neural networks for some time, but the current state-of-the-art methods require a different model to be trained for each quality setting, greatly limiting their practical application. We solve this problem by creating a novel architecture which is parameterized by the JPEG files quantization matrix. This allows our single model to achieve state-of-the-art performance over models trained for specific quality settings.", "field": [], "task": ["Image Compression", "JPEG Artifact Correction", "Quantization"], "method": [], "dataset": ["BSDS500 (Quality 10 Color)", "BSDS500 (Quality 30 Grayscale)", "LIVE1 (Quality 10 Color)", "ICB (Quality 10 Grayscale)", "ICB (Quality 10 Color)", "LIVE1 (Quality 20 Color)", "BSDS500 (Quality 20 Color)", "Classic5 (Quality 20 Grayscale)", "LIVE1 (Quality 30 Color)", "ICB (Quality 30 Color)", "ICB (Quality 30 Grayscale)", "Live1 (Quality 10 Grayscale)", "Classic5 (Quality 10 Grayscale)", "BSDS500 (Quality 20 Grayscale)", "LIVE1 (Quality 20 Grayscale)", "ICB (Quality 20 Color)", "LIVE1 (Quality 30 Grayscale)", "BSDS500 (Quality 30 Color)", "Classic5 (Quality 30 Grayscale)", "ICB (Quality 20 Grayscale)", "BSDS500 (Quality 10 Grayscale)"], "metric": ["SSIM", "PSNR", "PSNR-B"], "title": "Quantization Guided JPEG Artifact Correction"} {"abstract": "Generating semantically coherent responses is still a major challenge in\ndialogue generation. Different from conventional text generation tasks, the\nmapping between inputs and responses in conversations is more complicated,\nwhich highly demands the understanding of utterance-level semantic dependency,\na relation between the whole meanings of inputs and outputs. To address this\nproblem, we propose an Auto-Encoder Matching (AEM) model to learn such\ndependency. The model contains two auto-encoders and one mapping module. The\nauto-encoders learn the semantic representations of inputs and responses, and\nthe mapping module learns to connect the utterance-level representations.\nExperimental results from automatic and human evaluations demonstrate that our\nmodel is capable of generating responses of high coherence and fluency compared\nto baseline models. The code is available at https://github.com/lancopku/AMM", "field": [], "task": ["Dialogue Generation", "Text Generation"], "method": [], "dataset": ["DailyDialog"], "metric": ["BLEU-3", "BLEU-4", "BLEU-2", "BLEU-1"], "title": "An Auto-Encoder Matching Model for Learning Utterance-Level Semantic Dependency in Dialogue Generation"} {"abstract": "The design of neural network architectures is an important component for\nachieving state-of-the-art performance with machine learning systems across a\nbroad array of tasks. Much work has endeavored to design and build\narchitectures automatically through clever construction of a search space\npaired with simple learning algorithms. Recent progress has demonstrated that\nsuch meta-learning methods may exceed scalable human-invented architectures on\nimage classification tasks. An open question is the degree to which such\nmethods may generalize to new domains. In this work we explore the construction\nof meta-learning techniques for dense image prediction focused on the tasks of\nscene parsing, person-part segmentation, and semantic image segmentation.\nConstructing viable search spaces in this domain is challenging because of the\nmulti-scale representation of visual information and the necessity to operate\non high resolution imagery. 
Based on a survey of techniques in dense image\nprediction, we construct a recursive search space and demonstrate that even\nwith efficient random search, we can identify architectures that outperform\nhuman-invented architectures and achieve state-of-the-art performance on three\ndense prediction tasks including 82.7\\% on Cityscapes (street scene parsing),\n71.3\\% on PASCAL-Person-Part (person-part segmentation), and 87.9\\% on PASCAL\nVOC 2012 (semantic image segmentation). Additionally, the resulting\narchitecture is more computationally efficient, requiring half the parameters\nand half the computational cost as previous state of the art systems.", "field": [], "task": ["Image Classification", "Meta-Learning", "Scene Parsing", "Semantic Segmentation", "Street Scene Parsing"], "method": [], "dataset": ["PASCAL-Part", "PASCAL VOC 2012 test", "Cityscapes test"], "metric": ["Mean IoU", "Mean IoU (class)", "mIoU"], "title": "Searching for Efficient Multi-Scale Architectures for Dense Image Prediction"} {"abstract": "Fake news are nowadays an issue of pressing concern, given their recent rise\nas a potential threat to high-quality journalism and well-informed public\ndiscourse. The Fake News Challenge (FNC-1) was organized in 2017 to encourage\nthe development of machine learning-based classification systems for stance\ndetection (i.e., for identifying whether a particular news article agrees,\ndisagrees, discusses, or is unrelated to a particular news headline), thus\nhelping in the detection and analysis of possible instances of fake news. This\narticle presents a new approach to tackle this stance detection problem, based\non the combination of string similarity features with a deep neural\narchitecture that leverages ideas previously advanced in the context of\nlearning efficient text representations, document classification, and natural\nlanguage inference. Specifically, we use bi-directional Recurrent Neural\nNetworks, together with max-pooling over the temporal/sequential dimension and\nneural attention, for representing (i) the headline, (ii) the first two\nsentences of the news article, and (iii) the entire news article. These\nrepresentations are then combined/compared, complemented with similarity\nfeatures inspired on other FNC-1 approaches, and passed to a final layer that\npredicts the stance of the article towards the headline. We also explore the\nuse of external sources of information, specifically large datasets of sentence\npairs originally proposed for training and evaluating natural language\ninference methods, in order to pre-train specific components of the neural\nnetwork architecture (e.g., the RNNs used for encoding sentences). 
The obtained\nresults attest to the effectiveness of the proposed ideas and show that our\nmodel, particularly when considering pre-training and the combination of neural\nrepresentations together with similarity features, slightly outperforms the\nprevious state-of-the-art.", "field": [], "task": ["Document Classification", "Natural Language Inference", "Representation Learning", "Stance Detection"], "method": [], "dataset": ["MultiNLI", "FNC-1", "SNLI"], "metric": ["Per-class Accuracy (Disagree)", "% Test Accuracy", "Weighted Accuracy", "Matched", "Per-class Accuracy (Discuss)", "Per-class Accuracy (Unrelated)", "Mismatched", "Per-class Accuracy (Agree)"], "title": "Combining Similarity Features and Deep Representation Learning for Stance Detection in the Context of Checking Fake News"} {"abstract": "In this work, we demonstrate that 3D poses in video can be effectively\nestimated with a fully convolutional model based on dilated temporal\nconvolutions over 2D keypoints. We also introduce back-projection, a simple and\neffective semi-supervised training method that leverages unlabeled video data.\nWe start with predicted 2D keypoints for unlabeled video, then estimate 3D\nposes and finally back-project to the input 2D keypoints. In the supervised\nsetting, our fully-convolutional model outperforms the previous best result\nfrom the literature by 6 mm mean per-joint position error on Human3.6M,\ncorresponding to an error reduction of 11%, and the model also shows\nsignificant improvements on HumanEva-I. Moreover, experiments with\nback-projection show that it comfortably outperforms previous state-of-the-art\nresults in semi-supervised settings where labeled data is scarce. Code and\nmodels are available at https://github.com/facebookresearch/VideoPose3D", "field": [], "task": ["3D Human Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["Human3.6M"], "metric": ["Average MPJPE (mm)", "Multi-View or Monocular", "Using 2D ground-truth joints"], "title": "3D human pose estimation in video with temporal convolutions and semi-supervised training"} {"abstract": "In this work, we propose an end-to-end constrained clustering scheme to tackle the person re-identification (re-id) problem. Deep neural networks (DNN) have recently proven to be effective on person re-identification task. In particular, rather than leveraging solely a probe-gallery similarity, diffusing the similarities among the gallery images in an end-to-end manner has proven to be effective in yielding a robust probe-gallery affinity. However, existing methods do not apply probe image as a constraint, and are prone to noise propagation during the similarity diffusion process. To overcome this, we propose an intriguing scheme which treats person-image retrieval problem as a {\\em constrained clustering optimization} problem, called deep constrained dominant sets (DCDS). Given a probe and gallery images, we re-formulate person re-id problem as finding a constrained cluster, where the probe image is taken as a constraint (seed) and each cluster corresponds to a set of images corresponding to the same person. By optimizing the constrained clustering in an end-to-end manner, we naturally leverage the contextual knowledge of a set of images corresponding to the given person-images. We further enhance the performance by integrating an auxiliary net alongside DCDS, which employs a multi-scale Resnet. 
To validate the effectiveness of our method we present experiments on several benchmark datasets and show that the proposed method can outperform state-of-the-art methods.", "field": [], "task": ["Image Retrieval", "Person Re-Identification"], "method": [], "dataset": ["DukeMTMC-reID", "Market-1501", "CUHK03"], "metric": ["Rank-1", "Rank-5", "MAP"], "title": "Deep Constrained Dominant Sets for Person Re-identification"} {"abstract": "Most graph kernels are an instance of the class of $\\mathcal{R}$-Convolution kernels, which measure the similarity of objects by comparing their substructures. Despite their empirical success, most graph kernels use a naive aggregation of the final set of substructures, usually a sum or average, thereby potentially discarding valuable information about the distribution of individual components. Furthermore, only a limited instance of these approaches can be extended to continuously attributed graphs. We propose a novel method that relies on the Wasserstein distance between the node feature vector distributions of two graphs, which allows to find subtler differences in data sets by considering graphs as high-dimensional objects, rather than simple means. We further propose a Weisfeiler-Lehman inspired embedding scheme for graphs with continuous node attributes and weighted edges, enhance it with the computed Wasserstein distance, and thus improve the state-of-the-art prediction performance on several graph classification tasks.", "field": [], "task": ["Graph Classification"], "method": [], "dataset": ["ENZYMES", "PROTEINS", "D&D", "NCI1", "MUTAG", "PTC"], "metric": ["Accuracy"], "title": "Wasserstein Weisfeiler-Lehman Graph Kernels"} {"abstract": "Recent advances in semi-supervised learning have shown tremendous potential in overcoming a major barrier to the success of modern machine learning algorithms: access to vast amounts of human-labeled training data. Previous algorithms based on consistency regularization can harness the abundance of unlabeled data to produce impressive results on a number of semi-supervised benchmarks, approaching the performance of strong supervised baselines using only a fraction of the available labeled data. In this work, we challenge the long-standing success of consistency regularization by introducing self-supervised regularization as the basis for combining semantic feature representations from unlabeled data. We perform extensive comparative experiments to demonstrate the effectiveness of self-supervised regularization for supervised and semi-supervised image classification on SVHN, CIFAR-10, and CIFAR-100 benchmark datasets. We present two main results: (1) models augmented with self-supervised regularization significantly improve upon traditional supervised classifiers without the need for unlabeled data; (2) together with unlabeled data, our models yield semi-supervised performance competitive with, and in many cases exceeding, prior state-of-the-art consistency baselines. Lastly, our models have the practical utility of being efficiently trained end-to-end and require no additional hyper-parameters to tune for optimal performance beyond the standard set for training neural networks. 
Reference code and data are available at https://github.com/vuptran/sesemi", "field": [], "task": ["Image Classification", "Multi-Task Learning", "Semi-Supervised Image Classification"], "method": [], "dataset": ["SVHN, 500 Labels", "CIFAR-10, 2000 Labels", "SVHN, 250 Labels", "SVHN, 1000 labels", "CIFAR-10, 1000 Labels", "cifar-100, 10000 Labels", "CIFAR-10, 4000 Labels"], "metric": ["Accuracy"], "title": "Exploring Self-Supervised Regularization for Supervised and Semi-Supervised Learning"} {"abstract": "A spoken language understanding (SLU) system includes two main tasks, slot filling (SF) and intent detection (ID). The joint model for the two tasks is becoming a tendency in SLU. But the bi-directional interrelated connections between the intent and slots are not established in the existing joint models. In this paper, we propose a novel bi-directional interrelated model for joint intent detection and slot filling. We introduce an SF-ID network to establish direct connections for the two tasks to help them promote each other mutually. Besides, we design an entirely new iteration mechanism inside the SF-ID network to enhance the bi-directional interrelated connections. The experimental results show that the relative improvement in the sentence-level semantic frame accuracy of our model is 3.79% and 5.42% on ATIS and Snips datasets, respectively, compared to the state-of-the-art model.", "field": [], "task": ["Intent Detection", "Slot Filling", "Spoken Language Understanding"], "method": [], "dataset": ["ATIS", "SNIPS"], "metric": ["Slot F1 Score", "Intent Accuracy", "F1", "Accuracy"], "title": "A Novel Bi-directional Interrelated Model for Joint Intent Detection and Slot Filling"} {"abstract": "Existing approaches to recipe generation are unable to create recipes for users with culinary preferences but incomplete knowledge of ingredients in specific dishes. We propose a new task of personalized recipe generation to help these users: expanding a name and incomplete ingredient details into complete natural-text instructions aligned with the user's historical preferences. We attend on technique- and recipe-level representations of a user's previously consumed recipes, fusing these 'user-aware' representations in an attention fusion layer to control recipe text generation. Experiments on a new dataset of 180K recipes and 700K interactions show our model's ability to generate plausible and personalized recipes compared to non-personalized baselines.", "field": [], "task": ["Recipe Generation", "Text Generation"], "method": [], "dataset": ["Food.com"], "metric": ["D-1", "BLEU-1", "BPE Perplexity", "D-2", "Rouge-L", "BLEU-4"], "title": "Generating Personalized Recipes from Historical User Preferences"} {"abstract": "Aspect-based sentiment analysis (ABSA) has attracted increasing attention recently due to its broad applications. In existing ABSA datasets, most sentences contain only one aspect or multiple aspects with the same sentiment polarity, which makes ABSA task degenerate to sentence-level sentiment analysis. In this paper, we present a new large-scale Multi-Aspect Multi-Sentiment (MAMS) dataset, in which each sentence contains at least two different aspects with different sentiment polarities. The release of this dataset would push forward the research in this field. In addition, we propose simple yet effective CapsNet and CapsNet-BERT models which combine the strengths of recent NLP advances. 
Experiments on our new dataset show that the proposed model significantly outperforms the state-of-the-art baseline methods.", "field": [], "task": ["Aspect-Based Sentiment Analysis", "Sentiment Analysis"], "method": [], "dataset": ["MAMS"], "metric": ["Acc"], "title": "A Challenge Dataset and Effective Models for Aspect-Based Sentiment Analysis"} {"abstract": "The pre-training of text encoders normally processes text as a sequence of tokens corresponding to small text units, such as word pieces in English and characters in Chinese. It omits information carried by larger text granularity, and thus the encoders cannot easily adapt to certain combinations of characters. This leads to a loss of important semantic information, which is especially problematic for Chinese because the language does not have explicit word boundaries. In this paper, we propose ZEN, a BERT-based Chinese (Z) text encoder Enhanced by N-gram representations, where different combinations of characters are considered during training. As a result, potential word or phrase boundaries are explicitly pre-trained and fine-tuned with the character encoder (BERT). Therefore, ZEN incorporates the comprehensive information of both the character sequence and the words or phrases it contains. Experimental results illustrate the effectiveness of ZEN on a series of Chinese NLP tasks. We show that ZEN, using fewer resources than other published encoders, can achieve state-of-the-art performance on most tasks. Moreover, it is shown that reasonable performance can be obtained when ZEN is trained on a small corpus, which is important for applying pre-training techniques to scenarios with limited data. The code and pre-trained models of ZEN are available at https://github.com/sinovation/zen.", "field": [], "task": ["Chinese Named Entity Recognition", "Chinese Word Segmentation", "Document Classification", "Natural Language Inference", "Part-Of-Speech Tagging", "Sentence Pair Modeling", "Sentiment Analysis"], "method": [], "dataset": ["MSR", "MSRA"], "metric": ["F1"], "title": "ZEN: Pre-training Chinese Text Encoder Enhanced by N-gram Representations"} {"abstract": "Although various methods have been proposed to make progress in pedestrian attribute recognition, a crucial problem on existing datasets is often neglected, namely, a large number of identical pedestrian identities in the train and test sets, which is not consistent with practical applications. Thus, images of the same pedestrian identity in the train set and test set are extremely similar, leading to overestimated performance of state-of-the-art methods on existing datasets. To address this problem, we propose two realistic datasets PETA\\textsubscript{$zs$} and RAPv2\\textsubscript{$zs$} following a zero-shot setting of pedestrian identities based on the PETA and RAPv2 datasets. Furthermore, compared to our strong baseline method, we have observed that recent state-of-the-art methods cannot improve performance on PETA, RAPv2, PETA\\textsubscript{$zs$} and RAPv2\\textsubscript{$zs$}. Thus, through solving the inherent attribute imbalance in pedestrian attribute recognition, an efficient method is proposed to further improve the performance.
Experiments on existing and proposed datasets verify the superiority of our method by achieving state-of-the-art performance.", "field": [], "task": ["Pedestrian Attribute Recognition"], "method": [], "dataset": ["PA-100K"], "metric": ["Accuracy"], "title": "Rethinking of Pedestrian Attribute Recognition: Realistic Datasets with Efficient Method"} {"abstract": "Unsupervised learning has always been appealing to machine learning researchers and practitioners, allowing them to avoid an expensive and complicated process of labeling the data. However, unsupervised learning of complex data is challenging, and even the best approaches show much weaker performance than their supervised counterparts. Self-supervised deep learning has become a strong instrument for representation learning in computer vision. However, those methods have not been evaluated in a fully unsupervised setting. In this paper, we propose a simple scheme for unsupervised classification based on self-supervised representations. We evaluate the proposed approach with several recent self-supervised methods showing that it achieves competitive results for ImageNet classification (39% accuracy on ImageNet with 1000 clusters and 46% with overclustering). We suggest adding the unsupervised evaluation to a set of standard benchmarks for self-supervised learning. The code is available at https://github.com/Randl/kmeans_selfsuper", "field": [], "task": ["Image Clustering", "Representation Learning", "Self-Supervised Learning", "Unsupervised Image Classification"], "method": [], "dataset": ["ImageNet"], "metric": ["Accuracy (%)", "ARI"], "title": "Self-Supervised Learning for Large-Scale Unsupervised Image Clustering"} {"abstract": "We study unsupervised video representation learning that seeks to learn both motion and appearance features from unlabeled video only, which can be reused for downstream tasks such as action recognition. This task, however, is extremely challenging due to 1) the highly complex spatial-temporal information in videos; and 2) the lack of labeled data for training. Unlike the representation learning for static images, it is difficult to construct a suitable self-supervised task to well model both motion and appearance features. More recently, several attempts have been made to learn video representation through video playback speed prediction. However, it is non-trivial to obtain precise speed labels for the videos. More critically, the learnt models may tend to focus on motion pattern and thus may not learn appearance features well. In this paper, we observe that the relative playback speed is more consistent with motion pattern, and thus provide more effective and stable supervision for representation learning. Therefore, we propose a new way to perceive the playback speed and exploit the relative speed between two video clips as labels. In this way, we are able to well perceive speed and learn better motion features. Moreover, to ensure the learning of appearance features, we further propose an appearance-focused task, where we enforce the model to perceive the appearance difference between two video clips. We show that optimizing the two tasks jointly consistently improves the performance on two downstream tasks, namely action recognition and video retrieval. Remarkably, for action recognition on UCF101 dataset, we achieve 93.7% accuracy without the use of labeled data for pre-training, which outperforms the ImageNet supervised pre-trained model. 
Code and pre-trained models can be found at https://github.com/PeihaoChen/RSPNet.", "field": [], "task": ["Action Recognition", "Representation Learning", "Self-Supervised Action Recognition", "Video Retrieval"], "method": [], "dataset": ["UCF101", "HMDB51"], "metric": ["3-fold Accuracy", "Pre-Training Dataset", "Top-1 Accuracy"], "title": "RSPNet: Relative Speed Perception for Unsupervised Video Representation Learning"} {"abstract": "Prevalent models based on artificial neural network (ANN) for sentence\nclassification often classify sentences in isolation without considering the\ncontext in which sentences appear. This hampers the traditional sentence\nclassification approaches to the problem of sequential sentence classification,\nwhere structured prediction is needed for better overall classification\nperformance. In this work, we present a hierarchical sequential labeling\nnetwork to make use of the contextual information within surrounding sentences\nto help classify the current sentence. Our model outperforms the\nstate-of-the-art results by 2%-3% on two benchmarking datasets for sequential\nsentence classification in medical scientific abstracts.", "field": [], "task": ["Sentence Classification", "Structured Prediction"], "method": [], "dataset": ["PubMed 20k RCT"], "metric": ["F1"], "title": "Hierarchical Neural Networks for Sequential Sentence Classification in Medical Scientific Abstracts"} {"abstract": "Identifying the veracity of a news article is an interesting problem while\nautomating this process can be a challenging task. Detection of a news article\nas fake is still an open question as it is contingent on many factors which the\ncurrent state-of-the-art models fail to incorporate. In this paper, we explore\na subtask to fake news identification, and that is stance detection. Given a\nnews article, the task is to determine the relevance of the body and its claim.\nWe present a novel idea that combines the neural, statistical and external\nfeatures to provide an efficient solution to this problem. We compute the\nneural embedding from the deep recurrent model, statistical features from the\nweighted n-gram bag-of-words model and handcrafted external features with the\nhelp of feature engineering heuristics. Finally, using deep neural layer all\nthe features are combined, thereby classifying the headline-body news pair as\nagree, disagree, discuss, or unrelated. We compare our proposed technique with\nthe current state-of-the-art models on the fake news challenge dataset. Through\nextensive experiments, we find that the proposed model outperforms all the\nstate-of-the-art techniques including the submissions to the fake news\nchallenge.", "field": [], "task": ["Fake News Detection", "Feature Engineering", "Stance Detection"], "method": [], "dataset": ["FNC-1"], "metric": ["Per-class Accuracy (Disagree)", "Weighted Accuracy", "Per-class Accuracy (Discuss)", "Per-class Accuracy (Unrelated)", "Per-class Accuracy (Agree)"], "title": "On the Benefit of Combining Neural, Statistical and External Features for Fake News Identification"} {"abstract": "Humans gather information by engaging in conversations involving a series of\ninterconnected questions and answers. For machines to assist in information\ngathering, it is therefore essential to enable them to answer conversational\nquestions. We introduce CoQA, a novel dataset for building Conversational\nQuestion Answering systems. 
Our dataset contains 127k questions with answers,\nobtained from 8k conversations about text passages from seven diverse domains.\nThe questions are conversational, and the answers are free-form text with their\ncorresponding evidence highlighted in the passage. We analyze CoQA in depth and\nshow that conversational questions have challenging phenomena not present in\nexisting reading comprehension datasets, e.g., coreference and pragmatic\nreasoning. We evaluate strong conversational and reading comprehension models\non CoQA. The best system obtains an F1 score of 65.4%, which is 23.4 points\nbehind human performance (88.8%), indicating there is ample room for\nimprovement. We launch CoQA as a challenge to the community at\nhttp://stanfordnlp.github.io/coqa/", "field": [], "task": ["Generative Question Answering", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["CoQA"], "metric": ["Overall", "Out-of-domain", "F1-Score", "In-domain"], "title": "CoQA: A Conversational Question Answering Challenge"} {"abstract": "Anomalies are ubiquitous in all scientific fields and can express an unexpected event due to incomplete knowledge about the data distribution or an unknown process that suddenly comes into play and distorts the observations. Due to such events' rarity, it is common to train deep learning models on \"normal\", i.e. non-anomalous, datasets only, thus letting the neural network model the distribution beneath the input data. In this context, we propose our deep learning approach to the anomaly detection problem named Multi-Layer One-Class Classification (MOCCA). We explicitly leverage the piece-wise nature of deep neural networks by exploiting information extracted at different depths to detect abnormal data instances. We show how combining the representations extracted from multiple layers of a model leads to higher discrimination performance than typical approaches proposed in the literature that are based on neural networks' final output only. We propose to train the model by minimizing the $L_2$ distance between the input representation and a reference point, the anomaly-free training data centroid, at each considered layer. We conduct extensive experiments on publicly available datasets for anomaly detection, namely CIFAR10, MVTec AD, and ShanghaiTech, considering both the single-image and video-based scenarios. We show that our method achieves superior performance compared to the state-of-the-art approaches available in the literature. Moreover, we provide a model analysis to give insight into how our approach works.", "field": [], "task": ["Anomaly Detection"], "method": [], "dataset": ["MVTec AD"], "metric": ["Overall AUC"], "title": "MOCCA: Multi-Layer One-Class Classification for Anomaly Detection"} {"abstract": "Iterative generative models, such as noise conditional score networks and denoising diffusion probabilistic models, produce high quality samples by gradually denoising an initial noise vector. However, their denoising process has many steps, making them 2-3 orders of magnitude slower than other generative models such as GANs and VAEs. In this paper, we establish a novel connection between knowledge distillation and image generation with a technique that distills a multi-step denoising process into a single step, resulting in a sampling speed similar to other single-step generative models. Our Denoising Student generates high quality samples comparable to GANs on the CIFAR-10 and CelebA datasets, without adversarial training.
We demonstrate that our method scales to higher resolutions through experiments on 256 x 256 LSUN. Code and checkpoints are available at https://github.com/tcl9876/Denoising_Student", "field": [], "task": ["Denoising", "Image Generation", "Knowledge Distillation"], "method": [], "dataset": ["CIFAR-10"], "metric": ["Inception score", "FID"], "title": "Knowledge Distillation in Iterative Generative Models for Improved Sampling Speed"} {"abstract": "Estimating the 3D position and orientation of objects in the environment with a single RGB camera is a critical and challenging task for low-cost urban autonomous driving and mobile robots. Most of the existing algorithms are based on the geometric constraints in 2D-3D correspondence, which stems from generic 6D object pose estimation. We first identify how the ground plane provides additional clues in depth reasoning in 3D detection in driving scenes. Based on this observation, we then improve the processing of 3D anchors and introduce a novel neural network module to fully utilize such application-specific priors in the framework of deep learning. Finally, we introduce an efficient neural network embedded with the proposed module for 3D object detection. We further verify the power of the proposed module with a neural network designed for monocular depth prediction. The two proposed networks achieve state-of-the-art performances on the KITTI 3D object detection and depth prediction benchmarks, respectively. The code will be published in https://www.github.com/Owen-Liuyuxuan/visualDet3D", "field": [], "task": ["3D Object Detection", "6D Pose Estimation using RGB", "Autonomous Driving", "Depth Estimation", "Monocular 3D Object Detection", "Object Detection", "Pose Estimation"], "method": [], "dataset": ["KITTI Cars Hard", "KITTI Cars Moderate"], "metric": ["AP Hard", "AP Medium"], "title": "Ground-aware Monocular 3D Object Detection for Autonomous Driving"} {"abstract": "Semi-supervised learning, i.e., training networks with both labeled and unlabeled data, has made significant progress recently. However, existing works have primarily focused on image classification tasks and neglected object detection which requires more annotation effort. In this work, we revisit the Semi-Supervised Object Detection (SS-OD) and identify the pseudo-labeling bias issue in SS-OD. To address this, we introduce Unbiased Teacher, a simple yet effective approach that jointly trains a student and a gradually progressing teacher in a mutually-beneficial manner. Together with a class-balance loss to downweight overly confident pseudo-labels, Unbiased Teacher consistently improved state-of-the-art methods by significant margins on COCO-standard, COCO-additional, and VOC datasets. Specifically, Unbiased Teacher achieves 6.8 absolute mAP improvements against state-of-the-art method when using 1% of labeled data on MS-COCO, achieves around 10 mAP improvements against the supervised baseline when using only 0.5, 1, 2% of labeled data on MS-COCO.", "field": [], "task": ["Image Classification", "Object Detection", "Semi-Supervised Object Detection"], "method": [], "dataset": ["COCO 1% labeled data"], "metric": ["mAP"], "title": "Unbiased Teacher for Semi-Supervised Object Detection"} {"abstract": "Datasets with significant proportions of noisy (incorrect) class labels\npresent challenges for training accurate Deep Neural Networks (DNNs). 
We\npropose a new perspective for understanding DNN generalization for such\ndatasets, by investigating the dimensionality of the deep representation\nsubspace of training samples. We show that from a dimensionality perspective,\nDNNs exhibit quite distinctive learning styles when trained with clean labels\nversus when trained with a proportion of noisy labels. Based on this finding,\nwe develop a new dimensionality-driven learning strategy, which monitors the\ndimensionality of subspaces during training and adapts the loss function\naccordingly. We empirically demonstrate that our approach is highly tolerant to\nsignificant proportions of noisy labels, and can effectively learn\nlow-dimensional local subspaces that capture the data distribution.", "field": [], "task": ["Learning with noisy labels"], "method": [], "dataset": ["mini WebVision 1.0"], "metric": ["Top-5 Accuracy", "ImageNet Top-1 Accuracy", "ImageNet Top-5 Accuracy", "Top-1 Accuracy"], "title": "Dimensionality-Driven Learning with Noisy Labels"} {"abstract": "In this paper, we introduce 'Coarse-Fine Networks', a two-stream architecture which benefits from different abstractions of temporal resolution to learn better video representations for long-term motion. Traditional Video models process inputs at one (or few) fixed temporal resolution without any dynamic frame selection. However, we argue that, processing multiple temporal resolutions of the input and doing so dynamically by learning to estimate the importance of each frame can largely improve video representations, specially in the domain of temporal activity localization. To this end, we propose (1) `Grid Pool', a learned temporal downsampling layer to extract coarse features, and, (2) `Multi-stage Fusion', a spatio-temporal attention mechanism to fuse a fine-grained context with the coarse features. We show that our method can outperform the state-of-the-arts for action detection in public datasets including Charades with a significantly reduced compute and memory footprint.", "field": [], "task": ["Action Detection", "Activity Detection"], "method": [], "dataset": ["Charades"], "metric": ["mAP"], "title": "Coarse-Fine Networks for Temporal Activity Detection in Videos"} {"abstract": "Recently, deep neural networks are widely applied in recommender systems for their effectiveness in capturing/modeling users' preferences. Especially, the attention mechanism in deep learning enables recommender systems to incorporate various features in an adaptive way. Specifically, as for the next item recommendation task, we have the following three observations: 1) users' sequential behavior records aggregate at time positions (\"time-aggregation\"), 2) users have personalized taste that is related to the \"time-aggregation\" phenomenon (\"personalized time-aggregation\"), and 3) users' short-term interests play an important role in the next item prediction/recommendation. In this paper, we propose a new Time-aware Long- and Short-term Attention Network (TLSAN) to address those observations mentioned above. Specifically, TLSAN consists of two main components. Firstly, TLSAN models \"personalized time-aggregation\" and learn user-specific temporal taste via trainable personalized time position embeddings with category-aware correlations in long-term behaviors. Secondly, long- and short-term feature-wise attention layers are proposed to effectively capture users' long- and short-term preferences for accurate recommendation. 
Especially, the attention mechanism enables TLSAN to utilize users' preferences in an adaptive way, and its usage in long- and short-term layers enhances TLSAN's ability of dealing with sparse interaction data. Extensive experiments are conducted on Amazon datasets from different fields (also with different size), and the results show that TLSAN outperforms state-of-the-art baselines in both capturing users' preferences and performing time-sensitive next-item recommendation.", "field": [], "task": ["Recommendation Systems"], "method": [], "dataset": ["Amazon Games", "Amazon Product Data", "Amazon Beauty"], "metric": ["AUC"], "title": "TLSAN: Time-aware Long- and Short-term Attention Network for Next-item Recommendation"} {"abstract": "Luminoso participated in the SemEval 2018 task on \"Capturing Discriminative\nAttributes\" with a system based on ConceptNet, an open knowledge graph focused\non general knowledge. In this paper, we describe how we trained a linear\nclassifier on a small number of semantically-informed features to achieve an\n$F_1$ score of 0.7368 on the task, close to the task's high score of 0.75.", "field": [], "task": ["Relation Extraction"], "method": [], "dataset": ["SemEval 2018 Task 10"], "metric": ["F1-Score"], "title": "Luminoso at SemEval-2018 Task 10: Distinguishing Attributes Using Text Corpora and Relational Knowledge"} {"abstract": "Graph Convolutional Networks (GCNs) have shown significant improvements in\nsemi-supervised learning on graph-structured data. Concurrently, unsupervised\nlearning of graph embeddings has benefited from the information contained in\nrandom walks. In this paper, we propose a model: Network of GCNs (N-GCN), which\nmarries these two lines of work. At its core, N-GCN trains multiple instances\nof GCNs over node pairs discovered at different distances in random walks, and\nlearns a combination of the instance outputs which optimizes the classification\nobjective. Our experiments show that our proposed N-GCN model improves\nstate-of-the-art baselines on all of the challenging node classification tasks\nwe consider: Cora, Citeseer, Pubmed, and PPI. In addition, our proposed method\nhas other desirable properties, including generalization to recently proposed\nsemi-supervised learning methods such as GraphSAGE, allowing us to propose\nN-SAGE, and resilience to adversarial input perturbations.", "field": [], "task": ["Node Classification"], "method": [], "dataset": ["Cora", "Pubmed", "Citeseer"], "metric": ["Accuracy"], "title": "N-GCN: Multi-scale Graph Convolution for Semi-supervised Node Classification"} {"abstract": "Recent studies have shown remarkable success in image-to-image translation\nfor two domains. However, existing approaches have limited scalability and\nrobustness in handling more than two domains, since different models should be\nbuilt independently for every pair of image domains. To address this\nlimitation, we propose StarGAN, a novel and scalable approach that can perform\nimage-to-image translations for multiple domains using only a single model.\nSuch a unified model architecture of StarGAN allows simultaneous training of\nmultiple datasets with different domains within a single network. This leads to\nStarGAN's superior quality of translated images compared to existing models as\nwell as the novel capability of flexibly translating an input image to any\ndesired target domain. 
We empirically demonstrate the effectiveness of our\napproach on facial attribute transfer and facial expression synthesis\ntasks.", "field": [], "task": ["Image-to-Image Translation"], "method": [], "dataset": ["RaFD"], "metric": ["Classification Error"], "title": "StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation"} {"abstract": "We present a neural network model - based on CNNs, RNNs and a novel attention\nmechanism - which achieves 84.2% accuracy on the challenging French Street Name\nSigns (FSNS) dataset, significantly outperforming the previous state of the art\n(Smith'16), which achieved 72.46%. Furthermore, our new method is much simpler\nand more general than the previous approach. To demonstrate the generality of\nour model, we show that it also performs well on an even more challenging\ndataset derived from Google Street View, in which the goal is to extract\nbusiness names from store fronts. Finally, we study the speed/accuracy tradeoff\nthat results from using CNN feature extractors of different depths.\nSurprisingly, we find that deeper is not always better (in terms of accuracy,\nas well as speed). Our resulting model is simple, accurate and fast, allowing\nit to be used at scale on a variety of challenging real-world text extraction\nproblems.", "field": [], "task": ["Optical Character Recognition"], "method": [], "dataset": ["FSNS - Test"], "metric": ["Sequence error"], "title": "Attention-based Extraction of Structured Information from Street View Imagery"} {"abstract": "Deep learning has achieved a remarkable performance breakthrough in several\nfields, most notably in speech recognition, natural language processing, and\ncomputer vision. In particular, convolutional neural network (CNN)\narchitectures currently produce state-of-the-art performance on a variety of\nimage analysis tasks such as object detection and recognition. Most deep\nlearning research has so far focused on dealing with 1D, 2D, or 3D\nEuclidean-structured data such as acoustic signals, images, or videos.\nRecently, there has been an increasing interest in geometric deep learning,\nattempting to generalize deep learning methods to non-Euclidean structured data\nsuch as graphs and manifolds, with a variety of applications from the domains\nof network analysis, computational social science, or computer graphics. In\nthis paper, we propose a unified framework allowing us to generalize CNN\narchitectures to non-Euclidean domains (graphs and manifolds) and learn local,\nstationary, and compositional task-specific features. We show that various\nnon-Euclidean CNN methods previously proposed in the literature can be\nconsidered as particular instances of our framework. We test the proposed\nmethod on standard tasks from the realms of image-, graph- and 3D shape\nanalysis and show that it consistently outperforms previous approaches.", "field": [], "task": ["Document Classification", "Graph Classification", "Graph Regression", "Node Classification", "Object Detection", "Speech Recognition", "Superpixel Image Classification"], "method": [], "dataset": ["Cora", "ZINC-500k", "75 Superpixel MNIST", "CIFAR10 100k", "PATTERN 100k", "ZINC 100k"], "metric": ["MAE", "Accuracy (%)", "Classification Error", "Accuracy"], "title": "Geometric deep learning on graphs and manifolds using mixture model CNNs"} {"abstract": "Understanding unstructured text is a major goal within natural language\nprocessing.
Comprehension tests pose questions based on short text passages to\nevaluate such understanding. In this work, we investigate machine comprehension\non the challenging {\\it MCTest} benchmark. Partly because of its limited size,\nprior work on {\\it MCTest} has focused mainly on engineering better features.\nWe tackle the dataset with a neural approach, harnessing simple neural networks\narranged in a parallel hierarchy. The parallel hierarchy enables our model to\ncompare the passage, question, and answer from a variety of trainable\nperspectives, as opposed to using a manually designed, rigid feature set.\nPerspectives range from the word level to sentence fragments to sequences of\nsentences; the networks operate only on word-embedding representations of text.\nWhen trained with a methodology designed to help cope with limited training\ndata, our Parallel-Hierarchical model sets a new state of the art for {\\it\nMCTest}, outperforming previous feature-engineered approaches slightly and\nprevious neural approaches by a significant margin (over 15\\% absolute).", "field": [], "task": ["Question Answering", "Reading Comprehension"], "method": [], "dataset": ["MCTest-500", "MCTest-160"], "metric": ["Accuracy"], "title": "A Parallel-Hierarchical Model for Machine Comprehension on Sparse Data"} {"abstract": "Most activity localization methods in the literature suffer from the burden\nof requiring frame-wise annotations. Learning from weak labels may be a\npotential solution towards reducing such manual labeling effort. Recent years\nhave witnessed a substantial influx of tagged videos on the Internet, which can\nserve as a rich source of weakly-supervised training data. Specifically, the\ncorrelations between videos with similar tags can be utilized to temporally\nlocalize the activities. Towards this goal, we present W-TALC, a\nWeakly-supervised Temporal Activity Localization and Classification framework\nusing only video-level labels. The proposed network can be divided into two\nsub-networks, namely the Two-Stream based feature extractor network and a\nweakly-supervised module, which we learn by optimizing two complementary loss\nfunctions. Qualitative and quantitative results on two challenging datasets -\nThumos14 and ActivityNet1.2, demonstrate that the proposed method is able to\ndetect activities at a fine granularity and achieve better performance than\ncurrent state-of-the-art methods.", "field": [], "task": ["Weakly Supervised Action Localization"], "method": [], "dataset": ["ActivityNet-1.2", "THUMOS 2014", "THUMOS\u201914"], "metric": ["mAP", "mAP@0.1:0.7", "mAP@0.5"], "title": "W-TALC: Weakly-supervised Temporal Activity Localization and Classification"} {"abstract": "Person Re-Identification is still a challenging task in Computer Vision due to a variety of reasons. On the other hand, Incremental Learning is still an issue, since deep learning models tend to face the problem of catastrophic forgetting when trained on subsequent tasks. In this paper, we propose a model that can be used for multiple tasks in Person Re-Identification, provide state-of-the-art results on a variety of tasks and still achieve considerable accuracy subsequently. We evaluated our model on two datasets, Market 1501 and Duke MTMC.
Extensive experiments show that this method can achieve Incremental Learning efficiently in Person ReID as well as in other computer vision tasks.", "field": [], "task": ["Incremental Learning", "Person Re-Identification"], "method": [], "dataset": ["DukeMTMC-reID", "Market-1501"], "metric": ["Rank-1", "MAP"], "title": "Incremental Learning in Person Re-Identification"} {"abstract": "When answering a question, people often draw upon their rich world knowledge\nin addition to the particular context. Recent work has focused primarily on\nanswering questions given some relevant document or context, and required very\nlittle general background. To investigate question answering with prior\nknowledge, we present CommonsenseQA: a challenging new dataset for commonsense\nquestion answering. To capture common sense beyond associations, we extract\nfrom ConceptNet (Speer et al., 2017) multiple target concepts that have the\nsame semantic relation to a single source concept. Crowd-workers are asked to\nauthor multiple-choice questions that mention the source concept and\ndiscriminate in turn between each of the target concepts. This encourages\nworkers to create questions with complex semantics that often require prior\nknowledge. We create 12,247 questions through this procedure and demonstrate\nthe difficulty of our task with a large number of strong baselines. Our best\nbaseline is based on BERT-large (Devlin et al., 2018) and obtains 56% accuracy,\nwell below human performance, which is 89%.", "field": [], "task": ["Common Sense Reasoning", "Question Answering"], "method": [], "dataset": ["CommonsenseQA"], "metric": ["Accuracy"], "title": "CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge"} {"abstract": "We present a novel end-to-end framework named GSNet (Geometric and Scene-aware Network), which jointly estimates 6DoF poses and reconstructs detailed 3D car shapes from a single urban street view. GSNet utilizes a unique four-way feature extraction and fusion scheme and directly regresses 6DoF poses and shapes in a single forward pass. Extensive experiments show that our diverse feature extraction and fusion scheme can greatly improve model performance. Based on a divide-and-conquer 3D shape representation strategy, GSNet reconstructs 3D vehicle shape with great detail (1352 vertices and 2700 faces). This dense mesh representation further leads us to consider geometrical consistency and scene context, and inspires a new multi-objective loss function to regularize network training, which in turn improves the accuracy of 6D pose estimation and validates the merit of jointly performing both tasks. We evaluate GSNet on the largest multi-task ApolloCar3D benchmark and achieve state-of-the-art performance both quantitatively and qualitatively. Project page is available at https://lkeab.github.io/gsnet/.", "field": [], "task": ["3D Reconstruction", "3D Shape Representation", "6D Pose Estimation", "Autonomous Driving", "Pose Estimation", "Self-Driving Cars", "Vehicle Pose Estimation"], "method": [], "dataset": ["ApolloCar3D"], "metric": ["A3DP"], "title": "GSNet: Joint Vehicle Pose and Shape Reconstruction with Geometrical and Scene-aware Supervision"} {"abstract": "Haze and smog are among the most common environmental factors impacting image quality and, therefore, image analysis. This paper proposes an end-to-end generative method for image dehazing.
It is based on designing a fully convolutional neural network to recognize haze structures in input images and restore clear, haze-free images. The proposed method is agnostic in the sense that it does not explore the atmosphere scattering model. Somewhat surprisingly, it achieves superior performance relative to all existing state-of-the-art methods for image dehazing even on SOTS outdoor images, which are synthesized using the atmosphere scattering model. Project details and code can be found here: https://github.com/Seanforfun/GMAN_Net_Haze_Removal", "field": [], "task": ["Image Dehazing", "Single Image Dehazing"], "method": [], "dataset": ["SOTS Indoor", "SOTS Outdoor"], "metric": ["SSIM", "PSNR"], "title": "Generic Model-Agnostic Convolutional Neural Network for Single Image Dehazing"} {"abstract": "The latency in current neural-based dialogue state tracking models\nprohibits them from being used efficiently for deployment in production\nsystems, despite their highly accurate performance. This paper proposes a new\nscalable and accurate neural dialogue state tracking model, based on the\nrecently proposed Global-Local Self-Attention encoder (GLAD) model by Zhong et\nal., which uses global modules to share parameters between estimators for\ndifferent types (called slots) of dialogue states, and uses local modules to\nlearn slot-specific features. By using only one recurrent network with global\nconditioning, compared to (1 + \\# slots) recurrent networks with global and\nlocal conditioning used in the GLAD model, our proposed model reduces the\nlatency in training and inference times by $35\\%$ on average, while preserving\nthe performance of belief state tracking, at $97.38\\%$ on turn request and\n$88.51\\%$ on joint goal accuracy. Evaluation on the multi-domain dataset\n(Multi-WoZ) also demonstrates that our model outperforms GLAD on turn inform\nand joint goal accuracy.", "field": [], "task": ["Dialogue State Tracking", "Multi-domain Dialogue State Tracking"], "method": [], "dataset": ["Wizard-of-Oz"], "metric": ["Request", "Joint"], "title": "Toward Scalable Neural Dialogue State Tracking Model"} {"abstract": "This paper proposes a low-complexity word-level deep convolutional neural network (CNN) architecture for text categorization that can efficiently represent long-range associations in text. In the literature, several deep and complex neural networks have been proposed for this task, assuming availability of relatively large amounts of training data. However, the associated computational complexity increases as the networks go deeper, which poses serious challenges in practical applications. Moreover, it was shown recently that shallow word-level CNNs are more accurate and much faster than the state-of-the-art very deep nets such as character-level CNNs even in the setting of large training data. Motivated by these findings, we carefully studied the deepening of word-level CNNs to capture global representations of text, and found a simple network architecture with which the best accuracy can be obtained by increasing the network depth without increasing computational cost by much. We call it deep pyramid CNN.
The proposed model with 15 weight layers outperforms the previous best models on six benchmark datasets for sentiment classification and topic categorization.", "field": [], "task": ["Sentiment Analysis", "Text Classification"], "method": [], "dataset": ["Yelp Fine-grained classification", "Amazon Review Polarity", "Yelp Binary classification", "DBpedia", "Amazon Review Full", "AG News"], "metric": ["Error", "Accuracy"], "title": "Deep Pyramid Convolutional Neural Networks for Text Categorization"} {"abstract": "We propose a novel framework based on neural networks to identify the sentiment of opinion targets in a comment/review. Our framework adopts a multiple-attention mechanism to capture sentiment features separated by a long distance, so that it is more robust against irrelevant information. The results of multiple attentions are non-linearly combined with a recurrent neural network, which strengthens the expressive power of our model for handling more complications. The weighted-memory mechanism not only helps us avoid the labor-intensive feature engineering work, but also provides a tailor-made memory for different opinion targets of a sentence. We examine the merit of our model on four datasets: two are from SemEval2014, i.e. reviews of restaurants and laptops; a twitter dataset, for testing its performance on social media data; and a Chinese news comment dataset, for testing its language sensitivity. The experimental results show that our model consistently outperforms the state-of-the-art methods on different types of data.", "field": [], "task": ["Aspect-Based Sentiment Analysis", "Feature Engineering", "Sentiment Analysis"], "method": [], "dataset": ["SemEval 2014 Task 4 Sub Task 2"], "metric": ["Laptop (Acc)", "Restaurant (Acc)", "Mean Acc (Restaurant + Laptop)"], "title": "Recurrent Attention Network on Memory for Aspect Sentiment Analysis"} {"abstract": "Named entity recognition (NER) is highly sensitive to sentential syntactic and semantic properties where entities may be extracted according to how they are used and placed in the running text. To model such properties, one could rely on existing resources to provide helpful knowledge to the NER task; some existing studies proved the effectiveness of doing so, and yet are limited in appropriately leveraging the knowledge such as distinguishing the important ones for a particular context. In this paper, we improve NER by leveraging different types of syntactic information through an attentive ensemble, which is realized by the proposed key-value memory networks, syntax attention, and the gate mechanism for encoding, weighting and aggregating such syntactic information, respectively. Experimental results on six English and Chinese benchmark datasets suggest the effectiveness of the proposed model and show that it outperforms previous studies on all experimental datasets.", "field": [], "task": ["Chinese Named Entity Recognition", "Named Entity Recognition"], "method": [], "dataset": ["Resume NER", "OntoNotes 4", "Weibo NER"], "metric": ["F1"], "title": "Improving Named Entity Recognition with Attentive Ensemble of Syntactic Information"} {"abstract": "In this paper, we propose a novel encoder-decoder network, called \textit{Scale Aggregation Network (SANet)}, for accurate and efficient crowd counting. The encoder extracts multi-scale features with scale aggregation modules and the decoder generates high-resolution density maps by using a set of transposed convolutions.
Moreover, we find that most existing works use only Euclidean loss, which assumes independence among pixels but ignores the local correlation in density maps. Therefore, we propose a novel training loss, combining Euclidean loss and local pattern consistency loss, which improves the performance of the model in our experiments. In addition, we use normalization layers to ease the training process and apply a patch-based test scheme to reduce the impact of the statistic shift problem. To demonstrate the effectiveness of the proposed method, we conduct extensive experiments on four major crowd counting datasets and our method achieves superior performance to state-of-the-art methods with much fewer parameters.", "field": [], "task": ["Crowd Counting"], "method": [], "dataset": ["UCF CC 50", "ShanghaiTech A", "WorldExpo\u201910", "ShanghaiTech B"], "metric": ["MAE", "Average MAE"], "title": "Scale Aggregation Network for Accurate and Efficient Crowd Counting"} {"abstract": "In this work, we consider the problem of robust gaze estimation in natural environments. Large camera-to-subject distances and high variations in head pose and eye gaze angles are common in such environments. This leads to two main shortfalls in state-of-the-art methods for gaze estimation: hindered ground truth gaze annotation and diminished gaze estimation accuracy as image resolution decreases with distance. We first record a novel dataset of varied gaze and head pose images in a natural environment, addressing the issue of ground truth annotation by measuring head pose using a motion capture system and eye gaze using mobile eyetracking glasses. We apply semantic image inpainting to the area covered by the glasses to bridge the gap between training and testing images by removing the obtrusiveness of the glasses. We also present a new real-time algorithm involving appearance-based deep convolutional neural networks with increased capacity to cope with the diverse images in the new dataset. Experiments with this network architecture are conducted on a number of diverse eye-gaze datasets including our own, and in cross dataset evaluations. We demonstrate state-of-the-art performance in terms of estimation accuracy in all experiments, and the architecture performs well even on lower resolution images.", "field": [], "task": ["Gaze Estimation", "Image Inpainting", "Motion Capture"], "method": [], "dataset": ["MPII Gaze", "RT-GENE", "UT Multi-view"], "metric": ["Angular Error"], "title": "RT-GENE: Real-Time Eye Gaze Estimation in Natural Environments"} {"abstract": "Recently, we have seen a rapid development of Deep Neural Network (DNN) based\nvisual tracking solutions. Some trackers combine the DNN-based solutions with\nDiscriminative Correlation Filters (DCF) to extract semantic features and\nsuccessfully deliver the state-of-the-art tracking accuracy. However, these\nsolutions are highly compute-intensive, which require long processing times,\nresulting in unreliable real-time performance. To deliver both high accuracy and\nreliable real-time performance, we propose a novel tracker called SiamVGG.
It\ncombines a Convolutional Neural Network (CNN) backbone and a cross-correlation\noperator, and takes advantage of the features from exemplary images for more\naccurate object tracking.\n The architecture of SiamVGG is customized from VGG-16, with the parameters\nshared by both exemplary images and desired input video frames.\n We demonstrate the proposed SiamVGG on OTB-2013/50/100 and VOT 2015/2016/2017\ndatasets with the state-of-the-art accuracy while maintaining a decent\nreal-time performance of 50 FPS running on a GTX 1080Ti. Our design can achieve\n2% higher Expected Average Overlap (EAO) compared to the ECO and C-COT in\nVOT2017 Challenge.", "field": [], "task": ["Object Tracking", "Visual Object Tracking", "Visual Tracking"], "method": [], "dataset": ["OTB-2015", "VOT2016", "OTB-50", "OTB-2013", "VOT2017"], "metric": ["Expected Average Overlap (EAO)", "AUC"], "title": "SiamVGG: Visual Tracking using Deeper Siamese Networks"} {"abstract": "Learning how objects sound from video is challenging, since they often heavily overlap in a single audio channel. Current methods for visually-guided audio source separation sidestep the issue by training with artificially mixed video clips, but this puts unwieldy restrictions on training data collection and may even prevent learning the properties of \"true\" mixed sounds. We introduce a co-separation training paradigm that permits learning object-level sounds from unlabeled multi-source videos. Our novel training objective requires that the deep neural network's separated audio for similar-looking objects be consistently identifiable, while simultaneously reproducing accurate video-level audio tracks for each source training pair. Our approach disentangles sounds in realistic test videos, even in cases where an object was not observed individually during training. We obtain state-of-the-art results on visually-guided audio source separation and audio denoising for the MUSIC, AudioSet, and AV-Bench datasets.", "field": [], "task": ["Audio Denoising", "Audio Source Separation", "Denoising"], "method": [], "dataset": ["AV-Bench - Violin Yanni", "AV-Bench - Wooden Horse", "AV-Bench - Guitar Solo"], "metric": ["NSDR"], "title": "Co-Separating Sounds of Visual Objects"} {"abstract": "Flow-based generative models are powerful exact likelihood models with efficient sampling and inference. Despite their computational efficiency, flow-based models generally have much worse density modeling performance compared to state-of-the-art autoregressive models. In this paper, we investigate and improve upon three limiting design choices employed by flow-based models in prior work: the use of uniform noise for dequantization, the use of inexpressive affine flows, and the use of purely convolutional conditioning networks in coupling layers. Based on our findings, we propose Flow++, a new flow-based model that is now the state-of-the-art non-autoregressive model for unconditional density estimation on standard image benchmarks. Our work has begun to close the significant performance gap that has so far existed between autoregressive models and flow-based models. 
Our implementation is available at https://github.com/aravindsrinivas/flowpp", "field": [], "task": ["Density Estimation", "Image Generation"], "method": [], "dataset": ["ImageNet 64x64", "ImageNet 32x32", "CIFAR-10"], "metric": ["bits/dimension", "bpd", "Bits per dim"], "title": "Flow++: Improving Flow-Based Generative Models with Variational Dequantization and Architecture Design"} {"abstract": "We propose an end-to-end deep learning learning model for graph classification and representation learning that is invariant to permutation of the nodes of the input graphs. We address the challenge of learning a fixed size graph representation for graphs of varying dimensions through a differentiable node attention pooling mechanism. In addition to a theoretical proof of its invariance to permutation, we provide empirical evidence demonstrating the statistically significant gain in accuracy when faced with an isomorphic graph classification task given only a small number of training examples. We analyse the effect of four different matrices to facilitate the local message passing mechanism by which graph convolutions are performed vs. a matrix parametrised by a learned parameter pair able to transition smoothly between the former. Finally, we show that our model achieves competitive classification performance with existing techniques on a set of molecule datasets.", "field": [], "task": ["Graph Classification", "Representation Learning"], "method": [], "dataset": ["PROTEINS"], "metric": ["Accuracy"], "title": "PiNet: A Permutation Invariant Graph Neural Network for Graph Classification"} {"abstract": "We propose an inductive matrix completion model without using side information. By factorizing the (rating) matrix into the product of low-dimensional latent embeddings of rows (users) and columns (items), a majority of existing matrix completion methods are transductive, since the learned embeddings cannot generalize to unseen rows/columns or to new matrices. To make matrix completion inductive, most previous works use content (side information), such as user's age or movie's genre, to make predictions. However, high-quality content is not always available, and can be hard to extract. Under the extreme setting where not any side information is available other than the matrix to complete, can we still learn an inductive matrix completion model? In this paper, we propose an Inductive Graph-based Matrix Completion (IGMC) model to address this problem. IGMC trains a graph neural network (GNN) based purely on 1-hop subgraphs around (user, item) pairs generated from the rating matrix and maps these subgraphs to their corresponding ratings. It achieves highly competitive performance with state-of-the-art transductive baselines. In addition, IGMC is inductive -- it can generalize to users/items unseen during the training (given that their interactions exist), and can even transfer to new tasks. Our transfer learning experiments show that a model trained out of the MovieLens dataset can be directly used to predict Douban movie ratings with surprisingly good performance. 
Our work demonstrates that: 1) it is possible to train inductive matrix completion models without using side information while achieving similar or better performances than state-of-the-art transductive methods; 2) local graph patterns around a (user, item) pair are effective predictors of the rating this user gives to the item; and 3) Long-range dependencies might not be necessary for modeling recommender systems.", "field": [], "task": ["Matrix Completion", "Recommendation Systems", "Transfer Learning"], "method": [], "dataset": ["MovieLens 1M", "Flixster Monti", "Douban Monti", "YahooMusic Monti", "MovieLens 100K"], "metric": ["RMSE (u1 Splits)", "RMSE"], "title": "Inductive Matrix Completion Based on Graph Neural Networks"} {"abstract": "We introduce a detection framework for dense crowd counting and eliminate the need for the prevalent density regression paradigm. Typical counting models predict crowd density for an image as opposed to detecting every person. These regression methods, in general, fail to localize persons accurate enough for most applications other than counting. Hence, we adopt an architecture that locates every person in the crowd, sizes the spotted heads with bounding box and then counts them. Compared to normal object or face detectors, there exist certain unique challenges in designing such a detection system. Some of them are direct consequences of the huge diversity in dense crowds along with the need to predict boxes contiguously. We solve these issues and develop our LSC-CNN model, which can reliably detect heads of people across sparse to dense crowds. LSC-CNN employs a multi-column architecture with top-down feedback processing to better resolve persons and produce refined predictions at multiple resolutions. Interestingly, the proposed training regime requires only point head annotation, but can estimate approximate size information of heads. We show that LSC-CNN not only has superior localization than existing density regressors, but outperforms in counting as well. The code for our approach is available at https://github.com/val-iisc/lsc-cnn.", "field": [], "task": ["Crowd Counting", "Regression"], "method": [], "dataset": ["UCF CC 50", "ShanghaiTech A", "ShanghaiTech B"], "metric": ["MAE"], "title": "Locate, Size and Count: Accurately Resolving People in Dense Crowds via Detection"} {"abstract": "Continual lifelong learning is essential to many applications. In this paper, we propose a simple but effective approach to continual deep learning. Our approach leverages the principles of deep model compression, critical weights selection, and progressive networks expansion. By enforcing their integration in an iterative manner, we introduce an incremental learning method that is scalable to the number of sequential tasks in a continual learning process. Our approach is easy to implement and owns several favorable characteristics. First, it can avoid forgetting (i.e., learn new tasks while remembering all previous tasks). Second, it allows model expansion but can maintain the model compactness when handling sequential tasks. Besides, through our compaction and selection/expansion mechanism, we show that the knowledge accumulated through learning previous tasks is helpful to build a better model for the new tasks compared to training the models independently with tasks. 
Experimental results show that our approach can incrementally learn a deep model tackling multiple tasks without forgetting, while the model compactness is maintained and the performance is more satisfactory than that of individual task training.", "field": [], "task": ["Age And Gender Classification", "Continual Learning", "Face Verification", "Facial Expression Recognition", "Incremental Learning", "Model Compression"], "method": [], "dataset": ["Stanford Cars (Fine-grained 6 Tasks)", "Sketch (Fine-grained 6 Tasks)", "Adience Gender", "Wikiart (Fine-grained 6 Tasks)", "Flowers (Fine-grained 6 Tasks)", "ImageNet (Fine-grained 6 Tasks)", "CUBS (Fine-grained 6 Tasks)", "Adience Age", "Cifar100 (20 tasks)", "Labeled Faces in the Wild", "AffectNet"], "metric": ["Accuracy (8 emotion)", "Accuracy (7 emotion)", "Accuracy (5-fold)", "Accuracy", "Average Accuracy"], "title": "Compacting, Picking and Growing for Unforgetting Continual Learning"} {"abstract": "Per-pixel ground-truth depth data is challenging to acquire at scale. To overcome this limitation, self-supervised learning has emerged as a promising alternative for training models to perform monocular depth estimation. In this paper, we propose a set of improvements, which together result in both quantitatively and qualitatively improved depth maps compared to competing self-supervised methods. Research on self-supervised monocular training usually explores increasingly complex architectures, loss functions, and image formation models, all of which have recently helped to close the gap with fully-supervised methods. We show that a surprisingly simple model, and associated design choices, lead to superior predictions. In particular, we propose (i) a minimum reprojection loss, designed to robustly handle occlusions, (ii) a full-resolution multi-scale sampling method that reduces visual artifacts, and (iii) an auto-masking loss to ignore training pixels that violate camera motion assumptions. We demonstrate the effectiveness of each component in isolation, and show high quality, state-of-the-art results on the KITTI benchmark.", "field": [], "task": ["Depth Estimation", "Image Reconstruction", "Motion Estimation", "Scene Understanding", "Self-Supervised Learning"], "method": [], "dataset": ["KITTI Eigen split"], "metric": ["absolute relative error"], "title": "Digging Into Self-Supervised Monocular Depth Estimation"} {"abstract": "Generating realistic images of complex visual scenes becomes challenging when one wishes to control the structure of the generated images. Previous approaches showed that scenes with few entities can be controlled using scene graphs, but this approach struggles as the complexity of the graph (the number of objects and edges) increases. In this work, we show that one limitation of current methods is their inability to capture semantic equivalence in graphs. We present a novel model that addresses these issues by learning canonical graph representations from the data, resulting in improved image generation for complex visual scenes. Our model demonstrates improved empirical performance on large scene graphs, robustness to noise in the input scene graph, and generalization on semantically equivalent graphs.
Finally, we show improved performance of the model on three different benchmarks: Visual Genome, COCO, and CLEVR.", "field": [], "task": ["Image Generation", "Layout-to-Image Generation", "Scene Generation"], "method": [], "dataset": ["Visual Genome 256x256", "COCO-Stuff 256x256"], "metric": ["Inception Score", "FID", "LPIPS"], "title": "Learning Canonical Representations for Scene Graph to Image Generation"} {"abstract": "Commonsense reasoning is a critical AI capability, but it is difficult to construct challenging datasets that test common sense. Recent neural question answering systems, based on large pre-trained models of language, have already achieved near-human-level performance on commonsense knowledge benchmarks. These systems do not possess human-level common sense, but are able to exploit limitations of the datasets to achieve human-level scores. We introduce the CODAH dataset, an adversarially-constructed evaluation dataset for testing common sense. CODAH forms a challenging extension to the recently-proposed SWAG dataset, which tests commonsense knowledge using sentence-completion questions that describe situations observed in video. To produce a more difficult dataset, we introduce a novel procedure for question acquisition in which workers author questions designed to target weaknesses of state-of-the-art neural question answering systems. Workers are rewarded for submissions that models fail to answer correctly both before and after fine-tuning (in cross-validation). We create 2.8k questions via this procedure and evaluate the performance of multiple state-of-the-art question answering systems on our dataset. We observe a significant gap between human performance, which is 95.3%, and the performance of the best baseline accuracy of 67.5% by the BERT-Large model.", "field": [], "task": ["Common Sense Reasoning", "Question Answering", "Sentence Completion"], "method": [], "dataset": ["CODAH"], "metric": ["Accuracy"], "title": "CODAH: An Adversarially Authored Question-Answer Dataset for Common Sense"} {"abstract": "Weakly supervised object detection (WSOD) is a challenging task when provided\nwith image category supervision but required to simultaneously learn object\nlocations and object detectors. Many WSOD approaches adopt multiple instance\nlearning (MIL) and have non-convex loss functions which are prone to get stuck\ninto local minima (falsely localize object parts) while missing full object\nextent during training. In this paper, we introduce a continuation optimization\nmethod into MIL and thereby creating continuation multiple instance learning\n(C-MIL), with the intention of alleviating the non-convexity problem in a\nsystematic way. We partition instances into spatially related and class related\nsubsets, and approximate the original loss function with a series of smoothed\nloss functions defined within the subsets. Optimizing smoothed loss functions\nprevents the training procedure falling prematurely into local minima and\nfacilitates the discovery of Stable Semantic Extremal Regions (SSERs) which\nindicate full object extent. 
On the PASCAL VOC 2007 and 2012 datasets, C-MIL\nimproves the state-of-the-art of weakly supervised object detection and weakly\nsupervised object localization with large margins.", "field": [], "task": ["Multiple Instance Learning", "Object Detection", "Object Localization", "Weakly Supervised Object Detection", "Weakly-Supervised Object Localization"], "method": [], "dataset": ["PASCAL VOC 2007", "PASCAL VOC 2012 test"], "metric": ["MAP"], "title": "C-MIL: Continuation Multiple Instance Learning for Weakly Supervised Object Detection"} {"abstract": "We present SpecAugment, a simple data augmentation method for speech recognition. SpecAugment is applied directly to the feature inputs of a neural network (i.e., filter bank coefficients). The augmentation policy consists of warping the features, masking blocks of frequency channels, and masking blocks of time steps. We apply SpecAugment on Listen, Attend and Spell networks for end-to-end speech recognition tasks. We achieve state-of-the-art performance on the LibriSpeech 960h and Swichboard 300h tasks, outperforming all prior work. On LibriSpeech, we achieve 6.8% WER on test-other without the use of a language model, and 5.8% WER with shallow fusion with a language model. This compares to the previous state-of-the-art hybrid system of 7.5% WER. For Switchboard, we achieve 7.2%/14.6% on the Switchboard/CallHome portion of the Hub5'00 test set without the use of a language model, and 6.8%/14.1% with shallow fusion, which compares to the previous state-of-the-art hybrid system at 8.3%/17.3% WER.", "field": [], "task": ["Data Augmentation", "End-To-End Speech Recognition", "Language Modelling", "Speech Recognition"], "method": [], "dataset": ["LibriSpeech test-other", "LibriSpeech test-clean", "Hub5'00 SwitchBoard"], "metric": ["CallHome", "SwitchBoard", "Word Error Rate (WER)"], "title": "SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition"} {"abstract": "Current 3D object detection methods are heavily influenced by 2D detectors. In order to leverage architectures in 2D detectors, they often convert 3D point clouds to regular grids (i.e., to voxel grids or to bird's eye view images), or rely on detection in 2D images to propose 3D boxes. Few works have attempted to directly detect objects in point clouds. In this work, we return to first principles to construct a 3D detection pipeline for point cloud data and as generic as possible. However, due to the sparse nature of the data -- samples from 2D manifolds in 3D space -- we face a major challenge when directly predicting bounding box parameters from scene points: a 3D object centroid can be far from any surface point thus hard to regress accurately in one step. To address the challenge, we propose VoteNet, an end-to-end 3D object detection network based on a synergy of deep point set networks and Hough voting. Our model achieves state-of-the-art 3D detection on two large datasets of real 3D scans, ScanNet and SUN RGB-D with a simple design, compact model size and high efficiency. 
Remarkably, VoteNet outperforms previous methods by using purely geometric information without relying on color images.", "field": [], "task": ["3D Object Detection", "Object Detection"], "method": [], "dataset": ["ScanNetV2", "SUN-RGBD val"], "metric": ["mAP@0.5", "mAP@0.25", "MAP"], "title": "Deep Hough Voting for 3D Object Detection in Point Clouds"} {"abstract": "Research on depth-based human activity analysis achieved outstanding performance and demonstrated the effectiveness of 3D representation for action recognition. The existing depth-based and RGB+D-based action recognition benchmarks have a number of limitations, including the lack of large-scale training samples, realistic number of distinct class categories, diversity in camera views, varied environmental conditions, and variety of human subjects. In this work, we introduce a large-scale dataset for RGB+D human action recognition, which is collected from 106 distinct subjects and contains more than 114 thousand video samples and 8 million frames. This dataset contains 120 different action classes including daily, mutual, and health-related activities. We evaluate the performance of a series of existing 3D activity analysis methods on this dataset, and show the advantage of applying deep learning methods for 3D-based human action recognition. Furthermore, we investigate a novel one-shot 3D activity recognition problem on our dataset, and a simple yet effective Action-Part Semantic Relevance-aware (APSR) framework is proposed for this task, which yields promising results for recognition of the novel action classes. We believe the introduction of this large-scale dataset will enable the community to apply, adapt, and develop various data-hungry learning techniques for depth-based and RGB+D-based human activity understanding. [The dataset is available at: http://rose1.ntu.edu.sg/Datasets/actionRecognition.asp]", "field": [], "task": ["Action Recognition", "Activity Recognition", "One-Shot 3D Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D 120"], "metric": ["Accuracy"], "title": "NTU RGB+D 120: A Large-Scale Benchmark for 3D Human Activity Understanding"} {"abstract": "In this work, we propose a novel adaptive spatially-regularized correlation filters (ASRCF) model to simultaneously optimize the filter coefficients and the spatial regularization weight. First, this adaptive spatial regularization scheme could learn an effective spatial weight for a specific object and its appearance variations, and therefore result in more reliable filter coefficients during the tracking process. Second, our ASRCF model can be effectively optimized based on the alternating direction method of multipliers, where each subproblem has a closed-form solution. Third, our tracker applies two kinds of CF models to estimate the location and scale respectively. The location CF model exploits ensembles of shallow and deep features to determine the optimal position accurately. The scale CF model works on multi-scale shallow features to estimate the optimal scale efficiently.
Extensive experiments on five recent benchmarks show that our tracker performs favorably against many state-of-the-art algorithms, with real-time performance of 28fps.\r", "field": [], "task": ["Visual Tracking"], "method": [], "dataset": ["OTB-2015"], "metric": ["AUC"], "title": "Visual Tracking via Adaptive Spatially-Regularized Correlation Filters"} {"abstract": "This paper focuses on two related subtasks of aspect-based sentiment analysis, namely aspect term extraction and aspect sentiment classification, which we call aspect term-polarity co-extraction. The former task is to extract aspects of a product or service from an opinion document, and the latter is to identify the polarity expressed in the document about these extracted aspects. Most existing algorithms address them as two separate tasks and solve them one by one, or only perform one task, which can be complicated for real applications. In this paper, we treat these two tasks as two sequence labeling problems and propose a novel Dual crOss-sharEd RNN framework (DOER) to generate all aspect term-polarity pairs of the input sentence simultaneously. Specifically, DOER involves a dual recurrent neural network to extract the respective representation of each task, and a cross-shared unit to consider the relationship between them. Experimental results demonstrate that the proposed framework outperforms state-of-the-art baselines on three benchmark datasets.", "field": [], "task": ["Aspect-Based Sentiment Analysis", "Sentiment Analysis"], "method": [], "dataset": ["SemEval 2014 Task 4 Subtask 1+2", "SemEval 2014 Task 4 Laptop"], "metric": ["F1"], "title": "DOER: Dual Cross-Shared RNN for Aspect Term-Polarity Co-Extraction"} {"abstract": "Satellite image time series, bolstered by their growing availability, are at the forefront of an extensive effort towards automated Earth monitoring by international institutions. In particular, large-scale control of agricultural parcels is an issue of major political and economic importance. In this regard, hybrid convolutional-recurrent neural architectures have shown promising results for the automated classification of satellite image time series.We propose an alternative approach in which the convolutional layers are advantageously replaced with encoders operating on unordered sets of pixels to exploit the typically coarse resolution of publicly available satellite images. We also propose to extract temporal features using a bespoke neural architecture based on self-attention instead of recurrent networks. We demonstrate experimentally that our method not only outperforms previous state-of-the-art approaches in terms of precision, but also significantly decreases processing time and memory requirements. Lastly, we release a large open-access annotated dataset as a benchmark for future work on satellite image time series.", "field": [], "task": ["Time Series", "Time Series Classification"], "method": [], "dataset": ["s2-agri"], "metric": ["oAcc", "mIoU"], "title": "Satellite Image Time Series Classification with Pixel-Set Encoders and Temporal Self-Attention"} {"abstract": "The topological landscape of gene interaction networks provides a rich source of information for inferring functional patterns of genes or proteins. However, it is still a challenging task to aggregate heterogeneous biological information such as gene expression and gene interactions to achieve more accurate inference for prediction and discovery of new gene interactions. 
In particular, how to generate a unified vector representation to integrate diverse input data is a key challenge addressed here. We propose a scalable and robust deep learning framework to learn embedded representations to unify known gene interactions and gene expression for gene interaction predictions. These low-dimensional embeddings derive deeper insights into the structure of rapidly accumulating and diverse gene interaction networks and greatly simplify downstream modeling. We compare the predictive power of our deep embeddings to the strong baselines. The results suggest that our deep embeddings achieve significantly more accurate predictions. Moreover, a set of novel gene interaction predictions are validated by up-to-date literature-based database entries. The proposed model demonstrates the importance of integrating heterogeneous information about genes for gene network inference. GNE is freely available under the GNU General Public License and can be downloaded from GitHub (https://github.com/kckishan/GNE).", "field": [], "task": ["Gene Interaction Prediction", "Link Prediction"], "method": [], "dataset": ["BioGRID (human)", "BioGRID(yeast)"], "metric": ["Average Precision"], "title": "GNE: a deep learning framework for gene network inference by aggregating biological information"} {"abstract": "We present a novel approach to adjust global image properties such as colour, saturation, and luminance using human-interpretable image enhancement curves, inspired by the Photoshop curves tool. Our method, dubbed neural CURve Layers (CURL), is designed as a multi-colour space neural retouching block trained jointly in three different colour spaces (HSV, CIELab, RGB) guided by a novel multi-colour space loss. The curves are fully differentiable and are trained end-to-end for different computer vision problems including photo enhancement (RGB-to-RGB) and as part of the image signal processing pipeline for image formation (RAW-to-RGB). To demonstrate the effectiveness of CURL we combine this global image transformation block with a pixel-level (local) image multi-scale encoder-decoder backbone network. In an extensive experimental evaluation we show that CURL produces state-of-the-art image quality versus recently proposed deep learning approaches in both objective and perceptual metrics, setting new state-of-the-art performance on multiple public datasets. Our code is publicly available at: https://github.com/sjmoran/CURL.", "field": [], "task": ["Demosaicking", "Denoising", "Image Enhancement"], "method": [], "dataset": ["MIT-Adobe 5k"], "metric": ["SSIM", "PSNR", "LPIPS"], "title": "CURL: Neural Curve Layers for Global Image Enhancement"} {"abstract": "In many scenarios of Person Re-identification (Re-ID), the gallery set consists of lots of surveillance videos and the query is just an image, thus Re-ID has to be conducted between image and videos. Compared with videos, still person images lack temporal information. Besides, the information asymmetry between image and video features increases the difficulty in matching images and videos. To solve this problem, we propose a novel Temporal Knowledge Propagation (TKP) method which propagates the temporal knowledge learned by the video representation network to the image representation network. Specifically, given the input videos, we enforce the image representation network to fit the outputs of video representation network in a shared feature space.
With back propagation, temporal knowledge can be transferred to enhance the image features and the information asymmetry problem can be alleviated. With additional classification and integrated triplet losses, our model can learn expressive and discriminative image and video features for image-to-video re-identification. Extensive experiments demonstrate the effectiveness of our method and the overall results on two widely used datasets surpass the state-of-the-art methods by a large margin. Code is available at: https://github.com/guxinqian/TKP", "field": [], "task": ["Image-To-Video Person Re-Identification", "Person Re-Identification", "Video-Based Person Re-Identification"], "method": [], "dataset": ["MARS", "iLIDS-VID"], "metric": ["mAP", "Rank-10", "Rank-1", "Rank-20", "Rank-5"], "title": "Temporal Knowledge Propagation for Image-to-Video Person Re-identification"} {"abstract": "Recent deep learning based approaches have shown promising results for the\nchallenging task of inpainting large missing regions in an image. These methods\ncan generate visually plausible image structures and textures, but often create\ndistorted structures or blurry textures inconsistent with surrounding areas.\nThis is mainly due to ineffectiveness of convolutional neural networks in\nexplicitly borrowing or copying information from distant spatial locations. On\nthe other hand, traditional texture and patch synthesis approaches are\nparticularly suitable when it needs to borrow textures from the surrounding\nregions. Motivated by these observations, we propose a new deep generative\nmodel-based approach which can not only synthesize novel image structures but\nalso explicitly utilize surrounding image features as references during network\ntraining to make better predictions. The model is a feed-forward, fully\nconvolutional neural network which can process images with multiple holes at\narbitrary locations and with variable sizes during the test time. Experiments\non multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and\nnatural images (ImageNet, Places2) demonstrate that our proposed approach\ngenerates higher-quality inpainting results than existing ones. Code, demo and\nmodels are available at: https://github.com/JiahuiYu/generative_inpainting.", "field": [], "task": ["Image Inpainting"], "method": [], "dataset": ["Places2 val"], "metric": ["rect mask l2 err", "rect mask l1 error", "free-form mask l2 err", "free-form mask l1 err"], "title": "Generative Image Inpainting with Contextual Attention"} {"abstract": "In this paper, we propose to tackle the challenging few-shot learning (FSL) problem by learning global class representations using both base and novel class training samples. In each training episode, an episodic class mean computed from a support set is registered with the global representation via a registration module. This produces a registered global class representation for computing the classification loss using a query set. Though following a similar episodic training pipeline as existing meta learning based approaches, our method differs significantly in that novel class training samples are involved in the training from the beginning. To compensate for the lack of novel class training samples, an effective sample synthesis strategy is developed to avoid overfitting. 
Importantly, by joint base-novel class training, our approach can be easily extended to a more practical yet challenging FSL setting, i.e., generalized FSL, where the label space of test data is extended to both base and novel classes. Extensive experiments show that our approach is effective for both of the two FSL settings.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Generalized Few-Shot Classification", "Meta-Learning"], "method": [], "dataset": ["OMNIGLOT - 5-Shot, 20-way", "Mini-ImageNet - 1-Shot Learning", "mini-ImageNet - 100-Way", "OMNIGLOT - 1-Shot, 20-way"], "metric": ["Accuracy"], "title": "Few-Shot Learning with Global Class Representations"} {"abstract": "A Dialogue State Tracker (DST) is a key component in a dialogue system aiming at estimating the beliefs of possible user goals at each dialogue turn. Most of the current DST trackers make use of recurrent neural networks and are based on complex architectures that manage several aspects of a dialogue, including the user utterance, the system actions, and the slot-value pairs defined in a domain ontology. However, the complexity of such neural architectures incurs into a considerable latency in the dialogue state prediction, which limits the deployments of the models in real-world applications, particularly when task scalability (i.e. amount of slots) is a crucial factor. In this paper, we propose an innovative neural model for dialogue state tracking, named Global encoder and Slot-Attentive decoders (G-SAT), which can predict the dialogue state with a very low latency time, while maintaining high-level performance. We report experiments on three different languages (English, Italian, and German) of the WoZ2.0 dataset, and show that the proposed approach provides competitive advantages over state-of-art DST systems, both in terms of accuracy and in terms of time complexity for predictions, being over 15 times faster than the other systems.", "field": [], "task": ["Dialogue State Tracking"], "method": [], "dataset": ["Wizard-of-Oz"], "metric": ["Request", "Joint"], "title": "Scalable Neural Dialogue State Tracking"} {"abstract": "This paper proposes an utterance-to-utterance interactive matching network (U2U-IMN) for multi-turn response selection in retrieval-based chatbots. Different from previous methods following context-to-response matching or utterance-to-response matching frameworks, this model treats both contexts and responses as sequences of utterances when calculating the matching degrees between them. For a context-response pair, the U2U-IMN model first encodes each utterance separately using recurrent and self-attention layers. Then, a global and bidirectional interaction between the context and the response is conducted using the attention mechanism to collect the matching information between them. The distances between context and response utterances are employed as a prior component when calculating the attention weights. Finally, sentence-level aggregation and context-response-level aggregation are executed in turn to obtain the feature vector for matching degree prediction. 
Experiments on four public datasets showed that our proposed method outperformed baseline methods on all metrics, achieving a new state-of-the-art performance and demonstrating compatibility across domains for multi-turn response selection.", "field": [], "task": ["Conversational Response Selection"], "method": [], "dataset": ["Ubuntu Dialogue (v1, Ranking)"], "metric": ["R10@1", "R10@5", "R2@1", "R10@2"], "title": "Utterance-to-Utterance Interactive Matching Network for Multi-Turn Response Selection in Retrieval-Based Chatbots"} {"abstract": "Weakly supervised semantic segmentation is a challenging task as it only takes image-level information as supervision for training but produces pixel-level predictions for testing. To address such a challenging task, most recent state-of-the-art approaches propose to adopt two-step solutions, \\emph{i.e. } 1) learn to generate pseudo pixel-level masks, and 2) engage FCNs to train the semantic segmentation networks with the pseudo masks. However, the two-step solutions usually employ many bells and whistles in producing high-quality pseudo masks, making this kind of methods complicated and inelegant. In this work, we harness the image-level labels to produce reliable pixel-level annotations and design a fully end-to-end network to learn to predict segmentation maps. Concretely, we firstly leverage an image classification branch to generate class activation maps for the annotated categories, which are further pruned into confident yet tiny object/background regions. Such reliable regions are then directly served as ground-truth labels for the parallel segmentation branch, where a newly designed dense energy loss function is adopted for optimization. Despite its apparent simplicity, our one-step solution achieves competitive mIoU scores (\\emph{val}: 62.6, \\emph{test}: 62.9) on Pascal VOC compared with those two-step state-of-the-arts. By extending our one-step method to two-step, we get a new state-of-the-art performance on the Pascal VOC (\\emph{val}: 66.3, \\emph{test}: 66.5).", "field": [], "task": ["Image Classification", "Semantic Segmentation", "Weakly-Supervised Semantic Segmentation"], "method": [], "dataset": ["PASCAL VOC 2012 test", "PASCAL VOC 2012 val"], "metric": ["Mean IoU", "mIoU"], "title": "Reliability Does Matter: An End-to-End Weakly Supervised Semantic Segmentation Approach"} {"abstract": "Motivation\r\nRecent neural approaches on event extraction from text mainly focus on flat events in general domain, while there are less attempts to detect nested and overlapping events. These existing systems are built on given entities and they depend on external syntactic tools.\r\n\r\nResults\r\nWe propose an end-to-end neural nested event extraction model named DeepEventMine that extracts multiple overlapping directed acyclic graph structures from a raw sentence. On the top of the bidirectional encoder representations from transformers model, our model detects nested entities and triggers, roles, nested events and their modifications in an end-to-end manner without any syntactic tools. Our DeepEventMine model achieves the new state-of-the-art performance on seven biomedical nested event extraction tasks. 
Even when gold entities are unavailable, our model can detect events from raw text with promising performance.\r\n\r\nAvailability and implementation\r\nOur codes and models to reproduce the results are available at: https://github.com/aistairc/DeepEventMine.", "field": [], "task": ["Event Extraction"], "method": [], "dataset": ["GENIA", "Infectious Diseases 2011 (ID)", "GENIA 2013", "Multi-Level Event Extraction (MLEE)", "Cancer Genetics 2013 (CG)", "Epigenetics and Post-translational Modifications 2011 (EPI)", "Pathway Curation 2013 (PC)"], "metric": ["F1"], "title": "DeepEventMine: end-to-end neural nested event extraction from biomedical texts"} {"abstract": "Large pre-trained language models (LMs) are known to encode substantial amounts of linguistic information. However, high-level reasoning skills, such as numerical reasoning, are difficult to learn from a language-modeling objective only. Consequently, existing models for numerical reasoning have used specialized architectures with limited flexibility. In this work, we show that numerical reasoning is amenable to automatic data generation, and thus one can inject this skill into pre-trained LMs, by generating large amounts of data, and training in a multi-task setup. We show that pre-training our model, GenBERT, on this data, dramatically improves performance on DROP (49.3 $\\rightarrow$ 72.3 F1), reaching performance that matches state-of-the-art models of comparable size, while using a simple and general-purpose encoder-decoder architecture. Moreover, GenBERT generalizes well to math word problem datasets, while maintaining high performance on standard RC tasks. Our approach provides a general recipe for injecting skills into large pre-trained LMs, whenever the skill is amenable to automatic data augmentation.", "field": [], "task": ["Data Augmentation", "Language Modelling", "Question Answering"], "method": [], "dataset": ["DROP Test"], "metric": ["F1"], "title": "Injecting Numerical Reasoning Skills into Language Models"} {"abstract": "We propose ThaiLMCut, a semi-supervised approach for Thai word segmentation which utilizes a bi-directional character language model (LM) as a way to leverage useful linguistic knowledge from unlabeled data. After the language model is trained on substantial unlabeled corpora, the weights of its embedding and recurrent layers are transferred to a supervised word segmentation model which continues fine-tuning them on a word segmentation task. Our experimental results demonstrate that applying the LM always leads to a performance gain, especially when the amount of labeled data is small. In such cases, the F1 Score increased by up to 2.02{\\%}. Even on abig labeled dataset, a small improvement gain can still be obtained. The approach has also shown to be very beneficial for out-of-domain settings with a gain in F1 Score of up to 3.13{\\%}. Finally, we show that ThaiLMCut can outperform other open source state-of-the-art models achieving an F1 Score of 98.78{\\%} on the standard benchmark, InterBEST2009.", "field": [], "task": ["Language Modelling", "Thai Word Segmentation"], "method": [], "dataset": ["BEST-2010"], "metric": ["F1-Score"], "title": "ThaiLMCut: Unsupervised Pretraining for Thai Word Segmentation"} {"abstract": "Current graph neural network (GNN) architectures naively average or sum node embeddings into an aggregated graph representation -- potentially losing structural or semantic information. 
We here introduce OT-GNN, a model that computes graph embeddings using parametric prototypes that highlight key facets of different graph aspects. Towards this goal, we are (to our knowledge) the first to successfully combine optimal transport (OT) with parametric graph models. Graph representations are obtained from Wasserstein distances between the set of GNN node embeddings and \"prototype\" point clouds as free parameters. We theoretically prove that, unlike traditional sum aggregation, our function class on point clouds satisfies a fundamental universal approximation theorem. Empirically, we address an inherent collapse optimization issue by proposing a noise contrastive regularizer to steer the model towards truly exploiting the optimal transport geometry. Finally, we consistently report better generalization performance on several molecular property prediction tasks, while exhibiting smoother graph representations.", "field": [], "task": ["Drug Discovery", "Graph Regression", "Molecular Property Prediction"], "method": [], "dataset": ["Lipophilicity", "BACE", "BBBP", "ESOL", "Lipophilicity "], "metric": ["RMSE", "AUC"], "title": "Optimal Transport Graph Neural Networks"} {"abstract": "Multi-level feature fusion is a fundamental topic in computer vision. It has been exploited to detect, segment and classify objects at various scales. When multi-level features meet multi-modal cues, the optimal feature aggregation and multi-modal learning strategy become a hot potato. In this paper, we leverage the inherent multi-modal and multi-level nature of RGB-D salient object detection to devise a novel cascaded refinement network. In particular, first, we propose to regroup the multi-level features into teacher and student features using a bifurcated backbone strategy (BBS). Second, we introduce a depth-enhanced module (DEM) to excavate informative depth cues from the channel and spatial views. Then, RGB and depth modalities are fused in a complementary way. Our architecture, named Bifurcated Backbone Strategy Network (BBS-Net), is simple, efficient, and backbone-independent. Extensive experiments show that BBS-Net significantly outperforms eighteen SOTA models on eight challenging datasets under five evaluation measures, demonstrating the superiority of our approach ($\\sim 4 \\%$ improvement in S-measure $vs.$ the top-ranked model: DMRA-iccv2019). In addition, we provide a comprehensive analysis on the generalization ability of different RGB-D datasets and provide a powerful training set for future research.", "field": [], "task": ["Object Detection", "RGB-D Salient Object Detection", "RGB Salient Object Detection", "Salient Object Detection"], "method": [], "dataset": ["STERE", "NLPR", "DES", "SIP", "LFSD", "NJU2K", "SSD"], "metric": ["max E-Measure", "Average MAE", "S-Measure", "max F-Measure"], "title": "Bifurcated backbone strategy for RGB-D salient object detection"} {"abstract": "In this paper, we propose an adaptive weighting regression (AWR) method to leverage the advantages of both detection-based and regression-based methods. Hand joint coordinates are estimated as discrete integration of all pixels in dense representation, guided by adaptive weight maps. This learnable aggregation process introduces both dense and joint supervision that allows end-to-end training and brings adaptability to weight maps, making the network more accurate and robust. 
Comprehensive exploration experiments are conducted to validate the effectiveness and generality of AWR under various experimental settings, especially its usefulness for different types of dense representation and input modality. Our method outperforms other state-of-the-art methods on four publicly available datasets, including NYU, ICVL, MSRA and HANDS 2017 dataset.", "field": [], "task": ["3D Hand Pose Estimation", "Hand Pose Estimation", "Pose Estimation", "Regression"], "method": [], "dataset": ["MSRA Hands", "HANDS 2019", "NYU Hands", "ICVL Hands", "HANDS 2017"], "metric": ["Average 3D Error"], "title": "AWR: Adaptive Weighting Regression for 3D Hand Pose Estimation"} {"abstract": "We present a novel Bipartite Graph Reasoning GAN (BiGraphGAN) for the challenging person image generation task. The proposed graph generator mainly consists of two novel blocks that aim to model the pose-to-pose and pose-to-image relations, respectively. Specifically, the proposed Bipartite Graph Reasoning (BGR) block aims to reason the crossing long-range relations between the source pose and the target pose in a bipartite graph, which mitigates some challenges caused by pose deformation. Moreover, we propose a new Interaction-and-Aggregation (IA) block to effectively update and enhance the feature representation capability of both person's shape and appearance in an interactive way. Experiments on two challenging and public datasets, i.e., Market-1501 and DeepFashion, show the effectiveness of the proposed BiGraphGAN in terms of objective quantitative scores and subjective visual realness. The source code and trained models are available at https://github.com/Ha0Tang/BiGraphGAN.", "field": [], "task": ["Image Generation", "Pose Transfer"], "method": [], "dataset": ["Market-1501", "Deep-Fashion"], "metric": ["PCKh", "SSIM", "mask-IS", "mask-SSIM", "IS"], "title": "Bipartite Graph Reasoning GANs for Person Image Generation"} {"abstract": "Object recognition in video is an important task for plenty of applications, including autonomous driving perception, surveillance tasks, wearable devices or IoT networks. Object recognition using video data is more challenging than using still images due to blur, occlusions or rare object poses. Specific video detectors with high computational cost or standard image detectors together with a fast post-processing algorithm achieve the current state-of-the-art. This work introduces a novel post-processing pipeline that overcomes some of the limitations of previous post-processing methods by introducing a learning-based similarity evaluation between detections across frames. Our method improves the results of state-of-the-art specific video detectors, specially regarding fast moving objects, and presents low resource requirements. And applied to efficient still image detectors, such as YOLO, provides comparable results to much more computationally intensive detectors.", "field": [], "task": ["Autonomous Driving", "Dense Object Detection", "Object Detection", "Object Recognition", "Real-Time Object Detection", "Video Object Detection"], "method": [], "dataset": ["ImageNet VID"], "metric": ["runtime (ms)", "MAP"], "title": "Robust and Efficient Post-Processing for Video Object Detection (REPP)"} {"abstract": "Spatial pooling has been proven highly effective in capturing long-range contextual information for pixel-wise prediction tasks, such as scene parsing. 
In this paper, beyond conventional spatial pooling that usually has a regular shape of NxN, we rethink the formulation of spatial pooling by introducing a new pooling strategy, called strip pooling, which considers a long but narrow kernel, i.e., 1xN or Nx1. Based on strip pooling, we further investigate spatial pooling architecture design by 1) introducing a new strip pooling module that enables backbone networks to efficiently model long-range dependencies, 2) presenting a novel building block with diverse spatial pooling as a core, and 3) systematically comparing the performance of the proposed strip pooling and conventional spatial pooling techniques. Both novel pooling-based designs are lightweight and can serve as an efficient plug-and-play module in existing scene parsing networks. Extensive experiments on popular benchmarks (e.g., ADE20K and Cityscapes) demonstrate that our simple approach establishes new state-of-the-art results. Code is made available at https://github.com/Andrew-Qibin/SPNet.", "field": [], "task": ["Scene Parsing"], "method": [], "dataset": ["ADE20K", "Cityscapes test"], "metric": ["Mean IoU (class)", "Validation mIoU"], "title": "Strip Pooling: Rethinking Spatial Pooling for Scene Parsing"} {"abstract": "Feature warping is a core technique in optical flow estimation; however, the ambiguity caused by occluded areas during warping is a major problem that remains unsolved. In this paper, we propose an asymmetric occlusion-aware feature matching module, which can learn a rough occlusion mask that filters useless (occluded) areas immediately after feature warping without any explicit supervision. The proposed module can be easily integrated into end-to-end network architectures and enjoys performance gains while introducing negligible computational cost. The learned occlusion mask can be further fed into a subsequent network cascade with dual feature pyramids with which we achieve state-of-the-art performance. At the time of submission, our method, called MaskFlownet, surpasses all published optical flow methods on the MPI Sintel, KITTI 2012 and 2015 benchmarks. Code is available at https://github.com/microsoft/MaskFlownet.", "field": [], "task": ["Optical Flow Estimation"], "method": [], "dataset": ["KITTI 2012", "Sintel-final", "Sintel-clean", "KITTI 2015"], "metric": ["Average End-Point Error", "Fl-all"], "title": "MaskFlownet: Asymmetric Feature Matching with Learnable Occlusion Mask"} {"abstract": "Matrix completion models are among the most common formulations of\nrecommender systems. Recent works have showed a boost of performance of these\ntechniques when introducing the pairwise relationships between users/items in\nthe form of graphs, and imposing smoothness priors on these graphs. However,\nsuch techniques do not fully exploit the local stationarity structures of\nuser/item graphs, and the number of parameters to learn is linear w.r.t. the\nnumber of users and items. We propose a novel approach to overcome these\nlimitations by using geometric deep learning on graphs. Our matrix completion\narchitecture combines graph convolutional neural networks and recurrent neural\nnetworks to learn meaningful statistical graph-structured patterns and the\nnon-linear diffusion process that generates the known ratings. This neural\nnetwork system requires a constant number of parameters independent of the\nmatrix size. 
We apply our method on both synthetic and real datasets, showing\nthat it outperforms state-of-the-art techniques.", "field": [], "task": ["Matrix Completion", "Recommendation Systems"], "method": [], "dataset": ["YahooMusic Monti", "Douban Monti", "MovieLens 100K", "Flixster Monti"], "metric": ["RMSE (u1 Splits)", "RMSE"], "title": "Geometric Matrix Completion with Recurrent Multi-Graph Neural Networks"} {"abstract": "We present a novel neural network for processing sequences. The ByteNet is a\none-dimensional convolutional neural network that is composed of two parts, one\nto encode the source sequence and the other to decode the target sequence. The\ntwo network parts are connected by stacking the decoder on top of the encoder\nand preserving the temporal resolution of the sequences. To address the\ndiffering lengths of the source and the target, we introduce an efficient\nmechanism by which the decoder is dynamically unfolded over the representation\nof the encoder. The ByteNet uses dilation in the convolutional layers to\nincrease its receptive field. The resulting network has two core properties: it\nruns in time that is linear in the length of the sequences and it sidesteps the\nneed for excessive memorization. The ByteNet decoder attains state-of-the-art\nperformance on character-level language modelling and outperforms the previous\nbest results obtained with recurrent networks. The ByteNet also achieves\nstate-of-the-art performance on character-to-character machine translation on\nthe English-to-German WMT translation task, surpassing comparable neural\ntranslation models that are based on recurrent networks with attentional\npooling and run in quadratic time. We find that the latent alignment structure\ncontained in the representations reflects the expected alignment between the\ntokens.", "field": [], "task": ["Language Modelling", "Machine Translation"], "method": [], "dataset": ["enwik8", "WMT2014 English-German", "WMT2015 English-German"], "metric": ["Bit per Character (BPC)", "BLEU score"], "title": "Neural Machine Translation in Linear Time"} {"abstract": "We introduce a deep network architecture called DerainNet for removing rain\nstreaks from an image. Based on the deep convolutional neural network (CNN), we\ndirectly learn the mapping relationship between rainy and clean image detail\nlayers from data. Because we do not possess the ground truth corresponding to\nreal-world rainy images, we synthesize images with rain for training. In\ncontrast to other common strategies that increase depth or breadth of the\nnetwork, we use image processing domain knowledge to modify the objective\nfunction and improve deraining with a modestly-sized CNN. Specifically, we\ntrain our DerainNet on the detail (high-pass) layer rather than in the image\ndomain. Though DerainNet is trained on synthetic data, we find that the learned\nnetwork translates very effectively to real-world images for testing. Moreover,\nwe augment the CNN framework with image enhancement to improve the visual\nresults. 
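The detail-layer training described above can be illustrated with a simple low-pass/high-pass decomposition. The sketch below uses a Gaussian blur as a stand-in low-pass filter; the paper's exact filter and the sigma value here are illustrative assumptions, not the original implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_detail_layer(image, sigma=5.0):
    """Split an RGB image into a low-pass base layer and a high-pass detail layer.

    A de-raining CNN would be trained on the detail layer (image - base)
    rather than on the raw image, as described in the abstract above.
    """
    image = image.astype(np.float32)
    base = gaussian_filter(image, sigma=(sigma, sigma, 0))  # blur H and W, leave channels untouched
    detail = image - base
    return base, detail

rainy = np.random.rand(64, 64, 3).astype(np.float32)  # toy stand-in for a rainy image
base, detail = split_detail_layer(rainy)
# If f is the trained network, the restored image would be base + f(detail).
print(base.shape, detail.shape)
```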
Compared with state-of-the-art single image de-raining methods, our\nmethod has improved rain removal and much faster computation time after network\ntraining.", "field": [], "task": ["Image Enhancement", "Rain Removal", "Single Image Deraining"], "method": [], "dataset": ["Test2800", "Rain100H", "Test100", "Test1200", "Rain100L"], "metric": ["SSIM", "PSNR"], "title": "Clearing the Skies: A deep network architecture for single-image rain removal"} {"abstract": "As a successful deep model applied in image super-resolution (SR), the\nSuper-Resolution Convolutional Neural Network (SRCNN) has demonstrated superior\nperformance to the previous hand-crafted models either in speed and restoration\nquality. However, the high computational cost still hinders it from practical\nusage that demands real-time performance (24 fps). In this paper, we aim at\naccelerating the current SRCNN, and propose a compact hourglass-shape CNN\nstructure for faster and better SR. We re-design the SRCNN structure mainly in\nthree aspects. First, we introduce a deconvolution layer at the end of the\nnetwork, then the mapping is learned directly from the original low-resolution\nimage (without interpolation) to the high-resolution one. Second, we\nreformulate the mapping layer by shrinking the input feature dimension before\nmapping and expanding back afterwards. Third, we adopt smaller filter sizes but\nmore mapping layers. The proposed model achieves a speed up of more than 40\ntimes with even superior restoration quality. Further, we present the parameter\nsettings that can achieve real-time performance on a generic CPU while still\nmaintaining good performance. A corresponding transfer strategy is also\nproposed for fast training and testing across different upscaling factors.", "field": [], "task": ["Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["FFHQ 256 x 256 - 4x upscaling", "BSD100 - 2x upscaling", "FFHQ 1024 x 1024 - 4x upscaling"], "metric": ["SSIM", "PSNR", "FID", "MS-SSIM"], "title": "Accelerating the Super-Resolution Convolutional Neural Network"} {"abstract": "Current state of the art object recognition architectures achieve impressive\nperformance but are typically specialized for a single depictive style (e.g.\nphotos only, sketches only). In this paper, we present SwiDeN : our\nConvolutional Neural Network (CNN) architecture which recognizes objects\nregardless of how they are visually depicted (line drawing, realistic shaded\ndrawing, photograph etc.). In SwiDeN, we utilize a novel `deep' depictive\nstyle-based switching mechanism which appropriately addresses the\ndepiction-specific and depiction-invariant aspects of the problem. We compare\nSwiDeN with alternative architectures and prior work on a 50-category Photo-Art\ndataset containing objects depicted in multiple styles. Experimental results\nshow that SwiDeN outperforms other approaches for the depiction-invariant\nobject recognition problem.", "field": [], "task": ["Depiction Invariant Object Recognition", "Object Recognition"], "method": [], "dataset": ["Photo-Art-50"], "metric": ["Overall Accuracy"], "title": "SwiDeN : Convolutional Neural Networks For Depiction Invariant Object Recognition"} {"abstract": "The existing machine translation systems, whether phrase-based or neural,\nhave relied almost exclusively on word-level modelling with explicit\nsegmentation. In this paper, we ask a fundamental question: can neural machine\ntranslation generate a character sequence without any explicit segmentation? 
To\nanswer this question, we evaluate an attention-based encoder-decoder with a\nsubword-level encoder and a character-level decoder on four language\npairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15.\nOur experiments show that the models with a character-level decoder outperform\nthe ones with a subword-level decoder on all of the four language pairs.\nFurthermore, the ensembles of neural models with a character-level decoder\noutperform the state-of-the-art non-neural machine translation systems on\nEn-Cs, En-De and En-Fi and perform comparably on En-Ru.", "field": [], "task": ["Machine Translation"], "method": [], "dataset": ["WMT2015 English-German"], "metric": ["BLEU score"], "title": "A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation"} {"abstract": "We introduce the multiresolution recurrent neural network, which extends the\nsequence-to-sequence framework to model natural language generation as two\nparallel discrete stochastic processes: a sequence of high-level coarse tokens,\nand a sequence of natural language tokens. There are many ways to estimate or\nlearn the high-level coarse tokens, but we argue that a simple extraction\nprocedure is sufficient to capture a wealth of high-level discourse semantics.\nSuch procedure allows training the multiresolution recurrent neural network by\nmaximizing the exact joint log-likelihood over both sequences. In contrast to\nthe standard log- likelihood objective w.r.t. natural language tokens (word\nperplexity), optimizing the joint log-likelihood biases the model towards\nmodeling high-level abstractions. We apply the proposed model to the task of\ndialogue response generation in two challenging domains: the Ubuntu technical\nsupport domain, and Twitter conversations. On Ubuntu, the model outperforms\ncompeting approaches by a substantial margin, achieving state-of-the-art\nresults according to both automatic evaluation metrics and a human evaluation\nstudy. On Twitter, the model appears to generate more relevant and on-topic\nresponses according to automatic evaluation metrics. Finally, our experiments\ndemonstrate that the proposed model is more adept at overcoming the sparsity of\nnatural language and is better able to capture long-term structure.", "field": [], "task": ["Dialogue Generation", "Text Generation"], "method": [], "dataset": ["Ubuntu Dialogue (Activity)", "Ubuntu Dialogue (Tense)", "Twitter Dialogue (Noun)", "Ubuntu Dialogue (Cmd)", "Ubuntu Dialogue (Entity)", "Twitter Dialogue (Tense)"], "metric": ["Precision", "Recall", "F1", "Accuracy"], "title": "Multiresolution Recurrent Neural Networks: An Application to Dialogue Response Generation"} {"abstract": "Surgical workflow recognition has numerous potential medical applications,\nsuch as the automatic indexing of surgical video databases and the optimization\nof real-time operating room scheduling, among others. As a result, phase\nrecognition has been studied in the context of several kinds of surgeries, such\nas cataract, neurological, and laparoscopic surgeries. In the literature, two\ntypes of features are typically used to perform this task: visual features and\ntool usage signals. However, the visual features used are mostly handcrafted.\nFurthermore, the tool usage signals are usually collected via a manual\nannotation process or by using additional equipment. 
In this paper, we propose\na novel method for phase recognition that uses a convolutional neural network\n(CNN) to automatically learn features from cholecystectomy videos and that\nrelies uniquely on visual information. In previous studies, it has been shown\nthat the tool signals can provide valuable information in performing the phase\nrecognition task. Thus, we present a novel CNN architecture, called EndoNet,\nthat is designed to carry out the phase recognition and tool presence detection\ntasks in a multi-task manner. To the best of our knowledge, this is the first\nwork proposing to use a CNN for multiple recognition tasks on laparoscopic\nvideos. Extensive experimental comparisons to other methods show that EndoNet\nyields state-of-the-art results for both tasks.", "field": [], "task": ["Surgical tool detection"], "method": [], "dataset": ["Cholec80"], "metric": ["mAP"], "title": "EndoNet: A Deep Architecture for Recognition Tasks on Laparoscopic Videos"} {"abstract": "Achieving efficient and scalable exploration in complex domains poses a major\nchallenge in reinforcement learning. While Bayesian and PAC-MDP approaches to\nthe exploration problem offer strong formal guarantees, they are often\nimpractical in higher dimensions due to their reliance on enumerating the\nstate-action space. Hence, exploration in complex domains is often performed\nwith simple epsilon-greedy methods. In this paper, we consider the challenging\nAtari games domain, which requires processing raw pixel inputs and delayed\nrewards. We evaluate several more sophisticated exploration strategies,\nincluding Thompson sampling and Boltzman exploration, and propose a new\nexploration method based on assigning exploration bonuses from a concurrently\nlearned model of the system dynamics. By parameterizing our learned model with\na neural network, we are able to develop a scalable and efficient approach to\nexploration bonuses that can be applied to tasks with complex, high-dimensional\nstate spaces. In the Atari domain, our method provides the most consistent\nimprovement across a range of games that pose a major challenge for prior\nmethods. In addition to raw game-scores, we also develop an AUC-100 metric for\nthe Atari Learning domain to evaluate the impact of exploration on this\nbenchmark.", "field": [], "task": ["Atari Games"], "method": [], "dataset": ["Atari 2600 Venture", "Atari 2600 Montezuma's Revenge", "Atari 2600 Frostbite", "Atari 2600 Freeway", "Atari 2600 Q*Bert"], "metric": ["Score"], "title": "Incentivizing Exploration In Reinforcement Learning With Deep Predictive Models"} {"abstract": "We present a state-of-the-art speech recognition system developed using\nend-to-end deep learning. Our architecture is significantly simpler than\ntraditional speech systems, which rely on laboriously engineered processing\npipelines; these traditional systems also tend to perform poorly when used in\nnoisy environments. In contrast, our system does not need hand-designed\ncomponents to model background noise, reverberation, or speaker variation, but\ninstead directly learns a function that is robust to such effects. We do not\nneed a phoneme dictionary, nor even the concept of a \"phoneme.\" Key to our\napproach is a well-optimized RNN training system that uses multiple GPUs, as\nwell as a set of novel data synthesis techniques that allow us to efficiently\nobtain a large amount of varied data for training. 
Our system, called Deep\nSpeech, outperforms previously published results on the widely studied\nSwitchboard Hub5'00, achieving 16.0% error on the full test set. Deep Speech\nalso handles challenging noisy environments better than widely used,\nstate-of-the-art commercial speech systems.", "field": [], "task": ["Accented Speech Recognition", "End-To-End Speech Recognition", "Speech Recognition"], "method": [], "dataset": ["swb_hub_500 WER fullSWBCH", "Switchboard + Hub500", "VoxForge American-Canadian", "CHiME clean", "VoxForge Commonwealth", "CHiME real", "VoxForge European", "VoxForge Indian"], "metric": ["Percentage error"], "title": "Deep Speech: Scaling up end-to-end speech recognition"} {"abstract": "We introduce a multi-task setup of identifying and classifying entities,\nrelations, and coreference clusters in scientific articles. We create SciERC, a\ndataset that includes annotations for all three tasks and develop a unified\nframework called Scientific Information Extractor (SciIE) for with shared span\nrepresentations. The multi-task setup reduces cascading errors between tasks\nand leverages cross-sentence relations through coreference links. Experiments\nshow that our multi-task model outperforms previous models in scientific\ninformation extraction without using any domain-specific features. We further\nshow that the framework supports construction of a scientific knowledge graph,\nwhich we use to analyze information in scientific literature.", "field": [], "task": ["Coreference Resolution", "Joint Entity and Relation Extraction", "Named Entity Recognition"], "method": [], "dataset": ["SciERC"], "metric": ["Relation F1", "F1", "Entity F1"], "title": "Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction"} {"abstract": "In this paper, we propose a convolutional layer inspired by optical flow algorithms to learn motion representations. Our representation flow layer is a fully-differentiable layer designed to capture the `flow' of any representation channel within a convolutional neural network for action recognition. Its parameters for iterative flow optimization are learned in an end-to-end fashion together with the other CNN model parameters, maximizing the action recognition performance. Furthermore, we newly introduce the concept of learning `flow of flow' representations by stacking multiple representation flow layers. We conducted extensive experimental evaluations, confirming its advantages over previous recognition models using traditional optical flows in both computational speed and performance. Code/models available here: https://piergiaj.github.io/rep-flow-site/", "field": [], "task": ["Action Classification", "Action Recognition", "Action Recognition In Videos", "Activity Recognition", "Activity Recognition In Videos", "Optical Flow Estimation", "Temporal Action Localization", "Video Classification", "Video Understanding"], "method": [], "dataset": ["Kinetics-400", "HMDB-51"], "metric": ["Average accuracy of 3 splits", "Vid acc@1"], "title": "Representation Flow for Action Recognition"} {"abstract": "In this paper, we study the task of image retrieval, where the input query is\nspecified in the form of an image plus some text that describes desired\nmodifications to the input image. For example, we may present an image of the\nEiffel tower, and ask the system to find images which are visually similar but\nare modified in small ways, such as being taken at nighttime instead of during\nthe day. 
To tackle this task, we learn a similarity metric between a target\nimage and a source image plus source text, an embedding and composing function\nsuch that target image feature is close to the source image plus text\ncomposition feature. We propose a new way to combine image and text using such\nfunction that is designed for the retrieval task. We show this outperforms\nexisting approaches on 3 different datasets, namely Fashion-200k, MIT-States\nand a new synthetic dataset we create based on CLEVR. We also show that our\napproach can be used to classify input queries, in addition to image retrieval.", "field": [], "task": ["Image Retrieval", "Image Retrieval with Multi-Modal Query"], "method": [], "dataset": ["FashionIQ", "MIT-States", "Fashion200k"], "metric": ["Recall@50", "Recall@1", "Recall@5", "Recall@10"], "title": "Composing Text and Image for Image Retrieval - An Empirical Odyssey"} {"abstract": "We present the Natural Questions corpus, a question answering dataset. Questions consist of real anonymized, aggregated queries issued to the Google search engine. An annotator is presented with a question along with a Wikipedia page from the top 5 search results, and annotates a long answer (typically a paragraph) and a short answer (one or more entities) if present on the page, or marks null if no long/short answer is present. The public release consists of 307,373 training examples with single annotations, 7,830 examples with 5-way annotations for development data, and a further 7,842 examples 5-way annotated sequestered as test data. We present experiments validating quality of the data. We also describe analysis of 25-way annotations on 302 examples, giving insights into human variability on the annotation task. We introduce robust metrics for the purposes of evaluating question answering systems; demonstrate high human upper bounds on these metrics; and establish baseline results using competitive methods drawn from related literature.", "field": [], "task": ["Question Answering"], "method": [], "dataset": ["Natural Questions (long)", "Natural Questions (short)"], "metric": ["F1"], "title": "Natural Questions: a Benchmark for Question Answering Research"} {"abstract": "High dynamic range (HDR) image generation from a single exposure low dynamic range (LDR) image has been made possible due to the recent advances in Deep Learning. Various feed-forward Convolutional Neural Networks (CNNs) have been proposed for learning LDR to HDR representations. To better utilize the power of CNNs, we exploit the idea of feedback, where the initial low level features are guided by the high level features using a hidden state of a Recurrent Neural Network. Unlike a single forward pass in a conventional feed-forward network, the reconstruction from LDR to HDR in a feedback network is learned over multiple iterations. This enables us to create a coarse-to-fine representation, leading to an improved reconstruction at every iteration. Various advantages over standard feed-forward networks include early reconstruction ability and better reconstruction quality with fewer network parameters. We design a dense feedback block and propose an end-to-end feedback network- FHDR for HDR image generation from a single exposure LDR image. 
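To illustrate the feedback idea sketched above (low-level features repeatedly refined under the guidance of a hidden state carried across iterations), here is a toy PyTorch module; the channel count, iteration count and block depth are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class ToyFeedbackBlock(nn.Module):
    """Iterative refinement: the same block is applied T times, each time
    conditioned on the hidden state produced by the previous iteration."""

    def __init__(self, channels=64, iterations=4):
        super().__init__()
        self.iterations = iterations
        self.feat = nn.Conv2d(3, channels, 3, padding=1)            # low-level feature extractor
        self.block = nn.Sequential(                                  # shared refinement block
            nn.Conv2d(2 * channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.out = nn.Conv2d(channels, 3, 3, padding=1)              # HDR reconstruction head

    def forward(self, ldr):
        feats = self.feat(ldr)
        hidden = torch.zeros_like(feats)
        outputs = []
        for _ in range(self.iterations):
            hidden = self.block(torch.cat([feats, hidden], dim=1))   # feedback: hidden state guides low-level features
            outputs.append(self.out(hidden))                         # coarse-to-fine predictions
        return outputs  # a loss can supervise every iteration's output

preds = ToyFeedbackBlock()(torch.rand(1, 3, 32, 32))
print(len(preds), preds[-1].shape)  # 4 torch.Size([1, 3, 32, 32])
```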
Qualitative and quantitative evaluations show the superiority of our approach over the state-of-the-art methods.", "field": [], "task": ["Image Generation", "Image Reconstruction", "Single-Image-Based Hdr Reconstruction"], "method": [], "dataset": ["City Scene Dataset"], "metric": ["SSIM", "PSNR", "HDR-VDP2 Q SCORE"], "title": "FHDR: HDR Image Reconstruction from a Single LDR Image using Feedback Network"} {"abstract": "Man-made scenes can be densely packed, containing numerous objects, often\nidentical, positioned in close proximity. We show that precise object detection\nin such scenes remains a challenging frontier even for state-of-the-art object\ndetectors. We propose a novel, deep-learning based method for precise object\ndetection, designed for such challenging settings. Our contributions include:\n(1) A layer for estimating the Jaccard index as a detection quality score; (2)\na novel EM merging unit, which uses our quality scores to resolve detection\noverlap ambiguities; finally, (3) an extensive, annotated data set, SKU-110K,\nrepresenting packed retail environments, released for training and testing\nunder such extreme settings. Detection tests on SKU-110K and counting tests on\nthe CARPK and PUCPR+ show our method to outperform existing state-of-the-art\nwith substantial margins. The code and data will be made available on\n\\url{www.github.com/eg4000/SKU110K_CVPR19}.", "field": [], "task": ["Dense Object Detection", "Object Detection"], "method": [], "dataset": ["CARPK", "SKU-110K"], "metric": ["MAE", "AP75", "RMSE", "AP"], "title": "Precise Detection in Densely Packed Scenes"} {"abstract": "Faced with the diversity and complexity of multi-view data in semi-supervised classification, most existing graph convolutional networks focus on network architecture construction or salient graph structure preservation, and ignore the contribution of the complete graph structure to semi-supervised classification. To mine a more complete distribution structure from multi-view data while considering both specificity and commonality, we propose structure fusion based on graph convolutional networks (SF-GCN) to improve semi-supervised classification performance. SF-GCN not only retains the specific characteristics of each view via spectral embedding, but also captures the common style of multi-view data via a distance metric between multi-graph structures. Assuming a linear relationship between multi-graph structures, we construct the optimization function of the structure fusion model by balancing the specificity loss and the commonality loss. By solving this function, we simultaneously obtain the fused spectral embedding of the multi-view data and the fused structure, which serves as the adjacency matrix fed into graph convolutional networks for semi-supervised classification.
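As a rough illustration of the fusion step just described, the sketch below combines per-view adjacency matrices with fixed illustrative weights into one fused structure and runs a single symmetric-normalized GCN propagation step; the weighting scheme is an assumption for illustration only, not the paper's optimization procedure.

```python
import numpy as np

def normalize_adj(adj):
    """Symmetric normalization D^-1/2 (A + I) D^-1/2 used in standard GCNs."""
    adj = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(adj.sum(axis=1))
    return adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def fuse_structures(adjs, weights):
    """Fuse multi-view graph structures as a weighted combination (illustrative)."""
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()
    return sum(w * a for w, a in zip(weights, adjs))

# Two toy views over 4 nodes, fused and propagated through one GCN step.
a1 = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=np.float64)
a2 = np.array([[0, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 0], [1, 0, 0, 0]], dtype=np.float64)
fused = fuse_structures([a1, a2], weights=[0.6, 0.4])
features = np.random.rand(4, 8)
w = np.random.rand(8, 3)                      # GCN weight matrix (8 features -> 3 classes)
logits = normalize_adj(fused) @ features @ w  # one propagation step on the fused graph
print(logits.shape)  # (4, 3)
```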
Experiments demonstrate that SF-GCN outperforms the state of the art on three challenging citation-network datasets: Cora, Citeseer and Pubmed.", "field": [], "task": ["Node Classification"], "method": [], "dataset": ["Cora", "Pubmed", "Citeseer"], "metric": ["Accuracy"], "title": "Structure fusion based on graph convolutional networks for semi-supervised classification"} {"abstract": "We introduce SharpNet, a method that predicts an accurate depth map for an input color image, with particular attention to the reconstruction of occluding contours: Occluding contours are an important cue for object recognition, and for realistic integration of virtual objects in Augmented Reality, but they are also notoriously difficult to reconstruct accurately. For example, they are a challenge for stereo-based reconstruction methods, as points around an occluding contour are visible in only one image. Inspired by recent methods that introduce normal estimation to improve depth prediction, we introduce a novel term that constrains depth and occluding contour predictions. Since ground truth depth is difficult to obtain with pixel-perfect accuracy along occluding contours, we use synthetic images for training, followed by fine-tuning on real data. We demonstrate our approach on the challenging NYUv2-Depth dataset, and show that our method outperforms the state-of-the-art along occluding contours, while performing on par with the best recent methods for the rest of the images. Its accuracy along the occluding contours is actually better than the `ground truth' acquired by a depth camera based on structured light. We show this by introducing a new benchmark based on NYUv2-Depth for evaluating occluding contours in monocular reconstruction, which is our second contribution.", "field": [], "task": ["Depth Estimation", "Monocular Depth Estimation", "Object Recognition"], "method": [], "dataset": ["NYU-Depth V2"], "metric": ["RMSE"], "title": "SharpNet: Fast and Accurate Recovery of Occluding Contours in Monocular Depth Estimation"} {"abstract": "Transformer-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose BigBird, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having $O(1)$ global tokens (such as CLS) that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to 8x what was previously possible using similar hardware. As a consequence of the capability to handle longer context, BigBird drastically improves performance on various NLP tasks such as question answering and summarization.
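The kind of sparse attention pattern described above (a sliding window plus a few global tokens and random connections) can be sketched as a boolean attention mask; the window size and counts below are illustrative assumptions rather than any library's defaults.

```python
import numpy as np

def sparse_attention_mask(seq_len, window=3, n_global=1, n_random=2, seed=0):
    """Boolean mask where True means query i may attend to key j."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True                                # local sliding window
        mask[i, rng.choice(seq_len, size=n_random)] = True   # a few random connections per query
    mask[:, :n_global] = True                                # every token attends to the global tokens
    mask[:n_global, :] = True                                # global tokens attend to the whole sequence
    return mask

m = sparse_attention_mask(16)
print(m.sum(), "of", m.size, "attention entries kept")  # far fewer than the full 16*16
```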
We also propose novel applications to genomics data.", "field": [], "task": ["Chromatin-Profile Prediction", "Document Summarization", "Linguistic Acceptability", "Natural Language Inference", "Question Answering", "Semantic Textual Similarity", "Sentiment Analysis", "Text Classification", "Text Summarization"], "method": [], "dataset": ["MultiNLI", "arXiv", "TriviaQA", "SST-2 Binary classification", "Patents", "WikiHop", "Yelp-5", "HotpotQA", "BBC XSum", "Hyperpartisan", "IMDb", "STS Benchmark", "CoLA", "QNLI", "BigPatent", "CNN / Daily Mail", "RTE", "MRPC", "Natural Questions", "Pubmed", "Quora Question Pairs"], "metric": ["F1 (Long)", "Sup", "ROUGE-1", "Accuracy (2 classes)", "F1 (Short)", "Test", "Spearman Correlation", "Matched", "ROUGE-2", "Ans", "F1", "Joint F1", "ROUGE-L", "Accuracy", "Accuracy (10 classes)"], "title": "Big Bird: Transformers for Longer Sequences"} {"abstract": "Joint extraction of entities and relations aims to detect entity pairs along with their relations using a single model. Prior work typically solves this task in the extract-then-classify or unified labeling manner. However, these methods either suffer from the redundant entity pairs, or ignore the important inner structure in the process of extracting entities and relations. To address these limitations, in this paper, we first decompose the joint extraction task into two interrelated subtasks, namely HE extraction and TER extraction. The former subtask is to distinguish all head-entities that may be involved with target relations, and the latter is to identify corresponding tail-entities and relations for each extracted head-entity. Next, these two subtasks are further deconstructed into several sequence labeling problems based on our proposed span-based tagging scheme, which are conveniently solved by a hierarchical boundary tagger and a multi-span decoding algorithm. Owing to the reasonable decomposition strategy, our model can fully capture the semantic interdependency between different steps, as well as reduce noise from irrelevant entity pairs. Experimental results show that our method outperforms previous work by 5.2%, 5.9% and 21.5% (F1 score), achieving a new state-of-the-art on three public datasets", "field": [], "task": ["Relation Extraction"], "method": [], "dataset": ["NYT", "NYT-single", "WebNLG"], "metric": ["F1"], "title": "Joint Extraction of Entities and Relations Based on a Novel Decomposition Strategy"} {"abstract": "Text simplification aims at making a text easier to read and understand by simplifying grammar and structure while keeping the underlying information identical. It is often considered an all-purpose generic task where the same simplification is suitable for all; however multiple audiences can benefit from simplified text in different ways. We adapt a discrete parametrization mechanism that provides explicit control on simplification systems based on Sequence-to-Sequence models. As a result, users can condition the simplifications returned by a model on attributes such as length, amount of paraphrasing, lexical complexity and syntactic complexity. We also show that carefully chosen values of these attributes allow out-of-the-box Sequence-to-Sequence models to outperform their standard counterparts on simplification benchmarks. 
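A minimal sketch of conditioning a Sequence-to-Sequence model on such attributes by prepending discrete control tokens to the source sentence; the token names and the bucketing step below are hypothetical placeholders, not the system's actual vocabulary.

```python
def add_control_tokens(source, target=None, length_ratio=None, lexical_ratio=None):
    """Prepend discrete control tokens to the source side of a seq2seq pair.

    At training time the ratios can be computed from the reference (target);
    at inference time the user picks the values to steer the simplification.
    """
    def bucket(x, step=0.05):
        return round(round(x / step) * step, 2)   # discretize into coarse buckets

    if target is not None:
        length_ratio = len(target.split()) / max(1, len(source.split()))
    tokens = []
    if length_ratio is not None:
        tokens.append(f"<LENGTH_{bucket(length_ratio)}>")
    if lexical_ratio is not None:
        tokens.append(f"<LEXICAL_{bucket(lexical_ratio)}>")
    return " ".join(tokens + [source])

src = "The committee determined that the proposal was unsatisfactory ."
print(add_control_tokens(src, length_ratio=0.8, lexical_ratio=0.75))
# <LENGTH_0.8> <LEXICAL_0.75> The committee determined that the proposal was unsatisfactory .
```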
Our model, which we call ACCESS (as shorthand for AudienCe-CEntric Sentence Simplification), establishes the state of the art at 41.87 SARI on the WikiLarge test set, a +1.42 improvement over the best previously reported score.", "field": [], "task": ["Text Simplification"], "method": [], "dataset": ["ASSET", "TurkCorpus"], "metric": ["BLEU", "SARI (EASSE>=0.2.1)"], "title": "Controllable Sentence Simplification"} {"abstract": "We address Unsupervised Video Object Segmentation (UVOS), the task of automatically generating accurate pixel masks for salient objects in a video sequence and of tracking these objects consistently through time, without any input about which objects should be tracked. Towards solving this task, we present UnOVOST (Unsupervised Offline Video Object Segmentation and Tracking) as a simple and generic algorithm which is able to track and segment a large variety of objects. This algorithm builds up tracks in a number stages, first grouping segments into short tracklets that are spatio-temporally consistent, before merging these tracklets into long-term consistent object tracks based on their visual similarity. In order to achieve this we introduce a novel tracklet-based Forest Path Cutting data association algorithm which builds up a decision forest of track hypotheses before cutting this forest into paths that form long-term consistent object tracks. When evaluating our approach on the DAVIS 2017 Unsupervised dataset we obtain state-of-the-art performance with a mean J &F score of 67.9% on the val, 58% on the test-dev and 56.4% on the test-challenge benchmarks, obtaining first place in the DAVIS 2019 Unsupervised Video Object Segmentation Challenge. UnOVOST even performs competitively with many semi-supervised video object segmentation algorithms even though it is not given any input as to which objects should be tracked and segmented.", "field": [], "task": ["Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Unsupervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["DAVIS 2017 (val)", "DAVIS 2017 (test-dev)"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "UnOVOST: Unsupervised Offline Video Object Segmentation and Tracking"} {"abstract": "This paper targets on the problem of set to set recognition, which learns the\nmetric between two image sets. Images in each set belong to the same identity.\nSince images in a set can be complementary, they hopefully lead to higher\naccuracy in practical applications. However, the quality of each sample cannot\nbe guaranteed, and samples with poor quality will hurt the metric. In this\npaper, the quality aware network (QAN) is proposed to confront this problem,\nwhere the quality of each sample can be automatically learned although such\ninformation is not explicitly provided in the training stage. The network has\ntwo branches, where the first branch extracts appearance feature embedding for\neach sample and the other branch predicts quality score for each sample.\nFeatures and quality scores of all samples in a set are then aggregated to\ngenerate the final feature embedding. 
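The aggregation step described above can be written as a quality-weighted average of per-sample features; the softmax weighting below is one plausible instantiation under stated assumptions, not necessarily the exact operator used in the paper.

```python
import torch

def aggregate_set(features, quality_logits):
    """Aggregate a set of per-sample features into one set-level embedding.

    features:       (n, d) appearance embeddings from the first branch
    quality_logits: (n,)   raw quality scores from the second branch
    """
    weights = torch.softmax(quality_logits, dim=0)        # low-quality samples receive small weights
    return (weights.unsqueeze(1) * features).sum(dim=0)   # (d,) set-level embedding

feats = torch.randn(5, 128)                               # 5 frames of one identity
quality = torch.tensor([2.0, -1.0, 0.5, 3.0, -2.0])
embedding = aggregate_set(feats, quality)
print(embedding.shape)  # torch.Size([128])
```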
We show that the two branches can be\ntrained in an end-to-end manner given only the set-level identity annotation.\nAnalysis on gradient spread of this mechanism indicates that the quality\nlearned by the network is beneficial to set-to-set recognition and simplifies\nthe distribution that the network needs to fit. Experiments on both face\nverification and person re-identification show advantages of the proposed QAN.\nThe source code and network structure can be downloaded at\nhttps://github.com/sciencefans/Quality-Aware-Network.", "field": [], "task": ["Face Verification", "Person Re-Identification"], "method": [], "dataset": ["YouTube Faces DB"], "metric": ["Accuracy"], "title": "Quality Aware Network for Set to Set Recognition"} {"abstract": "The task of session-based recommendation is to predict user actions based on anonymous sessions. Recent research mainly models the target session as a sequence or a graph to capture item transitions within it, ignoring complex transitions between items in different sessions that have been generated by other users. These item transitions include potential collaborative information and reflect similar behavior patterns, which we assume may help with the recommendation for the target session. In this paper, we propose a novel method, namely Dual-channel Graph Transition Network (DGTN), to model item transitions within not only the target session but also the neighbor sessions. Specifically, we integrate the target session and its neighbor (similar) sessions into a single graph. Then the transition signals are explicitly injected into the embedding by channel-aware propagation. Experiments on real-world datasets demonstrate that DGTN outperforms other state-of-the-art methods. Further analysis verifies the rationality of dual-channel item transition modeling, suggesting a potential future direction for session-based recommendation.", "field": [], "task": ["Session-Based Recommendations"], "method": [], "dataset": ["yoochoose1", "Diginetica", "yoochoose1/64"], "metric": ["MRR@20", "Precision@20"], "title": "DGTN: Dual-channel Graph Transition Network for Session-based Recommendation"} {"abstract": "Self-training and unsupervised pre-training have emerged as effective approaches to improve speech recognition systems using unlabeled data. However, it is not clear whether they learn similar patterns or if they can be effectively combined. In this paper, we show that pseudo-labeling and pre-training with wav2vec 2.0 are complementary in a variety of labeled data setups. Using just 10 minutes of labeled data from Libri-light as well as 53k hours of unlabeled data from LibriVox achieves WERs of 3.0%/5.2% on the clean and other test sets of Librispeech - rivaling the best published systems trained on 960 hours of labeled data only a year ago. Training on all labeled data of Librispeech achieves WERs of 1.5%/3.1%.", "field": [], "task": ["Speech Recognition", "Unsupervised Pre-training"], "method": [], "dataset": ["LibriSpeech test-other", "LibriSpeech test-clean", "LibriSpeech train-clean-100 test-other", "LibriSpeech train-clean-100 test-clean"], "metric": ["Word Error Rate (WER)"], "title": "Self-training and Pre-training are Complementary for Speech Recognition"} {"abstract": "Despite the remarkable recent progress, person re-identification (Re-ID)\napproaches are still suffering from the failure cases where the discriminative\nbody parts are missing. 
To mitigate such cases, we propose a simple yet\neffective Horizontal Pyramid Matching (HPM) approach to fully exploit various\npartial information about a given person, so that correct person candidates can still be\nidentified even when some key parts are missing. Within the HPM, we make\nthe following contributions to produce a more robust feature representation for\nthe Re-ID task: 1) we learn to classify using partial feature representations\nat different horizontal pyramid scales, which successfully enhance the\ndiscriminative capabilities of various person parts; 2) we exploit average and\nmax pooling strategies to account for person-specific discriminative\ninformation in a global-local manner. To validate the effectiveness of the\nproposed HPM, extensive experiments are conducted on three popular benchmarks,\nincluding Market-1501, DukeMTMC-ReID and CUHK03. In particular, we achieve mAP\nscores of 83.1%, 74.5% and 59.7% on these benchmarks, which set a new\nstate of the art. Our code is available on GitHub", "field": [], "task": ["Person Re-Identification"], "method": [], "dataset": ["DukeMTMC-reID", "Market-1501"], "metric": ["Rank-1", "MAP"], "title": "Horizontal Pyramid Matching for Person Re-identification"} {"abstract": "It is common that entity mentions can contain other mentions recursively.\nThis paper introduces a scalable transition-based method to model the nested\nstructure of mentions. We first map a sentence with nested mentions to a\ndesignated forest where each mention corresponds to a constituent of the\nforest. Our shift-reduce based system then learns to construct the forest\nstructure in a bottom-up manner through an action sequence whose maximal length\nis guaranteed to be three times the sentence length. Based on Stack-LSTM,\nwhich is employed to efficiently and effectively represent the states of the\nsystem in a continuous space, our system is further augmented with a\ncharacter-based component to capture letter-level patterns. Our model achieves\nstate-of-the-art results on the ACE datasets, showing its effectiveness in\ndetecting nested mentions.", "field": [], "task": ["Named Entity Recognition", "Nested Mention Recognition", "Nested Named Entity Recognition"], "method": [], "dataset": ["GENIA", "ACE 2005", "ACE 2004"], "metric": ["F1"], "title": "A Neural Transition-based Model for Nested Mention Recognition"} {"abstract": "Existing methods for arterial blood pressure (BP) estimation directly map the\ninput physiological signals to output BP values without explicitly modeling the\nunderlying temporal dependencies in BP dynamics. As a result, these models\nsuffer from accuracy decay over a long time and thus require frequent\ncalibration. In this work, we address this issue by formulating BP estimation\nas a sequence prediction problem in which both the input and target are\ntemporal sequences. We propose a novel deep recurrent neural network (RNN)\nconsisting of multilayered Long Short-Term Memory (LSTM) networks, which are\nincorporated with (1) a bidirectional structure to access larger-scale context\ninformation of the input sequence, and (2) residual connections to allow gradients\nin the deep RNN to propagate more effectively. The proposed deep RNN model was\ntested on a static BP dataset, and it achieved root mean square error (RMSE) of\n3.90 and 2.66 mmHg for systolic BP (SBP) and diastolic BP (DBP) prediction\nrespectively, surpassing the accuracy of traditional BP prediction models.
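As a toy illustration of the two ingredients named above, a bidirectional structure and residual connections between stacked recurrent layers, the PyTorch sketch below stacks bidirectional LSTMs and adds each layer's input back to its output; the layer sizes and the per-step two-value head (SBP, DBP) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ResidualBiLSTM(nn.Module):
    """Stack of bidirectional LSTM layers with residual (skip) connections."""

    def __init__(self, input_size, hidden_size=64, num_layers=3):
        super().__init__()
        self.proj = nn.Linear(input_size, 2 * hidden_size)   # match the residual dimensions
        self.layers = nn.ModuleList([
            nn.LSTM(2 * hidden_size, hidden_size, batch_first=True, bidirectional=True)
            for _ in range(num_layers)
        ])
        self.head = nn.Linear(2 * hidden_size, 2)             # predict SBP and DBP at each time step

    def forward(self, x):
        h = self.proj(x)
        for lstm in self.layers:
            out, _ = lstm(h)
            h = h + out                                        # residual connection eases gradient flow
        return self.head(h)

bp = ResidualBiLSTM(input_size=8)(torch.randn(4, 100, 8))      # batch of 4 sequences, 100 steps, 8 features
print(bp.shape)  # torch.Size([4, 100, 2])
```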
On a\nmulti-day BP dataset, the deep RNN achieved RMSE of 3.84, 5.25, 5.80 and 5.81\nmmHg for SBP prediction on the 1st day, 2nd day, 4th day and 6th month after the\n1st day, and 1.80, 4.78, 5.0 and 5.21 mmHg for the corresponding DBP predictions,\nwhich outperforms all previous models by a notable margin.\nThe experimental results suggest that modeling the temporal dependencies in BP\ndynamics significantly improves the long-term BP prediction accuracy.", "field": [], "task": ["Blood pressure estimation", "Electrocardiography (ECG)", "Photoplethysmography (PPG)"], "method": [], "dataset": ["Multi-day Continuous BP Prediction", "MIMIC-III"], "metric": ["MAE for SBP [mmHg]", "MAE for DBP [mmHg]", "RMSE"], "title": "Long-term Blood Pressure Prediction with Deep Recurrent Neural Networks"} {"abstract": "Most approaches to extracting multiple relations from a paragraph require multiple passes over the paragraph. In practice, multiple passes are computationally expensive and this makes it difficult to scale to longer paragraphs and larger text corpora. In this work, we focus on the task of multiple relation extraction by encoding the paragraph only once (one-pass). We build our solution on the pre-trained self-attentive (Transformer) models, where we first add a structured prediction layer to handle extraction between multiple entity pairs, then enhance the paragraph embedding to capture the relational information associated with each entity using an entity-aware attention technique. We show that our approach is not only scalable but can also achieve state-of-the-art performance on the standard benchmark ACE 2005.", "field": [], "task": ["Relation Extraction", "Structured Prediction"], "method": [], "dataset": ["SemEval-2010 Task 8"], "metric": ["F1"], "title": "Extracting Multiple-Relations in One-Pass with Pre-Trained Transformers"} {"abstract": "Temporal Action Proposal (TAP) generation is an important problem, as fast\nand accurate extraction of semantically important (e.g. human actions) segments\nfrom untrimmed videos is an important step for large-scale video analysis. We\npropose a novel Temporal Unit Regression Network (TURN) model. There are two\nsalient aspects of TURN: (1) TURN jointly predicts action proposals and refines\nthe temporal boundaries by temporal coordinate regression; (2) Fast computation\nis enabled by unit feature reuse: a long untrimmed video is decomposed into\nvideo units, which are reused as basic building blocks of temporal proposals.\nTURN outperforms the state-of-the-art methods under average recall (AR) by a\nlarge margin on THUMOS-14 and ActivityNet datasets, and runs at over 880 frames\nper second (FPS) on a TITAN X GPU. We further apply TURN as a proposal\ngeneration stage for existing temporal action localization pipelines, where it\nsurpasses state-of-the-art performance on THUMOS-14 and ActivityNet.", "field": [], "task": ["Action Localization", "Regression", "Temporal Action Localization"], "method": [], "dataset": ["THUMOS\u201914"], "metric": ["mAP@0.3", "mAP IOU@0.5", "mAP IOU@0.2", "mAP IOU@0.4", "mAP@0.4", "mAP IOU@0.3", "mAP@0.5", "mAP IOU@0.1"], "title": "TURN TAP: Temporal Unit Regression Network for Temporal Action Proposals"} {"abstract": "Multivariate time series data in practical applications, such as health care,\ngeoscience, and biology, are characterized by a variety of missing values.
In\ntime series prediction and other related tasks, it has been noted that missing\nvalues and their missing patterns are often correlated with the target labels,\na.k.a., informative missingness. There is very limited work on exploiting the\nmissing patterns for effective imputation and improving prediction performance.\nIn this paper, we develop novel deep learning models, namely GRU-D, as one of\nthe early attempts. GRU-D is based on Gated Recurrent Unit (GRU), a\nstate-of-the-art recurrent neural network. It takes two representations of\nmissing patterns, i.e., masking and time interval, and effectively incorporates\nthem into a deep model architecture so that it not only captures the long-term\ntemporal dependencies in time series, but also utilizes the missing patterns to\nachieve better prediction results. Experiments of time series classification\ntasks on real-world clinical datasets (MIMIC-III, PhysioNet) and synthetic\ndatasets demonstrate that our models achieve state-of-the-art performance and\nprovides useful insights for better understanding and utilization of missing\nvalues in time series analysis.", "field": [], "task": ["Imputation", "Multivariate Time Series Forecasting", "Multivariate Time Series Imputation", "Time Series", "Time Series Analysis", "Time Series Classification", "Time Series Prediction"], "method": [], "dataset": ["MuJoCo", "PhysioNet Challenge 2012"], "metric": ["MSE (10^2, 50% missing)", "MSE (10^-2, 50% missing)", "AUC", "AUC Stdev"], "title": "Recurrent Neural Networks for Multivariate Time Series with Missing Values"} {"abstract": "Unsupervised Domain Adaptation (UDA) makes predictions for the target domain\ndata while manual annotations are only available in the source domain. Previous\nmethods minimize the domain discrepancy neglecting the class information, which\nmay lead to misalignment and poor generalization performance. To address this\nissue, this paper proposes Contrastive Adaptation Network (CAN) optimizing a\nnew metric which explicitly models the intra-class domain discrepancy and the\ninter-class domain discrepancy. We design an alternating update strategy for\ntraining CAN in an end-to-end manner. Experiments on two real-world benchmarks\nOffice-31 and VisDA-2017 demonstrate that CAN performs favorably against the\nstate-of-the-art methods and produces more discriminative features.", "field": [], "task": ["Domain Adaptation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["VisDA2017", "Office-31"], "metric": ["Avg accuracy", "Average Accuracy"], "title": "Contrastive Adaptation Network for Unsupervised Domain Adaptation"} {"abstract": "Data augmentation is usually adopted to increase the amount of training data,\nprevent overfitting and improve the performance of deep models. However, in\npractice, random data augmentation, such as random image cropping, is\nlow-efficiency and might introduce many uncontrolled background noises. In this\npaper, we propose Weakly Supervised Data Augmentation Network (WS-DAN) to\nexplore the potential of data augmentation. Specifically, for each training\nimage, we first generate attention maps to represent the object's\ndiscriminative parts by weakly supervised learning. Next, we augment the image\nguided by these attention maps, including attention cropping and attention\ndropping. The proposed WS-DAN improves the classification accuracy in two\nfolds. In the first stage, images can be seen better since more discriminative\nparts' features will be extracted. 
In the second stage, attention regions\nprovide accurate location of object, which ensures our model to look at the\nobject closer and further improve the performance. Comprehensive experiments in\ncommon fine-grained visual classification datasets show that our WS-DAN\nsurpasses the state-of-the-art methods, which demonstrates its effectiveness.", "field": [], "task": ["Data Augmentation", "Fine-Grained Image Classification", "Image Cropping"], "method": [], "dataset": ["Stanford Cars", "CUB-200-2011", "FGVC Aircraft"], "metric": ["Accuracy"], "title": "See Better Before Looking Closer: Weakly Supervised Data Augmentation Network for Fine-Grained Visual Classification"} {"abstract": "Flow-based generative models, conceptually attractive due to tractability of both the exact log-likelihood computation and latent-variable inference, and efficiency of both training and sampling, has led to a number of impressive empirical successes and spawned many advanced variants and theoretical investigations. Despite their computational efficiency, the density estimation performance of flow-based generative models significantly falls behind those of state-of-the-art autoregressive models. In this work, we introduce masked convolutional generative flow (MaCow), a simple yet effective architecture of generative flow using masked convolution. By restricting the local connectivity in a small kernel, MaCow enjoys the properties of fast and stable training, and efficient sampling, while achieving significant improvements over Glow for density estimation on standard image benchmarks, considerably narrowing the gap to autoregressive models.", "field": [], "task": ["Density Estimation", "Image Generation"], "method": [], "dataset": ["ImageNet 64x64", "CelebA 256x256", "CIFAR-10"], "metric": ["bits/dimension", "bpd", "Bits per dim"], "title": "MaCow: Masked Convolutional Generative Flow"} {"abstract": "Recent developed deep unsupervised methods allow us to jointly learn representation and cluster unlabelled data. These deep clustering methods mainly focus on the correlation among samples, e.g., selecting high precision pairs to gradually tune the feature representation, which neglects other useful correlations. In this paper, we propose a novel clustering framework, named deep comprehensive correlation mining(DCCM), for exploring and taking full advantage of various kinds of correlations behind the unlabeled data from three aspects: 1) Instead of only using pair-wise information, pseudo-label supervision is proposed to investigate category information and learn discriminative features. 2) The features' robustness to image transformation of input space is fully explored, which benefits the network learning and significantly improves the performance. 3) The triplet mutual information among features is presented for clustering problem to lift the recently discovered instance-level deep mutual information to a triplet-level formation, which further helps to learn more discriminative features. 
Extensive experiments on several challenging datasets show that our method achieves good performance, e.g., attaining $62.3\\%$ clustering accuracy on CIFAR-10, which is $10.1\\%$ higher than the state-of-the-art results.", "field": [], "task": ["Deep Clustering", "Image Clustering"], "method": [], "dataset": ["Imagenet-dog-15", "CIFAR-100", "CIFAR-10", "Tiny-ImageNet", "ImageNet-10", "STL-10"], "metric": ["Train set", "Train Split", "ARI", "Backbone", "Train Set", "NMI", "Accuracy"], "title": "Deep Comprehensive Correlation Mining for Image Clustering"} {"abstract": "Person re-identification aims to establish the correct identity correspondences of a person moving through a non-overlapping multi-camera installation. Recent advances based on deep learning models for this task mainly focus on supervised learning scenarios where accurate annotations are assumed to be available for each setup. Annotating large scale datasets for person re-identification is demanding and burdensome, which renders the deployment of such supervised approaches to real-world applications infeasible. Therefore, it is necessary to train models without explicit supervision in an autonomous manner. In this paper, we propose an elegant and practical clustering approach for unsupervised person re-identification based on the cluster validity consideration. Concretely, we explore a fundamental concept in statistics, namely \\emph{dispersion}, to achieve a robust clustering criterion. Dispersion reflects the compactness of a cluster when employed at the intra-cluster level and reveals the separation when measured at the inter-cluster level. With this insight, we design a novel Dispersion-based Clustering (DBC) approach which can discover the underlying patterns in data. This approach considers a wider context of sample-level pairwise relationships to achieve a robust cluster affinity assessment which handles the complications may arise due to prevalent imbalanced data distributions. Additionally, our solution can automatically prioritize standalone data points and prevents inferior clustering. Our extensive experimental analysis on image and video re-identification benchmarks demonstrate that our method outperforms the state-of-the-art unsupervised methods by a significant margin. Code is available at https://github.com/gddingcs/Dispersion-based-Clustering.git.", "field": [], "task": ["Person Re-Identification", "Unsupervised Person Re-Identification"], "method": [], "dataset": ["DukeMTMC-reID", "Market-1501"], "metric": ["Rank-1", "Rank-10", "Rank-5", "MAP"], "title": "Towards better Validity: Dispersion based Clustering for Unsupervised Person Re-identification"} {"abstract": "Landmark localization is a challenging problem in computer vision with a multitude of applications. Recent deep learning based methods have shown improved results by regressing likelihood maps instead of regressing the coordinates directly. However, setting the precision of these regression targets during the training is a cumbersome process since it creates a trade-off between trainability vs localization accuracy. Using precise targets introduces a significant sampling bias and hence makes the training more difficult, whereas using imprecise targets results in inaccurate landmark detectors. In this paper, we introduce \"Adaloss\", an objective function that adapts itself during the training by updating the target precision based on the training statistics. 
This approach does not require setting problem-specific parameters and shows improved stability in training and better localization accuracy during inference. We demonstrate the effectiveness of our proposed method in three different applications of landmark localization: 1) the challenging task of precisely detecting catheter tips in medical X-ray images, 2) localizing surgical instruments in endoscopic images, and 3) localizing facial features on in-the-wild images where we show state-of-the-art results on the 300-W benchmark dataset.", "field": [], "task": ["Facial Landmark Detection", "Regression"], "method": [], "dataset": ["300W"], "metric": ["NME"], "title": "Adaloss: Adaptive Loss Function for Landmark Localization"} {"abstract": "Since the seminal work of Mikolov et al., word embeddings have become the preferred word representations for many natural language processing tasks. Document similarity measures extracted from word embeddings, such as the soft cosine measure (SCM) and the Word Mover's Distance (WMD), were reported to achieve state-of-the-art performance on semantic text similarity and text classification. Despite the strong performance of the WMD on text classification and semantic text similarity, its super-cubic average time complexity is impractical. The SCM has quadratic worst-case time complexity, but its performance on text classification has never been compared with the WMD. Recently, two word embedding regularization techniques were shown to reduce storage and memory costs, and to improve training speed, document processing speed, and task performance on word analogy, word similarity, and semantic text similarity. However, the effect of these techniques on text classification has not yet been studied. In our work, we investigate the individual and joint effect of the two word embedding regularization techniques on the document processing speed and the task performance of the SCM and the WMD on text classification. For evaluation, we use the $k$NN classifier and six standard datasets: BBCSPORT, TWITTER, OHSUMED, REUTERS-21578, AMAZON, and 20NEWS. We show 39% average $k$NN test error reduction with regularized word embeddings compared to non-regularized word embeddings. We describe a practical procedure for deriving such regularized embeddings through Cholesky factorization. We also show that the SCM with regularized word embeddings significantly outperforms the WMD on text classification and is over 10,000 times faster.", "field": [], "task": ["Document Classification", "Text Classification", "Word Embeddings"], "method": [], "dataset": ["BBCSport", "Amazon", "Reuters-21578", "20NEWS", "Twitter", "Ohsumed"], "metric": ["Accuracy"], "title": "Text classification with word embedding regularization and soft similarity measure"} {"abstract": "Estimating depth from a single RGB images is a fundamental task in computer vision, which is most directly solved using supervised deep learning. In the field of unsupervised learning of depth from a single RGB image, depth is not given explicitly. Existing work in the field receives either a stereo pair, a monocular video, or multiple views, and, using losses that are based on structure-from-motion, trains a depth estimation network. In this work, we rely, instead of different views, on depth from focus cues. Learning is based on a novel Point Spread Function convolutional layer, which applies location specific kernels that arise from the Circle-Of-Confusion in each image location. 
We evaluate our method on data derived from five common datasets for depth estimation and lightfield images, and present results that are on par with supervised methods on KITTI and Make3D datasets and outperform unsupervised learning approaches. Since the phenomenon of depth from defocus is not dataset specific, we hypothesize that learning based on it would overfit less to the specific content in each dataset. Our experiments show that this is indeed the case, and an estimator learned on one dataset using our method provides better results on other datasets, than the directly supervised methods.", "field": [], "task": ["Depth Estimation", "Lightfield", "Monocular Depth Estimation", "Structure from Motion"], "method": [], "dataset": ["NYU-Depth V2", "KITTI Eigen split"], "metric": ["RMSE", "absolute relative error"], "title": "Single Image Depth Estimation Trained via Depth from Defocus Cues"} {"abstract": "Temporal action localization is an important step towards video understanding. Most current action localization methods depend on untrimmed videos with full temporal annotations of action instances. However, it is expensive and time-consuming to annotate both action labels and temporal boundaries of videos. To this end, we propose a weakly supervised temporal action localization method that only requires video-level action instances as supervision during training. We propose a classification module to generate action labels for each segment in the video, and a deep metric learning module to learn the similarity between different action instances. We jointly optimize a balanced binary cross-entropy loss and a metric loss using a standard backpropagation algorithm. Extensive experiments demonstrate the effectiveness of both of these components in temporal localization. We evaluate our algorithm on two challenging untrimmed video datasets: THUMOS14 and ActivityNet1.2. Our approach improves the current state-of-the-art result for THUMOS14 by 6.5% mAP at IoU threshold 0.5, and achieves competitive performance for ActivityNet1.2.", "field": [], "task": ["Action Localization", "Metric Learning", "Temporal Action Localization", "Temporal Localization", "Video Understanding", "Weakly-supervised Temporal Action Localization", "Weakly Supervised Temporal Action Localization"], "method": [], "dataset": ["ActivityNet-1.2", "THUMOS\u201914"], "metric": ["mAP IOU@0.7", "mAP IOU@0.1", "mAP IOU@0.3", "mAP IOU@0.5"], "title": "Weakly Supervised Temporal Action Localization Using Deep Metric Learning"} {"abstract": "Entropy minimization has been widely used in unsupervised domain adaptation (UDA). However, existing works reveal that entropy minimization only may result into collapsed trivial solutions. In this paper, we propose to avoid trivial solutions by further introducing diversity maximization. In order to achieve the possible minimum target risk for UDA, we show that diversity maximization should be elaborately balanced with entropy minimization, the degree of which can be finely controlled with the use of deep embedded validation in an unsupervised manner. The proposed minimal-entropy diversity maximization (MEDM) can be directly implemented by stochastic gradient descent without use of adversarial learning. 
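A minimal sketch of the kind of objective described above, balancing per-sample entropy minimization on target predictions against a diversity term; treating diversity as the entropy of the batch-averaged prediction is a common instantiation and is used here as an assumption, with an illustrative trade-off weight.

```python
import torch
import torch.nn.functional as F

def entropy(p, eps=1e-8):
    return -(p * (p + eps).log()).sum(dim=-1)

def medm_style_loss(target_logits, diversity_weight=1.0):
    """Entropy minimization per target sample, diversity maximization over the batch.

    Minimizing the first term sharpens individual predictions; maximizing the
    entropy of the mean prediction discourages collapsing onto a single class.
    """
    probs = F.softmax(target_logits, dim=-1)
    ent_min = entropy(probs).mean()          # average per-sample entropy (to be minimized)
    ent_div = entropy(probs.mean(dim=0))     # entropy of the marginal prediction (to be maximized)
    return ent_min - diversity_weight * ent_div

logits = torch.randn(32, 12)                 # 32 unlabeled target samples, 12 classes
print(float(medm_style_loss(logits)))
```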
Empirical evidence demonstrates that MEDM outperforms the state-of-the-art methods on four popular domain adaptation datasets.", "field": [], "task": ["Domain Adaptation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["Office-31", "Office-Home", "ImageCLEF-DA"], "metric": ["Average Accuracy", "Accuracy"], "title": "Entropy Minimization vs. Diversity Maximization for Domain Adaptation"} {"abstract": "Leveraging physical knowledge described by partial differential equations (PDEs) is an appealing way to improve unsupervised video prediction methods. Since physics is too restrictive for describing the full visual content of generic videos, we introduce PhyDNet, a two-branch deep architecture, which explicitly disentangles PDE dynamics from unknown complementary information. A second contribution is to propose a new recurrent physical cell (PhyCell), inspired from data assimilation techniques, for performing PDE-constrained prediction in latent space. Extensive experiments conducted on four various datasets show the ability of PhyDNet to outperform state-of-the-art methods. Ablation studies also highlight the important gain brought out by both disentanglement and PDE-constrained prediction. Finally, we show that PhyDNet presents interesting features for dealing with missing data and long-term forecasting.", "field": [], "task": ["Video Prediction"], "method": [], "dataset": ["Human3.6M", "Moving MNIST"], "metric": ["MAE", "SSIM", "MSE"], "title": "Disentangling Physical Dynamics from Unknown Factors for Unsupervised Video Prediction"} {"abstract": "Existing methods for instance segmentation in videos typically involve multi-stage pipelines that follow the tracking-by-detection paradigm and model a video clip as a sequence of images. Multiple networks are used to detect objects in individual frames, and then associate these detections over time. Hence, these methods are often non-end-to-end trainable and highly tailored to specific tasks. In this paper, we propose a different approach that is well-suited to a variety of tasks involving instance segmentation in videos. In particular, we model a video clip as a single 3D spatio-temporal volume, and propose a novel approach that segments and tracks instances across space and time in a single stage. Our problem formulation is centered around the idea of spatio-temporal embeddings which are trained to cluster pixels belonging to a specific object instance over an entire video clip. To this end, we introduce (i) novel mixing functions that enhance the feature representation of spatio-temporal embeddings, and (ii) a single-stage, proposal-free network that can reason about temporal context. Our network is trained end-to-end to learn spatio-temporal embeddings as well as parameters required to cluster these embeddings, thus simplifying inference. Our method achieves state-of-the-art results across multiple datasets and tasks. 
Code and models are available at https://github.com/sabarim/STEm-Seg.", "field": [], "task": ["Instance Segmentation", "Semantic Segmentation", "Unsupervised Video Object Segmentation", "Video Instance Segmentation"], "method": [], "dataset": ["DAVIS 2017 (val)", "YouTube-VIS validation", "DAVIS 2016"], "metric": ["AR10", "F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "AR1", "AP75", "AP50", "Jaccard (Recall)", "F-measure (Mean)", "J&F", "mask AP"], "title": "STEm-Seg: Spatio-temporal Embeddings for Instance Segmentation in Videos"} {"abstract": "Geospatial object segmentation, as a particular semantic segmentation task, always faces larger-scale variation, larger intra-class variance of background, and foreground-background imbalance in the high spatial resolution (HSR) remote sensing imagery. However, general semantic segmentation methods mainly focus on scale variation in the natural scene, with inadequate consideration of the other two problems that usually happen in the large area earth observation scene. In this paper, we argue that the problems lie in the lack of foreground modeling and propose a foreground-aware relation network (FarSeg) from the perspectives of relation-based and optimization-based foreground modeling, to alleviate the above two problems. From the perspective of relation, FarSeg enhances the discrimination of foreground features via foreground-correlated contexts associated by learning the foreground-scene relation. Meanwhile, from the perspective of optimization, a foreground-aware optimization is proposed to focus on foreground examples and hard examples of background during training for a balanced optimization. The experimental results obtained using a large scale dataset suggest that the proposed method is superior to the state-of-the-art general semantic segmentation methods and achieves a better trade-off between speed and accuracy. Code has been made available at: \\url{https://github.com/Z-Zheng/FarSeg}.", "field": [], "task": ["Semantic Segmentation"], "method": [], "dataset": ["iSAID"], "metric": ["mIoU"], "title": "Foreground-Aware Relation Network for Geospatial Object Segmentation in High Spatial Resolution Remote Sensing Imagery"} {"abstract": "Real-world data is predominantly unbalanced and long-tailed, but deep models struggle to recognize rare classes in the presence of frequent classes. Often, classes can be accompanied by side information like textual descriptions, but it is not fully clear how to use them for learning with unbalanced long-tail data. Such descriptions have been mostly used in (Generalized) Zero-shot learning (ZSL), suggesting that ZSL with class descriptions may also be useful for long-tail distributions. We describe DRAGON, a late-fusion architecture for long-tail learning with class descriptors. It learns to (1) correct the bias towards head classes on a sample-by-sample basis; and (2) fuse information from class-descriptions to improve the tail-class accuracy. We also introduce new benchmarks CUB-LT, SUN-LT, AWA-LT for long-tail learning with class-descriptions, building on existing learning-with-attributes datasets and a version of Imagenet-LT with class descriptors. DRAGON outperforms state-of-the-art models on the new benchmarks. 
It is also a new SoTA on existing benchmarks for GFSL with class descriptors (GFSL-d) and standard (vision-only) long-tailed learning ImageNet-LT, CIFAR-10, 100, and Places365.", "field": [], "task": ["Few-Shot Learning", "Generalized Few-Shot Learning", "Generalized Zero-Shot Learning", "Long-tail Learning", "Long-tail learning with class descriptors", "Zero-Shot Learning"], "method": [], "dataset": ["SUN-LT", "Places-LT", "AWA2", "CIFAR-10-LT (\u03c1=100)", "CIFAR-100-LT (\u03c1=10)", "ImageNet-LT", "CIFAR-10-LT (\u03c1=10)", "AWA-LT", "CIFAR-100-LT (\u03c1=100)", "ImageNet-LT-d", "CUB-LT", "SUN", "CUB"], "metric": ["Per-Class Accuracy (1-shot)", "Per-Class Accuracy (20-shots)", "Long-Tailed Accuracy", "Error Rate", "Per-Class Accuracy (2-shots)", "Per-Class Accuracy (2-shots)", "Per-Class Accuracy (5-shots)", "Per-Class Accuracy (10-shots)", "Per-Class Accuracy"], "title": "From Generalized zero-shot learning to long-tail with class descriptors"} {"abstract": "The low-level details and high-level semantics are both essential to the semantic segmentation task. However, to speed up the model inference, current approaches almost always sacrifice the low-level details, which leads to a considerable accuracy decrease. We propose to treat these spatial details and categorical semantics separately to achieve high accuracy and high efficiency for realtime semantic segmentation. To this end, we propose an efficient and effective architecture with a good trade-off between speed and accuracy, termed Bilateral Segmentation Network (BiSeNet V2). This architecture involves: (i) a Detail Branch, with wide channels and shallow layers to capture low-level details and generate high-resolution feature representation; (ii) a Semantic Branch, with narrow channels and deep layers to obtain high-level semantic context. The Semantic Branch is lightweight due to reducing the channel capacity and a fast-downsampling strategy. Furthermore, we design a Guided Aggregation Layer to enhance mutual connections and fuse both types of feature representation. Besides, a booster training strategy is designed to improve the segmentation performance without any extra inference cost. Extensive quantitative and qualitative evaluations demonstrate that the proposed architecture performs favourably against a few state-of-the-art real-time semantic segmentation approaches. Specifically, for a 2,048x1,024 input, we achieve 72.6% Mean IoU on the Cityscapes test set with a speed of 156 FPS on one NVIDIA GeForce GTX 1080 Ti card, which is significantly faster than existing methods, yet we achieve better segmentation accuracy.", "field": [], "task": ["Real-Time Semantic Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["COCO-Stuff", "Cityscapes test", "CamVid"], "metric": ["Time (ms)", "Frame (fps)", "mIoU"], "title": "BiSeNet V2: Bilateral Network with Guided Aggregation for Real-time Semantic Segmentation"} {"abstract": "We present TDNet, a temporally distributed network designed for fast and accurate video semantic segmentation. We observe that features extracted from a certain high-level layer of a deep CNN can be approximated by composing features extracted from several shallower sub-networks. Leveraging the inherent temporal continuity in videos, we distribute these sub-networks over sequential frames. Therefore, at each time step, we only need to perform a lightweight computation to extract a sub-features group from a single sub-network. 
The full features used for segmentation are then recomposed by application of a novel attention propagation module that compensates for geometry deformation between frames. A grouped knowledge distillation loss is also introduced to further improve the representation power at both full and sub-feature levels. Experiments on Cityscapes, CamVid, and NYUD-v2 demonstrate that our method achieves state-of-the-art accuracy with significantly faster speed and lower latency.", "field": [], "task": ["Knowledge Distillation", "Real-Time Semantic Segmentation", "Semantic Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["CamVid", "NYU Depth v2", "Cityscapes val", "Cityscapes test"], "metric": ["Speed(ms/f)", "Time (ms)", "Mean IoU", "mIoU", "Frame (fps)"], "title": "Temporally Distributed Networks for Fast Video Semantic Segmentation"} {"abstract": "Modern face alignment methods have become quite accurate at predicting the locations of facial landmarks, but they do not typically estimate the uncertainty of their predicted locations nor predict whether landmarks are visible. In this paper, we present a novel framework for jointly predicting landmark locations, associated uncertainties of these predicted locations, and landmark visibilities. We model these as mixed random variables and estimate them using a deep network trained with our proposed Location, Uncertainty, and Visibility Likelihood (LUVLi) loss. In addition, we release an entirely new labeling of a large face alignment dataset with over 19,000 face images in a full range of head poses. Each face is manually labeled with the ground-truth locations of 68 landmarks, with the additional information of whether each landmark is unoccluded, self-occluded (due to extreme head poses), or externally occluded. Not only does our joint estimation yield accurate estimates of the uncertainty of predicted landmark locations, but it also yields state-of-the-art estimates for the landmark locations themselves on multiple standard face alignment datasets. Our method's estimates of the uncertainty of predicted landmark locations could be used to automatically identify input images on which face alignment fails, which can be critical for downstream tasks.", "field": [], "task": ["Face Alignment"], "method": [], "dataset": ["MERL-RAV"], "metric": ["NME"], "title": "LUVLi Face Alignment: Estimating Landmarks' Location, Uncertainty, and Visibility Likelihood"} {"abstract": "Score-based generative models can produce high quality image samples comparable to GANs, without requiring adversarial optimization. However, existing training procedures are limited to images of low resolution (typically below 32x32), and can be unstable under some settings. We provide a new theoretical analysis of learning and sampling from score models in high dimensional spaces, explaining existing failure modes and motivating new solutions that generalize across datasets. To enhance stability, we also propose to maintain an exponential moving average of model weights. With these improvements, we can effortlessly scale score-based generative models to images with unprecedented resolutions ranging from 64x64 to 256x256. 
Our score-based models can generate high-fidelity samples that rival best-in-class GANs on various image datasets, including CelebA, FFHQ, and multiple LSUN categories.", "field": [], "task": ["Image Generation"], "method": [], "dataset": ["CIFAR-10"], "metric": ["Inception score", "FID"], "title": "Improved Techniques for Training Score-Based Generative Models"} {"abstract": "We introduce a general-purpose conditioning method for neural networks called\nFiLM: Feature-wise Linear Modulation. FiLM layers influence neural network\ncomputation via a simple, feature-wise affine transformation based on\nconditioning information. We show that FiLM layers are highly effective for\nvisual reasoning - answering image-related questions which require a\nmulti-step, high-level process - a task which has proven difficult for standard\ndeep learning methods that do not explicitly model reasoning. Specifically, we\nshow on visual reasoning tasks that FiLM layers 1) halve state-of-the-art error\nfor the CLEVR benchmark, 2) modulate features in a coherent manner, 3) are\nrobust to ablations and architectural modifications, and 4) generalize well to\nchallenging, new data from few examples or even zero-shot.", "field": [], "task": ["Image Retrieval with Multi-Modal Query", "Visual Question Answering", "Visual Reasoning"], "method": [], "dataset": ["CLEVR-Humans", "MIT-States", "CLEVR"], "metric": ["Recall@1", "Recall@5", "Recall@10", "Accuracy"], "title": "FiLM: Visual Reasoning with a General Conditioning Layer"} {"abstract": "Current research in text simplification has been hampered by two central problems: (i) the small amount of high-quality parallel simplification data available, and (ii) the lack of explicit annotations of simplification operations, such as deletions or substitutions, on existing data. While the recently introduced Newsela corpus has alleviated the first problem, simplifications still need to be learned directly from parallel text using black-box, end-to-end approaches rather than from explicit annotations. These complex-simple parallel sentence pairs often differ to such a high degree that generalization becomes difficult. End-to-end models also make it hard to interpret what is actually learned from data. We propose a method that decomposes the task of TS into its sub-problems. We devise a way to automatically identify operations in a parallel corpus and introduce a sequence-labeling approach based on these annotations. Finally, we provide insights on the types of transformations that different approaches can model.", "field": [], "task": ["Machine Translation", "Sentence Compression", "Text Simplification"], "method": [], "dataset": ["PWKP / WikiSmall", "Newsela", "TurkCorpus"], "metric": ["SARI (EASSE>=0.2.1)", "SARI"], "title": "Learning How to Simplify From Explicit Labeling of Complex-Simplified Text Pairs"} {"abstract": "Arbitrary shape text detection in natural scenes is an extremely challenging task. Unlike existing text detection approaches that only perceive texts based on limited feature representations, we propose a novel framework, namely TextFuseNet, to exploit the use of richer features fused for text detection. More specifically, we propose to perceive texts from three levels of feature representations, i.e., character-, word- and global-level, and then introduce a novel text representation fusion technique to help achieve robust arbitrary text detection. 
The multi-level feature representation can adequately describe texts by dissecting them into individual characters while still maintaining their general semantics. TextFuseNet then collects and merges the texts\u2019 features from different levels using a multi-path fusion architecture which can effectively align and fuse different representations. In practice, our proposed TextFuseNet can learn a more adequate description of arbitrary shapes texts, suppressing false positives and producing more accurate detection results. Our proposed framework can also be trained with weak supervision for those datasets that lack character-level annotations. Experiments on several datasets show that the proposed TextFuseNet achieves state-of-the-art performance. Specifically, we achieve an F-measure of 94.3% on ICDAR2013, 92.1% on ICDAR2015, 87.1% on Total-Text and 86.6% on CTW-1500, respectively.", "field": [], "task": ["Scene Text", "Scene Text Detection"], "method": [], "dataset": ["ICDAR 2015", "SCUT-CTW1500", "IC19-Art", "Total-Text", "ICDAR 2013"], "metric": ["F-Measure", "Recall", "Precision", "H-Mean"], "title": "TextFuseNet: Scene Text Detection with Richer Fused Features"} {"abstract": "We study a formalization of the grammar induction problem that models sentences as being generated by a compound probabilistic context-free grammar. In contrast to traditional formulations which learn a single stochastic grammar, our grammar's rule probabilities are modulated by a per-sentence continuous latent variable, which induces marginal dependencies beyond the traditional context-free assumptions. Inference in this grammar is performed by collapsed variational inference, in which an amortized variational posterior is placed on the continuous variable, and the latent trees are marginalized out with dynamic programming. Experiments on English and Chinese show the effectiveness of our approach compared to recent state-of-the-art methods when evaluated on unsupervised parsing.", "field": [], "task": ["Constituency Grammar Induction", "Variational Inference"], "method": [], "dataset": ["PTB"], "metric": ["Max F1 (WSJ)", "Mean F1 (WSJ)"], "title": "Compound Probabilistic Context-Free Grammars for Grammar Induction"} {"abstract": "Facial landmark localization aims to detect the predefined points of human faces, and the topic has been rapidly improved with the recent development of neural network based methods. However, it remains a challenging task when dealing with faces in unconstrained scenarios, especially with large pose variations. In this paper, we target the problem of facial landmark localization across large poses and address this task based on a split-and-aggregate strategy. To split the search space, we propose a set of anchor templates as references for regression, which well addresses the large variations of face poses. Based on the prediction of each anchor template, we propose to aggregate the results, which can reduce the landmark uncertainty due to the large poses. Overall, our proposed approach, named AnchorFace, obtains state-of-the-art results with extremely efficient inference speed on four challenging benchmarks, i.e. AFLW, 300W, Menpo, and WFLW dataset. 
Code will be available at https://github.com/nothingelse92/AnchorFace.", "field": [], "task": ["Face Alignment", "Facial Landmark Detection", "Regression"], "method": [], "dataset": ["WFLW", "300W", "AFLW-Full", "AFLW-Front"], "metric": ["Mean NME", "Fullset (public)", "AUC@0.1 (all)", "ME (%, all) ", "FR@0.1(%, all)", "Mean NME "], "title": "AnchorFace: An Anchor-based Facial Landmark Detector Across Large Poses"} {"abstract": "Most existing neural models for math word problems exploit the Seq2Seq model to generate solution\r\nexpressions sequentially from left to right, whose\r\nresults are far from satisfactory due to the lack\r\nof a goal-driven mechanism commonly seen in human problem solving. This paper proposes a tree-structured neural model to generate expression tree\r\nin a goal-driven manner. Given a math word problem, the model first identifies and encodes its goal\r\nto achieve, and then the goal gets decomposed into\r\nsub-goals combined by an operator in a top-down\r\nrecursive way. The whole process is repeated until the goal is simple enough to be realized by a\r\nknown quantity as a leaf node. During the process,\r\ntwo-layer gated-feedforward networks are designed\r\nto implement each step of goal decomposition, and\r\na recursive neural network is used to encode fulfilled subtrees into subtree embeddings, which provides a better representation of subtrees than the\r\nsimple goals of subtrees. Experimental results on\r\nthe dataset Math23K have shown that our tree-structured model significantly outperforms several\r\nstate-of-the-art models.", "field": [], "task": ["Math Word Problem Solving"], "method": [], "dataset": ["Math23K"], "metric": ["Accuracy(5-fold)"], "title": "A Goal-Driven Tree-Structured Neural Model for Math Word Problems"} {"abstract": "Attentional, RNN-based encoder-decoder models for abstractive summarization\nhave achieved good performance on short input and output sequences. For longer\ndocuments and summaries however these models often include repetitive and\nincoherent phrases. We introduce a neural network model with a novel\nintra-attention that attends over the input and continuously generated output\nseparately, and a new training method that combines standard supervised word\nprediction and reinforcement learning (RL). Models trained only with supervised\nlearning often exhibit \"exposure bias\" - they assume ground truth is provided\nat each step during training. However, when standard word prediction is\ncombined with the global sequence prediction training of RL the resulting\nsummaries become more readable. We evaluate this model on the CNN/Daily Mail\nand New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the\nCNN/Daily Mail dataset, an improvement over previous state-of-the-art models.\nHuman evaluation also shows that our model produces higher quality summaries.", "field": [], "task": ["Abstractive Text Summarization", "Text Summarization"], "method": [], "dataset": ["CNN / Daily Mail", "CNN / Daily Mail (Anonymized)"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "A Deep Reinforced Model for Abstractive Summarization"} {"abstract": "We introduce a new function-preserving transformation for efficient neural\narchitecture search. This network transformation allows reusing previously\ntrained networks and existing successful architectures, which improves sample\nefficiency. 
We aim to address the limitation of current network transformation\noperations that can only perform layer-level architecture modifications, such\nas adding (pruning) filters or inserting (removing) a layer, which fails to\nchange the topology of connection paths. Our proposed path-level transformation\noperations enable the meta-controller to modify the path topology of the given\nnetwork while keeping the merits of reusing weights, and thus allow efficiently\ndesigning effective structures with complex path topologies like Inception\nmodels. We further propose a bidirectional tree-structured reinforcement\nlearning meta-controller to explore a simple yet highly expressive\ntree-structured architecture space that can be viewed as a generalization of\nmulti-branch architectures. We experimented on the image classification\ndatasets with limited computational resources (about 200 GPU-hours), where we\nobserved improved parameter efficiency and better test results (97.70% test\naccuracy on CIFAR-10 with 14.3M parameters and 74.6% top-1 accuracy on ImageNet\nin the mobile setting), demonstrating the effectiveness and transferability of\nour designed architectures.", "field": [], "task": ["Image Classification", "Neural Architecture Search"], "method": [], "dataset": ["CIFAR-10 Image Classification"], "metric": ["Percentage error", "Params"], "title": "Path-Level Network Transformation for Efficient Architecture Search"} {"abstract": "Being a fundamental component in training and inference, data processing has not been systematically considered in the human pose estimation community, to the best of our knowledge. In this paper, we focus on this problem and find that the devil of human pose estimation evolution is in the biased data processing. Specifically, by investigating the standard data processing in state-of-the-art approaches, mainly including coordinate system transformation and keypoint format transformation (i.e., encoding and decoding), we find that the results obtained by the common flipping strategy are unaligned with the original ones in inference. Moreover, there is a statistical error in some keypoint format transformation methods. The two problems couple together, significantly degrade the pose estimation performance, and thus lay a trap for the research community. This trap has given rise to many suboptimal remedies, which are always unreported, confusing, but influential. By causing failures in reproduction and unfairness in comparison, these unreported remedies seriously impede technological development. To tackle this dilemma from the source, we propose Unbiased Data Processing (UDP), consisting of two techniques that address the two aforementioned problems respectively (i.e., unbiased coordinate system transformation and unbiased keypoint format transformation). As a model-agnostic approach and a superior solution, UDP successfully pushes the performance boundary of human pose estimation and offers a higher and more reliable baseline for the research community. Code is publicly available at https://github.com/HuangJunJie2017/UDP-Pose", "field": [], "task": ["Pose Estimation"], "method": [], "dataset": ["COCO test-dev"], "metric": ["APM", "AP75", "AP", "APL", "AP50", "AR"], "title": "The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation"} {"abstract": "We present a semi-parametric approach to photographic image synthesis from\nsemantic layouts. The approach combines the complementary strengths of\nparametric and nonparametric techniques. 
The nonparametric component is a\nmemory bank of image segments constructed from a training set of images. Given\na novel semantic layout at test time, the memory bank is used to retrieve\nphotographic references that are provided as source material to a deep network.\nThe synthesis is performed by a deep network that draws on the provided\nphotographic material. Experiments on multiple semantic segmentation datasets\nshow that the presented approach yields considerably more realistic images than\nrecent purely parametric techniques. The results are shown in the supplementary\nvideo at https://youtu.be/U4Q98lenGLQ", "field": [], "task": ["Image Generation", "Image-to-Image Translation", "Semantic Segmentation"], "method": [], "dataset": ["COCO-Stuff Labels-to-Photos", "Cityscapes Labels-to-Photo", "ADE20K-Outdoor Labels-to-Photos"], "metric": ["Accuracy", "FID", "Per-pixel Accuracy", "mIoU"], "title": "Semi-parametric Image Synthesis"} {"abstract": "Graph-structured data such as social networks, functional brain networks,\ngene regulatory networks, communications networks have brought the interest in\ngeneralizing deep learning techniques to graph domains. In this paper, we are\ninterested to design neural networks for graphs with variable length in order\nto solve learning problems such as vertex classification, graph classification,\ngraph regression, and graph generative tasks. Most existing works have focused\non recurrent neural networks (RNNs) to learn meaningful representations of\ngraphs, and more recently new convolutional neural networks (ConvNets) have\nbeen introduced. In this work, we want to compare rigorously these two\nfundamental families of architectures to solve graph learning tasks. We review\nexisting graph RNN and ConvNet architectures, and propose natural extension of\nLSTM and ConvNet to graphs with arbitrary size. Then, we design a set of\nanalytically controlled experiments on two basic graph problems, i.e. subgraph\nmatching and graph clustering, to test the different architectures. Numerical\nresults show that the proposed graph ConvNets are 3-17% more accurate and\n1.5-4x faster than graph RNNs. Graph ConvNets are also 36% more accurate than\nvariational (non-learning) techniques. Finally, the most effective graph\nConvNet architecture uses gated edges and residuality. Residuality plays an\nessential role to learn multi-layer architectures as they provide a 10% gain of\nperformance.", "field": [], "task": ["Graph Classification", "Graph Clustering", "Graph Learning", "Graph Regression", "Node Classification", "Regression"], "method": [], "dataset": ["CIFAR10 100k", "ZINC-500k", "PATTERN 100k"], "metric": ["MAE", "Accuracy (%)"], "title": "Residual Gated Graph ConvNets"} {"abstract": "For natural language understanding (NLU) technology to be maximally useful,\nboth practically and as a scientific object of study, it must be general: it\nmust be able to process language in a way that is not exclusively tailored to\nany one specific task or dataset. In pursuit of this objective, we introduce\nthe General Language Understanding Evaluation benchmark (GLUE), a tool for\nevaluating and analyzing the performance of models across a diverse range of\nexisting NLU tasks. GLUE is model-agnostic, but it incentivizes sharing\nknowledge across tasks because certain tasks have very limited training data.\nWe further provide a hand-crafted diagnostic test suite that enables detailed\nlinguistic analysis of NLU models. 
We evaluate baselines based on current\nmethods for multi-task and transfer learning and find that they do not\nimmediately give substantial improvements over the aggregate performance of\ntraining a separate model per task, indicating room for improvement in\ndeveloping general and robust NLU systems.", "field": [], "task": ["Natural Language Inference", "Natural Language Understanding", "Transfer Learning"], "method": [], "dataset": ["MultiNLI"], "metric": ["Mismatched", "Matched"], "title": "GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding"} {"abstract": "Link prediction for knowledge graphs is the task of predicting missing\nrelationships between entities. Previous work on link prediction has focused on\nshallow, fast models which can scale to large knowledge graphs. However, these\nmodels learn less expressive features than deep, multi-layer models -- which\npotentially limits performance. In this work, we introduce ConvE, a multi-layer\nconvolutional network model for link prediction, and report state-of-the-art\nresults for several established datasets. We also show that the model is highly\nparameter efficient, yielding the same performance as DistMult and R-GCN with\n8x and 17x fewer parameters. Analysis of our model suggests that it is\nparticularly effective at modelling nodes with high indegree -- which are\ncommon in highly-connected, complex knowledge graphs such as Freebase and\nYAGO3. In addition, it has been noted that the WN18 and FB15k datasets suffer\nfrom test set leakage, due to inverse relations from the training set being\npresent in the test set -- however, the extent of this issue has so far not\nbeen quantified. We find this problem to be severe: a simple rule-based model\ncan achieve state-of-the-art results on both WN18 and FB15k. To ensure that\nmodels are evaluated on datasets where simply exploiting inverse relations\ncannot yield competitive results, we investigate and validate several commonly\nused datasets -- deriving robust variants where necessary. We then perform\nexperiments on these robust datasets for our own and several previously\nproposed models and find that ConvE achieves state-of-the-art Mean Reciprocal\nRank across most datasets.", "field": [], "task": ["Knowledge Graph Embeddings", "Knowledge Graphs", "Link Prediction"], "method": [], "dataset": ["WN18RR", " FB15k", "FB15k-237", "YAGO3-10", "WN18"], "metric": ["Hits@3", "Hits@1", "MR", "MRR", "Hits@10"], "title": "Convolutional 2D Knowledge Graph Embeddings"} {"abstract": "Recent deep networks are capable of memorizing the entire data even when the\nlabels are completely random. To overcome the overfitting on corrupted labels,\nwe propose a novel technique of learning another neural network, called\nMentorNet, to supervise the training of the base deep networks, namely,\nStudentNet. During training, MentorNet provides a curriculum (sample weighting\nscheme) for StudentNet to focus on the sample the label of which is probably\ncorrect. Unlike the existing curriculum that is usually predefined by human\nexperts, MentorNet learns a data-driven curriculum dynamically with StudentNet.\nExperimental results demonstrate that our approach can significantly improve\nthe generalization performance of deep networks trained on corrupted training\ndata. Notably, to the best of our knowledge, we achieve the best-published\nresult on WebVision, a large benchmark containing 2.2 million images of\nreal-world noisy labels. 
The code is at https://github.com/google/mentornet", "field": [], "task": [], "method": [], "dataset": ["mini WebVision 1.0"], "metric": ["ImageNet Top-1 Accuracy", "ImageNet Top-5 Accuracy"], "title": "MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels"} {"abstract": "This paper describes our results for the TRAC 2020 competition held together with the conference LREC 2020. Our team name was Ms8qQxMbnjJMgYcw. The competition consisted of 2 subtasks in 3 languages (Bengali, English and Hindi) where the participants' task was to classify aggression in short texts from social media and decide whether it is gendered or not. We used a single BERT-based system with two outputs for all tasks simultaneously. Our model placed first in English and second in Bengali gendered text classification competition tasks with 0.87 and 0.93 in F1-score respectively.", "field": [], "task": ["Text Classification"], "method": [], "dataset": ["TRAC2-Benghali. Task 2.", "TRAC2-English. Task2."], "metric": ["F1"], "title": "BERT of all trades, master of some"} {"abstract": "Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of\ncurrent research. Combining such an embedding model with logic rules has\nrecently attracted increasing attention. Most previous attempts made a one-time\ninjection of logic rules, ignoring the interactive nature between embedding\nlearning and logical inference. And they focused only on hard rules, which\nalways hold with no exception and usually require extensive manual effort to\ncreate or validate. In this paper, we propose Rule-Guided Embedding (RUGE), a\nnovel paradigm of KG embedding with iterative guidance from soft rules. RUGE\nenables an embedding model to learn simultaneously from 1) labeled triples that\nhave been directly observed in a given KG, 2) unlabeled triples whose labels\nare going to be predicted iteratively, and 3) soft rules with various\nconfidence levels extracted automatically from the KG. In the learning process,\nRUGE iteratively queries rules to obtain soft labels for unlabeled triples, and\nintegrates such newly labeled triples to update the embedding model. Through\nthis iterative procedure, knowledge embodied in logic rules may be better\ntransferred into the learned embeddings. We evaluate RUGE in link prediction on\nFreebase and YAGO. Experimental results show that: 1) with rule knowledge\ninjected iteratively, RUGE achieves significant and consistent improvements\nover state-of-the-art baselines; and 2) despite their uncertainties,\nautomatically extracted soft rules are highly beneficial to KG embedding, even\nthose with moderate confidence levels. The code and data used for this paper\ncan be obtained from https://github.com/iieir-km/RUGE.", "field": [], "task": ["Graph Embedding", "Knowledge Graph Embedding", "Knowledge Graphs", "Link Prediction"], "method": [], "dataset": ["FB15k", "YAGO37"], "metric": ["Hits@3", "Hits@5", "Hits@1", "MRR", "Hits@10"], "title": "Knowledge Graph Embedding with Iterative Guidance from Soft Rules"} {"abstract": "We present a novel training framework for neural sequence models,\nparticularly for grounded dialog generation. The standard training paradigm for\nthese models is maximum likelihood estimation (MLE), or minimizing the\ncross-entropy of the human responses. 
Across a variety of domains, a recurring\nproblem with MLE trained generative neural dialog models (G) is that they tend\nto produce 'safe' and generic responses (\"I don't know\", \"I can't tell\"). In\ncontrast, discriminative dialog models (D) that are trained to rank a list of\ncandidate human responses outperform their generative counterparts; in terms of\nautomatic metrics, diversity, and informativeness of the responses. However, D\nis not useful in practice since it cannot be deployed to have real\nconversations with users.\n Our work aims to achieve the best of both worlds -- the practical usefulness\nof G and the strong performance of D -- via knowledge transfer from D to G. Our\nprimary contribution is an end-to-end trainable generative visual dialog model,\nwhere G receives gradients from D as a perceptual (not adversarial) loss of the\nsequence sampled from G. We leverage the recently proposed Gumbel-Softmax (GS)\napproximation to the discrete distribution -- specifically, an RNN augmented\nwith a sequence of GS samplers, coupled with the straight-through gradient\nestimator to enable end-to-end differentiability. We also introduce a stronger\nencoder for visual dialog, and employ a self-attention mechanism for answer\nencoding along with a metric learning loss to aid D in better capturing\nsemantic similarities in answer responses. Overall, our proposed model\noutperforms state-of-the-art on the VisDial dataset by a significant margin\n(2.67% on recall@10). The source code can be downloaded from\nhttps://github.com/jiasenlu/visDial.pytorch.", "field": [], "task": ["Metric Learning", "Transfer Learning", "Visual Dialog"], "method": [], "dataset": ["VisDial v0.9 val"], "metric": ["R@10", "R@5", "Mean Rank", "MRR", "R@1"], "title": "Best of Both Worlds: Transferring Knowledge from Discriminative Learning to a Generative Visual Dialog Model"} {"abstract": "Segmentation of the pixels corresponding to human skin is an essential first step in multiple applications ranging from surveillance to heart-rate estimation from remote-photoplethysmography. However, the existing literature considers the problem only in the visible-range of the EM-spectrum which limits their utility in low or no light settings where the criticality of the application is higher. To alleviate this problem, we consider the problem of skin segmentation from the Near-infrared images. However, Deep learning based state-of-the-art segmentation techniques demands large amounts of labelled data that is unavailable for the current problem. Therefore we cast the skin segmentation problem as that of target-independent Unsupervised Domain Adaptation (UDA) where we use the data from the Red-channel of the visible-range to develop skin segmentation algorithm on NIR images. We propose a method for target-independent segmentation where the 'nearest-clone' of a target image in the source domain is searched and used as a proxy in the segmentation network trained only on the source domain. We prove the existence of 'nearest-clone' and propose a method to find it through an optimization algorithm over the latent space of a Deep generative model based on variational inference. We demonstrate the efficacy of the proposed method for NIR skin segmentation over the state-of-the-art UDA segmentation methods on the two newly created skin segmentation datasets in NIR domain despite not having access to the target NIR data. 
Additionally, we report state-of-the-art results for adaptation from Synthia to Cityscapes, which is a popular setting in Unsupervised Domain Adaptation for semantic segmentation. The code and datasets are available at https://github.com/ambekarsameer96/GLSS.", "field": [], "task": ["Domain Adaptation", "Heart rate estimation", "Image-to-Image Translation", "Semantic Segmentation", "Unsupervised Domain Adaptation", "Variational Inference"], "method": [], "dataset": ["SYNTHIA-to-Cityscapes"], "metric": ["mIoU (13 classes)"], "title": "Unsupervised Domain Adaptation for Semantic Segmentation of NIR Images through Generative Latent Search"} {"abstract": "Person re-identification (person re-ID) is mostly viewed as an image\nretrieval problem. This task aims to search for a query person in a large image\npool. In practice, person re-ID usually adopts automatic detectors to obtain\ncropped pedestrian images. However, this process suffers from two types of\ndetector errors: excessive background and part missing. Both errors deteriorate\nthe quality of pedestrian alignment and may compromise pedestrian matching due\nto the position and scale variances. To address the misalignment problem, we\npropose that alignment can be learned from an identification procedure. We\nintroduce the pedestrian alignment network (PAN) which allows discriminative\nembedding learning and pedestrian alignment without extra annotations. Our key\nobservation is that when the convolutional neural network (CNN) learns to\ndiscriminate between different identities, the learned feature maps usually\nexhibit strong activations on the human body rather than the background. The\nproposed network thus takes advantage of this attention mechanism to adaptively\nlocate and align pedestrians within a bounding box. Visual examples show that\npedestrians are better aligned with PAN. Experiments on three large-scale re-ID\ndatasets confirm that PAN improves the discriminative ability of the feature\nembeddings and yields competitive accuracy with the state-of-the-art methods.", "field": [], "task": ["Image Retrieval", "Large-Scale Person Re-Identification", "Person Re-Identification"], "method": [], "dataset": ["DukeMTMC-reID", "Market-1501", "CUHK03 labeled", "CUHK03 (detected)"], "metric": ["Rank-1", "MAP"], "title": "Pedestrian Alignment Network for Large-scale Person Re-identification"} {"abstract": "Of late, weakly supervised object detection has become of great importance in\nobject recognition. Based on deep learning, weakly supervised detectors have\nachieved many promising results. However, compared with fully supervised\ndetection, it is more challenging to train deep network based detectors in a\nweakly supervised manner. Here we formulate weakly supervised detection as a\nMultiple Instance Learning (MIL) problem, where instance classifiers (object\ndetectors) are put into the network as hidden nodes. We propose a novel online\ninstance classifier refinement algorithm to integrate MIL and the instance\nclassifier refinement procedure into a single deep network, and train the\nnetwork end-to-end with only image-level supervision, i.e., without object\nlocation information. More precisely, instance labels inferred from weak\nsupervision are propagated to their spatially overlapped instances to refine\nthe instance classifier online. The iterative instance classifier refinement\nprocedure is implemented using multiple streams in a deep network, where each\nstream supervises its latter stream. 
Weakly supervised object detection\nexperiments are carried out on the challenging PASCAL VOC 2007 and 2012\nbenchmarks. We obtain 47% mAP on VOC 2007 that significantly outperforms the\nprevious state-of-the-art.", "field": [], "task": ["Multiple Instance Learning", "Object Detection", "Object Recognition", "Weakly Supervised Object Detection"], "method": [], "dataset": ["PASCAL VOC 2012 test", "PASCAL VOC 2007", "ImageNet"], "metric": ["MAP"], "title": "Multiple Instance Detection Network with Online Instance Classifier Refinement"} {"abstract": "Lipreading is the task of decoding text from the movement of a speaker's\nmouth. Traditional approaches separated the problem into two stages: designing\nor learning visual features, and prediction. More recent deep lipreading\napproaches are end-to-end trainable (Wand et al., 2016; Chung & Zisserman,\n2016a). However, existing work on models trained end-to-end perform only word\nclassification, rather than sentence-level sequence prediction. Studies have\nshown that human lipreading performance increases for longer words (Easton &\nBasala, 1982), indicating the importance of features capturing temporal context\nin an ambiguous communication channel. Motivated by this observation, we\npresent LipNet, a model that maps a variable-length sequence of video frames to\ntext, making use of spatiotemporal convolutions, a recurrent network, and the\nconnectionist temporal classification loss, trained entirely end-to-end. To the\nbest of our knowledge, LipNet is the first end-to-end sentence-level lipreading\nmodel that simultaneously learns spatiotemporal visual features and a sequence\nmodel. On the GRID corpus, LipNet achieves 95.2% accuracy in sentence-level,\noverlapped speaker split task, outperforming experienced human lipreaders and\nthe previous 86.4% word-level state-of-the-art accuracy (Gergen et al., 2016).", "field": [], "task": ["Lipreading"], "method": [], "dataset": ["GRID corpus (mixed-speech)"], "metric": ["Word Error Rate (WER)"], "title": "LipNet: End-to-End Sentence-level Lipreading"} {"abstract": "The 3D shapes of faces are well known to be discriminative. Yet despite this,\nthey are rarely used for face recognition and always under controlled viewing\nconditions. We claim that this is a symptom of a serious but often overlooked\nproblem with existing methods for single view 3D face reconstruction: when\napplied \"in the wild\", their 3D estimates are either unstable and change for\ndifferent photos of the same subject or they are over-regularized and generic.\nIn response, we describe a robust method for regressing discriminative 3D\nmorphable face models (3DMM). We use a convolutional neural network (CNN) to\nregress 3DMM shape and texture parameters directly from an input photo. We\novercome the shortage of training data required for this purpose by offering a\nmethod for generating huge numbers of labeled examples. 
The 3D estimates\nproduced by our CNN surpass state of the art accuracy on the MICC data set.\nCoupled with a 3D-3D face matching pipeline, we show the first competitive face\nrecognition results on the LFW, YTF and IJB-A benchmarks using 3D face shapes\nas representations, rather than the opaque deep feature vectors used by other\nmodern systems.", "field": [], "task": ["3D Face Reconstruction", "Face Recognition", "Face Reconstruction", "Face Verification"], "method": [], "dataset": ["NoW Benchmark", "YouTube Faces DB", "Florence", "Labeled Faces in the Wild"], "metric": ["Mean Reconstruction Error (mm)", "Average 3D Error", "Accuracy"], "title": "Regressing Robust and Discriminative 3D Morphable Models with a very Deep Neural Network"} {"abstract": "Predicting user responses, such as clicks and conversions, is of great\nimportance and has found its usage in many Web applications including\nrecommender systems, web search and online advertising. The data in those\napplications is mostly categorical and contains multiple fields; a typical\nrepresentation is to transform it into a high-dimensional sparse binary feature\nrepresentation via one-hot encoding. Facing with the extreme sparsity,\ntraditional models may limit their capacity of mining shallow patterns from the\ndata, i.e. low-order feature combinations. Deep models like deep neural\nnetworks, on the other hand, cannot be directly applied for the\nhigh-dimensional input because of the huge feature space. In this paper, we\npropose a Product-based Neural Networks (PNN) with an embedding layer to learn\na distributed representation of the categorical data, a product layer to\ncapture interactive patterns between inter-field categories, and further fully\nconnected layers to explore high-order feature interactions. Our experimental\nresults on two large-scale real-world ad click datasets demonstrate that PNNs\nconsistently outperform the state-of-the-art models on various metrics.", "field": [], "task": ["Click-Through Rate Prediction", "Recommendation Systems"], "method": [], "dataset": ["Bing News", "Amazon", "MovieLens 20M", "Criteo", "Company*", "Dianping", "iPinYou"], "metric": ["Log Loss", "AUC"], "title": "Product-based Neural Networks for User Response Prediction"} {"abstract": "We introduce recurrent neural network grammars, probabilistic models of\nsentences with explicit phrase structure. We explain efficient inference\nprocedures that allow application to both parsing and language modeling.\nExperiments show that they provide better parsing in English than any single\npreviously published supervised generative model and better language modeling\nthan state-of-the-art sequential RNNs in English and Chinese.", "field": [], "task": ["Constituency Parsing", "Language Modelling"], "method": [], "dataset": ["Penn Treebank"], "metric": ["F1 score"], "title": "Recurrent Neural Network Grammars"} {"abstract": "We develop a new edge detection algorithm that tackles two important issues\nin this long-standing vision problem: (1) holistic image training and\nprediction; and (2) multi-scale and multi-level feature learning. Our proposed\nmethod, holistically-nested edge detection (HED), performs image-to-image\nprediction by means of a deep learning model that leverages fully convolutional\nneural networks and deeply-supervised nets. 
HED automatically learns rich\nhierarchical representations (guided by deep supervision on side responses)\nthat are important in order to approach the human ability to resolve the\nchallenging ambiguity in edge and object boundary detection. We significantly\nadvance the state-of-the-art on the BSD500 dataset (ODS F-score of .782) and\nthe NYU Depth dataset (ODS F-score of .746), and do so with an improved speed\n(0.4 second per image) that is orders of magnitude faster than some recent\nCNN-based edge detection algorithms.", "field": [], "task": ["Boundary Detection", "Edge Detection"], "method": [], "dataset": ["BIPED"], "metric": ["ODS"], "title": "Holistically-Nested Edge Detection"} {"abstract": "We present Spider, a large-scale, complex and cross-domain semantic parsing\nand text-to-SQL dataset annotated by 11 college students. It consists of 10,181\nquestions and 5,693 unique complex SQL queries on 200 databases with multiple\ntables, covering 138 different domains. We define a new complex and\ncross-domain semantic parsing and text-to-SQL task where different complex SQL\nqueries and databases appear in train and test sets. In this way, the task\nrequires the model to generalize well to both new SQL queries and new database\nschemas. Spider is distinct from most of the previous semantic parsing tasks\nbecause they all use a single database and the exact same programs in the train\nset and the test set. We experiment with various state-of-the-art models and\nthe best model achieves only 12.4% exact matching accuracy on a database split\nsetting. This shows that Spider presents a strong challenge for future\nresearch. Our dataset and task are publicly available at\nhttps://yale-lily.github.io/spider", "field": [], "task": ["Semantic Parsing", "Text-To-Sql"], "method": [], "dataset": ["spider"], "metric": ["Accuracy"], "title": "Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-SQL Task"} {"abstract": "State-of-the-art approaches for semantic image segmentation are built on\nConvolutional Neural Networks (CNNs). The typical segmentation architecture is\ncomposed of (a) a downsampling path responsible for extracting coarse semantic\nfeatures, followed by (b) an upsampling path trained to recover the input image\nresolution at the output of the model and, optionally, (c) a post-processing\nmodule (e.g. Conditional Random Fields) to refine the model predictions.\n Recently, a new CNN architecture, Densely Connected Convolutional Networks\n(DenseNets), has shown excellent results on image classification tasks. The\nidea of DenseNets is based on the observation that if each layer is directly\nconnected to every other layer in a feed-forward fashion then the network will\nbe more accurate and easier to train.\n In this paper, we extend DenseNets to deal with the problem of semantic\nsegmentation. We achieve state-of-the-art results on urban scene benchmark\ndatasets such as CamVid and Gatech, without any further post-processing module\nor pretraining. 
Moreover, due to the smart construction of the model, our approach\nhas far fewer parameters than the currently published best entries for these\ndatasets.\n Code to reproduce the experiments is available here:\nhttps://github.com/SimJeg/FC-DenseNet/blob/master/train.py", "field": [], "task": ["Semantic Segmentation"], "method": [], "dataset": ["CamVid"], "metric": ["Mean IoU", "Global Accuracy"], "title": "The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation"} {"abstract": "Modeling sentence pairs plays a vital role in\r\njudging the relationship between two sentences,\r\nsuch as paraphrase identification, natural language\r\ninference, and answer sentence selection. Previous\r\nwork achieves very promising results using neural\r\nnetworks with attention mechanisms. In this paper,\r\nwe propose the multiway attention networks which\r\nemploy multiple attention functions to match sentence pairs under the matching-aggregation framework. Specifically, we design four attention functions to match words in corresponding sentences.\r\nThen, we aggregate the matching information from\r\neach function, and combine the information from\r\nall functions to obtain the final representation. Experimental results demonstrate that the proposed\r\nmultiway attention networks improve the results on\r\nthe Quora Question Pairs, SNLI, and MultiNLI datasets, and on the answer sentence selection task on the SQuAD dataset.", "field": [], "task": ["Natural Language Inference", "Paraphrase Identification"], "method": [], "dataset": ["Quora Question Pairs", "SNLI"], "metric": ["Parameters", "% Train Accuracy", "% Test Accuracy", "Accuracy"], "title": "Multiway Attention Networks for Modeling Sentence Pairs"} {"abstract": "We address the problem of action detection in videos. Driven by the latest\nprogress in object detection from 2D images, we build action models using rich\nfeature hierarchies derived from shape and kinematic cues. We incorporate\nappearance and motion in two ways. First, starting from image region proposals\nwe select those that are motion salient and thus are more likely to contain the\naction. This leads to a significant reduction in the number of regions being\nprocessed and allows for faster computations. Second, we extract\nspatio-temporal feature representations to build strong classifiers using\nConvolutional Neural Networks. We link our predictions to produce detections\nconsistent in time, which we call action tubes. We show that our approach\noutperforms other techniques in the task of action detection.", "field": [], "task": ["Action Detection", "Object Detection", "Skeleton Based Action Recognition"], "method": [], "dataset": ["J-HMDB"], "metric": ["Accuracy (RGB+pose)"], "title": "Finding Action Tubes"} {"abstract": "Classical deformable registration techniques achieve impressive results and offer a rigorous theoretical treatment, but are computationally intensive since they solve an optimization problem for each image pair. Recently, learning-based methods have facilitated fast registration by learning spatial deformation functions. However, these approaches use restricted deformation models, require supervised labels, or do not guarantee a diffeomorphic (topology-preserving) registration. Furthermore, learning-based registration tools have not been derived from a probabilistic framework that can offer uncertainty estimates. In this paper, we build a connection between classical and learning-based methods. 
We present a probabilistic generative model and derive an unsupervised learning-based inference algorithm that uses insights from classical registration methods and makes use of recent developments in convolutional neural networks (CNNs). We demonstrate our method on a 3D brain registration task for both images and anatomical surfaces, and provide extensive empirical analyses. Our principled approach results in state of the art accuracy and very fast runtimes, while providing diffeomorphic guarantees. Our implementation is available at http://voxelmorph.csail.mit.edu.", "field": [], "task": ["Constrained Diffeomorphic Image Registration", "Deformable Medical Image Registration", "Diffeomorphic Medical Image Registration", "Image Registration", "Medical Image Registration"], "method": [], "dataset": ["OASIS+ADIBE+ADHD200+MCIC+PPMI+HABS+HarvardGSP"], "metric": ["Dice (SE)", "CPU (sec)", "GPU sec", "Dice (Average)", "Neg Jacob Det"], "title": "Unsupervised Learning of Probabilistic Diffeomorphic Registration for Images and Surfaces"} {"abstract": "Generating text from graph-based data, such as Abstract Meaning Representation (AMR), is a challenging task due to the inherent difficulty in how to properly encode the structure of a graph with labeled edges. To address this difficulty, we propose a novel graph-to-sequence model that encodes different but complementary perspectives of the structural information contained in the AMR graph. The model learns parallel top-down and bottom-up representations of nodes capturing contrasting views of the graph. We also investigate the use of different node message passing strategies, employing different state-of-the-art graph encoders to compute node representations based on incoming and outgoing perspectives. In our experiments, we demonstrate that the dual graph representation leads to improvements in AMR-to-text generation, achieving state-of-the-art results on two AMR datasets.", "field": [], "task": ["AMR-to-Text Generation", "Data-to-Text Generation", "Graph-to-Sequence", "Text Generation"], "method": [], "dataset": ["LDC2017T10"], "metric": ["BLEU"], "title": "Enhancing AMR-to-Text Generation with Dual Graph Representations"} {"abstract": "Permutation Invariant Training (PIT) has long been a stepping stone method for training speech separation model in handling the label ambiguity problem. With PIT selecting the minimum cost label assignments dynamically, very few studies considered the separation problem to be optimizing both the model parameters and the label assignments, but focused on searching for good model architecture and parameters. In this paper, we investigate instead for a given model architecture the various flexible label assignment strategies for training the model, rather than directly using PIT. Surprisingly, we discover a significant performance boost compared to PIT is possible if the model is trained with fixed label assignments and a good set of labels is chosen. 
With fixed label training cascaded between two sections of PIT, we achieved the state-of-the-art performance on WSJ0-2mix without changing the model architecture at all.", "field": [], "task": ["Speech Separation"], "method": [], "dataset": ["wsj0-2mix"], "metric": ["SI-SDRi"], "title": "Interrupted and cascaded permutation invariant training for speech separation"} {"abstract": "Data-to-text generation can be conceptually divided into two parts: ordering\nand structuring the information (planning), and generating fluent language\ndescribing the information (realization). Modern neural generation systems\nconflate these two steps into a single end-to-end differentiable system. We\npropose to split the generation process into a symbolic text-planning stage\nthat is faithful to the input, followed by a neural generation stage that\nfocuses only on realization. For training a plan-to-text generator, we present\na method for matching reference texts to their corresponding text plans. For\ninference time, we describe a method for selecting high-quality text plans for\nnew inputs. We implement and evaluate our approach on the WebNLG benchmark. Our\nresults demonstrate that decoupling text planning from neural realization\nindeed improves the system's reliability and adequacy while maintaining fluent\noutput. We observe improvements both in BLEU scores and in manual evaluations.\nAnother benefit of our approach is the ability to output diverse realizations\nof the same input, paving the way to explicit control over the generated text\nstructure.", "field": [], "task": ["Data-to-Text Generation", "Graph-to-Sequence", "Text Generation"], "method": [], "dataset": ["WebNLG"], "metric": ["BLEU"], "title": "Step-by-Step: Separating Planning from Realization in Neural Data-to-Text Generation"} {"abstract": "In generative modeling, the Wasserstein distance (WD) has emerged as a useful\nmetric to measure the discrepancy between generated and real data\ndistributions. Unfortunately, it is challenging to approximate the WD of\nhigh-dimensional distributions. In contrast, the sliced Wasserstein distance\n(SWD) factorizes high-dimensional distributions into their multiple\none-dimensional marginal distributions and is thus easier to approximate. In\nthis paper, we introduce novel approximations of the primal and dual SWD.\nInstead of using a large number of random projections, as it is done by\nconventional SWD approximation methods, we propose to approximate SWDs with a\nsmall number of parameterized orthogonal projections in an end-to-end deep\nlearning fashion. As concrete applications of our SWD approximations, we design\ntwo types of differentiable SWD blocks to equip modern generative\nframeworks---Auto-Encoders (AE) and Generative Adversarial Networks (GAN). In\nthe experiments, we not only show the superiority of the proposed generative\nmodels on standard image synthesis benchmarks, but also demonstrate the\nstate-of-the-art performance on challenging high resolution image and video\ngeneration in an unsupervised manner.", "field": [], "task": ["Image Generation", "Video Generation"], "method": [], "dataset": ["LSUN Bedroom 256 x 256", "TrailerFaces", "CelebA-HQ 1024x1024"], "metric": ["FID"], "title": "Sliced Wasserstein Generative Models"} {"abstract": "Previous feed-forward architectures of recently proposed deep super-resolution networks learn the features of low-resolution inputs and the non-linear mapping from those to a high-resolution output. 
However, this approach does not fully address the mutual dependencies of low- and high-resolution images. We propose Deep Back-Projection Networks (DBPN), the winner of two image super-resolution challenges (NTIRE2018 and PIRM2018), that exploit iterative up- and down-sampling layers. These layers are formed as a unit providing an error feedback mechanism for projection errors. We construct mutually-connected up- and down-sampling units each of which represents different types of low- and high-resolution components. We also show that extending this idea yields new insights toward substantially more efficient network design, such as parameter sharing on the projection module and transition layers in the projection step. The experimental results are superior, in particular establishing new state-of-the-art results across multiple data sets, especially for large scaling factors such as 8x.", "field": [], "task": ["Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["Set14 - 2x upscaling", "Set14 - 4x upscaling", "Manga109 - 8x upscaling", "Manga109 - 4x upscaling", "Urban100 - 2x upscaling", "BSDS100 - 2x upscaling", "Manga109 - 2x upscaling", "Set5 - 4x upscaling", "BSDS100 - 4x upscaling", "Set14 - 8x upscaling", "Urban100 - 8x upscaling", "BSDS100 - 8x upscaling", "Set5 - 8x upscaling", "Set5 - 2x upscaling", "Urban100 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Deep Back-Projection Networks for Single Image Super-resolution"} {"abstract": "We address talker-independent monaural speaker separation from the\nperspectives of deep learning and computational auditory scene analysis (CASA).\nSpecifically, we decompose the multi-speaker separation task into the stages of\nsimultaneous grouping and sequential grouping. Simultaneous grouping is first\nperformed in each time frame by separating the spectra of different speakers\nwith a permutation-invariantly trained neural network. In the second stage, the\nframe-level separated spectra are sequentially grouped to different speakers by\na clustering network. The proposed deep CASA approach optimizes frame-level\nseparation and speaker tracking in turn, and produces excellent results for\nboth objectives. Experimental results on the benchmark WSJ0-2mix database show\nthat the new approach achieves state-of-the-art results with a modest model\nsize.", "field": [], "task": ["Speaker Separation", "Speech Separation"], "method": [], "dataset": ["wsj0-2mix"], "metric": ["SI-SDRi"], "title": "Divide and Conquer: A Deep CASA Approach to Talker-independent Monaural Speaker Separation"} {"abstract": "Answering natural language questions over tables is usually seen as a semantic parsing task. To alleviate the collection cost of full logical forms, one popular approach focuses on weak supervision consisting of denotations instead of logical forms. However, training semantic parsers from weak supervision poses difficulties, and in addition, the generated logical forms are only used as an intermediate step prior to retrieving the denotation. In this paper, we present TAPAS, an approach to question answering over tables without generating logical forms. TAPAS trains from weak supervision, and predicts the denotation by selecting table cells and optionally applying a corresponding aggregation operator to such selection. TAPAS extends BERT's architecture to encode tables as input, initializes from an effective joint pre-training of text segments and tables crawled from Wikipedia, and is trained end-to-end.
We experiment with three different semantic parsing datasets, and find that TAPAS outperforms or rivals semantic parsing models by improving state-of-the-art accuracy on SQA from 55.1 to 67.2 and performing on par with the state-of-the-art on WIKISQL and WIKITQ, but with a simpler model architecture. We additionally find that transfer learning, which is trivial in our setting, from WIKISQL to WIKITQ, yields 48.7 accuracy, 4.2 points above the state-of-the-art.", "field": [], "task": ["Question Answering", "Semantic Parsing", "Transfer Learning"], "method": [], "dataset": ["SQA", "WikiSQL", "WikiTableQuestions"], "metric": ["Accuracy (Test)", "Accuracy (Dev)", "Average question accuracy", "Denotation accuracy (test)"], "title": "TAPAS: Weakly Supervised Table Parsing via Pre-training"} {"abstract": "Fine-grained visual categorization is a classification task for distinguishing categories with high intra-class and small inter-class variance. While global approaches aim at using the whole image for performing the classification, part-based solutions gather additional local information in terms of attentions or parts. We propose a novel classification-specific part estimation that uses an initial prediction as well as back-propagation of feature importance via gradient computations in order to estimate relevant image regions. The subsequently detected parts are then not only selected by a-posteriori classification knowledge, but also have an intrinsic spatial extent that is determined automatically. This is in contrast to most part-based approaches and even to available ground-truth part annotations, which only provide point coordinates and no additional scale information. We show in our experiments on various widely-used fine-grained datasets the effectiveness of the mentioned part selection method in conjunction with the extracted part features.", "field": [], "task": ["Feature Importance", "Fine-Grained Image Classification", "Fine-Grained Visual Categorization", "Image Classification"], "method": [], "dataset": [" CUB-200-2011", "Stanford Cars", "Flowers-102", "NABirds"], "metric": ["Accuracy"], "title": "Classification-Specific Parts for Improving Fine-Grained Visual Categorization"} {"abstract": "Neural architecture search (NAS) relies on a good controller to generate better architectures or predict the accuracy of given architectures. However, training the controller requires both abundant and high-quality pairs of architectures and their accuracy, while it is costly to evaluate an architecture and obtain its accuracy. In this paper, we propose SemiNAS, a semi-supervised NAS approach that leverages numerous unlabeled architectures (without evaluation and thus nearly no cost). Specifically, SemiNAS 1) trains an initial accuracy predictor with a small set of architecture-accuracy data pairs; 2) uses the trained accuracy predictor to predict the accuracy of large amount of architectures (without evaluation); and 3) adds the generated data pairs to the original data to further improve the predictor. The trained accuracy predictor can be applied to various NAS algorithms by predicting the accuracy of candidate architectures for them. SemiNAS has two advantages: 1) It reduces the computational cost under the same accuracy guarantee. On NASBench-101 benchmark dataset, it achieves comparable accuracy with gradient-based method while using only 1/7 architecture-accuracy pairs. 2) It achieves higher accuracy under the same computational cost. 
It achieves 94.02% test accuracy on NASBench-101, outperforming all the baselines when using the same number of architectures. On ImageNet, it achieves 23.5% top-1 error rate (under 600M FLOPS constraint) using 4 GPU-days for search. We further apply it to LJSpeech text to speech task and it achieves 97% intelligibility rate in the low-resource setting and 15% test error rate in the robustness setting, with 9%, 7% improvements over the baseline respectively.", "field": [], "task": ["Natural Language Transduction", "Neural Architecture Search"], "method": [], "dataset": ["ImageNet"], "metric": ["Top-1 Error Rate", "Accuracy"], "title": "Semi-Supervised Neural Architecture Search"} {"abstract": "This paper presents X3D, a family of efficient video networks that progressively expand a tiny 2D image classification architecture along multiple network axes, in space, time, width and depth. Inspired by feature selection methods in machine learning, a simple stepwise network expansion approach is employed that expands a single axis in each step, such that good accuracy to complexity trade-off is achieved. To expand X3D to a specific target complexity, we perform progressive forward expansion followed by backward contraction. X3D achieves state-of-the-art performance while requiring 4.8x and 5.5x fewer multiply-adds and parameters for similar accuracy as previous work. Our most surprising finding is that networks with high spatiotemporal resolution can perform well, while being extremely light in terms of network width and parameters. We report competitive accuracy at unprecedented efficiency on video classification and detection benchmarks. Code will be available at: https://github.com/facebookresearch/SlowFast", "field": [], "task": ["Action Classification", "Feature Selection", "Image Classification", "Video Classification", "Video Recognition"], "method": [], "dataset": ["Kinetics-400"], "metric": ["Vid acc@5", "Vid acc@1"], "title": "X3D: Expanding Architectures for Efficient Video Recognition"} {"abstract": "Although a significant progress has been witnessed in supervised person re-identification (re-id), it remains challenging to generalize re-id models to new domains due to the huge domain gaps. Recently, there has been a growing interest in using unsupervised domain adaptation to address this scalability issue. Existing methods typically conduct adaptation on the representation space that contains both id-related and id-unrelated factors, thus inevitably undermining the adaptation efficacy of id-related features. In this paper, we seek to improve adaptation by purifying the representation space to be adapted. To this end, we propose a joint learning framework that disentangles id-related/unrelated features and enforces adaptation to work on the id-related feature space exclusively. Our model involves a disentangling module that encodes cross-domain images into a shared appearance space and two separate structure spaces, and an adaptation module that performs adversarial alignment and self-training on the shared appearance space. The two modules are co-designed to be mutually beneficial. 
Extensive experiments demonstrate that the proposed joint learning framework outperforms the state-of-the-art methods by clear margins.", "field": [], "task": ["Domain Adaptation", "Person Re-Identification", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["Market to MSMT"], "metric": ["rank-10", "mAP", "rank-5", "rank-1"], "title": "Joint Disentangling and Adaptation for Cross-Domain Person Re-Identification"} {"abstract": "Pre-training general-purpose visual features with convolutional neural networks without relying on annotations is a challenging and important task. Most recent efforts in unsupervised feature learning have focused on either small or highly curated datasets like ImageNet, whereas using uncurated raw datasets was found to decrease the feature quality when evaluated on a transfer task. Our goal is to bridge the performance gap between unsupervised methods trained on curated data, which are costly to obtain, and massive raw datasets that are easily available. To that effect, we propose a new unsupervised approach which leverages self-supervision and clustering to capture complementary statistics from large-scale data. We validate our approach on 96 million images from YFCC100M, achieving state-of-the-art results among unsupervised methods on standard benchmarks, which confirms the potential of unsupervised learning when only uncurated data are available. We also show that pre-training a supervised VGG-16 with our method achieves 74.9% top-1 classification accuracy on the validation set of ImageNet, which is an improvement of +0.8% over the same network trained from scratch. Our code is available at https://github.com/facebookresearch/DeeperCluster.", "field": [], "task": ["Self-Supervised Image Classification", "Unsupervised Pre-training"], "method": [], "dataset": ["ImageNet (finetuned)"], "metric": ["Top 1 Accuracy"], "title": "Unsupervised Pre-Training of Image Features on Non-Curated Data"} {"abstract": "The ever-increasing size of modern data sets combined with the difficulty of\nobtaining label information has made semi-supervised learning one of the\nproblems of significant practical importance in modern data analysis. We\nrevisit the approach to semi-supervised learning with generative models and\ndevelop new models that allow for effective generalisation from small labelled\ndata sets to large unlabelled ones. Generative approaches have thus far been\neither inflexible, inefficient or non-scalable. We show that deep generative\nmodels and approximate Bayesian inference exploiting recent advances in\nvariational methods can be used to provide significant improvements, making\ngenerative approaches highly competitive for semi-supervised learning.", "field": [], "task": ["Bayesian Inference"], "method": [], "dataset": ["SVHN"], "metric": ["Percentage error"], "title": "Semi-Supervised Learning with Deep Generative Models"} {"abstract": "We propose a CNN-based approach for multi-camera markerless motion capture of\nthe human body. Unlike existing methods that first perform pose estimation on\nindividual cameras and generate 3D models as post-processing, our approach\nmakes use of 3D reasoning throughout a multi-stage approach. This novelty\nallows us to use provisional 3D models of human pose to rethink where the\njoints should be located in the image and to recover from past mistakes. 
Our\nprincipled refinement of 3D human poses lets us make use of image cues, even\nfrom images where we previously misdetected joints, to refine our estimates as\npart of an end-to-end approach. Finally, we demonstrate how the high-quality\noutput of our multi-camera setup can be used as an additional training source\nto improve the accuracy of existing single camera models.", "field": [], "task": ["3D Human Pose Estimation", "Markerless Motion Capture", "Motion Capture", "Pose Estimation"], "method": [], "dataset": ["Human3.6M"], "metric": ["Average MPJPE (mm)"], "title": "Rethinking Pose in 3D: Multi-stage Refinement and Recovery for Markerless Motion Capture"} {"abstract": "In this work, we propose a novel method for training neural networks to perform single-document extractive summarization without heuristically-generated extractive labels. We call our approach BanditSum as it treats extractive summarization as a contextual bandit (CB) problem, where the model receives a document to summarize (the context), and chooses a sequence of sentences to include in the summary (the action). A policy gradient reinforcement learning algorithm is used to train the model to select sequences of sentences that maximize ROUGE score. We perform a series of experiments demonstrating that BanditSum is able to achieve ROUGE scores that are better than or comparable to the state-of-the-art for extractive summarization, and converges using significantly fewer update steps than competing approaches. In addition, we show empirically that BanditSum performs significantly better than competing approaches when good summary sentences appear late in the source document.", "field": [], "task": ["Extractive Text Summarization"], "method": [], "dataset": ["CNN / Daily Mail"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "BanditSum: Extractive Summarization as a Contextual Bandit"} {"abstract": "We propose the Lanczos network (LanczosNet), which uses the Lanczos algorithm to construct low rank approximations of the graph Laplacian for graph convolution. Relying on the tridiagonal decomposition of the Lanczos algorithm, we not only efficiently exploit multi-scale information via fast approximated computation of matrix power but also design learnable spectral filters. Being fully differentiable, LanczosNet facilitates both graph kernel learning as well as learning node embeddings. We show the connection between our LanczosNet and graph based manifold learning methods, especially the diffusion maps. We benchmark our model against several recent deep graph networks on citation networks and QM8 quantum chemistry dataset. Experimental results show that our model achieves the state-of-the-art performance in most tasks. Code is released at: \\url{https://github.com/lrjconan/LanczosNetwork}.", "field": [], "task": ["Node Classification"], "method": [], "dataset": ["PubMed (0.1%)", "PubMed (0.03%)", "Cora (1%)", "PubMed (0.05%)", "Cora (3%)", "CiteSeer (1%)", "Cora (0.5%)", "Cora with Public Split: fixed 20 nodes per class", "CiteSeer (0.5%)", "CiteSeer with Public Split: fixed 20 nodes per class", "PubMed with Public Split: fixed 20 nodes per class"], "metric": ["Accuracy"], "title": "LanczosNet: Multi-Scale Deep Graph Convolutional Networks"} {"abstract": "Knowledge Bases (KBs) require constant up-dating to reflect changes to the world they represent. 
For general purpose KBs, this is often done through Relation Extraction (RE), the task of predicting KB relations expressed in text mentioning entities known to the KB. One way to improve RE is to use KB Embeddings (KBE) for link prediction. However, despite clear connections between RE and KBE, little has been done toward properly unifying these models systematically. We help close the gap with a framework that unifies the learning of RE and KBE models, leading to significant improvements over the state-of-the-art in RE. The code is available at https://github.com/billy-inn/HRERE.", "field": [], "task": ["Link Prediction", "Relation Extraction"], "method": [], "dataset": ["NYT Corpus"], "metric": ["P@30%", "P@10%"], "title": "Connecting Language and Knowledge with Heterogeneous Representations for Neural Relation Extraction"} {"abstract": "We propose real-time, six degrees of freedom (6DoF), 3D face pose estimation without face detection or landmark localization. We observe that estimating the 6DoF rigid transformation of a face is a simpler problem than facial landmark detection, often used for 3D face alignment. In addition, 6DoF offers more information than face bounding box labels. We leverage these observations to make multiple contributions: (a) We describe an easily trained, efficient, Faster R-CNN-based model which regresses 6DoF pose for all faces in the photo, without preliminary face detection. (b) We explain how pose is converted and kept consistent between the input photo and arbitrary crops created while training and evaluating our model. (c) Finally, we show how face poses can replace detection bounding box training labels. Tests on AFLW2000-3D and BIWI show that our method runs in real time and outperforms state of the art (SotA) face pose estimators. Remarkably, our method also surpasses SotA models of comparable complexity on the WIDER FACE detection benchmark, despite not being optimized on bounding box labels.", "field": [], "task": ["Face Alignment", "Face Detection", "Facial Landmark Detection", "Head Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["WIDER Face (Medium)", "AFLW2000", "WIDER Face (Easy)", "WIDER Face (Hard)", "BIWI"], "metric": ["MAE", "MAE_t", "AP", "MAE (trained with other data)"], "title": "img2pose: Face Alignment and Detection via 6DoF, Face Pose Estimation"} {"abstract": "This paper presents SPICE, a Semantic Pseudo-labeling framework for Image ClustEring. Instead of using indirect loss functions required by the recently proposed methods, SPICE generates pseudo-labels via self-learning and directly uses the pseudo-label-based classification loss to train a deep clustering network. The basic idea of SPICE is to synergize the discrepancy among semantic clusters, the similarity among instance samples, and the semantic consistency of local samples in an embedding space to optimize the clustering network in a semantically-driven paradigm. Specifically, a semantic-similarity-based pseudo-labeling algorithm is first proposed to train a clustering network through unsupervised representation learning. Given the initial clustering results, a local semantic consistency principle is used to select a set of reliably labeled samples, and a semi-pseudo-labeling algorithm is adapted for performance boosting. Extensive experiments demonstrate that SPICE clearly outperforms the state-of-the-art methods on six common benchmark datasets including STL10, Cifar10, Cifar100-20, ImageNet-10, ImageNet-Dog, and Tiny-ImageNet.
On average, our SPICE method improves the current best results by about 10% in terms of adjusted rand index, normalized mutual information, and clustering accuracy.", "field": [], "task": ["Deep Clustering", "Image Clustering", "Representation Learning", "Semantic Similarity", "Semantic Textual Similarity", "Unsupervised Representation Learning"], "method": [], "dataset": ["Imagenet-dog-15", "CIFAR-100", "CIFAR-10", "Tiny-ImageNet", "ImageNet-10", "STL-10"], "metric": ["Train set", "Train Split", "ARI", "Backbone", "Train Set", "NMI", "Accuracy"], "title": "SPICE: Semantic Pseudo-labeling for Image Clustering"} {"abstract": "Learning to generate natural scenes has always been a challenging task in\ncomputer vision. It is even more painstaking when the generation is conditioned\non images with drastically different views. This is mainly because\nunderstanding, corresponding, and transforming appearance and semantic\ninformation across the views is not trivial. In this paper, we attempt to solve\nthe novel problem of cross-view image synthesis, aerial to street-view and vice\nversa, using conditional generative adversarial networks (cGAN). Two new\narchitectures called Crossview Fork (X-Fork) and Crossview Sequential (X-Seq)\nare proposed to generate scenes with resolutions of 64x64 and 256x256 pixels.\nX-Fork architecture has a single discriminator and a single generator. The\ngenerator hallucinates both the image and its semantic segmentation in the\ntarget view. X-Seq architecture utilizes two cGANs. The first one generates the\ntarget image which is subsequently fed to the second cGAN for generating its\ncorresponding semantic segmentation map. The feedback from the second cGAN\nhelps the first cGAN generate sharper images. Both of our proposed\narchitectures learn to generate natural images as well as their semantic\nsegmentation maps. The proposed methods show that they are able to capture and\nmaintain the true semantics of objects in source and target views better than\nthe traditional image-to-image translation method which considers only the\nvisual appearance of the scene. Extensive qualitative and quantitative\nevaluations support the effectiveness of our frameworks, compared to two state\nof the art methods, for natural scene generation across drastically different\nviews.", "field": [], "task": ["Cross-View Image-to-Image Translation", "Image Generation", "Image-to-Image Translation", "Scene Generation", "Semantic Segmentation"], "method": [], "dataset": ["cvusa", "Dayton (256\u00d7256) - ground-to-aerial", "Dayton (64x64) - ground-to-aerial", "Dayton (64\u00d764) - aerial-to-ground", "Ego2Top", "Dayton (256\u00d7256) - aerial-to-ground"], "metric": ["SSIM"], "title": "Cross-View Image Synthesis using Conditional GANs"} {"abstract": "Most of the recent deep learning-based 3D human pose and mesh estimation methods regress the pose and shape parameters of human mesh models, such as SMPL and MANO, from an input image. The first weakness of these methods is an appearance domain gap problem, due to different image appearance between train data from controlled environments, such as a laboratory, and test data from in-the-wild environments. The second weakness is that the estimation of the pose parameters is quite challenging owing to the representation issues of 3D rotations. 
To overcome the above weaknesses, we propose Pose2Mesh, a novel graph convolutional neural network (GraphCNN)-based system that estimates the 3D coordinates of human mesh vertices directly from the 2D human pose. The 2D human pose as input provides essential human body articulation information, while having a relatively homogeneous geometric property between the two domains. Also, the proposed system avoids the representation issues, while fully exploiting the mesh topology using a GraphCNN in a coarse-to-fine manner. We show that our Pose2Mesh outperforms the previous 3D human pose and mesh estimation methods on various benchmark datasets. The codes are publicly available https://github.com/hongsukchoi/Pose2Mesh_RELEASE.", "field": [], "task": ["3D Hand Pose Estimation", "3D Human Pose Estimation"], "method": [], "dataset": ["FreiHAND", "3DPW"], "metric": ["PA-MPJPE", "PA-MPVPE", "MPJPE", "MPVPE"], "title": "Pose2Mesh: Graph Convolutional Network for 3D Human Pose and Mesh Recovery from a 2D Human Pose"} {"abstract": "Convolutional neural networks have recently demonstrated high-quality\nreconstruction for single-image super-resolution. In this paper, we propose the\nLaplacian Pyramid Super-Resolution Network (LapSRN) to progressively\nreconstruct the sub-band residuals of high-resolution images. At each pyramid\nlevel, our model takes coarse-resolution feature maps as input, predicts the\nhigh-frequency residuals, and uses transposed convolutions for upsampling to\nthe finer level. Our method does not require the bicubic interpolation as the\npre-processing step and thus dramatically reduces the computational complexity.\nWe train the proposed LapSRN with deep supervision using a robust Charbonnier\nloss function and achieve high-quality reconstruction. Furthermore, our network\ngenerates multi-scale predictions in one feed-forward pass through the\nprogressive reconstruction, thereby facilitates resource-aware applications.\nExtensive quantitative and qualitative evaluations on benchmark datasets show\nthat the proposed algorithm performs favorably against the state-of-the-art\nmethods in terms of speed and accuracy.", "field": [], "task": ["Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["Urban100 - 4x upscaling", "BSD100 - 4x upscaling", "Set14 - 4x upscaling"], "metric": ["PSNR"], "title": "Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution"} {"abstract": "Face detection has received intensive attention in recent years. Many works present lots of special methods for face detection from different perspectives like model architecture, data augmentation, label assignment and etc., which make the overall algorithm and system become more and more complex. In this paper, we point out that \\textbf{there is no gap between face detection and generic object detection}. Then we provide a strong but simple baseline method to deal with face detection named TinaFace. We use ResNet-50 \\cite{he2016deep} as backbone, and all modules and techniques in TinaFace are constructed on existing modules, easily implemented and based on generic object detection. On the hard test set of the most popular and challenging face detection benchmark WIDER FACE \\cite{yang2016wider}, with single-model and single-scale, our TinaFace achieves 92.1\\% average precision (AP), which exceeds most of the recent face detectors with larger backbone. And after using test time augmentation (TTA), our TinaFace outperforms the current state-of-the-art method and achieves 92.4\\% AP. 
The code will be available at \\url{https://github.com/Media-Smart/vedadet}.", "field": [], "task": ["Data Augmentation", "Face Detection", "Object Detection"], "method": [], "dataset": ["WIDER Face (Hard)"], "metric": ["AP"], "title": "TinaFace: Strong but Simple Baseline for Face Detection"} {"abstract": "Multiple object video object segmentation is a challenging task, specially for the zero-shot case, when no object mask is given at the initial frame and the model has to find the objects to be segmented along the sequence. In our work, we propose a Recurrent network for multiple object Video Object Segmentation (RVOS) that is fully end-to-end trainable. Our model incorporates recurrence on two different domains: (i) the spatial, which allows to discover the different object instances within a frame, and (ii) the temporal, which allows to keep the coherence of the segmented objects along time. We train RVOS for zero-shot video object segmentation and are the first ones to report quantitative results for DAVIS-2017 and YouTube-VOS benchmarks. Further, we adapt RVOS for one-shot video object segmentation by using the masks obtained in previous time steps as inputs to be processed by the recurrent module. Our model reaches comparable results to state-of-the-art techniques in YouTube-VOS benchmark and outperforms all previous video object segmentation methods not using online learning in the DAVIS-2017 benchmark. Moreover, our model achieves faster inference runtimes than previous methods, reaching 44ms/frame on a P100 GPU.", "field": [], "task": ["Semi-Supervised Video Object Segmentation", "Unsupervised Video Object Segmentation", "Video Object Segmentation", "Youtube-VOS"], "method": [], "dataset": ["DAVIS 2017 (val)", "YouTube-VOS", "DAVIS 2017 (test-dev)"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "Jaccard (Unseen)", "F-Measure (Seen)", "Jaccard (Seen)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F", "F-Measure (Unseen)"], "title": "RVOS: End-to-End Recurrent Network for Video Object Segmentation"} {"abstract": "Recently, the machine learning community paused in a moment of self-reflection. In a widely discussed paper at ICLR 2018, Sculley et al. wrote: \"We observe that the rate of empirical advancement may not have been matched by consistent increase in the level of empirical rigor across the field as a whole.\" Their primary complaint is the development of a \"research and publication culture that emphasizes wins\" (emphasis in original), which typically means \"demonstrating that a new method beats previous methods on a given task or benchmark\". An apt description might be \"leaderboard chasing\"-and for many vision and NLP tasks, this isn't a metaphor. There are literally centralized leaderboards1 that track incremental progress, down to the fifth decimal point, some persisting over years, accumulating dozens of entries.\r\n\r\nSculley et al. remind us that \"the goal of science is not wins, but knowledge\". The structure of the scientific enterprise today (pressure to publish, pace of progress, etc.) means that \"winning\" and \"doing good science\" are often not fully aligned. To wit, they cite a number of papers showing that recent advances in neural networks could very well be attributed to mundane issues like better hyperparameter optimization. 
Many results can't be reproduced, and some observed improvements might just be noise.", "field": [], "task": ["Ad-Hoc Information Retrieval", "Hyperparameter Optimization"], "method": [], "dataset": ["TREC Robust04"], "metric": ["P@20", "MAP"], "title": "The Neural Hype and Comparisons Against Weak Baselines"} {"abstract": "Scoring functions (SFs), which measure the plausibility of triplets in knowledge graph (KG), have become the crux of KG embedding. Lots of SFs, which target at capturing different kinds of relations in KGs, have been designed by humans in recent years. However, as relations can exhibit complex patterns that are hard to infer before training, none of them can consistently perform better than others on existing benchmark data sets. In this paper, inspired by the recent success of automated machine learning (AutoML), we propose to automatically design SFs (AutoSF) for distinct KGs by the AutoML techniques. However, it is non-trivial to explore domain-specific information here to make AutoSF efficient and effective. We firstly identify a unified representation over popularly used SFs, which helps to set up a search space for AutoSF. Then, we propose a greedy algorithm to search in such a space efficiently. The algorithm is further sped up by a filter and a predictor, which can avoid repeatedly training SFs with same expressive ability and help removing bad candidates during the search before model training. Finally, we perform extensive experiments on benchmark data sets. Results on link prediction and triplets classification show that the searched SFs by AutoSF, are KG dependent, new to the literature, and outperform the state-of-the-art SFs designed by humans.", "field": [], "task": ["AutoML", "Graph Embedding", "Knowledge Graph Embedding", "Link Prediction"], "method": [], "dataset": [" FB15k", "WN18RR", "WN18", "FB15k-237"], "metric": ["Hits@10", "MRR"], "title": "AutoSF: Searching Scoring Functions for Knowledge Graph Embedding"} {"abstract": "We aim to better understand attention over nodes in graph neural networks (GNNs) and identify factors influencing its effectiveness. We particularly focus on the ability of attention GNNs to generalize to larger, more complex or noisy graphs. Motivated by insights from the work on Graph Isomorphism Networks, we design simple graph reasoning tasks that allow us to study attention in a controlled environment. We find that under typical conditions the effect of attention is negligible or even harmful, but under certain conditions it provides an exceptional gain in performance of more than 60% in some of our classification tasks. Satisfying these conditions in practice is challenging and often requires optimal initialization or supervised training of attention. We propose an alternative recipe and train attention in a weakly-supervised fashion that approaches the performance of supervised models, and, compared to unsupervised models, improves results on several synthetic as well as real datasets. Source code and datasets are available at https://github.com/bknyaz/graph_attention_pool.", "field": [], "task": ["Graph Classification"], "method": [], "dataset": ["COLLAB", "PROTEINS", "D&D"], "metric": ["Accuracy"], "title": "Understanding Attention and Generalization in Graph Neural Networks"} {"abstract": "This paper extends the popular task of multi-object tracking to multi-object\ntracking and segmentation (MOTS). 
Towards this goal, we create dense\npixel-level annotations for two existing tracking datasets using a\nsemi-automatic annotation procedure. Our new annotations comprise 65,213 pixel\nmasks for 977 distinct objects (cars and pedestrians) in 10,870 video frames.\nFor evaluation, we extend existing multi-object tracking metrics to this new\ntask. Moreover, we propose a new baseline method which jointly addresses\ndetection, tracking, and segmentation with a single convolutional network. We\ndemonstrate the value of our datasets by achieving improvements in performance\nwhen training on MOTS annotations. We believe that our datasets, metrics and\nbaseline will become a valuable resource towards developing multi-object\ntracking approaches that go beyond 2D bounding boxes. We make our annotations,\ncode, and models available at https://www.vision.rwth-aachen.de/page/mots.", "field": [], "task": ["Multi-Object Tracking", "Object Tracking"], "method": [], "dataset": ["KITTI Tracking test"], "metric": ["MOTA"], "title": "MOTS: Multi-Object Tracking and Segmentation"} {"abstract": "The paper introduces methods of adaptation of multilingual masked language models for a specific language. Pre-trained bidirectional language models show state-of-the-art performance on a wide range of tasks including reading comprehension, natural language inference, and sentiment analysis. At the moment there are two alternative approaches to train such models: monolingual and multilingual. While language specific models show superior performance, multilingual models allow to perform a transfer from one language to another and solve tasks for different languages simultaneously. This work shows that transfer learning from a multilingual model to monolingual model results in significant growth of performance on such tasks as reading comprehension, paraphrase detection, and sentiment analysis. Furthermore, multilingual initialization of monolingual model substantially reduces training time. Pre-trained models for the Russian language are open sourced.", "field": [], "task": ["Natural Language Inference", "Paraphrase Identification", "Question Answering", "Reading Comprehension", "Sentiment Analysis", "Transfer Learning"], "method": [], "dataset": ["RuSentiment", "SQuAD1.1"], "metric": ["Weighted F1", "F1"], "title": "Adaptation of Deep Bidirectional Multilingual Transformers for Russian Language"} {"abstract": "A significant amount of the world's knowledge is stored in relational\ndatabases. However, the ability for users to retrieve facts from a database is\nlimited due to a lack of understanding of query languages such as SQL. We\npropose Seq2SQL, a deep neural network for translating natural language\nquestions to corresponding SQL queries. Our model leverages the structure of\nSQL queries to significantly reduce the output space of generated queries.\nMoreover, we use rewards from in-the-loop query execution over the database to\nlearn a policy to generate unordered parts of the query, which we show are less\nsuitable for optimization via cross entropy loss. In addition, we will publish\nWikiSQL, a dataset of 80654 hand-annotated examples of questions and SQL\nqueries distributed across 24241 tables from Wikipedia. This dataset is\nrequired to train our model and is an order of magnitude larger than comparable\ndatasets. 
By applying policy-based reinforcement learning with a query\nexecution environment to WikiSQL, our model Seq2SQL outperforms attentional\nsequence to sequence models, improving execution accuracy from 35.9% to 59.4%\nand logical form accuracy from 23.4% to 48.3%.", "field": [], "task": ["Text-To-Sql"], "method": [], "dataset": ["WikiSQL"], "metric": ["Exact Match Accuracy", "Execution Accuracy"], "title": "Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning"} {"abstract": "Robust machine learning relies on access to data that can be used with standardized frameworks in important tasks and the ability to develop models whose performance can be reasonably reproduced. In machine learning for healthcare, the community faces reproducibility challenges due to a lack of publicly accessible data and a lack of standardized data processing frameworks. We present MIMIC-Extract, an open-source pipeline for transforming raw electronic health record (EHR) data for critical care patients contained in the publicly-available MIMIC-III database into dataframes that are directly usable in common machine learning pipelines. MIMIC-Extract addresses three primary challenges in making complex health records data accessible to the broader machine learning community. First, it provides standardized data processing functions, including unit conversion, outlier detection, and aggregating semantically equivalent features, thus accounting for duplication and reducing missingness. Second, it preserves the time series nature of clinical data and can be easily integrated into clinically actionable prediction tasks in machine learning for health. Finally, it is highly extensible so that other researchers with related questions can easily use the same pipeline. We demonstrate the utility of this pipeline by showcasing several benchmark tasks and baseline results.", "field": [], "task": ["Length-of-Stay prediction", "Outlier Detection", "Time Series"], "method": [], "dataset": ["MIMIC-III"], "metric": ["Accuracy (LOS>7 Days)", "Accuracy (LOS>3 Days)"], "title": "MIMIC-Extract: A Data Extraction, Preprocessing, and Representation Pipeline for MIMIC-III"} {"abstract": "Constructing a joint representation invariant across different modalities (e.g., video, language) is of significant importance in many multimedia applications. While there are a number of recent successes in developing effective image-text retrieval methods by learning\r\njoint representations, the video-text retrieval task, in contrast, has not been explored to its fullest extent. In this paper, we study how\r\nto effectively utilize available multi-modal cues from videos for the cross-modal video-text retrieval task. Based on our analysis,\r\nwe propose a novel framework that simultaneously utilizes multimodal features (different visual characteristics, audio inputs, and text) by a fusion strategy for efficient retrieval. Furthermore, we explore several loss functions in training the joint embedding and propose a modified pairwise ranking loss for the retrieval task. 
Experiments on MSVD and MSR-VTT datasets demonstrate that our method achieves significant performance gain compared to the state-of-the-art approaches.", "field": [], "task": ["Video Retrieval", "Video-Text Retrieval"], "method": [], "dataset": ["MSR-VTT"], "metric": ["text-to-video Median Rank", "text-to-video R@5", "video-to-text Mean Rank", "video-to-text R@10", "text-to-video R@1", "text-to-video Mean Rank", "video-to-text Median Rank", "video-to-text R@1", "text-to-video R@10", "video-to-text R@5"], "title": "Learning Joint Embedding with Multimodal Cues for Cross-Modal Video-Text Retrieval"} {"abstract": "Temporal action localization is a challenging computer vision problem with numerous real-world applications. Most existing methods require laborious frame-level supervision to train action localization models. In this work, we propose a framework, called 3C-Net, which only requires video-level supervision (weak supervision) in the form of action category labels and the corresponding count. We introduce a novel formulation to learn discriminative action features with enhanced localization capabilities. Our joint formulation has three terms: a classification term to ensure the separability of learned action features, an adapted multi-label center loss term to enhance the action feature discriminability and a counting loss term to delineate adjacent action sequences, leading to improved localization. Comprehensive experiments are performed on two challenging benchmarks: THUMOS14 and ActivityNet 1.2. Our approach sets a new state-of-the-art for weakly-supervised temporal action localization on both datasets. On the THUMOS14 dataset, the proposed method achieves an absolute gain of 4.6% in terms of mean average precision (mAP), compared to the state-of-the-art. Source code is available at https://github.com/naraysa/3c-net.", "field": [], "task": ["Action Classification", "Action Localization", "Temporal Action Localization", "Weakly Supervised Action Localization", "Weakly-supervised Temporal Action Localization", "Weakly Supervised Temporal Action Localization"], "method": [], "dataset": ["ActivityNet-1.2", "THUMOS'14", "THUMOS 2014", "THUMOS\u201914"], "metric": ["mAP", "mAP@0.5", "Mean mAP"], "title": "3C-Net: Category Count and Center Loss for Weakly-Supervised Action Localization"} {"abstract": "Everyone makes mistakes. So do human annotators when curating labels for named entity recognition (NER). Such label mistakes might hurt model training and interfere model comparison. In this study, we dive deep into one of the widely-adopted NER benchmark datasets, CoNLL03 NER. We are able to identify label mistakes in about 5.38% test sentences, which is a significant ratio considering that the state-of-the-art test F1 score is already around 93%. Therefore, we manually correct these label mistakes and form a cleaner test set. Our re-evaluation of popular models on this corrected test set leads to more accurate assessments, compared to those on the original test set. More importantly, we propose a simple yet effective framework, CrossWeigh, to handle label mistakes during NER model training. Specifically, it partitions the training data into several folds and train independent NER models to identify potential mistakes in each fold. Then it adjusts the weights of training data accordingly to train the final NER model. Extensive experiments demonstrate significant improvements of plugging various NER models into our proposed framework on three datasets. 
All implementations and corrected test set are available at our Github repo: https://github.com/ZihanWangKi/CrossWeigh.", "field": [], "task": ["Named Entity Recognition"], "method": [], "dataset": ["Long-tail emerging entities", "CoNLL 2003 (English)", "CoNLL++"], "metric": ["F1"], "title": "CrossWeigh: Training Named Entity Tagger from Imperfect Annotations"} {"abstract": "We propose Chirality Nets, a family of deep nets that is equivariant to the \"chirality transform,\" i.e., the transformation to create a chiral pair. Through parameter sharing, odd and even symmetry, we propose and prove variants of standard building blocks of deep nets that satisfy the equivariance property, including fully connected layers, convolutional layers, batch-normalization, and LSTM/GRU cells. The proposed layers lead to a more data efficient representation and a reduction in computation by exploiting symmetry. We evaluate chirality nets on the task of human pose regression, which naturally exploits the left/right mirroring of the human body. We study three pose regression tasks: 3D pose estimation from video, 2D pose forecasting, and skeleton based activity recognition. Our approach achieves/matches state-of-the-art results, with more significant gains on small datasets and limited-data settings.", "field": [], "task": ["3D Pose Estimation", "Activity Recognition", "Pose Estimation", "Regression", "Skeleton Based Action Recognition"], "method": [], "dataset": ["Kinetics-Skeleton dataset"], "metric": ["Accuracy"], "title": "Chirality Nets for Human Pose Regression"} {"abstract": "Graph similarity search is among the most important graph-based applications, e.g. finding the chemical compounds that are most similar to a query compound. Graph similarity computation, such as Graph Edit Distance (GED) and Maximum Common Subgraph (MCS), is the core operation of graph similarity search and many other applications, but very costly to compute in practice. Inspired by the recent success of neural network approaches to several graph applications, such as node or graph classification, we propose a novel neural network based approach to address this classic yet challenging graph problem, aiming to alleviate the computational burden while preserving a good performance. The proposed approach, called SimGNN, combines two strategies. First, we design a learnable embedding function that maps every graph into a vector, which provides a global summary of a graph. A novel attention mechanism is proposed to emphasize the important nodes with respect to a specific similarity metric. Second, we design a pairwise node comparison method to supplement the graph-level embeddings with fine-grained node-level information. Our model achieves better generalization on unseen graphs, and in the worst case runs in quadratic time with respect to the number of nodes in two graphs. Taking GED computation as an example, experimental results on three real graph datasets demonstrate the effectiveness and efficiency of our approach. Specifically, our model achieves smaller error rate and great time reduction compared against a series of baselines, including several approximation algorithms on GED computation, and many existing graph neural network based models. 
To the best of our knowledge, we are among the first to adopt neural networks to explicitly model the similarity between two graphs, and provide a new direction for future research on graph similarity computation and graph similarity search.", "field": [], "task": ["Graph Classification", "Graph Similarity"], "method": [], "dataset": ["IMDb"], "metric": ["mse (10^-3)"], "title": "SimGNN: A Neural Network Approach to Fast Graph Similarity Computation"} {"abstract": "Deep Convolutional Neural Networks (DCNNs) is currently the method of choice both for generative, as well as for discriminative learning in computer vision and machine learning. The success of DCNNs can be attributed to the careful selection of their building blocks (e.g., residual blocks, rectifiers, sophisticated normalization schemes, to mention but a few). In this paper, we propose $\\Pi$-Nets, a new class of DCNNs. $\\Pi$-Nets are polynomial neural networks, i.e., the output is a high-order polynomial of the input. $\\Pi$-Nets can be implemented using special kind of skip connections and their parameters can be represented via high-order tensors. We empirically demonstrate that $\\Pi$-Nets have better representation power than standard DCNNs and they even produce good results without the use of non-linear activation functions in a large battery of tasks and signals, i.e., images, graphs, and audio. When used in conjunction with activation functions, $\\Pi$-Nets produce state-of-the-art results in challenging tasks, such as image generation. Lastly, our framework elucidates why recent generative models, such as StyleGAN, improve upon their predecessors, e.g., ProGAN.", "field": [], "task": ["Audio Classification", "Graph Representation Learning", "Image Classification", "Image Generation"], "method": [], "dataset": ["COMA", "CIFAR-10"], "metric": ["Error (mm)", "Inception score", "FID"], "title": "$\u03a0-$nets: Deep Polynomial Neural Networks"} {"abstract": "We present a simple and effective deep convolutional neural network (CNN) model for video deblurring. The proposed algorithm mainly consists of optical flow estimation from intermediate latent frames and latent frame restoration steps. It first develops a deep CNN model to estimate optical flow from intermediate latent frames and then restores the latent frames based on the estimated optical flow. To better explore the temporal information from videos, we develop a temporal sharpness prior to constrain the deep CNN model to help the latent frame restoration. We develop an effective cascaded training approach and jointly train the proposed CNN model in an end-to-end manner. We show that exploring the domain knowledge of video deblurring is able to make the deep CNN model more compact and efficient. Extensive experimental results show that the proposed algorithm performs favorably against state-of-the-art methods on the benchmark datasets as well as real-world videos.", "field": [], "task": ["Deblurring", "Optical Flow Estimation"], "method": [], "dataset": ["GoPro", "DVD "], "metric": ["SSIM", "PSNR"], "title": "Cascaded Deep Video Deblurring Using Temporal Sharpness Prior"} {"abstract": "Deep-learning-based image inpainting methods have shown significant promise in both rectangular and irregular holes. However, the inpainting of irregular holes presents numerous challenges owing to uncertainties in their shapes and locations. 
When depending solely on convolutional neural network (CNN) or adversarial supervision, plausible inpainting results cannot be guaranteed because irregular holes need attention-based guidance for retrieving information for content generation. In this paper, we propose two new attention mechanisms, namely a mask pruning-based global attention module and a global and local attention module, to obtain global dependency information and the local similarity information among the features for refined results. The proposed method is evaluated against state-of-the-art methods, and the experimental results show that our method outperforms the existing methods in both quantitative and qualitative measures.", "field": [], "task": ["Image Inpainting"], "method": [], "dataset": ["Places2"], "metric": ["L1-loss", "40-50% Mask PSNR", "SSIM", "free-form mask l2 err"], "title": "Global and Local Attention-Based Free-Form Image Inpainting"} {"abstract": "This paper proposes a novel differentiable architecture search method by formulating it as a distribution learning problem. We treat the continuously relaxed architecture mixing weight as random variables modeled by a Dirichlet distribution. With recently developed pathwise derivatives, the Dirichlet parameters can be easily optimized with a gradient-based optimizer in an end-to-end manner. This formulation improves the generalization ability and induces stochasticity that naturally encourages exploration in the search space. Furthermore, to alleviate the large memory consumption of differentiable NAS, we propose a simple yet effective progressive learning scheme that enables searching directly on large-scale tasks, eliminating the gap between search and evaluation phases. Extensive experiments demonstrate the effectiveness of our method. Specifically, we obtain a test error of 2.46% on CIFAR-10 and 23.7% on ImageNet under the mobile setting. On NAS-Bench-201, we also achieve state-of-the-art results on all three datasets and provide insights for the effective design of neural architecture search algorithms.", "field": [], "task": ["Neural Architecture Search"], "method": [], "dataset": ["NAS-Bench-201, ImageNet-16-120"], "metric": ["Accuracy (Test)", "Accuracy (val)"], "title": "DrNAS: Dirichlet Neural Architecture Search"} {"abstract": "The Semi-Supervised Recognition Challenge-FGVC7 is a challenging fine-grained recognition competition. One of the difficulties of this competition is how to use unlabeled data. We adopted pseudo-tag data mining to increase the amount of training data. The other is how to identify similar birds with very small differences, especially those that have a relatively tiny main body in the examples. We combined generic image recognition and fine-grained image recognition methods to solve the problem. All generic image recognition models were trained using PaddleClas. By combining the two kinds of deep recognition models, we finally won third place in the competition.", "field": [], "task": ["Fine-Grained Image Recognition", "Image Classification"], "method": [], "dataset": ["ImageNet"], "metric": ["Number of params", "Top 5 Accuracy", "Top 1 Accuracy"], "title": "Semi-Supervised Recognition under a Noisy and Fine-grained Dataset"} {"abstract": "Progress in Reinforcement Learning (RL) algorithms goes hand-in-hand with the development of challenging environments that test the limits of current methods. While existing RL environments are either sufficiently complex or based on fast simulation, they are rarely both.
Here, we present the NetHack Learning Environment (NLE), a scalable, procedurally generated, stochastic, rich, and challenging environment for RL research based on the popular single-player terminal-based roguelike game, NetHack. We argue that NetHack is sufficiently complex to drive long-term research on problems such as exploration, planning, skill acquisition, and language-conditioned RL, while dramatically reducing the computational resources required to gather a large amount of experience. We compare NLE and its task suite to existing alternatives, and discuss why it is an ideal medium for testing the robustness and systematic generalization of RL agents. We demonstrate empirical success for early stages of the game using a distributed Deep RL baseline and Random Network Distillation exploration, alongside qualitative analysis of various agents trained in the environment. NLE is open source at https://github.com/facebookresearch/nle.", "field": [], "task": ["NetHack", "NetHack Score", "Systematic Generalization"], "method": [], "dataset": ["NetHack Learning Environment"], "metric": ["Average Score"], "title": "The NetHack Learning Environment"} {"abstract": "We propose a novel framework to perform classification via deep learning in the presence of noisy annotations. When trained on noisy labels, deep neural networks have been observed to first fit the training data with clean labels during an \"early learning\" phase, before eventually memorizing the examples with false labels. We prove that early learning and memorization are fundamental phenomena in high-dimensional classification tasks, even in simple linear models, and give a theoretical explanation in this setting. Motivated by these findings, we develop a new technique for noisy classification tasks, which exploits the progress of the early learning phase. In contrast with existing approaches, which use the model output during early learning to detect the examples with clean labels, and either ignore or attempt to correct the false labels, we take a different route and instead capitalize on early learning via regularization. There are two key elements to our approach. First, we leverage semi-supervised learning techniques to produce target probabilities based on the model outputs. Second, we design a regularization term that steers the model towards these targets, implicitly preventing memorization of the false labels. The resulting framework is shown to provide robustness to noisy annotations on several standard benchmarks and real-world datasets, where it achieves results comparable to the state of the art.", "field": [], "task": ["Image Classification", "Learning with noisy labels"], "method": [], "dataset": ["mini WebVision 1.0", "WebVision", "Clothing1M"], "metric": ["Top 1 Accuracy", "Top-5 Accuracy", "ImageNet Top-1 Accuracy", "Top-1 Accuracy", "Accuracy", "Top 5 Accuracy", "ImageNet Top-5 Accuracy"], "title": "Early-Learning Regularization Prevents Memorization of Noisy Labels"} {"abstract": "A number of lane detection methods depend on a proposal-free instance segmentation because of its adaptability to flexible object shape, occlusion, and real-time application. This paper addresses the problem that pixel embedding in proposal-free instance segmentation based lane detection is difficult to optimize. A translation invariance of convolution, which is one of the supposed strengths, causes challenges in optimizing pixel embedding. 
In this work, we propose a lane detection method based on proposal-free instance segmentation, directly optimizing the spatial embedding of pixels using image coordinates. Our proposed method allows the post-processing step for center localization and optimizes clustering in an end-to-end manner. The proposed method enables real-time lane detection through the simplicity of post-processing and the adoption of a lightweight backbone. It demonstrates competitive performance on public lane detection datasets.", "field": [], "task": ["Instance Segmentation", "Lane Detection", "Semantic Segmentation"], "method": [], "dataset": ["TuSimple"], "metric": ["F1 score", "Accuracy"], "title": "Towards Lightweight Lane Detection by Optimizing Spatial Embedding"} {"abstract": "Prior works in cross-lingual named entity recognition (NER) with no/little labeled data fall into two primary categories: model transfer based and data transfer based methods. In this paper we find that both method types can complement each other, in the sense that the former can exploit context information via language-independent features but sees no task-specific information in the target language, while the latter generally generates pseudo target-language training data via translation but its exploitation of context information is weakened by inaccurate translations. Moreover, prior works rarely leverage unlabeled data in the target language, which can be effortlessly collected and potentially contains valuable information for improved results. To handle both problems, we propose a novel approach termed UniTrans to Unify both model and data Transfer for cross-lingual NER, and furthermore, to leverage the available information from unlabeled target-language data via enhanced knowledge distillation. We evaluate our proposed UniTrans over 4 target languages on benchmark datasets. Our experimental results show that it substantially outperforms the existing state-of-the-art methods.", "field": [], "task": ["Cross-Lingual NER", "Cross-Lingual Transfer", "Knowledge Distillation", "Named Entity Recognition"], "method": [], "dataset": ["CoNLL Dutch", "CoNLL German", "NoDaLiDa Norwegian Bokm\u00e5l", "CoNLL Spanish"], "metric": ["F1"], "title": "UniTrans: Unifying Model Transfer and Data Transfer for Cross-Lingual Named Entity Recognition with Unlabeled Data"} {"abstract": "Traditional signature-based methods have started becoming inadequate to deal with next-generation malware, which utilizes sophisticated obfuscation (polymorphic and metamorphic) techniques to evade detection. Recently, research efforts have been conducted on malware detection and classification by applying machine learning techniques. Despite these efforts, most methods are built on shallow learning architectures and rely on the extraction of hand-crafted features. In this paper, based on assembly language code extracted from disassembled binary files and embedded into vectors, we present a convolutional neural network architecture to learn a set of discriminative patterns able to cluster malware files amongst families. To demonstrate the suitability of our approach, we evaluated our model on the data provided by Microsoft for the BigData Innovators Gathering 2015 Anti-Malware Prediction Challenge. 
Experiments show that the method achieves competitive results without relying on the manual extraction of features and is resilient to the most common obfuscation techniques.", "field": [], "task": ["Malware Classification", "Malware Detection"], "method": [], "dataset": ["Microsoft Malware Classification Challenge"], "metric": ["Accuracy (10-fold)", "Macro F1 (10-fold)", "LogLoss"], "title": "Convolutional Neural Network for Classification of Malware Assembly Code"} {"abstract": "Recent developments in medical imaging with Deep Learning present evidence of automated diagnosis and prognosis. It can also be a complement to currently available diagnosis methods. Deep Learning can be leveraged for diagnosis, severity prediction, intubation support prediction and many similar tasks. We present prediction of intubation support requirement for patients from chest X-rays using deep representation learning. We release our source code publicly at https://github.com/aniketmaurya/covid-research.", "field": [], "task": ["COVID-19 Diagnosis", "Intubation Support Prediction", "Representation Learning"], "method": [], "dataset": ["COVID chest X-ray"], "metric": ["AUC-ROC"], "title": "Predicting intubation support requirement of patients using Chest X-ray with Deep Representation Learning"} {"abstract": "Fine-tuning pre-trained deep neural networks (DNNs) to a target dataset, also known as transfer learning, is widely used in computer vision and NLP. Because task-specific layers mainly contain categorical information and categories vary with datasets, practitioners only \\textit{partially} transfer pre-trained models by discarding task-specific layers and fine-tuning bottom layers. However, it is a reckless loss to simply discard task-specific parameters, which take up as many as $20\\%$ of the total parameters in pre-trained models. To \\textit{fully} transfer pre-trained models, we propose a two-step framework named \\textbf{Co-Tuning}: (i) learn the relationship between source categories and target categories from the pre-trained model and calibrated predictions; (ii) target labels (one-hot labels), as well as source labels (probabilistic labels) translated by the category relationship, collaboratively supervise the fine-tuning process. A simple instantiation of the framework shows strong empirical results in four visual classification tasks and one NLP classification task, bringing up to $20\\%$ relative improvement. While state-of-the-art fine-tuning techniques mainly focus on how to impose regularization when data are not abundant, Co-Tuning works not only in medium-scale datasets (100 samples per class) but also in large-scale datasets (1000 samples per class) where regularization-based methods bring no gains over the vanilla fine-tuning. Co-Tuning relies on a typically valid assumption that the pre-trained dataset is diverse enough, implying its broad application area.", "field": [], "task": ["Image Classification", "Transfer Learning"], "method": [], "dataset": ["COCO70"], "metric": ["Accuracy"], "title": "Co-Tuning for Transfer Learning"} {"abstract": "In this work, we introduce a novel local autoregressive translation (LAT) mechanism into non-autoregressive translation (NAT) models so as to capture local dependencies among target outputs. Specifically, for each target decoding position, instead of only one token, we predict a short sequence of tokens in an autoregressive way. We further design an efficient merging algorithm to align and merge the output pieces into one final output sequence. 
We integrate LAT into the conditional masked language model (CMLM; Ghazvininejad et al., 2019) and similarly adopt iterative decoding. Empirical results on five translation tasks show that compared with CMLM, our method achieves comparable or better performance with fewer decoding iterations, bringing a 2.5x speedup. Further analysis indicates that our method reduces repeated translations and performs better on longer sentences.", "field": [], "task": ["Machine Translation"], "method": [], "dataset": ["WMT2014 German-English", "WMT2016 Romanian-English", "WMT2016 English-Romanian", "WMT2014 English-German"], "metric": ["BLEU score"], "title": "Incorporating a Local Translation Mechanism into Non-autoregressive Translation"} {"abstract": "General purpose relation extractors, which can model arbitrary relations, are a core aspiration in information extraction. Efforts have been made to build general purpose extractors that represent relations with their surface forms, or which jointly embed surface forms with relations from an existing knowledge graph. However, both of these approaches are limited in their ability to generalize. In this paper, we build on extensions of Harris' distributional hypothesis to relations, as well as recent advances in learning text representations (specifically, BERT), to build task agnostic relation representations solely from entity-linked text. We show that these representations significantly outperform previous work on exemplar based relation extraction (FewRel) even without using any of that task's training data. We also show that models initialized with our task agnostic representations, and then tuned on supervised relation extraction datasets, significantly outperform the previous methods on SemEval 2010 Task 8, KBP37, and TACRED.", "field": [], "task": ["Relation Extraction"], "method": [], "dataset": ["TACRED", "SemEval-2010 Task 8"], "metric": ["F1"], "title": "Matching the Blanks: Distributional Similarity for Relation Learning"} {"abstract": "Recent insights on language and vision with neural networks have been successfully applied to simple single-image visual question answering. However, to tackle real-life question answering problems on multimedia collections such as personal photos, we have to look at whole collections with sequences of photos or videos. When answering questions from a large collection, a natural problem is to identify snippets to support the answer. In this paper, we describe a novel neural network called Focal Visual-Text Attention network (FVTA) for collective reasoning in visual question answering, where both visual and text sequence information such as images and text metadata are presented. FVTA introduces an end-to-end approach that makes use of a hierarchical process to dynamically determine what media and what time to focus on in the sequential data to answer the question. FVTA not only answers the questions well but also provides the justifications upon which its answers are based. FVTA achieves state-of-the-art performance on the MemexQA dataset and competitive results on the MovieQA dataset.", "field": [], "task": ["Memex Question Answering", "Question Answering", "Visual Question Answering"], "method": [], "dataset": ["MemexQA"], "metric": ["Accuracy"], "title": "Focal Visual-Text Attention for Visual Question Answering"} {"abstract": "The success of deep learning has been due, in no small part, to the availability of large annotated datasets. 
Thus, a major bottleneck in current learning pipelines is the time-consuming human annotation of data. In scenarios where such input-output pairs cannot be collected, simulation is often used instead, leading to a domain shift between synthesized and real-world data. This work offers an unsupervised alternative that relies on the availability of task-specific energy functions, replacing the generic supervised loss. Such energy functions are assumed to lead to the desired label as their minimizer given the input. The proposed approach, termed \"Deep Energy\", trains a Deep Neural Network (DNN) to approximate this minimization for any chosen input. Once trained, a simple and fast feed-forward computation provides the inferred label. This approach allows us to perform unsupervised training of DNNs with real-world inputs only, and without the need for manually-annotated labels, nor synthetically created data. \"Deep Energy\" is demonstrated in this paper on three different tasks -- seeded segmentation, image matting and single image dehazing -- exposing its generality and wide applicability. Our experiments show that the solution provided by the network is often much better in quality than the one obtained by a direct minimization of the energy function, suggesting an added regularization property in our scheme.", "field": [], "task": ["Image Dehazing", "Image Matting", "Single Image Dehazing"], "method": [], "dataset": ["SOTS Outdoor"], "metric": ["SIMM", "PSNR"], "title": "Deep-Energy: Unsupervised Training of Deep Neural Networks"} {"abstract": "This paper presents a method for adding multiple tasks to a single deep\nneural network while avoiding catastrophic forgetting. Inspired by network\npruning techniques, we exploit redundancies in large deep networks to free up\nparameters that can then be employed to learn new tasks. By performing\niterative pruning and network re-training, we are able to sequentially \"pack\"\nmultiple tasks into a single network while ensuring minimal drop in performance\nand minimal storage overhead. Unlike prior work that uses proxy losses to\nmaintain accuracy on older tasks, we always optimize for the task at hand. We\nperform extensive experiments on a variety of network architectures and\nlarge-scale datasets, and observe much better robustness against catastrophic\nforgetting than prior work. In particular, we are able to add three\nfine-grained classification tasks to a single ImageNet-trained VGG-16 network\nand achieve accuracies close to those of separately trained networks for each\ntask. Code available at https://github.com/arunmallya/packnet", "field": [], "task": ["Continual Learning", "Network Pruning"], "method": [], "dataset": ["Stanford Cars (Fine-grained 6 Tasks)", "Sketch (Fine-grained 6 Tasks)", "Wikiart (Fine-grained 6 Tasks)", "CUBS (Fine-grained 6 Tasks)", "ImageNet (Fine-grained 6 Tasks)", "Cifar100 (20 tasks)", "Flowers (Fine-grained 6 Tasks)"], "metric": ["Average Accuracy", "Accuracy"], "title": "PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning"} {"abstract": "Benefiting from the joint learning of the multiple tasks in deep multi-task networks, many applications have shown promising performance compared to single-task learning. However, the performance of a multi-task learning framework is highly dependent on the relative weights of the tasks. How to assign the weight of each task is a critical issue in multi-task learning. 
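As a rough illustration of the weighting problem just described (not the specific dynamic scheme proposed in this work), a multi-task objective typically combines per-task losses through scalar weights, and one simple heuristic is to derive those weights from the current task losses so that harder tasks receive more attention. A minimal PyTorch-style sketch, with two hypothetical task losses:

```python
import torch

def combine_task_losses(task_losses, temperature=1.0):
    """Combine per-task losses with dynamic weights.

    Illustrative heuristic only: weights are a softmax over the current
    (detached) task losses, so tasks that are currently harder to train
    receive larger weights. This is not the scheme proposed in the paper.
    """
    losses = torch.stack(task_losses)
    weights = torch.softmax(losses.detach() / temperature, dim=0)  # no gradient through the weights
    return (weights * losses).sum()

# Hypothetical usage with two task losses (e.g., an identity head and an expression head):
loss_face = torch.tensor(1.7, requires_grad=True)
loss_expr = torch.tensor(0.4, requires_grad=True)
total = combine_task_losses([loss_face, loss_expr])
total.backward()
```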
Instead of tuning the weights manually, which is exhausting and time-consuming, in this paper we propose an approach that dynamically adapts the weights of the tasks according to the difficulty of training each task. Specifically, the proposed method does not introduce additional hyperparameters, and its simple structure allows other multi-task deep learning networks to easily adopt or reproduce it. We demonstrate our approach for face recognition with facial expression and facial expression recognition from a single input image based on a deep multi-task learning Convolutional Neural Network (CNN). Both the theoretical analysis and the experimental results demonstrate the effectiveness of the proposed dynamic multi-task learning method. This multi-task learning with dynamic weights also boosts the performance on the different tasks compared to state-of-the-art single-task learning methods.", "field": [], "task": ["Face Recognition", "Facial Expression Recognition", "Multi-Task Learning"], "method": [], "dataset": ["Oulu-CASIA"], "metric": ["Accuracy (10-fold)"], "title": "Dynamic Multi-Task Learning for Face Recognition with Facial Expression"} {"abstract": "Timely accurate traffic forecast is crucial for urban traffic control and\nguidance. Due to the high nonlinearity and complexity of traffic flow,\ntraditional methods cannot satisfy the requirements of mid-and-long term\nprediction tasks and often neglect spatial and temporal dependencies. In this\npaper, we propose a novel deep learning framework, Spatio-Temporal Graph\nConvolutional Networks (STGCN), to tackle the time series prediction problem in\nthe traffic domain. Instead of applying regular convolutional and recurrent units,\nwe formulate the problem on graphs and build the model with complete\nconvolutional structures, which enable much faster training speed with fewer\nparameters. Experiments show that our model STGCN effectively captures\ncomprehensive spatio-temporal correlations through modeling multi-scale traffic\nnetworks and consistently outperforms state-of-the-art baselines on various\nreal-world traffic datasets.", "field": [], "task": ["Time Series", "Time Series Prediction", "Traffic Prediction"], "method": [], "dataset": ["PeMS-M", "METR-LA"], "metric": ["MAE (60 min)", "MAE @ 12 step"], "title": "Spatio-Temporal Graph Convolutional Networks: A Deep Learning Framework for Traffic Forecasting"} {"abstract": "Single image super resolution is a very important computer vision task, with\na wide range of applications. In recent years, the depth of the\nsuper-resolution model has been constantly increasing, but with only a small\nincrease in performance it has brought a huge amount of computation and memory\nconsumption. In this work, in order to make super resolution models more\neffective, we propose a novel single image super resolution method via\nrecursive squeeze and excitation networks (SESR). By introducing the squeeze\nand excitation module, our SESR can model the interdependencies and\nrelationships between channels, which makes our model more efficient. In\naddition, the recursive structure and progressive reconstruction method in our\nmodel minimize the layers and parameters and enable SESR to simultaneously\ntrain multi-scale super resolution in a single model. 
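A minimal sketch of a generic squeeze-and-excitation (channel attention) block of the kind referenced above; this is an illustrative PyTorch module with assumed channel sizes, not the authors' exact SESR design:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Generic squeeze-and-excitation block: global average pooling ("squeeze")
    followed by a two-layer bottleneck MLP ("excitation") that rescales channels."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        scale = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * scale  # per-channel recalibration of the input features

# Hypothetical usage on a batch of 64-channel feature maps:
features = torch.randn(2, 64, 32, 32)
out = SEBlock(64)(features)  # same shape as the input
```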
After evaluating on four\nbenchmark test sets, our model is proved to be above the state-of-the-art\nmethods in terms of speed and accuracy.", "field": [], "task": ["Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["Set5 - 4x upscaling", "Urban100 - 4x upscaling", "BSD100 - 4x upscaling", "Set14 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "SESR: Single Image Super Resolution with Recursive Squeeze and Excitation Networks"} {"abstract": "We improve automatic correction of grammatical, orthographic, and collocation\nerrors in text using a multilayer convolutional encoder-decoder neural network.\nThe network is initialized with embeddings that make use of character N-gram\ninformation to better suit this task. When evaluated on common benchmark test\ndata sets (CoNLL-2014 and JFLEG), our model substantially outperforms all prior\nneural approaches on this task as well as strong statistical machine\ntranslation-based systems with neural and task-specific features trained on the\nsame data. Our analysis shows the superiority of convolutional neural networks\nover recurrent neural networks such as long short-term memory (LSTM) networks\nin capturing the local context via attention, and thereby improving the\ncoverage in correcting grammatical errors. By ensembling multiple models, and\nincorporating an N-gram language model and edit features via rescoring, our\nnovel method becomes the first neural approach to outperform the current\nstate-of-the-art statistical machine translation-based approach, both in terms\nof grammaticality and fluency.", "field": [], "task": ["Grammatical Error Correction", "Language Modelling"], "method": [], "dataset": ["_Restricted_", "Restricted", "CoNLL-2014 Shared Task", "CoNLL-2014 Shared Task (10 annotations)", "JFLEG"], "metric": ["GLEU", "F0.5"], "title": "A Multilayer Convolutional Encoder-Decoder Neural Network for Grammatical Error Correction"} {"abstract": "Table-to-text generation aims to generate a description for a factual table\nwhich can be viewed as a set of field-value records. To encode both the content\nand the structure of a table, we propose a novel structure-aware seq2seq\narchitecture which consists of field-gating encoder and description generator\nwith dual attention. In the encoding phase, we update the cell memory of the\nLSTM unit by a field gate and its corresponding field value in order to\nincorporate field information into table representation. In the decoding phase,\ndual attention mechanism which contains word level attention and field level\nattention is proposed to model the semantic relevance between the generated\ndescription and the table. We conduct experiments on the \\texttt{WIKIBIO}\ndataset which contains over 700k biographies and corresponding infoboxes from\nWikipedia. The attention visualizations and case studies show that our model is\ncapable of generating coherent and informative descriptions based on the\ncomprehensive understanding of both the content and the structure of a table.\nAutomatic evaluations also show our model outperforms the baselines by a great\nmargin. Code for this work is available on\nhttps://github.com/tyliupku/wiki2bio.", "field": [], "task": ["Table-to-Text Generation", "Text Generation"], "method": [], "dataset": ["WikiBio"], "metric": ["BLEU", "ROUGE"], "title": "Table-to-text Generation by Structure-aware Seq2seq Learning"} {"abstract": "Learning with recurrent neural networks (RNNs) on long sequences is a\nnotoriously difficult task. 
There are three major challenges: 1) complex\ndependencies, 2) vanishing and exploding gradients, and 3) efficient\nparallelization. In this paper, we introduce a simple yet effective RNN\nconnection structure, the DilatedRNN, which simultaneously tackles all of these\nchallenges. The proposed architecture is characterized by multi-resolution\ndilated recurrent skip connections and can be combined flexibly with diverse\nRNN cells. Moreover, the DilatedRNN reduces the number of parameters needed and\nenhances training efficiency significantly, while matching state-of-the-art\nperformance (even with standard RNN cells) in tasks involving very long-term\ndependencies. To provide a theory-based quantification of the architecture's\nadvantages, we introduce a memory capacity measure, the mean recurrent length,\nwhich is more suitable for RNNs with long skip connections than existing\nmeasures. We rigorously prove the advantages of the DilatedRNN over other\nrecurrent neural architectures. The code for our method is publicly available\nat https://github.com/code-terminator/DilatedRNN", "field": [], "task": ["Sequential Image Classification"], "method": [], "dataset": ["Sequential MNIST"], "metric": ["Permuted Accuracy", "Unpermuted Accuracy"], "title": "Dilated Recurrent Neural Networks"} {"abstract": "We propose a new method for semantic instance segmentation, by first\ncomputing how likely two pixels are to belong to the same object, and then by\ngrouping similar pixels together. Our similarity metric is based on a deep,\nfully convolutional embedding model. Our grouping method is based on selecting\nall points that are sufficiently similar to a set of \"seed points\", chosen from\na deep, fully convolutional scoring model. We show competitive results on the\nPascal VOC instance segmentation benchmark.", "field": [], "task": ["Instance Segmentation", "Metric Learning", "Object Proposal Generation", "Semantic Segmentation"], "method": [], "dataset": ["PASCAL VOC 2012, 60 proposals per image"], "metric": ["Average Recall"], "title": "Semantic Instance Segmentation via Deep Metric Learning"} {"abstract": "This paper describes our system (HIT-SCIR) submitted to the CoNLL 2018 shared\ntask on Multilingual Parsing from Raw Text to Universal Dependencies. We base\nour submission on Stanford's winning system for the CoNLL 2017 shared task and\nmake two effective extensions: 1) incorporating deep contextualized word\nembeddings into both the part of speech tagger and parser; 2) ensembling\nparsers trained with different initialization. We also explore different ways\nof concatenating treebanks for further improvements. Experimental results on\nthe development data show the effectiveness of our methods. In the final\nevaluation, our system was ranked first according to LAS (75.84%) and\noutperformed the other systems by a large margin.", "field": [], "task": ["Dependency Parsing", "Word Embeddings"], "method": [], "dataset": ["Universal Dependencies"], "metric": ["LAS"], "title": "Towards Better UD Parsing: Deep Contextualized Word Embeddings, Ensemble, and Treebank Concatenation"} {"abstract": "The number of emergencies have increased over the years with the growth in urbanization. This pattern has overwhelmed the emergency services with limited resources and demands the optimization of response processes. 
This is partly due to the traditional `reactive' approach of emergency services to collecting data about incidents, where a source initiates a call to the emergency number (e.g., 911 in the U.S.), delaying and limiting the potentially optimal response. Crowdsourcing platforms such as Waze provide an opportunity to develop a rapid, `proactive' approach to collecting data about incidents through crowd-generated observational reports. However, the reliability of reporting sources and the spatio-temporal uncertainty of the reported incidents challenge the design of such a proactive approach. Thus, this paper presents a novel method for emergency incident detection using noisy crowdsourced Waze data. We propose a principled computational framework based on Bayesian theory to model the uncertainty in the reliability of crowd-generated reports and their integration across space and time to detect incidents. Extensive experiments using data collected from Waze and the officially reported incidents in Nashville, Tennessee in the U.S. show that our method can outperform strong baselines in both F1-score and AUC. The application of this work provides an extensible framework to incorporate different noisy data sources for proactive incident detection to improve and optimize emergency response operations in our communities.", "field": [], "task": ["Traffic Accident Detection"], "method": [], "dataset": ["custom"], "metric": ["Average F1"], "title": "Emergency Incident Detection from Crowdsourced Waze Data using Bayesian Information Fusion"} {"abstract": "In many practical few-shot learning problems, even though labeled examples are scarce, there are abundant auxiliary data sets that potentially contain useful information. We propose a framework to address the challenges of efficiently selecting and effectively using auxiliary data in image classification. Given an auxiliary dataset and a notion of semantic similarity among classes, we automatically select pseudo shots, which are labeled examples from other classes related to the target task. We show that naively assuming that these additional examples come from the same distribution as the target task examples does not significantly improve accuracy. Instead, we propose a masking module that adjusts the features of auxiliary data to be more similar to those of the target classes. We show that this masking module can improve accuracy by up to 18 accuracy points, particularly when the auxiliary data is semantically distant from the target task. We also show that incorporating pseudo shots improves over the current state-of-the-art few-shot image classification scores by an average of 4.81 percentage points of accuracy on 1-shot tasks and an average of 0.31 percentage points on 5-shot tasks.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Image Classification", "Semantic Similarity", "Semantic Textual Similarity"], "method": [], "dataset": ["CIFAR-FS - 1-Shot Learning", "FC100 5-way (1-shot)", "CIFAR-FS 5-way (5-shot)", "Mini-Imagenet 5-way (1-shot)", "Tiered ImageNet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "CIFAR-FS - 5-Shot Learning", "CIFAR-FS 5-way (1-shot)", "Fewshot-CIFAR100 - 5-Shot Learning", "FC100 5-way (5-shot)", "Fewshot-CIFAR100 - 1-Shot Learning", "Tiered ImageNet 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Pseudo Shots: Few-Shot Learning with Auxiliary Data"} {"abstract": "Meta-learning has been proposed as a framework to address the challenging\nfew-shot learning setting. 
The key idea is to leverage a large number of\nsimilar few-shot tasks in order to learn how to adapt a base-learner to a new\ntask for which only a few labeled samples are available. As deep neural\nnetworks (DNNs) tend to overfit using a few samples only, meta-learning\ntypically uses shallow neural networks (SNNs), thus limiting its effectiveness.\nIn this paper we propose a novel few-shot learning method called meta-transfer\nlearning (MTL) which learns to adapt a deep NN for few shot learning tasks.\nSpecifically, \"meta\" refers to training multiple tasks, and \"transfer\" is\nachieved by learning scaling and shifting functions of DNN weights for each\ntask. In addition, we introduce the hard task (HT) meta-batch scheme as an\neffective learning curriculum for MTL. We conduct experiments using (5-class,\n1-shot) and (5-class, 5-shot) recognition tasks on two challenging few-shot\nlearning benchmarks: miniImageNet and Fewshot-CIFAR100. Extensive comparisons\nto related works validate that our meta-transfer learning approach trained with\nthe proposed HT meta-batch scheme achieves top performance. An ablation study\nalso shows that both components contribute to fast convergence and high\naccuracy.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Meta-Learning", "Transfer Learning"], "method": [], "dataset": ["FC100 5-way (1-shot)", "Mini-Imagenet 5-way (1-shot)", "FC100 5-way (10-shot)", "Mini-Imagenet 5-way (5-shot)", "FC100 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Meta-Transfer Learning for Few-Shot Learning"} {"abstract": "Neural Architecture Search (NAS) has shown excellent results in designing architectures for computer vision problems. NAS alleviates the need for human-defined settings by automating architecture design and engineering. However, NAS methods tend to be slow, as they require large amounts of GPU computation. This bottleneck is mainly due to the performance estimation strategy, which requires the evaluation of the generated architectures, mainly by training them, to update the sampler method. In this paper, we propose EPE-NAS, an efficient performance estimation strategy, that mitigates the problem of evaluating networks, by scoring untrained networks and creating a correlation with their trained performance. We perform this process by looking at intra and inter-class correlations of an untrained network. We show that EPE-NAS can produce a robust correlation and that by incorporating it into a simple random sampling strategy, we are able to search for competitive networks, without requiring any training, in a matter of seconds using a single GPU. Moreover, EPE-NAS is agnostic to the search method, since it focuses on the evaluation of untrained networks, making it easy to integrate into almost any NAS method.", "field": [], "task": ["Neural Architecture Search"], "method": [], "dataset": ["NAS-Bench-201, ImageNet-16-120", "NAS-Bench-201, CIFAR-100", "NAS-Bench-201, CIFAR-10"], "metric": ["Search time (s)", "Accuracy (Test)", "Accuracy (Val)", "Accuracy (val)"], "title": "EPE-NAS: Efficient Performance Estimation Without Training for Neural Architecture Search"} {"abstract": "State-of-the-art methods for video action recognition commonly use an\nensemble of two networks: the spatial stream, which takes RGB frames as input,\nand the temporal stream, which takes optical flow as input. 
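As background for the two-stream setup just described, a common way to ensemble the streams is simple late fusion of their class scores; the sketch below is a generic illustration with hypothetical models and weights, not the distillation procedure proposed in this paper:

```python
import torch

def two_stream_late_fusion(rgb_logits: torch.Tensor,
                           flow_logits: torch.Tensor,
                           flow_weight: float = 1.0) -> torch.Tensor:
    """Average the softmax scores of a spatial (RGB) and a temporal (flow) stream.

    rgb_logits, flow_logits: (batch, num_classes) outputs of two hypothetical models.
    """
    rgb_scores = torch.softmax(rgb_logits, dim=1)
    flow_scores = torch.softmax(flow_logits, dim=1)
    fused = (rgb_scores + flow_weight * flow_scores) / (1.0 + flow_weight)
    return fused.argmax(dim=1)  # predicted class per clip

# Hypothetical usage with random logits for 5 clips and 400 classes:
preds = two_stream_late_fusion(torch.randn(5, 400), torch.randn(5, 400), flow_weight=1.5)
```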
In recent work,\nboth of these streams consist of 3D Convolutional Neural Networks, which apply\nspatiotemporal filters to the video clip before performing classification.\nConceptually, the temporal filters should allow the spatial stream to learn\nmotion representations, making the temporal stream redundant. However, we still\nsee significant benefits in action recognition performance by including an\nentirely separate temporal stream, indicating that the spatial stream is\n\"missing\" some of the signal captured by the temporal stream. In this work, we\nfirst investigate whether motion representations are indeed missing in the\nspatial stream of 3D CNNs. Second, we demonstrate that these motion\nrepresentations can be improved by distillation, by tuning the spatial stream\nto predict the outputs of the temporal stream, effectively combining both\nmodels into a single stream. Finally, we show that our Distilled 3D Network\n(D3D) achieves performance on par with two-stream approaches, using only a\nsingle model and with no need to compute optical flow.", "field": [], "task": ["Action Classification", "Action Recognition", "Optical Flow Estimation", "Temporal Action Localization"], "method": [], "dataset": ["Kinetics-400", "AVA v2.1", "UCF101", "Kinetics-600", "HMDB-51"], "metric": ["3-fold Accuracy", "mAP (Val)", "Top-1 Accuracy", "Average accuracy of 3 splits", "Vid acc@1"], "title": "D3D: Distilled 3D Networks for Video Action Recognition"} {"abstract": "We introduce the Self-Annotated Reddit Corpus (SARC), a large corpus for\nsarcasm research and for training and evaluating systems for sarcasm detection.\nThe corpus has 1.3 million sarcastic statements -- 10 times more than any\nprevious dataset -- and many times more instances of non-sarcastic statements,\nallowing for learning in both balanced and unbalanced label regimes. Each\nstatement is furthermore self-annotated -- sarcasm is labeled by the author,\nnot an independent annotator -- and provided with user, topic, and conversation\ncontext. We evaluate the corpus for accuracy, construct benchmarks for sarcasm\ndetection, and evaluate baseline methods.", "field": [], "task": ["Sarcasm Detection"], "method": [], "dataset": ["SARC (all-bal)", "SARC (pol-unbal)", "SARC (pol-bal)"], "metric": ["Avg F1", "Accuracy"], "title": "A Large Self-Annotated Corpus for Sarcasm"} {"abstract": "Current deep neural networks (DNNs) can easily overfit to biased training data with corrupted labels or class imbalance. A sample re-weighting strategy is commonly used to alleviate this issue by designing a weighting function mapping from training loss to sample weight, and then iterating between weight recalculation and classifier updating. Current approaches, however, need to manually pre-specify the weighting function as well as its additional hyper-parameters. This makes them fairly hard to apply in practice, since the proper weighting scheme varies significantly with the investigated problem and training data. To address this issue, we propose a method capable of adaptively learning an explicit weighting function directly from data. The weighting function is an MLP with one hidden layer, constituting a universal approximator to almost any continuous function, making the method able to fit a wide range of weighting functions including those assumed in conventional research. 
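A minimal sketch of such a loss-to-weight mapping, i.e., a one-hidden-layer MLP that maps a per-sample training loss to a weight in [0, 1]; the hidden width and the surrounding meta-update loop are assumptions here, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class WeightNet(nn.Module):
    """One-hidden-layer MLP mapping a per-sample loss value to a sample weight in [0, 1]."""
    def __init__(self, hidden: int = 100):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, per_sample_loss: torch.Tensor) -> torch.Tensor:
        return self.net(per_sample_loss.unsqueeze(1)).squeeze(1)

# Hypothetical usage: reweight a batch of per-sample classification losses.
weight_net = WeightNet()
per_sample_loss = torch.rand(8)                     # stand-in for F.cross_entropy(..., reduction="none")
weights = weight_net(per_sample_loss.detach())      # weights produced by the (meta-learned) MLP
weighted_loss = (weights * per_sample_loss).mean()  # used to update the classifier
```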
Guided by a small amount of unbiased meta-data, the parameters of the weighting function can be finely updated simultaneously with the learning process of the classifiers. Synthetic and real experiments substantiate the capability of our method for achieving proper weighting functions in class imbalance and noisy label cases, fully complying with the common settings in traditional methods, as well as in more complicated scenarios beyond conventional cases. This naturally leads to its better accuracy than other state-of-the-art methods.", "field": [], "task": ["Image Classification", "Meta-Learning"], "method": [], "dataset": ["Clothing1M"], "metric": ["Accuracy"], "title": "Meta-Weight-Net: Learning an Explicit Mapping For Sample Weighting"} {"abstract": "Many of the recent successful methods for video object segmentation (VOS) are\noverly complicated, heavily rely on fine-tuning on the first frame, and/or are\nslow, and are hence of limited practical use. In this work, we propose FEELVOS\nas a simple and fast method which does not rely on fine-tuning. In order to\nsegment a video, for each frame FEELVOS uses a semantic pixel-wise embedding\ntogether with a global and a local matching mechanism to transfer information\nfrom the first frame and from the previous frame of the video to the current\nframe. In contrast to previous work, our embedding is only used as an internal\nguidance of a convolutional network. Our novel dynamic segmentation head allows\nus to train the network, including the embedding, end-to-end for the multiple\nobject segmentation task with a cross entropy loss. We achieve a new state of\nthe art in video object segmentation without fine-tuning with a J&F measure of\n71.5% on the DAVIS 2017 validation set. We make our code and models available\nat https://github.com/tensorflow/models/tree/master/research/feelvos.", "field": [], "task": ["Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["DAVIS 2017 (val)", "DAVIS 2017 (test-dev)", "DAVIS 2016", "YouTube"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "mIoU", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "FEELVOS: Fast End-to-End Embedding Learning for Video Object Segmentation"} {"abstract": "The paper presents a first attempt towards unsupervised neural text simplification that relies only on unlabeled text corpora. The core framework is composed of a shared encoder and a pair of attentional decoders and gains knowledge of simplification through discrimination-based losses and denoising. The framework is trained using unlabeled text collected from an en-Wikipedia dump. Our analysis (both quantitative and qualitative, involving human evaluators) on public test data shows that the proposed model can perform text simplification at both lexical and syntactic levels, competitive with existing supervised methods. Addition of a few labelled pairs also improves the performance further.", "field": [], "task": ["Denoising", "Text Simplification"], "method": [], "dataset": ["ASSET", "TurkCorpus"], "metric": ["BLEU", "SARI (EASSE>=0.2.1)"], "title": "Unsupervised Neural Text Simplification"} {"abstract": "We introduce an exploration bonus for deep reinforcement learning methods\nthat is easy to implement and adds minimal overhead to the computation\nperformed. 
The bonus is the error of a neural network predicting features of\nthe observations given by a fixed randomly initialized neural network. We also\nintroduce a method to flexibly combine intrinsic and extrinsic rewards. We find\nthat the random network distillation (RND) bonus combined with this increased\nflexibility enables significant progress on several hard exploration Atari\ngames. In particular we establish state of the art performance on Montezuma's\nRevenge, a game famously difficult for deep reinforcement learning methods. To\nthe best of our knowledge, this is the first method that achieves better than\naverage human performance on this game without using demonstrations or having\naccess to the underlying state of the game, and occasionally completes the\nfirst level.", "field": [], "task": ["Atari Games", "Montezuma's Revenge"], "method": [], "dataset": ["Atari 2600 Venture", "Atari 2600 Private Eye", "Atari 2600 Montezuma's Revenge", "Atari 2600 Solaris", "Atari 2600 Gravitar", "Atari 2600 Pitfall!"], "metric": ["Score"], "title": "Exploration by Random Network Distillation"} {"abstract": "Human activity understanding is crucial for building automatic intelligent system. With the help of deep learning, activity understanding has made huge progress recently. But some challenges such as imbalanced data distribution, action ambiguity, complex visual patterns still remain. To address these and promote the activity understanding, we build a large-scale Human Activity Knowledge Engine (HAKE) based on the human body part states. Upon existing activity datasets, we annotate the part states of all the active persons in all images, thus establish the relationship between instance activity and body part states. Furthermore, we propose a HAKE based part state recognition model with a knowledge extractor named Activity2Vec and a corresponding part state based reasoning network. With HAKE, our method can alleviate the learning difficulty brought by the long-tail data distribution, and bring in interpretability. Now our HAKE has more than 7 M+ part state annotations and is still under construction. We first validate our approach on a part of HAKE in this preliminary paper, where we show 7.2 mAP performance improvement on Human-Object Interaction recognition, and 12.38 mAP improvement on the one-shot subsets.", "field": [], "task": ["Human-Object Interaction Detection"], "method": [], "dataset": ["HICO"], "metric": ["mAP"], "title": "HAKE: Human Activity Knowledge Engine"} {"abstract": "Visual Question Answering (VQA) deep-learning systems tend to capture superficial statistical correlations in the training data because of strong language priors and fail to generalize to test data with a significantly different question-answer (QA) distribution. To address this issue, we introduce a self-critical training objective that ensures that visual explanations of correct answers match the most influential image regions more than other competitive answer candidates. The influential regions are either determined from human visual/textual explanations or automatically from just significant words in the question and answer. 
We evaluate our approach on the VQA generalization task using the VQA-CP dataset, achieving a new state of the art, i.e., 49.5% using textual explanations and 48.5% using automatically annotated regions.", "field": [], "task": ["Question Answering", "Visual Question Answering"], "method": [], "dataset": ["VQA-CP"], "metric": ["Score"], "title": "Self-Critical Reasoning for Robust Visual Question Answering"} {"abstract": "Video object segmentation (VOS) aims at pixel-level object tracking given only the annotations in the first frame. Due to the large visual variations of objects in video and the lack of training samples, it remains a difficult task despite the rapid development of deep learning. Toward solving the VOS problem, we bring several new insights via a proposed unified framework consisting of object proposal, tracking and segmentation components. The object proposal network transfers objectness information as generic knowledge into VOS; the tracking network identifies the target object from the proposals; and the segmentation network is performed based on the tracking results with a novel dynamic-reference based model adaptation scheme. Extensive experiments have been conducted on the DAVIS'17 and YouTube-VOS datasets, and our method achieves state-of-the-art performance on several video object segmentation benchmarks. We make the code publicly available at https://github.com/sydney0zq/PTSNet.", "field": [], "task": ["Object Tracking", "Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation", "Youtube-VOS"], "method": [], "dataset": ["DAVIS 2017 (val)", "YouTube-VOS"], "metric": ["Jaccard (Mean)", "Jaccard (Unseen)", "Jaccard (Seen)", "F-measure (Mean)", "J&F"], "title": "Proposal, Tracking and Segmentation (PTS): A Cascaded Network for Video Object Segmentation"} {"abstract": "Graph embedding methods transform high-dimensional and complex graph contents into low-dimensional representations. They are useful for a wide range of graph analysis tasks including link prediction, node classification, recommendation and visualization. Most existing approaches represent graph nodes as point vectors in a low-dimensional embedding space, ignoring the uncertainty present in real-world graphs. Furthermore, many real-world graphs are large-scale and rich in content (e.g. node attributes). In this work, we propose GLACE, a novel, scalable graph embedding method that preserves both graph structure and node attributes effectively and efficiently in an end-to-end manner. GLACE effectively models uncertainty through Gaussian embeddings, and supports inductive inference of new nodes based on their attributes. In our comprehensive experiments, we evaluate GLACE on real-world graphs, and the results demonstrate that GLACE significantly outperforms state-of-the-art embedding methods on multiple graph analysis tasks.", "field": [], "task": ["Graph Embedding", "Link Prediction", "Node Classification"], "method": [], "dataset": ["Pubmed (nonstandard variant)", "ACM", "Cora (nonstandard variant)", "DBLP", "Citeseer (nonstandard variant)"], "metric": ["AP", "AUC"], "title": "Gaussian Embedding of Large-scale Attributed Graphs"} {"abstract": "Domain generalization refers to the task of training a model which generalizes to new domains that are not seen during training. 
We present CSD (Common Specific Decomposition) for this setting, which jointly learns a common component (which generalizes to new domains) and a domain specific component (which overfits on training domains). The domain specific components are discarded after training and only the common component is retained. The algorithm is extremely simple and involves only modifying the final linear classification layer of any given neural network architecture. We present a principled analysis to understand existing approaches, provide identifiability results for CSD, and study the effect of low rank on domain generalization. We show that CSD either matches or beats state-of-the-art approaches for domain generalization based on domain erasure, domain-perturbed data augmentation, and meta-learning. Further diagnostics on rotated MNIST, where domains are interpretable, confirm the hypothesis that CSD successfully disentangles common and domain specific components and hence leads to better domain generalization.", "field": [], "task": ["Data Augmentation", "Domain Generalization", "Meta-Learning", "Rotated MNIST"], "method": [], "dataset": ["PACS", "LipitK", "Rotated Fashion-MNIST"], "metric": ["Average Accuracy", "Accuracy"], "title": "Efficient Domain Generalization via Common-Specific Low-Rank Decomposition"} {"abstract": "This paper creates a paradigm shift with regard to the way we build neural extractive summarization systems. Instead of following the commonly used framework of extracting sentences individually and modeling the relationship between sentences, we formulate the extractive summarization task as a semantic text matching problem, in which a source document and candidate summaries will be (extracted from the original text) matched in a semantic space. Notably, this paradigm shift to a semantic matching framework is well-grounded in our comprehensive analysis of the inherent gap between sentence-level and summary-level extractors based on the property of the dataset. Besides, even instantiating the framework with a simple form of a matching model, we have driven the state-of-the-art extractive result on CNN/DailyMail to a new level (44.41 in ROUGE-1). Experiments on the other five datasets also show the effectiveness of the matching framework. We believe the power of this matching-based summarization framework has not been fully exploited. To encourage more instantiations in the future, we have released our code, processed datasets, and generated summaries at https://github.com/maszhongming/MatchSum.", "field": [], "task": ["Document Summarization", "Extractive Text Summarization", "Text Matching", "Text Summarization"], "method": [], "dataset": ["CNN / Daily Mail", "BBC XSum", "WikiHow", "Reddit TIFU", "Pubmed"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Extractive Summarization as Text Matching"} {"abstract": "Recognizing an activity with a single reference sample using metric learning approaches is a promising research field. The majority of few-shot methods focus on object recognition or face identification. We propose a metric learning approach to reduce the action recognition problem to a nearest neighbor search in embedding space. We encode signals into images and extract features using a deep residual CNN. Using triplet loss, we learn a feature embedding. The resulting encoder transforms features into an embedding space in which closer distances encode similar actions while higher distances encode different actions. 
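A minimal sketch of the two generic ingredients just mentioned, a triplet-loss embedding objective and one-shot classification by nearest-neighbor search in the embedding space; the encoder and data below are placeholders, not the paper's actual pipeline:

```python
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 128))  # placeholder encoder (a residual CNN in practice)
triplet = nn.TripletMarginLoss(margin=1.0)

# Training step on a hypothetical (anchor, positive, negative) triplet of encoded signal "images".
anchor, positive, negative = (torch.randn(4, 1, 32, 32) for _ in range(3))
loss = triplet(embed(anchor), embed(positive), embed(negative))

# One-shot inference: assign each query to the class of its nearest reference embedding.
def nearest_reference(query_emb: torch.Tensor, ref_emb: torch.Tensor, ref_labels: torch.Tensor) -> torch.Tensor:
    dists = torch.cdist(query_emb, ref_emb)  # pairwise Euclidean distances
    return ref_labels[dists.argmin(dim=1)]
```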
Our approach is based on a signal level formulation and remains flexible across a variety of modalities. It further outperforms the baseline on the large scale NTU RGB+D 120 dataset for the One-Shot action recognition protocol by 5.6%. With just 60% of the training data, our approach still outperforms the baseline approach by 3.7%. With 40% of the training data, our approach performs comparably well to the second follow up. Further, we show that our approach generalizes well in experiments on the UTD-MHAD dataset for inertial, skeleton and fused data and the Simitate dataset for motion capturing data. Furthermore, our inter-joint and inter-sensor experiments suggest good capabilities on previously unseen setups.", "field": [], "task": ["Action Recognition", "Face Identification", "Metric Learning", "Object Recognition", "One-Shot 3D Action Recognition"], "method": [], "dataset": ["NTU RGB+D 120"], "metric": ["Accuracy"], "title": "SL-DML: Signal Level Deep Metric Learning for Multimodal One-Shot Action Recognition"} {"abstract": "In this paper, we conduct a comprehensive study on the co-salient object detection (CoSOD) problem for images. CoSOD is an emerging and rapidly growing extension of salient object detection (SOD), which aims to detect the co-occurring salient objects in a group of images. However, existing CoSOD datasets often have a serious data bias, assuming that each group of images contains salient objects of similar visual appearances. This bias can lead to the ideal settings and effectiveness of models trained on existing datasets, being impaired in real-life situations, where similarities are usually semantic or conceptual. To tackle this issue, we first introduce a new benchmark, called CoSOD3k in the wild, which requires a large amount of semantic context, making it more challenging than existing CoSOD datasets. Our CoSOD3k consists of 3,316 high-quality, elaborately selected images divided into 160 groups with hierarchical annotations. The images span a wide range of categories, shapes, object sizes, and backgrounds. Second, we integrate the existing SOD techniques to build a unified, trainable CoSOD framework, which is long overdue in this field. Specifically, we propose a novel CoEG-Net that augments our prior model EGNet with a co-attention projection strategy to enable fast common information learning. CoEG-Net fully leverages previous large-scale SOD datasets and significantly improves the model scalability and stability. Third, we comprehensively summarize 40 cutting-edge algorithms, benchmarking 18 of them over three challenging CoSOD datasets (iCoSeg, CoSal2015, and our CoSOD3k), and reporting more detailed (i.e., group-level) performance analysis. Finally, we discuss the challenges and future works of CoSOD. We hope that our study will give a strong boost to growth in the CoSOD community. The benchmark toolbox and results are available on our project page at http://dpfan.net/CoSOD3K/.", "field": [], "task": ["Co-Salient Object Detection", "Object Detection", "RGB Salient Object Detection", "Salient Object Detection"], "method": [], "dataset": ["CoCA"], "metric": ["mean E-Measure", "Mean F-measure", "S-Measure", "max F-Measure"], "title": "Re-thinking Co-Salient Object Detection"} {"abstract": "Existing weakly-supervised semantic segmentation methods using image-level annotations typically rely on initial responses to locate object regions. 
However, such response maps generated by the classification network usually focus on discriminative object parts, due to the fact that the network does not need the entire object for optimizing the objective function. To enforce the network to pay attention to other parts of an object, we propose a simple yet effective approach that introduces a self-supervised task by exploiting the sub-category information. Specifically, we perform clustering on image features to generate pseudo sub-categories labels within each annotated parent class, and construct a sub-category objective to assign the network to a more challenging task. By iteratively clustering image features, the training process does not limit itself to the most discriminative object parts, hence improving the quality of the response maps. We conduct extensive analysis to validate the proposed method and show that our approach performs favorably against the state-of-the-art approaches.", "field": [], "task": ["Semantic Segmentation", "Weakly-Supervised Semantic Segmentation"], "method": [], "dataset": ["PASCAL VOC 2012 val"], "metric": ["Mean IoU"], "title": "Weakly-Supervised Semantic Segmentation via Sub-category Exploration"} {"abstract": "We define the object detection from imagery problem as estimating a very\nlarge but extremely sparse bounding box dependent probability distribution.\nSubsequently we identify a sparse distribution estimation scheme, Directed\nSparse Sampling, and employ it in a single end-to-end CNN based detection\nmodel. This methodology extends and formalizes previous state-of-the-art\ndetection models with an additional emphasis on high evaluation rates and\nreduced manual engineering. We introduce two novelties, a corner based\nregion-of-interest estimator and a deconvolution based CNN model. The resulting\nmodel is scene adaptive, does not require manually defined reference bounding\nboxes and produces highly competitive results on MSCOCO, Pascal VOC 2007 and\nPascal VOC 2012 with real-time evaluation rates. Further analysis suggests our\nmodel performs particularly well when finegrained object localization is\ndesirable. We argue that this advantage stems from the significantly larger set\nof available regions-of-interest relative to other methods. Source-code is\navailable from: https://github.com/lachlants/denet", "field": [], "task": ["Object Detection", "Object Localization", "Real-Time Object Detection"], "method": [], "dataset": ["PASCAL VOC 2007"], "metric": ["MAP"], "title": "DeNet: Scalable Real-time Object Detection with Directed Sparse Sampling"} {"abstract": "In this paper, we present a new feature representation for first-person\nvideos. In first-person video understanding (e.g., activity recognition), it is\nvery important to capture both entire scene dynamics (i.e., egomotion) and\nsalient local motion observed in videos. We describe a representation framework\nbased on time series pooling, which is designed to abstract\nshort-term/long-term changes in feature descriptor elements. The idea is to\nkeep track of how descriptor values are changing over time and summarize them\nto represent motion in the activity video. The framework is general, handling\nany types of per-frame feature descriptors including conventional motion\ndescriptors like histogram of optical flows (HOF) as well as appearance\ndescriptors from more recent convolutional neural networks (CNN). 
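A minimal NumPy sketch of the general idea of temporally pooling per-frame descriptors, summarizing how each descriptor dimension changes over time; the particular summary statistics chosen here are illustrative, not the paper's exact pooling operators:

```python
import numpy as np

def temporal_pooling(descriptors: np.ndarray) -> np.ndarray:
    """Summarize a (T, D) matrix of per-frame descriptors into one video-level vector.

    Concatenates simple summaries of each dimension's values and of their
    frame-to-frame changes (illustrative choices: mean, max, and the summed
    positive/negative temporal differences).
    """
    diffs = np.diff(descriptors, axis=0)      # frame-to-frame change per dimension
    return np.concatenate([
        descriptors.mean(axis=0),
        descriptors.max(axis=0),
        np.clip(diffs, 0, None).sum(axis=0),  # total positive change
        np.clip(diffs, None, 0).sum(axis=0),  # total negative change
    ])

# Hypothetical usage: 120 frames of 90-dimensional HOF or CNN descriptors -> one 360-d video feature.
video_feature = temporal_pooling(np.random.rand(120, 90))
```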
We\nexperimentally confirm that our approach clearly outperforms previous feature\nrepresentations including bag-of-visual-words and improved Fisher vector (IFV)\nwhen using identical underlying feature descriptors. We also confirm that our\nfeature representation has superior performance to existing state-of-the-art\nfeatures like local spatio-temporal features and Improved Trajectory Features\n(originally developed for 3rd-person videos) when handling first-person videos.\nMultiple first-person activity datasets were tested under various settings to\nconfirm these findings.", "field": [], "task": ["Activity Recognition", "Time Series", "Video Understanding"], "method": [], "dataset": ["DogCentric"], "metric": ["Accuracy"], "title": "Pooled Motion Features for First-Person Videos"} {"abstract": "Single image rain streak removal is an extremely challenging problem due to\nthe presence of non-uniform rain densities in images. We present a novel\ndensity-aware multi-stream densely connected convolutional neural network-based\nalgorithm, called DID-MDN, for joint rain density estimation and de-raining.\nThe proposed method enables the network itself to automatically determine the\nrain-density information and then efficiently remove the corresponding\nrain-streaks guided by the estimated rain-density label. To better characterize\nrain-streaks with different scales and shapes, a multi-stream densely connected\nde-raining network is proposed which efficiently leverages features from\ndifferent scales. Furthermore, a new dataset containing images with\nrain-density labels is created and used to train the proposed density-aware\nnetwork. Extensive experiments on synthetic and real datasets demonstrate that\nthe proposed method achieves significant improvements over the recent\nstate-of-the-art methods. In addition, an ablation study is performed to\ndemonstrate the improvements obtained by different modules in the proposed\nmethod. Code can be found at: https://github.com/hezhangsprinter", "field": [], "task": ["Density Estimation", "Single Image Deraining"], "method": [], "dataset": ["Test2800", "Rain100H", "Test100", "Test1200", "Rain100L"], "metric": ["SSIM", "PSNR"], "title": "Density-aware Single Image De-raining using a Multi-stream Dense Network"} {"abstract": "Most previous work on neural text generation from graph-structured data\nrelies on standard sequence-to-sequence methods. These approaches linearise the\ninput graph to be fed to a recurrent neural network. In this paper, we propose\nan alternative encoder based on graph convolutional networks that directly\nexploits the input structure. We report results on two graph-to-sequence\ndatasets that empirically show the benefits of explicitly encoding the input\ngraph structure.", "field": [], "task": ["Data-to-Text Generation", "Graph-to-Sequence", "Text Generation"], "method": [], "dataset": ["SR11Deep", "WebNLG"], "metric": ["BLEU"], "title": "Deep Graph Convolutional Encoders for Structured Data to Text Generation"} {"abstract": "I propose a system for Automated Theorem Proving in higher order logic using\ndeep learning and eschewing hand-constructed features. Holophrasm exploits the\nformalism of the Metamath language and explores partial proof trees using a\nneural-network-augmented bandit algorithm and a sequence-to-sequence model for\naction enumeration. 
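The time series pooling framework above summarizes how each descriptor element changes over the video. The paper defines several pooling operators; the snippet below is only an illustrative sketch that pools raw values together with the positive and negative parts of the temporal gradient, which captures the same intuition of tracking descriptor changes over time.

```python
import numpy as np

def pool_time_series(frame_descriptors):
    """Summarise a (T, d) sequence of per-frame descriptors into one video vector.

    Concatenates max/mean pooling of the raw values with max/sum pooling of the
    positive and negative parts of the temporal gradient, so the representation
    reflects how each descriptor element increased or decreased over time.
    """
    x = np.asarray(frame_descriptors, dtype=np.float64)   # (T, d)
    grad = np.diff(x, axis=0)                             # (T-1, d) frame-to-frame change
    pos, neg = np.clip(grad, 0, None), np.clip(-grad, 0, None)
    return np.concatenate([
        x.max(axis=0), x.mean(axis=0),      # appearance summary
        pos.max(axis=0), pos.sum(axis=0),   # how strongly / how much values increased
        neg.max(axis=0), neg.sum(axis=0),   # how strongly / how much values decreased
    ])

video_vec = pool_time_series(np.random.rand(120, 64))    # e.g. 120 frames of HOF or CNN features
print(video_vec.shape)                                    # (384,)
```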
The system proves 14% of its test theorems from Metamath's\nset.mm module.", "field": [], "task": ["Automated Theorem Proving"], "method": [], "dataset": ["Metamath set.mm"], "metric": ["Percentage correct"], "title": "Holophrasm: a neural Automated Theorem Prover for higher-order logic"} {"abstract": "Non-local methods exploiting the self-similarity of natural signals have been\nwell studied, for example in image analysis and restoration. Existing\napproaches, however, rely on k-nearest neighbors (KNN) matching in a fixed\nfeature space. The main hurdle in optimizing this feature space w.r.t.\napplication performance is the non-differentiability of the KNN selection rule.\nTo overcome this, we propose a continuous deterministic relaxation of KNN\nselection that maintains differentiability w.r.t. pairwise distances, but\nretains the original KNN as the limit of a temperature parameter approaching\nzero. To exploit our relaxation, we propose the neural nearest neighbors block\n(N3 block), a novel non-local processing layer that leverages the principle of\nself-similarity and can be used as building block in modern neural network\narchitectures. We show its effectiveness for the set reasoning task of\ncorrespondence classification as well as for image restoration, including image\ndenoising and single image super-resolution, where we outperform strong\nconvolutional neural network (CNN) baselines and recent non-local models that\nrely on KNN selection in hand-chosen features spaces.", "field": [], "task": ["Denoising", "Image Denoising", "Image Restoration", "Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["Set5 - 3x upscaling", "Urban100 sigma25", "Urban100 sigma50", "Set12 sigma50", "BSD68 sigma50", "Set5 - 4x upscaling", "BSD68 sigma70", "Set12 sigma25", "BSD68 sigma25", "Set5 - 2x upscaling", "Set12 sigma70", "Urban100 sigma70"], "metric": ["SSIM", "PSNR"], "title": "Neural Nearest Neighbors Networks"} {"abstract": "The noetic end-to-end response selection challenge as one track in Dialog System Technology Challenges 7 (DSTC7) aims to push the state of the art of utterance classification for real world goal-oriented dialog systems, for which participants need to select the correct next utterances from a set of candidates for the multi-turn context. This paper describes our systems that are ranked the top on both datasets under this challenge, one focused and small (Advising) and the other more diverse and large (Ubuntu). Previous state-of-the-art models use hierarchy-based (utterance-level and token-level) neural networks to explicitly model the interactions among different turns' utterances for context modeling. In this paper, we investigate a sequential matching model based only on chain sequence for multi-turn response selection. Our results demonstrate that the potentials of sequential matching approaches have not yet been fully exploited in the past for multi-turn response selection. 
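The core trick in the neural nearest neighbors abstract above is replacing the non-differentiable KNN selection rule with a continuous relaxation controlled by a temperature. The paper derives a specific relaxation of ordered neighbour selection; the snippet below only illustrates the underlying principle for a single soft neighbour: weights come from a softmax over negative pairwise distances and collapse to hard nearest-neighbour selection as the temperature approaches zero.

```python
import torch

def soft_nearest_neighbors(queries, database, temperature=0.1):
    """Differentiable 'selection' of neighbours from `database` for each query.

    queries: (m, d) tensor, database: (n, d) tensor.
    Returns (m, d) soft neighbours: convex combinations of database items whose
    weights approach one-hot nearest-neighbour selection as temperature -> 0.
    """
    dists = torch.cdist(queries, database)                  # (m, n) pairwise distances
    weights = torch.softmax(-dists / temperature, dim=1)    # (m, n), differentiable
    return weights @ database                               # weighted average of neighbours

db = torch.randn(100, 16, requires_grad=True)
q = torch.randn(5, 16)
out = soft_nearest_neighbors(q, db, temperature=0.05)
out.sum().backward()                  # gradients flow back to the database features
print(db.grad.shape)                  # torch.Size([100, 16])
```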
In addition to ranking the top in the challenge, the proposed model outperforms all previous models, including state-of-the-art hierarchy-based models, and achieves new state-of-the-art performances on two large-scale public multi-turn response selection benchmark datasets.", "field": [], "task": ["Conversational Response Selection", "Goal-Oriented Dialog"], "method": [], "dataset": ["DSTC7 Ubuntu", "Advising Corpus", "Ubuntu Dialogue (v1, Ranking)"], "metric": ["R10@1", "R10@2", "R@10", "1-of-100 Accuracy", "R@50", "R@1", "R10@5"], "title": "Sequential Attention-based Network for Noetic End-to-End Response Selection"} {"abstract": "Can performance on the task of action quality assessment (AQA) be improved by exploiting a description of the action and its quality? Current AQA and skills assessment approaches propose to learn features that serve only one task - estimating the final score. In this paper, we propose to learn spatio-temporal features that explain three related tasks - fine-grained action recognition, commentary generation, and estimating the AQA score. A new multitask-AQA dataset, the largest to date, comprising of 1412 diving samples was collected to evaluate our approach (https://github.com/ParitoshParmar/MTL-AQA). We show that our MTL approach outperforms STL approach using two different kinds of architectures: C3D-AVG and MSCADC. The C3D-AVG-MTL approach achieves the new state-of-the-art performance with a rank correlation of 90.44%. Detailed experiments were performed to show that MTL offers better generalization than STL, and representations from action recognition models are not sufficient for the AQA task and instead should be learned.", "field": [], "task": ["Action Classification", "Action Quality Assessment", "Action Recognition", "Fine-grained Action Recognition", "Multi-Task Learning", "Temporal Action Localization", "Video Captioning"], "method": [], "dataset": ["MTL-AQA"], "metric": ["Spearman Correlation"], "title": "What and How Well You Performed? A Multitask Learning Approach to Action Quality Assessment"} {"abstract": "3D human pose estimation from a monocular image or 2D joints is an ill-posed\nproblem because of depth ambiguity and occluded joints. We argue that 3D human\npose estimation from a monocular input is an inverse problem where multiple\nfeasible solutions can exist. In this paper, we propose a novel approach to\ngenerate multiple feasible hypotheses of the 3D pose from 2D joints.In contrast\nto existing deep learning approaches which minimize a mean square error based\non an unimodal Gaussian distribution, our method is able to generate multiple\nfeasible hypotheses of 3D pose based on a multimodal mixture density networks.\nOur experiments show that the 3D poses estimated by our approach from an input\nof 2D joints are consistent in 2D reprojections, which supports our argument\nthat multiple solutions exist for the 2D-to-3D inverse problem. Furthermore, we\nshow state-of-the-art performance on the Human3.6M dataset in both best\nhypothesis and multi-view settings, and we demonstrate the generalization\ncapacity of our model by testing on the MPII and MPI-INF-3DHP datasets. 
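The multi-hypothesis pose method above relies on a mixture density network: instead of regressing one pose, the head outputs mixture weights, means, and variances of a Gaussian mixture over poses, and each component can be read out as a distinct hypothesis. A hedged PyTorch sketch of such a head follows; the layer sizes and the diagonal-Gaussian parameterisation are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MDNHead(nn.Module):
    """Predicts a K-component diagonal Gaussian mixture over a D-dimensional target."""

    def __init__(self, in_dim, out_dim, n_components=5, hidden=256):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.pi = nn.Linear(hidden, n_components)                    # mixture weights (logits)
        self.mu = nn.Linear(hidden, n_components * out_dim)          # component means
        self.log_sigma = nn.Linear(hidden, n_components * out_dim)   # log standard deviations
        self.k, self.d = n_components, out_dim

    def forward(self, x):
        h = self.backbone(x)
        pi = torch.softmax(self.pi(h), dim=-1)                       # (B, K)
        mu = self.mu(h).view(-1, self.k, self.d)                     # (B, K, D)
        sigma = torch.exp(self.log_sigma(h)).view(-1, self.k, self.d)
        return pi, mu, sigma

def mdn_nll(pi, mu, sigma, target):
    """Negative log-likelihood of `target` (B, D) under the predicted mixture."""
    comp = torch.distributions.Normal(mu, sigma)                     # diagonal Gaussians
    log_prob = comp.log_prob(target.unsqueeze(1)).sum(-1)            # (B, K)
    return -torch.logsumexp(torch.log(pi + 1e-8) + log_prob, dim=-1).mean()

# 2D-joint input (17 joints x 2) -> K hypotheses of a 3D pose (17 joints x 3).
head = MDNHead(in_dim=34, out_dim=51)
pi, mu, sigma = head(torch.randn(8, 34))
loss = mdn_nll(pi, mu, sigma, torch.randn(8, 51))
loss.backward()
```

At test time, the K means are the multiple feasible hypotheses; the best-hypothesis protocol scores the one closest to ground truth.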
Our\ncode is available at the project website.", "field": [], "task": ["3D Human Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["Human3.6M"], "metric": ["Average MPJPE (mm)"], "title": "Generating Multiple Hypotheses for 3D Human Pose Estimation with Mixture Density Network"} {"abstract": "We present a bundle-adjustment-based algorithm for recovering accurate 3D human pose and meshes from monocular videos. Unlike previous algorithms which operate on single frames, we show that reconstructing a person over an entire sequence gives extra constraints that can resolve ambiguities. This is because videos often give multiple views of a person, yet the overall body shape does not change and 3D positions vary slowly. Our method improves not only on standard mocap-based datasets like Human 3.6M -- where we show quantitative improvements -- but also on challenging in-the-wild datasets such as Kinetics. Building upon our algorithm, we present a new dataset of more than 3 million frames of YouTube videos from Kinetics with automatically generated 3D poses and meshes. We show that retraining a single-frame 3D pose estimator on this data improves accuracy on both real-world and mocap data by evaluating on the 3DPW and HumanEVA datasets.", "field": [], "task": ["3D Human Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["Human3.6M", "3DPW"], "metric": ["Average MPJPE (mm)", "PA-MPJPE"], "title": "Exploiting temporal context for 3D human pose estimation in the wild"} {"abstract": "This paper presents PointWeb, a new approach to extract contextual features from local neighborhood in a point cloud. Unlike previous work, we densely connect each point with every other in a local neighborhood, aiming to specify feature of each point based on the local region characteristics for better representing the region. A novel module, namely Adaptive Feature Adjustment (AFA) module, is presented to find the interaction between points. For each local region, an impact map carrying element-wise impact between point pairs is applied to the feature difference map. Each feature is then pulled or pushed by other features in the same region according to the adaptively learned impact indicators. The adjusted features are well encoded with region information, and thus benefit the point cloud recognition tasks, such as point cloud segmentation and classification. Experimental results show that our model outperforms the state-of-the-arts on both semantic segmentation and shape classification datasets.\r", "field": [], "task": ["3D Point Cloud Classification", "Semantic Segmentation"], "method": [], "dataset": ["S3DIS Area5", "S3DIS", "ModelNet40"], "metric": ["Overall Accuracy", "oAcc", "Mean IoU", "mAcc", "mIoU"], "title": "PointWeb: Enhancing Local Neighborhood Features for Point Cloud Processing"} {"abstract": "Lip reading has received an increasing research interest in recent years due to the rapid development of deep learning and its widespread potential applications. One key point to obtain good performance for the lip reading task depends heavily on how effective the representation can be to capture the lip movement information and meanwhile to resist the noises resulted from the change of pose, lighting conditions, speaker's appearance and so on. Towards this target, we propose to introduce the mutual information constraints on both the local feature's level and the global sequence's level to enhance the relations of the features with the speech content. 
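The PointWeb AFA module described above adjusts each point feature with learned, element-wise impacts applied to the feature-difference map of a local region. The sketch below is one plausible reading of that description rather than the paper's exact module; the small MLP producing the impact map and the mean aggregation are assumptions.

```python
import torch
import torch.nn as nn

class AdaptiveFeatureAdjustment(nn.Module):
    """Adjust each point feature with element-wise impacts on pairwise feature differences."""

    def __init__(self, channels, hidden=64):
        super().__init__()
        # Small MLP mapping a feature difference to an element-wise impact in (-1, 1),
        # so a point can be "pulled" toward or "pushed" away from its neighbours.
        self.impact_mlp = nn.Sequential(
            nn.Linear(channels, hidden), nn.ReLU(),
            nn.Linear(hidden, channels), nn.Tanh(),
        )

    def forward(self, region_feats):
        # region_feats: (B, N, C) features of the N points in one local region.
        diff = region_feats.unsqueeze(2) - region_feats.unsqueeze(1)   # (B, N, N, C): F_i - F_j
        impact = self.impact_mlp(diff)                                 # (B, N, N, C) impact map
        # Aggregate the weighted differences over the region and add them back.
        return region_feats + (impact * diff).mean(dim=2)              # (B, N, C)

afa = AdaptiveFeatureAdjustment(channels=32)
print(afa(torch.randn(4, 16, 32)).shape)   # torch.Size([4, 16, 32])
```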
On the one hand, we constraint the features generated at each time step to enable them carry a strong relation with the speech content by imposing the local mutual information maximization constraint (LMIM), leading to improvements over the model's ability to discover fine-grained lip movements and the fine-grained differences among words with similar pronunciation, such as ``spend'' and ``spending''. On the other hand, we introduce the mutual information maximization constraint on the global sequence's level (GMIM), to make the model be able to pay more attention to discriminate key frames related with the speech content, and less to various noises appeared in the speaking process. By combining these two advantages together, the proposed method is expected to be both discriminative and robust for effective lip reading. To verify this method, we evaluate on two large-scale benchmark. We perform a detailed analysis and comparison on several aspects, including the comparison of the LMIM and GMIM with the baseline, the visualization of the learned representation and so on. The results not only prove the effectiveness of the proposed method but also report new state-of-the-art performance on both the two benchmarks.", "field": [], "task": ["Lipreading", "Lip Reading"], "method": [], "dataset": ["Lip Reading in the Wild", "LRW-1000"], "metric": ["Top-1 Accuracy"], "title": "Mutual Information Maximization for Effective Lip Reading"} {"abstract": "Graph convolutional networks (GCNs) are a powerful deep learning approach for graph-structured data. Recently, GCNs and subsequent variants have shown superior performance in various application areas on real-world datasets. Despite their success, most of the current GCN models are shallow, due to the {\\em over-smoothing} problem. In this paper, we study the problem of designing and analyzing deep graph convolutional networks. We propose the GCNII, an extension of the vanilla GCN model with two simple yet effective techniques: {\\em Initial residual} and {\\em Identity mapping}. We provide theoretical and empirical evidence that the two techniques effectively relieves the problem of over-smoothing. Our experiments show that the deep GCNII model outperforms the state-of-the-art methods on various semi- and full-supervised tasks. Code is available at https://github.com/chennnM/GCNII .", "field": ["Graph Models"], "task": [], "method": ["Graph Convolutional Network", "GCN"], "dataset": ["PPI", "Pubmed Full-supervised", "Cora with Public Split: fixed 20 nodes per class", "Cora Full-supervised", "CiteSeer with Public Split: fixed 20 nodes per class", "Citeseer Full-supervised", "PubMed with Public Split: fixed 20 nodes per class"], "metric": ["F1", "Accuracy"], "title": "Simple and Deep Graph Convolutional Networks"} {"abstract": "We introduce a novel approach for scanned document representation to perform field extraction. It allows the simultaneous encoding of the textual, visual and layout information in a 3D matrix used as an input to a segmentation model. We improve the recent Chargrid and Wordgrid models in several ways, first by taking into account the visual modality, then by boosting its robustness in regards to small datasets while keeping the inference time low. 
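The two GCNII techniques named above, initial residual and identity mapping, have a compact closed form: each layer mixes the propagated features with the initial representation H0 (weight alpha) and mixes the weight matrix with the identity (weight beta, shrinking with depth). A sketch under those definitions; the hyperparameter values and the identity adjacency used in the toy example are placeholders.

```python
import math
import torch
import torch.nn as nn

class GCNIILayer(nn.Module):
    """One GCNII-style layer: initial residual plus identity mapping."""

    def __init__(self, channels):
        super().__init__()
        self.weight = nn.Linear(channels, channels, bias=False)

    def forward(self, h, h0, adj_norm, alpha=0.1, beta=0.5):
        # adj_norm: (N, N) normalised adjacency, e.g. D^-1/2 (A + I) D^-1/2.
        support = (1 - alpha) * (adj_norm @ h) + alpha * h0            # initial residual
        out = (1 - beta) * support + beta * self.weight(support)       # identity mapping
        return torch.relu(out)

# Stack many layers; beta = log(lambda / l + 1) keeps deep layers close to the identity.
n, c, n_layers, lam = 100, 64, 16, 0.5
adj = torch.eye(n)                       # stand-in for a real normalised graph adjacency
h0 = torch.randn(n, c)
layers = nn.ModuleList([GCNIILayer(c) for _ in range(n_layers)])
h = h0
for l, layer in enumerate(layers, start=1):
    h = layer(h, h0, adj, alpha=0.1, beta=math.log(lam / l + 1))
print(h.shape)    # torch.Size([100, 64])
```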
Our approach is tested on public and private document-image datasets, showing higher performances compared to the recent state-of-the-art methods.", "field": [], "task": [], "method": [], "dataset": ["RVL-CDIP"], "metric": ["WAR", "FAR"], "title": "VisualWordGrid: Information Extraction From Scanned Documents Using A Multimodal Approach"} {"abstract": "Deep learning-based detectors usually produce a redundant set of object bounding boxes including many duplicate detections of the same object. These boxes are then filtered using non-maximum suppression (NMS) in order to select exactly one bounding box per object of interest. This greedy scheme is simple and provides sufficient accuracy for isolated objects but often fails in crowded environments, since one needs to both preserve boxes for different objects and suppress duplicate detections. In this work we develop an alternative iterative scheme, where a new subset of objects is detected at each iteration. Detected boxes from the previous iterations are passed to the network at the following iterations to ensure that the same object would not be detected twice. This iterative scheme can be applied to both one-stage and two-stage object detectors with just minor modifications of the training and inference procedures. We perform extensive experiments with two different baseline detectors on four datasets and show significant improvement over the baseline, leading to state-of-the-art performance on CrowdHuman and WiderPerson datasets. The source code and the trained models are available at https://github.com/saic-vul/iterdet.", "field": [], "task": ["Object Detection"], "method": [], "dataset": ["CrowdHuman (full body)", "WiderPerson"], "metric": ["mMR", "AP"], "title": "IterDet: Iterative Scheme for Object Detection in Crowded Environments"} {"abstract": "Visual question answering is fundamentally compositional in nature---a\nquestion like \"where is the dog?\" shares substructure with questions like \"what\ncolor is the dog?\" and \"where is the cat?\" This paper seeks to simultaneously\nexploit the representational capacity of deep networks and the compositional\nlinguistic structure of questions. We describe a procedure for constructing and\nlearning *neural module networks*, which compose collections of jointly-trained\nneural \"modules\" into deep networks for question answering. Our approach\ndecomposes questions into their linguistic substructures, and uses these\nstructures to dynamically instantiate modular networks (with reusable\ncomponents for recognizing dogs, classifying colors, etc.). The resulting\ncompound networks are jointly trained. We evaluate our approach on two\nchallenging datasets for visual question answering, achieving state-of-the-art\nresults on both the VQA natural image dataset and a new dataset of complex\nquestions about abstract shapes.", "field": [], "task": ["Visual Question Answering"], "method": [], "dataset": ["VQA v1 test-std", "VQA v1 test-dev"], "metric": ["Accuracy"], "title": "Neural Module Networks"} {"abstract": "The ability to recognize facial expressions automatically enables novel\napplications in human-computer interaction and other areas. Consequently, there\nhas been active research in this field, with several recent works utilizing\nConvolutional Neural Networks (CNNs) for feature extraction and inference.\nThese works differ significantly in terms of CNN architectures and other\nfactors. Based on the reported results alone, the performance impact of these\nfactors is unclear. 
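The IterDet scheme above is mostly a change to the inference loop: at each iteration the detector receives, alongside the image, a rendering of the boxes found so far, and newly returned boxes are added to that history until nothing new is detected. A schematic sketch of the loop follows; `detector` is a hypothetical callable standing in for a model trained to accept the extra history channel, so the snippet shows control flow rather than a specific architecture.

```python
import numpy as np

def render_history(boxes, height, width):
    """Rasterise already-found boxes into a single-channel map fed back to the detector."""
    hist = np.zeros((height, width), dtype=np.float32)
    for x1, y1, x2, y2 in boxes.astype(int):
        hist[max(y1, 0):y2, max(x1, 0):x2] += 1.0
    return hist

def iterative_detect(image, detector, max_iters=3, score_thr=0.5):
    """Run the detector repeatedly, feeding previously detected boxes back as history.

    detector(image, history) is assumed to return (boxes (k, 4), scores (k,)) as numpy
    arrays and to be trained to suppress objects already present in the history map.
    """
    h, w = image.shape[:2]
    all_boxes = np.zeros((0, 4), dtype=np.float32)
    for _ in range(max_iters):
        boxes, scores = detector(image, render_history(all_boxes, h, w))
        boxes = boxes[scores >= score_thr]
        if len(boxes) == 0:          # nothing new found: stop early
            break
        all_boxes = np.concatenate([all_boxes, boxes], axis=0)
    return all_boxes
```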
In this paper, we review the state of the art in\nimage-based facial expression recognition using CNNs and highlight algorithmic\ndifferences and their performance impact. On this basis, we identify existing\nbottlenecks and consequently directions for advancing this research field.\nFurthermore, we demonstrate that overcoming one of these bottlenecks - the\ncomparatively basic architectures of the CNNs utilized in this field - leads to\na substantial performance increase. By forming an ensemble of modern deep CNNs,\nwe obtain a FER2013 test accuracy of 75.2%, outperforming previous works\nwithout requiring auxiliary training data or face registration.", "field": [], "task": [], "method": [], "dataset": ["FER2013"], "metric": ["Accuracy"], "title": "Facial Expression Recognition using Convolutional Neural Networks: State of the Art"} {"abstract": "Vision-based detection on surface defects has long postulated in the magnetic tile automation process. In this work, we introduce a real-time and multi-module neural network model called MCuePush U-Net, specifically designed for the image saliency detection of magnetic tile. We show that the model exceeds the state-of-the-art, in which it both effectively and explicitly maps multiple surface defects from low-contrast images. Our model significantly reduces time cost of machinery from 0.5s per image to 0.07s, and enhances saliency accuracy on surface defect detection.", "field": [], "task": ["Anomaly Detection", "Defect Detection", "Saliency Detection"], "method": [], "dataset": ["Surface Defect Saliency of Magnetic Tile"], "metric": ["Segmentation AUROC"], "title": "Surface Defect Saliency of Magnetic Tile"} {"abstract": "Few-shot classification is a challenge in machine learning where the goal is to train a classifier using a very limited number of labeled examples. This scenario is likely to occur frequently in real life, for example when data acquisition or labeling is expensive. In this work, we consider the problem of post-labeled few-shot unsupervised learning, a classification task where representations are learned in an unsupervised fashion, to be later labeled using very few annotated examples. We argue that this problem is very likely to occur on the edge, when the embedded device directly acquires the data, and the expert needed to perform labeling cannot be prompted often. To address this problem, we consider an algorithm consisting of the concatenation of transfer learning with clustering using Self-Organizing Maps (SOMs). We introduce a TensorFlow-based implementation to speed-up the process in multi-core CPUs and GPUs. Finally, we demonstrate the effectiveness of the method using standard off-the-shelf few-shot classification benchmarks.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Transfer Learning"], "method": [], "dataset": ["Mini-Imagenet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)"], "metric": ["Accuracy"], "title": "GPU-based Self-Organizing Maps for Post-Labeled Few-Shot Unsupervised Learning"} {"abstract": "Several recent publications have proposed methods for mapping images into\ncontinuous semantic embedding spaces. In some cases the embedding space is\ntrained jointly with the image transformation. In other cases the semantic\nembedding space is established by an independent natural language processing\ntask, and then the image transformation into that space is learned in a second\nstage. 
Proponents of these image embedding systems have stressed their\nadvantages over the traditional \\nway{} classification framing of image\nunderstanding, particularly in terms of the promise for zero-shot learning --\nthe ability to correctly annotate images of previously unseen object\ncategories. In this paper, we propose a simple method for constructing an image\nembedding system from any existing \\nway{} image classifier and a semantic word\nembedding model, which contains the $\\n$ class labels in its vocabulary. Our\nmethod maps images into the semantic embedding space via convex combination of\nthe class label embedding vectors, and requires no additional training. We show\nthat this simple and direct method confers many of the advantages associated\nwith more complex image embedding schemes, and indeed outperforms state of the\nart methods on the ImageNet zero-shot learning task.", "field": [], "task": ["Zero-Shot Learning"], "method": [], "dataset": ["ImageNet - 0-Shot"], "metric": ["Accuracy"], "title": "Zero-Shot Learning by Convex Combination of Semantic Embeddings"} {"abstract": "Aspect-level sentiment analysis aims to identify the sentiment of a specific target in its context. Previous works have proved that the interactions between aspects and the contexts are important. On this basis, we also propose a succinct hierarchical attention based mechanism to fuse the information of targets and the contextual words. In addition, most existing methods ignore the position information of the aspect when encoding the sentence. In this paper, we argue that the position-aware representations are beneficial to this task. Therefore, we propose a hierarchical attention based position-aware network (HAPN), which introduces position embeddings to learn the position-aware representations of sentences and further generate the target-specific representations of contextual words. The experimental results on SemEval 2014 dataset show that our approach outperforms the state-of-the-art methods.", "field": [], "task": ["Aspect-Based Sentiment Analysis", "Feature Engineering", "Sentiment Analysis"], "method": [], "dataset": ["SemEval 2014 Task 4 Sub Task 2"], "metric": ["Laptop (Acc)", "Restaurant (Acc)", "Mean Acc (Restaurant + Laptop)"], "title": "Hierarchical Attention Based Position-Aware Network for Aspect-Level Sentiment Analysis"} {"abstract": "The rapid pace of recent research in AI has been driven in part by the presence of fast and challenging simulation environments. These environments often take the form of games; with tasks ranging from simple board games, to competitive video games. We propose a new benchmark - Obstacle Tower: a high fidelity, 3D, 3rd person, procedurally generated environment. An agent playing Obstacle Tower must learn to solve both low-level control and high-level planning problems in tandem while learning from pixels and a sparse reward signal. Unlike other benchmarks such as the Arcade Learning Environment, evaluation of agent performance in Obstacle Tower is based on an agent's ability to perform well on unseen instances of the environment. In this paper we outline the environment and provide a set of baseline results produced by current state-of-the-art Deep RL methods as well as human players. 
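The convex-combination method described above needs no additional training: run an off-the-shelf classifier over the seen classes, take its top-T probabilities, form the probability-weighted average of those classes' word embeddings, and label the image with the nearest unseen-class embedding. A small numpy sketch under those assumptions; the classifier probabilities and word embeddings are taken as given inputs.

```python
import numpy as np

def conse_predict(seen_probs, seen_embs, unseen_embs, top_t=10):
    """Zero-shot prediction by convex combination of class-label embeddings.

    seen_probs:  (B, S) softmax outputs of a classifier over S seen classes
    seen_embs:   (S, d) word embeddings of the seen class labels
    unseen_embs: (U, d) word embeddings of the candidate unseen class labels
    Returns (B,) indices into the unseen classes.
    """
    top = np.argsort(-seen_probs, axis=1)[:, :top_t]          # (B, T) most confident seen classes
    w = np.take_along_axis(seen_probs, top, axis=1)           # (B, T) their probabilities
    w = w / w.sum(axis=1, keepdims=True)                      # renormalise -> convex weights
    img_emb = np.einsum('bt,btd->bd', w, seen_embs[top])      # (B, d) embedded image
    # Cosine similarity against the unseen-class embeddings.
    img_emb /= np.linalg.norm(img_emb, axis=1, keepdims=True)
    u = unseen_embs / np.linalg.norm(unseen_embs, axis=1, keepdims=True)
    return np.argmax(img_emb @ u.T, axis=1)
```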
These algorithms fail to produce agents capable of performing near human level.", "field": [], "task": ["Atari Games", "Board Games"], "method": [], "dataset": ["Obstacle Tower (Weak Gen) fixed", "Obstacle Tower (Strong Gen) fixed", "Obstacle Tower (No Gen) varied", "Obstacle Tower (No Gen) fixed", "Obstacle Tower (Strong Gen) varied", "Obstacle Tower (Weak Gen) varied"], "metric": ["Score"], "title": "Obstacle Tower: A Generalization Challenge in Vision, Control, and Planning"} {"abstract": "Automating the classification of camera-obtained microscopic images of White Blood Cells (WBCs) and related cell subtypes has assumed importance since it aids the laborious manual process of review and diagnosis. Several State-Of-The-Art (SOTA) methods developed using Deep Convolutional Neural Networks suffer from the problem of domain shift - severe performance degradation when they are tested on data (target) obtained in a setting different from that of the training (source). The change in the target data might be caused by factors such as differences in camera/microscope types, lenses, lighting-conditions etc. This problem can potentially be solved using Unsupervised Domain Adaptation (UDA) techniques albeit standard algorithms presuppose the existence of a sufficient amount of unlabelled target data which is not always the case with medical images. In this paper, we propose a method for UDA that is devoid of the need for target data. Given a test image from the target data, we obtain its 'closest-clone' from the source data that is used as a proxy in the classifier. We prove the existence of such a clone given that infinite number of data points can be sampled from the source distribution. We propose a method in which a latent-variable generative model based on variational inference is used to simultaneously sample and find the 'closest-clone' from the source distribution through an optimization procedure in the latent space. We demonstrate the efficacy of the proposed method over several SOTA UDA methods for WBC classification on datasets captured using different imaging modalities under multiple settings.", "field": [], "task": ["Domain Adaptation", "Unsupervised Domain Adaptation", "Variational Inference"], "method": [], "dataset": ["Office-31"], "metric": ["Average Accuracy"], "title": "Target-Independent Domain Adaptation for WBC Classification using Generative Latent Search"} {"abstract": "Human poses and motions are important cues for analysis of videos with people\nand there is strong evidence that representations based on body pose are highly\neffective for a variety of tasks such as activity recognition, content\nretrieval and social signal processing. In this work, we aim to further advance\nthe state of the art by establishing \"PoseTrack\", a new large-scale benchmark\nfor video-based human pose estimation and articulated tracking, and bringing\ntogether the community of researchers working on visual human analysis. The\nbenchmark encompasses three competition tracks focusing on i) single-frame\nmulti-person pose estimation, ii) multi-person pose estimation in videos, and\niii) multi-person articulated tracking. To facilitate the benchmark and\nchallenge we collect, annotate and release a new %large-scale benchmark dataset\nthat features videos with multiple people labeled with person tracks and\narticulated pose. A centralized evaluation server is provided to allow\nparticipants to evaluate on a held-out test set. 
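The target-independent adaptation method above searches the source generative model's latent space for the sample closest to a given target image, then classifies that "closest clone" in place of the target. A schematic PyTorch sketch of the search step; `decoder` is a hypothetical pretrained generator mapping latents to images, and plain MSE stands in for whatever distance the full method optimises.

```python
import torch

def find_closest_clone(target, decoder, latent_dim=64, steps=200, lr=0.05):
    """Optimise a latent code so the decoded source-domain image matches `target`.

    target:  (C, H, W) tensor, the test image from the shifted domain
    decoder: pretrained generator mapping (1, latent_dim) -> (1, C, H, W)
    Returns the decoded 'closest clone', to be fed to the source-domain classifier.
    """
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = decoder(z)                                       # sample from the source model
        loss = torch.nn.functional.mse_loss(recon, target.unsqueeze(0))
        loss.backward()
        opt.step()
    with torch.no_grad():
        return decoder(z).squeeze(0)
```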
We envision that the proposed\nbenchmark will stimulate productive research both by providing a large and\nrepresentative training dataset as well as providing a platform to objectively\nevaluate and compare the proposed methods. The benchmark is freely accessible\nat https://posetrack.net.", "field": [], "task": ["Activity Recognition", "Multi-Person Pose Estimation", "Pose Estimation", "Pose Tracking"], "method": [], "dataset": ["PoseTrack2017"], "metric": ["MOTA", "Mean mAP"], "title": "PoseTrack: A Benchmark for Human Pose Estimation and Tracking"} {"abstract": "In the design of deep neural architectures, recent studies have demonstrated\nthe benefits of grouping subnetworks into a larger network. For examples, the\nInception architecture integrates multi-scale subnetworks and the residual\nnetwork can be regarded that a residual unit combines a residual subnetwork\nwith an identity shortcut. In this work, we embrace this observation and\npropose the Competitive Pathway Network (CoPaNet). The CoPaNet comprises a\nstack of competitive pathway units and each unit contains multiple parallel\nresidual-type subnetworks followed by a max operation for feature competition.\nThis mechanism enhances the model capability by learning a variety of features\nin subnetworks. The proposed strategy explicitly shows that the features\npropagate through pathways in various routing patterns, which is referred to as\npathway encoding of category information. Moreover, the cross-block shortcut\ncan be added to the CoPaNet to encourage feature reuse. We evaluated the\nproposed CoPaNet on four object recognition benchmarks: CIFAR-10, CIFAR-100,\nSVHN, and ImageNet. CoPaNet obtained the state-of-the-art or comparable results\nusing similar amounts of parameters. The code of CoPaNet is available at:\nhttps://github.com/JiaRenChang/CoPaNet.", "field": [], "task": ["Image Classification", "Object Recognition"], "method": [], "dataset": ["SVHN", "CIFAR-100", "CIFAR-10"], "metric": ["Percentage error", "Percentage correct"], "title": "Deep Competitive Pathway Networks"} {"abstract": "Facial expression recognition methods use a combination of geometric and\nappearance-based features. Spatial features are derived from displacements of\nfacial landmarks, and carry geometric information. These features are either\nselected based on prior knowledge, or dimension-reduced from a large pool. In\nthis study, we produce a large number of potential spatial features using two\ncombinations of facial landmarks. Among these, we search for a descriptive\nsubset of features using sequential forward selection. The chosen feature\nsubset is used to classify facial expressions in the extended Cohn-Kanade\ndataset (CK+), and delivered 88.7% recognition accuracy without using any\nappearance-based features.", "field": [], "task": ["Facial Expression Recognition"], "method": [], "dataset": ["Cohn-Kanade"], "metric": ["Accuracy"], "title": "Greedy Search for Descriptive Spatial Face Features"} {"abstract": "In recent years, deep learning techniques revolutionized the way remote\nsensing data are processed. Classification of hyperspectral data is no\nexception to the rule, but has intrinsic specificities which make application\nof deep learning less straightforward than with other optical data. 
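Sequential forward selection, as used for the spatial face features above, is a greedy loop: start from an empty set and repeatedly add the single candidate feature that most improves a cross-validated score. A generic scikit-learn sketch follows; the k-NN classifier and the scoring setup are illustrative choices, not the paper's.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def sequential_forward_selection(X, y, n_select=20, cv=5):
    """Greedily select feature columns of X that maximise cross-validated accuracy."""
    selected, remaining = [], list(range(X.shape[1]))
    clf = KNeighborsClassifier(n_neighbors=5)
    while remaining and len(selected) < n_select:
        scores = []
        for f in remaining:                               # try adding each remaining feature
            cols = selected + [f]
            score = cross_val_score(clf, X[:, cols], y, cv=cv).mean()
            scores.append((score, f))
        best_score, best_f = max(scores)                  # keep the best single addition
        selected.append(best_f)
        remaining.remove(best_f)
        print(f"added feature {best_f}: CV accuracy {best_score:.3f}")
    return selected
```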
This\narticle presents a state of the art of previous machine learning approaches,\nreviews the various deep learning approaches currently proposed for\nhyperspectral classification, and identifies the problems and difficulties\nwhich arise to implement deep neural networks for this task. In particular, the\nissues of spatial and spectral resolution, data volume, and transfer of models\nfrom multimedia images to hyperspectral data are addressed. Additionally, a\ncomparative study of various families of network architectures is provided and\na software toolbox is publicly released to allow experimenting with these\nmethods. 1 This article is intended for both data scientists with interest in\nhyperspectral data and remote sensing experts eager to apply deep learning\ntechniques to their own dataset.", "field": [], "task": ["Hyperspectral Image Classification"], "method": [], "dataset": ["Pavia University"], "metric": ["Overall Accuracy"], "title": "Deep Learning for Classification of Hyperspectral Data: A Comparative Review"} {"abstract": "We present Scan2CAD, a novel data-driven method that learns to align clean 3D\nCAD models from a shape database to the noisy and incomplete geometry of a\ncommodity RGB-D scan. For a 3D reconstruction of an indoor scene, our method\ntakes as input a set of CAD models, and predicts a 9DoF pose that aligns each\nmodel to the underlying scan geometry. To tackle this problem, we create a new\nscan-to-CAD alignment dataset based on 1506 ScanNet scans with 97607 annotated\nkeypoint pairs between 14225 CAD models from ShapeNet and their counterpart\nobjects in the scans. Our method selects a set of representative keypoints in a\n3D scan for which we find correspondences to the CAD geometry. To this end, we\ndesign a novel 3D CNN architecture that learns a joint embedding between real\nand synthetic objects, and from this predicts a correspondence heatmap. Based\non these correspondence heatmaps, we formulate a variational energy\nminimization that aligns a given set of CAD models to the reconstruction. We\nevaluate our approach on our newly introduced Scan2CAD benchmark where we\noutperform both handcrafted feature descriptor as well as state-of-the-art CNN\nbased methods by 21.39%.", "field": [], "task": ["3D Reconstruction"], "method": [], "dataset": ["Scan2CAD"], "metric": ["Average Accuracy"], "title": "Scan2CAD: Learning CAD Model Alignment in RGB-D Scans"} {"abstract": "Vision-Language Navigation (VLN) is a task where agents learn to navigate following natural language instructions. The key to this task is to perceive both the visual scene and natural language sequentially. Conventional approaches exploit the vision and language features in cross-modal grounding. However, the VLN task remains challenging, since previous works have neglected the rich semantic information contained in the environment (such as implicit navigation graphs or sub-trajectory semantics). In this paper, we introduce Auxiliary Reasoning Navigation (AuxRN), a framework with four self-supervised auxiliary reasoning tasks to take advantage of the additional training signals derived from the semantic information. The auxiliary tasks have four reasoning objectives: explaining the previous actions, estimating the navigation progress, predicting the next orientation, and evaluating the trajectory consistency. 
As a result, these additional training signals help the agent to acquire knowledge of semantic representations in order to reason about its activity and build a thorough perception of the environment. Our experiments indicate that auxiliary reasoning tasks improve both the performance of the main task and the model generalizability by a large margin. Empirically, we demonstrate that an agent trained with self-supervised auxiliary reasoning tasks substantially outperforms the previous state-of-the-art method, being the best existing approach on the standard benchmark.", "field": [], "task": ["Vision-Language Navigation"], "method": [], "dataset": ["VLN Challenge"], "metric": ["length", "spl", "oracle success", "success", "error"], "title": "Vision-Language Navigation with Self-Supervised Auxiliary Reasoning Tasks"} {"abstract": "In recent years gaze estimation methods have made substantial progress, driven by the numerous application areas including human-robot interaction, visual attention estimation and foveated rendering for virtual reality headsets. However, many gaze estimation methods typically assume that the subject's eyes are open; for closed eyes, these methods provide irregular gaze estimates. Here, we address this assumption by first introducing a new open-sourced dataset with annotations of the eye-openness of more than 200,000 eye images, including more than 10,000 images where the eyes are closed. We further present baseline methods that allow for blink detection using convolutional neural networks. In extensive experiments, we show that the proposed baselines perform favourably in terms of precision and recall. We further incorporate our proposed RT-BENE baselines in the recently presented RT-GENE gaze estimation framework where it provides a real-time inference of the openness of the eyes. We argue that our work will benefit both gaze estimation and blink estimation methods, and we take steps towards unifying these methods.", "field": [], "task": ["Blink estimation", "Gaze Estimation", "Human robot interaction"], "method": [], "dataset": ["RT-BENE", "Eyeblink8", "Researcher's Night"], "metric": ["F1"], "title": "RT-BENE: A Dataset and Baselines for Real-Time Blink Estimation in Natural Environments"} {"abstract": "Although tremendous strides have been made in face detection, one of the\nremaining open challenges is to achieve real-time speed on the CPU as well as\nmaintain high performance, since effective models for face detection tend to be\ncomputationally prohibitive. To address this challenge, we propose a novel face\ndetector, named FaceBoxes, with superior performance on both speed and\naccuracy. Specifically, our method has a lightweight yet powerful network\nstructure that consists of the Rapidly Digested Convolutional Layers (RDCL) and\nthe Multiple Scale Convolutional Layers (MSCL). The RDCL is designed to enable\nFaceBoxes to achieve real-time speed on the CPU. The MSCL aims at enriching the\nreceptive fields and discretizing anchors over different layers to handle faces\nof various scales. Besides, we propose a new anchor densification strategy to\nmake different types of anchors have the same density on the image, which\nsignificantly improves the recall rate of small faces. As a consequence, the\nproposed detector runs at 20 FPS on a single CPU core and 125 FPS using a GPU\nfor VGA-resolution images. Moreover, the speed of FaceBoxes is invariant to the\nnumber of faces. 
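The anchor densification strategy mentioned in the FaceBoxes abstract compensates for small anchors being too sparse relative to their feature-map stride: each original anchor centre is replaced by an n x n grid of evenly offset centres so every anchor type ends up with a similar density on the image. The function below is a rough, generic sketch of that tiling; the exact offsets and per-scale densification factors in FaceBoxes may differ.

```python
import numpy as np

def densify_anchor_centers(centers, stride, n):
    """Replace each anchor centre with an n x n grid of centres inside its stride cell.

    centers: (A, 2) array of (cx, cy) anchor centres on the image
    stride:  feature-map stride the anchors were laid out on
    n:       densification factor (n=1 keeps the original centres)
    Returns (A * n * n, 2) densified centres.
    """
    # Offsets evenly spread inside one stride cell, centred around the original point.
    steps = (np.arange(n) + 0.5) / n - 0.5             # e.g. n=2 -> [-0.25, 0.25]
    dx, dy = np.meshgrid(steps * stride, steps * stride)
    offsets = np.stack([dx.ravel(), dy.ravel()], axis=1)           # (n*n, 2)
    return (centers[:, None, :] + offsets[None, :, :]).reshape(-1, 2)

base = np.array([[16.0, 16.0], [48.0, 16.0]])    # two anchor centres on a stride-32 map
print(densify_anchor_centers(base, stride=32, n=2).shape)    # (8, 2)
```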
We comprehensively evaluate this method and present\nstate-of-the-art detection performance on several face detection benchmark\ndatasets, including the AFW, PASCAL face, and FDDB. Code is available at\nhttps://github.com/sfzhang15/FaceBoxes", "field": [], "task": ["Face Detection"], "method": [], "dataset": ["PASCAL Face", "Annotated Faces in the Wild", "FDDB"], "metric": ["AP"], "title": "FaceBoxes: A CPU Real-time Face Detector with High Accuracy"} {"abstract": "We propose the Variational Shape Learner (VSL), a generative model that\nlearns the underlying structure of voxelized 3D shapes in an unsupervised\nfashion. Through the use of skip-connections, our model can successfully learn\nand infer a latent, hierarchical representation of objects. Furthermore,\nrealistic 3D objects can be easily generated by sampling the VSL's latent\nprobabilistic manifold. We show that our generative model can be trained\nend-to-end from 2D images to perform single image 3D model retrieval.\nExperiments show, both quantitatively and qualitatively, the improved\ngeneralization of our proposed model over a range of tasks, performing better\nor comparable to various state-of-the-art alternatives.", "field": [], "task": ["3D Object Classification", "3D Object Recognition", "3D Reconstruction", "3D Shape Generation"], "method": [], "dataset": ["ModelNet40"], "metric": ["Accuracy"], "title": "Learning a Hierarchical Latent-Variable Model of 3D Shapes"} {"abstract": "Pre-trained word embeddings learned from unlabeled text have become a\nstandard component of neural network architectures for NLP tasks. However, in\nmost cases, the recurrent network that operates on word-level representations\nto produce context sensitive representations is trained on relatively little\nlabeled data. In this paper, we demonstrate a general semi-supervised approach\nfor adding pre- trained context embeddings from bidirectional language models\nto NLP systems and apply it to sequence labeling tasks. We evaluate our model\non two standard datasets for named entity recognition (NER) and chunking, and\nin both cases achieve state of the art results, surpassing previous systems\nthat use other forms of transfer or joint learning with additional labeled data\nand task specific gazetteers.", "field": [], "task": ["Chunking", "Named Entity Recognition"], "method": [], "dataset": ["CoNLL 2003 (English)"], "metric": ["F1"], "title": "Semi-supervised sequence tagging with bidirectional language models"} {"abstract": "We introduce BilBOWA (Bilingual Bag-of-Words without Alignments), a simple\nand computationally-efficient model for learning bilingual distributed\nrepresentations of words which can scale to large monolingual datasets and does\nnot require word-aligned parallel training data. Instead it trains directly on\nmonolingual data and extracts a bilingual signal from a smaller set of raw-text\nsentence-aligned data. This is achieved using a novel sampled bag-of-words\ncross-lingual objective, which is used to regularize two noise-contrastive\nlanguage models for efficient cross-lingual feature learning. 
We show that\nbilingual embeddings learned using the proposed model outperform\nstate-of-the-art methods on a cross-lingual document classification task as\nwell as a lexical translation task on WMT11 data.", "field": [], "task": ["Cross-Lingual Document Classification", "Document Classification"], "method": [], "dataset": ["Reuters En-De", "Reuters De-En"], "metric": ["Accuracy"], "title": "BilBOWA: Fast Bilingual Distributed Representations without Word Alignments"} {"abstract": "The process of learning good features for machine learning applications can be very computationally expensive and may prove difficult in cases where little data is available. A prototypical example of this is the one-shot learning setting, in which we must correctly make predictions given only a single example of each new class. In this paper, we explore a method for learning siamese neural networks which employ a unique structure to naturally rank similarity between inputs. Once a network has been tuned, we can then capitalize on powerful discriminative features to generalize the predictive power of the network not just to new data, but to entirely new classes from unknown distributions. Using a convolutional architecture, we are able to achieve strong results which exceed those of other deep learning models with near state-of-the-art performance on one-shot classification tasks.", "field": [], "task": ["One-Shot Learning"], "method": [], "dataset": ["MNIST"], "metric": ["Accuracy"], "title": "Siamese neural networks for one-shot image recognition"} {"abstract": "Monocular depth prediction plays a crucial role in understanding 3D scene geometry. Although recent methods have achieved impressive progress in evaluation metrics such as the pixel-wise relative error, most methods neglect the geometric constraints in the 3D space. In this work, we show the importance of the high-order 3D geometric constraints for depth prediction. By designing a loss term that enforces one simple type of geometric constraints, namely, virtual normal directions determined by randomly sampled three points in the reconstructed 3D space, we can considerably improve the depth prediction accuracy. Significantly, the byproduct of this predicted depth being sufficiently accurate is that we are now able to recover good 3D structures of the scene such as the point cloud and surface normal directly from the depth, eliminating the necessity of training new sub-models as was previously done. Experiments on two benchmarks: NYU Depth-V2 and KITTI demonstrate the effectiveness of our method and state-of-the-art performance.", "field": [], "task": ["Depth Estimation", "Monocular Depth Estimation"], "method": [], "dataset": ["NYU-Depth V2", "KITTI Eigen split"], "metric": ["RMSE", "absolute relative error"], "title": "Enforcing geometric constraints of virtual normal for depth prediction"} {"abstract": "Recently, convolutional neural networks (CNN) have been successfully applied to many remote sensing problems. However, deep learning techniques for multi-image super-resolution from multitemporal unregistered imagery have received little attention so far. This work proposes a novel CNN-based technique that exploits both spatial and temporal correlations to combine multiple images. This novel framework integrates the spatial registration task directly inside the CNN, and allows to exploit the representation learning capabilities of the network to enhance registration accuracy. 
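The virtual normal constraint above can be written down compactly: sample random triplets of points from the 3D point cloud reconstructed from the predicted depth and from the ground truth, form the unit normal of each triangle via a cross product, and penalise the difference. A hedged PyTorch sketch operating directly on point clouds; back-projection from depth maps and the paper's filtering of degenerate triangles are omitted.

```python
import torch

def virtual_normal_loss(points_pred, points_gt, n_triplets=1000, eps=1e-8):
    """L1 loss between unit normals of randomly sampled point triplets.

    points_pred, points_gt: (N, 3) 3D points reconstructed from predicted / GT depth,
    with matching row indices (same pixel -> same row).
    """
    n = points_gt.shape[0]
    idx = torch.randint(0, n, (n_triplets, 3))           # random triplets (repeats possible)

    def normals(points):
        a, b, c = points[idx[:, 0]], points[idx[:, 1]], points[idx[:, 2]]
        nvec = torch.cross(b - a, c - a, dim=1)           # triangle normal
        return nvec / (nvec.norm(dim=1, keepdim=True) + eps)

    return torch.abs(normals(points_pred) - normals(points_gt)).mean()

pred = torch.randn(5000, 3, requires_grad=True)
gt = pred.detach() + 0.01 * torch.randn(5000, 3)
loss = virtual_normal_loss(pred, gt)
loss.backward()
```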
The entire super-resolution process relies on a single CNN with three main stages: shared 2D convolutions to extract high-dimensional features from the input images; a subnetwork proposing registration filters derived from the high-dimensional feature representations; 3D convolutions for slow fusion of the features from multiple images. The whole network can be trained end-to-end to recover a single high resolution image from multiple unregistered low resolution images. The method presented in this paper is the winner of the PROBA-V super-resolution challenge issued by the European Space Agency.", "field": [], "task": ["Image Super-Resolution", "Multi-Frame Super-Resolution", "Representation Learning", "Super-Resolution"], "method": [], "dataset": ["PROBA-V"], "metric": ["Normalized cPSNR"], "title": "DeepSUM: Deep neural network for Super-resolution of Unregistered Multitemporal images"} {"abstract": "Experimental reproducibility and replicability are critical topics in machine learning. Authors have often raised concerns about their lack in scientific publications to improve the quality of the field. Recently, the graph representation learning field has attracted the attention of a wide research community, which resulted in a large stream of works. As such, several Graph Neural Network models have been developed to effectively tackle graph classification. However, experimental procedures often lack rigorousness and are hardly reproducible. Motivated by this, we provide an overview of common practices that should be avoided to fairly compare with the state of the art. To counter this troubling trend, we ran more than 47000 experiments in a controlled and uniform framework to re-evaluate five popular models across nine common benchmarks. Moreover, by comparing GNNs with structure-agnostic baselines we provide convincing evidence that, on some datasets, structural information has not been exploited yet. We believe that this work can contribute to the development of the graph learning field, by providing a much needed grounding for rigorous evaluations of graph classification models.", "field": [], "task": ["Graph Classification", "Graph Learning", "Graph Representation Learning", "Representation Learning"], "method": [], "dataset": ["COLLAB", "REDDIT-MULTI-5k", "ENZYMES", "IMDb-B", "REDDIT-B", "PROTEINS", "D&D", "NCI1", "IMDb-M"], "metric": ["Accuracy"], "title": "A Fair Comparison of Graph Neural Networks for Graph Classification"} {"abstract": "Recently, many works have tried to augment the performance of Chinese named entity recognition (NER) using word lexicons. As a representative, Lattice-LSTM (Zhang and Yang, 2018) has achieved new benchmark results on several public Chinese NER datasets. However, Lattice-LSTM has a complex model architecture. This limits its application in many industrial areas where real-time NER responses are needed. In this work, we propose a simple but effective method for incorporating the word lexicon into the character representations. This method avoids designing a complicated sequence modeling architecture, and for any neural NER model, it requires only subtle adjustment of the character representation layer to introduce the lexicon information. Experimental studies on four benchmark Chinese NER datasets show that our method achieves an inference speed up to 6.15 times faster than those of state-ofthe-art methods, along with a better performance. 
The experimental results also show that the proposed method can be easily incorporated with pre-trained models like BERT.", "field": [], "task": ["Chinese Named Entity Recognition", "Named Entity Recognition"], "method": [], "dataset": ["Resume NER", "MSRA", "OntoNotes 4", "Weibo NER"], "metric": ["F1"], "title": "Simplify the Usage of Lexicon in Chinese NER"} {"abstract": "Session-based recommendation nowadays plays a vital role in many websites, which aims to predict users' actions based on anonymous sessions. There have emerged many studies that model a session as a sequence or a graph via investigating temporal transitions of items in a session. However, these methods compress a session into one fixed representation vector without considering the target items to be predicted. The fixed vector will restrict the representation ability of the recommender model, considering the diversity of target items and users' interests. In this paper, we propose a novel target attentive graph neural network (TAGNN) model for session-based recommendation. In TAGNN, target-aware attention adaptively activates different user interests with respect to varied target items. The learned interest representation vector varies with different target items, greatly improving the expressiveness of the model. Moreover, TAGNN harnesses the power of graph neural networks to capture rich item transitions in sessions. Comprehensive experiments conducted on real-world datasets demonstrate its superiority over state-of-the-art methods.", "field": [], "task": ["Session-Based Recommendations"], "method": [], "dataset": ["yoochoose1", "Diginetica", "yoochoose1/64"], "metric": ["MRR@20", "Precision@20"], "title": "TAGNN: Target Attentive Graph Neural Networks for Session-based Recommendation"} {"abstract": "The state of the art in semantic segmentation is steadily increasing in performance, resulting in more precise and reliable segmentations in many different applications. However, progress is limited by the cost of generating labels for training, which sometimes requires hours of manual labor for a single image. Because of this, semi-supervised methods have been applied to this task, with varying degrees of success. A key challenge is that common augmentations used in semi-supervised classification are less effective for semantic segmentation. We propose a novel data augmentation mechanism called ClassMix, which generates augmentations by mixing unlabelled samples, by leveraging on the network's predictions for respecting object boundaries. We evaluate this augmentation technique on two common semi-supervised semantic segmentation benchmarks, showing that it attains state-of-the-art results. Lastly, we also provide extensive ablation studies comparing different design decisions and training regimes.", "field": [], "task": ["Data Augmentation", "Semantic Segmentation", "Semi-Supervised Semantic Segmentation"], "method": [], "dataset": ["Pascal VOC 2012 1% labeled", "Pascal VOC 2012 12.5% labeled", "Cityscapes 12.5% labeled", "Pascal VOC 2012 5% labeled", "Pascal VOC 2012 2% labeled", "Cityscapes 100 samples labeled", "PASCAL VOC 2012 25% labeled", "Cityscapes 25% labeled", "Cityscapes 50% labeled"], "metric": ["Validation mIoU"], "title": "ClassMix: Segmentation-Based Data Augmentation for Semi-Supervised Learning"} {"abstract": "Normalizing flows transform a latent distribution through an invertible neural network for a flexible and pleasingly simple approach to generative modelling, while preserving an exact likelihood. 
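ClassMix itself is a short recipe: predict label maps for two unlabelled images, pick half of the classes present in the first prediction, and paste exactly those predicted regions (pixels and pseudo-labels) onto the second image. A numpy sketch under those assumptions; the network producing the predictions and the surrounding semi-supervised training loop are out of scope.

```python
import numpy as np

def classmix(img_a, img_b, pred_a, pred_b, rng=np.random.default_rng()):
    """Mix two images by pasting half of image A's predicted classes onto image B.

    img_a, img_b:   (H, W, 3) images
    pred_a, pred_b: (H, W) argmax predictions (pseudo-labels) for the two images
    Returns the mixed image and its mixed pseudo-label map.
    """
    classes = np.unique(pred_a)
    chosen = rng.choice(classes, size=max(1, len(classes) // 2), replace=False)
    mask = np.isin(pred_a, chosen)                        # True where A's regions are pasted
    mixed_img = np.where(mask[..., None], img_a, img_b)
    mixed_lbl = np.where(mask, pred_a, pred_b)
    return mixed_img, mixed_lbl
```

The mixed pair then serves as an extra training example, with the network's own predictions providing the targets, which is what lets the augmentation respect object boundaries.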
We propose FlowGMM, an end-to-end approach to generative semi supervised learning with normalizing flows, using a latent Gaussian mixture model. FlowGMM is distinct in its simplicity, unified treatment of labelled and unlabelled data with an exact likelihood, interpretability, and broad applicability beyond image data. We show promising results on a wide range of applications, including AG-News and Yahoo Answers text data, tabular data, and semi-supervised image classification. We also show that FlowGMM can discover interpretable structure, provide real-time optimization-free feature visualizations, and specify well calibrated predictive distributions.", "field": [], "task": ["Image Classification", "Semi-Supervised Image Classification", "Semi Supervised Text Classification", "Semi-Supervised Text Classification"], "method": [], "dataset": ["Yahoo! Answers (800 Labels)", "AG News (200 Labels)"], "metric": ["Accuracy (%)"], "title": "Semi-Supervised Learning with Normalizing Flows"} {"abstract": "We present a novel unsupervised feature representation learning method, Visual Commonsense Region-based Convolutional Neural Network (VC R-CNN), to serve as an improved visual region encoder for high-level tasks such as captioning and VQA. Given a set of detected object regions in an image (e.g., using Faster R-CNN), like any other unsupervised feature learning methods (e.g., word2vec), the proxy training objective of VC R-CNN is to predict the contextual objects of a region. However, they are fundamentally different: the prediction of VC R-CNN is by using causal intervention: P(Y|do(X)), while others are by using the conventional likelihood: P(Y|X). This is also the core reason why VC R-CNN can learn \"sense-making\" knowledge like chair can be sat -- while not just \"common\" co-occurrences such as chair is likely to exist if table is observed. We extensively apply VC R-CNN features in prevailing models of three popular tasks: Image Captioning, VQA, and VCR, and observe consistent performance boosts across them, achieving many new state-of-the-arts. Code and feature are available at https://github.com/Wangt-CN/VC-R-CNN.", "field": [], "task": ["Image Captioning", "Representation Learning", "Visual Question Answering"], "method": [], "dataset": ["VQA v2 test-std", "COCO Captions", "VQA v2 test-dev"], "metric": ["CIDEr-D", "overall", "METEOR", "Accuracy", "ROUGE-L", "BLEU-4"], "title": "Visual Commonsense R-CNN"} {"abstract": "Abstract Meaning Representation (AMR) parsing has experienced a notable growth in performance in the last two years, due both to the impact of transfer learning and the development of novel architectures specific to AMR. At the same time, self-learning techniques have helped push the performance boundaries of other natural language processing applications, such as machine translation or question answering. In this paper, we explore different ways in which trained models can be applied to improve AMR parsing performance, including generation of synthetic text and AMR annotations as well as refinement of actions oracle. 
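In the FlowGMM setup above, the flow maps each input to a latent space where every class owns one Gaussian component, so classification reduces to comparing class-conditional log-densities of the latent code. The snippet below sketches only that latent-space step with fixed identity-covariance components and a uniform class prior; the invertible flow itself and the learning of the component means are omitted.

```python
import numpy as np

def gmm_classify(z, class_means):
    """Classify latent codes by the most likely class-conditional Gaussian.

    z:           (B, d) latent codes produced by the (omitted) normalising flow
    class_means: (K, d) one Gaussian mean per class (identity covariance assumed)
    Returns (B,) predicted class indices and (B, K) class posteriors.
    """
    # log N(z | mu_k, I) up to a constant is -0.5 * ||z - mu_k||^2
    sq_dist = ((z[:, None, :] - class_means[None, :, :]) ** 2).sum(-1)   # (B, K)
    log_lik = -0.5 * sq_dist
    log_post = log_lik - np.logaddexp.reduce(log_lik, axis=1, keepdims=True)
    return log_lik.argmax(axis=1), np.exp(log_post)

means = np.stack([np.zeros(8), 4 * np.ones(8)])        # two well-separated classes
codes = np.vstack([np.random.randn(3, 8), 4 + np.random.randn(3, 8)])
labels, post = gmm_classify(codes, means)
print(labels)    # -> [0 0 0 1 1 1] with overwhelming probability
```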
We show that, without any additional human annotations, these techniques improve an already performant parser and achieve state-of-the-art results on AMR 1.0 and AMR 2.0.", "field": [], "task": ["AMR Parsing", "Machine Translation", "Question Answering", "Transfer Learning"], "method": [], "dataset": ["LDC2017T10", "LDC2014T12"], "metric": ["Smatch", "F1 Full"], "title": "Pushing the Limits of AMR Parsing with Self-Learning"} {"abstract": "In this paper, we propose Dynamic Self-Attention (DSA), a new self-attention\nmechanism for sentence embedding. We design DSA by modifying dynamic routing in\ncapsule network (Sabour et al., 2017) for natural language processing. DSA attends\nto informative words with a dynamic weight vector. We achieve new\nstate-of-the-art results among sentence encoding methods in Stanford Natural\nLanguage Inference (SNLI) dataset with the least number of parameters, while\nshowing comparative results in Stanford Sentiment Treebank (SST) dataset.", "field": [], "task": ["Natural Language Inference", "Sentence Embedding"], "method": [], "dataset": ["SNLI"], "metric": ["Parameters", "% Train Accuracy", "% Test Accuracy"], "title": "Dynamic Self-Attention : Computing Attention over Words Dynamically for Sentence Embedding"} {"abstract": "The state of the art in video understanding suffers from two problems: (1)\nThe major part of reasoning is performed locally in the video, therefore, it\nmisses important relationships within actions that span several seconds. (2)\nWhile there are local methods with fast per-frame processing, the processing of\nthe whole video is not efficient and hampers fast video retrieval or online\nclassification of long-term activities. In this paper, we introduce a network\narchitecture that takes long-term content into account and enables fast\nper-video processing at the same time. The architecture is based on merging\nlong-term content already in the network rather than in a post-hoc fusion.\nTogether with a sampling strategy, which exploits that neighboring frames are\nlargely redundant, this yields high-quality action classification and video\ncaptioning at up to 230 videos per second, where each video can consist of a\nfew hundred frames. The approach achieves competitive performance across all\ndatasets while being 10x to 80x faster than state-of-the-art methods.", "field": [], "task": ["Action Classification", "Action Classification ", "Action Recognition", "Video Captioning", "Video Classification", "Video Retrieval", "Video Understanding"], "method": [], "dataset": ["UCF101", "Something-Something V1"], "metric": ["3-fold Accuracy", "Top 1 Accuracy"], "title": "ECO: Efficient Convolutional Network for Online Video Understanding"} {"abstract": "In this paper, we propose a novel end-to-end neural architecture for ranking\ncandidate answers, that adapts a hierarchical recurrent neural network and a\nlatent topic clustering module. With our proposed model, a text is encoded to a\nvector representation from a word-level to a chunk-level to effectively\ncapture the entire meaning. In particular, by adapting the hierarchical\nstructure, our model shows very small performance degradations in longer text\ncomprehension while other state-of-the-art recurrent neural network models\nsuffer from it. Additionally, the latent topic clustering module extracts\nsemantic information from target samples.
This clustering module is useful for\nany text related tasks by allowing each data sample to find its nearest topic\ncluster, thus helping the neural network model analyze the entire data. We\nevaluate our models on the Ubuntu Dialogue Corpus and consumer electronic\ndomain question answering dataset, which is related to Samsung products. The\nproposed model shows state-of-the-art results for ranking question-answer\npairs.", "field": [], "task": ["Answer Selection", "Hierarchical structure", "Learning-To-Rank", "Question Answering"], "method": [], "dataset": ["Ubuntu Dialogue (v1, Ranking)", "Ubuntu Dialogue (v2, Ranking)"], "metric": ["1 in 10 R@1", "1 in 10 R@2", "1 in 10 R@5", "1 in 2 R@1"], "title": "Learning to Rank Question-Answer Pairs using Hierarchical Recurrent Encoder with Latent Topic Clustering"} {"abstract": "Multi-object tracking (MOT) is an important and practical task related to\nboth surveillance systems and moving camera applications, such as autonomous\ndriving and robotic vision. However, due to unreliable detection, occlusion and\nfast camera motion, tracked targets can be easily lost, which makes MOT very\nchallenging. Most recent works treat tracking as a re-identification (Re-ID)\ntask, but how to combine appearance and temporal features is still not well\naddressed. In this paper, we propose an innovative and effective tracking\nmethod called TrackletNet Tracker (TNT) that combines temporal and appearance\ninformation together as a unified framework. First, we define a graph model\nwhich treats each tracklet as a vertex. The tracklets are generated by\nappearance similarity with CNN features and intersection-over-union (IOU) with\nepipolar constraints to compensate camera movement between adjacent frames.\nThen, for every pair of two tracklets, the similarity is measured by our\ndesigned multi-scale TrackletNet. Afterwards, the tracklets are clustered into\ngroups which represent individual object IDs. Our proposed TNT has the ability\nto handle most of the challenges in MOT, and achieve promising results on MOT16\nand MOT17 benchmark datasets compared with other state-of-the-art methods.", "field": [], "task": ["Autonomous Driving", "Multi-Object Tracking", "Object Tracking"], "method": [], "dataset": ["MOT16", "MOT17"], "metric": ["MOTA"], "title": "Exploit the Connectivity: Multi-Object Tracking with TrackletNet"} {"abstract": "Point clouds provide a flexible geometric representation suitable for countless applications in computer graphics; they also comprise the raw output of most 3D data acquisition devices. While hand-designed features on point clouds have long been proposed in graphics and vision, however, the recent overwhelming success of convolutional neural networks (CNNs) for image analysis suggests the value of adapting insight from CNN to the point cloud world. Point clouds inherently lack topological information so designing a model to recover topology can enrich the representation power of point clouds. To this end, we propose a new neural network module dubbed EdgeConv suitable for CNN-based high-level tasks on point clouds including classification and segmentation. EdgeConv acts on graphs dynamically computed in each layer of the network. It is differentiable and can be plugged into existing architectures. 
Compared to existing modules operating in extrinsic space or treating each point independently, EdgeConv has several appealing properties: It incorporates local neighborhood information; it can be stacked applied to learn global shape properties; and in multi-layer systems affinity in feature space captures semantic characteristics over potentially long distances in the original embedding. We show the performance of our model on standard benchmarks including ModelNet40, ShapeNetPart, and S3DIS.", "field": [], "task": ["3D Part Segmentation", "3D Point Cloud Classification", "Semantic Segmentation"], "method": [], "dataset": ["ShapeNet-Part", "ModelNet40"], "metric": ["Mean Accuracy", "Overall Accuracy", "Instance Average IoU"], "title": "Dynamic Graph CNN for Learning on Point Clouds"} {"abstract": "Matching local geometric features on real-world depth images is a challenging\ntask due to the noisy, low-resolution, and incomplete nature of 3D scan data.\nThese difficulties limit the performance of current state-of-art methods, which\nare typically based on histograms over geometric properties. In this paper, we\npresent 3DMatch, a data-driven model that learns a local volumetric patch\ndescriptor for establishing correspondences between partial 3D data. To amass\ntraining data for our model, we propose a self-supervised feature learning\nmethod that leverages the millions of correspondence labels found in existing\nRGB-D reconstructions. Experiments show that our descriptor is not only able to\nmatch local geometry in new scenes for reconstruction, but also generalize to\ndifferent tasks and spatial scales (e.g. instance-level object model alignment\nfor the Amazon Picking Challenge, and mesh surface correspondence). Results\nshow that 3DMatch consistently outperforms other state-of-the-art approaches by\na significant margin. Code, data, benchmarks, and pre-trained models are\navailable online at http://3dmatch.cs.princeton.edu", "field": [], "task": ["3D Reconstruction", "Point Cloud Registration"], "method": [], "dataset": ["3DMatch Benchmark", "Scan2CAD"], "metric": ["Recall", "Average Accuracy"], "title": "3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions"} {"abstract": "We propose a structured prediction architecture, which exploits the local\ngeneric features extracted by Convolutional Neural Networks and the capacity of\nRecurrent Neural Networks (RNN) to retrieve distant dependencies. The proposed\narchitecture, called ReSeg, is based on the recently introduced ReNet model for\nimage classification. We modify and extend it to perform the more challenging\ntask of semantic segmentation. Each ReNet layer is composed of four RNN that\nsweep the image horizontally and vertically in both directions, encoding\npatches or activations, and providing relevant global information. Moreover,\nReNet layers are stacked on top of pre-trained convolutional layers, benefiting\nfrom generic local features. Upsampling layers follow ReNet layers to recover\nthe original image resolution in the final predictions. The proposed ReSeg\narchitecture is efficient, flexible and suitable for a variety of semantic\nsegmentation tasks. We evaluate ReSeg on several widely-used semantic\nsegmentation datasets: Weizmann Horse, Oxford Flower, and CamVid; achieving\nstate-of-the-art performance. Results show that ReSeg can act as a suitable\narchitecture for semantic segmentation tasks, and may have further applications\nin other structured prediction problems. 
The source code and model\nhyperparameters are available on https://github.com/fvisin/reseg.", "field": [], "task": ["Semantic Segmentation", "Structured Prediction"], "method": [], "dataset": ["CamVid"], "metric": ["Mean IoU", "Global Accuracy"], "title": "ReSeg: A Recurrent Neural Network-based Model for Semantic Segmentation"} {"abstract": "Automatic neural architecture design has shown its potential in discovering powerful neural network architectures. Existing methods, no matter based on reinforcement learning or evolutionary algorithms (EA), conduct architecture search in a discrete space, which is highly inefficient. In this paper, we propose a simple and efficient method to automatic neural architecture design based on continuous optimization. We call this new approach neural architecture optimization (NAO). There are three key components in our proposed approach: (1) An encoder embeds/maps neural network architectures into a continuous space. (2) A predictor takes the continuous representation of a network as input and predicts its accuracy. (3) A decoder maps a continuous representation of a network back to its architecture. The performance predictor and the encoder enable us to perform gradient based optimization in the continuous space to find the embedding of a new architecture with potentially better accuracy. Such a better embedding is then decoded to a network by the decoder. Experiments show that the architecture discovered by our method is very competitive for image classification task on CIFAR-10 and language modeling task on PTB, outperforming or on par with the best results of previous architecture search methods with a significantly reduction of computational resources. Specifically we obtain 1.93% test set error rate for CIFAR-10 image classification task and 56.0 test set perplexity of PTB language modeling task. Furthermore, combined with the recent proposed weight sharing mechanism, we discover powerful architecture on CIFAR-10 (with error rate 2.93%) and on PTB (with test set perplexity 56.6), with very limited computational resources (less than 10 GPU hours) for both tasks.", "field": [], "task": ["Image Classification", "Language Modelling", "Neural Architecture Search"], "method": [], "dataset": ["CIFAR-10 Image Classification"], "metric": ["Percentage error", "Params"], "title": "Neural Architecture Optimization"} {"abstract": "Motion estimation (ME) and motion compensation (MC) have been widely used for classical video frame interpolation systems over the past decades. Recently, a number of data-driven frame interpolation methods based on convolutional neural networks have been proposed. However, existing learning based methods typically estimate either flow or compensation kernels, thereby limiting performance on both computational efficiency and interpolation accuracy. In this work, we propose a motion estimation and compensation driven neural network for video frame interpolation. A novel adaptive warping layer is developed to integrate both optical flow and interpolation kernels to synthesize target frame pixels. This layer is fully differentiable such that both the flow and kernel estimation networks can be optimized jointly. The proposed model benefits from the advantages of motion estimation and compensation methods without using hand-crafted features. Compared to existing methods, our approach is computationally efficient and able to generate more visually appealing results. 
Furthermore, the proposed MEMC-Net can be seamlessly adapted to several video enhancement tasks, e.g., super-resolution, denoising, and deblocking. Extensive quantitative and qualitative evaluations demonstrate that the proposed method performs favorably against the state-of-the-art video frame interpolation and enhancement algorithms on a wide range of datasets.", "field": [], "task": ["Denoising", "Motion Compensation", "Motion Estimation", "Optical Flow Estimation", "Super-Resolution", "Video Enhancement", "Video Frame Interpolation"], "method": [], "dataset": ["Vimeo90k"], "metric": ["PSNR"], "title": "MEMC-Net: Motion Estimation and Motion Compensation Driven Neural Network for Video Interpolation and Enhancement"} {"abstract": "Natural spatiotemporal processes can be highly non-stationary in many ways,\ne.g. the low-level non-stationarity such as spatial correlations or temporal\ndependencies of local pixel values; and the high-level variations such as the\naccumulation, deformation or dissipation of radar echoes in precipitation\nforecasting. From Cramer's Decomposition, any non-stationary process can be\ndecomposed into deterministic, time-variant polynomials, plus a zero-mean\nstochastic term. By applying differencing operations appropriately, we may turn\ntime-variant polynomials into a constant, making the deterministic component\npredictable. However, most previous recurrent neural networks for\nspatiotemporal prediction do not use the differential signals effectively, and\ntheir relatively simple state transition functions prevent them from learning\ntoo complicated variations in spacetime. We propose the Memory In Memory (MIM)\nnetworks and corresponding recurrent blocks for this purpose. The MIM blocks\nexploit the differential signals between adjacent recurrent states to model the\nnon-stationary and approximately stationary properties in spatiotemporal\ndynamics with two cascaded, self-renewed memory modules. By stacking multiple\nMIM blocks, we could potentially handle higher-order non-stationarity. The MIM\nnetworks achieve the state-of-the-art results on four spatiotemporal prediction\ntasks across both synthetic and real-world datasets. We believe that the\ngeneral idea of this work can be potentially applied to other time-series\nforecasting tasks.", "field": [], "task": ["Time Series", "Time Series Forecasting", "Video Prediction"], "method": [], "dataset": ["Human3.6M"], "metric": ["MAE", "SSIM", "MSE"], "title": "Memory In Memory: A Predictive Neural Network for Learning Higher-Order Non-Stationarity from Spatiotemporal Dynamics"} {"abstract": "Mental disorders such as depression and anxiety have been increasing at alarming rates in the worldwide population. Notably, the major depressive disorder has become a common problem among higher education students, aggravated, and maybe even occasioned, by the academic pressures they must face. While the reasons for this alarming situation remain unclear (although widely investigated), the student already facing this problem must receive treatment. To that end, it is first necessary to screen the symptoms. The traditional way for that is relying on clinical consultations or answering questionnaires. However, nowadays, the data shared on social media is a ubiquitous source that can be used to detect the depression symptoms even when the student is not able to afford or search for professional care.
Previous works have already relied on social media data to detect depression in the general population, usually focusing on either posted images or texts or relying on metadata. In this work, we focus on detecting the severity of the depression symptoms in higher education students, by comparing deep learning to feature engineering models induced from both the pictures and their captions posted on Instagram. The experimental results show that students presenting a BDI score higher than or equal to 20 can be detected with 0.92 of recall and 0.69 of precision in the best case, reached by a fusion model. Our findings show the potential of large-scale depression screening, which could shed light upon students at risk.", "field": [], "task": ["Feature Engineering"], "method": [], "dataset": ["2019_test set"], "metric": ["14 gestures accuracy"], "title": "See and Read: Detecting Depression Symptoms in Higher Education Students Using Multimodal Social Media Data"} {"abstract": "Bayesian optimization has recently been proposed as a framework for automatically tuning the hyperparameters of machine learning models and has been shown to yield state-of-the-art performance with impressive ease and efficiency. In this paper, we explore whether it is possible to transfer the knowledge gained from previous optimizations to new tasks in order to find optimal hyperparameter settings more efficiently. Our approach is based on extending multi-task Gaussian processes to the framework of Bayesian optimization. We show that this method significantly speeds up the optimization process when compared to the standard single-task approach. We further propose a straightforward extension of our algorithm in order to jointly minimize the average error across multiple tasks and demonstrate how this can be used to greatly speed up $k$-fold cross-validation. Lastly, our most significant contribution is an adaptation of a recently proposed acquisition function, entropy search, to the cost-sensitive and multi-task settings. We demonstrate the utility of this new acquisition function by utilizing a small dataset in order to explore hyperparameter settings for a large dataset. Our algorithm dynamically chooses which dataset to query in order to yield the most information per unit cost.", "field": [], "task": ["Gaussian Processes", "Image Classification"], "method": [], "dataset": ["STL-10"], "metric": ["Percentage correct"], "title": "Multi-Task Bayesian Optimization"} {"abstract": "Recently, there has been a lot of interest in building compact models for\nvideo classification which have a small memory footprint (<1 GB). While these\nmodels are compact, they typically operate by repeated application of a small\nweight matrix to all the frames in a video. E.g. recurrent neural network based\nmethods compute a hidden state for every frame of the video using a recurrent\nweight matrix. Similarly, cluster-and-aggregate based methods such as NetVLAD\nhave a learnable clustering matrix which is used to assign soft-clusters to\nevery frame in the video. Since these models look at every frame in the video,\nthe number of floating point operations (FLOPs) is still large even though the\nmemory footprint is small. We focus on building compute-efficient video\nclassification models which process fewer frames and hence have fewer\nFLOPs. Similar to memory efficient models, we use the idea of distillation\nalbeit in a different setting.
Specifically, in our case, a compute-heavy\nteacher which looks at all the frames in the video is used to train a\ncompute-efficient student which looks at only a small fraction of frames in the\nvideo. This is in contrast to a typical memory efficient Teacher-Student\nsetting, wherein both the teacher and the student look at all the frames in the\nvideo but the student has fewer parameters. Our work thus complements the\nresearch on memory efficient video classification. We do an extensive\nevaluation with three types of models for video classification,viz.(i)\nrecurrent models (ii) cluster-and-aggregate models and (iii) memory-efficient\ncluster-and-aggregate models and show that in each of these cases, a see-it-all\nteacher can be used to train a compute efficient see-very-little student. We\nshow that the proposed student network can reduce the inference time by 30% and\nthe number of FLOPs by approximately 90% with a negligible drop in the\nperformance.", "field": [], "task": ["Video Classification"], "method": [], "dataset": ["YouTube-8M"], "metric": ["mAP", "Hit@1", "Global Average Precision"], "title": "Efficient Video Classification Using Fewer Frames"} {"abstract": "Graph kernels based on the $1$-dimensional Weisfeiler-Leman algorithm and corresponding neural architectures recently emerged as powerful tools for (supervised) learning with graphs. However, due to the purely local nature of the algorithms, they might miss essential patterns in the given data and can only handle binary relations. The $k$-dimensional Weisfeiler-Leman algorithm addresses this by considering $k$-tuples, defined over the set of vertices, and defines a suitable notion of adjacency between these vertex tuples. Hence, it accounts for the higher-order interactions between vertices. However, it does not scale and may suffer from overfitting when used in a machine learning setting. Hence, it remains an important open problem to design WL-based graph learning methods that are simultaneously expressive, scalable, and non-overfitting. Here, we propose local variants and corresponding neural architectures, which consider a subset of the original neighborhood, making them more scalable, and less prone to overfitting. The expressive power of (one of) our algorithms is strictly higher than the original algorithm, in terms of ability to distinguish non-isomorphic graphs. Our experimental study confirms that the local algorithms, both kernel and neural architectures, lead to vastly reduced computation times, and prevent overfitting. The kernel version establishes a new state-of-the-art for graph classification on a wide range of benchmark datasets, while the neural version shows promising performance on large-scale molecular regression tasks.", "field": [], "task": ["Graph Classification", "Graph Learning", "Regression"], "method": [], "dataset": ["NCI109", "IMDb-B", "ENZYMES", "REDDIT-B", "PROTEINS", "NCI1", "IMDb-M", "PTC"], "metric": ["Accuracy"], "title": "Weisfeiler and Leman go sparse: Towards scalable higher-order graph embeddings"} {"abstract": "In this work we introduce a novel, CNN-based architecture that can be trained end-to-end to deliver seamless scene segmentation results. Our goal is to predict consistent semantic segmentation and detection results by means of a panoptic output format, going beyond the simple combination of independently trained segmentation and detection models. 
The proposed architecture takes advantage of a novel segmentation head that seamlessly integrates multi-scale features generated by a Feature Pyramid Network with contextual information conveyed by a light-weight DeepLab-like module. As an additional contribution, we review the panoptic metric and propose an alternative that overcomes its limitations when evaluating non-instance categories. Our proposed network architecture yields state-of-the-art results on three challenging street-level datasets, i.e. Cityscapes, Indian Driving Dataset and Mapillary Vistas.", "field": [], "task": ["Panoptic Segmentation", "Scene Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["Indian Driving Dataset", "KITTI Panoptic Segmentation"], "metric": ["PQ"], "title": "Seamless Scene Segmentation"} {"abstract": "Although various image-based domain adaptation (DA) techniques have been proposed in recent years, domain shift in videos is still not well-explored. Most previous works only evaluate performance on small-scale datasets which are saturated. Therefore, we first propose two large-scale video DA datasets with much larger domain discrepancy: UCF-HMDB_full and Kinetics-Gameplay. Second, we investigate different DA integration methods for videos, and show that simultaneously aligning and learning temporal dynamics achieves effective alignment even without sophisticated DA methods. Finally, we propose Temporal Attentive Adversarial Adaptation Network (TA3N), which explicitly attends to the temporal dynamics using domain discrepancy for more effective domain alignment, achieving state-of-the-art performance on four video DA datasets (e.g. 7.9% accuracy gain over \"Source only\" from 73.9% to 81.8% on \"HMDB --> UCF\", and 10.3% gain on \"Kinetics --> Gameplay\"). The code and data are released at http://github.com/cmhungsteve/TA3N.", "field": [], "task": ["Domain Adaptation"], "method": [], "dataset": ["UCF --> HMDB (full)", "HMDB --> UCF (full)"], "metric": ["Accuracy"], "title": "Temporal Attentive Alignment for Large-Scale Video Domain Adaptation"} {"abstract": "Linked Open Data has been recognized as a valuable source for background information in many data mining and information retrieval tasks. However, most of the existing tools require features in propositional form, i.e., a vector of nominal or numerical features associated with an instance, while Linked Open Data sources are graphs by nature. In this paper, we present RDF2Vec, an approach that uses language modeling approaches for unsupervised feature extraction from sequences of words, and adapts them to RDF graphs. We generate sequences by leveraging local information from graph sub-structures, harvested by Weisfeiler-Lehman Subtree RDF Graph Kernels and graph walks, and learn latent numerical representations of entities in RDF graphs. We evaluate our approach on three different tasks: (i) standard machine learning tasks, (ii) entity and document modeling, and (iii) content-based recommender systems.
The evaluation shows that the proposed entity embeddings outperform existing techniques, and that pre-computed feature vector representations of general knowledge graphs such as DBpedia and Wikidata can be easily reused for different tasks.", "field": [], "task": ["Entity Embeddings", "Information Retrieval", "Knowledge Graph Embedding", "Knowledge Graph Embeddings", "Knowledge Graphs", "Language Modelling", "Node Classification", "Recommendation Systems"], "method": [], "dataset": ["MUTAG", "AIFB", "BGS", "AM"], "metric": ["Accuracy"], "title": "RDF2Vec: RDF Graph Embeddings and Their Applications"} {"abstract": "In this work, we propose a novel depth-induced multi-scale recurrent attention network for saliency detection. It achieves dramatic performance especially in complex scenarios. There are three main contributions of our network that are experimentally demonstrated to have significant practical merits. First, we design an effective depth refinement block using residual connections to fully extract and fuse multi-level paired complementary cues from RGB and depth streams. Second, depth cues with abundant spatial information are innovatively combined with multi-scale context features for accurately locating salient objects. Third, we boost our model's performance by a novel recurrent attention module inspired by Internal Generative Mechanism of human brain. This module can generate more accurate saliency results via comprehensively learning the internal semantic relation of the fused feature and progressively optimizing local details with memory-oriented scene understanding. In addition, we create a large scale RGB-D dataset containing more complex scenarios, which can contribute to comprehensively evaluating saliency models. Extensive experiments on six public datasets and ours demonstrate that our method can accurately identify salient objects and achieve consistently superior performance over 16 state-of-the-art RGB and RGB-D approaches.\r", "field": [], "task": ["RGB-D Salient Object Detection", "Saliency Detection", "Scene Understanding"], "method": [], "dataset": ["NJU2K"], "metric": ["max E-Measure", "Average MAE", "S-Measure", "max F-Measure"], "title": "Depth-Induced Multi-Scale Recurrent Attention Network for Saliency Detection"} {"abstract": "Graph representation learning is of paramount importance for a variety of graph analytical tasks, ranging from node classification to community detection. Recently, graph convolutional networks (GCNs) have been successfully applied for graph representation learning. These GCNs generate node representation by aggregating features from the neighborhoods, which follows the \"neighborhood aggregation\" scheme. In spite of having achieved promising performance on various tasks, existing GCN-based models have difficulty in well capturing complicated non-linearity of graph data. In this paper, we first theoretically prove that coefficients of the neighborhood interacting terms are relatively small in current models, which explains why GCNs barely outperforms linear models. Then, in order to better capture the complicated non-linearity of graph data, we present a novel GraphAIR framework which models the neighborhood interaction in addition to neighborhood aggregation. 
Comprehensive experiments conducted on benchmark tasks including node classification and link prediction using public datasets demonstrate the effectiveness of the proposed method.", "field": [], "task": ["Community Detection", "Graph Representation Learning", "Link Prediction", "Node Classification", "Representation Learning"], "method": [], "dataset": ["Cora with Public Split: fixed 20 nodes per class", "CiteSeer with Public Split: fixed 20 nodes per class", "PubMed with Public Split: fixed 20 nodes per class"], "metric": ["Accuracy"], "title": "GraphAIR: Graph Representation Learning with Neighborhood Aggregation and Interaction"} {"abstract": "We propose a novel text editing task, referred to as \\textit{fact-based text editing}, in which the goal is to revise a given document to better describe the facts in a knowledge base (e.g., several triples). The task is important in practice because reflecting the truth is a common requirement in text editing. First, we propose a method for automatically generating a dataset for research on fact-based text editing, where each instance consists of a draft text, a revised text, and several facts represented in triples. We apply the method into two public table-to-text datasets, obtaining two new datasets consisting of 233k and 37k instances, respectively. Next, we propose a new neural network architecture for fact-based text editing, called \\textsc{FactEditor}, which edits a draft text by referring to given facts using a buffer, a stream, and a memory. A straightforward approach to address the problem would be to employ an encoder-decoder model. Our experimental results on the two datasets show that \\textsc{FactEditor} outperforms the encoder-decoder approach in terms of fidelity and fluency. The results also show that \\textsc{FactEditor} conducts inference faster than the encoder-decoder approach.", "field": [], "task": ["Fact-based Text Editing"], "method": [], "dataset": ["RotoEdit", "WebEdit"], "metric": ["ADD", "DELETE", "Exact Match", "Recall", "KEEP", "SARI", "Precision", "F1", "BLEU"], "title": "Fact-based Text Editing"} {"abstract": "Sentiment Analysis and Emotion Detection in conversation is key in several real-world applications, with an increase in modalities available aiding a better understanding of the underlying emotions. Multi-modal Emotion Detection and Sentiment Analysis can be particularly useful, as applications will be able to use specific subsets of available modalities, as per the available data. Current systems dealing with Multi-modal functionality fail to leverage and capture - the context of the conversation through all modalities, the dependency between the listener(s) and speaker emotional states, and the relevance and relationship between the available modalities. In this paper, we propose an end to end RNN architecture that attempts to take into account all the mentioned drawbacks. 
Our proposed model, at the time of writing, out-performs the state of the art on a benchmark dataset on a variety of accuracy and regression metrics.", "field": [], "task": ["Emotion Recognition", "Emotion Recognition in Conversation", "Multimodal Emotion Recognition", "Multimodal Sentiment Analysis", "Regression", "Sentiment Analysis"], "method": [], "dataset": ["CMU-MOSEI"], "metric": ["MAE", "Accuracy"], "title": "Multilogue-Net: A Context Aware RNN for Multi-modal Emotion Detection and Sentiment Analysis in Conversation"} {"abstract": "Unsupervised domain adaptation (UDA) aims at adapting the model trained on a labeled source-domain dataset to an unlabeled target-domain dataset. The task of UDA on open-set person re-identification (re-ID) is even more challenging as the identities (classes) do not overlap between the two domains. One major research direction was based on domain translation, which, however, has fallen out of favor in recent years due to inferior performance compared to pseudo-label-based methods. We argue that translation-based methods have great potential on exploiting the valuable source-domain data but they did not provide proper regularization on the translation process. Specifically, these methods only focus on maintaining the identities of the translated images while ignoring the inter-sample relation during translation. To tackle the challenge, we propose an end-to-end structured domain adaptation framework with an online relation-consistency regularization term. During training, the person feature encoder is optimized to model inter-sample relations on-the-fly for supervising relation-consistency domain translation, which in turn, improves the encoder with informative translated images. An improved pseudo-label-based encoder can therefore be obtained by jointly training the source-to-target translated images with ground-truth identities and target-domain images with pseudo identities. In the experiments, our proposed framework is shown to outperform state-of-the-art methods on multiple UDA tasks of person re-ID. Code is available at https://github.com/yxgeee/SDA.", "field": [], "task": ["Domain Adaptation", "Person Re-Identification", "Unsupervised Domain Adaptation", "Unsupervised Person Re-Identification"], "method": [], "dataset": ["Duke to Market", "Duke to MSMT", "Market to Duke", "Market to MSMT"], "metric": ["rank-10", "mAP", "rank-5", "rank-1"], "title": "Structured Domain Adaptation with Online Relation Regularization for Unsupervised Person Re-ID"} {"abstract": "Graph Neural Networks (GNNs) have been shown to be effective models for different predictive tasks on graph-structured data. Recent work on their expressive power has focused on isomorphism tasks and countable feature spaces. We extend this theoretical framework to include continuous features - which occur regularly in real-world input domains and within the hidden layers of GNNs - and we demonstrate the requirement for multiple aggregation functions in this context. Accordingly, we propose Principal Neighbourhood Aggregation (PNA), a novel architecture combining multiple aggregators with degree-scalers (which generalize the sum aggregator). Finally, we compare the capacity of different models to capture and exploit the graph structure via a novel benchmark containing multiple tasks taken from classical graph theory, alongside existing benchmarks from real-world domains, all of which demonstrate the strength of our model. 
With this work, we hope to steer some of the GNN research towards new aggregation methods which we believe are essential in the search for powerful and robust models.", "field": [], "task": ["Graph Classification", "Graph Regression", "Node Classification"], "method": [], "dataset": ["ZINC-500k", "ZINC", "CIFAR10 100k", "PATTERN 100k", "ZINC 100k"], "metric": ["MAE", "Accuracy (%)"], "title": "Principal Neighbourhood Aggregation for Graph Nets"} {"abstract": "Anomaly Detection (AD) in images is a fundamental computer vision problem and refers to identifying images and image substructures that deviate significantly from the norm. Popular AD algorithms commonly try to learn a model of normality from scratch using task specific datasets, but are limited to semi-supervised approaches employing mostly normal data due to the inaccessibility of anomalies on a large scale combined with the ambiguous nature of anomaly appearance. We follow an alternative approach and demonstrate that deep feature representations learned by discriminative models on large natural image datasets are well suited to describe normality and detect even subtle anomalies in a transfer learning setting. Our model of normality is established by fitting a multivariate Gaussian (MVG) to deep feature representations of classification networks trained on ImageNet using normal data only. By subsequently applying the Mahalanobis distance as the anomaly score we outperform the current state of the art on the public MVTec AD dataset, achieving an AUROC value of $95.8 \\pm 1.2$ (mean $\\pm$ SEM) over all 15 classes. We further investigate why the learned representations are discriminative to the AD task using Principal Component Analysis. We find that the principal components containing little variance in normal data are the ones crucial for discriminating between normal and anomalous instances. This gives a possible explanation to the often sub-par performance of AD approaches trained from scratch using normal data only. By selectively fitting a MVG to these most relevant components only, we are able to further reduce model complexity while retaining AD performance. We also investigate setting the working point by selecting acceptable False Positive Rate thresholds based on the MVG assumption. Code available at https://github.com/ORippler/gaussian-ad-mvtec", "field": [], "task": ["Anomaly Detection", "Transfer Learning"], "method": [], "dataset": ["MVTec AD"], "metric": ["Detection AUROC"], "title": "Modeling the Distribution of Normal Data in Pre-Trained Deep Features for Anomaly Detection"} {"abstract": "This paper addresses the problem of 3D human pose estimation from single\nimages. While for a long time human skeletons were parameterized and fitted to\nthe observation by satisfying a reprojection error, nowadays researchers\ndirectly use neural networks to infer the 3D pose from the observations.\nHowever, most of these approaches ignore the fact that a reprojection\nconstraint has to be satisfied and are sensitive to overfitting. We tackle the\noverfitting problem by ignoring 2D to 3D correspondences. This efficiently\navoids a simple memorization of the training data and allows for a weakly\nsupervised training. One part of the proposed reprojection network (RepNet)\nlearns a mapping from a distribution of 2D poses to a distribution of 3D poses\nusing an adversarial training approach. Another part of the network estimates\nthe camera. 
This allows for the definition of a network layer that performs the\nreprojection of the estimated 3D pose back to 2D which results in a\nreprojection loss function. Our experiments show that RepNet generalizes well\nto unknown data and outperforms state-of-the-art methods when applied to unseen\ndata. Moreover, our implementation runs in real-time on a standard desktop PC.", "field": [], "task": ["3D Human Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["Human3.6M", "MPI-INF-3DHP"], "metric": ["Average MPJPE (mm)", "Using 2D ground-truth joints", "Multi-View or Monocular", "MJPE", "AUC", "3DPCK"], "title": "RepNet: Weakly Supervised Training of an Adversarial Reprojection Network for 3D Human Pose Estimation"} {"abstract": "Embedding methods have achieved success in face recognition by comparing facial features in a latent semantic space. However, in a fully unconstrained face setting, the facial features learned by the embedding model could be ambiguous or may not even be present in the input face, leading to noisy representations. We propose Probabilistic Face Embeddings (PFEs), which represent each face image as a Gaussian distribution in the latent space. The mean of the distribution estimates the most likely feature values while the variance shows the uncertainty in the feature values. Probabilistic solutions can then be naturally derived for matching and fusing PFEs using the uncertainty information. Empirical evaluation on different baseline models, training datasets and benchmarks shows that the proposed method can improve the face recognition performance of deterministic embeddings by converting them into PFEs. The uncertainties estimated by PFEs also serve as good indicators of the potential matching accuracy, which are important for a risk-controlled recognition system.", "field": [], "task": ["Face Recognition"], "method": [], "dataset": ["MegaFace", "IJB-A", "YouTube Faces DB", "Labeled Faces in the Wild", "IJB-C"], "metric": ["TAR @ FAR=0.01", "TAR @ FAR=0.001", "Accuracy"], "title": "Probabilistic Face Embeddings"} {"abstract": "Semi-supervised video object segmentation aims to separate a target object from a video sequence, given the mask in the first frame. Most current prevailing methods utilize information from additional modules trained in other domains like optical flow and instance segmentation, and as a result they do not compete with other methods on common ground. To address this issue, we propose a simple yet strong transductive method, in which additional modules, datasets, and dedicated architectural designs are not needed. Our method takes a label propagation approach where pixel labels are passed forward based on feature similarity in an embedding space. Different from other propagation methods, ours diffuses temporal information in a holistic manner which takes account of long-term object appearance. In addition, our method requires little additional computational overhead, and runs at a fast speed of $\\sim$37 fps. Our single model with a vanilla ResNet50 backbone achieves an overall score of 72.3 on the DAVIS 2017 validation set and 63.1 on the test set. This simple yet high-performing and efficient method can serve as a solid baseline that facilitates future research.
Code and models are available at \\url{https://github.com/microsoft/transductive-vos.pytorch}.", "field": [], "task": ["Instance Segmentation", "Optical Flow Estimation", "Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["DAVIS 2017 (val)"], "metric": ["F-measure (Mean)", "Jaccard (Mean)", "J&F"], "title": "A Transductive Approach for Video Object Segmentation"} {"abstract": "We propose the task of free-form and open-ended Visual Question Answering\n(VQA). Given an image and a natural language question about the image, the task\nis to provide an accurate natural language answer. Mirroring real-world\nscenarios, such as helping the visually impaired, both the questions and\nanswers are open-ended. Visual questions selectively target different areas of\nan image, including background details and underlying context. As a result, a\nsystem that succeeds at VQA typically needs a more detailed understanding of\nthe image and complex reasoning than a system producing generic image captions.\nMoreover, VQA is amenable to automatic evaluation, since many open-ended\nanswers contain only a few words or a closed set of answers that can be\nprovided in a multiple-choice format. We provide a dataset containing ~0.25M\nimages, ~0.76M questions, and ~10M answers (www.visualqa.org), and discuss the\ninformation it provides. Numerous baselines and methods for VQA are provided\nand compared with human performance. Our VQA demo is available on CloudCV\n(http://cloudcv.org/vqa).", "field": [], "task": ["Image Captioning", "Visual Question Answering"], "method": [], "dataset": ["COCO Visual Question Answering (VQA) abstract 1.0 multiple choice", "COCO Visual Question Answering (VQA) real images 1.0 open ended", "COCO Visual Question Answering (VQA) real images 1.0 multiple choice", "COCO Visual Question Answering (VQA) real images 2.0 open ended", "COCO Visual Question Answering (VQA) abstract images 1.0 open ended"], "metric": ["Percentage correct"], "title": "VQA: Visual Question Answering"} {"abstract": "Spatiotemporal feature learning in videos is a fundamental problem in\ncomputer vision. This paper presents a new architecture, termed as\nAppearance-and-Relation Network (ARTNet), to learn video representation in an\nend-to-end manner. ARTNets are constructed by stacking multiple generic\nbuilding blocks, called as SMART, whose goal is to simultaneously model\nappearance and relation from RGB input in a separate and explicit manner.\nSpecifically, SMART blocks decouple the spatiotemporal learning module into an\nappearance branch for spatial modeling and a relation branch for temporal\nmodeling. The appearance branch is implemented based on the linear combination\nof pixels or filter responses in each frame, while the relation branch is\ndesigned based on the multiplicative interactions between pixels or filter\nresponses across multiple frames. We perform experiments on three action\nrecognition benchmarks: Kinetics, UCF101, and HMDB51, demonstrating that SMART\nblocks obtain an evident improvement over 3D convolutions for spatiotemporal\nfeature learning. 
Under the same training setting, ARTNets achieve superior\nperformance on these three datasets to the existing state-of-the-art methods.", "field": [], "task": ["Action Classification", "Action Recognition", "Temporal Action Localization", "Video Classification"], "method": [], "dataset": ["Kinetics-400", "UCF101", "HMDB-51"], "metric": ["Average accuracy of 3 splits", "Vid acc@5", "3-fold Accuracy", "Vid acc@1"], "title": "Appearance-and-Relation Networks for Video Classification"} {"abstract": "This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost\n1 million multi-turn dialogues, with a total of over 7 million utterances and\n100 million words. This provides a unique resource for research into building\ndialogue managers based on neural language models that can make use of large\namounts of unlabeled data. The dataset has both the multi-turn property of\nconversations in the Dialog State Tracking Challenge datasets, and the\nunstructured nature of interactions from microblog services such as Twitter. We\nalso describe two neural learning architectures suitable for analyzing this\ndataset, and provide benchmark performance on the task of selecting the best\nnext response.", "field": [], "task": ["Answer Selection", "Conversational Response Selection"], "method": [], "dataset": ["Ubuntu Dialogue (v1, Ranking)"], "metric": ["R10@1", "R10@5", "R2@1", "R10@2"], "title": "The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems"} {"abstract": "State-of-the-art semantic segmentation methods were almost exclusively trained on images within a fixed resolution range. These segmentations are inaccurate for very high-resolution images since using bicubic upsampling of low-resolution segmentation does not adequately capture high-resolution details along object boundaries. In this paper, we propose a novel approach to address the high-resolution segmentation problem without using any high-resolution training data. The key insight is our CascadePSP network which refines and corrects local boundaries whenever possible. Although our network is trained with low-resolution segmentation data, our method is applicable to any resolution even for very high-resolution images larger than 4K. We present quantitative and qualitative studies on different datasets to show that CascadePSP can reveal pixel-accurate segmentation boundaries using our novel refinement module without any finetuning. Thus, our method can be regarded as class-agnostic. Finally, we demonstrate the application of our model to scene parsing in multi-class segmentation.", "field": [], "task": ["Scene Parsing", "Semantic Segmentation"], "method": [], "dataset": ["BIG"], "metric": ["IoU", "mBA"], "title": "CascadePSP: Toward Class-Agnostic and Very High-Resolution Segmentation via Global and Local Refinement"} {"abstract": "Robust loss functions are essential for training accurate deep neural networks (DNNs) in the presence of noisy (incorrect) labels. It has been shown that the commonly used Cross Entropy (CE) loss is not robust to noisy labels. Whilst new loss functions have been designed, they are only partially robust. In this paper, we theoretically show by applying a simple normalization that: any loss can be made robust to noisy labels. However, in practice, simply being robust is not sufficient for a loss function to train accurate DNNs. By investigating several robust loss functions, we find that they suffer from a problem of underfitting. 
To address this, we propose a framework to build robust loss functions called Active Passive Loss (APL). APL combines two robust loss functions that mutually boost each other. Experiments on benchmark datasets demonstrate that the family of new loss functions created by our APL framework can consistently outperform state-of-the-art methods by large margins, especially under large noise rates such as 60% or 80% incorrect labels.", "field": [], "task": ["Learning with noisy labels"], "method": [], "dataset": ["mini WebVision 1.0"], "metric": ["ImageNet Top-1 Accuracy"], "title": "Normalized Loss Functions for Deep Learning with Noisy Labels"} {"abstract": "Fine-grained image classification is a challenging task due to the presence of hierarchical coarse-to-fine-grained distribution in the dataset. Generally, parts are used to discriminate various objects in fine-grained datasets; however, not all parts are beneficial and indispensable. In recent years, natural language descriptions have been used to obtain information on discriminative parts of the object. This paper leverages natural language descriptions and proposes a strategy for learning the joint representation of natural language descriptions and images using a two-branch network with multiple layers to improve the fine-grained classification task. Extensive experiments show that our approach gains significant improvements in accuracy for the fine-grained image classification task. Furthermore, our method achieves new state-of-the-art results on the CUB-200-2011 dataset.", "field": [], "task": ["Document Text Classification", "Fine-Grained Image Classification", "Image Classification", "Multimodal Deep Learning", "Multimodal Text and Image Classification"], "method": [], "dataset": ["CUB-200-2011"], "metric": ["Accuracy"], "title": "Are These Birds Similar: Learning Branched Networks for Fine-grained Representations"} {"abstract": "Deep convolutional networks have achieved great success for object\nrecognition in still images. However, for action recognition in videos, the\nimprovement of deep convolutional networks is not so evident. We argue that\nthere are two reasons that could probably explain this result. First, the\ncurrent network architectures (e.g. Two-stream ConvNets) are relatively shallow\ncompared with those very deep models in image domain (e.g. VGGNet, GoogLeNet),\nand therefore their modeling capacity is constrained by their depth. Second,\nprobably more importantly, the training dataset of action recognition is\nextremely small compared with the ImageNet dataset, and thus it will be easy to\nover-fit on the training dataset.\n To address these issues, this report presents very deep two-stream ConvNets\nfor action recognition, by adapting recent very deep architectures into video\ndomain. However, this extension is not easy as the size of action recognition datasets\nis quite small. We design several good practices for the training of very deep\ntwo-stream ConvNets, namely (i) pre-training for both spatial and temporal\nnets, (ii) smaller learning rates, (iii) more data augmentation techniques,\n(iv) high dropout ratio.
Meanwhile, we extend the Caffe toolbox into Multi-GPU\nimplementation with high computational efficiency and low memory consumption.\nWe verify the performance of very deep two-stream ConvNets on the dataset of\nUCF101 and it achieves the recognition accuracy of $91.4\\%$.", "field": [], "task": ["Action Recognition", "Action Recognition In Videos", "Action Recognition In Videos ", "Data Augmentation", "Temporal Action Localization"], "method": [], "dataset": ["UCF101"], "metric": ["3-fold Accuracy"], "title": "Towards Good Practices for Very Deep Two-Stream ConvNets"} {"abstract": "Few-shot learning (FSL) approaches are usually based on an assumption that the pre-trained knowledge can be obtained from base (seen) categories and can be well transferred to novel (unseen) categories. However, there is no guarantee, especially for the latter part. This issue leads to the unknown nature of the inference process in most FSL methods, which hampers its application in some risk-sensitive areas. In this paper, we reveal a new way to perform FSL for image classification, using visual representations from the backbone model and weights generated by a newly-emerged explainable classifier. The weighted representations only include a minimum number of distinguishable features and the visualized weights can serve as an informative hint for the FSL process. Finally, a discriminator will compare the representations of each pair of the images in the support set and the query set. Pairs with the highest scores will decide the classification results. Experimental results prove that the proposed method can achieve both good accuracy and satisfactory explainability on three mainstream datasets.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Image Classification"], "method": [], "dataset": ["Mini-Imagenet 5-way (1-shot)", "Tiered ImageNet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "CIFAR-FS 5-way (1-shot)", "Tiered ImageNet 5-way (5-shot)", "CIFAR-FS 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Match Them Up: Visually Explainable Few-shot Image Classification"} {"abstract": "Spectral graph convolutional networks are generalizations of standard convolutional networks for graph-structured data using the Laplacian operator. A common misconception is the instability of spectral filters, i.e. the impossibility to transfer spectral filters between graphs of variable size and topology. This misbelief has limited the development of spectral networks for multi-graph tasks in favor of spatial graph networks. However, recent works have proved the stability of spectral filters under graph perturbation. Our work complements and emphasizes further the high quality of spectral transferability by benchmarking spectral graph networks on tasks involving graphs of different size and connectivity. Numerical experiments exhibit favorable performance on graph regression, graph classification, and node classification problems on two graph benchmarks. The implementation of our experiments is available on GitHub for reproducibility.", "field": [], "task": ["Graph Classification", "Graph Regression", "Node Classification", "Regression"], "method": [], "dataset": ["ogbg-molhiv", "ZINC"], "metric": ["ROC-AUC", "MAE"], "title": "An Experimental Study of the Transferability of Spectral Graph Networks"} {"abstract": "Document-level relation extraction aims to extract relations among entities within a document. 
Different from sentence-level relation extraction, it requires reasoning over multiple sentences across a document. In this paper, we propose Graph Aggregation-and-Inference Network (GAIN) featuring double graphs. GAIN first constructs a heterogeneous mention-level graph (hMG) to model complex interaction among different mentions across the document. It also constructs an entity-level graph (EG), based on which we propose a novel path reasoning mechanism to infer relations between entities. Experiments on the public dataset, DocRED, show GAIN achieves a significant performance improvement (2.85 on F1) over the previous state-of-the-art. Our code is available at https://github.com/DreamInvoker/GAIN .", "field": [], "task": ["Relation Extraction"], "method": [], "dataset": ["DocRED"], "metric": ["Ign F1", "F1"], "title": "Double Graph Based Reasoning for Document-level Relation Extraction"} {"abstract": "In this paper, we address the task of utterance level emotion recognition in conversations using commonsense knowledge. We propose COSMIC, a new framework that incorporates different elements of commonsense such as mental states, events, and causal relations, and build upon them to learn interactions between interlocutors participating in a conversation. Current state-of-the-art methods often encounter difficulties in context propagation, emotion shift detection, and differentiating between related emotion classes. By learning distinct commonsense representations, COSMIC addresses these challenges and achieves new state-of-the-art results for emotion recognition on four different benchmark conversational datasets. Our code is available at https://github.com/declare-lab/conv-emotion.", "field": [], "task": ["Emotion Recognition", "Emotion Recognition in Conversation"], "method": [], "dataset": ["IEMOCAP", "MELD", "EmoryNLP", "DailyDialog"], "metric": ["Weighted Macro-F1", "Macro F1", "F1", "Micro-F1"], "title": "COSMIC: COmmonSense knowledge for eMotion Identification in Conversations"} {"abstract": "Self-supervised representation learning has witnessed significant leaps fueled by recent progress in Contrastive learning, which seeks to learn transformations that embed positive input pairs nearby, while pushing negative pairs far apart. While positive pairs can be generated reliably (e.g., as different views of the same image), it is difficult to accurately establish negative pairs, defined as samples from different images regardless of their semantic content or visual features. A fundamental problem in contrastive learning is mitigating the effects of false negatives. Contrasting false negatives induces two critical issues in representation learning: discarding semantic information and slow convergence. In this paper, we study this problem in detail and propose novel approaches to mitigate the effects of false negatives. The proposed methods exhibit consistent and significant improvements over existing contrastive learning-based models. 
They achieve new state-of-the-art performance on ImageNet evaluations, achieving 5.8% absolute improvement in top-1 accuracy over the previous state-of-the-art when finetuning with 1% labels, as well as transferring to downstream tasks.", "field": [], "task": ["Representation Learning", "Self-Supervised Image Classification", "Self-Supervised Learning", "Semi-Supervised Image Classification"], "method": [], "dataset": ["ImageNet", "ImageNet - 1% labeled data"], "metric": ["Top 5 Accuracy", "Top 1 Accuracy"], "title": "Boosting Contrastive Self-Supervised Learning with False Negative Cancellation"} {"abstract": "To address the sparsity and cold start problem of collaborative filtering,\nresearchers usually make use of side information, such as social networks or\nitem attributes, to improve recommendation performance. This paper considers\nthe knowledge graph as the source of side information. To address the\nlimitations of existing embedding-based and path-based methods for\nknowledge-graph-aware recommendation, we propose Ripple Network, an end-to-end\nframework that naturally incorporates the knowledge graph into recommender\nsystems. Similar to actual ripples propagating on the surface of water, Ripple\nNetwork stimulates the propagation of user preferences over the set of\nknowledge entities by automatically and iteratively extending a user's\npotential interests along links in the knowledge graph. The multiple \"ripples\"\nactivated by a user's historically clicked items are thus superposed to form\nthe preference distribution of the user with respect to a candidate item, which\ncould be used for predicting the final clicking probability. Through extensive\nexperiments on real-world datasets, we demonstrate that Ripple Network achieves\nsubstantial gains in a variety of scenarios, including movie, book and news\nrecommendation, over several state-of-the-art baselines.", "field": [], "task": ["Click-Through Rate Prediction", "Recommendation Systems"], "method": [], "dataset": ["MovieLens 1M", "Book-Crossing", "Bing News"], "metric": ["AUC", "Accuracy"], "title": "RippleNet: Propagating User Preferences on the Knowledge Graph for Recommender Systems"} {"abstract": "Graphs have been widely adopted to denote structural connections between entities. The relations are in many cases heterogeneous, but entangled together and denoted merely as a single edge between a pair of nodes. For example, in a social network graph, users in different latent relationships like friends and colleagues, are usually connected via a bare edge that conceals such intrinsic connections. In this paper, we introduce a novel graph convolutional network (GCN), termed as factorizable graph convolutional network(FactorGCN), that explicitly disentangles such intertwined relations encoded in a graph. FactorGCN takes a simple graph as input, and disentangles it into several factorized graphs, each of which represents a latent and disentangled relation among nodes. The features of the nodes are then aggregated separately in each factorized latent space to produce disentangled features, which further leads to better performances for downstream tasks. We evaluate the proposed FactorGCN both qualitatively and quantitatively on the synthetic and real-world datasets, and demonstrate that it yields truly encouraging results in terms of both disentangling and feature aggregation. 
Code is publicly available at https://github.com/ihollywhy/FactorGCN.PyTorch.", "field": [], "task": ["Graph Classification", "Graph Regression", "Node Classification"], "method": [], "dataset": ["COLLAB", "IMDb-B", "ZINC", "MUTAG", "PATTERN 100k"], "metric": ["MAE", "Accuracy (%)", "Accuracy (10-fold)", "Accuracy"], "title": "Factorizable Graph Convolutional Networks"} {"abstract": "Past few years have witnessed exponential growth of interest in deep learning\nmethodologies with rapidly improving accuracies and reduced computational\ncomplexity. In particular, architectures using Convolutional Neural Networks\n(CNNs) have produced state-of-the-art performances for image classification and\nobject recognition tasks. Recently, Capsule Networks (CapsNet) achieved\nsignificant increase in performance by addressing an inherent limitation of\nCNNs in encoding pose and deformation. Inspired by such advancement, we asked\nourselves, can we do better? We propose Dense Capsule Networks (DCNet) and\nDiverse Capsule Networks (DCNet++). The two proposed frameworks customize the\nCapsNet by replacing the standard convolutional layers with densely connected\nconvolutions. This helps in incorporating feature maps learned by different\nlayers in forming the primary capsules. DCNet, essentially adds a deeper\nconvolution network, which leads to learning of discriminative feature maps.\nAdditionally, DCNet++ uses a hierarchical architecture to learn capsules that\nrepresent spatial information in a fine-to-coarser manner, which makes it more\nefficient for learning complex data. Experiments on image classification task\nusing benchmark datasets demonstrate the efficacy of the proposed\narchitectures. DCNet achieves state-of-the-art performance (99.75%) on MNIST\ndataset with twenty fold decrease in total training iterations, over the\nconventional CapsNet. Furthermore, DCNet++ performs better than CapsNet on SVHN\ndataset (96.90%), and outperforms the ensemble of seven CapsNet models on\nCIFAR-10 by 0.31% with seven fold decrease in number of parameters.", "field": [], "task": ["Image Classification", "Object Recognition"], "method": [], "dataset": ["smallNORB"], "metric": ["Classification Error"], "title": "Dense and Diverse Capsule Networks: Making the Capsules Learn Better"} {"abstract": "The cost of large scale data collection and annotation often makes the\napplication of machine learning algorithms to new tasks or datasets\nprohibitively expensive. One approach circumventing this cost is training\nmodels on synthetic data where annotations are provided automatically. Despite\ntheir appeal, such models often fail to generalize from synthetic to real\nimages, necessitating domain adaptation algorithms to manipulate these models\nbefore they can be successfully applied. Existing approaches focus either on\nmapping representations from one domain to the other, or on learning to extract\nfeatures that are invariant to the domain from which they were extracted.\nHowever, by focusing only on creating a mapping or shared representation\nbetween the two domains, they ignore the individual characteristics of each\ndomain. We suggest that explicitly modeling what is unique to each domain can\nimprove a model's ability to extract domain-invariant features. Inspired by\nwork on private-shared component analysis, we explicitly learn to extract image\nrepresentations that are partitioned into two subspaces: one component which is\nprivate to each domain and one which is shared across domains. 
Our model is\ntrained not only to perform the task we care about in the source domain, but\nalso to use the partitioned representation to reconstruct the images from both\ndomains. Our novel architecture results in a model that outperforms the\nstate-of-the-art on a range of unsupervised domain adaptation scenarios and\nadditionally produces visualizations of the private and shared representations\nenabling interpretation of the domain adaptation process.", "field": [], "task": ["Domain Adaptation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["SVNH-to-MNIST", "Synth Digits-to-SVHN", "Synth Signs-to-GTSRB", "MNIST-to-MNIST-M", "Synth Objects-to-LINEMOD"], "metric": ["Classification Accuracy", "Mean Angle Error", "Accuracy"], "title": "Domain Separation Networks"} {"abstract": "We introduce a convolutional neural network that operates directly on graphs.\nThese networks allow end-to-end learning of prediction pipelines whose inputs\nare graphs of arbitrary size and shape. The architecture we present generalizes\nstandard molecular feature extraction methods based on circular fingerprints.\nWe show that these data-driven features are more interpretable, and have better\npredictive performance on a variety of tasks.", "field": [], "task": ["Drug Discovery", "Graph Regression", "Node Classification"], "method": [], "dataset": ["PubMed (0.1%)", "PubMed (0.03%)", "MUV", "ToxCast", "Cora (1%)", "HIV dataset", "PubMed (0.05%)", "PCBA", "Cora (3%)", "Tox21", "CiteSeer (1%)", "Cora (0.5%)", "Cora with Public Split: fixed 20 nodes per class", "CiteSeer (0.5%)", "CiteSeer with Public Split: fixed 20 nodes per class", "PubMed with Public Split: fixed 20 nodes per class", "Lipophilicity "], "metric": ["RMSE", "AUC", "Accuracy"], "title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints"} {"abstract": "Existing entity typing systems usually exploit the type hierarchy provided by\nknowledge base (KB) schema to model label correlations and thus improve the\noverall performance. Such techniques, however, are not directly applicable to\nmore open and practical scenarios where the type set is not restricted by KB\nschema and includes a vast number of free-form types. To model the underlying\nlabel correlations without access to manually annotated label structures, we\nintroduce a novel label-relational inductive bias, represented by a graph\npropagation layer that effectively encodes both global label co-occurrence\nstatistics and word-level similarities. On a large dataset with over 10,000\nfree-form types, the graph-enhanced model equipped with an attention-based\nmatching module is able to achieve a much higher recall score while maintaining\na high-level precision. Specifically, it achieves a 15.3% relative F1\nimprovement and also less inconsistency in the outputs. We further show that a\nsimple modification of our proposed graph layer can also improve the\nperformance on a conventional and widely-tested dataset that only includes\nKB-schema types.", "field": [], "task": ["Entity Typing"], "method": [], "dataset": ["Ontonotes v5 (English)"], "metric": ["Precision", "Recall", "F1"], "title": "Imposing Label-Relational Inductive Bias for Extremely Fine-Grained Entity Typing"} {"abstract": "We introduce a novel approach to graph-level representation learning, which is to embed an entire graph into a vector space where the embeddings of two graphs preserve their graph-graph proximity. 
Our approach, UGRAPHEMB, is a general framework that provides a novel means of performing graph-level embedding in a completely unsupervised and inductive manner. The learned neural network can be considered as a function that receives any graph as input, either seen or unseen in the training set, and transforms it into an embedding. A novel graph-level embedding generation mechanism, called Multi-Scale Node Attention (MSNA), is proposed. Experiments on five real graph datasets show that UGRAPHEMB achieves competitive accuracy in the tasks of graph classification, similarity ranking, and graph visualization.", "field": [], "task": ["Graph Classification", "Graph Embedding", "Graph Similarity", "Representation Learning"], "method": [], "dataset": ["NCI109", "Web", "IMDb-M", "REDDIT-MULTI-12K", "PTC"], "metric": ["Accuracy"], "title": "Unsupervised Inductive Graph-Level Representation Learning via Graph-Graph Proximity"} {"abstract": "Automatic search of neural architectures for various vision and natural language tasks is becoming a prominent tool as it allows discovering high-performing structures on any dataset of interest. Nevertheless, on more difficult domains, such as dense per-pixel classification, current automatic approaches are limited in their scope - due to their strong reliance on existing image classifiers they tend to search only for a handful of additional layers with discovered architectures still containing a large number of parameters. In contrast, in this work we propose a novel solution able to find light-weight and accurate segmentation architectures starting from only a few blocks of a pre-trained classification network. To this end, we progressively build up a methodology that relies on templates of sets of operations, predicts which template and how many times should be applied at each step, while also generating the connectivity structure and downsampling factors. All these decisions are being made by a recurrent neural network that is rewarded based on the score of the emitted architecture on the holdout set and trained using reinforcement learning. One discovered architecture achieves 63.2% mean IoU on CamVid and 67.8% on CityScapes having only 270K parameters. Pre-trained models and the search code are available at https://github.com/DrSleep/nas-segm-pytorch.", "field": [], "task": ["Real-Time Semantic Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["CamVid", "Cityscapes val", "Cityscapes test"], "metric": ["Time (ms)", "Mean IoU", "mIoU", "Mean IoU (class)", "Frame (fps)"], "title": "Template-Based Automatic Search of Compact Semantic Segmentation Architectures"} {"abstract": "We propose a method for creating a matte -- the per-pixel foreground color and alpha -- of a person by taking photos or videos in an everyday setting with a handheld camera. Most existing matting methods require a green screen background or a manually created trimap to produce a good matte. Automatic, trimap-free methods are appearing, but are not of comparable quality. In our trimap free approach, we ask the user to take an additional photo of the background without the subject at the time of capture. This step requires a small amount of foresight but is far less time-consuming than creating a trimap. We train a deep network with an adversarial loss to predict the matte. We first train a matting network with supervised loss on ground truth data with synthetic composites. 
To bridge the domain gap to real imagery with no labeling, we train another matting network guided by the first network and by a discriminator that judges the quality of composites. We demonstrate results on a wide variety of photos and videos and show significant improvement over the state of the art.", "field": [], "task": ["Image Matting"], "method": [], "dataset": ["Adobe Matting"], "metric": ["MSE", "SAD"], "title": "Background Matting: The World is Your Green Screen"} {"abstract": "Correspondences between frames encode rich information about dynamic content in videos. However, it is challenging to effectively capture and learn those due to their irregular structure and complex dynamics. In this paper, we propose a novel neural network that learns video representations by aggregating information from potential correspondences. This network, named $CPNet$, can learn evolving 2D fields with temporal consistency. In particular, it can effectively learn representations for videos by mixing appearance and long-range motion with an RGB-only input. We provide extensive ablation experiments to validate our model. CPNet shows stronger performance than existing methods on Kinetics and achieves the state-of-the-art performance on Something-Something and Jester. We provide analysis towards the behavior of our model and show its robustness to errors in proposals.", "field": [], "task": ["Action Recognition"], "method": [], "dataset": ["Jester", "Something-Something V2"], "metric": ["Val", "Top-5 Accuracy", "Top-1 Accuracy"], "title": "Learning Video Representations from Correspondence Proposals"} {"abstract": "We propose a novel, conceptually simple and general framework for instance segmentation on 3D point clouds. Our method, called 3D-BoNet, follows the simple design philosophy of per-point multilayer perceptrons (MLPs). The framework directly regresses 3D bounding boxes for all instances in a point cloud, while simultaneously predicting a point-level mask for each instance. It consists of a backbone network followed by two parallel network branches for 1) bounding box regression and 2) point mask prediction. 3D-BoNet is single-stage, anchor-free and end-to-end trainable. Moreover, it is remarkably computationally efficient as, unlike existing approaches, it does not require any post-processing steps such as non-maximum suppression, feature sampling, clustering or voting. Extensive experiments show that our approach surpasses existing work on both ScanNet and S3DIS datasets while being approximately 10x more computationally efficient. Comprehensive ablation studies demonstrate the effectiveness of our design.", "field": [], "task": ["3D Instance Segmentation", "Instance Segmentation", "Regression", "Semantic Segmentation"], "method": [], "dataset": ["ScanNet(v2)", "S3DIS"], "metric": ["mPrec", "mAP", "Mean AP @ 0.5", "mRec"], "title": "Learning Object Bounding Boxes for 3D Instance Segmentation on Point Clouds"} {"abstract": "Unsupervised domain adaptation aims to address the problem of classifying unlabeled samples from the target domain whilst labeled samples are only available from the source domain and the data distributions are different in these two domains. As a result, classifiers trained from labeled samples in the source domain suffer from significant performance drop when directly applied to the samples from the target domain. To address this issue, different approaches have been proposed to learn domain-invariant features or domain-specific classifiers. 
In either case, the lack of labeled samples in the target domain can be an issue which is usually overcome by pseudo-labeling. Inaccurate pseudo-labeling, however, could result in catastrophic error accumulation during learning. In this paper, we propose a novel selective pseudo-labeling strategy based on structured prediction. The idea of structured prediction is inspired by the fact that samples in the target domain are well clustered within the deep feature space so that unsupervised clustering analysis can be used to facilitate accurate pseudo-labeling. Experimental results on four datasets (i.e. Office-Caltech, Office31, ImageCLEF-DA and Office-Home) validate our approach outperforms contemporary state-of-the-art methods.", "field": [], "task": ["Domain Adaptation", "Structured Prediction", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["Office-Home", "Office-31", "Office-Caltech", "ImageCLEF-DA"], "metric": ["Average Accuracy", "Accuracy"], "title": "Unsupervised Domain Adaptation via Structured Prediction Based Selective Pseudo-Labeling"} {"abstract": "We present 3D-MPA, a method for instance segmentation on 3D point clouds. Given an input point cloud, we propose an object-centric approach where each point votes for its object center. We sample object proposals from the predicted object centers. Then, we learn proposal features from grouped point features that voted for the same object center. A graph convolutional network introduces inter-proposal relations, providing higher-level feature learning in addition to the lower-level point features. Each proposal comprises a semantic label, a set of associated points over which we define a foreground-background mask, an objectness score and aggregation features. Previous works usually perform non-maximum-suppression (NMS) over proposals to obtain the final object detections or semantic instances. However, NMS can discard potentially correct predictions. Instead, our approach keeps all proposals and groups them together based on the learned aggregation features. We show that grouping proposals improves over NMS and outperforms previous state-of-the-art methods on the tasks of 3D object detection and semantic instance segmentation on the ScanNetV2 benchmark and the S3DIS dataset.", "field": [], "task": ["3D Instance Segmentation", "3D Object Detection", "3D Semantic Instance Segmentation"], "method": [], "dataset": ["ScanNetV2", "ScanNet(v2)", "S3DIS"], "metric": ["mAP", "Mean AP @ 0.5", "mRec", "mAP@0.50", "mPrec", "mAP@0.5", "mAP@0.25"], "title": "3D-MPA: Multi Proposal Aggregation for 3D Semantic Instance Segmentation"} {"abstract": "Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method. In this work, we show that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework. 
When evaluated on a wide range of open-domain QA datasets, our dense retriever outperforms a strong Lucene-BM25 system largely by 9%-19% absolute in terms of top-20 passage retrieval accuracy, and helps our end-to-end QA system establish new state-of-the-art on multiple open-domain QA benchmarks.", "field": [], "task": ["Open-Domain Question Answering", "Question Answering"], "method": [], "dataset": ["TriviaQA", "Natural Questions (short)"], "metric": ["F1"], "title": "Dense Passage Retrieval for Open-Domain Question Answering"} {"abstract": "In this study, we focus on the unsupervised domain adaptation problem where an approximate inference model is to be learned from a labeled data domain and expected to generalize well to an unlabeled data domain. The success of unsupervised domain adaptation largely relies on the cross-domain feature alignment. Previous work has attempted to directly align latent features by the classifier-induced discrepancies. Nevertheless, a common feature space cannot always be learned via this direct feature alignment especially when a large domain gap exists. To solve this problem, we introduce a Gaussian-guided latent alignment approach to align the latent feature distributions of the two domains under the guidance of the prior distribution. In such an indirect way, the distributions over the samples from the two domains will be constructed on a common feature space, i.e., the space of the prior, which promotes better feature alignment. To effectively align the target latent distribution with this prior distribution, we also propose a novel unpaired L1-distance by taking advantage of the formulation of the encoder-decoder. The extensive evaluations on nine benchmark datasets validate the superior knowledge transferability through outperforming state-of-the-art methods and the versatility of the proposed method by improving the existing work significantly.", "field": [], "task": ["Data Augmentation", "Domain Adaptation", "Domain Generalization", "Traffic Sign Recognition", "Transfer Learning", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["ImageCLEF-DA", "Office-Home", "SVHN-to-MNIST", "USPS-to-MNIST", "SYNSIG-to-GTSRB", "MNIST-to-USPS"], "metric": ["Accuracy"], "title": "Discriminative Feature Alignment: Improving Transferability of Unsupervised Domain Adaptation by Gaussian-guided Latent Alignment"} {"abstract": "Identifying emotion from speech is a non-trivial task pertaining to the\nambiguous definition of emotion itself. In this work, we adopt a\nfeature-engineering based approach to tackle the task of speech emotion\nrecognition. Formalizing our problem as a multi-class classification problem,\nwe compare the performance of two categories of models. For both, we extract\neight hand-crafted features from the audio signal. In the first approach, the\nextracted features are used to train six traditional machine learning\nclassifiers, whereas the second approach is based on deep learning wherein a\nbaseline feed-forward neural network and an LSTM-based classifier are trained\nover the same features. In order to resolve ambiguity in communication, we also\ninclude features from the text domain. 
We report accuracy, f-score, precision,\nand recall for the different experiment settings we evaluated our models in.\nOverall, we show that lighter machine learning based models trained over a few\nhand-crafted features are able to achieve performance comparable to the current\ndeep learning based state-of-the-art method for emotion recognition.", "field": [], "task": ["Emotion Recognition", "Feature Engineering", "Multi-class Classification", "Multimodal Emotion Recognition", "Speech Emotion Recognition"], "method": [], "dataset": ["IEMOCAP"], "metric": ["UA", "F1"], "title": "Multimodal Speech Emotion Recognition and Ambiguity Resolution"} {"abstract": "We propose the first stochastic framework to employ uncertainty for RGB-D saliency detection by learning from the data labeling process. Existing RGB-D saliency detection models treat this task as a point estimation problem by predicting a single saliency map following a deterministic learning pipeline. We argue that, however, the deterministic solution is relatively ill-posed. Inspired by the saliency data labeling process, we propose a generative architecture to achieve probabilistic RGB-D saliency detection which utilizes a latent variable to model the labeling variations. Our framework includes two main models: 1) a generator model, which maps the input image and latent variable to stochastic saliency prediction, and 2) an inference model, which gradually updates the latent variable by sampling it from the true or approximate posterior distribution. The generator model is an encoder-decoder saliency network. To infer the latent variable, we introduce two different solutions: i) a Conditional Variational Auto-encoder with an extra encoder to approximate the posterior distribution of the latent variable; and ii) an Alternating Back-Propagation technique, which directly samples the latent variable from the true posterior distribution. Qualitative and quantitative results on six challenging RGB-D benchmark datasets show our approach's superior performance in learning the distribution of saliency maps. The source code is publicly available via our project page: https://github.com/JingZhang617/UCNet.", "field": [], "task": ["RGB-D Salient Object Detection", "RGB Salient Object Detection", "Saliency Detection", "Saliency Prediction"], "method": [], "dataset": ["DUTS-test", "DUT-OMRON", "ECSSD", "STERE", "NLPR", "SOC", "DES", "SIP", "LFSD", "HKU-IS", "DUTS-TE", "NJU2K"], "metric": ["S-Measure", "mean F-Measure", "Average MAE", "mean E-Measure", "MAE"], "title": "Uncertainty Inspired RGB-D Saliency Detection"} {"abstract": "Story visualization aims at generating a sequence of images to narrate each sentence in a multi-sentence story. Different from video generation that focuses on maintaining the continuity of generated images (frames), story visualization emphasizes preserving the global consistency of characters and scenes across different story pictures, which is very challenging since story sentences only provide sparse signals for generating images. Therefore, we propose a new framework named Character-Preserving Coherent Story Visualization (CP-CSV) to tackle the challenges. 
CP-CSV effectively learns to visualize the story via three critical modules: story and context encoder (story and sentence representation learning), figure-ground segmentation (auxiliary task to provide information for preserving character and story consistency), and figure-ground aware generation (image sequence generation by incorporating figure-ground information). Moreover, we propose a metric named Fr\\'{e}chet Story Distance (FSD) to evaluate the performance of story visualization. Extensive experiments demonstrate that CP-CSV maintains the details of character information and achieves high consistency among different frames, while FSD better measures the performance of story visualization.", "field": [], "task": ["Representation Learning", "Story Visualization"], "method": [], "dataset": ["Pororo"], "metric": ["FSD", "FID"], "title": "Character-Preserving Coherent Story Visualization"} {"abstract": "Interpersonal language style shifting in dialogues is an interesting and almost instinctive ability of humans. Understanding interpersonal relationships from language content is also a crucial step toward further understanding dialogues. Previous work mainly focuses on relation extraction between named entities in texts. In this paper, we propose the task of relation classification of interlocutors based on their dialogues. We crawled movie scripts from IMSDb, and annotated the relation labels for each session according to 13 pre-defined relationships. The annotated dataset DDRel consists of 6300 dyadic dialogue sessions between 694 pairs of speakers with 53,126 utterances in total. We also construct session-level and pair-level relation classification tasks with widely-accepted baselines. The experimental results show that this task is challenging for existing models and the dataset will be useful for future research.", "field": [], "task": ["Dialog Relation Extraction", "Relation Classification", "Relation Extraction"], "method": [], "dataset": ["DDRel"], "metric": ["Pair-level 4-class Acc", "Session-level 4-class Acc", "Pair-level 13-class Acc", "Session-level 13-class Acc", "Session-level 6-class Acc", "Pair-level 6-class Acc"], "title": "DDRel: A New Dataset for Interpersonal Relation Classification in Dyadic Dialogues"} {"abstract": "Modern deep learning architectures produce highly accurate results on many\nchallenging semantic segmentation datasets. State-of-the-art methods are,\nhowever, not directly transferable to real-time applications or embedded\ndevices, since naive adaptation of such systems to reduce computational cost\n(speed, memory and energy) causes a significant drop in accuracy. We propose\nContextNet, a new deep neural network architecture which builds on factorized\nconvolution, network compression and pyramid representation to produce\ncompetitive semantic segmentation in real-time with low memory requirement.\nContextNet combines a deep network branch at low resolution that captures\nglobal context information efficiently with a shallow branch that focuses on\nhigh-resolution segmentation details. 
We analyse our network in a thorough\nablation study and present results on the Cityscapes dataset, achieving 66.1%\naccuracy at 18.3 frames per second at full (1024x2048) resolution (41.9 fps\nwith pipelined computations for streamed data).", "field": [], "task": ["Semantic Segmentation"], "method": [], "dataset": ["Cityscapes val"], "metric": ["mIoU"], "title": "ContextNet: Exploring Context and Detail for Semantic Segmentation in Real-time"} {"abstract": "A popular recent approach to answering open-domain questions is to first\nsearch for question-related passages and then apply reading comprehension\nmodels to extract answers. Existing methods usually extract answers from single\npassages independently. But some questions require a combination of evidence\nfrom across different sources to answer correctly. In this paper, we propose\ntwo models which make use of multiple passages to generate their answers. Both\nuse an answer-reranking approach which reorders the answer candidates generated\nby an existing state-of-the-art QA model. We propose two methods, namely,\nstrength-based re-ranking and coverage-based re-ranking, to make use of the\naggregated evidence from different passages to better determine the answer. Our\nmodels have achieved state-of-the-art results on three public open-domain QA\ndatasets: Quasar-T, SearchQA and the open-domain version of TriviaQA, with\nabout 8 percentage points of improvement over the former two datasets.", "field": [], "task": ["Open-Domain Question Answering", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["Quasar"], "metric": ["EM (Quasar-T)", "F1 (Quasar-T)"], "title": "Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering"} {"abstract": "Person Re-identification (re-id) faces two major challenges: the lack of\ncross-view paired training data and learning discriminative identity-sensitive\nand view-invariant features in the presence of large pose variations. In this\nwork, we address both problems by proposing a novel deep person image\ngeneration model for synthesizing realistic person images conditional on the\npose. The model is based on a generative adversarial network (GAN) designed\nspecifically for pose normalization in re-id, thus termed pose-normalization\nGAN (PN-GAN). With the synthesized images, we can learn a new type of deep\nre-id feature free of the influence of pose variations. We show that this\nfeature is strong on its own and complementary to features learned with the\noriginal images. Importantly, under the transfer learning setting, we show that\nour model generalizes well to any new re-id dataset without the need for\ncollecting any training data for model fine-tuning. The model thus has the\npotential to make re-id model truly scalable.", "field": [], "task": ["Image Generation", "Person Re-Identification", "Transfer Learning"], "method": [], "dataset": ["Market-1501->DukeMTMC-reID"], "metric": ["Rank-1", "mAP"], "title": "Pose-Normalized Image Generation for Person Re-identification"} {"abstract": "The detection of anomalous structures in natural image data is of utmost importance for numerous tasks in the field of computer vision. The development of methods for unsupervised anomaly detection requires data on which to train and evaluate new approaches and ideas. We introduce the MVTec Anomaly Detection (MVTec AD) dataset containing 5354 high-resolution color images of different object and texture categories. 
It contains normal, i.e., defect-free, images intended for training and images with anomalies intended for testing. The anomalies manifest themselves in the form of over 70 different types of defects such as scratches, dents, contaminations, and various structural changes. In addition, we provide pixel-precise ground truth regions for all anomalies. We also conduct a thorough evaluation of current state-of-the-art unsupervised anomaly detection methods based on deep architectures such as convolutional autoencoders, generative adversarial networks, and feature descriptors using pre-trained convolutional neural networks, as well as classical computer vision methods. This initial benchmark indicates that there is considerable room for improvement. To the best of our knowledge, this is the first comprehensive, multi-object, multi-defect dataset for anomaly detection that provides pixel-accurate ground truth regions and focuses on real-world applications. \r", "field": [], "task": ["Anomaly Detection", "Unsupervised Anomaly Detection"], "method": [], "dataset": ["MVTec AD"], "metric": ["Segmentation AUROC"], "title": "MVTec AD -- A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection"} {"abstract": "Anomaly detection in videos refers to the identification of events that do\nnot conform to expected behavior. However, almost all existing methods tackle\nthe problem by minimizing the reconstruction errors of training data, which\ncannot guarantee a larger reconstruction error for an abnormal event. In this\npaper, we propose to tackle the anomaly detection problem within a video\nprediction framework. To the best of our knowledge, this is the first work that\nleverages the difference between a predicted future frame and its ground truth\nto detect an abnormal event. To predict a future frame with higher quality for\nnormal events, other than the commonly used appearance (spatial) constraints on\nintensity and gradient, we also introduce a motion (temporal) constraint in\nvideo prediction by enforcing the optical flow between predicted frames and\nground truth frames to be consistent, and this is the first work that\nintroduces a temporal constraint into the video prediction task. Such spatial\nand motion constraints facilitate the future frame prediction for normal\nevents, and consequently facilitate to identify those abnormal events that do\nnot conform the expectation. Extensive experiments on both a toy dataset and\nsome publicly available datasets validate the effectiveness of our method in\nterms of robustness to the uncertainty in normal events and the sensitivity to\nabnormal events.", "field": [], "task": ["Anomaly Detection", "Optical Flow Estimation", "Video Prediction"], "method": [], "dataset": ["A3D", "SA"], "metric": ["AUC"], "title": "Future Frame Prediction for Anomaly Detection -- A New Baseline"} {"abstract": "Convolutional neural network-based approaches for semantic segmentation rely on supervision with pixel-level ground truth, but may not generalize well to unseen image domains. As the labeling process is tedious and labor intensive, developing algorithms that can adapt source ground truth labels to the target domain is of great interest. In this paper, we propose an adversarial learning method for domain adaptation in the context of semantic segmentation. Considering semantic segmentations as structured outputs that contain spatial similarities between the source and target domains, we adopt adversarial learning in the output space. 
To further enhance the adapted model, we construct a multi-level adversarial network to effectively perform output space domain adaptation at different feature levels. Extensive experiments and ablation study are conducted under various domain adaptation settings, including synthetic-to-real and cross-city scenarios. We show that the proposed method performs favorably against the state-of-the-art methods in terms of accuracy and visual quality.", "field": [], "task": ["Domain Adaptation", "Image-to-Image Translation", "Semantic Segmentation", "Synthetic-to-Real Translation"], "method": [], "dataset": ["GTAV-to-Cityscapes Labels", "SYNTHIA-to-Cityscapes"], "metric": ["mIoU (13 classes)", "mIoU"], "title": "Learning to Adapt Structured Output Space for Semantic Segmentation"} {"abstract": "Deep learning techniques have achieved success in aspect-based sentiment\nanalysis in recent years. However, there are two important issues that still\nremain to be further studied, i.e., 1) how to efficiently represent the target\nespecially when the target contains multiple words; 2) how to utilize the\ninteraction between target and left/right contexts to capture the most\nimportant words in them. In this paper, we propose an approach, called\nleft-center-right separated neural network with rotatory attention (LCR-Rot),\nto better address the two problems. Our approach has two characteristics: 1) it\nhas three separated LSTMs, i.e., left, center and right LSTMs, corresponding to\nthree parts of a review (left context, target phrase and right context); 2) it\nhas a rotatory attention mechanism which models the relation between target and\nleft/right contexts. The target2context attention is used to capture the most\nindicative sentiment words in left/right contexts. Subsequently, the\ncontext2target attention is used to capture the most important word in the\ntarget. This leads to a two-side representation of the target: left-aware\ntarget and right-aware target. We compare our approach on three benchmark\ndatasets with ten related methods proposed recently. The results show that our\napproach significantly outperforms the state-of-the-art techniques.", "field": [], "task": ["Aspect-Based Sentiment Analysis", "Sentiment Analysis"], "method": [], "dataset": ["SemEval 2014 Task 4 Sub Task 2"], "metric": ["Laptop (Acc)", "Restaurant (Acc)", "Mean Acc (Restaurant + Laptop)"], "title": "Left-Center-Right Separated Neural Network for Aspect-based Sentiment Analysis with Rotatory Attention"} {"abstract": "Face photo-sketch synthesis aims at generating a facial sketch/photo conditioned on a given photo/sketch. It is of wide applications including digital entertainment and law enforcement. Precisely depicting face photos/sketches remains challenging due to the restrictions on structural realism and textural consistency. While existing methods achieve compelling results, they mostly yield blurred effects and great deformation over various facial components, leading to the unrealistic feeling of synthesized images. To tackle this challenge, in this work, we propose to use the facial composition information to help the synthesis of face sketch/photo. Specially, we propose a novel composition-aided generative adversarial network (CA-GAN) for face photo-sketch synthesis. In CA-GAN, we utilize paired inputs including a face photo/sketch and the corresponding pixel-wise face labels for generating a sketch/photo. 
In addition, to focus training on hard-generated components and delicate facial structures, we propose a compositional reconstruction loss. Finally, we use stacked CA-GANs (SCA-GAN) to further rectify defects and add compelling details. Experimental results show that our method is capable of generating both visually comfortable and identity-preserving face sketches/photos over a wide range of challenging data. Our method achieves the state-of-the-art quality, reducing best previous Frechet Inception distance (FID) by a large margin. Besides, we demonstrate that the proposed method is of considerable generalization ability. We have made our code and results publicly available: https://fei-hdu.github.io/ca-gan/.", "field": [], "task": ["Face Sketch Synthesis"], "method": [], "dataset": ["CUFS", "CUFSF"], "metric": ["FID", "NLDA", "FSIM"], "title": "Towards Realistic Face Photo-Sketch Synthesis via Composition-Aided GANs"} {"abstract": "Person re-identification is a challenging task due to various complex factors. Recent studies have attempted to integrate human parsing results or externally defined attributes to help capture human parts or important object regions. On the other hand, there still exist many useful contextual cues that do not fall into the scope of predefined human parts or attributes. In this paper, we address the missed contextual cues by exploiting both the accurate human parts and the coarse non-human parts. In our implementation, we apply a human parsing model to extract the binary human part masks \\emph{and} a self-attention mechanism to capture the soft latent (non-human) part masks. We verify the effectiveness of our approach with new state-of-the-art performances on three challenging benchmarks: Market-1501, DukeMTMC-reID and CUHK03. Our implementation is available at https://github.com/ggjy/P2Net.pytorch.", "field": [], "task": ["Human Parsing", "Person Re-Identification"], "method": [], "dataset": ["DukeMTMC-reID", "Market-1501"], "metric": ["Rank-1", "Rank-10", "Rank-5", "MAP"], "title": "Beyond Human Parts: Dual Part-Aligned Representations for Person Re-Identification"} {"abstract": "Convolutional Neural Networks (CNNs) achieve state-of-the-art performance in\nmany computer vision tasks. However, this achievement is preceded by extreme\nmanual annotation in order to perform either training from scratch or\nfine-tuning for the target task. In this work, we propose to fine-tune CNN for\nimage retrieval from a large collection of unordered images in a fully\nautomated manner. We employ state-of-the-art retrieval and\nStructure-from-Motion (SfM) methods to obtain 3D models, which are used to\nguide the selection of the training data for CNN fine-tuning. We show that both\nhard positive and hard negative examples enhance the final performance in\nparticular object retrieval with compact codes.", "field": [], "task": ["Image Retrieval", "Structure from Motion"], "method": [], "dataset": ["Par106k", "Par6k", "Oxf5k", "Oxf105k"], "metric": ["mAP", "MAP"], "title": "CNN Image Retrieval Learns from BoW: Unsupervised Fine-Tuning with Hard Examples"} {"abstract": "Tree-structured neural networks exploit valuable syntactic parse information\nas they interpret the meanings of sentences. However, they suffer from two key\ntechnical problems that make them slow and unwieldy for large-scale NLP tasks:\nthey usually operate on parsed sentences and they do not directly support\nbatched computation. 
We address these issues by introducing the Stack-augmented\nParser-Interpreter Neural Network (SPINN), which combines parsing and\ninterpretation within a single tree-sequence hybrid model by integrating\ntree-structured sentence interpretation into the linear sequential structure of\na shift-reduce parser. Our model supports batched computation for a speedup of\nup to 25 times over other tree-structured models, and its integrated parser can\noperate on unparsed data with little loss in accuracy. We evaluate it on the\nStanford NLI entailment task and show that it significantly outperforms other\nsentence-encoding models.", "field": [], "task": [], "method": [], "dataset": ["SNLI"], "metric": ["Parameters", "% Train Accuracy", "% Test Accuracy"], "title": "A Fast Unified Model for Parsing and Sentence Understanding"} {"abstract": "Deep neural networks are known to be annotation-hungry. Numerous efforts have been devoted to reducing the annotation cost when learning with deep networks. Two prominent directions include learning with noisy labels and semi-supervised learning by exploiting unlabeled data. In this work, we propose DivideMix, a novel framework for learning with noisy labels by leveraging semi-supervised learning techniques. In particular, DivideMix models the per-sample loss distribution with a mixture model to dynamically divide the training data into a labeled set with clean samples and an unlabeled set with noisy samples, and trains the model on both the labeled and unlabeled data in a semi-supervised manner. To avoid confirmation bias, we simultaneously train two diverged networks where each network uses the dataset division from the other network. During the semi-supervised training phase, we improve the MixMatch strategy by performing label co-refinement and label co-guessing on labeled and unlabeled samples, respectively. Experiments on multiple benchmark datasets demonstrate substantial improvements over state-of-the-art methods. Code is available at https://github.com/LiJunnan1992/DivideMix .", "field": [], "task": ["Image Classification", "Learning with noisy labels"], "method": [], "dataset": ["mini WebVision 1.0", "Clothing1M"], "metric": ["Top-5 Accuracy", "ImageNet Top-1 Accuracy", "Top-1 Accuracy", "Accuracy", "ImageNet Top-5 Accuracy"], "title": "DivideMix: Learning with Noisy Labels as Semi-supervised Learning"} {"abstract": "This paper addresses the challenge of 3D full-body human pose estimation from\na monocular image sequence. Here, two cases are considered: (i) the image\nlocations of the human joints are provided and (ii) the image locations of\njoints are unknown. In the former case, a novel approach is introduced that\nintegrates a sparsity-driven 3D geometric prior and temporal smoothness. In the\nlatter case, the former case is extended by treating the image locations of the\njoints as latent variables. A deep fully convolutional network is trained to\npredict the uncertainty maps of the 2D joint locations. The 3D pose estimates\nare realized via an Expectation-Maximization algorithm over the entire\nsequence, where it is shown that the 2D joint location uncertainties can be\nconveniently marginalized out during inference. Empirical evaluation on the\nHuman3.6M dataset shows that the proposed approaches achieve greater 3D pose\nestimation accuracy over state-of-the-art baselines. 
Further, the proposed\napproach outperforms a publicly available 2D pose estimation baseline on the\nchallenging PennAction dataset.", "field": [], "task": ["3D Human Pose Estimation", "3D Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["Human3.6M"], "metric": ["Average MPJPE (mm)"], "title": "Sparseness Meets Deepness: 3D Human Pose Estimation from Monocular Video"} {"abstract": "The problem of finding the missing values of a matrix given a few of its\nentries, called matrix completion, has gathered a lot of attention in the\nrecent years. Although the problem under the standard low rank assumption is\nNP-hard, Cand\\`es and Recht showed that it can be exactly relaxed if the number\nof observed entries is sufficiently large. In this work, we introduce a novel\nmatrix completion model that makes use of proximity information about rows and\ncolumns by assuming they form communities. This assumption makes sense in\nseveral real-world problems like in recommender systems, where there are\ncommunities of people sharing preferences, while products form clusters that\nreceive similar ratings. Our main goal is thus to find a low-rank solution that\nis structured by the proximities of rows and columns encoded by graphs. We\nborrow ideas from manifold learning to constrain our solution to be smooth on\nthese graphs, in order to implicitly force row and column proximities. Our\nmatrix recovery model is formulated as a convex non-smooth optimization\nproblem, for which a well-posed iterative scheme is provided. We study and\nevaluate the proposed matrix completion on synthetic and real data, showing\nthat the proposed structured low-rank recovery model outperforms the standard\nmatrix completion model in many situations.", "field": [], "task": ["Matrix Completion", "Recommendation Systems"], "method": [], "dataset": ["MovieLens 100K"], "metric": ["RMSE (u1 Splits)"], "title": "Matrix Completion on Graphs"} {"abstract": "In this paper, we present a new open source toolkit for speech recognition, named CAT (CTC-CRF based ASR Toolkit). CAT inherits the data-efficiency of the hybrid approach and the simplicity of the E2E approach, providing a full-fledged implementation of CTC-CRFs and complete training and testing scripts for a number of English and Chinese benchmarks. Experiments show CAT obtains state-of-the-art results, which are comparable to the fine-tuned hybrid models in Kaldi but with a much simpler training pipeline. Compared to existing non-modularized E2E models, CAT performs better on limited-scale datasets, demonstrating its data efficiency. Furthermore, we propose a new method called contextualized soft forgetting, which enables CAT to do streaming ASR without accuracy degradation. We hope CAT, especially the CTC-CRF based framework and software, will be of broad interest to the community, and can be further explored and improved.", "field": [], "task": ["Speech Recognition"], "method": [], "dataset": ["Hub5'00 FISHER-SWBD", "AISHELL-1", "Hub5'00 SwitchBoard", "WSJ eval92", "WSJ eval93"], "metric": ["CallHome", "Hub5'00", "SwitchBoard", "Word Error Rate (WER)"], "title": "CAT: A CTC-CRF based ASR Toolkit Bridging the Hybrid and the End-to-end Approaches towards Data Efficiency and Low Latency"} {"abstract": "While recent years have witnessed astonishing improvements in visual tracking\nrobustness, the advancements in tracking accuracy have been limited. 
As the\nfocus has been directed towards the development of powerful classifiers, the\nproblem of accurate target state estimation has been largely overlooked. In\nfact, most trackers resort to a simple multi-scale search in order to estimate\nthe target bounding box. We argue that this approach is fundamentally limited\nsince target estimation is a complex task, requiring high-level knowledge about\nthe object.\n We address this problem by proposing a novel tracking architecture,\nconsisting of dedicated target estimation and classification components. High\nlevel knowledge is incorporated into the target estimation through extensive\noffline learning. Our target estimation component is trained to predict the\noverlap between the target object and an estimated bounding box. By carefully\nintegrating target-specific information, our approach achieves previously\nunseen bounding box accuracy. We further introduce a classification component\nthat is trained online to guarantee high discriminative power in the presence\nof distractors. Our final tracking framework sets a new state-of-the-art on\nfive challenging benchmarks. On the new large-scale TrackingNet dataset, our\ntracker ATOM achieves a relative gain of 15% over the previous best approach,\nwhile running at over 30 FPS. Code and models are available at\nhttps://github.com/visionml/pytracking.", "field": [], "task": ["Visual Object Tracking", "Visual Tracking"], "method": [], "dataset": ["TrackingNet"], "metric": ["Normalized Precision", "Precision", "Accuracy"], "title": "ATOM: Accurate Tracking by Overlap Maximization"} {"abstract": "Human-Object Interaction (HOI) Detection is an important problem to understand how humans interact with objects. In this paper, we explore Interactiveness Knowledge which indicates whether human and object interact with each other or not. We found that interactiveness knowledge can be learned across HOI datasets, regardless of HOI category settings. Our core idea is to exploit an Interactiveness Network to learn the general interactiveness knowledge from multiple HOI datasets and perform Non-Interaction Suppression before HOI classification in inference. On account of the generalization of interactiveness, interactiveness network is a transferable knowledge learner and can be cooperated with any HOI detection models to achieve desirable results. We extensively evaluate the proposed method on HICO-DET and V-COCO datasets. Our framework outperforms state-of-the-art HOI detection results by a great margin, verifying its efficacy and flexibility. Code is available at https://github.com/DirtyHarryLYL/Transferable-Interactiveness-Network.", "field": [], "task": ["Human-Object Interaction Detection"], "method": [], "dataset": ["HICO-DET", "Ambiguious-HOI", "V-COCO"], "metric": ["mAP", "Time Per Frame(ms)", "Time Per Frame (ms)", "MAP"], "title": "Transferable Interactiveness Knowledge for Human-Object Interaction Detection"} {"abstract": "Deep convolutional networks (CNNs) have achieved great success in face\ncompletion to generate plausible facial structures. These methods, however, are\nlimited in maintaining global consistency among face components and recovering\nfine facial details. On the other hand, reflectional symmetry is a prominent\nproperty of face image and benefits face recognition and consistency modeling,\nyet remaining uninvestigated in deep face completion. 
In this work, we leverage\ntwo kinds of symmetry-enforcing subnets to form a symmetry-consistent CNN model\n(i.e., SymmFCNet) for effective face completion. For missing pixels on only one\nof the half-faces, an illumination-reweighted warping subnet is developed to\nguide the warping and illumination reweighting of the other half-face. As for\nmissing pixels on both of half-faces, we present a generative reconstruction\nsubnet together with a perceptual symmetry loss to enforce symmetry consistency\nof recovered structures. The SymmFCNet is constructed by stacking generative\nreconstruction subnet upon illumination-reweighted warping subnet, and can be\nend-to-end learned from training set of unaligned face images. Experiments show\nthat SymmFCNet can generate high quality results on images with synthetic and\nreal occlusion, and performs favorably against state-of-the-arts.", "field": [], "task": ["Face Recognition", "Facial Inpainting"], "method": [], "dataset": ["WebFace", "VggFace2"], "metric": ["PSNR"], "title": "Learning Symmetry Consistent Deep CNNs for Face Completion"} {"abstract": "In this paper, we propose a deep convolutional neural network for learning\nthe embeddings of images in order to capture the notion of visual similarity.\nWe present a deep siamese architecture that when trained on positive and\nnegative pairs of images learn an embedding that accurately approximates the\nranking of images in order of visual similarity notion. We also implement a\nnovel loss calculation method using an angular loss metrics based on the\nproblems requirement. The final embedding of the image is combined\nrepresentation of the lower and top-level embeddings. We used fractional\ndistance matrix to calculate the distance between the learned embeddings in\nn-dimensional space. In the end, we compare our architecture with other\nexisting deep architecture and go on to demonstrate the superiority of our\nsolution in terms of image retrieval by testing the architecture on four\ndatasets. We also show how our suggested network is better than the other\ntraditional deep CNNs used for capturing fine-grained image similarities by\nlearning an optimum embedding.", "field": [], "task": ["Fine-Grained Visual Recognition", "Image Retrieval", "Product Recommendation", "Recommendation Systems"], "method": [], "dataset": ["street2shop - topwear"], "metric": ["Accuracy"], "title": "Retrieving Similar E-Commerce Images Using Deep Learning"} {"abstract": "A key technical challenge in performing 6D object pose estimation from RGB-D\nimage is to fully leverage the two complementary data sources. Prior works\neither extract information from the RGB image and depth separately or use\ncostly post-processing steps, limiting their performances in highly cluttered\nscenes and real-time applications. In this work, we present DenseFusion, a\ngeneric framework for estimating 6D pose of a set of known objects from RGB-D\nimages. DenseFusion is a heterogeneous architecture that processes the two data\nsources individually and uses a novel dense fusion network to extract\npixel-wise dense feature embedding, from which the pose is estimated.\nFurthermore, we integrate an end-to-end iterative pose refinement procedure\nthat further improves the pose estimation while achieving near real-time\ninference. Our experiments show that our method outperforms state-of-the-art\napproaches in two datasets, YCB-Video and LineMOD. 
We also deploy our proposed\nmethod to a real robot to grasp and manipulate objects based on the estimated\npose.", "field": [], "task": ["6D Pose Estimation", "6D Pose Estimation using RGBD", "Pose Estimation"], "method": [], "dataset": ["LineMOD", "YCB-Video"], "metric": ["Mean ADD", "ADDS AUC", "Accuracy (ADD)"], "title": "DenseFusion: 6D Object Pose Estimation by Iterative Dense Fusion"} {"abstract": "Image segmentation is an important task in many medical applications. Methods\nbased on convolutional neural networks attain state-of-the-art accuracy;\nhowever, they typically rely on supervised training with large labeled\ndatasets. Labeling medical images requires significant expertise and time, and\ntypical hand-tuned approaches for data augmentation fail to capture the complex\nvariations in such images.\n We present an automated data augmentation method for synthesizing labeled\nmedical images. We demonstrate our method on the task of segmenting magnetic\nresonance imaging (MRI) brain scans. Our method requires only a single\nsegmented scan, and leverages other unlabeled scans in a semi-supervised\napproach. We learn a model of transformations from the images, and use the\nmodel along with the labeled example to synthesize additional labeled examples.\nEach transformation is comprised of a spatial deformation field and an\nintensity change, enabling the synthesis of complex effects such as variations\nin anatomy and image acquisition procedures. We show that training a supervised\nsegmenter with these new examples provides significant improvements over\nstate-of-the-art methods for one-shot biomedical image segmentation. Our code\nis available at https://github.com/xamyzhao/brainstorm.", "field": [], "task": ["Data Augmentation", "Medical Image Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["T1-weighted MRI"], "metric": ["Dice Score"], "title": "Data augmentation using learned transformations for one-shot medical image segmentation"} {"abstract": "Hyper-relational knowledge graphs (KGs) (e.g., Wikidata) enable associating additional key-value pairs along with the main triple to disambiguate, or restrict the validity of a fact. In this work, we propose a message passing based graph encoder - StarE capable of modeling such hyper-relational KGs. Unlike existing approaches, StarE can encode an arbitrary number of additional information (qualifiers) along with the main triple while keeping the semantic roles of qualifiers and triples intact. We also demonstrate that existing benchmarks for evaluating link prediction (LP) performance on hyper-relational KGs suffer from fundamental flaws and thus develop a new Wikidata-based dataset - WD50K. Our experiments demonstrate that StarE based LP model outperforms existing approaches across multiple benchmarks. We also confirm that leveraging qualifiers is vital for link prediction with gains up to 25 MRR points compared to triple-based representations.", "field": [], "task": ["Knowledge Graphs", "Link Prediction"], "method": [], "dataset": ["WD50K", "JF17K"], "metric": ["Hit@1", "Hit@5", "MRR", "Hit@10"], "title": "Message Passing for Hyper-Relational Knowledge Graphs"} {"abstract": "Human action recognition remains as a challenging task partially due to the presence of large variations in the execution of action. To address this issue, we propose a probabilistic model called Hierarchical Dynamic Model (HDM). 
Leveraging a Bayesian framework, the model parameters are allowed to vary across different sequences of data, which increases the capacity of the model to adapt to intra-class variations in both the spatial and temporal extent of actions. Meanwhile, the generative learning process allows the model to preserve the distinctive dynamic pattern for each action class. Through Bayesian inference, we are able to quantify the uncertainty of the classification, providing insight during the decision process. Compared to state-of-the-art methods, our method not only achieves competitive recognition performance within individual datasets but also shows better generalization capability across different datasets. Experiments conducted on data with missing values also show the robustness of the proposed method.\r", "field": [], "task": ["Action Recognition", "Bayesian Inference", "Multimodal Activity Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["MSR Action3D", "UPenn Action", "Gaming 3D (G3D)", "UTD-MHAD"], "metric": ["Accuracy (CS)", "Accuracy"], "title": "Bayesian Hierarchical Dynamic Model for Human Action Recognition"} {"abstract": "Over the past decade, knowledge graphs have become popular for capturing structured domain knowledge. Relational learning models enable the prediction of missing links inside knowledge graphs. More specifically, latent distance approaches model the relationships among entities via a distance between latent representations. Translating embedding models (e.g., TransE) are among the most popular latent distance approaches, which use one distance function to learn multiple relation patterns. However, they are mostly inefficient in capturing symmetric relations since the representation vector norm for all the symmetric relations becomes equal to zero. They also lose information when learning relations with reflexive patterns since they become symmetric and transitive. We propose the Multiple Distance Embedding model (MDE) that addresses these limitations, together with a framework to collaboratively combine variant latent distance-based terms. Our solution is based on two principles: 1) we use a limit-based loss instead of a margin ranking loss, and 2) by learning independent embedding vectors for each of the terms we can collectively train and predict using contradicting distance terms. We further demonstrate that MDE allows modeling relations with (anti)symmetry, inversion, and composition patterns. We propose MDE as a neural network model that allows us to map non-linear relations between the embedding vectors and the expected output of the score function. Our empirical results show that MDE performs competitively with state-of-the-art embedding models on several benchmark datasets.", "field": [], "task": ["Knowledge Graphs", "Link Prediction", "Relational Pattern Learning", "Relational Reasoning"], "method": [], "dataset": ["FB15k", "WN18RR", "WN18", "FB15k-237"], "metric": ["Hits@10", "MR", "MRR"], "title": "MDE: Multiple Distance Embeddings for Link Prediction in Knowledge Graphs"} {"abstract": "Aspect-based sentiment analysis produces a list of aspect terms and their corresponding sentiments for a natural language sentence. This task is usually done in a pipeline manner, with aspect term extraction performed first, followed by sentiment predictions toward the extracted aspect terms. 
While easier to develop, such an approach does not fully exploit joint information from the two subtasks and does not use all available sources of training information that might be helpful, such as document-level labeled sentiment corpus. In this paper, we propose an interactive multi-task learning network (IMN) which is able to jointly learn multiple related tasks simultaneously at both the token level as well as the document level. Unlike conventional multi-task learning methods that rely on learning common features for the different tasks, IMN introduces a message passing architecture where information is iteratively passed to different tasks through a shared set of latent variables. Experimental results demonstrate superior performance of the proposed method against multiple baselines on three benchmark datasets.", "field": [], "task": ["Aspect-Based Sentiment Analysis", "Multi-Task Learning", "Sentiment Analysis"], "method": [], "dataset": ["SemEval 2014 Task 4 Subtask 1+2", "SemEval 2014 Task 4 Laptop", "SemEval 2014 Task 4 Sub Task 2"], "metric": ["Laptop (Acc)", "Restaurant (Acc)", "F1", "Mean Acc (Restaurant + Laptop)"], "title": "An Interactive Multi-Task Learning Network for End-to-End Aspect-Based Sentiment Analysis"} {"abstract": "Conventional methods for object detection typically require a substantial amount of training data and preparing such high-quality training data is very labor-intensive. In this paper, we propose a novel few-shot object detection network that aims at detecting objects of unseen categories with only a few annotated examples. Central to our method are our Attention-RPN, Multi-Relation Detector and Contrastive Training strategy, which exploit the similarity between the few shot support set and query set to detect novel objects while suppressing false detection in the background. To train our network, we contribute a new dataset that contains 1000 categories of various objects with high-quality annotations. To the best of our knowledge, this is one of the first datasets specifically designed for few-shot object detection. Once our few-shot network is trained, it can detect objects of unseen categories without further training or fine-tuning. Our method is general and has a wide range of potential applications. We produce a new state-of-the-art performance on different datasets in the few-shot setting. The dataset link is https://github.com/fanq15/Few-Shot-Object-Detection-Dataset.", "field": [], "task": ["Few-Shot Object Detection", "Object Detection"], "method": [], "dataset": ["MS-COCO (10-shot)"], "metric": ["AP"], "title": "Few-Shot Object Detection with Attention-RPN and Multi-Relation Detector"} {"abstract": "Learning an effective similarity measure between image representations is key to the success of recent advances in visual search tasks (e.g. verification or zero-shot learning). Although the metric learning part is well addressed, this metric is usually computed over the average of the extracted deep features. This representation is then trained to be discriminative. However, these deep features tend to be scattered across the feature space. Consequently, the representations are not robust to outliers, object occlusions, background variations, etc. In this paper, we tackle this scattering problem with a distribution-aware regularization named HORDE. This regularizer enforces visually-close images to have deep features with the same distribution which are well localized in the feature space. 
We provide a theoretical analysis supporting this regularization effect. We also show the effectiveness of our approach by obtaining state-of-the-art results on 4 well-known datasets (Cub-200-2011, Cars-196, Stanford Online Products and Inshop Clothes Retrieval).", "field": [], "task": ["Image Retrieval", "Metric Learning"], "method": [], "dataset": [" CUB-200-2011", "CARS196"], "metric": ["R@1"], "title": "Metric Learning With HORDE: High-Order Regularizer for Deep Embeddings"} {"abstract": "Pre-trained language models such as BERT have proven to be highly effective for natural language processing (NLP) tasks. However, the high demand for computing resources in training such models hinders their application in practice. In order to alleviate this resource hunger in large-scale model training, we propose a Patient Knowledge Distillation approach to compress an original large model (teacher) into an equally-effective lightweight shallow network (student). Different from previous knowledge distillation methods, which only use the output from the last layer of the teacher network for distillation, our student model patiently learns from multiple intermediate layers of the teacher model for incremental knowledge extraction, following two strategies: ($i$) PKD-Last: learning from the last $k$ layers; and ($ii$) PKD-Skip: learning from every $k$ layers. These two patient distillation schemes enable the exploitation of rich information in the teacher's hidden layers, and encourage the student model to patiently learn from and imitate the teacher through a multi-layer distillation process. Empirically, this translates into improved results on multiple NLP tasks with significant gain in training efficiency, without sacrificing model accuracy.", "field": ["Regularization", "Output Functions", "Learning Rate Schedules", "Stochastic Optimization", "Attention Modules", "Activation Functions", "Subword Segmentation", "Normalization", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections"], "task": ["Knowledge Distillation", "Model Compression"], "method": ["Weight Decay", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units"], "dataset": [], "metric": [], "title": "Patient Knowledge Distillation for BERT Model Compression"} {"abstract": "In this paper, we propose a novel controllable text-to-image generative adversarial network (ControlGAN), which can effectively synthesise high-quality images and also control parts of the image generation according to natural language descriptions. To achieve this, we introduce a word-level spatial and channel-wise attention-driven generator that can disentangle different visual attributes, and allow the model to focus on generating and manipulating subregions corresponding to the most relevant words. Also, a word-level discriminator is proposed to provide fine-grained supervisory feedback by correlating words with image regions, facilitating training an effective generator which is able to manipulate specific visual attributes without affecting the generation of other content. Furthermore, perceptual loss is adopted to reduce the randomness involved in the image generation, and to encourage the generator to manipulate specific attributes required in the modified text. 
Extensive experiments on benchmark datasets demonstrate that our method outperforms existing state of the art, and is able to effectively manipulate synthetic images using natural language descriptions. Code is available at https://github.com/mrlibw/ControlGAN.", "field": [], "task": ["Image Generation", "Text-to-Image Generation"], "method": [], "dataset": ["COCO", "CUB"], "metric": ["Inception score"], "title": "Controllable Text-to-Image Generation"} {"abstract": "Despite the remarkable success of generative models in creating photorealistic images using deep neural networks, gaps could still exist between the real and generated images, especially in the frequency domain. In this study, we find that narrowing the frequency domain gap can ameliorate the image synthesis quality further. To this end, we propose the focal frequency loss, a novel objective function that brings optimization of generative models into the frequency domain. The proposed loss allows the model to dynamically focus on the frequency components that are hard to synthesize by down-weighting the easy frequencies. This objective function is complementary to existing spatial losses, offering great impedance against the loss of important frequency information due to the inherent crux of neural networks. We demonstrate the versatility and effectiveness of focal frequency loss to improve various baselines in both perceptual quality and quantitative performance.", "field": [], "task": ["Image Generation", "Image-to-Image Translation"], "method": [], "dataset": ["Cityscapes Labels-to-Photo"], "metric": ["FID", "Per-pixel Accuracy", "mIoU"], "title": "Focal Frequency Loss for Generative Models"} {"abstract": "Interpretability is an emerging area of research in trustworthy machine learning. Safe deployment of machine learning system mandates that the prediction and its explanation be reliable and robust. Recently, it has been shown that the explanations could be manipulated easily by adding visually imperceptible perturbations to the input while keeping the model's prediction intact. In this work, we study the problem of attributional robustness (i.e. models having robust explanations) by showing an upper bound for attributional vulnerability in terms of spatial correlation between the input image and its explanation map. We propose a training methodology that learns robust features by minimizing this upper bound using soft-margin triplet loss. Our methodology of robust attribution training (\\textit{ART}) achieves the new state-of-the-art attributional robustness measure by a margin of $\\approx$ 6-18 $\\%$ on several standard datasets, ie. SVHN, CIFAR-10 and GTSRB. We further show the utility of the proposed robust training technique (\\textit{ART}) in the downstream task of weakly supervised object localization by achieving the new state-of-the-art performance on CUB-200 dataset.", "field": [], "task": ["Object Localization", "Weakly-Supervised Object Localization"], "method": [], "dataset": [" CUB-200-2011"], "metric": ["Top-1 Localization Accuracy", "Top-1 Error Rate"], "title": "Attributional Robustness Training using Input-Gradient Spatial Alignment"} {"abstract": "Convolutional neural networks (CNNs) with residual links (ResNets) and causal dilated convolutional units have been the network of choice for deep learning approaches to speech enhancement. While residual links improve gradient flow during training, feature diminution of shallow layer outputs can occur due to repetitive summations with deeper layer outputs. 
One strategy to improve feature re-usage is to fuse both ResNets and densely connected CNNs (DenseNets). DenseNets, however, over-allocate parameters for feature re-usage. Motivated by this, we propose the residual-dense lattice network (RDL-Net), which is a new CNN for speech enhancement that employs both residual and dense aggregations without over-allocating parameters for feature re-usage. This is managed through the topology of the RDL blocks, which limit the number of outputs used for dense aggregations. Our extensive experimental investigation shows that RDL-Nets are able to achieve a higher speech enhancement performance than CNNs that employ residual and/or dense aggregations. RDL-Nets also use substantially fewer parameters and have a lower computational requirement. Furthermore, we demonstrate that RDL-Nets outperform many state-of-the-art deep learning approaches to speech enhancement.", "field": [], "task": ["Speech Enhancement"], "method": [], "dataset": ["DEMAND"], "metric": ["CSIG", "COVL", "CBAK", "PESQ"], "title": "Deep Residual-Dense Lattice Network for Speech Enhancement"} {"abstract": "Few-shot classification is challenging because the data distribution of the training set can be widely different to the test set as their classes are disjoint. This distribution shift often results in poor generalization. Manifold smoothing has been shown to address the distribution shift problem by extending the decision boundaries and reducing the noise of the class representations. Moreover, manifold smoothness is a key factor for semi-supervised learning and transductive learning algorithms. In this work, we propose to use embedding propagation as an unsupervised non-parametric regularizer for manifold smoothing in few-shot classification. Embedding propagation leverages interpolations between the extracted features of a neural network based on a similarity graph. We empirically show that embedding propagation yields a smoother embedding manifold. We also show that applying embedding propagation to a transductive classifier achieves new state-of-the-art results in mini-Imagenet, tiered-Imagenet, Imagenet-FS, and CUB. Furthermore, we show that embedding propagation consistently improves the accuracy of the models in multiple semi-supervised learning scenarios by up to 16\\% points. The proposed embedding propagation operation can be easily integrated as a non-parametric layer into a neural network. We provide the training code and usage examples at https://github.com/ElementAI/embedding-propagation.", "field": [], "task": ["Few-Shot Image Classification"], "method": [], "dataset": ["Mini-Imagenet 5-way (1-shot)", "Tiered ImageNet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "Mini-ImageNet - 1-Shot Learning", "Tiered ImageNet 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Embedding Propagation: Smoother Manifold for Few-Shot Classification"} {"abstract": "Cutting out an object and estimating its opacity mask, known as image matting, is a key task in many image editing applications. Deep learning approaches have made significant progress by adapting the encoder-decoder architecture of segmentation networks. However, most of the existing networks only predict the alpha matte and post-processing methods must then be used to recover the original foreground and background colours in the transparent regions. Recently, two methods have shown improved results by also estimating the foreground colours, but at a significant computational and memory cost. 
In this paper, we propose a low-cost modification to alpha matting networks to also predict the foreground and background colours. We study variations of the training regime and explore a wide range of existing and novel loss functions for the joint prediction. Our method achieves state-of-the-art performance on the Adobe Composition-1k dataset for alpha matte and composite colour quality. It is also the current best-performing method on the alphamatting.com online evaluation.", "field": [], "task": ["Image Matting"], "method": [], "dataset": ["Composition-1K"], "metric": ["MSE"], "title": "$F$, $B$, Alpha Matting"} {"abstract": "Performing sound event detection on real-world recordings often implies dealing with overlapping target sound events and non-target sounds, also referred to as interference or noise. Until now, these problems have mainly been tackled at the classifier level. We propose to use sound separation as a pre-processing step for sound event detection. In this paper we start from a sound separation model trained on the Free Universal Sound Separation dataset and the DCASE 2020 task 4 sound event detection baseline. We explore different methods to combine separated sound sources and the original mixture within the sound event detection. Furthermore, we investigate the impact of adapting the sound separation model to the sound event detection data on both the sound separation and the sound event detection.", "field": [], "task": ["Audio Source Separation", "Sound Event Detection"], "method": [], "dataset": ["DESED"], "metric": ["event-based F1 score"], "title": "Improving Sound Event Detection In Domestic Environments Using Sound Separation"} {"abstract": "Neural networks are known to be vulnerable to adversarial examples, inputs\nthat have been intentionally perturbed to remain visually similar to the source\ninput, but cause a misclassification. It was recently shown that given a\ndataset and classifier, there exist so-called universal adversarial\nperturbations, a single perturbation that causes a misclassification when\napplied to any input. In this work, we introduce universal adversarial\nnetworks, a generative network that is capable of fooling a target classifier\nwhen its generated output is added to a clean sample from a dataset. We show\nthat this technique improves on known universal adversarial attacks.", "field": [], "task": ["Graph Classification"], "method": [], "dataset": ["NCI1"], "metric": ["Accuracy"], "title": "Learning Universal Adversarial Perturbations with Generative Models"} {"abstract": "Existing multi-person pose estimators can be roughly divided into two-stage approaches (top-down and bottom-up approaches) and one-stage approaches. The two-stage methods either suffer from high computational redundancy due to additional person detectors or group keypoints heuristically after predicting all the instance-free keypoints. The recently proposed single-stage methods do not rely on the above two extra stages but have lower performance than the latest bottom-up approaches. In this work, a novel single-stage multi-person pose regression method, termed SMPR, is presented. It follows the paradigm of dense prediction and predicts instance-aware keypoints from every location. Besides feature aggregation, we propose better strategies to define positive pose hypotheses for training, which all play an important role in dense pose estimation. The network also learns the scores of estimated poses. 
The pose scoring strategy further improves the pose estimation performance by prioritizing superior poses during non-maximum suppression (NMS). We show that our method not only outperforms existing single-stage methods but is also competitive with the latest bottom-up methods, with 70.2 AP and 77.5 AP75 on the COCO test-dev pose benchmark. Code is available at https://github.com/cmdi-dlut/SMPR.", "field": [], "task": ["Multi-Person Pose Estimation", "Pose Estimation", "Regression"], "method": [], "dataset": ["COCO test-dev"], "metric": ["APM", "AP75", "AP", "APL", "AP50"], "title": "SMPR: Single-Stage Multi-Person Pose Regression"} {"abstract": "We propose a new algorithm, Mean Actor-Critic (MAC), for discrete-action\ncontinuous-state reinforcement learning. MAC is a policy gradient algorithm\nthat uses the agent's explicit representation of all action values to estimate\nthe gradient of the policy, rather than using only the actions that were\nactually executed. We prove that this approach reduces variance in the policy\ngradient estimate relative to traditional actor-critic methods. We show\nempirical results on two control domains and on six Atari games, where MAC is\ncompetitive with state-of-the-art policy search algorithms.", "field": [], "task": ["Atari Games"], "method": [], "dataset": ["Cart Pole (OpenAI Gym)", "Lunar Lander (OpenAI Gym)", "Atari 2600 Beam Rider", "Atari 2600 Seaquest", "Atari 2600 Breakout", "Atari 2600 Space Invaders", "Atari 2600 Pong", "Atari 2600 Q*Bert"], "metric": ["Score"], "title": "Mean Actor Critic"} {"abstract": "Text preprocessing is often the first step in the pipeline of a Natural\nLanguage Processing (NLP) system, with a potential impact on its final\nperformance. Despite its importance, text preprocessing has not received much\nattention in the deep learning literature. In this paper we investigate the\nimpact of simple text preprocessing decisions (particularly tokenizing,\nlemmatizing, lowercasing and multiword grouping) on the performance of a\nstandard neural text classifier. We perform an extensive evaluation on standard\nbenchmarks from text categorization and sentiment analysis. While our\nexperiments show that a simple tokenization of input text is generally\nadequate, they also highlight significant degrees of variability across\npreprocessing techniques. This reveals the importance of paying attention to\nthis usually-overlooked step in the pipeline, particularly when comparing\ndifferent models. Finally, our evaluation provides insights into the best\npreprocessing practices for training word embeddings.", "field": [], "task": ["Sentiment Analysis", "Text Categorization", "Text Classification", "Tokenization", "Word Embeddings"], "method": [], "dataset": ["IMDb", "SST-2 Binary classification", "Ohsumed"], "metric": ["Accuracy"], "title": "On the Role of Text Preprocessing in Neural Network Architectures: An Evaluation Study on Text Categorization and Sentiment Analysis"} {"abstract": "Common language models typically predict the next word given the context. In this work, we propose a method that improves language modeling by learning to align the given context and the following phrase. The model does not require any linguistic annotation of phrase segmentation. Instead, we define syntactic heights and phrase segmentation rules, enabling the model to automatically induce phrases, recognize their task-specific heads, and generate phrase embeddings in an unsupervised learning manner. 
Our method can easily be applied to language models with different network architectures since an independent module is used for phrase induction and context-phrase alignment, and no change is required in the underlying language modeling network. Experiments have shown that our model outperformed several strong baseline models on different data sets. We achieved a new state-of-the-art performance of 17.4 perplexity on the Wikitext-103 dataset. Additionally, visualizing the outputs of the phrase induction module showed that our model is able to learn approximate phrase-level structural knowledge without any annotation.", "field": [], "task": ["Language Modelling"], "method": [], "dataset": ["WikiText-103"], "metric": ["Number of params", "Test perplexity"], "title": "Improving Neural Language Models by Segmenting, Attending, and Predicting the Future"} {"abstract": "Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), is one of the most important problems in natural language processing. It requires inferring the logical relationship between two given sentences. While current approaches mostly focus on the interaction architectures of the sentences, in this paper, we propose to transfer knowledge from some important discourse markers to augment the quality of the NLI model. We observe that people usually use some discourse markers such as \"so\" or \"but\" to represent the logical relationship between two sentences. These words potentially have deep connections with the meanings of the sentences, and thus can be utilized to help improve their representations. Moreover, we use reinforcement learning to optimize a new objective function with a reward defined by the property of the NLI datasets to make full use of the label information. Experiments show that our method achieves the state-of-the-art performance on several large-scale datasets.", "field": [], "task": ["Natural Language Inference"], "method": [], "dataset": ["SNLI"], "metric": ["Parameters", "% Train Accuracy", "% Test Accuracy"], "title": "Discourse Marker Augmented Network with Reinforcement Learning for Natural Language Inference"} {"abstract": "Foundational verification allows programmers to build software which has been empirically shown to have high levels of assurance in a variety of important domains. However, the cost of producing foundationally verified software remains prohibitively high for most projects, as it requires significant manual effort by highly trained experts. In this paper we present Proverbot9001, a proof search system that uses machine learning techniques to produce proofs of software correctness in interactive theorem provers. We demonstrate Proverbot9001 on the proof obligations from a large practical proof project, the CompCert verified C compiler, and show that it can effectively automate what were previously manual proofs, automatically solving 15.77% of proofs in our test dataset. This corresponds to a more than 3X improvement over the prior state-of-the-art machine learning technique for generating proofs in Coq.", "field": [], "task": ["Automated Theorem Proving"], "method": [], "dataset": ["CompCert"], "metric": ["Percentage correct"], "title": "Generating Correctness Proofs with Neural Networks"} {"abstract": "Graph neural networks (GNNs) are widely used in many applications. However, their robustness against adversarial attacks has been criticized. 
Prior studies show that applying unnoticeable modifications to graph topology or nodal features can significantly reduce the performance of GNNs. It is very challenging to design robust graph neural networks against poisoning attacks, and several efforts have been made. Existing work aims at reducing the negative impact of adversarial edges only with the poisoned graph, which is sub-optimal since it fails to discriminate adversarial edges from normal ones. On the other hand, clean graphs from similar domains as the target poisoned graph are usually available in the real world. By perturbing these clean graphs, we create supervised knowledge to train the ability to detect adversarial edges so that the robustness of GNNs is elevated. However, such potential for clean graphs is neglected by existing work. To this end, we investigate a novel problem of improving the robustness of GNNs against poisoning attacks by exploring clean graphs. Specifically, we propose PA-GNN, which relies on a penalized aggregation mechanism that directly restricts the negative impact of adversarial edges by assigning them lower attention coefficients. To optimize PA-GNN for a poisoned graph, we design a meta-optimization algorithm that trains PA-GNN to penalize perturbations using clean graphs and their adversarial counterparts, and transfers this ability to improve the robustness of PA-GNN on the poisoned graph. Experimental results on four real-world datasets demonstrate the robustness of PA-GNN against poisoning attacks on graphs. Code and data are available here: https://github.com/tangxianfeng/PA-GNN.", "field": [], "task": ["Node Classification", "Transfer Learning"], "method": [], "dataset": ["Pubmed"], "metric": ["Accuracy"], "title": "Transferring Robustness for Graph Neural Network Against Poisoning Attacks"} {"abstract": "Temporal knowledge bases associate relational (s,r,o) triples with a set of times (or a single time instant) when the relation is valid. While time-agnostic KB completion (KBC) has witnessed significant research, temporal KB completion (TKBC) is in its early days. In this paper, we consider predicting missing entities (link prediction) and missing time intervals (time prediction) as joint TKBC tasks where entities, relations, and time are all embedded in a uniform, compatible space. We present TIMEPLEX, a novel time-aware KBC method that also automatically exploits the recurrent nature of some relations and temporal interactions between pairs of relations. TIMEPLEX achieves state-of-the-art performance on both prediction tasks. We also find that existing TKBC models heavily overestimate link prediction performance due to imperfect evaluation mechanisms. In response, we propose improved TKBC evaluation protocols for both link and time prediction tasks, dealing with subtle issues that arise from the partial overlap of time intervals in gold instances and system predictions.", "field": [], "task": ["Knowledge Base Completion", "Knowledge Graph Completion", "Knowledge Graphs", "Link Prediction", "Temporal Information Extraction", "Temporal Knowledge Graph Completion", "Time-interval Prediction"], "method": [], "dataset": ["ICEWS05-15", "ICEWS14", "Wikidata12k", "Yago11k"], "metric": ["MRR"], "title": "Temporal Knowledge Base Completion: New Algorithms and Evaluation Protocols"} {"abstract": "Object recognition requires a generalization capability to avoid overfitting, especially when the samples are extremely few. 
Generalization from limited samples, usually studied under the umbrella of meta-learning, equips learning techniques with the ability to adapt quickly in dynamical environments and proves to be an essential aspect of life long learning. In this paper, we provide a framework for few-shot learning by introducing dynamic classifiers that are constructed from few samples. A subspace method is exploited as the central block of a dynamic classifier. We will empirically show that such modelling leads to robustness against perturbations (e.g., outliers) and yields competitive results on the task of supervised and semi-supervised few-shot classification. We also develop a discriminative form which can boost the accuracy even further. Our code is available at https://github.com/chrysts/dsn_fewshot\r", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Meta-Learning", "Object Recognition"], "method": [], "dataset": ["Mini-Imagenet 5-way (1-shot)", "Tiered ImageNet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "CIFAR-FS 5-way (1-shot)", "Tiered ImageNet 5-way (5-shot)", "CIFAR-FS 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Adaptive Subspaces for Few-Shot Learning"} {"abstract": "In this paper, we present CorefQA, an accurate and extensible approach for the coreference resolution task. We formulate the problem as a span prediction task, like in question answering: A query is generated for each candidate mention using its surrounding context, and a span prediction module is employed to extract the text spans of the coreferences within the document using the generated query. This formulation comes with the following key advantages: (1) The span prediction strategy provides the flexibility of retrieving mentions left out at the mention proposal stage; (2) In the question answering framework, encoding the mention and its context explicitly in a query makes it possible to have a deep and thorough examination of cues embedded in the context of coreferent mentions; and (3) A plethora of existing question answering datasets can be used for data augmentation to improve the model{'}s generalization capability. Experiments demonstrate significant performance boost over previous models, with 83.1 (+3.5) F1 score on the CoNLL-2012 benchmark and 87.5 (+2.5) F1 score on the GAP benchmark.", "field": [], "task": ["Coreference Resolution", "Data Augmentation", "Question Answering"], "method": [], "dataset": ["CoNLL 2012"], "metric": ["Avg F1"], "title": "CorefQA: Coreference Resolution as Query-based Span Prediction"} {"abstract": "Real-world image noise removal is a long-standing yet very challenging task in computer vision. The success of deep neural network in denoising stimulates the research of noise generation, aiming at synthesizing more clean-noisy image pairs to facilitate the training of deep denoisers. In this work, we propose a novel unified framework to simultaneously deal with the noise removal and noise generation tasks. Instead of only inferring the posteriori distribution of the latent clean image conditioned on the observed noisy image in traditional MAP framework, our proposed method learns the joint distribution of the clean-noisy image pairs. Specifically, we approximate the joint distribution with two different factorized forms, which can be formulated as a denoiser mapping the noisy image to the clean one and a generator mapping the clean image to the noisy one. 
The learned joint distribution implicitly contains all the information between the noisy and clean images, avoiding the necessity of manually designing image priors and noise assumptions as in traditional methods. Besides, the performance of our denoiser can be further improved by augmenting the original training dataset with the learned generator. Moreover, we propose two metrics to assess the quality of the generated noisy image, which, to the best of our knowledge, are the first such metrics proposed along this research line. Extensive experiments have been conducted to demonstrate the superiority of our method over state-of-the-art methods in both the real noise removal and generation tasks. The training and testing code is available at https://github.com/zsyOAOA/DANet.", "field": [], "task": ["Denoising", "Image Denoising"], "method": [], "dataset": ["SIDD", "DND"], "metric": ["SSIM (sRGB)", "PSNR (sRGB)"], "title": "Dual Adversarial Network: Toward Real-world Noise Removal and Noise Generation"} {"abstract": "Optimising a ranking-based metric, such as Average Precision (AP), is notoriously challenging due to the fact that it is non-differentiable, and hence cannot be optimised directly using gradient-descent methods. To this end, we introduce an objective that optimises instead a smoothed approximation of AP, coined Smooth-AP. Smooth-AP is a plug-and-play objective function that allows for end-to-end training of deep networks with a simple and elegant implementation. We also present an analysis for why directly optimising the ranking-based metric of AP offers benefits over other deep metric learning losses. We apply Smooth-AP to standard retrieval benchmarks: Stanford Online Products and VehicleID, and also evaluate on larger-scale datasets: iNaturalist for fine-grained category retrieval, and VGGFace2 and IJB-C for face retrieval. In all cases, we improve the performance over the state-of-the-art, especially for larger-scale datasets, thus demonstrating the effectiveness and scalability of Smooth-AP to real-world scenarios.", "field": [], "task": ["Image Instance Retrieval", "Image Retrieval", "Metric Learning", "Vehicle Re-Identification"], "method": [], "dataset": ["iNaturalist", "SOP", "VehicleID Large", "VehicleID Small", "VehicleID Medium"], "metric": ["R@16", "R@5", "Rank-1", "R@1", "R@32", "Rank-5"], "title": "Smooth-AP: Smoothing the Path Towards Large-Scale Image Retrieval"} {"abstract": "We propose a neural rendering-based system that creates head avatars from a single photograph. Our approach models a person's appearance by decomposing it into two layers. The first layer is a pose-dependent coarse image that is synthesized by a small neural network. The second layer is defined by a pose-independent texture image that contains high-frequency details. The texture image is generated offline, warped and added to the coarse image to ensure a high effective resolution of synthesized head views. We compare our system to analogous state-of-the-art systems in terms of visual quality and speed. The experiments show significant inference speedup over previous neural head avatar models for a given visual quality. 
We also report on a real-time smartphone-based implementation of our system.", "field": [], "task": ["Neural Rendering", "Talking Head Generation"], "method": [], "dataset": ["VoxCeleb2 - 1-shot learning"], "metric": ["Normalized Pose Error", "inference time (ms)", "CSIM", "LPIPS", "SSIM"], "title": "Fast Bi-layer Neural Synthesis of One-Shot Realistic Head Avatars"} {"abstract": "Aspect Sentiment Triplet Extraction (ASTE) is the task of extracting the triplets of target entities, their associated sentiment, and opinion spans explaining the reason for the sentiment. Existing research efforts mostly solve this problem using pipeline approaches, which break the triplet extraction process into several stages. Our observation is that the three elements within a triplet are highly related to each other, and this motivates us to build a joint model to extract such triplets using a sequence tagging approach. However, how to effectively design a tagging approach to extract the triplets that can capture the rich interactions among the elements is a challenging research question. In this work, we propose the first end-to-end model with a novel position-aware tagging scheme that is capable of jointly extracting the triplets. Our experimental results on several existing datasets show that jointly capturing elements in the triplet using our approach leads to improved performance over the existing approaches. We also conducted extensive experiments to investigate the model effectiveness and robustness.", "field": [], "task": ["Aspect Sentiment Triplet Extraction"], "method": [], "dataset": ["SemEval"], "metric": ["F1"], "title": "Position-Aware Tagging for Aspect Sentiment Triplet Extraction"} {"abstract": "Conventional unsupervised multi-source domain adaptation (UMDA) methods assume all source domains can be accessed directly. This neglects the privacy-preserving policy, that is, all the data and computations must be kept decentralized. There exists three problems in this scenario: (1) Minimizing the domain distance requires the pairwise calculation of the data from source and target domains, which is not accessible. (2) The communication cost and privacy security limit the application of UMDA methods (e.g., the domain adversarial training). (3) Since users have no authority to check the data quality, the irrelevant or malicious source domains are more likely to appear, which causes negative transfer. In this study, we propose a privacy-preserving UMDA paradigm named Knowledge Distillation based Decentralized Domain Adaptation (KD3A), which performs domain adaptation through the knowledge distillation on models from different source domains. KD3A solves the above problems with three components: (1) A multi-source knowledge distillation method named Knowledge Vote to learn high-quality domain consensus knowledge. (2) A dynamic weighting strategy named Consensus Focus to identify both the malicious and irrelevant domains. (3) A decentralized optimization strategy for domain distance named BatchNorm MMD. The extensive experiments on DomainNet demonstrate that KD3A is robust to the negative transfer and brings a 100x reduction of communication cost compared with other decentralized UMDA methods. 
Moreover, our KD3A significantly outperforms state-of-the-art UMDA approaches.", "field": [], "task": ["Domain Adaptation", "Knowledge Distillation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["DomainNet"], "metric": ["Average Accuracy"], "title": "KD3A: Unsupervised Multi-Source Decentralized Domain Adaptation via Knowledge Distillation"} {"abstract": "Emerging interest has been directed to recognizing previously unseen objects given very few training examples, known as few-shot object detection (FSOD). Recent research demonstrates that good feature embedding is the key to reaching favorable few-shot learning performance. We observe that object proposals with different Intersection-over-Union (IoU) scores are analogous to the intra-image augmentation used in contrastive approaches. And we exploit this analogy and incorporate supervised contrastive learning to achieve more robust object representations in FSOD. We present Few-Shot object detection via Contrastive proposals Encoding (FSCE), a simple yet effective approach to learning contrastive-aware object proposal encodings that facilitate the classification of detected objects. We notice that the degradation of average precision (AP) for rare objects mainly comes from misclassifying novel instances as confusable classes. And we ease the misclassification issues by promoting instance-level intra-class compactness and inter-class variance via our contrastive proposal encoding loss (CPE loss). Our design outperforms current state-of-the-art works in any shot and all data splits, with up to +8.8% on the standard PASCAL VOC benchmark and +2.7% on the challenging COCO benchmark. Code is available at: https://github.com/MegviiDetection/FSCE", "field": [], "task": ["Few-Shot Learning", "Few-Shot Object Detection", "Image Augmentation", "Object Detection"], "method": [], "dataset": ["MS-COCO (30-shot)", "MS-COCO (10-shot)"], "metric": ["AP"], "title": "FSCE: Few-Shot Object Detection via Contrastive Proposal Encoding"} {"abstract": "Commonsense reasoning is a long-standing challenge for deep learning. For example, it is difficult to use neural networks to tackle the Winograd Schema dataset (Levesque et al., 2011). In this paper, we present a simple method for commonsense reasoning with neural networks, using unsupervised learning. Key to our method is the use of language models, trained on a massive amount of unlabeled data, to score multiple choice questions posed by commonsense reasoning tests. On both Pronoun Disambiguation and Winograd Schema challenges, our models outperform previous state-of-the-art methods by a large margin, without using expensive annotated knowledge bases or hand-engineered features. We train an array of large RNN language models that operate at the word or character level on LM-1-Billion, CommonCrawl, SQuAD, Gutenberg Books, and a customized corpus for this task and show that diversity of training data plays an important role in test performance. Further analysis also shows that our system successfully discovers important features of the context that decide the correct answer, indicating a good grasp of commonsense knowledge.", "field": [], "task": ["Common Sense Reasoning"], "method": [], "dataset": ["Winograd Schema Challenge"], "metric": ["Score"], "title": "A Simple Method for Commonsense Reasoning"} {"abstract": "Many image-to-image translation problems are ambiguous, as a single input\nimage may correspond to multiple possible outputs. 
In this work, we aim to\nmodel a \\emph{distribution} of possible outputs in a conditional generative\nmodeling setting. The ambiguity of the mapping is distilled into a\nlow-dimensional latent vector, which can be randomly sampled at test time. A\ngenerator learns to map the given input, combined with this latent code, to the\noutput. We explicitly encourage the connection between output and the latent\ncode to be invertible. This helps prevent a many-to-one mapping from the latent\ncode to the output during training, also known as the problem of mode collapse,\nand produces more diverse results. We explore several variants of this approach\nby employing different training objectives, network architectures, and methods\nof injecting the latent code. Our proposed method encourages bijective\nconsistency between the latent encoding and output modes. We present a\nsystematic comparison of our method and other variants on both perceptual\nrealism and diversity.", "field": [], "task": ["Image-to-Image Translation"], "method": [], "dataset": ["Edge-to-Shoes", "Edge-to-Handbags"], "metric": ["Quality", "Diversity"], "title": "Toward Multimodal Image-to-Image Translation"} {"abstract": "We consider the problem of representation learning for graph data. Convolutional neural networks can naturally operate on images, but have significant challenges in dealing with graph data. Given that images are special cases of graphs with nodes lying on 2D lattices, graph embedding tasks have a natural correspondence with image pixel-wise prediction tasks such as segmentation. While encoder-decoder architectures like U-Nets have been successfully applied to many image pixel-wise prediction tasks, similar methods are lacking for graph data. This is due to the fact that pooling and up-sampling operations are not natural on graph data. To address these challenges, we propose novel graph pooling (gPool) and unpooling (gUnpool) operations in this work. The gPool layer adaptively selects some nodes to form a smaller graph based on their scalar projection values on a trainable projection vector. We further propose the gUnpool layer as the inverse operation of the gPool layer. The gUnpool layer restores the graph into its original structure using the position information of nodes selected in the corresponding gPool layer. Based on our proposed gPool and gUnpool layers, we develop an encoder-decoder model on graphs, known as the graph U-Nets. Our experimental results on node classification and graph classification tasks demonstrate that our methods achieve consistently better performance than previous models.", "field": [], "task": ["Graph Classification", "Graph Embedding", "Node Classification", "Representation Learning"], "method": [], "dataset": ["COLLAB", "Cora", "PROTEINS", "D&D", "Citeseer", "Pubmed"], "metric": ["Accuracy"], "title": "Graph U-Nets"} {"abstract": "We introduce the first end-to-end coreference resolution model and show that\nit significantly outperforms all previous work without using a syntactic parser\nor hand-engineered mention detector. The key idea is to directly consider all\nspans in a document as potential mentions and learn distributions over possible\nantecedents for each. The model computes span embeddings that combine\ncontext-dependent boundary representations with a head-finding attention\nmechanism. It is trained to maximize the marginal likelihood of gold antecedent\nspans from coreference clusters and is factored to enable aggressive pruning of\npotential mentions. 
Experiments demonstrate state-of-the-art performance, with\na gain of 1.5 F1 on the OntoNotes benchmark and by 3.1 F1 using a 5-model\nensemble, despite the fact that this is the first approach to be successfully\ntrained with no external resources.", "field": [], "task": ["Coreference Resolution"], "method": [], "dataset": ["OntoNotes", "CoNLL 2012"], "metric": ["Avg F1", "F1"], "title": "End-to-end Neural Coreference Resolution"} {"abstract": "In recent years, we have seen tremendous progress in the field of object\ndetection. Most of the recent improvements have been achieved by targeting\ndeeper feedforward networks. However, many hard object categories such as\nbottle, remote, etc. require representation of fine details and not just\ncoarse, semantic representations. But most of these fine details are lost in\nthe early convolutional layers. What we need is a way to incorporate finer\ndetails from lower layers into the detection architecture. Skip connections\nhave been proposed to combine high-level and low-level features, but we argue\nthat selecting the right features from low-level requires top-down contextual\ninformation. Inspired by the human visual pathway, in this paper we propose\ntop-down modulations as a way to incorporate fine details into the detection\nframework. Our approach supplements the standard bottom-up, feedforward ConvNet\nwith a top-down modulation (TDM) network, connected using lateral connections.\nThese connections are responsible for the modulation of lower layer filters,\nand the top-down network handles the selection and integration of contextual\ninformation and low-level features. The proposed TDM architecture provides a\nsignificant boost on the COCO testdev benchmark, achieving 28.6 AP for VGG16,\n35.2 AP for ResNet101, and 37.3 for InceptionResNetv2 network, without any\nbells and whistles (e.g., multi-scale, iterative box refinement, etc.).", "field": [], "task": ["Object Detection"], "method": [], "dataset": ["COCO test-dev"], "metric": ["box AP"], "title": "Beyond Skip Connections: Top-Down Modulation for Object Detection"} {"abstract": "We introduce a new generative model where samples are produced via Langevin dynamics using gradients of the data distribution estimated with score matching. Because gradients can be ill-defined and hard to estimate when the data resides on low-dimensional manifolds, we perturb the data with different levels of Gaussian noise, and jointly estimate the corresponding scores, i.e., the vector fields of gradients of the perturbed data distribution for all noise levels. For sampling, we propose an annealed Langevin dynamics where we use gradients corresponding to gradually decreasing noise levels as the sampling process gets closer to the data manifold. Our framework allows flexible model architectures, requires no sampling during training or the use of adversarial methods, and provides a learning objective that can be used for principled model comparisons. Our models produce samples comparable to GANs on MNIST, CelebA and CIFAR-10 datasets, achieving a new state-of-the-art inception score of 8.87 on CIFAR-10. 
Additionally, we demonstrate that our models learn effective representations via image inpainting experiments.", "field": [], "task": ["Image Generation", "Image Inpainting"], "method": [], "dataset": ["CIFAR-10"], "metric": ["Inception score", "FID"], "title": "Generative Modeling by Estimating Gradients of the Data Distribution"} {"abstract": "We seek to improve deep neural networks by generalizing the pooling\noperations that play a central role in current architectures. We pursue a\ncareful exploration of approaches to allow pooling to learn and to adapt to\ncomplex and variable patterns. The two primary directions lie in (1) learning a\npooling function via (two strategies of) combining of max and average pooling,\nand (2) learning a pooling function in the form of a tree-structured fusion of\npooling filters that are themselves learned. In our experiments every\ngeneralized pooling operation we explore improves performance when used in\nplace of average or max pooling. We experimentally demonstrate that the\nproposed pooling operations provide a boost in invariance properties relative\nto conventional pooling and set the state of the art on several widely adopted\nbenchmark datasets; they are also easy to implement, and can be applied within\nvarious deep neural network architectures. These benefits come with only a\nlight increase in computational overhead during training and a very modest\nincrease in the number of model parameters.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["SVHN", "MNIST", "CIFAR-100", "CIFAR-10"], "metric": ["Percentage error", "Percentage correct"], "title": "Generalizing Pooling Functions in Convolutional Neural Networks: Mixed, Gated, and Tree"} {"abstract": "Given a partial description like \"she opened the hood of the car,\" humans can\nreason about the situation and anticipate what might come next (\"then, she\nexamined the engine\"). In this paper, we introduce the task of grounded\ncommonsense inference, unifying natural language inference and commonsense\nreasoning.\n We present SWAG, a new dataset with 113k multiple choice questions about a\nrich spectrum of grounded situations. To address the recurring challenges of\nthe annotation artifacts and human biases found in many existing datasets, we\npropose Adversarial Filtering (AF), a novel procedure that constructs a\nde-biased dataset by iteratively training an ensemble of stylistic classifiers,\nand using them to filter the data. To account for the aggressive adversarial\nfiltering, we use state-of-the-art language models to massively oversample a\ndiverse set of potential counterfactuals. Empirical results demonstrate that\nwhile humans can solve the resulting inference problems with high accuracy\n(88%), various competitive models struggle on our task. We provide\ncomprehensive analysis that indicates significant opportunities for future\nresearch.", "field": [], "task": ["Common Sense Reasoning", "Natural Language Inference", "Question Answering"], "method": [], "dataset": ["SWAG"], "metric": ["Dev", "Test"], "title": "SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference"} {"abstract": "We propose a self-supervised framework for learning facial attributes by\nsimply watching videos of a human face speaking, laughing, and moving over\ntime. To perform this task, we introduce a network, Facial Attributes-Net\n(FAb-Net), that is trained to embed multiple frames from the same video\nface-track into a common low-dimensional space. 
With this approach, we make\nthree contributions: first, we show that the network can leverage information\nfrom multiple source frames by predicting confidence/attention masks for each\nframe; second, we demonstrate that using a curriculum learning regime improves\nthe learned embedding; finally, we demonstrate that the network learns a\nmeaningful face embedding that encodes information about head pose, facial\nlandmarks and facial expression, i.e. facial attributes, without having been\nsupervised with any labelled data. We are comparable or superior to\nstate-of-the-art self-supervised methods on these tasks and approach the\nperformance of supervised methods.", "field": [], "task": ["Curriculum Learning", "Self-Supervised Learning", "Unsupervised Facial Landmark Detection"], "method": [], "dataset": ["MAFL", "300W"], "metric": ["NME"], "title": "Self-supervised learning of a facial attribute embedding from video"} {"abstract": "Online multi-object tracking is a fundamental problem in time-critical video\nanalysis applications. A major challenge in the popular tracking-by-detection\nframework is how to associate unreliable detection results with existing\ntracks. In this paper, we propose to handle unreliable detection by collecting\ncandidates from outputs of both detection and tracking. The intuition behind\ngenerating redundant candidates is that detection and tracks can complement\neach other in different scenarios. Detection results of high confidence prevent\ntracking drifts in the long term, and predictions of tracks can handle noisy\ndetection caused by occlusion. In order to apply optimal selection from a\nconsiderable amount of candidates in real-time, we present a novel scoring\nfunction based on a fully convolutional neural network, that shares most\ncomputations on the entire image. Moreover, we adopt a deeply learned\nappearance representation, which is trained on large-scale person\nre-identification datasets, to improve the identification ability of our\ntracker. Extensive experiments show that our tracker achieves real-time and\nstate-of-the-art performance on a widely used people tracking benchmark.", "field": [], "task": ["Large-Scale Person Re-Identification", "Multi-Object Tracking", "Multiple People Tracking", "Object Tracking", "Online Multi-Object Tracking", "Person Re-Identification"], "method": [], "dataset": ["MOT16", "MOT17"], "metric": ["MOTA"], "title": "Real-time Multiple People Tracking with Deeply Learned Candidate Selection and Person Re-Identification"} {"abstract": "Graph classification has recently received a lot of attention from various\nfields of machine learning e.g. kernel methods, sequential modeling or graph\nembedding. All these approaches offer promising results with different\nrespective strengths and weaknesses. However, most of them rely on complex\nmathematics and require heavy computational power to achieve their best\nperformance. We propose a simple and fast algorithm based on the spectral\ndecomposition of graph Laplacian to perform graph classification and get a\nfirst reference score for a dataset. We show that this method obtains\ncompetitive results compared to state-of-the-art algorithms.", "field": [], "task": ["Graph Classification", "Graph Embedding"], "method": [], "dataset": ["ENZYMES", "PROTEINS", "D&D", "NCI1", "MUTAG", "PTC"], "metric": ["Accuracy"], "title": "A Simple Baseline Algorithm for Graph Classification"} {"abstract": "Semantic scene understanding is important for various applications. 
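The "Simple Baseline Algorithm for Graph Classification" abstract above describes classifying graphs from the spectrum of the graph Laplacian. A hedged sketch of that idea follows; the zero-padding scheme, the choice of k, and the random-forest classifier are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np
import networkx as nx
from sklearn.ensemble import RandomForestClassifier

def spectral_features(graph, k=20):
    """Fixed-size descriptor: the k smallest Laplacian eigenvalues, zero-padded for small graphs."""
    lap = nx.laplacian_matrix(graph).toarray().astype(float)
    eigvals = np.sort(np.linalg.eigvalsh(lap))
    feat = np.zeros(k)
    m = min(k, eigvals.shape[0])
    feat[:m] = eigvals[:m]
    return feat

def fit_baseline(graphs, labels, k=20):
    """A plain classifier on top of the spectral descriptors gives a first reference score."""
    X = np.stack([spectral_features(g, k) for g in graphs])
    return RandomForestClassifier(n_estimators=200).fit(X, labels)
```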
In particular, self-driving cars need a fine-grained understanding of the surfaces and objects in their vicinity. Light detection and ranging (LiDAR) provides precise geometric information about the environment and is thus a part of the sensor suites of almost all self-driving cars. Despite the relevance of semantic scene understanding for this application, there is a lack of a large dataset for this task which is based on an automotive LiDAR. In this paper, we introduce a large dataset to propel research on laser-based semantic segmentation. We annotated all sequences of the KITTI Vision Odometry Benchmark and provide dense point-wise annotations for the complete $360^{o}$ field-of-view of the employed automotive LiDAR. We propose three benchmark tasks based on this dataset: (i) semantic segmentation of point clouds using a single scan, (ii) semantic segmentation using multiple past scans, and (iii) semantic scene completion, which requires to anticipate the semantic scene in the future. We provide baseline experiments and show that there is a need for more sophisticated models to efficiently tackle these tasks. Our dataset opens the door for the development of more advanced methods, but also provides plentiful data to investigate new research directions.", "field": [], "task": ["3D Semantic Segmentation", "Scene Understanding", "Self-Driving Cars", "Semantic Segmentation"], "method": [], "dataset": ["SemanticKITTI"], "metric": ["mIoU"], "title": "SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences"} {"abstract": "We present an end-to-end head-pose estimation network designed to predict Euler angles through the full range head yaws from a single RGB image. Existing methods perform well for frontal views but few target head pose from all viewpoints. This has applications in autonomous driving and retail. Our network builds on multi-loss approaches with changes to loss functions and training strategies adapted to wide range estimation. Additionally, we extract ground truth labelings of anterior views from a current panoptic dataset for the first time. The resulting Wide Headpose Estimation Network (WHENet) is the first fine-grained modern method applicable to the full-range of head yaws (hence wide) yet also meets or beats state-of-the-art methods for frontal head pose estimation. Our network is compact and efficient for mobile devices and applications.", "field": [], "task": ["Autonomous Driving", "Head Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["AFLW2000", "BIWI"], "metric": ["MAE", "MAE (trained with other data)"], "title": "WHENet: Real-time Fine-Grained Estimation for Wide Range Head Pose"} {"abstract": "Graph kernels are powerful tools to bridge the gap between machine learning and data encoded as graphs. Most graph kernels are based on the decomposition of graphs into a set of patterns. The similarity between two graphs is then deduced from the similarity between corresponding patterns. Kernels based on linear patterns constitute a good trade-off between accuracy performance and computational complexity. In this work, we propose a thorough investigation and comparison of graph kernels based on different linear patterns, namely walks and paths. First, all these kernels are explored in detail, including their mathematical foundations, structures of patterns and computational complexity. 
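The WHENet abstract above mentions multi-loss heads adapted to full-range yaw but does not spell out the loss. One common way such heads are implemented is a classification over angle bins decoded by expectation, paired with a wrap-around angular error; the sketch below shows that generic recipe and should not be read as the exact WHENet formulation.

```python
import torch
import torch.nn.functional as F

def decode_angle(logits, bin_centers_deg):
    """Continuous angle as the expectation of a distribution over discrete angle bins."""
    probs = F.softmax(logits, dim=-1)
    return (probs * bin_centers_deg).sum(dim=-1)

def wrapped_mae(pred_deg, gt_deg):
    """Mean absolute angular error that respects the 360-degree wrap-around of full-range yaw."""
    diff = torch.abs(pred_deg - gt_deg) % 360.0
    return torch.minimum(diff, 360.0 - diff).mean()
```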
Then, experiments are performed on various benchmark datasets exhibiting different types of graphs, including labeled and unlabeled graphs, graphs with different numbers of vertices, graphs with different average vertex degrees, cyclic and acyclic graphs. Finally, for regression and classification tasks, performance and computational complexity of kernels are compared and analyzed, and suggestions are proposed to choose kernels according to the types of graph datasets. This work leads to a clear comparison of strengths and weaknesses of these kernels. An open-source Python library containing an implementation of all discussed kernels is publicly available on GitHub to the community, thus allowing to promote and facilitate the use of graph kernels in machine learning problems.", "field": [], "task": ["Graph Classification", "Regression"], "method": [], "dataset": ["MUTAG"], "metric": ["Accuracy"], "title": "Graph Kernels Based on Linear Patterns: Theoretical and Experimental Comparisons"} {"abstract": "Neural Architecture Search (NAS) has emerged as a promising technique for automatic neural network design. However, existing MCTS based NAS approaches often utilize manually designed action space, which is not directly related to the performance metric to be optimized (e.g., accuracy), leading to sample-inefficient explorations of architectures. To improve the sample efficiency, this paper proposes Latent Action Neural Architecture Search (LaNAS), which learns actions to recursively partition the search space into good or bad regions that contain networks with similar performance metrics. During the search phase, as different action sequences lead to regions with different performance, the search efficiency can be significantly improved by biasing towards the good regions. On three NAS tasks, empirical results demonstrate that LaNAS is at least an order more sample efficient than baseline methods including evolutionary algorithms, Bayesian optimizations and random search. When applied in practice, both one-shot and regular LaNAS consistently outperforms existing results. Particularly, LaNAS achieves 99.0\\% accuracy on CIFAR-10 and 80.8\\% top1 accuracy at 600 MFLOPS on ImageNet in only 800 samples, significantly outperforming AmoebaNet with $33\\times$ fewer samples.", "field": [], "task": ["Image Classification", "Neural Architecture Search"], "method": [], "dataset": ["CIFAR-10"], "metric": ["PARAMS", "Percentage correct"], "title": "Sample-Efficient Neural Architecture Search by Learning Action Space for Monte Carlo Tree Search"} {"abstract": "We present a deep learning-based multi-task approach for head pose estimation in images. We contribute with a network architecture and training strategy that harness the strong dependencies among face pose, alignment and visibility, to produce a top performing model for all three tasks. Our architecture is an encoder-decoder CNN with residual blocks and lateral skip connections. We show that the combination of head pose estimation and landmark-based face alignment significantly improve the performance of the former task. Further, the location of the pose task at the bottleneck layer, at the end of the encoder, and that of tasks depending on spatial information, such as visibility and alignment, in the final decoder layer, also contribute to increase the final performance. In the experiments conducted the proposed model outperforms the state-of-the-art in the face pose and visibility tasks. 
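As a small illustration of the walk-based linear-pattern kernels surveyed in the graph-kernel abstract above, here is the classic geometric random-walk kernel on unlabeled graphs, computed through the direct product graph. The decay constant and the dense matrix inverse are simplifications; labeled-graph variants and the library discussed in the paper handle far more cases.

```python
import numpy as np
import networkx as nx

def geometric_walk_kernel(g1, g2, lam=0.01):
    """Geometric random-walk kernel: counts common walks of all lengths in two graphs.

    Walks are counted through the direct (tensor) product graph;
    k(G1, G2) = sum of the entries of (I - lam * A_x)^(-1). The series only
    converges when lam is below the reciprocal of A_x's largest eigenvalue.
    """
    gx = nx.tensor_product(g1, g2)
    ax = nx.adjacency_matrix(gx).toarray().astype(float)
    n = ax.shape[0]
    return float(np.linalg.inv(np.eye(n) - lam * ax).sum())
```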
By including a final landmark regression step it also produces face alignment results on par with the state-of-the-art.", "field": [], "task": ["Face Alignment", "Head Pose Estimation", "Pose Estimation", "Regression"], "method": [], "dataset": ["AFLW2000", "AFLW2000-3D", "BIWI", "COFW"], "metric": ["MAE (trained with other data)", "MAE", "Mean NME ", "Recall at 80% precision (Landmarks Visibility)", "Mean Error Rate"], "title": "Multi-task head pose estimation in-the-wild"} {"abstract": "Soft Actor-Critic is a state-of-the-art reinforcement learning algorithm for continuous action settings that is not applicable to discrete action settings. Many important settings involve discrete actions, however, and so here we derive an alternative version of the Soft Actor-Critic algorithm that is applicable to discrete action settings. We then show that, even without any hyperparameter tuning, it is competitive with the tuned model-free state-of-the-art on a selection of games from the Atari suite.", "field": [], "task": ["Atari Games"], "method": [], "dataset": ["Atari 2600 Amidar", "Atari 2600 Beam Rider", "Atari 2600 Enduro", "Atari 2600 Alien", "Atari 2600 Space Invaders", "Atari 2600 Assault", "Atari 2600 Asterix", "Atari 2600 Breakout", "Atari 2600 Crazy Climber", "Atari 2600 Freeway", "Atari 2600 James Bond", "Atari 2600 Pong", "Atari 2600 Kangaroo", "Atari 2600 Ms. Pacman", "Atari 2600 Seaquest", "Atari 2600 Frostbite", "Atari 2600 Battle Zone", "Atari 2600 Road Runner", "Atari 2600 Up and Down", "Atari 2600 Q*Bert"], "metric": ["Score"], "title": "Soft Actor-Critic for Discrete Action Settings"} {"abstract": "We present a simple methods to leverage the table content for the BERT-based model to solve the text-to-SQL problem. Based on the observation that some of the table content match some words in question string and some of the table header also match some words in question string, we encode two addition feature vector for the deep model. Our methods also benefit the model inference in testing time as the tables are almost the same in training and testing time. We test our model on the WikiSQL dataset and outperform the BERT-based baseline by 3.7% in logic form and 3.7% in execution accuracy and achieve state-of-the-art.", "field": [], "task": ["Semantic Parsing", "Text-To-Sql"], "method": [], "dataset": ["WikiSQL"], "metric": ["Accuracy"], "title": "Content Enhanced BERT-based Text-to-SQL Generation"} {"abstract": "We propose a simple yet robust stochastic answer network (SAN) that simulates\nmulti-step reasoning in machine reading comprehension. Compared to previous\nwork such as ReasoNet which used reinforcement learning to determine the number\nof steps, the unique feature is the use of a kind of stochastic prediction\ndropout on the answer module (final layer) of the neural network during the\ntraining. We show that this simple trick improves robustness and achieves\nresults competitive to the state-of-the-art on the Stanford Question Answering\nDataset (SQuAD), the Adversarial SQuAD, and the Microsoft MAchine Reading\nCOmprehension Dataset (MS MARCO).", "field": [], "task": ["Machine Reading Comprehension", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["SQuAD1.1 dev", "SQuAD1.1", "SQuAD2.0"], "metric": ["EM", "F1"], "title": "Stochastic Answer Networks for Machine Reading Comprehension"} {"abstract": "Image-level weakly supervised semantic segmentation is a challenging problem that has been deeply studied in recent years. 
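The discrete-action Soft Actor-Critic abstract above hinges on replacing sampled expectations with exact sums over the action set. A minimal sketch of the two quantities that change is given below; tensor shapes and the surrounding training loop are assumptions.

```python
import torch

def soft_state_value(q_values, log_probs, alpha):
    """V(s) = E_{a~pi}[Q(s,a) - alpha * log pi(a|s)], taken as an exact sum over discrete actions."""
    probs = log_probs.exp()
    return (probs * (q_values - alpha * log_probs)).sum(dim=-1)

def policy_loss(q_values, log_probs, alpha):
    """Policy objective for the discrete case; q_values should be detached when optimising pi."""
    probs = log_probs.exp()
    return (probs * (alpha * log_probs - q_values)).sum(dim=-1).mean()
```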
Most of advanced solutions exploit class activation map (CAM). However, CAMs can hardly serve as the object mask due to the gap between full and weak supervisions. In this paper, we propose a self-supervised equivariant attention mechanism (SEAM) to discover additional supervision and narrow the gap. Our method is based on the observation that equivariance is an implicit constraint in fully supervised semantic segmentation, whose pixel-level labels take the same spatial transformation as the input images during data augmentation. However, this constraint is lost on the CAMs trained by image-level supervision. Therefore, we propose consistency regularization on predicted CAMs from various transformed images to provide self-supervision for network learning. Moreover, we propose a pixel correlation module (PCM), which exploits context appearance information and refines the prediction of current pixel by its similar neighbors, leading to further improvement on CAMs consistency. Extensive experiments on PASCAL VOC 2012 dataset demonstrate our method outperforms state-of-the-art methods using the same level of supervision. The code is released online.", "field": [], "task": ["Data Augmentation", "Semantic Segmentation", "Weakly-Supervised Semantic Segmentation"], "method": [], "dataset": ["PASCAL VOC 2012 val"], "metric": ["Mean IoU"], "title": "Self-supervised Equivariant Attention Mechanism for Weakly Supervised Semantic Segmentation"} {"abstract": "We consider an important task of effective and efficient semantic image\nsegmentation. In particular, we adapt a powerful semantic segmentation\narchitecture, called RefineNet, into the more compact one, suitable even for\ntasks requiring real-time performance on high-resolution inputs. To this end,\nwe identify computationally expensive blocks in the original setup, and propose\ntwo modifications aimed to decrease the number of parameters and floating point\noperations. By doing that, we achieve more than twofold model reduction, while\nkeeping the performance levels almost intact. Our fastest model undergoes a\nsignificant speed-up boost from 20 FPS to 55 FPS on a generic GPU card on\n512x512 inputs with solid 81.1% mean iou performance on the test set of PASCAL\nVOC, while our slowest model with 32 FPS (from original 17 FPS) shows 82.7%\nmean iou on the same dataset. Alternatively, we showcase that our approach is\neasily mixable with light-weight classification networks: we attain 79.2% mean\niou on PASCAL VOC using a model that contains only 3.3M parameters and performs\nonly 9.3B floating point operations.", "field": [], "task": ["Real-Time Semantic Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["NYU Depth v2", "PASCAL VOC 2012 test"], "metric": ["Speed(ms/f)", "Mean IoU", "mIoU"], "title": "Light-Weight RefineNet for Real-Time Semantic Segmentation"} {"abstract": "Natural Language Inference (NLI) task requires an agent to determine the\nlogical relationship between a natural language premise and a natural language\nhypothesis. We introduce Interactive Inference Network (IIN), a novel class of\nneural network architectures that is able to achieve high-level understanding\nof the sentence pair by hierarchically extracting semantic features from\ninteraction space. We show that an interaction tensor (attention weight)\ncontains semantic information to solve natural language inference, and a denser\ninteraction tensor contains richer semantic information. 
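The SEAM abstract above builds on an equivariance constraint: the CAM of a transformed image should match the transformed CAM of the original image. The sketch below uses rescaling as a stand-in for the affine transforms and an L1 penalty; the paper's exact transforms, normalisation, and pixel correlation module are not reproduced here.

```python
import torch.nn.functional as F

def equivariance_loss(cam_net, images, scale=0.5):
    """Consistency between the CAM of a rescaled image and the rescaled CAM of the original."""
    cams = cam_net(images)                                                   # (B, C, H, W)
    images_s = F.interpolate(images, scale_factor=scale, mode='bilinear',
                             align_corners=False)
    cams_of_s = cam_net(images_s)                                            # CAM of transformed input
    cams_s = F.interpolate(cams, scale_factor=scale, mode='bilinear',
                           align_corners=False)                              # transformed CAM
    return (cams_of_s - cams_s).abs().mean()
```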
One instance of such\narchitecture, Densely Interactive Inference Network (DIIN), demonstrates the\nstate-of-the-art performance on large scale NLI copora and large-scale NLI\nalike corpus. It's noteworthy that DIIN achieve a greater than 20% error\nreduction on the challenging Multi-Genre NLI (MultiNLI) dataset with respect to\nthe strongest published system.", "field": [], "task": ["Natural Language Inference", "Paraphrase Identification"], "method": [], "dataset": ["Quora Question Pairs", "SNLI"], "metric": ["Parameters", "% Train Accuracy", "% Test Accuracy", "Accuracy"], "title": "Natural Language Inference over Interaction Space"} {"abstract": "This work presents a method for adapting a single, fixed deep neural network\nto multiple tasks without affecting performance on already learned tasks. By\nbuilding upon ideas from network quantization and pruning, we learn binary\nmasks that piggyback on an existing network, or are applied to unmodified\nweights of that network to provide good performance on a new task. These masks\nare learned in an end-to-end differentiable fashion, and incur a low overhead\nof 1 bit per network parameter, per task. Even though the underlying network is\nfixed, the ability to mask individual weights allows for the learning of a\nlarge number of filters. We show performance comparable to dedicated fine-tuned\nnetworks for a variety of classification tasks, including those with large\ndomain shifts from the initial task (ImageNet), and a variety of network\narchitectures. Unlike prior work, we do not suffer from catastrophic forgetting\nor competition between tasks, and our performance is agnostic to task ordering.\nCode available at https://github.com/arunmallya/piggyback.", "field": [], "task": ["Continual Learning", "Quantization"], "method": [], "dataset": ["Stanford Cars (Fine-grained 6 Tasks)", "Sketch (Fine-grained 6 Tasks)", "Wikiart (Fine-grained 6 Tasks)", "visual domain decathlon (10 tasks)", "CUBS (Fine-grained 6 Tasks)", "ImageNet (Fine-grained 6 Tasks)", "Flowers (Fine-grained 6 Tasks)"], "metric": ["decathlon discipline (Score)", "Accuracy"], "title": "Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights"} {"abstract": "Convolutional networks reach top quality in pixel-level video object\nsegmentation but require a large amount of training data (1k~100k) to deliver\nsuch results. We propose a new training strategy which achieves\nstate-of-the-art results across three evaluation datasets while using 20x~1000x\nless annotated data than competing methods. Our approach is suitable for both\nsingle and multiple object segmentation. Instead of using large training sets\nhoping to generalize across domains, we generate in-domain training data using\nthe provided annotation on the first frame of each video to synthesize (\"lucid\ndream\") plausible future video frames. In-domain per-video training data allows\nus to train high quality appearance- and motion-based models, as well as tune\nthe post-processing stage. This approach allows to reach competitive results\neven when training from only a single annotated frame, without ImageNet\npre-training. Our results indicate that using a larger training set is not\nautomatically better, and that for the video object segmentation task a smaller\ntraining set that is closer to the target domain is more effective. 
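The Piggyback abstract above learns per-task binary masks over a frozen backbone. A minimal sketch of one masked convolution with a straight-through threshold is shown below; the initialisation value and threshold are illustrative assumptions rather than the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PiggybackConv2d(nn.Module):
    """Frozen backbone convolution gated by a learned, thresholded binary mask (one mask per task)."""

    def __init__(self, frozen_conv, threshold=5e-3):
        super().__init__()
        self.conv = frozen_conv
        for p in self.conv.parameters():
            p.requires_grad_(False)                        # backbone weights stay fixed
        self.threshold = threshold
        self.mask_real = nn.Parameter(torch.full_like(self.conv.weight, 1e-2))

    def forward(self, x):
        hard = (self.mask_real > self.threshold).float()
        # Straight-through estimator: binary mask in the forward pass, real-valued gradients backward.
        mask = hard + self.mask_real - self.mask_real.detach()
        return F.conv2d(x, self.conv.weight * mask, self.conv.bias,
                        stride=self.conv.stride, padding=self.conv.padding,
                        dilation=self.conv.dilation, groups=self.conv.groups)
```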
This\nchanges the mindset regarding how many training samples and general\n\"objectness\" knowledge are required for the video object segmentation task.", "field": [], "task": ["Multiple Object Tracking", "Object Tracking", "Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["DAVIS 2017 (test-dev)", "DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "Lucid Data Dreaming for Video Object Segmentation"} {"abstract": "Although the performance of person Re-Identification (ReID) has been\nsignificantly boosted, many challenging issues in real scenarios have not been\nfully investigated, e.g., the complex scenes and lighting variations, viewpoint\nand pose changes, and the large number of identities in a camera network. To\nfacilitate the research towards conquering those issues, this paper contributes\na new dataset called MSMT17 with many important features, e.g., 1) the raw\nvideos are taken by an 15-camera network deployed in both indoor and outdoor\nscenes, 2) the videos cover a long period of time and present complex lighting\nvariations, and 3) it contains currently the largest number of annotated\nidentities, i.e., 4,101 identities and 126,441 bounding boxes. We also observe\nthat, domain gap commonly exists between datasets, which essentially causes\nsevere performance drop when training and testing on different datasets. This\nresults in that available training data cannot be effectively leveraged for new\ntesting domains. To relieve the expensive costs of annotating new training\nsamples, we propose a Person Transfer Generative Adversarial Network (PTGAN) to\nbridge the domain gap. Comprehensive experiments show that the domain gap could\nbe substantially narrowed-down by the PTGAN.", "field": [], "task": ["Person Re-Identification", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["DukeMTMC-reID", "Duke to MSMT", "Market to MSMT"], "metric": ["rank-10", "mAP", "Rank-10", "Rank-1", "rank-1", "rank-5"], "title": "Person Transfer GAN to Bridge Domain Gap for Person Re-Identification"} {"abstract": "Deep convolutional networks have achieved great success for visual\nrecognition in still images. However, for action recognition in videos, the\nadvantage over traditional methods is not so evident. This paper aims to\ndiscover the principles to design effective ConvNet architectures for action\nrecognition in videos and learn these models given limited training samples.\nOur first contribution is temporal segment network (TSN), a novel framework for\nvideo-based action recognition. which is based on the idea of long-range\ntemporal structure modeling. It combines a sparse temporal sampling strategy\nand video-level supervision to enable efficient and effective learning using\nthe whole action video. The other contribution is our study on a series of good\npractices in learning ConvNets on video data with the help of temporal segment\nnetwork. Our approach obtains the state-the-of-art performance on the datasets\nof HMDB51 ( $ 69.4\\% $) and UCF101 ($ 94.2\\% $). 
We also visualize the learned\nConvNet models, which qualitatively demonstrates the effectiveness of temporal\nsegment network and the proposed good practices.", "field": [], "task": ["Action Classification", "Action Recognition", "Action Recognition In Videos", "Action Recognition In Videos ", "Multimodal Activity Recognition", "Temporal Action Localization"], "method": [], "dataset": ["Kinetics-400", "UCF101", "HMDB-51", "EV-Action"], "metric": ["3-fold Accuracy", "Vid acc@5", "Accuracy", "Average accuracy of 3 splits", "Vid acc@1"], "title": "Temporal Segment Networks: Towards Good Practices for Deep Action Recognition"} {"abstract": "Neural networks with tree-based sentence encoders have shown better results\non many downstream tasks. Most of existing tree-based encoders adopt syntactic\nparsing trees as the explicit structure prior. To study the effectiveness of\ndifferent tree structures, we replace the parsing trees with trivial trees\n(i.e., binary balanced tree, left-branching tree and right-branching tree) in\nthe encoders. Though trivial trees contain no syntactic information, those\nencoders get competitive or even better results on all of the ten downstream\ntasks we investigated. This surprising result indicates that explicit syntax\nguidance may not be the main contributor to the superior performances of\ntree-based neural sentence modeling. Further analysis show that tree modeling\ngives better results when crucial words are closer to the final representation.\nAdditional experiments give more clues on how to design an effective tree-based\nencoder. Our code is open-source and available at\nhttps://github.com/ExplorerFreda/TreeEnc.", "field": [], "task": ["Sentiment Analysis", "Text Classification"], "method": [], "dataset": ["DBpedia", "Amazon Review Polarity", "AG News", "Amazon Review Full"], "metric": ["Error", "Accuracy"], "title": "On Tree-Based Neural Sentence Modeling"} {"abstract": "We present a simple and accurate span-based model for semantic role labeling\n(SRL). Our model directly takes into account all possible argument spans and\nscores them for each label. At decoding time, we greedily select higher scoring\nlabeled spans. One advantage of our model is to allow us to design and use\nspan-level features, that are difficult to use in token-based BIO tagging\napproaches. Experimental results demonstrate that our ensemble model achieves\nthe state-of-the-art results, 87.4 F1 and 87.0 F1 on the CoNLL-2005 and 2012\ndatasets, respectively.", "field": [], "task": ["Semantic Role Labeling"], "method": [], "dataset": ["CoNLL 2005", "OntoNotes"], "metric": ["F1"], "title": "A Span Selection Model for Semantic Role Labeling"} {"abstract": "This research note combines two methods that have recently improved the state\nof the art in language modeling: Transformers and dynamic evaluation.\nTransformers use stacked layers of self-attention that allow them to capture\nlong range dependencies in sequential data. Dynamic evaluation fits models to\nthe recent sequence history, allowing them to assign higher probabilities to\nre-occurring sequential patterns. 
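Referring back to the Temporal Segment Networks abstract above, the core of TSN is sparse snippet sampling plus a video-level consensus. A minimal sketch is given below; `snippet_model` and the averaging consensus are assumptions standing in for the full two-stream pipeline.

```python
import torch

def sample_segment_indices(num_frames, num_segments=3):
    """Sparse sampling: split the video into equal segments, pick one random frame per segment."""
    seg_len = max(num_frames // num_segments, 1)
    offsets = torch.randint(0, seg_len, (num_segments,))
    return (torch.arange(num_segments) * seg_len + offsets).clamp_max(num_frames - 1)

def tsn_forward(snippet_model, video_frames, num_segments=3):
    """Video-level prediction as the consensus (here: average) of per-snippet predictions."""
    idx = sample_segment_indices(video_frames.shape[0], num_segments)
    snippet_logits = snippet_model(video_frames[idx])      # (num_segments, num_classes)
    return snippet_logits.mean(dim=0)                      # segment consensus
```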
By applying dynamic evaluation to\nTransformer-XL models, we improve the state of the art on enwik8 from 0.99 to\n0.94 bits/char, text8 from 1.08 to 1.04 bits/char, and WikiText-103 from 18.3\nto 16.4 perplexity points.", "field": [], "task": ["Language Modelling"], "method": [], "dataset": ["Text8", "enwik8", "WikiText-103", "Hutter Prize"], "metric": ["Number of params", "Bit per Character (BPC)", "Validation perplexity", "Test perplexity"], "title": "Dynamic Evaluation of Transformer Language Models"} {"abstract": "Real-world large-scale datasets usually contain noisy labels and are imbalanced. Therefore, we propose derivative manipulation (DM), a novel and general example weighting approach for training robust deep models under these adverse conditions. DM has two main merits. First, loss function and example weighting are two common techniques in robust learning. In gradient-based optimisation, the role of a loss function is to provide the gradient for back-propagation to update a model, so that the derivative magnitude of an example defines how much impact it has, namely its weight. By DM, we connect the design of loss function and example weighting together. Second, although designing a loss function sometimes has the same effect, we need to care whether a loss is differentiable, and derive its derivative to understand its example weighting scheme. They make the design complicated. Instead, DM is more flexible and straightforward by directly modifying the derivative. Concretely, DM modifies a derivative magnitude function, including transformation and normalisation, after which we term it an emphasis density function, which expresses a weighting scheme. Accordingly, diverse weighting schemes are derived from common probability density functions, including those of well-known robust losses, e.g., MAE and GCE. We conduct extensive experiments demonstrating the effectiveness of DM on both vision and language tasks.", "field": [], "task": ["Image Classification", "Representation Learning"], "method": [], "dataset": ["Clothing1M"], "metric": ["Accuracy"], "title": "Derivative Manipulation for General Example Weighting"} {"abstract": "Integrating logical reasoning within deep learning architectures has been a major goal of modern AI systems. In this paper, we propose a new direction toward this goal by introducing a differentiable (smoothed) maximum satisfiability (MAXSAT) solver that can be integrated into the loop of larger deep learning systems. Our (approximate) solver is based upon a fast coordinate descent approach to solving the semidefinite program (SDP) associated with the MAXSAT problem. We show how to analytically differentiate through the solution to this SDP and efficiently solve the associated backward pass. We demonstrate that by integrating this solver into end-to-end learning systems, we can learn the logical structure of challenging problems in a minimally supervised fashion. In particular, we show that we can learn the parity function using single-bit supervision (a traditionally hard task for deep networks) and learn how to play 9x9 Sudoku solely from examples. We also solve a \"visual Sudok\" problem that maps images of Sudoku puzzles to their associated logical solutions by combining our MAXSAT solver with a traditional convolutional architecture. 
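The dynamic-evaluation abstract above fits the language model to the recent test history while it is being scored. A minimal sketch of that loop follows; plain SGD and the chunked interface are simplifications of the paper's update rule, which decays parameters back toward their global values.

```python
import torch

def dynamic_evaluation(model, loss_fn, test_chunks, lr=1e-5):
    """Evaluate while adapting: after scoring each chunk, take a gradient step on it.

    `test_chunks` yields (inputs, targets); re-occurring patterns later in the
    sequence receive higher probability because the model keeps adapting.
    """
    optimiser = torch.optim.SGD(model.parameters(), lr=lr)
    total_loss, total_tokens = 0.0, 0
    for inputs, targets in test_chunks:
        logits = model(inputs)
        loss = loss_fn(logits.view(-1, logits.size(-1)), targets.view(-1))
        total_loss += loss.item() * targets.numel()
        total_tokens += targets.numel()
        optimiser.zero_grad()
        loss.backward()                 # adapt to the sequence just scored
        optimiser.step()
    return total_loss / total_tokens    # average loss under dynamic evaluation
```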
Our approach thus shows promise in integrating logical structures within deep learning.", "field": [], "task": ["Game of Suduko"], "method": [], "dataset": ["Sudoko 9x9"], "metric": ["Accuracy"], "title": "SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver"} {"abstract": "Consider end-to-end training of a multi-modal vs. a single-modal network on a task with multiple input modalities: the multi-modal network receives more information, so it should match or outperform its single-modal counterpart. In our experiments, however, we observe the opposite: the best single-modal network always outperforms the multi-modal network. This observation is consistent across different combinations of modalities and on different tasks and benchmarks. This paper identifies two main causes for this performance drop: first, multi-modal networks are often prone to overfitting due to increased capacity. Second, different modalities overfit and generalize at different rates, so training them jointly with a single optimization strategy is sub-optimal. We address these two problems with a technique we call Gradient Blending, which computes an optimal blend of modalities based on their overfitting behavior. We demonstrate that Gradient Blending outperforms widely-used baselines for avoiding overfitting and achieves state-of-the-art accuracy on various tasks including human action recognition, ego-centric action recognition, and acoustic event detection.", "field": [], "task": ["Action Classification", "Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["Kinetics-400", "Sports-1M", "miniSports"], "metric": ["Video hit@1 ", "Video hit@1", "Vid acc@1", "Video hit@5", "Clip Hit@1"], "title": "What Makes Training Multi-Modal Classification Networks Hard?"} {"abstract": "In this paper, we propose a state-of-the-art video denoising algorithm based on a convolutional neural network architecture. Until recently, video denoising with neural networks had been a largely under explored domain, and existing methods could not compete with the performance of the best patch-based methods. The approach we introduce in this paper, called FastDVDnet, shows similar or better performance than other state-of-the-art competitors with significantly lower computing times. In contrast to other existing neural network denoisers, our algorithm exhibits several desirable properties such as fast runtimes, and the ability to handle a wide range of noise levels with a single network model. The characteristics of its architecture make it possible to avoid using a costly motion compensation stage while achieving excellent performance. The combination between its denoising performance and lower computational load makes this algorithm attractive for practical denoising applications. We compare our method with different state-of-art algorithms, both visually and with respect to objective quality metrics.", "field": [], "task": ["Denoising", "Motion Compensation", "Motion Estimation", "Video Denoising"], "method": [], "dataset": ["DAVIS sigma50", "Set8 sigma30", "DAVIS sigma20", "DAVIS sigma40", "DAVIS sigma10", "Set8 sigma40", "Set8 sigma10", "DAVIS sigma30", "Set8 sigma20", "Set8 sigma50"], "metric": ["PSNR"], "title": "FastDVDnet: Towards Real-Time Deep Video Denoising Without Flow Estimation"} {"abstract": "We introduce a new, rigorously-formulated Bayesian meta-learning algorithm that learns a probability distribution of model parameter prior for few-shot learning. 
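For the Gradient Blending abstract above, the training-time effect is a weighted combination of the fused loss and each modality's own loss. The sketch below assumes the blending weights are already given; in the paper they are estimated from each modality's overfitting-to-generalisation behaviour on held-out data.

```python
def gradient_blending_loss(per_modality_logits, fused_logits, targets, weights, criterion):
    """Weighted sum of the fused loss and each modality's own loss.

    `weights` maps modality name (plus 'fused') to its blending weight;
    `per_modality_logits` maps modality name to that branch's predictions.
    """
    loss = weights["fused"] * criterion(fused_logits, targets)
    for name, logits in per_modality_logits.items():
        loss = loss + weights[name] * criterion(logits, targets)
    return loss
```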
The proposed algorithm employs a gradient-based variational inference to infer the posterior of model parameters to a new task. Our algorithm can be applied to any model architecture and can be implemented in various machine learning paradigms, including regression and classification. We show that the models trained with our proposed meta-learning algorithm are well calibrated and accurate, with state-of-the-art calibration and classification results on two few-shot classification benchmarks (Omniglot and Mini-ImageNet), and competitive results in a multi-modal task-distribution regression.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Meta-Learning", "Omniglot", "Regression", "Variational Inference"], "method": [], "dataset": ["OMNIGLOT - 1-Shot, 5-way", "OMNIGLOT - 5-Shot, 20-way", "Mini-Imagenet 5-way (1-shot)", "Tiered ImageNet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "OMNIGLOT - 5-Shot, 5-way", "OMNIGLOT - 1-Shot, 20-way", "Tiered ImageNet 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Uncertainty in Model-Agnostic Meta-Learning using Variational Inference"} {"abstract": "Current state-of-the-art models for video action recognition are mostly based on expensive 3D ConvNets. This results in a need for large GPU clusters to train and evaluate such architectures. To address this problem, we present a lightweight and memory-friendly architecture for action recognition that performs on par with or better than current architectures by using only a fraction of resources. The proposed architecture is based on a combination of a deep subnet operating on low-resolution frames with a compact subnet operating on high-resolution frames, allowing for high efficiency and accuracy at the same time. We demonstrate that our approach achieves a reduction by $3\\sim4$ times in FLOPs and $\\sim2$ times in memory usage compared to the baseline. This enables training deeper models with more input frames under the same computational budget. To further obviate the need for large-scale 3D convolutions, a temporal aggregation module is proposed to model temporal dependencies in a video at very small additional computational costs. Our models achieve strong performance on several action recognition benchmarks including Kinetics, Something-Something and Moments-in-time. The code and models are available at https://github.com/IBM/bLVNet-TAM.", "field": [], "task": ["Action Classification", "Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["Kinetics-400", "Something-Something V2"], "metric": ["Vid acc@5", "Vid acc@1", "Top-1 Accuracy"], "title": "More Is Less: Learning Efficient Video Representations by Big-Little Network and Depthwise Temporal Aggregation"} {"abstract": "We propose to modify the common training protocols of optical flow, leading to sizable accuracy improvements without adding to the computational complexity of the training process. The improvement is based on observing the bias in sampling challenging data that exists in the current training protocol, and improving the sampling process. In addition, we find that both regularization and augmentation should decrease during the training protocol. Using an existing low parameters architecture, the method is ranked first on the MPI Sintel benchmark among all other methods, improving the best two frames method accuracy by more than 10%. 
The method also surpasses all similar architecture variants by more than 12% and 19.7% on the KITTI benchmarks, achieving the lowest Average End-Point Error on KITTI2012 among two-frame methods, without using extra datasets.", "field": [], "task": ["Optical Flow Estimation"], "method": [], "dataset": ["Sintel-final", "Sintel-clean"], "metric": ["Average End-Point Error"], "title": "ScopeFlow: Dynamic Scene Scoping for Optical Flow"} {"abstract": "Reading strategies have been shown to improve comprehension levels,\nespecially for readers lacking adequate prior knowledge. Just as the process of\nknowledge accumulation is time-consuming for human readers, it is\nresource-demanding to impart rich general domain knowledge into a deep language\nmodel via pre-training. Inspired by reading strategies identified in cognitive\nscience, and given limited computational resources -- just a pre-trained model\nand a fixed number of training instances -- we propose three general strategies\naimed to improve non-extractive machine reading comprehension (MRC): (i) BACK\nAND FORTH READING that considers both the original and reverse order of an\ninput sequence, (ii) HIGHLIGHTING, which adds a trainable embedding to the text\nembedding of tokens that are relevant to the question and candidate answers,\nand (iii) SELF-ASSESSMENT that generates practice questions and candidate\nanswers directly from the text in an unsupervised manner.\n By fine-tuning a pre-trained language model (Radford et al., 2018) with our\nproposed strategies on the largest general domain multiple-choice MRC dataset\nRACE, we obtain a 5.8% absolute increase in accuracy over the previous best\nresult achieved by the same pre-trained model fine-tuned on RACE without the\nuse of strategies. We further fine-tune the resulting model on a target MRC\ntask, leading to an absolute improvement of 6.2% in average accuracy over\nprevious state-of-the-art approaches on six representative non-extractive MRC\ndatasets from different domains (i.e., ARC, OpenBookQA, MCTest, SemEval-2018\nTask 11, ROCStories, and MultiRC). These results demonstrate the effectiveness\nof our proposed strategies and the versatility and general applicability of our\nfine-tuned models that incorporate these strategies. Core code is available at\nhttps://github.com/nlpdata/strategy/.", "field": [], "task": ["Language Modelling", "Machine Reading Comprehension", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["Story Cloze Test"], "metric": ["Accuracy"], "title": "Improving Machine Reading Comprehension with General Reading Strategies"} {"abstract": "Recent advances in image-based 3D human shape estimation have been driven by the significant improvement in representation power afforded by deep neural networks. Although current approaches have demonstrated the potential in real world settings, they still fail to produce reconstructions with the level of detail often present in the input images. We argue that this limitation stems primarily form two conflicting requirements; accurate predictions require large context, but precise predictions require high resolution. Due to memory limitations in current hardware, previous approaches tend to take low resolution images as input to cover large spatial context, and produce less precise (or low resolution) 3D estimates as a result. We address this limitation by formulating a multi-level architecture that is end-to-end trainable. 
A coarse level observes the whole image at lower resolution and focuses on holistic reasoning. This provides context to an fine level which estimates highly detailed geometry by observing higher-resolution images. We demonstrate that our approach significantly outperforms existing state-of-the-art techniques on single image human shape reconstruction by fully leveraging 1k-resolution input images.", "field": [], "task": ["3D Human Pose Estimation", "3D Object Reconstruction From A Single Image", "3D Shape Reconstruction"], "method": [], "dataset": ["BUFF", "RenderPeople"], "metric": ["Surface normal consistency", "Point-to-surface distance (cm)", "Chamfer (cm)"], "title": "PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization"} {"abstract": "Depth maps contain geometric clues for assisting Salient Object Detection (SOD). In this paper, we propose a novel Cross-Modal Weighting (CMW) strategy to encourage comprehensive interactions between RGB and depth channels for RGB-D SOD. Specifically, three RGB-depth interaction modules, named CMW-L, CMW-M and CMW-H, are developed to deal with respectively low-, middle- and high-level cross-modal information fusion. These modules use Depth-to-RGB Weighing (DW) and RGB-to-RGB Weighting (RW) to allow rich cross-modal and cross-scale interactions among feature layers generated by different network blocks. To effectively train the proposed Cross-Modal Weighting Network (CMWNet), we design a composite loss function that summarizes the errors between intermediate predictions and ground truth over different scales. With all these novel components working together, CMWNet effectively fuses information from RGB and depth channels, and meanwhile explores object localization and details across scales. Thorough evaluations demonstrate CMWNet consistently outperforms 15 state-of-the-art RGB-D SOD methods on seven popular benchmarks.", "field": [], "task": ["Object Detection", "Object Localization", "RGB-D Salient Object Detection", "RGB Salient Object Detection", "Salient Object Detection"], "method": [], "dataset": ["NJU2K"], "metric": ["Average MAE", "S-Measure"], "title": "Cross-Modal Weighting Network for RGB-D Salient Object Detection"} {"abstract": "Human-Object Interaction (HOI) consists of human, object and implicit interaction/verb. Different from previous methods that directly map pixels to HOI semantics, we propose a novel perspective for HOI learning in an analytical manner. In analogy to Harmonic Analysis, whose goal is to study how to represent the signals with the superposition of basic waves, we propose the HOI Analysis. We argue that coherent HOI can be decomposed into isolated human and object. Meanwhile, isolated human and object can also be integrated into coherent HOI again. Moreover, transformations between human-object pairs with the same HOI can also be easier approached with integration and decomposition. As a result, the implicit verb will be represented in the transformation function space. In light of this, we propose an Integration-Decomposition Network (IDN) to implement the above transformations and achieve state-of-the-art performance on widely-used HOI detection benchmarks. 
Code is available at https://github.com/DirtyHarryLYL/HAKE-Action-Torch/tree/IDN-(Integrating-Decomposing-Network).", "field": [], "task": ["Human-Object Interaction Detection"], "method": [], "dataset": ["HICO-DET", "V-COCO"], "metric": ["MAP"], "title": "HOI Analysis: Integrating and Decomposing Human-Object Interaction"} {"abstract": "In this paper, we propose a recurrent framework for Joint Unsupervised\nLEarning (JULE) of deep representations and image clusters. In our framework,\nsuccessive operations in a clustering algorithm are expressed as steps in a\nrecurrent process, stacked on top of representations output by a Convolutional\nNeural Network (CNN). During training, image clusters and representations are\nupdated jointly: image clustering is conducted in the forward pass, while\nrepresentation learning in the backward pass. Our key idea behind this\nframework is that good representations are beneficial to image clustering and\nclustering results provide supervisory signals to representation learning. By\nintegrating two processes into a single model with a unified weighted triplet\nloss and optimizing it end-to-end, we can obtain not only more powerful\nrepresentations, but also more precise image clusters. Extensive experiments\nshow that our method outperforms the state-of-the-art on image clustering\nacross a variety of image datasets. Moreover, the learned representations\ngeneralize well when transferred to other tasks.", "field": [], "task": ["Image Clustering", "Representation Learning"], "method": [], "dataset": ["coil-100", "MNIST-test", "CMU-PIE", "Imagenet-dog-15", "YouTube Faces DB", "USPS", "CIFAR-100", "CIFAR-10", "UMist", "FRGC", "Tiny-ImageNet", "CUB Birds", "ImageNet-10", "STL-10", "Coil-20", "Stanford Dogs", "Stanford Cars", "MNIST-full"], "metric": ["Train set", "Train Split", "ARI", "Train Set", "NMI", "Accuracy"], "title": "Joint Unsupervised Learning of Deep Representations and Image Clusters"} {"abstract": "In this paper we present a novel deep learning method for 3D object detection and 6D pose estimation from RGB images. Our method, named DPOD (Dense Pose Object Detector), estimates dense multi-class 2D-3D correspondence maps between an input image and available 3D models. Given the correspondences, a 6DoF pose is computed via PnP and RANSAC. An additional RGB pose refinement of the initial pose estimates is performed using a custom deep learning-based refinement scheme. Our results and comparison to a vast number of related works demonstrate that a large number of correspondences is beneficial for obtaining high-quality 6D poses both before and after refinement. Unlike other methods that mainly use real data for training and do not train on synthetic renderings, we perform evaluation on both synthetic and real training data demonstrating superior results before and after refinement when compared to all recent detectors. While being precise, the presented approach is still real-time capable.", "field": [], "task": ["3D Object Detection", "6D Pose Estimation", "6D Pose Estimation using RGB", "Object Detection", "Pose Estimation"], "method": [], "dataset": ["LineMOD", "Occlusion LineMOD"], "metric": ["Mean ADD", "Accuracy (ADD)"], "title": "DPOD: 6D Pose Object Detector and Refiner"} {"abstract": "Recently, it has been demonstrated that deep neural networks can significantly improve the performance of single image super-resolution (SISR). Numerous studies have concentrated on raising the quantitative quality of super-resolved (SR) images. 
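The DPOD abstract above recovers the 6DoF pose from dense 2D-3D correspondences with PnP and RANSAC. A hedged sketch of that final step using OpenCV is shown below; the RANSAC settings and the zero distortion coefficients are illustrative assumptions, and the correspondence prediction and refinement networks are not shown.

```python
import cv2
import numpy as np

def pose_from_correspondences(points_3d, points_2d, camera_matrix):
    """Recover rotation and translation from predicted 2D-3D correspondences via PnP + RANSAC."""
    dist_coeffs = np.zeros(5)                        # assume an undistorted camera
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float32).reshape(-1, 1, 3),
        points_2d.astype(np.float32).reshape(-1, 1, 2),
        camera_matrix.astype(np.float32), dist_coeffs,
        reprojectionError=3.0, iterationsCount=150)
    rotation, _ = cv2.Rodrigues(rvec)                # 3x3 rotation matrix from the Rodrigues vector
    return ok, rotation, tvec, inliers
```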
However, these methods that target PSNR maximization usually produce blurred images at large upscaling factor. The introduction of generative adversarial networks (GANs) can mitigate this issue and show impressive results with synthetic high-frequency textures. Nevertheless, these GAN-based approaches always have a tendency to add fake textures and even artifacts to make the SR image of visually higher-resolution. In this paper, we propose a novel perceptual image super-resolution method that progressively generates visually high-quality results by constructing a stage-wise network. Specifically, the first phase concentrates on minimizing pixel-wise error, and the second stage utilizes the features extracted by the previous stage to pursue results with better structural retention. The final stage employs fine structure features distilled by the second phase to produce more realistic results. In this way, we can maintain the pixel, and structural level information in the perceptual image as much as possible. It is useful to note that the proposed method can build three types of images in a feed-forward process. Also, we explore a new generator that adopts multi-scale hierarchical features fusion. Extensive experiments on benchmark datasets show that our approach is superior to the state-of-the-art methods. Code is available at https://github.com/Zheng222/PPON.", "field": [], "task": ["Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["Set14 - 4x upscaling", "Manga109 - 4x upscaling", "BSD100 - 4x upscaling", "Set5 - 4x upscaling", "Urban100 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Progressive Perception-Oriented Network for Single Image Super-Resolution"} {"abstract": "We propose to estimate 3D human pose from multi-view images and a few IMUs attached at person's limbs. It operates by firstly detecting 2D poses from the two signals, and then lifting them to the 3D space. We present a geometric approach to reinforce the visual features of each pair of joints based on the IMUs. This notably improves 2D pose estimation accuracy especially when one joint is occluded. We call this approach Orientation Regularized Network (ORN). Then we lift the multi-view 2D poses to the 3D space by an Orientation Regularized Pictorial Structure Model (ORPSM) which jointly minimizes the projection error between the 3D and 2D poses, along with the discrepancy between the 3D pose and IMU orientations. The simple two-step approach reduces the error of the state-of-the-art by a large margin on a public dataset. Our code will be released at https://github.com/CHUNYUWANG/imu-human-pose-pytorch.", "field": [], "task": ["3D Absolute Human Pose Estimation", "3D Human Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["Total Capture"], "metric": ["Average MPJPE (mm)", "MPJPE"], "title": "Fusing Wearable IMUs with Multi-View Images for Human Pose Estimation: A Geometric Approach"} {"abstract": "We present a general framework for exemplar-based image translation, which synthesizes a photo-realistic image from the input in a distinct domain (e.g., semantic segmentation mask, or edge map, or pose keypoints), given an exemplar image. The output has the style (e.g., color, texture) in consistency with the semantically corresponding objects in the exemplar. We propose to jointly learn the crossdomain correspondence and the image translation, where both tasks facilitate each other and thus can be learned with weak supervision. 
The images from distinct domains are first aligned to an intermediate domain where dense correspondence is established. Then, the network synthesizes images based on the appearance of semantically corresponding patches in the exemplar. We demonstrate the effectiveness of our approach in several image translation tasks. Our method is superior to state-of-the-art methods in terms of image quality significantly, with the image style faithful to the exemplar with semantic consistency. Moreover, we show the utility of our method for several applications", "field": [], "task": ["Image Generation", "Image-to-Image Translation"], "method": [], "dataset": ["ADE20K Labels-to-Photos", "ADE20K-Outdoor Labels-to-Photos", "CelebA-HQ", "Deep-Fashion"], "metric": ["FID"], "title": "Cross-domain Correspondence Learning for Exemplar-based Image Translation"} {"abstract": "In statistical relational learning, knowledge graph completion deals with\nautomatically understanding the structure of large knowledge graphs---labeled\ndirected graphs---and predicting missing relationships---labeled edges.\nState-of-the-art embedding models propose different trade-offs between modeling\nexpressiveness, and time and space complexity. We reconcile both expressiveness\nand complexity through the use of complex-valued embeddings and explore the\nlink between such complex-valued embeddings and unitary diagonalization. We\ncorroborate our approach theoretically and show that all real square\nmatrices---thus all possible relation/adjacency matrices---are the real part of\nsome unitarily diagonalizable matrix. This results opens the door to a lot of\nother applications of square matrices factorization. Our approach based on\ncomplex embeddings is arguably simple, as it only involves a Hermitian dot\nproduct, the complex counterpart of the standard dot product between real\nvectors, whereas other methods resort to more and more complicated composition\nfunctions to increase their expressiveness. The proposed complex embeddings are\nscalable to large data sets as it remains linear in both space and time, while\nconsistently outperforming alternative approaches on standard link prediction\nbenchmarks.", "field": [], "task": ["Knowledge Graph Completion", "Knowledge Graphs", "Link Prediction", "Relational Reasoning"], "method": [], "dataset": [" FB15k"], "metric": ["Hits@10", "MRR", "Hits@3", "Hits@1"], "title": "Knowledge Graph Completion via Complex Tensor Factorization"} {"abstract": "This paper tackles the task of semi-supervised video object segmentation,\ni.e., the separation of an object from the background in a video, given the\nmask of the first frame. We present One-Shot Video Object Segmentation (OSVOS),\nbased on a fully-convolutional neural network architecture that is able to\nsuccessively transfer generic semantic information, learned on ImageNet, to the\ntask of foreground segmentation, and finally to learning the appearance of a\nsingle annotated object of the test sequence (hence one-shot). Although all\nframes are processed independently, the results are temporally coherent and\nstable. 
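The complex-embedding abstract above scores a triple with the real part of a Hermitian three-way product. A minimal sketch of that scoring function follows, storing each complex embedding as the concatenation of its real and imaginary halves; batching and the training objective are left out.

```python
import torch

def complex_score(e_s, w_r, e_o):
    """ComplEx triple score: Re(<w_r, e_s, conj(e_o)>) with complex-valued embeddings.

    Each embedding is a real vector of size 2d holding (real, imaginary) halves.
    """
    s_re, s_im = e_s.chunk(2, dim=-1)
    r_re, r_im = w_r.chunk(2, dim=-1)
    o_re, o_im = e_o.chunk(2, dim=-1)
    return (r_re * s_re * o_re + r_re * s_im * o_im
            + r_im * s_re * o_im - r_im * s_im * o_re).sum(dim=-1)
```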
We perform experiments on two annotated video segmentation databases,\nwhich show that OSVOS is fast and improves the state of the art by a\nsignificant margin (79.8% vs 68.0%).", "field": [], "task": ["Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Segmentation", "Visual Object Tracking"], "method": [], "dataset": ["YouTube-VOS", "DAVIS 2017 (test-dev)", "DAVIS 2017 (val)", "YouTube", "DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "Speed (FPS)", "Jaccard (Unseen)", "F-Measure (Seen)", "Jaccard (Seen)", "mIoU", "F-measure (Recall)", "Jaccard (Decay)", "Overall", "O (Average of Measures)", "Jaccard (Recall)", "F-measure (Mean)", "J&F", "F-Measure (Unseen)"], "title": "One-Shot Video Object Segmentation"} {"abstract": "In this paper, drawing intuition from the Turing test, we propose using\nadversarial training for open-domain dialogue generation: the system is trained\nto produce sequences that are indistinguishable from human-generated dialogue\nutterances. We cast the task as a reinforcement learning (RL) problem where we\njointly train two systems, a generative model to produce response sequences,\nand a discriminator---analagous to the human evaluator in the Turing test--- to\ndistinguish between the human-generated dialogues and the machine-generated\nones. The outputs from the discriminator are then used as rewards for the\ngenerative model, pushing the system to generate dialogues that mostly resemble\nhuman dialogues.\n In addition to adversarial training we describe a model for adversarial {\\em\nevaluation} that uses success in fooling an adversary as a dialogue evaluation\nmetric, while avoiding a number of potential pitfalls. Experimental results on\nseveral metrics, including adversarial evaluation, demonstrate that the\nadversarially-trained system generates higher-quality responses than previous\nbaselines.", "field": [], "task": ["Dialogue Evaluation", "Dialogue Generation"], "method": [], "dataset": ["Amazon-5"], "metric": ["1 in 10 R@2"], "title": "Adversarial Learning for Neural Dialogue Generation"} {"abstract": "Most state-of-the-art text detection methods are specific to horizontal Latin\ntext and are not fast enough for real-time applications. We introduce Segment\nLinking (SegLink), an oriented text detection method. The main idea is to\ndecompose text into two locally detectable elements, namely segments and links.\nA segment is an oriented box covering a part of a word or text line; A link\nconnects two adjacent segments, indicating that they belong to the same word or\ntext line. Both elements are detected densely at multiple scales by an\nend-to-end trained, fully-convolutional neural network. Final detections are\nproduced by combining segments connected by links. Compared with previous\nmethods, SegLink improves along the dimensions of accuracy, speed, and ease of\ntraining. It achieves an f-measure of 75.0% on the standard ICDAR 2015\nIncidental (Challenge 4) benchmark, outperforming the previous best by a large\nmargin. It runs at over 20 FPS on 512x512 images. 
Moreover, without\nmodification, SegLink is able to detect long lines of non-Latin text, such as\nChinese.", "field": [], "task": ["Curved Text Detection", "Scene Text Detection"], "method": [], "dataset": ["ICDAR 2013", "ICDAR 2015", "MSRA-TD500"], "metric": ["F-Measure", "Recall", "Precision"], "title": "Detecting Oriented Text in Natural Images by Linking Segments"} {"abstract": "Semantic segmentation requires large amounts of pixel-wise annotations to learn accurate models. In this paper, we present a video prediction-based methodology to scale up training sets by synthesizing new training samples in order to improve the accuracy of semantic segmentation networks. We exploit video prediction models' ability to predict future frames in order to also predict future labels. A joint propagation strategy is also proposed to alleviate mis-alignments in synthesized samples. We demonstrate that training segmentation models on datasets augmented by the synthesized samples leads to significant improvements in accuracy. Furthermore, we introduce a novel boundary label relaxation technique that makes training robust to annotation noise and propagation artifacts along object boundaries. Our proposed methods achieve state-of-the-art mIoUs of 83.5% on Cityscapes and 82.9% on CamVid. Our single model, without model ensembles, achieves 72.8% mIoU on the KITTI semantic segmentation test set, which surpasses the winning entry of the ROB challenge 2018. Our code and videos can be found at https://nv-adlr.github.io/publication/2018-Segmentation.", "field": [], "task": ["Semantic Segmentation", "Video Prediction"], "method": [], "dataset": ["CamVid", "KITTI Semantic Segmentation", "Cityscapes test"], "metric": ["Mean IoU (class)", "Mean IoU"], "title": "Improving Semantic Segmentation via Video Propagation and Label Relaxation"} {"abstract": "Instance-level alignment is widely exploited for person re-identification, e.g. spatial alignment, latent semantic alignment and triplet alignment. This paper probes another feature alignment modality, namely cluster-level feature alignment across whole dataset, where the model can see not only the sampled images in local mini-batch but the global feature distribution of the whole dataset from distilled anchors. Towards this aim, we propose anchor loss and investigate many variants of cluster-level feature alignment, which consists of iterative aggregation and alignment from the overview of dataset. Our extensive experiments have demonstrated that our methods can provide consistent and significant performance improvement with small training efforts after the saturation of traditional training. In both theoretical and experimental aspects, our proposed methods can result in more stable and guided optimization towards better representation and generalization for well-aligned embedding.", "field": [], "task": ["Person Re-Identification"], "method": [], "dataset": ["DukeMTMC-reID", "Market-1501"], "metric": ["Rank-1", "MAP"], "title": "Cluster-level Feature Alignment for Person Re-identification"} {"abstract": "Depth estimation is a traditional computer vision task, which plays a crucial\nrole in understanding 3D scene geometry. Recently,\ndeep-convolutional-neural-networks based methods have achieved promising\nresults in the monocular depth estimation field. 
Specifically, the framework\nthat combines the multi-scale features extracted by the dilated convolution\nbased block (atrous spatial pyramid pooling, ASPP) has achieved a significant\nimprovement in the dense labeling task. However, the discretized and predefined\ndilation rates cannot capture the continuous context information that differs\nacross diverse scenes and easily introduce grid artifacts in depth estimation.\nIn this paper, we propose an attention-based context aggregation network (ACAN)\nto tackle these difficulties. Based on the self-attention model, ACAN\nadaptively learns the task-specific similarities between pixels to model the\ncontext information. First, we recast monocular depth estimation as a dense\nlabeling multi-class classification problem. Then we propose a soft ordinal\ninference to transform the predicted probabilities to continuous depth values,\nwhich can reduce the discretization error (about 1% decrease in RMSE). Second,\nthe proposed ACAN aggregates both the image-level and pixel-level context\ninformation for depth estimation, where the former expresses the statistical\ncharacteristic of the whole image and the latter extracts the long-range\nspatial dependencies for each pixel. Third, to further reduce the\ninconsistency between the RGB image and depth map, we construct an attention\nloss to minimize their information entropy. We evaluate our method on public\nmonocular depth-estimation benchmark datasets (including NYU Depth V2 and KITTI). The\nexperiments demonstrate the superiority of the proposed ACAN, which achieves\nresults competitive with the state of the art.", "field": [], "task": ["Depth Estimation", "Monocular Depth Estimation", "Multi-class Classification"], "method": [], "dataset": ["NYU-Depth V2"], "metric": ["RMSE"], "title": "Attention-based Context Aggregation Network for Monocular Depth Estimation"} {"abstract": "Supervised depth estimation has achieved high accuracy due to the advanced\ndeep network architectures. Since the groundtruth depth labels are hard to\nobtain, recent methods try to learn depth estimation networks in an\nunsupervised way by exploring unsupervised cues, which are effective but less\nreliable than true labels. An emerging way to resolve this dilemma is to\ntransfer knowledge from synthetic images with ground truth depth via domain\nadaptation techniques. However, these approaches overlook the specific geometric\nstructure of the natural images in the target domain (i.e., real data), which\nis important for high-performing depth prediction. Motivated by this\nobservation, we propose a geometry-aware symmetric domain adaptation framework\n(GASDA) to explore the labels in the synthetic data and epipolar geometry in\nthe real data jointly. Moreover, by training two image style translators and\ndepth estimators symmetrically in an end-to-end network, our model achieves\nbetter image style transfer and generates high-quality depth maps. The\nexperimental results demonstrate the effectiveness of our proposed method and\ncomparable performance against the state-of-the-art. Code will be publicly\navailable at: https://github.com/sshan-zhao/GASDA.", "field": [], "task": ["Depth Estimation", "Domain Adaptation", "Monocular Depth Estimation", "Style Transfer"], "method": [], "dataset": ["KITTI Eigen split"], "metric": ["absolute relative error"], "title": "Geometry-Aware Symmetric Domain Adaptation for Monocular Depth Estimation"} {"abstract": "Face sketch synthesis has made great progress in the past few years.
Recent methods based on deep neural networks are able to generate high quality sketches from face photos. However, due to the lack of training data (photo-sketch pairs), none of these deep learning based methods can be applied successfully to face photos in the wild. In this paper, we propose a semi-supervised deep learning architecture which extends face sketch synthesis to handle face photos in the wild by exploiting additional face photos in training. Instead of supervising the network with ground truth sketches, we first perform patch matching in feature space between the input photo and photos in a small reference set of photo-sketch pairs. We then compose a pseudo sketch feature representation using the corresponding sketch feature patches to supervise our network. With the proposed approach, we can train our networks using a small reference set of photo-sketch pairs together with a large face photo dataset without ground truth sketches. Experiments show that our method achieves state-of-the-art performance on both public benchmarks and face photos in the wild. Code is available at https://github.com/chaofengc/Face-Sketch-Wild.", "field": [], "task": ["Face Sketch Synthesis", "Patch Matching"], "method": [], "dataset": ["CUFS", "CUHK", "CUFSF"], "metric": ["SSIM", "FSIM"], "title": "Semi-Supervised Learning for Face Sketch Synthesis in the Wild"} {"abstract": "Meta-reinforcement learning algorithms can enable robots to acquire new skills much more quickly, by leveraging prior experience to learn how to learn. However, much of the current research on meta-reinforcement learning focuses on task distributions that are very narrow. For example, a commonly used meta-reinforcement learning benchmark uses different running velocities for a simulated robot as different tasks. When policies are meta-trained on such narrow task distributions, they cannot possibly generalize to more quickly acquire entirely new tasks. Therefore, if the aim of these methods is to enable faster acquisition of entirely new behaviors, we must evaluate them on task distributions that are sufficiently broad to enable generalization to new behaviors. In this paper, we propose an open-source simulated benchmark for meta-reinforcement learning and multi-task learning consisting of 50 distinct robotic manipulation tasks. Our aim is to make it possible to develop algorithms that generalize to accelerate the acquisition of entirely new, held-out tasks. We evaluate 6 state-of-the-art meta-reinforcement learning and multi-task learning algorithms on these tasks. Surprisingly, while each task and its variations (e.g., with different object positions) can be learned with reasonable success, these algorithms struggle to learn with multiple tasks at the same time, even with as few as ten distinct training tasks.
Our analysis and open-source environments pave the way for future research in multi-task learning and meta-learning that can enable meaningful generalization, thereby unlocking the full potential of these methods.", "field": [], "task": ["Meta-Learning", "Meta Reinforcement Learning", "Multi-Task Learning"], "method": [], "dataset": ["MT50", "ML10"], "metric": ["Meta-test success rate", "Meta-train success rate", "Average Success Rate"], "title": "Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning"} {"abstract": "The performance of text classification has improved tremendously using\nintelligently engineered neural-based models, especially those injecting\ncategorical metadata as additional information, e.g., using user/product\ninformation for sentiment classification. This information has been used to\nmodify parts of the model (e.g., word embeddings, attention mechanisms) such\nthat results can be customized according to the metadata. We observe that\ncurrent representation methods for categorical metadata, which are devised for\nhuman consumption, are not as effective as claimed in popular classification\nmethods, outperformed even by simple concatenation of categorical features in\nthe final layer of the sentence encoder. We conjecture that categorical\nfeatures are harder to represent for machine use, as available context only\nindirectly describes the category, and even such context is often scarce (for\ntail categories). To this end, we propose to use basis vectors to effectively\nincorporate categorical metadata on various parts of a neural-based model. This\nadditionally decreases the number of parameters dramatically, especially when\nthe number of categorical features is large. Extensive experiments on various\ndatasets with different properties are performed and show that through our\nmethod, we can represent categorical metadata more effectively to customize\nparts of the model, including unexplored ones, and increase the performance of\nthe model greatly.", "field": [], "task": ["Sentiment Analysis", "Text Classification", "Word Embeddings"], "method": [], "dataset": ["User and product information"], "metric": ["Yelp 2013 (Acc)"], "title": "Categorical Metadata Representation for Customized Text Classification"} {"abstract": "Change detection is a basic task of remote sensing image processing. The research objective is to identify the change information of interest and filter out the irrelevant change information as interference factors. Recently, the rise of deep learning has provided new tools for change detection, which have yielded impressive results. However, the available methods focus mainly on the difference information between multitemporal remote sensing images and lack robustness to pseudo-change information. To overcome the lack of resistance of current methods to pseudo-changes, in this paper, we propose a new method, namely, dual attentive fully convolutional Siamese networks (DASNet), for change detection in high-resolution images. Through the dual-attention mechanism, long-range dependencies are captured to obtain more discriminant feature representations to enhance the recognition performance of the model. Moreover, sample imbalance is a serious problem in change detection, i.e., unchanged samples far outnumber changed samples, which is one of the main causes of pseudo-changes.
We put forward the weighted double margin contrastive loss to address this problem by penalizing attention to unchanged feature pairs and increasing attention to changed feature pairs. The experimental results of our method on the change detection dataset (CDD) and the building change detection dataset (BCDD) demonstrate that, compared with other baseline methods, the proposed method realizes maximum improvements of 2.1\\% and 3.6\\%, respectively, in the F1 score. Our PyTorch implementation is available at https://github.com/lehaifeng/DASNet.", "field": [], "task": ["Change detection for remote sensing images"], "method": [], "dataset": ["CDD Dataset (season-varying)"], "metric": ["F1-Score"], "title": "DASNet: Dual attentive fully convolutional siamese networks for change detection of high resolution satellite images"} {"abstract": "We propose a new way of constructing invertible neural networks by combining simple building blocks with a novel set of composition rules. This leads to a rich set of invertible architectures, including those similar to ResNets. Inversion is achieved with a locally convergent iterative procedure that is parallelizable and very fast in practice. Additionally, the determinant of the Jacobian can be computed analytically and efficiently, enabling their generative use as flow models. To demonstrate their flexibility, we show that our invertible neural networks are competitive with ResNets on MNIST and CIFAR-10 classification. When trained as generative models, our invertible networks achieve competitive likelihoods on MNIST, CIFAR-10 and ImageNet 32x32, with bits per dimension of 0.98, 3.32 and 4.06 respectively.", "field": [], "task": ["Image Generation"], "method": [], "dataset": ["MNIST", "ImageNet 32x32", "CIFAR-10"], "metric": ["bits/dimension", "bpd"], "title": "MintNet: Building Invertible Neural Networks with Masked Convolutions"} {"abstract": "Entity alignment typically suffers from the issues of structural heterogeneity and limited seed alignments. In this paper, we propose a novel Multi-channel Graph Neural Network model (MuGNN) to learn alignment-oriented knowledge graph (KG) embeddings by robustly encoding two KGs via multiple channels. Each channel encodes KGs via different relation weighting schemes with respect to self-attention towards KG completion and cross-KG attention for pruning exclusive entities respectively, which are further combined via pooling techniques. Moreover, we also infer and transfer rule knowledge for completing two KGs consistently. MuGNN is expected to reconcile the structural differences of two KGs, and thus make better use of seed alignments. Extensive experiments on five publicly available datasets demonstrate our superior performance (5% Hits@1 improvement on average).", "field": [], "task": ["Entity Alignment"], "method": [], "dataset": ["DBP15k zh-en"], "metric": ["Hits@1"], "title": "Multi-Channel Graph Neural Network for Entity Alignment"} {"abstract": "BACKGROUND:\r\nAutomated single-channel spindle detectors, for human sleep EEG, are blind to the presence of spindles in other recorded channels unlike visual annotation by a human expert.\r\n\r\nNEW METHOD:\r\nWe propose a multichannel spindle detection method that aims to detect global and local spindle activity in human sleep EEG. Using a non-linear signal model, which assumes the input EEG to be the sum of a transient and an oscillatory component, we propose a multichannel transient separation algorithm.
Consecutive overlapping blocks of the multichannel oscillatory component are assumed to be low-rank whereas the transient component is assumed to be piecewise constant with a zero baseline. The estimated oscillatory component is used in conjunction with a bandpass filter and the Teager operator for detecting sleep spindles.\r\n\r\nRESULTS AND COMPARISON WITH OTHER METHODS:\r\nThe proposed method is applied to two publicly available databases and compared with 7 existing single-channel automated detectors. F1 scores for the proposed spindle detection method averaged 0.66 (0.02) and 0.62 (0.06) for the two databases, respectively. For an overnight 6 channel EEG signal, the proposed algorithm takes about 4min to detect sleep spindles simultaneously across all channels with a single setting of corresponding algorithmic parameters.\r\n\r\nCONCLUSIONS:\r\nThe proposed method attempts to mimic and utilize, for better spindle detection, a particular human expert behavior where the decision to mark a spindle event may be subconsciously influenced by the presence of a spindle in EEG channels other than the central channel visible on a digital screen.", "field": [], "task": ["EEG", "Spindle Detection"], "method": [], "dataset": ["MASS SS2"], "metric": ["F1-score (@IoU = 0.3)"], "title": "Multichannel sleep spindle detection using sparse low-rank optimization"} {"abstract": "We propose a novel spectral convolutional neural network (CNN) model on graph structured data, namely Distributed Feedback-Looped Networks (DFNets). This model is incorporated with a robust class of spectral graph filters, called feedback-looped filters, to provide better localization on vertices, while still attaining fast convergence and linear memory requirements. Theoretically, feedback-looped filters can guarantee convergence w.r.t. a specified error bound, and be applied universally to any graph without knowing its structure. Furthermore, the propagation rule of this model can diversify features from the preceding layers to produce strong gradient flows. We have evaluated our model using two benchmark tasks: semi-supervised document classification on citation networks and semi-supervised entity classification on a knowledge graph. The experimental results show that our model considerably outperforms the state-of-the-art methods in both benchmark tasks over all datasets.", "field": [], "task": ["Document Classification", "Node Classification"], "method": [], "dataset": ["Cora", "Pubmed", "Citeseer", "NELL"], "metric": ["Accuracy"], "title": "DFNets: Spectral CNNs for Graphs with Feedback-Looped Filters"} {"abstract": "Depth completion recovers a dense depth map from sensor measurements. Current methods are mostly tailored for very sparse depth measurements from LiDARs in outdoor settings, while for indoor scenes Time-of-Flight (ToF) or structured light sensors are mostly used. These sensors provide semi-dense maps, with dense measurements in some regions and almost empty in others. We propose a new model that takes into account the statistical difference between such regions. Our main contribution is a new decoder modulation branch added to the encoder-decoder architecture. The encoder extracts features from the concatenated RGB image and raw depth. Given the mask of missing values as input, the proposed modulation branch controls the decoding of a dense depth map from these features differently for different regions. 
This is implemented by modifying the spatial distribution of output signals inside the decoder via Spatially-Adaptive Denormalization (SPADE) blocks. Our second contribution is a novel training strategy that allows us to train on a semi-dense sensor data when the ground truth depth map is not available. Our model achieves the state of the art results on indoor Matterport3D dataset. Being designed for semi-dense input depth, our model is still competitive with LiDAR-oriented approaches on the KITTI dataset. Our training strategy significantly improves prediction quality with no dense ground truth available, as validated on the NYUv2 dataset.", "field": [], "task": ["Depth Completion", "Depth Estimation", "Semantic Segmentation"], "method": [], "dataset": ["Matterport3D"], "metric": ["RMSE"], "title": "Decoder Modulation for Indoor Depth Completion"} {"abstract": "Few-shot segmentation is challenging because objects within the support and query images could significantly differ in appearance and pose. Using a single prototype acquired directly from the support image to segment the query image causes semantic ambiguity. In this paper, we propose prototype mixture models (PMMs), which correlate diverse image regions with multiple prototypes to enforce the prototype-based semantic representation. Estimated by an Expectation-Maximization algorithm, PMMs incorporate rich channel-wised and spatial semantics from limited support images. Utilized as representations as well as classifiers, PMMs fully leverage the semantics to activate objects in the query image while depressing background regions in a duplex manner. Extensive experiments on Pascal VOC and MS-COCO datasets show that PMMs significantly improve upon state-of-the-arts. Particularly, PMMs improve 5-shot segmentation performance on MS-COCO by up to 5.82\\% with only a moderate cost for model size and inference speed.", "field": [], "task": ["Few-Shot Semantic Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["PASCAL-5i (10-Shot)", "COCO-20i -> Pascal VOC (5-shot)", "COCO-20i (10-shot)", "PASCAL-5i (1-Shot)", "PASCAL-5i (5-Shot)", "COCO-20i -> Pascal VOC (1-shot)"], "metric": ["Mean IoU"], "title": "Prototype Mixture Models for Few-shot Semantic Segmentation"} {"abstract": "Accurate 3D object detection (3DOD) is crucial for safe navigation of complex environments by autonomous robots. Regressing accurate 3D bounding boxes in cluttered environments based on sparse LiDAR data is however a highly challenging problem. We address this task by exploring recent advances in conditional energy-based models (EBMs) for probabilistic regression. While methods employing EBMs for regression have demonstrated impressive performance on 2D object detection in images, these techniques are not directly applicable to 3D bounding boxes. In this work, we therefore design a differentiable pooling operator for 3D bounding boxes, serving as the core module of our EBM network. We further integrate this general approach into the state-of-the-art 3D object detector SA-SSD. On the KITTI dataset, our proposed approach consistently outperforms the SA-SSD baseline across all 3DOD metrics, demonstrating the potential of EBM-based regression for highly accurate 3DOD. 
Code is available at https://github.com/fregu856/ebms_3dod.", "field": [], "task": ["2D Object Detection", "3D Object Detection", "Object Detection", "Regression"], "method": [], "dataset": ["KITTI Cars Hard", "KITTI Cars Moderate", "KITTI Cars Moderate val", "KITTI Cars Hard val", "KITTI Cars Easy val", "KITTI Cars Easy"], "metric": ["AP"], "title": "Accurate 3D Object Detection using Energy-Based Models"} {"abstract": "Deep learning with noisy labels is practically challenging, as the capacity\nof deep models is so high that they can totally memorize these noisy labels\nsooner or later during training. Nonetheless, recent studies on the\nmemorization effects of deep neural networks show that they would first\nmemorize training data of clean labels and then those of noisy labels.\nTherefore in this paper, we propose a new deep learning paradigm called\nCo-teaching for combating with noisy labels. Namely, we train two deep neural\nnetworks simultaneously, and let them teach each other given every mini-batch:\nfirstly, each network feeds forward all data and selects some data of possibly\nclean labels; secondly, two networks communicate with each other what data in\nthis mini-batch should be used for training; finally, each network back\npropagates the data selected by its peer network and updates itself. Empirical\nresults on noisy versions of MNIST, CIFAR-10 and CIFAR-100 demonstrate that\nCo-teaching is much superior to the state-of-the-art methods in the robustness\nof trained deep models.", "field": [], "task": ["Learning with noisy labels"], "method": [], "dataset": ["mini WebVision 1.0"], "metric": ["Top-5 Accuracy", "ImageNet Top-1 Accuracy", "ImageNet Top-5 Accuracy", "Top-1 Accuracy"], "title": "Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels"} {"abstract": "This paper proposes an end-to-end trainable network, SegFlow, for\nsimultaneously predicting pixel-wise object segmentation and optical flow in\nvideos. The proposed SegFlow has two branches where useful information of\nobject segmentation and optical flow is propagated bidirectionally in a unified\nframework. The segmentation branch is based on a fully convolutional network,\nwhich has been proved effective in image segmentation task, and the optical\nflow branch takes advantage of the FlowNet model. The unified framework is\ntrained iteratively offline to learn a generic notion, and fine-tuned online\nfor specific objects. Extensive experiments on both the video object\nsegmentation and optical flow datasets demonstrate that introducing optical\nflow improves the performance of segmentation and vice versa, against the\nstate-of-the-art algorithms.", "field": [], "task": ["Optical Flow Estimation", "Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Unsupervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation", "Visual Object Tracking"], "method": [], "dataset": ["DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "SegFlow: Joint Learning for Video Object Segmentation and Optical Flow"} {"abstract": "The effectiveness of Convolutional Neural Networks (CNNs) has been substantially attributed to their built-in property of translation equivariance. However, CNNs do not have embedded mechanisms to handle other types of transformations. 
In this work, we pay attention to scale changes, which regularly appear in various tasks due to the changing distances between the objects and the camera. First, we introduce the general theory for building scale-equivariant convolutional networks with steerable filters. We develop scale-convolution and generalize other common blocks to be scale-equivariant. We demonstrate the computational efficiency and numerical stability of the proposed method. We compare the proposed models to the previously developed methods for scale equivariance and local scale invariance. We demonstrate state-of-the-art results on MNIST-scale dataset and on STL-10 dataset in the supervised learning setting.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["STL-10"], "metric": ["Percentage correct"], "title": "Scale-Equivariant Steerable Networks"} {"abstract": "We investigate architectures of discriminatively trained deep Convolutional\nNetworks (ConvNets) for action recognition in video. The challenge is to\ncapture the complementary information on appearance from still frames and\nmotion between frames. We also aim to generalise the best performing\nhand-crafted features within a data-driven learning framework.\n Our contribution is three-fold. First, we propose a two-stream ConvNet\narchitecture which incorporates spatial and temporal networks. Second, we\ndemonstrate that a ConvNet trained on multi-frame dense optical flow is able to\nachieve very good performance in spite of limited training data. Finally, we\nshow that multi-task learning, applied to two different action classification\ndatasets, can be used to increase the amount of training data and improve the\nperformance on both.\n Our architecture is trained and evaluated on the standard video actions\nbenchmarks of UCF-101 and HMDB-51, where it is competitive with the state of\nthe art. It also exceeds by a large margin previous attempts to use deep nets\nfor video classification.", "field": [], "task": ["Action Classification", "Action Classification ", "Action Recognition", "Action Recognition In Videos", "Action Recognition In Videos ", "Multi-Task Learning", "Optical Flow Estimation", "Temporal Action Localization", "Video Classification"], "method": [], "dataset": ["UCF101", "VIVA Hand Gestures Dataset", "HMDB-51", "Charades"], "metric": ["Average accuracy of 3 splits", "Accuracy", "3-fold Accuracy", "MAP"], "title": "Two-Stream Convolutional Networks for Action Recognition in Videos"} {"abstract": "Traffic light and sign detectors on autonomous cars are integral for road\nscene perception. The literature is abundant with deep learning networks that\ndetect either lights or signs, not both, which makes them unsuitable for\nreal-life deployment due to the limited graphics processing unit (GPU) memory\nand power available on embedded systems. The root cause of this issue is that\nno public dataset contains both traffic light and sign labels, which leads to\ndifficulties in developing a joint detection framework. We present a deep\nhierarchical architecture in conjunction with a mini-batch proposal selection\nmechanism that allows a network to detect both traffic lights and signs from\ntraining on separate traffic light and sign datasets. Our method solves the\noverlapping issue where instances from one dataset are not labelled in the\nother dataset. We are the first to present a network that performs joint\ndetection on traffic lights and signs. 
We measure our network on the\nTsinghua-Tencent 100K benchmark for traffic sign detection and the Bosch Small\nTraffic Lights benchmark for traffic light detection and show it outperforms\nthe existing Bosch Small Traffic light state-of-the-art method. We focus on\nautonomous car deployment and show our network is more suitable than others\nbecause of its low memory footprint and real-time image processing time.\nQualitative results can be viewed at https://youtu.be/_YmogPzBXOw", "field": [], "task": ["Traffic Sign Detection", "Traffic Sign Recognition"], "method": [], "dataset": ["Bosch Small Traffic Lights", "Tsinghua-Tencent 100K"], "metric": ["MAP"], "title": "A Hierarchical Deep Architecture and Mini-Batch Selection Method For Joint Traffic Sign and Light Detection"} {"abstract": "Region proposal mechanisms are essential for existing deep learning approaches to object detection in images. Although they can generally achieve a good detection performance under normal circumstances, their recall in a scene with extreme cases is unacceptably low. This is mainly because bounding box annotations contain much environment noise information, and non-maximum suppression (NMS) is required to select target boxes. Therefore, in this paper, we propose the first anchor-free and NMS-free object detection model called weakly supervised multimodal annotation segmentation (WSMA-Seg), which utilizes segmentation models to achieve an accurate and robust object detection without NMS. In WSMA-Seg, multimodal annotations are proposed to achieve an instance-aware segmentation using weakly supervised bounding boxes; we also develop a run-data-based following algorithm to trace contours of objects. In addition, we propose a multi-scale pooling segmentation (MSP-Seg) as the underlying segmentation model of WSMA-Seg to achieve a more accurate segmentation and to enhance the detection accuracy of WSMA-Seg. Experimental results on multiple datasets show that the proposed WSMA-Seg approach outperforms the state-of-the-art detectors.", "field": [], "task": ["Face Detection", "Head Detection", "Object Detection", "Region Proposal", "Robust Object Detection"], "method": [], "dataset": ["WIDER Face (Medium)", "Rebar Head", "WIDER Face (Easy)", "COCO test-dev", "WIDER Face (Hard)"], "metric": ["box AP", "F1", "AP"], "title": "Segmentation is All You Need"} {"abstract": "Face detection has drawn much attention in recent decades since the seminal work by Viola and Jones. While many subsequences have improved the work with more powerful learning algorithms, the feature representation used for face detection still can't meet the demand for effectively and efficiently handling faces with large appearance variance in the wild. To solve this bottleneck, we borrow the concept of channel features to the face detection domain, which extends the image channel to diverse types like gradient magnitude and oriented gradient histograms and therefore encodes rich information in a simple form. We adopt a novel variant called aggregate channel features, make a full exploration of feature design, and discover a multi-scale version of features with better performance. To deal with poses of faces in the wild, we propose a multi-view detection approach featuring score re-ranking and detection adjustment. 
Following the learning pipelines in Viola-Jones framework, the multi-view face detector using aggregate channel features shows competitive performance against state-of-the-art algorithms on AFW and FDDB testsets, while runs at 42 FPS on VGA images.", "field": [], "task": ["Face Detection"], "method": [], "dataset": ["WIDER Face (Hard)", "WIDER Face (Medium)", "WIDER Face (Easy)"], "metric": ["AP"], "title": "Aggregate channel features for multi-view face detection"} {"abstract": "This paper introduces PyDCI, a new implementation of Distributional\nCorrespondence Indexing (DCI) written in Python. DCI is a transfer learning\nmethod for cross-domain and cross-lingual text classification for which we had\nprovided an implementation (here called JaDCI) built on top of JaTeCS, a Java\nframework for text classification. PyDCI is a stand-alone version of DCI that\nexploits scikit-learn and the SciPy stack. We here report on new experiments\nthat we have carried out in order to test PyDCI, and in which we use as\nbaselines new high-performing methods that have appeared after DCI was\noriginally proposed. These experiments show that, thanks to a few subtle ways\nin which we have improved DCI, PyDCI outperforms both JaDCI and the\nabove-mentioned high-performing methods, and delivers the best known results on\nthe two popular benchmarks on which we had tested DCI, i.e.,\nMultiDomainSentiment (a.k.a. MDS -- for cross-domain adaptation) and\nWebis-CLS-10 (for cross-lingual adaptation). PyDCI, together with the code\nallowing to replicate our experiments, is available at\nhttps://github.com/AlexMoreo/pydci .", "field": [], "task": ["Domain Adaptation", "Sentiment Analysis", "Text Classification", "Transfer Learning"], "method": [], "dataset": ["Multi-Domain Sentiment Dataset"], "metric": ["DVD", "Average", "Kitchen", "Electronics", "Books"], "title": "Revisiting Distributional Correspondence Indexing: A Python Reimplementation and New Experiments"} {"abstract": "Recent anchor-based deep face detectors have achieved promising performance,\nbut they are still struggling to detect hard faces, such as small, blurred and\npartially occluded faces. A reason is that they treat all images and faces\nequally, without putting more effort on hard ones; however, many training\nimages only contain easy faces, which are less helpful to achieve better\nperformance on hard images. In this paper, we propose that the robustness of a\nface detector against hard faces can be improved by learning small faces on\nhard images. Our intuitions are (1) hard images are the images which contain at\nleast one hard face, thus they facilitate training robust face detectors; (2)\nmost hard faces are small faces and other types of hard faces can be easily\nconverted to small faces by shrinking. We build an anchor-based deep face\ndetector, which only output a single feature map with small anchors, to\nspecifically learn small faces and train it by a novel hard image mining\nstrategy. Extensive experiments have been conducted on WIDER FACE, FDDB, Pascal\nFaces, and AFW datasets to show the effectiveness of our method. Our method\nachieves APs of 95.7, 94.9 and 89.7 on easy, medium and hard WIDER FACE val\ndataset respectively, which surpass the previous state-of-the-arts, especially\non the hard subset. 
Code and model are available at\nhttps://github.com/bairdzhang/smallhardface.", "field": [], "task": ["Face Detection"], "method": [], "dataset": ["PASCAL Face", "WIDER Face (Hard)", "Annotated Faces in the Wild", "FDDB"], "metric": ["AP"], "title": "Robust Face Detection via Learning Small Faces on Hard Images"} {"abstract": "The goal of this paper is to detect the spatio-temporal extent of an action. The two-stream detection network based on RGB and flow provides state-of-the-art accuracy at the expense of a large model-size and heavy computation. We propose to embed RGB and optical-flow into a single two-in-one stream network with new layers. A motion condition layer extracts motion information from flow images, which is leveraged by the motion modulation layer to generate transformation parameters for modulating the low-level RGB features. The method is easily embedded in existing appearance- or two-stream action detection networks, and trained end-to-end. Experiments demonstrate that leveraging the motion condition to modulate RGB features improves detection accuracy. With only half the computation and parameters of the state-of-the-art two-stream methods, our two-in-one stream still achieves impressive results on UCF101-24, UCFSports and J-HMDB.", "field": [], "task": ["Action Detection", "Optical Flow Estimation"], "method": [], "dataset": ["UCF101", "UCF101-24"], "metric": ["mAP", "3-fold Accuracy"], "title": "Dance with Flow: Two-in-One Stream Action Detection"} {"abstract": "Video frame interpolation aims to synthesize nonexistent frames in-between\nthe original frames. While significant advances have been made with recent\ndeep convolutional neural networks, the quality of interpolation is often\nreduced due to large object motion or occlusion. In this work, we propose a\nvideo frame interpolation method which explicitly detects the occlusion by\nexploring the depth information. Specifically, we develop a depth-aware flow\nprojection layer to synthesize intermediate flows that preferentially sample closer\nobjects than farther ones. In addition, we learn hierarchical features to\ngather contextual information from neighboring pixels. The proposed model then\nwarps the input frames, depth maps, and contextual features based on the\noptical flow and local interpolation kernels for synthesizing the output frame.\nOur model is compact, efficient, and fully differentiable. Quantitative and\nqualitative results demonstrate that the proposed model performs favorably\nagainst state-of-the-art frame interpolation methods on a wide variety of\ndatasets.", "field": [], "task": ["Optical Flow Estimation", "Video Frame Interpolation"], "method": [], "dataset": ["Middlebury", "Vimeo90k", "UCF101"], "metric": ["SSIM", "PSNR", "Interpolation Error"], "title": "Depth-Aware Video Frame Interpolation"} {"abstract": "Transfer learning aims at transferring knowledge from a well-labeled domain\nto a similar but different domain with limited or no labels. Unfortunately,\nexisting learning-based methods often involve intensive model selection and\nhyperparameter tuning to obtain good results. Moreover, cross-validation is not\npossible for tuning hyperparameters since there are often no labels in the\ntarget domain. This restricts the wide applicability of transfer learning,\nespecially on computationally-constrained devices such as wearables.
In this\npaper, we propose a practically Easy Transfer Learning (EasyTL) approach which\nrequires no model selection and hyperparameter tuning, while achieving\ncompetitive performance. By exploiting intra-domain structures, EasyTL is able\nto learn both non-parametric transfer features and classifiers. Extensive\nexperiments demonstrate that, compared to state-of-the-art traditional and deep\nmethods, EasyTL satisfies the Occam's Razor principle: it is extremely easy to\nimplement and use while achieving comparable or better performance in\nclassification accuracy and much better computational efficiency. Additionally,\nit is shown that EasyTL can increase the performance of existing transfer\nfeature learning methods.", "field": [], "task": ["Domain Adaptation", "Model Selection", "Transfer Learning"], "method": [], "dataset": ["ImageCLEF-DA", "Office-Home"], "metric": ["Accuracy"], "title": "Easy Transfer Learning By Exploiting Intra-domain Structures"} {"abstract": "In this paper, we propose an accurate edge detector using richer convolutional features (RCF). Since objects in natural images have various scales and aspect ratios, the rich hierarchical representations automatically learned by CNNs are critical and effective for detecting edges and object boundaries. The convolutional features gradually become coarser as the receptive fields increase. Based on these observations, our proposed network architecture makes full use of multiscale and multi-level information to perform the image-to-image edge prediction by combining all of the useful convolutional features into a holistic framework. It is the first attempt to adopt such rich convolutional features in computer vision tasks. Using the VGG16 network, we achieve state-of-the-art results on several available datasets. When evaluating on the well-known BSDS500 benchmark, we achieve an ODS F-measure of .811 while retaining a fast speed (8 FPS). Besides, our fast version of RCF achieves an ODS F-measure of .806 at 30 FPS.", "field": [], "task": ["Edge Detection"], "method": [], "dataset": ["BIPED"], "metric": ["ODS"], "title": "Richer Convolutional Features for Edge Detection"} {"abstract": "The skeleton data have been widely used for action recognition tasks since they can robustly accommodate dynamic circumstances and complex backgrounds. In existing methods, both the joint and bone information in skeleton data have been proved to be of great help for action recognition tasks. However, how to incorporate these two types of data to best take advantage of the relationship between joints and bones remains a problem to be solved. In this work, we represent the skeleton data as a directed acyclic graph based on the kinematic dependency between the joints and bones in the natural human body. A novel directed graph neural network is specially designed to extract the information of joints, bones and their relations and make predictions based on the extracted features. In addition, to better fit the action recognition task, the topological structure of the graph is made adaptive based on the training process, which brings notable improvement. Moreover, the motion information of the skeleton sequence is exploited and combined with the spatial information to further enhance the performance in a two-stream framework. Our final model is tested on two large-scale datasets, NTU-RGBD and Skeleton-Kinetics, and exceeds state-of-the-art performance on both of them.
\r", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D", "Kinetics-Skeleton dataset"], "metric": ["Accuracy (CS)", "Accuracy (CV)", "Accuracy"], "title": "Skeleton-Based Action Recognition With Directed Graph Neural Networks"} {"abstract": "We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.", "field": [], "task": ["Latent Variable Models", "Multivariate Time Series Forecasting", "Multivariate Time Series Imputation"], "method": [], "dataset": ["MuJoCo", "MIMIC-III", "USHCN-Daily", "PhysioNet Challenge 2012"], "metric": ["MSE (10^-2, 50% missing)", "MSE (10^2, 50% missing)", "MSE stdev", "MSE", "mse (10^-3)", "NegLL"], "title": "Neural Ordinary Differential Equations"} {"abstract": "Open-domain targeted sentiment analysis aims to detect opinion targets along with their sentiment polarities from a sentence. Prior work typically formulates this task as a sequence tagging problem. However, such formulation suffers from problems such as huge search space and sentiment inconsistency. To address these problems, we propose a span-based extract-then-classify framework, where multiple opinion targets are directly extracted from the sentence under the supervision of target span boundaries, and corresponding polarities are then classified using their span representations. We further investigate three approaches under this framework, namely the pipeline, joint, and collapsed models. Experiments on three benchmark datasets show that our approach consistently outperforms the sequence tagging baseline. Moreover, we find that the pipeline model achieves the best performance compared with the other two models.", "field": [], "task": ["Aspect-Based Sentiment Analysis", "Sentiment Analysis"], "method": [], "dataset": ["SemEval 2014 Task 4 Subtask 1+2", "SemEval 2014 Task 4 Laptop"], "metric": ["F1"], "title": "Open-Domain Targeted Sentiment Analysis via Span-Based Extraction and Classification"} {"abstract": "Planar homography estimation refers to the problem of computing a bijective linear mapping of pixels between two images. While this problem has been studied with convolutional neural networks (CNNs), existing methods simply regress the location of the four corners using a dense layer preceded by a fully-connected layer. This vector representation damages the spatial structure of the corners since they have a clear spatial order. Moreover, four points are the minimum required to compute the homography, and so such an approach is susceptible to perturbation. In this paper, we propose a conceptually simple, reliable, and general framework for homography estimation. 
In contrast to previous works, we formulate this problem as a perspective field (PF), which models the essence of the homography - pixel-to-pixel bijection. The PF is naturally learned by the proposed fully convolutional residual network, PFNet, to keep the spatial order of each pixel. Moreover, since every pixels\u2019 displacement can be obtained from the PF, it enables robust homography estimation by utilizing dense correspondences. Our experiments demonstrate the proposed method outperforms traditional correspondence-based approaches and state-of-the-art CNN approaches in terms of accuracy while also having a smaller network size. In addition, the new parameterization of this task is general and can be implemented by any fully convolutional network (FCN) architecture.", "field": [], "task": ["Homography Estimation"], "method": [], "dataset": ["COCO 2014"], "metric": ["MACE"], "title": "Rethinking Planar Homography Estimation Using Perspective Fields"} {"abstract": "Currently, researchers have paid great attention to retrieval-based dialogues in open-domain. In particular, people study the problem by investigating context-response matching for multi-turn response selection based on publicly recognized benchmark data sets. State-of-the-art methods require a response to interact with each utterance in a context from the beginning, but the interaction is performed in a shallow way. In this work, we let utterance-response interaction go deep by proposing an interaction-over-interaction network (IoI). The model performs matching by stacking multiple interaction blocks in which residual information from one time of interaction initiates the interaction process again. Thus, matching information within an utterance-response pair is extracted from the interaction of the pair in an iterative fashion, and the information flows along the chain of the blocks via representations. Evaluation results on three benchmark data sets indicate that IoI can significantly outperform state-of-the-art methods in terms of various matching metrics. Through further analysis, we also unveil how the depth of interaction affects the performance of IoI.", "field": [], "task": ["Conversational Response Selection"], "method": [], "dataset": ["Ubuntu Dialogue (v1, Ranking)"], "metric": ["R10@1", "R10@5", "R2@1", "R10@2"], "title": "One Time of Interaction May Not Be Enough: Go Deep with an Interaction-over-Interaction Network for Response Selection in Dialogues"} {"abstract": "Multi-hop knowledge graph (KG) reasoning is an effective and explainable method for predicting the target entity via reasoning paths in query answering (QA) task. Most previous methods assume that every relation in KGs has enough training triples, regardless of those few-shot relations which cannot provide sufficient triples for training robust reasoning models. In fact, the performance of existing multi-hop reasoning methods drops significantly on few-shot relations. In this paper, we propose a meta-based multi-hop reasoning method (Meta-KGR), which adopts meta-learning to learn effective meta parameters from high-frequency relations that could quickly adapt to few-shot relations. We evaluate Meta-KGR on two public datasets sampled from Freebase and NELL, and the experimental results show that Meta-KGR outperforms the current state-of-the-art methods in few-shot scenarios. 
Our code and datasets can be obtained from https://github.com/THU-KEG/MetaKGR.", "field": [], "task": ["Meta-Learning"], "method": [], "dataset": ["NELL-995", "FB15k-237"], "metric": ["Hits@10", "MRR", "Appropriate Evaluation Protocols", "Hits@1"], "title": "Adapting Meta Knowledge Graph Information for Multi-Hop Reasoning over Few-Shot Relations"} {"abstract": "People using white canes for navigation find it challenging to concurrently access devices such as smartphones. Building on prior research on abandonment of specialized devices, we explore a new touch free mode of interaction wherein a person with visual impairment can perform gestures on their existing white cane to trigger tasks on their smartphone. We present GesturePod, an easy-to-integrate device that clips on to any white cane, and detects gestures performed with the cane. With GesturePod, a user can perform common tasks on their smartphone without touch or even removing the phone from their pocket or bag. We discuss the challenges in building the device and our design choices. We propose a novel, efficient machine learning pipeline to train and deploy the gesture recognition model. Our in-lab study shows that GesturePod achieves 92% gesture recognition accuracy and can help perform common smartphone tasks faster. Our in-wild study suggests that GesturePod is a promising tool to improve smartphone access for people with VI, especially in constrained outdoor scenarios.", "field": [], "task": ["Gesture Recognition", "Time Series", "Time Series Classification"], "method": [], "dataset": ["GesturePod"], "metric": ["Real World Accuracy"], "title": "GesturePod: Enabling On-device Gesture-based Interaction for White Cane Users"} {"abstract": "Recently, large-scale pre-trained language models have demonstrated impressive performance on several commonsense-reasoning benchmark datasets. However, building machines with commonsense to compose realistically plausible sentences remains challenging. In this paper, we present a constrained text generation task, CommonGen associated with a benchmark dataset, to explicitly test machines for the ability of generative commonsense reasoning. Given a set of common concepts (e.g., {dog, frisbee, catch, throw}); the task is to generate a coherent sentence describing an everyday scenario using these concepts (e.g., \"a man throws a frisbee and his dog catches it\"). The CommonGen task is challenging because it inherently requires 1) relational reasoning with background commonsense knowledge, and 2) compositional generalization ability to work on unseen concept combinations. Our dataset, constructed through a combination of crowdsourced and existing caption corpora, consists of 79k commonsense descriptions over 35k unique concept-sets. Experiments show that there is a large gap between state-of-the-art text generation models (e.g., T5) and human performance. Furthermore, we demonstrate that the learned generative commonsense reasoning capability can be transferred to improve downstream tasks such as CommonsenseQA by generating additional context.", "field": [], "task": ["Common Sense Reasoning", "Question Answering", "Relational Reasoning", "Text Generation"], "method": [], "dataset": ["CommonGen"], "metric": ["CIDEr"], "title": "CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning"} {"abstract": "Graphs offer a natural way to formulate Multiple Object Tracking (MOT) within the tracking-by-detection paradigm.
However, they also introduce a major challenge for learning methods, as defining a model that can operate on such structured domain is not trivial. As a consequence, most learning-based work has been devoted to learning better features for MOT and then using these with well-established optimization frameworks. In this work, we exploit the classical network flow formulation of MOT to define a fully differentiable framework based on Message Passing Networks (MPNs). By operating directly on the graph domain, our method can reason globally over an entire set of detections and predict final solutions. Hence, we show that learning in MOT does not need to be restricted to feature extraction, but it can also be applied to the data association step. We show a significant improvement in both MOTA and IDF1 on three publicly available benchmarks. Our code is available at https://bit.ly/motsolv.\r", "field": [], "task": ["Multi-Object Tracking", "Multiple Object Tracking", "Object Tracking"], "method": [], "dataset": ["MOT17", "2D MOT 2015", "MOT16", "MOT20"], "metric": ["MOTA", "IDF1"], "title": "Learning a Neural Solver for Multiple Object Tracking"} {"abstract": "Semantic segmentation is a challenging task that addresses most of the perception needs of Intelligent Vehicles (IV) in an unified way. Deep Neural Networks excel at this task, as they can be trained end-to-end to accurately classify multiple object categories in an image at pixel level. However, a good trade-off between high quality and computational resources is yet not present in state-of-the-art semantic segmentation approaches, limiting their application in real vehicles. In this paper, we propose a deep architecture that is able to run in real-time while providing accurate semantic segmentation. The core of our architecture is a novel layer that uses residual connections and factorized convolutions in order to remain efficient while retaining remarkable accuracy. Our approach is able to run at over 83 FPS in a single Titan X, and 7 FPS in a Jetson TX1 (embedded GPU). A comprehensive set of experiments on the publicly available Cityscapes dataset demonstrates that our system achieves an accuracy that is similar to the state of the art, while being orders of magnitude faster to compute than other architectures that achieve top precision. The resulting trade-off makes our model an ideal approach for scene understanding in IV applications. The code is publicly available at: https://github.com/Eromera/erfnet", "field": [], "task": ["Real-Time Semantic Segmentation", "Scene Understanding", "Semantic Segmentation"], "method": [], "dataset": ["Cityscapes val", "Cityscapes test"], "metric": ["Mean IoU (class)", "mIoU"], "title": "ERFNet: Efficient Residual Factorized ConvNet for Real-time Semantic Segmentation"} {"abstract": "A common problem in human-object interaction (HOI) detection task is that numerous HOI classes have only a small number of labeled examples, resulting in training sets with a long-tailed distribution. The lack of positive labels can lead to low classification accuracy for these classes. Towards addressing this issue, we observe that there exist natural correlations and anti-correlations among human-object interactions. In this paper, we model the correlations as action co-occurrence matrices and present techniques to learn these priors and leverage them for more effective training, especially in rare classes. 
The utility of our approach is demonstrated experimentally, where the performance of our approach exceeds the state-of-the-art methods on both of the two leading HOI detection benchmark datasets, HICO-Det and V-COCO.", "field": [], "task": ["Human-Object Interaction Detection"], "method": [], "dataset": ["HICO-DET"], "metric": ["MAP"], "title": "Detecting Human-Object Interactions with Action Co-occurrence Priors"} {"abstract": "We consider the problem of scaling deep generative shape models to\nhigh-resolution. Drawing motivation from the canonical view representation of\nobjects, we introduce a novel method for the fast up-sampling of 3D objects in\nvoxel space through networks that perform super-resolution on the six\northographic depth projections. This allows us to generate high-resolution\nobjects with more efficient scaling than methods which work directly in 3D. We\ndecompose the problem of 2D depth super-resolution into silhouette and depth\nprediction to capture both structure and fine detail. This allows our method to\ngenerate sharp edges more easily than an individual network. We evaluate our\nwork on multiple experiments concerning high-resolution 3D objects, and show\nour system is capable of accurately predicting novel objects at resolutions as\nlarge as 512$\\mathbf{\\times}$512$\\mathbf{\\times}$512 -- the highest resolution\nreported for this task. We achieve state-of-the-art performance on 3D object\nreconstruction from RGB images on the ShapeNet dataset, and further demonstrate\nthe first effective 3D super-resolution method.", "field": [], "task": ["3D Object Reconstruction", "3D Object Super-Resolution", "Depth Estimation", "Object Reconstruction", "Super-Resolution"], "method": [], "dataset": ["Data3D\u2212R2N2"], "metric": ["Avg F1"], "title": "Multi-View Silhouette and Depth Decomposition for High Resolution 3D Object Representation"} {"abstract": "Inspired by how humans summarize long documents, we propose an accurate and\nfast summarization model that first selects salient sentences and then rewrites\nthem abstractively (i.e., compresses and paraphrases) to generate a concise\noverall summary. We use a novel sentence-level policy gradient method to bridge\nthe non-differentiable computation between these two neural networks in a\nhierarchical way, while maintaining language fluency. Empirically, we achieve\nthe new state-of-the-art on all metrics (including human evaluation) on the\nCNN/Daily Mail dataset, as well as significantly higher abstractiveness scores.\nMoreover, by first operating at the sentence-level and then the word-level, we\nenable parallel decoding of our neural generative model that results in\nsubstantially faster (10-20x) inference speed as well as 4x faster training\nconvergence than previous long-paragraph encoder-decoder models. We also\ndemonstrate the generalization of our model on the test-only DUC-2002 dataset,\nwhere we achieve higher scores than a state-of-the-art model.", "field": [], "task": ["Abstractive Text Summarization", "Sentence ReWriting", "Text Summarization"], "method": [], "dataset": ["CNN / Daily Mail", "CNN / Daily Mail (Anonymized)"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting"} {"abstract": "There has been much recent work on training neural attention models at the\nsequence-level using either reinforcement learning-style methods or by\noptimizing the beam. 
In this paper, we survey a range of classical objective\nfunctions that have been widely used to train linear models for structured\nprediction and apply them to neural sequence to sequence models. Our\nexperiments show that these losses can perform surprisingly well by slightly\noutperforming beam search optimization in a like for like setup. We also report\nnew state of the art results on both IWSLT'14 German-English translation as\nwell as Gigaword abstractive summarization. On the larger WMT'14 English-French\ntranslation task, sequence-level training achieves 41.5 BLEU which is on par\nwith the state of the art.", "field": [], "task": ["Abstractive Text Summarization", "Machine Translation", "Structured Prediction"], "method": [], "dataset": ["IWSLT2015 German-English", "IWSLT2014 German-English"], "metric": ["BLEU score"], "title": "Classical Structured Prediction Losses for Sequence to Sequence Learning"} {"abstract": "In this paper, we introduce a new large-scale face dataset named VGGFace2.\nThe dataset contains 3.31 million images of 9131 subjects, with an average of\n362.6 images for each subject. Images are downloaded from Google Image Search\nand have large variations in pose, age, illumination, ethnicity and profession\n(e.g. actors, athletes, politicians). The dataset was collected with three\ngoals in mind: (i) to have both a large number of identities and also a large\nnumber of images for each identity; (ii) to cover a large range of pose, age\nand ethnicity; and (iii) to minimize the label noise. We describe how the\ndataset was collected, in particular the automated and manual filtering stages\nto ensure a high accuracy for the images of each identity. To assess face\nrecognition performance using the new dataset, we train ResNet-50 (with and\nwithout Squeeze-and-Excitation blocks) Convolutional Neural Networks on\nVGGFace2, on MS- Celeb-1M, and on their union, and show that training on\nVGGFace2 leads to improved recognition performance over pose and age. Finally,\nusing the models trained on these datasets, we demonstrate state-of-the-art\nperformance on all the IARPA Janus face recognition benchmarks, e.g. IJB-A,\nIJB-B and IJB-C, exceeding the previous state-of-the-art by a large margin.\nDatasets and models are publicly available.", "field": [], "task": ["Face Recognition", "Face Verification", "Image Retrieval"], "method": [], "dataset": ["IJB-A", "IJB-B", "IJB-C"], "metric": ["TAR @ FAR=0.01", "TAR @ FAR=0.1", "TAR @ FAR=0.001"], "title": "VGGFace2: A dataset for recognising faces across pose and age"} {"abstract": "We consider the task of text attribute transfer: transforming a sentence to\nalter a specific attribute (e.g., sentiment) while preserving its\nattribute-independent content (e.g., changing \"screen is just the right size\"\nto \"screen is too small\"). Our training data includes only sentences labeled\nwith their attribute (e.g., positive or negative), but not pairs of sentences\nthat differ only in their attributes, so we must learn to disentangle\nattributes from attribute-independent content in an unsupervised way. 
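One simple statistic behind the attribute-marking phrases that the deletion-based method described below relies on is a relative-frequency (salience) score computed from the two attribute-labelled corpora. The toy corpora, the restriction to unigrams, the smoothing constant and the threshold in this sketch are illustrative assumptions:

from collections import Counter

positive = ["the screen is great", "great battery and great screen", "love it"]
negative = ["the screen is too small", "battery is too weak", "hate it"]

def salience(target_corpus, other_corpus, smooth=1.0):
    # How much more often a word appears under the target attribute than under the other one.
    target_counts, other_counts = Counter(), Counter()
    for sentence in target_corpus:
        target_counts.update(sentence.split())
    for sentence in other_corpus:
        other_counts.update(sentence.split())
    return {w: (target_counts[w] + smooth) / (other_counts[w] + smooth) for w in target_counts}

negative_salience = salience(negative, positive)
markers = {w: s for w, s in negative_salience.items() if s >= 2.0}   # candidate attribute markers
print(sorted(markers, key=markers.get, reverse=True))                # includes "too" and "small"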
Previous\nwork using adversarial methods has struggled to produce high-quality outputs.\nIn this paper, we propose simpler methods motivated by the observation that\ntext attributes are often marked by distinctive phrases (e.g., \"too small\").\nOur strongest method extracts content words by deleting phrases associated with\nthe sentence's original attribute value, retrieves new phrases associated with\nthe target attribute, and uses a neural model to fluently combine these into a\nfinal output. On human evaluation, our best method generates grammatical and\nappropriate responses on 22% more inputs than the best previous system,\naveraged over three attribute transfer datasets: altering sentiment of reviews\non Yelp, altering sentiment of reviews on Amazon, and altering image captions\nto be more romantic or humorous.", "field": [], "task": ["Image Captioning", "Style Transfer", "Text Attribute Transfer"], "method": [], "dataset": ["Yelp Review Dataset (Small)"], "metric": ["G-Score (BLEU, Accuracy)"], "title": "Delete, Retrieve, Generate: A Simple Approach to Sentiment and Style Transfer"} {"abstract": "Face detection has been well studied for many years and one of remaining\nchallenges is to detect small, blurred and partially occluded faces in\nuncontrolled environment. This paper proposes a novel context-assisted single\nshot face detector, named \\emph{PyramidBox} to handle the hard face detection\nproblem. Observing the importance of the context, we improve the utilization of\ncontextual information in the following three aspects. First, we design a novel\ncontext anchor to supervise high-level contextual feature learning by a\nsemi-supervised method, which we call it PyramidAnchors. Second, we propose the\nLow-level Feature Pyramid Network to combine adequate high-level context\nsemantic feature and Low-level facial feature together, which also allows the\nPyramidBox to predict faces of all scales in a single shot. Third, we introduce\na context-sensitive structure to increase the capacity of prediction network to\nimprove the final accuracy of output. In addition, we use the method of\nData-anchor-sampling to augment the training samples across different scales,\nwhich increases the diversity of training data for smaller faces. By exploiting\nthe value of context, PyramidBox achieves superior performance among the\nstate-of-the-art over the two common face detection benchmarks, FDDB and WIDER\nFACE. Our code is available in PaddlePaddle:\n\\href{https://github.com/PaddlePaddle/models/tree/develop/fluid/face_detection}{\\url{https://github.com/PaddlePaddle/models/tree/develop/fluid/face_detection}}.", "field": [], "task": ["Face Detection"], "method": [], "dataset": ["WIDER Face (Hard)", "WIDER Face (Medium)", "WIDER Face (Easy)", "FDDB"], "metric": ["AP"], "title": "PyramidBox: A Context-assisted Single Shot Face Detector"} {"abstract": "Several machine learning models, including neural networks, consistently\nmisclassify adversarial examples---inputs formed by applying small but\nintentionally worst-case perturbations to examples from the dataset, such that\nthe perturbed input results in the model outputting an incorrect answer with\nhigh confidence. Early attempts at explaining this phenomenon focused on\nnonlinearity and overfitting. 
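For a differentiable classifier, such a worst-case perturbation can be built with a single gradient-sign step under a small L-infinity budget, in the spirit of the fast method this abstract goes on to propose. The binary logistic model, the random weights and the epsilon in this sketch are illustrative assumptions:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_example(x, y, w, b, eps=0.25):
    # Binary logistic classifier p(y=+1 | x) = sigmoid(w.x + b), with label y in {-1, +1}.
    margin = y * (w @ x + b)
    grad_x = -y * sigmoid(-margin) * w          # gradient of the log-loss with respect to the input x
    # Step in the direction that most increases the loss under an L-infinity budget of eps.
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.0
x, y = rng.normal(size=8), 1
x_adv = adversarial_example(x, y, w, b)
print("clean score:", float(w @ x + b), " perturbed score:", float(w @ x_adv + b))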
We argue instead that the primary cause of neural\nnetworks' vulnerability to adversarial perturbation is their linear nature.\nThis explanation is supported by new quantitative results while giving the\nfirst explanation of the most intriguing fact about them: their generalization\nacross architectures and training sets. Moreover, this view yields a simple and\nfast method of generating adversarial examples. Using this approach to provide\nexamples for adversarial training, we reduce the test set error of a maxout\nnetwork on the MNIST dataset.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["MNIST"], "metric": ["Percentage error"], "title": "Explaining and Harnessing Adversarial Examples"} {"abstract": "We introduce a simple yet surprisingly powerful model to incorporate\nattention in action recognition and human object interaction tasks. Our\nproposed attention module can be trained with or without extra supervision, and\ngives a sizable boost in accuracy while keeping the network size and\ncomputational cost nearly the same. It leads to significant improvements over\nstate of the art base architecture on three standard action recognition\nbenchmarks across still images and videos, and establishes new state of the art\non MPII dataset with 12.5% relative improvement. We also perform an extensive\nanalysis of our attention module both empirically and analytically. In terms of\nthe latter, we introduce a novel derivation of bottom-up and top-down attention\nas low-rank approximations of bilinear pooling methods (typically used for\nfine-grained classification). From this perspective, our attention formulation\nsuggests a novel characterization of action recognition as a fine-grained\nrecognition problem.", "field": [], "task": ["Action Recognition", "Human-Object Interaction Detection", "Temporal Action Localization"], "method": [], "dataset": ["HICO"], "metric": ["mAP"], "title": "Attentional Pooling for Action Recognition"} {"abstract": "We present a simple sequential sentence encoder for multi-domain natural\nlanguage inference. Our encoder is based on stacked bidirectional LSTM-RNNs\nwith shortcut connections and fine-tuning of word embeddings. The overall\nsupervised model uses the above encoder to encode two input sentences into two\nvectors, and then uses a classifier over the vector combination to label the\nrelationship between these two sentences as that of entailment, contradiction,\nor neural. Our Shortcut-Stacked sentence encoders achieve strong improvements\nover existing encoders on matched and mismatched multi-domain natural language\ninference (top non-ensemble single-model result in the EMNLP RepEval 2017\nShared Task (Nangia et al., 2017)). Moreover, they achieve the new\nstate-of-the-art encoding result on the original SNLI dataset (Bowman et al.,\n2015).", "field": [], "task": ["Natural Language Inference", "Word Embeddings"], "method": [], "dataset": ["SNLI"], "metric": ["Parameters", "% Train Accuracy", "% Test Accuracy"], "title": "Shortcut-Stacked Sentence Encoders for Multi-Domain Inference"} {"abstract": "Image restoration is a long-standing problem in low-level computer vision\nwith many interesting applications. We describe a flexible learning framework\nbased on the concept of nonlinear reaction diffusion models for various image\nrestoration problems. 
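As a point of reference for the trainable model introduced next, here is a toy, hand-parameterized nonlinear diffusion step of the classical Perona-Malik kind applied to denoising. The edge-stopping function, its parameter, the step size and the periodic boundary handling via np.roll are illustrative choices:

import numpy as np

def diffusion_step(u, kappa=0.3, step=0.15):
    # Differences toward the four neighbours (periodic boundaries via np.roll, kept for brevity).
    dN = np.roll(u, 1, axis=0) - u
    dS = np.roll(u, -1, axis=0) - u
    dE = np.roll(u, -1, axis=1) - u
    dW = np.roll(u, 1, axis=1) - u
    def conductance(d):
        # Hand-set edge-stopping ("influence") function: diffuse less across strong edges.
        return 1.0 / (1.0 + (d / kappa) ** 2)
    flux = conductance(dN) * dN + conductance(dS) * dS + conductance(dE) * dE + conductance(dW) * dW
    return u + step * flux

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                                  # a simple square test image
noisy = clean + 0.2 * rng.normal(size=clean.shape)
u = noisy
for _ in range(30):
    u = diffusion_step(u)
print("MSE noisy:", float(((noisy - clean) ** 2).mean()),
      " MSE diffused:", float(((u - clean) ** 2).mean()))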
By embodying recent improvements in nonlinear diffusion\nmodels, we propose a dynamic nonlinear reaction diffusion model with\ntime-dependent parameters (\\ie, linear filters and influence functions). In\ncontrast to previous nonlinear diffusion models, all the parameters, including\nthe filters and the influence functions, are simultaneously learned from\ntraining data through a loss based approach. We call this approach TNRD --\n\\textit{Trainable Nonlinear Reaction Diffusion}. The TNRD approach is\napplicable for a variety of image restoration tasks by incorporating\nappropriate reaction force. We demonstrate its capabilities with three\nrepresentative applications, Gaussian image denoising, single image super\nresolution and JPEG deblocking. Experiments show that our trained nonlinear\ndiffusion models largely benefit from the training of the parameters and\nfinally lead to the best reported performance on common test datasets for the\ntested applications. Our trained models preserve the structural simplicity of\ndiffusion models and take only a small number of diffusion steps, thus are\nhighly efficient. Moreover, they are also well-suited for parallel computation\non GPUs, which makes the inference procedure extremely fast.", "field": [], "task": ["Denoising", "Image Denoising", "Image Restoration", "Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["BSD68 sigma15", "Darmstadt Noise Dataset", "Set14 - 4x upscaling", "Urban100 sigma15", "Set5 - 4x upscaling", "BSD68 sigma25"], "metric": ["SSIM (sRGB)", "PSNR", "PSNR (sRGB)"], "title": "Trainable Nonlinear Reaction Diffusion: A Flexible Framework for Fast and Effective Image Restoration"} {"abstract": "Despite of the recent success of neural networks for human pose estimation,\ncurrent approaches are limited to pose estimation of a single person and cannot\nhandle humans in groups or crowds. In this work, we propose a method that\nestimates the poses of multiple persons in an image in which a person can be\noccluded by another person or might be truncated. To this end, we consider\nmulti-person pose estimation as a joint-to-person association problem. We\nconstruct a fully connected graph from a set of detected joint candidates in an\nimage and resolve the joint-to-person association and outlier detection using\ninteger linear programming. Since solving joint-to-person association jointly\nfor all persons in an image is an NP-hard problem and even approximations are\nexpensive, we solve the problem locally for each person. On the challenging\nMPII Human Pose Dataset for multiple persons, our approach achieves the\naccuracy of a state-of-the-art method, but it is 6,000 to 19,000 times faster.", "field": [], "task": ["Keypoint Detection", "Multi-Person Pose Estimation", "Outlier Detection", "Pose Estimation"], "method": [], "dataset": ["MPII Multi-Person"], "metric": ["AP", "mAP@0.5"], "title": "Multi-Person Pose Estimation with Local Joint-to-Person Associations"} {"abstract": "In this work, we connect two distinct concepts for unsupervised domain\nadaptation: feature distribution alignment between domains by utilizing the\ntask-specific decision boundary and the Wasserstein metric. Our proposed sliced\nWasserstein discrepancy (SWD) is designed to capture the natural notion of\ndissimilarity between the outputs of task-specific classifiers. 
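Below is a minimal NumPy sketch of a sliced 1-Wasserstein discrepancy between two batches of classifier outputs; the number of random projections, the equal batch sizes and the softmax toy inputs are simplifying assumptions, and the sketch is not claimed to reproduce the exact discrepancy used by the method described here:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def sliced_wasserstein(p, q, n_projections=64, seed=0):
    # p, q: arrays of shape (n, d) holding two batches of d-dimensional outputs.
    rng = np.random.default_rng(seed)
    directions = rng.normal(size=(n_projections, p.shape[1]))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)   # random unit directions
    # Project both batches onto every direction, sort, and compare the 1-D distributions.
    proj_p = np.sort(p @ directions.T, axis=0)
    proj_q = np.sort(q @ directions.T, axis=0)
    return float(np.mean(np.abs(proj_p - proj_q)))

rng = np.random.default_rng(1)
outputs_1 = softmax(rng.normal(size=(128, 5)))               # outputs of classifier 1
outputs_2 = softmax(rng.normal(loc=0.5, size=(128, 5)))      # outputs of classifier 2
print("sliced Wasserstein discrepancy:", sliced_wasserstein(outputs_1, outputs_2))

Because each one-dimensional projection is compared simply by sorting, the quantity stays cheap to evaluate, which is part of what makes sliced formulations attractive as training signals.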
It provides a\ngeometrically meaningful guidance to detect target samples that are far from\nthe support of the source and enables efficient distribution alignment in an\nend-to-end trainable fashion. In the experiments, we validate the effectiveness\nand genericness of our method on digit and sign recognition, image\nclassification, semantic segmentation, and object detection.", "field": [], "task": ["Domain Adaptation", "Image Classification", "Object Detection", "Semantic Segmentation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["VisDA2017", "GTAV-to-Cityscapes Labels", "SYNTHIA-to-Cityscapes"], "metric": ["mIoU", "mIoU (13 classes)", "Accuracy"], "title": "Sliced Wasserstein Discrepancy for Unsupervised Domain Adaptation"} {"abstract": "In image classification, visual separability between different object\ncategories is highly uneven, and some categories are more difficult to\ndistinguish than others. Such difficult categories demand more dedicated\nclassifiers. However, existing deep convolutional neural networks (CNN) are\ntrained as flat N-way classifiers, and few efforts have been made to leverage\nthe hierarchical structure of categories. In this paper, we introduce\nhierarchical deep CNNs (HD-CNNs) by embedding deep CNNs into a category\nhierarchy. An HD-CNN separates easy classes using a coarse category classifier\nwhile distinguishing difficult classes using fine category classifiers. During\nHD-CNN training, component-wise pretraining is followed by global finetuning\nwith a multinomial logistic loss regularized by a coarse category consistency\nterm. In addition, conditional executions of fine category classifiers and\nlayer parameter compression make HD-CNNs scalable for large-scale visual\nrecognition. We achieve state-of-the-art results on both CIFAR100 and\nlarge-scale ImageNet 1000-class benchmark datasets. In our experiments, we\nbuild up three different HD-CNNs and they lower the top-1 error of the standard\nCNNs by 2.65%, 3.1% and 1.1%, respectively.", "field": [], "task": ["Hierarchical structure", "Image Classification", "Object Recognition"], "method": [], "dataset": ["CIFAR-100"], "metric": ["Percentage correct"], "title": "HD-CNN: Hierarchical Deep Convolutional Neural Network for Large Scale Visual Recognition"} {"abstract": "The recent increase in the extensive use of digital imaging technologies has brought with it a simultaneous demand for higher-resolution images. We develop a novel edge-informed approach to single image super-resolution (SISR). The SISR problem is reformulated as an image inpainting task. We use a two-stage inpainting model as a baseline for super-resolution and show its effectiveness for different scale factors (x2, x4, x8) compared to basic interpolation schemes. This model is trained using a joint optimization of image contents (texture and color) and structures (edges). Quantitative and qualitative comparisons are included and the proposed model is compared with current state-of-the-art techniques. We show that our method of decoupling structure and texture reconstruction improves the quality of the final reconstructed high-resolution image. 
Code and models available at: https://github.com/knazeri/edge-informed-sisr", "field": [], "task": ["Image Inpainting", "Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["Set5 - 4x upscaling", "BSD100 - 4x upscaling", "Set14 - 4x upscaling", "Celeb-HQ 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Edge-Informed Single Image Super-Resolution"} {"abstract": "Recent years have witnessed rapid progress in detecting and recognizing\nindividual object instances. To understand the situation in a scene, however,\ncomputers need to recognize how humans interact with surrounding objects. In\nthis paper, we tackle the challenging task of detecting human-object\ninteractions (HOI). Our core idea is that the appearance of a person or an\nobject instance contains informative cues on which relevant parts of an image\nto attend to for facilitating interaction prediction. To exploit these cues, we\npropose an instance-centric attention module that learns to dynamically\nhighlight regions in an image conditioned on the appearance of each instance.\nSuch an attention-based network allows us to selectively aggregate features\nrelevant for recognizing HOIs. We validate the efficacy of the proposed network\non the Verb in COCO and HICO-DET datasets and show that our approach compares\nfavorably with the state-of-the-arts.", "field": [], "task": ["Human-Object Interaction Detection"], "method": [], "dataset": ["HICO-DET", "Ambiguious-HOI", "V-COCO"], "metric": ["mAP", "MAP"], "title": "iCAN: Instance-Centric Attention Network for Human-Object Interaction Detection"} {"abstract": "In this paper, we introduce Iterative Text Summarization (ITS), an iteration-based model for supervised extractive text summarization, inspired by the observation that it is often necessary for a human to read an article multiple times in order to fully understand and summarize its contents. Current summarization approaches read through a document only once to generate a document representation, resulting in a sub-optimal representation. To address this issue we introduce a model which iteratively polishes the document representation on many passes through the document. As part of our model, we also introduce a selective reading mechanism that decides more accurately the extent to which each sentence in the model should be updated. Experimental results on the CNN/DailyMail and DUC2002 datasets demonstrate that our model significantly outperforms state-of-the-art extractive systems when evaluated by machines and by humans.", "field": [], "task": ["Extractive Text Summarization", "Representation Learning", "Text Summarization"], "method": [], "dataset": ["CNN / Daily Mail"], "metric": ["ROUGE-1", "ROUGE-2"], "title": "Iterative Document Representation Learning Towards Summarization with Polishing"} {"abstract": "Neural methods have had several recent successes in semantic parsing, though\nthey have yet to face the challenge of producing meaning representations based\non formal semantics. We present a sequence-to-sequence neural semantic parser\nthat is able to produce Discourse Representation Structures (DRSs) for English\nsentences with high accuracy, outperforming traditional DRS parsers. To\nfacilitate the learning of the output, we represent DRSs as a sequence of flat\nclauses and introduce a method to verify that produced DRSs are well-formed and\ninterpretable. We compare models using characters and words as input and see\n(somewhat surprisingly) that the former performs better than the latter. 
We\nshow that eliminating variable names from the output using De Bruijn-indices\nincreases parser performance. Adding silver training data boosts performance\neven further.", "field": [], "task": ["DRS Parsing", "Semantic Parsing"], "method": [], "dataset": ["PMB-3.0.0", "PMB-2.2.0"], "metric": ["F1"], "title": "Exploring Neural Methods for Parsing Discourse Representation Structures"} {"abstract": "Spectral Graph Convolutional Networks (GCNs) are a generalization of\nconvolutional networks to learning on graph-structured data. Applications of\nspectral GCNs have been successful, but limited to a few problems where the\ngraph is fixed, such as shape correspondence and node classification. In this\nwork, we address this limitation by revisiting a particular family of spectral\ngraph networks, Chebyshev GCNs, showing its efficacy in solving graph\nclassification tasks with a variable graph structure and size. Chebyshev GCNs\nrestrict graphs to have at most one edge between any pair of nodes. To this\nend, we propose a novel multigraph network that learns from multi-relational\ngraphs. We model learned edges with abstract meaning and experiment with\ndifferent ways to fuse the representations extracted from annotated and learned\nedges, achieving competitive results on a variety of chemical classification\nbenchmarks.", "field": [], "task": ["Graph Classification", "Node Classification"], "method": [], "dataset": ["NCI109", "ENZYMES", "PROTEINS", "NCI1", "MUTAG"], "metric": ["Accuracy"], "title": "Spectral Multigraph Networks for Discovering and Fusing Relationships in Molecules"} {"abstract": "Egocentric activity recognition is one of the most challenging tasks in video\nanalysis. It requires a fine-grained discrimination of small objects and their\nmanipulation. While some methods base on strong supervision and attention\nmechanisms, they are either annotation consuming or do not take spatio-temporal\npatterns into account. In this paper we propose LSTA as a mechanism to focus on\nfeatures from spatial relevant parts while attention is being tracked smoothly\nacross the video sequence. We demonstrate the effectiveness of LSTA on\negocentric activity recognition with an end-to-end trainable two-stream\narchitecture, achieving state of the art performance on four standard\nbenchmarks.", "field": [], "task": ["Action Recognition", "Activity Recognition", "Egocentric Activity Recognition", "Temporal Action Localization"], "method": [], "dataset": ["EPIC-KITCHENS-55", "EGTEA"], "metric": ["Actions Top-1 (S2)", "Mean class accuracy", "Average Accuracy"], "title": "LSTA: Long Short-Term Attention for Egocentric Action Recognition"} {"abstract": "Temporally locating and classifying action segments in long untrimmed videos\nis of particular interest to many applications like surveillance and robotics.\nWhile traditional approaches follow a two-step pipeline, by generating\nframe-wise probabilities and then feeding them to high-level temporal models,\nrecent approaches use temporal convolutions to directly classify the video\nframes. In this paper, we introduce a multi-stage architecture for the temporal\naction segmentation task. Each stage features a set of dilated temporal\nconvolutions to generate an initial prediction that is refined by the next one.\nThis architecture is trained using a combination of a classification loss and a\nproposed smoothing loss that penalizes over-segmentation errors. 
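The PyTorch sketch below shows one stage of dilated temporal convolutions producing frame-wise class logits, together with one plausible form of a smoothing term that penalizes abrupt changes between the log-probabilities of adjacent frames. The layer count, channel width, truncation threshold and loss weight are assumptions and are not claimed to match the architecture described above:

import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedResidualLayer(nn.Module):
    def __init__(self, channels, dilation):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=3,
                              padding=dilation, dilation=dilation)
        self.out = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, x):                       # x: (batch, channels, time)
        return x + self.out(F.relu(self.conv(x)))

class SingleStageTCN(nn.Module):
    def __init__(self, in_dim, channels, n_classes, n_layers=6):
        super().__init__()
        self.inp = nn.Conv1d(in_dim, channels, kernel_size=1)
        self.layers = nn.ModuleList(
            [DilatedResidualLayer(channels, 2 ** i) for i in range(n_layers)])
        self.cls = nn.Conv1d(channels, n_classes, kernel_size=1)

    def forward(self, x):                       # x: (batch, in_dim, time)
        h = self.inp(x)
        for layer in self.layers:
            h = layer(h)
        return self.cls(h)                      # frame-wise class logits

def smoothing_loss(logits, tau=4.0):
    # Truncated squared difference between log-probabilities of adjacent frames.
    logp = F.log_softmax(logits, dim=1)
    delta = logp[:, :, 1:] - logp[:, :, :-1].detach()
    return torch.clamp(delta ** 2, max=tau * tau).mean()

batch, feat_dim, frames, n_classes = 2, 64, 200, 10
features = torch.randn(batch, feat_dim, frames)
labels = torch.randint(0, n_classes, (batch, frames))
model = SingleStageTCN(feat_dim, 32, n_classes)
logits = model(features)
loss = F.cross_entropy(logits, labels) + 0.15 * smoothing_loss(logits)
loss.backward()
print("total loss:", float(loss))

A multi-stage variant would feed the predictions of one such stage into the next stage for refinement, as the abstract describes.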
Extensive\nevaluation shows the effectiveness of the proposed model in capturing\nlong-range dependencies and recognizing action segments. Our model achieves\nstate-of-the-art results on three challenging datasets: 50Salads, Georgia Tech\nEgocentric Activities (GTEA), and the Breakfast dataset.", "field": [], "task": ["Action Segmentation"], "method": [], "dataset": ["50 Salads", "Breakfast", "GTEA"], "metric": ["Acc", "Edit", "F1@10%", "F1@25%", "F1@50%"], "title": "MS-TCN: Multi-Stage Temporal Convolutional Network for Action Segmentation"} {"abstract": "We improve the informativeness of models for conditional text generation\nusing techniques from computational pragmatics. These techniques formulate\nlanguage production as a game between speakers and listeners, in which a\nspeaker should generate output text that a listener can use to correctly\nidentify the original input that the text describes. While such approaches are\nwidely used in cognitive science and grounded language learning, they have\nreceived less attention for more standard language generation tasks. We\nconsider two pragmatic modeling methods for text generation: one where\npragmatics is imposed by information preservation, and another where pragmatics\nis imposed by explicit modeling of distractors. We find that these methods\nimprove the performance of strong existing systems for abstractive\nsummarization and generation from structured meaning representations.", "field": [], "task": ["Abstractive Text Summarization", "Conditional Text Generation", "Data-to-Text Generation", "Text Generation"], "method": [], "dataset": ["E2E NLG Challenge"], "metric": ["NIST", "METEOR", "CIDEr", "ROUGE-L", "BLEU"], "title": "Pragmatically Informative Text Generation"} {"abstract": "In this paper, we propose Spatio-TEmporal Progressive (STEP) action\ndetector---a progressive learning framework for spatio-temporal action\ndetection in videos. Starting from a handful of coarse-scale proposal cuboids,\nour approach progressively refines the proposals towards actions over a few\nsteps. In this way, high-quality proposals (i.e., adhere to action movements)\ncan be gradually obtained at later steps by leveraging the regression outputs\nfrom previous steps. At each step, we adaptively extend the proposals in time\nto incorporate more related temporal context. Compared to the prior work that\nperforms action detection in one run, our progressive learning framework is\nable to naturally handle the spatial displacement within action tubes and\ntherefore provides a more effective way for spatio-temporal modeling. We\nextensively evaluate our approach on UCF101 and AVA, and demonstrate superior\ndetection results. Remarkably, we achieve mAP of 75.0% and 18.6% on the two\ndatasets with 3 progressive steps and using respectively only 11 and 34 initial\nproposals.", "field": [], "task": ["Action Detection", "Action Recognition", "Regression"], "method": [], "dataset": ["UCF101-24"], "metric": ["Video-mAP 0.1", "Video-mAP 0.2"], "title": "STEP: Spatio-Temporal Progressive Learning for Video Action Detection"} {"abstract": "3D multi-object tracking (MOT) is an essential component for many applications such as autonomous driving and assistive robotics. Recent work on 3D MOT focuses on developing accurate systems giving less attention to practical considerations such as computational cost and system complexity. In contrast, this work proposes a simple real-time 3D MOT system. Our system first obtains 3D detections from a LiDAR point cloud. 
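As a minimal illustration of the data-association step referred to in the next sentence, the snippet below matches predicted track positions to new detections with the Hungarian algorithm over a simple centroid-distance cost. The cost choice, the gating threshold and the bare 3-D point representation are assumptions standing in for the full Kalman-filtered box state:

import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_centroids, detection_centroids, max_dist=2.0):
    # Cost matrix of pairwise Euclidean distances between track predictions and detections.
    cost = np.linalg.norm(track_centroids[:, None, :] - detection_centroids[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)                   # optimal one-to-one matching
    matches = [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
    unmatched_tracks = sorted(set(range(len(track_centroids))) - {r for r, _ in matches})
    unmatched_detections = sorted(set(range(len(detection_centroids))) - {c for _, c in matches})
    return matches, unmatched_tracks, unmatched_detections

tracks = np.array([[0.0, 0.0, 0.0], [10.0, 5.0, 0.0]])         # predicted track positions
detections = np.array([[0.3, -0.1, 0.0], [10.4, 5.2, 0.1], [30.0, 30.0, 0.0]])
print(associate(tracks, detections))

In a full pipeline, matched pairs would typically update the corresponding Kalman filters, unmatched detections would spawn new tracks, and unmatched tracks would age toward deletion.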
Then, a straightforward combination of a 3D Kalman filter and the Hungarian algorithm is used for state estimation and data association. Additionally, 3D MOT datasets such as KITTI evaluate MOT methods in the 2D space and standardized 3D MOT evaluation tools are missing for a fair comparison of 3D MOT methods. Therefore, we propose a new 3D MOT evaluation tool along with three new metrics to comprehensively evaluate 3D MOT methods. We show that, although our system employs a combination of classical MOT modules, we achieve state-of-the-art 3D MOT performance on two 3D MOT benchmarks (KITTI and nuScenes). Surprisingly, although our system does not use any 2D data as inputs, we achieve competitive performance on the KITTI 2D MOT leaderboard. Our proposed system runs at a rate of $207.4$ FPS on the KITTI dataset, achieving the fastest speed among all modern MOT systems. To encourage standardized 3D MOT evaluation, our system and evaluation code are made publicly available at https://github.com/xinshuoweng/AB3DMOT.", "field": [], "task": ["3D Multi-Object Tracking", "Autonomous Driving", "Multi-Object Tracking", "Object Tracking"], "method": [], "dataset": ["KITTI Tracking test", "KITTI"], "metric": ["MOTA", "MOTP"], "title": "3D Multi-Object Tracking: A Baseline and New Evaluation Metrics"} {"abstract": "Deep learning-based video salient object detection has recently achieved great success with its performance significantly outperforming any other unsupervised methods. However, existing data-driven approaches heavily rely on a large quantity of pixel-wise annotated video frames to deliver such promising results. In this paper, we address the semi-supervised video salient object detection task using pseudo-labels. Specifically, we present an effective video saliency detector that consists of a spatial refinement network and a spatiotemporal module. Based on the same refinement network and motion information in terms of optical flow, we further propose a novel method for generating pixel-level pseudo-labels from sparsely annotated frames. By utilizing the generated pseudo-labels together with a part of manual annotations, our video saliency detector learns spatial and temporal cues for both contrast inference and coherence enhancement, thus producing accurate saliency maps. Experimental results demonstrate that our proposed semi-supervised method even greatly outperforms all the state-of-the-art fully supervised methods across three public benchmarks of VOS, DAVIS, and FBMS.", "field": [], "task": ["RGB Salient Object Detection", "Salient Object Detection", "Unsupervised Video Object Segmentation", "Video Salient Object Detection"], "method": [], "dataset": ["DAVIS-2016", "VOS-T", "FBMS-59"], "metric": ["MAX F-MEASURE", "S-Measure", "AVERAGE MAE", "Average MAE", "max E-measure"], "title": "Semi-Supervised Video Salient Object Detection Using Pseudo-Labels"} {"abstract": "The paucity of videos in current action classification datasets (UCF-101 and\nHMDB-51) has made it difficult to identify good video architectures, as most\nmethods obtain similar performance on existing small-scale benchmarks. This\npaper re-evaluates state-of-the-art architectures in light of the new Kinetics\nHuman Action Video dataset. Kinetics has two orders of magnitude more data,\nwith 400 human action classes and over 400 clips per class, and is collected\nfrom realistic, challenging YouTube videos. 
We provide an analysis on how\ncurrent architectures fare on the task of action classification on this dataset\nand how much performance improves on the smaller benchmark datasets after\npre-training on Kinetics.\n We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on\n2D ConvNet inflation: filters and pooling kernels of very deep image\nclassification ConvNets are expanded into 3D, making it possible to learn\nseamless spatio-temporal feature extractors from video while leveraging\nsuccessful ImageNet architecture designs and even their parameters. We show\nthat, after pre-training on Kinetics, I3D models considerably improve upon the\nstate-of-the-art in action classification, reaching 80.9% on HMDB-51 and 98.0%\non UCF-101.", "field": [], "task": ["Action Classification", "Action Classification ", "Action Recognition", "Skeleton Based Action Recognition"], "method": [], "dataset": ["Kinetics-400", "EgoGesture", "Moments in Time", "HMDB-51", "J-HMDB", "VIVA Hand Gestures Dataset", "UCF101", "Charades"], "metric": ["3-fold Accuracy", "Top 1 Accuracy", "Vid acc@5", "MAP", "Accuracy", "Average accuracy of 3 splits", "Top 5 Accuracy", "Vid acc@1", "Accuracy (RGB+pose)"], "title": "Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset"} {"abstract": "Many real world tasks require multiple agents to work together. Multi-agent reinforcement learning (RL) methods have been proposed in recent years to solve these tasks, but current methods often fail to efficiently learn policies. We thus investigate the presence of a common weakness in single-agent RL, namely value function overestimation bias, in the multi-agent setting. Based on our findings, we propose an approach that reduces this bias by using double centralized critics. We evaluate it on six mixed cooperative-competitive tasks, showing a significant advantage over current methods. Finally, we investigate the application of multi-agent methods to high-dimensional robotic tasks and show that our approach can be used to learn decentralized policies in this domain.", "field": [], "task": ["Multi-agent Reinforcement Learning"], "method": [], "dataset": ["ParticleEnvs Cooperative Communication"], "metric": ["final agent reward"], "title": "Reducing Overestimation Bias in Multi-Agent Domains Using Double Centralized Critics"} {"abstract": "We propose Localized Narratives, a new form of multimodal image annotations connecting vision and language. We ask annotators to describe an image with their voice while simultaneously hovering their mouse over the region they are describing. Since the voice and the mouse pointer are synchronized, we can localize every single word in the description. This dense visual grounding takes the form of a mouse trace segment per word and is unique to our data. We annotated 849k images with Localized Narratives: the whole COCO, Flickr30k, and ADE20K datasets, and 671k images of Open Images, all of which we make publicly available. We provide an extensive analysis of these annotations showing they are diverse, accurate, and efficient to produce. 
We also demonstrate their utility on the application of controlled image captioning.", "field": [], "task": ["Image Captioning", "Image Generation", "Visual Grounding"], "method": [], "dataset": ["Localized Narratives"], "metric": ["CIDEr"], "title": "Connecting Vision and Language with Localized Narratives"} {"abstract": "SUMMARY: Recently, novel machine-learning algorithms have shown potential for predicting undiscovered links in biomedical knowledge networks. However, dedicated benchmarks for measuring algorithmic progress have not yet emerged. With OpenBioLink, we introduce a large-scale, high-quality and highly challenging biomedical link prediction benchmark to transparently and reproducibly evaluate such algorithms. Furthermore, we present preliminary baseline evaluation results. AVAILABILITY AND IMPLEMENTATION: Source code, data and supplementary files are openly available at https://github.com/OpenBioLink/OpenBioLink CONTACT: matthias.samwald ((at)) meduniwien.ac.at", "field": [], "task": ["Link Prediction"], "method": [], "dataset": ["OpenBioLink"], "metric": ["Hits@10", "Hits@1"], "title": "OpenBioLink: A benchmarking framework for large-scale biomedical link prediction"} {"abstract": "Spatial-temporal graphs have been widely used by skeleton-based action recognition algorithms to model human action dynamics. To capture robust movement patterns from these graphs, long-range and multi-scale context aggregation and spatial-temporal dependency modeling are critical aspects of a powerful feature extractor. However, existing methods have limitations in achieving (1) unbiased long-range joint relationship modeling under multi-scale operators and (2) unobstructed cross-spacetime information flow for capturing complex spatial-temporal dependencies. In this work, we present (1) a simple method to disentangle multi-scale graph convolutions and (2) a unified spatial-temporal graph convolutional operator named G3D. The proposed multi-scale aggregation scheme disentangles the importance of nodes in different neighborhoods for effective long-range modeling. The proposed G3D module leverages dense cross-spacetime edges as skip connections for direct information propagation across the spatial-temporal graph. By coupling these proposals, we develop a powerful feature extractor named MS-G3D based on which our model outperforms previous state-of-the-art methods on three large-scale datasets: NTU RGB+D 60, NTU RGB+D 120, and Kinetics Skeleton 400.", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition"], "method": [], "dataset": ["NTU RGB+D", "Kinetics-Skeleton dataset", "NTU RGB+D 120"], "metric": ["Accuracy (CS)", "Accuracy (Cross-Subject)", "Accuracy (CV)", "Accuracy (Cross-Setup)", "Accuracy"], "title": "Disentangling and Unifying Graph Convolutions for Skeleton-Based Action Recognition"} {"abstract": "For object detection, how to address the contradictory requirement between feature map resolution and receptive field on high-resolution inputs still remains an open question. In this paper, to tackle this issue, we build a novel architecture, called Attention-guided Context Feature Pyramid Network (AC-FPN), that exploits discriminative information from various large receptive fields via integrating attention-guided multi-path features. The model contains two modules. The first one is Context Extraction Module (CEM) that explores large contextual information from multiple receptive fields. 
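One common way to gather context from several receptive fields is to run parallel dilated convolutions and fuse their responses, as in the PyTorch sketch below; the dilation rates, channel width, summation fusion and residual connection are illustrative choices and are not claimed to reproduce the module described above:

import torch
import torch.nn as nn

class MultiReceptiveFieldContext(nn.Module):
    def __init__(self, channels, dilations=(1, 3, 6, 12)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations])                       # each branch sees a different receptive field
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):                              # x: (batch, channels, height, width)
        context = sum(torch.relu(branch(x)) for branch in self.branches)
        return self.fuse(context) + x                  # residual connection keeps the original features

features = torch.randn(1, 64, 32, 32)                  # a feature map from some backbone level
print(MultiReceptiveFieldContext(64)(features).shape)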
As redundant contextual relations may mislead localization and recognition, we also design the second module named Attention-guided Module (AM), which can adaptively capture the salient dependencies over objects by using the attention mechanism. AM consists of two sub-modules, i.e., Context Attention Module (CxAM) and Content Attention Module (CnAM), which focus on capturing discriminative semantics and locating precise positions, respectively. Most importantly, our AC-FPN can be readily plugged into existing FPN-based models. Extensive experiments on object detection and instance segmentation show that existing models with our proposed CEM and AM significantly surpass their counterparts without them, and our model successfully obtains state-of-the-art results. We have released the source code at https://github.com/Caojunxu/AC-FPN.", "field": [], "task": ["Instance Segmentation", "Object Detection", "Semantic Segmentation"], "method": [], "dataset": ["COCO test-dev"], "metric": ["APM", "box AP", "AP75", "APS", "APL", "AP50"], "title": "Attention-guided Context Feature Pyramid Network for Object Detection"} {"abstract": "The existing fusion based RGB-D salient object detection methods usually adopt the bi-stream structure to strike the fusion trade-off between RGB and depth (D). The D quality usually varies from scene to scene, while the SOTA bi-stream approaches are depth quality unaware, which easily result in substantial difficulties in achieving complementary fusion status between RGB and D, leading to poor fusion results in facing of low-quality D. Thus, this paper attempts to integrate a novel depth quality aware subnet into the classic bi-stream structure, aiming to assess the depth quality before conducting the selective RGB-D fusion. Compared with the SOTA bi-stream methods, the major highlight of our method is its ability to lessen the importance of those low-quality, no-contribution, or even negative-contribution D regions during the RGB-D fusion, achieving a much improved complementary status between RGB and D.", "field": [], "task": ["Object Detection", "RGB-D Salient Object Detection", "RGB Salient Object Detection", "Salient Object Detection"], "method": [], "dataset": ["NJU2K"], "metric": ["Average MAE", "S-Measure"], "title": "Depth Quality Aware Salient Object Detection"} {"abstract": "Knowledge Graphs (KG) are of vital importance for multiple applications on the web, including information retrieval, recommender systems, and metadata annotation. Regardless of whether they are built manually by domain experts or with automatic pipelines, KGs are often incomplete. Recent work has begun to explore the use of textual descriptions available in knowledge graphs to learn vector representations of entities in order to preform link prediction. However, the extent to which these representations learned for link prediction generalize to other tasks is unclear. This is important given the cost of learning such representations. Ideally, we would prefer representations that do not need to be trained again when transferring to a different task, while retaining reasonable performance. In this work, we propose a holistic evaluation protocol for entity representations learned via a link prediction objective. We consider the inductive link prediction and entity classification tasks, which involve entities not seen during training. We also consider an information retrieval task for entity-oriented search. 
We evaluate an architecture based on a pretrained language model, that exhibits strong generalization to entities not observed during training, and outperforms related state-of-the-art methods (22% MRR improvement in link prediction on average). We further provide evidence that the learned representations transfer well to other tasks without fine-tuning. In the entity classification task we obtain an average improvement of 16% in accuracy compared with baselines that also employ pre-trained models. In the information retrieval task, we obtain significant improvements of up to 8.8% in NDCG@10 for natural language queries. We thus show that the learned representations are not limited KG-specific tasks, and have greater generalization properties than evaluated in previous work.", "field": [], "task": ["Inductive knowledge graph completion", "Information Retrieval", "Knowledge Graph Embeddings", "Knowledge Graphs", "Language Modelling", "Link Prediction", "Node Classification", "Recommendation Systems"], "method": [], "dataset": ["Wikidata5m-ind", "WN18RR-ind", "FB15k-237-ind"], "metric": ["Hits@3", "Hits@1", "Hit@1", "MRR", "Hits@10", "Hit@10"], "title": "Inductive Entity Representations from Text via Link Prediction"} {"abstract": "In this paper, we investigate the following two limitations for the existing distractor generation (DG) methods. First, the quality of the existing DG methods are still far from practical use. There is still room for DG quality improvement. Second, the existing DG designs are mainly for single distractor generation. However, for practical MCQ preparation, multiple distractors are desired. Aiming at these goals, in this paper, we present a new distractor generation scheme with multi-tasking and negative answer training strategies for effectively generating \\textit{multiple} distractors. The experimental results show that (1) our model advances the state-of-the-art result from 28.65 to 39.81 (BLEU 1 score) and (2) the generated multiple distractors are diverse and show strong distracting power for multiple choice question.", "field": [], "task": ["Distractor Generation", "Text Generation"], "method": [], "dataset": ["RACE"], "metric": ["BLEU-2", "BLEU-1", "BLEU-3", "ROUGE-L", "BLEU-4"], "title": "A BERT-based Distractor Generation Scheme with Multi-tasking and Negative Answer Training Strategies"} {"abstract": "Motivation\r\nNeural methods to extract drug-drug interactions (DDIs) from literature require a large number of annotations. In this study, we propose a novel method to effectively utilize external drug database information as well as information from large-scale plain text for DDI extraction. Specifically, we focus on drug description and molecular structure information as the drug database information.\r\nResults\r\nWe evaluated our approach on the DDIExtraction 2013 shared task data set. We obtained the following results. First, large-scale raw text information can greatly improve the performance of extracting DDIs when combined with the existing model and it shows the state-of-the-art performance. Second, each of drug description and molecular structure information is helpful to further improve the DDI performance for some specific DDI types. Finally, the simultaneous use of the drug description and molecular structure information can significantly improve the performance on all the DDI types. 
We showed that the plain text, the drug description information, and molecular structure information are complementary and their effective combination are essential for the improvement.", "field": [], "task": ["Drug\u2013drug Interaction Extraction"], "method": [], "dataset": ["DDI extraction 2013 corpus"], "metric": ["F1", "Micro F1"], "title": "Using Drug Descriptions and Molecular Structures for Drug-Drug Interaction Extraction from Literature"} {"abstract": "Although pain is frequent in old age, older adults are often undertreated for pain. This is especially the case for long-term care residents with moderate to severe dementia who cannot report their pain because of cognitive impairments that accompany dementia. Nursing staff acknowledge the challenges of effectively recognizing and managing pain in long-term care facilities due to lack of human resources and, sometimes, expertise to use validated pain assessment approaches on a regular basis. Vision-based ambient monitoring will allow for frequent automated assessments so care staff could be automatically notified when signs of pain are displayed. However, existing computer vision techniques for pain detection are not validated on faces of older adults or people with dementia, and this population is not represented in existing facial expression datasets of pain. We present the first fully automated vision-based technique validated on a dementia cohort. Our contributions are threefold. First, we develop a deep learning-based computer vision system for detecting painful facial expressions on a video dataset that is collected unobtrusively from older adult participants with and without dementia. Second, we introduce a pairwise comparative inference method that calibrates to each person and is sensitive to changes in facial expression while using training data more efficiently than sequence models. Third, we introduce a fast contrastive training method that improves cross-dataset performance. Our pain estimation model outperforms baselines by a wide margin, especially when evaluated on faces of people with dementia. Pre-trained model and demo code available at https://github.com/TaatiTeam/pain_detection_demo", "field": [], "task": ["Pain Intensity Regression"], "method": [], "dataset": ["UNBC-McMaster ShoulderPain dataset"], "metric": ["Pearson Correlation Coefficient "], "title": "Unobtrusive Pain Monitoring in Older Adults with Dementia using Pairwise and Contrastive Training"} {"abstract": "The task of graph-to-text generation aims at producing sentences that preserve the meaning of input graphs. As a crucial defect, the current state-of-the-art models may mess up or even drop the core structural information of input graphs when generating outputs. We propose to tackle this problem by leveraging richer training signals that can guide our model for preserving input information. In particular, we introduce two types of autoencoding losses, each individually focusing on different aspects (a.k.a. views) of input graphs. The losses are then back-propagated to better calibrate our model via multi-task training. Experiments on two benchmarks for graph-to-text generation show the effectiveness of our approach over a state-of-the-art baseline. 
Our code is available at \\url{http://github.com/Soistesimmer/AMR-multiview}.", "field": [], "task": ["Data-to-Text Generation", "Text Generation"], "method": [], "dataset": ["WebNLG"], "metric": ["BLEU"], "title": "Structural Information Preserving for Graph-to-Text Generation"} {"abstract": "In this work we introduce Deforming Autoencoders, a generative model for\nimages that disentangles shape from appearance in an unsupervised manner. As in\nthe deformable template paradigm, shape is represented as a deformation between\na canonical coordinate system (`template') and an observed image, while\nappearance is modeled in `canonical', template, coordinates, thus discarding\nvariability due to deformations. We introduce novel techniques that allow this\napproach to be deployed in the setting of autoencoders and show that this\nmethod can be used for unsupervised group-wise image alignment. We show\nexperiments with expression morphing in humans, hands, and digits, face\nmanipulation, such as shape and appearance interpolation, as well as\nunsupervised landmark localization. A more powerful form of unsupervised\ndisentangling becomes possible in template coordinates, allowing us to\nsuccessfully decompose face images into shading and albedo, and further\nmanipulate face images.", "field": [], "task": ["Unsupervised Facial Landmark Detection"], "method": [], "dataset": ["MAFL"], "metric": ["NME"], "title": "Deforming Autoencoders: Unsupervised Disentangling of Shape and Appearance"} {"abstract": "This paper proposes Self-Imitation Learning (SIL), a simple off-policy\nactor-critic algorithm that learns to reproduce the agent's past good\ndecisions. This algorithm is designed to verify our hypothesis that exploiting\npast good experiences can indirectly drive deep exploration. Our empirical\nresults show that SIL significantly improves advantage actor-critic (A2C) on\nseveral hard exploration Atari games and is competitive to the state-of-the-art\ncount-based exploration methods. We also show that SIL improves proximal policy\noptimization (PPO) on MuJoCo tasks.", "field": [], "task": ["Atari Games", "Imitation Learning"], "method": [], "dataset": ["Atari 2600 Amidar", "Atari 2600 River Raid", "Atari 2600 Beam Rider", "Atari 2600 Video Pinball", "Atari 2600 Demon Attack", "Atari 2600 Enduro", "Atari 2600 Alien", "Atari 2600 Boxing", "Atari 2600 Bank Heist", "Atari 2600 Tutankham", "Atari 2600 Time Pilot", "Atari 2600 Space Invaders", "Atari 2600 Assault", "Atari 2600 Gravitar", "Atari 2600 Ice Hockey", "Atari 2600 Bowling", "Atari 2600 Private Eye", "Atari 2600 Asterix", "Atari 2600 Breakout", "Atari 2600 Name This Game", "Atari 2600 Crazy Climber", "Atari 2600 Pong", "Atari 2600 Krull", "Atari 2600 Freeway", "Atari 2600 James Bond", "Atari 2600 Robotank", "Atari 2600 Kangaroo", "Atari 2600 Venture", "Atari 2600 Asteroids", "Atari 2600 Fishing Derby", "Atari 2600 Ms. Pacman", "Atari 2600 Seaquest", "Atari 2600 Tennis", "Atari 2600 Zaxxon", "Atari 2600 Frostbite", "Atari 2600 Star Gunner", "Atari 2600 Double Dunk", "Atari 2600 Battle Zone", "Atari 2600 Gopher", "Atari 2600 Road Runner", "Atari 2600 Atlantis", "Atari 2600 Kung-Fu Master", "Atari 2600 Chopper Command", "Atari 2600 Up and Down", "Atari 2600 Montezuma's Revenge", "Atari 2600 Wizard of Wor", "Atari 2600 Q*Bert", "Atari 2600 Centipede", "Atari 2600 HERO"], "metric": ["Score"], "title": "Self-Imitation Learning"} {"abstract": "We introduce a fully differentiable approximation to higher-order inference\nfor coreference resolution. 
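A toy NumPy sketch of the kind of iterative, attention-weighted span refinement described in the next sentences is given below: each span attends over its earlier candidate antecedents and its representation is updated through a learned gate. The bilinear scorer, the gating weights, the handling of the first span and all dimensions are random, illustrative stand-ins:

import numpy as np

rng = np.random.default_rng(0)
n_spans, dim = 6, 16
spans = rng.normal(size=(n_spans, dim))                     # initial span representations
W_score = rng.normal(size=(dim, dim)) / np.sqrt(dim)        # toy bilinear antecedent scorer
W_gate = rng.normal(size=(2 * dim, dim)) / np.sqrt(2 * dim)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def refine(x, n_iterations=2):
    for _ in range(n_iterations):
        scores = x @ W_score @ x.T                                   # pairwise antecedent scores
        allowed = np.tril(np.ones((len(x), len(x))), k=-1) > 0       # only earlier spans may be antecedents
        scores = np.where(allowed, scores, -1e9)
        scores[0, 0] = 0.0                                           # the first span falls back on itself
        attention = softmax(scores)                                  # antecedent distribution per span
        attended = attention @ x                                     # expected antecedent representation
        gate = 1.0 / (1.0 + np.exp(-np.concatenate([x, attended], axis=1) @ W_gate))
        x = gate * x + (1.0 - gate) * attended                       # soft higher-order update
    return x

print(np.round(refine(spans)[:2, :4], 3))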
Our approach uses the antecedent distribution from\na span-ranking architecture as an attention mechanism to iteratively refine\nspan representations. This enables the model to softly consider multiple hops\nin the predicted clusters. To alleviate the computational cost of this\niterative process, we introduce a coarse-to-fine approach that incorporates a\nless accurate but more efficient bilinear factor, enabling more aggressive\npruning without hurting accuracy. Compared to the existing state-of-the-art\nspan-ranking approach, our model significantly improves accuracy on the English\nOntoNotes benchmark, while being far more computationally efficient.", "field": [], "task": ["Coreference Resolution"], "method": [], "dataset": ["OntoNotes", "CoNLL 2012"], "metric": ["Avg F1", "F1"], "title": "Higher-order Coreference Resolution with Coarse-to-fine Inference"} {"abstract": "Can we detect common objects in a variety of image domains without\ninstance-level annotations? In this paper, we present a framework for a novel\ntask, cross-domain weakly supervised object detection, which addresses this\nquestion. For this paper, we have access to images with instance-level\nannotations in a source domain (e.g., natural image) and images with\nimage-level annotations in a target domain (e.g., watercolor). In addition, the\nclasses to be detected in the target domain are all or a subset of those in the\nsource domain. Starting from a fully supervised object detector, which is\npre-trained on the source domain, we propose a two-step progressive domain\nadaptation technique by fine-tuning the detector on two types of artificially\nand automatically generated samples. We test our methods on our newly collected\ndatasets containing three image domains, and achieve an improvement of\napproximately 5 to 20 percentage points in terms of mean average precision\n(mAP) compared to the best-performing baselines.", "field": [], "task": ["Domain Adaptation", "Object Detection", "Weakly Supervised Object Detection"], "method": [], "dataset": ["Comic2k", "Watercolor2k", "Clipart1k"], "metric": ["MAP"], "title": "Cross-Domain Weakly-Supervised Object Detection through Progressive Domain Adaptation"} {"abstract": "To understand the visual world, a machine must not only recognize individual\nobject instances but also how they interact. Humans are often at the center of\nsuch interactions and detecting human-object interactions is an important\npractical and scientific problem. In this paper, we address the task of\ndetecting triplets in challenging everyday photos. We\npropose a novel model that is driven by a human-centric approach. Our\nhypothesis is that the appearance of a person -- their pose, clothing, action\n-- is a powerful cue for localizing the objects they are interacting with. To\nexploit this cue, our model learns to predict an action-specific density over\ntarget object locations based on the appearance of a detected person. Our model\nalso jointly learns to detect people and objects, and by fusing these\npredictions it efficiently infers interaction triplets in a clean, jointly\ntrained end-to-end system we call InteractNet. 
We validate our approach on the\nrecently introduced Verbs in COCO (V-COCO) and HICO-DET datasets, where we show\nquantitatively compelling results.", "field": [], "task": ["Human-Object Interaction Detection"], "method": [], "dataset": ["HICO-DET"], "metric": ["Time Per Frame (ms)", "MAP"], "title": "Detecting and Recognizing Human-Object Interactions"} {"abstract": "Knowledge graphs contain knowledge about the world and provide a structured\nrepresentation of this knowledge. Current knowledge graphs contain only a small\nsubset of what is true in the world. Link prediction approaches aim at\npredicting new links for a knowledge graph given the existing links among the\nentities. Tensor factorization approaches have proved promising for such link\nprediction problems. Proposed in 1927, Canonical Polyadic (CP) decomposition is\namong the first tensor factorization approaches. CP generally performs poorly\nfor link prediction as it learns two independent embedding vectors for each\nentity, whereas they are really tied. We present a simple enhancement of CP\n(which we call SimplE) to allow the two embeddings of each entity to be learned\ndependently. The complexity of SimplE grows linearly with the size of\nembeddings. The embeddings learned through SimplE are interpretable, and\ncertain types of background knowledge can be incorporated into these embeddings\nthrough weight tying. We prove SimplE is fully expressive and derive a bound on\nthe size of its embeddings for full expressivity. We show empirically that,\ndespite its simplicity, SimplE outperforms several state-of-the-art tensor\nfactorization techniques. SimplE's code is available on GitHub at\nhttps://github.com/Mehran-k/SimplE.", "field": [], "task": ["Knowledge Graphs", "Link Prediction"], "method": [], "dataset": [" FB15k", "WN18"], "metric": ["Hits@10", "MRR", "Hits@3", "Hits@1"], "title": "SimplE Embedding for Link Prediction in Knowledge Graphs"} {"abstract": "We propose associative domain adaptation, a novel technique for end-to-end\ndomain adaptation with neural networks, the task of inferring class labels for\nan unlabeled target domain based on the statistical properties of a labeled\nsource domain. Our training scheme follows the paradigm that in order to\neffectively derive class labels for the target domain, a network should produce\nstatistically domain invariant embeddings, while minimizing the classification\nerror on the labeled source domain. We accomplish this by reinforcing\nassociations between source and target data directly in embedding space. Our\nmethod can easily be added to any existing classification network with no\nstructural and almost no computational overhead. We demonstrate the\neffectiveness of our approach on various benchmarks and achieve\nstate-of-the-art results across the board with a generic convolutional neural\nnetwork architecture not specifically tuned to the respective tasks. Finally,\nwe show that the proposed association loss produces embeddings that are more\neffective for domain adaptation compared to methods employing maximum mean\ndiscrepancy as a similarity measure in embedding space.", "field": [], "task": ["Domain Adaptation"], "method": [], "dataset": ["SYNSIG-to-GTSRB"], "metric": ["Accuracy"], "title": "Associative Domain Adaptation"} {"abstract": "Deep learning techniques are being used in skeleton based action recognition\ntasks and outstanding performance has been reported. 
Compared with RNN based\nmethods which tend to overemphasize temporal information, CNN-based approaches\ncan jointly capture spatio-temporal information from texture color images\nencoded from skeleton sequences. There are several skeleton-based features that\nhave proven effective in RNN-based and handcrafted-feature-based methods.\nHowever, it remains unknown whether they are suitable for CNN-based approaches.\nThis paper proposes to encode five spatial skeleton features into images with\ndifferent encoding methods. In addition, the performance implication of\ndifferent joints used for feature extraction is studied. The proposed method\nachieved state-of-the-art performance on the NTU RGB+D dataset for 3D human action\nanalysis. An accuracy of 75.32\\% was achieved in the Large Scale 3D Human Activity\nAnalysis Challenge in Depth Videos.", "field": [], "task": ["3D Action Recognition", "Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CV)"], "title": "Investigation of Different Skeleton Features for CNN-based 3D Action Recognition"} {"abstract": "Abstract Meaning Representation (AMR) is a semantic representation for natural\nlanguage that embeds annotations related to traditional tasks such as named\nentity recognition, semantic role labeling, word sense disambiguation and\nco-reference resolution. We describe a transition-based parser for AMR that\nparses sentences left-to-right, in linear time. We further propose a test-suite\nthat assesses specific subtasks that are helpful in comparing AMR parsers, and\nshow that our parser is competitive with the state of the art on the LDC2015E86\ndataset and that it outperforms state-of-the-art parsers for recovering named\nentities and handling polarity.", "field": [], "task": ["AMR Parsing", "Named Entity Recognition", "Semantic Role Labeling", "Word Sense Disambiguation"], "method": [], "dataset": ["LDC2015E86"], "metric": ["Smatch"], "title": "An Incremental Parser for Abstract Meaning Representation"} {"abstract": "This paper proposes a state-of-the-art recurrent neural network (RNN)\nlanguage model that combines probability distributions computed not only from a\nfinal RNN layer but also from middle layers. Our proposed method raises the\nexpressive power of a language model based on the matrix factorization\ninterpretation of language modeling introduced by Yang et al. (2018). The\nproposed method improves the current state-of-the-art language model and\nachieves the best score on the Penn Treebank and WikiText-2, which are the\nstandard benchmark datasets. Moreover, we indicate that our proposed method\ncontributes to two application tasks: machine translation and headline\ngeneration. Our code is publicly available at:\nhttps://github.com/nttcslab-nlp/doc_lm.", "field": [], "task": ["Constituency Parsing", "Language Modelling", "Machine Translation"], "method": [], "dataset": ["Penn Treebank (Word Level)", "WikiText-2", "Penn Treebank"], "metric": ["Number of params", "F1 score", "Validation perplexity", "Test perplexity", "Params"], "title": "Direct Output Connection for a High-Rank Language Model"} {"abstract": "Conventional object detection models require large amounts of training data. In comparison, humans can recognize previously unseen objects by merely knowing their semantic description. To mimic similar behaviour, zero-shot object detection aims to recognize and localize 'unseen' object instances by using only their semantic information.
The model is first trained to learn the relationships between visual and semantic domains for seen objects, later transferring the acquired knowledge to totally unseen objects. This setting gives rise to the need for correct alignment between visual and semantic concepts, so that the unseen objects can be identified using only their semantic attributes. In this paper, we propose a novel loss function called 'Polarity loss' that promotes correct visual-semantic alignment for an improved zero-shot object detection. On one hand, it refines the noisy semantic embeddings via metric learning on a 'Semantic vocabulary' of related concepts to establish a better synergy between visual and semantic domains. On the other hand, it explicitly maximizes the gap between positive and negative predictions to achieve better discrimination between seen, unseen and background objects. Our approach is inspired by embodiment theories in cognitive science, which claim human semantic understanding to be grounded in past experiences (seen objects), related linguistic concepts (word vocabulary) and visual perception (seen/unseen object images). We conduct extensive evaluations on the MS-COCO and Pascal VOC datasets, showing significant improvements over the state of the art.", "field": [], "task": ["Metric Learning", "Object Detection", "Zero-Shot Learning", "Zero-Shot Object Detection"], "method": [], "dataset": ["MS-COCO"], "metric": ["mAP", "Recall"], "title": "Polarity Loss for Zero-shot Object Detection"} {"abstract": "We consider the problem of inferring a layered representation, its depth ordering and motion segmentation from a video in which objects may undergo 3D non-planar motion relative to the camera. We generalize layered inference to the aforementioned case and corresponding self-occlusion phenomena. We accomplish this by introducing a flattened 3D object representation, which is a compact representation of an object that contains all visible portions of the object seen in the video, including parts of an object that are self-occluded (as well as occluded) in one frame but seen in another. We formulate the inference of such flattened representations and motion segmentation, and derive an optimization scheme. We also introduce a new depth ordering scheme, which is independent of layered inference and addresses the case of self-occlusion. It requires almost no computation given the flattened representations. Experiments on benchmark datasets show the advantage of our method compared to existing layered methods, which do not model 3D motion and self-occlusion.", "field": [], "task": ["Motion Segmentation", "Unsupervised Video Object Segmentation"], "method": [], "dataset": ["DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "Extending Layered Models to 3D Motion"} {"abstract": "We propose a novel deep learning approach to solve simultaneous alignment and recognition problems (referred to as \"Sequence-to-sequence\" learning). We decompose the problem into a series of specialised expert systems referred to as SubUNets. The spatio-temporal relationships between these SubUNets are then modelled to solve the task, while remaining trainable end-to-end. The approach mimics human learning and educational techniques, and has a number of significant advantages. SubUNets allow us to inject domain-specific expert knowledge into the system regarding suitable intermediate representations.
They also allow us to implicitly perform transfer learning between different interrelated tasks, which in turn allows us to exploit a wider range of more varied data sources. In our experiments, we demonstrate that each of these properties serves to significantly improve the performance of the overarching recognition system, by better constraining the learning problem. The proposed techniques are demonstrated in the challenging domain of sign language recognition. We demonstrate state-of-the-art performance on hand-shape recognition (outperforming previous techniques by more than 30%). Furthermore, we are able to obtain comparable sign recognition rates to previous research, without the need for an alignment step to segment out the signs for recognition.\r", "field": [], "task": ["Sign Language Recognition", "Transfer Learning"], "method": [], "dataset": ["RWTH-PHOENIX-Weather 2014"], "metric": ["Word Error Rate (WER)"], "title": "SubUNets: End-To-End Hand Shape and Continuous Sign Language Recognition"} {"abstract": "The field of self-supervised monocular depth estimation has seen huge\nadvancements in recent years. Most methods assume stereo data is available\nduring training but usually under-utilize it and only treat it as a reference\nsignal. We propose a novel self-supervised approach which uses both left and\nright images equally during training, but can still be used with a single input\nimage at test time, for monocular depth estimation. Our Siamese network\narchitecture consists of two twin networks, each of which learns to predict a disparity\nmap from a single image. At test time, however, only one of these networks is\nused in order to infer depth. We show state-of-the-art results on the standard\nKITTI Eigen split benchmark as well as being the highest scoring\nself-supervised method on the new KITTI single view benchmark. To demonstrate\nthe ability of our method to generalize to new data sets, we further provide\nresults on the Make3D benchmark, which was not used during training.", "field": [], "task": ["Depth Estimation", "Monocular Depth Estimation"], "method": [], "dataset": ["KITTI Eigen split"], "metric": ["absolute relative error"], "title": "Learn Stereo, Infer Mono: Siamese Networks for Self-Supervised, Monocular, Depth Estimation"} {"abstract": "Learning subtle yet discriminative features (e.g., beak and eyes for a bird) plays a significant role in fine-grained image recognition. Existing attention-based approaches localize and amplify significant parts to learn fine-grained details, which often suffer from a limited number of parts and heavy computational cost. In this paper, we propose to learn such fine-grained features from hundreds of part proposals by Trilinear Attention Sampling Network (TASN) in an efficient teacher-student manner. Specifically, TASN consists of 1) a trilinear attention module, which generates attention maps by modeling the inter-channel relationships, 2) an attention-based sampler which highlights attended parts with high resolution, and 3) a feature distiller, which distills part features into a global one by weight sharing and feature preserving strategies.
Extensive experiments verify that TASN yields the best performance under the same settings as the most competitive approaches on the iNaturalist-2017, CUB-Bird, and Stanford-Cars datasets.", "field": [], "task": ["Fine-Grained Image Classification", "Fine-Grained Image Recognition"], "method": [], "dataset": [" CUB-200-2011"], "metric": ["Accuracy"], "title": "Looking for the Devil in the Details: Learning Trilinear Attention Sampling Network for Fine-grained Image Recognition"} {"abstract": "Recently, deep learning based 3D face reconstruction methods have shown promising results in both quality and efficiency. However, training deep neural networks typically requires a large volume of data, whereas face images with ground-truth 3D face shapes are scarce. In this paper, we propose a novel deep 3D face reconstruction approach that 1) leverages a robust, hybrid loss function for weakly-supervised learning which takes into account both low-level and perception-level information for supervision, and 2) performs multi-image face reconstruction by exploiting complementary information from different images for shape aggregation. Our method is fast, accurate, and robust to occlusion and large pose. We provide comprehensive experiments on three datasets, systematically comparing our method with fifteen recent methods and demonstrating its state-of-the-art performance.", "field": [], "task": ["3D Face Reconstruction", "Face Reconstruction"], "method": [], "dataset": ["NoW Benchmark"], "metric": ["Mean Reconstruction Error (mm)"], "title": "Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set"} {"abstract": "Temporal action localization is crucial for understanding untrimmed videos. In this work, we first identify two underexplored problems posed by the weak supervision for temporal action localization, namely action completeness modeling and action-context separation. Then, by presenting a novel network architecture and its training strategy, the two problems are explicitly addressed. Specifically, to model the completeness of actions, we propose a multi-branch neural network in which branches are enforced to discover distinctive action parts. Complete actions can therefore be localized by fusing activations from different branches. And to separate action instances from their surrounding context, we generate hard negative data for training using the prior that motionless video clips are unlikely to be actions. Experiments performed on the THUMOS'14 and ActivityNet datasets show that our framework outperforms state-of-the-art methods. In particular, the average mAP on ActivityNet v1.2 is significantly improved from 18.0% to 22.4%. Our code will be released soon.\r", "field": [], "task": ["Action Localization", "Temporal Action Localization", "Weakly Supervised Action Localization", "Weakly-supervised Temporal Action Localization", "Weakly Supervised Temporal Action Localization"], "method": [], "dataset": ["ActivityNet-1.2", "ActivityNet-1.3", "THUMOS 2014"], "metric": ["mAP@0.5", "mAP@0.1:0.7"], "title": "Completeness Modeling and Context Separation for Weakly Supervised Temporal Action Localization"} {"abstract": "Temporal action proposal generation is a challenging and promising task which aims to locate temporal regions in real-world videos where actions or events may occur. Current bottom-up proposal generation methods can generate proposals with precise boundaries, but cannot efficiently generate adequately reliable confidence scores for retrieving proposals.
To address these difficulties, we introduce the Boundary-Matching (BM) mechanism to evaluate confidence scores of densely distributed proposals, which denotes a proposal as a matching pair of starting and ending boundaries and combines all densely distributed BM pairs into the BM confidence map. Based on the BM mechanism, we propose an effective, efficient and end-to-end proposal generation method, named Boundary-Matching Network (BMN), which generates proposals with precise temporal boundaries as well as reliable confidence scores simultaneously. The two branches of BMN are jointly trained in a unified framework. We conduct experiments on two challenging datasets: THUMOS-14 and ActivityNet-1.3, where BMN shows significant performance improvement with remarkable efficiency and generalizability. Further, combined with an existing action classifier, BMN can achieve state-of-the-art temporal action detection performance.", "field": [], "task": ["Action Detection", "Action Recognition", "Temporal Action Localization", "Temporal Action Proposal Generation"], "method": [], "dataset": ["ActivityNet-1.3", "THUMOS\u201914"], "metric": ["mAP", "mAP@0.3", "mAP IOU@0.95", "AUC (val)", "mAP IOU@0.5", "mAP@0.4", "mAP@0.5", "mAP IOU@0.75", "AR@100"], "title": "BMN: Boundary-Matching Network for Temporal Action Proposal Generation"} {"abstract": "Event handlers have a wide range of applications such as medical assistant systems and fire suppression systems. These systems try to provide accurate responses based on the least information. Support vector data description (SVDD) is one of the appropriate tools for such detections, which should handle a lack of information. Therefore, many efforts have been made to improve SVDD. Unfortunately, the existing descriptors suffer from weak data characteristics in sparse data sets and their tuning parameters are organized improperly. These issues cause a reduction of accuracy in event handlers when they are faced with data shortage. Therefore, we propose automatic support vector data description (ASVDD) based on both validation degree, which originates from fuzzy rough sets to discover data characteristics, and assigning effective values for tuning parameters by a chaotic bat algorithm. To evaluate the performance of ASVDD, several experiments have been conducted on various data sets of the UCI repository. The experimental results demonstrate the superiority of the proposed method over state-of-the-art ones in terms of classification accuracy and AUC. In order to prove a meaningful distinction between the accuracy results of the proposed method and the leading-edge ones, the Wilcoxon statistical test has been conducted.", "field": [], "task": ["One-class classifier", "Outlier Detection"], "method": [], "dataset": ["Breast cancer Wisconsin_class 2", "Breast cancer Wisconsin_class 4", "Ionosphere_class b", "Balance scale_class 1", "Glass identification"], "metric": ["Average Accuracy"], "title": "Automatic support vector data description"} {"abstract": "Most state-of-the-art action localization systems process each action proposal individually, without explicitly exploiting their relations during learning. However, the relations between proposals actually play an important role in action localization, since a meaningful action always consists of multiple proposals in a video. In this paper, we propose to exploit the proposal-proposal relations using Graph Convolutional Networks (GCNs).
First, we construct an action proposal graph, where each proposal is represented as a node and the relation between two proposals as an edge. Here, we use two types of relations, one for capturing the context information for each proposal and the other one for characterizing the correlations between distinct actions. Then we apply the GCNs over the graph to model the relations among different proposals and learn powerful representations for the action classification and localization. Experimental results show that our approach significantly outperforms the state-of-the-art on THUMOS14 (49.1% versus 42.8%). Moreover, augmentation experiments on ActivityNet also verify the efficacy of modeling action proposal relationships. Codes are available at https://github.com/Alvin-Zeng/PGCN.", "field": [], "task": ["Action Classification", "Action Classification ", "Action Localization", "Temporal Action Localization"], "method": [], "dataset": ["ActivityNet-1.3", "THUMOS\u201914"], "metric": ["mAP", "mAP IOU@0.95", "mAP IOU@0.5", "mAP IOU@0.2", "mAP IOU@0.4", "mAP IOU@0.3", "mAP IOU@0.75", "mAP IOU@0.1"], "title": "Graph Convolutional Networks for Temporal Action Localization"} {"abstract": "Image animation consists of generating a video sequence so that an object in a source image is animated according to the motion of a driving video. Our framework addresses this problem without using any annotation or prior information about the specific object to animate. Once trained on a set of videos depicting objects of the same category (e.g. faces, human bodies), our method can be applied to any object of this class. To achieve this, we decouple appearance and motion information using a self-supervised formulation. To support complex motions, we use a representation consisting of a set of learned keypoints along with their local affine transformations. A generator network models occlusions arising during target motions and combines the appearance extracted from the source image and the motion derived from the driving video. Our framework scores best on diverse benchmarks and on a variety of object categories. Our source code is publicly available.", "field": [], "task": ["Image Animation", "Video Reconstruction"], "method": [], "dataset": ["Tai-Chi-HD"], "metric": ["L1"], "title": "First Order Motion Model for Image Animation"} {"abstract": "This paper aims to develop a method that can accurately\r\nestimate the crowd count from an individual image with arbitrary crowd density and arbitrary perspective. To this end,\r\nwe have proposed a simple but effective Multi-column Convolutional Neural Network (MCNN) architecture to map the\r\nimage to its crowd density map. The proposed MCNN allows the input image to be of arbitrary size or resolution.\r\nBy utilizing filters with receptive fields of different sizes, the\r\nfeatures learned by each column CNN are adaptive to variations in people/head size due to perspective effect or image\r\nresolution. Furthermore, the true density map is computed accurately based on geometry-adaptive kernels which do\r\nnot require knowing the perspective map of the input image. Since existing crowd counting datasets do not adequately cover all the challenging situations considered in our work,\r\nwe have collected and labelled a large new dataset that\r\nincludes 1198 images with about 330,000 heads annotated. On this challenging new dataset, as well as all existing\r\ndatasets, we conduct extensive experiments to verify the effectiveness of the proposed model and method.
In particular, with the proposed simple MCNN model, our method\r\noutperforms all existing methods. In addition, experiments\r\nshow that our model, once trained on one dataset, can be\r\nreadily transferred to a new dataset.", "field": [], "task": ["Crowd Counting"], "method": [], "dataset": ["ShanghaiTech A", "ShanghaiTech B", "WorldExpo\u201910", "Venice", "UCF-QNRF", "UCF CC 50"], "metric": ["MAE", "Average MAE"], "title": "Single-Image Crowd Counting via Multi-Column Convolutional Neural Network"} {"abstract": "Zero-shot learning strives to classify unseen categories for which no data is available during training. In the generalized variant, the test samples can further belong to seen or unseen categories. The state-of-the-art relies on Generative Adversarial Networks that synthesize unseen class features by leveraging class-specific semantic embeddings. During training, they generate semantically consistent features, but discard this constraint during feature synthesis and classification. We propose to enforce semantic consistency at all stages of (generalized) zero-shot learning: training, feature synthesis and classification. We first introduce a feedback loop, from a semantic embedding decoder, that iteratively refines the generated features during both the training and feature synthesis stages. The synthesized features together with their corresponding latent embeddings from the decoder are then transformed into discriminative features and utilized during classification to reduce ambiguities among categories. Experiments on (generalized) zero-shot object and action classification reveal the benefit of semantic consistency and iterative feedback, outperforming existing methods on six zero-shot learning benchmarks. Source code at https://github.com/akshitac8/tfvaegan.", "field": [], "task": ["Action Classification", "Action Classification ", "Action Recognition In Videos ", "Generalized Zero-Shot Learning", "Zero-Shot Learning"], "method": [], "dataset": ["Oxford 102 Flower", "SUN Attribute", "CUB-200-2011", "AWA2"], "metric": ["average top-1 classification accuracy", "Harmonic mean"], "title": "Latent Embedding Feedback and Discriminative Features for Zero-Shot Classification"} {"abstract": "In this paper, we propose the first framework (UCNet) to employ uncertainty for RGB-D saliency detection by learning from the data labeling process. Existing RGB-D saliency detection methods treat the saliency detection task as a point estimation problem, and produce a single saliency map following a deterministic learning pipeline. Inspired by the saliency data labeling process, we propose probabilistic RGB-D saliency detection network via conditional variational autoencoders to model human annotation uncertainty and generate multiple saliency maps for each input image by sampling in the latent space. With the proposed saliency consensus process, we are able to generate an accurate saliency map based on these multiple predictions. 
Quantitative and qualitative evaluations on six challenging benchmark datasets against 18 competing algorithms demonstrate the effectiveness of our approach in learning the distribution of saliency maps, leading to a new state-of-the-art in RGB-D saliency detection.", "field": [], "task": ["RGB-D Salient Object Detection", "Saliency Detection"], "method": [], "dataset": ["STERE", "NLPR", "DES", "SIP", "LFSD", "NJU2K"], "metric": ["Average MAE", "S-Measure"], "title": "UC-Net: Uncertainty Inspired RGB-D Saliency Detection via Conditional Variational Autoencoders"} {"abstract": "Grounding (i.e. localizing) arbitrary, free-form textual phrases in visual\ncontent is a challenging problem with many applications for human-computer\ninteraction and image-text reference resolution. Few datasets provide the\nground truth spatial localization of phrases, thus it is desirable to learn\nfrom data with no or little grounding supervision. We propose a novel approach\nwhich learns grounding by reconstructing a given phrase using an attention\nmechanism, which can be either latent or optimized directly. During training\nour approach encodes the phrase using a recurrent network language model and\nthen learns to attend to the relevant image region in order to reconstruct the\ninput phrase. At test time, the correct attention, i.e., the grounding, is\nevaluated. If grounding supervision is available it can be directly applied via\na loss over the attention mechanism. We demonstrate the effectiveness of our\napproach on the Flickr 30k Entities and ReferItGame datasets with different\nlevels of supervision, ranging from no supervision over partial supervision to\nfull supervision. Our supervised variant improves by a large margin over the\nstate-of-the-art on both datasets.", "field": [], "task": ["Language Modelling", "Natural Language Visual Grounding", "Phrase Grounding", "Visual Grounding"], "method": [], "dataset": ["Flickr30k Entities Test"], "metric": ["R@1"], "title": "Grounding of Textual Phrases in Images by Reconstruction"} {"abstract": "We address the problem of acoustic source separation in a deep learning\nframework we call \"deep clustering.\" Rather than directly estimating signals or\nmasking functions, we train a deep network to produce spectrogram embeddings\nthat are discriminative for partition labels given in training data. Previous\ndeep network approaches provide great advantages in terms of learning power and\nspeed, but it has been unclear how to use them to separate signals\nin a class-independent way. In contrast, spectral clustering approaches are\nflexible with respect to the classes and number of items to be segmented, but\nit has been unclear how to leverage the learning power and speed of deep\nnetworks. To obtain the best of both worlds, we use an objective function\nto train embeddings that yield a low-rank approximation to an ideal pairwise\naffinity matrix, in a class-independent way. This avoids the high cost of\nspectral factorization and instead produces compact clusters that are amenable\nto simple clustering methods. The segmentations are therefore implicitly\nencoded in the embeddings, and can be \"decoded\" by clustering. Preliminary\nexperiments show that the proposed method can separate speech: when trained on\nspectrogram features containing mixtures of two speakers, and tested on\nmixtures of a held-out set of speakers, it can infer masking functions that\nimprove signal quality by around 6dB.
We show that the model can generalize to\nthree-speaker mixtures despite training only on two-speaker mixtures. The\nframework can be used without class labels, and therefore has the potential to\nbe trained on a diverse set of sound types, and to generalize to novel sources.\nWe hope that future work will lead to segmentation of arbitrary sounds, with\nextensions to microphone array methods as well as image segmentation and other\ndomains.", "field": [], "task": ["Deep Clustering", "Semantic Segmentation", "Speech Separation"], "method": [], "dataset": ["wsj0-2mix"], "metric": ["SI-SDRi"], "title": "Deep clustering: Discriminative embeddings for segmentation and separation"} {"abstract": "The framework of variational autoencoders (VAEs) provides a principled method for jointly learning latent-variable models and corresponding inference models. However, the main drawback of this approach is the blurriness of the generated images. Some studies link this effect to the objective function, namely, the (negative) log-likelihood. Here, we propose to enhance VAEs by adding a random variable that is a downscaled version of the original image and still use the log-likelihood function as the learning objective. Further, by providing the downscaled image as an input to the decoder, it can be used in a manner similar to the super-resolution. We present empirically that the proposed approach performs comparably to VAEs in terms of the negative log-likelihood, but it obtains a better FID score in data synthesis.", "field": [], "task": ["Image Generation", "Latent Variable Models", "Super-Resolution"], "method": [], "dataset": ["CIFAR-10"], "metric": ["bits/dimension"], "title": "Super-resolution Variational Auto-Encoders"} {"abstract": "Attempts to render deep learning models interpretable, data-efficient, and robust have seen some success through hybridisation with rule-based systems, for example, in Neural Theorem Provers (NTPs). These neuro-symbolic models can induce interpretable rules and learn representations from data via back-propagation, while providing logical explanations for their predictions. However, they are restricted by their computational complexity, as they need to consider all possible proof paths for explaining a goal, thus rendering them unfit for large-scale applications. We present Conditional Theorem Provers (CTPs), an extension to NTPs that learns an optimal rule selection strategy via gradient-based optimisation. We show that CTPs are scalable and yield state-of-the-art results on the CLUTRR dataset, which tests systematic generalisation of neural models by learning to reason over smaller graphs and evaluating on larger ones. Finally, CTPs show better link prediction results on standard benchmarks in comparison with other neural-symbolic models, while being explainable. All source code and datasets are available online, at https://github.com/uclnlp/ctp.", "field": [], "task": ["Link Prediction", "Relational Reasoning"], "method": [], "dataset": ["CLUTRR (k=3)"], "metric": ["7 Hops", "6 Hops", "9 Hops", "8 Hops", "4 Hops", "5 Hops", "10 Hops"], "title": "Learning Reasoning Strategies in End-to-End Differentiable Proving"} {"abstract": "We present a simple but yet effective method for learning distinctive 3D local deep descriptors (DIPs) that can be used to register point clouds without requiring an initial alignment. 
Point cloud patches are extracted, canonicalised with respect to their estimated local reference frame and encoded into rotation-invariant compact descriptors by a PointNet-based deep neural network. DIPs can effectively generalise across different sensor modalities because they are learnt end-to-end from locally and randomly sampled points. Because DIPs encode only local geometric information, they are robust to clutter, occlusions and missing regions. We evaluate and compare DIPs against alternative hand-crafted and deep descriptors on several indoor and outdoor datasets consisting of point clouds reconstructed using different sensors. Results show that DIPs (i) achieve comparable results to the state-of-the-art on RGB-D indoor scenes (3DMatch dataset), (ii) outperform state-of-the-art by a large margin on laser-scanner outdoor scenes (ETH dataset), and (iii) generalise to indoor scenes reconstructed with the Visual-SLAM system of Android ARCore. Source code: https://github.com/fabiopoiesi/dip.", "field": [], "task": ["Point Cloud Registration"], "method": [], "dataset": ["3DMatch Benchmark"], "metric": ["Recall"], "title": "Distinctive 3D local deep descriptors"} {"abstract": "Denoising Score Matching with Annealed Langevin Sampling (DSM-ALS) has recently found success in generative modeling. The approach works by first training a neural network to estimate the score of a distribution, and then using Langevin dynamics to sample from the data distribution assumed by the score network. Despite the convincing visual quality of samples, this method appears to perform worse than Generative Adversarial Networks (GANs) under the Fr\\'echet Inception Distance, a standard metric for generative models. We show that this apparent gap vanishes when denoising the final Langevin samples using the score network. In addition, we propose two improvements to DSM-ALS: 1) Consistent Annealed Sampling as a more stable alternative to Annealed Langevin Sampling, and 2) a hybrid training formulation, composed of both Denoising Score Matching and adversarial objectives. By combining these two techniques and exploring different network architectures, we elevate score matching methods and obtain results competitive with state-of-the-art image generation on CIFAR-10.", "field": [], "task": ["Denoising", "Image Generation"], "method": [], "dataset": ["CIFAR-10"], "metric": ["FID"], "title": "Adversarial score matching and improved sampling for image generation"} {"abstract": "We present a novel and high-performance 3D object detection framework, named PointVoxel-RCNN (PV-RCNN), for accurate 3D object detection from point clouds. Our proposed method deeply integrates both 3D voxel Convolutional Neural Network (CNN) and PointNet-based set abstraction to learn more discriminative point cloud features. It takes advantages of efficient learning and high-quality proposals of the 3D voxel CNN and the flexible receptive fields of the PointNet-based networks. Specifically, the proposed framework summarizes the 3D scene with a 3D voxel CNN into a small set of keypoints via a novel voxel set abstraction module to save follow-up computations and also to encode representative scene features. Given the high-quality 3D proposals generated by the voxel CNN, the RoI-grid pooling is proposed to abstract proposal-specific features from the keypoints to the RoI-grid points via keypoint set abstraction with multiple receptive fields. 
Compared with conventional pooling operations, the RoI-grid feature points encode much richer context information for accurately estimating object confidences and locations. Extensive experiments on both the KITTI dataset and the Waymo Open dataset show that our proposed PV-RCNN surpasses state-of-the-art 3D detection methods with remarkable margins by using only point clouds.", "field": [], "task": ["3D Object Detection", "Object Detection"], "method": [], "dataset": ["KITTI Cyclists Hard", "KITTI Cars Hard", "KITTI Cars Moderate", "KITTI Cyclists Moderate", "waymo cyclist", "waymo vehicle", "waymo all_ns", "waymo pedestrian", "KITTI Cyclists Easy", "KITTI Cars Easy"], "metric": ["APH/L2", "AP"], "title": "PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection"} {"abstract": "Colonoscopy is the gold standard for examination and detection of colorectal polyps. Localization and delineation of polyps can play a vital role in treatment (e.g., surgical planning) and prognostic decision making. Polyp segmentation can provide detailed boundary information for clinical analysis. Convolutional neural networks have improved the performance in colonoscopy. However, polyps usually possess various challenges, such as intra-and inter-class variation and noise. While manual labeling for polyp assessment requires time from experts and is prone to human error (e.g., missed lesions), an automated, accurate, and fast segmentation can improve the quality of delineated lesion boundaries and reduce missed rate. The Endotect challenge provides an opportunity to benchmark computer vision methods by training on the publicly available Hyperkvasir and testing on a separate unseen dataset. In this paper, we propose a novel architecture called ``DDANet'' based on a dual decoder attention network. Our experiments demonstrate that the model trained on the Kvasir-SEG dataset and tested on an unseen dataset achieves a dice coefficient of 0.7874, mIoU of 0.7010, recall of 0.7987, and a precision of 0.8577, demonstrating the generalization ability of our model.", "field": [], "task": ["Decision Making", "Medical Image Segmentation"], "method": [], "dataset": ["Endotect Polyp Segmentation", "Kvasir-SEG"], "metric": ["DSC", "mean Dice", "FPS", "mIoU"], "title": "DDANet: Dual Decoder Attention Network for Automatic Polyp Segmentation"} {"abstract": "We present a paper abstract writing system based on an attentive neural\nsequence-to-sequence model that can take a title as input and automatically\ngenerate an abstract. We design a novel Writing-editing Network that can attend\nto both the title and the previously generated abstract drafts and then\niteratively revise and polish the abstract. With two series of Turing tests,\nwhere the human judges are asked to distinguish the system-generated abstracts\nfrom human-written ones, our system passes Turing tests by junior domain\nexperts at a rate up to 30% and by non-expert at a rate up to 80%.", "field": [], "task": ["Paper generation", "Text Generation"], "method": [], "dataset": ["ACL Title and Abstract Dataset"], "metric": ["ROUGE-L", "METEOR"], "title": "Paper Abstract Writing through Editing Mechanism"} {"abstract": "Observing that Semantic features learned in an image classification task and\nAppearance features learned in a similarity matching task complement each\nother, we build a twofold Siamese network, named SA-Siam, for real-time object\ntracking. 
SA-Siam is composed of a semantic branch and an appearance branch.\nEach branch is a similarity-learning Siamese network. An important design\nchoice in SA-Siam is to separately train the two branches to keep the\nheterogeneity of the two types of features. In addition, we propose a channel\nattention mechanism for the semantic branch. Channel-wise weights are computed\naccording to the channel activations around the target position. While the\ninherited architecture from SiamFC \\cite{SiamFC} allows our tracker to operate\nbeyond real-time, the twofold design and the attention mechanism significantly\nimprove the tracking performance. The proposed SA-Siam outperforms all other\nreal-time trackers by a large margin on OTB-2013/50/100 benchmarks.", "field": [], "task": ["Image Classification", "Object Tracking"], "method": [], "dataset": ["OTB-2013", "OTB-2015", "OTB-50"], "metric": ["AUC"], "title": "A Twofold Siamese Network for Real-Time Object Tracking"} {"abstract": "We propose a novel crowd counting model that maps a given crowd scene to its\ndensity. Crowd analysis is compounded by a myriad of factors like inter-occlusion\nbetween people due to extreme crowding, high similarity of appearance between\npeople and background elements, and large variability of camera view-points.\nCurrent state-of-the-art approaches tackle these factors by using multi-scale\nCNN architectures, recurrent networks and late fusion of features from\nmulti-column CNN with different receptive fields. We propose a switching\nconvolutional neural network that leverages the variation of crowd density within\nan image to improve the accuracy and localization of the predicted crowd count.\nPatches from a grid within a crowd scene are relayed to independent CNN\nregressors based on the crowd count prediction quality of the CNN established\nduring training. The independent CNN regressors are designed to have different\nreceptive fields and a switch classifier is trained to relay the crowd scene\npatch to the best CNN regressor. We perform extensive experiments on all major\ncrowd counting datasets and demonstrate better performance compared to current\nstate-of-the-art methods. We provide interpretable representations of the\nmultichotomy of the space of crowd scene patches inferred from the switch. It is\nobserved that the switch relays an image patch to a particular CNN column based\non the density of the crowd.", "field": [], "task": ["Crowd Counting"], "method": [], "dataset": ["ShanghaiTech A", "ShanghaiTech B", "WorldExpo\u201910", "Venice", "UCF-QNRF", "UCF CC 50"], "metric": ["MAE", "Average MAE"], "title": "Switching Convolutional Neural Network for Crowd Counting"} {"abstract": "In this paper, we study the task of 3D human pose estimation in the wild.\nThis task is challenging due to the lack of training data, as existing datasets contain\neither in-the-wild images with 2D pose or lab images with 3D pose.\n We propose a weakly-supervised transfer learning method that uses mixed 2D\nand 3D labels in a unified deep neural network that presents a two-stage\ncascaded structure. Our network augments a state-of-the-art 2D pose estimation\nsub-network with a 3D depth regression sub-network. Unlike previous two stage\napproaches that train the two sub-networks sequentially and separately, our\ntraining is end-to-end and fully exploits the correlation between the 2D pose\nand depth estimation sub-tasks. The deep features are better learnt through\nshared representations.
In doing so, the 3D pose labels in controlled lab\nenvironments are transferred to in-the-wild images. In addition, we introduce a\n3D geometric constraint to regularize the 3D pose prediction, which is\neffective in the absence of ground truth depth labels. Our method achieves\ncompetitive results on both 2D and 3D benchmarks.", "field": [], "task": ["3D Human Pose Estimation", "Pose Estimation", "Pose Prediction", "Regression", "Transfer Learning"], "method": [], "dataset": ["Human3.6M", "Geometric Pose Affordance "], "metric": ["Average MPJPE (mm)", "MPJPE (CS)", "PCK3D (CS)", "PCK3D (CA)", "MPJPE (CA)"], "title": "Towards 3D Human Pose Estimation in the Wild: a Weakly-supervised Approach"} {"abstract": "We present a memory augmented neural network for natural language\nunderstanding: Neural Semantic Encoders. NSE is equipped with a novel memory\nupdate rule and has a variable-sized encoding memory that evolves over time and\nmaintains the understanding of input sequences through read, compose and write\noperations. NSE can also access multiple and shared memories. In this paper, we\ndemonstrate the effectiveness and the flexibility of NSE on five different\nnatural language tasks: natural language inference, question answering,\nsentence classification, document sentiment analysis and machine translation,\nwhere NSE achieved state-of-the-art performance when evaluated on publicly\navailable benchmarks. For example, our shared-memory model showed an\nencouraging result on neural machine translation, improving an attention-based\nbaseline by approximately 1.0 BLEU.", "field": [], "task": ["Machine Translation", "Natural Language Inference", "Natural Language Understanding", "Question Answering", "Sentence Classification", "Sentiment Analysis"], "method": [], "dataset": ["SST-2 Binary classification", "WMT2014 English-German", "SNLI", "WikiQA"], "metric": ["% Test Accuracy", "MAP", "Parameters", "MRR", "BLEU score", "Accuracy", "% Train Accuracy"], "title": "Neural Semantic Encoders"} {"abstract": "We propose RUDDER, a novel reinforcement learning approach for delayed rewards in finite Markov decision processes (MDPs). In MDPs the Q-values are equal to the expected immediate reward plus the expected future rewards. The latter are related to bias problems in temporal difference (TD) learning and to high variance problems in Monte Carlo (MC) learning. Both problems are even more severe when rewards are delayed. RUDDER aims at making the expected future rewards zero, which simplifies Q-value estimation to computing the mean of the immediate reward. We propose the following two new concepts to push the expected future rewards toward zero. (i) Reward redistribution that leads to return-equivalent decision processes with the same optimal policies and, when optimal, zero expected future rewards. (ii) Return decomposition via contribution analysis which transforms the reinforcement learning task into a regression task at which deep learning excels. On artificial tasks with delayed rewards, RUDDER is significantly faster than MC and exponentially faster than Monte Carlo Tree Search (MCTS), TD({\lambda}), and reward shaping approaches. At Atari games, RUDDER on top of a Proximal Policy Optimization (PPO) baseline improves the scores, which is most prominent at games with delayed rewards.
Source code is available at \\url{https://github.com/ml-jku/rudder} and demonstration videos at \\url{https://goo.gl/EQerZV}.", "field": [], "task": ["Atari Games", "Regression"], "method": [], "dataset": ["Atari 2600 Bowling", "Atari 2600 Yars Revenge"], "metric": ["Score"], "title": "RUDDER: Return Decomposition for Delayed Rewards"} {"abstract": "Video super-resolution (SR) aims to generate a sequence of high-resolution\n(HR) frames with plausible and temporally consistent details from their\nlow-resolution (LR) counterparts. The generation of accurate correspondence\nplays a significant role in video SR. It is demonstrated by traditional video\nSR methods that simultaneous SR of both images and optical flows can provide\naccurate correspondences and better SR results. However, LR optical flows are\nused in existing deep learning based methods for correspondence generation. In\nthis paper, we propose an end-to-end trainable video SR framework to\nsuper-resolve both images and optical flows. Specifically, we first propose an\noptical flow reconstruction network (OFRnet) to infer HR optical flows in a\ncoarse-to-fine manner. Then, motion compensation is performed according to the\nHR optical flows. Finally, compensated LR inputs are fed to a super-resolution\nnetwork (SRnet) to generate the SR results. Extensive experiments demonstrate\nthat HR optical flows provide more accurate correspondences than their LR\ncounterparts and improve both accuracy and consistency performance. Comparative\nresults on the Vid4 and DAVIS-10 datasets show that our framework achieves the\nstate-of-the-art performance.", "field": [], "task": ["Motion Compensation", "Optical Flow Estimation", "Super-Resolution", "Video Super-Resolution"], "method": [], "dataset": ["Vid4 - 4x upscaling"], "metric": ["SSIM", "PSNR", "MOVIE"], "title": "Learning for Video Super-Resolution through HR Optical Flow Estimation"} {"abstract": "In this paper, we address the problem of enhancing the speech of a speaker of interest in a cocktail party scenario when visual information of the speaker of interest is available. Contrary to most previous studies, we do not learn visual features on the typically small audio-visual datasets, but use an already available face landmark detector (trained on a separate image dataset). The landmarks are used by LSTM-based models to generate time-frequency masks which are applied to the acoustic mixed-speech spectrogram. Results show that: (i) landmark motion features are very effective features for this task, (ii) similarly to previous work, reconstruction of the target speaker's spectrogram mediated by masking is significantly more accurate than direct spectrogram reconstruction, and (iii) the best masks depend on both motion landmark features and the input mixed-speech spectrogram. To the best of our knowledge, our proposed models are the first models trained and evaluated on the limited size GRID and TCD-TIMIT datasets, that achieve speaker-independent speech enhancement in a multi-talker setting.", "field": [], "task": ["Speech Enhancement", "Speech Separation"], "method": [], "dataset": ["GRID corpus (mixed-speech)", "TCD-TIMIT corpus (mixed-speech)"], "metric": ["SDR", "PESQ"], "title": "Face Landmark-based Speaker-Independent Audio-Visual Speech Enhancement in Multi-Talker Environments"} {"abstract": "Many seemingly unrelated computer vision tasks can be viewed as a special\ncase of image decomposition into separate layers. 
For example, image\nsegmentation (separation into foreground and background layers); transparent\nlayer separation (into reflection and transmission layers); Image dehazing\n(separation into a clear image and a haze map), and more. In this paper we\npropose a unified framework for unsupervised layer decomposition of a single\nimage, based on coupled \"Deep-image-Prior\" (DIP) networks. It was shown\n[Ulyanov et al] that the structure of a single DIP generator network is\nsufficient to capture the low-level statistics of a single image. We show that\ncoupling multiple such DIPs provides a powerful tool for decomposing images\ninto their basic components, for a wide variety of applications. This\ncapability stems from the fact that the internal statistics of a mixture of\nlayers is more complex than the statistics of each of its individual\ncomponents. We show the power of this approach for Image-Dehazing, Fg/Bg\nSegmentation, Watermark-Removal, Transparency Separation in images and video,\nand more. These capabilities are achieved in a totally unsupervised way, with\nno training examples other than the input image/video itself.", "field": [], "task": ["Image Dehazing", "Semantic Segmentation", "Transparency Separation"], "method": [], "dataset": ["O-Haze"], "metric": ["PSNR"], "title": "\"Double-DIP\": Unsupervised Image Decomposition via Coupled Deep-Image-Priors"} {"abstract": "We demonstrate that for sentence-level relation extraction it is beneficial to consider other relations in the sentential context while predicting the target relation. Our architecture uses an LSTM-based encoder to jointly learn representations for all relations in a single sentence. We combine the context representations with an attention mechanism to make the final prediction. We use the Wikidata knowledge base to construct a dataset of multiple relations per sentence and to evaluate our approach. Compared to a baseline system, our method results in an average error reduction of 24 on a held-out set of relations. The code and the dataset to replicate the experiments are made available at \\url{https://github.com/ukplab/}.", "field": [], "task": ["Question Answering", "Relation Extraction"], "method": [], "dataset": ["Wikipedia-Wikidata relations"], "metric": ["Error rate"], "title": "Context-Aware Representations for Knowledge Base Relation Extraction"} {"abstract": "Previous CNN-based video super-resolution approaches need to align multiple\nframes to the reference. In this paper, we show that proper frame alignment and\nmotion compensation is crucial for achieving high quality results. We\naccordingly propose a `sub-pixel motion compensation' (SPMC) layer in a CNN\nframework. Analysis and experiments show the suitability of this layer in video\nSR. The final end-to-end, scalable CNN framework effectively incorporates the\nSPMC layer and fuses multiple frames to reveal image details. 
Our\nimplementation can generate visually and quantitatively high-quality results,\nsuperior to current state-of-the-arts, without the need of parameter tuning.", "field": [], "task": ["Image Super-Resolution", "Motion Compensation", "Super-Resolution", "Video Super-Resolution"], "method": [], "dataset": ["Set5 - 4x upscaling", "Vid4 - 4x upscaling", "Set14 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Detail-revealing Deep Video Super-resolution"} {"abstract": "Unsupervised domain adaptation aims to transfer knowledge from a source domain to a target domain so that the target domain data can be recognized without any explicit labelling information for this domain. One limitation of the problem setting is that testing data, despite having no labels, from the target domain is needed during training, which prevents the trained model being directly applied to classify unseen test instances. We formulate a new cross-domain classification problem arising from real-world scenarios where labelled data is available for a subset of classes (known classes) in the target domain, and we expect to recognize new samples belonging to any class (known and unseen classes) once the model is learned. This is a generalized zero-shot learning problem where the side information comes from the source domain in the form of labelled samples instead of class-level semantic representations commonly used in traditional zero-shot learning. We present a unified domain adaptation framework for both unsupervised and zero-shot learning conditions. Our approach learns a joint subspace from source and target domains so that the projections of both data in the subspace can be domain invariant and easily separable. We use the supervised locality preserving projection (SLPP) as the enabling technique and conduct experiments under both unsupervised and zero-shot learning conditions, achieving state-of-the-art results on three domain adaptation benchmark datasets: Office-Caltech, Office31 and Office-Home.", "field": [], "task": ["Domain Adaptation", "Generalized Zero-Shot Learning", "Unsupervised Domain Adaptation", "Zero-Shot Learning"], "method": [], "dataset": ["Office-Caltech"], "metric": ["Average Accuracy"], "title": "Unifying Unsupervised Domain Adaptation and Zero-Shot Visual Recognition"} {"abstract": "We address the problem of video representation learning without\nhuman-annotated labels. While previous efforts address the problem by designing\nnovel self-supervised tasks using video data, the learned features are merely\non a frame-by-frame basis, which are not applicable to many video analytic\ntasks where spatio-temporal features are prevailing. In this paper we propose a\nnovel self-supervised approach to learn spatio-temporal features for video\nrepresentation. Inspired by the success of two-stream approaches in video\nclassification, we propose to learn visual features by regressing both motion\nand appearance statistics along spatial and temporal dimensions, given only the\ninput video data. Specifically, we extract statistical concepts (fast-motion\nregion and the corresponding dominant direction, spatio-temporal color\ndiversity, dominant color, etc.) from simple patterns in both spatial and\ntemporal domains. Unlike prior puzzles that are even hard for humans to solve,\nthe proposed approach is consistent with human inherent visual habits and\ntherefore easy to answer. We conduct extensive experiments with C3D to validate\nthe effectiveness of our proposed approach. 
The experiments show that our\napproach can significantly improve the performance of C3D when applied to video\nclassification tasks. Code is available at\nhttps://github.com/laura-wang/video_repres_mas.", "field": [], "task": ["Action Recognition", "Representation Learning", "Self-Supervised Action Recognition", "Video Classification"], "method": [], "dataset": ["HMDB-51", "HMDB51", "UCF101"], "metric": ["Average accuracy of 3 splits", "3-fold Accuracy", "Pre-Training Dataset", "Top-1 Accuracy"], "title": "Self-supervised Spatio-temporal Representation Learning for Videos by Predicting Motion and Appearance Statistics"} {"abstract": "The large availability of depth sensors provides valuable complementary information for salient object detection (SOD) in RGBD images. However, due to the inherent difference between RGB and depth information, extracting features from the depth channel using ImageNet pre-trained backbone models and fusing them with RGB features directly are sub-optimal. In this paper, we incorporate contrast prior, which used to be a dominant cue in non-deep-learning-based SOD approaches, into a CNN-based architecture to enhance the depth information. The enhanced depth cues are further integrated with RGB features for SOD, using a novel fluid pyramid integration, which can make better use of multi-scale cross-modal features. Comprehensive experiments on 5 challenging benchmark datasets demonstrate the superiority of the architecture CPFP over 9 state-of-the-art alternative methods.", "field": [], "task": ["Object Detection", "RGB-D Salient Object Detection", "RGB Salient Object Detection", "Salient Object Detection"], "method": [], "dataset": ["STERE", "NLPR", "DES", "SIP", "LFSD", "NJU2K", "SSD"], "metric": ["max E-Measure", "Average MAE", "S-Measure", "max F-Measure"], "title": "Contrast Prior and Fluid Pyramid Integration for RGBD Salient Object Detection"} {"abstract": "Capsule networks are a recently proposed type of neural network shown to outperform alternatives in challenging shape recognition tasks. In capsule networks, scalar neurons are replaced with capsule vectors or matrices, whose entries represent different properties of objects. The relationships between objects and their parts are learned via trainable viewpoint-invariant transformation matrices, and the presence of a given object is decided by the level of agreement among votes from its parts. This interaction occurs between capsule layers and is a process called routing-by-agreement. In this paper, we propose a new capsule routing algorithm derived from Variational Bayes for fitting a mixture of transforming Gaussians, and show it is possible to transform our capsule network into a Capsule-VAE. Our Bayesian approach addresses some of the inherent weaknesses of MLE based models such as the variance-collapse by modelling uncertainty over capsule pose parameters. We outperform the state-of-the-art on smallNORB using 50% fewer capsules than previously reported, achieve competitive performances on CIFAR-10, Fashion-MNIST, SVHN, and demonstrate significant improvement in MNIST to affNIST generalisation over previous works.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["smallNORB"], "metric": ["Classification Error"], "title": "Capsule Routing via Variational Bayes"} {"abstract": "Heretofore, neural networks with external memory have been restricted to a single memory with lossy representations of memory interactions.
A rich representation of relationships between memory pieces calls for a high-order and segregated relational memory. In this paper, we propose to separate the storage of individual experiences (item memory) and their occurring relationships (relational memory). The idea is implemented through a novel Self-attentive Associative Memory (SAM) operator. Founded upon the outer product, SAM forms a set of associative memories that represent the hypothetical high-order relationships between arbitrary pairs of memory elements, through which a relational memory is constructed from an item memory. The two memories are wired into a single sequential model capable of both memorization and relational reasoning. We achieve competitive results with our proposed two-memory model in a diversity of machine learning tasks, from challenging synthetic problems to practical testbeds such as geometry, graph, reinforcement learning, and question answering.", "field": [], "task": ["Question Answering", "Relational Reasoning"], "method": [], "dataset": ["bAbi"], "metric": ["Mean Error Rate", "Accuracy (trained on 10k)"], "title": "Self-Attentive Associative Memory"} {"abstract": "The aim of unsupervised domain adaptation is to leverage the knowledge in a labeled (source) domain to improve a model's learning performance with an unlabeled (target) domain -- the basic strategy being to mitigate the effects of discrepancies between the two distributions. Most existing algorithms can only handle unsupervised closed set domain adaptation (UCSDA), i.e., where the source and target domains are assumed to share the same label set. In this paper, we target a more challenging but realistic setting: unsupervised open set domain adaptation (UOSDA), where the target domain has unknown classes that are not found in the source domain. This is the first study to provide a learning bound for open set domain adaptation, which we do by theoretically investigating the risk of the target classifier on unknown classes. The proposed learning bound has a special term, namely open set difference, which reflects the risk of the target classifier on unknown classes. Further, we present a novel and theoretically guided unsupervised algorithm for open set domain adaptation, called distribution alignment with open difference (DAOD), which is based on regularizing this open set difference bound. The experiments on several benchmark datasets show the superior performance of the proposed UOSDA method compared with the state-of-the-art methods in the literature.", "field": [], "task": ["Domain Adaptation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["Office-31", "Office-Home"], "metric": ["Average Accuracy", "Accuracy"], "title": "Open Set Domain Adaptation: Theoretical Bound and Algorithm"} {"abstract": "Multi-view subspace clustering aims to discover the inherent structure by fusing multi-view complementary information. Most existing methods first extract multiple types of hand-crafted features and then learn a joint affinity matrix for clustering. The disadvantage lies in two aspects: 1) Multi-view relations are not embedded into feature learning. 2) The end-to-end learning manner of deep learning is not well used in multi-view clustering. To address the above issues, we propose a novel multi-view deep subspace clustering network (MvDSCN) by learning a multi-view self-representation matrix in an end-to-end manner. MvDSCN consists of two sub-networks, i.e., diversity network (Dnet) and universality network (Unet).
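The SAM abstract above builds its relational memory from outer products of memory elements. The following sketch shows only that basic binding idea in NumPy, storing key-value pairs as a sum of outer products and reading them back with a matrix-vector product; the full SAM operator is learned, self-attentive, and higher-order, none of which is modeled here.

```python
import numpy as np

def store(memory, keys, values):
    """Accumulate key-value associations as a sum of outer products."""
    for k, v in zip(keys, values):
        memory += np.outer(k, v)
    return memory

def read(memory, query):
    """Retrieve the value bound to `query` with a matrix-vector product."""
    return query @ memory

rng = np.random.default_rng(0)
keys = rng.standard_normal((5, 32))
keys /= np.linalg.norm(keys, axis=1, keepdims=True)   # nearly orthogonal keys
values = rng.standard_normal((5, 16))
M = store(np.zeros((32, 16)), keys, values)
# Reading with a stored key recovers its value up to cross-talk from other pairs.
print(np.linalg.norm(read(M, keys[0]) - values[0]))
```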
A latent space is built upon deep convolutional auto-encoders and a self-representation matrix is learned in the latent space using a fully connected layer. Dnet learns view-specific self-representation matrices while Unet learns a common self-representation matrix for all views. To exploit the complementarity of multi-view representations, Hilbert Schmidt Independence Criterion (HSIC) is introduced as a diversity regularization, which can capture the non-linear and high-order inter-view relations. As different views share the same label space, the self-representation matrices of each view are aligned to the common one by a universality regularization. Experiments on both multi-feature and multi-modality learning validate the superiority of the proposed multi-view subspace clustering model.", "field": [], "task": ["Multi-view Subspace Clustering"], "method": [], "dataset": ["ORL"], "metric": ["Accuracy"], "title": "Multi-view Deep Subspace Clustering Networks"} {"abstract": "Despite the continuing efforts to improve the engagingness and consistency of chit-chat dialogue systems, the majority of current work simply focus on mimicking human-like responses, leaving understudied the aspects of modeling understanding between interlocutors. The research in cognitive science, instead, suggests that understanding is an essential signal for a high-quality chit-chat conversation. Motivated by this, we propose P^2 Bot, a transmitter-receiver based framework with the aim of explicitly modeling understanding. Specifically, P^2 Bot incorporates mutual persona perception to enhance the quality of personalized dialogue generation. Experiments on a large public dataset, Persona-Chat, demonstrate the effectiveness of our approach, with a considerable boost over the state-of-the-art baselines across both automatic metrics and human evaluations.", "field": [], "task": ["Dialogue Generation"], "method": [], "dataset": ["Persona-Chat"], "metric": ["Avg F1"], "title": "You Impress Me: Dialogue Generation via Mutual Persona Perception"} {"abstract": "Visual counting, a task that predicts the number of objects from an image/video, is an open-set problem by nature, i.e., the number of population can vary in $[0,+\\infty)$ in theory. However, the collected images and labeled count values are limited in reality, which means only a small closed set is observed. Existing methods typically model this task in a regression manner, while they are likely to suffer from an unseen scene with counts out of the scope of the closed set. In fact, counting is decomposable. A dense region can always be divided until sub-region counts are within the previously observed closed set. Inspired by this idea, we propose a simple but effective approach, Spatial Divide-and- Conquer Network (S-DCNet). S-DCNet only learns from a closed set but can generalize well to open-set scenarios via S-DC. S-DCNet is also efficient. To avoid repeatedly computing sub-region convolutional features, S-DC is executed on the feature map instead of on the input image. S-DCNet achieves the state-of-the-art performance on three crowd counting datasets (ShanghaiTech, UCF_CC_50 and UCF-QNRF), a vehicle counting dataset (TRANCOS) and a plant counting dataset (MTC). Compared to the previous best methods, S-DCNet brings a 20.2% relative improvement on the ShanghaiTech Part B, 20.9% on the UCF-QNRF, 22.5% on the TRANCOS and 15.1% on the MTC. Code has been made available at: https://github. 
com/xhp-hust-2018-2011/S-DCNet.", "field": [], "task": ["Crowd Counting", "Regression"], "method": [], "dataset": ["UCF CC 50", "ShanghaiTech A", "ShanghaiTech B"], "metric": ["MAE"], "title": "From Open Set to Closed Set: Counting Objects by Spatial Divide-and-Conquer"} {"abstract": "Large geometry (e.g., orientation) variances are the key challenges in the scene text detection. In this work, we first conduct experiments to investigate the capacity of networks for learning geometry variances on detecting scene texts, and find that networks can handle only limited text geometry variances. Then, we put forward a novel Geometry Normalization Module (GNM) with multiple branches, each of which is composed of one Scale Normalization Unit and one Orientation Normalization Unit, to normalize each text instance to one desired canonical geometry range through at least one branch. The GNM is general and readily plugged into existing convolutional neural network based text detectors to construct end-to-end Geometry Normalization Networks (GNNets). Moreover, we propose a geometry-aware training scheme to effectively train the GNNets by sampling and augmenting text instances from a uniform geometry variance distribution. Finally, experiments on popular benchmarks of ICDAR 2015 and ICDAR 2017 MLT validate that our method outperforms all the state-of-the-art approaches remarkably by obtaining one-forward test F-scores of 88.52 and 74.54 respectively.", "field": [], "task": ["Scene Text", "Scene Text Detection"], "method": [], "dataset": ["ICDAR 2017 MLT", "ICDAR 2015"], "metric": ["F-Measure", "Recall", "Precision"], "title": "Geometry Normalization Networks for Accurate Scene Text Detection"} {"abstract": "In this paper, we propose a novel neural single document extractive summarization model for long documents, incorporating both the global context of the whole document and the local context within the current topic. We evaluate the model on two datasets of scientific papers, Pubmed and arXiv, where it outperforms previous work, both extractive and abstractive models, on ROUGE-1, ROUGE-2 and METEOR scores. We also show that, consistently with our goal, the benefits of our method become stronger as we apply it to longer documents. Rather surprisingly, an ablation study indicates that the benefits of our model seem to come exclusively from modeling the local context, even for the longest documents.", "field": [], "task": ["Text Summarization"], "method": [], "dataset": ["arXiv", "Pubmed"], "metric": ["ROUGE-1", "ROUGE-2"], "title": "Extractive Summarization of Long Documents by Combining Global and Local Context"} {"abstract": "In self-supervised learning, a system is tasked with achieving a surrogate objective by defining alternative targets on a set of unlabeled data. The aim is to build useful representations that can be used in downstream tasks, without costly manual annotation. In this work, we propose a novel self-supervised formulation of relational reasoning that allows a learner to bootstrap a signal from information implicit in unlabeled data. Training a relation head to discriminate how entities relate to themselves (intra-reasoning) and other entities (inter-reasoning), results in rich and descriptive representations in the underlying neural network backbone, which can be used in downstream tasks such as classification and image retrieval. We evaluate the proposed method following a rigorous experimental procedure, using standard datasets, protocols, and backbones. 
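The self-supervised relational reasoning abstract above trains a relation head to decide whether two augmented views belong to the same entity, maximizing a Bernoulli log-likelihood. Below is a minimal sketch of the pair construction and that objective, with a random linear head standing in for the learned relation module; the aggregation by concatenation is an assumption, not a quotation of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def relation_pairs(z1, z2):
    """Build intra-pairs (two views of the same item, label 1) and inter-pairs
    (views of different items, label 0) from two embedding views z1, z2."""
    n = len(z1)
    pos = np.concatenate([z1, z2], axis=1)                      # matched items
    neg = np.concatenate([z1, np.roll(z2, 1, axis=0)], axis=1)  # mismatched items
    pairs = np.concatenate([pos, neg], axis=0)
    labels = np.concatenate([np.ones(n), np.zeros(n)])
    return pairs, labels

def bernoulli_nll(scores, labels):
    """Binary cross-entropy, i.e. the negative Bernoulli log-likelihood."""
    p = 1.0 / (1.0 + np.exp(-scores))
    return -np.mean(labels * np.log(p + 1e-9) + (1 - labels) * np.log(1 - p + 1e-9))

# Stand-ins for backbone embeddings of two augmentations of 8 images.
z1, z2 = rng.standard_normal((8, 64)), rng.standard_normal((8, 64))
w = rng.standard_normal(128)        # untrained linear relation head (illustrative)
pairs, labels = relation_pairs(z1, z2)
print(bernoulli_nll(pairs @ w, labels))
```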
Self-supervised relational reasoning outperforms the best competitor in all conditions by an average 14% in accuracy, and the most recent state-of-the-art model by 3%. We link the effectiveness of the method to the maximization of a Bernoulli log-likelihood, which can be considered as a proxy for maximizing the mutual information, resulting in a more efficient objective with respect to the commonly used contrastive losses.", "field": [], "task": ["Image Retrieval", "Relational Reasoning", "Representation Learning", "Self-Supervised Learning"], "method": [], "dataset": ["STL-10"], "metric": ["Accuracy (%)"], "title": "Self-Supervised Relational Reasoning for Representation Learning"} {"abstract": "Natural language processing covers a wide variety of tasks predicting syntax, semantics, and information content, and usually each type of output is generated with specially designed architectures. In this paper, we provide the simple insight that a great variety of tasks can be represented in a single unified format consisting of labeling spans and relations between spans, thus a single task-independent model can be used across different tasks. We perform extensive experiments to test this insight on 10 disparate tasks spanning dependency parsing (syntax), semantic role labeling (semantics), relation extraction (information content), aspect based sentiment analysis (sentiment), and many others, achieving performance comparable to state-of-the-art specialized models. We further demonstrate benefits of multi-task learning, and also show that the proposed method makes it easy to analyze differences and similarities in how the model handles different tasks. Finally, we convert these datasets into a unified format to build a benchmark, which provides a holistic testbed for evaluating future models for generalized natural language analysis.", "field": [], "task": ["Aspect-Based Sentiment Analysis", "Constituency Parsing", "Dependency Parsing", "Multi-Task Learning", "Named Entity Recognition", "Part-Of-Speech Tagging", "Relation Extraction", "Semantic Role Labeling", "Semantic Role Labeling (predicted predicates)", "Sentiment Analysis"], "method": [], "dataset": ["CoNLL 2012", "Penn Treebank", "WLPC", "SemEval-2010 Task 8", "CoNLL 2003 (English)"], "metric": ["LAS", "F1", "F1 score", "Accuracy"], "title": "Generalizing Natural Language Analysis through Span-relation Representations"} {"abstract": "Existing domain adaptation methods aim at learning features that can be generalized among domains. These methods commonly require to update source classifier to adapt to the target domain and do not properly handle the trade off between the source domain and the target domain. In this work, instead of training a classifier to adapt to the target domain, we use a separable component called data calibrator to help the fixed source classifier recover discrimination power in the target domain, while preserving the source domain's performance. When the difference between two domains is small, the source classifier's representation is sufficient to perform well in the target domain and outperforms GAN-based methods in digits. Otherwise, the proposed method can leverage synthetic images generated by GANs to boost performance and achieve state-of-the-art performance in digits datasets and driving scene semantic segmentation. 
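The span-relation abstract above casts many NLP tasks as labeling spans and relations between spans. The small data-structure sketch below shows what such a unified format could look like; the field names are illustrative, not the authors' schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Span:
    start: int      # index of the first token in the span
    end: int        # index one past the last token
    label: str      # e.g. an entity type, predicate, or aspect label

@dataclass
class SpanRelationExample:
    tokens: List[str]
    spans: List[Span] = field(default_factory=list)
    # relations as (head span index, tail span index, relation label)
    relations: List[Tuple[int, int, str]] = field(default_factory=list)

# NER and relation extraction expressed in the same span-relation format.
ex = SpanRelationExample(
    tokens=["Marie", "Curie", "was", "born", "in", "Warsaw", "."],
    spans=[Span(0, 2, "PER"), Span(5, 6, "LOC")],
    relations=[(0, 1, "born_in")],
)
print(ex.relations)
```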
Our method empirically reveals that certain intriguing hints, which can be mitigated by adversarial attack to domain discriminators, are one of the sources for performance degradation under the domain shift.", "field": [], "task": ["Adversarial Attack", "Domain Adaptation", "Semantic Segmentation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["SVHN-to-MNIST", "GTAV-to-Cityscapes Labels", "MNIST-to-USPS", "USPS-to-MNIST"], "metric": ["mIoU", "Accuracy"], "title": "Light-weight Calibrator: a Separable Component for Unsupervised Domain Adaptation"} {"abstract": "Rain streaks in the air appear in various blurring degrees and resolutions due to different distances from their positions to the camera. Similar rain patterns are visible in a rain image as well as its multi-scale (or multi-resolution) versions, which makes it possible to exploit such complementary information for rain streak representation. In this work, we explore the multi-scale collaborative representation for rain streaks from the perspective of input image scales and hierarchical deep features in a unified framework, termed multi-scale progressive fusion network (MSPFN) for single image rain streak removal. For similar rain streaks at different positions, we employ recurrent calculation to capture the global texture, thus allowing to explore the complementary and redundant information at the spatial dimension to characterize target rain streaks. Besides, we construct multi-scale pyramid structure, and further introduce the attention mechanism to guide the fine fusion of this correlated information from different scales. This multi-scale progressive fusion strategy not only promotes the cooperative representation, but also boosts the end-to-end training. Our proposed method is extensively evaluated on several benchmark datasets and achieves state-of-the-art results. Moreover, we conduct experiments on joint deraining, detection, and segmentation tasks, and inspire a new research direction of vision task-driven image deraining. The source code is available at \\url{https://github.com/kuihua/MSPFN}.", "field": [], "task": ["Rain Removal", "Single Image Deraining"], "method": [], "dataset": ["Test2800", "Rain100H", "Test100", "Test1200", "Rain100L"], "metric": ["SSIM", "PSNR"], "title": "Multi-Scale Progressive Fusion Network for Single Image Deraining"} {"abstract": "Malware detection and classification is a challenging problem and an active area of research. Traditional machine learning methods depend almost entirely on the ability to extract a set of discriminative features into which characterize malware. However, this feature engineering process is very time consuming. On the contrary, deep learning methods replace manual feature engineering by a system that performs both feature extraction and classification from raw data at once. Despite that, a major shortfall of these methods is their inhability to consider multiple disparate sources of information when performing classification, leading them to perform poorly when compared to multimodal approaches. In this work, we introduce Orthrus, a new bimodal approach to categorize malware into families based on deep learning. Orthrus combines two modalities of data: (1) the byte sequence representing the malware\u2019s binary content, and (2) the assembly language instructions extracted from the assembly language source code of malware, and performs automatic feature learning and classification with a convolutional neural network. 
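The MSPFN abstract above feeds multiple scales of the rainy input into a pyramid of progressively fused branches. The sketch below only builds a generic input pyramid by repeated 2x average pooling, as a stand-in for those multi-scale inputs; the learned cross-scale fusion in MSPFN is not modeled here.

```python
import numpy as np

def average_pyramid(img, levels=3):
    """Build a simple multi-scale pyramid by repeated 2x average pooling.
    Multi-scale inputs like these are what pyramid-based deraining models
    consume before fusing features across scales."""
    pyramid = [img]
    for _ in range(levels - 1):
        h, w = pyramid[-1].shape
        ds = pyramid[-1][: h - h % 2, : w - w % 2] \
            .reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(ds)
    return pyramid

rainy = np.random.rand(128, 128)
for level, im in enumerate(average_pyramid(rainy)):
    print(level, im.shape)   # (128, 128), (64, 64), (32, 32)
```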
The idea is to benefit from multiple feature types to reflect malware\u2019s characteristics. The experiments carried on the Microsoft Malware Classification Challenge dataset show that our proposed solution achieves higher classification performance than deep learning approaches in the literature and n-gram based methods.", "field": [], "task": ["Feature Engineering", "Malware Classification", "Malware Detection"], "method": [], "dataset": ["Microsoft Malware Classification Challenge"], "metric": ["Accuracy (10-fold)", "Macro F1 (10-fold)"], "title": "Orthrus: A Bimodal Learning Architecture for Malware Classification"} {"abstract": "Colonoscopy is an effective technique for detecting colorectal polyps, which are highly related to colorectal cancer. In clinical practice, segmenting polyps from colonoscopy images is of great importance since it provides valuable information for diagnosis and surgery. However, accurate polyp segmentation is a challenging task, for two major reasons: (i) the same type of polyps has a diversity of size, color and texture; and (ii) the boundary between a polyp and its surrounding mucosa is not sharp. To address these challenges, we propose a parallel reverse attention network (PraNet) for accurate polyp segmentation in colonoscopy images. Specifically, we first aggregate the features in high-level layers using a parallel partial decoder (PPD). Based on the combined feature, we then generate a global map as the initial guidance area for the following components. In addition, we mine the boundary cues using a reverse attention (RA) module, which is able to establish the relationship between areas and boundary cues. Thanks to the recurrent cooperation mechanism between areas and boundaries, our PraNet is capable of calibrating any misaligned predictions, improving the segmentation accuracy. Quantitative and qualitative evaluations on five challenging datasets across six metrics show that our PraNet improves the segmentation accuracy significantly, and presents a number of advantages in terms of generalizability, and real-time segmentation efficiency.", "field": [], "task": ["Camouflaged Object Segmentation", "Camouflage Segmentation", "Medical Image Segmentation"], "method": [], "dataset": ["ETIS-LARIBPOLYPDB", "Kvasir-SEG", "CAMO", "CVC-ClinicDB"], "metric": ["max E-Measure", "S-Measure", "mean Dice", "Weighted F-Measure", "Average MAE", "mIoU", "MAE", "DSC", "E-Measure"], "title": "PraNet: Parallel Reverse Attention Network for Polyp Segmentation"} {"abstract": "The main purpose of RGB-D salient object detection (SOD) is how to better integrate and utilize cross-modal fusion information. In this paper, we explore these issues from a new perspective. We integrate the features of different modalities through densely connected structures and use their mixed features to generate dynamic filters with receptive fields of different sizes. In the end, we implement a kind of more flexible and efficient multi-scale cross-modal feature processing, i.e. dynamic dilated pyramid module. In order to make the predictions have sharper edges and consistent saliency regions, we design a hybrid enhanced loss function to further optimize the results. This loss function is also validated to be effective in the single-modal RGB SOD task. In terms of six metrics, the proposed method outperforms the existing twelve methods on eight challenging benchmark datasets. A large number of experiments verify the effectiveness of the proposed module and loss function. 
Our code, model and results are available at \\url{https://github.com/lartpang/HDFNet}.", "field": [], "task": ["Object Detection", "RGB-D Salient Object Detection", "RGB Salient Object Detection", "Salient Object Detection"], "method": [], "dataset": ["NJU2K"], "metric": ["Average MAE", "S-Measure"], "title": "Hierarchical Dynamic Filtering Network for RGB-D Salient Object Detection"} {"abstract": "Extracting accurate foreground animals from natural animal images benefits many downstream applications such as film production and augmented reality. However, the various appearance and furry characteristics of animals challenge existing matting methods, which usually require extra user inputs such as trimap or scribbles. To resolve these problems, we study the distinct roles of semantics and details for image matting and decompose the task into two parallel sub-tasks: high-level semantic segmentation and low-level details matting. Specifically, we propose a novel Glance and Focus Matting network (GFM), which employs a shared encoder and two separate decoders to learn both tasks in a collaborative manner for end-to-end animal image matting. Besides, we establish a novel Animal Matting dataset (AM-2k) containing 2,000 high-resolution natural animal images from 20 categories along with manually labeled alpha mattes. Furthermore, we investigate the domain gap issue between composite images and natural images systematically by conducting comprehensive analyses of various discrepancies between foreground and background images. We find that a carefully designed composition route RSSN that aims to reduce the discrepancies can lead to a better model with remarkable generalization ability. Comprehensive empirical studies on AM-2k demonstrate that GFM outperforms state-of-the-art methods and effectively reduces the generalization error.", "field": [], "task": ["Image Matting", "Semantic Segmentation"], "method": [], "dataset": ["AM-2K"], "metric": ["MSE", "MAD", "SAD"], "title": "End-to-end Animal Image Matting"} {"abstract": "We propose a simple data augmentation technique that can be applied to standard model-free reinforcement learning algorithms, enabling robust learning directly from pixels without the need for auxiliary losses or pre-training. The approach leverages input perturbations commonly used in computer vision tasks to regularize the value function. Existing model-free approaches, such as Soft Actor-Critic (SAC), are not able to train deep networks effectively from image pixels. However, the addition of our augmentation method dramatically improves SAC's performance, enabling it to reach state-of-the-art performance on the DeepMind control suite, surpassing model-based (Dreamer, PlaNet, and SLAC) methods and recently proposed contrastive learning (CURL). Our approach can be combined with any model-free reinforcement learning algorithm, requiring only minor modifications. An implementation can be found at https://sites.google.com/view/data-regularized-q.", "field": [], "task": ["Continuous Control", "Data Augmentation", "Image Augmentation"], "method": [], "dataset": ["DeepMind Walker Walk (Images)", "DeepMind Cheetah Run (Images)", "DeepMind Cup Catch (Images)"], "metric": ["Return"], "title": "Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels"} {"abstract": "In this paper, we study current and upcoming frontiers across the landscape of skeleton-based human action recognition. 
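The reinforcement-learning abstract above regularizes the value function with image perturbations applied to pixel observations. Here is a minimal sketch of one such perturbation, a random shift implemented as replicate padding followed by a random crop; the pad width of 4 is a typical choice, not taken from the paper.

```python
import numpy as np

def random_shift(obs, pad=4, rng=None):
    """Randomly shift a (C, H, W) observation by up to `pad` pixels:
    replicate-pad the borders, then crop back to the original size."""
    rng = rng or np.random.default_rng()
    c, h, w = obs.shape
    padded = np.pad(obs, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    top = rng.integers(0, 2 * pad + 1)
    left = rng.integers(0, 2 * pad + 1)
    return padded[:, top:top + h, left:left + w]

obs = np.random.rand(3, 84, 84)     # a typical pixel observation
print(random_shift(obs).shape)      # (3, 84, 84)
```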
To begin with, we benchmark state-of-the-art models on the NTU-120 dataset and provide multi-layered assessment of the results. To examine skeleton action recognition 'in the wild', we introduce Skeletics-152, a curated and 3-D pose-annotated subset of RGB videos sourced from Kinetics-700, a large-scale action dataset. The results from benchmarking the top performers of NTU-120 on Skeletics-152 reveal the challenges and domain gap induced by actions 'in the wild'. We extend our study to include out-of-context actions by introducing Skeleton-Mimetics, a dataset derived from the recently introduced Mimetics dataset. Finally, as a new frontier for action recognition, we introduce Metaphorics, a dataset with caption-style annotated YouTube videos of the popular social game Dumb Charades and interpretative dance performances. Overall, our work characterizes the strengths and limitations of existing approaches and datasets. It also provides an assessment of top-performing approaches across a spectrum of activity settings and via the introduced datasets, proposes new frontiers for human action recognition.", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["Skeletics-152", "Skeleton-Mimetics", "NTU RGB+D 120"], "metric": ["Accuracy (Cross-Subject)", "Accuracy (%)", "Accuracy (Cross-Setup)"], "title": "Quo Vadis, Skeleton Action Recognition ?"} {"abstract": "Image guided depth completion is the task of generating a dense depth map from a sparse depth map and a high quality image. In this task, how to fuse the color and depth modalities plays an important role in achieving good performance. This paper proposes a two-branch backbone that consists of a color-dominant branch and a depth-dominant branch to exploit and fuse two modalities thoroughly. More specifically, one branch inputs a color image and a sparse depth map to predict a dense depth map. The other branch takes as inputs the sparse depth map and the previously predicted depth map, and outputs a dense depth map as well. The depth maps predicted from two branches are complimentary to each other and therefore they are adaptively fused. In addition, we also propose a simple geometric convolutional layer to encode 3D geometric cues. The geometric encoded backbone conducts the fusion of different modalities at multiple stages, leading to good depth completion results. We further implement a dilated and accelerated CSPN++ to refine the fused depth map efficiently. The proposed full model ranks 1st in the KITTI depth completion online leaderboard at the time of submission. It also infers much faster than most of the top ranked methods. The code of this work is available at https://github.com/JUGGHM/PENet_ICRA2021.", "field": [], "task": ["Depth Completion"], "method": [], "dataset": ["KITTI Depth Completion"], "metric": ["iMAE", "RMSE", "Runtime [ms]", "MAE", "iRMSE"], "title": "PENet: Towards Precise and Efficient Image Guided Depth Completion"} {"abstract": "Sequence-to-sequence models have shown strong performance across a broad\nrange of applications. However, their application to parsing and generating\ntext usingAbstract Meaning Representation (AMR)has been limited, due to the\nrelatively limited amount of labeled data and the non-sequential nature of the\nAMR graphs. We present a novel training procedure that can lift this limitation\nusing millions of unlabeled sentences and careful preprocessing of the AMR\ngraphs. 
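The depth-completion abstract above mentions a simple geometric convolutional layer that encodes 3D geometric cues. One common way to expose such cues is to back-project the depth map into per-pixel X/Y/Z coordinates with the camera intrinsics and append them as extra input channels; the sketch below does that back-projection only (the intrinsics are made-up numbers, and this is not claimed to be PENet's exact layer).

```python
import numpy as np

def depth_to_xyz(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, meters) into per-pixel X/Y/Z channels
    using pinhole intrinsics. Concatenating such channels to the input is one
    simple way to expose 3-D geometric cues to a 2-D convolutional network."""
    h, w = depth.shape
    v, u = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.stack([x, y, depth], axis=0)   # (3, H, W)

depth = np.random.rand(240, 320) * 80.0      # hypothetical depth values
xyz = depth_to_xyz(depth, fx=721.5, fy=721.5, cx=160.0, cy=120.0)
print(xyz.shape)
```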
For AMR parsing, our model achieves competitive results of 62.1 SMATCH,\nthe current best score reported without significant use of external semantic\nresources. For AMR generation, our model establishes a new state-of-the-art\nperformance of BLEU 33.8. We present extensive ablative and qualitative\nanalysis including strong evidence that sequence-based AMR models are robust\nagainst ordering variations of graph-to-sequence conversions.", "field": [], "task": ["AMR Parsing", "Graph-to-Sequence"], "method": [], "dataset": ["LDC2015E86"], "metric": ["Smatch"], "title": "Neural AMR: Sequence-to-Sequence Models for Parsing and Generation"} {"abstract": "In this paper, we propose a novel approach for text detection in natural\nimages. Both local and global cues are taken into account for localizing text\nlines in a coarse-to-fine procedure. First, a Fully Convolutional Network\n(FCN) model is trained to predict the salient map of text regions in a holistic\nmanner. Then, text line hypotheses are estimated by combining the salient map\nand character components. Finally, another FCN classifier is used to predict\nthe centroid of each character, in order to remove the false hypotheses. The\nframework is general for handling text in multiple orientations, languages\nand fonts. The proposed method consistently achieves the state-of-the-art\nperformance on three text detection benchmarks: MSRA-TD500, ICDAR2015 and\nICDAR2013.", "field": ["Graph Embeddings"], "task": [], "method": ["LINE", "Large-scale Information Network Embedding"], "dataset": ["ICDAR 2015"], "metric": ["F-Measure", "Recall", "Precision"], "title": "Multi-Oriented Text Detection with Fully Convolutional Networks"} {"abstract": "We propose a new benchmark corpus to be used for measuring progress in\nstatistical language modeling. With almost one billion words of training data,\nwe hope this benchmark will be useful to quickly evaluate novel language\nmodeling techniques, and to compare their contribution when combined with other\nadvanced techniques. We show performance of several well-known types of\nlanguage models, with the best results achieved with a recurrent neural network\nbased language model. The baseline unpruned Kneser-Ney 5-gram model achieves\nperplexity 67.6; a combination of techniques leads to 35% reduction in\nperplexity, or 10% reduction in cross-entropy (bits), over that baseline.\n The benchmark is available as a code.google.com project; besides the scripts\nneeded to rebuild the training/held-out data, it also makes available\nlog-probability values for each word in each of ten held-out data sets, for\neach of the baseline n-gram models.", "field": [], "task": ["Language Modelling"], "method": [], "dataset": ["One Billion Word"], "metric": ["Number of params", "PPL"], "title": "One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling"} {"abstract": "Domain adaptation aims to transfer knowledge from the source data with annotations to scarcely-labeled data in the target domain, which has attracted a lot of attention in recent years and facilitated many multimedia applications. Recent approaches have shown the effectiveness of using adversarial learning to reduce the distribution discrepancy between the source and target images by aligning distribution between source and target images at both image and instance levels. However, this remains challenging since two domains may have distinct background scenes and different objects.
Moreover, complex combinations of objects and a variety of image styles deteriorate the unsupervised cross-domain distribution alignment. To address these challenges, in this paper, we design an end-to-end approach for unsupervised domain adaptation of an object detector. Specifically, we propose a Multi-level Entropy Attention Alignment (MEAA) method that consists of two main components: (1) a Local Uncertainty Attentional Alignment (LUAA) module to accelerate the model in better perceiving structure-invariant objects of interest by utilizing information theory to measure the uncertainty of each local region via the entropy of the pixel-wise domain classifier and (2) a Multi-level Uncertainty-Aware Context Alignment (MUCA) module to enrich domain-invariant information of relevant objects based on the entropy of multi-level domain classifiers. The proposed MEAA is evaluated in four domain-shift object detection scenarios. Experiment results demonstrate state-of-the-art performance on three challenging scenarios and competitive performance on one benchmark dataset.", "field": [], "task": ["Domain Adaptation", "Object Detection", "Unsupervised Domain Adaptation", "Weakly Supervised Object Detection"], "method": [], "dataset": ["Cityscapes-to-Foggy Cityscapes", "Watercolor2k", "Clipart1k"], "metric": ["mAP", "MAP"], "title": "Domain-Adaptive Object Detection via Uncertainty-Aware Distribution Alignment"} {"abstract": "The ability to accurately represent sentences is central to language\nunderstanding. We describe a convolutional architecture dubbed the Dynamic\nConvolutional Neural Network (DCNN) that we adopt for the semantic modelling of\nsentences. The network uses Dynamic k-Max Pooling, a global pooling operation\nover linear sequences. The network handles input sentences of varying length\nand induces a feature graph over the sentence that is capable of explicitly\ncapturing short and long-range relations. The network does not rely on a parse\ntree and is easily applicable to any language. We test the DCNN in four\nexperiments: small scale binary and multi-class sentiment prediction, six-way\nquestion classification and Twitter sentiment prediction by distant\nsupervision. The network achieves excellent performance in the first three\ntasks and a greater than 25% error reduction in the last task with respect to\nthe strongest baseline.", "field": [], "task": [], "method": [], "dataset": ["SNLI"], "metric": ["% Test Accuracy"], "title": "A Convolutional Neural Network for Modelling Sentences"} {"abstract": "We introduce multigrid Predictive Filter Flow (mgPFF), a framework for\nunsupervised learning on videos. The mgPFF takes as input a pair of frames and\noutputs per-pixel filters to warp one frame to the other. Compared to optical\nflow used for warping frames, mgPFF is more powerful in modeling sub-pixel\nmovement and dealing with corruption (e.g., motion blur). We develop a\nmultigrid coarse-to-fine modeling strategy that avoids the requirement of\nlearning large filters to capture large displacement. This allows us to train\nan extremely compact model (4.6MB) which operates in a progressive way over\nmultiple resolutions with shared weights. We train mgPFF on unsupervised,\nfree-form videos and show that mgPFF is able to not only estimate long-range\nflow for frame reconstruction and detect video shot transitions, but is also\nreadily amenable to video object segmentation and pose tracking, where it\nsubstantially outperforms the published state-of-the-art without bells and\nwhistles.
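The DCNN abstract above relies on dynamic k-max pooling, which keeps the k largest activations of each feature dimension while preserving their order, with k shrinking as the network gets deeper. Below is a minimal NumPy sketch of the pooling step together with the commonly quoted schedule k_l = max(k_top, ceil((L - l) / L * s)); treat the exact schedule as an assumption rather than a quotation of the paper.

```python
import numpy as np

def k_max_pooling(seq, k):
    """k-max pooling over the time axis: keep the k largest values in each
    feature column, in their original temporal order. `seq` is (T, D)."""
    top = np.argpartition(-seq, k - 1, axis=0)[:k]   # indices of k largest, unordered
    top_sorted = np.sort(top, axis=0)                # restore temporal order
    return np.take_along_axis(seq, top_sorted, axis=0)

def dynamic_k(layer, total_layers, seq_len, k_top=4):
    """The 'dynamic' schedule: k shrinks with depth but never below k_top."""
    return max(k_top, int(np.ceil((total_layers - layer) / total_layers * seq_len)))

seq = np.random.rand(20, 8)                          # 20 tokens, 8 feature maps
k = dynamic_k(layer=1, total_layers=3, seq_len=20)
print(k, k_max_pooling(seq, k).shape)
```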
Moreover, owing to mgPFF's nature of per-pixel filter prediction, we\nhave the unique opportunity to visualize how each pixel is evolving during\nsolving these tasks, thus gaining better interpretability.", "field": [], "task": ["Optical Flow Estimation", "Pose Tracking", "Semantic Segmentation", "Skeleton Based Action Recognition", "Video Object Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["JHMDB Pose Tracking"], "metric": ["PCK@0.5", "PCK@0.2", "PCK@0.3", "PCK@0.4", "PCK@0.1"], "title": "Multigrid Predictive Filter Flow for Unsupervised Learning on Videos"} {"abstract": "The Weisfeiler\u2013Lehman graph kernel exhibits competitive performance in many graph classification tasks. However, its subtree features are not able to capture connected components and cycles, topological features known for characterising graphs. To extract such features, we leverage propagated node label information and transform unweighted graphs into metric ones. This permits us to augment the subtree features with topological information obtained using persistent homology, a concept from topological data analysis. Our method, which we formalise as a generalisation of Weisfeiler\u2013Lehman subtree features, exhibits favourable classification accuracy and its improvements in predictive performance are mainly driven by including cycle information.", "field": [], "task": ["Graph Classification", "Topological Data Analysis"], "method": [], "dataset": ["PROTEINS", "MUTAG"], "metric": ["Mean Accuracy", "Accuracy"], "title": "A Persistent Weisfeiler\u2013Lehman Procedure for Graph Classification"} {"abstract": "We introduce a novel scheme for parsing a piece of text into its Abstract Meaning Representation (AMR): Graph Spanning based Parsing (GSP). One novel characteristic of GSP is that it constructs a parse graph incrementally in a top-down fashion. Starting from the root, at each step, a new node and its connections to existing nodes will be jointly predicted. The output graph spans the nodes by the distance to the root, following the intuition of first grasping the main ideas then digging into more details. The \\textit{core semantic first} principle emphasizes capturing the main ideas of a sentence, which is of great interest. We evaluate our model on the latest AMR sembank and achieve the state-of-the-art performance in the sense that no heuristic graph re-categorization is adopted. More importantly, the experiments show that our parser is especially good at obtaining the core semantics.", "field": [], "task": ["AMR Parsing"], "method": [], "dataset": ["LDC2017T10"], "metric": ["Smatch"], "title": "Core Semantic First: A Top-down Approach for AMR Parsing"} {"abstract": "Sparse representation with respect to an overcomplete dictionary is often used when regularizing inverse problems in signal and image processing. In recent years, the Convolutional Sparse Coding (CSC) model, in which the dictionary consists of shift-invariant filters, has gained renewed interest. While this model has been successfully used in some image processing problems, it still falls behind traditional patch-based methods on simple tasks such as denoising. In this work we provide new insights regarding the CSC model and its capability to represent natural images, and suggest a Bayesian connection between this model and its patch-based ancestor. Armed with these observations, we suggest a novel feed-forward network that follows an MMSE approximation process to the CSC model, using strided convolutions. 
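The graph-kernel abstract above generalizes Weisfeiler-Lehman subtree features. For reference, the sketch below computes plain WL subtree features on a single graph by iteratively hashing each node's label together with its sorted neighbor labels; the persistent-homology augmentation that the paper actually contributes is not modeled here.

```python
import hashlib
from collections import Counter

def wl_subtree_features(adj, labels, iterations=2):
    """Weisfeiler-Lehman subtree feature extraction for one graph: repeatedly
    hash each node's label with the sorted labels of its neighbors and count
    how often every (iteration, label) pair occurs."""
    feats = Counter((0, l) for l in labels)
    for it in range(1, iterations + 1):
        new_labels = []
        for v, neigh in enumerate(adj):
            signature = str((labels[v], sorted(labels[u] for u in neigh)))
            new_labels.append(hashlib.sha1(signature.encode()).hexdigest()[:8])
        labels = new_labels
        feats.update((it, l) for l in labels)
    return feats

# A 4-cycle with uniform initial labels (adjacency given as neighbor lists).
adj = [[1, 3], [0, 2], [1, 3], [0, 2]]
print(wl_subtree_features(adj, ["a", "a", "a", "a"]))
```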
The performance of this supervised architecture is shown to be on par with state of the art methods while using much fewer parameters.", "field": [], "task": ["Color Image Denoising", "Denoising"], "method": [], "dataset": ["BSD68 sigma75", "BSD68 sigma15", "CBSD68 sigma50", "BSD68 sigma25"], "metric": ["PSNR"], "title": "Rethinking the CSC Model for Natural Images"} {"abstract": "This paper addresses unsupervised domain adaptation, the setting where labeled training data is available on a source domain, but the goal is to have good performance on a target domain with only unlabeled data. Like much of previous work, we seek to align the learned representations of the source and target domains while preserving discriminability. The way we accomplish alignment is by learning to perform auxiliary self-supervised task(s) on both domains simultaneously. Each self-supervised task brings the two domains closer together along the direction relevant to that task. Training this jointly with the main task classifier on the source domain is shown to successfully generalize to the unlabeled target domain. The presented objective is straightforward to implement and easy to optimize. We achieve state-of-the-art results on four out of seven standard benchmarks, and competitive results on segmentation adaptation. We also demonstrate that our method composes well with another popular pixel-level adaptation method.", "field": [], "task": ["Domain Adaptation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["GTAV-to-Cityscapes Labels"], "metric": ["mIoU"], "title": "Unsupervised Domain Adaptation through Self-Supervision"} {"abstract": "Extracting geometric features from 3D scans or point clouds is the first step in applications such as registration, reconstruction, and tracking. State-of-the-art methods require computing low-level features as input or extracting patch-based features with limited receptive field. In this work, we present fully-convolutional geometric features, computed in a single pass by a 3D fully-convolutional network. We also present new metric learning losses that dramatically improve performance. Fully-convolutional geometric features are compact, capture broad spatial context, and scale to large scenes. We experimentally validate our approach on both indoor and outdoor datasets. Fully-convolutional geometric features achieve state-of-the-art accuracy without requiring prepossessing, are compact (32 dimensions), and are 600 times faster than the most accurate prior method.", "field": [], "task": ["3D Feature Matching", "3D Point Cloud Matching", "3D Shape Representation", "Metric Learning", "Point Cloud Registration"], "method": [], "dataset": ["3DMatch Benchmark"], "metric": ["Recall", "Average Recall"], "title": "Fully Convolutional Geometric Features"} {"abstract": "Deep neural networks have been successfully applied to many real-world applications. However, such successes rely heavily on large amounts of labeled data that is expensive to obtain. Recently, many methods for semi-supervised learning have been proposed and achieved excellent performance. In this study, we propose a new EnAET framework to further improve existing semi-supervised methods with self-supervised information. To our best knowledge, all current semi-supervised methods improve performance with prediction consistency and confidence ideas. We are the first to explore the role of {\\bf self-supervised} representations in {\\bf semi-supervised} learning under a rich family of transformations. 
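The domain-adaptation abstract above aligns source and target by solving auxiliary self-supervised tasks on both domains. The sketch below generates one typical such task, rotation prediction, by rotating each unlabeled image by a random multiple of 90 degrees and using the rotation index as the label; whether this particular task matches the paper's exact choice is an assumption.

```python
import numpy as np

def rotation_task(batch, rng=None):
    """Create a self-supervised rotation-prediction task: rotate each image by
    a random multiple of 90 degrees and return the rotation index as its label.
    The same task can be applied to source and target images alike."""
    rng = rng or np.random.default_rng(0)
    labels = rng.integers(0, 4, size=len(batch))
    rotated = np.stack([np.rot90(img, k) for img, k in zip(batch, labels)])
    return rotated, labels

batch = np.random.rand(8, 32, 32)          # stand-in for unlabeled images
rotated, labels = rotation_task(batch)
print(rotated.shape, labels)
```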
Consequently, our framework can integrate the self-supervised information as a regularization term to further improve {\\it all} current semi-supervised methods. In the experiments, we use MixMatch, which is the current state-of-the-art method on semi-supervised learning, as a baseline to test the proposed EnAET framework. Across different datasets, we adopt the same hyper-parameters, which greatly improves the generalization ability of the EnAET framework. Experiment results on different datasets demonstrate that the proposed EnAET framework greatly improves the performance of current semi-supervised algorithms. Moreover, this framework can also improve {\\bf supervised learning} by a large margin, including the extremely challenging scenarios with only 10 images per class. The code and experiment records are available in \\url{https://github.com/maple-research-lab/EnAET}.", "field": [], "task": ["Image Classification", "Semi-Supervised Image Classification"], "method": [], "dataset": ["CIFAR-100, 5000Labels", "CIFAR-100, 1000 Labels", "cifar-100, 10000 Labels", "CIFAR-100", "CIFAR-10", "STL-10, 1000 Labels", "cifar10, 250 Labels", "SVHN, 250 Labels", "SVHN, 1000 labels", "STL-10", "SVHN", "CIFAR-10, 4000 Labels"], "metric": ["Percentage error", "Percentage correct", "Accuracy"], "title": "EnAET: A Self-Trained framework for Semi-Supervised and Supervised Learning with Ensemble Transformations"} {"abstract": "Graph neural networks have recently emerged as a very effective framework for processing graph-structured data. These models have achieved state-of-the-art performance in many tasks. Most graph neural networks can be described in terms of message passing, vertex update, and readout functions. In this paper, we represent documents as word co-occurrence networks and propose an application of the message passing framework to NLP, the Message Passing Attention network for Document understanding (MPAD). We also propose several hierarchical variants of MPAD. Experiments conducted on 10 standard text classification datasets show that our architectures are competitive with the state-of-the-art. Ablation studies reveal further insights about the impact of the different components on performance. Code is publicly available at: https://github.com/giannisnik/mpad .", "field": [], "task": ["Text Classification"], "method": [], "dataset": ["BBCSport", "SST-2 Binary classification", "Reuters-21578", "IMDb", "SST-5 Fine-grained classification", "TREC-6", "MPQA"], "metric": ["Error", "Accuracy (2 classes)", "Accuracy (10 classes)", "Accuracy"], "title": "Message Passing Attention Networks for Document Understanding"} {"abstract": "Visual dialog is a challenging vision-language task in which a series of questions visually grounded by a given image are answered. To resolve the visual dialog task, a high-level understanding of various multimodal inputs (e.g., question, dialog history, and image) is required. Specifically, it is necessary for an agent to 1) determine the semantic intent of question and 2) align question-relevant textual and visual contents among heterogeneous modality inputs. In this paper, we propose Multi-View Attention Network (MVAN), which leverages multiple views about heterogeneous inputs based on attention mechanisms. MVAN effectively captures the question-relevant information from the dialog history with two complementary modules (i.e., Topic Aggregation and Context Matching), and builds multimodal representations through sequential alignment processes (i.e., Modality Alignment). 
Experimental results on VisDial v1.0 dataset show the effectiveness of our proposed model, which outperforms the previous state-of-the-art methods with respect to all evaluation metrics.", "field": [], "task": ["Visual Dialog"], "method": [], "dataset": ["Visual Dialog v1.0 test-std", "VisDial v0.9 val"], "metric": ["MRR (x 100)", "R@10", "NDCG (x 100)", "R@5", "Mean Rank", "MRR", "Mean", "R@1"], "title": "Multi-View Attention Network for Visual Dialog"} {"abstract": "We present a novel iterative, edit-based approach to unsupervised sentence simplification. Our model is guided by a scoring function involving fluency, simplicity, and meaning preservation. Then, we iteratively perform word and phrase-level edits on the complex sentence. Compared with previous approaches, our model does not require a parallel training set, but is more controllable and interpretable. Experiments on Newsela and WikiLarge datasets show that our approach is nearly as effective as state-of-the-art supervised approaches.", "field": [], "task": ["Text Simplification"], "method": [], "dataset": ["Newsela", "TurkCorpus"], "metric": ["BLEU", "SARI (EASSE>=0.2.1)", "SARI"], "title": "Iterative Edit-Based Unsupervised Sentence Simplification"} {"abstract": "Some downstream NLP tasks exploit discourse dependency trees converted from RST trees. To obtain better discourse dependency trees, we need to improve the accuracy of RST trees at the upper parts of the structures. Thus, we propose a novel neural top-down RST parsing method. Then, we exploit three levels of granularity in a document, paragraphs, sentences and Elementary Discourse Units (EDUs), to parse a document accurately and efficiently. The parsing is done in a top-down manner for each granularity level, by recursively splitting a larger text span into two smaller ones while predicting nuclearity and relation labels for the divided spans. The results on the RST-DT corpus show that our method achieved the state-of-the-art results, 87.0 unlabeled span score, 74.6 nuclearity labeled span score, and the comparable result with the state-of-the-art, 60.0 relation labeled span score. Furthermore, discourse dependency trees converted from our RST trees also achieved the state-of-the-art results, 64.9 unlabeled attachment score and 48.5 labeled attachment score.", "field": [], "task": ["Discourse Parsing"], "method": [], "dataset": ["RST-DT"], "metric": ["RST-Parseval (Relation)", "RST-Parseval (Span)", "RST-Parseval (Nuclearity)"], "title": "Top-Down RST Parsing Utilizing Granularity Levels in Documents"} {"abstract": "Contemporary neural networks are limited in their ability to learn from evolving streams of training data. When trained sequentially on new or evolving tasks, their accuracy drops sharply, making them unsuitable for many real-world applications. In this work, we shed light on the causes of this well-known yet unsolved phenomenon - often referred to as catastrophic forgetting - in a class-incremental setup. We show that a combination of simple components and a loss that balances intra-task and inter-task learning can already resolve forgetting to the same extent as more complex measures proposed in literature. Moreover, we identify poor quality of the learned representation as another reason for catastrophic forgetting in class-IL. We show that performance is correlated with secondary class information (dark knowledge) learned by the model and it can be improved by an appropriate regularizer. 
With these lessons learned, class-incremental learning results on CIFAR-100 and ImageNet improve over the state-of-the-art by a large margin, while keeping the approach simple.", "field": [], "task": ["class-incremental learning", "Continual Learning", "Incremental Learning"], "method": [], "dataset": ["CIFAR-100 - 50 classes + 5 steps of 10 classes", "ImageNet-100 - 50 classes + 5 steps of 10 classes", "ImageNet-100 - 50 classes + 10 steps of 5 classes", "CIFAR-100 - 50 classes + 10 steps of 5 classes", "ImageNet - 500 classes + 5 steps of 100 classes"], "metric": ["Average Incremental Accuracy"], "title": "Essentials for Class Incremental Learning"} {"abstract": "The current strive towards end-to-end trainable computer vision systems imposes major challenges for the task of visual tracking. In contrast to most other vision problems, tracking requires the learning of a robust target-specific appearance model online, during the inference stage. To be end-to-end trainable, the online learning of the target model thus needs to be embedded in the tracking architecture itself. Due to the imposed challenges, the popular Siamese paradigm simply predicts a target feature template, while ignoring the background appearance information during inference. Consequently, the predicted model possesses limited target-background discriminability. We develop an end-to-end tracking architecture, capable of fully exploiting both target and background appearance information for target model prediction. Our architecture is derived from a discriminative learning loss by designing a dedicated optimization process that is capable of predicting a powerful model in only a few iterations. Furthermore, our approach is able to learn key aspects of the discriminative loss itself. The proposed tracker sets a new state-of-the-art on 6 tracking benchmarks, achieving an EAO score of 0.440 on VOT2018, while running at over 40 FPS. The code and models are available at https://github.com/visionml/pytracking.", "field": [], "task": ["Visual Object Tracking", "Visual Tracking"], "method": [], "dataset": ["TrackingNet"], "metric": ["Normalized Precision", "Precision", "Accuracy"], "title": "Learning Discriminative Model Prediction for Tracking"} {"abstract": "3D point cloud generation is of great use for 3D scene modeling and understanding. Real-world 3D object point clouds can be properly described by a collection of low-level and high-level structures such as surfaces, geometric primitives, semantic parts, etc. In fact, there exist many different representations of a 3D object point cloud as a set of point groups. Existing frameworks for point cloud generation either do not consider structure in their proposed solutions, or assume and enforce a specific structure/topology, e.g., a collection of manifolds or surfaces, for the generated point cloud of a 3D object. In this work, we propose a novel decoder that generates a structured point cloud without assuming any specific structure or topology on the underlying point set. Our decoder is softly constrained to generate a point cloud following a hierarchical rooted tree structure. We show that given enough capacity and allowing for redundancies, the proposed decoder is very flexible and able to learn any arbitrary grouping of points including any topology on the point set. We evaluate our decoder on the task of point cloud generation for 3D point cloud shape completion.
Combined with encoders from existing frameworks, we show that our proposed decoder significantly outperforms state-of-the-art 3D point cloud completion methods on the Shapenet dataset\r", "field": [], "task": ["Point Cloud Completion"], "method": [], "dataset": ["Completion3D"], "metric": ["Chamfer Distance"], "title": "TopNet: Structural Point Cloud Decoder"} {"abstract": "We present a semi-supervised learning framework based on graph embeddings.\nGiven a graph between instances, we train an embedding for each instance to\njointly predict the class label and the neighborhood context in the graph. We\ndevelop both transductive and inductive variants of our method. In the\ntransductive variant of our method, the class labels are determined by both the\nlearned embeddings and input feature vectors, while in the inductive variant,\nthe embeddings are defined as a parametric function of the feature vectors, so\npredictions can be made on instances not seen during training. On a large and\ndiverse set of benchmark tasks, including text classification, distantly\nsupervised entity extraction, and entity classification, we show improved\nperformance over many of the existing models.", "field": [], "task": ["Document Classification", "Entity Extraction using GAN", "Node Classification", "Text Classification"], "method": [], "dataset": ["Cora", "NELL", "Citeseer", "USA Air-Traffic", "Pubmed"], "metric": ["Accuracy"], "title": "Revisiting Semi-Supervised Learning with Graph Embeddings"} {"abstract": "For visual object tracking, it is difficult to realize an almighty online tracker due to the huge variations of target appearance depending on an image sequence. This paper proposes an online tracking method that adaptively aggregates arbitrary multiple online trackers. The performance of the proposed method is theoretically guaranteed to be comparable to that of the best tracker for any image sequence, although the best expert is unknown during tracking. The experimental study on the large variations of benchmark datasets and aggregated trackers demonstrates that the proposed method can achieve state-of-the-art performance. The code is available at https://github.com/songheony/AAA-journal.", "field": [], "task": ["Deblurring", "Object Tracking", "Visual Object Tracking"], "method": [], "dataset": ["TempleColor128", "OTB-2015"], "metric": ["Precision", "AUC"], "title": "AAA: Adaptive Aggregation of Arbitrary Online Trackers with Theoretical Performance Guarantee"} {"abstract": "This paper presents a hardness-aware deep metric learning (HDML) framework. Most previous deep metric learning methods employ the hard negative mining strategy to alleviate the lack of informative samples for training. However, this mining strategy only utilizes a subset of training data, which may not be enough to characterize the global geometry of the embedding space comprehensively. To address this problem, we perform linear interpolation on embeddings to adaptively manipulate their hard levels and generate corresponding label-preserving synthetics for recycled training, so that information buried in all samples can be fully exploited and the metric is always challenged with proper difficulty. 
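The hardness-aware metric learning abstract above manipulates the hardness of negatives by linear interpolation in embedding space. The sketch below shows only that interpolation step, pulling a negative toward the anchor so the pair becomes harder; the adaptive hardness schedule and the label-preserving synthesis of HDML are not modeled.

```python
import numpy as np

def harden_negative(anchor, negative, lam):
    """Move a negative embedding toward the anchor by linear interpolation,
    producing a synthetic, harder negative (lam=0 keeps the original,
    lam closer to 1 makes the pair harder)."""
    return (1 - lam) * negative + lam * anchor

a = np.random.rand(128)
n = np.random.rand(128)
for lam in (0.0, 0.3, 0.6):
    # The anchor-negative distance shrinks as lam grows, i.e. hardness increases.
    print(lam, np.linalg.norm(a - harden_negative(a, n, lam)))
```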
Our method achieves very competitive performance on the widely used CUB-200-2011, Cars196, and Stanford Online Products datasets.", "field": [], "task": ["Image Retrieval", "Metric Learning"], "method": [], "dataset": [" CUB-200-2011", "CARS196"], "metric": ["R@1"], "title": "Hardness-Aware Deep Metric Learning"} {"abstract": "Video temporal action detection aims to temporally localize and recognize the\naction in untrimmed videos. Existing one-stage approaches mostly focus on\nunifying two subtasks, i.e., localization of action proposals and\nclassification of each proposal through a fully shared backbone. However, such\ndesign of encapsulating all components of two subtasks in one single network\nmight restrict the training by ignoring the specialized characteristics of each\nsubtask. In this paper, we propose a novel Decoupled Single Shot temporal\nAction Detection (Decouple-SSAD) method to mitigate this problem by decoupling\nthe localization and classification in a one-stage scheme. Particularly, two\nseparate branches are designed in parallel to enable each component to own\nrepresentations privately for accurate localization or classification. Each\nbranch produces a set of action anchor layers by applying deconvolution to the\nfeature maps of the main stream. High-level\nsemantic information from deeper layers is thus incorporated to enhance the\nfeature representations. We conduct extensive experiments on the THUMOS14 dataset\nand demonstrate superior performance over state-of-the-art methods. Our code is\navailable online.", "field": [], "task": ["Action Detection"], "method": [], "dataset": ["THUMOS\u201914"], "metric": ["mAP IOU@0.6", "mAP IOU@0.7", "mAP IOU@0.5", "mAP IOU@0.4", "mAP IOU@0.3"], "title": "Decoupling Localization and Classification in Single Shot Temporal Action Detection"} {"abstract": "This paper proposes a method for head pose estimation from a single image. Previous methods often predict head poses through landmark or depth estimation and would require more computation than necessary. Our method is based on regression and feature aggregation. To keep the model compact, we employ the soft stagewise regression scheme. Existing feature aggregation methods treat inputs as a bag of features and thus ignore their spatial relationship in a feature map. We propose to learn a fine-grained structure mapping for spatially grouping features before aggregation. The fine-grained structure provides part-based information and pooled values. By utilizing learnable and non-learnable importance over the spatial location, different model variants can be generated and form a complementary ensemble. Experiments show that our method outperforms the state-of-the-art methods including both the landmark-free ones and the ones based on landmark or depth estimation. With only a single RGB frame as input, our method even outperforms methods utilizing multi-modality information (RGB-D, RGB-Time) on estimating the yaw angle. Furthermore, the memory overhead of our model is 100 times smaller than those of previous methods. 
\r", "field": [], "task": ["Depth Estimation", "Head Pose Estimation", "Pose Estimation", "Regression"], "method": [], "dataset": ["AFLW2000", "BIWI"], "metric": ["MAE", "MAE (trained with other data)"], "title": "FSA-Net: Learning Fine-Grained Structure Aggregation for Head Pose Estimation From a Single Image"} {"abstract": "Recently, the Weisfeiler-Lehman (WL) graph isomorphism test was used to measure the expressive power of graph neural networks (GNN). It was shown that the popular message passing GNN cannot distinguish between graphs that are indistinguishable by the 1-WL test (Morris et al. 2018; Xu et al. 2019). Unfortunately, many simple instances of graphs are indistinguishable by the 1-WL test. In search for more expressive graph learning models we build upon the recent k-order invariant and equivariant graph neural networks (Maron et al. 2019a,b) and present two results: First, we show that such k-order networks can distinguish between non-isomorphic graphs as good as the k-WL tests, which are provably stronger than the 1-WL test for k>2. This makes these models strictly stronger than message passing models. Unfortunately, the higher expressiveness of these models comes with a computational cost of processing high order tensors. Second, setting our goal at building a provably stronger, simple and scalable model we show that a reduced 2-order network containing just scaled identity operator, augmented with a single quadratic operation (matrix multiplication) has a provable 3-WL expressive power. Differently put, we suggest a simple model that interleaves applications of standard Multilayer-Perceptron (MLP) applied to the feature dimension and matrix multiplication. We validate this model by presenting state of the art results on popular graph classification and regression tasks. To the best of our knowledge, this is the first practical invariant/equivariant model with guaranteed 3-WL expressiveness, strictly stronger than message passing models.", "field": [], "task": ["Graph Classification", "Graph Learning", "Graph Regression", "Regression"], "method": [], "dataset": ["COLLAB", "NCI109", "IMDb-B", "ZINC-500k", "PROTEINS", "NCI1", "IMDb-M", "MUTAG", "PTC"], "metric": ["MAE", "Accuracy"], "title": "Provably Powerful Graph Networks"} {"abstract": "Detection of small moving objects is an important research area with applications including monitoring of flying insects, studying their foraging behavior, using insect pollinators to monitor flowering and pollination of crops, surveillance of honeybee colonies, and tracking movement of honeybees. However, due to the lack of distinctive shape and textural details on small objects, direct application of modern object detection methods based on convolutional neural networks (CNNs) shows considerably lower performance. In this paper we propose a method for the detection of small moving objects in videos recorded using unmanned aerial vehicles equipped with standard video cameras. The main steps of the proposed method are video stabilization, background estimation and subtraction, frame segmentation using a CNN, and thresholding the segmented frame. However, for training a CNN it is required that a large labeled dataset is available. Manual labelling of small moving objects in videos is very difficult and time consuming, and such labeled datasets do not exist at the moment. To circumvent this problem, we propose training a CNN using synthetic videos generated by adding small blob-like objects to video sequences with real-world backgrounds. 
The experimental results on detection of flying honeybees show that by using a combination of classical computer vision techniques and CNNs, as well as synthetic training sets, the proposed approach overcomes the problems associated with direct application of CNNs to the given problem and achieves an average F1-score of 0.86 in tests on real-world videos.", "field": [], "task": ["Object Detection", "Segmentation Of Remote Sensing Imagery", "Small Object Detection"], "method": [], "dataset": ["Bee4Exp Honeybee Detection"], "metric": ["Average F1"], "title": "A Method for Detection of Small Moving Objects in UAV Videos"} {"abstract": "In CNN-based object detection methods, region proposal becomes a bottleneck\nwhen objects exhibit significant scale variation, occlusion or truncation. In\naddition, these methods mainly focus on 2D object detection and cannot estimate\ndetailed properties of objects. In this paper, we propose subcategory-aware\nCNNs for object detection. We introduce a novel region proposal network that\nuses subcategory information to guide the proposal generating process, and a\nnew detection network for joint detection and subcategory classification. By\nusing subcategories related to object pose, we achieve state-of-the-art\nperformance on both detection and pose estimation on commonly used benchmarks.", "field": [], "task": ["2D Object Detection", "Object Detection", "Pose Estimation", "Region Proposal"], "method": [], "dataset": ["PASCAL VOC 2007"], "metric": ["MAP"], "title": "Subcategory-aware Convolutional Neural Networks for Object Proposals and Detection"} {"abstract": "Recent work has shown that CNN-based depth and ego-motion estimators can be learned using unlabelled monocular videos. However, the performance is limited by unidentified moving objects that violate the underlying static scene assumption in geometric image reconstruction. More significantly, due to lack of proper constraints, networks output scale-inconsistent results over different samples, i.e., the ego-motion network cannot provide full camera trajectories over a long video sequence because of the per-frame scale ambiguity. This paper tackles these challenges by proposing a geometry consistency loss for scale-consistent predictions and an induced self-discovered mask for handling moving objects and occlusions. Since we do not leverage multi-task learning like recent works, our framework is much simpler and more efficient. Comprehensive evaluation results demonstrate that our depth estimator achieves the state-of-the-art performance on the KITTI dataset. Moreover, we show that our ego-motion network is able to predict a globally scale-consistent camera trajectory for long video sequences, and the resulting visual odometry accuracy is competitive with the recent model that is trained using stereo videos. To the best of our knowledge, this is the first work to show that deep networks trained using unlabelled monocular videos can predict globally scale-consistent camera trajectories over a long video sequence.", "field": [], "task": ["Depth And Camera Motion", "Depth Estimation", "Monocular Depth Estimation", "Visual Odometry"], "method": [], "dataset": ["KITTI Eigen split"], "metric": ["absolute relative error"], "title": "Unsupervised Scale-consistent Depth and Ego-motion Learning from Monocular Video"} {"abstract": "We present 6-PACK, a deep learning approach to category-level 6D object pose tracking on RGB-D data. 
Our method tracks novel object instances of known object categories such as bowls, laptops, and mugs in real time. 6-PACK learns to compactly represent an object by a handful of 3D keypoints, based on which the interframe motion of an object instance can be estimated through keypoint matching. These keypoints are learned end-to-end without manual supervision in order to be most effective for tracking. Our experiments show that our method substantially outperforms existing methods on the NOCS category-level 6D pose estimation benchmark and enables a physical robot to perform simple vision-based closed-loop manipulation tasks. Our code and video are available at https://sites.google.com/view/6packtracking.", "field": [], "task": ["6D Pose Estimation", "6D Pose Estimation using RGBD", "Pose Estimation", "Pose Tracking"], "method": [], "dataset": ["NOCS-REAL275"], "metric": ["Rerr", "5\u00b05 cm", "IOU25", "Terr"], "title": "6-PACK: Category-level 6D Pose Tracker with Anchor-Based Keypoints"} {"abstract": "The de-facto approach to many vision tasks is to start from pretrained visual representations, typically learned via supervised training on ImageNet. Recent methods have explored unsupervised pretraining to scale to vast quantities of unlabeled images. In contrast, we aim to learn high-quality visual representations from fewer images. To this end, we revisit supervised pretraining, and seek data-efficient alternatives to classification-based pretraining. We propose VirTex -- a pretraining approach using semantically dense captions to learn visual representations. We train convolutional networks from scratch on COCO Captions, and transfer them to downstream recognition tasks including image classification, object detection, and instance segmentation. On all tasks, VirTex yields features that match or exceed those learned on ImageNet -- supervised or unsupervised -- despite using up to ten times fewer images.", "field": [], "task": ["Image Captioning", "Image Classification", "Instance Segmentation", "Object Detection", "Semantic Segmentation"], "method": [], "dataset": ["COCO minival", "COCO test-dev", "COCO Captions"], "metric": ["box AP", "SPICE", "AP75", "CIDER", "AP50", "mask AP"], "title": "VirTex: Learning Visual Representations from Textual Annotations"} {"abstract": "Benefiting from the spatial cues embedded in depth images, recent progress on RGB-D saliency detection shows impressive ability in some challenging scenarios. However, there are still two limitations. On the one hand, the pooling and upsampling operations in FCNs might blur object boundaries. On the other hand, using an additional depth network to extract depth features might lead to high computation and storage costs. The reliance on depth inputs during testing also limits the practical applications of current RGB-D models. In this paper, we propose a novel collaborative learning framework where edge, depth and saliency are leveraged in a more efficient way, which tackles both problems effectively. The explicitly extracted edge information goes together with saliency to give more emphasis to the salient regions and object boundaries. Depth and saliency learning is innovatively integrated into the high-level feature learning process in a mutual-benefit manner. This strategy enables the network to be free of extra depth networks and depth inputs at inference time. As a result, our model is more lightweight, faster and more versatile. 
Experiment results on seven benchmark datasets show its superior performance.", "field": [], "task": ["Object Detection", "RGB-D Salient Object Detection", "RGB Salient Object Detection", "Saliency Detection", "Salient Object Detection"], "method": [], "dataset": ["NJU2K"], "metric": ["Average MAE", "S-Measure"], "title": "Accurate RGB-D Salient Object Detection via Collaborative Learning"} {"abstract": "We propose a self-supervised method to learn feature representations from videos. A standard approach in traditional self-supervised methods uses positive-negative data pairs to train with contrastive learning strategy. In such a case, different modalities of the same video are treated as positives and video clips from a different video are treated as negatives. Because the spatio-temporal information is important for video representation, we extend the negative samples by introducing intra-negative samples, which are transformed from the same anchor video by breaking temporal relations in video clips. With the proposed Inter-Intra Contrastive (IIC) framework, we can train spatio-temporal convolutional networks to learn video representations. There are many flexible options in our IIC framework and we conduct experiments by using several different configurations. Evaluations are conducted on video retrieval and video recognition tasks using the learned video representation. Our proposed IIC outperforms current state-of-the-art results by a large margin, such as 16.7% and 9.5% points improvements in top-1 accuracy on UCF101 and HMDB51 datasets for video retrieval, respectively. For video recognition, improvements can also be obtained on these two benchmark datasets. Code is available at https://github.com/BestJuly/Inter-intra-video-contrastive-learning.", "field": [], "task": ["Action Recognition In Videos", "Representation Learning", "Self-Supervised Action Recognition", "Self-supervised Video Retrieval", "Video Recognition", "Video Retrieval"], "method": [], "dataset": ["UCF101", "HMDB51"], "metric": ["3-fold Accuracy", "Pre-Training Dataset", "Top-1 Accuracy"], "title": "Self-supervised Video Representation Learning Using Inter-intra Contrastive Framework"} {"abstract": "Pre-training models on vast quantities of unlabeled data has emerged as an effective approach to improving accuracy on many NLP tasks. On the other hand, traditional machine translation has a long history of leveraging unlabeled data through noisy channel modeling. The same idea has recently been shown to achieve strong improvements for neural machine translation. Unfortunately, na\\\"{i}ve noisy channel modeling with modern sequence to sequence models is up to an order of magnitude slower than alternatives. We address this issue by introducing efficient approximations to make inference with the noisy channel approach as fast as strong ensembles while increasing accuracy. We also show that the noisy channel approach can outperform strong pre-training results by achieving a new state of the art on WMT Romanian-English translation.", "field": [], "task": ["Machine Translation"], "method": [], "dataset": ["WMT2016 Romanian-English"], "metric": ["BLEU score"], "title": "Language Models not just for Pre-training: Fast Online Neural Noisy Channel Modeling"} {"abstract": "Understanding human motion behavior is critical for autonomous moving\nplatforms (like self-driving cars and social robots) if they are to navigate\nhuman-centric environments. 
This is challenging because human motion is\ninherently multimodal: given a history of human motion paths, there are many\nsocially plausible ways that people could move in the future. We tackle this\nproblem by combining tools from sequence prediction and generative adversarial\nnetworks: a recurrent sequence-to-sequence model observes motion histories and\npredicts future behavior, using a novel pooling mechanism to aggregate\ninformation across people. We predict socially plausible futures by training\nadversarially against a recurrent discriminator, and encourage diverse\npredictions with a novel variety loss. Through experiments on several datasets\nwe demonstrate that our approach outperforms prior work in terms of accuracy,\nvariety, collision avoidance, and computational complexity.", "field": [], "task": ["Motion Forecasting", "Multi-future Trajectory Prediction", "Self-Driving Cars", "Trajectory Forecasting", "Trajectory Prediction"], "method": [], "dataset": ["Stanford Drone", "ETH/UCY"], "metric": ["ADE-8/12 @K = 20", "FDE(8/12) @K=5", "FDE-8/12 @K= 20", "ADE-8/12", "ADE (8/12) @K=5"], "title": "Social GAN: Socially Acceptable Trajectories with Generative Adversarial Networks"} {"abstract": "When building a unified vision system or gradually adding new capabilities to\na system, the usual assumption is that training data for all tasks is always\navailable. However, as the number of tasks grows, storing and retraining on\nsuch data becomes infeasible. A new problem arises where we add new\ncapabilities to a Convolutional Neural Network (CNN), but the training data for\nits existing capabilities are unavailable. We propose our Learning without\nForgetting method, which uses only new task data to train the network while\npreserving the original capabilities. Our method performs favorably compared to\ncommonly used feature extraction and fine-tuning adaption techniques and\nperforms similarly to multitask learning that uses original task data we assume\nunavailable. A more surprising observation is that Learning without Forgetting\nmay be able to replace fine-tuning with similar old and new task datasets for\nimproved new task performance.", "field": [], "task": ["Continual Learning"], "method": [], "dataset": ["visual domain decathlon (10 tasks)"], "metric": ["decathlon discipline (Score)"], "title": "Learning without Forgetting"} {"abstract": "With the proliferation of social media, fashion inspired from celebrities,\nreputed designers as well as fashion influencers has shortened the cycle of\nfashion design and manufacturing. However, with the explosion of fashion\nrelated content and large number of user generated fashion photos, it is an\narduous task for fashion designers to wade through social media photos and\ncreate a digest of trending fashion. This necessitates deep parsing of fashion\nphotos on social media to localize and classify multiple fashion items from a\ngiven fashion photo. While object detection competitions such as MSCOCO have\nthousands of samples for each of the object categories, it is quite difficult\nto get large labeled datasets for fast fashion items. Moreover,\nstate-of-the-art object detectors do not have any functionality to ingest large\namount of unlabeled data available on social media in order to fine tune object\ndetectors with labeled datasets. In this work, we show application of a generic\nobject detector, that can be pretrained in an unsupervised manner, on 24\ncategories from recently released Open Images V4 dataset. 
We first train the\nbase architecture of the object detector using unsupervised learning on 60K\nunlabeled photos from 24 categories gathered from social media, and then\nfine-tune it on 8.2K labeled photos from the Open Images V4 dataset.\nOn 300 x 300 image inputs, we achieve 72.7% mAP on a test dataset of 2.4K\nphotos while performing 11% to 17% better than the state-of-the-art\nobject detectors. We show that this improvement is due to our choice of\narchitecture that lets us do unsupervised learning and that performs\nsignificantly better in identifying small objects.", "field": [], "task": ["Object Detection"], "method": [], "dataset": ["SUN-RGBD val"], "metric": ["MAP"], "title": "How To Extract Fashion Trends From Social Media? A Robust Object Detector With Support For Unsupervised Learning"} {"abstract": "Deep learning models have achieved huge success in numerous fields, such as computer vision and natural language processing. However, unlike such fields, it is hard to apply traditional deep learning models to graph data due to the 'node-orderless' property. Normally, adjacency matrices will cast an artificial and random node-order on the graphs, which renders the performance of deep models on graph classification tasks extremely erratic, and the representations learned by such models lack clear interpretability. To eliminate the unnecessary node-order constraint, we propose a novel model named Isomorphic Neural Network (IsoNN), which learns the graph representation by extracting its isomorphic features via graph matching between the input graph and templates. IsoNN has two main components: a graph isomorphic feature extraction component and a classification component. The graph isomorphic feature extraction component utilizes a set of subgraph templates as the kernel variables to learn the possible subgraph patterns existing in the input graph and then computes the isomorphic features. A set of permutation matrices is used in the component to break the node-order brought by the matrix representation. Three fully-connected layers are used as the classification component in IsoNN. Extensive experiments are conducted on benchmark datasets, and the experimental results demonstrate the effectiveness of IsoNN, especially compared with both classic and state-of-the-art graph classification methods.", "field": [], "task": ["Graph Classification", "Graph Matching", "Graph Representation Learning", "Representation Learning"], "method": [], "dataset": ["BP-fMRI-97", "MUTAG", "HIV-fMRI-77 ", "PTC", "HIV-DTI-77", "HIV-fMRI-77"], "metric": ["F1", "Accuracy"], "title": "IsoNN: Isomorphic Neural Network for Graph Representation Learning and Classification"} {"abstract": "In this paper, we are interested in editing text in natural images, which aims to replace or modify a word in the source image with another one while maintaining its realistic look. This task is challenging, as the styles of both background and text need to be preserved so that the edited image is visually indistinguishable from the source image. Specifically, we propose an end-to-end trainable style retention network (SRNet) that consists of three modules: text conversion module, background inpainting module and fusion module. The text conversion module changes the text content of the source image into the target text while keeping the original text style. The background inpainting module erases the original text, and fills the text region with appropriate texture. 
The fusion module combines the information from the two former modules, and generates the edited text images. To our knowledge, this work is the first attempt to edit text in natural images at the word level. Both visual effects and quantitative results on synthetic and real-world dataset (ICDAR 2013) fully confirm the importance and necessity of modular decomposition. We also conduct extensive experiments to validate the usefulness of our method in various real-world applications such as text image synthesis, augmented reality (AR) translation, information hiding, etc.", "field": [], "task": ["Image Generation", "Image Inpainting", "Image-to-Image Translation", "Scene Text Editing"], "method": [], "dataset": ["KITTI Object Tracking Evaluation 2012", "StreetView"], "metric": ["Average PSNR", "SSIM"], "title": "Editing Text in the Wild"} {"abstract": "This paper introduces a new neural structure called FusionNet, which extends\nexisting attention approaches from three perspectives. First, it puts forward a\nnovel concept of \"history of word\" to characterize attention information from\nthe lowest word-level embedding up to the highest semantic-level\nrepresentation. Second, it introduces an improved attention scoring function\nthat better utilizes the \"history of word\" concept. Third, it proposes a\nfully-aware multi-level attention mechanism to capture the complete information\nin one text (such as a question) and exploit it in its counterpart (such as\ncontext or passage) layer by layer. We apply FusionNet to the Stanford Question\nAnswering Dataset (SQuAD) and it achieves the first position for both single\nand ensemble model on the official SQuAD leaderboard at the time of writing\n(Oct. 4th, 2017). Meanwhile, we verify the generalization of FusionNet with two\nadversarial SQuAD datasets and it sets up the new state-of-the-art on both\ndatasets: on AddSent, FusionNet increases the best F1 metric from 46.6% to\n51.4%; on AddOneSent, FusionNet boosts the best F1 metric from 56.0% to 60.7%.", "field": [], "task": ["Question Answering", "Reading Comprehension"], "method": [], "dataset": ["SQuAD1.1 dev", "SQuAD1.1", "SQuAD2.0"], "metric": ["EM", "F1"], "title": "FusionNet: Fusing via Fully-Aware Attention with Application to Machine Comprehension"} {"abstract": "We present an end-to-end 3D reconstruction method for a scene by directly regressing a truncated signed distance function (TSDF) from a set of posed RGB images. Traditional approaches to 3D reconstruction rely on an intermediate representation of depth maps prior to estimating a full 3D model of a scene. We hypothesize that a direct regression to 3D is more effective. A 2D CNN extracts features from each image independently which are then back-projected and accumulated into a voxel volume using the camera intrinsics and extrinsics. After accumulation, a 3D CNN refines the accumulated features and predicts the TSDF values. Additionally, semantic segmentation of the 3D model is obtained without significant computation. This approach is evaluated on the Scannet dataset where we significantly outperform state-of-the-art baselines (deep multiview stereo followed by traditional TSDF fusion) both quantitatively and qualitatively. 
We compare our 3D semantic segmentation to prior methods that use a depth sensor since no previous work attempts the problem with only RGB input.", "field": [], "task": ["3D Reconstruction", "3D Scene Reconstruction", "3D Semantic Segmentation", "Depth Estimation", "Regression", "Semantic Segmentation"], "method": [], "dataset": ["ScanNet"], "metric": ["3DIoU", "RMSE", "absolute relative error", "Chamfer Distance", "L1"], "title": "Atlas: End-to-End 3D Scene Reconstruction from Posed Images"} {"abstract": "Document-level relation extraction requires integrating information within and across multiple sentences of a document and capturing complex interactions between inter-sentence entities. However, effective aggregation of relevant information in the document remains a challenging research question. Existing approaches construct static document-level graphs based on syntactic trees, co-references or heuristics from the unstructured text to model the dependencies. Unlike previous methods that may not be able to capture rich non-local interactions for inference, we propose a novel model that empowers the relational reasoning across sentences by automatically inducing the latent document-level graph. We further develop a refinement strategy, which enables the model to incrementally aggregate relevant information for multi-hop reasoning. Specifically, our model achieves an F1 score of 59.05 on a large-scale document-level dataset (DocRED), significantly improving over the previous results, and also yields new state-of-the-art results on the CDR and GDA dataset. Furthermore, extensive analyses show that the model is able to discover more accurate inter-sentence relations.", "field": [], "task": ["Relational Reasoning", "Relation Extraction"], "method": [], "dataset": ["DocRED"], "metric": ["Ign F1", "F1"], "title": "Reasoning with Latent Structure Refinement for Document-Level Relation Extraction"} {"abstract": "We propose an effective framework for the temporal action segmentation task, namely an Action Segment Refinement Framework (ASRF). Our model architecture consists of a long-term feature extractor and two branches: the Action Segmentation Branch (ASB) and the Boundary Regression Branch (BRB). The long-term feature extractor provides shared features for the two branches with a wide temporal receptive field. The ASB classifies video frames with action classes, while the BRB regresses the action boundary probabilities. The action boundaries predicted by the BRB refine the output from the ASB, which results in a significant performance improvement. Our contributions are three-fold: (i) We propose a framework for temporal action segmentation, the ASRF, which divides temporal action segmentation into frame-wise action classification and action boundary regression. Our framework refines frame-level hypotheses of action classes using predicted action boundaries. (ii) We propose a loss function for smoothing the transition of action probabilities, and analyze combinations of various loss functions for temporal action segmentation. (iii) Our framework outperforms state-of-the-art methods on three challenging datasets, offering an improvement of up to 13.7% in terms of segmental edit distance and up to 16.1% in terms of segmental F1 score. 
Our code will be publicly available soon.", "field": [], "task": ["Action Classification", "Action Classification ", "Action Segmentation", "Regression"], "method": [], "dataset": ["50 Salads", "Breakfast", "GTEA"], "metric": ["Acc", "Edit", "F1@10%", "F1@25%", "F1@50%"], "title": "Alleviating Over-segmentation Errors by Detecting Action Boundaries"} {"abstract": "Despite great progress in supervised semantic segmentation,a large performance drop is usually observed when deploying the model in the wild. Domain adaptation methods tackle the issue by aligning the source domain and the target domain. However, most existing methods attempt to perform the alignment from a holistic view, ignoring the underlying class-level data structure in the target domain. To fully exploit the supervision in the source domain, we propose a fine-grained adversarial learning strategy for class-level feature alignment while preserving the internal structure of semantics across domains. We adopt a fine-grained domain discriminator that not only plays as a domain distinguisher, but also differentiates domains at class level. The traditional binary domain labels are also generalized to domain encodings as the supervision signal to guide the fine-grained feature alignment. An analysis with Class Center Distance (CCD) validates that our fine-grained adversarial strategy achieves better class-level alignment compared to other state-of-the-art methods. Our method is easy to implement and its effectiveness is evaluated on three classical domain adaptation tasks, i.e., GTA5 to Cityscapes, SYNTHIA to Cityscapes and Cityscapes to Cross-City. Large performance gains show that our method outperforms other global feature alignment based and class-wise alignment based counterparts. The code is publicly available at https://github.com/JDAI-CV/FADA.", "field": [], "task": ["Domain Adaptation", "Image-to-Image Translation", "Semantic Segmentation", "Synthetic-to-Real Translation"], "method": [], "dataset": ["GTAV-to-Cityscapes Labels", "SYNTHIA-to-Cityscapes"], "metric": ["mIoU (13 classes)", "mIoU"], "title": "Classes Matter: A Fine-grained Adversarial Approach to Cross-domain Semantic Segmentation"} {"abstract": "The superiority of deeply learned pedestrian representations has been\nreported in very recent literature of person re-identification (re-ID). In this\npaper, we consider the more pragmatic issue of learning a deep feature with no\nor only a few labels. We propose a progressive unsupervised learning (PUL)\nmethod to transfer pretrained deep representations to unseen domains. Our\nmethod is easy to implement and can be viewed as an effective baseline for\nunsupervised re-ID feature learning. Specifically, PUL iterates between 1)\npedestrian clustering and 2) fine-tuning of the convolutional neural network\n(CNN) to improve the original model trained on the irrelevant labeled dataset.\nSince the clustering results can be very noisy, we add a selection operation\nbetween the clustering and fine-tuning. At the beginning when the model is\nweak, CNN is fine-tuned on a small amount of reliable examples which locate\nnear to cluster centroids in the feature space. As the model becomes stronger\nin subsequent iterations, more images are being adaptively selected as CNN\ntraining samples. Progressively, pedestrian clustering and the CNN model are\nimproved simultaneously until algorithm convergence. This process is naturally\nformulated as self-paced learning. 
We then point out promising directions that\nmay lead to further improvement. Extensive experiments on three large-scale\nre-ID datasets demonstrate that PUL outputs discriminative features that\nimprove the re-ID accuracy.", "field": [], "task": ["Person Re-Identification", "Unsupervised Person Re-Identification"], "method": [], "dataset": ["DukeMTMC-reID", "Market-1501"], "metric": ["Rank-1", "Rank-10", "Rank-5", "MAP"], "title": "Unsupervised Person Re-identification: Clustering and Fine-tuning"} {"abstract": "The present study proposes a deep learning model, named DeepSleepNet, for\nautomatic sleep stage scoring based on raw single-channel EEG. Most of the\nexisting methods rely on hand-engineered features which require prior knowledge\nof sleep analysis. Only a few of them encode the temporal information such as\ntransition rules, which is important for identifying the next sleep stages,\ninto the extracted features. In the proposed model, we utilize Convolutional\nNeural Networks to extract time-invariant features, and bidirectional-Long\nShort-Term Memory to learn transition rules among sleep stages automatically\nfrom EEG epochs. We implement a two-step training algorithm to train our model\nefficiently. We evaluated our model using different single-channel EEGs\n(F4-EOG(Left), Fpz-Cz and Pz-Oz) from two public sleep datasets, that have\ndifferent properties (e.g., sampling rate) and scoring standards (AASM and\nR&K). The results showed that our model achieved similar overall accuracy and\nmacro F1-score (MASS: 86.2%-81.7, Sleep-EDF: 82.0%-76.9) compared to the\nstate-of-the-art methods (MASS: 85.9%-80.5, Sleep-EDF: 78.9%-73.7) on both\ndatasets. This demonstrated that, without changing the model architecture and\nthe training algorithm, our model could automatically learn features for sleep\nstage scoring from different raw single-channel EEGs from different datasets\nwithout utilizing any hand-engineered features.", "field": [], "task": ["EEG", "Sleep Stage Detection"], "method": [], "dataset": ["MASS SS3", "Sleep-EDF"], "metric": ["Cohen's kappa", "Macro-F1", "Accuracy"], "title": "DeepSleepNet: a Model for Automatic Sleep Stage Scoring based on Raw Single-Channel EEG"} {"abstract": "Person re-identification (reID) is an important task that requires to\nretrieve a person's images from an image dataset, given one image of the person\nof interest. For learning robust person features, the pose variation of person\nimages is one of the key challenges. Existing works targeting the problem\neither perform human alignment, or learn human-region-based representations.\nExtra pose information and computational cost is generally required for\ninference. To solve this issue, a Feature Distilling Generative Adversarial\nNetwork (FD-GAN) is proposed for learning identity-related and pose-unrelated\nrepresentations. It is a novel framework based on a Siamese structure with\nmultiple novel discriminators on human poses and identities. In addition to the\ndiscriminators, a novel same-pose loss is also integrated, which requires\nappearance of a same person's generated images to be similar. After learning\npose-unrelated person features with pose guidance, no auxiliary pose\ninformation and additional computational cost is required during testing. 
Our\nproposed FD-GAN achieves state-of-the-art performance on three person reID\ndatasets, which demonstrates the effectiveness and robust feature\ndistilling capability of the proposed FD-GAN.", "field": [], "task": ["Person Re-Identification"], "method": [], "dataset": ["DukeMTMC-reID", "Market-1501", "CUHK03"], "metric": ["Rank-1", "MAP"], "title": "FD-GAN: Pose-guided Feature Distilling GAN for Robust Person Re-identification"} {"abstract": "Unsupervised learning of syntactic structure is typically performed using\ngenerative models with discrete latent variables and multinomial parameters. In\nmost cases, these models have not leveraged continuous word representations. In\nthis work, we propose a novel generative model that jointly learns discrete\nsyntactic structure and continuous word representations in an unsupervised\nfashion by cascading an invertible neural network with a structured generative\nprior. We show that the invertibility condition allows for efficient exact\ninference and marginal likelihood computation in our model so long as the prior\nis well-behaved. In experiments we instantiate our approach with both Markov\nand tree-structured priors, evaluating on two tasks: part-of-speech (POS)\ninduction, and unsupervised dependency parsing without gold POS annotation. On\nthe Penn Treebank, our Markov-structured model surpasses state-of-the-art\nresults on POS induction. Similarly, we find that our tree-structured model\nachieves state-of-the-art performance on unsupervised dependency parsing for\nthe difficult training condition where neither gold POS annotation nor\npunctuation-based constraints are available.", "field": [], "task": ["Constituency Grammar Induction", "Dependency Parsing", "Unsupervised Dependency Parsing"], "method": [], "dataset": ["PTB"], "metric": ["Mean F1 (WSJ10)", "Mean F1 (WSJ)"], "title": "Unsupervised Learning of Syntactic Structure with Invertible Neural Projections"} {"abstract": "Deep learning is currently playing a crucial role toward higher levels of artificial intelligence. This paradigm allows neural networks to learn complex and abstract representations that are progressively obtained by combining simpler ones. Nevertheless, the internal \"black-box\" representations automatically discovered by current neural architectures often suffer from a lack of interpretability, making the study of explainable machine learning techniques of primary interest. This paper summarizes our recent efforts to develop a more interpretable neural model for directly processing speech from the raw waveform. In particular, we propose SincNet, a novel Convolutional Neural Network (CNN) that encourages the first layer to discover more meaningful filters by exploiting parametrized sinc functions. In contrast to standard CNNs, which learn all the elements of each filter, only low and high cutoff frequencies of band-pass filters are directly learned from data. This inductive bias offers a very compact way to derive a customized filter-bank front-end that only depends on some parameters with a clear physical meaning. 
Our experiments, conducted on both speaker and speech recognition, show that the proposed architecture converges faster, performs better, and is more interpretable than standard CNNs.", "field": [], "task": ["Distant Speech Recognition", "Speech Recognition"], "method": [], "dataset": ["DIRHA English WSJ"], "metric": ["Word Error Rate (WER)"], "title": "Interpretable Convolutional Filters with SincNet"} {"abstract": "Model-based human pose estimation is currently approached through two different paradigms. Optimization-based methods fit a parametric body model to 2D observations in an iterative manner, leading to accurate image-model alignments, but are often slow and sensitive to the initialization. In contrast, regression-based methods, that use a deep network to directly estimate the model parameters from pixels, tend to provide reasonable, but not pixel accurate, results while requiring huge amounts of supervision. In this work, instead of investigating which approach is better, our key insight is that the two paradigms can form a strong collaboration. A reasonable, directly regressed estimate from the network can initialize the iterative optimization making the fitting faster and more accurate. Similarly, a pixel accurate fit from iterative optimization can act as strong supervision for the network. This is the core of our proposed approach SPIN (SMPL oPtimization IN the loop). The deep network initializes an iterative optimization routine that fits the body model to 2D joints within the training loop, and the fitted estimate is subsequently used to supervise the network. Our approach is self-improving by nature, since better network estimates can lead the optimization to better solutions, while more accurate optimization fits provide better supervision for the network. We demonstrate the effectiveness of our approach in different settings, where 3D ground truth is scarce, or not available, and we consistently outperform the state-of-the-art model-based pose estimation approaches by significant margins. The project website with videos, results, and code can be found at https://seas.upenn.edu/~nkolot/projects/spin.", "field": [], "task": ["3D Human Pose Estimation", "Pose Estimation", "Regression"], "method": [], "dataset": ["3D Poses in the Wild Challenge", "MPI-INF-3DHP", "3DPW"], "metric": ["PA-MPJPE", "MPVPE", "MPJPE", "MJPE", "AUC", "3DPCK", "MPJAE"], "title": "Learning to Reconstruct 3D Human Pose and Shape via Model-fitting in the Loop"} {"abstract": "Image clustering is a crucial but challenging task in machine learning and computer vision. Existing methods often ignore the combination between feature learning and clustering. To tackle this problem, we propose Deep Adaptive Clustering (DAC) that recasts the clustering problem into a binary pairwise-classification framework to judge whether pairs of images belong to the same clusters. In DAC, the similarities are calculated as the cosine distance between label features of images which are generated by a deep convolutional network (ConvNet). By introducing a constraint into DAC, the learned label features tend to be one-hot vectors that can be utilized for clustering images. The main challenge is that the ground-truth similarities are unknown in image clustering. We handle this issue by presenting an alternating iterative Adaptive Learning algorithm where each iteration alternately selects labeled samples and trains the ConvNet. Conclusively, images are automatically clustered based on the label features. 
Experimental results show that DAC achieves state-of-the-art performance on five popular datasets, e.g., yielding 97.75% clustering accuracy on MNIST, 52.18% on CIFAR-10 and 46.99% on STL-10.\r", "field": [], "task": ["Image Clustering"], "method": [], "dataset": ["Imagenet-dog-15", "CIFAR-100", "CIFAR-10", "Tiny-ImageNet", "ImageNet-10", "STL-10"], "metric": ["Train set", "Train Split", "ARI", "Backbone", "Train Set", "NMI", "Accuracy"], "title": "Deep Adaptive Image Clustering"} {"abstract": "Although unsupervised person re-identification (RE-ID) has drawn increasing\nresearch attention due to its potential to address the scalability problem of\nsupervised RE-ID models, it is very challenging to learn discriminative\ninformation in the absence of pairwise labels across disjoint camera views. To\novercome this problem, we propose a deep model for soft multilabel learning\nfor unsupervised RE-ID. The idea is to learn a soft multilabel (real-valued\nlabel likelihood vector) for each unlabeled person by comparing (and\nrepresenting) the unlabeled person with a set of known reference persons from\nan auxiliary domain. We propose the soft multilabel-guided hard negative mining\nto learn a discriminative embedding for the unlabeled target domain by\nexploring the similarity consistency of the visual features and the soft\nmultilabels of unlabeled target pairs. Since most target pairs are cross-view\npairs, we develop the cross-view consistent soft multilabel learning to achieve\nthe learning goal that the soft multilabels are consistently good across\ndifferent camera views. To enable efficient soft multilabel learning, we\nintroduce reference agent learning to represent each reference person by a\nreference agent in a joint embedding. We evaluate our unified deep model on\nMarket-1501 and DukeMTMC-reID. Our model outperforms the state-of-the-art\nunsupervised RE-ID methods by clear margins. Code is available at\nhttps://github.com/KovenYu/MAR.", "field": [], "task": ["Person Re-Identification", "Unsupervised Person Re-Identification"], "method": [], "dataset": ["DukeMTMC-reID", "Market-1501"], "metric": ["Rank-1", "Rank-5", "MAP"], "title": "Unsupervised Person Re-identification by Soft Multilabel Learning"} {"abstract": "Arbitrary-oriented objects widely appear in natural scenes, aerial photographs, remote sensing images, etc., and thus arbitrary-oriented object detection has received considerable attention. Many current rotation detectors use a large number of anchors with different orientations to achieve spatial alignment with ground truth boxes, and then Intersection-over-Union (IoU) is applied to sample the positive and negative candidates for training. However, we observe that the selected positive anchors cannot always ensure accurate detections after regression, while some negative samples can achieve accurate localization. This indicates that the quality assessment of anchors through IoU is not appropriate, and this further leads to inconsistency between classification confidence and localization accuracy. In this paper, we propose a dynamic anchor learning (DAL) method, which utilizes the newly defined matching degree to comprehensively evaluate the localization potential of the anchors and carry out a more efficient label assignment process. In this way, the detector can dynamically select high-quality anchors to achieve accurate object detection, and the divergence between classification and regression will be alleviated. 
With the newly introduced DAL, we achieve superior detection performance for arbitrary-oriented objects with only a few horizontal preset anchors. Experimental results on three remote sensing datasets, HRSC2016, DOTA, and UCAS-AOD, as well as the scene text dataset ICDAR 2015, show that our method achieves a substantial improvement compared with the baseline model. Besides, our approach is also universal for object detection using horizontal bounding boxes. The code and models are available at https://github.com/ming71/DAL.", "field": [], "task": ["Multi-Oriented Scene Text Detection", "Object Detection In Aerial Images", "Regression"], "method": [], "dataset": ["ICDAR2015", "DOTA"], "metric": ["F-Measure", "mAP"], "title": "Dynamic Anchor Learning for Arbitrary-Oriented Object Detection"} {"abstract": "In graph instance representation learning, both the diverse graph instance sizes and the graph node orderless property have been the major obstacles that render existing representation learning models ineffective. In this paper, we will examine the effectiveness of GRAPH-BERT on graph instance representation learning, which was originally designed for node representation learning tasks. To adapt GRAPH-BERT to the new problem settings, we re-design it with a segmented architecture instead, which is also named SEG-BERT (Segmented GRAPH-BERT) for reference simplicity in this paper. SEG-BERT involves no node-order-variant inputs or functional components anymore, and it can handle the graph node orderless property naturally. What's more, SEG-BERT has a segmented architecture and introduces three different strategies to unify the graph instance sizes, i.e., full-input, padding/pruning and segment shifting, respectively. SEG-BERT is pre-trainable in an unsupervised manner and can be further transferred to new tasks directly or with necessary fine-tuning. We have tested the effectiveness of SEG-BERT with experiments on seven graph instance benchmark datasets, and SEG-BERT can outperform the comparison methods on six out of them with significant performance advantages.", "field": [], "task": ["Graph Classification", "Representation Learning"], "method": [], "dataset": ["COLLAB", "IMDb-B", "PROTEINS", "IMDb-M", "MUTAG", "PTC"], "metric": ["Accuracy"], "title": "Segmented Graph-Bert for Graph Instance Modeling"} {"abstract": "Several deep learning models have been proposed for question answering.\nHowever, due to their single-pass nature, they have no way to recover from\nlocal maxima corresponding to incorrect answers. To address this problem, we\nintroduce the Dynamic Coattention Network (DCN) for question answering. The DCN\nfirst fuses co-dependent representations of the question and the document in\norder to focus on relevant parts of both. Then a dynamic pointing decoder\niterates over potential answer spans. This iterative procedure enables the\nmodel to recover from initial local maxima corresponding to incorrect answers.\nOn the Stanford question answering dataset, a single DCN model improves the\nprevious state of the art from 71.0% F1 to 75.9%, while a DCN ensemble obtains\n80.4% F1.", "field": [], "task": ["Question Answering"], "method": [], "dataset": ["SQuAD1.1 dev", "SQuAD1.1"], "metric": ["EM", "F1"], "title": "Dynamic Coattention Networks For Question Answering"} {"abstract": "Spiking neural networks (SNNs) can be used in low-power and embedded systems (such as emerging neuromorphic chips) due to their event-based nature. 
Also, they have the advantage of low computation cost in contrast to conventional artificial neural networks (ANNs), while preserving ANN's properties. However, temporal coding in layers of convolutional spiking neural networks and other types of SNNs has yet to be studied. In this paper, we provide insight into spatio-temporal feature extraction of convolutional SNNs in experiments designed to exploit this property. The shallow convolutional SNN outperforms state-of-the-art spatio-temporal feature extractor methods such as C3D, ConvLstm, and similar networks. Furthermore, we present a new deep spiking architecture to tackle real-world problems (in particular classification tasks) which achieved superior performance compared to other SNN methods on NMNIST (99.6%), DVS-CIFAR10 (69.2%) and DVS-Gesture (96.7%) and ANN methods on UCF-101 (42.1%) and HMDB-51 (21.5%) datasets. It is also worth noting that the training process is implemented based on variation of spatio-temporal backpropagation explained in the paper.", "field": [], "task": ["Activity Recognition In Videos", "Event data classification", "Image Classification", "Video Classification"], "method": [], "dataset": ["CIFAR10-DVS", "MNIST"], "metric": ["Percentage error", "Accuracy"], "title": "Convolutional Spiking Neural Networks for Spatio-Temporal Feature Extraction"} {"abstract": "In this paper, we present a simple and efficient method for training deep\nneural networks in a semi-supervised setting where only a small portion of\ntraining data is labeled. We introduce self-ensembling, where we form a\nconsensus prediction of the unknown labels using the outputs of the\nnetwork-in-training on different epochs, and most importantly, under different\nregularization and input augmentation conditions. This ensemble prediction can\nbe expected to be a better predictor for the unknown labels than the output of\nthe network at the most recent training epoch, and can thus be used as a target\nfor training. Using our method, we set new records for two standard\nsemi-supervised learning benchmarks, reducing the (non-augmented)\nclassification error rate from 18.44% to 7.05% in SVHN with 500 labels and from\n18.63% to 16.55% in CIFAR-10 with 4000 labels, and further to 5.12% and 12.16%\nby enabling the standard augmentations. We additionally obtain a clear\nimprovement in CIFAR-100 classification accuracy by using random images from\nthe Tiny Images dataset as unlabeled extra inputs during training. Finally, we\ndemonstrate good tolerance to incorrect labels.", "field": [], "task": ["Semi-Supervised Image Classification"], "method": [], "dataset": ["cifar-100, 10000 Labels", "CIFAR-10, 250 Labels", "CIFAR-10, 4000 Labels", "SVHN, 1000 labels"], "metric": ["Accuracy"], "title": "Temporal Ensembling for Semi-Supervised Learning"} {"abstract": "To learn intrinsic low-dimensional structures from high-dimensional data that most discriminate between classes, we propose the principle of Maximal Coding Rate Reduction ($\\text{MCR}^2$), an information-theoretic measure that maximizes the coding rate difference between the whole dataset and the sum of each individual class. We clarify its relationships with most existing frameworks such as cross-entropy, information bottleneck, information gain, contractive and contrastive learning, and provide theoretical guarantees for learning diverse and discriminative features. 
The coding rate can be accurately computed from finite samples of degenerate subspace-like distributions and can learn intrinsic representations in supervised, self-supervised, and unsupervised settings in a unified manner. Empirically, the representations learned using this principle alone are significantly more robust to label corruptions in classification than those using cross-entropy, and can lead to state-of-the-art results in clustering mixed data from self-learned invariant features.", "field": [], "task": ["Image Clustering"], "method": [], "dataset": ["STL-10"], "metric": ["NMI", "Accuracy"], "title": "Learning Diverse and Discriminative Representations via the Principle of Maximal Coding Rate Reduction"} {"abstract": "Deep convolutional neural networks, assisted by architectural design strategies, make extensive use of data augmentation techniques and layers with a high number of feature maps to embed object transformations. That is highly inefficient and for large datasets implies a massive redundancy of feature detectors. Even though capsule networks are still in their infancy, they constitute a promising solution to extend current convolutional networks and endow artificial visual perception with a process to encode more efficiently all feature affine transformations. Indeed, a properly working capsule network should theoretically achieve higher results with a considerably lower parameter count, due to its intrinsic capability to generalize to novel viewpoints. Nevertheless, little attention has been given to this relevant aspect. In this paper, we investigate the efficiency of capsule networks and, pushing their capacity to the limits with an extreme architecture with barely 160K parameters, we prove that the proposed architecture is still able to achieve state-of-the-art results on three different datasets with only 2% of the original CapsNet parameters. Moreover, we replace dynamic routing with a novel non-iterative, highly parallelizable routing algorithm that can easily cope with a reduced number of capsules. Extensive experimentation with other capsule implementations has proved the effectiveness of our methodology and the capability of capsule networks to efficiently embed visual representations more prone to generalization.", "field": [], "task": ["Data Augmentation", "Image Classification"], "method": [], "dataset": ["MNIST", "smallNORB"], "metric": ["Percentage error", "Classification Error", "Trainable Parameters", "Accuracy"], "title": "Efficient-CapsNet: Capsule Network with Self-Attention Routing"} {"abstract": "This paper presents Task 4 of the Detection and Classification of Acoustic Scenes and Events (DCASE) 2019 challenge and provides a first analysis of the challenge results. The task is a follow-up to Task 4 of DCASE 2018, and involves training systems for large-scale detection of sound events using a combination of weakly labeled data, i.e. training labels without time boundaries, and strongly-labeled synthesized data. The paper introduces the Domestic Environment Sound Event Detection (DESED) dataset, mixing a part of last year\u2019s dataset and an additional synthetic, strongly labeled dataset provided this year that we\u2019ll describe in more detail. We also report the performance of the submitted systems on the official evaluation (test) and development sets as well as several additional datasets. 
The best systems from this year outperform last year\u2019s winning system by about 10 percentage points in terms of F-measure.", "field": [], "task": ["Sound Event Detection"], "method": [], "dataset": ["DESED"], "metric": ["event-based F1 score"], "title": "Sound event detection in domestic environments with weakly labeled data and soundscape synthesis"} {"abstract": "This technical report presents a brief description of our submission to the dense video captioning task of ActivityNet Challenge 2020. Our approach follows a two-stage pipeline: first, we extract a set of temporal event proposals; then we propose a multi-event captioning model to capture the event-level temporal relationships and effectively fuse the multi-modal information. Our approach achieves a 9.28 METEOR score on the test set.", "field": [], "task": ["Dense Video Captioning", "Video Captioning"], "method": [], "dataset": ["ActivityNet Captions"], "metric": ["METEOR"], "title": "Dense-Captioning Events in Videos: SYSU Submission to ActivityNet Challenge 2020"} {"abstract": "Graph Neural Networks (GNNs) have achieved promising performance on a wide range of graph-based tasks. Despite their success, one severe limitation of GNNs is the over-smoothing issue (indistinguishable representations of nodes in different classes). In this work, we present a systematic and quantitative study on the over-smoothing issue of GNNs. First, we introduce two quantitative metrics, MAD and MADGap, to measure the smoothness and over-smoothness of the graph node representations, respectively. Then, we verify that smoothing is the nature of GNNs and the critical factor leading to over-smoothness is the low information-to-noise ratio of the message received by the nodes, which is partially determined by the graph topology. Finally, we propose two methods to alleviate the over-smoothing issue from the topological view: (1) MADReg, which adds a MADGap-based regularizer to the training objective; (2) AdaGraph, which optimizes the graph topology based on the model predictions. Extensive experiments on 7 widely-used graph datasets with 10 typical GNN models show that the two proposed methods are effective for relieving the over-smoothing issue, thus improving the performance of various GNN models.", "field": [], "task": ["Node Classification"], "method": [], "dataset": ["Cora", "Pubmed", "Citeseer"], "metric": ["Accuracy"], "title": "Measuring and Relieving the Over-smoothing Problem for Graph Neural Networks from the Topological View"} {"abstract": "Previous adversarial training raises model robustness at the cost of accuracy on natural data. In this paper, our target is to reduce natural accuracy degradation. We use the model logits from one clean model $\\mathcal{M}^{natural}$ to guide learning of the robust model $\\mathcal{M}^{robust}$, taking into consideration that logits from the well-trained clean model $\\mathcal{M}^{natural}$ embed the most discriminative features of natural data, {\\it e.g.}, generalizable classifier boundary. Our solution is to constrain logits from the robust model $\\mathcal{M}^{robust}$ that takes adversarial examples as input and make them similar to those from a clean model $\\mathcal{M}^{natural}$ fed with corresponding natural data. It lets $\\mathcal{M}^{robust}$ inherit the classifier boundary of $\\mathcal{M}^{natural}$. Thus, we name our method Boundary Guided Adversarial Training (BGAT). 
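As a rough illustration of the smoothness measurement discussed in the over-smoothing abstract above, the sketch below computes a mean-average-cosine-distance score over node representations; the masking and averaging details are simplified assumptions rather than the exact MAD/MADGap definitions.

```python
import numpy as np

def mad(H, mask=None, eps=1e-12):
    """Mean average (cosine) distance between node representations.
    H: (num_nodes, dim) embeddings; mask: optional (n, n) 0/1 matrix selecting
    which node pairs to average over (e.g. neighbors vs. remote nodes)."""
    Hn = H / (np.linalg.norm(H, axis=1, keepdims=True) + eps)
    D = 1.0 - Hn @ Hn.T                      # pairwise cosine distances
    if mask is None:
        mask = 1.0 - np.eye(len(H))          # all pairs except self-pairs
    D = D * mask
    row_sum, row_cnt = D.sum(1), (mask > 0).sum(1)
    per_node = row_sum[row_cnt > 0] / row_cnt[row_cnt > 0]
    return per_node.mean()

H = np.random.default_rng(0).standard_normal((5, 16))
print(mad(H))                 # higher value = less smoothed representations
print(mad(np.ones((5, 16))))  # fully smoothed -> ~0
```

A MADGap-style quantity would then contrast the score computed over remote node pairs with the score over neighboring pairs.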
Moreover, we generalize BGAT to Learnable Boundary Guided Adversarial Training (LBGAT) by training $\\mathcal{M}^{natural}$ and $\\mathcal{M}^{robust}$ simultaneously and collaboratively to learn one most robustness-friendly classifier boundary for the strongest robustness. Extensive experiments are conducted on CIFAR-10, CIFAR-100, and challenging Tiny ImageNet datasets. Along with other state-of-the-art adversarial training approaches, {\\it e.g.}, Adversarial Logit Pairing (ALP) and TRADES, the performance is further enhanced.", "field": [], "task": ["Adversarial Defense"], "method": [], "dataset": ["CIFAR-100"], "metric": ["autoattack"], "title": "Learnable Boundary Guided Adversarial Training"} {"abstract": "We address semi-supervised video object segmentation, the task of\nautomatically generating accurate and consistent pixel masks for objects in a\nvideo sequence, given the first-frame ground truth annotations. Towards this\ngoal, we present the PReMVOS algorithm (Proposal-generation, Refinement and\nMerging for Video Object Segmentation). Our method separates this problem into\ntwo steps, first generating a set of accurate object segmentation mask\nproposals for each video frame and then selecting and merging these proposals\ninto accurate and temporally consistent pixel-wise object tracks over a video\nsequence in a way which is designed to specifically tackle the difficult\nchallenges involved with segmenting multiple objects across a video sequence.\nOur approach surpasses all previous state-of-the-art results on the DAVIS 2017\nvideo object segmentation benchmark with a J & F mean score of 71.6 on the\ntest-dev dataset, and achieves first place in both the DAVIS 2018 Video Object\nSegmentation Challenge and the YouTube-VOS 1st Large-scale Video Object\nSegmentation Challenge.", "field": [], "task": ["Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation", "Youtube-VOS"], "method": [], "dataset": ["DAVIS 2017 (val)", "DAVIS 2017 (test-dev)", "DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "PReMVOS: Proposal-generation, Refinement and Merging for Video Object Segmentation"} {"abstract": "Semantic graphs, such as WordNet, are resources which curate natural language\non two distinguishable layers. On the local level, individual relations between\nsynsets (semantic building blocks) such as hypernymy and meronymy enhance our\nunderstanding of the words used to express their meanings. Globally, analysis\nof graph-theoretic properties of the entire net sheds light on the structure of\nhuman language as a whole. In this paper, we combine global and local\nproperties of semantic graphs through the framework of Max-Margin Markov Graph\nModels (M3GM), a novel extension of Exponential Random Graph Model (ERGM) that\nscales to large multi-relational graphs. We demonstrate how such global\nmodeling improves performance on the local task of predicting semantic\nrelations between synsets, yielding new state-of-the-art results on the WN18RR\ndataset, a challenging version of WordNet link prediction in which \"easy\"\nreciprocal cases are removed. 
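The boundary-guidance idea from the BGAT/LBGAT abstract above can be sketched as a loss that pulls the robust model's logits on adversarial inputs towards a clean model's logits on the corresponding natural inputs. In this hedged PyTorch sketch, the MSE guidance term, the loss weighting, and the frozen clean model are assumptions, not the paper's exact recipe (LBGAT trains both models collaboratively).

```python
import torch
import torch.nn.functional as F

def boundary_guided_loss(robust_model, clean_model, x_nat, x_adv, y, lam=1.0):
    """CE on adversarial examples plus a guidance term that matches the
    robust model's adversarial logits to the clean model's natural logits."""
    logits_adv = robust_model(x_adv)
    with torch.no_grad():                      # clean model only guides here
        logits_nat = clean_model(x_nat)
    ce = F.cross_entropy(logits_adv, y)
    guide = F.mse_loss(logits_adv, logits_nat)
    return ce + lam * guide

# Toy usage with linear "models" on fake data
robust = torch.nn.Linear(8, 3)
clean = torch.nn.Linear(8, 3)
x = torch.randn(4, 8)
x_adv = x + 0.03 * torch.randn_like(x)         # stand-in for a PGD attack
y = torch.randint(0, 3, (4,))
boundary_guided_loss(robust, clean, x, x_adv, y).backward()
```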
In addition, the M3GM model identifies\nmultirelational motifs that are characteristic of well-formed lexical semantic\nontologies.", "field": [], "task": ["Link Prediction"], "method": [], "dataset": ["WN18RR"], "metric": ["Hits@10", "MRR", "Hits@1"], "title": "Predicting Semantic Relations using Global Graph Properties"} {"abstract": "In this paper, we propose a new rich resource enhanced AMR aligner which\nproduces multiple alignments and a new transition system for AMR parsing along\nwith its oracle parser. Our aligner is further tuned by our oracle parser via\npicking the alignment that leads to the highest-scored achievable AMR graph.\nExperimental results show that our aligner outperforms the rule-based aligner\nin previous work by achieving higher alignment F1 score and consistently\nimproving two open-sourced AMR parsers. Based on our aligner and transition\nsystem, we develop a transition-based AMR parser that parses a sentence into\nits AMR graph directly. An ensemble of our parsers with only words and POS tags\nas input leads to 68.4 Smatch F1 score.", "field": [], "task": ["AMR Parsing"], "method": [], "dataset": ["LDC2014T12:", "LDC2014T12"], "metric": ["F1 Newswire", "F1 Full"], "title": "An AMR Aligner Tuned by Transition-based Parser"} {"abstract": "In this paper, we propose PointRCNN for 3D object detection from raw point cloud. The whole framework is composed of two stages: stage-1 for the bottom-up 3D proposal generation and stage-2 for refining proposals in the canonical coordinates to obtain the final detection results. Instead of generating proposals from RGB image or projecting point cloud to bird's view or voxels as previous methods do, our stage-1 sub-network directly generates a small number of high-quality 3D proposals from point cloud in a bottom-up manner via segmenting the point cloud of the whole scene into foreground points and background. The stage-2 sub-network transforms the pooled points of each proposal to canonical coordinates to learn better local spatial features, which is combined with global semantic features of each point learned in stage-1 for accurate box refinement and confidence prediction. Extensive experiments on the 3D detection benchmark of KITTI dataset show that our proposed architecture outperforms state-of-the-art methods with remarkable margins by using only point cloud as input. The code is available at https://github.com/sshaoshuai/PointRCNN.", "field": [], "task": ["3D Object Detection", "Object Detection", "Object Proposal Generation"], "method": [], "dataset": ["KITTI Cars Hard", "KITTI Cyclists Hard", "KITTI Cars Moderate", "KITTI Cyclists Moderate", "KITTI Cyclists Easy", "KITTI Cars Easy"], "metric": ["AP"], "title": "PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud"} {"abstract": "This paper introduces an extremely efficient CNN architecture named DFANet\nfor semantic segmentation under resource constraints. Our proposed network\nstarts from a single lightweight backbone and aggregates discriminative\nfeatures through sub-network and sub-stage cascade respectively. Based on the\nmulti-scale feature propagation, DFANet substantially reduces the number of\nparameters, but still obtains sufficient receptive field and enhances the model\nlearning ability, which strikes a balance between the speed and segmentation\nperformance. 
Experiments on Cityscapes and CamVid datasets demonstrate the\nsuperior performance of DFANet with 8$\\times$ less FLOPs and 2$\\times$ faster\nthan the existing state-of-the-art real-time semantic segmentation methods\nwhile providing comparable accuracy. Specifically, it achieves 70.3\\% Mean IOU\non the Cityscapes test dataset with only 1.7 GFLOPs and a speed of 160 FPS on\none NVIDIA Titan X card, and 71.3\\% Mean IOU with 3.4 GFLOPs while inferring on\na higher resolution image.", "field": [], "task": ["Real-Time Semantic Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["CamVid", "Cityscapes test"], "metric": ["Mean IoU (class)", "Mean IoU"], "title": "DFANet: Deep Feature Aggregation for Real-Time Semantic Segmentation"} {"abstract": "Visual Question Answering (VQA) is the task of answering questions about an image. Some VQA models often exploit unimodal biases to provide the correct answer without using the image information. As a result, they suffer from a huge drop in performance when evaluated on data outside their training set distribution. This critical issue makes them unsuitable for real-world settings. We propose RUBi, a new learning strategy to reduce biases in any VQA model. It reduces the importance of the most biased examples, i.e. examples that can be correctly classified without looking at the image. It implicitly forces the VQA model to use the two input modalities instead of relying on statistical regularities between the question and the answer. We leverage a question-only model that captures the language biases by identifying when these unwanted regularities are used. It prevents the base VQA model from learning them by influencing its predictions. This leads to dynamically adjusting the loss in order to compensate for biases. We validate our contributions by surpassing the current state-of-the-art results on VQA-CP v2. This dataset is specifically designed to assess the robustness of VQA models when exposed to different question biases at test time than what was seen during training. Our code is available: github.com/cdancette/rubi.bootstrap.pytorch", "field": [], "task": ["Question Answering", "Visual Question Answering"], "method": [], "dataset": ["VQA-CP", "VQA v2 test-dev"], "metric": ["Score", "Accuracy"], "title": "RUBi: Reducing Unimodal Biases in Visual Question Answering"} {"abstract": "This review introduces a novel deformable image registration paradigm that exploits Markov random field formulation and powerful discrete optimization algorithms. We express deformable registration as a minimal cost graph problem, where nodes correspond to the deformation grid, a node's connectivity corresponds to regularization constraints, and labels correspond to 3D deformations. To cope with both iconic and geometric (landmark-based) registration, we introduce two graphical models, one for each subproblem. The two graphs share interconnected variables, leading to a modular, powerful, and flexible formulation that can account for arbitrary image-matching criteria, various local deformation models, and regularization constraints. To cope with the corresponding optimization problem, we adopt two optimization strategies: a computationally efficient one and a tight relaxation alternative. Promising results demonstrate the potential of this approach. Discrete methods are an important new trend in medical image registration, as they provide several improvements over the more traditional continuous methods. 
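As a hedged PyTorch sketch of the question-only masking strategy described in the RUBi abstract above: the base VQA logits are modulated by a sigmoid of a question-only branch before the classification loss, so examples answerable from the question alone contribute less gradient. The exact fusion, gradient detachment, and loss weighting used by the authors may differ.

```python
import torch
import torch.nn.functional as F

def rubi_losses(vqa_logits, q_only_logits, answers):
    """Mask the base VQA logits with a sigmoid of the question-only logits so
    that strongly biased examples are down-weighted in the base model's loss."""
    masked_logits = vqa_logits * torch.sigmoid(q_only_logits)
    loss_vqa = F.cross_entropy(masked_logits, answers)   # trains the base model
    loss_q = F.cross_entropy(q_only_logits, answers)     # trains the bias branch
    return loss_vqa + loss_q

vqa_logits = torch.randn(4, 10, requires_grad=True)      # from the base VQA model
q_only_logits = torch.randn(4, 10, requires_grad=True)   # from a question-only head
answers = torch.randint(0, 10, (4,))
rubi_losses(vqa_logits, q_only_logits, answers).backward()
```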
This is illustrated with several key examples where the presented framework outperforms existing general-purpose registration methods in terms of both performance and computational complexity. Our methods become of particular interest in applications where computation time is a critical issue, as in intraoperative imaging, or where the huge variation in data demands complex and application-specific matching criteria, as in large-scale multimodal population studies. The proposed registration framework, along with a graphical interface and corresponding publications, is available for download for research purposes (for Windows and Linux platforms) from http://www.mrf-registration.net.", "field": [], "task": ["BIRL", "Deformable Medical Image Registration", "Image Registration", "Medical Image Registration"], "method": [], "dataset": ["CIMA-10k"], "metric": ["MMrTRE", "AMrTRE"], "title": "Deformable medical image registration: setting the state of the art with discrete methods"} {"abstract": "A challenge of skeleton-based action recognition is the difficulty of classifying actions with similar motions and object-related actions. Visual cues from other streams help in that regard. RGB data are sensitive to illumination conditions, thus unusable in the dark. To alleviate this issue and still benefit from a visual stream, we propose a modular network (FUSION) combining skeleton and infrared data. A 2D convolutional neural network (CNN) is used as a pose module to extract features from skeleton data. A 3D CNN is used as an infrared module to extract visual cues from videos. Both feature vectors are then concatenated and exploited conjointly using a multilayer perceptron (MLP). Skeleton data also condition the infrared videos, providing a crop around the performing subjects and thus virtually focusing the attention of the infrared module. Ablation studies show that using networks pre-trained on other large-scale datasets as our modules and data augmentation yield considerable improvements in action classification accuracy. The strong contribution of our cropping strategy is also demonstrated. We evaluate our method on the NTU RGB+D dataset, the largest dataset for human action recognition from depth cameras, and report state-of-the-art performance.", "field": [], "task": ["Action Classification", "Action Classification ", "Action Recognition", "Data Augmentation", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "Infrared and 3D skeleton feature fusion for RGB-D action recognition"} {"abstract": "This paper investigates the principles of embedding learning to tackle the challenging task of semi-supervised video object segmentation. Different from previous practices that only explore the embedding learning using pixels from foreground object(s), we consider that the background should be equally treated and thus propose the Collaborative video object segmentation by Foreground-Background Integration (CFBI) approach. Our CFBI implicitly imposes the feature embedding from the target foreground object and its corresponding background to be contrastive, promoting the segmentation results accordingly. With the feature embedding from both foreground and background, our CFBI performs the matching process between the reference and the predicted sequence from both pixel and instance levels, making CFBI robust to various object scales. 
We conduct extensive experiments on three popular benchmarks, i.e., DAVIS 2016, DAVIS 2017, and YouTube-VOS. Our CFBI achieves the performance (J&F) of 89.4%, 81.9%, and 81.4%, respectively, outperforming all the other state-of-the-art methods. Code: https://github.com/z-x-yang/CFBI.", "field": [], "task": ["Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation", "Youtube-VOS"], "method": [], "dataset": ["DAVIS 2017 (val)", "YouTube-VOS", "DAVIS 2017 (test-dev)", "DAVIS 2016"], "metric": ["Jaccard (Mean)", "Jaccard (Unseen)", "F-Measure (Seen)", "Jaccard (Seen)", "Overall", "F-measure (Mean)", "J&F", "F-Measure (Unseen)"], "title": "Collaborative Video Object Segmentation by Foreground-Background Integration"} {"abstract": "Toxic online content has become a major issue in today's world due to an\nexponential increase in the use of internet by people of different cultures and\neducational background. Differentiating hate speech and offensive language is a\nkey challenge in automatic detection of toxic text content. In this paper, we\npropose an approach to automatically classify tweets on Twitter into three\nclasses: hateful, offensive and clean. Using a Twitter dataset, we perform\nexperiments considering n-grams as features and passing their term\nfrequency-inverse document frequency (TFIDF) values to multiple machine\nlearning models. We perform comparative analysis of the models considering\nseveral values of n in n-grams and TFIDF normalization methods. After tuning\nthe model giving the best results, we achieve 95.6% accuracy upon evaluating it\non test data. We also create a module which serves as an intermediary between\nthe user and Twitter.", "field": [], "task": ["Hate Speech Detection"], "method": [], "dataset": ["Hate Speech and Offensive Language"], "metric": ["Accuracy"], "title": "Detecting Hate Speech and Offensive Language on Twitter using Machine Learning: An N-gram and TFIDF based Approach"} {"abstract": "Existing image-based activity understanding methods mainly adopt direct mapping, i.e. from image to activity concepts, which may encounter a performance bottleneck due to the huge gap. In light of this, we propose a new path: infer human part states first and then reason out the activities based on part-level semantics. Human Body Part States (PaSta) are fine-grained action semantic tokens, e.g. , which can compose the activities and help us step toward a human activity knowledge engine. To fully utilize the power of PaSta, we build a large-scale knowledge base PaStaNet, which contains 7M+ PaSta annotations. And two corresponding models are proposed: first, we design a model named Activity2Vec to extract PaSta features, which aim to be general representations for various activities. Second, we use a PaSta-based Reasoning method to infer activities. Promoted by PaStaNet, our method achieves significant improvements, e.g. 6.4 and 13.9 mAP on full and one-shot sets of HICO in supervised learning, and 3.2 and 4.2 mAP on V-COCO and image-based AVA in transfer learning. Code and data are available at http://hake-mvig.cn/.", "field": [], "task": ["Human-Object Interaction Detection", "Transfer Learning"], "method": [], "dataset": ["HICO-DET", "V-COCO", "HICO"], "metric": ["mAP", "MAP"], "title": "PaStaNet: Toward Human Activity Knowledge Engine"} {"abstract": "Most graph-network-based meta-learning approaches model instance-level relation of examples. 
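The n-gram/TF-IDF pipeline described in the hate-speech classification abstract above maps directly onto a few lines of scikit-learn; the choice of classifier, the n-gram range, and the toy corpus below are illustrative assumptions, since the abstract compares several models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus; toy labels: 0 = clean, 1 = offensive/hateful
texts = ["have a nice day", "you are awful", "go away", "what a lovely game"]
labels = [0, 1, 1, 0]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3), sublinear_tf=True),  # word n-grams, n = 1..3
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)
print(model.predict(["have a lovely day"]))
```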
We extend this idea further to explicitly model the distribution-level relation of one example to all other examples in a 1-vs-N manner. We propose a novel approach named distribution propagation graph network (DPGN) for few-shot learning. It conveys both the distribution-level relations and instance-level relations in each few-shot learning task. To combine the distribution-level relations and instance-level relations for all examples, we construct a dual complete graph network which consists of a point graph and a distribution graph with each node standing for an example. Equipped with dual graph architecture, DPGN propagates label information from labeled examples to unlabeled examples within several update generations. In extensive experiments on few-shot learning benchmarks, DPGN outperforms state-of-the-art results by a large margin in 5% $\\sim$ 12% under supervised setting and 7% $\\sim$ 13% under semi-supervised setting. Code will be released.", "field": [], "task": ["Few-Shot Learning", "Meta-Learning"], "method": [], "dataset": ["Mini-ImageNet - 1-Shot Learning"], "metric": ["Acc"], "title": "DPGN: Distribution Propagation Graph Network for Few-shot Learning"} {"abstract": "This paper is concerned with learning to solve tasks that require a chain of\ninterdependent steps of relational inference, like answering complex questions\nabout the relationships between objects, or solving puzzles where the smaller\nelements of a solution mutually constrain each other. We introduce the\nrecurrent relational network, a general purpose module that operates on a graph\nrepresentation of objects. As a generalization of Santoro et al. [2017]'s\nrelational network, it can augment any neural network model with the capacity\nto do many-step relational reasoning. We achieve state of the art results on\nthe bAbI textual question-answering dataset with the recurrent relational\nnetwork, consistently solving 20/20 tasks. As bAbI is not particularly\nchallenging from a relational reasoning point of view, we introduce\nPretty-CLEVR, a new diagnostic dataset for relational reasoning. In the\nPretty-CLEVR set-up, we can vary the question to control for the number of\nrelational reasoning steps that are required to obtain the answer. Using\nPretty-CLEVR, we probe the limitations of multi-layer perceptrons, relational\nand recurrent relational networks. Finally, we show how recurrent relational\nnetworks can learn to solve Sudoku puzzles from supervised training data, a\nchallenging task requiring upwards of 64 steps of relational reasoning. We\nachieve state-of-the-art results amongst comparable methods by solving 96.6% of\nthe hardest Sudoku puzzles.", "field": [], "task": ["Question Answering", "Relational Reasoning"], "method": [], "dataset": ["bAbi"], "metric": ["Mean Error Rate"], "title": "Recurrent Relational Networks"} {"abstract": "Though impressive results have been achieved in visual captioning, the task\nof generating abstract stories from photo streams is still a little-tapped\nproblem. Different from captions, stories have more expressive language styles\nand contain many imaginary concepts that do not appear in the images. Thus it\nposes challenges to behavioral cloning algorithms. Furthermore, due to the\nlimitations of automatic metrics on evaluating story quality, reinforcement\nlearning methods with hand-crafted rewards also face difficulties in gaining an\noverall performance boost. 
Therefore, we propose an Adversarial REward Learning\n(AREL) framework to learn an implicit reward function from human\ndemonstrations, and then optimize policy search with the learned reward\nfunction. Though automatic evaluation indicates a slight performance boost over\nstate-of-the-art (SOTA) methods in cloning expert behaviors, human evaluation\nshows that our approach achieves significant improvement in generating more\nhuman-like stories than SOTA systems.", "field": [], "task": ["Image Captioning", "Visual Storytelling"], "method": [], "dataset": ["VIST"], "metric": ["BLEU-2", "METEOR", "BLEU-1", "CIDEr", "BLEU-3", "BLEU-4", "ROUGE"], "title": "No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling"} {"abstract": "We propose a stochastic answer network (SAN) to explore multi-step inference\nstrategies in Natural Language Inference. Rather than directly predicting the\nresults given the inputs, the model maintains a state and iteratively refines\nits predictions. Our experiments show that SAN achieves the state-of-the-art\nresults on three benchmarks: Stanford Natural Language Inference (SNLI)\ndataset, Multi-Genre Natural Language Inference (MultiNLI) dataset and Quora\nQuestion Pairs dataset.", "field": [], "task": ["Natural Language Inference"], "method": [], "dataset": ["SNLI"], "metric": ["Parameters", "% Train Accuracy", "% Test Accuracy"], "title": "Stochastic Answer Networks for Natural Language Inference"} {"abstract": "Human shape estimation is an important task for video editing, animation and the\nfashion industry. Predicting 3D human body shape from natural images, however,\nis highly challenging due to factors such as variation in human bodies,\nclothing and viewpoint. Prior methods addressing this problem typically attempt\nto fit parametric body models with certain priors on pose and shape. In this\nwork we argue for an alternative representation and propose BodyNet, a neural\nnetwork for direct inference of volumetric body shape from a single image.\nBodyNet is an end-to-end trainable network that benefits from (i) a volumetric\n3D loss, (ii) a multi-view re-projection loss, and (iii) intermediate\nsupervision of 2D pose, 2D body part segmentation, and 3D pose. Each of them\nresults in performance improvement as demonstrated by our experiments. To\nevaluate the method, we fit the SMPL model to our network output and show\nstate-of-the-art results on the SURREAL and Unite the People datasets,\noutperforming recent approaches. Besides achieving state-of-the-art\nperformance, our method also enables volumetric body-part segmentation.", "field": [], "task": ["3D Human Pose Estimation"], "method": [], "dataset": ["Surreal"], "metric": ["MPJPE"], "title": "BodyNet: Volumetric Inference of 3D Human Body Shapes"} {"abstract": "We present a novel end-to-end neural model to extract entities and relations\nbetween them. Our recurrent neural network based model captures both word\nsequence and dependency tree substructure information by stacking bidirectional\ntree-structured LSTM-RNNs on bidirectional sequential LSTM-RNNs. This allows\nour model to jointly represent both entities and relations with shared\nparameters in a single model. We further encourage detection of entities during\ntraining and use of entity information in relation extraction via entity\npretraining and scheduled sampling. 
Our model improves over the\nstate-of-the-art feature-based model on end-to-end relation extraction,\nachieving 12.1% and 5.7% relative error reductions in F1-score on ACE2005 and\nACE2004, respectively. We also show that our LSTM-RNN based model compares\nfavorably to the state-of-the-art CNN based model (in F1-score) on nominal\nrelation classification (SemEval-2010 Task 8). Finally, we present an extensive\nablation analysis of several model components.", "field": [], "task": ["Relation Classification", "Relation Extraction"], "method": [], "dataset": ["ACE 2005", "ACE 2004"], "metric": ["Sentence Encoder", "NER Micro F1", "RE+ Micro F1"], "title": "End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures"} {"abstract": "The rise of neural networks, and particularly recurrent neural networks, has\nproduced significant advances in part-of-speech tagging accuracy. One\ncharacteristic common among these models is the presence of rich initial word\nencodings. These encodings typically are composed of a recurrent\ncharacter-based representation with learned and pre-trained word embeddings.\nHowever, these encodings do not consider a context wider than a single word and\nit is only through subsequent recurrent layers that word or sub-word\ninformation interacts. In this paper, we investigate models that use recurrent\nneural networks with sentence-level context for initial character and\nword-based representations. In particular we show that optimal results are\nobtained by integrating these context sensitive representations through\nsynchronized training with a meta-model that learns to combine their states. We\npresent results on part-of-speech and morphological tagging with\nstate-of-the-art performance on a number of languages.", "field": [], "task": ["Morphological Tagging", "Part-Of-Speech Tagging", "Word Embeddings"], "method": [], "dataset": ["Penn Treebank"], "metric": ["Accuracy"], "title": "Morphosyntactic Tagging with a Meta-BiLSTM Model over Context Sensitive Token Encodings"} {"abstract": "Deep neural networks reach state-of-the-art performance for wide range of natural language processing, computer vision and speech applications. Yet, one of the biggest challenges is running these complex networks on devices such as mobile phones or smart watches with tiny memory footprint and low computational capacity. We propose on-device Self-Governing Neural Networks (SGNNs), which learn compact projection vectors with local sensitive hashing. The key advantage of SGNNs over existing work is that they surmount the need for pre-trained word embeddings and complex networks with huge parameters. We conduct extensive evaluation on dialog act classification and show significant improvement over state-of-the-art results. Our findings show that SGNNs are effective at capturing low-dimensional semantic text representations, while maintaining high accuracy.", "field": [], "task": ["Dialog Act Classification", "Dialogue Act Classification", "Text Classification", "Word Embeddings"], "method": [], "dataset": ["Switchboard corpus", "ICSI Meeting Recorder Dialog Act (MRDA) corpus"], "metric": ["Accuracy"], "title": "Self-Governing Neural Networks for On-Device Short Text Classification"} {"abstract": "Person re-identification (re-ID) poses unique challenges for unsupervised domain adaptation (UDA) in that classes in the source and target sets (domains) are entirely different and that image variations are largely caused by cameras. 
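As a loose illustration of the hashing-based projection idea behind the Self-Governing Neural Networks (SGNN) abstract above, the numpy sketch below hashes token uni- and bigrams into a sparse count vector and binarizes random projections of it, avoiding any embedding table; the bucket and bit sizes, and the use of Python's built-in hash, are assumptions for demonstration only.

```python
import numpy as np

def projection_features(text, num_buckets=1024, num_bits=128, seed=0):
    """Map text to a fixed-size binary vector without any embedding table:
    hash token uni- and bigrams into a sparse count vector, then take the
    signs of random projections (an LSH-style binarization)."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(num_buckets)
    tokens = text.lower().split()
    for feat in tokens + [" ".join(p) for p in zip(tokens, tokens[1:])]:
        counts[hash(feat) % num_buckets] += 1.0   # built-in hash: fine for a sketch
    planes = rng.standard_normal((num_bits, num_buckets))  # random hyperplanes
    return (planes @ counts > 0).astype(np.float32)        # compact 0/1 vector

x = projection_features("what time is my meeting tomorrow")
print(x.shape, x[:8])
```

A small feed-forward classifier on top of such vectors would then play the role of the on-device model.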
Given a labeled source training set and an unlabeled target training set, we aim to improve the generalization ability of re-ID models on the target testing set. To this end, we introduce a Hetero-Homogeneous Learning (HHL) method. Our method enforces two properties simultaneously: 1) camera invariance, learned via positive pairs formed by unlabeled target images and their camera style transferred counterparts; 2) domain connectedness, by regarding source / target images as negative matching pairs to the target / source images. The first property is implemented by homogeneous learning because training pairs are collected from the same domain. The second property is achieved by heterogeneous learning because we sample training pairs from both the source and target domains. On Market-1501, DukeMTMC-reID and CUHK03, we show that the two properties contribute indispensably and that very competitive re-ID UDA accuracy is achieved. Code is available at: https://github.com/zhunzhong07/HHL", "field": [], "task": ["Domain Adaptation", "Person Re-Identification", "Person Retrieval", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["Duke to Market", "Market to Duke"], "metric": ["rank-10", "mAP", "rank-5", "rank-1"], "title": "Generalizing A Person Retrieval Model Hetero- and Homogeneously"} {"abstract": "Intent detection and slot filling are two main tasks for building a spoken\nlanguage understanding(SLU) system. Multiple deep learning based models have\ndemonstrated good results on these tasks . The most effective algorithms are\nbased on the structures of sequence to sequence models (or \"encoder-decoder\"\nmodels), and generate the intents and semantic tags either using separate\nmodels or a joint model. Most of the previous studies, however, either treat\nthe intent detection and slot filling as two separate parallel tasks, or use a\nsequence to sequence model to generate both semantic tags and intent. Most of\nthese approaches use one (joint) NN based model (including encoder-decoder\nstructure) to model two tasks, hence may not fully take advantage of the\ncross-impact between them. In this paper, new Bi-model based RNN semantic frame\nparsing network structures are designed to perform the intent detection and\nslot filling tasks jointly, by considering their cross-impact to each other\nusing two correlated bidirectional LSTMs (BLSTM). Our Bi-model structure with a\ndecoder achieves state-of-the-art result on the benchmark ATIS data, with about\n0.5$\\%$ intent accuracy improvement and 0.9 $\\%$ slot filling improvement.", "field": [], "task": ["Intent Detection", "Slot Filling", "Spoken Language Understanding"], "method": [], "dataset": ["ATIS"], "metric": ["F1", "Accuracy"], "title": "A Bi-model based RNN Semantic Frame Parsing Model for Intent Detection and Slot Filling"} {"abstract": "This paper introduces WILDCAT, a deep learning method which jointly aims at aligning image regions for gaining spatial invariance and learning strongly localized features. Our model is trained using only global image labels and is devoted to three main visual recognition tasks: image classification, weakly supervised object localization and semantic segmentation. 
WILDCAT extends state-of-the-art Convolutional Neural Networks at three main levels: the use of Fully Convolutional Networks for maintaining spatial resolution, the explicit design in the network of local features related to different class modalities, and a new way to pool these features to provide a global image prediction required for weakly supervised training. Extensive experiments show that our model significantly outperforms state-of-the-art methods.", "field": [], "task": ["Image Classification", "Object Localization", "Semantic Segmentation", "Weakly Supervised Object Detection", "Weakly-Supervised Object Localization"], "method": [], "dataset": ["COCO"], "metric": ["MAP"], "title": "WILDCAT: Weakly Supervised Learning of Deep ConvNets for Image Classification, Pointwise Localization and Segmentation"} {"abstract": "Saliency detection aims to highlight the most relevant objects in an image. Methods using conventional models struggle whenever salient objects are pictured on top of a cluttered background while deep neural nets suffer from excess complexity and slow evaluation speeds. In this paper, we propose a simplified convolutional neural network which combines local and global information through a multi-resolution 4x5 grid structure. Instead of enforcing spatial coherence with a CRF or superpixels as is usually the case, we implemented a loss function inspired by the Mumford-Shah functional which penalizes errors on the boundary. We trained our model on the MSRA-B dataset, and tested it on six different saliency benchmark datasets. Results show that our method is on par with the state-of-the-art while reducing computation time by a factor of 18 to 100, enabling near real-time, high performance saliency detection.", "field": [], "task": ["Object Detection", "RGB Salient Object Detection", "Saliency Detection", "Salient Object Detection"], "method": [], "dataset": ["UCF", "SOC", "SBU", "DUTS-TE", "ISTD"], "metric": ["S-Measure", "Average MAE", "mean E-Measure", "MAE", "F-measure", "Balanced Error Rate"], "title": "Non-Local Deep Features for Salient Object Detection"} {"abstract": "Missing data is a ubiquitous problem. It is especially challenging in medical\nsettings because many streams of measurements are collected at different - and\noften irregular - times. Accurate estimation of those missing measurements is\ncritical for many reasons, including diagnosis, prognosis and treatment.\nExisting methods address this estimation problem by interpolating within data\nstreams or imputing across data streams (both of which ignore important\ninformation) or ignoring the temporal aspect of the data and imposing strong\nassumptions about the nature of the data-generating process and/or the pattern\nof missing data (both of which are especially problematic for medical data). We\npropose a new approach, based on a novel deep learning architecture that we\ncall a Multi-directional Recurrent Neural Network (M-RNN) that interpolates\nwithin data streams and imputes across data streams. We demonstrate the power\nof our approach by applying it to five real-world medical datasets. 
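One way to read the "new way to pool these features" mentioned in the WILDCAT abstract above is a class-wise spatial pooling that mixes the strongest positive evidence with down-weighted negative evidence; the PyTorch sketch below implements such a k-max plus k-min pooling, with k and the negative-evidence weight alpha as assumed hyper-parameters rather than the paper's settings.

```python
import torch

def kmax_kmin_pool(class_maps, k=3, alpha=0.7):
    """class_maps: (batch, num_classes, H, W) per-class activation maps.
    Returns (batch, num_classes) image-level scores combining the k highest
    and k lowest spatial activations of each class map."""
    b, c, h, w = class_maps.shape
    flat = class_maps.view(b, c, h * w)
    top = flat.topk(k, dim=2).values.mean(dim=2)                    # k max
    bottom = flat.topk(k, dim=2, largest=False).values.mean(dim=2)  # k min
    return top + alpha * bottom

scores = kmax_kmin_pool(torch.randn(2, 5, 7, 7))
print(scores.shape)  # torch.Size([2, 5]); fed to a multi-label classification loss
```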
We show\nthat it provides dramatically improved estimation of missing measurements in\ncomparison to 11 state-of-the-art benchmarks (including Spline and Cubic\nInterpolations, MICE, MissForest, matrix completion and several RNN methods);\ntypical improvements in Root Mean Square Error are between 35% - 50%.\nAdditional experiments based on the same five datasets demonstrate that the\nimprovements provided by our method are extremely robust.", "field": [], "task": ["Matrix Completion", "Multivariate Time Series Imputation"], "method": [], "dataset": ["Beijing Air Quality", "UCI localization data", "PhysioNet Challenge 2012"], "metric": ["MAE (PM2.5)", "MAE (10% of data as GT)", "MAE (10% missing)"], "title": "Estimating Missing Data in Temporal Data Streams Using Multi-directional Recurrent Neural Networks"} {"abstract": "Identifying the intent of a citation in scientific papers (e.g., background information, use of methods, comparing results) is critical for machine reading of individual publications and automated analysis of the scientific literature. We propose structural scaffolds, a multitask model to incorporate structural information of scientific papers into citations for effective classification of citation intents. Our model achieves a new state-of-the-art on an existing ACL anthology dataset (ACL-ARC) with a 13.3% absolute increase in F1 score, without relying on external linguistic resources or hand-engineered features as done in existing methods. In addition, we introduce a new dataset of citation intents (SciCite) which is more than five times larger and covers multiple scientific domains compared with existing datasets. Our code and data are available at: https://github.com/allenai/scicite.", "field": [], "task": ["Citation Intent Classification", "Intent Classification", "Reading Comprehension", "Sentence Classification"], "method": [], "dataset": ["SciCite", "ACL-ARC"], "metric": ["F1"], "title": "Structural Scaffolds for Citation Intent Classification in Scientific Publications"} {"abstract": "The recent proliferation of knowledge graphs (KGs) coupled with incomplete or partial information, in the form of missing relations (links) between entities, has fueled a lot of research on knowledge base completion (also known as relation prediction). Several recent works suggest that convolutional neural network (CNN) based models generate richer and more expressive feature embeddings and hence also perform well on relation prediction. However, we observe that these KG embeddings treat triples independently and thus fail to cover the complex and hidden information that is inherently implicit in the local neighborhood surrounding a triple. To this effect, our paper proposes a novel attention based feature embedding that captures both entity and relation features in any given entity's neighborhood. Additionally, we also encapsulate relation clusters and multihop relations in our model. 
Our empirical study offers insights into the efficacy of our attention-based model and we show marked performance gains in comparison to state-of-the-art methods on all datasets.", "field": [], "task": ["Knowledge Base Completion", "Knowledge Graph Completion", "Knowledge Graph Embeddings", "Knowledge Graphs", "Link Prediction"], "method": [], "dataset": ["WN18RR", "FB15k-237"], "metric": ["Hits@3", "Appropriate Evaluation Protocols", "Hits@1", "MR", "MRR", "Hits@10"], "title": "Learning Attention-based Embeddings for Relation Prediction in Knowledge Graphs"} {"abstract": "Machine learning and deep learning have gained popularity and achieved immense success in drug discovery in recent decades. Historically, machine learning and deep learning models were trained on either structural data or chemical properties using separate models. In this study, we propose an architecture trained simultaneously on both types of data in order to improve the overall performance. Given the molecular structures in the form of SMILES notation and their labels, we generated the SMILES-based feature matrix and molecular descriptors. These data were used to train a deep learning model that also integrates an attention mechanism to facilitate training and interpretation. Experiments showed that our model improves prediction performance compared to the reference. With a maximum MCC of 0.58 and an AUC of 90% under cross-validation on the EGFR inhibitors dataset, our architecture outperforms the reference model. We also successfully integrated the attention mechanism into our model, which helps to interpret the contribution of chemical structures to bioactivity.", "field": [], "task": ["Activity Prediction", "Drug Discovery"], "method": [], "dataset": ["egfr-inh"], "metric": ["AUC"], "title": "Attention-based Multi-Input Deep Learning Architecture for Biological Activity Prediction: An Application in EGFR Inhibitors"} {"abstract": "This paper studies graph-based recommendation, where an interaction graph is constructed from historical records and is leveraged to alleviate data sparsity and cold start problems. We reveal an early summarization problem in existing graph-based models, and propose the Neighborhood Interaction (NI) model to capture each neighbor pair (between user-side and item-side) distinctively. The NI model is more expressive and can capture more complicated structural patterns behind user-item interactions. To further enrich node connectivity and utilize high-order structural information, we incorporate extra knowledge graphs (KGs) and adopt graph neural networks (GNNs) in NI, called Knowledge-enhanced Neighborhood Interaction (KNI). Compared with the state-of-the-art recommendation methods, e.g., feature-based, meta path-based, and KG-based models, our KNI achieves superior performance in click-through rate prediction (1.1%-8.4% absolute AUC improvements) and outperforms by a wide margin in top-N recommendation on 4 real-world datasets.", "field": [], "task": ["Click-Through Rate Prediction", "Knowledge Graphs"], "method": [], "dataset": ["MovieLens 1M", "MovieLens 20M"], "metric": ["AUC"], "title": "An End-to-End Neighborhood-based Interaction Model for Knowledge-enhanced Recommendation"} {"abstract": "Facial landmark detection, or face alignment, is a fundamental task that has been extensively studied. In this paper, we investigate a new perspective of facial landmark detection and demonstrate that it leads to further notable improvements. 
Given that any face image can be factored into a space of style that captures lighting, texture and image environment, and a style-invariant structure space, our key idea is to leverage the disentangled style and shape spaces of each individual to augment existing structures via style translation. With these augmented synthetic samples, our semi-supervised model surprisingly outperforms the fully-supervised one by a large margin. Extensive experiments verify the effectiveness of our idea with state-of-the-art results on the WFLW, 300W, COFW, and AFLW datasets. Our proposed structure is general and could be assembled into any face alignment framework. The code is made publicly available at https://github.com/thesouthfrog/stylealign.", "field": [], "task": ["Face Alignment", "Facial Landmark Detection"], "method": [], "dataset": ["WFLW"], "metric": ["ME (%, all) ", "FR@0.1(%, all)", "AUC@0.1 (all)"], "title": "Aggregation via Separation: Boosting Facial Landmark Detector with Semi-Supervised Style Translation"} {"abstract": "Equivariance to random image transformations is an effective method to learn landmarks of object categories, such as the eyes and the nose in faces, without manual supervision. However, this method does not explicitly guarantee that the learned landmarks are consistent with changes between different instances of the same object, such as different facial identities. In this paper, we develop a new perspective on the equivariance approach by noting that dense landmark detectors can be interpreted as local image descriptors equipped with invariance to intra-category variations. We then propose a direct method to enforce such an invariance in the standard equivariant loss. We do so by exchanging descriptor vectors between images of different object instances prior to matching them geometrically. In this manner, the same vectors must work regardless of the specific object identity considered. We use this approach to learn vectors that can simultaneously be interpreted as local descriptors and dense landmarks, combining the advantages of both. Experiments on standard benchmarks show that this approach can match, and in some cases surpass, state-of-the-art performance amongst existing methods that learn landmarks without supervision. Code is available at www.robots.ox.ac.uk/~vgg/research/DVE/.", "field": [], "task": ["Facial Landmark Detection", "Unsupervised Facial Landmark Detection"], "method": [], "dataset": ["AFLW (Zhang CVPR 2018 crops)", "AFLW-MTFL", "MAFL", "300W"], "metric": ["NME"], "title": "Unsupervised Learning of Landmarks by Descriptor Vector Exchange"} {"abstract": "Deep Convolutional Neural Networks (DCNNs) have become the most widely used solution for most computer vision related tasks, and one of the most important application scenarios is face verification. Due to their high accuracy, deep face verification models whose inference stage runs on a cloud platform over the internet play the key role in most practical scenarios. However, two critical issues exist: first, individual privacy may not be well protected, since users have to upload their personal photos and other private information to the online cloud backend. Secondly, both the training and inference stages are time-consuming, and the latency may affect customer experience, especially when the internet link is unstable, in remote areas where mobile reception is poor, or in cities where buildings and other construction may block mobile signals. 
Therefore, designing lightweight networks with low memory requirement and computational cost is one of the most practical solutions for face verification on mobile platforms. In this paper, a novel mobile network named SeesawFaceNets, a simple but effective model, is proposed for efficiently deploying face recognition on mobile devices. Extensive experimental results show that our proposed model SeesawFaceNets outperforms the baseline MobilefaceNets, with only {\\bf66\\%} (146M vs 221M MAdds) of the computational cost, a smaller batch size and fewer training steps, and SeesawFaceNets achieves comparable performance with other SOTA models, e.g. MobiFace, with only {\\bf54.2\\%} (1.3M vs 2.4M) of the parameters and {\\bf31.6\\%} (146M vs 462M MAdds) of the computational cost. It is also competitive against large-scale deep-network face recognition on all 5 listed public validation datasets, with {\\bf6.5\\%} (4.2M vs 65M) of the parameters and {\\bf4.35\\%} (526M vs 12G MAdds) of the computational cost.", "field": [], "task": ["Face Recognition", "Face Verification"], "method": [], "dataset": ["CFP-FP", "AgeDB-30", "Labeled Faces in the Wild"], "metric": ["Accuracy"], "title": "SeesawFaceNets: sparse and robust face verification model for mobile platform"} {"abstract": "Single view depth estimation models can be trained from video footage using a self-supervised end-to-end approach with view synthesis as the supervisory signal. This is achieved with a framework that predicts depth and camera motion, with a loss based on reconstructing a target video frame from temporally adjacent frames. In this context, occlusion relates to parts of a scene that can be observed in the target frame but not in a frame used for image reconstruction. Since the image reconstruction is based on sampling from the adjacent frame, and occluded areas by definition cannot be sampled, reconstructed occluded areas corrupt the supervisory signal. In previous work (arXiv:1806.01260), occlusion is handled based on reconstruction error; at each pixel location, only the reconstruction with the lowest error is included in the loss. The current study aims to determine whether performance improvements of depth estimation models can be gained by ignoring, during training, only those regions that are affected by occlusion. In this work we introduce the occlusion mask, a mask that can be used during training to specifically ignore regions that cannot be reconstructed due to occlusions. The occlusion mask is based entirely on predicted depth information. We introduce two novel loss formulations which incorporate the occlusion mask. The method and implementation of arXiv:1806.01260 serves as the foundation for our modifications as well as the baseline in our experiments. We demonstrate that (i) incorporating the occlusion mask in the loss function improves the performance of single image depth prediction models on the KITTI benchmark, and (ii) loss functions that select from reconstructions based on error are able to ignore some of the reprojection error caused by object motion.", "field": [], "task": ["Depth And Camera Motion", "Depth Estimation", "Image Reconstruction", "Monocular Depth Estimation"], "method": [], "dataset": ["KITTI Eigen split unsupervised"], "metric": ["absolute relative error"], "title": "Improving Self-Supervised Single View Depth Estimation by Masking Occlusion"} {"abstract": "This work focuses on sentence-level aspect-based sentiment\nanalysis for restaurant reviews. A two-stage sentiment analysis algorithm\nis proposed. 
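The two ingredients discussed in the self-supervised depth abstract above (the baseline's per-pixel minimum-reprojection selection, and a binary mask that drops occluded pixels from the loss) can be sketched as follows in PyTorch; here the mask is passed in as a placeholder argument rather than constructed from predicted depth as in the paper.

```python
import torch

def masked_min_reprojection_loss(reproj_errors, occlusion_mask=None):
    """reproj_errors: list of (B, 1, H, W) photometric error maps, one per
    source frame. Per pixel, keep only the lowest error (baseline handling),
    then optionally zero out pixels flagged as occluded before averaging."""
    errors = torch.stack(reproj_errors, dim=0)      # (num_sources, B, 1, H, W)
    min_error, _ = errors.min(dim=0)                # per-pixel best reconstruction
    if occlusion_mask is not None:                  # 1 = keep, 0 = ignore
        keep = occlusion_mask.float()
        return (min_error * keep).sum() / keep.sum().clamp(min=1.0)
    return min_error.mean()

e1, e2 = torch.rand(2, 1, 4, 4), torch.rand(2, 1, 4, 4)
mask = torch.rand(2, 1, 4, 4) > 0.2                 # hypothetical occlusion mask
print(masked_min_reprojection_loss([e1, e2], mask))
```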
In this method, first a lexicalized domain ontology is used to\npredict the sentiment, and as a back-up algorithm a neural network with\na rotatory attention mechanism (LCR-Rot) is utilized. Furthermore, two\nextensions are added to the back-up algorithm. The first extension changes\nthe order in which the rotatory attention mechanism operates (LCR-Rot-inv). The second extension runs over the rotatory attention mechanism for multiple iterations (LCR-Rot-hop). Using the SemEval-2015\nand SemEval-2016 data, we conclude that the two-stage method outperforms the baseline methods, albeit by a small margin. Moreover,\nwe find that the method that iterates multiple times over the rotatory\nattention mechanism has the best performance.", "field": [], "task": ["Aspect-Based Sentiment Analysis", "Sentiment Analysis"], "method": [], "dataset": ["SemEval-2016 Task 5 Subtask 1", " SemEval 2015 Task 12"], "metric": ["Restaurant (Acc)"], "title": "A Hybrid Approach for Aspect-Based Sentiment Analysis Using a Lexicalized Domain Ontology and Attentional Neural Models"} {"abstract": "Graph Neural Networks (GNN) have been shown to work effectively for modeling graph structured data to solve tasks such as node classification, link prediction and graph classification. There has been some recent progress in defining the notion of pooling in graphs whereby the model tries to generate a graph level representation by downsampling and summarizing the information present in the nodes. Existing pooling methods either fail to effectively capture the graph substructure or do not easily scale to large graphs. In this work, we propose ASAP (Adaptive Structure Aware Pooling), a sparse and differentiable pooling method that addresses the limitations of previous graph pooling architectures. ASAP utilizes a novel self-attention network along with a modified GNN formulation to capture the importance of each node in a given graph. It also learns a sparse soft cluster assignment for nodes at each layer to effectively pool the subgraphs to form the pooled graph. Through extensive experiments on multiple datasets and theoretical analysis, we motivate our choice of the components used in ASAP. Our experimental results show that combining existing GNN architectures with ASAP leads to state-of-the-art results on multiple graph classification benchmarks. ASAP has an average improvement of 4%, compared to the current sparse hierarchical state-of-the-art method.", "field": [], "task": ["Graph Classification", "Link Prediction", "Node Classification"], "method": [], "dataset": ["NCI109", "PROTEINS", "D&D", "NCI1", "FRANKENSTEIN"], "metric": ["Accuracy"], "title": "ASAP: Adaptive Structure Aware Pooling for Learning Hierarchical Graph Representations"} {"abstract": "We present Siam R-CNN, a Siamese re-detection architecture which unleashes the full power of two-stage object detection approaches for visual object tracking. We combine this with a novel tracklet-based dynamic programming algorithm, which takes advantage of re-detections of both the first-frame template and previous-frame predictions, to model the full history of both the object to be tracked and potential distractor objects. This enables our approach to make better tracking decisions, as well as to re-detect tracked objects after long occlusion. Finally, we propose a novel hard example mining strategy to improve Siam R-CNN's robustness to similar looking objects. 
Siam R-CNN achieves the current best performance on ten tracking benchmarks, with especially strong results for long-term tracking. We make our code and models available at www.vision.rwth-aachen.de/page/siamrcnn.", "field": [], "task": ["Object Detection", "Object Tracking", "Semi-Supervised Video Object Segmentation", "Visual Object Tracking", "Visual Tracking"], "method": [], "dataset": ["DAVIS 2017 (val)", "DAVIS 2017 (test-dev)", "DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "Siam R-CNN: Visual Tracking by Re-Detection"} {"abstract": "We propose a simple, interpretable framework for solving a wide range of\nimage reconstruction problems such as denoising and deconvolution. Given a\ncorrupted input image, the model synthesizes a spatially varying linear filter\nwhich, when applied to the input image, reconstructs the desired output. The\nmodel parameters are learned using supervised or self-supervised training. We\ntest this model on three tasks: non-uniform motion blur removal,\nlossy-compression artifact reduction and single image super resolution. We\ndemonstrate that our model substantially outperforms state-of-the-art methods\non all these tasks and is significantly faster than optimization-based\napproaches to deconvolution. Unlike models that directly predict output pixel\nvalues, the predicted filter flow is controllable and interpretable, which we\ndemonstrate by visualizing the space of predicted filters for different tasks.", "field": [], "task": ["Deblurring", "Denoising", "Image Reconstruction", "Image Super-Resolution", "Lossy-Compression Artifact Reduction", "Super-Resolution"], "method": [], "dataset": ["Set5 - 4x upscaling", "Set14 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Image Reconstruction with Predictive Filter Flow"} {"abstract": "Training accurate 3D human pose estimators requires large amount of 3D\nground-truth data which is costly to collect. Various weakly or self supervised\npose estimation methods have been proposed due to lack of 3D data.\nNevertheless, these methods, in addition to 2D ground-truth poses, require\neither additional supervision in various forms (e.g. unpaired 3D ground truth\ndata, a small subset of labels) or the camera parameters in multiview settings.\nTo address these problems, we present EpipolarPose, a self-supervised learning\nmethod for 3D human pose estimation, which does not need any 3D ground-truth\ndata or camera extrinsics. During training, EpipolarPose estimates 2D poses\nfrom multi-view images, and then, utilizes epipolar geometry to obtain a 3D\npose and camera geometry which are subsequently used to train a 3D pose\nestimator. We demonstrate the effectiveness of our approach on standard\nbenchmark datasets i.e. Human3.6M and MPI-INF-3DHP where we set the new\nstate-of-the-art among weakly/self-supervised methods. Furthermore, we propose\na new performance measure Pose Structure Score (PSS) which is a scale\ninvariant, structure aware measure to evaluate the structural plausibility of a\npose with respect to its ground truth. 
Code and pretrained models are available\nat https://github.com/mkocabas/EpipolarPose", "field": [], "task": ["3D Human Pose Estimation", "Pose Estimation", "Self-Supervised Learning"], "method": [], "dataset": ["Human3.6M"], "metric": ["Average MPJPE (mm)"], "title": "Self-Supervised Learning of 3D Human Pose using Multi-view Geometry"} {"abstract": "We consider a family of problems that are concerned about making predictions for the majority of unlabeled, graph-structured data samples based on a small proportion of labeled samples. Relational information among the data samples, often encoded in the graph/network structure, is shown to be helpful for these semi-supervised learning tasks. However, conventional graph-based regularization methods and recent graph neural networks do not fully leverage the interrelations between the features, the graph, and the labels. In this work, we propose a flexible generative framework for graph-based semi-supervised learning, which approaches the joint distribution of the node features, labels, and the graph structure. Borrowing insights from random graph models in network science literature, this joint distribution can be instantiated using various distribution families. For the inference of missing labels, we exploit recent advances of scalable variational inference techniques to approximate the Bayesian posterior. We conduct thorough experiments on benchmark datasets for graph-based semi-supervised learning. Results show that the proposed methods outperform the state-of-the-art models in most settings.", "field": [], "task": ["Variational Inference"], "method": [], "dataset": ["Cora", "Citeseer", "Cora with Public Split: fixed 20 nodes per class", "Pubmed", "CiteSeer with Public Split: fixed 20 nodes per class", "PubMed with Public Split: fixed 20 nodes per class"], "metric": ["Validation", "Training Split", "Accuracy"], "title": "A Flexible Generative Framework for Graph-based Semi-supervised Learning"} {"abstract": "Unsupervised domain adaptation (UDA) for person re-identification is challenging because of the huge gap between the source and target domain. A typical self-training method is to use pseudo-labels generated by clustering algorithms to iteratively optimize the model on the target domain. However, a drawback to this is that noisy pseudo-labels generally cause trouble in learning. To address this problem, a mutual learning method by dual networks has been developed to produce reliable soft labels. However, as the two neural networks gradually converge, their complementarity is weakened and they likely become biased towards the same kind of noise. This paper proposes a novel light-weight module, the Attentive WaveBlock (AWB), which can be integrated into the dual networks of mutual learning to enhance the complementarity and further depress noise in the pseudo-labels. Specifically, we first introduce a parameter-free module, the WaveBlock, which creates a difference between features learned by two networks by waving blocks of feature maps differently. Then, an attention mechanism is leveraged to enlarge the difference created and discover more complementary features. Furthermore, two kinds of combination strategies, i.e. pre-attention and post-attention, are explored. Experiments demonstrate that the proposed method achieves state-of-the-art performance with significant improvements on multiple UDA person re-identification tasks. 
We also prove the generality of the proposed method by applying it to vehicle re-identification and image classification tasks. Our codes and models are available at https://github.com/WangWenhao0716/Attentive-WaveBlock.", "field": [], "task": ["Domain Adaptation", "Image Classification", "Person Re-Identification", "Unsupervised Domain Adaptation", "Vehicle Re-Identification"], "method": [], "dataset": ["Duke to Market", "Duke to MSMT", "Market to Duke", "Market to MSMT"], "metric": ["rank-10", "mAP", "rank-5", "rank-1"], "title": "Attentive WaveBlock: Complementarity-enhanced Mutual Networks for Unsupervised Domain Adaptation in Person Re-identification and Beyond"} {"abstract": "Sentiment Analysis and Emotion Detection in conversation is key in several real-world applications, with an increase in modalities available aiding a better understanding of the underlying emotions. Multi-modal Emotion Detection and Sentiment Analysis can be particularly useful, as applications will be able to use specific subsets of available modalities, as per the available data. Current systems dealing with Multi-modal functionality fail to leverage and capture - the context of the conversation through all modalities, the dependency between the listener(s) and speaker emotional states, and the relevance and relationship between the available modalities. In this paper, we propose an end to end RNN architecture that attempts to take into account all the mentioned drawbacks. Our proposed model, at the time of writing, out-performs the state of the art on a benchmark dataset on a variety of accuracy and regression metrics.", "field": [], "task": ["Multimodal Sentiment Analysis", "Regression", "Sentiment Analysis"], "method": [], "dataset": ["MOSI"], "metric": ["Accuracy"], "title": "Multilogue-Net: A Context-Aware RNN for Multi-modal Emotion Detection and Sentiment Analysis in Conversation"} {"abstract": "MaskedFusion is a framework to estimate the 6D pose of objects using RGB-D data, with an architecture that leverages multiple sub-tasks in a pipeline to achieve accurate 6D poses. 6D pose estimation is an open challenge due to complex world objects and many possible problems when capturing data from the real world, e.g., occlusions, truncations, and noise in the data. Achieving accurate 6D poses will improve results in other open problems like robot grasping or positioning objects in augmented reality. MaskedFusion improves the state-of-the-art by using object masks to eliminate non-relevant data. With the inclusion of the masks on the neural network that estimates the 6D pose of an object we also have features that represent the object shape. MaskedFusion is a modular pipeline where each sub-task can have different methods that achieve the objective. MaskedFusion achieved 97.3% on average using the ADD metric on the LineMOD dataset and 93.3% using the ADD-S AUC metric on YCB-Video Dataset, which is an improvement, compared to the state-of-the-art methods. The code is available on GitHub (https://github.com/kroglice/MaskedFusion).", "field": [], "task": ["6D Pose Estimation", "6D Pose Estimation using RGBD", "Pose Estimation"], "method": [], "dataset": ["LineMOD", "YCB-Video"], "metric": ["Mean ADD", "ADDS AUC", "Accuracy (ADD)", "Mean ADD-S"], "title": "MaskedFusion: Mask-based 6D Object Pose Estimation"} {"abstract": "Structures matter in single image super resolution (SISR). 
Recent studies benefiting from generative adversarial network (GAN) have promoted the development of SISR by recovering photo-realistic images. However, there are always undesired structural distortions in the recovered images. In this paper, we propose a structure-preserving super resolution method to alleviate the above issue while maintaining the merits of GAN-based methods to generate perceptual-pleasant details. Specifically, we exploit gradient maps of images to guide the recovery in two aspects. On the one hand, we restore high-resolution gradient maps by a gradient branch to provide additional structure priors for the SR process. On the other hand, we propose a gradient loss which imposes a second-order restriction on the super-resolved images. Along with the previous image-space loss functions, the gradient-space objectives help generative networks concentrate more on geometric structures. Moreover, our method is model-agnostic, which can be potentially used for off-the-shelf SR networks. Experimental results show that we achieve the best PI and LPIPS performance and meanwhile comparable PSNR and SSIM compared with state-of-the-art perceptual-driven SR methods. Visual results demonstrate our superiority in restoring structures while generating natural SR images.", "field": [], "task": ["Image Super-Resolution", "SSIM", "Super-Resolution"], "method": [], "dataset": ["Set5 - 4x upscaling", "Urban100 - 4x upscaling", "BSD100 - 4x upscaling", "Set14 - 4x upscaling"], "metric": ["SSIM", "PSNR", "LPIPS", "Perceptual Index"], "title": "Structure-Preserving Super Resolution with Gradient Guidance"} {"abstract": "A person is commonly described by attributes like height, build, cloth color, cloth type, and gender. Such attributes are known as soft biometrics. They bridge the semantic gap between human description and person retrieval in surveillance video. The paper proposes a deep learning-based linear filtering approach for person retrieval using height, cloth color, and gender. The proposed approach uses Mask R-CNN for pixel-wise person segmentation. It removes background clutter and provides precise boundary around the person. Color and gender models are fine-tuned using AlexNet and the algorithm is tested on SoftBioSearch dataset. It achieves good accuracy for person retrieval using the semantic query in challenging conditions.", "field": [], "task": ["Person Retrieval"], "method": [], "dataset": ["SoftBioSearch"], "metric": ["Average IOU"], "title": "Person Retrieval in Surveillance Video using Height, Color and Gender"} {"abstract": "Sign language translation (SLT) aims to interpret sign video sequences into text-based natural language sentences. Sign videos consist of continuous sequences of sign gestures with no clear boundaries in between. Existing SLT models usually represent sign visual features in a frame-wise manner so as to avoid needing to explicitly segmenting the videos into isolated signs. However, these methods neglect the temporal information of signs and lead to substantial ambiguity in translation. In this paper, we explore the temporal semantic structures of signvideos to learn more discriminative features. To this end, we first present a novel sign video segment representation which takes into account multiple temporal granularities, thus alleviating the need for accurate video segmentation. 
Taking advantage of the proposed segment representation, we develop a novel hierarchical sign video feature learning method via a temporal semantic pyramid network, called TSPNet. Specifically, TSPNet introduces an inter-scale attention to evaluate and enhance local semantic consistency of sign segments and an intra-scale attention to resolve semantic ambiguity by using non-local video context. Experiments show that our TSPNet outperforms the state-of-the-art with significant improvements on the BLEU score (from 9.58 to 13.41) and ROUGE score (from 31.80 to 34.96) on the largest commonly-used SLT dataset. Our implementation is available at https://github.com/verashira/TSPNet.", "field": [], "task": ["Sign Language Recognition", "Sign Language Translation", "Video Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["RWTH-PHOENIX-Weather 2014 T"], "metric": ["BLEU-4"], "title": "TSPNet: Hierarchical Feature Learning via Temporal Semantic Pyramid for Sign Language Translation"} {"abstract": "The outbreak of COVID-19 has forced everyone to stay indoors, fabricating a significant drop in physical activeness. Our work is constructed upon the idea of formulating a backbone mechanism, to detect levels of activeness in real-time, using a single monocular image of a target person. The scope can be generalized under many applications, be it in an interview, online classes, security surveillance, et cetera. We propose a Computer Vision based multi-stage approach, wherein the pose of a person is first detected, encoded with a novel approach, and then assessed by a classical machine learning algorithm to determine the level of activeness. An alerting system is wrapped around the approach to provide a solution to inhibit lethargy by sending notification alerts to individuals involved.", "field": [], "task": ["Activeness Detection"], "method": [], "dataset": ["COCO test-dev"], "metric": ["Accuracy (%)"], "title": "ActiveNet: A computer-vision based approach to determine lethargy"} {"abstract": "We consider the task of knowledge graph link prediction. Given a question consisting of a source entity and a relation (e.g., Shakespeare and BornIn), the objective is to predict the most likely answer entity (e.g., England). Recent approaches tackle this problem by learning entity and relation embeddings. However, they often constrain the relationship between these embeddings to be additive (i.e., the embeddings are concatenated and then processed by a sequence of linear functions and element-wise non-linearities). We show that this type of interaction significantly limits representational power. For example, such models cannot handle cases where a different projection of the source entity is used for each relation. We propose to use contextual parameter generation to address this limitation. More specifically, we treat relations as the context in which source entities are processed to produce predictions, by using relation embeddings to generate the parameters of a model operating over source entity embeddings. This allows models to represent more complex interactions between entities and relations. We apply our method to two existing link prediction methods, including the current state-of-the-art, resulting in significant performance gains and establishing a new state-of-the-art for this task.
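As a rough illustration of the contextual parameter generation idea described above, the hypothetical PyTorch sketch below lets a relation embedding generate the weight matrix that is applied to the source entity embedding, rather than concatenating the two. Names and shapes are assumptions for illustration only, not the authors' implementation.

```python
# Illustrative sketch (hypothetical names, not the paper's code): a relation
# embedding generates the parameters of the projection applied to the source
# entity embedding, instead of being additively combined with it.
import torch
import torch.nn as nn

class CPGScorer(nn.Module):
    def __init__(self, num_entities, num_relations, dim):
        super().__init__()
        self.entity_emb = nn.Embedding(num_entities, dim)
        self.relation_emb = nn.Embedding(num_relations, dim)
        # Generator maps a relation embedding to a dim x dim weight matrix.
        self.param_gen = nn.Linear(dim, dim * dim)
        self.dim = dim

    def forward(self, src, rel):
        e = self.entity_emb(src)                       # (B, d)
        w = self.param_gen(self.relation_emb(rel))     # (B, d*d)
        w = w.view(-1, self.dim, self.dim)             # (B, d, d)
        q = torch.bmm(w, e.unsqueeze(-1)).squeeze(-1)  # relation-specific projection
        # Score every candidate answer entity against the projected query.
        return q @ self.entity_emb.weight.t()          # (B, num_entities)
```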
These gains are achieved while also reducing convergence time by up to 28 times.", "field": [], "task": ["Entity Embeddings", "Link Prediction"], "method": [], "dataset": ["WN18RR", "NELL-995", "FB15k-237"], "metric": ["Hits@10", "MRR", "Hits@1"], "title": "Contextual Parameter Generation for Knowledge Graph Link Prediction"} {"abstract": "Contrastive learning between multiple views of the data has recently achieved state of the art performance in the field of self-supervised representation learning. Despite its success, the influence of different view choices has been less studied. In this paper, we use theoretical and empirical analysis to better understand the importance of view selection, and argue that we should reduce the mutual information (MI) between views while keeping task-relevant information intact. To verify this hypothesis, we devise unsupervised and semi-supervised frameworks that learn effective views by aiming to reduce their MI. We also consider data augmentation as a way to reduce MI, and show that increasing data augmentation indeed leads to decreasing MI and improves downstream classification accuracy. As a by-product, we achieve a new state-of-the-art accuracy on unsupervised pre-training for ImageNet classification ($73\\%$ top-1 linear readout with a ResNet-50). In addition, transferring our models to PASCAL VOC object detection and COCO instance segmentation consistently outperforms supervised pre-training. Code: http://github.com/HobbitLong/PyContrast", "field": [], "task": ["Data Augmentation", "Instance Segmentation", "Object Detection", "Representation Learning", "Self-Supervised Image Classification", "Semantic Segmentation", "Unsupervised Pre-training"], "method": [], "dataset": ["ImageNet"], "metric": ["Top 5 Accuracy", "Number of Params", "Top 1 Accuracy"], "title": "What Makes for Good Views for Contrastive Learning?"} {"abstract": "Lip reading, also known as visual speech recognition, aims to recognize the speech content from videos by analyzing the lip dynamics. There has been appealing progress in recent years, benefiting much from the rapidly developed deep learning techniques and the recent large-scale lip-reading datasets. Most existing methods obtained high performance by constructing a complex neural network, together with several customized training strategies which were always given in a very brief description or even shown only in the source code. We find that making proper use of these strategies could always bring exciting improvements without changing much of the model. Considering the non-negligible effects of these strategies and the persistent difficulty of training an effective lip reading model, we perform a comprehensive quantitative study and comparative analysis, for the first time, to show the effects of several different choices for lip reading. By only introducing some easy-to-get refinements to the baseline pipeline, we obtain a clear improvement in performance, from 83.7% to 88.4% and from 38.2% to 55.7%, on the two largest publicly available lip reading datasets, LRW and LRW-1000, respectively.
They are comparable to, and even surpass, the existing state-of-the-art results.", "field": [], "task": ["Lipreading", "Lip Reading", "Speech Recognition", "Visual Speech Recognition"], "method": [], "dataset": ["Lip Reading in the Wild", "LRW-1000"], "metric": ["Top-1 Accuracy"], "title": "Learn an Effective Lip Reading Model without Pains"} {"abstract": "We address the problem of detecting human-object interactions in images using graphical neural networks. Our network constructs a bipartite graph of nodes representing detected humans and objects, wherein messages passed between the nodes encode relative spatial and appearance information. Unlike existing approaches that separate appearance and spatial features, our method fuses these two cues within a single graphical model allowing information conditioned on both modalities to influence the prediction of interactions with neighboring nodes. Through extensive experimentation we demonstrate the advantages of fusing relative spatial information with appearance features in the computation of adjacency structure, message passing and the ultimate refined graph features. On the popular HICO-DET benchmark dataset, our model outperforms the state-of-the-art with an mAP of 27.18, a 10% relative improvement.", "field": [], "task": ["Human-Object Interaction Detection"], "method": [], "dataset": ["HICO-DET"], "metric": ["Time Per Frame (ms)", "MAP"], "title": "Spatio-attentive Graphs for Human-Object Interaction Detection"} {"abstract": "Determining which image regions to concentrate on is critical for Human-Object Interaction (HOI) detection. Conventional HOI detectors focus on either detected human and object pairs or pre-defined interaction locations, which limits learning of the effective features. In this paper, we reformulate HOI detection as an adaptive set prediction problem; with this novel formulation, we propose an Adaptive Set-based one-stage framework (AS-Net) with parallel instance and interaction branches. To attain this, we map a trainable interaction query set to an interaction prediction set with a transformer. Each query adaptively aggregates the interaction-relevant features from global contexts through multi-head co-attention. In addition, the training process is supervised adaptively by matching each ground-truth with the interaction prediction. Furthermore, we design an effective instance-aware attention module to introduce instructive features from the instance branch into the interaction branch. Our method outperforms previous state-of-the-art methods without any extra human pose and language features on three challenging HOI detection datasets. In particular, we achieve over $31\\%$ relative improvement on the large scale HICO-DET dataset. Code is available at https://github.com/yoyomimi/AS-Net.", "field": [], "task": ["Human-Object Interaction Detection"], "method": [], "dataset": ["HICO-DET"], "metric": ["MAP"], "title": "Reformulating HOI Detection as Adaptive Set Prediction"} {"abstract": "Capsules as well as dynamic routing between them are recently proposed\nstructures for deep neural networks. A capsule groups data into vectors or\nmatrices as poses rather than conventional scalars to represent specific\nproperties of a target instance. Besides the pose, a capsule should be attached\nwith a probability (often denoted as activation) for its presence. The dynamic\nrouting helps capsules achieve more generalization capacity with many fewer\nmodel parameters.
However, the bottleneck that prevents widespread applications\nof capsules is the expense of computation during routing. To address this\nproblem, we generalize existing routing methods within the framework of\nweighted kernel density estimation, and propose two fast routing methods with\ndifferent optimization strategies. Our methods improve the time efficiency of\nrouting by nearly 40\\% with negligible performance degradation. By stacking a\nhybrid of convolutional layers and capsule layers, we construct a network\narchitecture to handle inputs at a resolution of $64\\times{64}$ pixels. The\nproposed models achieve performance on par with other leading methods on\nmultiple benchmarks.", "field": [], "task": ["Density Estimation", "Image Classification"], "method": [], "dataset": ["smallNORB"], "metric": ["Classification Error"], "title": "Fast Dynamic Routing Based on Weighted Kernel Density Estimation"} {"abstract": "The Salient Object Detection (SOD) domain using RGB-D data has lately emerged, with some current models achieving adequately precise results. However, they have limited generalization abilities and intensive computational complexity. In this paper, inspired by the best background/foreground separation abilities of deformable convolutions, we employ them in our Densely Deformable Network (DDNet) to achieve efficient SOD. The salient regions from densely deformable convolutions are further refined using transposed convolutions to optimally generate the saliency maps. Quantitative and qualitative evaluations using the recent SOD dataset against 22 competing techniques show our method's efficiency and effectiveness. We also offer an evaluation using our newly created cross-dataset, surveillance-SOD (S-SOD), to check the trained models' validity in terms of their applicability in diverse scenarios. The results indicate that the current models have limited generalization potentials, demanding further research in this direction. Our code and new dataset will be publicly available at https://github.com/tanveer-hussain/EfficientSOD", "field": [], "task": ["RGB-D Salient Object Detection", "RGB Salient Object Detection", "Saliency Detection", "Salient Object Detection"], "method": [], "dataset": ["SIP"], "metric": ["Average MAE"], "title": "Densely Deformable Efficient Salient Object Detection Network"} {"abstract": "In this paper we introduce a natural image prior that directly represents a\nGaussian-smoothed version of the natural image distribution. We include our\nprior in a formulation of image restoration as a Bayes estimator that also\nallows us to solve noise-blind image restoration problems. We show that the\ngradient of our prior corresponds to the mean-shift vector on the natural image\ndistribution. In addition, we learn the mean-shift vector field using denoising\nautoencoders, and use it in a gradient descent approach to perform Bayes risk\nminimization. We demonstrate competitive results for noise-blind deblurring,\nsuper-resolution, and demosaicing.", "field": [], "task": ["Deblurring", "Demosaicking", "Denoising", "Image Restoration", "Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["Set5 - 4x upscaling", "Set14 - 4x upscaling"], "metric": ["PSNR"], "title": "Deep Mean-Shift Priors for Image Restoration"} {"abstract": "This is a draft of a textbook chapter on neural machine translation.
It offers a comprehensive\ntreatment of the topic, ranging from an introduction to neural networks and\ncomputation graphs, to a description of the currently dominant attentional\nsequence-to-sequence model, recent refinements, alternative architectures, and\nchallenges. It is written as a chapter for the textbook Statistical Machine\nTranslation and was used in the JHU Fall 2017 class on machine translation.", "field": [], "task": ["Machine Translation"], "method": [], "dataset": ["20NEWS"], "metric": ["1-of-100 Accuracy"], "title": "Neural Machine Translation"} {"abstract": "Image classification has advanced significantly in recent years with the\navailability of large-scale image sets. However, fine-grained classification\nremains a major challenge due to the annotation cost of large numbers of\nfine-grained categories. This project shows that compelling classification\nperformance can be achieved on such categories even without labeled training\ndata. Given image and class embeddings, we learn a compatibility function such\nthat matching embeddings are assigned a higher score than mismatching ones;\nzero-shot classification of an image proceeds by finding the label yielding the\nhighest joint compatibility score. We use state-of-the-art image features and\nfocus on different supervised attributes and unsupervised output embeddings\neither derived from hierarchies or learned from unlabeled text corpora. We\nestablish a substantially improved state-of-the-art on the Animals with\nAttributes and Caltech-UCSD Birds datasets. Most encouragingly, we demonstrate\nthat purely unsupervised output embeddings (learned from Wikipedia and improved\nwith fine-grained text) achieve compelling results, even outperforming the\nprevious supervised state-of-the-art. By combining different output embeddings,\nwe further improve results.", "field": [], "task": ["Few-Shot Image Classification", "Fine-Grained Image Classification", "Image Classification", "Zero-Shot Learning"], "method": [], "dataset": ["CUB 200 50-way (0-shot)", "CUB-200-2011 - 0-Shot", "CUB-200 - 0-Shot Learning"], "metric": ["Accuracy", "Top-1 Accuracy"], "title": "Evaluation of Output Embeddings for Fine-Grained Image Classification"} {"abstract": "In recent years, deep neural networks have led to exciting breakthroughs in\nspeech recognition, computer vision, and natural language processing (NLP)\ntasks. However, there have been few positive results of deep models on ad-hoc\nretrieval tasks. This is partially due to the fact that many important\ncharacteristics of the ad-hoc retrieval task have not been well addressed in\ndeep models yet. Typically, the ad-hoc retrieval task is formalized as a\nmatching problem between two pieces of text in existing work using deep models,\nand treated as equivalent to many NLP tasks such as paraphrase identification,\nquestion answering and automatic conversation. However, we argue that the\nad-hoc retrieval task is mainly about relevance matching while most NLP\nmatching tasks concern semantic matching, and there are some fundamental\ndifferences between these two matching tasks. Successful relevance matching\nrequires proper handling of the exact matching signals, query term importance,\nand diverse matching requirements. In this paper, we propose a novel deep\nrelevance matching model (DRMM) for ad-hoc retrieval. Specifically, our model\nemploys a joint deep architecture at the query term level for relevance\nmatching.
By using matching histogram mapping, a feed forward matching network,\nand a term gating network, we can effectively deal with the three relevance\nmatching factors mentioned above. Experimental results on two representative\nbenchmark collections show that our model can significantly outperform some\nwell-known retrieval models as well as state-of-the-art deep matching models.", "field": [], "task": ["Ad-Hoc Information Retrieval", "Paraphrase Identification", "Question Answering", "Speech Recognition"], "method": [], "dataset": ["TREC Robust04"], "metric": ["P@20", "nDCG@20", "MAP"], "title": "A Deep Relevance Matching Model for Ad-hoc Retrieval"} {"abstract": "In this paper, we are interested in the few-shot learning problem. In\nparticular, we focus on a challenging scenario where the number of categories\nis large and the number of examples per novel category is very limited, e.g. 1,\n2, or 3. Motivated by the close relationship between the parameters and the\nactivations in a neural network associated with the same category, we propose a\nnovel method that can adapt a pre-trained neural network to novel categories by\ndirectly predicting the parameters from the activations. Zero training is\nrequired in adaptation to novel categories, and fast inference is realized by a\nsingle forward pass. We evaluate our method by doing few-shot image recognition\non the ImageNet dataset, which achieves the state-of-the-art classification\naccuracy on novel categories by a significant margin while keeping comparable\nperformance on the large-scale categories. We also test our method on the\nMiniImageNet dataset and it strongly outperforms the previous state-of-the-art\nmethods.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning"], "method": [], "dataset": ["Mini-Imagenet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Few-Shot Image Recognition by Predicting Parameters from Activations"} {"abstract": "Few-shot deep learning is a topical challenge area for scaling visual recognition to open ended growth of unseen new classes with limited labeled examples. A promising approach is based on metric learning, which trains a deep embedding to support image similarity matching. Our insight is that effective general purpose matching requires non-linear comparison of features at multiple abstraction levels. We thus propose a new deep comparison network comprised of embedding and relation modules that learn multiple non-linear distance metrics based on different levels of features simultaneously. Furthermore, to reduce over-fitting and enable the use of deeper embeddings, we represent images as distributions rather than vectors via learning parameterized Gaussian noise regularization. The resulting network achieves excellent performance on both miniImageNet and tieredImageNet.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Metric Learning"], "method": [], "dataset": ["Mini-Imagenet 5-way (1-shot)", "Tiered ImageNet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "Mini-Imagenet 20-way (1-shot)", "Mini-Imagenet 20-way (5-shot)", "Tiered ImageNet 5-way (5-shot)"], "metric": ["Accuracy"], "title": "RelationNet2: Deep Comparison Columns for Few-Shot Learning"} {"abstract": "We introduce a family of multitask variational methods for semi-supervised sequence labeling. Our model family consists of a latent-variable generative model and a discriminative labeler. 
The generative models use latent variables to define the conditional probability of a word given its context, drawing inspiration from word prediction objectives commonly used in learning word embeddings. The labeler helps inject discriminative information into the latent space. We explore several latent variable configurations, including ones with hierarchical structure, which enables the model to account for both label-specific and word-specific information. Our models consistently outperform standard sequential baselines on 8 sequence labeling datasets, and improve further with unlabeled data.", "field": [], "task": ["Hierarchical structure", "Learning Word Embeddings", "Word Embeddings"], "method": [], "dataset": ["CoNLL 2003 (English)"], "metric": ["F1"], "title": "Variational Sequential Labelers for Semi-Supervised Learning"} {"abstract": "Dialogue state tracking is the core part of a spoken dialogue system. It\nestimates the beliefs of possible user's goals at every dialogue turn. However,\nfor most current approaches, it's difficult to scale to large dialogue domains.\nThey have one or more of following limitations: (a) Some models don't work in\nthe situation where slot values in ontology changes dynamically; (b) The number\nof model parameters is proportional to the number of slots; (c) Some models\nextract features based on hand-crafted lexicons. To tackle these challenges, we\npropose StateNet, a universal dialogue state tracker. It is independent of the\nnumber of values, shares parameters across all slots, and uses pre-trained word\nvectors instead of explicit semantic dictionaries. Our experiments on two\ndatasets show that our approach not only overcomes the limitations, but also\nsignificantly outperforms the performance of state-of-the-art approaches.", "field": [], "task": ["Dialogue State Tracking"], "method": [], "dataset": ["Wizard-of-Oz", "Second dialogue state tracking challenge"], "metric": ["Joint"], "title": "Towards Universal Dialogue State Tracking"} {"abstract": "Graph kernels are kernel methods measuring graph similarity and serve as a standard tool for graph classification. However, the use of kernel methods for node classification, which is a related problem to graph representation learning, is still ill-posed and the state-of-the-art methods are heavily based on heuristics. Here, we present a novel theoretical kernel-based framework for node classification that can bridge the gap between these two representation learning problems on graphs. Our approach is motivated by graph kernel methodology but extended to learn the node representations capturing the structural information in a graph. We theoretically show that our formulation is as powerful as any positive semidefinite kernels. To efficiently learn the kernel, we propose a novel mechanism for node feature aggregation and a data-driven similarity metric employed during the training phase. More importantly, our framework is flexible and complementary to other graph-based deep learning models, e.g., Graph Convolutional Networks (GCNs). 
We empirically evaluate our approach on a number of standard node classification benchmarks, and demonstrate that our model sets the new state of the art.", "field": [], "task": ["Graph Classification", "Graph Representation Learning", "Graph Similarity", "Node Classification", "Representation Learning"], "method": [], "dataset": ["Cora", "Pubmed", "Citeseer"], "metric": ["AP", "AUC"], "title": "Rethinking Kernel Methods for Node Representation Learning on Graphs"} {"abstract": "This study tackles generative reading comprehension (RC), which consists of answering questions based on textual evidence and natural language generation (NLG). We propose a multi-style abstractive summarization model for question answering, called Masque. The proposed model has two key characteristics. First, unlike most studies on RC that have focused on extracting an answer span from the provided passages, our model instead focuses on generating a summary from the question and multiple passages. This serves to cover various answer styles required for real-world applications. Second, whereas previous studies built a specific model for each answer style because of the difficulty of acquiring one general model, our approach learns multi-style answers within a model to improve the NLG capability for all styles involved. This also enables our model to give an answer in the target style. Experiments show that our model achieves state-of-the-art performance on the Q&A task and the Q&A + NLG task of MS MARCO 2.1 and the summary task of NarrativeQA. We observe that the transfer of the style-independent NLG capability to the target style is the key to its success.", "field": [], "task": ["Abstractive Text Summarization", "Question Answering", "Reading Comprehension", "Text Generation"], "method": [], "dataset": ["MS MARCO", "NarrativeQA"], "metric": ["Rouge-L", "BLEU-4", "METEOR", "BLEU-1"], "title": "Multi-style Generative Reading Comprehension"} {"abstract": "The Tsetlin Machine (TM) is an interpretable mechanism for pattern recognition that constructs conjunctive clauses from data. The clauses capture frequent patterns with high discriminating power, providing increasing expression power with each additional clause. However, the resulting accuracy gain comes at the cost of linear growth in computation time and memory usage. In this paper, we present the Weighted Tsetlin Machine (WTM), which reduces computation time and memory usage by weighting the clauses. Real-valued weighting allows one clause to replace multiple, and supports fine-tuning the impact of each clause. Our novel scheme simultaneously learns both the composition of the clauses and their weights. Furthermore, we increase training efficiency by replacing $k$ Bernoulli trials of success probability $p$ with a uniform sample of average size $p k$, the size drawn from a binomial distribution. In our empirical evaluation, the WTM achieved the same accuracy as the TM on MNIST, IMDb, and Connect-4, requiring only $1/4$, $1/3$, and $1/50$ of the clauses, respectively. With the same number of clauses, the WTM outperformed the TM, obtaining peak test accuracies of respectively $98.63\\%$, $90.37\\%$, and $87.91\\%$. 
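The sampling shortcut mentioned in the Weighted Tsetlin Machine abstract above can be illustrated in isolation: instead of running $k$ Bernoulli trials with success probability $p$, one draws the number of successes from a binomial distribution and, if needed, picks that many indices uniformly. This is a minimal sketch of the statistical trick under those assumptions, not the Tsetlin Machine implementation.

```python
# Illustrative sketch: drawing the number of "successes" directly from a
# binomial distribution instead of running k independent Bernoulli trials.
import numpy as np

rng = np.random.default_rng(0)

def n_successes_bernoulli(k, p):
    return int((rng.random(k) < p).sum())        # k trials, O(k) random draws

def n_successes_binomial(k, p):
    return int(rng.binomial(k, p))               # one draw, same distribution

def sample_indices_binomial(k, p):
    m = rng.binomial(k, p)                       # how many trials "fire"
    return rng.choice(k, size=m, replace=False)  # uniform sample of that size

# Both counting schemes have mean k*p; the binomial draw avoids generating
# k individual samples.
```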
Finally, our novel sampling scheme reduced sample generation time by a factor of $7$.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["MNIST"], "metric": ["Accuracy"], "title": "The Weighted Tsetlin Machine: Compressed Representations with Weighted Clauses"} {"abstract": "Recent years have witnessed a surge of interests of using neural topic models for automatic topic extraction from text, since they avoid the complicated mathematical derivations for model inference as in traditional topic models such as Latent Dirichlet Allocation (LDA). However, these models either typically assume improper prior (e.g. Gaussian or Logistic Normal) over latent topic space or could not infer topic distribution for a given document. To address these limitations, we propose a neural topic modeling approach, called Bidirectional Adversarial Topic (BAT) model, which represents the first attempt of applying bidirectional adversarial training for neural topic modeling. The proposed BAT builds a two-way projection between the document-topic distribution and the document-word distribution. It uses a generator to capture the semantic patterns from texts and an encoder for topic inference. Furthermore, to incorporate word relatedness information, the Bidirectional Adversarial Topic model with Gaussian (Gaussian-BAT) is extended from BAT. To verify the effectiveness of BAT and Gaussian-BAT, three benchmark corpora are used in our experiments. The experimental results show that BAT and Gaussian-BAT obtain more coherent topics, outperforming several competitive baselines. Moreover, when performing text clustering based on the extracted topics, our models outperform all the baselines, with more significant improvements achieved by Gaussian-BAT where an increase of near 6\\% is observed in accuracy.", "field": [], "task": ["Text Clustering", "Topic Models"], "method": [], "dataset": ["20 Newsgroups"], "metric": ["Accuracy"], "title": "Neural Topic Modeling with Bidirectional Adversarial Training"} {"abstract": "Weakly-supervised temporal action localization aims to learn detecting temporal intervals of action classes with only video-level labels. To this end, it is crucial to separate frames of action classes from the background frames (i.e., frames not belonging to any action classes). In this paper, we present a new perspective on background frames where they are modeled as out-of-distribution samples regarding their inconsistency. Then, background frames can be detected by estimating the probability of each frame being out-of-distribution, known as uncertainty, but it is infeasible to directly learn uncertainty without frame-level labels. To realize the uncertainty learning in the weakly-supervised setting, we leverage the multiple instance learning formulation. Moreover, we further introduce a background entropy loss to better discriminate background frames by encouraging their in-distribution (action) probabilities to be uniformly distributed over all action classes. Experimental results show that our uncertainty modeling is effective at alleviating the interference of background frames and brings a large performance gain without bells and whistles. We demonstrate that our model significantly outperforms state-of-the-art methods on the benchmarks, THUMOS'14 and ActivityNet (1.2 & 1.3). 
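A minimal sketch of the background entropy idea described in the weakly-supervised action localization abstract above (hypothetical shapes and names, not the paper's code): the action-class distribution of segments treated as background is pushed toward uniform by minimizing the cross-entropy to the uniform distribution.

```python
# Illustrative sketch: an entropy-style loss that encourages the action-class
# probabilities of presumed background segments to be uniform over C classes.
import torch
import torch.nn.functional as F

def background_entropy_loss(bg_logits):
    """bg_logits: (N, C) action-class logits of segments treated as background."""
    log_probs = F.log_softmax(bg_logits, dim=-1)
    # Cross-entropy against the uniform distribution over C classes;
    # it is minimized exactly when every class receives probability 1/C.
    return -log_probs.mean(dim=-1).mean()
```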
Our code is available at https://github.com/Pilhyeon/WTAL-Uncertainty-Modeling.", "field": [], "task": ["Action Classification", "Action Classification ", "Action Localization", "Multiple Instance Learning", "Out-of-Distribution Detection", "Temporal Action Localization", "Weakly Supervised Action Localization", "Weakly-supervised Temporal Action Localization", "Weakly Supervised Temporal Action Localization"], "method": [], "dataset": ["ActivityNet-1.2", "ActivityNet-1.3", "THUMOS 2014"], "metric": ["mAP@0.5", "mAP@0.1:0.7"], "title": "Weakly-supervised Temporal Action Localization by Uncertainty Modeling"} {"abstract": "The state-of-the-art Aspect-based Sentiment Analysis (ABSA) approaches are mainly based on either detecting aspect terms and their corresponding sentiment polarities, or co-extracting aspect and opinion terms. However, the extraction of aspect-sentiment pairs lacks opinion terms as a reference, while co-extraction of aspect and opinion terms would not lead to meaningful pairs without determining their sentiment dependencies. To address the issue, we present a novel view of ABSA as an opinion triplet extraction task, and propose a multi-task learning framework to jointly extract aspect terms and opinion terms, and simultaneously parses sentiment dependencies between them with a biaffine scorer. At inference phase, the extraction of triplets is facilitated by a triplet decoding method based on the above outputs. We evaluate the proposed framework on four SemEval benchmarks for ASBA. The results demonstrate that our approach significantly outperforms a range of strong baselines and state-of-the-art approaches.", "field": [], "task": ["Aspect-Based Sentiment Analysis", "Aspect Sentiment Triplet Extraction", "Extract Aspect", "Multi-Task Learning", "Sentiment Analysis"], "method": [], "dataset": ["SemEval"], "metric": ["F1"], "title": "A Multi-task Learning Framework for Opinion Triplet Extraction"} {"abstract": "Histopathological characterization of colorectal polyps allows to tailor patients' management and follow up with the ultimate aim of avoiding or promptly detecting an invasive carcinoma. Colorectal polyps characterization relies on the histological analysis of tissue samples to determine the polyps malignancy and dysplasia grade. Deep neural networks achieve outstanding accuracy in medical patterns recognition, however they require large sets of annotated training images. We introduce UniToPatho, an annotated dataset of 9536 hematoxylin and eosin (H&E) stained patches extracted from 292 whole-slide images, meant for training deep neural networks for colorectal polyps classification and adenomas grading. We present our dataset and provide insights on how to tackle the problem of automatic colorectal polyps characterization.", "field": [], "task": ["Histopathological Image Classification", "whole slide images"], "method": [], "dataset": ["UNITOPATHO"], "metric": ["BA"], "title": "UniToPatho, a labeled histopathological dataset for colorectal polyps classification and adenoma dysplasia grading"} {"abstract": "Deep learning based methods hold state-of-the-art results in image denoising, but remain difficult to interpret due to their construction from poorly understood building blocks such as batch-normalization, residual learning, and feature domain processing. 
Unrolled optimization networks propose an interpretable alternative to constructing deep neural networks by deriving their architecture from classical iterative optimization methods, without use of tricks from the standard deep learning tool-box. So far, such methods have demonstrated performance close to that of state-of-the-art models while using their interpretable construction to achieve a comparably low learned parameter count. In this work, we propose an unrolled convolutional dictionary learning network (CDLNet) and demonstrate its competitive denoising performance in both low and high parameter count regimes. Specifically, we show that the proposed model outperforms the state-of-the-art denoising models when scaled to similar parameter count. In addition, we leverage the model's interpretable construction to propose an augmentation of the network's thresholds that enables state-of-the-art blind denoising performance and near-perfect generalization on noise-levels unseen during training.", "field": [], "task": ["Denoising", "Dictionary Learning", "Grayscale Image Denoising", "Image Denoising"], "method": [], "dataset": ["BSD68 sigma15", "BSD68 sigma50", "BSD68 sigma25"], "metric": ["PSNR"], "title": "CDLNet: Robust and Interpretable Denoising Through Deep Convolutional Dictionary Learning"} {"abstract": "Seq2seq learning has produced promising results on summarization. However, in\nmany cases, system summaries still struggle to keep the meaning of the original\nintact. They may miss out important words or relations that play critical roles\nin the syntactic structure of source sentences. In this paper, we present\nstructure-infused copy mechanisms to facilitate copying important words and\nrelations from the source sentence to summary sentence. The approach naturally\ncombines source dependency structure with the copy mechanism of an abstractive\nsentence summarizer. Experimental results demonstrate the effectiveness of\nincorporating source-side syntactic information in the system, and our proposed\napproach compares favorably to state-of-the-art methods.", "field": [], "task": ["Abstractive Text Summarization"], "method": [], "dataset": ["GigaWord"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Structure-Infused Copy Mechanisms for Abstractive Summarization"} {"abstract": "In this work, we present a hybrid learning method for training task-oriented\ndialogue systems through online user interactions. Popular methods for learning\ntask-oriented dialogues include applying reinforcement learning with user\nfeedback on supervised pre-training models. Efficiency of such learning method\nmay suffer from the mismatch of dialogue state distribution between offline\ntraining and online interactive learning stages. To address this challenge, we\npropose a hybrid imitation and reinforcement learning method, with which a\ndialogue agent can effectively learn from its interaction with users by\nlearning from human teaching and feedback. We design a neural network based\ntask-oriented dialogue agent that can be optimized end-to-end with the proposed\nlearning method. Experimental results show that our end-to-end dialogue agent\ncan learn effectively from the mistake it makes via imitation learning from\nuser teaching. 
Applying reinforcement learning with user feedback after the\nimitation learning stage further improves the agent's capability in\nsuccessfully completing a task.", "field": [], "task": ["Dialogue State Tracking", "Imitation Learning", "Task-Oriented Dialogue Systems"], "method": [], "dataset": ["Second dialogue state tracking challenge"], "metric": ["Joint", "Price", "Area", "Food", "Request"], "title": "Dialogue Learning with Human Teaching and Feedback in End-to-End Trainable Task-Oriented Dialogue Systems"} {"abstract": "The problem of tracking multiple objects in a video sequence poses several challenging tasks. For tracking-by-detection, these include object re-identification, motion prediction and dealing with occlusions. We present a tracker (without bells and whistles) that accomplishes tracking without specifically targeting any of these tasks, in particular, we perform no training or optimization on tracking data. To this end, we exploit the bounding box regression of an object detector to predict the position of an object in the next frame, thereby converting a detector into a Tracktor. We demonstrate the potential of Tracktor and provide a new state-of-the-art on three multi-object tracking benchmarks by extending it with a straightforward re-identification and camera motion compensation. We then perform an analysis on the performance and failure cases of several state-of-the-art tracking methods in comparison to our Tracktor. Surprisingly, none of the dedicated tracking methods are considerably better in dealing with complex tracking scenarios, namely, small and occluded objects or missing detections. However, our approach tackles most of the easy tracking scenarios. Therefore, we motivate our approach as a new tracking paradigm and point out promising future research directions. Overall, Tracktor yields superior tracking performance than any current tracking method and our analysis exposes remaining and unsolved tracking challenges to inspire future research directions.", "field": [], "task": ["Motion Compensation", "motion prediction", "Multi-Object Tracking", "Object Tracking", "Regression"], "method": [], "dataset": ["2D MOT 2015", "MOT16", "MOT17"], "metric": ["MOTA"], "title": "Tracking without bells and whistles"} {"abstract": "Few-shot classification (FSC) is challenging due to the scarcity of labeled training data (e.g. only one labeled data point per class). Meta-learning has shown to achieve promising results by learning to initialize a classification model for FSC. In this paper we propose a novel semi-supervised meta-learning method called learning to self-train (LST) that leverages unlabeled data and specifically meta-learns how to cherry-pick and label such unsupervised data to further improve performance. To this end, we train the LST model through a large number of semi-supervised few-shot tasks. On each task, we train a few-shot model to predict pseudo labels for unlabeled data, and then iterate the self-training steps on labeled and pseudo-labeled data with each step followed by fine-tuning. We additionally learn a soft weighting network (SWN) to optimize the self-training weights of pseudo labels so that better ones can contribute more to gradient descent optimization. We evaluate our LST method on two ImageNet benchmarks for semi-supervised few-shot classification and achieve large improvements over the state-of-the-art method. 
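To make the self-training loop described in the abstract above concrete, here is a generic, heavily simplified sketch of one step: pseudo-label the unlabeled batch, weight the pseudo-labels by prediction confidence, and fine-tune on labeled plus pseudo-labeled data. The paper meta-learns a soft weighting network for these weights; the plain confidence weighting here is only a stand-in, and all interfaces are hypothetical.

```python
# Illustrative sketch (hypothetical interfaces, not the authors' LST code):
# one generic self-training step with confidence-weighted pseudo-labels.
import torch
import torch.nn.functional as F

def self_training_step(model, optimizer, x_lab, y_lab, x_unlab):
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(x_unlab), dim=-1)
        conf, pseudo_y = probs.max(dim=-1)        # pseudo labels + confidence

    model.train()
    optimizer.zero_grad()
    loss_lab = F.cross_entropy(model(x_lab), y_lab)
    loss_pseudo = (conf * F.cross_entropy(model(x_unlab), pseudo_y,
                                          reduction="none")).mean()
    (loss_lab + loss_pseudo).backward()
    optimizer.step()
```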
Code is at https://github.com/xinzheli1217/learning-to-self-train.", "field": [], "task": ["Meta-Learning"], "method": [], "dataset": ["Mini-Imagenet 5-way (1-shot)", "Tiered ImageNet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "Tiered ImageNet 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Learning to Self-Train for Semi-Supervised Few-Shot Classification"} {"abstract": "In this paper, we propose an efficient online multi-object tracking framework based on the GMPHD filter and occlusion group management scheme where the GMPHD filter utilizes hierarchical data association to reduce the false negatives caused by miss detection. The hierarchical data association consists of two steps: detection-to-track and track-to-track associations, which can recover the lost tracks and their switched IDs. In addition, the proposed framework is equipped with an object grouping management scheme which handles occlusion problems with two main parts. The first part is \"track merging\" which can merge the false positive tracks caused by false positive detections from occlusions, where the false positive tracks are usually occluded with a measure. The measure is the occlusion ratio between visual objects, sum-of-intersection-over-area (SIOA) we defined instead of the IOU metric. The second part is \"occlusion group energy minimization (OGEM)\" which prevents the occluded true positive tracks from false \"track merging\". We define each group of the occluded objects as an energy function and find an optimal hypothesis which makes the energy minimal. We evaluate the proposed tracker in benchmark datasets such as MOT15 and MOT17 which are built for multi-person tracking. An ablation study in training dataset shows that not only \"track merging\" and \"OGEM\" complement each other but also the proposed tracking method has more robust performance and less sensitive to parameters than baseline methods. Also, SIOA works better than IOU for various sizes of false positives. Experimental results show that the proposed tracker efficiently handles occlusion situations and achieves competitive performance compared to the state-of-the-art methods. Especially, our method shows the best multi-object tracking accuracy among the online and real-time executable methods.", "field": [], "task": ["Multi-Object Tracking", "Multiple Object Tracking", "Object Tracking", "Online Multi-Object Tracking", "Real-Time Multi-Object Tracking"], "method": [], "dataset": ["MOT17", "MOT15"], "metric": ["MOTA"], "title": "Online Multi-Object Tracking Framework with the GMPHD Filter and Occlusion Group Management"} {"abstract": "Emotion recognition in conversation (ERC) has received much attention, lately, from researchers due to its potential widespread applications in diverse areas, such as health-care, education, and human resources. In this paper, we present Dialogue Graph Convolutional Network (DialogueGCN), a graph neural network based approach to ERC. We leverage self and inter-speaker dependency of the interlocutors to model conversational context for emotion recognition. Through the graph network, DialogueGCN addresses context propagation issues present in the current RNN-based methods. 
We empirically show that this method alleviates such issues, while outperforming the current state of the art on a number of benchmark emotion classification datasets.", "field": [], "task": ["Emotion Classification", "Emotion Recognition", "Emotion Recognition in Conversation"], "method": [], "dataset": ["IEMOCAP", "MELD", "SEMAINE"], "metric": ["MAE (Arousal)", "Weighted Macro-F1", "MAE (Power)", "MAE (Valence)", "MAE (Expectancy)", "F1", "Accuracy"], "title": "DialogueGCN: A Graph Convolutional Neural Network for Emotion Recognition in Conversation"} {"abstract": "We introduce a method for the generation of images from an input scene graph. The method separates between a layout embedding and an appearance embedding. The dual embedding leads to generated images that better match the scene graph, have higher visual quality, and support more complex scene graphs. In addition, the embedding scheme supports multiple and diverse output images per scene graph, which can be further controlled by the user. We demonstrate two modes of per-object control: (i) importing elements from other images, and (ii) navigation in the object space, by selecting an appearance archetype. Our code is publicly available at https://www.github.com/ashual/scene_generation", "field": [], "task": ["Layout-to-Image Generation", "Scene Generation"], "method": [], "dataset": ["COCO-Stuff 64x64", "COCO-Stuff 128x128"], "metric": ["Inception Score", "SceneFID", "FID"], "title": "Specifying Object Attributes and Relations in Interactive Scene Generation"} {"abstract": "The scarcity of labeled training data often prohibits the internationalization of NLP models to multiple languages. Recent developments in cross-lingual understanding (XLU) has made progress in this area, trying to bridge the language barrier using language universal representations. However, even if the language problem was resolved, models trained in one language would not transfer to another language perfectly due to the natural domain drift across languages and cultures. We consider the setting of semi-supervised cross-lingual understanding, where labeled data is available in a source language (English), but only unlabeled data is available in the target language. We combine state-of-the-art cross-lingual methods with recently proposed methods for weakly supervised learning such as unsupervised pre-training and unsupervised data augmentation to simultaneously close both the language gap and the domain gap in XLU. We show that addressing the domain gap is crucial. We improve over strong baselines and achieve a new state-of-the-art for cross-lingual document classification.", "field": [], "task": ["Cross-Domain Document Classification", "Cross-Lingual Document Classification", "Cross-Lingual Sentiment Classification", "Data Augmentation", "Document Classification", "Unsupervised Pre-training"], "method": [], "dataset": ["MLDoc Zero-Shot English-to-German", "MLDoc Zero-Shot English-to-French", "MLDoc Zero-Shot English-to-Chinese", "MLDoc Zero-Shot English-to-Spanish", "MLDoc Zero-Shot English-to-Russian"], "metric": ["Accuracy"], "title": "Bridging the domain gap in cross-lingual document classification"} {"abstract": "In this paper, we argue about the importance of considering task interactions at multiple scales when distilling task information in a multi-task learning setup. In contrast to common belief, we show that tasks with high affinity at a certain scale are not guaranteed to retain this behaviour at other scales, and vice versa. 
We propose a novel architecture, namely MTI-Net, that builds upon this finding in three ways. First, it explicitly models task interactions at every scale via a multi-scale multi-modal distillation unit. Second, it propagates distilled task information from lower to higher scales via a feature propagation module. Third, it aggregates the refined task features from all scales via a feature aggregation unit to produce the final per-task predictions. Extensive experiments on two multi-task dense labeling datasets show that, unlike prior work, our multi-task model delivers on the full potential of multi-task learning, that is, smaller memory footprint, reduced number of calculations, and better performance w.r.t. single-task learning. The code is made publicly available: https://github.com/SimonVandenhende/Multi-Task-Learning-PyTorch.", "field": [], "task": ["Multi-Task Learning", "Semantic Segmentation"], "method": [], "dataset": ["NYU Depth v2"], "metric": ["Mean IoU"], "title": "MTI-Net: Multi-Scale Task Interaction Networks for Multi-Task Learning"} {"abstract": "Every moment counts in action recognition. A comprehensive understanding of\nhuman activity in video requires labeling every frame according to the actions\noccurring, placing multiple labels densely over a video sequence. To study this\nproblem we extend the existing THUMOS dataset and introduce MultiTHUMOS, a new\ndataset of dense labels over unconstrained internet videos. Modeling multiple,\ndense labels benefits from temporal relations within and across classes. We\ndefine a novel variant of long short-term memory (LSTM) deep networks for\nmodeling these temporal relations via multiple input and output connections. We\nshow that this model improves action labeling accuracy and further enables\ndeeper understanding tasks ranging from structured retrieval to action\nprediction.", "field": [], "task": ["Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["Multi-THUMOS"], "metric": ["mAP"], "title": "Every Moment Counts: Dense Detailed Labeling of Actions in Complex Videos"} {"abstract": "Recognizing arbitrary multi-character text in unconstrained natural\nphotographs is a hard problem. In this paper, we address an equally hard\nsub-problem in this domain viz. recognizing arbitrary multi-digit numbers from\nStreet View imagery. Traditional approaches to solve this problem typically\nseparate out the localization, segmentation, and recognition steps. In this\npaper we propose a unified approach that integrates these three steps via the\nuse of a deep convolutional neural network that operates directly on the image\npixels. We employ the DistBelief implementation of deep neural networks in\norder to train large, distributed neural networks on high quality images. We\nfind that the performance of this approach increases with the depth of the\nconvolutional network, with the best performance occurring in the deepest\narchitecture we trained, with eleven hidden layers. We evaluate this approach\non the publicly available SVHN dataset and achieve over $96\\%$ accuracy in\nrecognizing complete street numbers. We show that on a per-digit recognition\ntask, we improve upon the state-of-the-art, achieving $97.84\\%$ accuracy. We\nalso evaluate this approach on an even more challenging dataset generated from\nStreet View imagery containing several tens of millions of street number\nannotations and achieve over $90\\%$ accuracy. 
To further explore the\napplicability of the proposed system to broader text recognition tasks, we\napply it to synthetic distorted text from reCAPTCHA. reCAPTCHA is one of the\nmost secure reverse Turing tests that uses distorted text to distinguish humans\nfrom bots. We report a $99.8\%$ accuracy on the hardest category of reCAPTCHA.\nOur evaluations on both tasks indicate that at specific operating thresholds,\nthe performance of the proposed system is comparable to, and in some cases\nexceeds, that of human operators.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["SVHN"], "metric": ["Percentage error"], "title": "Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks"} {"abstract": "In this paper, we propose a novel representation for text documents based on aggregating word embedding vectors into document embeddings. Our approach is inspired by the Vector of Locally-Aggregated Descriptors used for image representation, and it works as follows. First, the word embeddings gathered from a collection of documents are clustered by k-means in order to learn a codebook of semantically-related word embeddings. Each word embedding is then associated to its nearest cluster centroid (codeword). The Vector of Locally-Aggregated Word Embeddings (VLAWE) representation of a document is then computed by accumulating the differences between each codeword vector and each word vector (from the document) associated to the respective codeword. We plug the VLAWE representation, which is learned in an unsupervised manner, into a classifier and show that it is useful for a diverse set of text classification tasks. We compare our approach with a broad range of recent state-of-the-art methods, demonstrating the effectiveness of our approach. Furthermore, we obtain a considerable improvement on the Movie Review data set, reporting an accuracy of 93.3%, which represents an absolute gain of 10% over the state-of-the-art approach. Our code is available at https://github.com/raduionescu/vlawe-boswe/.", "field": [], "task": ["Text Classification", "Word Embeddings"], "method": [], "dataset": ["TREC-6", "Reuters-21578"], "metric": ["Error", "F1"], "title": "Vector of Locally-Aggregated Word Embeddings (VLAWE): A Novel Document-level Representation"} {"abstract": "In natural language processing, it is common that many entities contain other entities inside them. Most existing works on named entity recognition (NER) only deal with flat entities but ignore nested ones. We propose a boundary-aware neural model for nested NER which leverages entity boundaries to predict entity categorical labels. Our model can locate entities precisely by detecting boundaries using sequence labeling models. Based on the detected boundaries, our model utilizes the boundary-relevant regions to predict entity categorical labels, which can decrease computation cost and relieve the error propagation problem in the layered sequence labeling model. We introduce multitask learning to capture the dependencies of entity boundaries and their categorical labels, which helps to improve the performance of identifying entities.
We conduct our experiments on GENIA dataset and the experimental results demonstrate that our model outperforms other state-of-the-art methods.", "field": [], "task": ["Named Entity Recognition", "Nested Named Entity Recognition"], "method": [], "dataset": ["GENIA"], "metric": ["F1"], "title": "A Boundary-aware Neural Model for Nested Named Entity Recognition"} {"abstract": "This paper presents Pyramid, a novel layered model for Nested Named Entity Recognition (nested NER). In our approach, token or text region embeddings are recursively inputted into L flat NER layers, from bottom to top, stacked in a pyramid shape. Each time an embedding passes through a layer of the pyramid, its length is reduced by one. Its hidden state at layer l represents an l-gram in the input text, which is labeled only if its corresponding text region represents a complete entity mention. We also design an inverse pyramid to allow bidirectional interaction between layers. The proposed method achieves state-of-the-art F1 scores in nested NER on ACE-2004, ACE-2005, GENIA, and NNE, which are 80.27, 79.42, 77.78, and 93.70 with conventional embeddings, and 87.74, 86.34, 79.31, and 94.68 with pre-trained contextualized embeddings. In addition, our model can be used for the more general task of Overlapping Named Entity Recognition. A preliminary experiment confirms the effectiveness of our method in overlapping NER.", "field": [], "task": ["Named Entity Recognition", "Nested Named Entity Recognition"], "method": [], "dataset": ["GENIA"], "metric": ["F1"], "title": "Pyramid: A Layered Model for Nested Named Entity Recognition"} {"abstract": "Like many Natural Language Processing tasks, Thai word segmentation is domain-dependent. Researchers have been relying on transfer learning to adapt an existing model to a new domain. However, this approach is inapplicable to cases where we can interact with only input and output layers of the models, also known as {``}black boxes{''}. We propose a filter-and-refine solution based on the stacked-ensemble learning paradigm to address this black-box limitation. We conducted extensive experimental studies comparing our method against state-of-the-art models and transfer learning. Experimental results show that our proposed solution is an effective domain adaptation method and has a similar performance as the transfer learning method.", "field": [], "task": ["Domain Adaptation", "Thai Word Segmentation", "Transfer Learning"], "method": [], "dataset": ["BEST-2010", "WS160"], "metric": ["F1-score", "F1-Score"], "title": "Domain Adaptation of Thai Word Segmentation Models using Stacked Ensemble"} {"abstract": "Reading comprehension QA tasks have seen a recent surge in popularity, yet most works have focused on fact-finding extractive QA. We instead focus on a more challenging multi-hop generative task (NarrativeQA), which requires the model to reason, gather, and synthesize disjoint pieces of information within the context to generate an answer. This type of multi-step reasoning also often requires understanding implicit relations, which humans resolve via external, background commonsense knowledge. We first present a strong generative baseline that uses a multi-attention mechanism to perform multiple hops of reasoning and a pointer-generator decoder to synthesize the answer. This model performs substantially better than previous generative models, and is competitive with current state-of-the-art span prediction models. 
We next introduce a novel system for selecting grounded multi-hop relational commonsense information from ConceptNet via a pointwise mutual information and term-frequency based scoring function. Finally, we effectively use this extracted commonsense information to fill in gaps of reasoning between context hops, using a selectively-gated attention mechanism. This boosts the model's performance significantly (also verified via human evaluation), establishing a new state-of-the-art for the task. We also show promising initial results of the generalizability of our background knowledge enhancements by demonstrating some improvement on QAngaroo-WikiHop, another multi-hop reasoning dataset.", "field": [], "task": ["Multi-hop Question Answering", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["NarrativeQA", "WikiHop"], "metric": ["METEOR", "Test", "BLEU-1", "Rouge-L", "BLEU-4"], "title": "Commonsense for Generative Multi-Hop Question Answering Tasks"} {"abstract": "An effective method to improve neural machine translation with monolingual\ndata is to augment the parallel training corpus with back-translations of\ntarget language sentences. This work broadens the understanding of\nback-translation and investigates a number of methods to generate synthetic\nsource sentences. We find that in all but resource poor settings\nback-translations obtained via sampling or noised beam outputs are most\neffective. Our analysis shows that sampling or noisy synthetic data gives a\nmuch stronger training signal than data generated by beam or greedy search. We\nalso compare how synthetic data compares to genuine bitext and study various\ndomain effects. Finally, we scale to hundreds of millions of monolingual\nsentences and achieve a new state of the art of 35 BLEU on the WMT'14\nEnglish-German test set.", "field": [], "task": ["Machine Translation"], "method": [], "dataset": ["WMT2014 English-German", "WMT2014 English-French"], "metric": ["BLEU score", "SacreBLEU"], "title": "Understanding Back-Translation at Scale"} {"abstract": "Cross-lingual document classification aims at training a document classifier\non resources in one language and transferring it to a different language\nwithout any additional resources. Several approaches have been proposed in the\nliterature and the current best practice is to evaluate them on a subset of the\nReuters Corpus Volume 2. However, this subset covers only a few languages\n(English, German, French and Spanish) and almost all published works focus on\nthe transfer between English and German. In addition, we have observed that\nthe class prior distributions differ significantly between the languages. We\nargue that this complicates the evaluation of multilinguality. In this\npaper, we propose a new subset of the Reuters corpus with balanced class priors\nfor eight languages. By adding Italian, Russian, Japanese and Chinese, we cover\nlanguages which are very different with respect to syntax, morphology, etc. We\nprovide strong baselines for all language transfer directions using\nmultilingual word and sentence embeddings respectively.
Our goal is to offer a\nfreely available framework to evaluate cross-lingual document classification,\nand we hope to foster by these means, research in this important area.", "field": [], "task": ["Cross-Lingual Document Classification", "Document Classification", "Sentence Embeddings"], "method": [], "dataset": ["MLDoc Zero-Shot English-to-German", "MLDoc Zero-Shot English-to-French", "MLDoc Zero-Shot English-to-Spanish", "MLDoc Zero-Shot German-to-French", "MLDoc Zero-Shot English-to-Chinese", "MLDoc Zero-Shot English-to-Japanese", "MLDoc Zero-Shot English-to-Italian", "MLDoc Zero-Shot English-to-Russian"], "metric": ["Accuracy"], "title": "A Corpus for Multilingual Document Classification in Eight Languages"} {"abstract": "Analyzing videos of human actions involves understanding the temporal\nrelationships among video frames. State-of-the-art action recognition\napproaches rely on traditional optical flow estimation methods to pre-compute\nmotion information for CNNs. Such a two-stage approach is computationally\nexpensive, storage demanding, and not end-to-end trainable. In this paper, we\npresent a novel CNN architecture that implicitly captures motion information\nbetween adjacent frames. We name our approach hidden two-stream CNNs because it\nonly takes raw video frames as input and directly predicts action classes\nwithout explicitly computing optical flow. Our end-to-end approach is 10x\nfaster than its two-stage baseline. Experimental results on four challenging\naction recognition datasets: UCF101, HMDB51, THUMOS14 and ActivityNet v1.2 show\nthat our approach significantly outperforms the previous best real-time\napproaches.", "field": [], "task": ["Action Recognition", "Optical Flow Estimation", "Temporal Action Localization"], "method": [], "dataset": ["UCF101", "HMDB-51"], "metric": ["Average accuracy of 3 splits", "3-fold Accuracy"], "title": "Hidden Two-Stream Convolutional Networks for Action Recognition"} {"abstract": "Extending state-of-the-art object detectors from image to video is\nchallenging. The accuracy of detection suffers from degenerated object\nappearances in videos, e.g., motion blur, video defocus, rare poses, etc.\nExisting work attempts to exploit temporal information on box level, but such\nmethods are not trained end-to-end. We present flow-guided feature aggregation,\nan accurate and end-to-end learning framework for video object detection. It\nleverages temporal coherence on feature level instead. It improves the\nper-frame features by aggregation of nearby features along the motion paths,\nand thus improves the video recognition accuracy. Our method significantly\nimproves upon strong single-frame baselines in ImageNet VID, especially for\nmore challenging fast moving objects. Our framework is principled, and on par\nwith the best engineered systems winning the ImageNet VID challenges 2016,\nwithout additional bells-and-whistles. The proposed method, together with Deep\nFeature Flow, powered the winning entry of ImageNet VID challenges 2017. The\ncode is available at\nhttps://github.com/msracver/Flow-Guided-Feature-Aggregation.", "field": [], "task": ["Object Detection", "Video Object Detection", "Video Recognition"], "method": [], "dataset": ["ImageNet VID"], "metric": ["runtime (ms)", "MAP"], "title": "Flow-Guided Feature Aggregation for Video Object Detection"} {"abstract": "In this paper we argue for the fundamental importance of the value\ndistribution: the distribution of the random return received by a reinforcement\nlearning agent. 
This is in contrast to the common approach to reinforcement\nlearning which models the expectation of this return, or value. Although there\nis an established body of literature studying the value distribution, thus far\nit has always been used for a specific purpose such as implementing risk-aware\nbehaviour. We begin with theoretical results in both the policy evaluation and\ncontrol settings, exposing a significant distributional instability in the\nlatter. We then use the distributional perspective to design a new algorithm\nwhich applies Bellman's equation to the learning of approximate value\ndistributions. We evaluate our algorithm using the suite of games from the\nArcade Learning Environment. We obtain both state-of-the-art results and\nanecdotal evidence demonstrating the importance of the value distribution in\napproximate reinforcement learning. Finally, we combine theoretical and\nempirical evidence to highlight the ways in which the value distribution\nimpacts learning in the approximate setting.", "field": [], "task": ["Atari Games"], "method": [], "dataset": ["Atari 2600 Amidar", "Atari 2600 River Raid", "Atari 2600 Beam Rider", "Atari 2600 Video Pinball", "Atari 2600 Demon Attack", "Atari 2600 Enduro", "Atari-57", "Atari 2600 Alien", "Atari 2600 Boxing", "Atari 2600 Bank Heist", "Atari 2600 Tutankham", "Atari 2600 Time Pilot", "Atari 2600 Space Invaders", "Atari 2600 Assault", "Atari 2600 Gravitar", "Atari 2600 Ice Hockey", "Atari 2600 Bowling", "Atari 2600 Private Eye", "Atari 2600 Berzerk", "Atari 2600 Asterix", "Atari 2600 Breakout", "Atari 2600 Name This Game", "Atari 2600 Crazy Climber", "Atari 2600 Pong", "Atari 2600 Krull", "Atari 2600 Freeway", "Atari 2600 James Bond", "Atari 2600 Robotank", "Atari 2600 Kangaroo", "Atari 2600 Venture", "Atari 2600 Asteroids", "Atari 2600 Fishing Derby", "Atari 2600 Ms. Pacman", "Atari 2600 Seaquest", "Atari 2600 Tennis", "Atari 2600 Zaxxon", "Atari 2600 Frostbite", "Atari 2600 Star Gunner", "Atari 2600 Double Dunk", "Atari 2600 Battle Zone", "Atari 2600 Gopher", "Atari 2600 Road Runner", "Atari 2600 Atlantis", "Atari 2600 Kung-Fu Master", "Atari 2600 Chopper Command", "Atari 2600 Up and Down", "Atari 2600 Wizard of Wor", "Atari 2600 Q*Bert", "Atari 2600 Centipede", "Atari 2600 HERO"], "metric": ["Score", "Medium Human-Normalized Score"], "title": "A Distributional Perspective on Reinforcement Learning"} {"abstract": "The performance of face detection has been largely improved with the\ndevelopment of convolutional neural network. However, the occlusion issue due\nto mask and sunglasses, is still a challenging problem. The improvement on the\nrecall of these occluded cases usually brings the risk of high false positives.\nIn this paper, we present a novel face detector called Face Attention Network\n(FAN), which can significantly improve the recall of the face detection problem\nin the occluded case without compromising the speed. More specifically, we\npropose a new anchor-level attention, which will highlight the features from\nthe face region. Integrated with our anchor assign strategy and data\naugmentation techniques, we obtain state-of-art results on public face\ndetection benchmarks like WiderFace and MAFA. 
The code will be released for\nreproduction.", "field": [], "task": ["Data Augmentation", "Face Detection", "Occluded Face Detection"], "method": [], "dataset": ["MAFA"], "metric": ["MAP"], "title": "Face Attention Network: An Effective Face Detector for the Occluded Faces"} {"abstract": "Research on face spoofing detection has mainly been focused on analyzing the\nluminance of the face images, hence discarding the chrominance information\nwhich can be useful for discriminating fake faces from genuine ones. In this\nwork, we propose a new face anti-spoofing method based on color texture\nanalysis. We analyze the joint color-texture information from the luminance and\nthe chrominance channels using a color local binary pattern descriptor. More\nspecifically, the feature histograms are extracted from each image band\nseparately. Extensive experiments on two benchmark datasets, namely CASIA face\nanti-spoofing and Replay-Attack databases, showed excellent results compared to\nthe state-of-the-art. Most importantly, our inter-database evaluation depicts\nthat the proposed approach showed very promising generalization capabilities.", "field": [], "task": ["Face Anti-Spoofing", "Texture Classification"], "method": [], "dataset": ["MSU-MFSD", "Replay-Attack"], "metric": ["HTER", "Equal Error Rate", "EER"], "title": "face anti-spoofing based on color texture analysis"} {"abstract": "Most of the Neural Machine Translation (NMT) models are based on the\nsequence-to-sequence (Seq2Seq) model with an encoder-decoder framework equipped\nwith the attention mechanism. However, the conventional attention mechanism\ntreats the decoding at each time step equally with the same matrix, which is\nproblematic since the softness of the attention for different types of words\n(e.g. content words and function words) should differ. Therefore, we propose a\nnew model with a mechanism called Self-Adaptive Control of Temperature (SACT)\nto control the softness of attention by means of an attention temperature.\nExperimental results on the Chinese-English translation and English-Vietnamese\ntranslation demonstrate that our model outperforms the baseline models, and the\nanalysis and the case study show that our model can attend to the most relevant\nelements in the source-side contexts and generate the translation of high\nquality.", "field": [], "task": ["Machine Translation"], "method": [], "dataset": ["IWSLT2015 English-Vietnamese"], "metric": ["BLEU"], "title": "Learning When to Concentrate or Divert Attention: Self-Adaptive Attention Temperature for Neural Machine Translation"} {"abstract": "We explore unsupervised pre-training for speech recognition by learning representations of raw audio. wav2vec is trained on large amounts of unlabeled audio data and the resulting representations are then used to improve acoustic model training. We pre-train a simple multi-layer convolutional neural network optimized via a noise contrastive binary classification task. Our experiments on WSJ reduce WER of a strong character-based log-mel filterbank baseline by up to 36% when only a few hours of transcribed data is available. Our approach achieves 2.43% WER on the nov92 test set. 
This outperforms Deep Speech 2, the best reported character-based system in the literature, while using two orders of magnitude less labeled training data.", "field": [], "task": ["Speech Recognition", "Unsupervised Pre-training"], "method": [], "dataset": ["TIMIT"], "metric": ["Percentage error"], "title": "wav2vec: Unsupervised Pre-training for Speech Recognition"} {"abstract": "Neural encoder-decoder models have been successful in natural language\ngeneration tasks. However, real applications of abstractive summarization must\nconsider the additional constraint that a generated summary should not exceed a\ndesired length. In this paper, we propose a simple but effective extension of a\nsinusoidal positional encoding (Vaswani et al., 2017) to enable a neural\nencoder-decoder model to preserve the length constraint. Unlike previous\nstudies that learn embeddings representing each length, the proposed\nmethod can generate a text of any length even if the target length is not\npresent in training data. The experimental results show that the proposed\nmethod can not only control the generation length but also improve the ROUGE\nscores.", "field": [], "task": ["Abstractive Text Summarization", "Sentence Summarization", "Text Generation", "Text Summarization"], "method": [], "dataset": ["DUC 2004 Task 1"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Positional Encoding to Control Output Sequence Length"} {"abstract": "It is intuitive that NLP tasks for logographic languages like Chinese should benefit from the use of the glyph information in those languages. However, due to the lack of rich pictographic evidence in glyphs and the weak generalization ability of standard computer vision models on character data, an effective way to utilize the glyph information remains to be found. In this paper, we address this gap by presenting Glyce, the glyph-vectors for Chinese character representations. We make three major innovations: (1) We use historical Chinese scripts (e.g., bronzeware script, seal script, traditional Chinese, etc.) to enrich the pictographic evidence in characters; (2) We design CNN structures (called tianzege-CNN) tailored to Chinese character image processing; and (3) We use image-classification as an auxiliary task in a multi-task learning setup to increase the model's ability to generalize. We show that glyph-based models are able to consistently outperform word/char ID-based models in a wide range of Chinese NLP tasks. We are able to set new state-of-the-art results for a variety of Chinese NLP tasks, including tagging (NER, CWS, POS), sentence pair classification, single sentence classification tasks, dependency parsing, and semantic role labeling. For example, the proposed model achieves an F1 score of 80.6 on the OntoNotes dataset of NER, +1.5 over BERT; it achieves an almost perfect accuracy of 99.8\\% on the Fudan corpus for text classification.
Code found at https://github.com/ShannonAI/glyce.", "field": [], "task": ["Chinese Dependency Parsing", "Chinese Named Entity Recognition", "Chinese Part-of-Speech Tagging", "Chinese Semantic Role Labeling", "Chinese Sentence Pair Classification", "Chinese Word Segmentation", "Dependency Parsing", "Document Classification", "Image Classification", "Language Modelling", "Machine Translation", "Multi-Task Learning", "Part-Of-Speech Tagging", "Semantic Role Labeling", "Semantic Textual Similarity", "Sentence Classification", "Sentiment Analysis", "Text Classification"], "method": [], "dataset": ["MSR", "Resume NER", "OntoNotes 4", "CITYU", "MSRA", "PKU", "AS", "Weibo NER"], "metric": ["Precision", "Recall", "F1"], "title": "Glyce: Glyph-vectors for Chinese Character Representations"} {"abstract": "Cloud Segmentation is one of the fundamental steps in optical remote sensing image analysis. Current methods for identification of cloud regions in aerial or satellite images are not accurate enough especially in the presence of snow and haze. This paper presents a deep learning-based framework to address the problem of cloud detection in Landsat 8 imagery. The proposed method benefits from a convolutional neural network (Cloud-Net+) with multiple blocks, which is trained with a novel loss function (Filtered Jaccard loss). The proposed loss function is more sensitive to the absence of cloud pixels in an image and penalizes/rewards the predicted mask more accurately. The combination of Cloud-Net+ and Filtered Jaccard loss function delivers superior results over four public cloud detection datasets. Our experiments on one of the most common public datasets in computer vision (Pascal VOC dataset) show that the proposed network/loss function could be used in other segmentation tasks for more accurate performance/evaluation.", "field": [], "task": ["Cloud Detection"], "method": [], "dataset": ["38-Cloud"], "metric": ["Jaccard (Mean)"], "title": "Cloud-Net+: A Cloud Segmentation CNN for Landsat 8 Remote Sensing Imagery Optimized with Filtered Jaccard Loss Function"} {"abstract": "Forecasting the future behaviors of dynamic actors is an important task in many robotics applications such as self-driving. It is extremely challenging as actors have latent intentions and their trajectories are governed by complex interactions between the other actors, themselves, and the maps. In this paper, we propose LaneRCNN, a graph-centric motion forecasting model. Importantly, relying on a specially designed graph encoder, we learn a local lane graph representation per actor (LaneRoI) to encode its past motions and the local map topology. We further develop an interaction module which permits efficient message passing among local graph representations within a shared global lane graph. Moreover, we parameterize the output trajectories based on lane graphs, a more amenable prediction parameterization. Our LaneRCNN captures the actor-to-actor and the actor-to-map relations in a distributed and map-aware manner. We demonstrate the effectiveness of our approach on the large-scale Argoverse Motion Forecasting Benchmark. 
We achieve the 1st place on the leaderboard and significantly outperform previous best results.", "field": [], "task": ["Motion Forecasting"], "method": [], "dataset": ["Argoverse CVPR 2020"], "metric": ["p-minADE (K=6)", "MR (K=1)", "DAC (K=6)", "DAC (K=1)", "minFDE (K=6)", "minADE (K=1)", "MR (K=6)", "minADE (K=6)", "minFDE (K=1)", "p-minFDE (K=6)"], "title": "LaneRCNN: Distributed Representations for Graph-Centric Motion Forecasting"} {"abstract": "Top-down visual attention mechanisms have been used extensively in image\ncaptioning and visual question answering (VQA) to enable deeper image\nunderstanding through fine-grained analysis and even multiple steps of\nreasoning. In this work, we propose a combined bottom-up and top-down attention\nmechanism that enables attention to be calculated at the level of objects and\nother salient image regions. This is the natural basis for attention to be\nconsidered. Within our approach, the bottom-up mechanism (based on Faster\nR-CNN) proposes image regions, each with an associated feature vector, while\nthe top-down mechanism determines feature weightings. Applying this approach to\nimage captioning, our results on the MSCOCO test server establish a new\nstate-of-the-art for the task, achieving CIDEr / SPICE / BLEU-4 scores of\n117.9, 21.5 and 36.9, respectively. Demonstrating the broad applicability of\nthe method, applying the same approach to VQA we obtain first place in the 2017\nVQA Challenge.", "field": [], "task": ["Image Captioning", "Visual Question Answering"], "method": [], "dataset": ["VQA v2 test-std", "GQA Test2019"], "metric": ["Binary", "overall", "Validity", "Consistency", "Plausibility", "Distribution", "Accuracy", "Open"], "title": "Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering"} {"abstract": "This paper presents a detailed study of improving visual representations for vision language (VL) tasks and develops an improved object detection model to provide object-centric representations of images. Compared to the most widely used \\emph{bottom-up and top-down} model \\cite{anderson2018bottom}, the new model is bigger, better-designed for VL tasks, and pre-trained on much larger training corpora that combine multiple public annotated object detection datasets. Therefore, it can generate representations of a richer collection of visual objects and concepts. While previous VL research focuses mainly on improving the vision-language fusion model and leaves the object detection model improvement untouched, we show that visual features matter significantly in VL models. In our experiments we feed the visual features generated by the new object detection model into a Transformer-based VL fusion model \\oscar \\cite{li2020oscar}, and utilize an improved approach \\short\\ to pre-train the VL model and fine-tune it on a wide range of downstream VL tasks. Our results show that the new visual features significantly improve the performance across all VL tasks, creating new state-of-the-art results on seven public benchmarks. 
We will release the new object detection model to the public.", "field": [], "task": ["Object Detection"], "method": [], "dataset": ["VQA v2 test-std", "nocaps out-of-domain", "nocaps near-domain", "GQA Test2019", "nocaps in-domain", "nocaps entire"], "metric": ["Consistency", "CIDEr", "ROUGE-L", "Open", "B3", "B4", "number", "B2", "B1", "overall", "METEOR", "Plausibility", "SPICE", "Accuracy", "Binary", "other", "Validity", "Distribution", "yes/no"], "title": "VinVL: Revisiting Visual Representations in Vision-Language Models"} {"abstract": "Learning to classify new categories based on just one or a few examples is a\nlong-standing challenge in modern computer vision. In this work, we propose a\nsimple yet effective method for few-shot (and one-shot) object recognition. Our\napproach is based on a modified auto-encoder, denoted Delta-encoder, that\nlearns to synthesize new samples for an unseen category just by seeing a few\nexamples from it. The synthesized samples are then used to train a classifier.\nThe proposed approach learns to both extract transferable intra-class\ndeformations, or \"deltas\", between same-class pairs of training examples, and\nto apply those deltas to the few provided examples of a novel class (unseen\nduring training) in order to efficiently synthesize samples from that new\nclass. The proposed method improves over the state-of-the-art in one-shot\nobject-recognition and compares favorably in the few-shot case. Upon acceptance,\ncode will be made available.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Object Recognition"], "method": [], "dataset": ["Mini-Imagenet 5-way (1-shot)", "Caltech-256 5-way (1-shot)", "CIFAR100 5-way (1-shot)", "CUB 200 5-way 1-shot"], "metric": ["Accuracy"], "title": "Delta-encoder: an effective sample synthesis method for few-shot object recognition"} {"abstract": "In this work, a region-based Deep Convolutional Neural Network framework is\nproposed for document structure learning. The contribution of this work\ninvolves efficient training of region based classifiers and effective\nensembling for document image classification. A primary level of `inter-domain'\ntransfer learning is used by exporting weights from a pre-trained VGG16\narchitecture on the ImageNet dataset to train a document classifier on whole\ndocument images. Exploiting the nature of region based influence modelling, a\nsecondary level of `intra-domain' transfer learning is used for rapid training\nof deep learning models for image segments. Finally, stacked generalization\nbased ensembling is utilized for combining the predictions of the base deep\nneural network models. The proposed method achieves state-of-the-art accuracy\nof 92.2% on the popular RVL-CDIP document image dataset, exceeding benchmarks\nset by existing algorithms.", "field": [], "task": ["Document Image Classification", "Image Classification", "Transfer Learning"], "method": [], "dataset": ["RVL-CDIP"], "metric": ["Accuracy"], "title": "Document Image Classification with Intra-Domain Transfer Learning and Stacked Generalization of Deep Convolutional Neural Networks"} {"abstract": "In human-object interactions (HOI) recognition, conventional methods consider\nthe human body as a whole and pay uniform attention to the entire body\nregion. They ignore the fact that, normally, a human interacts with an object by\nusing some parts of the body.
In this paper, we argue that different body parts\nshould receive different attention in HOI recognition, and the\ncorrelations between different body parts should be further considered. This is\nbecause our body parts always work collaboratively. We propose a new pairwise\nbody-part attention model which can learn to focus on crucial parts, and their\ncorrelations for HOI recognition. A novel attention based feature selection\nmethod and a feature representation scheme that can capture pairwise\ncorrelations between body parts are introduced in the model. Our proposed\napproach achieved a 4% improvement over the state-of-the-art results in HOI\nrecognition on the HICO dataset. We will make our model and source codes\npublicly available.", "field": [], "task": ["Feature Selection", "Human-Object Interaction Detection"], "method": [], "dataset": ["HICO"], "metric": ["mAP"], "title": "Pairwise Body-Part Attention for Recognizing Human-Object Interactions"} {"abstract": "We present a new deep learning approach for matching deformable shapes by\nintroducing {\it Shape Deformation Networks} which jointly encode 3D shapes and\ncorrespondences. This is achieved by factoring the surface representation into\n(i) a template that parameterizes the surface, and (ii) a learnt global\nfeature vector that parameterizes the transformation of the template into the\ninput surface. By predicting this feature for a new shape, we implicitly\npredict correspondences between this shape and the template. We show that these\ncorrespondences can be improved by an additional step which improves the shape\nfeature by minimizing the Chamfer distance between the input and transformed\ntemplate. We demonstrate that our simple approach improves on state-of-the-art\nresults on the difficult FAUST-inter challenge, with an average correspondence\nerror of 2.88cm. We show, on the TOSCA dataset, that our method is robust to\nmany types of perturbations, and generalizes to non-human shapes. This\nrobustness allows it to perform well on real, unclean meshes from the SCAPE\ndataset.", "field": [], "task": ["3D Human Pose Estimation", "3D Point Cloud Matching", "3D Surface Generation"], "method": [], "dataset": ["Faust"], "metric": ["L2"], "title": "3D-CODED : 3D Correspondences by Deep Deformation"} {"abstract": "An established method for Word Sense Induction (WSI) uses a language model to\npredict probable substitutes for target words, and induces senses by clustering\nthese resulting substitute vectors.\n We replace the ngram-based language model (LM) with a recurrent one. Beyond\nbeing more accurate, the use of the recurrent LM allows us to effectively query\nit in a creative way, using what we call dynamic symmetric patterns.\n The combination of the RNN-LM and the dynamic symmetric patterns results in\nstrong substitute vectors for WSI, allowing us to surpass the current\nstate-of-the-art on the SemEval 2013 WSI shared task by a large margin.", "field": [], "task": ["Word Sense Induction"], "method": [], "dataset": ["SemEval 2013"], "metric": ["F_NMI", "F-BC", "AVG"], "title": "Word Sense Induction with Neural biLM and Symmetric Patterns"} {"abstract": "Recent deep learning approaches to single image super-resolution have\nachieved impressive results in terms of traditional error measures and\nperceptual quality. However, in each case it remains challenging to achieve\nhigh quality results for large upsampling factors.
To this end, we propose a\nmethod (ProSR) that is progressive both in architecture and training: the\nnetwork upsamples an image in intermediate steps, while the learning process is\norganized from easy to hard, as is done in curriculum learning. To obtain more\nphotorealistic results, we design a generative adversarial network (GAN), named\nProGanSR, that follows the same progressive multi-scale design principle. This\nnot only allows to scale well to high upsampling factors (e.g., 8x) but\nconstitutes a principled multi-scale approach that increases the reconstruction\nquality for all upsampling factors simultaneously. In particular ProSR ranks\n2nd in terms of SSIM and 4th in terms of PSNR in the NTIRE2018 SISR challenge\n[34]. Compared to the top-ranking team, our model is marginally lower, but runs\n5 times faster.", "field": [], "task": ["Curriculum Learning", "Image Super-Resolution", "SSIM", "Super-Resolution"], "method": [], "dataset": ["Urban100 - 4x upscaling", "BSD100 - 4x upscaling", "Set14 - 4x upscaling"], "metric": ["PSNR"], "title": "A Fully Progressive Approach to Single-Image Super-Resolution"} {"abstract": "We present a conceptually simple, flexible, and general framework for\nfew-shot learning, where a classifier must learn to recognise new classes given\nonly few examples from each. Our method, called the Relation Network (RN), is\ntrained end-to-end from scratch. During meta-learning, it learns to learn a\ndeep distance metric to compare a small number of images within episodes, each\nof which is designed to simulate the few-shot setting. Once trained, a RN is\nable to classify images of new classes by computing relation scores between\nquery images and the few examples of each new class without further updating\nthe network. Besides providing improved performance on few-shot learning, our\nframework is easily extended to zero-shot learning. Extensive experiments on\nfive benchmarks demonstrate that our simple approach provides a unified and\neffective approach for both of these two tasks.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Meta-Learning", "Zero-Shot Learning"], "method": [], "dataset": ["OMNIGLOT - 1-Shot, 5-way", "CIFAR-FS 5-way (5-shot)", "OMNIGLOT - 5-Shot, 20-way", "Mini-Imagenet 5-way (1-shot)", "Mini-ImageNet-CUB 5-way (1-shot)", "OMNIGLOT - 5-Shot, 5-way", "OMNIGLOT - 1-Shot, 20-way", "CUB 200 5-way 1-shot", "CUB 200 5-way 5-shot", "Tiered ImageNet 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Learning to Compare: Relation Network for Few-Shot Learning"} {"abstract": "Recently popularized graph neural networks achieve the state-of-the-art\naccuracy on a number of standard benchmark datasets for graph-based\nsemi-supervised learning, improving significantly over existing approaches.\nThese architectures alternate between a propagation layer that aggregates the\nhidden states of the local neighborhood and a fully-connected layer. Perhaps\nsurprisingly, we show that a linear model, that removes all the intermediate\nfully-connected layers, is still able to achieve a performance comparable to\nthe state-of-the-art models. This significantly reduces the number of\nparameters, which is critical for semi-supervised learning where number of\nlabeled examples are small. This in turn allows a room for designing more\ninnovative propagation layers. 
Based on this insight, we propose a novel graph\nneural network that removes all the intermediate fully-connected layers, and\nreplaces the propagation layers with attention mechanisms that respect the\nstructure of the graph. The attention mechanism allows us to learn a dynamic\nand adaptive local summary of the neighborhood to achieve more accurate\npredictions. In a number of experiments on benchmark citation networks\ndatasets, we demonstrate that our approach outperforms competing methods. By\nexamining the attention weights among neighbors, we show that our model\nprovides some interesting insights on how neighbors influence each other.", "field": [], "task": ["Graph Regression"], "method": [], "dataset": ["Lipophilicity "], "metric": ["RMSE"], "title": "Attention-based Graph Neural Network for Semi-supervised Learning"} {"abstract": "Meta-learning, or learning-to-learn, has proven to be a successful strategy in attacking problems in supervised learning and reinforcement learning that involve small amounts of data. State-of-the-art solutions involve learning an initialization and/or learning algorithm using a set of training episodes so that the meta learner can generalize to an evaluation episode quickly. These methods perform well but often lack good quantification of uncertainty, which can be vital to real-world applications when data is lacking. We propose a meta-learning method which efficiently amortizes hierarchical variational inference across tasks, learning a prior distribution over neural network weights so that a few steps of Bayes by Backprop will produce a good task-specific approximate posterior. We show that our method produces good uncertainty estimates on contextual bandit and few-shot learning benchmarks.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Meta-Learning", "Variational Inference"], "method": [], "dataset": ["Mini-Imagenet 5-way (1-shot)"], "metric": ["Accuracy"], "title": "Amortized Bayesian Meta-Learning"} {"abstract": "Gaussian Processes (GPs) are effective Bayesian predictors. We here show for the first time that instance labels of a GP classifier can be inferred in the multiple instance learning (MIL) setting using variational Bayes. We achieve this via a new construction of the bag likelihood that assumes a large value if the instance predictions obey the MIL constraints and a small value otherwise. This construction lets us derive the update rules for the variational parameters analytically, assuring both scalable learning and fast convergence. We observe this model to improve the state of the art in instance label prediction from bag-level supervision in the 20 Newsgroups benchmark, as well as in Barrett's cancer tumor localization from histopathology tissue microarray images. Furthermore, we introduce a novel pipeline for weakly supervised object detection naturally complemented with our model, which improves the state of the art on the PASCAL VOC 2007 and 2012 data sets. Last but not least, the performance of our model can be further boosted up using mixed supervision: a combination of weak (bag) and strong (instance) labels. 
\r", "field": [], "task": ["Gaussian Processes", "Multiple Instance Learning", "Object Detection", "Weakly Supervised Object Detection"], "method": [], "dataset": ["PASCAL VOC 2007", "PASCAL VOC 2012 test"], "metric": ["MAP"], "title": "Variational Bayesian Multiple Instance Learning With Gaussian Processes"} {"abstract": "Many modern NLP systems rely on word embeddings, previously trained in an\nunsupervised manner on large corpora, as base features. Efforts to obtain\nembeddings for larger chunks of text, such as sentences, have however not been\nso successful. Several attempts at learning unsupervised representations of\nsentences have not reached satisfactory enough performance to be widely\nadopted. In this paper, we show how universal sentence representations trained\nusing the supervised data of the Stanford Natural Language Inference datasets\ncan consistently outperform unsupervised methods like SkipThought vectors on a\nwide range of transfer tasks. Much like how computer vision uses ImageNet to\nobtain features, which can then be transferred to other tasks, our work tends\nto indicate the suitability of natural language inference for transfer learning\nto other NLP tasks. Our encoder is publicly available.", "field": [], "task": ["Cross-Lingual Natural Language Inference", "Natural Language Inference", "Semantic Textual Similarity", "Transfer Learning", "Word Embeddings"], "method": [], "dataset": ["XNLI Zero-Shot English-to-German", "SNLI", "XNLI Zero-Shot English-to-Spanish", "MRPC", "SentEval", "XNLI Zero-Shot English-to-French"], "metric": ["SICK-E", "% Test Accuracy", "STS", "Parameters", "MRPC", "SICK-R", "Accuracy", "F1", "% Train Accuracy"], "title": "Supervised Learning of Universal Sentence Representations from Natural Language Inference Data"} {"abstract": "Single image rain removal is a typical inverse problem in computer vision.\nThe deep learning technique has been verified to be effective for this task and\nachieved state-of-the-art performance. However, previous deep learning methods\nneed to pre-collect a large set of image pairs with/without synthesized rain\nfor training, which tends to make the neural network be biased toward learning\nthe specific patterns of the synthesized rain, while be less able to generalize\nto real test samples whose rain types differ from those in the training data.\nTo this issue, this paper firstly proposes a semi-supervised learning paradigm\ntoward this task. Different from traditional deep learning methods which only\nuse supervised image pairs with/without synthesized rain, we further put real\nrainy images, without need of their clean ones, into the network training\nprocess. This is realized by elaborately formulating the residual between an\ninput rainy image and its expected network output (clear image without rain) as\na specific parametrized rain streaks distribution. The network is therefore\ntrained to adapt real unsupervised diverse rain types through transferring from\nthe supervised synthesized rain, and thus both the short-of-training-sample and\nbias-to-supervised-sample issues can be evidently alleviated. 
Experiments on\nsynthetic and real data verify the superiority of our model compared to the\nstate-of-the-arts.", "field": [], "task": ["Rain Removal", "Single Image Deraining", "Transfer Learning"], "method": [], "dataset": ["Test2800", "Rain100H", "Test100", "Test1200", "Rain100L"], "metric": ["SSIM", "PSNR"], "title": "Semi-supervised Transfer Learning for Image Rain Removal"} {"abstract": "We aim to automatically generate natural language descriptions about an input\nstructured knowledge base (KB). We build our generation framework based on a\npointer network which can copy facts from the input KB, and add two attention\nmechanisms: (i) slot-aware attention to capture the association between a slot\ntype and its corresponding slot value; and (ii) a new \\emph{table position\nself-attention} to capture the inter-dependencies among related slots. For\nevaluation, besides standard metrics including BLEU, METEOR, and ROUGE, we\npropose a KB reconstruction based metric by extracting a KB from the generation\noutput and comparing it with the input KB. We also create a new data set which\nincludes 106,216 pairs of structured KBs and their corresponding natural\nlanguage descriptions for two distinct entity types. Experiments show that our\napproach significantly outperforms state-of-the-art methods. The reconstructed\nKB achieves 68.8% - 72.6% F-score.", "field": [], "task": ["Data-to-Text Generation", "KB-to-Language Generation", "Table-to-Text Generation", "Text Generation"], "method": [], "dataset": ["Wikipedia Person and Animal Dataset"], "metric": ["BLEU", "METEOR", "ROUGE"], "title": "Describing a Knowledge Base"} {"abstract": "We present an efficient approach for leveraging the knowledge from multiple modalities in training unimodal 3D convolutional neural networks (3D-CNNs) for the task of dynamic hand gesture recognition. Instead of explicitly combining multimodal information, which is commonplace in many state-of-the-art methods, we propose a different framework in which we embed the knowledge of multiple modalities in individual networks so that each unimodal network can achieve an improved performance. In particular, we dedicate separate networks per available modality and enforce them to collaborate and learn to develop networks with common semantics and better representations. We introduce a \"spatiotemporal semantic alignment\" loss (SSA) to align the content of the features from different networks. In addition, we regularize this loss with our proposed \"focal regularization parameter\" to avoid negative knowledge transfer. Experimental results show that our framework improves the test time recognition accuracy of unimodal networks, and provides the state-of-the-art performance on various dynamic hand gesture recognition datasets.", "field": [], "task": ["Action Recognition", "Gesture Recognition", "Hand Gesture Recognition", "Hand-Gesture Recognition", "Transfer Learning"], "method": [], "dataset": ["EgoGesture", "NVGesture", "VIVA Hand Gestures Dataset"], "metric": ["Accuracy"], "title": "Improving the Performance of Unimodal Dynamic Hand-Gesture Recognition with Multimodal Training"} {"abstract": "Head-driven phrase structure grammar (HPSG) enjoys a uniform formalism representing rich contextual syntactic and even semantic meanings. This paper makes the first attempt to formulate a simplified HPSG by integrating constituent and dependency formal representations into head-driven phrase structure. 
Then two parsing algorithms are respectively proposed for two converted tree representations, division span and joint span. As HPSG encodes both constituent and dependency structure information, the proposed HPSG parsers may be regarded as a sort of joint decoder for both types of structures and thus are evaluated in terms of extracted or converted constituent and dependency parsing trees. Our parser achieves new state-of-the-art performance for both parsing tasks on Penn Treebank (PTB) and Chinese Penn Treebank, verifying the effectiveness of jointly learning constituent and dependency structures. In detail, we report 96.33 F1 for constituent parsing and 97.20\% UAS for dependency parsing on PTB.", "field": [], "task": ["Constituency Parsing", "Dependency Parsing"], "method": [], "dataset": ["Penn Treebank"], "metric": ["F1 score", "UAS", "POS", "LAS"], "title": "Head-Driven Phrase Structure Grammar Parsing on Penn Treebank"} {"abstract": "We propose a neural language model capable of unsupervised syntactic\nstructure induction. The model leverages the structure information to form\nbetter semantic representations and better language modeling. Standard\nrecurrent neural networks are limited by their structure and fail to\nefficiently use syntactic information. On the other hand, tree-structured\nrecursive networks usually require additional structural supervision at the\ncost of human expert annotation. In this paper, we propose a novel neural\nlanguage model, called the Parsing-Reading-Predict Networks (PRPN), that can\nsimultaneously induce the syntactic structure from unannotated sentences and\nleverage the inferred structure to learn a better language model. In our model,\nthe gradient can be directly back-propagated from the language model loss into\nthe neural parsing network. Experiments show that the proposed model can\ndiscover the underlying syntactic structure and achieve state-of-the-art\nperformance on word/character-level language model tasks.", "field": [], "task": ["Constituency Grammar Induction", "Language Modelling"], "method": [], "dataset": ["PTB"], "metric": ["Max F1 (WSJ)", "Mean F1 (WSJ)"], "title": "Neural Language Modeling by Jointly Learning Syntax and Lexicon"} {"abstract": "This paper investigates how far a very deep neural network is from attaining\nclose to saturating performance on existing 2D and 3D face alignment datasets.\nTo this end, we make the following 5 contributions: (a) we construct, for the\nfirst time, a very strong baseline by combining a state-of-the-art architecture\nfor landmark localization with a state-of-the-art residual block, train it on a\nvery large yet synthetically expanded 2D facial landmark dataset and finally\nevaluate it on all other 2D facial landmark datasets. (b) We create a network guided by\n2D landmarks, which converts 2D landmark annotations to 3D and unifies\nall existing datasets, leading to the creation of LS3D-W, the largest and most\nchallenging 3D facial landmark dataset to date (~230,000 images). (c) Following\nthat, we train a neural network for 3D face alignment and evaluate it on the\nnewly introduced LS3D-W. (d) We further look into the effect of all\n\"traditional\" factors affecting face alignment performance like large pose,\ninitialization and resolution, and introduce a \"new\" one, namely the size of\nthe network. (e) We show that both 2D and 3D face alignment networks achieve\nperformance of remarkable accuracy, which is probably close to saturating the\ndatasets used.
Training and testing code as well as the dataset can be\ndownloaded from https://www.adrianbulat.com/face-alignment/", "field": [], "task": ["Face Alignment", "Head Pose Estimation"], "method": [], "dataset": ["AFLW2000", "LS3D-W Balanced", "BIWI", "300-VW (C)"], "metric": ["MAE", "Error rate", "AUC0.07", "MAE (trained with other data)"], "title": "How far are we from solving the 2D & 3D Face Alignment problem? (and a dataset of 230,000 3D facial landmarks)"} {"abstract": "Although widely adopted, existing approaches for fine-tuning pre-trained language models have been shown to be unstable across hyper-parameter settings, motivating recent work on trust region methods. In this paper, we present a simplified and efficient method rooted in trust region theory that replaces previously used adversarial objectives with parametric noise (sampling from either a normal or uniform distribution), thereby discouraging representation change during fine-tuning when possible without hurting performance. We also introduce a new analysis to motivate the use of trust region methods more generally, by studying representational collapse; the degradation of generalizable representations from pre-trained models as they are fine-tuned for a specific end task. Extensive experiments show that our fine-tuning method matches or exceeds the performance of previous trust region methods on a range of understanding and generation tasks (including DailyMail/CNN, Gigaword, Reddit TIFU, and the GLUE benchmark), while also being much faster. We also show that it is less prone to representation collapse; the pre-trained models maintain more generalizable representations every time they are fine-tuned.", "field": [], "task": ["Abstractive Text Summarization", "Cross-Lingual Natural Language Inference", "Text Summarization"], "method": [], "dataset": ["CNN / Daily Mail", "GigaWord", "XNLI Zero-Shot English-to-German", "Reddit TIFU", "XNLI Zero-Shot English-to-Spanish", "XNLI Zero-Shot English-to-French"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2", "Accuracy"], "title": "Better Fine-Tuning by Reducing Representational Collapse"} {"abstract": "Driven by deep learning and the large volume of data, scene text recognition has evolved rapidly in recent years. Formerly, RNN-attention based methods have dominated this field, but suffer from the problem of \\textit{attention drift} in certain situations. Lately, semantic segmentation based algorithms have proven effective at recognizing text of different forms (horizontal, oriented and curved). However, these methods may produce spurious characters or miss genuine characters, as they rely heavily on a thresholding procedure operated on segmentation maps. To tackle these challenges, we propose in this paper an alternative approach, called TextScanner, for scene text recognition. TextScanner bears three characteristics: (1) Basically, it belongs to the semantic segmentation family, as it generates pixel-wise, multi-channel segmentation maps for character class, position and order; (2) Meanwhile, akin to RNN-attention based methods, it also adopts RNN for context modeling; (3) Moreover, it performs paralleled prediction for character position and class, and ensures that characters are transcripted in correct order. The experiments on standard benchmark datasets demonstrate that TextScanner outperforms the state-of-the-art methods. 
Moreover, TextScanner shows its superiority in recognizing more difficult text such as Chinese transcripts and aligning with target characters.", "field": [], "task": ["Scene Text", "Scene Text Recognition", "Semantic Segmentation"], "method": [], "dataset": ["ICDAR2013", "ICDAR2015", "SVT"], "metric": ["Accuracy"], "title": "TextScanner: Reading Characters in Order for Robust Scene Text Recognition"} {"abstract": "Weakly-supervised semantic segmentation (WSSS) is introduced to narrow the gap for semantic segmentation performance from pixel-level supervision to image-level supervision. Most advanced approaches are based on class activation maps (CAMs) to generate pseudo-labels to train the segmentation network. The main limitation of WSSS is that the process of generating pseudo-labels from CAMs that use an image classifier is mainly focused on the most discriminative parts of the objects. To address this issue, we propose Puzzle-CAM, a process that minimizes differences between the features from separate patches and the whole image. Our method consists of a puzzle module and two regularization terms to discover the most integrated region in an object. Puzzle-CAM can activate the overall region of an object using image-level supervision without requiring extra parameters. In experiments, Puzzle-CAM outperformed previous state-of-the-art methods using the same labels for supervision on the PASCAL VOC 2012 dataset. Code associated with our experiments is available at https://github.com/OFRIN/PuzzleCAM.", "field": [], "task": ["Semantic Segmentation", "Weakly-Supervised Semantic Segmentation"], "method": [], "dataset": ["PASCAL VOC 2012 test", "PASCAL VOC 2012 val"], "metric": ["Mean IoU"], "title": "Puzzle-CAM: Improved localization via matching partial and full features"} {"abstract": "Few-shot learning has become essential for producing models that generalize\nfrom few examples. In this work, we identify that metric scaling and metric\ntask conditioning are important to improve the performance of few-shot\nalgorithms. Our analysis reveals that simple metric scaling completely changes\nthe nature of few-shot algorithm parameter updates. Metric scaling provides\nimprovements up to 14% in accuracy for certain metrics on the mini-Imagenet\n5-way 5-shot classification task. We further propose a simple and effective way\nof conditioning a learner on the task sample set, resulting in learning a\ntask-dependent metric space. Moreover, we propose and empirically test a\npractical end-to-end optimization procedure based on auxiliary task co-training\nto learn a task-dependent metric space. The resulting few-shot learning model\nbased on the task-dependent scaled metric achieves state of the art on\nmini-Imagenet. We confirm these results on another few-shot dataset that we\nintroduce in this paper based on CIFAR100.
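The effect of metric scaling described in the TADAM abstract above can be sketched in a few lines: class logits are scaled negative distances to class prototypes, and the scale factor controls how sharply the softmax, and hence the gradient, concentrates. The following is a hedged toy version with made-up shapes, not the paper's model.

```python
import numpy as np

# Minimal sketch of a scaled-distance few-shot classifier; `alpha` is the
# metric-scaling factor, all shapes are illustrative.
def scaled_metric_probs(queries, prototypes, alpha=10.0):
    # queries: (Q, D), prototypes: (K, D)
    d2 = ((queries[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    logits = -alpha * d2                        # larger alpha -> sharper softmax
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
probs = scaled_metric_probs(rng.normal(size=(3, 16)), rng.normal(size=(5, 16)))
print(probs.shape, probs.sum(axis=1))           # (3, 5), each row sums to 1
```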
Our code is publicly available at\nhttps://github.com/ElementAI/TADAM.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning"], "method": [], "dataset": ["FC100 5-way (1-shot)", "Mini-Imagenet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "Mini-Imagenet 5-way (10-shot)", "FC100 5-way (5-shot)"], "metric": ["Accuracy"], "title": "TADAM: Task dependent adaptive metric for improved few-shot learning"} {"abstract": "This paper presents a novel approach to estimating the continuous six degree\nof freedom (6-DoF) pose (3D translation and rotation) of an object from a\nsingle RGB image. The approach combines semantic keypoints predicted by a\nconvolutional network (convnet) with a deformable shape model. Unlike prior\nwork, we are agnostic to whether the object is textured or textureless, as the\nconvnet learns the optimal representation from the available training image\ndata. Furthermore, the approach can be applied to instance- and class-based\npose recovery. Empirically, we show that the proposed approach can accurately\nrecover the 6-DoF object pose for both instance- and class-based scenarios with\na cluttered background. For class-based object pose estimation,\nstate-of-the-art accuracy is shown on the large-scale PASCAL3D+ dataset.", "field": [], "task": ["Keypoint Detection", "Pose Estimation"], "method": [], "dataset": [" Pascal3D+"], "metric": ["Mean PCK"], "title": "6-DoF Object Pose from Semantic Keypoints"} {"abstract": "Learning image representations with ConvNets by pre-training on ImageNet has\nproven useful across many visual understanding tasks including object\ndetection, semantic segmentation, and image captioning. Although any image\nrepresentation can be applied to video frames, a dedicated spatiotemporal\nrepresentation is still vital in order to incorporate motion patterns that\ncannot be captured by appearance based models alone. This paper presents an\nempirical ConvNet architecture search for spatiotemporal feature learning,\nculminating in a deep 3-dimensional (3D) Residual ConvNet. Our proposed\narchitecture outperforms C3D by a good margin on Sports-1M, UCF101, HMDB51,\nTHUMOS14, and ASLAN while being 2 times faster at inference time, 2 times\nsmaller in model size, and having a more compact representation.", "field": [], "task": ["Action Classification", "Action Recognition", "Image Captioning", "Neural Architecture Search", "Object Detection", "Semantic Segmentation"], "method": [], "dataset": ["Kinetics-400", "UCF101", "HMDB-51"], "metric": ["Average accuracy of 3 splits", "3-fold Accuracy", "Vid acc@1"], "title": "ConvNet Architecture Search for Spatiotemporal Feature Learning"} {"abstract": "The wealth of structured (e.g. Wikidata) and unstructured data about the\nworld available today presents an incredible opportunity for tomorrow's\nArtificial Intelligence. So far, integration of these two different modalities\nis a difficult process, involving many decisions concerning how best to\nrepresent the information so that it will be captured or useful, and\nhand-labeling large amounts of data. DeepType overcomes this challenge by\nexplicitly integrating symbolic information into the reasoning process of a\nneural network with a type system. First we construct a type system, and\nsecond, we use it to constrain the outputs of a neural network to respect the\nsymbolic structure. We achieve this by reformulating the design problem into a\nmixed integer problem: create a type system and subsequently train a neural\nnetwork with it. 
In this reformulation discrete variables select which\nparent-child relations from an ontology are types within the type system, while\ncontinuous variables control a classifier fit to the type system. The original\nproblem cannot be solved exactly, so we propose a 2-step algorithm: 1)\nheuristic search or stochastic optimization over discrete variables that define\na type system informed by an Oracle and a Learnability heuristic, 2) gradient\ndescent to fit classifier parameters. We apply DeepType to the problem of\nEntity Linking on three standard datasets (i.e. WikiDisamb30, CoNLL (YAGO), TAC\nKBP 2010) and find that it outperforms all existing solutions by a wide margin,\nincluding approaches that rely on a human-designed type system or recent deep\nlearning-based entity embeddings, while explicitly using symbolic information\nlets it integrate new entities without retraining.", "field": [], "task": ["Entity Disambiguation", "Entity Embeddings", "Entity Linking", "Stochastic Optimization"], "method": [], "dataset": ["TAC2010", "AIDA-CoNLL"], "metric": ["Micro Precision", "In-KB Accuracy"], "title": "DeepType: Multilingual Entity Linking by Neural Type System Evolution"} {"abstract": "We propose a neural network for unsupervised anomaly detection with a novel robust subspace recovery layer (RSR layer). This layer seeks to extract the underlying subspace from a latent representation of the given data and removes outliers that lie away from this subspace. It is used within an autoencoder. The encoder maps the data into a latent space, from which the RSR layer extracts the subspace. The decoder then smoothly maps back the underlying subspace to a "manifold" close to the original inliers. Inliers and outliers are distinguished according to the distances between the original and mapped positions (small for inliers and large for outliers). Extensive numerical experiments with both image and document datasets demonstrate state-of-the-art precision and recall.", "field": [], "task": ["Anomaly Detection", "Unsupervised Anomaly Detection"], "method": [], "dataset": ["Caltech-101", "Fashion-MNIST", "20NEWS", "Reuters-21578"], "metric": ["AUC (outlier ratio = 0.5)"], "title": "Robust Subspace Recovery Layer for Unsupervised Anomaly Detection"} {"abstract": "We present graph wavelet neural network (GWNN), a novel graph convolutional\nneural network (CNN), leveraging graph wavelet transform to address the\nshortcomings of previous spectral graph CNN methods that depend on graph\nFourier transform. Different from graph Fourier transform, graph wavelet\ntransform can be obtained via a fast algorithm without requiring matrix\neigendecomposition with high computational cost. Moreover, graph wavelets are\nsparse and localized in vertex domain, offering high efficiency and good\ninterpretability for graph convolution. The proposed GWNN significantly\noutperforms previous spectral graph CNNs in the task of graph-based\nsemi-supervised classification on three benchmark datasets: Cora, Citeseer and\nPubmed.", "field": [], "task": [], "method": [], "dataset": ["Cora", "Pubmed", "Citeseer"], "metric": ["Accuracy"], "title": "Graph Wavelet Neural Network"} {"abstract": "Learning discriminative, view-invariant and multi-scale representations of person appearance with different semantic levels is of paramount importance for person Re-Identification (Re-ID). A surge of effort has been spent by the community to learn deep Re-ID models capturing a holistic single semantic level feature representation.
To improve the achieved results, additional visual attributes and body part-driven models have been considered. However, these require extensive human annotation labor or demand additional computational efforts. We argue that a pyramid-inspired method capturing multi-scale information may overcome such requirements. Precisely, multi-scale stripes that represent visual information of a person can be used by a novel architecture factorizing them into latent discriminative factors at multiple semantic levels. A multi-task loss is combined with a curriculum learning strategy to learn a discriminative and invariant person representation which is exploited for triplet-similarity learning. Results on three benchmark Re-ID datasets demonstrate that better performance than existing methods is achieved (e.g., more than 90% accuracy on the Duke-MTMC dataset).", "field": [], "task": ["Curriculum Learning", "Person Re-Identification"], "method": [], "dataset": ["DukeMTMC-reID", "Market-1501"], "metric": ["Rank-1", "MAP"], "title": "Aggregating Deep Pyramidal Representations for Person Re-Idenfitication"} {"abstract": "This paper studies the problem of predicting the distribution over multiple possible future paths of people as they move through various visual scenes. We make two main contributions. The first contribution is a new dataset, created in a realistic 3D simulator, which is based on real world trajectory data, and then extrapolated by human annotators to achieve different latent goals. This provides the first benchmark for quantitative evaluation of the models to predict multi-future trajectories. The second contribution is a new model to generate multiple plausible future trajectories, which contains novel designs of using multi-scale location encodings and convolutional RNNs over graphs. We refer to our model as Multiverse. We show that our model achieves the best results on our dataset, as well as on the real-world VIRAT/ActEV dataset (which just contains one possible future).", "field": [], "task": ["Autonomous Driving", "Human motion prediction", "Multi-future Trajectory Prediction", "Multi Future Trajectory Prediction", "Self-Driving Cars", "Trajectory Forecasting", "Trajectory Prediction"], "method": [], "dataset": ["Stanford Drone", "ActEV", "ForkingPaths"], "metric": ["ADE-8/12 @K = 20", "FDE-8/12 @K= 20", "ADE", "FDE-8/12", "ADE-8/12"], "title": "The Garden of Forking Paths: Towards Multi-Future Trajectory Prediction"} {"abstract": "This paper describes our approach for the Disguised Faces in the Wild (DFW)\n2018 challenge. The task here is to verify the identity of a person among\ndisguised and impostor images. Given the importance of the task of face\nverification, it is essential to compare methods across a common platform. Our\napproach is based on VGG-face architecture paired with Contrastive loss based\non cosine distance metric. For augmenting the data set, we source more data\nfrom the internet. The experiments show the effectiveness of the approach on\nthe DFW data.
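Since the abstract above specifies a contrastive loss over a cosine distance metric on VGG-face features, a brief NumPy sketch of such a pairwise objective may help; the margin value and the feature dimensionality are illustrative assumptions rather than values from the paper.

```python
import numpy as np

# Hedged sketch of a pairwise contrastive loss over cosine distance.
def contrastive_cosine_loss(f1, f2, same_identity, margin=0.5):
    f1 = f1 / np.linalg.norm(f1, axis=1, keepdims=True)
    f2 = f2 / np.linalg.norm(f2, axis=1, keepdims=True)
    cos_dist = 1.0 - (f1 * f2).sum(axis=1)               # 0 = identical direction
    pull = same_identity * cos_dist ** 2                  # genuine pairs move closer
    push = (1 - same_identity) * np.maximum(margin - cos_dist, 0.0) ** 2
    return float((pull + push).mean())

rng = np.random.default_rng(0)
print(contrastive_cosine_loss(rng.normal(size=(4, 128)),
                              rng.normal(size=(4, 128)),
                              np.array([1, 0, 1, 0])))
```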
We show that adding extra data to the DFW dataset with noisy\nlabels also helps in increasing the generalization performance of the network.\nThe proposed network achieves 27.13% absolute increase in accuracy over the DFW\nbaseline.", "field": [], "task": ["Disguised Face Verification", "Face Verification"], "method": [], "dataset": ["Disguised Faces in the Wild"], "metric": ["GAR @10% FAR", "GAR @0.1% FAR", "GAR @1% FAR"], "title": "DisguiseNet : A Contrastive Approach for Disguised Face Verification in the Wild"} {"abstract": "We propose a simple algorithm to train stochastic neural networks to draw\nsamples from given target distributions for probabilistic inference. Our method\nis based on iteratively adjusting the neural network parameters so that the\noutput changes along a Stein variational gradient that maximumly decreases the\nKL divergence with the target distribution. Our method works for any target\ndistribution specified by their unnormalized density function, and can train\nany black-box architectures that are differentiable in terms of the parameters\nwe want to adapt. As an application of our method, we propose an amortized MLE\nalgorithm for training deep energy model, where a neural sampler is adaptively\ntrained to approximate the likelihood function. Our method mimics an\nadversarial game between the deep energy model and the neural sampler, and\nobtains realistic-looking images competitive with the state-of-the-art results.", "field": [], "task": ["Conditional Image Generation"], "method": [], "dataset": ["CIFAR-10"], "metric": ["Inception score"], "title": "Learning to Draw Samples: With Application to Amortized MLE for Generative Adversarial Learning"} {"abstract": "We describe an approach for unsupervised learning of a generic, distributed\nsentence encoder. Using the continuity of text from books, we train an\nencoder-decoder model that tries to reconstruct the surrounding sentences of an\nencoded passage. Sentences that share semantic and syntactic properties are\nthus mapped to similar vector representations. We next introduce a simple\nvocabulary expansion method to encode words that were not seen as part of\ntraining, allowing us to expand our vocabulary to a million words. After\ntraining our model, we extract and evaluate our vectors with linear models on 8\ntasks: semantic relatedness, paraphrase detection, image-sentence ranking,\nquestion-type classification and 4 benchmark sentiment and subjectivity\ndatasets. The end result is an off-the-shelf encoder that can produce highly\ngeneric sentence representations that are robust and perform well in practice.\nWe will make our encoder publicly available.", "field": [], "task": [], "method": [], "dataset": ["SICK"], "metric": ["Spearman Correlation", "MSE", "Pearson Correlation"], "title": "Skip-Thought Vectors"} {"abstract": "Many recent datasets contain a variety of different data modalities, for instance, image, question, and answer data in visual question answering (VQA). When training deep net classifiers on those multi-modal datasets, the modalities get exploited at different scales, i.e., some modalities can more easily contribute to the classification results than others. This is suboptimal because the classifier is inherently biased towards a subset of the modalities. To alleviate this shortcoming, we propose a novel regularization term based on the functional entropy. Intuitively, this term encourages to balance the contribution of each modality to the classification result. 
However, regularization with the functional entropy is challenging. To address this, we develop a method based on the log-Sobolev inequality, which bounds the functional entropy with the functional-Fisher-information. Intuitively, this maximizes the amount of information that the modalities contribute. On the two challenging multi-modal datasets VQA-CPv2 and SocialIQ, we obtain state-of-the-art results while more uniformly exploiting the modalities. In addition, we demonstrate the efficacy of our method on Colored MNIST.", "field": [], "task": ["Question Answering", "Visual Question Answering"], "method": [], "dataset": ["VQA-CP"], "metric": ["Score"], "title": "Removing Bias in Multi-modal Classifiers: Regularization by Maximizing Functional Entropies"} {"abstract": "Few-shot learners aim to recognize new object classes based on a small number of labeled training examples. To prevent overfitting, state-of-the-art few-shot learners use meta-learning on convolutional-network features and perform classification using a nearest-neighbor classifier. This paper studies the accuracy of nearest-neighbor baselines without meta-learning. Surprisingly, we find simple feature transformations suffice to obtain competitive few-shot learning accuracies. For example, we find that a nearest-neighbor classifier used in combination with mean-subtraction and L2-normalization outperforms prior results in three out of five settings on the miniImageNet dataset.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Meta-Learning"], "method": [], "dataset": ["Mini-Imagenet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)"], "metric": ["Accuracy"], "title": "SimpleShot: Revisiting Nearest-Neighbor Classification for Few-Shot Learning"} {"abstract": "Recent advances in commonsense reasoning depend on large-scale human-annotated training data to achieve peak performance. However, manual curation of training examples is expensive and has been shown to introduce annotation artifacts that neural models can readily exploit and overfit on. We investigate G-DAUG^C, a novel generative data augmentation method that aims to achieve more accurate and robust learning in the low-resource setting. Our approach generates synthetic examples using pretrained language models, and selects the most informative and diverse set of examples for data augmentation. In experiments with multiple commonsense reasoning benchmarks, G-DAUG^C consistently outperforms existing data augmentation methods based on back-translation, and establishes a new state-of-the-art on WinoGrande, CODAH, and CommonsenseQA. Further, in addition to improvements in in-distribution accuracy, G-DAUG^C-augmented training also enhances out-of-distribution generalization, showing greater robustness against adversarial or perturbed examples. Our analysis demonstrates that G-DAUG^C produces a diverse set of fluent training examples, and that its selection and training approaches are important for performance. Our findings encourage future research toward generative data augmentation to enhance both in-distribution learning and out-of-distribution generalization.", "field": [], "task": ["Data Augmentation", "Question Answering"], "method": [], "dataset": ["CODAH"], "metric": ["Accuracy"], "title": "Generative Data Augmentation for Commonsense Reasoning"} {"abstract": "We propose a new model for digital pathology segmentation, based on the\nobservation that histopathology images are inherently symmetric under rotation\nand reflection. 
Utilizing recent findings on rotation equivariant CNNs, the\nproposed model leverages these symmetries in a principled manner. We present a\nvisual analysis showing improved stability on predictions, and demonstrate that\nexploiting rotation equivariance significantly improves tumor detection\nperformance on a challenging lymph node metastases dataset. We further present\na novel derived dataset to enable principled comparison of machine learning\nmodels, in combination with an initial benchmark. Through this dataset, the\ntask of histopathology diagnosis becomes accessible as a challenging benchmark\nfor fundamental machine learning research.", "field": [], "task": ["Breast Tumour Classification"], "method": [], "dataset": ["PCam"], "metric": ["AUC"], "title": "Rotation Equivariant CNNs for Digital Pathology"} {"abstract": "We present an efficient document representation learning framework, Document\nVector through Corruption (Doc2VecC). Doc2VecC represents each document as a\nsimple average of word embeddings. It ensures a representation generated as\nsuch captures the semantic meanings of the document during learning. A\ncorruption model is included, which introduces a data-dependent regularization\nthat favors informative or rare words while forcing the embeddings of common\nand non-discriminative ones to be close to zero. Doc2VecC produces\nsignificantly better word embeddings than Word2Vec. We compare Doc2VecC with\nseveral state-of-the-art document representation learning algorithms. The\nsimple model architecture introduced by Doc2VecC matches or out-performs the\nstate-of-the-art in generating high-quality document representations for\nsentiment analysis, document classification as well as semantic relatedness\ntasks. The simplicity of the model enables training on billions of words per\nhour on a single machine. At the same time, the model is very efficient in\ngenerating representations of unseen documents at test time.", "field": [], "task": ["Document Classification", "Representation Learning", "Sentiment Analysis", "Word Embeddings"], "method": [], "dataset": ["IMDb", "SICK"], "metric": ["Spearman Correlation", "MSE", "Pearson Correlation", "Accuracy"], "title": "Efficient Vector Representation for Documents through Corruption"} {"abstract": "In this paper we present a method for learning a discriminative classifier\nfrom unlabeled or partially labeled data. Our approach is based on an objective\nfunction that trades-off mutual information between observed examples and their\npredicted categorical class distribution, against robustness of the classifier\nto an adversarial generative model. The resulting algorithm can either be\ninterpreted as a natural generalization of the generative adversarial networks\n(GAN) framework or as an extension of the regularized information maximization\n(RIM) framework to robust classification against an optimal adversary. We\nempirically evaluate our method - which we dub categorical generative\nadversarial networks (or CatGAN) - on synthetic data as well as on challenging\nimage classification tasks, demonstrating the robustness of the learned\nclassifiers. 
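The CatGAN objective summarized above trades off conditional and marginal entropies of the discriminator's categorical output; the snippet below is a rough, illustrative rendering of the discriminator-side terms under simplified sign conventions, not the authors' code.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    return -(p * np.log(p + eps)).sum(axis=axis)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Discriminator-side terms: be certain on real samples, uncertain on generated
# ones, and keep the marginal class usage balanced. Illustrative only.
def catgan_discriminator_value(p_real, p_fake):
    return (-entropy(p_real).mean()          # low conditional entropy on real data
            + entropy(p_fake).mean()         # high conditional entropy on fakes
            + entropy(p_real.mean(axis=0)))  # high marginal entropy over classes

rng = np.random.default_rng(0)
print(catgan_discriminator_value(softmax(rng.normal(size=(8, 10))),
                                 softmax(rng.normal(size=(8, 10)))))
```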
We further qualitatively assess the fidelity of samples generated\nby the adversarial generator that is learned alongside the discriminative\nclassifier, and identify links between the CatGAN objective and discriminative\nclustering algorithms (such as RIM).", "field": [], "task": ["Image Classification", "Robust classification", "Unsupervised Image Classification", "Unsupervised MNIST"], "method": [], "dataset": ["MNIST"], "metric": ["Accuracy"], "title": "Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks"} {"abstract": "Neural message passing algorithms for semi-supervised classification on\ngraphs have recently achieved great success. However, for classifying a node\nthese methods only consider nodes that are a few propagation steps away and the\nsize of this utilized neighborhood is hard to extend. In this paper, we use the\nrelationship between graph convolutional networks (GCN) and PageRank to derive\nan improved propagation scheme based on personalized PageRank. We utilize this\npropagation procedure to construct a simple model, personalized propagation of\nneural predictions (PPNP), and its fast approximation, APPNP. Our model's\ntraining time is on par or faster and its number of parameters on par or lower\nthan previous models. It leverages a large, adjustable neighborhood for\nclassification and can be easily combined with any neural network. We show that\nthis model outperforms several recently proposed methods for semi-supervised\nclassification in the most thorough study done so far for GCN-like models. Our\nimplementation is available online.", "field": [], "task": ["Node Classification"], "method": [], "dataset": ["Cora", "Pubmed", "Citeseer", "MS ACADEMIC"], "metric": ["Validation", "Accuracy"], "title": "Predict then Propagate: Graph Neural Networks meet Personalized PageRank"} {"abstract": "One of the fundamental challenges in video object segmentation is to find an\neffective representation of the target and background appearance. The best\nperforming approaches resort to extensive fine-tuning of a convolutional neural\nnetwork for this purpose. Besides being prohibitively expensive, this strategy\ncannot be truly trained end-to-end since the online fine-tuning procedure is\nnot integrated into the offline training of the network.\n To address these issues, we propose a network architecture that learns a\npowerful representation of the target and background appearance in a single\nforward pass. The introduced appearance module learns a probabilistic\ngenerative model of target and background feature distributions. Given a new\nimage, it predicts the posterior class probabilities, providing a highly\ndiscriminative cue, which is processed in later network modules. Both the\nlearning and prediction stages of our appearance module are fully\ndifferentiable, enabling true end-to-end training of the entire segmentation\npipeline. Comprehensive experiments demonstrate the effectiveness of the\nproposed approach on three video object segmentation benchmarks. We close the\ngap to approaches based on online fine-tuning on DAVIS17, while operating at 15\nFPS on a single GPU. 
Furthermore, our method outperforms all published\napproaches on the large-scale YouTube-VOS dataset.", "field": [], "task": ["Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation", "Youtube-VOS"], "method": [], "dataset": ["DAVIS 2017 (val)", "DAVIS 2017 (test-dev)", "DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "A Generative Appearance Model for End-to-end Video Object Segmentation"} {"abstract": "Social recommendation leverages social information to solve data sparsity and\ncold-start problems in traditional collaborative filtering methods. However,\nmost existing models assume that social effects from friend users are static\nand under the forms of constant weights or fixed constraints. To relax this\nstrong assumption, in this paper, we propose dual graph attention networks to\ncollaboratively learn representations for two-fold social effects, where one is\nmodeled by a user-specific attention weight and the other is modeled by a\ndynamic and context-aware attention weight. We also extend the social effects\nin user domain to item domain, so that information from related items can be\nleveraged to further alleviate the data sparsity problem. Furthermore,\nconsidering that different social effects in two domains could interact with\neach other and jointly influence user preferences for items, we propose a new\npolicy-based fusion strategy based on contextual multi-armed bandit to weigh\ninteractions of various social effects. Experiments on one benchmark dataset\nand a commercial dataset verify the efficacy of the key components in our\nmodel. The results show that our model achieves great improvement for\nrecommendation accuracy compared with other state-of-the-art social\nrecommendation methods.", "field": [], "task": ["Recommendation Systems"], "method": [], "dataset": ["Epinions", "WeChat"], "metric": ["MAE", "P@10", "RMSE", "AUC"], "title": "Dual Graph Attention Networks for Deep Latent Representation of Multifaceted Social Effects in Recommender Systems"} {"abstract": "There has been remarkable progress on object detection and re-identification (re-ID) in recent years which are the key components of multi-object tracking. However, little attention has been focused on jointly accomplishing the two tasks in a single network. Our study shows that the previous attempts ended up with degraded accuracy mainly because the re-ID task is not fairly learned which causes many identity switches. The unfairness lies in two-fold: (1) they treat re-ID as a secondary task whose accuracy heavily depends on the primary detection task. So training is largely biased to the detection task but ignores the re-ID task; (2) they use ROI-Align to extract re-ID features which is directly borrowed from object detection. However, this introduces a lot of ambiguity in characterizing objects because many sampling points may belong to disturbing instances or background. To solve the problems, we present a simple approach \\emph{FairMOT} which consists of two homogeneous branches to predict pixel-wise objectness scores and re-ID features. The achieved fairness between the tasks allows \\emph{FairMOT} to obtain high levels of detection and tracking accuracy and outperform previous state-of-the-arts by a large margin on several public datasets. 
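The two homogeneous branches mentioned in the FairMOT abstract above can be pictured as two per-pixel heads over a shared feature map, one scoring objectness and one emitting a normalized re-ID embedding; the sketch below replaces 1x1 convolutions with plain per-pixel matrix products and uses made-up shapes, so it illustrates the design only and is not the released implementation.

```python
import numpy as np

# Toy per-pixel detection + re-ID heads over a shared backbone feature map.
def two_branch_heads(feat, w_det, w_id):
    # feat: (H, W, C); a 1x1 convolution acts as a per-pixel linear map.
    objectness = 1.0 / (1.0 + np.exp(-(feat @ w_det)))        # (H, W, 1)
    embeddings = feat @ w_id                                   # (H, W, E)
    norms = np.linalg.norm(embeddings, axis=-1, keepdims=True)
    return objectness, embeddings / (norms + 1e-8)

rng = np.random.default_rng(0)
obj, emb = two_branch_heads(rng.normal(size=(19, 34, 64)),
                            rng.normal(size=(64, 1)),
                            rng.normal(size=(64, 128)))
print(obj.shape, emb.shape)    # (19, 34, 1) (19, 34, 128)
```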
The source code and pre-trained models are released at https://github.com/ifzhang/FairMOT.", "field": [], "task": ["Fairness", "Multi-Object Tracking", "Multiple Object Tracking", "Object Detection", "Object Tracking"], "method": [], "dataset": ["MOT17", "2DMOT15", "MOT16", "MOT20"], "metric": ["MOTA"], "title": "FairMOT: On the Fairness of Detection and Re-Identification in Multiple Object Tracking"} {"abstract": "Few-shot semantic segmentation aims to learn to segment new object classes with only a few annotated examples, which has a wide range of real-world applications. Most existing methods either focus on the restrictive setting of one-way few-shot segmentation or suffer from incomplete coverage of object regions. In this paper, we propose a novel few-shot semantic segmentation framework based on the prototype representation. Our key idea is to decompose the holistic class representation into a set of part-aware prototypes, capable of capturing diverse and fine-grained object features. In addition, we propose to leverage unlabeled data to enrich our part-aware prototypes, resulting in better modeling of intra-class variations of semantic objects. We develop a novel graph neural network model to generate and enhance the proposed part-aware prototypes based on labeled and unlabeled images. Extensive experimental evaluations on two benchmarks show that our method outperforms the prior art with a sizable margin.", "field": [], "task": ["Few-Shot Semantic Segmentation", "Semantic Segmentation", "Semi-Supervised Semantic Segmentation"], "method": [], "dataset": ["PASCAL-5i (5-Shot)", "PASCAL-5i (1-Shot)", "Pascal5i"], "metric": ["Mean IoU", "meanIOU"], "title": "Part-aware Prototype Network for Few-shot Semantic Segmentation"} {"abstract": "Camouflaged objects attempt to conceal their texture into the background and discriminating them from the background is hard even for human beings. The main objective of this paper is to explore the camouflaged object segmentation problem, namely, segmenting the camouflaged object(s) for a given image. This problem has not been well studied in spite of a wide range of potential applications including the preservation of wild animals and the discovery of new species, surveillance systems, search-and-rescue missions in the event of natural disasters such as earthquakes, floods or hurricanes. This paper addresses a new challenging problem of camouflaged object segmentation. To address this problem, we provide a new image dataset of camouflaged objects for benchmarking purposes. In addition, we propose a general end-to-end network, called the Anabranch Network, that leverages both classification and segmentation tasks. Different from existing networks for segmentation, our proposed network possesses the second branch for classification to predict the probability of containing camouflaged object(s) in an image, which is then fused into the main branch for segmentation to boost up the segmentation accuracy. Extensive experiments conducted on the newly built dataset demonstrate the effectiveness of our network using various fully convolutional networks.", "field": [], "task": ["Camouflaged Object Segmentation", "Camouflage Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["CAMO"], "metric": ["S-Measure", "Weighted F-Measure", "MAE", "F-Measure", "E-Measure"], "title": "Anabranch Network for Camouflaged Object Segmentation"} {"abstract": "In this work, we present a method for unsupervised domain adaptation. 
Many\nadversarial learning methods train domain classifier networks to distinguish\nthe features as either a source or target and train a feature generator network\nto mimic the discriminator. Two problems exist with these methods. First, the\ndomain classifier only tries to distinguish the features as a source or target\nand thus does not consider task-specific decision boundaries between classes.\nTherefore, a trained generator can generate ambiguous features near class\nboundaries. Second, these methods aim to completely match the feature\ndistributions between different domains, which is difficult because of each\ndomain's characteristics.\n To solve these problems, we introduce a new approach that attempts to align\ndistributions of source and target by utilizing the task-specific decision\nboundaries. We propose to maximize the discrepancy between two classifiers'\noutputs to detect target samples that are far from the support of the source. A\nfeature generator learns to generate target features near the support to\nminimize the discrepancy. Our method outperforms other methods on several\ndatasets of image classification and semantic segmentation. The codes are\navailable at \\url{https://github.com/mil-tokyo/MCD_DA}", "field": [], "task": ["Domain Adaptation", "Image Classification", "Semantic Segmentation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["SVHN-to-MNIST", "USPS-to-MNIST", "UCF-to-HMDBfull", "Syn2Real-C", "SYNSIG-to-GTSRB", "MNIST-to-USPS", "HMDBfull-to-UCF"], "metric": ["Accuracy"], "title": "Maximum Classifier Discrepancy for Unsupervised Domain Adaptation"} {"abstract": "An efficient learner is one who reuses what they already know to tackle a new\nproblem. For a machine learner, this means understanding the similarities\namongst datasets. In order to do this, one must take seriously the idea of\nworking with datasets, rather than datapoints, as the key objects to model.\nTowards this goal, we demonstrate an extension of a variational autoencoder\nthat can learn a method for computing representations, or statistics, of\ndatasets in an unsupervised fashion. The network is trained to produce\nstatistics that encapsulate a generative model for each dataset. Hence the\nnetwork enables efficient learning from new datasets for both unsupervised and\nsupervised tasks. We show that we are able to learn statistics that can be used\nfor: clustering datasets, transferring generative models to new datasets,\nselecting representative samples of datasets and classifying previously unseen\nclasses. We refer to our model as a neural statistician, and by this we mean a\nneural network that can learn to compute summary statistics of datasets without\nsupervision.", "field": [], "task": ["Few-Shot Image Classification"], "method": [], "dataset": ["OMNIGLOT - 1-Shot, 5-way", "OMNIGLOT - 5-Shot, 20-way", "OMNIGLOT - 5-Shot, 5-way", "OMNIGLOT - 1-Shot, 20-way"], "metric": ["Accuracy"], "title": "Towards a Neural Statistician"} {"abstract": "In this paper, we newly introduce the concept of temporal attention filters,\nand describe how they can be used for human activity recognition from videos.\nMany high-level activities are often composed of multiple temporal parts (e.g.,\nsub-events) with different duration/speed, and our objective is to make the\nmodel explicitly learn such temporal structure using multiple attention filters\nand benefit from them. 
Our temporal filters are designed to be fully\ndifferentiable, allowing end-to-end training of the temporal filters together\nwith the underlying frame-based or segment-based convolutional neural network\narchitectures. This paper presents an approach for learning a set of optimal\nstatic temporal attention filters to be shared across different videos, and\nextends this approach to dynamically adjust attention filters per testing video\nusing recurrent long short-term memory networks (LSTMs). This allows our\ntemporal attention filters to learn latent sub-events specific to each\nactivity. We experimentally confirm that the proposed concept of temporal\nattention filters benefits activity recognition, and we visualize the\nlearned latent sub-events.", "field": [], "task": ["Action Classification", "Action Recognition In Videos", "Activity Recognition"], "method": [], "dataset": ["DogCentric"], "metric": ["Accuracy"], "title": "Learning Latent Sub-events in Activity Videos Using Temporal Attention Filters"} {"abstract": "Numerous important problems can be framed as learning from graph data. We\npropose a framework for learning convolutional neural networks for arbitrary\ngraphs. These graphs may be undirected, directed, and with both discrete and\ncontinuous node and edge attributes. Analogous to image-based convolutional\nnetworks that operate on locally connected regions of the input, we present a\ngeneral approach to extracting locally connected regions from graphs. Using\nestablished benchmark data sets, we demonstrate that the learned feature\nrepresentations are competitive with state of the art graph kernels and that\ntheir computation is highly efficient.", "field": [], "task": ["Graph Classification"], "method": [], "dataset": ["IMDb-B", "D&D", "NCI1", "MUTAG", "PTC"], "metric": ["Accuracy"], "title": "Learning Convolutional Neural Networks for Graphs"} {"abstract": "Graph Convolutional Networks (GCNs) and their variants have experienced significant attention and have become the de facto methods for learning graph representations. GCNs derive inspiration primarily from recent deep learning approaches, and as a result, may inherit unnecessary complexity and redundant computation. In this paper, we reduce this excess complexity through successively removing nonlinearities and collapsing weight matrices between consecutive layers. We theoretically analyze the resulting linear model and show that it corresponds to a fixed low-pass filter followed by a linear classifier. Notably, our experimental evaluation demonstrates that these simplifications do not negatively impact accuracy in many downstream applications. Moreover, the resulting model scales to larger datasets, is naturally interpretable, and yields up to two orders of magnitude speedup over FastGCN.", "field": [], "task": ["Graph Regression", "Image Classification", "Relation Extraction", "Sentiment Analysis", "Skeleton Based Action Recognition", "Text Classification"], "method": [], "dataset": ["TACRED", "R8", "20NEWS", "MR", "Lipophilicity ", "R52", "SBU", "Ohsumed"], "metric": ["RMSE", "F1", "Accuracy"], "title": "Simplifying Graph Convolutional Networks"} {"abstract": "This work generalizes graph neural networks (GNNs) beyond those based on the Weisfeiler-Lehman (WL) algorithm, graph Laplacians, and diffusions. Our approach, denoted Relational Pooling (RP), draws from the theory of finite partial exchangeability to provide a framework with maximal representation power for graphs.
RP can work with existing graph representation models and, somewhat counterintuitively, can make them even more powerful than the original WL isomorphism test. Additionally, RP allows architectures like Recurrent Neural Networks and Convolutional Neural Networks to be used in a theoretically sound approach for graph classification. We demonstrate improved performance of RP-based graph representations over state-of-the-art methods on a number of tasks.", "field": [], "task": ["Graph Classification"], "method": [], "dataset": ["Tox21", "MUV", "HIV dataset"], "metric": ["AUC"], "title": "Relational Pooling for Graph Representations"} {"abstract": "Most caregivers of people with dementia (CPWD) experience a high degree of stress due to the demands of providing care, especially when addressing unpredictable behavioral and psychological symptoms of dementia. Such challenging responsibilities make caregivers susceptible to poor sleep quality with detrimental effects on their overall health. Hence, monitoring caregivers\u2019 sleep quality can provide important CPWD stress assessment. Most current sleep studies are based on polysomnography, which is expensive and potentially disrupts the caregiving routine. To address these issues, we propose a clinical decision support system to predict sleep quality based on trends of physiological signals in the deep sleep stage. This system utilizes four raw physiological signals using a wearable device (E4 wristband): heart rate variability, electrodermal activity, body movement, and skin temperature. To evaluate the performance of the proposed method, analyses were conducted on a two-week period of sleep monitored on eight CPWD. The best performance is achieved using the random forest classifier with an accuracy of 75% for sleep quality, and 73% for restfulness, respectively. We found that the most important features to detect these measures are sleep efficiency (ratio of amount of time asleep to the amount of time in bed) and skin temperature. The results from our sleep analysis system demonstrate the capability of using wearable sensors to measure sleep quality and restfulness in CPWD.", "field": [], "task": ["Heart Rate Variability", "Sleep Quality Prediction"], "method": [], "dataset": ["100 sleep nights of 8 caregivers"], "metric": ["Accuracy"], "title": "Sleep quality prediction in caregivers using physiological signals"} {"abstract": "Bayesian optimization is an effective methodology for the global optimization\nof functions with expensive evaluations. It relies on querying a distribution\nover functions defined by a relatively cheap surrogate model. An accurate model\nfor this distribution over functions is critical to the effectiveness of the\napproach, and is typically fit using Gaussian processes (GPs). However, since\nGPs scale cubically with the number of observations, it has been challenging to\nhandle objectives whose optimization requires many evaluations, and as such,\nmassively parallelizing the optimization.\n In this work, we explore the use of neural networks as an alternative to GPs\nto model distributions over functions. We show that performing adaptive basis\nfunction regression with a neural network as the parametric form performs\ncompetitively with state-of-the-art GP-based approaches, but scales linearly\nwith the number of data rather than cubically. 
This allows us to achieve a\npreviously intractable degree of parallelism, which we apply to large scale\nhyperparameter optimization, rapidly finding competitive models on benchmark\nobject recognition tasks using convolutional networks, and image caption\ngeneration using neural language models.", "field": [], "task": ["Gaussian Processes", "Hyperparameter Optimization", "Image Classification", "Object Recognition", "Regression"], "method": [], "dataset": ["CIFAR-100", "CIFAR-10"], "metric": ["Percentage correct"], "title": "Scalable Bayesian Optimization Using Deep Neural Networks"} {"abstract": "Pre-trained language representation models (PLMs) cannot well capture factual knowledge from text. In contrast, knowledge embedding (KE) methods can effectively represent the relational facts in knowledge graphs (KGs) with informative entity embeddings, but conventional KE models cannot take full advantage of the abundant textual information. In this paper, we propose a unified model for Knowledge Embedding and Pre-trained LanguagE Representation (KEPLER), which can not only better integrate factual knowledge into PLMs but also produce effective text-enhanced KE with the strong PLMs. In KEPLER, we encode textual entity descriptions with a PLM as their embeddings, and then jointly optimize the KE and language modeling objectives. Experimental results show that KEPLER achieves state-of-the-art performances on various NLP tasks, and also works remarkably well as an inductive KE model on KG link prediction. Furthermore, for pre-training and evaluating KEPLER, we construct Wikidata5M, a large-scale KG dataset with aligned entity descriptions, and benchmark state-of-the-art KE methods on it. It shall serve as a new KE benchmark and facilitate the research on large KG, inductive KE, and KG with text. The source code can be obtained from https://github.com/THU-KEG/KEPLER.", "field": [], "task": ["Entity Embeddings", "Entity Typing", "Inductive knowledge graph completion", "Knowledge Graph Completion", "Knowledge Graph Embeddings", "Knowledge Graphs", "Language Modelling", "Link Prediction", "Relation Extraction"], "method": [], "dataset": ["Wikidata5m-ind", "TACRED"], "metric": ["Hits@3", "Hits@1", "MRR", "F1", "Hits@10"], "title": "KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation"} {"abstract": "Graph Neural Nets (GNNs) have received increasing attentions, partially due to their superior performance in many node and graph classification tasks. However, there is a lack of understanding on what they are learning and how sophisticated the learned graph functions are. In this work, we propose a dissection of GNNs on graph classification into two parts: 1) the graph filtering, where graph-based neighbor aggregations are performed, and 2) the set function, where a set of hidden node features are composed for prediction. To study the importance of both parts, we propose to linearize them separately. We first linearize the graph filtering function, resulting Graph Feature Network (GFN), which is a simple lightweight neural net defined on a \\textit{set} of graph augmented features. Further linearization of GFN's set function results in Graph Linear Network (GLN), which is a linear function. Empirically we perform evaluations on common graph classification benchmarks. 
To our surprise, we find that, despite the simplification, GFN could match or exceed the best accuracies produced by recently proposed GNNs (with a fraction of computation cost), while GLN underperforms significantly. Our results demonstrate the importance of non-linear set function, and suggest that linear graph filtering with non-linear set function is an efficient and powerful scheme for modeling existing graph classification benchmarks.", "field": [], "task": ["Graph Classification"], "method": [], "dataset": ["COLLAB", "RE-M12K", "IMDb-B", "ENZYMES", "PROTEINS", "D&D", "NCI1", "MUTAG", "IMDb-M", "RE-M5K"], "metric": ["Accuracy"], "title": "Are Powerful Graph Neural Nets Necessary? A Dissection on Graph Classification"} {"abstract": "Recently, graph neural networks (GNNs) have revolutionized the field of graph\nrepresentation learning through effectively learned node embeddings, and\nachieved state-of-the-art results in tasks such as node classification and link\nprediction. However, current GNN methods are inherently flat and do not learn\nhierarchical representations of graphs---a limitation that is especially\nproblematic for the task of graph classification, where the goal is to predict\nthe label associated with an entire graph. Here we propose DiffPool, a\ndifferentiable graph pooling module that can generate hierarchical\nrepresentations of graphs and can be combined with various graph neural network\narchitectures in an end-to-end fashion. DiffPool learns a differentiable soft\ncluster assignment for nodes at each layer of a deep GNN, mapping nodes to a\nset of clusters, which then form the coarsened input for the next GNN layer.\nOur experimental results show that combining existing GNN methods with DiffPool\nyields an average improvement of 5-10% accuracy on graph classification\nbenchmarks, compared to all existing pooling approaches, achieving a new\nstate-of-the-art on four out of five benchmark data sets.", "field": [], "task": ["Graph Classification", "Graph Representation Learning", "Link Prediction", "Node Classification", "Representation Learning"], "method": [], "dataset": ["COLLAB", "ENZYMES", "D&D", "PROTEINS", "REDDIT-MULTI-12K"], "metric": ["Accuracy"], "title": "Hierarchical Graph Representation Learning with Differentiable Pooling"} {"abstract": "While Generative Adversarial Networks (GANs) have seen huge successes in image synthesis tasks, they are notoriously difficult to adapt to different datasets, in part due to instability during training and sensitivity to hyperparameters. One commonly accepted reason for this instability is that gradients passing from the discriminator to the generator become uninformative when there isn't enough overlap in the supports of the real and fake distributions. In this work, we propose the Multi-Scale Gradient Generative Adversarial Network (MSG-GAN), a simple but effective technique for addressing this by allowing the flow of gradients from the discriminator to the generator at multiple scales. This technique provides a stable approach for high resolution image synthesis, and serves as an alternative to the commonly used progressive growing technique. We show that MSG-GAN converges stably on a variety of image datasets of different sizes, resolutions and domains, as well as different types of loss functions and architectures, all with the same set of fixed hyperparameters. 
When compared to state-of-the-art GANs, our approach matches or exceeds the performance in most of the cases we tried.", "field": [], "task": ["Image Generation"], "method": [], "dataset": ["Oxford 102 Flowers 256 x 256", "CIFAR-10", "LSUN Churches 256 x 256", "FFHQ", "Indian Celebs 256 x 256", "CelebA-HQ 1024x1024"], "metric": ["Inception score", "FID"], "title": "MSG-GAN: Multi-Scale Gradients for Generative Adversarial Networks"} {"abstract": "Wearable cameras make it possible to collect images and videos of humans interacting with the world. While human-object interactions have been thoroughly investigated in third person vision, the problem has been understudied in egocentric settings and in industrial scenarios. To fill this gap, we introduce MECCANO, the first dataset of egocentric videos to study human-object interactions in industrial-like settings. MECCANO has been acquired by 20 participants who were asked to build a motorbike model, for which they had to interact with tiny objects and tools. The dataset has been explicitly labeled for the task of recognizing human-object interactions from an egocentric perspective. Specifically, each interaction has been labeled both temporally (with action segments) and spatially (with active object bounding boxes). With the proposed dataset, we investigate four different tasks including 1) action recognition, 2) active object detection, 3) active object recognition and 4) egocentric human-object interaction detection, which is a revisited version of the standard human-object interaction detection task. Baseline results show that the MECCANO dataset is a challenging benchmark to study egocentric human-object interactions in industrial-like scenarios. We publicly release the dataset at https://iplab.dmi.unict.it/MECCANO.", "field": [], "task": ["Action Recognition", "Human-Object Interaction Detection", "Object Detection", "Object Recognition"], "method": [], "dataset": ["MECCANO"], "metric": ["mAP", "mAP@0.5 role", "Top-1 Accuracy"], "title": "The MECCANO Dataset: Understanding Human-Object Interactions from Egocentric Videos in an Industrial-like Domain"} {"abstract": "While traditional machine learning methods for malware detection largely depend on hand-designed features, which are based on experts' knowledge of the domain, end-to-end learning approaches take the raw executable as input, and try to learn a set of descriptive features from it. However, the latter may behave badly in problems where little data is available or where the dataset is imbalanced. In this paper we present HYDRA, a novel framework to address the task of malware detection and classification by combining various types of features to discover the relationships between distinct modalities. Our approach learns from various sources to maximize the benefits of multiple feature types to reflect the characteristics of malware executables. We propose a baseline system that consists of both hand-engineered and end-to-end components to combine the benefits of feature engineering and deep learning so that malware characteristics are effectively represented.
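A loose sketch of the fusion idea in the HYDRA abstract above, combining a hand-engineered feature vector with a pooled, end-to-end-learned byte representation before a shared classifier, follows; the mean-pooled "learned" branch, all shapes, and the class count are assumptions made purely for illustration.

```python
import numpy as np

# Illustrative fusion of hand-engineered and learned malware features.
def fuse_and_classify(hand_feats, byte_embeddings, w_fuse, w_out):
    learned = byte_embeddings.mean(axis=1)            # (N, D) pooled byte branch
    fused = np.concatenate([hand_feats, learned], axis=1)
    hidden = np.maximum(fused @ w_fuse, 0.0)          # ReLU fusion layer
    return hidden @ w_out                             # per-family scores

rng = np.random.default_rng(0)
logits = fuse_and_classify(rng.normal(size=(4, 32)),        # hand-engineered
                           rng.normal(size=(4, 100, 16)),   # learned, per token
                           rng.normal(size=(48, 64)),
                           rng.normal(size=(64, 9)))
print(logits.shape)   # (4, 9)
```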
An extensive analysis of state-of-the-art methods on the Microsoft Malware Classification Challenge benchmark shows that the proposed solution achieves comparable results to gradient boosting methods in the literature and higher yield in comparison with deep learning approaches.", "field": [], "task": ["Feature Engineering", "Malware Classification", "Malware Detection", "Multimodal Deep Learning"], "method": [], "dataset": ["Microsoft Malware Classification Challenge"], "metric": ["Accuracy (10-fold)", "Macro F1 (10-fold)"], "title": "HYDRA: A multimodal deep learning framework for malware classification"} {"abstract": "Video Retrieval is a challenging task where a text query is matched to a video or vice versa. Most of the existing approaches for addressing such a problem rely on annotations made by the users. Although simple, this approach is not always feasible in practice. In this work, we explore the application of the language-image model, CLIP, to obtain video representations without the need for said annotations. This model was explicitly trained to learn a common space where images and text can be compared. Using various techniques described in this document, we extended its application to videos, obtaining state-of-the-art results on the MSR-VTT and MSVD benchmarks.", "field": [], "task": ["Video Retrieval"], "method": [], "dataset": ["MSVD", "MSR-VTT-1kA", "MSR-VTT", "LSMDC"], "metric": ["text-to-video Median Rank", "text-to-video R@5", "video-to-text R@10", "text-to-video R@1", "video-to-text Median Rank", "video-to-text R@1", "text-to-video R@10", "video-to-text R@5"], "title": "A Straightforward Framework For Video Retrieval Using CLIP"} {"abstract": "In this paper, we describe TextEnt, a neural network model that learns\ndistributed representations of entities and documents directly from a knowledge\nbase (KB). Given a document in a KB consisting of words and entity annotations,\nwe train our model to predict the entity that the document describes and map\nthe document and its target entity close to each other in a continuous vector\nspace. Our model is trained using a large number of documents extracted from\nWikipedia. The performance of the proposed model is evaluated using two tasks,\nnamely fine-grained entity typing and multiclass text classification. The\nresults demonstrate that our model achieves state-of-the-art performance on\nboth tasks. The code and the trained representations are made available online\nfor further academic research.", "field": [], "task": ["Entity Typing", "Representation Learning", "Text Classification"], "method": [], "dataset": ["Freebase FIGER", "R8", "20NEWS"], "metric": ["Macro F1", "P@1", "F-measure", "BEP", "Accuracy", "Micro F1"], "title": "Representation Learning of Entities and Documents from Knowledge Base Descriptions"} {"abstract": "Recently, Visual Question Answering (VQA) has emerged as one of the most\nsignificant tasks in multimodal learning as it requires understanding both\nvisual and textual modalities. Existing methods mainly rely on extracting image\nand question features to learn their joint feature embedding via multimodal\nfusion or attention mechanism. Some recent studies utilize external\nVQA-independent models to detect candidate entities or attributes in images,\nwhich serve as semantic knowledge complementary to the VQA task. However, these\ncandidate entities or attributes might be unrelated to the VQA task and have\nlimited semantic capacities. 
To better utilize semantic knowledge in images, we\npropose a novel framework to learn visual relation facts for VQA. Specifically,\nwe build up a Relation-VQA (R-VQA) dataset based on the Visual Genome dataset\nvia a semantic similarity module, in which each data consists of an image, a\ncorresponding question, a correct answer and a supporting relation fact. A\nwell-defined relation detector is then adopted to predict visual\nquestion-related relation facts. We further propose a multi-step attention\nmodel composed of visual attention and semantic attention sequentially to\nextract related visual knowledge and semantic knowledge. We conduct\ncomprehensive experiments on the two benchmark datasets, demonstrating that our\nmodel achieves state-of-the-art performance and verifying the benefit of\nconsidering visual relation facts.", "field": [], "task": ["Question Answering", "Semantic Similarity", "Semantic Textual Similarity", "Visual Question Answering"], "method": [], "dataset": ["COCO Visual Question Answering (VQA) real images 1.0 open ended", "COCO Visual Question Answering (VQA) real images 1.0 multiple choice"], "metric": ["Percentage correct"], "title": "R-VQA: Learning Visual Relation Facts with Semantic Attention for Visual Question Answering"} {"abstract": "Unsupervised image-to-image translation aims at learning a joint distribution\nof images in different domains by using images from the marginal distributions\nin individual domains. Since there exists an infinite set of joint\ndistributions that can arrive the given marginal distributions, one could infer\nnothing about the joint distribution from the marginal distributions without\nadditional assumptions. To address the problem, we make a shared-latent space\nassumption and propose an unsupervised image-to-image translation framework\nbased on Coupled GANs. We compare the proposed framework with competing\napproaches and present high quality image translation results on various\nchallenging unsupervised image translation tasks, including street scene image\ntranslation, animal image translation, and face image translation. We also\napply the proposed framework to domain adaptation and achieve state-of-the-art\nperformance on benchmark datasets. Code and additional results are available in\nhttps://github.com/mingyuliutw/unit .", "field": [], "task": ["Domain Adaptation", "Image-to-Image Translation", "Multimodal Unsupervised Image-To-Image Translation", "Unsupervised Image-To-Image Translation"], "method": [], "dataset": ["Edge-to-Shoes", "Edge-to-Handbags", "Freiburg Forest Dataset", "Cats-and-Dogs", "EPFL NIR-VIS"], "metric": ["Quality", "PSNR", "Diversity", "CIS", "IS"], "title": "Unsupervised Image-to-Image Translation Networks"} {"abstract": "Video summarization aims to facilitate large-scale video browsing by\nproducing short, concise summaries that are diverse and representative of\noriginal videos. In this paper, we formulate video summarization as a\nsequential decision-making process and develop a deep summarization network\n(DSN) to summarize videos. DSN predicts for each video frame a probability,\nwhich indicates how likely a frame is selected, and then takes actions based on\nthe probability distributions to select frames, forming video summaries. To\ntrain our DSN, we propose an end-to-end, reinforcement learning-based\nframework, where we design a novel reward function that jointly accounts for\ndiversity and representativeness of generated summaries and does not rely on\nlabels or user interactions at all. 
During training, the reward function judges\nhow diverse and representative the generated summaries are, while DSN strives\nfor earning higher rewards by learning to produce more diverse and more\nrepresentative summaries. Since labels are not required, our method can be\nfully unsupervised. Extensive experiments on two benchmark datasets show that\nour unsupervised method not only outperforms other state-of-the-art\nunsupervised methods, but also is comparable to or even superior than most of\npublished supervised approaches.", "field": [], "task": ["Decision Making", "Supervised Video Summarization", "Unsupervised Video Summarization", "Video Summarization"], "method": [], "dataset": ["TvSum", "SumMe"], "metric": ["F1-score", "F1-score (Canonical)", "F1-score (Augmented)"], "title": "Deep Reinforcement Learning for Unsupervised Video Summarization with Diversity-Representativeness Reward"} {"abstract": "Deep learning methods achieve state-of-the-art performance in many\napplication scenarios. Yet, these methods require a significant amount of\nhyperparameters tuning in order to achieve the best results. In particular,\ntuning the learning rates in the stochastic optimization process is still one\nof the main bottlenecks. In this paper, we propose a new stochastic gradient\ndescent procedure for deep networks that does not require any learning rate\nsetting. Contrary to previous methods, we do not adapt the learning rates nor\nwe make use of the assumed curvature of the objective function. Instead, we\nreduce the optimization process to a game of betting on a coin and propose a\nlearning-rate-free optimal algorithm for this scenario. Theoretical convergence\nis proven for convex and quasi-convex functions and empirical evidence shows\nthe advantage of our algorithm over popular stochastic gradient algorithms.", "field": [], "task": ["Stochastic Optimization"], "method": [], "dataset": ["MNIST"], "metric": ["NLL"], "title": "Training Deep Networks without Learning Rates Through Coin Betting"} {"abstract": "In this paper we address three different computer vision tasks using a single\nbasic architecture: depth prediction, surface normal estimation, and semantic\nlabeling. We use a multiscale convolutional network that is able to adapt\neasily to each task using only small modifications, regressing from the input\nimage to the output map directly. Our method progressively refines predictions\nusing a sequence of scales, and captures many image details without any\nsuperpixels or low-level segmentation. We achieve state-of-the-art performance\non benchmarks for all three tasks.", "field": [], "task": ["Depth Estimation", "Monocular Depth Estimation"], "method": [], "dataset": ["NYU-Depth V2"], "metric": ["RMSE"], "title": "Predicting Depth, Surface Normals and Semantic Labels with a Common Multi-Scale Convolutional Architecture"} {"abstract": "The complementary characteristics of active and passive depth sensing\ntechniques motivate the fusion of the Li-DAR sensor and stereo camera for\nimproved depth perception. Instead of directly fusing estimated depths across\nLiDAR and stereo modalities, we take advantages of the stereo matching network\nwith two enhanced techniques: Input Fusion and Conditional Cost Volume\nNormalization (CCVNorm) on the LiDAR information. The proposed framework is\ngeneric and closely integrated with the cost volume component that is commonly\nutilized in stereo matching neural networks. 
We experimentally verify the\nefficacy and robustness of our method on the KITTI Stereo and Depth Completion\ndatasets, obtaining favorable performance against various fusion strategies.\nMoreover, we demonstrate that, with a hierarchical extension of CCVNorm, the\nproposed method brings only slight overhead to the stereo matching network in\nterms of computation time and model size. For project page, see\nhttps://zswang666.github.io/Stereo-LiDAR-CCVNorm-Project-Page/", "field": [], "task": ["Depth Completion", "Stereo Matching", "Stereo Matching Hand"], "method": [], "dataset": ["KITTI Depth Completion Validation"], "metric": ["RMSE"], "title": "3D LiDAR and Stereo Fusion using Stereo Matching Network with Conditional Cost Volume Normalization"} {"abstract": "We propose 4 insights that help to significantly improve the performance of deep learning models that predict surface normals and semantic labels from a single RGB image. These insights are: (1) denoise the \"ground truth\" surface normals in the training set to ensure consistency with the semantic labels; (2) concurrently train on a mix of real and synthetic data, instead of pretraining on synthetic and finetuning on real; (3) jointly predict normals and semantics using a shared model, but only backpropagate errors on pixels that have valid training labels; (4) slim down the model and use grayscale instead of color inputs. Despite the simplicity of these steps, we demonstrate consistently improved results on several datasets, using a model that runs at 12 fps on a standard mobile phone.", "field": [], "task": ["Semantic Segmentation", "Surface Normals Estimation"], "method": [], "dataset": ["ScanNetV2", "NYU Depth v2"], "metric": ["Mean Angle Error", "Pixel Accuracy", "% < 22.5", "RMSE", "% < 30", "% < 11.25"], "title": "Floors are Flat: Leveraging Semantics for Real-Time Surface Normal Prediction"} {"abstract": "We describe a modulation-domain loss function for deeplearning-based speech enhancement systems. Learnable\r\nspectro-temporal receptive fields (STRFs) were adapted to\r\noptimize for a speaker identification task. The learned STRFs\r\nwere then used to calculate a weighted mean-squared error (MSE) in the modulation domain for training a speech\r\nenhancement system. Experiments showed that adding the\r\nmodulation-domain MSE to the MSE in the spectro-temporal\r\ndomain substantially improved the objective prediction of\r\nspeech quality and intelligibility for real-time speech enhancement systems without incurring additional computation\r\nduring inference", "field": [], "task": ["Speaker Identification", "Speech Enhancement"], "method": [], "dataset": ["DNS Challenge"], "metric": ["PESQ-WB"], "title": "A MODULATION-DOMAIN LOSS FOR NEURAL-NETWORK-BASED REAL-TIME SPEECH ENHANCEMENT"} {"abstract": "This work proposes a new method to accurately complete sparse LiDAR maps\nguided by RGB images. For autonomous vehicles and robotics the use of LiDAR is\nindispensable in order to achieve precise depth predictions. A multitude of\napplications depend on the awareness of their surroundings, and use depth cues\nto reason and react accordingly. On the one hand, monocular depth prediction\nmethods fail to generate absolute and precise depth maps. On the other hand,\nstereoscopic approaches are still significantly outperformed by LiDAR based\napproaches. The goal of the depth completion task is to generate dense depth\npredictions from sparse and irregular point clouds which are mapped to a 2D\nplane. 
We propose a new framework which extracts both global and local\ninformation in order to produce proper depth maps. We argue that simple depth\ncompletion does not require a deep network. However, we additionally propose a\nfusion method with RGB guidance from a monocular camera in order to leverage\nobject information and to correct mistakes in the sparse input. This improves\nthe accuracy significantly. Moreover, confidence masks are exploited in order\nto take into account the uncertainty in the depth predictions from each\nmodality. This fusion method outperforms the state-of-the-art and ranks first\non the KITTI depth completion benchmark. Our code with visualizations is\navailable.", "field": [], "task": ["Autonomous Vehicles", "Depth Completion", "Depth Estimation"], "method": [], "dataset": ["KITTI Depth Completion"], "metric": ["iMAE", "RMSE", "Runtime [ms]", "MAE", "iRMSE"], "title": "Sparse and noisy LiDAR completion with RGB guidance and uncertainty"} {"abstract": "Action recognition is computationally expensive. In this paper, we address the problem of frame selection to improve the accuracy of action recognition. In particular, we show that selecting good frames helps in action recognition performance even in the trimmed videos domain. Recent work has successfully leveraged frame selection for long, untrimmed videos, where much of the content is not relevant, and easy to discard. In this work, however, we focus on the more standard short, trimmed action recognition problem. We argue that good frame selection can not only reduce the computational cost of action recognition but also increase the accuracy by getting rid of frames that are hard to classify. In contrast to previous work, we propose a method that instead of selecting frames by considering one at a time, considers them jointly. This results in a more efficient selection, where good frames are more effectively distributed over the video, like snapshots that tell a story. We call the proposed frame selection SMART and we test it in combination with different backbone architectures and on multiple benchmarks (Kinetics, Something-something, UCF101). We show that the SMART frame selection consistently improves the accuracy compared to other frame selection strategies while reducing the computational cost by a factor of 4 to 10 times. Additionally, we show that when the primary goal is recognition performance, our selection strategy can improve over recent state-of-the-art models and frame selection strategies on various benchmarks (UCF101, HMDB51, FCVID, and ActivityNet).", "field": [], "task": ["Action Recognition"], "method": [], "dataset": ["UCF101", "HMDB-51", "ActivityNet"], "metric": ["Average accuracy of 3 splits", "mAP", "3-fold Accuracy"], "title": "SMART Frame Selection for Action Recognition"} {"abstract": "Unsupervised approaches to learning in neural networks are of substantial\ninterest for furthering artificial intelligence, both because they would enable\nthe training of networks without the need for large numbers of expensive\nannotations, and because they would be better models of the kind of\ngeneral-purpose learning deployed by humans. However, unsupervised networks\nhave long lagged behind the performance of their supervised counterparts,\nespecially in the domain of large-scale visual recognition. 
Recent developments\nin training deep convolutional embeddings to maximize non-parametric instance\nseparation and clustering objectives have shown promise in closing this gap.\nHere, we describe a method that trains an embedding function to maximize a\nmetric of local aggregation, causing similar data instances to move together in\nthe embedding space, while allowing dissimilar instances to separate. This\naggregation metric is dynamic, allowing soft clusters of different scales to\nemerge. We evaluate our procedure on several large-scale visual recognition\ndatasets, achieving state-of-the-art unsupervised transfer learning performance\non object recognition in ImageNet, scene recognition in Places 205, and object\ndetection in PASCAL VOC.", "field": [], "task": ["Object Detection", "Object Recognition", "Scene Recognition", "Self-Supervised Image Classification", "Transfer Learning"], "method": [], "dataset": ["ImageNet"], "metric": ["Top 1 Accuracy (kNN, k=20)", "Number of Params", "Top 1 Accuracy"], "title": "Local Aggregation for Unsupervised Learning of Visual Embeddings"} {"abstract": "Learning text-video embeddings usually requires a dataset of video clips with manually provided captions. However, such datasets are expensive and time consuming to create and therefore difficult to obtain on a large scale. In this work, we propose instead to learn such embeddings from video data with readily available natural language annotations in the form of automatically transcribed narrations. The contributions of this work are three-fold. First, we introduce HowTo100M: a large-scale dataset of 136 million video clips sourced from 1.22M narrated instructional web videos depicting humans performing and describing over 23k different visual tasks. Our data collection procedure is fast, scalable and does not require any additional manual annotation. Second, we demonstrate that a text-video embedding trained on this data leads to state-of-the-art results for text-to-video retrieval and action localization on instructional video datasets such as YouCook2 or CrossTask. Finally, we show that this embedding transfers well to other domains: fine-tuning on generic Youtube videos (MSR-VTT dataset) and movies (LSMDC dataset) outperforms models trained on these datasets alone. Our dataset, code and models will be publicly available at: www.di.ens.fr/willow/research/howto100m/.", "field": [], "task": ["Action Localization", "Video Retrieval"], "method": [], "dataset": ["MSR-VTT-1kA", "CrossTask", "LSMDC", "YouCook2", "MSR-VTT"], "metric": ["text-to-video Median Rank", "Recall", "text-to-video R@5", "text-to-video R@1", "text-to-video R@10", "video-to-text R@5"], "title": "HowTo100M: Learning a Text-Video Embedding by Watching Hundred Million Narrated Video Clips"} {"abstract": "Dependency trees convey rich structural information that is proven useful for extracting relations among entities in text. However, how to effectively make use of relevant information while ignoring irrelevant information from the dependency trees remains a challenging research question. Existing approaches employing rule based hard-pruning strategies for selecting relevant partial dependency structures may not always yield optimal results. In this work, we propose Attention Guided Graph Convolutional Networks (AGGCNs), a novel model which directly takes full dependency trees as inputs. 
Our model can be understood as a soft-pruning approach that automatically learns how to selectively attend to the relevant sub-structures useful for the relation extraction task. Extensive results on various tasks including cross-sentence n-ary relation extraction and large-scale sentence-level relation extraction show that our model is able to better leverage the structural information of the full dependency trees, giving significantly better results than previous approaches.", "field": [], "task": ["Relation Extraction"], "method": [], "dataset": ["TACRED"], "metric": ["F1"], "title": "Attention Guided Graph Convolutional Networks for Relation Extraction"} {"abstract": "Traditional deep methods for skeleton-based action recognition usually structure the skeleton as a coordinates sequence or a pseudo-image to feed to RNNs or CNNs, which cannot explicitly exploit the natural connectivity among the joints. Recently, graph convolutional networks (GCNs), which generalize CNNs to more generic non-Euclidean structures, obtains remarkable performance for skeleton-based action recognition. However, the topology of the graph is set by hand and fixed over all layers, which may be not optimal for the action recognition task and the hierarchical CNN structures. Besides, the first-order information (the coordinate of joints) is mainly used in former GCNs, while the second-order information (the length and direction of bones) is less exploited. In this work, a novel two-stream nonlocal graph convolutional network is proposed to solve these problems. The topology of the graph in each layer of the model can be either uniformly or individually learned by BP algorithm, which brings more flexibility and generality. Meanwhile, a two-stream framework is proposed to model both of the joints and bones information simultaneously, which further boost the recognition performance. Extensive experiments on two large-scale datasets, NTU-RGB+D and Kinetics, demonstrate the performance of our model exceeds the state-of-the-art by a significant margin.", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition"], "method": [], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "Non-Local Graph Convolutional Networks for Skeleton-Based Action Recognition"} {"abstract": "Text-image cross-modal retrieval is a challenging task in the field of language and vision. Most previous approaches independently embed images and sentences into a joint embedding space and compare their similarities. However, previous approaches rarely explore the interactions between images and sentences before calculating similarities in the joint space. Intuitively, when matching between images and sentences, human beings would alternatively attend to regions in images and words in sentences, and select the most salient information considering the interaction between both modalities. In this paper, we propose Cross-modal Adaptive Message Passing (CAMP), which adaptively controls the information flow for message passing across modalities. Our approach not only takes comprehensive and fine-grained cross-modal interactions into account, but also properly handles negative pairs and irrelevant information with an adaptive gating scheme. Moreover, instead of conventional joint embedding approaches for text-image matching, we infer the matching score based on the fused features, and propose a hardest negative binary cross-entropy loss for training. 
Results on COCO and Flickr30k significantly surpass state-of-the-art methods, demonstrating the effectiveness of our approach.", "field": [], "task": ["Cross-Modal Retrieval", "Image Retrieval", "Text-Image Retrieval"], "method": [], "dataset": ["Flickr30K 1K test"], "metric": ["R@10", "R@1", "R@5"], "title": "CAMP: Cross-Modal Adaptive Message Passing for Text-Image Retrieval"} {"abstract": "Multi-turn retrieval-based conversation is an important task for building intelligent dialogue systems. Existing works mainly focus on matching candidate responses with every context utterance on multiple levels of granularity, which ignore the side effect of using excessive context information. Context utterances provide abundant information for extracting more matching features, but it also brings noise signals and unnecessary information. In this paper, we will analyze the side effect of using too many context utterances and propose a multi-hop selector network (MSN) to alleviate the problem. Specifically, MSN firstly utilizes a multi-hop selector to select the relevant utterances as context. Then, the model matches the filtered context with the candidate response and obtains a matching score. Experimental results show that MSN outperforms some state-of-the-art methods on three public multi-turn dialogue datasets.", "field": [], "task": ["Conversational Response Selection"], "method": [], "dataset": ["Ubuntu Dialogue (v1, Ranking)"], "metric": ["R10@1", "R10@5", "R10@2"], "title": "Multi-hop Selector Network for Multi-turn Response Selection in Retrieval-based Chatbots"} {"abstract": "It is challenging for weakly supervised object detection network to precisely predict the positions of the objects, since there are no instance-level category annotations. Most existing methods tend to solve this problem by using a two-phase learning procedure, i.e., multiple instance learning detector followed by a fully supervised learning detector with bounding-box regression. Based on our observation, this procedure may lead to local minima for some object categories. In this paper, we propose to jointly train the two phases in an end-to-end manner to tackle this problem. Specifically, we design a single network with both multiple instance learning and bounding-box regression branches that share the same backbone. Meanwhile, a guided attention module using classification loss is added to the backbone for effectively extracting the implicit location information in the features. Experimental results on public datasets show that our method achieves state-of-the-art performance.", "field": [], "task": ["Multiple Instance Learning", "Object Detection", "Regression", "Weakly Supervised Object Detection"], "method": [], "dataset": ["PASCAL VOC 2007", "PASCAL VOC 2012 test"], "metric": ["MAP"], "title": "Towards Precise End-to-end Weakly Supervised Object Detection Network"} {"abstract": "Graphs are complex objects that do not lend themselves easily to typical learning tasks. Recently, a range of approaches based on graph kernels or graph neural networks have been developed for graph classification and for representation learning on graphs in general. As the developed methodologies become more sophisticated, it is important to understand which components of the increasingly complex methods are necessary or most effective. As a first step, we develop a simple yet meaningful graph representation, and explore its effectiveness in graph classification. 
We test our baseline representation for the graph classification task on a range of graph datasets. Interestingly, this simple representation achieves similar performance as the state-of-the-art graph kernels and graph neural networks for non-attributed graph classification. Its performance on classifying attributed graphs is slightly weaker as it does not incorporate attributes. However, given its simplicity and efficiency, we believe that it still serves as an effective baseline for attributed graph classification. Our graph representation is efficient (linear-time) to compute. We also provide a simple connection with the graph neural networks. Note that these observations are only for the task of graph classification while existing methods are often designed for a broader scope including node embedding and link prediction. The results are also likely biased due to the limited amount of benchmark datasets available. Nevertheless, the good performance of our simple baseline calls for the development of new, more comprehensive benchmark datasets so as to better evaluate and analyze different graph learning methods. Furthermore, given the computational efficiency of our graph summary, we believe that it is a good candidate as a baseline method for future graph classification (or even other graph learning) studies.", "field": [], "task": ["Graph Classification", "Graph Learning", "Link Prediction", "Representation Learning"], "method": [], "dataset": ["ENZYMES", "PROTEINS", "D&D", "NCI1", "MUTAG", "PTC"], "metric": ["Accuracy"], "title": "A simple yet effective baseline for non-attributed graph classification"} {"abstract": "Knowledge graph embeddings rank among the most successful methods for link prediction in knowledge graphs, i.e., the task of completing an incomplete collection of relational facts. A downside of these models is their strong sensitivity to model hyperparameters, in particular regularizers, which have to be extensively tuned to reach good performance [Kadlec et al., 2017]. We propose an efficient method for large scale hyperparameter tuning by interpreting these models in a probabilistic framework. After a model augmentation that introduces per-entity hyperparameters, we use a variational expectation-maximization approach to tune thousands of such hyperparameters with minimal additional cost. Our approach is agnostic to details of the model and results in a new state of the art in link prediction on standard benchmark data.", "field": [], "task": ["Knowledge Graph Embeddings", "Knowledge Graphs", "Link Prediction"], "method": [], "dataset": [" FB15k", "WN18RR", "WN18", "FB15k-237"], "metric": ["Hits@10", "MRR"], "title": "Augmenting and Tuning Knowledge Graph Embeddings"} {"abstract": "We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. 
Our implementation is available at https://github.com/hojonathanho/diffusion", "field": [], "task": ["Denoising", "Image Generation", "Latent Variable Models"], "method": [], "dataset": ["LSUN Cat 256 x 256", "CIFAR-10", "LSUN Bedroom", "LSUN Churches 256 x 256", "LSUN Bedroom 256 x 256"], "metric": ["FID-50k", "Inception score", "FID", "bits/dimension"], "title": "Denoising Diffusion Probabilistic Models"} {"abstract": "Video semantic segmentation requires to utilize the complex temporal relations between frames of the video sequence. Previous works usually exploit accurate optical flow to leverage the temporal relations, which suffer much from heavy computational cost. In this paper, we propose a Temporal Memory Attention Network (TMANet) to adaptively integrate the long-range temporal relations over the video sequence based on the self-attention mechanism without exhaustive optical flow prediction. Specially, we construct a memory using several past frames to store the temporal information of the current frame. We then propose a temporal memory attention module to capture the relation between the current frame and the memory to enhance the representation of the current frame. Our method achieves new state-of-the-art performances on two challenging video semantic segmentation datasets, particularly 80.3% mIoU on Cityscapes and 76.5% mIoU on CamVid with ResNet-50.", "field": [], "task": ["Semantic Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["CamVid", "Cityscapes val"], "metric": ["Mean IoU", "mIoU"], "title": "Temporal Memory Attention for Video Semantic Segmentation"} {"abstract": "Sequence to sequence learning models still require several days to reach\nstate of the art performance on large benchmark datasets using a single\nmachine. This paper shows that reduced precision and large batch training can\nspeedup training by nearly 5x on a single 8-GPU machine with careful tuning and\nimplementation. On WMT'14 English-German translation, we match the accuracy of\nVaswani et al. (2017) in under 5 hours when training on 8 GPUs and we obtain a\nnew state of the art of 29.3 BLEU after training for 85 minutes on 128 GPUs. We\nfurther improve these results to 29.8 BLEU by training on the much larger\nParacrawl dataset. On the WMT'14 English-French task, we obtain a\nstate-of-the-art BLEU of 43.2 in 8.5 hours on 128 GPUs.", "field": [], "task": ["Machine Translation"], "method": [], "dataset": ["WMT2014 English-German", "WMT2014 English-French"], "metric": ["BLEU", "BLEU score"], "title": "Scaling Neural Machine Translation"} {"abstract": "Abstract meaning representations (AMRs) are broad-coverage sentence-level\nsemantic representations. AMRs represent sentences as rooted labeled directed\nacyclic graphs. AMR parsing is challenging partly due to the lack of annotated\nalignments between nodes in the graphs and words in the corresponding\nsentences. We introduce a neural parser which treats alignments as latent\nvariables within a joint probabilistic model of concepts, relations and\nalignments. As exact inference requires marginalizing over alignments and is\ninfeasible, we use the variational auto-encoding framework and a continuous\nrelaxation of the discrete alignments. We show that joint modeling is\npreferable to using a pipeline of align and parse. 
The parser achieves the best\nreported results on the standard benchmark (74.4% on LDC2016E25).", "field": [], "task": ["AMR Parsing"], "method": [], "dataset": ["LDC2015E86", "LDC2017T10"], "metric": ["Smatch"], "title": "AMR Parsing as Graph Prediction with Latent Alignment"} {"abstract": "At present, the vast majority of building blocks, techniques, and\narchitectures for deep learning are based on real-valued operations and\nrepresentations. However, recent work on recurrent neural networks and older\nfundamental theoretical analysis suggests that complex numbers could have a\nricher representational capacity and could also facilitate noise-robust memory\nretrieval mechanisms. Despite their attractive properties and potential for\nopening up entirely new neural architectures, complex-valued deep neural\nnetworks have been marginalized due to the absence of the building blocks\nrequired to design such models. In this work, we provide the key atomic\ncomponents for complex-valued deep neural networks and apply them to\nconvolutional feed-forward networks and convolutional LSTMs. More precisely, we\nrely on complex convolutions and present algorithms for complex\nbatch-normalization, complex weight initialization strategies for\ncomplex-valued neural nets and we use them in experiments with end-to-end\ntraining schemes. We demonstrate that such complex-valued models are\ncompetitive with their real-valued counterparts. We test deep complex models on\nseveral computer vision tasks, on music transcription using the MusicNet\ndataset and on Speech Spectrum Prediction using the TIMIT dataset. We achieve\nstate-of-the-art performance on these audio-related tasks.", "field": [], "task": ["Image Classification", "Music Transcription"], "method": [], "dataset": ["MusicNet", "SVHN", "CIFAR-10"], "metric": ["Percentage error", "Number of params", "APS", "Percentage correct"], "title": "Deep Complex Networks"} {"abstract": "Human face-to-face communication is a complex multimodal signal. We use words\n(language modality), gestures (vision modality) and changes in tone (acoustic\nmodality) to convey our intentions. Humans easily process and understand\nface-to-face communication, however, comprehending this form of communication\nremains a significant challenge for Artificial Intelligence (AI). AI must\nunderstand each modality and the interactions between them that shape human\ncommunication. In this paper, we present a novel neural architecture for\nunderstanding human communication called the Multi-attention Recurrent Network\n(MARN). The main strength of our model comes from discovering interactions\nbetween modalities through time using a neural component called the\nMulti-attention Block (MAB) and storing them in the hybrid memory of a\nrecurrent component called the Long-short Term Hybrid Memory (LSTHM). We\nperform extensive comparisons on six publicly available datasets for multimodal\nsentiment analysis, speaker trait recognition and emotion recognition. MARN\nshows state-of-the-art performance on all the datasets.", "field": [], "task": ["Emotion Recognition", "Multimodal Sentiment Analysis", "Sentiment Analysis"], "method": [], "dataset": ["MOSI"], "metric": ["Accuracy"], "title": "Multi-attention Recurrent Network for Human Communication Comprehension"} {"abstract": "We look at the problem of developing a compact and accurate model for gesture\nrecognition from videos in a deep-learning framework. 
Towards this we propose a\njoint 3DCNN-LSTM model that is end-to-end trainable and is shown to be better\nsuited to capture the dynamic information in actions. The solution achieves\nclose to state-of-the-art accuracy on the ChaLearn dataset, with only half the\nmodel size. We also explore ways to derive a much more compact representation\nin a knowledge distillation framework followed by model compression. The final\nmodel is less than $1~MB$ in size, which is less than one hundredth of our\ninitial model, with a drop of $7\\%$ in accuracy, and is suitable for real-time\ngesture recognition on mobile devices.", "field": [], "task": ["Gesture Recognition", "Knowledge Distillation", "Model Compression"], "method": [], "dataset": ["Chalearn 2014"], "metric": ["Accuracy"], "title": "Learning Deep and Compact Models for Gesture Recognition"} {"abstract": "We propose a new framework for abstractive text summarization based on a\nsequence-to-sequence oriented encoder-decoder model equipped with a deep\nrecurrent generative decoder (DRGN).\n Latent structure information implied in the target summaries is learned based\non a recurrent latent random model for improving the summarization quality.\n Neural variational inference is employed to address the intractable posterior\ninference for the recurrent latent variables.\n Abstractive summaries are generated based on both the generative latent\nvariables and the discriminative deterministic states.\n Extensive experiments on some benchmark datasets in different languages show\nthat DRGN achieves improvements over the state-of-the-art methods.", "field": [], "task": ["Abstractive Text Summarization", "Text Summarization", "Variational Inference"], "method": [], "dataset": ["GigaWord", "DUC 2004 Task 1"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Deep Recurrent Generative Decoder for Abstractive Text Summarization"} {"abstract": "We tackle the task of semi-supervised video object segmentation, i.e.\nsegmenting the pixels belonging to an object in the video using the ground\ntruth pixel mask for the first frame. We build on the recently introduced\none-shot video object segmentation (OSVOS) approach which uses a pretrained\nnetwork and fine-tunes it on the first frame. While achieving impressive\nperformance, at test time OSVOS uses the fine-tuned network in unchanged form\nand is not able to adapt to large changes in object appearance. To overcome\nthis limitation, we propose Online Adaptive Video Object Segmentation (OnAVOS)\nwhich updates the network online using training examples selected based on the\nconfidence of the network and the spatial configuration. Additionally, we add a\npretraining step based on objectness, which is learned on PASCAL. 
Our\nexperiments show that both extensions are highly effective and improve the\nstate of the art on DAVIS to an intersection-over-union score of 85.7%.", "field": [], "task": ["Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation", "Visual Object Tracking"], "method": [], "dataset": ["YouTube-VOS", "DAVIS 2017 (test-dev)", "DAVIS 2017 (val)", "YouTube", "DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "Jaccard (Unseen)", "F-Measure (Seen)", "Jaccard (Seen)", "mIoU", "F-measure (Recall)", "Jaccard (Decay)", "O (Average of Measures)", "Jaccard (Recall)", "F-measure (Mean)", "J&F", "F-Measure (Unseen)"], "title": "Online Adaptation of Convolutional Neural Networks for Video Object Segmentation"} {"abstract": "Transfer and multi-task learning have traditionally focused on either a\nsingle source-target pair or very few, similar tasks. Ideally, the linguistic\nlevels of morphology, syntax and semantics would benefit each other by being\ntrained in a single model. We introduce a joint many-task model together with a\nstrategy for successively growing its depth to solve increasingly complex\ntasks. Higher layers include shortcut connections to lower-level task\npredictions to reflect linguistic hierarchies. We use a simple regularization\nterm to allow for optimizing all model weights to improve one task's loss\nwithout exhibiting catastrophic interference of the other tasks. Our single\nend-to-end model obtains state-of-the-art or competitive results on five\ndifferent tasks from tagging, parsing, relatedness, and entailment tasks.", "field": [], "task": ["Chunking", "Multi-Task Learning"], "method": [], "dataset": ["Penn Treebank"], "metric": ["F1 score"], "title": "A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks"} {"abstract": "Large-scale multi-relational embedding refers to the task of learning the\nlatent representations for entities and relations in large knowledge graphs. An\neffective and scalable solution for this problem is crucial for the true\nsuccess of knowledge-based inference in a broad range of applications. This\npaper proposes a novel framework for optimizing the latent representations with\nrespect to the \\textit{analogical} properties of the embedded entities and\nrelations. By formulating the learning objective in a differentiable fashion,\nour model enjoys both theoretical power and computational scalability, and\nsignificantly outperformed a large number of representative baseline methods on\nbenchmark datasets. Furthermore, the model offers an elegant unification of\nseveral well-known methods in multi-relational embedding, which can be proven\nto be special instantiations of our framework.", "field": [], "task": ["Knowledge Graphs", "Link Prediction"], "method": [], "dataset": ["WN18"], "metric": ["Hits@10", "MRR", "Hits@3", "Hits@1"], "title": "Analogical Inference for Multi-Relational Embeddings"} {"abstract": "We present the first fully convolutional end-to-end solution for\ninstance-aware semantic segmentation task. It inherits all the merits of FCNs\nfor semantic segmentation and instance mask proposal. It performs instance mask\nprediction and classification jointly. The underlying convolutional\nrepresentation is fully shared between the two sub-tasks, as well as between\nall regions of interest. The proposed network is highly integrated and achieves\nstate-of-the-art performance in both accuracy and efficiency. 
It wins the COCO\n2016 segmentation competition by a large margin. Code would be released at\n\\url{https://github.com/daijifeng001/TA-FCN}.", "field": [], "task": ["Semantic Segmentation"], "method": [], "dataset": ["COCO test-dev"], "metric": ["AP50", "APM", "APL", "APS"], "title": "Fully Convolutional Instance-aware Semantic Segmentation"} {"abstract": "We study the problem of 3D object generation. We propose a novel framework,\nnamely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects\nfrom a probabilistic space by leveraging recent advances in volumetric\nconvolutional networks and generative adversarial nets. The benefits of our\nmodel are three-fold: first, the use of an adversarial criterion, instead of\ntraditional heuristic criteria, enables the generator to capture object\nstructure implicitly and to synthesize high-quality 3D objects; second, the\ngenerator establishes a mapping from a low-dimensional probabilistic space to\nthe space of 3D objects, so that we can sample objects without a reference\nimage or CAD models, and explore the 3D object manifold; third, the adversarial\ndiscriminator provides a powerful 3D shape descriptor which, learned without\nsupervision, has wide applications in 3D object recognition. Experiments\ndemonstrate that our method generates high-quality 3D objects, and our\nunsupervisedly learned features achieve impressive performance on 3D object\nrecognition, comparable with those of supervised learning methods.", "field": [], "task": ["3D Object Recognition", "Object Recognition"], "method": [], "dataset": ["Pix3D"], "metric": ["R@16", "R@8", "R@2", "R@4", "R@1", "R@32"], "title": "Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling"} {"abstract": "Given semantic descriptions of object classes, zero-shot learning aims to\naccurately recognize objects of the unseen classes, from which no examples are\navailable at the training stage, by associating them to the seen classes, from\nwhich labeled examples are provided. We propose to tackle this problem from the\nperspective of manifold learning. Our main idea is to align the semantic space\nthat is derived from external information to the model space that concerns\nitself with recognizing visual features. To this end, we introduce a set of\n\"phantom\" object classes whose coordinates live in both the semantic space and\nthe model space. Serving as bases in a dictionary, they can be optimized from\nlabeled data such that the synthesized real object classifiers achieve optimal\ndiscriminative performance. We demonstrate superior accuracy of our approach\nover the state of the art on four benchmark datasets for zero-shot learning,\nincluding the full ImageNet Fall 2011 dataset with more than 20,000 unseen\nclasses.", "field": [], "task": ["Zero-Shot Learning"], "method": [], "dataset": ["SUN - 0-Shot", "AWA - 0-Shot", "CUB-200-2011 - 0-Shot", "ImageNet - 0-Shot"], "metric": ["Top-1 Accuracy", "Accuracy"], "title": "Synthesized Classifiers for Zero-Shot Learning"} {"abstract": "Subspace clustering methods based on $\\ell_1$, $\\ell_2$ or nuclear norm\nregularization have become very popular due to their simplicity, theoretical\nguarantees and empirical success. However, the choice of the regularizer can\ngreatly impact both theory and practice. 
For instance, $\\ell_1$ regularization\nis guaranteed to give a subspace-preserving affinity (i.e., there are no\nconnections between points from different subspaces) under broad conditions\n(e.g., arbitrary subspaces and corrupted data). However, it requires solving a\nlarge scale convex optimization problem. On the other hand, $\\ell_2$ and\nnuclear norm regularization provide efficient closed form solutions, but\nrequire very strong assumptions to guarantee a subspace-preserving affinity,\ne.g., independent subspaces and uncorrupted data. In this paper we study a\nsubspace clustering method based on orthogonal matching pursuit. We show that\nthe method is both computationally efficient and guaranteed to give a\nsubspace-preserving affinity under broad conditions. Experiments on synthetic\ndata verify our theoretical analysis, and applications in handwritten digit and\nface clustering show that our approach achieves the best trade off between\naccuracy and efficiency.", "field": [], "task": ["Face Clustering", "Image Clustering"], "method": [], "dataset": ["Extended Yale-B"], "metric": ["Accuracy"], "title": "Scalable Sparse Subspace Clustering by Orthogonal Matching Pursuit"} {"abstract": "The ability to accurately model a sentence at varying stages (e.g.,\nword-phrase-sentence) plays a central role in natural language processing. As\nan effort towards this goal we propose a self-adaptive hierarchical sentence\nmodel (AdaSent). AdaSent effectively forms a hierarchy of representations from\nwords to phrases and then to sentences through recursive gated local\ncomposition of adjacent segments. We design a competitive mechanism (through\ngating networks) to allow the representations of the same sentence to be\nengaged in a particular learning task (e.g., classification), therefore\neffectively mitigating the gradient vanishing problem persistent in other\nrecursive models. Both qualitative and quantitative analysis shows that AdaSent\ncan automatically form and select the representations suitable for the task at\nhand during training, yielding superior classification performance over\ncompetitor models on 5 benchmark data sets.", "field": [], "task": ["Subjectivity Analysis"], "method": [], "dataset": ["SUBJ"], "metric": ["Accuracy"], "title": "Self-Adaptive Hierarchical Sentence Model"} {"abstract": "Unsupervised video object segmentation aims to automatically segment moving objects over an unconstrained video without any user annotation. So far, only few unsupervised online methods have been reported in literature and their performance is still far from satisfactory, because the complementary information from future frames cannot be processed under online setting. To solve this challenging problem, in this paper, we propose a novel Unsupervised Online Video Object Segmentation (UOVOS) framework by construing the motion property to mean moving in concurrence with a generic object for segmented regions. By incorporating salient motion detection and object proposal, a pixel-wise fusion strategy is developed to effectively remove detection noise such as dynamic background and stationary objects. Furthermore, by leveraging the obtained segmentation from immediately preceding frames, a forward propagation algorithm is employed to deal with unreliable motion detection and object proposals. Experimental results on several benchmark datasets demonstrate the efficacy of the proposed method. 
Compared to the state-of-the-art unsupervised online segmentation algorithms, the proposed method achieves an absolute gain of 6.2%. Moreover, our method achieves better performance than the best unsupervised offline algorithm on the DAVIS-2016 benchmark dataset. Our code is available on the project website: https://github.com/visiontao/uovos.", "field": [], "task": ["Motion Detection", "Semantic Segmentation", "Unsupervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "Unsupervised Online Video Object Segmentation with Motion Property Understanding"} {"abstract": "Video super-resolution (VSR) has become even more important recently to provide high resolution (HR) contents for ultra high definition displays. While many deep learning based VSR methods have been proposed, most of them rely heavily on the accuracy of motion estimation and compensation. We introduce a fundamentally different framework for VSR in this paper. We propose a novel end-to-end deep neural network that generates dynamic upsampling filters and a residual image, which are computed depending on the local spatio-temporal neighborhood of each pixel to avoid explicit motion compensation. With our approach, an HR image is reconstructed directly from the input image using the dynamic upsampling filters, and the fine details are added through the computed residual. Our network with the help of a new data augmentation technique can generate much sharper HR videos with temporal consistency, compared with the previous methods. We also provide analysis of our network through extensive experiments to show how the network deals with motions implicitly.", "field": [], "task": ["Data Augmentation", "Motion Compensation", "Motion Estimation", "Super-Resolution", "Video Super-Resolution"], "method": [], "dataset": ["Vid4 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Deep Video Super-Resolution Network Using Dynamic Upsampling Filters Without Explicit Motion Compensation"} {"abstract": "Learning in the space-time domain remains a very challenging problem in machine learning and computer vision. Current computational models for understanding spatio-temporal visual data are heavily rooted in the classical single-image based paradigm. It is not yet well understood how to integrate information in space and time into a single, general model. We propose a neural graph model, recurrent in space and time, suitable for capturing both the local appearance and the complex higher-level interactions of different entities and objects within the changing world scene. Nodes and edges in our graph have dedicated neural networks for processing information. Nodes operate over features extracted from local parts in space and time and previous memory states. Edges process messages between connected nodes at different locations and spatial scales or between past and present time. Messages are passed iteratively in order to transmit information globally and establish long range interactions. Our model is general and could learn to recognize a variety of high level spatio-temporal concepts and be applied to different learning tasks. We demonstrate, through extensive experiments and ablation studies, that our model outperforms strong baselines and top published methods on recognizing complex activities in video. 
Moreover, we obtain state-of-the-art performance on the challenging Something-Something human-object interaction dataset.", "field": [], "task": ["Action Recognition", "Human-Object Interaction Detection", "Video Understanding"], "method": [], "dataset": ["Something-Something V1"], "metric": ["Top 1 Accuracy"], "title": "Recurrent Space-time Graph Neural Networks"} {"abstract": "Inter-personal anatomical differences limit the accuracy of person-independent gaze estimation networks. Yet there is a need to lower gaze errors further to enable applications requiring higher quality. Further gains can be achieved by personalizing gaze networks, ideally with few calibration samples. However, over-parameterized neural networks are not amenable to learning from few examples as they can quickly over-fit. We embrace these challenges and propose a novel framework for Few-shot Adaptive GaZE Estimation (FAZE) for learning person-specific gaze networks with very few (less than or equal to 9) calibration samples. FAZE learns a rotation-aware latent representation of gaze via a disentangling encoder-decoder architecture along with a highly adaptable gaze estimator trained using meta-learning. It is capable of adapting to any new person to yield significant performance gains with as few as 3 samples, yielding state-of-the-art performance of 3.18 degrees on GazeCapture, a 19% improvement over prior art. We open-source our code at https://github.com/NVlabs/few_shot_gaze", "field": [], "task": ["Gaze Estimation", "Meta-Learning"], "method": [], "dataset": ["MPII Gaze"], "metric": ["Angular Error"], "title": "Few-Shot Adaptive Gaze Estimation"} {"abstract": "Conventionally, deep neural networks are trained offline, relying on a large dataset prepared in advance. This paradigm is often challenged in real-world applications, e.g. online services that involve continuous streams of incoming data. Recently, incremental learning receives increasing attention, and is considered as a promising solution to the practical challenges mentioned above. However, it has been observed that incremental learning is subject to a fundamental difficulty -- catastrophic forgetting, namely adapting a model to new data often results in severe performance degradation on previous tasks or classes. Our study reveals that the imbalance between previous and new data is a crucial cause to this problem. In this work, we develop a new framework for incrementally learning a unified classifier, e.g. a classifier that treats both old and new classes uniformly. Specifically, we incorporate three components, cosine normalization, less-forget constraint, and inter-class separation, to mitigate the adverse effects of the imbalance. Experiments show that the proposed method can effectively rebalance the training process, thus obtaining superior performance compared to the existing methods. On CIFAR-100 and ImageNet, our method can reduce the classification errors by more than 6% and 13% respectively, under the incremental setting of 10 phases.\r", "field": [], "task": ["Incremental Learning"], "method": [], "dataset": ["ImageNet - 500 classes + 10 steps of 50 classes", "CIFAR-100 - 50 classes + 10 steps of 5 classes", "CIFAR-100 - 50 classes + 5 steps of 10 classes"], "metric": ["Average Incremental Accuracy"], "title": "Learning a Unified Classifier Incrementally via Rebalancing"} {"abstract": "Shadow detection is an important and challenging task for scene understanding. Despite promising results from recent deep learning based methods. 
Existing works still struggle with ambiguous cases where the visual appearances of shadow and non-shadow regions are similar (referred to as distraction in our context). In this paper, we propose a Distraction-aware Shadow Detection Network (DSDNet) by explicitly learning and integrating the semantics of visual distraction regions in an end-to-end framework. At the core of our framework is a novel standalone, differentiable Distraction-aware Shadow (DS) module, which allows us to learn distraction-aware, discriminative features for robust shadow detection, by explicitly predicting false positives and false negatives. We conduct extensive experiments on three public shadow detection datasets, SBU, UCF and ISTD, to evaluate our method. Experimental results demonstrate that our model can boost shadow detection performance, by effectively suppressing the detection of false positives and false negatives, achieving state-of-the-art results.\r", "field": [], "task": ["Scene Understanding", "Shadow Detection"], "method": [], "dataset": ["SBU"], "metric": ["BER"], "title": "Distraction-Aware Shadow Detection"} {"abstract": "Advertising and feed ranking are essential to many Internet companies such as Facebook and Sina Weibo. Among many real-world advertising and feed ranking systems, click through rate (CTR) prediction plays a central role. There are many proposed models in this field such as logistic regression, tree based models, factorization machine based models and deep learning based CTR models. However, many current works calculate the feature interactions in a simple way such as Hadamard product and inner product and they care less about the importance of features. In this paper, a new model named FiBiNET as an abbreviation for Feature Importance and Bilinear feature Interaction NETwork is proposed to dynamically learn the feature importance and fine-grained feature interactions. On the one hand, the FiBiNET can dynamically learn the importance of features via the Squeeze-Excitation network (SENET) mechanism; on the other hand, it is able to effectively learn the feature interactions via bilinear function. We conduct extensive experiments on two real-world datasets and show that our shallow model outperforms other shallow models such as factorization machine(FM) and field-aware factorization machine(FFM). In order to improve performance further, we combine a classical deep neural network(DNN) component with the shallow model to be a deep model. The deep FiBiNET consistently outperforms the other state-of-the-art deep models such as DeepFM and extreme deep factorization machine(XdeepFM).", "field": [], "task": ["Click-Through Rate Prediction", "Feature Importance", "Regression"], "method": [], "dataset": ["Criteo"], "metric": ["Log Loss", "AUC"], "title": "FiBiNET: Combining Feature Importance and Bilinear feature Interaction for Click-Through Rate Prediction"} {"abstract": "In few-shot learning, a machine learning system learns from a small set of labelled examples relating to a specific task, such that it can generalize to new examples of the same task. Given the limited availability of labelled examples in such tasks, we wish to make use of all the information we can. Usually a model learns task-specific information from a small training-set (support-set) to predict on an unlabelled validation set (target-set). The target-set contains additional task-specific information which is not utilized by existing few-shot learning methods. 
Making use of the target-set examples via transductive learning requires approaches beyond the current methods; at inference time, the target-set contains only unlabelled input data-points, and so discriminative learning cannot be used. In this paper, we propose a framework called Self-Critique and Adapt or SCA, which learns to learn a label-free loss function, parameterized as a neural network. A base-model learns on a support-set using existing methods (e.g. stochastic gradient descent combined with the cross-entropy loss), and then is updated for the incoming target-task using the learnt loss function. This label-free loss function is itself optimized such that the learnt model achieves higher generalization performance. Experiments demonstrate that SCA offers substantially reduced error-rates compared to baselines which only adapt on the support-set, and results in state of the art benchmark performance on Mini-ImageNet and Caltech-UCSD Birds 200.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning"], "method": [], "dataset": ["Mini-Imagenet 5-way (1-shot)", "CUB 200 5-way 1-shot", "CUB 200 5-way 5-shot", "Mini-Imagenet 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Learning to learn via Self-Critique"} {"abstract": "The rapid development of deep learning (DL) has driven single image super-resolution (SR) into a new era. However, in most existing DL based image SR networks, the information flows are solely feedforward, and the high-level features cannot be fully explored. In this paper, we propose the gated multiple feedback network (GMFN) for accurate image SR, in which the representation of low-level features are efficiently enriched by rerouting multiple high-level features. We cascade multiple residual dense blocks (RDBs) and recurrently unfolds them across time. The multiple feedback connections between two adjacent time steps in the proposed GMFN exploits multiple high-level features captured under large receptive fields to refine the low-level features lacking enough contextual information. The elaborately designed gated feedback module (GFM) efficiently selects and further enhances useful information from multiple rerouted high-level features, and then refine the low-level features with the enhanced high-level information. Extensive experiments demonstrate the superiority of our proposed GMFN against state-of-the-art SR methods in terms of both quantitative metrics and visual quality. Code is available at https://github.com/liqilei/GMFN.", "field": [], "task": ["Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["Set14 - 4x upscaling", "Manga109 - 4x upscaling", "BSD100 - 4x upscaling", "Set5 - 4x upscaling", "Urban100 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Gated Multiple Feedback Network for Image Super-Resolution"} {"abstract": "We present an approach to recover absolute 3D human poses from multi-view images by incorporating multi-view geometric priors in our model. It consists of two separate steps: (1) estimating the 2D poses in multi-view images and (2) recovering the 3D poses from the multi-view 2D poses. First, we introduce a cross-view fusion scheme into CNN to jointly estimate 2D poses for multiple views. Consequently, the 2D pose estimation for each view already benefits from other views. Second, we present a recursive Pictorial Structure Model to recover the 3D pose from the multi-view 2D poses. It gradually improves the accuracy of 3D pose with affordable computational cost. 
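The multi-view pose record above lifts 2D detections from calibrated views to 3D. Below is a hedged sketch of plain linear (DLT) triangulation, offered as a generic baseline rather than that paper's recursive pictorial structure model; the toy projection matrices in the usage example are invented.

import numpy as np

def triangulate_point(points_2d, proj_mats):
    """points_2d: list of (x, y) per view; proj_mats: list of 3x4 camera matrices."""
    rows = []
    for (x, y), P in zip(points_2d, proj_mats):
        # Each view contributes two linear constraints on the homogeneous 3D point.
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize

if __name__ == "__main__":
    # Two toy cameras: an identity view and a camera shifted along x.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    X_true = np.array([0.2, -0.1, 4.0, 1.0])
    x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
    x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
    print(triangulate_point([x1, x2], [P1, P2]))  # approximately [0.2, -0.1, 4.0]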
We test our method on two public datasets H36M and Total Capture. The Mean Per Joint Position Errors on the two datasets are 26mm and 29mm, which outperforms the state-of-the-arts remarkably (26mm vs 52mm, 29mm vs 35mm). Our code is released at \\url{https://github.com/microsoft/multiview-human-pose-estimation-pytorch}.", "field": [], "task": ["3D Human Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["Total Capture", "Human3.6M"], "metric": ["Average MPJPE (mm)", "Multi-View or Monocular", "Using 2D ground-truth joints"], "title": "Cross View Fusion for 3D Human Pose Estimation"} {"abstract": "The WaveForm DataBase (WFDB) Toolbox for MATLAB/Octave enables integrated access to PhysioNet's software and databases. Using the WFDB Toolbox for MATLAB/Octave, users have access to over 50 physiological databases in PhysioNet. The toolbox provides access over 4 TB of biomedical signals including ECG, EEG, EMG, and PLETH. Additionally, most signals are accompanied by metadata such as medical annotations of clinical events: arrhythmias, sleep stages, seizures, hypotensive episodes, etc. Users of this toolbox should easily be able to reproduce, validate, and compare results published based on PhysioNet's software and databases.", "field": [], "task": ["Arrhythmia Detection", "EEG", "Electrocardiography (ECG)", "Seizure Detection", "Sleep Stage Detection"], "method": [], "dataset": ["The PhysioNet Computing in Cardiology Challenge 2017"], "metric": ["Accuracy (TEST-DB)", "Accuracy (TRAIN-DB)"], "title": "An Open-source Toolbox for Analysing and Processing PhysioNet Databases in MATLAB and Octave"} {"abstract": "Large-scale pre-training methods of learning cross-modal representations on image-text pairs are becoming popular for vision-language tasks. While existing methods simply concatenate image region features and text features as input to the model to be pre-trained and use self-attention to learn image-text semantic alignments in a brute force manner, in this paper, we propose a new learning method Oscar (Object-Semantics Aligned Pre-training), which uses object tags detected in images as anchor points to significantly ease the learning of alignments. Our method is motivated by the observation that the salient objects in an image can be accurately detected, and are often mentioned in the paired text. We pre-train an Oscar model on the public corpus of 6.5 million text-image pairs, and fine-tune it on downstream tasks, creating new state-of-the-arts on six well-established vision-language understanding and generation tasks.", "field": [], "task": ["Image Captioning", "Text-Image Retrieval", "Visual Question Answering"], "method": [], "dataset": ["COCO", "COCO (image as query)", "COCO Captions", "VQA v2 test-dev"], "metric": ["METEOR", "Recall@10", "SPICE", "CIDER", "Accuracy", "BLEU-4"], "title": "Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks"} {"abstract": "Unlabeled data is often abundant in the clinic, making machine learning methods based on semi-supervised learning a good match for this setting. Despite this, they are currently receiving relatively little attention in medical image analysis literature. Instead, most practitioners and researchers focus on supervised or transfer learning approaches. The recently proposed MixMatch and FixMatch algorithms have demonstrated promising results in extracting useful representations while requiring very few labels. 
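The record above adapts MixMatch and FixMatch to OCT classification. The snippet below sketches the core FixMatch-style consistency step under stated assumptions: the 0.95 confidence threshold, the tiny stand-in model, and the noise used as a "strong" augmentation are placeholders, not the authors' training code.

import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, weak_batch, strong_batch, threshold=0.95):
    """weak_batch / strong_batch: two augmentations of the same unlabeled images."""
    with torch.no_grad():
        probs = F.softmax(model(weak_batch), dim=1)
        conf, pseudo_labels = probs.max(dim=1)
        mask = (conf >= threshold).float()  # keep only confident pseudo-labels
    logits_strong = model(strong_batch)
    per_sample = F.cross_entropy(logits_strong, pseudo_labels, reduction="none")
    return (per_sample * mask).mean()

if __name__ == "__main__":
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(32 * 32 * 3, 4))
    weak = torch.randn(8, 3, 32, 32)
    strong = weak + 0.3 * torch.randn_like(weak)  # stand-in for strong augmentation
    print(fixmatch_unlabeled_loss(model, weak, strong))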
Motivated by these recent successes, we apply MixMatch and FixMatch in an ophthalmological diagnostic setting and investigate how they fare against standard transfer learning. We find that both algorithms outperform the transfer learning baseline on all fractions of labelled data. Furthermore, our experiments show that exponential moving average (EMA) of model parameters, which is a component of both algorithms, is not needed for our classification problem, as disabling it leaves the outcome unchanged. Our code is available online: https://github.com/Valentyn1997/oct-diagn-semi-supervised", "field": [], "task": ["Retinal OCT Disease Classification", "Semi-Supervised Image Classification", "Transfer Learning"], "method": [], "dataset": ["OCT2017"], "metric": ["Acc"], "title": "Matching the Clinical Reality: Accurate OCT-Based Diagnosis From Few Labels"} {"abstract": "The encoder-decoder framework is state-of-the-art for offline semantic image\nsegmentation. Since the rise in autonomous systems, real-time computation is\nincreasingly desirable. In this paper, we introduce fast segmentation\nconvolutional neural network (Fast-SCNN), an above real-time semantic\nsegmentation model on high resolution image data (1024x2048px) suited to\nefficient computation on embedded devices with low memory. Building on existing\ntwo-branch methods for fast segmentation, we introduce our `learning to\ndownsample' module which computes low-level features for multiple resolution\nbranches simultaneously. Our network combines spatial detail at high resolution\nwith deep features extracted at lower resolution, yielding an accuracy of 68.0%\nmean intersection over union at 123.5 frames per second on Cityscapes. We also\nshow that large scale pre-training is unnecessary. We thoroughly validate our\nmetric in experiments with ImageNet pre-training and the coarse labeled data of\nCityscapes. Finally, we show even faster computation with competitive results\non subsampled inputs, without any network modifications.", "field": [], "task": ["Real-Time Semantic Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["Cityscapes val", "Cityscapes test"], "metric": ["Mean IoU (class)", "mIoU"], "title": "Fast-SCNN: Fast Semantic Segmentation Network"} {"abstract": "We propose a novel alignment mechanism to deal with procedural reasoning on a newly released multimodal QA dataset, named RecipeQA. Our model is solving the textual cloze task which is a reading comprehension on a recipe containing images and instructions. We exploit the power of attention networks, cross-modal representations, and a latent alignment space between instructions and candidate answers to solve the problem. We introduce constrained max-pooling which refines the max-pooling operation on the alignment matrix to impose disjoint constraints among the outputs of the model. Our evaluation result indicates a 19\\% improvement over the baselines.", "field": [], "task": ["Question Answering", "Reading Comprehension"], "method": [], "dataset": ["RecipeQA"], "metric": ["Accuracy"], "title": "Latent Alignment of Procedural Concepts in Multimodal Recipes"} {"abstract": "Emotion recognition (ER) is an important task in Natural Language Processing (NLP), due to its high impact in real-world applications from health and well-being to author profiling, consumer analysis and security. Current approaches to ER, mainly classify emotions independently without considering that emotions can co-exist. 
Such approaches overlook potential ambiguities, in which multiple emotions overlap. We propose a new model \"SpanEmo\" casting multi-label emotion classification as span-prediction, which can aid ER models to learn associations between labels and words in a sentence. Furthermore, we introduce a loss function focused on modelling multiple co-existing emotions in the input sentence. Experiments performed on the SemEval2018 multi-label emotion data over three language sets (i.e., English, Arabic and Spanish) demonstrate our method's effectiveness. Finally, we present different analyses that illustrate the benefits of our method in terms of improving the model performance and learning meaningful associations between emotion classes and words in the sentence.", "field": [], "task": ["Emotion Classification", "Emotion Recognition"], "method": [], "dataset": ["SemEval 2018 Task 1E-c"], "metric": ["Micro-F1", "Macro-F1", "Accuracy"], "title": "SpanEmo: Casting Multi-label Emotion Classification as Span-prediction"} {"abstract": "Classifying facial expressions into different categories requires capturing\nregional distortions of facial landmarks. We believe that second-order\nstatistics such as covariance is better able to capture such distortions in\nregional facial fea- tures. In this work, we explore the benefits of using a\nman- ifold network structure for covariance pooling to improve facial\nexpression recognition. In particular, we first employ such kind of manifold\nnetworks in conjunction with tradi- tional convolutional networks for spatial\npooling within in- dividual image feature maps in an end-to-end deep learning\nmanner. By doing so, we are able to achieve a recognition accuracy of 58.14% on\nthe validation set of Static Facial Expressions in the Wild (SFEW 2.0) and\n87.0% on the vali- dation set of Real-World Affective Faces (RAF) Database.\nBoth of these results are the best results we are aware of. Besides, we\nleverage covariance pooling to capture the tem- poral evolution of per-frame\nfeatures for video-based facial expression recognition. Our reported results\ndemonstrate the advantage of pooling image-set features temporally by stacking\nthe designed manifold network of covariance pool-ing on top of convolutional\nnetwork layers.", "field": [], "task": ["Facial Expression Recognition"], "method": [], "dataset": [" Static Facial Expressions in the Wild", "Real-World Affective Faces"], "metric": ["Accuracy"], "title": "Covariance Pooling For Facial Expression Recognition"} {"abstract": "In this paper, we study the problem of learning image classification models\nwith label noise. Existing approaches depending on human supervision are\ngenerally not scalable as manually identifying correct or incorrect labels is\ntime-consuming, whereas approaches not relying on human supervision are\nscalable but less effective. To reduce the amount of human supervision for\nlabel noise cleaning, we introduce CleanNet, a joint neural embedding network,\nwhich only requires a fraction of the classes being manually verified to\nprovide the knowledge of label noise that can be transferred to other classes.\nWe further integrate CleanNet and conventional convolutional neural network\nclassifier into one framework for image classification learning. We demonstrate\nthe effectiveness of the proposed algorithm on both of the label noise\ndetection task and the image classification on noisy data task on several\nlarge-scale datasets. 
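The label-noise record above compares an image embedding against class-level knowledge to decide whether a label can be trusted. The helper below is only a loose illustration of prototype-based label verification with made-up shapes and an arbitrary cosine-similarity threshold; it is not the CleanNet architecture.

import torch
import torch.nn.functional as F

def flag_noisy_labels(image_embs, labels, class_prototypes, threshold=0.5):
    """image_embs: (N, D); labels: (N,); class_prototypes: (C, D).
    Returns a boolean mask that is True where the claimed label looks suspicious."""
    image_embs = F.normalize(image_embs, dim=1)
    prototypes = F.normalize(class_prototypes, dim=1)
    # Cosine similarity between each image and the prototype of its claimed class.
    sims = (image_embs * prototypes[labels]).sum(dim=1)
    return sims < threshold

if __name__ == "__main__":
    embs = torch.randn(10, 128)
    labels = torch.randint(0, 5, (10,))
    prototypes = torch.randn(5, 128)
    print(flag_noisy_labels(embs, labels, prototypes))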
Experimental results show that CleanNet can reduce label\nnoise detection error rate on held-out classes where no human supervision\navailable by 41.5% compared to current weakly supervised methods. It also\nachieves 47% of the performance gain of verifying all images with only 3.2%\nimages verified on an image classification task. Source code and dataset will\nbe available at kuanghuei.github.io/CleanNetProject.", "field": [], "task": ["Image Classification", "Transfer Learning"], "method": [], "dataset": ["Food-101N", "Clothing1M"], "metric": ["Accuracy"], "title": "CleanNet: Transfer Learning for Scalable Image Classifier Training with Label Noise"} {"abstract": "CNN architectures have terrific recognition performance but rely on spatial\npooling which makes it difficult to adapt them to tasks that require dense,\npixel-accurate labeling. This paper makes two contributions: (1) We demonstrate\nthat while the apparent spatial resolution of convolutional feature maps is\nlow, the high-dimensional feature representation contains significant sub-pixel\nlocalization information. (2) We describe a multi-resolution reconstruction\narchitecture based on a Laplacian pyramid that uses skip connections from\nhigher resolution feature maps and multiplicative gating to successively refine\nsegment boundaries reconstructed from lower-resolution maps. This approach\nyields state-of-the-art semantic segmentation results on the PASCAL VOC and\nCityscapes segmentation benchmarks without resorting to more complex\nrandom-field inference or instance detection driven architectures.", "field": [], "task": ["Semantic Segmentation"], "method": [], "dataset": ["Cityscapes test"], "metric": ["Mean IoU (class)"], "title": "Laplacian Pyramid Reconstruction and Refinement for Semantic Segmentation"} {"abstract": "Problems at the intersection of vision and language are of significant\nimportance both as challenging research questions and for the rich set of\napplications they enable. However, inherent structure in our world and bias in\nour language tend to be a simpler signal for learning than visual modalities,\nresulting in models that ignore visual information, leading to an inflated\nsense of their capability.\n We propose to counter these language priors for the task of Visual Question\nAnswering (VQA) and make vision (the V in VQA) matter! Specifically, we balance\nthe popular VQA dataset by collecting complementary images such that every\nquestion in our balanced dataset is associated with not just a single image,\nbut rather a pair of similar images that result in two different answers to the\nquestion. Our dataset is by construction more balanced than the original VQA\ndataset and has approximately twice the number of image-question pairs. Our\ncomplete balanced dataset is publicly available at www.visualqa.org as part of\nthe 2nd iteration of the Visual Question Answering Dataset and Challenge (VQA\nv2.0).\n We further benchmark a number of state-of-art VQA models on our balanced\ndataset. All models perform significantly worse on our balanced dataset,\nsuggesting that these models have indeed learned to exploit language priors.\nThis finding provides the first concrete empirical evidence for what seems to\nbe a qualitative sense among practitioners.\n Finally, our data collection protocol for identifying complementary images\nenables us to develop a novel interpretable model, which in addition to\nproviding an answer to the given (image, question) pair, also provides a\ncounter-example based explanation. 
Specifically, it identifies an image that is\nsimilar to the original image, but it believes has a different answer to the\nsame question. This can help in building trust for machines among their users.", "field": [], "task": ["Visual Question Answering"], "method": [], "dataset": ["VQA v2 test-std", "COCO Visual Question Answering (VQA) real images 2.0 open ended"], "metric": ["overall", "Percentage correct"], "title": "Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering"} {"abstract": "We propose a novel deep layer cascade (LC) method to improve the accuracy and\nspeed of semantic segmentation. Unlike the conventional model cascade (MC) that\nis composed of multiple independent models, LC treats a single deep model as a\ncascade of several sub-models. Earlier sub-models are trained to handle easy\nand confident regions, and they progressively feed-forward harder regions to\nthe next sub-model for processing. Convolutions are only calculated on these\nregions to reduce computations. The proposed method possesses several\nadvantages. First, LC classifies most of the easy regions in the shallow stage\nand makes deeper stage focuses on a few hard regions. Such an adaptive and\n'difficulty-aware' learning improves segmentation performance. Second, LC\naccelerates both training and testing of deep network thanks to early decisions\nin the shallow stage. Third, in comparison to MC, LC is an end-to-end trainable\nframework, allowing joint learning of all sub-models. We evaluate our method on\nPASCAL VOC and Cityscapes datasets, achieving state-of-the-art performance and\nfast speed.", "field": [], "task": ["Semantic Segmentation"], "method": [], "dataset": ["PASCAL VOC 2012 test"], "metric": ["Mean IoU"], "title": "Not All Pixels Are Equal: Difficulty-aware Semantic Segmentation via Deep Layer Cascade"} {"abstract": "This paper addresses the challenge of 6DoF pose estimation from a single RGB\nimage under severe occlusion or truncation. Many recent works have shown that a\ntwo-stage approach, which first detects keypoints and then solves a\nPerspective-n-Point (PnP) problem for pose estimation, achieves remarkable\nperformance. However, most of these methods only localize a set of sparse\nkeypoints by regressing their image coordinates or heatmaps, which are\nsensitive to occlusion and truncation. Instead, we introduce a Pixel-wise\nVoting Network (PVNet) to regress pixel-wise unit vectors pointing to the\nkeypoints and use these vectors to vote for keypoint locations using RANSAC.\nThis creates a flexible representation for localizing occluded or truncated\nkeypoints. Another important feature of this representation is that it provides\nuncertainties of keypoint locations that can be further leveraged by the PnP\nsolver. Experiments show that the proposed approach outperforms the state of\nthe art on the LINEMOD, Occlusion LINEMOD and YCB-Video datasets by a large\nmargin, while being efficient for real-time pose estimation. We further create\na Truncation LINEMOD dataset to validate the robustness of our approach against\ntruncation. 
The code will be avaliable at https://zju-3dv.github.io/pvnet/.", "field": [], "task": ["6D Pose Estimation using RGB", "Pose Estimation"], "method": [], "dataset": ["LineMOD", "YCB-Video", "Occlusion LineMOD"], "metric": ["Mean ADD", "Accuracy (ADD)", "Mean AUC", "Accuracy"], "title": "PVNet: Pixel-wise Voting Network for 6DoF Pose Estimation"} {"abstract": "Extreme multi-label text classification (XMTC) aims at tagging a document with most relevant labels from an extremely large-scale label set. It is a challenging problem especially for the tail labels because there are only few training documents to build classifier. This paper is motivated to better explore the semantic relationship between each document and extreme labels by taking advantage of both document content and label correlation. Our objective is to establish an explicit label-aware representation for each document with a hybrid attention deep neural network model(LAHA). LAHA consists of three parts. The first part adopts a multi-label self-attention mechanism to detect the contribution of each word to labels. The second part exploits the label structure and document content to determine the semantic connection between words and labels in a same latent space. An adaptive fusion strategy is designed in the third part to obtain the final label-aware document representation so that the essence of previous two parts can be sufficiently integrated. Extensive experiments have been conducted on six benchmark datasets by comparing with the state-of-the-art methods. The results show the superiority of our proposed LAHA method, especially on the tail labels.", "field": [], "task": ["Multi-Label Text Classification", "Text Classification"], "method": [], "dataset": ["Wiki-30K", "AAPD", "Kan-Shan Cup", "Amazon-12K", "EUR-Lex"], "metric": ["P@3", "P@5", "nDCG@3", "P@1", "nDCG@5"], "title": "Label-aware Document Representation via Hybrid Attention for Extreme Multi-Label Text Classification"} {"abstract": "Aggregation structures with explicit information, such as image attributes and scene semantics, are effective and popular for intelligent systems for assessing aesthetics of visual data. However, useful information may not be available due to the high cost of manual annotation and expert design. In this paper, we present a novel multi-patch (MP) aggregation method for image aesthetic assessment. Different from state-of-the-art methods, which augment an MP aggregation network with various visual attributes, we train the model in an end-to-end manner with aesthetic labels only (i.e., aesthetically positive or negative). We achieve the goal by resorting to an attention-based mechanism that adaptively adjusts the weight of each patch during the training process to improve learning efficiency. In addition, we propose a set of objectives with three typical attention mechanisms (i.e., average, minimum, and adaptive) and evaluate their effectiveness on the Aesthetic Visual Analysis (AVA) benchmark. Numerical results show that our approach outperforms existing methods by a large margin. We further verify the effectiveness of the proposed attention-based objectives via ablation studies and shed light on the design of aesthetic assessment systems.", "field": [], "task": ["Aesthetics Quality Assessment"], "method": [], "dataset": ["AVA"], "metric": ["Accuracy"], "title": "Attention-based Multi-Patch Aggregation for Image Aesthetic Assessment"} {"abstract": "Semantic image synthesis aims at generating photorealistic images from semantic layouts. 
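One record above aggregates several image patches with learned attention for aesthetic assessment. The module below is a minimal sketch of attention-weighted patch pooling; the two-layer scorer and hidden size are assumptions, and it does not reproduce that paper's average, minimum, or adaptive attention objectives.

import torch
import torch.nn as nn

class AttentivePatchPooling(nn.Module):
    """Aggregates per-patch features into one image-level feature via softmax attention."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, patch_feats):
        # patch_feats: (batch, num_patches, dim)
        scores = self.scorer(patch_feats)          # (batch, num_patches, 1)
        weights = torch.softmax(scores, dim=1)     # attention over patches
        return (weights * patch_feats).sum(dim=1)  # (batch, dim)

if __name__ == "__main__":
    pool = AttentivePatchPooling(dim=256)
    feats = torch.randn(4, 9, 256)  # e.g., 9 random crops per image
    print(pool(feats).shape)  # torch.Size([4, 256])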
Previous approaches with conditional generative adversarial networks (GAN) show state-of-the-art performance on this task, which either feed the semantic label maps as inputs to the generator, or use them to modulate the activations in normalization layers via affine transformations. We argue that convolutional kernels in the generator should be aware of the distinct semantic labels at different locations when generating images. In order to better exploit the semantic layout for the image generator, we propose to predict convolutional kernels conditioned on the semantic label map to generate the intermediate feature maps from the noise maps and eventually generate the images. Moreover, we propose a feature pyramid semantics-embedding discriminator, which is more effective in enhancing fine details and semantic alignments between the generated images and the input semantic layouts than previous multi-scale discriminators. We achieve state-of-the-art results on both quantitative metrics and subjective evaluation on various semantic segmentation datasets, demonstrating the effectiveness of our approach.", "field": [], "task": ["Image Generation", "Image-to-Image Translation", "Semantic Segmentation"], "method": [], "dataset": ["ADE20K Labels-to-Photos", "COCO-Stuff Labels-to-Photos", "Cityscapes Labels-to-Photo"], "metric": ["Accuracy", "FID", "Per-pixel Accuracy", "mIoU"], "title": "Learning to Predict Layout-to-image Conditional Convolutions for Semantic Image Synthesis"} {"abstract": "The ability to learn new concepts with small amounts of data is a critical aspect of intelligence that has proven challenging for deep learning methods. Meta-learning has emerged as a promising technique for leveraging data from previous tasks to enable efficient learning of new tasks. However, most meta-learning algorithms implicitly require that the meta-training tasks be mutually-exclusive, such that no single model can solve all of the tasks at once. For example, when creating tasks for few-shot image classification, prior work uses a per-task random assignment of image classes to N-way classification labels. If this is not done, the meta-learner can ignore the task training data and learn a single model that performs all of the meta-training tasks zero-shot, but does not adapt effectively to new image classes. This requirement means that the user must take great care in designing the tasks, for example by shuffling labels or removing task identifying information from the inputs. In some domains, this makes meta-learning entirely inapplicable. In this paper, we address this challenge by designing a meta-regularization objective using information theory that places precedence on data-driven adaptation. This causes the meta-learner to decide what must be learned from the task training data and what should be inferred from the task testing input. By doing so, our algorithm can successfully use data from non-mutually-exclusive tasks to efficiently adapt to novel tasks. We demonstrate its applicability to both contextual and gradient-based meta-learning algorithms, and apply it in practical settings where applying standard meta-learning has been difficult. 
Our approach substantially outperforms standard meta-learning algorithms in these settings.", "field": [], "task": ["Few-Shot Image Classification", "Image Classification", "Meta-Learning"], "method": [], "dataset": ["OMNIGLOT - 5-Shot, 20-way", "OMNIGLOT - 1-Shot, 20-way"], "metric": ["Accuracy"], "title": "Meta-Learning without Memorization"} {"abstract": "Attributes and objects can compose diverse compositions. To model the compositional nature of these general concepts, it is a good choice to learn them through transformations, such as coupling and decoupling. However, complex transformations need to satisfy specific principles to guarantee the rationality. In this paper, we first propose a previously ignored principle of attribute-object transformation: Symmetry. For example, coupling peeled-apple with attribute peeled should result in peeled-apple, and decoupling peeled from apple should still output apple. Incorporating the symmetry principle, a transformation framework inspired by group theory is built, i.e. SymNet. SymNet consists of two modules, Coupling Network and Decoupling Network. With the group axioms and symmetry property as objectives, we adopt Deep Neural Networks to implement SymNet and train it in an end-to-end paradigm. Moreover, we propose a Relative Moving Distance (RMD) based recognition method to utilize the attribute change instead of the attribute pattern itself to classify attributes. Our symmetry learning can be utilized for the Compositional Zero-Shot Learning task and outperforms the state-of-the-art on widely-used benchmarks. Code is available at https://github.com/DirtyHarryLYL/SymNet.", "field": [], "task": ["Compositional Zero-Shot Learning", "Zero-Shot Learning"], "method": [], "dataset": ["UT-Zappos", "MIT-States", "MIT-States, generalized split"], "metric": ["Val AUC top 3", "Top-2 accuracy %", "Test AUC top 2", "Val AUC top 2", "Val AUC top 1", "Seen accuracy", "H-Mean", "Top-1 accuracy %", "Test AUC top 3", "Test AUC top 1", "Unseen accuracy", "Top-3 accuracy %"], "title": "Symmetry and Group in Attribute-Object Compositions"} {"abstract": "Automatic age and gender classification has become relevant to an increasing amount of applications, particularly since the rise of social platforms and social media. Nevertheless, performance of existing methods on real-world images is still significantly lacking, especially when compared to the tremendous leaps in performance recently reported for the related task of face recognition. In this paper we show that by learning representations through the use of deep-convolutional neural networks (CNN), a significant increase in performance can be obtained on these tasks. To this end, we propose a simple convolutional net architecture that can be used even when the amount of learning data is limited. We evaluate our method on the recent Adience benchmark for age and gender estimation and show it to dramatically outperform current state-of-the-art methods.", "field": [], "task": ["Age And Gender Classification", "Face Recognition"], "method": [], "dataset": ["Adience Age", "Adience Gender"], "metric": ["Accuracy (5-fold)"], "title": "Age and Gender Classification using Convolutional Neural Networks"} {"abstract": "Machine translation systems achieve near human-level performance on some\nlanguages, yet their effectiveness strongly relies on the availability of large\namounts of parallel sentences, which hinders their applicability to the\nmajority of language pairs. 
This work investigates how to learn to translate\nwhen having access to only large monolingual corpora in each language. We\npropose two model variants, a neural and a phrase-based model. Both versions\nleverage a careful initialization of the parameters, the denoising effect of\nlanguage models and automatic generation of parallel data by iterative\nback-translation. These models are significantly better than methods from the\nliterature, while being simpler and having fewer hyper-parameters. On the\nwidely used WMT'14 English-French and WMT'16 German-English benchmarks, our\nmodels respectively obtain 28.1 and 25.2 BLEU points without using a single\nparallel sentence, outperforming the state of the art by more than 11 BLEU\npoints. On low-resource languages like English-Urdu and English-Romanian, our\nmethods achieve even better results than semi-supervised and supervised\napproaches leveraging the paucity of available bitexts. Our code for NMT and\nPBSMT is publicly available.", "field": [], "task": ["Machine Translation", "Unsupervised Machine Translation"], "method": [], "dataset": ["WMT2016 English-Russian", "WMT2016 English-German", "WMT2014 French-English", "WMT2016 English-Romanian", "WMT2016 German-English", "WMT2014 English-German", "WMT2014 English-French"], "metric": ["BLEU", "BLEU score"], "title": "Phrase-Based & Neural Unsupervised Machine Translation"} {"abstract": "Community Question Answering (cQA) forums are very popular nowadays, as they\nrepresent effective means for communities around particular topics to share\ninformation. Unfortunately, this information is not always factual. Thus, here\nwe explore a new dimension in the context of cQA, which has been ignored so\nfar: checking the veracity of answers to particular questions in cQA forums. As\nthis is a new problem, we create a specialized dataset for it. We further\npropose a novel multi-faceted model, which captures information from the answer\ncontent (what is said and how), from the author profile (who says it), from the\nrest of the community forum (where it is said), and from external authoritative\nsources of information (external support). Evaluation results show a MAP value\nof 86.54, which is 21 points absolute above the baseline.", "field": [], "task": ["Community Question Answering", "Question Answering"], "method": [], "dataset": ["LIAR"], "metric": ["32x32 Accuracy"], "title": "Fact Checking in Community Forums"} {"abstract": "Recent advances in deep reinforcement learning have made significant strides\nin performance on applications such as Go and Atari games. However, developing\npractical methods to balance exploration and exploitation in complex domains\nremains largely unsolved. Thompson Sampling and its extension to reinforcement\nlearning provide an elegant approach to exploration that only requires access\nto posterior samples of the model. At the same time, advances in approximate\nBayesian methods have made posterior approximation for flexible neural network\nmodels practical. Thus, it is attractive to consider approximate Bayesian\nneural networks in a Thompson Sampling framework. To understand the impact of\nusing an approximate posterior on Thompson Sampling, we benchmark\nwell-established and recently developed methods for approximate posterior\nsampling combined with Thompson Sampling over a series of contextual bandit\nproblems. We found that many approaches that have been successful in the\nsupervised learning setting underperformed in the sequential decision-making\nscenario. 
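The bandit record above benchmarks approximate posterior sampling combined with Thompson Sampling. For background only, here is a textbook Beta-Bernoulli Thompson sampling loop; the arms, horizon, and seed are toy assumptions, and this is not one of the approximate-posterior methods evaluated in that study.

import numpy as np

def thompson_sampling_bernoulli(true_probs, horizon=2000, seed=0):
    rng = np.random.default_rng(seed)
    k = len(true_probs)
    successes = np.ones(k)  # Beta(1, 1) prior for every arm
    failures = np.ones(k)
    total_reward = 0
    for _ in range(horizon):
        # Sample a plausible mean reward for each arm from its posterior, play the best.
        samples = rng.beta(successes, failures)
        arm = int(np.argmax(samples))
        reward = int(rng.random() < true_probs[arm])
        successes[arm] += reward
        failures[arm] += 1 - reward
        total_reward += reward
    return total_reward

if __name__ == "__main__":
    print(thompson_sampling_bernoulli([0.1, 0.5, 0.7]))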
In particular, we highlight the challenge of adapting slowly\nconverging uncertainty estimates to the online setting.", "field": [], "task": ["Decision Making", "Multi-Armed Bandits"], "method": [], "dataset": ["Mushroom"], "metric": ["Cumulative regret"], "title": "Deep Bayesian Bandits Showdown: An Empirical Comparison of Bayesian Deep Networks for Thompson Sampling"} {"abstract": "Generative Adversarial Networks (GANs) have recently demonstrated to\nsuccessfully approximate complex data distributions. A relevant extension of\nthis model is conditional GANs (cGANs), where the introduction of external\ninformation allows to determine specific representations of the generated\nimages. In this work, we evaluate encoders to inverse the mapping of a cGAN,\ni.e., mapping a real image into a latent space and a conditional\nrepresentation. This allows, for example, to reconstruct and modify real images\nof faces conditioning on arbitrary attributes. Additionally, we evaluate the\ndesign of cGANs. The combination of an encoder with a cGAN, which we call\nInvertible cGAN (IcGAN), enables to re-generate real images with deterministic\ncomplex modifications.", "field": [], "task": ["Conditional Image Generation", "Image-to-Image Translation"], "method": [], "dataset": ["RaFD"], "metric": ["Classification Error"], "title": "Invertible Conditional GANs for image editing"} {"abstract": "Summarization based on text extraction is inherently limited, but\ngeneration-style abstractive methods have proven challenging to build. In this\nwork, we propose a fully data-driven approach to abstractive sentence\nsummarization. Our method utilizes a local attention-based model that generates\neach word of the summary conditioned on the input sentence. While the model is\nstructurally simple, it can easily be trained end-to-end and scales to a large\namount of training data. The model shows significant performance gains on the\nDUC-2004 shared task compared with several strong baselines.", "field": [], "task": ["Sentence Summarization", "Text Summarization"], "method": [], "dataset": ["GigaWord", "DUC 2004 Task 1"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "A Neural Attention Model for Abstractive Sentence Summarization"} {"abstract": "Face anti-spoofing (a.k.a presentation attack detection) has drawn growing\nattention due to the high-security demand in face authentication systems.\nExisting CNN-based approaches usually well recognize the spoofing faces when\ntraining and testing spoofing samples display similar patterns, but their\nperformance would drop drastically on testing spoofing faces of unseen scenes.\nIn this paper, we try to boost the generalizability and applicability of these\nmethods by designing a CNN model with two major novelties. First, we propose a\nsimple yet effective Total Pairwise Confusion (TPC) loss for CNN training,\nwhich enhances the generalizability of the learned Presentation Attack (PA)\nrepresentations. Secondly, we incorporate a Fast Domain Adaptation (FDA)\ncomponent into the CNN model to alleviate negative effects brought by domain\nchanges. Besides, our proposed model, which is named Generalizable Face\nAuthentication CNN (GFA-CNN), works in a multi-task manner, performing face\nanti-spoofing and face recognition simultaneously. 
Experimental results show\nthat GFA-CNN outperforms previous face anti-spoofing approaches and also well\npreserves the identity information of input face images.", "field": [], "task": ["Domain Adaptation", "Face Anti-Spoofing", "Face Recognition"], "method": [], "dataset": ["MSU-MFSD"], "metric": ["Equal Error Rate"], "title": "Learning Generalizable and Identity-Discriminative Representations for Face Anti-Spoofing"} {"abstract": "We study the problem of efficient semantic segmentation for large-scale 3D point clouds. By relying on expensive sampling techniques or computationally heavy pre/post-processing steps, most existing approaches are only able to be trained and operate over small-scale point clouds. In this paper, we introduce RandLA-Net, an efficient and lightweight neural architecture to directly infer per-point semantics for large-scale point clouds. The key to our approach is to use random point sampling instead of more complex point selection approaches. Although remarkably computation and memory efficient, random sampling can discard key features by chance. To overcome this, we introduce a novel local feature aggregation module to progressively increase the receptive field for each 3D point, thereby effectively preserving geometric details. Extensive experiments show that our RandLA-Net can process 1 million points in a single pass with up to 200X faster than existing approaches. Moreover, our RandLA-Net clearly surpasses state-of-the-art approaches for semantic segmentation on two large-scale benchmarks Semantic3D and SemanticKITTI.", "field": [], "task": ["3D Semantic Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["Semantic3D", "S3DIS", "SemanticKITTI"], "metric": ["Mean IoU", "oAcc", "mAcc", "mIoU"], "title": "RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds"} {"abstract": "We propose to pre-train a unified language model for both autoencoding and partially autoregressive language modeling tasks using a novel training procedure, referred to as a pseudo-masked language model (PMLM). Given an input text with masked tokens, we rely on conventional masks to learn inter-relations between corrupted tokens and context via autoencoding, and pseudo masks to learn intra-relations between masked spans via partially autoregressive modeling. With well-designed position embeddings and self-attention masks, the context encodings are reused to avoid redundant computation. Moreover, conventional masks used for autoencoding provide global masking information, so that all the position embeddings are accessible in partially autoregressive language modeling. In addition, the two tasks pre-train a unified language model as a bidirectional encoder and a sequence-to-sequence decoder, respectively. 
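The point-cloud record above combines random downsampling with local feature aggregation over nearest neighbours. The sketch below captures those two ingredients in simplified form; the feature sizes, the plain max-pooled MLP, and the sampling ratio are assumptions and do not reproduce the paper's attentive pooling or exact relative-position encoding.

import torch
import torch.nn as nn

def random_downsample(points, feats, ratio=0.25):
    """points: (N, 3), feats: (N, C). Keep a random subset of the points."""
    keep = torch.randperm(points.shape[0])[: max(1, int(points.shape[0] * ratio))]
    return points[keep], feats[keep]

def knn_group(points, k):
    """Return indices of the k nearest neighbours of every point, shape (N, k)."""
    dists = torch.cdist(points, points)
    return dists.topk(k, largest=False).indices

class LocalAggregation(nn.Module):
    def __init__(self, in_dim, out_dim, k=16):
        super().__init__()
        self.k = k
        # Input per neighbour: its feature + relative offset (3) + distance (1).
        self.mlp = nn.Sequential(nn.Linear(in_dim + 4, out_dim), nn.ReLU())

    def forward(self, points, feats):
        idx = knn_group(points, self.k)                 # (N, k)
        neigh_pts = points[idx]                         # (N, k, 3)
        rel = neigh_pts - points.unsqueeze(1)           # relative offsets
        dist = rel.norm(dim=-1, keepdim=True)           # (N, k, 1)
        neigh_feats = feats[idx]                        # (N, k, C)
        x = torch.cat([neigh_feats, rel, dist], dim=-1)
        return self.mlp(x).max(dim=1).values            # max-pool over neighbours

if __name__ == "__main__":
    pts, fts = torch.randn(1024, 3), torch.randn(1024, 8)
    pts, fts = random_downsample(pts, fts)
    agg = LocalAggregation(in_dim=8, out_dim=32)
    print(agg(pts, fts).shape)  # torch.Size([256, 32])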
Our experiments show that the unified language models pre-trained using PMLM achieve new state-of-the-art results on a wide range of natural language understanding and generation tasks across several widely used benchmarks.", "field": [], "task": ["Abstractive Text Summarization", "Language Modelling", "Natural Language Understanding", "Question Generation"], "method": [], "dataset": ["CNN / Daily Mail", "SQuAD1.1"], "metric": ["ROUGE-L", "BLEU-4", "ROUGE-1", "ROUGE-2"], "title": "UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training"} {"abstract": "Fine-tuned pre-trained language models (LMs) achieve enormous success in many natural language processing (NLP) tasks, but they still require excessive labeled data in the fine-tuning stage. We study the problem of fine-tuning pre-trained LMs using only weak supervision, without any labeled data. This problem is challenging because the high capacity of LMs makes them prone to overfitting the noisy labels generated by weak supervision. To address this problem, we develop a contrastive self-training framework, COSINE, to enable fine-tuning LMs with weak supervision. Underpinned by contrastive regularization and confidence-based reweighting, this contrastive self-training framework can gradually improve model fitting while effectively suppressing error propagation. Experiments on sequence, token, and sentence pair classification tasks show that our model outperforms the strongest baseline by large margins on 7 benchmarks in 6 tasks, and achieves competitive performance with fully-supervised fine-tuning methods.", "field": [], "task": ["Language Modelling", "Word Sense Disambiguation"], "method": [], "dataset": ["Words in Context"], "metric": ["Accuracy"], "title": "Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach"} {"abstract": "Fully convolutional models for dense prediction have proven successful for a\nwide range of visual tasks. Such models perform well in a supervised setting,\nbut performance can be surprisingly poor under domain shifts that appear mild\nto a human observer. For example, training on one city and testing on another\nin a different geographic region and/or weather condition may result in\nsignificantly degraded performance due to pixel-level distribution shift. In\nthis paper, we introduce the first domain adaptive semantic segmentation\nmethod, proposing an unsupervised adversarial approach to pixel prediction\nproblems. Our method consists of both global and category specific adaptation\ntechniques. Global domain alignment is performed using a novel semantic\nsegmentation network with fully convolutional domain adversarial learning. This\ninitially adapted space then enables category specific adaptation through a\ngeneralization of constrained weak learning, with explicit transfer of the\nspatial layout from the source to the target domains. 
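The domain-adaptation record above aligns source and target feature distributions adversarially. One common way to implement such an adversarial objective is a gradient reversal layer, shown below purely as a hedged illustration borrowed from the wider domain-adversarial literature, not as that paper's exact mechanism.

import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

if __name__ == "__main__":
    feats = torch.randn(2, 8, requires_grad=True)
    domain_head = torch.nn.Linear(8, 2)
    # The domain classifier trains normally, but gradients reaching the feature
    # extractor are reversed, pushing it toward domain-confusing features.
    loss = domain_head(grad_reverse(feats)).sum()
    loss.backward()
    print(feats.grad.shape)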
Our approach outperforms\nbaselines across different settings on multiple large-scale datasets, including\nadapting across various real city environments, different synthetic\nsub-domains, from simulated to real environments, and on a novel large-scale\ndash-cam dataset.", "field": [], "task": ["Image-to-Image Translation", "Semantic Segmentation", "Synthetic-to-Real Translation"], "method": [], "dataset": ["GTAV-to-Cityscapes Labels", "SYNTHIA Fall-to-Winter", "SYNTHIA-to-Cityscapes"], "metric": ["mIoU (13 classes)", "mIoU"], "title": "FCNs in the Wild: Pixel-level Adversarial and Constraint-based Adaptation"} {"abstract": "In this work, we present a word embedding model that learns cross-sentence dependency for improving end-to-end co-reference resolution (E2E-CR). While the traditional E2E-CR model generates word representations by running long short-term memory (LSTM) recurrent neural networks on each sentence of an input article or conversation separately, we propose linear sentence linking and attentional sentence linking models to learn cross-sentence dependency. Both sentence linking strategies enable the LSTMs to make use of valuable information from context sentences while calculating the representation of the current input word. With this approach, the LSTMs learn word embeddings considering knowledge not only from the current sentence but also from the entire input document. Experiments show that learning cross-sentence dependency enriches information contained by the word representations, and improves the performance of the co-reference resolution model compared with our baseline.", "field": [], "task": ["Coreference Resolution"], "method": [], "dataset": ["OntoNotes"], "metric": ["F1"], "title": "Learning Word Representations with Cross-Sentence Dependency for End-to-End Co-reference Resolution"} {"abstract": "The key point of image-text matching is how to accurately measure the similarity between visual and textual inputs. Despite the great progress of associating the deep cross-modal embeddings with the bi-directional ranking loss, developing the strategies for mining useful triplets and selecting appropriate margins remains a challenge in real applications. In this paper, we propose a cross-modal projection matching (CMPM) loss and a cross-modal projection classification (CMPC) loss for learning discriminative image-text embeddings. The CMPM loss minimizes the KL divergence between the projection compatibility distributions and the normalized matching distributions defined with all the positive and negative samples in a mini-batch. The CMPC loss attempts to categorize the vector projection of representations from one modality onto another with the improved norm-softmax loss, for further enhancing the feature compactness of each class. Extensive analysis and experiments on multiple datasets demonstrate the superiority of the proposed approach.", "field": [], "task": ["Cross-Modal Retrieval", "Text based Person Retrieval", "Text Matching"], "method": [], "dataset": ["Flickr30k", "CUHK-PEDES"], "metric": ["Image-to-text R@5", "Image-to-text R@1", "R@10", "Image-to-text R@10", "Text-to-image R@10", "Text-to-image R@1", "R@5", "R@1", "Text-to-image R@5"], "title": "Deep Cross-Modal Projection Learning for Image-Text Matching"} {"abstract": "Normalization layers are a staple in state-of-the-art deep neural network\narchitectures. 
They are widely believed to stabilize training, enable higher\nlearning rate, accelerate convergence and improve generalization, though the\nreason for their effectiveness is still an active research topic. In this work,\nwe challenge the commonly-held beliefs by showing that none of the perceived\nbenefits is unique to normalization. Specifically, we propose fixed-update\ninitialization (Fixup), an initialization motivated by solving the exploding\nand vanishing gradient problem at the beginning of training via properly\nrescaling a standard initialization. We find training residual networks with\nFixup to be as stable as training with normalization -- even for networks with\n10,000 layers. Furthermore, with proper regularization, Fixup enables residual\nnetworks without normalization to achieve state-of-the-art performance in image\nclassification and machine translation.", "field": [], "task": ["Image Classification", "Machine Translation"], "method": [], "dataset": ["SVHN", "CIFAR-10"], "metric": ["Percentage error", "Percentage correct"], "title": "Fixup Initialization: Residual Learning Without Normalization"} {"abstract": "Existing Machine Learning techniques yield close to human performance on\ntext-based classification tasks. However, the presence of multi-modal noise in\nchat data such as emoticons, slang, spelling mistakes, code-mixed data, etc.\nmakes existing deep-learning solutions perform poorly. The inability of\ndeep-learning systems to robustly capture these covariates puts a cap on their\nperformance. We propose NELEC: Neural and Lexical Combiner, a system which\nelegantly combines textual and deep-learning based methods for sentiment\nclassification. We evaluate our system as part of the third task of 'Contextual\nEmotion Detection in Text' as part of SemEval-2019. Our system performs\nsignificantly better than the baseline, as well as our deep-learning model\nbenchmarks. It achieved a micro-averaged F1 score of 0.7765, ranking 3rd on the\ntest-set leader-board. Our code is available at\nhttps://github.com/iamgroot42/nelec", "field": [], "task": ["Emotion Recognition in Conversation"], "method": [], "dataset": ["EC"], "metric": ["Micro-F1"], "title": "NELEC at SemEval-2019 Task 3: Think Twice Before Going Deep"} {"abstract": "Background: Sleep arousals are transient periods of wakefulness punctuated into sleep. Excessive sleep arousals are associated with many negative effects including daytime sleepiness and sleep disorders. High-quality annotation of polysomnographic recordings is crucial for the diagnosis of sleep arousal disorders. Currently, sleep arousals are mainly annotated by human experts through looking at millions of data points manually, which requires considerable time and effort.\r\n\r\nMethods: We used the polysomnograms of 2,994 individuals from two independent datasets (i) PhysioNet Challenge dataset (n=994), and (ii) Sleep Heart Health Study dataset (n=2000) for model training (60%), validation (15%), and testing (25%). We developed a deep convolutional neural network approach, DeepSleep, to automatically segment sleep arousal events. Our method captured the long-range and short-range interactions among physiological signals at multiple time scales to empower the detection of sleep arousals. 
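The normalization-free record above credits a rescaled standard initialization for its stability. The helper below sketches one common reading of such a recipe on a toy residual MLP (zero the last layer of each residual branch, shrink earlier layers by a depth-dependent factor); the exponent, the two-layer branches, and the function name are assumptions of this sketch, not a verified reimplementation.

import torch
import torch.nn as nn

def fixup_like_init(residual_branches, num_layers_per_branch=2):
    """residual_branches: list of nn.Sequential blocks, each ending in an nn.Linear."""
    depth = len(residual_branches)
    # e.g. depth ** -0.5 for two-layer branches (assumed scaling rule).
    scale = depth ** (-1.0 / (2 * num_layers_per_branch - 2))
    for branch in residual_branches:
        linears = [m for m in branch if isinstance(m, nn.Linear)]
        for layer in linears[:-1]:
            nn.init.kaiming_normal_(layer.weight)
            with torch.no_grad():
                layer.weight.mul_(scale)  # shrink the early layers of the branch
            nn.init.zeros_(layer.bias)
        nn.init.zeros_(linears[-1].weight)  # last layer of each branch starts at zero
        nn.init.zeros_(linears[-1].bias)

if __name__ == "__main__":
    branches = [nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64)) for _ in range(8)]
    fixup_like_init(branches)
    print(branches[0][2].weight.abs().sum().item())  # 0.0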
A novel augmentation strategy by randomly swapping similar physiological channels was further applied to improve the prediction accuracy.\r\n\r\nFindings: Compared with other computational methods in sleep study, DeepSleep features accurate (area under receiver operating characteristic curve of 0.93 and area under the precision recall curve of 0.55), high-resolution (5-millisecond resolution), and fast (10 seconds per sleep record) delineation of sleep arousals. This method ranked first in segmenting non-apenic arousals when evaluated on a large held-out dataset (n=989) in the 2018 PhysioNet Challenge. We found that DeepSleep provided more detailed delineations than humans, especially at the low-confident boundary regions between arousal and non-arousal events. This indicates that in silico annotations is a complement to human annotations and potentially advances the current binary label system and scoring criteria for sleep arousals.\r\n\r\nInterpretation: The proposed deep learning model achieved state-of-the-art performance in detection of sleep arousals. By introducing the probability of annotation confidence, this model would provide more accurate information for the diagnosis of sleep disorders and the evaluation of sleep quality.", "field": [], "task": ["Sleep Arousal Detection", "Sleep Micro-event detection", "Sleep Quality"], "method": [], "dataset": ["You Snooze You Win - The PhysioNet Computing in Cardiology Challenge 2018"], "metric": ["AUROC", "AUPRC"], "title": "Deepsleep: Fast and Accurate Delineation of Sleep Arousals at Millisecond Resolution by Deep Learning"} {"abstract": "Generalization capability to unseen domains is crucial for machine learning models when deploying to real-world conditions. We investigate the challenging problem of domain generalization, i.e., training a model on multi-domain source data such that it can directly generalize to target domains with unknown statistics. We adopt a model-agnostic learning paradigm with gradient-based meta-train and meta-test procedures to expose the optimization to domain shift. Further, we introduce two complementary losses which explicitly regularize the semantic structure of the feature space. Globally, we align a derived soft confusion matrix to preserve general knowledge about inter-class relationships. Locally, we promote domain-independent class-specific cohesion and separation of sample features with a metric-learning component. The effectiveness of our method is demonstrated with new state-of-the-art results on two common object recognition benchmarks. Our method also shows consistent improvement on a medical image segmentation task.", "field": [], "task": ["Domain Generalization", "Medical Image Segmentation", "Metric Learning", "Object Recognition", "Semantic Segmentation"], "method": [], "dataset": ["PACS"], "metric": ["Average Accuracy"], "title": "Domain Generalization via Model-Agnostic Learning of Semantic Features"} {"abstract": "Extracting relations is critical for knowledge base completion and\nconstruction in which distant supervised methods are widely used to extract\nrelational facts automatically with the existing knowledge bases. However, the\nautomatically constructed datasets comprise amounts of low-quality sentences\ncontaining noisy words, which is neglected by current distant supervised\nmethods resulting in unacceptable precisions. 
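The sleep-arousal record above augments training data by randomly swapping similar physiological channels. The function below is a small illustrative version of that idea on a NumPy array; the channel pairing and swap probability are invented for the example and would need to match the actual recording montage.

import numpy as np

def swap_similar_channels(record, swap_groups, p=0.5, rng=None):
    """record: (channels, time) array; swap_groups: list of (i, j) index pairs that are
    physiologically similar (e.g. left/right EEG leads) and may be exchanged."""
    rng = rng or np.random.default_rng()
    augmented = record.copy()
    for i, j in swap_groups:
        if rng.random() < p:
            augmented[[i, j]] = augmented[[j, i]]  # swap the two channel rows
    return augmented

if __name__ == "__main__":
    signal = np.arange(4 * 5).reshape(4, 5).astype(float)
    # Hypothetical pairing: channels (0, 1) and (2, 3) are interchangeable.
    print(swap_similar_channels(signal, [(0, 1), (2, 3)], p=1.0))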
To mitigate this problem, we\npropose a novel word-level distant supervised approach for relation extraction.\nWe first build Sub-Tree Parse(STP) to remove noisy words that are irrelevant to\nrelations. Then we construct a neural network inputting the sub-tree while\napplying the entity-wise attention to identify the important semantic features\nof relational words in each instance. To make our model more robust against\nnoisy words, we initialize our network with a priori knowledge learned from the\nrelevant task of entity classification by transfer learning. We conduct\nextensive experiments using the corpora of New York Times(NYT) and Freebase.\nExperiments show that our approach is effective and improves the area of\nPrecision/Recall(PR) from 0.35 to 0.39 over the state-of-the-art work.", "field": [], "task": ["Knowledge Base Completion", "Relation Extraction", "Relationship Extraction (Distant Supervised)", "Transfer Learning"], "method": [], "dataset": ["New York Times Corpus"], "metric": ["Average Precision", "AUC"], "title": "Neural Relation Extraction via Inner-Sentence Noise Reduction and Transfer Learning"} {"abstract": "Medical images are naturally associated with rich semantics about the human anatomy, reflected in an abundance of recurring anatomical patterns, offering unique potential to foster deep semantic representation learning and yield semantically more powerful models for different medical applications. But how exactly such strong yet free semantics embedded in medical images can be harnessed for self-supervised learning remains largely unexplored. To this end, we train deep models to learn semantically enriched visual representation by self-discovery, self-classification, and self-restoration of the anatomy underneath medical images, resulting in a semantics-enriched, general-purpose, pre-trained 3D model, named Semantic Genesis. We examine our Semantic Genesis with all the publicly-available pre-trained models, by either self-supervision or fully supervision, on the six distinct target tasks, covering both classification and segmentation in various medical modalities (i.e.,CT, MRI, and X-ray). Our extensive experiments demonstrate that Semantic Genesis significantly exceeds all of its 3D counterparts as well as the de facto ImageNet-based transfer learning in 2D. This performance is attributed to our novel self-supervised learning framework, encouraging deep models to learn compelling semantic representation from abundant anatomical patterns resulting from consistent anatomies embedded in medical images. Code and pre-trained Semantic Genesis are available at https://github.com/JLiangLab/SemanticGenesis .", "field": [], "task": ["Brain Tumor Segmentation", "Liver Segmentation", "Lung Nodule Detection", "Lung Nodule Segmentation", "Representation Learning", "Self-Supervised Learning", "Transfer Learning"], "method": [], "dataset": ["LiTS2017", "BRATS 2018", "BRATS-2013", "LUNA2016 FPRED", "LIDC-IDRI"], "metric": ["Dice", "IoU", "AUC", "Dice Score"], "title": "Learning Semantics-enriched Representation via Self-discovery, Self-classification, and Self-restoration"} {"abstract": "In a typical multi-label setting, a picture contains on average few positive labels, and many negative ones. This positive-negative imbalance dominates the optimization process, and can lead to under-emphasizing gradients from positive labels during training, resulting in poor accuracy. 
In this paper, we introduce a novel asymmetric loss (\"ASL\"), which operates differently on positive and negative samples. The loss dynamically down-weights and hard-thresholds easy negative samples, while also discarding possibly mislabeled samples. We demonstrate how ASL can balance the probabilities of different samples, and how this balancing translates into better mAP scores. With ASL, we reach state-of-the-art results on multiple popular multi-label datasets: MS-COCO, Pascal-VOC, NUS-WIDE and Open Images. We also demonstrate ASL applicability for other tasks, such as single-label classification and object detection. ASL is effective, easy to implement, and does not increase the training time or complexity. Implementation is available at: https://github.com/Alibaba-MIIL/ASL.", "field": [], "task": ["Image Classification", "Multi-Label Classification", "Object Detection"], "method": [], "dataset": ["MS-COCO", "NUS-WIDE", "PASCAL VOC 2007"], "metric": ["mAP", "MAP"], "title": "Asymmetric Loss For Multi-Label Classification"} {"abstract": "In several domains, data objects can be decomposed into sets of simpler objects. It is then natural to represent each object as the set of its components or parts. Many conventional machine learning algorithms are unable to process this kind of representation, since sets may vary in cardinality and elements lack a meaningful ordering. In this paper, we present a new neural network architecture, called RepSet, that can handle examples that are represented as sets of vectors. The proposed model computes the correspondences between an input set and some hidden sets by solving a series of network flow problems. This representation is then fed to a standard neural network architecture to produce the output. The architecture allows end-to-end gradient-based learning. We demonstrate RepSet on classification tasks, including text categorization and graph classification, and we show that the proposed neural network achieves performance better than or comparable to state-of-the-art algorithms.", "field": [], "task": ["Graph Classification", "Text Categorization"], "method": [], "dataset": ["IMDb-B", "BBCSport", "Amazon", "REDDIT-B", "PROTEINS", "20NEWS", "Reuters-21578", "Classic", "MUTAG", "IMDb-M", "Recipe", "Twitter", "Ohsumed"], "metric": ["Accuracy"], "title": "Rep the Set: Neural Networks for Learning Set Representations"} {"abstract": "Recognizing human actions is a core challenge for autonomous systems as they\ndirectly share the same space with humans. Systems must be able to recognize\nand assess human actions in real-time. In order to train corresponding\ndata-driven algorithms, a significant amount of annotated training data is\nrequired. We demonstrate a pipeline to detect humans, estimate their pose,\ntrack them over time and recognize their actions in real-time with standard\nmonocular camera sensors. For action recognition, we encode the human pose into\na new data format called Encoded Human Pose Image (EHPI) that can then be\nclassified using standard methods from the computer vision community. With this\nsimple procedure we achieve competitive state-of-the-art performance in\npose-based action detection and can ensure real-time performance. 
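As a rough illustration of the asymmetric loss idea described in the ASL record above, the sketch below applies separate focusing exponents to the positive and negative terms and shifts negative probabilities before thresholding them. The hyperparameter values are placeholders and this is an unofficial approximation, not the reference implementation linked in the abstract.

```python
import torch

def asymmetric_loss(logits, targets, gamma_pos=0.0, gamma_neg=4.0, clip=0.05, eps=1e-8):
    """Rough sketch of an asymmetric multi-label loss.

    logits, targets: tensors of shape (batch, num_classes), targets in {0, 1}.
    Negative probabilities are shifted down by `clip` so that easy negatives
    (p < clip) contribute no loss, and each term has its own focusing power.
    """
    p = torch.sigmoid(logits)
    p_neg = (p - clip).clamp(min=0)          # probability shifting for negatives
    loss_pos = targets * (1 - p).pow(gamma_pos) * torch.log(p.clamp(min=eps))
    loss_neg = (1 - targets) * p_neg.pow(gamma_neg) * torch.log((1 - p_neg).clamp(min=eps))
    return -(loss_pos + loss_neg).mean()

# Example
logits = torch.randn(4, 10)
targets = torch.randint(0, 2, (4, 10)).float()
print(asymmetric_loss(logits, targets))
```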
In addition,\nwe show a use case in the context of autonomous driving to demonstrate how such\na system can be trained to recognize human actions using simulation data.", "field": [], "task": ["Action Detection", "Action Recognition", "Autonomous Driving", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["JHMDB (2D poses only)", "J-HMDB"], "metric": ["Average accuracy of 3 splits", "Accuracy (pose)", "Accuracy (RGB+pose)"], "title": "Simple yet efficient real-time pose-based action recognition"} {"abstract": "Several recent works have shown how highly realistic human head images can be obtained by training convolutional neural networks to generate them. In order to create a personalized talking head model, these works require training on a large dataset of images of a single person. However, in many practical scenarios, such personalized talking head models need to be learned from a few image views of a person, potentially even a single image. Here, we present a system with such few-shot capability. It performs lengthy meta-learning on a large dataset of videos, and after that is able to frame few- and one-shot learning of neural talking head models of previously unseen people as adversarial training problems with high capacity generators and discriminators. Crucially, the system is able to initialize the parameters of both the generator and the discriminator in a person-specific way, so that training can be based on just a few images and done quickly, despite the need to tune tens of millions of parameters. We show that such an approach is able to learn highly realistic and personalized talking head models of new people and even portrait paintings.", "field": [], "task": ["Meta-Learning", "One-Shot Learning", "Talking Head Generation"], "method": [], "dataset": ["VoxCeleb2 - 32-shot learning", "VoxCeleb1 - 1-shot learning", "VoxCeleb2 - 1-shot learning", "VoxCeleb1 - 8-shot learning", "VoxCeleb1 - 32-shot learning", "VoxCeleb2 - 8-shot learning"], "metric": ["FID"], "title": "Few-Shot Adversarial Learning of Realistic Neural Talking Head Models"} {"abstract": "While current monocular 3D face reconstruction methods can recover fine geometric details, they suffer several limitations. Some methods produce faces that cannot be realistically animated because they do not model how wrinkles vary with expression. Other methods are trained on high-quality face scans and do not generalize well to in-the-wild images. We present the first approach to jointly learn a model with animatable detail and a detailed 3D face regressor from in-the-wild images that recovers shape details as well as their relationship to facial expressions. Our DECA (Detailed Expression Capture and Animation) model is trained to robustly produce a UV displacement map from a low-dimensional latent representation that consists of person-specific detail parameters and generic expression parameters, while a regressor is trained to predict detail, shape, albedo, expression, pose and illumination parameters from a single image. We introduce a novel detail-consistency loss to disentangle person-specific details and expression-dependent wrinkles. This disentanglement allows us to synthesize realistic person-specific wrinkles by controlling expression parameters while keeping person-specific details unchanged. DECA achieves state-of-the-art shape reconstruction accuracy on two benchmarks. 
Qualitative results on in-the-wild data demonstrate DECA's robustness and its ability to disentangle identity and expression dependent details enabling animation of reconstructed faces. The model and code are publicly available at https://github.com/YadiraF/DECA.", "field": [], "task": ["3D Face Reconstruction", "Face Model", "Face Reconstruction"], "method": [], "dataset": ["NoW Benchmark", "Stirling-LQ (FG2018 3D face reconstruction challenge)", "Stirling-HQ (FG2018 3D face reconstruction challenge)"], "metric": ["Mean Reconstruction Error (mm)"], "title": "Learning an Animatable Detailed 3D Face Model from In-The-Wild Images"} {"abstract": "When an entity name contains other names within it, the identification of all combinations of names can become difficult and expensive. We propose a new method to recognize not only outermost named entities but also inner nested ones. We design an objective function for training a neural model that treats the tag sequence for nested entities as the second best path within the span of their parent entity. In addition, we provide the decoding method for inference that extracts entities iteratively from outermost ones to inner ones in an outside-to-inside way. Our method has no additional hyperparameters to the conditional random field based model widely used for flat named entity recognition tasks. Experiments demonstrate that our method performs better than or at least as well as existing methods capable of handling nested entities, achieving the F1-scores of 85.82%, 84.34%, and 77.36% on ACE-2004, ACE-2005, and GENIA datasets, respectively.", "field": [], "task": ["Named Entity Recognition", "Nested Mention Recognition", "Nested Named Entity Recognition"], "method": [], "dataset": ["GENIA", "ACE 2005", "ACE 2004"], "metric": ["F1"], "title": "Nested Named Entity Recognition via Second-best Sequence Learning and Decoding"} {"abstract": "Single document summarization is the task of producing a shorter version of a\ndocument while preserving its principal information content. In this paper we\nconceptualize extractive summarization as a sentence ranking task and propose a\nnovel training algorithm which globally optimizes the ROUGE evaluation metric\nthrough a reinforcement learning objective. We use our algorithm to train a\nneural summarization model on the CNN and DailyMail datasets and demonstrate\nexperimentally that it outperforms state-of-the-art extractive and abstractive\nsystems when evaluated automatically and by humans.", "field": [], "task": ["Document Summarization", "Extractive Text Summarization"], "method": [], "dataset": ["CNN / Daily Mail"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Ranking Sentences for Extractive Summarization with Reinforcement Learning"} {"abstract": "Unsupervised image-to-image translation is an important and challenging\nproblem in computer vision. Given an image in the source domain, the goal is to\nlearn the conditional distribution of corresponding images in the target\ndomain, without seeing any pairs of corresponding images. While this\nconditional distribution is inherently multimodal, existing approaches make an\noverly simplified assumption, modeling it as a deterministic one-to-one\nmapping. As a result, they fail to generate diverse outputs from a given source\ndomain image. To address this limitation, we propose a Multimodal Unsupervised\nImage-to-image Translation (MUNIT) framework. 
We assume that the image\nrepresentation can be decomposed into a content code that is domain-invariant,\nand a style code that captures domain-specific properties. To translate an\nimage to another domain, we recombine its content code with a random style code\nsampled from the style space of the target domain. We analyze the proposed\nframework and establish several theoretical results. Extensive experiments with\ncomparisons to the state-of-the-art approaches further demonstrate the\nadvantage of the proposed framework. Moreover, our framework allows users to\ncontrol the style of translation outputs by providing an example style image.\nCode and pretrained models are available at https://github.com/nvlabs/MUNIT", "field": [], "task": ["Image-to-Image Translation", "Multimodal Unsupervised Image-To-Image Translation", "Unsupervised Image-To-Image Translation"], "method": [], "dataset": ["Edge-to-Shoes", "Edge-to-Handbags", "AFHQ", "Cats-and-Dogs", "CelebA-HQ"], "metric": ["Quality", "FID", "Diversity", "CIS", "IS"], "title": "Multimodal Unsupervised Image-to-Image Translation"} {"abstract": "As an alternative to question answering methods based on feature engineering, deep learning approaches such as convolutional neural networks (CNNs) and Long Short-Term Memory Models (LSTMs) have recently been proposed for semantic matching of questions and answers. To achieve good results, however, these models have been combined with additional features such as word overlap or BM25 scores. Without this combination, these models perform significantly worse than methods based on linguistic feature engineering. In this paper, we propose an attention-based neural matching model for ranking short answer text. We adopt a value-shared weighting scheme instead of a position-shared weighting scheme for combining different matching signals and incorporate question term importance learning using a question attention network. Using the popular benchmark TREC QA data, we show that the relatively simple aNMM model can significantly outperform other neural network models that have been used for the question answering task, and is competitive with models that are combined with additional features. When aNMM is combined with additional features, it outperforms all baselines.", "field": [], "task": ["Feature Engineering", "Question Answering"], "method": [], "dataset": ["TrecQA"], "metric": ["MRR", "MAP"], "title": "aNMM: Ranking Short Answer Texts with Attention-Based Neural Matching Model"} {"abstract": "Motion representation plays a vital role in human action recognition in\nvideos. In this study, we introduce a novel compact motion representation for\nvideo action recognition, named Optical Flow guided Feature (OFF), which\nenables the network to distill temporal information through a fast and robust\napproach. The OFF is derived from the definition of optical flow and is\northogonal to the optical flow. The derivation also provides theoretical\nsupport for using the difference between two frames. By directly calculating\npixel-wise spatiotemporal gradients of the deep feature maps, the OFF could be\nembedded in any existing CNN based video action recognition framework with only\na slight additional cost. It enables the CNN to extract spatiotemporal\ninformation, especially the temporal information between frames simultaneously.\nThis simple but powerful idea is validated by experimental results. 
The network\nwith OFF fed only by RGB inputs achieves a competitive accuracy of 93.3% on\nUCF-101, which is comparable with the result obtained by two streams (RGB and\noptical flow), but is 15 times faster. Experimental results also show\nthat OFF is complementary to other motion modalities such as optical flow. When\nthe proposed method is plugged into the state-of-the-art video action\nrecognition framework, it achieves 96.0% and 74.2% accuracy on UCF-101 and HMDB-51,\nrespectively. The code for this project is available at\nhttps://github.com/kevin-ssy/Optical-Flow-Guided-Feature.", "field": [], "task": ["Action Recognition", "Action Recognition In Videos", "Action Recognition In Videos ", "Optical Flow Estimation", "Temporal Action Localization"], "method": [], "dataset": ["UCF101", "HMDB-51"], "metric": ["Average accuracy of 3 splits", "3-fold Accuracy"], "title": "Optical Flow Guided Feature: A Fast and Robust Motion Representation for Video Action Recognition"} {"abstract": "This paper focuses on style transfer on the basis of non-parallel text. This\nis an instance of a broad family of problems including machine translation,\ndecipherment, and sentiment modification. The key challenge is to separate the\ncontent from other aspects such as style. We assume a shared latent content\ndistribution across different text corpora, and propose a method that leverages\nrefined alignment of latent representations to perform style transfer. The\ntransferred sentences from one style should match example sentences from the\nother style as a population. We demonstrate the effectiveness of this\ncross-alignment method on three tasks: sentiment modification, decipherment of\nword substitution ciphers, and recovery of word order.", "field": [], "task": ["Decipherment", "Machine Translation", "Style Transfer", "Text Style Transfer"], "method": [], "dataset": ["Yelp Review Dataset (Small)"], "metric": ["G-Score (BLEU, Accuracy)"], "title": "Style Transfer from Non-Parallel Text by Cross-Alignment"} {"abstract": "We consider matrix completion for recommender systems from the point of view\nof link prediction on graphs. Interaction data such as movie ratings can be\nrepresented by a bipartite user-item graph with labeled edges denoting observed\nratings. Building on recent progress in deep learning on graph-structured data,\nwe propose a graph auto-encoder framework based on differentiable message\npassing on the bipartite interaction graph. Our model shows competitive\nperformance on standard collaborative filtering benchmarks. In settings where\ncomplementary feature information or structured data such as a social network\nis available, our framework outperforms recent state-of-the-art methods.", "field": [], "task": ["Link Prediction", "Matrix Completion", "Recommendation Systems"], "method": [], "dataset": ["MovieLens 1M", "MovieLens 10M", "Flixster Monti", "Douban Monti", "YahooMusic Monti", "MovieLens 100K"], "metric": ["RMSE (u1 Splits)", "RMSE"], "title": "Graph Convolutional Matrix Completion"} {"abstract": "In this work, we model abstractive text summarization using Attentional\nEncoder-Decoder Recurrent Neural Networks, and show that they achieve\nstate-of-the-art performance on two different corpora. We propose several novel\nmodels that address critical problems in summarization that are not adequately\nmodeled by the basic architecture, such as modeling key-words, capturing the\nhierarchy of sentence-to-word structure, and emitting words that are rare or\nunseen at training time. 
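The OFF record above describes a representation built from pixel-wise spatial gradients of feature maps together with a temporal difference between frames. The sketch below is a simplified, unofficial rendering of that idea using fixed Sobel filters and a frame-to-frame feature difference; the kernel choice and normalization are assumptions.

```python
import torch
import torch.nn.functional as F

def off_features(feat_t, feat_t1):
    """Sketch of an Optical Flow guided Feature (OFF)-style representation.

    feat_t, feat_t1: feature maps of shape (B, C, H, W) from two frames.
    Returns spatial gradients of feat_t and the temporal difference, concatenated.
    """
    c = feat_t.shape[1]
    sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]) / 8.0
    sobel_y = sobel_x.t()
    kx = sobel_x.view(1, 1, 3, 3).repeat(c, 1, 1, 1).to(feat_t)
    ky = sobel_y.view(1, 1, 3, 3).repeat(c, 1, 1, 1).to(feat_t)
    gx = F.conv2d(feat_t, kx, padding=1, groups=c)   # spatial gradient along x
    gy = F.conv2d(feat_t, ky, padding=1, groups=c)   # spatial gradient along y
    gt = feat_t1 - feat_t                            # temporal difference
    return torch.cat([gx, gy, gt], dim=1)

# Example with random feature maps
f0, f1 = torch.randn(2, 16, 28, 28), torch.randn(2, 16, 28, 28)
print(off_features(f0, f1).shape)  # (2, 48, 28, 28)
```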
Our work shows that many of our proposed models\ncontribute to further improvement in performance. We also propose a new dataset\nconsisting of multi-sentence summaries, and establish performance benchmarks\nfor further research.", "field": [], "task": ["Abstractive Text Summarization", "Sentence Summarization", "Text Summarization"], "method": [], "dataset": ["CNN / Daily Mail", "GigaWord", "CNN / Daily Mail (Anonymized)", "DUC 2004 Task 1"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond"} {"abstract": "There are multiple cues in an image which reveal what action a person is\nperforming. For example, a jogger has a pose that is characteristic for\njogging, but the scene (e.g. road, trail) and the presence of other joggers can\nbe an additional source of information. In this work, we exploit the simple\nobservation that actions are accompanied by contextual cues to build a strong\naction recognition system. We adapt RCNN to use more than one region for\nclassification while still maintaining the ability to localize the action. We\ncall our system R*CNN. The action-specific models and the feature maps are\ntrained jointly, allowing for action-specific representations to emerge. R*CNN\nachieves 90.2% mean AP on the PASCAL VOC Action dataset, outperforming all other\napproaches in the field by a significant margin. Lastly, we show that R*CNN is\nnot limited to action recognition. In particular, R*CNN can also be used to\ntackle fine-grained tasks such as attribute classification. We validate this\nclaim by reporting state-of-the-art performance on the Berkeley Attributes of\nPeople dataset.", "field": [], "task": ["Action Recognition", "Human-Object Interaction Detection", "Temporal Action Localization"], "method": [], "dataset": ["HICO-DET", "HICO", "Charades"], "metric": ["mAP", "MAP"], "title": "Contextual Action Recognition with R*CNN"} {"abstract": "Deep convolutional neural networks perform better on images containing spatially invariant noise (synthetic noise); however, their performance is limited on real noisy photographs and requires multi-stage network modeling. To advance the practicability of denoising algorithms, this paper proposes a novel single-stage blind real image denoising network (RIDNet) by employing a modular architecture. We use a residual on the residual structure to ease the flow of low-frequency information and apply feature attention to exploit the channel dependencies. Furthermore, the evaluation in terms of quantitative metrics and visual quality on three synthetic and four real noisy datasets against 19 state-of-the-art algorithms demonstrates the superiority of our RIDNet.", "field": [], "task": ["Color Image Denoising", "Denoising", "Image Denoising"], "method": [], "dataset": ["BSD68 sigma15", "DND", "CBSD68 sigma50", "Darmstadt Noise Dataset", "BSD68 sigma50", "SIDD", "BSD68 sigma25"], "metric": ["SSIM (sRGB)", "PSNR", "PSNR (sRGB)"], "title": "Real Image Denoising with Feature Attention"} {"abstract": "This paper presents a new neural network for enhancing underexposed photos. Instead of directly learning an image-to-image mapping as previous work, we introduce intermediate illumination in our network to associate the input with the expected enhancement result, which augments the network's capability to learn complex photographic adjustments from expert-retouched input/output image pairs. 
Based on this model, we formulate a loss function that adopts constraints and priors on the illumination, prepare a new dataset of 3,000 underexposed image pairs, and train the network to effectively learn a rich variety of adjustments for diverse lighting conditions. By these means, our network is able to recover clear details, distinct contrast, and natural color in the enhancement results. We perform extensive experiments on the benchmark MIT-Adobe FiveK dataset and our new dataset, and show that our network is effective in dealing with previously challenging images.\r", "field": [], "task": [], "method": [], "dataset": ["DICM", "VV", "MEF"], "metric": ["User Study Score"], "title": "Underexposed Photo Enhancement Using Deep Illumination Estimation"} {"abstract": "Click-Through Rate (CTR) prediction has been an indispensable component for many industrial applications, such as recommendation systems and online advertising. CTR prediction systems are usually based on multi-field categorical features, i.e., every feature is categorical and belongs to one and only one field. Modeling feature conjunctions is crucial for CTR prediction accuracy. However, it requires a massive number of parameters to explicitly model all feature conjunctions, which is not scalable for real-world production systems. In this paper, we describe a novel Field-Leveraged Embedding Network (FLEN) which has been deployed in the commercial recommender system in Meitu and serves the main traffic. FLEN devises a field-wise bi-interaction pooling technique. By suitably exploiting field information, the field-wise bi-interaction pooling captures both inter-field and intra-field feature conjunctions with a small number of model parameters and an acceptable time complexity for industrial applications. We show that a variety of state-of-the-art CTR models can be expressed under this technique. Furthermore, we develop Dicefactor: a dropout technique to prevent independent latent features from co-adapting. Extensive experiments, including offline evaluations and online A/B testing on real production systems, demonstrate the effectiveness and efficiency of FLEN against the state of the art. Notably, FLEN has obtained a 5.19% improvement in CTR with 1/6 of the memory usage and computation time, compared to the last version (i.e., NFM).", "field": [], "task": ["Click-Through Rate Prediction", "Recommendation Systems"], "method": [], "dataset": ["Avazu"], "metric": ["AUC"], "title": "FLEN: Leveraging Field for Scalable CTR Prediction"} {"abstract": "In this paper, we propose an end-to-end feature fusion attention network (FFA-Net) to directly restore the haze-free image. The FFA-Net architecture consists of three key components: 1) A novel Feature Attention (FA) module combines Channel Attention with a Pixel Attention mechanism, considering that different channel-wise features contain totally different weighted information and haze distribution is uneven across image pixels. FA treats different features and pixels unequally, which provides additional flexibility in dealing with different types of information, expanding the representational ability of CNNs. 2) A basic block structure consists of Local Residual Learning and Feature Attention; Local Residual Learning allows less important information, such as thin haze regions or low-frequency components, to be bypassed through multiple local residual connections, letting the main network architecture focus on more effective information. 
3) An attention-based feature fusion (FFA) structure across different levels, in which the feature weights are adaptively learned from the Feature Attention (FA) module, giving more weight to important features. This structure can also retain the information of shallow layers and pass it into deep layers. The experimental results demonstrate that our proposed FFA-Net surpasses previous state-of-the-art single image dehazing methods by a very large margin both quantitatively and qualitatively, boosting the best published PSNR metric from 30.23 dB to 36.39 dB on the SOTS indoor test dataset. Code has been made available on GitHub.", "field": [], "task": ["Image Dehazing", "Single Image Dehazing"], "method": [], "dataset": ["SOTS Indoor", "SOTS Outdoor"], "metric": ["SSIM", "PSNR"], "title": "FFA-Net: Feature Fusion Attention Network for Single Image Dehazing"} {"abstract": "Existing graph neural networks may suffer from the \"suspended animation problem\" when the model architecture goes deep. Meanwhile, for some graph learning scenarios, e.g., nodes with text/image attributes or graphs with long-distance node correlations, deep graph neural networks will be necessary for effective graph representation learning. In this paper, we propose a new graph neural network, namely DIFNET (Graph Diffusive Neural Network), for graph representation learning and node classification. DIFNET utilizes both neural gates and graph residual learning for node hidden state modeling, and includes an attention mechanism for node neighborhood information diffusion. Extensive experiments are conducted in this paper to compare DIFNET against several state-of-the-art graph neural network models. The experimental results illustrate both the performance advantages and the effectiveness of DIFNET, especially in addressing the \"suspended animation problem\".", "field": [], "task": ["Graph Learning", "Graph Representation Learning", "Node Classification", "Representation Learning"], "method": [], "dataset": ["Cora", "Pubmed", "Citeseer"], "metric": ["Accuracy"], "title": "Get Rid of Suspended Animation Problem: Deep Diffusive Neural Network on Graph Semi-Supervised Classification"} {"abstract": "Lip-reading has attracted a lot of research attention lately thanks to advances in deep learning. The current state-of-the-art model for recognition of isolated words in the wild consists of a residual network and Bidirectional Gated Recurrent Unit (BGRU) layers. In this work, we address the limitations of this model and we propose changes which further improve its performance. Firstly, the BGRU layers are replaced with Temporal Convolutional Networks (TCN). Secondly, we greatly simplify the training procedure, which allows us to train the model in a single stage. Thirdly, we show that the current state-of-the-art methodology produces models that do not generalize well to variations in sequence length, and we address this issue by proposing a variable-length augmentation. We present results on the largest publicly available datasets for isolated word recognition in English and Mandarin, LRW and LRW1000, respectively. 
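A compact sketch of the channel attention plus pixel attention combination described for the FA module above is given here; the layer widths and reduction ratio are assumptions for illustration, not the published configuration.

```python
import torch
import torch.nn as nn

class FeatureAttention(nn.Module):
    """Sketch of channel attention followed by pixel attention."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # global pooling per channel
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.pixel_att = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 1, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_att(x)   # re-weight channels
        x = x * self.pixel_att(x)     # re-weight spatial positions
        return x

# Example
y = FeatureAttention(64)(torch.randn(1, 64, 32, 32))
print(y.shape)
```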
Our proposed model results in an absolute improvement of 1.2% and 3.2%, respectively, on these datasets, setting a new state-of-the-art performance.", "field": [], "task": ["Lipreading", "Lip Reading"], "method": [], "dataset": ["Lip Reading in the Wild", "LRW-1000"], "metric": ["Top-1 Accuracy"], "title": "Lipreading using Temporal Convolutional Networks"} {"abstract": "Real-time scene understanding has become crucial in many applications such as\nautonomous driving. In this paper, we propose a deep architecture, called\nBlitzNet, that jointly performs object detection and semantic segmentation in\none forward pass, allowing real-time computations. Besides the computational\ngain of having a single network to perform several tasks, we show that object\ndetection and semantic segmentation benefit from each other in terms of\naccuracy. Experimental results on the VOC and COCO datasets show state-of-the-art\nperformance for object detection and segmentation among real-time systems.", "field": [], "task": ["Autonomous Driving", "Object Detection", "Real-Time Object Detection", "Real-Time Semantic Segmentation", "Scene Understanding", "Semantic Segmentation"], "method": [], "dataset": ["PASCAL VOC 2007"], "metric": ["FPS", "MAP"], "title": "BlitzNet: A Real-Time Deep Network for Scene Understanding"} {"abstract": "The promise of reinforcement learning is to solve complex sequential decision problems by specifying a high-level reward function only. However, RL algorithms struggle when, as is often the case, simple and intuitive rewards provide sparse and deceptive feedback. Avoiding these pitfalls requires thoroughly exploring the environment, but despite substantial investments by the community, creating algorithms that can do so remains one of the central challenges of the field. We hypothesize that the main impediment to effective exploration originates from algorithms forgetting how to reach previously visited states (\"detachment\") and from failing to first return to a state before exploring from it (\"derailment\"). We introduce Go-Explore, a family of algorithms that addresses these two challenges directly through the simple principles of explicitly remembering promising states and first returning to such states before exploring. Go-Explore solves all heretofore unsolved Atari games (those for which algorithms could not previously outperform humans when evaluated following current community standards) and surpasses the state of the art on all hard-exploration games, with orders of magnitude improvements on the grand challenges Montezuma's Revenge and Pitfall. We also demonstrate the practical potential of Go-Explore on a challenging and extremely sparse-reward robotics task. Additionally, we show that adding a goal-conditioned policy can further improve Go-Explore's exploration efficiency and enable it to handle stochasticity throughout training. 
The striking contrast between the substantial performance gains from Go-Explore and the simplicity of its mechanisms suggests that remembering promising states, returning to them, and exploring from them is a powerful and general approach to exploration, an insight that may prove critical to the creation of truly intelligent learning agents.", "field": [], "task": ["Atari Games", "Montezuma's Revenge"], "method": [], "dataset": ["Atari 2600 Bowling", "Atari 2600 Venture", "Atari 2600 Private Eye", "Atari 2600 Berzerk", "Atari 2600 Montezuma's Revenge", "Atari 2600 Pitfall!", "Atari 2600 Solaris", "Atari 2600 Freeway", "Atari 2600 Gravitar", "Atari 2600 Centipede", "Atari 2600 Skiing"], "metric": ["Score"], "title": "First return, then explore"} {"abstract": "The ability to produce convincing textural details is essential for the fidelity of synthesized person images. However, existing methods typically follow a \"warping-based\" strategy that propagates appearance features through the same pathway used for pose transfer. As a result, most fine-grained features are lost due to down-sampling, leading to over-smoothed clothes and missing details in the output images. In this paper, we present RATE-Net, a novel framework for synthesizing person images with sharp texture details. The proposed framework leverages an additional texture enhancing module to extract appearance information from the source image and estimate a fine-grained residual texture map, which helps to refine the coarse estimation from the pose transfer module. In addition, we design an effective alternate updating strategy to promote mutual guidance between the two modules for better shape and appearance consistency. Experiments conducted on the DeepFashion benchmark dataset have demonstrated the superiority of our framework compared with existing networks.", "field": [], "task": ["Image Generation", "Pose Transfer"], "method": [], "dataset": ["Deep-Fashion"], "metric": ["FID", "Retrieval Top10 Recall", "LPIPS", "SSIM", "IS"], "title": "Region-adaptive Texture Enhancement for Detailed Person Image Synthesis"} {"abstract": "We show that an end-to-end deep learning approach can be used to recognize\neither English or Mandarin Chinese speech--two vastly different languages.\nBecause it replaces entire pipelines of hand-engineered components with neural\nnetworks, end-to-end learning allows us to handle a diverse variety of speech\nincluding noisy environments, accents and different languages. Key to our\napproach is our application of HPC techniques, resulting in a 7x speedup over\nour previous system. Because of this efficiency, experiments that previously\ntook weeks now run in days. This enables us to iterate more quickly to identify\nsuperior architectures and algorithms. As a result, in several cases, our\nsystem is competitive with the transcription of human workers when benchmarked\non standard datasets. 
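To make the "remember, return, explore" principle from the Go-Explore record above concrete, here is a toy, environment-agnostic sketch. It assumes a resettable environment whose internal state can be saved and restored (as in the exploration phase described by the authors) and a user-supplied downscaling function mapping observations to archive cells; all of these interfaces (get_state, set_state, downscale) are hypothetical placeholders, not a real Gym API.

```python
import random

def go_explore_sketch(env, downscale, n_iters=1000, explore_steps=20):
    """Toy archive-based exploration loop: remember cells, return, then explore."""
    obs = env.reset()
    # Hypothetical env.get_state()/set_state() snapshot interface.
    archive = {downscale(obs): {"state": env.get_state(), "score": 0.0}}
    for _ in range(n_iters):
        cell = random.choice(list(archive.keys()))       # select a remembered cell
        env.set_state(archive[cell]["state"])            # first return to it
        score = archive[cell]["score"]
        for _ in range(explore_steps):                   # then explore from it
            obs, reward, done, _ = env.step(env.action_space.sample())
            score += reward
            key = downscale(obs)
            if key not in archive or score > archive[key]["score"]:
                archive[key] = {"state": env.get_state(), "score": score}
            if done:
                break
    return archive
```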
Finally, using a technique called Batch Dispatch with\nGPUs in the data center, we show that our system can be inexpensively deployed\nin an online setting, delivering low latency when serving users at scale.", "field": [], "task": ["Accented Speech Recognition", "End-To-End Speech Recognition", "Noisy Speech Recognition", "Speech Recognition"], "method": [], "dataset": ["LibriSpeech test-other", "VoxForge American-Canadian", "WSJ eval92", "LibriSpeech test-clean", "CHiME clean", "VoxForge Commonwealth", "CHiME real", "VoxForge European", "VoxForge Indian", "WSJ eval93"], "metric": ["Percentage error", "Word Error Rate (WER)"], "title": "Deep Speech 2: End-to-End Speech Recognition in English and Mandarin"} {"abstract": "In this paper, we propose multimodal convolutional neural networks (m-CNNs)\nfor matching image and sentence. Our m-CNN provides an end-to-end framework\nwith convolutional architectures to exploit image representation, word\ncomposition, and the matching relations between the two modalities. More\nspecifically, it consists of one image CNN encoding the image content, and one\nmatching CNN learning the joint representation of image and sentence. The\nmatching CNN composes words into different semantic fragments and learns the\ninter-modal relations between image and the composed fragments at different\nlevels, thus fully exploiting the matching relations between image and sentence.\nExperimental results on benchmark databases of bidirectional image and sentence\nretrieval demonstrate that the proposed m-CNNs can effectively capture the\ninformation necessary for image and sentence matching. Specifically, our\nproposed m-CNNs for bidirectional image and sentence retrieval on Flickr30K and\nMicrosoft COCO databases achieve state-of-the-art performance.", "field": [], "task": [], "method": [], "dataset": ["Flickr30K 1K test"], "metric": ["R@10", "R@1", "R@5"], "title": "Multimodal Convolutional Neural Networks for Matching Image and Sentence"} {"abstract": "Saliency detection on RGB-D images is receiving more and more research interest recently. Previous models adopt the early fusion or the result fusion scheme to fuse the input RGB and depth data or their saliency maps, which incur the problem of distribution gap or information loss. Some other models use the feature fusion scheme but are limited by the linear feature fusion methods. In this paper, we propose to fuse attention learned in both modalities. Inspired by the Non-local model, we integrate the self-attention and each other's attention to propagate long-range contextual dependencies, thus incorporating multi-modal information to learn attention and propagate contexts more accurately. Considering the reliability of the other modality's attention, we further propose a selection attention mechanism to weight the newly added attention term. We embed the proposed attention module in a two-stream CNN for RGB-D saliency detection. Furthermore, we also propose a residual fusion module to fuse the depth decoder features into the RGB stream. Experimental results on seven benchmark datasets demonstrate the effectiveness of the proposed model components and our final saliency model. 
Our code and saliency maps are available at https://github.com/nnizhang/S2MA.\r", "field": [], "task": ["RGB-D Salient Object Detection", "Saliency Detection"], "method": [], "dataset": ["NJU2K"], "metric": ["max E-Measure", "Average MAE", "S-Measure", "max F-Measure"], "title": "Learning Selective Self-Mutual Attention for RGB-D Saliency Detection"} {"abstract": "We propose several new models for semi-supervised nonnegative matrix factorization (SSNMF) and provide motivation for SSNMF models as maximum likelihood estimators given specific distributions of uncertainty. We present multiplicative update training methods for each new model, and demonstrate the application of these models to classification, although they are flexible enough for other supervised learning tasks. We illustrate the promise of these models and training methods on both synthetic and real data, and achieve high classification accuracy on the 20 Newsgroups dataset.", "field": [], "task": ["Text Classification"], "method": [], "dataset": ["20NEWS"], "metric": ["Accuracy"], "title": "Semi-supervised NMF Models for Topic Modeling in Learning Tasks"} {"abstract": "Detecting rare objects from a few examples is an emerging problem. Prior works show meta-learning is a promising approach. However, fine-tuning techniques have drawn scant attention. We find that fine-tuning only the last layer of existing detectors on rare classes is crucial to the few-shot object detection task. Such a simple approach outperforms the meta-learning methods by roughly 2~20 points on current benchmarks and sometimes even doubles the accuracy of the prior methods. However, the high variance in the few samples often leads to the unreliability of existing benchmarks. We revise the evaluation protocols by sampling multiple groups of training examples to obtain stable comparisons and build new benchmarks based on three datasets: PASCAL VOC, COCO and LVIS. Again, our fine-tuning approach establishes a new state of the art on the revised benchmarks. The code as well as the pretrained models are available at https://github.com/ucbdrive/few-shot-object-detection.", "field": [], "task": ["Few-Shot Object Detection", "Meta-Learning", "Object Detection"], "method": [], "dataset": ["MS-COCO (30-shot)", "MS-COCO (10-shot)"], "metric": ["AP"], "title": "Frustratingly Simple Few-Shot Object Detection"} {"abstract": "This paper builds off recent work from Kiperwasser & Goldberg (2016) using\nneural attention in a simple graph-based dependency parser. We use a larger but\nmore thoroughly regularized parser than other recent BiLSTM-based approaches,\nwith biaffine classifiers to predict arcs and labels. Our parser achieves state-of-the-art\nor near-state-of-the-art performance on standard treebanks for six\ndifferent languages, achieving 95.7% UAS and 94.1% LAS on the most popular\nEnglish PTB dataset. This makes it the highest-performing graph-based parser on\nthis benchmark---outperforming Kiperwasser & Goldberg (2016) by 1.8% and\n2.2%---and comparable to the highest performing transition-based parser\n(Kuncoro et al., 2016), which achieves 95.8% UAS and 94.6% LAS. 
We also show\nwhich hyperparameter choices had a significant effect on parsing accuracy,\nallowing us to achieve large gains over other graph-based approaches.", "field": [], "task": ["Dependency Parsing"], "method": [], "dataset": ["Penn Treebank", "CoNLL-2009"], "metric": ["UAS", "POS", "LAS"], "title": "Deep Biaffine Attention for Neural Dependency Parsing"} {"abstract": "We introduce a novel strategy for learning to extract semantically meaningful\nfeatures from aerial imagery. Instead of manually labeling the aerial imagery,\nwe propose to predict (noisy) semantic features automatically extracted from\nco-located ground imagery. Our network architecture takes an aerial image as\ninput, extracts features using a convolutional neural network, and then applies\nan adaptive transformation to map these features into the ground-level\nperspective. We use an end-to-end learning approach to minimize the difference\nbetween the semantic segmentation extracted directly from the ground image and\nthe semantic segmentation predicted solely based on the aerial image. We show\nthat a model learned using this strategy, with no additional training, is\nalready capable of rough semantic labeling of aerial imagery. Furthermore, we\ndemonstrate that by finetuning this model we can achieve more accurate semantic\nsegmentation than two baseline initialization strategies. We use our network to\naddress the task of estimating the geolocation and geoorientation of a ground\nimage. Finally, we show how features extracted from an aerial image can be used\nto hallucinate a plausible ground-level panorama.", "field": [], "task": ["Cross-View Image-to-Image Translation", "Semantic Segmentation"], "method": [], "dataset": ["cvusa"], "metric": ["SSIM"], "title": "Predicting Ground-Level Scene Layout from Aerial Imagery"} {"abstract": "The goal of this paper is to estimate the 6D pose and dimensions of unseen object instances in an RGB-D image. Contrary to \"instance-level\" 6D pose estimation tasks, our problem assumes that no exact object CAD models are available during either training or testing time. To handle different and unseen object instances in a given category, we introduce a Normalized Object Coordinate Space (NOCS)---a shared canonical representation for all possible object instances within a category. Our region-based neural network is then trained to directly infer the correspondence from observed pixels to this shared object representation (NOCS) along with other object information such as class label and instance mask. These predictions can be combined with the depth map to jointly estimate the metric 6D pose and dimensions of multiple objects in a cluttered scene. To train our network, we present a new context-aware technique to generate large amounts of fully annotated mixed reality data. To further improve our model and evaluate its performance on real data, we also provide a fully annotated real-world dataset with large environment and instance variation. 
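Since the deep biaffine parser record above centers on biaffine classifiers for arc scoring, a minimal sketch of such a scorer is included here; the dimensions are illustrative and this is not the authors' code.

```python
import torch
import torch.nn as nn

class BiaffineArcScorer(nn.Module):
    """Sketch of a biaffine arc scorer: s[b, i, j] = score of head j for dependent i."""
    def __init__(self, dim):
        super().__init__()
        self.U = nn.Parameter(torch.zeros(dim, dim))   # bilinear term
        self.u = nn.Parameter(torch.zeros(dim))        # head bias term
        nn.init.xavier_uniform_(self.U)

    def forward(self, h_dep, h_head):
        # h_dep, h_head: (batch, seq_len, dim) dependent/head representations
        bilinear = h_dep @ self.U @ h_head.transpose(1, 2)   # (B, L, L)
        head_bias = (h_head @ self.u).unsqueeze(1)           # (B, 1, L), broadcast over rows
        return bilinear + head_bias

# Example: arc scores for a batch of 2 sentences of length 5
scorer = BiaffineArcScorer(dim=16)
scores = scorer(torch.randn(2, 5, 16), torch.randn(2, 5, 16))
print(scores.shape)  # torch.Size([2, 5, 5])
```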
Extensive experiments demonstrate that the proposed method is able to robustly estimate the pose and size of unseen object instances in real environments while also achieving state-of-the-art performance on standard 6D pose estimation benchmarks.", "field": [], "task": ["6D Pose Estimation", "6D Pose Estimation using RGB", "Pose Estimation"], "method": [], "dataset": ["REAL275", "CAMERA25"], "metric": ["mAP 10, 10cm", "mAP 3DIou@50", "mAP 10, 5cm", "mAP 5, 5cm", "mAP 3DIou@25"], "title": "Normalized Object Coordinate Space for Category-Level 6D Object Pose and Size Estimation"} {"abstract": "One technique to improve the retrieval effectiveness of a search engine is to expand documents with terms that are related to or representative of the documents' content. From the perspective of a question answering system, this might comprise questions the document can potentially answer. Following this observation, we propose a simple method that predicts which queries will be issued for a given document and then expands it with those predictions, using a vanilla sequence-to-sequence model trained on datasets consisting of pairs of queries and relevant documents. By combining our method with a highly effective re-ranking component, we achieve the state of the art in two retrieval tasks. In a latency-critical regime, retrieval results alone (without re-ranking) approach the effectiveness of more computationally expensive neural re-rankers but are much faster.", "field": [], "task": ["Passage Re-Ranking", "Question Answering"], "method": [], "dataset": ["MS MARCO", "TREC-PM"], "metric": ["mAP", "MRR"], "title": "Document Expansion by Query Prediction"} {"abstract": "Models for reading comprehension (RC) commonly restrict their output space to the set of all single contiguous spans from the input, in order to alleviate the learning problem and avoid the need for a model that generates text explicitly. However, forcing an answer to be a single span can be restrictive, and some recent datasets also include multi-span questions, i.e., questions whose answer is a set of non-contiguous spans in the text. Naturally, models that return single spans cannot answer these questions. In this work, we propose a simple architecture for answering multi-span questions by casting the task as a sequence tagging problem, namely, predicting for each input token whether it should be part of the output or not. Our model substantially improves performance on span extraction questions from DROP and Quoref by 9.9 and 5.5 EM points, respectively.", "field": [], "task": ["Question Answering", "Reading Comprehension"], "method": [], "dataset": ["DROP Test"], "metric": ["F1"], "title": "A Simple and Effective Model for Answering Multi-span Questions"} {"abstract": "This paper introduces a novel deep network for estimating depth maps from a light field image. To utilize the views more effectively and reduce redundancy within views, we propose a view selection module that generates an attention map indicating the importance of each view and its potential for contributing to accurate depth estimation. By exploring the symmetric property of light field views, we enforce symmetry in the attention map and further improve accuracy. With the attention map, our architecture utilizes all views more effectively and efficiently. 
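The NOCS record above describes combining predicted normalized coordinates with the depth map to recover metric pose and size. One standard way to do that final fitting step is a similarity-transform (Umeyama-style) least-squares fit between NOCS points and depth-backprojected points; the sketch below shows that fit as an assumption about the pose-recovery stage, not the paper's exact procedure.

```python
import numpy as np

def fit_similarity_transform(src, dst):
    """Least-squares similarity transform dst ~= s * R @ src + t (Umeyama).

    src: (N, 3) predicted NOCS coordinates; dst: (N, 3) 3D points from depth.
    Returns scale s, rotation R (3x3), translation t (3,).
    """
    mu_src, mu_dst = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:   # avoid reflections
        S[2, 2] = -1
    R = U @ S @ Vt
    scale = np.trace(np.diag(D) @ S) / src_c.var(0).sum()
    t = mu_dst - scale * R @ mu_src
    return scale, R, t

# Example: recover a synthetic similarity transform
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R_true *= np.linalg.det(R_true)                    # ensure a proper rotation
dst = 2.0 * src @ R_true.T + np.array([0.1, -0.2, 0.3])
print(fit_similarity_transform(src, dst)[0])       # ~2.0
```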
Experiments show that the proposed method achieves state-of-the-art performance in terms of accuracy and ranks first on a popular benchmark for disparity estimation for light field images.", "field": [], "task": ["Depth Estimation", "Disparity Estimation"], "method": [], "dataset": ["4D Light Field Dataset"], "metric": ["BadPix(0.03)", "BadPix(0.01)", "BadPix(0.07)", "MSE "], "title": "Attention-based View Selection Networks for Light-field Disparity Estimation"} {"abstract": "An abstract must not change the meaning of the original text. The single most effective way to achieve that is to increase the amount of copying while still allowing for text abstraction. Human editors can usually exercise control over copying, resulting in summaries that are more extractive than abstractive, or vice versa. However, it remains poorly understood whether modern neural abstractive summarizers can provide the same flexibility, i.e., learning from single reference summaries to generate multiple summary hypotheses with varying degrees of copying. In this paper, we present a neural summarization model that, by learning from single human abstracts, can produce a broad spectrum of summaries ranging from purely extractive to highly generative ones. We frame the task of summarization as language modeling and exploit alternative mechanisms to generate summary hypotheses. Our method allows for control over copying during both training and decoding stages of a neural summarization model. Through extensive experiments, we illustrate the significance of our proposed method in controlling the amount of verbatim copying and achieve competitive results against strong baselines. Our analysis further reveals interesting and unobvious facts.", "field": [], "task": ["Abstractive Text Summarization", "Language Modelling", "Text Summarization"], "method": [], "dataset": ["GigaWord"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Controlling the Amount of Verbatim Copying in Abstractive Summarization"} {"abstract": "3D point cloud completion, the task of inferring the complete geometric shape from a partial point cloud, has been attracting attention in the community. To acquire high-fidelity dense point clouds and avoid the uneven distribution, blurred details, or structural loss seen in existing methods' results, we propose a novel approach to complete the partial point cloud in two stages. Specifically, in the first stage, the approach predicts a complete but coarse-grained point cloud with a collection of parametric surface elements. Then, in the second stage, it merges the coarse-grained prediction with the input point cloud by a novel sampling algorithm. Our method utilizes a joint loss function to guide the distribution of the points. Extensive experiments verify the effectiveness of our method and demonstrate that it outperforms the existing methods in both the Earth Mover's Distance (EMD) and the Chamfer Distance (CD).", "field": [], "task": ["Point Cloud Completion"], "method": [], "dataset": ["ShapeNet"], "metric": ["F-Score@1%", "Chamfer Distance"], "title": "Morphing and Sampling Network for Dense Point Cloud Completion"} {"abstract": "Video super-resolution (SR) aims at generating a sequence of high-resolution (HR) frames with plausible and temporally consistent details from their low-resolution (LR) counterparts. The key challenge for video SR lies in the effective exploitation of temporal dependency between consecutive frames. 
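Since the point cloud completion record above evaluates with the Chamfer Distance (CD), a small reference-style sketch of the symmetric CD between two point sets is added here; the squared-distance convention is an assumption, as papers vary between squared and unsquared forms.

```python
import numpy as np

def chamfer_distance(p1, p2):
    """Symmetric Chamfer Distance between point sets p1 (N, 3) and p2 (M, 3).

    Uses squared Euclidean distances and averages nearest-neighbour terms
    in both directions.
    """
    d = ((p1[:, None, :] - p2[None, :, :]) ** 2).sum(-1)   # (N, M) pairwise sq. distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Example: identical clouds give distance 0
cloud = np.random.rand(256, 3)
print(chamfer_distance(cloud, cloud))        # 0.0
print(chamfer_distance(cloud, cloud + 0.1))  # > 0
```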
Existing deep learning based methods commonly estimate optical flows between LR frames to provide temporal dependency. However, the resolution conflict between LR optical flows and HR outputs hinders the recovery of fine details. In this paper, we propose an end-to-end video SR network to super-resolve both optical flows and images. Optical flow SR from LR frames provides accurate temporal dependency and ultimately improves video SR performance. Specifically, we first propose an optical flow reconstruction network (OFRnet) to infer HR optical flows in a coarse-to-fine manner. Then, motion compensation is performed using HR optical flows to encode temporal dependency. Finally, compensated LR inputs are fed to a super-resolution network (SRnet) to generate SR results. Extensive experiments have been conducted to demonstrate the effectiveness of HR optical flows for SR performance improvement. Comparative results on the Vid4 and DAVIS-10 datasets show that our network achieves state-of-the-art performance.", "field": [], "task": ["Motion Compensation", "Optical Flow Estimation", "Super-Resolution", "Video Super-Resolution"], "method": [], "dataset": ["Vid4 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Deep Video Super-Resolution using HR Optical Flow Estimation"} {"abstract": "The availability of large-scale datasets has helped unleash the true potential of deep convolutional neural networks (CNNs). However, for the single-image denoising problem, capturing a real dataset is an unacceptably expensive and cumbersome procedure. Consequently, image denoising algorithms are mostly developed and evaluated on synthetic data that is usually generated with a widespread assumption of additive white Gaussian noise (AWGN). While the CNNs achieve impressive results on these synthetic datasets, they do not perform well when applied to real camera images, as reported in recent benchmark datasets. This is mainly because the AWGN is not adequate for modeling the real camera noise which is signal-dependent and heavily transformed by the camera imaging pipeline. In this paper, we present a framework that models the camera imaging pipeline in forward and reverse directions. It allows us to produce any number of realistic image pairs for denoising both in RAW and sRGB spaces. By training a new image denoising network on realistic synthetic data, we achieve state-of-the-art performance on real camera benchmark datasets. Our model has ~5 times fewer parameters than the previous best method for RAW denoising. Furthermore, we demonstrate that the proposed framework generalizes beyond the image denoising problem, e.g., to color matching in stereoscopic cinema. The source code and pre-trained models are available at https://github.com/swz30/CycleISP.", "field": [], "task": ["Denoising", "Image Denoising", "Image Restoration"], "method": [], "dataset": ["SIDD", "DND"], "metric": ["SSIM (sRGB)", "PSNR (sRGB)"], "title": "CycleISP: Real Image Restoration via Improved Data Synthesis"} {"abstract": "Unsupervised learning of optical flow, which leverages the supervision from view synthesis, has emerged as a promising alternative to supervised methods. However, the objective of unsupervised learning is likely to be unreliable in challenging scenes. In this work, we present a framework to use more reliable supervision from transformations. 
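The motion-compensation step mentioned in the video SR record above (warping a neighbouring frame toward the reference frame with an optical flow field) can be sketched with a standard backward-warping routine; this is a generic utility under common conventions, not the OFRnet/SRnet implementation.

```python
import torch
import torch.nn.functional as F

def warp_with_flow(img, flow):
    """Backward-warp img (B, C, H, W) with flow (B, 2, H, W) given in pixels."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(img)          # (2, H, W), x then y
    coords = grid.unsqueeze(0) + flow                            # sampling locations
    # normalize to [-1, 1] as expected by grid_sample
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid_norm = torch.stack((coords_x, coords_y), dim=-1)        # (B, H, W, 2)
    return F.grid_sample(img, grid_norm, align_corners=True)

# Example: zero flow returns (approximately) the input frame
frame = torch.rand(1, 3, 16, 16)
print(torch.allclose(warp_with_flow(frame, torch.zeros(1, 2, 16, 16)), frame, atol=1e-5))
```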
It simply twists the general unsupervised learning pipeline by running another forward pass with transformed data from augmentation, along with using transformed predictions of original data as the self-supervision signal. In addition, we introduce a lightweight network for multiple frames with a highly shared flow decoder. Our method consistently achieves a leap in performance on several benchmarks, with the best accuracy among deep unsupervised methods. Our method also achieves results competitive with recent fully supervised methods while using far fewer parameters.", "field": [], "task": ["Optical Flow Estimation", "Self-Supervised Learning"], "method": [], "dataset": ["Sintel Final unsupervised", "KITTI 2015 unsupervised", "KITTI 2012 unsupervised", "Sintel Clean unsupervised"], "metric": ["Average End-Point Error", "Fl-all"], "title": "Learning by Analogy: Reliable Supervision from Transformations for Unsupervised Optical Flow Estimation"} {"abstract": "Partially supervised instance segmentation aims to perform learning on limited mask-annotated categories of data, thus eliminating expensive and exhaustive mask annotation. The learned models are expected to be generalizable to novel categories. Existing methods either learn a transfer function from detection to segmentation, or cluster shape priors for segmenting novel categories. We propose to learn the underlying class-agnostic commonalities that can be generalized from mask-annotated categories to novel categories. Specifically, we parse two types of commonalities: 1) shape commonalities which are learned by performing supervised learning on instance boundary prediction; and 2) appearance commonalities which are captured by modeling pairwise affinities among pixels of feature maps to optimize the separability between instances and the background. Incorporating both the shape and appearance commonalities, our model significantly outperforms the state-of-the-art methods in both the partially supervised setting and the few-shot setting for instance segmentation on the COCO dataset.", "field": [], "task": ["Instance Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["COCO test-dev"], "metric": ["APM", "AP75", "APS", "APL", "AP50", "mask AP"], "title": "Commonality-Parsing Network across Shape and Appearance for Partially Supervised Instance Segmentation"} {"abstract": "Single-stage instance segmentation approaches have recently gained popularity due to their speed and simplicity, but are still lagging behind in accuracy, compared to two-stage methods. We propose a fast single-stage instance segmentation method, called SipMask, that preserves instance-specific spatial information by separating the mask prediction of an instance into different sub-regions of a detected bounding box. Our main contribution is a novel light-weight spatial preservation (SP) module that generates a separate set of spatial coefficients for each sub-region within a bounding box, leading to improved mask predictions. It also enables accurate delineation of spatially adjacent instances. Further, we introduce a mask alignment weighting loss and a feature alignment scheme to better correlate mask prediction with object detection. On COCO test-dev, our SipMask outperforms the existing single-stage methods. Compared to the state-of-the-art single-stage TensorMask, SipMask obtains an absolute gain of 1.0% (mask AP), while providing a four-fold speedup. 
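The transformation-based self-supervision described in the optical-flow record above can be summarized in a short training-step sketch: predict flow on the original pair, transform both the images and the (detached) prediction, and use the transformed prediction as a pseudo-label for a second forward pass. The flow network, photometric loss, and spatial-transform helper are hypothetical placeholders, and for non-translational transforms the flow vectors would also need to be remapped consistently (omitted here).

```python
import torch

def self_supervised_step(flow_net, photometric_loss, spatial_transform, img1, img2):
    """One training step using transformed predictions as self-supervision (sketch)."""
    # First pass: ordinary unsupervised objective on the original pair.
    flow = flow_net(img1, img2)
    loss_photo = photometric_loss(img1, img2, flow)

    # Build a pseudo-label by transforming the detached original prediction.
    # Note: a rotation/scaling transform must also rotate/scale the flow vectors.
    flow_teacher = spatial_transform(flow.detach())

    # Second pass on transformed inputs, supervised by the transformed prediction.
    img1_t, img2_t = spatial_transform(img1), spatial_transform(img2)
    flow_student = flow_net(img1_t, img2_t)
    loss_self = (flow_student - flow_teacher).abs().mean()

    return loss_photo + loss_self
```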
In terms of real-time capabilities, SipMask outperforms YOLACT with an absolute gain of 3.0% (mask AP) under similar settings, while operating at comparable speed on a Titan Xp. We also evaluate our SipMask for real-time video instance segmentation, achieving promising results on YouTube-VIS dataset. The source code is available at https://github.com/JialeCao001/SipMask.", "field": [], "task": ["Instance Segmentation", "Object Detection", "Real-time Instance Segmentation", "Semantic Segmentation", "Video Instance Segmentation"], "method": [], "dataset": ["YouTube-VIS validation", "COCO test-dev", "MSCOCO"], "metric": ["AR10", "APM", "inference time (ms)", "AR1", "AP75", "APS", "APL", "AP50", "Frame (fps)", "mask AP"], "title": "SipMask: Spatial Information Preservation for Fast Image and Video Instance Segmentation"} {"abstract": "We present the first parser for UCCA, a cross-linguistically applicable\nframework for semantic representation, which builds on extensive typological\nwork and supports rapid annotation. UCCA poses a challenge for existing parsing\ntechniques, as it exhibits reentrancy (resulting in DAG structures),\ndiscontinuous structures and non-terminal nodes corresponding to complex\nsemantic units. To our knowledge, the conjunction of these formal properties is\nnot supported by any existing parser. Our transition-based parser, which uses a\nnovel transition set and features based on bidirectional LSTMs, has value not\njust for UCCA parsing: its ability to handle more general graph structures can\ninform the development of parsers for other semantic DAG structures, and in\nlanguages that frequently use discontinuous structures.", "field": [], "task": ["Semantic Parsing", "UCCA Parsing"], "method": [], "dataset": ["SemEval 2019 Task 1"], "metric": ["English-20K (open) F1", "English-Wiki (open) F1"], "title": "A Transition-Based Directed Acyclic Graph Parser for UCCA"} {"abstract": "Recently, researchers have made significant progress combining the advances\nin deep learning for learning feature representations with reinforcement\nlearning. Some notable examples include training agents to play Atari games\nbased on raw pixel data and to acquire advanced manipulation skills using raw\nsensory inputs. However, it has been difficult to quantify progress in the\ndomain of continuous control due to the lack of a commonly adopted benchmark.\nIn this work, we present a benchmark suite of continuous control tasks,\nincluding classic tasks like cart-pole swing-up, tasks with very high state and\naction dimensionality such as 3D humanoid locomotion, tasks with partial\nobservations, and tasks with hierarchical structure. We report novel findings\nbased on the systematic evaluation of a range of implemented reinforcement\nlearning algorithms. 
Both the benchmark and reference implementations are\nreleased at https://github.com/rllab/rllab in order to facilitate experimental\nreproducibility and to encourage adoption by other researchers.", "field": [], "task": ["Atari Games", "Continuous Control", "Hierarchical structure"], "method": [], "dataset": ["Cart-Pole Balancing", "2D Walker", "Ant", "Mountain Car (system identifications)", "Swimmer", "Swimmer + Maze", "Hopper", "Double Inverted Pendulum", "Ant + Maze", "Mountain Car (limited sensors)", "Mountain Car (noisy observations)", "Inverted Pendulum", "Cart-Pole Balancing (system identifications)", "Acrobot (system identifications)", "Acrobot", "Inverted Pendulum (system identifications)", "Mountain Car", "Inverted Pendulum (limited sensors)", "Inverted Pendulum (noisy observations)", "Acrobot (noisy observations)", "Half-Cheetah", "Cart-Pole Balancing (limited sensors)", "Ant + Gathering", "Swimmer + Gathering", "Cart-Pole Balancing (noisy observations)", "Full Humanoid", "Simple Humanoid", "Acrobot (limited sensors)"], "metric": ["Score"], "title": "Benchmarking Deep Reinforcement Learning for Continuous Control"} {"abstract": "Typical human actions last several seconds and exhibit characteristic\nspatio-temporal structure. Recent methods attempt to capture this structure and\nlearn action representations with convolutional neural networks. Such\nrepresentations, however, are typically learned at the level of a few video\nframes failing to model actions at their full temporal extent. In this work we\nlearn video representations using neural networks with long-term temporal\nconvolutions (LTC). We demonstrate that LTC-CNN models with increased temporal\nextents improve the accuracy of action recognition. We also study the impact of\ndifferent low-level representations, such as raw values of video pixels and\noptical flow vector fields and demonstrate the importance of high-quality\noptical flow estimation for learning accurate action models. We report\nstate-of-the-art results on two challenging benchmarks for human action\nrecognition UCF101 (92.7%) and HMDB51 (67.2%).", "field": [], "task": ["Action Recognition", "Optical Flow Estimation", "Temporal Action Localization"], "method": [], "dataset": ["UCF101", "HMDB-51"], "metric": ["Average accuracy of 3 splits", "3-fold Accuracy"], "title": "Long-term Temporal Convolutions for Action Recognition"} {"abstract": "In the past few years, a lot of work has been done towards reconstructing the\n3D facial structure from single images by capitalizing on the power of Deep\nConvolutional Neural Networks (DCNNs). In the most recent works, differentiable\nrenderers were employed in order to learn the relationship between the facial\nidentity features and the parameters of a 3D morphable model for shape and\ntexture. The texture features either correspond to components of a linear\ntexture space or are learned by auto-encoders directly from in-the-wild images.\nIn all cases, the quality of the facial texture reconstruction of the\nstate-of-the-art methods is still not capable of modeling textures in high\nfidelity. In this paper, we take a radically different approach and harness the\npower of Generative Adversarial Networks (GANs) and DCNNs in order to\nreconstruct the facial texture and shape from single images. 
That is, we\nutilize GANs to train a very powerful generator of facial texture in UV space.\nThen, we revisit the original 3D Morphable Models (3DMMs) fitting approaches,\nmaking use of non-linear optimization to find the optimal latent parameters\nthat best reconstruct the test image, but under a new perspective. We optimize\nthe parameters with the supervision of pretrained deep identity features\nthrough our end-to-end differentiable framework. We demonstrate excellent\nresults in photorealistic and identity-preserving 3D face reconstructions and\nachieve for the first time, to the best of our knowledge, facial texture\nreconstruction with high-frequency details.", "field": [], "task": ["3D Face Reconstruction", "Face Reconstruction"], "method": [], "dataset": ["Florence"], "metric": ["Average 3D Error"], "title": "GANFIT: Generative Adversarial Network Fitting for High Fidelity 3D Face Reconstruction"} {"abstract": "We propose a universal image reconstruction method to represent detailed\nimages purely from binary sparse edge and flat color domain. Inspired by the\nprocedures of painting, our framework, based on a generative adversarial network,\nconsists of three phases: the Imitation Phase aims at initializing the networks,\nfollowed by the Generating Phase to reconstruct preliminary images. Moreover,\nthe Refinement Phase is utilized to fine-tune preliminary images into final outputs\nwith details. This framework allows our model to generate abundant high-frequency\ndetails from sparse input information. We also explore the defects of\nimplicitly disentangling a style latent space from images, and demonstrate that\nthe explicit color domain in our model performs better in terms of controllability and\ninterpretability. In our experiments, we achieve outstanding results on\nreconstructing realistic images and translating hand-drawn drafts into\nsatisfactory paintings. Besides, within the domain of edge-to-image\ntranslation, our model PI-REC outperforms existing state-of-the-art methods on\nevaluations of realism and accuracy, both quantitatively and qualitatively.", "field": [], "task": ["Image Reconstruction", "Image-to-Image Translation"], "method": [], "dataset": ["Edge-to-Shoes", "Edge-to-Handbags"], "metric": ["HP", "FID", "LPIPS", "MMD"], "title": "PI-REC: Progressive Image Reconstruction Network With Edge and Color Domain"} {"abstract": "Detection identifies objects as axis-aligned boxes in an image. Most\nsuccessful object detectors enumerate a nearly exhaustive list of potential\nobject locations and classify each. This is wasteful, inefficient, and requires\nadditional post-processing. In this paper, we take a different approach. We\nmodel an object as a single point --- the center point of its bounding box. Our\ndetector uses keypoint estimation to find center points and regresses to all\nother object properties, such as size, 3D location, orientation, and even pose.\nOur center point based approach, CenterNet, is end-to-end differentiable,\nsimpler, faster, and more accurate than corresponding bounding box based\ndetectors. CenterNet achieves the best speed-accuracy trade-off on the MS COCO\ndataset, with 28.1% AP at 142 FPS, 37.4% AP at 52 FPS, and 45.1% AP with\nmulti-scale testing at 1.4 FPS. We use the same approach to estimate 3D\nbounding boxes in the KITTI benchmark and human pose on the COCO keypoint\ndataset.
Our method performs competitively with sophisticated multi-stage\nmethods and runs in real-time.", "field": [], "task": ["Keypoint Detection", "Object Detection", "Real-Time Object Detection"], "method": [], "dataset": ["COCO", "COCO test-dev"], "metric": ["APM", "FPS", "MAP", "inference time (ms)", "box AP", "APL", "APS"], "title": "Objects as Points"} {"abstract": "Handling previously unseen tasks after given only a few training examples continues to be a tough challenge in machine learning. We propose TapNets, neural networks augmented with task-adaptive projection for improved few-shot learning. Here, employing a meta-learning strategy with episode-based training, a network and a set of per-class reference vectors are learned across widely varying tasks. At the same time, for every episode, features in the embedding space are linearly projected into a new space as a form of quick task-specific conditioning. The training loss is obtained based on a distance metric between the query and the reference vectors in the projection space. Excellent generalization results in this way. When tested on the Omniglot, miniImageNet and tieredImageNet datasets, we obtain state of the art classification accuracies under various few-shot scenarios.", "field": [], "task": ["Few-Shot Learning", "Meta-Learning", "Omniglot"], "method": [], "dataset": ["OMNIGLOT - 5-Shot, 20-way", "Mini-Imagenet 5-way (1-shot)", "Tiered ImageNet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "OMNIGLOT - 1-Shot, 20-way", "Tiered ImageNet 5-way (5-shot)"], "metric": ["Accuracy"], "title": "TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning"} {"abstract": "Long short-term memory (LSTM) networks and their variants are capable of encapsulating long-range dependencies, which is evident from their performance on a variety of linguistic tasks. On the other hand, simple recurrent networks (SRNs), which appear more biologically grounded in terms of synaptic connections, have generally been less successful at capturing long-range dependencies as well as the loci of grammatical errors in an unsupervised setting. In this paper, we seek to develop models that bridge the gap between biological plausibility and linguistic competence. We propose a new architecture, the Decay RNN, which incorporates the decaying nature of neuronal activations and models the excitatory and inhibitory connections in a population of neurons. Besides its biological inspiration, our model also shows competitive performance relative to LSTMs on subject-verb agreement, sentence grammaticality, and language modeling tasks. These results provide some pointers towards probing the nature of the inductive biases required for RNN architectures to model linguistic phenomena successfully.", "field": [], "task": ["Language Modelling"], "method": [], "dataset": ["WikiText-103"], "metric": ["Validation perplexity"], "title": "How much complexity does an RNN architecture need to learn syntax-sensitive dependencies?"} {"abstract": "In neural abstractive summarization, the conventional sequence-to-sequence\n(seq2seq) model often suffers from repetition and semantic irrelevance. To\ntackle the problem, we propose a global encoding framework, which controls the\ninformation flow from the encoder to the decoder based on the global\ninformation of the source context. It consists of a convolutional gated unit to\nperform global encoding to improve the representations of the source-side\ninformation. 
Evaluations on the LCSTS and the English Gigaword both demonstrate\nthat our model outperforms the baseline models, and the analysis shows that our\nmodel is capable of reducing repetition.", "field": [], "task": ["Abstractive Text Summarization"], "method": [], "dataset": ["GigaWord"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Global Encoding for Abstractive Summarization"} {"abstract": "Adversarial training (AT) is a powerful regularization method for neural\nnetworks, aiming to achieve robustness to input perturbations. Yet, the\nspecific effects of the robustness obtained from AT are still unclear in the\ncontext of natural language processing. In this paper, we propose and analyze a\nneural POS tagging model that exploits AT. In our experiments on the Penn\nTreebank WSJ corpus and the Universal Dependencies (UD) dataset (27 languages),\nwe find that AT not only improves the overall tagging accuracy, but also 1)\neffectively prevents over-fitting in low-resource languages and 2) boosts tagging\naccuracy for rare / unseen words. We also demonstrate that 3) the improved\ntagging performance by AT contributes to the downstream task of dependency\nparsing, and that 4) AT helps the model to learn cleaner word representations.\n5) The proposed AT model is generally effective in different sequence labeling\ntasks. These positive results motivate further use of AT for natural language\ntasks.", "field": [], "task": ["Chunking", "Dependency Parsing", "Named Entity Recognition", "Part-Of-Speech Tagging"], "method": [], "dataset": ["CoNLL 2000", "CoNLL 2003 (English)", "Penn Treebank", "UD"], "metric": ["Exact Span F1", "Avg accuracy", "F1", "Accuracy"], "title": "Robust Multilingual Part-of-Speech Tagging via Adversarial Training"} {"abstract": "We introduce a new count-based optimistic exploration algorithm for\nReinforcement Learning (RL) that is feasible in environments with\nhigh-dimensional state-action spaces. The success of RL algorithms in these\ndomains depends crucially on generalisation from limited training experience.\nFunction approximation techniques enable RL agents to generalise in order to\nestimate the value of unvisited states, but at present few methods enable\ngeneralisation regarding uncertainty. This has prevented the combination of\nscalable RL algorithms with efficient exploration strategies that drive the\nagent to reduce its uncertainty. We present a new method for computing a\ngeneralised state visit-count, which allows the agent to estimate the\nuncertainty associated with any state. Our \\phi-pseudocount achieves\ngeneralisation by exploiting the same feature representation of the state space\nthat is used for value function approximation. States that have less frequently\nobserved features are deemed more uncertain. The \\phi-Exploration-Bonus\nalgorithm rewards the agent for exploring in feature space rather than in the\nuntransformed state space.
The method is simpler and less computationally\nexpensive than some previous proposals, and achieves near state-of-the-art\nresults on high-dimensional RL benchmarks.", "field": [], "task": ["Atari Games", "Efficient Exploration"], "method": [], "dataset": ["Atari 2600 Venture", "Atari 2600 Montezuma's Revenge", "Atari 2600 Frostbite", "Atari 2600 Freeway", "Atari 2600 Q*Bert"], "metric": ["Score"], "title": "Count-Based Exploration in Feature Space for Reinforcement Learning"} {"abstract": "In this paper, we propose a new multi-scale face detector having an extremely tiny number of parameters (EXTD), less than 0.1 million, while achieving performance comparable to deep heavy detectors. While existing multi-scale face detectors extract feature maps with different scales from a single backbone network, our method generates the feature maps by iteratively reusing a shared lightweight and shallow backbone network. This iterative sharing of the backbone network significantly reduces the number of parameters, and also provides the abstract image semantics captured from the higher stage of the network layers to the lower-level feature map. The proposed idea is employed by various model architectures and evaluated by extensive experiments. In the experiments on the WIDER FACE dataset, we show that the proposed face detector can handle faces of various scales and conditions, and achieves performance comparable to far more massive face detectors that are tens to hundreds of times heavier in model size and floating-point operations.", "field": [], "task": ["Face Detection"], "method": [], "dataset": ["WIDER Face (Hard)", "WIDER Face (Medium)", "WIDER Face (Easy)"], "metric": ["AP"], "title": "EXTD: Extremely Tiny Face Detector via Iterative Filter Reuse"} {"abstract": "We present a novel graph-based neural network model for relation extraction. Our model treats multiple pairs in a sentence simultaneously and considers interactions among them. All the entities in a sentence are placed as nodes in a fully-connected graph structure. The edges are represented with position-aware contexts around the entity pairs. In order to consider different relation paths between two entities, we construct up to l-length walks between each pair. The resulting walks are merged and iteratively used to update the edge representations into longer walk representations. We show that the model achieves performance comparable to the state-of-the-art systems on the ACE 2005 dataset without using any external tools.", "field": [], "task": ["Relation Extraction"], "method": [], "dataset": ["ACE 2005"], "metric": ["Relation classification F1"], "title": "A Walk-based Model on Entity Graphs for Relation Extraction"} {"abstract": "Depth estimation from a single image represents a fascinating, yet\nchallenging problem with countless applications. Recent works proved that this\ntask could be learned without direct supervision from ground truth labels\nleveraging image synthesis on sequences or stereo pairs. Focusing on this\nsecond case, in this paper we leverage stereo matching in order to improve\nmonocular depth estimation. To this aim, we propose monoResMatch, a novel deep\narchitecture designed to infer depth from a single input image by synthesizing\nfeatures from a different point of view, horizontally aligned with the input\nimage, performing stereo matching between the two cues. In contrast to previous\nworks sharing this rationale, our network is the first trained end-to-end from\nscratch.
Moreover, we show how obtaining proxy ground truth annotation through\ntraditional stereo algorithms, such as Semi-Global Matching, enables more\naccurate monocular depth estimation while still avoiding the need for expensive\ndepth labels by keeping a self-supervised approach. Exhaustive experimental\nresults prove how the synergy between i) the proposed monoResMatch architecture\nand ii) proxy-supervision attains state-of-the-art results for self-supervised\nmonocular depth estimation. The code is publicly available at\nhttps://github.com/fabiotosi92/monoResMatch-Tensorflow.", "field": [], "task": ["Depth Estimation", "Image Generation", "Monocular Depth Estimation", "Stereo Matching", "Stereo Matching Hand"], "method": [], "dataset": ["KITTI Eigen split"], "metric": ["absolute relative error"], "title": "Learning monocular depth estimation infusing traditional stereo knowledge"} {"abstract": "In this work, we move beyond the traditional complex-valued representations, introducing more expressive hypercomplex representations to model entities and relations for knowledge graph embeddings. More specifically, quaternion embeddings, hypercomplex-valued embeddings with three imaginary components, are utilized to represent entities. Relations are modelled as rotations in the quaternion space. The advantages of the proposed approach are: (1) Latent inter-dependencies (between all components) are aptly captured with the Hamilton product, encouraging a more compact interaction between entities and relations; (2) Quaternions enable expressive rotation in four-dimensional space and have more degrees of freedom than rotation in the complex plane; (3) The proposed framework is a generalization of ComplEx on the hypercomplex space while offering better geometrical interpretations, concurrently satisfying the key desiderata of relational representation learning (i.e., modeling symmetry, anti-symmetry and inversion). Experimental results demonstrate that our method achieves state-of-the-art performance on four well-established knowledge graph completion benchmarks.", "field": [], "task": ["Graph Embedding", "Knowledge Graph Completion", "Knowledge Graph Embedding", "Knowledge Graph Embeddings", "Knowledge Graphs", "Link Prediction", "Representation Learning"], "method": [], "dataset": [" FB15k", "WN18RR", "WN18", "FB15k-237"], "metric": ["Hits@3", "Hits@1", "MR", "MRR", "Hits@10"], "title": "Quaternion Knowledge Graph Embeddings"} {"abstract": "Anaphora resolution (coreference) systems designed for the CONLL 2012 dataset typically cannot handle key aspects of the full anaphora resolution task such as the identification of singletons and of certain types of non-referring expressions (e.g., expletives), as these aspects are not annotated in that corpus. However, the recently released dataset for the CRAC 2018 Shared Task can now be used for that purpose. In this paper, we introduce an architecture to simultaneously identify non-referring expressions (including expletives, predicative NPs, and other types) and build coreference chains, including singletons. Our cluster-ranking system uses an attention mechanism to determine the relative importance of the mentions in the same cluster. Additional classifiers are used to identify singletons and non-referring markables. Our contributions are as follows. First of all, we report the first result on the CRAC data using system mentions; our result is 5.8% better than the shared task baseline system, which used gold mentions.
Second, we demonstrate that the availability of singleton clusters and non-referring expressions can lead to substantially improved performance on non-singleton clusters as well. Third, we show that despite our model not being designed specifically for the CONLL data, it achieves a score equivalent to that of the state-of-the-art system by Kantor and Globerson (2019) on that dataset.", "field": [], "task": ["Coreference Resolution"], "method": [], "dataset": ["CoNLL 2012", "The ARRAU Corpus"], "metric": ["Avg F1"], "title": "A Cluster Ranking Model for Full Anaphora Resolution"} {"abstract": "In this work, we re-think the task of speech enhancement in unconstrained real-world environments. Current state-of-the-art methods use only the audio stream and are limited in their performance in a wide range of real-world noises. Recent works using lip movements as additional cues improve the quality of generated speech over \"audio-only\" methods. However, these methods cannot be used in several applications where the visual stream is unreliable or completely absent. We propose a new paradigm for speech enhancement by exploiting recent breakthroughs in speech-driven lip synthesis. Using one such model as a teacher network, we train a robust student network to produce accurate lip movements that mask away the noise, thus acting as a \"visual noise filter\". The intelligibility of the speech enhanced by our pseudo-lip approach is comparable (< 3% difference) to the case of using real lips. This implies that we can exploit the advantages of using lip movements even in the absence of a real video stream. We rigorously evaluate our model using quantitative metrics as well as human evaluations. Additional ablation studies and a demo video on our website, containing qualitative comparisons and results, clearly illustrate the effectiveness of our approach: \\url{http://cvit.iiit.ac.in/research/projects/cvit-projects/visual-speech-enhancement-without-a-real-visual-stream}. The code and models are also released for future research: \\url{https://github.com/Sindhu-Hegde/pseudo-visual-speech-denoising}.", "field": [], "task": ["Denoising", "Speech Denoising", "Speech Enhancement"], "method": [], "dataset": ["LRS3+VGGSound", "LRS2+VGGSound"], "metric": ["PESQ", "COVL", "CBAK", "STOI", "CSIG"], "title": "Visual Speech Enhancement Without A Real Visual Stream"} {"abstract": "This paper proposes Omnidirectional Representations from Transformers (OmniNet). In OmniNet, instead of maintaining a strictly horizontal receptive field, each token is allowed to attend to all tokens in the entire network. This process can also be interpreted as a form of extreme or intensive attention mechanism that has the receptive field of the entire width and depth of the network. To this end, the omnidirectional attention is learned via a meta-learner, which is essentially another self-attention based model. In order to mitigate the computational cost of full receptive field attention, we leverage efficient self-attention models such as kernel-based (Choromanski et al.), low-rank attention (Wang et al.) and/or Big Bird (Zaheer et al.) as the meta-learner. Extensive experiments are conducted on autoregressive language modeling (LM1B, C4), Machine Translation, Long Range Arena (LRA), and Image Recognition.
The experiments show that OmniNet achieves considerable improvements across these tasks, including achieving state-of-the-art performance on LM1B, WMT'14 En-De/En-Fr, and Long Range Arena. Moreover, using omnidirectional representation in Vision Transformers leads to significant improvements on image recognition tasks on both few-shot learning and fine-tuning setups.", "field": [], "task": ["Few-Shot Learning", "Language Modelling", "Machine Translation"], "method": [], "dataset": ["WMT2017 English-French", "WMT2017 Russian-English", "WMT2014 English-German", "WMT2017 English-German", "WMT2017 English-Finnish", "WMT2014 English-French", "WMT2017 Chinese-English", "One Billion Word"], "metric": ["Number of params", "PPL", "BLEU", "BLEU score"], "title": "OmniNet: Omnidirectional Representations from Transformers"} {"abstract": "Multivariate time series (MTS) arise when multiple interconnected sensors\nrecord data over time. Dealing with this high-dimensional data is challenging\nfor every classifier for at least two aspects: First, an MTS is not only\ncharacterized by individual feature values, but also by the interplay of\nfeatures in different dimensions. Second, this typically adds large amounts of\nirrelevant data and noise. We present our novel MTS classifier WEASEL+MUSE\nwhich addresses both challenges. WEASEL+MUSE builds a multivariate feature\nvector, first using a sliding-window approach applied to each dimension of the\nMTS, then extracts discrete features per window and dimension. The feature\nvector is subsequently fed through feature selection, removing\nnon-discriminative features, and analysed by a machine learning classifier. The\nnovelty of WEASEL+MUSE lies in its specific way of extracting and filtering\nmultivariate features from MTS by encoding context information into each\nfeature. Still the resulting feature set is small, yet very discriminative and\nuseful for MTS classification. Based on a popular benchmark of 20 MTS datasets,\nwe found that WEASEL+MUSE is among the most accurate classifiers, when compared\nto the state of the art. The outstanding robustness of WEASEL+MUSE is further\nconfirmed based on motion gesture recognition data, where it out-of-the-box\nachieved similar accuracies as domain-specific methods.", "field": [], "task": ["Feature Selection", "Gesture Recognition", "Time Series", "Time Series Classification"], "method": [], "dataset": ["AATLD Gesture Recognition"], "metric": ["Absolute Time (ms)"], "title": "Multivariate Time Series Classification with WEASEL+MUSE"} {"abstract": "Time series are widely used as signals in many classification/regression\ntasks. It is ubiquitous that time series contains many missing values. Given\nmultiple correlated time series data, how to fill in missing values and to\npredict their class labels? Existing imputation methods often impose strong\nassumptions of the underlying data generating process, such as linear dynamics\nin the state space. In this paper, we propose BRITS, a novel method based on\nrecurrent neural networks for missing value imputation in time series data. Our\nproposed method directly learns the missing values in a bidirectional recurrent\ndynamical system, without any specific assumption. 
The imputed values are\ntreated as variables of the RNN graph and can be effectively updated during\nbackpropagation. BRITS has three advantages: (a) it can handle multiple\ncorrelated missing values in time series; (b) it generalizes to time series\nwith underlying nonlinear dynamics; (c) it provides a data-driven imputation\nprocedure and applies to general settings with missing data. We evaluate our\nmodel on three real-world datasets, including an air quality dataset, a\nhealth-care dataset, and a human activity localization dataset. Experiments show\nthat our model outperforms the state-of-the-art methods in both imputation and\nclassification/regression accuracies.", "field": [], "task": ["Imputation", "Multivariate Time Series Forecasting", "Multivariate Time Series Imputation", "Regression", "Time Series"], "method": [], "dataset": ["MIMIC-III", "Beijing Air Quality", "UCI localization data", "PhysioNet Challenge 2012", "PEMS-SF", "USHCN-Daily", "Basketball Players Movement"], "metric": ["MAE (PM2.5)", "OOB Rate (10^\u22123) ", "Player Distance ", "Step Change (10^\u22123)", "MAE (10% missing)", "MSE", "NegLL", "Path Difference", "L2 Loss (10^-4)", "Path Length", "MAE (10% of data as GT)"], "title": "BRITS: Bidirectional Recurrent Imputation for Time Series"} {"abstract": "Estimating the 6D pose of objects from images is an important problem in various applications such as robot manipulation and virtual reality. While direct regression of images to object poses has limited accuracy, matching rendered images of an object against the observed image can produce accurate results. In this work, we propose a novel deep neural network for 6D pose matching named DeepIM. Given an initial pose estimation, our network is able to iteratively refine the pose by matching the rendered image against the observed image. The network is trained to predict a relative pose transformation using an untangled representation of 3D location and 3D orientation and an iterative training process. Experiments on two commonly used benchmarks for 6D pose estimation demonstrate that DeepIM achieves large improvements over state-of-the-art methods. We furthermore show that DeepIM is able to match previously unseen objects.", "field": [], "task": ["6D Pose Estimation", "6D Pose Estimation using RGB", "Pose Estimation", "Regression"], "method": [], "dataset": ["LineMOD", "YCB-Video", "Occlusion LineMOD"], "metric": ["Mean ADI", "Mean ADD", "Accuracy (ADD)", "Accuracy"], "title": "DeepIM: Deep Iterative Matching for 6D Pose Estimation"} {"abstract": "Despite the impressive improvements achieved by unsupervised deep neural\nnetworks in computer vision and NLP tasks, such improvements have not yet been\nobserved in ranking for information retrieval. The reason may be the complexity\nof the ranking problem, as it is not obvious how to learn from queries and\ndocuments when no supervised signal is available. Hence, in this paper, we\npropose to train a neural ranking model using weak supervision, where labels\nare obtained automatically without human annotators or any external resources\n(e.g., click data). To this aim, we use the output of an unsupervised ranking\nmodel, such as BM25, as a weak supervision signal. We further train a set of\nsimple yet effective ranking models based on feed-forward neural networks.
We\nstudy their effectiveness under various learning scenarios (point-wise and\npair-wise models) and using different input representations (i.e., from\nencoding query-document pairs into dense/sparse vectors to using word embedding\nrepresentations). We train our networks using tens of millions of training\ninstances and evaluate them on two standard collections: a homogeneous news\ncollection (Robust) and a heterogeneous large-scale web collection (ClueWeb).\nOur experiments indicate that employing proper objective functions and letting\nthe networks learn the input representation based on weakly supervised data\nleads to impressive performance, with over 13% and 35% MAP improvements over\nthe BM25 model on the Robust and the ClueWeb collections, respectively. Our findings also\nsuggest that supervised neural ranking models can greatly benefit from\npre-training on large amounts of weakly labeled data that can be easily\nobtained from unsupervised IR models.", "field": [], "task": ["Ad-Hoc Information Retrieval", "Information Retrieval"], "method": [], "dataset": ["TREC Robust04"], "metric": ["MAP"], "title": "Neural Ranking Models with Weak Supervision"} {"abstract": "Compositional embedding models build a representation (or embedding) for a\nlinguistic structure based on its component word embeddings. We propose a\nFeature-rich Compositional Embedding Model (FCM) for relation extraction that\nis expressive, generalizes to new domains, and is easy to implement. The key\nidea is to combine (unlexicalized) hand-crafted features with learned word\nembeddings. The model is able to directly tackle the difficulties met by\ntraditional compositional embedding models, such as handling arbitrary types\nof sentence annotations and utilizing global information for composition. We\ntest the proposed model on two relation extraction tasks, and demonstrate that\nour model outperforms both previous compositional models and traditional\nfeature-rich models on the ACE 2005 relation extraction task, and the SemEval\n2010 relation classification task. The combination of our model and a\nlog-linear classifier with hand-crafted features gives state-of-the-art\nresults.", "field": [], "task": ["Relation Classification", "Relation Extraction", "Word Embeddings"], "method": [], "dataset": ["ACE 2005"], "metric": ["Relation classification F1"], "title": "Improved Relation Extraction with Feature-Rich Compositional Embedding Models"} {"abstract": "We propose a deep learning method for single image super-resolution (SR). Our\nmethod directly learns an end-to-end mapping between the low/high-resolution\nimages. The mapping is represented as a deep convolutional neural network (CNN)\nthat takes the low-resolution image as the input and outputs the\nhigh-resolution one. We further show that traditional sparse-coding-based SR\nmethods can also be viewed as a deep convolutional network. But unlike\ntraditional methods that handle each component separately, our method jointly\noptimizes all layers. Our deep CNN has a lightweight structure, yet\ndemonstrates state-of-the-art restoration quality, and achieves fast speed for\npractical on-line usage. We explore different network structures and parameter\nsettings to achieve trade-offs between performance and speed.
Moreover, we\nextend our network to cope with three color channels simultaneously, and show\nbetter overall reconstruction quality.", "field": [], "task": ["Image Super-Resolution", "Super-Resolution", "Video Super-Resolution"], "method": [], "dataset": ["FFHQ 256 x 256 - 4x upscaling", "Set14 - 4x upscaling", "Manga109 - 4x upscaling", "Vid4 - 4x upscaling", "BSD100 - 4x upscaling", "Xiph HD - 4x upscaling", "FFHQ 1024 x 1024 - 4x upscaling", "Set5 - 4x upscaling", "Urban100 - 4x upscaling", "Ultra Video Group HD - 4x upscaling"], "metric": ["Average PSNR", "PSNR", "FID", "MS-SSIM", "SSIM", "MOVIE"], "title": "Image Super-Resolution Using Deep Convolutional Networks"} {"abstract": "This paper proposes a simple but effective graph-based agglomerative\nalgorithm for clustering high-dimensional data. We explore the different roles\nof two fundamental concepts in graph theory, indegree and outdegree, in the\ncontext of clustering. The average indegree reflects the density near a sample,\nand the average outdegree characterizes the local geometry around a sample.\nBased on such insights, we define the affinity measure of clusters via the\nproduct of average indegree and average outdegree. The product-based affinity\nmakes our algorithm robust to noise. The algorithm has three main advantages:\ngood performance, easy implementation, and high computational efficiency. We\ntest the algorithm on two fundamental computer vision problems: image\nclustering and object matching. Extensive experiments demonstrate that it\noutperforms the state-of-the-art methods in both applications.", "field": [], "task": ["Image Clustering"], "method": [], "dataset": ["coil-100", "MNIST-test", "Extended Yale-B", "USPS", "Coil-20", "Fashion-MNIST", "MNIST-full"], "metric": ["NMI", "Accuracy"], "title": "Graph Degree Linkage: Agglomerative Clustering on a Directed Graph"} {"abstract": "We introduce the concept of the dynamic image, a novel compact representation of videos useful for video analysis, especially when convolutional neural networks (CNNs) are used. The dynamic image is based on the rank pooling concept and is obtained through the parameters of a ranking machine that encodes the temporal evolution of the frames of the video. Dynamic images are obtained by directly applying rank pooling on the raw image pixels of a video, producing a single RGB image per video. This idea is simple but powerful as it enables the use of existing CNN models directly on video data with fine-tuning. We present an efficient and effective approximate rank pooling operator, speeding it up by orders of magnitude compared to rank pooling. Our new approximate rank pooling CNN layer allows us to generalize dynamic images to dynamic feature maps, and we demonstrate the power of our new representations on standard action recognition benchmarks, achieving state-of-the-art performance.", "field": [], "task": ["Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["UCF101", "HMDB-51"], "metric": ["Average accuracy of 3 splits", "3-fold Accuracy"], "title": "Dynamic Image Networks for Action Recognition"} {"abstract": "In this paper, we address the problem of forecasting agent trajectories in unknown environments, conditioned on their past motion and scene structure. Trajectory forecasting is a challenging problem due to the large variation in scene structure, and the multi-modal nature of the distribution of future trajectories.
Unlike prior approaches that directly learn one-to-many mappings from the observed context to multiple future trajectories, we propose to condition trajectory forecasts on \\textit{plans} sampled from a grid-based policy learned using maximum entropy inverse reinforcement learning (MaxEnt IRL). We reformulate MaxEnt IRL to allow the policy to jointly infer plausible agent goals and paths to those goals on a coarse 2-D grid defined over an unknown scene. We propose an attention-based trajectory generator that generates continuous-valued future trajectories conditioned on state sequences sampled from the MaxEnt policy. Quantitative and qualitative evaluation on the publicly available Stanford drone dataset (SDD) shows that our model generates trajectories that are (1) diverse, representing the multi-modal predictive distribution, and (2) precise, conforming to the underlying scene structure over long prediction horizons, achieving state-of-the-art results on the TrajNet benchmark split of SDD.", "field": [], "task": ["Trajectory Forecasting", "Trajectory Prediction"], "method": [], "dataset": ["Stanford Drone"], "metric": ["ADE-8/12 @K = 20", "FDE-8/12 @K= 20"], "title": "Trajectory Forecasts in Unknown Environments Conditioned on Grid-Based Plans"} {"abstract": "We present a new loss function, namely Wing loss, for robust facial landmark\nlocalisation with Convolutional Neural Networks (CNNs). We first compare and\nanalyse different loss functions including L2, L1 and smooth L1. The analysis\nof these loss functions suggests that, for the training of a CNN-based\nlocalisation model, more attention should be paid to small and medium range\nerrors. To this end, we design a piece-wise loss function. The new loss\namplifies the impact of errors from the interval (-w, w) by switching from L1\nloss to a modified logarithm function.\n To address the problem of under-representation of samples with large\nout-of-plane head rotations in the training set, we propose a simple but\neffective boosting strategy, referred to as pose-based data balancing. In\nparticular, we deal with the data imbalance problem by duplicating the minority\ntraining samples and perturbing them by injecting random image rotation,\nbounding box translation and other data augmentation approaches. Last, the\nproposed approach is extended to create a two-stage framework for robust facial\nlandmark localisation. The experimental results obtained on AFLW and 300W\ndemonstrate the merits of the Wing loss function, and prove the superiority of\nthe proposed method over the state-of-the-art approaches.", "field": [], "task": ["Data Augmentation", "Face Alignment"], "method": [], "dataset": ["WFLW"], "metric": ["ME (%, all) ", "FR@0.1(%, all)", "AUC@0.1 (all)"], "title": "Wing Loss for Robust Facial Landmark Localisation with Convolutional Neural Networks"} {"abstract": "Unsupervised image classification is a challenging computer vision task. Deep learning-based algorithms have achieved superb results, where the latest approach adopts unified losses from embedding and class assignment processes. Since these processes inherently have different goals, jointly optimizing them may lead to a suboptimal solution. To address this limitation, we propose a novel two-stage algorithm in which an embedding module for pretraining precedes a refining module that concurrently performs embedding and class assignment.
Our model outperforms the state of the art across multiple datasets on unsupervised tasks, reaching 81.0% accuracy on CIFAR-10 (an increase of 19.3 percentage points), 35.3% accuracy on CIFAR-100-20 (+9.6 pp) and 66.5% accuracy on STL-10 (+6.9 pp).", "field": [], "task": ["Image Classification", "Unsupervised Image Classification"], "method": [], "dataset": ["STL-10", "CIFAR-20", "CIFAR-100", "CIFAR-10"], "metric": ["Train set", "ARI", "Backbone", "NMI", "Accuracy"], "title": "Mitigating Embedding and Class Assignment Mismatch in Unsupervised Image Classification"} {"abstract": "Numerical reasoning over texts, such as addition, subtraction, sorting and counting, is a challenging machine reading comprehension task, since it requires both natural language understanding and arithmetic computation. To address this challenge, we propose a heterogeneous graph representation for the context of the passage and question needed for such reasoning, and design a question directed graph attention network to drive multi-step numerical reasoning over this context graph.", "field": [], "task": ["Machine Reading Comprehension", "Natural Language Understanding", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["DROP Test"], "metric": ["F1"], "title": "Question Directed Graph Attention Network for Numerical Reasoning over Text"} {"abstract": "Short-term tracking is an open and challenging problem for which\ndiscriminative correlation filters (DCF) have shown excellent performance. We\nintroduce the channel and spatial reliability concepts to DCF tracking and\nprovide a novel learning algorithm for their efficient and seamless integration\nin the filter update and the tracking process. The spatial reliability map\nadjusts the filter support to the part of the object suitable for tracking.\nThis both allows enlarging the search region and improves tracking of\nnon-rectangular objects. Reliability scores reflect the channel-wise quality of the\nlearned filters and are used as feature weighting coefficients in localization.\nExperimentally, with only two simple standard features, HoGs and Colornames,\nthe novel CSR-DCF method -- DCF with Channel and Spatial Reliability --\nachieves state-of-the-art results on VOT 2016, VOT 2015 and OTB100. The CSR-DCF\nruns in real-time on a CPU.", "field": [], "task": ["Visual Object Tracking"], "method": [], "dataset": ["VOT2017/18"], "metric": ["Expected Average Overlap (EAO)"], "title": "Discriminative Correlation Filter with Channel and Spatial Reliability"} {"abstract": "Protein secondary structure (SS) prediction is important for studying protein\nstructure and function. When only the sequence (profile) information is used as\nthe input feature, currently the best predictors can obtain ~80% Q3 accuracy, which\nhas not been improved in the past decade. Here we present DeepCNF (Deep\nConvolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep\nLearning extension of Conditional Neural Fields (CNF), which is an integration\nof Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can\nmodel not only the complex sequence-structure relationship by a deep hierarchical\narchitecture, but also the interdependency between adjacent SS labels, so it is\nmuch more powerful than CNF. Experimental results show that DeepCNF can obtain\n~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the\nCASP and CAMEO test proteins, greatly outperforming currently popular\npredictors.
As a general framework, DeepCNF can be used to predict other\nprotein structure properties such as contact number, disorder regions, and\nsolvent accessibility.", "field": [], "task": ["Protein Secondary Structure Prediction"], "method": [], "dataset": ["CB513", "CullPDB"], "metric": ["Q8"], "title": "Protein secondary structure prediction using deep convolutional neural fields"} {"abstract": "We propose a conditional non-autoregressive neural sequence model based on\niterative refinement. The proposed model is designed based on the principles of\nlatent variable models and denoising autoencoders, and is generally applicable\nto any sequence generation task. We extensively evaluate the proposed model on\nmachine translation (En-De and En-Ro) and image caption generation, and observe\nthat it significantly speeds up decoding while maintaining the generation\nquality comparable to the autoregressive counterpart.", "field": [], "task": ["Denoising", "Latent Variable Models", "Machine Translation"], "method": [], "dataset": ["WMT2014 German-English", "WMT2016 English-Romanian", "IWSLT2015 German-English", "IWSLT2015 English-German", "WMT2014 English-German", "WMT2016 Romanian-English"], "metric": ["BLEU score"], "title": "Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement"} {"abstract": "Entity Linking (EL) is an essential task for semantic text understanding and\ninformation extraction. Popular methods separately address the Mention\nDetection (MD) and Entity Disambiguation (ED) stages of EL, without leveraging\ntheir mutual dependency. We here propose the first neural end-to-end EL system\nthat jointly discovers and links entities in a text document. The main idea is\nto consider all possible spans as potential mentions and learn contextual\nsimilarity scores over their entity candidates that are useful for both MD and\nED decisions. Key components are context-aware mention embeddings, entity\nembeddings and a probabilistic mention - entity map, without demanding other\nengineered features. Empirically, we show that our end-to-end method\nsignificantly outperforms popular systems on the Gerbil platform when enough\ntraining data is available. Conversely, if testing datasets follow different\nannotation conventions compared to the training set (e.g. queries/ tweets vs\nnews documents), our ED model coupled with a traditional NER system offers the\nbest or second best EL accuracy.", "field": [], "task": ["Entity Disambiguation", "Entity Embeddings", "Entity Linking"], "method": [], "dataset": ["Derczynski", "OKE-2015", "MSNBC", "N3-Reuters-128", "OKE-2016", "AIDA-CoNLL"], "metric": ["Micro-F1", "Micro-F1 strong", "Macro-F1 strong"], "title": "End-to-End Neural Entity Linking"} {"abstract": "Conversational machine comprehension requires the understanding of the\nconversation history, such as previous question/answer pairs, the document\ncontext, and the current question. To enable traditional, single-turn models to\nencode the history comprehensively, we introduce Flow, a mechanism that can\nincorporate intermediate representations generated during the process of\nanswering previous questions, through an alternating parallel processing\nstructure. Compared to approaches that concatenate previous questions/answers\nas input, Flow integrates the latent semantics of the conversation history more\ndeeply. Our model, FlowQA, shows superior performance on two recently proposed\nconversational challenges (+7.2% F1 on CoQA and +4.0% on QuAC). 
The\neffectiveness of Flow also shows in other tasks. By reducing sequential\ninstruction understanding to conversational machine comprehension, FlowQA\noutperforms the best models on all three domains in SCONE, with +1.8% to +4.4%\nimprovement in accuracy.", "field": [], "task": ["Question Answering", "Reading Comprehension"], "method": [], "dataset": ["CoQA", "QuAC"], "metric": ["Out-of-domain", "HEQD", "HEQQ", "Overall", "F1", "In-domain"], "title": "FlowQA: Grasping Flow in History for Conversational Machine Comprehension"} {"abstract": "How to incorporate cross-modal complementarity sufficiently is the cornerstone question for RGB-D salient object detection. Previous works mainly address this issue by simply concatenating multi-modal features or combining unimodal predictions. In this paper, we answer this question from two perspectives: (1) We argue that if the complementary part can be modelled more explicitly, the cross-modal complement is likely to be better captured. To this end, we design a novel complementarity-aware fusion (CA-Fuse) module when adopting the Convolutional Neural Network (CNN). By introducing cross-modal residual functions and complementarity-aware supervisions in each CA-Fuse module, the problem of learning complementary information from the paired modality is explicitly posed as asymptotically approximating the residual function. (2) Exploring the complement across all the levels. By cascading the CA-Fuse module and adding level-wise supervision from deep to shallow densely, the cross-level complement can be selected and combined progressively. The proposed RGB-D fusion network disambiguates both cross-modal and cross-level fusion processes and enables more sufficient fusion results. The experiments on public datasets show the effectiveness of the proposed CA-Fuse module and the RGB-D salient object detection network.", "field": [], "task": ["Object Detection", "RGB-D Salient Object Detection", "RGB Salient Object Detection", "Salient Object Detection"], "method": [], "dataset": ["NJU2K"], "metric": ["max E-Measure", "Average MAE", "S-Measure", "max F-Measure"], "title": "Progressively Complementarity-Aware Fusion Network for RGB-D Salient Object Detection"} {"abstract": "Computer vision tasks such as image classification, image retrieval and few-shot learning are currently dominated by Euclidean and spherical embeddings, so that the final decisions about class belongings or the degree of similarity are made using linear hyperplanes, Euclidean distances, or spherical geodesic distances (cosine similarity). In this work, we demonstrate that in many practical scenarios hyperbolic embeddings provide a better alternative.", "field": [], "task": ["Few-Shot Learning", "Image Classification", "Image Retrieval"], "method": [], "dataset": ["OMNIGLOT - 1-Shot, 5-way", "OMNIGLOT - 5-Shot, 20-way", "Mini-Imagenet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "OMNIGLOT - 5-Shot, 5-way", "OMNIGLOT - 1-Shot, 20-way", "CUB 200 5-way 1-shot", "CUB 200 5-way 5-shot"], "metric": ["Accuracy"], "title": "Hyperbolic Image Embeddings"} {"abstract": "We describe a method to infer dense depth from camera motion and sparse depth as estimated using a visual-inertial odometry system. Unlike other scenarios using point clouds from lidar or structured light sensors, we have few hundreds to few thousand points, insufficient to inform the topology of the scene. 
Our method first constructs a piecewise planar scaffolding of the scene, and then uses it to infer dense depth using the image along with the sparse points. We use a predictive cross-modal criterion, akin to `self-supervision,' measuring photometric consistency across time, forward-backward pose consistency, and geometric compatibility with the sparse point cloud. We also launch the first visual-inertial + depth dataset, which we hope will foster additional exploration into combining the complementary strengths of visual and inertial sensors. To compare our method to prior work, we adopt the unsupervised KITTI depth completion benchmark, and show state-of-the-art performance on it.", "field": [], "task": ["Depth Completion"], "method": [], "dataset": ["KITTI Depth Completion"], "metric": ["iMAE", "RMSE", "Runtime [ms]", "MAE", "iRMSE"], "title": "Unsupervised Depth Completion from Visual Inertial Odometry"} {"abstract": "Deep learning algorithms can fare poorly when the training dataset suffers from heavy class-imbalance but the testing criterion requires good generalization on less frequent classes. We design two novel methods to improve performance in such scenarios. First, we propose a theoretically-principled label-distribution-aware margin (LDAM) loss motivated by minimizing a margin-based generalization bound. This loss replaces the standard cross-entropy objective during training and can be applied with prior strategies for training with class-imbalance such as re-weighting or re-sampling. Second, we propose a simple, yet effective, training schedule that defers re-weighting until after the initial stage, allowing the model to learn an initial representation while avoiding some of the complications associated with re-weighting or re-sampling. We test our methods on several benchmark vision tasks including the real-world imbalanced dataset iNaturalist 2018. Our experiments show that either of these methods alone can already improve over existing techniques and their combination achieves even better performance gains.", "field": [], "task": ["Long-tail Learning", "Long-tail learning with class descriptors"], "method": [], "dataset": ["SUN-LT", "CIFAR-10-LT (\u03c1=100)", "CIFAR-100-LT (\u03c1=10)", "CIFAR-10-LT (\u03c1=10)", "AWA-LT", "CIFAR-100-LT (\u03c1=100)", "CUB-LT"], "metric": ["Per-Class Accuracy", "Error Rate", "Long-Tailed Accuracy"], "title": "Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss"} {"abstract": "Unsupervised video object segmentation has often been tackled by methods based on recurrent neural networks and optical flow. Despite their complexity, these kinds of approaches tend to favour short-term temporal dependencies and are thus prone to accumulating inaccuracies, which cause drift over time. Moreover, simple (static) image segmentation models, alone, can perform competitively against these methods, which further suggests that the way temporal dependencies are modelled should be reconsidered. Motivated by these observations, in this paper we explore simple yet effective strategies to model long-term temporal dependencies. Inspired by the non-local operators of [70], we introduce a technique to establish dense correspondences between pixel embeddings of a reference \"anchor\" frame and the current one. This allows the learning of pairwise dependencies at arbitrarily long distances without conditioning on intermediate frames. 
Without online supervision, our approach can suppress the background and precisely segment the foreground object even in challenging scenarios, while maintaining consistent performance over time. With a mean IoU of $81.7\\%$, our method ranks first on the DAVIS-2016 leaderboard of unsupervised methods, while still being competitive against state-of-the-art online semi-supervised approaches. We further evaluate our method on the FBMS dataset and the ViSal video saliency dataset, showing results competitive with the state of the art.", "field": [], "task": ["Optical Flow Estimation", "Semantic Segmentation", "Unsupervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "Anchor Diffusion for Unsupervised Video Object Segmentation"} {"abstract": "Identifying what is at the center of the meaning of a word and what discriminates it from other words is a fundamental natural language inference task. This paper describes an explicit word vector representation model (WVM) to support the identification of discriminative attributes. A core contribution of the paper is a quantitative and qualitative comparative analysis of different types of data sources and Knowledge Bases in the construction of explainable and explicit WVMs: (i) knowledge graphs built from dictionary definitions, (ii) entity-attribute-relationships graphs derived from images and (iii) commonsense knowledge graphs. Using a detailed quantitative and qualitative analysis, we demonstrate that these data sources have complementary semantic aspects, supporting the creation of explicit semantic vector spaces. The explicit vector spaces are evaluated using the task of discriminative attribute identification, showing comparable performance to the state-of-the-art systems in the task (F1-score = 0.69), while delivering full model transparency and explainability.", "field": [], "task": ["Knowledge Graphs", "Natural Language Inference", "Relation Extraction"], "method": [], "dataset": ["SemEval 2018 Task 10"], "metric": ["F1-Score"], "title": "Identifying and Explaining Discriminative Attributes"} {"abstract": "Multi-hop question answering requires models to gather information from different parts of a text to answer a question. Most current approaches learn to address this task in an end-to-end way with neural networks, without maintaining an explicit representation of the reasoning process. We propose a method to extract a discrete reasoning chain over the text, which consists of a series of sentences leading to the answer. We then feed the extracted chains to a BERT-based QA model to do final answer prediction. Critically, we do not rely on gold annotated chains or \"supporting facts:\" at training time, we derive pseudogold reasoning chains using heuristics based on named entity recognition and coreference resolution. Nor do we rely on these annotations at test time, as our model learns to extract chains from raw text alone. We test our approach on two recently proposed large multi-hop question answering datasets: WikiHop and HotpotQA, and achieve state-of-art performance on WikiHop and strong performance on HotpotQA. 
Our analysis shows the properties of chains that are crucial for high performance: in particular, modeling extraction sequentially is important, as is dealing with each candidate sentence in a context-aware way. Furthermore, human evaluation shows that our extracted chains allow humans to give answers with high confidence, indicating that these are a strong intermediate abstraction for this task.", "field": [], "task": ["Coreference Resolution", "Multi-hop Question Answering", "Named Entity Recognition", "Question Answering"], "method": [], "dataset": ["WikiHop"], "metric": ["Test"], "title": "Multi-hop Question Answering via Reasoning Chains"} {"abstract": "In this paper, we propose a novel joint instance and semantic segmentation approach, which is called JSNet, in order to address the instance and semantic segmentation of 3D point clouds simultaneously. Firstly, we build an effective backbone network to extract robust features from the raw point clouds. Secondly, to obtain more discriminative features, a point cloud feature fusion module is proposed to fuse the different layer features of the backbone network. Furthermore, a joint instance semantic segmentation module is developed to transform semantic features into instance embedding space, and then the transformed features are further fused with instance features to facilitate instance segmentation. Meanwhile, this module also aggregates instance features into semantic feature space to promote semantic segmentation. Finally, the instance predictions are generated by applying a simple mean-shift clustering on instance embeddings. As a result, we evaluate the proposed JSNet on a large-scale 3D indoor point cloud dataset S3DIS and a part dataset ShapeNet, and compare it with existing approaches. Experimental results demonstrate our approach outperforms the state-of-the-art method in 3D instance segmentation with a significant improvement in 3D semantic prediction and our method is also beneficial for part segmentation. The source code for this work is available at https://github.com/dlinzhao/JSNet.", "field": [], "task": ["3D Instance Segmentation", "Instance Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["S3DIS", "ShapeNet"], "metric": ["oAcc", "mWCov", "mRec", "Mean IoU", "mAcc", "mCov", "mPrec"], "title": "JSNet: Joint Instance and Semantic Segmentation of 3D Point Clouds"} {"abstract": "We propose a webly-supervised representation learning method that does not suffer from the annotation unscalability of supervised learning, nor the computation unscalability of self-supervised learning. Most existing works on webly-supervised representation learning adopt a vanilla supervised learning method without accounting for the prevalent noise in the training data, whereas most prior methods in learning with label noise are less effective for real-world large-scale noisy data. We propose momentum prototypes (MoPro), a simple contrastive learning method that achieves online label noise correction, out-of-distribution sample removal, and representation learning. MoPro achieves state-of-the-art performance on WebVision, a weakly-labeled noisy dataset. MoPro also shows superior performance when the pretrained model is transferred to down-stream image classification and detection tasks. It outperforms the ImageNet supervised pretrained model by +10.5 on 1-shot classification on VOC, and outperforms the best self-supervised pretrained model by +17.3 when finetuned on 1\\% of ImageNet labeled samples. 
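JSNet above recovers instances by "a simple mean-shift clustering on instance embeddings"; a minimal sketch of that final grouping step with scikit-learn, where the bandwidth and the random embeddings are placeholders:

import numpy as np
from sklearn.cluster import MeanShift

# per-point instance embeddings predicted by the network (placeholder values)
instance_emb = np.random.rand(2048, 5).astype(np.float32)
instance_labels = MeanShift(bandwidth=0.6).fit_predict(instance_emb)
print(len(set(instance_labels)), "instances found")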
Furthermore, MoPro is more robust to distribution shifts. Code and pretrained models are available at https://github.com/salesforce/MoPro.", "field": [], "task": ["Image Classification", "Representation Learning", "Self-Supervised Learning"], "method": [], "dataset": ["WebVision-1000"], "metric": ["Top-1 Accuracy"], "title": "MoPro: Webly Supervised Learning with Momentum Prototypes"} {"abstract": "We propose a general framework for denoising high-dimensional measurements which requires no prior on the signal, no estimate of the noise, and no clean training data. The only assumption is that the noise exhibits statistical independence across different dimensions of the measurement, while the true signal exhibits some correlation. For a broad class of functions (\"$\\mathcal{J}$-invariant\"), it is then possible to estimate the performance of a denoiser from noisy data alone. This allows us to calibrate $\\mathcal{J}$-invariant versions of any parameterised denoising algorithm, from the single hyperparameter of a median filter to the millions of weights of a deep neural network. We demonstrate this on natural image and microscopy data, where we exploit noise independence between pixels, and on single-cell gene expression data, where we exploit independence between detections of individual molecules. This framework generalizes recent work on training neural nets from noisy images and on cross-validation for matrix factorization.", "field": [], "task": ["Denoising"], "method": [], "dataset": ["Hanzi", "CellNet", "ImageNet"], "metric": ["PSNR"], "title": "Noise2Self: Blind Denoising by Self-Supervision"} {"abstract": "Imperfect labels are ubiquitous in real-world datasets. Several recent successful methods for training deep neural networks (DNNs) robust to label noise have used two primary techniques: filtering samples based on loss during a warm-up phase to curate an initial set of cleanly labeled samples, and using the output of a network as a pseudo-label for subsequent loss calculations. In this paper, we evaluate different augmentation strategies for algorithms tackling the \"learning with noisy labels\" problem. We propose and examine multiple augmentation strategies and evaluate them using synthetic datasets based on CIFAR-10 and CIFAR-100, as well as on the real-world dataset Clothing1M. Due to several commonalities in these algorithms, we find that using one set of augmentations for loss modeling tasks and another set for learning is the most effective, improving results on the state-of-the-art and other previous methods. Furthermore, we find that applying augmentation during the warm-up period can negatively impact the loss convergence behavior of correctly versus incorrectly labeled samples. We introduce this augmentation strategy to the state-of-the-art technique and demonstrate that we can improve performance across all evaluated noise levels. In particular, we improve accuracy on the CIFAR-10 benchmark at 90% symmetric noise by more than 15% in absolute accuracy and we also improve performance on the real-world dataset Clothing1M. (* equal contribution)", "field": [], "task": ["Image Classification", "Learning with noisy labels"], "method": [], "dataset": ["Clothing1M"], "metric": ["Accuracy"], "title": "Augmentation Strategies for Learning with Noisy Labels"} {"abstract": "Joint understanding of video and language is an active research area with many applications. Prior work in this domain typically relies on learning text-video embeddings. 
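A toy illustration of the self-supervised calibration idea in "Noise2Self: Blind Denoising by Self-Supervision" above: hide a subset of pixels from the denoiser and score the prediction only at those hidden locations, so no clean targets are needed. The crude mean in-painting and the function names are assumptions, not the paper's J-invariant construction.

import numpy as np

def masked_self_supervised_loss(denoiser, noisy, mask):
    # denoiser: callable image -> image; noisy: 2-D array; mask: boolean array.
    # Masked pixels are replaced before denoising, so the prediction there
    # cannot simply copy the input noise.
    filled = noisy.copy()
    filled[mask] = noisy[~mask].mean()
    pred = denoiser(filled)
    return float(np.mean((pred[mask] - noisy[mask]) ** 2))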
One difficulty with this approach, however, is the lack of large-scale annotated video-caption datasets for training. To address this issue, we aim at learning text-video embeddings from heterogeneous data sources. To this end, we propose a Mixture-of-Embedding-Experts (MEE) model with ability to handle missing input modalities during training. As a result, our framework can learn improved text-video embeddings simultaneously from image and video datasets. We also show the generalization of MEE to other input modalities such as face descriptors. We evaluate our method on the task of video retrieval and report results for the MPII Movie Description and MSR-VTT datasets. The proposed MEE model demonstrates significant improvements and outperforms previously reported methods on both text-to-video and video-to-text retrieval tasks. Code is available at: https://github.com/antoine77340/Mixture-of-Embedding-Experts", "field": [], "task": ["Video Retrieval"], "method": [], "dataset": ["LSMDC"], "metric": ["text-to-video R@1", "text-to-video R@10", "text-to-video Median Rank", "text-to-video R@5"], "title": "Learning a Text-Video Embedding from Incomplete and Heterogeneous Data"} {"abstract": "In this paper, we propose an adversarial process for abstractive text\nsummarization, in which we simultaneously train a generative model G and a\ndiscriminative model D. In particular, we build the generator G as an agent of\nreinforcement learning, which takes the raw text as input and predicts the\nabstractive summarization. We also build a discriminator which attempts to\ndistinguish the generated summary from the ground truth summary. Extensive\nexperiments demonstrate that our model achieves competitive ROUGE scores with\nthe state-of-the-art methods on CNN/Daily Mail dataset. Qualitatively, we show\nthat our model is able to generate more abstractive, readable and diverse\nsummaries.", "field": [], "task": ["Abstractive Text Summarization", "Text Summarization"], "method": [], "dataset": ["CNN / Daily Mail (Anonymized)"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Generative Adversarial Network for Abstractive Text Summarization"} {"abstract": "In this work, we propose a simple yet effective semi-supervised learning approach called Augmented Distribution Alignment. We reveal that an essential sampling bias exists in semi-supervised learning due to the limited number of labeled samples, which often leads to a considerable empirical distribution mismatch between labeled data and unlabeled data. To this end, we propose to align the empirical distributions of labeled and unlabeled data to alleviate the bias. On one hand, we adopt an adversarial training strategy to minimize the distribution distance between labeled and unlabeled data as inspired by domain adaptation works. On the other hand, to deal with the small sample size issue of labeled data, we also propose a simple interpolation strategy to generate pseudo training samples. Those two strategies can be easily implemented into existing deep neural networks. We demonstrate the effectiveness of our proposed approach on the benchmark SVHN and CIFAR10 datasets. 
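The Mixture-of-Embedding-Experts model above scores a text-video pair by gating per-modality similarities and renormalising over the modalities that are actually available; a simplified numpy sketch of that scoring step (the dict layout and names are assumptions):

import numpy as np

def mee_similarity(text_embs, video_embs, gate_logits):
    # text_embs / video_embs: {modality: (D,) L2-normalised vectors};
    # gate_logits: {modality: float} predicted from the text. Missing video
    # modalities are dropped and the gates renormalised over the rest.
    mods = [m for m in text_embs if m in video_embs]
    gates = np.exp(np.array([gate_logits[m] for m in mods]))
    gates /= gates.sum()
    sims = np.array([float(text_embs[m] @ video_embs[m]) for m in mods])
    return float((gates * sims).sum())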
Our code is available at \\url{https://github.com/qinenergy/adanet}.", "field": [], "task": ["Domain Adaptation", "Semi-Supervised Image Classification"], "method": [], "dataset": ["CIFAR-10, 4000 Labels"], "metric": ["Accuracy"], "title": "Semi-Supervised Learning by Augmented Distribution Alignment"} {"abstract": "Perception in autonomous vehicles is often carried out through a suite of different sensing modalities. Given the massive amount of openly available labeled RGB data and the advent of high-quality deep learning algorithms for image-based recognition, high-level semantic perception tasks are pre-dominantly solved using high-resolution cameras. As a result of that, other sensor modalities potentially useful for this task are often ignored. In this paper, we push the state of the art in LiDAR-only semantic segmentation forward in order to provide another independent source of semantic information to the vehicle. Our approach can accurately perform full semantic segmentation of LiDAR point clouds at sensor frame rate. We exploit range images as an intermediate representation in combination with a Convolutional Neural Network (CNN) exploiting the rotating LiDAR sensor model. To obtain accurate results, we propose a novel post-processing algorithm that deals with problems arising from this intermediate representation such as discretization errors and blurry CNN outputs. We implemented and thoroughly evaluated our approach including several comparisons to the state of the art. Our experiments show that our approach outperforms state-of-the-art approaches, while still running online on a single embedded GPU. The code can be accessed at https://github.com/PRBonn/lidar-bonnetal", "field": [], "task": ["3D Semantic Segmentation", "Autonomous Vehicles", "LIDAR Semantic Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["SemanticKITTI"], "metric": ["mIoU"], "title": "RangeNet++: Fast and Accurate LiDAR Semantic Segmentation"} {"abstract": "During the recent years, correlation filters have shown dominant and\nspectacular results for visual object tracking. The types of the features that\nare employed in these family of trackers significantly affect the performance\nof visual tracking. The ultimate goal is to utilize robust features invariant\nto any kind of appearance change of the object, while predicting the object\nlocation as properly as in the case of no appearance change. As the deep\nlearning based methods have emerged, the study of learning features for\nspecific tasks has accelerated. For instance, discriminative visual tracking\nmethods based on deep architectures have been studied with promising\nperformance. Nevertheless, correlation filter based (CFB) trackers confine\nthemselves to use the pre-trained networks which are trained for object\nclassification problem. To this end, in this manuscript the problem of learning\ndeep fully convolutional features for the CFB visual tracking is formulated. In\norder to learn the proposed model, a novel and efficient backpropagation\nalgorithm is presented based on the loss function of the network. The proposed\nlearning framework enables the network model to be flexible for a custom\ndesign. Moreover, it alleviates the dependency on the network trained for\nclassification. Extensive performance analysis shows the efficacy of the\nproposed custom design in the CFB tracking framework. 
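"Semi-Supervised Learning by Augmented Distribution Alignment" above generates pseudo training samples by interpolating labeled and unlabeled data; a bare-bones sketch of that interpolation, with the Beta parameter and flattened-feature layout as assumptions:

import numpy as np

def interpolate_batches(x_labeled, x_unlabeled, alpha=0.75):
    # x_labeled, x_unlabeled: (B, D) arrays of the same shape. Each pseudo
    # sample is a convex combination with a Beta-distributed coefficient,
    # which also pulls the two empirical distributions closer together.
    lam = np.random.beta(alpha, alpha, size=(x_labeled.shape[0], 1))
    return lam * x_labeled + (1.0 - lam) * x_unlabeled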
By fine-tuning the\nconvolutional parts of a state-of-the-art network and integrating this model to\na CFB tracker, which is the top performing one of VOT2016, 18% increase is\nachieved in terms of expected average overlap, and tracking failures are\ndecreased by 25%, while maintaining the superiority over the state-of-the-art\nmethods in OTB-2013 and OTB-2015 tracking datasets.", "field": [], "task": ["Object Classification", "Object Tracking", "Visual Object Tracking", "Visual Tracking"], "method": [], "dataset": ["VOT2016"], "metric": ["Expected Average Overlap (EAO)"], "title": "Good Features to Correlate for Visual Tracking"} {"abstract": "Object location is fundamental to panoptic segmentation as it is related to all things and stuff in the image scene. Knowing the locations of objects in the image provides clues for segmenting and helps the network better understand the scene. How to integrate object location in both thing and stuff segmentation is a crucial problem. In this paper, we propose spatial information flows to achieve this objective. The flows can bridge all sub-tasks in panoptic segmentation by delivering the object's spatial context from the box regression task to others. More importantly, we design four parallel sub-networks to get a preferable adaptation of object spatial information in sub-tasks. Upon the sub-networks and the flows, we present a location-aware and unified framework for panoptic segmentation, denoted as SpatialFlow. We perform a detailed ablation study on each component and conduct extensive experiments to prove the effectiveness of SpatialFlow. Furthermore, we achieve state-of-the-art results, which are $47.9$ PQ and $62.5$ PQ respectively on MS-COCO and Cityscapes panoptic benchmarks. Code will be available at https://github.com/chensnathan/SpatialFlow.", "field": [], "task": ["Instance Segmentation", "Object Detection", "Panoptic Segmentation", "Regression", "Semantic Segmentation"], "method": [], "dataset": ["COCO test-dev"], "metric": ["PQst", "PQ", "PQth"], "title": "SpatialFlow: Bridging All Tasks for Panoptic Segmentation"} {"abstract": "Person re-identification (re-ID), is a challenging task due to the high variance within identity samples and imaging conditions. Although recent advances in deep learning have achieved remarkable accuracy in settled scenes, i.e., source domain, few works can generalize well on the unseen target domain. One popular solution is assigning unlabeled target images with pseudo labels by clustering, and then retraining the model. However, clustering methods tend to introduce noisy labels and discard low confidence samples as outliers, which may hinder the retraining process and thus limit the generalization ability. In this study, we argue that by explicitly adding a sample filtering procedure after the clustering, the mined examples can be much more efficiently used. To this end, we design an asymmetric co-teaching framework, which resists noisy labels by cooperating two models to select data with possibly clean labels for each other. Meanwhile, one of the models receives samples as pure as possible, while the other takes in samples as diverse as possible. This procedure encourages that the selected training samples can be both clean and miscellaneous, and that the two models can promote each other iteratively. Extensive experiments show that the proposed framework can consistently benefit most clustering-based methods, and boost the state-of-the-art adaptation accuracy. 
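Correlation-filter-based trackers such as the one extended in "Good Features to Correlate for Visual Tracking" above localise the target at the peak of a circular correlation response, usually computed in the Fourier domain; a minimal single-channel sketch (names are assumptions):

import numpy as np

def correlation_response(feature_map, filter_weights):
    # Circular cross-correlation via FFT; the target is located at the
    # argmax of the returned response map.
    F = np.fft.fft2(feature_map)
    H = np.fft.fft2(filter_weights, s=feature_map.shape)
    return np.real(np.fft.ifft2(F * np.conj(H)))

# usage: y, x = np.unravel_index(np.argmax(resp), resp.shape) on the returned map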
Our code is available at https://github.com/FlyingRoastDuck/ACT_AAAI20.", "field": [], "task": ["Person Re-Identification", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["Duke to Market", "Market to Duke"], "metric": ["rank-10", "mAP", "rank-5", "rank-1"], "title": "Asymmetric Co-Teaching for Unsupervised Cross Domain Person Re-Identification"} {"abstract": "Emotion-cause pair extraction aims to extract all potential pairs of emotions and corresponding causes from unannotated emotion text. Most existing methods are pipelined framework, which identifies emotions and extracts causes separately, leading to a drawback of error propagation. Towards this issue, we propose a transition-based model to transform the task into a procedure of parsing-like directed graph construction. The proposed model incrementally generates the directed graph with labeled edges based on a sequence of actions, from which we can recognize emotions with the corresponding causes simultaneously, thereby optimizing separate subtasks jointly and maximizing mutual benefits of tasks interdependently. Experimental results show that our approach achieves the best performance, outperforming the state-of-the-art methods by 6.71{\\%} (p{\\textless}0.01) in F1 measure.", "field": [], "task": ["Emotion-Cause Pair Extraction", "graph construction"], "method": [], "dataset": ["ECPE-FanSplit"], "metric": ["F1"], "title": "Transition-based Directed Graph Construction for Emotion-Cause Pair Extraction"} {"abstract": "Neural models have achieved remarkable success on relation extraction (RE) benchmarks. However, there is no clear understanding which type of information affects existing RE models to make decisions and how to further improve the performance of these models. To this end, we empirically study the effect of two main information sources in text: textual context and entity mentions (names). We find that (i) while context is the main source to support the predictions, RE models also heavily rely on the information from entity mentions, most of which is type information, and (ii) existing datasets may leak shallow heuristics via entity mentions and thus contribute to the high performance on RE benchmarks. Based on the analyses, we propose an entity-masked contrastive pre-training framework for RE to gain a deeper understanding on both textual context and type information while avoiding rote memorization of entities or use of superficial cues in mentions. We carry out extensive experiments to support our views, and show that our framework can improve the effectiveness and robustness of neural models in different RE scenarios. All the code and datasets are released at https://github.com/thunlp/RE-Context-or-Names.", "field": [], "task": ["Relation Extraction"], "method": [], "dataset": ["TACRED"], "metric": ["F1"], "title": "Learning from Context or Names? An Empirical Study on Neural Relation Extraction"} {"abstract": "This paper proposes a new Generative Partition Network (GPN) to address the\nchallenging multi-person pose estimation problem. Different from existing\nmodels that are either completely top-down or bottom-up, the proposed GPN\nintroduces a novel strategy--it generates partitions for multiple persons from\ntheir global joint candidates and infers instance-specific joint configurations\nsimultaneously. The GPN is favorably featured by low complexity and high\naccuracy of joint detection and re-organization. 
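Co-teaching-style frameworks such as the asymmetric co-teaching method above rely on a small-loss criterion: one model keeps the lowest-loss fraction of a noisy batch and hands it to its peer as probably-clean training data. A minimal selection helper, with the keep ratio as an assumed hyper-parameter:

import numpy as np

def select_small_loss(losses, keep_ratio=0.8):
    # losses: (B,) per-sample losses under one model. Returns indices of the
    # keep_ratio fraction with the smallest loss, to be passed to the peer.
    losses = np.asarray(losses)
    k = max(1, int(round(len(losses) * keep_ratio)))
    return np.argsort(losses)[:k]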
In particular, GPN designs a\ngenerative model that performs one feed-forward pass to efficiently generate\nrobust person detections with joint partitions, relying on dense regressions\nfrom global joint candidates in an embedding space parameterized by centroids\nof persons. In addition, GPN formulates the inference procedure for joint\nconfigurations of human poses as a graph partition problem, and conducts local\noptimization for each person detection with reliable global affinity cues,\nleading to complexity reduction and performance improvement. GPN is implemented\nwith the Hourglass architecture as the backbone network to simultaneously learn\njoint detector and dense regressor. Extensive experiments on benchmarks MPII\nHuman Pose Multi-Person, extended PASCAL-Person-Part, and WAF, show the\nefficiency of GPN with new state-of-the-art performance.", "field": [], "task": ["Human Detection", "Keypoint Detection", "Multi-Person Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["MPII Multi-Person", "WAF"], "metric": ["AP", "mAP@0.5"], "title": "Generative Partition Networks for Multi-Person Pose Estimation"} {"abstract": "Multi-label zero-shot learning strives to classify images into multiple unseen categories for which no data is available during training. The test samples can additionally contain seen categories in the generalized variant. Existing approaches rely on learning either shared or label-specific attention from the seen classes. Nevertheless, computing reliable attention maps for unseen classes during inference in a multi-label setting is still a challenge. In contrast, state-of-the-art single-label generative adversarial network (GAN) based approaches learn to directly synthesize the class-specific visual features from the corresponding class attribute embeddings. However, synthesizing multi-label features from GANs is still unexplored in the context of zero-shot setting. In this work, we introduce different fusion approaches at the attribute-level, feature-level and cross-level (across attribute and feature-levels) for synthesizing multi-label features from their corresponding multi-label class embedding. To the best of our knowledge, our work is the first to tackle the problem of multi-label feature synthesis in the (generalized) zero-shot setting. Comprehensive experiments are performed on three zero-shot image classification benchmarks: NUS-WIDE, Open Images and MS COCO. Our cross-level fusion-based generative approach outperforms the state-of-the-art on all three datasets. Furthermore, we show the generalization capabilities of our fusion approach in the zero-shot detection task on MS COCO, achieving favorable performance against existing methods. The source code is available at https://github.com/akshitac8/Generative_MLZSL.", "field": [], "task": ["Image Classification", "Multi-label zero-shot learning", "Zero-Shot Learning"], "method": [], "dataset": ["NUS-WIDE"], "metric": ["mAP"], "title": "Generative Multi-Label Zero-Shot Learning"} {"abstract": "Recent works on click-based interactive segmentation have demonstrated state-of-the-art results by using various inference-time optimization schemes. These methods are considerably more computationally expensive compared to feedforward approaches, as they require performing backward passes through a network during inference and are hard to deploy on mobile frameworks that usually support only forward passes. 
In this paper, we extensively evaluate various design choices for interactive segmentation and discover that new state-of-the-art results can be obtained without any additional optimization schemes. Thus, we propose a simple feedforward model for click-based interactive segmentation that employs the segmentation masks from previous steps. It allows not only to segment an entirely new object, but also to start with an external mask and correct it. When analyzing the performance of models trained on different datasets, we observe that the choice of a training dataset greatly impacts the quality of interactive segmentation. We find that the models trained on a combination of COCO and LVIS with diverse and high-quality annotations show performance superior to all existing models. The code and trained models are available at https://github.com/saic-vul/ritm_interactive_segmentation.", "field": [], "task": ["Interactive Segmentation"], "method": [], "dataset": ["Berkeley", "GrabCut", "DAVIS", "SBD"], "metric": ["NoC@90", "NoC@85"], "title": "Reviving Iterative Training with Mask Guidance for Interactive Segmentation"} {"abstract": "We present results related to the performance of an algorithm for community\ndetection which incorporates event-driven computation. We define a mapping\nwhich takes a graph G to a system of spiking neurons. Using a fully connected\nspiking neuron system, with both inhibitory and excitatory synaptic\nconnections, the firing patterns of neurons within the same community can be\ndistinguished from firing patterns of neurons in different communities. On a\nrandom graph with 128 vertices and known community structure we show that by\nusing binary decoding and a Hamming-distance based metric, individual\ncommunities can be identified from spike train similarities. Using bipolar\ndecoding and finite rate thresholding, we verify that inhibitory connections\nprevent the spread of spiking patterns.", "field": [], "task": ["Community Detection"], "method": [], "dataset": ["2010 i2b2/VA"], "metric": ["14 gestures accuracy"], "title": "Community detection with spiking neural networks for neuromorphic hardware"} {"abstract": "The problem of session-based recommendation aims to predict user actions\nbased on anonymous sessions. Previous methods model a session as a sequence and\nestimate user representations besides item representations to make\nrecommendations. Though achieved promising results, they are insufficient to\nobtain accurate user vectors in sessions and neglect complex transitions of\nitems. To obtain accurate item embedding and take complex transitions of items\ninto account, we propose a novel method, i.e. Session-based Recommendation with\nGraph Neural Networks, SR-GNN for brevity. In the proposed method, session\nsequences are modeled as graph-structured data. Based on the session graph, GNN\ncan capture complex transitions of items, which are difficult to be revealed by\nprevious conventional sequential methods. Each session is then represented as\nthe composition of the global preference and the current interest of that\nsession using an attention network. 
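Click-based interactive segmentation models like the one above typically feed user clicks (together with the previous-step mask) to the network as extra input channels; a rough sketch that rasterises positive and negative clicks as binary disk maps (the radius and names are assumptions):

import numpy as np

def click_maps(shape, pos_clicks, neg_clicks, radius=5):
    # shape: (H, W); pos_clicks / neg_clicks: lists of (y, x) coordinates.
    # Returns a (2, H, W) tensor to concatenate with the image and the
    # previous mask before the forward pass.
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]

    def disks(clicks):
        m = np.zeros((h, w), dtype=np.float32)
        for cy, cx in clicks:
            m[(ys - cy) ** 2 + (xs - cx) ** 2 <= radius ** 2] = 1.0
        return m

    return np.stack([disks(pos_clicks), disks(neg_clicks)])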
Extensive experiments conducted on two real\ndatasets show that SR-GNN evidently outperforms the state-of-the-art\nsession-based recommendation methods consistently.", "field": [], "task": ["Session-Based Recommendations"], "method": [], "dataset": ["yoochoose1", "Diginetica", "yoochoose1/4", "yoochoose1/64"], "metric": ["MRR@20", "Precision@20"], "title": "Session-based Recommendation with Graph Neural Networks"} {"abstract": "The imputeTS package specializes on univariate time series imputation. It offers multiple state-of-the-art imputation algorithm implementations along with plotting functions for time series missing data statistics. While imputation in general is a well-known problem and widely covered by R packages, finding packages able to fill missing values in univariate time series is more complicated. The reason for this lies in the fact that most imputation algorithms rely on inter-attribute correlations, while univariate time series imputation instead needs to employ time dependencies. This paper provides an introduction to the imputeTS package and its provided algorithms and tools. Furthermore, it gives a short overview about univariate time series imputation in R.", "field": [], "task": ["Imputation", "Multivariate Time Series Imputation", "Time Series"], "method": [], "dataset": ["Beijing Air Quality", "UCI localization data", "PhysioNet Challenge 2012"], "metric": ["MAE (PM2.5)", "MAE (10% of data as GT)", "MAE (10% missing)"], "title": "imputeTS: Time Series Missing Value Imputation in R"} {"abstract": "State-of-the-art subspace clustering methods are based on expressing each\ndata point as a linear combination of other data points while regularizing the\nmatrix of coefficients with $\\ell_1$, $\\ell_2$ or nuclear norms. $\\ell_1$\nregularization is guaranteed to give a subspace-preserving affinity (i.e.,\nthere are no connections between points from different subspaces) under broad\ntheoretical conditions, but the clusters may not be connected. $\\ell_2$ and\nnuclear norm regularization often improve connectivity, but give a\nsubspace-preserving affinity only for independent subspaces. Mixed $\\ell_1$,\n$\\ell_2$ and nuclear norm regularizations offer a balance between the\nsubspace-preserving and connectedness properties, but this comes at the cost of\nincreased computational complexity. This paper studies the geometry of the\nelastic net regularizer (a mixture of the $\\ell_1$ and $\\ell_2$ norms) and uses\nit to derive a provably correct and scalable active set method for finding the\noptimal coefficients. Our geometric analysis also provides a theoretical\njustification and a geometric interpretation for the balance between the\nconnectedness (due to $\\ell_2$ regularization) and subspace-preserving (due to\n$\\ell_1$ regularization) properties for elastic net subspace clustering. Our\nexperiments show that the proposed active set method not only achieves\nstate-of-the-art clustering performance, but also efficiently handles\nlarge-scale datasets.", "field": [], "task": ["Image Clustering"], "method": [], "dataset": ["coil-100", "MNIST-full"], "metric": ["NMI", "Accuracy"], "title": "Oracle Based Active Set Algorithm for Scalable Elastic Net Subspace Clustering"} {"abstract": "Since the past few decades, human trajectory forecasting has been a field of active research owing to its numerous real-world applications: evacuation situation analysis, deployment of intelligent transport systems, traffic operations, to name a few. 
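The elastic net subspace clustering formulation above expresses every point as an l1/l2-regularised combination of the other points; a small, non-scalable sketch of building the resulting affinity with scikit-learn's ElasticNet, standing in for the paper's active-set solver:

import numpy as np
from sklearn.linear_model import ElasticNet

def self_expressive_affinity(X, alpha=0.01, l1_ratio=0.9):
    # X: (N, D) data points. Each row is regressed on all the others; the
    # symmetrised coefficient magnitudes form the affinity used for
    # spectral clustering.
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        others = [j for j in range(n) if j != i]
        reg = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, fit_intercept=False)
        reg.fit(X[others].T, X[i])
        C[i, others] = reg.coef_
    return np.abs(C) + np.abs(C).T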
Early works handcrafted this representation based on domain knowledge. However, social interactions in crowded environments are not only diverse but often subtle. Recently, deep learning methods have outperformed their handcrafted counterparts, as they learned about human-human interactions in a more generic data-driven fashion. In this work, we present an in-depth analysis of existing deep learning-based methods for modelling social interactions. We propose two knowledge-based data-driven methods to effectively capture these social interactions. To objectively compare the performance of these interaction-based forecasting models, we develop a large scale interaction-centric benchmark TrajNet++, a significant yet missing component in the field of human trajectory forecasting. We propose novel performance metrics that evaluate the ability of a model to output socially acceptable trajectories. Experiments on TrajNet++ validate the need for our proposed metrics, and our method outperforms competitive baselines on both real-world and synthetic datasets.", "field": [], "task": ["Trajectory Forecasting", "Trajectory Prediction"], "method": [], "dataset": ["TrajNet++"], "metric": ["COL", "FDE"], "title": "Human Trajectory Forecasting in Crowds: A Deep Learning Perspective"} {"abstract": "This paper proposes a novel deep reinforcement learning (RL) architecture,\ncalled Value Prediction Network (VPN), which integrates model-free and\nmodel-based RL methods into a single neural network. In contrast to typical\nmodel-based RL methods, VPN learns a dynamics model whose abstract states are\ntrained to make option-conditional predictions of future values (discounted sum\nof rewards) rather than of future observations. Our experimental results show\nthat VPN has several advantages over both model-free and model-based baselines\nin a stochastic environment where careful planning is required but building an\naccurate observation-prediction model is difficult. Furthermore, VPN\noutperforms Deep Q-Network (DQN) on several Atari games even with\nshort-lookahead planning, demonstrating its potential as a new way of learning\na good state representation.", "field": [], "task": ["Atari Games", "Value prediction"], "method": [], "dataset": ["Atari 2600 Amidar", "Atari 2600 Enduro", "Atari 2600 Ms. Pacman", "Atari 2600 Seaquest", "Atari 2600 Alien", "Atari 2600 Crazy Climber", "Atari 2600 Frostbite", "Atari 2600 Krull", "Atari 2600 Q*Bert"], "metric": ["Score"], "title": "Value Prediction Network"} {"abstract": "State-of-the-art methods for zero-shot visual recognition formulate learning\nas a joint embedding problem of images and side information. In these\nformulations the current best complement to visual features are attributes:\nmanually encoded vectors describing shared characteristics among categories.\nDespite good performance, attributes have limitations: (1) finer-grained\nrecognition requires commensurately more attributes, and (2) attributes do not\nprovide a natural language interface. We propose to overcome these limitations\nby training neural language models from scratch; i.e. without pre-training and\nonly consuming words and characters. Our proposed models train end-to-end to\nalign with the fine-grained and category-specific content of images. Natural\nlanguage provides a flexible and compact way of encoding only the salient\nvisual aspects for distinguishing categories. 
By training on raw text, our\nmodel can do inference on raw text as well, providing humans a familiar mode\nboth for annotation and retrieval. Our model achieves strong performance on\nzero-shot text-based image retrieval and significantly outperforms the\nattribute-based state-of-the-art for zero-shot classification on the Caltech\nUCSD Birds 200-2011 dataset.", "field": [], "task": ["Image Retrieval", "Zero-Shot Learning"], "method": [], "dataset": ["Flowers-102 - 0-Shot", "CUB 200 50-way (0-shot)", "CUB-200-2011 - 0-Shot"], "metric": ["AP50", "Top-1 Accuracy", "Accuracy"], "title": "Learning Deep Representations of Fine-grained Visual Descriptions"} {"abstract": "Artificial neural networks typically have a fixed, non-linear activation\nfunction at each neuron. We have designed a novel form of piecewise linear\nactivation function that is learned independently for each neuron using\ngradient descent. With this adaptive activation function, we are able to\nimprove upon deep neural network architectures composed of static rectified\nlinear units, achieving state-of-the-art performance on CIFAR-10 (7.51%),\nCIFAR-100 (30.83%), and a benchmark from high-energy physics involving Higgs\nboson decay modes.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["CIFAR-100", "CIFAR-10"], "metric": ["Percentage correct"], "title": "Learning Activation Functions to Improve Deep Neural Networks"} {"abstract": "Scene text image contains two levels of contents: visual texture and semantic information. Although the previous scene text recognition methods have made great progress over the past few years, the research on mining semantic information to assist text recognition attracts less attention, only RNN-like structures are explored to implicitly model semantic information. However, we observe that RNN based methods have some obvious shortcomings, such as time-dependent decoding manner and one-way serial transmission of semantic context, which greatly limit the help of semantic information and the computation efficiency. To mitigate these limitations, we propose a novel end-to-end trainable framework named semantic reasoning network (SRN) for accurate scene text recognition, where a global semantic reasoning module (GSRM) is introduced to capture global semantic context through multi-way parallel transmission. The state-of-the-art results on 7 public benchmarks, including regular text, irregular text and non-Latin long text, verify the effectiveness and robustness of the proposed method. In addition, the speed of SRN has significant advantages over the RNN based methods, demonstrating its value in practical use.", "field": [], "task": ["Scene Text", "Scene Text Recognition"], "method": [], "dataset": ["ICDAR2013", "SVT"], "metric": ["Accuracy"], "title": "Towards Accurate Scene Text Recognition with Semantic Reasoning Networks"} {"abstract": "Multi-turn conversation understanding is a major challenge for building\nintelligent dialogue systems. This work focuses on retrieval-based response\nmatching for multi-turn conversation whose related work simply concatenates the\nconversation utterances, ignoring the interactions among previous utterances\nfor context modeling. In this paper, we formulate previous utterances into\ncontext using a proposed deep utterance aggregation model to form a\nfine-grained context representation. In detail, a self-matching attention is\nfirst introduced to route the vital information in each utterance. 
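"Learning Activation Functions to Improve Deep Neural Networks" above learns a piecewise-linear unit of the form h(x) = max(0, x) + sum_s a_s * max(0, -x + b_s); a direct numpy rendering of that formula, with placeholder parameter values:

import numpy as np

def apl_activation(x, a, b):
    # x: array of pre-activations; a, b: learned slopes and hinge locations
    # (one pair per piecewise-linear segment, shared here across the array).
    out = np.maximum(0.0, x)
    for a_s, b_s in zip(a, b):
        out = out + a_s * np.maximum(0.0, -x + b_s)
    return out

# example: apl_activation(np.linspace(-3, 3, 7), a=[0.2, -0.1], b=[0.5, -1.0])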
Then the\nmodel matches a response with each refined utterance and the final matching\nscore is obtained after attentive turns aggregation. Experimental results show\nour model outperforms the state-of-the-art methods on three multi-turn\nconversation benchmarks, including a newly introduced e-commerce dialogue\ncorpus.", "field": [], "task": ["Conversational Response Selection"], "method": [], "dataset": ["Ubuntu Dialogue (v1, Ranking)"], "metric": ["R10@1", "R10@5", "R10@2"], "title": "Modeling Multi-turn Conversation with Deep Utterance Aggregation"} {"abstract": "Object categories inherently form a hierarchy with different levels of\nconcept abstraction, especially for fine-grained categories. For example, birds\n(Aves) can be categorized according to a four-level hierarchy of order, family,\ngenus, and species. This hierarchy encodes rich correlations among various\ncategories across different levels, which can effectively regularize the\nsemantic space and thus make prediction less ambiguous. However, previous\nstudies of fine-grained image recognition primarily focus on categories of one\ncertain level and usually overlook this correlation information. In this work,\nwe investigate simultaneously predicting categories of different levels in the\nhierarchy and integrating this structured correlation information into the deep\nneural network by developing a novel Hierarchical Semantic Embedding (HSE)\nframework. Specifically, the HSE framework sequentially predicts the category\nscore vector of each level in the hierarchy, from highest to lowest. At each\nlevel, it incorporates the predicted score vector of the higher level as prior\nknowledge to learn finer-grained feature representation. During training, the\npredicted score vector of the higher level is also employed to regularize label\nprediction by using it as soft targets of corresponding sub-categories. To\nevaluate the proposed framework, we organize the 200 bird species of the\nCaltech-UCSD birds dataset with the four-level category hierarchy and construct\na large-scale butterfly dataset that also covers four level categories.\nExtensive experiments on these two and the newly-released VegFru datasets\ndemonstrate the superiority of our HSE framework over the baseline methods and\nexisting competitors.", "field": [], "task": ["Fine-Grained Image Classification", "Fine-Grained Image Recognition", "Representation Learning"], "method": [], "dataset": [" CUB-200-2011"], "metric": ["Accuracy"], "title": "Fine-Grained Representation Learning and Recognition by Exploiting Hierarchical Semantic Embedding"} {"abstract": "Understanding human motion behaviour is a critical task for several possible applications like self-driving cars or social robots, and in general for all those settings where an autonomous agent has to navigate inside a human-centric environment. This is non-trivial because human motion is inherently multi-modal: given a history of human motion paths, there are many plausible ways by which people could move in the future. Additionally, people activities are often driven by goals, e.g. reaching particular locations or interacting with the environment. We address the aforementioned aspects by proposing a new recurrent generative model that considers both single agents' future goals and interactions between different agents. 
The model exploits a double attention-based graph neural network to collect information about the mutual influences among different agents and to integrate it with data about agents' possible future objectives. Our proposal is general enough to be applied to different scenarios: the model achieves state-of-the-art results in both urban environments and also in sports applications.", "field": [], "task": ["Human motion prediction", "Multi-future Trajectory Prediction", "Time Series Analysis", "Time Series Prediction", "Trajectory Forecasting", "Trajectory Prediction"], "method": [], "dataset": ["Stanford Drone", "STATS SportVu NBA [DEF]", "STATS SportVu NBA [ATK]"], "metric": ["ADE (in world coordinates)", "FDE (in world coordinates)", "ADE", "FDE"], "title": "DAG-Net: Double Attentive Graph Neural Network for Trajectory Forecasting"} {"abstract": "In this paper, we focus on the spatio-temporal aspect of recognizing Activities of Daily Living (ADL). ADL have two specific properties (i) subtle spatio-temporal patterns and (ii) similar visual patterns varying with time. Therefore, ADL may look very similar and often necessitate to look at their fine-grained details to distinguish them. Because the recent spatio-temporal 3D ConvNets are too rigid to capture the subtle visual patterns across an action, we propose a novel Video-Pose Network: VPN. The 2 key components of this VPN are a spatial embedding and an attention network. The spatial embedding projects the 3D poses and RGB cues in a common semantic space. This enables the action recognition framework to learn better spatio-temporal features exploiting both modalities. In order to discriminate similar actions, the attention network provides two functionalities - (i) an end-to-end learnable pose backbone exploiting the topology of human body, and (ii) a coupler to provide joint spatio-temporal attention weights across a video. Experiments show that VPN outperforms the state-of-the-art results for action classification on a large scale human activity dataset: NTU-RGB+D 120, its subset NTU-RGB+D 60, a real-world challenging human activity dataset: Toyota Smarthome and a small scale human-object interaction dataset Northwestern UCLA.", "field": [], "task": ["Action Classification", "Action Classification ", "Action Recognition", "Human-Object Interaction Detection", "Skeleton Based Action Recognition"], "method": [], "dataset": ["NTU RGB+D", "NTU RGB+D 120"], "metric": ["Accuracy (CS)", "Accuracy (Cross-Setup)", "Accuracy (CV)", "Accuracy (Cross-Subject)"], "title": "VPN: Learning Video-Pose Embedding for Activities of Daily Living"} {"abstract": "Effective convolutional features play an important role in saliency estimation but how to learn powerful features for saliency is still a challenging task. FCN-based methods directly apply multi-level convolutional features without distinction, which leads to sub-optimal results due to the distraction from redundant details. In this paper, we propose a novel attention guided network which selectively integrates multi-level contextual information in a progressive manner. Attentive features generated by our network can alleviate distraction of background thus achieve better performance. On the other hand, it is observed that most of existing algorithms conduct salient object detection by exploiting side-output features of the backbone feature extraction network. 
However, shallower layers of backbone network lack the ability to obtain global semantic information, which limits the effective feature learning. To address the problem, we introduce multi-path recurrent feedback to enhance our proposed progressive attention driven framework. Through multi-path recurrent connections, global semantic information from the top convolutional layer is transferred to shallower layers, which intrinsically refines the entire network. Experimental results on six benchmark datasets demonstrate that our algorithm performs favorably against the state-of-the-art approaches.", "field": [], "task": ["Object Detection", "RGB Salient Object Detection", "Saliency Prediction", "Salient Object Detection"], "method": [], "dataset": ["DUTS-TE"], "metric": ["MAE", "F-measure"], "title": "Progressive Attention Guided Recurrent Network for Salient Object Detection"} {"abstract": "We introduce UCF101 which is currently the largest dataset of human actions.\nIt consists of 101 action classes, over 13k clips and 27 hours of video data.\nThe database consists of realistic user uploaded videos containing camera\nmotion and cluttered background. Additionally, we provide baseline action\nrecognition results on this new dataset using standard bag of words approach\nwith overall performance of 44.5%. To the best of our knowledge, UCF101 is\ncurrently the most challenging dataset of actions due to its large number of\nclasses, large number of clips and also unconstrained nature of such clips.", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["UCF101"], "metric": ["3-fold Accuracy"], "title": "UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild"} {"abstract": "Shape completion, the problem of estimating the complete geometry of objects from partial observations, lies at the core of many vision and robotics applications. In this work, we propose Point Completion Network (PCN), a novel learning-based approach for shape completion. Unlike existing shape completion methods, PCN directly operates on raw point clouds without any structural assumption (e.g. symmetry) or annotation (e.g. semantic class) about the underlying shape. It features a decoder design that enables the generation of fine-grained completions while maintaining a small number of parameters. Our experiments show that PCN produces dense, complete point clouds with realistic structures in the missing regions on inputs with various levels of incompleteness and noise, including cars from LiDAR scans in the KITTI dataset.", "field": [], "task": ["Point Cloud Completion"], "method": [], "dataset": ["Completion3D", "ShapeNet"], "metric": ["F-Score@1%", "Chamfer Distance"], "title": "PCN: Point Completion Network"} {"abstract": "The milestone improvements brought about by deep representation learning and pre-training techniques have led to large performance gains across downstream NLP, IR and Vision tasks. Multimodal modeling techniques aim to leverage large high-quality visio-linguistic datasets for learning complementary information (across image and text modalities). In this paper, we introduce the Wikipedia-based Image Text (WIT) Dataset (https://github.com/google-research-datasets/wit) to better facilitate multimodal, multilingual learning. WIT is composed of a curated set of 37.6 million entity rich image-text examples with 11.5 million unique images across 108 Wikipedia languages. 
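Point-cloud completion methods such as PCN above are commonly trained and evaluated with the Chamfer distance between predicted and ground-truth point sets; a brute-force numpy version that is fine for a few thousand points (some papers use squared distances instead):

import numpy as np

def chamfer_distance(p, q):
    # p: (N, 3), q: (M, 3). Average nearest-neighbour distance in both directions.
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())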
Its size enables WIT to be used as a pretraining dataset for multimodal models, as we show when applied to downstream tasks such as image-text retrieval. WIT has four main and unique advantages. First, WIT is the largest multimodal dataset by the number of image-text examples by 3x (at the time of writing). Second, WIT is massively multilingual (first of its kind) with coverage over 100+ languages (each of which has at least 12K examples) and provides cross-lingual texts for many images. Third, WIT represents a more diverse set of concepts and real world entities relative to what previous datasets cover. Lastly, WIT provides a very challenging real-world test set, as we empirically illustrate using an image-text retrieval task as an example.", "field": [], "task": ["Representation Learning", "Text-Image Retrieval"], "method": [], "dataset": ["WIT"], "metric": ["R@5", "R@1"], "title": "WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning"} {"abstract": "Motion estimation (ME) and motion compensation (MC) have dominated classical video frame interpolation systems over the past decades. Recently, the convolutional neural networks set up a new data-driven paradigm for frame interpolation. However, existing learning based methods typically fall into estimating only one of the ME and MC building blocks, resulting in a limited performance on both computational efficiency and interpolation accuracy. In this work, we propose a motion estimation and motion compensation driven neural network for video frame interpolation. A novel adaptive warping layer is proposed to integrate both optical flow and interpolation kernels to synthesize target frame pixels. This layer is fully differentiable such that both the flow and kernel estimation networks can be optimized jointly. Our method benefits from the ME and MC model-driven architecture while avoiding the conventional hand-crafted design by training on a large amount of video data. Compared to existing methods, our approach is computationally efficient and able to generate more visually appealing results. Moreover, our MEMC architecture is a general framework, which can be seamlessly adapted to several video enhancement tasks, e.g., super-resolution, denoising, and deblocking. Extensive quantitative and qualitative evaluations demonstrate that the proposed method performs favorably against the state-of-the-art video frame interpolation and enhancement algorithms on a wide range of datasets.", "field": [], "task": ["Denoising", "Motion Compensation", "Motion Estimation", "Optical Flow Estimation", "Super-Resolution", "Video Enhancement", "Video Frame Interpolation"], "method": [], "dataset": ["Middlebury"], "metric": ["Interpolation Error"], "title": "MEMC-Net: Motion Estimation and Motion Compensation Driven Neural Network for Video Frame Interpolation and Enhancement"} {"abstract": "Despite the noticeable progress in perceptual tasks like detection, instance\nsegmentation and human parsing, computers still perform unsatisfactorily on\nvisually understanding humans in crowded scenes, such as group behavior\nanalysis, person re-identification and autonomous driving, etc. To this end,\nmodels need to comprehensively perceive the semantic information and the\ndifferences between instances in a multi-human image, which is recently defined\nas the multi-human parsing task. 
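MEMC-Net's adaptive warping layer above combines optical flow with learned interpolation kernels; as a much-simplified stand-in, the sketch below does plain nearest-neighbour backward warping by a flow field (the full layer would instead blend a local kernel at each sampled location):

import numpy as np

def backward_warp(image, flow):
    # image: (H, W) or (H, W, C); flow: (H, W, 2) with (u, v) displacements.
    # Each output pixel samples the input at (x + u, y + v), clamped to the border.
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, h - 1)
    return image[src_y, src_x]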
In this paper, we present a new large-scale\ndatabase \"Multi-Human Parsing (MHP)\" for algorithm development and evaluation,\nand advances the state-of-the-art in understanding humans in crowded scenes.\nMHP contains 25,403 elaborately annotated images with 58 fine-grained semantic\ncategory labels, involving 2-26 persons per image and captured in real-world\nscenes from various viewpoints, poses, occlusion, interactions and background.\nWe further propose a novel deep Nested Adversarial Network (NAN) model for\nmulti-human parsing. NAN consists of three Generative Adversarial Network\n(GAN)-like sub-nets, respectively performing semantic saliency prediction,\ninstance-agnostic parsing and instance-aware clustering. These sub-nets form a\nnested structure and are carefully designed to learn jointly in an end-to-end\nway. NAN consistently outperforms existing state-of-the-art solutions on our\nMHP and several other datasets, and serves as a strong baseline to drive the\nfuture research for multi-human parsing.", "field": [], "task": ["Autonomous Driving", "Human Parsing", "Instance Segmentation", "Multi-Human Parsing", "Person Re-Identification", "Saliency Prediction", "Semantic Segmentation"], "method": [], "dataset": ["MHP v1.0", "MHP v2.0", "PASCAL-Part"], "metric": ["AP 0.5"], "title": "Understanding Humans in Crowded Scenes: Deep Nested Adversarial Learning and A New Benchmark for Multi-Human Parsing"} {"abstract": "Learning both hierarchical and temporal representation has been among the\nlong-standing challenges of recurrent neural networks. Multiscale recurrent\nneural networks have been considered as a promising approach to resolve this\nissue, yet there has been a lack of empirical evidence showing that this type\nof models can actually capture the temporal dependencies by discovering the\nlatent hierarchical structure of the sequence. In this paper, we propose a\nnovel multiscale approach, called the hierarchical multiscale recurrent neural\nnetworks, which can capture the latent hierarchical structure in the sequence\nby encoding the temporal dependencies with different timescales using a novel\nupdate mechanism. We show some evidence that our proposed multiscale\narchitecture can discover underlying hierarchical structure in the sequences\nwithout using explicit boundary information. We evaluate our proposed model on\ncharacter-level language modelling and handwriting sequence modelling.", "field": [], "task": ["Hierarchical structure", "Language Modelling"], "method": [], "dataset": ["Text8", "enwik8"], "metric": ["Number of params", "Bit per Character (BPC)"], "title": "Hierarchical Multiscale Recurrent Neural Networks"} {"abstract": "Theoretical and empirical evidence indicates that the depth of neural\nnetworks is crucial for their success. However, training becomes more difficult\nas depth increases, and training of very deep networks remains an open problem.\nHere we introduce a new architecture designed to overcome this. Our so-called\nhighway networks allow unimpeded information flow across many layers on\ninformation highways. They are inspired by Long Short-Term Memory recurrent\nnetworks and use adaptive gating units to regulate the information flow. Even\nwith hundreds of layers, highway networks can be trained directly through\nsimple gradient descent. 
This enables the study of extremely deep and efficient\narchitectures.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["MNIST", "CIFAR-100", "CIFAR-10"], "metric": ["Percentage error", "Percentage correct"], "title": "Training Very Deep Networks"} {"abstract": "Many of the leading approaches in language modeling introduce novel, complex\nand specialized architectures. We take existing state-of-the-art word level\nlanguage models based on LSTMs and QRNNs and extend them to both larger\nvocabularies as well as character-level granularity. When properly tuned, LSTMs\nand QRNNs achieve state-of-the-art results on character-level (Penn Treebank,\nenwik8) and word-level (WikiText-103) datasets, respectively. Results are\nobtained in only 12 hours (WikiText-103) to 2 days (enwik8) using a single\nmodern GPU.", "field": [], "task": ["Language Modelling"], "method": [], "dataset": ["WikiText-103", "enwik8", "Penn Treebank (Character Level)", "Hutter Prize"], "metric": ["Number of params", "Bit per Character (BPC)", "Validation perplexity", "Test perplexity"], "title": "An Analysis of Neural Language Modeling at Multiple Scales"} {"abstract": "Convolutional Neural Networks (CNNs) are prone to overfit small training datasets. We present a novel two-phase pipeline that leverages self-supervised learning and knowledge distillation to improve the generalization ability of CNN models for image classification under the data-deficient setting. The first phase is to learn a teacher model which possesses rich and generalizable visual representations via self-supervised learning, and the second phase is to distill the representations into a student model in a self-distillation manner, and meanwhile fine-tune the student model for the image classification task. We also propose a novel margin loss for the self-supervised contrastive learning proxy task to better learn the representation under the data-deficient scenario. Together with other tricks, we achieve competitive performance in the VIPriors image classification challenge.", "field": [], "task": ["Image Classification", "Knowledge Distillation", "Self-Supervised Learning"], "method": [], "dataset": ["ImageNet VIPriors subset"], "metric": ["Top-1"], "title": "Distilling Visual Priors from Self-Supervised Learning"} {"abstract": "We present an approach named JSFusion (Joint Sequence Fusion) that can\nmeasure semantic similarity between any pairs of multimodal sequence data (e.g.\na video clip and a language sentence). Our multimodal matching network consists\nof two key components. First, the Joint Semantic Tensor composes a dense\npairwise representation of two sequence data into a 3D tensor. Then, the\nConvolutional Hierarchical Decoder computes their similarity score by\ndiscovering hidden hierarchical matches between the two sequence modalities.\nBoth modules leverage hierarchical attention mechanisms that learn to promote\nwell-matched representation patterns while prune out misaligned ones in a\nbottom-up manner. Although the JSFusion is a universal model to be applicable\nto any multimodal sequence data, this work focuses on video-language tasks\nincluding multimodal retrieval and video QA. We evaluate the JSFusion model in\nthree retrieval and VQA tasks in LSMDC, for which our model achieves the best\nperformance reported so far. 
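The gating mechanism in "Training Very Deep Networks" above computes y = T(x) * H(x) + (1 - T(x)) * x, where T is a sigmoid transform gate; a single-layer numpy sketch, with the weight shapes and the tanh nonlinearity for H as assumptions:

import numpy as np

def highway_layer(x, W_h, b_h, W_t, b_t):
    # x: (B, D); W_h, W_t: (D, D); b_h, b_t: (D,). The transform gate T
    # decides, per unit, how much of the transformed signal H(x) replaces
    # the untouched input x, letting information pass through deep stacks.
    h = np.tanh(x @ W_h + b_h)
    t = 1.0 / (1.0 + np.exp(-(x @ W_t + b_t)))
    return t * h + (1.0 - t) * x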
We also perform multiple-choice and movie\nretrieval tasks for the MSR-VTT dataset, on which our approach outperforms many\nstate-of-the-art methods.", "field": [], "task": ["Question Answering", "Semantic Similarity", "Semantic Textual Similarity", "Video Question Answering", "Video Retrieval", "Visual Question Answering"], "method": [], "dataset": ["LSMDC", "MSR-VTT-1kA", "MSR-VTT"], "metric": ["text-to-video Median Rank", "text-to-video R@5", "text-to-video R@1", "text-to-video R@10", "video-to-text R@5"], "title": "A Joint Sequence Fusion Model for Video Question Answering and Retrieval"} {"abstract": "We propose a novel deep-learning-based system for vessel segmentation.\nExisting methods using CNNs have mostly relied on local appearances learned on\nthe regular image grid, without considering the graphical structure of vessel\nshape. To address this, we incorporate a graph convolutional network into a\nunified CNN architecture, where the final segmentation is inferred by combining\nthe different types of features. The proposed method can be applied to expand\nany type of CNN-based vessel segmentation method to enhance the performance.\nExperiments show that the proposed method outperforms the current\nstate-of-the-art methods on two retinal image datasets as well as a coronary\nartery X-ray angiography dataset.", "field": [], "task": ["Retinal Vessel Segmentation"], "method": [], "dataset": ["STARE", "CHASE_DB1", "HRF", "DRIVE"], "metric": ["F1 score", "AUC"], "title": "Deep Vessel Segmentation By Learning Graphical Connectivity"} {"abstract": "We propose a novel architecture which is able to automatically anonymize faces in images while retaining the original data distribution. We ensure total anonymization of all faces in an image by generating images exclusively on privacy-safe information. Our model is based on a conditional generative adversarial network, generating images considering the original pose and image background. The conditional information enables us to generate highly realistic faces with a seamless transition between the generated face and the existing background. Furthermore, we introduce a diverse dataset of human faces, including unconventional poses, occluded faces, and a vast variability in backgrounds. Finally, we present experimental results reflecting the capability of our model to anonymize images while preserving the data distribution, making the data suitable for further training of deep learning models. As far as we know, no other solution has been proposed that guarantees the anonymization of faces while generating realistic images.", "field": [], "task": ["Face Anonymization"], "method": [], "dataset": ["2019_test set"], "metric": ["10%"], "title": "DeepPrivacy: A Generative Adversarial Network for Face Anonymization"} {"abstract": "Recently, the Visual Question Answering (VQA) task has gained increasing\nattention in artificial intelligence. Existing VQA methods mainly adopt the\nvisual attention mechanism to associate the input question with corresponding\nimage regions for effective question answering. The free-form region based and\nthe detection-based visual attention mechanisms are mostly investigated, with\nthe former ones attending free-form image regions and the latter ones attending\npre-specified detection-box regions. We argue that the two attention mechanisms\nare able to provide complementary information and should be effectively\nintegrated to better solve the VQA problem. 
In this paper, we propose a novel\ndeep neural network for VQA that integrates both attention mechanisms. Our\nproposed framework effectively fuses features from free-form image regions,\ndetection boxes, and question representations via a multi-modal multiplicative\nfeature embedding scheme to jointly attend question-related free-form image\nregions and detection boxes for more accurate question answering. The proposed\nmethod is extensively evaluated on two publicly available datasets, COCO-QA and\nVQA, and outperforms state-of-the-art approaches. Source code is available at\nhttps://github.com/lupantech/dual-mfa-vqa.", "field": [], "task": ["Visual Question Answering"], "method": [], "dataset": ["COCO Visual Question Answering (VQA) real images 1.0 open ended", "COCO Visual Question Answering (VQA) real images 1.0 multiple choice"], "metric": ["Percentage correct"], "title": "Co-attending Free-form Regions and Detections with Multi-modal Multiplicative Feature Embedding for Visual Question Answering"} {"abstract": "With the widespread use of mobile phones and scanners to photograph and upload documents, the need for extracting the information trapped in unstructured document images such as retail receipts, insurance claim forms and financial invoices is becoming more acute. A major hurdle to this objective is that these images often contain information in the form of tables and extracting data from tabular sub-images presents a unique set of challenges. This includes accurate detection of the tabular region within an image, and subsequently detecting and extracting information from the rows and columns of the detected table. While some progress has been made in table detection, extracting the table contents is still a challenge since this involves more fine grained table structure(rows & columns) recognition. Prior approaches have attempted to solve the table detection and structure recognition problems independently using two separate models. In this paper, we propose TableNet: a novel end-to-end deep learning model for both table detection and structure recognition. The model exploits the interdependence between the twin tasks of table detection and table structure recognition to segment out the table and column regions. This is followed by semantic rule-based row extraction from the identified tabular sub-regions. The proposed model and extraction approach was evaluated on the publicly available ICDAR 2013 and Marmot Table datasets obtaining state of the art results. Additionally, we demonstrate that feeding additional semantic features further improves model performance and that the model exhibits transfer learning across datasets. Another contribution of this paper is to provide additional table structure annotations for the Marmot data, which currently only has annotations for table detection.", "field": [], "task": ["Table Detection", "Transfer Learning"], "method": [], "dataset": ["ICDAR2013"], "metric": ["Avg F1"], "title": "TableNet: Deep Learning model for end-to-end Table detection and Tabular data extraction from Scanned Document Images"} {"abstract": "The construction of models for video action classification progresses rapidly. However, the performance of those models can still be easily improved by ensembling with the same models trained on different modalities (e.g. Optical flow). Unfortunately, it is computationally expensive to use several modalities during inference. Recent works examine the ways to integrate advantages of multi-modality into a single RGB-model. 
Yet, there is still a room for improvement. In this paper, we explore the various methods to embed the ensemble power into a single model. We show that proper initialization, as well as mutual modality learning, enhances single-modality models. As a result, we achieve state-of-the-art results in the Something-Something-v2 benchmark.", "field": [], "task": ["Action Classification", "Action Classification ", "Action Recognition", "Optical Flow Estimation"], "method": [], "dataset": ["Something-Something V2"], "metric": ["Top-5 Accuracy", "Top-1 Accuracy"], "title": "Mutual Modality Learning for Video Action Classification"} {"abstract": "Multi-sensor perception is crucial to ensure the reliability and accuracy in autonomous driving system, while multi-object tracking (MOT) improves that by tracing sequential movement of dynamic objects. Most current approaches for multi-sensor multi-object tracking are either lack of reliability by tightly relying on a single input source (e.g., center camera), or not accurate enough by fusing the results from multiple sensors in post processing without fully exploiting the inherent information. In this study, we design a generic sensor-agnostic multi-modality MOT framework (mmMOT), where each modality (i.e., sensors) is capable of performing its role independently to preserve reliability, and further improving its accuracy through a novel multi-modality fusion module. Our mmMOT can be trained in an end-to-end manner, enables joint optimization for the base feature extractor of each modality and an adjacency estimator for cross modality. Our mmMOT also makes the first attempt to encode deep representation of point cloud in data association process in MOT. We conduct extensive experiments to evaluate the effectiveness of the proposed framework on the challenging KITTI benchmark and report state-of-the-art performance. Code and models are available at https://github.com/ZwwWayne/mmMOT.", "field": [], "task": ["Autonomous Driving", "Multi-Object Tracking", "Object Tracking"], "method": [], "dataset": ["KITTI Tracking test"], "metric": ["MOTA"], "title": "Robust Multi-Modality Multi-Object Tracking"} {"abstract": "Semi-Supervised Learning (SSL) algorithms have shown great potential in training regimes when access to labeled data is scarce but access to unlabeled data is plentiful. However, our experiments illustrate several shortcomings that prior SSL algorithms suffer from. In particular, poor performance when unlabeled and labeled data distributions differ. To address these observations, we develop RealMix, which achieves state-of-the-art results on standard benchmark datasets across different labeled and unlabeled set sizes while overcoming the aforementioned challenges. Notably, RealMix achieves an error rate of 9.79% on CIFAR10 with 250 labels and is the only SSL method tested able to surpass baseline performance when there is significant mismatch in the labeled and unlabeled data distributions. 
RealMix demonstrates how SSL can be used in real world situations with limited access to both data and compute and guides further research in SSL with practical applicability in mind.", "field": [], "task": ["Semi-Supervised Image Classification"], "method": [], "dataset": ["cifar10, 250 Labels", "CIFAR-10, 250 Labels", "SVHN, 250 Labels", "CIFAR-10, 4000 Labels"], "metric": ["Percentage correct", "Accuracy"], "title": "RealMix: Towards Realistic Semi-Supervised Deep Learning Algorithms"} {"abstract": "Biomedical event extraction is critical in understanding biomolecular interactions described in scientific corpus. One of the main challenges is to identify nested structured events that are associated with non-indicative trigger words. We propose to incorporate domain knowledge from Unified Medical Language System (UMLS) to a pre-trained language model via Graph Edge-conditioned Attention Networks (GEANet) and hierarchical graph representation. To better recognize the trigger words, each sentence is first grounded to a sentence graph based on a jointly modeled hierarchical knowledge graph from UMLS. The grounded graphs are then propagated by GEANet, a novel graph neural networks for enhanced capabilities in inferring complex events. On BioNLP 2011 GENIA Event Extraction task, our approach achieved 1.41% F1 and 3.19% F1 improvements on all events and complex events, respectively. Ablation studies confirm the importance of GEANet and hierarchical KG.", "field": [], "task": ["Event Extraction"], "method": [], "dataset": ["GENIA"], "metric": ["F1"], "title": "Biomedical Event Extraction with Hierarchical Knowledge Graphs"} {"abstract": "We introduce a novel architecture for dependency parsing: \\emph{stack-pointer\nnetworks} (\\textbf{\\textsc{StackPtr}}). Combining pointer\nnetworks~\\citep{vinyals2015pointer} with an internal stack, the proposed model\nfirst reads and encodes the whole sentence, then builds the dependency tree\ntop-down (from root-to-leaf) in a depth-first fashion. The stack tracks the\nstatus of the depth-first search and the pointer networks select one child for\nthe word at the top of the stack at each step. The \\textsc{StackPtr} parser\nbenefits from the information of the whole sentence and all previously derived\nsubtree structures, and removes the left-to-right restriction in classical\ntransition-based parsers. Yet, the number of steps for building any (including\nnon-projective) parse tree is linear in the length of the sentence just as\nother transition-based parsers, yielding an efficient decoding algorithm with\n$O(n^2)$ time complexity. We evaluate our model on 29 treebanks spanning 20\nlanguages and different dependency annotation schemas, and achieve\nstate-of-the-art performance on 21 of them.", "field": [], "task": ["Dependency Parsing"], "method": [], "dataset": ["Penn Treebank"], "metric": ["UAS", "LAS"], "title": "Stack-Pointer Networks for Dependency Parsing"} {"abstract": "Recurrent neural networks have achieved great success in many NLP tasks.\nHowever, they have difficulty in parallelization because of the recurrent\nstructure, so it takes much time to train RNNs. In this paper, we introduce\nsliced recurrent neural networks (SRNNs), which could be parallelized by\nslicing the sequences into many subsequences. SRNNs have the ability to obtain\nhigh-level information through multiple layers with few extra parameters. We\nprove that the standard RNN is a special case of the SRNN when we use linear\nactivation functions. 
Without changing the recurrent units, SRNNs are 136 times\nas fast as standard RNNs and could be even faster when we train longer\nsequences. Experiments on six largescale sentiment analysis datasets show that\nSRNNs achieve better performance than standard RNNs.", "field": [], "task": ["Sentiment Analysis"], "method": [], "dataset": ["Amazon Review Polarity", "Yelp Binary classification", "Amazon Review Full"], "metric": ["Error", "Accuracy"], "title": "Sliced Recurrent Neural Networks"} {"abstract": "Recent advances in deep learning have facilitated the demand of neural models for real applications. In practice, these applications often need to be deployed with limited resources while keeping high accuracy. This paper touches the core of neural models in NLP, word embeddings, and presents a new embedding distillation framework that remarkably reduces the dimension of word embeddings without compromising accuracy. A novel distillation ensemble approach is also proposed that trains a high-efficient student model using multiple teacher models. In our approach, the teacher models play roles only during training such that the student model operates on its own without getting supports from the teacher models during decoding, which makes it eighty times faster and lighter than other typical ensemble methods. All models are evaluated on seven document classification datasets and show a significant advantage over the teacher models for most cases. Our analysis depicts insightful transformation of word embeddings from distillation and suggests a future direction to ensemble approaches using neural models.", "field": [], "task": ["Document Classification", "Word Embeddings"], "method": [], "dataset": ["CR", "SST-2 Binary classification", "MR", "SST-5 Fine-grained classification", "TREC-6", "SUBJ", "MPQA"], "metric": ["Error", "Accuracy"], "title": "The Pupil Has Become the Master: Teacher-Student Model-Based Word Embedding Distillation with Ensemble Learning"} {"abstract": "In the last years, the computer vision research community has studied on how to model temporal dynamics in videos to employ 3D human action recognition. To that end, two main baseline approaches have been researched: (i) Recurrent Neural Networks (RNNs) with Long-Short Term Memory (LSTM); and (ii) skeleton image representations used as input to a Convolutional Neural Network (CNN). Although RNN approaches present excellent results, such methods lack the ability to efficiently learn the spatial relations between the skeleton joints. On the other hand, the representations used to feed CNN approaches present the advantage of having the natural ability of learning structural information from 2D arrays (i.e., they learn spatial relations from the skeleton joints). To further improve such representations, we introduce the Tree Structure Reference Joints Image (TSRJI), a novel skeleton image representation to be used as input to CNNs. The proposed representation has the advantage of combining the use of reference joints and a tree structure skeleton. While the former incorporates different spatial relationships between the joints, the latter preserves important spatial relations by traversing a skeleton tree with a depth-first order algorithm. 
Experimental results demonstrate the effectiveness of the proposed representation for 3D action recognition on two datasets achieving state-of-the-art results on the recent NTU RGB+D~120 dataset.", "field": [], "task": ["3D Action Recognition", "Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D", "NTU RGB+D 120"], "metric": ["Accuracy (Cross-Subject)", "Accuracy (Cross-Setup)", "Accuracy (CV)", "Accuracy (CS)"], "title": "Skeleton Image Representation for 3D Action Recognition based on Tree Structure and Reference Joints"} {"abstract": "Intelligent agents need to generalize from past experience to achieve goals in complex environments. World models facilitate such generalization and allow learning behaviors from imagined outcomes to increase sample-efficiency. While learning world models from image inputs has recently become feasible for some tasks, modeling Atari games accurately enough to derive successful behaviors has remained an open challenge for many years. We introduce DreamerV2, a reinforcement learning agent that learns behaviors purely from predictions in the compact latent space of a powerful world model. The world model uses discrete representations and is trained separately from the policy. DreamerV2 constitutes the first agent that achieves human-level performance on the Atari benchmark of 55 tasks by learning behaviors inside a separately trained world model. With the same computational budget and wall-clock time, DreamerV2 reaches 200M frames and exceeds the final performance of the top single-GPU agents IQN and Rainbow.", "field": [], "task": ["Atari Games"], "method": [], "dataset": ["Atari 2600 Amidar", "Atari 2600 River Raid", "Atari 2600 Beam Rider", "Atari 2600 Video Pinball", "Atari 2600 Demon Attack", "Atari 2600 Enduro", "Atari 2600 Alien", "Atari 2600 Boxing", "Atari 2600 Pitfall!", "Atari 2600 Bank Heist", "Atari 2600 Tutankham", "Atari 2600 Time Pilot", "Atari 2600 Space Invaders", "Atari 2600 Assault", "Atari 2600 Phoenix", "Atari 2600 Gravitar", "Atari 2600 Ice Hockey", "Atari 2600 Bowling", "Atari 2600 Private Eye", "Atari 2600 Berzerk", "Atari 2600 Asterix", "Atari 2600 Breakout", "Atari 2600 Name This Game", "Atari 2600 Crazy Climber", "Atari 2600 Pong", "Atari 2600 Krull", "Atari 2600 Freeway", "Atari 2600 James Bond", "Atari 2600 Robotank", "Atari 2600 Kangaroo", "Atari 2600 Venture", "Atari 2600 Asteroids", "Atari 2600 Fishing Derby", "Atari 2600 Ms. Pacman", "Atari 2600 Seaquest", "Atari 2600 Tennis", "Atari 2600 Solaris", "Atari 2600 Zaxxon", "Atari 2600 Frostbite", "Atari 2600 Star Gunner", "Atari 2600 Double Dunk", "Atari 2600 Battle Zone", "Atari 2600 Gopher", "Atari 2600 Skiing", "Atari 2600 Road Runner", "Atari 2600 Atlantis", "Atari 2600 Kung-Fu Master", "Atari 2600 Chopper Command", "Atari 2600 Yars Revenge", "Atari 2600 Up and Down", "Atari 2600 Montezuma's Revenge", "Atari 2600 Wizard of Wor", "Atari 2600 Q*Bert", "Atari 2600 Centipede", "Atari 2600 HERO"], "metric": ["Score"], "title": "Mastering Atari with Discrete World Models"} {"abstract": "We propose RIFE, a Real-time Intermediate Flow Estimation algorithm for Video Frame Interpolation (VFI). Most existing flow-based methods first estimate the bi-directional optical flows, then scale and reverse them to approximate intermediate flows, leading to artifacts on motion boundaries. RIFE uses a neural network named IFNet that can directly estimate the intermediate flows from images with much better speed. 
Based on our proposed leakage distillation loss, RIFE can be trained in an end-to-end fashion. Experiments demonstrate that our method is flexible and can achieve impressive performance on several public benchmarks. The code is available at https://github.com/hzwer/arXiv2020-RIFE.", "field": [], "task": ["Video Frame Interpolation"], "method": [], "dataset": ["Vimeo90k"], "metric": ["PSNR"], "title": "RIFE: Real-Time Intermediate Flow Estimation for Video Frame Interpolation"} {"abstract": "This paper explores a simple and efficient baseline for text classification.\nOur experiments show that our fast text classifier fastText is often on par\nwith deep learning classifiers in terms of accuracy, and many orders of\nmagnitude faster for training and evaluation. We can train fastText on more\nthan one billion words in less than ten minutes using a standard multicore~CPU,\nand classify half a million sentences among~312K classes in less than a minute.", "field": [], "task": ["Sentiment Analysis", "Text Classification"], "method": [], "dataset": ["Yelp Fine-grained classification", "Amazon Review Polarity", "Yelp Binary classification", "Yahoo! Answers", "DBpedia", "Amazon Review Full", "AG News", "Sogou News"], "metric": ["Error", "Accuracy"], "title": "Bag of Tricks for Efficient Text Classification"} {"abstract": "We aim to dismantle the prevalent black-box neural architectures used in\ncomplex visual reasoning tasks, into the proposed eXplainable and eXplicit\nNeural Modules (XNMs), which advance beyond existing neural module networks\ntowards using scene graphs --- objects as nodes and the pairwise relationships\nas edges --- for explainable and explicit reasoning with structured knowledge.\nXNMs allow us to pay more attention to teach machines how to \"think\",\nregardless of what they \"look\". As we will show in the paper, by using scene\ngraphs as an inductive bias, 1) we can design XNMs in a concise and flexible\nfashion, i.e., XNMs merely consist of 4 meta-types, which significantly reduce\nthe number of parameters by 10 to 100 times, and 2) we can explicitly trace the\nreasoning-flow in terms of graph attentions. XNMs are so generic that they\nsupport a wide range of scene graph implementations with various qualities. For\nexample, when the graphs are detected perfectly, XNMs achieve 100% accuracy on\nboth CLEVR and CLEVR CoGenT, establishing an empirical performance upper-bound\nfor visual reasoning; when the graphs are noisily detected from real-world\nimages, XNMs are still robust to achieve a competitive 67.5% accuracy on\nVQAv2.0, surpassing the popular bag-of-objects attention models without graph\nstructures.", "field": [], "task": ["Visual Question Answering", "Visual Reasoning"], "method": [], "dataset": ["CLEVR"], "metric": ["Accuracy"], "title": "Explainable and Explicit Visual Reasoning over Scene Graphs"} {"abstract": "There has been an increasing research interest in age-invariant face\nrecognition. However, matching faces with big age gaps remains a challenging\nproblem, primarily due to the significant discrepancy of face appearances\ncaused by aging. To reduce such a discrepancy, in this paper we propose a novel\nalgorithm to remove age-related components from features mixed with both\nidentity and age information. Specifically, we factorize a mixed face feature\ninto two uncorrelated components: identity-dependent component and\nage-dependent component, where the identity-dependent component includes\ninformation that is useful for face recognition. 
To implement this idea, we\npropose the Decorrelated Adversarial Learning (DAL) algorithm, where a\nCanonical Mapping Module (CMM) is introduced to find the maximum correlation\nbetween the paired features generated by a backbone network, while the backbone\nnetwork and the factorization module are trained to generate features reducing\nthe correlation. Thus, the proposed model learns the decomposed features of age\nand identity whose correlation is significantly reduced. Simultaneously, the\nidentity-dependent feature and the age-dependent feature are respectively\nsupervised by ID and age preserving signals to ensure that they both contain\nthe correct information. Extensive experiments are conducted on popular\npublic-domain face aging datasets (FG-NET, MORPH Album 2, and CACD-VS) to\ndemonstrate the effectiveness of the proposed approach.", "field": [], "task": ["Age-Invariant Face Recognition", "Face Recognition"], "method": [], "dataset": ["CACDVS"], "metric": ["Accuracy"], "title": "Decorrelated Adversarial Learning for Age-Invariant Face Recognition"} {"abstract": "Cover song identification represents a challenging task in the field of Music Information Retrieval (MIR) due to complex musical variations between query tracks and cover versions. Previous works typically utilize hand-crafted features and alignment algorithms for the task. More recently, further breakthroughs are achieved employing neural network approaches. In this paper, we propose a novel Convolutional Neural Network (CNN) architecture based on the characteristics of the cover song task. We first train the network through classification strategies; the network is then used to extract music representation for cover song identification. A scheme is designed to train robust models against tempo changes. Experimental results show that our approach outperforms state-of-the-art methods on all public datasets, improving the performance especially on the large dataset.", "field": [], "task": ["Cover song identification", "Information Retrieval", "Music Information Retrieval"], "method": [], "dataset": ["Covers80", "YouTube350", "SHS100K-TEST"], "metric": ["mAP", "MAP"], "title": "Learning a Representation for Cover Song Identification Using Convolutional Neural Network"} {"abstract": "In this paper, we introduce the Reinforced Mnemonic Reader for machine\nreading comprehension tasks, which enhances previous attentive readers in two\naspects. First, a reattention mechanism is proposed to refine current\nattentions by directly accessing to past attentions that are temporally\nmemorized in a multi-round alignment architecture, so as to avoid the problems\nof attention redundancy and attention deficiency. Second, a new optimization\napproach, called dynamic-critical reinforcement learning, is introduced to\nextend the standard supervised method. It always encourages to predict a more\nacceptable answer so as to address the convergence suppression problem occurred\nin traditional reinforcement learning algorithms. Extensive experiments on the\nStanford Question Answering Dataset (SQuAD) show that our model achieves\nstate-of-the-art results. 
Meanwhile, our model outperforms previous systems by\nover 6% in terms of both Exact Match and F1 metrics on two adversarial SQuAD\ndatasets.", "field": [], "task": ["Machine Reading Comprehension", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["SQuAD1.1 dev", "SQuAD1.1", "TriviaQA"], "metric": ["EM", "F1"], "title": "Reinforced Mnemonic Reader for Machine Reading Comprehension"} {"abstract": "Training of the neural autoregressive density estimator (NADE) can be viewed\nas doing one step of probabilistic inference on missing values in data. We\npropose a new model that extends this inference scheme to multiple steps,\narguing that it is easier to learn to improve a reconstruction in $k$ steps\nrather than to learn to reconstruct in a single inference step. The proposed\nmodel is an unsupervised building block for deep learning that combines the\ndesirable properties of NADE and multi-predictive training: (1) Its test\nlikelihood can be computed analytically, (2) it is easy to generate independent\nsamples from it, and (3) it uses an inference engine that is a superset of\nvariational inference for Boltzmann machines. The proposed NADE-k is\ncompetitive with the state-of-the-art in density estimation on the two datasets\ntested.", "field": [], "task": ["Density Estimation", "Image Generation", "Variational Inference"], "method": [], "dataset": ["Binarized MNIST"], "metric": ["nats"], "title": "Iterative Neural Autoregressive Distribution Estimator (NADE-k)"} {"abstract": "Semantic image segmentation is an essential component of modern autonomous\ndriving systems, as an accurate understanding of the surrounding scene is\ncrucial to navigation and action planning. Current state-of-the-art approaches\nin semantic image segmentation rely on pre-trained networks that were initially\ndeveloped for classifying images as a whole. While these networks exhibit\noutstanding recognition performance (i.e., what is visible?), they lack\nlocalization accuracy (i.e., where precisely is something located?). Therefore,\nadditional processing steps have to be performed in order to obtain\npixel-accurate segmentation masks at the full image resolution. To alleviate\nthis problem we propose a novel ResNet-like architecture that exhibits strong\nlocalization and recognition performance. We combine multi-scale context with\npixel-level accuracy by using two processing streams within our network: One\nstream carries information at the full image resolution, enabling precise\nadherence to segment boundaries. The other stream undergoes a sequence of\npooling operations to obtain robust features for recognition. The two streams\nare coupled at the full image resolution using residuals. Without additional\nprocessing steps and without pre-training, our approach achieves an\nintersection-over-union score of 71.8% on the Cityscapes dataset.", "field": [], "task": ["Autonomous Driving", "Real-Time Semantic Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["Cityscapes test"], "metric": ["Time (ms)", "Mean IoU (class)", "Frame (fps)", "mIoU"], "title": "Full-Resolution Residual Networks for Semantic Segmentation in Street Scenes"} {"abstract": "We propose a transductive Laplacian-regularized inference for few-shot tasks. 
Given any feature embedding learned from the base classes, we minimize a quadratic binary-assignment function containing two terms: (1) a unary term assigning query samples to the nearest class prototype, and (2) a pairwise Laplacian term encouraging nearby query samples to have consistent label assignments. Our transductive inference does not re-train the base model, and can be viewed as a graph clustering of the query set, subject to supervision constraints from the support set. We derive a computationally efficient bound optimizer of a relaxation of our function, which computes independent (parallel) updates for each query sample, while guaranteeing convergence. Following a simple cross-entropy training on the base classes, and without complex meta-learning strategies, we conducted comprehensive experiments over five few-shot learning benchmarks. Our LaplacianShot consistently outperforms state-of-the-art methods by significant margins across different models, settings, and data sets. Furthermore, our transductive inference is very fast, with computational times that are close to inductive inference, and can be used for large-scale few-shot tasks.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Graph Clustering", "Meta-Learning"], "method": [], "dataset": ["miniImagenet \u2192 CUB (5-way 5-shot)", "Mini-Imagenet 5-way (1-shot)", "Tiered ImageNet 5-way (1-shot)", "iNaturalist (227-way multi-shot)", "Mini-Imagenet 5-way (5-shot)", "Mini-ImageNet-CUB 5-way (5-shot)", "miniImagenet \u2192 CUB (5-way 1-shot)", "CUB 200 5-way 1-shot", "CUB 200 5-way 5-shot", "Tiered ImageNet 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Laplacian Regularized Few-Shot Learning"} {"abstract": "In this paper, we propose a Detect-to-Summarize network (DSNet) framework for supervised video summarization. Our DSNet contains anchor-based and anchor-free counterparts. The anchor-based method generates temporal interest proposals to determine and localize the representative contents of video sequences, while the anchor-free method eliminates the pre-defined temporal proposals and directly predicts the importance scores and segment locations. Different from existing supervised video summarization methods which formulate video summarization as a regression problem without temporal consistency and integrity constraints, our interest detection framework is the first attempt to leverage temporal consistency via the temporal interest detection formulation. Specifically, in the anchor-based approach, we first provide a dense sampling of temporal interest proposals with multi-scale intervals that accommodate interest variations in length, and then extract their long-range temporal features for interest proposal location regression and importance prediction. Notably, positive and negative segments are both assigned for the correctness and completeness information of the generated summaries. In the anchor-free approach, we alleviate drawbacks of temporal proposals by directly predicting importance scores of video frames and segment locations. Particularly, the interest detection framework can be flexibly plugged into off-the-shelf supervised video summarization methods. We evaluate the anchor-based and anchor-free approaches on the SumMe and TVSum datasets.
Experimental results clearly validate the effectiveness of the anchor-based and anchor-free approaches.", "field": [], "task": ["Regression", "Supervised Video Summarization", "Video Summarization"], "method": [], "dataset": ["TvSum", "SumMe"], "metric": ["F1-score (Canonical)", "F1-score (Augmented)"], "title": "DSNet: A Flexible Detect-to-Summarize Network for Video Summarization"} {"abstract": "A 3D point cloud describes the real scene precisely and intuitively. To date,\nhow to segment diversified elements in such an informative 3D scene is rarely\ndiscussed. In this paper, we first introduce a simple and flexible framework to\nsegment instances and semantics in point clouds simultaneously. Then, we\npropose two approaches which make the two tasks take advantage of each other,\nleading to a win-win situation. Specifically, we make instance segmentation\nbenefit from semantic segmentation through learning semantic-aware point-level\ninstance embedding. Meanwhile, semantic features of the points belonging to the\nsame instance are fused together to make more accurate per-point semantic\npredictions. Our method largely outperforms the state-of-the-art method in 3D\ninstance segmentation along with a significant improvement in 3D semantic\nsegmentation. Code has been made available at:\nhttps://github.com/WXinlong/ASIS.", "field": [], "task": ["3D Instance Segmentation", "3D Semantic Segmentation", "Instance Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["S3DIS"], "metric": ["Mean IoU", "mRec", "mPrec"], "title": "Associatively Segmenting Instances and Semantics in Point Clouds"} {"abstract": "For years, recursive neural networks (RvNNs) have been shown to be suitable\nfor representing text as fixed-length vectors and have achieved good performance\non several natural language processing tasks. However, the main drawback of\nRvNNs is that they require structured input, which makes data preparation and\nmodel implementation hard. In this paper, we propose Gumbel Tree-LSTM, a novel\ntree-structured long short-term memory architecture that learns how to compose\ntask-specific tree structures only from plain text data efficiently. Our model\nuses the Straight-Through Gumbel-Softmax estimator to decide the parent node among\ncandidates dynamically and to calculate gradients of the discrete decision. We\nevaluate the proposed model on natural language inference and sentiment\nanalysis, and show that our model outperforms or is at least comparable to\nprevious models. We also find that our model converges significantly faster\nthan other models.", "field": [], "task": ["Natural Language Inference", "Sentiment Analysis"], "method": [], "dataset": ["SNLI"], "metric": ["Parameters", "% Train Accuracy", "% Test Accuracy"], "title": "Learning to Compose Task-Specific Tree Structures"} {"abstract": "Whereas conventional spoken language understanding (SLU) systems map speech to text, and then text to intent, end-to-end SLU systems map speech directly to intent through a single trainable model. Achieving high accuracy with these end-to-end models without a large amount of training data is difficult. We propose a method to reduce the data requirements of end-to-end SLU in which the model is first pre-trained to predict words and phonemes, thus learning good features for SLU. We introduce a new SLU dataset, Fluent Speech Commands, and show that our method improves performance both when the full dataset is used for training and when only a small subset is used.
We also describe preliminary experiments to gauge the model's ability to generalize to new phrases not heard during training.", "field": [], "task": ["Spoken Language Understanding"], "method": [], "dataset": ["Fluent Speech Commands"], "metric": ["Accuracy (%)"], "title": "Speech Model Pre-training for End-to-End Spoken Language Understanding"} {"abstract": "Graph neural networks (GNNs) are a popular class of machine learning models whose major advantage is their ability to incorporate a sparse and discrete dependency structure between data points. Unfortunately, GNNs can only be used when such a graph-structure is available. In practice, however, real-world graphs are often noisy and incomplete or might not be available at all. With this work, we propose to jointly learn the graph structure and the parameters of graph convolutional networks (GCNs) by approximately solving a bilevel program that learns a discrete probability distribution on the edges of the graph. This allows one to apply GCNs not only in scenarios where the given graph is incomplete or corrupted but also in those where a graph is not available. We conduct a series of experiments that analyze the behavior of the proposed method and demonstrate that it outperforms related methods by a significant margin.", "field": [], "task": ["Music Genre Recognition", "Node Classification"], "method": [], "dataset": ["Cora", "Citeseer", "Cora: fixed 20 node per class", "Cora with Public Split: fixed 20 nodes per class", "CiteSeer with Public Split: fixed 20 nodes per class"], "metric": ["Accuracy"], "title": "Learning Discrete Structures for Graph Neural Networks"} {"abstract": "Contextual string embeddings are a recent type of contextualized word embedding that were shown to yield state-of-the-art results when utilized in a range of sequence labeling tasks. They are based on character-level language models which treat text as distributions over characters and are capable of generating embeddings for any string of characters within any textual context. However, such purely character-based approaches struggle to produce meaningful embeddings if a rare string is used in a underspecified context. To address this drawback, we propose a method in which we dynamically aggregate contextualized embeddings of each unique string that we encounter. We then use a pooling operation to distill a {''}global{''} word representation from all contextualized instances. We evaluate these {''}pooled contextualized embeddings{''} on common named entity recognition (NER) tasks such as CoNLL-03 and WNUT and show that our approach significantly improves the state-of-the-art for NER. We make all code and pre-trained models available to the research community for use and reproduction.", "field": [], "task": ["Named Entity Recognition"], "method": [], "dataset": ["Long-tail emerging entities", "CoNLL 2003 (English)"], "metric": ["F1"], "title": "Pooled Contextualized Embeddings for Named Entity Recognition"} {"abstract": "Many state-of-the-art subspace clustering methods follow a two-step process by first constructing an affinity matrix between data points and then applying spectral clustering to this affinity. Most of the research into these methods focuses on the first step of generating the affinity matrix, which often exploits the self-expressive property of linear subspaces, with little consideration typically given to the spectral clustering step that produces the final clustering. 
Moreover, existing methods obtain the affinity by applying ad-hoc postprocessing steps to the self-expressive representation of the data, and this postprocessing can have a significant impact on the subsequent spectral clustering step. In this work, we propose to unify these two steps by jointly learning both a self-expressive representation of the data and an affinity matrix that is well-normalized for spectral clustering. In the proposed model, we constrain the affinity matrix to be doubly stochastic, which results in a principled method for affinity matrix normalization while also exploiting the known benefits of doubly stochastic normalization in spectral clustering. While our proposed model is non-convex, we give a convex relaxation that is provably equivalent in many regimes; we also develop an efficient approximation to the full model that works well in practice. Experiments show that our method achieves state-of-the-art subspace clustering performance on many common datasets in computer vision.", "field": [], "task": ["Image Clustering"], "method": [], "dataset": ["coil-100", "coil-40", "Extended Yale-B", "UMist"], "metric": ["NMI", "Accuracy"], "title": "Doubly Stochastic Subspace Clustering"} {"abstract": "Deep learning techniques have become the to-go models for most vision-related\ntasks on 2D images. However, their power has not been fully realised on several\ntasks in 3D space, e.g., 3D scene understanding. In this work, we jointly\naddress the problems of semantic and instance segmentation of 3D point clouds.\nSpecifically, we develop a multi-task pointwise network that simultaneously\nperforms two tasks: predicting the semantic classes of 3D points and embedding\nthe points into high-dimensional vectors so that points of the same object\ninstance are represented by similar embeddings. We then propose a multi-value\nconditional random field model to incorporate the semantic and instance labels\nand formulate the problem of semantic and instance segmentation as jointly\noptimising labels in the field model. The proposed method is thoroughly\nevaluated and compared with existing methods on different indoor scene datasets\nincluding S3DIS and SceneNN. Experimental results showed the robustness of the\nproposed joint semantic-instance segmentation scheme over its single\ncomponents. Our method also achieved state-of-the-art performance on semantic\nsegmentation.", "field": [], "task": ["3D Instance Segmentation", "3D Semantic Instance Segmentation", "3D Semantic Segmentation", "Scene Understanding"], "method": [], "dataset": ["SceneNN"], "metric": ["mAP@0.5"], "title": "JSIS3D: Joint Semantic-Instance Segmentation of 3D Point Clouds with Multi-Task Pointwise Networks and Multi-Value Conditional Random Fields"} {"abstract": "In this work, we propose a new solution to 3D human pose estimation in videos. Instead of directly regressing the 3D joint locations, we draw inspiration from the human skeleton anatomy and decompose the task into bone direction prediction and bone length prediction, from which the 3D joint locations can be completely derived. Our motivation is the fact that the bone lengths of a human skeleton remain consistent across time. This promotes us to develop effective techniques to utilize global information across all the frames in a video for high-accuracy bone length prediction. Moreover, for the bone direction prediction network, we propose a fully-convolutional propagating architecture with long skip connections. 
Essentially, it predicts the directions of different bones hierarchically without using any time-consuming memory units (e.g., LSTM). A novel joint shift loss is further introduced to bridge the training of the bone length and bone direction prediction networks. Finally, we employ an implicit attention mechanism to feed the 2D keypoint visibility scores into the model as extra guidance, which significantly mitigates the depth ambiguity in many challenging poses. Our full model outperforms the previous best results on Human3.6M and MPI-INF-3DHP datasets, where comprehensive evaluation validates the effectiveness of our model.", "field": [], "task": ["3D Human Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["Human3.6M", "MPI-INF-3DHP"], "metric": ["Average MPJPE (mm)", "Using 2D ground-truth joints", "Multi-View or Monocular", "AUC", "3DPCK"], "title": "Anatomy-aware 3D Human Pose Estimation with Bone-based Pose Decomposition"} {"abstract": "The artistic style (or artistic movement) of a painting is a rich descriptor that captures both\r\nvisual and historical information about the painting. Correctly identifying the artistic style\r\nof a painting is crucial for indexing large artistic databases. In this paper, we investigate\r\nthe use of deep residual neural networks to solve the problem of detecting the artistic style of a\r\npainting and outperform existing approaches by almost 10% on the Wikipaintings dataset\r\n(for 25 different styles). To achieve this result, the network is first pre-trained on ImageNet,\r\nand deeply retrained for artistic style. We empirically evaluate that to achieve the best\r\nperformance, one needs to retrain about 20 layers. This suggests that the two tasks are as\r\nsimilar as expected, and explains the previous success of hand-crafted features. We also\r\ndemonstrate that the styles detected on the Wikipaintings dataset are consistent with styles\r\ndetected on an independent dataset and describe a number of experiments we conducted\r\nto validate this approach both qualitatively and quantitatively.", "field": [], "task": ["Artistic style classification"], "method": [], "dataset": ["RASTA"], "metric": ["Top-1 Accuracy"], "title": "Recognizing Art Style Automatically in painting with deep learning"} {"abstract": "Snorkel MeTaL: A framework for training models with multi-task weak supervision", "field": [], "task": ["Matrix Completion", "Natural Language Inference", "Paraphrase Identification", "Semantic Textual Similarity", "Sentiment Analysis"], "method": [], "dataset": ["MultiNLI", "SST-2 Binary classification", "Quora Question Pairs", "SentEval"], "metric": ["SICK-E", "Matched", "STS", "MRPC", "SICK-R", "Accuracy", "Mismatched", "F1"], "title": "Training Complex Models with Multi-Task Weak Supervision"} {"abstract": "Video deblurring is a challenging task due to the spatially variant blur caused by camera shake, object motions, and depth variations, etc. Existing methods usually estimate optical flow in the blurry video to align consecutive frames or approximate blur kernels. However, they tend to generate artifacts or cannot effectively remove blur when the estimated optical flow is not accurate. To overcome the limitation of separate optical flow estimation, we propose a Spatio-Temporal Filter Adaptive Network (STFAN) for the alignment and deblurring in a unified framework.
The proposed STFAN takes both blurry and restored images of the previous frame as well as blurry image of the current frame as input, and dynamically generates the spatially adaptive filters for the alignment and deblurring. We then propose the new Filter Adaptive Convolutional (FAC) layer to align the deblurred features of the previous frame with the current frame and remove the spatially variant blur from the features of the current frame. Finally, we develop a reconstruction network which takes the fusion of two transformed features to restore the clear frames. Both quantitative and qualitative evaluation results on the benchmark datasets and real-world videos demonstrate that the proposed algorithm performs favorably against state-of-the-art methods in terms of accuracy, speed as well as model size.", "field": [], "task": ["Deblurring", "Optical Flow Estimation"], "method": [], "dataset": ["GoPro", "DVD "], "metric": ["SSIM", "PSNR"], "title": "Spatio-Temporal Filter Adaptive Network for Video Deblurring"} {"abstract": "We propose a new regularization method based on virtual adversarial loss: a\nnew measure of local smoothness of the conditional label distribution given\ninput. Virtual adversarial loss is defined as the robustness of the conditional\nlabel distribution around each input data point against local perturbation.\nUnlike adversarial training, our method defines the adversarial direction\nwithout label information and is hence applicable to semi-supervised learning.\nBecause the directions in which we smooth the model are only \"virtually\"\nadversarial, we call our method virtual adversarial training (VAT). The\ncomputational cost of VAT is relatively low. For neural networks, the\napproximated gradient of virtual adversarial loss can be computed with no more\nthan two pairs of forward- and back-propagations. In our experiments, we\napplied VAT to supervised and semi-supervised learning tasks on multiple\nbenchmark datasets. With a simple enhancement of the algorithm based on the\nentropy minimization principle, our VAT achieves state-of-the-art performance\nfor semi-supervised learning tasks on SVHN and CIFAR-10.", "field": [], "task": ["Semi-Supervised Image Classification"], "method": [], "dataset": ["ImageNet - 10% labeled data", "cifar10, 250 Labels", "CIFAR-10, 250 Labels", "SVHN, 250 Labels", "SVHN, 1000 labels", "CIFAR-10, 4000 Labels"], "metric": ["Top 5 Accuracy", "Percentage correct", "Accuracy"], "title": "Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning"} {"abstract": "Despite significant recent advances in the field of face recognition,\nimplementing face verification and recognition efficiently at scale presents\nserious challenges to current approaches. In this paper we present a system,\ncalled FaceNet, that directly learns a mapping from face images to a compact\nEuclidean space where distances directly correspond to a measure of face\nsimilarity. Once this space has been produced, tasks such as face recognition,\nverification and clustering can be easily implemented using standard techniques\nwith FaceNet embeddings as feature vectors.\n Our method uses a deep convolutional network trained to directly optimize the\nembedding itself, rather than an intermediate bottleneck layer as in previous\ndeep learning approaches. To train, we use triplets of roughly aligned matching\n/ non-matching face patches generated using a novel online triplet mining\nmethod. 
The benefit of our approach is much greater representational\nefficiency: we achieve state-of-the-art face recognition performance using only\n128-bytes per face.\n On the widely used Labeled Faces in the Wild (LFW) dataset, our system\nachieves a new record accuracy of 99.63%. On YouTube Faces DB it achieves\n95.12%. Our system cuts the error rate in comparison to the best published\nresult by 30% on both datasets.\n We also introduce the concept of harmonic embeddings, and a harmonic triplet\nloss, which describe different versions of face embeddings (produced by\ndifferent networks) that are compatible to each other and allow for direct\ncomparison between each other.", "field": [], "task": ["Face Identification", "Face Recognition", "Face Verification"], "method": [], "dataset": ["MegaFace", "YouTube Faces DB", "Labeled Faces in the Wild", "IJB-C"], "metric": ["TAR @ FAR=0.01", "Accuracy"], "title": "FaceNet: A Unified Embedding for Face Recognition and Clustering"} {"abstract": "We present 3DMV, a novel method for 3D semantic scene segmentation of RGB-D\nscans in indoor environments using a joint 3D-multi-view prediction network. In\ncontrast to existing methods that either use geometry or RGB data as input for\nthis task, we combine both data modalities in a joint, end-to-end network\narchitecture. Rather than simply projecting color data into a volumetric grid\nand operating solely in 3D -- which would result in insufficient detail -- we\nfirst extract feature maps from associated RGB images. These features are then\nmapped into the volumetric feature grid of a 3D network using a differentiable\nbackprojection layer. Since our target is 3D scanning scenarios with possibly\nmany frames, we use a multi-view pooling approach in order to handle a varying\nnumber of RGB input views. This learned combination of RGB and geometric\nfeatures with our joint 2D-3D architecture achieves significantly better\nresults than existing baselines. For instance, our final result on the ScanNet\n3D segmentation benchmark increases from 52.8\\% to 75\\% accuracy compared to\nexisting volumetric architectures.", "field": [], "task": ["Scene Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["ScanNet"], "metric": ["3DIoU", "Average Accuracy"], "title": "3DMV: Joint 3D-Multi-View Prediction for 3D Semantic Scene Segmentation"} {"abstract": "Facial beauty prediction (FBP) is a significant visual recognition problem to\nmake assessment of facial attractiveness that is consistent to human\nperception. To tackle this problem, various data-driven models, especially\nstate-of-the-art deep learning techniques, were introduced, and benchmark\ndataset become one of the essential elements to achieve FBP. Previous works\nhave formulated the recognition of facial beauty as a specific supervised\nlearning problem of classification, regression or ranking, which indicates that\nFBP is intrinsically a computation problem with multiple paradigms. However,\nmost of FBP benchmark datasets were built under specific computation\nconstrains, which limits the performance and flexibility of the computational\nmodel trained on the dataset. In this paper, we argue that FBP is a\nmulti-paradigm computation problem, and propose a new diverse benchmark\ndataset, called SCUT-FBP5500, to achieve multi-paradigm facial beauty\nprediction. 
The SCUT-FBP5500 dataset has totally 5500 frontal faces with\ndiverse properties (male/female, Asian/Caucasian, ages) and diverse labels\n(face landmarks, beauty scores within [1,~5], beauty score distribution), which\nallows different computational models with different FBP paradigms, such as\nappearance-based/shape-based facial beauty classification/regression model for\nmale/female of Asian/Caucasian. We evaluated the SCUT-FBP5500 dataset for FBP\nusing different combinations of feature and predictor, and various deep\nlearning methods. The results indicates the improvement of FBP and the\npotential applications based on the SCUT-FBP5500.", "field": [], "task": ["Facial Beauty Prediction", "Regression"], "method": [], "dataset": ["SCUT-FBP"], "metric": ["MAE"], "title": "SCUT-FBP5500: A Diverse Benchmark Dataset for Multi-Paradigm Facial Beauty Prediction"} {"abstract": "Detecting activities in untrimmed videos is an important but challenging\ntask. The performance of existing methods remains unsatisfactory, e.g., they\noften meet difficulties in locating the beginning and end of a long complex\naction. In this paper, we propose a generic framework that can accurately\ndetect a wide variety of activities from untrimmed videos. Our first\ncontribution is a novel proposal scheme that can efficiently generate\ncandidates with accurate temporal boundaries. The other contribution is a\ncascaded classification pipeline that explicitly distinguishes between\nrelevance and completeness of a candidate instance. On two challenging temporal\nactivity detection datasets, THUMOS14 and ActivityNet, the proposed framework\nsignificantly outperforms the existing state-of-the-art methods, demonstrating\nsuperior accuracy and strong adaptivity in handling activities with various\ntemporal structures.", "field": [], "task": ["Action Detection", "Activity Detection", "Temporal Action Localization"], "method": [], "dataset": ["ActivityNet-1.3"], "metric": ["mAP", "mAP IOU@0.5"], "title": "A Pursuit of Temporal Accuracy in General Activity Detection"} {"abstract": "Automatically describing the content of an image is a fundamental problem in\nartificial intelligence that connects computer vision and natural language\nprocessing. In this paper, we present a generative model based on a deep\nrecurrent architecture that combines recent advances in computer vision and\nmachine translation and that can be used to generate natural sentences\ndescribing an image. The model is trained to maximize the likelihood of the\ntarget description sentence given the training image. Experiments on several\ndatasets show the accuracy of the model and the fluency of the language it\nlearns solely from image descriptions. Our model is often quite accurate, which\nwe verify both qualitatively and quantitatively. For instance, while the\ncurrent state-of-the-art BLEU-1 score (the higher the better) on the Pascal\ndataset is 25, our approach yields 59, to be compared to human performance\naround 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66,\nand on SBU, from 19 to 28. 
Lastly, on the newly released COCO dataset, we\nachieve a BLEU-4 of 27.7, which is the current state-of-the-art.", "field": [], "task": ["Image Captioning", "Image Retrieval with Multi-Modal Query", "Text Generation"], "method": [], "dataset": ["MIT-States", "Fashion200k"], "metric": ["Recall@50", "Recall@1", "Recall@5", "Recall@10"], "title": "Show and Tell: A Neural Image Caption Generator"} {"abstract": "We present a new deep learning architecture (called Kd-network) that is\ndesigned for 3D model recognition tasks and works with unstructured point\nclouds. The new architecture performs multiplicative transformations and shares\nparameters of these transformations according to the subdivisions of the point\nclouds imposed onto them by Kd-trees. Unlike the currently dominant\nconvolutional architectures that usually require rasterization on uniform\ntwo-dimensional or three-dimensional grids, Kd-networks do not rely on such\ngrids in any way and therefore avoid poor scaling behaviour. In a series of\nexperiments with popular shape recognition benchmarks, Kd-networks demonstrate\ncompetitive performance in a number of shape recognition tasks such as shape\nclassification, shape retrieval and shape part segmentation.", "field": [], "task": ["3D Part Segmentation", "3D Point Cloud Classification"], "method": [], "dataset": ["ShapeNet-Part", "ModelNet40"], "metric": ["Overall Accuracy", "Class Average IoU", "Instance Average IoU"], "title": "Escape from Cells: Deep Kd-Networks for the Recognition of 3D Point Cloud Models"} {"abstract": "The concepts of unitary evolution matrices and associative memory have\nboosted the field of Recurrent Neural Networks (RNN) to state-of-the-art\nperformance in a variety of sequential tasks. However, RNNs still have a limited\ncapacity to manipulate long-term memory. To bypass this weakness, the most\nsuccessful applications of RNNs use external techniques such as attention\nmechanisms. In this paper we propose a novel RNN model that unifies the\nstate-of-the-art approaches: Rotational Unit of Memory (RUM). The core of RUM\nis its rotational operation, which is, naturally, a unitary matrix, providing\narchitectures with the power to learn long-term dependencies by overcoming the\nvanishing and exploding gradients problem. Moreover, the rotational unit also\nserves as associative memory. We evaluate our model on synthetic memorization,\nquestion answering and language modeling tasks. RUM learns the Copying Memory\ntask completely and improves the state-of-the-art result in the Recall task.\nRUM's performance in the bAbI Question Answering task is comparable to that of\nmodels with attention mechanisms. We also improve the state-of-the-art result to\n1.189 bits-per-character (BPC) loss in the Character Level Penn Treebank (PTB)\ntask, which signifies the applicability of RUM to real-world sequential\ndata. The universality of our construction, at the core of RNN, establishes RUM\nas a promising approach to language modeling, speech recognition and machine\ntranslation.", "field": [], "task": ["Language Modelling", "Machine Translation", "Question Answering", "Speech Recognition"], "method": [], "dataset": ["bAbi"], "metric": ["Accuracy (trained on 1k)"], "title": "Rotational Unit of Memory"} {"abstract": "Adversarial attacks on image classification systems present challenges to\nconvolutional networks and opportunities for understanding them.
This study\nsuggests that adversarial perturbations on images lead to noise in the features\nconstructed by these networks. Motivated by this observation, we develop new\nnetwork architectures that increase adversarial robustness by performing\nfeature denoising. Specifically, our networks contain blocks that denoise the\nfeatures using non-local means or other filters; the entire networks are\ntrained end-to-end. When combined with adversarial training, our feature\ndenoising networks substantially improve the state-of-the-art in adversarial\nrobustness in both white-box and black-box attack settings. On ImageNet, under\n10-iteration PGD white-box attacks where prior art has 27.9% accuracy, our\nmethod achieves 55.7%; even under extreme 2000-iteration PGD white-box attacks,\nour method secures 42.6% accuracy. Our method was ranked first in the Competition\non Adversarial Attacks and Defenses (CAAD) 2018 --- it achieved 50.6%\nclassification accuracy on a secret, ImageNet-like test dataset against 48\nunknown attackers, surpassing the runner-up approach by ~10%. Code is available\nat https://github.com/facebookresearch/ImageNet-Adversarial-Training.", "field": [], "task": ["Adversarial Defense", "Image Classification"], "method": [], "dataset": ["ImageNet (targeted PGD, max perturbation=16)", "ImageNet", "CAAD 2018"], "metric": ["Accuracy"], "title": "Feature Denoising for Improving Adversarial Robustness"} {"abstract": "In recent years, the task of incomplete utterance rewriting has attracted considerable attention. Previous works usually shape it as a machine translation task and employ a sequence-to-sequence architecture with a copy mechanism. In this paper, we present a novel and extensive approach, which formulates it as a semantic segmentation task. Instead of generating from scratch, such a formulation introduces edit operations and shapes the problem as the prediction of a word-level edit matrix. Benefiting from being able to capture both local and global information, our approach achieves state-of-the-art performance on several public datasets. Furthermore, our approach is four times faster than the standard approach at inference.", "field": [], "task": ["Context Query Reformulation", "Dialogue Rewriting"], "method": [], "dataset": ["Rewrite", "Multi-Rewrite"], "metric": ["ROUGE-L", "Rewriting F3"], "title": "Incomplete Utterance Rewriting as Semantic Segmentation"} {"abstract": "We address the task of multi-view novel view synthesis, where we are interested in synthesizing a target image with an arbitrary camera pose from given source images. We propose an end-to-end trainable framework that learns to exploit multiple viewpoints to synthesize a novel view without any 3D supervision. Specifically, our model consists of a flow prediction module and a pixel generation module to directly leverage information presented in source views as well as hallucinate missing pixels from statistical priors. To merge the predictions produced by the two modules given multi-view source images, we introduce a self-learned confidence aggregation mechanism. We evaluate our model on images rendered from 3D object models as well as real and synthesized scenes.
We demonstrate that our model is able to achieve state-of-the-art results as well as progressively improve its predictions when more source images are available.", "field": [], "task": ["Novel View Synthesis"], "method": [], "dataset": ["ShapeNet Chair", "Synthia Novel View Synthesis", "ShapeNet Car", "KITTI Novel View Synthesis"], "metric": ["SSIM"], "title": "Multi-view to Novel view: Synthesizing Novel Views with Self-Learned Confidence"} {"abstract": "Normalization methods are essential components in convolutional neural networks (CNNs). They either standardize or whiten data using statistics estimated in predefined sets of pixels. Unlike existing works that design normalization techniques for specific tasks, we propose Switchable Whitening (SW), which provides a general form unifying different whitening methods as well as standardization methods. SW learns to switch among these operations in an end-to-end manner. It has several advantages. First, SW adaptively selects appropriate whitening or standardization statistics for different tasks (see Fig.1), making it well suited for a wide range of tasks without manual design. Second, by integrating the benefits of different normalizers, SW shows consistent improvements over its counterparts in various challenging benchmarks. Third, SW serves as a useful tool for understanding the characteristics of whitening and standardization techniques. We show that SW outperforms other alternatives on image classification (CIFAR-10/100, ImageNet), semantic segmentation (ADE20K, Cityscapes), domain adaptation (GTA5, Cityscapes), and image style transfer (COCO). For example, without bells and whistles, we achieve state-of-the-art performance with 45.33% mIoU on the ADE20K dataset. Code is available at https://github.com/XingangPan/Switchable-Whitening.", "field": [], "task": ["Domain Adaptation", "Image Classification", "Representation Learning", "Semantic Segmentation", "Style Transfer"], "method": [], "dataset": ["GTAV-to-Cityscapes Labels"], "metric": ["mIoU"], "title": "Switchable Whitening for Deep Representation Learning"} {"abstract": "The area of graph embeddings is currently dominated by contrastive learning methods, which demand formulation of an explicit objective function and sampling of positive and negative examples. This creates a conceptual and computational overhead. Simple, classic unsupervised approaches like Multidimensional Scaling (MDS) or the Laplacian eigenmap skip the necessity of tedious objective optimization, directly exploiting data geometry. Unfortunately, their reliance on very costly operations such as matrix eigendecomposition makes them unable to scale to large graphs that are common in today's digital world. In this paper we present Cleora: an algorithm which gets the best of both worlds, being both unsupervised and highly scalable. We show that high quality embeddings can be produced without the popular step-wise learning framework with example sampling. An intuitive learning objective of our algorithm is that a node should be similar to its neighbors, without explicitly pushing disconnected nodes apart. The objective is achieved by iterative weighted averaging of node neighbors' embeddings, followed by normalization across dimensions. Thanks to the averaging operation, the algorithm makes rapid strides across the embedding space and usually reaches optimal embeddings in just a few iterations.
Cleora runs faster than other state-of-the-art CPU algorithms and produces embeddings of competitive quality as measured on downstream tasks: link prediction and node classification. We show that Cleora learns a data abstraction that is similar to contrastive methods, yet at much lower computational cost. We open-source Cleora under the MIT license, allowing commercial use, at https://github.com/Synerise/cleora.", "field": [], "task": ["Graph Embedding", "Link Prediction", "Node Classification", "Recommendation Systems"], "method": [], "dataset": ["Cora", "YouTube"], "metric": ["Macro-F1@2%", "Micro-F1@2%", "Accuracy"], "title": "Cleora: A Simple, Strong and Scalable Graph Embedding Scheme"} {"abstract": "Memory-based neural networks model temporal data by leveraging an ability to\nremember information for long periods. It is unclear, however, whether they\nalso have an ability to perform complex relational reasoning with the\ninformation they remember. Here, we first confirm our intuitions that standard\nmemory architectures may struggle at tasks that heavily involve an\nunderstanding of the ways in which entities are connected -- i.e., tasks\ninvolving relational reasoning. We then improve upon these deficits by using a\nnew memory module -- a Relational Memory Core (RMC) -- which employs\nmulti-head dot product attention to allow memories to interact. Finally, we\ntest the RMC on a suite of tasks that may profit from more capable relational\nreasoning across sequential information, and show large gains in RL domains\n(e.g. Mini PacMan), program evaluation, and language modeling, achieving\nstate-of-the-art results on the WikiText-103, Project Gutenberg, and GigaWord\ndatasets.", "field": [], "task": ["Language Modelling", "Relational Reasoning"], "method": [], "dataset": ["WikiText-103"], "metric": ["Validation perplexity", "Test perplexity"], "title": "Relational recurrent neural networks"} {"abstract": "The human visual system has the remarkable ability to effortlessly learn\nnovel concepts from only a few examples. Mimicking the same behavior in\nmachine learning vision systems is an interesting and very challenging research\nproblem with many practical advantages for real-world vision applications. In\nthis context, the goal of our work is to devise a few-shot visual learning\nsystem that, at test time, is able to efficiently learn novel categories from\nonly a few training examples while not forgetting the initial categories on\nwhich it was trained (here called base categories). To achieve that goal we\npropose (a) to extend an object recognition system with an attention based\nfew-shot classification weight generator, and (b) to redesign the classifier of\na ConvNet model as the cosine similarity function between feature\nrepresentations and classification weight vectors. The latter, apart from\nunifying the recognition of both novel and base categories, also leads to\nfeature representations that generalize better on "unseen" categories.
We extensively evaluate our approach on Mini-ImageNet\nwhere we manage to improve the prior state-of-the-art on few-shot recognition\n(i.e., we achieve 56.20% and 73.00% on the 1-shot and 5-shot settings\nrespectively) while at the same time we do not sacrifice any accuracy on the\nbase categories, which is a characteristic that most prior approaches lack.\nFinally, we apply our approach on the recently introduced few-shot benchmark of\nBharath and Girshick [4] where we also achieve state-of-the-art results. The\ncode and models of our paper will be published on:\nhttps://github.com/gidariss/FewShotWithoutForgetting", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Object Recognition", "One-Shot Learning"], "method": [], "dataset": ["Mini-Imagenet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Dynamic Few-Shot Visual Learning without Forgetting"} {"abstract": "The goal of semi-supervised learning is to utilize the unlabeled, in-domain dataset U to improve models trained on the labeled dataset D. Under the context of large-scale language-model (LM) pretraining, how we can make the best use of U is poorly understood: is semi-supervised learning still beneficial with the presence of large-scale pretraining? should U be used for in-domain LM pretraining or pseudo-label generation? how should the pseudo-label based semi-supervised model be actually implemented? how different semi-supervised strategies affect performances regarding D of different sizes, U of different sizes, etc. In this paper, we conduct comprehensive studies on semi-supervised learning in the task of text classification under the context of large-scale LM pretraining. Our studies shed important lights on the behavior of semi-supervised learning methods: (1) with the presence of in-domain pretraining LM on U, open-domain LM pretraining is unnecessary; (2) both the in-domain pretraining strategy and the pseudo-label based strategy introduce significant performance boosts, with the former performing better with larger U, the latter performing better with smaller U, and the combination leading to the largest performance boost; (3) self-training (pretraining first on pseudo labels D' and then fine-tuning on D) yields better performances when D is small, while joint training on the combination of pseudo labels D' and the original dataset D yields better performances when D is large. Using semi-supervised learning strategies, we are able to achieve a performance of around 93.8% accuracy with only 50 training data points on the IMDB dataset, and a competitive performance of 96.6% with the full IMDB dataset. Our work marks an initial step in understanding the behavior of semi-supervised learning models under the context of large-scale pretraining.", "field": [], "task": ["Language Modelling", "Text Classification"], "method": [], "dataset": ["IMDb"], "metric": ["Accuracy (2 classes)", "Accuracy (10 classes)"], "title": "Neural Semi-supervised Learning for Text Classification Under Large-Scale Pretraining"} {"abstract": "In this work, we study 3D object detection from RGB-D data in both indoor and\noutdoor scenes. While previous methods focus on images or 3D voxels, often\nobscuring natural 3D patterns and invariances of 3D data, we directly operate\non raw point clouds by popping up RGB-D scans. However, a key challenge of this\napproach is how to efficiently localize objects in point clouds of large-scale\nscenes (region proposal). 
Instead of solely relying on 3D proposals, our method\nleverages both mature 2D object detectors and advanced 3D deep learning for\nobject localization, achieving efficiency as well as high recall for even small\nobjects. Benefited from learning directly in raw point clouds, our method is\nalso able to precisely estimate 3D bounding boxes even under strong occlusion\nor with very sparse points. Evaluated on KITTI and SUN RGB-D 3D detection\nbenchmarks, our method outperforms the state of the art by remarkable margins\nwhile having real-time capability.", "field": [], "task": ["3D Object Detection", "Object Detection", "Object Localization", "Region Proposal"], "method": [], "dataset": ["KITTI Pedestrians Hard", "KITTI Cars Moderate", "KITTI Cyclists Moderate", "KITTI Cars Hard val", "KITTI Pedestrian Moderate val", "KITTI Cyclist Moderate val", "SUN-RGBD val", "KITTI Cars Hard", "KITTI Pedestrians Easy", "KITTI Cyclist Hard val", "KITTI Cars Moderate val", "KITTI Pedestrian Easy val", "KITTI Cyclist Easy val", "SUN-RGBD", "KITTI Cars Easy val", "KITTI Cyclists Hard", "KITTI Pedestrians Moderate", "KITTI Cyclists Easy", "KITTI Pedestrian Hard val", "KITTI Cars Easy"], "metric": ["AP", "MAP"], "title": "Frustum PointNets for 3D Object Detection from RGB-D Data"} {"abstract": "We generalize the scattering transform to graphs and consequently construct a\nconvolutional neural network on graphs. We show that under certain conditions,\nany feature generated by such a network is approximately invariant to\npermutations and stable to graph manipulations. Numerical results demonstrate\ncompetitive performance on relevant datasets.", "field": [], "task": ["Node Classification"], "method": [], "dataset": ["Cora"], "metric": ["Accuracy"], "title": "Graph Convolutional Neural Networks via Scattering"} {"abstract": "This paper aims at high-accuracy 3D object detection in autonomous driving\nscenario. We propose Multi-View 3D networks (MV3D), a sensory-fusion framework\nthat takes both LIDAR point cloud and RGB images as input and predicts oriented\n3D bounding boxes. We encode the sparse 3D point cloud with a compact\nmulti-view representation. The network is composed of two subnetworks: one for\n3D object proposal generation and another for multi-view feature fusion. The\nproposal network generates 3D candidate boxes efficiently from the bird's eye\nview representation of 3D point cloud. We design a deep fusion scheme to\ncombine region-wise features from multiple views and enable interactions\nbetween intermediate layers of different paths. Experiments on the challenging\nKITTI benchmark show that our approach outperforms the state-of-the-art by\naround 25% and 30% AP on the tasks of 3D localization and 3D detection. In\naddition, for 2D detection, our approach obtains 10.3% higher AP than the\nstate-of-the-art on the hard data among the LIDAR-based methods.", "field": [], "task": ["3D Object Detection", "Autonomous Driving", "Object Detection", "Object Proposal Generation"], "method": [], "dataset": ["KITTI Cars Moderate val", "KITTI Cars Hard val", "KITTI Pedestrian Moderate val", "KITTI Pedestrian Easy val", "KITTI Cyclist Easy val", "KITTI Cyclist Moderate val", "KITTI Cyclist Hard val", "KITTI Pedestrian Hard val", "KITTI Cars Easy val"], "metric": ["AP"], "title": "Multi-View 3D Object Detection Network for Autonomous Driving"} {"abstract": "State-of-the-art human pose estimation methods are based on heat map\nrepresentation. 
In spite of the good performance, the representation has a few\ninherent issues, such as non-differentiability and quantization error. This work\nshows that a simple integral operation relates and unifies the heat map\nrepresentation and joint regression, thus avoiding the above issues. It is\ndifferentiable, efficient, and compatible with any heat map based method. Its\neffectiveness is convincingly validated via comprehensive ablation experiments\nunder various settings, specifically on 3D pose estimation, for the first time.", "field": [], "task": ["3D Pose Estimation", "Pose Estimation", "Quantization", "Regression"], "method": [], "dataset": ["MPII Human Pose"], "metric": ["PCKh-0.5"], "title": "Integral Human Pose Regression"} {"abstract": "Object detection in point clouds is an important aspect of many robotics applications such as autonomous driving. In this paper we consider the problem of encoding a point cloud into a format appropriate for a downstream detection pipeline. Recent literature suggests two types of encoders; fixed encoders tend to be fast but sacrifice accuracy, while encoders that are learned from data are more accurate, but slower. In this work we propose PointPillars, a novel encoder which utilizes PointNets to learn a representation of point clouds organized in vertical columns (pillars). While the encoded features can be used with any standard 2D convolutional detection architecture, we further propose a lean downstream network. Extensive experimentation shows that PointPillars outperforms previous encoders with respect to both speed and accuracy by a large margin. Despite only using lidar, our full detection pipeline significantly outperforms the state of the art, even among fusion methods, with respect to both the 3D and bird's eye view KITTI benchmarks. This detection performance is achieved while running at 62 Hz: a 2 - 4 fold runtime improvement. A faster version of our method matches the state of the art at 105 Hz. These benchmarks suggest that PointPillars is an appropriate encoding for object detection in point clouds.", "field": [], "task": ["3D Object Detection", "Autonomous Driving", "Birds Eye View Object Detection", "Object Detection"], "method": [], "dataset": ["KITTI Cars Hard", "KITTI Cyclists Hard", "KITTI Cars Moderate", "KITTI Cyclists Moderate", "KITTI Pedestrians Moderate", "KITTI Cyclists Easy", "KITTI Cars Easy"], "metric": ["AP"], "title": "PointPillars: Fast Encoders for Object Detection from Point Clouds"} {"abstract": "We study the problem of distilling knowledge from a large deep teacher network to a much smaller student network for the task of road marking segmentation. In this work, we explore a novel knowledge distillation (KD) approach that can transfer 'knowledge' on scene structure more effectively from a teacher to a student model. Our method is known as Inter-Region Affinity KD (IntRA-KD). It decomposes a given road scene image into different regions and represents each region as a node in a graph. An inter-region affinity graph is then formed by establishing pairwise relationships between nodes based on their similarity in feature distribution. To learn structural knowledge from the teacher network, the student is required to match the graph generated by the teacher. The proposed method shows promising results on three large-scale road marking segmentation benchmarks, i.e., ApolloScape, CULane and LLAMAS, by taking various lightweight models as students and ResNet-101 as the teacher.
IntRA-KD consistently brings higher performance gains on all lightweight models, compared to previous distillation methods. Our code is available at https://github.com/cardwing/Codes-for-IntRA-KD.", "field": [], "task": ["Knowledge Distillation", "Lane Detection", "Semantic Segmentation"], "method": [], "dataset": ["CULane", "Apolloscape"], "metric": ["F1 score", "mIoU"], "title": "Inter-Region Affinity Distillation for Road Marking Segmentation"} {"abstract": "This paper studies the problem of blind face restoration from an\nunconstrained blurry, noisy, low-resolution, or compressed image (i.e.,\ndegraded observation). For better recovery of fine facial details, we modify\nthe problem setting by taking both the degraded observation and a high-quality\nguided image of the same identity as input to our guided face restoration\nnetwork (GFRNet). However, the degraded observation and guided image generally\ndiffer in pose, illumination and expression, thereby making plain CNNs\n(e.g., U-Net) fail to recover fine and identity-aware facial details. To tackle\nthis issue, our GFRNet model includes both a warping subnetwork (WarpNet) and a\nreconstruction subnetwork (RecNet). The WarpNet is introduced to predict a flow\nfield for warping the guided image to correct pose and expression (i.e., warped\nguidance), while the RecNet takes the degraded observation and warped guidance\nas input to produce the restoration result. Because the ground-truth flow\nfield is unavailable, a landmark loss together with total variation\nregularization is incorporated to guide the learning of WarpNet. Furthermore,\nto make the model applicable to blind restoration, our GFRNet is trained on\nsynthetic data with versatile settings on blur kernel, noise level,\ndownsampling scale factor, and JPEG quality factor. Experiments show that our\nGFRNet not only performs favorably against the state-of-the-art image and face\nrestoration methods, but also generates visually photo-realistic results on\nreal degraded facial images.", "field": [], "task": [], "method": [], "dataset": ["VggFace2 - 8x upscaling", "WebFace - 8x upscaling"], "metric": ["PSNR"], "title": "Learning Warped Guidance for Blind Face Restoration"} {"abstract": "The ability to generalize quickly from few observations is crucial for\nintelligent systems. In this paper we introduce APL, an algorithm that\napproximates probability distributions by remembering the most surprising\nobservations it has encountered. These past observations are recalled from an\nexternal memory module and processed by a decoder network that can combine\ninformation from different memory slots to generalize beyond direct recall. We\nshow this algorithm can perform as well as state-of-the-art baselines on\nfew-shot classification benchmarks with a smaller memory footprint. In\naddition, its memory compression allows it to scale to thousands of unknown\nlabels. Finally, we introduce a meta-learning reasoning task which is more\nchallenging than direct classification.
In this setting, APL is able to\ngeneralize with fewer than one example per class via deductive reasoning.", "field": [], "task": ["Few-Shot Learning", "Meta-Learning"], "method": [], "dataset": ["OMNIGLOT - 1-Shot, 5-way", "OMNIGLOT - 1-Shot, 1000 way", "OMNIGLOT - 5-Shot, 1000 way", "OMNIGLOT - 5-Shot, 20-way", "OMNIGLOT - 1-Shot, 423 way", "OMNIGLOT - 5-Shot, 5-way", "OMNIGLOT - 1-Shot, 20-way", "OMNIGLOT - 5-Shot, 423 way"], "metric": ["Accuracy"], "title": "Adaptive Posterior Learning: few-shot learning with a surprise-based memory module"} {"abstract": "In this paper, we adapt Recurrent Neural Networks with Stochastic Layers,\nwhich are the state-of-the-art for generating text, music and speech, to the\nproblem of acoustic novelty detection. By integrating uncertainty into the\nhidden states, this type of network is able to learn the distribution of\ncomplex sequences. Because the learned distribution can be calculated\nexplicitly in terms of probability, we can evaluate how likely an observation\nis and then detect low-probability events as novel. The model is robust, highly\nunsupervised, end-to-end and requires minimal preprocessing, feature\nengineering or hyperparameter tuning. An experiment on a benchmark dataset\nshows that our model outperforms the state-of-the-art acoustic novelty\ndetectors.", "field": [], "task": ["Acoustic Novelty Detection", "Feature Engineering"], "method": [], "dataset": ["A3Lab PASCAL CHiME"], "metric": ["F1"], "title": "Recurrent Neural Networks with Stochastic Layers for Acoustic Novelty Detection"} {"abstract": "Finding the best neural network architecture requires significant time,\nresources, and human expertise. These challenges are partially addressed by\nneural architecture search (NAS), which is able to find the best convolutional\nlayer or cell that is then used as a building block for the network. However,\nonce a good building block is found, manual design is still required to\nassemble the final architecture as a combination of multiple blocks under a\npredefined parameter budget constraint. A common solution is to stack these\nblocks into a single tower and adjust the width and depth to fill the parameter\nbudget. However, these single tower architectures may not be optimal. Instead,\nin this paper we present the AdaNAS algorithm, which uses ensemble techniques to\ncompose a neural network as an ensemble of smaller networks automatically.\nAdditionally, we introduce a novel technique based on knowledge distillation to\niteratively train the smaller networks using the previous ensemble as a\nteacher. Our experiments demonstrate that ensembles of networks improve\naccuracy upon a single neural network while keeping the same number of\nparameters. Our models achieve comparable results with the state-of-the-art on\nCIFAR-10 and set a new state-of-the-art on CIFAR-100.", "field": [], "task": ["Image Classification", "Knowledge Distillation", "Neural Architecture Search"], "method": [], "dataset": ["CIFAR-100"], "metric": ["Percentage correct"], "title": "Improving Neural Architecture Search Image Classifiers via Ensemble Learning"} {"abstract": "Person re-identification (re-id) remains challenging due to significant intra-class variations across different cameras. Recently, there has been a growing interest in using generative models to augment training data and enhance the invariance to input changes. The generative pipelines in existing methods, however, stay relatively separate from the discriminative re-id learning stages.
Accordingly, re-id models are often trained in a straightforward manner on the generated data. In this paper, we seek to improve learned re-id embeddings by better leveraging the generated data. To this end, we propose a joint learning framework that couples re-id learning and data generation end-to-end. Our model involves a generative module that separately encodes each person into an appearance code and a structure code, and a discriminative module that shares the appearance encoder with the generative module. By switching the appearance or structure codes, the generative module is able to generate high-quality cross-id composed images, which are online fed back to the appearance encoder and used to improve the discriminative module. The proposed joint learning framework renders significant improvement over the baseline without using generated data, leading to the state-of-the-art performance on several benchmark datasets.", "field": [], "task": ["Image Generation", "Image-to-Image Translation", "Person Re-Identification"], "method": [], "dataset": ["MSMT17", "Market-1501", "DukeMTMC-reID", "CUHK03"], "metric": ["mAP", "Rank-10", "MAP", "Rank-1", "Rank-5"], "title": "Joint Discriminative and Generative Learning for Person Re-identification"} {"abstract": "Inverse problems in imaging are extensively studied, with a variety of strategies, tools, and theory that have been accumulated over the years. Recently, this field has been immensely influenced by the emergence of deep-learning techniques. One such contribution, which is the focus of this paper, is the Deep Image Prior (DIP) work by Ulyanov, Vedaldi, and Lempitsky (2018). DIP offers a new approach towards the regularization of inverse problems, obtained by forcing the recovered image to be synthesized from a given deep architecture. While DIP has been shown to be quite an effective unsupervised approach, its results still fall short when compared to state-of-the-art alternatives. In this work, we aim to boost DIP by adding an explicit prior, which enriches the overall regularization effect in order to lead to better-recovered images. More specifically, we propose to bring-in the concept of Regularization by Denoising (RED), which leverages existing denoisers for regularizing inverse problems. Our work shows how the two (DIP and RED) can be merged into a highly effective unsupervised recovery process while avoiding the need to differentiate the chosen denoiser, and leading to very effective results, demonstrated for several tested problems.", "field": [], "task": ["Deblurring", "Denoising", "Image Super-Resolution"], "method": [], "dataset": ["Set5 - 4x upscaling", "Set14 - 8x upscaling", "Set14 - 4x upscaling", "Set5 - 8x upscaling"], "metric": ["PSNR"], "title": "DeepRED: Deep Image Prior Powered by RED"} {"abstract": "Adversarial training, in which a network is trained on adversarial examples, is one of the few defenses against adversarial attacks that withstands strong attacks. Unfortunately, the high cost of generating strong adversarial examples makes standard adversarial training impractical on large-scale problems like ImageNet. We present an algorithm that eliminates the overhead cost of generating adversarial examples by recycling the gradient information computed when updating model parameters. 
Our \"free\" adversarial training algorithm achieves comparable robustness to PGD adversarial training on the CIFAR-10 and CIFAR-100 datasets at negligible additional cost compared to natural training, and can be 7 to 30 times faster than other strong adversarial training methods. Using a single workstation with 4 P100 GPUs and 2 days of runtime, we can train a robust model for the large-scale ImageNet classification task that maintains 40% accuracy against PGD attacks. The code is available at https://github.com/ashafahi/free_adv_train.", "field": [], "task": ["Adversarial Attack", "Adversarial Defense"], "method": [], "dataset": ["ImageNet (non-targeted PGD, max perturbation=4)"], "metric": ["Accuracy"], "title": "Adversarial Training for Free!"} {"abstract": "Sequence-to-sequence models have recently gained the state of the art\nperformance in summarization. However, not too many large-scale high-quality\ndatasets are available and almost all the available ones are mainly news\narticles with specific writing style. Moreover, abstractive human-style systems\ninvolving description of the content at a deeper level require data with higher\nlevels of abstraction. In this paper, we present WikiHow, a dataset of more\nthan 230,000 article and summary pairs extracted and constructed from an online\nknowledge base written by different human authors. The articles span a wide\nrange of topics and therefore represent high diversity styles. We evaluate the\nperformance of the existing methods on WikiHow to present its challenges and\nset some baselines to further improve it.", "field": [], "task": ["Text Summarization"], "method": [], "dataset": ["WikiHow"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "WikiHow: A Large Scale Text Summarization Dataset"} {"abstract": "The recent trend in vision-based multi-object tracking (MOT) is heading towards leveraging the representational power of deep learning to jointly learn to detect and track objects. However, existing methods train only certain sub-modules using loss functions that often do not correlate with established tracking evaluation measures such as Multi-Object Tracking Accuracy (MOTA) and Precision (MOTP). As these measures are not differentiable, the choice of appropriate loss functions for end-to-end training of multi-object tracking methods is still an open research problem. In this paper, we bridge this gap by proposing a differentiable proxy of MOTA and MOTP, which we combine in a loss function suitable for end-to-end training of deep multi-object trackers. As a key ingredient, we propose a Deep Hungarian Net (DHN) module that approximates the Hungarian matching algorithm. DHN allows estimating the correspondence between object tracks and ground truth objects to compute differentiable proxies of MOTA and MOTP, which are in turn used to optimize deep trackers directly. We experimentally demonstrate that the proposed differentiable framework improves the performance of existing multi-object trackers, and we establish a new state of the art on the MOTChallenge benchmark. Our code is publicly available from https://github.com/yihongXU/deepMOT.", "field": [], "task": ["Multi-Object Tracking", "Multiple Object Tracking", "Object Tracking"], "method": [], "dataset": ["2D MOT 2015", "MOT16", "MOT17"], "metric": ["MOTA"], "title": "How To Train Your Deep Multi-Object Tracker"} {"abstract": "Pedestrian detection is a critical problem in computer vision with\nsignificant impact on safety in urban autonomous driving. 
In this work, we\nexplore how semantic segmentation can be used to boost pedestrian detection\naccuracy while having little to no impact on network efficiency. We propose a\nsegmentation infusion network to enable joint supervision on semantic\nsegmentation and pedestrian detection. When placed properly, the additional\nsupervision helps guide features in shared layers to become more sophisticated\nand helpful for the downstream pedestrian detector. Using this approach, we\nfind weakly annotated boxes to be sufficient for considerable performance\ngains. We provide an in-depth analysis to demonstrate how shared layers are\nshaped by the segmentation supervision. In doing so, we show that the resulting\nfeature maps become more semantically meaningful and robust to shape and\nocclusion. Overall, our simultaneous detection and segmentation framework\nachieves a considerable gain over the state-of-the-art on the Caltech\npedestrian dataset, competitive performance on KITTI, and executes 2x faster\nthan competitive methods.", "field": [], "task": ["Autonomous Driving", "Pedestrian Detection", "Semantic Segmentation"], "method": [], "dataset": ["Caltech"], "metric": ["Reasonable Miss Rate"], "title": "Illuminating Pedestrians via Simultaneous Detection & Segmentation"} {"abstract": "The recent years have seen remarkable success in the use of deep neural networks on text summarization. However, there is no clear understanding of \\textit{why} they perform so well, or \\textit{how} they might be improved. In this paper, we seek to better understand how neural extractive summarization systems could benefit from different types of model architectures, transferable knowledge and learning schemas. Additionally, we find an effective way to improve current frameworks and achieve the state-of-the-art result on CNN/DailyMail by a large margin based on our observations and analyses. Hopefully, our work could provide more clues for future research on extractive summarization.", "field": [], "task": ["Extractive Text Summarization", "Text Summarization"], "method": [], "dataset": ["CNN / Daily Mail"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Searching for Effective Neural Extractive Summarization: What Works and What's Next"} {"abstract": "We show that existing upsampling operators can be unified using the notion of the index function. This notion is inspired by an observation in the decoding process of deep image matting where indices-guided unpooling can often recover boundary details considerably better than other upsampling operators such as bilinear interpolation. By viewing the indices as a function of the feature map, we introduce the concept of \"learning to index\", and present a novel index-guided encoder-decoder framework where indices are self-learned adaptively from data and are used to guide the downsampling and upsampling stages, without extra training supervision. At the core of this framework is a new learnable module, termed Index Network (IndexNet), which dynamically generates indices conditioned on the feature map itself. IndexNet can be used as a plug-in applying to almost all off-the-shelf convolutional networks that have coupled downsampling and upsampling stages, giving the networks the ability to dynamically capture variations of local patterns. 
In particular, we instantiate and investigate five families of IndexNet and demonstrate their effectiveness on four dense prediction tasks, including image denoising, image matting, semantic segmentation, and monocular depth estimation. Code and models have been made available at: https://tinyurl.com/IndexNetV1", "field": [], "task": ["Denoising", "Depth Estimation", "Grayscale Image Denoising", "Image Denoising", "Image Matting", "Monocular Depth Estimation", "Scene Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["Set12 sigma15", "BSD68 sigma15", "Set12 sigma50", "BSD68 sigma50", "SUN-RGBD", "Set12 sigma30", "BSD68 sigma25", "NYU-Depth V2"], "metric": ["Mean IoU", "PSNR", "RMSE"], "title": "Index Network"} {"abstract": "Word segmentation is a fundamental pre-processing step for Thai Natural Language Processing. The current off-the-shelf solutions are not benchmarked consistently, so it is difficult to compare their trade-offs. We conducted a speed and accuracy comparison of the popular systems on three different domains and found that the state-of-the-art deep learning system is slow and moreover does not use sub-word structures to guide the model. Here, we propose a fast and accurate neural Thai Word Segmenter that uses dilated CNN filters to capture the environment of each character and uses syllable embeddings as features. Our system runs at least 5.6x faster and outperforms the previous state-of-the-art system on some domains. In addition, we develop the first ML-based Thai orthographical syllable segmenter, which yields syllable embeddings to be used as features by the word segmenter.", "field": [], "task": ["Thai Word Segmentation", "Tokenization"], "method": [], "dataset": ["BEST-2010"], "metric": ["F1-Score"], "title": "AttaCut: A Fast and Accurate Neural Thai Word Segmenter"} {"abstract": "Recently, different machine learning methods have been introduced to tackle the challenging few-shot learning scenario that is, learning from a small labeled dataset related to a specific task. Common approaches have taken the form of meta-learning: learning to learn on the new problem given the old. Following the recognition that meta-learning is implementing learning in a multi-level model, we present a Bayesian treatment for the meta-learning inner loop through the use of deep kernels. As a result we can learn a kernel that transfers to new tasks; we call this Deep Kernel Transfer (DKT). This approach has many advantages: is straightforward to implement as a single optimizer, provides uncertainty quantification, and does not require estimation of task-specific parameters. We empirically demonstrate that DKT outperforms several state-of-the-art algorithms in few-shot classification, and is the state of the art for cross-domain adaptation and regression. 
We conclude that complex meta-learning routines can be replaced by a simpler Bayesian model without loss of accuracy.", "field": [], "task": ["Bayesian Inference", "Domain Adaptation", "Few-Shot Image Classification", "Few-Shot Learning", "Few-shot Regression", "Gaussian Processes", "Meta-Learning", "Regression"], "method": [], "dataset": ["OMNIGLOT-EMNIST 5-way (1-shot)", "Mini-Imagenet 5-way (1-shot)", "OMNIGLOT-EMNIST 5-way (5-shot)", "Mini-Imagenet 5-way (5-shot)", "Mini-ImageNet-CUB 5-way (1-shot)", "Mini-ImageNet-CUB 5-way (5-shot)", "CUB 200 5-way 1-shot", "CUB 200 5-way 5-shot"], "metric": ["Accuracy"], "title": "Bayesian Meta-Learning for the Few-Shot Setting via Deep Kernels"} {"abstract": "Knowledge graphs are structured representations of real world facts. However, they typically contain only a small subset of all possible facts. Link prediction is the task of inferring missing facts based on existing ones. We propose TuckER, a relatively straightforward but powerful linear model based on Tucker decomposition of the binary tensor representation of knowledge graph triples. TuckER outperforms previous state-of-the-art models across standard link prediction datasets, acting as a strong baseline for more elaborate models. We show that TuckER is a fully expressive model, derive sufficient bounds on its embedding dimensionalities and demonstrate that several previously introduced linear models can be viewed as special cases of TuckER.", "field": [], "task": ["Knowledge Graph Completion", "Knowledge Graphs", "Link Prediction"], "method": [], "dataset": [" FB15k", "WN18RR", "WN18", "FB15k-237"], "metric": ["Hits@10", "MRR", "Hits@3", "Hits@1"], "title": "TuckER: Tensor Factorization for Knowledge Graph Completion"} {"abstract": "Recently, convolutional neural networks with 3D kernels (3D CNNs) have been very popular in the computer vision community as a result of their superior ability to extract spatio-temporal features from video frames compared to 2D CNNs. Although there have been great advances recently in building resource-efficient 2D CNN architectures considering memory and power budgets, there are hardly any similar resource-efficient architectures for 3D CNNs. In this paper, we have converted various well-known resource-efficient 2D CNNs to 3D CNNs and evaluated their performance on three major benchmarks in terms of classification accuracy for different complexity levels. We have experimented on (1) the Kinetics-600 dataset to inspect their capacity to learn, (2) the Jester dataset to inspect their ability to capture motion patterns, and (3) UCF-101 to inspect the applicability of transfer learning. We have evaluated the run-time performance of each model on a single Titan XP GPU and a Jetson TX2 embedded system. The results of this study show that these models can be utilized for different types of real-world applications since they provide real-time performance with considerable accuracies and memory usage. Our analysis of different complexity levels shows that resource-efficient 3D CNNs should not be designed to be too shallow or narrow in order to save complexity. The codes and pretrained models used in this work are publicly available.", "field": [], "task": ["Action Recognition", "Transfer Learning"], "method": [], "dataset": ["Jester", "UCF101"], "metric": ["Val", "3-fold Accuracy"], "title": "Resource Efficient 3D Convolutional Neural Networks"} {"abstract": "The last decade has witnessed a growing interest in video salient object detection (VSOD).
However, the research community has long lacked a well-established VSOD dataset representative of real dynamic scenes with high-quality annotations. To address this issue, we elaborately collected a visual-attention-consistent Densely Annotated VSOD (DAVSOD) dataset, which contains 226 videos with 23,938 frames that cover diverse realistic scenes, objects, instances and motions. With corresponding real human eye-fixation data, we obtain precise ground-truths. This is the first work that explicitly emphasizes the challenge of saliency shift, i.e., the video salient object(s) may dynamically change. To further contribute a complete benchmark to the community, we systematically assess 17 representative VSOD algorithms over seven existing VSOD datasets and our DAVSOD with 84K frames in total (the largest scale). Utilizing three widely used metrics, we then present a comprehensive and insightful performance analysis. Furthermore, we propose a baseline model. It is equipped with a saliency-shift-aware convLSTM, which can efficiently capture video saliency dynamics through learning human attention-shift behavior. Extensive experiments open up promising future directions for model development and comparison.", "field": [], "task": ["Object Detection", "RGB Salient Object Detection", "Salient Object Detection", "Video Object Segmentation", "Video Salient Object Detection"], "method": [], "dataset": ["ViSal", "MCL", "DAVIS-2016", "DAVSOD-Difficult20", "VOS-T", "DAVSOD-Normal25", "SegTrack v2", "UVSD", "DAVSOD-easy35", "DAVIS 2016", "FBMS-59"], "metric": ["max E-Measure", "MAX F-MEASURE", "S-Measure", "AVERAGE MAE", "Average MAE", "max E-measure", "MAX E-MEASURE", "max F-Measure"], "title": "Shifting More Attention to Video Salient Object Detection"} {"abstract": "We develop a functional encoder-decoder approach to supervised meta-learning, where labeled data is encoded into an infinite-dimensional functional representation rather than a finite-dimensional one. Furthermore, rather than directly producing the representation, we learn a neural update rule resembling functional gradient descent which iteratively improves the representation. The final representation is used to condition the decoder to make predictions on unlabeled data. Our approach is the first to demonstrate the success of encoder-decoder style meta-learning methods like conditional neural processes on large-scale few-shot classification benchmarks such as miniImageNet and tieredImageNet, where it achieves state-of-the-art performance.", "field": [], "task": ["Few-Shot Image Classification", "Meta-Learning"], "method": [], "dataset": ["Mini-Imagenet 5-way (1-shot)", "Tiered ImageNet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "Tiered ImageNet 5-way (5-shot)"], "metric": ["Accuracy"], "title": "MetaFun: Meta-Learning with Iterative Functional Updates"} {"abstract": "Text contained in an image carries high-level semantics that can be exploited to achieve richer image understanding. In particular, the mere presence of text provides strong guiding content that should be employed to tackle a diversity of computer vision tasks such as image retrieval, fine-grained classification, and visual question answering. In this paper, we address the problem of fine-grained classification and image retrieval by leveraging textual information along with visual cues to comprehend the existing intrinsic relation between the two modalities.
The novelty of the proposed model consists in the use of a PHOC descriptor to construct a bag of textual words along with a Fisher Vector Encoding that captures the morphology of text. This approach provides a stronger multimodal representation for this task and, as our experiments demonstrate, achieves state-of-the-art results on two different tasks, fine-grained classification and image retrieval.", "field": [], "task": ["Fine-Grained Image Classification", "Image Classification", "Image Retrieval", "Question Answering", "Visual Question Answering"], "method": [], "dataset": ["Con-Text", "Bottles"], "metric": ["mAP"], "title": "Fine-grained Image Classification and Retrieval by Combining Visual and Locally Pooled Textual Features"} {"abstract": "Edge nodes are crucial for detecting the multitudes of cyber attacks on Internet-of-Things endpoints and are set to become part of a multi-billion-dollar industry. The resource constraints in this novel network infrastructure tier constrict the deployment of existing Network Intrusion Detection Systems with Deep Learning models (DLM). We address this issue by developing a novel light, fast and accurate 'Edge-Detect' model, which detects Distributed Denial of Service attacks on edge nodes using DLM techniques. Our model can work within resource restrictions, i.e., low power, memory and processing capabilities, to produce accurate results at a meaningful pace. It is built by creating layers of Long Short-Term Memory or Gated Recurrent Unit based cells, which are known for their excellent representation of sequential data. We designed a practical data science pipeline with a Recurrent Neural Network to learn from the network packet behavior in order to identify whether it is normal or attack-oriented. The model is evaluated by deployment on an actual edge node, represented by a Raspberry Pi, using a current cybersecurity dataset (UNSW2015). Our results demonstrate that in comparison to conventional DLM techniques, our model maintains a high testing accuracy of 99% even with lower resource utilization in terms of CPU and memory. In addition, it is nearly 3 times smaller in size than the state-of-the-art model and yet requires much less testing time.", "field": [], "task": ["Edge-computing", "Intrusion Detection", "Network Intrusion Detection"], "method": [], "dataset": ["UNSW-NB15"], "metric": ["Precision", "Recall", "Accuracy"], "title": "Edge-Detect: Edge-centric Network Intrusion Detection using Deep Neural Network"} {"abstract": "We present ToTTo, an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description. To obtain generated targets that are natural but also faithful to the source table, we introduce a dataset construction process where annotators directly revise existing candidate sentences from Wikipedia. We present systematic analyses of our dataset and annotation process as well as results achieved by several state-of-the-art baselines.
While usually fluent, existing methods often hallucinate phrases that are not supported by the table, suggesting that this dataset can serve as a useful research benchmark for high-precision conditional text generation.", "field": [], "task": ["Conditional Text Generation", "Data-to-Text Generation", "Table-to-Text Generation", "Text Generation"], "method": [], "dataset": ["ToTTo"], "metric": ["BLEU", "PARENT"], "title": "ToTTo: A Controlled Table-To-Text Generation Dataset"} {"abstract": "We present a novel approach to sentence simplification which departs from\nprevious work in two main ways. First, it requires neither hand-written rules\nnor a training corpus of aligned standard and simplified sentences. Second,\nsentence splitting operates on deep semantic structure. We show (i) that the\nunsupervised framework we propose is competitive with four state-of-the-art\nsupervised systems and (ii) that our semantic based approach allows for a\nprincipled and effective handling of sentence splitting.", "field": [], "task": ["Text Simplification"], "method": [], "dataset": ["PWKP / WikiSmall"], "metric": ["BLEU"], "title": "Unsupervised Sentence Simplification Using Deep Semantics"} {"abstract": "Noise is a growing problem in urban areas, and according to the WHO is the second environmental cause of health problems in Europe. Noise monitoring using Wireless Sensor Networks is being applied in order to understand and help mitigate these noise problems. It is desirable that these sensor systems, in addition to logging the sound level, can indicate what the likely sound source is. However, transmitting audio to a cloud system for classification is energy-intensive and may cause privacy issues. It is also critical for widespread adoption and dense sensor coverage that individual sensor nodes are low-cost. Therefore, we propose to perform the noise classification on the sensor node, using a low-cost microcontroller.\r\n\r\nSeveral Convolutional Neural Networks were designed for the STM32L476 low-power microcontroller using the Keras deep-learning framework, and deployed using the vendor-provided X-CUBE-AI inference engine. The resource budget for the model was set at a maximum of 50% utilization of CPU, RAM, and FLASH. 10 model variations were evaluated on the Environmental Sound Classification task using the standard Urbansound8k dataset.\r\n\r\nThe best models used Depthwise-Separable convolutions with striding for downsampling, and were able to reach 70.9% mean 10-fold accuracy while consuming only 20% CPU. To our knowledge, this is the highest reported performance on Urbansound8k using a microcontroller. One of the models was also tested on a microcontroller development device, demonstrating the classification of environmental sounds in real-time.\r\n\r\nThese results indicate that it is computationally feasible to classify environmental sound on low-power microcontrollers. Further development should make it possible to create wireless sensor networks for noise monitoring with on-edge noise source classification.", "field": [], "task": ["Environmental Sound Classification"], "method": [], "dataset": ["UrbanSound8k"], "metric": ["Accuracy (10-fold)"], "title": "Environmental Sound Classification on Microcontrollers using Convolutional Neural Networks"} {"abstract": "In multi-person pose estimation, actors can be heavily occluded or even become fully invisible behind another person.
While temporal methods can still predict a reasonable estimation for a temporarily disappeared pose using past and future frames, they exhibit large errors nevertheless. We present an energy minimization approach to generate smooth, valid trajectories in time, bridging gaps in visibility. We show that it is better than other interpolation based approaches and achieves state of the art results. In addition, we present the synthetic MuCo-Temp dataset, a temporal extension of the MuCo-3DHP dataset. Our code is made publicly available.", "field": [], "task": ["3D Human Pose Estimation", "3D Multi-Person Pose Estimation", "3D Multi-Person Pose Estimation (root-relative)", "Multi-Person Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["MuPoTS-3D"], "metric": ["3DPCK", "MPJPE"], "title": "Temporal Smoothing for 3D Human Pose Estimation and Localization for Occluded People"} {"abstract": "Online news recommender systems aim to address the information explosion of\nnews and make personalized recommendation for users. In general, news language\nis highly condensed, full of knowledge entities and common sense. However,\nexisting methods are unaware of such external knowledge and cannot fully\ndiscover latent knowledge-level connections among news. The recommended results\nfor a user are consequently limited to simple patterns and cannot be extended\nreasonably. Moreover, news recommendation also faces the challenges of high\ntime-sensitivity of news and dynamic diversity of users' interests. To solve\nthe above problems, in this paper, we propose a deep knowledge-aware network\n(DKN) that incorporates knowledge graph representation into news\nrecommendation. DKN is a content-based deep recommendation framework for\nclick-through rate prediction. The key component of DKN is a multi-channel and\nword-entity-aligned knowledge-aware convolutional neural network (KCNN) that\nfuses semantic-level and knowledge-level representations of news. KCNN treats\nwords and entities as multiple channels, and explicitly keeps their alignment\nrelationship during convolution. In addition, to address users' diverse\ninterests, we also design an attention module in DKN to dynamically aggregate a\nuser's history with respect to current candidate news. Through extensive\nexperiments on a real online news platform, we demonstrate that DKN achieves\nsubstantial gains over state-of-the-art deep recommendation models. We also\nvalidate the efficacy of the usage of knowledge in DKN.", "field": [], "task": ["Click-Through Rate Prediction", "Common Sense Reasoning", "Recommendation Systems"], "method": [], "dataset": ["Bing News"], "metric": ["AUC"], "title": "DKN: Deep Knowledge-Aware Network for News Recommendation"} {"abstract": "Several mechanisms to focus attention of a neural network on selected parts\nof its input or memory have been used successfully in deep learning models in\nrecent years. Attention has improved image classification, image captioning,\nspeech recognition, generative models, and learning algorithmic tasks, but it\nhad probably the largest impact on neural machine translation.\n Recently, similar improvements have been obtained using alternative\nmechanisms that do not focus on a single part of a memory but operate on all of\nit in parallel, in a uniform way. 
Such mechanism, which we call active memory,\nimproved over attention in algorithmic tasks, image processing, and in\ngenerative modelling.\n So far, however, active memory has not improved over attention for most\nnatural language processing tasks, in particular for machine translation. We\nanalyze this shortcoming in this paper and propose an extended model of active\nmemory that matches existing attention models on neural machine translation and\ngeneralizes better to longer sentences. We investigate this model and explain\nwhy previous active memory models did not succeed. Finally, we discuss when\nactive memory brings most benefits and where attention can be a better choice.", "field": [], "task": ["Image Captioning", "Machine Translation"], "method": [], "dataset": ["WMT2014 English-French"], "metric": ["BLEU score"], "title": "Can Active Memory Replace Attention?"} {"abstract": "Recent advances on 3D object detection heavily rely on how the 3D data are represented, \\emph{i.e.}, voxel-based or point-based representation. Many existing high performance 3D detectors are point-based because this structure can better retain precise point positions. Nevertheless, point-level features lead to high computation overheads due to unordered storage. In contrast, the voxel-based structure is better suited for feature extraction but often yields lower accuracy because the input data are divided into grids. In this paper, we take a slightly different viewpoint -- we find that precise positioning of raw points is not essential for high performance 3D object detection and that the coarse voxel granularity can also offer sufficient detection accuracy. Bearing this view in mind, we devise a simple but effective voxel-based framework, named Voxel R-CNN. By taking full advantage of voxel features in a two stage approach, our method achieves comparable detection accuracy with state-of-the-art point-based models, but at a fraction of the computation cost. Voxel R-CNN consists of a 3D backbone network, a 2D bird-eye-view (BEV) Region Proposal Network and a detect head. A voxel RoI pooling is devised to extract RoI features directly from voxel features for further refinement. Extensive experiments are conducted on the widely used KITTI Dataset and the more recent Waymo Open Dataset. Our results show that compared to existing voxel-based methods, Voxel R-CNN delivers a higher detection accuracy while maintaining a real-time frame processing rate, \\emph{i.e}., at a speed of 25 FPS on an NVIDIA RTX 2080 Ti GPU. The code is available at \\url{https://github.com/djiajunustc/Voxel-R-CNN}.", "field": [], "task": ["3D Object Detection", "Object Detection", "Region Proposal"], "method": [], "dataset": ["KITTI Cars Hard", "KITTI Cars Moderate", "KITTI Cars Moderate val", "KITTI Cars Hard val", "KITTI Cars Easy val", "KITTI Cars Easy"], "metric": ["AP"], "title": "Voxel R-CNN: Towards High Performance Voxel-based 3D Object Detection"} {"abstract": "Domain adaptation is critical for learning in new and unseen environments.\nWith domain adversarial training, deep networks can learn disentangled and\ntransferable features that effectively diminish the dataset shift between the\nsource and target domains for knowledge transfer. In the era of Big Data, the\nready availability of large-scale labeled datasets has stimulated wide interest\nin partial domain adaptation (PDA), which transfers a recognizer from a labeled\nlarge domain to an unlabeled small domain. 
It extends standard domain\nadaptation to the scenario where target labels are only a subset of source\nlabels. Under the condition that target labels are unknown, the key challenge\nof PDA is how to transfer relevant examples in the shared classes to promote\npositive transfer, and ignore irrelevant ones in the specific classes to\nmitigate negative transfer. In this work, we propose a unified approach to PDA,\nExample Transfer Network (ETN), which jointly learns domain-invariant\nrepresentations across the source and target domains, and a progressive\nweighting scheme that quantifies the transferability of source examples while\ncontrolling their importance to the learning task in the target domain. A\nthorough evaluation on several benchmark datasets shows that our approach\nachieves state-of-the-art results for partial domain adaptation tasks.", "field": [], "task": ["Domain Adaptation", "Partial Domain Adaptation", "Transfer Learning"], "method": [], "dataset": ["ImageNet-Caltech", "Office-31", "Office-Home"], "metric": ["Accuracy (%)"], "title": "Learning to Transfer Examples for Partial Domain Adaptation"} {"abstract": "This paper presents stacked attention networks (SANs) that learn to answer\nnatural language questions from images. SANs use semantic representation of a\nquestion as query to search for the regions in an image that are related to the\nanswer. We argue that image question answering (QA) often requires multiple\nsteps of reasoning. Thus, we develop a multiple-layer SAN in which we query an\nimage multiple times to infer the answer progressively. Experiments conducted\non four image QA data sets demonstrate that the proposed SANs significantly\noutperform previous state-of-the-art approaches. The visualization of the\nattention layers illustrates the progress that the SAN locates the relevant\nvisual clues that lead to the answer of the question layer-by-layer.", "field": [], "task": ["Visual Question Answering"], "method": [], "dataset": ["COCO Visual Question Answering (VQA) real images 1.0 open ended", "VQA v1 test-std"], "metric": ["Percentage correct", "Accuracy"], "title": "Stacked Attention Networks for Image Question Answering"} {"abstract": "In this paper we present a system that exploits different pre-trained Language Models for assigning domain labels to WordNet synsets without any kind of supervision. Furthermore, the system is not restricted to use a particular set of domain labels. We exploit the knowledge encoded within different off-the-shelf pre-trained Language Models and task formulations to infer the domain label of a particular WordNet definition. The proposed zero-shot system achieves a new state-of-the-art on the English dataset used in the evaluation.", "field": [], "task": ["Domain Labelling"], "method": [], "dataset": ["BabelDomains"], "metric": ["F1-Score"], "title": "Ask2Transformers: Zero-Shot Domain labelling with Pre-trained Language Models"} {"abstract": "We present a real-time approach for multi-person 3D motion capture at over 30 fps using a single RGB camera. It operates successfully in generic scenes which may contain occlusions by objects and by other people. Our method operates in subsequent stages. 
The first stage is a convolutional neural network (CNN) that estimates 2D and 3D pose features along with identity assignments for all visible joints of all individuals. We contribute a new architecture for this CNN, called SelecSLS Net, that uses novel selective long and short range skip connections to improve the information flow, allowing for a drastically faster network without compromising accuracy. In the second stage, a fully connected neural network turns the possibly partial (on account of occlusion) 2D pose and 3D pose features for each subject into a complete 3D pose estimate per individual. The third stage applies space-time skeletal model fitting to the predicted 2D and 3D pose per subject to further reconcile the 2D and 3D pose, and enforce temporal coherence. Our method returns the full skeletal pose in joint angles for each subject. This is a further key distinction from previous work that does not produce joint angle results of a coherent skeleton in real time for multi-person scenes. The proposed system runs on consumer hardware at a previously unseen speed of more than 30 fps given 512x320 images as input while achieving state-of-the-art accuracy, which we will demonstrate on a range of challenging real-world scenes.", "field": [], "task": ["3D Human Pose Estimation", "Motion Capture", "Pose Estimation"], "method": [], "dataset": ["Human3.6M", "MPI-INF-3DHP", "MuPoTS-3D"], "metric": ["3DPCK", "Average MPJPE (mm)", "MJPE", "AUC"], "title": "XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera"} {"abstract": "Recent approaches based on artificial neural networks (ANNs) have shown\npromising results for short-text classification. However, many short texts\noccur in sequences (e.g., sentences in a document or utterances in a dialog),\nand most existing ANN-based systems do not leverage the preceding short texts\nwhen classifying a subsequent one. In this work, we present a model based on\nrecurrent neural networks and convolutional neural networks that incorporates\nthe preceding short texts. Our model achieves state-of-the-art results on three\ndifferent datasets for dialog act prediction.", "field": [], "task": ["Text Classification"], "method": [], "dataset": ["Switchboard corpus"], "metric": ["Accuracy"], "title": "Sequential Short-Text Classification with Recurrent and Convolutional Neural Networks"} {"abstract": "Generating natural questions from an image is a semantic task that requires using visual and language modality to learn multimodal representations. Images can have multiple visual and language contexts that are relevant for generating questions, namely places, captions, and tags. In this paper, we propose the use of exemplars for obtaining the relevant context. We obtain this by using a Multimodal Differential Network to produce natural and engaging questions. The generated questions show a remarkable similarity to the natural questions as validated by a human study. Further, we observe that the proposed approach substantially improves over state-of-the-art benchmarks on the quantitative metrics (BLEU, METEOR, ROUGE, and CIDEr).", "field": [], "task": ["Question Generation"], "method": [], "dataset": ["COCO Visual Question Answering (VQA) real images 1.0 open ended", "Visual Question Generation"], "metric": ["BLEU-1"], "title": "Multimodal Differential Network for Visual Question Generation"} {"abstract": "This paper presents a novel method for instance segmentation of 3D point clouds. 
The proposed method is called Gaussian Instance Center Network (GICN), which can approximate the distributions of instance centers scattered in the whole scene as Gaussian center heatmaps. Based on the predicted heatmaps, a small number of center candidates can be easily selected for the subsequent predictions with efficiency, including i) predicting the instance size of each center to decide a range for extracting features, ii) generating bounding boxes for centers, and iii) producing the final instance masks. GICN is a single-stage, anchor-free, and end-to-end architecture that is easy to train and efficient at inference. Benefiting from the center-dictated mechanism with adaptive instance size selection, our method achieves state-of-the-art performance in the task of 3D instance segmentation on the ScanNet and S3DIS datasets.", "field": [], "task": ["3D Instance Segmentation", "Instance Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["ScanNet(v2)", "S3DIS"], "metric": ["mRec", "Mean AP @ 0.5", "mPrec"], "title": "Learning Gaussian Instance Segmentation in Point Clouds"} {"abstract": "Recent advances cast the entity-relation extraction\r\nto a multi-turn question answering (QA) task and\r\nprovide an effective solution based on the machine\r\nreading comprehension (MRC) models. However,\r\nthey use a single question to characterize the meaning of entities and relations, which is intuitively\r\nnot enough because of the variety of context semantics. Meanwhile, existing models enumerate all relation types to generate questions, which\r\nis inefficient and easily leads to confusing questions. In this paper, we improve the existing MRC-based entity-relation extraction model through diverse question answering. First, a diversity question answering mechanism is introduced to detect\r\nentity spans and two answering selection strategies\r\nare designed to integrate different answers. Then,\r\nwe propose to predict a subset of potential relations\r\nand filter out irrelevant ones to generate questions\r\neffectively. Finally, entity and relation extractions\r\nare integrated in an end-to-end way and optimized\r\nthrough joint learning. Experimental results show\r\nthat the proposed method significantly outperforms\r\nbaseline models, which improves the relation F1\r\nto 62.1% (+1.9%) on ACE05 and 71.9% (+3.0%)\r\non CoNLL04. Our implementation is available at\r\nhttps://github.com/TanyaZhao/MRC4ERE.", "field": [], "task": ["Machine Reading Comprehension", "Question Answering", "Reading Comprehension", "Relation Extraction"], "method": [], "dataset": ["ACE 2005"], "metric": ["Sentence Encoder", "NER Micro F1", "RE+ Micro F1"], "title": "Asking Effective and Diverse Questions: A Machine Reading Comprehension based Framework for Joint Entity-Relation Extraction"} {"abstract": "In this paper, we propose a two-step training procedure for source separation via a deep neural network. In the first step we learn a transform (and its inverse) to a latent space where masking-based separation performance using oracles is optimal. For the second step, we train a separation module that operates on the previously learned space. In order to do so, we also make use of a scale-invariant signal to distortion ratio (SI-SDR) loss function that works in the latent space, and we prove that it lower-bounds the SI-SDR in the time domain. 
We run various sound separation experiments that show how this approach can obtain better performance as compared to systems that learn the transform and the separation module jointly. The proposed methodology is general enough to be applicable to a large class of neural network end-to-end separation systems.", "field": [], "task": ["Speech Separation"], "method": [], "dataset": ["wsj0-2mix"], "metric": ["SI-SDRi"], "title": "Two-Step Sound Source Separation: Training on Learned Latent Targets"} {"abstract": "Matching images and sentences demands a fine understanding of both\nmodalities. In this paper, we propose a new system to discriminatively embed\nthe image and text to a shared visual-textual space. In this field, most\nexisting works apply the ranking loss to pull the positive image / text pairs\nclose and push the negative pairs apart from each other. However, directly\ndeploying the ranking loss is hard for network learning, since it starts from\nthe two heterogeneous features to build inter-modal relationship. To address\nthis problem, we propose the instance loss which explicitly considers the\nintra-modal data distribution. It is based on an unsupervised assumption that\neach image / text group can be viewed as a class. So the network can learn the\nfine granularity from every image/text group. The experiment shows that the\ninstance loss offers better weight initialization for the ranking loss, so that\nmore discriminative embeddings can be learned. Besides, existing works usually\napply the off-the-shelf features, i.e., word2vec and fixed visual feature. So\nin a minor contribution, this paper constructs an end-to-end dual-path\nconvolutional network to learn the image and text representations. End-to-end\nlearning allows the system to directly learn from the data and fully utilize\nthe supervision. On two generic retrieval datasets (Flickr30k and MSCOCO),\nexperiments demonstrate that our method yields competitive accuracy compared to\nstate-of-the-art methods. Moreover, in language based person retrieval, we\nimprove the state of the art by a large margin. The code has been made publicly\navailable.", "field": [], "task": ["Content-Based Image Retrieval", "Cross-Modal Retrieval", "NLP based Person Retrival", "Person Retrieval", "Text based Person Retrieval", "Text-Image Retrieval"], "method": [], "dataset": ["Flickr30k"], "metric": ["Image-to-text R@5", "Image-to-text R@1", "Image-to-text R@10", "Text-to-image R@10", "Text-to-image R@1", "Text-to-image R@5"], "title": "Dual-Path Convolutional Image-Text Embedding with Instance Loss"} {"abstract": "Entity alignment is the task of linking entities with the same real-world identity from different knowledge graphs (KGs), which has been recently dominated by embedding-based methods. Such approaches work by learning KG representations so that entity alignment can be performed by measuring the similarities between entity embeddings. While promising, prior works in the field often fail to properly capture complex relation information that commonly exists in multi-relational KGs, leaving much room for improvement. In this paper, we propose a novel Relation-aware Dual-Graph Convolutional Network (RDGCN) to incorporate relation information via attentive interactions between the knowledge graph and its dual relation counterpart, and further capture neighboring structures to learn better entity representations. 
Experiments on three real-world cross-lingual datasets show that our approach delivers better and more robust results over the state-of-the-art alignment methods by learning better KG representations.", "field": [], "task": ["Entity Alignment", "Entity Embeddings", "Knowledge Graphs"], "method": [], "dataset": ["DBP15k zh-en"], "metric": ["Hits@1"], "title": "Relation-Aware Entity Alignment for Heterogeneous Knowledge Graphs"} {"abstract": "We address the problem of guided image-to-image translation where we translate an input image into another while respecting the constraints provided by an external, user-provided guidance image. Various conditioning methods for leveraging the given guidance image have been explored, including input concatenation , feature concatenation, and conditional affine transformation of feature activations. All these conditioning mechanisms, however, are uni-directional, i.e., no information flow from the input image back to the guidance. To better utilize the constraints of the guidance image, we present a bi-directional feature transformation (bFT) scheme. We show that our bFT scheme outperforms other conditioning schemes and has comparable results to state-of-the-art methods on different tasks.", "field": [], "task": ["Image-to-Image Translation", "Pose Transfer"], "method": [], "dataset": ["Edge-to-Shoes", "Edge-to-Handbags", "Edge-to-Clothes", "Deep-Fashion"], "metric": ["SSIM", "FID", "LPIPS", "IS"], "title": "Guided Image-to-Image Translation with Bi-Directional Feature Transformation"} {"abstract": "In NLP, text classification is one of the primary problems we try to solve and its uses in language analyses are indisputable. The lack of labeled training data made it harder to do these tasks in low resource languages like Amharic. The task of collecting, labeling, annotating, and making valuable this kind of data will encourage junior researchers, schools, and machine learning practitioners to implement existing classification models in their language. In this short paper, we aim to introduce the Amharic text classification dataset that consists of more than 50k news articles that were categorized into 6 classes. This dataset is made available with easy baseline performances to encourage studies and better performance experiments.", "field": [], "task": ["Text Classification"], "method": [], "dataset": ["An Amharic News Text classification Dataset"], "metric": ["Accuracy"], "title": "An Amharic News Text classification Dataset"} {"abstract": "During the last half decade, convolutional neural networks (CNNs) have\ntriumphed over semantic segmentation, which is one of the core tasks in many\napplications such as autonomous driving. However, to train CNNs requires a\nconsiderable amount of data, which is difficult to collect and laborious to\nannotate. Recent advances in computer graphics make it possible to train CNNs\non photo-realistic synthetic imagery with computer-generated annotations.\nDespite this, the domain mismatch between the real images and the synthetic\ndata cripples the models' performance. Hence, we propose a curriculum-style\nlearning approach to minimize the domain gap in urban scenery semantic\nsegmentation. The curriculum domain adaptation solves easy tasks first to infer\nnecessary properties about the target domain; in particular, the first task is\nto learn global label distributions over images and local distributions over\nlandmark superpixels. 
These are easy to estimate because images of urban scenes\nhave strong idiosyncrasies (e.g., the size and spatial relations of buildings,\nstreets, cars, etc.). We then train a segmentation network while regularizing\nits predictions in the target domain to follow those inferred properties. In\nexperiments, our method outperforms the baselines on two datasets and two\nbackbone networks. We also report extensive ablation studies about our\napproach.", "field": [], "task": ["Autonomous Driving", "Domain Adaptation", "Image-to-Image Translation", "Semantic Segmentation", "Synthetic-to-Real Translation"], "method": [], "dataset": ["GTAV-to-Cityscapes Labels", "SYNTHIA-to-Cityscapes"], "metric": ["mIoU (13 classes)", "mIoU"], "title": "Curriculum Domain Adaptation for Semantic Segmentation of Urban Scenes"} {"abstract": "With the rapid increase of large-scale, real-world datasets, it becomes\ncritical to address the problem of long-tailed data distribution (i.e., a few\nclasses account for most of the data, while most classes are\nunder-represented). Existing solutions typically adopt class re-balancing\nstrategies such as re-sampling and re-weighting based on the number of\nobservations for each class. In this work, we argue that as the number of\nsamples increases, the additional benefit of a newly added data point will\ndiminish. We introduce a novel theoretical framework to measure data overlap by\nassociating with each sample a small neighboring region rather than a single\npoint. The effective number of samples is defined as the volume of samples and\ncan be calculated by a simple formula $(1-\\beta^{n})/(1-\\beta)$, where $n$ is\nthe number of samples and $\\beta \\in [0,1)$ is a hyperparameter. We design a\nre-weighting scheme that uses the effective number of samples for each class to\nre-balance the loss, thereby yielding a class-balanced loss. Comprehensive\nexperiments are conducted on artificially induced long-tailed CIFAR datasets\nand large-scale datasets including ImageNet and iNaturalist. Our results show\nthat when trained with the proposed class-balanced loss, the network is able to\nachieve significant performance gains on long-tailed datasets.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["iNaturalist 2018"], "metric": ["Top-1 Accuracy"], "title": "Class-Balanced Loss Based on Effective Number of Samples"} {"abstract": "Deep convolutional neutral networks have achieved great success on image\nrecognition tasks. Yet, it is non-trivial to transfer the state-of-the-art\nimage recognition networks to videos as per-frame evaluation is too slow and\nunaffordable. We present deep feature flow, a fast and accurate framework for\nvideo recognition. It runs the expensive convolutional sub-network only on\nsparse key frames and propagates their deep feature maps to other frames via a\nflow field. It achieves significant speedup as flow computation is relatively\nfast. The end-to-end training of the whole architecture significantly boosts\nthe recognition accuracy. Deep feature flow is flexible and general. It is\nvalidated on two recent large scale video datasets. 
It makes a large step\ntowards practical video recognition.", "field": [], "task": ["Video Recognition", "Video Semantic Segmentation"], "method": [], "dataset": ["Cityscapes val"], "metric": ["mIoU"], "title": "Deep Feature Flow for Video Recognition"} {"abstract": "Co-salient object detection (CoSOD) is a newly emerging and rapidly growing branch of salient object detection (SOD), which aims to detect the co-occurring salient objects in multiple images. However, existing CoSOD datasets often have a serious data bias, which assumes that each group of images contains salient objects of similar visual appearances. This bias results in the ideal settings and the effectiveness of the models, trained on existing datasets, may be impaired in real-life situations, where the similarity is usually semantic or conceptual. To tackle this issue, we first collect a new high-quality dataset, named CoSOD3k, which contains 3,316 images divided into 160 groups with multiple level annotations, i.e., category, bounding box, object, and instance levels. CoSOD3k makes a significant leap in terms of diversity, difficulty and scalability, benefiting related vision tasks. Besides, we comprehensively summarize 34 cutting-edge algorithms, benchmarking 19 of them over four existing CoSOD datasets (MSRC, iCoSeg, Image Pair and CoSal2015) and our CoSOD3k with a total of 61K images (largest scale), and reporting group-level performance analysis. Finally, we discuss the challenge and future work of CoSOD. Our study would give a strong boost to growth in the CoSOD community. Benchmark toolbox and results are available on our project page.\r", "field": [], "task": ["Co-Salient Object Detection", "Object Detection", "RGB Salient Object Detection", "Salient Object Detection"], "method": [], "dataset": ["iCoSeg", "CoSOD3k", "CoSal2015"], "metric": ["max E-Measure", "Average MAE", "S-Measure", "max F-Measure"], "title": "Taking a Deeper Look at Co-Salient Object Detection"} {"abstract": "In this paper, we develop novel, efficient 2D encodings for 3D geometry,\nwhich enable reconstructing full 3D shapes from a single image at high\nresolution. The key idea is to pose 3D shape reconstruction as a 2D prediction\nproblem. To that end, we first develop a simple baseline network that predicts\nentire voxel tubes at each pixel of a reference view. By leveraging well-proven\narchitectures for 2D pixel-prediction tasks, we attain state-of-the-art\nresults, clearly outperforming purely voxel-based approaches. We scale this\nbaseline to higher resolutions by proposing a memory-efficient shape encoding,\nwhich recursively decomposes a 3D shape into nested shape layers, similar to\nthe pieces of a Matryoshka doll. This allows reconstructing highly detailed\nshapes with complex topology, as demonstrated in extensive experiments; we\nclearly outperform previous octree-based approaches despite having a much\nsimpler architecture using standard network components. Our Matryoshka networks\nfurther enable reconstructing shapes from IDs or shape similarity, as well as\nshape sampling.", "field": [], "task": ["3D Object Reconstruction", "3D Shape Reconstruction"], "method": [], "dataset": ["Data3D\u2212R2N2"], "metric": ["3DIoU"], "title": "Matryoshka Networks: Predicting 3D Geometry via Nested Shape Layers"} {"abstract": "Sentence simplification aims to reduce the complexity of a sentence while\nretaining its original meaning. 
Current models for sentence simplification\nadopted ideas from machine translation studies and implicitly learned\nsimplification mapping rules from normal-simple sentence pairs. In this paper,\nwe explore a novel model based on a multi-layer and multi-head attention\narchitecture and we propose two innovative approaches to integrate the Simple\nPPDB (A Paraphrase Database for Simplification), an external paraphrase\nknowledge base for simplification that covers a wide range of real-world\nsimplification rules. The experiments show that the integration provides two\nmajor benefits: (1) the integrated model outperforms multiple state-of-the-art\nbaseline models for sentence simplification in the literature, and (2) through\nanalysis of the rule utilization, the model seeks to select more accurate\nsimplification rules. The code and models used in the paper are available at\nhttps://github.com/Sanqiang/text_simplification.", "field": [], "task": ["Text Simplification"], "method": [], "dataset": ["ASSET", "Newsela", "TurkCorpus"], "metric": ["BLEU", "SARI (EASSE>=0.2.1)", "SARI"], "title": "Integrating Transformer and Paraphrase Rules for Sentence Simplification"} {"abstract": "Recent years have witnessed the unprecedented success of deep convolutional\nneural networks (CNNs) in single image super-resolution (SISR). However,\nexisting CNN-based SISR methods mostly assume that a low-resolution (LR) image\nis bicubically downsampled from a high-resolution (HR) image, thus inevitably\ngiving rise to poor performance when the true degradation does not follow this\nassumption. Moreover, they lack scalability in learning a single model to\nnon-blindly deal with multiple degradations. To address these issues, we\npropose a general framework with a dimensionality stretching strategy that\nenables a single convolutional super-resolution network to take two key factors\nof the SISR degradation process, i.e., blur kernel and noise level, as input.\nConsequently, the super-resolver can handle multiple and even spatially variant\ndegradations, which significantly improves the practicability. Extensive\nexperimental results on synthetic and real LR images show that the proposed\nconvolutional super-resolution network not only can produce favorable results\non multiple degradations but also is computationally efficient, providing a\nhighly effective and scalable solution to practical SISR applications.", "field": [], "task": ["Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["Set5 - 4x upscaling", "Urban100 - 4x upscaling", "BSD100 - 4x upscaling", "Set14 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Learning a Single Convolutional Super-Resolution Network for Multiple Degradations"} {"abstract": "We propose a novel two-stage detection method, D2Det, that collectively addresses both precise localization and accurate classification. For precise localization, we introduce a dense local regression that predicts multiple dense box offsets for an object proposal. Different from traditional regression and keypoint-based localization employed in two-stage detectors, our dense local regression is not limited to a quantized set of keypoints within a fixed region and has the ability to regress position-sensitive real number dense offsets, leading to more precise localization. The dense local regression is further improved by a binary overlap prediction strategy that reduces the influence of background region on the final box regression. 
For accurate classification, we introduce a discriminative RoI pooling scheme that samples from various sub-regions of a proposal and performs adaptive weighting to obtain discriminative features. On MS COCO test-dev, our D2Det outperforms existing two-stage methods, with a single-model performance of 45.4 AP, using ResNet101 backbone. When using multi-scale training and inference, D2Det obtains AP of 50.1. In addition to detection, we adapt D2Det for instance segmentation, achieving a mask AP of 40.2 with a two-fold speedup, compared to the state-of-the-art. We also demonstrate the effectiveness of our D2Det on airborne sensors by performing experiments for object detection in UAV images (UAVDT dataset) and instance segmentation in satellite images (iSAID dataset). Source code is available at https://github.com/JialeCao001/D2Det.\r", "field": [], "task": ["Instance Segmentation", "Object Detection", "Regression", "Semantic Segmentation"], "method": [], "dataset": ["COCO test-dev"], "metric": ["APM", "box AP", "AP75", "APS", "APL", "AP50", "mask AP"], "title": "D2Det: Towards High Quality Object Detection and Instance Segmentation"} {"abstract": "Temporally localizing activities within untrimmed videos has been extensively studied in recent years. Despite recent advances, existing methods for weakly-supervised temporal activity localization struggle to recognize when an activity is not occurring. To address this issue, we propose a novel method named A2CL-PT. Two triplets of the feature space are considered in our approach: one triplet is used to learn discriminative features for each activity class, and the other one is used to distinguish the features where no activity occurs (i.e. background features) from activity-related features for each video. To further improve the performance, we build our network using two parallel branches which operate in an adversarial way: the first branch localizes the most salient activities of a video and the second one finds other supplementary activities from non-localized parts of the video. Extensive experiments performed on THUMOS14 and ActivityNet datasets demonstrate that our proposed method is effective. Specifically, the average mAP of IoU thresholds from 0.1 to 0.9 on the THUMOS14 dataset is significantly improved from 27.9% to 30.0%.", "field": [], "task": ["Metric Learning", "Weakly Supervised Action Localization", "Weakly-supervised Temporal Action Localization"], "method": [], "dataset": ["THUMOS\u201914", "ActivityNet-1.3", "THUMOS 2014"], "metric": ["mAP IOU@0.6", "mAP IOU@0.7", "mAP@AVG(0.1:0.9)", "mAP IOU@0.9", "mAP@0.1:0.7", "mAP IOU@0.5", "mAP IOU@0.2", "mAP IOU@0.4", "mAP IOU@0.3", "mAP@0.5", "mAP IOU@0.8", "mAP IOU@0.1"], "title": "Adversarial Background-Aware Loss for Weakly-supervised Temporal Activity Localization"} {"abstract": "We propose an approach to self-supervised representation learning based on maximizing mutual information between features extracted from multiple views of a shared context. For example, one could produce multiple views of a local spatio-temporal context by observing it from different locations (e.g., camera positions within a scene), and via different modalities (e.g., tactile, auditory, or visual). Or, an ImageNet image could provide a context from which one produces multiple views by repeatedly applying data augmentation. 
Maximizing mutual information between features extracted from these views requires capturing information about high-level factors whose influence spans multiple views -- e.g., presence of certain objects or occurrence of certain events. Following our proposed approach, we develop a model which learns image representations that significantly outperform prior methods on the tasks we consider. Most notably, using self-supervised learning, our model learns representations which achieve 68.1% accuracy on ImageNet using standard linear evaluation. This beats prior results by over 12% and concurrent results by 7%. When we extend our model to use mixture-based representations, segmentation behaviour emerges as a natural side-effect. Our code is available online: https://github.com/Philip-Bachman/amdim-public.", "field": [], "task": ["Data Augmentation", "Image Classification", "Representation Learning", "Self-Supervised Image Classification", "Self-Supervised Learning"], "method": [], "dataset": ["ImageNet", "STL-10"], "metric": ["Number of Params", "Percentage correct", "Top 1 Accuracy"], "title": "Learning Representations by Maximizing Mutual Information Across Views"} {"abstract": "Increased growth in the global Unmanned Aerial Vehicles (UAV) (drone) industry has expanded possibilities for fully autonomous UAV applications. A particular application which has in part motivated this research is the use of UAV in wide area search and surveillance operations in unstructured outdoor environments. The critical issue with such environments is the lack of structured features that could aid in autonomous flight, such as road lines or paths. In this paper, we propose an End-to-End Multi-Task Regression-based Learning approach capable of defining flight commands for navigation and exploration under the forest canopy, regardless of the presence of trails or additional sensors (i.e. GPS). Training and testing are performed using a software-in-the-loop pipeline, which allows for a detailed evaluation against state-of-the-art pose estimation techniques. Our extensive experiments demonstrate that our approach excels in performing dense exploration within the required search perimeter, is capable of covering wider search regions, generalises to previously unseen and unexplored environments, and outperforms contemporary state-of-the-art techniques.", "field": [], "task": ["Autonomous Flight (Dense Forest)", "Autonomous Navigation", "Pose Estimation", "Regression"], "method": [], "dataset": ["mtrl-auto-uav"], "metric": ["NI"], "title": "Multi-Task Regression-based Learning for Autonomous Unmanned Aerial Vehicle Flight Control within Unstructured Outdoor Environments"} {"abstract": "We propose a new framework called Ego-Splitting for detecting clusters in complex networks, which leverages the local structures known as ego-nets (i.e. the subgraph induced by the neighborhood of each node) to de-couple overlapping clusters. Ego-Splitting is a highly scalable and flexible framework, with provable theoretical guarantees, that reduces the complex overlapping clustering problem to a simpler and more amenable non-overlapping (partitioning) problem. We can solve community detection in graphs with tens of billions of edges and outperform previous solutions based on ego-net analysis.\r\n\r\nMore precisely, our framework works in two steps: a local ego-net analysis phase, and a global graph partitioning phase. In the local step, we first partition the nodes\u2019 ego-nets using a partitioning algorithm. 
We then use the computed clusters to split each node into its persona nodes that represent the instantiations of the node in its communities. Then, in the global step, we partition the newly created graph to obtain an overlapping clustering of the original graph.", "field": [], "task": ["Community Detection", "graph partitioning"], "method": [], "dataset": ["Amazon"], "metric": ["F1-score", "NMI"], "title": "Ego-splitting Framework: from Non-Overlapping to Overlapping Clusters"} {"abstract": "Self-attention has been a huge success for many downstream tasks in NLP, which led to exploration of applying self-attention to speech problems as well. The efficacy of self-attention in speech applications, however, seems not fully blown yet since it is challenging to handle highly correlated speech frames in the context of self-attention. In this paper we propose a new neural network model architecture, namely multi-stream self-attention, to address the issue thus make the self-attention mechanism more effective for speech recognition. The proposed model architecture consists of parallel streams of self-attention encoders, and each stream has layers of 1D convolutions with dilated kernels whose dilation rates are unique given stream, followed by a self-attention layer. The self-attention mechanism in each stream pays attention to only one resolution of input speech frames and the attentive computation can be more efficient. In a later stage, outputs from all the streams are concatenated then linearly projected to the final embedding. By stacking the proposed multi-stream self-attention encoder blocks and rescoring the resultant lattices with neural network language models, we achieve the word error rate of 2.2% on the test-clean dataset of the LibriSpeech corpus, the best number reported thus far on the dataset.", "field": [], "task": ["Speech Recognition"], "method": [], "dataset": ["LibriSpeech test-other", "LibriSpeech test-clean"], "metric": ["Word Error Rate (WER)"], "title": "State-of-the-Art Speech Recognition Using Multi-Stream Self-Attention With Dilated 1D Convolutions"} {"abstract": "Image-text matching plays a critical role in bridging the vision and language, and great progress has been made by exploiting the global alignment between image and sentence, or local alignments between regions and words. However, how to make the most of these alignments to infer more accurate matching scores is still underexplored. In this paper, we propose a novel Similarity Graph Reasoning and Attention Filtration (SGRAF) network for image-text matching. Specifically, the vector-based similarity representations are firstly learned to characterize the local and global alignments in a more comprehensive manner, and then the Similarity Graph Reasoning (SGR) module relying on one graph convolutional neural network is introduced to infer relation-aware similarities with both the local and global alignments. The Similarity Attention Filtration (SAF) module is further developed to integrate these alignments effectively by selectively attending on the significant and representative alignments and meanwhile casting aside the interferences of non-meaningful alignments. 
We demonstrate the superiority of the proposed method with achieving state-of-the-art performances on the Flickr30K and MSCOCO datasets, and the good interpretability of SGR and SAF modules with extensive qualitative experiments and analyses.", "field": [], "task": ["Cross-Modal Retrieval", "Image Retrieval", "Text Matching"], "method": [], "dataset": ["Flickr30k", "COCO 2014", "Flickr30K 1K test"], "metric": ["Image-to-text R@5", "Image-to-text R@1", "R@10", "Image-to-text R@10", "Text-to-image R@10", "Text-to-image R@1", "R@5", "R@1", "Text-to-image R@5"], "title": "Similarity Reasoning and Filtration for Image-Text Matching"} {"abstract": "Combining Generative Adversarial Networks (GANs) with encoders that learn to\nencode data points has shown promising results in learning data representations\nin an unsupervised way. We propose a framework that combines an encoder and a\ngenerator to learn disentangled representations which encode meaningful\ninformation about the data distribution without the need for any labels. While\ncurrent approaches focus mostly on the generative aspects of GANs, our\nframework can be used to perform inference on both real and generated data\npoints. Experiments on several data sets show that the encoder learns\ninterpretable, disentangled representations which encode descriptive properties\nand can be used to sample images that exhibit specific characteristics.", "field": [], "task": ["Representation Learning", "Unsupervised Image Classification", "Unsupervised MNIST", "Unsupervised Representation Learning"], "method": [], "dataset": ["MNIST"], "metric": ["Accuracy"], "title": "Inferencing Based on Unsupervised Learning of Disentangled Representations"} {"abstract": "Shortage of available training data is holding back progress in the area of\nautomated error detection. This paper investigates two alternative methods for\nartificially generating writing errors, in order to create additional\nresources. We propose treating error generation as a machine translation task,\nwhere grammatically correct text is translated to contain errors. In addition,\nwe explore a system for extracting textual patterns from an annotated corpus,\nwhich can then be used to insert errors into grammatically correct sentences.\nOur experiments show that the inclusion of artificially generated errors\nsignificantly improves error detection accuracy on both FCE and CoNLL 2014\ndatasets.", "field": [], "task": ["Grammatical Error Detection", "Machine Translation"], "method": [], "dataset": ["CoNLL-2014 A2", "FCE", "CoNLL-2014 A1"], "metric": ["F0.5"], "title": "Artificial Error Generation with Machine Translation and Syntactic Patterns"} {"abstract": "This paper describes Stanford's system at the CoNLL 2018 UD Shared Task. We\nintroduce a complete neural pipeline system that takes raw text as input, and\nperforms all tasks required by the shared task, ranging from tokenization and\nsentence segmentation, to POS tagging and dependency parsing. Our single system\nsubmission achieved very competitive performance on big treebanks. Moreover,\nafter fixing an unfortunate bug, our corrected system would have placed the\n2nd, 1st, and 3rd on the official evaluation metrics LAS,MLAS, and BLEX, and\nwould have outperformed all submission systems on low-resource treebank\ncategories on all metrics by a large margin. 
We further show the effectiveness\nof different model components through extensive ablation studies.", "field": [], "task": ["Dependency Parsing", "Sentence segmentation", "Tokenization"], "method": [], "dataset": ["Universal Dependencies"], "metric": ["UAS", "BLEX", "LAS"], "title": "Universal Dependency Parsing from Scratch"} {"abstract": "Change detection (CD) is essential to the accurate understanding of land surface changes using available Earth observation data. Due to the great advantages in deep feature representation and nonlinear problem modeling, deep learning is becoming increasingly popular to solve CD tasks in remote-sensing community. However, most existing deep learning-based CD methods are implemented by either generating difference images using deep features or learning change relations between pixel patches, which leads to error accumulation problems since many intermediate processing steps are needed to obtain final change maps. To address the above-mentioned issues, a novel end-to-end CD method is proposed based on an effective encoder-decoder architecture for semantic segmentation named UNet++, where change maps could be learned from scratch using available annotated datasets. Firstly, co-registered image pairs are concatenated as an input for the improved UNet++ network, where both global and fine-grained information can be utilized to generate feature maps with high spatial accuracy. Then, the fusion strategy of multiple side outputs is adopted to combine change maps from different semantic levels, thereby generating a final change map with high accuracy. The effectiveness and reliability of our proposed CD method are verified on very-high-resolution (VHR) satellite image datasets. Extensive experimental results have shown that our proposed approach outperforms the other state-of-the-art CD methods.", "field": [], "task": ["Change detection for remote sensing images", "Semantic Segmentation"], "method": [], "dataset": ["CDD Dataset (season-varying)"], "metric": ["F1-Score"], "title": "End-to-End Change Detection for High Resolution Satellite Images Using Improved UNet++"} {"abstract": "In this paper we introduce EfficientPose, a new approach for 6D object pose estimation. Our method is highly accurate, efficient and scalable over a wide range of computational resources. Moreover, it can detect the 2D bounding box of multiple objects and instances as well as estimate their full 6D poses in a single shot. This eliminates the significant increase in runtime when dealing with multiple objects other approaches suffer from. These approaches aim to first detect 2D targets, e.g. keypoints, and solve a Perspective-n-Point problem for their 6D pose for each object afterwards. We also propose a novel augmentation method for direct 6D pose estimation approaches to improve performance and generalization, called 6D augmentation. Our approach achieves a new state-of-the-art accuracy of 97.35% in terms of the ADD(-S) metric on the widely-used 6D pose estimation benchmark dataset Linemod using RGB input, while still running end-to-end at over 27 FPS. Through the inherent handling of multiple objects and instances and the fused single shot 2D object detection as well as 6D pose estimation, our approach runs even with multiple objects (eight) end-to-end at over 26 FPS, making it highly attractive to many real world scenarios. 
Code will be made publicly available at https://github.com/ybkscht/EfficientPose.", "field": [], "task": ["2D Object Detection", "6D Pose Estimation", "6D Pose Estimation using RGB", "Object Detection", "Pose Estimation"], "method": [], "dataset": ["LineMOD"], "metric": ["Mean ADD", "Accuracy (ADD)"], "title": "EfficientPose: An efficient, accurate and scalable end-to-end 6D multi object pose estimation approach"} {"abstract": "For portrait matting without the green screen, existing works either require auxiliary inputs that are costly to obtain or use multiple models that are computationally expensive. Consequently, they are unavailable in real-time applications. In contrast, we present a light-weight matting objective decomposition network (MODNet), which can process portrait matting from a single input image in real time. The design of MODNet benefits from optimizing a series of correlated sub-objectives simultaneously via explicit constraints. Moreover, since trimap-free methods usually suffer from the domain shift problem in practice, we introduce (1) a self-supervised strategy based on sub-objectives consistency to adapt MODNet to real-world data and (2) a one-frame delay trick to smooth the results when applying MODNet to portrait video sequence. MODNet is easy to be trained in an end-to-end style. It is much faster than contemporaneous matting methods and runs at 63 frames per second. On a carefully designed portrait matting benchmark newly proposed in this work, MODNet greatly outperforms prior trimap-free methods. More importantly, our method achieves remarkable results in daily photos and videos. Now, do you really need a green screen for real-time portrait matting?", "field": [], "task": ["Image Matting", "VIDEO MATTING"], "method": [], "dataset": ["PHM-100", "AMD"], "metric": ["MSE", "MAD"], "title": "Is a Green Screen Really Necessary for Real-Time Portrait Matting?"} {"abstract": "Human parsing is attracting increasing research attention. In this work, we\naim to push the frontier of human parsing by introducing the problem of\nmulti-human parsing in the wild. Existing works on human parsing mainly tackle\nsingle-person scenarios, which deviates from real-world applications where\nmultiple persons are present simultaneously with interaction and occlusion. To\naddress the multi-human parsing problem, we introduce a new multi-human parsing\n(MHP) dataset and a novel multi-human parsing model named MH-Parser. The MHP\ndataset contains multiple persons captured in real-world scenes with\npixel-level fine-grained semantic annotations in an instance-aware setting. The\nMH-Parser generates global parsing maps and person instance masks\nsimultaneously in a bottom-up fashion with the help of a new Graph-GAN model.\nWe envision that the MHP dataset will serve as a valuable data resource to\ndevelop new multi-human parsing models, and the MH-Parser offers a strong\nbaseline to drive future research for multi-human parsing in the wild.", "field": [], "task": ["Human Parsing", "Multi-Human Parsing"], "method": [], "dataset": ["MHP v1.0", "MHP v2.0"], "metric": ["AP 0.5"], "title": "Multiple-Human Parsing in the Wild"} {"abstract": "We present Ordinary Differential Equation Variational Auto-Encoder (ODE$^2$VAE), a latent second order ODE model for high-dimensional sequential data. Leveraging the advances in deep generative models, ODE$^2$VAE can simultaneously learn the embedding of high dimensional trajectories and infer arbitrarily complex continuous-time latent dynamics. 
Our model explicitly decomposes the latent space into momentum and position components and solves a second order ODE system, which is in contrast to recurrent neural network (RNN) based time series models and recently proposed black-box ODE techniques. In order to account for uncertainty, we propose probabilistic latent ODE dynamics parameterized by deep Bayesian neural networks. We demonstrate our approach on motion capture, image rotation and bouncing balls datasets. We achieve state-of-the-art performance in long-term motion prediction and imputation tasks.", "field": [], "task": ["Imputation", "Motion Capture", "motion prediction", "Time Series", "Video Prediction"], "method": [], "dataset": ["CMU Mocap-1", "CMU Mocap-2"], "metric": ["Test Error"], "title": "ODE$^2$VAE: Deep generative second order ODEs with Bayesian neural networks"} {"abstract": "Current developments in temporal event or action localization usually target actions captured by a single camera. However, extensive events or actions in the wild may be captured as a sequence of shots by multiple cameras at different positions. In this paper, we propose a new and challenging task called multi-shot temporal event localization, and accordingly, collect a large-scale dataset called MUlti-Shot EventS (MUSES). MUSES has 31,477 event instances for a total of 716 video hours. The core nature of MUSES is the frequent shot cuts, for an average of 19 shots per instance and 176 shots per video, which induces large intra-instance variations. Our comprehensive evaluations show that the state-of-the-art method in temporal action localization only achieves an mAP of 13.1% at IoU=0.5. As a minor contribution, we present a simple baseline approach for handling the intra-instance variations, which reports an mAP of 18.9% on MUSES and 56.9% on THUMOS14 at IoU=0.5. To facilitate research in this direction, we release the dataset and the project code at https://songbai.site/muses.", "field": [], "task": ["Action Localization", "Temporal Action Localization"], "method": [], "dataset": ["THUMOS\u201914"], "metric": ["mAP IOU@0.6", "mAP IOU@0.7", "mAP IOU@0.5", "mAP IOU@0.4", "mAP IOU@0.3"], "title": "Multi-shot Temporal Event Localization: a Benchmark"} {"abstract": "We explore the properties of byte-level recurrent language models. When given\nsufficient amounts of capacity, training data, and compute time, the\nrepresentations learned by these models include disentangled features\ncorresponding to high-level concepts. Specifically, we find a single unit which\nperforms sentiment analysis. These representations, learned in an unsupervised\nmanner, achieve state of the art on the binary subset of the Stanford Sentiment\nTreebank. They are also very data efficient. When using only a handful of\nlabeled examples, our approach matches the performance of strong baselines\ntrained on full datasets. We also demonstrate the sentiment unit has a direct\ninfluence on the generative process of the model. Simply fixing its value to be\npositive or negative generates samples with the corresponding positive or\nnegative sentiment.", "field": [], "task": ["Sentiment Analysis", "Subjectivity Analysis"], "method": [], "dataset": ["SST-2 Binary classification", "SUBJ"], "metric": ["Accuracy"], "title": "Learning to Generate Reviews and Discovering Sentiment"} {"abstract": "Most contemporary approaches to instance segmentation use complex pipelines\ninvolving conditional random fields, recurrent neural networks, object\nproposals, or template matching schemes. 
In our paper, we present a simple yet\npowerful end-to-end convolutional neural network to tackle this task. Our\napproach combines intuitions from the classical watershed transform and modern\ndeep learning to produce an energy map of the image where object instances are\nunambiguously represented as basins in the energy map. We then perform a cut at\na single energy level to directly yield connected components corresponding to\nobject instances. Our model more than doubles the performance of the\nstate-of-the-art on the challenging Cityscapes Instance Level Segmentation\ntask.", "field": [], "task": ["Instance Segmentation", "Semantic Segmentation", "Template Matching"], "method": [], "dataset": ["Cityscapes test"], "metric": ["Average Precision"], "title": "Deep Watershed Transform for Instance Segmentation"} {"abstract": "We focus on multi-modal fusion for egocentric action recognition, and propose a novel architecture for multi-modal temporal-binding, i.e. the combination of modalities within a range of temporal offsets. We train the architecture with three modalities -- RGB, Flow and Audio -- and combine them with mid-level fusion alongside sparse temporal sampling of fused representations. In contrast with previous works, modalities are fused before temporal aggregation, with shared modality and fusion weights over time. Our proposed architecture is trained end-to-end, outperforming individual modalities as well as late-fusion of modalities. We demonstrate the importance of audio in egocentric vision, on per-class basis, for identifying actions as well as interacting objects. Our method achieves state of the art results on both the seen and unseen test sets of the largest egocentric dataset: EPIC-Kitchens, on all metrics using the public leaderboard.", "field": [], "task": ["Action Recognition", "Egocentric Activity Recognition"], "method": [], "dataset": ["EPIC-KITCHENS-55"], "metric": ["Actions Top-1 (S2)", "Actions Top-1 (S1)"], "title": "EPIC-Fusion: Audio-Visual Temporal Binding for Egocentric Action Recognition"} {"abstract": "We propose a novel method that tracks fast moving objects, mainly non-uniform spherical, in full 6 degrees of freedom, estimating simultaneously their 3D motion trajectory, 3D pose and object appearance changes with a time step that is a fraction of the video frame exposure time. The sub-frame object localization and appearance estimation allows realistic temporal super-resolution and precise shape estimation. The method, called TbD-3D (Tracking by Deblatting in 3D) relies on a novel reconstruction algorithm which solves a piece-wise deblurring and matting problem. The 3D rotation is estimated by minimizing the reprojection error. As a second contribution, we present a new challenging dataset with fast moving objects that change their appearance and distance to the camera. High speed camera recordings with zero lag between frame exposures were used to generate videos with different frame rates annotated with ground-truth trajectory and pose.", "field": [], "task": ["6D Pose Estimation", "Deblurring", "Object Localization", "Pose Estimation", "Super-Resolution"], "method": [], "dataset": ["Falling Objects", "TbD", "TbD-3D"], "metric": ["SSIM", "TIoU", "PSNR"], "title": "Sub-frame Appearance and 6D Pose Estimation of Fast Moving Objects"} {"abstract": "Translating or rotating an input image should not affect the results of many\ncomputer vision tasks. 
Convolutional neural networks (CNNs) are already\ntranslation equivariant: input image translations produce proportionate feature\nmap translations. This is not the case for rotations. Global rotation\nequivariance is typically sought through data augmentation, but patch-wise\nequivariance is more difficult. We present Harmonic Networks or H-Nets, a CNN\nexhibiting equivariance to patch-wise translation and 360-rotation. We achieve\nthis by replacing regular CNN filters with circular harmonics, returning a\nmaximal response and orientation for every receptive field patch.\n H-Nets use a rich, parameter-efficient and low computational complexity\nrepresentation, and we show that deep feature maps within the network encode\ncomplicated rotational invariants. We demonstrate that our layers are general\nenough to be used in conjunction with the latest architectures and techniques,\nsuch as deep supervision and batch normalization. We also achieve\nstate-of-the-art classification on rotated-MNIST, and competitive results on\nother benchmark challenges.", "field": [], "task": ["Data Augmentation", "Rotated MNIST"], "method": [], "dataset": ["Rotated MNIST"], "metric": ["Test error"], "title": "Harmonic Networks: Deep Translation and Rotation Equivariance"} {"abstract": "Machine comprehension (MC), answering a query about a given context\nparagraph, requires modeling complex interactions between the context and the\nquery. Recently, attention mechanisms have been successfully extended to MC.\nTypically, these methods use attention to focus on a small portion of the\ncontext and summarize it with a fixed-size vector, couple attentions\ntemporally, and/or often form a uni-directional attention. In this paper we\nintroduce the Bi-Directional Attention Flow (BIDAF) network, a multi-stage\nhierarchical process that represents the context at different levels of\ngranularity and uses a bi-directional attention flow mechanism to obtain a\nquery-aware context representation without early summarization. Our\nexperimental evaluations show that our model achieves state-of-the-art\nresults on the Stanford Question Answering Dataset (SQuAD) and the CNN/DailyMail cloze\ntest.", "field": [], "task": ["Open-Domain Question Answering", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["SQuAD1.1", "CNN / Daily Mail", "MS MARCO", "NarrativeQA", "Quasar", "SQuAD1.1 dev"], "metric": ["METEOR", "BLEU-1", "EM (Quasar-T)", "EM", "CNN", "F1", "Rouge-L", "BLEU-4", "Daily Mail", "F1 (Quasar-T)"], "title": "Bidirectional Attention Flow for Machine Comprehension"} {"abstract": "Object detection is an essential step towards holistic scene understanding. Most existing object detection algorithms attend to certain object areas once and then predict the object locations. However, neuroscientists have revealed that humans do not look at the scene in fixed steadiness. Instead, human eyes move around, locating informative parts to understand the object location. This active perceiving movement process is called \\textit{saccade}. Inspired by such a mechanism, we propose a fast and accurate object detector called \\textit{SaccadeNet}. It contains four main modules, the Center Attentive Module, the Corner Attentive Module, the Attention Transitive Module, and the Aggregation Attentive Module, which allow it to attend to different informative object keypoints and predict object locations from coarse to fine. The Corner Attentive Module is used only during training to extract more informative corner features, which brings a free-lunch performance boost.
On the MS COCO dataset, we achieve 40.4\\% mAP at 28 FPS and 30.5\\% mAP at 118 FPS. Among all the real-time object detectors, our SaccadeNet achieves the best detection performance, which demonstrates the effectiveness of the proposed detection mechanism.", "field": [], "task": ["Object Detection", "Scene Understanding"], "method": [], "dataset": ["COCO test-dev"], "metric": ["APM", "box AP", "AP75", "APS", "APL", "AP50"], "title": "SaccadeNet: A Fast and Accurate Object Detector"} {"abstract": "Deep one-class classification variants for anomaly detection learn a mapping that concentrates nominal samples in feature space, causing anomalies to be mapped away. Because this transformation is highly non-linear, finding interpretations poses a significant challenge. In this paper we present an explainable deep one-class classification method, Fully Convolutional Data Description (FCDD), where the mapped samples are themselves also an explanation heatmap. FCDD yields competitive detection performance and provides reasonable explanations on common anomaly detection benchmarks with CIFAR-10 and ImageNet. On MVTec-AD, a recent manufacturing dataset offering ground-truth anomaly maps, FCDD sets a new state of the art in the unsupervised setting. Our method can incorporate ground-truth anomaly maps during training, and using even a few of these (~5) improves performance significantly. Finally, using FCDD's explanations we demonstrate the vulnerability of deep one-class classification models to spurious image features such as image watermarks.", "field": [], "task": ["Anomaly Detection", "Outlier Detection"], "method": [], "dataset": ["MVTec AD"], "metric": ["Detection AUROC", "Segmentation AUROC"], "title": "Explainable Deep One-Class Classification"} {"abstract": "Conventional zero-shot learning (ZSL) methods generally learn an embedding,\ne.g., visual-semantic mapping, to handle the unseen visual samples in an\nindirect manner. In this paper, we take advantage of generative adversarial\nnetworks (GANs) and propose a novel method, named leveraging invariant side GAN\n(LisGAN), which can directly generate the unseen features from random noises\nwhich are conditioned by the semantic descriptions. Specifically, we train a\nconditional Wasserstein GAN in which the generator synthesizes fake unseen\nfeatures from noises and the discriminator distinguishes the fake from real via\na minimax game. Considering that one semantic description can correspond to\nvarious synthesized visual samples, and the semantic description, figuratively,\nis the soul of the generated features, we introduce soul samples as the\ninvariant side of generative zero-shot learning in this paper. A soul sample is\nthe meta-representation of one class. It visualizes the most\nsemantically-meaningful aspects of each sample in the same category. We\nregularize that each generated sample (the varying side of generative ZSL)\nshould be close to at least one soul sample (the invariant side) which has the\nsame class label as it. At the zero-shot recognition stage, we propose to use\ntwo classifiers, which are deployed in a cascade way, to achieve a\ncoarse-to-fine result.
Experiments on five popular benchmarks verify that our\nproposed approach can outperform state-of-the-art methods with significant\nimprovements.", "field": [], "task": ["Generalized Zero-Shot Learning", "Zero-Shot Learning"], "method": [], "dataset": ["SUN Attribute", "CUB-200-2011"], "metric": ["average top-1 classification accuracy", "Harmonic mean"], "title": "Leveraging the Invariant Side of Generative Zero-Shot Learning"} {"abstract": "Cartesian Genetic Programming (CGP) has previously shown capabilities in\nimage processing tasks by evolving programs with a function set specialized for\ncomputer vision. A similar approach can be applied to Atari playing. Programs\nare evolved using mixed type CGP with a function set suited for matrix\noperations, including image processing, but allowing for controller behavior to\nemerge. While the programs are relatively small, many controllers are\ncompetitive with state of the art methods for the Atari benchmark set and\nrequire less training time. By evaluating the programs of the best evolved\nindividuals, simple but effective strategies can be found.", "field": [], "task": ["Atari Games"], "method": [], "dataset": ["Atari 2600 Amidar", "Atari 2600 River Raid", "Atari 2600 Beam Rider", "Atari 2600 Video Pinball", "Atari 2600 Demon Attack", "Atari 2600 Enduro", "Atari 2600 Alien", "Atari 2600 Boxing", "Atari 2600 Pitfall!", "Atari 2600 Bank Heist", "Atari 2600 Tutankham", "Atari 2600 Time Pilot", "Atari 2600 Space Invaders", "Atari 2600 Assault", "Atari 2600 Phoenix", "Atari 2600 Gravitar", "Atari 2600 Ice Hockey", "Atari 2600 Bowling", "Atari 2600 Private Eye", "Atari 2600 Berzerk", "Atari 2600 Asterix", "Atari 2600 Breakout", "Atari 2600 Name This Game", "Atari 2600 Crazy Climber", "Atari 2600 Pong", "Atari 2600 Krull", "Atari 2600 Freeway", "Atari 2600 James Bond", "Atari 2600 Defender", "Atari 2600 Robotank", "Atari 2600 Kangaroo", "Atari 2600 Venture", "Atari 2600 Asteroids", "Atari 2600 Fishing Derby", "Atari 2600 Ms. Pacman", "Atari 2600 Seaquest", "Atari 2600 Tennis", "Atari 2600 Solaris", "Atari 2600 Zaxxon", "Atari 2600 Frostbite", "Atari 2600 Star Gunner", "Atari 2600 Double Dunk", "Atari 2600 Battle Zone", "Atari 2600 Gopher", "Atari 2600 Skiing", "Atari 2600 Road Runner", "Atari 2600 Atlantis", "Atari 2600 Kung-Fu Master", "Atari 2600 Chopper Command", "Atari 2600 Yars Revenge", "Atari 2600 Up and Down", "Atari 2600 Montezuma's Revenge", "Atari 2600 Wizard of Wor", "Atari 2600 Q*Bert", "Atari 2600 Centipede", "Atari 2600 HERO"], "metric": ["Score"], "title": "Evolving simple programs for playing Atari games"} {"abstract": "Recent advances in the integration of deep learning with automated theorem proving have centered around the representation of logical formulae as inputs to deep learning systems. In particular, there has been a growing interest in adapting structure-aware neural methods to work with the underlying graph representations of logical expressions. While more effective than character and token-level approaches, graph-based methods have often made representational trade-offs that limited their ability to capture key structural properties of their inputs. In this work we propose a novel approach for embedding logical formulae that is designed to overcome the representational limitations of prior approaches. Our architecture works for logics of different expressivity; e.g., first-order and higher-order logic. 
We evaluate our approach on two standard datasets and show that the proposed architecture achieves state-of-the-art performance on both premise selection and proof step classification.", "field": [], "task": ["Automated Theorem Proving"], "method": [], "dataset": ["HolStep (Conditional)"], "metric": ["Classification Accuracy"], "title": "Improving Graph Neural Network Representations of Logical Formulae with Subgraph Pooling"} {"abstract": "Recent work in Dialogue Act classification has treated the task as a sequence labeling problem using hierarchical deep neural networks. We build on this prior work by leveraging the effectiveness of a context-aware self-attention mechanism coupled with a hierarchical recurrent neural network. We conduct extensive evaluations on standard Dialogue Act classification datasets and show significant improvement over state-of-the-art results on the Switchboard Dialogue Act (SwDA) Corpus. We also investigate the impact of different utterance-level representation learning methods and show that our method is effective at capturing utterance-level semantic text representations while maintaining high accuracy.", "field": [], "task": ["Dialogue Act Classification", "Representation Learning"], "method": [], "dataset": ["Switchboard corpus", "ICSI Meeting Recorder Dialog Act (MRDA) corpus"], "metric": ["Accuracy"], "title": "Dialogue Act Classification with Context-Aware Self-Attention"} {"abstract": "We report on a series of experiments with convolutional neural networks (CNN)\ntrained on top of pre-trained word vectors for sentence-level classification\ntasks. We show that a simple CNN with little hyperparameter tuning and static\nvectors achieves excellent results on multiple benchmarks. Learning\ntask-specific vectors through fine-tuning offers further gains in performance.\nWe additionally propose a simple modification to the architecture to allow for\nthe use of both task-specific and static vectors. The CNN models discussed\nherein improve upon the state of the art on 4 out of 7 tasks, which include\nsentiment analysis and question classification.", "field": [], "task": ["Sentence Classification", "Sentiment Analysis"], "method": [], "dataset": ["SST-2 Binary classification", "SNLI"], "metric": ["% Test Accuracy", "Accuracy"], "title": "Convolutional Neural Networks for Sentence Classification"} {"abstract": "Existing state-of-the-art RGB-D salient object detection methods explore RGB-D data relying on a two-stream architecture, in which an independent subnetwork is required to process depth data. This inevitably incurs extra computational costs and memory consumption, and using depth data during testing may hinder the practical applications of RGB-D saliency detection. To tackle these two dilemmas, we propose a depth distiller (A2dele) to explore the way of using network prediction and attention as two bridges to transfer the depth knowledge from the depth stream to the RGB stream. First, by adaptively minimizing the differences between predictions generated from the depth stream and RGB stream, we realize the desired control of pixel-wise depth knowledge transferred to the RGB stream. Second, to transfer the localization knowledge to RGB features, we encourage consistencies between the dilated prediction of the depth stream and the attention map from the RGB stream. As a result, we achieve a lightweight architecture without use of depth data at test time by embedding our A2dele. 
Our extensive experimental evaluation on five benchmarks demonstrates that our RGB stream achieves state-of-the-art performance while reducing the model size by 76% and running 12 times faster, compared with the best-performing method. Furthermore, our A2dele can be applied to existing RGB-D networks to significantly improve their efficiency while maintaining performance (nearly doubling the FPS for DMRA and tripling it for CPFP).", "field": [], "task": ["Object Detection", "RGB-D Salient Object Detection", "RGB Salient Object Detection", "Saliency Detection", "Salient Object Detection"], "method": [], "dataset": ["NJU2K"], "metric": ["Average MAE"], "title": "A2dele: Adaptive and Attentive Depth Distiller for Efficient RGB-D Salient Object Detection"} {"abstract": "The convolution layer has been the dominant feature extractor in computer\nvision for years. However, the spatial aggregation in convolution is basically\na pattern matching process that applies fixed filters, which are inefficient at\nmodeling visual elements with varying spatial distributions. This paper\npresents a new image feature extractor, called the local relation layer, that\nadaptively determines aggregation weights based on the compositional\nrelationship of local pixel pairs. With this relational approach, it can\ncomposite visual elements into higher-level entities in a more efficient manner\nthat benefits semantic inference. A network built with local relation layers,\ncalled the Local Relation Network (LR-Net), is found to provide greater\nmodeling capacity than its counterpart built with regular convolution on\nlarge-scale recognition tasks such as ImageNet classification.", "field": ["Image Model Blocks"], "task": [], "method": ["Local Relation Network", "LRNet"], "dataset": ["ImageNet"], "metric": ["Top 5 Accuracy", "Top 1 Accuracy"], "title": "Local Relation Networks for Image Recognition"} {"abstract": "Recently, Siamese networks have drawn great attention in the visual tracking\ncommunity because of their balanced accuracy and speed. However, features used\nin most Siamese tracking approaches can only discriminate foreground from the\nnon-semantic backgrounds. The semantic backgrounds are always considered as\ndistractors, which hinders the robustness of Siamese trackers. In this paper,\nwe focus on learning distractor-aware Siamese networks for accurate and\nlong-term tracking. To this end, features used in traditional Siamese trackers\nare analyzed at first. We observe that the imbalanced distribution of training\ndata makes the learned features less discriminative. During the off-line\ntraining phase, an effective sampling strategy is introduced to control this\ndistribution and make the model focus on the semantic distractors. During\ninference, a novel distractor-aware module is designed to perform incremental\nlearning, which can effectively transfer the general embedding to the current\nvideo domain. In addition, we extend the proposed approach for long-term\ntracking by introducing a simple yet effective local-to-global search region\nstrategy. Extensive experiments on benchmarks show that our approach\nsignificantly outperforms the state-of-the-art, yielding a 9.6% relative gain on the\nVOT2016 dataset and a 35.9% relative gain on the UAV20L dataset.
The proposed tracker\ncan perform at 160 FPS on short-term benchmarks and 110 FPS on long-term\nbenchmarks.", "field": [], "task": ["Incremental Learning", "Object Tracking", "Visual Object Tracking", "Visual Tracking"], "method": [], "dataset": ["VOT2017/18"], "metric": ["Expected Average Overlap (EAO)"], "title": "Distractor-aware Siamese Networks for Visual Object Tracking"} {"abstract": "Multi-label image classification is a fundamental but challenging task\ntowards general visual understanding. Existing methods found the region-level\ncues (e.g., features from RoIs) can facilitate multi-label classification.\nNevertheless, such methods usually require laborious object-level annotations\n(i.e., object labels and bounding boxes) for effective learning of the\nobject-level visual features. In this paper, we propose a novel and efficient\ndeep framework to boost multi-label classification by distilling knowledge from\nweakly-supervised detection task without bounding box annotations.\nSpecifically, given the image-level annotations, (1) we first develop a\nweakly-supervised detection (WSD) model, and then (2) construct an end-to-end\nmulti-label image classification framework augmented by a knowledge\ndistillation module that guides the classification model by the WSD model\naccording to the class-level predictions for the whole image and the\nobject-level visual features for object RoIs. The WSD model is the teacher\nmodel and the classification model is the student model. After this cross-task\nknowledge distillation, the performance of the classification model is\nsignificantly improved and the efficiency is maintained since the WSD model can\nbe safely discarded in the test phase. Extensive experiments on two large-scale\ndatasets (MS-COCO and NUS-WIDE) show that our framework achieves superior\nperformances over the state-of-the-art methods on both performance and\nefficiency.", "field": [], "task": ["Image Classification", "Knowledge Distillation", "Multi-Label Classification"], "method": [], "dataset": ["NUS-WIDE"], "metric": ["MAP"], "title": "Multi-Label Image Classification via Knowledge Distillation from Weakly-Supervised Detection"} {"abstract": "We investigate two crucial and closely related aspects of CNNs for optical\nflow estimation: models and training. First, we design a compact but effective\nCNN model, called PWC-Net, according to simple and well-established principles:\npyramidal processing, warping, and cost volume processing. PWC-Net is 17 times\nsmaller in size, 2 times faster in inference, and 11\\% more accurate on Sintel\nfinal than the recent FlowNet2 model. It is the winning entry in the optical\nflow competition of the robust vision challenge. Next, we experimentally\nanalyze the sources of our performance gains. In particular, we use the same\ntraining procedure of PWC-Net to retrain FlowNetC, a sub-network of FlowNet2.\nThe retrained FlowNetC is 56\\% more accurate on Sintel final than the\npreviously trained one and even 5\\% more accurate than the FlowNet2 model. We\nfurther improve the training procedure and increase the accuracy of PWC-Net on\nSintel by 10\\% and on KITTI 2012 and 2015 by 20\\%. 
Our newly trained model\nparameters and training protocols will be available on\nhttps://github.com/NVlabs/PWC-Net", "field": [], "task": ["Optical Flow Estimation"], "method": [], "dataset": ["KITTI 2012", "Sintel-final", "Sintel-clean", "KITTI 2015"], "metric": ["Average End-Point Error", "Fl-all"], "title": "Models Matter, So Does Training: An Empirical Study of CNNs for Optical Flow Estimation"} {"abstract": "The Ken Burns effect allows animating still images with a virtual camera scan and zoom. Adding parallax, which results in the 3D Ken Burns effect, enables significantly more compelling results. Creating such effects manually is time-consuming and demands sophisticated editing skills. Existing automatic methods, however, require multiple input images from varying viewpoints. In this paper, we introduce a framework that synthesizes the 3D Ken Burns effect from a single image, supporting both a fully automatic mode and an interactive mode with the user controlling the camera. Our framework first leverages a depth prediction pipeline, which estimates scene depth that is suitable for view synthesis tasks. To address the limitations of existing depth estimation methods such as geometric distortions, semantic distortions, and inaccurate depth boundaries, we develop a semantic-aware neural network for depth prediction, couple its estimate with a segmentation-based depth adjustment process, and employ a refinement neural network that facilitates accurate depth predictions at object boundaries. According to this depth estimate, our framework then maps the input image to a point cloud and synthesizes the resulting video frames by rendering the point cloud from the corresponding camera positions. To address disocclusions while maintaining geometrically and temporally coherent synthesis results, we utilize context-aware color- and depth-inpainting to fill in the missing information in the extreme views of the camera path, thus extending the scene geometry of the point cloud. Experiments with a wide variety of image content show that our method enables realistic synthesis results. Our study demonstrates that our system allows users to achieve better results while requiring little effort compared to existing solutions for the 3D Ken Burns effect creation.", "field": [], "task": ["Depth Estimation"], "method": [], "dataset": ["NYU-Depth V2"], "metric": ["RMS"], "title": "3D Ken Burns Effect from a Single Image"} {"abstract": "Anomaly detection, finding patterns that substantially deviate from those seen previously, is one of the fundamental problems of artificial intelligence. Recently, classification-based methods were shown to achieve superior results on this task. In this work, we present a unifying view and propose an open-set method, GOAD, to relax current generalization assumptions. Furthermore, we extend the applicability of transformation-based methods to non-image data using random affine transformations. Our method is shown to obtain state-of-the-art accuracy and is applicable to broad data types. The strong performance of our method is extensively validated on multiple datasets from different domains.", "field": [], "task": ["Anomaly Detection"], "method": [], "dataset": ["One-class CIFAR-10"], "metric": ["AUROC"], "title": "Classification-Based Anomaly Detection for General Data"} {"abstract": "Learning invariant representations has been proposed as a key technique for addressing the domain generalization problem. 
However, the question of identifying the right conditions for invariance remains unanswered. In this work, we propose a causal interpretation of domain generalization that defines domains as interventions under a data-generating process. Based on a general causal model for data from multiple domains, we show that prior methods for learning an invariant representation optimize for an incorrect objective. We highlight an alternative condition: inputs across domains should have the same representation if they are derived from the same base object. Inputs that share the same base object may be available through data augmentation or in some specific contexts, but base object information is not always available. Hence we propose an iterative algorithm called MatchDG that approximates base object similarity by using a contrastive loss formulation adapted for multiple domains. We then match inputs that are similar under the resultant representation to build an invariant classifier. We evaluate our matching-based methods on rotated MNIST, Fashion-MNIST, PACS and Chest X-ray datasets and find that they outperform prior work on out-of-domain accuracy. In particular, top-10 matches from MatchDG have over 50% overlap with ground-truth matches in MNIST and Fashion-MNIST. Code repository can be accessed here: \\textit{https://github.com/microsoft/robustdg}", "field": [], "task": ["Data Augmentation", "Domain Generalization", "Rotated MNIST"], "method": [], "dataset": ["PACS", "Rotated Fashion-MNIST"], "metric": ["Average Accuracy", "Accuracy"], "title": "Domain Generalization using Causal Matching"} {"abstract": "Our goal in this work is to train an image captioning model that generates more dense and informative captions. We introduce \"relational captioning,\" a novel image captioning task which aims to generate multiple captions with respect to relational information between objects in an image. Relational captioning is a framework that is advantageous in both diversity and amount of information, leading to image understanding based on relationships. Part-of speech (POS, i.e. subject-object-predicate categories) tags can be assigned to every English word. We leverage the POS as a prior to guide the correct sequence of words in a caption. To this end, we propose a multi-task triple-stream network (MTTSNet) which consists of three recurrent units for the respective POS and jointly performs POS prediction and captioning. We demonstrate more diverse and richer representations generated by the proposed model against several baselines and competing methods.", "field": [], "task": ["Image Captioning", "Relational Captioning"], "method": [], "dataset": ["relational captioning dataset"], "metric": ["Image-Level Recall"], "title": "Dense Relational Captioning: Triple-Stream Networks for Relationship-Based Captioning"} {"abstract": "Video restoration tasks, including super-resolution, deblurring, etc, are drawing increasing attention in the computer vision community. A challenging benchmark named REDS is released in the NTIRE19 Challenge. This new benchmark challenges existing methods from two aspects: (1) how to align multiple frames given large motions, and (2) how to effectively fuse different frames with diverse motion and blur. In this work, we propose a novel Video Restoration framework with Enhanced Deformable networks, termed EDVR, to address these challenges. 
First, to handle large motions, we devise a Pyramid, Cascading and Deformable (PCD) alignment module, in which frame alignment is done at the feature level using deformable convolutions in a coarse-to-fine manner. Second, we propose a Temporal and Spatial Attention (TSA) fusion module, in which attention is applied both temporally and spatially, so as to emphasize important features for subsequent restoration. Thanks to these modules, our EDVR wins the champions and outperforms the second place by a large margin in all four tracks in the NTIRE19 video restoration and enhancement challenges. EDVR also demonstrates superior performance to state-of-the-art published methods on video super-resolution and deblurring. The code is available at https://github.com/xinntao/EDVR.", "field": [], "task": ["Deblurring", "Super-Resolution", "Video Restoration", "Video Super-Resolution"], "method": [], "dataset": ["REDS", "Vid4 - 4x upscaling"], "metric": ["Average PSNR", "PSNR", "SSIM"], "title": "EDVR: Video Restoration with Enhanced Deformable Convolutional Networks"} {"abstract": "In order to learn quickly with few samples, meta-learning utilizes prior knowledge learned from previous tasks. However, a critical challenge in meta-learning is task uncertainty and heterogeneity, which can not be handled via globally sharing knowledge among tasks. In this paper, based on gradient-based meta-learning, we propose a hierarchically structured meta-learning (HSML) algorithm that explicitly tailors the transferable knowledge to different clusters of tasks. Inspired by the way human beings organize knowledge, we resort to a hierarchical task clustering structure to cluster tasks. As a result, the proposed approach not only addresses the challenge via the knowledge customization to different clusters of tasks, but also preserves knowledge generalization among a cluster of similar tasks. To tackle the changing of task relationship, in addition, we extend the hierarchical structure to a continual learning environment. The experimental results show that our approach can achieve state-of-the-art performance in both toy-regression and few-shot image classification problems.", "field": [], "task": ["Continual Learning", "Few-Shot Image Classification", "Hierarchical structure", "Image Classification", "Meta-Learning", "Regression"], "method": [], "dataset": ["Mini-Imagenet 5-way (1-shot)"], "metric": ["Accuracy"], "title": "Hierarchically Structured Meta-learning"} {"abstract": "Deep learning-based methods have achieved remarkable success in image restoration and enhancement, but are they still competitive when there is a lack of paired training data? As one such example, this paper explores the low-light image enhancement problem, where in practice it is extremely challenging to simultaneously take a low-light and a normal-light photo of the same visual scene. We propose a highly effective unsupervised generative adversarial network, dubbed EnlightenGAN, that can be trained without low/normal-light image pairs, yet proves to generalize very well on various real-world test images. Instead of supervising the learning using ground truth data, we propose to regularize the unpaired training using the information extracted from the input itself, and benchmark a series of innovations for the low-light image enhancement problem, including a global-local discriminator structure, a self-regularized perceptual loss fusion, and attention mechanism. 
Through extensive experiments, our proposed approach outperforms recent methods under a variety of metrics in terms of visual quality and subjective user study. Thanks to the great flexibility brought by unpaired training, EnlightenGAN is demonstrated to be easily adaptable to enhancing real-world images from various domains. The code is available at \\url{https://github.com/yueruchen/EnlightenGAN}", "field": [], "task": ["Image Enhancement", "Image Restoration", "Low-Light Image Enhancement"], "method": [], "dataset": ["DICM", "MEF", "VV", "AFLW (Zhang CVPR 2018 crops)"], "metric": ["14 gestures accuracy", "User Study Score"], "title": "EnlightenGAN: Deep Light Enhancement without Paired Supervision"} {"abstract": "Recent deep-learning approaches have shown that Frequency Transformation (FT) blocks can significantly improve spectrogram-based single-source separation models by capturing frequency patterns. The goal of this paper is to extend the FT block to fit the multi-source task. We propose the Latent Source Attentive Frequency Transformation (LaSAFT) block to capture source-dependent frequency patterns. We also propose the Gated Point-wise Convolutional Modulation (GPoCM), an extension of Feature-wise Linear Modulation (FiLM), to modulate internal features. By employing these two novel methods, we extend the Conditioned-U-Net (CUNet) for multi-source separation, and the experimental results indicate that our LaSAFT and GPoCM can improve the CUNet's performance, achieving state-of-the-art SDR performance on several MUSDB18 source separation tasks.", "field": [], "task": ["Music Source Separation"], "method": [], "dataset": ["MUSDB18"], "metric": ["SDR (vocals)", "SDR (other)", "SDR (drums)", "SDR (bass)"], "title": "LaSAFT: Latent Source Attentive Frequency Transformation for Conditioned Source Separation"} {"abstract": "Weakly Supervised Object Detection (WSOD), using only image-level annotations\nto train object detectors, is of growing importance in object recognition. In\nthis paper, we propose a novel deep network for WSOD. Unlike previous networks\nthat transfer the object detection problem to an image classification problem\nusing Multiple Instance Learning (MIL), our strategy generates proposal\nclusters to learn refined instance classifiers by an iterative process. The\nproposals in the same cluster are spatially adjacent and associated with the\nsame object. This prevents the network from concentrating too much on parts of\nobjects instead of whole objects. We first show that instances can be assigned\nobject or background labels directly based on proposal clusters for instance\nclassifier refinement, and then show that treating each cluster as a small new\nbag yields fewer ambiguities than the directly assigning label method. The\niterative instance classifier refinement is implemented online using multiple\nstreams in convolutional neural networks, where the first is an MIL network and\nthe others are for instance classifier refinement supervised by the preceding\none. Experiments are conducted on the PASCAL VOC, ImageNet detection, and\nMS-COCO benchmarks for WSOD. 
Results show that our method outperforms the\nprevious state of the art significantly.", "field": [], "task": ["Multiple Instance Learning", "Object Detection", "Object Recognition", "Weakly Supervised Object Detection"], "method": [], "dataset": ["PASCAL VOC 2012 test", "HICO-DET", "PASCAL VOC 2007", "ImageNet", "Charades"], "metric": ["MAP"], "title": "PCL: Proposal Cluster Learning for Weakly Supervised Object Detection"} {"abstract": "Dialogue state tracking, which estimates user goals and requests given the\ndialogue context, is an essential part of task-oriented dialogue systems. In\nthis paper, we propose the Global-Locally Self-Attentive Dialogue State Tracker\n(GLAD), which learns representations of the user utterance and previous system\nactions with global-local modules. Our model uses global modules to share\nparameters between estimators for different types (called slots) of dialogue\nstates, and uses local modules to learn slot-specific features. We show that\nthis significantly improves tracking of rare states and achieves\nstate-of-the-art performance on the WoZ and DSTC2 state tracking tasks. GLAD\nobtains 88.1% joint goal accuracy and 97.1% request accuracy on WoZ,\noutperforming prior work by 3.7% and 5.5%. On DSTC2, our model obtains 74.5%\njoint goal accuracy and 97.5% request accuracy, outperforming prior work by\n1.1% and 1.0%.", "field": [], "task": ["Dialogue State Tracking", "Multi-domain Dialogue State Tracking", "Task-Oriented Dialogue Systems"], "method": [], "dataset": ["Wizard-of-Oz", "Second dialogue state tracking challenge"], "metric": ["Joint", "Price", "Area", "Food", "Request"], "title": "Global-Locally Self-Attentive Dialogue State Tracker"} {"abstract": "Weakly supervised learning has attracted growing research attention due to the significant saving in annotation cost for tasks that require intra-image annotations, such as object detection and semantic segmentation. To this end, existing weakly supervised object detection and semantic segmentation approaches follow an iterative label mining and model training pipeline. However, such a self-enforcement pipeline makes both tasks easy to be trapped in local minimums. In this paper, we join weakly supervised object detection and segmentation tasks with a multi-task learning scheme for the first time, which uses their respective failure patterns to complement each other's learning. Such cross-task enforcement helps both tasks to leap out of their respective local minimums. In particular, we present an efficient and effective framework termed Weakly Supervised Joint Detection and Segmentation (WS-JDS). WS-JDS has two branches for the above two tasks, which share the same backbone network. In the learning stage, it uses the same cyclic training paradigm but with a specific loss function such that the two branches benefit each other. Extensive experiments have been conducted on the widely-used Pascal VOC and COCO benchmarks, which demonstrate that our model has achieved competitive performance with the state-of-the-art algorithms.\r", "field": [], "task": ["Multi-Task Learning", "Object Detection", "Semantic Segmentation", "Weakly Supervised Object Detection"], "method": [], "dataset": ["PASCAL VOC 2007", "PASCAL VOC 2012 test"], "metric": ["MAP"], "title": "Cyclic Guidance for Weakly Supervised Joint Detection and Segmentation"} {"abstract": "3D face reconstruction is a fundamental Computer Vision problem of\nextraordinary difficulty. 
Current systems often assume the availability of\nmultiple facial images (sometimes from the same subject) as input, and must\naddress a number of methodological challenges such as establishing dense\ncorrespondences across large facial poses, expressions, and non-uniform\nillumination. In general these methods require complex and inefficient\npipelines for model building and fitting. In this work, we propose to address\nmany of these limitations by training a Convolutional Neural Network (CNN) on\nan appropriate dataset consisting of 2D images and 3D facial models or scans.\nOur CNN works with just a single 2D facial image, does not require accurate\nalignment nor establishes dense correspondence between images, works for\narbitrary facial poses and expressions, and can be used to reconstruct the\nwhole 3D facial geometry (including the non-visible parts of the face)\nbypassing the construction (during training) and fitting (during testing) of a\n3D Morphable Model. We achieve this via a simple CNN architecture that performs\ndirect regression of a volumetric representation of the 3D facial geometry from\na single 2D image. We also demonstrate how the related task of facial landmark\nlocalization can be incorporated into the proposed framework and help improve\nreconstruction quality, especially for the cases of large poses and facial\nexpressions. Testing code will be made available online, along with pre-trained\nmodels http://aaronsplace.co.uk/papers/jackson2017recon", "field": [], "task": ["3D Face Reconstruction", "Face Alignment", "Face Reconstruction", "Regression"], "method": [], "dataset": ["Florence"], "metric": ["Mean NME "], "title": "Large Pose 3D Face Reconstruction from a Single Image via Direct Volumetric CNN Regression"} {"abstract": "Recurrent neural network models with an attention mechanism have proven to be\nextremely effective on a wide variety of sequence-to-sequence problems.\nHowever, the fact that soft attention mechanisms perform a pass over the entire\ninput sequence when producing each element in the output sequence precludes\ntheir use in online settings and results in a quadratic time complexity. Based\non the insight that the alignment between input and output sequence elements is\nmonotonic in many problems of interest, we propose an end-to-end differentiable\nmethod for learning monotonic alignments which, at test time, enables computing\nattention online and in linear time. We validate our approach on sentence\nsummarization, machine translation, and online speech recognition problems and\nachieve results competitive with existing sequence-to-sequence models.", "field": [], "task": ["Machine Translation", "Sentence Summarization", "Speech Recognition", "Text Summarization"], "method": [], "dataset": ["TIMIT"], "metric": ["Percentage error"], "title": "Online and Linear-Time Attention by Enforcing Monotonic Alignments"} {"abstract": "Learning distributions of graphs can be used for automatic drug discovery, molecular design, complex network analysis, and much more. We present an improved framework for learning generative models of graphs based on the idea of deep state machines. To learn state transition decisions we use a set of graph and node embedding techniques as memory of the state machine. Our analysis is based on learning the distribution of random graph generators for which we provide statistical tests to determine which properties can be learned and how well the original distribution of graphs is represented. 
We show that the design of the state machine favors specific distributions. Models of graphs of size up to 150 vertices are learned. Code and parameters are publicly available to reproduce our results.", "field": [], "task": ["Drug Discovery", "Graph Embedding"], "method": [], "dataset": ["Barabasi-Albert"], "metric": ["Entropy Difference"], "title": "DeepGG: a Deep Graph Generator"} {"abstract": "Sentiment analysis has immense implications in e-commerce through user feedback mining. Aspect-based sentiment analysis takes this one step further by enabling businesses to extract aspect specific sentimental information. In this paper, we present a novel approach of incorporating the neighboring aspects related information into the sentiment classification of the target aspect using memory networks. We show that our method outperforms the state of the art by 1.6{\\%} on average in two distinct domains: restaurant and laptop.", "field": [], "task": ["Aspect-Based Sentiment Analysis", "Extract Aspect", "Sentiment Analysis"], "method": [], "dataset": ["SemEval 2014 Task 4 Sub Task 2"], "metric": ["Laptop (Acc)", "Restaurant (Acc)", "Mean Acc (Restaurant + Laptop)"], "title": "IARM: Inter-Aspect Relation Modeling with Memory Networks in Aspect-Based Sentiment Analysis"} {"abstract": "The brain electrical activity presents several short events during sleep that can be observed as distinctive micro-structures in the electroencephalogram (EEG), such as sleep spindles and K-complexes. These events have been associated with biological processes and neurological disorders, making them a research topic in sleep medicine. However, manual detection limits their study because it is time-consuming and affected by significant inter-expert variability, motivating automatic approaches. We propose a deep learning approach based on convolutional and recurrent neural networks for sleep EEG event detection called Recurrent Event Detector (RED). RED uses one of two input representations: a) the time-domain EEG signal, or b) a complex spectrogram of the signal obtained with the Continuous Wavelet Transform (CWT). Unlike previous approaches, a fixed time window is avoided and temporal context is integrated to better emulate the visual criteria of experts. When evaluated on the MASS dataset, our detectors outperform the state of the art in both sleep spindle and K-complex detection with a mean F1-score of at least 80.9% and 82.6%, respectively. Although the CWT-domain model obtained a similar performance than its time-domain counterpart, the former allows in principle a more interpretable input representation due to the use of a spectrogram. The proposed approach is event-agnostic and can be used directly to detect other types of sleep events.", "field": [], "task": ["EEG", "K-complex detection", "Sleep Micro-event detection", "Spindle Detection", "Time Series"], "method": [], "dataset": ["MASS SS2"], "metric": ["F1-score (@IoU = 0.2)", "F1-score (@IoU = 0.3)"], "title": "RED: Deep Recurrent Neural Networks for Sleep EEG Event Detection"} {"abstract": "Generating an image from a given text description has two goals: visual\nrealism and semantic consistency. Although significant progress has been made\nin generating high-quality and visually realistic images using generative\nadversarial networks, guaranteeing semantic consistency between the text\ndescription and visual content remains very challenging. 
In this paper, we\naddress this problem by proposing a novel global-local attentive and\nsemantic-preserving text-to-image-to-text framework called MirrorGAN. MirrorGAN\nexploits the idea of learning text-to-image generation by redescription and\nconsists of three modules: a semantic text embedding module (STEM), a\nglobal-local collaborative attentive module for cascaded image generation\n(GLAM), and a semantic text regeneration and alignment module (STREAM). STEM\ngenerates word- and sentence-level embeddings. GLAM has a cascaded architecture\nfor generating target images from coarse to fine scales, leveraging both local\nword attention and global sentence attention to progressively enhance the\ndiversity and semantic consistency of the generated images. STREAM seeks to\nregenerate the text description from the generated image, which semantically\naligns with the given text description. Thorough experiments on two public\nbenchmark datasets demonstrate the superiority of MirrorGAN over other\nrepresentative state-of-the-art methods.", "field": [], "task": ["Image Generation", "Text-to-Image Generation"], "method": [], "dataset": ["COCO", "CUB"], "metric": ["Inception score"], "title": "MirrorGAN: Learning Text-to-image Generation by Redescription"} {"abstract": "Scene text recognition has attracted great interest from\nacademia and industry in recent years owing to\nits importance in a wide range of applications. Despite the\nmaturity of Optical Character Recognition (OCR) systems\ndedicated to document text, scene text recognition remains\na challenging problem. The large variations in background,\nappearance, and layout pose significant challenges, which\nthe traditional OCR methods cannot handle effectively.\nRecent advances in scene text recognition are driven\nby the success of deep learning-based recognition models.\nAmong them are methods that recognize text by characters\nusing convolutional neural networks (CNN), methods that\nclassify words with CNNs [24], [26], and methods that\nrecognize character sequences using a combination of a\nCNN and a recurrent neural network (RNN) [54]. In spite\nof their success, these methods do not explicitly address the\nproblem of irregular text, which is text that is not horizontal\nand frontal, has curved layout, etc. Instances of irregular\ntext frequently appear in natural scenes. As exemplified\nin Figure 1, typical cases include oriented text, perspective\ntext [49], and curved text. Designed without the invariance\nto such irregularities, previous methods often struggle in\nrecognizing such text instances.", "field": [], "task": ["Optical Character Recognition", "Rectification", "Scene Text", "Scene Text Recognition"], "method": [], "dataset": ["ICDAR2013", "ICDAR2015", "SVT"], "metric": ["Accuracy"], "title": "ASTER: An Attentional Scene Text Recognizer with Flexible Rectification"} {"abstract": "Attention-based encoder-decoder neural network models have recently shown promising results in goal-oriented dialogue systems. However, these models struggle to reason over and incorporate state-full knowledge while preserving their end-to-end text generation functionality. Since such models can greatly benefit from user intent and knowledge graph integration, in this paper we propose an RNN-based end-to-end encoder-decoder architecture which is trained with joint embeddings of the knowledge graph and the corpus as input.
The model additionally integrates user intent with text generation, trained with a multi-task learning paradigm and a regularization technique that penalizes generating the wrong entity as output. The model further incorporates a Knowledge Graph entity lookup during inference to guarantee that the generated output is state-full based on the local knowledge graph provided. We finally evaluated the model using the BLEU score; the empirical evaluation shows that our proposed architecture can improve the performance of task-oriented dialogue systems.", "field": [], "task": ["Goal-Oriented Dialog", "Goal-Oriented Dialogue Systems", "Multi-Task Learning", "Text Generation"], "method": [], "dataset": ["Kvret"], "metric": ["BLEU", "Vector Extrema", "Greedy Matching", "Embedding Average"], "title": "Incorporating Joint Embeddings into Goal-Oriented Dialogues with Multi-Task Learning"} {"abstract": "The main contribution of this paper is a simple semi-supervised pipeline that\nonly uses the original training set without collecting extra data. It is\nchallenging in 1) how to obtain more training data only from the training set\nand 2) how to use the newly generated data. In this work, the generative\nadversarial network (GAN) is used to generate unlabeled samples. We propose the\nlabel smoothing regularization for outliers (LSRO). This method assigns a\nuniform label distribution to the unlabeled images, which regularizes the\nsupervised model and improves the baseline. We verify the proposed method on a\npractical problem: person re-identification (re-ID). This task aims to retrieve\na query person from other cameras. We adopt the deep convolutional generative\nadversarial network (DCGAN) for sample generation, and a baseline convolutional\nneural network (CNN) for representation learning. Experiments show that adding\nthe GAN-generated data effectively improves the discriminative ability of\nlearned CNN embeddings. On three large-scale datasets, Market-1501, CUHK03 and\nDukeMTMC-reID, we obtain +4.37%, +1.6% and +2.46% improvement in rank-1\nprecision over the baseline CNN, respectively. We additionally apply the\nproposed method to fine-grained bird recognition and achieve a +0.6%\nimprovement over a strong baseline. The code is available at\nhttps://github.com/layumi/Person-reID_GAN.", "field": [], "task": ["Person Re-Identification", "Representation Learning"], "method": [], "dataset": ["DukeMTMC-reID", "Market-1501", "CUHK03"], "metric": ["Rank-1", "MAP"], "title": "Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline in vitro"} {"abstract": "Weakly supervised semantic instance segmentation with only image-level supervision, instead of relying on expensive pixel-wise masks or bounding box annotations, is an important problem to alleviate the data-hungry nature of deep learning. In this paper, we tackle this challenging problem by aggregating the image-level information of all training images into a large knowledge graph and exploiting semantic relationships from this graph. Specifically, our effort starts with some generic segment-based object proposals (SOP) without category priors. We propose a multiple instance learning (MIL) framework, which can be trained in an end-to-end manner using training images with image-level labels. For each proposal, this MIL framework can simultaneously compute probability distributions and category-aware semantic features, with which we can formulate a large undirected graph.
The category of background is also included in this graph to remove the massive noisy object proposals. An optimal multi-way cut of this graph can thus assign a reliable category label to each proposal. The denoised SOP with assigned category labels can be viewed as pseudo instance segmentation of training images, which are used to train fully supervised models. The proposed approach achieves state-of-the-art performance for both weakly supervised instance segmentation and semantic segmentation.", "field": [], "task": ["Instance Segmentation", "Multiple Instance Learning", "Semantic Segmentation", "Weakly-supervised instance segmentation", "Weakly-Supervised Semantic Segmentation"], "method": [], "dataset": ["PASCAL VOC 2012 test", "PASCAL VOC 2012 val"], "metric": ["Average Best Overlap", "Mean IoU", "mAP@0.75", "mAP@0.5", "mAP@0.25"], "title": "Leveraging Instance-, Image- and Dataset-Level Information for Weakly Supervised Instance Segmentation"} {"abstract": "Face detection is one of the most studied topics in the computer vision\ncommunity. Much of the progresses have been made by the availability of face\ndetection benchmark datasets. We show that there is a gap between current face\ndetection performance and the real world requirements. To facilitate future\nface detection research, we introduce the WIDER FACE dataset, which is 10 times\nlarger than existing datasets. The dataset contains rich annotations, including\nocclusions, poses, event categories, and face bounding boxes. Faces in the\nproposed dataset are extremely challenging due to large variations in scale,\npose and occlusion, as shown in Fig. 1. Furthermore, we show that WIDER FACE\ndataset is an effective training source for face detection. We benchmark\nseveral representative detection systems, providing an overview of\nstate-of-the-art performance and propose a solution to deal with large scale\nvariation. Finally, we discuss common failure cases that worth to be further\ninvestigated. Dataset can be downloaded at:\nmmlab.ie.cuhk.edu.hk/projects/WIDERFace", "field": [], "task": ["Face Detection"], "method": [], "dataset": ["WIDER Face (Hard)", "WIDER Face (Medium)", "WIDER Face (Easy)"], "metric": ["AP"], "title": "WIDER FACE: A Face Detection Benchmark"} {"abstract": "Learning to navigate in a visual environment following natural-language instructions is a challenging task, because the multimodal inputs to the agent are highly variable, and the training data on a new task is often limited. In this paper, we present the first pre-training and fine-tuning paradigm for vision-and-language navigation (VLN) tasks. By training on a large amount of image-text-action triplets in a self-supervised learning manner, the pre-trained model provides generic representations of visual environments and language instructions. It can be easily used as a drop-in for existing VLN frameworks, leading to the proposed agent called Prevalent. It learns more effectively in new tasks and generalizes better in a previously unseen environment. The performance is validated on three VLN tasks. On the Room-to-Room benchmark, our model improves the state-of-the-art from 47% to 51% on success rate weighted by path length. Further, the learned representation is transferable to other VLN tasks. 
On two recent tasks, vision-and-dialog navigation and \"Help, Anna!\" the proposed Prevalent leads to significant improvement over existing methods, achieving a new state of the art.", "field": [], "task": ["Self-Supervised Learning", "Vision and Language Navigation", "Visual Navigation"], "method": [], "dataset": ["R2R", "Help, Anna! (HANNA)"], "metric": ["spl"], "title": "Towards Learning a Generic Agent for Vision-and-Language Navigation via Pre-training"} {"abstract": "Graph Convolutional Networks (GCNs) have become a crucial tool on learning\nrepresentations of graph vertices. The main challenge of adapting GCNs on\nlarge-scale graphs is the scalability issue that it incurs heavy cost both in\ncomputation and memory due to the uncontrollable neighborhood expansion across\nlayers. In this paper, we accelerate the training of GCNs through developing an\nadaptive layer-wise sampling method. By constructing the network layer by layer\nin a top-down passway, we sample the lower layer conditioned on the top one,\nwhere the sampled neighborhoods are shared by different parent nodes and the\nover expansion is avoided owing to the fixed-size sampling. More importantly,\nthe proposed sampler is adaptive and applicable for explicit variance\nreduction, which in turn enhances the training of our method. Furthermore, we\npropose a novel and economical approach to promote the message passing over\ndistant nodes by applying skip connections. Intensive experiments on several\nbenchmarks verify the effectiveness of our method regarding the classification\naccuracy while enjoying faster convergence speed.", "field": [], "task": ["Graph Representation Learning", "Node Classification", "Representation Learning"], "method": [], "dataset": ["Cora", "Reddit", "Pubmed Full-supervised", "Cora Full-supervised", "Citeseer Full-supervised"], "metric": ["Accuracy"], "title": "Adaptive Sampling Towards Fast Graph Representation Learning"} {"abstract": "There is a growing interest in designing models that can deal with images\nfrom different visual domains. If there exists a universal structure in\ndifferent visual domains that can be captured via a common parameterization,\nthen we can use a single model for all domains rather than one model per\ndomain. A model aware of the relationships between different domains can also\nbe trained to work on new domains with less resources. However, to identify the\nreusable structure in a model is not easy. In this paper, we propose a\nmulti-domain learning architecture based on depthwise separable convolution.\nThe proposed approach is based on the assumption that images from different\ndomains share cross-channel correlations but have domain-specific spatial\ncorrelations. The proposed model is compact and has minimal overhead when being\napplied to new domains. Additionally, we introduce a gating mechanism to\npromote soft sharing between different domains. We evaluate our approach on\nVisual Decathlon Challenge, a benchmark for testing the ability of multi-domain\nmodels. 
The experiments show that our approach can achieve the highest score\nwhile only requiring 50% of the parameters compared with the state-of-the-art\napproaches.", "field": [], "task": ["Continual Learning"], "method": [], "dataset": ["visual domain decathlon (10 tasks)"], "metric": ["decathlon discipline (Score)"], "title": "Depthwise Convolution is All You Need for Learning Multiple Visual Domains"} {"abstract": "Robust geometric and semantic scene understanding is ever more important in many real-world applications such as autonomous driving and robotic navigation. In this paper, we propose a multi-task learning-based approach capable of jointly performing geometric and semantic scene understanding, namely depth prediction (monocular depth estimation and depth completion) and semantic scene segmentation. Within a single temporally constrained recurrent network, our approach uniquely takes advantage of a complex series of skip connections, adversarial training and the temporal constraint of sequential frame recurrence to produce consistent depth and semantic class labels simultaneously. Extensive experimental evaluation demonstrates the efficacy of our approach compared to other contemporary state-of-the-art techniques.", "field": [], "task": ["Autonomous Driving", "Depth Completion", "Depth Estimation", "Monocular Depth Estimation", "Multi-Task Learning", "Scene Segmentation", "Scene Understanding"], "method": [], "dataset": ["KITTI Eigen split"], "metric": ["absolute relative error"], "title": "Veritatem Dies Aperit- Temporally Consistent Depth Prediction Enabled by a Multi-Task Geometric and Semantic Scene Understanding Approach"} {"abstract": "Previous work on multimodal machine translation has shown that visual information is only needed in very specific cases, for example in the presence of ambiguous words where the textual context is not sufficient. As a consequence, models tend to learn to ignore this information. We propose a translate-and-refine approach to this problem where images are only used by a second stage decoder. This approach is trained jointly to generate a good first draft translation and to improve over this draft by (i) making better use of the target language textual context (both left and right-side contexts) and (ii) making use of visual context. This approach leads to the state of the art results. Additionally, we show that it has the ability to recover from erroneous or missing words in the source language.", "field": [], "task": ["Machine Translation", "Multimodal Machine Translation"], "method": [], "dataset": ["Multi30K"], "metric": ["BLEU (EN-FR)", "Meteor (EN-DE)", "Meteor (EN-FR)", "BLEU (EN-DE)"], "title": "Distilling Translations with Visual Awareness"} {"abstract": "We present the first sentence simplification model that learns explicit edit operations (ADD, DELETE, and KEEP) via a neural programmer-interpreter approach. Most current neural sentence simplification systems are variants of sequence-to-sequence models adopted from machine translation. These methods learn to simplify sentences as a byproduct of the fact that they are trained on complex-simple sentence pairs. By contrast, our neural programmer-interpreter is directly trained to predict explicit edit operations on targeted parts of the input sentence, resembling the way that humans might perform simplification and revision. 
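The multi-domain design described in the "Depthwise Convolution is All You Need for Learning Multiple Visual Domains" record above can be illustrated with a short sketch: each domain gets its own depthwise convolution (domain-specific spatial correlations), while a single pointwise 1x1 convolution is shared across domains (cross-channel correlations assumed to be common). This is an editor's PyTorch sketch with made-up channel counts and domain names, and it omits the paper's gating mechanism; it is not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiDomainSeparableConv(nn.Module):
    def __init__(self, channels, domains, kernel_size=3):
        super().__init__()
        # one depthwise conv per domain: domain-specific spatial filtering
        self.depthwise = nn.ModuleDict({
            d: nn.Conv2d(channels, channels, kernel_size,
                         padding=kernel_size // 2, groups=channels)
            for d in domains
        })
        # a single shared pointwise conv: cross-channel mixing reused by every domain
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x, domain):
        return self.pointwise(self.depthwise[domain](x))

block = MultiDomainSeparableConv(channels=32, domains=["imagenet", "svhn"])
x = torch.randn(2, 32, 56, 56)
print(block(x, "svhn").shape)  # torch.Size([2, 32, 56, 56])
```

Because only the small depthwise filters are duplicated per domain, adding a new domain contributes little overhead, which is the compactness argument the abstract makes.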
Our model outperforms previous state-of-the-art neural sentence simplification models (without external knowledge) by large margins on three benchmark text simplification corpora in terms of SARI (+0.95 WikiLarge, +1.89 WikiSmall, +1.41 Newsela), and is judged by humans to produce overall better and simpler output sentences.", "field": [], "task": ["Machine Translation", "Text Simplification"], "method": [], "dataset": ["PWKP / WikiSmall", "Newsela", "TurkCorpus"], "metric": ["BLEU", "SARI (EASSE>=0.2.1)", "SARI"], "title": "EditNTS: An Neural Programmer-Interpreter Model for Sentence Simplification through Explicit Editing"} {"abstract": "We present Espresso, an open-source, modular, extensible end-to-end neural automatic speech recognition (ASR) toolkit based on the deep learning library PyTorch and the popular neural machine translation toolkit fairseq. Espresso supports distributed training across GPUs and computing nodes, and features various decoding approaches commonly employed in ASR, including look-ahead word-based language model fusion, for which a fast, parallelized decoder is implemented. Espresso achieves state-of-the-art ASR performance on the WSJ, LibriSpeech, and Switchboard data sets among other end-to-end systems without data augmentation, and is 4--11x faster for decoding than similar systems (e.g. ESPnet).", "field": [], "task": ["Data Augmentation", "Language Modelling", "Machine Translation", "Speech Recognition"], "method": [], "dataset": ["Hub5'00 CallHome", "Hub5'00 SwitchBoard", "LibriSpeech test-other", "WSJ eval92", "LibriSpeech test-clean"], "metric": ["Eval2000", "Word Error Rate (WER)"], "title": "Espresso: A Fast End-to-end Neural Speech Recognition Toolkit"} {"abstract": "The estimation of 3D face shape from a single image must be robust to variations in lighting, head pose, expression, facial hair, makeup, and occlusions. Robustness requires a large training set of in-the-wild images, which by construction, lack ground truth 3D shape. To train a network without any 2D-to-3D supervision, we present RingNet, which learns to compute 3D face shape from a single image. Our key observation is that an individual's face shape is constant across images, regardless of expression, pose, lighting, etc. RingNet leverages multiple images of a person and automatically detected 2D face features. It uses a novel loss that encourages the face shape to be similar when the identity is the same and different for different people. We achieve invariance to expression by representing the face using the FLAME model. Once trained, our method takes a single image and outputs the parameters of FLAME, which can be readily animated. Additionally we create a new database of faces `not quite in-the-wild' (NoW) with 3D head scans and high-resolution images of the subjects in a wide variety of conditions. We evaluate publicly available methods and find that RingNet is more accurate than methods that use 3D supervision. 
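The explicit edit operations (ADD, DELETE, KEEP) that the EditNTS record above describes can be made concrete by executing an edit program against a source sentence. The operation encoding, the example sentence, and the hand-written program below are illustrative assumptions by the editor, not the paper's data format.

```python
def apply_edit_program(source_tokens, program):
    """KEEP copies the current source token, DELETE skips it,
    and ("ADD", w) inserts w without consuming a source token."""
    out, i = [], 0
    for op in program:
        if op == "KEEP":
            out.append(source_tokens[i])
            i += 1
        elif op == "DELETE":
            i += 1
        elif isinstance(op, tuple) and op[0] == "ADD":
            out.append(op[1])
        else:
            raise ValueError(f"unknown op: {op!r}")
    out.extend(source_tokens[i:])  # copy any leftover source tokens
    return " ".join(out)

src = "the municipality is traversed by the river".split()
prog = ["KEEP", "KEEP", ("ADD", "has"), "DELETE", "DELETE", "DELETE", "KEEP", "KEEP"]
print(apply_edit_program(src, prog))  # the municipality has the river
```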
The dataset, model, and results are available for research purposes at http://ringnet.is.tuebingen.mpg.de.", "field": [], "task": ["3D Face Reconstruction"], "method": [], "dataset": ["NoW Benchmark", "Stirling-LQ (FG2018 3D face reconstruction challenge)", "Stirling-HQ (FG2018 3D face reconstruction challenge)"], "metric": ["Mean Reconstruction Error (mm)"], "title": "Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision"} {"abstract": "We propose a novel self-supervised method, referred to as Video Cloze Procedure (VCP), to learn rich spatial-temporal representations. VCP first generates \"blanks\" by withholding video clips and then creates \"options\" by applying spatio-temporal operations on the withheld clips. Finally, it fills the blanks with \"options\" and learns representations by predicting the categories of operations applied on the clips. VCP can act as either a proxy task or a target task in self-supervised learning. As a proxy task, it converts rich self-supervised representations into video clip operations (options), which enhances the flexibility and reduces the complexity of representation learning. As a target task, it can assess learned representation models in a uniform and interpretable manner. With VCP, we train spatial-temporal representation models (3D-CNNs) and apply such models on action recognition and video retrieval tasks. Experiments on commonly used benchmarks show that the trained models outperform the state-of-the-art self-supervised models with significant margins.", "field": [], "task": ["Action Recognition", "Representation Learning", "Self-Supervised Action Recognition", "Self-Supervised Learning", "Self-supervised Video Retrieval", "Video Retrieval"], "method": [], "dataset": ["UCF101", "HMDB51"], "metric": ["3-fold Accuracy", "Pre-Training Dataset", "Top-1 Accuracy"], "title": "Video Cloze Procedure for Self-Supervised Spatio-Temporal Learning"} {"abstract": "Single shot detector simultaneously predicts object categories and regression offsets of the default boxes. Despite of high efficiency, this structure has some inappropriate designs: (1) The classification result of the default box is improperly assigned to that of the regressed box during inference, (2) Only regression once is not good enough for accurate object detection. To solve the first problem, a novel reg-offset-cls (ROC) module is proposed. It contains three hierarchical steps: box regression, the feature sampling location predication, and the regressed box classification with the features of offset locations. To further solve the second problem, a hierarchical shot detector (HSD) is proposed, which stacks two ROC modules and one feature enhanced module. The second ROC treats the regressed boxes and the feature sampling locations of features in the first ROC as the inputs. Meanwhile, the feature enhanced module injected between two ROCs aims to extract the local and non-local context. Experiments on the MS COCO and PASCAL VOC datasets demonstrate the superiority of proposed HSD. 
Without the bells or whistles, HSD outperforms all one-stage methods at real-time speed.\r", "field": [], "task": ["Object Detection", "Regression"], "method": [], "dataset": ["PASCAL VOC 2007", "COCO test-dev"], "metric": ["APM", "MAP", "box AP", "AP75", "APS", "APL", "AP50"], "title": "Hierarchical Shot Detector"} {"abstract": "Dialogue systems in open domain have achieved great success due to the easily obtained single-turn corpus and the development of deep learning, but the multi-turn scenario is still a challenge because of the frequent coreference and information omission. In this paper, we investigate the incomplete utterance restoration which has brought general improvement over multi-turn dialogue systems in recent studies. Meanwhile, jointly inspired by the autoregression for text generation and the sequence labeling for text editing, we propose a novel semi autoregressive generator (SARG) with the high efficiency and flexibility. Moreover, experiments on two benchmarks show that our proposed model significantly outperforms the state-of-the-art models in terms of quality and inference speed.", "field": [], "task": ["Dialogue Rewriting", "Text Generation"], "method": [], "dataset": ["Multi-Rewrite", "CANARD"], "metric": ["ROUGE-1", "BLEU-2", "Rewriting F1", "Rewriting F3", "BLEU-1", "Rewriting F2", "ROUGE-2", "BLEU"], "title": "SARG: A Novel Semi Autoregressive Generator for Multi-turn Incomplete Utterance Restoration"} {"abstract": "We describe Howl, an open-source wake word detection toolkit with native support for open speech datasets, like Mozilla Common Voice and Google Speech Commands. We report benchmark results on Speech Commands and our own freely available wake word detection dataset, built from MCV. We operationalize our system for Firefox Voice, a plugin enabling speech interactivity for the Firefox web browser. Howl represents, to the best of our knowledge, the first fully productionized yet open-source wake word detection toolkit with a web browser deployment target. Our codebase is at https://github.com/castorini/howl.", "field": [], "task": ["Keyword Spotting"], "method": [], "dataset": ["Google Speech Commands"], "metric": ["Google Speech Commands V1 12"], "title": "Howl: A Deployed, Open-Source Wake Word Detection System"} {"abstract": "We propose a single-stage Human-Object Interaction (HOI) detection method that has outperformed all existing methods on HICO-DET dataset at 37 fps on a single Titan XP GPU. It is the first real-time HOI detection method. Conventional HOI detection methods are composed of two stages, i.e., human-object proposals generation, and proposals classification. Their effectiveness and efficiency are limited by the sequential and separate architecture. In this paper, we propose a Parallel Point Detection and Matching (PPDM) HOI detection framework. In PPDM, an HOI is defined as a point triplet < human point, interaction point, object point>. Human and object points are the center of the detection boxes, and the interaction point is the midpoint of the human and object points. PPDM contains two parallel branches, namely point detection branch and point matching branch. The point detection branch predicts three points. Simultaneously, the point matching branch predicts two displacements from the interaction point to its corresponding human and object points. The human point and the object point originated from the same interaction point are considered as matched pairs. 
In our novel parallel architecture, the interaction points implicitly provide context and regularization for human and object detection. The isolated detection boxes that are unlikely to form meaningful HOI triplets are suppressed, which increases the precision of HOI detection. Moreover, the matching between human and object detection boxes is only applied around limited numbers of filtered candidate interaction points, which saves much computational cost. Additionally, we build a new application-oriented database named HOI-A, which serves as a good supplement to the existing datasets. The source code and the dataset will be made publicly available to facilitate the development of HOI detection.", "field": [], "task": ["Human-Object Interaction Detection", "Object Detection"], "method": [], "dataset": ["HICO-DET"], "metric": ["Time Per Frame (ms)", "MAP"], "title": "PPDM: Parallel Point Detection and Matching for Real-time Human-Object Interaction Detection"} {"abstract": "Novelty detection, i.e., identifying whether a given sample is drawn from outside the training distribution, is essential for reliable machine learning. To this end, there have been many attempts at learning a representation well-suited for novelty detection and designing a score based on such representation. In this paper, we propose a simple, yet effective method named contrasting shifted instances (CSI), inspired by the recent success on contrastive learning of visual representations. Specifically, in addition to contrasting a given sample with other instances as in conventional contrastive learning methods, our training scheme contrasts the sample with distributionally-shifted augmentations of itself. Based on this, we propose a new detection score that is specific to the proposed training scheme. Our experiments demonstrate the superiority of our method under various novelty detection scenarios, including unlabeled one-class, unlabeled multi-class and labeled multi-class settings, with various image benchmark datasets. Code and pre-trained models are available at https://github.com/alinlab/CSI.", "field": [], "task": ["Anomaly Detection", "Out-of-Distribution Detection", "Representation Learning", "Unsupervised Anomaly Detection"], "method": [], "dataset": ["One-class CIFAR-100", "Unlabeled CIFAR-10 vs CIFAR-100", "One-class CIFAR-10", "One-class ImageNet-30"], "metric": ["AUROC"], "title": "CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances"} {"abstract": "MetaDL Challenge 2020 focused on image classification tasks in few-shot settings. This paper describes the second-best submission in the competition. Our meta learning approach modifies the distribution of classes in a latent space produced by a backbone network for each class in order to better follow the Gaussian distribution. After this operation, which we call the Latent Space Transform algorithm, centers of classes are further aligned in an iterative fashion of the Expectation Maximisation algorithm to utilize information in unlabeled data that are often provided on top of few labelled instances. For this task, we utilize optimal transport mapping using the Sinkhorn algorithm.
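The matching step summarized in the PPDM record above (an interaction point predicts displacements toward its human and object centers, and each displaced estimate is matched to the nearest detected center) can be sketched in a few lines of numpy. The array shapes and the toy coordinates are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def match_points(interaction_pts, disp_human, disp_object, human_centers, object_centers):
    """interaction_pts, disp_*: (N, 2); human_centers: (M, 2); object_centers: (K, 2).
    Returns, for every interaction point, the indices of the matched human and object."""
    pairs = []
    for p, dh, do in zip(interaction_pts, disp_human, disp_object):
        est_h = p + dh  # estimated human center for this interaction point
        est_o = p + do  # estimated object center for this interaction point
        h_idx = np.argmin(np.linalg.norm(human_centers - est_h, axis=1))
        o_idx = np.argmin(np.linalg.norm(object_centers - est_o, axis=1))
        pairs.append((int(h_idx), int(o_idx)))
    return pairs

ipts = np.array([[50.0, 40.0]])                     # one detected interaction point
dh = np.array([[-20.0, 0.0]])                       # displacement toward the human
do = np.array([[25.0, 5.0]])                        # displacement toward the object
humans = np.array([[31.0, 41.0], [90.0, 90.0]])
objects = np.array([[74.0, 44.0], [10.0, 10.0]])
print(match_points(ipts, dh, do, humans, objects))  # [(0, 0)]
```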
Our experiments show that this approach outperforms previous works as well as other variants of the algorithm based on the K-Nearest Neighbour algorithm, Gaussian Mixture Models, etc.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Image Classification", "Meta-Learning", "Transfer Learning"], "method": [], "dataset": ["CIFAR-FS 5-way (1-shot)", "CUB 200 5-way 1-shot", "CUB 200 5-way 5-shot", "CIFAR-FS 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Transfer learning based few-shot classification using optimal transport mapping from preprocessed latent space of backbone neural network"} {"abstract": "Video object segmentation aims at segmenting a specific object throughout\na video sequence, given only an annotated first frame. Recent deep learning\nbased approaches find it effective by fine-tuning a general-purpose\nsegmentation model on the annotated frame using hundreds of iterations of\ngradient descent. Despite the high accuracy these methods achieve, the\nfine-tuning process is inefficient and fails to meet the requirements of real\nworld applications. We propose a novel approach that uses a single forward pass\nto adapt the segmentation model to the appearance of a specific object.\nSpecifically, a second meta neural network named modulator is learned to\nmanipulate the intermediate layers of the segmentation network given limited\nvisual and spatial information of the target object. The experiments show that\nour approach is 70 times faster than fine-tuning approaches while achieving\nsimilar accuracy.", "field": [], "task": ["Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Instance Segmentation", "Video Object Segmentation", "Video Semantic Segmentation", "Visual Object Tracking"], "method": [], "dataset": ["YouTube-VOS", "DAVIS 2017 (test-dev)", "DAVIS 2017 (val)", "YouTube-VIS validation", "DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "Overall", "F-Measure (Unseen)", "Jaccard (Unseen)", "AR1", "AP75", "Jaccard (Recall)", "mask AP", "F-Measure (Seen)", "Jaccard (Decay)", "F-measure (Mean)", "AR10", "Speed (FPS)", "Jaccard (Seen)", "O (Average of Measures)", "F-measure (Recall)", "AP50", "J&F"], "title": "Efficient Video Object Segmentation via Network Modulation"} {"abstract": "We present a modular cross-domain neural network, the XPDNet, and its application to the MRI reconstruction task. This approach consists in unrolling the PDHG algorithm as well as learning the acceleration scheme between steps. We also adopt state-of-the-art techniques specific to Deep Learning for MRI reconstruction. At the time of writing, this approach is the best performer in PSNR on the fastMRI leaderboards for both knee and brain at acceleration factor 4.", "field": [], "task": ["Image Reconstruction", "MRI Reconstruction"], "method": [], "dataset": ["fastMRI"], "metric": ["PSNR"], "title": "XPDNet for MRI Reconstruction: an Application to the fastMRI 2020 Brain Challenge"} {"abstract": "Recently, consistency-based methods have achieved state-of-the-art results in semi-supervised learning (SSL). These methods always involve two roles, an explicit or implicit teacher model and a student model, and penalize predictions under different perturbations by a consistency constraint. However, the weights of these two roles are tightly coupled since the teacher is essentially an exponential moving average (EMA) of the student. In this work, we show that the coupled EMA teacher causes a performance bottleneck.
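The Sinkhorn-based optimal transport mapping mentioned in the few-shot record above reduces to alternating row and column scalings of a Gibbs kernel. Below is a generic textbook Sinkhorn iteration (not the authors' code), applied to a toy problem of transporting unlabeled embeddings onto two class centers with uniform marginals; the regularization strength and iteration count are arbitrary choices.

```python
import numpy as np

def sinkhorn(cost, a, b, eps=0.5, n_iters=200):
    """cost: (n, m) cost matrix; a: (n,) source weights; b: (m,) target weights.
    Returns the (n, m) entropically regularized transport plan."""
    K = np.exp(-cost / eps)   # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)     # scale columns to match the target marginal b
        u = a / (K @ v)       # scale rows to match the source marginal a
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 2))                    # toy unlabeled embeddings
centers = np.array([[2.0, 0.0], [-2.0, 0.0]])      # toy class centers
cost = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
plan = sinkhorn(cost, np.full(6, 1 / 6), np.full(2, 1 / 2))
print(plan.round(3))                               # each row sums to 1/6; soft assignments
```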
To address this problem, we introduce Dual Student, which replaces the teacher with another student. We also define a novel concept, stable sample, following which a stabilization constraint is designed for our structure to be trainable. Further, we discuss two variants of our method, which produce even higher performance. Extensive experiments show that our method improves the classification performance significantly on several main SSL benchmarks. Specifically, it reduces the error rate of the 13-layer CNN from 16.84% to 12.39% on CIFAR-10 with 1k labels and from 34.10% to 31.56% on CIFAR-100 with 10k labels. In addition, our method also achieves a clear improvement in domain adaptation.", "field": [], "task": ["Semi-Supervised Image Classification", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["ImageNet - 10% labeled data", "SVHN, 500 Labels", "CIFAR-10, 2000 Labels", "SVHN, 250 Labels", "CIFAR-10, 1000 Labels", "cifar-100, 10000 Labels", "CIFAR-10, 4000 Labels"], "metric": ["Top 5 Accuracy", "Accuracy", "Top 1 Accuracy"], "title": "Dual Student: Breaking the Limits of the Teacher in Semi-supervised Learning"} {"abstract": "Bayesian Optimisation (BO) refers to a class of methods for global\noptimisation of a function $f$ which is only accessible via point evaluations.\nIt is typically used in settings where $f$ is expensive to evaluate. A common\nuse case for BO in machine learning is model selection, where it is not\npossible to analytically model the generalisation performance of a statistical\nmodel, and we resort to noisy and expensive training and validation procedures\nto choose the best model. Conventional BO methods have focused on Euclidean and\ncategorical domains, which, in the context of model selection, only permits\ntuning scalar hyper-parameters of machine learning algorithms. However, with\nthe surge of interest in deep learning, there is an increasing demand to tune\nneural network \\emph{architectures}. In this work, we develop NASBOT, a\nGaussian process based BO framework for neural architecture search. To\naccomplish this, we develop a distance metric in the space of neural network\narchitectures which can be computed efficiently via an optimal transport\nprogram. This distance might be of independent interest to the deep learning\ncommunity as it may find applications outside of BO. We demonstrate that NASBOT\noutperforms other alternatives for architecture search in several cross\nvalidation based model selection tasks on multi-layer perceptrons and\nconvolutional neural networks.", "field": [], "task": ["Bayesian Optimisation", "Model Selection", "Neural Architecture Search"], "method": [], "dataset": ["NAS-Bench-201, ImageNet-16-120"], "metric": ["Search time (s)", "Accuracy (Test)"], "title": "Neural Architecture Search with Bayesian Optimisation and Optimal Transport"} {"abstract": "We investigate the following question for machine translation (MT): can we develop a single universal MT model to serve as the common seed and obtain derivative and improved models on arbitrary language pairs? We propose mRASP, an approach to pre-train a universal multilingual neural machine translation model. Our key idea in mRASP is its novel technique of random aligned substitution, which brings words and phrases with similar meanings across multiple languages closer in the representation space. We pre-train a mRASP model on 32 language pairs jointly with only public datasets. 
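The coupling that the Dual Student record above argues against, and the replacement it proposes, can be sketched side by side: a Mean Teacher-style EMA update ties teacher weights to the student, whereas two independent students only exchange a consistency signal. The consistency term below is a simplified stand-in for the paper's stabilization constraint, and the tiny linear models are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def ema_update(teacher, student, alpha=0.99):
    # Mean Teacher-style coupling: teacher weights track an EMA of the student.
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(alpha).add_(s, alpha=1 - alpha)

def dual_consistency(logits_a, logits_b):
    # Symmetric consistency between two independently trained students.
    return F.mse_loss(logits_a.softmax(-1), logits_b.softmax(-1))

student_a, student_b = nn.Linear(8, 3), nn.Linear(8, 3)
teacher = nn.Linear(8, 3)
ema_update(teacher, student_a)      # the tight coupling the abstract identifies

x = torch.randn(4, 8)
loss = dual_consistency(student_a(x), student_b(x))
loss.backward()                     # both students receive gradients; no EMA tie between them
print(float(loss))
```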
The model is then fine-tuned on downstream language pairs to obtain specialized MT models. We carry out extensive experiments on 42 translation directions across diverse settings, including low-, medium-, and rich-resource pairs, as well as transfer to exotic language pairs. Experimental results demonstrate that mRASP achieves significant performance improvement compared to directly training on those target pairs. This is the first time it has been verified that multiple low-resource language pairs can be utilized to improve rich-resource MT. Surprisingly, mRASP is even able to improve the translation quality on exotic languages that never occur in the pre-training corpus. Code, data, and pre-trained models are available at https://github.com/linzehui/mRASP.", "field": [], "task": ["Machine Translation"], "method": [], "dataset": ["WMT2014 English-French"], "metric": ["BLEU score", "SacreBLEU"], "title": "Pre-training Multilingual Neural Machine Translation by Leveraging Alignment Information"} {"abstract": "Automatic question generation aims to generate questions from a text passage\nwhere the generated questions can be answered by certain sub-spans of the given\npassage. Traditional methods mainly use rigid heuristic rules to transform a\nsentence into related questions. In this work, we propose to apply the neural\nencoder-decoder model to generate meaningful and diverse questions from natural\nlanguage sentences. The encoder reads the input text and the answer position,\nto produce an answer-aware input representation, which is fed to the decoder to\ngenerate an answer focused question. We conduct a preliminary study on neural\nquestion generation from text with the SQuAD dataset, and the experiment\nresults show that our method can produce fluent and diverse questions.", "field": [], "task": ["Question Generation"], "method": [], "dataset": ["SQuAD1.1"], "metric": ["BLEU-4"], "title": "Neural Question Generation from Text: A Preliminary Study"} {"abstract": "Mastering a video game requires skill, tactics and strategy. While these\nattributes may be acquired naturally by human players, teaching them to a\ncomputer program is a far more challenging task. In recent years, extensive\nresearch was carried out in the field of reinforcement learning and numerous\nalgorithms were introduced, aiming to learn how to perform human tasks such as\nplaying video games. As a result, the Arcade Learning Environment (ALE)\n(Bellemare et al., 2013) has become a commonly used benchmark environment\nallowing algorithms to train on various Atari 2600 games. In many games the\nstate-of-the-art algorithms outperform humans. In this paper we introduce a new\nlearning environment, the Retro Learning Environment --- RLE, that can run\ngames on the Super Nintendo Entertainment System (SNES), Sega Genesis and\nseveral other gaming consoles. The environment is expandable, allowing for more\nvideo games and consoles to be easily added to the environment, while\nmaintaining the same interface as ALE. Moreover, RLE is compatible with Python\nand Torch.
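The random aligned substitution that mRASP pre-training relies on (see the record above) amounts to swapping source words for dictionary translations so that cross-lingual synonyms share contexts. The toy dictionary, tokenization, and substitution rate below are illustrative assumptions by the editor, not the released pre-processing code.

```python
import random

BILINGUAL_DICT = {"house": "maison", "cat": "chat", "drinks": "boit", "milk": "lait"}

def random_aligned_substitution(tokens, dictionary, rate=0.3, seed=0):
    # Replace a random subset of dictionary words with their aligned translations.
    rng = random.Random(seed)
    return [dictionary[tok] if tok in dictionary and rng.random() < rate else tok
            for tok in tokens]

src = "the cat drinks milk near the house".split()
print(" ".join(random_aligned_substitution(src, BILINGUAL_DICT)))
# prints the sentence with a random subset of the dictionary words replaced by translations
```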
SNES games pose a significant challenge to current algorithms due to\ntheir higher level of complexity and versatility.", "field": [], "task": ["Atari Games", "SNES Games"], "method": [], "dataset": ["Wolfenstein", "Super Mario", "Gradius III", "Mortal Kombat", "F-Zero"], "metric": ["Score"], "title": "Playing SNES in the Retro Learning Environment"} {"abstract": "Many of the leading approaches for video understanding are data-hungry and\ntime-consuming, failing to capture the gist of spatial-temporal evolution in an\nefficient manner. The latest research shows that CNN network can reason about\nstatic relation of entities in images. To further exploit its capacity in\ndynamic evolution reasoning, we introduce a novel network module called\nDenseImage Network(DIN) with two main contributions. 1) A novel compact\nrepresentation of video which distills its significant spatial-temporal\nevolution into a matrix called DenseImage, primed for efficient video encoding.\n2) A simple yet powerful learning strategy based on DenseImage and a\ntemporal-order-preserving CNN network is proposed for video understanding,\nwhich contains a local temporal correlation constraint capturing temporal\nevolution at multiple time scales with different filter widths. Extensive\nexperiments on two recent challenging benchmarks demonstrate that our\nDenseImage Network can accurately capture the common spatial-temporal evolution\nbetween similar actions, even with enormous visual variations or different time\nscales. Moreover, we obtain the state-of-the-art results in action and gesture\nrecognition with much less time-and-memory cost, indicating its immense\npotential in video representing and understanding.", "field": [], "task": ["Gesture Recognition", "Video Understanding"], "method": [], "dataset": ["Jester", "Something-Something V2"], "metric": ["Val", "Top-1 Accuracy"], "title": "DenseImage Network: Video Spatial-Temporal Evolution Encoding and Understanding"} {"abstract": "Advances in image super-resolution (SR) have recently benefited significantly\nfrom rapid developments in deep neural networks. Inspired by these recent\ndiscoveries, we note that many state-of-the-art deep SR architectures can be\nreformulated as a single-state recurrent neural network (RNN) with finite\nunfoldings. In this paper, we explore new structures for SR based on this\ncompact RNN view, leading us to a dual-state design, the Dual-State Recurrent\nNetwork (DSRN). Compared to its single state counterparts that operate at a\nfixed spatial resolution, DSRN exploits both low-resolution (LR) and\nhigh-resolution (HR) signals jointly. Recurrent signals are exchanged between\nthese states in both directions (both LR to HR and HR to LR) via delayed\nfeedback. Extensive quantitative and qualitative evaluations on benchmark\ndatasets and on a recent challenge demonstrate that the proposed DSRN performs\nfavorably against state-of-the-art algorithms in terms of both memory\nconsumption and predictive accuracy.", "field": [], "task": ["Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["Set5 - 4x upscaling", "Urban100 - 4x upscaling", "BSD100 - 4x upscaling", "Set14 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Image Super-Resolution via Dual-State Recurrent Networks"} {"abstract": "We present CURL: Contrastive Unsupervised Representations for Reinforcement Learning. CURL extracts high-level features from raw pixels using contrastive learning and performs off-policy control on top of the extracted features. 
CURL outperforms prior pixel-based methods, both model-based and model-free, on complex tasks in the DeepMind Control Suite and Atari Games showing 1.9x and 1.2x performance gains at the 100K environment and interaction steps benchmarks respectively. On the DeepMind Control Suite, CURL is the first image-based algorithm to nearly match the sample-efficiency of methods that use state-based features. Our code is open-sourced and available at https://github.com/MishaLaskin/curl.", "field": [], "task": ["Atari Games", "Continuous Control"], "method": [], "dataset": ["Atari 2600 Amidar", "Atari 2600 Demon Attack", "Atari 2600 Alien", "Atari 2600 Boxing", "Cartpole, swingup (DMControl100k)", "Atari 2600 Bank Heist", "Atari 2600 Assault", "Reacher, easy (DMControl100k)", "Atari 2600 Private Eye", "Atari 2600 Asterix", "Walker, walk (DMControl100k)", "Reacher, easy (DMControl500k)", "Atari 2600 Breakout", "Atari 2600 Crazy Climber", "Atari 2600 Pong", "Atari 2600 Krull", "Cheetah, run (DMControl100k)", "Atari 2600 Freeway", "Atari 2600 James Bond", "Cheetah, run (DMControl500k)", "Atari 2600 Kangaroo", "Atari 2600 Ms. Pacman", "Atari 2600 Seaquest", "Atari 2600 Frostbite", "Atari 2600 Battle Zone", "Atari 2600 Gopher", "Walker, walk (DMControl500k)", "Atari 2600 Road Runner", "Atari 2600 Chopper Command", "Atari 2600 Kung-Fu Master", "Atari 2600 Up and Down", "Ball in cup, catch (DMControl100k)", "Ball in cup, catch (DMControl500k)", "Finger, spin (DMControl100k)", "Cartpole, swingup (DMControl500k)", "Finger, spin (DMControl500k)", "Atari 2600 Q*Bert", "Atari 2600 HERO"], "metric": ["Score", "Medium Human-Normalized Score"], "title": "CURL: Contrastive Unsupervised Representations for Reinforcement Learning"} {"abstract": "Object parsing -- the task of decomposing an object into its semantic parts\n-- has traditionally been formulated as a category-level segmentation problem.\nConsequently, when there are multiple objects in an image, current methods\ncannot count the number of objects in the scene, nor can they determine which\npart belongs to which object. We address this problem by segmenting the parts\nof objects at an instance-level, such that each pixel in the image is assigned\na part label, as well as the identity of the object it belongs to. Moreover, we\nshow how this approach benefits us in obtaining segmentations at coarser\ngranularities as well. Our proposed network is trained end-to-end given\ndetections, and begins with a category-level segmentation module. Thereafter, a\ndifferentiable Conditional Random Field, defined over a variable number of\ninstances for every input image, reasons about the identity of each part by\nassociating it with a human detection. In contrast to other approaches, our\nmethod can handle the varying number of people in each image and our holistic\nnetwork produces state-of-the-art results in instance-level part and human\nsegmentation, together with competitive results in category-level part\nsegmentation, all achieved by a single forward-pass through our neural network.", "field": [], "task": ["Human Detection", "Human Parsing", "Multi-Human Parsing"], "method": [], "dataset": ["PASCAL-Part"], "metric": ["AP 0.5"], "title": "Holistic, Instance-Level Human Parsing"} {"abstract": "Neural networks trained on datasets such as ImageNet have led to major\nadvances in visual object classification. 
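The contrastive objective summarized in the CURL record above can be sketched compactly: two augmented views of an observation are encoded by an online encoder and a momentum copy, and an InfoNCE loss with a learned bilinear similarity treats the matching pair as the positive. The encoder sizes, the stand-in "augmentations", and the omitted EMA update are simplifications by the editor, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveHead(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Parameter(torch.eye(dim))              # learned bilinear similarity

    def forward(self, queries, keys):
        logits = queries @ self.W @ keys.t()               # (B, B) pairwise scores
        logits = logits - logits.max(dim=1, keepdim=True).values  # numerical stability
        labels = torch.arange(queries.size(0))             # positives sit on the diagonal
        return F.cross_entropy(logits, labels)

dim, batch = 16, 8
encoder = nn.Linear(32, dim)                               # stand-in for the pixel encoder
momentum_encoder = nn.Linear(32, dim)
momentum_encoder.load_state_dict(encoder.state_dict())     # keys come from an EMA copy

view_a, view_b = torch.randn(batch, 32), torch.randn(batch, 32)  # stand-ins for two crops
q = encoder(view_a)
with torch.no_grad():
    k = momentum_encoder(view_b)                           # no gradient through the key path
loss = ContrastiveHead(dim)(q, k)
loss.backward()
print(float(loss))
```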
One obstacle that prevents networks\nfrom reasoning more deeply about complex scenes and situations, and from\nintegrating visual knowledge with natural language, like humans do, is their\nlack of common sense knowledge about the physical world. Videos, unlike still\nimages, contain a wealth of detailed information about the physical world.\nHowever, most labelled video datasets represent high-level concepts rather than\ndetailed physical aspects about actions and scenes. In this work, we describe\nour ongoing collection of the \"something-something\" database of video\nprediction tasks whose solutions require a common sense understanding of the\ndepicted situation. The database currently contains more than 100,000 videos\nacross 174 classes, which are defined as caption-templates. We also describe\nthe challenges in crowd-sourcing this data at scale.", "field": [], "task": ["Action Recognition", "Common Sense Reasoning", "Object Classification", "Video Prediction"], "method": [], "dataset": ["Something-Something V2"], "metric": ["Top-5 Accuracy", "Top-1 Accuracy"], "title": "The \"something something\" video database for learning and evaluating visual common sense"} {"abstract": "Simple Online and Realtime Tracking (SORT) is a pragmatic approach to\nmultiple object tracking with a focus on simple, effective algorithms. In this\npaper, we integrate appearance information to improve the performance of SORT.\nDue to this extension we are able to track objects through longer periods of\nocclusions, effectively reducing the number of identity switches. In spirit of\nthe original framework we place much of the computational complexity into an\noffline pre-training stage where we learn a deep association metric on a\nlarge-scale person re-identification dataset. During online application, we\nestablish measurement-to-track associations using nearest neighbor queries in\nvisual appearance space. Experimental evaluation shows that our extensions\nreduce the number of identity switches by 45%, achieving overall competitive\nperformance at high frame rates.", "field": [], "task": ["Large-Scale Person Re-Identification", "Multiple Object Tracking", "Object Tracking", "Person Re-Identification", "Video Instance Segmentation"], "method": [], "dataset": ["YouTube-VIS validation"], "metric": ["AR10", "AR1", "AP75", "AP50", "mask AP"], "title": "Simple Online and Realtime Tracking with a Deep Association Metric"} {"abstract": "We present the first real-time method to capture the full global 3D skeletal\npose of a human in a stable, temporally consistent manner using a single RGB\ncamera. Our method combines a new convolutional neural network (CNN) based pose\nregressor with kinematic skeleton fitting. Our novel fully-convolutional pose\nformulation regresses 2D and 3D joint positions jointly in real time and does\nnot require tightly cropped input frames. A real-time kinematic skeleton\nfitting method uses the CNN output to yield temporally stable 3D global pose\nreconstructions on the basis of a coherent kinematic skeleton. This makes our\napproach the first monocular RGB method usable in real-time applications such\nas 3D character control---thus far, the only monocular methods for such\napplications employed specialized RGB-D cameras. Our method's accuracy is\nquantitatively on par with the best offline 3D monocular RGB pose estimation\nmethods. Our results are qualitatively comparable to, and sometimes better\nthan, results from monocular RGB-D approaches, such as the Kinect. 
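The appearance-based association step described in the Deep SORT record above is, at its core, a nearest-neighbor query in cosine-distance space solved as a linear assignment. The sketch below uses random vectors as stand-ins for the learned re-identification features and an assumed distance threshold; it omits the Kalman-filter motion gating of the full tracker.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_feats, det_feats, max_cosine_distance=0.3):
    t = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    d = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    cost = 1.0 - t @ d.T                                   # cosine distance matrix
    rows, cols = linear_sum_assignment(cost)               # optimal track-detection pairing
    return [(int(r), int(c)) for r, c in zip(rows, cols)
            if cost[r, c] <= max_cosine_distance]          # gate unlikely matches

rng = np.random.default_rng(1)
tracks = rng.normal(size=(3, 128))                          # gallery features, one per track
dets = np.vstack([tracks[2] + 0.05 * rng.normal(size=128),  # detection resembling track 2
                  tracks[0] + 0.05 * rng.normal(size=128)]) # detection resembling track 0
print(associate(tracks, dets))                              # [(0, 1), (2, 0)]
```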
However, we\nshow that our approach is more broadly applicable than RGB-D solutions, i.e. it\nworks for outdoor scenes, community videos, and low quality commodity RGB\ncameras.", "field": [], "task": ["3D Human Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["MPI-INF-3DHP"], "metric": ["3DPCK", "MJPE", "AUC"], "title": "VNect: Real-time 3D Human Pose Estimation with a Single RGB Camera"} {"abstract": "Text classification is one of the fundamental tasks in natural language\nprocessing. Recently, deep neural networks have achieved promising performance\nin the text classification task compared to shallow models. Despite of the\nsignificance of deep models, they ignore the fine-grained (matching signals\nbetween words and classes) classification clues since their classifications\nmainly rely on the text-level representations. To address this problem, we\nintroduce the interaction mechanism to incorporate word-level matching signals\ninto the text classification task. In particular, we design a novel framework,\nEXplicit interAction Model (dubbed as EXAM), equipped with the interaction\nmechanism. We justified the proposed approach on several benchmark datasets\nincluding both multi-label and multi-class text classification tasks. Extensive\nexperimental results demonstrate the superiority of the proposed method. As a\nbyproduct, we have released the codes and parameter settings to facilitate\nother researches.", "field": [], "task": ["Sentiment Analysis", "Text Classification"], "method": [], "dataset": ["Amazon Review Polarity", "Yahoo! Answers", "DBpedia", "Amazon Review Full", "AG News"], "metric": ["Error", "Accuracy"], "title": "Explicit Interaction Model towards Text Classification"} {"abstract": "Video inpainting, which aims at filling in missing regions of a video, remains challenging due to the difficulty of preserving the precise spatial and temporal coherence of video contents. In this work we propose a novel flow-guided video inpainting approach. Rather than filling in the RGB pixels of each frame directly, we consider video inpainting as a pixel propagation problem. We first synthesize a spatially and temporally coherent optical flow field across video frames using a newly designed Deep Flow Completion network. Then the synthesized flow field is used to guide the propagation of pixels to fill up the missing regions in the video. Specifically, the Deep Flow Completion network follows a coarse-to-fine refinement to complete the flow fields, while their quality is further improved by hard flow example mining. Following the guide of the completed flow, the missing video regions can be filled up precisely. Our method is evaluated on DAVIS and YouTube-VOS datasets qualitatively and quantitatively, achieving the state-of-the-art performance in terms of inpainting quality and speed.", "field": [], "task": ["Optical Flow Estimation", "Video fixed region Inpainting", "Video Inpainting", "Youtube-VOS"], "method": [], "dataset": ["YouTube-VOS", "DAVIS"], "metric": ["SSIM", "PSNR"], "title": "Deep Flow-Guided Video Inpainting"} {"abstract": "Rapid progress has been made in the field of reading comprehension and question answering, where several systems have achieved human parity in some simplified settings. However, the performance of these models degrades significantly when they are applied to more realistic scenarios, such as answers involve various types, multiple text strings are correct answers, or discrete reasoning abilities are required. 
In this paper, we introduce the Multi-Type Multi-Span Network (MTMSN), a neural reading comprehension model that combines a multi-type answer predictor designed to support various answer types (e.g., span, count, negation, and arithmetic expression) with a multi-span extraction method for dynamically producing one or multiple text spans. In addition, an arithmetic expression reranking mechanism is proposed to rank expression candidates for further confirming the prediction. Experiments show that our model achieves 79.9 F1 on the DROP hidden test set, creating new state-of-the-art results. Source code\\footnote{\\url{https://github.com/huminghao16/MTMSN}} is released to facilitate future work.", "field": [], "task": ["Question Answering", "Reading Comprehension"], "method": [], "dataset": ["DROP Test"], "metric": ["F1"], "title": "A Multi-Type Multi-Span Network for Reading Comprehension that Requires Discrete Reasoning"} {"abstract": "Although cameras are ubiquitous, robotic platforms typically rely on active sensors like LiDAR for direct 3D perception. In this work, we propose a novel self-supervised monocular depth estimation method combining geometry with a new deep network, PackNet, learned only from unlabeled monocular videos. Our architecture leverages novel symmetrical packing and unpacking blocks to jointly learn to compress and decompress detail-preserving representations using 3D convolutions. Although self-supervised, our method outperforms other self, semi, and fully supervised methods on the KITTI benchmark. The 3D inductive bias in PackNet enables it to scale with input resolution and number of parameters without overfitting, generalizing better on out-of-domain data such as the NuScenes dataset. Furthermore, it does not require large-scale supervised pretraining on ImageNet and can run in real-time. Finally, we release DDAD (Dense Depth for Automated Driving), a new urban driving dataset with more challenging and accurate depth evaluation, thanks to longer-range and denser ground-truth depth generated from high-density LiDARs mounted on a fleet of self-driving cars operating world-wide.", "field": [], "task": ["Depth Estimation", "Monocular Depth Estimation", "Self-Driving Cars"], "method": [], "dataset": ["KITTI Eigen split unsupervised", "KITTI Eigen split"], "metric": ["absolute relative error"], "title": "3D Packing for Self-Supervised Monocular Depth Estimation"} {"abstract": "Egocentric action anticipation consists in understanding which objects the camera wearer will interact with in the near future and which actions they will perform. We tackle the problem proposing an architecture able to anticipate actions at multiple temporal scales using two LSTMs to 1) summarize the past, and 2) formulate predictions about the future. The input video is processed considering three complimentary modalities: appearance (RGB), motion (optical flow) and objects (object-based features). Modality-specific predictions are fused using a novel Modality ATTention (MATT) mechanism which learns to weigh modalities in an adaptive fashion. Extensive evaluations on two large-scale benchmark datasets show that our method outperforms prior art by up to +7% on the challenging EPIC-Kitchens dataset including more than 2500 actions, and generalizes to EGTEA Gaze+. Our approach is also shown to generalize to the tasks of early action recognition and action recognition. Our method is ranked first in the public leaderboard of the EPIC-Kitchens egocentric action anticipation challenge 2019. 
Please see our web pages for code and examples: http://iplab.dmi.unict.it/rulstm - https://github.com/fpv-iplab/rulstm.", "field": [], "task": ["Action Anticipation", "Action Recognition", "Egocentric Activity Recognition", "Optical Flow Estimation", "Temporal Action Localization"], "method": [], "dataset": ["EPIC-KITCHENS-55"], "metric": ["Actions Top-1 (S2)", "Actions Top-1 (S1)"], "title": "What Would You Expect? Anticipating Egocentric Actions with Rolling-Unrolling LSTMs and Modality Attention"} {"abstract": "Recent works show that Graph Neural Networks (GNNs) are highly non-robust with respect to adversarial attacks on both the graph structure and the node attributes, making their outcomes unreliable. We propose the first method for certifiable (non-)robustness of graph convolutional networks with respect to perturbations of the node attributes. We consider the case of binary node attributes (e.g. bag-of-words) and perturbations that are L_0-bounded. If a node has been certified with our method, it is guaranteed to be robust under any possible perturbation given the attack model. Likewise, we can certify non-robustness. Finally, we propose a robust semi-supervised training procedure that treats the labeled and unlabeled nodes jointly. As shown in our experimental evaluation, our method significantly improves the robustness of the GNN with only minimal effect on the predictive accuracy.", "field": [], "task": ["Node Classification"], "method": [], "dataset": ["Cora", "Pubmed", "Citeseer"], "metric": ["Accuracy"], "title": "Certifiable Robustness and Robust Training for Graph Convolutional Networks"} {"abstract": "A relation tuple consists of two entities and the relation between them, and often such tuples are found in unstructured text. There may be multiple relation tuples present in a text and they may share one or both entities among them. Extracting such relation tuples from a sentence is a difficult task and sharing of entities or overlapping entities among the tuples makes it more challenging. Most prior work adopted a pipeline approach where entities were identified first followed by finding the relations among them, thus missing the interaction among the relation tuples in a sentence. In this paper, we propose two approaches to use encoder-decoder architecture for jointly extracting entities and relations. In the first approach, we propose a representation scheme for relation tuples which enables the decoder to generate one word at a time like machine translation models and still finds all the tuples present in a sentence with full entity names of different length and with overlapping entities. Next, we propose a pointer network-based decoding approach where an entire tuple is generated at every time step. Experiments on the publicly available New York Times corpus show that our proposed approaches outperform previous work and achieve significantly higher F1 scores.", "field": [], "task": ["Joint Entity and Relation Extraction", "Machine Translation", "Relation Extraction"], "method": [], "dataset": ["NYT24", "NYT29"], "metric": ["F1"], "title": "Effective Modeling of Encoder-Decoder Architecture for Joint Entity and Relation Extraction"} {"abstract": "Many applications require sparse neural networks due to space or inference time restrictions. There is a large body of work on training dense networks to yield sparse networks for inference, but this limits the size of the largest trainable sparse model to that of the largest trainable dense model. 
In this paper we introduce a method to train sparse neural networks with a fixed parameter count and a fixed computational cost throughout training, without sacrificing accuracy relative to existing dense-to-sparse training methods. Our method updates the topology of the sparse network during training by using parameter magnitudes and infrequent gradient calculations. We show that this approach requires fewer floating-point operations (FLOPs) to achieve a given level of accuracy compared to prior techniques. We demonstrate state-of-the-art sparse training results on a variety of networks and datasets, including ResNet-50, MobileNets on Imagenet-2012, and RNNs on WikiText-103. Finally, we provide some insights into why allowing the topology to change during the optimization can overcome local minima encountered when the topology remains static. Code used in our work can be found in github.com/google-research/rigl.", "field": [], "task": ["Image Classification", "Language Modelling", "Sparse Learning"], "method": [], "dataset": ["ImageNet"], "metric": ["Top-1 Accuracy"], "title": "Rigging the Lottery: Making All Tickets Winners"} {"abstract": "We introduce SCDE, a dataset to evaluate the performance of computational models through sentence prediction. SCDE is a human-created sentence cloze dataset, collected from public school English examinations. Our task requires a model to fill up multiple blanks in a passage from a shared candidate set with distractors designed by English teachers. Experimental results demonstrate that this task requires the use of non-local, discourse-level context beyond the immediate sentence neighborhood. The blanks require joint solving and significantly impair each other's context. Furthermore, through ablations, we show that the distractors are of high quality and make the task more challenging. Our experiments show that there is a significant performance gap between advanced models (72%) and humans (87%), encouraging future models to bridge this gap.", "field": [], "task": ["Question Answering"], "method": [], "dataset": ["SCDE"], "metric": ["BA", "PA", "DE"], "title": "SCDE: Sentence Cloze Dataset with High Quality Distractors From Examinations"} {"abstract": "Embedding based models have been the state of the art in collaborative filtering for over a decade. Traditionally, the dot product or higher order equivalents have been used to combine two or more embeddings, e.g., most notably in matrix factorization. In recent years, it was suggested to replace the dot product with a learned similarity e.g. using a multilayer perceptron (MLP). This approach is often referred to as neural collaborative filtering (NCF). In this work, we revisit the experiments of the NCF paper that popularized learned similarities using MLPs. First, we show that with a proper hyperparameter selection, a simple dot product substantially outperforms the proposed learned similarities. Second, while a MLP can in theory approximate any function, we show that it is non-trivial to learn a dot product with an MLP. Finally, we discuss practical issues that arise when applying MLP based similarities and show that MLPs are too costly to use for item recommendation in production environments while dot products allow to apply very efficient retrieval algorithms. 
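The topology update described in the sparse-training record above (drop by weight magnitude, grow by gradient magnitude, at a fixed parameter budget) can be sketched for a single weight matrix. The update fraction, the toy mask, and the stand-in dense gradient below are assumptions; this is a simplified illustration of the idea, not the released RigL code.

```python
import numpy as np

def drop_and_grow(weights, mask, dense_grad, update_fraction=0.3):
    mask = mask.copy()
    n_update = int(update_fraction * mask.sum())
    # Drop: deactivate the active weights with the smallest magnitude.
    active = np.flatnonzero(mask)
    drop_idx = active[np.argsort(np.abs(weights.ravel()[active]))[:n_update]]
    mask.ravel()[drop_idx] = 0
    # Grow: activate the inactive connections with the largest gradient magnitude.
    inactive = np.flatnonzero(mask == 0)
    grow_idx = inactive[np.argsort(-np.abs(dense_grad.ravel()[inactive]))[:n_update]]
    mask.ravel()[grow_idx] = 1
    new_weights = weights * mask
    new_weights.ravel()[grow_idx] = 0.0     # newly grown connections start at zero
    return new_weights, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
m = (rng.random((4, 4)) < 0.5).astype(int)  # a roughly 50% sparse connectivity mask
g = rng.normal(size=(4, 4))                 # stand-in for an infrequent dense gradient
w2, m2 = drop_and_grow(w, m, g)
print(int(m.sum()), int(m2.sum()))          # the active-weight count is unchanged
```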
We conclude that MLPs should be used with care as embedding combiner and that dot products might be a better default choice.", "field": [], "task": ["Link Prediction"], "method": [], "dataset": ["Yelp"], "metric": ["nDCG@10", "HR@10"], "title": "Neural Collaborative Filtering vs. Matrix Factorization Revisited"} {"abstract": "We present an extension to the disjoint paths problem in which additional \\emph{lifted} edges are introduced to provide path connectivity priors. We call the resulting optimization problem the lifted disjoint paths problem. We show that this problem is NP-hard by reduction from integer multicommodity flow and 3-SAT. To enable practical global optimization, we propose several classes of linear inequalities that produce a high-quality LP-relaxation. Additionally, we propose efficient cutting plane algorithms for separating the proposed linear inequalities. The lifted disjoint path problem is a natural model for multiple object tracking and allows an elegant mathematical formulation for long range temporal interactions. Lifted edges help to prevent id switches and to re-identify persons. Our lifted disjoint paths tracker achieves nearly optimal assignments with respect to input detections. As a consequence, it leads on all three main benchmarks of the MOT challenge, improving significantly over state-of-the-art.", "field": [], "task": ["Multiple Object Tracking", "Object Tracking"], "method": [], "dataset": ["2D MOT 2015", "MOT16", "MOT17"], "metric": ["MOTA", "IDF1"], "title": "Lifted Disjoint Paths with Application in Multiple Object Tracking"} {"abstract": "In this paper we present two deep-learning systems that competed at SemEval-2017 Task 4 {``}Sentiment Analysis in Twitter{''}. We participated in all subtasks for English tweets, involving message-level and topic-based sentiment polarity classification and quantification. We use Long Short-Term Memory (LSTM) networks augmented with two kinds of attention mechanisms, on top of word embeddings pre-trained on a big collection of Twitter messages. Also, we present a text processing tool suitable for social network messages, which performs tokenization, word normalization, segmentation and spell correction. Moreover, our approach uses no hand-crafted features or sentiment lexicons. We ranked 1st (tie) in Subtask A, and achieved very competitive results in the rest of the Subtasks. Both the word embeddings and our text processing tool are available to the research community.", "field": [], "task": ["Feature Engineering", "Sentiment Analysis", "Tokenization", "Word Embeddings"], "method": [], "dataset": ["SemEval", "SemEval 2017 Task 4-A"], "metric": ["F1-score", "Average Recall"], "title": "DataStories at SemEval-2017 Task 4: Deep LSTM with Attention for Message-level and Topic-based Sentiment Analysis"} {"abstract": "A family of loss functions built on pair-based computation have been proposed in the literature which provide a myriad of solutions for deep metric learning. In this paper, we provide a general weighting framework for understanding recent pair-based loss functions. 
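The two similarity functions compared in the record above can be written down in a few lines each: a plain dot product over user and item embeddings versus an MLP applied to their concatenation. Embedding sizes and MLP widths are placeholders. A practical point the abstract makes is that the dot-product form also admits efficient maximum-inner-product retrieval, whereas the MLP similarity has to be evaluated per candidate item.

```python
import torch
import torch.nn as nn

class DotScorer(nn.Module):
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.user = nn.Embedding(n_users, dim)
        self.item = nn.Embedding(n_items, dim)

    def forward(self, u, i):
        return (self.user(u) * self.item(i)).sum(-1)        # matrix-factorization score

class MLPScorer(nn.Module):
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.user = nn.Embedding(n_users, dim)
        self.item = nn.Embedding(n_items, dim)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, u, i):
        return self.mlp(torch.cat([self.user(u), self.item(i)], dim=-1)).squeeze(-1)

users = torch.tensor([0, 1, 2])
items = torch.tensor([5, 5, 7])
print(DotScorer(100, 200)(users, items).shape)               # torch.Size([3])
print(MLPScorer(100, 200)(users, items).shape)               # torch.Size([3])
```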
Our contributions are three-fold: (1) we establish a General Pair Weighting (GPW) framework, which casts the sampling problem of deep metric learning into a unified view of pair weighting through gradient analysis, providing a powerful tool for understanding recent pair-based loss functions; (2) we show that with GPW, various existing pair-based methods can be compared and discussed comprehensively, with clear differences and key limitations identified; (3) we propose a new loss called multi-similarity loss (MS loss) under the GPW, which is implemented in two iterative steps (i.e., mining and weighting). This allows it to fully consider three similarities for pair weighting, providing a more principled approach for collecting and weighting informative pairs. Finally, the proposed MS loss obtains new state-of-the-art performance on four image retrieval benchmarks, where it outperforms the most recent approaches, such as ABE\\cite{Kim_2018_ECCV} and HTL by a large margin: 60.6% to 65.7% on CUB200, and 80.9% to 88.0% on In-Shop Clothes Retrieval dataset at Recall@1. Code is available at https://github.com/MalongTech/research-ms-loss.", "field": [], "task": ["Image Retrieval", "Metric Learning"], "method": [], "dataset": ["CARS196", "In-Shop", "CUB-200-2011", "SOP"], "metric": ["R@1"], "title": "Multi-Similarity Loss with General Pair Weighting for Deep Metric Learning"} {"abstract": "Monocular depth estimation is an essential task for scene understanding. The underlying structure of objects and stuff in a complex scene is critical to recovering accurate and visually-pleasing depth maps. Global structure conveys scene layouts, while local structure reflects shape details. Recently developed approaches based on convolutional neural networks (CNNs) significantly improve the performance of depth estimation. However, few of them take into account multi-scale structures in complex scenes. In this paper, we propose a Structure-Aware Residual Pyramid Network (SARPN) to exploit multi-scale structures for accurate depth prediction. We propose a Residual Pyramid Decoder (RPD) which expresses global scene structure in upper levels to represent layouts, and local structure in lower levels to present shape details. At each level, we propose Residual Refinement Modules (RRM) that predict residual maps to progressively add finer structures on the coarser structure predicted at the upper level. In order to fully exploit multi-scale image features, an Adaptive Dense Feature Fusion (ADFF) module, which adaptively fuses effective features from all scales for inferring structures of each scale, is introduced. Experiment results on the challenging NYU-Depth v2 dataset demonstrate that our proposed approach achieves state-of-the-art performance in both qualitative and quantitative evaluation. The code is available at https://github.com/Xt-Chen/SARPN.", "field": [], "task": ["Depth Estimation", "Monocular Depth Estimation", "Scene Understanding"], "method": [], "dataset": ["NYU-Depth V2"], "metric": ["RMSE"], "title": "Structure-Aware Residual Pyramid Network for Monocular Depth Estimation"} {"abstract": "TASED-Net is a 3D fully-convolutional network architecture for video saliency detection. It consists of two building blocks: first, the encoder network extracts low-resolution spatiotemporal features from an input clip of several consecutive frames, and then the following prediction network decodes the encoded features spatially while aggregating all the temporal information. 
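The multi-similarity (MS) loss proposed in the record above can be sketched as follows, using commonly cited default hyper-parameters (alpha=2, beta=50, lambda=1); the informative-pair mining step is omitted for brevity, so this shows only the pair-weighting part and is not a drop-in replacement for the authors' implementation.

```python
import torch
import torch.nn.functional as F

def ms_loss(embeddings, labels, alpha=2.0, beta=50.0, lam=1.0):
    x = F.normalize(embeddings, dim=1)
    sim = x @ x.t()                                          # cosine similarity matrix
    n = sim.size(0)
    loss = x.new_zeros(())
    for i in range(n):
        pos = (labels == labels[i]) & (torch.arange(n) != i)
        neg = labels != labels[i]
        if pos.any():                                        # soft-weighted pull on positives
            loss = loss + (1.0 / alpha) * torch.log1p(
                torch.exp(-alpha * (sim[i][pos] - lam)).sum())
        if neg.any():                                        # soft-weighted push on negatives
            loss = loss + (1.0 / beta) * torch.log1p(
                torch.exp(beta * (sim[i][neg] - lam)).sum())
    return loss / n

emb = torch.randn(8, 16, requires_grad=True)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
ms_loss(emb, labels).backward()
print(emb.grad.shape)                                        # torch.Size([8, 16])
```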
As a result, a single prediction map is produced from an input clip of multiple frames. Frame-wise saliency maps can be predicted by applying TASED-Net in a sliding-window fashion to a video. The proposed approach assumes that the saliency map of any frame can be predicted by considering a limited number of past frames. The results of our extensive experiments on video saliency detection validate this assumption and demonstrate that our fully-convolutional model with the temporal aggregation method is effective. TASED-Net significantly outperforms previous state-of-the-art approaches on all three major large-scale datasets of video saliency detection: DHF1K, Hollywood2, and UCFSports. After analyzing the results qualitatively, we observe that our model is especially better at attending to salient moving objects.", "field": [], "task": ["Saliency Detection", "Video Saliency Detection"], "method": [], "dataset": ["DHF1K"], "metric": ["NSS"], "title": "TASED-Net: Temporally-Aggregating Spatial Encoder-Decoder Network for Video Saliency Detection"} {"abstract": "Recently, transfer learning from pre-trained language models has proven to be effective in a variety of natural language processing tasks, including sentiment analysis. This paper aims at identifying deep transfer learning baselines for sentiment analysis in Russian. Firstly, we identified the most used publicly available sentiment analysis datasets in Russian and recent language models which officially support the Russian language. Secondly, we fine-tuned Multilingual Bidirectional Encoder Representations from Transformers (BERT), RuBERT, and two versions of the Multilingual Universal Sentence Encoder and obtained strong, or even new, state-of-the-art results on seven sentiment datasets in Russian: SentRuEval-2016, SentiRuEval-2015, RuTweetCorp, RuSentiment, LINIS Crowd, Kaggle Russian News Dataset, and RuReviews. Lastly, we made fine-tuned models publicly available for the research community.", "field": [], "task": ["Sentiment Analysis", "Transfer Learning"], "method": [], "dataset": ["RuSentiment"], "metric": ["Weighted F1"], "title": "Deep Transfer Learning Baselines for Sentiment Analysis in Russian"} {"abstract": "Most of the recent Deep Semantic Segmentation algorithms suffer from large generalization errors, even when powerful hierarchical representation models based on convolutional neural networks have been employed. This could be attributed to limited training data and a large distribution gap between train and test domain datasets. In this paper, we propose a multi-level self-supervised learning model for domain adaptation of semantic segmentation. Exploiting the idea that an object (and most of the stuff, given context) should be labeled consistently regardless of its location, we generate spatially independent and semantically consistent (SISC) pseudo-labels by segmenting multiple sub-images using the base model and designing an aggregation strategy. Image level pseudo weak-labels, PWL, are computed to guide domain adaptation by capturing global context similarity in the source and target domains at the latent space level. This helps the latent space learn the representation even when very few pixels belong to the domain category (a small object, for example) compared to the rest of the image. Our multi-level Self-supervised learning (MLSL) outperforms existing state-of-the-art (self or adversarial learning) algorithms. 
Specifically, keeping all settings similar and employing MLSL, we obtain an mIoU gain of 5.1% on GTA-V to Cityscapes adaptation and 4.3% on SYNTHIA to Cityscapes adaptation compared to the existing state-of-the-art method.", "field": [], "task": ["Domain Adaptation", "Self-Supervised Learning", "Semantic Segmentation"], "method": [], "dataset": ["GTAV-to-Cityscapes Labels"], "metric": ["mIoU"], "title": "MLSL: Multi-Level Self-Supervised Learning for Domain Adaptation with Spatially Independent and Semantically Consistent Labeling"} {"abstract": "In this paper, we propose Deep Alignment Network (DAN), a robust face\nalignment method based on a deep neural network architecture. DAN consists of\nmultiple stages, where each stage improves the locations of the facial\nlandmarks estimated by the previous stage. Our method uses entire face images\nat all stages, contrary to the recently proposed face alignment methods that\nrely on local patches. This is possible thanks to the use of landmark heatmaps\nwhich provide visual information about landmark locations estimated at the\nprevious stages of the algorithm. The use of entire face images rather than\npatches allows DAN to handle face images with large variation in head pose and\ndifficult initializations. An extensive evaluation on two publicly available\ndatasets shows that DAN reduces the state-of-the-art failure rate by up to 70%.\nOur method has also been submitted for evaluation as part of the Menpo\nchallenge.", "field": [], "task": ["Face Alignment", "Keypoint Detection", "Robust Face Alignment"], "method": [], "dataset": ["300W"], "metric": ["AUC0.08 private", "Fullset (public)", "Mean Error Rate private", "Failure private"], "title": "Deep Alignment Network: A convolutional neural network for robust face alignment"} {"abstract": "We describe a very simple bag-of-words baseline for visual question\nanswering. This baseline concatenates the word features from the question and\nCNN features from the image to predict the answer. When evaluated on the\nchallenging VQA dataset [2], it shows comparable performance to many recent\napproaches using recurrent neural networks. To explore the strength and\nweakness of the trained model, we also provide an interactive web demo and\nopen-source code.", "field": [], "task": ["Visual Question Answering"], "method": [], "dataset": ["COCO Visual Question Answering (VQA) real images 1.0 open ended", "COCO Visual Question Answering (VQA) real images 1.0 multiple choice"], "metric": ["Percentage correct"], "title": "Simple Baseline for Visual Question Answering"} {"abstract": "Class imbalance has emerged as one of the major challenges for medical image segmentation. The model cascade (MC) strategy significantly alleviates the class imbalance issue via running a set of individual deep models for coarse-to-fine segmentation. Despite its outstanding performance, however, this method leads to undesired system complexity and also ignores the correlation among the models. To handle these flaws, we propose a light-weight deep model, i.e., the One-pass Multi-task Network (OM-Net), to solve class imbalance better than MC does, while requiring only one-pass computation. First, OM-Net integrates the separate segmentation tasks into one deep model, which consists of shared parameters to learn joint features, as well as task-specific parameters to learn discriminative features. 
Second, to more effectively optimize OM-Net, we take advantage of the correlation among tasks to design both an online training data transfer strategy and a curriculum learning-based training strategy. Third, we further propose sharing prediction results between tasks and design a cross-task guided attention (CGA) module which can adaptively recalibrate channel-wise feature responses based on the category-specific statistics. Finally, a simple yet effective post-processing method is introduced to refine the segmentation results. Extensive experiments are conducted to demonstrate the effectiveness of the proposed techniques. Most impressively, we achieve state-of-the-art performance on the BraTS 2015 testing set and BraTS 2017 online validation set. Using these proposed approaches, we also won joint third place in the BraTS 2018 challenge among 64 participating teams. The code is publicly available at https://github.com/chenhong-zhou/OM-Net.", "field": [], "task": ["Brain Tumor Segmentation", "Curriculum Learning", "Medical Image Segmentation", "Semantic Segmentation", "Tumor Segmentation"], "method": [], "dataset": ["BRATS-2017 val", "BRATS 2018 val", "BRATS-2015"], "metric": ["Dice Score"], "title": "One-pass Multi-task Networks with Cross-task Guided Attention for Brain Tumor Segmentation"} {"abstract": "Salient object detection (SOD) is a crucial and preliminary task for many computer vision applications, which have made progress with deep CNNs. Most of the existing methods mainly rely on the RGB information to distinguish the salient objects, which faces difficulties in some complex scenarios. To solve this, many recent RGBD-based networks are proposed by adopting the depth map as an independent input and fuse the features with RGB information. Taking the advantages of RGB and RGBD methods, we propose a novel depth-aware salient object detection framework, which has following superior designs: 1) It only takes the depth information as training data while only relies on RGB information in the testing phase. 2) It comprehensively optimizes SOD features with multi-level depth-aware regularizations. 3) The depth information also serves as error-weighted map to correct the segmentation process. With these insightful designs combined, we make the first attempt in realizing an unified depth-aware framework with only RGB information as input for inference, which not only surpasses the state-of-the-art performances on five public RGB SOD benchmarks, but also surpasses the RGBD-based methods on five benchmarks by a large margin, while adopting less information and implementation light-weighted. The code and model will be publicly available.", "field": [], "task": ["Object Detection", "RGB-D Salient Object Detection", "RGB Salient Object Detection", "Salient Object Detection"], "method": [], "dataset": ["STERE", "NLPR", "DES", "NJU2K", "SSD"], "metric": ["Average MAE", "S-Measure", "max F-Measure"], "title": "Is Depth Really Necessary for Salient Object Detection?"} {"abstract": "Transfer learning aims to learn robust classifiers for the target domain by leveraging knowledge from a source domain. Since the source and the target domains are usually from different distributions, existing methods mainly focus on adapting the cross-domain marginal or conditional distributions. However, in real applications, the marginal and conditional distributions usually have different contributions to the domain discrepancy. 
Existing methods fail to quantitatively evaluate the different importance of these two distributions, which will result in unsatisfactory transfer performance. In this paper, we propose a novel concept called Dynamic Distribution Adaptation (DDA), which is capable of quantitatively evaluating the relative importance of each distribution. DDA can be easily incorporated into the framework of structural risk minimization to solve transfer learning problems. On the basis of DDA, we propose two novel learning algorithms: (1) Manifold Dynamic Distribution Adaptation (MDDA) for traditional transfer learning, and (2) Dynamic Distribution Adaptation Network (DDAN) for deep transfer learning. Extensive experiments demonstrate that MDDA and DDAN significantly improve the transfer learning performance and setup a strong baseline over the latest deep and adversarial methods on digits recognition, sentiment analysis, and image classification. More importantly, it is shown that marginal and conditional distributions have different contributions to the domain divergence, and our DDA is able to provide good quantitative evaluation of their relative importance which leads to better performance. We believe this observation can be helpful for future research in transfer learning.", "field": [], "task": ["Domain Adaptation", "Image Classification", "Sentiment Analysis", "Transfer Learning"], "method": [], "dataset": ["Office-31", "Office-Home", "ImageCLEF-DA"], "metric": ["Average Accuracy", "Accuracy"], "title": "Transfer Learning with Dynamic Distribution Adaptation"} {"abstract": "Face Analysis Project on MXNet", "field": [], "task": ["Face Detection", "Face Verification", "Multi-Task Learning"], "method": [], "dataset": ["WIDER Face (Hard)"], "metric": ["AP"], "title": "RetinaFace: Single-stage Dense Face Localisation in the Wild"} {"abstract": "Generative adversarial networks (GANs) have enabled photorealistic image synthesis and editing. However, due to the high computational cost of large-scale generators (e.g., StyleGAN2), it usually takes seconds to see the results of a single edit on edge devices, prohibiting interactive user experience. In this paper, we take inspirations from modern rendering software and propose Anycost GAN for interactive natural image editing. We train the Anycost GAN to support elastic resolutions and channels for faster image generation at versatile speeds. Running subsets of the full generator produce outputs that are perceptually similar to the full generator, making them a good proxy for preview. By using sampling-based multi-resolution training, adaptive-channel training, and a generator-conditioned discriminator, the anycost generator can be evaluated at various configurations while achieving better image quality compared to separately trained models. Furthermore, we develop new encoder training and latent code optimization techniques to encourage consistency between the different sub-generators during image projection. Anycost GAN can be executed at various cost budgets (up to 10x computation reduction) and adapt to a wide range of hardware and latency requirements. When deployed on desktop CPUs and edge devices, our model can provide perceptually similar previews at 6-12x speedup, enabling interactive image editing. 
The code and demo are publicly available: https://github.com/mit-han-lab/anycost-gan.", "field": [], "task": ["Image Generation"], "method": [], "dataset": ["FFHQ", "FFHQ 256 x 256", "FFHQ 128 x 128", "FFHQ 512 x 512"], "metric": ["FID"], "title": "Anycost GANs for Interactive Image Synthesis and Editing"} {"abstract": "It is important to learn various types of classifiers given training data\nwith noisy labels. Noisy labels, in the most popular noise model hitherto, are\ncorrupted from ground-truth labels by an unknown noise transition matrix. Thus,\nby estimating this matrix, classifiers can escape from overfitting those noisy\nlabels. However, such estimation is practically difficult, due to either the\nindirect nature of two-step approaches, or not big enough data to afford\nend-to-end approaches. In this paper, we propose a human-assisted approach\ncalled Masking that conveys human cognition of invalid class transitions and\nnaturally speculates the structure of the noise transition matrix. To this end,\nwe derive a structure-aware probabilistic model incorporating a structure\nprior, and solve the challenges from structure extraction and structure\nalignment. Thanks to Masking, we only estimate unmasked noise transition\nprobabilities and the burden of estimation is tremendously reduced. We conduct\nextensive experiments on CIFAR-10 and CIFAR-100 with three noise structures as\nwell as the industrial-level Clothing1M with agnostic noise structure, and the\nresults show that Masking can improve the robustness of classifiers\nsignificantly.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["Clothing1M"], "metric": ["Accuracy"], "title": "Masking: A New Perspective of Noisy Supervision"} {"abstract": "We present a box-free bottom-up approach for the tasks of pose estimation and\ninstance segmentation of people in multi-person images using an efficient\nsingle-shot model. The proposed PersonLab model tackles both semantic-level\nreasoning and object-part associations using part-based modeling. Our model\nemploys a convolutional network which learns to detect individual keypoints and\npredict their relative displacements, allowing us to group keypoints into\nperson pose instances. Further, we propose a part-induced geometric embedding\ndescriptor which allows us to associate semantic person pixels with their\ncorresponding person instance, delivering instance-level person segmentations.\nOur system is based on a fully-convolutional architecture and allows for\nefficient inference, with runtime essentially independent of the number of\npeople present in the scene. Trained on COCO data alone, our system achieves\nCOCO test-dev keypoint average precision of 0.665 using single-scale inference\nand 0.687 using multi-scale inference, significantly outperforming all previous\nbottom-up pose estimation systems. 
We are also the first bottom-up method to\nreport competitive results for the person class in the COCO instance\nsegmentation task, achieving a person category average precision of 0.417.", "field": [], "task": ["Instance Segmentation", "Keypoint Detection", "Multi-Person Pose Estimation", "Pose Estimation", "Semantic Segmentation"], "method": [], "dataset": ["COCO", "COCO test-dev"], "metric": ["Test AP", "APM", "AP75", "AP", "APL", "AP50"], "title": "PersonLab: Person Pose Estimation and Instance Segmentation with a Bottom-Up, Part-Based, Geometric Embedding Model"} {"abstract": "Deep-layered models trained on a large number of labeled samples boost the\naccuracy of many tasks. It is important to apply such models to different\ndomains because collecting many labeled samples in various domains is\nexpensive. In unsupervised domain adaptation, one needs to train a classifier\nthat works well on a target domain when provided with labeled source samples\nand unlabeled target samples. Although many methods aim to match the\ndistributions of source and target samples, simply matching the distribution\ncannot ensure accuracy on the target domain. To learn discriminative\nrepresentations for the target domain, we assume that artificially labeling\ntarget samples can result in a good representation. Tri-training leverages\nthree classifiers equally to give pseudo-labels to unlabeled samples, but the\nmethod does not assume labeling samples generated from a different domain. In\nthis paper, we propose an asymmetric tri-training method for unsupervised\ndomain adaptation, where we assign pseudo-labels to unlabeled samples and train\nneural networks as if they were true labels. In our work, we use three networks\nasymmetrically. By asymmetric, we mean that two networks are used to label\nunlabeled target samples and one network is trained by the samples to obtain\ntarget-discriminative representations. We evaluate our method on digit\nrecognition and sentiment analysis datasets. Our proposed method achieves\nstate-of-the-art performance on the benchmark digit recognition datasets of\ndomain adaptation.", "field": [], "task": ["Domain Adaptation", "Sentiment Analysis", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["Multi-Domain Sentiment Dataset"], "metric": ["DVD", "Average", "Kitchen", "Electronics", "Books"], "title": "Asymmetric Tri-training for Unsupervised Domain Adaptation"} {"abstract": "Sentence scoring and sentence selection are two main steps in extractive\ndocument summarization systems. However, previous works treat them as two\nseparate subtasks. In this paper, we present a novel end-to-end neural network\nframework for extractive document summarization by jointly learning to score\nand select sentences. It first reads the document sentences with a hierarchical\nencoder to obtain the representation of sentences. Then it builds the output\nsummary by extracting sentences one by one. 
Different from previous methods,\nour approach integrates the selection strategy into the scoring model, which\ndirectly predicts the relative importance given previously selected sentences.\nExperiments on the CNN/Daily Mail dataset show that the proposed framework\nsignificantly outperforms the state-of-the-art extractive summarization models.", "field": [], "task": ["Document Summarization", "Extractive Text Summarization"], "method": [], "dataset": ["CNN / Daily Mail"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Neural Document Summarization by Jointly Learning to Score and Select Sentences"} {"abstract": "Dependency trees help relation extraction models capture long-range relations\nbetween words. However, existing dependency-based models either neglect crucial\ninformation (e.g., negation) by pruning the dependency trees too aggressively,\nor are computationally inefficient because it is difficult to parallelize over\ndifferent tree structures. We propose an extension of graph convolutional\nnetworks that is tailored for relation extraction, which pools information over\narbitrary dependency structures efficiently in parallel. To incorporate\nrelevant information while maximally removing irrelevant content, we further\napply a novel pruning strategy to the input trees by keeping words immediately\naround the shortest path between the two entities among which a relation might\nhold. The resulting model achieves state-of-the-art performance on the\nlarge-scale TACRED dataset, outperforming existing sequence and\ndependency-based neural models. We also show through detailed analysis that\nthis model has complementary strengths to sequence models, and combining them\nfurther improves the state of the art.", "field": [], "task": ["Relation Extraction"], "method": [], "dataset": ["TACRED", "Re-TACRED"], "metric": ["F1"], "title": "Graph Convolution over Pruned Dependency Trees Improves Relation Extraction"} {"abstract": "Traffic sign detection systems constitute a key component in trending real-world applications, such as autonomous driving, and driver safety and assistance. This paper analyses the state-of-the-art of several object-detection systems (Faster R-CNN, R-FCN, SSD, and YOLO V2) combined with various feature extractors (Resnet V1 50, Resnet V1 101, Inception V2, Inception Resnet V2, Mobilenet V1, and Darknet-19) previously developed by their corresponding authors. We aim to explore the properties of these object-detection models which are modified and specifically adapted to the traffic sign detection problem domain by means of transfer learning. In particular, various publicly available object-detection models that were pre-trained on the Microsoft COCO dataset are fine-tuned on the German Traffic Sign Detection Benchmark dataset. The evaluation and comparison of these models include key metrics, such as the mean average precision (mAP), memory allocation, running time, number of floating point operations, number of parameters of the model, and the effect of traffic sign image sizes. Our findings show that Faster R-CNN Inception Resnet V2 obtains the best mAP, while R-FCN Resnet 101 strikes the best trade-off between accuracy and execution time. 
YOLO V2 and SSD Mobilenet merit a special mention, in that the former achieves competitive accuracy results and is the second fastest detector, while the latter, is the fastest and the lightest model in terms of memory consumption, making it an optimal choice for deployment in mobile and embedded devices.", "field": [], "task": ["Autonomous Driving", "Object Detection", "Traffic Sign Detection", "Transfer Learning"], "method": [], "dataset": ["GTSDB"], "metric": ["mAP"], "title": "Evaluation of deep neural networks for traffic sign detection systems"} {"abstract": "In this paper, we focus on generating realistic images from text\ndescriptions. Current methods first generate an initial image with rough shape\nand color, and then refine the initial image to a high-resolution one. Most\nexisting text-to-image synthesis methods have two main problems. (1) These\nmethods depend heavily on the quality of the initial images. If the initial\nimage is not well initialized, the following processes can hardly refine the\nimage to a satisfactory quality. (2) Each word contributes a different level of\nimportance when depicting different image contents, however, unchanged text\nrepresentation is used in existing image refinement processes. In this paper,\nwe propose the Dynamic Memory Generative Adversarial Network (DM-GAN) to\ngenerate high-quality images. The proposed method introduces a dynamic memory\nmodule to refine fuzzy image contents, when the initial images are not well\ngenerated. A memory writing gate is designed to select the important text\ninformation based on the initial image content, which enables our method to\naccurately generate images from the text description. We also utilize a\nresponse gate to adaptively fuse the information read from the memories and the\nimage features. We evaluate the DM-GAN model on the Caltech-UCSD Birds 200\ndataset and the Microsoft Common Objects in Context dataset. Experimental\nresults demonstrate that our DM-GAN model performs favorably against the\nstate-of-the-art approaches.", "field": [], "task": ["Image Generation", "Text-to-Image Generation"], "method": [], "dataset": ["COCO", "CUB"], "metric": ["Inception score", "SOA-C"], "title": "DM-GAN: Dynamic Memory Generative Adversarial Networks for Text-to-Image Synthesis"} {"abstract": "Noisy labels are ubiquitous in real-world datasets, which poses a challenge for robustly training deep neural networks (DNNs) as DNNs usually have the high capacity to memorize the noisy labels. In this paper, we find that the test accuracy can be quantitatively characterized in terms of the noise ratio in datasets. In particular, the test accuracy is a quadratic function of the noise ratio in the case of symmetric noise, which explains the experimental findings previously published. Based on our analysis, we apply cross-validation to randomly split noisy datasets, which identifies most samples that have correct labels. Then we adopt the Co-teaching strategy which takes full advantage of the identified samples to train DNNs robustly against noisy labels. 
Compared with extensive state-of-the-art methods, our strategy consistently improves the generalization performance of DNNs under both synthetic and real-world training noise.", "field": [], "task": [], "method": [], "dataset": ["mini WebVision 1.0"], "metric": ["Top-5 Accuracy", "ImageNet Top-1 Accuracy", "ImageNet Top-5 Accuracy", "Top-1 Accuracy"], "title": "Understanding and Utilizing Deep Neural Networks Trained with Noisy Labels"} {"abstract": "The discriminative power of modern deep learning models for 3D human action\nrecognition is growing ever so potent. In conjunction with the recent\nresurgence of 3D human action representation with 3D skeletons, the quality and\nthe pace of recent progress have been significant. However, the inner workings\nof state-of-the-art learning based methods in 3D human action recognition still\nremain mostly black-box. In this work, we propose to use a new class of models\nknown as Temporal Convolutional Neural Networks (TCN) for 3D human action\nrecognition. Compared to popular LSTM-based Recurrent Neural Network models,\ngiven interpretable input such as 3D skeletons, TCN provides us a way to\nexplicitly learn readily interpretable spatio-temporal representations for 3D\nhuman action recognition. We provide our strategy in re-designing the TCN with\ninterpretability in mind and how such characteristics of the model is leveraged\nto construct a powerful 3D activity recognition method. Through this work, we\nwish to take a step towards a spatio-temporal model that is easier to\nunderstand, explain and interpret. The resulting model, Res-TCN, achieves\nstate-of-the-art results on the largest 3D human action recognition dataset,\nNTU-RGBD.", "field": [], "task": ["3D Action Recognition", "Action Recognition", "Activity Recognition", "Multimodal Activity Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["Varying-view RGB-D Action-Skeleton", "NTU RGB+D", "Kinetics-Skeleton dataset", "EV-Action"], "metric": ["Accuracy (CS)", "Accuracy (CV II)", "Accuracy (CV I)", "Accuracy (CV)", "Accuracy (AV I)", "Accuracy (AV II)", "Accuracy"], "title": "Interpretable 3D Human Action Analysis with Temporal Convolutional Networks"} {"abstract": "Modeling real-world multidimensional time series can be particularly challenging when these are sporadically observed (i.e., sampling is irregular both in time and across dimensions)-such as in the case of clinical patient data. To address these challenges, we propose (1) a continuous-time version of the Gated Recurrent Unit, building upon the recent Neural Ordinary Differential Equations (Chen et al., 2018), and (2) a Bayesian update network that processes the sporadic observations. We bring these two ideas together in our GRU-ODE-Bayes method. We then demonstrate that the proposed method encodes a continuity prior for the latent process and that it can exactly represent the Fokker-Planck dynamics of complex processes driven by a multidimensional stochastic differential equation. Additionally, empirical evaluation shows that our method outperforms the state of the art on both synthetic data and real-world data with applications in healthcare and climate forecast. 
What is more, the continuity prior is shown to be well suited for low number of samples settings.", "field": [], "task": ["Irregular Time Series", "Multivariate Time Series Forecasting", "Time Series"], "method": [], "dataset": ["MIMIC-III", "USHCN-Daily"], "metric": ["MSE", "NegLL"], "title": "GRU-ODE-Bayes: Continuous modeling of sporadically-observed time series"} {"abstract": "In this paper, we propose a novel network design mechanism for efficient embedded computing. Inspired by the limited computing patterns, we propose to fix the number of channels in a group convolution, instead of the existing practice that fixing the total group numbers. Our solution based network, named Variable Group Convolutional Network (VarGNet), can be optimized easier on hardware side, due to the more unified computing schemes among the layers. Extensive experiments on various vision tasks, including classification, detection, pixel-wise parsing and face recognition, have demonstrated the practical value of our VarGNet.", "field": [], "task": ["Face Recognition"], "method": [], "dataset": ["CFP-FP", "AgeDB-30", "Labeled Faces in the Wild"], "metric": ["Accuracy"], "title": "VarGNet: Variable Group Convolutional Neural Network for Efficient Embedded Computing"} {"abstract": "The topological information is essential for studying the relationship\nbetween nodes in a network. Recently, Network Representation Learning (NRL),\nwhich projects a network into a low-dimensional vector space, has been shown\ntheir advantages in analyzing large-scale networks. However, most existing NRL\nmethods are designed to preserve the local topology of a network, they fail to\ncapture the global topology. To tackle this issue, we propose a new NRL\nframework, named HSRL, to help existing NRL methods capture both the local and\nglobal topological information of a network. Specifically, HSRL recursively\ncompresses an input network into a series of smaller networks using a\ncommunity-awareness compressing strategy. Then, an existing NRL method is used\nto learn node embeddings for each compressed network. Finally, the node\nembeddings of the input network are obtained by concatenating the node\nembeddings from all compressed networks. Empirical studies for link prediction\non five real-world datasets demonstrate the advantages of HSRL over\nstate-of-the-art methods.", "field": [], "task": ["Link Prediction", "Representation Learning"], "method": [], "dataset": ["Yelp", "MIT", "Douban", "DBLP"], "metric": ["AUC"], "title": "Learning Topological Representation for Networks via Hierarchical Sampling"} {"abstract": "Despite online learning (OL) techniques have boosted the performance of semi-supervised video object segmentation (VOS) methods, the huge time costs of OL greatly restrict their practicality. Matching based and propagation based methods run at a faster speed by avoiding OL techniques. However, they are limited by sub-optimal accuracy, due to mismatching and drifting problems. In this paper, we develop a real-time yet very accurate Ranking Attention Network (RANet) for VOS. Specifically, to integrate the insights of matching based and propagation based methods, we employ an encoder-decoder framework to learn pixel-level similarity and segmentation in an end-to-end manner. To better utilize the similarity maps, we propose a novel ranking attention module, which automatically ranks and selects these maps for fine-grained VOS performance. 
Experiments on DAVIS-16 and DAVIS-17 datasets show that our RANet achieves the best speed-accuracy trade-off, e.g., with 33 milliseconds per frame and J&F=85.5% on DAVIS-16. With OL, our RANet reaches J&F=87.1% on DAVIS-16, exceeding state-of-the-art VOS methods. The code can be found at https://github.com/Storife/RANet.", "field": [], "task": ["Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["DAVIS 2017 (val)", "DAVIS 2017 (test-dev)", "DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "RANet: Ranking Attention Network for Fast Video Object Segmentation"} {"abstract": "The availability of massive data and computing power allowing for effective data driven neural approaches is having a major impact\r\non machine learning and information retrieval research, but these models have a basic problem with efficiency. Current neural ranking models are implemented as multistage rankers: for efficiency reasons, the neural model only re-ranks the top ranked documents retrieved by a first-stage efficient ranker in response to a given query. Neural ranking models learn dense representations causing essentially every query term to match every document term, making it highly inefficient or intractable to rank the whole collection. The reliance on a first stage ranker creates a dual problem: First, the interaction and combination effects are not well understood. Second, the first stage ranker serves as a \u201cgate-keeper\u201d or filter, effectively blocking the potential of neural models to uncover new relevant documents.\r\nIn this work, we propose a standalone neural ranking model (SNRM) by introducing a sparsity property to learn a latent sparse representation for each query and document. This representation captures the semantic relationship between the query and documents, but is also sparse enough to enable constructing an inverted index for the whole collection. We parameterize the sparsity of the model to yield a retrieval model as efficient as conventional term based models. Our model gains in efficiency without loss of effectiveness: it not only outperforms the existing term matching baselines, but also performs similarly to the recent re-ranking based neural models with dense representations. Our model can also take advantage of pseudo-relevance feedback for further improvements. More generally, our results demonstrate the importance of sparsity in neural IR models and show that dense representations can be pruned effectively, giving new insights about essential semantic features and their distributions.", "field": [], "task": ["Ad-Hoc Information Retrieval", "Information Retrieval"], "method": [], "dataset": ["TREC Robust04"], "metric": ["P@20", "nDCG@20", "MAP"], "title": "From Neural Re-Ranking to Neural Ranking: Learning a Sparse Representation for Inverted Indexing"} {"abstract": "Joint extraction of entities and relations has received significant attention due to its potential of providing higher performance for both tasks. Among existing methods, CopyRE is effective and novel; it uses a sequence-to-sequence framework and a copy mechanism to directly generate the relation triplets. However, it suffers from two fatal problems. The model is extremely weak at distinguishing the head and tail entities, resulting in inaccurate entity extraction. 
It also cannot predict multi-token entities (e.g. \\textit{Steven Jobs}). To address these problems, we give a detailed analysis of the reasons behind the inaccurate entity extraction problem, and then propose a simple but extremely effective model structure to solve this problem. In addition, we propose a multi-task learning framework equipped with a copy mechanism, called CopyMTL, to allow the model to predict multi-token entities. Experiments reveal the problems of CopyRE and show that our model achieves a significant improvement over the current state-of-the-art method by 9% in NYT and 16% in WebNLG (F1 score). Our code is available at https://github.com/WindChimeRan/CopyMTL", "field": [], "task": ["Entity Extraction using GAN", "Multi-Task Learning", "Relation Extraction"], "method": [], "dataset": ["NYT", "WebNLG"], "metric": ["F1"], "title": "CopyMTL: Copy Mechanism for Joint Extraction of Entities and Relations with Multi-Task Learning"} {"abstract": "State-of-the-art models often make use of superficial patterns in the data that do not generalize well to out-of-domain or adversarial settings. For example, textual entailment models often learn that particular key words imply entailment, irrespective of context, and visual question answering models learn to predict prototypical answers, without considering evidence in the image. In this paper, we show that if we have prior knowledge of such biases, we can train a model to be more robust to domain shift. Our method has two stages: we (1) train a naive model that makes predictions exclusively based on dataset biases, and (2) train a robust model as part of an ensemble with the naive one in order to encourage it to focus on other patterns in the data that are more likely to generalize. Experiments on five datasets with out-of-domain test sets show significantly improved robustness in all settings, including a 12 point gain on a changing priors visual question answering dataset and a 9 point gain on an adversarial question answering test set.", "field": [], "task": ["Natural Language Inference", "Question Answering", "Visual Question Answering"], "method": [], "dataset": ["VQA-CP"], "metric": ["Score"], "title": "Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases"} {"abstract": "Point clouds are unstructured and unordered data, as opposed to images. Thus, most machine learning approaches developed for images cannot be directly transferred to point clouds. In this paper, we propose a generalization of discrete convolutional neural networks (CNNs) in order to deal with point clouds by replacing discrete kernels with continuous ones. This formulation is simple, allows arbitrary point cloud sizes and can easily be used for designing neural networks similarly to 2D CNNs. We present experimental results with various architectures, highlighting the flexibility of the proposed approach. We obtain competitive results compared to the state-of-the-art on shape classification, part segmentation and semantic segmentation for large-scale point clouds.", "field": [], "task": ["3D Part Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["ShapeNet-Part"], "metric": ["Class Average IoU", "Instance Average IoU"], "title": "ConvPoint: Continuous Convolutions for Point Cloud Processing"} {"abstract": "Video question answering (VideoQA) is challenging as it requires modeling capacity to distill dynamic visual artifacts and distant relations and to associate them with linguistic concepts. 
We introduce a general-purpose reusable neural unit called Conditional Relation Network (CRN) that serves as a building block to construct more sophisticated structures for representation and reasoning over video. CRN takes as input an array of tensorial objects and a conditioning feature, and computes an array of encoded output objects. Model building becomes a simple exercise of replication, rearrangement and stacking of these reusable units for diverse modalities and contextual information. This design thus supports high-order relational and multi-step reasoning. The resulting architecture for VideoQA is a CRN hierarchy whose branches represent sub-videos or clips, all sharing the same question as the contextual condition. Our evaluations on well-known datasets achieved new SoTA results, demonstrating the impact of building a general-purpose reasoning unit on complex domains such as VideoQA.", "field": [], "task": ["Question Answering", "Video Question Answering", "Visual Question Answering"], "method": [], "dataset": ["MSRVTT-QA", "MSVD-QA"], "metric": ["Accuracy"], "title": "Hierarchical Conditional Relation Networks for Video Question Answering"} {"abstract": "Following a navigation instruction such as 'Walk down the stairs and stop at the brown sofa' requires embodied AI agents to ground scene elements referenced via language (e.g. 'stairs') to visual content in the environment (pixels corresponding to 'stairs'). We ask the following question -- can we leverage abundant 'disembodied' web-scraped vision-and-language corpora (e.g. Conceptual Captions) to learn visual groundings (what do 'stairs' look like?) that improve performance on a relatively data-starved embodied perception task (Vision-and-Language Navigation)? Specifically, we develop VLN-BERT, a visiolinguistic transformer-based model for scoring the compatibility between an instruction ('...stop at the brown sofa') and a sequence of panoramic RGB images captured by the agent. We demonstrate that pretraining VLN-BERT on image-text pairs from the web before fine-tuning on embodied path-instruction data significantly improves performance on VLN -- outperforming the prior state-of-the-art in the fully-observed setting by 4 absolute percentage points on success rate. Ablations of our pretraining curriculum show each stage to be impactful -- with their combination resulting in further positive synergistic effects.", "field": [], "task": ["Vision and Language Navigation"], "method": [], "dataset": ["VLN Challenge"], "metric": ["length", "spl", "oracle success", "success", "error"], "title": "Improving Vision-and-Language Navigation with Image-Text Pairs from the Web"} {"abstract": "We introduce PyTorch Geometric, a library for deep learning on irregularly\nstructured input data such as graphs, point clouds and manifolds, built upon\nPyTorch. In addition to general graph data structures and processing methods,\nit contains a variety of recently published methods from the domains of\nrelational learning and 3D data processing. PyTorch Geometric achieves high\ndata throughput by leveraging sparse GPU acceleration, by providing dedicated\nCUDA kernels and by introducing efficient mini-batch handling for input\nexamples of different size. 
In this work, we present the library in detail and\nperform a comprehensive comparative study of the implemented methods in\nhomogeneous evaluation scenarios.", "field": [], "task": ["Graph Classification", "Graph Representation Learning", "Node Classification", "Relational Reasoning", "Representation Learning"], "method": [], "dataset": ["COLLAB", "Cora", "IMDb-B", "REDDIT-B", "PROTEINS", "Citeseer", "MUTAG", "Pubmed"], "metric": ["Accuracy"], "title": "Fast Graph Representation Learning with PyTorch Geometric"} {"abstract": "Scene graph generation (SGG) aims to detect objects in an image along with their pairwise relationships. There are three key properties of scene graphs that have been underexplored in recent works: namely, the edge direction information, the difference in priority between nodes, and the long-tailed distribution of relationships. Accordingly, in this paper, we propose a Graph Property Sensing Network (GPS-Net) that fully explores these three properties for SGG. First, we propose a novel message passing module that augments the node feature with node-specific contextual information and encodes the edge direction information via a tri-linear model. Second, we introduce a node priority sensitive loss to reflect the difference in priority between nodes during training. This is achieved by designing a mapping function that adjusts the focusing parameter in the focal loss. Third, since the frequency of relationships is affected by the long-tailed distribution problem, we mitigate this issue by first softening the distribution and then enabling it to be adjusted for each subject-object pair according to their visual appearance. Systematic experiments demonstrate the effectiveness of the proposed techniques. Moreover, GPS-Net achieves state-of-the-art performance on three popular databases: VG, OI, and VRD by significant gains under various settings and metrics. The code and models are available at \\url{https://github.com/taksau/GPS-Net}.", "field": [], "task": ["Graph Generation", "Scene Graph Generation"], "method": [], "dataset": ["Visual Genome"], "metric": ["Recall@50"], "title": "GPS-Net: Graph Property Sensing Network for Scene Graph Generation"} {"abstract": "Recent semi-supervised learning methods use pseudo supervision as their core idea, especially self-training methods that generate pseudo labels. However, pseudo labels are unreliable. Self-training methods usually rely on single model prediction confidence to filter low-confidence pseudo labels, thus retaining high-confidence errors and wasting many low-confidence correct labels. In this paper, we point out that it is difficult for a model to counter its own errors. Instead, leveraging inter-model disagreement between different models is key to locating pseudo label errors. With this new viewpoint, we propose mutual training between two different models by a dynamically re-weighted loss function, called Dynamic Mutual Training (DMT). We quantify inter-model disagreement by comparing predictions from two different models to dynamically re-weight the loss in training, where a larger disagreement indicates a possible error and corresponds to a lower loss value. Extensive experiments show that DMT achieves state-of-the-art performance in both image classification and semantic segmentation. 
Our code is released at https://github.com/voldemortX/DST-CBC .", "field": [], "task": ["Curriculum Learning", "Image Classification", "Semantic Segmentation", "Semi-Supervised Image Classification", "Semi-Supervised Semantic Segmentation"], "method": [], "dataset": ["Pascal VOC 2012 1% labeled", "Pascal VOC 2012 12.5% labeled", "Cityscapes 12.5% labeled", "Pascal VOC 2012 5% labeled", "Pascal VOC 2012 2% labeled", "Cityscapes 100 samples labeled", "PASCAL VOC 2012 1464 labels", "CIFAR-10, 4000 Labels"], "metric": ["Validation mIoU", "Accuracy"], "title": "DMT: Dynamic Mutual Training for Semi-Supervised Learning"} {"abstract": "Tracking the 6D pose of objects in video sequences is important for robot manipulation. This task, however, introduces multiple challenges: (i) robot manipulation involves significant occlusions; (ii) data and annotations are troublesome and difficult to collect for 6D poses, which complicates machine learning solutions, and (iii) incremental error drift often accumulates in long term tracking to necessitate re-initialization of the object's pose. This work proposes a data-driven optimization approach for long-term, 6D pose tracking. It aims to identify the optimal relative pose given the current RGB-D observation and a synthetic image conditioned on the previous best estimate and the object's model. The key contribution in this context is a novel neural network architecture, which appropriately disentangles the feature encoding to help reduce domain shift, and an effective 3D orientation representation via Lie Algebra. Consequently, even when trained only with synthetic data, the network can work effectively on real images. Comprehensive experiments over benchmarks - existing ones as well as a new dataset with significant occlusions related to object manipulation - show that the proposed approach achieves consistently robust estimates and outperforms alternatives, even though they have been trained with real images. The approach is also the most computationally efficient among the alternatives and achieves a tracking frequency of 90.9Hz.", "field": [], "task": ["6D Pose Estimation", "6D Pose Estimation using RGB", "Pose Estimation", "Pose Tracking"], "method": [], "dataset": ["YCB-Video"], "metric": ["ADDS AUC"], "title": "se(3)-TrackNet: Data-driven 6D Pose Tracking by Calibrating Image Residuals in Synthetic Domains"} {"abstract": "We consider the problem of segmenting image regions given a natural language phrase, and study it on a novel dataset of 77,262 images and 345,486 phrase-region pairs. Our dataset is collected on top of the Visual Genome dataset and uses the existing annotations to generate a challenging set of referring phrases for which the corresponding regions are manually annotated. Phrases in our dataset correspond to multiple regions and describe a large number of object and stuff categories as well as their attributes such as color, shape, parts, and relationships with other entities in the image. Our experiments show that the scale and diversity of concepts in our dataset pose significant challenges to the existing state-of-the-art. 
We systematically handle the long-tail nature of these concepts and present a modular approach to combine category, attribute, and relationship cues that outperforms existing approaches.", "field": [], "task": ["Referring Expression Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["PhraseCut"], "metric": ["Mean IoU", "Pr@0.5", "Pr@0.7", "Pr@0.9"], "title": "PhraseCut: Language-based Image Segmentation in the Wild"} {"abstract": "Semantic segmentation and semantic edge detection can be seen as two dual problems with close relationships in computer vision. Despite the fast evolution of learning-based 3D semantic segmentation methods, little attention has been drawn to the learning of 3D semantic edge detectors, even less to a joint learning method for the two tasks. In this paper, we tackle the 3D semantic edge detection task for the first time and present a new two-stream fully-convolutional network that jointly performs the two tasks. In particular, we design a joint refinement module that explicitly wires region information and edge information to improve the performances of both tasks. Further, we propose a novel loss function that encourages the network to produce semantic segmentation results with better boundaries. Extensive evaluations on S3DIS and ScanNet datasets show that our method achieves on par or better performance than the state-of-the-art methods for semantic segmentation and outperforms the baseline methods for semantic edge detection. Code release: https://github.com/hzykent/JSENet", "field": [], "task": ["3D Semantic Segmentation", "Edge Detection", "Semantic Segmentation"], "method": [], "dataset": ["S3DIS"], "metric": ["Mean IoU"], "title": "JSENet: Joint Semantic Segmentation and Edge Detection Network for 3D Point Clouds"} {"abstract": "Recently, knowledge graph embeddings (KGEs) received significant attention, and several software libraries have been developed for training and evaluating KGEs. While each of them addresses specific needs, we re-designed and re-implemented PyKEEN, one of the first KGE libraries, in a community effort. PyKEEN 1.0 enables users to compose knowledge graph embedding models (KGEMs) based on a wide range of interaction models, training approaches, loss functions, and permits the explicit modeling of inverse relations. Besides, an automatic memory optimization has been realized in order to exploit the provided hardware optimally, and through the integration of Optuna extensive hyper-parameter optimization (HPO) functionalities are provided.", "field": [], "task": ["Graph Embedding", "Knowledge Graph Embedding", "Knowledge Graph Embeddings", "Link Prediction"], "method": [], "dataset": ["WN18"], "metric": ["training time (s)"], "title": "PyKEEN 1.0: A Python Library for Training and Evaluating Knowledge Graph Embeddings"} {"abstract": "Automated segmentation of brain tumors from 3D magnetic resonance images\n(MRIs) is necessary for the diagnosis, monitoring, and treatment planning of\nthe disease. Manual delineation practices require anatomical knowledge, are\nexpensive, time consuming and can be inaccurate due to human error. Here, we\ndescribe a semantic segmentation network for tumor subregion segmentation from\n3D MRIs based on encoder-decoder architecture. Due to a limited training\ndataset size, a variational auto-encoder branch is added to reconstruct the\ninput image itself in order to regularize the shared decoder and impose\nadditional constraints on its layers. 
The current approach won 1st place in the\nBraTS 2018 challenge.", "field": [], "task": ["Brain Tumor Segmentation", "Semantic Segmentation", "Tumor Segmentation"], "method": [], "dataset": ["BRATS 2018"], "metric": ["Dice Score"], "title": "3D MRI brain tumor segmentation using autoencoder regularization"} {"abstract": "Combinatorial features are essential for the success of many commercial\nmodels. Manually crafting these features usually comes with high cost due to\nthe variety, volume and velocity of raw data in web-scale systems.\nFactorization based models, which measure interactions in terms of vector\nproduct, can learn patterns of combinatorial features automatically and\ngeneralize to unseen features as well. With the great success of deep neural\nnetworks (DNNs) in various fields, recently researchers have proposed several\nDNN-based factorization model to learn both low- and high-order feature\ninteractions. Despite the powerful ability of learning an arbitrary function\nfrom data, plain DNNs generate feature interactions implicitly and at the\nbit-wise level. In this paper, we propose a novel Compressed Interaction\nNetwork (CIN), which aims to generate feature interactions in an explicit\nfashion and at the vector-wise level. We show that the CIN share some\nfunctionalities with convolutional neural networks (CNNs) and recurrent neural\nnetworks (RNNs). We further combine a CIN and a classical DNN into one unified\nmodel, and named this new model eXtreme Deep Factorization Machine (xDeepFM).\nOn one hand, the xDeepFM is able to learn certain bounded-degree feature\ninteractions explicitly; on the other hand, it can learn arbitrary low- and\nhigh-order feature interactions implicitly. We conduct comprehensive\nexperiments on three real-world datasets. Our results demonstrate that xDeepFM\noutperforms state-of-the-art models. We have released the source code of\nxDeepFM at \\url{https://github.com/Leavingseason/xDeepFM}.", "field": [], "task": ["Click-Through Rate Prediction", "Recommendation Systems"], "method": [], "dataset": ["Dianping", "Criteo", "Bing News"], "metric": ["Log Loss", "AUC"], "title": "xDeepFM: Combining Explicit and Implicit Feature Interactions for Recommender Systems"} {"abstract": "Retinex model is an effective tool for low-light image enhancement. It\nassumes that observed images can be decomposed into the reflectance and\nillumination. Most existing Retinex-based methods have carefully designed\nhand-crafted constraints and parameters for this highly ill-posed\ndecomposition, which may be limited by model capacity when applied in various\nscenes. In this paper, we collect a LOw-Light dataset (LOL) containing\nlow/normal-light image pairs and propose a deep Retinex-Net learned on this\ndataset, including a Decom-Net for decomposition and an Enhance-Net for\nillumination adjustment. In the training process for Decom-Net, there is no\nground truth of decomposed reflectance and illumination. The network is learned\nwith only key constraints including the consistent reflectance shared by paired\nlow/normal-light images, and the smoothness of illumination. Based on the\ndecomposition, subsequent lightness enhancement is conducted on illumination by\nan enhancement network called Enhance-Net, and for joint denoising there is a\ndenoising operation on reflectance. 
The Retinex-Net is end-to-end trainable, so\nthat the learned decomposition is by nature good for lightness adjustment.\nExtensive experiments demonstrate that our method not only achieves visually\npleasing quality for low-light enhancement but also provides a good\nrepresentation of image decomposition.", "field": [], "task": ["Denoising", "Image Enhancement", "Low-Light Image Enhancement"], "method": [], "dataset": ["DICM", "VV", "MEF"], "metric": ["User Study Score"], "title": "Deep Retinex Decomposition for Low-Light Enhancement"} {"abstract": "Named entity recognition and relation extraction are two important fundamental problems. Joint learning algorithms have been proposed to solve both tasks simultaneously, and many of them cast the joint task as a table-filling problem. However, they typically focused on learning a single encoder (usually learning representation in the form of a table) to capture information required for both tasks within the same space. We argue that it can be beneficial to design two distinct encoders to capture such two different types of information in the learning process. In this work, we propose the novel {\\em table-sequence encoders} where two different encoders -- a table encoder and a sequence encoder are designed to help each other in the representation learning process. Our experiments confirm the advantages of having {\\em two} encoders over {\\em one} encoder. On several standard datasets, our model shows significant improvements over existing approaches.", "field": [], "task": ["Joint Entity and Relation Extraction", "Named Entity Recognition", "Relation Extraction", "Representation Learning"], "method": [], "dataset": ["CoNLL04", "ACE 2005", "ADE Corpus", "ACE 2004"], "metric": ["RE+ Micro F1", "NER Macro F1", "RE+ Macro F1", "Sentence Encoder", "RE Micro F1", "RE Macro F1", "NER Micro F1", "RE+ Macro F1 "], "title": "Two are Better than One: Joint Entity and Relation Extraction with Table-Sequence Encoders"} {"abstract": "We propose an approach for unsupervised adaptation of object detectors from\nlabel-rich to label-poor domains which can significantly reduce annotation\ncosts associated with detection. Recently, approaches that align distributions\nof source and target images using an adversarial loss have been proven\neffective for adapting object classifiers. However, for object detection, fully\nmatching the entire distributions of source and target images to each other at\nthe global image level may fail, as domains could have distinct scene layouts\nand different combinations of objects. On the other hand, strong matching of\nlocal features such as texture and color makes sense, as it does not change\ncategory level semantics. This motivates us to propose a novel method for\ndetector adaptation based on strong local alignment and weak global alignment.\nOur key contribution is the weak alignment model, which focuses the adversarial\nalignment loss on images that are globally similar and puts less emphasis on\naligning images that are globally dissimilar. Additionally, we design the\nstrong domain alignment model to only look at local receptive fields of the\nfeature map. We empirically verify the effectiveness of our method on four\ndatasets comprising both large and small domain shifts. 
Our code is available\nat \\url{https://github.com/VisionLearningGroup/DA_Detection}", "field": [], "task": ["Object Detection", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["Cityscapes to Foggy Cityscapes", "SIM10K to BDD100K"], "metric": ["mAP@0.5"], "title": "Strong-Weak Distribution Alignment for Adaptive Object Detection"} {"abstract": "Motivation: Single-cell RNA sequencing (scRNA-seq) technologies and analysis tools have allowed researchers to achieve remarkably detailed understandings of the roles and relationships between cells and genes. However, conventional distance metrics, such as Euclidean, Pearson, and Spearman distances, fail to simultaneously take into account the high dimensionality, monotonicity, and magnitude of gene expression data. To address several shortcomings in these commonly used metrics, we present a magnitude-contingent monotonic correlation metric called Polaratio which is designed to enhance the quality of scRNA-seq data analysis.\r\n\r\nResults: We integrate three state-of-the-art interpretable clustering algorithms \u2013 Single-Cell Consensus Clustering (SC3), Hierarchical Clustering (HC), and K-Medoids (KM) \u2013 through a consensus cell clustering procedure, which we evaluate on various biological datasets to benchmark Polaratio against several well-known metrics. Our results demonstrate Polaratio\u2019s ability to improve the accuracy of cell clustering on 5 out of 7 publicly available datasets.\r\n\r\nAvailability: https://github.com/dubai03nsr/Polaratio\r\n\r\nContact: pcicalese{at}uh.edu", "field": [], "task": ["Graph Clustering"], "method": [], "dataset": ["Pollen et al", "Yan et al", "Treutlein et al", "Goolam et al", "Bozec et al", "Deng et al", "Biase et al"], "metric": ["Adjusted Rand Index"], "title": "Polaratio: A magnitude-contingent monotonic correlation metric and its improvements to scRNA-seq clustering"} {"abstract": "Obstacles hindering the development of capsule networks for challenging NLP applications include poor scalability to large output spaces and less reliable routing processes. In this paper, we introduce: 1) an agreement score to evaluate the performance of routing processes at instance level; 2) an adaptive optimizer to enhance the reliability of routing; 3) capsule compression and partial routing to improve the scalability of capsule networks. We validate our approach on two NLP tasks, namely: multi-label text classification and question answering. Experimental results show that our approach considerably improves over strong competitors on both tasks. In addition, we gain the best results in low-resource settings with few training instances.", "field": [], "task": ["Multi-Label Text Classification", "Question Answering", "Text Classification"], "method": [], "dataset": ["RCV1", "TrecQA", "EUR-Lex"], "metric": ["P@3", "nDCG@1", "P@5", "MAP", "nDCG@3", "MRR", "P@1", "nDCG@5"], "title": "Towards Scalable and Reliable Capsule Networks for Challenging NLP Applications"} {"abstract": "FlowNet2, the state-of-the-art convolutional neural network (CNN) for optical\nflow estimation, requires over 160M parameters to achieve accurate flow\nestimation. 
In this paper we present an alternative network that outperforms\nFlowNet2 on the challenging Sintel final pass and KITTI benchmarks, while being\n30 times smaller in the model size and 1.36 times faster in the running speed.\nThis is made possible by drilling down to architectural details that might have\nbeen missed in the current frameworks: (1) We present a more effective flow\ninference approach at each pyramid level through a lightweight cascaded\nnetwork. It not only improves flow estimation accuracy through early\ncorrection, but also permits seamless incorporation of descriptor matching in\nour network. (2) We present a novel flow regularization layer to ameliorate the\nissue of outliers and vague flow boundaries by using a feature-driven local\nconvolution. (3) Our network owns an effective structure for pyramidal feature\nextraction and embraces feature warping rather than image warping as practiced\nin FlowNet2. Our code and trained models are available at\nhttps://github.com/twhui/LiteFlowNet .", "field": [], "task": ["Optical Flow Estimation"], "method": [], "dataset": ["Sintel-final", "Sintel-clean"], "metric": ["Average End-Point Error"], "title": "LiteFlowNet: A Lightweight Convolutional Neural Network for Optical Flow Estimation"} {"abstract": "Real-world applications could benefit from the ability to automatically\ngenerate a fine-grained ranking of photo aesthetics. However, previous methods\nfor image aesthetics analysis have primarily focused on the coarse, binary\ncategorization of images into high- or low-aesthetic categories. In this work,\nwe propose to learn a deep convolutional neural network to rank photo\naesthetics in which the relative ranking of photo aesthetics is directly\nmodeled in the loss function. Our model incorporates joint learning of\nmeaningful photographic attributes and image content information which can help\nregularize the complicated photo aesthetics rating problem.\n To train and analyze this model, we have assembled a new aesthetics and\nattributes database (AADB) which contains aesthetic scores and meaningful\nattributes assigned to each image by multiple human raters. Anonymized rater\nidentities are recorded across images allowing us to exploit intra-rater\nconsistency using a novel sampling strategy when computing the ranking loss of\ntraining image pairs. We show the proposed sampling strategy is very effective\nand robust in the face of subjective judgement of image aesthetics by individuals\nwith different aesthetic tastes. Experiments demonstrate that our unified model\ncan generate aesthetic rankings that are more consistent with human ratings. To\nfurther validate our model, we show that by simply thresholding the estimated\naesthetic scores, we are able to achieve state-of-the-art classification\nperformance on the existing AVA dataset benchmark.", "field": [], "task": ["Aesthetics Quality Assessment"], "method": [], "dataset": ["AVA"], "metric": ["Accuracy"], "title": "Photo Aesthetics Ranking Network with Attributes and Content Adaptation"} {"abstract": "Retinal vessel segmentation from retinal images is an essential task for developing the computer-aided diagnosis system for retinal diseases. Efforts have been made on high-performance deep learning-based approaches to segment the retinal images in an end-to-end manner. However, the acquisition of retinal vessel images and segmentation labels requires onerous work from professional clinicians, which results in a smaller training dataset with incomplete labels.
As known, data-driven methods suffer from data insufficiency, and the models will easily over-fit the small-scale training data. Such a situation becomes more severe when the training vessel labels are incomplete or incorrect. In this paper, we propose a Study Group Learning (SGL) scheme to improve the robustness of the model trained on noisy labels. Besides, a learned enhancement map provides better visualization than conventional methods as an auxiliary tool for clinicians. Experiments demonstrate that the proposed method further improves the vessel segmentation performance in DRIVE and CHASE$\\_$DB1 datasets, especially when the training labels are noisy.", "field": [], "task": ["Retinal Vessel Segmentation"], "method": [], "dataset": ["CHASE_DB1", "DRIVE"], "metric": ["F1 score", "AUC"], "title": "Study Group Learning: Improving Retinal Vessel Segmentation Trained with Noisy Labels"} {"abstract": "We introduce SynSE, a novel syntactically guided generative approach for Zero-Shot Learning (ZSL). Our end-to-end approach learns progressively refined generative embedding spaces constrained within and across the involved modalities (visual, language). The inter-modal constraints are defined between action sequence embedding and embeddings of Parts of Speech (PoS) tagged words in the corresponding action description. We deploy SynSE for the task of skeleton-based action sequence recognition. Our design choices enable SynSE to generalize compositionally, i.e., recognize sequences whose action descriptions contain words not encountered during training. We also extend our approach to the more challenging Generalized Zero-Shot Learning (GZSL) problem via a confidence-based gating mechanism. We are the first to present zero-shot skeleton action recognition results on the large-scale NTU-60 and NTU-120 skeleton action datasets with multiple splits. Our results demonstrate SynSE's state of the art performance in both ZSL and GZSL settings compared to strong baselines on the NTU-60 and NTU-120 datasets.", "field": [], "task": ["Action Recognition", "Generalized Zero-Shot Learning", "Zero-Shot Learning", "Zero Shot Skeletal Action Recognition"], "method": [], "dataset": ["NTU RGB+D", "NTU RGB+D 120"], "metric": ["Harmonic Mean (5 unseen classes)", "Harmonic Mean (12 unseen classes)", "Accuracy (5 unseen classes)", "Harmonic Mean (10 unseen classes)", "Harmonic Mean (24 unseen classes)", "Accuracy (24 unseen classes)", "Accuracy (12 unseen classes)", "Accuracy (10 unseen classes)"], "title": "Syntactically Guided Generative Embeddings for Zero-Shot Skeleton Action Recognition"} {"abstract": "Convolutional neural networks have been applied to a wide variety of computer\nvision tasks. Recent advances in semantic segmentation have enabled their\napplication to medical image segmentation. While most CNNs use two-dimensional\nkernels, recent CNN-based publications on medical image segmentation featured\nthree-dimensional kernels, allowing full access to the three-dimensional\nstructure of medical images. Though closely related to semantic segmentation,\nmedical image segmentation includes specific challenges that need to be\naddressed, such as the scarcity of labelled data, the high class imbalance\nfound in the ground truth and the high memory demand of three-dimensional\nimages. In this work, a CNN-based method with three-dimensional filters is\ndemonstrated and applied to hand and brain MRI. 
Two modifications to an\nexisting CNN architecture are discussed, along with methods on addressing the\naforementioned challenges. While most of the existing literature on medical\nimage segmentation focuses on soft tissue and the major organs, this work is\nvalidated on data both from the central nervous system as well as the bones of\nthe hand.", "field": [], "task": ["Brain Tumor Segmentation", "Medical Image Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["BRATS-2015"], "metric": ["Dice Score"], "title": "CNN-based Segmentation of Medical Imaging Data"} {"abstract": "One of the most challenging problems in modern neuroimaging is detailed characterization of neurodegeneration. Quantifying spatial and longitudinal atrophy patterns is an important component of this process. These spatiotemporal signals will aid in discriminating between related diseases, such as frontotemporal dementia (FTD) and Alzheimer's disease (AD), which manifest themselves in the same at-risk population. Here, we develop a novel symmetric image normalization method (SyN) for maximizing the cross-correlation within the space of diffeomorphic maps and provide the Euler-Lagrange equations necessary for this optimization. We then turn to a careful evaluation of our method. Our evaluation uses gold standard, human cortical segmentation to contrast SyN's performance with a related elastic method and with the standard ITK implementation of Thirion's Demons algorithm. The new method compares favorably with both approaches, in particular when the distance between the template brain and the target brain is large. We then report the correlation of volumes gained by algorithmic cortical labelings of FTD and control subjects with those gained by the manual rater. This comparison shows that, of the three methods tested, SyN's volume measurements are the most strongly correlated with volume measurements gained by expert labeling. This study indicates that SyN, with cross-correlation, is a reliable method for normalizing and making anatomical measurements in volumetric MRI of patients and at-risk elderly individuals.", "field": [], "task": ["BIRL", "Diffeomorphic Medical Image Registration", "Image Registration"], "method": [], "dataset": ["Automatic Cardiac Diagnosis Challenge (ACDC)", "CIMA-10k", "OASIS+ADIBE+ADHD200+MCIC+PPMI+HABS+HarvardGSP", "CUMC12"], "metric": ["Dice (SE)", "Mean target overlap ratio", "CPU (sec)", "MMrTRE", "Grad Det-Jac", "RMSE", "Dice (Average)", "AMrTRE", "Dice", "Hausdorff Distance (mm)", "Neg Jacob Det"], "title": "Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain"} {"abstract": "We propose a novel scene graph generation model called Graph R-CNN, that is\nboth effective and efficient at detecting objects and their relations in\nimages. Our model contains a Relation Proposal Network (RePN) that efficiently\ndeals with the quadratic number of potential relations between objects in an\nimage. We also propose an attentional Graph Convolutional Network (aGCN) that\neffectively captures contextual information between objects and relations.\nFinally, we introduce a new evaluation metric that is more holistic and\nrealistic than existing metrics. 
We report state-of-the-art performance on\nscene graph generation as evaluated using both existing and our proposed\nmetrics.", "field": [], "task": ["Graph Generation", "Scene Graph Generation"], "method": [], "dataset": ["Visual Genome"], "metric": ["Recall@50"], "title": "Graph R-CNN for Scene Graph Generation"} {"abstract": "Automated affective computing in the wild setting is a challenging problem in\ncomputer vision. Existing annotated databases of facial expressions in the wild\nare small and mostly cover discrete emotions (aka the categorical model). There\nare very limited annotated facial databases for affective computing in the\ncontinuous dimensional model (e.g., valence and arousal). To meet this need, we\ncollected, annotated, and prepared for public distribution a new database of\nfacial emotions in the wild (called AffectNet). AffectNet contains more than\n1,000,000 facial images from the Internet by querying three major search\nengines using 1250 emotion related keywords in six different languages. About\nhalf of the retrieved images were manually annotated for the presence of seven\ndiscrete facial expressions and the intensity of valence and arousal. AffectNet\nis by far the largest database of facial expression, valence, and arousal in\nthe wild enabling research in automated facial expression recognition in two\ndifferent emotion models. Two baseline deep neural networks are used to\nclassify images in the categorical model and predict the intensity of valence\nand arousal. Various evaluation metrics show that our deep neural network\nbaselines can perform better than conventional machine learning methods and\noff-the-shelf facial expression recognition systems.", "field": [], "task": ["Facial Expression Recognition"], "method": [], "dataset": ["AffectNet"], "metric": ["Accuracy (7 emotion)", "Accuracy (8 emotion)"], "title": "AffectNet: A Database for Facial Expression, Valence, and Arousal Computing in the Wild"} {"abstract": "Recent random-forest (RF)-based image super-resolution approaches inherit\nsome properties from dictionary-learning-based algorithms, but the\neffectiveness of the properties in RF is overlooked in the literature. In this\npaper, we present a novel feature-augmented random forest (FARF) for image\nsuper-resolution, where the conventional gradient-based features are augmented\nwith gradient magnitudes and different feature recipes are formulated on\ndifferent stages in an RF. The advantages of our method are that, firstly, the\ndictionary-learning-based features are enhanced by adding gradient magnitudes,\nbased on the observation that the non-linear gradient magnitude are with highly\ndiscriminative property. Secondly, generalized locality-sensitive hashing (LSH)\nis used to replace principal component analysis (PCA) for feature\ndimensionality reduction and original high-dimensional features are employed,\ninstead of the compressed ones, for the leaf-nodes' regressors, since\nregressors can benefit from higher dimensional features. This\noriginal-compressed coupled feature sets scheme unifies the unsupervised LSH\nevaluation on both image super-resolution and content-based image retrieval\n(CBIR). Finally, we present a generalized weighted ridge regression (GWRR)\nmodel for the leaf-nodes' regressors. Experiment results on several public\nbenchmark datasets show that our FARF method can achieve an average gain of\nabout 0.3 dB, compared to traditional RF-based methods. 
Furthermore, a\nfine-tuned FARF model can compare to or (in many cases) outperform some recent\nstate-of-the-art deep-learning-based algorithms.", "field": [], "task": ["Content-Based Image Retrieval", "Dictionary Learning", "Dimensionality Reduction", "Image Retrieval", "Image Super-Resolution", "Regression", "Super-Resolution"], "method": [], "dataset": ["Set5 - 4x upscaling", "BSD100 - 4x upscaling", "Set14 - 4x upscaling"], "metric": ["PSNR"], "title": "Image Super-resolution via Feature-augmented Random Forest"} {"abstract": "To understand the world, we humans constantly need to relate the present to\nthe past, and put events in context. In this paper, we enable existing video\nmodels to do the same. We propose a long-term feature bank---supportive\ninformation extracted over the entire span of a video---to augment\nstate-of-the-art video models that otherwise would only view short clips of 2-5\nseconds. Our experiments demonstrate that augmenting 3D convolutional networks\nwith a long-term feature bank yields state-of-the-art results on three\nchallenging video datasets: AVA, EPIC-Kitchens, and Charades.", "field": [], "task": ["Action Classification", "Action Recognition", "Egocentric Activity Recognition", "Video Understanding"], "method": [], "dataset": ["EPIC-KITCHENS-55", "Charades"], "metric": ["Actions Top-1 (S2)", "Actions Top-1 (S1)", "MAP"], "title": "Long-Term Feature Banks for Detailed Video Understanding"} {"abstract": "Ensemble methods, traditionally built with independently trained de-correlated models, have proven to be efficient methods for reducing the remaining residual generalization error, which results in robust and accurate methods for real-world applications. In the context of deep learning, however, training an ensemble of deep networks is costly and generates high redundancy which is inefficient. In this paper, we present experiments on Ensembles with Shared Representations (ESRs) based on convolutional networks to demonstrate, quantitatively and qualitatively, their data processing efficiency and scalability to large-scale datasets of facial expressions. We show that redundancy and computational load can be dramatically reduced by varying the branching level of the ESR without loss of diversity and generalization power, which are both important for ensemble performance. Experiments on large-scale datasets suggest that ESRs reduce the remaining residual generalization error on the AffectNet and FER+ datasets, reach human-level performance, and outperform state-of-the-art methods on facial expression recognition in the wild using emotion and affect concepts.", "field": [], "task": ["Facial Expression Recognition"], "method": [], "dataset": ["AffectNet", "FER+"], "metric": ["Accuracy (7 emotion)", "Accuracy (8 emotion)", "Accuracy"], "title": "Efficient Facial Feature Learning with Wide Ensemble-based Convolutional Neural Networks"} {"abstract": "We address the problem of person re-identification (reID), that is, retrieving person images from a large dataset, given a query image of the person of interest. A key challenge is to learn person representations robust to intra-class variations, as different persons can have the same attribute and the same person's appearance looks different with viewpoint changes. Recent reID methods focus on learning discriminative features but robust to only a particular factor of variations (e.g., human pose), which requires corresponding supervisory signals (e.g., pose annotations).
To tackle this problem, we propose to disentangle identity-related and -unrelated features from person images. Identity-related features contain information useful for specifying a particular person (e.g., clothing), while identity-unrelated ones hold other factors (e.g., human pose, scale changes). To this end, we introduce a new generative adversarial network, dubbed \\emph{identity shuffle GAN} (IS-GAN), that factorizes these features using identification labels without any auxiliary information. We also propose an identity-shuffling technique to regularize the disentangled features. Experimental results demonstrate the effectiveness of IS-GAN, significantly outperforming the state of the art on standard reID benchmarks including the Market-1501, CUHK03 and DukeMTMC-reID. Our code and models are available online: https://cvlab-yonsei.github.io/projects/ISGAN/.", "field": [], "task": ["Person Re-Identification"], "method": [], "dataset": ["DukeMTMC-reID", "Market-1501"], "metric": ["Rank-1", "MAP"], "title": "Learning Disentangled Representation for Robust Person Re-identification"} {"abstract": "Most counting questions in visual question answering (VQA) datasets are\nsimple and require no more than object detection. Here, we study algorithms for\ncomplex counting questions that involve relationships between objects,\nattribute identification, reasoning, and more. To do this, we created TallyQA,\nthe world's largest dataset for open-ended counting. We propose a new algorithm\nfor counting that uses relation networks with region proposals. Our method lets\nrelation networks be efficiently used with high-resolution imagery. It yields\nstate-of-the-art results compared to baseline and recent systems on both\nTallyQA and the HowMany-QA benchmark.", "field": [], "task": ["Object Detection", "Question Answering", "Visual Question Answering"], "method": [], "dataset": ["100 sleep nights of 8 caregivers", "HowmanyQA", "TallyQA"], "metric": ["14 gestures accuracy", "Accuracy"], "title": "TallyQA: Answering Complex Counting Questions"} {"abstract": "Video interpolation increases the temporal resolution of a video sequence by synthesizing intermediate frames between two consecutive frames. We propose a novel deep-learning-based video interpolation algorithm based on bilateral motion estimation. First, we develop the bilateral motion network with the bilateral cost volume to estimate bilateral motions accurately. Then, we approximate bi-directional motions to predict a different kind of bilateral motions. We then warp the two input frames using the estimated bilateral motions. Next, we develop the dynamic filter generation network to yield dynamic blending filters. Finally, we combine the warped frames using the dynamic blending filters to generate intermediate frames. Experimental results show that the proposed algorithm outperforms the state-of-the-art video interpolation algorithms on several benchmark datasets.", "field": [], "task": ["Motion Estimation", "Video Frame Interpolation"], "method": [], "dataset": ["Middlebury", "Vimeo90k", "UCF101"], "metric": ["SSIM", "PSNR", "Interpolation Error"], "title": "BMBC:Bilateral Motion Estimation with Bilateral Cost Volume for Video Interpolation"} {"abstract": "Feature fusion, the combination of features from different layers or branches, is an omnipresent part of modern network architectures. It is often implemented via simple operations, such as summation or concatenation, but this might not be the best choice. 
In this work, we propose a uniform and general scheme, namely attentional feature fusion, which is applicable for most common scenarios, including feature fusion induced by short and long skip connections as well as within Inception layers. To better fuse features of inconsistent semantics and scales, we propose a multi-scale channel attention module, which addresses issues that arise when fusing features given at different scales. We also demonstrate that the initial integration of feature maps can become a bottleneck and that this issue can be alleviated by adding another level of attention, which we refer to as iterative attentional feature fusion. With fewer layers or parameters, our models outperform state-of-the-art networks on both CIFAR-100 and ImageNet datasets, which suggests that more sophisticated attention mechanisms for feature fusion hold great potential to consistently yield better results compared to their direct counterparts. Our codes and trained models are available online.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["ImageNet"], "metric": ["Number of params", "Top 5 Accuracy", "Top 1 Accuracy"], "title": "Attentional Feature Fusion"} {"abstract": "In this paper, we propose a state-of-the-art video denoising algorithm based on a convolutional neural network architecture. Previous neural network based approaches to video denoising have been unsuccessful as their performance cannot compete with the performance of patch-based methods. However, our approach outperforms other patch-based competitors with significantly lower computing times. In contrast to other existing neural network denoisers, our algorithm exhibits several desirable properties such as a small memory footprint, and the ability to handle a wide range of noise levels with a single network model. The combination of its denoising performance and lower computational load makes this algorithm attractive for practical denoising applications. We compare our method with different state-of-the-art algorithms, both visually and with respect to objective quality metrics. The experiments show that our algorithm compares favorably to other state-of-the-art methods. Video examples, code and models are publicly available at \\url{https://github.com/m-tassano/dvdnet}.", "field": [], "task": ["Denoising", "Video Denoising"], "method": [], "dataset": ["DAVIS sigma50", "Set8 sigma30", "DAVIS sigma20", "DAVIS sigma40", "DAVIS sigma10", "Set8 sigma40", "Set8 sigma10", "DAVIS sigma30", "Set8 sigma20", "Set8 sigma50"], "metric": ["PSNR"], "title": "DVDnet: A Fast Network for Deep Video Denoising"} {"abstract": "The search for efficient, sparse deep neural network models is most prominently performed by pruning: training a dense, overparameterized network and removing parameters, usually via following a manually-crafted heuristic. Additionally, the recent Lottery Ticket Hypothesis conjectures that, for a typically-sized neural network, it is possible to find small sub-networks which, when trained from scratch on a comparable budget, match the performance of the original dense counterpart. We revisit fundamental aspects of pruning algorithms, pointing out missing ingredients in previous approaches, and develop a method, Continuous Sparsification, which searches for sparse networks based on a novel approximation of an intractable $\\ell_0$ regularization.
We compare against dominant heuristic-based methods on pruning as well as ticket search -- finding sparse subnetworks that can be successfully re-trained from an early iterate. Empirical results show that we surpass the state-of-the-art for both objectives, across models and datasets, including VGG trained on CIFAR-10 and ResNet-50 trained on ImageNet. In addition to setting a new standard for pruning, Continuous Sparsification also offers fast parallel ticket search, opening doors to new applications of the Lottery Ticket Hypothesis.", "field": [], "task": ["Network Pruning", "Ticket Search", "Transfer Learning"], "method": [], "dataset": ["ImageNet - ResNet 50 - 90% sparsity"], "metric": ["Top-1 Accuracy"], "title": "Winning the Lottery with Continuous Sparsification"} {"abstract": "Motivation: Computational methods accelerate drug discovery and play an important role in biomedicine, such as molecular property prediction and compound-protein interaction identification. A key challenge is to learn useful molecular representation. In the early years, molecular properties are mainly calculated by quantum mechanics or predicted by traditional machine-learning methods, which requires expert knowledge and is often labor-intensive. Nowadays, graph neural networks have received significant attention because of the powerful ability to learn representation from graph data. Nevertheless, current graph-based methods have some limitations that need to be addressed, such as large-scale parameters and insufficient bond information extraction.\r\n\r\nResults: In this study, we proposed a graph-based approach that employed a novel triplet message mechanism to learn molecular representation efficiently, named triplet message networks (TrimNet). We show that TrimNet can accurately complete multiple molecular representation learning tasks with significant parameter reduction, including the quantum properties, bioactivity, physiology, and compound-protein interaction (CPI) prediction. In the experiments, TrimNet outperforms the previous state-of-the-art method by a significant margin on various datasets. Besides the few parameters and high prediction accuracy, TrimNet could focus on the atoms essential to the target properties, providing a clear interpretation of the prediction tasks. These advantages have established TrimNet as a powerful and useful computational tool in solving the challenging problem of molecular representation learning.", "field": [], "task": ["Drug Discovery", "Molecular Property Prediction", "Representation Learning"], "method": [], "dataset": ["MUV", "ToxCast", "HIV dataset", "ClinTox", "BACE", "Tox21"], "metric": ["AUC"], "title": "TrimNet: learning molecular representation from triplet messages for biomedicine"} {"abstract": "Topological data analysis is an emerging mathematical concept for\ncharacterizing shapes in multi-scale data. In this field, persistence diagrams\nare widely used as a descriptor of the input data, and can distinguish robust\nand noisy topological properties. Nowadays, it is highly desired to develop a\nstatistical framework on persistence diagrams to deal with practical data. This\npaper proposes a kernel method on persistence diagrams. A theoretical\ncontribution of our method is that the proposed kernel allows one to control\nthe effect of persistence, and, if necessary, noisy topological properties can\nbe discounted in data analysis. Furthermore, the method provides a fast\napproximation technique. 
The method is applied to several problems including\npractical data in physics, and the results show the advantage compared to the\nexisting kernel method on persistence diagrams.", "field": [], "task": ["Graph Classification", "Topological Data Analysis"], "method": [], "dataset": ["NEURON-BINARY", "NEURON-MULTI", "NEURON-Average"], "metric": ["Accuracy"], "title": "Kernel method for persistence diagrams via kernel embedding and weight factor"} {"abstract": "Our proposed deeply-supervised nets (DSN) method simultaneously minimizes\nclassification error while making the learning process of hidden layers direct\nand transparent. We make an attempt to boost the classification performance by\nstudying a new formulation in deep networks. Three aspects in convolutional\nneural networks (CNN) style architectures are being looked at: (1) transparency\nof the intermediate layers to the overall classification; (2)\ndiscriminativeness and robustness of learned features, especially in the early\nlayers; (3) effectiveness in training due to the presence of the exploding and\nvanishing gradients. We introduce a \"companion objective\" to the individual\nhidden layers, in addition to the overall objective at the output layer (a\ndifferent strategy to layer-wise pre-training). We extend techniques from\nstochastic gradient methods to analyze our algorithm. The advantage of our\nmethod is evident and our experimental results on benchmark datasets show\nsignificant performance gains over existing methods (e.g. all state-of-the-art\nresults on MNIST, CIFAR-10, CIFAR-100, and SVHN).", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["SVHN", "MNIST", "CIFAR-100", "CIFAR-10"], "metric": ["Percentage error", "Percentage correct"], "title": "Deeply-Supervised Nets"} {"abstract": "In multi-task learning, multiple tasks are solved jointly, sharing inductive\nbias between them. Multi-task learning is inherently a multi-objective problem\nbecause different tasks may conflict, necessitating a trade-off. A common\ncompromise is to optimize a proxy objective that minimizes a weighted linear\ncombination of per-task losses. However, this workaround is only valid when the\ntasks do not compete, which is rarely the case. In this paper, we explicitly\ncast multi-task learning as multi-objective optimization, with the overall\nobjective of finding a Pareto optimal solution. To this end, we use algorithms\ndeveloped in the gradient-based multi-objective optimization literature. These\nalgorithms are not directly applicable to large-scale learning problems since\nthey scale poorly with the dimensionality of the gradients and the number of\ntasks. We therefore propose an upper bound for the multi-objective loss and\nshow that it can be optimized efficiently.
We further prove that optimizing\nthis upper bound yields a Pareto optimal solution under realistic assumptions.\nWe apply our method to a variety of multi-task deep learning problems including\ndigit classification, scene understanding (joint semantic segmentation,\ninstance segmentation, and depth estimation), and multi-label classification.\nOur method produces higher-performing models than recent multi-task learning\nformulations or per-task training.", "field": [], "task": ["Depth Estimation", "Instance Segmentation", "Multi-Label Classification", "Multi-Task Learning", "Scene Understanding", "Semantic Segmentation"], "method": [], "dataset": ["CelebA", "Cityscapes test"], "metric": ["Error", "mIoU"], "title": "Multi-Task Learning as Multi-Objective Optimization"} {"abstract": "Normalising flows (NFS) map two density functions via a differentiable\nbijection whose Jacobian determinant can be computed efficiently. Recently, as\nan alternative to hand-crafted bijections, Huang et al. (2018) proposed neural\nautoregressive flow (NAF) which is a universal approximator for density\nfunctions. Their flow is a neural network (NN) whose parameters are predicted\nby another NN. The latter grows quadratically with the size of the former and\nthus an efficient technique for parametrization is needed. We propose block\nneural autoregressive flow (B-NAF), a much more compact universal approximator\nof density functions, where we model a bijection directly using a single\nfeed-forward network. Invertibility is ensured by carefully designing each\naffine transformation with block matrices that make the flow autoregressive and\n(strictly) monotone. We compare B-NAF to NAF and other established flows on\ndensity estimation and approximate inference for latent variable models. Our\nproposed flow is competitive across datasets while using orders of magnitude\nfewer parameters.", "field": [], "task": ["Density Estimation", "Latent Variable Models", "Normalising Flows"], "method": [], "dataset": ["Freyfaces", "UCI POWER", "Caltech-101", "UCI MINIBOONE", "BSDS300", "OMNIGLOT", "UCI GAS", "MNIST", "UCI HEPMASS"], "metric": ["NLL", "Negative ELBO", "Log-likelihood"], "title": "Block Neural Autoregressive Flow"} {"abstract": "We present a novel learning-based approach to estimate the direction-of-arrival (DOA) of a sound source using a convolutional recurrent neural network (CRNN) trained via regression on synthetic data and Cartesian labels. We also describe an improved method to generate synthetic data to train the neural network using state-of-the-art sound propagation algorithms that model specular as well as diffuse reflections of sound. We compare our model against three other CRNNs trained using different formulations of the same problem: classification on categorical labels, and regression on spherical coordinate labels. In practice, our model achieves up to 43% decrease in angular error over prior methods. The use of diffuse reflection results in 34% and 41% reduction in angular prediction errors on LOCATA and SOFA datasets, respectively, over prior methods based on image-source methods. 
Our method results in an additional 3% error reduction over prior schemes that use classification based networks, and we use 36% fewer network parameters.", "field": [], "task": ["Direction of Arrival Estimation", "Regression"], "method": [], "dataset": ["SOFA"], "metric": ["Angular Error"], "title": "Regression and Classification for Direction-of-Arrival Estimation with Convolutional Recurrent Neural Networks"} {"abstract": "We propose a dual pathway, 11-layers deep, three-dimensional Convolutional\nNeural Network for the challenging task of brain lesion segmentation. The\ndevised architecture is the result of an in-depth analysis of the limitations\nof current networks proposed for similar applications. To overcome the\ncomputational burden of processing 3D medical scans, we have devised an\nefficient and effective dense training scheme which joins the processing of\nadjacent image patches into one pass through the network while automatically\nadapting to the inherent class imbalance present in the data. Further, we\nanalyze the development of deeper, thus more discriminative 3D CNNs. In order\nto incorporate both local and larger contextual information, we employ a dual\npathway architecture that processes the input images at multiple scales\nsimultaneously. For post-processing of the network's soft segmentation, we use\na 3D fully connected Conditional Random Field which effectively removes false\npositives. Our pipeline is extensively evaluated on three challenging tasks of\nlesion segmentation in multi-channel MRI patient data with traumatic brain\ninjuries, brain tumors, and ischemic stroke. We improve on the state-of-the-art\nfor all three applications, with top ranking performance on the public\nbenchmarks BRATS 2015 and ISLES 2015. Our method is computationally efficient,\nwhich allows its adoption in a variety of research and clinical settings. The\nsource code of our implementation is made publicly available.", "field": [], "task": ["3D Medical Imaging Segmentation", "Brain Lesion Segmentation From Mri", "Brain Tumor Segmentation", "Lesion Segmentation", "Medical Image Segmentation"], "method": [], "dataset": ["ISLES-2015", "BRATS-2015"], "metric": ["Dice Score"], "title": "Efficient Multi-Scale 3D CNN with Fully Connected CRF for Accurate Brain Lesion Segmentation"} {"abstract": "Training competitive deep video models is an order of magnitude slower than training their counterpart image models. Slow training causes long research cycles, which hinders progress in video understanding research. Following standard practice for training image models, video model training assumes a fixed mini-batch shape: a specific number of clips, frames, and spatial size. However, what is the optimal shape? High resolution models perform well, but train slowly. Low resolution models train faster, but they are inaccurate. Inspired by multigrid methods in numerical optimization, we propose to use variable mini-batch shapes with different spatial-temporal resolutions that are varied according to a schedule. The different shapes arise from resampling the training data on multiple sampling grids. Training is accelerated by scaling up the mini-batch size and learning rate when shrinking the other dimensions. 
We empirically demonstrate a general and robust grid schedule that yields a significant out-of-the-box training speedup without a loss in accuracy for different models (I3D, non-local, SlowFast), datasets (Kinetics, Something-Something, Charades), and training settings (with and without pre-training, 128 GPUs or 1 GPU). As an illustrative example, the proposed multigrid method trains a ResNet-50 SlowFast network 4.5x faster (wall-clock time, same hardware) while also improving accuracy (+0.8% absolute) on Kinetics-400 compared to the baseline training method. Code is available online.", "field": [], "task": ["Action Detection", "Action Recognition", "Video Classification", "Video Understanding"], "method": [], "dataset": ["Something-Something V2", "Kinetics", "Charades"], "metric": ["Top-1", "mAP", "Top-1 Accuracy"], "title": "A Multigrid Method for Efficiently Training Video Models"} {"abstract": "Human-Object Interaction (HOI) detection lies at the core of action understanding. Besides 2D information such as human/object appearance and locations, 3D pose is also usually utilized in HOI learning since its view-independence. However, rough 3D body joints just carry sparse body information and are not sufficient to understand complex interactions. Thus, we need detailed 3D body shape to go further. Meanwhile, the interacted object in 3D is also not fully studied in HOI learning. In light of these, we propose a detailed 2D-3D joint representation learning method. First, we utilize the single-view human body capture method to obtain detailed 3D body, face and hand shapes. Next, we estimate the 3D object location and size with reference to the 2D human-object spatial configuration and object category priors. Finally, a joint learning framework and cross-modal consistency tasks are proposed to learn the joint HOI representation. To better evaluate the 2D ambiguity processing capacity of models, we propose a new benchmark named Ambiguous-HOI consisting of hard ambiguous images. Extensive experiments in large-scale HOI benchmark and Ambiguous-HOI show impressive effectiveness of our method. 
Code and data are available at https://github.com/DirtyHarryLYL/DJ-RN.", "field": [], "task": ["Human-Object Interaction Detection", "Representation Learning"], "method": [], "dataset": ["HICO-DET", "Ambiguious-HOI"], "metric": ["mAP", "MAP"], "title": "Detailed 2D-3D Joint Representation for Human-Object Interaction"} {"abstract": "In this paper, we develop conditional random field (CRF) based single-stage (SS) acoustic modeling with connectionist temporal classification (CTC) inspired state topology, which is called CTC-CRF for short.\r\nCTC-CRF is conceptually simple, which basically implements a CRF layer on top of features generated by the bottom neural network with the special state topology.\r\nLike SS-LF-MMI (lattice-free maximum-mutual-information), CTC-CRFs can be trained from scratch (flat-start), eliminating GMM-HMM pre-training and tree-building.\r\nEvaluation experiments are conducted on the WSJ, Switchboard and Librispeech datasets.\r\nIn a head-to-head comparison, the CTC-CRF model using simple Bidirectional LSTMs consistently outperforms the strong SS-LF-MMI, across all the three benchmarking datasets and in both cases of mono-phones and mono-chars.\r\nAdditionally, CTC-CRFs avoid some ad-hoc operation in SS-LF-MMI.", "field": [], "task": ["Speech Recognition"], "method": [], "dataset": ["LibriSpeech test-other", "WSJ eval92", "LibriSpeech test-clean", "WSJ eval93"], "metric": ["Word Error Rate (WER)"], "title": "CRF-based Single-stage Acoustic Modeling with CTC Topology"} {"abstract": "Graph Convolutional Neural Networks (GCNNs) are the most recent exciting\nadvancement in deep learning field and their applications are quickly spreading\nin multi-cross-domains including bioinformatics, chemoinformatics, social\nnetworks, natural language processing and computer vision. In this paper, we\nexpose and tackle some of the basic weaknesses of a GCNN model with a capsule\nidea presented in \\cite{hinton2011transforming} and propose our Graph Capsule\nNetwork (GCAPS-CNN) model. In addition, we design our GCAPS-CNN model to solve\nespecially graph classification problem which current GCNN models find\nchallenging. Through extensive experiments, we show that our proposed Graph\nCapsule Network can significantly outperforms both the existing state-of-art\ndeep learning methods and graph kernels on graph classification benchmark\ndatasets.", "field": [], "task": ["Graph Classification"], "method": [], "dataset": ["D&D", "PROTEINS", "IMDb-B", "NCI1"], "metric": ["Accuracy"], "title": "Graph Capsule Convolutional Neural Networks"} {"abstract": "Retinal vessel segmentation is of great interest for diagnosis of retinal vascular diseases. To further improve the performance of vessel segmentation, we propose IterNet, a new model based on UNet, with the ability to find obscured details of the vessel from the segmented vessel image itself, rather than the raw input image. IterNet consists of multiple iterations of a mini-UNet, which can be 4$\\times$ deeper than the common UNet. IterNet also adopts the weight-sharing and skip-connection features to facilitate training; therefore, even with such a large architecture, IterNet can still learn from merely 10$\\sim$20 labeled images, without pre-training or any prior knowledge. IterNet achieves AUCs of 0.9816, 0.9851, and 0.9881 on three mainstream datasets, namely DRIVE, CHASE-DB1, and STARE, respectively, which currently are the best scores in the literature. 
The source code is available.", "field": [], "task": ["Retinal Vessel Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["CHASE_DB1", "DRIVE"], "metric": ["F1 score", "AUC"], "title": "IterNet: Retinal Image Segmentation Utilizing Structural Redundancy in Vessel Networks"} {"abstract": "While neural networks have been shown to achieve impressive results for\nsentence-level sentiment analysis, targeted aspect-based sentiment analysis\n(TABSA) --- extraction of fine-grained opinion polarity w.r.t. a pre-defined\nset of aspects --- remains a difficult task. Motivated by recent advances in\nmemory-augmented models for machine reading, we propose a novel architecture,\nutilising external \"memory chains\" with a delayed memory update mechanism to\ntrack entities. On a TABSA task, the proposed model demonstrates substantial\nimprovements over state-of-the-art approaches, including those using external\nknowledge bases.", "field": [], "task": ["Aspect-Based Sentiment Analysis", "Reading Comprehension", "Sentiment Analysis"], "method": [], "dataset": ["Sentihood"], "metric": ["Aspect", "Sentiment"], "title": "Recurrent Entity Networks with Delayed Memory Update for Targeted Aspect-based Sentiment Analysis"} {"abstract": "Emotion detection in conversations is a necessary step for a number of applications, including opinion mining over chat history, social media threads, debates, argumentation mining, understanding consumer feedback in live conversations, etc. Currently, systems do not treat the parties in the conversation individually by adapting to the speaker of each utterance. In this paper, we describe a new method based on recurrent neural networks that keeps track of the individual party states throughout the conversation and uses this information for emotion classification. Our model outperforms the state of the art by a significant margin on two different datasets.", "field": [], "task": ["Emotion Classification", "Emotion Recognition in Conversation", "Multimodal Emotion Recognition"], "method": [], "dataset": ["IEMOCAP", "SEMAINE"], "metric": ["MAE (Arousal)", "MAE (Power)", "MAE (Valence)", "MAE (Expectancy)", "F1", "Accuracy"], "title": "DialogueRNN: An Attentive RNN for Emotion Detection in Conversations"} {"abstract": "Interactive image segmentation is characterized by multimodality. When the user clicks on a door, do they intend to select the door or the whole house? We present an end-to-end learning approach to interactive image segmentation that tackles this ambiguity. Our architecture couples two convolutional networks. The first is trained to synthesize a diverse set of plausible segmentations that conform to the user's input. The second is trained to select among these. By selecting a single solution, our approach retains compatibility with existing interactive segmentation interfaces. By synthesizing multiple diverse solutions before selecting one, the architecture is given the representational power to explore the multimodal solution space. 
We show that the proposed approach outperforms existing methods for interactive image segmentation, including prior work that applied convolutional networks to this problem, while being much faster.", "field": [], "task": ["Interactive Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["GrabCut", "DAVIS", "SBD"], "metric": ["NoC@90", "NoC@85"], "title": "Interactive Image Segmentation With Latent Diversity"} {"abstract": "To alleviate sparsity and cold start problem of collaborative filtering based\nrecommender systems, researchers and engineers usually collect attributes of\nusers and items, and design delicate algorithms to exploit these additional\ninformation. In general, the attributes are not isolated but connected with\neach other, which forms a knowledge graph (KG). In this paper, we propose\nKnowledge Graph Convolutional Networks (KGCN), an end-to-end framework that\ncaptures inter-item relatedness effectively by mining their associated\nattributes on the KG. To automatically discover both high-order structure\ninformation and semantic information of the KG, we sample from the neighbors\nfor each entity in the KG as their receptive field, then combine neighborhood\ninformation with bias when calculating the representation of a given entity.\nThe receptive field can be extended to multiple hops away to model high-order\nproximity information and capture users' potential long-distance interests.\nMoreover, we implement the proposed KGCN in a minibatch fashion, which enables\nour model to operate on large datasets and KGs. We apply the proposed model to\nthree datasets about movie, book, and music recommendation, and experiment\nresults demonstrate that our approach outperforms strong recommender baselines.", "field": [], "task": ["Click-Through Rate Prediction", "Link Prediction", "Recommendation Systems"], "method": [], "dataset": ["MovieLens 20M", "Book-Crossing", "MovieLens 25M", "Yelp", "Last.FM"], "metric": ["HR@10", "F1", "AUC", "Hits@10", "nDCG@10"], "title": "Knowledge Graph Convolutional Networks for Recommender Systems"} {"abstract": "Current action recognition methods heavily rely on trimmed videos for model\ntraining. However, it is expensive and time-consuming to acquire a large-scale\ntrimmed video dataset. This paper presents a new weakly supervised\narchitecture, called UntrimmedNet, which is able to directly learn action\nrecognition models from untrimmed videos without the requirement of temporal\nannotations of action instances. Our UntrimmedNet couples two important\ncomponents, the classification module and the selection module, to learn the\naction models and reason about the temporal duration of action instances,\nrespectively. These two components are implemented with feed-forward networks,\nand UntrimmedNet is therefore an end-to-end trainable architecture. We exploit\nthe learned models for action recognition (WSR) and detection (WSD) on the\nuntrimmed video datasets of THUMOS14 and ActivityNet. 
Although our UntrimmedNet\nonly employs weak supervision, our method achieves performance superior or\ncomparable to that of those strongly supervised approaches on these two\ndatasets.", "field": [], "task": ["Action Recognition", "Temporal Action Localization", "Weakly Supervised Action Localization", "Weakly-Supervised Action Recognition"], "method": [], "dataset": ["ActivityNet-1.2", "THUMOS 2014", "THUMOS\u201914"], "metric": ["mAP", "mAP@0.1:0.7", "mAP@0.5"], "title": "UntrimmedNets for Weakly Supervised Action Recognition and Detection"} {"abstract": "With the rapid development of the fashion market, the demands of\ncustomers for fashion recommendation are rising. In this paper, we aim to\ninvestigate a practical problem of fashion recommendation by answering the\nquestion \"which item should we select to match with the given fashion items and\nform a compatible outfit\". The key to this problem is to estimate the outfit\ncompatibility. Previous works which focus on the compatibility of two items or\nrepresent an outfit as a sequence fail to make full use of the complex\nrelations among items in an outfit. To remedy this, we propose to represent an\noutfit as a graph. In particular, we construct a Fashion Graph, where each node\nrepresents a category and each edge represents interaction between two\ncategories. Accordingly, each outfit can be represented as a subgraph by\nputting items into their corresponding category nodes. To infer the outfit\ncompatibility from such a graph, we propose Node-wise Graph Neural Networks\n(NGNN) which can better model node interactions and learn better node\nrepresentations. In NGNN, the node interaction on each edge is different, which\nis determined by parameters correlated to the two connected nodes. An attention\nmechanism is utilized to calculate the outfit compatibility score with learned\nnode representations. NGNN can not only be used to model outfit compatibility\nfrom visual or textual modality but also from multiple modalities. We conduct\nexperiments on two tasks: (1) Fill-in-the-blank: suggesting an item that\nmatches with existing components of outfit; (2) Compatibility prediction:\npredicting the compatibility scores of given outfits. Experimental results\ndemonstrate the great superiority of our proposed method over others.", "field": [], "task": ["Recommendation Systems"], "method": [], "dataset": ["Polyvore"], "metric": ["Accuracy"], "title": "Dressing as a Whole: Outfit Compatibility Learning Based on Node-wise Graph Neural Networks"} {"abstract": "Recent studies demonstrate the effectiveness of Recurrent Neural Networks (RNNs) for action recognition in videos. However, previous works mainly utilize video-level category as supervision to train RNNs, which may prohibit RNNs from learning complex motion structures along time. In this paper, we propose a recurrent pose-attention network (RPAN) to address this challenge, where we introduce a novel pose-attention mechanism to adaptively learn pose-related features at every time-step action prediction of RNNs. More specifically, we make three main contributions in this paper. Firstly, unlike previous works on pose-related action recognition, our RPAN is an end-to-end recurrent network which can exploit important spatial-temporal evolutions of human pose to assist action recognition in a unified framework.
Secondly, instead of learning individual human-joint features separately, our pose-attention mechanism learns robust human-part features by sharing attention parameters partially on the semantically-related human joints. These human-part features are then fed into the human-part pooling layer to construct a highly-discriminative pose-related representation for temporal action modeling. Thirdly, one important byproduct of our RPAN is pose estimation in videos, which can be used for coarse pose annotation in action videos. We evaluate the proposed RPAN quantitatively and qualitatively on two popular benchmarks, i.e., Sub-JHMDB and PennAction. Experimental results show that RPAN outperforms the recent state-of-the-art methods on these challenging datasets.", "field": [], "task": ["Action Recognition", "Action Recognition In Videos", "Action Recognition In Videos ", "Pose Estimation", "Skeleton Based Action Recognition"], "method": [], "dataset": ["J-HMDB"], "metric": ["Accuracy (RGB+pose)"], "title": "RPAN: An End-to-End Recurrent Pose-Attention Network for Action Recognition in Videos"} {"abstract": "Understanding actions and gestures in video streams requires temporal reasoning of the spatial content from different time instants, i.e., spatiotemporal (ST) modeling. In this survey paper, we have made a comparative analysis of different ST modeling techniques for action and gesture recognition tasks. Since Convolutional Neural Networks (CNNs) have proven to be an effective tool as a feature extractor for static images, we apply ST modeling techniques on the features of static images from different time instants extracted by CNNs. All techniques are trained end-to-end together with a CNN feature extraction part and evaluated on two publicly available benchmarks: The Jester and the Something-Something datasets. The Jester dataset contains various dynamic and static hand gestures, whereas the Something-Something dataset contains actions of human-object interactions. The common characteristic of these two benchmarks is that the designed architectures need to capture the full temporal content of videos in order to correctly classify actions/gestures. Contrary to expectations, experimental results show that Recurrent Neural Network (RNN) based ST modeling techniques yield inferior results compared to other techniques such as fully convolutional architectures.
Codes and pretrained models of this work are publicly available.", "field": [], "task": ["Action Recognition", "Human-Object Interaction Detection"], "method": [], "dataset": ["Something-Something V2"], "metric": ["Top-1 Accuracy"], "title": "Comparative Analysis of CNN-based Spatiotemporal Reasoning in Videos"} {"abstract": "Easy-to-use,Modular and Extendible package of deep-learning based CTR models.DeepFM,DeepInterestNetwork(DIN),DeepInterestEvolutionNetwork(DIEN),DeepCrossNetwork(DCN),AttentionalFactorizationMachine(AFM),Neural Factorization Machine(NFM),AutoInt,Deep Session Interest Network(DSIN)", "field": [], "task": ["Click-Through Rate Prediction", "Recommendation Systems"], "method": [], "dataset": ["Avazu", "Criteo", "Huawei App Store"], "metric": ["Log Loss", "LogLoss", "AUC"], "title": "Feature Generation by Convolutional Neural Network for Click-Through Rate Prediction"} {"abstract": "The most recent trend in estimating the 6D pose of rigid objects has been to\ntrain deep networks to either directly regress the pose from the image or to\npredict the 2D locations of 3D keypoints, from which the pose can be obtained\nusing a PnP algorithm. In both cases, the object is treated as a global entity,\nand a single pose estimate is computed. As a consequence, the resulting\ntechniques can be vulnerable to large occlusions.\n In this paper, we introduce a segmentation-driven 6D pose estimation\nframework where each visible part of the objects contributes a local pose\nprediction in the form of 2D keypoint locations. We then use a predicted\nmeasure of confidence to combine these pose candidates into a robust set of\n3D-to-2D correspondences, from which a reliable pose estimate can be obtained.\nWe outperform the state-of-the-art on the challenging Occluded-LINEMOD and\nYCB-Video datasets, which is evidence that our approach deals well with\nmultiple poorly-textured objects occluding each other. Furthermore, it relies\non a simple enough architecture to achieve real-time performance.", "field": [], "task": ["6D Pose Estimation", "6D Pose Estimation using RGB", "Pose Estimation", "Pose Prediction"], "method": [], "dataset": ["YCB-Video", "Occlusion LineMOD"], "metric": ["Mean ADD", "Accuracy (ADD)"], "title": "Segmentation-driven 6D Object Pose Estimation"} {"abstract": "A variety of graph neural networks (GNNs) frameworks for representation learning on graphs have been recently developed. These frameworks rely on aggregation and iteration scheme to learn the representation of nodes. However, information between nodes is inevitably lost in the scheme during learning. In order to reduce the loss, we extend the GNNs frameworks by exploring the aggregation and iteration scheme in the methodology of mutual information. We propose a new approach of enlarging the normal neighborhood in the aggregation of GNNs, which aims at maximizing mutual information. 
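As an aside on the mutual-information abstract above: the "enlarging the normal neighborhood in the aggregation" idea can be illustrated with a minimal numpy sketch that aggregates node features over a multi-hop neighborhood before a learnable projection. This is only a sketch of the aggregation step; the MI estimator and the specific aggregation weights used in the paper are not reproduced, and all function and variable names here are illustrative.

```python
import numpy as np

def normalize(adj):
    """Row-normalize an adjacency/reachability matrix."""
    deg = adj.sum(axis=1, keepdims=True)
    return adj / np.maximum(deg, 1.0)

def enlarged_aggregation(features, adj, weight, hops=2):
    """One GNN layer that aggregates over an enlarged (multi-hop) neighborhood.

    features: (N, F) node features, adj: (N, N) binary adjacency,
    weight: (F, F_out) learnable projection. hops > 1 enlarges the
    neighborhood used for aggregation, the idea motivated above via
    mutual information maximization.
    """
    reach = np.eye(adj.shape[0])
    hop = np.eye(adj.shape[0])
    for _ in range(hops):
        hop = hop @ adj
        reach = np.minimum(reach + hop, 1.0)   # union of k-hop neighborhoods
    agg = normalize(reach) @ features          # average over the enlarged neighborhood
    return np.maximum(agg @ weight, 0.0)       # ReLU

# toy usage: 4 nodes on a path graph
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 8)
W = np.random.randn(8, 16)
print(enlarged_aggregation(X, A, W).shape)     # (4, 16)
```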
Based on a series of experiments conducted on several benchmark datasets, we show that the proposed approach improves the state-of-the-art performance for four types of graph tasks, including supervised and semi-supervised graph classification, graph link prediction and graph edge generation and classification.", "field": [], "task": ["Graph Classification", "Link Prediction", "Representation Learning"], "method": [], "dataset": ["COLLAB", "Cora", "IMDb-B", "PROTEINS", "Citeseer", "20NEWS", "NCI1", "Digits", "IMDb-M", "MUTAG", "Wine", "PTC", "Pubmed", "Cancer"], "metric": ["AP", "AUC", "Accuracy"], "title": "Mutual Information Maximization in Graph Neural Networks"} {"abstract": "The exploding cost and time needed for data labeling and model training are bottlenecks for training DNN models on large datasets. Identifying smaller representative data samples with strategies like active learning can help mitigate such bottlenecks. Previous works on active learning in NLP identify the problem of sampling bias in the samples acquired by uncertainty-based querying and develop costly approaches to address it. Using a large empirical study, we demonstrate that active set selection using the posterior entropy of deep models like FastText.zip (FTZ) is robust to sampling biases and to various algorithmic choices (query size and strategies) unlike that suggested by traditional literature. We also show that FTZ based query strategy produces sample sets similar to those from more sophisticated approaches (e.g ensemble networks). Finally, we show the effectiveness of the selected samples by creating tiny high-quality datasets, and utilizing them for fast and cheap training of large models. Based on the above, we propose a simple baseline for deep active text classification that outperforms the state-of-the-art. We expect the presented work to be useful and informative for dataset compression and for problems involving active, semi-supervised or online learning scenarios. Code and models are available at: https://github.com/drimpossible/Sampling-Bias-Active-Learning", "field": [], "task": ["Active Learning", "Text Classification"], "method": [], "dataset": ["Yelp-2", "Amazon-5", "Yahoo! Answers", "DBpedia", "Yelp-5", "AG News", "Sogou News", "Amazon-2"], "metric": ["Error", "Accuracy"], "title": "Sampling Bias in Deep Active Classification: An Empirical Study"} {"abstract": "Weakly supervised object detection (WSOD) using only image-level annotations has attracted growing attention over the past few years. Existing approaches using multiple instance learning easily fall into local optima, because such mechanism tends to learn from the most discriminative object in an image for each category. Therefore, these methods suffer from missing object instances which degrade the performance of WSOD. To address this problem, this paper introduces an end-to-end object instance mining (OIM) framework for weakly supervised object detection. OIM attempts to detect all possible object instances existing in each image by introducing information propagation on the spatial and appearance graphs, without any additional annotations. During the iterative learning process, the less discriminative object instances from the same class can be gradually detected and utilized for training. In addition, we design an object instance reweighted loss to learn larger portion of each object instance to further improve the performance. 
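The object instance mining abstract above describes propagating information over spatial and appearance graphs built from proposals so that less discriminative instances are also detected. The sketch below is one plausible reading of that step, not the paper's implementation: proposals connected by high box overlap or high feature similarity inherit part of a neighbor's detection score. The thresholds, the propagation rule, and all names are assumptions made for illustration.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area(box_a) + area(box_b) - inter + 1e-8)

def propagate_scores(boxes, scores, feats, iou_thr=0.3, sim_thr=0.8):
    """One propagation step over a spatial graph (IoU) and an appearance graph
    (cosine similarity of proposal features): a proposal connected to a
    high-scoring neighbor inherits part of that score."""
    feats = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    sim = feats @ feats.T
    n = len(boxes)
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and (iou(boxes[i], boxes[j]) > iou_thr or sim[i, j] > sim_thr):
                adj[i, j] = 1.0
    return np.maximum(scores, 0.5 * (adj * scores[None, :]).max(axis=1))

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.2, 0.1])
feats = np.random.randn(3, 16)
print(propagate_scores(boxes, scores, feats))
```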
The experimental results on two publicly available databases, VOC 2007 and 2012, demonstrate the efficacy of the proposed approach.", "field": [], "task": ["Multiple Instance Learning", "Object Detection", "Weakly Supervised Object Detection"], "method": [], "dataset": ["PASCAL VOC 2007", "PASCAL VOC 2012 test"], "metric": ["MAP"], "title": "Object Instance Mining for Weakly Supervised Object Detection"} {"abstract": "We propose a new method for video object segmentation (VOS) that addresses object pattern learning from unlabeled videos, unlike most existing methods which rely heavily on extensive annotated data. We introduce a unified unsupervised/weakly supervised learning framework, called MuG, that comprehensively captures intrinsic properties of VOS at multiple granularities. Our approach can help advance understanding of visual patterns in VOS and significantly reduce annotation burden. With a carefully-designed architecture and strong representation learning ability, our learned model can be applied to diverse VOS settings, including object-level zero-shot VOS, instance-level zero-shot VOS, and one-shot VOS. Experiments demonstrate promising performance in these settings, as well as the potential of MuG in leveraging unlabeled data to further improve the segmentation accuracy.", "field": [], "task": ["Representation Learning", "Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Unsupervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["DAVIS 2017 (val)", "DAVIS 2017 (test-dev)", "DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "Learning Video Object Segmentation from Unlabeled Videos"} {"abstract": "Existing shadow detection methods suffer from an intrinsic limitation in relying on limited labeled datasets, and they may produce poor results in some complicated situations. To boost the shadow detection performance, this paper presents a multi-task mean teacher model for semi-supervised shadow detection by leveraging unlabeled data and exploring the learning of multiple information of shadows simultaneously. To be specific, we first build a multi-task baseline model to simultaneously detect shadow regions, shadow edges, and shadow count by leveraging their complementary information and assign this baseline model to the student and teacher network. After that, we encourage the predictions of the three tasks from the student and teacher networks to be consistent for computing a consistency loss on unlabeled data, which is then added to the supervised loss on the labeled data from the predictions of the multi-task baseline model. Experimental results on three widely-used benchmark datasets show that our method consistently outperforms all the compared state-of-the-art methods, which verifies that the proposed network can effectively leverage additional unlabeled data to boost the shadow detection performance.\r", "field": [], "task": ["Shadow Detection"], "method": [], "dataset": ["SBU"], "metric": ["BER"], "title": "A Multi-Task Mean Teacher for Semi-Supervised Shadow Detection"} {"abstract": "Viewing recurrent neural networks (RNNs) as continuous-time dynamical systems, we propose a recurrent unit that describes the hidden state's evolution with two parts: a well-understood linear component plus a Lipschitz nonlinearity.
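A minimal numpy sketch of the kind of recurrent unit just described (a linear component plus a Lipschitz nonlinearity), discretized with a forward-Euler step. The paper's specific parameterization of the hidden-to-hidden matrices, which is what guarantees stability, is not reproduced here; the stable choice of A below and all names are illustrative assumptions.

```python
import numpy as np

def lipschitz_rnn_step(h, x, A, W, U, b, dt=0.1):
    """One Euler step of dh/dt = A h + tanh(W h + U x + b).

    A gives the well-understood linear component; tanh(.) is 1-Lipschitz,
    so the second term is a Lipschitz nonlinearity as in the abstract."""
    return h + dt * (A @ h + np.tanh(W @ h + U @ x + b))

hidden, inp = 16, 8
rng = np.random.default_rng(0)
A = -0.5 * np.eye(hidden)                      # a simple stable linear part (illustrative)
W = rng.normal(size=(hidden, hidden)) / hidden
U = rng.normal(size=(hidden, inp))
b, h = np.zeros(hidden), np.zeros(hidden)
for t in range(100):                            # unroll over a random input sequence
    h = lipschitz_rnn_step(h, rng.normal(size=inp), A, W, U, b)
print(h.shape)                                  # (16,)
```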
This particular functional form facilitates stability analysis of the long-term behavior of the recurrent unit using tools from nonlinear systems theory. In turn, this enables architectural design decisions before experimentation. Sufficient conditions for global stability of the recurrent unit are obtained, motivating a novel scheme for constructing hidden-to-hidden matrices. Our experiments demonstrate that the Lipschitz RNN can outperform existing recurrent units on a range of benchmark tasks, including computer vision, language modeling and speech prediction tasks. Finally, through Hessian-based analysis we demonstrate that our Lipschitz recurrent unit is more robust with respect to input and parameter perturbations as compared to other continuous-time RNNs.", "field": [], "task": ["Language Modelling", "Sequential Image Classification"], "method": [], "dataset": ["Sequential CIFAR-10", "Sequential MNIST"], "metric": ["Permuted Accuracy", "Unpermuted Accuracy"], "title": "Lipschitz Recurrent Neural Networks"} {"abstract": "Learning to walk over a graph towards a target node for a given query and a\nsource node is an important problem in applications such as knowledge base\ncompletion (KBC). It can be formulated as a reinforcement learning (RL) problem\nwith a known state transition model. To overcome the challenge of sparse\nrewards, we develop a graph-walking agent called M-Walk, which consists of a\ndeep recurrent neural network (RNN) and Monte Carlo Tree Search (MCTS). The RNN\nencodes the state (i.e., history of the walked path) and maps it separately to\na policy and Q-values. In order to effectively train the agent from sparse\nrewards, we combine MCTS with the neural policy to generate trajectories\nyielding more positive rewards. From these trajectories, the network is\nimproved in an off-policy manner using Q-learning, which modifies the RNN\npolicy via parameter sharing. Our proposed RL algorithm repeatedly applies this\npolicy-improvement step to learn the model. At test time, MCTS is combined with\nthe neural policy to predict the target node. Experimental results on several\ngraph-walking benchmarks show that M-Walk is able to learn better policies than\nother RL-based methods, which are mainly based on policy gradients. M-Walk also\noutperforms traditional KBC baselines.", "field": [], "task": ["Knowledge Base Completion", "Link Prediction", "Q-Learning"], "method": [], "dataset": ["WN18RR"], "metric": ["Hits@3", "MRR", "Hits@1"], "title": "M-Walk: Learning to Walk over Graphs using Monte Carlo Tree Search"} {"abstract": "Models and examples built with TensorFlow", "field": [], "task": ["Depth And Camera Motion", "Depth Estimation", "Motion Estimation", "Robot Navigation"], "method": [], "dataset": ["KITTI Eigen split"], "metric": ["absolute relative error"], "title": "Depth Prediction Without the Sensors: Leveraging Structure for Unsupervised Learning from Monocular Videos"} {"abstract": "Electrocardiogram (ECG) can be reliably used as a measure to monitor the\nfunctionality of the cardiovascular system. Recently, there has been a great\nattention towards accurate categorization of heartbeats. While there are many\ncommonalities between different ECG conditions, the focus of most studies has\nbeen classifying a set of conditions on a dataset annotated for that task\nrather than learning and employing a transferable knowledge between different\ntasks. 
In this paper, we propose a method based on deep convolutional neural\nnetworks for the classification of heartbeats which is able to accurately\nclassify five different arrhythmias in accordance with the AAMI EC57 standard.\nFurthermore, we suggest a method for transferring the knowledge acquired on\nthis task to the myocardial infarction (MI) classification task. We evaluated\nthe proposed method on PhysioNet's MIT-BIH and PTB Diagnostics datasets.\nAccording to the results, the suggested method is able to make predictions with\nthe average accuracies of 93.4% and 95.9% on arrhythmia classification and MI\nclassification, respectively.", "field": [], "task": ["Arrhythmia Detection", "Electrocardiography (ECG)", "Heartbeat Classification", "Myocardial infarction detection"], "method": [], "dataset": ["PTB dataset, ECG lead II", "MIT-BIH AR"], "metric": ["Accuracy (Inter-Patient)", "Accuracy"], "title": "ECG Heartbeat Classification: A Deep Transferable Representation"} {"abstract": "In recent years, human-object interaction (HOI) detection has achieved impressive advances. However, conventional two-stage methods are usually slow in inference. On the other hand, existing one-stage methods mainly focus on the union regions of interactions, which introduce unnecessary visual information as disturbances to HOI detection. To tackle the problems above, we propose a novel one-stage HOI detection approach DIRV in this paper, based on a new concept called interaction region for the HOI problem. Unlike previous methods, our approach concentrates on the densely sampled interaction regions across different scales for each human-object pair, so as to capture the subtle visual features that are most essential to the interaction. Moreover, in order to compensate for the detection flaws of a single interaction region, we introduce a novel voting strategy that makes full use of those overlapped interaction regions in place of conventional Non-Maximal Suppression (NMS). Extensive experiments on two popular benchmarks: V-COCO and HICO-DET show that our approach outperforms existing state-of-the-arts by a large margin with the highest inference speed and lightest network architecture. We achieved 56.1 mAP on V-COCO without additional input. Our code is publicly available at: https://github.com/MVIG-SJTU/DIRV", "field": [], "task": ["Human-Object Interaction Detection"], "method": [], "dataset": ["HICO-DET", "V-COCO"], "metric": ["Time Per Frame(ms)", "Time Per Frame (ms)", "MAP"], "title": "DIRV: Dense Interaction Region Voting for End-to-End Human-Object Interaction Detection"} {"abstract": "The use of user/product information in sentiment analysis is important,\nespecially for cold-start users/products, whose number of reviews is very\nlimited. However, current models do not deal with the cold-start problem which\nis typical in review websites. In this paper, we present Hybrid Contextualized\nSentiment Classifier (HCSC), which contains two modules: (1) a fast word\nencoder that returns word vectors embedded with short and long range dependency\nfeatures; and (2) Cold-Start Aware Attention (CSAA), an attention mechanism\nthat considers the existence of the cold-start problem when attentively pooling the\nencoded word vectors. HCSC introduces shared vectors that are constructed from\nsimilar users/products, and are used when the original distinct vectors do not\nhave sufficient information (i.e. cold-start). This is decided by a\nfrequency-guided selective gate vector.
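The frequency-guided gating between a user's (or product's) own vector and a shared vector built from similar users, as described just above for HCSC, can be sketched as a simple blend. The logistic gate on the review count below is a stand-in for the paper's learned selective gate; the scale parameter and all names are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cold_start_vector(distinct_vec, shared_vec, review_count, scale=10.0):
    """Blend a distinct vector with a vector shared by similar users/products.

    With few reviews the gate is close to 0 and the shared vector dominates;
    with many reviews the distinct vector takes over."""
    gate = sigmoid((review_count - scale) / scale)
    return gate * distinct_vec + (1.0 - gate) * shared_vec

user_own = np.random.randn(32)      # vector learned for this (possibly cold-start) user
user_shared = np.random.randn(32)   # vector constructed from similar users
print(cold_start_vector(user_own, user_shared, review_count=2)[:3])
```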
Our experiments show that in terms of\nRMSE, HCSC performs significantly better than competing models on famous datasets,\ndespite having less complexity, and thus can be trained much faster. More\nimportantly, our model performs significantly better than previous models when\nthe training data is sparse and has cold-start problems.", "field": [], "task": ["Sentiment Analysis"], "method": [], "dataset": ["User and product information"], "metric": ["Yelp 2013 (Acc)", "IMDB (Acc)"], "title": "Cold-Start Aware User and Product Attention for Sentiment Classification"} {"abstract": "OpenPose: Real-time multi-person keypoint detection library for body, face, hands, and foot estimation", "field": [], "task": ["Keypoint Detection", "Pose Estimation"], "method": [], "dataset": ["MPII Single Person"], "metric": ["PCKh@0.1", "PCKh@0.5"], "title": "OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields"} {"abstract": "Models based on deep convolutional networks have dominated recent image\ninterpretation tasks; we investigate whether models which are also recurrent,\nor \"temporally deep\", are effective for tasks involving sequences, visual and\notherwise. We develop a novel recurrent convolutional architecture suitable for\nlarge-scale visual learning which is end-to-end trainable, and demonstrate the\nvalue of these models on benchmark video recognition tasks, image description\nand retrieval problems, and video narration challenges. In contrast to current\nmodels which assume a fixed spatio-temporal receptive field or simple temporal\naveraging for sequential processing, recurrent convolutional models are \"doubly\ndeep\" in that they can be compositional in spatial and temporal \"layers\". Such\nmodels may have advantages when target concepts are complex and/or training\ndata are limited. Learning long-term dependencies is possible when\nnonlinearities are incorporated into the network state updates. Long-term RNN\nmodels are appealing in that they can directly map variable-length inputs\n(e.g., video frames) to variable length outputs (e.g., natural language text)\nand can model complex temporal dynamics; yet they can be optimized with\nbackpropagation. Our recurrent long-term models are directly connected to\nmodern visual convnet models and can be jointly trained to simultaneously learn\ntemporal dynamics and convolutional perceptual representations. Our results\nshow such models have distinct advantages over state-of-the-art models for\nrecognition or generation which are separately defined and/or optimized.", "field": [], "task": ["Video Recognition"], "method": [], "dataset": ["UT", "BIT"], "metric": ["Accuracy"], "title": "Long-term Recurrent Convolutional Networks for Visual Recognition and Description"} {"abstract": "In this paper, we demonstrate a novel algorithm that uses ellipse fitting to estimate the bounding box rotation angle and size with the segmentation (mask) on the target for online and real-time visual object tracking. Our method, SiamMask_E, improves the bounding box fitting procedure of the state-of-the-art object tracking algorithm SiamMask and still retains a fast tracking frame rate (80 fps) on a system equipped with GPU (GeForce GTX 1080 Ti or higher). We tested our approach on the visual object tracking datasets (VOT2016, VOT2018, and VOT2019) that were labeled with rotated bounding boxes.
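The core of the ellipse-fitting step described just above (deriving a rotated bounding box from a predicted segmentation mask) can be sketched with OpenCV. This is only an illustration of the basic fit; the refinements SiamMask_E applies on top of it are omitted, and the function name and thresholds are assumptions.

```python
import cv2
import numpy as np

def rotated_box_from_mask(mask):
    """Fit an ellipse to the largest contour of a binary mask and return a
    rotated box (center, size, angle) derived from it; None if the mask is
    too small for a stable fit."""
    # OpenCV >= 4 returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    contour = max(contours, key=cv2.contourArea)
    if len(contour) < 5:                        # fitEllipse needs at least 5 points
        return None
    (cx, cy), (w, h), angle = cv2.fitEllipse(contour)
    return (cx, cy), (w, h), angle              # usable as a rotated bounding box

# toy usage: a filled rotated rectangle standing in for a segmentation mask
mask = np.zeros((200, 200), np.uint8)
box = cv2.boxPoints(((100, 100), (80, 40), 30.0)).astype(np.int32)
cv2.fillPoly(mask, [box], 1)
print(rotated_box_from_mask(mask))
```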
By comparing with the original SiamMask, we achieved an improved Accuracy of 0.652 and 0.309 EAO on VOT2019, which is 0.056 and 0.026 higher than the original SiamMask. The implementation is available on GitHub: https://github.com/baoxinchen/siammask_e.", "field": [], "task": ["Object Tracking", "Visual Object Tracking"], "method": [], "dataset": ["VOT2016", "VOT2017/18", "VOT2019"], "metric": ["Expected Average Overlap (EAO)"], "title": "Fast Visual Object Tracking with Rotated Bounding Boxes"} {"abstract": "Novel neural models have been proposed in recent years for learning under\ndomain shift. Most models, however, only evaluate on a single task, on\nproprietary datasets, or compare to weak baselines, which makes comparison of\nmodels difficult. In this paper, we re-evaluate classic general-purpose\nbootstrapping approaches in the context of neural networks under domain shifts\nvs. recent neural approaches and propose a novel multi-task tri-training method\nthat reduces the time and space complexity of classic tri-training. Extensive\nexperiments on two benchmarks are negative: while our novel method establishes\na new state-of-the-art for sentiment analysis, it does not fare consistently\nthe best. More importantly, we arrive at the somewhat surprising conclusion\nthat classic tri-training, with some additions, outperforms the state of the\nart. We conclude that classic approaches constitute an important and strong\nbaseline.", "field": [], "task": ["Sentiment Analysis"], "method": [], "dataset": ["Multi-Domain Sentiment Dataset"], "metric": ["DVD", "Average", "Kitchen", "Electronics", "Books"], "title": "Strong Baselines for Neural Semi-supervised Learning under Domain Shift"} {"abstract": "Traffic sign recognition is a very important computer vision task for a\nnumber of real-world applications such as intelligent transportation\nsurveillance and analysis. While deep neural networks have been demonstrated in\nrecent years to provide state-of-the-art performance traffic sign recognition,\na key challenge for enabling the widespread deployment of deep neural networks\nfor embedded traffic sign recognition is the high computational and memory\nrequirements of such networks. As a consequence, there are significant benefits\nin investigating compact deep neural network architectures for traffic sign\nrecognition that are better suited for embedded devices. In this paper, we\nintroduce MicronNet, a highly compact deep convolutional neural network for\nreal-time embedded traffic sign recognition designed based on macroarchitecture\ndesign principles (e.g., spectral macroarchitecture augmentation, parameter\nprecision optimization, etc.) as well as numerical microarchitecture\noptimization strategies. The resulting overall architecture of MicronNet is\nthus designed with as few parameters and computations as possible while\nmaintaining recognition performance, leading to optimized information density\nof the proposed network. The resulting MicronNet possesses a model size of just\n~1MB and ~510,000 parameters (~27x fewer parameters than state-of-the-art)\nwhile still achieving a human performance level top-1 accuracy of 98.9% on the\nGerman traffic sign recognition benchmark. 
Furthermore, MicronNet requires just\n~10 million multiply-accumulate operations to perform inference, and has a\ntime-to-compute of just 32.19 ms on a Cortex-A53 high efficiency processor.\nThese experimental results show that highly compact, optimized deep neural\nnetwork architectures can be designed for real-time traffic sign recognition\nthat are well-suited for embedded scenarios.", "field": [], "task": ["Traffic Sign Recognition"], "method": [], "dataset": ["GTSRB"], "metric": ["Accuracy"], "title": "MicronNet: A Highly Compact Deep Convolutional Neural Network Architecture for Real-time Embedded Traffic Sign Classification"} {"abstract": "We present an approach to semantic scene analysis using deep convolutional\nnetworks. Our approach is based on tangent convolutions - a new construction\nfor convolutional networks on 3D data. In contrast to volumetric approaches,\nour method operates directly on surface geometry. Crucially, the construction\nis applicable to unstructured point clouds and other noisy real-world data. We\nshow that tangent convolutions can be evaluated efficiently on large-scale\npoint clouds with millions of points. Using tangent convolutions, we design a\ndeep fully-convolutional network for semantic segmentation of 3D point clouds,\nand apply it to challenging real-world datasets of indoor and outdoor 3D\nenvironments. Experimental results show that the presented approach outperforms\nother recent deep network constructions in detailed analysis of large 3D\nscenes.", "field": [], "task": ["3D Semantic Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["S3DIS Area5", "SemanticKITTI", "ScanNet"], "metric": ["3DIoU", "mAcc", "mIoU"], "title": "Tangent Convolutions for Dense Prediction in 3D"} {"abstract": "The recently proposed audio-visual scene-aware dialog task paves the way to a more data-driven way of learning virtual assistants, smart speakers and car navigation systems. However, very little is known to date about how to effectively extract meaningful information from a plethora of sensors that pound the computational engine of those devices. Therefore, in this paper, we provide and carefully analyze a simple baseline for audio-visual scene-aware dialog which is trained end-to-end. Our method differentiates in a data-driven manner useful signals from distracting ones using an attention mechanism. We evaluate the proposed approach on the recently introduced and challenging audio-visual scene-aware dataset, and demonstrate the key features that permit to outperform the current state-of-the-art by more than 20% on CIDEr. \r", "field": [], "task": ["Scene-Aware Dialogue"], "method": [], "dataset": ["AVSD"], "metric": ["CIDEr"], "title": "A Simple Baseline for Audio-Visual Scene-Aware Dialog"} {"abstract": "We propose DeepGRU, a novel end-to-end deep network model informed by recent developments in deep learning for gesture and action recognition, that is streamlined and device-agnostic. DeepGRU, which uses only raw skeleton, pose or vector data is quickly understood, implemented, and trained, and yet achieves state-of-the-art results on challenging datasets. At the heart of our method lies a set of stacked gated recurrent units (GRU), two fully-connected layers and a novel global attention model. We evaluate our method on seven publicly available datasets, containing various number of samples and spanning over a broad range of interactions (full-body, multi-actor, hand gestures, etc.). 
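The DeepGRU recipe summarized above (stacked GRUs over raw pose/skeleton vectors, a global attention model, then fully-connected layers) can be sketched in PyTorch as follows. The attention used here is a simple softmax over time steps, which is a simplification of the paper's global attention model; layer sizes, class counts, and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GestureGRU(nn.Module):
    """Stacked GRUs over raw pose/skeleton vectors, simple global attention
    over time, then two fully-connected layers for classification."""
    def __init__(self, in_dim, hidden=128, n_classes=10):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, num_layers=2, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.fc = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                nn.Linear(hidden, n_classes))

    def forward(self, x):                                    # x: (batch, time, in_dim)
        states, _ = self.gru(x)                              # (batch, time, hidden)
        weights = torch.softmax(self.attn(states), dim=1)    # attention over time steps
        context = (weights * states).sum(dim=1)              # (batch, hidden)
        return self.fc(context)

model = GestureGRU(in_dim=63)                  # e.g. 21 joints x 3 coordinates
print(model(torch.randn(8, 40, 63)).shape)     # torch.Size([8, 10])
```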
In all but one case we outperform the state-of-the-art pose-based methods. For instance, we achieve a recognition accuracy of 84.9% and 92.3% on cross-subject and cross-view tests of the NTU RGB+D dataset respectively, and also 100% recognition accuracy on the UT-Kinect dataset. While DeepGRU works well on large datasets with many training samples, we show that even in the absence of a large number of training data, and with as little as four samples per class, DeepGRU can beat traditional methods specifically designed for small training sets. Lastly, we demonstrate that even without powerful hardware, and using only the CPU, our method can still be trained in under 10 minutes on small-scale datasets, making it an enticing choice for rapid application prototyping and development.", "field": [], "task": ["Action Recognition", "Gesture Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["SBU"], "metric": ["Accuracy"], "title": "DeepGRU: Deep Gesture Recognition Utility"} {"abstract": "Grammatical error correction (GEC) systems deployed in language learning environments are expected to accurately correct errors in learners{'} writing. However, in practice, they often produce spurious corrections and fail to correct many errors, thereby misleading learners. This necessitates the estimation of the quality of output sentences produced by GEC systems so that instructors can selectively intervene and re-correct the sentences which are poorly corrected by the system and ensure that learners get accurate feedback. We propose the first neural approach to automatic quality estimation of GEC output sentences that does not employ any hand-crafted features. Our system is trained in a supervised manner on learner sentences and corresponding GEC system outputs with quality score labels computed using human-annotated references. Our neural quality estimation models for GEC show significant improvements over a strong feature-based baseline. We also show that a state-of-the-art GEC system can be improved when quality scores are used as features for re-ranking the N-best candidates.", "field": [], "task": ["Grammatical Error Correction", "Machine Translation"], "method": [], "dataset": ["Restricted", "CoNLL-2014 Shared Task"], "metric": ["F0.5"], "title": "Neural Quality Estimation of Grammatical Error Correction"} {"abstract": "We\u2019re releasing highly optimized GPU kernels for an underexplored class of neural network architectures: networks with block-sparse weights. The kernels allow for efficient evaluation and differentiation of linear layers, including convolutional layers, with flexibly configurable block-sparsity patterns in the weight matrix. We find that depending on the sparsity, these kernels can run orders of magnitude faster than the best available alternatives such as cuBLAS. Using the kernels we improve upon the state-of-the-art in text sentiment analysis and generative modeling of text and images. By releasing our kernels in the open we aim to spur further\r\nadvancement in model and algorithm design.", "field": [], "task": ["Sentiment Analysis"], "method": [], "dataset": ["IMDb", "SST-2 Binary classification", "CR", "Yelp Binary classification"], "metric": ["Error", "Accuracy"], "title": "GPU Kernels for Block-Sparse Weights"} {"abstract": "This paper addresses the problem of 3D human pose and shape estimation from a single image. 
Previous approaches consider a parametric model of the human body, SMPL, and attempt to regress the model parameters that give rise to a mesh consistent with image evidence. This parameter regression has been a very challenging task, with model-based approaches underperforming compared to nonparametric solutions in terms of pose estimation. In our work, we propose to relax this heavy reliance on the model's parameter space. We still retain the topology of the SMPL template mesh, but instead of predicting model parameters, we directly regress the 3D location of the mesh vertices. This is a heavy task for a typical network, but our key insight is that the regression becomes significantly easier using a Graph-CNN. This architecture allows us to explicitly encode the template mesh structure within the network and leverage the spatial locality the mesh has to offer. Image-based features are attached to the mesh vertices and the Graph-CNN is responsible to process them on the mesh structure, while the regression target for each vertex is its 3D location. Having recovered the complete 3D geometry of the mesh, if we still require a specific model parametrization, this can be reliably regressed from the vertices locations. We demonstrate the flexibility and the effectiveness of our proposed graph-based mesh regression by attaching different types of features on the mesh vertices. In all cases, we outperform the comparable baselines relying on model parameter regression, while we also achieve state-of-the-art results among model-based pose estimation approaches.", "field": [], "task": ["Pose Estimation", "Regression"], "method": [], "dataset": ["Human3.6M"], "metric": ["Average MPJPE (mm)"], "title": "Convolutional Mesh Regression for Single-Image Human Shape Reconstruction"} {"abstract": "Incompleteness is a common problem for existing knowledge graphs (KGs), and the completion of KG which aims to predict links between entities is challenging. Most existing KG completion methods only consider the direct relation between nodes and ignore the relation paths which contain useful information for link prediction. Recently, a few methods take relation paths into consideration but pay less attention to the order of relations in paths which is important for reasoning. In addition, these path-based models always ignore nonlinear contributions of path features for link prediction. To solve these problems, we propose a novel KG completion method named OPTransE. Instead of embedding both entities of a relation into the same latent space as in previous methods, we project the head entity and the tail entity of each relation into different spaces to guarantee the order of relations in the path. Meanwhile, we adopt a pooling strategy to extract nonlinear and complex features of different paths to further improve the performance of link prediction. Experimental results on two benchmark datasets show that the proposed model OPTransE performs better than state-of-the-art methods.", "field": [], "task": ["Knowledge Graph Completion", "Knowledge Graphs", "Link Prediction", "Representation Learning"], "method": [], "dataset": [" FB15k", "WN18"], "metric": ["Hits@10", "MR"], "title": "Representation Learning with Ordered Relation Paths for Knowledge Graph Completion"} {"abstract": "Semantic keypoints provide concise abstractions for a variety of visual\nunderstanding tasks. Existing methods define semantic keypoints separately for\neach category with a fixed number of semantic labels in fixed indices. 
As a\nresult, this keypoint representation is in-feasible when objects have a varying\nnumber of parts, e.g. chairs with varying number of legs. We propose a\ncategory-agnostic keypoint representation, which combines a multi-peak heatmap\n(StarMap) for all the keypoints and their corresponding features as 3D\nlocations in the canonical viewpoint (CanViewFeature) defined for each\ninstance. Our intuition is that the 3D locations of the keypoints in canonical\nobject views contain rich semantic and compositional information. Using our\nflexible representation, we demonstrate competitive performance in keypoint\ndetection and localization compared to category-specific state-of-the-art\nmethods. Moreover, we show that when augmented with an additional depth channel\n(DepthMap) to lift the 2D keypoints to 3D, our representation can achieve\nstate-of-the-art results in viewpoint estimation. Finally, we show that our\ncategory-agnostic keypoint representation can be generalized to novel\ncategories.", "field": [], "task": ["Keypoint Detection", "Viewpoint Estimation"], "method": [], "dataset": [" Pascal3D+"], "metric": ["Mean PCK"], "title": "StarMap for Category-Agnostic Keypoint and Viewpoint Estimation"} {"abstract": "Language Identification (LID) systems are used to classify the spoken\nlanguage from a given audio sample and are typically the first step for many\nspoken language processing tasks, such as Automatic Speech Recognition (ASR)\nsystems. Without automatic language detection, speech utterances cannot be\nparsed correctly and grammar rules cannot be applied, causing subsequent speech\nrecognition steps to fail. We propose a LID system that solves the problem in\nthe image domain, rather than the audio domain. We use a hybrid Convolutional\nRecurrent Neural Network (CRNN) that operates on spectrogram images of the\nprovided audio snippets. In extensive experiments we show, that our model is\napplicable to a range of noisy scenarios and can easily be extended to\npreviously unknown languages, while maintaining its classification accuracy. We\nrelease our code and a large scale training set for LID systems to the\ncommunity.", "field": [], "task": ["Language Identification", "Speech Recognition", "Spoken language identification"], "method": [], "dataset": ["YouTube News dataset (White Noise)", "YouTube News dataset (Crackling Noise)", "YouTube News dataset (No Noise)", "YouTube News dataset (Background Music)"], "metric": ["F1 Score", "Accuracy "], "title": "Language Identification Using Deep Convolutional Recurrent Neural Networks"} {"abstract": "Generative deep learning has sparked a new wave of Super-Resolution (SR) algorithms that enhance single images with impressive aesthetic results, albeit with imaginary details. Multi-frame Super-Resolution (MFSR) offers a more grounded approach to the ill-posed problem, by conditioning on multiple low-resolution views. This is important for satellite monitoring of human impact on the planet -- from deforestation, to human rights violations -- that depend on reliable imagery. To this end, we present HighRes-net, the first deep learning approach to MFSR that learns its sub-tasks in an end-to-end fashion: (i) co-registration, (ii) fusion, (iii) up-sampling, and (iv) registration-at-the-loss. Co-registration of low-resolution views is learned implicitly through a reference-frame channel, with no explicit registration mechanism. We learn a global fusion operator that is applied recursively on an arbitrary number of low-resolution pairs. 
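The "global fusion operator applied recursively on an arbitrary number of low-resolution pairs" mentioned just above for HighRes-net can be illustrated with a short recursion: pair up view encodings, fuse each pair with a shared operator, and repeat until one encoding remains. The averaging fuse below is only a stand-in for the learned CNN fusion block, and the co-registration and ShiftNet components are not shown; names are illustrative.

```python
import numpy as np

def fuse_pair(a, b):
    """Stand-in for the shared, learned fusion block: here a plain average."""
    return 0.5 * (a + b)

def recursive_fusion(views):
    """Recursively fuse an arbitrary number of equally-shaped view encodings.

    If the count is odd, the last view is paired with itself, so the same
    code handles any number of inputs."""
    views = list(views)
    while len(views) > 1:
        if len(views) % 2 == 1:
            views.append(views[-1])
        views = [fuse_pair(views[i], views[i + 1]) for i in range(0, len(views), 2)]
    return views[0]

# toy usage: 9 low-resolution "encodings" of shape (16, 16)
encodings = [np.random.rand(16, 16) for _ in range(9)]
print(recursive_fusion(encodings).shape)   # (16, 16)
```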
We introduce a registered loss, by learning to align the SR output to a ground-truth through ShiftNet. We show that by learning deep representations of multiple views, we can super-resolve low-resolution signals and enhance Earth Observation data at scale. Our approach recently topped the European Space Agency's MFSR competition on real-world satellite imagery.", "field": [], "task": ["De-aliasing", "Image Registration", "Multi-Frame Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["PROBA-V"], "metric": ["Normalized cPSNR"], "title": "HighRes-net: Recursive Fusion for Multi-Frame Super-Resolution of Satellite Imagery"} {"abstract": "In light of the recent breakthroughs in automatic machine translation systems, we propose a novel approach that we term as \"Face-to-Face Translation\". As today's digital communication becomes increasingly visual, we argue that there is a need for systems that can automatically translate a video of a person speaking in language A into a target language B with realistic lip synchronization. In this work, we create an automatic pipeline for this problem and demonstrate its impact on multiple real-world applications. First, we build a working speech-to-speech translation system by bringing together multiple existing modules from speech and language. We then move towards \"Face-to-Face Translation\" by incorporating a novel visual module, LipGAN for generating realistic talking faces from the translated audio. Quantitative evaluation of LipGAN on the standard LRW test set shows that it significantly outperforms existing approaches across all standard metrics. We also subject our Face-to-Face Translation pipeline, to multiple human evaluations and show that it can significantly improve the overall user experience for consuming and interacting with multimodal content across languages. Code, models and demo video are made publicly available. Demo video: https://www.youtube.com/watch?v=aHG6Oei8jF0 Code and models: https://github.com/Rudrabha/LipGAN", "field": [], "task": ["Face to Face Translation", "Machine Translation", "Unconstrained Lip-synchronization"], "method": [], "dataset": ["LRW"], "metric": ["SSIM", "LMD"], "title": "Towards Automatic Face-to-Face Translation"} {"abstract": "Image-text matching has received growing interest since it bridges vision and language. The key challenge lies in how to learn correspondence between image and text. Existing works learn coarse correspondence based on object co-occurrence statistics, while failing to learn fine-grained phrase correspondence. In this paper, we present a novel Graph Structured Matching Network (GSMN) to learn fine-grained correspondence. The GSMN explicitly models object, relation and attribute as a structured phrase, which not only allows to learn correspondence of object, relation and attribute separately, but also benefits to learn fine-grained correspondence of structured phrase. This is achieved by node-level matching and structure-level matching. The node-level matching associates each node with its relevant nodes from another modality, where the node can be object, relation or attribute. The associated nodes then jointly infer fine-grained correspondence by fusing neighborhood associations at structure-level matching. Comprehensive experiments show that GSMN outperforms state-of-the-art methods on benchmarks, with relative Recall@1 improvements of nearly 7% and 2% on Flickr30K and MSCOCO, respectively. 
Code will be released at: https://github.com/CrossmodalGroup/GSMN.", "field": [], "task": ["Cross-Modal Retrieval", "Text Matching"], "method": [], "dataset": ["Flickr30k"], "metric": ["Image-to-text R@5", "Image-to-text R@1", "Image-to-text R@10", "Text-to-image R@10", "Text-to-image R@1", "Text-to-image R@5"], "title": "Graph Structured Network for Image-Text Matching"} {"abstract": "Surface Electromyography (sEMG/EMG) records muscles' electrical activity from a restricted area of the skin by using electrodes. sEMG-based gesture recognition is extremely sensitive to inter-session and inter-subject variances. We propose a model and a deep-learning-based domain adaptation method to approximate the domain shift for recognition accuracy enhancement. Analysis performed on sparse and HighDensity (HD) sEMG public datasets validates that our approach outperforms state-of-the-art methods.", "field": [], "task": ["Domain Adaptation", "Gesture Recognition"], "method": [], "dataset": ["CapgMyo DB-b", "Ninapro DB-1 8 gestures", "Ninapro DB-1 12 gestures", "CapgMyo DB-c", "CapgMyo DB-a"], "metric": ["Accuracy"], "title": "Domain Adaptation for sEMG-based Gesture Recognition with Recurrent Neural Networks"} {"abstract": "In recent years, knowledge graph embedding has become a hot research topic of artificial intelligence and plays increasingly vital roles in various downstream applications, such as recommendation and question answering. However, existing methods for knowledge graph embedding cannot make a proper trade-off between the model complexity and the model expressiveness, which makes them still far from satisfactory. To mitigate this problem, we propose a lightweight modeling framework that can achieve highly competitive relational expressiveness without increasing the model complexity. Our framework focuses on the design of scoring functions and highlights two critical characteristics: 1) facilitating sufficient feature interactions; 2) preserving both symmetry and antisymmetry properties of relations. It is noteworthy that owing to the general and elegant design of scoring functions, our framework can incorporate many famous existing methods as special cases. Moreover, extensive experiments on public benchmarks demonstrate the efficiency and effectiveness of our framework. Source codes and data can be found at \\url{https://github.com/Wentao-Xu/SEEK}.", "field": [], "task": ["Graph Embedding", "Knowledge Graph Embedding", "Knowledge Graphs", "Link Prediction", "Question Answering"], "method": [], "dataset": [" FB15k", "YAGO37"], "metric": ["Hits@10", "MRR", "Hits@3", "Hits@1"], "title": "SEEK: Segmented Embedding of Knowledge Graphs"} {"abstract": "Delicate feature representation about object parts plays a critical role in fine-grained recognition. For example, experts can even distinguish fine-grained objects relying only on object parts according to professional knowledge. In this paper, we propose a novel \"Destruction and Construction Learning\" (DCL) method to enhance the difficulty of fine-grained recognition and exercise the classification model to acquire expert knowledge. Besides the standard classification backbone network, another \"destruction and construction\" stream is introduced to carefully \"destruct\" and then \"reconstruct\" the input image, for learning discriminative regions and features. More specifically, for \"destruction\", we first partition the input image into local regions and then shuffle them by a Region Confusion Mechanism (RCM).
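The "partition into local regions and shuffle" step of the Region Confusion Mechanism just described can be sketched in a few lines of numpy. The global shuffle below is a simplification: the paper constrains how far each patch may move and pairs the destruction with adversarial and region-alignment losses, none of which is shown; grid size and names are illustrative assumptions.

```python
import numpy as np

def region_confusion(image, n=7, rng=None):
    """Split an image (H, W, C) into an n x n grid of patches and return a
    version with the patches randomly permuted."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[0] // n, image.shape[1] // n
    patches = [image[i*h:(i+1)*h, j*w:(j+1)*w] for i in range(n) for j in range(n)]
    order = rng.permutation(len(patches))
    rows = [np.concatenate([patches[order[i*n + j]] for j in range(n)], axis=1)
            for i in range(n)]
    return np.concatenate(rows, axis=0)

img = np.random.rand(224, 224, 3)
print(region_confusion(img).shape)   # (224, 224, 3)
```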
To correctly recognize these destructed images, the classification network has to pay more attention to discriminative regions for spotting the differences. To compensate the noises introduced by RCM, an adversarial loss, which distinguishes original images from destructed ones, is applied to reject noisy patterns introduced by RCM. For \"construction\", a region alignment network, which tries to restore the original spatial layout of local regions, is followed to model the semantic correlation among local regions. By jointly training with parameter sharing, our proposed DCL injects more discriminative local details to the classification network. Experimental results show that our proposed framework achieves state-of-the-art performance on three standard benchmarks. Moreover, our proposed method does not need any external knowledge during training, and there is no computation overhead at inference time except the standard classification network feed-forwarding. Source code: https://github.com/JDAI-CV/DCL.\r", "field": [], "task": ["Fine-Grained Image Classification", "Fine-Grained Image Recognition"], "method": [], "dataset": ["Stanford Cars", "FGVC Aircraft"], "metric": ["Accuracy"], "title": "Destruction and Construction Learning for Fine-Grained Image Recognition"} {"abstract": "Although various image-based domain adaptation (DA) techniques have been proposed in recent years, domain shift in videos is still not well-explored. Most previous works only evaluate performance on small-scale datasets which are saturated. Therefore, we first propose a larger-scale dataset with larger domain discrepancy: UCF-HMDB_full. Second, we investigate different DA integration methods for videos, and show that simultaneously aligning and learning temporal dynamics achieves effective alignment even without sophisticated DA methods. Finally, we propose Temporal Attentive Adversarial Adaptation Network (TA3N), which explicitly attends to the temporal dynamics using domain discrepancy for more effective domain alignment, achieving state-of-the-art performance on three video DA datasets. The code and data are released at http://github.com/cmhungsteve/TA3N.", "field": [], "task": ["Domain Adaptation"], "method": [], "dataset": ["UCF-to-HMDBfull", "Olympic-to-HMDBsmall", "HMDBsmall-to-UCF", "HMDBfull-to-UCF", "UCF-to-Olympic", "UCF-to-HMDBsmall"], "metric": ["Accuracy"], "title": "Temporal Attentive Alignment for Video Domain Adaptation"} {"abstract": "Modeling complex spatial and temporal correlations in the correlated time series data is indispensable for understanding the traffic dynamics and predicting the future status of an evolving traffic system. Recent works focus on designing complicated graph neural network architectures to capture shared patterns with the help of pre-defined graphs. In this paper, we argue that learning node-specific patterns is essential for traffic forecasting while the pre-defined graph is avoidable. To this end, we propose two adaptive modules for enhancing Graph Convolutional Network (GCN) with new capabilities: 1) a Node Adaptive Parameter Learning (NAPL) module to capture node-specific patterns; 2) a Data Adaptive Graph Generation (DAGG) module to infer the inter-dependencies among different traffic series automatically. We further propose an Adaptive Graph Convolutional Recurrent Network (AGCRN) to capture fine-grained spatial and temporal correlations in traffic series automatically based on the two modules and recurrent networks. 
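The Data Adaptive Graph Generation module described just above for AGCRN infers inter-series dependencies from learnable node embeddings rather than a pre-defined graph. A minimal numpy sketch of that idea is given below; the Node Adaptive Parameter Learning module and the recurrent network around it are not shown, and the embedding dimension and names are illustrative assumptions.

```python
import numpy as np

def adaptive_adjacency(node_embeddings):
    """Infer a normalized adjacency from learnable node embeddings E as
    softmax(ReLU(E @ E.T)), so each row sums to one."""
    scores = np.maximum(node_embeddings @ node_embeddings.T, 0.0)   # ReLU
    scores = scores - scores.max(axis=1, keepdims=True)             # numerically stable softmax
    exp = np.exp(scores)
    return exp / exp.sum(axis=1, keepdims=True)

def graph_conv(x, adj, weight):
    """A single graph convolution over per-node features x: (N, F)."""
    return np.maximum(adj @ x @ weight, 0.0)

N, F, D = 307, 8, 4                        # e.g. 307 traffic sensors as in PeMS04
E = 0.1 * np.random.randn(N, D)            # learnable embeddings (randomly initialized here)
A = adaptive_adjacency(E)
print(graph_conv(np.random.randn(N, F), A, np.random.randn(F, 16)).shape)   # (307, 16)
```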
Our experiments on two real-world traffic datasets show AGCRN outperforms state-of-the-art by a significant margin without pre-defined graphs about spatial connections.", "field": [], "task": ["Graph Generation", "Multivariate Time Series Forecasting", "Spatio-Temporal Forecasting", "Time Series", "Time Series Forecasting", "Time Series Prediction", "Traffic Prediction"], "method": [], "dataset": ["PeMS04"], "metric": ["12 Steps MAE"], "title": "Adaptive Graph Convolutional Recurrent Network for Traffic Forecasting"} {"abstract": "Current deep visual recognition systems suffer from severe performance degradation when they encounter new images from classes and scenarios unseen during training. Hence, the core challenge of Zero-Shot Learning (ZSL) is to cope with the semantic-shift whereas the main challenge of Domain Adaptation and Domain Generalization (DG) is the domain-shift. While historically ZSL and DG tasks are tackled in isolation, this work develops with the ambitious goal of solving them jointly, i.e., by recognizing unseen visual concepts in unseen domains. We present CuMix (Curriculum Mixup for recognizing unseen categories in unseen domains), a holistic algorithm to tackle ZSL, DG and ZSL+DG. The key idea of CuMix is to simulate the test-time domain and semantic shift using images and features from unseen domains and categories generated by mixing up the multiple source domains and categories available during training. Moreover, a curriculum-based mixing policy is devised to generate increasingly complex training samples. Results on standard ZSL and DG datasets and on ZSL+DG using the DomainNet benchmark demonstrate the effectiveness of our approach.", "field": [], "task": ["Domain Generalization", "Zero-Shot Learning", "Zero-Shot Learning + Domain Generalization"], "method": [], "dataset": ["PACS"], "metric": ["Average Accuracy"], "title": "Towards Recognizing Unseen Categories in Unseen Domains"} {"abstract": "Efficient representation of text documents is an important building block in many NLP tasks. Research on long text categorization has shown that simple weighted averaging of word vectors for sentence representation often outperforms more sophisticated neural models. Recently proposed Sparse Composite Document Vector (SCDV) (Mekala et al., 2017) extends this approach from sentences to documents using soft clustering over word vectors. However, SCDV disregards the multi-sense nature of words, and it also suffers from the curse of higher dimensionality. In this work, we address these shortcomings and propose SCDV-MS. SCDV-MS utilizes multi-sense word embeddings and learns a lower dimensional manifold. Through extensive experiments on multiple real-world datasets, we show that SCDV-MS embeddings outperform previous state-of-the-art embeddings on multi-class and multi-label text categorization tasks. Furthermore, SCDV-MS embeddings are more efficient than SCDV in terms of time and space complexity on textual classification tasks.", "field": [], "task": ["Document Classification", "Text Categorization", "Word Embeddings"], "method": [], "dataset": ["Reuters-21578", "20NEWS"], "metric": ["Recall", "Precision", "F-measure", "F1", "Accuracy"], "title": "Improving Document Classification with Multi-Sense Embeddings"} {"abstract": "Data augmentation is one of the most effective approaches for improving the accuracy of modern machine learning models, and it is also indispensable to train a deep model for meta-learning.
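The mixing operation that the CuMix recipe summarized above builds on can be sketched as basic mixup across samples drawn from different source domains. The curriculum schedule and the feature-level mixing used in the paper are not reproduced; the Beta parameter and all names are illustrative assumptions.

```python
import numpy as np

def domain_mixup(x_a, y_a, x_b, y_b, alpha=1.0, rng=None):
    """Mix two batches drawn from different source domains.

    x_*: (B, ...) images or features, y_*: (B, C) one-hot labels.
    Returns mixed inputs and soft labels with lam ~ Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x_a + (1 - lam) * x_b, lam * y_a + (1 - lam) * y_b

# toy usage: 4 images per domain, 10 classes
xa, xb = np.random.rand(4, 3, 32, 32), np.random.rand(4, 3, 32, 32)
ya, yb = np.eye(10)[[0, 3, 5, 7]], np.eye(10)[[1, 1, 2, 9]]
xm, ym = domain_mixup(xa, ya, xb, yb)
print(xm.shape, ym.shape)   # (4, 3, 32, 32) (4, 10)
```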
In this paper, we introduce a task augmentation method by rotating, which increases the number of classes by rotating the original images 90, 180 and 270 degrees, different from traditional augmentation methods which increase the number of images. With a larger amount of classes, we can sample more diverse task instances during training. Therefore, task augmentation by rotating allows us to train a deep network by meta-learning methods with little over-fitting. Experimental results show that our approach is better than the rotation for increasing the number of images and achieves state-of-the-art performance on miniImageNet, CIFAR-FS, and FC100 few-shot learning benchmarks. The code is available on \\url{www.github.com/AceChuse/TaskLevelAug}.", "field": [], "task": ["Data Augmentation", "Few-Shot Learning", "Meta-Learning"], "method": [], "dataset": ["FC100 5-way (1-shot)", "CIFAR-FS 5-way (5-shot)", "Mini-Imagenet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "CIFAR-FS 5-way (1-shot)", "Mini-ImageNet - 1-Shot Learning", "FC100 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Task Augmentation by Rotating for Meta-Learning"} {"abstract": "One of the key factors of enabling machine learning models to comprehend and solve real-world tasks is to leverage multimodal data. Unfortunately, annotation of multimodal data is challenging and expensive. Recently, self-supervised multimodal methods that combine vision and language were proposed to learn multimodal representations without annotation. However, these methods often choose to ignore the presence of high levels of noise and thus yield sub-optimal results. In this work, we show that the problem of noise estimation for multimodal data can be reduced to a multimodal density estimation task. Using multimodal density estimation, we propose a noise estimation building block for multimodal representation learning that is based strictly on the inherent correlation between different modalities. We demonstrate how our noise estimation can be broadly integrated and achieves comparable results to state-of-the-art performance on five different benchmark datasets for two challenging multimodal tasks: Video Question Answering and Text-To-Video Retrieval. Furthermore, we provide a theoretical probabilistic error bound substantiating our empirical results and analyze failure cases. Code: https://github.com/elad-amrani/ssml.", "field": [], "task": ["Density Estimation", "noise estimation", "Question Answering", "Representation Learning", "Video Question Answering", "Video Retrieval", "Visual Question Answering"], "method": [], "dataset": ["MSVD", "MSRVTT-QA", "MSVD-QA"], "metric": ["text-to-video Median Rank", "text-to-video R@5", "text-to-video R@50", "text-to-video R@1", "text-to-video Mean Rank", "Accuracy", "text-to-video R@10"], "title": "Noise Estimation Using Density Estimation for Self-Supervised Multimodal Learning"} {"abstract": "Few-shot classification is a challenging problem due to the uncertainty caused by using few labelled samples. In the past few years, many methods have been proposed to solve few-shot classification, among which transfer-based methods have proved to achieve the best performance. Following this vein, in this paper we propose a novel transfer-based method that builds on two successive steps: 1) preprocessing the feature vectors so that they become closer to Gaussian-like distributions, and 2) leveraging this preprocessing using an optimal-transport inspired algorithm (in the case of transductive settings). 
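The preprocessing step described just above (making backbone feature vectors closer to Gaussian-like distributions before the transductive stage) is commonly implemented as a power transform followed by normalization. The sketch below assumes non-negative (post-ReLU) features and an exponent of 0.5; the paper's exact preprocessing and the optimal-transport inspired step are not shown, and names are illustrative.

```python
import numpy as np

def gaussianize(features, beta=0.5, eps=1e-6):
    """Apply a power transform to non-negative backbone features so their
    distribution becomes closer to Gaussian, then project to unit norm."""
    f = np.power(features + eps, beta)
    return f / (np.linalg.norm(f, axis=1, keepdims=True) + eps)

feats = np.abs(np.random.randn(80, 640))    # e.g. one 5-way episode, 16 images per class
print(gaussianize(feats).shape)             # (80, 640)
```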
Using standardized vision benchmarks, we prove the ability of the proposed methodology to achieve state-of-the-art accuracy with various datasets, backbone architectures and few-shot settings.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning"], "method": [], "dataset": ["CUB 200 5-way 5-shot", "CIFAR-FS 5-way (5-shot)", "Mini-Imagenet 5-way (1-shot)", "Tiered ImageNet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "Mini-ImageNet-CUB 5-way (1-shot)", "CIFAR-FS 5-way (1-shot)", "Mini-ImageNet - 1-Shot Learning", "Mini-ImageNet-CUB 5-way (5-shot)", "CUB 200 5-way 1-shot", "Mini-Imagenet 5-way (10-shot)", "Tiered ImageNet 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Leveraging the Feature Distribution in Transfer-based Few-Shot Learning"} {"abstract": "We introduce a solution to large scale Augmented Reality for outdoor scenes by registering camera images to textured Digital Elevation Models (DEMs). To accommodate the inherent differences in appearance between real images and DEMs, we train a cross-domain feature descriptor using Structure From Motion (SFM) reconstructions to acquire training data. Our method runs efficiently on a mobile device, and outperforms existing learned and hand designed feature descriptors for this task.", "field": [], "task": ["Patch Matching", "Structure from Motion"], "method": [], "dataset": ["HPatches"], "metric": ["Patch Matching", "Patch Retrieval", "Patch Verification"], "title": "LandscapeAR: Large Scale Outdoor Augmented Reality by Matching Photographs with Terrain Models Using Learned Descriptors"} {"abstract": "Background: Given the importance of relation or event extraction from\nbiomedical research publications to support knowledge capture and synthesis,\nand the strong dependency of approaches to this information extraction task on\nsyntactic information, it is valuable to understand which approaches to\nsyntactic processing of biomedical text have the highest performance. Results:\nWe perform an empirical study comparing state-of-the-art traditional\nfeature-based and neural network-based models for two core natural language\nprocessing tasks of part-of-speech (POS) tagging and dependency parsing on two\nbenchmark biomedical corpora, GENIA and CRAFT. To the best of our knowledge,\nthere is no recent work making such comparisons in the biomedical context;\nspecifically no detailed analysis of neural models on this data is available.\nExperimental results show that in general, the neural models outperform the\nfeature-based models on two benchmark biomedical corpora GENIA and CRAFT. We\nalso perform a task-oriented evaluation to investigate the influences of these\nmodels in a downstream application on biomedical event extraction, and show\nthat better intrinsic parsing performance does not always imply better\nextrinsic event extraction performance. Conclusion: We have presented a\ndetailed empirical study comparing traditional feature-based and neural\nnetwork-based models for POS tagging and dependency parsing in the biomedical\ncontext, and also investigated the influence of parser selection for a\nbiomedical event extraction downstream task.
Availability of data and material:\nWe make the retrained models available at\nhttps://github.com/datquocnguyen/BioPosDep", "field": [], "task": ["Dependency Parsing", "Event Extraction", "Part-Of-Speech Tagging"], "method": [], "dataset": ["GENIA - UAS", "GENIA - LAS"], "metric": ["F1"], "title": "From POS tagging to dependency parsing for biomedical event extraction"} {"abstract": "Multimodal sentiment analysis is a very actively growing field of research. A\npromising area of opportunity in this field is to improve the multimodal fusion\nmechanism. We present a novel feature fusion strategy that proceeds in a\nhierarchical fashion, first fusing the modalities two in two and only then\nfusing all three modalities. On multimodal sentiment analysis of individual\nutterances, our strategy outperforms conventional concatenation of features by\n1%, which amounts to 5% reduction in error rate. On utterance-level multimodal\nsentiment analysis of multi-utterance video clips, for which current\nstate-of-the-art techniques incorporate contextual information from other\nutterances of the same clip, our hierarchical fusion gives up to 2.4% (almost\n10% error rate reduction) over currently used concatenation. The implementation\nof our method is publicly available in the form of open-source code.", "field": [], "task": ["Multimodal Emotion Recognition", "Multimodal Sentiment Analysis", "Sentiment Analysis"], "method": [], "dataset": ["MOSI", "IEMOCAP"], "metric": ["UA", "F1", "Accuracy"], "title": "Multimodal Sentiment Analysis using Hierarchical Fusion with Context Modeling"} {"abstract": "In this paper we study the problem of answering cloze-style questions over\ndocuments. Our model, the Gated-Attention (GA) Reader, integrates a multi-hop\narchitecture with a novel attention mechanism, which is based on multiplicative\ninteractions between the query embedding and the intermediate states of a\nrecurrent neural network document reader. This enables the reader to build\nquery-specific representations of tokens in the document for accurate answer\nselection. The GA Reader obtains state-of-the-art results on three benchmarks\nfor this task--the CNN \\& Daily Mail news stories and the Who Did What dataset.\nThe effectiveness of multiplicative interaction is demonstrated by an ablation\nstudy, and by comparing to alternative compositional operators for implementing\nthe gated-attention. The code is available at\nhttps://github.com/bdhingra/ga-reader.", "field": [], "task": ["Answer Selection", "Open-Domain Question Answering", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["Children's Book Test", "Quasar", "CNN / Daily Mail"], "metric": ["Accuracy-CN", "CNN", "EM (Quasar-T)", "Daily Mail", "Accuracy-NE", "F1 (Quasar-T)"], "title": "Gated-Attention Readers for Text Comprehension"} {"abstract": "Emotion recognition has become an important field of research in Human Computer Interactions as we improve upon the techniques for modelling the various aspects of behaviour. With the advancement of technology our understanding of emotions are advancing, there is a growing need for automatic emotion recognition systems. One of the directions the research is heading is the use of Neural Networks which are adept at estimating complex functions that depend on a large number and diverse source of input data. 
In this paper we attempt to exploit this effectiveness of Neural networks to enable us to perform multimodal Emotion recognition on IEMOCAP dataset using data from Speech, Text, and Motion capture data from face expressions, rotation and hand movements. Prior research has concentrated on Emotion detection from Speech on the IEMOCAP dataset, but our approach is the first that uses the multiple modes of data offered by IEMOCAP for a more robust and accurate emotion detection.", "field": [], "task": ["Emotion Recognition", "Motion Capture", "Multimodal Emotion Recognition"], "method": [], "dataset": ["Expressive hands and faces dataset (EHF)."], "metric": ["v2v error"], "title": "Multi-Modal Emotion recognition on IEMOCAP Dataset using Deep Learning"} {"abstract": "The topic of semantic segmentation has witnessed considerable progress due to\nthe powerful features learned by convolutional neural networks (CNNs). The\ncurrent leading approaches for semantic segmentation exploit shape information\nby extracting CNN features from masked image regions. This strategy introduces\nartificial boundaries on the images and may impact the quality of the extracted\nfeatures. Besides, the operations on the raw image domain require to compute\nthousands of networks on a single image, which is time-consuming. In this\npaper, we propose to exploit shape information via masking convolutional\nfeatures. The proposal segments (e.g., super-pixels) are treated as masks on\nthe convolutional feature maps. The CNN features of segments are directly\nmasked out from these maps and used to train classifiers for recognition. We\nfurther propose a joint method to handle objects and \"stuff\" (e.g., grass, sky,\nwater) in the same framework. State-of-the-art results are demonstrated on\nbenchmarks of PASCAL VOC and new PASCAL-CONTEXT, with a compelling\ncomputational speed.", "field": [], "task": ["Semantic Segmentation"], "method": [], "dataset": ["PASCAL Context"], "metric": ["mIoU"], "title": "Convolutional Feature Masking for Joint Object and Stuff Segmentation"} {"abstract": "We present a corpus of 5,000 richly annotated abstracts of medical articles\ndescribing clinical randomized controlled trials. Annotations include\ndemarcations of text spans that describe the Patient population enrolled, the\nInterventions studied and to what they were Compared, and the Outcomes measured\n(the `PICO' elements). These spans are further annotated at a more granular\nlevel, e.g., individual interventions within them are marked and mapped onto a\nstructured medical vocabulary. We acquired annotations from a diverse set of\nworkers with varying levels of expertise and cost. We describe our data\ncollection process and the corpus itself in detail. We then outline a set of\nchallenging NLP tasks that would aid searching of the medical literature and\nthe practice of evidence-based medicine.", "field": [], "task": ["Participant Intervention Comparison Outcome Extraction", "PICO"], "method": [], "dataset": ["EBM-NLP"], "metric": ["F1"], "title": "A Corpus with Multi-Level Annotations of Patients, Interventions and Outcomes to Support Language Processing for Medical Literature"} {"abstract": "Human actions comprise of joint motion of articulated body parts or\n`gestures'. Human skeleton is intuitively represented as a sparse graph with\njoints as nodes and natural connections between them as edges. Graph\nconvolutional networks have been used to recognize actions from skeletal\nvideos. 
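To make the skeleton-as-graph view concrete, the sketch below runs a single graph-convolution step over a toy joint graph; the five-joint skeleton, the symmetric normalization and the random weights are assumptions for illustration and not the PB-GCN model itself.

```python
import numpy as np

# Toy skeleton: 5 joints (0 head, 1 neck, 2 hip, 3 left hand, 4 right hand).
edges = [(0, 1), (1, 2), (1, 3), (1, 4)]
num_joints, feat_dim, out_dim = 5, 3, 8

A = np.zeros((num_joints, num_joints))
for i, j in edges:                       # undirected bones
    A[i, j] = A[j, i] = 1.0
A += np.eye(num_joints)                  # self-loops

deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))  # symmetric normalization D^-1/2 A D^-1/2

rng = np.random.default_rng(0)
X = rng.normal(size=(num_joints, feat_dim))   # per-joint features, e.g. 3D coordinates
W = rng.normal(size=(feat_dim, out_dim))      # would be learnable in a real model

H = np.maximum(A_hat @ X @ W, 0.0)            # one graph-convolution layer + ReLU
print(H.shape)                                # (5, 8)
```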
We introduce a part-based graph convolutional network (PB-GCN) for this\ntask, inspired by Deformable Part-based Models (DPMs). We divide the skeleton\ngraph into four subgraphs with joints shared across them and learn a\nrecognition model using a part-based graph convolutional network. We show that\nsuch a model improves performance of recognition, compared to a model using\nentire skeleton graph. Instead of using 3D joint coordinates as node features,\nwe show that using relative coordinates and temporal displacements boosts\nperformance. Our model achieves state-of-the-art performance on two challenging\nbenchmark datasets NTURGB+D and HDM05, for skeletal action recognition.", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "Part-based Graph Convolutional Network for Action Recognition"} {"abstract": "Every year millions of men, women and children are forced to leave their homes and seek refuge from wars, human rights violations, persecution, and natural disasters. The number of forcibly displaced people came at a record rate of 44,400 every day throughout 2017, raising the cumulative total to 68.5 million at the years end, overtaken the total population of the United Kingdom. Up to 85% of the forcibly displaced find refuge in low- and middle-income countries, calling for increased humanitarian assistance worldwide. To reduce the amount of manual labour required for human-rights-related image analysis, we introduce DisplaceNet, a novel model which infers potential displaced people from images by integrating the control level of the situation and conventional convolutional neural network (CNN) classifier into one framework for image classification. Experimental results show that DisplaceNet achieves up to 4% coverage-the proportion of a data set for which a classifier is able to produce a prediction-gain over the sole use of a CNN classifier. Our dataset, codes and trained models will be available online at https://github.com/GKalliatakis/DisplaceNet.", "field": [], "task": ["Displaced People Recognition", "Image Classification"], "method": [], "dataset": ["Human Righst Archive (HRA)"], "metric": ["coverage"], "title": "DisplaceNet: Recognising Displaced People from Images by Exploiting Dominance Level"} {"abstract": "This paper summarises the experimental setup and results of the first shared\ntask on end-to-end (E2E) natural language generation (NLG) in spoken dialogue\nsystems. Recent end-to-end generation systems are promising since they reduce\nthe need for data annotation. However, they are currently limited to small,\ndelexicalised datasets. The E2E NLG shared task aims to assess whether these\nnovel approaches can generate better-quality output by learning from a dataset\ncontaining higher lexical richness, syntactic complexity and diverse discourse\nphenomena. 
We compare 62 systems submitted by 17 institutions, covering a wide\nrange of approaches, including machine learning architectures -- with the\nmajority implementing sequence-to-sequence models (seq2seq) -- as well as\nsystems based on grammatical rules and templates.", "field": [], "task": ["Data-to-Text Generation", "Spoken Dialogue Systems", "Text Generation"], "method": [], "dataset": ["E2E NLG Challenge"], "metric": ["NIST", "METEOR", "CIDEr", "ROUGE-L", "BLEU"], "title": "Findings of the E2E NLG Challenge"} {"abstract": "On the one hand, deep neural networks are effective in learning large datasets. On the other, they are inefficient with their data usage. They often require copious amount of labeled-data to train their scads of parameters. Training larger and deeper networks is hard without appropriate regularization, particularly while using a small dataset. Laterally, collecting well-annotated data is expensive, time-consuming and often infeasible. A popular way to regularize these networks is to simply train the network with more data from an alternate representative dataset. This can lead to adverse effects if the statistics of the representative dataset are dissimilar to our target.This predicament is due to the problem of domain shift. Data from a shifted domain might not produce bespoke features when a feature extractor from the representative domain is used. Several techniques of domain adaptation have been proposed in the past to solve this problem. In this paper, we propose a new technique (d-SNE) of domain adaptation that cleverly uses stochastic neighborhood embedding techniques and a novel modified-Hausdorff distance. The proposed technique is learnable end-to-end and is therefore, ideally suited to train neural networks. Extensive experiments demonstrate that d-SNE outperforms the current states-of-the-art and is robust to the variances in different datasets, even in the one-shot and semi-supervised learning settings. d-SNE also demonstrates the ability to generalize to multiple domains concurrently. \r", "field": [], "task": ["Domain Adaptation"], "method": [], "dataset": ["VisDA2017", "SVNH-to-MNIST", "Office-31"], "metric": ["Average Accuracy", "Accuracy"], "title": "d-SNE: Domain Adaptation Using Stochastic Neighborhood Embedding"} {"abstract": "In this work, we propose to model the interaction between visual and textual features for multi-modal neural machine translation (MMT) through a latent variable model. This latent variable can be seen as a multi-modal stochastic embedding of an image and its description in a foreign language. It is used in a target-language decoder and also to predict image features. Importantly, our model formulation utilises visual and textual inputs during training but does not require that images be available at test time. We show that our latent variable MMT formulation improves considerably over strong baselines, including a multi-task learning approach (Elliott and K\\'ad\\'ar, 2017) and a conditional variational auto-encoder approach (Toyama et al., 2016). Finally, we show improvements due to (i) predicting image features in addition to only conditioning on them, (ii) imposing a constraint on the minimum amount of information encoded in the latent variable, and (iii) by training on additional target-language image descriptions (i.e. 
synthetic data).", "field": [], "task": ["Machine Translation", "Multimodal Machine Translation", "Multi-Task Learning"], "method": [], "dataset": ["Multi30K"], "metric": ["Meteor (EN-DE)", "BLEU (EN-DE)"], "title": "Latent Variable Model for Multi-modal Translation"} {"abstract": "Multimodal sentiment analysis is a developing area of research, which involves the identification of sentiments in videos. Current research considers utterances as independent entities, i.e., ignores the interdependencies and relations among the utterances of a video. In this paper, we propose a LSTM-based model that enables utterances to capture contextual information from their surroundings in the same video, thus aiding the classification process. Our method shows 5-10{\\%} performance improvement over the state of the art and high robustness to generalizability.", "field": [], "task": ["Emotion Recognition", "Emotion Recognition in Conversation", "Multimodal Emotion Recognition", "Multimodal Sentiment Analysis", "Named Entity Recognition", "Sarcasm Detection", "Sentiment Analysis"], "method": [], "dataset": ["MOSI", "IEMOCAP", "SEMAINE"], "metric": ["MAE (Arousal)", "MAE (Power)", "MAE (Valence)", "MAE (Expectancy)", "F1", "UA", "Accuracy"], "title": "Context-Dependent Sentiment Analysis in User-Generated Videos"} {"abstract": "Super-Resolution convolutional neural networks have recently demonstrated high-quality restoration for single images. However, existing algorithms often require very deep architectures and long training times. Furthermore, current convolutional neural networks for super-resolution are unable to exploit features at multiple scales and weigh them equally, limiting their learning capability. In this exposition, we present a compact and accurate super-resolution algorithm namely, Densely Residual Laplacian Network (DRLN). The proposed network employs cascading residual on the residual structure to allow the flow of low-frequency information to focus on learning high and mid-level features. In addition, deep supervision is achieved via the densely concatenated residual blocks settings, which also helps in learning from high-level complex features. Moreover, we propose Laplacian attention to model the crucial features to learn the inter and intra-level dependencies between the feature maps. Furthermore, comprehensive quantitative and qualitative evaluations on low-resolution, noisy low-resolution, and real historical image benchmark datasets illustrate that our DRLN algorithm performs favorably against the state-of-the-art methods visually and accurately.", "field": [], "task": ["Image Super-Resolution"], "method": [], "dataset": ["BSD100 - 4x upscaling", "Urban100 - 8x upscaling", "Set14 - 2x upscaling", "BSD100 - 2x upscaling", "Urban100 - 3x upscaling", "Set5 - 2x upscaling", "Urban100 - 4x upscaling", "Set5 - 3x upscaling", "Manga109 - 3x upscaling", "Set14 - 4x upscaling", "Set14 - 3x upscaling", "Set5 - 4x upscaling", "Set14 - 8x upscaling", "Manga109 - 8x upscaling", "Manga109 - 4x upscaling", "BSD100 - 3x upscaling", "Urban100 - 2x upscaling", "Manga109 - 2x upscaling", "Set5 - 8x upscaling", "BSD100 - 8x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Densely Residual Laplacian Super-Resolution"} {"abstract": "Better machine understanding of pedestrian behaviors enables faster progress in modeling interactions between agents such as autonomous vehicles and humans. 
Pedestrian trajectories are not only influenced by the pedestrian itself but also by interaction with surrounding objects. Previous methods modeled these interactions by using a variety of aggregation methods that integrate different learned pedestrians states. We propose the Social Spatio-Temporal Graph Convolutional Neural Network (Social-STGCNN), which substitutes the need of aggregation methods by modeling the interactions as a graph. Our results show an improvement over the state of art by 20% on the Final Displacement Error (FDE) and an improvement on the Average Displacement Error (ADE) with 8.5 times less parameters and up to 48 times faster inference speed than previously reported methods. In addition, our model is data efficient, and exceeds previous state of the art on the ADE metric with only 20% of the training data. We propose a kernel function to embed the social interactions between pedestrians within the adjacency matrix. Through qualitative analysis, we show that our model inherited social behaviors that can be expected between pedestrians trajectories. Code is available at https://github.com/abduallahmohamed/Social-STGCNN.", "field": [], "task": ["Autonomous Vehicles", "Trajectory Prediction"], "method": [], "dataset": ["ETH/UCY"], "metric": ["ADE-8/12"], "title": "Social-STGCNN: A Social Spatio-Temporal Graph Convolutional Neural Network for Human Trajectory Prediction"} {"abstract": "This work addresses the unsupervised domain adaptation problem, especially in the case of class labels in the target domain being only a subset of those in the source domain. Such a partial transfer setting is realistic but challenging and existing methods always suffer from two key problems, negative transfer and uncertainty propagation. In this paper, we build on domain adversarial learning and propose a novel domain adaptation method BA$^3$US with two new techniques termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty Suppression (AUS), respectively. On one hand, negative transfer results in misclassification of target samples to the classes only present in the source domain. To address this issue, BAA pursues the balance between label distributions across domains in a fairly simple manner. Specifically, it randomly leverages a few source samples to augment the smaller target domain during domain alignment so that classes in different domains are symmetric. On the other hand, a source sample would be denoted as uncertain if there is an incorrect class that has a relatively high prediction score, and such uncertainty easily propagates to unlabeled target data around it during alignment, which severely deteriorates adaptation performance. Thus we present AUS that emphasizes uncertain samples and exploits an adaptive weighted complement entropy objective to encourage incorrect classes to have uniform and low prediction scores. Experimental results on multiple benchmarks demonstrate our BA$^3$US surpasses state-of-the-arts for partial domain adaptation tasks. Code is available at \\url{https://github.com/tim-learn/BA3US}.", "field": [], "task": ["Domain Adaptation", "Partial Domain Adaptation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["ImageNet-Caltech", "Office-31", "Office-Home"], "metric": ["Accuracy (%)"], "title": "A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation"} {"abstract": "Coronavirus disease 2019 (Covid-19) is highly contagious with limited treatment options. 
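Returning to the Social-STGCNN idea above of embedding pedestrian interactions in an adjacency matrix, a minimal sketch of such a graph construction is given below; the inverse-distance kernel and the row normalization are assumptions chosen for illustration, not necessarily the kernel proposed in the paper.

```python
import numpy as np

def social_adjacency(positions, eps=1e-6):
    """Build a weighted adjacency matrix for one frame of pedestrian positions.

    positions: (n_peds, 2) array of (x, y) coordinates.
    The edge weight between two distinct pedestrians is the inverse of their
    Euclidean distance, so closer pedestrians interact more strongly.
    """
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    A = np.where(dist > eps, 1.0 / np.maximum(dist, eps), 0.0)  # no self-edges
    row_sums = A.sum(axis=1, keepdims=True)
    return A / np.maximum(row_sums, eps)        # row-normalized for aggregation

frame = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
print(np.round(social_adjacency(frame), 3))
```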
Early and accurate diagnosis of Covid-19 is crucial in reducing the spread of the disease and its accompanied mortality. Currently, detection by reverse transcriptase polymerase chain reaction (RT-PCR) is the gold standard of outpatient and inpatient detection of Covid-19. RT-PCR is a rapid method, however, its accuracy in detection is only ~70-75%. Another approved strategy is computed tomography (CT) imaging. CT imaging has a much higher sensitivity of ~80-98%, but similar accuracy of 70%. To enhance the accuracy of CT imaging detection, we developed an open-source set of algorithms called CovidCTNet that successfully differentiates Covid-19 from community-acquired pneumonia (CAP) and other lung diseases. CovidCTNet increases the accuracy of CT imaging detection to 90% compared to radiologists (70%). The model is designed to work with heterogeneous and small sample sizes independent of the CT imaging hardware. In order to facilitate the detection of Covid-19 globally and assist radiologists and physicians in the screening process, we are releasing all algorithms and parametric details in an open-source format. Open-source sharing of our CovidCTNet enables developers to rapidly improve and optimize services, while preserving user privacy and data ownership.", "field": [], "task": ["Computed Tomography (CT)", "COVID-19 Diagnosis", "COVID-19 Image Segmentation", "Transfer Learning"], "method": [], "dataset": [], "metric": ["10 fold Cross validation"], "title": "CovidCTNet: An Open-Source Deep Learning Approach to Identify Covid-19 Using CT Image"} {"abstract": "In skeleton-based action recognition, graph convolutional networks (GCNs) have achieved remarkable success. Nevertheless, how to efficiently model the spatial-temporal skeleton graph without introducing extra computation burden is a challenging problem for industrial deployment. In this paper, we rethink the spatial aggregation in existing GCN-based skeleton action recognition methods and discover that they are limited by coupling aggregation mechanism. Inspired by the decoupling aggregation mechanism in CNNs, we propose decoupling GCN to boost the graph modeling ability with no extra computation, no extra latency, no extra GPU memory cost, and less than 10% extra parameters. Another prevalent problem of GCNs is over-fitting. Although dropout is a widely used regularization technique, it is not effective for GCNs, due to the fact that activation units are correlated between neighbor nodes. We propose DropGraph to discard features in correlated nodes, which is particularly effective on GCNs. Moreover, we introduce an attention-guided drop mechanism to enhance the regularization effect. All our contributions introduce zero extra computation burden at deployment. We conduct experiments on three datasets (NTU-RGBD, NTU-RGBD-120, and Northwestern-UCLA) and exceed the state-of-the-art performance with less computation cost.", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition"], "method": [], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "Decoupling GCN with DropGraph Module for Skeleton-Based Action Recognition"} {"abstract": "Recently, differentiable architecture search has draw great attention due to its high efficiency and competitive performance. It searches the optimal architecture in a shallow network, and then measures its performance in a deep evaluation network. 
This leads to the optimization of architecture search is independent of the target evaluation network, and the discovered architecture is sub-optimal. To address this issue, we propose a novel cyclic differentiable architecture search framework (CDARTS). Considering the structure difference, CDARTS builds a cyclic feedback mechanism between the search and evaluation networks. First, the search network generates an initial topology for evaluation, so that the weights of the evaluation network can be optimized. Second, the architecture topology in the search network is further optimized by the label supervision in classification, as well as the regularization from the evaluation network through feature distillation. Repeating the above cycle results in a joint optimization of the search and evaluation networks, and thus enables the evolution of the topology to fit the final evaluation network. The experiments and analysis on CIFAR, ImageNet and NAS-Bench- 201 demonstrate the efficacy of the proposed approach.", "field": [], "task": ["Neural Architecture Search"], "method": [], "dataset": ["NAS-Bench-201, ImageNet-16-120"], "metric": ["Accuracy (Test)", "Accuracy (val)"], "title": "Cyclic Differentiable Architecture Search"} {"abstract": "Photometric loss is widely used for self-supervised depth and egomotion estimation. However, the loss landscapes induced by photometric differences are often problematic for optimization, caused by plateau landscapes for pixels in textureless regions or multiple local minima for less discriminative pixels. In this work, feature-metric loss is proposed and defined on feature representation, where the feature representation is also learned in a self-supervised manner and regularized by both first-order and second-order derivatives to constrain the loss landscapes to form proper convergence basins. Comprehensive experiments and detailed analysis via visualization demonstrate the effectiveness of the proposed feature-metric loss. In particular, our method improves state-of-the-art methods on KITTI from 0.885 to 0.925 measured by $\\delta_1$ for depth estimation, and significantly outperforms previous method for visual odometry.", "field": [], "task": ["Depth Estimation", "Monocular Depth Estimation", "Self-Supervised Learning", "Visual Odometry"], "method": [], "dataset": ["KITTI Eigen split unsupervised"], "metric": ["absolute relative error"], "title": "Feature-metric Loss for Self-supervised Learning of Depth and Egomotion"} {"abstract": "We present collaborative similarity embedding (CSE), a unified framework that\nexploits comprehensive collaborative relations available in a user-item\nbipartite graph for representation learning and recommendation. In the proposed\nframework, we differentiate two types of proximity relations: direct proximity\nand k-th order neighborhood proximity. While learning from the former exploits\ndirect user-item associations observable from the graph, learning from the\nlatter makes use of implicit associations such as user-user similarities and\nitem-item similarities, which can provide valuable information especially when\nthe graph is sparse. Moreover, for improving scalability and flexibility, we\npropose a sampling technique that is specifically designed to capture the two\ntypes of proximity relations. 
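As a rough illustration of sampling the two proximity types described above, the snippet below draws a direct user-item pair and a k-th order neighbor via a short random walk on a toy bipartite graph; the walk length, the tiny interaction data and the helper names are assumptions for demonstration only.

```python
import random

# Toy user-item interactions (direct proximity).
interactions = {"u1": ["i1", "i2"], "u2": ["i2", "i3"], "u3": ["i3"]}

# Build an undirected bipartite adjacency list.
adj = {}
for user, items in interactions.items():
    for item in items:
        adj.setdefault(user, []).append(item)
        adj.setdefault(item, []).append(user)

def sample_kth_order_neighbor(start, k, rng=random):
    """Walk k steps from `start`; the endpoint is a k-th order neighbor
    (e.g. k=2 from a user lands on a similar user, k=3 on a related item)."""
    node = start
    for _ in range(k):
        node = rng.choice(adj[node])
    return node

random.seed(0)
direct_pair = ("u1", random.choice(adj["u1"]))              # direct proximity sample
second_order = ("u1", sample_kth_order_neighbor("u1", 2))   # user-user similarity
print(direct_pair, second_order)
```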
Extensive experiments on eight benchmark datasets\nshow that CSE yields significantly better performance than state-of-the-art\nrecommendation methods.", "field": [], "task": ["Graph Learning", "Recommendation Systems", "Representation Learning"], "method": [], "dataset": ["Frappe", "MovieLens-Latest", "Last.FM-360k", "CiteULike", "Epinions-Extend", "Echonest", "Netflix", "Amazon-Book"], "metric": ["mAP@10", "Recall@10"], "title": "Collaborative Similarity Embedding for Recommender Systems"} {"abstract": "We introduce extreme summarization, a new single-document summarization task\nwhich does not favor extractive strategies and calls for an abstractive\nmodeling approach. The idea is to create a short, one-sentence news summary\nanswering the question \"What is the article about?\". We collect a real-world,\nlarge-scale dataset for this task by harvesting online articles from the\nBritish Broadcasting Corporation (BBC). We propose a novel abstractive model\nwhich is conditioned on the article's topics and based entirely on\nconvolutional neural networks. We demonstrate experimentally that this\narchitecture captures long-range dependencies in a document and recognizes\npertinent content, outperforming an oracle extractive system and\nstate-of-the-art abstractive approaches when evaluated automatically and by\nhumans.", "field": [], "task": ["Document Summarization", "Extreme Summarization", "Text Summarization"], "method": [], "dataset": ["X-Sum"], "metric": ["ROUGE-3", "ROUGE-1", "ROUGE-2"], "title": "Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization"} {"abstract": "We address the problem of semi-supervised video object segmentation (VOS),\nwhere the masks of objects of interests are given in the first frame of an\ninput video. To deal with challenging cases where objects are occluded or\nmissing, previous work relies on greedy data association strategies that make\ndecisions for each frame individually. In this paper, we propose a novel\napproach to defer the decision making for a target object in each frame, until\na global view can be established with the entire video being taken into\nconsideration. Our approach is in the same spirit as Multiple Hypotheses\nTracking (MHT) methods, making several critical adaptations for the VOS\nproblem. We employ the bounding box (bbox) hypothesis for tracking tree\nformation, and the multiple hypotheses are spawned by propagating the preceding\nbbox into the detected bbox proposals within a gated region starting from the\ninitial object mask in the first frame. The gated region is determined by a\ngating scheme which takes into account a more comprehensive motion model rather\nthan the simple Kalman filtering model in traditional MHT. To further design\nmore customized algorithms tailored for VOS, we develop a novel mask\npropagation score instead of the appearance similarity score that could be\nbrittle due to large deformations. The mask propagation score, together with\nthe motion score, determines the affinity between the hypotheses during tree\npruning. Finally, a novel mask merging strategy is employed to handle mask\nconflicts between objects. 
Extensive experiments on challenging datasets\ndemonstrate the effectiveness of the proposed method, especially in the case of\nobject missing.", "field": [], "task": ["Decision Making", "Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["DAVIS 2017 (val)", "DAVIS 2017 (test-dev)", "DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "MHP-VOS: Multiple Hypotheses Propagation for Video Object Segmentation"} {"abstract": "In the recent literature, \"end-to-end\" speech systems often refer to\nletter-based acoustic models trained in a sequence-to-sequence manner, either\nvia a recurrent model or via a structured output learning approach (such as\nCTC). In contrast to traditional phone (or senone)-based approaches, these\n\"end-to-end'' approaches alleviate the need of word pronunciation modeling, and\ndo not require a \"forced alignment\" step at training time. Phone-based\napproaches remain however state of the art on classical benchmarks. In this\npaper, we propose a letter-based speech recognition system, leveraging a\nConvNet acoustic model. Key ingredients of the ConvNet are Gated Linear Units\nand high dropout. The ConvNet is trained to map audio sequences to their\ncorresponding letter transcriptions, either via a classical CTC approach, or\nvia a recent variant called ASG. Coupled with a simple decoder at inference\ntime, our system matches the best existing letter-based systems on WSJ (in word\nerror rate), and shows near state of the art performance on LibriSpeech.", "field": [], "task": ["Language Modelling", "Speech Recognition"], "method": [], "dataset": ["LibriSpeech test-clean"], "metric": ["Word Error Rate (WER)"], "title": "Letter-Based Speech Recognition with Gated ConvNets"} {"abstract": "In unsupervised image-to-image translation, the goal is to learn the mapping\nbetween an input image and an output image using a set of unpaired training\nimages. In this paper, we propose an extension of the unsupervised\nimage-to-image translation problem to multiple input setting. Given a set of\npaired images from multiple modalities, a transformation is learned to\ntranslate the input into a specified domain. For this purpose, we introduce a\nGenerative Adversarial Network (GAN) based framework along with a multi-modal\ngenerator structure and a new loss term, latent consistency loss. Through\nvarious experiments we show that leveraging multiple inputs generally improves\nthe visual quality of the translated images. Moreover, we show that the\nproposed method outperforms current state-of-the-art unsupervised\nimage-to-image translation methods.", "field": [], "task": ["Image-to-Image Translation", "Multimodal Unsupervised Image-To-Image Translation", "Unsupervised Image-To-Image Translation"], "method": [], "dataset": ["EPFL NIR-VIS", "Freiburg Forest Dataset"], "metric": ["PSNR"], "title": "In2I : Unsupervised Multi-Image-to-Image Translation Using Generative Adversarial Networks"} {"abstract": "We describe a neural network model that jointly learns distributed\nrepresentations of texts and knowledge base (KB) entities. Given a text in the\nKB, we train our proposed model to predict entities that are relevant to the\ntext. Our model is designed to be generic with the ability to address various\nNLP tasks with ease. 
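Looking back at the gated ConvNet acoustic model above, its key ingredient, the Gated Linear Unit, is easy to state in isolation: one half of the channels gates the other half. The sketch below is a generic GLU applied to a feature map, not the paper's full architecture.

```python
import numpy as np

def glu(x, axis=-1):
    """Gated Linear Unit: split `x` in two along `axis`, use one half as values
    and the sigmoid of the other half as gates. Output has half the channels."""
    a, b = np.split(x, 2, axis=axis)
    return a * (1.0 / (1.0 + np.exp(-b)))

# Toy usage: (time=10, channels=16) activations from a 1-D convolution.
rng = np.random.default_rng(0)
features = rng.normal(size=(10, 16))
print(glu(features).shape)   # (10, 8)
```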
We train the model using a large corpus of texts and their\nentity annotations extracted from Wikipedia. We evaluated the model on three\nimportant NLP tasks (i.e., sentence textual similarity, entity linking, and\nfactoid question answering) involving both unsupervised and supervised\nsettings. As a result, we achieved state-of-the-art results on all three of\nthese tasks. Our code and trained models are publicly available for further\nacademic research.", "field": [], "task": ["Entity Disambiguation", "Entity Linking", "Question Answering"], "method": [], "dataset": ["TAC2010", "AIDA-CoNLL"], "metric": ["Micro Precision", "In-KB Accuracy"], "title": "Learning Distributed Representations of Texts and Entities from Knowledge Base"} {"abstract": "Text in curve orientation, despite being one of the common text orientations\nin real world environment, has close to zero existence in well received scene\ntext datasets such as ICDAR2013 and MSRA-TD500. The main motivation of\nTotal-Text is to fill this gap and facilitate a new research direction for the\nscene text community. On top of the conventional horizontal and multi-oriented\ntexts, it features curved-oriented text. Total-Text is highly diversified in\norientations, more than half of its images have a combination of more than two\norientations. Recently, a new breed of solutions that casted text detection as\na segmentation problem has demonstrated their effectiveness against\nmulti-oriented text. In order to evaluate its robustness against curved text,\nwe fine-tuned DeconvNet and benchmark it on Total-Text. Total-Text with its\nannotation is available at https://github.com/cs-chan/Total-Text-Dataset", "field": [], "task": ["Curved Text Detection", "Scene Text", "Scene Text Detection", "Scene Text Recognition"], "method": [], "dataset": ["Total-Text"], "metric": ["F-Measure", "Recall", "Precision"], "title": "Total-Text: A Comprehensive Dataset for Scene Text Detection and Recognition"} {"abstract": "We propose an end-to-end deep learning architecture for word-level visual\nspeech recognition. The system is a combination of spatiotemporal\nconvolutional, residual and bidirectional Long Short-Term Memory networks. We\ntrain and evaluate it on the Lipreading In-The-Wild benchmark, a challenging\ndatabase of 500-size target-words consisting of 1.28sec video excerpts from BBC\nTV broadcasts. The proposed network attains word accuracy equal to 83.0,\nyielding 6.8 absolute improvement over the current state-of-the-art, without\nusing information about word boundaries during training or testing.", "field": [], "task": ["Lipreading", "Lip Reading", "Speech Recognition", "Visual Speech Recognition"], "method": [], "dataset": ["Lip Reading in the Wild"], "metric": ["Top-1 Accuracy"], "title": "Combining Residual Networks with LSTMs for Lipreading"} {"abstract": "Recent papers have shown that neural networks obtain state-of-the-art\nperformance on several different sequence tagging tasks. One appealing property\nof such systems is their generality, as excellent performance can be achieved\nwith a unified architecture and without task-specific feature engineering.\nHowever, it is unclear if such systems can be used for tasks without large\namounts of training data. 
In this paper we explore the problem of transfer\nlearning for neural sequence taggers, where a source task with plentiful\nannotations (e.g., POS tagging on Penn Treebank) is used to improve performance\non a target task with fewer available annotations (e.g., POS tagging for\nmicroblogs). We examine the effects of transfer learning for deep hierarchical\nrecurrent networks across domains, applications, and languages, and show that\nsignificant improvement can often be obtained. These improvements lead to\nimprovements over the current state-of-the-art on several well-studied tasks.", "field": [], "task": ["Feature Engineering", "Named Entity Recognition", "Part-Of-Speech Tagging", "Transfer Learning"], "method": [], "dataset": ["CoNLL 2003 (English)", "Penn Treebank"], "metric": ["F1", "Accuracy"], "title": "Transfer Learning for Sequence Tagging with Hierarchical Recurrent Networks"} {"abstract": "We present a technique for adding global context to deep convolutional\nnetworks for semantic segmentation. The approach is simple, using the average\nfeature for a layer to augment the features at each location. In addition, we\nstudy several idiosyncrasies of training, significantly increasing the\nperformance of baseline networks (e.g. from FCN). When we add our proposed\nglobal feature, and a technique for learning normalization parameters, accuracy\nincreases consistently even over our improved versions of the baselines. Our\nproposed approach, ParseNet, achieves state-of-the-art performance on SiftFlow\nand PASCAL-Context with small additional computational cost over baselines, and\nnear current state-of-the-art performance on PASCAL VOC 2012 semantic\nsegmentation with a simple approach. Code is available at\nhttps://github.com/weiliu89/caffe/tree/fcn .", "field": [], "task": ["Semantic Segmentation"], "method": [], "dataset": ["PASCAL Context", "PASCAL VOC 2012 test"], "metric": ["Mean IoU", "mIoU"], "title": "ParseNet: Looking Wider to See Better"} {"abstract": "Learning specific hands-on skills such as cooking, car maintenance, and home repairs increasingly happens via instructional videos. The user experience with such videos is known to be improved by meta-information such as time-stamped annotations for the main steps involved. Generating such annotations automatically is challenging, and we describe here two relevant contributions. First, we construct and release a new dense video captioning dataset, Video Timeline Tags (ViTT), featuring a variety of instructional videos together with time-stamped annotations. Second, we explore several multimodal sequence-to-sequence pretraining strategies that leverage large unsupervised datasets of videos and caption-like texts. We pretrain and subsequently finetune dense video captioning models using both YouCook2 and ViTT. We show that such models generalize well and are robust over a wide variety of instructional videos.", "field": [], "task": ["Dense Video Captioning", "Video Captioning"], "method": [], "dataset": ["YouCook2"], "metric": ["ROUGE-L", "BLEU-4", "METEOR", "CIDEr"], "title": "Multimodal Pretraining for Dense Video Captioning"} {"abstract": "Out-of-Distribution (OoD) detection is important for building safe artificial intelligence systems. However, current OoD detection methods still cannot meet the performance requirements for practical deployment. 
In this paper, we propose a simple yet effective algorithm based on a novel observation: in a trained neural network, OoD samples with bounded norms well concentrate in the feature space. We call the center of OoD features the Feature Space Singularity (FSS), and denote the distance of a sample feature to FSS as FSSD. Then, OoD samples can be identified by taking a threshold on the FSSD. Our analysis of the phenomenon reveals why our algorithm works. We demonstrate that our algorithm achieves state-of-the-art performance on various OoD detection benchmarks. Besides, FSSD also enjoys robustness to slight corruption in test data and can be further enhanced by ensembling. These make FSSD a promising algorithm to be employed in real world. We release our code at \\url{https://github.com/megvii-research/FSSD_OoD_Detection}.", "field": [], "task": ["Out-of-Distribution Detection"], "method": [], "dataset": ["MS-1M vs. IJB-C", "ImageNet dogs vs ImageNet non-dogs", "Fashion-MNIST", "CIFAR-10"], "metric": ["AUROC"], "title": "Feature Space Singularity for Out-of-Distribution Detection"} {"abstract": "Robust speech processing in multi-talker environments requires effective\nspeech separation. Recent deep learning systems have made significant progress\ntoward solving this problem, yet it remains challenging particularly in\nreal-time, short latency applications. Most methods attempt to construct a mask\nfor each source in time-frequency representation of the mixture signal which is\nnot necessarily an optimal representation for speech separation. In addition,\ntime-frequency decomposition results in inherent problems such as\nphase/magnitude decoupling and long time window which is required to achieve\nsufficient frequency resolution. We propose Time-domain Audio Separation\nNetwork (TasNet) to overcome these limitations. We directly model the signal in\nthe time-domain using an encoder-decoder framework and perform the source\nseparation on nonnegative encoder outputs. This method removes the frequency\ndecomposition step and reduces the separation problem to estimation of source\nmasks on encoder outputs which is then synthesized by the decoder. Our system\noutperforms the current state-of-the-art causal and noncausal speech separation\nalgorithms, reduces the computational cost of speech separation, and\nsignificantly reduces the minimum required latency of the output. This makes\nTasNet suitable for applications where low-power, real-time implementation is\ndesirable such as in hearable and telecommunication devices.", "field": [], "task": ["Speech Separation"], "method": [], "dataset": ["wsj0-2mix"], "metric": ["SI-SDRi"], "title": "TasNet: time-domain audio separation network for real-time, single-channel speech separation"} {"abstract": "Coreference resolution is essential for automatic text understanding to facilitate high-level information retrieval tasks such as text summarisation or question answering. Previous work indicates that the performance of state-of-the-art approaches (e.g. based on BERT) noticeably declines when applied to scientific papers. In this paper, we investigate the task of coreference resolution in research papers and subsequent knowledge graph population. 
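Returning to the Feature Space Singularity observation above, a bare-bones version of the scoring rule is short: estimate the FSS as the mean feature of some reference inputs (for example, pure noise passed through the network) and score test samples by their distance to it. The noise-based estimate, the toy network and the median threshold below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(32, 16))           # frozen weights of a toy "trained" network

def feature_fn(x):
    """Stand-in for a trained network's penultimate-layer features."""
    return np.tanh(x @ W)

noise_inputs = rng.uniform(-1.0, 1.0, size=(256, 32))
fss = feature_fn(noise_inputs).mean(axis=0)      # estimated Feature Space Singularity

def fssd(x):
    """Distance of each sample's feature to the estimated FSS center."""
    return np.linalg.norm(feature_fn(x) - fss, axis=1)

test_batch = rng.normal(size=(8, 32))
scores = fssd(test_batch)
threshold = np.median(scores)                    # placeholder threshold
print((scores > threshold).astype(int))          # 1 = treated as in-distribution (toy)
```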
We present the following contributions: (1) We annotate a corpus for coreference resolution that comprises 10 different scientific disciplines from Science, Technology, and Medicine (STM); (2) We propose transfer learning for automatic coreference resolution in research papers; (3) We analyse the impact of coreference resolution on knowledge graph (KG) population; (4) We release a research KG that is automatically populated from 55,485 papers in 10 STM domains. Comprehensive experiments show the usefulness of the proposed approach. Our transfer learning approach considerably outperforms state-of-the-art baselines on our corpus with an F1 score of 61.4 (+11.0), while the evaluation against a gold standard KG shows that coreference resolution improves the quality of the populated KG significantly with an F1 score of 63.5 (+21.8).", "field": [], "task": ["Coreference Resolution", "Information Retrieval", "research knowledge graph population", "Transfer Learning"], "method": [], "dataset": ["STM-coref"], "metric": ["CoNLL F1"], "title": "Coreference Resolution in Research Papers from Multiple Domains"} {"abstract": "Mesh models are a promising approach for encoding the structure of 3D\nobjects. Current mesh reconstruction systems predict uniformly distributed\nvertex locations of a predetermined graph through a series of graph\nconvolutions, leading to compromises with respect to performance or resolution.\nIn this paper, we argue that the graph representation of geometric objects\nallows for additional structure, which should be leveraged for enhanced\nreconstruction. Thus, we propose a system which properly benefits from the\nadvantages of the geometric structure of graph encoded objects by introducing\n(1) a graph convolutional update preserving vertex information; (2) an adaptive\nsplitting heuristic allowing detail to emerge; and (3) a training objective\noperating both on the local surfaces defined by vertices as well as the global\nstructure defined by the mesh. Our proposed method is evaluated on the task of\n3D object reconstruction from images with the ShapeNet dataset, where we\ndemonstrate state of the art performance, both visually and numerically, while\nhaving far smaller space requirements by generating adaptive meshes", "field": [], "task": ["3D Object Reconstruction", "Object Reconstruction"], "method": [], "dataset": ["Data3D\u2212R2N2"], "metric": ["Avg F1"], "title": "GEOMetrics: Exploiting Geometric Structure for Graph-Encoded Objects"} {"abstract": "We present a multi-purpose algorithm for simultaneous face detection, face\nalignment, pose estimation, gender recognition, smile detection, age estimation\nand face recognition using a single deep convolutional neural network (CNN).\nThe proposed method employs a multi-task learning framework that regularizes\nthe shared parameters of CNN and builds a synergy among different domains and\ntasks. Extensive experiments show that the network has a better understanding\nof face and achieves state-of-the-art result for most of these tasks.", "field": [], "task": ["Age Estimation", "Face Alignment", "Face Detection", "Face Recognition", "Face Verification", "Multi-Task Learning", "Pose Estimation"], "method": [], "dataset": ["IJB-A"], "metric": ["TAR @ FAR=0.01"], "title": "An All-In-One Convolutional Neural Network for Face Analysis"} {"abstract": "The success of neural summarization models stems from the meticulous encodings of source articles. 
To overcome the impediments of limited and sometimes noisy training data, one promising direction is to make better use of the available training data by applying filters during summarization. In this paper, we propose a novel Bi-directional Selective Encoding with Template (BiSET) model, which leverages template discovered from training data to softly select key information from each source article to guide its summarization process. Extensive experiments on a standard summarization dataset were conducted and the results show that the template-equipped BiSET model manages to improve the summarization performance significantly with a new state of the art.", "field": [], "task": ["Abstractive Text Summarization", "Text Summarization"], "method": [], "dataset": ["GigaWord"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "BiSET: Bi-directional Selective Encoding with Template for Abstractive Summarization"} {"abstract": "Pre-trained embeddings such as word embeddings and sentence embeddings are fundamental tools facilitating a wide range of downstream NLP tasks. In this work, we investigate how to learn a general-purpose embedding of textual relations, defined as the shortest dependency path between entities. Textual relation embedding provides a level of knowledge between word/phrase level and sentence level, and we show that it can facilitate downstream tasks requiring relational understanding of the text. To learn such an embedding, we create the largest distant supervision dataset by linking the entire English ClueWeb09 corpus to Freebase. We use global co-occurrence statistics between textual and knowledge base relations as the supervision signal to train the embedding. Evaluation on two relational understanding tasks demonstrates the usefulness of the learned textual relation embedding. The data and code can be found at https://github.com/czyssrs/GloREPlus", "field": [], "task": ["Action Classification", "Sentence Embeddings", "Word Embeddings"], "method": [], "dataset": ["Kinetics-400"], "metric": ["Vid acc@1"], "title": "Global Textual Relation Embedding for Relational Understanding"} {"abstract": "Natural language processing (NLP) models often require a massive number of\nparameters for word embeddings, resulting in a large storage or memory\nfootprint. Deploying neural NLP models to mobile devices requires compressing\nthe word embeddings without any significant sacrifices in performance. For this\npurpose, we propose to construct the embeddings with few basis vectors. For\neach word, the composition of basis vectors is determined by a hash code. To\nmaximize the compression rate, we adopt the multi-codebook quantization\napproach instead of binary coding scheme. Each code is composed of multiple\ndiscrete numbers, such as (3, 2, 1, 8), where the value of each component is\nlimited to a fixed range. We propose to directly learn the discrete codes in an\nend-to-end neural network by applying the Gumbel-softmax trick. Experiments\nshow the compression rate achieves 98% in a sentiment analysis task and 94% ~\n99% in machine translation tasks without performance loss. In both tasks, the\nproposed method can improve the model performance by slightly lowering the\ncompression rate. 
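The multi-codebook idea above, where each word is stored as a short discrete code indexing shared basis vectors, reduces to a few lines once the codes are learned; the codebook sizes, the toy vocabulary and the plain summation used below are illustrative assumptions.

```python
import numpy as np

num_codebooks, codebook_size, embed_dim = 4, 16, 32
rng = np.random.default_rng(0)

# Shared basis vectors: `num_codebooks` codebooks of 16 basis vectors each.
codebooks = rng.normal(size=(num_codebooks, codebook_size, embed_dim))

# Learned discrete codes for a toy 3-word vocabulary, e.g. (3, 2, 1, 8) per word.
codes = np.array([[3, 2, 1, 8],
                  [0, 15, 7, 2],
                  [3, 2, 1, 9]])

def decode_embeddings(codes, codebooks):
    """Reconstruct dense word embeddings by summing, for each word, the basis
    vector its code selects from every codebook. Storage per word is only
    `num_codebooks` small integers instead of `embed_dim` floats."""
    m = codebooks.shape[0]
    selected = codebooks[np.arange(m), codes]   # (vocab, num_codebooks, embed_dim)
    return selected.sum(axis=1)                 # (vocab, embed_dim)

print(decode_embeddings(codes, codebooks).shape)   # (3, 32)
```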
Compared to other approaches such as character-level\nsegmentation, the proposed method is language-independent and does not require\nmodifications to the network architecture.", "field": [], "task": ["Machine Translation", "Quantization", "Sentiment Analysis", "Word Embeddings"], "method": [], "dataset": ["IWSLT2015 German-English"], "metric": ["BLEU score"], "title": "Compressing Word Embeddings via Deep Compositional Code Learning"} {"abstract": "The RepEval 2017 Shared Task aims to evaluate natural language understanding\nmodels for sentence representation, in which a sentence is represented as a\nfixed-length vector with neural networks and the quality of the representation\nis tested with a natural language inference task. This paper describes our\nsystem (alpha) that is ranked among the top in the Shared Task, on both the\nin-domain test set (obtaining a 74.9% accuracy) and on the cross-domain test\nset (also attaining a 74.9% accuracy), demonstrating that the model generalizes\nwell to the cross-domain data. Our model is equipped with intra-sentence\ngated-attention composition which helps achieve a better performance. In\naddition to submitting our model to the Shared Task, we have also tested it on\nthe Stanford Natural Language Inference (SNLI) dataset. We obtain an accuracy\nof 85.5%, which is the best reported result on SNLI when cross-sentence\nattention is not allowed, the same condition enforced in RepEval 2017.", "field": [], "task": ["Natural Language Inference", "Natural Language Understanding"], "method": [], "dataset": ["SNLI"], "metric": ["Parameters", "% Train Accuracy", "% Test Accuracy"], "title": "Recurrent Neural Network-Based Sentence Encoder with Gated Attention for Natural Language Inference"} {"abstract": "We present an unsupervised representation learning approach using videos\nwithout semantic labels. We leverage the temporal coherence as a supervisory\nsignal by formulating representation learning as a sequence sorting task. We\ntake temporally shuffled frames (i.e., in non-chronological order) as inputs\nand train a convolutional neural network to sort the shuffled sequences.\nSimilar to comparison-based sorting algorithms, we propose to extract features\nfrom all frame pairs and aggregate them to predict the correct order. As\nsorting shuffled image sequence requires an understanding of the statistical\ntemporal structure of images, training with such a proxy task allows us to\nlearn rich and generalizable visual representation. We validate the\neffectiveness of the learned representation using our method as pre-training on\nhigh-level recognition problems. The experimental results show that our method\ncompares favorably against state-of-the-art methods on action recognition,\nimage classification and object detection tasks.", "field": [], "task": ["Action Recognition", "Image Classification", "Object Detection", "Representation Learning", "Self-Supervised Action Recognition", "Temporal Action Localization", "Unsupervised Representation Learning"], "method": [], "dataset": ["HMDB51"], "metric": ["Pre-Training Dataset", "Top-1 Accuracy"], "title": "Unsupervised Representation Learning by Sorting Sequences"} {"abstract": "Combining deep neural networks with structured logic rules is desirable to harness flexibility and reduce uninterpretability of the neural models. We propose a general framework capable of enhancing various types of neural networks (e.g., CNNs and RNNs) with declarative first-order logic rules. 
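Circling back to the sequence-sorting pretext task above, the data side can be sketched simply: sample a tuple of frames, shuffle it, and use the index of the applied permutation as the classification target. The four-frame tuples and placeholder frame identifiers below are assumptions.

```python
import itertools
import random

PERMUTATIONS = list(itertools.permutations(range(4)))  # 24 possible orders

def make_sorting_sample(video_frames, rng=random):
    """Return (shuffled_frames, permutation_index) for a 4-frame tuple.

    `video_frames` is any ordered sequence of frames (arrays, paths, ...);
    the network's job would be to recover the chronological order."""
    start = rng.randrange(len(video_frames) - 3)
    clip = video_frames[start:start + 4]                  # chronological tuple
    perm = rng.choice(PERMUTATIONS)
    shuffled = [clip[i] for i in perm]
    return shuffled, PERMUTATIONS.index(perm)

random.seed(0)
frames = [f"frame_{t:03d}.jpg" for t in range(30)]        # placeholder frame ids
sample, label = make_sorting_sample(frames)
print(sample, label)
```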
Specifically, we develop an iterative distillation method that transfers the structured information of logic rules into the weights of neural networks. We deploy the framework on a CNN for sentiment analysis, and an RNN for named entity recognition. With a few highly intuitive rules, we obtain substantial improvements and achieve state-of-the-art or comparable results to previous best-performing systems.", "field": [], "task": ["Named Entity Recognition", "Sentiment Analysis"], "method": [], "dataset": ["SST-2 Binary classification", "CoNLL 2003 (English)"], "metric": ["F1", "Accuracy"], "title": "Harnessing Deep Neural Networks with Logic Rules"} {"abstract": "We train one multilingual model for dependency parsing and use it to parse\nsentences in several languages. The parsing model uses (i) multilingual word\nclusters and embeddings; (ii) token-level language information; and (iii)\nlanguage-specific features (fine-grained POS tags). This input representation\nenables the parser not only to parse effectively in multiple languages, but\nalso to generalize across languages based on linguistic universals and\ntypological similarities, making it more effective to learn from limited\nannotations. Our parser's performance compares favorably to strong baselines in\na range of data scenarios, including when the target language has a large\ntreebank, a small treebank, or no treebank for training.", "field": [], "task": ["Cross-lingual zero-shot dependency parsing", "Dependency Parsing"], "method": [], "dataset": ["Universal Dependency Treebank"], "metric": ["LAS"], "title": "Many Languages, One Parser"} {"abstract": "We propose a new single-shot method for multi-person 3D pose estimation in\ngeneral scenes from a monocular RGB camera. Our approach uses novel\nocclusion-robust pose-maps (ORPM) which enable full body pose inference even\nunder strong partial occlusions by other people and objects in the scene. ORPM\noutputs a fixed number of maps which encode the 3D joint locations of all\npeople in the scene. Body part associations allow us to infer 3D pose for an\narbitrary number of people without explicit bounding box prediction. To train\nour approach we introduce MuCo-3DHP, the first large scale training data set\nshowing real images of sophisticated multi-person interactions and occlusions.\nWe synthesize a large corpus of multi-person images by compositing images of\nindividual people (with ground truth from mutli-view performance capture). We\nevaluate our method on our new challenging 3D annotated multi-person test set\nMuPoTs-3D where we achieve state-of-the-art performance. To further stimulate\nresearch in multi-person 3D pose estimation, we will make our new datasets, and\nassociated code publicly available for research purposes.", "field": [], "task": ["3D Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["MuPoTS-3D"], "metric": ["MPJPE"], "title": "Single-Shot Multi-Person 3D Pose Estimation From Monocular RGB"} {"abstract": "This paper addresses the task of detecting and recognizing human-object\ninteractions (HOI) in images and videos. We introduce the Graph Parsing Neural\nNetwork (GPNN), a framework that incorporates structural knowledge while being\ndifferentiable end-to-end. For a given scene, GPNN infers a parse graph that\nincludes i) the HOI graph structure represented by an adjacency matrix, and ii)\nthe node labels. Within a message passing inference framework, GPNN iteratively\ncomputes the adjacency matrices and node labels. 
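A stripped-down version of that iterative computation, predicting a soft adjacency from node features and then passing messages with it, might look like the following; the dot-product adjacency, the single update rule and the toy sizes are assumptions rather than the GPNN formulation itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def message_passing(node_feats, weight, num_iters=3):
    """Alternate between (i) inferring a soft adjacency from pairwise node
    similarities and (ii) updating node states by aggregating neighbors."""
    h = node_feats
    for _ in range(num_iters):
        adjacency = sigmoid(h @ h.T)              # soft graph structure
        np.fill_diagonal(adjacency, 0.0)          # no self-messages
        messages = adjacency @ h                  # aggregate neighbor states
        h = np.tanh(messages @ weight)            # node update
    return adjacency, h

rng = np.random.default_rng(0)
nodes = rng.normal(size=(6, 8))                   # e.g. one human + five objects
W = rng.normal(size=(8, 8)) * 0.1
A, states = message_passing(nodes, W)
print(A.shape, states.shape)                      # (6, 6) (6, 8)
```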
We extensively evaluate our\nmodel on three HOI detection benchmarks on images and videos: HICO-DET, V-COCO,\nand CAD-120 datasets. Our approach significantly outperforms state-of-the-art\nmethods, verifying that GPNN is scalable to large datasets and applies to\nspatial-temporal settings. The code is available at\nhttps://github.com/SiyuanQi/gpnn.", "field": [], "task": ["Human-Object Interaction Detection"], "method": [], "dataset": ["HICO-DET", "V-COCO"], "metric": ["MAP"], "title": "Learning Human-Object Interactions by Graph Parsing Neural Networks"} {"abstract": "Cloud detection in satellite images is an important first step in many remote\nsensing applications. This problem is more challenging when only a limited\nnumber of spectral bands are available. To address this problem, a deep\nlearning-based algorithm is proposed in this paper. This algorithm consists of\na Fully Convolutional Network (FCN) that is trained by multiple patches of\nLandsat 8 images. This network, which is called Cloud-Net, is capable of\ncapturing global and local cloud features in an image using its convolutional\nblocks. Since the proposed method is an end-to-end solution, no complicated\npre-processing step is required. Our experimental results prove that the\nproposed method outperforms the state-of-the-art method over a benchmark\ndataset by 8.7\\% in Jaccard Index.", "field": [], "task": ["Cloud Detection"], "method": [], "dataset": ["38-Cloud"], "metric": ["Jaccard (Mean)"], "title": "Cloud-Net: An end-to-end Cloud Detection Algorithm for Landsat 8 Imagery"} {"abstract": "We propose ViDeNN: a CNN for Video Denoising without prior knowledge on the\nnoise distribution (blind denoising). The CNN architecture uses a combination\nof spatial and temporal filtering, learning to spatially denoise the frames\nfirst and at the same time how to combine their temporal information, handling\nobjects motion, brightness changes, low-light conditions and temporal\ninconsistencies. We demonstrate the importance of the data used for CNNs\ntraining, creating for this purpose a specific dataset for low-light\nconditions. We test ViDeNN on common benchmarks and on self-collected data,\nachieving good results comparable with the state-of-the-art.", "field": [], "task": ["Denoising", "Video Denoising"], "method": [], "dataset": ["CBSD68 sigma25", "CBSD68 sigma50", "CBSD68 sigma15", "CBSD68 sigma10", "CBSD68 sigma5", "CBSD68 sigma35"], "metric": ["PSNR"], "title": "ViDeNN: Deep Blind Video Denoising"} {"abstract": "In this work we propose a capsule-based approach for semi-supervised video object segmentation. Current video object segmentation methods are frame-based and often require optical flow to capture temporal consistency across frames which can be difficult to compute. To this end, we propose a video based capsule network, CapsuleVOS, which can segment several frames at once conditioned on a reference frame and segmentation mask. This conditioning is performed through a novel routing algorithm for attention-based efficient capsule selection. We address two challenging issues in video object segmentation: 1) segmentation of small objects and 2) occlusion of objects across time. The issue of segmenting small objects is addressed with a zooming module which allows the network to process small spatial regions of the video. Apart from this, the framework utilizes a novel memory module based on recurrent networks which helps in tracking objects when they move out of frame or are occluded. 
The network is trained end-to-end and we demonstrate its effectiveness on two benchmark video object segmentation datasets; it outperforms current offline approaches on the Youtube-VOS dataset while having a run-time that is almost twice as fast as competing methods. The code is publicly available at https://github.com/KevinDuarte/CapsuleVOS.", "field": [], "task": ["Optical Flow Estimation", "Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation", "Visual Object Tracking", "Youtube-VOS"], "method": [], "dataset": ["YouTube-VOS", "DAVIS 2017 (test-dev)"], "metric": ["Jaccard (Mean)", "Speed (FPS)", "Jaccard (Unseen)", "Jaccard (Seen)", "F-Measure (Seen)", "Overall", "F-measure (Recall)", "Jaccard (Recall)", "F-measure (Mean)", "J&F", "F-Measure (Unseen)"], "title": "CapsuleVOS: Semi-Supervised Video Object Segmentation Using Capsule Routing"} {"abstract": "We examine the novel task of domain-independent scientific concept extraction from abstracts of scholarly articles and present two contributions. First, we suggest a set of generic scientific concepts that have been identified in a systematic annotation process. This set of concepts is utilised to annotate a corpus of scientific abstracts from 10 domains of Science, Technology and Medicine at the phrasal level in a joint effort with domain experts. The resulting dataset is used in a set of benchmark experiments to (a) provide baseline performance for this task, (b) examine the transferability of concepts between domains. Second, we present two deep learning systems as baselines. In particular, we propose active learning to deal with different domains in our task. The experimental results show that (1) a substantial agreement is achievable by non-experts after consultation with domain experts, (2) the baseline system achieves a fairly high F1 score, (3) active learning enables us to nearly halve the amount of required training data.", "field": [], "task": ["Active Learning", "Named Entity Recognition", "Scientific Concept Extraction"], "method": [], "dataset": ["STM-corpus"], "metric": ["Exact Span F1"], "title": "Domain-independent Extraction of Scientific Concepts from Research Articles"} {"abstract": "Real-time video deblurring still remains a challenging task due to the complexity of spatially and temporally varying blur itself and the requirement of low computational cost. To improve the network efficiency, we adopt residual dense blocks into RNN cells, so as to efficiently extract the spatial features of the current frame. Furthermore, a global spatio-temporal attention module is proposed to fuse the effective hierarchical features from past and future frames to help better deblur the current frame. For evaluation, we also collect a novel dataset with paired blurry/sharp video clips by using a co-axis beam splitter system. 
Through experiments on synthetic and realistic datasets, we show that our proposed method can achieve better deblurring performance both quantitatively and qualitatively with less computational cost against state-of-the-art video deblurring methods.", "field": [], "task": ["Deblurring"], "method": [], "dataset": ["GoPro"], "metric": ["SSIM", "PSNR"], "title": "Efficient Spatio-Temporal Recurrent Neural Network for Video Deblurring"} {"abstract": "Tree boosting is a highly effective and widely used machine learning method.\nIn this paper, we describe a scalable end-to-end tree boosting system called\nXGBoost, which is used widely by data scientists to achieve state-of-the-art\nresults on many machine learning challenges. We propose a novel sparsity-aware\nalgorithm for sparse data and weighted quantile sketch for approximate tree\nlearning. More importantly, we provide insights on cache access patterns, data\ncompression and sharding to build a scalable tree boosting system. By combining\nthese insights, XGBoost scales beyond billions of examples using far fewer\nresources than existing systems.", "field": [], "task": ["Dimensionality Reduction", "Humor Detection", "Regression"], "method": [], "dataset": ["200k Short Texts for Humor Detection"], "metric": ["F1-score"], "title": "XGBoost: A Scalable Tree Boosting System"} {"abstract": "Image denoising plays a prominent role in medical image analysis. In many cases, it can drastically accelerate the diagnostic process by enhancing the perceptual quality of noisy image samples. However, despite the extensive practicability of medical image denoising, the existing denoising methods illustrate deficiencies in addressing the diverse range of noise that appears in multidisciplinary medical images. This study alleviates such a challenging denoising task by learning residual noise from a substantial extent of data samples. Additionally, the proposed method accelerates the learning process by introducing a novel deep network, where the network architecture exploits the feature correlation known as the attention mechanism and combines it with spatially refined residual features. The experimental results illustrate that the proposed method can outperform the existing works by a substantial margin in both quantitative and qualitative comparisons. Also, the proposed method can handle real-world image noise and can improve the performance of different medical image analysis tasks without producing any visually disturbing artefacts.", "field": [], "task": ["Denoising", "Image Denoising", "Medical Image Denoising"], "method": [], "dataset": ["Dermatologist level dermoscopy skin cancer classification using different deep learning convolutional neural networks algorithms", "LGG Segmentation Dataset", "Human Protein Atlas Image"], "metric": ["Average PSNR", " SSIM", "SSIM"], "title": "Learning Medical Image Denoising with Deep Dynamic Residual Attention Network"} {"abstract": "Our ability to train end-to-end systems for 3D human pose estimation from\nsingle images is currently constrained by the limited availability of 3D\nannotations for natural images. Most datasets are captured using Motion Capture\n(MoCap) systems in a studio setting and it is difficult to reach the\nvariability of 2D human pose datasets, like MPII or LSP. To alleviate the need\nfor accurate 3D ground truth, we propose to use a weaker supervision signal\nprovided by the ordinal depths of human joints. 
This information can be\nacquired by human annotators for a wide range of images and poses. We showcase\nthe effectiveness and flexibility of training Convolutional Networks (ConvNets)\nwith these ordinal relations in different settings, always achieving\ncompetitive performance with ConvNets trained with accurate 3D joint\ncoordinates. Additionally, to demonstrate the potential of the approach, we\naugment the popular LSP and MPII datasets with ordinal depth annotations. This\nextension allows us to present quantitative and qualitative evaluation in\nnon-studio conditions. Simultaneously, these ordinal annotations can be easily\nincorporated in the training procedure of typical ConvNets for 3D human pose.\nThrough this inclusion we achieve new state-of-the-art performance for the\nrelevant benchmarks and validate the effectiveness of ordinal depth supervision\nfor 3D human pose.", "field": [], "task": ["3D Human Pose Estimation", "Motion Capture", "Pose Estimation"], "method": [], "dataset": ["Human3.6M"], "metric": ["Average MPJPE (mm)"], "title": "Ordinal Depth Supervision for 3D Human Pose Estimation"} {"abstract": "We propose a unified formulation for the problem of 3D human pose estimation\nfrom a single raw RGB image that reasons jointly about 2D joint estimation and\n3D pose reconstruction to improve both tasks. We take an integrated approach\nthat fuses probabilistic knowledge of 3D human pose with a multi-stage CNN\narchitecture and uses the knowledge of plausible 3D landmark locations to\nrefine the search for better 2D locations. The entire process is trained\nend-to-end, is extremely efficient and obtains state-of-the-art results on\nHuman3.6M outperforming previous approaches both on 2D and 3D errors.", "field": [], "task": ["3D Human Pose Estimation", "3D Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["Human3.6M"], "metric": ["Average MPJPE (mm)"], "title": "Lifting from the Deep: Convolutional 3D Pose Estimation from a Single Image"} {"abstract": "Limited labeled data are available for the research of estimating facial\nexpression intensities. For instance, the ability to train deep networks for\nautomated pain assessment is limited by small datasets with labels of\npatient-reported pain intensities. Fortunately, fine-tuning from a\ndata-extensive pre-trained domain, such as face verification, can alleviate\nthis problem. In this paper, we propose a network that fine-tunes a\nstate-of-the-art face verification network using a regularized regression loss\nand additional data with expression labels. In this way, the expression\nintensity regression task can benefit from the rich feature representations\ntrained on a huge amount of data for face verification. The proposed\nregularized deep regressor is applied to estimate the pain expression intensity\nand verified on the widely-used UNBC-McMaster Shoulder-Pain dataset, achieving\nthe state-of-the-art performance. A weighted evaluation metric is also proposed\nto address the imbalance issue of different pain intensities.", "field": [], "task": ["Face Verification", "Pain Intensity Regression", "Regression"], "method": [], "dataset": ["UNBC-McMaster ShoulderPain dataset"], "metric": ["MAE"], "title": "Regularizing Face Verification Nets For Pain Intensity Regression"} {"abstract": "Knowledge graph (KG) contains well-structured external information and has been shown to be effective for high-quality recommendation. 
However, existing KG enhanced recommendation methods have largely focused on exploring advanced neural network architectures to better investigate the structural information of KG. For model learning, however, these methods mainly rely on Negative Sampling (NS) to optimize the models for both the KG embedding task and the recommendation task. Since NS is not robust (e.g., sampling a small fraction of negative instances may lose lots of useful information), it is reasonable to argue that these methods are insufficient to capture collaborative information among users, items, and entities.\r\nIn this paper, we propose a novel Jointly Non-Sampling learning model for Knowledge graph enhanced Recommendation (JNSKR). Specifically, we first design a new efficient NS optimization algorithm for knowledge graph embedding learning. The subgraphs are then encoded by the proposed attentive neural network to better characterize user preference over items. Through novel designs of memorization strategies and a joint learning framework, JNSKR not only models the fine-grained connections among users, items, and entities, but also efficiently learns model parameters from the whole training data (including all non-observed data) with a rather low time complexity. Experimental results on two public benchmarks show that JNSKR significantly outperforms the state-of-the-art methods like RippleNet and KGAT. Remarkably, JNSKR also shows significant advantages in training efficiency (about 20 times faster than KGAT), which makes it more applicable to real-world large-scale systems.", "field": [], "task": ["Graph Embedding", "Knowledge Graph Embedding", "Knowledge Graphs", "Recommendation Systems"], "method": [], "dataset": ["Amazon-Book"], "metric": ["Recall@20", "nDCG@40", "Recall@10", "nDCG@20", "Recall@40", "nDCG@10"], "title": "Jointly Non-Sampling Learning for Knowledge Graph Enhanced Recommendation"} {"abstract": "We propose a unified model combining the strength of extractive and\nabstractive summarization. On the one hand, a simple extractive model can\nobtain sentence-level attention with high ROUGE scores but is less readable. On\nthe other hand, a more complicated abstractive model can obtain word-level\ndynamic attention to generate a more readable paragraph. In our model,\nsentence-level attention is used to modulate the word-level attention such that\nwords in less attended sentences are less likely to be generated. Moreover, a\nnovel inconsistency loss function is introduced to penalize the inconsistency\nbetween two levels of attentions. By end-to-end training our model with the\ninconsistency loss and original losses of extractive and abstractive models, we\nachieve state-of-the-art ROUGE scores while being the most informative and\nreadable summarization on the CNN/Daily Mail dataset in a solid human\nevaluation.", "field": [], "task": ["Abstractive Text Summarization"], "method": [], "dataset": ["CNN / Daily Mail"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "A Unified Model for Extractive and Abstractive Summarization using Inconsistency Loss"} {"abstract": "Detecting individual pedestrians in a crowd remains a challenging problem\nsince the pedestrians often gather together and occlude each other in\nreal-world scenarios. In this paper, we first explore how a state-of-the-art\npedestrian detector is harmed by crowd occlusion via experimentation, providing\ninsights into the crowd occlusion problem. 
Then, we propose a novel bounding\nbox regression loss specifically designed for crowd scenes, termed repulsion\nloss. This loss is driven by two motivations: the attraction by target, and the\nrepulsion by other surrounding objects. The repulsion term prevents the\nproposal from shifting to surrounding objects thus leading to more crowd-robust\nlocalization. Our detector trained by repulsion loss outperforms all the\nstate-of-the-art methods with a significant improvement in occlusion cases.", "field": [], "task": ["Pedestrian Detection", "Regression"], "method": [], "dataset": ["CityPersons", "Caltech"], "metric": ["Reasonable MR^-2", "Heavy MR^-2", "Reasonable Miss Rate", "Partial MR^-2", "Bare MR^-2"], "title": "Repulsion Loss: Detecting Pedestrians in a Crowd"} {"abstract": "This paper proposes the novel Pose Guided Person Generation Network (PG$^2$)\nthat allows to synthesize person images in arbitrary poses, based on an image\nof that person and a novel pose. Our generation framework PG$^2$ utilizes the\npose information explicitly and consists of two key stages: pose integration\nand image refinement. In the first stage the condition image and the target\npose are fed into a U-Net-like network to generate an initial but coarse image\nof the person with the target pose. The second stage then refines the initial\nand blurry result by training a U-Net-like generator in an adversarial way.\nExtensive experimental results on both 128$\\times$64 re-identification images\nand 256$\\times$256 fashion photos show that our model generates high-quality\nperson images with convincing details.", "field": [], "task": ["Gesture-to-Gesture Translation", "Image Generation", "Pose Transfer"], "method": [], "dataset": ["Senz3D", "NTU Hand Digit", "Deep-Fashion"], "metric": ["SSIM", "PSNR", "AMT", "IS"], "title": "Pose Guided Person Image Generation"} {"abstract": "Traditional models for question answering optimize using cross entropy loss,\nwhich encourages exact answers at the cost of penalizing nearby or overlapping\nanswers that are sometimes equally accurate. We propose a mixed objective that\ncombines cross entropy loss with self-critical policy learning. The objective\nuses rewards derived from word overlap to solve the misalignment between\nevaluation metric and optimization objective. In addition to the mixed\nobjective, we improve dynamic coattention networks (DCN) with a deep residual\ncoattention encoder that is inspired by recent work in deep self-attention and\nresidual networks. Our proposals improve model performance across question\ntypes and input lengths, especially for long questions that requires the\nability to capture long-term dependencies. On the Stanford Question Answering\nDataset, our model achieves state-of-the-art results with 75.1% exact match\naccuracy and 83.1% F1, while the ensemble obtains 78.9% exact match accuracy\nand 86.0% F1.", "field": [], "task": ["Question Answering"], "method": [], "dataset": ["SQuAD1.1 dev", "SQuAD1.1"], "metric": ["EM", "F1"], "title": "DCN+: Mixed Objective and Deep Residual Coattention for Question Answering"} {"abstract": "The ability of deep convolutional neural networks (CNN) to learn\ndiscriminative spectro-temporal patterns makes them well suited to\nenvironmental sound classification. However, the relative scarcity of labeled\ndata has impeded the exploitation of this family of high-capacity models. 
This\nstudy has two primary contributions: first, we propose a deep convolutional\nneural network architecture for environmental sound classification. Second, we\npropose the use of audio data augmentation for overcoming the problem of data\nscarcity and explore the influence of different augmentations on the\nperformance of the proposed CNN architecture. Combined with data augmentation,\nthe proposed model produces state-of-the-art results for environmental sound\nclassification. We show that the improved performance stems from the\ncombination of a deep, high-capacity model and an augmented training set: this\ncombination outperforms both the proposed CNN without augmentation and a\n\"shallow\" dictionary learning model with augmentation. Finally, we examine the\ninfluence of each augmentation on the model's classification accuracy for each\nclass, and observe that the accuracy for each class is influenced differently\nby each augmentation, suggesting that the performance of the model could be\nimproved further by applying class-conditional data augmentation.", "field": [], "task": ["Data Augmentation", "Dictionary Learning", "Environmental Sound Classification"], "method": [], "dataset": ["UrbanSound8k"], "metric": ["Accuracy (10-fold)"], "title": "Deep Convolutional Neural Networks and Data Augmentation for Environmental Sound Classification"} {"abstract": "Adversarial training provides a means of regularizing supervised learning\nalgorithms while virtual adversarial training is able to extend supervised\nlearning algorithms to the semi-supervised setting. However, both methods\nrequire making small perturbations to numerous entries of the input vector,\nwhich is inappropriate for sparse high-dimensional inputs such as one-hot word\nrepresentations. We extend adversarial and virtual adversarial training to the\ntext domain by applying perturbations to the word embeddings in a recurrent\nneural network rather than to the original input itself. The proposed method\nachieves state of the art results on multiple benchmark semi-supervised and\npurely supervised tasks. We provide visualizations and analysis showing that\nthe learned word embeddings have improved in quality and that while training,\nthe model is less prone to overfitting.", "field": [], "task": ["Semi Supervised Text Classification", "Semi-Supervised Text Classification", "Sentiment Analysis", "Text Classification", "Word Embeddings"], "method": [], "dataset": ["IMDb"], "metric": ["Accuracy"], "title": "Adversarial Training Methods for Semi-Supervised Text Classification"} {"abstract": "Many language generation tasks require the production of text conditioned on\nboth structured and unstructured inputs. We present a novel neural network\narchitecture which generates an output sequence conditioned on an arbitrary\nnumber of input functions. Crucially, our approach allows both the choice of\nconditioning context and the granularity of generation, for example characters\nor tokens, to be marginalised, thus permitting scalable and effective training.\nUsing this framework, we address the problem of generating programming code\nfrom a mixed natural language and structured specification. We create two new\ndata sets for this paradigm derived from the collectible trading card games\nMagic the Gathering and Hearthstone. 
On these, and a third preexisting corpus,\nwe demonstrate that marginalising multiple predictors allows our model to\noutperform strong benchmarks.", "field": [], "task": ["Card Games", "Code Generation", "Text Generation"], "method": [], "dataset": ["Django"], "metric": ["Accuracy"], "title": "Latent Predictor Networks for Code Generation"} {"abstract": "This paper considers the task of articulated human pose estimation of\nmultiple people in real world images. We propose an approach that jointly\nsolves the tasks of detection and pose estimation: it infers the number of\npersons in a scene, identifies occluded body parts, and disambiguates body\nparts between people in close proximity of each other. This joint formulation\nis in contrast to previous strategies, that address the problem by first\ndetecting people and subsequently estimating their body pose. We propose a\npartitioning and labeling formulation of a set of body-part hypotheses\ngenerated with CNN-based part detectors. Our formulation, an instance of an\ninteger linear program, implicitly performs non-maximum suppression on the set\nof part candidates and groups them to form configurations of body parts\nrespecting geometric and appearance constraints. Experiments on four different\ndatasets demonstrate state-of-the-art results for both single person and multi\nperson pose estimation. Models and code available at\nhttp://pose.mpi-inf.mpg.de.", "field": [], "task": ["Multi-Person Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["WAF", "MPII Human Pose"], "metric": ["AOP", "PCKh-0.5"], "title": "DeepCut: Joint Subset Partition and Labeling for Multi Person Pose Estimation"} {"abstract": "We investigate different approaches for dialect identification in Arabic\nbroadcast speech, using phonetic, lexical features obtained from a speech\nrecognition system, and acoustic features using the i-vector framework. We\nstudied both generative and discriminate classifiers, and we combined these\nfeatures using a multi-class Support Vector Machine (SVM). We validated our\nresults on an Arabic/English language identification task, with an accuracy of\n100%. We used these features in a binary classifier to discriminate between\nModern Standard Arabic (MSA) and Dialectal Arabic, with an accuracy of 100%. We\nfurther report results using the proposed method to discriminate between the\nfive most widely used dialects of Arabic: namely Egyptian, Gulf, Levantine,\nNorth African, and MSA, with an accuracy of 52%. We discuss dialect\nidentification errors in the context of dialect code-switching between\nDialectal Arabic and MSA, and compare the error pattern between manually\nlabeled data, and the output from our classifier. We also release the train and\ntest data as standard corpus for dialect identification.", "field": [], "task": ["Dialect Identification", "Language Identification", "Speech Recognition", "Spoken language identification"], "method": [], "dataset": ["Untranscribed mixed-speech dataset"], "metric": ["RCL", "PRC", "ACC"], "title": "Automatic Dialect Detection in Arabic Broadcast Speech"} {"abstract": "Our analysis of large summarization datasets indicates that redundancy is a very serious problem when summarizing long documents. Yet, redundancy reduction has not been thoroughly investigated in neural summarization. In this work, we systematically explore and compare different ways to deal with redundancy when summarizing long documents. 
Specifically, we organize the existing methods into categories based on when and how the redundancy is considered. Then, in the context of these categories, we propose three additional methods balancing non-redundancy and importance in a general and flexible way. In a series of experiments, we show that our proposed methods achieve the state-of-the-art with respect to ROUGE scores on two scientific paper datasets, Pubmed and arXiv, while reducing redundancy significantly.", "field": [], "task": ["Text Summarization"], "method": [], "dataset": ["arXiv", "Pubmed"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Systematically Exploring Redundancy Reduction in Summarizing Long Documents"} {"abstract": "Electroencephalography (EEG) measures the neuronal activities in different brain regions via electrodes. Many existing studies on EEG-based emotion recognition do not fully exploit the topology of EEG channels. In this paper, we propose a regularized graph neural network (RGNN) for EEG-based emotion recognition. RGNN considers the biological topology among different brain regions to capture both local and global relations among different EEG channels. Specifically, we model the inter-channel relations in EEG signals via an adjacency matrix in a graph neural network where the connection and sparseness of the adjacency matrix are inspired by neuroscience theories of human brain organization. In addition, we propose two regularizers, namely node-wise domain adversarial training (NodeDAT) and emotion-aware distribution learning (EmotionDL), to better handle cross-subject EEG variations and noisy labels, respectively. Extensive experiments on two public datasets, SEED and SEED-IV, demonstrate the superior performance of our model compared to state-of-the-art models in most experimental settings. Moreover, ablation studies show that the proposed adjacency matrix and two regularizers contribute consistent and significant gain to the performance of our RGNN model. Finally, investigations on the neuronal activities reveal important brain regions and inter-channel relations for EEG-based emotion recognition.", "field": [], "task": ["EEG", "Emotion Recognition"], "method": [], "dataset": ["SEED-IV"], "metric": ["Accuracy"], "title": "EEG-Based Emotion Recognition Using Regularized Graph Neural Networks"} {"abstract": "Fine-grained visual classification aims to recognize images belonging to multiple sub-categories within the same category. It is a challenging task due to the inherently subtle variations among highly-confused categories. Most existing methods only take an individual image as input, which may limit the ability of models to recognize contrastive clues from different images. In this paper, we propose an effective method called progressive co-attention network (PCA-Net) to tackle this problem. Specifically, we calculate the channel-wise similarity by interacting the feature channels within same-category images to capture the common discriminative features. Considering that complementary information is also crucial for recognition, we erase the prominent areas enhanced by the channel interaction to force the network to focus on other discriminative regions. The proposed model can be trained in an end-to-end manner, and only requires image-level label supervision. 
It has achieved competitive results on three fine-grained visual classification benchmark datasets: CUB-200-2011, Stanford Cars, and FGVC Aircraft.", "field": [], "task": ["Fine-Grained Image Classification"], "method": [], "dataset": [" CUB-200-2011"], "metric": ["Accuracy"], "title": "Progressive Co-Attention Network for Fine-grained Visual Classification"} {"abstract": "Metric-based meta-learning techniques have successfully been applied to few-shot classification problems. In this paper, we propose to leverage cross-modal information to enhance metric-based few-shot learning methods. Visual and semantic feature spaces have different structures by definition. For certain concepts, visual features might be richer and more discriminative than text ones. While for others, the inverse might be true. Moreover, when the support from visual information is limited in image classification, semantic representations (learned from unsupervised text corpora) can provide strong prior knowledge and context to help learning. Based on these two intuitions, we propose a mechanism that can adaptively combine information from both modalities according to new image categories to be learned. Through a series of experiments, we show that by this adaptive combination of the two modalities, our model outperforms current uni-modality few-shot learning methods and modality-alignment methods by a large margin on all benchmarks and few-shot scenarios tested. Experiments also show that our model can effectively adjust its focus on the two modalities. The improvement in performance is particularly large when the number of shots is very small.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Image Classification", "Meta-Learning"], "method": [], "dataset": ["Mini-Imagenet 5-way (1-shot)", "Tiered ImageNet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "Mini-Imagenet 5-way (10-shot)", "Tiered ImageNet 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Adaptive Cross-Modal Few-Shot Learning"} {"abstract": "Previously, neural methods in grammatical error correction (GEC) did not\nreach state-of-the-art results compared to phrase-based statistical machine\ntranslation (SMT) baselines. We demonstrate parallels between neural GEC and\nlow-resource neural MT and successfully adapt several methods from low-resource\nMT to neural GEC. We further establish guidelines for trustable results in\nneural GEC and propose a set of model-independent methods for neural GEC that\ncan be easily applied in most GEC settings. Proposed methods include adding\nsource-side noise, domain-adaptation techniques, a GEC-specific\ntraining-objective, transfer learning with monolingual data, and ensembling of\nindependently trained GEC models and language models. The combined effects of\nthese methods result in better than state-of-the-art neural GEC models that\noutperform previously best neural GEC systems by more than 10% M$^2$ on the\nCoNLL-2014 benchmark and 5.9% on the JFLEG test set. 
Non-neural\nstate-of-the-art systems are outperformed by more than 2% on the CoNLL-2014\nbenchmark and by 4% on JFLEG.", "field": [], "task": ["Domain Adaptation", "Grammatical Error Correction", "Machine Translation", "Transfer Learning"], "method": [], "dataset": ["Restricted", "_Restricted_", "CoNLL-2014 Shared Task", "JFLEG"], "metric": ["F0.5", "GLEU"], "title": "Approaching Neural Grammatical Error Correction as a Low-Resource Machine Translation Task"} {"abstract": "Many classic tasks in vision -- such as the estimation of optical flow or stereo disparities -- can be cast as dense correspondence matching. Well-known techniques for doing so make use of a cost volume, typically a 4D tensor of match costs between all pixels in a 2D image and their potential matches in a 2D search window. State-of-the-art (SOTA) deep networks for flow/stereo make use of such volumetric representations as internal layers. However, such layers require significant amounts of memory and compute, making them cumbersome to use in practice. As a result, SOTA networks also employ various heuristics designed to limit volumetric processing, leading to limited accuracy and overfitting. Instead, we introduce several simple modifications that dramatically simplify the use of volumetric layers - (1) volumetric encoder-decoder architectures that efficiently capture large receptive fields, (2) multi-channel cost volumes that capture multi-dimensional notions of pixel similarities, and finally, (3) separable volumetric filtering that significantly reduces computation and parameters while preserving accuracy. Our innovations dramatically improve accuracy over SOTA on standard benchmarks while being significantly easier to work with - training converges in 10X fewer iterations, and most importantly, our networks generalize across correspondence tasks. On-the-fly adaptation of search windows allows us to repurpose optical flow networks for stereo (and vice versa), and can also be used to implement adaptive networks that increase search window sizes on-demand.", "field": [], "task": ["Optical Flow Estimation"], "method": [], "dataset": ["Sintel-final"], "metric": ["Average End-Point Error"], "title": "Volumetric Correspondence Networks for Optical Flow"} {"abstract": "Weakly-supervised temporal action localization is a very challenging problem because frame-wise labels are not given in the training stage while the only hint is video-level labels: whether each video contains action frames of interest. Previous methods aggregate frame-level class scores to produce video-level prediction and learn from video-level action labels. This formulation does not fully model the problem in that background frames are forced to be misclassified as action classes to predict video-level labels accurately. In this paper, we design Background Suppression Network (BaS-Net) which introduces an auxiliary class for background and has a two-branch weight-sharing architecture with an asymmetrical training strategy. This enables BaS-Net to suppress activations from background frames to improve localization performance. Extensive experiments demonstrate the effectiveness of BaS-Net and its superiority over the state-of-the-art methods on the most popular benchmarks - THUMOS'14 and ActivityNet. 
Our code and the trained model are available at https://github.com/Pilhyeon/BaSNet-pytorch.", "field": [], "task": ["Action Localization", "Temporal Action Localization", "Weakly Supervised Action Localization", "Weakly-supervised Temporal Action Localization", "Weakly Supervised Temporal Action Localization"], "method": [], "dataset": ["ActivityNet-1.2", "ActivityNet-1.3", "THUMOS 2014"], "metric": ["mAP@0.5", "mAP@0.1:0.7"], "title": "Background Suppression Network for Weakly-supervised Temporal Action Localization"} {"abstract": "Learning similarity functions between image pairs with deep neural networks\nyields highly correlated activations of embeddings. In this work, we show how\nto improve the robustness of such embeddings by exploiting the independence\nwithin ensembles. To this end, we divide the last embedding layer of a deep\nnetwork into an embedding ensemble and formulate training this ensemble as an\nonline gradient boosting problem. Each learner receives a reweighted training\nsample from the previous learners. Further, we propose two loss functions which\nincrease the diversity in our ensemble. These loss functions can be applied\neither for weight initialization or during training. Together, our\ncontributions leverage large embedding sizes more effectively by significantly\nreducing correlation of the embedding and consequently increase retrieval\naccuracy of the embedding. Our method works with any differentiable loss\nfunction and does not introduce any additional parameters during test time. We\nevaluate our metric learning method on image retrieval tasks and show that it\nimproves over state-of-the-art methods on the CUB 200-2011, Cars-196, Stanford\nOnline Products, In-Shop Clothes Retrieval and VehicleID datasets.", "field": [], "task": ["Image Retrieval", "Metric Learning"], "method": [], "dataset": ["SOP"], "metric": ["R@1"], "title": "Deep Metric Learning with BIER: Boosting Independent Embeddings Robustly"} {"abstract": "In recent years researchers have achieved considerable success applying\nneural network methods to question answering (QA). These approaches have\nachieved state of the art results in simplified closed-domain settings such as\nthe SQuAD (Rajpurkar et al., 2016) dataset, which provides a pre-selected\npassage, from which the answer to a given question may be extracted. More\nrecently, researchers have begun to tackle open-domain QA, in which the model\nis given a question and access to a large corpus (e.g., wikipedia) instead of a\npre-selected passage (Chen et al., 2017a). This setting is more complex as it\nrequires large-scale search for relevant passages by an information retrieval\ncomponent, combined with a reading comprehension model that \"reads\" the\npassages to generate an answer to the question. Performance in this setting\nlags considerably behind closed-domain performance. In this paper, we present a\nnovel open-domain QA system called Reinforced Ranker-Reader $(R^3)$, based on\ntwo algorithmic innovations. First, we propose a new pipeline for open-domain\nQA with a Ranker component, which learns to rank retrieved passages in terms of\nlikelihood of generating the ground-truth answer to a given question. Second,\nwe propose a novel method that jointly trains the Ranker along with an\nanswer-generation Reader model, based on reinforcement learning. 
We report\nextensive experimental results showing that our method significantly improves\non the state of the art for multiple open-domain QA datasets.", "field": [], "task": ["Information Retrieval", "Open-Domain Question Answering", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["SearchQA", "Quasar"], "metric": ["N-gram F1", "Unigram Acc", "F1", "EM", "EM (Quasar-T)", "F1 (Quasar-T)"], "title": "R$^3$: Reinforced Reader-Ranker for Open-Domain Question Answering"} {"abstract": "Sentence simplification aims to make sentences easier to read and understand.\nMost recent approaches draw on insights from machine translation to learn\nsimplification rewrites from monolingual corpora of complex and simple\nsentences. We address the simplification problem with an encoder-decoder model\ncoupled with a deep reinforcement learning framework. Our model, which we call\n{\\sc Dress} (as shorthand for {\\bf D}eep {\\bf RE}inforcement {\\bf S}entence\n{\\bf S}implification), explores the space of possible simplifications while\nlearning to optimize a reward function that encourages outputs which are\nsimple, fluent, and preserve the meaning of the input. Experiments on three\ndatasets demonstrate that our model outperforms competitive simplification\nsystems.", "field": [], "task": ["Sentence Compression", "Text Simplification"], "method": [], "dataset": ["PWKP / WikiSmall", "ASSET", "Newsela", "TurkCorpus"], "metric": ["BLEU", "SARI (EASSE>=0.2.1)", "SARI"], "title": "Sentence Simplification with Deep Reinforcement Learning"} {"abstract": "A good image-to-image translation model should learn a mapping between different visual domains while satisfying the following properties: 1) diversity of generated images and 2) scalability over multiple domains. Existing methods address either of the issues, having limited diversity or multiple models for all domains. We propose StarGAN v2, a single framework that tackles both and shows significantly improved results over the baselines. Experiments on CelebA-HQ and a new animal faces dataset (AFHQ) validate our superiority in terms of visual quality, diversity, and scalability. To better assess image-to-image translation models, we release AFHQ, high-quality animal faces with large inter- and intra-domain differences. The code, pretrained models, and dataset can be found at https://github.com/clovaai/stargan-v2.", "field": [], "task": ["Fundus to Angiography Generation", "Image Generation", "Image-to-Image Translation", "Multimodal Unsupervised Image-To-Image Translation"], "method": [], "dataset": ["Fundus Fluorescein Angiogram Photographs & Colour Fundus Images of Diabetic Patients", "AFHQ", "CelebA-HQ"], "metric": ["Kernel Inception Distance", "FID", "LPIPS"], "title": "StarGAN v2: Diverse Image Synthesis for Multiple Domains"} {"abstract": "Recent deep learning methods for object detection rely on a large amount of bounding box annotations. Collecting these annotations is laborious and costly, yet supervised models do not generalize well when testing on images from a different distribution. Domain adaptation provides a solution by adapting existing labels to the target testing data. However, a large gap between domains could make adaptation a challenging task, which leads to unstable training processes and sub-optimal results. In this paper, we propose to bridge the domain gap with an intermediate domain and progressively solve easier adaptation subtasks. 
This intermediate domain is constructed by translating the source images to mimic the ones in the target domain. To tackle the domain-shift problem, we adopt adversarial learning to align distributions at the feature level. In addition, a weighted task loss is applied to deal with unbalanced image quality in the intermediate domain. Experimental results show that our method performs favorably against the state-of-the-art method in terms of the performance on the target domain.", "field": [], "task": ["Domain Adaptation", "Object Detection"], "method": [], "dataset": ["Cityscapes-to-Foggy Cityscapes"], "metric": ["mAP"], "title": "Progressive Domain Adaptation for Object Detection"} {"abstract": "Learning with complete or partial supervision is powerful but relies on\never-growing human annotation efforts. As a way to mitigate this serious\nproblem, as well as to serve specific applications, unsupervised learning has\nemerged as an important field of research. In computer vision, unsupervised\nlearning comes in various guises. We focus here on the unsupervised discovery\nand matching of object categories among images in a collection, following the\nwork of Cho et al. 2015. We show that the original approach can be reformulated\nand solved as a proper optimization problem. Experiments on several benchmarks\nestablish the merit of our approach.", "field": [], "task": ["Object Discovery", "Single-object colocalization", "Single-object discovery"], "method": [], "dataset": ["VOC_6x2", "VOC_all", "Object Discovery"], "metric": ["CorLoc"], "title": "Unsupervised Image Matching and Object Discovery as Optimization"} {"abstract": "We present several techniques to tackle the mismatch in class distributions between training and test data in the Contextual Emotion Detection task of SemEval 2019, by extending the existing methods for class imbalance problem. Reducing the distance between the distribution of prediction and ground truth, they consistently show positive effects on the performance. Also we propose a novel neural architecture which utilizes representation of overall context as well as of each utterance. The combination of the methods and the models achieved micro F1 score of about 0.766 on the final evaluation.", "field": [], "task": ["Emotion Recognition in Conversation"], "method": [], "dataset": ["EC"], "metric": ["Micro-F1"], "title": "SNU IDS at SemEval-2019 Task 3: Addressing Training-Test Class Distribution Mismatch in Conversational Classification"} {"abstract": "We propose a novel deep learning-based framework to tackle the challenge of\nsemantic segmentation of large-scale point clouds of millions of points. We\nargue that the organization of 3D point clouds can be efficiently captured by a\nstructure called superpoint graph (SPG), derived from a partition of the\nscanned scene into geometrically homogeneous elements. SPGs offer a compact yet\nrich representation of contextual relationships between object parts, which is\nthen exploited by a graph convolutional network. 
Our framework sets a new state\nof the art for segmenting outdoor LiDAR scans (+11.9 and +8.8 mIoU points for\nboth Semantic3D test sets), as well as indoor scans (+12.4 mIoU points for the\nS3DIS dataset).", "field": [], "task": ["3D Semantic Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["Semantic3D", "S3DIS Area5", "S3DIS", "SemanticKITTI"], "metric": ["oAcc", "Mean IoU", "mAcc", "mIoU"], "title": "Large-scale Point Cloud Semantic Segmentation with Superpoint Graphs"} {"abstract": "Much of the world's data is streaming, time-series data, where anomalies give\nsignificant information in critical situations. Yet detecting anomalies in\nstreaming data is a difficult task, requiring detectors to process data in\nreal-time, and learn while simultaneously making predictions. We present a\nnovel anomaly detection technique based on an on-line sequence memory algorithm\ncalled Hierarchical Temporal Memory (HTM). We show results from a live\napplication that detects anomalies in financial metrics in real-time. We also\ntest the algorithm on NAB, a published benchmark for real-time anomaly\ndetection, where our algorithm achieves best-in-class results.", "field": [], "task": ["Anomaly Detection", "Time Series"], "method": [], "dataset": ["Numenta Anomaly Benchmark"], "metric": ["NAB score"], "title": "Real-Time Anomaly Detection for Streaming Analytics"} {"abstract": "In this paper, we propose a separable structure modeling approach for semi-supervised video object segmentation.\r\nUnlike most existing methods which preclude the semantically structural information of target objects, our method not only captures pixel-level similarity relationships between the reference and target frames but also reveals the separable structure of the specified objects in target frames. Specifically, we first compute a pixel-wise similarity matrix by using representations of reference and target pixels and then select top-rank reference pixels for target pixel classification. According to the prior knowledge from these top-rank reference pixels, we further appoint the representative target pixels for object structure modeling. Particularly, in the structure modeling branch, we extract the shared and individual features that can well represent the whole object and its components, respectively. Moreover, the proposed method is a fast algorithm without online fine-tuning or any post-processing. We conduct extensive experiments and ablation studies on the DAVIS-16, DAVIS-17, and YouTube-VOS datasets, and experimental results on three widely-used datasets demonstrate that our method achieves superior performance, compared with state-of-the-art semi-supervised video object segmentation approaches in terms of speed and accuracy.", "field": [], "task": ["Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation", "Youtube-VOS"], "method": [], "dataset": ["DAVIS 2017 (val)", "YouTube-VOS", "DAVIS 2017 (test-dev)", "DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "Speed (FPS)", "Jaccard (Unseen)", "F-Measure (Seen)", "Jaccard (Seen)", "Speed (FPS)", "Jaccard (Decay)", "Overall", "F-measure (Recall)", "Jaccard (Recall)", "F-measure (Mean)", "J&F", "F-Measure (Unseen)"], "title": "Separable Structure Modeling for Semi-supervised Video Object Segmentation"} {"abstract": "This paper introduces a video dataset of spatio-temporally localized Atomic\nVisual Actions (AVA). 
The AVA dataset densely annotates 80 atomic visual\nactions in 430 15-minute video clips, where actions are localized in space and\ntime, resulting in 1.58M action labels with multiple labels per person\noccurring frequently. The key characteristics of our dataset are: (1) the\ndefinition of atomic visual actions, rather than composite actions; (2) precise\nspatio-temporal annotations with possibly multiple annotations for each person;\n(3) exhaustive annotation of these atomic actions over 15-minute video clips;\n(4) people temporally linked across consecutive segments; and (5) using movies\nto gather a varied set of action representations. This departs from existing\ndatasets for spatio-temporal action recognition, which typically provide sparse\nannotations for composite actions in short video clips. We will release the\ndataset publicly.\n AVA, with its realistic scene and action complexity, exposes the intrinsic\ndifficulty of action recognition. To benchmark this, we present a novel\napproach for action localization that builds upon the current state-of-the-art\nmethods, and demonstrates better performance on JHMDB and UCF101-24 categories.\nWhile setting a new state of the art on existing datasets, the overall results\non AVA are low at 15.6% mAP, underscoring the need for developing new\napproaches for video understanding.", "field": [], "task": ["Action Localization", "Action Recognition", "Temporal Action Localization", "Video Understanding"], "method": [], "dataset": ["AVA v2.1", "J-HMDB-21", "UCF101-24"], "metric": ["Video-mAP 0.5", "mAP (Val)", "Frame-mAP"], "title": "AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions"} {"abstract": "One key task of fine-grained sentiment analysis of product reviews is to\nextract product aspects or features that users have expressed opinions on. This\npaper focuses on supervised aspect extraction using deep learning. Unlike other\nhighly sophisticated supervised deep learning models, this paper proposes a\nnovel and yet simple CNN model employing two types of pre-trained embeddings\nfor aspect extraction: general-purpose embeddings and domain-specific\nembeddings. Without using any additional supervision, this model achieves\nsurprisingly good results, outperforming state-of-the-art sophisticated\nexisting methods. To our knowledge, this paper is the first to report such\ndouble embeddings based CNN model for aspect extraction and achieve very good\nresults.", "field": [], "task": ["Aspect-Based Sentiment Analysis", "Aspect Extraction", "Sentiment Analysis"], "method": [], "dataset": ["SemEval 2014 Task 4 Sub Task 1", "SemEval-2016 Task 5 Subtask 1", "SemEval 2014 Task 4 Sub Task 2", " SemEval 2015 Task 12"], "metric": ["Restaurant (F1)", "F1", "Laptop (F1)"], "title": "Double Embeddings and CNN-based Sequence Labeling for Aspect Extraction"} {"abstract": "Shadow detection is a fundamental and challenging task, since it requires an\nunderstanding of global image semantics and there are various backgrounds\naround shadows. This paper presents a novel network for shadow detection by\nanalyzing image context in a direction-aware manner. To achieve this, we first\nformulate the direction-aware attention mechanism in a spatial recurrent neural\nnetwork (RNN) by introducing attention weights when aggregating spatial context\nfeatures in the RNN. By learning these weights through training, we can recover\ndirection-aware spatial context (DSC) for detecting shadows. 
This design is\ndeveloped into the DSC module and embedded in a CNN to learn DSC features at\ndifferent levels. Moreover, a weighted cross entropy loss is designed to make\nthe training more effective. We employ two common shadow detection benchmark\ndatasets and perform various experiments to evaluate our network. Experimental\nresults show that our network outperforms state-of-the-art methods and achieves\n97% accuracy and 38% reduction on balance error rate.", "field": [], "task": ["Detecting Shadows", "Shadow Detection"], "method": [], "dataset": ["UCF", "SBU", "ISTD"], "metric": ["Balanced Error Rate", "BER"], "title": "Direction-aware Spatial Context Features for Shadow Detection"} {"abstract": "In this paper, we study object detection using a large pool of unlabeled\nimages and only a few labeled images per category, named \"few-example object\ndetection\". The key challenge consists in generating trustworthy training\nsamples as many as possible from the pool. Using few training examples as\nseeds, our method iterates between model training and high-confidence sample\nselection. In training, easy samples are generated first and, then the poorly\ninitialized model undergoes improvement. As the model becomes more\ndiscriminative, challenging but reliable samples are selected. After that,\nanother round of model improvement takes place. To further improve the\nprecision and recall of the generated training samples, we embed multiple\ndetection models in our framework, which has proven to outperform the single\nmodel baseline and the model ensemble method. Experiments on PASCAL VOC'07, MS\nCOCO'14, and ILSVRC'13 indicate that by using as few as three or four samples\nselected for each category, our method produces very competitive results when\ncompared to the state-of-the-art weakly-supervised approaches using a large\nnumber of image-level labels.", "field": [], "task": ["Object Detection"], "method": [], "dataset": ["COCO", "PASCAL VOC 2012 test", "PASCAL VOC 2007", "ImageNet"], "metric": ["MAP"], "title": "Few-Example Object Detection with Model Communication"} {"abstract": "People enjoy food photography because they appreciate food. Behind each meal there is a story described in a complex recipe and, unfortunately, by simply looking at a food image we do not have access to its preparation process. Therefore, in this paper we introduce an inverse cooking system that recreates cooking recipes given food images. Our system predicts ingredients as sets by means of a novel architecture, modeling their dependencies without imposing any order, and then generates cooking instructions by attending to both image and its inferred ingredients simultaneously. We extensively evaluate the whole system on the large-scale Recipe1M dataset and show that (1) we improve performance w.r.t. previous baselines for ingredient prediction; (2) we are able to obtain high quality recipes by leveraging both image and ingredients; (3) our system is able to produce more compelling recipes than retrieval-based approaches according to human judgment. We make code and models publicly available.", "field": [], "task": ["Recipe Generation"], "method": [], "dataset": ["Recipe1M"], "metric": ["Mean IoU", "F1"], "title": "Inverse Cooking: Recipe Generation from Food Images"} {"abstract": "The reinforcement learning community has made great strides in designing\nalgorithms capable of exceeding human performance on specific tasks. 
These\nalgorithms are mostly trained one task at the time, each new task requiring to\ntrain a brand new agent instance. This means the learning algorithm is general,\nbut each solution is not; each agent can only solve the one task it was trained\non. In this work, we study the problem of learning to master not one but\nmultiple sequential-decision tasks at once. A general issue in multi-task\nlearning is that a balance must be found between the needs of multiple tasks\ncompeting for the limited resources of a single learning system. Many learning\nalgorithms can get distracted by certain tasks in the set of tasks to solve.\nSuch tasks appear more salient to the learning process, for instance because of\nthe density or magnitude of the in-task rewards. This causes the algorithm to\nfocus on those salient tasks at the expense of generality. We propose to\nautomatically adapt the contribution of each task to the agent's updates, so\nthat all tasks have a similar impact on the learning dynamics. This resulted in\nstate of the art performance on learning to play all games in a set of 57\ndiverse Atari games. Excitingly, our method learned a single trained policy -\nwith a single set of weights - that exceeds median human performance. To our\nknowledge, this was the first time a single agent surpassed human-level\nperformance on this multi-task domain. The same approach also demonstrated\nstate of the art performance on a set of 30 tasks in the 3D reinforcement\nlearning platform DeepMind Lab.", "field": [], "task": ["Atari Games", "Multi-Task Learning"], "method": [], "dataset": ["Dmlab-30", "Atari-57"], "metric": ["Medium Human-Normalized Score"], "title": "Multi-task Deep Reinforcement Learning with PopArt"} {"abstract": "We propose a selective encoding model to extend the sequence-to-sequence\nframework for abstractive sentence summarization. It consists of a sentence\nencoder, a selective gate network, and an attention equipped decoder. The\nsentence encoder and decoder are built with recurrent neural networks. The\nselective gate network constructs a second level sentence representation by\ncontrolling the information flow from encoder to decoder. The second level\nrepresentation is tailored for sentence summarization task, which leads to\nbetter performance. We evaluate our model on the English Gigaword, DUC 2004 and\nMSR abstractive sentence summarization datasets. The experimental results show\nthat the proposed selective encoding model outperforms the state-of-the-art\nbaseline models.", "field": [], "task": ["Sentence Summarization"], "method": [], "dataset": ["GigaWord", "DUC 2004 Task 1"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Selective Encoding for Abstractive Sentence Summarization"} {"abstract": "We introduce an end-to-end deep-learning framework for 3D medical image\nregistration. In contrast to existing approaches, our framework combines two\nregistration methods: an affine registration and a vector\nmomentum-parameterized stationary velocity field (vSVF) model. Specifically, it\nconsists of three stages. In the first stage, a multi-step affine network\npredicts affine transform parameters. In the second stage, we use a Unet-like\nnetwork to generate a momentum, from which a velocity field can be computed via\nsmoothing. Finally, in the third stage, we employ a self-iterable map-based\nvSVF component to provide a non-parametric refinement based on the current\nestimate of the transformation map. 
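The multi-task reinforcement learning abstract above (the PopArt paper) adapts each task's contribution so that all tasks have a similar impact on the learning dynamics. One core ingredient of PopArt is normalising value targets with per-task running statistics; only that statistics-tracking step is sketched below, under assumed names and a simple exponential-moving-average update, and the part of the method that rescales the output layer so predictions are preserved is omitted.

```python
import numpy as np

class PerTaskNormalizer:
    """Track running mean/std of returns for each task and normalise targets."""
    def __init__(self, n_tasks, beta=3e-4):
        self.mean = np.zeros(n_tasks)
        self.sq = np.ones(n_tasks)     # running second moment
        self.beta = beta

    def update(self, task_id, targets):
        t = np.asarray(targets, dtype=np.float64)
        self.mean[task_id] += self.beta * (t.mean() - self.mean[task_id])
        self.sq[task_id] += self.beta * ((t ** 2).mean() - self.sq[task_id])

    def normalise(self, task_id, targets):
        std = np.sqrt(max(self.sq[task_id] - self.mean[task_id] ** 2, 1e-4))
        return (np.asarray(targets) - self.mean[task_id]) / std

norm = PerTaskNormalizer(n_tasks=57)
norm.update(3, [120.0, 340.0, 80.0])        # large-magnitude rewards on one game
print(norm.normalise(3, [120.0, 340.0, 80.0]))
```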
Once the model is trained, a registration\nis completed in one forward pass. To evaluate the performance, we conducted\nlongitudinal and cross-subject experiments on 3D magnetic resonance images\n(MRI) of the knee of the Osteoarthritis Initiative (OAI) dataset. Results show\nthat our framework achieves comparable performance to state-of-the-art medical\nimage registration approaches, but it is much faster, with a better control of\ntransformation regularity including the ability to produce approximately\nsymmetric transformations, and combining affine and non-parametric\nregistration.", "field": [], "task": ["Image Registration", "Medical Image Registration"], "method": [], "dataset": [" Osteoarthritis Initiative"], "metric": ["Dice"], "title": "Networks for Joint Affine and Non-parametric Image Registration"} {"abstract": "LEDNet: A Lightweight Encoder-Decoder Network for Real-time Semantic Segmentation", "field": [], "task": ["Real-Time Semantic Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["Cityscapes test"], "metric": ["Time (ms)", "Mean IoU (class)", "Frame (fps)", "mIoU"], "title": "LEDNet: A Lightweight Encoder-Decoder Network for Real-Time Semantic Segmentation"} {"abstract": "Video-based person re-identification (Re-ID) aims at matching video sequences of pedestrians across non-overlapping cameras. It is a practical yet challenging task of how to embed spatial and temporal information of a video into its feature representation. While most existing methods learn the video characteristics by aggregating image-wise features and designing attention mechanisms in Neural Networks, they only explore the correlation between frames at high-level features. In this work, we target at refining the intermediate features as well as high-level features with non-local attention operations and make two contributions. (i) We propose a Non-local Video Attention Network (NVAN) to incorporate video characteristics into the representation at multiple feature levels. (ii) We further introduce a Spatially and Temporally Efficient Non-local Video Attention Network (STE-NVAN) to reduce the computation complexity by exploring spatial and temporal redundancy presented in pedestrian videos. Extensive experiments show that our NVAN outperforms state-of-the-arts by 3.8% in rank-1 accuracy on MARS dataset and confirms our STE-NVAN displays a much superior computation footprint compared to existing methods.", "field": [], "task": ["Person Re-Identification", "Video-Based Person Re-Identification"], "method": [], "dataset": ["MARS"], "metric": ["Rank-1", "mAP"], "title": "Spatially and Temporally Efficient Non-local Attention Network for Video-based Person Re-Identification"} {"abstract": "Object attention maps generated by image classifiers are usually used as priors for weakly-supervised segmentation approaches. However, normal image classifiers produce attention only at the most discriminative object parts, which limits the performance of weakly-supervised segmentation task. Therefore, how to effectively identify entire object regions in a weakly-supervised manner has always been a challenging and meaningful problem. We observe that the attention maps produced by a classification network continuously focus on different object parts during training. 
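The selective-encoding abstract above ("Selective Encoding for Abstractive Sentence Summarization") builds a second-level sentence representation by gating the information flowing from encoder to decoder. Below is a minimal, hypothetical version of such a gate: each encoder hidden state is multiplied by a sigmoid gate computed from that state and a whole-sentence vector. The dimensions and the use of the final hidden state as the sentence vector are assumptions.

```python
import torch
import torch.nn as nn

class SelectiveGate(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.w_h = nn.Linear(d, d, bias=False)   # acts on each encoder state
        self.w_s = nn.Linear(d, d, bias=True)    # acts on the sentence summary vector

    def forward(self, enc_states):               # enc_states: (B, T, d)
        s = enc_states[:, -1]                    # crude sentence vector: last state
        gate = torch.sigmoid(self.w_h(enc_states) + self.w_s(s).unsqueeze(1))
        return enc_states * gate                 # second-level representation

h = torch.randn(4, 30, 512)
print(SelectiveGate(512)(h).shape)               # torch.Size([4, 30, 512])
```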
In order to accumulate the discovered different object parts, we propose an online attention accumulation (OAA) strategy which maintains a cumulative attention map for each target category in each training image so that the integral object regions can be gradually promoted as the training goes. These cumulative attention maps, in turn, serve as the pixel-level supervision, which can further assist the network in discovering more integral object regions. Our method (OAA) can be plugged into any classification network and progressively accumulate the discriminative regions into integral objects as the training process goes. Despite its simplicity, when applying the resulting attention maps to the weakly-supervised semantic segmentation task, our approach improves the existing state-of-the-art methods on the PASCAL VOC 2012 segmentation benchmark, achieving a mIoU score of 66.4% on the test set. Code is available at https://mmcheng.net/oaa/.\r", "field": [], "task": ["Semantic Segmentation", "Weakly-Supervised Semantic Segmentation"], "method": [], "dataset": ["PASCAL VOC 2012 val"], "metric": ["Mean IoU"], "title": "Integral Object Mining via Online Attention Accumulation"} {"abstract": "We introduce a new collection of spoken English audio suitable for training speech recognition systems under limited or no supervision. It is derived from open-source audio books from the LibriVox project. It contains over 60K hours of audio, which is, to our knowledge, the largest freely-available corpus of speech. The audio has been segmented using voice activity detection and is tagged with SNR, speaker ID and genre descriptions. Additionally, we provide baseline systems and evaluation metrics working under three settings: (1) the zero resource/unsupervised setting (ABX), (2) the semi-supervised setting (PER, CER) and (3) the distant supervision setting (WER). Settings (2) and (3) use limited textual resources (10 minutes to 10 hours) aligned with the speech. Setting (3) uses large amounts of unaligned text. They are evaluated on the standard LibriSpeech dev and test sets for comparison with the supervised state-of-the-art.", "field": [], "task": ["Speech Recognition"], "method": [], "dataset": ["Libri-Light test-other", "Libri-Light test-clean"], "metric": ["ABX-across", "ABX-within", "Word Error Rate (WER)"], "title": "Libri-Light: A Benchmark for ASR with Limited or No Supervision"} {"abstract": "We introduce HybridPose, a novel 6D object pose estimation approach. HybridPose utilizes a hybrid intermediate representation to express different geometric information in the input image, including keypoints, edge vectors, and symmetry correspondences. Compared to a unitary representation, our hybrid representation allows pose regression to exploit more and diverse features when one type of predicted representation is inaccurate (e.g., because of occlusion). Different intermediate representations used by HybridPose can all be predicted by the same simple neural network, and outliers in predicted intermediate representations are filtered by a robust regression module. Compared to state-of-the-art pose estimation approaches, HybridPose is comparable in running time and accuracy. For example, on Occlusion Linemod dataset, our method achieves a prediction speed of 30 fps with a mean ADD(-S) accuracy of 47.5%, representing a state-of-the-art performance. 
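The online attention accumulation abstract above maintains a cumulative attention map per target category per training image, so that object regions discovered at different training stages are merged. A natural way to accumulate is an element-wise maximum, sketched below; the paper's actual accumulation strategy may include further rules, so this is only an approximation of the bookkeeping.

```python
import numpy as np

class AttentionAccumulator:
    """Keep one cumulative attention map per (image, class) pair, merged by element-wise max."""
    def __init__(self):
        self.store = {}                                   # (image_id, class_id) -> H x W map

    def update(self, image_id, class_id, attention):      # attention: H x W array in [0, 1]
        key = (image_id, class_id)
        if key not in self.store:
            self.store[key] = attention.copy()
        else:
            np.maximum(self.store[key], attention, out=self.store[key])
        return self.store[key]

acc = AttentionAccumulator()
a1 = np.zeros((4, 4)); a1[0, 0] = 1.0                     # region found early in training
a2 = np.zeros((4, 4)); a2[3, 3] = 1.0                     # region found later
acc.update("img_001", 7, a1)
print(acc.update("img_001", 7, a2))                       # both regions are kept
```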
The implementation of HybridPose is available at https://github.com/chensong1995/HybridPose.", "field": [], "task": ["6D Pose Estimation using RGB", "Pose Estimation", "Regression"], "method": [], "dataset": ["LineMOD", "Occlusion LineMOD"], "metric": ["Mean ADD", "Accuracy (ADD)"], "title": "HybridPose: 6D Object Pose Estimation under Hybrid Representations"} {"abstract": "Multi-task learning (MTL) is an effective method for learning related tasks, but designing MTL models necessitates deciding which and how many parameters should be task-specific, as opposed to shared between tasks. We investigate this issue for the problem of jointly learning named entity recognition (NER) and relation extraction (RE) and propose a novel neural architecture that allows for deeper task-specificity than does prior work. In particular, we introduce additional task-specific bidirectional RNN layers for both the NER and RE tasks and tune the number of shared and task-specific layers separately for different datasets. We achieve state-of-the-art (SOTA) results for both tasks on the ADE dataset; on the CoNLL04 dataset, we achieve SOTA results on the NER task and competitive results on the RE task while using an order of magnitude fewer trainable parameters than the current SOTA architecture. An ablation study confirms the importance of the additional task-specific layers for achieving these results. Our work suggests that previous solutions to joint NER and RE undervalue task-specificity and demonstrates the importance of correctly balancing the number of shared and task-specific parameters for MTL approaches in general.", "field": [], "task": ["Joint Entity and Relation Extraction", "Multi-Task Learning", "Named Entity Recognition", "Relation Extraction"], "method": [], "dataset": ["ADE Corpus", "CoNLL04"], "metric": ["RE+ Micro F1", "NER Macro F1", "RE+ Macro F1", "NER Micro F1", "RE+ Macro F1 "], "title": "Deeper Task-Specificity Improves Joint Entity and Relation Extraction"} {"abstract": "We propose a hierarchical meta-learning-inspired model for music source separation (Meta-TasNet) in which a generator model is used to predict the weights of individual extractor models. This enables efficient parameter-sharing, while still allowing for instrument-specific parameterization. Meta-TasNet is shown to be more effective than the models trained independently or in a multi-task setting, and achieve performance comparable with state-of-the-art methods. In comparison to the latter, our extractors contain fewer parameters and have faster run-time performance. We discuss important architectural considerations, and explore the costs and benefits of this approach.", "field": [], "task": ["Meta-Learning", "Music Source Separation"], "method": [], "dataset": ["MUSDB18"], "metric": ["SDR (vocals)", "SDR (other)", "SDR (drums)", "SDR (bass)"], "title": "Meta-learning Extractors for Music Source Separation"} {"abstract": "Graph representation learning nowadays becomes fundamental in analyzing graph-structured data. Inspired by recent success of contrastive methods, in this paper, we propose a novel framework for unsupervised graph representation learning by leveraging a contrastive objective at the node level. Specifically, we generate two graph views by corruption and learn node representations by maximizing the agreement of node representations in these two views. To provide diverse node contexts for the contrastive objective, we propose a hybrid scheme for generating graph views on both structure and attribute levels. 
Besides, we provide theoretical justification behind our motivation from two perspectives, mutual information and the classical triplet loss. We perform empirical experiments on both transductive and inductive learning tasks using a variety of real-world datasets. Experimental experiments demonstrate that despite its simplicity, our proposed method consistently outperforms existing state-of-the-art methods by large margins. Moreover, our unsupervised method even surpasses its supervised counterparts on transductive tasks, demonstrating its great potential in real-world applications.", "field": [], "task": ["Graph Representation Learning", "Node Classification", "Representation Learning"], "method": [], "dataset": ["Cora", "PPI", "DBLP", "Reddit", "Citeseer", "Pubmed"], "metric": ["Micro-F1", "Accuracy"], "title": "Deep Graph Contrastive Representation Learning"} {"abstract": "Object detection and data association are critical components in multi-object tracking (MOT) systems. Despite the fact that the two components are dependent on each other, prior work often designs detection and data association modules separately which are trained with different objectives. As a result, we cannot back-propagate the gradients and optimize the entire MOT system, which leads to sub-optimal performance. To address this issue, recent work simultaneously optimizes detection and data association modules under a joint MOT framework, which has shown improved performance in both modules. In this work, we propose a new instance of joint MOT approach based on Graph Neural Networks (GNNs). The key idea is that GNNs can model relations between variable-sized objects in both the spatial and temporal domains, which is essential for learning discriminative features for detection and data association. Through extensive experiments on the MOT15/16/17/20 datasets, we demonstrate the effectiveness of our GNN-based joint MOT approach and show the state-of-the-art performance for both detection and MOT tasks.", "field": [], "task": ["Multi-Object Tracking", "Object Detection", "Object Tracking"], "method": [], "dataset": ["MOT17", "2D MOT 2015", "MOT16", "MOT20"], "metric": ["MOTA"], "title": "Joint Object Detection and Multi-Object Tracking with Graph Neural Networks"} {"abstract": "Speech separation has been well-developed while there are still problems waiting to be solved. The main problem we focus on in this paper is the frequent label permutation switching of permutation invariant training (PIT). For N-speaker separation, there would be N! possible label permutations. How to stably select correct label permutations is a long-standing problem. In this paper, we utilize self-supervised pre-training to stabilize the label permutations. Among several types of self-supervised tasks, speech enhancement based pre-training tasks show significant effectiveness in our experiments. When using off-the-shelf pre-trained models, training duration could be shortened to one-third to two-thirds. 
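The graph contrastive learning abstract above maximises the agreement between representations of the same node under two corrupted graph views. A minimal InfoNCE-style node contrastive loss over two view matrices is sketched below; the temperature value, the symmetric averaging, and the omission of the paper's intra-view negatives are simplifications.

```python
import torch
import torch.nn.functional as F

def node_contrastive_loss(z1, z2, tau=0.5):
    """z1, z2: (N, d) embeddings of the same N nodes under two graph views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                       # (N, N) cross-view similarities
    labels = torch.arange(z1.size(0))             # node i in view 1 matches node i in view 2
    return 0.5 * (F.cross_entropy(sim, labels) + F.cross_entropy(sim.t(), labels))

z1, z2 = torch.randn(128, 64), torch.randn(128, 64)
print(node_contrastive_loss(z1, z2).item())
```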
Furthermore, even taking pre-training time into account, the entire training process could still be shorter without a performance drop when using a larger batch size.", "field": [], "task": ["Speaker Separation", "Speech Enhancement", "Speech Separation"], "method": [], "dataset": ["wsj0-2mix"], "metric": ["SI-SDRi"], "title": "Self-supervised Pre-training Reduces Label Permutation Instability of Speech Separation"} {"abstract": "This paper considers the adaptation of semantic segmentation from the synthetic source domain to the real target domain. Different from most previous explorations that often aim at developing adversarial-based domain alignment solutions, we tackle this challenging task from a new perspective, mph{i.e.}, content-consistent matching (CCM). The target of CCM is to acquire those synthetic images that share similar distribution with the real ones in the target domain, so that the domain gap can be naturally alleviated by employing the content-consistent synthetic images for training. To be specific, we facilitate the CCM from two aspects, mph{i.e.}, semantic layout matching and pixel-wise similarity matching. First, we use all the synthetic images from the source domain to train an initial segmentation model, which is then employed to produce coarse pixel-level labels for the unlabeled images in the target domain. With the coarse/accurate label maps for real/synthetic images, we construct their semantic layout matrixes from both horizontal and vertical directions and perform the matrixes matching to find out the synthetic images with similar semantic layout to real images. Second, we choose those predicted labels with high confidence to generate feature embeddings for all classes in the target domain, and further perform the pixel-wise matching on the mined layout-consistent synthetic images to harvest the appearance-consistent pixels. With the proposed CCM, only those content-consistent synthetic images are taken into account for learning the segmentation model, which can effectively alleviate the domain bias caused by those content-irrelevant synthetic images. Extensive experiments are conducted on two popular domain adaptation tasks, mph{i.e.}, GTA5$\\xrightarrow{}$Cityscapes and SYNTHIA$\\xrightarrow{}$Cityscapes. Our CCM yields consistent improvements over the baselines and performs favorably against previous state-of-the-arts.", "field": [], "task": ["Domain Adaptation", "Semantic Segmentation", "Synthetic-to-Real Translation"], "method": [], "dataset": ["GTAV-to-Cityscapes Labels"], "metric": ["mIoU"], "title": "Content-Consistent Matching for Domain Adaptive Semantic Segmentation"} {"abstract": "Stroke is an injury that affects the brain tissue, mainly caused by changes in the blood supply to a particular region of the brain. As consequence, some specific functions related to that affected region can be reduced, decreasing the quality of life of the patient. In this work, we deal with the problem of stroke detection in Computed Tomography (CT) images using Convolutional Neural Networks (CNN) optimized by Particle Swarm optimization (PSO). We considered two different kinds of strokes, ischemic and hemorrhagic, as well as making available a public dataset to foster the research related to stroke detection in the human brain. The dataset comprises three different types of images for each case, i.e., the original CT image, one with the segmented cranium and an additional one with the radiological density's map. 
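The speech-separation abstract above is concerned with the label-permutation switching of permutation invariant training (PIT). For reference, the standard utterance-level PIT loss, which evaluates every speaker-to-output assignment and keeps the cheapest one, is sketched below; the self-supervised pre-training that the paper actually contributes is not shown, and MSE is used only as a placeholder objective.

```python
from itertools import permutations
import torch

def pit_loss(est, ref):
    """est, ref: (batch, n_spk, samples). Returns the loss under the best permutation."""
    n_spk = ref.size(1)
    losses = []
    for perm in permutations(range(n_spk)):                 # n_spk! candidate assignments
        mse = ((est[:, list(perm)] - ref) ** 2).mean(dim=(1, 2))   # (batch,)
        losses.append(mse)
    return torch.stack(losses, dim=1).min(dim=1).values.mean()

est, ref = torch.randn(4, 2, 16000), torch.randn(4, 2, 16000)
print(pit_loss(est, ref).item())
```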
The results evidenced that CNN's are suitable to deal with stroke detection, obtaining promising results.", "field": [], "task": ["Computed Tomography (CT)", "Stroke Classification"], "method": [], "dataset": ["CT Lesion Stroke Dataset"], "metric": ["Average Class Accuracy "], "title": "Stroke lesion detection using convolutional neural networks"} {"abstract": "In this paper, we investigate deep image synthesis guided by sketch, color,\nand texture. Previous image synthesis methods can be controlled by sketch and\ncolor strokes but we are the first to examine texture control. We allow a user\nto place a texture patch on a sketch at arbitrary locations and scales to\ncontrol the desired output texture. Our generative network learns to synthesize\nobjects consistent with these texture suggestions. To achieve this, we develop\na local texture loss in addition to adversarial and content loss to train the\ngenerative network. We conduct experiments using sketches generated from real\nimages and textures sampled from a separate texture database and results show\nthat our proposed algorithm is able to generate plausible images that are\nfaithful to user controls. Ablation studies show that our proposed pipeline can\ngenerate more realistic images than adapting existing methods directly.", "field": [], "task": ["Image Generation", "Texture Synthesis"], "method": [], "dataset": ["Edge-to-Shoes", "Edge-to-Handbags"], "metric": ["FID", "LPIPS"], "title": "TextureGAN: Controlling Deep Image Synthesis with Texture Patches"} {"abstract": "A lot of the recent success in natural language processing (NLP) has been\ndriven by distributed vector representations of words trained on large amounts\nof text in an unsupervised manner. These representations are typically used as\ngeneral purpose features for words across a range of NLP problems. However,\nextending this success to learning representations of sequences of words, such\nas sentences, remains an open problem. Recent work has explored unsupervised as\nwell as supervised learning techniques with different training objectives to\nlearn general purpose fixed-length sentence representations. In this work, we\npresent a simple, effective multi-task learning framework for sentence\nrepresentations that combines the inductive biases of diverse training\nobjectives in a single model. We train this model on several data sources with\nmultiple training objectives on over 100 million sentences. Extensive\nexperiments demonstrate that sharing a single recurrent sentence encoder across\nweakly related tasks leads to consistent improvements over previous methods. We\npresent substantial improvements in the context of transfer learning and\nlow-resource settings using our learned general-purpose representations.", "field": [], "task": ["Multi-Task Learning", "Natural Language Inference", "Paraphrase Identification", "Semantic Textual Similarity", "Transfer Learning"], "method": [], "dataset": ["MultiNLI", "MRPC", "Quora Question Pairs", "SentEval"], "metric": ["SICK-E", "Matched", "STS", "MRPC", "SICK-R", "Accuracy", "Mismatched", "F1"], "title": "Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning"} {"abstract": "Today's scene graph generation (SGG) task is still far from practical, mainly due to the severe training bias, e.g., collapsing diverse \"human walk on / sit on / lay on beach\" into \"human on beach\". 
Given such SGG, the down-stream tasks such as VQA can hardly infer better scene structures than merely a bag of objects. However, debiasing in SGG is not trivial because traditional debiasing methods cannot distinguish between the good and bad bias, e.g., good context prior (e.g., \"person read book\" rather than \"eat\") and bad long-tailed bias (e.g., \"near\" dominating \"behind / in front of\"). In this paper, we present a novel SGG framework based on causal inference but not the conventional likelihood. We first build a causal graph for SGG, and perform traditional biased training with the graph. Then, we propose to draw the counterfactual causality from the trained graph to infer the effect from the bad bias, which should be removed. In particular, we use Total Direct Effect (TDE) as the proposed final predicate score for unbiased SGG. Note that our framework is agnostic to any SGG model and thus can be widely applied in the community who seeks unbiased predictions. By using the proposed Scene Graph Diagnosis toolkit on the SGG benchmark Visual Genome and several prevailing models, we observed significant improvements over the previous state-of-the-art methods.", "field": [], "task": ["Causal Inference", "Graph Generation", "Scene Graph Generation"], "method": ["Causal Inference"], "dataset": ["Visual Genome"], "metric": ["Recall@50", "mean Recall @20"], "title": "Unbiased Scene Graph Generation from Biased Training"} {"abstract": "Remote sensing image change detection (CD) is done to identify desired significant changes between bitemporal images. Given two co-registered images taken at different times, the illumination variations and misregistration errors overwhelm the real object changes. Exploring the relationships among different spatial\u2013temporal pixels may improve the performances of CD methods. In our work, we propose a novel Siamese-based spatial\u2013temporal attention neural network. In contrast to previous methods that separately encode the bitemporal images without referring to any useful spatial\u2013temporal dependency, we design a CD self-attention mechanism to model the spatial\u2013temporal relationships. We integrate a new CD self-attention module in the procedure of feature extraction. Our self-attention module calculates the attention weights between any two pixels at different times and positions and uses them to generate more discriminative features. Considering that the object may have different scales, we partition the image into multi-scale subregions and introduce the self-attention in each subregion. In this way, we could capture spatial\u2013temporal dependencies at various scales, thereby generating better representations to accommodate objects of various sizes. We also introduce a CD dataset LEVIR-CD, which is two orders of magnitude larger than other public datasets of this field. LEVIR-CD consists of a large set of bitemporal Google Earth images, with 637 image pairs (1024 \u0002 1024) and over 31 k independently labeled change instances. Our proposed attention module improves the F1-score of our baseline model from 83.9 to 87.3 with acceptable computational overhead. 
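The unbiased scene-graph abstract above removes the bad bias by subtracting the prediction obtained from a counterfactual input, i.e. the Total Direct Effect. A stripped-down version of that subtraction is sketched below, using a mean feature as the counterfactual "wiped" input; the real framework intervenes inside a causal graph with several branches, so this shows only the core arithmetic, and the predicate count is hypothetical.

```python
import torch

def tde_logits(predicate_head, pair_feat, wiped_feat):
    """Total-Direct-Effect style debiasing of predicate scores.

    predicate_head: any callable mapping features -> predicate logits
    pair_feat:      features of the actual subject-object pair
    wiped_feat:     counterfactual features (e.g. dataset-mean visual features)
    """
    factual = predicate_head(pair_feat)          # biased prediction
    counterfactual = predicate_head(wiped_feat)  # what the bias alone predicts
    return factual - counterfactual              # keep only the effect of the real visual input

head = torch.nn.Linear(256, 51)                  # 51 predicate classes (hypothetical)
feat = torch.randn(8, 256)
print(tde_logits(head, feat, feat.mean(dim=0, keepdim=True).expand_as(feat)).shape)
```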
Experimental results on a public remote sensing image CD dataset show our method outperforms several other state-of-the-art methods.", "field": [], "task": ["Building change detection for remote sensing images", "Change detection for remote sensing images"], "method": [], "dataset": ["LEVIR-CD"], "metric": ["F1"], "title": "A Spatial-Temporal Attention-Based Method and a New Dataset for Remote Sensing Image Change Detection"} {"abstract": "In this paper, we study the problem of salient object detection (SOD) for RGB-D images using both color and depth information.A major technical challenge in performing salient object detection fromRGB-D images is how to fully leverage the two complementary data sources. Current works either simply distill prior knowledge from the corresponding depth map for handling the RGB-image or blindly fuse color and geometric information to generate the coarse depth-aware representations, hindering the performance of RGB-D saliency detectors.In this work, we introduceCascade Graph Neural Networks(Cas-Gnn),a unified framework which is capable of comprehensively distilling and reasoning the mutual benefits between these two data sources through a set of cascade graphs, to learn powerful representations for RGB-D salient object detection. Cas-Gnn processes the two data sources individually and employs a novelCascade Graph Reasoning(CGR) module to learn powerful dense feature embeddings, from which the saliency map can be easily inferred. Contrast to the previous approaches, the explicitly modeling and reasoning of high-level relations between complementary data sources allows us to better overcome challenges such as occlusions and ambiguities. Extensive experiments demonstrate that Cas-Gnn achieves significantly better performance than all existing RGB-DSOD approaches on several widely-used benchmarks.", "field": [], "task": ["Object Detection", "RGB-D Salient Object Detection", "RGB Salient Object Detection", "Salient Object Detection"], "method": [], "dataset": ["NJU2K"], "metric": ["Average MAE", "S-Measure"], "title": "Cascade Graph Neural Networks for RGB-D Salient Object Detection"} {"abstract": "For most deep learning practitioners, sequence modeling is synonymous with\nrecurrent networks. Yet recent results indicate that convolutional\narchitectures can outperform recurrent networks on tasks such as audio\nsynthesis and machine translation. Given a new sequence modeling task or\ndataset, which architecture should one use? We conduct a systematic evaluation\nof generic convolutional and recurrent architectures for sequence modeling. The\nmodels are evaluated across a broad range of standard tasks that are commonly\nused to benchmark recurrent networks. Our results indicate that a simple\nconvolutional architecture outperforms canonical recurrent networks such as\nLSTMs across a diverse range of tasks and datasets, while demonstrating longer\neffective memory. We conclude that the common association between sequence\nmodeling and recurrent networks should be reconsidered, and convolutional\nnetworks should be regarded as a natural starting point for sequence modeling\ntasks. 
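The change-detection abstract above computes attention weights between any two pixels at different times and positions. The module below is a plain single-head non-local attention over the tokens formed by flattening both temporal feature maps; the paper's multi-scale subregion partitioning is omitted, and the layer sizes are assumptions. Note that the (2HW)^2 attention matrix makes this affordable only on downsampled feature maps.

```python
import torch
import torch.nn as nn

class BitemporalNonLocal(nn.Module):
    """Every pixel of both dates attends to every pixel of both dates."""
    def __init__(self, c):
        super().__init__()
        self.q, self.k, self.v = nn.Linear(c, c // 2), nn.Linear(c, c // 2), nn.Linear(c, c)

    def forward(self, x1, x2):                                   # (B, C, H, W) each date
        b, c, h, w = x1.shape
        t = torch.cat([x1.flatten(2), x2.flatten(2)], dim=2)     # (B, C, 2HW)
        t = t.transpose(1, 2)                                    # (B, 2HW, C) tokens
        q, k, v = self.q(t), self.k(t), self.v(t)
        attn = torch.softmax(q @ k.transpose(1, 2) / q.size(-1) ** 0.5, dim=-1)
        out = attn @ v                                           # (B, 2HW, C)
        y1, y2 = out.chunk(2, dim=1)                             # split back into the two dates
        return (y1.transpose(1, 2).view(b, c, h, w),
                y2.transpose(1, 2).view(b, c, h, w))

m = BitemporalNonLocal(64)
a, b = m(torch.randn(1, 64, 16, 16), torch.randn(1, 64, 16, 16))
print(a.shape, b.shape)
```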
To assist related work, we have made code available at\nhttp://github.com/locuslab/TCN .", "field": [], "task": ["Language Modelling", "Machine Translation", "Music Modeling", "Sequential Image Classification"], "method": [], "dataset": ["Sequential MNIST", "Penn Treebank (Word Level)", "Penn Treebank (Character Level)", "Nottingham", "WikiText-103"], "metric": ["Unpermuted Accuracy", "Bit per Character (BPC)", "Permuted Accuracy", "Test perplexity", "NLL"], "title": "An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling"} {"abstract": "In this paper, we propose an end-to-end trainable regression approach for\nhuman pose estimation from still images. We use the proposed Soft-argmax\nfunction to convert feature maps directly to joint coordinates, resulting in a\nfully differentiable framework. Our method is able to learn heat maps\nrepresentations indirectly, without additional steps of artificial ground truth\ngeneration. Consequently, contextual information can be included to the pose\npredictions in a seamless way. We evaluated our method on two very challenging\ndatasets, the Leeds Sports Poses (LSP) and the MPII Human Pose datasets,\nreaching the best performance among all the existing regression methods and\ncomparable results to the state-of-the-art detection based approaches.", "field": [], "task": ["Pose Estimation", "Regression"], "method": [], "dataset": ["Leeds Sports Poses"], "metric": ["PCK"], "title": "Human Pose Regression by Combining Indirect Part Detection and Contextual Information"} {"abstract": "Semantic instance segmentation remains a challenging task. In this work we\npropose to tackle the problem with a discriminative loss function, operating at\nthe pixel level, that encourages a convolutional network to produce a\nrepresentation of the image that can easily be clustered into instances with a\nsimple post-processing step. The loss function encourages the network to map\neach pixel to a point in feature space so that pixels belonging to the same\ninstance lie close together while different instances are separated by a wide\nmargin. Our approach of combining an off-the-shelf network with a principled\nloss function inspired by a metric learning objective is conceptually simple\nand distinct from recent efforts in instance segmentation. In contrast to\nprevious works, our method does not rely on object proposals or recurrent\nmechanisms. A key contribution of our work is to demonstrate that such a simple\nsetup without bells and whistles is effective and can perform on par with more\ncomplex methods. Moreover, we show that it does not suffer from some of the\nlimitations of the popular detect-and-segment approaches. We achieve\ncompetitive performance on the Cityscapes and CVPPP leaf segmentation\nbenchmarks.", "field": [], "task": ["Instance Segmentation", "Lane Detection", "Metric Learning", "Multi-Human Parsing", "Semantic Segmentation"], "method": [], "dataset": ["MHP v1.0", "TuSimple", "Cityscapes test"], "metric": ["Average Precision", "AP 0.5", "Accuracy"], "title": "Semantic Instance Segmentation with a Discriminative Loss Function"} {"abstract": "Recent research on super-resolution has progressed with the development of\ndeep convolutional neural networks (DCNN). In particular, residual learning\ntechniques exhibit improved performance. In this paper, we develop an enhanced\ndeep super-resolution network (EDSR) with performance exceeding those of\ncurrent state-of-the-art SR methods. 
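The pose-regression abstract above converts feature maps directly to joint coordinates with a Soft-argmax so that the whole pipeline stays differentiable. The function below is a standard 2D soft-argmax (a spatial softmax followed by an expectation over a coordinate grid); the normalisation of coordinates to [0, 1] is an assumption.

```python
import torch

def soft_argmax_2d(heatmaps):
    """heatmaps: (B, J, H, W) -> (B, J, 2) expected (x, y) coordinates in [0, 1]."""
    b, j, h, w = heatmaps.shape
    probs = torch.softmax(heatmaps.flatten(2), dim=-1).view(b, j, h, w)
    ys = torch.linspace(0, 1, h, device=heatmaps.device)
    xs = torch.linspace(0, 1, w, device=heatmaps.device)
    x = (probs.sum(dim=2) * xs).sum(dim=-1)      # marginalise rows, take expectation over columns
    y = (probs.sum(dim=3) * ys).sum(dim=-1)      # marginalise columns, take expectation over rows
    return torch.stack([x, y], dim=-1)

hm = torch.zeros(1, 1, 64, 64); hm[0, 0, 10, 50] = 20.0   # a confident peak at (x=50, y=10)
print(soft_argmax_2d(hm))                                  # roughly [[[0.79, 0.16]]]
```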
The significant performance improvement of\nour model is due to optimization by removing unnecessary modules in\nconventional residual networks. The performance is further improved by\nexpanding the model size while we stabilize the training procedure. We also\npropose a new multi-scale deep super-resolution system (MDSR) and training\nmethod, which can reconstruct high-resolution images of different upscaling\nfactors in a single model. The proposed methods show superior performance over\nthe state-of-the-art methods on benchmark datasets and prove its excellence by\nwinning the NTIRE2017 Super-Resolution Challenge.", "field": [], "task": ["Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["FFHQ 256 x 256 - 4x upscaling", "Set14 - 4x upscaling", "Manga109 - 4x upscaling", "BSD100 - 4x upscaling", "FFHQ 1024 x 1024 - 4x upscaling", "Set5 - 4x upscaling", "FFHQ 512 x 512 - 4x upscaling", "Urban100 - 4x upscaling"], "metric": ["LLE", "PSNR", "FID", "FED", "MS-SSIM", "LPIPS", "NIQE", "SSIM"], "title": "Enhanced Deep Residual Networks for Single Image Super-Resolution"} {"abstract": "Many natural language processing applications use language models to generate\ntext. These models are typically trained to predict the next word in a\nsequence, given the previous words and some context such as an image. However,\nat test time the model is expected to generate the entire sequence from\nscratch. This discrepancy makes generation brittle, as errors may accumulate\nalong the way. We address this issue by proposing a novel sequence level\ntraining algorithm that directly optimizes the metric used at test time, such\nas BLEU or ROUGE. On three different tasks, our approach outperforms several\nstrong baselines for greedy generation. The method is also competitive when\nthese baselines employ beam search, while being several times faster.", "field": [], "task": ["Machine Translation"], "method": [], "dataset": ["IWSLT2015 German-English"], "metric": ["BLEU score"], "title": "Sequence Level Training with Recurrent Neural Networks"} {"abstract": "Convolutional networks almost always incorporate some form of spatial\npooling, and very often it is alpha times alpha max-pooling with alpha=2.\nMax-pooling act on the hidden layers of the network, reducing their size by an\ninteger multiplicative factor alpha. The amazing by-product of discarding 75%\nof your data is that you build into the network a degree of invariance with\nrespect to translations and elastic distortions. However, if you simply\nalternate convolutional layers with max-pooling layers, performance is limited\ndue to the rapid reduction in spatial size, and the disjoint nature of the\npooling regions. We have formulated a fractional version of max-pooling where\nalpha is allowed to take non-integer values. Our version of max-pooling is\nstochastic as there are lots of different ways of constructing suitable pooling\nregions. We find that our form of fractional max-pooling reduces overfitting on\na variety of datasets: for instance, we improve on the state-of-the art for\nCIFAR-100 without even using dropout.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["MNIST", "CIFAR-100", "CIFAR-10"], "metric": ["Percentage error", "Percentage correct"], "title": "Fractional Max-Pooling"} {"abstract": "Lossy compression introduces complex compression artifacts, particularly the\nblocking artifacts, ringing effects and blurring. 
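The EDSR abstract above attributes much of its gain to removing unnecessary modules from conventional residual networks, which in the published model corresponds to residual blocks without batch normalisation, optionally scaled before the skip connection. The block below is a minimal sketch in that spirit rather than the released implementation; the channel width and scaling factor are assumptions.

```python
import torch
import torch.nn as nn

class EDSRStyleBlock(nn.Module):
    """Residual block without batch normalisation, with residual scaling."""
    def __init__(self, channels=64, res_scale=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.res_scale = res_scale

    def forward(self, x):
        return x + self.res_scale * self.body(x)

x = torch.randn(1, 64, 48, 48)
print(EDSRStyleBlock()(x).shape)        # torch.Size([1, 64, 48, 48])
```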
Existing algorithms either\nfocus on removing blocking artifacts and produce blurred output, or restores\nsharpened images that are accompanied with ringing effects. Inspired by the\ndeep convolutional networks (DCN) on super-resolution, we formulate a compact\nand efficient network for seamless attenuation of different compression\nartifacts. We also demonstrate that a deeper model can be effectively trained\nwith the features learned in a shallow network. Following a similar \"easy to\nhard\" idea, we systematically investigate several practical transfer settings\nand show the effectiveness of transfer learning in low-level vision problems.\nOur method shows superior performance than the state-of-the-arts both on the\nbenchmark datasets and the real-world use case (i.e. Twitter). In addition, we\nshow that our method can be applied as pre-processing to facilitate other\nlow-level vision routines when they take compressed images as input.", "field": [], "task": ["Denoising", "JPEG Artifact Correction", "Transfer Learning"], "method": [], "dataset": ["ICB (Quality 30 Color)", "ICB (Quality 10 Color)", "Live1 (Quality 10 Grayscale)", "LIVE1 (Quality 20 Grayscale)", "LIVE1 (Quality 20 Color)", "ICB (Quality 20 Grayscale)", "ICB (Quality 20 Color)", "LIVE1 (Quality 10 Color)", "ICB (Quality 10 Grayscale)"], "metric": ["SSIM", "PSNR", "PSNR-B"], "title": "Compression Artifacts Reduction by a Deep Convolutional Network"} {"abstract": "Deep neural networks (DNNs) are now a central component of nearly all\nstate-of-the-art speech recognition systems. Building neural network acoustic\nmodels requires several design decisions including network architecture, size,\nand training loss function. This paper offers an empirical investigation on\nwhich aspects of DNN acoustic model design are most important for speech\nrecognition system performance. We report DNN classifier performance and final\nspeech recognizer word error rates, and compare DNNs using several metrics to\nquantify factors influencing differences in task performance. Our first set of\nexperiments use the standard Switchboard benchmark corpus, which contains\napproximately 300 hours of conversational telephone speech. We compare standard\nDNNs to convolutional networks, and present the first experiments using\nlocally-connected, untied neural networks for acoustic modeling. We\nadditionally build systems on a corpus of 2,100 hours of training data by\ncombining the Switchboard and Fisher corpora. This larger corpus allows us to\nmore thoroughly examine performance of large DNN models -- with up to ten times\nmore parameters than those typically used in speech recognition systems. Our\nresults suggest that a relatively simple DNN architecture and optimization\ntechnique produces strong results. These findings, along with previous work,\nhelp establish a set of best practices for building DNN hybrid speech\nrecognition systems with maximum likelihood training. 
Our experiments in DNN\noptimization additionally serve as a case study for training DNNs with\ndiscriminative loss functions for speech tasks, as well as DNN classifiers more\ngenerally.", "field": [], "task": ["Large Vocabulary Continuous Speech Recognition", "Speech Recognition"], "method": [], "dataset": ["Switchboard + Hub500", "swb_hub_500 WER fullSWBCH"], "metric": ["Percentage error"], "title": "Building DNN Acoustic Models for Large Vocabulary Speech Recognition"} {"abstract": "Gesture recognition is a hot topic in computer vision and pattern\nrecognition, which plays a vitally important role in natural human-computer\ninterface. Although great progress has been made recently, fast and robust hand\ngesture recognition remains an open problem, since the existing methods have\nnot well balanced the performance and the efficiency simultaneously. To bridge\nit, this work combines image entropy and density clustering to exploit the key\nframes from hand gesture video for further feature extraction, which can\nimprove the efficiency of recognition. Moreover, a feature fusion strategy is\nalso proposed to further improve feature representation, which elevates the\nperformance of recognition. To validate our approach in a \"wild\" environment,\nwe also introduce two new datasets called HandGesture and Action3D datasets.\nExperiments consistently demonstrate that our strategy achieves competitive\nresults on Northwestern University, Cambridge, HandGesture and Action3D hand\ngesture datasets. Our code and datasets will release at\nhttps://github.com/Ha0Tang/HandGestureRecognition.", "field": [], "task": ["Gesture Recognition", "Hand Gesture Recognition", "Hand-Gesture Recognition"], "method": [], "dataset": ["Northwestern University", "Cambridge"], "metric": ["Accuracy"], "title": "Fast and Robust Dynamic Hand Gesture Recognition via Key Frames Extraction and Feature Fusion"} {"abstract": "Music genre can be hard to describe: many factors are involved, such as\nstyle, music technique, and historical context. Some genres even have\noverlapping characteristics. Looking for a better understanding of how music\ngenres are related to musical harmonic structures, we gathered data about the\nmusic chords for thousands of popular Brazilian songs. Here, 'popular' does not\nonly refer to the genre named MPB (Brazilian Popular Music) but to nine\ndifferent genres that were considered particular to the Brazilian case. The\nmain goals of the present work are to extract and engineer harmonically related\nfeatures from chords data and to use it to classify popular Brazilian music\ngenres towards establishing a connection between harmonic relationships and\nBrazilian genres. We also emphasize the generalization of the method for\nobtaining the data, allowing for the replication and direct extension of this\nwork. Our final model is a combination of multiple classification trees, also\nknown as the random forest model. We found that features extracted from\nharmonic elements can satisfactorily predict music genre for the Brazilian\ncase, as well as features obtained from the Spotify API. 
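The hand-gesture abstract above selects key frames with image entropy and density clustering before feature extraction. The helper below computes the grey-level histogram entropy that such a selector could rank frames by; the clustering step and the exact entropy definition used by the authors are not reproduced, so this is only a crude proxy.

```python
import numpy as np

def frame_entropy(gray_frame, bins=256):
    """Shannon entropy of the grey-level histogram of one frame (uint8 array, H x W)."""
    hist, _ = np.histogram(gray_frame, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def rank_key_frames(frames, k=8):
    """Return indices of the k frames with the highest entropy."""
    scores = [frame_entropy(f) for f in frames]
    return sorted(np.argsort(scores)[-k:].tolist())

video = [np.random.randint(0, 256, (120, 160), dtype=np.uint8) for _ in range(40)]
print(rank_key_frames(video, k=5))
```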
The variables\nconsidered in this work also give an intuition about how they relate to the\ngenres.", "field": [], "task": ["Feature Engineering", "Music Genre Recognition"], "method": [], "dataset": ["chords"], "metric": ["Accuracy"], "title": "Machine learning and chord based feature engineering for genre prediction in popular Brazilian music"} {"abstract": "Reading comprehension has recently seen rapid progress, with systems matching\nhumans on the most popular datasets for the task. However, a large body of work\nhas highlighted the brittleness of these systems, showing that there is much\nwork left to be done. We introduce a new English reading comprehension\nbenchmark, DROP, which requires Discrete Reasoning Over the content of\nParagraphs. In this crowdsourced, adversarially-created, 96k-question\nbenchmark, a system must resolve references in a question, perhaps to multiple\ninput positions, and perform discrete operations over them (such as addition,\ncounting, or sorting). These operations require a much more comprehensive\nunderstanding of the content of paragraphs than what was necessary for prior\ndatasets. We apply state-of-the-art methods from both the reading comprehension\nand semantic parsing literature on this dataset and show that the best systems\nonly achieve 32.7% F1 on our generalized accuracy metric, while expert human\nperformance is 96.0%. We additionally present a new model that combines reading\ncomprehension methods with simple numerical reasoning to achieve 47.0% F1.", "field": [], "task": ["Question Answering", "Reading Comprehension", "Semantic Parsing"], "method": [], "dataset": ["DROP Test"], "metric": ["F1"], "title": "DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs"} {"abstract": "Unsupervised domain adaptation (UDA) is important for applications where large scale annotation of representative data is challenging. For semantic segmentation in particular, it helps deploy on real \"target domain\" data models that are trained on annotated images from a different \"source domain\", notably a virtual environment. To this end, most previous works consider semantic segmentation as the only mode of supervision for source domain data, while ignoring other, possibly available, information like depth. In this work, we aim at exploiting at best such a privileged information while training the UDA model. We propose a unified depth-aware UDA framework that leverages in several complementary ways the knowledge of dense depth in the source domain. As a result, the performance of the trained semantic segmentation model on the target domain is boosted. Our novel approach indeed achieves state-of-the-art performance on different challenging synthetic-2-real benchmarks.", "field": [], "task": ["Domain Adaptation", "Image-to-Image Translation", "Semantic Segmentation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["SYNTHIA-to-Cityscapes"], "metric": ["mIoU (13 classes)"], "title": "DADA: Depth-aware Domain Adaptation in Semantic Segmentation"} {"abstract": "There has been tremendous research progress in estimating the depth of a scene from a monocular camera image. Existing methods for single-image depth prediction are exclusively based on deep neural networks, and their training can be unsupervised using stereo image pairs, supervised using LiDAR point clouds, or semi-supervised using both stereo and LiDAR. 
In general, semi-supervised training is preferred as it does not suffer from the weaknesses of either supervised training, resulting from the difference in the cameras and the LiDARs field of view, or unsupervised training, resulting from the poor depth accuracy that can be recovered from a stereo pair. In this paper, we present our research in single image depth prediction using semi-supervised training that outperforms the state-of-the-art. We achieve this through a loss function that explicitly exploits left-right consistency in a stereo reconstruction, which has not been adopted in previous semi-supervised training. In addition, we describe the correct use of ground truth depth derived from LiDAR that can significantly reduce prediction error. The performance of our depth prediction model is evaluated on popular datasets, and the importance of each aspect of our semi-supervised training approach is demonstrated through experimental results. Our deep neural network model has been made publicly available.", "field": [], "task": ["Depth Estimation", "Monocular Depth Estimation"], "method": [], "dataset": ["KITTI Eigen split"], "metric": ["absolute relative error"], "title": "Semi-Supervised Monocular Depth Estimation with Left-Right Consistency Using Deep Neural Network"} {"abstract": "The ability of a graph neural network (GNN) to leverage both the graph topology and graph labels is fundamental to building discriminative node and graph embeddings. Building on previous work, we theoretically show that edGNN, our model for directed labeled graphs, is as powerful as the Weisfeiler-Lehman algorithm for graph isomorphism. Our experiments support our theoretical findings, confirming that graph neural networks can be used effectively for inference problems on directed graphs with both node and edge labels. Code available at https://github.com/guillaumejaume/edGNN.", "field": [], "task": ["Graph Classification"], "method": [], "dataset": ["MUTAG"], "metric": ["Accuracy"], "title": "edGNN: a Simple and Powerful GNN for Directed Labeled Graphs"} {"abstract": "Recent years have witnessed the remarkable progress of applying deep learning models in video person re-identification (Re-ID). A key factor for video person Re-ID is to effectively construct discriminative and robust video feature representations for many complicated situations. Part-based approaches employ spatial and temporal attention to extract representative local features. While correlations between parts are ignored in the previous methods, to leverage the relations of different parts, we propose an innovative adaptive graph representation learning scheme for video person Re-ID, which enables the contextual interactions between relevant regional features. Specifically, we exploit the pose alignment connection and the feature affinity connection to construct an adaptive structure-aware adjacency graph, which models the intrinsic relations between graph nodes. We perform feature propagation on the adjacency graph to refine regional features iteratively, and the neighbor nodes' information is taken into account for part feature representation. To learn compact and discriminative representations, we further propose a novel temporal resolution-aware regularization, which enforces the consistency among different temporal resolutions for the same identities. We conduct extensive evaluations on four benchmarks, i.e. 
iLIDS-VID, PRID2011, MARS, and DukeMTMC-VideoReID, experimental results achieve the competitive performance which demonstrates the effectiveness of our proposed method. The code is available at https://github.com/weleen/AGRL.pytorch.", "field": [], "task": ["Graph Representation Learning", "Person Re-Identification", "Representation Learning", "Video-Based Person Re-Identification"], "method": [], "dataset": ["iLIDS-VID", "MARS", "PRID2011"], "metric": ["Rank-1", "Rank-10", "mAP", "Rank-20"], "title": "Adaptive Graph Representation Learning for Video Person Re-identification"} {"abstract": "Several papers argue that wide minima generalize better than narrow minima. In this paper, through detailed experiments that not only corroborate the generalization properties of wide minima, we also provide empirical evidence for a new hypothesis that the density of wide minima is likely lower than the density of narrow minima. Further, motivated by this hypothesis, we design a novel explore-exploit learning rate schedule. On a variety of image and natural language datasets, compared to their original hand-tuned learning rate baselines, we show that our explore-exploit schedule can result in either up to 0.84% higher absolute accuracy using the original training budget or up to 57% reduced training time while achieving the original reported accuracy. For example, we achieve state-of-the-art (SOTA) accuracy for IWSLT'14 (DE-EN) and WMT'14 (DE-EN) datasets by just modifying the learning rate schedule of a high performing model.", "field": [], "task": ["Machine Translation"], "method": [], "dataset": ["WMT2014 German-English", "IWSLT2014 German-English"], "metric": ["BLEU score"], "title": "Wide-minima Density Hypothesis and the Explore-Exploit Learning Rate Schedule"} {"abstract": "Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. 
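The wide-minima abstract above proposes an explore-exploit learning-rate schedule: hold a high learning rate during an exploration phase, then decay it to exploit. One simple instantiation (constant, then linear decay) is sketched below; the split ratio and decay shape are assumptions rather than the paper's exact schedule.

```python
def explore_exploit_lr(step, total_steps, peak_lr=1e-3, explore_frac=0.5, min_lr=1e-6):
    """High constant LR for the explore phase, then linear decay during the exploit phase."""
    explore_steps = int(explore_frac * total_steps)
    if step < explore_steps:
        return peak_lr
    progress = (step - explore_steps) / max(1, total_steps - explore_steps)
    return peak_lr + (min_lr - peak_lr) * progress

schedule = [explore_exploit_lr(s, total_steps=100) for s in range(100)]
print(schedule[0], schedule[49], schedule[75], schedule[99])
```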
Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes.", "field": [], "task": ["Image Classification", "Panoptic Segmentation"], "method": [], "dataset": ["COCO panoptic", "Cityscapes val", "Mapillary val", "COCO test-dev", "Cityscapes test"], "metric": ["PQst", "mIoU", "PQth", "PQ", "AP"], "title": "Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation"} {"abstract": "In skeleton-based action recognition, graph convolutional networks (GCNs), which model human body skeletons using graphical components such as nodes and connections, have achieved remarkable performance recently. However, current state-of-the-art methods for skeleton-based action recognition usually work on the assumption that the completely observed skeletons will be provided. This may be problematic to apply this assumption in real scenarios since there is always a possibility that captured skeletons are incomplete or noisy. In this work, we propose a skeleton-based action recognition method which is robust to noise information of given skeleton features. The key insight of our approach is to train a model by maximizing the mutual information between normal and noisy skeletons using a predictive coding manner. We have conducted comprehensive experiments about skeleton-based action recognition with defected skeletons using NTU-RGB+D and Kinetics-Skeleton datasets. The experimental results demonstrate that our approach achieves outstanding performance when skeleton samples are noised compared with existing state-of-the-art methods.", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition"], "method": [], "dataset": ["NTU RGB+D", "Kinetics-Skeleton dataset"], "metric": ["Accuracy (CS)", "Accuracy (CV)", "Accuracy"], "title": "Predictively Encoded Graph Convolutional Network for Noise-Robust Skeleton-based Action Recognition"} {"abstract": "Videos are a rich source of multi-modal supervision. In this work, we learn representations using self-supervision by leveraging three modalities naturally present in videos: visual, audio and language streams. To this end, we introduce the notion of a multimodal versatile network -- a network that can ingest multiple modalities and whose representations enable downstream tasks in multiple modalities. In particular, we explore how best to combine the modalities, such that fine-grained representations of the visual and audio modalities can be maintained, whilst also integrating text into a common embedding. Driven by versatility, we also introduce a novel process of deflation, so that the networks can be effortlessly applied to the visual data in the form of video or a static image. We demonstrate how such networks trained on large collections of unlabelled video data can be applied on video, video-text, image and audio tasks. Equipped with these representations, we obtain state-of-the-art performance on multiple challenging benchmarks including UCF101, HMDB51, Kinetics600, AudioSet and ESC-50 when compared to previous self-supervised work. 
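The Axial-DeepLab abstract above factorises 2D self-attention into two 1D attentions, one along the height axis and one along the width axis. A compact single-head version of that factorisation is shown below; the paper's position-sensitive terms are omitted, and the head and width choices are assumptions.

```python
import torch
import torch.nn as nn

class AxialAttention(nn.Module):
    """Single-head attention applied along one spatial axis of a (B, C, H, W) map."""
    def __init__(self, c, axis):                  # axis: 2 for height, 3 for width
        super().__init__()
        self.qkv = nn.Linear(c, 3 * c)
        self.axis = axis

    def forward(self, x):
        if self.axis == 2:                        # attend along H: treat W as batch
            x = x.permute(0, 3, 2, 1)             # (B, W, H, C)
        else:                                     # attend along W: treat H as batch
            x = x.permute(0, 2, 3, 1)             # (B, H, W, C)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = torch.softmax(q @ k.transpose(-1, -2) / q.size(-1) ** 0.5, dim=-1)
        out = attn @ v
        return out.permute(0, 3, 2, 1) if self.axis == 2 else out.permute(0, 3, 1, 2)

x = torch.randn(1, 32, 16, 24)
y = AxialAttention(32, axis=3)(AxialAttention(32, axis=2)(x))   # height pass, then width pass
print(y.shape)                                                  # torch.Size([1, 32, 16, 24])
```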
Our models are publicly available.", "field": [], "task": ["Action Recognition In Videos", "Audio Classification", "Self-Supervised Action Recognition"], "method": [], "dataset": ["HMDB51", "UCF101", "AudioSet", "ESC-50"], "metric": ["Test mAP", "3-fold Accuracy", "Pre-Training Dataset", "Top-1 Accuracy"], "title": "Self-Supervised MultiModal Versatile Networks"} {"abstract": "Document date is essential for many important tasks, such as document\nretrieval, summarization, event detection, etc. While existing approaches for\nthese tasks assume accurate knowledge of the document date, this is not always\navailable, especially for arbitrary documents from the Web. Document Dating is\na challenging problem which requires inference over the temporal structure of\nthe document. Prior document dating systems have largely relied on handcrafted\nfeatures while ignoring such document internal structures. In this paper, we\npropose NeuralDater, a Graph Convolutional Network (GCN) based document dating\napproach which jointly exploits syntactic and temporal graph structures of\ndocument in a principled way. To the best of our knowledge, this is the first\napplication of deep learning for the problem of document dating. Through\nextensive experiments on real-world datasets, we find that NeuralDater\nsignificantly outperforms state-of-the-art baseline by 19% absolute (45%\nrelative) accuracy points.", "field": [], "task": ["Document Dating"], "method": [], "dataset": ["APW", "NYT"], "metric": ["Accuracy"], "title": "Dating Documents using Graph Convolution Networks"} {"abstract": "Deep convolutional neural network based image super-resolution (SR) models have shown superior performance in recovering the underlying high resolution (HR) images from low resolution (LR) images obtained from the predefined downscaling methods. In this paper we propose a learned image downscaling method based on content adaptive resampler (CAR) with consideration on the upscaling process. The proposed resampler network generates content adaptive image resampling kernels that are applied to the original HR input to generate pixels on the downscaled image. Moreover, a differentiable upscaling (SR) module is employed to upscale the LR result into its underlying HR counterpart. By back-propagating the reconstruction error down to the original HR input across the entire framework to adjust model parameters, the proposed framework achieves a new state-of-the-art SR performance through upscaling guided image resamplers which adaptively preserve detailed information that is essential to the upscaling. Experimental results indicate that the quality of the generated LR image is comparable to that of the traditional interpolation based method, but the significant SR performance gain is achieved by deep SR models trained jointly with the CAR model. The code is publicly available on: URL https://github.com/sunwj/CAR.", "field": [], "task": ["Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["Set14 - 2x upscaling", "Set14 - 4x upscaling", "BSD100 - 2x upscaling", "DIV2K val - 2x upscaling", "Urban100 - 2x upscaling", "BSD100 - 4x upscaling", "DIV2K val - 4x upscaling", "Set5 - 4x upscaling", "Set5 - 2x upscaling", "Urban100 - 4x upscaling"], "metric": ["PSNR"], "title": "Learned Image Downscaling for Upscaling using Content Adaptive Resampler"} {"abstract": "Measuring the colorfulness of a natural or virtual scene is critical for many applications in image processing field ranging from capturing to display. 
In this paper, we propose the first deep learning-based colorfulness estimation metric. For this purpose, we develop a color rating model which simultaneously learns to extract the pertinent characteristic color features and the mapping from feature space to the ideal colorfulness scores for a variety of natural colored images. Additionally, we propose to overcome the lack of an adequate annotated dataset by combining/aligning two publicly available colorfulness databases using the results of a new subjective test which employs a common subset of both databases. Using the obtained subjectively annotated dataset with 180 colored images, we finally demonstrate the efficacy of our proposed model over the traditional methods, both quantitatively and qualitatively.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["CIFAR-10"], "metric": ["PARAMS", "Percentage correct"], "title": "ColorNet -- Estimating Colorfulness in Natural Images"} {"abstract": "We propose a generative model for single-channel EEG that incorporates the constraints experts actively enforce during visual scoring. The framework takes the form of a dynamic Bayesian network with depth in both the latent variables and the observation likelihoods: while the hidden variables control the durations, state transitions, and robustness, the observation architectures parameterize Normal-Gamma distributions. The resulting model allows for time series segmentation into local, reoccurring dynamical regimes by exploiting probabilistic models and deep learning. Unlike typical detectors, our model takes the raw data (up to resampling) without pre-processing (e.g., filtering, windowing, thresholding) or post-processing (e.g., event merging). This not only makes the model appealing to real-time applications, but it also yields interpretable hyperparameters that are analogous to known clinical criteria. We derive algorithms for exact, tractable inference as a special case of Generalized Expectation Maximization via dynamic programming and backpropagation. We validate the model on three public datasets and provide support that more complex models are able to surpass state-of-the-art detectors while being transparent, auditable, and generalizable.", "field": [], "task": ["EEG", "Sleep spindles detection", "Time Series"], "method": [], "dataset": ["DREAMS sleep spindles"], "metric": ["MCC"], "title": "Deep Neural Dynamic Bayesian Networks applied to EEG sleep spindles modeling"} {"abstract": "Subjective bias detection is critical for applications like propaganda detection, content recommendation, sentiment analysis, and bias neutralization. This bias is introduced in natural language via inflammatory words and phrases, casting doubt over facts, and presupposing the truth. In this work, we perform comprehensive experiments for detecting subjective bias using BERT-based models on the Wiki Neutrality Corpus (WNC). The dataset consists of $360k$ labeled instances, from Wikipedia edits that remove various instances of the bias. We further propose BERT-based ensembles that outperform state-of-the-art methods like $BERT_{large}$ by a margin of $5.6$ F1 score.", "field": [], "task": ["Bias Detection", "Propaganda detection", "Sentiment Analysis", "Word Embeddings"], "method": [], "dataset": ["Wiki Neutrality Corpus"], "metric": ["F1"], "title": "Towards Detection of Subjective Bias using Contextualized Word Embeddings"} {"abstract": "We present a new approach to modeling visual attributes.
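The ColorNet abstract above contrasts a learned colorfulness estimator with "traditional methods". As a point of reference only, here is a minimal sketch of one widely used hand-crafted baseline of that kind (the opponent-colour statistic of Hasler and Süsstrunk); the function name and example image are illustrative and this is not the paper's learned model.

```python
import numpy as np

def colorfulness_hasler_susstrunk(rgb: np.ndarray) -> float:
    """Hand-crafted colorfulness score for an RGB image with values in [0, 255].

    A classic opponent-colour baseline, not the learned ColorNet model.
    """
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    rg = r - g                  # red-green opponent channel
    yb = 0.5 * (r + g) - b      # yellow-blue opponent channel
    sigma = np.hypot(rg.std(), yb.std())
    mu = np.hypot(rg.mean(), yb.mean())
    return float(sigma + 0.3 * mu)

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    print(colorfulness_hasler_susstrunk(img))
```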
Prior work casts\nattributes in a similar role as objects, learning a latent representation where\nproperties (e.g., sliced) are recognized by classifiers much in the way objects\n(e.g., apple) are. However, this common approach fails to separate the\nattributes observed during training from the objects with which they are\ncomposed, making it ineffectual when encountering new attribute-object\ncompositions. Instead, we propose to model attributes as operators. Our\napproach learns a semantic embedding that explicitly factors out attributes\nfrom their accompanying objects, and also benefits from novel regularizers\nexpressing attribute operators' effects (e.g., blunt should undo the effects of\nsharp). Not only does our approach align conceptually with the linguistic role\nof attributes as modifiers, but it also generalizes to recognize unseen\ncompositions of objects and attributes. We validate our approach on two\nchallenging datasets and demonstrate significant improvements over the\nstate-of-the-art. In addition, we show that not only can our model recognize\nunseen compositions robustly in an open-world setting, it can also generalize\nto compositions where objects themselves were unseen during training.", "field": [], "task": ["Compositional Zero-Shot Learning", "Image Retrieval with Multi-Modal Query"], "method": [], "dataset": ["MIT-States"], "metric": ["Recall@1", "Recall@5", "Recall@10"], "title": "Attributes as Operators: Factorizing Unseen Attribute-Object Compositions"} {"abstract": "Action recognition and human pose estimation are closely related but both\nproblems are generally handled as distinct tasks in the literature. In this\nwork, we propose a multitask framework for jointly 2D and 3D pose estimation\nfrom still images and human action recognition from video sequences. We show\nthat a single architecture can be used to solve the two problems in an\nefficient way and still achieves state-of-the-art results. Additionally, we\ndemonstrate that optimization from end-to-end leads to significantly higher\naccuracy than separated learning. The proposed architecture can be trained with\ndata from different categories simultaneously in a seamlessly way. The reported\nresults on four datasets (MPII, Human3.6M, Penn Action and NTU) demonstrate the\neffectiveness of our method on the targeted tasks.", "field": [], "task": ["3D Pose Estimation", "Action Recognition", "Pose Estimation", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)"], "title": "2D/3D Pose Estimation and Action Recognition using Multitask Deep Learning"} {"abstract": "This paper explores the use of self-ensembling for visual domain adaptation\nproblems. Our technique is derived from the mean teacher variant (Tarvainen et\nal., 2017) of temporal ensembling (Laine et al;, 2017), a technique that\nachieved state of the art results in the area of semi-supervised learning. We\nintroduce a number of modifications to their approach for challenging domain\nadaptation scenarios and evaluate its effectiveness. Our approach achieves\nstate of the art results in a variety of benchmarks, including our winning\nentry in the VISDA-2017 visual domain adaptation challenge. 
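The "Attributes as Operators" abstract above models each attribute as an operator that transforms an object embedding before comparison with the image. Below is a minimal sketch of that idea, assuming a simple linear (matrix) operator per attribute and a dot-product compatibility score; the class names, feature dimensions, and projection layer are placeholders, and the paper's regularizers are not reproduced.

```python
import torch
import torch.nn as nn

class AttributeOperatorScorer(nn.Module):
    """Score image features against (attribute, object) compositions.

    Each attribute is a learned matrix that acts on the object embedding;
    the composed vector is compared to a projected image embedding.
    """
    def __init__(self, n_attrs: int, n_objs: int, dim: int = 300):
        super().__init__()
        self.obj_emb = nn.Embedding(n_objs, dim)
        # one dim x dim operator per attribute, initialised near the identity
        self.attr_ops = nn.Parameter(
            torch.eye(dim).unsqueeze(0).repeat(n_attrs, 1, 1))
        self.img_proj = nn.Linear(2048, dim)  # e.g. pooled CNN features

    def forward(self, img_feat, attr_idx, obj_idx):
        img = self.img_proj(img_feat)                        # (B, dim)
        obj = self.obj_emb(obj_idx)                          # (B, dim)
        op = self.attr_ops[attr_idx]                         # (B, dim, dim)
        composed = torch.bmm(op, obj.unsqueeze(-1)).squeeze(-1)
        return (img * composed).sum(dim=-1)                  # compatibility score

if __name__ == "__main__":
    model = AttributeOperatorScorer(n_attrs=115, n_objs=245)
    scores = model(torch.randn(4, 2048),
                   torch.tensor([0, 3, 7, 1]),
                   torch.tensor([2, 5, 9, 0]))
    print(scores.shape)  # torch.Size([4])
```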
In small image\nbenchmarks, our algorithm not only outperforms prior art, but can also achieve\naccuracy that is close to that of a classifier trained in a supervised fashion.", "field": [], "task": ["Domain Adaptation"], "method": [], "dataset": ["Synth Signs-to-GTSRB", "SVHN-to-MNIST", "USPS-to-MNIST", "MNIST-to-USPS", "VisDA2017"], "metric": ["Accuracy"], "title": "Self-ensembling for visual domain adaptation"} {"abstract": "Recent neural models have shown significant progress on the problem of\ngenerating short descriptive texts conditioned on a small number of database\nrecords. In this work, we suggest a slightly more difficult data-to-text\ngeneration task, and investigate how effective current approaches are on this\ntask. In particular, we introduce a new, large-scale corpus of data records\npaired with descriptive documents, propose a series of extractive evaluation\nmethods for analyzing performance, and obtain baseline results using current\nneural generation methods. Experiments show that these models produce fluent\ntext, but fail to convincingly approximate human-generated documents. Moreover,\neven templated baselines exceed the performance of these neural models on some\nmetrics, though copy- and reconstruction-based extensions lead to noticeable\nimprovements.", "field": [], "task": ["Data-to-Text Generation", "Text Generation"], "method": [], "dataset": ["Rotowire (Content Selection)", "RotoWire", "RotoWire (Content Ordering)", "RotoWire (Relation Generation)"], "metric": ["count", "Recall", "Precision", "DLD", "BLEU"], "title": "Challenges in Data-to-Document Generation"} {"abstract": "Multiple entities in a document generally exhibit complex inter-sentence relations, and cannot be well handled by existing relation extraction (RE) methods that typically focus on extracting intra-sentence relations for single entity pairs. In order to accelerate the research on document-level RE, we introduce DocRED, a new dataset constructed from Wikipedia and Wikidata with three features: (1) DocRED annotates both named entities and relations, and is the largest human-annotated dataset for document-level RE from plain text; (2) DocRED requires reading multiple sentences in a document to extract entities and infer their relations by synthesizing all information of the document; (3) along with the human-annotated data, we also offer large-scale distantly supervised data, which enables DocRED to be adopted for both supervised and weakly supervised scenarios. In order to verify the challenges of document-level RE, we implement recent state-of-the-art methods for RE and conduct a thorough evaluation of these methods on DocRED. Empirical results show that DocRED is challenging for existing RE methods, which indicates that document-level RE remains an open problem and requires further efforts. Based on the detailed analysis on the experiments, we discuss multiple promising directions for future research.", "field": [], "task": ["Relation Extraction"], "method": [], "dataset": ["DocRED"], "metric": ["Ign F1", "F1"], "title": "DocRED: A Large-Scale Document-Level Relation Extraction Dataset"} {"abstract": "Concepts, which represent a group of different instances sharing common\nproperties, are essential information in knowledge representation. Most\nconventional knowledge embedding methods encode both entities (concepts and\ninstances) and relations as vectors in a low dimensional semantic space\nequally, ignoring the difference between concepts and instances. 
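The self-ensembling abstract above builds on the mean teacher idea: a teacher copy of the network is an exponential moving average (EMA) of the student, and unlabeled target-domain data is used through a consistency loss between the two. A minimal sketch follows; the EMA decay, the choice of mean squared error on probabilities, and the toy linear model are assumptions, not the paper's exact configuration.

```python
import copy
import torch
import torch.nn.functional as F

def ema_update(teacher, student, alpha: float = 0.99):
    """Exponential moving average of student weights into the teacher."""
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)

def consistency_loss(student_logits, teacher_logits):
    """Squared error between class probabilities (one common choice)."""
    return F.mse_loss(F.softmax(student_logits, dim=1),
                      F.softmax(teacher_logits, dim=1))

if __name__ == "__main__":
    student = torch.nn.Linear(32, 10)
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    x_target = torch.randn(8, 32)           # unlabeled target-domain batch
    loss = consistency_loss(student(x_target), teacher(x_target))
    loss.backward()                          # gradients reach the student only
    ema_update(teacher, student)             # teacher follows the student
    print(float(loss))
```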
In this paper,\nwe propose a novel knowledge graph embedding model named TransC by\ndifferentiating concepts and instances. Specifically, TransC encodes each\nconcept in knowledge graph as a sphere and each instance as a vector in the\nsame semantic space. We use the relative positions to model the relations\nbetween concepts and instances (i.e., instanceOf), and the relations between\nconcepts and sub-concepts (i.e., subClassOf). We evaluate our model on both\nlink prediction and triple classification tasks on the dataset based on YAGO.\nExperimental results show that TransC outperforms state-of-the-art methods, and\ncaptures the semantic transitivity for instanceOf and subClassOf relation. Our\ncodes and datasets can be obtained from https:// github.com/davidlvxin/TransC.", "field": [], "task": ["Graph Embedding", "Knowledge Graph Embedding", "Knowledge Graphs", "Link Prediction", "Triple Classification"], "method": [], "dataset": ["YAGO39K"], "metric": ["Hits@3", "Recall", "F1-Score", "Precision", "Hits@1", "MRR", "Accuracy", "Hits@10"], "title": "Differentiating Concepts and Instances for Knowledge Graph Embedding"} {"abstract": "The availability of open-source software is playing a remarkable role in the\npopularization of speech recognition and deep learning. Kaldi, for instance, is\nnowadays an established framework used to develop state-of-the-art speech\nrecognizers. PyTorch is used to build neural networks with the Python language\nand has recently spawn tremendous interest within the machine learning\ncommunity thanks to its simplicity and flexibility.\n The PyTorch-Kaldi project aims to bridge the gap between these popular\ntoolkits, trying to inherit the efficiency of Kaldi and the flexibility of\nPyTorch. PyTorch-Kaldi is not only a simple interface between these software,\nbut it embeds several useful features for developing modern speech recognizers.\nFor instance, the code is specifically designed to naturally plug-in\nuser-defined acoustic models. As an alternative, users can exploit several\npre-implemented neural networks that can be customized using intuitive\nconfiguration files. PyTorch-Kaldi supports multiple feature and label streams\nas well as combinations of neural networks, enabling the use of complex neural\narchitectures. The toolkit is publicly-released along with a rich documentation\nand is designed to properly work locally or on HPC clusters.\n Experiments, that are conducted on several datasets and tasks, show that\nPyTorch-Kaldi can effectively be used to develop modern state-of-the-art speech\nrecognizers.", "field": [], "task": ["Distant Speech Recognition", "Noisy Speech Recognition", "Speech Recognition"], "method": [], "dataset": ["LibriSpeech test-clean", "DIRHA English WSJ", "TIMIT", "CHiME real"], "metric": ["Percentage error", "Word Error Rate (WER)"], "title": "The PyTorch-Kaldi Speech Recognition Toolkit"} {"abstract": "Multi-person pose estimation from a 2D image is an essential technique for\nhuman behavior understanding. In this paper, we propose a human pose refinement\nnetwork that estimates a refined pose from a tuple of an input image and input\npose. The pose refinement was performed mainly through an end-to-end trainable\nmulti-stage architecture in previous methods. However, they are highly\ndependent on pose estimation models and require careful model design. By\ncontrast, we propose a model-agnostic pose refinement method. 
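The TransC abstract above encodes each concept as a sphere and each instance as a vector in the same space. The sketch below writes down one plausible reading of the resulting geometric penalties (instance inside the concept sphere for instanceOf, sphere containment for subClassOf); the margin-free hinge form and the function names are simplifying assumptions rather than the paper's exact loss.

```python
import torch

def instance_of_score(inst, center, radius):
    """Penalty that is zero when the instance vector lies inside the
    concept sphere (simplified instanceOf relation)."""
    dist = torch.norm(inst - center, dim=-1)
    return torch.relu(dist - radius)

def sub_class_of_score(center_i, radius_i, center_j, radius_j):
    """Penalty that is zero when sphere i is fully contained in sphere j
    (simplified subClassOf relation between two concepts)."""
    dist = torch.norm(center_i - center_j, dim=-1)
    return torch.relu(dist + radius_i - radius_j)

if __name__ == "__main__":
    dim = 50
    inst = torch.randn(4, dim)
    c1, c2 = torch.randn(4, dim), torch.randn(4, dim)
    r1, r2 = torch.rand(4), torch.rand(4) + 1.0
    print(instance_of_score(inst, c1, r1))
    print(sub_class_of_score(c1, r1, c2, r2))
```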
According to a\nrecent study, state-of-the-art 2D human pose estimation methods have similar\nerror distributions. We use this error statistics as prior information to\ngenerate synthetic poses and use the synthesized poses to train our model. In\nthe testing stage, pose estimation results of any other methods can be input to\nthe proposed method. Moreover, the proposed model does not require code or\nknowledge about other methods, which allows it to be easily used in the\npost-processing step. We show that the proposed approach achieves better\nperformance than the conventional multi-stage refinement models and\nconsistently improves the performance of various state-of-the-art pose\nestimation methods on the commonly used benchmark. The code is available in\nthis https URL\\footnote{\\url{https://github.com/mks0601/PoseFix_RELEASE}}.", "field": [], "task": ["2D Human Pose Estimation", "Keypoint Detection", "Multi-Person Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["COCO", "COCO test-dev"], "metric": ["Test AP", "Validation AP", "APM", "AP75", "AP", "APL", "AP50", "AR"], "title": "PoseFix: Model-agnostic General Human Pose Refinement Network"} {"abstract": "We present SUNNYNLP, our system for solving SemEval 2018 Task 10: {``}Capturing Discriminative Attributes{''}. Our Support-Vector-Machine(SVM)-based system combines features extracted from pre-trained embeddings and statistical information from Is-A taxonomy to detect semantic difference of concepts pairs. Our system is demonstrated to be effective in detecting semantic difference and is ranked 1st in the competition in terms of F1 measure. The open source of our code is coined SUNNYNLP.", "field": [], "task": ["Dialogue State Tracking", "Question Answering", "Relation Extraction", "Semantic Textual Similarity"], "method": [], "dataset": ["SemEval 2018 Task 10"], "metric": ["F1-Score"], "title": "SUNNYNLP at SemEval-2018 Task 10: A Support-Vector-Machine-Based Method for Detecting Semantic Difference using Taxonomy and Word Embedding Features"} {"abstract": "During the last half decade, convolutional neural networks (CNNs) have\ntriumphed over semantic segmentation, which is one of the core tasks in many\napplications such as autonomous driving and augmented reality. However, to\ntrain CNNs requires a considerable amount of data, which is difficult to\ncollect and laborious to annotate. Recent advances in computer graphics make it\npossible to train CNNs on photo-realistic synthetic imagery with\ncomputer-generated annotations. Despite this, the domain mismatch between the\nreal images and the synthetic data hinders the models' performance. Hence, we\npropose a curriculum-style learning approach to minimizing the domain gap in\nurban scene semantic segmentation. The curriculum domain adaptation solves easy\ntasks first to infer necessary properties about the target domain; in\nparticular, the first task is to learn global label distributions over images\nand local distributions over landmark superpixels. These are easy to estimate\nbecause images of urban scenes have strong idiosyncrasies (e.g., the size and\nspatial relations of buildings, streets, cars, etc.). We then train a\nsegmentation network, while regularizing its predictions in the target domain\nto follow those inferred properties. In experiments, our method outperforms the\nbaselines on two datasets and two backbone networks. 
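The PoseFix abstract above trains a refinement network on synthetic input poses generated from the error statistics of existing detectors. Below is a heavily simplified sketch of that synthesis step: jitter, occasional gross relocations, and left-right swaps applied to a ground-truth pose. The error rates, joint indices, and magnitudes are arbitrary placeholders, not the statistics estimated in the paper.

```python
import numpy as np

def synthesize_input_pose(gt_pose, jitter_std=5.0, miss_prob=0.05,
                          swap_pairs=((5, 6), (7, 8)), swap_prob=0.05,
                          rng=None):
    """Corrupt a ground-truth 2D pose of shape (J, 2) to mimic detector errors."""
    rng = rng or np.random.default_rng()
    pose = gt_pose + rng.normal(scale=jitter_std, size=gt_pose.shape)  # jitter
    for j in range(len(pose)):
        if rng.random() < miss_prob:         # occasionally relocate a joint far away
            pose[j] += rng.uniform(-40, 40, size=2)
    for a, b in swap_pairs:
        if rng.random() < swap_prob:         # confuse left/right joint pairs
            pose[[a, b]] = pose[[b, a]]
    return pose

if __name__ == "__main__":
    gt = np.random.rand(17, 2) * 256         # e.g. a COCO-style 17-keypoint pose
    print(synthesize_input_pose(gt).shape)   # (17, 2)
```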
We also report extensive\nablation studies about our approach.", "field": [], "task": ["Autonomous Driving", "Domain Adaptation", "Image-to-Image Translation", "Semantic Segmentation", "Synthetic-to-Real Translation"], "method": [], "dataset": ["GTAV-to-Cityscapes Labels", "SYNTHIA-to-Cityscapes"], "metric": ["mIoU (13 classes)", "mIoU"], "title": "A Curriculum Domain Adaptation Approach to the Semantic Segmentation of Urban Scenes"} {"abstract": "Deep learning approaches to optical flow estimation have seen rapid progress\nover the recent years. One common trait of many networks is that they refine an\ninitial flow estimate either through multiple stages or across the levels of a\ncoarse-to-fine representation. While leading to more accurate results, the\ndownside of this is an increased number of parameters. Taking inspiration from\nboth classical energy minimization approaches as well as residual networks, we\npropose an iterative residual refinement (IRR) scheme based on weight sharing\nthat can be combined with several backbone networks. It reduces the number of\nparameters, improves the accuracy, or even achieves both. Moreover, we show\nthat integrating occlusion prediction and bi-directional flow estimation into\nour IRR scheme can further boost the accuracy. Our full network achieves\nstate-of-the-art results for both optical flow and occlusion estimation across\nseveral standard datasets.", "field": [], "task": ["Occlusion Estimation", "Optical Flow Estimation"], "method": [], "dataset": ["KITTI 2012", "Sintel-final", "Sintel-clean", "KITTI 2015"], "metric": ["Average End-Point Error", "Fl-all"], "title": "Iterative Residual Refinement for Joint Optical Flow and Occlusion Estimation"} {"abstract": "Humans can robustly learn novel visual concepts even when images undergo various deformations and lose certain information. Mimicking the same behavior and synthesizing deformed instances of new concepts may help visual recognition systems perform better one-shot learning, i.e., learning concepts from one or few examples. Our key insight is that, while the deformed images may not be visually realistic, they still maintain critical semantic information and contribute significantly to formulating classifier decision boundaries. Inspired by the recent progress of meta-learning, we combine a meta-learner with an image deformation sub-network that produces additional training examples, and optimize both models in an end-to-end manner. The deformation sub-network learns to deform images by fusing a pair of images --- a probe image that keeps the visual content and a gallery image that diversifies the deformations. We demonstrate results on the widely used one-shot learning benchmarks (miniImageNet and ImageNet 1K Challenge datasets), which significantly outperform state-of-the-art approaches. Code is available at https://github.com/tankche1/IDeMe-Net.", "field": [], "task": ["Meta-Learning", "One-Shot Learning"], "method": [], "dataset": ["Mini-Imagenet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Image Deformation Meta-Networks for One-Shot Learning"} {"abstract": "Emotion cause identification aims at identifying the potential causes that lead to a certain emotion expression in text. Several techniques including rule based methods and traditional machine learning methods have been proposed to address this problem based on manually designed rules and features. 
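The iterative residual refinement (IRR) abstract above shares one refinement module across steps, each step predicting a residual update to the current flow estimate. The sketch below shows only that structural idea with a toy two-layer convolutional refiner; the real decoder, warping, and occlusion branches are not reproduced, and all layer sizes are placeholders.

```python
import torch
import torch.nn as nn

class SharedRefiner(nn.Module):
    """Toy weight-shared module: predicts a residual flow update from the
    image pair and the current flow estimate (stand-in for a real decoder)."""
    def __init__(self, ch: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 3 + 2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 2, 3, padding=1))

    def forward(self, img1, img2, flow):
        return self.net(torch.cat([img1, img2, flow], dim=1))

def iterative_residual_refinement(img1, img2, refiner, steps: int = 3):
    """flow_{k+1} = flow_k + refiner(img1, img2, flow_k), reusing one module."""
    b, _, h, w = img1.shape
    flow = torch.zeros(b, 2, h, w, device=img1.device)
    for _ in range(steps):
        flow = flow + refiner(img1, img2, flow)
    return flow

if __name__ == "__main__":
    refiner = SharedRefiner()
    f = iterative_residual_refinement(torch.randn(1, 3, 64, 64),
                                      torch.randn(1, 3, 64, 64), refiner)
    print(f.shape)  # torch.Size([1, 2, 64, 64])
```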
More recently, some deep learning methods have also been applied to this task, with the attempt to automatically capture the causal relationship of emotion and its causes embodied in the text. In this work, we find that in addition to the content of the text, there are another two kinds of information, namely relative position and global labels, that are also very important for emotion cause identification. To integrate such information, we propose a model based on the neural network architecture to encode the three elements ($i.e.$, text content, relative position and global label), in an unified and end-to-end fashion. We introduce a relative position augmented embedding learning algorithm, and transform the task from an independent prediction problem to a reordered prediction problem, where the dynamic global label information is incorporated. Experimental results on a benchmark emotion cause dataset show that our model achieves new state-of-the-art performance and performs significantly better than a number of competitive baselines. Further analysis shows the effectiveness of the relative position augmented embedding learning algorithm and the reordered prediction mechanism with dynamic global labels.", "field": [], "task": ["Emotion Cause Extraction"], "method": [], "dataset": ["ECE"], "metric": ["F1"], "title": "From Independent Prediction to Re-ordered Prediction: Integrating Relative Position and Global Label Information to Emotion Cause Identification"} {"abstract": "Predicting molecular properties (e.g., atomization energy) is an essential issue in quantum chemistry, which could speed up much research progress, such as drug designing and substance discovery. Traditional studies based on density functional theory (DFT) in physics are proved to be time-consuming for predicting large number of molecules. Recently, the machine learning methods, which consider much rule-based information, have also shown potentials for this issue. However, the complex inherent quantum interactions of molecules are still largely underexplored by existing solutions. In this paper, we propose a generalizable and transferable Multilevel Graph Convolutional neural Network (MGCN) for molecular property prediction. Specifically, we represent each molecule as a graph to preserve its internal structure. Moreover, the well-designed hierarchical graph neural network directly extracts features from the conformation and spatial information followed by the multilevel interactions. As a consequence, the multilevel overall representations can be utilized to make the prediction. Extensive experiments on both datasets of equilibrium and off-equilibrium molecules demonstrate the effectiveness of our model. Furthermore, the detailed results also prove that MGCN is generalizable and transferable for the prediction.", "field": [], "task": ["Graph Regression", "Molecular Property Prediction"], "method": [], "dataset": ["Lipophilicity "], "metric": ["RMSE"], "title": "Molecular Property Prediction: A Multilevel Quantum Interactions Modeling Perspective"} {"abstract": "For sequence transduction tasks like speech recognition, a strong structured prior model encodes rich information about the target space, implicitly ruling out invalid sequences by assigning them low probability. In this work, we propose local prior matching (LPM), a semi-supervised objective that distills knowledge from a strong prior (e.g. a language model) to provide learning signal to a discriminative model trained on unlabeled speech. 
We demonstrate that LPM is theoretically well-motivated, simple to implement, and superior to existing knowledge distillation techniques under comparable settings. Starting from a baseline trained on 100 hours of labeled speech, with an additional 360 hours of unlabeled data, LPM recovers 54% and 73% of the word error rate on clean and noisy test sets relative to a fully supervised model on the same data.", "field": [], "task": ["Knowledge Distillation", "Language Modelling", "Speech Recognition"], "method": [], "dataset": ["LibriSpeech test-other", "LibriSpeech test-clean"], "metric": ["Word Error Rate (WER)"], "title": "Semi-Supervised Speech Recognition via Local Prior Matching"} {"abstract": "In this paper we address the problem of generating person images conditioned\non a given pose. Specifically, given an image of a person and a target pose, we\nsynthesize a new image of that person in the novel pose. In order to deal with\npixel-to-pixel misalignments caused by the pose differences, we introduce\ndeformable skip connections in the generator of our Generative Adversarial\nNetwork. Moreover, a nearest-neighbour loss is proposed instead of the common\nL1 and L2 losses in order to match the details of the generated image with the\ntarget image. We test our approach using photos of persons in different poses\nand we compare our method with previous work in this area showing\nstate-of-the-art results in two benchmarks. Our method can be applied to the\nwider field of deformable object generation, provided that the pose of the\narticulated object can be extracted using a keypoint detector.", "field": [], "task": ["Gesture-to-Gesture Translation", "Image Generation", "Image-to-Image Translation", "Pose Transfer"], "method": [], "dataset": ["Senz3D", "NTU Hand Digit", "Deep-Fashion"], "metric": ["PSNR", "Retrieval Top10 Recall", "LPIPS", "SSIM", "AMT", "IS"], "title": "Deformable GANs for Pose-based Human Image Generation"} {"abstract": "Deep neural networks have enjoyed remarkable success for various vision\ntasks, however it remains challenging to apply CNNs to domains lacking a\nregular underlying structures such as 3D point clouds. Towards this we propose\na novel convolutional architecture, termed SpiderCNN, to efficiently extract\ngeometric features from point clouds. SpiderCNN is comprised of units called\nSpiderConv, which extend convolutional operations from regular grids to\nirregular point sets that can be embedded in R^n, by parametrizing a family of\nconvolutional filters. We design the filter as a product of a simple step\nfunction that captures local geodesic information and a Taylor polynomial that\nensures the expressiveness. SpiderCNN inherits the multi-scale hierarchical\narchitecture from classical CNNs, which allows it to extract semantic deep\nfeatures. Experiments on ModelNet40 demonstrate that SpiderCNN achieves\nstate-of-the-art accuracy 92.4% on standard benchmarks, and shows competitive\nperformance on segmentation task.", "field": [], "task": ["3D Part Segmentation", "3D Point Cloud Classification"], "method": [], "dataset": ["ShapeNet-Part", "ModelNet40"], "metric": ["Overall Accuracy", "Class Average IoU", "Instance Average IoU"], "title": "SpiderCNN: Deep Learning on Point Sets with Parameterized Convolutional Filters"} {"abstract": "Single image super-resolution is the task of inferring a high-resolution\nimage from a single low-resolution input. 
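The Deformable GANs abstract above replaces plain L1/L2 losses with a nearest-neighbour loss that tolerates small spatial misalignments between the generated and target images. A minimal sketch of that idea follows: for each generated pixel, take the minimum per-pixel L1 distance over a small neighbourhood of the target. The neighbourhood size and the pixel-level (rather than patch- or feature-level) formulation are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def nearest_neighbour_l1_loss(generated, target, k: int = 3):
    """Minimum L1 distance to any target pixel in a k x k neighbourhood,
    averaged over the image; a sketch of misalignment-tolerant matching."""
    pad = k // 2
    # unfold the target into k*k shifted copies: (B, C*k*k, H*W)
    patches = F.unfold(target, kernel_size=k, padding=pad)
    b, c, h, w = generated.shape
    patches = patches.view(b, c, k * k, h, w)
    diff = (patches - generated.unsqueeze(2)).abs().sum(dim=1)  # (B, k*k, H, W)
    return diff.min(dim=1).values.mean()

if __name__ == "__main__":
    g, t = torch.rand(2, 3, 32, 32), torch.rand(2, 3, 32, 32)
    print(float(nearest_neighbour_l1_loss(g, t)))
```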
Traditionally, the performance of\nalgorithms for this task is measured using pixel-wise reconstruction measures\nsuch as peak signal-to-noise ratio (PSNR) which have been shown to correlate\npoorly with the human perception of image quality. As a result, algorithms\nminimizing these metrics tend to produce over-smoothed images that lack\nhigh-frequency textures and do not look natural despite yielding high PSNR\nvalues.\n We propose a novel application of automated texture synthesis in combination\nwith a perceptual loss focusing on creating realistic textures rather than\noptimizing for a pixel-accurate reproduction of ground truth images during\ntraining. By using feed-forward fully convolutional neural networks in an\nadversarial training setting, we achieve a significant boost in image quality\nat high magnification ratios. Extensive experiments on a number of datasets\nshow the effectiveness of our approach, yielding state-of-the-art results in\nboth quantitative and qualitative benchmarks.", "field": [], "task": ["Image Super-Resolution", "Super-Resolution", "Texture Synthesis"], "method": [], "dataset": ["FFHQ 256 x 256 - 4x upscaling", "Set14 - 4x upscaling", "BSD100 - 4x upscaling", "FFHQ 1024 x 1024 - 4x upscaling", "Set5 - 4x upscaling", "Urban100 - 4x upscaling"], "metric": ["SSIM", "PSNR", "FID", "MS-SSIM"], "title": "EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis"} {"abstract": "A capsule is a group of neurons whose outputs represent different properties of the same entity. Each layer in a capsule network contains many capsules. We describe a version of capsules in which each capsule has a logistic unit to represent the presence of an entity and a 4x4 matrix which could learn to represent the relationship between that entity and the viewer (the pose). A capsule in one layer votes for the pose matrix of many different capsules in the layer above by multiplying its own pose matrix by trainable viewpoint-invariant transformation matrices that could learn to represent part-whole relationships. Each of these votes is weighted by an assignment coefficient. These coefficients are iteratively updated for each image using the Expectation-Maximization algorithm such that the output of each capsule is routed to a capsule in the layer above that receives a cluster of similar votes. The transformation matrices are trained discriminatively by backpropagating through the unrolled iterations of EM between each pair of adjacent capsule layers. On the smallNORB benchmark, capsules reduce the number of test errors by 45\\% compared to the state-of-the-art. Capsules also show far more resistance to white box adversarial attacks than our baseline convolutional neural network.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["smallNORB"], "metric": ["Classification Error"], "title": "Matrix capsules with EM routing"} {"abstract": "We introduce a self-supervised representation learning method based on the\ntask of temporal alignment between videos. The method trains a network using\ntemporal cycle consistency (TCC), a differentiable cycle-consistency loss that\ncan be used to find correspondences across time in multiple videos. The\nresulting per-frame embeddings can be used to align videos by simply matching\nframes using the nearest-neighbors in the learned embedding space.\n To evaluate the power of the embeddings, we densely label the Pouring and\nPenn Action video datasets for action phases. 
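The EnhanceNet abstract above combines a perceptual objective with automated texture synthesis. Texture terms of this kind are commonly implemented as Gram-matrix matching on deep feature maps; the sketch below shows that generic form with random tensors standing in for real VGG features, so it is an illustration of the loss shape rather than the paper's exact training objective.

```python
import torch

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    """Channel-by-channel correlation of a (B, C, H, W) feature map."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def texture_loss(feat_sr: torch.Tensor, feat_hr: torch.Tensor) -> torch.Tensor:
    """Match second-order feature statistics of the super-resolved image
    to those of the ground truth (a common texture/style loss form)."""
    return torch.mean((gram_matrix(feat_sr) - gram_matrix(feat_hr)) ** 2)

if __name__ == "__main__":
    # stand-ins for deep feature maps of the SR output and the HR target
    f_sr, f_hr = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
    print(float(texture_loss(f_sr, f_hr)))
```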
We show that (i) the learned\nembeddings enable few-shot classification of these action phases, significantly\nreducing the supervised training requirements; and (ii) TCC is complementary to\nother methods of self-supervised learning in videos, such as Shuffle and Learn\nand Time-Contrastive Networks. The embeddings are also used for a number of\napplications based on alignment (dense temporal correspondence) between video\npairs, including transfer of metadata of synchronized modalities between videos\n(sounds, temporal semantic labels), synchronized playback of multiple videos,\nand anomaly detection. Project webpage:\nhttps://sites.google.com/view/temporal-cycle-consistency .", "field": [], "task": ["Anomaly Detection", "Representation Learning", "Self-Supervised Learning", "Video Alignment"], "method": [], "dataset": ["UPenn Action"], "metric": ["Kendall's Tau"], "title": "Temporal Cycle-Consistency Learning"} {"abstract": "Many NLP tasks such as tagging and machine reading comprehension are faced with the severe data imbalance issue: negative examples significantly outnumber positive examples, and the huge number of background examples (or easy-negative examples) overwhelms the training. The most commonly used cross entropy (CE) criteria is actually an accuracy-oriented objective, and thus creates a discrepancy between training and test: at training time, each training instance contributes equally to the objective function, while at test time F1 score concerns more about positive examples. In this paper, we propose to use dice loss in replacement of the standard cross-entropy objective for data-imbalanced NLP tasks. Dice loss is based on the Sorensen-Dice coefficient or Tversky index, which attaches similar importance to false positives and false negatives, and is more immune to the data-imbalance issue. To further alleviate the dominating influence from easy-negative examples in training, we propose to associate training examples with dynamically adjusted weights to deemphasize easy-negative examples.Theoretical analysis shows that this strategy narrows down the gap between the F1 score in evaluation and the dice loss in training. With the proposed training objective, we observe significant performance boost on a wide range of data imbalanced NLP tasks. Notably, we are able to achieve SOTA results on CTB5, CTB6 and UD1.4 for the part of speech tagging task; SOTA results on CoNLL03, OntoNotes5.0, MSRA and OntoNotes4.0 for the named entity recognition task; along with competitive results on the tasks of machine reading comprehension and paraphrase identification.", "field": [], "task": ["Chinese Named Entity Recognition", "Machine Reading Comprehension", "Named Entity Recognition", "Paraphrase Identification", "Part-Of-Speech Tagging", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["SQuAD2.0 dev", "OntoNotes 4", "MSRA", "Ontonotes v5 (English)", "CoNLL 2003 (English)", "SQuAD1.1 dev"], "metric": ["EM", "F1"], "title": "Dice Loss for Data-imbalanced NLP Tasks"} {"abstract": "Making a precise annotation in a large dataset is crucial to the performance of object detection. While the object detection task requires a huge number of annotated samples to guarantee its performance, placing bounding boxes for every object in each sample is time-consuming and costs a lot. 
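The dice loss abstract above replaces cross entropy with a Sorensen-Dice-based objective for imbalanced labels. Below is a minimal sketch of a plain soft dice loss for binary token labels; the paper's self-adjusting variant with dynamically weighted examples is not shown, and the smoothing constant is an assumption.

```python
import torch

def soft_dice_loss(probs: torch.Tensor, targets: torch.Tensor,
                   eps: float = 1.0) -> torch.Tensor:
    """Soft dice loss for binary labels.

    probs:   predicted positive-class probabilities, shape (N,)
    targets: gold labels in {0, 1}, shape (N,)
    The smoothing term `eps` keeps the loss defined for all-negative batches.
    """
    inter = (probs * targets).sum()
    union = (probs * probs).sum() + (targets * targets).sum()
    return 1.0 - (2.0 * inter + eps) / (union + eps)

if __name__ == "__main__":
    logits = torch.randn(16, requires_grad=True)
    labels = torch.randint(0, 2, (16,)).float()
    loss = soft_dice_loss(torch.sigmoid(logits), labels)
    loss.backward()
    print(float(loss))
```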
To alleviate this problem, we propose a Consistency-based Semi-supervised learning method for object Detection (CSD), which is a way of using consistency constraints as a tool for enhancing detection performance by making full use of available unlabeled data. Specifically, the consistency constraint is applied not only for object classification but also for the localization. We also proposed Background Elimination (BE) to avoid the negative effect of the predominant backgrounds on the detection performance. We have evaluated the proposed CSD both in single-stage and two-stage detectors and the results show the effectiveness of our method.", "field": [], "task": ["Object Classification", "Object Detection", "Semi-Supervised Object Detection"], "method": [], "dataset": ["COCO 1% labeled data"], "metric": ["mAP"], "title": "Consistency-based Semi-supervised Learning for Object detection"} {"abstract": "Data augmentation is an effective regularization strategy to alleviate the overfitting, which is an inherent drawback of the deep neural networks. However, data augmentation is rarely considered for point cloud processing despite many studies proposing various augmentation methods for image data. Actually, regularization is essential for point clouds since lack of generality is more likely to occur in point cloud due to small datasets. This paper proposes a Rigid Subset Mix (RSMix), a novel data augmentation method for point clouds that generates a virtual mixed sample by replacing part of the sample with shape-preserved subsets from another sample. RSMix preserves structural information of the point cloud sample by extracting subsets from each sample without deformation using a neighboring function. The neighboring function was carefully designed considering unique properties of point cloud, unordered structure and non-grid. Experiments verified that RSMix successfully regularized the deep neural networks with remarkable improvement for shape classification. We also analyzed various combinations of data augmentations including RSMix with single and multi-view evaluations, based on abundant ablation studies.", "field": [], "task": ["3D Point Cloud Classification", "Data Augmentation"], "method": [], "dataset": ["ModelNet40"], "metric": ["Overall Accuracy"], "title": "Regularization Strategy for Point Cloud via Rigidly Mixed Sample"} {"abstract": "Recent deep learning approaches for representation learning on graphs follow\na neighborhood aggregation procedure. We analyze some important properties of\nthese models, and propose a strategy to overcome those. In particular, the\nrange of \"neighboring\" nodes that a node's representation draws from strongly\ndepends on the graph structure, analogous to the spread of a random walk. To\nadapt to local neighborhood properties and tasks, we explore an architecture --\njumping knowledge (JK) networks -- that flexibly leverages, for each node,\ndifferent neighborhood ranges to enable better structure-aware representation.\nIn a number of experiments on social, bioinformatics and citation networks, we\ndemonstrate that our model achieves state-of-the-art performance. 
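The RSMix abstract above mixes two point clouds by replacing a rigid, shape-preserved subset of one sample with a subset of another. The sketch below swaps the k nearest neighbours around a random query point without deforming either subset; the paper's exact neighbouring function and the label-mixing step are omitted, and the translation rule used here is an assumption.

```python
import numpy as np

def rigid_subset_mix(pc_a, pc_b, k: int = 256, rng=None):
    """Replace the k points of pc_a nearest to a random query point with the
    k points of pc_b nearest to a random query point of pc_b (both subsets
    kept rigid). Both clouds have shape (N, 3)."""
    rng = rng or np.random.default_rng()
    qa = pc_a[rng.integers(len(pc_a))]
    qb = pc_b[rng.integers(len(pc_b))]
    idx_a = np.argsort(np.linalg.norm(pc_a - qa, axis=1))[:k]
    idx_b = np.argsort(np.linalg.norm(pc_b - qb, axis=1))[:k]
    # translate the donor subset so its query point lands on the removed region
    donor = pc_b[idx_b] - qb + qa
    mixed = pc_a.copy()
    mixed[idx_a] = donor
    return mixed

if __name__ == "__main__":
    a, b = np.random.randn(1024, 3), np.random.randn(1024, 3)
    print(rigid_subset_mix(a, b).shape)  # (1024, 3)
```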
Furthermore,\ncombining the JK framework with models like Graph Convolutional Networks,\nGraphSAGE and Graph Attention Networks consistently improves those models'\nperformance.", "field": [], "task": ["Node Classification", "Representation Learning"], "method": [], "dataset": ["PPI"], "metric": ["F1"], "title": "Representation Learning on Graphs with Jumping Knowledge Networks"} {"abstract": "The latest deep learning approaches perform better than the state-of-the-art\nsignal processing approaches in various image restoration tasks. However, if an\nimage contains many patterns and structures, the performance of these CNNs is\nstill inferior. To address this issue, here we propose a novel feature space\ndeep residual learning algorithm that outperforms the existing residual\nlearning. The main idea is originated from the observation that the performance\nof a learning algorithm can be improved if the input and/or label manifolds can\nbe made topologically simpler by an analytic mapping to a feature space. Our\nextensive numerical studies using denoising experiments and NTIRE single-image\nsuper-resolution (SISR) competition demonstrate that the proposed feature space\nresidual learning outperforms the existing state-of-the-art approaches.\nMoreover, our algorithm was ranked third in NTIRE competition with 5-10 times\nfaster computational time compared to the top ranked teams. The source code is\navailable on page : https://github.com/iorism/CNN.git", "field": [], "task": ["Color Image Denoising", "Denoising", "Image Restoration", "Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["CBSD68 sigma50", "Set14 - 4x upscaling", "BSD100 - 4x upscaling", "Set5 - 4x upscaling", "Urban100 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Beyond Deep Residual Learning for Image Restoration: Persistent Homology-Guided Manifold Simplification"} {"abstract": "Many datasets can be viewed as a noisy sampling of an underlying space, and\ntools from topological data analysis can characterize this structure for the\npurpose of knowledge discovery. One such tool is persistent homology, which\nprovides a multiscale description of the homological features within a dataset.\nA useful representation of this homological information is a persistence\ndiagram (PD). Efforts have been made to map PDs into spaces with additional\nstructure valuable to machine learning tasks. We convert a PD to a\nfinite-dimensional vector representation which we call a persistence image\n(PI), and prove the stability of this transformation with respect to small\nperturbations in the inputs. The discriminatory power of PIs is compared\nagainst existing methods, showing significant performance gains. We explore the\nuse of PIs with vector-based machine learning tools, such as linear sparse\nsupport vector machines, which identify features containing discriminating\ntopological information. 
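The jumping knowledge (JK) abstract above lets the final node representation draw on every layer's output rather than only the last one. Below is a minimal sketch of the "concat" aggregation over toy dense-adjacency propagation layers; the propagation rule and dimensions are placeholders, not a full GCN/GraphSAGE/GAT backbone.

```python
import torch
import torch.nn as nn

class JKNetConcat(nn.Module):
    """Toy graph network with Jumping Knowledge 'concat' aggregation:
    the final node representation concatenates every layer's output."""
    def __init__(self, in_dim, hid_dim, out_dim, n_layers: int = 4):
        super().__init__()
        dims = [in_dim] + [hid_dim] * n_layers
        self.layers = nn.ModuleList(
            [nn.Linear(dims[i], dims[i + 1]) for i in range(n_layers)])
        self.out = nn.Linear(hid_dim * n_layers, out_dim)

    def forward(self, x, adj):
        # adj: (N, N) normalized dense adjacency; x: (N, in_dim) node features
        outs = []
        h = x
        for layer in self.layers:
            h = torch.relu(layer(adj @ h))   # propagate, then transform
            outs.append(h)
        return self.out(torch.cat(outs, dim=-1))

if __name__ == "__main__":
    n = 10
    adj = torch.eye(n)                       # placeholder graph structure
    model = JKNetConcat(in_dim=16, hid_dim=32, out_dim=7)
    print(model(torch.randn(n, 16), adj).shape)  # torch.Size([10, 7])
```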
Finally, high accuracy inference of parameter values\nfrom the dynamic output of a discrete dynamical system (the linked twist map)\nand a partial differential equation (the anisotropic Kuramoto-Sivashinsky\nequation) provide a novel application of the discriminatory power of PIs.", "field": [], "task": ["Graph Classification", "Topological Data Analysis"], "method": [], "dataset": ["NEURON-BINARY", "NEURON-MULTI", "NEURON-Average"], "metric": ["Accuracy"], "title": "Persistence Images: A Stable Vector Representation of Persistent Homology"} {"abstract": "The existing zero-shot detection approaches project visual features to the semantic domain for seen objects, hoping to map unseen objects to their corresponding semantics during inference. However, since the unseen objects are never visualized during training, the detection model is skewed towards seen content, thereby labeling unseen as background or a seen class. In this work, we propose to synthesize visual features for unseen classes, so that the model learns both seen and unseen objects in the visual domain. Consequently, the major challenge becomes, how to accurately synthesize unseen objects merely using their class semantics? Towards this ambitious goal, we propose a novel generative model that uses class-semantics to not only generate the features but also to discriminatively separate them. Further, using a unified model, we ensure the synthesized features have high diversity that represents the intra-class differences and variable localization precision in the detected bounding boxes. We test our approach on three object detection benchmarks, PASCAL VOC, MSCOCO, and ILSVRC detection, under both conventional and generalized settings, showing impressive gains over the state-of-the-art methods. Our codes are available at https://github.com/nasir6/zero_shot_detection.", "field": [], "task": ["Generalized Zero-Shot Object Detection", "Object Detection", "Zero-Shot Object Detection"], "method": [], "dataset": ["MS-COCO", "ImageNet Detection", "PASCAL VOC'07"], "metric": ["mAP", "Recall"], "title": "Synthesizing the Unseen for Zero-shot Object Detection"} {"abstract": "Generative adversarial networks (GANs) often suffer from unpredictable\nmode-collapsing during training. We study the issue of mode collapse of\nBoundary Equilibrium Generative Adversarial Network (BEGAN), which is one of\nthe state-of-the-art generative models. Despite its potential of generating\nhigh-quality images, we find that BEGAN tends to collapse at some modes after a\nperiod of training. We propose a new model, called \\emph{BEGAN with a\nConstrained Space} (BEGAN-CS), which includes a latent-space constraint in the\nloss function. We show that BEGAN-CS can significantly improve training\nstability and suppress mode collapse without either increasing the model\ncomplexity or degrading the image quality. Further, we visualize the\ndistribution of latent vectors to elucidate the effect of latent-space\nconstraint. 
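The persistence images abstract above rasterizes a persistence diagram into a fixed-size vector representation. The sketch below maps (birth, death) pairs to (birth, persistence) coordinates, weights them by persistence, and smooths with an isotropic Gaussian on a grid; the weighting function, bandwidth, and grid range are illustrative choices.

```python
import numpy as np

def persistence_image(diagram: np.ndarray, resolution: int = 20,
                      sigma: float = 0.1) -> np.ndarray:
    """Rasterize a persistence diagram into a persistence image.

    diagram: (n, 2) array of (birth, death) pairs.
    """
    birth = diagram[:, 0]
    pers = diagram[:, 1] - diagram[:, 0]
    xs = np.linspace(0.0, 1.0, resolution)
    ys = np.linspace(0.0, 1.0, resolution)
    gx, gy = np.meshgrid(xs, ys, indexing="ij")
    img = np.zeros((resolution, resolution))
    for b, p in zip(birth, pers):
        weight = p  # linear persistence weighting; vanishes on the diagonal
        img += weight * np.exp(-((gx - b) ** 2 + (gy - p) ** 2) / (2 * sigma ** 2))
    return img

if __name__ == "__main__":
    dgm = np.array([[0.1, 0.5], [0.2, 0.9], [0.4, 0.45]])
    print(persistence_image(dgm).shape)  # (20, 20)
```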
The experimental results show that our method has additional\nadvantages of being able to train on small datasets and to generate images\nsimilar to a given real image yet with variations of designated attributes\non-the-fly.", "field": [], "task": ["Image Generation"], "method": [], "dataset": ["CelebA 64x64"], "metric": ["FID"], "title": "Escaping from Collapsing Modes in a Constrained Space"} {"abstract": "Much effort has been devoted to evaluate whether multi-task learning can be\nleveraged to learn rich representations that can be used in various Natural\nLanguage Processing (NLP) down-stream applications. However, there is still a\nlack of understanding of the settings in which multi-task learning has a\nsignificant effect. In this work, we introduce a hierarchical model trained in\na multi-task learning setup on a set of carefully selected semantic tasks. The\nmodel is trained in a hierarchical fashion to introduce an inductive bias by\nsupervising a set of low level tasks at the bottom layers of the model and more\ncomplex tasks at the top layers of the model. This model achieves\nstate-of-the-art results on a number of tasks, namely Named Entity Recognition,\nEntity Mention Detection and Relation Extraction without hand-engineered\nfeatures or external NLP tools like syntactic parsers. The hierarchical\ntraining supervision induces a set of shared semantic representations at lower\nlayers of the model. We show that as we move from the bottom to the top layers\nof the model, the hidden states of the layers tend to represent more complex\nsemantic information.", "field": [], "task": ["Multi-Task Learning", "Named Entity Recognition", "Relation Extraction"], "method": [], "dataset": ["ACE 2005"], "metric": ["Sentence Encoder", "NER Micro F1", "RE Micro F1"], "title": "A Hierarchical Multi-task Approach for Learning Embeddings from Semantic Tasks"} {"abstract": "With the online proliferation of hate speech, there is an urgent need for\nsystems that can detect such harmful content. In this paper, We present the\nmachine learning models developed for the Automatic Misogyny Identification\n(AMI) shared task at EVALITA 2018. We generate three types of features:\nSentence Embeddings, TF-IDF Vectors, and BOW Vectors to represent each tweet.\nThese features are then concatenated and fed into the machine learning models.\nOur model came First for the English Subtask A and Fifth for the English\nSubtask B. We release our winning model for public use and it's available at\nhttps://github.com/punyajoy/Hateminers-EVALITA.", "field": [], "task": ["Hate Speech Detection", "Sentence Embeddings"], "method": [], "dataset": ["Automatic Misogynistic Identification"], "metric": ["Accuracy"], "title": "Hateminers : Detecting Hate speech against Women"} {"abstract": "We explore the use of residual networks and neural attention for argument mining and in particular link prediction. The method we propose makes no assumptions on document or argument structure. We propose a residual architecture that exploits attention, multi-task learning, and makes use of ensemble. We evaluate it on a challenging data set consisting of user-generated comments, as well as on two other datasets consisting of scientific publications. On the user-generated content dataset, our model outperforms state-of-the-art methods that rely on domain knowledge. 
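The Hateminers abstract above concatenates sentence embeddings, TF-IDF vectors, and BOW vectors before feeding them to a classifier. The sketch below shows that feature-concatenation pattern with scikit-learn on a toy corpus; the sentence-embedding block is omitted (it would be stacked on in the same way), and logistic regression stands in for whatever classifier the original system used.

```python
import numpy as np
from scipy.sparse import hstack
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy corpus standing in for the labeled tweets of the shared task.
texts = ["example tweet one", "another example tweet",
         "a third tweet here", "yet another short text"]
labels = np.array([0, 1, 0, 1])

tfidf = TfidfVectorizer(ngram_range=(1, 2))
bow = CountVectorizer()
X = hstack([tfidf.fit_transform(texts), bow.fit_transform(texts)])
# The sentence-embedding features used by the original system are omitted;
# a dense embedding matrix could be hstack-ed onto X in the same way.

clf = LogisticRegression(max_iter=1000).fit(X, labels)

new = ["one more tweet"]
X_new = hstack([tfidf.transform(new), bow.transform(new)])
print(clf.predict(X_new))
```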
On the scientific literature datasets it achieves results comparable to those yielded by BERT-based approaches but with a much smaller model size.", "field": [], "task": ["Argument Mining", "Component Classification", "Link Prediction", "Multi-Task Learning", "Relation Classification"], "method": [], "dataset": ["CDCP", "DRI Corpus", "AbstRCT - Neoplasm"], "metric": ["Macro F1", "F1"], "title": "Multi-Task Attentive Residual Networks for Argument Mining"} {"abstract": "We present a domain adaptation based generative framework for zero-shot learning. Our framework addresses the problem of domain shift between the seen and unseen class distributions in zero-shot learning and minimizes the shift by developing a generative model trained via adversarial domain adaptation. Our approach is based on end-to-end learning of the class distributions of seen classes and unseen classes. To enable the model to learn the class distributions of unseen classes, we parameterize these class distributions in terms of the class attribute information (which is available for both seen and unseen classes). This provides a very simple way to learn the class distribution of any unseen class, given only its class attribute information, and no labeled training data. Training this model with adversarial domain adaptation further provides robustness against the distribution mismatch between the data from seen and unseen classes. Our approach also provides a novel way for training neural net based classifiers to overcome the hubness problem in zero-shot learning. Through a comprehensive set of experiments, we show that our model yields superior accuracies as compared to various state-of-the-art zero shot learning models, on a variety of benchmark datasets. Code for the experiments is available at github.com/vkkhare/ZSL-ADA", "field": [], "task": ["Domain Adaptation", "Zero-Shot Learning"], "method": [], "dataset": ["CUB-200 - 0-Shot Learning"], "metric": ["Average Per-Class Accuracy"], "title": "A Generative Framework for Zero-Shot Learning with Adversarial Domain Adaptation"} {"abstract": "Deep learning techniques for point cloud data have demonstrated great potentials in solving classical problems in 3D computer vision such as 3D object classification and segmentation. Several recent 3D object classification methods have reported state-of-the-art performance on CAD model datasets such as ModelNet40 with high accuracy (~92%). Despite such impressive results, in this paper, we argue that object classification is still a challenging task when objects are framed with real-world settings. To prove this, we introduce ScanObjectNN, a new real-world point cloud object dataset based on scanned indoor scene data. From our comprehensive benchmark, we show that our dataset poses great challenges to existing point cloud classification techniques as objects from real-world scans are often cluttered with background and/or are partial due to occlusions. We identify three key open problems for point cloud object classification, and propose new point cloud classification neural networks that achieve state-of-the-art performance on classifying objects with cluttered background. 
Our dataset and code are publicly available in our project page https://hkust-vgd.github.io/scanobjectnn/.", "field": [], "task": ["3D Object Classification", "3D Point Cloud Classification", "Object Classification"], "method": [], "dataset": ["ScanObjectNN"], "metric": ["Overall Accuracy"], "title": "Revisiting Point Cloud Classification: A New Benchmark Dataset and Classification Model on Real-World Data"} {"abstract": "We present a novel face alignment framework based on coarse-to-fine shape searching. Unlike the conventional cascaded regression approaches that start with an initial shape and refine the shape in a cascaded manner, our approach begins with a coarse search over a shape space that contains diverse shapes, and employs the coarse solution to constrain subsequent finer search of shapes. The unique stage-by-stage progressive and adaptive search i) prevents the final solution from being trapped in local optima due to poor initialisation, a common problem encountered by cascaded regression approaches; and ii) improves the robustness in coping with large pose variations. The framework demonstrates real-time performance and state-of-the-art results on various benchmarks including the challenging 300-W dataset.", "field": [], "task": ["Face Alignment", "Regression"], "method": [], "dataset": ["WFLW"], "metric": ["ME (%, all) ", "FR@0.1(%, all)", "AUC@0.1 (all)"], "title": "Face alignment by coarse-to-fine shape searching"} {"abstract": "Despite the eminent successes of deep neural networks, many architectures are often hard to transfer to irregularly-sampled and asynchronous time series that commonly occur in real-world datasets, especially in healthcare applications. This paper proposes a novel approach for classifying irregularly-sampled time series with unaligned measurements, focusing on high scalability and data efficiency. Our method SeFT (Set Functions for Time Series) is based on recent advances in differentiable set function learning, extremely parallelizable with a beneficial memory footprint, thus scaling well to large datasets of long time series and online monitoring scenarios. Furthermore, our approach permits quantifying per-observation contributions to the classification outcome. We extensively compare our method with existing algorithms on multiple healthcare time series datasets and demonstrate that it performs competitively whilst significantly reducing runtime.", "field": [], "task": ["Irregular Time Series", "Time Series", "Time Series Classification"], "method": [], "dataset": ["PhysioNet Challenge 2012"], "metric": ["AUC", "AUC Stdev"], "title": "Set Functions for Time Series"} {"abstract": "Spatiotemporal action localization requires the incorporation of two sources of information into the designed architecture: (1) temporal information from the previous frames and (2) spatial information from the key frame. Current state-of-the-art approaches usually extract these information with separate networks and use an extra mechanism for fusion to get detections. In this work, we present YOWO, a unified CNN architecture for real-time spatiotemporal action localization in video streams. YOWO is a single-stage architecture with two branches to extract temporal and spatial information concurrently and predict bounding boxes and action probabilities directly from video clips in one evaluation. Since the whole architecture is unified, it can be optimized end-to-end. 
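The Set Functions for Time Series (SeFT) abstract above encodes each observation of an irregularly sampled series independently and aggregates them with a differentiable set function. Below is a deep-sets-style sketch over (time, value, modality) triples; mean pooling replaces the paper's attention-based aggregation, and the time encoding and layer sizes are simplifications.

```python
import torch
import torch.nn as nn

class SetFunctionClassifier(nn.Module):
    """Deep-sets-style classifier for irregularly sampled series: each
    observation (time, value, modality-id) is embedded independently and
    the unordered set is summarized by mean pooling."""
    def __init__(self, n_modalities: int, hid: int = 64, n_classes: int = 2):
        super().__init__()
        self.mod_emb = nn.Embedding(n_modalities, hid)
        self.phi = nn.Sequential(nn.Linear(2 + hid, hid), nn.ReLU(),
                                 nn.Linear(hid, hid), nn.ReLU())
        self.rho = nn.Linear(hid, n_classes)

    def forward(self, times, values, modalities):
        # times, values: (B, S); modalities: (B, S) integer ids; S observations
        obs = torch.cat([times.unsqueeze(-1), values.unsqueeze(-1),
                         self.mod_emb(modalities)], dim=-1)
        pooled = self.phi(obs).mean(dim=1)       # permutation invariant
        return self.rho(pooled)

if __name__ == "__main__":
    model = SetFunctionClassifier(n_modalities=10)
    logits = model(torch.rand(4, 30), torch.randn(4, 30),
                   torch.randint(0, 10, (4, 30)))
    print(logits.shape)  # torch.Size([4, 2])
```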
The YOWO architecture is fast providing 34 frames-per-second on 16-frames input clips and 62 frames-per-second on 8-frames input clips, which is currently the fastest state-of-the-art architecture on spatiotemporal action localization task. Remarkably, YOWO outperforms the previous state-of-the art results on J-HMDB-21 and UCF101-24 with an impressive improvement of ~3% and ~12%, respectively. We make our code and pretrained models publicly available.", "field": [], "task": ["Action Localization", "Temporal Action Localization"], "method": [], "dataset": ["J-HMDB-21", "UCF101-24"], "metric": ["Video-mAP 0.5", "Video-mAP 0.75", "Video-mAP 0.2", "Frame-mAP"], "title": "You Only Watch Once: A Unified CNN Architecture for Real-Time Spatiotemporal Action Localization"} {"abstract": "Human trajectory forecasting with multiple socially interacting agents is of critical importance for autonomous navigation in human environments, e.g., for self-driving cars and social robots. In this work, we present Predicted Endpoint Conditioned Network (PECNet) for flexible human trajectory prediction. PECNet infers distant trajectory endpoints to assist in long-range multi-modal trajectory prediction. A novel non-local social pooling layer enables PECNet to infer diverse yet socially compliant trajectories. Additionally, we present a simple \"truncation-trick\" for improving few-shot multi-modal trajectory prediction performance. We show that PECNet improves state-of-the-art performance on the Stanford Drone trajectory prediction benchmark by ~20.9% and on the ETH/UCY benchmark by ~40.8%. Project homepage: https://karttikeya.github.io/publication/htf/", "field": [], "task": ["Autonomous Navigation", "Multi-future Trajectory Prediction", "Multi Future Trajectory Prediction", "Self-Driving Cars", "Trajectory Forecasting", "Trajectory Prediction"], "method": [], "dataset": ["Stanford Drone", "ETH/UCY"], "metric": ["ADE-8/12", "ADE-8/12 @K = 20", "FDE-8/12 @K= 20"], "title": "It Is Not the Journey but the Destination: Endpoint Conditioned Trajectory Prediction"} {"abstract": "Performing diagnosis or exploratory analysis during the training of deep learning models is challenging but often necessary for making a sequence of decisions guided by the incremental observations. Currently available systems for this purpose are limited to monitoring only the logged data that must be specified before the training process starts. Each time a new information is desired, a cycle of stop-change-restart is required in the training process. These limitations make interactive exploration and diagnosis tasks difficult, imposing long tedious iterations during the model development. We present a new system that enables users to perform interactive queries on live processes generating real-time information that can be rendered in multiple formats on multiple surfaces in the form of several desired visualizations simultaneously. To achieve this, we model various exploratory inspection and diagnostic tasks for deep learning training processes as specifications for streams using a map-reduce paradigm with which many data scientists are already familiar. Our design achieves generality and extensibility by defining composable primitives which is a fundamentally different approach than is used by currently available systems. 
The open source implementation of our system is available as TensorWatch project at https://github.com/microsoft/tensorwatch.", "field": [], "task": ["3D Action Recognition"], "method": [], "dataset": ["100 sleep nights of 8 caregivers"], "metric": ["10%"], "title": "A System for Real-Time Interactive Analysis of Deep Learning Training"} {"abstract": "Pseudo-labeling has recently shown promise in end-to-end automatic speech recognition (ASR). We study Iterative Pseudo-Labeling (IPL), a semi-supervised algorithm which efficiently performs multiple iterations of pseudo-labeling on unlabeled data as the acoustic model evolves. In particular, IPL fine-tunes an existing model at each iteration using both labeled data and a subset of unlabeled data. We study the main components of IPL: decoding with a language model and data augmentation. We then demonstrate the effectiveness of IPL by achieving state-of-the-art word-error rate on the Librispeech test sets in both standard and low-resource setting. We also study the effect of language models trained on different corpora to show IPL can effectively utilize additional text. Finally, we release a new large in-domain text corpus which does not overlap with the Librispeech training transcriptions to foster research in low-resource, semi-supervised ASR", "field": [], "task": ["Data Augmentation", "Language Modelling", "Speech Recognition"], "method": [], "dataset": ["LibriSpeech test-other", "LibriSpeech test-clean"], "metric": ["Word Error Rate (WER)"], "title": "Iterative Pseudo-Labeling for Speech Recognition"} {"abstract": "We introduce a new public video dataset for action recognition: Anonymized Videos from Diverse countries (AViD). Unlike existing public video datasets, AViD is a collection of action videos from many different countries. The motivation is to create a public dataset that would benefit training and pretraining of action recognition models for everybody, rather than making it useful for limited countries. Further, all the face identities in the AViD videos are properly anonymized to protect their privacy. It also is a static dataset where each video is licensed with the creative commons license. We confirm that most of the existing video datasets are statistically biased to only capture action videos from a limited number of countries. We experimentally illustrate that models trained with such biased datasets do not transfer perfectly to action videos from the other countries, and show that AViD addresses such problem. We also confirm that the new AViD dataset could serve as a good dataset for pretraining the models, performing comparably or better than prior datasets.", "field": [], "task": ["Action Classification", "Action Detection", "Action Recognition"], "method": [], "dataset": ["AViD", "Charades"], "metric": ["mAP", "Accuracy"], "title": "AViD Dataset: Anonymized Videos from Diverse Countries"} {"abstract": "During their formative years, radiology trainees are required to interpret hundreds of mammograms per month, with the objective of becoming apt at discerning the subtle patterns differentiating benign from malignant lesions. Unfortunately, medico-legal and technical hurdles make it difficult to access and query medical images for training. In this paper we train a generative adversarial network (GAN) to synthesize 512 x 512 high-resolution mammograms. The resulting model leads to the unsupervised separation of high-level features (e.g. 
the standard mammography views and the nature of the breast lesions), with stochastic variation in the generated images (e.g. breast adipose tissue, calcification), enabling user-controlled global and local attribute-editing of the synthesized images. We demonstrate the model's ability to generate anatomically and medically relevant mammograms by achieving an average AUC of 0.54 in a double-blind study on four expert mammography radiologists to distinguish between generated and real images, ascribing to the high visual quality of the synthesized and edited mammograms, and to their potential use in advancing and facilitating medical education.", "field": ["Regularization", "Activation Functions", "Normalization", "Latent Variable Sampling", "Convolutions", "Generative Models"], "task": ["Medical Image Generation", "Radiologist Binary Classification"], "method": ["Convolution", "Weight Demodulation", "Leaky ReLU", "R1 Regularization", "Path Length Regularization", "Latent Optimisation", "StyleGAN2"], "dataset": [], "metric": ["Average Precision", "AUC-ROC"], "title": "MammoGANesis: Controlled Generation of High-Resolution Mammograms for Radiology Education"} {"abstract": "The objective of this work is set-based face recognition, i.e. to decide if\ntwo sets of images of a face are of the same person or not. Conventionally, the\nset-wise feature descriptor is computed as an average of the descriptors from\nindividual face images within the set. In this paper, we design a neural\nnetwork architecture that learns to aggregate based on both \"visual\" quality\n(resolution, illumination), and \"content\" quality (relative importance for\ndiscriminative classification). To this end, we propose a Multicolumn Network\n(MN) that takes a set of images (the number in the set can vary) as input, and\nlearns to compute a fix-sized feature descriptor for the entire set. To\nencourage high-quality representations, each individual input image is first\nweighted by its \"visual\" quality, determined by a self-quality assessment\nmodule, and followed by a dynamic recalibration based on \"content\" qualities\nrelative to the other images within the set. Both of these qualities are learnt\nimplicitly during training for set-wise classification. Comparing with the\nprevious state-of-the-art architectures trained with the same dataset\n(VGGFace2), our Multicolumn Networks show an improvement of between 2-6% on the\nIARPA IJB face recognition benchmarks, and exceed the state of the art for all\nmethods on these benchmarks.", "field": [], "task": ["Face Recognition"], "method": [], "dataset": ["IJB-C"], "metric": ["TAR @ FAR=0.01"], "title": "Multicolumn Networks for Face Recognition"} {"abstract": "Many efforts have been made to facilitate natural language processing tasks\nwith pre-trained language models (LMs), and brought significant improvements to\nvarious applications. To fully leverage the nearly unlimited corpora and\ncapture linguistic information of multifarious levels, large-size LMs are\nrequired; but for a specific task, only parts of these information are useful.\nSuch large-sized LMs, even in the inference stage, may cause heavy computation\nworkloads, making them too time-consuming for large-scale applications. Here we\npropose to compress bulky LMs while preserving useful information with regard\nto a specific task. As different layers of the model keep different\ninformation, we develop a layer selection method for model pruning using\nsparsity-inducing regularization. 
By introducing the dense connectivity, we can\ndetach any layer without affecting others, and stretch shallow and wide LMs to\nbe deep and narrow. In model training, LMs are learned with layer-wise dropouts\nfor better robustness. Experiments on two benchmark datasets demonstrate the\neffectiveness of our method.", "field": [], "task": ["Language Modelling", "Named Entity Recognition"], "method": [], "dataset": ["CoNLL 2003 (English)"], "metric": ["F1"], "title": "Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling"} {"abstract": "Despite significant progress made over the past twenty five years,\nunconstrained face verification remains a challenging problem. This paper\nproposes an approach that couples a deep CNN-based approach with a\nlow-dimensional discriminative embedding learned using triplet probability\nconstraints to solve the unconstrained face verification problem. Aside from\nyielding performance improvements, this embedding provides significant\nadvantages in terms of memory and for post-processing operations like subject\nspecific clustering. Experiments on the challenging IJB-A dataset show that the\nproposed algorithm performs comparably or better than the state of the art\nmethods in verification and identification metrics, while requiring much less\ntraining data and training time. The superior performance of the proposed\nmethod on the CFP dataset shows that the representation learned by our deep CNN\nis robust to extreme pose variation. Furthermore, we demonstrate the robustness\nof the deep features to challenges including age, pose, blur and clutter by\nperforming simple clustering experiments on both IJB-A and LFW datasets.", "field": [], "task": ["Face Verification"], "method": [], "dataset": ["IJB-A"], "metric": ["TAR @ FAR=0.01"], "title": "Triplet Probabilistic Embedding for Face Verification and Clustering"} {"abstract": "Correlation Filter-based trackers have recently achieved excellent\nperformance, showing great robustness to challenging situations exhibiting\nmotion blur and illumination changes. However, since the model that they learn\ndepends strongly on the spatial layout of the tracked object, they are\nnotoriously sensitive to deformation. Models based on colour statistics have\ncomplementary traits: they cope well with variation in shape, but suffer when\nillumination is not consistent throughout a sequence. Moreover, colour\ndistributions alone can be insufficiently discriminative. In this paper, we\nshow that a simple tracker combining complementary cues in a ridge regression\nframework can operate faster than 80 FPS and outperform not only all entries in\nthe popular VOT14 competition, but also recent and far more sophisticated\ntrackers according to multiple benchmarks.", "field": [], "task": ["Regression", "Visual Object Tracking"], "method": [], "dataset": ["TrackingNet"], "metric": ["Normalized Precision", "Precision", "Accuracy"], "title": "Staple: Complementary Learners for Real-Time Tracking"} {"abstract": "State-of-the-art named entity recognition systems rely heavily on\nhand-crafted features and domain-specific knowledge in order to learn\neffectively from the small, supervised training corpora that are available. In\nthis paper, we introduce two new neural architectures---one based on\nbidirectional LSTMs and conditional random fields, and the other that\nconstructs and labels segments using a transition-based approach inspired by\nshift-reduce parsers. 
Our models rely on two sources of information about\nwords: character-based word representations learned from the supervised corpus\nand unsupervised word representations learned from unannotated corpora. Our\nmodels obtain state-of-the-art performance in NER in four languages without\nresorting to any language-specific knowledge or resources such as gazetteers.", "field": [], "task": ["Named Entity Recognition"], "method": [], "dataset": ["CoNLL 2003 (English)", "CoNLL++"], "metric": ["F1"], "title": "Neural Architectures for Named Entity Recognition"} {"abstract": "Visual features are of vital importance for human action understanding in\nvideos. This paper presents a new video representation, called\ntrajectory-pooled deep-convolutional descriptor (TDD), which shares the merits\nof both hand-crafted features and deep-learned features. Specifically, we\nutilize deep architectures to learn discriminative convolutional feature maps,\nand conduct trajectory-constrained pooling to aggregate these convolutional\nfeatures into effective descriptors. To enhance the robustness of TDDs, we\ndesign two normalization methods to transform convolutional feature maps,\nnamely spatiotemporal normalization and channel normalization. The advantages\nof our features come from (i) TDDs are automatically learned and contain high\ndiscriminative capacity compared with those hand-crafted features; (ii) TDDs\ntake account of the intrinsic characteristics of temporal dimension and\nintroduce the strategies of trajectory-constrained sampling and pooling for\naggregating deep-learned features. We conduct experiments on two challenging\ndatasets: HMDB51 and UCF101. Experimental results show that TDDs outperform\nprevious hand-crafted features and deep-learned features. Our method also\nachieves superior performance to the state of the art on these datasets (HMDB51\n65.9%, UCF101 91.5%).", "field": [], "task": ["Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["UCF101", "HMDB-51", "DogCentric"], "metric": ["Average accuracy of 3 splits", "3-fold Accuracy", "Accuracy"], "title": "Action Recognition with Trajectory-Pooled Deep-Convolutional Descriptors"} {"abstract": "We present a novel clustering objective that learns a neural network classifier from scratch, given only unlabelled data samples. The model discovers clusters that accurately match semantic classes, achieving state-of-the-art results in eight unsupervised clustering benchmarks spanning image classification and segmentation. These include STL10, an unsupervised variant of ImageNet, and CIFAR10, where we significantly beat the accuracy of our closest competitors by 6.6 and 9.5 absolute percentage points respectively. The method is not specialised to computer vision and operates on any paired dataset samples; in our experiments we use random transforms to obtain a pair from each image. The trained network directly outputs semantic labels, rather than high dimensional representations that need external processing to be usable for semantic clustering. The objective is simply to maximise mutual information between the class assignments of each pair. It is easy to implement and rigorously grounded in information theory, meaning we effortlessly avoid degenerate solutions that other clustering methods are susceptible to. In addition to the fully unsupervised mode, we also test two semi-supervised settings. 
The first achieves 88.8% accuracy on STL10 classification, setting a new global state-of-the-art over all existing methods (whether supervised, semi-supervised or unsupervised). The second shows robustness to 90% reductions in label coverage, of relevance to applications that wish to make use of small amounts of labels. github.com/xu-ji/IIC", "field": [], "task": ["Image Classification", "Semantic Segmentation", "Unsupervised Image Classification", "Unsupervised MNIST"], "method": [], "dataset": ["Potsdam", "Potsdam-3", "CIFAR-10", "COCO-Stuff-15", "COCO-Stuff-3", "STL-10", "MNIST", "CIFAR-20"], "metric": ["Train set", "ARI", "Backbone", "Percentage correct", "NMI", "Accuracy"], "title": "Invariant Information Clustering for Unsupervised Image Classification and Segmentation"} {"abstract": "In this paper, we propose a novel face detection network with three novel\ncontributions that address three key aspects of face detection, including\nbetter feature learning, progressive loss design and anchor assign based data\naugmentation, respectively. First, we propose a Feature Enhance Module (FEM)\nfor enhancing the original feature maps to extend the single shot detector to\ndual shot detector. Second, we adopt Progressive Anchor Loss (PAL) computed by\ntwo different sets of anchors to effectively facilitate the features. Third, we\nuse an Improved Anchor Matching (IAM) by integrating novel anchor assign\nstrategy into data augmentation to provide better initialization for the\nregressor. Since these techniques are all related to the two-stream design, we\nname the proposed network as Dual Shot Face Detector (DSFD). Extensive\nexperiments on popular benchmarks, WIDER FACE and FDDB, demonstrate the\nsuperiority of DSFD over the state-of-the-art face detectors.", "field": [], "task": ["Data Augmentation", "Face Detection"], "method": [], "dataset": ["WIDER Face (Hard)", "WIDER Face (Medium)", "WIDER Face (Easy)", "FDDB"], "metric": ["AP"], "title": "DSFD: Dual Shot Face Detector"} {"abstract": "Planning has been very successful for control tasks with known environment dynamics. To leverage planning in unknown environments, the agent needs to learn the dynamics from interactions with the world. However, learning dynamics models that are accurate enough for planning has been a long-standing challenge, especially in image-based domains. We propose the Deep Planning Network (PlaNet), a purely model-based agent that learns the environment dynamics from images and chooses actions through fast online planning in latent space. To achieve high performance, the dynamics model must accurately predict the rewards ahead for multiple time steps. We approach this using a latent dynamics model with both deterministic and stochastic transition components. Moreover, we propose a multi-step variational inference objective that we name latent overshooting. Using only pixel observations, our agent solves continuous control tasks with contact dynamics, partial observability, and sparse rewards, which exceed the difficulty of tasks that were previously solved by planning with learned models. 
PlaNet uses substantially fewer episodes and reaches final performance close to and sometimes higher than strong model-free algorithms.", "field": [], "task": ["Continuous Control", "Motion Planning", "Variational Inference"], "method": [], "dataset": ["DeepMind Walker Walk (Images)", "DeepMind Cheetah Run (Images)", "DeepMind Cup Catch (Images)"], "metric": ["Return"], "title": "Learning Latent Dynamics for Planning from Pixels"} {"abstract": "A grand challenge in reinforcement learning is intelligent exploration, especially when rewards are sparse or deceptive. Two Atari games serve as benchmarks for such hard-exploration domains: Montezuma's Revenge and Pitfall. On both games, current RL algorithms perform poorly, even those with intrinsic motivation, which is the dominant method to improve performance on hard-exploration domains. To address this shortfall, we introduce a new algorithm called Go-Explore. It exploits the following principles: (1) remember previously visited states, (2) first return to a promising state (without exploration), then explore from it, and (3) solve simulated environments through any available means (including by introducing determinism), then robustify via imitation learning. The combined effect of these principles is a dramatic performance improvement on hard-exploration problems. On Montezuma's Revenge, Go-Explore scores a mean of over 43k points, almost 4 times the previous state of the art. Go-Explore can also harness human-provided domain knowledge and, when augmented with it, scores a mean of over 650k points on Montezuma's Revenge. Its max performance of nearly 18 million surpasses the human world record, meeting even the strictest definition of \"superhuman\" performance. On Pitfall, Go-Explore with domain knowledge is the first algorithm to score above zero. Its mean score of almost 60k points exceeds expert human performance. Because Go-Explore produces high-performing demonstrations automatically and cheaply, it also outperforms imitation learning work where humans provide solution demonstrations. Go-Explore opens up many new research directions into improving it and weaving its insights into current RL algorithms. It may also enable progress on previously unsolvable hard-exploration problems in many domains, especially those that harness a simulator during training (e.g. robotics).", "field": [], "task": ["Atari Games", "Imitation Learning", "Montezuma's Revenge"], "method": [], "dataset": ["Atari 2600 Montezuma's Revenge", "Atari 2600 Pitfall!"], "metric": ["Score"], "title": "Go-Explore: a New Approach for Hard-Exploration Problems"} {"abstract": "Neural machine translation systems have become state-of-the-art approaches for Grammatical Error Correction (GEC) task. In this paper, we propose a copy-augmented architecture for the GEC task by copying the unchanged words from the source sentence to the target sentence. Since the GEC suffers from not having enough labeled training data to achieve high accuracy. We pre-train the copy-augmented architecture with a denoising auto-encoder using the unlabeled One Billion Benchmark and make comparisons between the fully pre-trained model and a partially pre-trained model. It is the first time copying words from the source context and fully pre-training a sequence to sequence model are experimented on the GEC task. Moreover, We add token-level and sentence-level multi-task learning for the GEC task. 
The evaluation results on the CoNLL-2014 test set show that our approach outperforms all recently published state-of-the-art results by a large margin. The code and pre-trained models are released at https://github.com/zhawe01/fairseq-gec.", "field": [], "task": ["Denoising", "Grammatical Error Correction", "Machine Translation", "Multi-Task Learning"], "method": [], "dataset": ["CoNLL-2014 Shared Task", "JFLEG"], "metric": ["F0.5", "Precision", "Recall", "GLEU"], "title": "Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data"} {"abstract": "We propose a new supervized learning framework for oversegmenting 3D point\nclouds into superpoints. We cast this problem as learning deep embeddings of\nthe local geometry and radiometry of 3D points, such that the border of objects\npresents high contrasts. The embeddings are computed using a lightweight neural\nnetwork operating on the points' local neighborhood. Finally, we formulate\npoint cloud oversegmentation as a graph partition problem with respect to the\nlearned embeddings.\n This new approach allows us to set a new state-of-the-art in point cloud\noversegmentation by a significant margin, on a dense indoor dataset (S3DIS) and\na sparse outdoor one (vKITTI). Our best solution requires over five times fewer\nsuperpoints to reach similar performance than previously published methods on\nS3DIS. Furthermore, we show that our framework can be used to improve\nsuperpoint-based semantic segmentation algorithms, setting a new\nstate-of-the-art for this task as well.", "field": [], "task": ["Semantic Segmentation"], "method": [], "dataset": ["S3DIS Area5", "S3DIS"], "metric": ["oAcc", "Mean IoU", "mAcc", "mIoU"], "title": "Point Cloud Oversegmentation with Graph-Structured Deep Metric Learning"} {"abstract": "This paper explores the use of knowledge distillation to improve a Multi-Task\nDeep Neural Network (MT-DNN) (Liu et al., 2019) for learning text\nrepresentations across multiple natural language understanding tasks. Although\nensemble learning can improve model performance, serving an ensemble of large\nDNNs such as MT-DNN can be prohibitively expensive. Here we apply the knowledge\ndistillation method (Hinton et al., 2015) in the multi-task learning setting.\nFor each task, we train an ensemble of different MT-DNNs (teacher) that\noutperforms any single model, and then train a single MT-DNN (student) via\nmulti-task learning to \\emph{distill} knowledge from these ensemble teachers.\nWe show that the distilled MT-DNN significantly outperforms the original MT-DNN\non 7 out of 9 GLUE tasks, pushing the GLUE benchmark (single model) to 83.7\\%\n(1.5\\% absolute improvement\\footnote{ Based on the GLUE leaderboard at\nhttps://gluebenchmark.com/leaderboard as of April 1, 2019.}). The code and\npre-trained models will be made publicly available at\nhttps://github.com/namisan/mt-dnn.", "field": [], "task": ["Knowledge Distillation", "Multi-Task Learning", "Natural Language Inference", "Natural Language Understanding", "Semantic Textual Similarity", "Sentiment Analysis"], "method": [], "dataset": ["MultiNLI", "SST-2 Binary classification", "SentEval"], "metric": ["SICK-E", "Matched", "STS", "MRPC", "SICK-R", "Accuracy", "Mismatched"], "title": "Improving Multi-Task Deep Neural Networks via Knowledge Distillation for Natural Language Understanding"} {"abstract": "Human pose estimation has recently made significant progress with the adoption of deep convolutional neural networks. 
Its many applications have attracted tremendous interest in recent years. However, many practical applications require pose estimation for human crowds, which still is a rarely addressed problem. In this work, we explore methods to optimize pose estimation for human crowds, focusing on challenges introduced with dense crowds, such as occlusions, people in close proximity to each other, and partial visibility of people. In order to address these challenges, we evaluate three aspects of a pose detection approach: i) a data augmentation method to introduce robustness to occlusions, ii) the explicit detection of occluded body parts, and iii) the use of the synthetic generated datasets. The first approach to improve the accuracy in crowded scenarios is to generate occlusions at training time using person and object cutouts from the object recognition dataset COCO (Common Objects in Context). Furthermore, the synthetically generated dataset JTA (Joint Track Auto) is evaluated for the use in real-world crowd applications. In order to overcome the transfer gap of JTA originating from a low pose variety and less dense crowds, an extension dataset is created to ease the use for real-world applications. Additionally, the occlusion flags provided with JTA are utilized to train a model, which explicitly distinguishes between occluded and visible body parts in two distinct branches. The combination of the proposed additions to the baseline method help to improve the overall accuracy by 4.7% AP and thereby provide comparable results to current state-of-the-art approaches on the respective dataset.", "field": [], "task": ["Data Augmentation", "Object Recognition", "Pose Estimation"], "method": [], "dataset": ["CrowdPose"], "metric": ["mAP @0.5:0.95", "AP Hard", "AP Medium", "AP Easy"], "title": "Human Pose Estimation for Real-World Crowded Scenarios"} {"abstract": "Multiple imputation by chained equations (MICE) is a flexible and practical approach to handling missing data. We describe the principles of the method and show how to impute categorical and quantitative variables, including skewed variables. We give guidance on how to specify the imputation model and how many imputations are needed. We describe the practical analysis of multiply imputed data, including model building and model checking. We stress the limitations of the method and discuss the possible pitfalls. We illustrate the ideas using a data set in mental health, giving Stata code fragments.", "field": [], "task": ["Imputation", "Multivariate Time Series Imputation"], "method": [], "dataset": ["KDD CUP Challenge 2018", "Beijing Air Quality", "UCI localization data", "PhysioNet Challenge 2012"], "metric": ["MSE (10% missing)", "MAE (PM2.5)", "MAE (10% of data as GT)", "MAE (10% missing)"], "title": "Multiple imputation using chained equations: issues and guidance for practice"} {"abstract": "We are seeing an enormous increase in the availability of streaming, time-series data. Largely driven by the rise of connected real-time data sources, this data presents technical challenges and opportunities. One fundamental capability for streaming analytics is to model each stream in an unsupervised fashion and detect unusual, anomalous behaviors in real-time. Early anomaly detection is valuable, yet it can be difficult to execute reliably in practice. Application constraints require systems to process data in real-time, not batches. Streaming data inherently exhibits concept drift, favoring algorithms that learn continuously. 
Furthermore, the massive number of independent streams in practice requires that anomaly detectors be fully automated. In this paper we propose a novel anomaly detection algorithm that meets these constraints. The technique is based on an online sequence memory algorithm called Hierarchical Temporal Memory (HTM). We also present results using the Numenta Anomaly Benchmark (NAB), a benchmark containing real-world data streams with labeled anomalies. The benchmark, the first of its kind, provides a controlled open-source environment for testing anomaly detection algorithms on streaming data. We present results and analysis for a wide range of algorithms on this benchmark, and discuss future challenges for the emerging field of streaming analytics.", "field": [], "task": ["Anomaly Detection", "Time Series"], "method": [], "dataset": ["Numenta Anomaly Benchmark"], "metric": ["NAB score"], "title": "Unsupervised real-time anomaly detection for streaming data"} {"abstract": "Click-through rate (CTR) prediction is an essential task in web applications such as online advertising and recommender systems, whose features are usually in multi-field form. The key of this task is to model feature interactions among different feature fields. Recently proposed deep learning based models follow a general paradigm: raw sparse input multi-filed features are first mapped into dense field embedding vectors, and then simply concatenated together to feed into deep neural networks (DNN) or other specifically designed networks to learn high-order feature interactions. However, the simple \\emph{unstructured combination} of feature fields will inevitably limit the capability to model sophisticated interactions among different fields in a sufficiently flexible and explicit fashion. In this work, we propose to represent the multi-field features in a graph structure intuitively, where each node corresponds to a feature field and different fields can interact through edges. The task of modeling feature interactions can be thus converted to modeling node interactions on the corresponding graph. To this end, we design a novel model Feature Interaction Graph Neural Networks (Fi-GNN). Taking advantage of the strong representative power of graphs, our proposed model can not only model sophisticated feature interactions in a flexible and explicit fashion, but also provide good model explanations for CTR prediction. Experimental results on two real-world datasets show its superiority over the state-of-the-arts.", "field": [], "task": ["Click-Through Rate Prediction", "Recommendation Systems"], "method": [], "dataset": ["Avazu", "Criteo"], "metric": ["Log Loss", "LogLoss", "AUC"], "title": "Fi-GNN: Modeling Feature Interactions via Graph Neural Networks for CTR Prediction"} {"abstract": "Given labeled instances on a source domain and unlabeled ones on a target domain, unsupervised domain adaptation aims to learn a task classifier that can well classify target instances. Recent advances rely on domain-adversarial training of deep networks to learn domain-invariant features. However, due to an issue of mode collapse induced by the separate design of task and domain classifiers, these methods are limited in aligning the joint distributions of feature and category across domains. To overcome it, we propose a novel adversarial learning method termed Discriminative Adversarial Domain Adaptation (DADA). 
Based on an integrated category and domain classifier, DADA has a novel adversarial objective that encourages a mutually inhibitory relation between category and domain predictions for any input instance. We show that under practical conditions, it defines a minimax game that can promote the joint distribution alignment. Except for the traditional closed set domain adaptation, we also extend DADA for extremely challenging problem settings of partial and open set domain adaptation. Experiments show the efficacy of our proposed methods and we achieve the new state of the art for all the three settings on benchmark datasets.", "field": [], "task": ["Domain Adaptation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["Syn2Real-C", "Office-31"], "metric": ["Average Accuracy", "Accuracy"], "title": "Discriminative Adversarial Domain Adaptation"} {"abstract": "LIDAR semantic segmentation, which assigns a semantic label to each 3D point measured by the LIDAR, is becoming an essential task for many robotic applications such as autonomous driving. Fast and efficient semantic segmentation methods are needed to match the strong computational and temporal restrictions of many of these real-world applications. This work presents 3D-MiniNet, a novel approach for LIDAR semantic segmentation that combines 3D and 2D learning layers. It first learns a 2D representation from the raw points through a novel projection which extracts local and global information from the 3D data. This representation is fed to an efficient 2D Fully Convolutional Neural Network (FCNN) that produces a 2D semantic segmentation. These 2D semantic labels are re-projected back to the 3D space and enhanced through a post-processing module. The main novelty in our strategy relies on the projection learning module. Our detailed ablation study shows how each component contributes to the final performance of 3D-MiniNet. We validate our approach on well known public benchmarks (SemanticKITTI and KITTI), where 3D-MiniNet gets state-of-the-art results while being faster and more parameter-efficient than previous methods.", "field": [], "task": ["3D Semantic Segmentation", "Autonomous Driving", "Autonomous Vehicles", "LIDAR Semantic Segmentation", "Real-Time 3D Semantic Segmentation", "Real-Time Semantic Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["SemanticKITTI"], "metric": ["Parameters (M)", "Speed (FPS)", "mIoU"], "title": "3D-MiniNet: Learning a 2D Representation from Point Clouds for Fast and Efficient 3D LIDAR Semantic Segmentation"} {"abstract": "We propose to study the problem of few-shot learning with the prism of\ninference on a partially observed graphical model, constructed from a\ncollection of input images whose label can be either observed or not. By\nassimilating generic message-passing inference algorithms with their\nneural-network counterparts, we define a graph neural network architecture that\ngeneralizes several of the recently proposed few-shot learning models. 
Besides\nproviding improved numerical performance, our framework is easily extended to\nvariants of few-shot learning, such as semi-supervised or active learning,\ndemonstrating the ability of graph-based models to operate well on 'relational'\ntasks.", "field": [], "task": ["Active Learning", "Few-Shot Learning"], "method": [], "dataset": ["Stanford Cars 5-way (5-shot)", "Stanford Dogs 5-way (5-shot)", "Stanford Cars 5-way (1-shot)"], "metric": ["Accuracy"], "title": "Few-Shot Learning with Graph Neural Networks"} {"abstract": "We present O-CNN, an Octree-based Convolutional Neural Network (CNN) for 3D\nshape analysis. Built upon the octree representation of 3D shapes, our method\ntakes the average normal vectors of a 3D model sampled in the finest leaf\noctants as input and performs 3D CNN operations on the octants occupied by the\n3D shape surface. We design a novel octree data structure to efficiently store\nthe octant information and CNN features into the graphics memory and execute\nthe entire O-CNN training and evaluation on the GPU. O-CNN supports various CNN\nstructures and works for 3D shapes in different representations. By restraining\nthe computations on the octants occupied by 3D surfaces, the memory and\ncomputational costs of the O-CNN grow quadratically as the depth of the octree\nincreases, which makes the 3D CNN feasible for high-resolution 3D models. We\ncompare the performance of the O-CNN with other existing 3D CNN solutions and\ndemonstrate the efficiency and efficacy of O-CNN in three shape analysis tasks,\nincluding object classification, shape retrieval, and shape segmentation.", "field": [], "task": ["3D Object Classification"], "method": [], "dataset": ["ModelNet40"], "metric": ["Classification Accuracy"], "title": "O-CNN: Octree-based Convolutional Neural Networks for 3D Shape Analysis"} {"abstract": "We propose a novel approach for instance-level image retrieval. It produces a\nglobal and compact fixed-length representation for each image by aggregating\nmany region-wise descriptors. In contrast to previous works employing\npre-trained deep networks as a black box to produce features, our method\nleverages a deep architecture trained for the specific task of image retrieval.\nOur contribution is twofold: (i) we leverage a ranking framework to learn\nconvolution and projection weights that are used to build the region features;\nand (ii) we employ a region proposal network to learn which regions should be\npooled to form the final global descriptor. We show that using clean training\ndata is key to the success of our approach. To that aim, we use a large scale\nbut noisy landmark dataset and develop an automatic cleaning approach. The\nproposed architecture produces a global image representation in a single\nforward pass. Our approach significantly outperforms previous approaches based\non global descriptors on standard datasets. It even surpasses most prior works\nbased on costly local descriptor indexing and spatial verification. Additional\nmaterial is available at www.xrce.xerox.com/Deep-Image-Retrieval.", "field": [], "task": ["Image Retrieval", "Region Proposal"], "method": [], "dataset": ["Par106k", "Par6k", "Oxf5k", "Oxf105k"], "metric": ["mAP", "MAP"], "title": "Deep Image Retrieval: Learning global representations for image search"} {"abstract": "Noisy data and the similarity in the ocular appearances caused by different ophthalmic pathologies pose significant challenges for an automated expert system to accurately detect retinal diseases. 
In addition, the lack of knowledge transferability and the need for unreasonably large datasets limit clinical application of current machine learning systems. To increase robustness, a better understanding of how the retinal subspace deformations lead to various levels of disease severity needs to be utilized for prioritizing disease-specific model details. In this paper we propose the use of disease-specific feature representation as a novel architecture comprised of two joint networks -- one for supervised encoding of the disease model and the other for producing attention maps in an unsupervised manner to retain disease-specific spatial information. Our experimental results on publicly available datasets show the proposed joint-network significantly improves the accuracy and robustness of state-of-the-art retinal disease classification networks on unseen datasets.", "field": [], "task": ["Retinal OCT Disease Classification"], "method": [], "dataset": ["Srinivasan2014", "OCT2017"], "metric": ["Acc"], "title": "Improving Robustness using Joint Attention Network For Detecting Retinal Degeneration From Optical Coherence Tomography Images"} {"abstract": "We consider the challenging problem of zero-shot video object segmentation (VOS). That is, segmenting and tracking multiple moving objects within a video fully automatically, without any manual initialization. We treat this as a grouping problem by exploiting object proposals and making a joint inference about grouping over both space and time. We propose a network architecture for tractably performing proposal selection and joint grouping. Crucially, we then show how to train this network with reinforcement learning so that it learns to perform the optimal non-myopic sequence of grouping decisions to segment the whole video. Unlike standard supervised techniques, this also enables us to directly optimize for the non-differentiable overlap-based metrics used to evaluate VOS. We show that the proposed method, which we call ALBA, outperforms the previous state-of-the-art on three benchmarks: DAVIS 2017 [2], FBMS [20] and Youtube-VOS [27].", "field": [], "task": ["Semantic Segmentation", "Unsupervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation", "Youtube-VOS"], "method": [], "dataset": ["DAVIS 2017 (val)", "FBMS"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "ALBA : Reinforcement Learning for Video Object Segmentation"} {"abstract": "Estimating the complete 3D point cloud from an incomplete one is a key problem in many vision and robotics applications. Mainstream methods (e.g., PCN and TopNet) use Multi-layer Perceptrons (MLPs) to directly process point clouds, which may cause the loss of details because the structure and context of point clouds are not fully considered. To solve this problem, we introduce 3D grids as intermediate representations to regularize unordered point clouds. We therefore propose a novel Gridding Residual Network (GRNet) for point cloud completion. In particular, we devise two novel differentiable layers, named Gridding and Gridding Reverse, to convert between point clouds and 3D grids without losing structural information. We also present the differentiable Cubic Feature Sampling layer to extract features of neighboring points, which preserves context information. 
In addition, we design a new loss function, namely Gridding Loss, to calculate the L1 distance between the 3D grids of the predicted and ground truth point clouds, which is helpful to recover details. Experimental results indicate that the proposed GRNet performs favorably against state-of-the-art methods on the ShapeNet, Completion3D, and KITTI benchmarks.", "field": [], "task": ["Point Cloud Completion"], "method": [], "dataset": ["Completion3D", "ShapeNet"], "metric": ["F-Score@1%", "Chamfer Distance"], "title": "GRNet: Gridding Residual Network for Dense Point Cloud Completion"} {"abstract": "We present an approach for unsupervised domain adaptation---with a strong focus on practical considerations of within-domain class imbalance and between-domain class distribution shift---from a class-conditioned domain alignment perspective. Current methods for class-conditioned domain alignment aim to explicitly minimize a loss function based on pseudo-label estimations of the target domain. However, these methods suffer from pseudo-label bias in the form of error accumulation. We propose a method that removes the need for explicit optimization of model parameters from pseudo-labels directly. Instead, we present a sampling-based implicit alignment approach, where the sample selection procedure is implicitly guided by the pseudo-labels. Theoretical analysis reveals the existence of a domain-discriminator shortcut in misaligned classes, which is addressed by the proposed implicit alignment approach to facilitate domain-adversarial learning. Empirical results and ablation studies confirm the effectiveness of the proposed approach, especially in the presence of within-domain class imbalance and between-domain class distribution shift.", "field": [], "task": ["Domain Adaptation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["Office-Home (RS-UT imbalance)", "Office-31", "VisDA2017", "Office-Home"], "metric": ["Avg accuracy", "Average Per-Class Accuracy", "Accuracy"], "title": "Implicit Class-Conditioned Domain Alignment for Unsupervised Domain Adaptation"} {"abstract": "Pretrained language models are promising particularly for low-resource languages as they only require unlabelled data. However, training existing models requires huge amounts of compute, while pretrained cross-lingual models often underperform on low-resource languages. We propose Multi-lingual language model Fine-Tuning (MultiFiT) to enable practitioners to train and fine-tune language models efficiently in their own language. In addition, we propose a zero-shot method using an existing pretrained cross-lingual model. We evaluate our methods on two widely used cross-lingual classification datasets where they outperform models pretrained on orders of magnitude more data and compute. We release all models and code.", "field": [], "task": ["Cross-Lingual Document Classification", "Document Classification", "Language Modelling", "Zero-shot Cross-Lingual Document Classification"], "method": [], "dataset": ["MLDoc Zero-Shot English-to-German", "MLDoc Zero-Shot English-to-French", "MLDoc Zero-Shot English-to-Spanish", "MLDoc Zero-Shot English-to-Chinese", "MLDoc Zero-Shot English-to-Japanese", "MLDoc Zero-Shot English-to-Italian", "MLDoc Zero-Shot English-to-Russian"], "metric": ["Accuracy"], "title": "MultiFiT: Efficient Multi-lingual Language Model Fine-tuning"} {"abstract": "Sleep spindles and K-complexes are among the most prominent micro-events observed in electroencephalographic (EEG) recordings during sleep. 
These EEG microstructures are thought to be hallmarks of sleep-related cognitive processes. Although tedious and time-consuming, their identification and quantification is important for sleep studies in both healthy subjects and patients with sleep disorders. Therefore, procedures for automatic detection of spindles and K-complexes could provide valuable assistance to researchers and clinicians in the field. Recently, we proposed a framework for joint spindle and K-complex detection (Lajnef et al., 2015a) based on a Tunable Q-factor Wavelet Transform (TQWT; Selesnick, 2011a) and morphological component analysis (MCA). Using a wide range of performance metrics, the present article provides critical validation and benchmarking of the proposed approach by applying it to open-access EEG data from the Montreal Archive of Sleep Studies (MASS; O'Reilly et al., 2014). Importantly, the obtained scores were compared to alternative methods that were previously tested on the same database. With respect to spindle detection, our method achieved higher performance than most of the alternative methods. This was corroborated with statistic tests that took into account both sensitivity and precision (i.e., Matthew's coefficient of correlation (MCC), F1, Cohen \u03ba). Our proposed method has been made available to the community via an open-source tool named Spinky (for spindle and K-complex detection). Thanks to a GUI implementation and access to Matlab and Python resources, Spinky is expected to contribute to an open-science approach that will enhance replicability and reliable comparisons of classifier performances for the detection of sleep EEG microstructure in both healthy and patient populations.", "field": [], "task": ["EEG", "K-complex detection", "Spindle Detection"], "method": [], "dataset": ["MASS SS2"], "metric": ["F1-score (@IoU = 0.3)"], "title": "Meet Spinky: An Open-Source Spindle and K-Complex Detection Toolbox Validated on the Open-Access Montreal Archive of Sleep Studies (MASS)."} {"abstract": "We create a family of powerful video models which are able to: (i) learn interactions between semantic object information and raw appearance and motion features, and (ii) deploy attention in order to better learn the importance of features at each convolutional block of the network. A new network component named peer-attention is introduced, which dynamically learns the attention weights using another block or input modality. Even without pre-training, our models outperform the previous work on standard public activity recognition datasets with continuous videos, establishing new state-of-the-art. We also confirm that our findings of having neural connections from the object modality and the use of peer-attention is generally applicable for different existing architectures, improving their performances. We name our model explicitly as AssembleNet++. The code will be available at: https://sites.google.com/corp/view/assemblenet/", "field": [], "task": ["Action Classification", "Activity Recognition"], "method": [], "dataset": ["Charades"], "metric": ["MAP"], "title": "AssembleNet++: Assembling Modality Representations via Attention Connections"} {"abstract": "In this work, we propose and study annotated code search: the retrieval of code snippets paired with brief descriptions of their intent using natural language queries. On three benchmark datasets, we investigate how code retrieval systems can be improved by leveraging descriptions to better capture the intents of code snippets. 
Building on recent progress in transfer learning and natural language processing, we create a domain-specific retrieval model for code annotated with a natural language description. We find that our model yields significantly more relevant search results (with absolute gains up to 20.6% in mean reciprocal rank) compared to state-of-the-art code retrieval methods that do not use descriptions but attempt to compute the intent of snippets solely from unannotated code.", "field": [], "task": ["Annotated Code Search", "Code Search", "Information Retrieval", "Transfer Learning"], "method": [], "dataset": ["PACS-StaQC-py", "PACS-CoNaLa", "PACS-SO-DS"], "metric": ["MRR"], "title": "Neural Code Search Revisited: Enhancing Code Snippet Retrieval through Natural Language Intent"} {"abstract": "We present PERIN, a novel permutation-invariant approach to sentence-to-graph semantic parsing. PERIN is a versatile, cross-framework and language independent architecture for universal modeling of semantic structures. Our system participated in the CoNLL 2020 shared task, Cross-Framework Meaning Representation Parsing (MRP 2020), where it was evaluated on five different frameworks (AMR, DRG, EDS, PTG and UCCA) across four languages. PERIN was one of the winners of the shared task. The source code and pretrained models are available at https://github.com/ufal/perin.", "field": [], "task": ["Semantic Parsing"], "method": [], "dataset": ["PTG (english, MRP 2020)", "PTG (czech, MRP 2020)", "DRG (english, MRP 2020)", "UCCA (english, MRP 2020)", "EDS (english, MRP 2020)", "UCCA (german, MRP 2020)", "DRG (german, MRP 2020)", "AMR (chinese, MRP 2020)", "AMR (english, MRP 2020)"], "metric": ["F1"], "title": "\u00daFAL at MRP 2020: Permutation-invariant Semantic Parsing in PERIN"} {"abstract": "Relation extraction (RE) has been extensively studied due to its importance in real-world applications such as knowledge base construction and question answering. Most of the existing works train the models on either distantly supervised data or human-annotated data. To take advantage of the high accuracy of human annotation and the cheap cost of distant supervision, we propose the dual supervision framework which effectively utilizes both types of data. However, simply combining the two types of data to train a RE model may decrease the prediction accuracy since distant supervision has labeling bias. We employ two separate prediction networks HA-Net and DS-Net to predict the labels by human annotation and distant supervision, respectively, to prevent the degradation of accuracy by the incorrect labeling of distant supervision. Furthermore, we propose an additional loss term called disagreement penalty to enable HA-Net to learn from distantly supervised labels. In addition, we exploit additional networks to adaptively assess the labeling bias by considering contextual information. Our performance study on sentence-level and document-level REs confirms the effectiveness of the dual supervision framework.", "field": [], "task": ["Relation Extraction"], "method": [], "dataset": ["DocRED"], "metric": ["F1"], "title": "Dual Supervision Framework for Relation Extraction with Distant Supervision and Human Annotation"} {"abstract": "Image restoration tasks demand a complex balance between spatial details and high-level contextualized information while recovering images. In this paper, we propose a novel synergistic design that can optimally balance these competing goals. 
Our main proposal is a multi-stage architecture, that progressively learns restoration functions for the degraded inputs, thereby breaking down the overall recovery process into more manageable steps. Specifically, our model first learns the contextualized features using encoder-decoder architectures and later combines them with a high-resolution branch that retains local information. At each stage, we introduce a novel per-pixel adaptive design that leverages in-situ supervised attention to reweight the local features. A key ingredient in such a multi-stage architecture is the information exchange between different stages. To this end, we propose a two-faceted approach where the information is not only exchanged sequentially from early to late stages, but lateral connections between feature processing blocks also exist to avoid any loss of information. The resulting tightly interlinked multi-stage architecture, named as MPRNet, delivers strong performance gains on ten datasets across a range of tasks including image deraining, deblurring, and denoising. The source code and pre-trained models are available at https://github.com/swz30/MPRNet.", "field": [], "task": ["Deblurring", "Denoising", "Image Denoising", "Image Restoration", "Rain Removal", "Single Image Deraining"], "method": [], "dataset": ["DND", "RealBlur-R", "Test2800", "RealBlur-J", "GoPro", "Rain100H", "SIDD", "Test100", "RealBlur-J (trained on GoPro)", "RealBlur-R (trained on GoPro)", "Test1200", "Rain100L", "HIDE (trained on GOPRO)"], "metric": ["SSIM", "SSIM (sRGB)", "PSNR", "PSNR (sRGB)"], "title": "Multi-Stage Progressive Image Restoration"} {"abstract": "We propose the ViNet architecture for audio-visual saliency prediction. ViNet is a fully convolutional encoder-decoder architecture. The encoder uses visual features from a network trained for action recognition, and the decoder infers a saliency map via trilinear interpolation and 3D convolutions, combining features from multiple hierarchies. The overall architecture of ViNet is conceptually simple; it is causal and runs in real-time (60 fps). ViNet does not use audio as input and still outperforms the state-of-the-art audio-visual saliency prediction models on nine different datasets (three visual-only and six audio-visual datasets). ViNet also surpasses human performance on the CC, SIM and AUC metrics for the AVE dataset, and to our knowledge, it is the first network to do so. We also explore a variation of ViNet architecture by augmenting audio features into the decoder. To our surprise, upon sufficient training, the network becomes agnostic to the input audio and provides the same output irrespective of the input. Interestingly, we also observe similar behaviour in the previous state-of-the-art models \\cite{tsiami2020stavis} for audio-visual saliency prediction. Our findings contrast with previous works on deep learning-based audio-visual saliency prediction, suggesting a clear avenue for future explorations incorporating audio in a more effective manner. 
The code and pre-trained models are available at https://github.com/samyak0210/ViNet.", "field": [], "task": ["Action Recognition", "Saliency Prediction", "Video Saliency Detection", "Video Saliency Prediction"], "method": [], "dataset": ["Hollywood2", "UCFSports", "DIEM", "DHF1K"], "metric": ["AUC-J", "NSS", "CC", "s-AUC"], "title": "ViNet: Pushing the limits of Visual Modality for Audio-Visual Saliency Prediction"} {"abstract": "This paper presents our systems for the three Subtasks of SemEval Task4: Reading Comprehension of Abstract Meaning (ReCAM). We explain the algorithms used to learn our models and the process of tuning the algorithms and selecting the best model. Inspired by the similarity of the ReCAM task and the language pre-training, we propose a simple yet effective technology, namely, negative augmentation with language model. Evaluation results demonstrate the effectiveness of our proposed approach. Our models achieve the 4th rank on both official test sets of Subtask 1 and Subtask 2 with an accuracy of 87.9% and an accuracy of 92.8%, respectively. We further conduct comprehensive model analysis and observe interesting error cases, which may promote future researches.", "field": [], "task": ["Language Modelling", "Reading Comprehension"], "method": [], "dataset": ["ReCAM"], "metric": ["Accuracy"], "title": "ZJUKLAB at SemEval-2021 Task 4: Negative Augmentation with Language Model for Reading Comprehension of Abstract Meaning"} {"abstract": "The design of recurrent neural networks (RNNs) to accurately process sequential inputs with long-time dependencies is very challenging on account of the exploding and vanishing gradient problem. To overcome this, we propose a novel RNN architecture which is based on a structure preserving discretization of a Hamiltonian system of second-order ordinary differential equations that models networks of oscillators. The resulting RNN is fast, invertible (in time), memory efficient and we derive rigorous bounds on the hidden state gradients to prove the mitigation of the exploding and vanishing gradient problem. A suite of experiments are presented to demonstrate that the proposed RNN provides state of the art performance on a variety of learning tasks with (very) long time-dependencies.", "field": [], "task": ["Sentiment Analysis", "Sequential Image Classification", "Time Series"], "method": [], "dataset": ["IMDb", "Sequential MNIST"], "metric": ["Permuted Accuracy", "Accuracy"], "title": "UnICORNN: A recurrent model for learning very long time dependencies"} {"abstract": "The significance of social media has increased manifold in the past few decades as it helps people from even the most remote corners of the world stay connected. With the COVID-19 pandemic raging, social media has become more relevant and widely used than ever before, and along with this, there has been a resurgence in the circulation of fake news and tweets that demand immediate attention. In this paper, we describe our Fake News Detection system that automatically identifies whether a tweet related to COVID-19 is \"real\" or \"fake\", as a part of CONSTRAINT COVID19 Fake News Detection in English challenge. We have used an ensemble model consisting of pre-trained models that has helped us achieve a joint 8th position on the leader board. We have achieved an F1-score of 0.9831 against a top score of 0.9869. 
Post completion of the competition, we have been able to drastically improve our system by incorporating a novel heuristic algorithm based on username handles and link domains in tweets, fetching an F1-score of 0.9883 and achieving state-of-the-art results on the given dataset.", "field": [], "task": ["Fake News Detection"], "method": [], "dataset": ["COVID-19 Fake News Dataset"], "metric": ["F1"], "title": "A Heuristic-driven Ensemble Framework for COVID-19 Fake News Detection"} {"abstract": "We propose a straightforward method that simultaneously reconstructs the 3D\nfacial structure and provides dense alignment. To achieve this, we design a 2D\nrepresentation called UV position map which records the 3D shape of a complete\nface in UV space, then train a simple Convolutional Neural Network to regress\nit from a single 2D image. We also integrate a weight mask into the loss\nfunction during training to improve the performance of the network. Our method\ndoes not rely on any prior face model, and can reconstruct full facial geometry\nalong with semantic meaning. Meanwhile, our network is very lightweight and\nspends only 9.8ms to process an image, which is much faster than previous\nworks. Experiments on multiple challenging datasets show that our method\nsurpasses other state-of-the-art methods on both reconstruction and alignment\ntasks by a large margin.", "field": [], "task": ["3D Face Reconstruction", "Face Alignment", "Face Model", "Face Reconstruction", "Regression"], "method": [], "dataset": ["Stirling-LQ (FG2018 3D face reconstruction challenge)", "Stirling-HQ (FG2018 3D face reconstruction challenge)", "AFLW-LFPA", "NoW Benchmark", "Florence", "AFLW2000-3D"], "metric": ["Mean Reconstruction Error (mm)", "Mean NME "], "title": "Joint 3D Face Reconstruction and Dense Alignment with Position Map Regression Network"} {"abstract": "We present a compact but effective CNN model for optical flow, called\nPWC-Net. PWC-Net has been designed according to simple and well-established\nprinciples: pyramidal processing, warping, and the use of a cost volume. Cast\nin a learnable feature pyramid, PWC-Net uses the current optical flow\nestimate to warp the CNN features of the second image. It then uses the warped\nfeatures and features of the first image to construct a cost volume, which is\nprocessed by a CNN to estimate the optical flow. PWC-Net is 17 times smaller in\nsize and easier to train than the recent FlowNet2 model. Moreover, it\noutperforms all published optical flow methods on the MPI Sintel final pass and\nKITTI 2015 benchmarks, running at about 35 fps on Sintel resolution (1024x436)\nimages. Our models are available on https://github.com/NVlabs/PWC-Net.", "field": [], "task": ["Dense Pixel Correspondence Estimation", "Optical Flow Estimation"], "method": [], "dataset": ["HPatches"], "metric": ["Viewpoint IV AEPE", "Viewpoint III AEPE", "Viewpoint I AEPE", "Viewpoint V AEPE", "Viewpoint II AEPE"], "title": "PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume"} {"abstract": "Both bottom-up and top-down strategies have been used for neural\ntransition-based constituent parsing. The parsing strategies differ in terms of\nthe order in which they recognize productions in the derivation tree, where\nbottom-up strategies and top-down strategies take post-order and pre-order\ntraversal over trees, respectively. 
Bottom-up parsers benefit from rich\nfeatures from readily built partial parses, but lack lookahead guidance in the\nparsing process; top-down parsers benefit from non-local guidance for local\ndecisions, but rely on a strong encoder over the input to predict a constituent\nhierarchy before its construction.To mitigate both issues, we propose a novel\nparsing system based on in-order traversal over syntactic trees, designing a\nset of transition actions to find a compromise between bottom-up constituent\ninformation and top-down lookahead information. Based on stack-LSTM, our\npsycholinguistically motivated constituent parsing system achieves 91.8 F1 on\nWSJ benchmark. Furthermore, the system achieves 93.6 F1 with supervised\nreranking and 94.2 F1 with semi-supervised reranking, which are the best\nresults on the WSJ benchmark.", "field": [], "task": [], "method": [], "dataset": ["Penn Treebank"], "metric": ["F1 score"], "title": "In-Order Transition-based Constituent Parsing"} {"abstract": "We present a novel technique for learning semantic representations, which\nextends the distributional hypothesis to multilingual data and joint-space\nembeddings. Our models leverage parallel data and learn to strongly align the\nembeddings of semantically equivalent sentences, while maintaining sufficient\ndistance between those of dissimilar sentences. The models do not rely on word\nalignments or any syntactic information and are successfully applied to a\nnumber of diverse languages. We extend our approach to learn semantic\nrepresentations at the document level, too. We evaluate these models on two\ncross-lingual document classification tasks, outperforming the prior state of\nthe art. Through qualitative analysis and the study of pivoting effects we\ndemonstrate that our representations are semantically plausible and can capture\nsemantic relationships across languages without parallel data.", "field": [], "task": ["Cross-Lingual Document Classification", "Document Classification", "Learning Semantic Representations"], "method": [], "dataset": ["Reuters RCV1/RCV2 English-to-German", "Reuters RCV1/RCV2 German-to-English"], "metric": ["Accuracy"], "title": "Multilingual Models for Compositional Distributed Semantics"} {"abstract": "Learning long-term spatial-temporal features are critical for many video\nanalysis tasks. However, existing video segmentation methods predominantly rely\non static image segmentation techniques, and methods capturing temporal\ndependency for segmentation have to depend on pretrained optical flow models,\nleading to suboptimal solutions for the problem. End-to-end sequential learning\nto explore spatial-temporal features for video segmentation is largely limited\nby the scale of available video segmentation datasets, i.e., even the largest\nvideo segmentation dataset only contains 90 short video clips. To solve this\nproblem, we build a new large-scale video object segmentation dataset called\nYouTube Video Object Segmentation dataset (YouTube-VOS). Our dataset contains\n3,252 YouTube video clips and 78 categories including common objects and human\nactivities. This is by far the largest video object segmentation dataset to our\nknowledge and we have released it at https://youtube-vos.org. Based on this\ndataset, we propose a novel sequence-to-sequence network to fully exploit\nlong-term spatial-temporal information in videos for segmentation. 
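The abstract does not detail the architecture at this point, so the following is only a sketch of one common way to realize such a sequence-to-sequence segmenter: a small per-frame convolutional encoder, a ConvLSTM carrying state across frames, and a per-frame mask decoder. All layer widths and the clip size are placeholders rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """A standard ConvLSTM cell: LSTM gates computed with convolutions."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, h, c):
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class Seq2SeqSegmenter(nn.Module):
    def __init__(self, hid_ch=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, hid_ch, 3, padding=1), nn.ReLU())
        self.rnn = ConvLSTMCell(hid_ch, hid_ch)
        self.decoder = nn.Conv2d(hid_ch, 1, 1)    # per-pixel object logit

    def forward(self, clip):                      # clip: (T, 3, H, W)
        T, _, H, W = clip.shape
        h = clip.new_zeros(1, self.rnn.hid_ch, H, W)
        c = torch.zeros_like(h)
        masks = []
        for t in range(T):
            feat = self.encoder(clip[t:t + 1])    # keep a batch dim of 1
            h, c = self.rnn(feat, h, c)
            masks.append(torch.sigmoid(self.decoder(h)))
        return torch.cat(masks)                   # (T, 1, H, W)

print(Seq2SeqSegmenter()(torch.rand(5, 3, 32, 32)).shape)  # torch.Size([5, 1, 32, 32])
```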
We\ndemonstrate that our method is able to achieve the best results on our\nYouTube-VOS test set and comparable results on DAVIS 2016 compared to the\ncurrent state-of-the-art methods. Experiments show that the large scale dataset\nis indeed a key factor to the success of our model.", "field": [], "task": ["Optical Flow Estimation", "Semantic Segmentation", "Video Object Segmentation", "Video Segmentation", "Video Semantic Segmentation", "Visual Object Tracking", "Youtube-VOS"], "method": [], "dataset": ["YouTube-VOS"], "metric": ["Speed (FPS)", "Jaccard (Unseen)", "Jaccard (Seen)", "F-Measure (Seen)", "Overall", "F-Measure (Unseen)"], "title": "YouTube-VOS: Sequence-to-Sequence Video Object Segmentation"} {"abstract": "We marry two powerful ideas: deep representation learning for visual\nrecognition and language understanding, and symbolic program execution for\nreasoning. Our neural-symbolic visual question answering (NS-VQA) system first\nrecovers a structural scene representation from the image and a program trace\nfrom the question. It then executes the program on the scene representation to\nobtain an answer. Incorporating symbolic structure as prior knowledge offers\nthree unique advantages. First, executing programs on a symbolic space is more\nrobust to long program traces; our model can solve complex reasoning tasks\nbetter, achieving an accuracy of 99.8% on the CLEVR dataset. Second, the model\nis more data- and memory-efficient: it performs well after learning on a small\nnumber of training data; it can also encode an image into a compact\nrepresentation, requiring less storage than existing methods for offline\nquestion answering. Third, symbolic program execution offers full transparency\nto the reasoning process; we are thus able to interpret and diagnose each\nexecution step.", "field": [], "task": ["Question Answering", "Representation Learning", "Visual Question Answering"], "method": [], "dataset": ["CLEVR-Humans", "CLEVR"], "metric": ["Accuracy"], "title": "Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding"} {"abstract": "Neural network models, based on the attentional encoder-decoder model, have good capability in abstractive text summarization. However, these models are hard to be controlled in the process of generation, which leads to a lack of key information. We propose a guiding generation model that combines the extractive method and the abstractive method. Firstly, we obtain keywords from the text by a extractive model. Then, we introduce a Key Information Guide Network (KIGN), which encodes the keywords to the key information representation, to guide the process of generation. In addition, we use a prediction-guide mechanism, which can obtain the long-term value for future decoding, to further guide the summary generation. We evaluate our model on the CNN/Daily Mail dataset. The experimental results show that our model leads to significant improvements.", "field": [], "task": ["Abstractive Text Summarization", "Text Summarization"], "method": [], "dataset": ["CNN / Daily Mail (Anonymized)"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Guiding Generation for Abstractive Text Summarization Based on Key Information Guide Network"} {"abstract": "We propose DecaProp (Densely Connected Attention Propagation), a new densely\nconnected neural architecture for reading comprehension (RC). There are two\ndistinct characteristics of our model. 
Firstly, our model densely connects all\npairwise layers of the network, modeling relationships between passage and\nquery across all hierarchical levels. Secondly, the dense connectors in our\nnetwork are learned via attention instead of standard residual skip-connectors.\nTo this end, we propose novel Bidirectional Attention Connectors (BAC) for\nefficiently forging connections throughout the network. We conduct extensive\nexperiments on four challenging RC benchmarks. Our proposed approach achieves\nstate-of-the-art results on all four, outperforming existing baselines by up to\n$2.6\\%-14.2\\%$ in absolute F1 score.", "field": [], "task": ["Open-Domain Question Answering", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["SearchQA", "NewsQA", "Quasar", "NarrativeQA", "Quasart-T"], "metric": ["Rouge-L", "METEOR", "BLEU-1", "N-gram F1", "Unigram Acc", "F1", "EM", "BLEU-4", "EM (Quasar-T)", "F1 (Quasar-T)"], "title": "Densely Connected Attention Propagation for Reading Comprehension"} {"abstract": "Recent advances in language modeling using recurrent neural networks have made it viable to model language as distributions over characters. By learning to predict the next character on the basis of previous characters, such models have been shown to automatically internalize linguistic concepts such as words, sentences, subclauses and even sentiment. In this paper, we propose to leverage the internal states of a trained character language model to produce a novel type of word embedding which we refer to as contextual string embeddings. Our proposed embeddings have the distinct properties that they (a) are trained without any explicit notion of words and thus fundamentally model words as sequences of characters, and (b) are contextualized by their surrounding text, meaning that the same word will have different embeddings depending on its contextual use. We conduct a comparative evaluation against previous embeddings and find that our embeddings are highly useful for downstream tasks: across four classic sequence labeling tasks we consistently outperform the previous state-of-the-art. In particular, we significantly outperform previous work on English and German named entity recognition (NER), allowing us to report new state-of-the-art F1-scores on the CoNLL03 shared task. We release all code and pre-trained language models in a simple-to-use framework to the research community, to enable reproduction of these experiments and application of our proposed embeddings to other tasks: https://github.com/zalandoresearch/flair", "field": [], "task": ["Chunking", "Language Modelling", "Named Entity Recognition", "Part-Of-Speech Tagging", "Word Embeddings"], "method": [], "dataset": ["Penn Treebank", "CoNLL 2003 (German) Revised", "CoNLL 2000", "Ontonotes v5 (English)", "CoNLL 2003 (English)", "Long-tail emerging entities", "CoNLL++"], "metric": ["Exact Span F1", "F1", "F1 score", "Accuracy"], "title": "Contextual String Embeddings for Sequence Labeling"} {"abstract": "Few-shot learning in image classification aims to learn a classifier to\nclassify images when only few training examples are available for each class.\nRecent work has achieved promising classification performance, where an\nimage-level feature based measure is usually used. In this paper, we argue that\na measure at such a level may not be effective enough in light of the scarcity\nof examples in few-shot learning. 
Instead, we think a local descriptor based\nimage-to-class measure should be taken, inspired by its surprising success in\nthe heydays of local invariant features. Specifically, building upon the recent\nepisodic training mechanism, we propose a Deep Nearest Neighbor Neural Network\n(DN4 in short) and train it in an end-to-end manner. Its key difference from\nthe literature is the replacement of the image-level feature based measure in\nthe final layer by a local descriptor based image-to-class measure. This\nmeasure is conducted online via a $k$-nearest neighbor search over the deep\nlocal descriptors of convolutional feature maps. The proposed DN4 not only\nlearns the optimal deep local descriptors for the image-to-class measure, but\nalso utilizes the higher efficiency of such a measure in the case of example\nscarcity, thanks to the exchangeability of visual patterns across the images in\nthe same class. Our work leads to a simple, effective, and computationally\nefficient framework for few-shot learning. Experimental study on benchmark\ndatasets consistently shows its superiority over the related state-of-the-art,\nwith the largest absolute improvement of $17\\%$ over the next best. The source\ncode can be available from \\UrlFont{https://github.com/WenbinLee/DN4.git}.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Image Classification"], "method": [], "dataset": ["Stanford Dogs 5-way (5-shot)", "Mini-Imagenet 5-way (1-shot)", "Stanford Cars 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "Stanford Dogs 5-way (1-shot)", "CUB 200 5-way 1-shot", "Stanford Cars 5-way (5-shot)", "CUB 200 5-way 5-shot"], "metric": ["Accuracy"], "title": "Revisiting Local Descriptor based Image-to-Class Measure for Few-shot Learning"} {"abstract": "In this paper we present DELTA, a deep learning based language technology platform. DELTA is an end-to-end platform designed to solve industry level natural language and speech processing problems. It integrates most popular neural network models for training as well as comprehensive deployment tools for production. DELTA aims to provide easy and fast experiences for using, deploying, and developing natural language processing and speech models for both academia and industry use cases. We demonstrate the reliable performance with DELTA on several natural language processing and speech tasks, including text classification, named entity recognition, natural language inference, speech recognition, speaker verification, etc. DELTA has been used for developing several state-of-the-art algorithms for publications and delivering real production to serve millions of users.", "field": [], "task": ["Abstractive Text Summarization", "Intent Detection", "Named Entity Recognition", "Natural Language Inference", "Speaker Verification", "Speech Recognition", "Text Classification"], "method": [], "dataset": ["CNN / Daily Mail", "Yahoo! Answers", "SNLI", "CoNLL 2003 (English)", "ATIS", "TREC-6"], "metric": ["% Test Accuracy", "Error", "F1", "Accuracy", "ROUGE-L"], "title": "DELTA: A DEep learning based Language Technology plAtform"} {"abstract": "In sequence modeling tasks the token order matters, but this information can be partially lost due to the discretization of the sequence into data points. In this paper, we study the imbalance between the way certain token pairs are included in data points and others are not. 
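As a concrete illustration of this imbalance (under toy assumptions: an eight-token sequence and a data-point length of four), non-overlapping discretization never exposes the token pair that straddles a chunk boundary, whereas overlapped chunking recovers every adjacent pair:

```python
def adjacent_pairs(chunks):
    """Token pairs (t, t+1) that appear together inside some data point."""
    return {(c[i], c[i + 1]) for c in chunks for i in range(len(c) - 1)}

tokens = list("abcdefgh")
L = 4  # data-point (chunk) length, a toy value

# Non-overlapping discretization: ['abcd', 'efgh'] -- the pair (d, e) is never seen.
plain = [tokens[i:i + L] for i in range(0, len(tokens), L)]

# Overlapped discretization (stride 1 here for clarity) restores every adjacent pair.
overlapped = [tokens[i:i + L] for i in range(0, len(tokens) - L + 1)]

all_pairs = {(tokens[i], tokens[i + 1]) for i in range(len(tokens) - 1)}
print(sorted(all_pairs - adjacent_pairs(plain)))       # [('d', 'e')]
print(sorted(all_pairs - adjacent_pairs(overlapped)))  # []
```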
We denote this a token order imbalance (TOI) and we link the partial sequence information loss to a diminished performance of the system as a whole, both in text and speech processing tasks. We then provide a mechanism to leverage the full token order information -Alleviated TOI- by iteratively overlapping the token composition of data points. For recurrent networks, we use prime numbers for the batch size to avoid redundancies when building batches from overlapped data points. The proposed method achieved state of the art performance in both text and speech related tasks.", "field": [], "task": ["Language Modelling"], "method": [], "dataset": ["WikiText-2", "WikiText-103"], "metric": ["Number of params", "Validation perplexity", "Test perplexity"], "title": "Alleviating Sequence Information Loss with Data Overlapping and Prime Batch Sizes"} {"abstract": "Contextualized word representations are able to give different representations for the same word in different contexts, and they have been shown to be effective in downstream natural language processing tasks, such as question answering, named entity recognition, and sentiment analysis. However, evaluation on word sense disambiguation (WSD) in prior work shows that using contextualized word representations does not outperform the state-of-the-art approach that makes use of non-contextualized word embeddings. In this paper, we explore different strategies of integrating pre-trained contextualized word representations and our best strategy achieves accuracies exceeding the best prior published accuracies by significant margins on multiple benchmark WSD datasets. We make the source code available at https://github.com/nusnlp/contextemb-wsd.", "field": [], "task": ["Named Entity Recognition", "Question Answering", "Sentiment Analysis", "Word Embeddings", "Word Sense Disambiguation"], "method": [], "dataset": ["Supervised:"], "metric": ["Senseval 2", "Senseval 3", "SemEval 2013", "SemEval 2007", "SemEval 2015"], "title": "Improved Word Sense Disambiguation Using Pre-Trained Contextualized Word Representations"} {"abstract": "Transfer learning has fundamentally changed the landscape of natural language processing (NLP) research. Many existing state-of-the-art models are first pre-trained on a large text corpus and then fine-tuned on downstream tasks. However, due to limited data resources from downstream tasks and the extremely large capacity of pre-trained models, aggressive fine-tuning often causes the adapted model to overfit the data of downstream tasks and forget the knowledge of the pre-trained model. To address the above issue in a more principled manner, we propose a new computational framework for robust and efficient fine-tuning for pre-trained language models. Specifically, our proposed framework contains two important ingredients: 1. Smoothness-inducing regularization, which effectively manages the capacity of the model; 2. Bregman proximal point optimization, which is a class of trust-region methods and can prevent knowledge forgetting. 
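A minimal sketch of the first ingredient, smoothness-inducing regularization, under simplifying assumptions: a toy classifier over dense inputs, a single random perturbation inside an epsilon-ball instead of the worst-case one, and a symmetrized KL between the clean and perturbed output distributions. The network, radius, and weight are placeholders, and this is not the paper's exact procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def symmetric_kl(logits_p, logits_q):
    """Symmetrized KL divergence between two categorical distributions."""
    p, q = F.softmax(logits_p, dim=-1), F.softmax(logits_q, dim=-1)
    log_p, log_q = F.log_softmax(logits_p, dim=-1), F.log_softmax(logits_q, dim=-1)
    return ((p * (log_p - log_q)).sum(-1) + (q * (log_q - log_p)).sum(-1)).mean()

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 3))
x, y = torch.randn(8, 32), torch.randint(0, 3, (8,))   # toy "embeddings" and labels
eps, lam = 1e-2, 1.0                                    # perturbation radius, reg. weight

logits = model(x)
# One random perturbation inside the eps-ball (the paper instead searches for the worst case).
delta = eps * F.normalize(torch.randn_like(x), dim=-1)
logits_pert = model(x + delta)

loss = F.cross_entropy(logits, y) + lam * symmetric_kl(logits, logits_pert)
loss.backward()
print(float(loss))
```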
Our experiments demonstrate that our proposed method achieves the state-of-the-art performance on multiple NLP benchmarks.", "field": [], "task": ["Linguistic Acceptability", "Natural Language Inference", "Semantic Textual Similarity", "Sentiment Analysis", "Transfer Learning"], "method": [], "dataset": ["MultiNLI", "SST-2 Binary classification", "SNLI", "STS Benchmark", "WNLI", "MRPC", "QNLI", "SciTail"], "metric": ["Pearson Correlation", "% Test Accuracy", "Spearman Correlation", "Matched", "Accuracy", "Mismatched"], "title": "SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization"} {"abstract": "Graph embedding has become a key component of many data mining and analysis systems. Current graph embedding approaches either sample a large number of node pairs from a graph to learn node embeddings via stochastic optimization or factorize a high-order proximity/adjacency matrix of the graph via computationally expensive matrix factorization techniques. These approaches typically require significant resources for the learning process and rely on multiple parameters, which limits their applicability in practice. Moreover, most of the existing graph embedding techniques operate effectively in one specific metric space only (e.g., the one produced with cosine similarity), do not preserve higher-order structural features of the input graph and cannot automatically determine a meaningful number of embedding dimensions. Typically, the produced embeddings are not easily interpretable, which complicates further analyses and limits their applicability. To address these issues, we propose DAOR, a highly efficient and parameter-free graph embedding technique producing metric space-robust, compact and interpretable embeddings without any manual tuning. Compared to a dozen state-of-the-art graph embedding algorithms, DAOR yields competitive results on both node classification (which benefits form high-order proximity) and link prediction (which relies on low-order proximity mostly). Unlike existing techniques, however, DAOR does not require any parameter tuning and improves the embeddings generation speed by several orders of magnitude. Our approach has hence the ambition to greatly simplify and speed up data analysis tasks involving graph representation learning.", "field": [], "task": ["Community Detection", "Graph Embedding", "Graph Representation Learning", "Link Prediction", "Node Classification", "Representation Learning", "Stochastic Optimization"], "method": [], "dataset": ["BlogCatalog", "Wiki", "DBLP"], "metric": ["Macro F1", "Micro F1"], "title": "Bridging the Gap between Community and Node Representations: Graph Embedding via Community Detection"} {"abstract": "A large number of real-world graphs or networks are inherently heterogeneous, involving a diversity of node types and relation types. Heterogeneous graph embedding is to embed rich structural and semantic information of a heterogeneous graph into low-dimensional node representations. Existing models usually define multiple metapaths in a heterogeneous graph to capture the composite relations and guide neighbor selection. However, these models either omit node content features, discard intermediate nodes along the metapath, or only consider one metapath. To address these three limitations, we propose a new model named Metapath Aggregated Graph Neural Network (MAGNN) to boost the final performance. 
Specifically, MAGNN employs three major components, i.e., the node content transformation to encapsulate input node attributes, the intra-metapath aggregation to incorporate intermediate semantic nodes, and the inter-metapath aggregation to combine messages from multiple metapaths. Extensive experiments on three real-world heterogeneous graph datasets for node classification, node clustering, and link prediction show that MAGNN achieves more accurate prediction results than state-of-the-art baselines.", "field": [], "task": ["Graph Embedding", "Link Prediction", "Node Classification", "Node Clustering"], "method": [], "dataset": ["Last.FM"], "metric": ["AP", "AUC"], "title": "MAGNN: Metapath Aggregated Graph Neural Network for Heterogeneous Graph Embedding"} {"abstract": "Point clouds are an efficient data format for 3D data. However, existing 3D\nsegmentation methods for point clouds either do not model local dependencies\n\\cite{pointnet} or require added computations \\cite{kd-net,pointnet2}. This\nwork presents a novel 3D segmentation framework, RSNet\\footnote{Codes are\nreleased here https://github.com/qianguih/RSNet}, to efficiently model local\nstructures in point clouds. The key component of the RSNet is a lightweight\nlocal dependency module. It is a combination of a novel slice pooling layer,\nRecurrent Neural Network (RNN) layers, and a slice unpooling layer. The slice\npooling layer is designed to project features of unordered points onto an\nordered sequence of feature vectors so that traditional end-to-end learning\nalgorithms (RNNs) can be applied. The performance of RSNet is validated by\ncomprehensive experiments on the S3DIS\\cite{stanford}, ScanNet\\cite{scannet},\nand ShapeNet \\cite{shapenet} datasets. In its simplest form, RSNets surpass all\nprevious state-of-the-art methods on these benchmarks. And comparisons against\nprevious state-of-the-art methods \\cite{pointnet, pointnet2} demonstrate the\nefficiency of RSNets.", "field": [], "task": ["Semantic Segmentation"], "method": [], "dataset": ["S3DIS"], "metric": ["Mean IoU", "mAcc"], "title": "Recurrent Slice Networks for 3D Segmentation of Point Clouds"} {"abstract": "We present a deep learning strategy that enables, for the first time, contrast-agnostic semantic segmentation of completely unpreprocessed brain MRI scans, without requiring additional training or fine-tuning for new modalities. Classical Bayesian methods address this segmentation problem with unsupervised intensity models, but require significant computational resources. In contrast, learning-based methods can be fast at test time, but are sensitive to the data available at training. Our proposed learning method, SynthSeg, leverages a set of training segmentations (no intensity images required) to generate synthetic sample images of widely varying contrasts on the fly during training. These samples are produced using the generative model of the classical Bayesian segmentation framework, with randomly sampled parameters for appearance, deformation, noise, and bias field. Because each mini-batch has a different synthetic contrast, the final network is not biased towards any MRI contrast. We comprehensively evaluate our approach on four datasets comprising over 1,000 subjects and four types of MR contrast. The results show that our approach successfully segments every contrast in the data, performing slightly better than classical Bayesian segmentation, and three orders of magnitude faster. 
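A toy sketch of this kind of on-the-fly synthesis, assuming a 2-D label map, one Gaussian intensity distribution sampled per label, a smooth multiplicative bias field, and additive noise; the real generative model also randomizes spatial deformation and works in 3-D, so every value below is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

def synth_image(labels, n_labels, rng):
    """Draw a random-contrast image from a segmentation: per-label Gaussian
    intensities, a smooth multiplicative bias field, and additive noise."""
    means = rng.uniform(0.0, 1.0, size=n_labels)           # random contrast per label
    stds = rng.uniform(0.01, 0.1, size=n_labels)
    img = rng.normal(means[labels], stds[labels])           # per-pixel sampling
    # Smooth bias field: low-resolution noise upsampled by nearest-neighbour repeat.
    coarse = rng.uniform(0.8, 1.2, size=(4, 4))
    bias = np.kron(coarse, np.ones((labels.shape[0] // 4, labels.shape[1] // 4)))
    img = img * bias + rng.normal(0.0, 0.02, size=labels.shape)
    return img.astype(np.float32)

labels = rng.integers(0, 4, size=(64, 64))                  # toy 4-class label map
batch = np.stack([synth_image(labels, 4, rng) for _ in range(3)])
print(batch.shape)  # (3, 64, 64): same anatomy, three different synthetic contrasts
```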
Moreover, even within the same type of MRI contrast, our strategy generalizes significantly better across datasets, compared to training using real images. Finally, we find that synthesizing a broad range of contrasts, even if unrealistic, increases the generalization of the neural network. Our code and model are open source at https://github.com/BBillot/SynthSeg.", "field": [], "task": ["Brain Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["Brain MRI segmentation"], "metric": ["Dice Scoe", "Dice Score"], "title": "A Learning Strategy for Contrast-agnostic MRI Segmentation"} {"abstract": "Atari games have been a long-standing benchmark in the reinforcement learning (RL) community for the past decade. This benchmark was proposed to test general competency of RL algorithms. Previous work has achieved good average performance by doing outstandingly well on many games of the set, but very poorly in several of the most challenging games. We propose Agent57, the first deep RL agent that outperforms the standard human benchmark on all 57 Atari games. To achieve this result, we train a neural network which parameterizes a family of policies ranging from very exploratory to purely exploitative. We propose an adaptive mechanism to choose which policy to prioritize throughout the training process. Additionally, we utilize a novel parameterization of the architecture that allows for more consistent and stable learning.", "field": [], "task": ["Atari Games"], "method": [], "dataset": ["Atari 2600 Amidar", "Atari 2600 River Raid", "Atari 2600 Beam Rider", "Atari 2600 Video Pinball", "Atari 2600 Demon Attack", "Atari 2600 Enduro", "Atari 2600 Alien", "Atari 2600 Boxing", "Atari 2600 Pitfall!", "Atari 2600 Bank Heist", "Atari 2600 Tutankham", "Atari 2600 Time Pilot", "Atari 2600 Space Invaders", "Atari 2600 Assault", "Atari 2600 Phoenix", "Atari 2600 Gravitar", "Atari 2600 Ice Hockey", "Atari 2600 Bowling", "Atari 2600 Private Eye", "Atari 2600 Berzerk", "Atari 2600 Asterix", "Atari 2600 Breakout", "Atari 2600 Name This Game", "Atari 2600 Crazy Climber", "Atari 2600 Pong", "Atari 2600 Krull", "Atari 2600 Freeway", "Atari 2600 James Bond", "Atari 2600 Defender", "Atari 2600 Robotank", "Atari 2600 Kangaroo", "Atari 2600 Venture", "Atari 2600 Asteroids", "Atari 2600 Fishing Derby", "Atari 2600 Ms. Pacman", "Atari 2600 Seaquest", "Atari 2600 Tennis", "Atari 2600 Solaris", "Atari 2600 Zaxxon", "Atari 2600 Frostbite", "Atari 2600 Star Gunner", "Atari 2600 Double Dunk", "Atari 2600 Battle Zone", "Atari 2600 Gopher", "Atari 2600 Skiing", "Atari 2600 Road Runner", "Atari 2600 Atlantis", "Atari 2600 Kung-Fu Master", "Atari 2600 Chopper Command", "Atari 2600 Surround", "Atari 2600 Yars Revenge", "Atari 2600 Up and Down", "Atari 2600 Montezuma's Revenge", "Atari 2600 Wizard of Wor", "Atari 2600 Q*Bert", "Atari 2600 Centipede", "Atari 2600 HERO"], "metric": ["Score"], "title": "Agent57: Outperforming the Atari Human Benchmark"} {"abstract": "Most recent sentence simplification systems use basic machine translation models to learn lexical and syntactic paraphrases from a manually simplified parallel corpus. These methods are limited by the quality and quantity of manually simplified corpora, which are expensive to build. In this paper, we conduct an in-depth adaptation of statistical machine translation to perform text simplification, taking advantage of large-scale paraphrases learned from bilingual texts and a small amount of manual simplifications with multiple references. 
Our work is the first to design automatic metrics that are effective for tuning and evaluating simplification systems, which will facilitate iterative development for this task.", "field": [], "task": ["Machine Translation", "Text Simplification"], "method": [], "dataset": ["TurkCorpus"], "metric": ["BLEU", "SARI (EASSE>=0.2.1)"], "title": "Optimizing Statistical Machine Translation for Text Simplification"} {"abstract": "Due to the spatially variant blur caused by camera shake and object motions under different scene depths, deblurring images captured from dynamic scenes is challenging. Although recent works based on deep neural networks have shown great progress on this problem, their models are usually large and computationally expensive. In this paper, we propose a novel spatially variant neural network to address the problem. The proposed network is composed of three deep convolutional neural networks (CNNs) and a recurrent neural network (RNN). RNN is used as a deconvolution operator performed on feature maps extracted from the input image by one of the CNNs. Another CNN is used to learn the weights for the RNN at every location. As a result, the RNN is spatially variant and could implicitly model the deblurring process with spatially variant kernels. The third CNN is used to reconstruct the final deblurred feature maps into restored image. The whole network is end-to-end trainable. Our analysis shows that the proposed network has a large receptive field even with a small model size. Quantitative and qualitative evaluations on public datasets demonstrate that the proposed method performs favorably against state-of-the-art algorithms in terms of accuracy, speed, and model size.", "field": [], "task": ["Deblurring"], "method": [], "dataset": ["RealBlur-J (trained on GoPro)", "GoPro", "RealBlur-R (trained on GoPro)"], "metric": ["SSIM", "SSIM (sRGB)", "PSNR", "PSNR (sRGB)"], "title": "Dynamic Scene Deblurring Using Spatially Variant Recurrent Neural Networks"} {"abstract": "Models and examples built with TensorFlow", "field": [], "task": ["Object Detection", "Object Recognition", "Real-Time Object Detection", "Video Object Detection"], "method": [], "dataset": ["ImageNet VID"], "metric": ["FPS", "MAP"], "title": "Looking Fast and Slow: Memory-Guided Mobile Video Object Detection"} {"abstract": "Current anchor-free object detectors label all the features that spatially fall inside a predefined central region of a ground-truth box as positive. This approach causes label noise during training, since some of these positively labeled features may be on the background or an occluder object, or they are simply not discriminative features. In this paper, we propose a new labeling strategy aimed to reduce the label noise in anchor-free detectors. We sum-pool predictions stemming from individual features into a single prediction. This allows the model to reduce the contributions of non-discriminatory features during training. We develop a new one-stage, anchor-free object detector, PPDet, to employ this labeling strategy during training and a similar prediction pooling method during inference. On the COCO dataset, PPDet achieves the best performance among anchor-free top-down detectors and performs on-par with the other state-of-the-art methods. It also outperforms all major one-stage and two-stage methods in small object detection (${AP}_{S}$ $31.4$). 
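The pooling idea can be sketched in a few lines under illustrative assumptions: the class logits predicted at the feature locations falling inside one ground-truth box are summed into a single prediction before the loss, so the model can learn to assign small logits at uninformative locations and shrink their share of the pooled score. Shapes and the loss are placeholders, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

num_classes = 80
# Logits predicted at the 6 feature locations that fall inside one ground-truth box
# (shape: n_features_in_box x num_classes). Values are random placeholders.
feature_logits = torch.randn(6, num_classes, requires_grad=True)
box_class = torch.tensor([17])                              # ground-truth class of the box

# Sum-pool the individual predictions into a single prediction for the box.
pooled = feature_logits.sum(dim=0, keepdim=True)            # (1, num_classes)
loss = F.cross_entropy(pooled, box_class)
loss.backward()

# Training drives the pooled score, so locations that end up with small logits
# contribute little to it; this is how non-discriminatory features are de-emphasized.
print(loss.item(), feature_logits.grad.shape)
```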
Code is available at https://github.com/nerminsamet/ppdet", "field": [], "task": ["Object Detection", "Small Object Detection"], "method": [], "dataset": ["COCO minival", "COCO test-dev"], "metric": ["APM", "oLRP", "box AP", "AP75", "APS", "APL", "AP50"], "title": "Reducing Label Noise in Anchor-Free Object Detection"} {"abstract": "Lifelong learning has attracted much attention, but existing works still struggle to fight catastrophic forgetting and accumulate knowledge over long stretches of incremental learning. In this work, we propose PODNet, a model inspired by representation learning. By carefully balancing the compromise between remembering the old classes and learning new ones, PODNet fights catastrophic forgetting, even over very long runs of small incremental tasks --a setting so far unexplored by current works. PODNet innovates on existing art with an efficient spatial-based distillation-loss applied throughout the model and a representation comprising multiple proxy vectors for each class. We validate those innovations thoroughly, comparing PODNet with three state-of-the-art models on three datasets: CIFAR100, ImageNet100, and ImageNet1000. Our results showcase a significant advantage of PODNet over existing art, with accuracy gains of 12.10, 6.51, and 2.85 percentage points, respectively. Code is available at https://github.com/arthurdouillard/incremental_learning.pytorch", "field": [], "task": ["Continual Learning", "Incremental Learning", "Representation Learning"], "method": [], "dataset": ["ImageNet - 500 classes + 10 steps of 50 classes", "CIFAR-100 - 50 classes + 25 steps of 2 classes", "CIFAR-100 - 50 classes + 5 steps of 10 classes", "ImageNet-100 - 50 classes + 50 steps of 1 class", "ImageNet-100 - 50 classes + 5 steps of 10 classes", "CIFAR-100 - 50 classes + 50 steps of 1 class", "ImageNet-100 - 50 classes + 10 steps of 5 classes", "CIFAR-100 - 50 classes + 10 steps of 5 classes", "ImageNet-100 - 50 classes + 25 steps of 2 classes", "ImageNet - 500 classes + 5 steps of 100 classes"], "metric": ["Average Incremental Accuracy"], "title": "PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning"} {"abstract": "We propose PeTra, a memory-augmented neural network designed to track entities in its memory slots. PeTra is trained using sparse annotation from the GAP pronoun resolution dataset and outperforms a prior memory model on the task while using a simpler architecture. We empirically compare key modeling choices, finding that we can simplify several aspects of the design of the memory module while retaining strong performance. To measure the people tracking capability of memory models, we (a) propose a new diagnostic evaluation based on counting the number of unique entities in text, and (b) conduct a small scale human evaluation to compare evidence of people tracking in the memory logs of PeTra relative to a previous approach. PeTra is highly effective in both evaluations, demonstrating its ability to track people in its memory despite being trained with limited annotation.", "field": [], "task": ["Coreference Resolution"], "method": [], "dataset": ["GAP"], "metric": ["F1"], "title": "PeTra: A Sparsely Supervised Memory Model for People Tracking"} {"abstract": "Continuous input signals like images and time series that are irregularly sampled or have missing values are challenging for existing deep learning methods. Coherently defined feature representations must depend on the values in unobserved regions of the input. 
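The point about unobserved regions can be made concrete with a toy Gaussian-process interpolation of an irregularly sampled 1-D signal: the posterior mean supplies values everywhere and the posterior variance quantifies the discretization uncertainty. The kernel, length-scale, and noise level are placeholders, and this shows only the interpolation step, not the paper's full construction (which evolves a PDE on the GP).

```python
import numpy as np

def rbf(a, b, lengthscale=0.3):
    """Squared-exponential kernel matrix between 1-D location vectors a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale ** 2)

rng = np.random.default_rng(0)
x_obs = np.sort(rng.uniform(0, 1, size=12))        # irregular sample locations
y_obs = np.sin(2 * np.pi * x_obs) + 0.05 * rng.normal(size=12)
x_query = np.linspace(0, 1, 50)                    # where feature values are needed

K = rbf(x_obs, x_obs) + 1e-4 * np.eye(12)          # observation covariance (+ noise jitter)
K_qo = rbf(x_query, x_obs)

post_mean = K_qo @ np.linalg.solve(K, y_obs)                               # values everywhere
post_var = 1.0 - np.einsum("ij,ij->i", K_qo, np.linalg.solve(K, K_qo.T).T)  # uncertainty
print(post_mean.shape, post_var.shape)             # (50,) (50,)
```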
Drawing from the work in probabilistic numerics, we propose Probabilistic Numeric Convolutional Neural Networks which represent features as Gaussian processes (GPs), providing a probabilistic description of discretization error. We then define a convolutional layer as the evolution of a PDE defined on this GP, followed by a nonlinearity. This approach also naturally admits steerable equivariant convolutions under e.g. the rotation group. In experiments we show that our approach yields a $3\\times$ reduction of error from the previous state of the art on the SuperPixel-MNIST dataset and competitive performance on the medical time series dataset PhysioNet2012.", "field": [], "task": ["Gaussian Processes", "Time Series"], "method": [], "dataset": ["75 Superpixel MNIST"], "metric": ["Classification Error"], "title": "Probabilistic Numeric Convolutional Neural Networks"} {"abstract": "Hostile content on social platforms is ever increasing. This has led to the need for proper detection of hostile posts so that appropriate action can be taken to tackle them. Though a lot of work has been done recently in the English Language to solve the problem of hostile content online, similar works in Indian Languages are quite hard to find. This paper presents a transfer learning based approach to classify social media (i.e Twitter, Facebook, etc.) posts in Hindi Devanagari script as Hostile or Non-Hostile. Hostile posts are further analyzed to determine if they are Hateful, Fake, Defamation, and Offensive. This paper harnesses attention based pre-trained models fine-tuned on Hindi data with Hostile-Non hostile task as Auxiliary and fusing its features for further sub-tasks classification. Through this approach, we establish a robust and consistent model without any ensembling or complex pre-processing. We have presented the results from our approach in CONSTRAINT-2021 Shared Task on hostile post detection where our model performs extremely well with 3rd runner up in terms of Weighted Fine-Grained F1 Score.", "field": [], "task": ["Fake News Detection", "Hate Speech Detection", "Transfer Learning"], "method": [], "dataset": ["Hostility Detection Dataset in Hindi"], "metric": ["F1 score"], "title": "Hostility Detection in Hindi leveraging Pre-Trained Language Models"} {"abstract": "Training a sound event detection algorithm on a heterogeneous dataset including both recorded and synthetic soundscapes that can have various labeling granularity is a non-trivial task that can lead to systems requiring several technical choices. These technical choices are often passed from one system to another without being questioned. We propose to perform a detailed analysis of DCASE 2020 task 4 sound event detection baseline with regards to several aspects such as the type of data used for training, the parameters of the mean-teacher or the transformations applied while generating the synthetic soundscapes. Some of the parameters that are usually used as default are shown to be sub-optimal.", "field": [], "task": ["Sound Event Detection"], "method": [], "dataset": ["DESED"], "metric": ["event-based F1 score", "PSDS (gtc=dtc=0.5,emax=100,cttc=0.3,ct=1,st=0)"], "title": "Training Sound Event Detection On A Heterogeneous Dataset"} {"abstract": "In this paper, we present Neural Phrase-based Machine Translation (NPMT). Our\nmethod explicitly models the phrase structures in output sequences using\nSleep-WAke Networks (SWAN), a recently proposed segmentation-based sequence\nmodeling method. 
To mitigate the monotonic alignment requirement of SWAN, we\nintroduce a new layer to perform (soft) local reordering of input sequences.\nDifferent from existing neural machine translation (NMT) approaches, NPMT does\nnot use attention-based decoding mechanisms. Instead, it directly outputs\nphrases in a sequential order and can decode in linear time. Our experiments\nshow that NPMT achieves superior performances on IWSLT 2014\nGerman-English/English-German and IWSLT 2015 English-Vietnamese machine\ntranslation tasks compared with strong NMT baselines. We also observe that our\nmethod produces meaningful phrases in output languages.", "field": [], "task": ["Machine Translation"], "method": [], "dataset": ["IWSLT2015 German-English", "IWSLT2014 German-English", "IWSLT2015 English-German"], "metric": ["BLEU score"], "title": "Towards Neural Phrase-based Machine Translation"} {"abstract": "Previous monocular depth estimation methods take a single view and directly\nregress the expected results. Though recent advances are made by applying\ngeometrically inspired loss functions during training, the inference procedure\ndoes not explicitly impose any geometrical constraint. Therefore these models\npurely rely on the quality of data and the effectiveness of learning to\ngeneralize. This either leads to suboptimal results or the demand of huge\namount of expensive ground truth labelled data to generate reasonable results.\nIn this paper, we show for the first time that the monocular depth estimation\nproblem can be reformulated as two sub-problems, a view synthesis procedure\nfollowed by stereo matching, with two intriguing properties, namely i)\ngeometrical constraints can be explicitly imposed during inference; ii) demand\non labelled depth data can be greatly alleviated. We show that the whole\npipeline can still be trained in an end-to-end fashion and this new formulation\nplays a critical role in advancing the performance. The resulting model\noutperforms all the previous monocular depth estimation methods as well as the\nstereo block matching method in the challenging KITTI dataset by only using a\nsmall number of real training data. The model also generalizes well to other\nmonocular depth estimation benchmarks. We also discuss the implications and the\nadvantages of solving monocular depth estimation using stereo methods.", "field": [], "task": ["Depth Estimation", "Monocular Depth Estimation", "Stereo Matching", "Stereo Matching Hand"], "method": [], "dataset": ["KITTI Eigen split"], "metric": ["absolute relative error"], "title": "Single View Stereo Matching"} {"abstract": "Occluded face detection is a challenging detection task due to the large\nappearance variations incurred by various real-world occlusions. This paper\nintroduces an Adversarial Occlusion-aware Face Detector (AOFD) by\nsimultaneously detecting occluded faces and segmenting occluded areas.\nSpecifically, we employ an adversarial training strategy to generate\nocclusion-like face features that are difficult for a face detector to\nrecognize. Occlusion mask is predicted simultaneously while detecting occluded\nfaces and the occluded area is utilized as an auxiliary instead of being\nregarded as a hindrance. Moreover, the supervisory signals from the\nsegmentation branch will reversely affect the features, aiding in detecting\nheavily-occluded faces accordingly. 
Consequently, AOFD is able to find the\nfaces with few exposed facial landmarks with very high confidences and keeps\nhigh detection accuracy even for masked faces. Extensive experiments\ndemonstrate that AOFD not only significantly outperforms state-of-the-art\nmethods on the MAFA occluded face detection dataset, but also achieves\ncompetitive detection accuracy on benchmark dataset for general face detection\nsuch as FDDB.", "field": [], "task": ["Face Detection", "Occluded Face Detection"], "method": [], "dataset": ["MAFA"], "metric": ["MAP"], "title": "Adversarial Occlusion-aware Face Detection"} {"abstract": "Due to the availability of large-scale skeleton datasets, 3D human action recognition has recently called the attention of computer vision community. Many works have focused on encoding skeleton data as skeleton image representations based on spatial structure of the skeleton joints, in which the temporal dynamics of the sequence is encoded as variations in columns and the spatial structure of each frame is represented as rows of a matrix. To further improve such representations, we introduce a novel skeleton image representation to be used as input of Convolutional Neural Networks (CNNs), named SkeleMotion. The proposed approach encodes the temporal dynamics by explicitly computing the magnitude and orientation values of the skeleton joints. Different temporal scales are employed to compute motion values to aggregate more temporal dynamics to the representation making it able to capture longrange joint interactions involved in actions as well as filtering noisy motion values. Experimental results demonstrate the effectiveness of the proposed representation on 3D action recognition outperforming the state-of-the-art on NTU RGB+D 120 dataset.", "field": [], "task": ["3D Action Recognition", "Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D", "NTU RGB+D 120"], "metric": ["Accuracy (Cross-Subject)", "Accuracy (Cross-Setup)", "Accuracy (CV)", "Accuracy (CS)"], "title": "SkeleMotion: A New Representation of Skeleton Joint Sequences Based on Motion Information for 3D Action Recognition"} {"abstract": "Deep learning models trained in natural images are commonly used for different classification tasks in the medical domain. Generally, very high dimensional medical images are down-sampled by us- ing interpolation techniques before feeding them to deep learning models that are ImageNet compliant and accept only low-resolution images of size 224 \u00d7 224 px. This popular technique may lead to the loss of key information thus hampering the classification. Signifi- cant pathological features in medical images typically being small sized and highly affected. To combat this problem, we introduce a convolutional neural network (CNN) based classification approach which learns to reduce the resolution of the image using an autoen- coder and at the same time classify it using another network, while both the tasks are trained jointly. This algorithm guides the model to learn essential representations from high-resolution images for classification along with reconstruction. We have used the publicly available dataset of chest x-rays to evaluate this approach and have outperformed state-of-the-art on test data. 
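A minimal sketch of such joint training under assumed components: a convolutional encoder that downsamples, a decoder trained for reconstruction, a classifier reading the compressed representation, and a weighted sum of the two losses. Layer sizes, the input resolution, and the loss weight are placeholders, not the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointCompressClassify(nn.Module):
    def __init__(self, n_classes=14):
        super().__init__()
        self.encoder = nn.Sequential(                  # downsamples 64x64 -> 16x16
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(                  # reconstructs the input resolution
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1))
        self.classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                        nn.Linear(16, n_classes))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z)

model = JointCompressClassify()
x = torch.rand(4, 1, 64, 64)                           # toy grayscale "x-rays"
y = torch.randint(0, 14, (4,))
recon, logits = model(x)
loss = F.mse_loss(recon, x) + 1.0 * F.cross_entropy(logits, y)   # joint objective
loss.backward()
print(recon.shape, logits.shape)                       # (4, 1, 64, 64) (4, 14)
```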
Besides, we have experi- mented with the effects of different augmentation approaches in this dataset and report baselines using some well known ImageNet class of CNNs.", "field": [], "task": ["Pneumonia Detection", "Thoracic Disease Classification"], "method": [], "dataset": ["ChestX-ray14"], "metric": ["AUROC"], "title": "Jointly Learning Convolutional Representations to Compress Radiological Images and Classify Thoracic Diseases in the Compressed Domain"} {"abstract": "Offline signature verification is one of the most challenging tasks in\nbiometrics and document forensics. Unlike other verification problems, it needs\nto model minute but critical details between genuine and forged signatures,\nbecause a skilled falsification might often resembles the real signature with\nsmall deformation. This verification task is even harder in writer independent\nscenarios which is undeniably fiscal for realistic cases. In this paper, we\nmodel an offline writer independent signature verification task with a\nconvolutional Siamese network. Siamese networks are twin networks with shared\nweights, which can be trained to learn a feature space where similar\nobservations are placed in proximity. This is achieved by exposing the network\nto a pair of similar and dissimilar observations and minimizing the Euclidean\ndistance between similar pairs while simultaneously maximizing it between\ndissimilar pairs. Experiments conducted on cross-domain datasets emphasize the\ncapability of our network to model forgery in different languages (scripts) and\nhandwriting styles. Moreover, our designed Siamese network, named SigNet,\nexceeds the state-of-the-art results on most of the benchmark signature\ndatasets, which paves the way for further research in this direction.", "field": [], "task": ["Handwriting Verification"], "method": [], "dataset": ["CEDAR Signature"], "metric": ["FAR"], "title": "SigNet: Convolutional Siamese Network for Writer Independent Offline Signature Verification"} {"abstract": "Interlacing is a widely used technique, for television broadcast and video\nrecording, to double the perceived frame rate without increasing the bandwidth.\nBut it presents annoying visual artifacts, such as flickering and silhouette\n\"serration,\" during the playback. Existing state-of-the-art deinterlacing\nmethods either ignore the temporal information to provide real-time performance\nbut lower visual quality, or estimate the motion for better deinterlacing but\nwith a trade-off of higher computational cost. In this paper, we present the\nfirst and novel deep convolutional neural networks (DCNNs) based method to\ndeinterlace with high visual quality and real-time performance. Unlike existing\nmodels for super-resolution problems which relies on the translation-invariant\nassumption, our proposed DCNN model utilizes the temporal information from both\nthe odd and even half frames to reconstruct only the missing scanlines, and\nretains the given odd and even scanlines for producing the full deinterlaced\nframes. By further introducing a layer-sharable architecture, our system can\nachieve real-time performance on a single GPU. 
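At output time this reconstruction scheme amounts to interleaving rows: the given field is copied verbatim into its scanlines and only the missing ones are filled in by the network. The sketch below uses a trivial averaging predictor as a stand-in for the DCNN; all sizes are toy values.

```python
import numpy as np

def deinterlace(field, predict_missing, keep_even=True):
    """Rebuild a full frame by keeping the given scanlines of `field` and filling
    the other rows with the predictor's output (a stand-in for the DCNN here)."""
    H, W = field.shape[0] * 2, field.shape[1]
    frame = np.empty((H, W), dtype=field.dtype)
    given_rows = slice(0, H, 2) if keep_even else slice(1, H, 2)
    missing_rows = slice(1, H, 2) if keep_even else slice(0, H, 2)
    frame[given_rows] = field                     # retained scanlines, copied verbatim
    frame[missing_rows] = predict_missing(field)  # only the missing lines are predicted
    return frame

# Placeholder "network": average vertically adjacent given lines.
def toy_predictor(field):
    return 0.5 * (field + np.roll(field, -1, axis=0))

field = np.arange(4 * 6, dtype=float).reshape(4, 6)   # one half-frame (even lines)
print(deinterlace(field, toy_predictor).shape)        # (8, 6)
```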
Experiments shows that our\nmethod outperforms all existing methods, in terms of reconstruction accuracy\nand computational performance.", "field": [], "task": ["Super-Resolution", "Video Deinterlacing"], "method": [], "dataset": ["MSU Deinterlacer Benchmark"], "metric": ["SSIM", "PSNR", "FPS on CPU"], "title": "Real-time Deep Video Deinterlacing"} {"abstract": "Synthesizing high resolution photorealistic images has been a long-standing\nchallenge in machine learning. In this paper we introduce new methods for the\nimproved training of generative adversarial networks (GANs) for image\nsynthesis. We construct a variant of GANs employing label conditioning that\nresults in 128x128 resolution image samples exhibiting global coherence. We\nexpand on previous work for image quality assessment to provide two new\nanalyses for assessing the discriminability and diversity of samples from\nclass-conditional image synthesis models. These analyses demonstrate that high\nresolution samples provide class information not present in low resolution\nsamples. Across 1000 ImageNet classes, 128x128 samples are more than twice as\ndiscriminable as artificially resized 32x32 samples. In addition, 84.7% of the\nclasses have samples exhibiting diversity comparable to real ImageNet data.", "field": [], "task": ["Conditional Image Generation", "Image Generation", "Image Quality Assessment"], "method": [], "dataset": ["ImageNet 128x128", "CIFAR-10"], "metric": ["Inception score"], "title": "Conditional Image Synthesis With Auxiliary Classifier GANs"} {"abstract": "We introduce EnhanceGAN, an adversarial learning based model that performs\nautomatic image enhancement. Traditional image enhancement frameworks typically\ninvolve training models in a fully-supervised manner, which require expensive\nannotations in the form of aligned image pairs. In contrast to these\napproaches, our proposed EnhanceGAN only requires weak supervision (binary\nlabels on image aesthetic quality) and is able to learn enhancement operators\nfor the task of aesthetic-based image enhancement. In particular, we show the\neffectiveness of a piecewise color enhancement module trained with weak\nsupervision, and extend the proposed EnhanceGAN framework to learning a deep\nfiltering-based aesthetic enhancer. The full differentiability of our image\nenhancement operators enables the training of EnhanceGAN in an end-to-end\nmanner. We further demonstrate the capability of EnhanceGAN in learning\naesthetic-based image cropping without any groundtruth cropping pairs. Our\nweakly-supervised EnhanceGAN reports competitive quantitative results on\naesthetic-based color enhancement as well as automatic image cropping, and a\nuser study confirms that our image enhancement results are on par with or even\npreferred over professional enhancement.", "field": [], "task": ["Image Cropping", "Image Enhancement"], "method": [], "dataset": ["AVA"], "metric": ["Bounding Box AP"], "title": "Aesthetic-Driven Image Enhancement by Adversarial Learning"} {"abstract": "This paper introduces new optimality-preserving operators on Q-functions. We\nfirst describe an operator for tabular representations, the consistent Bellman\noperator, which incorporates a notion of local policy consistency. We show that\nthis local consistency leads to an increase in the action gap at each state;\nincreasing this gap, we argue, mitigates the undesirable effects of\napproximation and estimation errors on the induced greedy policies. 
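A tabular sketch of a gap-increasing backup written in the advantage-learning form (the target is reduced by a fraction of the current action gap), which is one member of the family of operators discussed here; the abstract does not spell out the exact update, so the rule, step sizes, and toy table below are illustrative only.

```python
import numpy as np

def gap_increasing_update(Q, s, a, r, s_next, gamma=0.99, alpha=0.1, lr=0.5):
    """One tabular backup with an advantage-learning-style correction.

    Standard target:       r + gamma * max_b Q[s_next, b]
    Gap-increasing target: the standard target minus alpha * (max_b Q[s, b] - Q[s, a]),
    which leaves the greedy action untouched but pushes non-greedy values down,
    widening the action gap at state s.
    """
    standard = r + gamma * Q[s_next].max()
    gap = Q[s].max() - Q[s, a]                    # current action gap at (s, a)
    Q[s, a] += lr * (standard - alpha * gap - Q[s, a])
    return Q

Q = np.zeros((3, 2))                              # toy 3-state, 2-action table
Q = gap_increasing_update(Q, s=0, a=1, r=1.0, s_next=2)
print(Q)
```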
This\noperator can also be applied to discretized continuous space and time problems,\nand we provide empirical results evidencing superior performance in this\ncontext. Extending the idea of a locally consistent operator, we then derive\nsufficient conditions for an operator to preserve optimality, leading to a\nfamily of operators which includes our consistent Bellman operator. As\ncorollaries we provide a proof of optimality for Baird's advantage learning\nalgorithm and derive other gap-increasing operators with interesting\nproperties. We conclude with an empirical study on 60 Atari 2600 games\nillustrating the strong potential of these new operators.", "field": [], "task": ["Atari Games", "Q-Learning"], "method": [], "dataset": ["Atari 2600 Amidar", "Atari 2600 River Raid", "Atari 2600 Beam Rider", "Atari 2600 Video Pinball", "Atari 2600 Demon Attack", "Atari 2600 Enduro", "Atari 2600 Elevator Action", "Atari 2600 Alien", "Atari 2600 Boxing", "Atari 2600 Pitfall!", "Atari 2600 Bank Heist", "Atari 2600 Tutankham", "Atari 2600 Time Pilot", "Atari 2600 Space Invaders", "Atari 2600 Assault", "Atari 2600 Phoenix", "Atari 2600 Gravitar", "Atari 2600 Ice Hockey", "Atari 2600 Bowling", "Atari 2600 Private Eye", "Atari 2600 Berzerk", "Atari 2600 Asterix", "Atari 2600 Breakout", "Atari 2600 Name This Game", "Atari 2600 Crazy Climber", "Atari 2600 Pong", "Atari 2600 Krull", "Atari 2600 Freeway", "Atari 2600 James Bond", "Atari 2600 Defender", "Atari 2600 Robotank", "Atari 2600 Kangaroo", "Atari 2600 Venture", "Atari 2600 Asteroids", "Atari 2600 Fishing Derby", "Atari 2600 Ms. Pacman", "Atari 2600 Seaquest", "Atari 2600 Tennis", "Atari 2600 Solaris", "Atari 2600 Zaxxon", "Atari 2600 Frostbite", "Atari 2600 Star Gunner", "Atari 2600 Double Dunk", "Atari 2600 Battle Zone", "Atari 2600 Gopher", "Atari 2600 Skiing", "Atari 2600 Road Runner", "Atari 2600 Atlantis", "Atari 2600 Kung-Fu Master", "Atari 2600 Chopper Command", "Atari 2600 Surround", "Atari 2600 Yars Revenge", "Atari 2600 Up and Down", "Atari 2600 Montezuma's Revenge", "Atari 2600 Pooyan", "Atari 2600 Wizard of Wor", "Atari 2600 Q*Bert", "Atari 2600 Centipede", "Atari 2600 HERO"], "metric": ["Score"], "title": "Increasing the Action Gap: New Operators for Reinforcement Learning"} {"abstract": "Neural Machine Translation (NMT) is a new approach to machine translation\nthat has shown promising results that are comparable to traditional approaches.\nA significant weakness in conventional NMT systems is their inability to\ncorrectly translate very rare words: end-to-end NMTs tend to have relatively\nsmall vocabularies with a single unk symbol that represents every possible\nout-of-vocabulary (OOV) word. In this paper, we propose and implement an\neffective technique to address this problem. We train an NMT system on data\nthat is augmented by the output of a word alignment algorithm, allowing the NMT\nsystem to emit, for each OOV word in the target sentence, the position of its\ncorresponding word in the source sentence. This information is later utilized\nin a post-processing step that translates every OOV word using a dictionary.\nOur experiments on the WMT14 English to French translation task show that this\nmethod provides a substantial improvement of up to 2.8 BLEU points over an\nequivalent NMT system that does not use this technique. 
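The post-processing step can be sketched directly: for each emitted unknown-word placeholder the system also has the position of the aligned source word, and a bilingual dictionary (with a copy fallback, useful for names and numbers) supplies the replacement. The sentences, alignment, and dictionary below are toy placeholders, not data from the paper.

```python
def replace_unks(target_tokens, aligned_src_pos, source_tokens, dictionary):
    """Replace each <unk> with the dictionary translation of its aligned source
    word, falling back to copying the source word itself."""
    output = []
    for tok, pos in zip(target_tokens, aligned_src_pos):
        if tok == "<unk>" and pos is not None:
            src_word = source_tokens[pos]
            output.append(dictionary.get(src_word, src_word))
        else:
            output.append(tok)
    return output

source = ["le", "portrait", "de", "Gainsbourg"]
target = ["the", "<unk>", "of", "<unk>"]          # NMT output with OOV placeholders
alignment = [None, 1, None, 3]                    # aligned source position per target token
dictionary = {"portrait": "portrait"}             # toy bilingual dictionary

print(replace_unks(target, alignment, source, dictionary))
# ['the', 'portrait', 'of', 'Gainsbourg']
```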
With 37.5 BLEU points,\nour NMT system is the first to surpass the best result achieved on a WMT14\ncontest task.", "field": [], "task": ["Machine Translation", "Word Alignment"], "method": [], "dataset": ["WMT2014 English-French"], "metric": ["BLEU score"], "title": "Addressing the Rare Word Problem in Neural Machine Translation"} {"abstract": "We present an accurate, real-time approach to robotic grasp detection based\non convolutional neural networks. Our network performs single-stage regression\nto graspable bounding boxes without using standard sliding window or region\nproposal techniques. The model outperforms state-of-the-art approaches by 14\npercentage points and runs at 13 frames per second on a GPU. Our network can\nsimultaneously perform classification so that in a single step it recognizes\nthe object and finds a good grasp rectangle. A modification to this model\npredicts multiple grasps per object by using a locally constrained prediction\nmechanism. The locally constrained model performs significantly better,\nespecially on objects that can be grasped in a variety of ways.", "field": [], "task": ["Region Proposal", "Regression", "Robotic Grasping"], "method": [], "dataset": ["Cornell Grasp Dataset"], "metric": ["5 fold cross validation"], "title": "Real-Time Grasp Detection Using Convolutional Neural Networks"} {"abstract": "Obtaining accurate depth measurements out of a single image represents a\nfascinating solution to 3D sensing. CNNs led to considerable improvements in\nthis field, and recent trends replaced the need for ground-truth labels with\ngeometry-guided image reconstruction signals enabling unsupervised training.\nCurrently, for this purpose, state-of-the-art techniques rely on images\nacquired with a binocular stereo rig to predict inverse depth (i.e., disparity)\naccording to the aforementioned supervision principle. However, these methods\nsuffer from well-known problems near occlusions, left image border, etc\ninherited from the stereo setup. Therefore, in this paper, we tackle these\nissues by moving to a trinocular domain for training. Assuming the central\nimage as the reference, we train a CNN to infer disparity representations\npairing such image with frames on its left and right side. This strategy allows\nobtaining depth maps not affected by typical stereo artifacts. Moreover, being\ntrinocular datasets seldom available, we introduce a novel interleaved training\nprocedure enabling to enforce the trinocular assumption outlined from current\nbinocular datasets. Exhaustive experimental results on the KITTI dataset\nconfirm that our proposal outperforms state-of-the-art methods for unsupervised\nmonocular depth estimation trained on binocular stereo pairs as well as any\nknown methods relying on other cues.", "field": [], "task": ["Depth Estimation", "Image Reconstruction", "Monocular Depth Estimation"], "method": [], "dataset": ["KITTI Eigen split"], "metric": ["absolute relative error"], "title": "Learning monocular depth estimation with unsupervised trinocular assumptions"} {"abstract": "Many applications of stereo depth estimation in robotics require the\ngeneration of accurate disparity maps in real time under significant\ncomputational constraints. 
Current state-of-the-art algorithms force a choice\nbetween either generating accurate mappings at a slow pace, or quickly\ngenerating inaccurate ones, and additionally these methods typically require\nfar too many parameters to be usable on power- or memory-constrained devices.\nMotivated by these shortcomings, we propose a novel approach for disparity\nprediction in the anytime setting. In contrast to prior work, our end-to-end\nlearned approach can trade off computation and accuracy at inference time.\nDepth estimation is performed in stages, during which the model can be queried\nat any time to output its current best estimate. Our final model can process\n1242$ \\times $375 resolution images within a range of 10-35 FPS on an NVIDIA\nJetson TX2 module with only marginal increases in error -- using two orders of\nmagnitude fewer parameters than the most competitive baseline. The source code\nis available at https://github.com/mileyan/AnyNet .", "field": [], "task": ["Depth Estimation", "Stereo Depth Estimation"], "method": [], "dataset": ["KITTI2012", "KITTI2015"], "metric": [" three pixel error"], "title": "Anytime Stereo Image Depth Estimation on Mobile Devices"} {"abstract": "We present a highly accurate single-image super-resolution (SR) method. Our\nmethod uses a very deep convolutional network inspired by VGG-net used for\nImageNet classification \\cite{simonyan2015very}. We find increasing our network\ndepth shows a significant improvement in accuracy. Our final model uses 20\nweight layers. By cascading small filters many times in a deep network\nstructure, contextual information over large image regions is exploited in an\nefficient way. With very deep networks, however, convergence speed becomes a\ncritical issue during training. We propose a simple yet effective training\nprocedure. We learn residuals only and use extremely high learning rates\n($10^4$ times higher than SRCNN \\cite{dong2015image}) enabled by adjustable\ngradient clipping. Our proposed method performs better than existing methods in\naccuracy and visual improvements in our results are easily noticeable.", "field": [], "task": ["Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["VggFace2 - 8x upscaling", "Set14 - 2x upscaling", "Urban100 - 2x upscaling", "Set5 - 2x upscaling", "WebFace - 8x upscaling"], "metric": ["PSNR"], "title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"} {"abstract": "We propose a novel neural attention architecture to tackle machine\ncomprehension tasks, such as answering Cloze-style queries with respect to a\ndocument. Unlike previous models, we do not collapse the query into a single\nvector, instead we deploy an iterative alternating attention mechanism that\nallows a fine-grained exploration of both the query and the document. Our model\noutperforms state-of-the-art baselines in standard machine comprehension\nbenchmarks such as CNN news articles and the Children's Book Test (CBT)\ndataset.", "field": [], "task": ["Question Answering", "Reading Comprehension"], "method": [], "dataset": ["Children's Book Test", "CNN / Daily Mail"], "metric": ["CNN", "Accuracy-NE"], "title": "Iterative Alternating Neural Attention for Machine Reading"} {"abstract": "In this paper we investigate learning visual models for the steps of ordinary\ntasks using weak supervision via instructional narrations and an ordered list\nof steps instead of strong supervision via temporal annotations. 
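The very deep super-resolution record above hinges on two simple ideas: residual-only learning and adjustable gradient clipping. A hedged PyTorch sketch of both is shown below; the 20-layer, 64-channel shape follows the abstract, while the optimiser settings and the clipping threshold theta are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.nn.utils import clip_grad_value_

class VDSRLike(nn.Module):
    """A VDSR-style network: a deep stack of 3x3 convolutions that predicts the
    residual between the bicubic-upscaled input and the high-resolution target."""
    def __init__(self, depth=20, channels=64):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x + self.body(x)      # learn the residual only

model = VDSRLike()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
lr, theta = 0.1, 0.01                # adjustable clipping: bound grads by theta / lr
x = torch.randn(1, 1, 32, 32)
loss = nn.functional.mse_loss(model(x), torch.randn(1, 1, 32, 32))
loss.backward()
clip_grad_value_(model.parameters(), theta / lr)
opt.step()
```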
At the heart\nof our approach is the observation that weakly supervised learning may be\neasier if a model shares components while learning different steps: `pour egg'\nshould be trained jointly with other tasks involving `pour' and `egg'. We\nformalize this in a component model for recognizing steps and a weakly\nsupervised learning framework that can learn this model under temporal\nconstraints from narration and the list of steps. Past data does not permit\nsystematic studying of sharing and so we also gather a new dataset, CrossTask,\naimed at assessing cross-task sharing. Our experiments demonstrate that sharing\nacross tasks improves performance, especially when done at the component level\nand that our component model can parse previously unseen tasks by virtue of its\ncompositionality.", "field": [], "task": [], "method": [], "dataset": ["CrossTask"], "metric": ["Recall"], "title": "Cross-task weakly supervised learning from instructional videos"} {"abstract": "The problem of Knowledge Base Completion can be framed as a 3rd-order binary\ntensor completion problem. In this light, the Canonical Tensor Decomposition\n(CP) (Hitchcock, 1927) seems like a natural solution; however, current\nimplementations of CP on standard Knowledge Base Completion benchmarks are\nlagging behind their competitors. In this work, we attempt to understand the\nlimits of CP for knowledge base completion. First, we motivate and test a novel\nregularizer, based on tensor nuclear $p$-norms. Then, we present a\nreformulation of the problem that makes it invariant to arbitrary choices in\nthe inclusion of predicates or their reciprocals in the dataset. These two\nmethods combined allow us to beat the current state of the art on several\ndatasets with a CP decomposition, and obtain even better results using the more\nadvanced ComplEx model.", "field": [], "task": ["Knowledge Base Completion", "Link Prediction"], "method": [], "dataset": ["WN18RR", " FB15k", "FB15k-237", "YAGO3-10", "WN18"], "metric": ["Hits@10", "MRR", "Hits@3", "Hits@1"], "title": "Canonical Tensor Decomposition for Knowledge Base Completion"} {"abstract": "Graph neural networks (GNNs) have shown great power in learning on attributed graphs. However, it is still a challenge for GNNs to utilize information faraway from the source node. Moreover, general GNNs require graph attributes as input, so they cannot be appled to plain graphs. In the paper, we propose new models named G-GNNs (Global information for GNNs) to address the above limitations. First, the global structure and attribute features for each node are obtained via unsupervised pre-training, which preserve the global information associated to the node. Then, using the global features and the raw network attributes, we propose a parallel framework of GNNs to learn different aspects from these features. The proposed learning methods can be applied to both plain graphs and attributed graphs. Extensive experiments have shown that G-GNNs can outperform other state-of-the-art models on three standard evaluation graphs. 
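For the canonical tensor decomposition record above, the trilinear CP score and a nuclear p-norm style penalty can be sketched in a few lines; the rank, the p = 3 choice and the per-triple form of the penalty are assumptions for illustration only, not the paper's exact regularizer.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, rank = 100, 20, 32

# CP keeps separate subject and object embeddings plus relation embeddings.
U = rng.normal(size=(n_entities, rank))   # subject factors
V = rng.normal(size=(n_relations, rank))  # relation factors
W = rng.normal(size=(n_entities, rank))   # object factors

def cp_score(s, r, o):
    """Trilinear CP score for a (subject, relation, object) triple."""
    return float(np.sum(U[s] * V[r] * W[o]))

def nuclear_p_reg(s, r, o, p=3, lam=1e-2):
    """A nuclear p-norm style penalty on the factors touched by one triple
    (p = 3 is a common choice; treated as an assumption here)."""
    return lam * float((np.abs(U[s]) ** p + np.abs(V[r]) ** p + np.abs(W[o]) ** p).sum())

print(cp_score(3, 5, 7), nuclear_p_reg(3, 5, 7))
```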
Specially, our methods establish new benchmark records on Cora (84.31\\%) and Pubmed (80.95\\%) when learning on attributed graphs.", "field": [], "task": ["Node Classification", "Unsupervised Pre-training"], "method": [], "dataset": ["Cora with Public Split: fixed 20 nodes per class", "CiteSeer with Public Split: fixed 20 nodes per class", "PubMed with Public Split: fixed 20 nodes per class"], "metric": ["Accuracy"], "title": "Pre-train and Learn: Preserve Global Information for Graph Neural Networks"} {"abstract": "Tracking progress in machine learning has become increasingly difficult with the recent explosion in the number of papers. In this paper, we present AxCell, an automatic machine learning pipeline for extracting results from papers. AxCell uses several novel components, including a table segmentation subtask, to learn relevant structural knowledge that aids extraction. When compared with existing methods, our approach significantly improves the state of the art for results extraction. We also release a structured, annotated dataset for training models for results extraction, and a dataset for evaluating the performance of models on this task. Lastly, we show the viability of our approach enables it to be used for semi-automated results extraction in production, suggesting our improvements make this task practically viable for the first time. Code is available on GitHub.", "field": [], "task": ["Scientific Results Extraction"], "method": [], "dataset": ["NLP-TDMS (Exp, arXiv only)", "PWC Leaderboards (restricted)"], "metric": ["Macro F1", "Macro Recall", "Micro Recall", "Macro Precision", "Micro Precision", "Micro F1"], "title": "AxCell: Automatic Extraction of Results from Machine Learning Papers"} {"abstract": "We present an effective method to progressively integrate and refine the cross-modality complementarities for RGB-D salient object detection (SOD). The proposed network mainly solves two challenging issues: 1) how to effectively integrate the complementary information from RGB image and its corresponding depth map, and 2) how to adaptively select more saliency-related features. First, we propose a cross-modality feature modulation (cmFM) module to enhance feature representations by taking the depth features as prior, which models the complementary relations of RGB-D data. Second, we propose an adaptive feature selection (AFS) module to select saliency-related features and suppress the inferior ones. The AFS module exploits multi-modality spatial feature fusion with the self-modality and cross-modality interdependencies of channel features are considered. Third, we employ a saliency-guided position-edge attention (sg-PEA) module to encourage our network to focus more on saliency-related regions. The above modules as a whole, called cmMS block, facilitates the refinement of saliency features in a coarse-to-fine fashion. Coupled with a bottom-up inference, the refined saliency features enable accurate and edge-preserving SOD. 
Extensive experiments demonstrate that our network outperforms state-of-the-art saliency detectors on six popular RGB-D SOD benchmarks.", "field": [], "task": ["Feature Selection", "Object Detection", "RGB-D Salient Object Detection", "RGB Salient Object Detection", "Salient Object Detection"], "method": [], "dataset": ["NJU2K"], "metric": ["Average MAE", "S-Measure"], "title": "RGB-D Salient Object Detection with Cross-Modality Modulation and Selection"} {"abstract": "Conditional image generation is the task of generating diverse images using class label information. Although many conditional Generative Adversarial Networks (GAN) have shown realistic results, such methods consider pairwise relations between the embedding of an image and the embedding of the corresponding label (data-to-class relations) as the conditioning losses. In this paper, we propose ContraGAN that considers relations between multiple image embeddings in the same batch (data-to-data relations) as well as the data-to-class relations by using a conditional contrastive loss. The discriminator of ContraGAN discriminates the authenticity of given samples and minimizes a contrastive objective to learn the relations between training images. Simultaneously, the generator tries to generate realistic images that deceive the authenticity and have a low contrastive loss. The experimental results show that ContraGAN outperforms state-of-the-art-models by 7.3% and 7.7% on Tiny ImageNet and ImageNet datasets, respectively. Besides, we experimentally demonstrate that contrastive learning helps to relieve the overfitting of the discriminator. For a fair comparison, we re-implement twelve state-of-the-art GANs using the PyTorch library. The software package is available at https://github.com/POSTECH-CVLab/PyTorch-StudioGAN.", "field": [], "task": ["Conditional Image Generation", "Data Augmentation", "Image Generation"], "method": [], "dataset": ["CIFAR-10"], "metric": ["FID"], "title": "ContraGAN: Contrastive Learning for Conditional Image Generation"} {"abstract": "The use of drug combinations, termed polypharmacy, is common to treat\npatients with complex diseases and co-existing conditions. However, a major\nconsequence of polypharmacy is a much higher risk of adverse side effects for\nthe patient. Polypharmacy side effects emerge because of drug-drug\ninteractions, in which activity of one drug may change if taken with another\ndrug. The knowledge of drug interactions is limited because these complex\nrelationships are rare, and are usually not observed in relatively small\nclinical testing. Discovering polypharmacy side effects thus remains an\nimportant challenge with significant implications for patient mortality. Here,\nwe present Decagon, an approach for modeling polypharmacy side effects. The\napproach constructs a multimodal graph of protein-protein interactions,\ndrug-protein target interactions, and the polypharmacy side effects, which are\nrepresented as drug-drug interactions, where each side effect is an edge of a\ndifferent type. Decagon is developed specifically to handle such multimodal\ngraphs with a large number of edge types. Our approach develops a new graph\nconvolutional neural network for multirelational link prediction in multimodal\nnetworks. Decagon predicts the exact side effect, if any, through which a given\ndrug combination manifests clinically. Decagon accurately predicts polypharmacy\nside effects, outperforming baselines by up to 69%. 
We find that it\nautomatically learns representations of side effects indicative of\nco-occurrence of polypharmacy in patients. Furthermore, Decagon models\nparticularly well side effects with a strong molecular basis, while on\npredominantly non-molecular side effects, it achieves good performance because\nof effective sharing of model parameters across edge types. Decagon creates\nopportunities to use large pharmacogenomic and patient data to flag and\nprioritize side effects for follow-up analysis.", "field": [], "task": ["Link Prediction"], "method": [], "dataset": ["Decagon"], "metric": ["AUROC"], "title": "Modeling polypharmacy side effects with graph convolutional networks"} {"abstract": "We propose average Localisation-Recall-Precision (aLRP), a unified, bounded, balanced and ranking-based loss function for both classification and localisation tasks in object detection. aLRP extends the Localisation-Recall-Precision (LRP) performance metric (Oksuz et al., 2018) inspired from how Average Precision (AP) Loss extends precision to a ranking-based loss function for classification (Chen et al., 2020). aLRP has the following distinct advantages: (i) aLRP is the first ranking-based loss function for both classification and localisation tasks. (ii) Thanks to using ranking for both tasks, aLRP naturally enforces high-quality localisation for high-precision classification. (iii) aLRP provides provable balance between positives and negatives. (iv) Compared to on average $\\sim$6 hyperparameters in the loss functions of state-of-the-art detectors, aLRP Loss has only one hyperparameter, which we did not tune in practice. On the COCO dataset, aLRP Loss improves its ranking-based predecessor, AP Loss, up to around $5$ AP points, achieves $48.9$ AP without test time augmentation and outperforms all one-stage detectors. Code available at: https://github.com/kemaloksuz/aLRPLoss .", "field": [], "task": ["Object Detection"], "method": [], "dataset": ["COCO minival", "COCO test-dev"], "metric": ["APM", "oLRP", "box AP", "AP75", "APS", "APL", "AP50"], "title": "A Ranking-based, Balanced Loss Function Unifying Classification and Localisation in Object Detection"} {"abstract": "Most existing knowledge graphs suffer from incompleteness, which can be alleviated by inferring missing links based on known facts. One popular way to accomplish this is to generate low-dimensional embeddings of entities and relations, and use these to make inferences. ConvE, a recently proposed approach, applies convolutional filters on 2D reshapings of entity and relation embeddings in order to capture rich interactions between their components. However, the number of interactions that ConvE can capture is limited. In this paper, we analyze how increasing the number of these interactions affects link prediction performance, and utilize our observations to propose InteractE. InteractE is based on three key ideas -- feature permutation, a novel feature reshaping, and circular convolution. Through extensive experiments, we find that InteractE outperforms state-of-the-art convolutional link prediction baselines on FB15k-237. Further, InteractE achieves an MRR score that is 9%, 7.5%, and 23% better than ConvE on the FB15k-237, WN18RR and YAGO3-10 datasets respectively. The results validate our central hypothesis -- that increasing feature interaction is beneficial to link prediction performance. 
We make the source code of InteractE available to encourage reproducible research.", "field": [], "task": ["Knowledge Graph Embeddings", "Knowledge Graphs", "Link Prediction"], "method": [], "dataset": ["WN18RR", "YAGO3-10", "FB15k-237"], "metric": ["Hits@10", "MR", "MRR", "Hits@1"], "title": "InteractE: Improving Convolution-based Knowledge Graph Embeddings by Increasing Feature Interactions"} {"abstract": "This paper studies a training method to jointly estimate an energy-based model and a flow-based model, in which the two models are iteratively updated based on a shared adversarial value function. This joint training method has the following traits. (1) The update of the energy-based model is based on noise contrastive estimation, with the flow model serving as a strong noise distribution. (2) The update of the flow model approximately minimizes the Jensen-Shannon divergence between the flow model and the data distribution. (3) Unlike generative adversarial networks (GAN) which estimates an implicit probability distribution defined by a generator model, our method estimates two explicit probabilistic distributions on the data. Using the proposed method we demonstrate a significant improvement on the synthesis quality of the flow model, and show the effectiveness of unsupervised feature learning by the learned energy-based model. Furthermore, the proposed training method can be easily adapted to semi-supervised learning. We achieve competitive results to the state-of-the-art semi-supervised learning methods.", "field": [], "task": ["Image Generation"], "method": [], "dataset": ["CelebA-HQ 64x64"], "metric": ["FID"], "title": "Flow Contrastive Estimation of Energy-Based Models"} {"abstract": "Deep convolution-based single image super-resolution (SISR) networks embrace the benefits of learning from large-scale external image resources for local recovery, yet most existing works have ignored the long-range feature-wise similarities in natural images. Some recent works have successfully leveraged this intrinsic feature correlation by exploring non-local attention modules. However, none of the current deep models have studied another inherent property of images: cross-scale feature correlation. In this paper, we propose the first Cross-Scale Non-Local (CS-NL) attention module with integration into a recurrent neural network. By combining the new CS-NL prior with local and in-scale non-local priors in a powerful recurrent fusion cell, we can find more cross-scale feature correlations within a single low-resolution (LR) image. The performance of SISR is significantly improved by exhaustively integrating all possible priors. Extensive experiments demonstrate the effectiveness of the proposed CS-NL module by setting new state-of-the-arts on multiple SISR benchmarks.", "field": [], "task": ["Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["Set14 - 2x upscaling", "Set14 - 4x upscaling", "BSD100 - 2x upscaling", "Manga109 - 4x upscaling", "Urban100 - 2x upscaling", "BSD100 - 4x upscaling", "Manga109 - 2x upscaling", "Set5 - 4x upscaling", "Set5 - 2x upscaling", "Urban100 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Image Super-Resolution with Cross-Scale Non-Local Attention and Exhaustive Self-Exemplars Mining"} {"abstract": "In this work, we develop a shared multi-attention model for multi-label zero-shot learning. 
We argue that designing attention mechanism for recognizing multiple seen and unseen labels in an image is a non-trivial task as there is no training signal to localize unseen labels and an image only contains a few present labels that need attentions out of thousands of possible labels. Therefore, instead of generating attentions for unseen labels which have unknown behaviors and could focus on irrelevant regions due to the lack of any training sample, we let the unseen labels select among a set of shared attentions which are trained to be label-agnostic and to focus on only relevant/foreground regions through our novel loss. Finally, we learn a compatibility function to distinguish labels based on the selected attention. We further propose a novel loss function that consists of three components guiding the attention to focus on diverse and relevant image regions while utilizing all attention features. By extensive experiments, we show that our method improves the state of the art by 2.9% and 1.4% F1 score on the NUS-WIDE and the large scale Open Images datasets, respectively.\r", "field": [], "task": ["Multi-label zero-shot learning", "Zero-Shot Learning"], "method": [], "dataset": ["NUS-WIDE"], "metric": ["mAP"], "title": "A Shared Multi-Attention Framework for Multi-Label Zero-Shot Learning"} {"abstract": "Machine translation has made rapid advances in recent years. Millions of\npeople are using it today in online translation systems and mobile applications\nin order to communicate across language barriers. The question naturally arises\nwhether such systems can approach or achieve parity with human translations. In\nthis paper, we first address the problem of how to define and accurately\nmeasure human parity in translation. We then describe Microsoft's machine\ntranslation system and measure the quality of its translations on the widely\nused WMT 2017 news translation task from Chinese to English. We find that our\nlatest neural machine translation system has reached a new state-of-the-art,\nand that the translation quality is at human parity when compared to\nprofessional human translations. We also find that it significantly exceeds the\nquality of crowd-sourced non-professional translations.", "field": [], "task": ["Machine Translation"], "method": [], "dataset": ["WMT 2017 English-Chinese"], "metric": ["BLEU score"], "title": "Achieving Human Parity on Automatic Chinese to English News Translation"} {"abstract": "This paper reports our efforts toward an ASR system for a new under-resourced language (Fongbe). The aim of this work is to build acoustic models and language models for continuous speech decoding in Fongbe. The problem encountered with Fongbe (an African language spoken especially in Benin, Togo, and Nigeria) is that it does not have any language resources for an ASR system. As part of this work, we have first collected Fongbe text and speech corpora that are described in the following sections. Acoustic modeling has been worked out at a graphemic level and language modeling has provided two language models for performance comparison purposes. 
We also performed a vowel simplification by removing tones diacritics in order to investigate their impact on the language models.", "field": [], "task": ["Language Modelling", "Speech Recognition"], "method": [], "dataset": ["Fongbe audio"], "metric": ["Word Error Rate (WER)"], "title": "First Automatic Fongbe Continuous Speech Recognition System: Development of Acoustic Models and Language Models"} {"abstract": "We propose a novel method for imputing missing data by adapting the\nwell-known Generative Adversarial Nets (GAN) framework. Accordingly, we call\nour method Generative Adversarial Imputation Nets (GAIN). The generator (G)\nobserves some components of a real data vector, imputes the missing components\nconditioned on what is actually observed, and outputs a completed vector. The\ndiscriminator (D) then takes a completed vector and attempts to determine which\ncomponents were actually observed and which were imputed. To ensure that D\nforces G to learn the desired distribution, we provide D with some additional\ninformation in the form of a hint vector. The hint reveals to D partial\ninformation about the missingness of the original sample, which is used by D to\nfocus its attention on the imputation quality of particular components. This\nhint ensures that G does in fact learn to generate according to the true data\ndistribution. We tested our method on various datasets and found that GAIN\nsignificantly outperforms state-of-the-art imputation methods.", "field": [], "task": ["Imputation", "Multivariate Time Series Imputation"], "method": [], "dataset": ["KDD CUP Challenge 2018"], "metric": ["MSE (10% missing)"], "title": "GAIN: Missing Data Imputation using Generative Adversarial Nets"} {"abstract": "Official Torch7 implementation of \"V2V-PoseNet: Voxel-to-Voxel Prediction Network for Accurate 3D Hand and Human Pose Estimation from a Single Depth Map\", CVPR 2018", "field": [], "task": ["3D Hand Pose Estimation", "3D Pose Estimation", "Hand Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["HANDS 2017"], "metric": ["Average 3D Error"], "title": "Depth-Based 3D Hand Pose Estimation: From Current Achievements to Future Goals"} {"abstract": "Object detection, scene graph generation and region captioning, which are\nthree scene understanding tasks at different semantic levels, are tied\ntogether: scene graphs are generated on top of objects detected in an image\nwith their pairwise relationship predicted, while region captioning gives a\nlanguage description of the objects, their attributes, relations, and other\ncontext information. In this work, to leverage the mutual connections across\nsemantic levels, we propose a novel neural network model, termed as Multi-level\nScene Description Network (denoted as MSDN), to solve the three vision tasks\njointly in an end-to-end manner. Objects, phrases, and caption regions are\nfirst aligned with a dynamic graph based on their spatial and semantic\nconnections. Then a feature refining structure is used to pass messages across\nthe three levels of semantic tasks through the graph. 
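The GAIN record above conditions the generator on noise-filled missing entries and gives the discriminator a hint vector that reveals part of the mask. A minimal numpy sketch of those two constructions follows; the noise scale and hint rate are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_generator_input(x, m, noise_scale=0.01):
    """Observed entries are kept, missing entries are filled with noise;
    this (x_tilde, m) pair is what the generator conditions on."""
    z = noise_scale * rng.uniform(size=x.shape)
    return m * x + (1.0 - m) * z

def make_hint(m, hint_rate=0.9):
    """Hint for the discriminator: with probability hint_rate the true mask
    bit is revealed, otherwise it is set to the uninformative value 0.5."""
    b = (rng.uniform(size=m.shape) < hint_rate).astype(float)
    return b * m + 0.5 * (1.0 - b)

x = rng.normal(size=(4, 3))                            # toy data batch
m = (rng.uniform(size=x.shape) > 0.3).astype(float)    # 1 = observed, 0 = missing
x_tilde, hint = make_generator_input(x, m), make_hint(m)
print(x_tilde.round(2), hint, sep="\n")
```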
We benchmark the learned\nmodel on three tasks, and show the joint learning across three tasks with our\nproposed method can bring mutual improvements over previous models.\nParticularly, on the scene graph generation task, our proposed method\noutperforms the state-of-art method with more than 3% margin.", "field": [], "task": ["Graph Generation", "Object Detection", "Scene Graph Generation", "Scene Understanding"], "method": [], "dataset": ["Visual Genome"], "metric": ["Recall@50", "MAP"], "title": "Scene Graph Generation from Objects, Phrases and Region Captions"} {"abstract": "Convolutional Neural Networks (ConvNets) have achieved excellent recognition\nperformance in various visual recognition tasks. A large labeled training set\nis one of the most important factors for its success. However, it is difficult\nto collect sufficient training images with precise labels in some domains such\nas apparent age estimation, head pose estimation, multi-label classification\nand semantic segmentation. Fortunately, there is ambiguous information among\nlabels, which makes these tasks different from traditional classification.\nBased on this observation, we convert the label of each image into a discrete\nlabel distribution, and learn the label distribution by minimizing a\nKullback-Leibler divergence between the predicted and ground-truth label\ndistributions using deep ConvNets. The proposed DLDL (Deep Label Distribution\nLearning) method effectively utilizes the label ambiguity in both feature\nlearning and classifier learning, which help prevent the network from\nover-fitting even when the training set is small. Experimental results show\nthat the proposed approach produces significantly better results than\nstate-of-the-art methods for age estimation and head pose estimation. At the\nsame time, it also improves recognition performance for multi-label\nclassification and semantic segmentation tasks.", "field": [], "task": ["Age Estimation", "Head Pose Estimation", "Multi-Label Classification", "Pose Estimation", "Semantic Segmentation"], "method": [], "dataset": ["MORPH Album2", "ChaLearn 2015"], "metric": ["MAE"], "title": "Deep Label Distribution Learning with Label Ambiguity"} {"abstract": "We present PPF-FoldNet for unsupervised learning of 3D local descriptors on\npure point cloud geometry. Based on the folding-based auto-encoding of well\nknown point pair features, PPF-FoldNet offers many desirable properties: it\nnecessitates neither supervision, nor a sensitive local reference frame,\nbenefits from point-set sparsity, is end-to-end, fast, and can extract powerful\nrotation invariant descriptors. Thanks to a novel feature visualization, its\nevolution can be monitored to provide interpretable insights. Our extensive\nexperiments demonstrate that despite having six degree-of-freedom invariance\nand lack of training labels, our network achieves state of the art results in\nstandard benchmark datasets and outperforms its competitors when rotations and\nvarying point densities are present. 
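The DLDL record above converts a scalar label into a discrete label distribution and trains with a Kullback-Leibler divergence. A small sketch of that conversion and loss is shown below, assuming a Gaussian-shaped distribution over an age grid; the grid and sigma are illustrative.

```python
import numpy as np

def label_distribution(y, classes, sigma=2.0):
    """Turn a scalar label (e.g. an age) into a discrete Gaussian-shaped
    distribution over the class grid, as in label distribution learning."""
    p = np.exp(-0.5 * ((classes - y) / sigma) ** 2)
    return p / p.sum()

def kl_loss(p, logits):
    """KL(p || q) with q = softmax(logits); only the cross-entropy part
    depends on the model, which is what gets back-propagated."""
    q = np.exp(logits - logits.max())
    q /= q.sum()
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

ages = np.arange(0, 101)               # class grid: 0..100 years
p = label_distribution(31.0, ages)     # ground-truth distribution centred on age 31
logits = np.random.default_rng(0).normal(size=ages.shape)
print(kl_loss(p, logits))
```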
PPF-FoldNet achieves $9\\%$ higher recall\non standard benchmarks, $23\\%$ higher recall when rotations are introduced into\nthe same datasets and finally, a margin of $>35\\%$ is attained when point\ndensity is significantly decreased.", "field": [], "task": ["Point Cloud Registration"], "method": [], "dataset": ["3DMatch Benchmark"], "metric": ["Recall"], "title": "PPF-FoldNet: Unsupervised Learning of Rotation Invariant 3D Local Descriptors"} {"abstract": "Recent graph-to-text models generate text from graph-based data using either global or local aggregation to learn node representations. Global node encoding allows explicit communication between two distant nodes, thereby neglecting graph topology as all nodes are directly connected. In contrast, local node encoding considers the relations between neighbor nodes capturing the graph structure, but it can fail to capture long-range relations. In this work, we gather both encoding strategies, proposing novel neural models which encode an input graph combining both global and local node contexts, in order to learn better contextualized node embeddings. In our experiments, we demonstrate that our approaches lead to significant improvements on two graph-to-text datasets achieving BLEU scores of 18.01 on AGENDA dataset, and 63.69 on the WebNLG dataset for seen categories, outperforming state-of-the-art models by 3.7 and 3.1 points, respectively.", "field": [], "task": ["Data-to-Text Generation", "Graph-to-Sequence", "Knowledge Graphs", "Text Generation"], "method": [], "dataset": ["AGENDA", "WebNLG"], "metric": ["BLEU"], "title": "Modeling Global and Local Node Contexts for Text Generation from Knowledge Graphs"} {"abstract": "In this paper we illustrate how to perform both visual object tracking and semi-supervised video object segmentation, in real-time, with a single simple approach. Our method, dubbed SiamMask, improves the offline training procedure of popular fully-convolutional Siamese approaches for object tracking by augmenting their loss with a binary segmentation task. Once trained, SiamMask solely relies on a single bounding box initialisation and operates online, producing class-agnostic object segmentation masks and rotated bounding boxes at 55 frames per second. Despite its simplicity, versatility and fast speed, our strategy allows us to establish a new state of the art among real-time trackers on VOT-2018, while at the same time demonstrating competitive performance and the best speed for the semi-supervised video object segmentation task on DAVIS-2016 and DAVIS-2017. The project website is http://www.robots.ox.ac.uk/~qwang/SiamMask.", "field": [], "task": ["Object Tracking", "Real-Time Visual Tracking", "Semi-Supervised Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Visual Object Tracking", "Visual Tracking"], "method": [], "dataset": ["YouTube-VOS", "DAVIS 2017 (test-dev)", "DAVIS 2017 (val)", "VOT2017/18", "DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "Jaccard (Unseen)", "F-Measure (Seen)", "Jaccard (Seen)", "O (Average of Measures)", "F-measure (Recall)", "Jaccard (Decay)", "Expected Average Overlap (EAO)", "Jaccard (Recall)", "F-measure (Mean)", "J&F", "F-Measure (Unseen)"], "title": "Fast Online Object Tracking and Segmentation: A Unifying Approach"} {"abstract": "Memory-augmented neural networks consisting of a neural controller and an\nexternal memory have shown potentials in long-term sequential learning. 
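The PPF-FoldNet record above builds on classic point pair features, which are rotation invariant by construction. A short sketch of the standard 4D feature is given below; the example points and normals are arbitrary.

```python
import numpy as np

def angle(u, v):
    """Unsigned angle between two 3D vectors, robust to numerical round-off."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def point_pair_feature(p1, n1, p2, n2):
    """Classic 4D point pair feature: (||d||, angle(n1,d), angle(n2,d), angle(n1,n2))."""
    d = p2 - p1
    return np.array([np.linalg.norm(d), angle(n1, d), angle(n2, d), angle(n1, n2)])

p1, n1 = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
p2, n2 = np.array([0.1, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
print(point_pair_feature(p1, n1, p2, n2))
```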
Current\nRAM-like memory models maintain memory accessing every timesteps, thus they do\nnot effectively leverage the short-term memory held in the controller. We\nhypothesize that this scheme of writing is suboptimal in memory utilization and\nintroduces redundant computation. To validate our hypothesis, we derive a\ntheoretical bound on the amount of information stored in a RAM-like system and\nformulate an optimization problem that maximizes the bound. The proposed\nsolution dubbed Uniform Writing is proved to be optimal under the assumption of\nequal timestep contributions. To relax this assumption, we introduce\nmodifications to the original solution, resulting in a solution termed Cached\nUniform Writing. This method aims to balance between maximizing memorization\nand forgetting via overwriting mechanisms. Through an extensive set of\nexperiments, we empirically demonstrate the advantages of our solutions over\nother recurrent architectures, claiming the state-of-the-arts in various\nsequential modeling tasks.", "field": [], "task": ["Sentiment Analysis", "Sequential Image Classification", "Text Classification"], "method": [], "dataset": ["Yelp Fine-grained classification", "Yelp Binary classification", "Sequential MNIST", "Yahoo! Answers", "AG News"], "metric": ["Error", "Unpermuted Accuracy", "Permuted Accuracy", "Accuracy"], "title": "Learning to Remember More with Less Memorization"} {"abstract": "This paper proposes a new generative adversarial network for pose transfer, i.e., transferring the pose of a given person to a target pose. The generator of the network comprises a sequence of Pose-Attentional Transfer Blocks that each transfers certain regions it attends to, generating the person image progressively. Compared with those in previous works, our generated person images possess better appearance consistency and shape consistency with the input images, thus significantly more realistic-looking. The efficacy and efficiency of the proposed network are validated both qualitatively and quantitatively on Market-1501 and DeepFashion. Furthermore, the proposed architecture can generate training images for person re-identification, alleviating data insufficiency. Codes and models are available at: https://github.com/tengteng95/Pose-Transfer.git.", "field": [], "task": ["Image Generation", "Person Re-Identification", "Pose Transfer"], "method": [], "dataset": ["Market-1501", "Deep-Fashion"], "metric": ["Retrieval Top10 Recall", "DS", "SSIM", "mask-SSIM", "mask-IS", "PCKh", "IS"], "title": "Progressive Pose Attention Transfer for Person Image Generation"} {"abstract": "Combining simple elements from the literature, we define a linear model that is geared toward sparse data, in particular implicit feedback data for recommender systems. We show that its training objective has a closed-form solution, and discuss the resulting conceptual insights. Surprisingly, this simple model achieves better ranking accuracy than various state-of-the-art collaborative-filtering approaches, including deep non-linear models, on most of the publicly available data-sets used in our experiments.", "field": [], "task": ["Recommendation Systems"], "method": [], "dataset": ["Netflix", "MovieLens 20M", "Million Song Dataset"], "metric": ["Recall@50", "Recall@20", "nDCG@100"], "title": "Embarrassingly Shallow Autoencoders for Sparse Data"} {"abstract": "Text-based question answering (TBQA) has been studied extensively in recent years. 
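The shallow-autoencoder record above relies on a closed-form solution for the item-item weight matrix. A compact numpy sketch of that solution follows; the regularisation strength and the toy interaction matrix are illustrative.

```python
import numpy as np

def ease_fit(X, lam=100.0):
    """Closed-form item-item weights of the shallow autoencoder: minimize
    ||X - XB||_F^2 + lam * ||B||_F^2 subject to diag(B) = 0."""
    G = X.T @ X + lam * np.eye(X.shape[1])
    P = np.linalg.inv(G)
    B = -P / np.diag(P)           # column j divided by P_jj
    np.fill_diagonal(B, 0.0)
    return B

# toy implicit-feedback matrix: 4 users x 5 items (1 = interaction)
X = np.array([[1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 0, 1, 1]], dtype=float)
B = ease_fit(X, lam=10.0)
scores = X @ B                    # rank unseen items by these scores
print(scores.round(3))
```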
Most existing approaches focus on finding the answer to a question within a single paragraph. However, many difficult questions require multiple supporting evidence from scattered text among two or more documents. In this paper, we propose Dynamically Fused Graph Network(DFGN), a novel method to answer those questions requiring multiple scattered evidence and reasoning over them. Inspired by human's step-by-step reasoning behavior, DFGN includes a dynamic fusion layer that starts from the entities mentioned in the given query, explores along the entity graph dynamically built from the text, and gradually finds relevant supporting entities from the given documents. We evaluate DFGN on HotpotQA, a public TBQA dataset requiring multi-hop reasoning. DFGN achieves competitive results on the public board. Furthermore, our analysis shows DFGN produces interpretable reasoning chains.", "field": [], "task": ["Question Answering"], "method": [], "dataset": ["HotpotQA"], "metric": ["Joint F1"], "title": "Dynamically Fused Graph Network for Multi-hop Reasoning"} {"abstract": "The dominant approaches for named entity recognition (NER) mostly adopt complex recurrent neural networks (RNN), e.g., long-short-term-memory (LSTM). However, RNNs are limited by their recurrent nature in terms of computational efficiency. In contrast, convolutional neural networks (CNN) can fully exploit the GPU parallelism with their feedforward architectures. However, little attention has been paid to performing NER with CNNs, mainly owing to their difficulties in capturing the long-term context information in a sequence. In this paper, we propose a simple but effective CNN-based network for NER, i.e., gated relation network (GRN), which is more capable than common CNNs in capturing long-term context. Specifically, in GRN we firstly employ CNNs to explore the local context features of each word. Then we model the relations between words and use them as gates to fuse local context features into global ones for predicting labels. Without using recurrent layers that process a sentence in a sequential manner, our GRN allows computations to be performed in parallel across the entire sentence. Experiments on two benchmark NER datasets (i.e., CoNLL2003 and Ontonotes 5.0) show that, our proposed GRN can achieve state-of-the-art performance with or without external knowledge. It also enjoys lower time costs to train and test.We have made the code publicly available at https://github.com/HuiChen24/NER-GRN.", "field": [], "task": ["Named Entity Recognition"], "method": [], "dataset": ["Ontonotes v5 (English)", "CoNLL 2003 (English)"], "metric": ["F1"], "title": "GRN: Gated Relation Network to Enhance Convolutional Neural Network for Named Entity Recognition"} {"abstract": "The need for automatic surgical skills assessment is increasing, especially\nbecause manual feedback from senior surgeons observing junior surgeons is prone\nto subjectivity and time consuming. Thus, automating surgical skills evaluation\nis a very important step towards improving surgical practice. In this paper, we\ndesigned a Convolutional Neural Network (CNN) to evaluate surgeon skills by\nextracting patterns in the surgeon motions performed in robotic surgery. The\nproposed method is validated on the JIGSAWS dataset and achieved very\ncompetitive results with 100% accuracy on the suturing and needle passing\ntasks. While we leveraged from the CNNs efficiency, we also managed to mitigate\nits black-box effect using class activation map. 
This feature allows our method\nto automatically highlight which parts of the surgical task influenced the\nskill prediction and can be used to explain the classification and to provide\npersonalized feedback to the trainee.", "field": [], "task": ["Surgical Skills Evaluation"], "method": [], "dataset": ["JIGSAWS"], "metric": ["Accuracy"], "title": "Evaluating surgical skills from kinematic data using convolutional neural networks"} {"abstract": "Semantic Role Labeling (SRL) is believed to be a crucial step towards natural\nlanguage understanding and has been widely studied. Recent years, end-to-end\nSRL with recurrent neural networks (RNN) has gained increasing attention.\nHowever, it remains a major challenge for RNNs to handle structural information\nand long range dependencies. In this paper, we present a simple and effective\narchitecture for SRL which aims to address these problems. Our model is based\non self-attention which can directly capture the relationships between two\ntokens regardless of their distance. Our single model achieves F$_1=83.4$ on\nthe CoNLL-2005 shared task dataset and F$_1=82.7$ on the CoNLL-2012 shared task\ndataset, which outperforms the previous state-of-the-art results by $1.8$ and\n$1.0$ F$_1$ score respectively. Besides, our model is computationally\nefficient, and the parsing speed is 50K tokens per second on a single Titan X\nGPU.", "field": [], "task": ["Natural Language Understanding", "Semantic Role Labeling"], "method": [], "dataset": ["OntoNotes"], "metric": ["F1"], "title": "Deep Semantic Role Labeling with Self-Attention"} {"abstract": "Meta-learning has been widely used for implementing few-shot learning and fast model adaptation. One kind of meta-learning methods attempt to learn how to control the gradient descent process in order to make the gradient-based learning have high speed and generalization. This work proposes a method that controls the gradient descent process of the model parameters of a neural network by limiting the model parameters in a low-dimensional latent space. The main challenge of this idea is that a decoder with too many parameters is required. This work designs a decoder with typical structure and shares a part of weights in the decoder to reduce the number of the required parameters. Besides, this work has introduced ensemble learning to work with the proposed approach for improving performance. The results show that the proposed approach is witnessed by the superior performance over the Omniglot classification and the miniImageNet classification tasks.", "field": [], "task": ["Few-Shot Learning", "Meta-Learning", "Omniglot"], "method": [], "dataset": ["OMNIGLOT - 1-Shot, 5-way", "OMNIGLOT - 5-Shot, 20-way", "OMNIGLOT - 5-Shot, 5-way", "OMNIGLOT - 1-Shot, 20-way"], "metric": ["Accuracy"], "title": "Decoder Choice Network for Meta-Learning"} {"abstract": "While metric learning is important for Person re-identification (RE-ID), a\nsignificant problem in visual surveillance for cross-view pedestrian matching,\nexisting metric models for RE-ID are mostly based on supervised learning that\nrequires quantities of labeled samples in all pairs of camera views for\ntraining. However, this limits their scalabilities to realistic applications,\nin which a large amount of data over multiple disjoint camera views is\navailable but not labelled. To overcome the problem, we propose unsupervised\nasymmetric metric learning for unsupervised RE-ID. 
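The surgical-skills record above mitigates the black-box effect with class activation maps. The sketch below shows the generic CAM computation adapted to 1D kinematic feature maps; the channel and time-step sizes and the three skill classes are assumptions.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Weight the final convolutional feature maps by one class's classifier
    weights; the result highlights which positions drove that class score
    (upsample to the input length for visualisation)."""
    # feature_maps: (C, L) activations of the last conv layer
    # fc_weights:  (num_classes, C) weights of the final linear classifier
    cam = fc_weights[class_idx] @ feature_maps   # (L,)
    cam -= cam.min()
    return cam / (cam.max() + 1e-12)             # normalise to [0, 1]

rng = np.random.default_rng(0)
feats = rng.normal(size=(64, 200))   # 64 channels over 200 kinematic time steps
w = rng.normal(size=(3, 64))         # 3 skill levels (novice / intermediate / expert)
print(class_activation_map(feats, w, class_idx=2)[:10].round(2))
```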
Our model aims to learn an\nasymmetric metric, i.e., specific projection for each view, based on asymmetric\nclustering on cross-view person images. Our model finds a shared space where\nview-specific bias is alleviated and thus better matching performance can be\nachieved. Extensive experiments have been conducted on a baseline and five\nlarge-scale RE-ID datasets to demonstrate the effectiveness of the proposed\nmodel. Through the comparison, we show that our model works much more suitable\nfor unsupervised RE-ID compared to classical unsupervised metric learning\nmodels. We also compare with existing unsupervised RE-ID methods, and our model\noutperforms them with notable margins. Specifically, we report the results on\nlarge-scale unlabelled RE-ID dataset, which is important but unfortunately less\nconcerned in literatures.", "field": [], "task": ["Metric Learning", "Person Re-Identification", "Unsupervised Person Re-Identification"], "method": [], "dataset": ["Market-1501"], "metric": ["Rank-1", "MAP"], "title": "Cross-view Asymmetric Metric Learning for Unsupervised Person Re-identification"} {"abstract": "Modern multiple object tracking (MOT) systems usually follow the \\emph{tracking-by-detection} paradigm. It has 1) a detection model for target localization and 2) an appearance embedding model for data association. Having the two models separately executed might lead to efficiency problems, as the running time is simply a sum of the two steps without investigating potential structures that can be shared between them. Existing research efforts on real-time MOT usually focus on the association step, so they are essentially real-time association methods but not real-time MOT system. In this paper, we propose an MOT system that allows target detection and appearance embedding to be learned in a shared model. Specifically, we incorporate the appearance embedding model into a single-shot detector, such that the model can simultaneously output detections and the corresponding embeddings. We further propose a simple and fast association method that works in conjunction with the joint model. In both components the computation cost is significantly reduced compared with former MOT systems, resulting in a neat and fast baseline for future follow-ups on real-time MOT algorithm design. To our knowledge, this work reports the first (near) real-time MOT system, with a running speed of 22 to 40 FPS depending on the input resolution. Meanwhile, its tracking accuracy is comparable to the state-of-the-art trackers embodying separate detection and embedding (SDE) learning ($64.4\\%$ MOTA \\vs $66.1\\%$ MOTA on MOT-16 challenge). Code and models are available at \\url{https://github.com/Zhongdao/Towards-Realtime-MOT}.", "field": [], "task": ["Multi-Object Tracking", "Multiple Object Tracking", "Multi-Task Learning", "Object Tracking", "Real-Time Multi-Object Tracking", "Regression"], "method": [], "dataset": ["MOT16"], "metric": ["MOTA"], "title": "Towards Real-Time Multi-Object Tracking"} {"abstract": "Detecting actions in untrimmed videos is an important yet challenging task.\nIn this paper, we present the structured segment network (SSN), a novel\nframework which models the temporal structure of each action instance via a\nstructured temporal pyramid. On top of the pyramid, we further introduce a\ndecomposed discriminative model comprising two classifiers, respectively for\nclassifying actions and determining completeness. 
This allows the framework to\neffectively distinguish positive proposals from background or incomplete ones,\nthus leading to both accurate recognition and localization. These components\nare integrated into a unified network that can be efficiently trained in an\nend-to-end fashion. Additionally, a simple yet effective temporal action\nproposal scheme, dubbed temporal actionness grouping (TAG) is devised to\ngenerate high quality action proposals. On two challenging benchmarks, THUMOS14\nand ActivityNet, our method remarkably outperforms previous state-of-the-art\nmethods, demonstrating superior accuracy and strong adaptivity in handling\nactions with various temporal structures.", "field": [], "task": ["Action Detection", "Action Recognition"], "method": [], "dataset": ["THUMOS\u201914"], "metric": ["mAP@0.3", "mAP@0.4", "mAP@0.1", "mAP@0.5", "mAP@0.2"], "title": "Temporal Action Detection with Structured Segment Networks"} {"abstract": "We present a novel neural network model that learns POS tagging and\ngraph-based dependency parsing jointly. Our model uses bidirectional LSTMs to\nlearn feature representations shared for both POS tagging and dependency\nparsing tasks, thus handling the feature-engineering problem. Our extensive\nexperiments, on 19 languages from the Universal Dependencies project, show that\nour model outperforms the state-of-the-art neural network-based\nStack-propagation model for joint POS tagging and transition-based dependency\nparsing, resulting in a new state of the art. Our code is open-source and\navailable together with pre-trained models at:\nhttps://github.com/datquocnguyen/jPTDP", "field": [], "task": ["Dependency Parsing", "Feature Engineering", "Part-Of-Speech Tagging", "Transition-Based Dependency Parsing"], "method": [], "dataset": ["UD"], "metric": ["Avg accuracy"], "title": "A Novel Neural Network Model for Joint POS Tagging and Graph-based Dependency Parsing"} {"abstract": "To make the best use of the underlying structure of faces, the collective information through face datasets and the intermediate estimates during the upsampling process, here we introduce a fully convolutional multi-stage neural network for 4$\\times$ super-resolution for face images. We implicitly impose facial component-wise attention maps using a segmentation network to allow our network to focus on face-inherent patterns. Each stage of our network is composed of a stem layer, a residual backbone, and spatial upsampling layers. We recurrently apply stages to reconstruct an intermediate image, and then reuse its space-to-depth converted versions to bootstrap and enhance image quality progressively. Our experiments show that our face super-resolution method achieves quantitatively superior and perceptually pleasing results in comparison to state of the art.", "field": [], "task": ["Super-Resolution"], "method": [], "dataset": ["FFHQ 256 x 256 - 4x upscaling", "FFHQ 1024 x 1024 - 4x upscaling"], "metric": ["SSIM", "PSNR", "FID", "MS-SSIM"], "title": "Component Attention Guided Face Super-Resolution Network: CAGFace"} {"abstract": "Generating scene graph to describe all the relations inside an image gains\nincreasing interests these years. However, most of the previous methods use\ncomplicated structures with slow inference speed or rely on the external data,\nwhich limits the usage of the model in real-life scenarios. To improve the\nefficiency of scene graph generation, we propose a subgraph-based connection\ngraph to concisely represent the scene graph during the inference. 
A bottom-up\nclustering method is first used to factorize the entire scene graph into\nsubgraphs, where each subgraph contains several objects and a subset of their\nrelationships. By replacing the numerous relationship representations of the\nscene graph with fewer subgraph and object features, the computation in the\nintermediate stage is significantly reduced. In addition, spatial information\nis maintained by the subgraph features, which is leveraged by our proposed\nSpatial-weighted Message Passing~(SMP) structure and Spatial-sensitive Relation\nInference~(SRI) module to facilitate the relationship recognition. On the\nrecent Visual Relationship Detection and Visual Genome datasets, our method\noutperforms the state-of-the-art method in both accuracy and speed.", "field": [], "task": ["Graph Generation", "Scene Graph Generation", "Visual Relationship Detection"], "method": [], "dataset": ["VRD"], "metric": ["Recall@50"], "title": "Factorizable Net: An Efficient Subgraph-based Framework for Scene Graph Generation"} {"abstract": "Neural network-based methods for abstractive summarization produce outputs\nthat are more fluent than other techniques, but which can be poor at content\nselection. This work proposes a simple technique for addressing this issue: use\na data-efficient content selector to over-determine phrases in a source\ndocument that should be part of the summary. We use this selector as a\nbottom-up attention step to constrain the model to likely phrases. We show that\nthis approach improves the ability to compress text, while still generating\nfluent summaries. This two-step process is both simpler and higher performing\nthan other end-to-end content selection models, leading to significant\nimprovements on ROUGE for both the CNN-DM and NYT corpus. Furthermore, the\ncontent selector can be trained with as little as 1,000 sentences, making it\neasy to transfer a trained summarizer to a new domain.", "field": [], "task": ["Abstractive Text Summarization", "Document Summarization"], "method": [], "dataset": ["CNN / Daily Mail"], "metric": ["ROUGE-L", "PPL", "ROUGE-1", "ROUGE-2"], "title": "Bottom-Up Abstractive Summarization"} {"abstract": "We propose a neural network model for joint extraction of named entities and\nrelations between them, without any hand-crafted features. The key contribution\nof our model is to extend a BiLSTM-CRF-based entity recognition model with a\ndeep biaffine attention layer to model second-order interactions between latent\nfeatures for relation classification, specifically attending to the role of an\nentity in a directional relationship. On the benchmark \"relation and entity\nrecognition\" dataset CoNLL04, experimental results show that our model\noutperforms previous models, producing new state-of-the-art performances.", "field": [], "task": ["Relation Classification", "Relation Extraction"], "method": [], "dataset": ["CoNLL04"], "metric": ["NER Macro F1", "RE+ Macro F1 "], "title": "End-to-end neural relation extraction using deep biaffine attention"} {"abstract": "The encoder-decoder based methods for semi-supervised video object segmentation (Semi-VOS) have received extensive attention due to their superior performances. However, most of them have complex intermediate networks which generate strong specifiers to be robust against challenging scenarios, and this is quite inefficient when dealing with relatively simple scenarios. 
To solve this problem, we propose a real-time network, Clue Refining Network for Video Object Segmentation (CRVOS), that does not have any intermediate network to efficiently deal with these scenarios. In this work, we propose a simple specifier, referred to as the Clue, which consists of the previous frame's coarse mask and coordinates information. We also propose a novel refine module which shows the better performance compared with the general ones by using a deconvolution layer instead of a bilinear upsampling layer. Our proposed method shows the fastest speed among the existing methods with a competitive accuracy. On DAVIS 2016 validation set, our method achieves 63.5 fps and J&F score of 81.6%.", "field": [], "task": ["Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation", "Visual Object Tracking"], "method": [], "dataset": ["DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "CRVOS: Clue Refining Network for Video Object Segmentation"} {"abstract": "Image registration is a key technique in medical image analysis to estimate\ndeformations between image pairs. A good deformation model is important for\nhigh-quality estimates. However, most existing approaches use ad-hoc\ndeformation models chosen for mathematical convenience rather than to capture\nobserved data variation. Recent deep learning approaches learn deformation\nmodels directly from data. However, they provide limited control over the\nspatial regularity of transformations. Instead of learning the entire\nregistration approach, we learn a spatially-adaptive regularizer within a\nregistration model. This allows controlling the desired level of regularity and\npreserving structural properties of a registration model. For example,\ndiffeomorphic transformations can be attained. Our approach is a radical\ndeparture from existing deep learning approaches to image registration by\nembedding a deep learning model in an optimization-based registration algorithm\nto parameterize and data-adapt the registration model itself.", "field": [], "task": ["Deformable Medical Image Registration", "Diffeomorphic Medical Image Registration", "Image Registration", "Metric Learning"], "method": [], "dataset": ["CUMC12"], "metric": ["Mean target overlap ratio"], "title": "Metric Learning for Image Registration"} {"abstract": "We investigate the problem of efficiently incorporating high-order features into neural graph-based dependency parsing. Instead of explicitly extracting high-order features from intermediate parse trees, we develop a more powerful dependency tree node representation which captures high-order information concisely and efficiently. We use graph neural networks (GNNs) to learn the representations and discuss several new configurations of GNN{'}s updating and aggregation functions. Experiments on PTB show that our parser achieves the best UAS and LAS on PTB (96.0{\\%}, 94.3{\\%}) among systems without using any external resources.", "field": [], "task": ["Dependency Parsing"], "method": [], "dataset": ["Penn Treebank"], "metric": ["UAS", "POS", "LAS"], "title": "Graph-based Dependency Parsing with Graph Neural Networks"} {"abstract": "This paper focuses on the unsupervised domain adaptation of transferring the knowledge from the source domain to the target domain in the context of semantic segmentation. 
Existing approaches usually regard the pseudo label as the ground truth to fully exploit the unlabeled target-domain data. Yet the pseudo labels of the target-domain data are usually predicted by the model trained on the source domain. Thus, the generated labels inevitably contain the incorrect prediction due to the discrepancy between the training domain and the test domain, which could be transferred to the final adapted model and largely compromises the training process. To overcome the problem, this paper proposes to explicitly estimate the prediction uncertainty during training to rectify the pseudo label learning for unsupervised semantic segmentation adaptation. Given the input image, the model outputs the semantic segmentation prediction as well as the uncertainty of the prediction. Specifically, we model the uncertainty via the prediction variance and involve the uncertainty into the optimization objective. To verify the effectiveness of the proposed method, we evaluate the proposed method on two prevalent synthetic-to-real semantic segmentation benchmarks, i.e., GTA5 -> Cityscapes and SYNTHIA -> Cityscapes, as well as one cross-city benchmark, i.e., Cityscapes -> Oxford RobotCar. We demonstrate through extensive experiments that the proposed approach (1) dynamically sets different confidence thresholds according to the prediction variance, (2) rectifies the learning from noisy pseudo labels, and (3) achieves significant improvements over the conventional pseudo label learning and yields competitive performance on all three benchmarks.", "field": [], "task": ["Domain Adaptation", "Semantic Segmentation", "Unsupervised Domain Adaptation", "Unsupervised Semantic Segmentation"], "method": [], "dataset": ["GTAV-to-Cityscapes Labels"], "metric": ["mIoU"], "title": "Rectifying Pseudo Label Learning via Uncertainty Estimation for Domain Adaptive Semantic Segmentation"} {"abstract": "Multilingual sequence labeling is a task of predicting label sequences using a single unified model for multiple languages. Compared with relying on multiple monolingual models, using a multilingual model has the benefit of a smaller model size, easier in online serving, and generalizability to low-resource languages. However, current multilingual models still underperform individual monolingual models significantly due to model capacity limitations. In this paper, we propose to reduce the gap between monolingual models and the unified multilingual model by distilling the structural knowledge of several monolingual models (teachers) to the unified multilingual model (student). We propose two novel KD methods based on structure-level information: (1) approximately minimizes the distance between the student's and the teachers' structure level probability distributions, (2) aggregates the structure-level knowledge to local distributions and minimizes the distance between two local probability distributions. 
Our experiments on 4 multilingual tasks with 25 datasets show that our approaches outperform several strong baselines and have stronger zero-shot generalizability than both the baseline model and teacher models.", "field": [], "task": ["Aspect Extraction", "Knowledge Distillation"], "method": [], "dataset": ["SemEval-2016 Task 5 Subtask 1 (Turkish)", "SemEval-2016 Task 5 Subtask 1 (Spanish)", "SemEval-2016 Task 5 Subtask 1 (Dutch)", "SemEval-2016 Task 5 Subtask 1 (Russian)", "SemEval-2016 Task 5 Subtask 1"], "metric": ["F1"], "title": "Structure-Level Knowledge Distillation For Multilingual Sequence Labeling"} {"abstract": "Anchor-based Siamese trackers have achieved remarkable advancements in accuracy, yet the further improvement is restricted by the lagged tracking robustness. We find the underlying reason is that the regression network in anchor-based methods is only trained on the positive anchor boxes (i.e., $IoU \\geq0.6$). This mechanism makes it difficult to refine the anchors whose overlap with the target objects are small. In this paper, we propose a novel object-aware anchor-free network to address this issue. First, instead of refining the reference anchor boxes, we directly predict the position and scale of target objects in an anchor-free fashion. Since each pixel in groundtruth boxes is well trained, the tracker is capable of rectifying inexact predictions of target objects during inference. Second, we introduce a feature alignment module to learn an object-aware feature from predicted bounding boxes. The object-aware feature can further contribute to the classification of target objects and background. Moreover, we present a novel tracking framework based on the anchor-free model. The experiments show that our anchor-free tracker achieves state-of-the-art performance on five benchmarks, including VOT-2018, VOT-2019, OTB-100, GOT-10k and LaSOT. The source code is available at https://github.com/researchmm/TracKit.", "field": [], "task": ["Regression", "Visual Object Tracking"], "method": [], "dataset": ["GOT-10k", "VOT2018", "VOT2019"], "metric": ["Success Rate 0.5", "Average Overlap", "Expected Average Overlap (EAO)"], "title": "Ocean: Object-aware Anchor-free Tracking"} {"abstract": "The understanding of the mechanisms behind focus of attention in a visual scene is a problem of great interest in visual perception and computer vision. In this paper, we describe a model of scanpath as a dynamic process which can be interpreted as a variational law somehow related to mechanics, where the focus of attention is subject to a gravitational field. The distributed virtual mass that drives eye movements is associated with the presence of details and motion in the video. Unlike most current models, the proposed approach does not estimate directly the saliency map, but the prediction of eye movements allows us to integrate over time the positions of interest. The process of inhibition-of-return is also supported in the same dynamic model with the purpose of simulating fixations and saccades. The differential equations of motion of the proposed model are numerically integrated to simulate scanpaths on both images and videos. Experimental results for the tasks of saliency and scanpath prediction on a wide collection of datasets are presented to support the theory. 
Top level performances are achieved especially in the prediction of scanpaths, which is the primary purpose of the proposed model.", "field": [], "task": ["Saliency Detection", "Saliency Prediction", "Scanpath prediction"], "method": [], "dataset": ["Coutrot Dataset 1", "FixaTons"], "metric": ["String-edit distance", "Scaled time-delay embeddings"], "title": "Gravitational Laws of Focus of Attention"} {"abstract": "A car driver knows how to react on the gestures of the traffic officers. Clearly, this is not the case for the autonomous vehicle, unless it has road traffic control gesture recognition functionalities. In this work, we address the limitation of the existing autonomous driving datasets to provide learning data for traffic control gesture recognition. We introduce a dataset that is based on 3D body skeleton input to perform traffic control gesture classification on every time step. Our dataset consists of 250 sequences from several actors, ranging from 16 to 90 seconds per sequence. To evaluate our dataset, we propose eight sequential processing models based on deep neural networks such as recurrent networks, attention mechanism, temporal convolutional networks and graph convolutional networks. We present an extensive evaluation and analysis of all approaches for our dataset, as well as real-world quantitative evaluation. The code and dataset is publicly available.", "field": [], "task": ["Autonomous Driving", "Autonomous Vehicles", "Gesture Recognition", "Skeleton Based Action Recognition"], "method": [], "dataset": ["TCG-dataset"], "metric": ["Acc"], "title": "Traffic Control Gesture Recognition for Autonomous Vehicles"} {"abstract": "We present a method for training a regression network from image pixels to 3D\nmorphable model coordinates using only unlabeled photographs. The training loss\nis based on features from a facial recognition network, computed on-the-fly by\nrendering the predicted faces with a differentiable renderer. To make training\nfrom features feasible and avoid network fooling effects, we introduce three\nobjectives: a batch distribution loss that encourages the output distribution\nto match the distribution of the morphable model, a loopback loss that ensures\nthe network can correctly reinterpret its own output, and a multi-view identity\nloss that compares the features of the predicted 3D face and the input\nphotograph from multiple viewing angles. We train a regression network using\nthese objectives, a set of unlabeled photographs, and the morphable model\nitself, and demonstrate state-of-the-art results.", "field": [], "task": ["3D Face Reconstruction", "Regression"], "method": [], "dataset": ["Florence"], "metric": ["Average 3D Error"], "title": "Unsupervised Training for 3D Morphable Model Regression"} {"abstract": "Acquiring spatio-temporal states of an action is the most crucial step for\naction classification. In this paper, we propose a data level fusion strategy,\nMotion Fused Frames (MFFs), designed to fuse motion information into static\nimages as better representatives of spatio-temporal states of an action. MFFs\ncan be used as input to any deep learning architecture with very little\nmodification on the network. We evaluate MFFs on hand gesture recognition tasks\nusing three video datasets - Jester, ChaLearn LAP IsoGD and NVIDIA Dynamic Hand\nGesture Datasets - which require capturing long-term temporal relations of hand\nmovements. 
Our approach obtains very competitive performance on Jester and\nChaLearn benchmarks with the classification accuracies of 96.28% and 57.4%,\nrespectively, while achieving state-of-the-art performance with 84.7% accuracy\non NVIDIA benchmark.", "field": [], "task": ["Action Classification", "Action Classification ", "Gesture Recognition", "Hand Gesture Recognition", "Hand-Gesture Recognition"], "method": [], "dataset": ["ChaLearn val", "NVGesture", "ChaLean test", "Jester test", "Jester val"], "metric": ["Top 5 Accuracy", "Top 1 Accuracy", "Accuracy"], "title": "Motion Fused Frames: Data Level Fusion Strategy for Hand Gesture Recognition"} {"abstract": "We present diffusion-convolutional neural networks (DCNNs), a new model for\ngraph-structured data. Through the introduction of a diffusion-convolution\noperation, we show how diffusion-based representations can be learned from\ngraph-structured data and used as an effective basis for node classification.\nDCNNs have several attractive qualities, including a latent representation for\ngraphical data that is invariant under isomorphism, as well as polynomial-time\nprediction and learning that can be represented as tensor operations and\nefficiently implemented on the GPU. Through several experiments with real\nstructured datasets, we demonstrate that DCNNs are able to outperform\nprobabilistic relational models and kernel-on-graph methods at relational node\nclassification tasks.", "field": [], "task": ["Node Classification"], "method": [], "dataset": ["PubMed (0.1%)", "PubMed (0.03%)", "Cora (1%)", "PubMed (0.05%)", "Cora (3%)", "CiteSeer (1%)", "Cora (0.5%)", "Cora with Public Split: fixed 20 nodes per class", "CiteSeer (0.5%)", "CiteSeer with Public Split: fixed 20 nodes per class", "PubMed with Public Split: fixed 20 nodes per class"], "metric": ["Accuracy"], "title": "Diffusion-Convolutional Neural Networks"} {"abstract": "Deep generative models parameterized by neural networks have recently\nachieved state-of-the-art performance in unsupervised and semi-supervised\nlearning. We extend deep generative models with auxiliary variables which\nimproves the variational approximation. The auxiliary variables leave the\ngenerative model unchanged but make the variational distribution more\nexpressive. Inspired by the structure of the auxiliary variable we also propose\na model with two stochastic layers and skip connections. Our findings suggest\nthat more expressive and properly specified deep generative models converge\nfaster with better results. We show state-of-the-art performance within\nsemi-supervised learning on MNIST, SVHN and NORB datasets.", "field": [], "task": [], "method": [], "dataset": ["SVHN"], "metric": ["Percentage error"], "title": "Auxiliary Deep Generative Models"} {"abstract": "Weakly Supervised Object Detection (WSOD) has emerged as an effective tool to train object detectors using only the image-level category labels. However, without object-level labels, WSOD detectors are prone to detect bounding boxes on salient objects, clustered objects and discriminative object parts. Moreover, the image-level category labels do not enforce consistent object detection across different transformations of the same images. To address the above issues, we propose a Comprehensive Attention Self-Distillation (CASD) training approach for WSOD. To balance feature learning among all object instances, CASD computes the comprehensive attention aggregated from multiple transformations and feature layers of the same images. 
To enforce consistent spatial supervision on objects, CASD conducts self-distillation on the WSOD networks, such that the comprehensive attention is approximated simultaneously by multiple transformations and feature layers of the same images. CASD produces new state-of-the-art WSOD results on standard benchmarks such as PASCAL VOC 2007/2012 and MS-COCO.", "field": [], "task": ["Object Detection", "Weakly Supervised Object Detection"], "method": [], "dataset": ["PASCAL VOC 2007", "PASCAL VOC 2012 test", "MSCOCO"], "metric": ["MAP", "mAP", "mAP@50"], "title": "Comprehensive Attention Self-Distillation for Weakly-Supervised Object Detection"} {"abstract": "Knowledge graph (KG) embeddings learn low-dimensional representations of entities and relations to predict missing facts. KGs often exhibit hierarchical and logical patterns which must be preserved in the embedding space. For hierarchical data, hyperbolic embedding methods have shown promise for high-fidelity and parsimonious representations. However, existing hyperbolic embedding methods do not account for the rich logical patterns in KGs. In this work, we introduce a class of hyperbolic KG embedding models that simultaneously capture hierarchical and logical patterns. Our approach combines hyperbolic reflections and rotations with attention to model complex relational patterns. Experimental results on standard KG benchmarks show that our method improves over previous Euclidean- and hyperbolic-based efforts by up to 6.1% in mean reciprocal rank (MRR) in low dimensions. Furthermore, we observe that different geometric transformations capture different types of relations while attention-based transformations generalize to multiple relations. In high dimensions, our approach yields new state-of-the-art MRRs of 49.6% on WN18RR and 57.7% on YAGO3-10.", "field": [], "task": ["Knowledge Graph Embeddings"], "method": [], "dataset": ["WN18RR", "YAGO3-10", "FB15k-237"], "metric": ["Hits@10", "MRR", "Hits@3", "Hits@1"], "title": "Low-Dimensional Hyperbolic Knowledge Graph Embeddings"} {"abstract": "Although character-based models using lexicon have achieved promising results for Chinese named entity recognition (NER) task, some lexical words would introduce erroneous information due to wrongly matched words. Existing researches proposed many strategies to integrate lexicon knowledge. However, they performed with simple first-order lexicon knowledge, which provided insufficient word information and still faced the challenge of matched word boundary conflicts; or explored the lexicon knowledge with graph where higher-order information introducing negative words may disturb the identification. To alleviate the above limitations, we present new insight into second-order lexicon knowledge (SLK) of each character in the sentence to provide more lexical word information including semantic and word boundary features. Based on these, we propose a SLK-based model with a novel strategy to integrate the above lexicon knowledge. The proposed model can exploit more discernible lexical words information with the help of global context. Experimental results on three public datasets demonstrate the validity of SLK. 
The proposed model achieves more excellent performance than the state-of-the-art comparison methods.", "field": [], "task": ["Chinese Named Entity Recognition", "Named Entity Recognition"], "method": [], "dataset": ["Resume NER", "OntoNotes 4", "Weibo NER"], "metric": ["F1"], "title": "SLK-NER: Exploiting Second-order Lexicon Knowledge for Chinese NER"} {"abstract": "Automatic pulmonary nodules classification is significant for early diagnosis of lung cancers. Recently, deep learning techniques have enabled remarkable progress in this field. However, these deep models are typically of high computational complexity and work in a black-box manner. To combat these challenges, in this work, we aim to build an efficient and (partially) explainable classification model. Specially, we use \\emph{neural architecture search} (NAS) to automatically search 3D network architectures with excellent accuracy/speed trade-off. Besides, we use the convolutional block attention module (CBAM) in the networks, which helps us understand the reasoning process. During training, we use A-Softmax loss to learn angularly discriminative representations. In the inference stage, we employ an ensemble of diverse neural networks to improve the prediction accuracy and robustness. We conduct extensive experiments on the LIDC-IDRI database. Compared with previous state-of-the-art, our model shows highly comparable performance by using less than 1/40 parameters. Besides, empirical study shows that the reasoning process of learned networks is in conformity with physicians' diagnosis. Related code and results have been released at: https://github.com/fei-hdu/NAS-Lung.", "field": [], "task": ["Lung Nodule Classification", "Neural Architecture Search", "Pulmonary Nodules Classification"], "method": [], "dataset": ["LIDC-IDRI"], "metric": ["Accuracy", "F1 score", "Specificity (VEB+)"], "title": "Learning Efficient, Explainable and Discriminative Representations for Pulmonary Nodules Classification"} {"abstract": "Accurately ranking images and multimedia objects are of paramount relevance in many retrieval and learning tasks. Manifold learning methods have been investigated for ranking mainly due to their capacity of taking into account the intrinsic global manifold structure. In this paper, a novel manifold ranking algorithm is proposed based on the hypergraphs for unsupervised multimedia retrieval tasks. Different from traditional graph-based approaches, which represent only pairwise relationships, hypergraphs are capable of modeling similarity relationships among a set of objects. The proposed approach uses the hyperedges for constructing a contextual representation of data samples and exploits the encoded information for deriving a more effective similarity function. An extensive experimental evaluation was conducted on nine public datasets including diverse retrieval scenarios and multimedia content. Experimental results demonstrate that high effectiveness gains can be obtained in comparison with the state-of-the-art methods.", "field": [], "task": ["Content-Based Image Retrieval", "Video Retrieval"], "method": [], "dataset": ["INRIA Holidays Dataset"], "metric": ["MAP"], "title": "Multimedia Retrieval Through Unsupervised Hypergraph-Based Manifold Ranking"} {"abstract": "Reading comprehension (RC)---in contrast to information retrieval---requires\nintegrating information and reasoning about events, entities, and their\nrelations across a full document. 
Question answering is conventionally used to\nassess RC ability, in both artificial agents and children learning to read.\nHowever, existing RC datasets and tasks are dominated by questions that can be\nsolved by selecting answers using superficial information (e.g., local context\nsimilarity or global term frequency); they thus fail to test for the essential\nintegrative aspect of RC. To encourage progress on deeper comprehension of\nlanguage, we present a new dataset and set of tasks in which the reader must\nanswer questions about stories by reading entire books or movie scripts. These\ntasks are designed so that successfully answering their questions requires\nunderstanding the underlying narrative rather than relying on shallow pattern\nmatching or salience. We show that although humans solve the tasks easily,\nstandard RC models struggle on the tasks presented here. We provide an analysis\nof the dataset and the challenges it presents.", "field": [], "task": ["Information Retrieval", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["NarrativeQA"], "metric": ["BLEU-4", "BLEU-1"], "title": "The NarrativeQA Reading Comprehension Challenge"} {"abstract": "Deep Learning has led to a dramatic leap in Super-Resolution (SR) performance\nin the past few years. However, being supervised, these SR methods are\nrestricted to specific training data, where the acquisition of the\nlow-resolution (LR) images from their high-resolution (HR) counterparts is\npredetermined (e.g., bicubic downscaling), without any distracting artifacts\n(e.g., sensor noise, image compression, non-ideal PSF, etc). Real LR images,\nhowever, rarely obey these restrictions, resulting in poor SR results by SotA\n(State of the Art) methods. In this paper we introduce \"Zero-Shot\" SR, which\nexploits the power of Deep Learning, but does not rely on prior training. We\nexploit the internal recurrence of information inside a single image, and train\na small image-specific CNN at test time, on examples extracted solely from the\ninput image itself. As such, it can adapt itself to different settings per\nimage. This allows to perform SR of real old photos, noisy images, biological\ndata, and other images where the acquisition process is unknown or non-ideal.\nOn such images, our method outperforms SotA CNN-based SR methods, as well as\nprevious unsupervised SR methods. To the best of our knowledge, this is the\nfirst unsupervised CNN-based SR method.", "field": [], "task": ["Image Compression", "Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["Set5 - 4x upscaling", "BSD100 - 4x upscaling", "Set14 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "\"Zero-Shot\" Super-Resolution using Deep Internal Learning"} {"abstract": "Crowdsourcing provides a practical way to obtain large amounts of labeled data at a low cost. However, the annotation quality of annotators varies considerably, which imposes new challenges in learning a high-quality model from the crowdsourced annotations. In this work, we provide a new perspective to decompose annotation noise into common noise and individual noise and differentiate the source of confusion based on instance difficulty and annotator expertise on a per-instance-annotator basis. 
We realize this new crowdsourcing model by an end-to-end learning solution with two types of noise adaptation layers: one is shared across annotators to capture their commonly shared confusions, and the other one is pertaining to each annotator to realize individual confusion. To recognize the source of noise in each annotation, we use an auxiliary network to choose the two noise adaptation layers with respect to both instances and annotators. Extensive experiments on both synthesized and real-world benchmarks demonstrate the effectiveness of our proposed common noise adaptation solution.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["LabelMe"], "metric": ["Test Accuracy"], "title": "Learning from Crowds by Modeling Common Confusions"} {"abstract": "While considerable progresses have been made on face recognition, age-invariant face recognition (AIFR) still remains a major challenge in real world applications of face recognition systems. The major difficulty of AIFR arises from the fact that the facial appearance is subject to significant intra-personal changes caused by the aging process over time. In order to address this problem, we propose a novel deep face recognition framework to learn the age-invariant deep face features through a carefully designed CNN model. To the best of our knowledge, this is the first attempt to show the effectiveness of deep CNNs in advancing the state-of-the-art of AIFR. Extensive experiments are conducted on several public domain face aging datasets (MORPH Album2, FGNET, and CACD-VS) to demonstrate the effectiveness of the proposed model over the state-of-the-art. We also verify the excellent generalization of our new model on the famous LFW dataset.", "field": [], "task": ["Age-Invariant Face Recognition", "Face Recognition"], "method": [], "dataset": ["CACDVS"], "metric": ["Accuracy"], "title": "Latent Factor Guided Convolutional Neural Networks for Age-Invariant Face Recognition"} {"abstract": "We explore several new models for document relevance ranking, building upon\nthe Deep Relevance Matching Model (DRMM) of Guo et al. (2016). Unlike DRMM,\nwhich uses context-insensitive encodings of terms and query-document term\ninteractions, we inject rich context-sensitive encodings throughout our models,\ninspired by PACRR's (Hui et al., 2017) convolutional n-gram matching features,\nbut extended in several ways including multiple views of query and document\ninputs. We test our models on datasets from the BIOASQ question answering\nchallenge (Tsatsaronis et al., 2015) and TREC ROBUST 2004 (Voorhees, 2005),\nshowing they outperform BM25-based baselines, DRMM, and PACRR.", "field": [], "task": ["Ad-Hoc Information Retrieval", "Question Answering"], "method": [], "dataset": ["TREC Robust04"], "metric": ["P@20", "nDCG@20", "MAP"], "title": "Deep Relevance Ranking Using Enhanced Document-Query Interactions"} {"abstract": "Recurrent neural networks are now the state-of-the-art in natural language\nprocessing because they can build rich contextual representations and process\ntexts of arbitrary length. However, recent developments on attention mechanisms\nhave equipped feedforward networks with similar capabilities, hence enabling\nfaster computations due to the increase in the number of operations that can be\nparallelized. We explore this new type of architecture in the domain of\nquestion-answering and propose a novel approach that we call Fully Attention\nBased Information Retriever (FABIR). 
We show that FABIR achieves competitive\nresults in the Stanford Question Answering Dataset (SQuAD) while having fewer\nparameters and being faster at both learning and inference than rival methods.", "field": [], "task": ["Question Answering"], "method": [], "dataset": ["SQuAD1.1 dev", "SQuAD1.1"], "metric": ["EM", "F1"], "title": "A Fully Attention-Based Information Retriever"} {"abstract": "Semantic segmentation is a key problem for many computer vision tasks. While\napproaches based on convolutional neural networks constantly break new records\non different benchmarks, generalizing well to diverse testing environments\nremains a major challenge. In numerous real world applications, there is indeed\na large gap between data distributions in train and test domains, which results\nin severe performance loss at run-time. In this work, we address the task of\nunsupervised domain adaptation in semantic segmentation with losses based on\nthe entropy of the pixel-wise predictions. To this end, we propose two novel,\ncomplementary methods using (i) entropy loss and (ii) adversarial loss\nrespectively. We demonstrate state-of-the-art performance in semantic\nsegmentation on two challenging \"synthetic-2-real\" set-ups and show that the\napproach can also be used for detection.", "field": [], "task": ["Domain Adaptation", "Image-to-Image Translation", "Semantic Segmentation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["GTAV-to-Cityscapes Labels", "SYNTHIA-to-Cityscapes"], "metric": ["mIoU (13 classes)", "mIoU"], "title": "ADVENT: Adversarial Entropy Minimization for Domain Adaptation in Semantic Segmentation"} {"abstract": "Numerical evaluations with comparisons to baselines play a central role when judging research in recommender systems. In this paper, we show that running baselines properly is difficult. We demonstrate this issue on two extensively studied datasets. First, we show that results for baselines that have been used in numerous publications over the past five years for the Movielens 10M benchmark are suboptimal. With a careful setup of a vanilla matrix factorization baseline, we are not only able to improve upon the reported results for this baseline but even outperform the reported results of any newly proposed method. Secondly, we recap the tremendous effort that was required by the community to obtain high quality results for simple methods on the Netflix Prize. Our results indicate that empirical findings in research papers are questionable unless they were obtained on standardized benchmarks where baselines have been tuned extensively by the research community.", "field": [], "task": ["Recommendation Systems"], "method": [], "dataset": ["MovieLens 1M", "MovieLens 100K", "MovieLens 10M"], "metric": ["RMSE (u1 Splits)", "RMSE"], "title": "On the Difficulty of Evaluating Baselines: A Study on Recommender Systems"} {"abstract": "We present two novel solutions for multi-view 3D human pose estimation based on new learnable triangulation methods that combine 3D information from multiple 2D views. The first (baseline) solution is a basic differentiable algebraic triangulation with an addition of confidence weights estimated from the input images. The second solution is based on a novel method of volumetric aggregation from intermediate 2D backbone feature maps. The aggregated volume is then refined via 3D convolutions that produce final 3D joint heatmaps and allow modelling a human pose prior. 
Crucially, both approaches are end-to-end differentiable, which allows us to directly optimize the target metric. We demonstrate transferability of the solutions across datasets and considerably improve the multi-view state of the art on the Human3.6M dataset. Video demonstration, annotations and additional materials will be posted on our project page (https://saic-violet.github.io/learnable-triangulation).", "field": [], "task": ["3D Human Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["Human3.6M"], "metric": ["Average MPJPE (mm)", "Multi-View or Monocular", "Using 2D ground-truth joints"], "title": "Learnable Triangulation of Human Pose"} {"abstract": "Benchmark data sets are an indispensable ingredient of the evaluation of graph-based machine learning methods. We release a new data set, compiled from International Planning Competitions (IPC), for benchmarking graph classification, regression, and related tasks. Apart from the graph construction (based on AI planning problems) that is interesting in its own right, the data set possesses distinctly different characteristics from popularly used benchmarks. The data set, named IPC, consists of two self-contained versions, grounded and lifted, both including graphs of large and skewedly distributed sizes, posing substantial challenges for the computation of graph models such as graph kernels and graph neural networks. The graphs in this data set are directed and the lifted version is acyclic, offering the opportunity of benchmarking specialized models for directed (acyclic) structures. Moreover, the graph generator and the labeling are computer programmed; thus, the data set may be extended easily if a larger scale is desired. The data set is accessible from \\url{https://github.com/IBM/IPC-graph-data}.", "field": [], "task": ["Graph Classification", "graph construction", "Regression"], "method": [], "dataset": ["IPC-grounded", "IPC-lifted"], "metric": ["Accuracy"], "title": "IPC: A Benchmark Data Set for Learning with Graph-Structured Data"} {"abstract": "We introduce the Neural State Machine, seeking to bridge the gap between the neural and symbolic views of AI and integrate their complementary strengths for the task of visual reasoning. Given an image, we first predict a probabilistic graph that represents its underlying semantics and serves as a structured world model. Then, we perform sequential reasoning over the graph, iteratively traversing its nodes to answer a given question or draw a new inference. In contrast to most neural architectures that are designed to closely interact with the raw sensory data, our model operates instead in an abstract latent space, by transforming both the visual and linguistic modalities into semantic concept-based representations, thereby achieving enhanced transparency and modularity. We evaluate our model on VQA-CP and GQA, two recent VQA datasets that involve compositionality, multi-step inference and diverse reasoning skills, achieving state-of-the-art results in both cases. 
We provide further experiments that illustrate the model's strong generalization capacity across multiple dimensions, including novel compositions of concepts, changes in the answer distribution, and unseen linguistic structures, demonstrating the qualities and efficacy of our approach.", "field": [], "task": ["Visual Question Answering", "Visual Reasoning"], "method": [], "dataset": ["GQA test-dev", "GQA test-std", "VQA-CP"], "metric": ["Score", "Accuracy"], "title": "Learning by Abstraction: The Neural State Machine"} {"abstract": "Multi-task learning is a very challenging problem in reinforcement learning. While training multiple tasks jointly allow the policies to share parameters across different tasks, the optimization problem becomes non-trivial: It remains unclear what parameters in the network should be reused across tasks, and how the gradients from different tasks may interfere with each other. Thus, instead of naively sharing parameters across tasks, we introduce an explicit modularization technique on policy representation to alleviate this optimization issue. Given a base policy network, we design a routing network which estimates different routing strategies to reconfigure the base network for each task. Instead of directly selecting routes for each task, our task-specific policy uses a method called soft modularization to softly combine all the possible routes, which makes it suitable for sequential tasks. We experiment with various robotics manipulation tasks in simulation and show our method improves both sample efficiency and performance over strong baselines by a large margin.", "field": [], "task": ["Meta-Learning", "Multi-Task Learning"], "method": [], "dataset": ["MT50"], "metric": ["Average Success Rate"], "title": "Multi-Task Reinforcement Learning with Soft Modularization"} {"abstract": "The Web has become the main platform where people express their opinions about entities of interest and their associated aspects. Aspect-Based Sentiment Analysis (ABSA) aims to automatically compute the sentiment towards these aspects from opinionated text. In this paper we extend the state-of-the-art Hybrid Approach for Aspect-Based Sentiment Analysis (HAABSA) method in two directions. First we replace the non-contextual word embeddings with deep contextual word embeddings in order to better cope with the word semantics in a given text. Second, we use hierarchical attention by adding an extra attention layer to the HAABSA high-level representations in order to increase the method flexibility in modeling the input data. Using two standard datasets (SemEval 2015 and SemEval 2016) we show that the proposed extensions improve the accuracy of the built model for ABSA.", "field": [], "task": ["Aspect-Based Sentiment Analysis", "Sentiment Analysis", "Word Embeddings"], "method": [], "dataset": ["SemEval 2015 Task 12", "SemEval-2016 Task 5 Subtask 1"], "metric": ["Restaurant (Acc)"], "title": "A Hybrid Approach for Aspect-Based Sentiment Analysis Using Deep Contextual Word Embeddings and Hierarchical Attention"} {"abstract": "Graph convolutional networks produce good predictions of unlabeled samples due to its transductive label propagation. Since samples have different predicted confidences, we take high-confidence predictions as pseudo labels to expand the label set so that more samples are selected for updating models. We propose a new training method named as mutual teaching, i.e., we train dual models and let them teach each other during each batch. 
First, each network feeds forward all samples and selects samples with high-confidence predictions. Second, each model is updated by samples selected by its peer network. We view the high-confidence predictions as useful knowledge, and the useful knowledge of one network teaches the peer network with model updating in each batch. In mutual teaching, the pseudo-label set of a network is from its peer network. Since we use the new strategy of network training, performance improves significantly. Extensive experimental results demonstrate that our method achieves superior performance over state-of-the-art methods under very low label rates.", "field": [], "task": ["Node Classification"], "method": [], "dataset": ["PubMed (0.1%)", "Cora", "Cora (1%)", "PubMed (0.05%)", "Cora (3%)", "CiteSeer (1%)", "Cora (0.5%)", "CiteSeer (0.5%)", "PubMed (0.03%)"], "metric": ["Accuracy"], "title": "Mutual Teaching for Graph Convolutional Networks"} {"abstract": "Document-level relation extraction (RE) poses new challenges compared to its sentence-level counterpart. One document commonly contains multiple entity pairs, and one entity pair occurs multiple times in the document associated with multiple possible relations. In this paper, we propose two novel techniques, adaptive thresholding and localized context pooling, to solve the multi-label and multi-entity problems. The adaptive thresholding replaces the global threshold for multi-label classification in the prior work with a learnable entities-dependent threshold. The localized context pooling directly transfers attention from pre-trained language models to locate relevant context that is useful to decide the relation. We experiment on three document-level RE benchmark datasets: DocRED, a recently released large-scale RE dataset, and two datasets CDR and GDA in the biomedical domain. Our ATLOP (Adaptive Thresholding and Localized cOntext Pooling) model achieves an F1 score of 63.4, and also significantly outperforms existing models on both CDR and GDA.", "field": [], "task": ["Multi-Label Classification", "Relation Extraction"], "method": [], "dataset": ["DocRED"], "metric": ["Ign F1", "F1"], "title": "Document-Level Relation Extraction with Adaptive Thresholding and Localized Context Pooling"} {"abstract": "We propose a simple, intuitive yet powerful method for human-object interaction (HOI) detection. HOIs are so diverse in spatial distribution in an image that existing CNN-based methods face the following three major drawbacks; they cannot leverage image-wide features due to CNN's locality, they rely on a manually defined location-of-interest for the feature aggregation, which sometimes does not cover contextually important regions, and they cannot help but mix up the features for multiple HOI instances if they are located closely. To overcome these drawbacks, we propose a transformer-based feature extractor, in which an attention mechanism and query-based detection play key roles. The attention mechanism is effective in aggregating contextually important information image-wide, while the queries, which we design in such a way that each query captures at most one human-object pair, can avoid mixing up the features from multiple instances. This transformer-based feature extractor produces so effective embeddings that the subsequent detection heads may be fairly simple and intuitive.
The extensive analysis reveals that the proposed method successfully extracts contextually important features, and thus outperforms existing methods by large margins (5.37 mAP on HICO-DET, and 5.7 mAP on V-COCO). The source codes are available at $\\href{https://github.com/hitachi-rd-cv/qpic}{\\text{this https URL}}$.", "field": [], "task": ["Human-Object Interaction Detection"], "method": [], "dataset": ["HICO-DET", "V-COCO"], "metric": ["Time Per Frame(ms)", "Time Per Frame (ms)", "MAP"], "title": "QPIC: Query-Based Pairwise Human-Object Interaction Detection with Image-Wide Contextual Information"} {"abstract": "We present convolutional neural network (CNN) based approaches for\nunsupervised multimodal subspace clustering. The proposed framework consists of\nthree main stages - multimodal encoder, self-expressive layer, and multimodal\ndecoder. The encoder takes multimodal data as input and fuses them to a latent\nspace representation. The self-expressive layer is responsible for enforcing\nthe self-expressiveness property and acquiring an affinity matrix corresponding\nto the data points. The decoder reconstructs the original input data. The\nnetwork uses the distance between the decoder's reconstruction and the original\ninput in its training. We investigate early, late and intermediate fusion\ntechniques and propose three different encoders corresponding to them for\nspatial fusion. The self-expressive layers and multimodal decoders are\nessentially the same for different spatial fusion-based approaches. In addition\nto various spatial fusion-based methods, an affinity fusion-based network is\nalso proposed in which the self-expressive layer corresponding to different\nmodalities is enforced to be the same. Extensive experiments on three datasets\nshow that the proposed methods significantly outperform the state-of-the-art\nmultimodal subspace clustering methods.", "field": [], "task": ["Image Clustering", "Multi-modal Subspace Clustering", "Multiview Learning", "Multi-view Subspace Clustering"], "method": [], "dataset": ["ARL Polarimetric Thermal Face Dataset", "ORL", "USPS", "Extended Yale-B"], "metric": ["NMI", "Accuracy"], "title": "Deep Multimodal Subspace Clustering Networks"} {"abstract": "We propose a weakly supervised temporal action localization algorithm on\nuntrimmed videos using convolutional neural networks. Our algorithm learns from\nvideo-level class labels and predicts temporal intervals of human actions with\nno requirement of temporal localization annotations. We design our network to\nidentify a sparse subset of key segments associated with target actions in a\nvideo using an attention module and fuse the key segments through adaptive\ntemporal pooling. Our loss function is comprised of two terms that minimize the\nvideo-level action classification error and enforce the sparsity of the segment\nselection. At inference time, we extract and score temporal proposals using\ntemporal class activations and class-agnostic attentions to estimate the time\nintervals that correspond to target actions. 
The proposed algorithm attains\nstate-of-the-art results on the THUMOS14 dataset and outstanding performance on\nActivityNet1.3 even with its weak supervision.", "field": [], "task": ["Action Classification", "Action Classification ", "Action Localization", "Temporal Action Localization", "Temporal Localization", "Weakly Supervised Action Localization", "Weakly-supervised Temporal Action Localization", "Weakly Supervised Temporal Action Localization"], "method": [], "dataset": ["ActivityNet-1.3", "THUMOS 2014"], "metric": ["mAP@0.5", "mAP@0.1:0.7"], "title": "Weakly Supervised Action Localization by Sparse Temporal Pooling Network"} {"abstract": "How do we learn an object detector that is invariant to occlusions and\ndeformations? Our current solution is to use a data-driven strategy -- collect\nlarge-scale datasets which have object instances under different conditions.\nThe hope is that the final classifier can use these examples to learn\ninvariances. But is it really possible to see all the occlusions in a dataset?\nWe argue that like categories, occlusions and object deformations also follow a\nlong-tail. Some occlusions and deformations are so rare that they hardly\nhappen; yet we want to learn a model invariant to such occurrences. In this\npaper, we propose an alternative solution. We propose to learn an adversarial\nnetwork that generates examples with occlusions and deformations. The goal of\nthe adversary is to generate examples that are difficult for the object\ndetector to classify. In our framework both the original detector and adversary\nare learned in a joint manner. Our experimental results indicate a 2.3% mAP\nboost on VOC07 and a 2.6% mAP boost on VOC2012 object detection challenge\ncompared to the Fast-RCNN pipeline. We also release the code for this paper.", "field": [], "task": ["Object Detection"], "method": [], "dataset": ["PASCAL VOC 2007"], "metric": ["MAP"], "title": "A-Fast-RCNN: Hard Positive Generation via Adversary for Object Detection"} {"abstract": "The ICML 2013 Workshop on Challenges in Representation Learning focused on\nthree challenges: the black box learning challenge, the facial expression\nrecognition challenge, and the multimodal learning challenge. We describe the\ndatasets created for these challenges and summarize the results of the\ncompetitions. We provide suggestions for organizers of future challenges and\nsome comments on what kind of knowledge can be gained from machine learning\ncompetitions.", "field": [], "task": ["Facial Expression Recognition", "Representation Learning"], "method": [], "dataset": ["FER2013"], "metric": ["Accuracy"], "title": "Challenges in Representation Learning: A report on three machine learning contests"} {"abstract": "The Conditional Random Field as a Recurrent Neural Network layer is a\nrecently proposed algorithm meant to be placed on top of an existing\nFully-Convolutional Neural Network to improve the quality of semantic\nsegmentation. In this paper, we test whether this algorithm, which was shown to\nimprove semantic segmentation for 2D RGB images, is able to improve\nsegmentation quality for 3D multi-modal medical images. We developed an\nimplementation of the algorithm which works for any number of spatial\ndimensions, input/output image channels, and reference image channels. As far\nas we know this is the first publicly available implementation of this sort. 
We\ntested the algorithm with two distinct 3D medical imaging datasets, we\nconcluded that the performance differences observed were not statistically\nsignificant. Finally, in the discussion section of the paper, we go into the\nreasons as to why this technique transfers poorly from natural images to\nmedical images.", "field": [], "task": ["3D Medical Imaging Segmentation", "Medical Image Segmentation", "Semantic Segmentation", "Volumetric Medical Image Segmentation"], "method": [], "dataset": ["PROMISE 2012"], "metric": ["Dice Score"], "title": "Conditional Random Fields as Recurrent Neural Networks for 3D Medical Imaging Segmentation"} {"abstract": "Emotion recognition in conversations is a challenging task that has recently gained popularity due to its potential applications. Until now, however, a large-scale multimodal multi-party emotional conversational database containing more than two speakers per dialogue was missing. Thus, we propose the Multimodal EmotionLines Dataset (MELD), an extension and enhancement of EmotionLines. MELD contains about 13,000 utterances from 1,433 dialogues from the TV-series Friends. Each utterance is annotated with emotion and sentiment labels, and encompasses audio, visual and textual modalities. We propose several strong multimodal baselines and show the importance of contextual and multimodal information for emotion recognition in conversations. The full dataset is available for use at http:// affective-meld.github.io.", "field": [], "task": ["Dialogue Generation", "Emotion Recognition", "Emotion Recognition in Conversation"], "method": [], "dataset": ["MELD"], "metric": ["Weighted Macro-F1"], "title": "MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations"} {"abstract": "Named entity recognition (NER) is an important task in natural language processing area, which needs to determine entities boundaries and classify them into pre-defined categories. For Chinese NER task, there is only a very small amount of annotated data available. Chinese NER task and Chinese word segmentation (CWS) task have many similar word boundaries. There are also specificities in each task. However, existing methods for Chinese NER either do not exploit word boundary information from CWS or cannot filter the specific information of CWS. In this paper, we propose a novel adversarial transfer learning framework to make full use of task-shared boundaries information and prevent the task-specific features of CWS. Besides, since arbitrary character can provide important cues when predicting entity type, we exploit self-attention to explicitly capture long range dependencies between two tokens. Experimental results on two different widely used datasets show that our proposed model significantly and consistently outperforms other state-of-the-art methods.", "field": [], "task": ["Chinese Named Entity Recognition", "Chinese Word Segmentation", "Named Entity Recognition", "Transfer Learning"], "method": [], "dataset": ["SighanNER", "Weibo NER"], "metric": ["F1"], "title": "Adversarial Transfer Learning for Chinese Named Entity Recognition with Self-Attention Mechanism"} {"abstract": "Recently, substantial progress has been made in language modeling by using deep neural networks. However, in practice, large scale neural language models have been shown to be prone to overfitting. In this paper, we present a simple yet highly effective adversarial training mechanism for regularizing neural language models. 
The idea is to introduce adversarial noise to the output embedding layer while training the models. We show that the optimal adversarial noise yields a simple closed-form solution, thus allowing us to develop a simple and time efficient algorithm. Theoretically, we show that our adversarial mechanism effectively encourages the diversity of the embedding vectors, helping to increase the robustness of models. Empirically, we show that our method improves on the single model state-of-the-art results for language modeling on Penn Treebank (PTB) and Wikitext-2, achieving test perplexity scores of 46.01 and 38.07, respectively. When applied to machine translation, our method improves over various transformer-based translation baselines in BLEU scores on the WMT14 English-German and IWSLT14 German-English tasks.", "field": [], "task": ["Language Modelling", "Machine Translation"], "method": [], "dataset": ["IWSLT2015 German-English", "Penn Treebank (Word Level)", "WMT2014 English-German", "WikiText-2", "WikiText-103"], "metric": ["Number of params", "Validation perplexity", "Test perplexity", "Params", "BLEU score"], "title": "Improving Neural Language Modeling via Adversarial Training"} {"abstract": "Click-through rate (CTR) prediction is a critical task in online advertising systems. A large body of research considers each ad independently, but ignores its relationship to other ads that may impact the CTR. In this paper, we investigate various types of auxiliary ads for improving the CTR prediction of the target ad. In particular, we explore auxiliary ads from two viewpoints: one is from the spatial domain, where we consider the contextual ads shown above the target ad on the same page; the other is from the temporal domain, where we consider historically clicked and unclicked ads of the user. The intuitions are that ads shown together may influence each other, clicked ads reflect a user's preferences, and unclicked ads may indicate what a user dislikes to certain extent. In order to effectively utilize these auxiliary data, we propose the Deep Spatio-Temporal neural Networks (DSTNs) for CTR prediction. Our model is able to learn the interactions between each type of auxiliary data and the target ad, to emphasize more important hidden information, and to fuse heterogeneous data in a unified framework. Offline experiments on one public dataset and two industrial datasets show that DSTNs outperform several state-of-the-art methods for CTR prediction. We have deployed the best-performing DSTN in Shenma Search, which is the second largest search engine in China. The A/B test results show that the online CTR is also significantly improved compared to our last serving model.", "field": [], "task": ["Click-Through Rate Prediction"], "method": [], "dataset": ["Avito"], "metric": ["Log Loss", "AUC"], "title": "Deep Spatio-Temporal Neural Networks for Click-Through Rate Prediction"} {"abstract": "Visual-semantic embedding aims to find a shared latent space where related visual and textual instances are close to each other. Most current methods learn injective embedding functions that map an instance to a single point in the shared space. Unfortunately, injective embedding cannot effectively handle polysemous instances with multiple possible meanings; at best, it would find an average representation of different meanings. This hinders its use in real-world scenarios where individual instances and their cross-modal associations are often ambiguous. 
In this work, we introduce Polysemous Instance Embedding Networks (PIE-Nets) that compute multiple and diverse representations of an instance by combining global context with locally-guided features via multi-head self-attention and residual learning. To learn visual-semantic embedding, we tie-up two PIE-Nets and optimize them jointly in the multiple instance learning framework. Most existing work on cross-modal retrieval focuses on image-text data. Here, we also tackle a more challenging case of video-text retrieval. To facilitate further research in video-text retrieval, we release a new dataset of 50K video-sentence pairs collected from social media, dubbed MRW (my reaction when). We demonstrate our approach on both image-text and video-text retrieval scenarios using MS-COCO, TGIF, and our new MRW dataset.", "field": [], "task": ["Cross-Modal Retrieval", "Multiple Instance Learning", "Video-Text Retrieval"], "method": [], "dataset": ["COCO 2014"], "metric": ["Image-to-text R@5", "Image-to-text R@1", "Image-to-text R@10", "Text-to-image R@10", "Text-to-image R@1", "Text-to-image R@5"], "title": "Polysemous Visual-Semantic Embedding for Cross-Modal Retrieval"} {"abstract": "This paper studies recommender systems with knowledge graphs, which can effectively address the problems of data sparsity and cold start. Recently, a variety of methods have been developed for this problem, which generally try to learn effective representations of users and items and then match items to users according to their representations. Though these methods have been shown quite effective, they lack good explanations, which are critical to recommender systems. In this paper, we take a different path and propose generating recommendations by finding meaningful paths from users to items. Specifically, we formulate the problem as a sequential decision process, where the target user is defined as the initial state, and the walks on the graphs are defined as actions. We shape the rewards according to existing state-of-the-art methods and then train a policy function with policy gradient methods. Experimental results on three real-world datasets show that our proposed method not only provides effective recommendations but also offers good explanations.", "field": [], "task": ["Knowledge Graphs", "Policy Gradient Methods", "Recommendation Systems"], "method": [], "dataset": ["MovieLens 1M", "Last.FM", "DBbook2014"], "metric": ["nDCG@10", "HR@10"], "title": "Explainable Knowledge Graph-based Recommendation via Deep Reinforcement Learning"} {"abstract": "This paper proposes a hybrid neural network (HNN) model for commonsense reasoning. An HNN consists of two component models, a masked language model and a semantic similarity model, which share a BERT-based contextual encoder but use different model-specific input and output layers. HNN obtains new state-of-the-art results on three classic commonsense reasoning tasks, pushing the WNLI benchmark to 89%, the Winograd Schema Challenge (WSC) benchmark to 75.1%, and the PDP60 benchmark to 90.0%. An ablation study shows that language models and semantic similarity models are complementary approaches to commonsense reasoning, and HNN effectively combines the strengths of both. 
The code and pre-trained models will be publicly available at https://github.com/namisan/mt-dnn.", "field": [], "task": ["Common Sense Reasoning", "Language Modelling", "Semantic Similarity", "Semantic Textual Similarity"], "method": [], "dataset": ["PDP60", "Winograd Schema Challenge", "WNLI"], "metric": ["Score", "Accuracy"], "title": "A Hybrid Neural Network Model for Commonsense Reasoning"} {"abstract": "The ability to understand visual information from limited labeled data is an important aspect of machine learning. While image-level classification has been extensively studied in a semi-supervised setting, dense pixel-level classification with limited data has only drawn attention recently. In this work, we propose an approach for semi-supervised semantic segmentation that learns from limited pixel-wise annotated samples while exploiting additional annotation-free images. It uses two network branches that link semi-supervised classification with semi-supervised segmentation including self-training. The dual-branch approach reduces both the low-level and the high-level artifacts typical when training with few labels. The approach attains significant improvement over existing methods, especially when trained with very few labeled samples. On several standard benchmarks - PASCAL VOC 2012, PASCAL-Context, and Cityscapes - the approach achieves new state-of-the-art in semi-supervised learning.", "field": [], "task": ["Semantic Segmentation", "Semi-Supervised Semantic Segmentation"], "method": [], "dataset": ["PASCAL Context 25% labeled", "Pascal VOC 2012 12.5% labeled", "Cityscapes 12.5% labeled", "Pascal VOC 2012 5% labeled", "PASCAL Context 12.5% labeled", "Pascal VOC 2012 2% labeled", "Cityscapes 25% labeled"], "metric": ["Validation mIoU"], "title": "Semi-Supervised Semantic Segmentation with High- and Low-level Consistency"} {"abstract": "Single image dehazing is a critical stage in many modern-day autonomous vision applications. Early prior-based methods often involved a time-consuming minimization of a hand-crafted energy function. Recent learning-based approaches utilize the representational power of deep neural networks (DNNs) to learn the underlying transformation between hazy and clear images. Due to inherent limitations in collecting matching clear and hazy images, these methods resort to training on synthetic data; constructed from indoor images and corresponding depth information. This may result in a possible domain shift when treating outdoor scenes. We propose a completely unsupervised method of training via minimization of the well-known, Dark Channel Prior (DCP) energy function. Instead of feeding the network with synthetic data, we solely use real-world outdoor images and tune the network's parameters by directly minimizing the DCP. Although our \"Deep DCP\" technique can be regarded as a fast approximator of DCP, it actually improves its results significantly. This suggests an additional regularization obtained via the network and learning process. Experiments show that our method performs on par with large-scale supervised methods.", "field": [], "task": ["Image Dehazing", "Single Image Dehazing"], "method": [], "dataset": ["SOTS Indoor", "SOTS Outdoor"], "metric": ["SSIM", "PSNR"], "title": "Unsupervised Single Image Dehazing Using Dark Channel Prior Loss"} {"abstract": "Weakly supervised semantic segmentation (WSSS) using only image-level labels can greatly reduce the annotation cost and therefore has attracted considerable research interest. 
However, its performance is still inferior to the fully supervised counterparts. To mitigate the performance gap, we propose a saliency guided self-attention network (SGAN) to address the WSSS problem. The introduced self-attention mechanism is able to capture rich and extensive contextual information but may mis-spread attentions to unexpected regions. In order to enable this mechanism to work effectively under weak supervision, we integrate class-agnostic saliency priors into the self-attention mechanism and utilize class-specific attention cues as an additional supervision for SGAN. Our SGAN is able to produce dense and accurate localization cues so that the segmentation performance is boosted. Moreover, by simply replacing the additional supervisions with partially labeled ground-truth, SGAN works effectively for semi-supervised semantic segmentation as well. Experiments on the PASCAL VOC 2012 and COCO datasets show that our approach outperforms all other state-of-the-art methods in both weakly and semi-supervised settings.", "field": [], "task": ["Semantic Segmentation", "Semi-Supervised Semantic Segmentation", "Weakly-Supervised Semantic Segmentation"], "method": [], "dataset": ["PASCAL VOC 2012 test", "PASCAL VOC 2012 val"], "metric": ["Mean IoU"], "title": "Saliency Guided Self-attention Network for Weakly and Semi-supervised Semantic Segmentation"} {"abstract": "This paper proposes a methodology to extract key insights from user generated reviews. This work is based on Aspect Based Sentiment Analysis (ABSA) which predicts the sentiment of aspects mentioned in the text documents. The extracted aspects are fine-grained for the presentation form known as Review Highlights.\r\n\r\nThe syntactic approach for extraction process suffers from the overlapping chunking rules which result in noise extraction. We introduce a hybrid technique which combines machine learning and rule based model. A multi-label classifier identifies the effective rules which efficiently parse aspects and opinions from texts. This selection of rules reduce the amount of noise in extraction tasks.\r\n\r\nThis is a novel attempt to learn syntactic rule fitness from a corpus using machine learning for accurate aspect extraction. As the model learns the syntactic rule prediction from the corpus, it makes the extraction method domain independent. It also allows studying the quality of syntactic rules in a different corpus.", "field": [], "task": ["Aspect-Based Sentiment Analysis", "Aspect Extraction", "Chunking", "Extract Aspect", "Extract aspect-polarity tuple", "Opinion Mining", "Sentiment Analysis"], "method": [], "dataset": [" SemEval 2015 Task 12"], "metric": ["F1 score"], "title": "Review highlights: opinion mining on reviews: a hybrid model for rule selection in aspect extraction"} {"abstract": "An abstractive snippet is an originally created piece of text to summarize a web page on a search engine results page. Compared to the conventional extractive snippets, which are generated by extracting phrases and sentences verbatim from a web page, abstractive snippets circumvent copyright issues; even more interesting is the fact that they open the door for personalization. Abstractive snippets have been evaluated as equally powerful in terms of user acceptance and expressiveness---but the key question remains: Can abstractive snippets be automatically generated with sufficient quality? 
This paper introduces a new approach to abstractive snippet generation: We identify the first two large-scale sources for distant supervision, namely anchor contexts and web directories. By mining the entire ClueWeb09 and ClueWeb12 for anchor contexts and by utilizing the DMOZ Open Directory Project, we compile the Webis Abstractive Snippet Corpus 2020, comprising more than 3.5 million triples of the form $\\langle$query, snippet, document$\\rangle$ as training examples, where the snippet is either an anchor context or a web directory description in lieu of a genuine query-biased abstractive snippet of the web document. We propose a bidirectional abstractive snippet generation model and assess the quality of both our corpus and the generated abstractive snippets with standard measures, crowdsourcing, and in comparison to the state of the art. The evaluation shows that our novel data sources along with the proposed model allow for producing usable query-biased abstractive snippets while minimizing text reuse.", "field": [], "task": ["Text Summarization"], "method": [], "dataset": ["Webis-Snippet-20 Corpus"], "metric": ["Rouge-L", "Rouge-2", "Rouge-1"], "title": "Abstractive Snippet Generation"} {"abstract": "Emotion-cause pair extraction (ECPE), as an emergent natural language processing task, aims at jointly investigating emotions and their underlying causes in documents. It extends the previous emotion cause extraction (ECE) task, yet without requiring a set of pre-given emotion clauses as in ECE. Existing approaches to ECPE generally adopt a two-stage method, i.e., (1) emotion and cause detection, and then (2) pairing the detected emotions and causes. Such pipeline method, while intuitive, suffers from two critical issues, including error propagation across stages that may hinder the effectiveness, and high computational cost that would limit the practical application of the method. To tackle these issues, we propose a multi-task learning model that can extract emotions, causes and emotion-cause pairs simultaneously in an end-to-end manner. Specifically, our model regards pair extraction as a link prediction task, and learns to link from emotion clauses to cause clauses, i.e., the links are directional. Emotion extraction and cause extraction are incorporated into the model as auxiliary tasks, which further boost the pair extraction. Experiments are conducted on an ECPE benchmarking dataset. The results show that our proposed model outperforms a range of state-of-the-art approaches.", "field": [], "task": ["Emotion Cause Extraction", "Emotion-Cause Pair Extraction", "Link Prediction", "Multi-Task Learning"], "method": [], "dataset": ["ECPE"], "metric": ["F1"], "title": "An End-to-End Multi-Task Learning to Link Framework for Emotion-Cause Pair Extraction"} {"abstract": "Named entity recognition systems have the untapped potential to extract information from legal documents, which can improve\r\ninformation retrieval and decision-making processes. In this paper, a dataset for named entity recognition in Brazilian legal documents is presented. Unlike other Portuguese language datasets, this dataset is composed entirely of legal documents. In addition to tags for persons, locations, time entities and organizations, the dataset contains specific tags for law and legal cases entities. To establish a set of baseline results, we first performed experiments on another Portuguese dataset: Paramopama. 
This evaluation demonstrates that LSTM-CRF gives results that are significantly better than those previously reported. We then retrained LSTM-CRF on our dataset and obtained F1 scores of 97.04% and 88.82% for Legislation and Legal case entities, respectively.\r\nThese results show the viability of the proposed dataset for legal applications.", "field": [], "task": ["Decision Making", "Information Retrieval", "Named Entity Recognition"], "method": [], "dataset": ["LeNER-Br"], "metric": ["Micro F1 (Exact Span)", "Micro F1 (Tokens)"], "title": "LeNER-Br: a Dataset for Named Entity Recognition in Brazilian Legal Text"} {"abstract": "An interactive video object segmentation algorithm, which takes scribble annotations on query objects as input, is proposed in this paper. We develop a deep neural network, which consists of the annotation network (A-Net) and the transfer network (T-Net). First, given user scribbles on a frame, A-Net yields a segmentation result based on the encoder-decoder architecture. Second, T-Net transfers the segmentation result bidirectionally to the other frames, by employing the global and local transfer modules. The global transfer module conveys the segmentation information in an annotated frame to a target frame, while the local transfer module propagates the segmentation information in a temporally adjacent frame to the target frame. By applying A-Net and T-Net alternately, a user can obtain desired segmentation results with minimal efforts. We train the entire network in two stages, by emulating user scribbles and employing an auxiliary loss. Experimental results demonstrate that the proposed interactive video object segmentation algorithm outperforms the state-of-the-art conventional algorithms. Codes and models are available at https://github.com/yuk6heo/IVOS-ATNet.", "field": [], "task": ["Interactive Video Object Segmentation", "Semantic Segmentation", "Video Object Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["DAVIS 2017"], "metric": ["AUC-J", "J@60s", "AUC-J&F", "J&F@60s"], "title": "Interactive Video Object Segmentation Using Global and Local Transfer Modules"} {"abstract": "In conventional supervised training, a model is trained to fit all the\ntraining examples. However, having a monolithic model may not always be the\nbest strategy, as examples could vary widely. In this work, we explore a\ndifferent learning protocol that treats each example as a unique pseudo-task,\nby reducing the original learning problem to a few-shot meta-learning scenario\nwith the help of a domain-dependent relevance function. When evaluated on the\nWikiSQL dataset, our approach leads to faster convergence and achieves\n1.1%-5.4% absolute accuracy gains over the non-meta-learning counterparts.", "field": [], "task": ["Meta-Learning"], "method": [], "dataset": ["WikiSQL"], "metric": ["Exact Match Accuracy", "Execution Accuracy"], "title": "Natural Language to Structured Query Generation via Meta-Learning"} {"abstract": "In spite of the recent success of neural machine translation (NMT) in\nstandard benchmarks, the lack of large parallel corpora poses a major practical\nproblem for many language pairs. There have been several proposals to alleviate\nthis issue with, for instance, triangulation and semi-supervised learning\ntechniques, but they still require a strong cross-lingual signal.
In this work,\nwe completely remove the need of parallel data and propose a novel method to\ntrain an NMT system in a completely unsupervised manner, relying on nothing but\nmonolingual corpora. Our model builds upon the recent work on unsupervised\nembedding mappings, and consists of a slightly modified attentional\nencoder-decoder model that can be trained on monolingual corpora alone using a\ncombination of denoising and backtranslation. Despite the simplicity of the\napproach, our system obtains 15.56 and 10.21 BLEU points in WMT 2014\nFrench-to-English and German-to-English translation. The model can also profit\nfrom small parallel corpora, and attains 21.81 and 15.24 points when combined\nwith 100,000 parallel sentences, respectively. Our implementation is released\nas an open source project.", "field": [], "task": ["Machine Translation", "Unsupervised Machine Translation"], "method": [], "dataset": ["WMT2014 English-French", "WMT2015 English-German"], "metric": ["BLEU score"], "title": "Unsupervised Neural Machine Translation"} {"abstract": "Resolving abstract anaphora is an important, but difficult task for text\nunderstanding. Yet, with recent advances in representation learning this task\nbecomes a more tangible aim. A central property of abstract anaphora is that it\nestablishes a relation between the anaphor embedded in the anaphoric sentence\nand its (typically non-nominal) antecedent. We propose a mention-ranking model\nthat learns how abstract anaphors relate to their antecedents with an\nLSTM-Siamese Net. We overcome the lack of training data by generating\nartificial anaphoric sentence--antecedent pairs. Our model outperforms\nstate-of-the-art results on shell noun resolution. We also report first\nbenchmark results on an abstract anaphora subset of the ARRAU corpus. This\ncorpus presents a greater challenge due to a mixture of nominal and pronominal\nanaphors and a greater range of confounders. We found model variants that\noutperform the baselines for nominal anaphors, without training on individual\nanaphor data, but still lag behind for pronominal anaphors. Our model selects\nsyntactically plausible candidates and -- if disregarding syntax --\ndiscriminates candidates using deeper features.", "field": [], "task": ["Abstract Anaphora Resolution", "Representation Learning"], "method": [], "dataset": ["The ARRAU Corpus"], "metric": ["Average Precision"], "title": "A Mention-Ranking Model for Abstract Anaphora Resolution"} {"abstract": "We apply recurrent neural networks to the task of recognizing surgical\nactivities from robot kinematics. Prior work in this area focuses on\nrecognizing short, low-level activities, or gestures, and has been based on\nvariants of hidden Markov models and conditional random fields. In contrast, we\nwork on recognizing both gestures and longer, higher-level activites, or\nmaneuvers, and we model the mapping from kinematics to gestures/maneuvers with\nrecurrent neural networks. To our knowledge, we are the first to apply\nrecurrent neural networks to this task. Using a single model and a single set\nof hyperparameters, we match state-of-the-art performance for gesture\nrecognition and advance state-of-the-art performance for maneuver recognition,\nin terms of both accuracy and edit distance. 
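A brief aside on the surgical-activity entry above, which maps per-frame robot kinematics to gesture/maneuver labels with a recurrent network: the authors' implementation is the linked repository, so the PyTorch snippet below is only a generic frame-wise LSTM tagger sketch, with feature and class counts chosen arbitrarily.

```python
import torch
import torch.nn as nn

class FramewiseLSTMTagger(nn.Module):
    """Generic sketch: per-frame kinematic features -> per-frame activity logits."""
    def __init__(self, n_features=76, n_classes=10, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, n_features)
        h, _ = self.lstm(x)               # h: (batch, time, hidden)
        return self.head(h)               # (batch, time, n_classes)

if __name__ == "__main__":
    model = FramewiseLSTMTagger()
    frames = torch.randn(2, 200, 76)      # two clips, 200 frames each
    logits = model(frames)
    print(logits.shape)                   # torch.Size([2, 200, 10])
```

The frame-wise logits can then be trained with an ordinary cross-entropy loss over the labeled frames.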
Code is available at\nhttps://github.com/rdipietro/miccai-2016-surgical-activity-rec .", "field": [], "task": ["Gesture Recognition"], "method": [], "dataset": ["JIGSAWS", "MISTIC-SIL"], "metric": ["Edit Distance", "Accuracy"], "title": "Recognizing Surgical Activities with Recurrent Neural Networks"} {"abstract": "In this article we introduce the Arcade Learning Environment (ALE): both a\nchallenge problem and a platform and methodology for evaluating the development\nof general, domain-independent AI technology. ALE provides an interface to\nhundreds of Atari 2600 game environments, each one different, interesting, and\ndesigned to be a challenge for human players. ALE presents significant research\nchallenges for reinforcement learning, model learning, model-based planning,\nimitation learning, transfer learning, and intrinsic motivation. Most\nimportantly, it provides a rigorous testbed for evaluating and comparing\napproaches to these problems. We illustrate the promise of ALE by developing\nand benchmarking domain-independent agents designed using well-established AI\ntechniques for both reinforcement learning and planning. In doing so, we also\npropose an evaluation methodology made possible by ALE, reporting empirical\nresults on over 55 different games. All of the software, including the\nbenchmark agents, is publicly available.", "field": [], "task": ["Atari Games", "Imitation Learning", "Transfer Learning"], "method": [], "dataset": ["Atari 2600 Amidar", "Atari 2600 River Raid", "Atari 2600 Beam Rider", "Atari 2600 Video Pinball", "Atari 2600 Demon Attack", "Atari 2600 Enduro", "Atari 2600 Elevator Action", "Atari 2600 Alien", "Atari 2600 Boxing", "Atari 2600 Bank Heist", "Atari 2600 Tutankham", "Atari 2600 Time Pilot", "Atari 2600 Space Invaders", "Atari 2600 Assault", "Atari 2600 Gravitar", "Atari 2600 Ice Hockey", "Atari 2600 Bowling", "Atari 2600 Carnival", "Atari 2600 Private Eye", "Atari 2600 Berzerk", "Atari 2600 Asterix", "Atari 2600 Breakout", "Atari 2600 Name This Game", "Atari 2600 Crazy Climber", "Atari 2600 Pong", "Atari 2600 Krull", "Atari 2600 Freeway", "Atari 2600 James Bond", "Atari 2600 Robotank", "Atari 2600 Kangaroo", "Atari 2600 Venture", "Atari 2600 Asteroids", "Atari 2600 Fishing Derby", "Atari 2600 Ms. Pacman", "Atari 2600 Seaquest", "Atari 2600 Journey Escape", "Atari 2600 Tennis", "Atari 2600 Zaxxon", "Atari 2600 Frostbite", "Atari 2600 Star Gunner", "Atari 2600 Double Dunk", "Atari 2600 Battle Zone", "Atari 2600 Gopher", "Atari 2600 Skiing", "Atari 2600 Road Runner", "Atari 2600 Atlantis", "Atari 2600 Kung-Fu Master", "Atari 2600 Chopper Command", "Atari 2600 Up and Down", "Atari 2600 Montezuma's Revenge", "Atari 2600 Pooyan", "Atari 2600 Wizard of Wor", "Atari 2600 Q*Bert", "Atari 2600 Centipede", "Atari 2600 HERO"], "metric": ["Score"], "title": "The Arcade Learning Environment: An Evaluation Platform for General Agents"} {"abstract": "Recent advances in data-to-text generation have led to the use of large-scale\ndatasets and neural network models which are trained end-to-end, without\nexplicitly modeling what to say and in what order. In this work, we present a\nneural network architecture which incorporates content selection and planning\nwithout sacrificing end-to-end training. We decompose the generation task into\ntwo stages. 
Given a corpus of data records (paired with descriptive documents),\nwe first generate a content plan highlighting which information should be\nmentioned and in which order and then generate the document while taking the\ncontent plan into account. Automatic and human-based evaluation experiments\nshow that our model outperforms strong baselines improving the state-of-the-art\non the recently released RotoWire dataset.", "field": [], "task": ["Data-to-Text Generation", "Text Generation"], "method": [], "dataset": ["Rotowire (Content Selection)", "RotoWire", "RotoWire (Content Ordering)", "RotoWire (Relation Generation)"], "metric": ["count", "Recall", "Precision", "DLD", "BLEU"], "title": "Data-to-Text Generation with Content Selection and Planning"} {"abstract": "We present the first attempt at using sequence to sequence neural networks to model text simplification (TS). Unlike the previously proposed automated TS systems, our neural text simplification (NTS) systems are able to simultaneously perform lexical simplification and content reduction. An extensive human evaluation of the output has shown that NTS systems achieve almost perfect grammaticality and meaning preservation of output sentences and higher level of simplification than the state-of-the-art automated TS systems", "field": [], "task": ["Lexical Simplification", "Machine Translation", "Text Simplification", "Word Embeddings"], "method": [], "dataset": ["TurkCorpus"], "metric": ["BLEU", "SARI (EASSE>=0.2.1)"], "title": "Exploring Neural Text Simplification Models"} {"abstract": "We introduce an unsupervised, geodesic distance based, salient video object segmentation method. Unlike traditional methods, our method incorporates saliency as prior for object via the computation of robust geodesic measurement. We consider two discriminative visual features: spatial edges and temporal motion boundaries as indicators of foreground object locations. We first generate frame-wise spatiotemporal saliency maps using geodesic distance from these indicators. Building on the observation that foreground areas are surrounded by the regions with high spatiotemporal edge values, geodesic distance provides an initial estimation for foreground and background. Then, high-quality saliency results are produced via the geodesic distances to background regions in the subsequent frames. Through the resulting saliency maps, we build global appearance models for foreground and background. By imposing motion continuity, we establish a dynamic location model for each frame. Finally, the spatiotemporal saliency maps, appearance models and dynamic location models are combined into an energy minimization framework to attain both spatially and temporally coherent object segmentation. Extensive quantitative and qualitative experiments on benchmark video dataset demonstrate the superiority of the proposed method over the state-of-the-art algorithms.", "field": [], "task": ["Semantic Segmentation", "Video Object Segmentation", "Video Salient Object Detection", "Video Semantic Segmentation"], "method": [], "dataset": ["ViSal", "MCL", "DAVIS-2016", "DAVSOD-Difficult20", "VOS-T", "DAVSOD-Normal25", "SegTrack v2", "UVSD", "DAVSOD-easy35", "FBMS-59"], "metric": ["max E-Measure", "MAX F-MEASURE", "S-Measure", "AVERAGE MAE", "Average MAE", "max E-measure", "MAX E-MEASURE"], "title": "Saliency-Aware Geodesic Video Object Segmentation"} {"abstract": "Blind and universal image denoising consists of using a unique model that denoises images with any level of noise. 
It is especially practical as noise levels do not need to be known when the model is developed or at test time. We propose a theoretically-grounded blind and universal deep learning image denoiser for additive Gaussian noise removal. Our network is based on an optimal denoising solution, which we call fusion denoising. It is derived theoretically with a Gaussian image prior assumption. Synthetic experiments show our network's generalization strength to unseen additive noise levels. We also adapt the fusion denoising network architecture for image denoising on real images. Our approach improves real-world grayscale additive image denoising PSNR results for training noise levels and further on noise levels not seen during training. It also improves state-of-the-art color image denoising performance on every single noise level, by an average of 0.1dB, whether trained on or not.", "field": [], "task": ["Color Image Denoising", "Denoising", "Image Denoising"], "method": [], "dataset": ["BSD68 sigma65", "CBSD68 sigma15", "BSD68 sigma10", "CBSD68 sigma60", "BSD68 sigma5", "CBSD68 sigma40", "CBSD68 sigma50", "BSD68 sigma50", "CBSD68 sigma10", "BSD68 sigma75", "BSD68 sigma55", "BSD68 sigma35", "BSD68 sigma25", "BSD68 sigma45", "CBSD68 sigma25", "BSD68 sigma60", "CBSD68 sigma65", "CBSD68 sigma5", "BSD68 sigma40", "CBSD68 sigma70", "CBSD68 sigma30", "BSD68 sigma20", "BSD68 sigma30", "BSD68 sigma15", "CBSD68 sigma20", "CBSD68 sigma55", "CBSD68 sigma35", "CBSD68 sigma75", "BSD68 sigma70", "CBSD68 sigma45"], "metric": ["PSNR"], "title": "Blind Universal Bayesian Image Denoising with Gaussian Noise Level Learning"} {"abstract": "Target-based sentiment analysis or aspect-based sentiment analysis (ABSA) refers to addressing various sentiment analysis tasks at a fine-grained level, which includes but is not limited to aspect extraction, aspect sentiment classification, and opinion extraction. There exist many solvers of the above individual subtasks or a combination of two subtasks, and they can work together to tell a complete story, i.e. the discussed aspect, the sentiment on it, and the cause of the sentiment. However, no previous ABSA research tried to provide a complete solution in one shot. In this paper, we introduce a new subtask under ABSA, named aspect sentiment triplet extraction (ASTE). Particularly, a solver of this task needs to extract triplets (What, How, Why) from the inputs, which show WHAT the targeted aspects are, HOW their sentiment polarities are and WHY they have such polarities (i.e. opinion reasons). For instance, one triplet from \"Waiters are very friendly and the pasta is simply average\" could be ('Waiters', positive, 'friendly'). We propose a two-stage framework to address this task. The first stage predicts what, how and why in a unified model, and then the second stage pairs up the predicted what (how) and why from the first stage to output triplets. In the experiments, our framework has set a benchmark performance in this novel triplet extraction task. Meanwhile, it outperforms a few strong baselines adapted from state-of-the-art related methods.", "field": [], "task": ["Aspect-Based Sentiment Analysis", "Aspect Extraction", "Aspect Sentiment Triplet Extraction", "Sentiment Analysis"], "method": [], "dataset": ["SemEval"], "metric": ["F1"], "title": "Knowing What, How and Why: A Near Complete Solution for Aspect-based Sentiment Analysis"} {"abstract": "Learning feature interactions is crucial for click-through rate (CTR) prediction in recommender systems. 
In most existing deep learning models, feature interactions are either manually designed or simply enumerated. However, enumerating all feature interactions brings large memory and computation cost. Even worse, useless interactions may introduce noise and complicate the training process. In this work, we propose a two-stage algorithm called Automatic Feature Interaction Selection (AutoFIS). AutoFIS can automatically identify important feature interactions for factorization models with computational cost just equivalent to training the target model to convergence. In the \\emph{search stage}, instead of searching over a discrete set of candidate feature interactions, we relax the choices to be continuous by introducing the architecture parameters. By implementing a regularized optimizer over the architecture parameters, the model can automatically identify and remove the redundant feature interactions during the training process of the model. In the \\emph{re-train stage}, we keep the architecture parameters serving as an attention unit to further boost the performance. Offline experiments on three large-scale datasets (two public benchmarks, one private) demonstrate that AutoFIS can significantly improve various FM based models. AutoFIS has been deployed onto the training platform of Huawei App Store recommendation service, where a 10-day online A/B test demonstrated that AutoFIS improved the DeepFM model by 20.3\\% and 20.1\\% in terms of CTR and CVR respectively.", "field": [], "task": ["Click-Through Rate Prediction", "Recommendation Systems"], "method": [], "dataset": ["Criteo"], "metric": ["Log Loss", "AUC"], "title": "AutoFIS: Automatic Feature Interaction Selection in Factorization Models for Click-Through Rate Prediction"} {"abstract": "This paper investigates several aspects of training a RNN (recurrent neural network) that impact the objective and subjective quality of enhanced speech for real-time single-channel speech enhancement. Specifically, we focus on a RNN that enhances short-time speech spectra on a single-frame-in, single-frame-out basis, a framework adopted by most classical signal processing methods. We propose two novel mean-squared-error-based learning objectives that enable separate control over the importance of speech distortion versus noise reduction. The proposed loss functions are evaluated by widely accepted objective quality and intelligibility measures and compared to other competitive online methods. In addition, we study the impact of feature normalization and varying batch sequence lengths on the objective quality of enhanced speech. Finally, we show subjective ratings for the proposed approach and a state-of-the-art real-time RNN-based method.", "field": [], "task": ["Speech Enhancement"], "method": [], "dataset": ["Deep Noise Suppression (DNS) Challenge"], "metric": ["PESQ-WB", "PESQ-NB"], "title": "Weighted Speech Distortion Losses for Neural-Network-Based Real-Time Speech Enhancement"} {"abstract": "Motivated by the lack of data for non-English languages, in particular for the evaluation of downstream tasks such as Question Answering, we present a participatory effort to collect a native French Question Answering Dataset. 
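Returning to the AutoFIS entry above: its search stage attaches a continuous architecture parameter to every second-order factorization-machine interaction so that redundant pairs can be driven toward zero. The NumPy sketch below shows that gating idea in its simplest form; the field count, embedding size, and hand-set gate values are assumptions, and the regularized optimizer used to learn the gates is not reproduced.

```python
import numpy as np

def gated_fm_interactions(x, V, alpha):
    """Second-order FM term with one gate per feature pair.

    x:     (m,)    feature values for m fields
    V:     (m, k)  embedding vectors
    alpha: (m, m)  architecture parameters; alpha[i, j] gates pair (i, j)
    """
    m = x.shape[0]
    total = 0.0
    for i in range(m):
        for j in range(i + 1, m):
            total += alpha[i, j] * np.dot(V[i], V[j]) * x[i] * x[j]
    return total

rng = np.random.default_rng(0)
m, k = 8, 4
x = rng.random(m)
V = rng.normal(size=(m, k))
alpha = np.ones((m, m))          # all-ones gates recover the plain FM term
alpha[0, 3] = 0.0                # a "removed" interaction after the search stage
print(gated_fm_interactions(x, V, alpha))
```

With every gate set to one this reduces to the standard factorization-machine second-order term.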
Furthermore, we describe and publicly release the annotation tool developed for our collection effort, along with the data obtained and preliminary baselines.", "field": [], "task": ["Question Answering"], "method": [], "dataset": ["FQuAD"], "metric": ["EM", "F1"], "title": "Project PIAF: Building a Native French Question-Answering Dataset"} {"abstract": "We propose a novel Generative Adversarial Network (XingGAN or CrossingGAN) for person image generation tasks, i.e., translating the pose of a given person to a desired one. The proposed Xing generator consists of two generation branches that model the person's appearance and shape information, respectively. Moreover, we propose two novel blocks to effectively transfer and update the person's shape and appearance embeddings in a crossing way to mutually improve each other, which has not been considered by any other existing GAN-based image generation work. Extensive experiments on two challenging datasets, i.e., Market-1501 and DeepFashion, demonstrate that the proposed XingGAN advances the state-of-the-art performance both in terms of objective quantitative scores and subjective visual realness. The source code and trained models are available at https://github.com/Ha0Tang/XingGAN.", "field": [], "task": ["Image Generation", "Pose Transfer"], "method": [], "dataset": ["Market-1501", "Deep-Fashion"], "metric": ["PCKh", "SSIM", "mask-IS", "mask-SSIM", "IS"], "title": "XingGAN for Person Image Generation"} {"abstract": "Biomedical interaction networks have incredible potential to be useful in the prediction of biologically meaningful interactions, identification of network biomarkers of disease, and the discovery of putative drug targets. Recently, graph neural networks have been proposed to effectively learn representations for biomedical entities and achieved state-of-the-art results in biomedical interaction prediction. These methods only consider information from immediate neighbors but cannot learn a general mixing of features from neighbors at various distances. In this paper, we present a higher-order graph convolutional network (HOGCN) to aggregate information from the higher-order neighborhood for biomedical interaction prediction. Specifically, HOGCN collects feature representations of neighbors at various distances and learns their linear mixing to obtain informative representations of biomedical entities. Experiments on four interaction networks, including protein-protein, drug-drug, drug-target, and gene-disease interactions, show that HOGCN achieves more accurate and calibrated predictions. HOGCN performs well on noisy, sparse interaction networks when feature representations of neighbors at various distances are considered. Moreover, a set of novel interaction predictions are validated by literature-based case studies.", "field": [], "task": ["Link Prediction"], "method": [], "dataset": ["Drug-target interactions", "protein-protein interactions", "Drug-Drug Interactions", "Gene-disease interactions"], "metric": ["AUPRC"], "title": "Predicting Biomedical Interactions with Higher-Order Graph Convolutional Networks"} {"abstract": "We combine character-level and contextual language model representations to improve performance on Discourse Representation Structure parsing. Character representations can easily be added in a sequence-to-sequence model in either one encoder or as a fully separate encoder, with improvements that are robust to different language models, languages and data sets. 
For English, these improvements are larger than adding individual sources of linguistic information or adding non-contextual embeddings. A new method of analysis based on semantic tags demonstrates that the character-level representations improve performance across a subset of selected semantic phenomena.", "field": [], "task": ["DRS Parsing", "Language Modelling", "Semantic Parsing"], "method": [], "dataset": ["PMB-3.0.0", "PMB-2.2.0"], "metric": ["F1"], "title": "Character-level Representations Improve DRS-based Semantic Parsing Even in the Age of BERT"} {"abstract": "In this work we design a neural network for recognizing emotions in speech,\nusing the IEMOCAP dataset. Following the latest advances in audio analysis, we\nuse an architecture involving both convolutional layers, for extracting\nhigh-level features from raw spectrograms, and recurrent ones for aggregating\nlong-term dependencies. We examine the techniques of data augmentation with\nvocal track length perturbation, layer-wise optimizer adjustment, batch\nnormalization of recurrent layers and obtain highly competitive results of\n64.5% for weighted accuracy and 61.7% for unweighted accuracy on four emotions.", "field": [], "task": ["Data Augmentation", "Emotion Recognition", "Speech Emotion Recognition"], "method": [], "dataset": ["IEMOCAP"], "metric": ["UA"], "title": "CNN+LSTM Architecture for Speech Emotion Recognition with Data Augmentation"} {"abstract": "Recent two-stream deep Convolutional Neural Networks (ConvNets) have made\nsignificant progress in recognizing human actions in videos. Despite their\nsuccess, methods extending the basic two-stream ConvNet have not systematically\nexplored possible network architectures to further exploit spatiotemporal\ndynamics within video sequences. Further, such networks often use different\nbaseline two-stream networks. Therefore, the differences and the distinguishing\nfactors between various methods using Recurrent Neural Networks (RNN) or\nconvolutional networks on temporally-constructed feature vectors\n(Temporal-ConvNet) are unclear. In this work, we first demonstrate a strong\nbaseline two-stream ConvNet using ResNet-101. We use this baseline to\nthoroughly examine the use of both RNNs and Temporal-ConvNets for extracting\nspatiotemporal information. Building upon our experimental results, we then\npropose and investigate two different networks to further integrate\nspatiotemporal information: 1) temporal segment RNN and 2) Inception-style\nTemporal-ConvNet. We demonstrate that using both RNNs (using LSTMs) and\nTemporal-ConvNets on spatiotemporal feature matrices are able to exploit\nspatiotemporal dynamics to improve the overall performance. However, each of\nthese methods require proper care to achieve state-of-the-art performance; for\nexample, LSTMs require pre-segmented data or else they cannot fully exploit\ntemporal information. Our analysis identifies specific limitations for each\nmethod that could form the basis of future work. 
Our experimental results on\nUCF101 and HMDB51 datasets achieve state-of-the-art performances, 94.1% and\n69.0%, respectively, without requiring extensive temporal augmentation.", "field": [], "task": ["Action Classification", "Action Recognition", "Activity Recognition", "Temporal Action Localization", "Video Classification", "Video Understanding"], "method": [], "dataset": ["UCF101", "HMDB-51"], "metric": ["Average accuracy of 3 splits", "3-fold Accuracy"], "title": "TS-LSTM and Temporal-Inception: Exploiting Spatiotemporal Dynamics for Activity Recognition"} {"abstract": "Neural machine translation has recently achieved impressive results, while\nusing little in the way of external linguistic information. In this paper we\nshow that the strong learning capability of neural MT models does not make\nlinguistic features redundant; they can be easily incorporated to provide\nfurther improvements in performance. We generalize the embedding layer of the\nencoder in the attentional encoder--decoder architecture to support the\ninclusion of arbitrary features, in addition to the baseline word feature. We\nadd morphological features, part-of-speech tags, and syntactic dependency\nlabels as input features to English<->German, and English->Romanian neural\nmachine translation systems. In experiments on WMT16 training and test sets, we\nfind that linguistic input features improve model quality according to three\nmetrics: perplexity, BLEU and CHRF3. An open-source implementation of our\nneural MT system is available, as are sample files and configurations.", "field": [], "task": ["Machine Translation"], "method": [], "dataset": ["WMT2016 English-German", "WMT2016 German-English"], "metric": ["BLEU score"], "title": "Linguistic Input Features Improve Neural Machine Translation"} {"abstract": "Syntactic constituency parsing is a fundamental problem in natural language\nprocessing and has been the subject of intensive research and engineering for\ndecades. As a result, the most accurate parsers are domain specific, complex,\nand inefficient. In this paper we show that the domain agnostic\nattention-enhanced sequence-to-sequence model achieves state-of-the-art results\non the most widely used syntactic constituency parsing dataset, when trained on\na large synthetic corpus that was annotated using existing parsers. It also\nmatches the performance of standard parsers when trained only on a small\nhuman-annotated dataset, which shows that this model is highly data-efficient,\nin contrast to sequence-to-sequence models without the attention mechanism. Our\nparser is also fast, processing over a hundred sentences per second with an\nunoptimized CPU implementation.", "field": [], "task": ["Constituency Parsing"], "method": [], "dataset": ["Penn Treebank"], "metric": ["F1 score"], "title": "Grammar as a Foreign Language"} {"abstract": "We study the problem of unsupervised domain adaptive re-identification\n(re-ID) which is an active topic in computer vision but lacks a theoretical\nfoundation. We first extend existing unsupervised domain adaptive\nclassification theories to re-ID tasks. Concretely, we introduce some\nassumptions on the extracted feature space and then derive several loss\nfunctions guided by these assumptions. To optimize them, a novel self-training\nscheme for unsupervised domain adaptive re-ID tasks is proposed. It iteratively\nmakes guesses for unlabeled target data based on an encoder and trains the\nencoder based on the guessed labels. 
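A short aside on the domain-adaptive re-ID entry above, which alternates between guessing labels for unlabeled target data and retraining the encoder on those guesses: the NumPy sketch below shows one such pseudo-labeling round using plain k-means over extracted features. The cluster count and the choice of k-means are assumptions made for illustration; the paper's theoretically derived loss functions are not reproduced.

```python
import numpy as np

def kmeans_pseudo_labels(features, n_clusters=4, iters=20, seed=0):
    """One self-training step: cluster target-domain features and return
    the cluster indices as pseudo labels for retraining the encoder."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), n_clusters, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for c in range(n_clusters):
            members = features[labels == c]
            if len(members) > 0:
                centers[c] = members.mean(axis=0)
    return labels

target_feats = np.random.randn(200, 32)      # stand-in for encoder outputs
pseudo = kmeans_pseudo_labels(target_feats)
print(np.bincount(pseudo))                   # cluster sizes used as pseudo classes
```

In the full scheme the encoder is retrained on these pseudo labels and the clustering is then repeated.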
Extensive experiments on unsupervised\ndomain adaptive person re-ID and vehicle re-ID tasks with comparisons to the\nstate-of-the-arts confirm the effectiveness of the proposed theories and\nself-training framework. Our code is available at\n\\url{https://github.com/LcDog/DomainAdaptiveReID}.", "field": [], "task": ["Unsupervised Domain Adaptation"], "method": [], "dataset": ["Duke to Market", "Market to Duke"], "metric": ["rank-10", "mAP", "rank-5", "rank-1"], "title": "Unsupervised Domain Adaptive Re-Identification: Theory and Practice"} {"abstract": "Occlusion is commonplace in realistic human-robot shared environments, yet\nits effects are not considered in standard 3D human pose estimation benchmarks.\nThis leaves the question open: how robust are state-of-the-art 3D pose\nestimation methods against partial occlusions? We study several types of\nsynthetic occlusions over the Human3.6M dataset and find a method with\nstate-of-the-art benchmark performance to be sensitive even to low amounts of\nocclusion. Addressing this issue is key to progress in applications such as\ncollaborative and service robotics. We take a first step in this direction by\nimproving occlusion-robustness through training data augmentation with\nsynthetic occlusions. This also turns out to be an effective regularizer that\nis beneficial even for non-occluded test cases.", "field": [], "task": ["3D Human Pose Estimation", "3D Pose Estimation", "Data Augmentation", "Pose Estimation"], "method": [], "dataset": ["Human3.6M"], "metric": ["Average MPJPE (mm)"], "title": "How Robust is 3D Human Pose Estimation to Occlusion?"} {"abstract": "Low rank matrix completion plays a fundamental role in collaborative filtering applications, the key idea being that the variables lie in a smaller subspace than the ambient space. Often, additional information about the variables is known, and it is reasonable to assume that incorporating this information will lead to better predictions. We tackle the problem of matrix completion when pairwise relationships among variables are known, via a graph. We formulate and derive a highly efficient, conjugate gradient based alternating minimization scheme that solves optimizations with over 55 million observations up to 2 orders of magnitude faster than state-of-the-art (stochastic) gradient-descent based methods. On the theoretical front, we show that such methods generalize weighted nuclear norm formulations, and derive statistical consistency guarantees. We validate our results on both real and synthetic datasets.", "field": [], "task": ["Low-Rank Matrix Completion", "Matrix Completion", "Recommendation Systems"], "method": [], "dataset": ["YahooMusic", "Flixster Monti", "Douban Monti", "YahooMusic Monti", "Flixster", "Douban", "MovieLens 100K"], "metric": ["RMSE (u1 Splits)", "RMSE"], "title": "Collaborative Filtering with Graph Information: Consistency and Scalable Methods"} {"abstract": "Community detection refers to the task of discovering groups of vertices sharing similar properties or functions so as to understand the network data. With the recent development of deep learning, graph representation learning techniques are also utilized for community detection. However, the communities can only be inferred by applying clustering algorithms based on learned vertex embeddings. These general cluster algorithms like K-means and Gaussian Mixture Model cannot output much overlapped communities, which have been proved to be very common in many real-world networks. 
In this paper, we propose CommunityGAN, a novel community detection framework that jointly solves overlapping community detection and graph representation learning. First, unlike the embedding of conventional graph representation learning algorithms where the vector entry values have no specific meanings, the embedding of CommunityGAN indicates the membership strength of vertices to communities. Second, a specifically designed Generative Adversarial Net (GAN) is adopted to optimize such embedding. Through the minimax competition between the motif-level generator and discriminator, both of them can alternatively and iteratively boost their performance and finally output a better community structure. Extensive experiments on synthetic data and real-world tasks demonstrate that CommunityGAN achieves substantial community detection performance gains over the state-of-the-art methods.", "field": [], "task": ["Community Detection", "Graph Representation Learning", "Representation Learning"], "method": [], "dataset": ["DBLP", "Amazon"], "metric": ["F1-score", "F1-Score"], "title": "CommunityGAN: Community Detection with Generative Adversarial Nets"} {"abstract": "In aspect-level sentiment classification (ASC), it is prevalent to equip dominant neural models with attention mechanisms, for the sake of acquiring the importance of each context word on the given aspect. However, such a mechanism tends to excessively focus on a few frequent words with sentiment polarities, while ignoring infrequent ones. In this paper, we propose a progressive self-supervised attention learning approach for neural ASC models, which automatically mines useful attention supervision information from a training corpus to refine attention mechanisms. Specifically, we iteratively conduct sentiment predictions on all training instances. Particularly, at each iteration, the context word with the maximum attention weight is extracted as the one with active/misleading influence on the correct/incorrect prediction of every instance, and then the word itself is masked for subsequent iterations. Finally, we augment the conventional training objective with a regularization term, which enables ASC models to continue equally focusing on the extracted active context words while decreasing weights of those misleading ones. Experimental results on multiple datasets show that our proposed approach yields better attention mechanisms, leading to substantial improvements over the two state-of-the-art neural ASC models. Source code and trained models are available at https://github.com/DeepLearnXMU/PSSAttention.", "field": [], "task": ["Aspect-Based Sentiment Analysis", "Sentiment Analysis"], "method": [], "dataset": ["SemEval 2014 Task 4 Sub Task 2"], "metric": ["Laptop (Acc)", "Restaurant (Acc)", "Mean Acc (Restaurant + Laptop)"], "title": "Progressive Self-Supervised Attention Learning for Aspect-Level Sentiment Analysis"} {"abstract": "One of the main factors that contributed to the large advances in autonomous driving is the advent of deep learning. For safer self-driving vehicles, one of the problems that has yet to be solved completely is lane detection. Since methods for this task have to work in real-time (+30 FPS), they not only have to be effective (i.e., have high accuracy) but they also have to be efficient (i.e., fast). 
In this work, we present a novel method for lane detection that uses as input an image from a forward-looking camera mounted in the vehicle and outputs polynomials representing each lane marking in the image, via deep polynomial regression. The proposed method is shown to be competitive with existing state-of-the-art methods in the TuSimple dataset while maintaining its efficiency (115 FPS). Additionally, extensive qualitative results on two additional public datasets are presented, alongside with limitations in the evaluation metrics used by recent works for lane detection. Finally, we provide source code and trained models that allow others to replicate all the results shown in this paper, which is surprisingly rare in state-of-the-art lane detection methods. The full source code and pretrained models are available at https://github.com/lucastabelini/PolyLaneNet.", "field": [], "task": ["Autonomous Driving", "Lane Detection", "Regression"], "method": [], "dataset": ["TuSimple"], "metric": ["F1 score", "Accuracy"], "title": "PolyLaneNet: Lane Estimation via Deep Polynomial Regression"} {"abstract": "The feed-forward architectures of recently proposed deep super-resolution\nnetworks learn representations of low-resolution inputs, and the non-linear\nmapping from those to high-resolution output. However, this approach does not\nfully address the mutual dependencies of low- and high-resolution images. We\npropose Deep Back-Projection Networks (DBPN), that exploit iterative up- and\ndown-sampling layers, providing an error feedback mechanism for projection\nerrors at each stage. We construct mutually-connected up- and down-sampling\nstages each of which represents different types of image degradation and\nhigh-resolution components. We show that extending this idea to allow\nconcatenation of features across up- and down-sampling stages (Dense DBPN)\nallows us to reconstruct further improve super-resolution, yielding superior\nresults and in particular establishing new state of the art results for large\nscaling factors such as 8x across multiple data sets.", "field": [], "task": ["Image Super-Resolution", "Super-Resolution", "Video Super-Resolution"], "method": [], "dataset": ["Set14 - 4x upscaling", "Manga109 - 4x upscaling", "Vid4 - 4x upscaling", "BSD100 - 4x upscaling", "Set5 - 4x upscaling", "Urban100 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Deep Back-Projection Networks For Super-Resolution"} {"abstract": "We propose prototypical networks for the problem of few-shot classification,\nwhere a classifier must generalize to new classes not seen in the training set,\ngiven only a small number of examples of each new class. Prototypical networks\nlearn a metric space in which classification can be performed by computing\ndistances to prototype representations of each class. Compared to recent\napproaches for few-shot learning, they reflect a simpler inductive bias that is\nbeneficial in this limited-data regime, and achieve excellent results. We\nprovide an analysis showing that some simple design decisions can yield\nsubstantial improvements over recent approaches involving complicated\narchitectural choices and meta-learning. 
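An aside on the prototypical-networks entry above: its decision rule is a softmax over negative distances to per-class prototypes, where each prototype is the mean embedding of that class's support examples. The NumPy sketch below reproduces that rule for a single 5-way 5-shot episode, with random vectors standing in for the output of a trained encoder.

```python
import numpy as np

def prototypical_predict(support, support_labels, queries, n_classes):
    """support: (n_support, d) embeddings; queries: (n_query, d) embeddings.
    Returns class probabilities from a softmax over negative squared distances."""
    prototypes = np.stack([support[support_labels == c].mean(axis=0)
                           for c in range(n_classes)])          # (n_classes, d)
    d2 = ((queries[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    logits = -d2
    logits -= logits.max(axis=1, keepdims=True)                 # stable softmax
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
support = rng.normal(size=(25, 16))                  # 5-way 5-shot episode
labels = np.repeat(np.arange(5), 5)
queries = rng.normal(size=(10, 16))
print(prototypical_predict(support, labels, queries, 5).shape)  # (10, 5)
```

During training the same computation is run on sampled episodes and the cross-entropy on the query probabilities is back-propagated through the encoder.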
We further extend prototypical\nnetworks to zero-shot learning and achieve state-of-the-art results on the\nCU-Birds dataset.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Meta-Learning", "One-Shot Learning", "Zero-Shot Learning"], "method": [], "dataset": ["OMNIGLOT - 1-Shot, 5-way", "Stanford Dogs 5-way (5-shot)", "OMNIGLOT - 5-Shot, 20-way", "Mini-Imagenet 5-way (1-shot)", "Stanford Cars 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "Mini-ImageNet-CUB 5-way (1-shot)", "OMNIGLOT - 5-Shot, 5-way", "OMNIGLOT - 1-Shot, 20-way", "CUB 200 50-way (0-shot)", "Stanford Cars 5-way (5-shot)", "Mini-Imagenet 5-way (10-shot)", "Tiered ImageNet 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Prototypical Networks for Few-shot Learning"} {"abstract": "Pooling is an essential component of a wide variety of sentence\nrepresentation and embedding models. This paper explores generalized pooling\nmethods to enhance sentence embedding. We propose vector-based multi-head\nattention that includes the widely used max pooling, mean pooling, and scalar\nself-attention as special cases. The model benefits from properly designed\npenalization terms to reduce redundancy in multi-head attention. We evaluate\nthe proposed model on three different tasks: natural language inference (NLI),\nauthor profiling, and sentiment classification. The experiments show that the\nproposed model achieves significant improvement over strong\nsentence-encoding-based methods, resulting in state-of-the-art performances on\nfour datasets. The proposed approach can be easily implemented for more\nproblems than we discuss in this paper.", "field": [], "task": ["Natural Language Inference", "Sentence Embedding", "Sentiment Analysis"], "method": [], "dataset": ["Yelp Fine-grained classification", "SNLI"], "metric": ["Error", "Parameters", "% Train Accuracy", "% Test Accuracy"], "title": "Enhancing Sentence Embedding with Generalized Pooling"} {"abstract": "Few-shot classification aims to learn a classifier to recognize unseen classes during training with limited labeled examples. While significant progress has been made, the growing complexity of network designs, meta-learning algorithms, and differences in implementation details make a fair comparison difficult. In this paper, we present 1) a consistent comparative analysis of several representative few-shot classification algorithms, with results showing that deeper backbones significantly reduce the performance differences among methods on datasets with limited domain differences, 2) a modified baseline method that surprisingly achieves competitive performance when compared with the state-of-the-art on both the \\miniI and the CUB datasets, and 3) a new experimental setting for evaluating the cross-domain generalization ability for few-shot classification algorithms. Our results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones. 
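A brief aside on the generalized-pooling entry above, which replaces max/mean pooling with vector-based multi-head attention over token states: the single-head NumPy sketch below shows the underlying mechanism, and a zero scoring vector recovers plain mean pooling. The dimensions and the single head are simplifications, not the paper's exact formulation or its penalization terms.

```python
import numpy as np

def attention_pool(tokens, score_vec):
    """tokens: (seq_len, d) hidden states; score_vec: (d,) learned parameters.
    Returns a weighted sum of the token vectors."""
    scores = tokens @ score_vec                      # (seq_len,)
    scores -= scores.max()
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over positions
    return weights @ tokens                          # (d,)

rng = np.random.default_rng(2)
h = rng.normal(size=(12, 8))                         # 12 tokens, 8-dim states
print(np.allclose(attention_pool(h, np.zeros(8)), h.mean(axis=0)))  # True
```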
In a realistic cross-domain evaluation setting, we show that a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.", "field": [], "task": ["Domain Generalization", "Few-Shot Image Classification", "Few-Shot Learning", "Meta-Learning"], "method": [], "dataset": ["Mini-ImageNet-CUB 5-way (1-shot)", "Mini-ImageNet-CUB 5-way (5-shot)"], "metric": ["Accuracy"], "title": "A Closer Look at Few-shot Classification"} {"abstract": "Timeline summarization targets at concisely summarizing the evolution trajectory along the timeline and existing timeline summarization approaches are all based on extractive methods.In this paper, we propose the task of abstractive timeline summarization, which tends to concisely paraphrase the information in the time-stamped events.Unlike traditional document summarization, timeline summarization needs to model the time series information of the input events and summarize important events in chronological order.To tackle this challenge, we propose a memory-based timeline summarization model (MTS).Concretely, we propose a time-event memory to establish a timeline, and use the time position of events on this timeline to guide generation process.Besides, in each decoding step, we incorporate event-level information into word-level attention to avoid confusion between events.Extensive experiments are conducted on a large-scale real-world dataset, and the results show that MTS achieves the state-of-the-art performance in terms of both automatic and human evaluations.", "field": [], "task": ["Abstractive Text Summarization", "Document Summarization", "Timeline Summarization", "Time Series"], "method": [], "dataset": ["MTS"], "metric": ["ROUGE-1"], "title": "Learning towards Abstractive Timeline Summarization"} {"abstract": "Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain. Prior UDA methods typically require to access the source data when learning to adapt the model, making them risky and inefficient for decentralized private data. This work tackles a practical setting where only a trained source model is available and investigates how we can effectively utilize such a model without source data to solve UDA problems. We propose a simple yet generic representation learning framework, named \\emph{Source HypOthesis Transfer} (SHOT). SHOT freezes the classifier module (hypothesis) of the source model and learns the target-specific feature extraction module by exploiting both information maximization and self-supervised pseudo-labeling to implicitly align representations from the target domains to the source hypothesis. To verify its versatility, we evaluate SHOT in a variety of adaptation cases including closed-set, partial-set, and open-set domain adaptation. Experiments indicate that SHOT yields state-of-the-art results among multiple domain adaptation benchmarks.", "field": [], "task": ["Domain Adaptation", "Partial Domain Adaptation", "Representation Learning", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["SVNH-to-MNIST", "Office-Home", "USPS-to-MNIST", "SVHN-to-MNIST", "Office-31", "MNIST-to-USPS", "VisDA2017"], "metric": ["Accuracy (%)", "Average Accuracy", "Accuracy"], "title": "Do We Really Need to Access the Source Data? 
Source Hypothesis Transfer for Unsupervised Domain Adaptation"} {"abstract": "We present the first human-annotated dialogue-based relation extraction (RE) dataset DialogRE, aiming to support the prediction of relation(s) between two arguments that appear in a dialogue. We further offer DialogRE as a platform for studying cross-sentence RE as most facts span multiple sentences. We argue that speaker-related information plays a critical role in the proposed task, based on an analysis of similarities and differences between dialogue-based and traditional RE tasks. Considering the timeliness of communication in a dialogue, we design a new metric to evaluate the performance of RE methods in a conversational setting and investigate the performance of several representative RE methods on DialogRE. Experimental results demonstrate that a speaker-aware extension on the best-performing model leads to gains in both the standard and conversational evaluation settings. DialogRE is available at https://dataset.org/dialogre/.", "field": [], "task": ["Dialog Relation Extraction", "Relation Extraction"], "method": [], "dataset": ["DialogRE"], "metric": ["F1", "F1c"], "title": "Dialogue-Based Relation Extraction"} {"abstract": "Question answering (QA) tasks have been posed using a variety of formats, such as extractive span selection, multiple choice, etc. This has led to format-specialized models, and even to an implicit division in the QA community. We argue that such boundaries are artificial and perhaps unnecessary, given the reasoning abilities we seek to teach are not governed by the format. As evidence, we use the latest advances in language modeling to build a single pre-trained QA model, UnifiedQA, that performs surprisingly well across 17 QA datasets spanning 4 diverse formats. UnifiedQA performs on par with 9 different models that were trained on individual datasets themselves. Even when faced with 12 unseen datasets of observed formats, UnifiedQA performs surprisingly well, showing strong generalization from its out-of-format training data. Finally, simply fine-tuning this pre-trained QA model into specialized models results in a new state of the art on 6 datasets, establishing UnifiedQA as a strong starting point for building QA systems.", "field": [], "task": ["Language Modelling", "Multi-Task Learning", "Question Answering"], "method": [], "dataset": ["CommonsenseQA", "Hendrycks Test"], "metric": ["EM", "Accuracy (%)"], "title": "UnifiedQA: Crossing Format Boundaries With a Single QA System"} {"abstract": "We present a novel Multi-Relational Graph Convolutional Network (MRGCN) based framework to model on-road vehicle behaviors from a sequence of temporally ordered frames as grabbed by a moving monocular camera. The input to MRGCN is a multi-relational graph where the graph's nodes represent the active and passive agents/objects in the scene, and the bidirectional edges that connect every pair of nodes are encodings of their Spatio-temporal relations. We show that this proposed explicit encoding and usage of an intermediate spatio-temporal interaction graph to be well suited for our tasks over learning end-end directly on a set of temporally ordered spatial relations. We also propose an attention mechanism for MRGCNs that conditioned on the scene dynamically scores the importance of information from different interaction types. The proposed framework achieves significant performance gain over prior methods on vehicle-behavior classification tasks on four datasets. 
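As an aside on the MRGCN entry above, which aggregates scene information per relation type over a spatio-temporal interaction graph: the NumPy sketch below is a bare-bones relational graph-convolution layer (one weight matrix per relation, degree-normalized adjacency with self-loops, messages summed across relations). It is a generic R-GCN-style layer for illustration, not the paper's attention-equipped model.

```python
import numpy as np

def relational_gcn_layer(H, adjacency_per_relation, weights_per_relation):
    """H: (n_nodes, d_in); each adjacency: (n_nodes, n_nodes);
    each weight: (d_in, d_out). Returns ReLU of summed per-relation messages."""
    out = 0.0
    for A, W in zip(adjacency_per_relation, weights_per_relation):
        A_hat = A + np.eye(A.shape[0])                    # add self-loops
        deg = A_hat.sum(axis=1, keepdims=True)
        out = out + (A_hat / deg) @ H @ W                 # normalized message passing
    return np.maximum(out, 0.0)

rng = np.random.default_rng(3)
n, d_in, d_out, n_rel = 6, 8, 4, 3
H = rng.normal(size=(n, d_in))
adjs = [rng.integers(0, 2, size=(n, n)).astype(float) for _ in range(n_rel)]
Ws = [rng.normal(size=(d_in, d_out)) for _ in range(n_rel)]
print(relational_gcn_layer(H, adjs, Ws).shape)            # (6, 4)
```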
We also show a seamless transfer of learning to multiple datasets without resorting to fine-tuning. Such behavior prediction methods find immediate relevance in a variety of navigation tasks such as behavior planning, state estimation, and applications relating to the detection of traffic violations over videos.", "field": [], "task": ["Motion Segmentation", "Semantic Segmentation", "Transfer Learning"], "method": [], "dataset": ["Apolloscape"], "metric": ["Accuracy"], "title": "Understanding Dynamic Scenes using Graph Convolution Networks"} {"abstract": "Extracting entities and relations from unstructured text has attracted increasing attention in recent years but remains challenging, due to the intrinsic difficulty in identifying overlapping relations with shared entities. Prior works show that joint learning can result in a noticeable performance gain. However, they usually involve sequential interrelated steps and suffer from the problem of exposure bias. At training time, they predict with the ground truth conditions while at inference it has to make extraction from scratch. This discrepancy leads to error accumulation. To mitigate the issue, we propose in this paper a one-stage joint extraction model, namely, TPLinker, which is capable of discovering overlapping relations sharing one or both entities while immune from the exposure bias. TPLinker formulates joint extraction as a token pair linking problem and introduces a novel handshaking tagging scheme that aligns the boundary tokens of entity pairs under each relation type. Experiment results show that TPLinker performs significantly better on overlapping and multiple relation extraction, and achieves state-of-the-art performance on two public datasets.", "field": [], "task": ["Relation Extraction"], "method": [], "dataset": ["NYT", "WebNLG"], "metric": ["F1"], "title": "TPLinker: Single-stage Joint Extraction of Entities and Relations Through Token Pair Linking"} {"abstract": "Autoregressive models are among the best performing neural density\nestimators. We describe an approach for increasing the flexibility of an\nautoregressive model, based on modelling the random numbers that the model uses\ninternally when generating data. By constructing a stack of autoregressive\nmodels, each modelling the random numbers of the next model in the stack, we\nobtain a type of normalizing flow suitable for density estimation, which we\ncall Masked Autoregressive Flow. This type of flow is closely related to\nInverse Autoregressive Flow and is a generalization of Real NVP. Masked\nAutoregressive Flow achieves state-of-the-art performance in a range of\ngeneral-purpose density estimation tasks.", "field": [], "task": ["Density Estimation"], "method": [], "dataset": ["MNIST (Conditional)", "CIFAR-10 (Conditional)", "UCI POWER", "UCI MINIBOONE", "CIFAR-10", "BSDS300", "MNIST", "UCI HEPMASS"], "metric": ["Log-likelihood"], "title": "Masked Autoregressive Flow for Density Estimation"} {"abstract": "In this study, we explore capsule networks with dynamic routing for text\nclassification. We propose three strategies to stabilize the dynamic routing\nprocess to alleviate the disturbance of some noise capsules which may contain\n\"background\" information or have not been successfully trained. A series of\nexperiments are conducted with capsule networks on six text classification\nbenchmarks. Capsule networks achieve state of the art on 4 out of 6 datasets,\nwhich shows the effectiveness of capsule networks for text classification. 
We\nadditionally show that capsule networks exhibit significant improvement when\ntransferring from single-label to multi-label text classification over strong baseline\nmethods. To the best of our knowledge, this is the first work in which capsule\nnetworks have been empirically investigated for text modeling.", "field": [], "task": ["Multi-Label Text Classification", "Sentiment Analysis", "Subjectivity Analysis", "Text Classification"], "method": [], "dataset": ["CR", "SST-2 Binary classification", "MR", "AG News", "TREC-6", "SUBJ"], "metric": ["Error", "Accuracy"], "title": "Investigating Capsule Networks with Dynamic Routing for Text Classification"} {"abstract": "A practical limitation of deep neural networks is their high degree of\nspecialization to a single task and visual domain. Recently, inspired by the\nsuccesses of transfer learning, several authors have proposed to learn instead\nuniversal, fixed feature extractors that, used as the first stage of any deep\nnetwork, work well for several tasks and domains simultaneously. Nevertheless,\nsuch universal features are still somewhat inferior to specialized networks.\n To overcome this limitation, in this paper we propose to consider instead\nuniversal parametric families of neural networks, which still contain\nspecialized problem-specific models, but differing only by a small number of\nparameters. We study different designs for such parametrizations, including\nseries and parallel residual adapters, joint adapter compression, and parameter\nallocations, and empirically identify the ones that yield the highest\ncompression. We show that, in order to maximize performance, it is necessary to\nadapt both shallow and deep layers of a deep network, but the required changes\nare very small. We also show that these universal parametrizations are very\neffective for transfer learning, where they outperform traditional fine-tuning\ntechniques.", "field": [], "task": ["Continual Learning", "Transfer Learning"], "method": [], "dataset": ["visual domain decathlon (10 tasks)"], "metric": ["decathlon discipline (Score)"], "title": "Efficient parametrization of multi-domain deep neural networks"} {"abstract": "This paper presents a real-time face detector, named Single Shot\nScale-invariant Face Detector (S$^3$FD), which performs superiorly on various\nscales of faces with a single deep neural network, especially for small faces.\nSpecifically, we try to solve the common problem that anchor-based detectors\ndeteriorate dramatically as the objects become smaller. We make contributions\nin the following three aspects: 1) proposing a scale-equitable face detection\nframework to handle different scales of faces well. We tile anchors on a wide\nrange of layers to ensure that all scales of faces have enough features for\ndetection.
Besides, we design anchor scales based on the effective receptive\nfield and a proposed equal proportion interval principle; 2) improving the\nrecall rate of small faces by a scale compensation anchor matching strategy; 3)\nreducing the false positive rate of small faces via a max-out background label.\nAs a consequence, our method achieves state-of-the-art detection performance on\nall the common face detection benchmarks, including the AFW, PASCAL face, FDDB\nand WIDER FACE datasets, and can run at 36 FPS on a Nvidia Titan X (Pascal) for\nVGA-resolution images.", "field": [], "task": ["Face Detection"], "method": [], "dataset": ["WIDER Face (Medium)", "WIDER Face (Easy)", "Annotated Faces in the Wild", "PASCAL Face", "WIDER Face (Hard)", "FDDB"], "metric": ["AP"], "title": "S$^3$FD: Single Shot Scale-invariant Face Detector"} {"abstract": "We present methodology for using dynamic evaluation to improve neural\nsequence models. Models are adapted to recent history via a gradient descent\nbased mechanism, causing them to assign higher probabilities to re-occurring\nsequential patterns. Dynamic evaluation outperforms existing adaptation\napproaches in our comparisons. Dynamic evaluation improves the state-of-the-art\nword-level perplexities on the Penn Treebank and WikiText-2 datasets to 51.1\nand 44.3 respectively, and the state-of-the-art character-level cross-entropies\non the text8 and Hutter Prize datasets to 1.19 bits/char and 1.08 bits/char\nrespectively.", "field": [], "task": ["Language Modelling"], "method": [], "dataset": ["Text8", "WikiText-2", "Penn Treebank (Word Level)", "Hutter Prize"], "metric": ["Number of params", "Bit per Character (BPC)", "Validation perplexity", "Test perplexity", "Params"], "title": "Dynamic Evaluation of Neural Sequence Models"} {"abstract": "Most conventional sentence similarity methods only focus on similar parts of\ntwo input sentences, and simply ignore the dissimilar parts, which usually give\nus some clues and semantic meanings about the sentences. In this work, we\npropose a model to take into account both the similarities and dissimilarities\nby decomposing and composing lexical semantics over sentences. The model\nrepresents each word as a vector, and calculates a semantic matching vector for\neach word based on all words in the other sentence. Then, each word vector is\ndecomposed into a similar component and a dissimilar component based on the\nsemantic matching vector. After this, a two-channel CNN model is employed to\ncapture features by composing the similar and dissimilar components. Finally, a\nsimilarity score is estimated over the composed feature vectors. Experimental\nresults show that our model gets the state-of-the-art performance on the answer\nsentence selection task, and achieves a comparable result on the paraphrase\nidentification task.", "field": [], "task": ["Paraphrase Identification", "Question Answering", "Sentence Similarity"], "method": [], "dataset": ["WikiQA"], "metric": ["MRR", "MAP"], "title": "Sentence Similarity Learning by Lexical Decomposition and Composition"} {"abstract": "This paper describes team Turing's submission to SemEval 2017 RumourEval:\nDetermining rumour veracity and support for rumours (SemEval 2017 Task 8,\nSubtask A). Subtask A addresses the challenge of rumour stance classification,\nwhich involves identifying the attitude of Twitter users towards the\ntruthfulness of the rumour they are discussing. 
Stance classification is\nconsidered to be an important step towards rumour verification; therefore,\nperforming well in this task is expected to be useful in debunking false\nrumours. In this work we classify a set of Twitter posts discussing rumours\ninto either supporting, denying, questioning or commenting on the underlying\nrumours. We propose an LSTM-based sequential model that, through modelling the\nconversational structure of tweets, achieves an accuracy of 0.784 on the\nRumourEval test set, outperforming all other systems in Subtask A.", "field": [], "task": ["Rumour Detection", "Stance Classification", "Stance Detection"], "method": [], "dataset": ["RumourEval"], "metric": ["Accuracy"], "title": "Turing at SemEval-2017 Task 8: Sequential Approach to Rumour Stance Classification with Branch-LSTM"} {"abstract": "Neural networks are powerful and flexible models that work well for many\ndifficult learning tasks in image, speech and natural language understanding.\nDespite their success, neural networks are still hard to design. In this paper,\nwe use a recurrent network to generate the model descriptions of neural\nnetworks and train this RNN with reinforcement learning to maximize the\nexpected accuracy of the generated architectures on a validation set. On the\nCIFAR-10 dataset, our method, starting from scratch, can design a novel network\narchitecture that rivals the best human-invented architecture in terms of test\nset accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is\n0.09 percent better and 1.05x faster than the previous state-of-the-art model\nthat used a similar architectural scheme. On the Penn Treebank dataset, our\nmodel can compose a novel recurrent cell that outperforms the widely-used LSTM\ncell, and other state-of-the-art baselines. Our cell achieves a test set\nperplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than\nthe previous state-of-the-art model. The cell can also be transferred to the\ncharacter language modeling task on PTB and achieves a state-of-the-art\nperplexity of 1.214.", "field": [], "task": ["Image Classification", "Language Modelling", "Natural Language Understanding", "Neural Architecture Search"], "method": [], "dataset": ["Penn Treebank (Word Level)", "Penn Treebank (Character Level)", "CIFAR-10 Image Classification", "CIFAR-10"], "metric": ["Number of params", "Bit per Character (BPC)", "Percentage error", "Percentage correct", "Test perplexity", "Params"], "title": "Neural Architecture Search with Reinforcement Learning"} {"abstract": "In this paper we introduce a new method for text detection in natural images.\nThe method comprises two contributions: First, a fast and scalable engine to\ngenerate synthetic images of text in clutter. This engine overlays synthetic\ntext onto existing background images in a natural way, accounting for the local\n3D scene geometry. Second, we use the synthetic images to train a\nFully-Convolutional Regression Network (FCRN) which efficiently performs text\ndetection and bounding-box regression at all locations and multiple scales in\nan image. We discuss the relation of FCRN to the recently-introduced YOLO\ndetector, as well as other end-to-end object detection systems based on deep\nlearning. The resulting detection network significantly outperforms current\nmethods for text detection in natural images, achieving an F-measure of 84.2%\non the standard ICDAR 2013 benchmark.
Furthermore, it can process 15 images per\nsecond on a GPU.", "field": [], "task": ["Object Detection", "Regression"], "method": [], "dataset": ["ICDAR 2013"], "metric": ["F-Measure", "Recall", "Precision"], "title": "Synthetic Data for Text Localisation in Natural Images"} {"abstract": "In this paper, we present MultiPoseNet, a novel bottom-up multi-person pose\nestimation architecture that combines a multi-task model with a novel\nassignment method. MultiPoseNet can jointly handle person detection, keypoint\ndetection, person segmentation and pose estimation problems. The novel\nassignment method is implemented by the Pose Residual Network (PRN) which\nreceives keypoint and person detections, and produces accurate poses by\nassigning keypoints to person instances. On the COCO keypoints dataset, our\npose estimation method outperforms all previous bottom-up methods both in\naccuracy (+4-point mAP over previous best result) and speed; it also performs\non par with the best top-down methods while being at least 4x faster. Our\nmethod is the fastest real time system with 23 frames/sec. Source code is\navailable at: https://github.com/mkocabas/pose-residual-network", "field": [], "task": ["Human Detection", "Keypoint Detection", "Multi-Person Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["COCO"], "metric": ["Validation AP", "AP"], "title": "MultiPoseNet: Fast Multi-Person Pose Estimation using Pose Residual Network"} {"abstract": "This paper addresses the challenge of dense pixel correspondence estimation\nbetween two images. This problem is closely related to optical flow estimation\ntask where ConvNets (CNNs) have recently achieved significant progress. While\noptical flow methods produce very accurate results for the small pixel\ntranslation and limited appearance variation scenarios, they hardly deal with\nthe strong geometric transformations that we consider in this work. In this\npaper, we propose a coarse-to-fine CNN-based framework that can leverage the\nadvantages of optical flow approaches and extend them to the case of large\ntransformations providing dense and subpixel accurate estimates. It is trained\non synthetic transformations and demonstrates very good performance to unseen,\nrealistic, data. Further, we apply our method to the problem of relative camera\npose estimation and demonstrate that the model outperforms existing dense\napproaches.", "field": [], "task": ["Dense Pixel Correspondence Estimation", "Optical Flow Estimation"], "method": [], "dataset": ["HPatches"], "metric": ["Viewpoint IV AEPE", "Viewpoint III AEPE", "Viewpoint I AEPE", "Viewpoint V AEPE", "Viewpoint II AEPE"], "title": "DGC-Net: Dense Geometric Correspondence Network"} {"abstract": "We study the use of the Wave-U-Net architecture for speech enhancement, a\nmodel introduced by Stoller et al for the separation of music vocals and\naccompaniment. This end-to-end learning method for audio source separation\noperates directly in the time domain, permitting the integrated modelling of\nphase information and being able to take large temporal contexts into account.\nOur experiments show that the proposed method improves several metrics, namely\nPESQ, CSIG, CBAK, COVL and SSNR, over the state-of-the-art with respect to the\nspeech enhancement task on the Voice Bank corpus (VCTK) dataset. We find that a\nreduced number of hidden layers is sufficient for speech enhancement in\ncomparison to the original system designed for singing voice separation in\nmusic. 
We see this initial result as an encouraging signal to further explore\nspeech enhancement in the time-domain, both as an end in itself and as a\npre-processing step to speech recognition systems.", "field": [], "task": ["Audio Source Separation", "Speech Enhancement", "Speech Recognition"], "method": [], "dataset": ["DEMAND"], "metric": ["CSIG", "COVL", "CBAK", "PESQ"], "title": "Improved Speech Enhancement with the Wave-U-Net"} {"abstract": "How to learn a discriminative fine-grained representation is a key point in many computer vision applications, such as person re-identification, fine-grained classification, fine-grained image retrieval, etc. Most of the previous methods focus on learning metrics or ensemble to derive better global representation, which are usually lack of local information. Based on the considerations above, we propose a novel Attribute-Aware Attention Model ($A^3M$), which can learn local attribute representation and global category representation simultaneously in an end-to-end manner. The proposed model contains two attention models: attribute-guided attention module uses attribute information to help select category features in different regions, at the same time, category-guided attention module selects local features of different attributes with the help of category cues. Through this attribute-category reciprocal process, local and global features benefit from each other. Finally, the resulting feature contains more intrinsic information for image recognition instead of the noisy and irrelevant features. Extensive experiments conducted on Market-1501, CompCars, CUB-200-2011 and CARS196 demonstrate the effectiveness of our $A^3M$. Code is available at https://github.com/iamhankai/attribute-aware-attention.", "field": [], "task": ["Fine-Grained Image Classification", "Image Retrieval", "Person Re-Identification", "Representation Learning"], "method": [], "dataset": ["CompCars", " CUB-200-2011", "Market-1501"], "metric": ["Rank-1", "Accuracy", "MAP"], "title": "Attribute-Aware Attention Model for Fine-grained Representation Learning"} {"abstract": "Deciphering human behaviors to predict their future paths/trajectories and what they would do from videos is important in many applications. Motivated by this idea, this paper studies predicting a pedestrian's future path jointly with future activities. We propose an end-to-end, multi-task learning system utilizing rich visual features about human behavioral information and interaction with their surroundings. To facilitate the training, the network is learned with an auxiliary task of predicting future location in which the activity will happen. Experimental results demonstrate our state-of-the-art performance over two public benchmarks on future trajectory prediction. Moreover, our method is able to produce meaningful future activity prediction in addition to the path. The result provides the first empirical evidence that joint modeling of paths and activities benefits future path prediction.", "field": [], "task": ["Activity Prediction", "Future prediction", "Human motion prediction", "Motion Forecasting", "Multi-Task Learning", "Trajectory Forecasting", "Trajectory Prediction"], "method": [], "dataset": ["ActEV", "ETH/UCY"], "metric": ["ADE-8/12", "mAP", "FDE-8/12"], "title": "Peeking into the Future: Predicting Future Person Activities and Locations in Videos"} {"abstract": "Factorization Machine (FM) is a widely used supervised learning approach by\neffectively modeling of feature interactions. 
Despite the successful\napplication of FM and its many deep learning variants, treating every feature\ninteraction fairly may degrade the performance. For example, the interactions\nof a useless feature may introduce noises; the importance of a feature may also\ndiffer when interacting with different features. In this work, we propose a\nnovel model named \\emph{Interaction-aware Factorization Machine} (IFM) by\nintroducing Interaction-Aware Mechanism (IAM), which comprises the\n\\emph{feature aspect} and the \\emph{field aspect}, to learn flexible\ninteractions on two levels. The feature aspect learns feature interaction\nimportance via an attention network while the field aspect learns the feature\ninteraction effect as a parametric similarity of the feature interaction vector\nand the corresponding field interaction prototype. IFM introduces more\nstructured control and learns feature interaction importance in a stratified\nmanner, which allows for more leverage in tweaking the interactions on both\nfeature-wise and field-wise levels. Besides, we give a more generalized\narchitecture and propose Interaction-aware Neural Network (INN) and DeepIFM to\ncapture higher-order interactions. To further improve both the performance and\nefficiency of IFM, a sampling scheme is developed to select interactions based\non the field aspect importance. The experimental results from two well-known\ndatasets show the superiority of the proposed models over the state-of-the-art\nmethods.", "field": [], "task": ["Recommendation Systems"], "method": [], "dataset": ["Frappe"], "metric": ["RMSE"], "title": "Interaction-aware Factorization Machines for Recommender Systems"} {"abstract": "Deep neural networks are vulnerable to adversarial attacks, which can fool them by adding minuscule perturbations to the input images. The robustness of existing defenses suffers greatly under white-box attack settings, where an adversary has full knowledge about the network and can iterate several times to find strong perturbations. We observe that the main reason for the existence of such perturbations is the close proximity of different class samples in the learned feature space. This allows model decisions to be totally changed by adding an imperceptible perturbation in the inputs. To counter this, we propose to class-wise disentangle the intermediate feature representations of deep networks. Specifically, we force the features for each class to lie inside a convex polytope that is maximally separated from the polytopes of other classes. In this manner, the network is forced to learn distinct and distant decision regions for each class. We observe that this simple constraint on the features greatly enhances the robustness of learned models, even against the strongest white-box attacks, without degrading the classification performance on clean images. We report extensive evaluations in both black-box and white-box attack scenarios and show significant gains in comparison to state-of-the art defenses.", "field": [], "task": ["Adversarial Defense"], "method": [], "dataset": ["CIFAR-10"], "metric": ["Accuracy"], "title": "Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks"} {"abstract": "Recently, matrix factorization-based recommendation methods have been criticized for the problem raised by the triangle inequality violation. 
Although several metric learning-based approaches have been proposed to overcome this issue, existing approaches typically project each user to a single point in the metric space, and thus do not suffice for properly modeling the intensity and the heterogeneity of user-item relationships in implicit feedback. In this paper, we propose TransCF to discover such latent user-item relationships embodied in implicit user-item interactions. Inspired by the translation mechanism popularized by knowledge graph embedding, we construct user-item specific translation vectors by employing the neighborhood information of users and items, and translate each user toward items according to the user's relationships with the items. Our proposed method outperforms several state-of-the-art methods for top-N recommendation on seven real-world data by up to 17% in terms of hit ratio. We also conduct extensive qualitative evaluations on the translation vectors learned by our proposed method to ascertain the benefit of adopting the translation mechanism for implicit feedback-based recommendations.", "field": [], "task": ["Graph Embedding", "Knowledge Graph Embedding", "Metric Learning"], "method": [], "dataset": ["Pinterest", "Ciao", "Book-Crossing", "Flixster", "Tradesy", "Declicious", "Amazon C&A"], "metric": ["Hits@10", "nDCG@10", "Hits@20", "nDCG@20"], "title": "Collaborative Translational Metric Learning"} {"abstract": "Modern machine learning suffers from catastrophic forgetting when learning new classes incrementally. The performance dramatically degrades due to the missing data of old classes. Incremental learning methods have been proposed to retain the knowledge acquired from the old classes, by using knowledge distilling and keeping a few exemplars from the old classes. However, these methods struggle to scale up to a large number of classes. We believe this is because of the combination of two factors: (a) the data imbalance between the old and new classes, and (b) the increasing number of visually similar classes. Distinguishing between an increasing number of visually similar classes is particularly challenging, when the training data is unbalanced. We propose a simple and effective method to address this data imbalance issue. We found that the last fully connected layer has a strong bias towards the new classes, and this bias can be corrected by a linear model. With two bias parameters, our method performs remarkably well on two large datasets: ImageNet (1000 classes) and MS-Celeb-1M (10000 classes), outperforming the state-of-the-art algorithms by 11.1% and 13.2% respectively.", "field": [], "task": ["Incremental Learning"], "method": [], "dataset": ["ImageNet - 500 classes + 10 steps of 50 classes", "CIFAR-100 - 50 classes + 25 steps of 2 classes", "CIFAR-100 - 50 classes + 5 steps of 10 classes", "ImageNet-100 - 50 classes + 50 steps of 1 class", "CIFAR-100 - 50 classes + 50 steps of 1 class", "CIFAR-100 - 50 classes + 10 steps of 5 classes"], "metric": ["Average Incremental Accuracy"], "title": "Large Scale Incremental Learning"} {"abstract": "Click-through rate (CTR) prediction is a critical task in online advertising systems. Most existing methods mainly model the feature-CTR relationship and suffer from the data sparsity issue. In this paper, we propose DeepMCP, which models other types of relationships in order to learn more informative and statistically reliable feature representations, and in consequence to improve the performance of CTR prediction. 
In particular, DeepMCP contains three parts: a matching subnet, a correlation subnet and a prediction subnet. These subnets model the user-ad, ad-ad and feature-CTR relationship respectively. When these subnets are jointly optimized under the supervision of the target labels, the learned feature representations have both good prediction powers and good representation abilities. Experiments on two large-scale datasets demonstrate that DeepMCP outperforms several state-of-the-art models for CTR prediction.", "field": [], "task": ["Click-Through Rate Prediction", "Representation Learning"], "method": [], "dataset": ["Avito", "Company*"], "metric": ["Log Loss", "AUC"], "title": "Representation Learning-Assisted Click-Through Rate Prediction"} {"abstract": "Supervised deep learning with pixel-wise training labels has great successes on multi-person part segmentation. However, data labeling at pixel-level is very expensive. To solve the problem, people have been exploring to use synthetic data to avoid the data labeling. Although it is easy to generate labels for synthetic data, the results are much worse compared to those using real data and manual labeling. The degradation of the performance is mainly due to the domain gap, i.e., the discrepancy of the pixel value statistics between real and synthetic data. In this paper, we observe that real and synthetic humans both have a skeleton (pose) representation. We found that the skeletons can effectively bridge the synthetic and real domains during the training. Our proposed approach takes advantage of the rich and realistic variations of the real data and the easily obtainable labels of the synthetic data to learn multi-person part segmentation on real images without any human-annotated labels. Through experiments, we show that without any human labeling, our method performs comparably to several state-of-the-art approaches which require human labeling on Pascal-Person-Parts and COCO-DensePose datasets. On the other hand, if part labels are also available in the real-images during training, our method outperforms the supervised state-of-the-art methods by a large margin. We further demonstrate the generalizability of our method on predicting novel keypoints in real images where no real data labels are available for the novel keypoints detection. Code and pre-trained models are available at https://github.com/kevinlin311tw/CDCL-human-part-segmentation", "field": [], "task": ["Domain Adaptation", "Human Part Segmentation", "Multi-Human Parsing", "Pose Estimation"], "method": [], "dataset": ["PASCAL-Part"], "metric": ["mIoU"], "title": "Cross-Domain Complementary Learning Using Pose for Multi-Person Part Segmentation"} {"abstract": "Deep learning based image Super-Resolution (SR) has shown rapid development due to its ability of big data digestion. Generally, deeper and wider networks can extract richer feature maps and generate SR images with remarkable quality. However, the more complex network we have, the more time consumption is required for practical applications. It is important to have a simplified network for efficient image SR. In this paper, we propose an Attention based Back Projection Network (ABPN) for image super-resolution. Similar to some recent works, we believe that the back projection mechanism can be further developed for SR. Enhanced back projection blocks are suggested to iteratively update low- and high-resolution feature residues. 
Inspired by recent studies on attention models, we propose a Spatial Attention Block (SAB) to learn the cross-correlation across features at different layers. Based on the assumption that a good SR image should be close to the original LR image after down-sampling. We propose a Refined Back Projection Block (RBPB) for final reconstruction. Extensive experiments on some public and AIM2019 Image Super-Resolution Challenge datasets show that the proposed ABPN can provide state-of-the-art or even better performance in both quantitative and qualitative measurements.", "field": [], "task": ["Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["Urban100 - 16x upscaling", "Manga109 - 16x upscaling", "Manga109 - 8x upscaling", "Set14 - 4x upscaling", "Manga109 - 4x upscaling", "BSD100 - 4x upscaling", "DIV8K val - 16x upscaling", "DIV2K val - 16x upscaling", "Set5 - 4x upscaling", "Set14 - 8x upscaling", "Urban100 - 8x upscaling", "Set5 - 8x upscaling", "BSD100 - 16x upscaling", "BSD100 - 8x upscaling", "Urban100 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Image Super-Resolution via Attention based Back Projection Networks"} {"abstract": "Most algorithms for representation learning and link prediction in relational data have been designed for static data. However, the data they are applied to usually evolves with time, such as friend graphs in social networks or user interactions with items in recommender systems. This is also the case for knowledge bases, which contain facts such as (US, has president, B. Obama, [2009-2017]) that are valid only at certain points in time. For the problem of link prediction under temporal constraints, i.e., answering queries such as (US, has president, ?, 2012), we propose a solution inspired by the canonical decomposition of tensors of order 4. We introduce new regularization schemes and present an extension of ComplEx (Trouillon et al., 2016) that achieves state-of-the-art performance. Additionally, we propose a new dataset for knowledge base completion constructed from Wikidata, larger than previous benchmarks by an order of magnitude, as a new reference for evaluating temporal and non-temporal link prediction methods.", "field": [], "task": ["Knowledge Base Completion", "Link Prediction", "Recommendation Systems", "Representation Learning"], "method": [], "dataset": ["YAGO15k", "ICEWS05-15", "ICEWS14"], "metric": ["MRR"], "title": "Tensor Decompositions for temporal knowledge base completion"} {"abstract": "Grasping is natural for humans. However, it involves complex hand configurations and soft tissue deformation that can result in complicated regions of contact between the hand and the object. Understanding and modeling this contact can potentially improve hand models, AR/VR experiences, and robotic grasping. Yet, we currently lack datasets of hand-object contact paired with other data modalities, which is crucial for developing and evaluating contact modeling techniques. We introduce ContactPose, the first dataset of hand-object contact paired with hand pose, object pose, and RGB-D images. ContactPose has 2306 unique grasps of 25 household objects grasped with 2 functional intents by 50 participants, and more than 2.9 M RGB-D grasp images. Analysis of ContactPose data reveals interesting relationships between hand pose and contact. We use this data to rigorously evaluate various data representations, heuristics from the literature, and learning methods for contact modeling. 
Data, code, and trained models are available at https://contactpose.cc.gatech.edu.", "field": [], "task": ["Grasp Contact Prediction"], "method": [], "dataset": ["ContactPose"], "metric": ["AUC"], "title": "ContactPose: A Dataset of Grasps with Object Contact and Hand Pose"} {"abstract": "The ability to base current computations on memories from the past is critical for many cognitive tasks such as story understanding. Hebbian-type synaptic plasticity is believed to underlie the retention of memories over medium and long time scales in the brain. However, it is unclear how such plasticity processes are integrated with computations in cortical networks. Here, we propose Hebbian Memory Networks (H-Mems), a simple neural network model that is built around a core hetero-associative network subject to Hebbian plasticity. We show that the network can be optimized to utilize the Hebbian plasticity processes for its computations. H-Mems can one-shot memorize associations between stimulus pairs and use these associations for decisions later on. Furthermore, they can solve demanding question-answering tasks on synthetic stories. Our study shows that neural network models are able to enrich their computations with memories through simple Hebbian plasticity processes.", "field": [], "task": ["Question Answering"], "method": [], "dataset": ["bAbi"], "metric": ["Accuracy (trained on 1k)", "Accuracy (trained on 10k)"], "title": "H-Mem: Harnessing synaptic plasticity with Hebbian Memory Networks"} {"abstract": "Recurrent neural nets (RNN) and convolutional neural nets (CNN) are widely\nused on NLP tasks to capture the long-term and local dependencies,\nrespectively. Attention mechanisms have recently attracted enormous interest\ndue to their highly parallelizable computation, significantly less training\ntime, and flexibility in modeling dependencies. We propose a novel attention\nmechanism in which the attention between elements from input sequence(s) is\ndirectional and multi-dimensional (i.e., feature-wise). A light-weight neural\nnet, \"Directional Self-Attention Network (DiSAN)\", is then proposed to learn\nsentence embedding, based solely on the proposed attention without any RNN/CNN\nstructure. DiSAN is only composed of a directional self-attention with temporal\norder encoded, followed by a multi-dimensional attention that compresses the\nsequence into a vector representation. Despite its simple form, DiSAN\noutperforms complicated RNN models on both prediction quality and time\nefficiency. It achieves the best test accuracy among all sentence encoding\nmethods and improves the most recent best result by 1.02% on the Stanford\nNatural Language Inference (SNLI) dataset, and shows state-of-the-art test\naccuracy on the Stanford Sentiment Treebank (SST), Multi-Genre natural language\ninference (MultiNLI), Sentences Involving Compositional Knowledge (SICK),\nCustomer Review, MPQA, TREC question-type classification and Subjectivity\n(SUBJ) datasets.", "field": [], "task": ["Natural Language Inference", "Sentence Embedding"], "method": [], "dataset": ["SNLI"], "metric": ["Parameters", "% Train Accuracy", "% Test Accuracy"], "title": "DiSAN: Directional Self-Attention Network for RNN/CNN-Free Language Understanding"} {"abstract": "Much of the world's data is streaming, time-series data, where anomalies give\nsignificant information in critical situations; examples abound in domains such\nas finance, IT, security, medical, and energy. 
Yet detecting anomalies in\nstreaming data is a difficult task, requiring detectors to process data in\nreal-time, not batches, and learn while simultaneously making predictions.\nThere are no benchmarks to adequately test and score the efficacy of real-time\nanomaly detectors. Here we propose the Numenta Anomaly Benchmark (NAB), which\nattempts to provide a controlled and repeatable environment of open-source\ntools to test and measure anomaly detection algorithms on streaming data. The\nperfect detector would detect all anomalies as soon as possible, trigger no\nfalse alarms, work with real-world time-series data across a variety of\ndomains, and automatically adapt to changing statistics. Rewarding these\ncharacteristics is formalized in NAB, using a scoring algorithm designed for\nstreaming data. NAB evaluates detectors on a benchmark dataset with labeled,\nreal-world time-series data. We present these components, and give results and\nanalyses for several open source, commercially-used algorithms. The goal for\nNAB is to provide a standard, open source framework with which the research\ncommunity can compare and evaluate different algorithms for detecting anomalies\nin streaming data.", "field": [], "task": ["Anomaly Detection", "Time Series"], "method": [], "dataset": ["Numenta Anomaly Benchmark"], "metric": ["NAB score"], "title": "Evaluating Real-time Anomaly Detection Algorithms - the Numenta Anomaly Benchmark"} {"abstract": "Easy-to-use,Modular and Extendible package of deep-learning based CTR models.DeepFM,DeepInterestNetwork(DIN),DeepInterestEvolutionNetwork(DIEN),DeepCrossNetwork(DCN),AttentionalFactorizationMachine(AFM),Neural Factorization Machine(NFM),AutoInt", "field": [], "task": ["Click-Through Rate Prediction"], "method": [], "dataset": ["Amazon Dataset"], "metric": ["AUC"], "title": "Deep Interest Evolution Network for Click-Through Rate Prediction"} {"abstract": "Detecting action units (AUs) on human faces is challenging because various AUs make subtle facial appearance change over various regions at different scales. Current works have attempted to recognize AUs by emphasizing important regions. However, the incorporation of expert prior knowledge into region definition remains under-exploited, and current AU detection approaches do not use regional convolutional neural networks (R-CNN) with expert prior knowledge to directly focus on AU-related regions adaptively. By incorporating expert prior knowledge, we propose a novel R-CNN based model named AU R-CNN. The proposed solution offers two main contributions: (1) AU R-CNN directly observes different facial regions, where various AUs are located. Specifically, we define an AU partition rule which encodes the expert prior knowledge into the region definition and RoI-level label definition. This design produces considerably better detection performance than existing approaches. (2) We integrate various dynamic models (including convolutional long short-term memory, two stream network, conditional random field, and temporal action localization network) into AU R-CNN and then investigate and analyze the reason behind the performance of dynamic models. Experiment results demonstrate that \\textit{only} static RGB image information and no optical flow-based AU R-CNN surpasses the one fused with dynamic models. AU R-CNN is also superior to traditional CNNs that use the same backbone on varying image resolutions. State-of-the-art recognition performance of AU detection is achieved. 
The complete network is end-to-end trainable. Experiments on BP4D and DISFA datasets show the effectiveness of our approach. The implementation code is available online.", "field": [], "task": ["Action Unit Detection", "Temporal Action Localization"], "method": [], "dataset": ["BP4D"], "metric": ["Avg F1"], "title": "AU R-CNN: Encoding Expert Prior Knowledge into R-CNN for Action Unit Detection"} {"abstract": "Probabilistic modelling is a principled framework to perform model aggregation, which has been a primary mechanism to combat mode collapse in the context of Generative Adversarial Networks (GAN). In this paper, we propose a novel probabilistic framework for GANs, ProbGAN, which iteratively learns a distribution over generators with a carefully crafted prior. Learning is efficiently triggered by a tailored stochastic gradient Hamiltonian Monte Carlo with a novel gradient approximation to perform Bayesian inference. Our theoretical analysis further reveals that our treatment is the first probabilistic framework that yields an equilibrium where generator distributions are faithful to the data distribution. Empirical evidence on synthetic high-dimensional multi-modal data and image databases (CIFAR-10, STL-10, and ImageNet) demonstrates the superiority of our method over both state-of-the-art multi-generator GANs and other probabilistic treatments for GANs.", "field": [], "task": ["Bayesian Inference"], "method": [], "dataset": ["STL-10"], "metric": ["Inception score", "FID"], "title": "ProbGAN: Towards Probabilistic GAN with Theoretical Guarantees"} {"abstract": "3D face reconstruction from a single 2D image is a challenging problem with broad applications. Recent methods typically aim to learn a CNN-based 3D face model that regresses coefficients of 3D Morphable Model (3DMM) from 2D images to render 3D face reconstruction or dense face alignment. However, the shortage of training data with 3D annotations considerably limits the performance of those methods. To alleviate this issue, we propose a novel 2D-assisted self-supervised learning (2DASL) method that can effectively use \"in-the-wild\" 2D face images with noisy landmark information to substantially improve 3D face model learning. Specifically, taking the sparse 2D facial landmarks as additional information, 2DASL introduces four novel self-supervision schemes that view the 2D landmark and 3D landmark prediction as a self-mapping process, including the 2D and 3D landmark self-prediction consistency, cycle-consistency over the 2D landmark prediction and self-critic over the predicted 3DMM coefficients based on landmark predictions. Using these four self-supervision schemes, the 2DASL method significantly relieves demands on the conventional paired 2D-to-3D annotations and gives much higher-quality 3D face models without requiring any additional 3D annotations. Experiments on multiple challenging datasets show that our method outperforms state-of-the-art methods for both 3D face reconstruction and dense face alignment by a large margin.", "field": [], "task": ["3D Face Reconstruction", "Face Alignment", "Face Model", "Face Reconstruction", "Self-Supervised Learning"], "method": [], "dataset": ["AFLW2000-3D"], "metric": ["Mean NME "], "title": "3D Face Reconstruction from A Single Image Assisted by 2D Face Images in the Wild"} {"abstract": "Graph embedding methods produce unsupervised node features from graphs that\ncan then be used for a variety of machine learning tasks.
Modern graphs,\nparticularly in industrial applications, contain billions of nodes and\ntrillions of edges, which exceeds the capability of existing embedding systems.\nWe present PyTorch-BigGraph (PBG), an embedding system that incorporates\nseveral modifications to traditional multi-relation embedding systems that\nallow it to scale to graphs with billions of nodes and trillions of edges. PBG\nuses graph partitioning to train arbitrarily large embeddings on either a\nsingle machine or in a distributed environment. We demonstrate comparable\nperformance with existing embedding systems on common benchmarks, while\nallowing for scaling to arbitrarily large graphs and parallelization on\nmultiple machines. We train and evaluate embeddings on several large social\nnetwork graphs as well as the full Freebase dataset, which contains over 100\nmillion nodes and 2 billion edges.", "field": [], "task": ["Graph Embedding", "graph partitioning", "Link Prediction"], "method": [], "dataset": ["FB15k", "YouTube", "LiveJournal"], "metric": ["Macro F1", "MRR filtered", "MR", "MRR", "Hits@10", "MRR raw", "Micro F1"], "title": "PyTorch-BigGraph: A Large-scale Graph Embedding System"} {"abstract": "Recently, deep convolutional neural networks (CNNs) have been widely explored in single image super-resolution (SISR) and obtained remarkable performance. However, most of the existing CNN-based SISR methods mainly focus on wider or deeper architecture design, neglecting to explore the feature correlations of intermediate layers, hence hindering the representational power of CNNs. To address this issue, in this paper, we propose a second-order attention network (SAN) for more powerful feature expression and feature correlation learning. Specifically, a novel train- able second-order channel attention (SOCA) module is developed to adaptively rescale the channel-wise features by using second-order feature statistics for more discriminative representations. Furthermore, we present a non-locally enhanced residual group (NLRG) structure, which not only incorporates non-local operations to capture long-distance spatial contextual information, but also contains repeated local-source residual attention groups (LSRAG) to learn increasingly abstract feature representations. Experimental results demonstrate the superiority of our SAN network over state-of-the-art SISR methods in terms of both quantitative metrics and visual quality.\r", "field": [], "task": ["Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["Set14 - 4x upscaling", "Manga109 - 4x upscaling", "BSD100 - 4x upscaling", "Set5 - 4x upscaling", "Urban100 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Second-Order Attention Network for Single Image Super-Resolution"} {"abstract": "Providing model-generated explanations in recommender systems is important to\nuser experience. State-of-the-art recommendation algorithms - especially\ncollaborative filtering (CF)-based approaches with shallow or deep models -\nusually work with various unstructured information sources for recommendation,\nsuch as textual reviews, visual images, and various implicit or explicit\nfeedbacks. Though structured knowledge bases were considered in content-based\napproaches, they have been largely ignored recently due to the research focus\non CF approaches. However, structured knowledge exhibit unique advantages in\npersonalized recommendation systems. 
When the explicit knowledge about users\nand items is considered for recommendation, the system could provide highly\ncustomized recommendations based on users' historical behaviors and the\nknowledge is helpful for providing informed explanations regarding the\nrecommended items. A great challenge for using knowledge bases for\nrecommendation is how to integrate large-scale structured data, while taking\nadvantage of collaborative filtering for highly accurate performance. Recent\nachievements in knowledge-base embedding (KBE) sheds light on this problem,\nwhich makes it possible to learn user and item representations while preserving\nthe structure of their relationship with external knowledge for explanation. In\nthis work, we propose to explain knowledge-base embeddings for explainable\nrecommendation. Specifically, we propose a knowledge-base representation\nlearning framework to embed heterogeneous entities for recommendation, and\nbased on the embedded knowledge base, a soft matching algorithm is proposed to\ngenerate personalized explanations for the recommended items. Experimental\nresults on real-world e-commerce datasets verified the superior recommendation\nperformance and the explainability power of our approach compared with\nstate-of-the-art baselines.", "field": [], "task": ["Link Prediction", "Recommendation Systems", "Representation Learning"], "method": [], "dataset": ["MovieLens 25M", "Yelp"], "metric": ["Hits@10", "nDCG@10", "HR@10"], "title": "Learning Heterogeneous Knowledge Base Embeddings for Explainable Recommendation"} {"abstract": "The chronological order of user-item interactions is a key feature in many recommender systems, where the items that users will interact may largely depend on those items that users just accessed recently. However, with the tremendous increase of users and items, sequential recommender systems still face several challenging problems: (1) the hardness of modeling the long-term user interests from sparse implicit feedback; (2) the difficulty of capturing the short-term user interests given several items the user just accessed. To cope with these challenges, we propose a hierarchical gating network (HGN), integrated with the Bayesian Personalized Ranking (BPR) to capture both the long-term and short-term user interests. Our HGN consists of a feature gating module, an instance gating module, and an item-item product module. In particular, our feature gating and instance gating modules select what item features can be passed to the downstream layers from the feature and instance levels, respectively. Our item-item product module explicitly captures the item relations between the items that users accessed in the past and those items users will access in the future. We extensively evaluate our model with several state-of-the-art methods and different validation metrics on five real-world datasets. The experimental results demonstrate the effectiveness of our model on Top-N sequential recommendation.", "field": [], "task": ["Recommendation Systems"], "method": [], "dataset": ["MovieLens 20M", "Amazon-CDs", "GoodReads-Comics", "GoodReads-Children", "Amazon-Book"], "metric": ["nDCG@10", "Recall@10"], "title": "Hierarchical Gating Networks for Sequential Recommendation"} {"abstract": "This paper presents a strong set of results for resolving gendered ambiguous pronouns on the Gendered Ambiguous Pronouns shared task. 
The model presented here draws upon the strengths of state-of-the-art language and coreference resolution models, and introduces a novel evidence-based deep learning architecture. Injecting evidence from the coreference models compliments the base architecture, and analysis shows that the model is not hindered by their weaknesses, specifically gender bias. The modularity and simplicity of the architecture make it very easy to extend for further improvement and applicable to other NLP problems. Evaluation on GAP test data results in a state-of-the-art performance at 92.5{\\%} F1 (gender bias of 0.97), edging closer to the human performance of 96.6{\\%}. The end-to-end solution presented here placed 1st in the Kaggle competition, winning by a significant lead.", "field": [], "task": ["Coreference Resolution"], "method": [], "dataset": ["GAP"], "metric": ["Masculine F1 (M)", "Overall F1", "Bias (F/M)", "Feminine F1 (F)"], "title": "Gendered Ambiguous Pronouns Shared Task: Boosting Model Confidence by Evidence Pooling"} {"abstract": "In this paper, we aim to develop an efficient and compact deep network for RGB-D salient object detection, where the depth image provides complementary information to boost performance in complex scenarios. Starting from a coarse initial prediction by a multi-scale residual block, we propose a progressively guided alternate refinement network to refine it. Instead of using ImageNet pre-trained backbone network, we first construct a lightweight depth stream by learning from scratch, which can extract complementary features more efficiently with less redundancy. Then, different from the existing fusion based methods, RGB and depth features are fed into proposed guided residual (GR) blocks alternately to reduce their mutual degradation. By assigning progressive guidance in the stacked GR blocks within each side-output, the false detection and missing parts can be well remedied. Extensive experiments on seven benchmark datasets demonstrate that our model outperforms existing state-of-the-art approaches by a large margin, and also shows superiority in efficiency (71 FPS) and model size (64.9 MB).", "field": [], "task": ["Object Detection", "RGB-D Salient Object Detection", "RGB Salient Object Detection", "Salient Object Detection"], "method": [], "dataset": ["SIP"], "metric": ["Average MAE", "S-Measure"], "title": "Progressively Guided Alternate Refinement Network for RGB-D Salient Object Detection"} {"abstract": "We propose a new matching-based framework for semi-supervised video object segmentation (VOS). Recently, state-of-the-art VOS performance has been achieved by matching-based algorithms, in which feature banks are created to store features for region matching and classification. However, how to effectively organize information in the continuously growing feature bank remains under-explored, and this leads to inefficient design of the bank. We introduce an adaptive feature bank update scheme to dynamically absorb new features and discard obsolete features. We also design a new confidence loss and a fine-grained segmentation module to enhance the segmentation accuracy in uncertain regions. 
On public benchmarks, our algorithm outperforms existing state-of-the-arts.", "field": [], "task": ["Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["DAVIS 2017 (val)", "YouTube-VOS"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "Jaccard (Unseen)", "F-Measure (Seen)", "Jaccard (Seen)", "F-measure (Recall)", "Jaccard (Decay)", "Overall", "Jaccard (Recall)", "F-measure (Mean)", "J&F", "F-Measure (Unseen)"], "title": "Video Object Segmentation with Adaptive Feature Bank and Uncertain-Region Refinement"} {"abstract": "Irregular sampling occurs in many time series modeling applications where it presents a significant challenge to standard deep learning models. This work is motivated by the analysis of physiological time series data in electronic health records, which are sparse, irregularly sampled, and multivariate. In this paper, we propose a new deep learning framework for this setting that we call Multi-Time Attention Networks. Multi-Time Attention Networks learn an embedding of continuous-time values and use an attention mechanism to produce a fixed-length representation of a time series containing a variable number of observations. We investigate the performance of our framework on interpolation and classification tasks using multiple datasets. Our results show that our approach performs as well or better than a range of baseline and recently proposed models while offering significantly faster training times than current state-of-the-art methods.", "field": [], "task": ["Time Series"], "method": [], "dataset": ["PhysioNet Challenge 2012"], "metric": ["AUC"], "title": "Multi-Time Attention Networks for Irregularly Sampled Time Series"} {"abstract": "Amortized variational inference (AVI) replaces instance-specific local\ninference with a global inference network. While AVI has enabled efficient\ntraining of deep generative models such as variational autoencoders (VAE),\nrecent empirical work suggests that inference networks can produce suboptimal\nvariational parameters. We propose a hybrid approach, to use AVI to initialize\nthe variational parameters and run stochastic variational inference (SVI) to\nrefine them. Crucially, the local SVI procedure is itself differentiable, so\nthe inference network and generative model can be trained end-to-end with\ngradient-based optimization. This semi-amortized approach enables the use of\nrich generative models without experiencing the posterior-collapse phenomenon\ncommon in training VAEs for problems like text generation. Experiments show\nthis approach outperforms strong autoregressive and variational baselines on\nstandard text and image datasets.", "field": [], "task": ["Text Generation", "Variational Inference"], "method": [], "dataset": ["Yahoo Questions"], "metric": ["KL", "NLL", "Perplexity"], "title": "Semi-Amortized Variational Autoencoders"} {"abstract": "Face alignment, which fits a face model to an image and extracts the semantic\nmeanings of facial pixels, has been an important topic in the computer vision\ncommunity. However, most algorithms are designed for faces in small to medium\nposes (yaw angle is smaller than 45 degrees), which lack the ability to align\nfaces in large poses up to 90 degrees. The challenges are three-fold. Firstly,\nthe commonly used landmark face model assumes that all the landmarks are\nvisible and is therefore not suitable for large poses. 
Secondly, the face\nappearance varies more drastically across large poses, from the frontal view to\nthe profile view. Thirdly, labelling landmarks in large poses is extremely\nchallenging since the invisible landmarks have to be guessed. In this paper, we\npropose to tackle these three challenges in a new alignment framework termed\n3D Dense Face Alignment (3DDFA), in which a dense 3D Morphable Model (3DMM) is\nfitted to the image via Cascaded Convolutional Neural Networks. We also utilize\n3D information to synthesize face images in profile views to provide abundant\nsamples for training. Experiments on the challenging AFLW database show that\nthe proposed approach achieves significant improvements over the\nstate-of-the-art methods.", "field": [], "task": ["3D Pose Estimation", "Depth Image Estimation", "Face Alignment", "Face Model", "Face Reconstruction", "Pose Estimation"], "method": [], "dataset": ["AFLW2000-3D", "AFLW", "300W"], "metric": ["Mean NME", "Mean NME ", "Fullset (public)"], "title": "Face Alignment in Full Pose Range: A 3D Total Solution"} {"abstract": "We study the problem of learning to reason in large scale knowledge graphs\n(KGs). More specifically, we describe a novel reinforcement learning framework\nfor learning multi-hop relational paths: we use a policy-based agent with\ncontinuous states based on knowledge graph embeddings, which reasons in a KG\nvector space by sampling the most promising relation to extend its path. In\ncontrast to prior work, our approach includes a reward function that takes the\naccuracy, diversity, and efficiency into consideration. Experimentally, we show\nthat our proposed method outperforms a path-ranking based algorithm and\nknowledge graph embedding methods on Freebase and Never-Ending Language\nLearning datasets.", "field": [], "task": ["Graph Embedding", "Knowledge Graph Embedding", "Knowledge Graph Embeddings", "Knowledge Graphs"], "method": [], "dataset": ["NELL-995"], "metric": ["Mean AP"], "title": "DeepPath: A Reinforcement Learning Method for Knowledge Graph Reasoning"} {"abstract": "Learning from a few examples remains a key challenge in machine learning.\nDespite recent advances in important domains such as vision and language, the\nstandard supervised deep learning paradigm does not offer a satisfactory\nsolution for learning new concepts rapidly from little data. In this work, we\nemploy ideas from metric learning based on deep neural features and from recent\nadvances that augment neural networks with external memories. Our framework\nlearns a network that maps a small labelled support set and an unlabelled\nexample to its label, obviating the need for fine-tuning to adapt to new class\ntypes. We then define one-shot learning problems on vision (using Omniglot,\nImageNet) and language tasks. Our algorithm improves one-shot accuracy on\nImageNet from 87.6% to 93.2% and from 88.0% to 93.8% on Omniglot compared to\ncompeting approaches.
We also demonstrate the usefulness of the same model on\nlanguage modeling by introducing a one-shot task on the Penn Treebank.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Language Modelling", "Metric Learning", "Omniglot", "One-Shot Learning"], "method": [], "dataset": ["OMNIGLOT - 1-Shot, 5-way", "Stanford Dogs 5-way (5-shot)", "OMNIGLOT - 5-Shot, 20-way", "Mini-Imagenet 5-way (1-shot)", "Stanford Cars 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "Mini-ImageNet-CUB 5-way (1-shot)", "OMNIGLOT - 5-Shot, 5-way", "OMNIGLOT - 1-Shot, 20-way", "Stanford Cars 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Matching Networks for One Shot Learning"} {"abstract": "In this paper, we propose a novel random-forest scheme, namely Joint Maximum\nPurity Forest (JMPF), for classification, clustering, and regression tasks. In\nthe JMPF scheme, the original feature space is transformed into a compactly\npre-clustered feature space, via a trained rotation matrix. The rotation matrix\nis obtained through an iterative quantization process, where the input data\nbelonging to different classes are clustered to the respective vertices of the\nnew feature space with maximum purity. In the new feature space, orthogonal\nhyperplanes, which are employed at the split-nodes of decision trees in random\nforests, can tackle the clustering problems effectively. We evaluated our\nproposed method on public benchmark datasets for regression and classification\ntasks, and experiments showed that JMPF remarkably outperforms other\nstate-of-the-art random-forest-based approaches. Furthermore, we applied JMPF\nto image super-resolution, because the transformed, compact features are more\ndiscriminative to the clustering-regression scheme. Experiment results on\nseveral public benchmark datasets also showed that the JMPF-based image\nsuper-resolution scheme is consistently superior to recent state-of-the-art\nimage super-resolution algorithms.", "field": [], "task": ["Image Super-Resolution", "Quantization", "Regression", "Super-Resolution"], "method": [], "dataset": ["Set5 - 4x upscaling", "BSD100 - 4x upscaling", "Set14 - 4x upscaling"], "metric": ["PSNR"], "title": "Joint Maximum Purity Forest with Application to Image Super-Resolution"} {"abstract": "Abstractive summarization aims to generate a shorter version of the document\ncovering all the salient points in a compact and coherent fashion. On the other\nhand, query-based summarization highlights those points that are relevant in\nthe context of a given query. The encode-attend-decode paradigm has achieved\nnotable success in machine translation, extractive summarization, dialog\nsystems, etc. But it suffers from the drawback of generation of repeated\nphrases. In this work we propose a model for the query-based summarization task\nbased on the encode-attend-decode paradigm with two key additions (i) a query\nattention model (in addition to document attention model) which learns to focus\non different portions of the query at different time steps (instead of using a\nstatic representation for the query) and (ii) a new diversity based attention\nmodel which aims to alleviate the problem of repeating phrases in the summary.\nIn order to enable the testing of this model we introduce a new query-based\nsummarization dataset building on debatepedia. 
Our experiments show that with\nthese two additions the proposed model clearly outperforms vanilla\nencode-attend-decode models with a gain of 28% (absolute) in ROUGE-L scores.", "field": [], "task": ["Abstractive Text Summarization", "Machine Translation", "Query-Based Extractive Summarization"], "method": [], "dataset": ["Debatepedia"], "metric": ["ROUGE-1"], "title": "Diversity driven Attention Model for Query-based Abstractive Summarization"} {"abstract": "The ability to identify and temporally segment fine-grained human actions\nthroughout a video is crucial for robotics, surveillance, education, and\nbeyond. Typical approaches decouple this problem by first extracting local\nspatiotemporal features from video frames and then feeding them into a temporal\nclassifier that captures high-level temporal patterns. We introduce a new class\nof temporal models, which we call Temporal Convolutional Networks (TCNs), that\nuse a hierarchy of temporal convolutions to perform fine-grained action\nsegmentation or detection. Our Encoder-Decoder TCN uses pooling and upsampling\nto efficiently capture long-range temporal patterns whereas our Dilated TCN\nuses dilated convolutions. We show that TCNs are capable of capturing action\ncompositions, segment durations, and long-range dependencies, and are over a\nmagnitude faster to train than competing LSTM-based Recurrent Neural Networks.\nWe apply these models to three challenging fine-grained datasets and show large\nimprovements over the state of the art.", "field": [], "task": ["Action Segmentation", "Skeleton Based Action Recognition"], "method": [], "dataset": ["Varying-view RGB-D Action-Skeleton", "GTEA"], "metric": ["Accuracy (CS)", "Acc", "Edit", "Accuracy (CV II)", "F1@10%", "Accuracy (CV I)", "Accuracy (AV I)", "F1@25%", "Accuracy (AV II)", "F1@50%"], "title": "Temporal Convolutional Networks for Action Segmentation and Detection"} {"abstract": "Generalized linear models with nonlinear feature transformations are widely\nused for large-scale regression and classification problems with sparse inputs.\nMemorization of feature interactions through a wide set of cross-product\nfeature transformations are effective and interpretable, while generalization\nrequires more feature engineering effort. With less feature engineering, deep\nneural networks can generalize better to unseen feature combinations through\nlow-dimensional dense embeddings learned for the sparse features. However, deep\nneural networks with embeddings can over-generalize and recommend less relevant\nitems when the user-item interactions are sparse and high-rank. In this paper,\nwe present Wide & Deep learning---jointly trained wide linear models and deep\nneural networks---to combine the benefits of memorization and generalization\nfor recommender systems. We productionized and evaluated the system on Google\nPlay, a commercial mobile app store with over one billion active users and over\none million apps. Online experiment results show that Wide & Deep significantly\nincreased app acquisitions compared with wide-only and deep-only models. 
We\nhave also open-sourced our implementation in TensorFlow.", "field": [], "task": ["Click-Through Rate Prediction", "Feature Engineering", "Recommendation Systems", "Regression"], "method": [], "dataset": ["Bing News", "Amazon", "MovieLens 20M", "Criteo", "Company*", "Dianping"], "metric": ["Log Loss", "AUC"], "title": "Wide & Deep Learning for Recommender Systems"} {"abstract": "We present a model that generates natural language descriptions of images and\ntheir regions. Our approach leverages datasets of images and their sentence\ndescriptions to learn about the inter-modal correspondences between language\nand visual data. Our alignment model is based on a novel combination of\nConvolutional Neural Networks over image regions, bidirectional Recurrent\nNeural Networks over sentences, and a structured objective that aligns the two\nmodalities through a multimodal embedding. We then describe a Multimodal\nRecurrent Neural Network architecture that uses the inferred alignments to\nlearn to generate novel descriptions of image regions. We demonstrate that our\nalignment model produces state of the art results in retrieval experiments on\nFlickr8K, Flickr30K and MSCOCO datasets. We then show that the generated\ndescriptions significantly outperform retrieval baselines on both full images\nand on a new dataset of region-level annotations.", "field": [], "task": ["Cross-Modal Retrieval", "Image Captioning", "Text-Image Retrieval"], "method": [], "dataset": ["COCO (image as query)", "COCO Visual Question Answering (VQA) real images 1.0 open ended", "COCO", "COCO 2014", "Flickr30K 1K test"], "metric": ["Image-to-text R@5", "BLEU-1", "Image-to-text R@1", "R@10", "Image-to-text R@10", "Text-to-image R@10", "Recall@10", "Text-to-image R@1", "R@1", "Text-to-image R@5"], "title": "Deep Visual-Semantic Alignments for Generating Image Descriptions"} {"abstract": "While deep convolutional neural networks (CNNs) have achieved impressive\nsuccess in image denoising with additive white Gaussian noise (AWGN), their\nperformance remains limited on real-world noisy photographs. The main reason is\nthat their learned models are easy to overfit on the simplified AWGN model\nwhich deviates severely from the complicated real-world noise model. In order\nto improve the generalization ability of deep CNN denoisers, we suggest\ntraining a convolutional blind denoising network (CBDNet) with more realistic\nnoise model and real-world noisy-clean image pairs. On the one hand, both\nsignal-dependent noise and in-camera signal processing pipeline is considered\nto synthesize realistic noisy images. On the other hand, real-world noisy\nphotographs and their nearly noise-free counterparts are also included to train\nour CBDNet. To further provide an interactive strategy to rectify denoising\nresult conveniently, a noise estimation subnetwork with asymmetric learning to\nsuppress under-estimation of noise level is embedded into CBDNet. Extensive\nexperimental results on three datasets of real-world noisy photographs clearly\ndemonstrate the superior performance of CBDNet over state-of-the-arts in terms\nof quantitative metrics and visual quality. 
The code has been made available at\nhttps://github.com/GuoShi28/CBDNet.", "field": [], "task": ["Denoising", "Image Denoising", "noise estimation"], "method": [], "dataset": ["SIDD", "DND", "Darmstadt Noise Dataset"], "metric": ["SSIM (sRGB)", "PSNR", "PSNR (sRGB)"], "title": "Toward Convolutional Blind Denoising of Real Photographs"} {"abstract": "In this paper we introduce a simple approach for exploration in reinforcement learning (RL) that allows us to develop theoretically justified algorithms in the tabular case but that is also extendable to settings where function approximation is required. Our approach is based on the successor representation (SR), which was originally introduced as a representation defining state generalization by the similarity of successor states. Here we show that the norm of the SR, while it is being learned, can be used as a reward bonus to incentivize exploration. In order to better understand this transient behavior of the norm of the SR we introduce the substochastic successor representation (SSR) and we show that it implicitly counts the number of times each state (or feature) has been observed. We use this result to introduce an algorithm that performs as well as some theoretically sample-efficient approaches. Finally, we extend these ideas to a deep RL algorithm and show that it achieves state-of-the-art performance in Atari 2600 games when in a low sample-complexity regime.", "field": [], "task": ["Atari Games", "Efficient Exploration"], "method": [], "dataset": ["Atari 2600 Venture", "Atari 2600 Private Eye", "Atari 2600 Montezuma's Revenge", "Atari 2600 Solaris", "Atari 2600 Freeway", "Atari 2600 Gravitar"], "metric": ["Score"], "title": "Count-Based Exploration with the Successor Representation"} {"abstract": "In conventional speech recognition, phoneme-based models outperform grapheme-based models for non-phonetic languages such as English. The performance gap between the two typically reduces as the amount of training data is increased. In this work, we examine the impact of the choice of modeling unit for attention-based encoder-decoder models. We conduct experiments on the LibriSpeech 100hr, 460hr, and 960hr tasks, using various target units (phoneme, grapheme, and word-piece); across all tasks, we find that grapheme or word-piece models consistently outperform phoneme-based models, even though they are evaluated without a lexicon or an external language model. We also investigate model complementarity: we find that we can improve WERs by up to 9% relative by rescoring N-best lists generated from a strong word-piece based baseline with either the phoneme or the grapheme model. Rescoring an N-best list generated by the phonemic system, however, provides limited improvements. Further analysis shows that the word-piece-based models produce more diverse N-best hypotheses, and thus lower oracle WERs, than phonemic models.", "field": [], "task": ["Language Modelling", "Sequence-To-Sequence Speech Recognition", "Speech Recognition"], "method": [], "dataset": ["LibriSpeech test-clean"], "metric": ["Word Error Rate (WER)"], "title": "On the Choice of Modeling Unit for Sequence-to-Sequence Speech Recognition"} {"abstract": "We introduce a novel method for multilingual transfer that utilizes deep\ncontextual embeddings, pretrained in an unsupervised fashion. While contextual\nembeddings have been shown to yield richer representations of meaning compared\nto their static counterparts, aligning them poses a challenge due to their\ndynamic nature. 
To this end, we construct context-independent variants of the\noriginal monolingual spaces and utilize their mapping to derive an alignment\nfor the context-dependent spaces. This mapping readily supports processing of a\ntarget language, improving transfer by context-aware embeddings. Our\nexperimental results demonstrate the effectiveness of this approach for\nzero-shot and few-shot learning of dependency parsing. Specifically, our method\nconsistently outperforms the previous state-of-the-art on 6 tested languages,\nyielding an improvement of 6.8 LAS points on average.", "field": [], "task": ["Cross-lingual zero-shot dependency parsing", "Dependency Parsing", "Few-Shot Learning", "Word Embeddings"], "method": [], "dataset": ["Universal Dependency Treebank"], "metric": ["UAS", "LAS"], "title": "Cross-Lingual Alignment of Contextual Word Embeddings, with Applications to Zero-shot Dependency Parsing"} {"abstract": "Graph processes model a number of important problems such as identifying the epicenter of an earthquake or predicting weather. In this paper, we propose a Graph Convolutional Recurrent Neural Network (GCRNN) architecture specifically tailored to deal with these problems. GCRNNs use convolutional filter banks to keep the number of trainable parameters independent of the size of the graph and of the time sequences considered. We also put forward Gated GCRNNs, a time-gated variation of GCRNNs akin to LSTMs. When compared with GNNs and another graph recurrent architecture in experiments using both synthetic and real-word data, GCRNNs significantly improve performance while using considerably less parameters.", "field": [], "task": ["Node Classification"], "method": [], "dataset": ["CiteSeer (0.5%)"], "metric": ["Accuracy"], "title": "Gated Graph Convolutional Recurrent Neural Networks"} {"abstract": "We propose spatially-adaptive normalization, a simple but effective layer for synthesizing photorealistic images given an input semantic layout. Previous methods directly feed the semantic layout as input to the deep network, which is then processed through stacks of convolution, normalization, and nonlinearity layers. We show that this is suboptimal as the normalization layers tend to ``wash away'' semantic information. To address the issue, we propose using the input layout for modulating the activations in normalization layers through a spatially-adaptive, learned transformation. Experiments on several challenging datasets demonstrate the advantage of the proposed method over existing approaches, regarding both visual fidelity and alignment with input layouts. Finally, our model allows user control over both semantic and style. Code is available at https://github.com/NVlabs/SPADE .", "field": [], "task": ["Image Generation", "Image-to-Image Translation"], "method": [], "dataset": ["ADE20K Labels-to-Photos", "COCO-Stuff Labels-to-Photos", "Cityscapes Labels-to-Photo", "ADE20K-Outdoor Labels-to-Photos"], "metric": ["Accuracy", "FID", "Per-pixel Accuracy", "mIoU"], "title": "Semantic Image Synthesis with Spatially-Adaptive Normalization"} {"abstract": "Progress in Machine Learning is often driven by the availability of large datasets, and consistent evaluation metrics for comparing modeling approaches. To this end, we present a repository of conversational datasets consisting of hundreds of millions of examples, and a standardised evaluation procedure for conversational response selection models using '1-of-100 accuracy'. 
The repository contains scripts that allow researchers to reproduce the standard datasets, or to adapt the pre-processing and data filtering steps to their needs. We introduce and evaluate several competitive baselines for conversational response selection, whose implementations are shared in the repository, as well as a neural encoder model that is trained on the entire training set.", "field": [], "task": ["Conversational Response Selection", "Dialogue Understanding"], "method": [], "dataset": ["PolyAI Reddit", "PolyAI OpenSubtitles", "PolyAI AmazonQA"], "metric": ["1-of-100 Accuracy"], "title": "A Repository of Conversational Datasets"} {"abstract": "Using pre-trained word embeddings in conjunction with Deep Learning models has become the {``}de facto{''} approach in Natural Language Processing (NLP). While this usually yields satisfactory results, off-the-shelf word embeddings tend to perform poorly on texts from specialized domains such as clinical reports. Moreover, training specialized word representations from scratch is often either impossible or ineffective due to the lack of large enough in-domain data. In this work, we focus on the clinical domain for which we study embedding strategies that rely on general-domain resources only. We show that by combining off-the-shelf contextual embeddings (ELMo) with static word2vec embeddings trained on a small in-domain corpus built from the task data, we manage to reach and sometimes outperform representations learned from a large corpus in the medical domain.", "field": [], "task": ["Clinical Concept Extraction", "Word Embeddings"], "method": [], "dataset": ["2010 i2b2/VA"], "metric": ["Exact Span F1"], "title": "Embedding Strategies for Specialized Domains: Application to Clinical Entity Recognition"} {"abstract": "Although significant improvement has been achieved recently in 3D human pose estimation, most of the previous methods only treat a single-person case. In this work, we firstly propose a fully learning-based, camera distance-aware top-down approach for 3D multi-person pose estimation from a single RGB image. The pipeline of the proposed system consists of human detection, absolute 3D human root localization, and root-relative 3D single-person pose estimation modules. Our system achieves comparable results with the state-of-the-art 3D single-person pose estimation models without any groundtruth information and significantly outperforms previous 3D multi-person pose estimation methods on publicly available datasets. The code is available in https://github.com/mks0601/3DMPPE_ROOTNET_RELEASE , https://github.com/mks0601/3DMPPE_POSENET_RELEASE.", "field": [], "task": ["3D Absolute Human Pose Estimation", "3D Human Pose Estimation", "3D Multi-Person Pose Estimation", "3D Multi-Person Pose Estimation (absolute)", "3D Multi-Person Pose Estimation (root-relative)", "Multi-Person Pose Estimation", "Pose Estimation", "Root Joint Localization"], "method": [], "dataset": ["3D Poses in the Wild Challenge", "MuPoTS-3D"], "metric": ["3DPCK", "MPJPE", "MPJAE"], "title": "Camera Distance-aware Top-down Approach for 3D Multi-person Pose Estimation from a Single RGB Image"} {"abstract": "Recent advances in visual tracking are based on siamese feature extractors and template matching. For this category of trackers, latest research focuses on better feature embeddings and similarity measures. In this work, we focus on building holistic object representations for tracking. 
We propose a framework that is designed to be used on top of previous trackers without any need for further training of the siamese network. The framework leverages the idea of obtaining additional object templates during the tracking process. Since the number of stored templates is limited, our method only keeps the most diverse ones. We achieve this by providing a new diversity measure in the space of siamese features. The obtained representation contains information beyond the ground truth object location provided to the system. It is then useful for tracking itself but also for further tasks which require a visual understanding of objects. Strong empirical results on tracking benchmarks indicate that our method can improve the performance and robustness of the underlying trackers while barely reducing their speed. In addition, our method is able to match current state-of-the-art results, while using a simpler and older network architecture and running three times faster.", "field": [], "task": ["Template Matching", "Visual Object Tracking", "Visual Tracking"], "method": [], "dataset": ["VOT2017/18"], "metric": ["Expected Average Overlap (EAO)"], "title": "Tracking Holistic Object Representations"} {"abstract": "Many question answering (QA) tasks only provide weak supervision for how the answer should be computed. For example, TriviaQA answers are entities that can be mentioned multiple times in supporting documents, while DROP answers can be computed by deriving many different equations from numbers in the reference text. In this paper, we show it is possible to convert such tasks into discrete latent variable learning problems with a precomputed, task-specific set of possible \"solutions\" (e.g. different mentions or equations) that contains one correct option. We then develop a hard EM learning scheme that computes gradients relative to the most likely solution at each update. Despite its simplicity, we show that this approach significantly outperforms previous methods on six QA tasks, including absolute gains of 2--10%, and achieves the state-of-the-art on five of them. Using hard updates instead of maximizing marginal likelihood is key to these results as it encourages the model to find the one correct answer, which we show through detailed qualitative analysis.", "field": [], "task": ["Question Answering"], "method": [], "dataset": ["NarrativeQA"], "metric": ["Rouge-L"], "title": "A Discrete Hard EM Approach for Weakly Supervised Question Answering"} {"abstract": "Logical rules are a popular knowledge representation language in many domains, representing background knowledge and encoding information that can be derived from given facts in a compact form. However, rule formulation is a complex process that requires deep domain expertise,and is further challenged by today's often large, heterogeneous, and incomplete knowledge graphs. Several approaches for learning rules automatically, given a set of input example facts,have been proposed over time, including, more recently, neural systems. Yet, the area is missing adequate datasets and evaluation approaches: existing datasets often resemble toy examples that neither cover the various kinds of dependencies between rules nor allow for testing scalability. 
We present a tool for generating different kinds of datasets and for evaluating rule learning systems, including new performance measures.", "field": [], "task": ["Inductive knowledge graph completion", "Inductive logic programming", "Knowledge Graphs", "Relational Reasoning"], "method": [], "dataset": ["RuDaS"], "metric": ["R-Score", "H-Score"], "title": "RuDaS: Synthetic Datasets for Rule Learning and Evaluation Tools"} {"abstract": "As 3D point cloud analysis has received increasing attention, the insufficient scale of point cloud datasets and the weak generalization ability of networks become prominent. In this paper, we propose a simple and effective augmentation method for the point cloud data, named PointCutMix, to alleviate those problems. It finds the optimal assignment between two point clouds and generates new training data by replacing the points in one sample with their optimal assigned pairs. Two replacement strategies are proposed to adapt to the accuracy or robustness requirement for different tasks, one of which is to randomly select all replacing points while the other one is to select k nearest neighbors of a single random point. Both strategies consistently and significantly improve the performance of various models on point cloud classification problems. By introducing the saliency maps to guide the selection of replacing points, the performance further improves. Moreover, PointCutMix is validated to enhance the model robustness against the point attack. It is worth noting that when used as a defense method, our method outperforms the state-of-the-art defense algorithms. The code is available at: https://github.com/cuge1995/PointCutMix", "field": [], "task": ["3D Point Cloud Classification"], "method": [], "dataset": ["ModelNet40"], "metric": ["Overall Accuracy"], "title": "PointCutMix: Regularization Strategy for Point Cloud Classification"} {"abstract": "Reading a document and extracting an answer to a question about its content\nhas attracted substantial attention recently. While most work has focused on\nthe interaction between the question and the document, in this work we evaluate\nthe importance of context when the question and document are processed\nindependently. We take a standard neural architecture for this task, and show\nthat by providing rich contextualized word representations from a large\npre-trained language model as well as allowing the model to choose between\ncontext-dependent and context-independent word representations, we can obtain\ndramatic improvements and reach performance comparable to state-of-the-art on\nthe competitive SQuAD dataset.", "field": [], "task": ["Language Modelling", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["SQuAD1.1"], "metric": ["EM", "F1"], "title": "Contextualized Word Representations for Reading Comprehension"} {"abstract": "We present SummaRuNNer, a Recurrent Neural Network (RNN) based sequence model\nfor extractive summarization of documents and show that it achieves performance\nbetter than or comparable to state-of-the-art. Our model has the additional\nadvantage of being very interpretable, since it allows visualization of its\npredictions broken up by abstract features such as information content,\nsalience and novelty.
Another novel contribution of our work is abstractive\ntraining of our extractive model that can train on human generated reference\nsummaries alone, eliminating the need for sentence-level extractive labels.", "field": [], "task": ["Document Summarization", "Text Summarization"], "method": [], "dataset": ["CNN / Daily Mail (Anonymized)"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "SummaRuNNer: A Recurrent Neural Network based Sequence Model for Extractive Summarization of Documents"} {"abstract": "We consider image transformation problems, where an input image is\ntransformed into an output image. Recent methods for such problems typically\ntrain feed-forward convolutional neural networks using a \\emph{per-pixel} loss\nbetween the output and ground-truth images. Parallel work has shown that\nhigh-quality images can be generated by defining and optimizing\n\\emph{perceptual} loss functions based on high-level features extracted from\npretrained networks. We combine the benefits of both approaches, and propose\nthe use of perceptual loss functions for training feed-forward networks for\nimage transformation tasks. We show results on image style transfer, where a\nfeed-forward network is trained to solve the optimization problem proposed by\nGatys et al in real-time. Compared to the optimization-based method, our\nnetwork gives similar qualitative results but is three orders of magnitude\nfaster. We also experiment with single-image super-resolution, where replacing\na per-pixel loss with a perceptual loss gives visually pleasing results.", "field": [], "task": ["Image Super-Resolution", "Nuclear Segmentation", "Style Transfer", "Super-Resolution"], "method": [], "dataset": ["Set5 - 4x upscaling", "BSD100 - 4x upscaling", "Cell17"], "metric": ["Hausdorff", "PSNR", "Dice", "F1-score", "SSIM"], "title": "Perceptual Losses for Real-Time Style Transfer and Super-Resolution"} {"abstract": "This paper proposes CF-NADE, a neural autoregressive architecture for\ncollaborative filtering (CF) tasks, which is inspired by the Restricted\nBoltzmann Machine (RBM) based CF model and the Neural Autoregressive\nDistribution Estimator (NADE). We first describe the basic CF-NADE model for CF\ntasks. Then we propose to improve the model by sharing parameters between\ndifferent ratings. A factored version of CF-NADE is also proposed for better\nscalability. Furthermore, we take the ordinal nature of the preferences into\nconsideration and propose an ordinal cost to optimize CF-NADE, which shows\nsuperior performance. Finally, CF-NADE can be extended to a deep model, with\nonly moderately increased computational complexity. Experimental results show\nthat CF-NADE with a single hidden layer beats all previous state-of-the-art\nmethods on MovieLens 1M, MovieLens 10M, and Netflix datasets, and adding more\nhidden layers can further improve the performance.", "field": [], "task": [], "method": [], "dataset": ["MovieLens 1M", "MovieLens 10M"], "metric": ["RMSE"], "title": "A Neural Autoregressive Approach to Collaborative Filtering"} {"abstract": "Recent studies reveal that a deep neural network can learn transferable\nfeatures which generalize well to novel tasks for domain adaptation. However,\nas deep features eventually transition from general to specific along the\nnetwork, the feature transferability drops significantly in higher layers with\nincreasing domain discrepancy. Hence, it is important to formally reduce the\ndataset bias and enhance the transferability in task-specific layers. 
In this\npaper, we propose a new Deep Adaptation Network (DAN) architecture, which\ngeneralizes deep convolutional neural network to the domain adaptation\nscenario. In DAN, hidden representations of all task-specific layers are\nembedded in a reproducing kernel Hilbert space where the mean embeddings of\ndifferent domain distributions can be explicitly matched. The domain\ndiscrepancy is further reduced using an optimal multi-kernel selection method\nfor mean embedding matching. DAN can learn transferable features with\nstatistical guarantees, and can scale linearly by unbiased estimate of kernel\nembedding. Extensive empirical evidence shows that the proposed architecture\nyields state-of-the-art image classification error rates on standard domain\nadaptation benchmarks.", "field": [], "task": ["Domain Adaptation", "Image Classification"], "method": [], "dataset": ["SVNH-to-MNIST", "ImageCLEF-DA", "Office-Caltech", "Synth Signs-to-GTSRB", "Synth Digits-to-SVHN", "Office-Home", "MNIST-to-MNIST-M", "SYNSIG-to-GTSRB"], "metric": ["Average Accuracy", "Accuracy"], "title": "Learning Transferable Features with Deep Adaptation Networks"} {"abstract": "We propose a novel neural network model for joint part-of-speech (POS)\ntagging and dependency parsing. Our model extends the well-known BIST\ngraph-based dependency parser (Kiperwasser and Goldberg, 2016) by incorporating\na BiLSTM-based tagging component to produce automatically predicted POS tags\nfor the parser. On the benchmark English Penn treebank, our model obtains\nstrong UAS and LAS scores at 94.51% and 92.87%, respectively, producing 1.5+%\nabsolute improvements to the BIST graph-based parser, and also obtaining a\nstate-of-the-art POS tagging accuracy at 97.97%. Furthermore, experimental\nresults on parsing 61 \"big\" Universal Dependencies treebanks from raw texts\nshow that our model outperforms the baseline UDPipe (Straka and Strakov\\'a,\n2017) with 0.8% higher average POS tagging score and 3.6% higher average LAS\nscore. In addition, with our model, we also obtain state-of-the-art downstream\ntask scores for biomedical event extraction and opinion analysis applications.\nOur code is available together with all pre-trained models at:\nhttps://github.com/datquocnguyen/jPTDP", "field": [], "task": ["Dependency Parsing", "Event Extraction", "Part-Of-Speech Tagging"], "method": [], "dataset": ["Penn Treebank"], "metric": ["UAS", "POS", "LAS"], "title": "An improved neural network model for joint POS tagging and dependency parsing"} {"abstract": "The most approaches to Knowledge Base Question Answering are based on\nsemantic parsing. In this paper, we address the problem of learning vector\nrepresentations for complex semantic parses that consist of multiple entities\nand relations. Previous work largely focused on selecting the correct semantic\nrelations for a question and disregarded the structure of the semantic parse:\nthe connections between entities and the directions of the relations. We\npropose to use Gated Graph Neural Networks to encode the graph structure of the\nsemantic parse. We show on two data sets that the graph networks outperform all\nbaseline models that do not explicitly model the structure. 
The error analysis\nconfirms that our approach can successfully process complex semantic parses.", "field": [], "task": ["Knowledge Base Question Answering", "Question Answering", "Semantic Parsing"], "method": [], "dataset": ["WebQSP-WD"], "metric": ["Avg F1"], "title": "Modeling Semantics with Gated Graph Neural Networks for Knowledge Base Question Answering"} {"abstract": "We address the problem of phrase grounding by learning a multi-level common semantic space shared by the textual and visual modalities. We exploit multiple levels of feature maps of a Deep Convolutional Neural Network, as well as contextualized word and sentence embeddings extracted from a character-based language model. Following dedicated non-linear mappings for visual features at each level, word, and sentence embeddings, we obtain multiple instantiations of our common semantic space in which comparisons between any target text and the visual content are performed with cosine similarity. We guide the model by a multi-level multimodal attention mechanism which outputs attended visual features at each level. The best level is chosen to be compared with text content for maximizing the pertinence scores of image-sentence pairs of the ground truth. Experiments conducted on three publicly available datasets show significant performance gains (20%-60% relative) over the state-of-the-art in phrase localization and set a new performance record on those datasets. We provide a detailed ablation study to show the contribution of each element of our approach and release our code on GitHub.", "field": [], "task": ["Language Modelling", "Phrase Grounding", "Sentence Embeddings"], "method": [], "dataset": ["Flickr30k", "Visual Genome", "ReferIt"], "metric": ["Pointing Game Accuracy"], "title": "Multi-level Multimodal Common Semantic Space for Image-Phrase Grounding"} {"abstract": "Recent advances in deep domain adaptation reveal that adversarial learning\ncan be embedded into deep networks to learn transferable features that reduce\ndistribution discrepancy between the source and target domains. Existing domain\nadversarial adaptation methods based on single domain discriminator only align\nthe source and target data distributions without exploiting the complex\nmultimode structures. In this paper, we present a multi-adversarial domain\nadaptation (MADA) approach, which captures multimode structures to enable\nfine-grained alignment of different data distributions based on multiple domain\ndiscriminators. The adaptation can be achieved by stochastic gradient descent\nwith the gradients computed by back-propagation in linear-time. Empirical\nevidence demonstrates that the proposed model outperforms state of the art\nmethods on standard domain adaptation datasets.", "field": [], "task": ["Domain Adaptation"], "method": [], "dataset": ["Office-31"], "metric": ["Average Accuracy"], "title": "Multi-Adversarial Domain Adaptation"} {"abstract": "We present CATENA, a sieve-based system to perform temporal and causal relation extraction and classification from English texts, exploiting the interaction between the temporal and the causal model. We evaluate the performance of each sieve, showing that the rule-based, the machine-learned and the reasoning components all contribute to achieving state-of-the-art performance on TempEval-3 and TimeBank-Dense data. Although causal relations are much sparser than temporal ones, the architecture and the selected features are mostly suitable to serve both tasks.
The effects of the interaction between the temporal and the causal components, although limited, yield promising results and confirm the tight connection between the temporal and the causal dimension of texts.", "field": [], "task": ["Question Answering", "Relation Classification", "Relation Extraction", "Temporal Information Extraction"], "method": [], "dataset": ["TimeBank"], "metric": ["F1 score"], "title": "CATENA: CAusal and TEmporal relation extraction from NAtural language texts"} {"abstract": "Attention-based recurrent neural network models for joint intent detection and slot filling have achieved the state-of-the-art performance, while they have independent attention weights. Considering that slot and intent have the strong relationship, this paper proposes a slot gate that focuses on learning the relationship between intent and slot attention vectors in order to obtain better semantic frame results by the global optimization. The experiments show that our proposed model significantly improves sentence-level semantic frame accuracy with 4.2{\\%} and 1.9{\\%} relative improvement compared to the attentional model on benchmark ATIS and Snips datasets respectively", "field": [], "task": ["Intent Detection", "Slot Filling", "Spoken Dialogue Systems", "Spoken Language Understanding"], "method": [], "dataset": ["ATIS", "SNIPS"], "metric": ["Slot F1 Score", "Intent Accuracy", "F1", "Accuracy"], "title": "Slot-Gated Modeling for Joint Slot Filling and Intent Prediction"} {"abstract": "In this paper we present DCFE, a real-time facial landmark regression method based on a coarse-to-fine Ensemble of Regression Trees (ERT). We use a simple Convolutional Neural Network (CNN) to generate probability maps of landmarks location. These are further refined with the ERT regressor, which is initialized by fitting a 3D face model to the landmark maps. The coarse-to-fine structure of the ERT lets us address the combinatorial explosion of parts deformation. With the 3D model we also tackle other key problems such as robust regressor initialization, self occlusions, and simultaneous frontal and profile face analysis. In the experiments DCFE achieves the best reported result in AFLW, COFW, and 300W private and common public data sets.", "field": [], "task": ["Face Alignment", "Face Model", "Facial Landmark Detection", "Regression"], "method": [], "dataset": ["300W", "IBUG", "COFW", "AFLW-Full"], "metric": ["NME", "Mean Error Rate", "Mean NME ", "Fullset (public)"], "title": "A Deeply-initialized Coarse-to-fine Ensemble of Regression Trees for Face Alignment"} {"abstract": "Knowledge graphs (KGs) can vary greatly from one domain to another. Therefore supervised approaches to both graph-to-text generation and text-to-graph knowledge extraction (semantic parsing) will always suffer from a shortage of domain-specific parallel graph-text data; at the same time, adapting a model trained on a different domain is often impossible due to little or no overlap in entities and relations. This situation calls for an approach that (1) does not need large amounts of annotated data and thus (2) does not need to rely on domain adaptation techniques to work well in different domains. To this end, we present the first approach to unsupervised text generation from KGs and show simultaneously how it can be used for unsupervised semantic parsing. We evaluate our approach on WebNLG v2.1 and a new benchmark leveraging scene graphs from Visual Genome. 
Our system outperforms strong baselines for both text$\\leftrightarrow$graph conversion tasks without any manual adaptation from one dataset to the other. In additional experiments, we investigate the impact of using different unsupervised objectives.", "field": [], "task": ["Domain Adaptation", "Unsupervised KG-to-text", "Unsupervised semantic parsing"], "method": [], "dataset": ["VG graph-text", "WebNLG v2.1"], "metric": ["BLEU", "F1"], "title": "An Unsupervised Joint System for Text Generation from Knowledge Graphs and Semantic Parsing"} {"abstract": "Sequential labeling-based NER approaches restrict each word belonging to at most one entity mention, which will face a serious problem when recognizing nested entity mentions. In this paper, we propose to resolve this problem by modeling and leveraging the head-driven phrase structures of entity mentions, i.e., although a mention can nest other mentions, they will not share the same head word. Specifically, we propose Anchor-Region Networks (ARNs), a sequence-to-nuggets architecture for nested mention detection. ARNs first identify anchor words (i.e., possible head words) of all mentions, and then recognize the mention boundaries for each anchor word by exploiting regular phrase structures. Furthermore, we also design Bag Loss, an objective function which can train ARNs in an end-to-end manner without using any anchor word annotation. Experiments show that ARNs achieve the state-of-the-art performance on three standard nested entity mention detection benchmarks.", "field": [], "task": ["Named Entity Recognition", "Nested Mention Recognition", "Nested Named Entity Recognition"], "method": [], "dataset": ["GENIA", "ACE 2005"], "metric": ["F1"], "title": "Sequence-to-Nuggets: Nested Entity Mention Detection via Anchor-Region Networks"} {"abstract": "Image-text matching has been a hot research topic bridging the vision and language areas. It remains challenging because the current representation of image usually lacks global semantic concepts as in its corresponding text caption. To address this issue, we propose a simple and interpretable reasoning model to generate visual representation that captures key objects and semantic concepts of a scene. Specifically, we first build up connections between image regions and perform reasoning with Graph Convolutional Networks to generate features with semantic relationships. Then, we propose to use the gate and memory mechanism to perform global semantic reasoning on these relationship-enhanced features, select the discriminative information and gradually generate the representation for the whole scene. Experiments validate that our method achieves a new state-of-the-art for the image-text matching on MS-COCO and Flickr30K datasets. It outperforms the current best method by 6.8% relatively for image retrieval and 4.8% relatively for caption retrieval on MS-COCO (Recall@1 using 1K test set). On Flickr30K, our model improves image retrieval by 12.6% relatively and caption retrieval by 5.8% relatively (Recall@1). 
Our code is available at https://github.com/KunpengLi1994/VSRN.", "field": [], "task": ["Cross-Modal Retrieval", "Image Retrieval", "Text Matching"], "method": [], "dataset": ["COCO 2014", "Flickr30K 1K test"], "metric": ["Image-to-text R@5", "Image-to-text R@1", "R@10", "Image-to-text R@10", "Text-to-image R@10", "Text-to-image R@1", "R@5", "R@1", "Text-to-image R@5"], "title": "Visual Semantic Reasoning for Image-Text Matching"} {"abstract": "Multi-scale (MS) approaches have been widely investigated for blind single image / video deblurring that sequentially recovers deblurred images in low spatial scale first and then in high spatial scale later with the output of lower scales. MS approaches have been effective especially for severe blurs induced by large motions in high spatial scale since those can be seen as small blurs in low spatial scale. In this work, we investigate alternative approach to MS, called multi-temporal (MT) approach, for non-uniform single image deblurring. We propose incremental temporal training with constructed MT level dataset from time-resolved dataset, develop novel MT-RNNs with recurrent feature maps, and investigate progressive single image deblurring over iterations. Our proposed MT methods outperform state-of-the-art MS methods on the GoPro dataset in PSNR with the smallest number of parameters.", "field": [], "task": ["Deblurring"], "method": [], "dataset": ["GoPro", "HIDE (trained on GOPRO)"], "metric": ["SSIM", "SSIM (sRGB)", "PSNR", "PSNR (sRGB)"], "title": "Multi-Temporal Recurrent Neural Networks For Progressive Non-Uniform Single Image Deblurring With Incremental Temporal Training"} {"abstract": "Unsupervised domain adaptation aims at transferring knowledge from the labeled source domain to the unlabeled target domain. Previous adversarial domain adaptation methods mostly adopt the discriminator with binary or $K$-dimensional output to perform marginal or conditional alignment independently. Recent experiments have shown that when the discriminator is provided with domain information in both domains and label information in the source domain, it is able to preserve the complex multimodal information and high semantic information in both domains. Following this idea, we adopt a discriminator with $2K$-dimensional output to perform both domain-level and class-level alignments simultaneously in a single discriminator. However, a single discriminator can not capture all the useful information across domains and the relationships between the examples and the decision boundary are rarely explored before. Inspired by multi-view learning and latest advances in domain adaptation, besides the adversarial process between the discriminator and the feature extractor, we also design a novel mechanism to make two discriminators pit against each other, so that they can provide diverse information for each other and avoid generating target features outside the support of the source domain. To the best of our knowledge, it is the first time to explore a dual adversarial strategy in domain adaptation. Moreover, we also use the semi-supervised learning regularization to make the representations more discriminative. 
Comprehensive experiments on two real-world datasets verify that our method outperforms several state-of-the-art domain adaptation methods.", "field": [], "task": ["Domain Adaptation", "MULTI-VIEW LEARNING", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["Office-31", "ImageCLEF-DA"], "metric": ["Average Accuracy", "Accuracy"], "title": "Dual Adversarial Domain Adaptation"} {"abstract": "Deep learning models have demonstrated high-quality performance in areas such as image classification and speech processing. However, creating a deep learning model using electronic health record (EHR) data, requires addressing particular privacy challenges that are unique to researchers in this domain. This matter focuses attention on generating realistic synthetic data while ensuring privacy. In this paper, we propose a novel framework called correlation-capturing Generative Adversarial Network (CorGAN), to generate synthetic healthcare records. In CorGAN we utilize Convolutional Neural Networks to capture the correlations between adjacent medical features in the data representation space by combining Convolutional Generative Adversarial Networks and Convolutional Autoencoders. To demonstrate the model fidelity, we show that CorGAN generates synthetic data with performance similar to that of real data in various Machine Learning settings such as classification and prediction. We also give a privacy assessment and report on statistical analysis regarding realistic characteristics of the synthetic data. The software of this work is open-source and is available at: https://github.com/astorfi/cor-gan.", "field": [], "task": ["Disease Prediction", "Image Classification", "Synthetic Data Generation"], "method": [], "dataset": ["UCI Epileptic Seizure Recognition"], "metric": ["AUROC"], "title": "CorGAN: Correlation-Capturing Convolutional Generative Adversarial Networks for Generating Synthetic Healthcare Records"} {"abstract": "The performance of machine learning models tends to suffer when the distributions of the training and test data differ. Domain Adaptation is the process of closing the distribution gap between datasets. In this paper, we show that Domain Adaptation methods using pair-wise relationships between source and target domain data can be formulated as a Graph Embedding in which the domain labels are incorporated into the structure of the intrinsic and penalty graphs. We analyse the loss functions of existing state-of-the-art Supervised Domain Adaptation methods and demonstrate that they perform Graph Embedding. Moreover, we highlight some generalisation and reproducibility issues related to the experimental setup commonly used to demonstrate the few-shot learning capabilities of these methods. We propose a rectified evaluation setup for more accurately assessing and comparing Supervised Domain Adaptation methods, and report experiments on the standard benchmark datasets Office31 and MNIST-USPS.", "field": [], "task": ["Domain Adaptation", "Few-Shot Learning", "Graph Embedding"], "method": [], "dataset": ["Office-31"], "metric": ["Average Accuracy"], "title": "Supervised Domain Adaptation: A Graph Embedding Perspective and a Rectified Experimental Protocol"} {"abstract": "Real-world contains an overwhelmingly large number of object classes, learning all of which at once is infeasible. Few shot learning is a promising learning paradigm due to its ability to learn out of order distributions quickly with only a few samples. 
Recent works [7, 41] show that simply learning a good feature embedding can outperform more sophisticated meta-learning and metric learning algorithms for few-shot learning. In this paper, we propose a simple approach to improve the representation capacity of deep neural networks for few-shot learning tasks. We follow a two-stage learning process: First, we train a neural network to maximize the entropy of the feature embedding, thus creating an optimal output manifold using a self-supervised auxiliary loss. In the second stage, we minimize the entropy on feature embedding by bringing self-supervised twins together, while constraining the manifold with student-teacher distillation. Our experiments show that, even in the first stage, self-supervision can outperform current state-of-the-art methods, with further gains achieved by our second stage distillation process. Our codes are available at: https://github.com/brjathu/SKD.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Knowledge Distillation", "Meta-Learning", "Metric Learning"], "method": [], "dataset": ["FC100 5-way (1-shot)", "CIFAR-FS 5-way (5-shot)", "Mini-Imagenet 5-way (1-shot)", "Tiered ImageNet 5-way (1-shot)", "Mini-Imagenet 5-way (5-shot)", "CIFAR-FS 5-way (1-shot)", "FC100 5-way (5-shot)", "Tiered ImageNet 5-way (5-shot)"], "metric": ["Accuracy"], "title": "Self-supervised Knowledge Distillation for Few-shot Learning"} {"abstract": "This paper studies the problem of learning semantic segmentation from image-level supervision only. Current popular solutions leverage object localization maps from classifiers as supervision signals, and struggle to make the localization maps capture more complete object content. Rather than previous efforts that primarily focus on intra-image information, we address the value of cross-image semantic relations for comprehensive object pattern mining. To achieve this, two neural co-attentions are incorporated into the classifier to complementarily capture cross-image semantic similarities and differences. In particular, given a pair of training images, one co-attention enforces the classifier to recognize the common semantics from co-attentive objects, while the other one, called contrastive co-attention, drives the classifier to identify the unshared semantics from the rest, uncommon objects. This helps the classifier discover more object patterns and better ground semantics in image regions. In addition to boosting object pattern learning, the co-attention can leverage context from other related images to improve localization map inference, hence eventually benefiting semantic segmentation learning. More essentially, our algorithm provides a unified framework that handles well different WSSS settings, i.e., learning WSSS with (1) precise image-level supervision only, (2) extra simple single-label data, and (3) extra noisy web data. It sets new state-of-the-arts on all these settings, demonstrating well its efficacy and generalizability.
Moreover, our approach ranked 1st place in the Weakly-Supervised Semantic Segmentation Track of CVPR2020 Learning from Imperfect Data Challenge.", "field": [], "task": ["Object Localization", "Semantic Segmentation", "Weakly-Supervised Semantic Segmentation"], "method": [], "dataset": ["PASCAL VOC 2012 val"], "metric": ["Mean IoU"], "title": "Mining Cross-Image Semantics for Weakly Supervised Semantic Segmentation"} {"abstract": "State-of-the-art semantic segmentation methods require sufficient labeled data to achieve good results and hardly work on unseen classes without fine-tuning. Few-shot segmentation is thus proposed to tackle this problem by learning a model that quickly adapts to new classes with a few labeled support samples. These frameworks still face the challenge of generalization ability reduction on unseen classes due to inappropriate use of high-level semantic information of training classes and spatial inconsistency between query and support targets. To alleviate these issues, we propose the Prior Guided Feature Enrichment Network (PFENet). It consists of novel designs of (1) a training-free prior mask generation method that not only retains generalization power but also improves model performance and (2) Feature Enrichment Module (FEM) that overcomes spatial inconsistency by adaptively enriching query features with support features and prior masks. Extensive experiments on PASCAL-5$^i$ and COCO prove that the proposed prior generation method and FEM both improve the baseline method significantly. Our PFENet also outperforms state-of-the-art methods by a large margin without efficiency loss. It is surprising that our model even generalizes to cases without labeled support samples. Our code is available at https://github.com/Jia-Research-Lab/PFENet/.", "field": [], "task": ["Few-Shot Semantic Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["PASCAL-5i (10-Shot)", "COCO-20i -> Pascal VOC (5-shot)", "COCO-20i (10-shot)", "PASCAL-5i (1-Shot)", "PASCAL-5i (5-Shot)", "COCO-20i -> Pascal VOC (1-shot)"], "metric": ["Mean IoU"], "title": "Prior Guided Feature Enrichment Network for Few-Shot Segmentation"} {"abstract": "In this work, we investigate the problem of lip-syncing a talking face video of an arbitrary identity to match a target speech segment. Current works excel at producing accurate lip movements on a static image or videos of specific people seen during the training phase. However, they fail to accurately morph the lip movements of arbitrary identities in dynamic, unconstrained talking face videos, resulting in significant parts of the video being out-of-sync with the new audio. We identify key reasons pertaining to this and hence resolve them by learning from a powerful lip-sync discriminator. Next, we propose new, rigorous evaluation benchmarks and metrics to accurately measure lip synchronization in unconstrained videos. Extensive quantitative evaluations on our challenging benchmarks show that the lip-sync accuracy of the videos generated by our Wav2Lip model is almost as good as real synced videos. We provide a demo video clearly showing the substantial impact of our Wav2Lip model and evaluation benchmarks on our website: \\url{cvit.iiit.ac.in/research/projects/cvit-projects/a-lip-sync-expert-is-all-you-need-for-speech-to-lip-generation-in-the-wild}. The code and models are released at this GitHub repository: \\url{github.com/Rudrabha/Wav2Lip}.
You can also try out the interactive demo at this link: \\url{bhaasha.iiit.ac.in/lipsync}.", "field": [], "task": ["Talking Face Generation", "Talking Head Generation", "Unconstrained Lip-synchronization"], "method": [], "dataset": ["LRS3", "LRS2", "LRW"], "metric": ["LSE-C", "LSE-D", "FID"], "title": "A Lip Sync Expert Is All You Need for Speech to Lip Generation In The Wild"} {"abstract": "Dialogue relation extraction (DRE) aims to detect the relation between two entities mentioned in a multi-party dialogue. It plays an important role in constructing knowledge graphs from conversational data increasingly abundant on the internet and facilitating intelligent dialogue system development. The prior methods of DRE do not meaningfully leverage speaker information-they just prepend the utterances with the respective speaker names. Thus, they fail to model the crucial inter-speaker relations that may give additional context to relevant argument entities through pronouns and triggers. We, however, present a graph attention network-based method for DRE where a graph, that contains meaningfully connected speaker, entity, entity-type, and utterance nodes, is constructed. This graph is fed to a graph attention network for context propagation among relevant nodes, which effectively captures the dialogue context. We empirically show that this graph-based approach quite effectively captures the relations between different entity pairs in a dialogue as it outperforms the state-of-the-art approaches by a significant margin on the benchmark dataset DialogRE. Our code is released at: https://github.com/declare-lab/dialog-HGAT", "field": [], "task": ["Dialog Relation Extraction", "Knowledge Graphs", "Relation Extraction"], "method": [], "dataset": ["DialogRE"], "metric": ["F1", "F1c"], "title": "Dialogue Relation Extraction with Document-level Heterogeneous Graph Attention Networks"} {"abstract": "One fundamental challenge of vehicle re-identification (re-id) is to learn robust and discriminative visual representation, given the significant intra-class vehicle variations across different camera views. As the existing vehicle datasets are limited in terms of training images and viewpoints, we propose to build a unique large-scale vehicle dataset (called VehicleNet) by harnessing four public vehicle datasets, and design a novel yet effective two-stage progressive approach to learning more robust visual representation from VehicleNet. The first stage of our approach is to learn the generic representation for all domains (i.e., source vehicle datasets) by training with the conventional classification loss. This stage relaxes the full alignment between the training and testing domains, as it is agnostic to the target vehicle domain. The second stage is to fine-tune the trained model purely based on the target vehicle set, by minimizing the distribution discrepancy between our VehicleNet and any target domain. We discuss our proposed multi-source dataset VehicleNet and evaluate the effectiveness of the two-stage progressive representation learning through extensive experiments. We achieve the state-of-art accuracy of 86.07% mAP on the private test set of AICity Challenge, and competitive results on two other public vehicle re-id datasets, i.e., VeRi-776 and VehicleID. 
We hope this new VehicleNet dataset and the learned robust representations can pave the way for vehicle re-id in the real-world environments.", "field": [], "task": ["Representation Learning", "Vehicle Re-Identification"], "method": [], "dataset": ["VeRi", "VehicleID Large", "VehicleID Small", "VehicleID Medium", "VeRi-776"], "metric": ["Rank-1", "mAP"], "title": "VehicleNet: Learning Robust Feature Representation for Vehicle Re-identification"} {"abstract": "Compressed sensing (CS) is a challenging problem in image processing due to reconstructing an almost complete image from a limited measurement. To achieve fast and accurate CS reconstruction, we synthesize the advantages of two well-known methods (neural network and optimization algorithm) to propose a novel optimization inspired neural network which dubbed AMP-Net. AMP-Net realizes the fusion of the Approximate Message Passing (AMP) algorithm and neural network. All of its parameters are learned automatically. Furthermore, we propose an AMPA-Net which uses three attention networks to improve the representation ability of AMP-Net. Finally, We demonstrate the effectiveness of AMP-Net and AMPA-Net on four standard CS reconstruction benchmark data sets. Our code is available on https://github.com/puallee/AMPA-Net.", "field": [], "task": ["Compressive Sensing"], "method": [], "dataset": ["Set11 cs=50%", "Urban100 - 2x upscaling", "BSDS100 - 2x upscaling", "BSD68 CS=50%"], "metric": ["Average PSNR"], "title": "AMPA-Net: Optimization-Inspired Attention Neural Network for Deep Compressed Sensing"} {"abstract": "Relation Extraction (RE) is to predict the relation type of two entities that are mentioned in a piece of text, e.g., a sentence or a dialogue. When the given text is long, it is challenging to identify indicative words for the relation prediction. Recent advances on RE task are from BERT-based sequence modeling and graph-based modeling of relationships among the tokens in the sequence. In this paper, we propose to construct a latent multi-view graph to capture various possible relationships among tokens. We then refine this graph to select important words for relation prediction. Finally, the representation of the refined graph and the BERT-based sequence representation are concatenated for relation extraction. Specifically, in our proposed GDPNet (Gaussian Dynamic Time Warping Pooling Net), we utilize Gaussian Graph Generator (GGG) to generate edges of the multi-view graph. The graph is then refined by Dynamic Time Warping Pooling (DTWPool). On DialogRE and TACRED, we show that GDPNet achieves the best performance on dialogue-level RE, and comparable performance with the state-of-the-arts on sentence-level RE.", "field": [], "task": ["Dialog Relation Extraction", "Relation Extraction"], "method": [], "dataset": ["DialogRE", "TACRED"], "metric": ["F1", "F1c"], "title": "GDPNet: Refining Latent Multi-View Graph for Relation Extraction"} {"abstract": "In this paper, we analyze several neural network designs (and their\nvariations) for sentence pair modeling and compare their performance\nextensively across eight datasets, including paraphrase identification,\nsemantic textual similarity, natural language inference, and question answering\ntasks. Although most of these models have claimed state-of-the-art performance,\nthe original papers often reported on only one or two selected datasets. 
We\nprovide a systematic study and show that (i) encoding contextual information by\nLSTM and inter-sentence interactions are critical, (ii) Tree-LSTM does not help\nas much as previously claimed but surprisingly improves performance on Twitter\ndatasets, (iii) the Enhanced Sequential Inference Model is the best so far for\nlarger datasets, while the Pairwise Word Interaction Model achieves the best\nperformance when less data is available. We release our implementations as an\nopen-source toolkit.", "field": [], "task": ["Natural Language Inference", "Paraphrase Identification", "Question Answering", "Semantic Textual Similarity", "Sentence Pair Modeling"], "method": [], "dataset": ["2017_test set"], "metric": ["10 fold Cross validation"], "title": "Neural Network Models for Paraphrase Identification, Semantic Textual Similarity, Natural Language Inference, and Question Answering"} {"abstract": "Scene text detection is an important step of scene text recognition system\nand also a challenging problem. Different from general object detection, the\nmain challenges of scene text detection lie on arbitrary orientations, small\nsizes, and significantly variant aspect ratios of text in natural images. In\nthis paper, we present an end-to-end trainable fast scene text detector, named\nTextBoxes++, which detects arbitrary-oriented scene text with both high\naccuracy and efficiency in a single network forward pass. No post-processing\nother than an efficient non-maximum suppression is involved. We have evaluated\nthe proposed TextBoxes++ on four public datasets. In all experiments,\nTextBoxes++ outperforms competing methods in terms of text localization\naccuracy and runtime. More specifically, TextBoxes++ achieves an f-measure of\n0.817 at 11.6fps for 1024*1024 ICDAR 2015 Incidental text images, and an\nf-measure of 0.5591 at 19.8fps for 768*768 COCO-Text images. Furthermore,\ncombined with a text recognizer, TextBoxes++ significantly outperforms the\nstate-of-the-art approaches for word spotting and end-to-end text recognition\ntasks on popular benchmarks. Code is available at:\nhttps://github.com/MhLiao/TextBoxes_plusplus", "field": [], "task": ["Object Detection", "Scene Text", "Scene Text Detection", "Scene Text Recognition"], "method": [], "dataset": ["ICDAR 2013", "ICDAR 2015", "COCO-Text"], "metric": ["F-Measure", "Recall", "Precision"], "title": "TextBoxes++: A Single-Shot Oriented Scene Text Detector"} {"abstract": "We consider the problem of adapting neural paragraph-level question answering\nmodels to the case where entire documents are given as input. Our proposed\nsolution trains models to produce well calibrated confidence scores for their\nresults on individual paragraphs. We sample multiple paragraphs from the\ndocuments during training, and use a shared-normalization training objective\nthat encourages the model to produce globally correct output. We combine this\nmethod with a state-of-the-art pipeline for training models on document QA\ndata. Experiments demonstrate strong performance on several document QA\ndatasets. 
Overall, we are able to achieve a score of 71.3 F1 on the web portion\nof TriviaQA, a large improvement from the 56.7 F1 of the previous best system.", "field": [], "task": ["Question Answering", "Reading Comprehension"], "method": [], "dataset": ["SQuAD1.1", "TriviaQA"], "metric": ["EM", "F1"], "title": "Simple and Effective Multi-Paragraph Reading Comprehension"} {"abstract": "Aspect-level sentiment classification aims at identifying the sentiment\npolarity of specific target in its context. Previous approaches have realized\nthe importance of targets in sentiment classification and developed various\nmethods with the goal of precisely modeling their contexts via generating\ntarget-specific representations. However, these studies always ignore the\nseparate modeling of targets. In this paper, we argue that both targets and\ncontexts deserve special treatment and need to be learned their own\nrepresentations via interactive learning. Then, we propose the interactive\nattention networks (IAN) to interactively learn attentions in the contexts and\ntargets, and generate the representations for targets and contexts separately.\nWith this design, the IAN model can well represent a target and its collocative\ncontext, which is helpful to sentiment classification. Experimental results on\nSemEval 2014 Datasets demonstrate the effectiveness of our model.", "field": [], "task": ["Aspect-Based Sentiment Analysis"], "method": [], "dataset": ["SemEval 2014 Task 4 Sub Task 2"], "metric": ["Laptop (Acc)", "Restaurant (Acc)", "Mean Acc (Restaurant + Laptop)"], "title": "Interactive Attention Networks for Aspect-Level Sentiment Classification"} {"abstract": "We propose coupled generative adversarial network (CoGAN) for learning a\njoint distribution of multi-domain images. In contrast to the existing\napproaches, which require tuples of corresponding images in different domains\nin the training set, CoGAN can learn a joint distribution without any tuple of\ncorresponding images. It can learn a joint distribution with just samples drawn\nfrom the marginal distributions. This is achieved by enforcing a weight-sharing\nconstraint that limits the network capacity and favors a joint distribution\nsolution over a product of marginal distributions one. We apply CoGAN to\nseveral joint distribution learning tasks, including learning a joint\ndistribution of color and depth images, and learning a joint distribution of\nface images with different attributes. For each task it successfully learns the\njoint distribution without any tuple of corresponding images. We also\ndemonstrate its applications to domain adaptation and image transformation.", "field": [], "task": ["Domain Adaptation", "Image-to-Image Translation"], "method": [], "dataset": ["Cityscapes Labels-to-Photo", "Cityscapes Photo-to-Labels"], "metric": ["Per-pixel Accuracy", "Per-class Accuracy", "Class IOU"], "title": "Coupled Generative Adversarial Networks"} {"abstract": "Modern NLP applications have enjoyed a great boost utilizing neural networks models. Such deep neural models, however, are not applicable to most human languages due to the lack of annotated training data for various NLP tasks. Cross-lingual transfer learning (CLTL) is a viable method for building NLP models for a low-resource target language by leveraging labeled data from other (source) languages. In this work, we focus on the multilingual transfer setting where training data in multiple source languages is leveraged to further boost target language performance. 
Unlike most existing methods that rely only on language-invariant features for CLTL, our approach coherently utilizes both language-invariant and language-specific features at instance level. Our model leverages adversarial networks to learn language-invariant features, and mixture-of-experts models to dynamically exploit the similarity between the target language and each individual source language. This enables our model to learn effectively what to share between various languages in the multilingual setup. Moreover, when coupled with unsupervised multilingual embeddings, our model can operate in a zero-resource setting where neither target language training data nor cross-lingual resources are available. Our model achieves significant performance gains over prior art, as shown in an extensive set of experiments over multiple text classification and sequence tagging tasks including a large-scale industry dataset.", "field": [], "task": ["Cross-Lingual NER", "Cross-Lingual Transfer", "Text Classification", "Transfer Learning"], "method": [], "dataset": ["CoNLL German", "CoNLL Dutch", "CoNLL Spanish"], "metric": ["F1"], "title": "Multi-Source Cross-Lingual Model Transfer: Learning What to Share"} {"abstract": "In this paper, we present an accurate yet effective solution for 6D pose\nestimation from an RGB image. The core of our approach is that we first\ndesignate a set of surface points on target object model as keypoints and then\ntrain a keypoint detector (KPD) to localize them. Finally a PnP algorithm can\nrecover the 6D pose according to the 2D-3D relationship of keypoints. Different\nfrom recent state-of-the-art CNN-based approaches that rely on a time-consuming\npost-processing procedure, our method can achieve competitive accuracy without\nany refinement after pose prediction. Meanwhile, we obtain a 30% relative\nimprovement in terms of ADD accuracy among methods without using refinement.\nMoreover, we succeed in handling heavy occlusion by selecting the most\nconfident keypoints to recover the 6D pose. For the sake of reproducibility, we\nwill make our code and models publicly available soon.", "field": [], "task": ["6D Pose Estimation", "6D Pose Estimation using RGB", "Pose Estimation", "Pose Prediction"], "method": [], "dataset": ["LineMOD"], "metric": ["Mean ADD", "Accuracy"], "title": "Estimating 6D Pose From Localizing Designated Surface Keypoints"} {"abstract": "Being able to recognize words as slots and detect the intent of an utterance has been a keen issue in natural language understanding. The existing works either treat slot filling and intent detection separately in a pipeline manner, or adopt joint models which sequentially label slots while summarizing the utterance-level intent without explicitly preserving the hierarchical relationship among words, slots, and intents. To exploit the semantic hierarchy for effective modeling, we propose a capsule-based neural network model which accomplishes slot filling and intent detection via a dynamic routing-by-agreement schema. A re-routing schema is proposed to further synergize the slot filling performance using the inferred intent representation. 
Experiments on two real-world datasets show the effectiveness of our model when compared with other alternative model architectures, as well as existing natural language understanding services.", "field": [], "task": ["Intent Detection", "Natural Language Understanding", "Slot Filling"], "method": [], "dataset": ["ATIS", "SNIPS"], "metric": ["Slot F1 Score", "Intent Accuracy", "F1", "Accuracy"], "title": "Joint Slot Filling and Intent Detection via Capsule Neural Networks"} {"abstract": "Classifying semantic relations between entity pairs in sentences is an\nimportant task in Natural Language Processing (NLP). Most previous models for\nrelation classification rely on the high-level lexical and syntactic features\nobtained by NLP tools such as WordNet, dependency parser, part-of-speech (POS)\ntagger, and named entity recognizers (NER). In addition, state-of-the-art\nneural models based on attention mechanisms do not fully utilize information of\nentity that may be the most crucial features for relation classification. To\naddress these issues, we propose a novel end-to-end recurrent neural model\nwhich incorporates an entity-aware attention mechanism with a latent entity\ntyping (LET) method. Our model not only utilizes entities and their latent\ntypes as features effectively but also is more interpretable by visualizing\nattention mechanisms applied to our model and results of LET. Experimental\nresults on the SemEval-2010 Task 8, one of the most popular relation\nclassification task, demonstrate that our model outperforms existing\nstate-of-the-art models without any high-level features.", "field": [], "task": ["Entity Typing", "Relation Classification", "Relation Extraction"], "method": [], "dataset": ["SemEval-2010 Task 8"], "metric": ["F1"], "title": "Semantic Relation Classification via Bidirectional LSTM Networks with Entity-aware Attention using Latent Entity Typing"} {"abstract": "Learning continuous representations of nodes is attracting growing interest\nin both academia and industry recently, due to their simplicity and\neffectiveness in a variety of applications. Most of existing node embedding\nalgorithms and systems are capable of processing networks with hundreds of\nthousands or a few millions of nodes. However, how to scale them to networks\nthat have tens of millions or even hundreds of millions of nodes remains a\nchallenging problem. In this paper, we propose GraphVite, a high-performance\nCPU-GPU hybrid system for training node embeddings, by co-optimizing the\nalgorithm and the system. On the CPU end, augmented edge samples are parallelly\ngenerated by random walks in an online fashion on the network, and serve as the\ntraining data. On the GPU end, a novel parallel negative sampling is proposed\nto leverage multiple GPUs to train node embeddings simultaneously, without much\ndata transfer and synchronization. Moreover, an efficient collaboration\nstrategy is proposed to further reduce the synchronization cost between CPUs\nand GPUs. Experiments on multiple real-world networks show that GraphVite is\nsuper efficient. It takes only about one minute for a network with 1 million\nnodes and 5 million edges on a single machine with 4 GPUs, and takes around 20\nhours for a network with 66 million nodes and 1.8 billion edges. 
Compared to\nthe current fastest system, GraphVite is about 50 times faster without any\nsacrifice on performance.", "field": [], "task": ["Dimensionality Reduction", "Knowledge Graph Embedding", "Link Prediction", "Network Embedding", "Node Classification"], "method": [], "dataset": [" FB15k", "WN18", "YouTube", "FB15k-237"], "metric": ["Hits@3", "training time (s)", "Micro-F1@2%", "runtime (s)", "Hits@1", "MR", "Macro-F1@2%", "MRR", "Hits@10"], "title": "GraphVite: A High-Performance CPU-GPU Hybrid System for Node Embedding"} {"abstract": "Haze degrades content and obscures information of images, which can negatively impact vision-based decision-making in real-time systems. In this paper, we propose an efficient fully convolutional neural network (CNN) image dehazing method designed to run on edge graphical processing units (GPUs). We utilize three variants of our architecture to explore the dependency of dehazed image quality on parameter count and model design. The first two variants presented, a small and big version, make use of a single efficient encoder-decoder convolutional feature extractor. The final variant utilizes a pair of encoder-decoders for atmospheric light and transmission map estimation. Each variant ends with an image refinement pyramid pooling network to form the final dehazed image. For the big variant of the single-encoder network, we demonstrate state-of-the-art performance on the NYU Depth dataset. For the small variant, we maintain competitive performance on the super-resolution O/I-HAZE datasets without the need for image cropping. Finally, we examine some challenges presented by the Dense-Haze dataset when leveraging CNN architectures for dehazing of dense haze imagery and examine the impact of loss function selection on image quality. Benchmarks are included to show the feasibility of introducing this approach into real-time systems.", "field": [], "task": ["Decision Making", "Image Cropping", "Image Dehazing", "Single Image Dehazing", "Super-Resolution"], "method": [], "dataset": ["O-Haze"], "metric": ["SIMM", "PSNR"], "title": "Feature Forwarding for Efficient Single Image Dehazing"} {"abstract": "Video recognition has been advanced in recent years by benchmarks with rich annotations. However, research is still mainly limited to human action or sports recognition - focusing on a highly specific video understanding task and thus leaving a significant gap towards describing the overall content of a video. We fill this gap by presenting a large-scale \"Holistic Video Understanding Dataset\"~(HVU). HVU is organized hierarchically in a semantic taxonomy that focuses on multi-label and multi-task video understanding as a comprehensive problem that encompasses the recognition of multiple semantic aspects in the dynamic scene. HVU contains approx.~572k videos in total with 9 million annotations for training, validation, and test set spanning over 3142 labels. HVU encompasses semantic aspects defined on categories of scenes, objects, actions, events, attributes, and concepts which naturally captures the real-world scenarios. We demonstrate the generalization capability of HVU on three challenging tasks: 1.) Video classification, 2.) Video captioning and 3.) Video clustering tasks. 
In particular for video classification, we introduce a new spatio-temporal deep neural network architecture called \"Holistic Appearance and Temporal Network\"~(HATNet) that builds on fusing 2D and 3D architectures into one by combining intermediate representations of appearance and temporal cues. HATNet focuses on the multi-label and multi-task learning problem and is trained in an end-to-end manner. Via our experiments, we validate the idea that holistic representation learning is complementary, and can play a key role in enabling many real-world applications.", "field": [], "task": ["Action Classification", "Action Recognition", "Multi-Task Learning", "Representation Learning", "Temporal Action Localization", "Video Captioning", "Video Classification", "Video Recognition", "Video Understanding"], "method": [], "dataset": ["Kinetics-400", "UCF101", "HMDB-51"], "metric": ["Average accuracy of 3 splits", "3-fold Accuracy", "Vid acc@1"], "title": "Large Scale Holistic Video Understanding"} {"abstract": "In this paper, we propose the first end-to-end convolutional neural network (CNN) architecture, Defocus Map Estimation Network (DMENet), for spatially varying defocus map estimation. To train the network, we produce a novel depth-of-field (DOF) dataset, SYNDOF, where each image is synthetically blurred with a ground-truth depth map. Due to the synthetic nature of SYNDOF, the feature characteristics of images in SYNDOF can differ from those of real defocused photos. To address this gap, we use domain adaptation that transfers the features of real defocused photos into those of synthetically blurred ones. Our DMENet consists of four subnetworks: blur estimation, domain adaptation, content preservation, and sharpness calibration networks. The subnetworks are connected to each other and jointly trained with their corresponding supervisions in an end-to-end manner. Our method is evaluated on publicly available blur detection and blur estimation datasets and the results show the state-of-the-art performance.", "field": [], "task": ["Defocus Estimation", "Domain Adaptation"], "method": [], "dataset": ["CUHK - Blur Detection Dataset"], "metric": ["Blur Segmentation Accuracy"], "title": "Deep Defocus Map Estimation Using Domain Adaptation"} {"abstract": "Flow-based generative models parameterize probability distributions through an invertible transformation and can be trained by maximum likelihood.
Invertible residual networks provide a flexible family of transformations where only Lipschitz conditions rather than strict architectural constraints are needed for enforcing invertibility. However, prior work trained invertible residual networks for density estimation by relying on biased log-density estimates whose bias increased with the network's expressiveness. We give a tractable unbiased estimate of the log density using a \"Russian roulette\" estimator, and reduce the memory required during training by using an alternative infinite series for the gradient. Furthermore, we improve invertible residual blocks by proposing the use of activation functions that avoid derivative saturation and generalizing the Lipschitz condition to induced mixed norms. The resulting approach, called Residual Flows, achieves state-of-the-art performance on density estimation amongst flow-based models, and outperforms networks that use coupling blocks at joint generative and discriminative modeling.", "field": [], "task": ["Density Estimation", "Image Generation"], "method": [], "dataset": ["CelebA 256x256", "CIFAR-10", "ImageNet 64x64", "ImageNet 32x32", "MNIST"], "metric": ["bits/dimension", "FID", "bpd", "Bits per dim"], "title": "Residual Flows for Invertible Generative Modeling"} {"abstract": "Recent analysis identified distinct genomic subtypes of lower-grade glioma tumors which are associated with shape features. In this study, we propose a fully automatic way to quantify tumor imaging characteristics using deep learning-based segmentation and test whether these characteristics are predictive of tumor genomic subtypes. We used preoperative imaging and genomic data of 110 patients from 5 institutions with lower-grade gliomas from The Cancer Genome Atlas. Based on automatic deep learning segmentations, we extracted three features which quantify two-dimensional and three-dimensional characteristics of the tumors. Genomic data for the analyzed cohort of patients consisted of previously identified genomic clusters based on IDH mutation and 1p/19q co-deletion, DNA methylation, gene expression, DNA copy number, and microRNA expression. To analyze the relationship between the imaging features and genomic clusters, we conducted the Fisher exact test for 10 hypotheses for each pair of imaging feature and genomic subtype. To account for multiple hypothesis testing, we applied a Bonferroni correction. P-values lower than 0.005 were considered statistically significant. We found the strongest association between RNASeq clusters and the bounding ellipsoid volume ratio ($p<0.0002$) and between RNASeq clusters and margin fluctuation ($p<0.005$). In addition, we identified associations between bounding ellipsoid volume ratio and all tested molecular subtypes ($p<0.02$) as well as between angular standard deviation and RNASeq cluster ($p<0.02$). 
In terms of automatic tumor segmentation that was used to generate the quantitative image characteristics, our deep learning algorithm achieved a mean Dice coefficient of 82% which is comparable to human performance.", "field": [], "task": ["3D Medical Imaging Segmentation", "Brain Segmentation", "Brain Tumor Segmentation", "Tumor Segmentation", "Two-sample testing"], "method": [], "dataset": ["Brain MRI segmentation"], "metric": ["Dice Score"], "title": "Association of genomic subtypes of lower-grade gliomas with shape features automatically extracted by a deep learning algorithm"} {"abstract": "Time series with non-uniform intervals occur in many applications, and are difficult to model using standard recurrent neural networks (RNNs). We generalize RNNs to have continuous-time hidden dynamics defined by ordinary differential equations (ODEs), a model we call ODE-RNNs. Furthermore, we use ODE-RNNs to replace the recognition network of the recently-proposed Latent ODE model. Both ODE-RNNs and Latent ODEs can naturally handle arbitrary time gaps between observations, and can explicitly model the probability of observation times using Poisson processes. We show experimentally that these ODE-based models outperform their RNN-based counterparts on irregularly-sampled data.", "field": [], "task": ["Multivariate Time Series Forecasting", "Multivariate Time Series Imputation", "Time Series", "Time Series Classification"], "method": [], "dataset": ["MuJoCo", "PhysioNet Challenge 2012"], "metric": ["MSE (10^-2, 50% missing)", "MSE (10^2, 50% missing)", "AUC Stdev", "MSE stdev", "mse (10^-3)", "AUC"], "title": "Latent ODEs for Irregularly-Sampled Time Series"} {"abstract": "Fully convolutional neural networks (FCNs) have shown their advantages in the salient object detection task. However, most existing FCNs-based methods still suffer from coarse object boundaries. In this paper, to solve this problem, we focus on the complementarity between salient edge information and salient object information. Accordingly, we present an edge guidance network (EGNet) for salient object detection with three steps to simultaneously model these two kinds of complementary information in a single network. In the first step, we extract the salient object features by a progressive fusion way. In the second step, we integrate the local edge information and global location information to obtain the salient edge features. Finally, to sufficiently leverage these complementary features, we couple the same salient edge features with salient object features at various resolutions. Benefiting from the rich edge information and location information in salient edge features, the fused features can help locate salient objects, especially their boundaries more accurately. Experimental results demonstrate that the proposed method performs favorably against the state-of-the-art methods on six widely used datasets without any pre-processing and post-processing. The source code is available at http: //mmcheng.net/egnet/. 
\r", "field": [], "task": ["Camouflaged Object Segmentation", "Co-Salient Object Detection", "Object Detection", "RGB Salient Object Detection", "Salient Object Detection"], "method": [], "dataset": ["CoSal2015", "COD", "CoSOD3k", "CAMO", "CoCA"], "metric": ["max E-Measure", "S-Measure", "Weighted F-Measure", "Average MAE", "Mean F-measure", "mean E-Measure", "MAE", "E-Measure", "max F-Measure"], "title": "EGNet: Edge Guidance Network for Salient Object Detection"} {"abstract": "In this paper, we present Hierarchical Graph Network (HGN) for multi-hop question answering. To aggregate clues from scattered texts across multiple paragraphs, a hierarchical graph is created by constructing nodes on different levels of granularity (questions, paragraphs, sentences, entities), the representations of which are initialized with pre-trained contextual encoders. Given this hierarchical graph, the initial node representations are updated through graph propagation, and multi-hop reasoning is performed via traversing through the graph edges for each subsequent sub-task (e.g., paragraph selection, supporting facts extraction, answer prediction). By weaving heterogeneous nodes into an integral unified graph, this hierarchical differentiation of node granularity enables HGN to support different question answering sub-tasks simultaneously. Experiments on the HotpotQA benchmark demonstrate that the proposed model achieves new state of the art, outperforming existing multi-hop QA approaches.", "field": [], "task": ["Multi-hop Question Answering", "Question Answering"], "method": [], "dataset": ["HotpotQA"], "metric": ["Sup", "Ans", "Joint F1"], "title": "Hierarchical Graph Network for Multi-hop Question Answering"} {"abstract": "One of the well-known challenges in computer vision tasks is the visual diversity of images, which could result in an agreement or disagreement between the learned knowledge and the visual content exhibited by the current observation. In this work, we first define such an agreement in a concepts learning process as congruency. Formally, given a particular task and sufficiently large dataset, the congruency issue occurs in the learning process whereby the task-specific semantics in the training data are highly varying. We propose a Direction Concentration Learning (DCL) method to improve congruency in the learning process, where enhancing congruency influences the convergence path to be less circuitous. The experimental results show that the proposed DCL method generalizes to state-of-the-art models and optimizers, as well as improves the performances of saliency prediction task, continual learning task, and classification task. Moreover, it helps mitigate the catastrophic forgetting problem in the continual learning task. The code is publicly available at https://github.com/luoyan407/congruency.", "field": [], "task": ["Continual Learning", "Image Classification", "Saliency Prediction"], "method": [], "dataset": ["Tiny ImageNet Classification"], "metric": ["Validation Acc"], "title": "Direction Concentration Learning: Enhancing Congruency in Machine Learning"} {"abstract": "In few-shot classification, the aim is to learn models able to discriminate classes using only a small number of labeled examples. In this context, works have proposed to introduce Graph Neural Networks (GNNs) aiming at exploiting the information contained in other samples treated concurrently, what is commonly referred to as the transductive setting in the literature. 
These GNNs are trained all together with a backbone feature extractor. In this paper, we propose a new method that relies on graphs only to interpolate feature vectors instead, resulting in a transductive learning setting with no additional parameters to train. Our proposed method thus exploits two levels of information: a) transfer features obtained on generic datasets, b) transductive information obtained from other samples to be classified. Using standard few-shot vision classification datasets, we demonstrate its ability to bring significant gains compared to other works.", "field": [], "task": ["Few-Shot Image Classification"], "method": [], "dataset": ["Mini-ImageNet - 1-Shot Learning", "CUB 200 5-way 1-shot", "CUB 200 5-way 5-shot"], "metric": ["Accuracy"], "title": "Graph-based Interpolation of Feature Vectors for Accurate Few-Shot Classification"} {"abstract": "Auditory attention to natural speech is a complex brain process. Its quantification from physiological signals can be valuable to improving and widening the range of applications of current brain-computer-interface systems, however it remains a challenging task. In this article, we present a dataset of physiological signals collected from an experiment on auditory attention to natural speech. In this experiment, auditory stimuli consisting of reproductions of English sentences in different auditory conditions were presented to 25 non-native participants, who were asked to transcribe the sentences. During the experiment, 14 channel electroencephalogram, galvanic skin response, and photoplethysmogram signals were collected from each participant. Based on the number of correctly transcribed words, an attention score was obtained for each auditory stimulus presented to subjects. A strong correlation ($p<<0.0001$) between the attention score and the auditory conditions was found. We also formulate four different predictive tasks involving the collected dataset and develop a feature extraction framework. The results for each predictive task are obtained using a Support Vector Machine with spectral features, and are better than chance level. The dataset has been made publicly available for further research, along with a python library - phyaat to facilitate the preprocessing, modeling, and reproduction of the results presented in this paper. The dataset and other resources are shared on webpage - https://phyaat.github.io.", "field": [], "task": ["Attention Score Prediction", "LWR Classification", "Noise Level Prediction", "Semanticity prediction"], "method": [], "dataset": ["PhyAAt"], "metric": ["MAE", "Accuracy"], "title": "PhyAAt: Physiology of Auditory Attention to Speech Dataset"} {"abstract": "Semantic segmentation is an important component in the perception systems of autonomous vehicles. In this work, we adopt recent advances in both image and point cloud segmentation to achieve a better accuracy in the task of segmenting LiDAR scans. KPRNet improves the convolutional neural network architecture of 2D projection methods and utilizes KPConv to replace the commonly used post-processing techniques with a learnable point-wise component which allows us to obtain more accurate 3D labels. 
With these improvements our model outperforms the current best method on the SemanticKITTI benchmark, reaching an mIoU of 63.1.", "field": [], "task": ["3D Semantic Segmentation", "Autonomous Vehicles", "LIDAR Semantic Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["SemanticKITTI"], "metric": ["mIoU"], "title": "KPRNet: Improving projection-based LiDAR semantic segmentation"} {"abstract": "Modern lane detection methods have achieved remarkable performances in complex real-world scenarios, but many have issues maintaining real-time efficiency, which is important for autonomous vehicles. In this work, we propose LaneATT: an anchor-based deep lane detection model, which, akin to other generic deep object detectors, uses the anchors for the feature pooling step. Since lanes follow a regular pattern and are highly correlated, we hypothesize that in some cases global information may be crucial to infer their positions, especially in conditions such as occlusion, missing lane markers, and others. Thus, this work proposes a novel anchor-based attention mechanism that aggregates global information. The model was evaluated extensively on three of the most widely used datasets in the literature. The results show that our method outperforms the current state-of-the-art methods showing both higher efficacy and efficiency. Moreover, an ablation study is performed along with a discussion on efficiency trade-off options that are useful in practice.", "field": [], "task": ["Autonomous Vehicles", "Lane Detection"], "method": [], "dataset": ["TuSimple", "CULane"], "metric": ["F1 score", "Accuracy"], "title": "Keep your Eyes on the Lane: Real-time Attention-guided Lane Detection"} {"abstract": "Parsing sentences into syntax trees can benefit downstream applications in NLP. Transition-based parsers build trees by executing actions in a state transition system. They are computationally efficient, and can leverage machine learning to predict actions based on partial trees. However, existing transition-based parsers are predominantly based on the shift-reduce transition system, which does not align with how humans are known to parse sentences. Psycholinguistic research suggests that human parsing is strongly incremental: humans grow a single parse tree by adding exactly one token at each step. In this paper, we propose a novel transition system called attach-juxtapose. It is strongly incremental; it represents a partial sentence using a single tree; each action adds exactly one token into the partial tree. Based on our transition system, we develop a strongly incremental parser. At each step, it encodes the partial tree using a graph neural network and predicts an action. We evaluate our parser on Penn Treebank (PTB) and Chinese Treebank (CTB). On PTB, it outperforms existing parsers trained with only constituency trees; and it performs on par with state-of-the-art parsers that use dependency trees as additional training data. On CTB, our parser establishes a new state of the art. Code is available at https://github.com/princeton-vl/attach-juxtapose-parser.", "field": [], "task": ["Constituency Parsing"], "method": [], "dataset": ["Penn Treebank"], "metric": ["F1 score"], "title": "Strongly Incremental Constituency Parsing with Graph Neural Networks"} {"abstract": "Existing approaches for named entity recognition suffer from data sparsity problems when conducted on short and informal texts, especially user-generated social media content. 
Semantic augmentation is a potential way to alleviate this problem. Given that rich semantic information is implicitly preserved in pre-trained word embeddings, they are potential ideal resources for semantic augmentation. In this paper, we propose a neural-based approach to NER for social media texts where both local (from running text) and augmented semantics are taken into account. In particular, we obtain the augmented semantic information from a large-scale corpus, and propose an attentive semantic augmentation module and a gate module to encode and aggregate such information, respectively. Extensive experiments are performed on three benchmark datasets collected from English and Chinese social media platforms, where the results demonstrate the superiority of our approach to previous studies across all three datasets.", "field": [], "task": ["Chinese Named Entity Recognition", "Named Entity Recognition", "Word Embeddings"], "method": [], "dataset": ["Weibo NER"], "metric": ["F1"], "title": "Named Entity Recognition for Social Media Texts with Semantic Augmentation"} {"abstract": "Recently significant progress has been made in pedestrian detection, but it remains challenging to achieve high performance in occluded and crowded scenes. It could be mostly attributed to the widely used representation of pedestrians, i.e., 2D axis-aligned bounding box, which just describes the approximate location and size of the object. Bounding box models the object as a uniform distribution within the boundary, making pedestrians indistinguishable in occluded and crowded scenes due to much noise. To eliminate the problem, we propose a novel representation based on 2D beta distribution, named Beta Representation. It pictures a pedestrian by explicitly constructing the relationship between full-body and visible boxes, and emphasizes the center of visual mass by assigning different probability values to pixels. As a result, Beta Representation is much better for distinguishing highly-overlapped instances in crowded scenes with a new NMS strategy named BetaNMS. What\u2019s more, to fully exploit Beta Representation, a novel pipeline Beta R-CNN equipped with BetaHead and BetaMask is proposed, leading to high detection performance in occluded and crowded scenes.", "field": [], "task": ["Object Detection", "Pedestrian Detection"], "method": [], "dataset": ["CityPersons", "CrowdHuman (full body)"], "metric": ["Reasonable MR^-2", "mMR", "AP", "Heavy MR^-2", "Partial MR^-2", "Bare MR^-2"], "title": "Beta R-CNN: Looking into Pedestrian Detection from Another Perspective"} {"abstract": "Twitter has acted as an important source of information during disasters and pandemic, especially during the times of COVID-19. In this paper, we describe our system entry for WNUT 2020 Shared Task-3. The task was aimed at automating the extraction of a variety of COVID-19 related events from Twitter, such as individuals who recently contracted the virus, someone with symptoms who were denied testing and believed remedies against the infection. The system consists of separate multi-task models for slot-filling subtasks and sentence-classification subtasks while leveraging the useful sentence-level information for the corresponding event. The system uses COVID-Twitter-Bert with attention-weighted pooling of candidate slot-chunk features to capture the useful information chunks. The system ranks 1st at the leader-board with F1 of 0.6598, without using any ensembles or additional datasets.
The code and trained models are available at this https URL.", "field": [], "task": ["Extracting COVID-19 Events from Twitter"], "method": [], "dataset": ["W-NUT 2020 Shared Task-3"], "metric": ["F1"], "title": "Leveraging Event Specific and Chunk Span features to Extract COVID Events from tweets"} {"abstract": "Biologists who use electron microscopy (EM) images to build nanoscale 3D models of whole cells and their organelles have historically been limited to small numbers of cells and cellular features due to constraints in imaging and analysis. This has been a major factor limiting insight into the complex variability of cellular environments. Modern EM can produce gigavoxel image volumes containing large numbers of cells, but accurate manual segmentation of image features is slow and limits the creation of cell models. Segmentation algorithms based on convolutional neural networks can process large volumes quickly, but achieving EM task accuracy goals often challenges current techniques. Here, we define dense cellular segmentation as a multiclass semantic segmentation task for modeling cells and large numbers of their organelles, and give an example in human blood platelets. We present an algorithm using novel hybrid 2D\u20133D segmentation networks to produce dense cellular segmentations with accuracy levels that outperform baseline methods and approach those of human annotators. To our knowledge, this work represents the first published approach to automating the creation of cell models with this level of structural detail.", "field": [], "task": ["3D Semantic Segmentation", "Electron Microscopy", "Semantic Segmentation"], "method": [], "dataset": ["3D Platelet EM"], "metric": ["Mean IoU (test)"], "title": "Dense cellular segmentation for EM using 2D\u20133D neural network ensembles"} {"abstract": "Knowledge graphs contain a wealth of real-world knowledge that can provide strong support for artificial intelligence applications. Much progress has been made in knowledge graph completion, state-of-the-art models are based on graph convolutional neural networks. These models automatically extract features, in combination with the features of the graph model, to generate feature embeddings with a strong expressive ability. However, these methods assign the same weights on the relation path in the knowledge graph and ignore the rich information presented in neighbor nodes, which result in incomplete mining of triple features. To this end, we propose Graph Attenuated Attention networks(GAATs), a novel representation method, which integrates an attenuated attention mechanism to assign different weight in different relation path and acquire the information from the neighborhoods. As a result, entities and relations can be learned in any neighbors. Our empirical research provides insight into the effectiveness of the attenuated attention-based models, and we show significant improvement compared to the state-of-the-art methods on two benchmark datasets WN18RR and FB15k-237.", "field": [], "task": ["Graph Embedding", "Knowledge Base Completion", "Knowledge Graph Completion", "Knowledge Graph Embedding", "Knowledge Graphs", "Link Prediction", "Relational Reasoning"], "method": [], "dataset": ["WN18RR", "FB15k-237"], "metric": ["Hits@3", "Hits@1", "MR", "MRR", "Hits@10"], "title": "Knowledge Graph Embedding via Graph Attenuated Attention Networks"} {"abstract": "Tensor factorization based models have shown great power in knowledge graph completion (KGC). 
However, their performance usually suffers from the overfitting problem seriously. This motivates various regularizers---such as the squared Frobenius norm and tensor nuclear norm regularizers---while the limited applicability significantly limits their practical usage. To address this challenge, we propose a novel regularizer---namely, DUality-induced RegulArizer (DURA)---which is not only effective in improving the performance of existing models but widely applicable to various methods. The major novelty of DURA is based on the observation that, for an existing tensor factorization based KGC model (primal), there is often another distance based KGC model (dual) closely associated with it. Experiments show that DURA yields consistent and significant improvements on benchmarks.", "field": [], "task": ["Knowledge Graph Completion", "Link Prediction"], "method": [], "dataset": ["WN18RR", "YAGO3-10", "FB15k-237"], "metric": ["Hits@10", "MRR", "Hits@1"], "title": "Duality-Induced Regularizer for Tensor Factorization Based Knowledge Graph Completion"} {"abstract": "Weakly-supervised object detection (WOD) is a challenging problems in\ncomputer vision. The key problem is to simultaneously infer the exact object\nlocations in the training images and train the object detectors, given only the\ntraining images with weak image-level labels. Intuitively, by simulating the\nselective attention mechanism of human visual system, saliency detection\ntechnique can select attractive objects in scenes and thus is a potential way\nto provide useful priors for WOD. However, the way to adopt saliency detection\nin WOD is not trivial since the detected saliency region might be possibly\nhighly ambiguous in complex cases. To this end, this paper first\ncomprehensively analyzes the challenges in applying saliency detection to WOD.\nThen, we make one of the earliest efforts to bridge saliency detection to WOD\nvia the self-paced curriculum learning, which can guide the learning procedure\nto gradually achieve faithful knowledge of multi-class objects from easy to\nhard. The experimental results demonstrate that the proposed approach can\nsuccessfully bridge saliency detection and WOD tasks and achieve the\nstate-of-the-art object detection results under the weak supervision.", "field": [], "task": ["Curriculum Learning", "Object Detection", "Saliency Detection", "Weakly Supervised Object Detection"], "method": [], "dataset": ["PASCAL VOC 2007"], "metric": ["MAP"], "title": "Bridging Saliency Detection to Weakly Supervised Object Detection Based on Self-paced Curriculum Learning"} {"abstract": "We tackle the challenging task of estimating global 3D joint locations for both hands via only monocular RGB input images. We propose a novel multi-stage convolutional neural network based pipeline that accurately segments and locates the hands despite occlusion between two hands and complex background noise and estimates the 2D and 3D canonical joint locations without any depth information. Global joint locations with respect to the camera origin are computed using the hand pose estimations and the actual length of the key bone with a novel projection algorithm. To train the CNNs for this new task, we introduce a large-scale synthetic 3D hand pose dataset. We demonstrate that our system outperforms previous works on 3D canonical hand pose estimation benchmark datasets with RGB-only information. 
Additionally, we present the first work that achieves accurate global 3D hand tracking on both hands using RGB-only inputs and provide extensive quantitative and qualitative evaluation.", "field": [], "task": ["3D Canonical Hand Pose Estimation", "3D Hand Pose Estimation", "3D Pose Estimation", "Hand Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["STB", "Ego3DHands", "RHP"], "metric": ["AUC"], "title": "Two-hand Global 3D Pose Estimation Using Monocular RGB"} {"abstract": "\n While training a machine learning model using multiple workers, each of which collects data from its own data source, it would be useful when the data collected from different workers are unique and different. Ironically, recent analysis of decentralized parallel stochastic gradient descent (D-PSGD) relies on the assumption that the data hosted on different workers are not too different. In this paper, we ask the question: Can we design a decentralized parallel stochastic gradient descent algorithm that is less sensitive to the data variance across workers? In this paper, we present D$^2$, a novel decentralized parallel stochastic gradient descent algorithm designed for large data variance \\xr{among workers} (imprecisely, \u201cdecentralized\u201d data). The core of D$^2$ is a variance reduction extension of D-PSGD. It improves the convergence rate from $O\\left({\\sigma \\over \\sqrt{nT}} + {(n\\zeta^2)^{\\frac{1}{3}} \\over T^{2/3}}\\right)$ to $O\\left({\\sigma \\over \\sqrt{nT}}\\right)$ where $\\zeta^{2}$ denotes the variance among data on different workers. As a result, D$^2$ is robust to data variance among workers. We empirically evaluated D$^2$ on image classification tasks, where each worker has access to only the data of a limited set of labels, and find that D$^2$ significantly outperforms D-PSGD.\n ", "field": [], "task": ["Image Classification", "Multi-view Subspace Clustering"], "method": [], "dataset": ["ORL"], "metric": ["Accuracy"], "title": "$D^2$: Decentralized Training over Decentralized Data"} {"abstract": "Real-world events exhibit a high degree of interdependence and connections, and hence data points generated also inherit the linkages. However, the majority of AI/ML techniques leave out the linkages among data points. The recent surge of interest in graph-based AI/ML techniques is aimed to leverage the linkages. Graph-based learning algorithms utilize the data and related information effectively to build superior models. Neural Graph Learning (NGL) is one such technique that utilizes a traditional machine learning algorithm with a modified loss function to leverage the edges in the graph structure. In this paper, we propose a model using NGL - NodeNet, to solve node classification task for citation graphs. We discuss our modifications and their relevance to the task. We further compare our results with the current state of the art and investigate reasons for the superior performance of NodeNet.", "field": [], "task": ["Graph Learning", "Node Classification"], "method": [], "dataset": ["Cora", "Pubmed", "Citeseer"], "metric": ["Accuracy"], "title": "NodeNet: A Graph Regularised Neural Network for Node Classification"} {"abstract": "A standard model for Recommender Systems is the Matrix Completion setting:\ngiven partially known matrix of ratings given by users (rows) to items\n(columns), infer the unknown ratings. 
In the last decades, few attempts were\ndone to handle that objective with Neural Networks, but recently an\narchitecture based on Autoencoders proved to be a promising approach. In the\ncurrent paper, we enhanced that architecture (i) by using a loss function\nadapted to input data with missing values, and (ii) by incorporating side\ninformation. The experiments demonstrate that while side information only\nslightly improves the test error averaged on all users/items, it has more impact\non cold users/items.", "field": [], "task": ["Matrix Completion", "Recommendation Systems"], "method": [], "dataset": ["MovieLens 1M", "Douban", "MovieLens 10M"], "metric": ["RMSE"], "title": "Hybrid Recommender System based on Autoencoders"} {"abstract": "This paper presents the first use of graph neural networks (GNNs) for higher-order proof search and demonstrates that GNNs can improve upon state-of-the-art results in this domain. Interactive, higher-order theorem provers allow for the formalization of most mathematical theories and have been shown to pose a significant challenge for deep learning. Higher-order logic is highly expressive and, even though it is well-structured with a clearly defined grammar and semantics, there still remains no well-established method to convert formulas into graph-based representations. In this paper, we consider several graphical representations of higher-order logic and evaluate them against the HOList benchmark for higher-order theorem proving.", "field": [], "task": ["Automated Theorem Proving"], "method": [], "dataset": ["HOList benchmark"], "metric": ["Percentage correct"], "title": "Graph Representations for Higher-Order Logic and Theorem Proving"} {"abstract": "Metric learning algorithms produce distance metrics that capture the important relationships among data. In this work we study the connection between metric learning and collaborative filtering. We propose Collaborative Metric Learning (CML) which learns a joint metric space to encode not only users\u2019 preferences but also the user-user and item-item similarity. The proposed algorithm outperforms state-of-the-art collaborative filtering algorithms on a wide range of recommendation tasks and uncovers the underlying spectrum of users\u2019 fine-grained preferences. CML also achieves significant speedup for Top-K recommendation tasks using off-the-shelf, approximate nearest-neighbor search, with negligible accuracy reduction.", "field": [], "task": ["Metric Learning", "Recommendation Systems"], "method": [], "dataset": ["MovieLens 1M", "MovieLens 20M", "Million Song Dataset", "Netflix"], "metric": ["Recall@100", "nDCG@10", "HR@10", "Recall@50"], "title": "Collaborative Metric Learning"} {"abstract": "The discrimination and simplicity of features are very important for\neffective and efficient pedestrian detection. However, most state-of-the-art\nmethods are unable to achieve good tradeoff between accuracy and efficiency.\nInspired by some simple inherent attributes of pedestrians (i.e., appearance\nconstancy and shape symmetry), we propose two new types of non-neighboring\nfeatures (NNF): side-inner difference features (SIDF) and symmetrical\nsimilarity features (SSF). SIDF can characterize the difference between the\nbackground and pedestrian and the difference between the pedestrian contour and\nits inner part. SSF can capture the symmetrical similarity of pedestrian shape.\nHowever, it's difficult for neighboring features to have such above\ncharacterization abilities.
Finally, we propose to combine both non-neighboring\nand neighboring features for pedestrian detection. It's found that\nnon-neighboring features can further decrease the average miss rate by 4.44%.\nExperimental results on INRIA and Caltech pedestrian datasets demonstrate the\neffectiveness and efficiency of the proposed method. Compared to the\nstate-of-the-art methods without using CNN, our method achieves the best\ndetection performance on Caltech, outperforming the second best method (i.e.,\nCheckboards) by 1.63%.", "field": [], "task": ["Pedestrian Detection"], "method": [], "dataset": ["Caltech"], "metric": ["Reasonable Miss Rate"], "title": "Pedestrian Detection Inspired by Appearance Constancy and Shape Symmetry"} {"abstract": "We propose AI-CARGO, a revenue management system for air-cargo that combines machine learning prediction with decision-making using mathematical optimization methods. AI-CARGO addresses a problem that is unique to the air-cargo business, namely the wide discrepancy between the quantity (weight or volume) that a shipper will book and the actual received amount at departure time by the airline. The discrepancy results in sub-optimal and inefficient behavior by both the shipper and the airline resulting in the overall loss of potential revenue for the airline. AI-CARGO also includes a data cleaning component to deal with the heterogeneous forms in which booking data is transmitted to the airline cargo system. AI-CARGO is deployed in the production environment of a large commercial airline company. We have validated the benefits of AI-CARGO using real and synthetic datasets. Especially, we have carried out simulations using dynamic programming techniques to elicit the impact on offloading costs and revenue generation of our proposed system. Our results suggest that combining prediction within a decision-making framework can help dramatically to reduce offloading costs and optimize revenue generation.", "field": [], "task": ["Decision Making", "Stochastic Optimization"], "method": [], "dataset": ["ImageNet ResNet-50 - 60 Epochs"], "metric": ["Top 1 Accuracy"], "title": "AI-CARGO: A Data-Driven Air-Cargo Revenue Management System"} {"abstract": "In weakly-supervised temporal action localization, previous works have failed to locate dense and integral regions for each entire action due to the overestimation of the most salient regions. To alleviate this issue, we propose a marginalized average attentional network (MAAN) to suppress the dominant response of the most salient regions in a principled manner. The MAAN employs a novel marginalized average aggregation (MAA) module and learns a set of latent discriminative probabilities in an end-to-end fashion. MAA samples multiple subsets from the video snippet features according to a set of latent discriminative probabilities and takes the expectation over all the averaged subset features. Theoretically, we prove that the MAA module with learned latent discriminative probabilities successfully reduces the difference in responses between the most salient regions and the others. Therefore, MAAN is able to generate better class activation sequences and identify dense and integral action regions in the videos. Moreover, we propose a fast algorithm to reduce the complexity of constructing MAA from O($2^T$) to O($T^2$). 
Extensive experiments on two large-scale video datasets show that our MAAN achieves superior performance on weakly-supervised temporal action localization.", "field": [], "task": ["Action Localization", "Temporal Action Localization", "Weakly Supervised Action Localization", "Weakly-supervised Temporal Action Localization", "Weakly Supervised Temporal Action Localization"], "method": [], "dataset": ["ActivityNet-1.3", "THUMOS 2014"], "metric": ["mAP@0.5", "mAP@0.1:0.7"], "title": "Marginalized Average Attentional Network for Weakly-Supervised Learning"} {"abstract": "A unified deep neural network, denoted the multi-scale CNN (MS-CNN), is\nproposed for fast multi-scale object detection. The MS-CNN consists of a\nproposal sub-network and a detection sub-network. In the proposal sub-network,\ndetection is performed at multiple output layers, so that receptive fields\nmatch objects of different scales. These complementary scale-specific detectors\nare combined to produce a strong multi-scale object detector. The unified\nnetwork is learned end-to-end, by optimizing a multi-task loss. Feature\nupsampling by deconvolution is also explored, as an alternative to input\nupsampling, to reduce the memory and computation costs. State-of-the-art object\ndetection performance, at up to 15 fps, is reported on datasets, such as KITTI\nand Caltech, containing a substantial number of small objects.", "field": [], "task": ["Face Detection", "Object Detection", "Pedestrian Detection", "Real-Time Object Detection"], "method": [], "dataset": ["WIDER Face (Hard)", "Caltech"], "metric": ["Reasonable Miss Rate", "AP"], "title": "A Unified Multi-scale Deep Convolutional Neural Network for Fast Object Detection"} {"abstract": "Recently, it has been shown that in super-resolution, there exists a tradeoff\nrelationship between the quantitative and perceptual quality of super-resolved\nimages, which correspond to the similarity to the ground-truth images and the\nnaturalness, respectively. In this paper, we propose a novel super-resolution\nmethod that can improve the perceptual quality of the upscaled images while\npreserving the conventional quantitative performance. The proposed method\nemploys a deep network for multi-pass upscaling in company with a discriminator\nnetwork and two quantitative score predictor networks. Experimental results\ndemonstrate that the proposed method achieves a good balance of the\nquantitative and perceptual quality, showing more satisfactory results than\nexisting methods.", "field": [], "task": ["Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["Set5 - 4x upscaling", "BSD100 - 4x upscaling", "Set14 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Deep Learning-based Image Super-Resolution Considering Quantitative and Perceptual Quality"} {"abstract": "Deep learning based landcover classification algorithms have recently been\nproposed in the literature. In hyperspectral images (HSI) they face the challenges\nof large dimensionality, spatial variability of spectral signatures and\nscarcity of labeled data. In this article we propose an end-to-end deep\nlearning architecture that extracts band specific spectral-spatial features and\nperforms landcover classification. The architecture has fewer independent\nconnection weights and thus requires less training data.
The method\nis found to outperform the highest reported accuracies on popular hyperspectral\nimage data sets.", "field": [], "task": ["Hyperspectral Image Classification", "Image Classification"], "method": [], "dataset": ["Indian Pines", "Pavia University"], "metric": ["Overall Accuracy"], "title": "BASS Net: Band-Adaptive Spectral-Spatial Feature Learning Neural Network for Hyperspectral Image Classification"} {"abstract": "An accurate depth map of the environment is critical to the safe operation of autonomous robots and vehicles. Currently, either light detection and ranging (LIDAR) or stereo matching algorithms are used to acquire such depth information. However, a high-resolution LIDAR is expensive and produces sparse depth map at large range; stereo matching algorithms are able to generate denser depth maps but are typically less accurate than LIDAR at long range. This paper combines these approaches together to generate high-quality dense depth maps. Unlike previous approaches that are trained using ground-truth labels, the proposed model adopts a self-supervised training process. Experiments show that the proposed method is able to generate high-quality dense depth maps and performs robustly even with low-resolution inputs. This shows the potential to reduce the cost by using LIDARs with lower resolution in concert with stereo systems while maintaining high resolution.", "field": [], "task": ["Stereo Matching", "Stereo Matching Hand"], "method": [], "dataset": ["KITTI Depth Completion Validation"], "metric": ["RMSE"], "title": "LiStereo: Generate Dense Depth Maps from LIDAR and Stereo Imagery"} {"abstract": "In this paper, we study how to improve the domain adaptability of a deletion-based Long Short-Term Memory (LSTM) neural network model for sentence compression. We hypothesize that syntactic information helps in making such models more robust across domains. We propose two major changes to the model: using explicit syntactic features and introducing syntactic constraints through Integer Linear Programming (ILP). Our evaluation shows that the proposed model works better than the original model as well as a traditional non-neural-network-based model in a cross-domain setting.", "field": [], "task": ["Sentence Compression", "Text Summarization", "Tokenization"], "method": [], "dataset": ["Google Dataset"], "metric": ["CR", "F1"], "title": "Can Syntax Help? Improving an LSTM-based Sentence Compression Model for New Domains"} {"abstract": "We show how eye-tracking corpora can be used to improve sentence compression\nmodels, presenting a novel multi-task learning algorithm based on multi-layer\nLSTMs. We obtain performance competitive with or better than state-of-the-art\napproaches.", "field": [], "task": ["Eye Tracking", "Multi-Task Learning", "Sentence Compression"], "method": [], "dataset": ["Google Dataset"], "metric": ["CR", "F1"], "title": "Improving sentence compression by learning to predict gaze"} {"abstract": "This paper describes our system for subtask-A: SDQC for RumourEval, task-8 of SemEval 2017. Identifying rumours, especially for breaking news events as they unfold, is a challenging task due to the absence of sufficient information about the exact rumour stories circulating on social media. Determining the stance of Twitter users towards rumourous messages could provide an indirect way of identifying potential rumours. 
The proposed approach makes use of topic independent features from two categories, namely cue features and message specific features to fit a gradient boosting classifier. With an accuracy of 0.78, our system achieved the second best performance on subtask-A of RumourEval.", "field": [], "task": ["Rumour Detection", "Stance Detection"], "method": [], "dataset": ["RumourEval"], "metric": ["Accuracy"], "title": "UWaterloo at SemEval-2017 Task 8: Detecting Stance towards Rumours with Topic Independent Features"} {"abstract": "We investigate the problem of learning representations that are invariant to\ncertain nuisance or sensitive factors of variation in the data while retaining\nas much of the remaining information as possible. Our model is based on a\nvariational autoencoding architecture with priors that encourage independence\nbetween sensitive and latent factors of variation. Any subsequent processing,\nsuch as classification, can then be performed on this purged latent\nrepresentation. To remove any remaining dependencies we incorporate an\nadditional penalty term based on the \"Maximum Mean Discrepancy\" (MMD) measure.\nWe discuss how these architectures can be efficiently trained on data and show\nin experiments that this method is more effective than previous work in\nremoving unwanted sources of variation while maintaining informative latent\nrepresentations.", "field": [], "task": ["Sentiment Analysis"], "method": [], "dataset": ["Multi-Domain Sentiment Dataset"], "metric": ["DVD", "Average", "Kitchen", "Electronics", "Books"], "title": "The Variational Fair Autoencoder"} {"abstract": "Fact Verification requires fine-grained natural language inference capability that finds subtle clues to identify the syntactical and semantically correct but not well-supported claims. This paper presents Kernel Graph Attention Network (KGAT), which conducts more fine-grained fact verification with kernel-based attentions. Given a claim and a set of potential evidence sentences that form an evidence graph, KGAT introduces node kernels, which better measure the importance of the evidence node, and edge kernels, which conduct fine-grained evidence propagation in the graph, into Graph Attention Networks for more accurate fact verification. KGAT achieves a 70.38% FEVER score and significantly outperforms existing fact verification models on FEVER, a large-scale benchmark for fact verification. Our analyses illustrate that, compared to dot-product attentions, the kernel-based attention concentrates more on relevant evidence sentences and meaningful clues in the evidence graph, which is the main source of KGAT's effectiveness.", "field": [], "task": ["Fact Verification", "Natural Language Inference"], "method": [], "dataset": ["FEVER"], "metric": ["FEVER", "Accuracy"], "title": "Fine-grained Fact Verification with Kernel Graph Attention Network"} {"abstract": "Knowledge graph embedding, which aims to represent entities and relations as low dimensional vectors (or matrices, tensors, etc.), has been shown to be a powerful technique for predicting missing links in knowledge graphs. Existing knowledge graph embedding models mainly focus on modeling relation patterns such as symmetry/antisymmetry, inversion, and composition. However, many existing approaches fail to model semantic hierarchies, which are common in real-world applications. 
To address this challenge, we propose a novel knowledge graph embedding model---namely, Hierarchy-Aware Knowledge Graph Embedding (HAKE)---which maps entities into the polar coordinate system. HAKE is inspired by the fact that concentric circles in the polar coordinate system can naturally reflect the hierarchy. Specifically, the radial coordinate aims to model entities at different levels of the hierarchy, and entities with smaller radii are expected to be at higher levels; the angular coordinate aims to distinguish entities at the same level of the hierarchy, and these entities are expected to have roughly the same radii but different angles. Experiments demonstrate that HAKE can effectively model the semantic hierarchies in knowledge graphs, and significantly outperforms existing state-of-the-art methods on benchmark datasets for the link prediction task.", "field": [], "task": ["Graph Embedding", "Knowledge Graph Completion", "Knowledge Graph Embedding", "Knowledge Graph Embeddings", "Knowledge Graphs", "Link Prediction"], "method": [], "dataset": ["WN18RR", "YAGO3-10", "FB15k-237"], "metric": ["Hits@10", "MRR", "Hits@3", "Hits@1"], "title": "Learning Hierarchy-Aware Knowledge Graph Embeddings for Link Prediction"} {"abstract": "Simultaneously running multiple modules is a key requirement for a smart multimedia system for facial applications including face recognition, facial expression understanding, and gender identification. To effectively integrate them, a continual learning approach to learn new tasks without forgetting is introduced. Unlike previous methods growing monotonically in size, our approach maintains the compactness in continual learning. The proposed packing-and-expanding method is effective and easy to implement, which can iteratively shrink and enlarge the model to integrate new functions. Our integrated multitask model can achieve similar accuracy with only 39.9% of the original size.", "field": [], "task": ["Age And Gender Classification", "Continual Learning", "Face Recognition", "Face Verification", "Facial Expression Recognition", "Gender Prediction"], "method": [], "dataset": ["Adience Gender", "Adience Age", "Cifar100 (20 tasks)", "Labeled Faces in the Wild", "FotW Gender", "AffectNet"], "metric": ["Accuracy (8 emotion)", "Accuracy (7 emotion)", "Accuracy (5-fold)", "Accuracy (%)", "Accuracy", "Average Accuracy"], "title": "Increasingly Packing Multiple Facial-Informatics Modules in A Unified Deep-Learning Model via Lifelong Learning"} {"abstract": "Missing value imputation is a fundamental problem in spatiotemporal modeling, from motion tracking to the dynamics of physical systems. Deep autoregressive models suffer from error propagation which becomes catastrophic for imputing long-range sequences. In this paper, we take a non-autoregressive approach and propose a novel deep generative model: Non-AutOregressive Multiresolution Imputation (NAOMI) to impute long-range sequences given arbitrary missing patterns. NAOMI exploits the multiresolution structure of spatiotemporal data and decodes recursively from coarse to fine-grained resolutions using a divide-and-conquer strategy. We further enhance our model with adversarial training. When evaluated extensively on benchmark datasets from systems of both deterministic and stochastic dynamics. 
NAOMI demonstrates significant improvement in imputation accuracy (reducing average prediction error by 60% compared to autoregressive counterparts) and generalization for long range sequences.", "field": [], "task": ["Imitation Learning", "Imputation", "Multivariate Time Series Imputation"], "method": [], "dataset": ["PEMS-SF", "Basketball Players Movement"], "metric": ["OOB Rate (10^\u22123) ", "Step Change (10^\u22123)", "Player Distance ", "Path Difference", "L2 Loss (10^-4)", "Path Length"], "title": "NAOMI: Non-Autoregressive Multiresolution Sequence Imputation"} {"abstract": "The electrocardiogram (ECG) is a useful diagnostic tool to diagnose various cardiovascular diseases (CVDs) such as myocardial infarction (MI). The ECG records the heart's electrical activity and these signals are able to reflect the abnormal activity of the heart. However, it is challenging to visually interpret the ECG signals due to its small amplitude and duration. Therefore, we propose a novel approach to automatically detect the MI using ECG signals. In this study, we implemented a convolutional neural network (CNN) algorithm for the automated detection of a normal and MI ECG beats (with noise and without noise). We achieved an average accuracy of 93.53% and 95.22% using ECG beats with noise and without noise removal respectively. Further, no feature extraction or selection is performed in this work. Hence, our proposed algorithm can accurately detect the unknown ECG signals even with noise. So, this system can be introduced in clinical settings to aid the clinicians in the diagnosis of MI.", "field": [], "task": ["Electrocardiography (ECG)", "Myocardial infarction detection"], "method": [], "dataset": ["PTB dataset, ECG lead II"], "metric": ["Accuracy"], "title": "Application of deep convolutional neural network for automated detection of myocardial infarction using ecg signals"} {"abstract": "Despite the remarkable progress in face recognition related technologies,\nreliably recognizing faces across ages still remains a big challenge. The\nappearance of a human face changes substantially over time, resulting in\nsignificant intra-class variations. As opposed to current techniques for\nage-invariant face recognition, which either directly extract age-invariant\nfeatures for recognition, or first synthesize a face that matches target age\nbefore feature extraction, we argue that it is more desirable to perform both\ntasks jointly so that they can leverage each other. To this end, we propose a\ndeep Age-Invariant Model (AIM) for face recognition in the wild with three\ndistinct novelties. First, AIM presents a novel unified deep architecture\njointly performing cross-age face synthesis and recognition in a mutual\nboosting way. Second, AIM achieves continuous face rejuvenation/aging with\nremarkable photorealistic and identity-preserving properties, avoiding the\nrequirement of paired data and the true age of testing samples. Third, we\ndevelop effective and novel training strategies for end-to-end learning the\nwhole deep architecture, which generates powerful age-invariant face\nrepresentations explicitly disentangled from the age variation. Moreover, we\npropose a new large-scale Cross-Age Face Recognition (CAFR) benchmark dataset\nto facilitate existing efforts and push the frontiers of age-invariant face\nrecognition research. 
Extensive experiments on both our CAFR and several other\ncross-age datasets (MORPH, CACD and FG-NET) demonstrate the superiority of the\nproposed AIM model over the state-of-the-arts. Benchmarking our model on one of\nthe most popular unconstrained face recognition datasets IJB-C additionally\nverifies the promising generalizability of AIM in recognizing faces in the\nwild.", "field": [], "task": ["Age-Invariant Face Recognition", "Face Generation", "Face Recognition", "Representation Learning"], "method": [], "dataset": ["CAFR", "FG-NET", "CACDVS", "MORPH Album2", "IJB-C"], "metric": ["TAR @ FAR=0.01", "Rank-1 Recognition Rate", "Accuracy"], "title": "Look Across Elapse: Disentangled Representation Learning and Photorealistic Cross-Age Face Synthesis for Age-Invariant Face Recognition"} {"abstract": "Analyzing human multimodal language is an emerging area of research in NLP. Intrinsically this language is multimodal (heterogeneous), sequential and asynchronous; it consists of the language (words), visual (expressions) and acoustic (paralinguistic) modalities all in the form of asynchronous coordinated sequences. From a resource perspective, there is a genuine need for large scale datasets that allow for in-depth studies of this form of language. In this paper we introduce CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI), the largest dataset of sentiment analysis and emotion recognition to date. Using data from CMU-MOSEI and a novel multimodal fusion technique called the Dynamic Fusion Graph (DFG), we conduct experimentation to exploit how modalities interact with each other in human multimodal language. Unlike previously proposed fusion techniques, DFG is highly interpretable and achieves competative performance when compared to the previous state of the art.", "field": [], "task": ["Emotion Recognition", "Language Modelling", "Multimodal Sentiment Analysis", "Multi-Task Learning", "Sentiment Analysis"], "method": [], "dataset": ["CMU-MOSEI"], "metric": ["MAE", "Accuracy"], "title": "Multimodal Language Analysis in the Wild: CMU-MOSEI Dataset and Interpretable Dynamic Fusion Graph"} {"abstract": "Named Entity Recognition (NER) is a fundamental task in Natural Language Processing, concerned with identifying spans of text expressing references to entities. NER research is often focused on flat entities only (flat NER), ignoring the fact that entity references can be nested, as in [Bank of [China]] (Finkel and Manning, 2009). In this paper, we use ideas from graph-based dependency parsing to provide our model a global view on the input via a biaffine model (Dozat and Manning, 2017). The biaffine model scores pairs of start and end tokens in a sentence which we use to explore all spans, so that the model is able to predict named entities accurately. We show that the model works well for both nested and flat NER through evaluation on 8 corpora and achieving SoTA performance on all of them, with accuracy gains of up to 2.2 percentage points.", "field": [], "task": ["Dependency Parsing", "Named Entity Recognition"], "method": [], "dataset": ["GENIA", "ACE 2004", "CoNLL 2002 (Spanish)", "CoNLL 2003 (German) Revised", "CoNLL 2002 (Dutch)", "ACE 2005", "Ontonotes v5 (English)", "CoNLL 2003 (English)", "CoNLL 2003 (German)"], "metric": ["F1"], "title": "Named Entity Recognition as Dependency Parsing"} {"abstract": "Hypernym discovery aims to discover the hypernym word sets given a hyponym word and proper corpus. 
This paper proposes a simple but effective method for the discovery of hypernym sets based on word embedding, which can be used to measure the contextual similarities between words. Given a test hyponym word, we get its hypernym lists by computing the similarities between the hyponym word and words in the training data, and fill the test word's hypernym list with the hypernym list of the training word nearest in similarity to the test word. In SemEval-2018 Task 9, our results achieve 1st place on Spanish, 2nd on Italian, and 6th on English in the MAP metric.", "field": [], "task": ["Hypernym Discovery", "Information Retrieval", "Natural Language Inference", "Question Answering", "Regression", "Word Sense Disambiguation"], "method": [], "dataset": ["General"], "metric": ["P@5", "MRR", "MAP"], "title": "NLP\\_HZ at SemEval-2018 Task 9: a Nearest Neighbor Approach"} {"abstract": "This paper describes a hypernym discovery system for our participation in the\nSemEval-2018 Task 9, which aims to discover the best (set of) candidate\nhypernyms for input concepts or entities, given the search space of a\npre-defined vocabulary. We introduce a neural network architecture for the\nconcerned task and empirically study various neural network models to build the\nrepresentations in latent space for words and phrases. The evaluated models\ninclude convolutional neural networks, long short-term memory networks, gated\nrecurrent units and recurrent convolutional neural networks. We also explore\ndifferent embedding methods, including word embedding and sense embedding, for\nbetter performance.", "field": [], "task": ["Hypernym Discovery"], "method": [], "dataset": ["Medical domain", "Music domain", "General"], "metric": ["P@5", "MRR", "MAP"], "title": "SJTU-NLP at SemEval-2018 Task 9: Neural Hypernym Discovery with Term Embeddings"} {"abstract": "Domain Adaptation is an actively researched problem in Computer Vision. In\nthis work, we propose an approach that leverages unsupervised data to bring the\nsource and target distributions closer in a learned joint feature space. We\naccomplish this by inducing a symbiotic relationship between the learned\nembedding and a generative adversarial network. This is in contrast to methods\nwhich use the adversarial framework for realistic data generation and\nretraining deep models with such data. We demonstrate the strength and\ngenerality of our approach by performing experiments on three different tasks\nwith varying levels of difficulty: (1) Digit classification (MNIST, SVHN and\nUSPS datasets), (2) Object recognition using the OFFICE dataset, and (3) Domain\nadaptation from synthetic to real data. Our method achieves state-of-the-art\nperformance in most experimental settings and is by far the only GAN-based method\nthat has been shown to work well across different datasets such as OFFICE and\nDIGITS.", "field": [], "task": ["Domain Adaptation", "Object Recognition"], "method": [], "dataset": ["Office-31"], "metric": ["Average Accuracy"], "title": "Generate To Adapt: Aligning Domains using Generative Adversarial Networks"} {"abstract": "Understanding entailment and contradiction is fundamental to understanding\nnatural language, and inference about entailment and contradiction is a\nvaluable testing ground for the development of semantic representations.\nHowever, machine learning research in this area has been dramatically limited\nby the lack of large-scale resources.
To address this, we introduce the\nStanford Natural Language Inference corpus, a new, freely available collection\nof labeled sentence pairs, written by humans doing a novel grounded task based\non image captioning. At 570K pairs, it is two orders of magnitude larger than\nall other resources of its type. This increase in scale allows lexicalized\nclassifiers to outperform some sophisticated existing entailment models, and it\nallows a neural network-based model to perform competitively on natural\nlanguage inference benchmarks for the first time.", "field": [], "task": ["Image Captioning", "Natural Language Inference"], "method": [], "dataset": ["SNLI"], "metric": ["Parameters", "% Train Accuracy", "% Test Accuracy"], "title": "A large annotated corpus for learning natural language inference"} {"abstract": "We propose an image super-resolution method (SR) using a deeply-recursive\nconvolutional network (DRCN). Our network has a very deep recursive layer (up\nto 16 recursions). Increasing recursion depth can improve performance without\nintroducing new parameters for additional convolutions. Albeit advantages,\nlearning a DRCN is very hard with a standard gradient descent method due to\nexploding/vanishing gradients. To ease the difficulty of training, we propose\ntwo extensions: recursive-supervision and skip-connection. Our method\noutperforms previous methods by a large margin.", "field": [], "task": ["Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["Set14 - 2x upscaling", "Set14 - 4x upscaling", "BSD100 - 2x upscaling", "Urban100 - 2x upscaling", "BSD100 - 4x upscaling", "Set5 - 4x upscaling", "Set5 - 2x upscaling"], "metric": ["MOS", "SSIM", "PSNR"], "title": "Deeply-Recursive Convolutional Network for Image Super-Resolution"} {"abstract": "Data-driven depth estimation methods struggle with the generalization outside their training scenes due to the immense variability of the real-world scenes. This problem can be partially addressed by utilising synthetically generated images, but closing the synthetic-real domain gap is far from trivial. In this paper, we tackle this issue by using domain invariant defocus blur as direct supervision. We leverage defocus cues by using a permutation invariant convolutional neural network that encourages the network to learn from the differences between images with a different point of focus. Our proposed network uses the defocus map as an intermediate supervisory signal. We are able to train our model completely on synthetic data and directly apply it to a wide range of real-world images. We evaluate our model on synthetic and real datasets, showing compelling generalization results and state-of-the-art depth prediction.", "field": [], "task": ["Depth Estimation"], "method": [], "dataset": ["NYU-Depth V2"], "metric": ["RMSE"], "title": "Focus on defocus: bridging the synthetic to real domain gap for depth estimation"} {"abstract": "In this paper, we solve the problem of adapting classifiers across domains.\nWe consider the problem of domain adaptation for multi-class classification\nwhere we are provided a labeled set of examples in a source dataset and we are\nprovided a target dataset with no supervision. In this setting, we propose an\nadversarial discriminator based approach. While the approach based on\nadversarial discriminator has been previously proposed; in this paper, we\npresent an informed adversarial discriminator. 
Our observation relies on the\nanalysis that shows that if the discriminator has access to all the information\navailable including the class structure present in the source dataset, then it\ncan guide the transformation of features of the target set of classes to a more\nstructure adapted space. Using this formulation, we obtain state-of-the-art\nresults for the standard evaluation on benchmark datasets. We further provide\ndetailed analysis which shows that using all the labeled information results in\nan improved domain adaptation.", "field": [], "task": ["Domain Adaptation", "Image Classification", "Multi-class Classification"], "method": [], "dataset": ["Office-31", "Office-Home", "ImageCLEF-DA"], "metric": ["Average Accuracy", "Accuracy"], "title": "Looking back at Labels: A Class based Domain Adaptation Technique"} {"abstract": "We introduce Recurrent All-Pairs Field Transforms (RAFT), a new deep network architecture for optical flow. RAFT extracts per-pixel features, builds multi-scale 4D correlation volumes for all pairs of pixels, and iteratively updates a flow field through a recurrent unit that performs lookups on the correlation volumes. RAFT achieves state-of-the-art performance. On KITTI, RAFT achieves an F1-all error of 5.10%, a 16% error reduction from the best published result (6.10%). On Sintel (final pass), RAFT obtains an end-point-error of 2.855 pixels, a 30% error reduction from the best published result (4.098 pixels). In addition, RAFT has strong cross-dataset generalization as well as high efficiency in inference time, training speed, and parameter count. Code is available at https://github.com/princeton-vl/RAFT.", "field": [], "task": ["Optical Flow Estimation"], "method": [], "dataset": ["Sintel-final", "Sintel-clean"], "metric": ["Average End-Point Error"], "title": "RAFT: Recurrent All-Pairs Field Transforms for Optical Flow"} {"abstract": "Vision-language navigation (VLN) is the task of navigating an embodied agent\nto carry out natural language instructions inside real 3D environments. In this\npaper, we study how to address three critical challenges for this task: the\ncross-modal grounding, the ill-posed feedback, and the generalization problems.\nFirst, we propose a novel Reinforced Cross-Modal Matching (RCM) approach that\nenforces cross-modal grounding both locally and globally via reinforcement\nlearning (RL). Particularly, a matching critic is used to provide an intrinsic\nreward to encourage global matching between instructions and trajectories, and\na reasoning navigator is employed to perform cross-modal grounding in the local\nvisual scene. Evaluation on a VLN benchmark dataset shows that our RCM model\nsignificantly outperforms previous methods by 10% on SPL and achieves the new\nstate-of-the-art performance. To improve the generalizability of the learned\npolicy, we further introduce a Self-Supervised Imitation Learning (SIL) method\nto explore unseen environments by imitating its own past, good decisions. 
We\ndemonstrate that SIL can approximate a better and more efficient policy, which\ntremendously minimizes the success rate performance gap between seen and unseen\nenvironments (from 30.7% to 11.7%).", "field": [], "task": ["Imitation Learning", "Vision-Language Navigation", "Visual Navigation"], "method": [], "dataset": ["Room2Room", "R2R"], "metric": ["spl"], "title": "Reinforced Cross-Modal Matching and Self-Supervised Imitation Learning for Vision-Language Navigation"} {"abstract": "In a spoken dialogue system, dialogue state tracker (DST) components track the state of the conversation by updating a distribution of values associated with each of the slots being tracked for the current user turn, using the interactions until then. Much of the previous work has relied on modeling the natural order of the conversation, using distance based offsets as an approximation of time. In this work, we hypothesize that leveraging the wall-clock temporal difference between turns is crucial for finer-grained control of dialogue scenarios. We develop a novel approach that applies a {\\it time mask}, based on the wall-clock time difference, to the associated slot embeddings and empirically demonstrate that our proposed approach outperforms existing approaches that leverage distance offsets, on both an internal benchmark dataset as well as DSTC2.", "field": [], "task": ["Spoken Dialogue Systems", "Video Salient Object Detection"], "method": [], "dataset": ["SegTrack v2", "MCL"], "metric": ["max E-measure", "S-Measure", "AVERAGE MAE", "MAX E-MEASURE"], "title": "Time Masking: Leveraging Temporal Information in Spoken Dialogue Systems"} {"abstract": "There is intense interest in applying machine learning to problems of causal\ninference in fields such as healthcare, economics and education. In particular,\nindividual-level causal inference has important applications such as precision\nmedicine. We give a new theoretical analysis and family of algorithms for\npredicting individual treatment effect (ITE) from observational data, under the\nassumption known as strong ignorability. The algorithms learn a \"balanced\"\nrepresentation such that the induced treated and control distributions look\nsimilar. We give a novel, simple and intuitive generalization-error bound\nshowing that the expected ITE estimation error of a representation is bounded\nby a sum of the standard generalization-error of that representation and the\ndistance between the treated and control distributions induced by the\nrepresentation. We use Integral Probability Metrics to measure distances\nbetween distributions, deriving explicit bounds for the Wasserstein and Maximum\nMean Discrepancy (MMD) distances. Experiments on real and simulated data show\nthe new algorithms match or outperform the state-of-the-art.", "field": [], "task": ["Causal Inference", "Generalization Bounds"], "method": ["Causal Inference"], "dataset": ["IDHP"], "metric": ["Average Treatment Effect Error"], "title": "Estimating individual treatment effect: generalization bounds and algorithms"} {"abstract": "Pseudo-relevance feedback (PRF) is commonly used to boost the performance of\ntraditional information retrieval (IR) models by using top-ranked documents to\nidentify and weight new query terms, thereby reducing the effect of\nquery-document vocabulary mismatches. 
While neural retrieval models have\nrecently demonstrated strong results for ad-hoc retrieval, combining them with\nPRF is not straightforward due to incompatibilities between existing PRF\napproaches and neural architectures. To bridge this gap, we propose an\nend-to-end neural PRF framework that can be used with existing neural IR models\nby embedding different neural models as building blocks. Extensive experiments\non two standard test collections confirm the effectiveness of the proposed NPRF\nframework in improving the performance of two state-of-the-art neural IR\nmodels.", "field": [], "task": ["Ad-Hoc Information Retrieval", "Information Retrieval"], "method": [], "dataset": ["TREC Robust04"], "metric": ["P@20", "nDCG@20", "MAP"], "title": "NPRF: A Neural Pseudo Relevance Feedback Framework for Ad-hoc Information Retrieval"} {"abstract": "We describe an extension of the DeepMind Kinetics human action dataset from\n400 classes, each with at least 400 video clips, to 600 classes, each with at\nleast 600 video clips. In order to scale up the dataset we changed the data\ncollection process so it uses multiple queries per class, with some of them in\na language other than english -- portuguese. This paper details the changes\nbetween the two versions of the dataset and includes a comprehensive set of\nstatistics of the new version as well as baseline results using the I3D neural\nnetwork architecture. The paper is a companion to the release of the ground\ntruth labels for the public test set.", "field": [], "task": ["Action Classification"], "method": [], "dataset": ["Kinetics-600"], "metric": ["Top-1 Accuracy"], "title": "A Short Note about Kinetics-600"} {"abstract": "This paper introduces the problem of multiple object forecasting (MOF), in which the goal is to predict future bounding boxes of tracked objects. In contrast to existing works on object trajectory forecasting which primarily consider the problem from a birds-eye perspective, we formulate the problem from an object-level perspective and call for the prediction of full object bounding boxes, rather than trajectories alone. Towards solving this task, we introduce the Citywalks dataset, which consists of over 200k high-resolution video frames. Citywalks comprises of footage recorded in 21 cities from 10 European countries in a variety of weather conditions and over 3.5k unique pedestrian trajectories. For evaluation, we adapt existing trajectory forecasting methods for MOF and confirm cross-dataset generalizability on the MOT-17 dataset without fine-tuning. Finally, we present STED, a novel encoder-decoder architecture for MOF. STED combines visual and temporal features to model both object-motion and ego-motion, and outperforms existing approaches for MOF. Code & dataset link: https://github.com/olly-styles/Multiple-Object-Forecasting", "field": [], "task": ["Multiple Object Forecasting", "Trajectory Forecasting"], "method": [], "dataset": ["Citywalks"], "metric": ["ADE", "AIOU"], "title": "Multiple Object Forecasting: Predicting Future Object Locations in Diverse Environments"} {"abstract": "We introduce a method for learning to generate the surface of 3D shapes. Our\napproach represents a 3D shape as a collection of parametric surface elements\nand, in contrast to methods generating voxel grids or point clouds, naturally\ninfers a surface representation of the shape. 
Beyond its novelty, our new shape\ngeneration framework, AtlasNet, comes with significant advantages, such as\nimproved precision and generalization capabilities, and the possibility to\ngenerate a shape of arbitrary resolution without memory issues. We demonstrate\nthese benefits and compare to strong baselines on the ShapeNet benchmark for\ntwo applications: (i) auto-encoding shapes, and (ii) single-view reconstruction\nfrom a still image. We also provide results showing its potential for other\napplications, such as morphing, parametrization, super-resolution, matching,\nand co-segmentation.", "field": [], "task": ["3D Surface Generation", "Point Cloud Completion", "Super-Resolution"], "method": [], "dataset": ["Completion3D", "Pix3D"], "metric": ["EMD", "TIoU", "CD", "Chamfer Distance"], "title": "AtlasNet: A Papier-M\u00e2ch\u00e9 Approach to Learning 3D Surface Generation"} {"abstract": "Understanding travel behaviour and travel demand is of constant importance to transportation communities and agencies in every country. Nowadays, attempts have been made to automatically infer transportation modes from positional data, such as the data collected by using GPS devices so that the cost in time and budget of conventional travel diary survey could be significantly reduced. Some limitations, however, exist in the literature, in aspects of data collection (sample size selected, duration of study, granularity of data), selection of variables (or combination of variables), and method of inference (the number of transportation modes to be used in the learning). This paper therefore, attempts to fully understand these aspects in the process of inference. We aim to solve a classification problem of GPS data into different transportation modes (car, walk, cycle, underground, train and bus). We first study the variables that could contribute positively to this classification, and statistically quantify their discriminatory power. We then introduce a novel approach to carry out this inference using a framework based on Support Vector Machines (SVMs) classification. The framework was tested using coarse-grained GPS data, which has been avoided in previous studies, achieving a promising accuracy of 88% with a Kappa statistic reflecting almost perfect agreement.", "field": [], "task": ["Trajectory Prediction"], "method": [], "dataset": ["GPS"], "metric": ["Accuracy"], "title": "Inferring hybrid transportation modes from sparse GPS data using a moving window SVM classification"} {"abstract": "In this paper, we address the problem of person re-identification, which\nrefers to associating the persons captured from different cameras. We propose a\nsimple yet effective human part-aligned representation for handling the body\npart misalignment problem. Our approach decomposes the human body into regions\n(parts) which are discriminative for person matching, accordingly computes the\nrepresentations over the regions, and aggregates the similarities computed\nbetween the corresponding regions of a pair of probe and gallery images as the\noverall matching score. Our formulation, inspired by attention models, is a\ndeep neural network modeling the three steps together, which is learnt through\nminimizing the triplet loss function without requiring body part labeling\ninformation. 
Unlike most existing deep learning algorithms that learn a global\nor spatial partition-based local representation, our approach performs human\nbody partition, and thus is more robust to pose changes and various human\nspatial distributions in the person bounding box. Our approach shows\nstate-of-the-art results over standard datasets, Market-$1501$, CUHK$03$,\nCUHK$01$ and VIPeR.", "field": [], "task": ["Person Re-Identification"], "method": [], "dataset": ["Market-1501"], "metric": ["Rank-1", "MAP"], "title": "Deeply-Learned Part-Aligned Representations for Person Re-Identification"} {"abstract": "The skeleton based gesture recognition is gaining more popularity due to its\nwide possible applications. The key issues are how to extract discriminative\nfeatures and how to design the classification model. In this paper, we first\nleverage a robust feature descriptor, path signature (PS), and propose three PS\nfeatures to explicitly represent the spatial and temporal motion\ncharacteristics, i.e., spatial PS (S_PS), temporal PS (T_PS) and temporal\nspatial PS (T_S_PS). Considering the significance of fine hand movements in the\ngesture, we propose an \"attention on hand\" (AOH) principle to define joint\npairs for the S_PS and select single joint for the T_PS. In addition, the\ndyadic method is employed to extract the T_PS and T_S_PS features that encode\nglobal and local temporal dynamics in the motion. Secondly, without the\nrecurrent strategy, the classification model still faces challenges on temporal\nvariation among different sequences. We propose a new temporal transformer\nmodule (TTM) that can match the sequence key frames by learning the temporal\nshifting parameter for each input. This is a learning-based module that can be\nincluded into standard neural network architecture. Finally, we design a\nmulti-stream fully connected layer based network to treat spatial and temporal\nfeatures separately and fused them together for the final result. We have\ntested our method on three benchmark gesture datasets, i.e., ChaLearn 2016,\nChaLearn 2013 and MSRC-12. Experimental results demonstrate that we achieve the\nstate-of-the-art performance on skeleton-based gesture recognition with high\ncomputational efficiency.", "field": [], "task": ["Gesture Recognition"], "method": [], "dataset": ["ChaLearn 2016", "MSRC-12", "ChaLearn 2013"], "metric": ["Accuracy"], "title": "Skeleton-based Gesture Recognition Using Several Fully Connected Layers with Path Signature Features and Temporal Transformer Module"} {"abstract": "Several large cloze-style context-question-answer datasets have been\nintroduced recently: the CNN and Daily Mail news data and the Children's Book\nTest. Thanks to the size of these datasets, the associated text comprehension\ntask is well suited for deep-learning techniques that currently seem to\noutperform all alternative approaches. We present a new, simple model that uses\nattention to directly pick the answer from the context as opposed to computing\nthe answer using a blended representation of words in the document as is usual\nin similar models. This makes the model particularly suitable for\nquestion-answering problems where the answer is a single word from the\ndocument. 
Ensemble of our models sets new state of the art on all evaluated\ndatasets.", "field": [], "task": ["Machine Reading Comprehension", "Open-Domain Question Answering", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["Children's Book Test", "SearchQA", "CNN / Daily Mail"], "metric": ["N-gram F1", "Accuracy-CN", "Unigram Acc", "CNN", "Daily Mail", "Accuracy-NE"], "title": "Text Understanding with the Attention Sum Reader Network"} {"abstract": "We propose a method for learning landmark detectors for visual objects (such\nas the eyes and the nose in a face) without any manual supervision. We cast\nthis as the problem of generating images that combine the appearance of the\nobject as seen in a first example image with the geometry of the object as seen\nin a second example image, where the two examples differ by a viewpoint change\nand/or an object deformation. In order to factorize appearance and geometry, we\nintroduce a tight bottleneck in the geometry-extraction process that selects\nand distils geometry-related features. Compared to standard image generation\nproblems, which often use generative adversarial networks, our generation task\nis conditioned on both appearance and geometry and thus is significantly less\nambiguous, to the point that adopting a simple perceptual loss formulation is\nsufficient. We demonstrate that our approach can learn object landmarks from\nsynthetic image deformations or videos, all without manual supervision, while\noutperforming state-of-the-art unsupervised landmark detectors. We further show\nthat our method is applicable to a large variety of datasets - faces, people,\n3D objects, and digits - without any modifications.", "field": [], "task": ["Conditional Image Generation", "Image Generation", "Unsupervised Facial Landmark Detection"], "method": [], "dataset": ["AFLW (Zhang CVPR 2018 crops)", "MAFL"], "metric": ["NME"], "title": "Unsupervised Learning of Object Landmarks through Conditional Image Generation"} {"abstract": "The articulated and complex nature of human actions makes the task of action\nrecognition difficult. One approach to handle this complexity is dividing it to\nthe kinetics of body parts and analyzing the actions based on these partial\ndescriptors. We propose a joint sparse regression based learning method which\nutilizes the structured sparsity to model each action as a combination of\nmultimodal features from a sparse set of body parts. To represent dynamics and\nappearance of parts, we employ a heterogeneous set of depth and skeleton based\nfeatures. The proper structure of multimodal multipart features are formulated\ninto the learning framework via the proposed hierarchical mixed norm, to\nregularize the structured features of each part and to apply sparsity between\nthem, in favor of a group feature selection. Our experimental results expose\nthe effectiveness of the proposed learning method in which it outperforms other\nmethods in all three tested datasets while saturating one of them by achieving\nperfect accuracy.", "field": [], "task": ["Action Recognition", "Feature Selection", "Multimodal Activity Recognition", "Regression", "Temporal Action Localization"], "method": [], "dataset": ["MSR Daily Activity3D dataset"], "metric": ["Accuracy"], "title": "Multimodal Multipart Learning for Action Recognition in Depth Videos"} {"abstract": "In this paper, we present our proposed system (EXPR) to participate in the hypernym discovery task of SemEval 2018. 
The task addresses the challenge of discovering hypernym relations from a text corpus. Our proposal is a combined approach of a path-based technique and a distributional technique. We use a dependency parser on a corpus to extract candidate hypernyms and represent their dependency paths as a feature vector. The feature vector is concatenated with a feature vector obtained using a term embedding model pre-trained on Wikipedia. The concatenated feature vector is fed to a supervised machine learning method to learn a classifier model. This model is able to classify new candidate hypernyms as hypernyms or not. Our system performs well at discovering new hypernyms that are not included in the gold hypernyms.", "field": [], "task": ["Hypernym Discovery", "Information Retrieval", "Machine Translation", "Question Answering"], "method": [], "dataset": ["Medical domain"], "metric": ["P@5", "MRR", "MAP"], "title": "EXPR at SemEval-2018 Task 9: A Combined Approach for Hypernym Discovery"} {"abstract": "Person re-identification (re-ID) aims at recognizing the same person from images taken across different cameras. To address this challenging task, existing re-ID models typically rely on a large amount of labeled training data, which is not practical for real-world applications. To alleviate this limitation, researchers now target cross-dataset re-ID, which focuses on generalizing the discriminative ability to the unlabeled target domain when given a labeled source domain dataset. To achieve this goal, our proposed Pose Disentanglement and Adaptation Network (PDA-Net) aims at learning a deep image representation with pose and domain information properly disentangled. With the learned cross-domain pose invariant feature space, our proposed PDA-Net is able to perform pose disentanglement across domains without supervision in identities, and the resulting features can be applied to cross-dataset re-ID. Both our qualitative and quantitative results on two benchmark datasets confirm the effectiveness of our approach and its superiority over the state-of-the-art cross-dataset Re-ID approaches.", "field": [], "task": ["Person Re-Identification", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["Duke to Market", "Market to Duke"], "metric": ["rank-10", "mAP", "rank-5", "rank-1"], "title": "Cross-Dataset Person Re-Identification via Unsupervised Pose Disentanglement and Adaptation"} {"abstract": "Real-world applications of object recognition often require the solution of multiple tasks in a single platform. Under the standard paradigm of network fine-tuning, an entirely new CNN is learned per task, and the final network size is independent of task complexity. This is wasteful, since simple tasks require smaller networks than more complex tasks, and limits the number of tasks that can be solved simultaneously. To address these problems, we propose a transfer learning procedure, denoted NetTailor, in which layers of a pre-trained CNN are used as universal blocks that can be combined with small task-specific layers to generate new networks. Besides minimizing classification error, the new network is trained to mimic the internal activations of a strong unconstrained CNN, and minimize its complexity by the combination of 1) a soft-attention mechanism over blocks and 2) complexity regularization constraints. In this way, NetTailor can adapt the network architecture, not just its weights, to the target task.
Experiments show that networks adapted to simple tasks, such as character or traffic sign recognition, become significantly smaller than those adapted to hard tasks, such as fine-grained recognition. More importantly, due to the modular nature of the procedure, this reduction in network complexity is achieved without compromising either parameter sharing across tasks or classification accuracy.", "field": [], "task": ["Continual Learning", "Object Recognition", "Traffic Sign Recognition", "Transfer Learning"], "method": [], "dataset": ["visual domain decathlon (10 tasks)"], "metric": ["decathlon discipline (Score)"], "title": "NetTailor: Tuning the Architecture, Not Just the Weights"} {"abstract": "This paper describes a simple but competitive unsupervised system for hypernym discovery. The system uses skip-gram word embeddings with negative sampling, trained on specialised corpora. Candidate hypernyms for an input word are predicted based on cosine similarity scores. Two sets of word embedding models were trained separately on two specialised corpora: a medical corpus and a music industry corpus. Our system scored highest in the medical domain among the competing unsupervised systems but performed poorly on the music industry domain. Our system does not depend on any external data other than raw specialised corpora.", "field": [], "task": ["Hypernym Discovery", "Word Embeddings"], "method": [], "dataset": ["Medical domain", "Music domain"], "metric": ["P@5", "MRR", "MAP"], "title": "ADAPT at SemEval-2018 Task 9: Skip-Gram Word Embeddings for Unsupervised Hypernym Discovery in Specialised Corpora"} {"abstract": "Visual tempo characterizes the dynamics and the temporal scale of an action. Modeling such visual tempos of different actions facilitates their recognition. Previous works often capture the visual tempo through sampling raw videos at multiple rates and constructing an input-level frame pyramid, which usually requires a costly multi-branch network to handle. In this work we propose a generic Temporal Pyramid Network (TPN) at the feature-level, which can be flexibly integrated into 2D or 3D backbone networks in a plug-and-play manner. Two essential components of TPN, the source of features and the fusion of features, form a feature hierarchy for the backbone so that it can capture action instances at various tempos. TPN also shows consistent improvements over other challenging baselines on several action recognition datasets. Specifically, when equipped with TPN, the 3D ResNet-50 with dense sampling obtains a 2% gain on the validation set of Kinetics-400. A further analysis also reveals that TPN gains most of its improvements on action classes that have large variances in their visual tempos, validating the effectiveness of TPN.", "field": [], "task": ["Action Recognition"], "method": [], "dataset": ["Something-Something V2"], "metric": ["Top-1 Accuracy"], "title": "Temporal Pyramid Network for Action Recognition"} {"abstract": "To build a high-quality open-domain chatbot, we introduce the effective training process of PLATO-2 via curriculum learning. There are two stages involved in the learning process. In the first stage, a coarse-grained generation model is trained to learn response generation under the simplified framework of one-to-one mapping. In the second stage, a fine-grained generation model and an evaluation model are further trained to learn diverse response generation and response coherence estimation, respectively.
PLATO-2 was trained on both Chinese and English data, whose effectiveness and superiority are verified through comprehensive evaluations, achieving new state-of-the-art results.", "field": [], "task": ["Chatbot", "Curriculum Learning"], "method": [], "dataset": ["10 Monkey Species"], "metric": ["10 Hops"], "title": "PLATO-2: Towards Building an Open-Domain Chatbot via Curriculum Learning"} {"abstract": "Ongoing innovations in recurrent neural network architectures have provided a\nsteady influx of apparently state-of-the-art results on language modelling\nbenchmarks. However, these have been evaluated using differing code bases and\nlimited computational resources, which represent uncontrolled sources of\nexperimental variation. We reevaluate several popular architectures and\nregularisation methods with large-scale automatic black-box hyperparameter\ntuning and arrive at the somewhat surprising conclusion that standard LSTM\narchitectures, when properly regularised, outperform more recent models. We\nestablish a new state of the art on the Penn Treebank and Wikitext-2 corpora,\nas well as strong baselines on the Hutter Prize dataset.", "field": [], "task": ["Language Modelling"], "method": [], "dataset": ["WikiText-2"], "metric": ["Number of params", "Validation perplexity", "Test perplexity"], "title": "On the State of the Art of Evaluation in Neural Language Models"} {"abstract": "In this paper, we present a modular robotic system to tackle the problem of generating and performing antipodal robotic grasps for unknown objects from n-channel image of the scene. We propose a novel Generative Residual Convolutional Neural Network (GR-ConvNet) model that can generate robust antipodal grasps from n-channel input at real-time speeds (~20ms). We evaluate the proposed model architecture on standard datasets and a diverse set of household objects. We achieved state-of-the-art accuracy of 97.7% and 94.6% on Cornell and Jacquard grasping datasets respectively. We also demonstrate a grasp success rate of 95.4% and 93% on household and adversarial objects respectively using a 7 DoF robotic arm.", "field": [], "task": ["Robotic Grasping"], "method": [], "dataset": [" Jacquard dataset", "Cornell Grasp Dataset"], "metric": ["Accuracy (%)", "5 fold cross validation"], "title": "Antipodal Robotic Grasping using Generative Residual Convolutional Neural Network"} {"abstract": "Visual domain adaptation aims to learn robust classifiers for the target\ndomain by leveraging knowledge from a source domain. Existing methods either\nattempt to align the cross-domain distributions, or perform manifold subspace\nlearning. However, there are two significant challenges: (1) degenerated\nfeature transformation, which means that distribution alignment is often\nperformed in the original feature space, where feature distortions are hard to\novercome. On the other hand, subspace learning is not sufficient to reduce the\ndistribution divergence. (2) unevaluated distribution alignment, which means\nthat existing distribution alignment methods only align the marginal and\nconditional distributions with equal importance, while they fail to evaluate\nthe different importance of these two distributions in real applications. In\nthis paper, we propose a Manifold Embedded Distribution Alignment (MEDA)\napproach to address these challenges. 
MEDA learns a domain-invariant classifier\nin Grassmann manifold with structural risk minimization, while performing\ndynamic distribution alignment to quantitatively account for the relative\nimportance of marginal and conditional distributions. To the best of our\nknowledge, MEDA is the first attempt to perform dynamic distribution alignment\nfor manifold domain adaptation. Extensive experiments demonstrate that MEDA\nshows significant improvements in classification accuracy compared to\nstate-of-the-art traditional and deep methods.", "field": [], "task": ["Domain Adaptation", "Transfer Learning", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["Office-Home", "Office-Caltech-10", "Office-Caltech"], "metric": ["Accuracy (%)", "Average Accuracy", "Accuracy"], "title": "Visual Domain Adaptation with Manifold Embedded Distribution Alignment"} {"abstract": "The long-tail distribution of the visual world poses great challenges for deep learning based classification models on how to handle the class imbalance problem. Existing solutions usually involve class-balancing strategies, e.g., by loss re-weighting, data re-sampling, or transfer learning from head- to tail-classes, but most of them adhere to the scheme of jointly learning representations and classifiers. In this work, we decouple the learning procedure into representation learning and classification, and systematically explore how different balancing strategies affect them for long-tailed recognition. The findings are surprising: (1) data imbalance might not be an issue in learning high-quality representations; (2) with representations learned with the simplest instance-balanced (natural) sampling, it is also possible to achieve strong long-tailed recognition ability by adjusting only the classifier. We conduct extensive experiments and set new state-of-the-art performance on common long-tailed benchmarks like ImageNet-LT, Places-LT and iNaturalist, showing that it is possible to outperform carefully designed losses, sampling strategies, even complex modules with memory, by using a straightforward approach that decouples representation and classification. Our code is available at https://github.com/facebookresearch/classifier-balancing.", "field": [], "task": ["Long-tail Learning", "Long-tail learning with class descriptors", "Representation Learning", "Transfer Learning"], "method": [], "dataset": ["SUN-LT", "Places-LT", "ImageNet-LT", "AWA-LT", "ImageNet-LT-d", "CUB-LT"], "metric": ["Per-Class Accuracy", "Long-Tailed Accuracy"], "title": "Decoupling Representation and Classifier for Long-Tailed Recognition"} {"abstract": "In this paper we address the abnormality detection problem in crowded scenes.\nWe propose to use Generative Adversarial Nets (GANs), which are trained using\nnormal frames and corresponding optical-flow images in order to learn an\ninternal representation of the scene normality. Since our GANs are trained with\nonly normal data, they are not able to generate abnormal events. At testing\ntime the real data are compared with both the appearance and the motion\nrepresentations reconstructed by our GANs and abnormal areas are detected by\ncomputing local differences. 
Experimental results on challenging abnormality\ndetection datasets show the superiority of the proposed method compared to the\nstate of the art in both frame-level and pixel-level abnormality detection\ntasks.", "field": [], "task": ["Abnormal Event Detection In Video", "Anomaly Detection", "Optical Flow Estimation"], "method": [], "dataset": ["UCSD", "UBI-Fights"], "metric": ["AUC"], "title": "Abnormal Event Detection in Videos using Generative Adversarial Nets"} {"abstract": "We herein present a language-model-based evaluator for deletion-based sentence compression and view this task as a series of deletion-and-evaluation operations using the evaluator. More specifically, the evaluator is a syntactic neural language model that is first built by learning the syntactic and structural collocation among words. Subsequently, a series of trial-and-error deletion operations are conducted on the source sentences via a reinforcement learning framework to obtain the best target compression. An empirical study shows that the proposed model can effectively generate more readable compression, comparable or superior to several strong baselines. Furthermore, we introduce a 200-sentence test set for a large-scale dataset, setting a new baseline for the future research.", "field": [], "task": ["Language Modelling", "Sentence Compression"], "method": [], "dataset": ["Google Dataset"], "metric": ["CR", "F1"], "title": "A Language Model based Evaluator for Sentence Compression"} {"abstract": "In e-commerce portals, generating answers for product-related questions has\nbecome a crucial task. In this paper, we propose the task of product-aware\nanswer generation, which tends to generate an accurate and complete answer from\nlarge-scale unlabeled e-commerce reviews and product attributes. Unlike\nexisting question-answering problems, answer generation in e-commerce confronts\nthree main challenges: (1) Reviews are informal and noisy; (2) joint modeling\nof reviews and key-value product attributes is challenging; (3) traditional\nmethods easily generate meaningless answers. To tackle above challenges, we\npropose an adversarial learning based model, named PAAG, which is composed of\nthree components: a question-aware review representation module, a key-value\nmemory network encoding attributes, and a recurrent neural network as a\nsequence generator. Specifically, we employ a convolutional discriminator to\ndistinguish whether our generated answer matches the facts. To extract the\nsalience part of reviews, an attention-based review reader is proposed to\ncapture the most relevant words given the question. Conducted on a large-scale\nreal-world e-commerce dataset, our extensive experiments verify the\neffectiveness of each module in our proposed model. Moreover, our experiments\nshow that our model achieves the state-of-the-art performance in terms of both\nautomatic metrics and human evaluations.", "field": [], "task": ["Question Answering"], "method": [], "dataset": ["JD Product Question Answer"], "metric": ["BLEU"], "title": "Product-Aware Answer Generation in E-Commerce Question-Answering"} {"abstract": "An implementation of \"Binarized Attributed Network Embedding\". Attributed network embedding enables joint representation learning of node links and attributes. Existing attributed network embedding models are designed in continuous Euclidean spaces which often introduce data redundancy and impose challenges to storage and computation costs. 
To this end, we present a Binarized Attributed Network Embedding model (BANE for short) to learn binary node representation. Specifically, we define a new Weisfeiler-Lehman proximity matrix to capture data dependence between node links and attributes by aggregating the information of node attributes and links from neighboring nodes to a given target node in a layer-wise manner. Based on the Weisfeiler-Lehman proximity matrix, we formulate a new Weisfeiler-Lehman matrix factorization learning function under the binary node representation constraint. The learning problem is a mixed integer optimization and an efficient cyclic coordinate descent (CCD) algorithm is used as the solution. Node classification and link prediction experiments on real-world datasets show that the proposed BANE model outperforms the state-of-the-art network embedding methods.", "field": [], "task": ["Graph Embedding", "Link Prediction", "Network Embedding", "Node Classification", "Representation Learning"], "method": [], "dataset": ["Wiki", "Citeseer", "Cora"], "metric": ["AUC"], "title": "Binarized Attributed Network Embedding"} {"abstract": "This paper presents a novel framework, MGNER, for Multi-Grained Named Entity Recognition where multiple entities or entity mentions in a sentence could be non-overlapping or totally nested. Different from traditional approaches regarding NER as a sequential labeling task and annotating entities consecutively, MGNER detects and recognizes entities on multiple granularities: it is able to recognize named entities without explicitly assuming non-overlapping or totally nested structures. MGNER consists of a Detector that examines all possible word segments and a Classifier that categorizes entities. In addition, contextual information and a self-attention mechanism are utilized throughout the framework to improve the NER performance. Experimental results show that MGNER outperforms current state-of-the-art baselines up to 4.4% in terms of the F1 score among nested/non-overlapping NER tasks.", "field": [], "task": ["Multi-Grained Named Entity Recognition", "Named Entity Recognition", "Nested Mention Recognition", "Nested Named Entity Recognition"], "method": [], "dataset": ["CoNLL 2003 (English)", "ACE 2005", "ACE 2004"], "metric": ["F1"], "title": "Multi-Grained Named Entity Recognition"} {"abstract": "Inspired by the recent success of methods that employ shape priors to achieve\nrobust 3D reconstructions, we propose a novel recurrent neural network\narchitecture that we call the 3D Recurrent Reconstruction Neural Network\n(3D-R2N2). The network learns a mapping from images of objects to their\nunderlying 3D shapes from a large collection of synthetic data. Our network\ntakes in one or more images of an object instance from arbitrary viewpoints and\noutputs a reconstruction of the object in the form of a 3D occupancy grid.\nUnlike most of the previous works, our network does not require any image\nannotations or object class labels for training or testing. 
Our extensive\nexperimental analysis shows that our reconstruction framework i) outperforms\nthe state-of-the-art methods for single view reconstruction, and ii) enables\nthe 3D reconstruction of objects in situations when traditional SFM/SLAM\nmethods fail (because of lack of texture and/or wide baseline).", "field": [], "task": ["3D Object Reconstruction", "3D Reconstruction", "Object Reconstruction"], "method": [], "dataset": ["Data3D\u2212R2N2"], "metric": ["3DIoU", "Avg F1"], "title": "3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction"} {"abstract": "Count-based exploration algorithms are known to perform near-optimally when\nused in conjunction with tabular reinforcement learning (RL) methods for\nsolving small discrete Markov decision processes (MDPs). It is generally\nthought that count-based methods cannot be applied in high-dimensional state\nspaces, since most states will only occur once. Recent deep RL exploration\nstrategies are able to deal with high-dimensional continuous state spaces\nthrough complex heuristics, often relying on optimism in the face of\nuncertainty or intrinsic motivation. In this work, we describe a surprising\nfinding: a simple generalization of the classic count-based approach can reach\nnear state-of-the-art performance on various high-dimensional and/or continuous\ndeep RL benchmarks. States are mapped to hash codes, which allows to count\ntheir occurrences with a hash table. These counts are then used to compute a\nreward bonus according to the classic count-based exploration theory. We find\nthat simple hash functions can achieve surprisingly good results on many\nchallenging tasks. Furthermore, we show that a domain-dependent learned hash\ncode may further improve these results. Detailed analysis reveals important\naspects of a good hash function: 1) having appropriate granularity and 2)\nencoding information relevant to solving the MDP. This exploration strategy\nachieves near state-of-the-art performance on both continuous control tasks and\nAtari 2600 games, hence providing a simple yet powerful baseline for solving\nMDPs that require considerable exploration.", "field": [], "task": ["Atari Games", "Continuous Control"], "method": [], "dataset": ["Atari 2600 Montezuma's Revenge", "Atari 2600 Frostbite", "Atari 2600 Freeway", "Atari 2600 Venture"], "metric": ["Score"], "title": "#Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning"} {"abstract": "Convolutional Neural Networks (CNNs) have shown remarkable performance in general object recognition tasks. In this paper, we propose a new model called EnsNet which is composed of one base CNN and multiple Fully Connected SubNetworks (FCSNs). In this model, the set of feature-maps generated by the last convolutional layer in the base CNN is divided along channels into disjoint subsets, and these subsets are assigned to the FCSNs. Each of the FCSNs is trained independent of others so that it can predict the class label from the subset of the feature-maps assigned to it. The output of the overall model is determined by majority vote of the base CNN and the FCSNs. Experimental results using the MNIST, Fashion-MNIST and CIFAR-10 datasets show that the proposed approach further improves the performance of CNNs. 
In particular, an EnsNet achieves a state-of-the-art error rate of 0.16% on MNIST.", "field": [], "task": ["Image Classification", "Object Recognition"], "method": [], "dataset": ["MNIST"], "metric": ["Percentage error", "Accuracy"], "title": "Ensemble learning in CNN augmented with fully connected subnetworks"} {"abstract": "The number of malicious files detected every year are counted by millions. One of the main reasons for these high volumes of different files is the fact that, in order to evade detection, malware authors add mutation. This means that malicious files belonging to the same family, with the same malicious behavior, are constantly modified or obfuscated using several techniques, in such a way that they look like different files. In order to be effective in analyzing and classifying such large amounts of files, we need to be able to categorize them into groups and identify their respective families on the basis of their behavior. In this paper, malicious software is visualized as gray scale images since its ability to capture minor changes while retaining the global structure helps to detect variations. Motivated by the visual similarity between malware samples of the same family, we propose a file agnostic deep learning approach for malware categorization to efficiently group malicious software into families based on a set of discriminant patterns extracted from their visualization as images. The suitability of our approach is evaluated against two benchmarks: the MalImg dataset and the Microsoft Malware Classification Challenge dataset. Experimental comparison demonstrates its superior performance with respect to state-of-the-art techniques.", "field": [], "task": ["Malware Classification"], "method": [], "dataset": ["Malimg Dataset", "Microsoft Malware Classification Challenge"], "metric": ["Accuracy (10-fold)", "Macro F1 (10-fold)", "LogLoss", "Accuracy (5-fold)"], "title": "Using Convolutional Neural Networks for Classification of Malware represented as Images"} {"abstract": "This paper presents the participation of Apollo's team in the SemEval-2018 Task 9 \"Hypernym Discovery\", Subtask 1: \"General-Purpose Hypernym Discovery\", which tries to produce a ranked list of hypernyms for a specific term. We propose a novel approach for automatic extraction of hypernymy relations from a corpus by using dependency patterns. We estimated that the application of these patterns leads to a higher score than using the traditional lexical patterns.", "field": [], "task": ["Hypernym Discovery"], "method": [], "dataset": ["General"], "metric": ["P@5", "MRR", "MAP"], "title": "Apollo at SemEval-2018 Task 9: Detecting Hypernymy Relations Using Syntactic Dependencies"} {"abstract": "Recent deep networks achieved state of the art performance on a variety of semantic segmentation tasks. Despite such progress, these models often face challenges in real world \"wild tasks\" where large difference between labeled training/source data and unseen test/target data exists. In particular, such difference is often referred to as \"domain gap\", and could cause significantly decreased performance which cannot be easily remedied by further increasing the representation power. Unsupervised domain adaptation (UDA) seeks to overcome such problem without target domain labels. 
In this paper, we propose a novel UDA framework based on an iterative self-training (ST) procedure, where the problem is formulated as latent variable loss minimization, and can be solved by alternatively generating pseudo labels on target data and re-training the model with these labels. On top of ST, we also propose a novel class-balanced self-training (CBST) framework to avoid the gradual dominance of large classes on pseudo-label generation, and introduce spatial priors to refine generated labels. Comprehensive experiments show that the proposed methods achieve state of the art semantic segmentation performance under multiple major UDA settings.", "field": [], "task": ["Domain Adaptation", "Image-to-Image Translation", "Semantic Segmentation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["GTAV-to-Cityscapes Labels"], "metric": ["mIoU"], "title": "Unsupervised Domain Adaptation for Semantic Segmentation via Class-Balanced Self-Training"} {"abstract": "Deep learning based methods have dominated super-resolution (SR) field due to their remarkable performance in terms of effectiveness and efficiency. Most of these methods assume that the blur kernel during downsampling is predefined/known (e.g., bicubic). However, the blur kernels involved in real applications are complicated and unknown, resulting in severe performance drop for the advanced SR methods. In this paper, we propose an Iterative Kernel Correction (IKC) method for blur kernel estimation in blind SR problem, where the blur kernels are unknown. We draw the observation that kernel mismatch could bring regular artifacts (either over-sharpening or over-smoothing), which can be applied to correct inaccurate blur kernels. Thus we introduce an iterative correction scheme -- IKC that achieves better results than direct kernel estimation. We further propose an effective SR network architecture using spatial feature transform (SFT) layers to handle multiple blur kernels, named SFTMD. Extensive experiments on synthetic and real-world images show that the proposed IKC method with SFTMD can provide visually favorable SR results and the state-of-the-art performance in blind SR problem.", "field": [], "task": ["Super-Resolution"], "method": [], "dataset": ["Set5 - 3x upscaling", "Set14 - 2x upscaling", "Set14 - 4x upscaling", "Manga109 - 3x upscaling", "BSD100 - 2x upscaling", "Manga109 - 4x upscaling", "Set14 - 3x upscaling", "BSD100 - 3x upscaling", "Urban100 - 2x upscaling", "Urban100 - 3x upscaling", "BSD100 - 4x upscaling", "Manga109 - 2x upscaling", "Set5 - 4x upscaling", "Set5 - 2x upscaling", "Urban100 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Blind Super-Resolution With Iterative Kernel Correction"} {"abstract": "Deployment of deep learning models in robotics as sensory information\nextractors can be a daunting task to handle, even using generic GPU cards.\nHere, we address three of its most prominent hurdles, namely, i) the adaptation\nof a single model to perform multiple tasks at once (in this work, we consider\ndepth estimation and semantic segmentation crucial for acquiring geometric and\nsemantic understanding of the scene), while ii) doing it in real-time, and iii)\nusing asymmetric datasets with uneven numbers of annotations per each modality.\nTo overcome the first two issues, we adapt a recently proposed real-time\nsemantic segmentation network, making changes to further reduce the number of\nfloating point operations. 
To approach the third issue, we embrace a simple\nsolution based on hard knowledge distillation under the assumption of having\naccess to a powerful `teacher' network. We showcase how our system can be\neasily extended to handle more tasks, and more datasets, all at once,\nperforming depth estimation and segmentation both indoors and outdoors with a\nsingle model. Quantitatively, we achieve results equivalent to (or better than)\ncurrent state-of-the-art approaches with one forward pass costing just 13ms and\n6.5 GFLOPs on 640x480 inputs. This efficiency allows us to directly incorporate\nthe raw predictions of our network into the SemanticFusion framework for dense\n3D semantic reconstruction of the scene.", "field": [], "task": ["Depth Estimation", "Knowledge Distillation", "Monocular Depth Estimation", "Real-Time Semantic Segmentation", "Semantic Segmentation", "Surface Normals Estimation"], "method": [], "dataset": ["NYU-Depth V2", "NYU Depth v2"], "metric": ["Speed(ms/f)", "Mean IoU", "RMSE", "mIoU"], "title": "Real-Time Joint Semantic Segmentation and Depth Estimation Using Asymmetric Annotations"} {"abstract": "Automatically generating coherent and semantically meaningful text has many\napplications in machine translation, dialogue systems, image captioning, etc.\nRecently, by combining with policy gradient, Generative Adversarial Nets (GAN)\nthat use a discriminative model to guide the training of the generative model\nas a reinforcement learning policy has shown promising results in text\ngeneration. However, the scalar guiding signal is only available after the\nentire text has been generated and lacks intermediate information about text\nstructure during the generative process. As such, it limits its success when\nthe length of the generated text samples is long (more than 20 words). In this\npaper, we propose a new framework, called LeakGAN, to address the problem for\nlong text generation. We allow the discriminative net to leak its own\nhigh-level extracted features to the generative net to further help the\nguidance. The generator incorporates such informative signals into all\ngeneration steps through an additional Manager module, which takes the\nextracted features of current generated words and outputs a latent vector to\nguide the Worker module for next-word generation. Our extensive experiments on\nsynthetic data and various real-world tasks with Turing test demonstrate that\nLeakGAN is highly effective in long text generation and also improves the\nperformance in short text generation scenarios. More importantly, without any\nsupervision, LeakGAN would be able to implicitly learn sentence structures only\nthrough the interaction between Manager and Worker.", "field": [], "task": ["Text Generation"], "method": [], "dataset": ["Chinese Poems", "EMNLP2017 WMT", "COCO Captions"], "metric": ["BLEU-3", "BLEU-4", "BLEU-2", "BLEU-5"], "title": "Long Text Generation via Adversarial Training with Leaked Information"} {"abstract": "Capturing document images with hand-held devices in unstructured environments is a common practice nowadays. However, \"casual\" photos of documents are usually unsuitable for automatic information extraction, mainly due to physical distortion of the document paper, as well as various camera positions and illumination conditions. In this work, we propose DewarpNet, a deep-learning approach for document image unwarping from a single image. 
Our insight is that the 3D geometry of the document not only determines the warping of its texture but also causes the illumination effects. Therefore, our novelty resides in the explicit modeling of 3D shape for document paper in an end-to-end pipeline. Also, we contribute the largest and most comprehensive dataset for document image unwarping to date - Doc3D. This dataset features multiple ground-truth annotations, including 3D shape, surface normals, UV map, albedo image, etc. Training with Doc3D, we demonstrate state-of-the-art performance for DewarpNet with extensive qualitative and quantitative evaluations. Our network also significantly improves OCR performance on captured document images, decreasing character error rate by 42% on average. Both the code and the dataset are released.\r", "field": [], "task": ["Local Distortion", "MS-SSIM", "Optical Character Recognition", "Regression", "SSIM"], "method": [], "dataset": ["DocUNet"], "metric": ["SSIM", "LD", "MS-SSIM"], "title": "DewarpNet: Single-Image Document Unwarping With Stacked 3D and 2D Regression Networks"} {"abstract": "We propose a general class of language models that treat reference as an\nexplicit stochastic latent variable. This architecture allows models to create\nmentions of entities and their attributes by accessing external databases\n(required by, e.g., dialogue generation and recipe generation) and internal\nstate (required by, e.g. language models which are aware of coreference). This\nfacilitates the incorporation of information that can be accessed in\npredictable locations in databases or discourse context, even when the targets\nof the reference may be rare words. Experiments on three tasks show that our model\nvariants outperform models based on deterministic attention.", "field": [], "task": ["Dialogue Generation", "Recipe Generation"], "method": [], "dataset": ["allrecipes.com"], "metric": ["Perplexity", "BLEU"], "title": "Reference-Aware Language Models"} {"abstract": "For over a decade, machine learning has been used to extract\nopinion-holder-target structures from text to answer the question \"Who\nexpressed what kind of sentiment towards what?\". Recent neural approaches do\nnot outperform the state-of-the-art feature-based models for Opinion Role\nLabeling (ORL). We suspect this is due to the scarcity of labeled training data\nand address this issue using different multi-task learning (MTL) techniques\nwith a related task which has substantially more data, i.e. Semantic Role\nLabeling (SRL). We show that two MTL models improve significantly over the\nsingle-task model for labeling of both holders and targets, on the development\nand the test sets. We found that the vanilla MTL model which makes predictions\nusing only shared ORL and SRL features, performs the best. With deeper analysis\nwe determine what works and what might be done to make further improvements for\nORL.", "field": [], "task": ["Fine-Grained Opinion Analysis", "Multi-Task Learning"], "method": [], "dataset": ["MPQA"], "metric": ["Holder Binary F1", "Target Binary F1"], "title": "SRL4ORL: Improving Opinion Role Labeling using Multi-task Learning with Semantic Role Labeling"} {"abstract": "As a unique biometric feature that can be recognized at a distance, gait has\nbroad applications in crime prevention, forensic identification and social\nsecurity. 
To portray a gait, existing gait recognition methods utilize either a\ngait template, where temporal information is hard to preserve, or a gait\nsequence, which must keep unnecessary sequential constraints and thus loses the\nflexibility of gait recognition. In this paper we present a novel perspective,\nwhere a gait is regarded as a set consisting of independent frames. We propose\na new network named GaitSet to learn identity information from the set. Based\non the set perspective, our method is immune to permutation of frames, and can\nnaturally integrate frames from different videos which have been filmed under\ndifferent scenarios, such as diverse viewing angles, different clothes/carrying\nconditions. Experiments show that under normal walking conditions, our\nsingle-model method achieves an average rank-1 accuracy of 95.0% on the CASIA-B\ngait dataset and an 87.1% accuracy on the OU-MVLP gait dataset. These results\nrepresent new state-of-the-art recognition accuracy. On various complex\nscenarios, our model exhibits a significant level of robustness. It achieves\naccuracies of 87.2% and 70.4% on CASIA-B under bag-carrying and coat-wearing\nwalking conditions, respectively. These outperform the existing best methods by\na large margin. The method presented can also achieve a satisfactory accuracy\nwith a small number of frames in a test sample, e.g., 82.5% on CASIA-B with\nonly 7 frames. The source code has been released at\nhttps://github.com/AbnerHqC/GaitSet.", "field": [], "task": ["Gait Recognition", "Multiview Gait Recognition"], "method": [], "dataset": ["CASIA-B", "OU-MVLP"], "metric": ["Accuracy (Cross-View)", "BG#1-2", "NM#5-6 ", "Accuracy (Cross-View, Avg)", "CL#1-2"], "title": "GaitSet: Regarding Gait as a Set for Cross-View Gait Recognition"} {"abstract": "Identifying temporal relations between events is an essential step towards natural language understanding. However, the temporal relation between two events in a story depends on, and is often dictated by, relations among other events. Consequently, effectively identifying temporal relations between events is a challenging problem even for human annotators. This paper suggests that it is important to take these dependencies into account while learning to identify these relations and proposes a structured learning approach to address this challenge. As a byproduct, this provides a new perspective on handling missing relations, a known issue that hurts existing methods. As we show, the proposed approach results in significant improvements on the two commonly used data sets for this problem.", "field": [], "task": ["Natural Language Understanding", "Relation Extraction"], "method": [], "dataset": ["TempEval-3"], "metric": ["Temporal awareness"], "title": "A Structured Learning Approach to Temporal Relation Extraction"} {"abstract": "We propose a novel GAN-based framework for detecting shadows in images, in\nwhich a shadow detection network (D-Net) is trained together with a shadow\nattenuation network (A-Net) that generates adversarial training examples. The\nA-Net modifies the original training images constrained by a simplified\nphysical shadow model and is focused on fooling the D-Net's shadow predictions.\nHence, it is effectively augmenting the training data for D-Net with\nhard-to-predict cases. The D-Net is trained to predict shadows in both original\nimages and generated images from the A-Net. 
Our experimental results show that\nthe additional training data from A-Net significantly improves the shadow\ndetection accuracy of D-Net. Our method outperforms the state-of-the-art\nmethods on the most challenging shadow detection benchmark (SBU) and also\nobtains state-of-the-art results on a cross-dataset task, testing on UCF.\nFurthermore, the proposed method achieves accurate real-time shadow detection\nat 45 frames per second.", "field": [], "task": ["Detecting Shadows", "Shadow Detection"], "method": [], "dataset": ["SBU"], "metric": ["BER"], "title": "A+D Net: Training a Shadow Detector with Adversarial Shadow Attenuation"} {"abstract": "3D Morphable Models (3DMMs) are powerful statistical models of 3D facial\nshape and texture, and among the state-of-the-art methods for reconstructing\nfacial shape from single images. With the advent of new 3D sensors, many 3D\nfacial datasets have been collected containing both neutral as well as\nexpressive faces. However, all datasets are captured under controlled\nconditions. Thus, even though powerful 3D facial shape models can be learnt\nfrom such data, it is difficult to build statistical texture models that are\nsufficient to reconstruct faces captured in unconstrained conditions\n(\"in-the-wild\"). In this paper, we propose the first, to the best of our\nknowledge, \"in-the-wild\" 3DMM by combining a powerful statistical model of\nfacial shape, which describes both identity and expression, with an\n\"in-the-wild\" texture model. We show that the employment of such an\n\"in-the-wild\" texture model greatly simplifies the fitting procedure, because\nthere is no need to optimize with regards to the illumination parameters.\nFurthermore, we propose a new fast algorithm for fitting the 3DMM in arbitrary\nimages. Finally, we have captured the first 3D facial database with relatively\nunconstrained conditions and report quantitative evaluations with\nstate-of-the-art performance. Complementary qualitative reconstruction results\nare demonstrated on standard \"in-the-wild\" facial databases. An open source\nimplementation of our technique is released as part of the Menpo Project.", "field": [], "task": ["3D Face Reconstruction"], "method": [], "dataset": ["Florence"], "metric": ["Average 3D Error"], "title": "3D Face Morphable Models \"In-the-Wild\""} {"abstract": "Sarcasm is a type of figurative language broadly adopted in social media and daily conversations. The sarcasm can ultimately alter the meaning of the sentence, which makes the opinion analysis process error-prone. In this paper, we propose to employ bidirectional encoder representations transformers (BERT), and aspect-based sentiment analysis approaches in order to extract the relation between context dialogue sequence and response and determine whether or not the response is sarcastic. The best performing method of ours obtains an F1 score of 0.73 on the Twitter dataset and 0.734 over the Reddit dataset at the second workshop on figurative language processing Shared Task 2020.", "field": [], "task": ["Aspect-Based Sentiment Analysis", "Sarcasm Detection", "Sentiment Analysis"], "method": [], "dataset": ["FigLang 2020 Twitter Dataset", "FigLang 2020 Reddit Dataset"], "metric": ["F1"], "title": "Applying Transformers and Aspect-based Sentiment Analysis approaches on Sarcasm Detection"} {"abstract": "In this paper we consider a version of the zero-shot learning problem where\nseen class source and target domain data are provided. 
The goal during\ntest-time is to accurately predict the class label of an unseen target domain\ninstance based on revealed source domain side information (\\eg attributes) for\nunseen classes. Our method is based on viewing each source or target data as a\nmixture of seen class proportions and we postulate that the mixture patterns\nhave to be similar if the two instances belong to the same unseen class. This\nperspective leads us to learning source/target embedding functions that map an\narbitrary source/target domain data into a same semantic space where similarity\ncan be readily measured. We develop a max-margin framework to learn these\nsimilarity functions and jointly optimize parameters by means of cross\nvalidation. Our test results are compelling, leading to significant improvement\nin terms of accuracy on most benchmark datasets for zero-shot recognition.", "field": [], "task": ["Semantic Similarity", "Semantic Textual Similarity", "Zero-Shot Learning"], "method": [], "dataset": ["CUB-200-2011 - 0-Shot"], "metric": ["Top-1 Accuracy"], "title": "Zero-Shot Learning via Semantic Similarity Embedding"} {"abstract": "Malware detection and classification is a challenging problem and an active area of research. Particular challenges include how to best treat and preprocess malicious executables in order to feed machine learning algorithms. Novel approaches in the literature treat an executable as a sequence of bytes or as a sequence of assembly language instructions. However, in those approaches the hierarchical structure of programs is not taken into consideration. An executable exhibits various levels of spatial correlation. Adjacent code instructions are correlated spatially but that is not necessarily the case. Function calls and jump commands transfer the control of the program to a different point in the instruction stream. Furthermore, these discontinuities are maintained when treating the binary as a sequence of byte values. In addition, functions might be arranged randomly if addresses are correctly reorganized. To address these issues we propose a Hierarchical Convolutional Network (HCN) for malware classification. It has two levels of convolutional blocks applied at the mnemonic-level and at the function-level, enabling us to extract n-gram like features from both levels when constructing the malware representation. We validate our HCN method on the dataset released for the Microsoft Malware Classification Challenge, outperforming almost every deep learning method in the literature.", "field": [], "task": ["Hierarchical structure", "Malware Classification", "Malware Detection"], "method": [], "dataset": ["Microsoft Malware Classification Challenge"], "metric": ["Accuracy (10-fold)", "Macro F1 (10-fold)", "LogLoss"], "title": "A Hierarchical Convolutional Neural Network for Malware Classification"} {"abstract": "Predicting user responses, such as click-through rate and conversion rate,\nare critical in many web applications including web search, personalised\nrecommendation, and online advertising. Different from continuous raw features\nthat we usually found in the image and audio domains, the input features in web\nspace are always of multi-field and are mostly discrete and categorical while\ntheir dependencies are little known. Major user response prediction models have\nto either limit themselves to linear models or require manually building up\nhigh-order combination features. 
The former loses the ability of exploring\nfeature interactions, while the latter results in a heavy computation in the\nlarge feature space. To tackle the issue, we propose two novel models using\ndeep neural networks (DNNs) to automatically learn effective patterns from\ncategorical feature interactions and make predictions of users' ad clicks. To\nget our DNNs efficiently work, we propose to leverage three feature\ntransformation methods, i.e., factorisation machines (FMs), restricted\nBoltzmann machines (RBMs) and denoising auto-encoders (DAEs). This paper\npresents the structure of our models and their efficient training algorithms.\nThe large-scale experiments with real-world data demonstrate that our methods\nwork better than major state-of-the-art models.", "field": [], "task": ["Click-Through Rate Prediction"], "method": [], "dataset": ["Criteo", "iPinYou", "Company*"], "metric": ["Log Loss", "AUC"], "title": "Deep Learning over Multi-field Categorical Data: A Case Study on User Response Prediction"} {"abstract": "Existing models for cross-domain named entity recognition (NER) rely on numerous unlabeled corpus or labeled NER training data in target domains. However, collecting data for low-resource target domains is not only expensive but also time-consuming. Hence, we propose a cross-domain NER model that does not use any external resources. We first introduce a Multi-Task Learning (MTL) by adding a new objective function to detect whether tokens are named entities or not. We then introduce a framework called Mixture of Entity Experts (MoEE) to improve the robustness for zero-resource domain adaptation. Finally, experimental results show that our model outperforms strong unsupervised cross-domain sequence labeling models, and the performance of our model is close to that of the state-of-the-art model which leverages extensive resources.", "field": [], "task": ["Cross-Domain Named Entity Recognition", "Domain Adaptation", "Multi-Task Learning", "Named Entity Recognition"], "method": [], "dataset": ["CoNLL04"], "metric": ["F1"], "title": "Zero-Resource Cross-Domain Named Entity Recognition"} {"abstract": "UDPipe is a trainable pipeline which performs sentence segmentation, tokenization, POS tagging, lemmatization and dependency parsing. We present a prototype for UDPipe 2.0 and evaluate it in the CoNLL 2018 UD Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, which employs three metrics for submission ranking. Out of 26 participants, the prototype placed first in the MLAS ranking, third in the LAS ranking and third in the BLEX ranking. In extrinsic parser evaluation EPE 2018, the system ranked first in the overall score.", "field": [], "task": ["Dependency Parsing", "Lemmatization", "Sentence segmentation", "Tokenization", "Word Embeddings"], "method": [], "dataset": ["Universal Dependencies"], "metric": ["UAS", "BLEX", "LAS"], "title": "UDPipe 2.0 Prototype at CoNLL 2018 UD Shared Task"} {"abstract": "Word Sense Disambiguation is an open problem in Natural Language Processing\nwhich is particularly challenging and useful in the unsupervised setting where\nall the words in any given text need to be disambiguated without using any\nlabeled data. Typically WSD systems use the sentence or a small window of words\naround the target word as the context for disambiguation because their\ncomputational complexity scales exponentially with the size of the context. 
In\nthis paper, we leverage the formalism of topic model to design a WSD system\nthat scales linearly with the number of words in the context. As a result, our\nsystem is able to utilize the whole document as the context for a word to be\ndisambiguated. The proposed method is a variant of Latent Dirichlet Allocation\nin which the topic proportions for a document are replaced by synset\nproportions. We further utilize the information in the WordNet by assigning a\nnon-uniform prior to synset distribution over words and a logistic-normal prior\nfor document distribution over synsets. We evaluate the proposed method on\nSenseval-2, Senseval-3, SemEval-2007, SemEval-2013 and SemEval-2015 English\nAll-Word WSD datasets and show that it outperforms the state-of-the-art\nunsupervised knowledge-based WSD system by a significant margin.", "field": [], "task": ["Topic Models", "Word Sense Disambiguation"], "method": [], "dataset": ["Knowledge-based:"], "metric": ["Senseval 2", "Senseval 3", "SemEval 2013", "All", "SemEval 2007", "SemEval 2015"], "title": "Knowledge-based Word Sense Disambiguation using Topic Models"} {"abstract": "We propose a novel algorithm for monocular depth estimation using relative depth maps. First, using a convolutional neural network, we estimate relative depths between pairs of regions, as well as ordinary depths, at various scales. Second, we restore relative depth maps from selectively estimated data based on the rank-1 property of pairwise comparison matrices. Third, we decompose ordinary and relative depth maps into components and recombine them optimally to reconstruct a final depth map. Experimental results show that the proposed algorithm provides the state-of-art depth estimation performance.\r", "field": [], "task": ["Depth Estimation", "Monocular Depth Estimation"], "method": [], "dataset": ["NYU-Depth V2"], "metric": ["RMSE"], "title": "Monocular Depth Estimation Using Relative Depth Maps"} {"abstract": "Convolutional Neural Networks (CNNs) have been very successful at solving a variety of computer vision tasks such as object classification and detection, semantic segmentation, activity understanding, to name just a few. One key enabling factor for their great performance has been the ability to train very deep networks. Despite their huge success in many tasks, CNNs do not work well with non-Euclidean data, which is prevalent in many real-world applications. Graph Convolutional Networks (GCNs) offer an alternative that allows for non-Eucledian data input to a neural network. While GCNs already achieve encouraging results, they are currently limited to architectures with a relatively small number of layers, primarily due to vanishing gradients during training. This work transfers concepts such as residual/dense connections and dilated convolutions from CNNs to GCNs in order to successfully train very deep GCNs. We show the benefit of using deep GCNs (with as many as $112$ layers) experimentally across various datasets and tasks. Specifically, we achieve state-of-the-art performance in part segmentation and semantic segmentation on point clouds and in node classification of protein functions across biological protein-protein interaction (PPI) graphs. We believe that the insights in this work will open avenues for future research on GCNs and their application to further tasks not explored in this paper. 
The source code for this work is available at https://github.com/lightaime/deep_gcns_torch and https://github.com/lightaime/deep_gcns for Pytorch and Tensorflow implementation respectively.", "field": [], "task": ["Node Classification", "Object Classification", "Semantic Segmentation"], "method": [], "dataset": ["PPI"], "metric": ["F1"], "title": "DeepGCNs: Making GCNs Go as Deep as CNNs"} {"abstract": "Convolutional neural nets (convnets) trained from massive labeled datasets\nhave substantially improved the state-of-the-art in image classification and\nobject detection. However, visual understanding requires establishing\ncorrespondence on a finer level than object category. Given their large pooling\nregions and training from whole-image labels, it is not clear that convnets\nderive their success from an accurate correspondence model which could be used\nfor precise localization. In this paper, we study the effectiveness of convnet\nactivation features for tasks requiring correspondence. We present evidence\nthat convnet features localize at a much finer scale than their receptive field\nsizes, that they can be used to perform intraclass alignment as well as\nconventional hand-engineered features, and that they outperform conventional\nfeatures in keypoint prediction on objects from PASCAL VOC 2011.", "field": [], "task": ["Image Classification", "Keypoint Detection", "Object Detection"], "method": [], "dataset": [" Pascal3D+"], "metric": ["Mean PCK"], "title": "Do Convnets Learn Correspondence?"} {"abstract": "We present a novel method called Contextual Pyramid CNN (CP-CNN) for\ngenerating high-quality crowd density and count estimation by explicitly\nincorporating global and local contextual information of crowd images. The\nproposed CP-CNN consists of four modules: Global Context Estimator (GCE), Local\nContext Estimator (LCE), Density Map Estimator (DME) and a Fusion-CNN (F-CNN).\nGCE is a VGG-16 based CNN that encodes global context and it is trained to\nclassify input images into different density classes, whereas LCE is another\nCNN that encodes local context information and it is trained to perform\npatch-wise classification of input images into different density classes. DME\nis a multi-column architecture-based CNN that aims to generate high-dimensional\nfeature maps from the input image which are fused with the contextual\ninformation estimated by GCE and LCE using F-CNN. To generate high resolution\nand high-quality density maps, F-CNN uses a set of convolutional and\nfractionally-strided convolutional layers and it is trained along with the DME\nin an end-to-end fashion using a combination of adversarial loss and\npixel-level Euclidean loss. Extensive experiments on highly challenging\ndatasets show that the proposed method achieves significant improvements over\nthe state-of-the-art methods.", "field": [], "task": ["Crowd Counting"], "method": [], "dataset": ["UCF CC 50", "ShanghaiTech A", "WorldExpo\u201910", "ShanghaiTech B"], "metric": ["MAE", "Average MAE"], "title": "Generating High-Quality Crowd Density Maps using Contextual Pyramid CNNs"} {"abstract": "Tasks like code generation and semantic parsing require mapping unstructured\n(or partially structured) inputs to well-formed, executable outputs. We\nintroduce abstract syntax networks, a modeling framework for these problems.\nThe outputs are represented as abstract syntax trees (ASTs) and constructed by\na decoder with a dynamically-determined modular structure paralleling the\nstructure of the output tree. 
On the benchmark Hearthstone dataset for code\ngeneration, our model obtains 79.2 BLEU and 22.7% exact match accuracy,\ncompared to previous state-of-the-art values of 67.1 and 6.1%. Furthermore, we\nperform competitively on the Atis, Jobs, and Geo semantic parsing datasets with\nno task-specific engineering.", "field": [], "task": ["Code Generation", "Semantic Parsing"], "method": [], "dataset": ["ATIS"], "metric": ["Accuracy"], "title": "Abstract Syntax Networks for Code Generation and Semantic Parsing"} {"abstract": "Semantic parsing has made significant progress, but most current semantic\nparsers are extremely slow (CKY-based) and rather primitive in representation.\nWe introduce three new techniques to tackle these problems. First, we design\nthe first linear-time incremental shift-reduce-style semantic parsing algorithm\nwhich is more efficient than conventional cubic-time bottom-up semantic\nparsers. Second, our parser, being type-driven instead of syntax-driven, uses\ntype-checking to decide the direction of reduction, which eliminates the need\nfor a syntactic grammar such as CCG. Third, to fully exploit the power of\ntype-driven semantic parsing beyond simple types (such as entities and truth\nvalues), we borrow from programming language theory the concepts of subtype\npolymorphism and parametric polymorphism to enrich the type system in order to\nbetter guide the parsing. Our system learns very accurate parses in GeoQuery,\nJobs and Atis domains.", "field": [], "task": ["Semantic Parsing"], "method": [], "dataset": ["ATIS"], "metric": ["Accuracy"], "title": "Type-Driven Incremental Semantic Parsing with Polymorphism"} {"abstract": "Memorization in over-parameterized neural networks could severely hurt generalization in the presence of mislabeled examples. However, mislabeled examples are hard to avoid in extremely large datasets collected with weak supervision. We address this problem by reasoning counterfactually about the loss distribution of examples with uniform random labels had they been trained with the real examples, and use this information to remove noisy examples from the training set. First, we observe that examples with uniform random labels have higher losses when trained with stochastic gradient descent under large learning rates. Then, we propose to model the loss distribution of the counterfactual examples using only the network parameters, which is able to model such examples with remarkable success. Finally, we propose to remove examples whose loss exceeds a certain quantile of the modeled loss distribution. This leads to On-the-fly Data Denoising (ODD), a simple yet effective algorithm that is robust to mislabeled examples, while introducing almost zero computational overhead compared to standard training. ODD is able to achieve state-of-the-art results on a wide range of datasets including real-world ones such as WebVision and Clothing1M.", "field": [], "task": ["Denoising", "Image Classification", "Learning with noisy labels"], "method": [], "dataset": ["mini WebVision 1.0"], "metric": ["Top-5 Accuracy", "ImageNet Top-1 Accuracy", "ImageNet Top-5 Accuracy", "Top-1 Accuracy"], "title": "Robust and On-the-fly Dataset Denoising for Image Classification"} {"abstract": "The deep two-stream architecture exhibited excellent performance on video\nbased action recognition. The most computationally expensive step in this\napproach comes from the calculation of optical flow which prevents it from being\nreal-time. 
This paper accelerates this architecture by replacing optical flow\nwith motion vector which can be obtained directly from compressed videos\nwithout extra calculation. However, motion vector lacks fine structures, and\ncontains noisy and inaccurate motion patterns, leading to the evident\ndegradation of recognition performance. Our key insight for relieving this\nproblem is that optical flow and motion vector are inherent correlated.\nTransferring the knowledge learned with optical flow CNN to motion vector CNN\ncan significantly boost the performance of the latter. Specifically, we\nintroduce three strategies for this, initialization transfer, supervision\ntransfer and their combination. Experimental results show that our method\nachieves comparable recognition performance to the state-of-the-art, while our\nmethod can process 390.7 frames per second, which is 27 times faster than the\noriginal two-stream method.", "field": [], "task": ["Action Recognition", "Optical Flow Estimation", "Temporal Action Localization"], "method": [], "dataset": ["UCF101"], "metric": ["3-fold Accuracy"], "title": "Real-time Action Recognition with Enhanced Motion Vector CNNs"} {"abstract": "Weakly supervised instance segmentation with image-level labels, instead of\nexpensive pixel-level masks, remains unexplored. In this paper, we tackle this\nchallenging problem by exploiting class peak responses to enable a\nclassification network for instance mask extraction. With image labels\nsupervision only, CNN classifiers in a fully convolutional manner can produce\nclass response maps, which specify classification confidence at each image\nlocation. We observed that local maximums, i.e., peaks, in a class response map\ntypically correspond to strong visual cues residing inside each instance.\nMotivated by this, we first design a process to stimulate peaks to emerge from\na class response map. The emerged peaks are then back-propagated and\neffectively mapped to highly informative regions of each object instance, such\nas instance boundaries. We refer to the above maps generated from class peak\nresponses as Peak Response Maps (PRMs). PRMs provide a fine-detailed\ninstance-level representation, which allows instance masks to be extracted even\nwith some off-the-shelf methods. To the best of our knowledge, we for the first\ntime report results for the challenging image-level supervised instance\nsegmentation task. Extensive experiments show that our method also boosts\nweakly supervised pointwise localization as well as semantic segmentation\nperformance, and reports state-of-the-art results on popular benchmarks,\nincluding PASCAL VOC 2012 and MS COCO.", "field": [], "task": ["Instance Segmentation", "Semantic Segmentation", "Weakly-supervised instance segmentation"], "method": [], "dataset": ["PASCAL VOC 2012 val"], "metric": ["mIoU"], "title": "Weakly Supervised Instance Segmentation using Class Peak Response"} {"abstract": "We introduce a novel unsupervised domain adaptation approach for object detection. We aim to alleviate the imperfect translation problem of pixel-level adaptations, and the source-biased discriminativity problem of feature-level adaptations simultaneously. Our approach is composed of two stages, i.e., Domain Diversification (DD) and Multi-domain-invariant Representation Learning (MRL). At the DD stage, we diversify the distribution of the labeled data by generating various distinctive shifted domains from the source domain. 
At the MRL stage, we apply adversarial learning with a multi-domain discriminator to encourage features to be indistinguishable among the domains. DD addresses the source-biased discriminativity, while MRL mitigates the imperfect image translation. We construct a structured domain adaptation framework for our learning paradigm and introduce a practical way of DD for implementation. Our method outperforms the state-of-the-art methods by a large margin of 3%~11% in terms of mean average precision (mAP) on various datasets.", "field": [], "task": ["Domain Adaptation", "Object Detection", "Representation Learning", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["Cityscapes-to-Foggy Cityscapes", "Cityscapes to Foggy Cityscapes"], "metric": ["mAP", "mAP@0.5"], "title": "Diversify and Match: A Domain Adaptive Representation Learning Paradigm for Object Detection"} {"abstract": "Unsupervised video segmentation plays an important role in a wide variety of\napplications from object identification to compression. However, to date, fast\nmotion, motion blur and occlusions pose significant challenges. To address\nthese challenges for unsupervised video segmentation, we develop a novel\nsaliency estimation technique as well as a novel neighborhood graph, based on\noptical flow and edge cues. Our approach leads to significantly better initial\nforeground-background estimates and their robust as well as accurate diffusion\nacross time. We evaluate our proposed algorithm on the challenging DAVIS,\nSegTrack v2 and FBMS-59 datasets. Despite the usage of only a standard edge\ndetector trained on 200 images, our method achieves state-of-the-art results\noutperforming deep learning based methods in the unsupervised setting. We even\ndemonstrate competitive results comparable to deep learning based methods in\nthe semi-supervised setting on the DAVIS dataset.", "field": [], "task": ["Optical Flow Estimation", "Saliency Prediction", "Semantic Segmentation", "Unsupervised Video Object Segmentation", "Video Object Segmentation", "Video Salient Object Detection", "Video Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["DAVSOD-Difficult20"], "metric": ["max E-measure", "Average MAE", "S-Measure"], "title": "Unsupervised Video Object Segmentation using Motion Saliency-Guided Spatio-Temporal Propagation"} {"abstract": "The Shape Interaction Matrix (SIM) is one of the earliest approaches to\nperforming subspace clustering (i.e., separating points drawn from a union of\nsubspaces). In this paper, we revisit the SIM and reveal its connections to\nseveral recent subspace clustering methods. Our analysis lets us derive a\nsimple, yet effective algorithm to robustify the SIM and make it applicable to\nrealistic scenarios where the data is corrupted by noise. We justify our method\nby intuitive examples and matrix perturbation theory. We then show how this\napproach can be extended to handle missing data, thus yielding an efficient and\ngeneral subspace clustering algorithm.
We demonstrate the benefits of our\napproach over state-of-the-art subspace clustering methods on several\nchallenging motion segmentation and face clustering problems, where the data\nincludes corrupted and missing measurements.", "field": [], "task": ["Face Clustering", "Motion Segmentation"], "method": [], "dataset": ["Hopkins155"], "metric": ["Classification Error"], "title": "Shape Interaction Matrix Revisited and Robustified: Efficient Subspace Clustering with Corrupted and Incomplete Data"} {"abstract": "The last decade has witnessed the success of the traditional feature-based\nmethod in exploiting discrete structures such as words or lexical patterns\nto extract relations from text. Recently, convolutional and recurrent neural\nnetworks have provided very effective mechanisms to capture the hidden\nstructures within sentences via continuous representations, thereby\nsignificantly advancing the performance of relation extraction. The advantage\nof convolutional neural networks is their capacity to generalize the\nconsecutive k-grams in the sentences while recurrent neural networks are\neffective at encoding long ranges of sentence context. This paper proposes to\ncombine the traditional feature-based method with the convolutional and recurrent\nneural networks to simultaneously benefit from their advantages. Our systematic\nevaluation of different network architectures and combination methods\ndemonstrates the effectiveness of this approach and results in\nstate-of-the-art performance on the ACE 2005 and SemEval datasets.", "field": [], "task": ["Relation Extraction"], "method": [], "dataset": ["ACE 2005"], "metric": ["Relation classification F1"], "title": "Combining Neural Networks and Log-linear Models to Improve Relation Extraction"} {"abstract": "Neural models of Knowledge Base data have typically employed compositional representations of graph objects: entity and relation embeddings are systematically combined to evaluate the truth of a candidate Knowledge Base entry. Using a model inspired by Harmonic Grammar, we propose to tokenize triplet embeddings by subjecting them to a process of optimization with respect to learned well-formedness conditions on Knowledge Base triplets. The resulting model, known as Gradient Graphs, leads to sizable improvements when implemented as a companion to compositional models. Also, we show that the \"supracompositional\" triplet token embeddings it produces have interpretable properties that prove helpful in performing inference on the resulting triplet representations.", "field": [], "task": ["Knowledge Base Completion", "Knowledge Graphs", "Link Prediction"], "method": [], "dataset": ["FB15k", " FB15k", "WN18"], "metric": ["Hits@3", "MRR filtered", "Hits@1", "MR", "MRR", "Hits@10"], "title": "Augmenting Compositional Models for Knowledge Base Completion Using Gradient Representations"} {"abstract": "Sequence-to-sequence models with attention have been successful for a variety of NLP problems, but their speed does not scale well for tasks with long source sequences such as document summarization. We propose a novel coarse-to-fine attention model that hierarchically reads a document, using coarse attention to select top-level chunks of text and fine attention to read the words of the chosen chunks. While the computation for training standard attention models scales linearly with source sequence length, our method scales with the number of top-level chunks and can handle much longer sequences.
Empirically, we find that while coarse-to-fine attention models lag behind state-of-the-art baselines, our method achieves the desired behavior of sparsely attending to subsets of the document for generation.", "field": [], "task": ["Document Summarization", "Machine Translation", "Question Answering"], "method": [], "dataset": ["CNN / Daily Mail"], "metric": ["PPL", "ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Coarse-to-Fine Attention Models for Document Summarization"} {"abstract": "In this paper, we develop a new model for recognizing human actions. An action is modeled as a very sparse sequence of temporally local discriminative keyframes collections of partial key-poses of the actor(s), depicting key states in the action sequence. We cast the learning of keyframes in a max-margin discriminative framework, where we treat keyframes as latent variables. This allows us to (jointly) learn a set of most discriminative keyframes while also learning the local temporal context between them. Keyframes are encoded using a spatially-localizable poselet-like representation with HoG and BoW components learned from weak annotations; we rely on structured SVM formulation to align our components and mine for hard negatives to boost localization performance. This results in a model that supports spatio-temporal localization and is insensitive to dropped frames or partial observations. We show classification performance that is competitive with the state of the art on the benchmark UT-Interaction dataset and illustrate that our model outperforms prior methods in an on-line streaming setting.", "field": [], "task": ["Activity Recognition", "Temporal Localization"], "method": [], "dataset": ["UT"], "metric": ["Accuracy"], "title": "Poselet Key-Framing: A Model for Human Activity Recognition"} {"abstract": "Photoplethysmography (PPG)-based continuous heart rate monitoring is essential in a number of domains, e.g., for healthcare or fitness applications. Recently, methods based on time-frequency spectra emerged to address the challenges of motion artefact compensation. However, existing approaches are highly parametrised and optimised for specific scenarios of small, public datasets. We address this fragmentation by contributing research into the robustness and generalisation capabilities of PPG-based heart rate estimation approaches. First, we introduce a novel large-scale dataset (called PPG-DaLiA), including a wide range of activities performed under close to real-life conditions. Second, we extend a state-of-the-art algorithm, significantly improving its performance on several datasets. Third, we introduce deep learning to this domain, and investigate various convolutional neural network architectures. Our end-to-end learning approach takes the time-frequency spectra of synchronised PPG- and accelerometer-signals as input, and provides the estimated heart rate as output. Finally, we compare the novel deep learning approach to classical methods, performing evaluation on four public datasets. 
We show that on large datasets the deep learning model significantly outperforms other methods: The mean absolute error could be reduced by 31% on the new dataset PPG-DaLiA, and by 21% on the dataset WESAD.", "field": [], "task": ["Heart rate estimation", "Photoplethysmography (PPG)"], "method": [], "dataset": ["PPG-DaLiA", "WESAD"], "metric": ["MAE [bpm, session-wise]"], "title": "Deep PPG: Large-Scale Heart Rate Estimation with Convolutional Neural Networks"} {"abstract": "Video-based person re-identification (re-ID) is an important research topic in computer vision. The key to tackling the challenging task is to exploit both spatial and temporal clues in video sequences. In this work, we propose a novel graph-based framework, namely Multi-Granular Hypergraph (MGH), to pursue better representational capabilities by modeling spatiotemporal dependencies in terms of multiple granularities. Specifically, hypergraphs with different spatial granularities are constructed using various levels of part-based features across the video sequence. In each hypergraph, different temporal granularities are captured by hyperedges that connect a set of graph nodes (i.e., part-based features) across different temporal ranges. Two critical issues (misalignment and occlusion) are explicitly addressed by the proposed hypergraph propagation and feature aggregation schemes. Finally, we further enhance the overall video representation by learning more diversified graph-level representations of multiple granularities based on mutual information minimization. Extensive experiments on three widely-adopted benchmarks clearly demonstrate the effectiveness of the proposed framework. Notably, 90.0% top-1 accuracy on MARS is achieved using MGH, outperforming the state-of-the-arts.\r", "field": [], "task": ["Person Re-Identification", "Video-Based Person Re-Identification"], "method": [], "dataset": ["MARS"], "metric": ["Rank-1", "Rank-20", "mAP", "Rank-5"], "title": "Learning Multi-Granular Hypergraphs for Video-Based Person Re-Identification"} {"abstract": "We present discriminative Gaifman models, a novel family of relational\nmachine learning models. Gaifman models learn feature representations bottom up\nfrom representations of locally connected and bounded-size regions of knowledge\nbases (KBs). Considering local and bounded-size neighborhoods of knowledge\nbases renders logical inference and learning tractable, mitigates the problem\nof overfitting, and facilitates weight sharing. Gaifman models sample\nneighborhoods of knowledge bases so as to make the learned relational models\nmore robust to missing objects and relations which is a common situation in\nopen-world KBs. We present the core ideas of Gaifman models and apply them to\nlarge-scale relational learning problems. We also discuss the ways in which\nGaifman models relate to some existing relational machine learning approaches.", "field": [], "task": ["Link Prediction", "Relational Reasoning"], "method": [], "dataset": ["WN18"], "metric": ["Hits@10", "MR", "Hits@1"], "title": "Discriminative Gaifman Models"} {"abstract": "We propose a data-driven approach to online multi-object tracking (MOT) that uses a convolutional neural network (CNN) for data association in a tracking-by-detection framework. The problem of multi-target tracking aims to assign noisy detections to a-priori unknown and time-varying number of tracked objects across a sequence of frames. 
A majority of the existing solutions focus on either tediously designing cost functions or formulating the task of data association as a complex optimization problem that can be solved effectively. Instead, we exploit the power of deep learning to formulate the data association problem as inference in a CNN. To this end, we propose to learn a similarity function that combines cues from both image and spatial features of objects. Our solution learns to perform global assignments in 3D purely from data, handles noisy detections and a varying number of targets, and is easy to train. We evaluate our approach on the challenging KITTI dataset and show competitive results. Our code is available at https://git.uwaterloo.ca/wise-lab/fantrack.", "field": [], "task": ["3D Multi-Object Tracking", "Multi-Object Tracking", "Object Tracking", "Online Multi-Object Tracking"], "method": [], "dataset": ["KITTI"], "metric": ["MOTA", "MOTP"], "title": "FANTrack: 3D Multi-Object Tracking with Feature Association Network"} {"abstract": "Occlusion relations inform the partition of the image domain into ``objects'' but are difficult to determine from a single image or short-baseline video. We show how long-term occlusion relations can be robustly inferred from video, and used within a convex optimization framework to segment the image domain into regions. We highlight the challenges in determining these occluder/occluded relations and ensuring regions remain temporally consistent, propose strategies to overcome them, and introduce an efficient numerical scheme to perform the partition directly on the pixel grid, without the need for superpixelization or other preprocessing steps.", "field": [], "task": ["Semantic Segmentation", "Unsupervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "Causal Video Object Segmentation From Persistence of Occlusions"} {"abstract": "In visual surveillance systems, it is necessary to recognize the behavior of\npeople handling objects such as a phone, a cup, or a plastic bag. In this\npaper, to address this problem, we propose a new framework for recognizing\nobject-related human actions by graph convolutional networks using human and\nobject poses. In this framework, we construct skeletal graphs of reliable human\nposes by selectively sampling the informative frames in a video, which include\nhuman joints with high confidence scores obtained in pose estimation. The\nskeletal graphs generated from the sampled frames represent human poses related\nto the object position in both the spatial and temporal domains, and these\ngraphs are used as inputs to the graph convolutional networks. 
Through\nexperiments over an open benchmark and our own data sets, we verify the\nvalidity of our framework in that our method outperforms the state-of-the-art\nmethod for skeleton-based action recognition.", "field": [], "task": ["Action Recognition", "Pose Estimation", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["IRD", "ICVL-4"], "metric": ["Accuracy"], "title": "Skeleton-based Action Recognition of People Handling Objects"} {"abstract": "BACKGROUND:\r\nStudies on sleep-spindles are typically based on visual-marks performed by experts, however this process is time consuming and presents a low inter-expert agreement, causing the data to be limited in quantity and prone to bias. An automatic detector would tackle these issues by generating large amounts of objectively marked data.\r\n\r\nNEW METHOD:\r\nOur goal was to develop a sensitive, precise and robust sleep-spindle detection method. Emphasis has been placed on achieving a consistent performance across heterogeneous recordings and without the need for further parameter fine tuning. The developed detector runs on a single channel and is based on multivariate classification using a support vector machine. Scalp-electroencephalogram recordings were segmented into epochs which were then characterized by a selection of relevant and non-redundant features. The training and validation data came from the Medical Center-University of Freiburg, the test data consisted of 27 records coming from 2 public databases.\r\n\r\nRESULTS:\r\nUsing a sample based assessment, 53% sensitivity, 37% precision and 96% specificity was achieved on the DREAMS database. On the MASS database, 77% sensitivity, 46% precision and 96% specificity was achieved. The developed detector performed favorably when compared to previous detectors. The classification of normalized EEG epochs in a multidimensional space, as well as the use of a validation set, allowed to objectively define a single detection threshold for all databases and participants.\r\n\r\nCONCLUSIONS:\r\nThe use of the developed tool will allow increasing the data-size and statistical significance of research studies on the role of sleep-spindles.", "field": [], "task": ["EEG", "Spindle Detection"], "method": [], "dataset": ["MASS SS2"], "metric": ["F1-score (@IoU = 0.3)"], "title": "A single channel sleep-spindle detector based on multivariate classification of EEG epochs: MUSSDET."} {"abstract": "Multi-orientation scene text detection has recently gained significant research attention. Previous methods directly predict words or text lines, typically by using quadrilateral shapes. However, many of these methods neglect the significance of consistent labeling, which is important for maintaining a stable training process, especially when it comprises a large amount of data. Here we solve this problem by proposing a new method, Orderless Box Discretization (OBD), which first discretizes the quadrilateral box into several key edges containing all potential horizontal and vertical positions. To decode accurate vertex positions, a simple yet effective matching procedure is proposed for reconstructing the quadrilateral bounding boxes. Our method solves the ambiguity issue, which has a significant impact on the learning process. Extensive ablation studies are conducted to validate the effectiveness of our proposed method quantitatively. 
More importantly, based on OBD, we provide a detailed analysis of the impact of a collection of refinements, which may inspire others to build state-of-the-art text detectors. Combining both OBD and these useful refinements, we achieve state-of-the-art performance on various benchmarks, including ICDAR 2015 and MLT. Our method also won the first place in the text detection task at the recent ICDAR2019 Robust Reading Challenge for Reading Chinese Text on Signboards, further demonstrating its superior performance. The code is available at https://git.io/TextDet.", "field": [], "task": ["Scene Text", "Scene Text Detection"], "method": [], "dataset": ["ICDAR 2017 MLT", "ICDAR 2015"], "metric": ["F-Measure", "Recall", "Precision"], "title": "Exploring the Capacity of an Orderless Box Discretization Network for Multi-orientation Scene Text Detection"} {"abstract": "Parsing articulated objects, e.g. humans and animals, into semantic parts\n(e.g. body, head and arms, etc.) from natural images is a challenging and\nfundamental problem for computer vision. A big difficulty is the large\nvariability of scale and location for objects and their corresponding parts.\nEven limited mistakes in estimating scale and location will degrade the parsing\noutput and cause errors in boundary details. To tackle these difficulties, we\npropose a \"Hierarchical Auto-Zoom Net\" (HAZN) for object part parsing which\nadapts to the local scales of objects and parts. HAZN is a sequence of two\n\"Auto-Zoom Net\" (AZNs), each employing fully convolutional networks that\nperform two tasks: (1) predict the locations and scales of object instances\n(the first AZN) or their parts (the second AZN); (2) estimate the part scores\nfor predicted object instance or part regions. Our model can adaptively \"zoom\"\n(resize) predicted image regions into their proper scales to refine the\nparsing.\n We conduct extensive experiments over the PASCAL part datasets on humans,\nhorses, and cows. For humans, our approach significantly outperforms the\nstate-of-the-arts by 5% mIOU and is especially better at segmenting small\ninstances and small parts. We obtain similar improvements for parsing cows and\nhorses over alternative methods. In summary, our strategy of first zooming into\nobjects and then zooming into parts is very effective. It also enables us to\nprocess different regions of the image at different scales adaptively so that,\nfor example, we do not need to waste computational resources scaling the entire\nimage.", "field": [], "task": [], "method": [], "dataset": ["PASCAL-Part"], "metric": ["mIoU"], "title": "Zoom Better to See Clearer: Human and Object Parsing with Hierarchical Auto-Zoom Net"} {"abstract": "Clustering partitions a dataset such that observations placed together in a\ngroup are similar but different from those in other groups. Hierarchical and\n$K$-means clustering are two approaches but have different strengths and\nweaknesses. For instance, hierarchical clustering identifies groups in a\ntree-like structure but suffers from computational complexity in large datasets\nwhile $K$-means clustering is efficient but designed to identify homogeneous\nspherically-shaped clusters. We present a hybrid non-parametric clustering\napproach that amalgamates the two methods to identify general-shaped clusters\nand that can be applied to larger datasets. Specifically, we first partition\nthe dataset into spherical groups using $K$-means. 
We next merge these groups\nusing hierarchical methods with a data-driven distance measure as a stopping\ncriterion. Our proposal has the potential to reveal groups with general shapes\nand structure in a dataset. We demonstrate good performance on several\nsimulated and real datasets.", "field": [], "task": ["Density Estimation", "Speech Synthesis"], "method": [], "dataset": ["ImageNet", "North American English"], "metric": ["NLL", "Mean Opinion Score"], "title": "Merging $K$-means with hierarchical clustering for identifying general-shaped groups"} {"abstract": "An unconstrained end-to-end text localization and recognition method is\npresented. The method detects initial text hypothesis in a single pass by an\nefficient region-based method and subsequently refines the text hypothesis\nusing a more robust local text model, which deviates from the common assumption\nof region-based methods that all characters are detected as connected\ncomponents.\n Additionally, a novel feature based on character stroke area estimation is\nintroduced. The feature is efficiently computed from a region distance map, it\nis invariant to scaling and rotations and allows to efficiently detect text\nregions regardless of what portion of text they capture.\n The method runs in real time and achieves state-of-the-art text localization\nand recognition results on the ICDAR 2013 Robust Reading dataset.", "field": [], "task": ["Scene Text"], "method": [], "dataset": ["ICDAR 2013"], "metric": ["F-Measure", "Recall", "Precision"], "title": "Efficient Scene Text Localization and Recognition with Local Character Refinement"} {"abstract": "In this article we describe a new convolutional neural network (CNN) to\nclassify 3D point clouds of urban or indoor scenes. Solutions are given to the\nproblems encountered working on scene point clouds, and a network is described\nthat allows for point classification using only the position of points in a\nmulti-scale neighborhood.\n On the reduced-8 Semantic3D benchmark [Hackel et al., 2017], this network,\nranked second, beats the state of the art of point classification methods\n(those not using a regularization step).", "field": [], "task": ["Semantic Segmentation"], "method": [], "dataset": ["Semantic3D"], "metric": ["mIoU"], "title": "Classification of Point Cloud Scenes with Multiscale Voxel Deep Network"} {"abstract": "Motivation: The amount of information available in textual format is rapidly increasing in the biomedical domain. Therefore, natural language processing (NLP) applications are becoming increasingly important to facilitate the retrieval and analysis of these data. Computing the semantic similarity between sentences is an important component in many NLP tasks including text retrieval and summarization. A number of approaches have been proposed for semantic sentence similarity estimation for generic English. However, our experiments showed that such approaches do not effectively cover biomedical knowledge and produce poor results for biomedical text.\r\n\r\nMethods: We propose several approaches for sentence-level semantic similarity computation in the biomedical domain, including string similarity measures and measures based on the distributed vector representations of sentences learned in an unsupervised manner from a large biomedical corpus. In addition, ontology-based approaches are presented that utilize general and domain-specific ontologies. 
Finally, a supervised regression based model is developed that effectively combines the different similarity computation metrics. A benchmark data set consisting of 100 sentence pairs from the biomedical literature is manually annotated by five human experts and used for evaluating the proposed methods.\r\n\r\nResults: The experiments showed that the supervised semantic sentence similarity computation approach obtained the best performance (0.836 correlation with gold standard human annotations) and improved over the state-of-the-art domain-independent systems by up to 42.6% in terms of the Pearson correlation metric.", "field": [], "task": ["Regression", "Semantic Similarity", "Semantic Textual Similarity", "Sentence Embeddings For Biomedical Texts", "Sentence Similarity"], "method": [], "dataset": ["BIOSSES"], "metric": ["Pearson Correlation"], "title": "BIOSSES: A Semantic Sentence Similarity Estimation System for the Biomedical Domain"} {"abstract": "We use deep learning to model interactions across two or more sets of\nobjects, such as user-movie ratings, protein-drug bindings, or ternary\nuser-item-tag interactions. The canonical representation of such interactions\nis a matrix (or a higher-dimensional tensor) with an exchangeability property:\nthe encoding's meaning is not changed by permuting rows or columns. We argue\nthat models should hence be Permutation Equivariant (PE): constrained to make\nthe same predictions across such permutations. We present a parameter-sharing\nscheme and prove that it could not be made any more expressive without\nviolating PE. This scheme yields three benefits. First, we demonstrate\nstate-of-the-art performance on multiple matrix completion benchmarks. Second,\nour models require a number of parameters independent of the numbers of\nobjects, and thus scale well to large datasets. Third, models can be queried\nabout new objects that were not available at training time, but for which\ninteractions have since been observed. In experiments, our models achieved\nsurprisingly good generalization performance on this matrix extrapolation task,\nboth within domains (e.g., new users and new movies drawn from the same\ndistribution used for training) and even across domains (e.g., predicting music\nratings after training on movies).", "field": [], "task": ["Matrix Completion", "Recommendation Systems"], "method": [], "dataset": ["MovieLens 1M", "Flixster Monti", "Douban Monti", "YahooMusic Monti", "MovieLens 100K"], "metric": ["RMSE (u1 Splits)", "RMSE"], "title": "Deep Models of Interactions Across Sets"} {"abstract": "Convolutional neural networks are nowadays witnessing major success in different pattern recognition problems. These learning models were basically designed to handle vectorial data such as images, but their extension to non-vectorial and semi-structured data (namely graphs with variable sizes, topology, etc.) remains a major challenge, though a few interesting solutions are currently emerging. In this paper, we introduce MLGCN, a novel spectral Multi-Laplacian Graph Convolutional Network. The main contribution of this method resides in a new design principle that learns graph-laplacians as convex combinations of other elementary \r\nlaplacians \u2013 each one dedicated to a particular topology of the input graphs.
We also introduce a novel pooling operator, on graphs, that proceeds in two steps: context-dependent node expansion is achieved, followed by a global average pooling; the strength of this two-step process resides in its ability to preserve the discrimination power of nodes while achieving permutation invariance. Experiments conducted on SBU and UCF-101 datasets, show the validity of our method for the challenging task of action recognition. \r\n\r\nSupplementary : https://bit.ly/2ku2lYv", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["UCF101", "SBU"], "metric": ["3-fold Accuracy", "Accuracy"], "title": "MLGCN: Multi-Laplacian Graph Convolutional Networks for Human Action Recognition"} {"abstract": "Extractive summarization models require sentence-level labels, which are\nusually created heuristically (e.g., with rule-based methods) given that most\nsummarization datasets only have document-summary pairs. Since these labels\nmight be suboptimal, we propose a latent variable extractive model where\nsentences are viewed as latent variables and sentences with activated variables\nare used to infer gold summaries. During training the loss comes\n\\emph{directly} from gold summaries. Experiments on the CNN/Dailymail dataset\nshow that our model improves over a strong extractive baseline trained on\nheuristically approximated labels and also performs competitively to several\nrecent models.", "field": [], "task": ["Document Summarization", "Extractive Text Summarization"], "method": [], "dataset": ["CNN / Daily Mail"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Neural Latent Extractive Document Summarization"} {"abstract": "In this work, we propose a new family of generative flows on an augmented data space, with an aim to improve expressivity without drastically increasing the computational cost of sampling and evaluation of a lower bound on the likelihood. Theoretically, we prove the proposed flow can approximate a Hamiltonian ODE as a universal transport map. Empirically, we demonstrate state-of-the-art performance on standard benchmarks of flow-based generative modeling.", "field": [], "task": ["Image Generation", "Latent Variable Models"], "method": [], "dataset": ["CelebA 256x256", "ImageNet 32x32", "CIFAR-10"], "metric": ["bits/dimension", "bpd"], "title": "Augmented Normalizing Flows: Bridging the Gap Between Generative Flows and Latent Variable Models"} {"abstract": "There has been significant amount of research work on human activity classification relying either on Inertial Measurement Unit (IMU) data or data from static cameras providing a third-person view. Using only IMU data limits the variety and complexity of the activities that can be detected. For instance, the sitting activity can be detected by IMU data, but it cannot be determined whether the subject has sat on a chair or a sofa, or where the subject is. To perform fine-grained activity classification from egocentric videos, and to distinguish between activities that cannot be differentiated by only IMU data, we present an autonomous and robust method using data from both ego-vision cameras and IMUs. In contrast to convolutional neural network-based approaches, we propose to employ capsule networks to obtain features from egocentric video data. Moreover, Convolutional Long Short Term Memory framework is employed both on egocentric videos and IMU data to capture temporal aspect of actions. 
We also propose a genetic algorithm-based approach to autonomously and systematically set various network parameters, rather than using manual settings. Experiments have been performed to perform 9- and 26-label activity classification, and the proposed method, using autonomously set network parameters, has provided very promising results, achieving overall accuracies of 86.6\\% and 77.2\\%, respectively. The proposed approach combining both modalities also provides increased accuracy compared to using only egovision data and only IMU data.", "field": [], "task": ["Multimodal Activity Recognition"], "method": [], "dataset": [" CMU Multi-Modal Activity (CMU-MMAC)"], "metric": ["Accuracy"], "title": "Autonomous Human Activity Classification from Ego-vision Camera and Accelerometer Data"} {"abstract": "We propose an attentive local feature descriptor suitable for large-scale\nimage retrieval, referred to as DELF (DEep Local Feature). The new feature is\nbased on convolutional neural networks, which are trained only with image-level\nannotations on a landmark image dataset. To identify semantically useful local\nfeatures for image retrieval, we also propose an attention mechanism for\nkeypoint selection, which shares most network layers with the descriptor. This\nframework can be used for image retrieval as a drop-in replacement for other\nkeypoint detectors and descriptors, enabling more accurate feature matching and\ngeometric verification. Our system produces reliable confidence scores to\nreject false positives---in particular, it is robust against queries that have\nno correct match in the database. To evaluate the proposed descriptor, we\nintroduce a new large-scale dataset, referred to as Google-Landmarks dataset,\nwhich involves challenges in both database and query such as background\nclutter, partial occlusion, multiple landmarks, objects in variable scales,\netc. We show that DELF outperforms the state-of-the-art global and local\ndescriptors in the large-scale setting by significant margins. Code and dataset\ncan be found at the project webpage:\nhttps://github.com/tensorflow/models/tree/master/research/delf .", "field": [], "task": ["Image Retrieval"], "method": [], "dataset": ["Par106k", "Par6k", "Oxf5k", "Oxf105k"], "metric": ["mAP", "MAP"], "title": "Large-Scale Image Retrieval with Attentive Deep Local Features"} {"abstract": "In neural abstractive summarization field, conventional sequence-to-sequence\nbased models often suffer from summarizing the wrong aspect of the document\nwith respect to the main aspect. To tackle this problem, we propose the task of\nreader-aware abstractive summary generation, which utilizes the reader comments\nto help the model produce better summary about the main aspect. Unlike\ntraditional abstractive summarization task, reader-aware summarization\nconfronts two main challenges: (1) Comments are informal and noisy; (2) jointly\nmodeling the news document and the reader comments is challenging. To tackle\nthe above challenges, we design an adversarial learning model named\nreader-aware summary generator (RASG), which consists of four components: (1) a\nsequence-to-sequence based summary generator; (2) a reader attention module\ncapturing the reader focused aspects; (3) a supervisor modeling the semantic\ngap between the generated summary and reader focused aspects; (4) a goal\ntracker producing the goal for each generation step. The supervisor and the\ngoal tacker are used to guide the training of our framework in an adversarial\nmanner. 
Extensive experiments are conducted on our large-scale real-world text\nsummarization dataset, and the results show that RASG achieves the\nstate-of-the-art performance in terms of both automatic metrics and human\nevaluations. The experimental results also demonstrate the effectiveness of\neach module in our framework. We release our large-scale dataset for further\nresearch.", "field": [], "task": ["Abstractive Text Summarization", "Reader-Aware Summarization", "Text Summarization"], "method": [], "dataset": ["RASG"], "metric": ["ROUGE-1"], "title": "Abstractive Text Summarization by Incorporating Reader Comments"} {"abstract": "Understanding the complex urban infrastructure with centimeter-level accuracy is essential for many applications from autonomous driving to mapping, infrastructure monitoring, and urban management. Aerial images provide valuable information over a large area instantaneously; nevertheless, no current dataset captures the complexity of aerial scenes at the level of granularity required by real-world applications. To address this, we introduce SkyScapes, an aerial image dataset with highly-accurate, fine-grained annotations for pixel-level semantic labeling. SkyScapes provides annotations for 31 semantic categories ranging from large structures, such as buildings, roads and vegetation, to fine details, such as 12 (sub-)categories of lane markings. We have defined two main tasks on this dataset: dense semantic segmentation and multi-class lane-marking prediction. We carry out extensive experiments to evaluate state-of-the-art segmentation methods on SkyScapes. Existing methods struggle to deal with the wide range of classes, object sizes, scales, and fine details present. We therefore propose a novel multi-task model, which incorporates semantic edge detection and is better tuned for feature extraction from a wide range of scales. This model achieves notable improvements over the baselines in region outlines and level of detail on both tasks.\r", "field": [], "task": ["Autonomous Driving", "Edge Detection", "Semantic Segmentation"], "method": [], "dataset": ["SkyScapes-Dense", "SkyScapes-Lane"], "metric": ["Mean IoU"], "title": "SkyScapes Fine-Grained Semantic Understanding of Aerial Scenes"} {"abstract": "Few-shot classification consists of learning a predictive model that is able to effectively adapt to a new class, given only a few annotated samples. To solve this challenging problem, meta-learning has become a popular paradigm that advocates the ability to \"learn to adapt\". Recent works have shown, however, that simple learning strategies without meta-learning could be competitive. In this paper, we go a step further and show that by addressing the fundamental high-variance issue of few-shot learning classifiers, it is possible to significantly outperform current meta-learning techniques. Our approach consists of designing an ensemble of deep networks to leverage the variance of the classifiers, and introducing new strategies to encourage the networks to cooperate, while encouraging prediction diversity. 
Evaluation is conducted on the mini-ImageNet and CUB datasets, where we show that even a single network obtained by distillation yields state-of-the-art results.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Meta-Learning"], "method": [], "dataset": ["Mini-ImageNet - 1-Shot Learning"], "metric": ["Accuracy"], "title": "Diversity with Cooperation: Ensemble Methods for Few-Shot Classification"} {"abstract": "The goal of this work is to recognise phrases and sentences being spoken by a\ntalking face, with or without the audio. Unlike previous works that have\nfocussed on recognising a limited number of words or phrases, we tackle lip\nreading as an open-world problem - unconstrained natural language sentences,\nand in the wild videos.\n Our key contributions are: (1) a 'Watch, Listen, Attend and Spell' (WLAS)\nnetwork that learns to transcribe videos of mouth motion to characters; (2) a\ncurriculum learning strategy to accelerate training and to reduce overfitting;\n(3) a 'Lip Reading Sentences' (LRS) dataset for visual speech recognition,\nconsisting of over 100,000 natural sentences from British television.\n The WLAS model trained on the LRS dataset surpasses the performance of all\nprevious work on standard lip reading benchmark datasets, often by a\nsignificant margin. This lip reading performance beats a professional lip\nreader on videos from BBC television, and we also demonstrate that visual\ninformation helps to improve speech recognition performance even when the audio\nis available.", "field": [], "task": ["Curriculum Learning", "Lipreading", "Lip Reading", "Speech Recognition", "Visual Speech Recognition"], "method": [], "dataset": ["GRID corpus (mixed-speech)"], "metric": ["Word Error Rate (WER)"], "title": "Lip Reading Sentences in the Wild"} {"abstract": "How to effectively learn temporal variation of target appearance, to exclude the interference of cluttered background, while maintaining real-time response, is an essential problem of visual object tracking. Recently, Siamese networks have shown great potentials of matching based trackers in achieving balanced accuracy and beyond real-time speed. However, they still have a big gap to classification & updating based trackers in tolerating the temporal changes of objects and imaging conditions. In this paper, we propose dynamic Siamese network, via a fast transformation learning model that enables effective online learning of target appearance variation and background suppression from previous frames. We then present elementwise multi-layer fusion to adaptively integrate the network outputs using multi-level deep features. Unlike state-of-the-art trackers, our approach allows the usage of any feasible generally- or particularly-trained features, such as SiamFC and VGG. More importantly, the proposed dynamic Siamese network can be jointly trained as a whole directly on the labeled video sequences, thus can take full advantage of the rich spatial temporal information of moving objects. 
As a result, our approach achieves state-of-the-art performance on OTB-2013 and VOT-2015 benchmarks, while exhibits superiorly balanced accuracy and real-time response over state-of-the-art competitors.\r", "field": [], "task": ["Object Tracking", "Visual Object Tracking"], "method": [], "dataset": ["OTB-2013"], "metric": ["AUC"], "title": "Learning Dynamic Siamese Network for Visual Object Tracking"} {"abstract": "Meta-learning for few-shot learning entails acquiring a prior over previous tasks and experiences, such that new tasks be learned from small amounts of data. However, a critical challenge in few-shot learning is task ambiguity: even when a powerful prior can be meta-learned from a large number of prior tasks, a small dataset for a new task can simply be too ambiguous to acquire a single model (e.g., a classifier) for that task that is accurate. In this paper, we propose a probabilistic meta-learning algorithm that can sample models for a new task from a model distribution. Our approach extends model-agnostic meta-learning, which adapts to new tasks via gradient descent, to incorporate a parameter distribution that is trained via a variational lower bound. At meta-test time, our algorithm adapts via a simple procedure that injects noise into gradient descent, and at meta-training time, the model is trained such that this stochastic adaptation procedure produces samples from the approximate model posterior. Our experimental results show that our method can sample plausible classifiers and regressors in ambiguous few-shot learning problems. We also show how reasoning about ambiguity can also be used for downstream active learning problems.", "field": [], "task": ["Active Learning", "Few-Shot Image Classification", "Few-Shot Learning", "Meta-Learning"], "method": [], "dataset": ["Mini-ImageNet - 1-Shot Learning"], "metric": ["Accuracy"], "title": "Probabilistic Model-Agnostic Meta-Learning"} {"abstract": "The key challenge of face recognition is to develop effective feature representations for reducing intra-personal variations while enlarging inter-personal differences. In this paper, we show that it can be well solved with deep learning and using both face identification and verification signals as supervision. The Deep IDentification-verification features (DeepID2) are learned with carefully designed deep convolutional networks. The face identification task increases the inter-personal variations by drawing DeepID2 extracted from different identities apart, while the face verification task reduces the intra-personal variations by pulling DeepID2 extracted from the same identity together, both of which are essential to face recognition. The learned DeepID2 features can be well generalized to new identities unseen in the training data. On the challenging LFW dataset, 99.15% face verification accuracy is achieved. Compared with the best deep learning result on LFW, the error rate has been significantly reduced by 67%.", "field": [], "task": ["Face Identification", "Face Recognition", "Face Verification"], "method": [], "dataset": ["Labeled Faces in the Wild"], "metric": ["Accuracy"], "title": "Deep Learning Face Representation by Joint Identification-Verification"} {"abstract": "Dramatic appearance variation due to pose constitutes a great challenge in\nfine-grained recognition, one which recent methods using attention mechanisms\nor second-order statistics fail to adequately address. 
Modern CNNs typically\nlack an explicit understanding of object pose and are instead confused by\nentangled pose and appearance. In this paper, we propose a unified object\nrepresentation built from a hierarchy of pose-aligned regions. Rather than\nrepresenting an object by regions aligned to image axes, the proposed\nrepresentation characterizes appearance relative to the object's pose using\npose-aligned patches whose features are robust to variations in pose, scale and\nrotation. We propose an algorithm that performs pose estimation and forms the\nunified object representation as the concatenation of hierarchical pose-aligned\nregions features, which is then fed into a classification network. The proposed\nalgorithm surpasses the performance of other approaches, increasing the\nstate-of-the-art by nearly 2% on the widely-used CUB-200 dataset and by more\nthan 8% on the much larger NABirds dataset. The effectiveness of this paradigm\nrelative to competing methods suggests the critical importance of disentangling\npose and appearance for continued progress in fine-grained recognition.", "field": [], "task": ["Fine-Grained Image Classification", "Pose Estimation"], "method": [], "dataset": [" CUB-200-2011", "NABirds"], "metric": ["Accuracy"], "title": "Aligned to the Object, not to the Image: A Unified Pose-aligned Representation for Fine-grained Recognition"} {"abstract": "Webly supervised learning becomes attractive recently for its efficiency in data expansion without expensive human labeling. However, adopting search queries or hashtags as web labels of images for training brings massive noise that degrades the performance of DNNs. Especially, due to the semantic confusion of query words, the images retrieved by one query may contain tremendous images belonging to other concepts. For example, searching `tiger cat' on Flickr will return a dominating number of tiger images rather than the cat images. These realistic noisy samples usually have clear visual semantic clusters in the visual space that mislead DNNs from learning accurate semantic labels. To correct real-world noisy labels, expensive human annotations seem indispensable. Fortunately, we find that metadata can provide extra knowledge to discover clean web labels in a labor-free fashion, making it feasible to automatically provide correct semantic guidance among the massive label-noisy web data. In this paper, we propose an automatic label corrector VSGraph-LC based on the visual-semantic graph. VSGraph-LC starts from anchor selection referring to the semantic similarity between metadata and correct label concepts, and then propagates correct labels from anchors on a visual graph using graph neural network (GNN). Experiments on realistic webly supervised learning datasets Webvision-1000 and NUS-81-Web show the effectiveness and robustness of VSGraph-LC. Moreover, VSGraph-LC reveals its advantage on the open-set validation set.", "field": [], "task": ["Image Classification", "Semantic Similarity", "Semantic Textual Similarity"], "method": [], "dataset": ["WebVision-1000"], "metric": ["Top-1 Accuracy"], "title": "Webly Supervised Image Classification with Metadata: Automatic Noisy Label Correction via Visual-Semantic Graph"} {"abstract": "We introduce our method and system for face recognition using multiple\npose-aware deep learning models. In our representation, a face image is\nprocessed by several pose-specific deep convolutional neural network (CNN)\nmodels to generate multiple pose-specific features. 
3D rendering is used to\ngenerate multiple face poses from the input image. Sensitivity of the\nrecognition system to pose variations is reduced since we use an ensemble of\npose-specific CNN features. The paper presents extensive experimental results\non the effect of landmark detection, CNN layer selection and pose model\nselection on the performance of the recognition pipeline. Our novel\nrepresentation achieves better results than the state-of-the-art on IARPA's CS2\nand NIST's IJB-A in both verification and identification (i.e. search) tasks.", "field": [], "task": ["Face Recognition", "Face Verification", "Model Selection"], "method": [], "dataset": ["IJB-A"], "metric": ["TAR @ FAR=0.01"], "title": "Face Recognition Using Deep Multi-Pose Representations"} {"abstract": "In this paper, we present an algorithm for unconstrained face verification\nbased on deep convolutional features and evaluate it on the newly released\nIARPA Janus Benchmark A (IJB-A) dataset. The IJB-A dataset includes real-world\nunconstrained faces from 500 subjects with full pose and illumination\nvariations which are much harder than the traditional Labeled Face in the Wild\n(LFW) and Youtube Face (YTF) datasets. The deep convolutional neural network\n(DCNN) is trained using the CASIA-WebFace dataset. Extensive experiments on the\nIJB-A dataset are provided.", "field": [], "task": ["Face Verification"], "method": [], "dataset": ["IJB-A"], "metric": ["TAR @ FAR=0.01"], "title": "Unconstrained Face Verification using Deep CNN Features"} {"abstract": "Semantic parsing shines at analyzing complex natural language that involves\ncomposition and computation over multiple pieces of evidence. However, datasets\nfor semantic parsing contain many factoid questions that can be answered from a\nsingle web document. In this paper, we propose to evaluate semantic\nparsing-based question answering models by comparing them to a question\nanswering baseline that queries the web and extracts the answer only from web\nsnippets, without access to the target knowledge-base. We investigate this\napproach on COMPLEXQUESTIONS, a dataset designed to focus on compositional\nlanguage, and find that our model obtains reasonable performance (35 F1\ncompared to 41 F1 of state-of-the-art). We find in our analysis that our model\nperforms well on complex questions involving conjunctions, but struggles on\nquestions that involve relation composition and superlatives.", "field": [], "task": ["Question Answering", "Semantic Parsing"], "method": [], "dataset": ["COMPLEXQUESTIONS"], "metric": ["F1"], "title": "Evaluating Semantic Parsing against a Simple Web-based Question Answering Model"} {"abstract": "This paper presents a simple nonparametric regression approach to data-driven computing in elasticity. We apply the kernel regression to the material data set, and formulate a system of nonlinear equations solved to obtain a static equilibrium state of an elastic structure. Preliminary numerical experiments illustrate that, compared with existing methods, the proposed method finds a reasonable solution even if data points distribute coarsely in a given material data set.", "field": [], "task": ["Regression", "Stress-Strain Relation"], "method": [], "dataset": ["Non-Linear Elasticity Benchmark"], "metric": ["Time (ms)"], "title": "Data-driven computing in elasticity via kernel regression"} {"abstract": "This paper addresses the problem of modeling textual conversations and detecting emotions. 
Our proposed model makes use of 1) deep transfer learning rather than the classical shallow methods of word embedding; 2) self-attention mechanisms to focus on the most important parts of the texts and 3) turn-based conversational modeling for classifying the emotions. The approach does not rely on any hand-crafted features or lexicons. Our model was evaluated on the data provided by the SemEval-2019 shared task on contextual emotion detection in text. The model shows very competitive results.", "field": [], "task": ["Emotion Recognition in Conversation", "Transfer Learning"], "method": [], "dataset": ["EC"], "metric": ["Micro-F1"], "title": "Attention-based Modeling for Emotion Detection and Classification in Textual Conversations"} {"abstract": "Depth estimation and 3D object detection are critical for scene understanding\nbut remain challenging to perform with a single image due to the loss of 3D\ninformation during image capture. Recent models using deep neural networks have\nimproved monocular depth estimation performance, but there is still difficulty\nin predicting absolute depth and generalizing outside a standard dataset. Here\nwe introduce the paradigm of deep optics, i.e. end-to-end design of optics and\nimage processing, to the monocular depth estimation problem, using coded\ndefocus blur as an additional depth cue to be decoded by a neural network. We\nevaluate several optical coding strategies along with an end-to-end\noptimization scheme for depth estimation on three datasets, including NYU Depth\nv2 and KITTI. We find an optimized freeform lens design yields the best\nresults, but chromatic aberration from a singlet lens offers significantly\nimproved performance as well. We build a physical prototype and validate that\nchromatic aberrations improve depth estimation on real-world results. In\naddition, we train object detection networks on the KITTI dataset and show that\nthe lens optimized for depth estimation also results in improved 3D object\ndetection performance.", "field": [], "task": ["3D Object Detection", "Depth Estimation", "Monocular Depth Estimation", "Object Detection", "Scene Understanding"], "method": [], "dataset": ["NYU-Depth V2"], "metric": ["RMS"], "title": "Deep Optics for Monocular Depth Estimation and 3D Object Detection"} {"abstract": "Chit-chat models are known to have several problems: they lack specificity,\ndo not display a consistent personality and are often not very captivating. In\nthis work we present the task of making chit-chat more engaging by conditioning\non profile information. We collect data and train models to (i) condition on\ntheir given profile information; and (ii) information about the person they are\ntalking to, resulting in improved dialogues, as measured by next utterance\nprediction. Since (ii) is initially unknown our model is trained to engage its\npartner with personal topics, and we show the resulting dialogue can be used to\npredict profile information about the interlocutors.", "field": [], "task": ["Dialogue Generation"], "method": [], "dataset": ["Persona-Chat"], "metric": ["Avg F1"], "title": "Personalizing Dialogue Agents: I have a dog, do you have pets too?"} {"abstract": "Person re-identification (re-id) is the task of matching multiple occurrences\nof the same person from different cameras, poses, lighting conditions, and a\nmultitude of other factors which alter the visual appearance. 
Typically, this\nis achieved by learning either optimal features or matching metrics which are\nadapted to specific pairs of camera views dictated by the pairwise labelled\ntraining datasets. In this work, we formulate a deep learning based novel\napproach to automatic prototype-domain discovery for domain perceptive\n(adaptive) person re-id (rather than camera pair specific learning) for any\ncamera views scalable to new unseen scenes without training data. We learn a\nseparate re-id model for each of the discovered prototype-domains and during\nmodel deployment, use the person probe image to select automatically the model\nof the closest prototype domain. Our approach requires neither supervised nor\nunsupervised domain adaptation learning, i.e. no data available from the target\ndomains. We evaluate extensively our model under realistic re-id conditions\nusing automatically detected bounding boxes with low-resolution and partial\nocclusion. We show that our approach outperforms most of the state-of-the-art\nsupervised and unsupervised methods on the latest CUHK-SYSU and PRW benchmarks.", "field": [], "task": ["Domain Adaptation", "Person Re-Identification", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["CUHK-SYSU"], "metric": ["Rank-1", "MAP"], "title": "Deep Learning Prototype Domains for Person Re-Identification"} {"abstract": "Recognizing abnormal events such as traffic violations and accidents in natural driving scenes is essential for successful autonomous driving and advanced driver assistance systems. However, most work on video anomaly detection suffers from two crucial drawbacks. First, they assume cameras are fixed and videos have static backgrounds, which is reasonable for surveillance applications but not for vehicle-mounted cameras. Second, they pose the problem as one-class classification, relying on arduously hand-labeled training datasets that limit recognition to anomaly categories that have been explicitly trained. This paper proposes an unsupervised approach for traffic accident detection in first-person (dashboard-mounted camera) videos. Our major novelty is to detect anomalies by predicting the future locations of traffic participants and then monitoring the prediction accuracy and consistency metrics with three different strategies. We evaluate our approach using a new dataset of diverse traffic accidents, AnAn Accident Detection (A3D), as well as another publicly-available dataset. Experimental results show that our approach outperforms the state-of-the-art.", "field": [], "task": ["Anomaly Detection", "Autonomous Driving", "Object Localization", "Traffic Accident Detection"], "method": [], "dataset": ["A3D", "SA"], "metric": ["AUC"], "title": "Unsupervised Traffic Accident Detection in First-Person Videos"} {"abstract": "Attributes are an intermediate representation, which enables parameter sharing between classes, a must when training data is scarce. We propose to view attribute-based image classification as a label-embedding problem: each class is embedded in the space of attribute vectors. We introduce a function which measures the compatibility between an image and a label embedding. The parameters of this function are learned on a training set of labeled samples to ensure that, given an image, the correct classes rank higher than the incorrect ones. 
Results on the Animals With Attributes and Caltech-UCSD-Birds datasets show that the proposed framework outperforms the standard Direct Attribute Prediction baseline in a zero-shot learning scenario. The label embedding framework offers other advantages such as the ability to leverage alternative sources of information in addition to attributes (e.g. class hierarchies) or to transition smoothly from zero-shot learning to learning with large quantities of data.", "field": [], "task": ["Image Classification", "Zero-Shot Learning"], "method": [], "dataset": ["CUB-200-2011 - 0-Shot"], "metric": ["Top-1 Accuracy"], "title": "Label-Embedding for Attribute-Based Classification"} {"abstract": "We propose a novel method for instance label segmentation of dense 3D voxel grids. We target volumetric scene representations, which have been acquired with depth sensors or multi-view stereo methods and which have been processed with semantic 3D reconstruction or scene completion methods. The main task is to learn shape information about individual object instances in order to accurately separate them, including connected and incompletely scanned objects. We solve the 3D instance-labeling problem with a multi-task learning strategy. The first goal is to learn an abstract feature embedding, which groups voxels with the same instance label close to each other while separating clusters with different instance labels from each other. The second goal is to learn instance information by densely estimating directional information of the instance's center of mass for each voxel. This is particularly useful to find instance boundaries in the clustering post-processing step, as well as, for scoring the segmentation quality for the first goal. Both synthetic and real-world experiments demonstrate the viability and merits of our approach. In fact, it achieves state-of-the-art performance on the ScanNet 3D instance segmentation benchmark.", "field": [], "task": ["3D Instance Segmentation", "3D Reconstruction", "3D Semantic Instance Segmentation", "Instance Segmentation", "Metric Learning", "Multi-Task Learning", "Semantic Segmentation"], "method": [], "dataset": ["ScanNetV2"], "metric": ["mAP@0.50"], "title": "3D Instance Segmentation via Multi-Task Metric Learning"} {"abstract": "A majority of stock 3D models in modern shape repositories are assembled with\nmany fine-grained components. The main cause of such data form is the\ncomponent-wise modeling process widely practiced by human modelers. These\nmodeling components thus inherently reflect some function-based shape\ndecomposition the artist had in mind during modeling. On the other hand,\nmodeling components represent an over-segmentation since a functional part is\nusually modeled as a multi-component assembly. Based on these observations, we\nadvocate that labeled segmentation of stock 3D models should not overlook the\nmodeling components and propose a learning solution to grouping and labeling of\nthe fine-grained components. However, directly characterizing the shape of\nindividual components for the purpose of labeling is unreliable, since they can\nbe arbitrarily tiny and semantically meaningless. We propose to generate part\nhypotheses from the components based on a hierarchical grouping strategy, and\nperform labeling on those part groups instead of directly on the components.\nPart hypotheses are mid-level elements which are more probable to carry\nsemantic information. 
A multiscale 3D convolutional neural network is trained\nto extract context-aware features for the hypotheses. To accomplish a labeled\nsegmentation of the whole shape, we formulate higher-order conditional random\nfields (CRFs) to infer an optimal label assignment for all components.\nExtensive experiments demonstrate that our method achieves significantly robust\nlabeling results on raw 3D models from public shape repositories. Our work also\ncontributes the first benchmark for component-wise labeling.", "field": [], "task": [], "method": [], "dataset": ["ShapeNet"], "metric": ["Mean IoU"], "title": "Learning to Group and Label Fine-Grained Shape Components"} {"abstract": "Unsupervised domain adaptation is critical in various computer vision tasks, such as object detection, instance segmentation, and semantic segmentation; it aims to alleviate performance degradation caused by domain-shift. Most previous methods rely on a single-mode distribution of source and target domains to align them with adversarial learning, leading to inferior results in various scenarios. To that end, in this paper, we design a new spatial attention pyramid network for unsupervised domain adaptation. Specifically, we first build the spatial pyramid representation to capture context information of objects at different scales. Guided by the task-specific information, we combine the dense global structure representation and local texture patterns at each spatial location effectively using the spatial attention mechanism. In this way, the network is forced to focus on the discriminative regions with context information for domain adaptation. We conduct extensive experiments on various challenging datasets for unsupervised domain adaptation on object detection, instance segmentation, and semantic segmentation, which demonstrates that our method performs favorably against the state-of-the-art methods by a large margin. Our source code is available at https://isrc.iscas.ac.cn/gitlab/research/domain-adaption.", "field": [], "task": ["Domain Adaptation", "Instance Segmentation", "Object Detection", "Semantic Segmentation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["Cityscapes to Foggy Cityscapes"], "metric": ["mAP@0.5"], "title": "Spatial Attention Pyramid Network for Unsupervised Domain Adaptation"} {"abstract": "We present the 2016 ChaLearn Looking at People and Faces of the World Challenge and Workshop, which ran three competitions on the common theme of face analysis from still images. The first one, Looking at People, addressed age estimation, while the second and third competitions, Faces of the World, addressed accessory classification and smile and gender classification, respectively. We present two crowd-sourcing methodologies used to collect manual annotations. A custom-built application was used to collect and label data about the apparent age of people (as opposed to the real age). For the Faces of the World data, the citizen-science Zooniverse platform was used. This paper summarizes the three challenges and the data used, as well as the results achieved by the participants of the competitions. 
Details of the ChaLearn LAP FotW competitions can be found at http://gesture.chalearn.org.", "field": [], "task": ["Age Estimation", "Gender Prediction"], "method": [], "dataset": ["FotW Gender"], "metric": ["Accuracy (%)"], "title": "ChaLearn Looking at People and Faces of the World: Face Analysis Workshop and Challenge 2016"} {"abstract": "In an age overflowing with information, the task of converting unstructured data into structured data is a vital task of great need. Currently, most relation extraction modules are more focused on the extraction of local mention-level relations\u2014usually from short volumes of text. However, in most cases, the most vital and important relations are those that are described at length and in detail. In this research, we propose GREG: A Global level Relation Extractor model using knowledge graph embeddings for document-level inputs. The model uses vector representations of mention-level \u2018local\u2019 relations to construct knowledge graphs that can represent the input document. The knowledge graph is then used to predict global level relations from documents or large bodies of text. The proposed model is largely divided into two modules which are synchronized during their training. Thus, each of the model\u2019s modules is designed to deal with local relations and global relations separately. This allows the model to avoid losing information when too much content is crunched into smaller-sized representations during global level relation extraction. Through evaluation, we have shown that the proposed model consistently yields high performance in predicting both global level and local level relations.", "field": [], "task": ["Graph Embedding", "Knowledge Graph Embedding", "Knowledge Graph Embeddings", "Knowledge Graphs", "Relation Extraction"], "method": [], "dataset": ["DocRED"], "metric": ["F1"], "title": "GREG: A Global Level Relation Extraction with Knowledge Graph Embedding"} {"abstract": "In this paper, we focus on the two key aspects of the multiple target tracking\nproblem: 1) designing an accurate affinity measure to associate detections and\n2) implementing an efficient and accurate (near) online multiple target\ntracking algorithm. As the first contribution, we introduce a novel Aggregated\nLocal Flow Descriptor (ALFD) that encodes the relative motion pattern between a\npair of temporally distant detections using long term interest point\ntrajectories (IPTs). Leveraging on the IPTs, the ALFD provides a robust\naffinity measure for estimating the likelihood of matching detections\nregardless of the application scenarios. As another contribution, we present a\nNear-Online Multi-target Tracking (NOMT) algorithm. The tracking problem is\nformulated as a data-association between targets and detections in a temporal\nwindow, which is performed repeatedly at every frame. While being efficient,\nNOMT achieves robustness via integrating multiple cues including the ALFD metric,\ntarget dynamics, appearance similarity, and long term trajectory regularization\ninto the model. Our ablative analysis verifies the superiority of the ALFD\nmetric over the other conventional affinity metrics. We run a comprehensive\nexperimental evaluation on two challenging tracking datasets, the KITTI and MOT\ndatasets. 
The NOMT method combined with the ALFD metric achieves the best accuracy\non both datasets with significant margins (about 10% higher MOTA) over the\nstate of the art.", "field": [], "task": [], "method": [], "dataset": ["KITTI Tracking test", "MOT16"], "metric": ["MOTA"], "title": "Near-Online Multi-target Tracking with Aggregated Local Flow Descriptor"} {"abstract": "In natural images, the scales (thickness) of object skeletons may\ndramatically vary among objects and object parts, making object skeleton\ndetection a challenging problem. We present a new convolutional neural network\n(CNN) architecture by introducing a novel hierarchical feature integration\nmechanism, named Hi-Fi, to address the skeleton detection problem. The proposed\nCNN-based approach has a powerful multi-scale feature integration ability that\nintrinsically captures high-level semantics from deeper layers as well as\nlow-level details from shallower layers. By hierarchically integrating\ndifferent CNN feature levels with bidirectional guidance, our approach (1)\nenables mutual refinement across features of different levels, and (2)\npossesses the strong ability to capture both rich object context and\nhigh-resolution details. Experimental results show that our method\nsignificantly outperforms the state-of-the-art methods in terms of effectively\nfusing features from very different scales, as evidenced by a considerable\nperformance improvement on several benchmarks.", "field": [], "task": ["Object Skeleton Detection"], "method": [], "dataset": ["SK-LARGE"], "metric": ["F-Measure"], "title": "Hi-Fi: Hierarchical Feature Integration for Skeleton Detection"} {"abstract": "Deep networks have been successfully applied to learn transferable features\nfor adapting models from a source domain to a different target domain. In this\npaper, we present joint adaptation networks (JAN), which learn a transfer\nnetwork by aligning the joint distributions of multiple domain-specific layers\nacross domains based on a joint maximum mean discrepancy (JMMD) criterion.\nAn adversarial training strategy is adopted to maximize JMMD such that the\ndistributions of the source and target domains are made more distinguishable.\nLearning can be performed by stochastic gradient descent with the gradients\ncomputed by back-propagation in linear time. Experiments testify that our model\nyields state of the art results on standard datasets.", "field": [], "task": ["Transfer Learning"], "method": [], "dataset": ["VisDA2017", "HMDBfull-to-UCF", "Office-Home", "UCF-to-HMDBfull"], "metric": ["Accuracy"], "title": "Deep Transfer Learning with Joint Adaptation Networks"} {"abstract": "In this work we propose to utilize information about human actions to improve\npose estimation in monocular videos. To this end, we present a pictorial\nstructure model that exploits high-level information about activities to\nincorporate higher-order part dependencies by modeling action specific\nappearance models and pose priors. However, instead of using an additional\nexpensive action recognition framework, the action priors are efficiently\nestimated by our pose estimation framework. This is achieved by starting with a\nuniform action prior and updating the action prior during pose estimation. We\nalso show that learning the right amount of appearance sharing among action\nclasses improves the pose estimation. 
We demonstrate the effectiveness of the\nproposed method on two challenging datasets for pose estimation and action\nrecognition with over 80,000 test images.", "field": [], "task": ["Action Recognition", "Pose Estimation", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["UPenn Action", "J-HMDB"], "metric": ["Mean PCK@0.2"], "title": "Pose for Action - Action for Pose"} {"abstract": "We propose a novel 3D point cloud segmentation framework named SASO, which jointly performs semantic and instance segmentation tasks. For the semantic segmentation task, inspired by the inherent correlation among objects in spatial context, we propose a Multi-scale Semantic Association (MSA) module to explore the constructive effects of the semantic context information. For the instance segmentation task, different from previous works that utilize clustering only in the inference procedure, we propose a Salient Point Clustering Optimization (SPCO) module to introduce a clustering procedure into the training process and impel the network to focus on points that are difficult to distinguish. In addition, because of the inherent structures of indoor scenes, the imbalance problem of the category distribution is rarely considered but severely limits the performance of 3D scene perception. To address this issue, we introduce an adaptive Water Filling Sampling (WFS) algorithm to balance the category distribution of training data. Extensive experiments demonstrate that our method outperforms the state-of-the-art methods on benchmark datasets in both semantic segmentation and instance segmentation tasks.", "field": [], "task": ["3D Instance Segmentation", "3D Semantic Instance Segmentation", "Instance Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["S3DIS"], "metric": ["mWCov", "mRec", "mAcc", "mIoU", "mCov", "mPrec"], "title": "SASO: Joint 3D Semantic-Instance Segmentation via Multi-scale Semantic Association and Salient Point Clustering Optimization"} {"abstract": "We propose an improved technique for weakly-supervised object localization.\nConventional methods have a limitation that they focus only on the most\ndiscriminative parts of the target objects. A recent study addressed this\nissue and resolved this limitation by augmenting the training data for less\ndiscriminative parts. To this end, we employ an effective data augmentation for\nimproving the accuracy of object localization. In addition, we introduce\nimproved learning techniques by optimizing Convolutional Neural Networks (CNN)\nbased on the state-of-the-art model. Based on extensive experiments, we\nevaluate the effectiveness of the proposed approach both qualitatively and\nquantitatively. In particular, we observe that our method improves the Top-1\nlocalization accuracy by 21.4 - 37.3% depending on configurations, compared to\nthe current state-of-the-art technique for weakly-supervised object\nlocalization.", "field": [], "task": ["Data Augmentation", "Object Localization", "Weakly-Supervised Object Localization"], "method": [], "dataset": ["Tiny ImageNet"], "metric": ["Top-1 Localization Accuracy"], "title": "Improved Techniques For Weakly-Supervised Object Localization"} {"abstract": "Making a high-dimensional (e.g., 100K-dim) feature for face recognition seems not to be a good idea because it will bring difficulties in subsequent training, computation, and storage. This prevents further exploration of the use of a high-dimensional feature. 
In this paper, we study the performance of a high-dimensional feature. We first empirically show that high dimensionality is critical to high performance. A 100K-dim feature, based on a single-type Local Binary Pattern (LBP) descriptor, can achieve significant improvements over both its low-dimensional version and the state-of-the-art. We also make the high-dimensional feature practical. With our proposed sparse projection method, named rotated sparse regression, both computation and model storage can be reduced by over 100 times without sacrificing accuracy.", "field": [], "task": ["Age-Invariant Face Recognition", "Face Recognition", "Face Verification", "Regression"], "method": [], "dataset": ["CACDVS"], "metric": ["Accuracy"], "title": "Blessing of Dimensionality: High-Dimensional Feature and Its Efficient Compression for Face Verification"} {"abstract": "Multi-choice reading comprehension is a challenging task that requires a complex reasoning procedure. Given a passage and a question, a correct answer needs to be selected from a set of candidate answers. In this paper, we propose \\textbf{D}ual \\textbf{C}o-\\textbf{M}atching \\textbf{N}etwork (\\textbf{DCMN}) which models the relationship among passage, question and answer bidirectionally. Different from existing approaches which only calculate question-aware or option-aware passage representation, we calculate passage-aware question representation and passage-aware answer representation at the same time. To demonstrate the effectiveness of our model, we evaluate our model on a large-scale multiple choice machine reading comprehension dataset (i.e. RACE). Experimental results show that our proposed model achieves new state-of-the-art results.", "field": [], "task": ["Machine Reading Comprehension", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["RACE"], "metric": ["RACE-h", "RACE-m", "RACE"], "title": "Dual Co-Matching Network for Multi-choice Reading Comprehension"} {"abstract": "Machine learning techniques work best when the data used for training\nresembles the data used for evaluation. This holds true for learned\nsingle-image denoising algorithms, which are applied to real raw camera sensor\nreadings but, due to practical constraints, are often trained on synthetic\nimage data. Though it is understood that generalizing from synthetic to real\ndata requires careful consideration of the noise properties of image sensors,\nthe other aspects of a camera's image processing pipeline (gain, color\ncorrection, tone mapping, etc) are often overlooked, despite their significant\neffect on how raw measurements are transformed into finished images. To address\nthis, we present a technique to \"unprocess\" images by inverting each step of an\nimage processing pipeline, thereby allowing us to synthesize realistic raw\nsensor measurements from commonly available internet photos. We additionally\nmodel the relevant components of an image processing pipeline when evaluating\nour loss function, which allows training to be aware of all relevant\nphotometric processing that will occur after denoising. 
By processing and\nunprocessing model outputs and training data in this way, we are able to train\na simple convolutional neural network that has 14%-38% lower error rates and is\n9x-18x faster than the previous state of the art on the Darmstadt Noise\nDataset, and generalizes to sensors outside of that dataset as well.", "field": [], "task": ["Denoising", "Image Denoising"], "method": [], "dataset": ["Darmstadt Noise Dataset"], "metric": ["SSIM (Raw)", "PSNR (Raw)", "SSIM (sRGB)", "PSNR (sRGB)"], "title": "Unprocessing Images for Learned Raw Denoising"} {"abstract": "This paper proposes deep convolutional network models that utilize local and\nglobal context to make human activity label predictions in still images,\nachieving state-of-the-art performance on two recent datasets with hundreds of\nlabels each. We use multiple instance learning to handle the lack of\nsupervision on the level of individual person instances, and weighted loss to\nhandle unbalanced training data. Further, we show how specialized features\ntrained on these datasets can be used to improve accuracy on the Visual\nQuestion Answering (VQA) task, in the form of multiple choice fill-in-the-blank\nquestions (Visual Madlibs). Specifically, we tackle two types of questions on\nperson activity and person-object relationship and show improvements over\ngeneric features trained on the ImageNet classification task.", "field": [], "task": ["Human-Object Interaction Detection", "Multiple Instance Learning", "Question Answering", "Visual Question Answering"], "method": [], "dataset": ["HICO"], "metric": ["mAP"], "title": "Learning Models for Actions and Person-Object Interactions with Transfer to Question Answering"} {"abstract": "Visual relationships capture a wide variety of interactions between pairs of\nobjects in images (e.g. \"man riding bicycle\" and \"man pushing bicycle\").\nConsequently, the set of possible relationships is extremely large and it is\ndifficult to obtain sufficient training examples for all possible\nrelationships. Because of this limitation, previous work on visual relationship\ndetection has concentrated on predicting only a handful of relationships.\nThough most relationships are infrequent, their objects (e.g. \"man\" and\n\"bicycle\") and predicates (e.g. \"riding\" and \"pushing\") independently occur\nmore frequently. We propose a model that uses this insight to train visual\nmodels for objects and predicates individually and later combines them together\nto predict multiple relationships per image. We improve on prior work by\nleveraging language priors from semantic word embeddings to finetune the\nlikelihood of a predicted relationship. Our model can scale to predict\nthousands of types of relationships from a few examples. Additionally, we\nlocalize the objects in the predicted relationships as bounding boxes in the\nimage. We further demonstrate that understanding relationships can improve\ncontent based image retrieval.", "field": [], "task": ["Content-Based Image Retrieval", "Image Retrieval", "Visual Relationship Detection", "Word Embeddings"], "method": [], "dataset": ["VRD"], "metric": ["Recall@50"], "title": "Visual Relationship Detection with Language Priors"} {"abstract": "In this work, we study the unsupervised video object segmentation problem where moving objects are segmented without prior knowledge of these objects. First, we propose a motion-based bilateral network to estimate the background based on the motion pattern of non-object regions. 
The bilateral network reduces false positive regions by accurately identifying background objects. Then, we integrate the background estimate from the bilateral network with instance embeddings into a graph, which allows multiple frame reasoning with graph edges linking pixels from different frames. We classify graph nodes by defining and minimizing a cost function, and segment the video frames based on the node labels. The proposed method outperforms previous state-of-the-art unsupervised video object segmentation methods against the DAVIS 2016 and the FBMS-59 datasets.", "field": [], "task": ["Semantic Segmentation", "Unsupervised Video Object Segmentation", "Video Object Segmentation", "Video Salient Object Detection", "Video Semantic Segmentation"], "method": [], "dataset": ["ViSal", "MCL", "DAVIS-2016", "VOS-T", "DAVSOD-Normal25", "SegTrack v2", "UVSD", "DAVSOD-easy35", "DAVIS 2016", "FBMS-59"], "metric": ["MAX F-MEASURE", "max E-Measure", "S-Measure", "AVERAGE MAE", "Average MAE", "max E-measure", "MAX E-MEASURE", "max F-Measure"], "title": "Unsupervised Video Object Segmentation with Motion-based Bilateral Networks"} {"abstract": "Recent work in salient object detection has considered the incorporation of depth cues from RGB-D images. In most cases, depth contrast is used as the main feature. However, areas of high contrast in background regions cause false positives for such methods, as the background frequently contains regions that are highly variable in depth. Here, we propose a novel RGB-D saliency feature. Local Background Enclosure (LBE) captures the spread of angular directions which are background with respect to the candidate region and the object that it is part of. We show that our feature improves over state-of-the-art RGB-D saliency approaches as well as RGB methods on the RGBD1000 and NJUDS2000 datasets.", "field": [], "task": ["Object Detection", "RGB-D Salient Object Detection", "RGB Salient Object Detection", "Salient Object Detection"], "method": [], "dataset": ["NJU2K"], "metric": ["max E-Measure", "Average MAE", "S-Measure", "max F-Measure"], "title": "Local Background Enclosure for RGB-D Salient Object Detection"} {"abstract": "Due to recent technical and scientific advances, we have a wealth of\ninformation hidden in unstructured text data such as offline/online narratives,\nresearch articles, and clinical reports. To mine these data properly,\nattributable to their innate ambiguity, a Word Sense Disambiguation (WSD)\nalgorithm can avoid numbers of difficulties in Natural Language Processing\n(NLP) pipeline. However, considering a large number of ambiguous words in one\nlanguage or technical domain, we may encounter limiting constraints for proper\ndeployment of existing WSD models. This paper attempts to address the problem\nof one-classifier-per-one-word WSD algorithms by proposing a single\nBidirectional Long Short-Term Memory (BLSTM) network which by considering\nsenses and context sequences works on all ambiguous words collectively.\nEvaluated on SensEval-3 benchmark, we show the result of our model is\ncomparable with top-performing WSD algorithms. 
We also discuss how applying\nadditional modifications alleviates the model fault and the need for more\ntraining data.", "field": [], "task": ["Word Sense Disambiguation"], "method": [], "dataset": ["SensEval 3 Lexical Sample"], "metric": ["F1"], "title": "One Single Deep Bidirectional LSTM Network for Word Sense Disambiguation of Text Data"} {"abstract": "Learning to localize objects with minimal supervision is an important problem\nin computer vision, since large fully annotated datasets are extremely costly\nto obtain. In this paper, we propose a new method that achieves this goal with\nonly image-level labels of whether the objects are present or not. Our approach\ncombines a discriminative submodular cover problem for automatically\ndiscovering a set of positive object windows with a smoothed latent SVM\nformulation. The latter allows us to leverage efficient quasi-Newton\noptimization techniques. Our experiments demonstrate that the proposed approach\nprovides a 50% relative improvement in mean average precision over the current\nstate-of-the-art on PASCAL VOC 2007 detection.", "field": [], "task": ["Weakly Supervised Object Detection"], "method": [], "dataset": ["PASCAL VOC 2007"], "metric": ["MAP"], "title": "On learning to localize objects with minimal supervision"} {"abstract": "We propose a single-shot approach for simultaneously detecting an object in\nan RGB image and predicting its 6D pose without requiring multiple stages or\nhaving to examine multiple hypotheses. Unlike a recently proposed single-shot\ntechnique for this task (Kehl et al., ICCV'17) that only predicts an\napproximate 6D pose that must then be refined, ours is accurate enough not to\nrequire additional post-processing. As a result, it is much faster - 50 fps on\na Titan X (Pascal) GPU - and more suitable for real-time processing. The key\ncomponent of our method is a new CNN architecture inspired by the YOLO network\ndesign that directly predicts the 2D image locations of the projected vertices\nof the object's 3D bounding box. The object's 6D pose is then estimated using a\nPnP algorithm.\n For single object and multiple object pose estimation on the LINEMOD and\nOCCLUSION datasets, our approach substantially outperforms other recent\nCNN-based approaches when they are all used without post-processing. During\npost-processing, a pose refinement step can be used to boost the accuracy of\nthe existing methods, but at 10 fps or less, they are much slower than our\nmethod.", "field": [], "task": ["6D Pose Estimation using RGB", "Drone Pose Estimation", "Pose Estimation", "Pose Prediction"], "method": [], "dataset": ["OCCLUSION", "LineMOD"], "metric": ["Mean IoU", "Accuracy", "Mean ADD", "MAP"], "title": "Real-Time Seamless Single Shot 6D Object Pose Prediction"} {"abstract": "Face alignment has witnessed substantial progress in the last decade. One of\nthe recent focuses has been aligning a dense 3D face shape to face images with\nlarge head poses. The dominant technology used is based on the cascade of\nregressors, e.g., CNN, which has shown promising results. Nonetheless, the\ncascade of CNNs suffers from several drawbacks, e.g., lack of end-to-end\ntraining, hand-crafted features and slow training speed. To address these\nissues, we propose a new layer, named visualization layer, that can be\nintegrated into the CNN architecture and enables joint optimization with\ndifferent loss functions. 
Extensive evaluation of the proposed method on\nmultiple datasets demonstrates state-of-the-art accuracy, while reducing the\ntraining time by more than half compared to the typical cascade of CNNs. In\naddition, we compare multiple CNN architectures with the visualization layer to\nfurther demonstrate the advantage of its utilization.", "field": [], "task": ["Face Alignment", "Facial Landmark Detection"], "method": [], "dataset": ["300W"], "metric": ["NME"], "title": "Pose-Invariant Face Alignment with a Single CNN"} {"abstract": "Sequence labeling architectures use word embeddings for capturing similarity,\nbut suffer when handling previously unseen or rare words. We investigate\ncharacter-level extensions to such models and propose a novel architecture for\ncombining alternative word representations. By using an attention mechanism,\nthe model is able to dynamically decide how much information to use from a\nword- or character-level component. We evaluated different architectures on a\nrange of sequence labeling datasets, and character-level extensions were found\nto improve performance on every benchmark. In addition, the proposed\nattention-based architecture delivered the best results even with a smaller\nnumber of trainable parameters.", "field": [], "task": ["Chunking", "Grammatical Error Detection", "Named Entity Recognition", "Part-Of-Speech Tagging", "Word Embeddings"], "method": [], "dataset": ["FCE", "Penn Treebank"], "metric": ["F0.5", "Accuracy"], "title": "Attending to Characters in Neural Sequence Labeling Models"} {"abstract": "Human pose estimation is a key step to action recognition. We propose a\nmethod of estimating 3D human poses from a single image, which works in\nconjunction with an existing 2D pose/joint detector. 3D pose estimation is\nchallenging because multiple 3D poses may correspond to the same 2D pose after\nprojection due to the lack of depth information. Moreover, current 2D pose\nestimators are usually inaccurate which may cause errors in the 3D estimation.\nWe address the challenges in three ways: (i) We represent a 3D pose as a linear\ncombination of a sparse set of bases learned from 3D human skeletons. (ii) We\nenforce limb length constraints to eliminate anthropomorphically implausible\nskeletons. (iii) We estimate a 3D pose by minimizing the $L_1$-norm error\nbetween the projection of the 3D pose and the corresponding 2D detection. The\n$L_1$-norm loss term is robust to inaccurate 2D joint estimations. We use the\nalternating direction method (ADM) to solve the optimization problem\nefficiently. Our approach outperforms the state-of-the-arts on three benchmark\ndatasets.", "field": [], "task": ["3D Pose Estimation", "Action Recognition", "Pose Estimation", "Temporal Action Localization"], "method": [], "dataset": ["Human3.6M"], "metric": ["Average MPJPE (mm)"], "title": "Robust Estimation of 3D Human Poses from a Single Image"} {"abstract": "The challenge of unsupervised person re-identification (ReID) lies in learning discriminative features without true labels. This paper formulates unsupervised person ReID as a multi-label classification task to progressively seek true labels. Our method starts by assigning each person image with a single-class label, then evolves to multi-label classification by leveraging the updated ReID model for label prediction. The label prediction comprises similarity computation and cycle consistency to ensure the quality of predicted labels. 
To boost the ReID model training efficiency in multi-label classification, we further propose the memory-based multi-label classification loss (MMCL). MMCL works with memory-based non-parametric classifier and integrates multi-label classification and single-label classification in a unified framework. Our label prediction and MMCL work iteratively and substantially boost the ReID performance. Experiments on several large-scale person ReID datasets demonstrate the superiority of our method in unsupervised person ReID. Our method also allows to use labeled person images in other domains. Under this transfer learning setting, our method also achieves state-of-the-art performance.", "field": [], "task": ["Multi-Label Classification", "Person Re-Identification", "Transfer Learning", "Unsupervised Domain Adaptation", "Unsupervised Person Re-Identification"], "method": [], "dataset": ["Duke to Market", "Duke to MSMT", "Market to Duke", "Market to MSMT"], "metric": ["rank-10", "mAP", "rank-5", "rank-1"], "title": "Unsupervised Person Re-identification via Multi-label Classification"} {"abstract": "This work addresses the problem of estimating the full body 3D human pose and\nshape from a single color image. This is a task where iterative\noptimization-based solutions have typically prevailed, while Convolutional\nNetworks (ConvNets) have suffered because of the lack of training data and\ntheir low resolution 3D predictions. Our work aims to bridge this gap and\nproposes an efficient and effective direct prediction method based on ConvNets.\nCentral part to our approach is the incorporation of a parametric statistical\nbody shape model (SMPL) within our end-to-end framework. This allows us to get\nvery detailed 3D mesh results, while requiring estimation only of a small\nnumber of parameters, making it friendly for direct network prediction.\nInterestingly, we demonstrate that these parameters can be predicted reliably\nonly from 2D keypoints and masks. These are typical outputs of generic 2D human\nanalysis ConvNets, allowing us to relax the massive requirement that images\nwith 3D shape ground truth are available for training. Simultaneously, by\nmaintaining differentiability, at training time we generate the 3D mesh from\nthe estimated parameters and optimize explicitly for the surface using a 3D\nper-vertex loss. Finally, a differentiable renderer is employed to project the\n3D mesh to the image, which enables further refinement of the network, by\noptimizing for the consistency of the projection with 2D annotations (i.e., 2D\nkeypoints or masks). The proposed approach outperforms previous baselines on\nthis task and offers an attractive solution for direct prediction of 3D shape\nfrom a single color image.", "field": [], "task": [], "method": [], "dataset": ["Human3.6M"], "metric": ["Average MPJPE (mm)"], "title": "Learning to Estimate 3D Human Pose and Shape from a Single Color Image"} {"abstract": "We propose an efficient approach to exploiting motion information from consecutive frames of a video sequence to recover the 3D pose of people. Instead of computing candidate poses in individual frames and then linking them, as is often done, we regress directly from a spatio-temporal block of frames to a 3D pose in the central one. 
We will demonstrate that this approach allows us to effectively overcome ambiguities and to improve upon the state-of-the-art on challenging sequences.", "field": [], "task": ["3D Human Pose Estimation"], "method": [], "dataset": ["Human3.6M"], "metric": ["Average MPJPE (mm)"], "title": "Predicting people\u2019s 3D poses from short sequences"} {"abstract": "Existing methods in video action recognition mostly do not distinguish human\nbody from the environment and easily overfit the scenes and objects. In this\nwork, we present a conceptually simple, general and high-performance framework\nfor action recognition in trimmed videos, aiming at person-centric modeling.\nThe method, called Action Machine, takes as inputs the videos cropped by person\nbounding boxes. It extends the Inflated 3D ConvNet (I3D) by adding a branch for\nhuman pose estimation and a 2D CNN for pose-based action recognition, being\nfast to train and test. Action Machine can benefit from the multi-task training\nof action recognition and pose estimation, the fusion of predictions from RGB\nimages and poses. On NTU RGB-D, Action Machine achieves the state-of-the-art\nperformance with top-1 accuracies of 97.2% and 94.3% on cross-view and\ncross-subject respectively. Action Machine also achieves competitive\nperformance on another three smaller action recognition datasets: Northwestern\nUCLA Multiview Action3D, MSR Daily Activity3D and UTD-MHAD. Code will be made\navailable.", "field": [], "task": ["Action Recognition", "Multimodal Activity Recognition", "Pose Estimation", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D", "UTD-MHAD", "N-UCLA", "MSR Daily Activity3D dataset"], "metric": ["Accuracy (CS)", "Accuracy (CV)", "Accuracy"], "title": "Action Machine: Rethinking Action Recognition in Trimmed Videos"} {"abstract": "We consider the problem of detecting robotic grasps in an RGB-D view of a\nscene containing objects. In this work, we apply a deep learning approach to\nsolve this problem, which avoids time-consuming hand-design of features. This\npresents two main challenges. First, we need to evaluate a huge number of\ncandidate grasps. In order to make detection fast, as well as robust, we\npresent a two-step cascaded structure with two deep networks, where the top\ndetections from the first are re-evaluated by the second. The first network has\nfewer features, is faster to run, and can effectively prune out unlikely\ncandidate grasps. The second, with more features, is slower but has to run only\non the top few detections. Second, we need to handle multimodal inputs well,\nfor which we present a method to apply structured regularization on the weights\nbased on multimodal group regularization. We demonstrate that our method\noutperforms the previous state-of-the-art methods in robotic grasp detection,\nand can be used to successfully execute grasps on two different robotic\nplatforms.", "field": [], "task": ["Robotic Grasping"], "method": [], "dataset": ["Cornell Grasp Dataset"], "metric": ["5 fold cross validation"], "title": "Deep Learning for Detecting Robotic Grasps"} {"abstract": "This paper presents a mixed-integer quadratic programming formulation of an existing data-driven approach to computational elasticity. This formulation is suitable for application of a standard mixed-integer programming solver, which finds a global optimal solution. Therefore, the results obtained by the presented method can be used as benchmark instances for any other algorithm. 
Preliminary numerical experiments are performed to compare the quality of solutions obtained by the proposed method with that of a heuristic conventionally used in data-driven computational mechanics.", "field": [], "task": ["Stress-Strain Relation"], "method": [], "dataset": ["Non-Linear Elasticity Benchmark"], "metric": ["Time (ms)"], "title": "Mixed-integer programming formulation of a data-driven solver in computational elasticity"} {"abstract": "This paper attempts to address the problem of recognizing human actions while training and testing on distinct datasets, when test videos are neither labeled nor available during training. In this scenario, learning of a joint vocabulary or domain transfer techniques are not applicable. We first explore reasons for poor classifier performance when tested on novel datasets, and quantify the effect of scene backgrounds on action representations and recognition. Using only the background features and partitioning of gist feature space, we show that the background scenes in recent datasets are quite discriminative and can be used to classify an action with reasonable accuracy. We then propose a new process to obtain a measure of confidence in each pixel of the video being a foreground region, using motion, appearance, and saliency together in a 3D MRF based framework. We also propose multiple ways to exploit the foreground confidence: to improve bag-of-words vocabulary, histogram representation of a video, and a novel histogram decomposition based representation and kernel. We use these foreground confidences to recognize actions when training on one data set and testing on a different data set. We have performed extensive experiments on several datasets and improve cross-dataset recognition accuracy compared to baseline methods.", "field": [], "task": ["Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["Olympic-to-HMDBsmall", "UCF-to-Olympic", "UCF-to-HMDBsmall", "HMDBsmall-to-UCF"], "metric": ["Accuracy"], "title": "Human Action Recognition Across Datasets by Foreground-weighted Histogram Decomposition"} {"abstract": "Learning a joint language-visual embedding has a number of very appealing\nproperties and can result in a variety of practical applications, including\nnatural language image/video annotation and search. In this work, we study\nthree different joint language-visual neural network model architectures. We\nevaluate our models on the large-scale LSMDC16 movie dataset for two tasks: 1)\nstandard ranking for video annotation and retrieval; and 2) our proposed movie\nmultiple-choice test. This test facilitates automatic evaluation of\nvisual-language models for natural language video annotation based on human\nactivities. In addition to the original Audio Description (AD) captions, provided\nas part of LSMDC16, we collected and will make available a) manually generated\nre-phrasings of those captions obtained using Amazon MTurk and b) automatically\ngenerated human activity elements in \"Predicate + Object\" (PO) phrases based on\n\"Knowlywood\", an activity knowledge mining model. Our best model achieves\nRecall@10 of 19.2% on annotation and 18.9% on video retrieval tasks for a subset\nof 1000 samples. 
For the multiple-choice test, our best model achieves an accuracy of\n58.11% over the whole LSMDC16 public test-set.", "field": [], "task": ["Video Retrieval"], "method": [], "dataset": ["MSR-VTT"], "metric": ["video-to-text R@5", "text-to-video R@1", "text-to-video R@10", "text-to-video Median Rank"], "title": "Learning Language-Visual Embedding for Movie Understanding with Natural-Language"} {"abstract": "Relation Extraction is the task of identifying entity mention spans in raw text and then identifying relations between pairs of the entity mentions. Recent approaches for this span-level task have been token-level models which have inherent limitations. They cannot easily define and implement span-level features, cannot model overlapping entity mentions and have cascading errors due to the use of sequential decoding. To address these concerns, we present a model which directly models all possible spans and performs joint entity mention detection and relation extraction. We report a new state-of-the-art performance of 62.83 F1 (prev best was 60.49) on the ACE2005 dataset.", "field": [], "task": ["Relation Extraction"], "method": [], "dataset": ["ACE 2005"], "metric": ["Sentence Encoder", "NER Micro F1", "RE Micro F1"], "title": "Span-Level Model for Relation Extraction"} {"abstract": "In this paper, we introduce a novel network, called discriminative feature network (DFNet), to address the unsupervised video object segmentation task. To capture the inherent correlation among video frames, we learn discriminative features (D-features) from the input images that reveal feature distribution from a global perspective. The D-features are then used to establish correspondence with all features of the test image under a conditional random field (CRF) formulation, which is leveraged to enforce consistency between pixels. The experiments verify that DFNet outperforms state-of-the-art methods by a large margin with a mean IoU score of 83.4% and ranks first on the DAVIS-2016 leaderboard while using much fewer parameters and achieving much more efficient performance in the inference phase. We further evaluate DFNet on the FBMS dataset and the video saliency dataset ViSal, reaching a new state-of-the-art. To further demonstrate the generalizability of our framework, DFNet is also applied to the image object co-segmentation task. We perform experiments on the challenging PASCAL-VOC dataset and observe the superiority of DFNet. The thorough experiments verify that DFNet is able to capture and mine the underlying relations of images and discover the common foreground objects.", "field": [], "task": ["RGB Salient Object Detection", "Semantic Segmentation", "Unsupervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["FBMS", "DAVIS 2016"], "metric": ["F-Score", "Jaccard (Mean)"], "title": "Learning Discriminative Feature with CRF for Unsupervised Video Object Segmentation"} {"abstract": "In this work, we propose Attentive Pooling (AP), a two-way attention\nmechanism for discriminative model training. In the context of pair-wise\nranking or classification with neural networks, AP enables the pooling layer to\nbe aware of the current input pair, in a way that information from the two\ninput items can directly influence the computation of each other's\nrepresentations. Along with such representations of the paired inputs, AP\njointly learns a similarity measure over projected segments (e.g.
trigrams) of\nthe pair, and subsequently, derives the corresponding attention vector for each\ninput to guide the pooling. Our two-way attention mechanism is a general\nframework independent of the underlying representation learning, and it has\nbeen applied to both convolutional neural networks (CNNs) and recurrent neural\nnetworks (RNNs) in our studies. The empirical results, from three very\ndifferent benchmark tasks of question answering/answer selection, demonstrate\nthat our proposed models outperform a variety of strong baselines and achieve\nstate-of-the-art performance in all the benchmarks.", "field": [], "task": ["Answer Selection", "Question Answering", "Representation Learning"], "method": [], "dataset": ["YahooCQA", "SemEvalCQA", "WikiQA"], "metric": ["P@1", "MRR", "MAP"], "title": "Attentive Pooling Networks"} {"abstract": "Recent progress in semantic segmentation has been driven by improving the\nspatial resolution under Fully Convolutional Networks (FCNs). To address this\nproblem, we propose a Stacked Deconvolutional Network (SDN) for semantic\nsegmentation. In SDN, multiple shallow deconvolutional networks, which are\ncalled as SDN units, are stacked one by one to integrate contextual information\nand guarantee the fine recovery of localization information. Meanwhile,\ninter-unit and intra-unit connections are designed to assist network training\nand enhance feature fusion since the connections improve the flow of\ninformation and gradient propagation throughout the network. Besides,\nhierarchical supervision is applied during the upsampling process of each SDN\nunit, which guarantees the discrimination of feature representations and\nbenefits the network optimization. We carry out comprehensive experiments and\nachieve the new state-of-the-art results on three datasets, including PASCAL\nVOC 2012, CamVid, GATECH. In particular, our best model without CRF\npost-processing achieves an intersection-over-union score of 86.6% in the test\nset.", "field": [], "task": ["Semantic Segmentation"], "method": [], "dataset": ["PASCAL VOC 2012 test"], "metric": ["Mean IoU"], "title": "Stacked Deconvolutional Network for Semantic Segmentation"} {"abstract": "Visual dialog is a task of answering a series of inter-dependent questions\ngiven an input image, and often requires to resolve visual references among the\nquestions. This problem is different from visual question answering (VQA),\nwhich relies on spatial attention (a.k.a. visual grounding) estimated from an\nimage and question pair. We propose a novel attention mechanism that exploits\nvisual attentions in the past to resolve the current reference in the visual\ndialog scenario. The proposed model is equipped with an associative attention\nmemory storing a sequence of previous (attention, key) pairs. From this memory,\nthe model retrieves the previous attention, taking into account recency, which\nis most relevant for the current question, in order to resolve potentially\nambiguous references. The model then merges the retrieved attention with a\ntentative one to obtain the final attention for the current question;\nspecifically, we use dynamic parameter prediction to combine the two attentions\nconditioned on the question. Through extensive experiments on a new synthetic\nvisual dialog dataset, we show that our model significantly outperforms the\nstate-of-the-art (by ~16 % points) in situations, where visual reference\nresolution plays an important role. 
Moreover, the proposed model achieves\nsuperior performance (~ 2 % points improvement) on the Visual Dialog dataset,\ndespite having significantly fewer parameters than the baselines.", "field": [], "task": ["Question Answering", "Visual Dialog", "Visual Grounding", "Visual Question Answering"], "method": [], "dataset": ["VisDial v0.9 val"], "metric": ["R@10", "R@1", "Mean Rank", "R@5"], "title": "Visual Reference Resolution using Attention Memory for Visual Dialog"} {"abstract": "There have been remarkable improvements in the semantic labelling task in\nrecent years. However, the state of the art methods rely on large-scale\npixel-level annotations. This paper studies the problem of training a\npixel-wise semantic labeller network from image-level annotations of the\npresent object classes. Recently, it has been shown that high quality seeds\nindicating discriminative object regions can be obtained from image-level\nlabels. Without additional information, obtaining the full extent of the object\nis an inherently ill-posed problem due to co-occurrences. We propose using a\nsaliency model as additional information and hereby exploit prior knowledge on\nthe object extent and image statistics. We show how to combine both information\nsources in order to recover 80% of the fully supervised performance - which is\nthe new state of the art in weakly supervised training for pixel-wise semantic\nlabelling. The code is available at https://goo.gl/KygSeb.", "field": [], "task": ["Semantic Segmentation"], "method": [], "dataset": ["PASCAL VOC 2012 test", "PASCAL VOC 2012 val"], "metric": ["Mean IoU", "mIoU"], "title": "Exploiting saliency for object segmentation from image level labels"} {"abstract": "Relatively small data sets available for expression recognition research make\nthe training of deep networks for expression recognition very challenging.\nAlthough fine-tuning can partially alleviate the issue, the performance is\nstill below acceptable levels as the deep features probably contain redundant\ninformation from the pre-trained domain. In this paper, we present\nFaceNet2ExpNet, a novel idea to train an expression recognition network based\non static images. We first propose a new distribution function to model the\nhigh-level neurons of the expression network. Based on this, a two-stage\ntraining algorithm is carefully designed. In the pre-training stage, we train\nthe convolutional layers of the expression net, regularized by the face net; in\nthe refining stage, we append fully-connected layers to the pre-trained\nconvolutional layers and train the whole network jointly. Visualization shows\nthat the model trained with our method captures improved high-level expression\nsemantics. Evaluations on four public expression databases, CK+, Oulu-CASIA,\nTFD, and SFEW, demonstrate that our method achieves better results than\nthe state-of-the-art.", "field": [], "task": ["Face Recognition", "Small Data Image Classification"], "method": [], "dataset": ["CK+"], "metric": ["Accuracy (10-fold)"], "title": "FaceNet2ExpNet: Regularizing a Deep Face Recognition Net for Expression Recognition"} {"abstract": "Object detection is a challenging task in the visual understanding domain, and\neven more so if the supervision is to be weak. Recently, a few efforts to handle\nthe task without expensive human annotations have been established using promising deep\nneural networks. A new architecture of cascaded networks is proposed to learn a\nconvolutional neural network (CNN) under such conditions. 
We introduce two such\narchitectures, with either two cascade stages or three which are trained in an\nend-to-end pipeline. The first stage of both architectures extracts best\ncandidate of class specific region proposals by training a fully convolutional\nnetwork. In the case of the three stage architecture, the middle stage provides\nobject segmentation, using the output of the activation maps of first stage.\nThe final stage of both architectures is a part of a convolutional neural\nnetwork that performs multiple instance learning on proposals extracted in the\nprevious stage(s). Our experiments on the PASCAL VOC 2007, 2010, 2012 and large\nscale object datasets, ILSVRC 2013, 2014 datasets show improvements in the\nareas of weakly-supervised object detection, classification and localization.", "field": [], "task": ["Multiple Instance Learning", "Object Detection", "Semantic Segmentation", "Weakly Supervised Object Detection"], "method": [], "dataset": ["ImageNet", "PASCAL VOC 2012 test", "PASCAL VOC 2007", "COCO test-dev"], "metric": ["AP50", "MAP"], "title": "Weakly Supervised Cascaded Convolutional Networks"} {"abstract": "Recognizing objects in natural images is an intricate problem involving\nmultiple conflicting objectives. Deep convolutional neural networks, trained on\nlarge datasets, achieve convincing results and are currently the\nstate-of-the-art approach for this task. However, the long time needed to train\nsuch deep networks is a major drawback. We tackled this problem by reusing a\npreviously trained network. For this purpose, we first trained a deep\nconvolutional network on the ILSVRC2012 dataset. We then maintained the learned\nconvolution kernels and only retrained the classification part on different\ndatasets. Using this approach, we achieved an accuracy of 67.68 % on CIFAR-100,\ncompared to the previous state-of-the-art result of 65.43 %. Furthermore, our\nfindings indicate that convolutional networks are able to learn generic feature\nextractors that can be used for different tasks.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["MNIST", "CIFAR-100", "CIFAR-10"], "metric": ["Percentage error", "Percentage correct"], "title": "Deep Convolutional Neural Networks as Generic Feature Extractors"} {"abstract": "Semantic representations have long been argued as potentially useful for enforcing meaning preservation and improving generalization performance of machine translation methods. In this work, we are the first to incorporate information about predicate-argument structure of source sentences (namely, semantic-role representations) into neural machine translation. We use Graph Convolutional Networks (GCNs) to inject a semantic bias into sentence encoders and achieve improvements in BLEU scores over the linguistic-agnostic and syntax-aware versions on the English--German language pair.", "field": [], "task": ["Machine Translation"], "method": [], "dataset": ["WMT2016 English-German"], "metric": ["BLEU score"], "title": "Exploiting Semantics in Neural Machine Translation with Graph Convolutional Networks"} {"abstract": "The paper presents a novel method, Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network. Our method trains a lightweight deep network, DCE-Net, to estimate pixel-wise and high-order curves for dynamic range adjustment of a given image. 
The curve estimation is specially designed, considering pixel value range, monotonicity, and differentiability. Zero-DCE is appealing in its relaxed assumption on reference images, i.e., it does not require any paired or unpaired data during training. This is achieved through a set of carefully formulated non-reference loss functions, which implicitly measure the enhancement quality and drive the learning of the network. Our method is efficient as image enhancement can be achieved by an intuitive and simple nonlinear curve mapping. Despite its simplicity, we show that it generalizes well to diverse lighting conditions. Extensive experiments on various benchmarks demonstrate the advantages of our method over state-of-the-art methods qualitatively and quantitatively. Furthermore, the potential benefits of our Zero-DCE to face detection in the dark are discussed. Code and model will be available at https://github.com/Li-Chongyi/Zero-DCE.", "field": [], "task": ["Face Detection", "Image Enhancement", "Low-Light Image Enhancement"], "method": [], "dataset": ["LIME", "MEF", "NPE", "DICM", "VV"], "metric": ["User Study Score"], "title": "Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement"} {"abstract": "This paper addresses the problem of Face Alignment for a single image. We show how an ensemble of regression trees can be used to estimate the face's landmark positions directly from a sparse subset of pixel intensities, achieving super-realtime performance with high quality predictions. We present a general framework based on gradient boosting for learning an ensemble of regression trees that optimizes the sum of square error loss and naturally handles missing or partially labelled data. We show how using appropriate priors exploiting the structure of image data helps with efficient feature selection. Different regularization strategies and its importance to combat overfitting are also investigated. In addition, we analyse the effect of the quantity of training data on the accuracy of the predictions and explore the effect of data augmentation using synthesized data.", "field": [], "task": ["Data Augmentation", "Face Alignment", "Feature Selection", "Regression"], "method": [], "dataset": ["AFLW2000"], "metric": ["Error rate"], "title": "One Millisecond Face Alignment with an Ensemble of Regression Trees"} {"abstract": "Music source separation is the task of decomposing music into its constitutive components, e.g., yielding separated stems for the vocals, bass, and drums. Such a separation has many applications ranging from rearranging/repurposing the stems (remixing, repanning, upmixing) to full extraction (karaoke, sample creation, audio restoration). Music separation has a long history of scientific activity as it is known to be a very challenging problem. In recent years, deep learning-based systems - for the first time - yielded high-quality separations that also lead to increased commercial interest. However, until now, no open-source implementation that achieves state-of-the-art results is available. Open-Unmix closes this gap by providing a reference implementation based on deep neural networks. It serves two main purposes. Firstly, to accelerate academic research as Open-Unmix provides implementations for the most popular deep learning frameworks, giving researchers a flexible way to reproduce results. Secondly, we provide a pre-trained model for end users and even artists to try and use source separation. 
Furthermore, we designed Open-Unmix to be one core component in an open ecosystem on music separation, where we already provide open datasets, software utilities, and open evaluation to foster reproducible research as the basis of future development.", "field": [], "task": ["Music Source Separation"], "method": [], "dataset": ["MUSDB18"], "metric": ["SDR (vocals)", "SDR (other)", "SDR (drums)", "SDR (bass)"], "title": "Open-Unmix - A Reference Implementation for Music Source Separation"} {"abstract": "In the deep learning (DL) era, parsing models are extremely simplified with little hurt on performance, thanks to the remarkable capability of multi-layer BiLSTMs in context representation. As the most popular graph-based dependency parser due to its high efficiency and performance, the biaffine parser directly scores single dependencies under the arc-factorization assumption, and adopts a very simple local token-wise cross-entropy training loss. This paper for the first time presents a second-order TreeCRF extension to the biaffine parser. For a long time, the complexity and inefficiency of the inside-outside algorithm hinder the popularity of TreeCRF. To address this issue, we propose an effective way to batchify the inside and Viterbi algorithms for direct large matrix operation on GPUs, and to avoid the complex outside algorithm via efficient back-propagation. Experiments and analysis on 27 datasets from 13 languages clearly show that techniques developed before the DL era, such as structural learning (global TreeCRF loss) and high-order modeling are still useful, and can further boost parsing performance over the state-of-the-art biaffine parser, especially for partially annotated training data. We release our code at https://github.com/yzhangcs/crfpar.", "field": [], "task": ["Chinese Dependency Parsing", "Dependency Parsing"], "method": [], "dataset": ["CoNLL-2009", "Penn Treebank", "NLPCC-2019"], "metric": ["UAS", "LAS"], "title": "Efficient Second-Order TreeCRF for Neural Dependency Parsing"} {"abstract": "Most existing person re-identification (re-id) methods require supervised\nmodel learning from a separate large set of pairwise labelled training data for\nevery single camera pair. This significantly limits their scalability and\nusability in real-world large scale deployments with the need for performing\nre-id across many camera views. To address this scalability problem, we develop\na novel deep learning method for transferring the labelled information of an\nexisting dataset to a new unseen (unlabelled) target domain for person re-id\nwithout any supervised learning in the target domain. Specifically, we\nintroduce a Transferable Joint Attribute-Identity Deep Learning (TJ-AIDL) for\nsimultaneously learning an attribute-semantic and identity-discriminative\nfeature representation space transferrable to any new (unseen) target domain\nfor re-id tasks without the need for collecting new labelled training data from\nthe target domain (i.e. unsupervised learning in the target domain). 
Extensive\ncomparative evaluations validate the superiority of this new TJ-AIDL model for\nunsupervised person re-id over a wide range of state-of-the-art methods on four\nchallenging benchmarks including VIPeR, PRID, Market-1501, and DukeMTMC-ReID.", "field": [], "task": ["Person Re-Identification", "Unsupervised Domain Adaptation", "Unsupervised Person Re-Identification"], "method": [], "dataset": ["Duke to Market", "Market to Duke"], "metric": ["rank-10", "mAP", "rank-5", "rank-1"], "title": "Transferable Joint Attribute-Identity Deep Learning for Unsupervised Person Re-Identification"} {"abstract": "We propose a sequence labeling framework with a secondary training objective,\nlearning to predict surrounding words for every word in the dataset. This\nlanguage modeling objective incentivises the system to learn general-purpose\npatterns of semantic and syntactic composition, which are also useful for\nimproving accuracy on different sequence labeling tasks. The architecture was\nevaluated on a range of datasets, covering the tasks of error detection in\nlearner texts, named entity recognition, chunking and POS-tagging. The novel\nlanguage modeling objective provided consistent performance improvements on\nevery benchmark, without requiring any additional annotated or unannotated\ndata.", "field": [], "task": ["Chunking", "Grammatical Error Detection", "Language Modelling", "Named Entity Recognition", "Part-Of-Speech Tagging"], "method": [], "dataset": ["CoNLL-2014 A2", "FCE", "CoNLL-2014 A1", "Penn Treebank"], "metric": ["F0.5", "Accuracy"], "title": "Semi-supervised Multitask Learning for Sequence Labeling"} {"abstract": "Weakly-supervised object detection attempts to limit the amount of supervision by dispensing the need for bounding boxes, but still assumes image-level labels on the entire training set. In this work, we study the problem of training an object detector from one or few images with image-level labels and a larger set of completely unlabeled images. This is an extreme case of semi-supervised learning where the labeled data are not enough to bootstrap the learning of a detector. Our solution is to train a weakly-supervised student detector model from image-level pseudo-labels generated on the unlabeled set by a teacher classifier model, bootstrapped by region-level similarities to labeled images. Building upon the recent representative weakly-supervised pipeline PCL, our method can use more unlabeled images to achieve performance competitive or superior to many recent weakly-supervised detection solutions.", "field": [], "task": ["Object Detection", "Weakly Supervised Object Detection"], "method": [], "dataset": ["PASCAL VOC 2007", "PASCAL VOC 2012 test"], "metric": ["MAP"], "title": "Training Object Detectors from Few Weakly-Labeled and Many Unlabeled Images"} {"abstract": "Measles is extremely contagious and is one of the leading causes of vaccine-preventable illness and death in developing countries, claiming more than 100,000 lives each year. Measles was declared eliminated in the US in 2000 due to decades of successful vaccination for the measles. As a result, an increasing number of US healthcare professionals and the public have never seen the disease. Unfortunately, the Measles resurged in the US in 2019 with 1,282 confirmed cases. 
To assist in diagnosing measles, we collected more than 1300 images of a variety of skin conditions, with which we employed residual deep convolutional neural network to distinguish measles rash from other skin conditions, in an aim to create a phone application in the future. On our image dataset, our model reaches a classification accuracy of 95.2%, sensitivity of 81.7%, and specificity of 97.1%, indicating the model is effective in facilitating an accurate detection of measles to help contain measles outbreaks.", "field": [], "task": ["Unsupervised Pre-training"], "method": [], "dataset": ["Measles"], "metric": ["Accuracy (%)"], "title": "Measles Rash Identification Using Residual Deep Convolutional Neural Network"} {"abstract": "To fluently collaborate with people, robots need the ability to recognize human activities accurately. Although modern robots are equipped with various sensors, robust human activity recognition (HAR) still remains a challenging task for robots due to difficulties related to multimodal data fusion. To address these challenges, in this work, we introduce a deep neural network-based multimodal HAR algorithm, HAMLET. HAMLET incorporates a hierarchical architecture, where the lower layer encodes spatio-temporal features from unimodal data by adopting a multi-head self-attention mechanism. We develop a novel multimodal attention mechanism for disentangling and fusing the salient unimodal features to compute the multimodal features in the upper layer. Finally, multimodal features are used in a fully connect neural-network to recognize human activities. We evaluated our algorithm by comparing its performance to several state-of-the-art activity recognition algorithms on three human activity datasets. The results suggest that HAMLET outperformed all other evaluated baselines across all datasets and metrics tested, with the highest top-1 accuracy of 95.12% and 97.45% on the UTD-MHAD [1] and the UT-Kinect [2] datasets respectively, and F1-score of 81.52% on the UCSD-MIT [3] dataset. We further visualize the unimodal and multimodal attention maps, which provide us with a tool to interpret the impact of attention mechanisms concerning HAR.", "field": [], "task": ["Activity Recognition"], "method": [], "dataset": ["UT-Kinect", "UTD-MHAD", "UCSD-MIT Human Motion"], "metric": ["Accuracy (CS)", "F1-score"], "title": "HAMLET: A Hierarchical Multimodal Attention-based Human Activity Recognition Algorithm"} {"abstract": "Stimulus selectivity of sensory neurons is often characterized by estimating their receptive field properties such as orientation selectivity. Receptive fields are usually derived from the mean (or covariance) of the spike-triggered stimulus ensemble. This approach treats each spike as an independent message but does not take into account that information might be conveyed through patterns of neural activity that are distributed across space or time. Can we find a concise description for the processing of a whole population of neurons analogous to the receptive field for single neurons? Here, we present a generalization of the linear receptive field which is not bound to be triggered on individual spikes but can be meaningfully linked to distributed response patterns. More precisely, we seek to identify those stimulus features and the corresponding patterns of neural activity that are most reliably coupled. We use an extension of reverse-correlation methods based on canonical correlation analysis. 
The resulting population receptive fields span the subspace of stimuli that is most informative about the population response. We evaluate our approach using both neuronal models and multi-electrode recordings from rabbit retinal ganglion cells. We show how the model can be extended to capture nonlinear stimulus-response relationships using kernel canonical correlation analysis, which makes it possible to test different coding mechanisms. Our technique can also be used to calculate receptive fields from multi-dimensional neural measurements such as those obtained from dynamic imaging methods.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["STL-10"], "metric": ["Percentage correct"], "title": "Receptive Fields without Spike-Triggering"} {"abstract": "Conventional word sense induction (WSI) methods usually represent each\ninstance with discrete linguistic features or cooccurrence features, and train\na model for each polysemous word individually. In this work, we propose to\nlearn sense embeddings for the WSI task. In the training stage, our method\ninduces several sense centroids (embedding) for each polysemous word. In the\ntesting stage, our method represents each instance as a contextual vector, and\ninduces its sense by finding the nearest sense centroid in the embedding space.\nThe advantages of our method are (1) distributed sense vectors are taken as the\nknowledge representations which are trained discriminatively, and usually have\nbetter performance than traditional count-based distributional models, and (2)\na general model for the whole vocabulary is jointly trained to induce sense\ncentroids under the multitask learning framework. Evaluated on SemEval-2010 WSI\ndataset, our method outperforms all participants and most of the recent\nstate-of-the-art methods. We further verify the two advantages by comparing\nwith carefully designed baselines.", "field": [], "task": ["Word Sense Induction"], "method": [], "dataset": ["SemEval 2010 WSI"], "metric": ["V-Measure", "F-Score", "AVG"], "title": "Sense Embedding Learning for Word Sense Induction"} {"abstract": "We present a simple, but surprisingly effective, method of self-training a two-phase parser-reranker system using readily available unlabeled data. We show that this type of bootstrapping is possible for parsing when the bootstrapped parses are processed by a discriminative reranker. Our improved model achieves an f-score of 92.1%, an absolute 1.1% improvement (12% error reduction) over the previous best result for Wall Street Journal parsing. Finally, we provide some analysis to better understand the phenomenon.", "field": [], "task": ["Constituency Parsing"], "method": [], "dataset": ["Penn Treebank"], "metric": ["F1 score"], "title": "Effective Self-Training for Parsing"} {"abstract": "We present a general approach to video understanding, inspired by semantic\ntransfer techniques that have been successfully used for 2D image analysis. Our\nmethod considers a video to be a 1D sequence of clips, each one associated with\nits own semantics. The nature of these semantics -- natural language captions\nor other labels -- depends on the task at hand. A test video is processed by\nforming correspondences between its clips and the clips of reference videos\nwith known semantics, following which, reference semantics can be transferred\nto the test video. 
We describe two matching methods, both designed to ensure\nthat (a) reference clips appear similar to test clips and (b), taken together,\nthe semantics of the selected reference clips is consistent and maintains\ntemporal coherence. We use our method for video captioning on the LSMDC'16\nbenchmark, video summarization on the SumMe and TVSum benchmarks, Temporal\nAction Detection on the Thumos2014 benchmark, and sound prediction on the\nGreatest Hits benchmark. Our method not only surpasses the state of the art, in\nfour out of five benchmarks, but importantly, it is the only single method we\nknow of that was successfully applied to such a diverse range of tasks.", "field": [], "task": ["Action Detection", "Video Captioning", "Video Summarization", "Video Understanding"], "method": [], "dataset": ["MSR-VTT"], "metric": ["video-to-text R@5", "text-to-video R@1", "text-to-video R@10", "text-to-video Median Rank"], "title": "Temporal Tessellation: A Unified Approach for Video Analysis"} {"abstract": "We present a CNN-based technique to estimate high-dynamic range outdoor\nillumination from a single low dynamic range image. To train the CNN, we\nleverage a large dataset of outdoor panoramas. We fit a low-dimensional\nphysically-based outdoor illumination model to the skies in these panoramas\ngiving us a compact set of parameters (including sun position, atmospheric\nconditions, and camera parameters). We extract limited field-of-view images\nfrom the panoramas, and train a CNN with this large set of input image--output\nlighting parameter pairs. Given a test image, this network can be used to infer\nillumination parameters that can, in turn, be used to reconstruct an outdoor\nillumination environment map. We demonstrate that our approach allows the\nrecovery of plausible illumination conditions and enables photorealistic\nvirtual object insertion from a single image. An extensive evaluation on both\nthe panorama dataset and captured HDR environment maps shows that our technique\nsignificantly outperforms previous solutions to this problem.", "field": [], "task": ["Outdoor Light Source Estimation"], "method": [], "dataset": ["SUN360"], "metric": ["Median Relighting Error"], "title": "Deep Outdoor Illumination Estimation"} {"abstract": "In this paper, we address the problem of unsupervised video summarization\nthat automatically extracts key-shots from an input video. Specifically, we\ntackle two critical issues based on our empirical observations: (i) Ineffective\nfeature learning due to flat distributions of output importance scores for each\nframe, and (ii) training difficulty when dealing with long-length video inputs.\nTo alleviate the first problem, we propose a simple yet effective\nregularization loss term called variance loss. The proposed variance loss\nallows a network to predict output scores for each frame with high discrepancy\nwhich enables effective feature learning and significantly improves model\nperformance. For the second problem, we design a novel two-stream network named\nChunk and Stride Network (CSNet) that utilizes local (chunk) and global\n(stride) temporal view on the video features. Our CSNet gives better\nsummarization results for long-length videos compared to the existing methods.\nIn addition, we introduce an attention mechanism to handle the dynamic\ninformation in videos. 
We demonstrate the effectiveness of the proposed methods\nby conducting extensive ablation studies and show that our final model achieves\nnew state-of-the-art results on two benchmark datasets.", "field": [], "task": ["Supervised Video Summarization", "Unsupervised Video Summarization", "Video Summarization"], "method": [], "dataset": ["TvSum", "SumMe"], "metric": ["F1-score", "F1-score (Canonical)", "F1-score (Augmented)"], "title": "Discriminative Feature Learning for Unsupervised Video Summarization"} {"abstract": "Monocular depth estimators can be trained with various forms of self-supervision from binocular-stereo data to circumvent the need for high-quality laser scans or other ground-truth data. The disadvantage, however, is that the photometric reprojection losses used with self-supervised learning typically have multiple local minima. These plausible-looking alternatives to ground truth can restrict what a regression network learns, causing it to predict depth maps of limited quality. As one prominent example, depth discontinuities around thin structures are often incorrectly estimated by current state-of-the-art methods. Here, we study the problem of ambiguous reprojections in depth prediction from stereo-based self-supervision, and introduce Depth Hints to alleviate their effects. Depth Hints are complementary depth suggestions obtained from simple off-the-shelf stereo algorithms. These hints enhance an existing photometric loss function, and are used to guide a network to learn better weights. They require no additional data, and are assumed to be right only sometimes. We show that using our Depth Hints gives a substantial boost when training several leading self-supervised-from-stereo models, not just our own. Further, combined with other good practices, we produce state-of-the-art depth predictions on the KITTI benchmark.", "field": [], "task": ["Depth Estimation", "Monocular Depth Estimation", "Regression", "Self-Supervised Learning"], "method": [], "dataset": ["KITTI Eigen split"], "metric": ["absolute relative error"], "title": "Self-Supervised Monocular Depth Hints"} {"abstract": "The divergence between labeled training data and unlabeled testing data is a significant challenge for recent deep learning models. Unsupervised domain adaptation (UDA) attempts to solve such a problem. Recent works show that self-training is a powerful approach to UDA. However, existing methods have difficulty in balancing scalability and performance. In this paper, we propose an instance adaptive self-training framework for UDA on the task of semantic segmentation. To effectively improve the quality of pseudo-labels, we develop a novel pseudo-label generation strategy with an instance adaptive selector. Besides, we propose the region-guided regularization to smooth the pseudo-label region and sharpen the non-pseudo-label region. Our method is so concise and efficient that it is easy to be generalized to other unsupervised domain adaptation methods. 
Experiments on 'GTA5 to Cityscapes' and 'SYNTHIA to Cityscapes' demonstrate the superior performance of our approach compared with the state-of-the-art methods.", "field": [], "task": ["Domain Adaptation", "Image-to-Image Translation", "Semantic Segmentation", "Synthetic-to-Real Translation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["GTAV-to-Cityscapes Labels", "SYNTHIA-to-Cityscapes"], "metric": ["mIoU (13 classes)", "mIoU"], "title": "Instance Adaptive Self-Training for Unsupervised Domain Adaptation"} {"abstract": "Most existing person re-identification algorithms either extract robust\nvisual features or learn discriminative metrics for person images. However, the\nunderlying manifold which those images reside on is rarely investigated. That\nraises a problem that the learned metric is not smooth with respect to the\nlocal geometry structure of the data manifold.\n In this paper, we study person re-identification with manifold-based affinity\nlearning, which did not receive enough attention from this area. An\nunconventional manifold-preserving algorithm is proposed, which can 1) make the\nbest use of supervision from training data, whose label information is given as\npairwise constraints; 2) scale up to large repositories with low on-line time\ncomplexity; and 3) be plunged into most existing algorithms, serving as a\ngeneric postprocessing procedure to further boost the identification\naccuracies. Extensive experimental results on five popular person\nre-identification benchmarks consistently demonstrate the effectiveness of our\nmethod. Especially, on the largest CUHK03 and Market-1501, our method\noutperforms the state-of-the-art alternatives by a large margin with high\nefficiency, which is more appropriate for practical applications.", "field": [], "task": ["Person Re-Identification"], "method": [], "dataset": ["Market-1501"], "metric": ["Rank-1", "MAP"], "title": "Scalable Person Re-identification on Supervised Smoothed Manifold"} {"abstract": "Existing person re-identification (re-id) methods rely mostly on either\nlocalised or global feature representation alone. This ignores their joint\nbenefit and mutual complementary effects. In this work, we show the advantages\nof jointly learning local and global features in a Convolutional Neural Network\n(CNN) by aiming to discover correlated local and global features in different\ncontext. Specifically, we formulate a method for joint learning of local and\nglobal feature selection losses designed to optimise person re-id when using\nonly generic matching metrics such as the L2 distance. We design a novel CNN\narchitecture for Jointly Learning Multi-Loss (JLML) of local and global\ndiscriminative feature optimisation subject concurrently to the same re-id\nlabelled information. Extensive comparative evaluations demonstrate the\nadvantages of this new JLML model for person re-id over a wide range of\nstate-of-the-art re-id methods on five benchmarks (VIPeR, GRID, CUHK01, CUHK03,\nMarket-1501).", "field": [], "task": ["Feature Selection", "Person Re-Identification"], "method": [], "dataset": ["Market-1501"], "metric": ["Rank-1", "MAP"], "title": "Person Re-Identification by Deep Joint Learning of Multi-Loss Classification"} {"abstract": "This paper presents the first model for time normalization trained on the SCATE corpus. In the SCATE schema, time expressions are annotated as a semantic composition of time entities. 
This novel schema favors machine learning approaches, as it can be viewed as a semantic parsing task. In this work, we propose a character level multi-output neural network that outperforms previous state-of-the-art built on the TimeML schema. To compare predictions of systems that follow both SCATE and TimeML, we present a new scoring metric for time intervals. We also apply this new metric to carry out a comparative analysis of the annotations of both schemes in the same corpus.", "field": [], "task": ["Semantic Composition", "Semantic Parsing", "Timex normalization"], "method": [], "dataset": ["PNT"], "metric": ["F1-Score"], "title": "From Characters to Time Intervals: New Paradigms for Evaluation and Neural Parsing of Time Normalizations"} {"abstract": "In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4,000 identities.\r\n\r\nThe learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35% on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27%, closely approaching human-level performance.", "field": [], "task": ["3D FACE MODELING", "Face Recognition", "Face Verification"], "method": [], "dataset": ["LFW"], "metric": ["1-of-100 Accuracy"], "title": "DeepFace: Closing the Gap to Human-Level Performance in Face Verification"} {"abstract": "We propose an approach to instance-level image segmentation that is built on\ntop of category-level segmentation. Specifically, for each pixel in a semantic\ncategory mask, its corresponding instance bounding box is predicted using a\ndeep fully convolutional regression network. Thus it follows a different\npipeline to the popular detect-then-segment approaches that first predict\ninstances' bounding boxes, which are the current state-of-the-art in instance\nsegmentation. We show that, by leveraging the strength of our state-of-the-art\nsemantic segmentation models, the proposed method can achieve comparable or\neven better results to detect-then-segment approaches. We make the following\ncontributions. (i) First, we propose a simple yet effective approach to\nsemantic instance segmentation. (ii) Second, we propose an online bootstrapping\nmethod during training, which is critically important for achieving good\nperformance for both semantic category segmentation and instance-level\nsegmentation. (iii) As the performance of semantic category segmentation has a\nsignificant impact on the instance-level segmentation, which is the second step\nof our approach, we train fully convolutional residual networks to achieve the\nbest semantic category segmentation accuracy. On the PASCAL VOC 2012 dataset,\nwe obtain the currently best mean intersection-over-union score of 79.1%. 
(iv)\nWe also achieve state-of-the-art results for instance-level segmentation.", "field": [], "task": ["Instance Segmentation", "Regression", "Semantic Segmentation"], "method": [], "dataset": ["PASCAL Context"], "metric": ["mIoU"], "title": "Bridging Category-level and Instance-level Semantic Image Segmentation"} {"abstract": "Research in face recognition has seen tremendous growth over the past couple\nof decades. Beginning from algorithms capable of performing recognition in\nconstrained environments, the current face recognition systems achieve very\nhigh accuracies on large-scale unconstrained face datasets. While upcoming\nalgorithms continue to achieve improved performance, a majority of the face\nrecognition systems are susceptible to failure under disguise variations, one\nof the most challenging covariate of face recognition. Most of the existing\ndisguise datasets contain images with limited variations, often captured in\ncontrolled settings. This does not simulate a real world scenario, where both\nintentional and unintentional unconstrained disguises are encountered by a face\nrecognition system. In this paper, a novel Disguised Faces in the Wild (DFW)\ndataset is proposed which contains over 11000 images of 1000 identities with\ndifferent types of disguise accessories. The dataset is collected from the\nInternet, resulting in unconstrained face images similar to real world\nsettings. This is the first-of-a-kind dataset with the availability of\nimpersonator and genuine obfuscated face images for each subject. The proposed\ndataset has been analyzed in terms of three levels of difficulty: (i) easy,\n(ii) medium, and (iii) hard in order to showcase the challenging nature of the\nproblem. It is our view that the research community can greatly benefit from\nthe DFW dataset in terms of developing algorithms robust to such adversaries.\nThe proposed dataset was released as part of the First International Workshop\nand Competition on Disguised Faces in the Wild at CVPR, 2018. This paper\npresents the DFW dataset in detail, including the evaluation protocols,\nbaseline results, performance analysis of the submissions received as part of\nthe competition, and three levels of difficulties of the DFW challenge dataset.", "field": [], "task": ["Disguised Face Verification", "Face Recognition"], "method": [], "dataset": ["Disguised Faces in the Wild"], "metric": ["GAR @0.1% FAR", "GAR @1% FAR"], "title": "Recognizing Disguised Faces in the Wild"} {"abstract": "Image geolocalization, inferring the geographic location of an image, is a\nchallenging computer vision problem with many potential applications. The\nrecent state-of-the-art approach to this problem is a deep image classification\napproach in which the world is spatially divided into cells and a deep network\nis trained to predict the correct cell for a given image. We propose to combine\nthis approach with the original Im2GPS approach in which a query image is\nmatched against a database of geotagged images and the location is inferred\nfrom the retrieved set. We estimate the geographic location of a query image by\napplying kernel density estimation to the locations of its nearest neighbors in\nthe reference database. Interestingly, we find that the best features for our\nretrieval task are derived from networks trained with classification loss even\nthough we do not use a classification approach at test time. 
Training with\nclassification loss outperforms several deep feature learning methods (e.g.\nSiamese networks with contrastive or triplet loss) more typical for retrieval\napplications. Our simple approach achieves state-of-the-art geolocalization\naccuracy while also requiring significantly less training data.", "field": [], "task": ["Density Estimation", "Image Classification", "Photo geolocation estimation"], "method": [], "dataset": ["Im2GPS3k", "Im2GPS"], "metric": ["City level (25 km)", "Continent level (2500 km)", "Reference images", "Training images", "Street level (1 km)", "Country level (750 km)", "Region level (200 km)"], "title": "Revisiting IM2GPS in the Deep Learning Era"} {"abstract": "Lane detection is an important yet challenging task in autonomous driving,\nwhich is affected by many factors, e.g., light conditions, occlusions caused by\nother vehicles, irrelevant markings on the road and the inherent long and thin\nproperty of lanes. Conventional methods typically treat lane detection as a\nsemantic segmentation task, which assigns a class label to each pixel of the\nimage. This formulation heavily depends on the assumption that the number of\nlanes is pre-defined and fixed and no lane changing occurs, which does not\nalways hold. To make the lane detection model applicable to an arbitrary number\nof lanes and lane changing scenarios, we adopt an instance segmentation\napproach, which first differentiates lanes and background and then classifies\neach lane pixel into each lane instance. Besides, a multi-task learning\nparadigm is utilized to better exploit the structural information and the\nfeature pyramid architecture is used to detect extremely thin lanes. Three\npopular lane detection benchmarks, i.e., TuSimple, CULane and BDD100K, are used\nto validate the effectiveness of our proposed algorithm.", "field": [], "task": ["Autonomous Driving", "Instance Segmentation", "Lane Detection", "Multi-Task Learning", "Semantic Segmentation"], "method": [], "dataset": ["TuSimple", "CULane"], "metric": ["F1 score", "Accuracy"], "title": "Agnostic Lane Detection"} {"abstract": "We present Habitat, a platform for research in embodied artificial intelligence (AI). Habitat enables training embodied agents (virtual robots) in highly efficient photorealistic 3D simulation. Specifically, Habitat consists of: (i) Habitat-Sim: a flexible, high-performance 3D simulator with configurable agents, sensors, and generic 3D dataset handling. Habitat-Sim is fast -- when rendering a scene from Matterport3D, it achieves several thousand frames per second (fps) running single-threaded, and can reach over 10,000 fps multi-process on a single GPU. (ii) Habitat-API: a modular high-level library for end-to-end development of embodied AI algorithms -- defining tasks (e.g., navigation, instruction following, question answering), configuring, training, and benchmarking embodied agents. These large-scale engineering contributions enable us to answer scientific questions requiring experiments that were till now impracticable or 'merely' impractical. 
Specifically, in the context of point-goal navigation: (1) we revisit the comparison between learning and SLAM approaches from two recent works and find evidence for the opposite conclusion -- that learning outperforms SLAM if scaled to an order of magnitude more experience than previous investigations, and (2) we conduct the first cross-dataset generalization experiments {train, test} x {Matterport3D, Gibson} for multiple sensors {blind, RGB, RGBD, D} and find that only agents with depth (D) sensors generalize across datasets. We hope that our open-source platform and these findings will advance research in embodied AI.", "field": [], "task": ["PointGoal Navigation", "Question Answering", "Robot Navigation"], "method": [], "dataset": ["Gibson PointGoal Navigation"], "metric": ["spl"], "title": "Habitat: A Platform for Embodied AI Research"} {"abstract": "End-to-end neural models have made significant progress in question answering, however recent studies show that these models implicitly assume that the answer and evidence appear close together in a single document. In this work, we propose the Coarse-grain Fine-grain Coattention Network (CFC), a new question answering model that combines information from evidence across multiple documents. The CFC consists of a coarse-grain module that interprets documents with respect to the query then finds a relevant answer, and a fine-grain module which scores each candidate answer by comparing its occurrences across all of the documents with the query. We design these modules using hierarchies of coattention and self-attention, which learn to emphasize different parts of the input. On the Qangaroo WikiHop multi-evidence question answering task, the CFC obtains a new state-of-the-art result of 70.6% on the blind test set, outperforming the previous best by 3% accuracy despite not using pretrained contextual encoders.", "field": [], "task": ["Question Answering"], "method": [], "dataset": ["WikiHop"], "metric": ["Test"], "title": "Coarse-grain Fine-grain Coattention Network for Multi-evidence Question Answering"} {"abstract": "Recent work has proposed several generative neural models for constituency\nparsing that achieve state-of-the-art results. Since direct search in these\ngenerative models is difficult, they have primarily been used to rescore\ncandidate outputs from base parsers in which decoding is more straightforward.\nWe first present an algorithm for direct search in these generative models. We\nthen demonstrate that the rescoring results are at least partly due to implicit\nmodel combination rather than reranking effects. Finally, we show that explicit\nmodel combination can improve performance even further, resulting in new\nstate-of-the-art numbers on the PTB of 94.25 F1 when training only on gold data\nand 94.66 F1 when using external data.", "field": [], "task": ["Constituency Parsing"], "method": [], "dataset": ["Penn Treebank"], "metric": ["F1 score"], "title": "Improving Neural Parsing by Disentangling Model Combination and Reranking Effects"} {"abstract": "Estimating the 6D pose of known objects is important for robots to interact\nwith the real world. The problem is challenging due to the variety of objects\nas well as the complexity of a scene caused by clutter and occlusions between\nobjects. In this work, we introduce PoseCNN, a new Convolutional Neural Network\nfor 6D object pose estimation. PoseCNN estimates the 3D translation of an\nobject by localizing its center in the image and predicting its distance from\nthe camera. 
The 3D rotation of the object is estimated by regressing to a\nquaternion representation. We also introduce a novel loss function that enables\nPoseCNN to handle symmetric objects. In addition, we contribute a large scale\nvideo dataset for 6D object pose estimation named the YCB-Video dataset. Our\ndataset provides accurate 6D poses of 21 objects from the YCB dataset observed\nin 92 videos with 133,827 frames. We conduct extensive experiments on our\nYCB-Video dataset and the OccludedLINEMOD dataset to show that PoseCNN is\nhighly robust to occlusions, can handle symmetric objects, and provide accurate\npose estimation using only color images as input. When using depth data to\nfurther refine the poses, our approach achieves state-of-the-art results on the\nchallenging OccludedLINEMOD dataset. Our code and dataset are available at\nhttps://rse-lab.cs.washington.edu/projects/posecnn/.", "field": [], "task": ["6D Pose Estimation", "6D Pose Estimation using RGB", "6D Pose Estimation using RGBD", "Pose Estimation"], "method": [], "dataset": ["YCB-Video"], "metric": ["Mean ADD", "ADDS AUC", "Accuracy (ADD)", "Mean ADD-S"], "title": "PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes"} {"abstract": "Convolutional Neural Networks (ConvNets) have recently shown promising\nperformance in many computer vision tasks, especially image-based recognition.\nHow to effectively apply ConvNets to sequence-based data is still an open\nproblem. This paper proposes an effective yet simple method to represent\nspatio-temporal information carried in $3D$ skeleton sequences into three $2D$\nimages by encoding the joint trajectories and their dynamics into color\ndistribution in the images, referred to as Joint Trajectory Maps (JTM), and\nadopts ConvNets to learn the discriminative features for human action\nrecognition. Such an image-based representation enables us to fine-tune\nexisting ConvNets models for the classification of skeleton sequences without\ntraining the networks afresh. The three JTMs are generated in three orthogonal\nplanes and provide complimentary information to each other. The final\nrecognition is further improved through multiply score fusion of the three\nJTMs. The proposed method was evaluated on four public benchmark datasets, the\nlarge NTU RGB+D Dataset, MSRC-12 Kinect Gesture Dataset (MSRC-12), G3D Dataset\nand UTD Multimodal Human Action Dataset (UTD-MHAD) and achieved the\nstate-of-the-art results.", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["Gaming 3D (G3D)"], "metric": ["Accuracy"], "title": "Action Recognition Based on Joint Trajectory Maps with Convolutional Neural Networks"} {"abstract": "Training robust deep video representations has proven to be much more\nchallenging than learning deep image representations. This is in part due to\nthe enormous size of raw video streams and the high temporal redundancy; the\ntrue and interesting signal is often drowned in too much irrelevant data.\nMotivated by that the superfluous information can be reduced by up to two\norders of magnitude by video compression (using H.264, HEVC, etc.), we propose\nto train a deep network directly on the compressed video.\n This representation has a higher information density, and we found the\ntraining to be easier. In addition, the signals in a compressed video provide\nfree, albeit noisy, motion information. We propose novel techniques to use them\neffectively. 
Our approach is about 4.6 times faster than Res3D and 2.7 times\nfaster than ResNet-152. On the task of action recognition, our approach\noutperforms all the other methods on the UCF-101, HMDB-51, and Charades\ndataset.", "field": [], "task": ["Action Classification", "Action Recognition", "Temporal Action Localization", "Video Compression"], "method": [], "dataset": ["Charades"], "metric": ["MAP"], "title": "Compressed Video Action Recognition"} {"abstract": "We adapt the greedy Stack-LSTM dependency parser of Dyer et al. (2015) to\nsupport a training-with-exploration procedure using dynamic oracles (Goldberg\nand Nivre, 2013) instead of cross-entropy minimization. This form of training,\nwhich accounts for model predictions at training time rather than assuming an\nerror-free action history, improves parsing accuracies for both English and\nChinese, obtaining very strong results for both languages. We discuss some\nmodifications needed in order to get training with exploration to work well for\na probabilistic neural-network.", "field": [], "task": ["Dependency Parsing"], "method": [], "dataset": ["Penn Treebank"], "metric": ["UAS", "POS", "LAS"], "title": "Training with Exploration Improves a Greedy Stack-LSTM Parser"} {"abstract": "In this paper, we introduce a novel end-end framework for multi-oriented\nscene text detection from an instance-aware semantic segmentation perspective.\nWe present Fused Text Segmentation Networks, which combine multi-level features\nduring the feature extracting as text instance may rely on finer feature\nexpression compared to general objects. It detects and segments the text\ninstance jointly and simultaneously, leveraging merits from both semantic\nsegmentation task and region proposal based object detection task. Not\ninvolving any extra pipelines, our approach surpasses the current state of the\nart on multi-oriented scene text detection benchmarks: ICDAR2015 Incidental\nScene Text and MSRA-TD500 reaching Hmean 84.1% and 82.0% respectively. Moreover,\nwe report a baseline on total-text containing curved text which suggests\neffectiveness of the proposed approach.", "field": [], "task": ["Multi-Oriented Scene Text Detection", "Object Detection", "Region Proposal", "Scene Text", "Scene Text Detection", "Semantic Segmentation", "Text Segmentation"], "method": [], "dataset": ["MSRA-TD500", "ICDAR 2015", "Total-Text"], "metric": ["F-Measure", "Recall", "Precision", "H-Mean"], "title": "Fused Text Segmentation Networks for Multi-oriented Scene Text Detection"} {"abstract": "Automatic story comprehension is a fundamental challenge in Natural Language Understanding, and can enable computers to learn about social norms, human behavior and commonsense. In this paper, we present a story comprehension model that explores three distinct semantic aspects: (i) the sequence of events described in the story, (ii) its emotional trajectory, and (iii) its plot consistency. We judge the model's understanding of real-world stories by inquiring if, like humans, it can develop an expectation of what will happen next in a given story. Specifically, we use it to predict the correct ending of a given short story from possible alternatives. The model uses a hidden variable to weigh the semantic aspects in the context of the story. Our experiments demonstrate the potential of our approach to characterize these semantic aspects, and the strength of the hidden variable based approach. 
The model outperforms the state-of-the-art approaches and achieves best results on a publicly available dataset.", "field": [], "task": ["Common Sense Reasoning", "Natural Language Understanding", "Reading Comprehension", "Speaker Identification", "Text Generation"], "method": [], "dataset": ["Story Cloze Test"], "metric": ["Accuracy"], "title": "Story Comprehension for Predicting What Happens Next"} {"abstract": "This paper proposes three simple, compact yet effective representations of\ndepth sequences, referred to respectively as Dynamic Depth Images (DDI),\nDynamic Depth Normal Images (DDNI) and Dynamic Depth Motion Normal Images\n(DDMNI). These dynamic images are constructed from a sequence of depth maps\nusing bidirectional rank pooling to effectively capture the spatial-temporal\ninformation. Such image-based representations enable us to fine-tune the\nexisting ConvNets models trained on image data for classification of depth\nsequences, without introducing large parameters to learn. Upon the proposed\nrepresentations, a convolutional Neural networks (ConvNets) based method is\ndeveloped for gesture recognition and evaluated on the Large-scale Isolated\nGesture Recognition at the ChaLearn Looking at People (LAP) challenge 2016. The\nmethod achieved 55.57% classification accuracy and ranked 2nd place in\nthis challenge but was very close to the best performance even though we only\nused depth data.", "field": [], "task": ["Gesture Recognition"], "method": [], "dataset": ["ChaLearn val"], "metric": ["Accuracy"], "title": "Large-scale Isolated Gesture Recognition Using Convolutional Neural Networks"} {"abstract": "With the representation effectiveness, skeleton-based human action recognition has received considerable research attention, and has a wide range of real applications. In this area, many existing methods typically rely on fixed physical-connectivity skeleton structure for recognition, which is incapable of well capturing the intrinsic high-order correlations among skeleton joints. In this paper, we propose a novel spatio-temporal graph routing (STGR) scheme for skeleton-based action recognition, which adaptively learns the intrinsic high-order connectivity relationships for physically-apart skeleton joints. Specifically, the scheme is composed of two components: spatial graph router (SGR) and temporal graph router (TGR). The SGR aims to discover the connectivity relationships among the joints based on sub-group clustering along the spatial dimension, while the TGR explores the structural information by measuring the correlation degrees between temporal joint node trajectories. The proposed scheme is naturally and seamlessly incorporated into the framework of graph convolutional networks (GCNs) to produce a set of skeleton-joint-connectivity graphs, which are further fed into the classification networks. Moreover, an insightful analysis on receptive field of graph node is provided to explain the necessity of our method. 
Experimental results on two benchmark datasets (NTU-RGB+D and Kinetics) demonstrate the effectiveness against the state-of-the-art.", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["Kinetics-Skeleton dataset"], "metric": ["Accuracy"], "title": "Spatiotemporal graph routing for skeleton-based action recognition"} {"abstract": "Face Analysis Project on MXNet", "field": [], "task": ["Face Verification"], "method": [], "dataset": ["2019_test set"], "metric": ["99.46%"], "title": "MobileFaceNets: Efficient CNNs for Accurate Real-Time Face Verification on Mobile Devices"} {"abstract": "We introduce a new entity typing task: given a sentence with an entity\nmention, the goal is to predict a set of free-form phrases (e.g. skyscraper,\nsongwriter, or criminal) that describe appropriate types for the target entity.\nThis formulation allows us to use a new type of distant supervision at large\nscale: head words, which indicate the type of the noun phrases they appear in.\nWe show that these ultra-fine types can be crowd-sourced, and introduce new\nevaluation sets that are much more diverse and fine-grained than existing\nbenchmarks. We present a model that can predict open types, and is trained\nusing a multitask objective that pools our new head-word supervision with prior\nsupervision from entity linking. Experimental results demonstrate that our\nmodel is effective in predicting entity types at varying granularity; it\nachieves state of the art performance on an existing fine-grained entity typing\nbenchmark, and sets baselines for our newly-introduced datasets. Our data and\nmodel can be downloaded from: http://nlp.cs.washington.edu/entity_type", "field": [], "task": ["Entity Linking", "Entity Typing"], "method": [], "dataset": ["Ontonotes v5 (English)"], "metric": ["Precision", "Recall", "F1"], "title": "Ultra-Fine Entity Typing"} {"abstract": "Few-shot learning requires to recognize novel classes with scarce labeled data. Prototypical network is useful in existing researches, however, training on narrow-size distribution of scarce data usually tends to get biased prototypes. In this paper, we figure out two key influencing factors of the process: the intra-class bias and the cross-class bias. We then propose a simple yet effective approach for prototype rectification in transductive setting. The approach utilizes label propagation to diminish the intra-class bias and feature shifting to diminish the cross-class bias. We also conduct theoretical analysis to derive its rationality as well as the lower bound of the performance. Effectiveness is shown on three few-shot benchmarks. Notably, our approach achieves state-of-the-art performance on both miniImageNet (70.31% on 1-shot and 81.89% on 5-shot) and tieredImageNet (78.74% on 1-shot and 86.92% on 5-shot).", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning", "Rectification"], "method": [], "dataset": ["Mini-ImageNet - 1-Shot Learning"], "metric": ["Accuracy"], "title": "Prototype Rectification for Few-Shot Learning"} {"abstract": "This paper addresses the challenge of 3D human pose estimation from a single\ncolor image. Despite the general success of the end-to-end learning paradigm,\ntop performing approaches employ a two-step solution consisting of a\nConvolutional Network (ConvNet) for 2D joint localization and a subsequent\noptimization step to recover 3D pose. 
In this paper, we identify the\nrepresentation of 3D pose as a critical issue with current ConvNet approaches\nand make two important contributions towards validating the value of end-to-end\nlearning for this task. First, we propose a fine discretization of the 3D space\naround the subject and train a ConvNet to predict per voxel likelihoods for\neach joint. This creates a natural representation for 3D pose and greatly\nimproves performance over the direct regression of joint coordinates. Second,\nto further improve upon initial estimates, we employ a coarse-to-fine\nprediction scheme. This step addresses the large dimensionality increase and\nenables iterative refinement and repeated processing of the image features. The\nproposed approach outperforms all state-of-the-art methods on standard\nbenchmarks achieving a relative error reduction greater than 30% on average.\nAdditionally, we investigate using our volumetric representation in a related\narchitecture which is suboptimal compared to our end-to-end approach, but is of\npractical interest, since it enables training when no image with corresponding\n3D groundtruth is available, and allows us to present compelling results for\nin-the-wild images.", "field": [], "task": ["3D Human Pose Estimation", "Pose Estimation", "Regression"], "method": [], "dataset": ["HumanEva-I", "Human3.6M"], "metric": ["Average MPJPE (mm)", "Mean Reconstruction Error (mm)"], "title": "Coarse-to-Fine Volumetric Prediction for Single-Image 3D Human Pose"} {"abstract": "Pixel-wise image segmentation is a highly demanding task in medical-image analysis. In practice, it is difficult to find annotated medical images with corresponding segmentation masks. In this paper, we present Kvasir-SEG: an open-access dataset of gastrointestinal polyp images and corresponding segmentation masks, manually annotated by a medical doctor and then verified by an experienced gastroenterologist. Moreover, we also generated the bounding boxes of the polyp regions with the help of segmentation masks. We demonstrate the use of our dataset with a traditional segmentation approach and a modern deep-learning based Convolutional Neural Network (CNN) approach. The dataset will be of value for researchers to reproduce results and compare methods. By adding segmentation masks to the Kvasir dataset, which only provide frame-wise annotations, we enable multimedia and computer vision researchers to contribute in the field of polyp segmentation and automatic analysis of colonoscopy images.", "field": [], "task": ["Medical Image Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["Kvasir-SEG"], "metric": ["mean Dice"], "title": "Kvasir-SEG: A Segmented Polyp Dataset"} {"abstract": "Is it possible to build a system to determine the location where a photo was\ntaken using just its pixels? In general, the problem seems exceptionally\ndifficult: it is trivial to construct situations where no location can be\ninferred. Yet images often contain informative cues such as landmarks, weather\npatterns, vegetation, road markings, and architectural details, which in\ncombination may allow one to determine an approximate location and occasionally\nan exact location. Websites such as GeoGuessr and View from your Window suggest\nthat humans are relatively good at integrating these cues to geolocate images,\nespecially en-masse. In computer vision, the photo geolocation problem is\nusually approached using image retrieval methods. 
In contrast, we pose the\nproblem as one of classification by subdividing the surface of the earth into\nthousands of multi-scale geographic cells, and train a deep network using\nmillions of geotagged images. While previous approaches only recognize\nlandmarks or perform approximate matching using global image descriptors, our\nmodel is able to use and integrate multiple visible cues. We show that the\nresulting model, called PlaNet, outperforms previous approaches and even\nattains superhuman levels of accuracy in some cases. Moreover, we extend our\nmodel to photo albums by combining it with a long short-term memory (LSTM)\narchitecture. By learning to exploit temporal coherence to geolocate uncertain\nphotos, we demonstrate that this model achieves a 50% performance improvement\nover the single-image model.", "field": [], "task": ["Image Retrieval", "Photo geolocation estimation"], "method": [], "dataset": ["Im2GPS"], "metric": ["City level (25 km)", "Continent level (2500 km)", "Reference images", "Training images", "Street level (1 km)", "Country level (750 km)", "Region level (200 km)"], "title": "PlaNet - Photo Geolocation with Convolutional Neural Networks"} {"abstract": "Weakly supervised object detection (WSOD) focuses on training object detector with only image-level annotations, and is challenging due to the gap between the supervision and the objective. Most of existing approaches model WSOD as a multiple instance learning (MIL) problem. However, we observe that the result of MIL based detector is unstable, i.e., the most confident bounding boxes change significantly when using different initializations. We quantitatively demonstrate the instability by introducing a metric to measure it, and empirically analyze the reason of instability. Although the instability seems harmful for detection task, we argue that it can be utilized to improve the performance by fusing the results of differently initialized detectors. To implement this idea, we propose an end-to-end framework with multiple detection branches, and introduce a simple fusion strategy. We further propose an orthogonal initialization method to increase the difference between detection branches. By utilizing the instability, we achieve 52.6% and 48.0% mAP on the challenging PASCAL VOC 2007 and 2012 datasets, which are both the new state-of-the-arts.", "field": [], "task": ["Multiple Instance Learning", "Object Detection", "Weakly Supervised Object Detection"], "method": [], "dataset": ["PASCAL VOC 2007", "PASCAL VOC 2012 test"], "metric": ["MAP"], "title": "Utilizing the Instability in Weakly Supervised Object Detection"} {"abstract": "There is a growing interest in learning data representations that work well\nfor many different types of problems and data. In this paper, we look in\nparticular at the task of learning a single visual representation that can be\nsuccessfully utilized in the analysis of very different types of images, from\ndog breeds to stop signs and digits. Inspired by recent work on learning\nnetworks that predict the parameters of another, we develop a tunable deep\nnetwork architecture that, by means of adapter residual modules, can be steered\non the fly to diverse visual domains. Our method achieves a high degree of\nparameter sharing while maintaining or even improving the accuracy of\ndomain-specific representations. 
We also introduce the Visual Decathlon\nChallenge, a benchmark that evaluates the ability of representations to capture\nsimultaneously ten very different visual domains and measures their ability to\nrecognize well uniformly.", "field": [], "task": ["Continual Learning"], "method": [], "dataset": ["visual domain decathlon (10 tasks)"], "metric": ["decathlon discipline (Score)"], "title": "Learning multiple visual domains with residual adapters"} {"abstract": "Many real world graphs, such as the graphs of molecules, exhibit structure at\nmultiple different scales, but most existing kernels between graphs are either\npurely local or purely global in character. In contrast, by building a\nhierarchy of nested subgraphs, the Multiscale Laplacian Graph kernels (MLG\nkernels) that we define in this paper can account for structure at a range of\ndifferent scales. At the heart of the MLG construction is another new graph\nkernel, called the Feature Space Laplacian Graph kernel (FLG kernel), which has\nthe property that it can lift a base kernel defined on the vertices of two\ngraphs to a kernel between the graphs. The MLG kernel applies such FLG kernels\nto subgraphs recursively. To make the MLG kernel computationally feasible, we\nalso introduce a randomized projection procedure, similar to the Nystr\\\"om\nmethod, but for RKHS operators.", "field": [], "task": ["Graph Classification"], "method": [], "dataset": ["PROTEINS"], "metric": ["Accuracy"], "title": "The Multiscale Laplacian Graph Kernel"} {"abstract": "Sarcasm is an intricate form of speech, where meaning is conveyed implicitly. Being a convoluted form of expression, detecting sarcasm is an assiduous problem. The difficulty in recognition of sarcasm has many pitfalls, including misunderstandings in everyday communications, which leads us to an increasing focus on automated sarcasm detection. In the second edition of the Figurative Language Processing (FigLang 2020) workshop, the shared task of sarcasm detection released two datasets, containing responses along with their context sampled from Twitter and Reddit. In this work, we use RoBERTa_large to detect sarcasm in both the datasets. We further assert the importance of context in improving the performance of contextual word embedding based models by using three different types of inputs - Response-only, Context-Response, and Context-Response (Separated). We show that our proposed architecture performs competitively for both the datasets. We also show that the addition of a separation token between context and target response results in an improvement of 5.13% in the F1-score in the Reddit dataset.", "field": [], "task": ["Sarcasm Detection"], "method": [], "dataset": ["FigLang 2020 Twitter Dataset", "FigLang 2020 Reddit Dataset"], "metric": ["F1"], "title": "Sarcasm Detection using Context Separators in Online Discourse"} {"abstract": "In this paper, we demonstrate how to do automated theorem proving in the presence of a large knowledge base of potential premises without learning from human proofs. We suggest an exploration mechanism that mixes in additional premises selected by a tf-idf (term frequency-inverse document frequency) based lookup in a deep reinforcement learning scenario. This helps with exploring and learning which premises are relevant for proving a new theorem. Our experiments show that the theorem prover trained with this exploration mechanism outperforms provers that are trained only on human proofs. 
It approaches the performance of a prover trained by a combination of imitation and reinforcement learning. We perform multiple experiments to understand the importance of the underlying assumptions that make our exploration approach work, thus explaining our design choices.", "field": [], "task": ["Automated Theorem Proving", "Imitation Learning"], "method": [], "dataset": ["HOList benchmark"], "metric": ["Percentage correct"], "title": "Learning to Reason in Large Theories without Imitation"} {"abstract": "We propose a scalable neural network framework to reconstruct the 3D mesh of a human body from multi-view images, in the subspace of the SMPL model. Use of multi-view images can significantly reduce the projection ambiguity of the problem, increasing the reconstruction accuracy of the 3D human body under clothing. Our experiments show that this method benefits from the synthetic dataset generated from our pipeline since it has good flexibility of variable control and can provide ground-truth for validation. Our method outperforms existing methods on real-world images, especially on shape estimations.", "field": [], "task": ["3D Human Pose Estimation"], "method": [], "dataset": ["Human3.6M"], "metric": ["Average MPJPE (mm)", "Multi-View or Monocular", "Using 2D ground-truth joints"], "title": "Shape-Aware Human Pose and Shape Reconstruction Using Multi-View Images"} {"abstract": "In this paper we used two new features i.e. T-wave integral and total integral as extracted feature from one cycle of normal and patient ECG signals to detection and localization of myocardial infarction (MI) in left ventricle of heart. In our previous work we used some features of body surface potential map data for this aim. But we know the standard ECG is more popular, so we focused our detection and localization of MI on standard ECG. We use the T-wave integral because this feature is important impression of T-wave in MI. The second feature in this research is total integral of one ECG cycle, because we believe that the MI affects the morphology of the ECG signal which leads to total integral changes. We used some pattern recognition method such as Artificial Neural Network (ANN) to detect and localize the MI, because this method has very good accuracy for classification of normal signal and abnormal signal. We used one type of Radial Basis Function (RBF) that called Probabilistic Neural Network (PNN) because of its nonlinearity property, and used other classifier such as k-Nearest Neighbors (KNN), Multilayer Perceptron (MLP) and Naive Bayes Classification. We used PhysioNet database as our training and test data. We reached over 76% for accuracy in test data for localization and over 94% for detection of MI. Main advantages of our method are simplicity and its good accuracy. Also we can improve the accuracy of classification by adding more features in this method. 
A simple method based on using only two features which were extracted from standard ECG is presented and has good accuracy in MI localization.", "field": [], "task": ["Myocardial infarction detection"], "method": [], "dataset": ["PTB dataset, ECG lead II"], "metric": ["Accuracy"], "title": "A New Pattern Recognition Method for Detection and Localization of Myocardial Infarction Using T-Wave Integral and Total Integral as Extracted Features from One Cycle of ECG Signal"} {"abstract": "Object counting is an important task in computer vision due to its growing\ndemand in applications such as surveillance, traffic monitoring, and counting\neveryday objects. State-of-the-art methods use regression-based optimization\nwhere they explicitly learn to count the objects of interest. These often\nperform better than detection-based methods that need to learn the more\ndifficult task of predicting the location, size, and shape of each object.\nHowever, we propose a detection-based method that does not need to estimate the\nsize and shape of the objects and that outperforms regression-based methods.\nOur contributions are three-fold: (1) we propose a novel loss function that\nencourages the network to output a single blob per object instance using\npoint-level annotations only; (2) we design two methods for splitting large\npredicted blobs between object instances; and (3) we show that our method\nachieves new state-of-the-art results on several challenging datasets including\nthe Pascal VOC and the Penguins dataset. Our method even outperforms those that\nuse stronger supervision such as depth features, multi-point annotations, and\nbounding-box labels.", "field": [], "task": ["Object Counting", "Regression"], "method": [], "dataset": ["Pascal VOC 2007 count-test", "COCO count-test"], "metric": ["m-reIRMSE", "mRMSE-nz", "m-reIRMSE-nz", "mRMSE", "m-relRMSE"], "title": "Where are the Blobs: Counting by Localization with Point Supervision"} {"abstract": "Despite great advances witnessed on facial image alignment in recent years, high accuracy high speed face alignment algorithms still have rooms to improve especially for applications where computation resources are limited. Addressing this issue, we propose a new face landmark localization algorithm by combining global regression and local refinement. In particular, for a given image, our algorithm first estimates its global facial shape through a global regression network (GRegNet) and then using cascaded local refinement networks (LRefNet) to sequentially improve the alignment result. Compared with previous face alignment algorithms, our key innovation is the sharing of low level features in GRegNet with LRefNet. Such feature sharing not only significantly improves the algorithm efficiency, but also allows full exploration of rich locality-sensitive details carried with shallow network layers and consequently boosts the localization accuracy. The advantages of our algorithm is clearly validated in our thorough experiments on four popular face alignment benchmarks, 300-W, AFLW, COFW and WFLW. 
On all datasets, our algorithm produces state-of-the-art alignment accuracy, while enjoys the smallest computational complexity.", "field": [], "task": ["Face Alignment", "Regression"], "method": [], "dataset": ["WFLW", "300W"], "metric": ["ME (%, all) ", "FR@0.1(%, all)", "AUC@0.1 (all)", "Fullset (public)"], "title": "Efficient and Accurate Face Alignment by Global Regression and Cascaded Local Refinement"} {"abstract": "Human trajectory forecasting is an inherently multi-modal problem. Uncertainty in future trajectories stems from two sources: (a) sources that are known to the agent but unknown to the model, such as long term goals and (b)sources that are unknown to both the agent & the model, such as intent of other agents & irreducible randomness indecisions. We propose to factorize this uncertainty into its epistemic & aleatoric sources. We model the epistemic un-certainty through multimodality in long term goals and the aleatoric uncertainty through multimodality in waypoints& paths. To exemplify this dichotomy, we also propose a novel long term trajectory forecasting setting, with prediction horizons upto a minute, an order of magnitude longer than prior works. Finally, we presentY-net, a scene com-pliant trajectory forecasting network that exploits the pro-posed epistemic & aleatoric structure for diverse trajectory predictions across long prediction horizons.Y-net significantly improves previous state-of-the-art performance on both (a) The well studied short prediction horizon settings on the Stanford Drone & ETH/UCY datasets and (b) The proposed long prediction horizon setting on the re-purposed Stanford Drone & Intersection Drone datasets.", "field": [], "task": ["Trajectory Forecasting", "Trajectory Prediction"], "method": [], "dataset": ["Stanford Drone"], "metric": ["ADE-8/12 @K = 20", "FDE-8/12 @K= 20"], "title": "From Goals, Waypoints & Paths To Long Term Human Trajectory Forecasting"} {"abstract": "We propose a deep neural network fusion architecture for fast and robust\npedestrian detection. The proposed network fusion architecture allows for\nparallel processing of multiple networks for speed. A single shot deep\nconvolutional network is trained as a object detector to generate all possible\npedestrian candidates of different sizes and occlusions. This network outputs a\nlarge variety of pedestrian candidates to cover the majority of ground-truth\npedestrians while also introducing a large number of false positives. Next,\nmultiple deep neural networks are used in parallel for further refinement of\nthese pedestrian candidates. We introduce a soft-rejection based network fusion\nmethod to fuse the soft metrics from all networks together to generate the\nfinal confidence scores. Our method performs better than existing\nstate-of-the-arts, especially when detecting small-size and occluded\npedestrians. Furthermore, we propose a method for integrating pixel-wise\nsemantic segmentation network into the network fusion architecture as a\nreinforcement to the pedestrian detector. The approach outperforms\nstate-of-the-art methods on most protocols on Caltech Pedestrian dataset, with\nsignificant boosts on several protocols. 
It is also faster than all other\nmethods.", "field": [], "task": ["Pedestrian Detection", "Semantic Segmentation"], "method": [], "dataset": ["Caltech"], "metric": ["Reasonable Miss Rate"], "title": "Fused DNN: A deep neural network fusion approach to fast and robust pedestrian detection"} {"abstract": "The optical flow of natural scenes is a combination of the motion of the\nobserver and the independent motion of objects. Existing algorithms typically\nfocus on either recovering motion and structure under the assumption of a\npurely static world or optical flow for general unconstrained scenes. We\ncombine these approaches in an optical flow algorithm that estimates an\nexplicit segmentation of moving objects from appearance and physical\nconstraints. In static regions we take advantage of strong constraints to\njointly estimate the camera motion and the 3D structure of the scene over\nmultiple frames. This allows us to also regularize the structure instead of the\nmotion. Our formulation uses a Plane+Parallax framework, which works even under\nsmall baselines, and reduces the motion estimation to a one-dimensional search\nproblem, resulting in more accurate estimation. In moving regions the flow is\ntreated as unconstrained, and computed with an existing optical flow method.\nThe resulting Mostly-Rigid Flow (MR-Flow) method achieves state-of-the-art\nresults on both the MPI-Sintel and KITTI-2015 benchmarks.", "field": [], "task": ["Motion Estimation", "Optical Flow Estimation"], "method": [], "dataset": ["Sintel-final", "Sintel-clean"], "metric": ["Average End-Point Error"], "title": "Optical Flow in Mostly Rigid Scenes"} {"abstract": "Due to the sparsity of features, noise has proven to be a great inhibitor in the classification of handwritten characters. To combat this, most techniques perform denoising of the data before classification. In this paper, we consolidate the approach by training an all-in-one model that is able to classify even noisy characters. For classification, we progressively train a classifier generative adversarial network on the characters from low to high resolution. We show that by learning the features at each resolution independently a trained model is able to accurately classify characters even in the presence of noise. We experimentally demonstrate the effectiveness of our approach by classifying noisy versions of MNIST, handwritten Bangla Numeral, and Basic Character datasets.", "field": [], "task": ["Denoising", "Document Image Classification", "Image Classification"], "method": [], "dataset": ["Noisy MNIST", "Noisy MNIST (Motion)", "Noisy MNIST (AWGN)", "Noisy Bangla Characters", "Noisy MNIST (Contrast)", "Noisy Bangla Numeral"], "metric": ["Accuracy"], "title": "PCGAN-CHAR: Progressively Trained Classifier Generative Adversarial Networks for Classification of Noisy Handwritten Bangla Characters"} {"abstract": "Object detection has seen tremendous progress in recent years. However, current algorithms don't generalize well when tested on diverse data distributions. We address the problem of incremental learning in object detection on the India Driving Dataset (IDD). Our approach involves using multiple domain-specific classifiers and effective transfer learning techniques focussed on avoiding catastrophic forgetting. We evaluate our approach on the IDD and BDD100K dataset. 
Results show the effectiveness of our domain adaptive approach in the case of domain shifts in environments.", "field": [], "task": ["Incremental Learning", "Object Detection", "Transfer Learning"], "method": [], "dataset": ["BDD100k", "India Driving Dataset"], "metric": ["mAP@0.5"], "title": "On Generalizing Detection Models for Unconstrained Environments"} {"abstract": "3D object reconstruction from a single image is a highly under-determined\nproblem, requiring strong prior knowledge of plausible 3D shapes. This\nintroduces challenges for learning-based approaches, as 3D object annotations\nare scarce in real images. Previous work chose to train on synthetic data with\nground truth 3D information, but suffered from domain adaptation when tested on\nreal data. In this work, we propose MarrNet, an end-to-end trainable model that\nsequentially estimates 2.5D sketches and 3D object shape. Our disentangled,\ntwo-step formulation has three advantages. First, compared to full 3D shape,\n2.5D sketches are much easier to be recovered from a 2D image; models that\nrecover 2.5D sketches are also more likely to transfer from synthetic to real\ndata. Second, for 3D reconstruction from 2.5D sketches, systems can learn\npurely from synthetic data. This is because we can easily render realistic 2.5D\nsketches without modeling object appearance variations in real images,\nincluding lighting, texture, etc. This further relieves the domain adaptation\nproblem. Third, we derive differentiable projective functions from 3D shape to\n2.5D sketches; the framework is therefore end-to-end trainable on real images,\nrequiring no human annotations. Our model achieves state-of-the-art performance\non 3D shape reconstruction.", "field": [], "task": ["3D Object Reconstruction", "3D Object Reconstruction From A Single Image", "3D Reconstruction", "3D Shape Reconstruction", "Domain Adaptation", "Object Reconstruction"], "method": [], "dataset": ["Pix3D"], "metric": ["R@16", "R@8", "R@2", "R@4", "R@1", "R@32"], "title": "MarrNet: 3D Shape Reconstruction via 2.5D Sketches"} {"abstract": "In this paper we study the use of convolutional neural networks (convnets)\nfor the task of pedestrian detection. Despite their recent diverse successes,\nconvnets historically underperform compared to other pedestrian detectors. We\ndeliberately omit explicitly modelling the problem into the network (e.g. parts\nor occlusion modelling) and show that we can reach competitive performance\nwithout bells and whistles. In a wide range of experiments we analyse small and\nbig convnets, their architectural choices, parameters, and the influence of\ndifferent training data, including pre-training on surrogate tasks.\n We present the best convnet detectors on the Caltech and KITTI dataset. On\nCaltech our convnets reach top performance both for the Caltech1x and\nCaltech10x training setup. Using additional data at training time our strongest\nconvnet model is competitive even to detectors that use additional data\n(optical flow) at test time.", "field": [], "task": ["Optical Flow Estimation", "Pedestrian Detection"], "method": [], "dataset": ["Caltech"], "metric": ["Reasonable Miss Rate"], "title": "Taking a Deeper Look at Pedestrians"} {"abstract": "Subspace clustering methods based on data self-expression have become very\npopular for learning from data that lie in a union of low-dimensional linear\nsubspaces. 
However, the applicability of subspace clustering has been limited\nbecause practical visual data in raw form do not necessarily lie in such linear\nsubspaces. On the other hand, while Convolutional Neural Network (ConvNet) has\nbeen demonstrated to be a powerful tool for extracting discriminative features\nfrom visual data, training such a ConvNet usually requires a large amount of\nlabeled data, which are unavailable in subspace clustering applications. To\nachieve simultaneous feature learning and subspace clustering, we propose an\nend-to-end trainable framework, called Self-Supervised Convolutional Subspace\nClustering Network (S$^2$ConvSCN), that combines a ConvNet module (for feature\nlearning), a self-expression module (for subspace clustering) and a spectral\nclustering module (for self-supervision) into a joint optimization framework.\nParticularly, we introduce a dual self-supervision that exploits the output of\nspectral clustering to supervise the training of the feature learning module\n(via a classification loss) and the self-expression module (via a spectral\nclustering loss). Our experiments on four benchmark datasets show the\neffectiveness of the dual self-supervision and demonstrate superior performance\nof our proposed approach.", "field": [], "task": ["Image Clustering"], "method": [], "dataset": ["Extended Yale-B"], "metric": ["Accuracy"], "title": "Self-Supervised Convolutional Subspace Clustering Network"} {"abstract": "While target-side monolingual data has been proven to be very useful to improve neural machine translation (briefly, NMT) through back translation, source-side monolingual data is not well investigated. In this work, we study how to use both the source-side and target-side monolingual data for NMT, and propose an effective strategy leveraging both of them. First, we generate synthetic bitext by translating monolingual data from the two domains into the other domain using the models pretrained on genuine bitext. Next, a model is trained on a noised version of the concatenated synthetic bitext where each source sequence is randomly corrupted. Finally, the model is fine-tuned on the genuine bitext and a clean version of a subset of the synthetic bitext without adding any noise. Our approach achieves state-of-the-art results on WMT16, WMT17, WMT18 English$\\leftrightarrow$German translations and WMT19 German$\\to$French translations, which demonstrate the effectiveness of our method. We also conduct a comprehensive study on how each part in the pipeline works.", "field": [], "task": ["Machine Translation"], "method": [], "dataset": ["WMT2019 German-English", "WMT2016 English-German", "WMT2016 German-English", "WMT2019 English-German"], "metric": ["SacreBLEU"], "title": "Exploiting Monolingual Data at Scale for Neural Machine Translation"} {"abstract": "Accurate 3D object detection in LiDAR based point clouds suffers from the challenges of data sparsity and irregularities. Existing methods strive to organize the points regularly, e.g. voxelize, pass them through a designed 2D/3D neural network, and then define object-level anchors that predict offsets of 3D bounding boxes using collective evidences from all the points on the objects of interest. Contrary to the state-of-the-art anchor-based methods, based on the very nature of data sparsity, we observe that even points on an individual object part are informative about semantic information of the object. We thus argue in this paper for an approach opposite to existing methods using object-level anchors. 
Inspired by compositional models, which represent an object as parts and their spatial relations, we propose to represent an object as composition of its interior non-empty voxels, termed hotspots, and the spatial relations of hotspots. This gives rise to the representation of Object as Hotspots (OHS). Based on OHS, we further propose an anchor-free detection head with a novel ground truth assignment strategy that deals with inter-object point-sparsity imbalance to prevent the network from biasing towards objects with more points. Experimental results show that our proposed method works remarkably well on objects with a small number of points. Notably, our approach ranked 1st on KITTI 3D Detection Benchmark for cyclist and pedestrian detection, and achieved state-of-the-art performance on NuScenes 3D Detection Benchmark.", "field": [], "task": ["3D Object Detection", "Object Detection", "Pedestrian Detection"], "method": [], "dataset": ["KITTI Pedestrians Moderate"], "metric": ["AP"], "title": "Object as Hotspots: An Anchor-Free 3D Object Detection Approach via Firing of Hotspots"} {"abstract": "This report describes the system developed by the CRIM team for the hypernym discovery task at SemEval 2018. This system exploits a combination of supervised projection learning and unsupervised pattern-based hypernym discovery. It was ranked first on the 3 sub-tasks for which we submitted results.", "field": [], "task": ["Hypernym Discovery", "Relation Extraction"], "method": [], "dataset": ["Medical domain", "Music domain", "General"], "metric": ["P@5", "MRR", "MAP"], "title": "CRIM at SemEval-2018 Task 9: A Hybrid Approach to Hypernym Discovery"} {"abstract": "While Graph Neural Networks (GNNs) have achieved remarkable results in a variety of applications, recent studies exposed important shortcomings in their ability to capture the structure of the underlying graph. It has been shown that the expressive power of standard GNNs is bounded by the Weisfeiler-Leman (WL) graph isomorphism test, from which they inherit proven limitations such as the inability to detect and count graph substructures. On the other hand, there is significant empirical evidence, e.g. in network science and bioinformatics, that substructures are often informative for downstream tasks, suggesting that it is desirable to design GNNs capable of leveraging this important source of information. To this end, we propose a novel topologically-aware message passing scheme based on substructure encoding. We show that our architecture allows incorporating domain-specific inductive biases and that it is strictly more expressive than the WL test. Importantly, in contrast to recent works on the expressivity of GNNs, we do not attempt to adhere to the WL hierarchy; this allows us to retain multiple attractive properties of standard GNNs such as locality and linear network complexity, while being able to disambiguate even hard instances of graph isomorphism. 
We extensively evaluate our method on graph classification and regression tasks and show state-of-the-art results on multiple datasets including molecular graphs and social networks.", "field": [], "task": ["Graph Classification", "Graph Regression", "Regression"], "method": [], "dataset": ["ZINC 100k"], "metric": ["MAE"], "title": "Improving Graph Neural Network Expressivity via Subgraph Isomorphism Counting"} {"abstract": "In this paper, we propose to equip Generative Adversarial Networks with the\nability to produce direct energy estimates for samples.Specifically, we propose\na flexible adversarial training framework, and prove this framework not only\nensures the generator converges to the true data distribution, but also enables\nthe discriminator to retain the density information at the global optimal. We\nderive the analytic form of the induced solution, and analyze the properties.\nIn order to make the proposed framework trainable in practice, we introduce two\neffective approximation techniques. Empirically, the experiment results closely\nmatch our theoretical analysis, verifying the discriminator is able to recover\nthe energy of data distribution.", "field": [], "task": ["Image Generation"], "method": [], "dataset": ["CIFAR-10"], "metric": ["Inception score"], "title": "Calibrating Energy-based Generative Adversarial Networks"} {"abstract": "Numerous efforts have been made to design different low level saliency cues\nfor the RGBD saliency detection, such as color or depth contrast features,\nbackground and color compactness priors. However, how these saliency cues\ninteract with each other and how to incorporate these low level saliency cues\neffectively to generate a master saliency map remain a challenging problem. In\nthis paper, we design a new convolutional neural network (CNN) to fuse\ndifferent low level saliency cues into hierarchical features for automatically\ndetecting salient objects in RGBD images. In contrast to the existing works\nthat directly feed raw image pixels to the CNN, the proposed method takes\nadvantage of the knowledge in traditional saliency detection by adopting\nvarious meaningful and well-designed saliency feature vectors as input. This\ncan guide the training of CNN towards detecting salient object more effectively\ndue to the reduced learning ambiguity. We then integrate a Laplacian\npropagation framework with the learned CNN to extract a spatially consistent\nsaliency map by exploiting the intrinsic structure of the input image.\nExtensive quantitative and qualitative experimental evaluations on three\ndatasets demonstrate that the proposed method consistently outperforms\nstate-of-the-art methods.", "field": [], "task": ["Object Detection", "RGB-D Salient Object Detection", "RGB Salient Object Detection", "Saliency Detection", "Salient Object Detection"], "method": [], "dataset": ["NJU2K"], "metric": ["max E-Measure", "Average MAE", "S-Measure", "max F-Measure"], "title": "RGBD Salient Object Detection via Deep Fusion"} {"abstract": "We propose a Dynamic Graph-Based Spatial-Temporal Attention (DG-STA) method for hand gesture recognition. The key idea is to first construct a fully-connected graph from a hand skeleton, where the node features and edges are then automatically learned via a self-attention mechanism that performs in both spatial and temporal domains. We further propose to leverage the spatial-temporal cues of joint positions to guarantee robust recognition in challenging conditions. 
In addition, a novel spatial-temporal mask is applied to significantly cut down the computational cost by 99%. We carry out extensive experiments on benchmarks (DHG-14/28 and SHREC'17) and prove the superior performance of our method compared with the state-of-the-art methods. The source code can be found at https://github.com/yuxiaochen1103/DG-STA.", "field": [], "task": ["Gesture Recognition", "Hand Gesture Recognition", "Hand-Gesture Recognition", "Skeleton Based Action Recognition"], "method": [], "dataset": ["DHG-14", "SHREC 2017", "DHG-28"], "metric": ["14 gestures accuracy", "28 gestures accuracy", "Accuracy"], "title": "Construct Dynamic Graphs for Hand Gesture Recognition via Spatial-Temporal Attention"} {"abstract": "Myocardial infarction is the leading cause of death worldwide. In this paper, we design domain-inspired neural network models to detect myocardial infarction. First, we study the contribution of various leads. This systematic analysis, first of its kind in the literature, indicates that out of 15 ECG leads, data from the v6, vz, and ii leads are critical to correctly identify myocardial infarction. Second, we use this finding and adapt the ConvNetQuake neural network model--originally designed to identify earthquakes--to attain state-of-the-art classification results for myocardial infarction, achieving $99.43\\%$ classification accuracy on a record-wise split, and $97.83\\%$ classification accuracy on a patient-wise split. These two results represent cardiologist-level performance level for myocardial infarction detection after feeding only 10 seconds of raw ECG data into our model. Third, we show that our multi-ECG-channel neural network achieves cardiologist-level performance without the need of any kind of manual feature extraction or data pre-processing.", "field": [], "task": ["Myocardial infarction detection"], "method": [], "dataset": ["PTB", "PTB dataset, ECG lead II"], "metric": ["Accuracy (%)", "Accuracy"], "title": "Deep Learning for Cardiologist-level Myocardial Infarction Detection in Electrocardiograms"} {"abstract": "Recurrent neural networks (RNNs) are a powerful model for sequential data.\nEnd-to-end training methods such as Connectionist Temporal Classification make\nit possible to train RNNs for sequence labelling problems where the\ninput-output alignment is unknown. The combination of these methods with the\nLong Short-term Memory RNN architecture has proved particularly fruitful,\ndelivering state-of-the-art results in cursive handwriting recognition. However\nRNN performance in speech recognition has so far been disappointing, with\nbetter results returned by deep feedforward networks. This paper investigates\n\\emph{deep recurrent neural networks}, which combine the multiple levels of\nrepresentation that have proved so effective in deep networks with the flexible\nuse of long range context that empowers RNNs. When trained end-to-end with\nsuitable regularisation, we find that deep Long Short-term Memory RNNs achieve\na test set error of 17.7% on the TIMIT phoneme recognition benchmark, which to\nour knowledge is the best recorded score.", "field": [], "task": ["Handwriting Recognition", "Speech Recognition"], "method": [], "dataset": ["TIMIT"], "metric": ["Percentage error"], "title": "Speech Recognition with Deep Recurrent Neural Networks"} {"abstract": "Person re-identification (Re-ID) is an important problem in video surveillance, aiming to match pedestrian images across camera views. Currently, most works focus on RGB-based Re-ID. 
However, in some applications, RGB images are not suitable, e.g. in a dark environment or at night. Infrared (IR) imaging becomes necessary in many visual systems. To that end, matching RGB images with infrared images is required, which are heterogeneous with very different visual characteristics. For person Re-ID, this is a very challenging cross-modality problem that has not been studied so far. In this work, we address the RGB-IR cross-modality Re-ID problem and contribute a new multiple modality Re-ID dataset named SYSU-MM01, including RGB and IR images of 491 identities from 6 cameras, giving in total 287,628 RGB images and 15,792 IR images. To explore the RGB-IR Re-ID problem, we evaluate existing popular cross-domain models, including three commonly used neural network structures (one-stream, two-stream and asymmetric FC layer) and analyse the relation between them. We further propose deep zero-padding for training one-stream network towards automatically evolving domain-specific nodes in the network for cross-modality matching. Our experiments show that RGB-IR cross-modality matching is very challenging but still feasible using the proposed model with deep zero-padding, giving the best performance. Our dataset is available at http://isee.sysu.edu.cn/project/RGBIRReID.htm.\r", "field": [], "task": ["Cross-Modal Person Re-Identification", "Person Re-Identification"], "method": [], "dataset": ["SYSU-MM01"], "metric": ["mAP (All-search & Single-shot)"], "title": "RGB-Infrared Cross-Modality Person Re-Identification"} {"abstract": "\n It is important to transfer the knowledge from label-rich source domain to unlabeled target domain due to the expensive cost of manual labeling efforts. Prior domain adaptation methods address this problem through aligning the global distribution statistics between source domain and target domain, but a drawback of prior methods is that they ignore the semantic information contained in samples, e.g., features of backpacks in target domain might be mapped near features of cars in source domain. In this paper, we present moving semantic transfer network, which learn semantic representations for unlabeled target samples by aligning labeled source centroid and pseudo-labeled target centroid. Features in same class but different domains are expected to be mapped nearby, resulting in an improved target classification accuracy. Moving average centroid alignment is cautiously designed to compensate the insufficient categorical information within each mini batch. Experiments testify that our model yields state of the art results on standard datasets.\n ", "field": [], "task": ["Domain Adaptation", "Learning Semantic Representations", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["SVHN-to-MNIST"], "metric": ["Accuracy"], "title": "Learning Semantic Representations for Unsupervised Domain Adaptation"} {"abstract": "In this paper we tackle the problem of unsupervised domain adaptation for the\ntask of semantic segmentation, where we attempt to transfer the knowledge\nlearned upon synthetic datasets with ground-truth labels to real-world images\nwithout any annotation. 
With the hypothesis that the structural content of\nimages is the most informative and decisive factor to semantic segmentation and\ncan be readily shared across domains, we propose a Domain Invariant Structure\nExtraction (DISE) framework to disentangle images into domain-invariant\nstructure and domain-specific texture representations, which can further\nrealize image-translation across domains and enable label transfer to improve\nsegmentation performance. Extensive experiments verify the effectiveness of our\nproposed DISE model and demonstrate its superiority over several\nstate-of-the-art approaches.", "field": [], "task": ["Domain Adaptation", "Image-to-Image Translation", "Semantic Segmentation", "Synthetic-to-Real Translation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["GTAV-to-Cityscapes Labels", "SYNTHIA-to-Cityscapes"], "metric": ["mIoU (13 classes)", "mIoU"], "title": "All about Structure: Adapting Structural Information across Domains for Boosting Semantic Segmentation"} {"abstract": "In this work, we present graph star net (GraphStar), a novel and unified graph neural net architecture which utilizes message-passing relay and attention mechanism for multiple prediction tasks - node classification, graph classification and link prediction. GraphStar addresses many earlier challenges facing graph neural nets and achieves non-local representation without increasing the model depth or bearing heavy computational costs. We also propose a new method to tackle topic-specific sentiment analysis based on node classification and text classification as graph classification. Our work shows that 'star nodes' can learn effective graph-data representation and improve on current methods for the three tasks. Specifically, for graph classification and link prediction, GraphStar outperforms the current state-of-the-art models by 2-5% on several key benchmarks.", "field": [], "task": ["Graph Classification", "Link Prediction", "Multi-Task Learning", "Node Classification", "Sentiment Analysis", "Text Classification"], "method": [], "dataset": ["Cora", "Pubmed (biased evaluation)", "Cora (biased evaluation)", "ENZYMES", "PPI", "PROTEINS", "D&D", "R8", "20NEWS", "Citeseer", "MR", "IMDb", "MUTAG", "R52", "Citeseer (biased evaluation)", "Pubmed", "Ohsumed"], "metric": ["F1", "AP", "AUC", "Accuracy"], "title": "Graph Star Net for Generalized Multi-Task Learning"} {"abstract": "Defocus blur detection aims to detect out-of-focus regions from an image. Although attracting more and more attention due to its widespread applications, defocus blur detection still confronts several challenges such as the interference of background clutter, sensitivity to scales and missing boundary details of defocus blur regions. To deal with these issues, we propose a deep neural network which recurrently fuses and refines multi-scale deep features (DeFusionNet) for defocus blur detection. We firstly utilize a fully convolutional network to extract multi-scale deep features. The features from bottom layers are able to capture rich low-level features for details preservation, while the features from top layers can characterize the semantic information to locate blur regions. These features from different layers are fused as shallow features and semantic features, respectively. 
After that, the fused shallow features are propagated to top layers for refining the fine details of detected defocus blur regions, and the fused semantic features are propagated to bottom layers to assist in better locating the defocus regions. The feature fusing and refining are carried out in a recurrent manner. Also, we finally fuse the output of each layer at the last recurrent step to obtain the final defocus blur map by considering the sensitivity to scales of the defocus degree. Experiments on two commonly used defocus blur detection benchmark datasets are conducted to demonstrate the superority of DeFusionNet when compared with other 10 competitors. Code and more results can be found at: http://tangchang.net\r", "field": [], "task": ["Defocus Estimation"], "method": [], "dataset": ["CUHK - Blur Detection Dataset"], "metric": ["MAE", "F-measure"], "title": "DeFusionNET: Defocus Blur Detection via Recurrently Fusing and Refining Multi-Scale Deep Features"} {"abstract": "Face Analysis Project on MXNet", "field": [], "task": ["Face Alignment", "Face Recognition", "Robust Face Alignment", "Robust Face Recognition"], "method": [], "dataset": ["IBUG", "COFW"], "metric": ["Mean Error Rate"], "title": "Stacked Dense U-Nets with Dual Transformers for Robust Face Alignment"} {"abstract": "Convolutional Neural Networks experience catastrophic forgetting when optimized on a sequence of learning problems: as they meet the objective of the current training examples, their performance on previous tasks drops drastically. In this work, we introduce a novel framework to tackle this problem with conditional computation. We equip each convolutional layer with task-specific gating modules, selecting which filters to apply on the given input. This way, we achieve two appealing properties. Firstly, the execution patterns of the gates allow to identify and protect important filters, ensuring no loss in the performance of the model for previously learned tasks. Secondly, by using a sparsity objective, we can promote the selection of a limited set of kernels, allowing to retain sufficient model capacity to digest new tasks.Existing solutions require, at test time, awareness of the task to which each example belongs to. This knowledge, however, may not be available in many practical scenarios. Therefore, we additionally introduce a task classifier that predicts the task label of each example, to deal with settings in which a task oracle is not available. We validate our proposal on four continual learning datasets. Results show that our model consistently outperforms existing methods both in the presence and the absence of a task oracle. Notably, on Split SVHN and Imagenet-50 datasets, our model yields up to 23.98% and 17.42% improvement in accuracy w.r.t. competing methods.", "field": [], "task": ["Continual Learning"], "method": [], "dataset": ["ImageNet-50 (5 tasks) "], "metric": ["Accuracy"], "title": "Conditional Channel Gated Networks for Task-Aware Continual Learning"} {"abstract": "In this work we propose a multi-task spatio-temporal network, called SUSiNet,\nthat can jointly tackle the spatio-temporal problems of saliency estimation,\naction recognition and video summarization. Our approach employs a single\nnetwork that is jointly end-to-end trained for all tasks with multiple and\ndiverse datasets related to the exploring tasks. 
The proposed network uses a\nunified architecture that includes global and task specific layer and produces\nmultiple output types, i.e., saliency maps or classification labels, by\nemploying the same video input. Moreover, one additional contribution is that\nthe proposed network can be deeply supervised through an attention module that\nis related to human attention as it is expressed by eye-tracking data. From the\nextensive evaluation, on seven different datasets, we have observed that the\nmulti-task network performs as well as the state-of-the-art single-task methods\n(or in some cases better), while it requires less computational budget than\nhaving one independent network per each task.", "field": [], "task": ["Action Recognition", "Eye Tracking", "Saliency Prediction", "Temporal Action Localization", "Video Summarization"], "method": [], "dataset": ["HMDB-51"], "metric": ["Average accuracy of 3 splits"], "title": "SUSiNet: See, Understand and Summarize it"} {"abstract": "In this paper, we propose a two-stage fully 3D network, namely \\textbf{DeepFuse}, to estimate human pose in 3D space by fusing body-worn Inertial Measurement Unit (IMU) data and multi-view images deeply. The first stage is designed for pure vision estimation. To preserve data primitiveness of multi-view inputs, the vision stage uses multi-channel volume as data representation and 3D soft-argmax as activation layer. The second one is the IMU refinement stage which introduces an IMU-bone layer to fuse the IMU and vision data earlier at data level. without requiring a given skeleton model a priori, we can achieve a mean joint error of $28.9$mm on TotalCapture dataset and $13.4$mm on Human3.6M dataset under protocol 1, improving the SOTA result by a large margin. Finally, we discuss the effectiveness of a fully 3D network for 3D pose estimation experimentally which may benefit future research.", "field": [], "task": ["3D Human Pose Estimation", "3D Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["Total Capture", "Human3.6M"], "metric": ["Average MPJPE (mm)", "Multi-View or Monocular", "Using 2D ground-truth joints"], "title": "DeepFuse: An IMU-Aware Network for Real-Time 3D Human Pose Estimation from Multi-View Image"} {"abstract": "We aim to simultaneously estimate the 3D articulated pose and high fidelity volumetric occupancy of human performance, from multiple viewpoint video (MVV) with as few as two views. We use a multi-channel symmetric 3D convolutional encoder-decoder with a dual loss to enforce the learning of a latent embedding that enables inference of skeletal joint positions and a volumetric reconstruction of the performance. The inference is regularised via a prior learned over a dataset of view-ablated multi-view video footage of a wide range of subjects and actions, and show this to generalise well across unseen subjects and actions. We demonstrate improved reconstruction accuracy and lower pose estimation error relative to prior work on two MVV performance capture datasets: Human 3.6M and TotalCapture.", "field": [], "task": ["3D Human Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["Human3.6M"], "metric": ["Average MPJPE (mm)"], "title": "Semantic Estimation of 3D Body Shape and Pose using Minimal Cameras"} {"abstract": "We present an algorithm for fusing multi-viewpoint video (MVV) with inertial measurement unit (IMU) sensor data to accurately estimate 3D human pose. 
A 3-D convolutional neural network is used to learn a pose embedding from volumetric probabilistic visual hull data (PVH) derived from the MVV frames. We incorporate this model within a dual stream network integrating pose embeddings derived from MVV and a forward kinematic solve of the IMU data. A temporal model (LSTM) is incorporated within both streams prior to their fusion. Hybrid pose inference using these two complementary data sources is shown to resolve ambiguities within each sensor modality, yielding improved accuracy over prior methods. A further contribution of this work is a new hybrid MVV dataset (TotalCapture) comprising video, IMU and a skeletal joint ground truth derived from a commercial motion capture system. The dataset is available online at http://cvssp.org/data/totalcapture/", "field": [], "task": ["3D Human Pose Estimation", "Motion Capture", "Pose Estimation"], "method": [], "dataset": ["Total Capture", "Human3.6M"], "metric": ["Average MPJPE (mm)"], "title": "Total capture: 3D human pose estimation fusing video and inertial sensors"} {"abstract": "Deep learning approaches for sentiment classification do not fully exploit\nsentiment linguistic knowledge. In this paper, we propose a\nMulti-sentiment-resource Enhanced Attention Network (MEAN) to alleviate the\nproblem by integrating three kinds of sentiment linguistic knowledge (e.g.,\nsentiment lexicon, negation words, intensity words) into the deep neural\nnetwork via attention mechanisms. By using various types of sentiment\nresources, MEAN utilizes sentiment-relevant information from different\nrepresentation subspaces, which makes it more effective to capture the overall\nsemantics of the sentiment, negation and intensity words for sentiment\nprediction. The experimental results demonstrate that MEAN has robust\nsuperiority over strong competitors.", "field": [], "task": ["Sentiment Analysis"], "method": [], "dataset": ["MR", "SST-5 Fine-grained classification"], "metric": ["Accuracy"], "title": "A Multi-sentiment-resource Enhanced Attention Network for Sentiment Classification"} {"abstract": "Feature extraction and matching are two crucial components in person\nRe-Identification (ReID). The large pose deformations and the complex view\nvariations exhibited by the captured person images significantly increase the\ndifficulty of learning and matching of the features from person images. To\novercome these difficulties, in this work we propose a Pose-driven Deep\nConvolutional (PDC) model to learn improved feature extraction and matching\nmodels from end to end. Our deep architecture explicitly leverages the human\npart cues to alleviate the pose variations and learn robust feature\nrepresentations from both the global image and different local parts. To match\nthe features from global human body and local body parts, a pose driven feature\nweighting sub-network is further designed to learn adaptive feature fusions.\nExtensive experimental analyses and results on three popular datasets\ndemonstrate significant performance improvements of our model over all\npublished state-of-the-art methods.", "field": [], "task": ["Person Re-Identification"], "method": [], "dataset": ["Market-1501"], "metric": ["Rank-1", "MAP"], "title": "Pose-driven Deep Convolutional Model for Person Re-identification"} {"abstract": "Aspect-level sentiment classification (ASC) aims at identifying sentiment\npolarities towards aspects in a sentence, where the aspect can behave as a\ngeneral Aspect Category (AC) or a specific Aspect Term (AT). 
However, due to\nthe especially expensive and labor-intensive labeling, existing public corpora\nin AT-level are all relatively small. Meanwhile, most of the previous methods\nrely on complicated structures with given scarce data, which largely limits the\nefficacy of the neural models. In this paper, we exploit a new direction named\ncoarse-to-fine task transfer, which aims to leverage knowledge learned from a\nrich-resource source domain of the coarse-grained AC task, which is more easily\naccessible, to improve the learning in a low-resource target domain of the\nfine-grained AT task. To resolve both the aspect granularity inconsistency and\nfeature mismatch between domains, we propose a Multi-Granularity Alignment\nNetwork (MGAN). In MGAN, a novel Coarse2Fine attention guided by an auxiliary\ntask can help the AC task modeling at the same fine-grained level with the AT\ntask. To alleviate the feature false alignment, a contrastive feature alignment\nmethod is adopted to align aspect-specific feature representations\nsemantically. In addition, a large-scale multi-domain dataset for the AC task\nis provided. Empirically, extensive experiments demonstrate the effectiveness\nof the MGAN.", "field": [], "task": ["Sentiment Analysis"], "method": [], "dataset": ["SemEval 2014 Task 4 Sub Task 2"], "metric": ["Laptop (Acc)", "Restaurant (Acc)", "Mean Acc (Restaurant + Laptop)"], "title": "Exploiting Coarse-to-Fine Task Transfer for Aspect-level Sentiment Classification"} {"abstract": "Mining informative negative instances are of central importance to deep metric learning (DML), however this task is intrinsically limited by mini-batch training, where only a mini-batch of instances is accessible at each iteration. In this paper, we identify a \"slow drift\" phenomena by observing that the embedding features drift exceptionally slow even as the model parameters are updating throughout the training process. This suggests that the features of instances computed at preceding iterations can be used to considerably approximate their features extracted by the current model. We propose a cross-batch memory (XBM) mechanism that memorizes the embeddings of past iterations, allowing the model to collect sufficient hard negative pairs across multiple mini-batches - even over the whole dataset. Our XBM can be directly integrated into a general pair-based DML framework, where the XBM augmented DML can boost performance considerably. In particular, without bells and whistles, a simple contrastive loss with our XBM can have large R@1 improvements of 12%-22.5% on three large-scale image retrieval datasets, surpassing the most sophisticated state-of-the-art methods, by a large margin. Our XBM is conceptually simple, easy to implement - using several lines of codes, and is memory efficient - with a negligible 0.2 GB extra GPU memory. Code is available at: https://github.com/MalongTech/research-xbm.", "field": [], "task": ["Image Retrieval", "Metric Learning"], "method": [], "dataset": ["In-Shop", "SOP"], "metric": ["R@1"], "title": "Cross-Batch Memory for Embedding Learning"} {"abstract": "Face recognition performance evaluation has traditionally focused on\none-to-one verification, popularized by the Labeled Faces in the Wild dataset\nfor imagery and the YouTubeFaces dataset for videos. In contrast, the newly\nreleased IJB-A face recognition dataset unifies evaluation of one-to-many face\nidentification with one-to-one face verification over templates, or sets of\nimagery and videos for a subject. 
In this paper, we study the problem of\ntemplate adaptation, a form of transfer learning to the set of media in a\ntemplate. Extensive performance evaluations on IJB-A show a surprising result,\nthat perhaps the simplest method of template adaptation, combining deep\nconvolutional network features with template specific linear SVMs, outperforms\nthe state-of-the-art by a wide margin. We study the effects of template size,\nnegative set construction and classifier fusion on performance, then compare\ntemplate adaptation to convolutional networks with metric learning, 2D and 3D\nalignment. Our unexpected conclusion is that these other methods, when combined\nwith template adaptation, all achieve nearly the same top performance on IJB-A\nfor template-based face verification and identification.", "field": [], "task": ["Face Identification", "Face Recognition", "Face Verification", "Metric Learning", "Transfer Learning"], "method": [], "dataset": ["IJB-A"], "metric": ["TAR @ FAR=0.01"], "title": "Template Adaptation for Face Verification and Identification"} {"abstract": "We investigate the problem of fine-grained sketch-based image retrieval (SBIR), where free-hand human sketches are used as queries to perform instance-level retrieval of images. This is an extremely challenging task because (i) visual comparisons not only need to be fine-grained but also executed cross-domain, (ii) free-hand (finger) sketches are highly abstract, making fine-grained matching harder, and most importantly (iii) annotated cross-domain sketch-photo datasets required for training are scarce, challenging many state-of-the-art machine learning techniques. In this paper, for the first time, we address all these challenges, providing a step towards the capabilities that would underpin a commercial sketch-based image retrieval application. We introduce a new database of 1,432 sketch-photo pairs from two categories with 32,000 fine-grained triplet ranking annotations. We then develop a deep triplet-ranking model for instance-level SBIR with a novel data augmentation and staged pre-training strategy to alleviate the issue of insufficient fine-grained training data. Extensive experiments are carried out to contribute a variety of insights into the challenges of data sufficiency and over-fitting avoidance when training deep networks for fine-grained cross-domain ranking tasks. ", "field": [], "task": ["Data Augmentation", "Image Retrieval", "Sketch-Based Image Retrieval"], "method": [], "dataset": ["Handbags", "Chairs"], "metric": ["R@10", "R@1"], "title": "Sketch Me That Shoe"} {"abstract": "In the context of fine-grained visual categorization, the ability to\ninterpret models as human-understandable visual manuals is sometimes as\nimportant as achieving high classification accuracy. In this paper, we propose\na novel Part-Stacked CNN architecture that explicitly explains the fine-grained\nrecognition process by modeling subtle differences from object parts. Based on\nmanually-labeled strong part annotations, the proposed architecture consists of\na fully convolutional network to locate multiple object parts and a two-stream\nclassification network that en- codes object-level and part-level cues\nsimultaneously. By adopting a set of sharing strategies between the computation\nof multiple object parts, the proposed architecture is very efficient running\nat 20 frames/sec during inference. 
Experimental results on the CUB-200-2011\ndataset reveal the effectiveness of the proposed architecture, from both the\nperspective of classification accuracy and model interpretability.", "field": [], "task": ["Fine-Grained Image Classification", "Fine-Grained Visual Categorization"], "method": [], "dataset": [" CUB-200-2011"], "metric": ["Accuracy"], "title": "Part-Stacked CNN for Fine-Grained Visual Categorization"} {"abstract": "Residual representation learning simplifies the optimization problem of learning complex functions and has been widely used by traditional convolutional neural networks. However, it has not been applied to deep neural decision forest (NDF). In this paper we incorporate residual learning into NDF and the resulting model achieves state-of-the-art level accuracy on three public age estimation benchmarks while requiring less memory and computation. We further employ gradient-based technique to visualize the decision-making process of NDF and understand how it is influenced by facial image inputs. The code and pre-trained models will be available at https://github.com/Nicholasli1995/VisualizingNDF.", "field": [], "task": ["Age Estimation", "Decision Making", "Representation Learning"], "method": [], "dataset": ["CACD"], "metric": ["MAE"], "title": "Facial age estimation by deep residual decision making"} {"abstract": "Sentiment-to-sentiment transfer involves changing the sentiment of the given text while preserving the underlying information. In this work, we present a model SentiInc for sentiment-to-sentiment transfer using unpaired mono-sentiment data. Existing sentiment-to-sentiment transfer models ignore the valuable sentiment-specific details already present in the text. We address this issue by providing a simple framework for encoding sentiment-specific information in the target sentence while preserving the content information. This is done by incorporating sentiment based loss in the back-translation based style transfer. Extensive experiments over the Yelp dataset show that the SentiInc outperforms state-of-the-art methods by a margin of as large as ~11% in G-score. The results also demonstrate that our model produces sentiment-accurate and information-preserved sentences.", "field": [], "task": ["Style Transfer", "Text Generation", "Text Style Transfer"], "method": [], "dataset": ["Yelp Review Dataset (Large)", "Yelp Review Dataset (Small)"], "metric": ["G-Score (BLEU, Accuracy)"], "title": "SentiInc: Incorporating Sentiment Information into Sentiment Transfer Without Parallel Data"} {"abstract": "We develop new representations and algorithms for three-dimensional (3D) object detection and spatial layout prediction in cluttered indoor scenes. RGB-D images are traditionally described by local geometric features of the 3D point cloud. We propose a cloud of oriented gradient (COG) descriptor that links the 2D appearance and 3D pose of object categories, and thus accurately models how perspective projection affects perceived image boundaries. We also propose a \"Manhattan voxel\" representation which better captures the 3D room layout geometry of common indoor environments. Effective classification rules are learned via a structured prediction framework that accounts for the intersection-over-union overlap of hypothesized 3D cuboids with human annotations, as well as orientation estimation errors. &#13;
Contextual relationships among categories and layout are captured via a cascade of classifiers, leading to holistic scene hypotheses with improved accuracy. Our model is learned solely from annotated RGB-D images, without the benefit of CAD models, but nevertheless its performance substantially exceeds the state-of-the-art on the SUN RGB-D database. Avoiding CAD models allows easier learning of detectors for many object categories. ", "field": [], "task": ["3D Object Detection", "Object Detection", "Structured Prediction"], "method": [], "dataset": ["SUN-RGBD val"], "metric": ["MAP"], "title": "Three-Dimensional Object Detection and Layout Prediction Using Clouds of Oriented Gradients"} {"abstract": "We propose Neural Graph Matching (NGM) Networks, a novel framework that can learn to recognize a previous unseen 3D action class with only a few examples. We achieve this by leveraging the inherent structure of 3D data through a graphical representation. This allows us to modularize our model and lead to strong data-efficiency in few-shot learning. More specifically, NGM Networks jointly learn a graph generator and a graph matching metric function in a end-to-end fashion to directly optimize the few-shot learning objective. We evaluate NGM on two 3D action recognition datasets, CAD-120 and PiGraphs, and show that learning to generate and match graphs both lead to significant improvement of few-shot 3D action recognition over the holistic baselines.", "field": [], "task": ["3D Action Recognition", "Action Recognition", "Few-Shot Learning", "Graph Matching", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["CAD-120"], "metric": ["Accuracy"], "title": "Neural Graph Matching Networks for Fewshot 3D Action Recognition"} {"abstract": "While there is a large body of research studying deep learning methods for text generation from structured data, almost all of it focuses purely on English. In this paper, we study the effectiveness of machine translation based pre-training for data-to-text generation in non-English languages. Since the structured data is generally expressed in English, text generation into other languages involves elements of translation, transliteration and copying - elements already encoded in neural machine translation systems. Moreover, since data-to-text corpora are typically small, this task can benefit greatly from pre-training. Based on our experiments on Czech, a morphologically complex language, we find that pre-training lets us train end-to-end models with significantly improved performance, as judged by automatic metrics and human evaluation. We also show that this approach enjoys several desirable properties, including improved performance in low data scenarios and robustness to unseen slot values.", "field": [], "task": ["Data-to-Text Generation", "Machine Translation", "Text Generation", "Transliteration"], "method": [], "dataset": ["Czech Restaurant NLG"], "metric": ["CIDER", "BLEU score", "METEOR", "NIST"], "title": "Machine Translation Pre-training for Data-to-Text Generation -- A Case Study in Czech"} {"abstract": "We propose a weighted variational model to estimate both the reflectance and the illumination from an observed image. We show that, though it is widely adopted for ease of modeling, the log-transformed image for this task is not ideal. 
Based on the previous investigation of the logarithmic transformation, a new weighted variational model is proposed for better prior representation, which is imposed in the regularization terms. Different from conventional variational models, the proposed model can preserve the estimated reflectance with more details. Moreover, the proposed model can suppress noise to some extent. An alternating minimization scheme is adopted to solve the proposed model. Experimental results demonstrate the effectiveness of the proposed model with its algorithm. Compared with other variational methods, the proposed method yields comparable or better results on both subjective and objective assessments.", "field": [], "task": [], "method": [], "dataset": ["DICM", "VV", "MEF"], "metric": ["User Study Score"], "title": "A Weighted Variational Model for Simultaneous Reflectance and Illumination Estimation"} {"abstract": "Identifying human action segments in an untrimmed video is still challenging due to boundary ambiguity and over-segmentation issues. To address these problems, we present a new boundary-aware cascade network by introducing two novel components. First, we devise a new cascading paradigm, called Stage Cascade, to enable our model to have adaptive receptive fields and more confident predictions for ambiguous frames. Second, we design a general and principled smoothing operation, termed as local barrier pooling, to aggregate local predictions by leveraging semantic boundary information. Moreover, these two components can be jointly fine-tuned in an end-to-end manner. We perform experiments on three challenging datasets: 50Salads, GTEA and Breakfast dataset, demonstrating that our framework significantly out-performs the current state-of-the-art methods. The code is available at https://github.com/MCG-NJU/BCN.", "field": [], "task": ["Action Segmentation"], "method": [], "dataset": ["50 Salads", "Breakfast", "GTEA"], "metric": ["Acc", "Edit", "F1@10%", "F1@25%", "F1@50%"], "title": "Boundary-Aware Cascade Networks for Temporal Action Segmentation"} {"abstract": "Rich semantic relations are important in a variety of visual recognition\nproblems. As a concrete example, group activity recognition involves the\ninteractions and relative spatial relations of a set of people in a scene.\nState of the art recognition methods center on deep learning approaches for\ntraining highly effective, complex classifiers for interpreting images.\nHowever, bridging the relatively low-level concepts output by these methods to\ninterpret higher-level compositional scenes remains a challenge. Graphical\nmodels are a standard tool for this task. In this paper, we propose a method to\nintegrate graphical models and deep neural networks into a joint framework.\nInstead of using a traditional inference method, we use a sequential inference\nmodeled by a recurrent neural network. 
Beyond this, the appropriate structure\nfor inference can be learned by imposing gates on edges between nodes.\nEmpirical results on group activity recognition demonstrate the potential of\nthis model to handle highly structured learning tasks.", "field": [], "task": ["Activity Recognition", "Group Activity Recognition"], "method": [], "dataset": ["Collective Activity"], "metric": ["Accuracy"], "title": "Structure Inference Machines: Recurrent Neural Networks for Analyzing Relations in Group Activity Recognition"} {"abstract": "Inspired by recent successes of deep learning in computer vision, we propose\na novel application of deep convolutional neural networks to facial expression\nrecognition, in particular smile recognition. A smile recognition test accuracy\nof 99.45% is achieved for the Denver Intensity of Spontaneous Facial Action\n(DISFA) database, significantly outperforming existing approaches based on\nhand-crafted features with accuracies ranging from 65.55% to 79.67%. The\nnovelty of this approach includes a comprehensive model selection of the\narchitecture parameters, allowing to find an appropriate architecture for each\nexpression such as smile. This is feasible because all experiments were run on\na Tesla K40c GPU, allowing a speedup of factor 10 over traditional computations\non a CPU.", "field": [], "task": ["Facial Expression Recognition", "Model Selection", "Smile Recognition"], "method": [], "dataset": ["DISFA"], "metric": ["Accuracy"], "title": "Deep Learning For Smile Recognition"} {"abstract": "Progress in text understanding has been driven by large datasets that test\nparticular capabilities, like recent datasets for reading comprehension\n(Hermann et al., 2015). We focus here on the LAMBADA dataset (Paperno et al.,\n2016), a word prediction task requiring broader context than the immediate\nsentence. We view LAMBADA as a reading comprehension problem and apply\ncomprehension models based on neural networks. Though these models are\nconstrained to choose a word from the context, they improve the state of the\nart on LAMBADA from 7.3% to 49%. We analyze 100 instances, finding that neural\nnetwork readers perform well in cases that involve selecting a name from the\ncontext based on dialogue or discourse cues but struggle when coreference\nresolution or external knowledge is needed.", "field": [], "task": ["Coreference Resolution", "Language Modelling", "Reading Comprehension"], "method": [], "dataset": ["LAMBADA"], "metric": ["Accuracy"], "title": "Broad Context Language Modeling as Reading Comprehension"} {"abstract": "Learning embeddings of entities and relations is an efficient and versatile\nmethod to perform machine learning on relational data such as knowledge graphs.\nIn this work, we propose holographic embeddings (HolE) to learn compositional\nvector space representations of entire knowledge graphs. The proposed method is\nrelated to holographic models of associative memory in that it employs circular\ncorrelation to create compositional representations. By using correlation as\nthe compositional operator HolE can capture rich interactions but\nsimultaneously remains efficient to compute, easy to train, and scalable to\nvery large datasets. 
In extensive experiments we show that holographic\nembeddings are able to outperform state-of-the-art methods for link prediction\nin knowledge graphs and relational learning benchmark datasets.", "field": [], "task": ["Knowledge Graphs", "Link Prediction", "Relational Reasoning"], "method": [], "dataset": ["FB15k", "WN18"], "metric": ["Hits@10", "Hits@3", "Hits@1"], "title": "Holographic Embeddings of Knowledge Graphs"} {"abstract": "We propose a novel neural method to extract drug-drug interactions (DDIs)\nfrom texts using external drug molecular structure information. We encode\ntextual drug pairs with convolutional neural networks and their molecular pairs\nwith graph convolutional networks (GCNs), and then we concatenate the outputs\nof these two networks. In the experiments, we show that GCNs can predict DDIs\nfrom the molecular structures of drugs in high accuracy and the molecular\ninformation can enhance text-based DDI extraction by 2.39 percent points in the\nF-score on the DDIExtraction 2013 shared task data set.", "field": [], "task": ["Drug\u2013drug Interaction Extraction"], "method": [], "dataset": ["DDI extraction 2013 corpus"], "metric": ["F1", "Micro F1"], "title": "Enhancing Drug-Drug Interaction Extraction from Texts by Molecular Structure Information"} {"abstract": "This paper proposes to improve visual question answering (VQA) with\nstructured representations of both scene contents and questions. A key\nchallenge in VQA is to require joint reasoning over the visual and text\ndomains. The predominant CNN/LSTM-based approach to VQA is limited by\nmonolithic vector representations that largely ignore structure in the scene\nand in the form of the question. CNN feature vectors cannot effectively capture\nsituations as simple as multiple object instances, and LSTMs process questions\nas series of words, which does not reflect the true complexity of language\nstructure. We instead propose to build graphs over the scene objects and over\nthe question words, and we describe a deep neural network that exploits the\nstructure in these representations. This shows significant benefit over the\nsequential processing of LSTMs. The overall efficacy of our approach is\ndemonstrated by significant improvements over the state-of-the-art, from 71.2%\nto 74.4% in accuracy on the \"abstract scenes\" multiple-choice benchmark, and\nfrom 34.7% to 39.1% in accuracy over pairs of \"balanced\" scenes, i.e. images\nwith fine-grained differences and opposite yes/no answers to a same question.", "field": [], "task": ["Question Answering", "Visual Question Answering"], "method": [], "dataset": ["COCO Visual Question Answering (VQA) abstract 1.0 multiple choice", "COCO Visual Question Answering (VQA) abstract images 1.0 open ended"], "metric": ["Percentage correct"], "title": "Graph-Structured Representations for Visual Question Answering"} {"abstract": "Attention mechanisms in biological perception are thought to select subsets\nof perceptual information for more sophisticated processing which would be\nprohibitive to perform on all sensory inputs. In computer vision, however,\nthere has been relatively little exploration of hard attention, where some\ninformation is selectively ignored, in spite of the success of soft attention,\nwhere information is re-weighted and aggregated, but never filtered out. 
Here,\nwe introduce a new approach for hard attention and find it achieves very\ncompetitive performance on a recently-released visual question answering\ndataset, equalling and in some cases surpassing similar soft attention\narchitectures while entirely ignoring some features. Even though the hard\nattention mechanism is thought to be non-differentiable, we found that the\nfeature magnitudes correlate with semantic relevance, and provide a useful\nsignal for our mechanism's attentional selection criterion. Because hard\nattention selects important features of the input information, it can also be\nmore efficient than analogous soft attention mechanisms. This is especially\nimportant for recent approaches that use non-local pairwise operations, whereby\ncomputational and memory costs are quadratic in the size of the set of\nfeatures.", "field": [], "task": ["Question Answering", "Visual Question Answering"], "method": [], "dataset": ["CLEVR", "VQA-CP"], "metric": ["Score", "Accuracy"], "title": "Learning Visual Question Answering by Bootstrapping Hard Attention"} {"abstract": "In this work, we tackle the problem of estimating 3D human pose in camera space from a monocular image. First, we propose to use densely-generated limb depth maps to ease the learning of body joints depth, which are well aligned with image cues. Then, we design a lifting module from 2D pixel coordinates to 3D camera coordinates which explicitly takes the depth values as inputs, and is aligned with the camera perspective projection model. We show our method achieves superior performance on large-scale 3D pose datasets Human3.6M and MPI-INF-3DHP, and sets the new state-of-the-art.", "field": [], "task": ["3D Human Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["Human3.6M", "MPI-INF-3DHP"], "metric": ["Average MPJPE (mm)", "Using 2D ground-truth joints", "Multi-View or Monocular", "AUC", "3DPCK"], "title": "3D Human Pose Estimation via Explicit Compositional Depth Maps"} {"abstract": "Recently, Long Short-Term Memory (LSTM) has become a popular choice to model\nindividual dynamics for single-person action recognition due to its ability of\nmodeling the temporal information in various ranges of dynamic contexts.\nHowever, existing RNN models only focus on capturing the temporal dynamics of\nthe person-person interactions by naively combining the activity dynamics of\nindividuals or modeling them as a whole. This neglects the inter-related\ndynamics of how person-person interactions change over time. To this end, we\npropose a novel Concurrence-Aware Long Short-Term Sub-Memories (Co-LSTSM) to\nmodel the long-term inter-related dynamics between two interacting people on\nthe bounding boxes covering people. Specifically, for each frame, two\nsub-memory units store individual motion information, while a concurrent LSTM\nunit selectively integrates and stores inter-related motion information between\ninteracting people from these two sub-memory units via a new co-memory cell.\nExperimental results on the BIT and UT datasets show the superiority of\nCo-LSTSM compared with the state-of-the-art methods.", "field": [], "task": ["Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["UT", "BIT"], "metric": ["Accuracy"], "title": "Concurrence-Aware Long Short-Term Sub-Memories for Person-Person Action Recognition"} {"abstract": "The weakly supervised object detection (WSOD) task uses only image-level annotations to train an object detector. &#13;
WSOD does not require time-consuming instance-level annotations, so the study of this task has attracted more and more attention. Previous weakly supervised object detection methods iteratively update detectors and pseudo-labels, or use feature-based mask-out methods. Most of these methods do not generate complete and accurate proposals, often only the most discriminative parts of the object, or too many background areas. To solve this problem, we added the box regression module to the weakly supervised object detection network and proposed a proposal scoring network (PSNet) to supervise it. The box regression module modifies proposal to improve the IoU of proposal and ground truth. PSNet scores the proposal output from the box regression network and utilize the score to improve the box regression module. In addition, we take advantage of the PRS algorithm for generating a more accurate pseudo label to train the box regression module. Using these methods, we train the detector on the PASCAL VOC 2007 and 2012 and obtain significantly improved results.", "field": [], "task": ["Object Detection", "Regression", "Weakly Supervised Object Detection"], "method": [], "dataset": ["PASCAL VOC 2007"], "metric": ["MAP"], "title": "WSOD with PSNet and Box Regression"} {"abstract": "the main aim of this project is to achieve fine\r\ngrain image classification by applying a suitable machine\r\nlearning architecture to the set of images present in the\r\ndataset. The chosen dataset is taken as a part of the Kaggle\r\ncompetition and is selected from Wikipedia's monkey\r\ncladogram and this dataset contains 10 different species of\r\nmonkeys which are to be classified with the help of a\r\nmachine learning architecture augmented by Image\r\nprocessing. After having brief exposure and using several\r\narchitectures to classify this dataset, the Convolutional\r\nNeural network was found to be the best fit.", "field": [], "task": ["Fine-Grained Image Classification", "Image Classification"], "method": [], "dataset": ["10 Monkey Species"], "metric": ["Accuracy"], "title": "Performing Image Classification for 10 Different Monkey Species using CNN"} {"abstract": "Multimodal sentiment analysis has recently gained popularity because of its relevance to social media posts, customer service calls and video blogs. In this paper, we address three aspects of multimodal sentiment analysis; 1. Cross modal interaction learning, i.e. how multiple modalities contribute to the sentiment, 2. Learning long-term dependencies in multimodal interactions and 3. Fusion of unimodal and cross modal cues. Out of these three, we find that learning cross modal interactions is beneficial for this problem. We perform experiments on two benchmark datasets, CMU Multimodal Opinion level Sentiment Intensity (CMU-MOSI) and CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI) corpus. Our approach on both these tasks yields accuracies of 83.9% and 81.1% respectively, which is 1.6% and 1.34% absolute improvement over current state-of-the-art.", "field": [], "task": ["Multimodal Sentiment Analysis", "Sentiment Analysis"], "method": [], "dataset": ["CMU-MOSEI", "MOSI"], "metric": ["F1 score", "Accuracy"], "title": "Gated Mechanism for Attention Based Multimodal Sentiment Analysis"} {"abstract": "Neural network-based clustering has recently gained popularity, and in\nparticular a constrained clustering formulation has been proposed to perform\ntransfer learning and image category discovery using deep learning. 
The core\nidea is to formulate a clustering objective with pairwise constraints that can\nbe used to train a deep clustering network; therefore the cluster assignments\nand their underlying feature representations are jointly optimized end-to-end.\nIn this work, we provide a novel clustering formulation to address scalability\nissues of previous work in terms of optimizing deeper networks and larger\namounts of categories. The proposed objective directly minimizes the negative\nlog-likelihood of cluster assignment with respect to the pairwise constraints,\nhas no hyper-parameters, and demonstrates improved scalability and performance\non both supervised learning and unsupervised transfer learning.", "field": [], "task": ["Deep Clustering", "Ecg Risk Stratification", "Transfer Learning"], "method": [], "dataset": ["ngm"], "metric": ["520"], "title": "A probabilistic constrained clustering for transfer learning and image category discovery"} {"abstract": "Domain Adaptation (DA) approaches achieved significant improvements in a wide range of machine learning and computer vision tasks (i.e., classification, detection, and segmentation). However, as far as we are aware, there are few methods yet to achieve domain adaptation directly on 3D point cloud data. The unique challenge of point cloud data lies in its abundant spatial geometric information, and the semantics of the whole object is contributed by including regional geometric structures. Specifically, most general-purpose DA methods that struggle for global feature alignment and ignore local geometric information are not suitable for 3D domain alignment. In this paper, we propose a novel 3D Domain Adaptation Network for point cloud data (PointDAN). PointDAN jointly aligns the global and local features in multi-level. For local alignment, we propose Self-Adaptive (SA) node module with an adjusted receptive field to model the discriminative local structures for aligning domains. To represent hierarchically scaled features, node-attention module is further introduced to weight the relationship of SA nodes across objects and domains. For global alignment, an adversarial-training strategy is employed to learn and align global features across domains. Since there is no common evaluation benchmark for 3D point cloud DA scenario, we build a general benchmark (i.e., PointDA-10) extracted from three popular 3D object/scene datasets (i.e., ModelNet, ShapeNet and ScanNet) for cross-domain 3D objects classification fashion. Extensive experiments on PointDA-10 illustrate the superiority of our model over the state-of-the-art general-purpose DA methods.", "field": [], "task": ["Domain Adaptation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["PreSIL to KITTI"], "metric": ["AP@0.7"], "title": "PointDAN: A Multi-Scale 3D Domain Adaption Network for Point Cloud Representation"} {"abstract": "Automated counting of people in crowd images is a challenging task. The major\ndifficulty stems from the large diversity in the way people appear in crowds.\nIn fact, features available for crowd discrimination largely depend on the\ncrowd density to the extent that people are only seen as blobs in a highly\ndense scene. We tackle this problem with a growing CNN which can progressively\nincrease its capacity to account for the wide variability seen in crowd scenes.\nOur model starts from a base CNN density regressor, which is trained in\nequivalence on all types of crowd images. 
In order to adapt with the huge\ndiversity, we create two child regressors which are exact copies of the base\nCNN. A differential training procedure divides the dataset into two clusters\nand fine-tunes the child networks on their respective specialties.\nConsequently, without any hand-crafted criteria for forming specialties, the\nchild regressors become experts on certain types of crowds. The child networks\nare again split recursively, creating two experts at every division. This\nhierarchical training leads to a CNN tree, where the child regressors are more\nfine experts than any of their parents. The leaf nodes are taken as the final\nexperts and a classifier network is then trained to predict the correct\nspecialty for a given test image patch. The proposed model achieves higher\ncount accuracy on major crowd datasets. Further, we analyse the characteristics\nof specialties mined automatically by our method.", "field": [], "task": ["Crowd Counting"], "method": [], "dataset": ["UCF CC 50", "ShanghaiTech A", "WorldExpo\u201910", "ShanghaiTech B"], "metric": ["MAE", "Average MAE"], "title": "Divide and Grow: Capturing Huge Diversity in Crowd Images with Incrementally Growing CNN"} {"abstract": "This paper presents a novel deep learning architecture for word-level lipreading. Previous works suggest a potential for incorporating a pretrained deep 3D Convolutional Neural Networks as a front-end feature extractor. We introduce a SpotFast networks, a variant of the state-of-the-art SlowFast networks for action recognition, which utilizes a temporal window as a spot pathway and all frames as a fast pathway. We further incorporate memory augmented lateral transformers to learn sequential features for classification. We evaluate the proposed model on the LRW dataset. The experiments show that our proposed model outperforms various state-of-the-art models and incorporating the memory augmented lateral transformers makes a 3.7% improvement to the SpotFast networks.", "field": [], "task": ["Action Recognition", "Lipreading"], "method": [], "dataset": ["Lip Reading in the Wild"], "metric": ["Top-1 Accuracy"], "title": "SpotFast Networks with Memory Augmented Lateral Transformers for Lipreading"} {"abstract": "Bilinear models such as DistMult and ComplEx are effective methods for knowledge graph (KG) completion. However, they require large batch sizes, which becomes a performance bottleneck when training on large scale datasets due to memory constraints. In this paper we use occurrences of entity-relation pairs in the dataset to construct a joint learning model and to increase the quality of sampled negatives during training. We show on three standard datasets that when these two techniques are combined, they give a significant improvement in performance, especially when the batch size and the number of generated negative examples are low relative to the size of the dataset. 
We then apply our techniques to a dataset containing 2 million entities and demonstrate that our model outperforms the baseline by 2.8% absolute on hits@1.", "field": [], "task": ["Knowledge Graph Completion", "Link Prediction"], "method": [], "dataset": [" FB15k", "FB15k-237"], "metric": ["Hits@10", "MRR", "Hits@3", "Hits@1"], "title": "Using Pairwise Occurrence Information to Improve Knowledge Graph Completion on Large-Scale Datasets"} {"abstract": "Synthesizing realistic images from text descriptions on a dataset like\nMicrosoft Common Objects in Context (MS COCO), where each image can contain\nseveral objects, is a challenging task. Prior work has used text captions to\ngenerate images. However, captions might not be informative enough to capture\nthe entire image and insufficient for the model to be able to understand which\nobjects in the images correspond to which words in the captions. We show that\nadding a dialogue that further describes the scene leads to significant\nimprovement in the inception score and in the quality of generated images on\nthe MS COCO dataset.", "field": [], "task": ["Image Generation", "Text-to-Image Generation"], "method": [], "dataset": ["COCO"], "metric": ["Inception score"], "title": "ChatPainter: Improving Text to Image Generation using Dialogue"} {"abstract": "Background\r\nIdentifying key variables such as disorders within the clinical narratives in electronic health records has wide-ranging applications within clinical practice and biomedical research. Previous research has demonstrated reduced performance of disorder named entity recognition (NER) and normalization (or grounding) in clinical narratives than in biomedical publications. In this work, we aim to identify the cause for this performance difference and introduce general solutions.\r\n\r\nMethods\r\nWe use closure properties to compare the richness of the vocabulary in clinical narrative text to biomedical publications. We approach both disorder NER and normalization using machine learning methodologies. Our NER methodology is based on linear-chain conditional random fields with a rich feature approach, and we introduce several improvements to enhance the lexical knowledge of the NER system. Our normalization method \u2013 never previously applied to clinical data \u2013 uses pairwise learning to rank to automatically learn term variation directly from the training data.\r\n\r\nResults\r\nWe find that while the size of the overall vocabulary is similar between clinical narrative and biomedical publications, clinical narrative uses a richer terminology to describe disorders than publications. We apply our system, DNorm-C, to locate disorder mentions and in the clinical narratives from the recent ShARe/CLEF eHealth Task. For NER (strict span-only), our system achieves precision = 0.797, recall = 0.713, f-score = 0.753. For the normalization task (strict span + concept) it achieves precision = 0.712, recall = 0.637, f-score = 0.672. The improvements described in this article increase the NER f-score by 0.039 and the normalization f-score by 0.036. We also describe a high recall version of the NER, which increases the normalization recall to as high as 0.744, albeit with reduced precision.\r\n\r\nDiscussion\r\nWe perform an error analysis, demonstrating that NER errors outnumber normalization errors by more than 4-to-1. 
Abbreviations and acronyms are found to be frequent causes of error, in addition to the mentions the annotators were not able to identify within the scope of the controlled vocabulary.\r\n\r\nConclusion\r\nDisorder mentions in text from clinical narratives use a rich vocabulary that results in high term variation, which we believe to be one of the primary causes of reduced performance in clinical narrative. We show that pairwise learning to rank offers high performance in this context, and introduce several lexical enhancements \u2013 generalizable to other clinical NER tasks \u2013 that improve the ability of the NER system to handle this variation. DNorm-C is a high performing, open source system for disorders in clinical text, and a promising step toward NER and normalization methods that are trainable to a wide variety of domains and entities. (DNorm-C is open source software, and is available with a trained model at the DNorm demonstration website: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/tmTools/#DNorm.)", "field": [], "task": ["Learning-To-Rank", "Medical Named Entity Recognition", "Named Entity Recognition"], "method": [], "dataset": ["ShARe/CLEF eHealth corpus"], "metric": ["Precision", "Recall", "F1"], "title": "Challenges in clinical natural language processing for automated disorder normalization"} {"abstract": "Epilepsy is the most common neurological disorder and an accurate forecast of\nseizures would help to overcome the patient's uncertainty and helplessness. In\nthis contribution, we present and discuss a novel methodology for the\nclassification of intracranial electroencephalography (iEEG) for seizure\nprediction. Contrary to previous approaches, we categorically refrain from an\nextraction of hand-crafted features and use a convolutional neural network\n(CNN) topology instead for both the determination of suitable signal\ncharacteristics and the binary classification of preictal and interictal\nsegments. Three different models have been evaluated on public datasets with\nlong-term recordings from four dogs and three patients. Overall, our findings\ndemonstrate the general applicability. In this work we discuss the strengths\nand limitations of our methodology.", "field": [], "task": ["Seizure prediction"], "method": [], "dataset": ["Melbourne University Seizure Prediction"], "metric": ["AUC"], "title": "Convolutional Neural Networks for Epileptic Seizure Prediction"} {"abstract": "Atrial fibrillation (AF) is the most common cardiac arrhythmias causing morbidity and mortality. AF may appear as episodes of very short (i.e., proximal AF) or sustained duration (i.e., persistent AF), either form of which causes irregular ventricular excitations that affect the global function of the heart. It is an unmet challenge for early and automatic detection of AF, limiting efficient treatment strategies for AF. In this study, we developed a new method based on continuous wavelet transform and 2D convolutional neural networks (CNNs) to detect AF episodes. The proposed method analyzed the time-frequency features of the electrocardiogram (ECG), thus being different to conventional AF detecting methods that implement isolating atrial or ventricular activities. Then a 2D CNN was trained to improve AF detection performance. The MIT-BIH Atrial Fibrillation Database was used for evaluating the algorithm. The efficacy of the proposed method was compared with those of some existing methods, most of which implemented the same dataset. 
The newly developed algorithm using CNNs achieved 99.41, 98.91, 99.39, and 99.23% for the sensitivity, specificity, positive predictive value, and overall accuracy (ACC) respectively. As the proposed algorithm targets the time-frequency feature of ECG signals rather than isolated atrial or ventricular activity, it has the ability to detect AF episodes for using just five beats, suggesting practical applications in the future.", "field": [], "task": ["Atrial Fibrillation Detection", "Electrocardiography (ECG)"], "method": [], "dataset": ["MIT-BIH AF"], "metric": ["Accuracy"], "title": "Automatic Detection of Atrial Fibrillation Based on Continuous Wavelet Transform and 2D Convolutional Neural Networks"} {"abstract": "Most of existing image denoising methods assume the corrupted noise to be\nadditive white Gaussian noise (AWGN). However, the realistic noise in\nreal-world noisy images is much more complex than AWGN, and is hard to be\nmodelled by simple analytical distributions. As a result, many state-of-the-art\ndenoising methods in literature become much less effective when applied to\nreal-world noisy images captured by CCD or CMOS cameras. In this paper, we\ndevelop a trilateral weighted sparse coding (TWSC) scheme for robust real-world\nimage denoising. Specifically, we introduce three weight matrices into the data\nand regularisation terms of the sparse coding framework to characterise the\nstatistics of realistic noise and image priors. TWSC can be reformulated as a\nlinear equality-constrained problem and can be solved by the alternating\ndirection method of multipliers. The existence and uniqueness of the solution\nand convergence of the proposed algorithm are analysed. Extensive experiments\ndemonstrate that the proposed TWSC scheme outperforms state-of-the-art\ndenoising methods on removing realistic noise.", "field": [], "task": ["Denoising", "Image Denoising"], "method": [], "dataset": ["Darmstadt Noise Dataset"], "metric": ["SSIM (sRGB)", "PSNR", "PSNR (sRGB)"], "title": "A Trilateral Weighted Sparse Coding Scheme for Real-World Image Denoising"} {"abstract": "This paper addresses the task of segmenting moving objects in unconstrained\nvideos. We introduce a novel two-stream neural network with an explicit memory\nmodule to achieve this. The two streams of the network encode spatial and\ntemporal features in a video sequence respectively, while the memory module\ncaptures the evolution of objects over time. The module to build a \"visual\nmemory\" in video, i.e., a joint representation of all the video frames, is\nrealized with a convolutional recurrent unit learned from a small number of\ntraining video sequences. Given a video frame as input, our approach assigns\neach pixel an object or background label based on the learned spatio-temporal\nfeatures as well as the \"visual memory\" specific to the video, acquired\nautomatically without any manually-annotated frames. The visual memory is\nimplemented with convolutional gated recurrent units, which allows to propagate\nspatial information over time. We evaluate our method extensively on two\nbenchmarks, DAVIS and Freiburg-Berkeley motion segmentation datasets, and show\nstate-of-the-art results. For example, our approach outperforms the top method\non the DAVIS dataset by nearly 6%. 
We also provide an extensive ablative\nanalysis to investigate the influence of each component in the proposed\nframework.", "field": [], "task": ["Motion Segmentation", "Semantic Segmentation", "Unsupervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["SegTrack v2", "DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "Mean IoU", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "Learning Video Object Segmentation with Visual Memory"} {"abstract": "Training recurrent neural networks to model long term dependencies is\ndifficult. Hence, we propose to use external linguistic knowledge as an\nexplicit signal to inform the model which memories it should utilize.\nSpecifically, external knowledge is used to augment a sequence with typed edges\nbetween arbitrarily distant elements, and the resulting graph is decomposed\ninto directed acyclic subgraphs. We introduce a model that encodes such graphs\nas explicit memory in recurrent neural networks, and use it to model\ncoreference relations in text. We apply our model to several text comprehension\ntasks and achieve new state-of-the-art results on all considered benchmarks,\nincluding CNN, bAbi, and LAMBADA. On the bAbi QA tasks, our model solves 15 out\nof the 20 tasks with only 1000 training examples per task. Analysis of the\nlearned representations further demonstrates the ability of our model to encode\nfine-grained entity information across a document.", "field": [], "task": ["Reading Comprehension"], "method": [], "dataset": ["CNN / Daily Mail"], "metric": ["CNN"], "title": "Linguistic Knowledge as Memory for Recurrent Neural Networks"} {"abstract": "We present in this paper a purely rule-based system for Question Classification which we divide into two parts: The first is the extraction of relevant words from a question by use of its structure, and the second is the classification of questions based on rules that associate these words to Concepts. We achieve an accuracy of 97.2{\\%}, close to a 6 point improvement over the previous State of the Art of 91.6{\\%}. Additionally, we believe that machine learning algorithms can be applied on top of this method to further improve accuracy.", "field": [], "task": ["Feature Selection", "Question Answering", "Text Classification"], "method": [], "dataset": ["TREC-50"], "metric": ["Error"], "title": "High Accuracy Rule-based Question Classification using Question Syntax and Semantics"} {"abstract": "Deep convolutional neural networks are known to give good results on image\nclassification tasks. In this paper we present a method to improve the\nclassification result by combining multiple such networks in a committee. We\nadopt the STL-10 dataset which has very few training examples and show that our\nmethod can achieve results that are better than the state of the art. The\nnetworks are trained layer-wise and no backpropagation is used. We also explore\nthe effects of dataset augmentation by mirroring, rotation, and scaling.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["STL-10"], "metric": ["Percentage correct"], "title": "Committees of deep feedforward networks trained with few data"} {"abstract": "Neural models for question answering (QA) over documents have achieved\nsignificant performance improvements. 
Although effective, these models do not\nscale to large corpora due to their complex modeling of interactions between\nthe document and the question. Moreover, recent work has shown that such models\nare sensitive to adversarial inputs. In this paper, we study the minimal\ncontext required to answer the question, and find that most questions in\nexisting datasets can be answered with a small set of sentences. Inspired by\nthis observation, we propose a simple sentence selector to select the minimal\nset of sentences to feed into the QA model. Our overall system achieves\nsignificant reductions in training (up to 15 times) and inference times (up to\n13 times), with accuracy comparable to or better than the state-of-the-art on\nSQuAD, NewsQA, TriviaQA and SQuAD-Open. Furthermore, our experimental results\nand analyses show that our approach is more robust to adversarial inputs.", "field": [], "task": ["Question Answering", "Reading Comprehension"], "method": [], "dataset": ["NewsQA"], "metric": ["EM", "F1"], "title": "Efficient and Robust Question Answering from Minimal Context over Documents"} {"abstract": "Similar and indeterminate defect detection of solar cell surface with\nheterogeneous texture and complex background is a challenge of solar cell\nmanufacturing. The traditional manufacturing process relies on human eye\ndetection which requires a large number of workers without a stable and good\ndetection effect. In order to solve the problem, a visual defect detection\nmethod based on multi-spectral deep convolutional neural network (CNN) is\ndesigned in this paper. Firstly, a selected CNN model is established. By\nadjusting the depth and width of the model, the influence of model depth and\nkernel size on the recognition result is evaluated. The optimal CNN model\nstructure is selected. Secondly, the light spectrum features of solar cell\ncolor image are analyzed. It is found that a variety of defects exhibited\ndifferent distinguishable characteristics in different spectral bands. Thus, a\nmulti-spectral CNN model is constructed to enhance the discrimination ability\nof the model to distinguish between complex texture background features and\ndefect features. Finally, some experimental results and K-fold cross validation\nshow that the multi-spectral deep CNN model can effectively detect the solar\ncell surface defects with higher accuracy and greater adaptability. The\naccuracy of defect recognition reaches 94.30%. Applying such an algorithm can\nincrease the efficiency of solar cell manufacturing and make the manufacturing\nprocess smarter.", "field": [], "task": ["Defect Detection", "Multi-Document Summarization"], "method": [], "dataset": ["review"], "metric": ["1-of-100 Accuracy"], "title": "Solar Cell Surface Defect Inspection Based on Multispectral Convolutional Neural Network"} {"abstract": "Many real-world sequences cannot be conveniently categorized as general or\ndegenerate; in such cases, imposing a false dichotomy in using the fundamental\nmatrix or homography model for motion segmentation would lead to difficulty.\nEven when we are confronted with a general scene-motion, the fundamental matrix\napproach as a model for motion segmentation still suffers from several defects,\nwhich we discuss in this paper. The full potential of the fundamental matrix\napproach could only be realized if we judiciously harness information from the\nsimpler homography model. 
From these considerations, we propose a multi-view\nspectral clustering framework that synergistically combines multiple models\ntogether. We show that the performance can be substantially improved in this\nway. We perform extensive testing on existing motion segmentation datasets,\nachieving state-of-the-art performance on all of them; we also put forth a more\nrealistic and challenging dataset adapted from the KITTI benchmark, containing\nreal-world effects such as strong perspectives and strong forward translations\nnot seen in the traditional datasets.", "field": [], "task": ["Motion Segmentation"], "method": [], "dataset": ["KT3DMoSeg", "MTPV62", "Hopkins155"], "metric": ["Error", "Classification Error"], "title": "Motion Segmentation by Exploiting Complementary Geometric Models"} {"abstract": "Language modeling tasks, in which words, or word-pieces, are predicted on the basis of a local context, have been very effective for learning word embeddings and context dependent representations of phrases. Motivated by the observation that efforts to code world knowledge into machine readable knowledge bases or human readable encyclopedias tend to be entity-centric, we investigate the use of a fill-in-the-blank task to learn context independent representations of entities from the text contexts in which those entities were mentioned. We show that large scale training of neural models allows us to learn high quality entity representations, and we demonstrate successful results on four domains: (1) existing entity-level typing benchmarks, including a 64% error reduction over previous work on TypeNet (Murty et al., 2018); (2) a novel few-shot category reconstruction task; (3) existing entity linking benchmarks, where we match the state-of-the-art on CoNLL-Aida without linking-specific features and obtain a score of 89.8% on TAC-KBP 2010 without using any alias table, external knowledge base or in domain training data and (4) answering trivia questions, which uniquely identify entities. Our global entity representations encode fine-grained type categories, such as Scottish footballers, and can answer trivia questions such as: Who was the last inmate of Spandau jail in Berlin?", "field": [], "task": ["Entity Linking", "Language Modelling", "Learning Word Embeddings", "Word Embeddings"], "method": [], "dataset": ["TAC-KBP 2010", "CoNLL-Aida"], "metric": ["Accuracy"], "title": "Learning Cross-Context Entity Representations from Text"} {"abstract": "In this paper, a unified approach is presented to transfer learning that\naddresses several source and target domain label-space and annotation\nassumptions with a single model. It is particularly effective in handling a\nchallenging case, where source and target label-spaces are disjoint, and\noutperforms alternatives in both unsupervised and semi-supervised settings. The\nkey ingredient is a common representation termed Common Factorised Space. It is\nshared between source and target domains, and trained with an unsupervised\nfactorisation loss and a graph-based loss. 
With a wide range of experiments, we\ndemonstrate the flexibility, relevance and efficacy of our method, both in the\nchallenging cases with disjoint label spaces, and in the more conventional\ncases such as unsupervised domain adaptation, where the source and target\ndomains share the same label-sets.", "field": [], "task": ["Domain Adaptation", "Transfer Learning", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["Duke to Market", "Market to Duke"], "metric": ["rank-10", "mAP", "rank-5", "rank-1"], "title": "Disjoint Label Space Transfer Learning with Common Factorised Space"} {"abstract": "Understanding human activity based on sensor information is required in many applications and has been an active research area. With the advancement of depth sensors and tracking algorithms, systems for human motion activity analysis can be built by combining off-the-shelf motion tracking systems with application-dependent learning tools to extract higher semantic level information. Many of these motion tracking systems provide raw motion data registered to the skeletal joints in the human body. In this paper, we propose novel representations for human motion data using the skeleton-based graph structure along with techniques in graph signal processing. Methods for graph construction and their corresponding basis functions are discussed. The proposed representations can achieve comparable classification performance in action recognition tasks while additionally being more robust to noise and missing data.", "field": [], "task": ["Action Recognition", "graph construction", "Skeleton Based Action Recognition"], "method": [], "dataset": ["UT-Kinect", "MSR Action3D"], "metric": ["Accuracy"], "title": "Graph Based Skeleton Modeling for Human Activity Analysis"} {"abstract": "We introduce a novel kernel that upgrades the Weisfeiler-Lehman and other graph kernels to effectively exploit high-dimensional and continuous vertex attributes. Graphs are first decomposed into subgraphs. Vertices of the subgraphs are then compared by a kernel that combines the similarity of their labels and the similarity of their structural role, using a suitable vertex invariant. By changing this invariant we obtain a family of graph kernels which includes generalizations of Weisfeiler-Lehman, NSPDK, and propagation kernels. We demonstrate empirically that these kernels obtain state-of-the-art results on relational data sets.", "field": [], "task": ["Graph Classification"], "method": [], "dataset": ["FRANKENSTEIN"], "metric": ["Accuracy"], "title": "Graph Invariant Kernels"} {"abstract": "Recent advances in language modeling have led to computationally intensive and resource-demanding state-of-the-art models. In an effort towards sustainable practices, we study the impact of pre-training data volume on compact language models. Multiple BERT-based models are trained on gradually increasing amounts of French text. Through fine-tuning on the French Question Answering Dataset (FQuAD), we observe that well-performing models are obtained with as little as 100 MB of text. 
In addition, we show that, past critically low amounts of pre-training data, an intermediate pre-training step on the task-specific corpus does not yield substantial improvements.", "field": [], "task": ["Language Modelling", "Question Answering"], "method": [], "dataset": ["FQuAD"], "metric": ["EM", "F1"], "title": "On the importance of pre-training data volume for compact language models"} {"abstract": "We present structured perceptron training for neural network transition-based\ndependency parsing. We learn the neural network representation using a gold\ncorpus augmented by a large number of automatically parsed sentences. Given\nthis fixed network representation, we learn a final layer using the structured\nperceptron with beam-search decoding. On the Penn Treebank, our parser reaches\n94.26% unlabeled and 92.41% labeled attachment accuracy, which to our knowledge\nis the best accuracy on Stanford Dependencies to date. We also provide in-depth\nablative analysis to determine which aspects of our model provide the largest\ngains in accuracy.", "field": [], "task": ["Dependency Parsing", "Transition-Based Dependency Parsing"], "method": [], "dataset": ["Penn Treebank"], "metric": ["UAS", "POS", "LAS"], "title": "Structured Training for Neural Network Transition-Based Parsing"} {"abstract": "Artificial Neural Networks have shown impressive success in very different\napplication cases. Choosing a proper network architecture is a critical\ndecision for a network's success, usually done in a manual manner. As a\nstraightforward strategy, large, mostly fully connected architectures are\nselected, thereby relying on a good optimization strategy to find proper\nweights while at the same time avoiding overfitting. However, large parts of\nthe final network are redundant. In the best case, large parts of the network\nbecome simply irrelevant for later inferencing. In the worst case, highly\nparameterized architectures hinder proper optimization and allow the easy\ncreation of adversarial examples fooling the network. A first step in removing\nirrelevant architectural parts lies in identifying those parts, which requires\nmeasuring the contribution of individual components such as neurons. In\nprevious work, heuristics based on using the weight distribution of a neuron as\ncontribution measure have shown some success, but do not provide a proper\ntheoretical understanding. Therefore, in our work we investigate game theoretic\nmeasures, namely the Shapley value (SV), in order to separate relevant from\nirrelevant parts of an artificial neural network. We begin by designing a\ncoalitional game for an artificial neural network, where neurons form\ncoalitions and the average contributions of neurons to coalitions yield the\nShapley value. In order to measure how well the Shapley value measures the\ncontribution of individual neurons, we remove low-contributing neurons and\nmeasure its impact on the network performance. In our experiments we show that\nthe Shapley value outperforms other heuristics for measuring the contribution\nof neurons.", "field": [], "task": ["Network Pruning"], "method": [], "dataset": ["MNIST"], "metric": ["Avg #Steps"], "title": "Analysing Neural Network Topologies: a Game Theoretic Approach"} {"abstract": "Defocus blur detection (DBD) is a fundamental yet challenging topic, since the homogeneous region is obscure and the transition from the focused area to the unfocused region is gradual. &#13;
Recent DBD methods make progress through exploring deeper or wider networks with the expense of high memory and computation. In this paper, we propose a novel learning strategy by breaking DBD problem into multiple smaller defocus blur detectors and thus estimate errors can cancel out each other. Our focus is the diversity enhancement via cross-ensemble network. Specifically, we design an end-to-end network composed of two logical parts: feature extractor network (FENet) and defocus blur detector cross-ensemble network (DBD-CENet). FENet is constructed to extract low-level features. Then the features are fed into DBD-CENet containing two parallel-branches for learning two groups of defocus blur detectors. For each individual, we design cross-negative and self-negative correlations and an error function to enhance ensemble diversity and balance individual accuracy. Finally, the multiple defocus blur detectors are combined with a uniformly weighted average to obtain the final DBD map. Experimental results indicate the superiority of our method in terms of accuracy and speed when compared with several state-of-the-art methods.\r", "field": [], "task": ["Defocus Estimation"], "method": [], "dataset": ["CUHK - Blur Detection Dataset"], "metric": ["MAE", "F-measure"], "title": "Enhancing Diversity of Defocus Blur Detectors via Cross-Ensemble Network"} {"abstract": "Label estimation is an important component in an unsupervised person\nre-identification (re-ID) system. This paper focuses on cross-camera label\nestimation, which can be subsequently used in feature learning to learn robust\nre-ID models. Specifically, we propose to construct a graph for samples in each\ncamera, and then graph matching scheme is introduced for cross-camera labeling\nassociation. While labels directly output from existing graph matching methods\nmay be noisy and inaccurate due to significant cross-camera variations, this\npaper proposes a dynamic graph matching (DGM) method. DGM iteratively updates\nthe image graph and the label estimation process by learning a better feature\nspace with intermediate estimated labels. DGM is advantageous in two aspects:\n1) the accuracy of estimated labels is improved significantly with the\niterations; 2) DGM is robust to noisy initial training data. Extensive\nexperiments conducted on three benchmarks including the large-scale MARS\ndataset show that DGM yields competitive performance to fully supervised\nbaselines, and outperforms competing unsupervised learning methods.", "field": [], "task": ["Graph Matching", "Person Re-Identification", "Unsupervised Person Re-Identification"], "method": [], "dataset": ["PRID2011"], "metric": ["Rank-1", "Rank-20", "Rank-5"], "title": "Dynamic Label Graph Matching for Unsupervised Video Re-Identification"} {"abstract": "Due to great challenges such as tremendous intra-class variations and low image resolution, context information has been playing a more and more important role for accurate and robust event recognition in surveillance videos. The context information can generally be divided into the feature level context, the semantic level context, and the prior level context. These three levels of context provide crucial bottom-up, middle level, and top down information that can benefit the recognition task itself. 
Unlike existing research that generally integrates the context information at one of the three levels, we propose a hierarchical context model that simultaneously exploits contexts at all three levels and systematically incorporates them into event recognition. To tackle the learning and inference challenges brought in by the model hierarchy, we develop complete learning and inference algorithms for the proposed hierarchical context model based on the variational Bayes method. Experiments on VIRAT 1.0 and 2.0 Ground Datasets demonstrate the effectiveness of the proposed hierarchical context model for improving the event recognition performance even under great challenges like large intra-class variations and low image resolution.", "field": [], "task": ["Action Recognition"], "method": [], "dataset": ["VIRAT Ground 2.0"], "metric": ["Average Accuracy"], "title": "A Hierarchical Context Model for Event Recognition in Surveillance Video"} {"abstract": "Handwritten digit and letter recognition is one of the oldest and most important topics in the field of pattern recognition. Handwritten digit and letter recognition poses a difficult problem because of different writing styles, similarity in structure, and angle of orientation. Therefore, it is very important to find an effective method for the recognition and classification of digits and letters. Handwritten digit and letter recognition has various applications such as number plate recognition, extracting business card information, bank check processing, postal address processing, passport processing, signature processing etc. This paper proposes a method of handwritten digit and letter recognition using feature extraction based on hybrid Discrete Wavelet Transform (DWT) and Discrete Cosine Transform (DCT). These extracted features are passed to K-Nearest Neighbour (KNN) and Support Vector Machine (SVM) classifiers for classification. The standard MNIST and EMNIST letter datasets are used for this experiment. First, the MNIST digit and EMNIST letter datasets are binarized and stray pixels are removed. Features are then extracted using the hybrid Discrete Wavelet Transform and Discrete Cosine Transform. KNN and SVM classifiers are used for classification. The proposed method obtained the highest accuracy of 97.74% for digits and 89.51% for letters using the SVM classifier.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["EMNIST-Digits", "EMNIST-Letters"], "metric": ["Accuracy (%)", "Accuracy"], "title": "Handwritten digit and letter recognition using hybrid dwt-dct with knn and svm classifier"} {"abstract": "Emotion cause extraction aims to identify the reasons behind a certain\nemotion expressed in text. It is a much more difficult task compared to emotion\nclassification. Inspired by recent advances in using deep memory networks for\nquestion answering (QA), we propose a new approach which considers emotion\ncause identification as a reading comprehension task in QA. Inspired by\nconvolutional neural networks, we propose a new mechanism to store relevant\ncontext in different memory slots to model context information. 
Our proposed\napproach can extract both word level sequence features and lexical features.\nPerformance evaluation shows that our method achieves the state-of-the-art\nperformance on a recently released emotion cause dataset, outperforming a\nnumber of competitive baselines by at least 3.01% in F-measure.", "field": [], "task": ["Emotion Cause Extraction", "Emotion Classification", "Question Answering", "Reading Comprehension"], "method": [], "dataset": ["ECE"], "metric": ["F1"], "title": "A Question Answering Approach to Emotion Cause Extraction"} {"abstract": "We present a novel family of language model (LM) estimation techniques named\nSparse Non-negative Matrix (SNM) estimation. A first set of experiments\nempirically evaluating it on the One Billion Word Benchmark shows that SNM\n$n$-gram LMs perform almost as well as the well-established Kneser-Ney (KN)\nmodels. When using skip-gram features the models are able to match the\nstate-of-the-art recurrent neural network (RNN) LMs; combining the two modeling\ntechniques yields the best known result on the benchmark. The computational\nadvantages of SNM over both maximum entropy and RNN LM estimation are probably\nits main strength, promising an approach that has the same flexibility in\ncombining arbitrary features effectively and yet should scale to very large\namounts of data as gracefully as $n$-gram LMs do.", "field": [], "task": ["Language Modelling"], "method": [], "dataset": ["One Billion Word"], "metric": ["Number of params", "PPL"], "title": "Skip-gram Language Modeling Using Sparse Non-negative Matrix Probability Estimation"} {"abstract": "Beam search is a desirable choice of test-time decoding algorithm for neural\nsequence models because it potentially avoids search errors made by simpler\ngreedy methods. However, typical cross entropy training procedures for these\nmodels do not directly consider the behaviour of the final decoding method. As\na result, for cross-entropy trained models, beam decoding can sometimes yield\nreduced test performance when compared with greedy decoding. In order to train\nmodels that can more effectively make use of beam search, we propose a new\ntraining procedure that focuses on the final loss metric (e.g. Hamming loss)\nevaluated on the output of beam search. While well-defined, this \"direct loss\"\nobjective is itself discontinuous and thus difficult to optimize. Hence, in our\napproach, we form a sub-differentiable surrogate objective by introducing a\nnovel continuous approximation of the beam search decoding procedure. In\nexperiments, we show that optimizing this new training objective yields\nsubstantially better results on two sequence tasks (Named Entity Recognition\nand CCG Supertagging) when compared with both cross entropy trained greedy\ndecoding and cross entropy trained beam decoding baselines.", "field": [], "task": ["CCG Supertagging", "Motion Segmentation", "Named Entity Recognition"], "method": [], "dataset": ["Hopkins155"], "metric": ["Classification Error"], "title": "A Continuous Relaxation of Beam Search for End-to-end Training of Neural Sequence Models"} {"abstract": "This paper proposes a novel framework for the use of eye movement patterns\nfor biometric applications. Eye movements contain abundant information about\ncognitive brain functions, neural pathways, etc. In the proposed method, eye\nmovement data is classified into fixations and saccades. 
Features extracted\nfrom fixations and saccades are used by a Gaussian Radial Basis Function\nNetwork (GRBFN) based method for biometric authentication. A score fusion\napproach is adopted to classify the data in the output layer. In the evaluation\nstage, the algorithm has been tested using two types of stimuli: random dot\nfollowing on a screen and text reading. The results indicate the strength of\neye movement pattern as a biometric modality. The algorithm has been evaluated\non BioEye 2015 database and found to outperform all the other methods. Eye\nmovements are generated by a complex oculomotor plant which is very hard to\nspoof by mechanical replicas. Use of eye movement dynamics along with iris\nrecognition technology may lead to a robust counterfeit-resistant person\nidentification system.", "field": [], "task": ["Iris Recognition", "Person Identification"], "method": [], "dataset": ["BioEye"], "metric": ["R1"], "title": "A Score-level Fusion Method for Eye Movement Biometrics"} {"abstract": "Human Activity Recognition (HAR) is a key building block of many emerging\napplications such as intelligent mobility, sports analytics, ambient-assisted\nliving and human-robot interaction. With robust HAR, systems will become more\nhuman-aware, leading towards much safer and empathetic autonomous systems.\nWhile human pose detection has made significant progress with the dawn of deep\nconvolutional neural networks (CNNs), the state-of-the-art research has almost\nexclusively focused on a single sensing modality, especially video. However, in\nsafety critical applications it is imperative to utilize multiple sensor\nmodalities for robust operation. To exploit the benefits of state-of-the-art\nmachine learning techniques for HAR, it is extremely important to have\nmultimodal datasets. In this paper, we present a novel, multi-modal sensor\ndataset that encompasses nine indoor activities, performed by 16 participants,\nand captured by four types of sensors that are commonly used in indoor\napplications and autonomous vehicles. This multimodal dataset is the first of\nits kind to be made openly available and can be exploited for many applications\nthat require HAR, including sports analytics, healthcare assistance and indoor\nintelligent mobility. We propose a novel data preprocessing algorithm to enable\nadaptive feature extraction from the dataset to be utilized by different\nmachine learning algorithms. Through rigorous experimental evaluations, this\npaper reviews the performance of machine learning approaches to posture\nrecognition, and analyses the robustness of the algorithms. When performing HAR\nwith the RGB-Depth data from our new dataset, machine learning algorithms such\nas a deep neural network reached a mean accuracy of up to 96.8% for\nclassification across all stationary and dynamic activities", "field": [], "task": ["Activity Recognition", "Autonomous Vehicles", "Human robot interaction", "Multimodal Activity Recognition", "Sports Analytics"], "method": [], "dataset": ["LboroHAR"], "metric": ["Accuracy"], "title": "Adaptive Feature Processing for Robust Human Activity Recognition on a Novel Multi-Modal Dataset"} {"abstract": "Recent work proposes a family of contextual embeddings that significantly improves the accuracy of sequence labelers over non-contextual embeddings. However, there is no definite conclusion on whether we can build better sequence labelers by combining different kinds of embeddings in various settings. 
In this paper, we conduct extensive experiments on 3 tasks over 18 datasets and 8 languages to study the accuracy of sequence labeling with various embedding concatenations and make three observations: (1) concatenating more embedding variants leads to better accuracy in rich-resource and cross-domain settings and some conditions of low-resource settings; (2) concatenating additional contextual sub-word embeddings with contextual character embeddings hurts the accuracy in extremely low-resource settings; (3) based on the conclusion of (1), concatenating additional similar contextual embeddings cannot lead to further improvements. We hope these conclusions can help people build stronger sequence labelers in various settings.", "field": [], "task": ["Chunking", "Word Embeddings"], "method": [], "dataset": ["CoNLL 2003 (English)", "CoNLL 2003 (German)"], "metric": ["F1"], "title": "More Embeddings, Better Sequence Labelers?"} {"abstract": "A key requirement for leveraging supervised deep learning methods is the\navailability of large, labeled datasets. Unfortunately, in the context of RGB-D\nscene understanding, very little data is available -- current datasets cover a\nsmall range of scene views and have limited semantic annotations. To address\nthis issue, we introduce ScanNet, an RGB-D video dataset containing 2.5M views\nin 1513 scenes annotated with 3D camera poses, surface reconstructions, and\nsemantic segmentations. To collect this data, we designed an easy-to-use and\nscalable RGB-D capture system that includes automated surface reconstruction\nand crowdsourced semantic annotation. We show that using this data helps\nachieve state-of-the-art performance on several 3D scene understanding tasks,\nincluding 3D object classification, semantic voxel labeling, and CAD model\nretrieval. The dataset is freely available at http://www.scan-net.org.", "field": [], "task": ["3D Object Classification", "Object Classification", "Scene Understanding", "Semantic Segmentation"], "method": [], "dataset": ["ScanNet"], "metric": ["3DIoU"], "title": "ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes"} {"abstract": "Neural networks augmented with external memory have the ability to learn\nalgorithmic solutions to complex tasks. These models appear promising for\napplications such as language modeling and machine translation. However, they\nscale poorly in both space and time as the amount of memory grows --- limiting\ntheir applicability to real-world domains. Here, we present an end-to-end\ndifferentiable memory access scheme, which we call Sparse Access Memory (SAM),\nthat retains the representational power of the original approaches whilst\ntraining efficiently with very large memories. We show that SAM achieves\nasymptotic lower bounds in space and time complexity, and find that an\nimplementation runs $1,\\!000\\times$ faster and with $3,\\!000\\times$ less\nphysical memory than non-sparse models. SAM learns with comparable data\nefficiency to existing models on a range of synthetic tasks and one-shot\nOmniglot character recognition, and can scale to tasks requiring $100,\\!000$s\nof time steps and memories. 
We also show how our approach can be adapted\nfor models that maintain temporal associations between memories, as with the\nrecently introduced Differentiable Neural Computer.", "field": [], "task": ["Language Modelling", "Machine Translation", "Omniglot", "Question Answering"], "method": [], "dataset": ["bAbi"], "metric": ["Accuracy (trained on 1k)", "Mean Error Rate"], "title": "Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes"} {"abstract": "With multiple crowd gatherings of millions of people every year in events\nranging from pilgrimages to protests, concerts to marathons, and festivals to\nfunerals, visual crowd analysis is emerging as a new frontier in computer\nvision. In particular, counting in highly dense crowds is a challenging problem\nwith far-reaching applicability in crowd safety and management, as well as\ngauging political significance of protests and demonstrations. In this paper,\nwe propose a novel approach that simultaneously solves the problems of\ncounting, density map estimation and localization of people in a given dense\ncrowd image. Our formulation is based on an important observation that the\nthree problems are inherently related to each other, making the loss function\nfor optimizing a deep CNN decomposable. Since localization requires\nhigh-quality images and annotations, we introduce the UCF-QNRF dataset that\novercomes the shortcomings of previous datasets, and contains 1.25 million\nhumans manually marked with dot annotations. Finally, we present evaluation\nmeasures and comparison with recent deep CNN networks, including those\ndeveloped specifically for crowd counting. Our approach significantly\noutperforms the state-of-the-art on the new dataset, which is the most challenging\ndataset with the largest number of crowd annotations in the most diverse set of\nscenes.", "field": [], "task": ["Crowd Counting", "Visual Crowd Analysis"], "method": [], "dataset": ["UCF-QNRF"], "metric": ["MAE"], "title": "Composition Loss for Counting, Density Map Estimation and Localization in Dense Crowds"} {"abstract": "Pedestrian detection in crowded scenes is a challenging problem since the\npedestrians often gather together and occlude each other. In this paper, we\npropose a new occlusion-aware R-CNN (OR-CNN) to improve the detection accuracy\nin the crowd. Specifically, we design a new aggregation loss to enforce\nproposals to be close and locate compactly to the corresponding objects.\nMeanwhile, we use a new part occlusion-aware region of interest (PORoI) pooling\nunit to replace the RoI pooling layer in order to integrate the prior structure\ninformation of the human body with visibility prediction into the network to handle\nocclusion. Our detector is trained in an end-to-end fashion, which achieves\nstate-of-the-art results on three pedestrian detection datasets, i.e.,\nCityPersons, ETH, and INRIA, and performs on par with the state of the art on\nCaltech.", "field": [], "task": ["Pedestrian Detection"], "method": [], "dataset": ["CityPersons", "Caltech"], "metric": ["Reasonable MR^-2", "Heavy MR^-2", "Reasonable Miss Rate", "Partial MR^-2", "Bare MR^-2"], "title": "Occlusion-aware R-CNN: Detecting Pedestrians in a Crowd"} {"abstract": "Named entity recognition (NER) and entity linking (EL) are two fundamentally related tasks, since in order to perform EL, first the mentions to entities have to be detected. However, most entity linking approaches disregard the mention detection part, assuming that the correct mentions have been previously detected. 
In this paper, we perform joint learning of NER and EL to leverage their relatedness and obtain a more robust and generalisable system. For that, we introduce a model inspired by the Stack-LSTM approach (Dyer et al., 2015). We observe that, in fact, doing multi-task learning of NER and EL improves the performance in both tasks when compared with models trained with individual objectives. Furthermore, we achieve results competitive with the state-of-the-art in both NER and EL.", "field": [], "task": ["Entity Linking", "Multi-Task Learning", "Named Entity Recognition"], "method": [], "dataset": ["CoNLL 2003 (English)"], "metric": ["F1"], "title": "Joint Learning of Named Entity Recognition and Entity Linking"} {"abstract": "Few-shot learning is a nascent research topic, motivated by the fact that traditional deep learning methods require tremendous amounts of data. The scarcity of annotated data becomes even more challenging in semantic segmentation since pixel-level annotation in segmentation task is more labor-intensive to acquire. To tackle this issue, we propose an Attention-based Multi-Context Guiding (A-MCG) network, which consists of three branches: the support branch, the query branch, and the feature fusion branch. A key differentiator of A-MCG is the integration of multi-scale context features between support and query branches, enforcing better guidance from the support set. In addition, we also adopt spatial attention along the fusion branch to highlight context information from several scales, enhancing self-supervision in one-shot learning. To address the fusion problem in multi-shot learning, Conv-LSTM is adopted to collaboratively integrate the sequential support features to elevate the final accuracy. Our architecture obtains state-of-the-art results on unseen classes in a variant of the PASCAL VOC12 dataset and performs favorably against previous work with large gains of 1.1% and 1.4% measured in mIoU in the 1-shot and 5-shot settings.", "field": [], "task": ["Few-Shot Learning", "Few-Shot Semantic Segmentation", "One-Shot Learning", "Semantic Segmentation"], "method": [], "dataset": ["Pascal5i"], "metric": ["meanIOU"], "title": "Attention-Based Multi-Context Guiding for Few-Shot Semantic Segmentation"} {"abstract": "We present our work on end-to-end training of acoustic models\r\nusing the lattice-free maximum mutual information (LF-MMI)\r\nobjective function in the context of hidden Markov models.\r\nBy end-to-end training, we mean flat-start training of a single\r\nDNN in one stage without using any previously trained models,\r\nforced alignments, or building state-tying decision trees. We\r\nuse full biphones to enable context-dependent modeling without trees, and show that our end-to-end LF-MMI approach can\r\nachieve comparable results to regular LF-MMI on well-known\r\nlarge vocabulary tasks. We also compare with other end-to-end\r\nmethods such as CTC in character-based and lexicon-free settings and show 5 to 25 percent relative reduction in word error rates on different large vocabulary tasks while using significantly smaller models.", "field": [], "task": ["End-To-End Speech Recognition", "Speech Recognition"], "method": [], "dataset": ["Switchboard (300hr)", "WSJ eval92"], "metric": ["Word Error Rate (WER)"], "title": "End-to-end speech recognition using lattice-free MMI"} {"abstract": "The objective of this paper is to learn a compact representation of image\nsets for template-based face recognition. 
We make the following contributions:\nfirst, we propose a network architecture which aggregates and embeds the face\ndescriptors produced by deep convolutional neural networks into a compact\nfixed-length representation. This compact representation requires minimal\nmemory storage and enables efficient similarity computation. Second, we propose\na novel GhostVLAD layer that includes {\\em ghost clusters} that do not\ncontribute to the aggregation. We show that a quality weighting on the input\nfaces emerges automatically such that informative images contribute more than\nthose with low quality, and that the ghost clusters enhance the network's\nability to deal with poor quality images. Third, we explore how input feature\ndimension, number of clusters and different training techniques affect the\nrecognition performance. Given this analysis, we train a network that far\nexceeds the state-of-the-art on the IJB-B face recognition dataset. This is\ncurrently one of the most challenging public benchmarks, and we surpass the\nstate-of-the-art on both the identification and verification protocols.", "field": [], "task": ["Face Recognition", "Face Verification"], "method": [], "dataset": ["IJB-A", "IJB-B"], "metric": ["TAR @ FAR=0.01"], "title": "GhostVLAD for set-based face recognition"} {"abstract": "We propose bilinear models, a recognition architecture that consists of two feature extractors whose outputs are multiplied using the outer product at each location of the image and pooled to obtain an image descriptor. This architecture can model local pairwise feature interactions in a translationally invariant manner which is particularly useful for fine-grained categorization. It also generalizes various orderless texture descriptors such as the Fisher vector, VLAD and O2P. We present experiments with bilinear models where the feature extractors are based on convolutional neural networks. The bilinear form simplifies gradient computation and allows end-to-end training of both networks using image labels only. Using networks initialized from the ImageNet dataset followed by domain specific fine-tuning we obtain 84.1% accuracy on the CUB-200-2011 dataset, requiring only category labels at training time. We present experiments and visualizations that analyze the effects of fine-tuning and the choice of the two networks on the speed and accuracy of the models. Results show that the architecture compares favorably to the existing state of the art on a number of fine-grained datasets while being substantially simpler and easier to train. Moreover, our most accurate model is fairly efficient, running at 8 frames/sec on an NVIDIA Tesla K40 GPU. The source code for the complete system will be made available at http://vis-www.cs.umass.edu/bcnn", "field": [], "task": ["Fine-Grained Image Classification", "Fine-Grained Visual Recognition"], "method": [], "dataset": [" CUB-200-2011"], "metric": ["Accuracy"], "title": "Bilinear CNN Models for Fine-Grained Visual Recognition"} {"abstract": "Image saliency detection has recently witnessed rapid progress due to deep\nconvolutional neural networks. However, none of the existing methods is able to\nidentify object instances in the detected salient regions. In this paper, we\npresent a salient instance segmentation method that produces a saliency mask\nwith distinct object instance labels for an input image. Our method consists of\nthree steps: estimating a saliency map, detecting salient object contours, and\nidentifying salient object instances. 
For the first two steps, we propose a\nmultiscale saliency refinement network, which generates high-quality salient\nregion masks and salient object contours. Once integrated with multiscale\ncombinatorial grouping and a MAP-based subset optimization framework, our\nmethod can generate very promising salient object instance segmentation\nresults. To promote further research and evaluation of salient instance\nsegmentation, we also construct a new database of 1000 images and their\npixelwise salient instance annotations. Experimental results demonstrate that\nour proposed method is capable of achieving state-of-the-art performance on all\npublic benchmarks for salient region detection as well as on our new dataset\nfor salient instance segmentation.", "field": [], "task": ["Instance Segmentation", "Saliency Detection", "Semantic Segmentation"], "method": [], "dataset": ["DUTS-TE"], "metric": ["MAE", "F-measure"], "title": "Instance-Level Salient Object Segmentation"} {"abstract": "We propose an end-to-end learning framework for segmenting generic objects in\nvideos. Our method learns to combine appearance and motion information to\nproduce pixel level segmentation masks for all prominent objects in videos. We\nformulate this task as a structured prediction problem and design a two-stream\nfully convolutional neural network which fuses together motion and appearance\nin a unified framework. Since large-scale video datasets with pixel level\nsegmentations are problematic, we show how to bootstrap weakly annotated videos\ntogether with existing image recognition datasets for training. Through\nexperiments on three challenging video segmentation benchmarks, our method\nsubstantially improves the state-of-the-art for segmenting generic (unseen)\nobjects. Code and pre-trained models are available on the project website.", "field": [], "task": ["Structured Prediction", "Unsupervised Video Object Segmentation", "Video Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "FusionSeg: Learning to combine motion and appearance for fully automatic segmention of generic objects in videos"} {"abstract": "Deep learning methods have achieved great success in pedestrian detection,\nowing to its ability to learn features from raw pixels. However, they mainly\ncapture middle-level representations, such as pose of pedestrian, but confuse\npositive with hard negative samples, which have large ambiguity, e.g. the shape\nand appearance of `tree trunk' or `wire pole' are similar to pedestrian in\ncertain viewpoint. This ambiguity can be distinguished by high-level\nrepresentation. To this end, this work jointly optimizes pedestrian detection\nwith semantic tasks, including pedestrian attributes (e.g. `carrying backpack')\nand scene attributes (e.g. `road', `tree', and `horizontal'). Rather than\nexpensively annotating scene attributes, we transfer attributes information\nfrom existing scene segmentation datasets to the pedestrian dataset, by\nproposing a novel deep model to learn high-level features from multiple tasks\nand multiple data sources. Since distinct tasks have distinct convergence rates\nand data from different datasets have different distributions, a multi-task\nobjective function is carefully designed to coordinate tasks and reduce\ndiscrepancies among datasets. 
The importance coefficients of tasks and network\nparameters in this objective function can be iteratively estimated. Extensive\nevaluations show that the proposed approach outperforms the state-of-the-art on\nthe challenging Caltech and ETH datasets, where it reduces the miss rates of\nprevious deep models by 17 and 5.5 percent, respectively.", "field": [], "task": ["Pedestrian Detection", "Scene Segmentation"], "method": [], "dataset": ["Caltech"], "metric": ["Reasonable Miss Rate"], "title": "Pedestrian Detection aided by Deep Learning Semantic Tasks"} {"abstract": "Lip reading has witnessed unparalleled development in recent years thanks to deep learning and the availability of large-scale datasets. Despite the encouraging results achieved, the performance of lip reading, unfortunately, remains inferior to the one of its counterpart speech recognition, due to the ambiguous nature of its actuations that makes it challenging to extract discriminant features from the lip movement videos. In this paper, we propose a new method, termed as Lip by Speech (LIBS), of which the goal is to strengthen lip reading by learning from speech recognizers. The rationale behind our approach is that the features extracted from speech recognizers may provide complementary and discriminant clues, which are formidable to be obtained from the subtle movements of the lips, and consequently facilitate the training of lip readers. This is achieved, specifically, by distilling multi-granularity knowledge from speech recognizers to lip readers. To conduct this cross-modal knowledge distillation, we utilize an efficacious alignment scheme to handle the inconsistent lengths of the audios and videos, as well as an innovative filtering strategy to refine the speech recognizer's prediction. The proposed method achieves the new state-of-the-art performance on the CMLR and LRS2 datasets, outperforming the baseline by a margin of 7.66% and 2.75% in character error rate, respectively.", "field": [], "task": ["Knowledge Distillation", "Lipreading", "Lip Reading", "Speech Recognition"], "method": [], "dataset": ["CMLR", "LRS2"], "metric": ["CER", "Word Error Rate (WER)"], "title": "Hearing Lips: Improving Lip Reading by Distilling Speech Recognizers"} {"abstract": "Co-localization is the problem of localizing objects of the same class using only the set of images that contain them. This is a challenging task because the object detector must be built without negative examples that can lead to more informative supervision signals. The main idea of our method is to cluster the feature space of a generically pre-trained CNN, to find a set of CNN features that are consistently and highly activated for an object category, which we call category-consistent CNN features. Then, we propagate their combined activation map using superpixel geodesic distances for co-localization. In our first set of experiments, we show that the proposed method achieves state-of-the-art performance on three related benchmarks: PASCAL 2007, PASCAL-2012, and the Object Discovery dataset. We also show that our method is able to detect and localize truly unseen categories, on six held-out ImageNet categories with accuracy that is significantly higher than previous state-of-the-art. 
Our intuitive approach achieves this success without any region proposals or object detectors and can be based on a CNN that was pre-trained purely on image classification tasks without further fine-tuning.", "field": [], "task": ["Image Classification", "Object Discovery", "Object Localization"], "method": [], "dataset": ["PASCAL VOC 2012", "PASCAL VOC 2007"], "metric": ["CorLoc"], "title": "Co-localization with Category-Consistent Features and Geodesic Distance Propagation"} {"abstract": "State-of-the-art learning based boundary detection methods require extensive\ntraining data. Since labelling object boundaries is one of the most expensive\ntypes of annotations, there is a need to relax the requirement to carefully\nannotate images to make both the training more affordable and to extend the\namount of training data. In this paper we propose a technique to generate\nweakly supervised annotations and show that bounding box annotations alone\nsuffice to reach high-quality object boundaries without using any\nobject-specific boundary annotations. With the proposed weak supervision\ntechniques we achieve the top performance on the object boundary detection\ntask, outperforming by a large margin the current fully supervised\nstate-of-the-art methods.", "field": [], "task": ["Boundary Detection", "Edge Detection"], "method": [], "dataset": ["SBD"], "metric": ["Maximum F-measure"], "title": "Weakly Supervised Object Boundaries"} {"abstract": "Image geolocalization is the task of identifying the location depicted in a\nphoto based only on its visual information. This task is inherently challenging\nsince many photos have only few, possibly ambiguous cues to their geolocation.\nRecent work has cast this task as a classification problem by partitioning the\nearth into a set of discrete cells that correspond to geographic regions. The\ngranularity of this partitioning presents a critical trade-off; using fewer but\nlarger cells results in lower location accuracy while using more but smaller\ncells reduces the number of training examples per class and increases model\nsize, making the model prone to overfitting. To tackle this issue, we propose a\nsimple but effective algorithm, combinatorial partitioning, which generates a\nlarge number of fine-grained output classes by intersecting multiple\ncoarse-grained partitionings of the earth. Each classifier votes for the\nfine-grained classes that overlap with their respective coarse-grained ones.\nThis technique allows us to predict locations at a fine scale while maintaining\nsufficient training examples per class. Our algorithm achieves the\nstate-of-the-art performance in location recognition on multiple benchmark\ndatasets.", "field": [], "task": ["Photo geolocation estimation"], "method": [], "dataset": ["Im2GPS3k", "Im2GPS"], "metric": ["City level (25 km)", "Continent level (2500 km)", "Reference images", "Training images", "Street level (1 km)", "Country level (750 km)", "Region level (200 km)"], "title": "CPlaNet: Enhancing Image Geolocalization by Combinatorial Partitioning of Maps"} {"abstract": "Weakly supervised object detection aims at reducing the amount of supervision\nrequired to train detection models. Such models are traditionally learned from\nimages/videos labelled only with the object class and not the object bounding\nbox. In our work, we try to leverage not only the object class labels but also\nthe action labels associated with the data. 
We show that the action depicted in\nthe image/video can provide strong cues about the location of the associated\nobject. We learn a spatial prior for the object dependent on the action (e.g.\n\"ball\" is closer to \"leg of the person\" in \"kicking ball\"), and incorporate\nthis prior to simultaneously train a joint object detection and action\nclassification model. We conducted experiments on both video datasets and image\ndatasets to evaluate the performance of our weakly supervised object detection\nmodel. Our approach outperformed the current state-of-the-art (SOTA) method by\nmore than 6% in mAP on the Charades video dataset.", "field": [], "task": ["Action Classification", "Action Classification ", "Object Detection", "Weakly Supervised Object Detection"], "method": [], "dataset": ["HICO-DET", "Charades"], "metric": ["MAP"], "title": "Activity Driven Weakly Supervised Object Detection"} {"abstract": "Unsupervised cross-domain person re-identification (Re-ID) faces two key issues. One is the data distribution discrepancy between source and target domains, and the other is the lack of labelling information in target domain. They are addressed in this paper from the perspective of representation learning. For the first issue, we highlight the presence of camera-level sub-domains as a unique characteristic of person Re-ID, and develop camera-aware domain adaptation to reduce the discrepancy not only between source and target domains but also across these sub-domains. For the second issue, we exploit the temporal continuity in each camera of target domain to create discriminative information. This is implemented by dynamically generating online triplets within each batch, in order to maximally take advantage of the steadily improved feature representation in training process. Together, the above two methods give rise to a novel unsupervised deep domain adaptation framework for person Re-ID. Experiments and ablation studies on benchmark datasets demonstrate its superiority and interesting properties.", "field": [], "task": ["Domain Adaptation", "Person Re-Identification", "Representation Learning", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["Duke to Market", "Market to Duke"], "metric": ["rank-10", "mAP", "rank-5", "rank-1"], "title": "A Novel Unsupervised Camera-aware Domain Adaptation Framework for Person Re-identification"} {"abstract": "Temporal action localization is an important yet challenging problem. Given a\nlong, untrimmed video consisting of multiple action instances and complex\nbackground contents, we need not only to recognize their action categories, but\nalso to localize the start time and end time of each instance. Many\nstate-of-the-art systems use segment-level classifiers to select and rank\nproposal segments of pre-determined boundaries. However, a desirable model\nshould move beyond segment-level and make dense predictions at a fine\ngranularity in time to determine precise temporal boundaries. To this end, we\ndesign a novel Convolutional-De-Convolutional (CDC) network that places CDC\nfilters on top of 3D ConvNets, which have been shown to be effective for\nabstracting action semantics but reduce the temporal length of the input data.\nThe proposed CDC filter performs the required temporal upsampling and spatial\ndownsampling operations simultaneously to predict actions at the frame-level\ngranularity. It is unique in jointly modeling action semantics in space-time\nand fine-grained temporal dynamics. 
We train the CDC network in an end-to-end\nmanner efficiently. Our model not only achieves superior performance in\ndetecting actions in every frame, but also significantly boosts the precision\nof localizing temporal boundaries. Finally, the CDC network demonstrates a very\nhigh efficiency with the ability to process 500 frames per second on a single\nGPU server. We will update the camera-ready version and publish the source\ncodes online soon.", "field": [], "task": ["Action Localization", "Temporal Action Localization"], "method": [], "dataset": ["THUMOS\u201914"], "metric": ["mAP IOU@0.6", "mAP IOU@0.7", "mAP IOU@0.5", "mAP IOU@0.4", "mAP IOU@0.3"], "title": "CDC: Convolutional-De-Convolutional Networks for Precise Temporal Action Localization in Untrimmed Videos"} {"abstract": "In many machine learning tasks it is desirable that a model's prediction\ntransforms in an equivariant way under transformations of its input.\nConvolutional neural networks (CNNs) implement translational equivariance by\nconstruction; for other transformations, however, they are compelled to learn\nthe proper mapping. In this work, we develop Steerable Filter CNNs (SFCNNs)\nwhich achieve joint equivariance under translations and rotations by design.\nThe proposed architecture employs steerable filters to efficiently compute\norientation dependent responses for many orientations without suffering\ninterpolation artifacts from filter rotation. We utilize group convolutions\nwhich guarantee an equivariant mapping. In addition, we generalize He's weight\ninitialization scheme to filters which are defined as a linear combination of a\nsystem of atomic filters. Numerical experiments show a substantial enhancement\nof the sample complexity with a growing number of sampled filter orientations\nand confirm that the network generalizes learned patterns over orientations.\nThe proposed approach achieves state-of-the-art on the rotated MNIST benchmark\nand on the ISBI 2012 2D EM segmentation challenge.", "field": [], "task": ["Breast Tumour Classification", "Colorectal Gland Segmentation:", "Multi-tissue Nucleus Segmentation", "Rotated MNIST"], "method": [], "dataset": ["CRAG", "Kumar", "PCam"], "metric": ["F1-score", "Hausdorff Distance (mm)", "AUC", "Dice"], "title": "Learning Steerable Filters for Rotation Equivariant CNNs"} {"abstract": "As facial appearance is subject to significant intra-class variations caused\nby the aging process over time, age-invariant face recognition (AIFR) remains a\nmajor challenge in face recognition community. To reduce the intra-class\ndiscrepancy caused by the aging, in this paper we propose a novel approach\n(namely, Orthogonal Embedding CNNs, or OE-CNNs) to learn the age-invariant deep\nface features. Specifically, we decompose deep face features into two\northogonal components to represent age-related and identity-related features.\nAs a result, identity-related features that are robust to aging are then used\nfor AIFR. Besides, for complementing the existing cross-age datasets and\nadvancing the research in this field, we construct a brand-new large-scale\nCross-Age Face dataset (CAF). Extensive experiments conducted on the three\npublic domain face aging datasets (MORPH Album 2, CACD-VS and FG-NET) have\nshown the effectiveness of the proposed approach and the value of the\nconstructed CAF dataset on AIFR. 
Benchmarking our algorithm on one of the most\npopular general face recognition (GFR) datasets, LFW, additionally demonstrates\ncomparable generalization performance on GFR.", "field": [], "task": ["Age-Invariant Face Recognition", "Face Recognition"], "method": [], "dataset": ["MORPH Album2", "CACDVS"], "metric": ["Rank-1 Recognition Rate", "Accuracy"], "title": "Orthogonal Deep Features Decomposition for Age-Invariant Face Recognition"} {"abstract": "One major challenge for 3D pose estimation from a single RGB image is the\nacquisition of sufficient training data. In particular, collecting large\namounts of training data that contain unconstrained images and are annotated\nwith accurate 3D poses is infeasible. We therefore propose to use two\nindependent training sources. The first source consists of images with\nannotated 2D poses and the second source consists of accurate 3D motion capture\ndata. To integrate both sources, we propose a dual-source approach that\ncombines 2D pose estimation with efficient and robust 3D pose retrieval. In our\nexperiments, we show that our approach achieves state-of-the-art results and is\neven competitive when the skeleton structures of the two sources differ\nsubstantially.", "field": [], "task": ["3D Human Pose Estimation", "3D Pose Estimation", "Motion Capture", "Pose Estimation", "Pose Retrieval"], "method": [], "dataset": ["HumanEva-I", "Human3.6M"], "metric": ["Average MPJPE (mm)", "Mean Reconstruction Error (mm)", "Using 2D ground-truth joints"], "title": "A Dual-Source Approach for 3D Pose Estimation from a Single Image"} {"abstract": "An accurate abstractive summary of a document should contain all its salient\ninformation and should be logically entailed by the input document. We improve\nthese important aspects of abstractive summarization via multi-task learning\nwith the auxiliary tasks of question generation and entailment generation,\nwhere the former teaches the summarization model how to look for salient\nquestioning-worthy details, and the latter teaches the model how to rewrite a\nsummary which is a directed-logical subset of the input document. We also\npropose novel multi-task architectures with high-level (semantic)\nlayer-specific sharing across multiple encoder and decoder layers of the three\ntasks, as well as soft-sharing mechanisms (and show performance ablations and\nanalysis examples of each contribution). Overall, we achieve statistically\nsignificant improvements over the state-of-the-art on both the CNN/DailyMail\nand Gigaword datasets, as well as on the DUC-2002 transfer setup. We also\npresent several quantitative and qualitative analysis studies of our model's\nlearned saliency and entailment skills.", "field": [], "task": ["Abstractive Text Summarization", "Multi-Task Learning", "Question Generation"], "method": [], "dataset": ["CNN / Daily Mail", "GigaWord"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Soft Layer-Specific Multi-Task Summarization with Entailment and Question Generation"} {"abstract": "In this paper, we investigate the sentence summarization task that produces a summary from a source sentence. Neural sequence-to-sequence models have gained considerable success for this task, while most existing approaches only focus on improving the informativeness of the summary, ignoring correctness, i.e., that the summary should not contain unrelated information with respect to the source sentence. We argue that correctness is an essential requirement for summarization systems. 
Considering a correct summary is semantically entailed by the source sentence, we incorporate entailment knowledge into abstractive summarization models. We propose an entailment-aware encoder under multi-task framework (i.e., summarization generation and entailment recognition) and an entailment-aware decoder by entailment Reward Augmented Maximum Likelihood (RAML) training. Experiment results demonstrate that our models significantly outperform baselines from the aspects of informativeness and correctness.", "field": [], "task": ["Abstractive Text Summarization", "Sentence Summarization", "Text Summarization"], "method": [], "dataset": ["GigaWord", "DUC 2004 Task 1"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Ensure the Correctness of the Summary: Incorporate Entailment Knowledge into Abstractive Sentence Summarization"} {"abstract": "Deep ConvNets have been shown to be effective for the task of human pose\nestimation from single images. However, several challenging issues arise in the\nvideo-based case such as self-occlusion, motion blur, and uncommon poses with\nfew or no examples in training data sets. Temporal information can provide\nadditional cues about the location of body joints and help to alleviate these\nissues. In this paper, we propose a deep structured model to estimate a\nsequence of human poses in unconstrained videos. This model can be efficiently\ntrained in an end-to-end manner and is capable of representing appearance of\nbody joints and their spatio-temporal relationships simultaneously. Domain\nknowledge about the human body is explicitly incorporated into the network\nproviding effective priors to regularize the skeletal structure and to enforce\ntemporal consistency. The proposed end-to-end architecture is evaluated on two\nwidely used benchmarks (Penn Action dataset and JHMDB dataset) for video-based\npose estimation. Our approach significantly outperforms the existing\nstate-of-the-art methods.", "field": [], "task": ["Pose Estimation"], "method": [], "dataset": ["UPenn Action", "J-HMDB"], "metric": ["Mean PCK@0.2"], "title": "Thin-Slicing Network: A Deep Structured Model for Pose Estimation in Videos"} {"abstract": "We present a unified framework for understanding human social behaviors in\nraw image sequences. Our model jointly detects multiple individuals, infers\ntheir social actions, and estimates the collective actions with a single\nfeed-forward pass through a neural network. We propose a single architecture\nthat does not rely on external detection algorithms but rather is trained\nend-to-end to generate dense proposal maps that are refined via a novel\ninference scheme. The temporal consistency is handled via a person-level\nmatching Recurrent Neural Network. The complete model takes as input a sequence\nof frames and outputs detections along with the estimates of individual actions\nand collective activities. We demonstrate state-of-the-art performance of our\nalgorithm on multiple publicly available benchmarks.", "field": [], "task": ["Action Localization", "Activity Recognition", "Scene Understanding"], "method": [], "dataset": ["Volleyball"], "metric": ["Accuracy"], "title": "Social Scene Understanding: End-to-End Multi-Person Action Localization and Collective Activity Recognition"} {"abstract": "This paper presents a Neural Aggregation Network (NAN) for video face\nrecognition. 
The network takes a face video or face image set of a person with\na variable number of face images as its input, and produces a compact,\nfixed-dimension feature representation for recognition. The whole network is\ncomposed of two modules. The feature embedding module is a deep Convolutional\nNeural Network (CNN) which maps each face image to a feature vector. The\naggregation module consists of two attention blocks which adaptively aggregate\nthe feature vectors to form a single feature inside the convex hull spanned by\nthem. Due to the attention mechanism, the aggregation is invariant to the image\norder. Our NAN is trained with a standard classification or verification loss\nwithout any extra supervision signal, and we found that it automatically learns\nto advocate high-quality face images while repelling low-quality ones such as\nblurred, occluded and improperly exposed faces. The experiments on the IJB-A,\nYouTube Face, and Celebrity-1000 video face recognition benchmarks show that it\nconsistently outperforms naive aggregation methods and achieves the\nstate-of-the-art accuracy.", "field": [], "task": ["Face Recognition", "Face Verification"], "method": [], "dataset": ["IJB-A"], "metric": ["TAR @ FAR=0.01"], "title": "Neural Aggregation Network for Video Face Recognition"} {"abstract": "The last several years have seen intensive interest in exploring\nneural-network-based models for machine comprehension (MC) and question\nanswering (QA). In this paper, we approach the problems by closely modelling\nquestions in a neural network framework. We first introduce syntactic\ninformation to help encode questions. We then view and model different types of\nquestions and the information shared among them as an adaptation task and\npropose adaptation models for them. On the Stanford Question Answering Dataset\n(SQuAD), we show that these approaches can help attain better results over a\ncompetitive baseline.", "field": [], "task": ["Question Answering", "Reading Comprehension"], "method": [], "dataset": ["SQuAD1.1 dev", "SQuAD1.1"], "metric": ["EM", "F1"], "title": "Exploring Question Understanding and Adaptation in Neural-Network-Based Question Answering"} {"abstract": "This paper addresses classification tasks on a particular target domain in\nwhich labeled training data are only available from source domains different\nfrom (but related to) the target. Two closely related frameworks, domain\nadaptation and domain generalization, are concerned with such tasks, where the\nonly difference between those frameworks is the availability of the unlabeled\ntarget data: domain adaptation can leverage unlabeled target information, while\ndomain generalization cannot. We propose Scatter Component Analysis (SCA), a\nfast representation learning algorithm that can be applied to both domain\nadaptation and domain generalization. SCA is based on a simple geometrical\nmeasure, i.e., scatter, which operates on a reproducing kernel Hilbert space. SCA\nfinds a representation that trades between maximizing the separability of\nclasses, minimizing the mismatch between domains, and maximizing the\nseparability of data, each of which is quantified through scatter. The\noptimization problem of SCA can be reduced to a generalized eigenvalue problem,\nwhich results in a fast and exact solution. 
Comprehensive experiments on\nbenchmark cross-domain object recognition datasets verify that SCA performs\nmuch faster than several state-of-the-art algorithms and also provides\nstate-of-the-art classification accuracy in both domain adaptation and domain\ngeneralization. We also show that scatter can be used to establish a\ntheoretical generalization bound in the case of domain adaptation.", "field": [], "task": ["Domain Adaptation", "Domain Generalization", "Object Recognition", "Representation Learning"], "method": [], "dataset": ["Office-Caltech"], "metric": ["Average Accuracy"], "title": "Scatter Component Analysis: A Unified Framework for Domain Adaptation and Domain Generalization"} {"abstract": "We investigate the problem of person search in the wild in this work. Instead\nof comparing the query against all candidate regions generated in a query-blind\nmanner, we propose to recursively shrink the search area from the whole image\ntill achieving precise localization of the target person, by fully exploiting\ninformation from the query and contextual cues in every recursive search step.\nWe develop the Neural Person Search Machines (NPSM) to implement such recursive\nlocalization for person search. Benefiting from its neural search mechanism,\nNPSM is able to selectively shrink its focus from a loose region to a tighter\none containing the target automatically. In this process, NPSM employs an\ninternal primitive memory component to memorize the query representation which\nmodulates the attention and augments its robustness to other distracting\nregions. Evaluations on two benchmark datasets, CUHK-SYSU Person Search dataset\nand PRW dataset, have demonstrated that our method can outperform current\nstate-of-the-arts in both mAP and top-1 evaluation protocols.", "field": [], "task": ["Person Search"], "method": [], "dataset": ["CUHK-SYSU"], "metric": ["Rank-1", "MAP"], "title": "Neural Person Search Machines"} {"abstract": "We address the problem of automatically learning the main steps to complete a\ncertain task, such as changing a car tire, from a set of narrated instruction\nvideos. The contributions of this paper are three-fold. First, we develop a new\nunsupervised learning approach that takes advantage of the complementary nature\nof the input video and the associated narration. The method solves two\nclustering problems, one in text and one in video, applied one after each other\nand linked by joint constraints to obtain a single coherent sequence of steps\nin both modalities. Second, we collect and annotate a new challenging dataset\nof real-world instruction videos from the Internet. The dataset contains about\n800,000 frames for five different tasks that include complex interactions\nbetween people and objects, and are captured in a variety of indoor and outdoor\nsettings. Third, we experimentally demonstrate that the proposed method can\nautomatically discover, in an unsupervised manner, the main steps to achieve\nthe task and locate the steps in the input videos.", "field": [], "task": [], "method": [], "dataset": ["CrossTask"], "metric": ["Recall"], "title": "Unsupervised Learning from Narrated Instruction Videos"} {"abstract": "Existing counting methods often adopt regression-based approaches and cannot\nprecisely localize the target objects, which hinders the further analysis\n(e.g., high-level understanding and fine-grained classification). In addition,\nmost of prior work mainly focus on counting objects in static environments with\nfixed cameras. 
Motivated by the advent of unmanned flying vehicles (i.e.,\ndrones), we are interested in detecting and counting objects in such dynamic\nenvironments. We propose Layout Proposal Networks (LPNs) and spatial kernels to\nsimultaneously count and localize target objects (e.g., cars) in videos\nrecorded by the drone. Different from the conventional region proposal methods,\nwe leverage the spatial layout information (e.g., cars often park regularly)\nand introduce these spatially regularized constraints into our network to\nimprove the localization accuracy. To evaluate our counting method, we present\na new large-scale car parking lot dataset (CARPK) that contains nearly 90,000\ncars captured from different parking lots. To the best of our knowledge, it is\nthe first and the largest drone view dataset that supports object counting, and\nprovides the bounding box annotations.", "field": [], "task": ["Object Counting", "Region Proposal", "Regression"], "method": [], "dataset": ["CARPK"], "metric": ["MAE", "RMSE"], "title": "Drone-based Object Counting by Spatially Regularized Regional Proposal Network"} {"abstract": "In this paper, we propose a multi-task neural network to perform emotion-cause pair extraction in a unified model.", "field": [], "task": ["Emotion-Cause Pair Extraction", "Multi-Task Learning"], "method": [], "dataset": ["ECPE"], "metric": ["F1"], "title": "A Multi-Task Learning Neural Network for Emotion-Cause Pair Extraction"} {"abstract": "Scientific documents rely on both mathematics and text to communicate ideas.\nInspired by the topical correspondence between mathematical equations and word\ncontexts observed in scientific texts, we propose a novel topic model that\njointly generates mathematical equations and their surrounding text (TopicEq).\nUsing an extension of the correlated topic model, the context is generated from\na mixture of latent topics, and the equation is generated by an RNN that\ndepends on the latent topic activations. To experiment with this model, we\ncreate a corpus of 400K equation-context pairs extracted from a range of\nscientific articles from arXiv, and fit the model using a variational\nautoencoder approach. Experimental results show that this joint model\nsignificantly outperforms existing topic models and equation models for\nscientific texts. Moreover, we qualitatively show that the model effectively\ncaptures the relationship between topics and mathematics, enabling novel\napplications such as topic-aware equation generation, equation topic inference,\nand topic-aware alignment of mathematical symbols and words.", "field": [], "task": ["Topic Models"], "method": [], "dataset": ["arXiv"], "metric": ["Topic Coherence@50"], "title": "TopicEq: A Joint Topic and Mathematical Equation Model for Scientific Texts"} {"abstract": "In this work, we first show that on the widely used LibriSpeech benchmark, our transformer-based context-dependent connectionist temporal classification (CTC) system produces state-of-the-art results. We then show that using wordpieces as modeling units combined with CTC training, we can greatly simplify the engineering pipeline compared to conventional frame-based cross-entropy training by excluding all the GMM bootstrapping, decision tree building and force alignment steps, while still achieving very competitive word-error-rate. Additionally, using wordpieces as modeling units can significantly improve runtime efficiency since we can use larger stride without losing accuracy. 
We further confirm these findings on two internal VideoASR datasets: German, which is similar to English as a fusional language, and Turkish, which is an agglutinative language.", "field": [], "task": ["Speech Recognition"], "method": [], "dataset": ["LibriSpeech test-other", "LibriSpeech test-clean"], "metric": ["Word Error Rate (WER)"], "title": "Faster, Simpler and More Accurate Hybrid ASR Systems Using Wordpieces"} {"abstract": "In this paper we describe an extension of the Kaldi software toolkit to support neural-based language modeling, intended\r\nfor use in automatic speech recognition (ASR) and related tasks. We combine the use of subword features (letter n-grams) and one-hot encoding of frequent words so that the models can handle large vocabularies containing infrequent\r\nwords. We propose a new objective function that allows for training of unnormalized probabilities. An importance sampling based method is supported to speed up training when the vocabulary is large. Experimental results on five corpora show that Kaldi-RNNLM rivals other recurrent neural network language model toolkits both on performance and training speed.", "field": [], "task": ["Language Modelling", "Speech Recognition"], "method": [], "dataset": ["LibriSpeech test-other", "LibriSpeech test-clean"], "metric": ["Word Error Rate (WER)"], "title": "Neural Network Language Modeling with Letter-based Features and Importance Sampling"} {"abstract": "Performing controlled experiments on noisy data is essential in understanding deep learning across noise levels. Due to the lack of suitable datasets, previous research has only examined deep learning on controlled synthetic label noise, and real-world label noise has never been studied in a controlled setting. This paper makes three contributions. First, we establish the first benchmark of controlled real-world label noise from the web. This new benchmark enables us to study the web label noise in a controlled setting for the first time. The second contribution is a simple but effective method to overcome both synthetic and real noisy labels. We show that our method achieves the best result on our dataset as well as on two public benchmarks (CIFAR and WebVision). Third, we conduct the largest study by far into understanding deep neural networks trained on noisy labels across different noise levels, noise types, network architectures, and training settings. The data and code are released at the following link: http://www.lujiang.info/cnlw.html", "field": [], "task": [], "method": [], "dataset": ["mini WebVision 1.0"], "metric": ["Top-5 Accuracy", "ImageNet Top-1 Accuracy", "ImageNet Top-5 Accuracy", "Top-1 Accuracy"], "title": "Beyond Synthetic Noise: Deep Learning on Controlled Noisy Labels"} {"abstract": "In this paper, we present an algorithm for real-world license plate recognition (LPR) from a low-quality image. Our method is built upon a framework that includes denoising and rectification, and each task is conducted by Convolutional Neural Networks. Existing denoising and rectification have been treated separately as a single network in previous research. In contrast to the previous work, we here propose an end-to-end trainable network for image recovery, Single Noisy Image DEnoising and Rectification (SNIDER), which focuses on solving both the problems jointly. It overcomes those obstacles by designing a novel network to address the denoising and rectification jointly. 
Moreover, we propose a way to leverage optimization with the auxiliary tasks for multi-task fitting and novel training losses. Extensive experiments on two challenging LPR datasets demonstrate the effectiveness of our proposed method in recovering the high-quality license plate image from the low-quality one and show that the proposed method outperforms other state-of-the-art methods.", "field": [], "task": ["Denoising", "Image Denoising", "License Plate Recognition", "Rectification"], "method": [], "dataset": ["AOLP-RP"], "metric": ["Average Recall"], "title": "SNIDER: Single Noisy Image Denoising and Rectification for Improving License Plate Recognition"} {"abstract": "Drastic variations in illumination across surveillance cameras make the\nperson re-identification problem extremely challenging. Current large scale\nre-identification datasets have a significant number of training subjects, but\nlack diversity in lighting conditions. As a result, a trained model requires\nfine-tuning to become effective under an unseen illumination condition. To\nalleviate this problem, we introduce a new synthetic dataset that contains\nhundreds of illumination conditions. Specifically, we use 100 virtual humans\nilluminated with multiple HDR environment maps which accurately model realistic\nindoor and outdoor lighting. To achieve better accuracy in unseen illumination\nconditions we propose a novel domain adaptation technique that takes advantage\nof our synthetic data and performs fine-tuning in a completely unsupervised\nway. Our approach yields significantly higher accuracy than semi-supervised and\nunsupervised state-of-the-art methods, and is very competitive with supervised\ntechniques.", "field": [], "task": ["Domain Adaptation", "Person Re-Identification", "Unsupervised Person Re-Identification"], "method": [], "dataset": ["PRID2011"], "metric": ["Rank-1"], "title": "Domain Adaptation through Synthesis for Unsupervised Person Re-identification"} {"abstract": "Although recent neural conversation models have shown great potential, they\noften generate bland and generic responses. While various approaches have been\nexplored to diversify the output of the conversation model, the improvement\noften comes at the cost of decreased relevance. In this paper, we propose a\nSpaceFusion model to jointly optimize diversity and relevance that essentially\nfuses the latent space of a sequence-to-sequence model and that of an\nautoencoder model by leveraging novel regularization terms. As a result, our\napproach induces a latent space in which the distance and direction from the\npredicted response vector roughly match the relevance and diversity,\nrespectively. This property also lends itself well to an intuitive\nvisualization of the latent space. Both automatic and human evaluation results\ndemonstrate that the proposed approach brings significant improvement compared\nto strong baselines in both diversity and relevance.", "field": [], "task": ["Chatbot", "Dialogue Generation"], "method": [], "dataset": ["Reddit (multi-ref)"], "metric": ["interest (human)", "relevance (human)"], "title": "Jointly Optimizing Diversity and Relevance in Neural Response Generation"} {"abstract": "Deep Convolutional features extracted from a comprehensive labeled dataset\ncontain substantial representations which could be effectively used in a new\ndomain.
Despite the fact that generic features achieved good results in many\nvisual tasks, fine-tuning is required for pretrained deep CNN models to be more\neffective and provide state-of-the-art performance. Fine tuning using the\nbackpropagation algorithm in a supervised setting, is a time and resource\nconsuming process. In this paper, we present a new architecture and an approach\nfor unsupervised object recognition that addresses the above mentioned problem\nwith fine tuning associated with pretrained CNN-based supervised deep learning\napproaches while allowing automated feature extraction. Unlike existing works,\nour approach is applicable to general object recognition tasks. It uses a\npretrained (on a related domain) CNN model for automated feature extraction\npipelined with a Hopfield network based associative memory bank for storing\npatterns for classification purposes. The use of associative memory bank in our\nframework allows eliminating backpropagation while providing competitive\nperformance on an unseen dataset.", "field": [], "task": ["Few-Shot Image Classification", "Fine-Grained Image Classification", "Image Classification", "Object Recognition", "Semi-Supervised Image Classification"], "method": [], "dataset": ["Caltech-256 5-way (1-shot)", "CIFAR100 5-way (1-shot)", "Caltech-256", "Caltech-256, 1024 Labels", "CIFAR-10, 40 Labels", "CIFAR-10", "Caltech-101", "Caltech-101, 202 Labels"], "metric": ["Percentage error", "Accuracy", "Percentage correct", "Top-1 Error Rate"], "title": "Unsupervised Learning using Pretrained CNN and Associative Memory Bank"} {"abstract": "We present DenseRaC, a novel end-to-end framework for jointly estimating 3D human pose and body shape from a monocular RGB image. Our two-step framework takes the body pixel-to-surface correspondence map (i.e., IUV map) as proxy representation and then performs estimation of parameterized human pose and shape. Specifically, given an estimated IUV map, we develop a deep neural network optimizing 3D body reconstruction losses and further integrating a render-and-compare scheme to minimize differences between the input and the rendered output, i.e., dense body landmarks, body part masks, and adversarial priors. To boost learning, we further construct a large-scale synthetic dataset (MOCA) utilizing web-crawled Mocap sequences, 3D scans and animations. The generated data covers diversified camera views, human actions and body shapes, and is paired with full ground truth. Our model jointly learns to represent the 3D human body from hybrid datasets, mitigating the problem of unpaired training data. Our experiments show that DenseRaC obtains superior performance against state of the art on public benchmarks of various humanrelated tasks.", "field": [], "task": ["3D Human Pose Estimation"], "method": [], "dataset": ["Human3.6M"], "metric": ["Average MPJPE (mm)"], "title": "DenseRaC: Joint 3D Pose and Shape Estimation by Dense Render-and-Compare"} {"abstract": "We introduce a new dataset, Human3.6M, of 3.6 Million accurate 3D Human poses, acquired by recording the performance of 5 female and 6 male subjects, under 4 different viewpoints, for training realistic human sensing systems and for evaluating the next generation of human pose estimation models and algorithms. 
Besides increasing the size of the datasets in the current state-of-the-art by several orders of magnitude, we also aim to complement such datasets with a diverse set of motions and poses encountered as part of typical human activities (taking photos, talking on the phone, posing, greeting, eating, etc.), with additional synchronized image, human motion capture, and time of flight (depth) data, and with accurate 3D body scans of all the subject actors involved. We also provide controlled mixed reality evaluation scenarios where 3D human models are animated using motion capture and inserted using correct 3D geometry, in complex real environments, viewed with moving cameras, and under occlusion. Finally, we provide a set of large-scale statistical models and detailed evaluation baselines for the dataset illustrating its diversity and the scope for improvement by future work in the research community. Our experiments show that our best large-scale model can leverage our full training set to obtain a 20% improvement in performance compared to a training set of the scale of the largest existing public dataset for this problem. Yet the potential for improvement by leveraging higher capacity, more complex models with our large dataset, is substantially vaster and should stimulate future research. The dataset together with code for the associated large-scale learning models, features, visualization tools, as well as the evaluation server, is available online at http://vision.imar.ro/human3.6m.", "field": [], "task": ["3D Human Pose Estimation", "Motion Capture", "Pose Estimation"], "method": [], "dataset": ["Human3.6M"], "metric": ["Average MPJPE (mm)"], "title": "Human3.6m: Large scale datasets and predictive methods for 3D human sensing in natural environments"} {"abstract": "Word Sense Disambiguation models exist in many flavors. Even though supervised ones tend to perform best in terms of accuracy, they often lose ground to more flexible knowledge-based solutions, which do not require training by a word expert for every disambiguation target. To bridge this gap we adopt a different perspective and rely on sequence learning to frame the disambiguation problem: we propose and study in depth a series of end-to-end neural architectures directly tailored to the task, from bidirectional Long Short-Term Memory to encoder-decoder models. Our extensive evaluation over standard benchmarks and in multiple languages shows that sequence learning enables more versatile all-words models that consistently lead to state-of-the-art results, even against word experts with engineered features.", "field": [], "task": ["Information Retrieval", "Machine Translation", "Word Sense Disambiguation"], "method": [], "dataset": ["Supervised:"], "metric": ["Senseval 2", "Senseval 3", "SemEval 2013", "SemEval 2007", "SemEval 2015"], "title": "Neural Sequence Learning Models for Word Sense Disambiguation"} {"abstract": "Convolutional neural networks (CNNs) have recently emerged as a popular\nbuilding block for natural language processing (NLP). Despite their success,\nmost existing CNN models employed in NLP share the same learned (and static)\nset of filters for all input sentences. In this paper, we consider an approach\nof using a small meta network to learn context-sensitive convolutional filters\nfor text processing. The role of meta network is to abstract the contextual\ninformation of a sentence or document into a set of input-aware filters. 
We\nfurther generalize this framework to model sentence pairs, where a\nbidirectional filter generation mechanism is introduced to encapsulate\nco-dependent sentence representations. In our benchmarks on four different\ntasks, including ontology classification, sentiment analysis, answer sentence\nselection, and paraphrase identification, our proposed model, a modified CNN\nwith context-sensitive filters, consistently outperforms the standard CNN and\nattention-based CNN baselines. By visualizing the learned context-sensitive\nfilters, we further validate and rationalize the effectiveness of proposed\nframework.", "field": [], "task": ["Paraphrase Identification", "Sentiment Analysis", "Text Classification"], "method": [], "dataset": ["Yelp Binary classification", "DBpedia"], "metric": ["Error"], "title": "Learning Context-Sensitive Convolutional Filters for Text Processing"} {"abstract": "Occlusion is a key problem in 3D human pose estimation from a monocular video. To address this problem, we introduce an occlusion-aware deep-learning framework. By employing estimated 2D confidence heatmaps of keypoints and an optical-flow consistency constraint, we filter out the unreliable estimations of occluded keypoints. When occlusion occurs, we have incomplete 2D keypoints and feed them to our 2D and 3D temporal convolutional networks (2D and 3D TCNs) that enforce temporal smoothness to produce a complete 3D pose. By using incomplete 2D keypoints, instead of complete but incorrect ones, our networks are less affected by the error-prone estimations of occluded keypoints. Training the occlusion-aware 3D TCN requires pairs of a 3D pose and a 2D pose with occlusion labels. As no such a dataset is available, we introduce a \"Cylinder Man Model\" to approximate the occupation of body parts in 3D space. By projecting the model onto a 2D plane in different viewing angles, we obtain and label the occluded keypoints, providing us plenty of training data. In addition, we use this model to create a pose regularization constraint, preferring the 2D estimations of unreliable keypoints to be occluded. Our method outperforms state-of-the-art methods on Human 3.6M and HumanEva-I datasets.\r", "field": [], "task": ["3D Human Pose Estimation", "Optical Flow Estimation", "Pose Estimation"], "method": [], "dataset": ["HumanEva-I", "Human3.6M"], "metric": ["Mean Reconstruction Error (mm)", "Average MPJPE (mm)", "Multi-View or Monocular", "Using 2D ground-truth joints"], "title": "Occlusion-Aware Networks for 3D Human Pose Estimation in Video"} {"abstract": "In this paper, we propose a pose grammar to tackle the problem of 3D human pose estimation from a monocular RGB image. Our model takes estimated 2D pose as the input and learns a generalized 2D-3D mapping function to leverage into 3D pose. The proposed model consists of a base network which efficiently captures pose-aligned features and a hierarchy of Bi-directional RNNs (BRNNs) on the top to explicitly incorporate a set of knowledge regarding human body configuration (i.e., kinematics, symmetry,\r\nmotor coordination). The proposed model thus enforces high-level constraints over human poses. In learning, we develop a data augmentation algorithm to further improve model robustness against appearance variations and cross-view generalization ability. We validate our method on public 3D human pose benchmarks and propose a new evaluation protocol working on cross-view setting to verify the generalization capability of different methods. 
We empirically observe that most state-of-the-art methods encounter difficulty under such setting while our method can well handle such challenges.", "field": [], "task": ["3D Human Pose Estimation", "3D Pose Estimation", "Data Augmentation", "Pose Estimation"], "method": [], "dataset": ["HumanEva-I"], "metric": ["Mean Reconstruction Error (mm)"], "title": "Learning Pose Grammar for Monocular 3D Pose Estimation"} {"abstract": "Due to the phenomenon of \"posterior collapse,\" current latent variable\ngenerative models pose a challenging design choice that either weakens the\ncapacity of the decoder or requires augmenting the objective so it does not\nonly maximize the likelihood of the data. In this paper, we propose an\nalternative that utilizes the most powerful generative models as decoders,\nwhilst optimising the variational lower bound all while ensuring that the\nlatent variables preserve and encode useful information. Our proposed\n$\\delta$-VAEs achieve this by constraining the variational family for the\nposterior to have a minimum distance to the prior. For sequential latent\nvariable models, our approach resembles the classic representation learning\napproach of slow feature analysis. We demonstrate the efficacy of our approach\nat modeling text on LM1B and modeling images: learning representations,\nimproving sample quality, and achieving state of the art log-likelihood on\nCIFAR-10 and ImageNet $32\\times 32$.", "field": [], "task": ["Image Generation", "Latent Variable Models", "Representation Learning"], "method": [], "dataset": ["ImageNet 32x32", "CIFAR-10"], "metric": ["bits/dimension", "bpd"], "title": "Preventing Posterior Collapse with delta-VAEs"} {"abstract": "Learning from one or few visual examples is one of the key capabilities of humans since early infancy, but is still a significant challenge for modern AI systems. While considerable progress has been achieved in few-shot learning from a few image examples, much less attention has been given to the verbal descriptions that are usually provided to infants when they are presented with a new object. In this paper, we focus on the role of additional semantics that can significantly facilitate few-shot visual learning. Building upon recent advances in few-shot learning with additional semantic information, we demonstrate that further improvements are possible by combining multiple and richer semantics (category labels, attributes, and natural language descriptions). Using these ideas, we offer the community new results on the popular miniImageNet and CUB few-shot benchmarks, comparing favorably to the previous state-of-the-art results for both visual only and visual plus semantics-based approaches. We also performed an ablation study investigating the components and design choices of our approach.", "field": [], "task": ["Few-Shot Image Classification", "Few-Shot Learning"], "method": [], "dataset": ["Mini-ImageNet - 1-Shot Learning"], "metric": ["Accuracy"], "title": "Baby steps towards few-shot learning with multiple semantics"} {"abstract": "Predicting the future trajectories of multiple interacting agents in a scene has become an increasingly important problem for many different applications ranging from control of autonomous vehicles and social robots to security and surveillance. This problem is compounded by the presence of social interactions between humans and their physical interactions with the scene. 
While the existing literature has explored some of these cues, they mainly ignored the multimodal nature of each human's future trajectory. In this paper, we present Social-BiGAT, a graph-based generative adversarial network that generates realistic, multimodal trajectory predictions by better modelling the social interactions of pedestrians in a scene. Our method is based on a graph attention network (GAT) that learns reliable feature representations that encode the social interactions between humans in the scene, and a recurrent encoder-decoder architecture that is trained adversarially to predict, based on the features, the humans' paths. We explicitly account for the multimodal nature of the prediction problem by forming a reversible transformation between each scene and its latent noise vector, as in Bicycle-GAN. We show that our framework achieves state-of-the-art performance compared to several baselines on existing trajectory forecasting benchmarks.", "field": [], "task": ["Autonomous Vehicles", "Trajectory Forecasting", "Trajectory Prediction"], "method": [], "dataset": ["ETH/UCY"], "metric": ["ADE-8/12"], "title": "Social-BiGAT: Multimodal Trajectory Forecasting using Bicycle-GAN and Graph Attention Networks"} {"abstract": "Variational autoencoders (VAEs) are one of the powerful likelihood-based generative models with applications in various domains. However, they struggle to generate high-quality images, especially when samples are obtained from the prior without any tempering. One explanation for VAEs' poor generative quality is the prior hole problem: the prior distribution fails to match the aggregate approximate posterior. Due to this mismatch, there exist areas in the latent space with high density under the prior that do not correspond to any encoded image. Samples from those areas are decoded to corrupted images. To tackle this issue, we propose an energy-based prior defined by the product of a base prior distribution and a reweighting factor, designed to bring the base closer to the aggregate posterior. We train the reweighting factor by noise contrastive estimation, and we generalize it to hierarchical VAEs with many latent variable groups. Our experiments confirm that the proposed noise contrastive priors improve the generative performance of state-of-the-art VAEs by a large margin on the MNIST, CIFAR-10, CelebA 64, and CelebA HQ 256 datasets.", "field": [], "task": ["Image Generation"], "method": [], "dataset": ["CelebA 256x256", "CelebA 64x64", "CIFAR-10"], "metric": ["FID"], "title": "NCP-VAE: Variational Autoencoders with Noise Contrastive Priors"} {"abstract": "We present Cross-View Training (CVT), a simple but effective method for deep semi-supervised learning. On labeled examples, the model is trained with standard cross-entropy loss. On an unlabeled example, the model first performs inference (acting as a \"teacher\") to produce soft targets. The model then learns from these soft targets (acting as a \"student\"). We deviate from prior work by adding multiple auxiliary student prediction layers to the model. The input to each student layer is a sub-network of the full model that has a restricted view of the input (e.g., only seeing one region of an image). The students can learn from the teacher (the full model) because the teacher sees more of each example. Concurrently, the students improve the quality of the representations used by the teacher as they learn to make predictions with limited data.
When combined with Virtual Adversarial Training, CVT improves upon the current state-of-the-art on semi-supervised CIFAR-10 and semi-supervised SVHN. We also apply CVT to train models on five natural language processing tasks using hundreds of millions of sentences of unlabeled data. On all tasks CVT substantially outperforms supervised learning alone, resulting in models that improve upon or are competitive with the current state-of-the-art.\n", "field": [], "task": ["Chunking"], "method": [], "dataset": ["CoNLL 2000"], "metric": ["Exact Span F1"], "title": "Cross-View Training for Semi-Supervised Learning"} {"abstract": "Interactive video object segmentation (iVOS) aims at efficiently harvesting high-quality segmentation masks of the target object in a video with user interactions. Most previous state-of-the-arts tackle the iVOS with two independent networks for conducting user interaction and temporal propagation, respectively, leading to inefficiencies during the inference stage. In this work, we propose a unified framework, named Memory Aggregation Networks (MA-Net), to address the challenging iVOS in a more efficient way. Our MA-Net integrates the interaction and the propagation operations into a single network, which significantly promotes the efficiency of iVOS in the scheme of multi-round interactions. More importantly, we propose a simple yet effective memory aggregation mechanism to record the informative knowledge from the previous interaction rounds, improving the robustness in discovering challenging objects of interest greatly. We conduct extensive experiments on the validation set of DAVIS Challenge 2018 benchmark. In particular, our MA-Net achieves the J@60 score of 76.1% without any bells and whistles, outperforming the state-of-the-arts with more than 2.7%.", "field": [], "task": ["Interactive Video Object Segmentation", "Semantic Segmentation", "Video Object Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["DAVIS 2017"], "metric": ["AUC-J", "J@60s"], "title": "Memory Aggregation Networks for Efficient Interactive Video Object Segmentation"} {"abstract": "Generative models can be seen as the swiss army knives of machine learning, as many problems can be written probabilistically in terms of the distribution of the data, including prediction, reconstruction, imputation and simulation. One of the most promising directions for unsupervised learning may lie in Deep Learning methods, given their success in supervised learning. However, one of the current problems with deep unsupervised learning methods, is that they often are harder to scale. As a result there are some easier, more scalable shallow methods, such as the Gaussian Mixture Model and the Student-t Mixture Model, that remain surprisingly competitive. In this paper we propose a new scalable deep generative model for images, called the Deep Gaussian Mixture Model, that is a straightforward but powerful generalization of GMMs to multiple layers. The parametrization of a Deep GMM allows it to efficiently capture products of variations in natural images. We propose a new EM-based algorithm that scales well to large datasets, and we show that both the Expectation and the Maximization steps can easily be distributed over multiple machines. 
In our density estimation experiments we show that deeper GMM architectures generalize better than more shallow ones, with results in the same ballpark as the state of the art.", "field": [], "task": ["Density Estimation", "Image Generation", "Imputation"], "method": [], "dataset": ["CIFAR-10"], "metric": ["bits/dimension"], "title": "Factoring Variations in Natural Images with Deep Gaussian Mixture Models"} {"abstract": "In this paper, we propose a novel end-to-end unsupervised deep domain adaptation model for adaptive object detection by exploiting multi-label object recognition as a dual auxiliary task. The model exploits multi-label prediction to reveal the object category information in each image and then uses the prediction results to perform conditional adversarial global feature alignment, such that the multi-modal structure of image features can be tackled to bridge the domain divergence at the global feature level while preserving the discriminability of the features. Moreover, we introduce a prediction consistency regularization mechanism to assist object detection, which uses the multi-label prediction results as an auxiliary regularization information to ensure consistent object category discoveries between the object recognition task and the object detection task. Experiments are conducted on a few benchmark datasets and the results show the proposed model outperforms the state-of-the-art comparison methods.", "field": [], "task": ["Domain Adaptation", "Image-to-Image Translation", "Object Detection", "Object Recognition", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["Cityscapes-to-Foggy Cityscapes", "Cityscapes to Foggy Cityscapes"], "metric": ["mAP", "mAP@0.5"], "title": "Adaptive Object Detection with Dual Multi-Label Prediction"} {"abstract": "Weakly supervised object detection (WSOD) that only needs image-level annotations has obtained much attention recently. By combining convolutional neural network with multiple instance learning method, Multiple Instance Detection Network (MIDN) has become the most popular method to address the WSOD problem and been adopted as the initial model in many works. We argue that MIDN inclines to converge to the most discriminative object parts, which limits the performance of methods based on it. In this paper, we propose a novel Coupled Multiple Instance Detection Network (C-MIDN) to address this problem. Specifically, we use a pair of MIDNs, which work in a complementary manner with proposal removal. The localization information of the MIDNs is further coupled to obtain tighter bounding boxes and localize multiple objects. We also introduce a Segmentation Guided Proposal Removal (SGPR) algorithm to guarantee the MIL constraint after the removal and ensure the robustness of C-MIDN. Through a simple implementation of the C-MIDN with online detector refinement, we obtain 53.6% and 50.3% mAP on the challenging PASCAL VOC 2007 and 2012 benchmarks respectively, which significantly outperform the previous state-of-the-arts.\r", "field": [], "task": ["Multiple Instance Learning", "Object Detection", "Weakly Supervised Object Detection"], "method": [], "dataset": ["PASCAL VOC 2007", "PASCAL VOC 2012 test"], "metric": ["MAP"], "title": "C-MIDN: Coupled Multiple Instance Detection Network With Segmentation Guidance for Weakly Supervised Object Detection"} {"abstract": "The reservoir computing paradigm is employed to classify heartbeat anomalies online based on electrocardiogram signals. 
Inspired by the principles of information processing in the brain, reservoir computing provides a framework to design, train, and analyze recurrent neural networks (RNNs) for processing time-dependent information. Due to its computational efficiency and the fact that training amounts to a simple linear regression, this supervised learning algorithm has been variously considered as a strategy to implement useful computations not only on digital computers but also on emerging unconventional hardware platforms such as neuromorphic microchips. Here, this biological-inspired learning framework is exploited to devise an accurate patient-adaptive model that has the potential to be integrated into wearable cardiac events monitoring devices. The proposed patient-customized model was trained and tested on ECG recordings selected from the MIT-BIH arrhythmia database. Restrictive inclusion criteria were used to conduct the study only on ECGs including, at least, two classes of heartbeats with highly unequal number of instances. The results of extensive simulations showed this model not only provides accurate, cheap and fast patient-customized heartbeat classifier but also circumvents the problem of \"imbalanced classes\" when the readout weights are trained using weighted ridge-regression.", "field": [], "task": ["Arrhythmia Detection", "ECG Classification", "Electrocardiography (ECG)", "Regression"], "method": [], "dataset": ["MIT-BIH AR"], "metric": ["Accuracy (Inter-Patient)"], "title": "Reservoir Computing Models for Patient-Adaptable ECG Monitoring in Wearable Devices"} {"abstract": "In recent years, the problem of associating a sentence with an image has gained a lot of attention. This work continues to push the envelope and makes further progress in the performance of image annotation and image search by a sentence tasks. In this work, we are using the Fisher Vector as a sentence representation by pooling the word2vec embedding of each word in the sentence. The Fisher Vector is typically taken as the gradients of the log-likelihood of descriptors, with respect to the parameters of a Gaussian Mixture Model (GMM). In this work we present two other Mixture Models and derive their Expectation-Maximization and Fisher Vector expressions. The first is a Laplacian Mixture Model (LMM), which is based on the Laplacian distribution. The second Mixture Model presented is a Hybrid Gaussian-Laplacian Mixture Model (HGLMM) which is based on a weighted geometric mean of the Gaussian and Laplacian distribution. Finally, by using the new Fisher Vectors derived from HGLMMs to represent sentences, we achieve state-of-the-art results for both the image annotation and the image search by a sentence tasks on four benchmarks: Pascal1K, Flickr8K, Flickr30K, and COCO.", "field": [], "task": ["Image Retrieval", "Word Embeddings"], "method": [], "dataset": ["YouCook2"], "metric": ["text-to-video R@1", "text-to-video R@10", "text-to-video Median Rank", "text-to-video R@5"], "title": "Associating Neural Word Embeddings With Deep Image Representations Using Fisher Vectors"} {"abstract": "Current deep learning results on video generation are limited while there are\nonly a few first results on video prediction and no relevant significant\nresults on video completion. This is due to the severe ill-posedness inherent\nin these three problems. 
In this paper, we focus on human action videos, and\npropose a general, two-stage deep framework to generate human action videos\nwith no constraints or arbitrary number of constraints, which uniformly address\nthe three problems: video generation given no input frames, video prediction\ngiven the first few frames, and video completion given the first and last\nframes. To make the problem tractable, in the first stage we train a deep\ngenerative model that generates a human pose sequence from random noise. In the\nsecond stage, a skeleton-to-image network is trained, which is used to generate\na human action video given the complete human pose sequence generated in the\nfirst stage. By introducing the two-stage strategy, we sidestep the original\nill-posed problems while producing for the first time high-quality video\ngeneration/prediction/completion results of much longer duration. We present\nquantitative and qualitative evaluation to show that our two-stage approach\noutperforms state-of-the-art methods in video generation, prediction and video\ncompletion. Our video result demonstration can be viewed at\nhttps://iamacewhite.github.io/supp/index.html", "field": [], "task": ["Video Generation", "Video Prediction"], "method": [], "dataset": ["Human3.6M"], "metric": ["MMD"], "title": "Deep Video Generation, Prediction and Completion of Human Action Sequences"} {"abstract": "A semi-supervised online video object segmentation algorithm, which accepts user annotations about a target object at the first frame, is proposed in this work. We propagate the segmentation labels at the previous frame to the current frame using optical flow vectors. However, the propagation is error-prone. Therefore, we develop the convolutional trident network (CTN), which has three decoding branches: separative, definite foreground, and definite background decoders. Then, we perform Markov random field optimization based on outputs of the three decoders. We sequentially carry out these processes from the second to the last frames to extract a segment track of the target object. Experimental results demonstrate that the proposed algorithm significantly outperforms the state-of-the-art conventional algorithms on the DAVIS benchmark dataset.\r", "field": [], "task": ["Optical Flow Estimation", "Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "Online Video Object Segmentation via Convolutional Trident Network"} {"abstract": "We propose a technique that propagates information forward through video\ndata. The method is conceptually simple and can be applied to tasks that\nrequire the propagation of structured information, such as semantic labels,\nbased on video content. We propose a 'Video Propagation Network' that processes\nvideo frames in an adaptive manner. The model is applied online: it propagates\ninformation forward without the need to access future frames. In particular we\ncombine two components, a temporal bilateral network for dense and video\nadaptive filtering, followed by a spatial network to refine features and\nincreased flexibility. 
We present experiments on video object segmentation and\nsemantic video segmentation and show increased performance comparing to the\nbest previous task-specific methods, while having favorable runtime.\nAdditionally we demonstrate our approach on an example regression task of color\npropagation in a grayscale video.", "field": [], "task": ["Regression", "Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Segmentation", "Video Semantic Segmentation", "Visual Object Tracking"], "method": [], "dataset": ["DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "Video Propagation Networks"} {"abstract": "In this work, we propose a novel approach to video segmentation that operates in bilateral space. We design a new energy on the vertices of a regularly sampled spatio-temporal bilateral grid, which can be solved efficiently using a standard graph cut label assignment. Using a bilateral formulation, the energy that we minimize implicitly approximates long-range, spatio-temporal connections between pixels while still containing only a small number of variables and only local graph edges. We compare to a number of recent methods, and show that our approach achieves state-of-the-art results on multiple benchmarks in a fraction of the runtime. Furthermore, our method scales linearly with image size, allowing for interactive feedback on real-world high resolution video.", "field": [], "task": ["Semi-Supervised Video Object Segmentation", "Video Segmentation", "Video Semantic Segmentation"], "method": [], "dataset": ["DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "Bilateral Space Video Segmentation"} {"abstract": "Noise Contrastive Estimation (NCE) is a powerful parameter estimation method\nfor log-linear models, which avoids calculation of the partition function or\nits derivatives at each training step, a computationally demanding step in many\ncases. It is closely related to negative sampling methods, now widely used in\nNLP. This paper considers NCE-based estimation of conditional models.\nConditional models are frequently encountered in practice; however there has\nnot been a rigorous theoretical analysis of NCE in this setting, and we will\nargue there are subtle but important questions when generalizing NCE to the\nconditional case. In particular, we analyze two variants of NCE for conditional\nmodels: one based on a classification objective, the other based on a ranking\nobjective. We show that the ranking-based variant of NCE gives consistent\nparameter estimates under weaker assumptions than the classification-based\nmethod; we analyze the statistical efficiency of the ranking-based and\nclassification-based variants of NCE; finally we describe experiments on\nsynthetic data and language modeling showing the effectiveness and trade-offs\nof both methods.", "field": [], "task": ["Language Modelling", "Question Answering"], "method": [], "dataset": ["WikiQA"], "metric": ["MRR", "MAP"], "title": "Noise Contrastive Estimation and Negative Sampling for Conditional Models: Consistency and Statistical Efficiency"} {"abstract": "Automatic recognition of overlapped speech remains a highly challenging task to date. 
Motivated by the bimodal nature of human speech perception, this paper investigates the use of audio-visual technologies for overlapped speech recognition. Three issues associated with the construction of audio-visual speech recognition (AVSR) systems are addressed. First, the basic architecture designs of AVSR systems, i.e., end-to-end and hybrid, are investigated. Second, purposefully designed modality fusion gates are used to robustly integrate the audio and visual features. Third, in contrast to a traditional pipelined architecture containing explicit speech separation and recognition components, a streamlined and integrated AVSR system optimized consistently using the lattice-free MMI (LF-MMI) discriminative criterion is also proposed. The proposed LF-MMI time-delay neural network (TDNN) system establishes the state-of-the-art for the LRS2 dataset. Experiments on overlapped speech simulated from the LRS2 dataset suggest the proposed AVSR system outperformed the audio-only baseline LF-MMI DNN system by up to 29.98\\% absolute in word error rate (WER) reduction, and produced recognition performance comparable to a more complex pipelined system. Consistent performance improvements of 4.89\\% absolute in WER reduction over the baseline AVSR system using feature fusion are also obtained.", "field": [], "task": ["Audio-Visual Speech Recognition", "Lipreading", "Speech Recognition", "Speech Separation", "Visual Speech Recognition"], "method": [], "dataset": ["LRS2"], "metric": ["Word Error Rate (WER)"], "title": "Audio-visual Recognition of Overlapped speech for the LRS2 dataset"} {"abstract": "Local keypoint matching is an important step for computer vision based tasks. In recent years, Deep Convolutional Neural Network (CNN) based strategies have been employed to learn descriptor generation to enhance keypoint matching accuracy. Recent state-of-the-art works in this direction primarily rely upon a triplet based loss function (and its variations) utilizing three samples: an anchor, a positive and a negative. In this work we propose a novel \u201cTwin Negative Mining\u201d based sampling strategy coupled with a Quad loss function to train a deep neural network based pipeline (Twin-Net) for generating a robust descriptor that provides an increased discriminatory power to differentiate between patches that do not correspond to each other. Our sampling strategy and choice of loss function are aimed at placing an upper bound such that descriptors of two patches representing the same location could be at worst no more dissimilar than the descriptors of two similar looking patches that do not belong to the same 3D location. This results in an increase in the generalization capability of the network and outperforms its existing counterparts when trained over the same datasets. Twin-Net outputs a 128-dimensional descriptor and uses L2 distance as the similarity metric, and hence conforms to the classical descriptor matching pipelines such as that of SIFT. Our results on Brown and HPatches datasets demonstrate Twin-Net's consistently better performance as well as better discriminatory and generalization capability as compared to the state-of-the-art.", "field": [], "task": ["Patch Matching"], "method": [], "dataset": ["HPatches", "Brown Dataset"], "metric": ["Patch Verification", "Patch Matching", "FPR95", "Patch Retrieval"], "title": "Twin-Net Descriptor: Twin Negative Mining With Quad Loss for Patch-Based Matching"} {"abstract": "Part-of-Speech (POS) tagging for Twitter has received considerable attention in recent years.
Because most POS tagging methods are based on supervised models, they usually require a large amount of labeled data for training. However, the existing labeled datasets for Twitter are much smaller than those for newswire text. Hence, to help POS tagging for Twitter, most domain adaptation methods try to leverage newswire datasets by learning the shared features between the two domains. However, from a linguistic perspective, Twitter users not only tend to mimic the formal expressions of traditional media, like news, but they also appear to be developing linguistically informal styles. Therefore, POS tagging for the formal Twitter context can be learned together with the newswire dataset, while POS tagging for the informal Twitter context should be learned separately. To achieve this task, in this work, we propose a hypernetwork-based method to generate different parameters to separately model contexts with different expression styles. Experimental results on three different datasets show that our approach achieves better performance than state-of-the-art methods in most cases.", "field": [], "task": ["Domain Adaptation", "Multi-Task Learning", "Part-Of-Speech Tagging", "Stock Prediction"], "method": [], "dataset": ["Ritter", "ARK"], "metric": ["Acc"], "title": "Transferring from Formal Newswire Domain with Hypernet for Twitter POS Tagging"} {"abstract": "Differently from computer vision systems which require explicit supervision,\nhumans can learn facial expressions by observing people in their environment.\nIn this paper, we look at how similar capabilities could be developed in\nmachine vision. As a starting point, we consider the problem of relating facial\nexpressions to objectively measurable events occurring in videos. In\nparticular, we consider a gameshow in which contestants play to win significant\nsums of money. We extract events affecting the game and corresponding facial\nexpressions objectively and automatically from the videos, obtaining large\nquantities of labelled data for our study. We also develop, using benchmarks\nsuch as FER and SFEW 2.0, state-of-the-art deep neural networks for facial\nexpression recognition, showing that pre-training on face verification data can\nbe highly beneficial for this task. Then, we extend these models to use facial\nexpressions to predict events in videos and learn nameable expressions from\nthem. The dataset and emotion recognition models are available at\nhttp://www.robots.ox.ac.uk/~vgg/data/facevalue", "field": [], "task": ["Emotion Recognition", "Face Verification", "Facial Expression Recognition"], "method": [], "dataset": [" Static Facial Expressions in the Wild"], "metric": ["Accuracy"], "title": "Learning Grimaces by Watching TV"} {"abstract": "This paper presents a new semi-supervised framework with convolutional neural\nnetworks (CNNs) for text categorization. Unlike the previous approaches that\nrely on word embeddings, our method learns embeddings of small text regions\nfrom unlabeled data for integration into a supervised CNN. The proposed scheme\nfor embedding learning is based on the idea of two-view semi-supervised\nlearning, which is intended to be useful for the task of interest even though\nthe training is done on unlabeled data. 
Our models achieve better results than\nprevious approaches on sentiment classification and topic classification tasks.", "field": [], "task": ["Sentiment Analysis", "Text Categorization", "Text Classification", "Word Embeddings"], "method": [], "dataset": ["IMDb"], "metric": ["Accuracy (2 classes)", "Accuracy (10 classes)"], "title": "Semi-supervised Convolutional Neural Networks for Text Categorization via Region Embedding"} {"abstract": "Document-level sentiment classification aims to assign the user reviews a sentiment polarity. Previous methods either just utilized the document content without consideration of user and product information, or did not comprehensively consider what roles the three kinds of information play in text modeling. In this paper, to reasonably use all the information, we present the idea that user, product and their combination can all influence the generation of attentions to words and sentences, when judging the sentiment of a document. With this idea, we propose a cascading multiway attention (CMA) model, where multiple ways of using user and product information are cascaded to influence the generation of attentions on the word and sentence layers. Then, sentences and documents are well modeled by multiple representation vectors, which provide rich information for sentiment classification. Experiments on IMDB and Yelp datasets demonstrate the effectiveness of our model.", "field": [], "task": ["Product Recommendation", "Sentiment Analysis"], "method": [], "dataset": ["User and product information"], "metric": ["Yelp 2014 (Acc)", "Yelp 2013 (Acc)", "IMDB (Acc)"], "title": "Cascading Multiway Attentions for Document-level Sentiment Classification"} {"abstract": "Weakly-supervised object detection has attracted much attention lately, since it does not require bounding box annotations for training. Although significant progress has also been made, there is still a large gap in performance between weakly-supervised and fully-supervised object detection. Recently, some works use pseudo ground-truths which are generated by a weakly-supervised detector to train a supervised detector. Such approaches incline to find the most representative parts of objects, and only seek one ground-truth box per class even though many same-class instances exist. To overcome these issues, we propose a weakly-supervised to fully-supervised framework, where a weakly-supervised detector is implemented using multiple instance learning. Then, we propose a pseudo ground-truth excavation (PGE) algorithm to find the pseudo ground-truth of each instance in the image. Moreover, the pseudo ground-truth adaptation (PGA) algorithm is designed to further refine the pseudo ground-truths from PGE. Finally, we use these pseudo ground-truths to train a fully-supervised detector. Extensive experiments on the challenging PASCAL VOC 2007 and 2012 benchmarks strongly demonstrate the effectiveness of our framework. 
We obtain 52.4% and 47.8% mAP on VOC2007 and VOC2012 respectively, a significant improvement over previous state-of-the-art methods.", "field": [], "task": ["Multiple Instance Learning", "Object Detection", "Weakly Supervised Object Detection"], "method": [], "dataset": ["PASCAL VOC 2007", "PASCAL VOC 2012 test"], "metric": ["MAP"], "title": "W2F: A Weakly-Supervised to Fully-Supervised Framework for Object Detection"} {"abstract": "Most current state-of-the-art connectome reconstruction pipelines have two major steps: initial pixel-based segmentation with affinity prediction and watershed transform, and refined segmentation by merging over-segmented regions. These methods rely only on local context and are typically agnostic to the underlying biology. Since a few merge errors can lead to several incorrectly merged neuronal processes, these algorithms are currently tuned towards over-segmentation producing an overburden of costly proofreading. We propose a third step for connectomics reconstruction pipelines to refine an over-segmentation using both local and global context with an emphasis on adhering to the underlying biology. We first extract a graph from an input segmentation where nodes correspond to segment labels and edges indicate potential split errors in the over-segmentation. In order to increase throughput and allow for large-scale reconstruction, we employ biologically inspired geometric constraints based on neuron morphology to reduce the number of nodes and edges. Next, two neural networks learn these neuronal shapes to further aid the graph construction process. Lastly, we reformulate the region merging problem as a graph partitioning one to leverage global context. We demonstrate the performance of our approach on four real-world connectomics datasets with an average variation of information improvement of 21.3%.\r", "field": [], "task": ["Electron Microscopy Image Segmentation", "graph construction", "graph partitioning"], "method": [], "dataset": ["SNEMI3D"], "metric": ["VI Split", "VI Merge", "Total Variation of Information"], "title": "Biologically-Constrained Graphs for Global Connectomics Reconstruction"} {"abstract": "This paper proposes an automatic spatially-aware concept discovery approach\nusing weakly labeled image-text data from shopping websites. We first fine-tune\nGoogleNet by jointly modeling clothing images and their corresponding\ndescriptions in a visual-semantic embedding space. Then, for each attribute\n(word), we generate its spatially-aware representation by combining its\nsemantic word vector representation with its spatial representation derived\nfrom the convolutional maps of the fine-tuned network. The resulting\nspatially-aware representations are further used to cluster attributes into\nmultiple groups to form spatially-aware concepts (e.g., the neckline concept\nmight consist of attributes like v-neck, round-neck, etc). Finally, we\ndecompose the visual-semantic embedding space into multiple concept-specific\nsubspaces, which facilitates structured browsing and attribute-feedback product\nretrieval by exploiting multimodal linguistic regularities. 
We conducted\nextensive experiments on our newly collected Fashion200K dataset, and results\non clustering quality evaluation and attribute-feedback product retrieval task\ndemonstrate the effectiveness of our automatically discovered spatially-aware\nconcepts.", "field": [], "task": ["Image Retrieval with Multi-Modal Query"], "method": [], "dataset": ["Fashion200k"], "metric": ["Recall@50", "Recall@1", "Recall@10"], "title": "Automatic Spatially-aware Fashion Concept Discovery"} {"abstract": "Abstractive text summarization is the task of compressing and rewriting a\nlong document into a short summary while maintaining saliency, directed logical\nentailment, and non-redundancy. In this work, we address these three important\naspects of a good summary via a reinforcement learning approach with two novel\nreward functions: ROUGESal and Entail, on top of a coverage-based baseline. The\nROUGESal reward modifies the ROUGE metric by up-weighting the salient\nphrases/words detected via a keyphrase classifier. The Entail reward gives high\n(length-normalized) scores to logically-entailed summaries using an entailment\nclassifier. Further, we show superior performance improvement when these\nrewards are combined with traditional metric (ROUGE) based rewards, via our\nnovel and effective multi-reward approach of optimizing multiple rewards\nsimultaneously in alternate mini-batches. Our method achieves the new\nstate-of-the-art results (including human evaluation) on the CNN/Daily Mail\ndataset as well as strong improvements in a test-only transfer setup on\nDUC-2002.", "field": [], "task": ["Abstractive Text Summarization", "Text Summarization"], "method": [], "dataset": ["CNN / Daily Mail"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Multi-Reward Reinforced Summarization with Saliency and Entailment"} {"abstract": "This paper presents a Deep convolutional network model for Identity-Aware\nTransfer (DIAT) of facial attributes. Given the source input image and the\nreference attribute, DIAT aims to generate a facial image that owns the\nreference attribute as well as keeps the same or similar identity to the input\nimage. In general, our model consists of a mask network and an attribute\ntransform network which work in synergy to generate a photo-realistic facial\nimage with the reference attribute. Considering that the reference attribute\nmay be only related to some parts of the image, the mask network is introduced\nto avoid the incorrect editing on attribute irrelevant region. Then the\nestimated mask is adopted to combine the input and transformed image for\nproducing the transfer result. For joint training of transform network and mask\nnetwork, we incorporate the adversarial attribute loss, identity-aware adaptive\nperceptual loss, and VGG-FACE based identity loss. Furthermore, a denoising\nnetwork is presented to serve for perceptual regularization to suppress the\nartifacts in transfer result, while an attribute ratio regularization is\nintroduced to constrain the size of attribute relevant region. Our DIAT can\nprovide a unified solution for several representative facial attribute transfer\ntasks, e.g., expression transfer, accessory removal, age progression, and\ngender transfer, and can be extended for other face enhancement tasks such as\nface hallucination. The experimental results validate the effectiveness of the\nproposed method. 
Even for the identity-related attribute (e.g., gender), our\nDIAT can obtain visually impressive results by changing the attribute while\nretaining most identity-aware features.", "field": [], "task": ["Denoising", "Face Hallucination", "Image-to-Image Translation"], "method": [], "dataset": ["RaFD"], "metric": ["Classification Error"], "title": "Deep Identity-aware Transfer of Facial Attributes"} {"abstract": "Convolutional network techniques have recently achieved great success in\nvision based detection tasks. This paper introduces the recent development of\nour research on transplanting the fully convolutional network technique to the\ndetection tasks on 3D range scan data. Specifically, the scenario is set as the\nvehicle detection task from the range data of Velodyne 64E lidar. We propose\nto present the data in a 2D point map and use a single 2D end-to-end fully\nconvolutional network to predict the objectness confidence and the bounding\nboxes simultaneously. By carefully designing the bounding box encoding, it is able\nto predict full 3D bounding boxes even using a 2D convolutional network.\nExperiments on the KITTI dataset show the state-of-the-art performance of the\nproposed method.", "field": [], "task": ["Object Detection"], "method": [], "dataset": ["KITTI Cars Hard", "KITTI Cars Moderate", "KITTI Cars Moderate val", "KITTI Pedestrian Moderate val", "KITTI Pedestrian Easy val", "KITTI Cyclist Easy val", "KITTI Cyclist Moderate val", "KITTI Cyclist Hard val", "KITTI Pedestrian Hard val", "KITTI Cars Easy"], "metric": ["AP"], "title": "Vehicle Detection from 3D Lidar Using Fully Convolutional Network"} {"abstract": "In the low-data regime, it is difficult to train good supervised models from scratch. Instead, practitioners turn to pre-trained models, leveraging transfer learning. Ensembling is an empirically and theoretically appealing way to construct powerful predictive models, but the predominant approach of training multiple deep networks with different random initialisations collides with the need for transfer via pre-trained weights. In this work, we study different ways of creating ensembles from pre-trained models. We show that the nature of pre-training itself is a performant source of diversity, and propose a practical algorithm that efficiently identifies a subset of pre-trained models for any downstream dataset. The approach is simple: Use nearest-neighbour accuracy to rank pre-trained models, fine-tune the best ones with a small hyperparameter sweep, and greedily construct an ensemble to minimise validation cross-entropy. When evaluated together with strong baselines on 19 different downstream tasks (the Visual Task Adaptation Benchmark), this achieves state-of-the-art performance at a much lower inference budget, even when selecting from over 2,000 pre-trained models. We also assess our ensembles on ImageNet variants and show improved robustness to distribution shift.", "field": [], "task": ["Image Classification", "Transfer Learning"], "method": [], "dataset": ["VTAB-1k"], "metric": ["Top-1 Accuracy"], "title": "Deep Ensembles for Low-Data Transfer Learning"} {"abstract": "Joint extraction of entities and their relations benefits from the close interaction between named entities and their relation information. Therefore, how to effectively model such cross-modal interactions is critical for the final performance. 
Previous works have used simple methods such as label-feature concatenation to perform coarse-grained semantic fusion among cross-modal instances, but fail to capture fine-grained correlations over token and label spaces, resulting in insufficient interactions. In this paper, we propose a deep Cross-Modal Attention Network (CMAN) for joint entity and relation extraction. The network is carefully constructed by stacking multiple attention units in depth to fully model dense interactions over token-label spaces, in which two basic attention units are proposed to explicitly capture fine-grained correlations across different modalities (e.g., token-to-token and labelto-token). Experiment results on CoNLL04 dataset show that our model obtains state-of-the-art results by achieving 90.62% F1 on entity recognition and 72.97% F1 on relation classification. In ADE dataset, our model surpasses existing approaches by more than 1.9% F1 on relation classification. Extensive analyses further confirm the effectiveness of our approach.", "field": [], "task": ["Joint Entity and Relation Extraction", "Relation Classification", "Relation Extraction"], "method": [], "dataset": ["ADE Corpus"], "metric": ["NER Macro F1", "RE+ Macro F1"], "title": "Modeling Dense Cross-Modal Interactions for Joint Entity-Relation Extraction"} {"abstract": "Human beings often assess the aesthetic quality of an image coupled with the\nidentification of the image's semantic content. This paper addresses the\ncorrelation issue between automatic aesthetic quality assessment and semantic\nrecognition. We cast the assessment problem as the main task among a multi-task\ndeep model, and argue that semantic recognition task offers the key to address\nthis problem. Based on convolutional neural networks, we employ a single and\nsimple multi-task framework to efficiently utilize the supervision of aesthetic\nand semantic labels. A correlation item between these two tasks is further\nintroduced to the framework by incorporating the inter-task relationship\nlearning. This item not only provides some useful insight about the correlation\nbut also improves assessment accuracy of the aesthetic task. Particularly, an\neffective strategy is developed to keep a balance between the two tasks, which\nfacilitates to optimize the parameters of the framework. Extensive experiments\non the challenging AVA dataset and Photo.net dataset validate the importance of\nsemantic recognition in aesthetic quality assessment, and demonstrate that\nmulti-task deep models can discover an effective aesthetic representation to\nachieve state-of-the-art results.", "field": [], "task": ["Aesthetics Quality Assessment"], "method": [], "dataset": ["AVA"], "metric": ["Accuracy"], "title": "Deep Aesthetic Quality Assessment with Semantic Information"} {"abstract": "Building an intelligent dialogue system with the ability to select a proper response according to a multi-turn context is a great challenging task. Existing studies focus on building a context-response matching model with various neural architectures or PLMs and typically learning with a single response prediction task. These approaches overlook many potential training signals contained in dialogue data, which might be beneficial for context understanding and produce better features for response prediction. Besides, the response retrieved from existing dialogue systems supervised by the conventional way still faces some critical challenges, including incoherence and inconsistency. 
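For the Cross-Modal Attention Network (CMAN) abstract above ("Modeling Dense Cross-Modal Interactions for Joint Entity-Relation Extraction"), the following is a minimal, illustrative sketch of a single token-to-label scaled dot-product attention unit of the kind that abstract describes. The shapes, projection matrices and single-head formulation are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a token-to-label scaled dot-product attention unit,
# loosely following the cross-modal attention idea in the CMAN abstract above.
# Shapes, names and the single-head formulation are illustrative assumptions.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def token_to_label_attention(token_feats, label_embs, d_k=64, seed=0):
    """token_feats: (T, d_t) token representations; label_embs: (L, d_l) label embeddings."""
    rng = np.random.default_rng(seed)
    W_q = rng.normal(scale=0.02, size=(token_feats.shape[1], d_k))  # query projection
    W_k = rng.normal(scale=0.02, size=(label_embs.shape[1], d_k))   # key projection
    W_v = rng.normal(scale=0.02, size=(label_embs.shape[1], d_k))   # value projection
    Q, K, V = token_feats @ W_q, label_embs @ W_k, label_embs @ W_v
    attn = softmax(Q @ K.T / np.sqrt(d_k), axis=-1)   # (T, L): per-token label correlations
    return attn @ V                                    # label-aware token features

tokens = np.random.randn(12, 128)   # 12 tokens
labels = np.random.randn(9, 32)     # 9 label types
print(token_to_label_attention(tokens, labels).shape)  # (12, 64)
```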
To address these issues, in this paper, we propose learning a context-response matching model with auxiliary self-supervised tasks designed for the dialogue data based on pre-trained language models. Specifically, we introduce four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination, and jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner. By this means, the auxiliary tasks can guide the learning of the matching model to achieve a better local optimum and select a more proper response. Experiment results on two benchmarks indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection in retrieval-based dialogues, and our model achieves new state-of-the-art results on both datasets.", "field": [], "task": ["Conversational Response Selection"], "method": [], "dataset": ["Ubuntu Dialogue (v1, Ranking)"], "metric": ["R10@1", "R10@5", "R2@1", "R10@2"], "title": "Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues"} {"abstract": "The goal of this paper is to perform 3D object detection in single monocular images in the domain of autonomous driving. Our method first aims to generate a set of candidate class-specific object proposals, which are then run through a standard CNN pipeline to obtain high-quality object detections. The focus of this paper is on proposal generation. In particular, we propose a probabilistic model that places object candidates in 3D using a prior on ground-plane. We then score each candidate box projected to the image plane via several intuitive potentials such as semantic segmentation, contextual information, size and location priors and typical object shape. The weights in our model are trained with S-SVM. Experiments show that our object proposal generation approach significantly outperforms all monocular baselines, and achieves the best detection performance on the challenging KITTI benchmark, among the published monocular competitors.", "field": [], "task": ["3D Object Detection", "Autonomous Driving", "Monocular 3D Object Detection", "Object Detection", "Object Proposal Generation", "Semantic Segmentation"], "method": [], "dataset": ["KITTI Cars Moderate val", "KITTI Pedestrian Moderate val", "KITTI Pedestrian Easy val", "KITTI Cyclist Easy val", "KITTI Cyclist Moderate val", "KITTI Cyclist Hard val", "KITTI Pedestrian Hard val"], "metric": ["AP"], "title": "Monocular 3D Object Detection for Autonomous Driving"} {"abstract": "The idea of using multi-task learning approaches to address the joint extraction of entity and relation is motivated by the relatedness between the entity recognition task and the relation classification task. Existing methods using multi-task learning techniques to address the problem learn interactions among the two tasks through a shared network, where the shared information is passed into the task-specific networks for prediction. However, such an approach hinders the model from learning explicit interactions between the two tasks to improve the performance on the individual tasks. As a solution, we design a multi-task learning model which we refer to as recurrent interaction network which allows the learning of interactions dynamically, to effectively model task-specific features for classification. 
Empirical studies on two real-world datasets confirm the superiority of the proposed model.", "field": [], "task": ["Multi-Task Learning", "Named Entity Recognition", "Relation Classification", "Relation Extraction"], "method": [], "dataset": ["NYT", "WebNLG"], "metric": ["F1"], "title": "Recurrent Interaction Network for Jointly Extracting Entities and Classifying Relations"} {"abstract": "The neuroscience study has revealed the discrepancy of emotion expression between left and right hemispheres of human brain. Inspired by this study, in this paper, we propose a novel bi-hemispheric discrepancy model (BiHDM) to learn the asymmetric differences between two hemispheres for electroencephalograph (EEG) emotion recognition. Concretely, we first employ four directed recurrent neural networks (RNNs) based on two spatial orientations to traverse electrode signals on two separate brain regions, which enables the model to obtain the deep representations of all the EEG electrodes' signals while keeping the intrinsic spatial dependence. Then we design a pairwise subnetwork to capture the discrepancy information between two hemispheres and extract higher-level features for final classification. Besides, in order to reduce the domain shift between training and testing data, we use a domain discriminator that adversarially induces the overall feature learning module to generate emotion-related but domain-invariant feature, which can further promote EEG emotion recognition. We conduct experiments on three public EEG emotional datasets, and the experiments show that the new state-of-the-art results can be achieved.", "field": [], "task": ["EEG", "Emotion Recognition"], "method": [], "dataset": ["MPED", "SEED-IV"], "metric": ["Accuracy"], "title": "A Novel Bi-hemispheric Discrepancy Model for EEG Emotion Recognition"} {"abstract": "The problem of determining whether an object is in motion, irrespective of\ncamera motion, is far from being solved. We address this challenging task by\nlearning motion patterns in videos. The core of our approach is a fully\nconvolutional network, which is learned entirely from synthetic video\nsequences, and their ground-truth optical flow and motion segmentation. This\nencoder-decoder style architecture first learns a coarse representation of the\noptical flow field features, and then refines it iteratively to produce motion\nlabels at the original high-resolution. We further improve this labeling with\nan objectness map and a conditional random field, to account for errors in\noptical flow, and also to focus on moving \"things\" rather than \"stuff\". The\noutput label of each pixel denotes whether it has undergone independent motion,\ni.e., irrespective of camera motion. We demonstrate the benefits of this\nlearning framework on the moving object segmentation task, where the goal is to\nsegment all objects in motion. Our approach outperforms the top method on the\nrecently released DAVIS benchmark dataset, comprising real-world sequences, by\n5.6%. 
We also evaluate on the Berkeley motion segmentation database, achieving\nstate-of-the-art results.", "field": [], "task": ["Motion Segmentation", "Optical Flow Estimation", "Semantic Segmentation", "Unsupervised Video Object Segmentation"], "method": [], "dataset": ["DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "Learning Motion Patterns in Videos"} {"abstract": "This paper addresses weakly supervised object detection with only image-level\nsupervision at training stage. Previous approaches train detection models with\nentire images all at once, making the models prone to being trapped in\nsub-optimums due to the introduced false positive examples. Unlike them, we\npropose a zigzag learning strategy to simultaneously discover reliable object\ninstances and prevent the model from overfitting initial seeds. Towards this\ngoal, we first develop a criterion named mean Energy Accumulation Scores (mEAS)\nto automatically measure and rank localization difficulty of an image\ncontaining the target object, and accordingly learn the detector progressively\nby feeding examples with increasing difficulty. In this way, the model can be\nwell prepared by training on easy examples for learning from more difficult\nones and thus gain a stronger detection ability more efficiently. Furthermore,\nwe introduce a novel masking regularization strategy over the high level\nconvolutional feature maps to avoid overfitting initial samples. These two\nmodules formulate a zigzag learning process, where progressive learning\nendeavors to discover reliable object instances, and masking regularization\nincreases the difficulty of finding object instances properly. We achieve 47.6%\nmAP on PASCAL VOC 2007, surpassing the state-of-the-arts by a large margin.", "field": [], "task": ["Object Detection", "Weakly Supervised Object Detection"], "method": [], "dataset": ["PASCAL VOC 2007", "PASCAL VOC 2012 test"], "metric": ["MAP"], "title": "Zigzag Learning for Weakly Supervised Object Detection"} {"abstract": "In recent years, the performance of object detection has advanced\nsignificantly with the evolving deep convolutional neural networks. However,\nthe state-of-the-art object detection methods still rely on accurate bounding\nbox annotations that require extensive human labelling. Object detection\nwithout bounding box annotations, i.e, weakly supervised detection methods, are\nstill lagging far behind. As weakly supervised detection only uses image level\nlabels and does not require the ground truth of bounding box location and label\nof each object in an image, it is generally very difficult to distill knowledge\nof the actual appearances of objects. Inspired by curriculum learning, this\npaper proposes an easy-to-hard knowledge transfer scheme that incorporates easy\nweb images to provide prior knowledge of object appearance as a good starting\npoint. While exploiting large-scale free web imagery, we introduce a\nsophisticated labour free method to construct a web dataset with good diversity\nin object appearance. After that, semantic relevance and distribution relevance\nare introduced and utilized in the proposed curriculum training scheme. 
Our\nend-to-end learning with the constructed web data achieves remarkable\nimprovement across most object classes especially for the classes that are\noften considered hard in other works.", "field": [], "task": ["Curriculum Learning", "Object Detection", "Transfer Learning", "Weakly Supervised Object Detection"], "method": [], "dataset": ["PASCAL VOC 2007", "PASCAL VOC 2012 test"], "metric": ["MAP"], "title": "Exploiting Web Images for Weakly Supervised Object Detection"} {"abstract": "The current dominant paradigm in sensorimotor control, whether imitation or reinforcement learning, is to train policies directly in raw action spaces such as torque, joint angle, or end-effector position. This forces the agent to make decisions individually at each timestep in training, and hence, limits the scalability to continuous, high-dimensional, and long-horizon tasks. In contrast, research in classical robotics has, for a long time, exploited dynamical systems as a policy representation to learn robot behaviors via demonstrations. These techniques, however, lack the flexibility and generalizability provided by deep learning or reinforcement learning and have remained under-explored in such settings. In this work, we begin to close this gap and embed the structure of a dynamical system into deep neural network-based policies by reparameterizing action spaces via second-order differential equations. We propose Neural Dynamic Policies (NDPs) that make predictions in trajectory distribution space as opposed to prior policy learning methods where actions represent the raw control space. The embedded structure allows end-to-end policy learning for both reinforcement and imitation learning setups. We show that NDPs outperform the prior state-of-the-art in terms of either efficiency or performance across several robotic control tasks for both imitation and reinforcement learning setups. Project video and code are available at https://shikharbahl.github.io/neural-dynamic-policies/", "field": [], "task": ["Imitation Learning"], "method": [], "dataset": ["MT50"], "metric": ["Average Success Rate"], "title": "Neural Dynamic Policies for End-to-End Sensorimotor Learning"} {"abstract": "We introduce scGAN, a novel extension of conditional Generative Adversarial Networks (GAN) tailored for the challenging problem of shadow detection in images. Previous methods for shadow detection focus on learning the local appearance of shadow regions, while using limited local context reasoning in the form of pairwise potentials in a Conditional Random Field. In contrast, the proposed adversarial approach is able to model higher level relationships and global scene characteristics. We train a shadow detector that corresponds to the generator of a conditional GAN, and augment its shadow accuracy by combining the typical GAN loss with a data loss term. Due to the unbalanced distribution of the shadow labels, we use weighted cross entropy. With the standard GAN architecture, properly setting the weight for the cross entropy would require training multiple GANs, a computationally expensive grid procedure. In scGAN, we introduce an additional sensitivity parameter w to the generator. The proposed approach effectively parameterizes the loss of the trained detector. The resulting shadow detector is a single network that can generate shadow maps corresponding to different sensitivity levels, obviating the need for multiple models and a costly training procedure. 
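The weighted cross-entropy data term with a sensitivity weight described in the scGAN abstract just above can be illustrated with a small sketch. The function name, the exact way w enters the loss, and the toy shapes are assumptions; the paper additionally feeds the sensitivity parameter to the generator, which is not modelled here.

```python
# Illustrative sketch (not the authors' code) of a sensitivity-weighted binary
# cross-entropy for unbalanced shadow masks, as motivated in the scGAN abstract:
# a weight w > 1 penalises missed shadow pixels more than false alarms.
import numpy as np

def weighted_bce(pred, target, w=2.0, eps=1e-7):
    """pred: predicted shadow probabilities in (0,1); target: binary shadow mask."""
    pred = np.clip(pred, eps, 1.0 - eps)
    loss = -(w * target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))
    return loss.mean()

mask = (np.random.rand(64, 64) < 0.1).astype(np.float32)   # sparse shadow pixels
pred = np.random.rand(64, 64).astype(np.float32)
print(weighted_bce(pred, mask, w=1.0), weighted_bce(pred, mask, w=5.0))
```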
We evaluate our method on the large-scale SBU and UCF shadow datasets, and observe up to 17% error reduction with respect to the previous state-of-the-art method.\r", "field": [], "task": ["Shadow Detection"], "method": [], "dataset": ["UCF", "SBU", "ISTD"], "metric": ["Balanced Error Rate"], "title": "Shadow Detection With Conditional Generative Adversarial Networks"} {"abstract": "Fine-grained recognition poses the unique challenge of capturing subtle inter-class differences under considerable intra-class variances (e.g., beaks for bird species). Conventional approaches crop local regions and learn detailed representation from those regions, but suffer from the fixed number of parts and missing of surrounding context. In this paper, we propose a simple yet effective framework, called Selective Sparse Sampling, to capture diverse and fine-grained details. The framework is implemented using Convolutional Neural Networks, referred to as Selective Sparse Sampling Networks (S3Ns). With image-level supervision, S3Ns collect peaks, i.e., local maximums, from class response maps to estimate informative, receptive fields and learn a set of sparse attention for capturing fine-detailed visual evidence as well as preserving context. The evidence is selectively sampled to extract discriminative and complementary features, which significantly enrich the learned representation and guide the network to discover more subtle cues. Extensive experiments and ablation studies show that the proposed method consistently outperforms the state-of-the-art methods on challenging benchmarks including CUB-200-2011, FGVC-Aircraft, and Stanford Cars.\r", "field": [], "task": ["Fine-Grained Image Classification", "Fine-Grained Image Recognition"], "method": [], "dataset": [" CUB-200-2011", "Stanford Cars", "FGVC Aircraft"], "metric": ["Accuracy"], "title": "Selective Sparse Sampling for Fine-Grained Image Recognition"} {"abstract": "Recent works attempt to improve scene parsing performance by exploring different levels of contexts, and typically train a well-designed convolutional network to exploit useful contexts across all pixels equally. However, in this paper, we find that the context demands are varying from different pixels or regions in each image. Based on this observation, we propose an Adaptive Context Network (ACNet) to capture the pixel-aware contexts by a competitive fusion of global context and local context according to different per-pixel demands. Specifically, when given a pixel, the global context demand is measured by the similarity between the global feature and its local feature, whose reverse value can be used to measure the local context demand. We model the two demand measurements by the proposed global context module and local context module, respectively, to generate adaptive contextual features. Furthermore, we import multiple such modules to build several adaptive context blocks in different levels of network to obtain a coarse-to-fine result. Finally, comprehensive experimental evaluations demonstrate the effectiveness of the proposed ACNet, and new state-of-the-arts performances are achieved on all four public datasets, i.e. Cityscapes, ADE20K, PASCAL Context, and COCO Stuff.", "field": [], "task": ["Scene Parsing"], "method": [], "dataset": ["ADE20K val"], "metric": ["mIoU"], "title": "Adaptive Context Network for Scene Parsing"} {"abstract": "In this paper, we present a technique that places 3D bounding boxes around objects in an RGB-D scene. 
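The per-pixel competitive fusion of global and local context described in the ACNet abstract above can be read as a similarity-driven gate; the sketch below is only one possible interpretation of that idea (the pooling choice, sigmoid gate and cosine similarity are assumptions), not the published modules.

```python
# Rough interpretation (assumptions, not the authors' implementation) of the
# pixel-wise competitive fusion described in the ACNet abstract: a per-pixel
# gate derived from global/local feature similarity mixes the two contexts.
import numpy as np

def adaptive_context_fusion(local_feats):
    """local_feats: (C, H, W) feature map from some backbone."""
    C, H, W = local_feats.shape
    global_feat = local_feats.mean(axis=(1, 2))                 # global average-pooled context
    flat = local_feats.reshape(C, -1)                           # (C, H*W)
    # cosine similarity between the global feature and each local feature
    sim = (global_feat @ flat) / (np.linalg.norm(global_feat) * np.linalg.norm(flat, axis=0) + 1e-7)
    gate = (1.0 / (1.0 + np.exp(-sim))).reshape(1, H, W)        # global-context demand per pixel
    global_ctx = global_feat.reshape(C, 1, 1)
    return gate * global_ctx + (1.0 - gate) * local_feats       # competitive fusion

print(adaptive_context_fusion(np.random.randn(8, 16, 16)).shape)  # (8, 16, 16)
```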
Our approach makes best use of the 2D information to quickly reduce the search space in 3D, benefiting from state-of-the-art 2D object detection techniques. We then use the 3D information to orient, place, and score bounding boxes around objects. We independently estimate the orientation for every object, using previous techniques that utilize normal information. Object locations and sizes in 3D are learned using a multilayer perceptron (MLP). In the final step, we refine our detections based on object class relations within a scene. When compared to state-of-the-art detection methods that operate almost entirely in the sparse 3D domain, extensive experiments on the well-known SUN RGB-D dataset show that our proposed method is much faster (4.1s per image) in detecting 3D objects in RGB-D images and performs better (3 mAP higher) than the state-of-the-art method that is 4.7 times slower and comparably to the method that is two orders of magnitude slower. This work hints at the idea that 2D-driven object detection in 3D should be further explored, especially in cases where the 3D input is sparse. \r", "field": [], "task": ["2D Object Detection", "3D Object Detection", "Object Detection"], "method": [], "dataset": ["SUN-RGBD val"], "metric": ["MAP"], "title": "2D-Driven 3D Object Detection in RGB-D Images"} {"abstract": "We propose an end-to-end architecture for joint 2D and 3D human pose estimation in natural images. Key to our approach is the generation and scoring of a number of pose proposals per image, which allows us to predict 2D and 3D pose of multiple people simultaneously. Hence, our approach does not require an approximate localization of the humans for initialization. Our architecture, named LCR-Net, contains 3 main components: 1) the pose proposal generator that suggests potential poses at different locations in the image; 2) a classifier that scores the different pose proposals; and 3) a regressor that refines pose proposals both in 2D and 3D. All three stages share the convolutional feature layers and are trained jointly. The final pose estimation is obtained by integrating over neighboring pose hypotheses, which is shown to improve over a standard non maximum suppression algorithm. Our approach significantly outperforms the state of the art in 3D pose estimation on Human3.6M, a controlled environment. Moreover, it shows promising results on real images for both single and multi-person subsets of the MPII 2D pose benchmark. \r", "field": [], "task": ["3D Human Pose Estimation", "3D Pose Estimation", "Pose Estimation", "Regression"], "method": [], "dataset": ["MuPoTS-3D"], "metric": ["MPJPE"], "title": "LCR-Net: Localization-Classification-Regression for Human Pose"} {"abstract": "This work proposes a weakly-supervised temporal action localization framework, called D2-Net, which strives to temporally localize actions using video-level supervision. Our main contribution is the introduction of a novel loss formulation, which jointly enhances the discriminability of latent embeddings and robustness of the output temporal class activations with respect to foreground-background noise caused by weak supervision. The proposed formulation comprises a discriminative and a denoising loss term for enhancing temporal action localization. The discriminative term incorporates a classification loss and utilizes a top-down attention mechanism to enhance the separability of latent foreground-background embeddings. 
The denoising loss term explicitly addresses the foreground-background noise in class activations by simultaneously maximizing intra-video and inter-video mutual information using a bottom-up attention mechanism. As a result, activations in the foreground regions are emphasized whereas those in the background regions are suppressed, thereby leading to more robust predictions. Comprehensive experiments are performed on two benchmarks: THUMOS14 and ActivityNet1.2. Our D2-Net performs favorably in comparison to the existing methods on both datasets, achieving gains as high as 3.6% in terms of mean average precision on THUMOS14.", "field": [], "task": ["Action Localization", "Denoising", "Temporal Action Localization", "Weakly Supervised Action Localization", "Weakly-supervised Temporal Action Localization", "Weakly Supervised Temporal Action Localization"], "method": [], "dataset": ["ActivityNet-1.2", "THUMOS 2014"], "metric": ["mAP@0.5", "Mean mAP"], "title": "D2-Net: Weakly-Supervised Action Localization via Discriminative Embeddings and Denoised Activations"} {"abstract": "Abstractive summarization is the ultimate goal of document summarization research, but previously it is less investigated due to the immaturity of text generation techniques. Recently impressive progress has been made to abstractive sentence summarization using neural models. Unfortunately, attempts on abstractive document summarization are still in a primitive stage, and the evaluation results are worse than extractive methods on benchmark datasets. In this paper, we review the difficulties of neural abstractive document summarization, and propose a novel graph-based attention mechanism in the sequence-to-sequence framework. The intuition is to address the saliency factor of summarization, which has been overlooked by prior works. Experimental results demonstrate our model is able to achieve considerable improvement over previous neural abstractive models. The data-driven neural abstractive method is also competitive with state-of-the-art extractive methods.", "field": [], "task": ["Abstractive Text Summarization", "Document Summarization", "Image Captioning", "Machine Translation", "Sentence Summarization", "Text Generation", "Text Summarization"], "method": [], "dataset": ["CNN / Daily Mail (Anonymized)"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "Abstractive Document Summarization with a Graph-Based Attentional Neural Model"} {"abstract": "Deep neural networks are being widely deployed for many critical tasks due to their high classification accuracy. In many cases, pre-trained models are sourced from vendors who may have disrupted the training pipeline to insert Trojan behaviors into the models. These malicious behaviors can be triggered at the adversary's will and hence, cause a serious threat to the widespread deployment of deep models. We propose a method to verify if a pre-trained model is Trojaned or benign. Our method captures fingerprints of neural networks in the form of adversarial perturbations learned from the network gradients. Inserting backdoors into a network alters its decision boundaries which are effectively encoded in their adversarial perturbations. We train a two stream network for Trojan detection from its global ($L_\\infty$ and $L_2$ bounded) perturbations and the localized region of high energy within each perturbation. The former encodes decision boundaries of the network and latter encodes the unknown trigger shape. 
We also propose an anomaly detection method to identify the target class in a Trojaned network. Our methods are invariant to the trigger type, trigger size, training data and network architecture. We evaluate our methods on MNIST, NIST-Round0 and NIST-Round1 datasets, with up to 1,000 pre-trained models making this the largest study to date on Trojaned network detection, and achieve over 92\\% detection accuracy to set the new state-of-the-art.", "field": [], "task": ["Adversarial Defense", "Anomaly Detection"], "method": [], "dataset": ["TrojAI Round 0", "TrojAI Round 1"], "metric": ["Detection Accuracy"], "title": "Cassandra: Detecting Trojaned Networks from Adversarial Perturbations"} {"abstract": "Optimal parameter initialization remains a crucial problem for neural network\ntraining. A poor weight initialization may take longer to train and/or converge\nto sub-optimal solutions. Here, we propose a method of weight re-initialization\nby repeated annealing and injection of noise in the training process. We\nimplement this through a cyclical batch size schedule motivated by a Bayesian\nperspective of neural network training. We evaluate our methods through\nextensive experiments on tasks in language modeling, natural language\ninference, and image classification. We demonstrate the ability of our method\nto improve language modeling performance by up to 7.91 perplexity and reduce\ntraining iterations by up to $61\\%$, in addition to its flexibility in enabling\nsnapshot ensembling and use with adversarial training.", "field": [], "task": ["Image Classification", "Language Modelling", "Natural Language Inference"], "method": [], "dataset": ["SNLI"], "metric": ["% Test Accuracy"], "title": "Parameter Re-Initialization through Cyclical Batch Size Schedules"} {"abstract": "Routing models, a form of conditional computation where examples are routed through a subset of components in a larger network, have shown promising results in recent works. Surprisingly, routing models to date have lacked important properties, such as architectural diversity and large numbers of routing decisions. Both architectural diversity and routing depth can increase the representational power of a routing network. In this work, we address both of these deficiencies. We discuss the significance of architectural diversity in routing models, and explain the tradeoffs between capacity and optimization when increasing routing depth. In our experiments, we find that adding architectural diversity to routing models significantly improves performance, cutting the error rates of a strong baseline by 35% on an Omniglot setup. However, when scaling up routing depth, we find that modern routing techniques struggle with optimization. We conclude by discussing both the positive and negative results, and suggest directions for future research.", "field": [], "task": ["Multi-Task Learning", "Omniglot"], "method": [], "dataset": ["OMNIGLOT"], "metric": ["Average Accuracy"], "title": "Diversity and Depth in Per-Example Routing Models"} {"abstract": "Sum-product networks are a new deep architecture that can perform fast, exact inference on high-treewidth models. Only generative methods for training SPNs\r\nhave been proposed to date. In this paper, we present the first discriminative\r\ntraining algorithms for SPNs, combining the high accuracy of the former with\r\nthe representational power and tractability of the latter. 
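The cyclical batch size schedule from the "Parameter Re-Initialization through Cyclical Batch Size Schedules" abstract above can be sketched as a simple step-dependent function; the triangular cycle shape and the concrete sizes below are assumptions for illustration.

```python
# A minimal sketch of a cyclical batch-size schedule in the spirit of the
# "Parameter Re-Initialization through Cyclical Batch Size Schedules" abstract.
# The triangular cycle shape and the concrete sizes are assumptions.
def cyclical_batch_size(step, cycle_len=1000, min_bs=32, max_bs=512):
    """Batch size grows from min_bs to max_bs within each cycle, then resets,
    which re-injects gradient noise (an annealing-like restart) at every cycle."""
    phase = (step % cycle_len) / max(cycle_len - 1, 1)          # 0 -> 1 within a cycle
    return int(round(min_bs + phase * (max_bs - min_bs)))

schedule = [cyclical_batch_size(s) for s in range(0, 2000, 250)]
print(schedule)  # [32, 152, 272, 392, 32, 152, 272, 392]
```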
We show that the class\r\nof tractable discriminative SPNs is broader than the class of tractable generative\r\nones, and propose an efficient backpropagation-style algorithm for computing the\r\ngradient of the conditional log likelihood. Standard gradient descent suffers from\r\nthe diffusion problem, but networks with many layers can be learned reliably using \u201chard\u201d gradient descent, where marginal inference is replaced by MPE inference (i.e., inferring the most probable state of the non-evidence variables). The\r\nresulting updates have a simple and intuitive form. We test discriminative SPNs\r\non standard image classification tasks. We obtain the best results to date on the\r\nCIFAR-10 dataset, using fewer features than prior methods with an SPN architecture that learns local image structure discriminatively. We also report the highest\r\npublished test accuracy on STL-10 even though we only use the labeled portion\r\nof the dataset.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["STL-10"], "metric": ["Percentage correct"], "title": "Discriminative Learning of Sum-Product Networks"} {"abstract": "Supervised learning of convolutional neural networks (CNNs) can require very\nlarge amounts of labeled data. Labeling thousands or millions of training\nexamples can be extremely time consuming and costly. One direction towards\naddressing this problem is to create features from unlabeled data. In this\npaper we propose a new method for training a CNN, with no need for labeled\ninstances. This method for unsupervised feature learning is then successfully\napplied to a challenging object recognition task. The proposed algorithm is\nrelatively simple, but attains accuracy comparable to that of more\nsophisticated methods. The proposed method is significantly easier to train,\ncompared to existing CNN methods, making fewer requirements on manually labeled\ntraining data. It is also shown to be resistant to overfitting. We provide\nresults on some well-known datasets, namely STL-10, CIFAR-10, and CIFAR-100.\nThe results show that our method provides competitive performance compared with\nexisting alternative methods. Selective Convolutional Neural Network (S-CNN) is\na simple and fast algorithm, it introduces a new way to do unsupervised feature\nlearning, and it provides discriminative features which generalize well.", "field": [], "task": ["Image Classification", "Object Recognition"], "method": [], "dataset": ["STL-10"], "metric": ["Percentage correct"], "title": "Selective Unsupervised Feature Learning with Convolutional Neural Network (S-CNN)"} {"abstract": "We propose a meta-parameter free, off-the-shelf, simple and fast unsupervised\nfeature learning algorithm, which exploits a new way of optimizing for\nsparsity. Experiments on STL-10 show that the method presents state-of-the-art\nperformance and provides discriminative features that generalize well.", "field": [], "task": ["Image Classification"], "method": [], "dataset": ["STL-10"], "metric": ["Percentage correct"], "title": "No more meta-parameter tuning in unsupervised sparse feature learning"} {"abstract": "Visible (VIS) to near infrared (NIR) face matching is a challenging problem\ndue to the significant domain discrepancy between the domains and a lack of\nsufficient data for training cross-modal matching algorithms. 
Existing\napproaches attempt to tackle this problem by either synthesizing visible faces\nfrom NIR faces, extracting domain-invariant features from these modalities, or\nprojecting heterogeneous data onto a common latent space for cross-modal\nmatching. In this paper, we take a different approach in which we make use of\nthe Disentangled Variational Representation (DVR) for cross-modal matching.\nFirst, we model a face representation with an intrinsic identity information\nand its within-person variations. By exploring the disentangled latent variable\nspace, a variational lower bound is employed to optimize the approximate\nposterior for NIR and VIS representations. Second, aiming at obtaining more\ncompact and discriminative disentangled latent space, we impose a minimization\nof the identity information for the same subject and a relaxed correlation\nalignment constraint between the NIR and VIS modality variations. An\nalternative optimization scheme is proposed for the disentangled variational\nrepresentation part and the heterogeneous face recognition network part. The\nmutual promotion between these two parts effectively reduces the NIR and VIS\ndomain discrepancy and alleviates over-fitting. Extensive experiments on three\nchallenging NIR-VIS heterogeneous face recognition databases demonstrate that\nthe proposed method achieves significant improvements over the state-of-the-art\nmethods.", "field": [], "task": ["Face Recognition", "Heterogeneous Face Recognition"], "method": [], "dataset": ["BUAA-VisNir", "Oulu-CASIA NIR-VIS", "CASIA NIR-VIS 2.0"], "metric": ["TAR @ FAR=0.01", "TAR @ FAR=0.001"], "title": "Disentangled Variational Representation for Heterogeneous Face Recognition"} {"abstract": "Multiple human 3D pose estimation from multiple camera views is a challenging task in unconstrained environments. Each individual has to be matched across each view and then the body pose has to be estimated. Additionally, the body pose of every individual changes in a consistent manner over time. To address these challenges, we propose a temporally consistent 3D Pictorial Structures model (3DPS) for multiple human pose estimation from multiple camera views. Our model builds on the 3D Pictorial Structures to introduce the notion of temporal consistency between the inferred body poses. We derive this property by relying on multi-view human tracking. Identifying each individual before inference significantly reduces the size of the state space and positively influences the performance as well. To evaluate our method, we use two challenging multiple human datasets in unconstrained environments. We compare our method with the state-of-the-art approaches and achieve better results.", "field": [], "task": ["3D Multi-Person Pose Estimation", "3D Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["Campus", "Shelf"], "metric": ["PCP3D"], "title": "Multiple human pose estimation with temporally consistent 3d pictorial structures"} {"abstract": "Multiple human 3D pose estimation is a challenging task. It is mainly because of large variations in the scale and pose of humans, fast motions, multiple persons in the scene, and arbitrary number of visible body parts due to occlusion or truncation. Some of these ambiguities can be resolved by using multiview images. This is due to the fact that more evidences of body parts would be available in multiple views. In this work, a novel method for multiple human 3D pose estimation using evidences in multiview images is proposed. 
The proposed method utilizes a fully connected pairwise conditional random field that contains two types of pairwise terms. The first pairwise term encodes the spatial dependencies among human body joints based on an articulated human body configuration. The second pairwise term is based on the output of a 2D deep part detector. An approximate inference is then performed using the loopy belief propagation algorithm. The proposed method is evaluated on the Campus, Shelf, Utrecht Multi-Person Motion benchmark, Human3.6M, KTH Football II, and MPII Cooking datasets. Experimental results indicate that the proposed method achieves substantial improvements over the existing state-of-the-art methods in terms of the probability of correct pose and the mean per joint position error performance measures.", "field": [], "task": ["3D Multi-Person Pose Estimation", "3D Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["Campus", "Shelf"], "metric": ["PCP3D"], "title": "Multiple human 3d pose estimation from multiview images"} {"abstract": "Human sketches are unique in being able to capture both the spatial topology of a visual object, as well as its subtle appearance details. Fine-grained sketch-based image retrieval (FG-SBIR) importantly leverages on such fine-grained characteristics of sketches to conduct instance-level retrieval of photos. Nevertheless, human sketches are often highly abstract and iconic, resulting in severe misalignments with candidate photos which in turn make subtle visual detail matching difficult. Existing FG-SBIR approaches focus only on coarse holistic matching via deep cross-domain representation learning, yet ignore explicitly accounting for fine-grained details and their spatial context. In this paper, a novel deep FG-SBIR model is proposed which differs significantly from the existing models in that: (1) It is spatially aware, achieved by introducing an attention module that is sensitive to the spatial position of visual details; (2) It combines coarse and fine semantic information via a shortcut connection fusion block; and (3) It models feature correlation and is robust to misalignments between the extracted features across the two domains by introducing a novel higher order learnable energy function (HOLEF) based loss. Extensive experiments show that the proposed deep spatial-semantic attention model significantly outperforms the state-of-the-art.\r", "field": [], "task": ["Image Retrieval", "Representation Learning", "Sketch-Based Image Retrieval"], "method": [], "dataset": ["Handbags", "Chairs"], "metric": ["R@10", "R@1"], "title": "Deep Spatial-Semantic Attention for Fine-Grained Sketch-Based Image Retrieval"} {"abstract": "Scene text detection is an important step of scene text reading system. The main challenges lie on significantly varied sizes and aspect ratios, arbitrary orientations and shapes. Driven by recent progress in deep learning, impressive performances have been achieved for multi-oriented text detection. Yet, the performance drops dramatically in detecting curved texts due to the limited text representation (e.g., horizontal bounding boxes, rotated rectangles, or quadrilaterals). It is of great interest to detect curved texts, which are actually very common in natural scenes. In this paper, we present a novel text detector named TextField for detecting irregular scene texts. Specifically, we learn a direction field pointing away from the nearest text boundary to each text point. 
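The direction field that TextField regresses (a unit 2D vector per text pixel pointing away from the nearest text boundary) can be constructed from a binary text mask with a Euclidean distance transform. The sketch below shows only this target construction under that reading; the fully convolutional network and the morphological post-processing from the abstract are not reproduced.

```python
# Sketch of the ground-truth direction field described in the TextField abstract:
# for every text pixel, a unit 2D vector pointing away from its nearest non-text
# (boundary) pixel. Uses SciPy's EDT; the network that regresses it is not shown.
import numpy as np
from scipy.ndimage import distance_transform_edt

def direction_field(text_mask):
    """text_mask: (H, W) binary array, 1 inside text regions."""
    # indices of the nearest zero-valued (non-text) pixel for every pixel
    _, (near_r, near_c) = distance_transform_edt(text_mask, return_indices=True)
    rows, cols = np.indices(text_mask.shape)
    field = np.stack([rows - near_r, cols - near_c]).astype(np.float32)  # (2, H, W)
    norm = np.linalg.norm(field, axis=0)
    field /= np.maximum(norm, 1e-6)           # unit vectors
    field *= text_mask[None]                  # zero outside text regions
    return field

mask = np.zeros((32, 32), dtype=np.uint8)
mask[8:24, 10:22] = 1
print(direction_field(mask).shape)            # (2, 32, 32)
```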
This direction field is represented by an image of two-dimensional vectors and learned via a fully convolutional neural network. It encodes both binary text mask and direction information used to separate adjacent text instances, which is challenging for classical segmentation-based approaches. Based on the learned direction field, we apply a simple yet effective morphological-based post-processing to achieve the final detection. Experimental results show that the proposed TextField outperforms the state-of-the-art methods by a large margin (28% and 8%) on two curved text datasets: Total-Text and CTW1500, respectively, and also achieves very competitive performance on multi-oriented datasets: ICDAR 2015 and MSRA-TD500. Furthermore, TextField is robust in generalizing to unseen datasets. The code is available at https://github.com/YukangWang/TextField.", "field": [], "task": ["Scene Text", "Scene Text Detection"], "method": [], "dataset": ["Total-Text"], "metric": ["F-Measure", "Recall", "Precision"], "title": "TextField: Learning A Deep Direction Field for Irregular Scene Text Detection"} {"abstract": "This paper addresses the problems of feature representation of skeleton joints and the modeling of temporal dynamics to recognize human actions. Traditional methods generally use relative coordinate systems dependent on some joints, and model only the long-term dependency, while excluding short-term and medium term dependencies. Instead of taking raw skeletons as the input, we transform the skeletons into another coordinate system to obtain the robustness to scale, rotation and translation, and then extract salient motion features from them. Considering that Long Short-term Memory (LSTM) networks with various time-step sizes can model various attributes well, we propose novel ensemble Temporal Sliding LSTM (TS-LSTM) networks for skeleton-based action recognition. The proposed network is composed of multiple parts containing short-term, medium-term and long-term TS-LSTM networks, respectively. In our network, we utilize an average ensemble among multiple parts as a final feature to capture various temporal dependencies. We evaluate the proposed networks and the additional other architectures to verify the effectiveness of the proposed networks, and also compare them with several other methods on five challenging datasets. The experimental results demonstrate that our network models achieve the state-of-the-art performance through various temporal features. Additionally, we analyze a relation between the recognized actions and the multi-term TS-LSTM features by visualizing the softmax features of multiple parts.\r", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "Ensemble Deep Learning for Skeleton-Based Action Recognition Using Temporal Sliding LSTM Networks"} {"abstract": "In this paper we present seven techniques that everybody should know to\nimprove example-based single image super resolution (SR): 1) augmentation of\ndata, 2) use of large dictionaries with efficient search structures, 3)\ncascading, 4) image self-similarities, 5) back projection refinement, 6)\nenhanced prediction by consistency check, and 7) context reasoning. We validate\nour seven techniques on standard SR benchmarks (i.e. Set5, Set14, B100) and\nmethods (i.e. 
A+, SRCNN, ANR, Zeyde, Yang) and achieve substantial\nimprovements. The techniques are widely applicable and require no changes or\nonly minor adjustments of the SR methods. Moreover, our Improved A+ (IA) method\nsets new state-of-the-art results outperforming A+ by up to 0.9dB on average\nPSNR whilst maintaining a low time complexity.", "field": [], "task": ["Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["Set5 - 4x upscaling", "BSD100 - 4x upscaling", "Set14 - 4x upscaling"], "metric": ["PSNR"], "title": "Seven ways to improve example-based single image super resolution"} {"abstract": "It remains a challenge to efficiently extract spatial-temporal information\nfrom skeleton sequences for 3D human action recognition. Although most recent\naction recognition methods are based on Recurrent Neural Networks which present\noutstanding performance, one of the shortcomings of these methods is the\ntendency to overemphasize the temporal information. Since 3D convolutional\nneural network (3D CNN) is a powerful tool to simultaneously learn features from\nboth spatial and temporal dimensions through capturing the correlations between\nthree dimensional signals, this paper proposes a novel two-stream model using\n3D CNN. To the best of our knowledge, this is the first application of 3D CNN in\nskeleton-based action recognition. Our method consists of three stages. First,\nskeleton joints are mapped into a 3D coordinate space to encode the\nspatial and temporal information, respectively. Second, 3D CNN models are\nseparately adopted to extract deep features from two streams. Third, to enhance\nthe ability of deep features to capture global relationships, we extend every\nstream into a multi-temporal version. Extensive experiments on the SmartHome\ndataset and the large-scale NTU RGB-D dataset demonstrate that our method\noutperforms most RNN-based methods, verifying the complementary property\nbetween spatial and temporal information and the robustness to noise.", "field": [], "task": ["3D Action Recognition", "Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "Two-Stream 3D Convolutional Neural Network for Skeleton-Based Action Recognition"} {"abstract": "We consider the problem of unsupervised domain adaptation in semantic\nsegmentation. The key in this campaign consists in reducing the domain shift,\ni.e., enforcing the data distributions of the two domains to be similar. A\npopular strategy is to align the marginal distribution in the feature space\nthrough adversarial learning. However, this global alignment strategy does not\nconsider the local category-level feature distribution. A possible consequence\nof the global movement is that some categories which are originally well\naligned between the source and target may be incorrectly mapped. To address\nthis problem, this paper introduces a category-level adversarial network,\naiming to enforce local semantic consistency during the trend of global\nalignment. Our idea is to take a close look at the category-level data\ndistribution and align each class with an adaptive adversarial loss.\nSpecifically, we reduce the weight of the adversarial loss for category-level\naligned features while increasing the adversarial force for those poorly\naligned. In this process, we decide how well a feature is category-level\naligned between source and target by a co-training approach. 
In two domain\nadaptation tasks, i.e., GTA5 -> Cityscapes and SYNTHIA -> Cityscapes, we\nvalidate that the proposed method matches the state of the art in segmentation\naccuracy.", "field": [], "task": ["Domain Adaptation", "Semantic Segmentation", "Synthetic-to-Real Translation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["GTAV-to-Cityscapes Labels"], "metric": ["mIoU"], "title": "Taking A Closer Look at Domain Shift: Category-level Adversaries for Semantics Consistent Domain Adaptation"} {"abstract": "We propose a simple yet effective model for Single Image Super-Resolution\n(SISR), by combining the merits of Residual Learning and Convolutional Sparse\nCoding (RL-CSC). Our model is inspired by the Learned Iterative\nShrinkage-Threshold Algorithm (LISTA). We extend LISTA to its convolutional\nversion and build the main part of our model by strictly following the\nconvolutional form, which improves the network's interpretability.\nSpecifically, the convolutional sparse codings of input feature maps are\nlearned in a recursive manner, and high-frequency information can be recovered\nfrom these CSCs. More importantly, residual learning is applied to alleviate\nthe training difficulty when the network goes deeper. Extensive experiments on\nbenchmark datasets demonstrate the effectiveness of our method. RL-CSC (30\nlayers) outperforms several recent state-of-the-arts, e.g., DRRN (52 layers)\nand MemNet (80 layers) in both accuracy and visual qualities. Codes and more\nresults are available at https://github.com/axzml/RL-CSC.", "field": [], "task": ["Image Super-Resolution", "Super-Resolution"], "method": [], "dataset": ["Set5 - 4x upscaling", "Urban100 - 4x upscaling", "BSD100 - 4x upscaling", "Set14 - 4x upscaling"], "metric": ["SSIM", "PSNR"], "title": "Image Super-Resolution via RL-CSC: When Residual Learning Meets Convolutional Sparse Coding"} {"abstract": "We address the problem of 3D pose estimation of multiple humans from multiple views. The transition from single to multiple human pose estimation and from the 2D to 3D space is challenging due to a much larger state space, occlusions and across-view ambiguities when not knowing the identity of the humans in advance. To address these problems, we first create a reduced state space by triangulation of corresponding pairs of body parts obtained by part detectors for each camera view. In order to resolve ambiguities of wrong and mixed parts of multiple humans after triangulation and also those coming from false positive detections, we introduce a 3D pictorial structures (3DPS) model. Our model builds on multi-view unary potentials, while a prior model is integrated into pairwise and ternary potential functions. To balance the potentials' influence, the model parameters are learnt using a Structured SVM (SSVM). The model is generic and applicable to both single and multiple human pose estimation. To evaluate our model on single and multiple human pose estimation, we rely on four different datasets. We first analyse the contribution of the potentials and then compare our results with related work where we demonstrate superior performance.", "field": [], "task": ["3D Multi-Person Pose Estimation", "3D Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["Campus", "Shelf"], "metric": ["PCP3D"], "title": "3D Pictorial Structures Revisited: Multiple Human Pose Estimation"} {"abstract": "In this work, we address the problem of 3D pose estimation of multiple humans from multiple views. 
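As background for the RL-CSC abstract above, the following is a plain (non-convolutional) version of the iterative shrinkage-thresholding update that LISTA unrolls into network layers. The dictionary form, step size and iteration count are textbook choices, not the paper's learned convolutional variant.

```python
# Background sketch for the RL-CSC abstract above: the classic ISTA update that
# LISTA unrolls into network layers. Plain (non-convolutional) dictionary form;
# the learned, convolutional variant used by RL-CSC is not reproduced here.
import numpy as np

def soft_threshold(x, theta):
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista(D, y, lam=0.1, n_iter=100):
    """Solve min_z 0.5*||y - D z||^2 + lam*||z||_1 for sparse codes z."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - y)
        z = soft_threshold(z - grad / L, lam / L)
    return z

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128)); D /= np.linalg.norm(D, axis=0)   # normalised dictionary
z_true = np.zeros(128); z_true[rng.choice(128, 5, replace=False)] = rng.normal(size=5)
y = D @ z_true
print(np.count_nonzero(np.abs(ista(D, y)) > 1e-3))               # recovers a sparse code
```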
This is a more challenging problem than single human 3D pose estimation due to the much larger state space, partial occlusions as well as across-view ambiguities when not knowing the identity of the humans in advance. To address these problems, we first create a reduced state space by triangulation of corresponding body joints obtained from part detectors in pairs of camera views. In order to resolve the ambiguities of wrong and mixed body parts of multiple humans after triangulation and also those coming from false positive body part detections, we introduce a novel 3D pictorial structures (3DPS) model. Our model infers 3D human body configurations from our reduced state space. The 3DPS model is generic and applicable to both single and multiple human pose estimation. In order to compare to the state-of-the-art, we first evaluate our method on single human 3D pose estimation on HumanEva-I [22] and KTH Multiview Football Dataset II [8] datasets. Then, we introduce and evaluate our method on two datasets for multiple human 3D pose estimation.", "field": [], "task": ["3D Human Pose Estimation", "3D Multi-Person Pose Estimation", "3D Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["Campus", "Shelf"], "metric": ["PCP3D", "Average Accuracy"], "title": "3D Pictorial Structures for Multiple Human Pose Estimation"} {"abstract": "Text-based person search aims to retrieve the pedestrian images that best match a given text query. Existing\r\nmethods utilize class-id information to get discriminative\r\nand identity-preserving features. However, it is not well explored whether it is beneficial to explicitly ensure that the\r\nsemantics of the data are retained. In the proposed work, we\r\naim to create semantics-preserving embeddings through an\r\nadditional task of attribute prediction. Since attribute annotations are typically unavailable in text-based person search,\r\nwe first mine them from the text corpus. These attributes are\r\nthen used as a means to bridge the modality gap between the\r\nimage-text inputs, as well as to improve the representation\r\nlearning. In summary, we propose an approach for text-based person search by learning an attribute-driven space\r\nalong with a class-information driven space, and utilize\r\nboth for obtaining the retrieval results. Our experiments on the\r\nbenchmark dataset, CUHK-PEDES, show that learning the\r\nattribute-space not only helps in improving performance,\r\ngiving us state-of-the-art Rank-1 accuracy of 56.68%, but\r\nalso yields humanly-interpretable features.", "field": [], "task": ["Person Search", "Representation Learning", "Text based Person Retrieval"], "method": [], "dataset": ["CUHK-PEDES"], "metric": ["R@10", "R@1", "R@5"], "title": "Text-based Person Search via Attribute-aided Matching"} {"abstract": "Reliable detection of out-of-distribution (OOD) inputs is increasingly understood to be a precondition for deployment of machine learning systems. This paper proposes and investigates the use of contrastive training to boost OOD detection performance. Unlike leading methods for OOD detection, our approach does not require access to examples labeled explicitly as OOD, which can be difficult to collect in practice. We show in extensive experiments that contrastive training significantly helps OOD detection performance on a number of common benchmarks. 
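The reduced state space in the 3D pictorial structures abstracts above is built by triangulating corresponding 2D joint detections from pairs of calibrated views; a standard linear (DLT) triangulation of a single correspondence looks roughly as follows. The toy cameras and the helper name are assumptions, not the authors' code.

```python
# Minimal linear (DLT) triangulation of one corresponding 2D joint detection from
# two calibrated views -- the step used to build the reduced 3D state space in the
# 3D pictorial structures abstracts above. Standard textbook formulation, not the
# authors' implementation.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: (3, 4) camera projection matrices; x1, x2: (2,) pixel coordinates."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                      # homogeneous -> Euclidean 3D point

# Toy example: two cameras observing a known 3D point.
X_true = np.array([0.3, -0.2, 4.0, 1.0])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                    # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])    # second camera, shifted
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(np.round(triangulate(P1, P2, x1, x2), 3))                  # ~[0.3, -0.2, 4.0]
```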
By introducing and employing the Confusion Log Probability (CLP) score, which quantifies the difficulty of the OOD detection task by capturing the similarity of inlier and outlier datasets, we show that our method especially improves performance in the `near OOD' classes -- a particularly challenging setting for previous methods.", "field": [], "task": ["Out-of-Distribution Detection"], "method": [], "dataset": ["CIFAR-100 vs CIFAR-10"], "metric": ["AUROC"], "title": "Contrastive Training for Improved Out-of-Distribution Detection"} {"abstract": "We present a simple and effective method for 3D hand pose estimation from a\nsingle depth frame. As opposed to previous state-of-the-art methods based on\nholistic 3D regression, our method works on dense pixel-wise estimation. This\nis achieved by careful design choices in pose parameterization, which leverages\nboth 2D and 3D properties of the depth map. Specifically, we decompose the pose\nparameters into a set of per-pixel estimations, i.e., 2D heat maps, 3D heat\nmaps and unit 3D directional vector fields. The 2D/3D joint heat maps and 3D\njoint offsets are estimated via multi-task network cascades, which are trained\nend-to-end. The pixel-wise estimations can be directly translated into a vote\ncasting scheme. A variant of mean shift is then used to aggregate local votes\nwhile enforcing consensus between the estimated 3D pose and the pixel-wise\n2D and 3D estimations by design. Our method is efficient and highly accurate.\nOn the MSRA and NYU hand datasets, our method outperforms all previous\nstate-of-the-art approaches by a large margin. On the ICVL hand dataset, our\nmethod achieves similar accuracy compared to the currently proposed nearly\nsaturated result and outperforms various other proposed methods. Code is\navailable $\\href{\"https://github.com/melonwan/denseReg\"}{\\text{online}}$.", "field": [], "task": ["3D Hand Pose Estimation", "Hand Pose Estimation", "Pose Estimation", "Regression"], "method": [], "dataset": ["ICVL Hands", "NYU Hands", "MSRA Hands"], "metric": ["Average 3D Error"], "title": "Dense 3D Regression for Hand Pose Estimation"} {"abstract": "Domain adaptive person re-identification (re-ID) is a challenging task, especially when person identities in target domains are unknown. Existing methods attempt to address this challenge by transferring image styles or aligning feature distributions across domains, whereas the rich unlabeled samples in target domains are not sufficiently exploited. This paper presents a novel augmented discriminative clustering (AD-Cluster) technique that estimates and augments person clusters in target domains and enforces the discrimination ability of re-ID models with the augmented clusters. AD-Cluster is trained by iterative density-based clustering, adaptive sample augmentation, and discriminative feature learning. It learns an image generator and a feature encoder which aim to maximize the intra-cluster diversity in the sample space and minimize the intra-cluster distance in the feature space in an adversarial min-max manner. Finally, AD-Cluster increases the diversity of sample clusters and improves the discrimination capability of re-ID models greatly. 
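AD-Cluster, described above, alternates density-based clustering of target-domain features with adaptive sample augmentation and adversarial feature learning. The snippet below sketches only the generic clustering-for-pseudo-labels step using scikit-learn's DBSCAN; the eps and min_samples values are illustrative assumptions, and none of the paper's augmentation or min-max training is reproduced.

    from sklearn.cluster import DBSCAN
    from sklearn.preprocessing import normalize

    def pseudo_labels_from_features(features, eps=0.6, min_samples=4):
        """Cluster unlabeled target-domain re-ID embeddings into pseudo identities.

        features: (N, D) array from the current encoder.
        Returns one cluster id per sample; -1 marks samples treated as outliers.
        """
        feats = normalize(features)  # unit-length features, cosine-like geometry
        return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)

The resulting pseudo identities would then serve as supervision for the next round of feature learning.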
Extensive experiments over Market-1501 and DukeMTMC-reID show that AD-Cluster outperforms the state-of-the-art with large margins.", "field": [], "task": ["Domain Adaptive Person Re-Identification", "Person Re-Identification", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["Duke to Market", "Market to Duke"], "metric": ["rank-10", "mAP", "rank-5", "rank-1"], "title": "AD-Cluster: Augmented Discriminative Clustering for Domain Adaptive Person Re-identification"} {"abstract": "Label noise is a critical factor that degrades the generalization performance of deep neural networks, thus leading to severe issues in real-world problems. Existing studies have employed strategies based on either loss or uncertainty to address noisy labels, and ironically some strategies contradict each other: emphasizing or discarding uncertain samples or concentrating on high or low loss samples. To elucidate how opposing strategies can enhance model performance and offer insights into training with noisy labels, we present analytical results on how loss and uncertainty values of samples change throughout the training process. From the in-depth analysis, we design a new robust training method that emphasizes clean and informative samples, while minimizing the influence of noise using both loss and uncertainty. We demonstrate the effectiveness of our method with extensive experiments on synthetic and real-world datasets for various deep learning models. The results show that our method significantly outperforms other state-of-the-art methods and can be used generally regardless of neural network architectures.", "field": [], "task": [], "method": [], "dataset": ["Clothing1M"], "metric": ["Accuracy"], "title": "Which Strategies Matter for Noisy Label Classification? Insight into Loss and Uncertainty"} {"abstract": "Weakly-supervised temporal action localization (WS-TAL) is a promising but challenging task with only video-level action categorical labels available during training. Without requiring temporal action boundary annotations in training data, WS-TAL could possibly exploit automatically retrieved video tags as video-level labels. However, such coarse video-level supervision inevitably incurs confusions, especially in untrimmed videos containing multiple action instances. To address this challenge, we propose the Contrast-based Localization EvaluAtioN Network (CleanNet) with our new action proposal evaluator, which provides pseudo-supervision by leveraging the temporal contrast in snippet-level action classification predictions. Essentially, the new action proposal evaluator enforces an additional temporal contrast constraint so that high-evaluation-score action proposals are more likely to coincide with true action instances. Moreover, the new action localization module is an integral part of CleanNet which enables end-to-end training. This is in contrast to many existing WS-TAL methods where action localization is merely a post-processing step. 
Experiments on the THUMOS14 and ActivityNet datasets validate the efficacy of CleanNet against existing state-of-the-art WS-TAL algorithms.\r", "field": [], "task": ["Action Classification", "Action Classification ", "Action Localization", "Temporal Action Localization", "Weakly Supervised Action Localization", "Weakly-supervised Temporal Action Localization", "Weakly Supervised Temporal Action Localization"], "method": [], "dataset": ["ActivityNet-1.2", "THUMOS 2014"], "metric": ["mAP@0.5", "mAP@0.1:0.7"], "title": "Weakly Supervised Temporal Action Localization Through Contrast Based Evaluation Networks"} {"abstract": "We present a single-shot, bottom-up approach for whole image parsing. Whole\nimage parsing, also known as Panoptic Segmentation, generalizes the tasks of\nsemantic segmentation for 'stuff' classes and instance segmentation for 'thing'\nclasses, assigning both semantic and instance labels to every pixel in an\nimage. Recent approaches to whole image parsing typically employ separate\nstandalone modules for the constituent semantic and instance segmentation tasks\nand require multiple passes of inference. Instead, the proposed DeeperLab image\nparser performs whole image parsing with a significantly simpler, fully\nconvolutional approach that jointly addresses the semantic and instance\nsegmentation tasks in a single-shot manner, resulting in a streamlined system\nthat better lends itself to fast processing. For quantitative evaluation, we\nuse both the instance-based Panoptic Quality (PQ) metric and the proposed\nregion-based Parsing Covering (PC) metric, which better captures the image\nparsing quality on 'stuff' classes and larger object instances. We report\nexperimental results on the challenging Mapillary Vistas dataset, in which our\nsingle model achieves 31.95% (val) / 31.6% PQ (test) and 55.26% PC (val) with 3\nframes per second (fps) on GPU or near real-time speed (22.6 fps on GPU) with\nreduced accuracy.", "field": [], "task": ["Instance Segmentation", "Panoptic Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["Cityscapes val"], "metric": ["PQ"], "title": "DeeperLab: Single-Shot Image Parser"} {"abstract": "Different from the fully-supervised action detection problem that is dependent on expensive frame-level annotations, weakly supervised action detection (WSAD) only needs video-level annotations, making it more practical for real-world applications. Existing WSAD methods detect action instances by scoring each video segment (a stack of frames) individually. Most of them fail to model the temporal relations among video segments and cannot effectively characterize action instances possessing latent temporal structure. To alleviate this problem in WSAD, we propose the temporal structure mining (TSM) approach. In TSM, each action instance is modeled as a multi-phase process and the phase evolution within an action instance, i.e., the temporal structure, is exploited. Meanwhile, the video background is modeled by a background phase, which separates different action instances in an untrimmed video. In this framework, phase filters are used to calculate the confidence scores of the presence of an action's phases in each segment. In the WSAD task, however, frame-level annotations are not available, and thus phase filters cannot be trained directly. To tackle the challenge, we treat each segment's phase as a hidden variable. 
We use segments' confidence scores from each phase filter to construct a table and determine hidden variables, i.e., phases of segments, by a maximal circulant path discovery along the table. Experiments conducted on three benchmark datasets demonstrate the state-of-the-art performance of the proposed TSM.\r", "field": [], "task": ["Action Detection", "Weakly Supervised Action Localization"], "method": [], "dataset": ["ActivityNet-1.2", "ActivityNet-1.3", "THUMOS 2014"], "metric": ["mAP@0.5", "mAP@0.1:0.7"], "title": "Temporal Structure Mining for Weakly Supervised Action Detection"} {"abstract": "Temporal Action Localization (TAL) in untrimmed video is important for many applications. However, it is very expensive to annotate the segment-level ground truth (action class and temporal boundary). This raises the interest of addressing TAL with weak supervision (namely, only video-level annotations are available during training). However, the state-of-the-art weakly-supervised TAL methods only focus on generating good Class Activation Sequence (CAS) over time but conduct simple thresholding on CAS to localize actions. In this paper, we first develop a novel weakly-supervised TAL framework called AutoLoc to directly predict the temporal boundary of each action instance. We propose a novel Outer-Inner-Contrastive (OIC) loss to automatically discover the needed segment-level supervision for training such a boundary predictor. Our method achieves dramatically improved performance: under the IoU threshold 0.5, our method improves mAP on THUMOS'14 from 13.7% to 21.2% and mAP on ActivityNet from 7.4% to 27.3%. It is also very encouraging to see that our weakly-supervised method achieves comparable results with some fully-supervised methods.", "field": [], "task": ["Action Localization", "Temporal Action Localization", "Weakly Supervised Action Localization", "Weakly-supervised Temporal Action Localization", "Weakly Supervised Temporal Action Localization"], "method": [], "dataset": ["ActivityNet-1.2", "THUMOS 2014"], "metric": ["mAP@0.5", "mAP@0.1:0.7"], "title": "AutoLoc: Weakly-supervised Temporal Action Localization in Untrimmed Videos"} {"abstract": "Neural sequence models have achieved great success in sentence-level sentiment classification. However, some models are exceptionally complex or based on expensive features. Other models recognize the value of existing linguistic resources but utilize them insufficiently. This paper proposes a novel and general method to incorporate lexicon information, including sentiment lexicons (+/-), negation words and intensifiers. Words are annotated with fine-grained and coarse-grained labels. The proposed method first encodes the fine-grained labels into a sentiment embedding and concatenates it with the word embedding. Second, the coarse-grained labels are utilized to enhance the attention mechanism to give larger weight to sentiment-related words. Experimental results show that our method can increase classification accuracy for neural sequence models on both the SST-5 and MR datasets. Specifically, the enhanced Bi-LSTM model is even competitive with a Tree-LSTM which uses expensive phrase-level annotations. Further analysis shows that in most cases the lexicon resource can offer the right annotations. 
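The lexicon-enhanced sequence model above encodes fine-grained lexicon labels into a sentiment embedding and concatenates it with the word embedding before the Bi-LSTM, while coarse-grained labels additionally bias the attention weights. A minimal PyTorch sketch of the concatenation step is given below; the embedding sizes and the tag inventory are assumptions, and the attention re-weighting is omitted.

    import torch
    import torch.nn as nn

    class LexiconAugmentedEmbedding(nn.Module):
        """Concatenate each word embedding with an embedding of its
        fine-grained lexicon label (polarity, negation, intensifier, ...)."""

        def __init__(self, vocab_size, num_lexicon_tags, word_dim=300, tag_dim=50):
            super().__init__()
            self.word_emb = nn.Embedding(vocab_size, word_dim)
            self.tag_emb = nn.Embedding(num_lexicon_tags, tag_dim)

        def forward(self, token_ids, lexicon_tag_ids):
            # token_ids, lexicon_tag_ids: (batch, seq_len) integer tensors
            return torch.cat([self.word_emb(token_ids),
                              self.tag_emb(lexicon_tag_ids)], dim=-1)

The concatenated vectors would then feed the recurrent encoder in place of plain word embeddings.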
Besides, the proposed method is capable of overcoming the effect from inevitably wrong annotations.", "field": [], "task": ["Sentiment Analysis"], "method": [], "dataset": ["SST-5 Fine-grained classification"], "metric": ["Accuracy"], "title": "Leveraging Multi-grained Sentiment Lexicon Information for Neural Sequence Models"} {"abstract": "The unconditional generation of high fidelity images is a longstanding\nbenchmark for testing the performance of image decoders. Autoregressive image\nmodels have been able to generate small images unconditionally, but the\nextension of these methods to large images where fidelity can be more readily\nassessed has remained an open problem. Among the major challenges are the\ncapacity to encode the vast previous context and the sheer difficulty of\nlearning a distribution that preserves both global semantic coherence and\nexactness of detail. To address the former challenge, we propose the Subscale\nPixel Network (SPN), a conditional decoder architecture that generates an image\nas a sequence of sub-images of equal size. The SPN compactly captures\nimage-wide spatial dependencies and requires a fraction of the memory and the\ncomputation required by other fully autoregressive models. To address the\nlatter challenge, we propose to use Multidimensional Upscaling to grow an image\nin both size and depth via intermediate stages utilising distinct SPNs. We\nevaluate SPNs on the unconditional generation of CelebAHQ of size 256 and of\nImageNet from size 32 to 256. We achieve state-of-the-art likelihood results in\nmultiple settings, set up new benchmark results in previously unexplored\nsettings and are able to generate very high fidelity large scale samples on the\nbasis of both datasets.", "field": [], "task": ["Image Generation"], "method": [], "dataset": ["ImageNet 64x64", "CelebA 256x256", "ImageNet 32x32"], "metric": ["bpd", "Bits per dim"], "title": "Generating High Fidelity Images with Subscale Pixel Networks and Multidimensional Upscaling"} {"abstract": "Lip reading aims at decoding texts from the movement of a speaker's mouth. In recent years, lip reading methods have made great progress for English, at both word-level and sentence-level. Unlike English, however, Chinese Mandarin is a tone-based language and relies on pitches to distinguish lexical or grammatical meaning, which significantly increases the ambiguity for the lip reading task. In this paper, we propose a Cascade Sequence-to-Sequence Model for Chinese Mandarin (CSSMCM) lip reading, which explicitly models tones when predicting sentence. Tones are modeled based on visual information and syntactic structure, and are used to predict sentence along with visual information and syntactic structure. In order to evaluate CSSMCM, a dataset called CMLR (Chinese Mandarin Lip Reading) is collected and released, consisting of over 100,000 natural sentences from China Network Television website. When trained on CMLR dataset, the proposed CSSMCM surpasses the performance of state-of-the-art lip reading frameworks, which confirms the effectiveness of explicit modeling of tones for Chinese Mandarin lip reading.", "field": [], "task": ["Lipreading", "Lip Reading"], "method": [], "dataset": ["CMLR"], "metric": ["CER"], "title": "A Cascade Sequence-to-Sequence Model for Chinese Mandarin Lip Reading"} {"abstract": "We aim to study the modeling limitations of the commonly employed boosted\ndecision trees classifier. Inspired by the success of large, data-hungry visual\nrecognition models (e.g. 
deep convolutional neural networks), this paper\nfocuses on the relationship between modeling capacity of the weak learners,\ndataset size, and dataset properties. A set of novel experiments on the Caltech\nPedestrian Detection benchmark results in the best known performance among\nnon-CNN techniques while operating at fast run-time speed. Furthermore, the\nperformance is on par with deep architectures (9.71% log-average miss rate),\nwhile using only HOG+LUV channels as features. The conclusions from this study\nare shown to generalize over different object detection domains as demonstrated\non the FDDB face detection benchmark (93.37% accuracy). Despite the impressive\nperformance, this study reveals the limited modeling capacity of the common\nboosted trees model, motivating a need for architectural changes in order to\ncompete with multi-level and very deep architectures.", "field": [], "task": ["Face Detection", "Object Detection", "Pedestrian Detection"], "method": [], "dataset": ["WIDER Face (Hard)", "WIDER Face (Medium)", "WIDER Face (Easy)"], "metric": ["AP"], "title": "To Boost or Not to Boost? On the Limits of Boosted Trees for Object Detection"} {"abstract": "Neural Architecture Search (NAS) yields state-of-the-art neural networks that outperform their best manually-designed counterparts. However, previous NAS methods search for architectures under one training recipe (i.e., training hyperparameters), ignoring the significance of training recipes and overlooking superior architectures under other training recipes. Thus, they fail to find higher-accuracy architecture-recipe combinations. To address this oversight, we present JointNAS to search both (a) architectures and (b) their corresponding training recipes. To accomplish this, we introduce a neural acquisition function that scores architectures and training recipes jointly. Following pre-training on a proxy dataset, this acquisition function guides both coarse-grained and fine-grained searches to produce FBNetV3. FBNetV3 is a family of state-of-the-art compact ImageNet models, outperforming both automatically and manually-designed architectures. For example, FBNetV3 matches both EfficientNet and ResNeSt accuracy with 1.4x and 5.0x fewer FLOPs, respectively. Furthermore, the JointNAS-searched training recipe yields significant performance gains across different networks and tasks.", "field": [], "task": ["Neural Architecture Search"], "method": [], "dataset": ["ImageNet"], "metric": ["Top-1 Error Rate", "MACs", "Accuracy"], "title": "FBNetV3: Joint Architecture-Recipe Search using Neural Acquisition Function"} {"abstract": "We formulate a Data Driven Computing paradigm, termed max-ent Data Driven Computing, that generalizes distance-minimizing Data Driven Computing and is robust with respect to outliers. Robustness is achieved by means of clustering analysis. Specifically, we assign data points a variable relevance depending on distance to the solution and on maximum-entropy estimation. The resulting scheme consists of the minimization of a suitably-defined free energy over phase space subject to compatibility and equilibrium constraints. Distance-minimizing Data Driven schemes are recovered in the limit of zero temperature. 
We present selected numerical tests that establish the convergence properties of the max-ent Data Driven solvers and solutions.", "field": [], "task": ["Stress-Strain Relation"], "method": [], "dataset": ["Non-Linear Elasticity Benchmark"], "metric": ["Time (ms)"], "title": "Data Driven Computing with Noisy Material Data Sets"} {"abstract": "In this paper, we propose RNN-Capsule, a capsule model based on Recurrent Neural Network (RNN) for sentiment analysis. For a given problem, one capsule is built for each sentiment category e.g., \u2018positive\u2019 and \u2018negative\u2019. Each capsule has an attribute, a state, and three modules: representation module, probability module, and reconstruction module. The attribute of a capsule is the assigned sentiment category. Given an instance encoded in hidden vectors by a typical RNN, the representation module builds capsule representation by the attention mechanism. Based on capsule representation, the probability module computes the capsule\u2019s state probability. A capsule\u2019s state is active if its state probability is the largest among all capsules for the given instance, and inactive otherwise. On two benchmark datasets (i.e., Movie Review and Stanford Sentiment Treebank) and one proprietary dataset (i.e., Hospital Feedback), we show that RNN-Capsule achieves state-of-the-art performance on sentiment classification. More importantly, without using any linguistic knowledge, RNN-Capsule is capable of outputting words with sentiment tendencies reflecting capsules\u2019 attributes. The words well reflect the domain specificity of the dataset.", "field": [], "task": ["Sentiment Analysis"], "method": [], "dataset": ["MR", "SST-5 Fine-grained classification"], "metric": ["Accuracy"], "title": "Sentiment Analysis by Capsules"} {"abstract": "Description-based person re-identification (Re-id) is an important task in video surveillance that requires discriminative cross-modal representations to distinguish different people. It is difficult to directly measure the similarity between images and descriptions due to the modality heterogeneity (the cross-modal problem). And all samples belonging to a single category (the fine-grained problem) makes this task even harder than the conventional image-description matching task. In this paper, we propose a Multi-granularity Image-text Alignments (MIA) model to alleviate the cross-modal fine-grained problem for better similarity evaluation in description-based person Re-id. Specifically, three different granularities, i.e., global-global, global-local and local-local alignments are carried out hierarchically. Firstly, the global-global alignment in the Global Contrast (GC) module is for matching the global contexts of images and descriptions. Secondly, the global-local alignment employs the potential relations between local components and global contexts to highlight the distinguishable components while eliminating the uninvolved ones adaptively in the Relation-guided Global-local Alignment (RGA) module. Thirdly, as for the local-local alignment, we match visual human parts with noun phrases in the Bi-directional Fine-grained Matching (BFM) module. The whole network combining multiple granularities can be end-to-end trained without complex pre-processing. To address the difficulties in training the combination of multiple granularities, an effective step training strategy is proposed to train these granularities step-by-step. 
Extensive experiments and analysis have shown that our method obtains the state-of-the-art performance on the CUHK-PEDES dataset and outperforms the previous methods by a significant margin.", "field": [], "task": ["Person Re-Identification", "Text based Person Retrieval"], "method": [], "dataset": ["CUHK-PEDES"], "metric": ["R@10", "R@1", "R@5"], "title": "Improving Description-based Person Re-identification by Multi-granularity Image-text Alignments"} {"abstract": "Most existing person re-identification (re-id) methods focus on learning the\noptimal distance metrics across camera views. Typically a person's appearance\nis represented using features of thousands of dimensions, whilst only hundreds\nof training samples are available due to the difficulties in collecting matched\ntraining images. With the number of training samples much smaller than the\nfeature dimension, the existing methods thus face the classic small sample size\n(SSS) problem and have to resort to dimensionality reduction techniques and/or\nmatrix regularisation, which lead to loss of discriminative power. In this\nwork, we propose to overcome the SSS problem in re-id distance metric learning\nby matching people in a discriminative null space of the training data. In this\nnull space, images of the same person are collapsed into a single point thus\nminimising the within-class scatter to the extreme and maximising the relative\nbetween-class separation simultaneously. Importantly, it has a fixed dimension,\na closed-form solution and is very efficient to compute. Extensive experiments\ncarried out on five person re-identification benchmarks including VIPeR,\nPRID2011, CUHK01, CUHK03 and Market1501 show that such a simple approach beats\nthe state-of-the-art alternatives, often by a big margin.", "field": [], "task": ["Dimensionality Reduction", "Metric Learning", "Person Re-Identification"], "method": [], "dataset": ["Market-1501"], "metric": ["Rank-1", "MAP"], "title": "Learning a Discriminative Null Space for Person Re-identification"} {"abstract": "In this paper, we focus on heterogeneous features learning for RGB-D activity recognition. We find that features from different channels (RGB, depth) could share some similar hidden structures, and then propose a joint learning model to simultaneously explore the shared and feature-specific components as an instance of heterogeneous multi-task learning. The proposed model formed in a unified framework is capable of: 1) jointly mining a set of subspaces with the same dimensionality to exploit latent shared features across different feature channels, 2) meanwhile, quantifying the shared and feature-specific components of features in the subspaces, and 3) transferring feature-specific intermediate transforms (i-transforms) for learning fusion of heterogeneous features across datasets. To efficiently train the joint model, a three-step iterative optimization algorithm is proposed, followed by a simple inference model. Extensive experimental results on four activity datasets have demonstrated the efficacy of the proposed method. 
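The null-space re-id method above matches people in a discriminative null space of the training data, where all images of the same person collapse to a single point. The numpy/scipy sketch below conveys that idea in its simplest form; it drops the paper's additional constraint involving the total scatter and its efficient closed-form computation, and assumes a feature dimension small enough to form the scatter matrix explicitly.

    import numpy as np
    from scipy.linalg import null_space

    def null_space_projection(X, y):
        """Project features onto the null space of the within-class scatter.

        X: (N, D) feature matrix with N much smaller than D; y: (N,) identity labels.
        Along every returned direction, same-identity samples map to one point.
        """
        D = X.shape[1]
        Sw = np.zeros((D, D))
        for c in np.unique(y):
            Xc = X[y == c]
            Xc = Xc - Xc.mean(axis=0)
            Sw += Xc.T @ Xc
        W = null_space(Sw)          # orthonormal basis of {w : Sw w = 0}
        return X @ W                # projected features used for matching

Because N is much smaller than D in the re-id setting, this null space is guaranteed to be non-trivial, which is exactly what makes the small-sample-size regime workable here.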
A new RGB-D activity dataset focusing on human-object interaction is further contributed, which presents more challenges for RGB-D activity benchmarking.", "field": [], "task": ["Activity Recognition", "Human-Object Interaction Detection", "Multi-Task Learning", "Skeleton Based Action Recognition"], "method": [], "dataset": ["NTU RGB+D", "NTU RGB+D 120", "SYSU 3D"], "metric": ["Accuracy (CS)", "Accuracy (Cross-Subject)", "Accuracy (CV)", "Accuracy (Cross-Setup)", "Accuracy"], "title": "Jointly learning heterogeneous features for rgb-d activity recognition"} {"abstract": "Much recent progress in Vision-to-Language problems has been achieved through\na combination of Convolutional Neural Networks (CNNs) and Recurrent Neural\nNetworks (RNNs). This approach does not explicitly represent high-level\nsemantic concepts, but rather seeks to progress directly from image features to\ntext. In this paper we first propose a method of incorporating high-level\nconcepts into the successful CNN-RNN approach, and show that it achieves a\nsignificant improvement on the state-of-the-art in both image captioning and\nvisual question answering. We further show that the same mechanism can be used\nto incorporate external knowledge, which is critically important for answering\nhigh level visual questions. Specifically, we design a visual question\nanswering model that combines an internal representation of the content of an\nimage with information extracted from a general knowledge base to answer a\nbroad range of image-based questions. It particularly allows questions to be\nasked about the contents of an image, even when the image itself does not\ncontain a complete answer. Our final model achieves the best reported results\non both image captioning and visual question answering on several benchmark\ndatasets.", "field": [], "task": ["Image Captioning", "Question Answering", "Visual Question Answering"], "method": [], "dataset": ["COCO Visual Question Answering (VQA) real images 1.0 open ended"], "metric": ["Percentage correct"], "title": "Image Captioning and Visual Question Answering Based on Attributes and External Knowledge"} {"abstract": "Deep convolutional neural networks excel at sentiment polarity classification, but tend to require substantial amounts of training data, which moreover differs quite significantly between domains. In this work, we present an approach to feed generic cues into the training process of such networks, leading to better generalization abilities given limited training data. We propose to induce sentiment embeddings via supervision on extrinsic data, which are then fed into the model via a dedicated memory-based component. We observe significant gains in effectiveness on a range of different datasets in seven different languages.", "field": [], "task": ["Sentiment Analysis", "Transfer Learning", "Word Embeddings"], "method": [], "dataset": ["SST-2 Binary classification"], "metric": ["Accuracy"], "title": "A Helping Hand: Transfer Learning for Deep Sentiment Analysis"} {"abstract": "Few-shot learning, i.e., learning novel concepts from few examples, is fundamental to practical visual recognition systems. While most existing work has focused on few-shot classification, we make a step towards few-shot object detection, a more challenging yet under-explored task. We develop a conceptually simple but powerful meta-learning based framework that simultaneously tackles few-shot classification and few-shot localization in a unified, coherent way. 
This framework leverages meta-level knowledge about \"model parameter generation\" from base classes with abundant data to facilitate the generation of a detector for novel classes. Our key insight is to disentangle the learning of category-agnostic and category-specific components in a CNN based detection model. In particular, we introduce a weight prediction meta-model that enables predicting the parameters of category-specific components from few examples. We systematically benchmark the performance of modern detectors in the small-sample size regime. Experiments in a variety of realistic scenarios, including within-domain, cross-domain, and long-tailed settings, demonstrate the effectiveness and generality of our approach under different notions of novel classes.\r", "field": [], "task": ["Few-Shot Learning", "Few-Shot Object Detection", "Meta-Learning", "Object Detection"], "method": [], "dataset": ["MS-COCO (30-shot)", "MS-COCO (10-shot)"], "metric": ["AP"], "title": "Meta-Learning to Detect Rare Objects"} {"abstract": "Many machine learning algorithms require the input to be represented as a\nfixed-length feature vector. When it comes to texts, one of the most common\nfixed-length features is bag-of-words. Despite their popularity, bag-of-words\nfeatures have two major weaknesses: they lose the ordering of the words and\nthey also ignore semantics of the words. For example, \"powerful,\" \"strong\" and\n\"Paris\" are equally distant. In this paper, we propose Paragraph Vector, an\nunsupervised algorithm that learns fixed-length feature representations from\nvariable-length pieces of texts, such as sentences, paragraphs, and documents.\nOur algorithm represents each document by a dense vector which is trained to\npredict words in the document. Its construction gives our algorithm the\npotential to overcome the weaknesses of bag-of-words models. Empirical results\nshow that Paragraph Vectors outperform bag-of-words models as well as other\ntechniques for text representations. Finally, we achieve new state-of-the-art\nresults on several text classification and sentiment analysis tasks.", "field": [], "task": ["Question Answering", "Sentiment Analysis", "Text Classification"], "method": [], "dataset": ["IMDb", "QASent", "WikiQA"], "metric": ["Accuracy (2 classes)", "Accuracy (10 classes)", "MRR", "MAP"], "title": "Distributed Representations of Sentences and Documents"} {"abstract": "Word sense induction (WSI) seeks to automatically discover the senses of a word in a corpus via unsupervised methods. We propose a sense-topic model for WSI, which treats sense and topic as two separate latent variables to be inferred jointly. Topics are informed by the entire document, while senses are informed by the local context surrounding the ambiguous word. We also discuss unsupervised ways of enriching the original corpus in order to improve model performance, including using neural word embeddings and external corpora to expand the context of each data instance. 
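Paragraph Vector, summarized above, represents each document by a dense vector trained to predict the words occurring in that document. A small usage sketch with gensim's Doc2Vec, an independent implementation of the method, is shown below; the corpus is a toy placeholder and parameter names may differ slightly across gensim versions.

    from gensim.models.doc2vec import Doc2Vec, TaggedDocument

    # Toy corpus; in practice these would be tokenized reviews or documents.
    corpus = [
        TaggedDocument(words=["a", "powerful", "and", "moving", "film"], tags=[0]),
        TaggedDocument(words=["a", "dull", "and", "predictable", "plot"], tags=[1]),
    ]

    # dm=1 selects the distributed-memory (PV-DM) variant of Paragraph Vector.
    model = Doc2Vec(corpus, vector_size=100, window=5, min_count=1, epochs=40, dm=1)

    # Infer a fixed-length vector for unseen text, e.g. as a classifier feature.
    vec = model.infer_vector(["surprisingly", "powerful", "film"])

The inferred vectors can then be fed to any downstream classifier, which is how results such as the sentiment accuracies above are typically obtained.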
We demonstrate significant improvements over the previous state-of-the-art, achieving the best results reported to date on the SemEval-2013 WSI task.", "field": [], "task": ["Topic Models", "Word Embeddings", "Word Sense Induction"], "method": [], "dataset": ["SemEval 2013"], "metric": ["F_NMI", "F-BC", "AVG"], "title": "A Sense-Topic Model for Word Sense Induction with Unsupervised Data Enrichment"} {"abstract": "Updated on 24/09/2015: This update provides preliminary experiment results\nfor fine-grained classification on the surveillance data of CompCars. The\ntrain/test splits are provided in the updated dataset. See details in Section\n6.", "field": [], "task": ["Fine-Grained Image Classification"], "method": [], "dataset": ["CompCars"], "metric": ["Accuracy"], "title": "A Large-Scale Car Dataset for Fine-Grained Categorization and Verification"} {"abstract": "Machine learning has the potential to assist many communities in using the large datasets that are becoming more and more available. Unfortunately, much of that potential is not being realized because it would require sharing data in a way that compromises privacy. In this paper, we investigate a method for ensuring (differential) privacy of the generator of the Generative Adversarial Nets (GAN) framework. The resulting model can be used for generating synthetic data on which algorithms can be trained and validated, and on which competitions can be conducted, without compromising the privacy of the original dataset. Our method modifies the Private Aggregation of Teacher Ensembles (PATE) framework and applies it to GANs. Our modified framework (which we call PATE-GAN) allows us to tightly bound the influence of any individual sample on the model, resulting in tight differential privacy guarantees and thus an improved performance over models with the same guarantees. We also look at measuring the quality of synthetic data from a new angle; we assert that for the synthetic data to be useful for machine learning researchers, the relative performance of two algorithms (trained and tested) on the synthetic dataset should be the same as their relative performance (when trained and tested) on the original dataset. Our experiments, on various datasets, demonstrate that PATE-GAN consistently outperforms the state-of-the-art method with respect to this and other notions of synthetic data quality.", "field": [], "task": ["Synthetic Data Generation"], "method": [], "dataset": ["UCI Epileptic Seizure Recognition"], "metric": ["AUROC"], "title": "PATE-GAN: Generating Synthetic Data with Differential Privacy Guarantees"} {"abstract": "In this work, we are interested in generalizing convolutional neural networks\n(CNNs) from low-dimensional regular grids, where image, video and speech are\nrepresented, to high-dimensional irregular domains, such as social networks,\nbrain connectomes or words' embedding, represented by graphs. We present a\nformulation of CNNs in the context of spectral graph theory, which provides the\nnecessary mathematical background and efficient numerical schemes to design\nfast localized convolutional filters on graphs. 
Importantly, the proposed\ntechnique offers the same linear computational complexity and constant learning\ncomplexity as classical CNNs, while being universal to any graph structure.\nExperiments on MNIST and 20NEWS demonstrate the ability of this novel deep\nlearning system to learn local, stationary, and compositional features on\ngraphs.", "field": [], "task": ["Node Classification", "Skeleton Based Action Recognition"], "method": [], "dataset": ["PubMed (0.1%)", "Cora", "PubMed (0.03%)", "Cora (1%)", "PubMed (0.05%)", "Citeseer", "Cora (3%)", "SBU", "CiteSeer (1%)", "Cora (0.5%)", "Cora with Public Split: fixed 20 nodes per class", "CiteSeer (0.5%)", "Pubmed", "CiteSeer with Public Split: fixed 20 nodes per class", "PubMed with Public Split: fixed 20 nodes per class"], "metric": ["Accuracy"], "title": "Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering"} {"abstract": "Most existing person re-identification (re-id) methods rely on supervised model\nlearning on per-camera-pair manually labelled pairwise training data. This\nleads to poor scalability in practical re-id deployment due to the lack of\nexhaustive identity labelling of image positive and negative pairs for every\ncamera pair. In this work, we address this problem by proposing an unsupervised\nre-id deep learning approach capable of incrementally discovering and\nexploiting the underlying re-id discriminative information from automatically\ngenerated person tracklet data from videos in an end-to-end model optimisation.\nWe formulate a Tracklet Association Unsupervised Deep Learning (TAUDL)\nframework characterised by jointly learning per-camera (within-camera) tracklet\nassociation (labelling) and cross-camera tracklet correlation by maximising the\ndiscovery of most likely tracklet relationships across camera views. Extensive\nexperiments demonstrate the superiority of the proposed TAUDL model over the\nstate-of-the-art unsupervised and domain adaptation re-id methods using six\nperson re-id benchmarking datasets.", "field": [], "task": ["Domain Adaptation", "Person Re-Identification", "Unsupervised Person Re-Identification"], "method": [], "dataset": ["MSMT17", "PRID2011", "DukeTracklet"], "metric": ["Rank-1", "Rank-20", "mAP", "Rank-5"], "title": "Unsupervised Person Re-identification by Deep Learning Tracklet Association"} {"abstract": "Large pose variations remain a challenge that confronts real-world face\ndetection. We propose a new cascaded Convolutional Neural Network, dubbed the\nname Supervised Transformer Network, to address this challenge. The first stage\nis a multi-task Region Proposal Network (RPN), which simultaneously predicts\ncandidate face regions along with associated facial landmarks. The candidate\nregions are then warped by mapping the detected facial landmarks to their\ncanonical positions to better normalize the face patterns. The second stage,\nwhich is an RCNN, then verifies if the warped candidate regions are valid faces\nor not. We conduct end-to-end learning of the cascaded network, including\noptimizing the canonical positions of the facial landmarks. This supervised\nlearning of the transformations automatically selects the best scale to\ndifferentiate face/non-face patterns. By combining feature maps from both\nstages of the network, we achieve state-of-the-art detection accuracies on\nseveral public benchmarks. For real-time performance, we run the cascaded\nnetwork only on regions of interest produced from a boosting cascade face\ndetector. 
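The spectral graph CNN above obtains fast, K-localized filters by expanding each filter in Chebyshev polynomials of the rescaled graph Laplacian, so that filtering reduces to K sparse matrix-vector products. Below is a minimal single-channel numpy sketch of that filtering step; the dense eigenvalue computation for lambda_max is used here purely for brevity and is not how the cost is handled in practice.

    import numpy as np

    def chebyshev_graph_filter(L, x, theta):
        """y = sum_k theta[k] * T_k(L_hat) x, with T_k the Chebyshev polynomials
        and L_hat = 2 L / lambda_max - I the rescaled graph Laplacian.

        L: (N, N) normalized Laplacian, x: (N,) node signal, theta: (K,) coefficients.
        """
        lam_max = np.linalg.eigvalsh(L).max()
        L_hat = 2.0 * L / lam_max - np.eye(L.shape[0])
        Tx_prev, Tx = x, L_hat @ x                 # T_0(L_hat) x and T_1(L_hat) x
        out = theta[0] * Tx_prev
        if len(theta) > 1:
            out = out + theta[1] * Tx
        for k in range(2, len(theta)):
            Tx_prev, Tx = Tx, 2.0 * L_hat @ Tx - Tx_prev   # Chebyshev recurrence
            out = out + theta[k] * Tx
        return out

In the learned network the coefficients theta are trainable parameters per input-output feature pair, and the recurrence is applied with sparse Laplacians.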
Our detector runs at 30 FPS on a single CPU core for a VGA-resolution\nimage.", "field": [], "task": ["Face Detection", "Region Proposal"], "method": [], "dataset": ["PASCAL Face", "Annotated Faces in the Wild"], "metric": ["AP"], "title": "Supervised Transformer Network for Efficient Face Detection"} {"abstract": "This paper proposes a novel learning method for multi-task applications. Multi-task neural networks can learn to transfer knowledge across different tasks by using parameter sharing. However, sharing parameters between unrelated tasks can hurt performance. To address this issue, we propose a framework to learn fine-grained patterns of parameter sharing. Assuming that the network is composed of several components across layers, our framework uses learned binary variables to allocate components to tasks in order to encourage more parameter sharing between related tasks, and discourage parameter sharing otherwise. The binary allocation variables are learned jointly with the model parameters by standard back-propagation thanks to the Gumbel-Softmax reparametrization method. When applied to the Omniglot benchmark, the proposed method achieves a 17% relative reduction of the error rate compared to state-of-the-art.", "field": [], "task": ["Multi-Task Learning", "Omniglot"], "method": [], "dataset": ["OMNIGLOT"], "metric": ["Average Accuracy"], "title": "Flexible Multi-task Networks by Learning Parameter Allocation"} {"abstract": "Unlike human learning, machine learning often fails to handle changes between\ntraining (source) and test (target) input distributions. Such domain shifts,\ncommon in practical scenarios, severely damage the performance of conventional\nmachine learning methods. Supervised domain adaptation methods have been\nproposed for the case when the target data have labels, including some that\nperform very well despite being \"frustratingly easy\" to implement. However, in\npractice, the target domain is often unlabeled, requiring unsupervised\nadaptation. We propose a simple, effective, and efficient method for\nunsupervised domain adaptation called CORrelation ALignment (CORAL). CORAL\nminimizes domain shift by aligning the second-order statistics of source and\ntarget distributions, without requiring any target labels. Even though it is\nextraordinarily simple--it can be implemented in four lines of Matlab\ncode--CORAL performs remarkably well in extensive evaluations on standard\nbenchmark datasets.", "field": [], "task": ["Domain Adaptation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["Synth Digits-to-SVHN", "Synth Signs-to-GTSRB"], "metric": ["Accuracy"], "title": "Return of Frustratingly Easy Domain Adaptation"} {"abstract": "Fine-tuning neural networks is widely used to transfer valuable knowledge\nfrom high-resource to low-resource domains. In a standard fine-tuning scheme,\nsource and target problems are trained using the same architecture. Although\ncapable of adapting to new domains, pre-trained units struggle with learning\nuncommon target-specific patterns. In this paper, we propose to augment the\ntarget-network with normalised, weighted and randomly initialised units that\nbeget a better adaptation while maintaining the valuable source knowledge. 
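CORAL, described above, aligns the second-order statistics of the source and target features. A numpy rendering of that whitening-and-recoloring transform is given below; the identity regularization constant is an assumption of this sketch, and in the unsupervised setting a classifier is subsequently trained on the transformed source features.

    import numpy as np
    from scipy.linalg import fractional_matrix_power

    def coral(source, target, eps=1.0):
        """Re-color source features so their covariance matches the target domain.

        source: (Ns, D) labeled source features; target: (Nt, D) unlabeled target
        features. Returns the transformed source features.
        """
        Cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
        Ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])
        whiten = fractional_matrix_power(Cs, -0.5)   # remove source correlations
        recolor = fractional_matrix_power(Ct, 0.5)   # impose target correlations
        return source @ whiten @ recolor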
Our\nexperiments on POS tagging of social media texts (Tweets domain) demonstrate\nthat our method achieves state-of-the-art performances on 3 commonly used\ndatasets.", "field": [], "task": ["Domain Adaptation", "Part-Of-Speech Tagging"], "method": [], "dataset": ["Social media"], "metric": ["Accuracy"], "title": "Joint Learning of Pre-Trained and Random Units for Domain Adaptation in Part-of-Speech Tagging"} {"abstract": "The effectiveness of generative adversarial approaches in producing images\naccording to a specific style or visual domain has recently opened new\ndirections to solve the unsupervised domain adaptation problem. It has been\nshown that source labeled images can be modified to mimic target samples making\nit possible to train directly a classifier in the target domain, despite the\noriginal lack of annotated data. Inverse mappings from the target to the source\ndomain have also been evaluated but only passing through adapted feature\nspaces, thus without new image generation. In this paper we propose to better\nexploit the potential of generative adversarial networks for adaptation by\nintroducing a novel symmetric mapping among domains. We jointly optimize\nbi-directional image transformations combining them with target self-labeling.\nMoreover we define a new class consistency loss that aligns the generators in\nthe two directions imposing to conserve the class identity of an image passing\nthrough both domain mappings. A detailed qualitative and quantitative analysis\nof the reconstructed images confirm the power of our approach. By integrating\nthe two domain specific classifiers obtained with our bi-directional network we\nexceed previous state-of-the-art unsupervised adaptation results on four\ndifferent benchmark datasets.", "field": [], "task": ["Domain Adaptation", "Image Generation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["SVHN-to-MNIST"], "metric": ["Accuracy"], "title": "From source to target and back: symmetric bi-directional adaptive GAN"} {"abstract": "Many of the existing methods for learning joint embedding of images and text\nuse only supervised information from paired images and its textual attributes.\nTaking advantage of the recent success of unsupervised learning in deep neural\nnetworks, we propose an end-to-end learning framework that is able to extract\nmore robust multi-modal representations across domains. The proposed method\ncombines representation learning models (i.e., auto-encoders) together with\ncross-domain learning criteria (i.e., Maximum Mean Discrepancy loss) to learn\njoint embeddings for semantic and visual features. A novel technique of\nunsupervised-data adaptation inference is introduced to construct more\ncomprehensive embeddings for both labeled and unlabeled data. We evaluate our\nmethod on Animals with Attributes and Caltech-UCSD Birds 200-2011 dataset with\na wide range of applications, including zero and few-shot image recognition and\nretrieval, from inductive to transductive settings. 
Empirically, we show that\nour framework improves over the current state of the art on many of the\nconsidered tasks.", "field": [], "task": ["Generalized Few-Shot Learning", "Representation Learning"], "method": [], "dataset": ["SUN", "AWA2", "CUB"], "metric": ["Per-Class Accuracy (2-shots)", "Per-Class Accuracy (2-shots)", "Per-Class Accuracy (5-shots)", "Per-Class Accuracy (10-shots)", "Per-Class Accuracy (1-shot)"], "title": "Learning Robust Visual-Semantic Embeddings"} {"abstract": "Accurate and automatic organ segmentation from 3D radiological scans is an\nimportant yet challenging problem for medical image analysis. Specifically, the\npancreas demonstrates very high inter-patient anatomical variability in both\nits shape and volume. In this paper, we present an automated system using 3D\ncomputed tomography (CT) volumes via a two-stage cascaded approach: pancreas\nlocalization and segmentation. For the first step, we localize the pancreas\nfrom the entire 3D CT scan, providing a reliable bounding box for the more\nrefined segmentation step. We introduce a fully deep-learning approach, based\non an efficient application of holistically-nested convolutional networks\n(HNNs) on the three orthogonal axial, sagittal, and coronal views. The\nresulting HNN per-pixel probability maps are then fused using pooling to\nreliably produce a 3D bounding box of the pancreas that maximizes the recall.\nWe show that our introduced localizer compares favorably to both a conventional\nnon-deep-learning method and a recent hybrid approach based on spatial\naggregation of superpixels using random forest classification. The second,\nsegmentation, phase operates within the computed bounding box and integrates\nsemantic mid-level cues of deeply-learned organ interior and boundary maps,\nobtained by two additional and separate realizations of HNNs. By integrating\nthese two mid-level cues, our method is capable of generating\nboundary-preserving pixel-wise class label maps that result in the final\npancreas segmentation. Quantitative evaluation is performed on a publicly\navailable dataset of 82 patient CT scans using 4-fold cross-validation (CV). We\nachieve a Dice similarity coefficient (DSC) of 81.27+/-6.27% in validation,\nwhich significantly outperforms previous state-of-the art methods that report\nDSCs of 71.80+/-10.70% and 78.01+/-8.20%, respectively, using the same dataset.", "field": [], "task": ["3D Medical Imaging Segmentation", "Computed Tomography (CT)", "Pancreas Segmentation"], "method": [], "dataset": ["TCIA Pancreas-CT"], "metric": ["Dice Score"], "title": "Spatial Aggregation of Holistically-Nested Convolutional Neural Networks for Automated Pancreas Localization and Segmentation"} {"abstract": "Different types of Convolutional Neural Networks (CNNs) have been applied to detect cancerous lung nodules from computed tomography (CT) scans. However, the size of a nodule is very diverse and can range anywhere between 3 and 30 millimeters. The high variation of nodule sizes makes classifying them a difficult and challenging task. In this study, we propose a novel CNN architecture called Gated-Dilated (GD) networks to classify nodules as malignant or benign. Unlike previous studies, the GD network uses multiple dilated convolutions instead of max-poolings to capture the scale variations. Moreover, the GD network has a Context-Aware sub-network that analyzes the input features and guides the features to a suitable dilated convolution. 
We evaluated the proposed network on more than 1,000 CT scans from the LIDC-IDRI dataset. Our proposed network outperforms state-of-the-art baseline models including Multi-Crop, Resnet, and Densenet, with an AUC of >0.95. Compared to the baseline models, the GD network improves the classification accuracies of mid-range sized nodules. Furthermore, we observe a relationship between the size of the nodule and the attention signal generated by the Context-Aware sub-network, which validates our new network architecture.", "field": [], "task": ["Computed Tomography (CT)", "Lung Nodule Classification"], "method": [], "dataset": ["LIDC-IDRI"], "metric": ["Accuracy(10-fold)", "AUC", "Accuracy"], "title": "Gated-Dilated Networks for Lung Nodule Classification in CT scans"} {"abstract": "Current research on action recognition mainly focuses on single-view and\nmulti-view recognition, which can hardly satisfy the requirements of\nhuman-robot interaction (HRI) applications to recognize actions from arbitrary\nviews. The lack of datasets also sets up barriers. To provide data for\narbitrary-view action recognition, we newly collect a large-scale RGB-D action\ndataset for arbitrary-view action analysis, including RGB videos, depth and\nskeleton sequences. The dataset includes action samples captured in 8 fixed\nviewpoints and varying-view sequences which cover the entire 360 degree range of view\nangles. In total, 118 persons are invited to act 40 action categories, and\n25,600 video samples are collected. Our dataset involves more participants,\nmore viewpoints and a large number of samples. More importantly, it is the\nfirst dataset containing the entire 360 degree varying-view sequences. The\ndataset provides sufficient data for multi-view, cross-view and arbitrary-view\naction analysis. Besides, we propose a View-guided Skeleton CNN (VS-CNN) to\ntackle the problem of arbitrary-view action recognition. Experimental results\nshow that the VS-CNN achieves superior performance.", "field": [], "task": ["Action Recognition", "Human robot interaction", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["Varying-view RGB-D Action-Skeleton"], "metric": ["Accuracy (CS)", "Accuracy (CV II)", "Accuracy (CV I)", "Accuracy (AV I)", "Accuracy (AV II)"], "title": "A Large-scale Varying-view RGB-D Action Dataset for Arbitrary-view Human Action Recognition"} {"abstract": "Many problems in NLP require aggregating information from multiple mentions\nof the same entity which may be far apart in the text. Existing Recurrent\nNeural Network (RNN) layers are biased towards short-term dependencies and\nhence not suited to such tasks. We present a recurrent layer which is instead\nbiased towards coreferent dependencies. The layer uses coreference annotations\nextracted from an external system to connect entity mentions belonging to the\nsame cluster. Incorporating this layer into a state-of-the-art reading\ncomprehension model improves performance on three datasets -- Wikihop, LAMBADA\nand the bAbi AI tasks -- with large gains when training data is scarce.", "field": [], "task": ["Reading Comprehension"], "method": [], "dataset": ["WikiHop"], "metric": ["Test"], "title": "Neural Models for Reasoning over Multiple Mentions using Coreference"} {"abstract": "In this paper, we present a fully automatic brain tumor segmentation method\nbased on Deep Neural Networks (DNNs). The proposed networks are tailored to\nglioblastomas (both low and high grade) pictured in MR images. 
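The Gated-Dilated network above routes features through several dilated convolutions under the guidance of a Context-Aware sub-network. The PyTorch block below is only a schematic interpretation of that idea; the branch count, the global-average-pooling gate, and the softmax mixing are assumptions rather than the paper's exact design.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GatedDilatedBlock(nn.Module):
        """Parallel dilated convolutions mixed by an input-dependent gate."""

        def __init__(self, channels, dilations=(1, 2, 3)):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
                for d in dilations])
            self.gate = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, len(dilations), kernel_size=1))

        def forward(self, x):
            weights = F.softmax(self.gate(x), dim=1)                   # (B, K, 1, 1)
            outs = torch.stack([b(x) for b in self.branches], dim=1)   # (B, K, C, H, W)
            return (weights.unsqueeze(2) * outs).sum(dim=1)

The intent mirrors the abstract: the gate decides, per input, how much each receptive-field size should contribute, so that small and large nodules are handled by appropriate dilation rates.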
By their very\nnature, these tumors can appear anywhere in the brain and have almost any kind\nof shape, size, and contrast. These reasons motivate our exploration of a\nmachine learning solution that exploits a flexible, high capacity DNN while\nbeing extremely efficient. Here, we give a description of different model\nchoices that we've found to be necessary for obtaining competitive performance.\nWe explore in particular different architectures based on Convolutional Neural\nNetworks (CNN), i.e. DNNs specifically adapted to image data.\n We present a novel CNN architecture which differs from those traditionally\nused in computer vision. Our CNN exploits both local features as well as more\nglobal contextual features simultaneously. Also, different from most\ntraditional uses of CNNs, our networks use a final layer that is a\nconvolutional implementation of a fully connected layer which allows a 40 fold\nspeed up. We also describe a 2-phase training procedure that allows us to\ntackle difficulties related to the imbalance of tumor labels. Finally, we\nexplore a cascade architecture in which the output of a basic CNN is treated as\nan additional source of information for a subsequent CNN. Results reported on\nthe 2013 BRATS test dataset reveal that our architecture improves over the\ncurrently published state-of-the-art while being over 30 times faster.", "field": [], "task": ["Brain Tumor Segmentation", "Medical Image Segmentation", "Tumor Segmentation"], "method": [], "dataset": ["BRATS-2013 leaderboard", "BRATS-2013"], "metric": ["Dice Score"], "title": "Brain Tumor Segmentation with Deep Neural Networks"} {"abstract": "This paper introduces a new definition of multiscale neighborhoods in 3D\npoint clouds. This definition, based on spherical neighborhoods and\nproportional subsampling, allows the computation of features with a consistent\ngeometrical meaning, which is not the case when using k-nearest neighbors. With\nan appropriate learning strategy, the proposed features can be used in a random\nforest to classify 3D points. In this semantic classification task, we show\nthat our multiscale features outperform state-of-the-art features using the\nsame experimental conditions. Furthermore, their classification power competes\nwith more elaborate classification approaches including Deep Learning methods.", "field": [], "task": ["Semantic Segmentation"], "method": [], "dataset": ["Semantic3D"], "metric": ["mIoU"], "title": "Semantic Classification of 3D Point Clouds with Multiscale Spherical Neighborhoods"} {"abstract": "This paper addresses the scalability and robustness issues of estimating labels from imbalanced unlabeled data for unsupervised video-based person re-identification (re-ID). To achieve it, we propose a novel Robust AnChor Embedding (RACE) framework via deep feature representation learning for large-scale unsupervised video re-ID. Within this framework, anchor sequences representing different persons are firstly selected to formulate an anchor graph which also initializes the CNN model to get discriminative feature representations for later label estimation. To accurately estimate labels from unlabeled sequences with noisy frames, robust anchor embedding is introduced based on the regularized affine hull. Efficiency is ensured with kNN anchors embedding instead of the whole anchor set under manifold assumptions. After that, a robust and efficient top-k counts label prediction strategy is proposed to predict the labels of unlabeled image sequences. 
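The multiscale spherical-neighborhood method above computes geometric features inside spheres of growing radius (with proportional subsampling) and classifies points with a random forest. The scipy sketch below only illustrates gathering such multiscale spherical neighborhoods around one query point; the radii are arbitrary assumptions, and neither the full feature set nor the subsampling step is reproduced.

    import numpy as np
    from scipy.spatial import cKDTree

    def multiscale_spherical_features(points, query, radii=(0.2, 0.4, 0.8)):
        """Toy per-point descriptor: neighbor count and covariance eigenvalues
        inside spheres of increasing radius around the query point."""
        tree = cKDTree(points)
        feats = []
        for r in radii:
            idx = tree.query_ball_point(query, r)
            neigh = points[idx]
            feats.append(float(len(idx)))
            if len(idx) >= 3:
                eigvals = np.linalg.eigvalsh(np.cov(neigh, rowvar=False))
                feats.extend(eigvals[::-1])          # descending eigenvalues
            else:
                feats.extend([0.0, 0.0, 0.0])
        return np.asarray(feats)

Features of this kind, computed per point and per scale, are what the random forest consumes in the approach above.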
With the newly estimated labeled sequences, the unified anchor embedding framework enables the feature learning process to be further facilitated. Extensive experimental results on the large-scale dataset show that the proposed method outperforms existing unsupervised video re-ID methods.", "field": [], "task": ["Person Re-Identification", "Representation Learning", "Video-Based Person Re-Identification"], "method": [], "dataset": ["PRID2011"], "metric": ["Rank-1", "Rank-20", "Rank-5"], "title": "Robust Anchor Embedding for Unsupervised Video Person Re-Identification in the Wild"} {"abstract": "In this paper, we study the problem of image-text matching. Inferring the\nlatent semantic alignment between objects or other salient stuff (e.g. snow,\nsky, lawn) and the corresponding words in sentences allows to capture\nfine-grained interplay between vision and language, and makes image-text\nmatching more interpretable. Prior work either simply aggregates the similarity\nof all possible pairs of regions and words without attending differentially to\nmore and less important words or regions, or uses a multi-step attentional\nprocess to capture limited number of semantic alignments which is less\ninterpretable. In this paper, we present Stacked Cross Attention to discover\nthe full latent alignments using both image regions and words in a sentence as\ncontext and infer image-text similarity. Our approach achieves the\nstate-of-the-art results on the MS-COCO and Flickr30K datasets. On Flickr30K,\nour approach outperforms the current best methods by 22.1% relatively in text\nretrieval from image query, and 18.2% relatively in image retrieval with text\nquery (based on Recall@1). On MS-COCO, our approach improves sentence retrieval\nby 17.8% relatively and image retrieval by 16.6% relatively (based on Recall@1\nusing the 5K test set). Code has been made available at:\nhttps://github.com/kuanghuei/SCAN.", "field": [], "task": ["Cross-Modal Retrieval", "Image Retrieval", "Text Matching"], "method": [], "dataset": ["Flickr30k", "COCO 2014", "Flickr30K 1K test"], "metric": ["Image-to-text R@5", "Image-to-text R@1", "R@10", "Image-to-text R@10", "Text-to-image R@10", "Text-to-image R@1", "R@5", "R@1", "Text-to-image R@5"], "title": "Stacked Cross Attention for Image-Text Matching"} {"abstract": "We study on weakly-supervised object detection (WSOD)which plays a vital role in relieving human involvement fromobject-level annotations. Predominant works integrate re-gion proposal mechanisms with convolutional neural net-works (CNN). Although CNN is proficient in extracting dis-criminative local features, grand challenges still exist tomeasure the likelihood of a bounding box containing a com-plete object (i.e., \u201cobjectness\u201d). In this paper, we pro-pose a novelWSODframework withObjectnessDistillation(i.e.,WSOD2) by designing a tailored training mechanismfor weakly-supervised object detection. Multiple regressiontargets are specifically determined by jointly consideringbottom-up (BU) and top-down (TD) objectness from low-level measurement and CNN confidences with an adaptivelinear combination. As bounding box regression can fa-cilitate a region proposal learning to approach its regres-sion target with high objectness during training, deep ob-jectness representation learned from bottom-up evidencescan be gradually distilled into CNN by optimization. 
We explore different adaptive training curves for BU/TD objectness, and show that the proposed WSOD2 can achieve state-of-the-art results.", "field": [], "task": ["Object Detection", "Region Proposal", "Regression", "Weakly Supervised Object Detection"], "method": [], "dataset": ["PASCAL VOC 2007", "PASCAL VOC 2012 test"], "metric": ["MAP"], "title": "WSOD2: Learning Bottom-up and Top-down Objectness Distillation for Weakly-supervised Object Detection"} {"abstract": "This paper proposes a way of improving classification performance for classes which have very few training examples. The key idea is to discover classes which are similar and transfer knowledge among them. Our method organizes the classes into a tree hierarchy. The tree structure can be used to impose a generative prior over classification parameters. We show that these priors can be combined with discriminative models such as deep neural networks. Our method benefits from the power of discriminative training of deep neural networks, at the same time using tree-based generative priors over classification parameters. We also propose an algorithm for learning the underlying tree structure. This gives the model some flexibility to tune the tree so that the tree is pertinent to the task being solved. We show that the model can transfer knowledge across related classes using fixed semantic trees. Moreover, it can learn new meaningful trees usually leading to improved performance. Our method achieves state-of-the-art classification results on the CIFAR-100 image data set and the MIR Flickr multimodal data set.", "field": [], "task": ["Image Classification", "Transfer Learning"], "method": [], "dataset": ["CIFAR-100"], "metric": ["Percentage correct"], "title": "Discriminative Transfer Learning with Tree-based Priors"} {"abstract": "We present GraPPa, an effective pre-training approach for table semantic parsing that learns a compositional inductive bias in the joint representations of textual and tabular data. We construct synthetic question-SQL pairs over high-quality tables via a synchronous context-free grammar (SCFG) induced from existing text-to-SQL datasets. We pre-train our model on the synthetic data using a novel text-schema linking objective that predicts the syntactic role of a table field in the SQL for each question-SQL pair. To maintain the model's ability to represent real-world data, we also include masked language modeling (MLM) over several existing table-and-language datasets to regularize the pre-training process. On four popular fully supervised and weakly supervised table semantic parsing benchmarks, GraPPa significantly outperforms RoBERTa-large as the feature representation layers and establishes new state-of-the-art results on all of them.", "field": [], "task": ["Language Modelling", "Semantic Parsing", "Text-To-Sql"], "method": [], "dataset": ["spider"], "metric": ["Accuracy"], "title": "GraPPa: Grammar-Augmented Pre-Training for Table Semantic Parsing"} {"abstract": "Object localization is an important computer vision problem with a variety of\napplications. The lack of large scale object-level annotations and the relative\nabundance of image-level labels makes a compelling case for weak supervision in\nthe object localization task. Deep Convolutional Neural Networks are a class of\nstate-of-the-art methods for the related problem of object recognition. In this\npaper, we describe a novel object localization algorithm which uses\nclassification networks trained on only image labels.
This weakly supervised\nmethod leverages local spatial and semantic patterns captured in the\nconvolutional layers of classification networks. We propose an efficient beam\nsearch based approach to detect and localize multiple objects in images. The\nproposed method significantly outperforms the state-of-the-art in standard\nobject localization data-sets with an 8-point increase in mAP scores.", "field": [], "task": ["Object Localization", "Object Recognition", "Weakly Supervised Object Detection"], "method": [], "dataset": ["COCO"], "metric": ["MAP"], "title": "Weakly Supervised Localization using Deep Feature Maps"} {"abstract": "In this paper, we present the first experiments using neural network models\nfor the task of error detection in learner writing. We perform a systematic\ncomparison of alternative compositional architectures and propose a framework\nfor error detection based on bidirectional LSTMs. Experiments on the CoNLL-14\nshared task dataset show the model is able to outperform other participants on\ndetecting errors in learner writing. Finally, the model is integrated with a\npublicly deployed self-assessment system, leading to performance comparable to\nhuman annotators.", "field": [], "task": ["Grammatical Error Detection"], "method": [], "dataset": ["CoNLL-2014 A2", "FCE", "CoNLL-2014 A1"], "metric": ["F0.5"], "title": "Compositional Sequence Labeling Models for Error Detection in Learner Writing"} {"abstract": "This paper aims to classify and locate objects accurately and efficiently,\nwithout using bounding box annotations. It is challenging as objects in the\nwild could appear at arbitrary locations and in different scales. In this\npaper, we propose a novel classification architecture ProNet based on\nconvolutional neural networks. It uses computationally efficient neural\nnetworks to propose image regions that are likely to contain objects, and\napplies more powerful but slower networks on the proposed regions. The basic\nbuilding block is a multi-scale fully-convolutional network which assigns\nobject confidence scores to boxes at different locations and scales. We show\nthat such networks can be trained effectively using image-level annotations,\nand can be connected into cascades or trees for efficient object\nclassification. ProNet outperforms previous state-of-the-art significantly on\nPASCAL VOC 2012 and MS COCO datasets for object classification and point-based\nlocalization.", "field": [], "task": ["Object Classification", "Weakly Supervised Object Detection"], "method": [], "dataset": ["COCO"], "metric": ["MAP"], "title": "ProNet: Learning to Propose Object-specific Boxes for Cascaded Neural Networks"} {"abstract": "Camouflaged objects are generally difficult to be detected in their natural environment even for human beings. In this paper, we propose a novel bio-inspired network, named the MirrorNet, that leverages both instance segmentation and mirror stream for the camouflaged object segmentation. Differently from existing networks for segmentation, our proposed network possesses two segmentation streams: the main stream and the mirror stream corresponding with the original image and its flipped image, respectively. The output from the mirror stream is then fused into the main stream's result for the final camouflage map to boost up the segmentation accuracy. Extensive experiments conducted on the public CAMO dataset demonstrate the effectiveness of our proposed network. Our proposed method achieves 89% in accuracy, outperforming the state-of-the-arts.
Project Page: https://sites.google.com/view/ltnghia/research/camo", "field": [], "task": ["Camouflaged Object Segmentation", "Camouflage Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["CAMO"], "metric": ["S-Measure", "Weighted F-Measure", "MAE", "F-Measure", "E-Measure"], "title": "MirrorNet: Bio-Inspired Camouflaged Object Segmentation"} {"abstract": "This paper investigates whether learning contingency-awareness and\ncontrollable aspects of an environment can lead to better exploration in\nreinforcement learning. To investigate this question, we consider an\ninstantiation of this hypothesis evaluated on the Arcade Learning Environment\n(ALE). In this study, we develop an attentive dynamics model (ADM) that\ndiscovers controllable elements of the observations, which are often associated\nwith the location of the character in Atari games. The ADM is trained in a\nself-supervised fashion to predict the actions taken by the agent. The learned\ncontingency information is used as a part of the state representation for\nexploration purposes. We demonstrate that combining actor-critic algorithm with\ncount-based exploration using our representation achieves impressive results on\na set of notoriously challenging Atari games due to sparse rewards. For\nexample, we report a state-of-the-art score of >11,000 points on Montezuma's\nRevenge without using expert demonstrations, explicit high-level information\n(e.g., RAM states), or supervisory data. Our experiments confirm that\ncontingency-awareness is indeed an extremely powerful concept for tackling\nexploration problems in reinforcement learning and opens up interesting\nresearch questions for further investigations.", "field": [], "task": ["Atari Games", "Montezuma's Revenge"], "method": [], "dataset": ["Atari 2600 Montezuma's Revenge"], "metric": ["Score"], "title": "Contingency-Aware Exploration in Reinforcement Learning"} {"abstract": "This paper describes our system used for the end-to-end (E2E) natural language generation (NLG) challenge. The challenge collects a novel dataset for spoken dialogue system in the restaurant domain, which shows more lexical richness and syntactic variation and requires content selection (Novikova et al., 2017). To solve this challenge, we employ the CAEncoder-enhanced sequence-to-sequence learning model (Zhang et al., 2017) and propose an attention regularizer to spread attention weights across input words as well as control the overfitting problem. Without any specific designation, our system yields very promising performance. Particularly, our system achieves a ROUGE-L score of 0.7083, the best result among all submitted primary systems.", "field": [], "task": ["Data-to-Text Generation", "Text Generation"], "method": [], "dataset": ["E2E NLG Challenge"], "metric": ["NIST", "METEOR", "CIDEr", "ROUGE-L", "BLEU"], "title": "Attention Regularized Sequence-to-Sequence Learning for E2E NLG Challenge"} {"abstract": "In a typical face recognition pipeline, the task of the face detector is to localize the face region. However, the face detector localizes regions that look like a face, irrespective of the liveliness of the face, which makes the entire system susceptible to presentation attacks. In this work, we try to reformulate the task of the face detector to detect real faces, thus eliminating the threat of presentation attacks.
While this task could be challenging with visible spectrum images alone, we leverage the multi-channel information available from off the shelf devices (such as color, depth, and infrared channels) to design a multi-channel face detector. The proposed system can be used as a live-face detector obviating the need for a separate presentation attack detection module, making the system reliable in practice without any additional computational overhead. The main idea is to leverage a single-stage object detection framework, with a joint representation obtained from different channels for the PAD task. We have evaluated our approach in the multi-channel WMCA dataset containing a wide variety of attacks to show the effectiveness of the proposed framework.", "field": [], "task": ["Face Presentation Attack Detection", "Face Recognition", "Object Detection"], "method": [], "dataset": ["WMCA"], "metric": ["ACER@0.2BPCER"], "title": "Can Your Face Detector Do Anti-spoofing? Face Presentation Attack Detection with a Multi-Channel Face Detector"} {"abstract": "We focus on spectral clustering of unlabeled graphs and review some results\non clustering methods which achieve weak or strong consistent identification in\ndata generated by such models. We also present a new algorithm which appears to\nperform optimally both theoretically using asymptotic theory and empirically.", "field": [], "task": [], "method": [], "dataset": ["CIFAR-10"], "metric": ["Accuracy", "NMI", "Train set", "ARI"], "title": "Spectral Clustering and Block Models: A Review And A New Algorithm"} {"abstract": "Text in natural images is of arbitrary orientations, requiring detection in\nterms of oriented bounding boxes. Normally, a multi-oriented text detector\noften involves two key tasks: 1) text presence detection, which is a\nclassification problem disregarding text orientation; 2) oriented bounding box\nregression, which concerns about text orientation. Previous methods rely on\nshared features for both tasks, resulting in degraded performance due to the\nincompatibility of the two tasks. To address this issue, we propose to perform\nclassification and regression on features of different characteristics,\nextracted by two network branches of different designs. Concretely, the\nregression branch extracts rotation-sensitive features by actively rotating the\nconvolutional filters, while the classification branch extracts\nrotation-invariant features by pooling the rotation-sensitive features. The\nproposed method named Rotation-sensitive Regression Detector (RRD) achieves\nstate-of-the-art performance on three oriented scene text benchmark datasets,\nincluding ICDAR 2015, MSRA-TD500, RCTW-17 and COCO-Text. Furthermore, RRD\nachieves a significant improvement on a ship collection dataset, demonstrating\nits generality on oriented object detection.", "field": [], "task": ["Object Detection", "Regression", "Scene Text", "Scene Text Detection"], "method": [], "dataset": ["MSRA-TD500"], "metric": ["Precision", "Recall", "H-Mean"], "title": "Rotation-Sensitive Regression for Oriented Scene Text Detection"} {"abstract": "In this paper, we propose a pixel-wise method named TextCohesion for scene\ntext detection, which splits a text instance into five key components: a Text\nSkeleton and four Directional Pixel Regions. These components are easier to\nhandle than the entire text instance. A confidence scoring mechanism is\ndesigned to filter characters that are similar to text. 
Our method can\nintegrate text contexts intensively when backgrounds are complex. Experiments\non two curved challenging benchmarks demonstrate that TextCohesion outperforms\nstate-of-the-art methods, achieving the F-measure of 84.6% on Total-Text and\n86.3% on SCUT-CTW1500.", "field": [], "task": ["Curved Text Detection", "Scene Text", "Scene Text Detection"], "method": [], "dataset": ["SCUT-CTW1500", "Total-Text"], "metric": ["F-Measure"], "title": "TextCohesion: Detecting Text for Arbitrary Shapes"} {"abstract": "Most action recognition methods are based on a) a late aggregation of frame level CNN features using average pooling, max pooling, or RNN, among others, or b) spatio-temporal aggregation via 3D convolutions. The first assume independence among frame features up to a certain level of abstraction and then perform higher-level aggregation, while the second extracts spatio-temporal features from grouped frames as early fusion. In this paper we explore the space in between these two, by letting adjacent feature branches interact as they develop into the higher level representation. The interaction happens between feature differencing and averaging at each level of the hierarchy, and it has convolutional structure that learns to select the appropriate mode locally in contrast to previous works that impose one of the modes globally (e.g. feature differencing) as a design choice. We further constrain this interaction to be conservative, e.g. a local feature subtraction in one branch is compensated by the addition on another, such that the total feature flow is preserved. We evaluate the performance of our proposal on a number of existing models, i.e. TSN, TRN and ECO, to show its flexibility and effectiveness in improving action recognition performance.", "field": [], "task": ["Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["HMDB-51", "Something-Something V1"], "metric": ["Average accuracy of 3 splits", "Top 1 Accuracy"], "title": "Hierarchical Feature Aggregation Networks for Video Action Recognition"} {"abstract": "In this paper, we propose a deep learning approach to tackle the automatic summarization tasks by incorporating topic information into the convolutional sequence-to-sequence (ConvS2S) model and using self-critical sequence training (SCST) for optimization. Through jointly attending to topics and word-level alignment, our approach can improve coherence, diversity, and informativeness of generated summaries via a biased probability generation mechanism. On the other hand, reinforcement training, like SCST, directly optimizes the proposed model with respect to the non-differentiable metric ROUGE, which also avoids the exposure bias during inference. We carry out the experimental evaluation with state-of-the-art methods over the Gigaword, DUC-2004, and LCSTS datasets. The empirical results demonstrate the superiority of our proposed method in the abstractive summarization.", "field": [], "task": ["Abstractive Text Summarization", "Text Summarization"], "method": [], "dataset": ["GigaWord", "DUC 2004 Task 1"], "metric": ["ROUGE-L", "ROUGE-1", "ROUGE-2"], "title": "A Reinforced Topic-Aware Convolutional Sequence-to-Sequence Model for Abstractive Text Summarization"} {"abstract": "We propose a novel Bayesian nonparametric method to learn translation-invariant relationships on non-Euclidean domains.
The resulting graph convolutional Gaussian processes can be applied to problems in machine learning for which the input observations are functions with domains on general graphs. The structure of these models allows for high dimensional inputs while retaining expressibility, as is the case with convolutional neural networks. We present applications of graph convolutional Gaussian processes to images and triangular meshes, demonstrating their versatility and effectiveness, comparing favorably to existing methods, despite being relatively simple models.", "field": [], "task": ["Gaussian Processes", "Superpixel Image Classification"], "method": [], "dataset": ["75 Superpixel MNIST"], "metric": ["Classification Error"], "title": "Graph Convolutional Gaussian Processes"} {"abstract": "We aim to improve the performance of Multiple Object Tracking and Segmentation (MOTS) by refinement. However, it remains challenging for refining MOTS results, which could be attributed to that appearance features are not adapted to target videos and it is also difficult to find proper thresholds to discriminate them. To tackle this issue, we propose a self-supervised refining MOTS (i.e., ReMOTS) framework. ReMOTS mainly takes four steps to refine MOTS results from the data association perspective. (1) Training the appearance encoder using predicted masks. (2) Associating observations across adjacent frames to form short-term tracklets. (3) Training the appearance encoder using short-term tracklets as reliable pseudo labels. (4) Merging short-term tracklets to long-term tracklets utilizing adopted appearance features and thresholds that are automatically obtained from statistical information. Using ReMOTS, we reached the $1^{st}$ place on CVPR 2020 MOTS Challenge 1, with an sMOTSA score of $69.9$.", "field": [], "task": ["Multi-Object Tracking", "Multiple Object Tracking", "Object Tracking"], "method": [], "dataset": ["MOTS20"], "metric": ["sMOTSA"], "title": "ReMOTS: Self-Supervised Refining Multi-Object Tracking and Segmentation"} {"abstract": "We propose a structured generative latent variable model that integrates information from multiple contextual representations for Word Sense Induction. Our approach jointly models global lexical, local lexical and dependency syntactic context. Each context type is associated with a latent variable and the three types of variables share a hierarchical structure. We use skip-gram based word and dependency context embeddings to construct all three types of representations, reducing the total number of parameters to be estimated and enabling better generalization. We describe an EM algorithm to efficiently estimate model parameters and use the Integrated Complete Likelihood criterion to automatically estimate the number of senses. Our model achieves state-of-the-art results on the SemEval-2010 and SemEval-2013 Word Sense Induction datasets.", "field": [], "task": ["Hierarchical structure", "Word Embeddings", "Word Sense Disambiguation", "Word Sense Induction"], "method": [], "dataset": ["SemEval 2013"], "metric": ["F_NMI", "F-BC", "AVG"], "title": "Structured Generative Models of Continuous Features for Word Sense Induction"} {"abstract": "In the Story Cloze Test, a system is presented with a 4-sentence prompt to a\nstory, and must determine which one of two potential endings is the 'right'\nending to the story. 
Previous work has shown that ignoring the training set and\ntraining a model on the validation set can achieve high accuracy on this task\ndue to stylistic differences between the story endings in the training set and\nvalidation and test sets. Following this approach, we present a simpler\nfully-neural approach to the Story Cloze Test using skip-thought embeddings of\nthe stories in a feed-forward network that achieves close to state-of-the-art\nperformance on this task without any feature engineering. We also find that\nconsidering just the last sentence of the prompt instead of the whole prompt\nyields higher accuracy with our approach.", "field": [], "task": ["Feature Engineering"], "method": [], "dataset": ["Story Cloze Test"], "metric": ["Accuracy"], "title": "A Simple and Effective Approach to the Story Cloze Test"} {"abstract": "Sequential dynamics are a key feature of many modern recommender systems,\nwhich seek to capture the `context' of users' activities on the basis of\nactions they have performed recently. To capture such patterns, two approaches\nhave proliferated: Markov Chains (MCs) and Recurrent Neural Networks (RNNs).\nMarkov Chains assume that a user's next action can be predicted on the basis of\njust their last (or last few) actions, while RNNs in principle allow for\nlonger-term semantics to be uncovered. Generally speaking, MC-based methods\nperform best in extremely sparse datasets, where model parsimony is critical,\nwhile RNNs perform better in denser datasets where higher model complexity is\naffordable. The goal of our work is to balance these two goals, by proposing a\nself-attention based sequential model (SASRec) that allows us to capture\nlong-term semantics (like an RNN), but, using an attention mechanism, makes its\npredictions based on relatively few actions (like an MC). At each time step,\nSASRec seeks to identify which items are `relevant' from a user's action\nhistory, and use them to predict the next item. Extensive empirical studies\nshow that our method outperforms various state-of-the-art sequential models\n(including MC/CNN/RNN-based approaches) on both sparse and dense datasets.\nMoreover, the model is an order of magnitude more efficient than comparable\nCNN/RNN-based models. Visualizations on attention weights also show how our\nmodel adaptively handles datasets with various density, and uncovers meaningful\npatterns in activity sequences.", "field": [], "task": ["Recommendation Systems"], "method": [], "dataset": ["Amazon Games", "MovieLens 1M", "Steam", "Amazon Beauty"], "metric": ["nDCG@10", "HR@10", "Hit@10"], "title": "Self-Attentive Sequential Recommendation"} {"abstract": "Head pose estimation aims at predicting an accurate pose from an image. Current approaches rely on supervised deep learning, which typically requires large amounts of labeled data. Manual or sensor-based annotations of head poses are prone to errors. A solution is to generate synthetic training data by rendering 3D face models. However, the differences (domain gap) between rendered (source-domain) and real-world (target-domain) images can cause low performance. Advances in visual domain adaptation allow reducing the influence of domain differences using adversarial neural networks, which match the feature spaces between domains by enforcing domain-invariant features. While previous work on visual domain adaptation generally assumes discrete and shared label spaces, these assumptions are both invalid for pose estimation tasks. 
We are the first to present domain adaptation for head pose estimation with a focus on partially shared and continuous label spaces. More precisely, we adapt the predominant weighting approaches to continuous label spaces by applying a weighted resampling of the source domain during training. To evaluate our approach, we revise and extend existing datasets resulting in a new benchmark for visual domain adaption. Our experiments show that our method improves the accuracy of head pose estimation for real-world images despite using only labels from synthetic images.\r", "field": [], "task": ["Domain Adaptation", "Head Pose Estimation", "Pose Estimation"], "method": [], "dataset": ["BIWI"], "metric": ["MAE (trained with BIWI data)"], "title": "Deep Head Pose Estimation Using Synthetic Images and Partial Adversarial Domain Adaption for Continuous Label Spaces"} {"abstract": "Events in natural videos typically arise from spatio-temporal interactions between actors and objects and involve multiple co-occurring activities and object classes. To capture this rich visual and semantic context, we propose using two graphs: (1) an attributed spatio-temporal visual graph whose nodes correspond to actors and objects and whose edges encode different types of interactions, and (2) a symbolic graph that models semantic relationships. We further propose a graph neural network for refining the representations of actors, objects and their interactions on the resulting hybrid graph. Our model goes beyond current approaches that assume nodes and edges are of the same type, operate on graphs with fixed edge weights and do not use a symbolic graph. In particular, our framework: a) has specialized attention-based message functions for different node and edge types; b) uses visual edge features; c) integrates visual evidence with label relationships; and d) performs global reasoning in the semantic space. Experiments on challenging video understanding tasks, such as temporal action localization on the Charades dataset, show that the proposed method leads to state-of-the-art performance.", "field": [], "task": ["Action Classification", "Action Classification ", "Action Detection", "Action Localization", "Action Segmentation", "Representation Learning", "Temporal Action Localization", "Video Understanding"], "method": [], "dataset": ["Charades"], "metric": ["mAP"], "title": "Representation Learning on Visual-Symbolic Graphs for Video Understanding"} {"abstract": "Sign Language Recognition (SLR) has been an active research field for the last two decades. However, most research to date has considered SLR as a naive gesture recognition problem. SLR seeks to recognize a sequence of continuous signs but neglects the underlying rich grammatical and linguistic structures of sign language that differ from spoken language. In contrast, we introduce the Sign Language Translation (SLT) problem. Here, the objective is to generate spoken language translations from sign language videos, taking into account the different word orders and grammar. We formalize SLT in the framework of Neural Machine Translation (NMT) for both end-to-end and pretrained settings (using expert knowledge). This allows us to jointly learn the spatial representations, the underlying language model, and the mapping between sign and spoken language. To evaluate the performance of Neural SLT, we collected the first publicly available Continuous SLT dataset, RWTH-PHOENIX-Weather 2014T. 
It provides spoken language translations and gloss level annotations for German Sign Language videos of weather broadcasts. Our dataset contains over .95M frames with >67K signs from a sign vocabulary of >1K and >99K words from a German vocabulary of >2.8K. We report quantitative and qualitative results for various SLT setups to underpin future research in this newly established field. The upper bound for translation performance is calculated at 19.26 BLEU-4, while our end-to-end frame-level and gloss-level tokenization networks were able to achieve 9.58 and 18.13 respectively.", "field": [], "task": ["Gesture Recognition", "Language Modelling", "Machine Translation", "Sign Language Recognition", "Sign Language Translation", "Tokenization"], "method": [], "dataset": ["RWTH-PHOENIX-Weather 2014 T"], "metric": ["BLEU-4"], "title": "Neural Sign Language Translation"} {"abstract": "Deep neural networks can model images with rich latent representations, but\nthey cannot naturally conceptualize structures of object categories in a\nhuman-perceptible way. This paper addresses the problem of learning object\nstructures in an image modeling process without supervision. We propose an\nautoencoding formulation to discover landmarks as explicit structural\nrepresentations. The encoding module outputs landmark coordinates, whose\nvalidity is ensured by constraints that reflect the necessary properties for\nlandmarks. The decoding module takes the landmarks as a part of the learnable\ninput representations in an end-to-end differentiable framework. Our discovered\nlandmarks are semantically meaningful and more predictive of manually annotated\nlandmarks than those discovered by previous methods. The coordinates of our\nlandmarks are also complementary features to pretrained deep-neural-network\nrepresentations in recognizing visual attributes. In addition, the proposed\nmethod naturally creates an unsupervised, perceptible interface to manipulate\nobject shapes and decode images with controllable structures. The project\nwebpage is at http://ytzhang.net/projects/lmdis-rep", "field": [], "task": ["Unsupervised Facial Landmark Detection"], "method": [], "dataset": ["AFLW (Zhang CVPR 2018 crops)", "MAFL"], "metric": ["NME"], "title": "Unsupervised Discovery of Object Landmarks as Structural Representations"} {"abstract": "This paper summarizes our method and validation results for the ISBI\nChallenge 2017 - Skin Lesion Analysis Towards Melanoma Detection - Part I:\nLesion Segmentation", "field": [], "task": ["Lesion Segmentation"], "method": [], "dataset": ["ISIC 2017"], "metric": ["Mean IoU"], "title": "Automatic skin lesion segmentation with fully convolutional-deconvolutional networks"} {"abstract": "In this work we present a framework for the recognition of natural scene\ntext. Our framework does not require any human-labelled data, and performs word\nrecognition on the whole image holistically, departing from the character based\nrecognition systems of the past. The deep neural network models at the centre\nof this framework are trained solely on data produced by a synthetic text\ngeneration engine -- synthetic data that is highly realistic and sufficient to\nreplace real data, giving us infinite amounts of training data. 
This excess of\ndata exposes new possibilities for word recognition models, and here we\nconsider three models, each one \"reading\" words in a different way: via 90k-way\ndictionary encoding, character sequence encoding, and bag-of-N-grams encoding.\nIn the scenarios of language based and completely unconstrained text\nrecognition we greatly improve upon state-of-the-art performance on standard\ndatasets, using our fast, simple machinery and requiring zero data-acquisition\ncosts.", "field": [], "task": ["Scene Text", "Scene Text Recognition", "Text Generation"], "method": [], "dataset": ["ICDAR2013", "SVT"], "metric": ["Accuracy"], "title": "Synthetic Data and Artificial Neural Networks for Natural Scene Text Recognition"} {"abstract": "One crucial aspect of partial domain adaptation (PDA) is how to select the relevant source samples in the shared classes for knowledge transfer. Previous PDA methods tackle this problem by re-weighting the source samples based on their high-level information (deep features). However, due to the domain shift between source and target domains, only using the deep features for sample selection is defective. We argue that it is more reasonable to additionally exploit the pixel-level information for PDA problem, as the appearance difference between outlier source classes and target classes is significantly large. In this paper, we propose a reinforced transfer network (RTNet), which utilizes both high-level and pixel-level information for PDA problem. Our RTNet is composed of a reinforced data selector (RDS) based on reinforcement learning (RL), which filters out the outlier source samples, and a domain adaptation model which minimizes the domain discrepancy in the shared label space. Specifically, in the RDS, we design a novel reward based on the reconstruction errors of selected source samples on the target generator, which introduces the pixel-level information to guide the learning of RDS. Besides, we develop a state containing high-level information, which is used by the RDS for sample selection. The proposed RDS is a general module, which can be easily integrated into existing DA models to make them fit the PDA situation. Extensive experiments indicate that RTNet can achieve state-of-the-art performance for PDA tasks on several benchmark datasets.", "field": [], "task": ["Domain Adaptation", "Partial Domain Adaptation", "Transfer Learning"], "method": [], "dataset": ["Office-31", "Office-Home"], "metric": ["Accuracy (%)"], "title": "Selective Transfer with Reinforced Transfer Network for Partial Domain Adaptation"} {"abstract": "Modeling and prediction of human motion dynamics has long been a challenging problem in computer vision, and most existing methods rely on the end-to-end supervised training of various architectures of recurrent neural networks. Inspired by the recent success of deep reinforcement learning methods, in this paper we propose a new reinforcement learning formulation for the problem of human pose prediction, and develop an imitation learning algorithm for predicting future poses under this formulation through a combination of behavioral cloning and generative adversarial imitation learning.
Our experiments show that our proposed method outperforms all existing state-of-the-art baseline models by large margins on the task of human pose prediction in both short-term predictions and long-term predictions, while also enjoying huge advantage in training speed.", "field": [], "task": ["Human Pose Forecasting", "Imitation Learning", "Pose Prediction"], "method": [], "dataset": ["Human3.6M"], "metric": ["MAR, walking, 400ms", "MAR, walking, 1,000ms"], "title": "Imitation Learning for Human Pose Prediction"} {"abstract": "We describe an effective and efficient method for point-wise semantic classification of 3D point clouds. The method can handle unstructured and inhomogeneous point clouds such as those derived from static terrestrial LiDAR or photogrammetric reconstruction; and it is computationally efficient, making it possible to process point clouds with many millions of points in a matter of minutes. The key issue, both to cope with strong variations in point density and to bring down computation time, turns out to be careful handling of neighborhood relations. By choosing appropriate definitions of a point\u2019s (multi-scale) neighborhood, we obtain a feature set that is both expressive and fast to compute. We evaluate our classification method both on benchmark data from a mobile mapping platform and on a variety of large, terrestrial laser scans with greatly varying point density. The proposed feature set outperforms the state of the art with respect to per-point classification accuracy, while at the same time being much faster to compute.", "field": [], "task": ["Semantic Segmentation"], "method": [], "dataset": ["Semantic3D"], "metric": ["mIoU"], "title": "Fast semantic segmentation of 3d point clouds with strongly varying density"} {"abstract": "In order to track all persons in a scene, the tracking-by-detection paradigm\nhas proven to be a very effective approach. Yet, relying solely on a single\ndetector is also a major limitation, as useful image information might be\nignored. Consequently, this work demonstrates how to fuse two detectors into a\ntracking system. To obtain the trajectories, we propose to formulate tracking\nas a weighted graph labeling problem, resulting in a binary quadratic program.\nAs such problems are NP-hard, the solution can only be approximated. Based on\nthe Frank-Wolfe algorithm, we present a new solver that is crucial to handle\nsuch difficult problems. Evaluation on pedestrian tracking is provided for\nmultiple scenarios, showing superior results over single detector tracking and\nstandard QP-solvers. Finally, our tracker ranks 2nd on the MOT16 benchmark and\n1st on the new MOT17 benchmark, outperforming over 90 trackers.", "field": [], "task": ["Multi-Object Tracking", "Object Tracking"], "method": [], "dataset": ["MOT16", "MOT17"], "metric": ["MOTA"], "title": "Fusion of Head and Full-Body Detectors for Multi-Object Tracking"} {"abstract": "Estimating depth from a single RGB image is an ill-posed and inherently\nambiguous problem. State-of-the-art deep learning methods can now estimate\naccurate 2D depth maps, but when the maps are projected into 3D, they lack\nlocal detail and are often highly distorted. We propose a fast-to-train\ntwo-streamed CNN that predicts depth and depth gradients, which are then fused\ntogether into an accurate and detailed depth map. 
We also define a novel set\nloss over multiple images; by regularizing the estimation between a common set\nof images, the network is less prone to over-fitting and achieves better\naccuracy than competing methods. Experiments on the NYU Depth v2 dataset show\nthat our depth predictions are competitive with state-of-the-art and lead to\nfaithful 3D projections.", "field": [], "task": [], "method": [], "dataset": ["NYU-Depth V2"], "metric": ["RMSE"], "title": "A Two-Streamed Network for Estimating Fine-Scaled Depth Maps from Single RGB Images"} {"abstract": "Persistence diagrams (PDs) play a key role in topological data analysis\n(TDA), in which they are routinely used to describe topological properties of\ncomplicated shapes. PDs enjoy strong stability properties and have proven their\nutility in various learning contexts. They do not, however, live in a space\nnaturally endowed with a Hilbert structure and are usually compared with\nspecific distances, such as the bottleneck distance. To incorporate PDs in a\nlearning pipeline, several kernels have been proposed for PDs with a strong\nemphasis on the stability of the RKHS distance w.r.t. perturbations of the PDs.\nIn this article, we use the Sliced Wasserstein approximation SW of the\nWasserstein distance to define a new kernel for PDs, which is not only provably\nstable but also provably discriminative (depending on the number of points in\nthe PDs) w.r.t. the Wasserstein distance $d_1$ between PDs. We also demonstrate\nits practicality, by developing an approximation technique to reduce kernel\ncomputation time, and show that our proposal compares favorably to existing\nkernels for PDs on several benchmarks.", "field": [], "task": ["Graph Classification", "Topological Data Analysis"], "method": [], "dataset": ["NEURON-BINARY", "NEURON-MULTI", "NEURON-Average"], "metric": ["Accuracy"], "title": "Sliced Wasserstein Kernel for Persistence Diagrams"} {"abstract": "Defocus blur detection (DBD) is the separation of in-focus and out-of-focus regions in an image. This process has been paid considerable attention because of its remarkable potential applications. Accurate differentiation of homogeneous regions and detection of low-contrast focal regions, as well as suppression of background clutter, are challenges associated with DBD. To address these issues, we propose a multi-stream bottom-top-bottom fully convolutional network (BTBNet), which is the first attempt to develop an end-to-end deep network for DBD. First, we develop a fully convolutional BTBNet to integrate low-level cues and high-level semantic information. Then, considering that the degree of defocus blur is sensitive to scales, we propose multi-stream BTBNets that handle input images with different scales to improve the performance of DBD. Finally, we design a fusion and recursive reconstruction network to recursively refine the preceding blur detection maps. To promote further study and evaluation of the DBD models, we construct a new database of 500 challenging images and their pixel-wise defocus blur annotations.
Experimental results on the existing and our new datasets demonstrate that the proposed method achieves significantly better performance than other state-of-the-art algorithms.", "field": [], "task": ["Defocus Estimation"], "method": [], "dataset": ["CUHK - Blur Detection Dataset"], "metric": ["MAE", "F-measure"], "title": "Defocus Blur Detection via Multi-Stream Bottom-Top-Bottom Fully Convolutional Network"} {"abstract": "Continuous monitoring of cardiac health under free living condition is\ncrucial to provide effective care for patients undergoing post operative\nrecovery and individuals with high cardiac risk like the elderly. Capacitive\nElectrocardiogram (cECG) is one such technology which allows comfortable and\nlong term monitoring through its ability to measure biopotential in conditions\nwithout having skin contact. cECG monitoring can be done using many household\nobjects like chairs, beds and even car seats allowing for seamless monitoring\nof individuals. This method is unfortunately highly susceptible to motion\nartifacts which greatly limits its usage in clinical practice. The current use\nof cECG systems has been limited to performing rhythmic analysis. In this paper\nwe propose a novel end-to-end deep learning architecture to perform the task of\ndenoising capacitive ECG. The proposed network is trained using motion\ncorrupted three channel cECG and a reference LEAD I ECG collected on\nindividuals while driving a car. Further, we also propose a novel joint loss\nfunction to apply loss on both signal and frequency domain. We conduct\nextensive rhythmic analysis on the model predictions and the ground truth. We\nfurther evaluate the signal denoising using Mean Square Error (MSE) and Cross\nCorrelation between model predictions and ground truth. We report MSE of 0.167\nand Cross Correlation of 0.476. The reported results highlight the feasibility\nof performing morphological analysis using the filtered cECG. The proposed\napproach can allow for continuous and comprehensive monitoring of the\nindividuals in free living conditions.", "field": [], "task": ["Denoising", "ECG Denoising", "Electrocardiography (ECG)", "Morphological Analysis"], "method": [], "dataset": ["UnoViS_auto2012"], "metric": ["MSE"], "title": "Deep Network for Capacitive ECG Denoising"} {"abstract": "Sketch-based image retrieval (SBIR) is challenging due to the inherent\ndomain-gap between sketch and photo. Compared with pixel-perfect depictions of\nphotos, sketches are iconic, highly abstract renderings of the real world.\nTherefore, matching sketch and photo directly using low-level visual clues is\ninsufficient, since a common low-level subspace that traverses semantically\nacross the two modalities is non-trivial to establish. Most existing SBIR\nstudies do not directly tackle this cross-modal problem. This naturally\nmotivates us to explore the effectiveness of cross-modal retrieval methods in\nSBIR, which have been applied in the image-text matching successfully. In this\npaper, we introduce and compare a series of state-of-the-art cross-modal\nsubspace learning methods and benchmark them on two recently released\nfine-grained SBIR datasets. Through thorough examination of the experimental\nresults, we have demonstrated that the subspace learning can effectively model\nthe sketch-photo domain-gap.
In addition we draw a few key insights to drive\nfuture research.", "field": [], "task": ["Cross-Modal Retrieval", "Image Retrieval", "Sketch-Based Image Retrieval", "Text Matching"], "method": [], "dataset": ["Chairs"], "metric": ["R@10", "R@1"], "title": "Cross-modal Subspace Learning for Fine-grained Sketch-based Image Retrieval"} {"abstract": "This paper proposes a method for learning joint embeddings of images and text\nusing a two-branch neural network with multiple layers of linear projections\nfollowed by nonlinearities. The network is trained using a large margin\nobjective that combines cross-view ranking constraints with within-view\nneighborhood structure preservation constraints inspired by metric learning\nliterature. Extensive experiments show that our approach gains significant\nimprovements in accuracy for image-to-text and text-to-image retrieval. Our\nmethod achieves new state-of-the-art results on the Flickr30K and MSCOCO\nimage-sentence datasets and shows promise on the new task of phrase\nlocalization on the Flickr30K Entities dataset.", "field": [], "task": ["Image Retrieval", "Metric Learning", "Text-to-Image Retrieval"], "method": [], "dataset": ["Flickr30K 1K test"], "metric": ["R@10", "R@1", "R@5"], "title": "Learning Deep Structure-Preserving Image-Text Embeddings"} {"abstract": "Semantic labelling and instance segmentation are two tasks that require\nparticularly costly annotations. Starting from weak supervision in the form of\nbounding box detection annotations, we propose a new approach that does not\nrequire modification of the segmentation training procedure. We show that when\ncarefully designing the input labels from given bounding boxes, even a single\nround of training is enough to improve over previously reported weakly\nsupervised results. Overall, our weak supervision approach reaches ~95% of the\nquality of the fully supervised model, both for semantic labelling and instance\nsegmentation.", "field": [], "task": ["Instance Segmentation", "Semantic Segmentation"], "method": [], "dataset": ["PASCAL VOC 2012 test", "PASCAL VOC 2012 val"], "metric": ["Mean IoU"], "title": "Simple Does It: Weakly Supervised Instance and Semantic Segmentation"} {"abstract": "Contextualized word embeddings have been employed effectively across several tasks in Natural Language Processing, as they have proved to carry useful semantic information. However, it is still hard to link them to structured sources of knowledge. In this paper we present ARES (context-AwaRe Embeddings of Senses), a semi-supervised approach to producing sense embeddings for the lexical meanings within a lexical knowledge base that lie in a space that is comparable to that of contextualized word vectors. ARES representations enable a simple 1 Nearest-Neighbour algorithm to outperform state-of-the-art models, not only in the English Word Sense Disambiguation task, but also in the multilingual one, whilst training on sense-annotated data in English only. We further assess the quality of our embeddings in the Word-in-Context task, where, when used as an external source of knowledge, they consistently improve the performance of a neural model, leading it to compete with other more complex architectures. 
ARES embeddings for all WordNet concepts and the automatically-extracted contexts used for creating the sense representations are freely available at http://sensembert.org/ares.", "field": [], "task": ["Word Embeddings", "Word Sense Disambiguation"], "method": [], "dataset": ["Supervised:"], "metric": ["Senseval 2", "Senseval 3", "SemEval 2013", "SemEval 2007", "SemEval 2015"], "title": "With More Contexts Comes Better Performance: Contextualized Sense Embeddings for All-Round Word Sense Disambiguation"} {"abstract": "Recent advances with Convolutional Networks (ConvNets) have shifted the\nbottleneck for many computer vision tasks to annotated data collection. In this\npaper, we present a geometry-driven approach to automatically collect\nannotations for human pose prediction tasks. Starting from a generic ConvNet\nfor 2D human pose, and assuming a multi-view setup, we describe an automatic\nway to collect accurate 3D human pose annotations. We capitalize on constraints\noffered by the 3D geometry of the camera setup and the 3D structure of the\nhuman body to probabilistically combine per view 2D ConvNet predictions into a\nglobally optimal 3D pose. This 3D pose is used as the basis for harvesting\nannotations. The benefit of the annotations produced automatically with our\napproach is demonstrated in two challenging settings: (i) fine-tuning a generic\nConvNet-based 2D pose predictor to capture the discriminative aspects of a\nsubject's appearance (i.e.,\"personalization\"), and (ii) training a ConvNet from\nscratch for single view 3D human pose prediction without leveraging 3D pose\ngroundtruth. The proposed multi-view pose estimator achieves state-of-the-art\nresults on standard benchmarks, demonstrating the effectiveness of our method\nin exploiting the available multi-view information.", "field": [], "task": ["3D Human Pose Estimation", "Pose Prediction"], "method": [], "dataset": ["Human3.6M"], "metric": ["Average MPJPE (mm)"], "title": "Harvesting Multiple Views for Marker-less 3D Human Pose Annotations"} {"abstract": "3D hand pose estimation is an essential problem for human computer interaction. Most of the existing depth-based hand pose estimation methods consume 2D depth map or 3D volume via 2D/3D convolutional neural networks (CNNs). In this paper, we propose a deep Semantic Hand Pose Regression network (SHPR-Net) for hand pose estimation from point sets, which consists of two subnetworks: a semantic segmentation subnetwork and a hand pose regression subnetwork. The semantic segmentation network assigns semantic labels for each point in the point set. The pose regression network integrates the semantic priors with both input and late fusion strategy and regresses the final hand pose. Two transformation matrices are learned from the point set and applied to transform the input point cloud and inversely transform the output pose respectively, which makes the SHPR-Net more robust to geometric transformations. Experiments on NYU, ICVL and MSRA hand pose datasets demonstrate that our SHPR-Net achieves high performance on par with state-of-the-art methods.
We also show that our method can be naturally extended to hand pose estimation from multi-view depth data and achieves further improvement on NYU dataset.", "field": [], "task": ["3D Hand Pose Estimation", "Hand Pose Estimation", "Pose Estimation", "Regression", "Semantic Segmentation"], "method": [], "dataset": ["ICVL Hands", "NYU Hands", "MSRA Hands"], "metric": ["Average 3D Error"], "title": "SHPR-Net: Deep Semantic Hand Pose Regression From Point Clouds"} {"abstract": "Weakly supervised object detection (WSOD), where a detector is trained with only image-level annotations, is attracting more and more attention. As a method to obtain a well-performing detector, the detector and the instance labels are updated iteratively. In this study, for more efficient iterative updating, we focus on the instance labeling problem, a problem of which label should be annotated to each region based on the last localization result. Instead of simply labeling the top-scoring region and its highly overlapping regions as positive and others as negative, we propose more effective instance labeling methods as follows. First, to solve the problem that regions covering only some parts of the object tend to be labeled as positive, we find regions covering the whole object focusing on the context classification loss. Second, considering the situation where the other objects contained in the image can be labeled as negative, we impose a spatial restriction on regions labeled as negative. Using these instance labeling methods, we train the detector on the PASCAL VOC 2007 and 2012 and obtain significantly improved results compared with other state-of-the-art approaches.", "field": [], "task": ["Object Detection", "Weakly Supervised Object Detection"], "method": [], "dataset": ["PASCAL VOC 2007", "PASCAL VOC 2012 test"], "metric": ["MAP"], "title": "Object-Aware Instance Labeling for Weakly Supervised Object Detection"} {"abstract": "In this paper we propose a novel approach to tracking by detection that can\nexploit both cameras as well as LIDAR data to produce very accurate 3D\ntrajectories. Towards this goal, we formulate the problem as a linear program\nthat can be solved exactly, and learn convolutional networks for detection as\nwell as matching in an end-to-end manner. We evaluate our model in the\nchallenging KITTI dataset and show very competitive results.", "field": [], "task": [], "method": [], "dataset": ["KITTI Tracking test", "KITTI"], "metric": ["MOTA", "MOTP"], "title": "End-to-end Learning of Multi-sensor 3D Tracking by Detection"} {"abstract": "Despite the large number of both commercial and academic methods for Automatic License Plate Recognition (ALPR), most existing approaches are focused on a specific license plate (LP) region (e.g. European, US, Brazilian, Taiwanese, etc.), and frequently explore datasets containing approximately frontal images. This work proposes a complete ALPR system focusing on unconstrained capture scenarios, where the LP might be considerably distorted due to oblique views. Our main contribution is the introduction of a novel Convolutional Neural Network (CNN) capable of detecting and rectifying multiple distorted license plates in a single image, which are fed to an Optical Character Recognition (OCR) method to obtain the final result. As an additional contribution, we also present manual annotations for a challenging set of LP images from different regions and acquisition conditions. 
Our experimental results indicate that the proposed method, without any parameter adaptation or fine tuning for a specific scenario, performs similarly to state-of-the-art commercial systems in traditional datasets, and outperforms both academic and commercial approaches in challenging datasets.", "field": [], "task": ["License Plate Recognition", "Optical Character Recognition"], "method": [], "dataset": ["AOLP-RP"], "metric": ["Average Recall"], "title": "License Plate Detection and Recognition in Unconstrained Scenarios"} {"abstract": "In traditional machine learning techniques for malware detection and classification, significant efforts are expended on manually designing features based on expertise and domain-specific knowledge. These solutions perform feature engineering in order to extract features that provide an abstract view of the software program. Thus, the usefulness of the classifier is roughly dependent on the ability of the domain experts to extract a set of descriptive features. Instead, we introduce a file agnostic end-to-end deep learning approach for malware classification from raw byte sequences without extracting hand-crafted features. It consists of two key components: (1) a denoising autoencoder that learns a hidden representation of the malware\u2019s binary content; and (2) a dilated residual network as classifier. The experiments show an impressive performance, achieving almost 99% of accuracy classifying malware into families.", "field": [], "task": ["Denoising", "Feature Engineering", "Malware Classification", "Malware Detection"], "method": [], "dataset": ["Microsoft Malware Classification Challenge"], "metric": ["Accuracy (10-fold)", "Macro F1 (10-fold)", "LogLoss"], "title": "An End-to-End Deep Learning Architecture for Classification of Malware\u2019s Binary Content"} {"abstract": "Clustering using deep autoencoders has been thoroughly investigated in recent years. Current approaches rely on simultaneously learning embedded features and clustering the data points in the latent space. Although numerous deep clustering approaches outperform the shallow models in achieving favorable results on several high-semantic datasets, a critical weakness of such models has been overlooked. In the absence of concrete supervisory signals, the embedded clustering objective function may distort the latent space by learning from unreliable pseudo-labels. Thus, the network can learn non-representative features, which in turn undermines the discriminative ability, yielding worse pseudo-labels. In order to alleviate the effect of random discriminative features, modern autoencoder-based clustering papers propose to use the reconstruction loss for pretraining and as a regularizer during the clustering phase. Nevertheless, a clustering-reconstruction trade-off can cause the \\textit{Feature Drift} phenomena. In this paper, we propose ADEC (Adversarial Deep Embedded Clustering) a novel autoencoder-based clustering model, which addresses a dual problem, namely, \\textit{Feature Randomness} and \\textit{Feature Drift}, using adversarial training. We empirically demonstrate the suitability of our model on handling these problems using benchmark real datasets. 
Experimental results validate that our model outperforms state-of-the-art autoencoder-based clustering methods.", "field": [], "task": ["Deep Clustering"], "method": [], "dataset": ["USPS", "MNIST-full"], "metric": ["NMI", "Accuracy"], "title": "Adversarial Deep Embedded Clustering: on a better trade-off between Feature Randomness and Feature Drift"} {"abstract": "Exploiting synthetic data to learn deep models has attracted increasing\nattention in recent years. However, the intrinsic domain difference between\nsynthetic and real images usually causes a significant performance drop when\napplying the learned model to real world scenarios. This is mainly due to two\nreasons: 1) the model overfits to synthetic images, making the convolutional\nfilters incompetent to extract informative representation for real images; 2)\nthere is a distribution difference between synthetic and real data, which is\nalso known as the domain adaptation problem. To this end, we propose a new\nreality oriented adaptation approach for urban scene semantic segmentation by\nlearning from synthetic data. First, we propose a target guided distillation\napproach to learn the real image style, which is achieved by training the\nsegmentation model to imitate a pretrained real style model using real images.\nSecond, we further take advantage of the intrinsic spatial structure presented\nin urban scene images, and propose a spatial-aware adaptation scheme to\neffectively align the distribution of two domains. These two modules can be\nreadily integrated with existing state-of-the-art semantic segmentation\nnetworks to improve their generalizability when adapting from synthetic to real\nurban scenes. We evaluate the proposed method on Cityscapes dataset by adapting\nfrom GTAV and SYNTHIA datasets, where the results demonstrate the effectiveness\nof our method.", "field": [], "task": ["Domain Adaptation", "Semantic Segmentation", "Synthetic-to-Real Translation"], "method": [], "dataset": ["GTAV-to-Cityscapes Labels"], "metric": ["mIoU"], "title": "ROAD: Reality Oriented Adaptation for Semantic Segmentation of Urban Scenes"} {"abstract": "We propose a high-level concept word detector that can be integrated with any\nvideo-to-language models. It takes a video as input and generates a list of\nconcept words as useful semantic priors for language generation models. The\nproposed word detector has two important properties. First, it does not require\nany external knowledge sources for training. Second, the proposed word detector\nis trainable in an end-to-end manner jointly with any video-to-language models.\nTo maximize the values of detected words, we also develop a semantic attention\nmechanism that selectively focuses on the detected concept words and fuse them\nwith the word encoding and decoding in the language model. In order to\ndemonstrate that the proposed approach indeed improves the performance of\nmultiple video-to-language tasks, we participate in four tasks of LSMDC 2016.\nOur approach achieves the best accuracies in three of them, including\nfill-in-the-blank, multiple-choice test, and movie retrieval. 
We also attain\ncomparable performance for the other task, movie description.", "field": [], "task": ["Language Modelling", "Question Answering", "Text Generation", "Video Captioning", "Video Retrieval"], "method": [], "dataset": ["LSMDC"], "metric": ["text-to-video R@1", "text-to-video R@10", "text-to-video Median Rank", "text-to-video R@5"], "title": "End-to-end Concept Word Detection for Video Captioning, Retrieval, and Question Answering"} {"abstract": "Existing RGB-D salient object detection (SOD) models usually treat RGB and depth as independent information and design separate networks for feature extraction from each. Such schemes can easily be constrained by a limited amount of training data or over-reliance on an elaborately designed training process. Inspired by the observation that RGB and depth modalities actually present certain commonality in distinguishing salient objects, a novel joint learning and densely cooperative fusion (JL-DCF) architecture is designed to learn from both RGB and depth inputs through a shared network backbone, known as the Siamese architecture. In this paper, we propose two effective components: joint learning (JL), and densely cooperative fusion (DCF). The JL module provides robust saliency feature learning by exploiting cross-modal commonality via a Siamese network, while the DCF module is introduced for complementary feature discovery. Comprehensive experiments using five popular metrics show that the designed framework yields a robust RGB-D saliency detector with good generalization. As a result, JL-DCF significantly advances the state-of-the-art models by an average of ~2.0% (F-measure) across seven challenging datasets. In addition, we show that JL-DCF is readily applicable to other related multi-modal detection tasks, including RGB-T (thermal infrared) SOD and video SOD (VSOD), achieving comparable or even better performance against state-of-the-art methods. This further confirms that the proposed framework could offer a potential solution for various applications and provide more insight into the cross-modal complementarity task. The code will be available at https://github.com/kerenfu/JLDCF/", "field": [], "task": ["Object Detection", "RGB-D Salient Object Detection", "RGB Salient Object Detection", "Salient Object Detection"], "method": [], "dataset": ["STERE", "NLPR", "DES", "SIP", "NJU2K"], "metric": ["max E-Measure", "Average MAE", "S-Measure", "max F-Measure"], "title": "Siamese Network for RGB-D Salient Object Detection and Beyond"} {"abstract": "Driven by deep neural networks and large scale datasets, scene text detection methods have progressed substantially over the past years, continuously refreshing the performance records on various standard benchmarks. However, limited by the representations (axis-aligned rectangles, rotated rectangles or quadrangles) adopted to describe text, existing methods may fall short when dealing with much more free-form text instances, such as curved text, which are actually very common in real-world scenarios. To tackle this problem, we propose a more flexible representation for scene text, termed as TextSnake, which is able to effectively represent text instances in horizontal, oriented and curved forms. In TextSnake, a text instance is described as a sequence of ordered, overlapping disks centered at symmetric axes, each of which is associated with potentially variable radius and orientation. Such geometry attributes are estimated via a Fully Convolutional Network (FCN) model. 
In experiments, the text detector based on TextSnake achieves state-of-the-art or comparable performance on Total-Text and SCUT-CTW1500, the two newly published benchmarks with special emphasis on curved text in natural images, as well as the widely-used datasets ICDAR 2015 and MSRA-TD500. Specifically, TextSnake outperforms the baseline on Total-Text by more than 40% in F-measure.", "field": [], "task": ["Scene Text", "Scene Text Detection"], "method": [], "dataset": ["MSRA-TD500", "ICDAR 2015", "SCUT-CTW1500", "Total-Text"], "metric": ["F-Measure", "Recall", "Precision"], "title": "TextSnake: A Flexible Representation for Detecting Text of Arbitrary Shapes"} {"abstract": "This paper presents a new state-of-the-art for document image classification\nand retrieval, using features learned by deep convolutional neural networks\n(CNNs). In object and scene analysis, deep neural nets are capable of learning\na hierarchical chain of abstraction from pixel inputs to concise and\ndescriptive representations. The current work explores this capacity in the\nrealm of document analysis, and confirms that this representation strategy is\nsuperior to a variety of popular hand-crafted alternatives. Experiments also\nshow that (i) features extracted from CNNs are robust to compression, (ii) CNNs\ntrained on non-document images transfer well to document analysis tasks, and\n(iii) enforcing region-specific feature-learning is unnecessary given\nsufficient training data. This work also makes available a new labelled subset\nof the IIT-CDIP collection, containing 400,000 document images across 16\ncategories, useful for training new CNNs for document analysis.", "field": [], "task": ["Document Image Classification", "Image Classification"], "method": [], "dataset": ["RVL-CDIP"], "metric": ["Accuracy"], "title": "Evaluation of Deep Convolutional Nets for Document Image Classification and Retrieval"} {"abstract": "Industrial visual detection is an essential part in modern industry for equipment maintenance and inspection. With the recent progress of deep learning, advanced industrial object detectors are built for smart industrial applications. However, deep learning methods are known data-hungry: the processes of data collection and annotation are labor-intensive and time-consuming. It is especially impractical in industrial scenarios to collect publicly available datasets due to the inherent diversity and privacy. In this paper, we explore automation of industrial visual inspection and propose a segmentation-aggregation framework to learn object detectors from weakly annotated visual data. The used minimum annotation is only image-level category labels without bounding boxes. The method is implemented and evaluated on collected insulator images and public PASCAL VOC benchmarks to verify its effectiveness. The experiments show that our models achieve high detection accuracy and can be applied in industry to achieve automatic visual inspection with minimum annotation cost.", "field": [], "task": ["Object Detection", "Weakly Supervised Object Detection"], "method": [], "dataset": ["PASCAL VOC 2007"], "metric": ["MAP"], "title": "Towards automatic visual inspection: A weakly supervised learning method for industrial applicable object detection"} {"abstract": "One of the current challenges in machine learning is how to deal with data coming at increasing rates in data streams. New predictive learning strategies are needed to cope with the high throughput data and concept drift. 
One of the data stream mining tasks where new learning strategies are needed is multi-target regression, due to its applicability in a high number of real world problems. While reliable and effective learning strategies have been proposed for batch multi-target regression, few have been proposed for multi-target online learning in data streams. Besides, most of the existing solutions do not consider the occurrence of inter-target correlations when making predictions. In this work, we propose a novel online learning strategy for multi-target regression in data streams. The proposed strategy extends existing online decision tree learning algorithm to explore inter-target dependencies while making predictions. For such, the proposed strategy, called Stacked Single-target Hoeffding Tree (SST-HT), uses the inter-target dependencies as an additional information source to enhance predictive accuracy. Throughout an extensive experimental setup, we evaluate our proposal against state-of-the-art decision tree-based algorithms for online multi-target regression. According to the experimental results, SST-HT presents superior predictive accuracy, with a small increase in the processing time and memory requirements.", "field": [], "task": ["Multi-target regression", "Neural Network Compression", "Regression"], "method": [], "dataset": ["CIFAR-10"], "metric": ["Size (MB)"], "title": "Online Multi-target regression trees with stacked leaf models"} {"abstract": "State-of-the-art object detectors are usually trained on public datasets. They often face substantial difficulties when applied to a different domain, where the imaging condition differs significantly and the corresponding annotated data are unavailable (or expensive to acquire). A natural remedy is to adapt the model by aligning the image representations on both domains. This can be achieved, for example, by adversarial learning, and has been shown to be effective in tasks like image classification. However, we found that in object detection, the improvement obtained in this way is quite limited. An important reason is that conventional domain adaptation methods strive to align images as a whole, while object detection, by nature, focuses on local regions that may contain objects of interest. Motivated by this, we propose a novel approach to domain adaption for object detection to handle the issues in \"where to look\" and \"how to align\". Our key idea is to mine the discriminative regions, namely those that are directly pertinent to object detection, and focus on aligning them across both domains. Experiments show that the proposed method performs remarkably better than existing methods with about 4% 6% improvement under various domain-shift scenarios while keeping good scalability.\r", "field": [], "task": ["Domain Adaptation", "Image Classification", "Object Detection", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["Cityscapes to Foggy Cityscapes"], "metric": ["mAP@0.5"], "title": "Adapting Object Detectors via Selective Cross-Domain Alignment"} {"abstract": "The existing approaches for salient motion segmentation are unable to explicitly learn geometric cues and often give false detections on prominent static objects. We exploit multiview geometric constraints to avoid such shortcomings. To handle the nonrigid background like a sea, we also propose a robust fusion mechanism between motion and appearance-based features. 
We find dense trajectories, covering every pixel in the video, and propose trajectory-based epipolar distances to distinguish between background and foreground regions. Trajectory epipolar distances are data-independent and can be readily computed given a few features' correspondences between the images. We show that by combining epipolar distances with optical flow, a powerful motion network can be learned. Enabling the network to leverage both of these features, we propose a simple mechanism, we call input-dropout. Comparing the motion-only networks, we outperform the previous state of the art on DAVIS-2016 dataset by 5.2% in the mean IoU score. By robustly fusing our motion network with an appearance network using the input-dropout mechanism, we also outperform the previous methods on DAVIS-2016, 2017 and Segtrackv2 dataset.", "field": [], "task": ["Motion Segmentation", "Optical Flow Estimation", "Unsupervised Video Object Segmentation", "Video Object Segmentation"], "method": [], "dataset": ["SegTrack v2", "DAVIS 2016"], "metric": ["F-measure (Decay)", "Jaccard (Mean)", "Mean IoU", "F-measure (Recall)", "Jaccard (Decay)", "Jaccard (Recall)", "F-measure (Mean)", "J&F"], "title": "EpO-Net: Exploiting Geometric Constraints on Dense Trajectories for Motion Saliency"} {"abstract": "Recent progress on salient object detection is beneficial from Fully Convolutional Neural Network (FCN). The saliency cues contained in multi-level convolutional features are complementary for detecting salient objects. How to integrate multi-level features becomes an open problem in saliency detection. In this paper, we propose a novel bi-directional message passing model to integrate multi-level features for salient object detection. At first, we adopt a Multi-scale Context-aware Feature Extraction Module (MCFEM) for multi-level feature maps to capture rich context information. Then a bi-directional structure is designed to pass messages between multi-level features, and a gate function is exploited to control the message passing rate. We use the features after message passing, which simultaneously encode semantic information and spatial details, to predict saliency maps. Finally, the predicted results are efficiently combined to generate the final saliency map. Quantitative and qualitative experiments on five benchmark datasets demonstrate that our proposed model performs favorably against the state-of-the-art methods under different evaluation metrics.", "field": [], "task": ["Object Detection", "RGB Salient Object Detection", "Saliency Detection", "Salient Object Detection"], "method": [], "dataset": ["UCF", "SOD", "PASCAL-S", "SBU", "ISTD"], "metric": ["MAE", "Balanced Error Rate"], "title": "A Bi-Directional Message Passing Model for Salient Object Detection"} {"abstract": "Liver cancer is one of the leading causes of cancer death. To assist doctors\nin hepatocellular carcinoma diagnosis and treatment planning, an accurate and\nautomatic liver and tumor segmentation method is highly demanded in the\nclinical practice. Recently, fully convolutional neural networks (FCNs),\nincluding 2D and 3D FCNs, serve as the back-bone in many volumetric image\nsegmentation. However, 2D convolutions can not fully leverage the spatial\ninformation along the third dimension while 3D convolutions suffer from high\ncomputational cost and GPU memory consumption. 
To address these issues, we\npropose a novel hybrid densely connected UNet (H-DenseUNet), which consists of\na 2D DenseUNet for efficiently extracting intra-slice features and a 3D\ncounterpart for hierarchically aggregating volumetric contexts under the spirit\nof the auto-context algorithm for liver and tumor segmentation. We formulate\nthe learning process of H-DenseUNet in an end-to-end manner, where the\nintra-slice representations and inter-slice features can be jointly optimized\nthrough a hybrid feature fusion (HFF) layer. We extensively evaluated our\nmethod on the dataset of MICCAI 2017 Liver Tumor Segmentation (LiTS) Challenge\nand 3DIRCADb Dataset. Our method outperformed other state-of-the-arts on the\nsegmentation results of tumors and achieved very competitive performance for\nliver segmentation even with a single model.", "field": [], "task": ["Automatic Liver And Tumor Segmentation", "Lesion Segmentation", "Liver Segmentation", "Semantic Segmentation", "Tumor Segmentation"], "method": [], "dataset": ["Anatomical Tracings of Lesions After Stroke (ATLAS) "], "metric": ["Precision", "Recall", "IoU", "Dice"], "title": "H-DenseUNet: Hybrid Densely Connected UNet for Liver and Tumor Segmentation from CT Volumes"} {"abstract": "Even with the recent advances in convolutional neural networks (CNN) in\nvarious visual recognition tasks, the state-of-the-art action recognition\nsystem still relies on hand crafted motion feature such as optical flow to\nachieve the best performance. We propose a multitask learning model\nActionFlowNet to train a single stream network directly from raw pixels to\njointly estimate optical flow while recognizing actions with convolutional\nneural networks, capturing both appearance and motion in a single model. We\nadditionally provide insights to how the quality of the learned optical flow\naffects the action recognition. Our model significantly improves action\nrecognition accuracy by a large margin 31% compared to state-of-the-art\nCNN-based action recognition models trained without external large scale data\nand additional optical flow input. Without pretraining on large external\nlabeled datasets, our model, by well exploiting the motion information,\nachieves competitive recognition accuracy to the models trained with large\nlabeled datasets such as ImageNet and Sport-1M.", "field": [], "task": ["Action Recognition", "Optical Flow Estimation", "Temporal Action Localization"], "method": [], "dataset": ["UCF101", "HMDB-51"], "metric": ["Average accuracy of 3 splits", "3-fold Accuracy"], "title": "ActionFlowNet: Learning Motion Representation for Action Recognition"} {"abstract": "Recurrent neural network (RNN) has achieved remarkable performance in text categorization. RNN can model the entire sequence and capture long-term dependencies, but it does not do well in extracting key patterns. In contrast, convolutional neural network (CNN) is good at extracting local and position-invariant features. In this paper, we present a novel model named disconnected recurrent neural network (DRNN), which incorporates position-invariance into RNN. By limiting the distance of information flow in RNN, the hidden state at each time step is restricted to represent words near the current position. 
The proposed model makes great improvements over RNN and CNN models and achieves the best performance on several benchmark datasets for text categorization.", "field": [], "task": ["Sentiment Analysis", "Text Classification"], "method": [], "dataset": ["Yelp Fine-grained classification", "Amazon Review Polarity", "Yelp Binary classification", "Yahoo! Answers", "DBpedia", "Amazon Review Full", "AG News"], "metric": ["Error", "Accuracy"], "title": "Disconnected Recurrent Neural Networks for Text Categorization"} {"abstract": "Music accounts for a significant chunk of interest among various online\nactivities. This is reflected by wide array of alternatives offered in music\nrelated web/mobile apps, information portals, featuring millions of artists,\nsongs and events attracting user activity at similar scale. Availability of\nlarge scale structured and unstructured data has attracted similar level of\nattention by data science community. This paper attempts to offer current\nstate-of-the-art in music related analysis. Various approaches involving\nmachine learning, information theory, social network analysis, semantic web and\nlinked open data are represented in the form of taxonomy along with data\nsources and use cases addressed by the research community.", "field": [], "task": ["No-Reference Image Quality Assessment"], "method": [], "dataset": ["200k Short Texts for Humor Detection"], "metric": ["14 gestures accuracy"], "title": "Music Data Analysis: A State-of-the-art Survey"} {"abstract": "Enabling bi-directional retrieval of images and texts is important for understanding the correspondence between vision and language. Existing methods leverage the attention mechanism to explore such correspondence in a fine-grained manner. However, most of them consider all semantics equally and thus align them uniformly, regardless of their diverse complexities. In fact, semantics are diverse (i.e. involving different kinds of semantic concepts), and humans usually follow a latent structure to combine them into understandable languages. It may be difficult to optimally capture such sophisticated correspondences in existing methods. In this paper, to address such a deficiency, we propose an Iterative Matching with Recurrent Attention Memory (IMRAM) method, in which correspondences between images and texts are captured with multiple steps of alignments. Specifically, we introduce an iterative matching scheme to explore such fine-grained correspondence progressively. A memory distillation unit is used to refine alignment knowledge from early steps to later ones. Experiment results on three benchmark datasets, i.e. Flickr8K, Flickr30K, and MS COCO, show that our IMRAM achieves state-of-the-art performance, well demonstrating its effectiveness. Experiments on a practical business advertisement dataset, named \\Ads{}, further validates the applicability of our method in practical scenarios.", "field": [], "task": ["Cross-Modal Retrieval"], "method": [], "dataset": ["Flickr30k", "COCO 2014"], "metric": ["Image-to-text R@5", "Image-to-text R@1", "Image-to-text R@10", "Text-to-image R@10", "Text-to-image R@1", "Text-to-image R@5"], "title": "IMRAM: Iterative Matching with Recurrent Attention Memory for Cross-Modal Image-Text Retrieval"} {"abstract": "This paper presents a novel unsupervised domain adaptation method for\ncross-domain visual recognition. 
We propose a unified framework that reduces\nthe shift between domains both statistically and geometrically, referred to as\nJoint Geometrical and Statistical Alignment (JGSA). Specifically, we learn two\ncoupled projections that project the source domain and target domain data into\nlow dimensional subspaces where the geometrical shift and distribution shift\nare reduced simultaneously. The objective function can be solved efficiently in\na closed form. Extensive experiments have verified that the proposed method\nsignificantly outperforms several state-of-the-art domain adaptation methods on\na synthetic dataset and three different real world cross-domain visual\nrecognition tasks.", "field": [], "task": ["Domain Adaptation", "Unsupervised Domain Adaptation"], "method": [], "dataset": ["Office-Caltech"], "metric": ["Average Accuracy"], "title": "Joint Geometrical and Statistical Alignment for Visual Domain Adaptation"} {"abstract": "Dialogue act recognition is an important part of natural language\nunderstanding. We investigate the way dialogue act corpora are annotated and\nthe learning approaches used so far. We find that the dialogue act is\ncontext-sensitive within the conversation for most of the classes.\nNevertheless, previous models of dialogue act classification work on the\nutterance-level and only very few consider context. We propose a novel\ncontext-based learning method to classify dialogue acts using a character-level\nlanguage model utterance representation, and we notice significant improvement.\nWe evaluate this method on the Switchboard Dialogue Act corpus, and our results\nshow that the consideration of the preceding utterances as a context of the\ncurrent utterance improves dialogue act detection.", "field": [], "task": ["Dialogue Act Classification", "Language Modelling", "Natural Language Understanding"], "method": [], "dataset": ["Switchboard corpus"], "metric": ["Accuracy"], "title": "A Context-based Approach for Dialogue Act Recognition using Simple Recurrent Neural Networks"} {"abstract": "People often refer to entities in an image in terms of their relationships\nwith other entities. For example, \"the black cat sitting under the table\"\nrefers to both a \"black cat\" entity and its relationship with another \"table\"\nentity. Understanding these relationships is essential for interpreting and\ngrounding such natural language expressions. Most prior work focuses on either\ngrounding entire referential expressions holistically to one region, or\nlocalizing relationships based on a fixed set of categories. In this paper we\ninstead present a modular deep architecture capable of analyzing referential\nexpressions into their component parts, identifying entities and relationships\nmentioned in the input expression and grounding them all in the scene. We call\nthis approach Compositional Modular Networks (CMNs): a novel architecture that\nlearns linguistic analysis and visual inference end-to-end. Our approach is\nbuilt around two types of neural modules that inspect local regions and\npairwise interactions between regions. 
We evaluate CMNs on multiple referential\nexpression datasets, outperforming state-of-the-art approaches on all tasks.", "field": [], "task": ["Visual Question Answering"], "method": [], "dataset": ["Visual Genome (subjects)", "Visual7W", "Visual Genome (pairs)"], "metric": ["Percentage correct"], "title": "Modeling Relationships in Referential Expressions with Compositional Modular Networks"} {"abstract": "We propose a new action and gesture recognition method based on spatio-temporal covariance descriptors and a weighted Riemannian locality preserving projection approach that takes into account the curved space formed by the descriptors. The weighted projection is then exploited during boosting to create a final multiclass classification algorithm that employs the most useful spatio-temporal regions. We also show how the descriptors can be computed quickly through the use of integral video representations. Experiments on the UCF sport, CK+ facial expression and Cambridge hand gesture datasets indicate superior performance of the proposed method compared to several recent state-of-the-art techniques. The proposed method is robust and does not require additional processing of the videos, such as foreground detection, interest-point detection or tracking.", "field": [], "task": ["Gesture Recognition", "Interest Point Detection"], "method": [], "dataset": ["Cambridge"], "metric": ["Accuracy"], "title": "Spatio-Temporal Covariance Descriptors for Action and Gesture Recognition"} {"abstract": "Convolutional neural networks (CNNs) have shown great performance as general\nfeature representations for object recognition applications. However, for\nmulti-label images that contain multiple objects from different categories,\nscales and locations, global CNN features are not optimal. In this paper, we\nincorporate local information to enhance the feature discriminative power. In\nparticular, we first extract object proposals from each image. With each image\ntreated as a bag and object proposals extracted from it treated as instances,\nwe transform the multi-label recognition problem into a multi-class\nmulti-instance learning problem. Then, in addition to extracting the typical\nCNN feature representation from each proposal, we propose to make use of\nground-truth bounding box annotations (strong labels) to add another level of\nlocal information by using nearest-neighbor relationships of local regions to\nform a multi-view pipeline. The proposed multi-view multi-instance framework\nutilizes both weak and strong labels effectively, and more importantly it has\nthe generalization ability to even boost the performance of unseen categories\nby partial strong labels from other categories. Our framework is extensively\ncompared with state-of-the-art hand-crafted feature based methods and CNN based\nmethods on two multi-label benchmark datasets. The experimental results\nvalidate the discriminative power and the generalization ability of the\nproposed framework. With strong labels, our framework is able to achieve\nstate-of-the-art results in both datasets.", "field": [], "task": ["Multi-Label Classification", "Object Recognition"], "method": [], "dataset": ["PASCAL VOC 2007"], "metric": ["mAP"], "title": "Exploit Bounding Box Annotations for Multi-label Object Recognition"} {"abstract": "We present a generative model to map natural language questions into SQL\nqueries. 
Existing neural network based approaches typically generate a SQL\nquery word-by-word; however, a large portion of the generated results are\nincorrect or not executable due to the mismatch between question words and\ntable contents. Our approach addresses this problem by considering the\nstructure of the table and the syntax of the SQL language. The quality of the generated\nSQL query is significantly improved through (1) learning to replicate content\nfrom column names, cells or SQL keywords; and (2) improving the generation of\nthe WHERE clause by leveraging the column-cell relation. Experiments are conducted\non WikiSQL, a recently released dataset with the largest number of question-SQL pairs.\nOur approach significantly improves the state-of-the-art execution accuracy\nfrom 69.0% to 74.4%.", "field": [], "task": ["Semantic Parsing"], "method": [], "dataset": ["WikiSQL"], "metric": ["Exact Match Accuracy", "Execution Accuracy"], "title": "Semantic Parsing with Syntax- and Table-Aware SQL Generation"} {"abstract": "This paper describes the participation of the ELiRF-UPV team at task 10, Capturing Discriminative Attributes, of SemEval-2018. Our best approach consists of using ConceptNet, Wikipedia and NumberBatch embeddings in order to establish relationships between concepts and attributes. Furthermore, this system achieves competitive results in the official evaluation.", "field": [], "task": ["Knowledge Graphs", "Relation Extraction"], "method": [], "dataset": ["SemEval 2018 Task 10"], "metric": ["F1-Score"], "title": "ELiRF-UPV at SemEval-2018 Task 10: Capturing Discriminative Attributes with Knowledge Graphs and Wikipedia"} {"abstract": "Relation extraction (RE) is an indispensable information extraction task in several disciplines. RE models typically assume that named entity recognition (NER) is already performed in a previous step by another independent model. Several recent efforts, under the theme of end-to-end RE, seek to exploit inter-task correlations by modeling both NER and RE tasks jointly. Earlier work in this area commonly reduces the task to a table-filling problem wherein an additional expensive decoding step involving beam search is applied to obtain globally consistent cell labels. In efforts that do not employ table-filling, global optimization in the form of CRFs with Viterbi decoding for the NER component is still necessary for competitive performance. We introduce a novel neural architecture utilizing the table structure, based on repeated applications of 2D convolutions for pooling local dependency and metric-based features, that improves on the state-of-the-art without the need for global optimization. We validate our model on the ADE and CoNLL04 datasets for end-to-end RE and demonstrate $\\approx 1\\%$ gain (in F-score) over prior best results with training and testing times that are seven to ten times faster --- the latter highly advantageous for time-sensitive end user applications.", "field": [], "task": ["Metric Learning", "Named Entity Recognition", "Relation Extraction"], "method": [], "dataset": ["ADE Corpus", "CoNLL04"], "metric": ["NER Macro F1", "RE+ Macro F1", "RE+ Macro F1 "], "title": "Neural Metric Learning for Fast End-to-End Relation Extraction"} {"abstract": "Although humans perform well at predicting what exists beyond the boundaries of an image, deep models struggle to understand context and extrapolation through retained information. This task is known as image outpainting and involves generating realistic expansions of an image's boundaries.
Current models use generative adversarial networks to generate results which lack localized image feature consistency and appear fake. We propose two methods to improve this issue: the use of a local and global discriminator, and the addition of residual blocks within the encoding section of the network. Comparisons of our model and the baseline's L1 loss, mean squared error (MSE) loss, and qualitative differences reveal our model is able to naturally extend object boundaries and produce more internally consistent images compared to current methods but produces lower fidelity images.", "field": [], "task": ["Image Outpainting"], "method": [], "dataset": ["Places365-Standard"], "metric": ["MSE", "L1", "Adversarial"], "title": "Enhanced Residual Networks for Context-based Image Outpainting"} {"abstract": "Understanding human activities and object affordances are two very important\nskills, especially for personal robots which operate in human environments. In\nthis work, we consider the problem of extracting a descriptive labeling of the\nsequence of sub-activities being performed by a human, and more importantly, of\ntheir interactions with the objects in the form of associated affordances.\nGiven a RGB-D video, we jointly model the human activities and object\naffordances as a Markov random field where the nodes represent objects and\nsub-activities, and the edges represent the relationships between object\naffordances, their relations with sub-activities, and their evolution over\ntime. We formulate the learning problem using a structural support vector\nmachine (SSVM) approach, where labelings over various alternate temporal\nsegmentations are considered as latent variables. We tested our method on a\nchallenging dataset comprising 120 activity videos collected from 4 subjects,\nand obtained an accuracy of 79.4% for affordance, 63.4% for sub-activity and\n75.0% for high-level activity labeling. We then demonstrate the use of such\ndescriptive labeling in performing assistive tasks by a PR2 robot.", "field": [], "task": ["Skeleton Based Action Recognition"], "method": [], "dataset": ["CAD-120"], "metric": ["Accuracy"], "title": "Learning Human Activities and Object Affordances from RGB-D Videos"} {"abstract": "In this paper, we propose a method for obtaining sentence-level embeddings.\nWhile the problem of securing word-level embeddings is very well studied, we\npropose a novel method for obtaining sentence-level embeddings. This is\nobtained by a simple method in the context of solving the paraphrase generation\ntask. If we use a sequential encoder-decoder model for generating paraphrase,\nwe would like the generated paraphrase to be semantically close to the original\nsentence. One way to ensure this is by adding constraints for true paraphrase\nembeddings to be close and unrelated paraphrase candidate sentence embeddings\nto be far. This is ensured by using a sequential pair-wise discriminator that\nshares weights with the encoder that is trained with a suitable loss function.\nOur loss function penalizes paraphrase sentence embedding distances from being\ntoo large. This loss is used in combination with a sequential encoder-decoder\nnetwork. We also validated our method by evaluating the obtained embeddings for\na sentiment analysis task. The proposed method results in semantic embeddings\nand outperforms the state-of-the-art on the paraphrase generation and sentiment\nanalysis task on standard datasets. 
These results are also shown to be\nstatistically significant.", "field": [], "task": ["Paraphrase Generation", "Sentence Embedding", "Sentence Embeddings", "Sentiment Analysis"], "method": [], "dataset": ["quora"], "metric": ["BLEU-1"], "title": "Learning Semantic Sentence Embeddings using Sequential Pair-wise Discriminator"} {"abstract": "The task of natural question generation is to generate a corresponding question given the input passage (fact) and answer. It is useful for enlarging the training set of QA systems. Previous work has adopted sequence-to-sequence models that take a passage with an additional bit to indicate answer position as input. However, they do not explicitly model the information between answer and other context within the passage. We propose a model that matches the answer with the passage before generating the question. Experiments show that our model outperforms the existing state of the art using rich features.", "field": [], "task": ["Question Generation"], "method": [], "dataset": ["SQuAD1.1"], "metric": ["BLEU-4"], "title": "Leveraging Context Information for Natural Question Generation"} {"abstract": "Most emotion recognition methods tackle the emotion understanding task by considering individual emotion independently while ignoring their fuzziness nature and the interconnections among them. In this paper, we explore how emotion correlations can be captured and help different classification tasks. We propose EmoGraph that captures the dependencies among different emotions through graph networks. These graphs are constructed by leveraging the co-occurrence statistics among different emotion categories. Empirical results on two multi-label classification datasets demonstrate that EmoGraph outperforms strong baselines, especially for macro-F1. An additional experiment illustrates the captured emotion correlations can also benefit a single-label classification task.", "field": [], "task": ["Emotion Classification", "Emotion Recognition", "Multi-Label Classification"], "method": [], "dataset": ["SemEval 2018 Task 1E-c"], "metric": ["Micro-F1", "Macro-F1", "Accuracy"], "title": "EmoGraph: Capturing Emotion Correlations using Graph Networks"} {"abstract": "We demonstrate the possibility of what we call sparse learning: accelerated training of deep neural networks that maintain sparse weights throughout training while achieving dense performance levels. We accomplish this by developing sparse momentum, an algorithm which uses exponentially smoothed gradients (momentum) to identify layers and weights which reduce the error efficiently. Sparse momentum redistributes pruned weights across layers according to the mean momentum magnitude of each layer. Within a layer, sparse momentum grows weights according to the momentum magnitude of zero-valued weights. We demonstrate state-of-the-art sparse performance on MNIST, CIFAR-10, and ImageNet, decreasing the mean error by a relative 8%, 15%, and 6% compared to other sparse algorithms. Furthermore, we show that sparse momentum reliably reproduces dense performance levels while providing up to 5.61x faster training. In our analysis, ablations show that the benefits of momentum redistribution and growth increase with the depth and size of the network. 
Additionally, we find that sparse momentum is insensitive to the choice of its hyperparameters, suggesting that sparse momentum is robust and easy to use.", "field": [], "task": ["Image Classification", "Sparse Learning"], "method": [], "dataset": ["MNIST", "CIFAR-10"], "metric": ["Percentage error", "Percentage correct"], "title": "Sparse Networks from Scratch: Faster Training without Losing Performance"} {"abstract": "Scene text detection, an important step of scene text reading systems, has witnessed rapid development with convolutional neural networks. Nonetheless, two main challenges still exist and hamper its deployment to real-world applications. The first problem is the trade-off between speed and accuracy. The second one is to model the arbitrary-shaped text instance. Recently, some methods have been proposed to tackle arbitrary-shaped text detection, but they rarely take the speed of the entire pipeline into consideration, which may fall short in practical applications. In this paper, we propose an efficient and accurate arbitrary-shaped text detector, termed Pixel Aggregation Network (PAN), which is equipped with a low computational-cost segmentation head and a learnable post-processing. More specifically, the segmentation head is made up of Feature Pyramid Enhancement Module (FPEM) and Feature Fusion Module (FFM). FPEM is a cascadable U-shaped module, which can introduce multi-level information to guide better segmentation. FFM can gather the features given by the FPEMs of different depths into a final feature for segmentation. The learnable post-processing is implemented by Pixel Aggregation (PA), which can precisely aggregate text pixels by predicted similarity vectors. Experiments on several standard benchmarks validate the superiority of the proposed PAN. It is worth noting that our method can achieve a competitive F-measure of 79.9% at 84.2 FPS on CTW1500.", "field": [], "task": ["Scene Text", "Scene Text Detection"], "method": [], "dataset": ["MSRA-TD500", "ICDAR 2015", "SCUT-CTW1500", "Total-Text"], "metric": ["F-Measure", "Recall", "Precision"], "title": "Efficient and Accurate Arbitrary-Shaped Text Detection with Pixel Aggregation Network"} {"abstract": "In this paper, we present a new deep learning architecture for addressing the problem of supervised learning with sparse and irregularly sampled multivariate time series. The architecture is based on the use of a semi-parametric interpolation network followed by the application of a prediction network. The interpolation network allows for information to be shared across multiple dimensions of a multivariate time series during the interpolation stage, while any standard deep learning model can be used for the prediction network. This work is motivated by the analysis of physiological time series data in electronic health records, which are sparse, irregularly sampled, and multivariate.
We investigate the performance of this architecture on both classification and regression tasks, showing that our approach outperforms a range of baseline and recently proposed models.", "field": [], "task": ["Length-of-Stay prediction", "Mortality Prediction", "Regression", "Time Series", "Time Series Classification"], "method": [], "dataset": ["PhysioNet Challenge 2012"], "metric": ["AUC", "AUC Stdev"], "title": "Interpolation-Prediction Networks for Irregularly Sampled Time Series"} {"abstract": "Recently, neural methods have achieved state-of-the-art (SOTA) results in Named Entity Recognition (NER) tasks for many languages without the need for manually crafted features. However, these models still require manually annotated training data, which is not available for many languages. In this paper, we propose an unsupervised cross-lingual NER model that can transfer NER knowledge from one language to another in a completely unsupervised way without relying on any bilingual dictionary or parallel data. Our model achieves this through word-level adversarial learning and augmented fine-tuning with parameter sharing and feature augmentation. Experiments on five different languages demonstrate the effectiveness of our approach, outperforming existing models by a good margin and setting a new SOTA for each language pair.", "field": [], "task": ["Cross-Lingual NER", "Cross-Lingual Transfer", "Low Resource Named Entity Recognition", "Named Entity Recognition"], "method": [], "dataset": ["Conll 2003 Spanish", "CONLL 2003 German", "CONLL 2003 Dutch"], "metric": ["F1 score"], "title": "Zero-Resource Cross-Lingual Named Entity Recognition"} {"abstract": "We introduce an adaptive L2 regularization mechanism in the setting of person re-identification. In the literature, it is common practice to utilize hand-picked regularization factors which remain constant throughout the training procedure. Unlike existing approaches, the regularization factors in our proposed method are updated adaptively through backpropagation. This is achieved by incorporating trainable scalar variables as the regularization factors, which are further fed into a scaled hard sigmoid function. Extensive experiments on the Market-1501, DukeMTMC-reID and MSMT17 datasets validate the effectiveness of our framework. Most notably, we obtain state-of-the-art performance on MSMT17, which is the largest dataset for person re-identification. Source code is publicly available at https://github.com/nixingyang/AdaptiveL2Regularization.", "field": [], "task": ["L2 Regularization", "Person Re-Identification"], "method": [], "dataset": ["MSMT17", "Market-1501", "DukeMTMC-reID"], "metric": ["Rank-1", "mAP", "MAP"], "title": "Adaptive L2 Regularization in Person Re-Identification"} {"abstract": "Skeleton-based action recognition has made great progress recently, but many\nproblems still remain unsolved. For example, most of the previous methods model\nthe representations of skeleton sequences without abundant spatial structure\ninformation and detailed temporal dynamics features. In this paper, we propose\na novel model with spatial reasoning and temporal stack learning (SR-TSL) for\nskeleton based action recognition, which consists of a spatial reasoning\nnetwork (SRN) and a temporal stack learning network (TSLN). 
The SRN can capture\nthe high-level spatial structural information within each frame by a residual\ngraph neural network, while the TSLN can model the detailed temporal dynamics\nof skeleton sequences by a composition of multiple skip-clip LSTMs. During\ntraining, we propose a clip-based incremental loss to optimize the model. We\nperform extensive experiments on the SYSU 3D Human-Object Interaction dataset\nand NTU RGB+D dataset and verify the effectiveness of each network of our\nmodel. The comparison results illustrate that our approach achieves much better\nresults than state-of-the-art methods.", "field": [], "task": ["Action Recognition", "Human-Object Interaction Detection", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "Skeleton-Based Action Recognition with Spatial Reasoning and Temporal Stack Learning"} {"abstract": "In this paper, we propose a deep progressive reinforcement learning (DPRL) method for action recognition in skeleton-based videos, which aims to distil the most informative frames and discard ambiguous frames in sequences for recognizing actions. Since the choices of selecting representative frames are multitudinous for each video, we model the frame selection as a progressive process through deep reinforcement learning, during which we progressively adjust the chosen frames by taking two important factors into account: (1) the quality of the selected frames and (2) the relationship between the selected frames to the whole video. Moreover, considering the topology of human body inherently lies in a graph-based structure, where the vertices and edges represent the hinged joints and rigid bones respectively, we employ the graph-based convolutional neural network to capture the dependency between the joints for action recognition. Our approach achieves very competitive performance on three widely used benchmarks.", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["UT-Kinect", "NTU RGB+D", "SYSU 3D"], "metric": ["Accuracy (CS)", "Accuracy (CV)", "Accuracy"], "title": "Deep Progressive Reinforcement Learning for Skeleton-Based Action Recognition"} {"abstract": "Scene flow describes the motion of 3D objects in real world and potentially\ncould be the basis of a good feature for 3D action recognition. However, its\nuse for action recognition, especially in the context of convolutional neural\nnetworks (ConvNets), has not been previously studied. In this paper, we propose\nthe extraction and use of scene flow for action recognition from RGB-D data.\nPrevious works have considered the depth and RGB modalities as separate\nchannels and extract features for later fusion. We take a different approach\nand consider the modalities as one entity, thus allowing feature extraction for\naction recognition at the beginning. Two key questions about the use of scene\nflow for action recognition are addressed: how to organize the scene flow\nvectors and how to represent the long term dynamics of videos based on scene\nflow. In order to calculate the scene flow correctly on the available datasets,\nwe propose an effective self-calibration method to align the RGB and depth data\nspatially without knowledge of the camera parameters. 
Based on the scene flow\nvectors, we propose a new representation, namely, Scene Flow to Action Map\n(SFAM), that describes several long term spatio-temporal dynamics for action\nrecognition. We adopt a channel transform kernel to transform the scene flow\nvectors to an optimal color space analogous to RGB. This transformation takes\nbetter advantage of the trained ConvNets models over ImageNet. Experimental\nresults indicate that this new representation can surpass the performance of\nstate-of-the-art methods on two large public datasets.", "field": [], "task": ["3D Action Recognition", "Action Recognition", "Temporal Action Localization"], "method": [], "dataset": ["ChaLearn val"], "metric": ["Accuracy"], "title": "Scene Flow to Action Map: A New Representation for RGB-D based Action Recognition with Convolutional Neural Networks"} {"abstract": "Person re identification is a challenging retrieval task that requires\nmatching a person's acquired image across non overlapping camera views. In this\npaper we propose an effective approach that incorporates both the fine and\ncoarse pose information of the person to learn a discriminative embedding. In\ncontrast to the recent direction of explicitly modeling body parts or\ncorrecting for misalignment based on these, we show that a rather\nstraightforward inclusion of acquired camera view and/or the detected joint\nlocations into a convolutional neural network helps to learn a very effective\nrepresentation. To increase retrieval performance, re-ranking techniques based\non computed distances have recently gained much attention. We propose a new\nunsupervised and automatic re-ranking framework that achieves state-of-the-art\nre-ranking performance. We show that in contrast to the current\nstate-of-the-art re-ranking methods our approach does not require to compute\nnew rank lists for each image pair (e.g., based on reciprocal neighbors) and\nperforms well by using simple direct rank list based comparison or even by just\nusing the already computed euclidean distances between the images. We show that\nboth our learned representation and our re-ranking method achieve\nstate-of-the-art performance on a number of challenging surveillance image and\nvideo datasets.\n The code is available online at:\nhttps://github.com/pse-ecn/pose-sensitive-embedding", "field": [], "task": ["Person Re-Identification"], "method": [], "dataset": ["MARS", "DukeMTMC-reID", "Market-1501"], "metric": ["Rank-1", "mAP", "MAP"], "title": "A Pose-Sensitive Embedding for Person Re-Identification with Expanded Cross Neighborhood Re-Ranking"} {"abstract": "Many recent advancements in Computer Vision are attributed to large datasets.\nOpen-source software packages for Machine Learning and inexpensive commodity\nhardware have reduced the barrier of entry for exploring novel approaches at\nscale. It is possible to train models over millions of examples within a few\ndays. Although large-scale datasets exist for image understanding, such as\nImageNet, there are no comparable size video classification datasets.\n In this paper, we introduce YouTube-8M, the largest multi-label video\nclassification dataset, composed of ~8 million videos (500K hours of video),\nannotated with a vocabulary of 4800 visual entities. To get the videos and\ntheir labels, we used a YouTube video annotation system, which labels videos\nwith their main topics. 
While the labels are machine-generated, they have\nhigh-precision and are derived from a variety of human-based signals including\nmetadata and query click signals. We filtered the video labels (Knowledge Graph\nentities) using both automated and manual curation strategies, including asking\nhuman raters if the labels are visually recognizable. Then, we decoded each\nvideo at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to\nextract the hidden representation immediately prior to the classification\nlayer. Finally, we compressed the frame features and make both the features and\nvideo-level labels available for download.\n We trained various (modest) classification models on the dataset, evaluated\nthem using popular evaluation metrics, and report them as baselines. Despite\nthe size of the dataset, some of our models train to convergence in less than a\nday on a single machine using TensorFlow. We plan to release code for training\na TensorFlow model and for computing metrics.", "field": [], "task": ["Action Recognition", "Video Classification"], "method": [], "dataset": ["Sports-1M", "YouTube-8M", "ActivityNet"], "metric": ["PERR", "mAP", "Video hit@1 ", "Hit@1", "Video hit@5", "Hit@5"], "title": "YouTube-8M: A Large-Scale Video Classification Benchmark"} {"abstract": "Human actions can be represented by the trajectories of skeleton joints. Traditional methods generally model the spatial structure and temporal dynamics of human skeleton with hand-crafted features and recognize human actions by well-designed classifiers. In this paper, considering that recurrent neural network (RNN) can model the long-term contextual information of temporal sequences well, we propose an end-to-end hierarchical RNN for skeleton based action recognition. Instead of taking the whole skeleton as the input, we divide the human skeleton into five parts according to human physical structure, and then separately feed them to five subnets. As the number of layers increases, the representations extracted by the subnets are hierarchically fused to be the inputs of higher layers. The final representations of the skeleton sequences are fed into a single-layer perceptron, and the temporally accumulated output of the perceptron is the final decision. We compare with five other deep RNN architectures derived from our model to verify the effectiveness of the proposed network, and also compare with several other methods on three publicly available datasets. Experimental results demonstrate that our model achieves the state-of-the-art performance with high computational efficiency.", "field": [], "task": ["Action Recognition", "Skeleton Based Action Recognition"], "method": [], "dataset": ["NTU RGB+D"], "metric": ["Accuracy (CS)", "Accuracy (CV)"], "title": "Hierarchical recurrent neural network for skeleton based action recognition"} {"abstract": "Cross modal face matching between the thermal and visible spectrum is a much\ndesired capability for night-time surveillance and security applications. Due\nto a very large modality gap, thermal-to-visible face recognition is one of the\nmost challenging face matching problem. In this paper, we present an approach\nto bridge this modality gap by a significant margin. Our approach captures the\nhighly non-linear relationship between the two modalities by using a deep\nneural network. Our model attempts to learn a non-linear mapping from visible\nto thermal spectrum while preserving the identity information. 
We show\nsubstantive performance improvement on three difficult thermal-visible face\ndatasets. The presented approach improves the state-of-the-art by more than\n10\\% on the UND-X1 dataset and by more than 15-30\\% on the NVESD dataset in terms of\nRank-1 identification. Our method bridges the drop in performance due to the\nmodality gap by more than 40\\%.", "field": [], "task": ["Face Recognition"], "method": [], "dataset": ["Carl", "UND-X1"], "metric": ["Rank-1"], "title": "Deep Perceptual Mapping for Cross-Modal Face Recognition"} {"abstract": "Gesture recognition is a challenging problem in the field of biometrics. In\nthis paper, we integrate the Fisher criterion into a Bidirectional Long-Short Term\nMemory (BLSTM) network and a Bidirectional Gated Recurrent Unit (BGRU), thus\nleading to two new deep models termed F-BLSTM and F-BGRU. Both Fisher\ndiscriminative deep models can effectively classify gestures based on\nanalyzing the acceleration and angular velocity data of the human gestures.\nMoreover, we collect a large Mobile Gesture Database (MGD) based on the\naccelerations and angular velocities, containing 5547 sequences of 12 gestures.\nExtensive experiments are conducted to validate the superior performance of the\nproposed networks as compared to the state-of-the-art BLSTM and BGRU on the MGD\ndatabase and two benchmark databases (i.e. BUAA mobile gesture and SmartWatch\ngesture).", "field": [], "task": ["Gesture Recognition", "Hand Gesture Recognition", "Hand-Gesture Recognition"], "method": [], "dataset": ["BUAA", "SmartWatch", "MGB"], "metric": ["Accuracy"], "title": "Deep Fisher Discriminant Learning for Mobile Hand Gesture Recognition"} {"abstract": "Subset selection from massive data with noised information is increasingly\npopular for various applications. This problem is still highly challenging as\ncurrent methods are generally slow in speed and sensitive to outliers. To\naddress the above two issues, we propose an accelerated robust subset selection\n(ARSS) method. Specifically in the subset selection area, this is the first\nattempt to employ the $\\ell_{p}(0