abstract (string, 13-4.33k chars) | field (sequence) | task (sequence) | method (sequence) | dataset (sequence) | metric (sequence) | title (string, 10-194 chars) |
---|---|---|---|---|---|---|
Commonsense reasoning is a critical AI capability, but it is difficult to construct challenging datasets that test common sense. Recent neural question answering systems, based on large pre-trained models of language, have already achieved near-human-level performance on commonsense knowledge benchmarks. These systems do not possess human-level common sense, but are able to exploit limitations of the datasets to achieve human-level scores. We introduce the CODAH dataset, an adversarially-constructed evaluation dataset for testing common sense. CODAH forms a challenging extension to the recently-proposed SWAG dataset, which tests commonsense knowledge using sentence-completion questions that describe situations observed in video. To produce a more difficult dataset, we introduce a novel procedure for question acquisition in which workers author questions designed to target weaknesses of state-of-the-art neural question answering systems. Workers are rewarded for submissions that models fail to answer correctly both before and after fine-tuning (in cross-validation). We create 2.8k questions via this procedure and evaluate the performance of multiple state-of-the-art question answering systems on our dataset. We observe a significant gap between human performance of 95.3% and the best baseline accuracy of 67.5%, achieved by the BERT-Large model. | [] | [
"Common Sense Reasoning",
"Question Answering",
"Sentence Completion"
] | [] | [
"CODAH"
] | [
"Accuracy"
] | CODAH: An Adversarially Authored Question-Answer Dataset for Common Sense |
Weakly supervised object detection (WSOD) is a challenging task when provided
with image category supervision but required to simultaneously learn object
locations and object detectors. Many WSOD approaches adopt multiple instance
learning (MIL) and have non-convex loss functions which are prone to getting stuck
in local minima (falsely localizing object parts) while missing the full object
extent during training. In this paper, we introduce a continuation optimization
method into MIL, thereby creating continuation multiple instance learning
(C-MIL), with the intention of alleviating the non-convexity problem in a
systematic way. We partition instances into spatially related and class related
subsets, and approximate the original loss function with a series of smoothed
loss functions defined within the subsets. Optimizing smoothed loss functions
prevents the training procedure from falling prematurely into local minima and
facilitates the discovery of Stable Semantic Extremal Regions (SSERs) which
indicate full object extent. On the PASCAL VOC 2007 and 2012 datasets, C-MIL
improves the state-of-the-art of weakly supervised object detection and weakly
supervised object localization by large margins. | [] | [
"Multiple Instance Learning",
"Object Detection",
"Object Localization",
"Weakly Supervised Object Detection",
"Weakly-Supervised Object Localization"
] | [] | [
"PASCAL VOC 2007",
"PASCAL VOC 2012 test"
] | [
"MAP"
] | C-MIL: Continuation Multiple Instance Learning for Weakly Supervised Object Detection |
We present SpecAugment, a simple data augmentation method for speech recognition. SpecAugment is applied directly to the feature inputs of a neural network (i.e., filter bank coefficients). The augmentation policy consists of warping the features, masking blocks of frequency channels, and masking blocks of time steps. We apply SpecAugment on Listen, Attend and Spell networks for end-to-end speech recognition tasks. We achieve state-of-the-art performance on the LibriSpeech 960h and Switchboard 300h tasks, outperforming all prior work. On LibriSpeech, we achieve 6.8% WER on test-other without the use of a language model, and 5.8% WER with shallow fusion with a language model. This compares to the previous state-of-the-art hybrid system at 7.5% WER. For Switchboard, we achieve 7.2%/14.6% on the Switchboard/CallHome portion of the Hub5'00 test set without the use of a language model, and 6.8%/14.1% with shallow fusion, which compares to the previous state-of-the-art hybrid system at 8.3%/17.3% WER. | [] | [
"Data Augmentation",
"End-To-End Speech Recognition",
"Language Modelling",
"Speech Recognition"
] | [] | [
"LibriSpeech test-other",
"LibriSpeech test-clean",
"Hub5'00 SwitchBoard"
] | [
"CallHome",
"SwitchBoard",
"Word Error Rate (WER)"
] | SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition |
Current 3D object detection methods are heavily influenced by 2D detectors. In order to leverage architectures in 2D detectors, they often convert 3D point clouds to regular grids (i.e., to voxel grids or to bird's eye view images), or rely on detection in 2D images to propose 3D boxes. Few works have attempted to directly detect objects in point clouds. In this work, we return to first principles to construct a 3D detection pipeline for point cloud data that is as generic as possible. However, due to the sparse nature of the data -- samples from 2D manifolds in 3D space -- we face a major challenge when directly predicting bounding box parameters from scene points: a 3D object centroid can be far from any surface point and is thus hard to regress accurately in one step. To address the challenge, we propose VoteNet, an end-to-end 3D object detection network based on a synergy of deep point set networks and Hough voting. Our model achieves state-of-the-art 3D detection on two large datasets of real 3D scans, ScanNet and SUN RGB-D, with a simple design, compact model size and high efficiency. Remarkably, VoteNet outperforms previous methods by using purely geometric information without relying on color images. | [] | [
"3D Object Detection",
"Object Detection"
] | [] | [
"ScanNetV2",
"SUN-RGBD val"
] | [
"[email protected]",
"[email protected]",
"MAP"
] | Deep Hough Voting for 3D Object Detection in Point Clouds |
Research on depth-based human activity analysis achieved outstanding performance and demonstrated the effectiveness of 3D representation for action recognition. The existing depth-based and RGB+D-based action recognition benchmarks have a number of limitations, including the lack of large-scale training samples, realistic number of distinct class categories, diversity in camera views, varied environmental conditions, and variety of human subjects. In this work, we introduce a large-scale dataset for RGB+D human action recognition, which is collected from 106 distinct subjects and contains more than 114 thousand video samples and 8 million frames. This dataset contains 120 different action classes including daily, mutual, and health-related activities. We evaluate the performance of a series of existing 3D activity analysis methods on this dataset, and show the advantage of applying deep learning methods for 3D-based human action recognition. Furthermore, we investigate a novel one-shot 3D activity recognition problem on our dataset, and a simple yet effective Action-Part Semantic Relevance-aware (APSR) framework is proposed for this task, which yields promising results for recognition of the novel action classes. We believe the introduction of this large-scale dataset will enable the community to apply, adapt, and develop various data-hungry learning techniques for depth-based and RGB+D-based human activity understanding. [The dataset is available at: http://rose1.ntu.edu.sg/Datasets/actionRecognition.asp] | [] | [
"Action Recognition",
"Activity Recognition",
"One-Shot 3D Action Recognition",
"Temporal Action Localization"
] | [] | [
"NTU RGB+D 120"
] | [
"Accuracy"
] | NTU RGB+D 120: A Large-Scale Benchmark for 3D Human Activity Understanding |
In this work, we propose a novel adaptive spatially-regularized correlation filters (ASRCF) model to simultaneously optimize the filter coefficients and the spatial regularization weight. First, this adaptive spatial regularization scheme can learn an effective spatial weight for a specific object and its appearance variations, and therefore results in more reliable filter coefficients during the tracking process. Second, our ASRCF model can be effectively optimized based on the alternating direction method of multipliers, where each subproblem has a closed-form solution. Third, our tracker applies two kinds of CF models to estimate the location and scale, respectively. The location CF model exploits ensembles of shallow and deep features to determine the optimal position accurately. The scale CF model works on multi-scale shallow features to estimate the optimal scale efficiently. Extensive experiments on five recent benchmarks show that our tracker performs favorably against many state-of-the-art algorithms, with real-time performance of 28 fps.
| [] | [
"Visual Tracking"
] | [] | [
"OTB-2015"
] | [
"AUC"
] | Visual Tracking via Adaptive Spatially-Regularized Correlation Filters |
This paper focuses on two related subtasks of aspect-based sentiment analysis, namely aspect term extraction and aspect sentiment classification, which we call aspect term-polarity co-extraction. The former task is to extract aspects of a product or service from an opinion document, and the latter is to identify the polarity expressed in the document about these extracted aspects. Most existing algorithms address them as two separate tasks and solve them one by one, or only perform one task, which can be complicated for real applications. In this paper, we treat these two tasks as two sequence labeling problems and propose a novel Dual crOss-sharEd RNN framework (DOER) to generate all aspect term-polarity pairs of the input sentence simultaneously. Specifically, DOER involves a dual recurrent neural network to extract the respective representation of each task, and a cross-shared unit to consider the relationship between them. Experimental results demonstrate that the proposed framework outperforms state-of-the-art baselines on three benchmark datasets. | [] | [
"Aspect-Based Sentiment Analysis",
"Sentiment Analysis"
] | [] | [
"SemEval 2014 Task 4 Subtask 1+2",
"SemEval 2014 Task 4 Laptop"
] | [
"F1"
] | DOER: Dual Cross-Shared RNN for Aspect Term-Polarity Co-Extraction |
Satellite image time series, bolstered by their growing availability, are at the forefront of an extensive effort towards automated Earth monitoring by international institutions. In particular, large-scale control of agricultural parcels is an issue of major political and economic importance. In this regard, hybrid convolutional-recurrent neural architectures have shown promising results for the automated classification of satellite image time series. We propose an alternative approach in which the convolutional layers are advantageously replaced with encoders operating on unordered sets of pixels to exploit the typically coarse resolution of publicly available satellite images. We also propose to extract temporal features using a bespoke neural architecture based on self-attention instead of recurrent networks. We demonstrate experimentally that our method not only outperforms previous state-of-the-art approaches in terms of precision, but also significantly decreases processing time and memory requirements. Lastly, we release a large open-access annotated dataset as a benchmark for future work on satellite image time series. | [] | [
"Time Series",
"Time Series Classification"
] | [] | [
"s2-agri"
] | [
"oAcc",
"mIoU"
] | Satellite Image Time Series Classification with Pixel-Set Encoders and Temporal Self-Attention |
The topological landscape of gene interaction networks provides a rich source of information for inferring functional patterns of genes or proteins. However, it is still a challenging task to aggregate heterogeneous biological information such as gene expression and gene interactions to achieve more accurate inference for prediction and discovery of new gene interactions. In particular, how to generate a unified vector representation to integrate diverse input data is a key challenge addressed here. We propose a scalable and robust deep learning framework to learn embedded representations to unify known gene interactions and gene expression for gene interaction predictions. These low-dimensional embeddings derive deeper insights into the structure of rapidly accumulating and diverse gene interaction networks and greatly simplify downstream modeling. We compare the predictive power of our deep embeddings to the strong baselines. The results suggest that our deep embeddings achieve significantly more accurate predictions. Moreover, a set of novel gene interaction predictions are validated by up-to-date literature-based database entries. The proposed model demonstrates the importance of integrating heterogeneous information about genes for gene network inference. GNE is freely available under the GNU General Public License and can be downloaded from GitHub (https://github.com/kckishan/GNE). | [] | [
"Gene Interaction Prediction",
"Link Prediction"
] | [] | [
"BioGRID (human)",
"BioGRID(yeast)"
] | [
"Average Precision"
] | GNE: a deep learning framework for gene network inference by aggregating biological information |
We present a novel approach to adjust global image properties such as colour, saturation, and luminance using human-interpretable image enhancement curves, inspired by the Photoshop curves tool. Our method, dubbed neural CURve Layers (CURL), is designed as a multi-colour space neural retouching block trained jointly in three different colour spaces (HSV, CIELab, RGB) guided by a novel multi-colour space loss. The curves are fully differentiable and are trained end-to-end for different computer vision problems including photo enhancement (RGB-to-RGB) and as part of the image signal processing pipeline for image formation (RAW-to-RGB). To demonstrate the effectiveness of CURL we combine this global image transformation block with a pixel-level (local) image multi-scale encoder-decoder backbone network. In an extensive experimental evaluation we show that CURL produces state-of-the-art image quality versus recently proposed deep learning approaches in both objective and perceptual metrics, setting new state-of-the-art performance on multiple public datasets. Our code is publicly available at: https://github.com/sjmoran/CURL. | [] | [
"Demosaicking",
"Denoising",
"Image Enhancement"
] | [] | [
"MIT-Adobe 5k"
] | [
"SSIM",
"PSNR",
"LPIPS"
] | CURL: Neural Curve Layers for Global Image Enhancement |
In many scenarios of Person Re-identification (Re-ID), the gallery set consists of lots of surveillance videos and the query is just an image, thus Re-ID has to be conducted between image and videos. Compared with videos, still person images lack temporal information. Besides, the information asymmetry between image and video features increases the difficulty in matching images and videos. To solve this problem, we propose a novel Temporal Knowledge Propagation (TKP) method which propagates the temporal knowledge learned by the video representation network to the image representation network. Specifically, given the input videos, we enforce the image representation network to fit the outputs of video representation network in a shared feature space. With back propagation, temporal knowledge can be transferred to enhance the image features and the information asymmetry problem can be alleviated. With additional classification and integrated triplet losses, our model can learn expressive and discriminative image and video features for image-to-video re-identification. Extensive experiments demonstrate the effectiveness of our method and the overall results on two widely used datasets surpass the state-of-the-art methods by a large margin. Code is available at: https://github.com/guxinqian/TKP | [] | [
"Image-To-Video Person Re-Identification",
"Person Re-Identification",
"Video-Based Person Re-Identification"
] | [] | [
"MARS",
"iLIDS-VID"
] | [
"mAP",
"Rank-10",
"Rank-1",
"Rank-20",
"Rank-5"
] | Temporal Knowledge Propagation for Image-to-Video Person Re-identification |
Recent deep learning based approaches have shown promising results for the
challenging task of inpainting large missing regions in an image. These methods
can generate visually plausible image structures and textures, but often create
distorted structures or blurry textures inconsistent with surrounding areas.
This is mainly due to ineffectiveness of convolutional neural networks in
explicitly borrowing or copying information from distant spatial locations. On
the other hand, traditional texture and patch synthesis approaches are
particularly suitable when textures need to be borrowed from the surrounding
regions. Motivated by these observations, we propose a new deep generative
model-based approach which can not only synthesize novel image structures but
also explicitly utilize surrounding image features as references during network
training to make better predictions. The model is a feed-forward, fully
convolutional neural network which can process images with multiple holes at
arbitrary locations and with variable sizes at test time. Experiments
on multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and
natural images (ImageNet, Places2) demonstrate that our proposed approach
generates higher-quality inpainting results than existing ones. Code, demo and
models are available at: https://github.com/JiahuiYu/generative_inpainting. | [] | [
"Image Inpainting"
] | [] | [
"Places2 val"
] | [
"rect mask l2 err",
"rect mask l1 error",
"free-form mask l2 err",
"free-form mask l1 err"
] | Generative Image Inpainting with Contextual Attention |
In this paper, we propose to tackle the challenging few-shot learning (FSL) problem by learning global class representations using both base and novel class training samples. In each training episode, an episodic class mean computed from a support set is registered with the global representation via a registration module. This produces a registered global class representation for computing the classification loss using a query set. Though following a similar episodic training pipeline as existing meta learning based approaches, our method differs significantly in that novel class training samples are involved in the training from the beginning. To compensate for the lack of novel class training samples, an effective sample synthesis strategy is developed to avoid overfitting. Importantly, by joint base-novel class training, our approach can be easily extended to a more practical yet challenging FSL setting, i.e., generalized FSL, where the label space of test data is extended to both base and novel classes. Extensive experiments show that our approach is effective for both of the two FSL settings. | [] | [
"Few-Shot Image Classification",
"Few-Shot Learning",
"Generalized Few-Shot Classification",
"Meta-Learning"
] | [] | [
"OMNIGLOT - 5-Shot, 20-way",
"Mini-ImageNet - 1-Shot Learning",
"mini-ImageNet - 100-Way",
"OMNIGLOT - 1-Shot, 20-way"
] | [
"Accuracy"
] | Few-Shot Learning with Global Class Representations |
A Dialogue State Tracker (DST) is a key component in a dialogue system aiming at estimating the beliefs of possible user goals at each dialogue turn. Most current DST trackers make use of recurrent neural networks and are based on complex architectures that manage several aspects of a dialogue, including the user utterance, the system actions, and the slot-value pairs defined in a domain ontology. However, the complexity of such neural architectures incurs considerable latency in dialogue state prediction, which limits the deployment of these models in real-world applications, particularly when task scalability (i.e., the number of slots) is a crucial factor. In this paper, we propose an innovative neural model for dialogue state tracking, named Global encoder and Slot-Attentive decoders (G-SAT), which can predict the dialogue state with very low latency while maintaining high-level performance. We report experiments on three different languages (English, Italian, and German) of the WoZ2.0 dataset, and show that the proposed approach provides competitive advantages over state-of-the-art DST systems, both in terms of accuracy and in terms of time complexity for predictions, being over 15 times faster than the other systems. | [] | [
"Dialogue State Tracking"
] | [] | [
"Wizard-of-Oz"
] | [
"Request",
"Joint"
] | Scalable Neural Dialogue State Tracking |
This paper proposes an utterance-to-utterance interactive matching network (U2U-IMN) for multi-turn response selection in retrieval-based chatbots. Different from previous methods following context-to-response matching or utterance-to-response matching frameworks, this model treats both contexts and responses as sequences of utterances when calculating the matching degrees between them. For a context-response pair, the U2U-IMN model first encodes each utterance separately using recurrent and self-attention layers. Then, a global and bidirectional interaction between the context and the response is conducted using the attention mechanism to collect the matching information between them. The distances between context and response utterances are employed as a prior component when calculating the attention weights. Finally, sentence-level aggregation and context-response-level aggregation are executed in turn to obtain the feature vector for matching degree prediction. Experiments on four public datasets showed that our proposed method outperformed baseline methods on all metrics, achieving a new state-of-the-art performance and demonstrating compatibility across domains for multi-turn response selection. | [] | [
"Conversational Response Selection"
] | [] | [
"Ubuntu Dialogue (v1, Ranking)"
] | [
"R10@1",
"R10@5",
"R2@1",
"R10@2"
] | Utterance-to-Utterance Interactive Matching Network for Multi-Turn Response Selection in Retrieval-Based Chatbots |
Weakly supervised semantic segmentation is a challenging task as it only takes image-level information as supervision for training but produces pixel-level predictions for testing. To address such a challenging task, most recent state-of-the-art approaches propose to adopt two-step solutions, i.e., 1) learn to generate pseudo pixel-level masks, and 2) engage FCNs to train the semantic segmentation networks with the pseudo masks. However, the two-step solutions usually employ many bells and whistles in producing high-quality pseudo masks, making these methods complicated and inelegant. In this work, we harness the image-level labels to produce reliable pixel-level annotations and design a fully end-to-end network to learn to predict segmentation maps. Concretely, we first leverage an image classification branch to generate class activation maps for the annotated categories, which are further pruned into confident yet tiny object/background regions. Such reliable regions then directly serve as ground-truth labels for the parallel segmentation branch, where a newly designed dense energy loss function is adopted for optimization. Despite its apparent simplicity, our one-step solution achieves competitive mIoU scores (val: 62.6, test: 62.9) on Pascal VOC compared with those of two-step state-of-the-art methods. By extending our one-step method to two steps, we obtain a new state-of-the-art performance on Pascal VOC (val: 66.3, test: 66.5). | [] | [
"Image Classification",
"Semantic Segmentation",
"Weakly-Supervised Semantic Segmentation"
] | [] | [
"PASCAL VOC 2012 test",
"PASCAL VOC 2012 val"
] | [
"Mean IoU",
"mIoU"
] | Reliability Does Matter: An End-to-End Weakly Supervised Semantic Segmentation Approach |
Motivation
Recent neural approaches to event extraction from text mainly focus on flat events in the general domain, while there have been fewer attempts to detect nested and overlapping events. These existing systems are built on given entities and depend on external syntactic tools.
Results
We propose an end-to-end neural nested event extraction model named DeepEventMine that extracts multiple overlapping directed acyclic graph structures from a raw sentence. On top of the bidirectional encoder representations from transformers (BERT) model, our model detects nested entities and triggers, roles, nested events and their modifications in an end-to-end manner without any syntactic tools. Our DeepEventMine model achieves new state-of-the-art performance on seven biomedical nested event extraction tasks. Even when gold entities are unavailable, our model can detect events from raw text with promising performance.
Availability and implementation
Our codes and models to reproduce the results are available at: https://github.com/aistairc/DeepEventMine. | [] | [
"Event Extraction"
] | [] | [
"GENIA",
"Infectious Diseases 2011 (ID)",
"GENIA 2013",
"Multi-Level Event Extraction (MLEE)",
"Cancer Genetics 2013 (CG)",
"Epigenetics and Post-translational Modifications 2011 (EPI)",
"Pathway Curation 2013 (PC)"
] | [
"F1"
] | DeepEventMine: end-to-end neural nested event extraction from biomedical texts |
Large pre-trained language models (LMs) are known to encode substantial amounts of linguistic information. However, high-level reasoning skills, such as numerical reasoning, are difficult to learn from a language-modeling objective only. Consequently, existing models for numerical reasoning have used specialized architectures with limited flexibility. In this work, we show that numerical reasoning is amenable to automatic data generation, and thus one can inject this skill into pre-trained LMs, by generating large amounts of data, and training in a multi-task setup. We show that pre-training our model, GenBERT, on this data, dramatically improves performance on DROP (49.3 $\rightarrow$ 72.3 F1), reaching performance that matches state-of-the-art models of comparable size, while using a simple and general-purpose encoder-decoder architecture. Moreover, GenBERT generalizes well to math word problem datasets, while maintaining high performance on standard RC tasks. Our approach provides a general recipe for injecting skills into large pre-trained LMs, whenever the skill is amenable to automatic data augmentation. | [] | [
"Data Augmentation",
"Language Modelling",
"Question Answering"
] | [] | [
"DROP Test"
] | [
"F1"
] | Injecting Numerical Reasoning Skills into Language Models |
We propose ThaiLMCut, a semi-supervised approach for Thai word segmentation which utilizes a bi-directional character language model (LM) as a way to leverage useful linguistic knowledge from unlabeled data. After the language model is trained on substantial unlabeled corpora, the weights of its embedding and recurrent layers are transferred to a supervised word segmentation model which continues fine-tuning them on a word segmentation task. Our experimental results demonstrate that applying the LM always leads to a performance gain, especially when the amount of labeled data is small. In such cases, the F1 Score increased by up to 2.02%. Even on a big labeled dataset, a small improvement can still be obtained. The approach has also been shown to be very beneficial for out-of-domain settings, with a gain in F1 Score of up to 3.13%. Finally, we show that ThaiLMCut can outperform other open source state-of-the-art models, achieving an F1 Score of 98.78% on the standard benchmark, InterBEST2009. | [] | [
"Language Modelling",
"Thai Word Segmentation"
] | [] | [
"BEST-2010"
] | [
"F1-Score"
] | ThaiLMCut: Unsupervised Pretraining for Thai Word Segmentation |
Current graph neural network (GNN) architectures naively average or sum node embeddings into an aggregated graph representation -- potentially losing structural or semantic information. We here introduce OT-GNN, a model that computes graph embeddings using parametric prototypes that highlight key facets of different graph aspects. Towards this goal, we are (to our knowledge) the first to successfully combine optimal transport (OT) with parametric graph models. Graph representations are obtained from Wasserstein distances between the set of GNN node embeddings and "prototype" point clouds as free parameters. We theoretically prove that, unlike traditional sum aggregation, our function class on point clouds satisfies a fundamental universal approximation theorem. Empirically, we address an inherent collapse optimization issue by proposing a noise contrastive regularizer to steer the model towards truly exploiting the optimal transport geometry. Finally, we consistently report better generalization performance on several molecular property prediction tasks, while exhibiting smoother graph representations. | [] | [
"Drug Discovery",
"Graph Regression",
"Molecular Property Prediction"
] | [] | [
"Lipophilicity",
"BACE",
"BBBP",
"ESOL",
"Lipophilicity "
] | [
"RMSE",
"AUC"
] | Optimal Transport Graph Neural Networks |
Multi-level feature fusion is a fundamental topic in computer vision. It has been exploited to detect, segment and classify objects at various scales. When multi-level features meet multi-modal cues, the optimal feature aggregation and multi-modal learning strategy become key open questions. In this paper, we leverage the inherent multi-modal and multi-level nature of RGB-D salient object detection to devise a novel cascaded refinement network. In particular, first, we propose to regroup the multi-level features into teacher and student features using a bifurcated backbone strategy (BBS). Second, we introduce a depth-enhanced module (DEM) to excavate informative depth cues from the channel and spatial views. Then, RGB and depth modalities are fused in a complementary way. Our architecture, named Bifurcated Backbone Strategy Network (BBS-Net), is simple, efficient, and backbone-independent. Extensive experiments show that BBS-Net significantly outperforms eighteen SOTA models on eight challenging datasets under five evaluation measures, demonstrating the superiority of our approach (~4% improvement in S-measure vs. the top-ranked model, DMRA-iccv2019). In addition, we provide a comprehensive analysis of the generalization ability of different RGB-D datasets and provide a powerful training set for future research. | [] | [
"Object Detection",
"RGB-D Salient Object Detection",
"RGB Salient Object Detection",
"Salient Object Detection"
] | [] | [
"STERE",
"NLPR",
"DES",
"SIP",
"LFSD",
"NJU2K",
"SSD"
] | [
"max E-Measure",
"Average MAE",
"S-Measure",
"max F-Measure"
] | Bifurcated backbone strategy for RGB-D salient object detection |
In this paper, we propose an adaptive weighting regression (AWR) method to leverage the advantages of both detection-based and regression-based methods. Hand joint coordinates are estimated as discrete integration of all pixels in dense representation, guided by adaptive weight maps. This learnable aggregation process introduces both dense and joint supervision that allows end-to-end training and brings adaptability to weight maps, making the network more accurate and robust. Comprehensive exploration experiments are conducted to validate the effectiveness and generality of AWR under various experimental settings, especially its usefulness for different types of dense representation and input modality. Our method outperforms other state-of-the-art methods on four publicly available datasets, including NYU, ICVL, MSRA and HANDS 2017 dataset. | [] | [
"3D Hand Pose Estimation",
"Hand Pose Estimation",
"Pose Estimation",
"Regression"
] | [] | [
"MSRA Hands",
"HANDS 2019",
"NYU Hands",
"ICVL Hands",
"HANDS 2017"
] | [
"Average 3D Error"
] | AWR: Adaptive Weighting Regression for 3D Hand Pose Estimation |
We present a novel Bipartite Graph Reasoning GAN (BiGraphGAN) for the challenging person image generation task. The proposed graph generator mainly consists of two novel blocks that aim to model the pose-to-pose and pose-to-image relations, respectively. Specifically, the proposed Bipartite Graph Reasoning (BGR) block aims to reason about the crossing long-range relations between the source pose and the target pose in a bipartite graph, which mitigates some challenges caused by pose deformation. Moreover, we propose a new Interaction-and-Aggregation (IA) block to effectively update and enhance the feature representation capability of both the person's shape and appearance in an interactive way. Experiments on two challenging and public datasets, i.e., Market-1501 and DeepFashion, show the effectiveness of the proposed BiGraphGAN in terms of objective quantitative scores and subjective visual realness. The source code and trained models are available at https://github.com/Ha0Tang/BiGraphGAN. | [] | [
"Image Generation",
"Pose Transfer"
] | [] | [
"Market-1501",
"Deep-Fashion"
] | [
"PCKh",
"SSIM",
"mask-IS",
"mask-SSIM",
"IS"
] | Bipartite Graph Reasoning GANs for Person Image Generation |
Object recognition in video is an important task for many applications, including autonomous driving perception, surveillance tasks, wearable devices and IoT networks. Object recognition using video data is more challenging than using still images due to blur, occlusions or rare object poses. Specific video detectors with high computational cost or standard image detectors together with a fast post-processing algorithm achieve the current state-of-the-art. This work introduces a novel post-processing pipeline that overcomes some of the limitations of previous post-processing methods by introducing a learning-based similarity evaluation between detections across frames. Our method improves the results of state-of-the-art specific video detectors, especially regarding fast-moving objects, and presents low resource requirements. When applied to efficient still-image detectors such as YOLO, it provides results comparable to those of much more computationally intensive detectors. | [] | [
"Autonomous Driving",
"Dense Object Detection",
"Object Detection",
"Object Recognition",
"Real-Time Object Detection",
"Video Object Detection"
] | [] | [
"ImageNet VID"
] | [
"runtime (ms)",
"MAP"
] | Robust and Efficient Post-Processing for Video Object Detection (REPP) |
Spatial pooling has been proven highly effective in capturing long-range contextual information for pixel-wise prediction tasks, such as scene parsing. In this paper, beyond conventional spatial pooling that usually has a regular shape of NxN, we rethink the formulation of spatial pooling by introducing a new pooling strategy, called strip pooling, which considers a long but narrow kernel, i.e., 1xN or Nx1. Based on strip pooling, we further investigate spatial pooling architecture design by 1) introducing a new strip pooling module that enables backbone networks to efficiently model long-range dependencies, 2) presenting a novel building block with diverse spatial pooling as a core, and 3) systematically comparing the performance of the proposed strip pooling and conventional spatial pooling techniques. Both novel pooling-based designs are lightweight and can serve as an efficient plug-and-play module in existing scene parsing networks. Extensive experiments on popular benchmarks (e.g., ADE20K and Cityscapes) demonstrate that our simple approach establishes new state-of-the-art results. Code is made available at https://github.com/Andrew-Qibin/SPNet. | [] | [
"Scene Parsing"
] | [] | [
"ADE20K",
"Cityscapes test"
] | [
"Mean IoU (class)",
"Validation mIoU"
] | Strip Pooling: Rethinking Spatial Pooling for Scene Parsing |
Feature warping is a core technique in optical flow estimation; however, the ambiguity caused by occluded areas during warping is a major problem that remains unsolved. In this paper, we propose an asymmetric occlusion-aware feature matching module, which can learn a rough occlusion mask that filters useless (occluded) areas immediately after feature warping without any explicit supervision. The proposed module can be easily integrated into end-to-end network architectures and enjoys performance gains while introducing negligible computational cost. The learned occlusion mask can be further fed into a subsequent network cascade with dual feature pyramids with which we achieve state-of-the-art performance. At the time of submission, our method, called MaskFlownet, surpasses all published optical flow methods on the MPI Sintel, KITTI 2012 and 2015 benchmarks. Code is available at https://github.com/microsoft/MaskFlownet. | [] | [
"Optical Flow Estimation"
] | [] | [
"KITTI 2012",
"Sintel-final",
"Sintel-clean",
"KITTI 2015"
] | [
"Average End-Point Error",
"Fl-all"
] | MaskFlownet: Asymmetric Feature Matching with Learnable Occlusion Mask |
Matrix completion models are among the most common formulations of
recommender systems. Recent works have shown a boost in the performance of these
techniques when introducing the pairwise relationships between users/items in
the form of graphs, and imposing smoothness priors on these graphs. However,
such techniques do not fully exploit the local stationarity structures of
user/item graphs, and the number of parameters to learn is linear w.r.t. the
number of users and items. We propose a novel approach to overcome these
limitations by using geometric deep learning on graphs. Our matrix completion
architecture combines graph convolutional neural networks and recurrent neural
networks to learn meaningful statistical graph-structured patterns and the
non-linear diffusion process that generates the known ratings. This neural
network system requires a constant number of parameters independent of the
matrix size. We apply our method on both synthetic and real datasets, showing
that it outperforms state-of-the-art techniques. | [] | [
"Matrix Completion",
"Recommendation Systems"
] | [] | [
"YahooMusic Monti",
"Douban Monti",
"MovieLens 100K",
"Flixster Monti"
] | [
"RMSE (u1 Splits)",
"RMSE"
] | Geometric Matrix Completion with Recurrent Multi-Graph Neural Networks |
We present a novel neural network for processing sequences. The ByteNet is a
one-dimensional convolutional neural network that is composed of two parts, one
to encode the source sequence and the other to decode the target sequence. The
two network parts are connected by stacking the decoder on top of the encoder
and preserving the temporal resolution of the sequences. To address the
differing lengths of the source and the target, we introduce an efficient
mechanism by which the decoder is dynamically unfolded over the representation
of the encoder. The ByteNet uses dilation in the convolutional layers to
increase its receptive field. The resulting network has two core properties: it
runs in time that is linear in the length of the sequences and it sidesteps the
need for excessive memorization. The ByteNet decoder attains state-of-the-art
performance on character-level language modelling and outperforms the previous
best results obtained with recurrent networks. The ByteNet also achieves
state-of-the-art performance on character-to-character machine translation on
the English-to-German WMT translation task, surpassing comparable neural
translation models that are based on recurrent networks with attentional
pooling and run in quadratic time. We find that the latent alignment structure
contained in the representations reflects the expected alignment between the
tokens. | [] | [
"Language Modelling",
"Machine Translation"
] | [] | [
"enwik8",
"WMT2014 English-German",
"WMT2015 English-German"
] | [
"Bit per Character (BPC)",
"BLEU score"
] | Neural Machine Translation in Linear Time |
We introduce a deep network architecture called DerainNet for removing rain
streaks from an image. Based on the deep convolutional neural network (CNN), we
directly learn the mapping relationship between rainy and clean image detail
layers from data. Because we do not possess the ground truth corresponding to
real-world rainy images, we synthesize images with rain for training. In
contrast to other common strategies that increase depth or breadth of the
network, we use image processing domain knowledge to modify the objective
function and improve deraining with a modestly-sized CNN. Specifically, we
train our DerainNet on the detail (high-pass) layer rather than in the image
domain. Though DerainNet is trained on synthetic data, we find that the learned
network translates very effectively to real-world images for testing. Moreover,
we augment the CNN framework with image enhancement to improve the visual
results. Compared with state-of-the-art single image de-raining methods, our
method achieves improved rain removal and much faster computation time after network
training. | [] | [
"Image Enhancement",
"Rain Removal",
"Single Image Deraining"
] | [] | [
"Test2800",
"Rain100H",
"Test100",
"Test1200",
"Rain100L"
] | [
"SSIM",
"PSNR"
] | Clearing the Skies: A deep network architecture for single-image rain removal |
As a successful deep model applied in image super-resolution (SR), the
Super-Resolution Convolutional Neural Network (SRCNN) has demonstrated superior
performance to the previous hand-crafted models both in speed and restoration
quality. However, the high computational cost still hinders it from practical
usage that demands real-time performance (24 fps). In this paper, we aim at
accelerating the current SRCNN, and propose a compact hourglass-shape CNN
structure for faster and better SR. We re-design the SRCNN structure mainly in
three aspects. First, we introduce a deconvolution layer at the end of the
network, then the mapping is learned directly from the original low-resolution
image (without interpolation) to the high-resolution one. Second, we
reformulate the mapping layer by shrinking the input feature dimension before
mapping and expanding back afterwards. Third, we adopt smaller filter sizes but
more mapping layers. The proposed model achieves a speed up of more than 40
times with even superior restoration quality. Further, we present the parameter
settings that can achieve real-time performance on a generic CPU while still
maintaining good performance. A corresponding transfer strategy is also
proposed for fast training and testing across different upscaling factors. | [] | [
"Image Super-Resolution",
"Super-Resolution"
] | [] | [
"FFHQ 256 x 256 - 4x upscaling",
"BSD100 - 2x upscaling",
"FFHQ 1024 x 1024 - 4x upscaling"
] | [
"SSIM",
"PSNR",
"FID",
"MS-SSIM"
] | Accelerating the Super-Resolution Convolutional Neural Network |
Current state of the art object recognition architectures achieve impressive
performance but are typically specialized for a single depictive style (e.g.
photos only, sketches only). In this paper, we present SwiDeN: our
Convolutional Neural Network (CNN) architecture which recognizes objects
regardless of how they are visually depicted (line drawing, realistic shaded
drawing, photograph etc.). In SwiDeN, we utilize a novel `deep' depictive
style-based switching mechanism which appropriately addresses the
depiction-specific and depiction-invariant aspects of the problem. We compare
SwiDeN with alternative architectures and prior work on a 50-category Photo-Art
dataset containing objects depicted in multiple styles. Experimental results
show that SwiDeN outperforms other approaches for the depiction-invariant
object recognition problem. | [] | [
"Depiction Invariant Object Recognition",
"Object Recognition"
] | [] | [
"Photo-Art-50"
] | [
"Overall Accuracy"
] | SwiDeN : Convolutional Neural Networks For Depiction Invariant Object Recognition |
The existing machine translation systems, whether phrase-based or neural,
have relied almost exclusively on word-level modelling with explicit
segmentation. In this paper, we ask a fundamental question: can neural machine
translation generate a character sequence without any explicit segmentation? To
answer this question, we evaluate an attention-based encoder-decoder with a
subword-level encoder and a character-level decoder on four language
pairs -- En-Cs, En-De, En-Ru and En-Fi -- using the parallel corpora from WMT'15.
Our experiments show that the models with a character-level decoder outperform
the ones with a subword-level decoder on all of the four language pairs.
Furthermore, the ensembles of neural models with a character-level decoder
outperform the state-of-the-art non-neural machine translation systems on
En-Cs, En-De and En-Fi and perform comparably on En-Ru. | [] | [
"Machine Translation"
] | [] | [
"WMT2015 English-German"
] | [
"BLEU score"
] | A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation |
We introduce the multiresolution recurrent neural network, which extends the
sequence-to-sequence framework to model natural language generation as two
parallel discrete stochastic processes: a sequence of high-level coarse tokens,
and a sequence of natural language tokens. There are many ways to estimate or
learn the high-level coarse tokens, but we argue that a simple extraction
procedure is sufficient to capture a wealth of high-level discourse semantics.
Such a procedure allows training the multiresolution recurrent neural network by
maximizing the exact joint log-likelihood over both sequences. In contrast to
the standard log-likelihood objective w.r.t. natural language tokens (word
perplexity), optimizing the joint log-likelihood biases the model towards
modeling high-level abstractions. We apply the proposed model to the task of
dialogue response generation in two challenging domains: the Ubuntu technical
support domain, and Twitter conversations. On Ubuntu, the model outperforms
competing approaches by a substantial margin, achieving state-of-the-art
results according to both automatic evaluation metrics and a human evaluation
study. On Twitter, the model appears to generate more relevant and on-topic
responses according to automatic evaluation metrics. Finally, our experiments
demonstrate that the proposed model is more adept at overcoming the sparsity of
natural language and is better able to capture long-term structure. | [] | [
"Dialogue Generation",
"Text Generation"
] | [] | [
"Ubuntu Dialogue (Activity)",
"Ubuntu Dialogue (Tense)",
"Twitter Dialogue (Noun)",
"Ubuntu Dialogue (Cmd)",
"Ubuntu Dialogue (Entity)",
"Twitter Dialogue (Tense)"
] | [
"Precision",
"Recall",
"F1",
"Accuracy"
] | Multiresolution Recurrent Neural Networks: An Application to Dialogue Response Generation |
Surgical workflow recognition has numerous potential medical applications,
such as the automatic indexing of surgical video databases and the optimization
of real-time operating room scheduling, among others. As a result, phase
recognition has been studied in the context of several kinds of surgeries, such
as cataract, neurological, and laparoscopic surgeries. In the literature, two
types of features are typically used to perform this task: visual features and
tool usage signals. However, the visual features used are mostly handcrafted.
Furthermore, the tool usage signals are usually collected via a manual
annotation process or by using additional equipment. In this paper, we propose
a novel method for phase recognition that uses a convolutional neural network
(CNN) to automatically learn features from cholecystectomy videos and that
relies uniquely on visual information. In previous studies, it has been shown
that the tool signals can provide valuable information in performing the phase
recognition task. Thus, we present a novel CNN architecture, called EndoNet,
that is designed to carry out the phase recognition and tool presence detection
tasks in a multi-task manner. To the best of our knowledge, this is the first
work proposing to use a CNN for multiple recognition tasks on laparoscopic
videos. Extensive experimental comparisons to other methods show that EndoNet
yields state-of-the-art results for both tasks. | [] | [
"Surgical tool detection"
] | [] | [
"Cholec80"
] | [
"mAP"
] | EndoNet: A Deep Architecture for Recognition Tasks on Laparoscopic Videos |
Achieving efficient and scalable exploration in complex domains poses a major
challenge in reinforcement learning. While Bayesian and PAC-MDP approaches to
the exploration problem offer strong formal guarantees, they are often
impractical in higher dimensions due to their reliance on enumerating the
state-action space. Hence, exploration in complex domains is often performed
with simple epsilon-greedy methods. In this paper, we consider the challenging
Atari games domain, which requires processing raw pixel inputs and delayed
rewards. We evaluate several more sophisticated exploration strategies,
including Thompson sampling and Boltzmann exploration, and propose a new
exploration method based on assigning exploration bonuses from a concurrently
learned model of the system dynamics. By parameterizing our learned model with
a neural network, we are able to develop a scalable and efficient approach to
exploration bonuses that can be applied to tasks with complex, high-dimensional
state spaces. In the Atari domain, our method provides the most consistent
improvement across a range of games that pose a major challenge for prior
methods. In addition to raw game-scores, we also develop an AUC-100 metric for
the Atari Learning domain to evaluate the impact of exploration on this
benchmark. | [] | [
"Atari Games"
] | [] | [
"Atari 2600 Venture",
"Atari 2600 Montezuma's Revenge",
"Atari 2600 Frostbite",
"Atari 2600 Freeway",
"Atari 2600 Q*Bert"
] | [
"Score"
] | Incentivizing Exploration In Reinforcement Learning With Deep Predictive Models |
We present a state-of-the-art speech recognition system developed using
end-to-end deep learning. Our architecture is significantly simpler than
traditional speech systems, which rely on laboriously engineered processing
pipelines; these traditional systems also tend to perform poorly when used in
noisy environments. In contrast, our system does not need hand-designed
components to model background noise, reverberation, or speaker variation, but
instead directly learns a function that is robust to such effects. We do not
need a phoneme dictionary, nor even the concept of a "phoneme." Key to our
approach is a well-optimized RNN training system that uses multiple GPUs, as
well as a set of novel data synthesis techniques that allow us to efficiently
obtain a large amount of varied data for training. Our system, called Deep
Speech, outperforms previously published results on the widely studied
Switchboard Hub5'00, achieving 16.0% error on the full test set. Deep Speech
also handles challenging noisy environments better than widely used,
state-of-the-art commercial speech systems. | [] | [
"Accented Speech Recognition",
"End-To-End Speech Recognition",
"Speech Recognition"
] | [] | [
"swb_hub_500 WER fullSWBCH",
"Switchboard + Hub500",
"VoxForge American-Canadian",
"CHiME clean",
"VoxForge Commonwealth",
"CHiME real",
"VoxForge European",
"VoxForge Indian"
] | [
"Percentage error"
] | Deep Speech: Scaling up end-to-end speech recognition |
We introduce a multi-task setup of identifying and classifying entities,
relations, and coreference clusters in scientific articles. We create SciERC, a
dataset that includes annotations for all three tasks and develop a unified
framework called Scientific Information Extractor (SciIE) with shared span
representations. The multi-task setup reduces cascading errors between tasks
and leverages cross-sentence relations through coreference links. Experiments
show that our multi-task model outperforms previous models in scientific
information extraction without using any domain-specific features. We further
show that the framework supports construction of a scientific knowledge graph,
which we use to analyze information in scientific literature. | [] | [
"Coreference Resolution",
"Joint Entity and Relation Extraction",
"Named Entity Recognition"
] | [] | [
"SciERC"
] | [
"Relation F1",
"F1",
"Entity F1"
] | Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction |
In this paper, we propose a convolutional layer inspired by optical flow algorithms to learn motion representations. Our representation flow layer is a fully-differentiable layer designed to capture the `flow' of any representation channel within a convolutional neural network for action recognition. Its parameters for iterative flow optimization are learned in an end-to-end fashion together with the other CNN model parameters, maximizing the action recognition performance. Furthermore, we newly introduce the concept of learning `flow of flow' representations by stacking multiple representation flow layers. We conducted extensive experimental evaluations, confirming its advantages over previous recognition models using traditional optical flows in both computational speed and performance. Code/models available here: https://piergiaj.github.io/rep-flow-site/ | [] | [
"Action Classification",
"Action Recognition",
"Action Recognition In Videos",
"Activity Recognition",
"Activity Recognition In Videos",
"Optical Flow Estimation",
"Temporal Action Localization",
"Video Classification",
"Video Understanding"
] | [] | [
"Kinetics-400",
"HMDB-51"
] | [
"Average accuracy of 3 splits",
"Vid acc@1"
] | Representation Flow for Action Recognition |
In this paper, we study the task of image retrieval, where the input query is
specified in the form of an image plus some text that describes desired
modifications to the input image. For example, we may present an image of the
Eiffel tower, and ask the system to find images which are visually similar but
are modified in small ways, such as being taken at nighttime instead of during
the day. To tackle this task, we learn a similarity metric between a target
image and a source image plus source text, i.e., an embedding and a composing
function such that the target image feature is close to the source image plus
text composition feature. We propose a new way to combine image and text using
such a function designed for the retrieval task. We show that this outperforms
existing approaches on 3 different datasets, namely Fashion-200k, MIT-States
and a new synthetic dataset we create based on CLEVR. We also show that our
approach can be used to classify input queries, in addition to image retrieval. | [] | [
"Image Retrieval",
"Image Retrieval with Multi-Modal Query"
] | [] | [
"FashionIQ",
"MIT-States",
"Fashion200k"
] | [
"Recall@50",
"Recall@1",
"Recall@5",
"Recall@10"
] | Composing Text and Image for Image Retrieval - An Empirical Odyssey |
We present the Natural Questions corpus, a question answering dataset. Questions consist of real anonymized, aggregated queries issued to the Google search engine. An annotator is presented with a question along with a Wikipedia page from the top 5 search results, and annotates a long answer (typically a paragraph) and a short answer (one or more entities) if present on the page, or marks null if no long/short answer is present. The public release consists of 307,373 training examples with single annotations, 7,830 examples with 5-way annotations for development data, and a further 7,842 examples 5-way annotated sequestered as test data. We present experiments validating quality of the data. We also describe analysis of 25-way annotations on 302 examples, giving insights into human variability on the annotation task. We introduce robust metrics for the purposes of evaluating question answering systems; demonstrate high human upper bounds on these metrics; and establish baseline results using competitive methods drawn from related literature. | [] | [
"Question Answering"
] | [] | [
"Natural Questions (long)",
"Natural Questions (short)"
] | [
"F1"
] | Natural Questions: a Benchmark for Question Answering Research |
High dynamic range (HDR) image generation from a single exposure low dynamic range (LDR) image has been made possible due to the recent advances in Deep Learning. Various feed-forward Convolutional Neural Networks (CNNs) have been proposed for learning LDR to HDR representations. To better utilize the power of CNNs, we exploit the idea of feedback, where the initial low-level features are guided by the high-level features using a hidden state of a Recurrent Neural Network. Unlike a single forward pass in a conventional feed-forward network, the reconstruction from LDR to HDR in a feedback network is learned over multiple iterations. This enables us to create a coarse-to-fine representation, leading to an improved reconstruction at every iteration. Various advantages over standard feed-forward networks include early reconstruction ability and better reconstruction quality with fewer network parameters. We design a dense feedback block and propose an end-to-end feedback network, FHDR, for HDR image generation from a single exposure LDR image. Qualitative and quantitative evaluations show the superiority of our approach over the state-of-the-art methods. | [] | [
"Image Generation",
"Image Reconstruction",
"Single-Image-Based Hdr Reconstruction"
] | [] | [
"City Scene Dataset"
] | [
"SSIM",
"PSNR",
"HDR-VDP2 Q SCORE"
] | FHDR: HDR Image Reconstruction from a Single LDR Image using Feedback Network |
Man-made scenes can be densely packed, containing numerous objects, often
identical, positioned in close proximity. We show that precise object detection
in such scenes remains a challenging frontier even for state-of-the-art object
detectors. We propose a novel, deep-learning based method for precise object
detection, designed for such challenging settings. Our contributions include:
(1) A layer for estimating the Jaccard index as a detection quality score; (2)
a novel EM merging unit, which uses our quality scores to resolve detection
overlap ambiguities; finally, (3) an extensive, annotated data set, SKU-110K,
representing packed retail environments, released for training and testing
under such extreme settings. Detection tests on SKU-110K and counting tests on
the CARPK and PUCPR+ datasets show that our method outperforms the existing
state of the art by substantial margins. The code and data will be made
available on
\url{www.github.com/eg4000/SKU110K_CVPR19}. | [] | [
"Dense Object Detection",
"Object Detection"
] | [] | [
"CARPK",
"SKU-110K"
] | [
"MAE",
"AP75",
"RMSE",
"AP"
] | Precise Detection in Densely Packed Scenes |
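The quality layer mentioned in the record above regresses the Jaccard index (IoU) between a predicted box and the object it covers. For reference, a small, self-contained version of that quantity (the regression head and the EM merging unit themselves are not shown):

```python
# Reference computation of the Jaccard index (IoU) used as a detection quality score.
def jaccard_index(box_a, box_b):
    """Boxes are (x1, y1, x2, y2) with x2 > x1 and y2 > y1."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```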
To cope with the diversity and complexity of multi-view data in semi-supervised classification, most existing graph convolutional networks focus on network architecture construction or on preserving the salient graph structure, and ignore the contribution of the complete graph structure to semi-supervised classification. To mine a more complete distribution structure from multi-view data, taking both specificity and commonality into account, we propose structure fusion based on graph convolutional networks (SF-GCN) to improve the performance of semi-supervised classification. SF-GCN not only retains the specific characteristics of each view by spectral embedding, but also captures the common structure of multi-view data through a distance metric between multi-graph structures. Assuming a linear relationship between multi-graph structures, we construct the optimization function of the structure fusion model by balancing the specificity loss and the commonality loss. By solving this function, we simultaneously obtain the fusion spectral embedding of the multi-view data and the fusion structure, which is used as the adjacency matrix input to graph convolutional networks for semi-supervised classification. Experiments demonstrate that SF-GCN outperforms the state of the art on three challenging citation-network datasets: Cora, Citeseer and Pubmed. | [] | [
"Node Classification"
] | [] | [
"Cora",
"Pubmed",
"Citeseer"
] | [
"Accuracy"
] | Structure fusion based on graph convolutional networks for semi-supervised classification |
We introduce SharpNet, a method that predicts an accurate depth map for an input color image, with a particular attention to the reconstruction of occluding contours: Occluding contours are an important cue for object recognition, and for realistic integration of virtual objects in Augmented Reality, but they are also notoriously difficult to reconstruct accurately. For example, they are a challenge for stereo-based reconstruction methods, as points around an occluding contour are visible in only one image. Inspired by recent methods that introduce normal estimation to improve depth prediction, we introduce a novel term that constrains depth and occluding contours predictions. Since ground truth depth is difficult to obtain with pixel-perfect accuracy along occluding contours, we use synthetic images for training, followed by fine-tuning on real data. We demonstrate our approach on the challenging NYUv2-Depth dataset, and show that our method outperforms the state-of-the-art along occluding contours, while performing on par with the best recent methods for the rest of the images. Its accuracy along the occluding contours is actually better than the `ground truth' acquired by a depth camera based on structured light. We show this by introducing a new benchmark based on NYUv2-Depth for evaluating occluding contours in monocular reconstruction, which is our second contribution. | [] | [
"Depth Estimation",
"Monocular Depth Estimation",
"Object Recognition"
] | [] | [
"NYU-Depth V2"
] | [
"RMSE"
] | SharpNet: Fast and Accurate Recovery of Occluding Contours in Monocular Depth Estimation |
Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having $O(1)$ global tokens (such as CLS), that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to 8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context, BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data. | [] | [
"Chromatin-Profile Prediction",
"Document Summarization",
"Linguistic Acceptability",
"Natural Language Inference",
"Question Answering",
"Semantic Textual Similarity",
"Sentiment Analysis",
"Text Classification",
"Text Summarization"
] | [] | [
"MultiNLI",
"arXiv",
"TriviaQA",
"SST-2 Binary classification",
"Patents",
"WikiHop",
"Yelp-5",
"HotpotQA",
"BBC XSum",
"Hyperpartisan",
"IMDb",
"STS Benchmark",
"CoLA",
"QNLI",
"BigPatent",
"CNN / Daily Mail",
"RTE",
"MRPC",
"Natural Questions",
"Pubmed",
"Quora Question Pairs"
] | [
"F1 (Long)",
"Sup",
"ROUGE-1",
"Accuracy (2 classes)",
"F1 (Short)",
"Test",
"Spearman Correlation",
"Matched",
"ROUGE-2",
"Ans",
"F1",
"Joint F1",
"ROUGE-L",
"Accuracy",
"Accuracy (10 classes)"
] | Big Bird: Transformers for Longer Sequences |
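BigBird's sparse attention combines a few global tokens, a local sliding window, and a handful of random connections per query. A rough sketch of building such a boolean attention mask is shown below; the window size, counts, and dense mask construction are illustrative only, since efficient implementations operate blockwise rather than materializing a full mask.

```python
# Illustrative sketch of a BigBird-style sparse attention mask (not the official code).
import numpy as np

def bigbird_mask(seq_len, num_global=2, window=3, num_random=2, seed=0):
    """Boolean mask where mask[i, j] == True means query i may attend to key j."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    # Global tokens (e.g. CLS) attend everywhere and are attended to by every token.
    mask[:num_global, :] = True
    mask[:, :num_global] = True
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True                                       # sliding window
        mask[i, rng.integers(0, seq_len, size=num_random)] = True   # random keys
    return mask
```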
Joint extraction of entities and relations aims to detect entity pairs along with their relations using a single model. Prior work typically solves this task in the extract-then-classify or unified labeling manner. However, these methods either suffer from the redundant entity pairs, or ignore the important inner structure in the process of extracting entities and relations. To address these limitations, in this paper, we first decompose the joint extraction task into two interrelated subtasks, namely HE extraction and TER extraction. The former subtask is to distinguish all head-entities that may be involved with target relations, and the latter is to identify corresponding tail-entities and relations for each extracted head-entity. Next, these two subtasks are further deconstructed into several sequence labeling problems based on our proposed span-based tagging scheme, which are conveniently solved by a hierarchical boundary tagger and a multi-span decoding algorithm. Owing to the reasonable decomposition strategy, our model can fully capture the semantic interdependency between different steps, as well as reduce noise from irrelevant entity pairs. Experimental results show that our method outperforms previous work by 5.2%, 5.9% and 21.5% (F1 score), achieving a new state-of-the-art on three public datasets | [] | [
"Relation Extraction"
] | [] | [
"NYT",
"NYT-single",
"WebNLG"
] | [
"F1"
] | Joint Extraction of Entities and Relations Based on a Novel Decomposition Strategy |
Text simplification aims at making a text easier to read and understand by simplifying grammar and structure while keeping the underlying information identical. It is often considered an all-purpose generic task where the same simplification is suitable for all; however multiple audiences can benefit from simplified text in different ways. We adapt a discrete parametrization mechanism that provides explicit control on simplification systems based on Sequence-to-Sequence models. As a result, users can condition the simplifications returned by a model on attributes such as length, amount of paraphrasing, lexical complexity and syntactic complexity. We also show that carefully chosen values of these attributes allow out-of-the-box Sequence-to-Sequence models to outperform their standard counterparts on simplification benchmarks. Our model, which we call ACCESS (as shorthand for AudienCe-CEntric Sentence Simplification), establishes the state of the art at 41.87 SARI on the WikiLarge test set, a +1.42 improvement over the best previously reported score. | [] | [
"Text Simplification"
] | [] | [
"ASSET",
"TurkCorpus"
] | [
"BLEU",
"SARI (EASSE>=0.2.1)"
] | Controllable Sentence Simplification |
We address Unsupervised Video Object Segmentation (UVOS), the task of automatically generating accurate pixel masks for salient objects in a video sequence and of tracking these objects consistently through time, without any input about which objects should be tracked. Towards solving this task, we present UnOVOST (Unsupervised Offline Video Object Segmentation and Tracking) as a simple and generic algorithm which is able to track and segment a large variety of objects. This algorithm builds up tracks in a number of stages, first grouping segments into short tracklets that are spatio-temporally consistent, before merging these tracklets into long-term consistent object tracks based on their visual similarity. In order to achieve this, we introduce a novel tracklet-based Forest Path Cutting data association algorithm which builds up a decision forest of track hypotheses before cutting this forest into paths that form long-term consistent object tracks. When evaluating our approach on the DAVIS 2017 Unsupervised dataset we obtain state-of-the-art performance with a mean J&F score of 67.9% on the val, 58% on the test-dev and 56.4% on the test-challenge benchmarks, obtaining first place in the DAVIS 2019 Unsupervised Video Object Segmentation Challenge. UnOVOST even performs competitively with many semi-supervised video object segmentation algorithms, although it is not given any input as to which objects should be tracked and segmented. | [] | [
"Semantic Segmentation",
"Semi-Supervised Video Object Segmentation",
"Unsupervised Video Object Segmentation",
"Video Object Segmentation",
"Video Semantic Segmentation"
] | [] | [
"DAVIS 2017 (val)",
"DAVIS 2017 (test-dev)"
] | [
"F-measure (Decay)",
"Jaccard (Mean)",
"F-measure (Recall)",
"Jaccard (Decay)",
"Jaccard (Recall)",
"F-measure (Mean)",
"J&F"
] | UnOVOST: Unsupervised Offline Video Object Segmentation and Tracking |
This paper addresses the problem of set-to-set recognition, which learns the
metric between two image sets. Images in each set belong to the same identity.
Since images in a set can be complementary, they hopefully lead to higher
accuracy in practical applications. However, the quality of each sample cannot
be guaranteed, and samples with poor quality will hurt the metric. In this
paper, the quality aware network (QAN) is proposed to confront this problem,
where the quality of each sample can be automatically learned although such
information is not explicitly provided in the training stage. The network has
two branches, where the first branch extracts appearance feature embedding for
each sample and the other branch predicts quality score for each sample.
Features and quality scores of all samples in a set are then aggregated to
generate the final feature embedding. We show that the two branches can be
trained in an end-to-end manner given only the set-level identity annotation.
Analysis on gradient spread of this mechanism indicates that the quality
learned by the network is beneficial to set-to-set recognition and simplifies
the distribution that the network needs to fit. Experiments on both face
verification and person re-identification show advantages of the proposed QAN.
The source code and network structure can be downloaded at
https://github.com/sciencefans/Quality-Aware-Network. | [] | [
"Face Verification",
"Person Re-Identification"
] | [] | [
"YouTube Faces DB"
] | [
"Accuracy"
] | Quality Aware Network for Set to Set Recognition |
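A minimal sketch of the aggregation idea described above: per-sample features are pooled into a single set-level embedding, weighted by the predicted quality scores. The softmax normalization used here is an illustrative choice; the paper's exact normalization of quality scores may differ.

```python
# Sketch of quality-weighted set aggregation; the normalization choice is an assumption.
import torch

def aggregate_set(features, quality_logits):
    """features: (N, D) per-sample embeddings from the appearance branch.
    quality_logits: (N,) raw scores from the quality branch.
    Returns one (D,) embedding for the whole set."""
    weights = torch.softmax(quality_logits, dim=0)          # higher quality -> larger weight
    return (weights.unsqueeze(-1) * features).sum(dim=0)
```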
The task of session-based recommendation is to predict user actions based on anonymous sessions. Recent research mainly models the target session as a sequence or a graph to capture item transitions within it, ignoring complex transitions between items in different sessions that have been generated by other users. These item transitions include potential collaborative information and reflect similar behavior patterns, which we assume may help with the recommendation for the target session. In this paper, we propose a novel method, namely Dual-channel Graph Transition Network (DGTN), to model item transitions within not only the target session but also the neighbor sessions. Specifically, we integrate the target session and its neighbor (similar) sessions into a single graph. Then the transition signals are explicitly injected into the embedding by channel-aware propagation. Experiments on real-world datasets demonstrate that DGTN outperforms other state-of-the-art methods. Further analysis verifies the rationality of dual-channel item transition modeling, suggesting a potential future direction for session-based recommendation. | [] | [
"Session-Based Recommendations"
] | [] | [
"yoochoose1",
"Diginetica",
"yoochoose1/64"
] | [
"MRR@20",
"Precision@20"
] | DGTN: Dual-channel Graph Transition Network for Session-based Recommendation |
Self-training and unsupervised pre-training have emerged as effective approaches to improve speech recognition systems using unlabeled data. However, it is not clear whether they learn similar patterns or if they can be effectively combined. In this paper, we show that pseudo-labeling and pre-training with wav2vec 2.0 are complementary in a variety of labeled data setups. Using just 10 minutes of labeled data from Libri-light as well as 53k hours of unlabeled data from LibriVox achieves WERs of 3.0%/5.2% on the clean and other test sets of Librispeech - rivaling the best published systems trained on 960 hours of labeled data only a year ago. Training on all labeled data of Librispeech achieves WERs of 1.5%/3.1%. | [] | [
"Speech Recognition",
"Unsupervised Pre-training"
] | [] | [
"LibriSpeech test-other",
"LibriSpeech test-clean",
"LibriSpeech train-clean-100 test-other",
"LibriSpeech train-clean-100 test-clean"
] | [
"Word Error Rate (WER)"
] | Self-training and Pre-training are Complementary for Speech Recognition |
Despite the remarkable recent progress, person re-identification (Re-ID)
approaches still suffer from failure cases where the discriminative
body parts are missing. To mitigate such cases, we propose a simple yet
effective Horizontal Pyramid Matching (HPM) approach to fully exploit various
partial information of a given person, so that correct person candidates can
still be identified even when some key parts are missing. Within the HPM, we make
the following contributions to produce a more robust feature representation for
the Re-ID task: 1) we learn to classify using partial feature representations
at different horizontal pyramid scales, which successfully enhance the
discriminative capabilities of various person parts; 2) we exploit average and
max pooling strategies to account for person-specific discriminative
information in a global-local manner. To validate the effectiveness of the
proposed HPM, extensive experiments are conducted on three popular benchmarks,
including Market-1501, DukeMTMC-ReID and CUHK03. In particular, we achieve mAP
scores of 83.1%, 74.5% and 59.7% on these benchmarks, setting new
state-of-the-art results. Our code is available on GitHub. | [] | [
"Person Re-Identification"
] | [] | [
"DukeMTMC-reID",
"Market-1501"
] | [
"Rank-1",
"MAP"
] | Horizontal Pyramid Matching for Person Re-identification |
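A sketch of the horizontal pyramid pooling step: the backbone feature map is split into horizontal strips at several scales and each strip is summarized with both average and max pooling. The scales, the sum-combination of the two pooled results, and the divisibility assumption are illustrative.

```python
# Sketch of horizontal pyramid pooling; scales and the avg+max combination are illustrative.
import torch

def horizontal_pyramid_features(feat_map, scales=(1, 2, 4, 8)):
    """feat_map: (C, H, W) backbone output, with H divisible by every scale.
    Returns per-strip descriptors of shape (C, sum(scales))."""
    c, h, w = feat_map.shape
    parts = []
    for s in scales:
        strips = feat_map.reshape(c, s, h // s, w)   # split the height into s strips
        avg = strips.mean(dim=(2, 3))                # (C, s) average-pooled strips
        mx = strips.amax(dim=(2, 3))                 # (C, s) max-pooled strips
        parts.append(avg + mx)                       # global-local pooling per strip
    return torch.cat(parts, dim=1)
```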
It is common that entity mentions can contain other mentions recursively.
This paper introduces a scalable transition-based method to model the nested
structure of mentions. We first map a sentence with nested mentions to a
designated forest where each mention corresponds to a constituent of the
forest. Our shift-reduce based system then learns to construct the forest
structure in a bottom-up manner through an action sequence whose maximal length
is guaranteed to be three times the sentence length. Based on Stack-LSTM
which is employed to efficiently and effectively represent the states of the
system in a continuous space, our system is further incorporated with a
character-based component to capture letter-level patterns. Our model achieves
the state-of-the-art results on ACE datasets, showing its effectiveness in
detecting nested mentions. | [] | [
"Named Entity Recognition",
"Nested Mention Recognition",
"Nested Named Entity Recognition"
] | [] | [
"GENIA",
"ACE 2005",
"ACE 2004"
] | [
"F1"
] | A Neural Transition-based Model for Nested Mention Recognition |
Existing methods for arterial blood pressure (BP) estimation directly map the
input physiological signals to output BP values without explicitly modeling the
underlying temporal dependencies in BP dynamics. As a result, these models
suffer from accuracy decay over a long time and thus require frequent
calibration. In this work, we address this issue by formulating BP estimation
as a sequence prediction problem in which both the input and target are
temporal sequences. We propose a novel deep recurrent neural network (RNN)
consisting of multilayered Long Short-Term Memory (LSTM) networks, which are
incorporated with (1) a bidirectional structure to access larger-scale context
information of input sequence, and (2) residual connections to allow gradients
in deep RNN to propagate more effectively. The proposed deep RNN model was
tested on a static BP dataset, and it achieved root mean square error (RMSE) of
3.90 and 2.66 mmHg for systolic BP (SBP) and diastolic BP (DBP) prediction
respectively, surpassing the accuracy of traditional BP prediction models. On a
multi-day BP dataset, the deep RNN achieved RMSE of 3.84, 5.25, 5.80 and 5.81
mmHg for the 1st day, 2nd day, 4th day and 6th month after the 1st day SBP
prediction, and 1.80, 4.78, 5.0, 5.21 mmHg for corresponding DBP prediction,
respectively, which outperforms all previous models with notable improvement.
The experimental results suggest that modeling the temporal dependencies in BP
dynamics significantly improves the long-term BP prediction accuracy. | [] | [
"Blood pressure estimation",
"Electrocardiography (ECG)",
"Photoplethysmography (PPG)"
] | [] | [
"Multi-day Continuous BP Prediction",
"MIMIC-III"
] | [
"MAE for SBP [mmHg]",
"MAE for DBP [mmHg]",
"RMSE"
] | Long-term Blood Pressure Prediction with Deep Recurrent Neural Networks |
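A compact sketch of the model family described above: stacked bidirectional LSTM layers with residual connections, mapping an input physiological sequence to per-step SBP/DBP estimates. The layer count, hidden size, and input projection are illustrative, not the paper's exact configuration.

```python
# Sketch of a residual, bidirectional deep LSTM for sequence-to-sequence BP prediction.
# Hyper-parameters are illustrative assumptions.
import torch
import torch.nn as nn

class ResidualBiLSTM(nn.Module):
    def __init__(self, in_dim, hidden=64, num_layers=4):
        super().__init__()
        self.input_proj = nn.Linear(in_dim, 2 * hidden)
        self.layers = nn.ModuleList([
            nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
            for _ in range(num_layers)])
        self.head = nn.Linear(2 * hidden, 2)   # per-step SBP and DBP

    def forward(self, x):                      # x: (batch, time, in_dim)
        h = self.input_proj(x)
        for lstm in self.layers:
            out, _ = lstm(h)
            h = h + out                        # residual connection across each BiLSTM layer
        return self.head(h)                    # (batch, time, 2)
```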
Most approaches to extracting multiple relations from a paragraph require multiple passes over the paragraph. In practice, multiple passes are computationally expensive, which makes it difficult to scale to longer paragraphs and larger text corpora. In this work, we focus on the task of multiple relation extraction by encoding the paragraph only once (one-pass). We build our solution on pre-trained self-attentive (Transformer) models, where we first add a structured prediction layer to handle extraction between multiple entity pairs, and then enhance the paragraph embedding to capture the relational information associated with each entity using an entity-aware attention technique. We show that our approach is not only scalable but also achieves state-of-the-art performance on the standard benchmark ACE 2005. | [] | [
"Relation Extraction",
"Structured Prediction"
] | [] | [
"SemEval-2010 Task 8"
] | [
"F1"
] | Extracting Multiple-Relations in One-Pass with Pre-Trained Transformers |
Temporal Action Proposal (TAP) generation is an important problem, as fast
and accurate extraction of semantically important (e.g. human actions) segments
from untrimmed videos is an important step for large-scale video analysis. We
propose a novel Temporal Unit Regression Network (TURN) model. There are two
salient aspects of TURN: (1) TURN jointly predicts action proposals and refines
the temporal boundaries by temporal coordinate regression; (2) Fast computation
is enabled by unit feature reuse: a long untrimmed video is decomposed into
video units, which are reused as basic building blocks of temporal proposals.
TURN outperforms the state-of-the-art methods under average recall (AR) by a
large margin on THUMOS-14 and ActivityNet datasets, and runs at over 880 frames
per second (FPS) on a TITAN X GPU. We further apply TURN as a proposal
generation stage for existing temporal action localization pipelines, where it
surpasses state-of-the-art performance on THUMOS-14 and ActivityNet. | [] | [
"Action Localization",
"Regression",
"Temporal Action Localization"
] | [] | [
"THUMOS’14"
] | [
"[email protected]",
"mAP [email protected]",
"mAP [email protected]",
"mAP [email protected]",
"[email protected]",
"mAP [email protected]",
"[email protected]",
"mAP [email protected]"
] | TURN TAP: Temporal Unit Regression Network for Temporal Action Proposals |
Multivariate time series data in practical applications, such as health care,
geoscience, and biology, are characterized by a variety of missing values. In
time series prediction and other related tasks, it has been noted that missing
values and their missing patterns are often correlated with the target labels,
a.k.a., informative missingness. There is very limited work on exploiting the
missing patterns for effective imputation and improving prediction performance.
In this paper, we develop novel deep learning models, namely GRU-D, as one of
the early attempts. GRU-D is based on Gated Recurrent Unit (GRU), a
state-of-the-art recurrent neural network. It takes two representations of
missing patterns, i.e., masking and time interval, and effectively incorporates
them into a deep model architecture so that it not only captures the long-term
temporal dependencies in time series, but also utilizes the missing patterns to
achieve better prediction results. Experiments of time series classification
tasks on real-world clinical datasets (MIMIC-III, PhysioNet) and synthetic
datasets demonstrate that our models achieve state-of-the-art performance and
provide useful insights for better understanding and utilization of missing
values in time series analysis. | [] | [
"Imputation",
"Multivariate Time Series Forecasting",
"Multivariate Time Series Imputation",
"Time Series",
"Time Series Analysis",
"Time Series Classification",
"Time Series Prediction"
] | [] | [
"MuJoCo",
"PhysioNet Challenge 2012"
] | [
"MSE (10^2, 50% missing)",
"MSE (10^-2, 50% missing)",
"AUC",
"AUC Stdev"
] | Recurrent Neural Networks for Multivariate Time Series with Missing Values |
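A sketch of the input-side decay mechanism this family of models uses: when a variable is missing, its last observation is decayed toward the empirical mean according to the time elapsed since that observation, and the mask selects between the observed value and the decayed estimate. Tensor shapes and the per-feature parameterization are assumptions for illustration; the hidden-state decay in GRU-D is analogous.

```python
# Sketch of GRU-D-style input decay for missing values; shapes are illustrative.
import torch
import torch.nn as nn

class InputDecay(nn.Module):
    def __init__(self, n_features, empirical_mean):
        super().__init__()
        self.w = nn.Parameter(torch.zeros(n_features))
        self.b = nn.Parameter(torch.zeros(n_features))
        self.register_buffer("mean", empirical_mean)   # (n_features,) training-set means

    def forward(self, x, mask, delta, x_last):
        """x, mask, delta, x_last: (batch, n_features) for one time step.
        mask is 1 where observed; delta is the time since the last observation."""
        gamma = torch.exp(-torch.relu(self.w * delta + self.b))   # decay factor in (0, 1]
        x_hat = gamma * x_last + (1.0 - gamma) * self.mean        # decayed estimate
        return mask * x + (1.0 - mask) * x_hat
```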
Unsupervised Domain Adaptation (UDA) makes predictions for the target domain
data while manual annotations are only available in the source domain. Previous
methods minimize the domain discrepancy neglecting the class information, which
may lead to misalignment and poor generalization performance. To address this
issue, this paper proposes Contrastive Adaptation Network (CAN) optimizing a
new metric which explicitly models the intra-class domain discrepancy and the
inter-class domain discrepancy. We design an alternating update strategy for
training CAN in an end-to-end manner. Experiments on two real-world benchmarks
Office-31 and VisDA-2017 demonstrate that CAN performs favorably against the
state-of-the-art methods and produces more discriminative features. | [] | [
"Domain Adaptation",
"Unsupervised Domain Adaptation"
] | [] | [
"VisDA2017",
"Office-31"
] | [
"Avg accuracy",
"Average Accuracy"
] | Contrastive Adaptation Network for Unsupervised Domain Adaptation |
Data augmentation is usually adopted to increase the amount of training data,
prevent overfitting and improve the performance of deep models. However, in
practice, random data augmentation, such as random image cropping, is
inefficient and might introduce much uncontrolled background noise. In this
paper, we propose Weakly Supervised Data Augmentation Network (WS-DAN) to
explore the potential of data augmentation. Specifically, for each training
image, we first generate attention maps to represent the object's
discriminative parts by weakly supervised learning. Next, we augment the image
guided by these attention maps, including attention cropping and attention
dropping. The proposed WS-DAN improves classification accuracy in two ways. In
the first stage, images can be seen better since features of more
discriminative parts are extracted. In the second stage, attention regions
provide an accurate location of the object, which enables our model to look at
the object more closely and further improves performance. Comprehensive experiments on
common fine-grained visual classification datasets show that our WS-DAN
surpasses the state-of-the-art methods, which demonstrates its effectiveness. | [] | [
"Data Augmentation",
"Fine-Grained Image Classification",
"Image Cropping"
] | [] | [
"Stanford Cars",
"CUB-200-2011",
"FGVC Aircraft"
] | [
"Accuracy"
] | See Better Before Looking Closer: Weakly Supervised Data Augmentation Network for Fine-Grained Visual Classification |
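A rough sketch of the two attention-guided augmentations for a single image and one attention map: attention cropping zooms into the highly attended region, and attention dropping erases it so that other parts must be used. Thresholds, interpolation choices, and the assumption that some pixel exceeds the crop threshold are illustrative.

```python
# Sketch of attention cropping and dropping for one image and one attention channel.
# Thresholds and interpolation settings are illustrative hyper-parameters.
import torch
import torch.nn.functional as F

def attention_crop_drop(image, attention_map, crop_thresh=0.5, drop_thresh=0.5):
    """image: (C, H, W) float tensor; attention_map: (h, w) float tensor."""
    att = F.interpolate(attention_map[None, None], size=image.shape[1:],
                        mode="bilinear", align_corners=False)[0, 0]
    att = (att - att.min()) / (att.max() - att.min() + 1e-8)

    # Attention cropping: zoom into the bounding box of high-attention pixels
    # (assumes at least one pixel exceeds crop_thresh).
    ys, xs = torch.nonzero(att > crop_thresh, as_tuple=True)
    y1, y2 = int(ys.min()), int(ys.max()) + 1
    x1, x2 = int(xs.min()), int(xs.max()) + 1
    crop = F.interpolate(image[None, :, y1:y2, x1:x2], size=image.shape[1:],
                         mode="bilinear", align_corners=False)[0]

    # Attention dropping: erase high-attention pixels so other parts get learned.
    drop = image * (att <= drop_thresh).float()
    return crop, drop
```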
Flow-based generative models, conceptually attractive due to the tractability of both exact log-likelihood computation and latent-variable inference, and the efficiency of both training and sampling, have led to a number of impressive empirical successes and spawned many advanced variants and theoretical investigations. Despite their computational efficiency, the density estimation performance of flow-based generative models significantly falls behind that of state-of-the-art autoregressive models. In this work, we introduce masked convolutional generative flow (MaCow), a simple yet effective architecture of generative flow using masked convolution. By restricting the local connectivity to a small kernel, MaCow enjoys the properties of fast and stable training, and efficient sampling, while achieving significant improvements over Glow for density estimation on standard image benchmarks, considerably narrowing the gap to autoregressive models. | [] | [
"Density Estimation",
"Image Generation"
] | [] | [
"ImageNet 64x64",
"CelebA 256x256",
"CIFAR-10"
] | [
"bits/dimension",
"bpd",
"Bits per dim"
] | MaCow: Masked Convolutional Generative Flow |
Recently developed deep unsupervised methods allow us to jointly learn representations and cluster unlabelled data. These deep clustering methods mainly focus on the correlation among samples, e.g., selecting high-precision pairs to gradually tune the feature representation, which neglects other useful correlations. In this paper, we propose a novel clustering framework, named deep comprehensive correlation mining (DCCM), for exploring and taking full advantage of various kinds of correlations behind the unlabeled data from three aspects: 1) Instead of only using pair-wise information, pseudo-label supervision is proposed to investigate category information and learn discriminative features. 2) The features' robustness to image transformations of the input space is fully explored, which benefits the network learning and significantly improves the performance. 3) The triplet mutual information among features is presented for the clustering problem to lift the recently discovered instance-level deep mutual information to a triplet-level formulation, which further helps to learn more discriminative features. Extensive experiments on several challenging datasets show that our method achieves good performance, e.g., attaining $62.3\%$ clustering accuracy on CIFAR-10, which is $10.1\%$ higher than the state-of-the-art results. | [] | [
"Deep Clustering",
"Image Clustering"
] | [] | [
"Imagenet-dog-15",
"CIFAR-100",
"CIFAR-10",
"Tiny-ImageNet",
"ImageNet-10",
"STL-10"
] | [
"Train set",
"Train Split",
"ARI",
"Backbone",
"Train Set",
"NMI",
"Accuracy"
] | Deep Comprehensive Correlation Mining for Image Clustering |
Person re-identification aims to establish the correct identity correspondences of a person moving through a non-overlapping multi-camera installation. Recent advances based on deep learning models for this task mainly focus on supervised learning scenarios where accurate annotations are assumed to be available for each setup. Annotating large scale datasets for person re-identification is demanding and burdensome, which renders the deployment of such supervised approaches to real-world applications infeasible. Therefore, it is necessary to train models without explicit supervision in an autonomous manner. In this paper, we propose an elegant and practical clustering approach for unsupervised person re-identification based on the cluster validity consideration. Concretely, we explore a fundamental concept in statistics, namely \emph{dispersion}, to achieve a robust clustering criterion. Dispersion reflects the compactness of a cluster when employed at the intra-cluster level and reveals the separation when measured at the inter-cluster level. With this insight, we design a novel Dispersion-based Clustering (DBC) approach which can discover the underlying patterns in data. This approach considers a wider context of sample-level pairwise relationships to achieve a robust cluster affinity assessment which handles the complications that may arise due to prevalent imbalanced data distributions. Additionally, our solution can automatically prioritize standalone data points and prevent inferior clustering. Our extensive experimental analysis on image and video re-identification benchmarks demonstrates that our method outperforms the state-of-the-art unsupervised methods by a significant margin. Code is available at https://github.com/gddingcs/Dispersion-based-Clustering.git. | [] | [
"Person Re-Identification",
"Unsupervised Person Re-Identification"
] | [] | [
"DukeMTMC-reID",
"Market-1501"
] | [
"Rank-1",
"Rank-10",
"Rank-5",
"MAP"
] | Towards better Validity: Dispersion based Clustering for Unsupervised Person Re-identification |
Landmark localization is a challenging problem in computer vision with a multitude of applications. Recent deep learning based methods have shown improved results by regressing likelihood maps instead of regressing the coordinates directly. However, setting the precision of these regression targets during the training is a cumbersome process since it creates a trade-off between trainability vs localization accuracy. Using precise targets introduces a significant sampling bias and hence makes the training more difficult, whereas using imprecise targets results in inaccurate landmark detectors. In this paper, we introduce "Adaloss", an objective function that adapts itself during the training by updating the target precision based on the training statistics. This approach does not require setting problem-specific parameters and shows improved stability in training and better localization accuracy during inference. We demonstrate the effectiveness of our proposed method in three different applications of landmark localization: 1) the challenging task of precisely detecting catheter tips in medical X-ray images, 2) localizing surgical instruments in endoscopic images, and 3) localizing facial features on in-the-wild images where we show state-of-the-art results on the 300-W benchmark dataset. | [] | [
"Facial Landmark Detection",
"Regression"
] | [] | [
"300W"
] | [
"NME"
] | Adaloss: Adaptive Loss Function for Landmark Localization |
Since the seminal work of Mikolov et al., word embeddings have become the preferred word representations for many natural language processing tasks. Document similarity measures extracted from word embeddings, such as the soft cosine measure (SCM) and the Word Mover's Distance (WMD), were reported to achieve state-of-the-art performance on semantic text similarity and text classification. Despite the strong performance of the WMD on text classification and semantic text similarity, its super-cubic average time complexity is impractical. The SCM has quadratic worst-case time complexity, but its performance on text classification has never been compared with the WMD. Recently, two word embedding regularization techniques were shown to reduce storage and memory costs, and to improve training speed, document processing speed, and task performance on word analogy, word similarity, and semantic text similarity. However, the effect of these techniques on text classification has not yet been studied. In our work, we investigate the individual and joint effect of the two word embedding regularization techniques on the document processing speed and the task performance of the SCM and the WMD on text classification. For evaluation, we use the $k$NN classifier and six standard datasets: BBCSPORT, TWITTER, OHSUMED, REUTERS-21578, AMAZON, and 20NEWS. We show 39% average $k$NN test error reduction with regularized word embeddings compared to non-regularized word embeddings. We describe a practical procedure for deriving such regularized embeddings through Cholesky factorization. We also show that the SCM with regularized word embeddings significantly outperforms the WMD on text classification and is over 10,000 times faster. | [] | [
"Document Classification",
"Text Classification",
"Word Embeddings"
] | [] | [
"BBCSport",
"Amazon",
"Reuters-21578",
"20NEWS",
"Twitter",
"Ohsumed"
] | [
"Accuracy"
] | Text classification with word embedding regularization and soft similarity measure |
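For reference, the soft cosine measure between two bag-of-words vectors generalizes cosine similarity with a word-by-word similarity matrix, e.g. built from (regularized) word embeddings. A minimal dense numpy version, leaving out the sparse-matrix machinery and the Cholesky-based regularization:

```python
# Dense reference implementation of the soft cosine measure (SCM).
import numpy as np

def soft_cosine_measure(x, y, s):
    """x, y: bag-of-words vectors of length V; s: (V, V) word similarity matrix."""
    num = x @ s @ y
    den = np.sqrt(x @ s @ x) * np.sqrt(y @ s @ y)
    return num / den if den > 0 else 0.0
```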
Estimating depth from a single RGB image is a fundamental task in computer vision, which is most directly solved using supervised deep learning. In the field of unsupervised learning of depth from a single RGB image, depth is not given explicitly. Existing work in the field receives either a stereo pair, a monocular video, or multiple views, and, using losses that are based on structure-from-motion, trains a depth estimation network. In this work, we rely on depth-from-focus cues instead of on different views. Learning is based on a novel Point Spread Function convolutional layer, which applies location-specific kernels that arise from the Circle-Of-Confusion in each image location. We evaluate our method on data derived from five common datasets for depth estimation and lightfield images, and present results that are on par with supervised methods on the KITTI and Make3D datasets and outperform unsupervised learning approaches. Since the phenomenon of depth from defocus is not dataset specific, we hypothesize that learning based on it would overfit less to the specific content in each dataset. Our experiments show that this is indeed the case, and an estimator learned on one dataset using our method provides better results on other datasets than the directly supervised methods. | [] | [
"Depth Estimation",
"Lightfield",
"Monocular Depth Estimation",
"Structure from Motion"
] | [] | [
"NYU-Depth V2",
"KITTI Eigen split"
] | [
"RMSE",
"absolute relative error"
] | Single Image Depth Estimation Trained via Depth from Defocus Cues |
Temporal action localization is an important step towards video understanding. Most current action localization methods depend on untrimmed videos with full temporal annotations of action instances. However, it is expensive and time-consuming to annotate both action labels and temporal boundaries of videos. To this end, we propose a weakly supervised temporal action localization method that only requires video-level action instances as supervision during training. We propose a classification module to generate action labels for each segment in the video, and a deep metric learning module to learn the similarity between different action instances. We jointly optimize a balanced binary cross-entropy loss and a metric loss using a standard backpropagation algorithm. Extensive experiments demonstrate the effectiveness of both of these components in temporal localization. We evaluate our algorithm on two challenging untrimmed video datasets: THUMOS14 and ActivityNet1.2. Our approach improves the current state-of-the-art result for THUMOS14 by 6.5% mAP at IoU threshold 0.5, and achieves competitive performance for ActivityNet1.2. | [] | [
"Action Localization",
"Metric Learning",
"Temporal Action Localization",
"Temporal Localization",
"Video Understanding",
"Weakly-supervised Temporal Action Localization",
"Weakly Supervised Temporal Action Localization"
] | [] | [
"ActivityNet-1.2",
"THUMOS’14"
] | [
"mAP [email protected]",
"mAP [email protected]",
"mAP [email protected]",
"mAP [email protected]"
] | Weakly Supervised Temporal Action Localization Using Deep Metric Learning |
Entropy minimization has been widely used in unsupervised domain adaptation (UDA). However, existing works reveal that entropy minimization alone may result in collapsed trivial solutions. In this paper, we propose to avoid trivial solutions by further introducing diversity maximization. In order to achieve the minimum possible target risk for UDA, we show that diversity maximization should be elaborately balanced with entropy minimization, the degree of which can be finely controlled with the use of deep embedded validation in an unsupervised manner. The proposed minimal-entropy diversity maximization (MEDM) can be directly implemented by stochastic gradient descent without the use of adversarial learning. Empirical evidence demonstrates that MEDM outperforms the state-of-the-art methods on four popular domain adaptation datasets. | [] | [
"Domain Adaptation",
"Unsupervised Domain Adaptation"
] | [] | [
"Office-31",
"Office-Home",
"ImageCLEF-DA"
] | [
"Average Accuracy",
"Accuracy"
] | Entropy Minimization vs. Diversity Maximization for Domain Adaptation |
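A sketch of a loss in the spirit of minimal-entropy diversity maximization: per-sample prediction entropy on target data is minimized while the entropy of the batch-averaged prediction is maximized, so predictions stay confident yet spread over classes. The single coefficient `beta` stands in for the balancing that the paper controls via deep embedded validation.

```python
# Sketch of an entropy-minimization + diversity-maximization loss; `beta` is illustrative.
import torch
import torch.nn.functional as F

def medm_style_loss(logits, beta=1.0):
    p = F.softmax(logits, dim=1)                                   # (B, K) target predictions
    cond_entropy = -(p * torch.log(p + 1e-8)).sum(dim=1).mean()    # minimize: be confident
    p_mean = p.mean(dim=0)                                         # marginal class distribution
    marg_entropy = -(p_mean * torch.log(p_mean + 1e-8)).sum()      # maximize: be diverse
    return cond_entropy - beta * marg_entropy
```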
Leveraging physical knowledge described by partial differential equations (PDEs) is an appealing way to improve unsupervised video prediction methods. Since physics is too restrictive for describing the full visual content of generic videos, we introduce PhyDNet, a two-branch deep architecture, which explicitly disentangles PDE dynamics from unknown complementary information. A second contribution is to propose a new recurrent physical cell (PhyCell), inspired from data assimilation techniques, for performing PDE-constrained prediction in latent space. Extensive experiments conducted on four various datasets show the ability of PhyDNet to outperform state-of-the-art methods. Ablation studies also highlight the important gain brought out by both disentanglement and PDE-constrained prediction. Finally, we show that PhyDNet presents interesting features for dealing with missing data and long-term forecasting. | [] | [
"Video Prediction"
] | [] | [
"Human3.6M",
"Moving MNIST"
] | [
"MAE",
"SSIM",
"MSE"
] | Disentangling Physical Dynamics from Unknown Factors for Unsupervised Video Prediction |
Existing methods for instance segmentation in videos typically involve multi-stage pipelines that follow the tracking-by-detection paradigm and model a video clip as a sequence of images. Multiple networks are used to detect objects in individual frames, and then associate these detections over time. Hence, these methods are often non-end-to-end trainable and highly tailored to specific tasks. In this paper, we propose a different approach that is well-suited to a variety of tasks involving instance segmentation in videos. In particular, we model a video clip as a single 3D spatio-temporal volume, and propose a novel approach that segments and tracks instances across space and time in a single stage. Our problem formulation is centered around the idea of spatio-temporal embeddings which are trained to cluster pixels belonging to a specific object instance over an entire video clip. To this end, we introduce (i) novel mixing functions that enhance the feature representation of spatio-temporal embeddings, and (ii) a single-stage, proposal-free network that can reason about temporal context. Our network is trained end-to-end to learn spatio-temporal embeddings as well as parameters required to cluster these embeddings, thus simplifying inference. Our method achieves state-of-the-art results across multiple datasets and tasks. Code and models are available at https://github.com/sabarim/STEm-Seg. | [] | [
"Instance Segmentation",
"Semantic Segmentation",
"Unsupervised Video Object Segmentation",
"Video Instance Segmentation"
] | [] | [
"DAVIS 2017 (val)",
"YouTube-VIS validation",
"DAVIS 2016"
] | [
"AR10",
"F-measure (Decay)",
"Jaccard (Mean)",
"F-measure (Recall)",
"Jaccard (Decay)",
"AR1",
"AP75",
"AP50",
"Jaccard (Recall)",
"F-measure (Mean)",
"J&F",
"mask AP"
] | STEm-Seg: Spatio-temporal Embeddings for Instance Segmentation in Videos |
Geospatial object segmentation, as a particular semantic segmentation task, always faces large scale variation, large intra-class variance of the background, and foreground-background imbalance in high spatial resolution (HSR) remote sensing imagery. However, general semantic segmentation methods mainly focus on scale variation in the natural scene, with inadequate consideration of the other two problems that usually happen in the large-area earth observation scene. In this paper, we argue that the problems lie in the lack of foreground modeling and propose a foreground-aware relation network (FarSeg) from the perspectives of relation-based and optimization-based foreground modeling, to alleviate the above two problems. From the perspective of relation, FarSeg enhances the discrimination of foreground features via foreground-correlated contexts associated by learning the foreground-scene relation. Meanwhile, from the perspective of optimization, a foreground-aware optimization is proposed to focus on foreground examples and hard examples of the background during training for a balanced optimization. The experimental results obtained using a large scale dataset suggest that the proposed method is superior to the state-of-the-art general semantic segmentation methods and achieves a better trade-off between speed and accuracy. Code has been made available at: \url{https://github.com/Z-Zheng/FarSeg}. | [] | [
"Semantic Segmentation"
] | [] | [
"iSAID"
] | [
"mIoU"
] | Foreground-Aware Relation Network for Geospatial Object Segmentation in High Spatial Resolution Remote Sensing Imagery |
Real-world data is predominantly unbalanced and long-tailed, but deep models struggle to recognize rare classes in the presence of frequent classes. Often, classes can be accompanied by side information like textual descriptions, but it is not fully clear how to use them for learning with unbalanced long-tail data. Such descriptions have been mostly used in (Generalized) Zero-shot learning (ZSL), suggesting that ZSL with class descriptions may also be useful for long-tail distributions. We describe DRAGON, a late-fusion architecture for long-tail learning with class descriptors. It learns to (1) correct the bias towards head classes on a sample-by-sample basis; and (2) fuse information from class-descriptions to improve the tail-class accuracy. We also introduce new benchmarks CUB-LT, SUN-LT, AWA-LT for long-tail learning with class-descriptions, building on existing learning-with-attributes datasets and a version of Imagenet-LT with class descriptors. DRAGON outperforms state-of-the-art models on the new benchmark. It is also a new SoTA on existing benchmarks for GFSL with class descriptors (GFSL-d) and standard (vision-only) long-tailed learning ImageNet-LT, CIFAR-10, 100, and Places365. | [] | [
"Few-Shot Learning",
"Generalized Few-Shot Learning",
"Generalized Zero-Shot Learning",
"Long-tail Learning",
"Long-tail learning with class descriptors",
"Zero-Shot Learning"
] | [] | [
"SUN-LT",
"Places-LT",
"AWA2",
"CIFAR-10-LT (ρ=100)",
"CIFAR-100-LT (ρ=10)",
"ImageNet-LT",
"CIFAR-10-LT (ρ=10)",
"AWA-LT",
"CIFAR-100-LT (ρ=100)",
"ImageNet-LT-d",
"CUB-LT",
"SUN",
"CUB"
] | [
"Per-Class Accuracy (1-shot)",
"Per-Class Accuracy (20-shots)",
"Long-Tailed Accuracy",
"Error Rate",
"Per-Class Accuracy (2-shots)",
"Per-Class Accuracy (2-shots)",
"Per-Class Accuracy (5-shots)",
"Per-Class Accuracy (10-shots)",
"Per-Class Accuracy"
] | From Generalized zero-shot learning to long-tail with class descriptors |
The low-level details and high-level semantics are both essential to the semantic segmentation task. However, to speed up the model inference, current approaches almost always sacrifice the low-level details, which leads to a considerable accuracy decrease. We propose to treat these spatial details and categorical semantics separately to achieve high accuracy and high efficiency for realtime semantic segmentation. To this end, we propose an efficient and effective architecture with a good trade-off between speed and accuracy, termed Bilateral Segmentation Network (BiSeNet V2). This architecture involves: (i) a Detail Branch, with wide channels and shallow layers to capture low-level details and generate high-resolution feature representation; (ii) a Semantic Branch, with narrow channels and deep layers to obtain high-level semantic context. The Semantic Branch is lightweight due to reducing the channel capacity and a fast-downsampling strategy. Furthermore, we design a Guided Aggregation Layer to enhance mutual connections and fuse both types of feature representation. Besides, a booster training strategy is designed to improve the segmentation performance without any extra inference cost. Extensive quantitative and qualitative evaluations demonstrate that the proposed architecture performs favourably against a few state-of-the-art real-time semantic segmentation approaches. Specifically, for a 2,048x1,024 input, we achieve 72.6% Mean IoU on the Cityscapes test set with a speed of 156 FPS on one NVIDIA GeForce GTX 1080 Ti card, which is significantly faster than existing methods, yet we achieve better segmentation accuracy. | [] | [
"Real-Time Semantic Segmentation",
"Semantic Segmentation"
] | [] | [
"COCO-Stuff",
"Cityscapes test",
"CamVid"
] | [
"Time (ms)",
"Frame (fps)",
"mIoU"
] | BiSeNet V2: Bilateral Network with Guided Aggregation for Real-time Semantic Segmentation |
We present TDNet, a temporally distributed network designed for fast and accurate video semantic segmentation. We observe that features extracted from a certain high-level layer of a deep CNN can be approximated by composing features extracted from several shallower sub-networks. Leveraging the inherent temporal continuity in videos, we distribute these sub-networks over sequential frames. Therefore, at each time step, we only need to perform a lightweight computation to extract a sub-features group from a single sub-network. The full features used for segmentation are then recomposed by application of a novel attention propagation module that compensates for geometry deformation between frames. A grouped knowledge distillation loss is also introduced to further improve the representation power at both full and sub-feature levels. Experiments on Cityscapes, CamVid, and NYUD-v2 demonstrate that our method achieves state-of-the-art accuracy with significantly faster speed and lower latency. | [] | [
"Knowledge Distillation",
"Real-Time Semantic Segmentation",
"Semantic Segmentation",
"Video Semantic Segmentation"
] | [] | [
"CamVid",
"NYU Depth v2",
"Cityscapes val",
"Cityscapes test"
] | [
"Speed(ms/f)",
"Time (ms)",
"Mean IoU",
"mIoU",
"Frame (fps)"
] | Temporally Distributed Networks for Fast Video Semantic Segmentation |
Modern face alignment methods have become quite accurate at predicting the locations of facial landmarks, but they do not typically estimate the uncertainty of their predicted locations nor predict whether landmarks are visible. In this paper, we present a novel framework for jointly predicting landmark locations, associated uncertainties of these predicted locations, and landmark visibilities. We model these as mixed random variables and estimate them using a deep network trained with our proposed Location, Uncertainty, and Visibility Likelihood (LUVLi) loss. In addition, we release an entirely new labeling of a large face alignment dataset with over 19,000 face images in a full range of head poses. Each face is manually labeled with the ground-truth locations of 68 landmarks, with the additional information of whether each landmark is unoccluded, self-occluded (due to extreme head poses), or externally occluded. Not only does our joint estimation yield accurate estimates of the uncertainty of predicted landmark locations, but it also yields state-of-the-art estimates for the landmark locations themselves on multiple standard face alignment datasets. Our method's estimates of the uncertainty of predicted landmark locations could be used to automatically identify input images on which face alignment fails, which can be critical for downstream tasks. | [] | [
"Face Alignment"
] | [] | [
"MERL-RAV"
] | [
"NME"
] | LUVLi Face Alignment: Estimating Landmarks' Location, Uncertainty, and Visibility Likelihood |
Score-based generative models can produce high quality image samples comparable to GANs, without requiring adversarial optimization. However, existing training procedures are limited to images of low resolution (typically below 32x32), and can be unstable under some settings. We provide a new theoretical analysis of learning and sampling from score models in high dimensional spaces, explaining existing failure modes and motivating new solutions that generalize across datasets. To enhance stability, we also propose to maintain an exponential moving average of model weights. With these improvements, we can effortlessly scale score-based generative models to images with unprecedented resolutions ranging from 64x64 to 256x256. Our score-based models can generate high-fidelity samples that rival best-in-class GANs on various image datasets, including CelebA, FFHQ, and multiple LSUN categories. | [] | [
"Image Generation"
] | [] | [
"CIFAR-10"
] | [
"Inception score",
"FID"
] | Improved Techniques for Training Score-Based Generative Models |
We introduce a general-purpose conditioning method for neural networks called
FiLM: Feature-wise Linear Modulation. FiLM layers influence neural network
computation via a simple, feature-wise affine transformation based on
conditioning information. We show that FiLM layers are highly effective for
visual reasoning - answering image-related questions which require a
multi-step, high-level process - a task which has proven difficult for standard
deep learning methods that do not explicitly model reasoning. Specifically, we
show on visual reasoning tasks that FiLM layers 1) halve state-of-the-art error
for the CLEVR benchmark, 2) modulate features in a coherent manner, 3) are
robust to ablations and architectural modifications, and 4) generalize well to
challenging, new data from few examples or even zero-shot. | [] | [
"Image Retrieval with Multi-Modal Query",
"Visual Question Answering",
"Visual Reasoning"
] | [] | [
"CLEVR-Humans",
"MIT-States",
"CLEVR"
] | [
"Recall@1",
"Recall@5",
"Recall@10",
"Accuracy"
] | FiLM: Visual Reasoning with a General Conditioning Layer |
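A minimal sketch of a FiLM layer: the conditioning input (e.g. a question embedding) predicts a per-channel scale and shift that modulate a convolutional feature map. Dimensions are illustrative; in the paper the modulation is applied inside residual blocks of the visual pipeline.

```python
# Minimal FiLM layer: feature-wise affine modulation conditioned on an external input.
import torch
import torch.nn as nn

class FiLM(nn.Module):
    def __init__(self, cond_dim, num_channels):
        super().__init__()
        self.to_gamma_beta = nn.Linear(cond_dim, 2 * num_channels)

    def forward(self, feat, cond):
        """feat: (B, C, H, W) feature maps; cond: (B, cond_dim) conditioning vector."""
        gamma, beta = self.to_gamma_beta(cond).chunk(2, dim=-1)
        gamma = gamma[:, :, None, None]                # broadcast over spatial dimensions
        beta = beta[:, :, None, None]
        return gamma * feat + beta
```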
Current research in text simplification has been hampered by two central problems: (i) the small amount of high-quality parallel simplification data available, and (ii) the lack of explicit annotations of simplification operations, such as deletions or substitutions, on existing data. While the recently introduced Newsela corpus has alleviated the first problem, simplifications still need to be learned directly from parallel text using black-box, end-to-end approaches rather than from explicit annotations. These complex-simple parallel sentence pairs often differ to such a high degree that generalization becomes difficult. End-to-end models also make it hard to interpret what is actually learned from data. We propose a method that decomposes the task of TS into its sub-problems. We devise a way to automatically identify operations in a parallel corpus and introduce a sequence-labeling approach based on these annotations. Finally, we provide insights on the types of transformations that different approaches can model. | [] | [
"Machine Translation",
"Sentence Compression",
"Text Simplification"
] | [] | [
"PWKP / WikiSmall",
"Newsela",
"TurkCorpus"
] | [
"SARI (EASSE>=0.2.1)",
"SARI"
] | Learning How to Simplify From Explicit Labeling of Complex-Simplified Text Pairs |
Arbitrary shape text detection in natural scenes is an extremely challenging task. Unlike existing text detection approaches that only perceive texts based on limited feature representations, we propose a novel framework, namely TextFuseNet, to exploit the use of richer features fused for text detection. More specifically, we propose to perceive texts from three levels of feature representations, i.e., character-, word- and global-level, and then introduce a novel text representation fusion technique to help achieve robust arbitrary text detection. The multi-level feature representation can adequately describe texts by dissecting them into individual characters while still maintaining their general semantics. TextFuseNet then collects and merges the texts’ features from different levels using a multi-path fusion architecture which can effectively align and fuse different representations. In practice, our proposed TextFuseNet can learn a more adequate description of arbitrary shapes texts, suppressing false positives and producing more accurate detection results. Our proposed framework can also be trained with weak supervision for those datasets that lack character-level annotations. Experiments on several datasets show that the proposed TextFuseNet achieves state-of-the-art performance. Specifically, we achieve an F-measure of 94.3% on ICDAR2013, 92.1% on ICDAR2015, 87.1% on Total-Text and 86.6% on CTW-1500, respectively. | [] | [
"Scene Text",
"Scene Text Detection"
] | [] | [
"ICDAR 2015",
"SCUT-CTW1500",
"IC19-Art",
"Total-Text",
"ICDAR 2013"
] | [
"F-Measure",
"Recall",
"Precision",
"H-Mean"
] | TextFuseNet: Scene Text Detection with Richer Fused Features |
We study a formalization of the grammar induction problem that models sentences as being generated by a compound probabilistic context-free grammar. In contrast to traditional formulations which learn a single stochastic grammar, our grammar's rule probabilities are modulated by a per-sentence continuous latent variable, which induces marginal dependencies beyond the traditional context-free assumptions. Inference in this grammar is performed by collapsed variational inference, in which an amortized variational posterior is placed on the continuous variable, and the latent trees are marginalized out with dynamic programming. Experiments on English and Chinese show the effectiveness of our approach compared to recent state-of-the-art methods when evaluated on unsupervised parsing. | [] | [
"Constituency Grammar Induction",
"Variational Inference"
] | [] | [
"PTB"
] | [
"Max F1 (WSJ)",
"Mean F1 (WSJ)"
] | Compound Probabilistic Context-Free Grammars for Grammar Induction |
Facial landmark localization aims to detect the predefined points of human faces, and the topic has been rapidly improved with the recent development of neural network based methods. However, it remains a challenging task when dealing with faces in unconstrained scenarios, especially with large pose variations. In this paper, we target the problem of facial landmark localization across large poses and address this task based on a split-and-aggregate strategy. To split the search space, we propose a set of anchor templates as references for regression, which well addresses the large variations of face poses. Based on the prediction of each anchor template, we propose to aggregate the results, which can reduce the landmark uncertainty due to the large poses. Overall, our proposed approach, named AnchorFace, obtains state-of-the-art results with extremely efficient inference speed on four challenging benchmarks, i.e. AFLW, 300W, Menpo, and WFLW dataset. Code will be available at https://github.com/nothingelse92/AnchorFace. | [] | [
"Face Alignment",
"Facial Landmark Detection",
"Regression"
] | [] | [
"WFLW",
"300W",
"AFLW-Full",
"AFLW-Front"
] | [
"Mean NME",
"Fullset (public)",
"[email protected] (all)",
"ME (%, all) ",
"[email protected](%, all)",
"Mean NME "
] | AnchorFace: An Anchor-based Facial Landmark Detector Across Large Poses |
Most existing neural models for math word problems exploit Seq2Seq models to generate solution
expressions sequentially from left to right, and their results are far from
satisfactory due to the lack of a goal-driven mechanism commonly seen in human
problem solving. This paper proposes a tree-structured neural model to generate
expression trees in a goal-driven manner. Given a math word problem, the model
first identifies and encodes its goal to achieve, and then the goal is decomposed
into sub-goals combined by an operator in a top-down recursive way. The whole
process is repeated until the goal is simple enough to be realized by a known
quantity as a leaf node. During the process, two-layer gated feedforward networks
are designed to implement each step of goal decomposition, and a recursive neural
network is used to encode fulfilled subtrees into subtree embeddings, which
provides a better representation of subtrees than the simple goals of subtrees.
Experimental results on the Math23K dataset show that our tree-structured model
significantly outperforms several state-of-the-art models. | [] | [
"Math Word Problem Solving"
] | [] | [
"Math23K"
] | [
"Accuracy(5-fold)"
] | A Goal-Driven Tree-Structured Neural Model for Math Word Problems |
Attentional, RNN-based encoder-decoder models for abstractive summarization
have achieved good performance on short input and output sequences. For longer
documents and summaries, however, these models often include repetitive and
incoherent phrases. We introduce a neural network model with a novel
intra-attention that attends over the input and continuously generated output
separately, and a new training method that combines standard supervised word
prediction and reinforcement learning (RL). Models trained only with supervised
learning often exhibit "exposure bias" - they assume ground truth is provided
at each step during training. However, when standard word prediction is
combined with the global sequence prediction training of RL the resulting
summaries become more readable. We evaluate this model on the CNN/Daily Mail
and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the
CNN/Daily Mail dataset, an improvement over previous state-of-the-art models.
Human evaluation also shows that our model produces higher quality summaries. | [] | [
"Abstractive Text Summarization",
"Text Summarization"
] | [] | [
"CNN / Daily Mail",
"CNN / Daily Mail (Anonymized)"
] | [
"ROUGE-L",
"ROUGE-1",
"ROUGE-2"
] | A Deep Reinforced Model for Abstractive Summarization |
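The abstract above combines maximum-likelihood word prediction with a reinforcement-learning term over sampled sequences. Below is a minimal sketch of such a mixed objective with a self-critical (greedy-baseline) reward; the mixing weight, reward function, and tensor shapes are placeholders, not the paper's exact recipe.

```python
import torch

def mixed_summarization_loss(gt_log_probs, sample_log_probs,
                             sample_reward, greedy_reward, gamma=0.998):
    """Sketch of an MLE + self-critical RL objective.

    gt_log_probs:     (batch,) summed log-likelihood of the ground-truth summary (teacher forcing)
    sample_log_probs: (batch,) summed log-probability of a sampled summary
    sample_reward:    (batch,) e.g. ROUGE of the sampled summary
    greedy_reward:    (batch,) ROUGE of the greedily decoded baseline summary
    gamma:            placeholder mixing weight close to 1
    """
    loss_ml = -gt_log_probs.mean()
    advantage = (sample_reward - greedy_reward).detach()   # self-critical baseline
    loss_rl = -(advantage * sample_log_probs).mean()
    return gamma * loss_rl + (1.0 - gamma) * loss_ml
```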
We introduce a new function-preserving transformation for efficient neural
architecture search. This network transformation allows reusing previously
trained networks and existing successful architectures, which improves sample
efficiency. We aim to address the limitation of current network transformation
operations that can only perform layer-level architecture modifications, such
as adding (pruning) filters or inserting (removing) a layer, which fails to
change the topology of connection paths. Our proposed path-level transformation
operations enable the meta-controller to modify the path topology of the given
network while keeping the merits of reusing weights, and thus allow efficiently
designing effective structures with complex path topologies like Inception
models. We further propose a bidirectional tree-structured reinforcement
learning meta-controller to explore a simple yet highly expressive
tree-structured architecture space that can be viewed as a generalization of
multi-branch architectures. We experimented on the image classification
datasets with limited computational resources (about 200 GPU-hours), where we
observed improved parameter efficiency and better test results (97.70% test
accuracy on CIFAR-10 with 14.3M parameters and 74.6% top-1 accuracy on ImageNet
in the mobile setting), demonstrating the effectiveness and transferability of
our designed architectures. | [] | [
"Image Classification",
"Neural Architecture Search"
] | [] | [
"CIFAR-10 Image Classification"
] | [
"Percentage error",
"Params"
] | Path-Level Network Transformation for Efficient Architecture Search |
Although it is a fundamental component of training and inference, data processing has not been systematically considered by the human pose estimation community, to the best of our knowledge. In this paper, we focus on this problem and find that the devil in the evolution of human pose estimation lies in biased data processing. Specifically, by investigating the standard data processing of state-of-the-art approaches, mainly coordinate system transformation and keypoint format transformation (i.e., encoding and decoding), we find that the results obtained with the common flipping strategy are misaligned with the original ones at inference time. Moreover, there is a statistical error in some keypoint format transformation methods. These two problems couple together, significantly degrade pose estimation performance, and thus lay a trap for the research community. This trap has given rise to many suboptimal remedies, which are usually unreported, confusing, but influential. By causing failures in reproduction and unfairness in comparison, these unreported remedies seriously impede technological development. To tackle this dilemma at the source, we propose Unbiased Data Processing (UDP), which consists of two techniques for the two aforementioned problems respectively (i.e., unbiased coordinate system transformation and unbiased keypoint format transformation). As a model-agnostic approach and a superior solution, UDP successfully pushes the performance boundary of human pose estimation and offers a higher and more reliable baseline for the research community. Code is publicly available at https://github.com/HuangJunJie2017/UDP-Pose | [] | [
"Pose Estimation"
] | [] | [
"COCO test-dev"
] | [
"APM",
"AP75",
"AP",
"APL",
"AP50",
"AR"
] | The Devil is in the Details: Delving into Unbiased Data Processing for Human Pose Estimation |
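A minimal sketch of the "unbiased coordinate system transformation" idea described above: an image of W pixels is measured as W - 1 unit lengths so that resizing and flipping keep keypoints aligned. Function names and the aligned-corners convention are assumptions, and the keypoint-encoding half of UDP is not shown.

```python
import numpy as np

def rescale_keypoints(coords, src_size, dst_size):
    """coords: (..., 2) array of (x, y); sizes are (width, height) in pixels."""
    (sw, sh), (dw, dh) = src_size, dst_size
    scale = np.array([(dw - 1) / (sw - 1), (dh - 1) / (sh - 1)])  # unit lengths, not pixel counts
    return coords * scale

def flip_keypoints(coords, width):
    """Horizontal flip in the continuous coordinate system: x -> (width - 1) - x."""
    out = np.asarray(coords, dtype=float).copy()
    out[..., 0] = (width - 1) - out[..., 0]
    return out

# Usage sketch: a corner keypoint stays a corner after resizing, and flipping twice is identity.
kp = np.array([[191.0, 255.0]])                      # bottom-right pixel of a 192x256 image
print(rescale_keypoints(kp, (192, 256), (48, 64)))   # -> [[47., 63.]]
print(flip_keypoints(flip_keypoints(kp, 192), 192))  # -> [[191., 255.]]
```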
We present a semi-parametric approach to photographic image synthesis from
semantic layouts. The approach combines the complementary strengths of
parametric and nonparametric techniques. The nonparametric component is a
memory bank of image segments constructed from a training set of images. Given
a novel semantic layout at test time, the memory bank is used to retrieve
photographic references that are provided as source material to a deep network.
The synthesis is performed by a deep network that draws on the provided
photographic material. Experiments on multiple semantic segmentation datasets
show that the presented approach yields considerably more realistic images than
recent purely parametric techniques. The results are shown in the supplementary
video at https://youtu.be/U4Q98lenGLQ | [] | [
"Image Generation",
"Image-to-Image Translation",
"Semantic Segmentation"
] | [] | [
"COCO-Stuff Labels-to-Photos",
"Cityscapes Labels-to-Photo",
"ADE20K-Outdoor Labels-to-Photos"
] | [
"Accuracy",
"FID",
"Per-pixel Accuracy",
"mIoU"
] | Semi-parametric Image Synthesis |
Graph-structured data such as social networks, functional brain networks,
gene regulatory networks, and communication networks have sparked interest in
generalizing deep learning techniques to graph domains. In this paper, we are
interested in designing neural networks for graphs of variable size in order
to solve learning problems such as vertex classification, graph classification,
graph regression, and graph generative tasks. Most existing works have focused
on recurrent neural networks (RNNs) to learn meaningful representations of
graphs, and more recently new convolutional neural networks (ConvNets) have
been introduced. In this work, we want to compare rigorously these two
fundamental families of architectures to solve graph learning tasks. We review
existing graph RNN and ConvNet architectures, and propose natural extensions of
LSTMs and ConvNets to graphs of arbitrary size. Then, we design a set of
analytically controlled experiments on two basic graph problems, i.e. subgraph
matching and graph clustering, to test the different architectures. Numerical
results show that the proposed graph ConvNets are 3-17% more accurate and
1.5-4x faster than graph RNNs. Graph ConvNets are also 36% more accurate than
variational (non-learning) techniques. Finally, the most effective graph
ConvNet architecture uses gated edges and residuality. Residuality plays an
essential role in learning multi-layer architectures, providing a 10% gain in
performance. | [] | [
"Graph Classification",
"Graph Clustering",
"Graph Learning",
"Graph Regression",
"Node Classification",
"Regression"
] | [] | [
"CIFAR10 100k",
"ZINC-500k",
"PATTERN 100k"
] | [
"MAE",
"Accuracy (%)"
] | Residual Gated Graph ConvNets |
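A minimal dense-adjacency sketch of a gated-edge, residual graph convolution layer in the spirit of the abstract above; the exact edge-gating form, batch-norm placement, and layer sizes are assumptions rather than the published architecture.

```python
import torch
import torch.nn as nn

class GatedGraphConvLayer(nn.Module):
    """Sketch of a residual gated graph conv layer over a dense adjacency matrix."""
    def __init__(self, dim):
        super().__init__()
        self.U = nn.Linear(dim, dim)   # transform of the node itself
        self.V = nn.Linear(dim, dim)   # transform of the neighbours
        self.A = nn.Linear(dim, dim)   # edge gate, source side
        self.B = nn.Linear(dim, dim)   # edge gate, target side
        self.bn = nn.BatchNorm1d(dim)

    def forward(self, h, adj):
        # h: (N, dim) node features, adj: (N, N) {0,1} adjacency matrix.
        gate = torch.sigmoid(self.A(h).unsqueeze(1) + self.B(h).unsqueeze(0))  # (N, N, dim)
        gate = gate * adj.unsqueeze(-1)                                        # mask non-edges
        agg = (gate * self.V(h).unsqueeze(0)).sum(dim=1)                       # sum over neighbours j
        return h + torch.relu(self.bn(self.U(h) + agg))                        # residual connection
```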
For natural language understanding (NLU) technology to be maximally useful,
both practically and as a scientific object of study, it must be general: it
must be able to process language in a way that is not exclusively tailored to
any one specific task or dataset. In pursuit of this objective, we introduce
the General Language Understanding Evaluation benchmark (GLUE), a tool for
evaluating and analyzing the performance of models across a diverse range of
existing NLU tasks. GLUE is model-agnostic, but it incentivizes sharing
knowledge across tasks because certain tasks have very limited training data.
We further provide a hand-crafted diagnostic test suite that enables detailed
linguistic analysis of NLU models. We evaluate baselines based on current
methods for multi-task and transfer learning and find that they do not
immediately give substantial improvements over the aggregate performance of
training a separate model per task, indicating room for improvement in
developing general and robust NLU systems. | [] | [
"Natural Language Inference",
"Natural Language Understanding",
"Transfer Learning"
] | [] | [
"MultiNLI"
] | [
"Mismatched",
"Matched"
] | GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding |
Link prediction for knowledge graphs is the task of predicting missing
relationships between entities. Previous work on link prediction has focused on
shallow, fast models which can scale to large knowledge graphs. However, these
models learn less expressive features than deep, multi-layer models -- which
potentially limits performance. In this work, we introduce ConvE, a multi-layer
convolutional network model for link prediction, and report state-of-the-art
results for several established datasets. We also show that the model is highly
parameter efficient, yielding the same performance as DistMult and R-GCN with
8x and 17x fewer parameters. Analysis of our model suggests that it is
particularly effective at modelling nodes with high indegree -- which are
common in highly-connected, complex knowledge graphs such as Freebase and
YAGO3. In addition, it has been noted that the WN18 and FB15k datasets suffer
from test set leakage, due to inverse relations from the training set being
present in the test set -- however, the extent of this issue has so far not
been quantified. We find this problem to be severe: a simple rule-based model
can achieve state-of-the-art results on both WN18 and FB15k. To ensure that
models are evaluated on datasets where simply exploiting inverse relations
cannot yield competitive results, we investigate and validate several commonly
used datasets -- deriving robust variants where necessary. We then perform
experiments on these robust datasets for our own and several previously
proposed models and find that ConvE achieves state-of-the-art Mean Reciprocal
Rank across most datasets. | [] | [
"Knowledge Graph Embeddings",
"Knowledge Graphs",
"Link Prediction"
] | [] | [
"WN18RR",
" FB15k",
"FB15k-237",
"YAGO3-10",
"WN18"
] | [
"Hits@3",
"Hits@1",
"MR",
"MRR",
"Hits@10"
] | Convolutional 2D Knowledge Graph Embeddings |
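A compact sketch of ConvE-style scoring as described above: subject and relation embeddings are reshaped to 2D, stacked, convolved, projected back to the embedding space, and matched against all candidate objects. The embedding shape, channel count, and omission of dropout/batch-norm are simplifications.

```python
import torch
import torch.nn as nn

class ConvEScorer(nn.Module):
    """Sketch of 2D-convolutional link scoring (sizes are assumptions)."""
    def __init__(self, n_entities, n_relations, dim=200, h=10, w=20, channels=32):
        super().__init__()
        assert h * w == dim
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.conv = nn.Conv2d(1, channels, kernel_size=3)
        self.fc = nn.Linear(channels * (2 * h - 2) * (w - 2), dim)
        self.h, self.w = h, w

    def forward(self, subj, rel):
        # Reshape subject and relation embeddings to 2D and stack them vertically.
        e = self.ent(subj).view(-1, 1, self.h, self.w)
        r = self.rel(rel).view(-1, 1, self.h, self.w)
        x = torch.cat([e, r], dim=2)                # (B, 1, 2h, w)
        x = torch.relu(self.conv(x)).flatten(1)     # (B, channels * (2h-2) * (w-2))
        x = torch.relu(self.fc(x))                  # project back to embedding space
        return x @ self.ent.weight.t()              # score against every candidate object
```

Training would typically pass these scores through a sigmoid with binary cross-entropy over all candidate objects (1-N scoring).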
Recent deep networks are capable of memorizing the entire data even when the
labels are completely random. To overcome the overfitting on corrupted labels,
we propose a novel technique of learning another neural network, called
MentorNet, to supervise the training of the base deep networks, namely,
StudentNet. During training, MentorNet provides a curriculum (sample weighting
scheme) for StudentNet to focus on samples whose labels are probably
correct. Unlike existing curricula, which are usually predefined by human
experts, MentorNet learns a data-driven curriculum dynamically with StudentNet.
Experimental results demonstrate that our approach can significantly improve
the generalization performance of deep networks trained on corrupted training
data. Notably, to the best of our knowledge, we achieve the best-published
result on WebVision, a large benchmark containing 2.2 million images of
real-world noisy labels. The code are at https://github.com/google/mentornet | [] | [] | [] | [
"mini WebVision 1.0"
] | [
"ImageNet Top-1 Accuracy",
"ImageNet Top-5 Accuracy"
] | MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels |
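The MentorNet abstract above is about a learned sample-weighting curriculum. The sketch below is a heavily simplified stand-in that maps per-sample loss features to weights in [0, 1] and reweights the StudentNet loss; the real MentorNet uses richer inputs (loss history, epoch, label) and its own training procedure, so treat the feature set and architecture here as assumptions.

```python
import torch
import torch.nn as nn

class SimpleMentor(nn.Module):
    """Toy curriculum network: per-sample loss features -> sample weights in [0, 1]."""
    def __init__(self, in_features=2, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_features, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, loss, loss_moving_avg):
        feats = torch.stack([loss, loss_moving_avg], dim=-1)
        return self.net(feats).squeeze(-1)              # (batch,) sample weights

def weighted_student_loss(per_sample_loss, mentor, loss_moving_avg):
    # Weight the StudentNet loss by the mentor's (detached) curriculum weights.
    w = mentor(per_sample_loss.detach(), loss_moving_avg).detach()
    return (w * per_sample_loss).sum() / w.sum().clamp_min(1e-8)
```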
This paper describes our results for the TRAC 2020 competition held together with the LREC 2020 conference. Our team name was Ms8qQxMbnjJMgYcw. The competition consisted of 2 subtasks in 3 languages (Bengali, English and Hindi) where the participants' task was to classify aggression in short texts from social media and decide whether it is gendered or not. We used a single BERT-based system with two outputs for all tasks simultaneously. Our model placed first in the English and second in the Bengali gendered text classification tasks, with F1-scores of 0.87 and 0.93 respectively. | [] | [
"Text Classification"
] | [] | [
"TRAC2-Benghali. Task 2.",
"TRAC2-English. Task2."
] | [
"F1"
] | BERT of all trades, master of some |
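The system above is a single BERT-based model with two outputs (aggression class and gendered/not) trained on all tasks simultaneously. Below is a generic sketch of one shared encoder with two classification heads; the encoder interface (a Hugging Face-style model exposing last_hidden_state), the head sizes, and the unweighted loss sum are assumptions.

```python
import torch
import torch.nn as nn

class TwoHeadClassifier(nn.Module):
    """Sketch: one shared text encoder, two task-specific classification heads."""
    def __init__(self, encoder, hidden=768, n_aggression=3, n_gendered=2):
        super().__init__()
        self.encoder = encoder                     # e.g. a Hugging Face BERT model (assumption)
        self.aggression_head = nn.Linear(hidden, n_aggression)
        self.gendered_head = nn.Linear(hidden, n_gendered)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]          # [CLS] representation
        return self.aggression_head(cls), self.gendered_head(cls)

def multitask_loss(logits_a, logits_g, labels_a, labels_g):
    # Unweighted sum of the two cross-entropy losses (weighting is a design choice).
    ce = nn.CrossEntropyLoss()
    return ce(logits_a, labels_a) + ce(logits_g, labels_g)
```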
Embedding knowledge graphs (KGs) into continuous vector spaces is a focus of
current research. Combining such an embedding model with logic rules has
recently attracted increasing attention. Most previous attempts made a one-time
injection of logic rules, ignoring the interactive nature between embedding
learning and logical inference. And they focused only on hard rules, which
always hold with no exception and usually require extensive manual effort to
create or validate. In this paper, we propose Rule-Guided Embedding (RUGE), a
novel paradigm of KG embedding with iterative guidance from soft rules. RUGE
enables an embedding model to learn simultaneously from 1) labeled triples that
have been directly observed in a given KG, 2) unlabeled triples whose labels
are going to be predicted iteratively, and 3) soft rules with various
confidence levels extracted automatically from the KG. In the learning process,
RUGE iteratively queries rules to obtain soft labels for unlabeled triples, and
integrates such newly labeled triples to update the embedding model. Through
this iterative procedure, knowledge embodied in logic rules may be better
transferred into the learned embeddings. We evaluate RUGE in link prediction on
Freebase and YAGO. Experimental results show that: 1) with rule knowledge
injected iteratively, RUGE achieves significant and consistent improvements
over state-of-the-art baselines; and 2) despite their uncertainties,
automatically extracted soft rules are highly beneficial to KG embedding, even
those with moderate confidence levels. The code and data used for this paper
can be obtained from https://github.com/iieir-km/RUGE. | [] | [
"Graph Embedding",
"Knowledge Graph Embedding",
"Knowledge Graphs",
"Link Prediction"
] | [] | [
"FB15k",
"YAGO37"
] | [
"Hits@3",
"Hits@5",
"Hits@1",
"MRR",
"Hits@10"
] | Knowledge Graph Embedding with Iterative Guidance from Soft Rules |
We present a novel training framework for neural sequence models,
particularly for grounded dialog generation. The standard training paradigm for
these models is maximum likelihood estimation (MLE), or minimizing the
cross-entropy of the human responses. Across a variety of domains, a recurring
problem with MLE trained generative neural dialog models (G) is that they tend
to produce 'safe' and generic responses ("I don't know", "I can't tell"). In
contrast, discriminative dialog models (D) that are trained to rank a list of
candidate human responses outperform their generative counterparts in terms of
automatic metrics, diversity, and informativeness of the responses. However, D
is not useful in practice since it cannot be deployed to have real
conversations with users.
Our work aims to achieve the best of both worlds -- the practical usefulness
of G and the strong performance of D -- via knowledge transfer from D to G. Our
primary contribution is an end-to-end trainable generative visual dialog model,
where G receives gradients from D as a perceptual (not adversarial) loss of the
sequence sampled from G. We leverage the recently proposed Gumbel-Softmax (GS)
approximation to the discrete distribution -- specifically, an RNN augmented
with a sequence of GS samplers, coupled with the straight-through gradient
estimator to enable end-to-end differentiability. We also introduce a stronger
encoder for visual dialog, and employ a self-attention mechanism for answer
encoding along with a metric learning loss to aid D in better capturing
semantic similarities in answer responses. Overall, our proposed model
outperforms state-of-the-art on the VisDial dataset by a significant margin
(2.67% on recall@10). The source code can be downloaded from
https://github.com/jiasenlu/visDial.pytorch. | [] | [
"Metric Learning",
"Transfer Learning",
"Visual Dialog"
] | [] | [
"VisDial v0.9 val"
] | [
"R@10",
"R@5",
"Mean Rank",
"MRR",
"R@1"
] | Best of Both Worlds: Transferring Knowledge from Discriminative Learning to a Generative Visual Dialog Model |
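The abstract above relies on straight-through Gumbel-Softmax sampling so that gradients from the discriminator can flow into the generator through discrete word choices. The snippet below illustrates just that mechanism with PyTorch's built-in gumbel_softmax; the sizes are placeholders and the dialog model itself is not shown.

```python
import torch
import torch.nn.functional as F

def sample_token_st(logits, embedding, tau=1.0):
    """Straight-through Gumbel-Softmax: one-hot sample in the forward pass,
    gradients flow through the soft relaxation in the backward pass."""
    one_hot = F.gumbel_softmax(logits, tau=tau, hard=True)   # (batch, vocab), one-hot forward
    return one_hot @ embedding.weight                        # differentiable token embedding

# Usage sketch (sizes are placeholders):
vocab, dim = 1000, 64
emb = torch.nn.Embedding(vocab, dim)
logits = torch.randn(8, vocab, requires_grad=True)
tok = sample_token_st(logits, emb)
tok.sum().backward()                                         # gradients reach `logits`
```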
Segmentation of the pixels corresponding to human skin is an essential first step in multiple applications ranging from surveillance to heart-rate estimation from remote photoplethysmography. However, the existing literature considers the problem only in the visible range of the EM spectrum, which limits its utility in low- or no-light settings where the criticality of the application is higher. To alleviate this problem, we consider the problem of skin segmentation from near-infrared (NIR) images. However, state-of-the-art deep-learning-based segmentation techniques demand large amounts of labelled data, which are unavailable for the current problem. Therefore, we cast the skin segmentation problem as one of target-independent Unsupervised Domain Adaptation (UDA), where we use data from the red channel of the visible range to develop a skin segmentation algorithm for NIR images. We propose a method for target-independent segmentation in which the 'nearest clone' of a target image in the source domain is searched for and used as a proxy in a segmentation network trained only on the source domain. We prove the existence of the 'nearest clone' and propose a method to find it through an optimization algorithm over the latent space of a deep generative model based on variational inference. We demonstrate the efficacy of the proposed method for NIR skin segmentation over state-of-the-art UDA segmentation methods on two newly created skin segmentation datasets in the NIR domain, despite not having access to the target NIR data. Additionally, we report state-of-the-art results for adaptation from Synthia to Cityscapes, which is a popular setting in unsupervised domain adaptation for semantic segmentation. The code and datasets are available at https://github.com/ambekarsameer96/GLSS. | [] | [
"Domain Adaptation",
"Heart rate estimation",
"Image-to-Image Translation",
"Semantic Segmentation",
"Unsupervised Domain Adaptation",
"Variational Inference"
] | [] | [
"SYNTHIA-to-Cityscapes"
] | [
"mIoU (13 classes)"
] | Unsupervised Domain Adaptation for Semantic Segmentation of NIR Images through Generative Latent Search |
Person re-identification (person re-ID) is mostly viewed as an image
retrieval problem. This task aims to search for a query person in a large image
pool. In practice, person re-ID usually adopts automatic detectors to obtain
cropped pedestrian images. However, this process suffers from two types of
detector errors: excessive background and part missing. Both errors deteriorate
the quality of pedestrian alignment and may compromise pedestrian matching due
to the position and scale variances. To address the misalignment problem, we
propose that alignment can be learned from an identification procedure. We
introduce the pedestrian alignment network (PAN) which allows discriminative
embedding learning and pedestrian alignment without extra annotations. Our key
observation is that when the convolutional neural network (CNN) learns to
discriminate between different identities, the learned feature maps usually
exhibit strong activations on the human body rather than the background. The
proposed network thus takes advantage of this attention mechanism to adaptively
locate and align pedestrians within a bounding box. Visual examples show that
pedestrians are better aligned with PAN. Experiments on three large-scale re-ID
datasets confirm that PAN improves the discriminative ability of the feature
embeddings and yields competitive accuracy with the state-of-the-art methods. | [] | [
"Image Retrieval",
"Large-Scale Person Re-Identification",
"Person Re-Identification"
] | [] | [
"DukeMTMC-reID",
"Market-1501",
"CUHK03 labeled",
"CUHK03 (detected)"
] | [
"Rank-1",
"MAP"
] | Pedestrian Alignment Network for Large-scale Person Re-identification |
Of late, weakly supervised object detection is of great importance in
object recognition. Based on deep learning, weakly supervised detectors have
achieved many promising results. However, compared with fully supervised
detection, it is more challenging to train deep network based detectors in a
weakly supervised manner. Here we formulate weakly supervised detection as a
Multiple Instance Learning (MIL) problem, where instance classifiers (object
detectors) are put into the network as hidden nodes. We propose a novel online
instance classifier refinement algorithm to integrate MIL and the instance
classifier refinement procedure into a single deep network, and train the
network end-to-end with only image-level supervision, i.e., without object
location information. More precisely, instance labels inferred from weak
supervision are propagated to their spatially overlapped instances to refine
instance classifiers online. The iterative instance classifier refinement
procedure is implemented using multiple streams in a deep network, where each
stream supervises its latter stream. Weakly supervised object detection
experiments are carried out on the challenging PASCAL VOC 2007 and 2012
benchmarks. We obtain 47% mAP on VOC 2007, which significantly outperforms the
previous state-of-the-art. | [] | [
"Multiple Instance Learning",
"Object Detection",
"Object Recognition",
"Weakly Supervised Object Detection"
] | [] | [
"PASCAL VOC 2012 test",
"PASCAL VOC 2007",
"ImageNet"
] | [
"MAP"
] | Multiple Instance Detection Network with Online Instance Classifier Refinement |
Lipreading is the task of decoding text from the movement of a speaker's
mouth. Traditional approaches separated the problem into two stages: designing
or learning visual features, and prediction. More recent deep lipreading
approaches are end-to-end trainable (Wand et al., 2016; Chung & Zisserman,
2016a). However, existing work on models trained end-to-end perform only word
classification, rather than sentence-level sequence prediction. Studies have
shown that human lipreading performance increases for longer words (Easton &
Basala, 1982), indicating the importance of features capturing temporal context
in an ambiguous communication channel. Motivated by this observation, we
present LipNet, a model that maps a variable-length sequence of video frames to
text, making use of spatiotemporal convolutions, a recurrent network, and the
connectionist temporal classification loss, trained entirely end-to-end. To the
best of our knowledge, LipNet is the first end-to-end sentence-level lipreading
model that simultaneously learns spatiotemporal visual features and a sequence
model. On the GRID corpus, LipNet achieves 95.2% accuracy in the sentence-level,
overlapped speaker split task, outperforming experienced human lipreaders and
the previous 86.4% word-level state-of-the-art accuracy (Gergen et al., 2016). | [] | [
"Lipreading"
] | [] | [
"GRID corpus (mixed-speech)"
] | [
"Word Error Rate (WER)"
] | LipNet: End-to-End Sentence-level Lipreading |
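A toy sketch of the LipNet-style pipeline named above: spatiotemporal convolution, a recurrent layer, and the connectionist temporal classification (CTC) loss. The real model uses three STCNN blocks with spatial pooling and Bi-GRUs, so the layer sizes, pooling, and character set here are assumptions.

```python
import torch
import torch.nn as nn

class LipReaderSketch(nn.Module):
    """Toy spatiotemporal-conv + recurrent + CTC pipeline (sizes are assumptions)."""
    def __init__(self, n_chars=28, hidden=256):
        super().__init__()
        self.stcnn = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=(3, 5, 5), padding=(1, 2, 2)), nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)))        # keep time, collapse space
        self.gru = nn.GRU(32, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_chars)        # char classes incl. CTC blank
        self.ctc = nn.CTCLoss(blank=0)

    def forward(self, video):                           # video: (B, 3, T, H, W)
        x = self.stcnn(video).squeeze(-1).squeeze(-1)   # (B, 32, T)
        x, _ = self.gru(x.transpose(1, 2))              # (B, T, 2*hidden)
        return self.fc(x).log_softmax(-1)               # (B, T, n_chars)

    def loss(self, log_probs, targets, input_lens, target_lens):
        # CTCLoss expects (T, B, C) log-probabilities; targets are padded label indices.
        return self.ctc(log_probs.transpose(0, 1), targets, input_lens, target_lens)
```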
The 3D shapes of faces are well known to be discriminative. Yet despite this,
they are rarely used for face recognition and always under controlled viewing
conditions. We claim that this is a symptom of a serious but often overlooked
problem with existing methods for single view 3D face reconstruction: when
applied "in the wild", their 3D estimates are either unstable and change for
different photos of the same subject or they are over-regularized and generic.
In response, we describe a robust method for regressing discriminative 3D
morphable face models (3DMM). We use a convolutional neural network (CNN) to
regress 3DMM shape and texture parameters directly from an input photo. We
overcome the shortage of training data required for this purpose by offering a
method for generating huge numbers of labeled examples. The 3D estimates
produced by our CNN surpass state-of-the-art accuracy on the MICC dataset.
Coupled with a 3D-3D face matching pipeline, we show the first competitive face
recognition results on the LFW, YTF and IJB-A benchmarks using 3D face shapes
as representations, rather than the opaque deep feature vectors used by other
modern systems. | [] | [
"3D Face Reconstruction",
"Face Recognition",
"Face Reconstruction",
"Face Verification"
] | [] | [
"NoW Benchmark",
"YouTube Faces DB",
"Florence",
"Labeled Faces in the Wild"
] | [
"Mean Reconstruction Error (mm)",
"Average 3D Error",
"Accuracy"
] | Regressing Robust and Discriminative 3D Morphable Models with a very Deep Neural Network |
Predicting user responses, such as clicks and conversions, is of great
importance and has found its usage in many Web applications including
recommender systems, web search and online advertising. The data in those
applications is mostly categorical and contains multiple fields; a typical
representation is to transform it into a high-dimensional sparse binary feature
representation via one-hot encoding. Faced with this extreme sparsity,
traditional models may be limited in their capacity to mine shallow patterns from the
data, i.e. low-order feature combinations. Deep models like deep neural
networks, on the other hand, cannot be directly applied to the
high-dimensional input because of the huge feature space. In this paper, we
propose a Product-based Neural Networks (PNN) with an embedding layer to learn
a distributed representation of the categorical data, a product layer to
capture interactive patterns between inter-field categories, and further fully
connected layers to explore high-order feature interactions. Our experimental
results on two large-scale real-world ad click datasets demonstrate that PNNs
consistently outperform the state-of-the-art models on various metrics. | [] | [
"Click-Through Rate Prediction",
"Recommendation Systems"
] | [] | [
"Bing News",
"Amazon",
"MovieLens 20M",
"Criteo",
"Company*",
"Dianping",
"iPinYou"
] | [
"Log Loss",
"AUC"
] | Product-based Neural Networks for User Response Prediction |
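A minimal sketch of an inner-product-based PNN as described above: one embedding per categorical field, pairwise inner products as the product layer, and fully connected layers on top. Field count, embedding size, and hidden width are placeholders.

```python
import torch
import torch.nn as nn
from itertools import combinations

class InnerProductNet(nn.Module):
    """Sketch of an inner-product PNN for CTR-style prediction (sizes assumed)."""
    def __init__(self, field_sizes, dim=10, hidden=128):
        super().__init__()
        self.embeds = nn.ModuleList(nn.Embedding(n, dim) for n in field_sizes)
        n_fields = len(field_sizes)
        n_pairs = n_fields * (n_fields - 1) // 2
        self.mlp = nn.Sequential(
            nn.Linear(n_fields * dim + n_pairs, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, x):                                        # x: (B, n_fields) category ids
        e = [emb(x[:, i]) for i, emb in enumerate(self.embeds)]  # list of (B, dim)
        linear = torch.cat(e, dim=1)                             # (B, n_fields * dim)
        products = torch.stack([(e[i] * e[j]).sum(-1)            # pairwise inner products
                                for i, j in combinations(range(len(e)), 2)], dim=1)
        return torch.sigmoid(self.mlp(torch.cat([linear, products], dim=1)))
```

An outer-product variant would replace the pairwise inner products with (compressed) outer products of the field embeddings.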
We introduce recurrent neural network grammars, probabilistic models of
sentences with explicit phrase structure. We explain efficient inference
procedures that allow application to both parsing and language modeling.
Experiments show that they provide better parsing in English than any single
previously published supervised generative model and better language modeling
than state-of-the-art sequential RNNs in English and Chinese. | [] | [
"Constituency Parsing",
"Language Modelling"
] | [] | [
"Penn Treebank"
] | [
"F1 score"
] | Recurrent Neural Network Grammars |
We develop a new edge detection algorithm that tackles two important issues
in this long-standing vision problem: (1) holistic image training and
prediction; and (2) multi-scale and multi-level feature learning. Our proposed
method, holistically-nested edge detection (HED), performs image-to-image
prediction by means of a deep learning model that leverages fully convolutional
neural networks and deeply-supervised nets. HED automatically learns rich
hierarchical representations (guided by deep supervision on side responses)
that are important in order to approach the human ability to resolve the
challenging ambiguity in edge and object boundary detection. We significantly
advance the state-of-the-art on the BSD500 dataset (ODS F-score of .782) and
the NYU Depth dataset (ODS F-score of .746), and do so with an improved speed
(0.4 second per image) that is orders of magnitude faster than some recent
CNN-based edge detection algorithms. | [] | [
"Boundary Detection",
"Edge Detection"
] | [] | [
"BIPED"
] | [
"ODS"
] | Holistically-Nested Edge Detection |
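A minimal illustration of the two ideas highlighted above, side outputs with deep supervision and holistic image-to-image prediction. The real HED sits on a VGG backbone with five side outputs and weighted fusion, so the two-stage network and unweighted losses below are simplifications.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyHED(nn.Module):
    """Toy two-stage network with deeply supervised side outputs and a fused edge map."""
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.side1 = nn.Conv2d(16, 1, 1)               # 1x1 conv side output
        self.side2 = nn.Conv2d(32, 1, 1)
        self.fuse = nn.Conv2d(2, 1, 1)                 # learned fusion of side outputs

    def forward(self, x):
        h, w = x.shape[-2:]
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        s1 = self.side1(f1)
        s2 = F.interpolate(self.side2(f2), size=(h, w), mode='bilinear', align_corners=False)
        fused = self.fuse(torch.cat([s1, s2], dim=1))
        return [s1, s2, fused]

def hed_loss(outputs, target):
    # Deep supervision: every side output and the fused map get their own BCE loss.
    return sum(F.binary_cross_entropy_with_logits(o, target) for o in outputs)
```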