abstract (string, 13–4.33k chars) | field (sequence) | task (sequence) | method (sequence) | dataset (sequence) | metric (sequence) | title (string, 10–194 chars) |
---|---|---|---|---|---|---|
We present a multilingual Named Entity Recognition approach based on a robust
and general set of features across languages and datasets. Our system combines
shallow local information with clustering semi-supervised features induced on
large amounts of unlabeled text. Understanding via empirical experimentation
how to effectively combine various types of clustering features allows us to
seamlessly export our system to other datasets and languages. The result is a
simple but highly competitive system which obtains state of the art results
across five languages and twelve datasets. The results are reported on standard
shared task evaluation data such as CoNLL for English, Spanish and Dutch.
Furthermore, and despite the lack of linguistically motivated features, we also
report best results for languages such as Basque and German. In addition, we
demonstrate that our method also obtains very competitive results even when the
amount of supervised data is cut by half, alleviating the dependency on
manually annotated data. Finally, the results show that our emphasis on
clustering features is crucial to develop robust out-of-domain models. The
system and models are freely available to facilitate their use and guarantee the
reproducibility of results. | [] | [
"Named Entity Recognition"
] | [] | [
"CoNLL 2003 (English)"
] | [
"F1"
] | Robust Multilingual Named Entity Recognition with Shallow Semi-Supervised Features |
Recently, image representation built upon Convolutional Neural Network (CNN)
has been shown to provide effective descriptors for image search, outperforming
pre-CNN features as short-vector representations. Yet such models are not
compatible with geometry-aware re-ranking methods and are still outperformed, on
some particular object retrieval benchmarks, by traditional image search
systems relying on precise descriptor matching, geometric re-ranking, or query
expansion. This work revisits both retrieval stages, namely initial search and
re-ranking, by employing the same primitive information derived from the CNN.
We build compact feature vectors that encode several image regions without the
need to feed multiple inputs to the network. Furthermore, we extend integral
images to handle max-pooling on convolutional layer activations, allowing us to
efficiently localize matching objects. The resulting bounding box is finally
used for image re-ranking. As a result, this paper significantly improves
the existing CNN-based recognition pipeline: we report, for the first time, results
competing with traditional methods on the challenging Oxford5k and Paris6k
datasets. | [] | [
"Image Retrieval"
] | [] | [
"Par106k",
"Par6k",
"Oxf105k"
] | [
"mAP",
"MAP"
] | Particular object retrieval with integral max-pooling of CNN activations |
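A minimal sketch of the integral-image primitive the abstract above builds on, assuming a generalized-mean approximation of region max-pooling (the exponent and exact scheme are assumptions, not the paper's published formulation): an integral image gives O(1) region sums, and summing activations raised to a power p approximates the region maximum as p grows, so many candidate boxes can be scored cheaply from a single pass over a (non-negative, e.g. ReLU) feature map.

```python
import numpy as np

def integral_image(feat):
    # feat: (H, W) activation map of a single channel.
    return feat.cumsum(axis=0).cumsum(axis=1)

def region_sum(ii, y0, x0, y1, x1):
    # Sum of feat[y0:y1, x0:x1] in O(1); bounds are half-open, as in NumPy slicing.
    s = ii[y1 - 1, x1 - 1]
    if y0 > 0:
        s -= ii[y0 - 1, x1 - 1]
    if x0 > 0:
        s -= ii[y1 - 1, x0 - 1]
    if y0 > 0 and x0 > 0:
        s += ii[y0 - 1, x0 - 1]
    return s

def approx_region_max(feat, y0, x0, y1, x1, p=10.0):
    # Generalized mean ((1/n) * sum a^p)^(1/p) approaches the max as p grows,
    # and its numerator comes from one integral image of feat**p.
    ii = integral_image(feat ** p)
    n = (y1 - y0) * (x1 - x0)
    return (region_sum(ii, y0, x0, y1, x1) / n) ** (1.0 / p)
```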
Word vectors and Language Models (LMs) pretrained on a large amount of unlabelled data can dramatically improve various Natural Language Processing (NLP) tasks. However, the measure and impact of similarity between pretraining data and target task data are left to intuition. We propose three cost-effective measures to quantify different aspects of similarity between source pretraining and target task data. We demonstrate that these measures are good predictors of the usefulness of pretrained models for Named Entity Recognition (NER) over 30 data pairs. Results also suggest that pretrained LMs are more effective and more predictable than pretrained word vectors, but pretrained word vectors are better when pretraining data is dissimilar. | [] | [
"Named Entity Recognition"
] | [] | [
"WetLab",
"JNLPBA"
] | [
"F1"
] | Using Similarity Measures to Select Pretraining Data for NER |
Skeleton-based human action recognition has attracted great interest thanks to the easy accessibility of the human skeleton data. Recently, there has been a trend of using very deep feedforward neural networks to model the 3D coordinates of joints without considering the computational efficiency. In this paper, we propose a simple yet effective semantics-guided neural network (SGN) for skeleton-based action recognition. We explicitly introduce the high level semantics of joints (joint type and frame index) into the network to enhance the feature representation capability. In addition, we exploit the relationship of joints hierarchically through two modules, i.e., a joint-level module for modeling the correlations of joints in the same frame and a frame-level module for modeling the dependencies of frames by taking the joints in the same frame as a whole. A strong baseline is proposed to facilitate the study of this field. With an order of magnitude smaller model size than most previous works, SGN achieves the state-of-the-art performance on the NTU60, NTU120, and SYSU datasets. The source code is available at https://github.com/microsoft/SGN. | [] | [
"Action Recognition",
"Skeleton Based Action Recognition",
"Temporal Action Localization"
] | [] | [
"NTU RGB+D",
"N-UCLA",
"SYSU 3D"
] | [
"Accuracy (CS)",
"Accuracy (CV)",
"Accuracy"
] | Semantics-Guided Neural Networks for Efficient Skeleton-Based Human Action Recognition |
Cross-view image translation is challenging because it involves images with
drastically different views and severe deformation. In this paper, we propose a
novel approach named Multi-Channel Attention SelectionGAN (SelectionGAN) that
makes it possible to generate images of natural scenes in arbitrary viewpoints,
based on an image of the scene and a novel semantic map. The proposed
SelectionGAN explicitly utilizes the semantic information and consists of two
stages. In the first stage, the condition image and the target semantic map are
fed into a cycled semantic-guided generation network to produce initial coarse
results. In the second stage, we refine the initial results by using a
multi-channel attention selection mechanism. Moreover, uncertainty maps
automatically learned from attentions are used to guide the pixel loss for
better network optimization. Extensive experiments on Dayton, CVUSA and Ego2Top
datasets show that our model is able to generate significantly better results
than the state-of-the-art methods. The source code, data and trained models are
available at https://github.com/Ha0Tang/SelectionGAN. | [] | [
"Bird View Synthesis",
"Cross-View Image-to-Image Translation",
"Image-to-Image Translation"
] | [] | [
"cvusa",
"Dayton (256×256) - ground-to-aerial",
"Dayton (64x64) - ground-to-aerial",
"Dayton (64×64) - aerial-to-ground",
"Ego2Top",
"Dayton (256×256) - aerial-to-ground"
] | [
"SSIM"
] | Multi-Channel Attention Selection GAN with Cascaded Semantic Guidance for Cross-View Image Translation |
Semi-supervised learning (SSL) provides an effective
means of leveraging unlabeled data to improve a model’s
performance. In this paper, we demonstrate the power of a
simple combination of two common SSL methods: consistency regularization and pseudo-labeling. Our algorithm,
FixMatch, first generates pseudo-labels using the model’s
predictions on weakly-augmented unlabeled images. For a
given image, the pseudo-label is only retained if the model
produces a high-confidence prediction. The model is then
trained to predict the pseudo-label when fed a strongly-augmented version of the same image. Despite its simplicity, we show that FixMatch achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks, including 94.93% accuracy on CIFAR-10
with 250 labels and 88.61% accuracy with 40 – just 4 labels per class. Since FixMatch bears many similarities
to existing SSL methods that achieve worse performance,
we carry out an extensive ablation study to tease apart
the experimental factors that are most important to FixMatch’s success. We make our code available at https:
//github.com/google-research/fixmatch. | [] | [
"Image Classification"
] | [] | [
"STL-10"
] | [
"Percentage correct"
] | FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence |
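The unlabeled-data objective described above (a confidence-thresholded pseudo-label from the weakly augmented view, cross-entropy on the strongly augmented view) is simple enough to sketch. This is a hedged illustration, not the official implementation; the function name and threshold value are assumptions.

```python
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, x_weak, x_strong, threshold=0.95):
    # Pseudo-label each unlabeled image from its weakly augmented view.
    with torch.no_grad():
        probs = F.softmax(model(x_weak), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= threshold).float()   # keep only high-confidence predictions
    # Train the strongly augmented view to predict the retained pseudo-labels.
    logits = model(x_strong)
    per_example = F.cross_entropy(logits, pseudo, reduction="none")
    return (per_example * mask).mean()
```

In training this term is added to the ordinary supervised cross-entropy on the labeled batch.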
Fine-Grained Visual Categorization (FGVC) is a challenging topic in computer vision, characterized by large intra-class differences and subtle inter-class differences. In this paper, we tackle this problem in a weakly supervised manner, where neural network models are fed additional data through a visual-attention-based data augmentation technique. We perform domain adaptive knowledge transfer via fine-tuning on our base network model. We run experiments on six challenging and commonly used FGVC datasets and show competitive improvements in accuracy by using attention-aware data augmentation with features derived from the InceptionV3 deep learning model, pre-trained on large-scale datasets. Our method outperforms competitor methods on multiple FGVC datasets and shows competitive results on the others. Experimental studies show that transfer learning from large-scale datasets can be utilized effectively with visual-attention-based data augmentation to obtain state-of-the-art results on several FGVC datasets. We present a comprehensive analysis of our experiments. Our method achieves state-of-the-art results on multiple fine-grained classification datasets, including the challenging CUB200-2011 bird, Flowers-102, and FGVC-Aircraft datasets. | [] | [
"Data Augmentation",
"Fine-Grained Image Classification",
"Fine-Grained Visual Categorization",
"Image Classification",
"Transfer Learning"
] | [] | [
"FGVC Aircraft",
"CUB-200-2011",
"Flowers-102",
"Food-101",
"Stanford Dogs",
"Stanford Cars"
] | [
"Top-1",
"Top 1 Accuracy",
"Accuracy"
] | Domain Adaptive Transfer Learning on Visual Attention Aware Data Augmentation for Fine-grained Visual Categorization |
Most existing subspace clustering methods hinge on self-expression of handcrafted representations and are unaware of potential clustering errors. Thus they perform unsatisfactorily on real data with complex underlying subspaces. To solve this issue, we propose a novel deep adversarial subspace clustering (DASC) model, which learns more favorable sample representations by deep learning for subspace clustering, and more importantly introduces adversarial learning to supervise sample representation learning and subspace clustering. Specifically, DASC consists of a subspace clustering generator and a quality-verifying discriminator, which learn against each other. The generator produces subspace estimation and sample clustering. The discriminator evaluates current clustering performance by inspecting whether the re-sampled data from estimated subspaces have consistent subspace properties, and supervises the generator to progressively improve subspace clustering. Experimental results on the handwritten recognition, face and object clustering tasks demonstrate the advantages of DASC over shallow and a few deep subspace clustering models. Moreover, to the best of our knowledge, this is the first successful application of a GAN-like model for unsupervised subspace clustering, which also paves the way for deep learning to solve other unsupervised learning problems. | [] | [
"Image Clustering",
"Representation Learning"
] | [] | [
"coil-40",
"UMist"
] | [
"NMI",
"Accuracy"
] | Deep Adversarial Subspace Clustering |
We introduce Wavesplit, an end-to-end source separation system. From a single mixture, the model infers a representation for each source and then estimates each source signal given the inferred representations. The model is trained to jointly perform both tasks from the raw waveform. Wavesplit infers a set of source representations via clustering, which addresses the fundamental permutation problem of separation. For speech separation, our sequence-wide speaker representations provide a more robust separation of long, challenging recordings compared to prior work. Wavesplit redefines the state-of-the-art on clean mixtures of 2 or 3 speakers (WSJ0-2/3mix), as well as in noisy and reverberated settings (WHAM/WHAMR). We also set a new benchmark on the recent LibriMix dataset. Finally, we show that Wavesplit is also applicable to other domains, by separating fetal and maternal heart rates from a single abdominal electrocardiogram. | [] | [
"Data Augmentation",
"Speech Separation"
] | [] | [
"wsj0-2mix"
] | [
"SI-SDRi"
] | Wavesplit: End-to-End Speech Separation by Speaker Clustering |
Inspired by recent advances of deep learning in instance segmentation and
object tracking, we introduce the video object segmentation problem as a form of
guided instance segmentation. Our model proceeds on a per-frame basis, guided
by the output of the previous frame towards the object of interest in the next
frame. We demonstrate that highly accurate object segmentation in videos can be
enabled by using a convnet trained with static images only. The key ingredient
of our approach is a combination of offline and online learning strategies,
where the former serves to produce a refined mask from the previous frame
estimate and the latter allows us to capture the appearance of the specific object
instance. Our method can handle different types of input annotations: bounding
boxes and segments, as well as incorporate multiple annotated frames, making
the system suitable for diverse applications. We obtain competitive results on
three different datasets, independently from the type of input annotation. | [] | [
"Instance Segmentation",
"Object Tracking",
"Semantic Segmentation",
"Semi-Supervised Video Object Segmentation",
"Video Object Segmentation",
"Video Semantic Segmentation",
"Visual Object Tracking"
] | [] | [
"DAVIS 2016",
"YouTube"
] | [
"F-measure (Decay)",
"Jaccard (Mean)",
"mIoU",
"F-measure (Recall)",
"Jaccard (Decay)",
"Jaccard (Recall)",
"F-measure (Mean)",
"J&F"
] | Learning Video Object Segmentation from Static Images |
Multi-person pose estimation in images and videos is an important yet
challenging task with many applications. Despite the large improvements in
human pose estimation enabled by the development of convolutional neural
networks, there still exist a lot of difficult cases where even the
state-of-the-art models fail to correctly localize all body joints. This
motivates the need for an additional refinement step that addresses these
challenging cases and can be easily applied on top of any existing method. In
this work, we introduce a pose refinement network (PoseRefiner) which takes as
input both the image and a given pose estimate and learns to directly predict a
refined pose by jointly reasoning about the input-output space. In order for
the network to learn to refine incorrect body joint predictions, we employ a
novel data augmentation scheme for training, where we model "hard" human pose
cases. We evaluate our approach on four popular large-scale pose estimation
benchmarks such as MPII Single- and Multi-Person Pose Estimation, PoseTrack
Pose Estimation, and PoseTrack Pose Tracking, and report systematic improvement
over the state of the art. | [] | [
"Data Augmentation",
"Keypoint Detection",
"Multi-Person Pose Estimation",
"Multi-Person Pose Estimation and Tracking",
"Pose Estimation",
"Pose Tracking"
] | [] | [
"PoseTrack2018",
"MPII Single Person",
"MPII Multi-Person"
] | [
"[email protected]",
"MOTA",
"MAP",
"AP",
"[email protected]"
] | Learning to Refine Human Pose Estimation |
Processing point cloud data is an important component of many real-world systems. As such, a wide variety of point-based approaches have been proposed, reporting steady benchmark improvements over time. We study the key ingredients of this progress and uncover two critical results. First, we find that auxiliary factors like different evaluation schemes, data augmentation strategies, and loss functions, which are independent of the model architecture, make a large difference in performance. The differences are large enough that they obscure the effect of architecture. When these factors are controlled for, PointNet++, a relatively older network, performs competitively with recent methods. Second, a very simple projection-based method, which we refer to as SimpleView, performs surprisingly well. It achieves on par or better results than sophisticated state-of-the-art methods on ModelNet40, while being half the size of PointNet++. It also outperforms state-of-the-art methods on ScanObjectNN, a real-world point cloud benchmark, and demonstrates better cross-dataset generalization.
| [] | [
"3D Point Cloud Classification"
] | [] | [
"ScanObjectNN",
"ModelNet40"
] | [
"Overall Accuracy"
] | Revisiting Point Cloud Classification with a Simple and Effective Baseline |
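A rough sketch of the projection idea behind the baseline above: render the point cloud into a small depth image per view and feed the images to an ordinary image CNN. The resolution, scaling, and the single fixed view below are simplifications, not the paper's exact setup.

```python
import numpy as np

def depth_view(points, resolution=64):
    # points: (N, 3) array, assumed roughly centred and scaled to [-1, 1].
    # Orthographic projection onto the xy-plane, keeping the nearest depth per pixel.
    img = np.full((resolution, resolution), np.inf)
    cols = np.clip(((points[:, 0] + 1) / 2 * (resolution - 1)).astype(int), 0, resolution - 1)
    rows = np.clip(((points[:, 1] + 1) / 2 * (resolution - 1)).astype(int), 0, resolution - 1)
    for r, c, z in zip(rows, cols, points[:, 2]):
        img[r, c] = min(img[r, c], z)
    img[np.isinf(img)] = 0.0   # empty pixels get a background value
    return img
```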
Recent advances in adaptive object detection have achieved compelling results in virtue of adversarial feature adaptation to mitigate the distributional shifts along the detection pipeline. Whilst adversarial adaptation significantly enhances the transferability of feature representations, the feature discriminability of object detectors remains less investigated. Moreover, transferability and discriminability may come at a contradiction in adversarial adaptation given the complex combinations of objects and the differentiated scene layouts between domains. In this paper, we propose a Hierarchical Transferability Calibration Network (HTCN) that hierarchically (local-region/image/instance) calibrates the transferability of feature representations for harmonizing transferability and discriminability. The proposed model consists of three components: (1) Importance Weighted Adversarial Training with input Interpolation (IWAT-I), which strengthens the global discriminability by re-weighting the interpolated image-level features; (2) Context-aware Instance-Level Alignment (CILA) module, which enhances the local discriminability by capturing the underlying complementary effect between the instance-level feature and the global context information for the instance-level feature alignment; (3) local feature masks that calibrate the local transferability to provide semantic guidance for the following discriminative pattern alignment. Experimental results show that HTCN significantly outperforms the state-of-the-art methods on benchmark datasets. | [] | [
"Object Detection",
"Weakly Supervised Object Detection"
] | [] | [
"Cityscapes-to-Foggy Cityscapes"
] | [
"mAP"
] | Harmonizing Transferability and Discriminability for Adapting Object Detectors |
Most existing Re-IDentification (Re-ID) methods are highly dependent on precise bounding boxes that enable images to be aligned with each other. However, due to the challenging practical scenarios, current detection models often produce inaccurate bounding boxes, which inevitably degenerate the performance of existing Re-ID algorithms. In this paper, we propose a novel coarse-to-fine pyramid model to relax the need of bounding boxes, which not only incorporates local and global information, but also integrates the gradual cues between them. The pyramid model is able to match at different scales and then search for the correct image of the same identity, even when the image pairs are not aligned. In addition, in order to learn discriminative identity representation, we explore a dynamic training scheme to seamlessly unify two losses and extract appropriate shared information between them. Experimental results clearly demonstrate that the proposed method achieves the state-of-the-art results on three datasets. Especially, our approach exceeds the current best method by 9.5% on the most challenging CUHK03 dataset. | [] | [
"Person Re-Identification"
] | [] | [
"CUHK03 detected",
"DukeMTMC-reID",
"Market-1501",
"CUHK03 labeled"
] | [
"Rank-1",
"MAP"
] | Pyramidal Person Re-IDentification via Multi-Loss Dynamic Training |
Describes an audio dataset of spoken words designed to help train and
evaluate keyword spotting systems. Discusses why this task is an interesting
challenge, and why it requires a specialized dataset that is different from
conventional datasets used for automatic speech recognition of full sentences.
Suggests a methodology for reproducible and comparable accuracy metrics for
this task. Describes how the data was collected and verified, what it contains,
previous versions and properties. Concludes by reporting baseline results of
models trained on this dataset. | [] | [
"Accuracy Metrics",
"Keyword Spotting",
"Speech Recognition"
] | [] | [
"TensorFlow"
] | [
"TFMA"
] | Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition |
Top-performing deep architectures are trained on massive amounts of labeled
data. In the absence of labeled data for a certain task, domain adaptation
often provides an attractive option given that labeled data of similar nature
but from a different domain (e.g. synthetic images) are available. Here, we
propose a new approach to domain adaptation in deep architectures that can be
trained on a large amount of labeled data from the source domain and a large amount
of unlabeled data from the target domain (no labeled target-domain data is
necessary).
As the training progresses, the approach promotes the emergence of "deep"
features that are (i) discriminative for the main learning task on the source
domain and (ii) invariant with respect to the shift between the domains. We
show that this adaptation behaviour can be achieved in almost any feed-forward
model by augmenting it with a few standard layers and a simple new gradient
reversal layer. The resulting augmented architecture can be trained using
standard backpropagation.
Overall, the approach can be implemented with little effort using any of the
deep-learning packages. The method performs very well in a series of image
classification experiments, achieving an adaptation effect in the presence of big
domain shifts and outperforming previous state-of-the-art on Office datasets. | [] | [
"Domain Adaptation",
"Image Classification",
"Transfer Learning",
"Unsupervised Domain Adaptation",
"Unsupervised Image-To-Image Translation"
] | [] | [
"SVNH-to-MNIST",
"Office-Home",
"UCF-to-HMDBfull",
"Olympic-to-HMDBsmall",
"Office-31",
"HMDBsmall-to-UCF",
"HMDBfull-to-UCF",
"UCF-to-Olympic",
"UCF-to-HMDBsmall"
] | [
"Classification Accuracy",
"Accuracy"
] | Unsupervised Domain Adaptation by Backpropagation |
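The "simple new gradient reversal layer" mentioned above is small enough to show directly. The original code predates PyTorch, so treat this as an illustrative port rather than the authors' implementation.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales gradients by -lambda in the backward pass."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The reversed (negated) gradient flows back into the feature extractor.
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)
```

Features are passed through `grad_reverse` before the domain classifier, so minimizing the domain-classification loss pushes the shared feature extractor toward domain-invariant representations while standard backpropagation trains everything end to end.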
In many real-world problems, collecting a large number of labeled samples is infeasible. Few-shot learning (FSL) is the dominant approach to address this issue, where the objective is to quickly adapt to novel categories in presence of a limited number of samples. FSL tasks have been predominantly solved by leveraging the ideas from gradient-based meta-learning and metric learning approaches. However, recent works have demonstrated the significance of powerful feature representations with a simple embedding network that can outperform existing sophisticated FSL algorithms. In this work, we build on this insight and propose a novel training mechanism that simultaneously enforces equivariance and invariance to a general set of geometric transformations. Equivariance or invariance has been employed standalone in the previous works; however, to the best of our knowledge, they have not been used jointly. Simultaneous optimization for both of these contrasting objectives allows the model to jointly learn features that are not only independent of the input transformation but also the features that encode the structure of geometric transformations. These complementary sets of features help generalize well to novel classes with only a few data samples. We achieve additional improvements by incorporating a novel self-supervised distillation objective. Our extensive experimentation shows that even without knowledge distillation our proposed method can outperform current state-of-the-art FSL methods on five popular benchmark datasets. | [] | [
"Few-Shot Image Classification",
"Few-Shot Learning",
"Knowledge Distillation",
"Meta-Learning",
"Metric Learning"
] | [] | [
"FC100 5-way (1-shot)",
"CIFAR-FS 5-way (5-shot)",
"Meta-Dataset",
"Mini-Imagenet 5-way (1-shot)",
"Tiered ImageNet 5-way (1-shot)",
"Mini-Imagenet 5-way (5-shot)",
"CIFAR-FS 5-way (1-shot)",
"FC100 5-way (5-shot)",
"Tiered ImageNet 5-way (5-shot)"
] | [
"Accuracy"
] | Exploring Complementary Strengths of Invariant and Equivariant Representations for Few-Shot Learning |
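A loose sketch of jointly supervising invariance and equivariance to a set of input transformations, as the abstract above describes at a high level. The heads, losses, and weighting here are assumptions for illustration; the actual method also includes self-supervised distillation and other components.

```python
import torch
import torch.nn.functional as F

def inv_equiv_losses(encoder, equiv_head, x, transforms):
    # x: (B, C, H, W) batch; transforms: list of callables (e.g. fixed rotations).
    t = torch.randint(len(transforms), (1,)).item()
    x_t = transforms[t](x)
    z, z_t = encoder(x), encoder(x_t)
    # Invariance: embeddings of x and t(x) should agree.
    loss_inv = F.mse_loss(z_t, z.detach())
    # Equivariance: the applied transformation should be predictable from z_t.
    logits = equiv_head(z_t)                                   # (B, num_transforms)
    target = torch.full((logits.size(0),), t, dtype=torch.long, device=logits.device)
    loss_equiv = F.cross_entropy(logits, target)
    return loss_inv, loss_equiv
```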
Advancing research in the emerging field of deep graph learning requires new tools to support tensor computation over graphs. In this paper, we present the design principles and implementation of Deep Graph Library (DGL). DGL distills the computational patterns of GNNs into a few generalized sparse tensor operations suitable for extensive parallelization. By advocating graph as the central programming abstraction, DGL can perform optimizations transparently. By cautiously adopting a framework-neutral design, DGL allows users to easily port and leverage the existing components across multiple deep learning frameworks. Our evaluation shows that DGL significantly outperforms other popular GNN-oriented frameworks in both speed and memory consumption over a variety of benchmarks and has little overhead for small scale workloads. | [] | [
"Graph Learning",
"Node Classification"
] | [] | [
"Cora"
] | [
"Accuracy"
] | Deep Graph Library: A Graph-Centric, Highly-Performant Package for Graph Neural Networks |
Previous approaches for scene text detection usually rely on manually defined sliding windows. This work presents an intuitive two-stage region-based method to detect multi-oriented text without any prior knowledge regarding the textual shape. In the first stage, we estimate the possible locations of text instances by detecting and linking corners instead of shifting a set of default anchors. The quadrilateral proposals are geometry adaptive, which allows our method to cope with various text aspect ratios and orientations. In the second stage, we design a new pooling layer named Dual-RoI Pooling which embeds data augmentation inside the region-wise subnetwork for more robust classification and regression over these proposals. Experimental results on public benchmarks confirm that the proposed method is capable of achieving comparable performance with state-of-the-art methods. The code is publicly available at https://github.com/xhzdeng/crpn | [] | [
"Data Augmentation",
"Regression",
"Robust classification",
"Scene Text",
"Scene Text Detection"
] | [] | [
"ICDAR 2013",
"ICDAR 2015",
"COCO-Text"
] | [
"F-Measure",
"Recall",
"Precision"
] | Detecting Multi-Oriented Text with Corner-based Region Proposals |
Many few-shot learning works have two stages: pre-training a base model and adapting it to a novel model. In this paper, we propose to use a closed-form base learner, which constrains the adaptation stage with the pre-trained base model to obtain a better-generalized novel model. A theoretical analysis proves its rationality and indicates how to train a well-generalized base model. We then conduct experiments on four benchmarks and achieve state-of-the-art performance in all cases. Notably, we achieve an accuracy of 87.75% on 5-shot miniImageNet, which outperforms existing methods by approximately 10%. | [] | [
"Few-Shot Image Classification",
"Few-Shot Learning"
] | [] | [
"FC100 5-way (1-shot)",
"CIFAR-FS 5-way (5-shot)",
"Mini-Imagenet 5-way (1-shot)",
"Tiered ImageNet 5-way (1-shot)",
"Mini-Imagenet 5-way (5-shot)",
"CIFAR-FS 5-way (1-shot)",
"FC100 5-way (5-shot)",
"Tiered ImageNet 5-way (5-shot)"
] | [
"Accuracy"
] | Generalized Adaptation for Few-Shot Learning |
We introduce a new and rigorously-formulated PAC-Bayes few-shot meta-learning algorithm that implicitly learns a prior distribution of the model of interest. Our proposed method extends the PAC-Bayes framework from a single task setting to the few-shot learning setting to upper-bound generalisation errors on unseen tasks and samples. We also propose a generative-based approach to model the shared prior and the posterior of task-specific model parameters more expressively compared to the usual diagonal Gaussian assumption. We show that the models trained with our proposed meta-learning algorithm are well calibrated and accurate, with state-of-the-art calibration and classification results on few-shot classification (mini-ImageNet and tiered-ImageNet) and regression (multi-modal task-distribution regression) benchmarks. | [] | [
"Few-Shot Image Classification",
"Few-Shot Learning",
"Meta-Learning",
"Regression"
] | [] | [
"Mini-Imagenet 5-way (1-shot)",
"Tiered ImageNet 5-way (1-shot)",
"Mini-Imagenet 5-way (5-shot)",
"Tiered ImageNet 5-way (5-shot)"
] | [
"Accuracy"
] | PAC-Bayesian Meta-learning with Implicit Prior and Posterior |
Intrusion detection systems (IDS) have become an essential layer in modern ICT systems due to the growing need for cyber safety in the day-to-day world. Because of the uncertainty in identifying attack types and the increased complexity of advanced cyber attacks, IDS call for the integration of Deep Neural Networks (DNNs). In this paper, DNNs are utilized to predict attacks on a Network Intrusion Detection System (N-IDS). A DNN with a learning rate of 0.1 is trained for 1000 epochs, using the KDDCup-'99' dataset for training and benchmarking the network. For comparison, training is done on the same dataset with several classical machine learning algorithms and DNNs with 1 to 5 layers. The results show that a 3-layer DNN outperforms all the other classical machine learning algorithms. | [] | [
"Intrusion Detection",
"Network Intrusion Detection"
] | [] | [
"KDD "
] | [
"Accuracy"
] | Evaluating Shallow and Deep Neural Networks for Network Intrusion Detection Systems in Cyber Security |
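A minimal sketch of the kind of feed-forward classifier the abstract above evaluates. The hidden width and dropout rate are assumptions, not the paper's exact configuration; the abstract only specifies the layer count, the 0.1 learning rate, and the 1000 training epochs.

```python
import torch.nn as nn

def make_ids_dnn(n_features, n_classes, hidden=1024, n_hidden_layers=3):
    # Plain fully connected classifier over KDDCup-style feature vectors.
    layers, width = [], n_features
    for _ in range(n_hidden_layers):
        layers += [nn.Linear(width, hidden), nn.ReLU(), nn.Dropout(p=0.01)]
        width = hidden
    layers.append(nn.Linear(width, n_classes))
    return nn.Sequential(*layers)
```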
Popularized as 'bottom-up' attention, bounding box (or region) based visual features have recently surpassed vanilla grid-based convolutional features as the de facto standard for vision and language tasks like visual question answering (VQA). However, it is not clear whether the advantages of regions (e.g. better localization) are the key reasons for the success of bottom-up attention. In this paper, we revisit grid features for VQA, and find they can work surprisingly well - running more than an order of magnitude faster with the same accuracy (e.g. if pre-trained in a similar fashion). Through extensive experiments, we verify that this observation holds true across different VQA models (reporting a state-of-the-art accuracy on VQA 2.0 test-std, 72.71), datasets, and generalizes well to other tasks like image captioning. As grid features make the model design and training process much simpler, this enables us to train them end-to-end and also use a more flexible network design. We learn VQA models end-to-end, from pixels directly to answers, and show that strong performance is achievable without using any region annotations in pre-training. We hope our findings help further improve the scientific understanding and the practical application of VQA. Code and features will be made available. | [] | [
"Image Captioning",
"Question Answering",
"Visual Question Answering"
] | [] | [
"VQA v2 test-std",
"VQA v2 test-dev"
] | [
"number",
"overall",
"other",
"yes/no",
"Accuracy"
] | In Defense of Grid Features for Visual Question Answering |
Recently, a number of competitive methods have tackled unsupervised representation learning by maximising the mutual information between the representations produced from augmentations. The resulting representations are then invariant to stochastic augmentation strategies, and can be used for downstream tasks such as clustering or classification. Yet data augmentations preserve many properties of an image and so there is potential for a suboptimal choice of representation that relies on matching easy-to-find features in the data. We demonstrate that greedy or local methods of maximising mutual information (such as stochastic gradient optimisation) discover local optima of the mutual information criterion; the resulting representations are also less-ideally suited to complex downstream tasks. Earlier work has not specifically identified or addressed this issue. We introduce deep hierarchical object grouping (DHOG) that computes a number of distinct discrete representations of images in a hierarchical order, eventually generating representations that better optimise the mutual information objective. We also find that these representations align better with the downstream task of grouping into underlying object classes. We tested DHOG on unsupervised clustering, which is a natural downstream test as the target representation is a discrete labelling of the data. We achieved new state-of-the-art results on the three main benchmarks without any prefiltering or Sobel-edge detection that proved necessary for many previous methods to work. We obtain accuracy improvements of: 4.3% on CIFAR-10, 1.5% on CIFAR-100-20, and 7.2% on SVHN. | [] | [
"Edge Detection",
"Representation Learning",
"Unsupervised Representation Learning"
] | [] | [
"CIFAR-10"
] | [
"Train set",
"ARI",
"Backbone",
"NMI",
"Accuracy"
] | DHOG: Deep Hierarchical Object Grouping |
We introduce Data Diversification: a simple but effective strategy to boost neural machine translation (NMT) performance. It diversifies the training data by using the predictions of multiple forward and backward models and then merging them with the original dataset on which the final NMT model is trained. Our method is applicable to all NMT models. It does not require extra monolingual data like back-translation, nor does it add more computations and parameters like ensembles of models. Our method achieves state-of-the-art BLEU scores of 30.7 and 43.7 in the WMT'14 English-German and English-French translation tasks, respectively. It also substantially improves on 8 other translation tasks: 4 IWSLT tasks (English-German and English-French) and 4 low-resource translation tasks (English-Nepali and English-Sinhala). We demonstrate that our method is more effective than knowledge distillation and dual learning, it exhibits strong correlation with ensembles of models, and it trades perplexity off for better BLEU score. We have released our source code at https://github.com/nxphi47/data_diversification | [] | [
"Knowledge Distillation",
"Machine Translation"
] | [] | [
"WMT2014 English-German",
"IWSLT2014 German-English"
] | [
"BLEU score"
] | Data Diversification: A Simple Strategy For Neural Machine Translation |
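The merging strategy described above can be summarized in a few lines. This is a hedged sketch assuming the models are callables that translate a sentence; the number of diversification rounds and any filtering used in the paper are omitted.

```python
def diversify(parallel_data, forward_models, backward_models):
    # parallel_data: list of (src, tgt) sentence pairs.
    # forward_models translate src -> tgt; backward_models translate tgt -> src.
    augmented = list(parallel_data)
    for fwd in forward_models:
        augmented += [(src, fwd(src)) for src, _ in parallel_data]
    for bwd in backward_models:
        augmented += [(bwd(tgt), tgt) for _, tgt in parallel_data]
    return augmented   # the final NMT model is trained on this merged corpus
```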
Neural Architecture Search (NAS) achieved many breakthroughs in recent years. In spite of its remarkable progress, many algorithms are restricted to particular search spaces. They also lack efficient mechanisms to reuse knowledge when confronting multiple tasks. These challenges preclude their applicability, and motivate our proposal of CATCH, a novel Context-bAsed meTa reinforcement learning (RL) algorithm for transferrable arChitecture searcH. The combination of meta-learning and RL allows CATCH to efficiently adapt to new tasks while being agnostic to search spaces. CATCH utilizes a probabilistic encoder to encode task properties into latent context variables, which then guide CATCH's controller to quickly "catch" top-performing networks. The contexts also assist a network evaluator in filtering inferior candidates and speed up learning. Extensive experiments demonstrate CATCH's universality and search efficiency over many other widely-recognized algorithms. It is also capable of handling cross-domain architecture search as competitive networks on ImageNet, COCO, and Cityscapes are identified. This is the first work to our knowledge that proposes an efficient transferrable NAS solution while maintaining robustness across various settings. | [] | [
"Meta-Learning",
"Meta Reinforcement Learning",
"Neural Architecture Search"
] | [] | [
"NAS-Bench-201, ImageNet-16-120"
] | [
"Search time (s)",
"Accuracy (val)"
] | CATCH: Context-based Meta Reinforcement Learning for Transferrable Architecture Search |
Existing RGB-D salient object detection (SOD) approaches concentrate on the cross-modal fusion between the RGB stream and the depth stream. They do not deeply explore the effect of the depth map itself. In this work, we design a single stream network to directly use the depth map to guide early fusion and middle fusion between RGB and depth, which saves the feature encoder of the depth stream and achieves a lightweight and real-time model. We tactfully utilize depth information from two perspectives: (1) Overcoming the incompatibility problem caused by the great difference between modalities, we build a single stream encoder to achieve the early fusion, which can take full advantage of ImageNet pre-trained backbone model to extract rich and discriminative features. (2) We design a novel depth-enhanced dual attention module (DEDA) to efficiently provide the fore-/back-ground branches with the spatially filtered features, which enables the decoder to optimally perform the middle fusion. Besides, we put forward a pyramidally attended feature extraction module (PAFE) to accurately localize the objects of different scales. Extensive experiments demonstrate that the proposed model performs favorably against most state-of-the-art methods under different evaluation metrics. Furthermore, this model is 55.5\% lighter than the current lightest model and runs at a real-time speed of 32 FPS when processing a $384 \times 384$ image. | [] | [
"Object Detection",
"RGB-D Salient Object Detection",
"RGB Salient Object Detection",
"Salient Object Detection"
] | [] | [
"NJU2K"
] | [
"Average MAE",
"S-Measure",
"max F-Measure"
] | A Single Stream Network for Robust and Real-time RGB-D Salient Object Detection |
Recent years have witnessed the significant progress
on convolutional neural networks (CNNs) in dynamic scene
deblurring. While most of the CNN models are generally learned
by the reconstruction loss defined on training data, incorporating
suitable image priors as well as regularization terms into the
network architecture could boost the deblurring performance.
In this work, we propose a Dark and Bright Channel Priors
embedded Network (DBCPeNet) to plug the channel priors into
a neural network for effective dynamic scene deblurring. A
novel trainable dark and bright channel priors embedded layer
(DBCPeL) is developed to aggregate both channel priors and
blurry image representations, and a sparse regularization is
introduced to regularize the DBCPeNet model learning. Furthermore, we present an effective multi-scale network architecture,
namely image full scale exploitation (IFSE), which works in both
coarse-to-fine and fine-to-coarse manners for better exploiting
information flow across scales. Experimental results on the GoPro
and Köhler datasets show that our proposed DBCPeNet performs
favorably against state-of-the-art deep image deblurring methods
in terms of both quantitative metrics and visual quality. | [] | [
"Deblurring"
] | [] | [
"GoPro"
] | [
"SSIM",
"PSNR"
] | Dark and Bright Channel Prior Embedded Network for Dynamic Scene Deblurring |
Inspired by speech recognition, recent state-of-the-art algorithms mostly
consider scene text recognition as a sequence prediction problem. Though
achieving excellent performance, these methods usually neglect an important
fact that text in images are actually distributed in two-dimensional space. It
is a nature quite different from that of speech, which is essentially a
one-dimensional signal. In principle, directly compressing features of text
into a one-dimensional form may lose useful information and introduce extra
noise. In this paper, we approach scene text recognition from a two-dimensional
perspective. A simple yet effective model, called Character Attention Fully
Convolutional Network (CA-FCN), is devised for recognizing the text of
arbitrary shapes. Scene text recognition is realized with a semantic
segmentation network, where an attention mechanism for characters is adopted.
Combined with a word formation module, CA-FCN can simultaneously recognize the
script and predict the position of each character. Experiments demonstrate that
the proposed algorithm outperforms previous methods on both regular and
irregular text datasets. Moreover, it is proven to be more robust to imprecise
localizations in the text detection phase, which are very common in practice. | [] | [
"Scene Text",
"Scene Text Recognition",
"Semantic Segmentation",
"Speech Recognition"
] | [] | [
"ICDAR2013",
"SVT"
] | [
"Accuracy"
] | Scene Text Recognition from Two-Dimensional Perspective |
In this work, we establish dense correspondences between an RGB image and a
surface-based representation of the human body, a task we refer to as dense
human pose estimation. We first gather dense correspondences for 50K persons
appearing in the COCO dataset by introducing an efficient annotation pipeline.
We then use our dataset to train CNN-based systems that deliver dense
correspondence 'in the wild', namely in the presence of background, occlusions
and scale variations. We improve our training set's effectiveness by training
an 'inpainting' network that can fill in missing groundtruth values and report
clear improvements with respect to the best results that would be achievable in
the past. We experiment with fully-convolutional networks and region-based
models and observe a superiority of the latter; we further improve accuracy
through cascading, obtaining a system that delivers highly accurate results in
real time. Supplementary materials and videos are provided on the project page
http://densepose.org | [] | [
"Pose Estimation"
] | [] | [
"DensePose-COCO"
] | [
"AP"
] | DensePose: Dense Human Pose Estimation In The Wild |
Many predictive tasks of web applications need to model categorical
variables, such as user IDs and demographics like genders and occupations. To
apply standard machine learning techniques, these categorical predictors are
always converted to a set of binary features via one-hot encoding, making the
resultant feature vector highly sparse. To learn from such sparse data
effectively, it is crucial to account for the interactions between features.
Factorization Machines (FMs) are a popular solution for efficiently using the
second-order feature interactions. However, FM models feature interactions in a
linear way, which can be insufficient for capturing the non-linear and complex
inherent structure of real-world data. While deep neural networks have recently
been applied to learn non-linear feature interactions in industry, such as the
Wide&Deep by Google and DeepCross by Microsoft, the deep structure meanwhile
makes them difficult to train.
In this paper, we propose a novel model Neural Factorization Machine (NFM)
for prediction under sparse settings. NFM seamlessly combines the linearity of
FM in modelling second-order feature interactions and the non-linearity of
neural network in modelling higher-order feature interactions. Conceptually,
NFM is more expressive than FM since FM can be seen as a special case of NFM
without hidden layers. Empirical results on two regression tasks show that with
one hidden layer only, NFM significantly outperforms FM with a 7.3% relative
improvement. Compared to the recent deep learning methods Wide&Deep and
DeepCross, our NFM uses a shallower structure but offers better performance,
being much easier to train and tune in practice. | [] | [
"Link Prediction",
"Regression"
] | [] | [
"MovieLens 25M",
"Yelp"
] | [
"Hits@10",
"nDCG@10",
"HR@10"
] | Neural Factorization Machines for Sparse Predictive Analytics |
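The second-order interaction modelling mentioned above reduces to the standard FM identity, which NFM's Bi-Interaction pooling reuses before the hidden layers. A dense NumPy sketch follows (real inputs are sparse, and the function name is an assumption):

```python
import numpy as np

def bi_interaction(x, V):
    # x: (n_features,) one-hot / real-valued input; V: (n_features, k) embeddings.
    # Returns the k-dim vector sum_{i<j} (x_i * v_i) * (x_j * v_j) (elementwise
    # products), computed in O(n*k) via 0.5 * [(sum a_i)^2 - sum a_i^2].
    xv = x[:, None] * V
    square_of_sum = xv.sum(axis=0) ** 2
    sum_of_square = (xv ** 2).sum(axis=0)
    return 0.5 * (square_of_sum - sum_of_square)
```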
Histology images are inherently symmetric under rotation, where each orientation is equally as likely to appear. However, this rotational symmetry is not widely utilised as prior knowledge in modern Convolutional Neural Networks (CNNs), resulting in data hungry models that learn independent features at each orientation. Allowing CNNs to be rotation-equivariant removes the necessity to learn this set of transformations from the data and instead frees up model capacity, allowing more discriminative features to be learned. This reduction in the number of required parameters also reduces the risk of overfitting. In this paper, we propose Dense Steerable Filter CNNs (DSF-CNNs) that use group convolutions with multiple rotated copies of each filter in a densely connected framework. Each filter is defined as a linear combination of steerable basis filters, enabling exact rotation and decreasing the number of trainable parameters compared to standard filters. We also provide the first in-depth comparison of different rotation-equivariant CNNs for histology image analysis and demonstrate the advantage of encoding rotational symmetry into modern architectures. We show that DSF-CNNs achieve state-of-the-art performance, with significantly fewer parameters, when applied to three different tasks in the area of computational pathology: breast tumour classification, colon gland segmentation and multi-tissue nuclear segmentation. | [] | [
"Breast Tumour Classification",
"Colorectal Gland Segmentation:",
"Multi-tissue Nucleus Segmentation",
"Nuclear Segmentation"
] | [] | [
"CRAG",
"Kumar",
"PCam"
] | [
"F1-score",
"Hausdorff Distance (mm)",
"AUC",
"Dice"
] | Dense Steerable Filter CNNs for Exploiting Rotational Symmetry in Histology Images |
This paper proposes to tackle open-domain question answering using Wikipedia
as the unique knowledge source: the answer to any factoid question is a text
span in a Wikipedia article. This task of machine reading at scale combines the
challenges of document retrieval (finding the relevant articles) with that of
machine comprehension of text (identifying the answer spans from those
articles). Our approach combines a search component based on bigram hashing and
TF-IDF matching with a multi-layer recurrent neural network model trained to
detect answers in Wikipedia paragraphs. Our experiments on multiple existing QA
datasets indicate that (1) both modules are highly competitive with respect to
existing counterparts and (2) multitask learning using distant supervision on
their combination is an effective complete system on this challenging task. | [] | [
"Open-Domain Question Answering",
"Question Answering",
"Reading Comprehension"
] | [] | [
"SQuAD1.1",
"SearchQA",
"Natural Questions (long)",
"Natural Questions (short)",
"SQuAD1.1 dev",
"Quasart-T"
] | [
"EM",
"F1"
] | Reading Wikipedia to Answer Open-Domain Questions |
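The search component described above (bigram hashing plus TF-IDF matching) can be sketched with standard-library tools. The original system uses murmur hashing and its own tokenizer; hashlib and whitespace tokenization here simply keep the illustration dependency-free.

```python
import hashlib
import math
from collections import Counter

def hashed_grams(text, buckets=2 ** 24):
    # Unigrams + bigrams mapped to integer buckets.
    toks = text.lower().split()
    grams = toks + [" ".join(p) for p in zip(toks, toks[1:])]
    return [int(hashlib.md5(g.encode()).hexdigest(), 16) % buckets for g in grams]

def rank_documents(query, docs):
    # TF-IDF scoring over hashed n-gram counts; returns document indices, best first.
    doc_tfs = [Counter(hashed_grams(d)) for d in docs]
    df = Counter(t for tf in doc_tfs for t in tf)
    n = len(docs)
    q = Counter(hashed_grams(query))
    scores = [
        sum(q[t] * tf[t] * math.log((n + 1) / (df[t] + 1)) for t in q)
        for tf in doc_tfs
    ]
    return sorted(range(n), key=lambda i: -scores[i])
```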
Surgical tool presence detection and surgical phase recognition are two fundamental yet challenging tasks in surgical video analysis and also very essential components in various applications in modern operating rooms. While these two analysis tasks are highly correlated in clinical practice as the surgical process is well-defined, most previous methods tackled them separately, without making full use of their relatedness. In this paper, we present a novel method by developing a multi-task recurrent convolutional network with correlation loss (MTRCNet-CL) to exploit their relatedness to simultaneously boost the performance of both tasks. Specifically, our proposed MTRCNet-CL model has an end-to-end architecture with two branches, which share earlier feature encoders to extract general visual features while holding respective higher layers targeting for specific tasks. Given that temporal information is crucial for phase recognition, long-short term memory (LSTM) is explored to model the sequential dependencies in the phase recognition branch. More importantly, a novel and effective correlation loss is designed to model the relatedness between tool presence and phase identification of each video frame, by minimizing the divergence of predictions from the two branches. Mutually leveraging both low-level feature sharing and high-level prediction correlating, our MTRCNet-CL method can encourage the interactions between the two tasks to a large extent, and hence can bring about benefits to each other. Extensive experiments on a large surgical video dataset (Cholec80) demonstrate outstanding performance of our proposed method, consistently exceeding the state-of-the-art methods by a large margin (e.g., 89.1% v.s. 81.0% for the mAP in tool presence detection and 87.4% v.s. 84.5% for F1 score in phase recognition). The code can be found on our project website. | [] | [
"Surgical tool detection"
] | [] | [
"Cholec80"
] | [
"mAP"
] | Multi-Task Recurrent Convolutional Network with Correlation Loss for Surgical Video Analysis |
We present a new approach to modeling sequential data: the deep equilibrium model (DEQ). Motivated by an observation that the hidden layers of many existing deep sequence models converge towards some fixed point, we propose the DEQ approach that directly finds these equilibrium points via root-finding. Such a method is equivalent to running an infinite depth (weight-tied) feedforward network, but has the notable advantage that we can analytically backpropagate through the equilibrium point using implicit differentiation. Using this approach, training and prediction in these networks require only constant memory, regardless of the effective "depth" of the network. We demonstrate how DEQs can be applied to two state-of-the-art deep sequence models: self-attention transformers and trellis networks. On large-scale language modeling tasks, such as the WikiText-103 benchmark, we show that DEQs 1) often improve performance over these state-of-the-art models (for similar parameter counts); 2) have similar computational requirements to existing models; and 3) vastly reduce memory consumption (often the bottleneck for training large sequence models), demonstrating an up-to 88% memory reduction in our experiments. The code is available at https://github.com/locuslab/deq . | [] | [
"Language Modelling"
] | [] | [
"Penn Treebank (Word Level)",
"WikiText-103"
] | [
"Number of params",
"Test perplexity",
"Params"
] | Deep Equilibrium Models |
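The forward pass described above amounts to finding a fixed point of one weight-tied layer. The paper solves it with quasi-Newton root finding and backpropagates implicitly through the equilibrium; the naive iteration below is only an illustration of the "infinite-depth" view.

```python
import torch

def deq_fixed_point(f, x, z0, max_iter=50, tol=1e-4):
    # Find z* such that z* = f(z*, x) by plain fixed-point iteration.
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if torch.norm(z_next - z) < tol * (torch.norm(z) + 1e-8):
            return z_next
        z = z_next
    return z
```

Because gradients are obtained by solving a linear system at the equilibrium point rather than by unrolling, no intermediate activations need to be stored, which is the source of the constant-memory claim.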
Facial landmark detection aims to localize the anatomically defined points of human faces. In this paper, we study facial landmark detection from partially labeled facial images. A typical approach is to (1) train a detector on the labeled images; (2) generate new training samples using this detector's prediction as pseudo labels of unlabeled images; (3) retrain the detector on the labeled samples and partial pseudo labeled samples. In this way, the detector can learn from both labeled and unlabeled data to become robust. In this paper, we propose an interaction mechanism between a teacher and two students to generate more reliable pseudo labels for unlabeled data, which are beneficial to semi-supervised facial landmark detection. Specifically, the two students are instantiated as dual detectors. The teacher learns to judge the quality of the pseudo labels generated by the students and filter out unqualified samples before the retraining stage. In this way, the student detectors get feedback from their teacher and are retrained by premium data generated by itself. Since the two students are trained by different samples, a combination of their predictions will be more robust as the final prediction compared to either prediction. Extensive experiments on 300-W and AFLW benchmarks show that the interactions between teacher and students contribute to better utilization of the unlabeled data and achieves state-of-the-art performance. | [] | [
"Facial Landmark Detection"
] | [] | [
"300W",
"300W (Full)"
] | [
"NME",
"Mean NME "
] | Teacher Supervises Students How to Learn From Partially Labeled Images for Facial Landmark Detection |
We present a new technique for learning visual-semantic embeddings for
cross-modal retrieval. Inspired by hard negative mining, the use of hard
negatives in structured prediction, and ranking loss functions, we introduce a
simple change to common loss functions used for multi-modal embeddings. That,
combined with fine-tuning and use of augmented data, yields significant gains
in retrieval performance. We showcase our approach, VSE++, on MS-COCO and
Flickr30K datasets, using ablation studies and comparisons with existing
methods. On MS-COCO our approach outperforms state-of-the-art methods by 8.8%
in caption retrieval and 11.3% in image retrieval (at R@1). | [] | [
"Cross-Modal Retrieval",
"Image Retrieval",
"Structured Prediction"
] | [] | [
"Flickr30k"
] | [
"Image-to-text R@5",
"Image-to-text R@1",
"Image-to-text R@10",
"Text-to-image R@10",
"Text-to-image R@1",
"Text-to-image R@5"
] | VSE++: Improving Visual-Semantic Embeddings with Hard Negatives |
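The "simple change to common loss functions" above is the max-of-hinges ranking loss: only the hardest in-batch negative contributes per direction. A hedged PyTorch sketch (margin value and mean reduction are assumptions):

```python
import torch

def vsepp_loss(im, cap, margin=0.2):
    # im, cap: L2-normalised image / caption embeddings of shape (batch, dim),
    # with im[i] paired with cap[i].
    scores = im @ cap.t()                                   # cosine similarity matrix
    pos = scores.diag().view(-1, 1)
    cost_cap = (margin + scores - pos).clamp(min=0)         # wrong captions per image
    cost_im = (margin + scores - pos.t()).clamp(min=0)      # wrong images per caption
    eye = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    cost_cap = cost_cap.masked_fill(eye, 0)
    cost_im = cost_im.masked_fill(eye, 0)
    # Keep only the hardest negative in each row / column.
    return cost_cap.max(dim=1)[0].mean() + cost_im.max(dim=0)[0].mean()
```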
Tracking has traditionally been the art of following interest points through space and time. This changed with the rise of powerful deep networks. Nowadays, tracking is dominated by pipelines that perform object detection followed by temporal association, also known as tracking-by-detection. In this paper, we present a simultaneous detection and tracking algorithm that is simpler, faster, and more accurate than the state of the art. Our tracker, CenterTrack, applies a detection model to a pair of images and detections from the prior frame. Given this minimal input, CenterTrack localizes objects and predicts their associations with the previous frame. That's it. CenterTrack is simple, online (no peeking into the future), and real-time. It achieves 67.3% MOTA on the MOT17 challenge at 22 FPS and 89.4% MOTA on the KITTI tracking benchmark at 15 FPS, setting a new state of the art on both datasets. CenterTrack is easily extended to monocular 3D tracking by regressing additional 3D attributes. Using monocular video input, it achieves 28.3% [email protected] on the newly released nuScenes 3D tracking benchmark, substantially outperforming the monocular baseline on this benchmark while running at 28 FPS. | [] | [
"Multi-Object Tracking",
"Object Detection"
] | [] | [
"KITTI Tracking test"
] | [
"MOTA"
] | Tracking Objects as Points |
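A simplified sketch of the association step the abstract above alludes to: each current detection is displaced by its predicted offset and greedily matched to the nearest previous centre. The real tracker also uses detection confidences and size-dependent distance thresholds; the fixed threshold here is an assumption.

```python
import numpy as np

def associate_by_offset(curr_centers, offsets, prev_centers, max_dist=50.0):
    # curr_centers: (N, 2) detection centres in the current frame.
    # offsets: (N, 2) predicted displacements back to the previous frame.
    # prev_centers: (M, 2) tracked centres from the previous frame.
    displaced = curr_centers - offsets
    matches, used = [-1] * len(curr_centers), set()
    for i, p in enumerate(displaced):
        if len(prev_centers) == 0:
            break
        dists = np.linalg.norm(prev_centers - p, axis=1)
        for j in np.argsort(dists):
            if dists[j] > max_dist:
                break
            if int(j) not in used:
                matches[i] = int(j)
                used.add(int(j))
                break
    return matches   # index into prev_centers, or -1 for a new track
```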
A great proportion of sequence-to-sequence (Seq2Seq) models for Neural
Machine Translation (NMT) adopt Recurrent Neural Network (RNN) to generate
translation word by word following a sequential order. As the studies of
linguistics have proved that language is not a linear word sequence but a sequence
of complex structure, translation at each step should be conditioned on the
whole target-side context. To tackle the problem, we propose a new NMT model
that decodes the sequence with the guidance of its structural prediction of the
context of the target sequence. Our model generates translation based on the
structural prediction of the target-side context so that the translation can be
freed from the bind of sequential order. Experimental results demonstrate that
our model is more competitive compared with the state-of-the-art methods, and
the analysis reflects that our model is also robust to translating sentences of
different lengths and it also reduces repetition with the instruction from the
target-side context for decoding. | [] | [
"Machine Translation"
] | [] | [
"IWSLT2015 English-Vietnamese"
] | [
"BLEU"
] | Deconvolution-Based Global Decoding for Neural Machine Translation |
Comprehensive visual understanding requires detection frameworks that can effectively learn and utilize object interactions while analyzing objects individually. This is the main objective in Human-Object Interaction (HOI) detection task. In particular, relative spatial reasoning and structural connections between objects are essential cues for analyzing interactions, which is addressed by the proposed Visual-Spatial-Graph Network (VSGNet) architecture. VSGNet extracts visual features from the human-object pairs, refines the features with spatial configurations of the pair, and utilizes the structural connections between the pair via graph convolutions. The performance of VSGNet is thoroughly evaluated using the Verbs in COCO (V-COCO) and HICO-DET datasets. Experimental results indicate that VSGNet outperforms state-of-the-art solutions by 8% or 4 mAP in V-COCO and 16% or 3 mAP in HICO-DET. | [] | [
"Human-Object Interaction Detection"
] | [] | [
"HICO-DET",
"V-COCO"
] | [
"Time Per Frame(ms)",
"MAP"
] | VSGNet: Spatial Attention Network for Detecting Human Object Interactions Using Graph Convolutions |
Speaker intent detection and semantic slot filling are two critical tasks in
spoken language understanding (SLU) for dialogue systems. In this paper, we
describe a recurrent neural network (RNN) model that jointly performs intent
detection, slot filling, and language modeling. The neural network model keeps
updating the intent estimation as each word in the transcribed utterance arrives and
uses it as contextual features in the joint model. Evaluation of the language
model and online SLU model is made on the ATIS benchmarking data set. On
language modeling task, our joint model achieves 11.8% relative reduction on
perplexity comparing to the independent training language model. On SLU tasks,
our joint model outperforms the independent task training model by 22.3% on
intent detection error rate, with slight degradation on slot filling F1 score.
The joint model also shows advantageous performance in the realistic ASR
settings with noisy speech input. | [] | [
"Intent Detection",
"Language Modelling",
"Slot Filling",
"Spoken Language Understanding"
] | [] | [
"ATIS"
] | [
"F1",
"Accuracy"
] | Joint Online Spoken Language Understanding and Language Modeling with Recurrent Neural Networks |
We propose to leverage multiple sources of information to compute an estimate of the number of individuals present in an extremely dense crowd visible in a single image. Due to problems including perspective, occlusion, clutter, and few pixels per person, counting by human detection in such images is almost impossible. Instead, our approach relies on multiple sources such as low confidence head detections, repetition of texture elements (using SIFT), and frequency-domain analysis to estimate counts, along with confidence associated with observing individuals, in an image region. Secondly, we employ a global consistency constraint on counts using Markov Random Field. This caters for disparity in counts in local neighborhoods and across scales. We tested our approach on a new dataset of fifty crowd images containing 64K annotated humans, with the head counts ranging from 94 to 4543. This is in stark contrast to datasets used for existing methods which contain not more than tens of individuals. We experimentally demonstrate the efficacy and reliability of the proposed approach by quantifying the counting performance. | [] | [
"Crowd Counting",
"Human Detection"
] | [] | [
"UCF CC 50",
"UCF-QNRF"
] | [
"MAE"
] | Multi-source Multi-scale Counting in Extremely Dense Crowd Images |
Knowledge graph embedding models have gained significant attention in AI research. Recent works have shown that the inclusion of background knowledge, such as logical rules, can improve the performance of embeddings in downstream machine learning tasks. However, so far, most existing models do not allow the inclusion of rules. We address the challenge of including rules and present a new neural based embedding model (LogicENN). We prove that LogicENN can learn every ground truth of encoded rules in a knowledge graph. To the best of our knowledge, this has not been proved so far for the neural based family of embedding models. Moreover, we derive formulae for the inclusion of various rules, including (anti-)symmetric, inverse, irreflexive and transitive, implication, composition, equivalence and negation. Our formulation allows us to avoid grounding for implication and equivalence relations. Our experiments show that LogicENN outperforms the state-of-the-art models in link prediction. | [] | [
"Graph Embedding",
"Knowledge Graph Embedding",
"Knowledge Graphs",
"Link Prediction"
] | [] | [
" FB15k",
"WN18",
"FB15k-237"
] | [
"Hits@10",
"MR",
"MRR"
] | LogicENN: A Neural Based Knowledge Graphs Embedding Model with Logical Rules |
Recognizing objects from subcategories with very subtle differences remains a challenging task due to the large intra-class and small inter-class variation. Recent work tackles this problem in a weakly-supervised manner: object parts are first detected and the corresponding part-specific features are extracted for fine-grained classification. However, these methods typically treat the part-specific features of each image in isolation while neglecting their relationships between different images. In this paper, we propose Cross-X learning, a simple yet effective approach that exploits the relationships between different images and between different network layers for robust multi-scale feature learning. Our approach involves two novel components: (i) a cross-category cross-semantic regularizer that guides the extracted features to represent semantic parts and, (ii) a cross-layer regularizer that improves the robustness of multi-scale features by matching the prediction distribution across multiple layers. Our approach can be easily trained end-to-end and is scalable to large datasets like NABirds. We empirically analyze the contributions of different components of our approach and demonstrate its robustness, effectiveness and state-of-the-art performance on five benchmark datasets. Code is available at \url{https://github.com/cswluo/CrossX}. | [] | [
"Fine-Grained Image Classification",
"Fine-Grained Visual Categorization"
] | [] | [
" CUB-200-2011",
"Stanford Cars",
"FGVC Aircraft",
"NABirds"
] | [
"Accuracy"
] | Cross-X Learning for Fine-Grained Visual Categorization |
In graph neural networks (GNNs), pooling operators compute local summaries of input graphs to capture their global properties, and they are fundamental for building deep GNNs that learn hierarchical representations. In this work, we propose the Node Decimation Pooling (NDP), a pooling operator for GNNs that generates coarser graphs while preserving the overall graph topology. During training, the GNN learns new node representations and fits them to a pyramid of coarsened graphs, which is computed offline in a pre-processing stage. NDP consists of three steps. First, a node decimation procedure selects the nodes belonging to one side of the partition identified by a spectral algorithm that approximates the MaxCut solution. Afterwards, the selected nodes are connected with Kron reduction to form the coarsened graph. Finally, since the resulting graph is very dense, we apply a sparsification procedure that prunes the adjacency matrix of the coarsened graph to reduce the computational cost in the GNN. Notably, we show that it is possible to remove many edges without significantly altering the graph structure. Experimental results show that NDP is more efficient compared to state-of-the-art graph pooling operators while reaching, at the same time, competitive performance on a significant variety of graph classification tasks. | [] | [
"Graph Classification",
"Representation Learning"
] | [] | [
"COLLAB",
"ENZYMES",
"REDDIT-B",
"PROTEINS",
"D&D",
"NCI1",
"MUTAG",
"Mutagenicity",
"Bench-hard",
"5pt. Bench-Easy"
] | [
"Accuracy"
] | Hierarchical Representation Learning in Graph Neural Networks with Node Decimation Pooling |
We introduce a globally normalized transition-based neural network model that
achieves state-of-the-art part-of-speech tagging, dependency parsing and
sentence compression results. Our model is a simple feed-forward neural network
that operates on a task-specific transition system, yet achieves comparable or
better accuracies than recurrent models. We discuss the importance of global as
opposed to local normalization: a key insight is that the label bias problem
implies that globally normalized models can be strictly more expressive than
locally normalized models. | [] | [
"Dependency Parsing",
"Part-Of-Speech Tagging",
"Sentence Compression"
] | [] | [
"Penn Treebank"
] | [
"UAS",
"POS",
"LAS"
] | Globally Normalized Transition-Based Neural Networks |
Recent advances in neural variational inference have spawned a renaissance in
deep latent variable models. In this paper we introduce a generic variational
inference framework for generative and conditional models of text. While
traditional variational methods derive an analytic approximation for the
intractable distributions over latent variables, here we construct an inference
network conditioned on the discrete text input to provide the variational
distribution. We validate this framework on two very different text modelling
applications, generative document modelling and supervised question answering.
Our neural variational document model combines a continuous stochastic document
representation with a bag-of-words generative model and achieves the lowest
reported perplexities on two standard test corpora. The neural answer selection
model employs a stochastic representation layer within an attention mechanism
to extract the semantics between a question and answer pair. On two question
answering benchmarks this model exceeds all previously published results. | [] | [
"Answer Selection",
"Latent Variable Models",
"Question Answering",
"Topic Models",
"Variational Inference"
] | [] | [
"QASent",
"20 Newsgroups",
"WikiQA"
] | [
"MRR",
"Test perplexity",
"MAP"
] | Neural Variational Inference for Text Processing |
In this paper, we propose a novel end-to-end trainable Video Question
Answering (VideoQA) framework with three major components: 1) a new
heterogeneous memory which can effectively learn global context information
from appearance and motion features; 2) a redesigned question memory which
helps understand the complex semantics of question and highlights queried
subjects; and 3) a new multimodal fusion layer which performs multi-step
reasoning by attending to relevant visual and textual hints with self-updated
attention. Our VideoQA model firstly generates the global context-aware visual
and textual features respectively by interacting current inputs with memory
contents. After that, it makes the attentional fusion of the multimodal visual
and textual representations to infer the correct answer. Multiple cycles of
reasoning can be made to iteratively refine attention weights of the multimodal
data and improve the final representation of the QA pair. Experimental results
demonstrate our approach achieves state-of-the-art performance on four VideoQA
benchmark datasets. | [] | [
"Question Answering",
"Video Question Answering",
"Visual Question Answering"
] | [] | [
"MSRVTT-QA",
"MSVD-QA"
] | [
"Accuracy"
] | Heterogeneous Memory Enhanced Multimodal Attention Model for Video Question Answering |
Recent works on representation learning for graph structured data
predominantly focus on learning distributed representations of graph
substructures such as nodes and subgraphs. However, many graph analytics tasks
such as graph classification and clustering require representing entire graphs
as fixed length feature vectors. While the aforementioned approaches are
naturally unequipped to learn such representations, graph kernels remain the
most effective way of obtaining them. However, these graph kernels use
handcrafted features (e.g., shortest paths, graphlets, etc.) and hence are
hampered by problems such as poor generalization. To address this limitation,
in this work, we propose a neural embedding framework named graph2vec to learn
data-driven distributed representations of arbitrary sized graphs. graph2vec's
embeddings are learnt in an unsupervised manner and are task agnostic. Hence,
they could be used for any downstream task such as graph classification,
clustering and even seeding supervised representation learning approaches. Our
experiments on several benchmark and large real-world datasets show that
graph2vec achieves significant improvements in classification and clustering
accuracies over substructure representation learning approaches and is
competitive with state-of-the-art graph kernels. | [] | [
"Graph Classification",
"Graph Embedding",
"Graph Matching",
"Representation Learning"
] | [] | [
"NCI109",
"Android Malware Dataset",
"PROTEINS",
"NCI1",
"MUTAG",
"PTC"
] | [
"Accuracy"
] | graph2vec: Learning Distributed Representations of Graphs |
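A minimal sketch of the Weisfeiler-Lehman rooted-subgraph "tokens" that graph2vec treats as a graph's vocabulary before doc2vec-style training; the hashing scheme and toy graph below are assumptions for illustration, not the authors' code, and the skip-gram training stage is not shown.

```python
import hashlib

def wl_subgraph_tokens(adj, labels, iterations=2):
    """Weisfeiler-Lehman relabeling: each node's rooted subgraph up to the
    given depth becomes a discrete token, so a graph can be treated as a
    'document' of tokens.

    adj:    dict mapping node -> list of neighbor nodes.
    labels: dict mapping node -> initial (e.g. degree) label.
    """
    tokens = list(labels.values())
    current = dict(labels)
    for _ in range(iterations):
        updated = {}
        for node, neighbors in adj.items():
            neigh = sorted(current[n] for n in neighbors)
            signature = current[node] + "|" + ",".join(neigh)
            updated[node] = hashlib.md5(signature.encode()).hexdigest()[:8]
        current = updated
        tokens.extend(current.values())
    return tokens

# Toy graph: a path 0-1-2 with degree labels.
adj = {0: [1], 1: [0, 2], 2: [1]}
labels = {n: str(len(adj[n])) for n in adj}
print(wl_subgraph_tokens(adj, labels))
```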
We combine supervised learning with unsupervised learning in deep neural
networks. The proposed model is trained to simultaneously minimize the sum of
supervised and unsupervised cost functions by backpropagation, avoiding the
need for layer-wise pre-training. Our work builds on the Ladder network
proposed by Valpola (2015), which we extend by combining the model with
supervision. We show that the resulting model reaches state-of-the-art
performance in semi-supervised MNIST and CIFAR-10 classification, in addition
to permutation-invariant MNIST classification with all labels. | [] | [
"Semi-Supervised Image Classification"
] | [] | [
"CIFAR-10, 4000 Labels"
] | [
"Accuracy"
] | Semi-Supervised Learning with Ladder Networks |
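A minimal sketch of the core idea in the Ladder-network abstract above, i.e. summing a supervised loss and an unsupervised denoising loss and backpropagating them jointly; the tiny encoder/decoder, noise level, and loss weight are placeholders, not the Ladder architecture itself.

```python
import torch
import torch.nn as nn

# Placeholder encoder/decoder pair, not the actual Ladder network.
encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
decoder = nn.Sequential(nn.Linear(10, 256), nn.ReLU(), nn.Linear(256, 784))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

def training_step(x_labeled, y, x_unlabeled, noise_std=0.3, unsup_weight=1.0):
    # Supervised branch: clean labeled inputs -> class logits.
    sup_loss = nn.functional.cross_entropy(encoder(x_labeled), y)
    # Unsupervised branch: corrupt unlabeled inputs, reconstruct the clean ones.
    corrupted = x_unlabeled + noise_std * torch.randn_like(x_unlabeled)
    recon = decoder(encoder(corrupted))
    unsup_loss = nn.functional.mse_loss(recon, x_unlabeled)
    loss = sup_loss + unsup_weight * unsup_loss   # single combined objective
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy batch.
xl, yl = torch.randn(32, 784), torch.randint(0, 10, (32,))
xu = torch.randn(128, 784)
print(training_step(xl, yl, xu))
```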
Sentence embeddings have become an essential part of today's natural language processing (NLP) systems, especially together with advanced deep learning methods. Although pre-trained sentence encoders are available in the general domain, none exists for biomedical texts to date. In this work, we introduce BioSentVec: the first open set of sentence embeddings trained with over 30 million documents from both scholarly articles in PubMed and clinical notes in the MIMIC-III Clinical Database. We evaluate BioSentVec embeddings in two sentence pair similarity tasks in different text genres. Our benchmarking results demonstrate that the BioSentVec embeddings can better capture sentence semantics compared to the other competitive alternatives and achieve state-of-the-art performance in both tasks. We expect BioSentVec to facilitate the research and development in biomedical text mining and to complement the existing resources in biomedical word embeddings. BioSentVec is publicly available at https://github.com/ncbi-nlp/BioSentVec | [] | [
"Sentence Embeddings",
"Sentence Embeddings For Biomedical Texts",
"Word Embeddings"
] | [] | [
"BIOSSES",
"MedSTS"
] | [
"Pearson Correlation"
] | BioSentVec: creating sentence embeddings for biomedical texts |
We propose a principled method for gradient-based regularization of the critic of GAN-like models trained by adversarially optimizing the kernel of a Maximum Mean Discrepancy (MMD). We show that controlling the gradient of the critic is vital to having a sensible loss function, and devise a method to enforce exact, analytical gradient constraints at no additional cost compared to existing approximate techniques based on additive regularizers. The new loss function is provably continuous, and experiments show that it stabilizes and accelerates training, giving image generation models that outperform state-of-the-art methods on $160 \times 160$ CelebA and $64 \times 64$ unconditional ImageNet. | [] | [
"Image Generation"
] | [] | [
"CIFAR-10"
] | [
"Inception score",
"FID"
] | On gradient regularizers for MMD GANs |
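A minimal sketch of the (biased) squared-MMD estimate that the critic in the abstract above adversarially optimizes; the Gaussian kernel, bandwidth, and feature dimensions are illustrative choices, and the paper's gradient-constraint machinery is not shown.

```python
import torch

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise RBF kernel values between rows of x and y.
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased estimator of squared Maximum Mean Discrepancy.
    k_xx = gaussian_kernel(x, x, sigma).mean()
    k_yy = gaussian_kernel(y, y, sigma).mean()
    k_xy = gaussian_kernel(x, y, sigma).mean()
    return k_xx + k_yy - 2.0 * k_xy

real = torch.randn(64, 128)            # stand-in for critic features of real images
fake = torch.randn(64, 128) + 0.5      # stand-in for critic features of generated images
print(mmd2(real, fake).item())
```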
We present SlowFast networks for video recognition. Our model involves (i) a Slow pathway, operating at low frame rate, to capture spatial semantics, and (ii) a Fast pathway, operating at high frame rate, to capture motion at fine temporal resolution. The Fast pathway can be made very lightweight by reducing its channel capacity, yet can learn useful temporal information for video recognition. Our models achieve strong performance for both action classification and detection in video, and large improvements are pin-pointed as contributions by our SlowFast concept. We report state-of-the-art accuracy on major video recognition benchmarks, Kinetics, Charades and AVA. Code has been made available at: https://github.com/facebookresearch/SlowFast | [] | [
"Action Classification",
"Action Classification ",
"Action Detection",
"Action Recognition",
"Action Recognition In Videos",
"Video Recognition"
] | [] | [
"Kinetics-400",
"Something-Something V2",
"Kinetics-600",
"Diving-48",
"AVA v2.1",
"Charades"
] | [
"Top-5 Accuracy",
"Vid acc@5",
"mAP (Val)",
"MAP",
"Top-1 Accuracy",
"Accuracy",
"Vid acc@1"
] | SlowFast Networks for Video Recognition |
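A minimal sketch of the two-pathway frame sampling behind SlowFast as described above; the stride and speed ratio alpha are illustrative defaults, not necessarily the exact values used in the paper.

```python
import numpy as np

def slowfast_sample(video, alpha=8, slow_stride=16):
    """Split a clip into a sparsely sampled Slow pathway and a densely
    sampled Fast pathway (alpha times more frames).

    video: (T, H, W, C) array of frames.
    """
    t = video.shape[0]
    slow_idx = np.arange(0, t, slow_stride)
    fast_idx = np.arange(0, t, max(slow_stride // alpha, 1))
    return video[slow_idx], video[fast_idx]

clip = np.zeros((64, 224, 224, 3), dtype=np.uint8)   # dummy 64-frame clip
slow, fast = slowfast_sample(clip)
print(slow.shape, fast.shape)   # (4, 224, 224, 3) and (32, 224, 224, 3)
```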
We propose an algorithm to predict room layout from a single image that
generalizes across panoramas and perspective images, cuboid layouts and more
general layouts (e.g. L-shape room). Our method operates directly on the
panoramic image, rather than decomposing into perspective images as do recent
works. Our network architecture is similar to that of RoomNet, but we show
improvements due to aligning the image based on vanishing points, predicting
multiple layout elements (corners, boundaries, size and translation), and
fitting a constrained Manhattan layout to the resulting predictions. Our method
compares well in speed and accuracy to other existing work on panoramas,
achieves among the best accuracy for perspective images, and can handle both
cuboid-shaped and more general Manhattan layouts. | [] | [
"3D Room Layouts From A Single RGB Panorama"
] | [] | [
"Stanford 2D-3D",
"PanoContext",
"Realtor360"
] | [
"3DIoU"
] | LayoutNet: Reconstructing the 3D Room Layout from a Single RGB Image |
Neural network models recently proposed for question answering (QA) primarily
focus on capturing the passage-question relation. However, they have minimal
capability to link relevant facts distributed across multiple sentences which
is crucial in achieving deeper understanding, such as performing multi-sentence
reasoning, co-reference resolution, etc. They also do not explicitly focus on
the question and answer type which often plays a critical role in QA. In this
paper, we propose a novel end-to-end question-focused multi-factor attention
network for answer extraction. Multi-factor attentive encoding using
tensor-based transformation aggregates meaningful facts even when they are
located in multiple sentences. To implicitly infer the answer type, we also
propose a max-attentional question aggregation mechanism to encode a question
vector based on the important words in a question. During prediction, we
incorporate sequence-level encoding of the first wh-word and its immediately
following word as an additional source of question type information. Our
proposed model achieves significant improvements over the best prior
state-of-the-art results on three large-scale challenging QA datasets, namely
NewsQA, TriviaQA, and SearchQA. | [] | [
"Open-Domain Question Answering",
"Question Answering",
"Reading Comprehension"
] | [] | [
"NewsQA",
"SearchQA"
] | [
"EM",
"Unigram Acc",
"F1",
"N-gram F1"
] | A Question-Focused Multi-Factor Attention Network for Question Answering |
Semi-supervised learning has proven to be a powerful paradigm for leveraging unlabeled data to mitigate the reliance on large labeled datasets. In this work, we unify the current dominant approaches for semi-supervised learning to produce a new algorithm, MixMatch, that works by guessing low-entropy labels for data-augmented unlabeled examples and mixing labeled and unlabeled data using MixUp. We show that MixMatch obtains state-of-the-art results by a large margin across many datasets and labeled data amounts. For example, on CIFAR-10 with 250 labels, we reduce error rate by a factor of 4 (from 38% to 11%) and by a factor of 2 on STL-10. We also demonstrate how MixMatch can help achieve a dramatically better accuracy-privacy trade-off for differential privacy. Finally, we perform an ablation study to tease apart which components of MixMatch are most important for its success. | [] | [
"Image Classification",
"Semi-Supervised Image Classification"
] | [] | [
"CIFAR-10, 500 Labels",
"CIFAR-100",
"SVHN, 500 Labels",
"CIFAR-10",
"CIFAR-10, 4000 Labels",
"CIFAR-10, 2000 Labels",
"CIFAR-10, 250 Labels",
"STL-10, 1000 Labels",
"SVHN, 250 Labels",
"SVHN, 1000 labels",
"STL-10",
"SVHN",
"CIFAR-10, 1000 Labels",
"STL-10, 5000 Labels",
"SVHN, 2000 Labels",
"SVHN, 4000 Labels"
] | [
"Percentage error",
"Percentage correct",
"Accuracy"
] | MixMatch: A Holistic Approach to Semi-Supervised Learning |
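A minimal sketch of MixMatch's label guessing, sharpening, and MixUp steps on NumPy stand-ins; the temperature T and Beta parameter alpha follow commonly cited defaults, the model predictions are random placeholders, and the full batch-level mixing and loss terms are not shown.

```python
import numpy as np

def sharpen(p, T=0.5):
    # Temperature sharpening of a class distribution (lower T -> lower entropy).
    p = p ** (1.0 / T)
    return p / p.sum(axis=-1, keepdims=True)

def guess_labels(model_probs):
    # Average predictions over K augmentations of one unlabeled example,
    # then sharpen; model_probs has shape (K, num_classes).
    return sharpen(model_probs.mean(axis=0))

def mixup(x1, y1, x2, y2, alpha=0.75):
    lam = np.random.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)          # keep the result closer to the first input
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# Toy usage with random stand-ins for augmented predictions and inputs.
probs_over_augs = np.random.dirichlet(np.ones(10), size=2)   # K=2 augmentations
q = guess_labels(probs_over_augs)                            # guessed soft label
x_l, y_l = np.random.rand(32 * 32 * 3), np.eye(10)[3]        # labeled example
x_u = np.random.rand(32 * 32 * 3)                            # unlabeled example
x_mix, y_mix = mixup(x_l, y_l, x_u, q)
print(q.round(2), y_mix.round(2))
```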
Recent development of large-scale question answering (QA) datasets triggered
a substantial amount of research into end-to-end neural architectures for QA.
Increasingly complex systems have been conceived without comparison to simpler
neural baseline systems that would justify their complexity. In this work, we
propose a simple heuristic that guides the development of neural baseline
systems for the extractive QA task. We find that there are two ingredients
necessary for building a high-performing neural QA system: first, the awareness
of question words while processing the context and second, a composition
function that goes beyond simple bag-of-words modeling, such as recurrent
neural networks. Our results show that FastQA, a system that meets these two
requirements, can achieve very competitive performance compared with existing
models. We argue that this surprising finding puts results of previous systems
and the complexity of recent QA datasets into perspective. | [] | [
"Question Answering",
"Reading Comprehension"
] | [] | [
"NewsQA",
"SQuAD1.1 dev",
"SQuAD1.1"
] | [
"EM",
"F1"
] | Making Neural QA as Simple as Possible but not Simpler |
We address temporal action localization in untrimmed long videos. This is
important because videos in real applications are usually unconstrained and
contain multiple action instances plus video content of background scenes or
other activities. To address this challenging issue, we exploit the
effectiveness of deep networks in temporal action localization via three
segment-based 3D ConvNets: (1) a proposal network identifies candidate segments
in a long video that may contain actions; (2) a classification network learns
one-vs-all action classification model to serve as initialization for the
localization network; and (3) a localization network fine-tunes on the learned
classification network to localize each action instance. We propose a novel
loss function for the localization network to explicitly consider temporal
overlap and therefore achieve high temporal localization accuracy. Only the
proposal network and the localization network are used during prediction. On
two large-scale benchmarks, our approach achieves significantly superior
performances compared with other state-of-the-art systems: mAP increases from
1.7% to 7.4% on MEXaction2 and increases from 15.0% to 19.0% on THUMOS 2014,
when the overlap threshold for evaluation is set to 0.5. | [] | [
"Action Classification",
"Action Classification ",
"Action Localization",
"Temporal Action Localization",
"Temporal Localization"
] | [] | [
"MEXaction2",
"THUMOS’14"
] | [
"[email protected]",
"mAP",
"[email protected]",
"mAP [email protected]",
"mAP [email protected]",
"mAP [email protected]",
"[email protected]",
"[email protected]",
"mAP [email protected]",
"[email protected]",
"mAP [email protected]"
] | Temporal Action Localization in Untrimmed Videos via Multi-stage CNNs |
Domain adaptation in person re-identification (re-ID) has always been a challenging task. In this work, we explore how to harness the naturally similar characteristics existing in the samples from the target domain for learning to conduct person re-ID in an unsupervised manner. Concretely, we propose a Self-similarity Grouping (SSG) approach, which exploits the potential similarity (from global body to local parts) of unlabeled samples to automatically build multiple clusters from different views. These independent clusters are then assigned labels, which serve as the pseudo identities to supervise the training process. We repeatedly and alternately conduct such a grouping and training process until the model is stable. Despite its apparent simplicity, our SSG outperforms the state-of-the-art by more than 4.6% (DukeMTMC to Market1501) and 4.4% (Market1501 to DukeMTMC) in mAP, respectively. Upon our SSG, we further introduce a clustering-guided semi-supervised approach named SSG ++ to conduct the one-shot domain adaptation in an open set setting (i.e. the number of independent identities from the target domain is unknown). Without spending much effort on labeling, our SSG ++ can further promote the mAP upon SSG by 10.7% and 6.9%, respectively. Our code is available at: https://github.com/OasisYang/SSG . | [] | [
"Domain Adaptation",
"One-Shot Learning",
"Person Re-Identification",
"Unsupervised Domain Adaptation",
"Unsupervised Person Re-Identification"
] | [] | [
"Market to Duke",
"Market to MSMT",
"Market-1501->MSMT17",
"DukeMTMC-reID->MSMT17",
"DukeMTMC-reID",
"Duke to MSMT",
"Duke to Market",
"Market-1501"
] | [
"rank-10",
"mAP",
"Rank-10",
"MAP",
"Rank-1",
"rank-1",
"Rank-5",
"rank-5"
] | Self-similarity Grouping: A Simple Unsupervised Cross Domain Adaptation Approach for Person Re-identification |
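A minimal sketch of the pseudo-identity generation by clustering that the SSG abstract above relies on; DBSCAN and its parameters are assumed choices for illustration, and the alternating grouping/training loop and part-level features are not shown.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def pseudo_labels(features, eps=0.6, min_samples=4):
    """Cluster unlabeled target-domain features and use cluster indices as
    pseudo identities; points labeled -1 (noise) are left out of training."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
    keep = labels != -1
    return labels, keep

# Toy features: 5 well-separated clusters standing in for 5 identities.
centers = np.random.rand(5, 256)
feats = np.repeat(centers, 100, axis=0) + 0.01 * np.random.randn(500, 256)
labels, keep = pseudo_labels(feats)
print(len(set(labels[keep])), "pseudo identities,", keep.sum(), "usable samples")
```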
Symmetry detection has been a classical problem in computer graphics, with many existing approaches relying on traditional geometric methods. In recent years, however, the rise of deep learning has changed the landscape of computer graphics. In this paper, we aim to solve symmetry detection for occluded point clouds in a deep-learning fashion. To the best of our knowledge, we are the first to utilize deep learning to tackle such a problem. In this framework, two forms of supervision, points on the symmetry plane and normal vectors, are employed to help us pinpoint the symmetry plane. We conducted experiments on the YCB-Video dataset and demonstrate the efficacy of our method. | [] | [
"Occluded 3D Object Symmetry Detection",
"Symmetry Detection"
] | [] | [
"YCB-Video"
] | [
"PR AUC"
] | Symmetry Detection of Occluded Point Cloud Using Deep Learning |
Regression via classification (RvC) is a common method used for regression problems in deep learning, where the target variable belongs to a set of continuous values. By discretizing the target into a set of non-overlapping classes, it has been shown that training a classifier can improve neural network accuracy compared to using a standard regression approach. However, it is not clear how the set of discrete classes should be chosen and how it affects the overall solution. In this work, we propose that using several discrete data representations simultaneously can improve neural network learning compared to a single representation. Our approach is end-to-end differentiable and can be added as a simple extension to conventional learning methods, such as deep neural networks. We test our method on three challenging tasks and show that our method reduces the prediction error compared to a baseline RvC approach while maintaining a similar model complexity. | [] | [
"Age Estimation",
"Head Pose Estimation",
"Historical Color Image Dating",
"Regression"
] | [] | [
"HCI",
"UTKFace",
"BIWI"
] | [
"MAE",
"MAE (trained with BIWI data)"
] | Deep Ordinal Regression with Label Diversity |
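A minimal sketch of regression via several simultaneous discretizations, as described in the abstract above: each binning's class distribution is converted back to a continuous estimate via an expected value and the estimates are averaged; the binnings and probabilities below are toy assumptions, not the paper's trained heads.

```python
import numpy as np

def expected_value(probs, bin_centers):
    # Convert a class distribution over bins back to a continuous prediction.
    return float(np.dot(probs, bin_centers))

def rvc_ensemble_predict(prob_sets, bin_center_sets):
    # Average the continuous estimates obtained from several discretizations
    # (e.g. different bin widths) of the same target range.
    preds = [expected_value(p, c) for p, c in zip(prob_sets, bin_center_sets)]
    return sum(preds) / len(preds)

# Toy example: predicting an age in [0, 100] with two different binnings.
coarse_centers = np.arange(5, 100, 10)          # 10 bins of width 10
fine_centers = np.arange(2.5, 100, 5)           # 20 bins of width 5
coarse_probs = np.random.dirichlet(np.ones(len(coarse_centers)))
fine_probs = np.random.dirichlet(np.ones(len(fine_centers)))
print(rvc_ensemble_predict([coarse_probs, fine_probs],
                           [coarse_centers, fine_centers]))
```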
State-of-the-art navigation methods leverage a spatial memory to generalize to new environments, but their occupancy maps are limited to capturing the geometric structures directly observed by the agent. We propose occupancy anticipation, where the agent uses its egocentric RGB-D observations to infer the occupancy state beyond the visible regions. In doing so, the agent builds its spatial awareness more rapidly, which facilitates efficient exploration and navigation in 3D environments. By exploiting context in both the egocentric views and top-down maps our model successfully anticipates a broader map of the environment, with performance significantly better than strong baselines. Furthermore, when deployed for the sequential decision-making tasks of exploration and navigation, our model outperforms state-of-the-art methods on the Gibson and Matterport3D datasets. Our approach is the winning entry in the 2020 Habitat PointNav Challenge. Project page: http://vision.cs.utexas.edu/projects/occupancy_anticipation/ | [] | [
"Decision Making",
"Efficient Exploration",
"Robot Navigation"
] | [] | [
"Habitat 2020 Point Nav test-std"
] | [
"SOFT_SPL",
"DISTANCE_TO_GOAL",
"SUCCESS",
"SPL"
] | Occupancy Anticipation for Efficient Exploration and Navigation |
Deep learning has improved performance on many natural language processing
(NLP) tasks individually. However, general NLP models cannot emerge within a
paradigm that focuses on the particularities of a single metric, dataset, and
task. We introduce the Natural Language Decathlon (decaNLP), a challenge that
spans ten tasks: question answering, machine translation, summarization,
natural language inference, sentiment analysis, semantic role labeling,
zero-shot relation extraction, goal-oriented dialogue, semantic parsing, and
commonsense pronoun resolution. We cast all tasks as question answering over a
context. Furthermore, we present a new Multitask Question Answering Network
(MQAN) that jointly learns all tasks in decaNLP without any task-specific modules or
parameters in the multitask setting. MQAN shows improvements in transfer
learning for machine translation and named entity recognition, domain
adaptation for sentiment analysis and natural language inference, and zero-shot
capabilities for text classification. We demonstrate that the MQAN's
multi-pointer-generator decoder is key to this success and performance further
improves with an anti-curriculum training strategy. Though designed for
decaNLP, MQAN also achieves state of the art results on the WikiSQL semantic
parsing task in the single-task setting. We also release code for procuring and
processing data, training and evaluating models, and reproducing all
experiments for decaNLP. | [] | [
"Domain Adaptation",
"Machine Translation",
"Named Entity Recognition",
"Natural Language Inference",
"Question Answering",
"Relation Extraction",
"Semantic Parsing",
"Semantic Role Labeling",
"Sentiment Analysis",
"Text Classification",
"Transfer Learning"
] | [] | [
"MultiNLI"
] | [
"Accuracy"
] | The Natural Language Decathlon: Multitask Learning as Question Answering |
This work proposes the continuous conditional generative adversarial network (CcGAN), the first generative model for image generation conditional on continuous, scalar conditions (termed regression labels). Existing conditional GANs (cGANs) are mainly designed for categorical conditions (e.g., class labels); conditioning on regression labels is mathematically distinct and raises two fundamental problems: (P1) Since there may be very few (even zero) real images for some regression labels, minimizing existing empirical versions of cGAN losses (aka empirical cGAN losses) often fails in practice; (P2) Since regression labels are scalar and infinitely many, conventional label input methods are not applicable. The proposed CcGAN solves the above problems, respectively, by (S1) reformulating existing empirical cGAN losses to be appropriate for the continuous scenario; and (S2) proposing a naive label input (NLI) method and an improved label input (ILI) method to incorporate regression labels into the generator and the discriminator. The reformulation in (S1) leads to two novel empirical discriminator losses, termed the hard vicinal discriminator loss (HVDL) and the soft vicinal discriminator loss (SVDL) respectively, and a novel empirical generator loss. The error bounds of a discriminator trained with HVDL and SVDL are derived under mild assumptions in this work. Two new benchmark datasets (RC-49 and Cell-200) and a novel evaluation metric (Sliding Fréchet Inception Distance) are also proposed for this continuous scenario. Our experiments on the Circular 2-D Gaussians, RC-49, UTKFace, Cell-200, and Steering Angle datasets show that CcGAN is able to generate diverse, high-quality samples from the image distribution conditional on a given regression label. Moreover, in these experiments, CcGAN substantially outperforms cGAN both visually and quantitatively. | [] | [
"Image Generation",
"Regression"
] | [] | [
"RC-49"
] | [
"Intra-FID"
] | Continuous Conditional Generative Adversarial Networks for Image Generation: Novel Losses and Label Input Mechanisms |
Classification problems solved with deep neural networks (DNNs) typically rely on a closed world paradigm, and optimize over a single objective (e.g., minimization of the cross-entropy loss). This setup dismisses all kinds of supporting signals that can be used to reinforce the existence or absence of a particular pattern. The increasing need for models that are interpretable by design makes the inclusion of said contextual signals a crucial necessity. To this end, we introduce the notion of Self-Supervised Autogenous Learning (SSAL) models. An SSAL objective is realized through one or more additional targets that are derived from the original supervised classification task, following architectural principles found in multi-task learning. SSAL branches impose low-level priors into the optimization process (e.g., grouping). The ability to use SSAL branches during inference allows models to converge faster, focusing on a richer set of class-relevant features. We show that SSAL models consistently outperform the state-of-the-art while also providing structured predictions that are more interpretable. | [] | [
"Image Classification",
"Multi-Task Learning"
] | [] | [
"CIFAR-100",
"ImageNet"
] | [
"Percentage correct",
"Top 1 Accuracy"
] | Contextual Classification Using Self-Supervised Auxiliary Models for Deep Neural Networks |
The field of Automatic Facial Expression Analysis has grown rapidly in recent
years. However, despite progress in new approaches as well as benchmarking
efforts, most evaluations still focus on either posed expressions, near-frontal
recordings, or both. This makes it hard to tell how existing expression
recognition approaches perform under conditions where faces appear in a wide
range of poses (or camera views), displaying ecologically valid expressions.
The main obstacle for assessing this is the availability of suitable data, and
the challenge proposed here addresses this limitation. The FG 2017 Facial
Expression Recognition and Analysis challenge (FERA 2017) extends FERA 2015 to
the estimation of Action Units occurrence and intensity under different camera
views. In this paper we present the third challenge in automatic recognition of
facial expressions, to be held in conjunction with the 12th IEEE conference on
Face and Gesture Recognition, May 2017, in Washington, United States. Two
sub-challenges are defined: the detection of AU occurrence, and the estimation
of AU intensity. In this work we outline the evaluation protocol, the data
used, and the results of a baseline method for both sub-challenges. | [] | [
"Facial Action Unit Detection",
"Facial Expression Recognition",
"Gesture Recognition"
] | [] | [
"BP4D"
] | [
"F1",
"Average Accuracy"
] | FERA 2017 - Addressing Head Pose in the Third Facial Expression Recognition and Analysis Challenge |
Recent reports suggest that a generic supervised deep CNN model trained on a
large-scale dataset reduces, but does not remove, dataset bias on a standard
benchmark. Fine-tuning deep models in a new domain can require a significant
amount of data, which for many applications is simply not available. We propose
a new CNN architecture which introduces an adaptation layer and an additional
domain confusion loss, to learn a representation that is both semantically
meaningful and domain invariant. We additionally show that a domain confusion
metric can be used for model selection to determine the dimension of an
adaptation layer and the best position for the layer in the CNN architecture.
Our proposed adaptation method offers empirical performance which exceeds
previously published results on a standard benchmark visual domain adaptation
task. | [] | [
"Domain Adaptation",
"Model Selection"
] | [] | [
"Office-Caltech"
] | [
"Average Accuracy"
] | Deep Domain Confusion: Maximizing for Domain Invariance |
Conventional training of a deep CNN based object detector demands a large number of bounding box annotations, which may be unavailable for rare categories. In this work we develop a few-shot object detector that can learn to detect novel objects from only a few annotated examples. Our proposed model leverages fully labeled base classes and quickly adapts to novel classes, using a meta feature learner and a reweighting module within a one-stage detection architecture. The feature learner extracts meta features that are generalizable to detect novel object classes, using training data from base classes with sufficient samples. The reweighting module transforms a few support examples from the novel classes to a global vector that indicates the importance or relevance of meta features for detecting the corresponding objects. These two modules, together with a detection prediction module, are trained end-to-end based on an episodic few-shot learning scheme and a carefully designed loss function. Through extensive experiments we demonstrate that our model outperforms well-established baselines by a large margin for few-shot object detection, on multiple datasets and settings. We also present analysis on various aspects of our proposed model, aiming to provide some inspiration for future few-shot detection works. | [] | [
"Few-Shot Learning",
"Few-Shot Object Detection",
"Image Classification",
"Meta-Learning",
"Object Detection"
] | [] | [
"MS-COCO (30-shot)",
"MS-COCO (10-shot)"
] | [
"AP"
] | Few-shot Object Detection via Feature Reweighting |
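A minimal sketch of the channel-wise feature reweighting described in the abstract above; the tensor shapes and the einsum-based broadcasting are illustrative assumptions rather than the authors' detection head.

```python
import torch

def reweight_features(meta_features, class_vectors):
    """Channel-wise reweighting: scale shared meta features by a per-class
    vector produced from the support examples of each novel class."""
    # meta_features: (B, C, H, W); class_vectors: (num_classes, C)
    return torch.einsum("bchw,kc->bkchw", meta_features, class_vectors)

feats = torch.randn(2, 256, 13, 13)        # stand-in meta features
w = torch.randn(5, 256)                    # one reweighting vector per class
print(reweight_features(feats, w).shape)   # torch.Size([2, 5, 256, 13, 13])
```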
This paper presents RuSentiment, a new dataset for sentiment analysis of social media posts in Russian, and a new set of comprehensive annotation guidelines that are extensible to other languages. RuSentiment is currently the largest in its class for Russian, with 31,185 posts annotated with a Fleiss' kappa of 0.58 (3 annotations per post). To diversify the dataset, 6,950 posts were pre-selected with an active learning-style strategy. We report baseline classification results, and we also release the best-performing embeddings trained on 3.2B tokens of Russian VKontakte posts. | [] | [
"Active Learning",
"Sentiment Analysis",
"Word Embeddings"
] | [] | [
"RuSentiment"
] | [
"Weighted F1"
] | RuSentiment: An Enriched Sentiment Analysis Dataset for Social Media in Russian |
Distantly supervised open-domain question answering (DS-QA) aims to find answers in collections of unlabeled text. Existing DS-QA models usually retrieve related paragraphs from a large-scale corpus and apply reading comprehension techniques to extract answers from the most relevant paragraph. They ignore the rich information contained in other paragraphs. Moreover, distant supervision data are inevitably accompanied by the wrong labeling problem, and these noisy data will substantially degrade the performance of DS-QA. To address these issues, we propose a novel DS-QA model which employs a paragraph selector to filter out those noisy paragraphs and a paragraph reader to extract the correct answer from those denoised paragraphs. Experimental results on real-world datasets show that our model can capture useful information from noisy data and achieve significant improvements on DS-QA as compared to all baselines. | [] | [
"Denoising",
"Information Retrieval",
"Open-Domain Question Answering",
"Question Answering",
"Reading Comprehension"
] | [] | [
"SearchQA",
"Quasar",
"Quasart-T"
] | [
"N-gram F1",
"Unigram Acc",
"F1",
"EM",
"EM (Quasar-T)",
"F1 (Quasar-T)"
] | Denoising Distantly Supervised Open-Domain Question Answering |
Hand pose estimation from 3D depth images has been explored widely using various kinds of techniques in the field of computer vision. Though deep learning based methods have recently improved performance greatly, this problem still remains unsolved due to the lack of large datasets, like ImageNet, or of effective data synthesis methods. In this paper, we propose HandAugment, a method to synthesize image data to augment the training process of the neural networks. Our method has two main parts: First, we propose a scheme of two-stage neural networks. This scheme makes the neural networks focus on the hand regions and thus improves the performance. Second, we introduce a simple and effective method to synthesize data by combining real and synthetic images in the image space. Finally, we show that our method achieves first place in the task of depth-based 3D hand pose estimation in the HANDS 2019 challenge. | [] | [
"3D Hand Pose Estimation",
"Data Augmentation",
"Hand Pose Estimation",
"Pose Estimation"
] | [] | [
"HANDS 2019"
] | [
"Average 3D Error"
] | HandAugment: A Simple Data Augmentation Method for Depth-Based 3D Hand Pose Estimation |
We present a convolution-free approach to video classification built exclusively on self-attention over space and time. Our method, named "TimeSformer," adapts the standard Transformer architecture to video by enabling spatiotemporal feature learning directly from a sequence of frame-level patches. Our experimental study compares different self-attention schemes and suggests that "divided attention," where temporal attention and spatial attention are separately applied within each block, leads to the best video classification accuracy among the design choices considered. Despite the radically different design compared to the prominent paradigm of 3D convolutional architectures for video, TimeSformer achieves state-of-the-art results on several major action recognition benchmarks, including the best reported accuracy on Kinetics-400 and Kinetics-600. Furthermore, our model is faster to train and has higher test-time efficiency compared to competing architectures. Code and pretrained models will be made publicly available. | [] | [
"Action Classification",
"Action Recognition",
"Video Classification",
"Video Question Answering",
"Video Understanding"
] | [] | [
"Kinetics-400",
"Howto100M-QA",
"Diving-48",
"Something-Something V2"
] | [
"Top-1 Accuracy",
"Vid acc@5",
"Vid acc@1",
"Accuracy"
] | Is Space-Time Attention All You Need for Video Understanding? |
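A minimal sketch of "divided attention" as described in the TimeSformer abstract above, applying temporal then spatial self-attention with standard PyTorch attention layers; the dimensions, residual placement, and the absence of LayerNorm/MLP blocks are simplifications, not the authors' architecture.

```python
import torch
import torch.nn as nn

class DividedSpaceTimeAttention(nn.Module):
    """Temporal attention over frames, then spatial attention over patches."""

    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        # x: (batch, frames, patches, dim)
        b, t, s, d = x.shape
        # Temporal: for each patch position, attend across the T frames.
        xt = x.permute(0, 2, 1, 3).reshape(b * s, t, d)
        xt = self.temporal(xt, xt, xt)[0].reshape(b, s, t, d).permute(0, 2, 1, 3)
        x = x + xt
        # Spatial: for each frame, attend across the S patches.
        xs = x.reshape(b * t, s, d)
        xs = self.spatial(xs, xs, xs)[0].reshape(b, t, s, d)
        return x + xs

tokens = torch.randn(2, 8, 196, 64)   # 2 clips, 8 frames, 14x14 patches, dim 64
print(DividedSpaceTimeAttention()(tokens).shape)
```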
Many new proposals for scene text recognition (STR) models have been introduced in recent years. While each claims to have pushed the boundary of the technology, a holistic and fair comparison has been largely missing in the field due to the inconsistent choices of training and evaluation datasets. This paper addresses this difficulty with three major contributions. First, we examine the inconsistencies of training and evaluation datasets, and the performance gaps that result from these inconsistencies. Second, we introduce a unified four-stage STR framework that most existing STR models fit into. Using this framework allows for the extensive evaluation of previously proposed STR modules and the discovery of previously unexplored module combinations. Third, we analyze the module-wise contributions to performance in terms of accuracy, speed, and memory demand, under one consistent set of training and evaluation datasets. Such analyses clear up the obstacles in current comparisons to understanding the performance gains of the existing modules. | [] | [
"Scene Text",
"Scene Text Recognition"
] | [] | [
"ICDAR2013",
"ICDAR2015",
"ICDAR 2003",
"SVT"
] | [
"Accuracy"
] | What Is Wrong With Scene Text Recognition Model Comparisons? Dataset and Model Analysis |
One-class novelty detection aims to identify anomalous instances that do not conform to the expected normal instances. In this paper, Generative Adversarial Networks (GANs) based on an encoder-decoder-encoder pipeline are used for detection and achieve state-of-the-art performance. However, deep neural networks are too over-parameterized to deploy on resource-limited devices. Therefore, Progressive Knowledge Distillation with GANs (P-KDGAN) is proposed to learn compact and fast novelty detection networks. The P-KDGAN is a novel attempt to connect two standard GANs by the designed distillation loss for transferring knowledge from the teacher to the student. The progressive learning of knowledge distillation is a two-step approach that continuously improves the performance of the student GAN and achieves better performance than single-step methods. In the first step, the student GAN learns the basic knowledge entirely from the teacher, guided by the pretrained teacher GAN with fixed weights. In the second step, joint fine-training is adopted for the knowledgeable teacher and student GANs to further improve the performance and stability. The experimental results on CIFAR-10, MNIST, and FMNIST show that our method improves the performance of the student GAN by 2.44%, 1.77%, and 1.73% when compressing the computation at ratios of 24.45:1, 311.11:1, and 700:1, respectively. | [] | [
"Anomaly Detection",
"Knowledge Distillation",
"Unsupervised Anomaly Detection"
] | [] | [
"MNIST",
"\t Fashion-MNIST",
"CIFAR-10"
] | [
"ROC AUC",
"AUC-ROC"
] | P-KDGAN: Progressive Knowledge Distillation with GANs for One-class Novelty Detection |
Few-shot image classification aims to classify unseen classes with limited labelled samples. Recent works benefit from the meta-learning process with episodic tasks and can quickly adapt to new classes from training to testing. Due to the limited number of samples for each task, the initial embedding network for meta-learning becomes an essential component and can largely affect the performance in practice. To this end, most of the existing methods rely heavily on an efficient embedding network. Due to the limited labelled data, the scale of the embedding network is constrained under the supervised learning (SL) setting, which becomes a bottleneck for few-shot learning methods. In this paper, we propose to train a more generalized embedding network with self-supervised learning (SSL), which can provide robust representations for downstream tasks by learning from the data itself. We evaluate our work with extensive comparisons against previous baseline methods on two few-shot classification datasets (i.e., MiniImageNet and CUB) and achieve better performance than the baselines. Tests on four datasets in cross-domain few-shot learning classification show that the proposed method achieves state-of-the-art results and further prove the robustness of the proposed model. Our code is available at https://github.com/phecy/SSL-FEW-SHOT. | [] | [
"Cross-Domain Few-Shot",
"cross-domain few-shot learning",
"Few-Shot Image Classification",
"Few-Shot Learning",
"Image Classification",
"Meta-Learning",
"Self-Supervised Learning"
] | [] | [
"Mini-Imagenet 5-way (1-shot)",
"Mini-Imagenet 5-way (5-shot)",
"Mini-ImageNet - 1-Shot Learning",
"CUB 200 5-way 1-shot",
"CUB 200 5-way 5-shot"
] | [
"Accuracy"
] | Self-Supervised Learning For Few-Shot Image Classification |
Simplicity is the ultimate sophistication. Differentiable Architecture Search (DARTS) has now become one of the mainstream paradigms of neural architecture search. However, it largely suffers from several disturbing factors in the optimization process, whose results are unstable and hard to reproduce. FairDARTS points out that skip connections natively have an unfair advantage in the exclusive competition, which primarily leads to dramatic performance collapse. While FairDARTS turns the unfair competition into a collaborative one, we instead impede such an unfair advantage by injecting unbiased random noise into the skip operations' output. In effect, the optimizer should perceive this difficulty at each training step and refrain from overshooting on skip connections, but in the long run it still converges to the right solution area since no bias is added to the gradient. We name this novel approach NoisyDARTS. Our experiments on CIFAR-10 and ImageNet attest that it can effectively break the skip connection's unfair advantage and yield better performance. It generates a series of models that achieve state-of-the-art results on both datasets. Code will be made available at https://github.com/xiaomi-automl/NoisyDARTS. | [] | [
"AutoML",
"Image Classification",
"Neural Architecture Search"
] | [] | [
"ImageNet",
"CIFAR-10"
] | [
"Search Time (GPU days)",
"MACs",
"Percentage correct",
"Top-1 Error Rate",
"FLOPS",
"Params",
"Parameters",
"Accuracy"
] | Noisy Differentiable Architecture Search |
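A minimal sketch of the unbiased-noise injection into skip operations described in the NoisyDARTS abstract above; the noise standard deviation and the train/eval switch are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NoisySkip(nn.Module):
    """Identity (skip) operation that adds zero-mean Gaussian noise to its
    output during the architecture-search phase only."""

    def __init__(self, std=0.1):
        super().__init__()
        self.std = std

    def forward(self, x):
        if self.training:
            return x + self.std * torch.randn_like(x)   # unbiased: E[noise] = 0
        return x

op = NoisySkip(std=0.2)
feat = torch.randn(4, 16, 8, 8)
op.train()
print((op(feat) - feat).abs().mean().item())   # nonzero during search
op.eval()
print((op(feat) - feat).abs().mean().item())   # exactly zero at evaluation
```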
Entity alignment aims to identify equivalent entity pairs from different Knowledge Graphs (KGs), which is essential in integrating multi-source KGs. Recently, with the introduction of GNNs into entity alignment, the architectures of recent models have become more and more complicated. We even find two counter-intuitive phenomena within these methods: (1) The standard linear transformation in GNNs is not working well. (2) Many advanced KG embedding models designed for the link prediction task perform poorly in entity alignment. In this paper, we abstract existing entity alignment methods into a unified framework, Shape-Builder & Alignment, which not only successfully explains the above phenomena but also derives two key criteria for an ideal transformation operation. Furthermore, we propose a novel GNNs-based method, Relational Reflection Entity Alignment (RREA). RREA leverages Relational Reflection Transformation to obtain relation-specific embeddings for each entity in a more efficient way. The experimental results on real-world datasets show that our model significantly outperforms the state-of-the-art methods, exceeding them by 5.8%-10.9% on Hits@1. | [] | [
"Entity Alignment",
"Knowledge Graphs",
"Link Prediction"
] | [] | [
"DBP15k zh-en"
] | [
"Hits@1"
] | Relational Reflection Entity Alignment |
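A minimal sketch of a relational reflection (Householder-style) transformation consistent with the abstract above: it is orthogonal and preserves embedding norms; the exact parameterization used by RREA may differ.

```python
import numpy as np

def reflection_matrix(r):
    """Relational reflection of the form W_r = I - 2 r r^T with ||r|| = 1,
    which is orthogonal and keeps entity embedding norms unchanged."""
    r = r / np.linalg.norm(r)
    return np.eye(len(r)) - 2.0 * np.outer(r, r)

rel = np.random.randn(8)      # stand-in relation embedding
W = reflection_matrix(rel)
e = np.random.randn(8)        # stand-in entity embedding
print(np.allclose(W @ W.T, np.eye(8)),                      # orthogonality
      np.isclose(np.linalg.norm(W @ e), np.linalg.norm(e)))  # norm preserved
```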
We introduce a simple recurrent variational auto-encoder architecture that
significantly improves image modeling. The system represents the
state-of-the-art in latent variable models for both the ImageNet and Omniglot
datasets. We show that it naturally separates global conceptual information
from lower level details, thus addressing one of the fundamentally desired
properties of unsupervised learning. Furthermore, the possibility of
restricting ourselves to storing only global information about an image allows
us to achieve high quality 'conceptual compression'. | [] | [
"Image Generation",
"Latent Variable Models",
"Omniglot"
] | [] | [
"CIFAR-10"
] | [
"bits/dimension"
] | Towards Conceptual Compression |
Effective and efficient mitigation of malware is a long-time endeavor in the
information security community. The development of an anti-malware system that
can counteract unknown malware is a prolific activity that may benefit
several sectors. We envision an intelligent anti-malware system that utilizes
the power of deep learning (DL) models. Using such models would enable the
detection of newly-released malware through mathematical generalization. That
is, finding the relationship between a given malware $x$ and its corresponding
malware family $y$, $f: x \mapsto y$. To accomplish this feat, we used the
Malimg dataset (Nataraj et al., 2011) which consists of malware images that
were processed from malware binaries, and then we trained the following DL
models to classify each malware family: CNN-SVM (Tang, 2013), GRU-SVM
(Agarap, 2017), and MLP-SVM. Empirical evidence has shown that the GRU-SVM
stands out among the DL models with a predictive accuracy of ~84.92%. This
stands to reason, as this model had the most sophisticated architecture
design among the presented models. The exploration of an even more
optimal DL-SVM model is the next stage towards the engineering of an
intelligent anti-malware system. | [] | [
"Malware Classification"
] | [] | [
"Malimg Dataset"
] | [
"Accuracy"
] | Towards Building an Intelligent Anti-Malware System: A Deep Learning Approach using Support Vector Machine (SVM) for Malware Classification |
This paper introduces a new large-scale music dataset, MusicNet, to serve as
a source of supervision and evaluation of machine learning methods for music
research. MusicNet consists of hundreds of freely-licensed classical music
recordings by 10 composers, written for 11 instruments, together with
instrument/note annotations resulting in over 1 million temporal labels on 34
hours of chamber music performances under various studio and microphone
conditions.
The paper defines a multi-label classification task to predict notes in
musical recordings, along with an evaluation protocol, and benchmarks several
machine learning architectures for this task: i) learning from spectrogram
features; ii) end-to-end learning with a neural net; iii) end-to-end learning
with a convolutional neural net. These experiments show that end-to-end models
trained for note prediction learn frequency selective filters as a low-level
representation of audio. | [] | [
"Multi-Label Classification",
"Music Transcription"
] | [] | [
"MusicNet"
] | [
"APS"
] | Learning Features of Music from Scratch |
Clustering is one of the most fundamental tasks in machine learning. Recently, deep clustering has become a major trend in clustering techniques. Representation learning often plays an important role in the effectiveness of deep clustering, and thus can be a principal cause of performance degradation. In this paper, we propose a clustering-friendly representation learning method using instance discrimination and feature decorrelation. Our deep-learning-based representation learning method is motivated by the properties of classical spectral clustering. Instance discrimination learns similarities among data and feature decorrelation removes redundant correlation among features. We utilize an instance discrimination method in which learning individual instance classes leads to learning similarity among instances. Through detailed experiments and examination, we show that the approach can be adapted to learning a latent space for clustering. We design novel softmax-formulated decorrelation constraints for learning. In evaluations of image clustering using CIFAR-10 and ImageNet-10, our method achieves accuracy of 81.5% and 95.4%, respectively, far above state-of-the-art values. We also show that the softmax-formulated constraints are compatible with various neural networks. | [] | [
"Deep Clustering",
"Image Clustering",
"Representation Learning"
] | [] | [
"Imagenet-dog-15",
"CIFAR-100",
"CIFAR-10",
"ImageNet-10",
"STL-10"
] | [
"Train set",
"Train Split",
"ARI",
"Backbone",
"Train Set",
"NMI",
"Accuracy"
] | Clustering-friendly Representation Learning via Instance Discrimination and Feature Decorrelation |
While knowledge distillation (transfer) has been attracting attentions from the research community, the recent development in the fields has heightened the need for reproducible studies and highly generalized frameworks to lower barriers to such high-quality, reproducible deep learning research. Several researchers voluntarily published frameworks used in their knowledge distillation studies to help other interested researchers reproduce their original work. Such frameworks, however, are usually neither well generalized nor maintained, thus researchers are still required to write a lot of code to refactor/build on the frameworks for introducing new methods, models, datasets and designing experiments. In this paper, we present our developed open-source framework built on PyTorch and dedicated for knowledge distillation studies. The framework is designed to enable users to design experiments by declarative PyYAML configuration files, and helps researchers complete the recently proposed ML Code Completeness Checklist. Using the developed framework, we demonstrate its various efficient training strategies, and implement a variety of knowledge distillation methods. We also reproduce some of their original experimental results on the ImageNet and COCO datasets presented at major machine learning conferences such as ICLR, NeurIPS, CVPR and ECCV, including recent state-of-the-art methods. All the source code, configurations, log files and trained model weights are publicly available at https://github.com/yoshitomo-matsubara/torchdistill . | [] | [
"Image Classification",
"Instance Segmentation",
"Knowledge Distillation",
"Object Detection"
] | [] | [
"ImageNet",
"COCO test-dev"
] | [
"box AP",
"mask AP",
"Top 1 Accuracy"
] | torchdistill: A Modular, Configuration-Driven Framework for Knowledge Distillation |
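The framework in the abstract above configures standard distillation objectives; as one concrete example, a minimal sketch of the classic temperature-scaled knowledge-distillation loss is shown below (this is the generic formulation, not torchdistill's own API).

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Cross-entropy on the hard labels plus temperature-scaled KL divergence
    to the teacher's soft targets (Hinton-style distillation)."""
    ce = F.cross_entropy(student_logits, labels)
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    log_soft_student = F.log_softmax(student_logits / T, dim=1)
    kl = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * T * T
    return alpha * ce + (1.0 - alpha) * kl

student = torch.randn(8, 100, requires_grad=True)   # stand-in student logits
teacher = torch.randn(8, 100)                       # stand-in teacher logits
labels = torch.randint(0, 100, (8,))
print(kd_loss(student, teacher, labels).item())
```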
In 2D image processing, some attempts decompose images into high and low frequency components for describing edge and smooth parts respectively. Similarly, the contour and flat area of 3D objects, such as the boundary and seat area of a chair, describe different but also complementary geometries. However, such an investigation is missing in previous deep networks that understand point clouds by directly treating all points or local patches equally. To solve this problem, we propose the Geometry-Disentangled Attention Network (GDANet). GDANet introduces a Geometry-Disentangle Module to dynamically disentangle point clouds into the contour and flat parts of 3D objects, respectively denoted by sharp and gentle variation components. Then GDANet exploits a Sharp-Gentle Complementary Attention Module that regards the features from the sharp and gentle variation components as two holistic representations, and pays different attention to them while fusing them respectively with the original point cloud features. In this way, our method captures and refines the holistic and complementary 3D geometric semantics from two distinct disentangled components to supplement the local information. Extensive experiments on 3D object classification and segmentation benchmarks demonstrate that GDANet achieves state-of-the-art results with fewer parameters. Code is released at https://github.com/mutianxu/GDANet. | [] | [
"3D Object Classification",
"Object Classification"
] | [] | [
"ShapeNet-Part",
"ModelNet40"
] | [
"Overall Accuracy",
"Class Average IoU",
"Instance Average IoU"
] | Learning Geometry-Disentangled Representation for Complementary Understanding of 3D Object Point Cloud |
Attention-based learning for fine-grained image recognition remains a
challenging task, where most of the existing methods treat each object part in
isolation, while neglecting the correlations among them. In addition, the
multi-stage or multi-scale mechanisms involved make the existing methods less
efficient and hard to train end-to-end. In this paper, we propose a novel
attention-based convolutional neural network (CNN) which regulates multiple
object parts among different input images. Our method first learns multiple
attention region features of each input image through the one-squeeze
multi-excitation (OSME) module, and then applies the multi-attention multi-class
constraint (MAMC) in a metric learning framework. For each anchor feature, the
MAMC functions by pulling same-attention same-class features closer, while
pushing different-attention or different-class features away. Our method can be
easily trained end-to-end and is highly efficient, requiring only one training
stage. Moreover, we introduce Dogs-in-the-Wild, a comprehensive dog species
dataset that surpasses similar existing datasets in category coverage,
data volume and annotation quality. This dataset will be released upon
acceptance to facilitate the research of fine-grained image recognition.
Extensive experiments are conducted to show the substantial improvements of our
method on four benchmark datasets. | [] | [
"Fine-Grained Image Recognition",
"Metric Learning"
] | [] | [
"Stanford Cars"
] | [
"Accuracy"
] | Multi-Attention Multi-Class Constraint for Fine-grained Image Recognition |
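A rough sketch of the pull/push idea behind the multi-attention multi-class (MAMC) constraint described in the row above, written as an InfoNCE-style loss over per-part attention features; the feature shape and temperature are assumptions, and this is an approximation rather than the paper's exact N-pair formulation.

```python
import torch
import torch.nn.functional as F

def mamc_like_loss(feats, labels, temperature=0.1):
    """Pull same-class, same-part features together; push the rest apart.

    feats: (batch, parts, dim) attention-region features, labels: (batch,) class ids.
    """
    B, P, D = feats.shape
    f = F.normalize(feats.reshape(B * P, D), dim=1)
    cls = labels.repeat_interleave(P)                        # class id per flattened feature
    part = torch.arange(P, device=feats.device).repeat(B)    # part id per flattened feature

    sim = f @ f.t() / temperature                            # cosine similarities
    eye = torch.eye(B * P, dtype=torch.bool, device=feats.device)
    pos = (cls[:, None] == cls[None, :]) & (part[:, None] == part[None, :]) & ~eye

    log_prob = sim - torch.logsumexp(sim.masked_fill(eye, float("-inf")), dim=1, keepdim=True)
    loss = -(log_prob * pos.float()).sum(1) / pos.sum(1).clamp(min=1)
    return loss.mean()
```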
Depth Completion deals with the problem of converting a sparse depth map to a dense one, given the corresponding color image. Convolutional spatial propagation network (CSPN) is one of the state-of-the-art (SoTA) methods for depth completion, which recovers structural details of the scene. In this paper, we propose CSPN++, which further improves its effectiveness and efficiency by learning adaptive convolutional kernel sizes and the number of iterations for the propagation, so that the context and computational resources needed at each pixel can be dynamically assigned on demand. Specifically, we formulate the learning of the two hyper-parameters as an architecture selection problem, where various configurations of kernel sizes and numbers of iterations are first defined, and then a set of soft weighting parameters is trained to either properly assemble or select from the pre-defined configurations at each pixel. In our experiments, we find that weighted assembling can lead to significant accuracy improvements, which we refer to as "context-aware CSPN", while weighted selection, which we refer to as "resource-aware CSPN", can reduce the computational resources significantly with similar or better accuracy. Besides, the resources needed for CSPN++ can be adjusted automatically w.r.t. the computational budget. Finally, to avoid the side effects of noise or inaccurate sparse depths, we embed a gated network inside CSPN++, which further improves the performance. We demonstrate the effectiveness of CSPN++ on the KITTI depth completion benchmark, where it significantly improves over CSPN and other SoTA methods. | [] | [
"Depth Completion"
] | [] | [
"KITTI Depth Completion Validation"
] | [
"RMSE"
] | CSPN++: Learning Context and Resource Aware Convolutional Spatial Propagation Networks for Depth Completion |
When a large feedforward neural network is trained on a small training set,
it typically performs poorly on held-out test data. This "overfitting" is
greatly reduced by randomly omitting half of the feature detectors on each
training case. This prevents complex co-adaptations in which a feature detector
is only helpful in the context of several other specific feature detectors.
Instead, each neuron learns to detect a feature that is generally helpful for
producing the correct answer given the combinatorially large variety of
internal contexts in which it must operate. Random "dropout" gives big
improvements on many benchmark tasks and sets new records for speech and object
recognition. | [] | [
"Image Classification",
"Object Recognition"
] | [] | [
"CIFAR-10"
] | [
"Percentage correct"
] | Improving neural networks by preventing co-adaptation of feature detectors |
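A minimal sketch of the random-omission scheme the abstract above describes, with the 0.5 rate for hidden feature detectors; the inverted scaling by 1/(1-p) is a common modern convention assumed here, not part of the original description.

```python
import torch

def dropout(x, p=0.5, training=True):
    """Randomly zero each unit with probability p on every training case.

    Inverted scaling keeps expected activations unchanged, so no weight
    rescaling is needed at test time with this variant.
    """
    if not training or p == 0.0:
        return x
    mask = (torch.rand_like(x) >= p).float()
    return x * mask / (1.0 - p)
```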
For human pose estimation in monocular images, joint occlusions and
overlapping body parts often result in deviated pose predictions. Under
these circumstances, biologically implausible pose predictions may be produced.
In contrast, human vision is able to predict poses by exploiting geometric
constraints of joint inter-connectivity. To address the problem by
incorporating priors about the structure of human bodies, we propose a novel
structure-aware convolutional network to implicitly take such priors into
account during training of the deep network. Explicit learning of such
constraints is typically challenging. Instead, we design discriminators to
distinguish the real poses from the fake ones (such as biologically implausible
ones). If the pose generator (G) generates results that the discriminator fails
to distinguish from real ones, the network successfully learns the priors. | [] | [
"Pose Estimation"
] | [] | [
"MPII Human Pose"
] | [
"PCKh-0.5"
] | Adversarial PoseNet: A Structure-aware Convolutional Network for Human Pose Estimation |
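A schematic of the adversarial signal described in the row above: a heatmap generator is trained to match ground truth while fooling a discriminator that separates plausible poses from implausible ones. `G` and `D` are hypothetical modules (D assumed to end in a sigmoid), and the loss weight is an arbitrary placeholder.

```python
import torch
import torch.nn.functional as F

def generator_step(G, D, images, gt_heatmaps, adv_weight=0.01):
    """One hypothetical training step for the pose generator.

    The generator is pushed toward the ground-truth heatmaps (MSE) and toward
    outputs the discriminator scores as plausible (adversarial term).
    """
    pred = G(images)                       # (B, K, H, W) joint heatmaps
    mse = F.mse_loss(pred, gt_heatmaps)
    plausibility = D(pred)                 # (B, 1) scores in (0, 1)
    adv = F.binary_cross_entropy(plausibility, torch.ones_like(plausibility))
    return mse + adv_weight * adv
```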
Estimating crowd count in densely crowded scenes is an extremely challenging
task due to non-uniform scale variations. In this paper, we propose a novel
end-to-end cascaded network of CNNs to jointly learn crowd count classification
and density map estimation. Classifying crowd count into various groups is
tantamount to coarsely estimating the total count in the image, thereby
incorporating a high-level prior into the density estimation network. This
enables the layers in the network to learn globally relevant discriminative
features which aid in estimating highly refined density maps with lower count
error. The joint training is performed in an end-to-end fashion. Extensive
experiments on highly challenging publicly available datasets show that the
proposed method achieves lower count error and better quality density maps as
compared to the recent state-of-the-art methods. | [] | [
"Crowd Counting",
"Density Estimation",
"Multi-Task Learning"
] | [] | [
"UCF CC 50",
"UCF-QNRF",
"ShanghaiTech A",
"ShanghaiTech B"
] | [
"MAE",
"MSE"
] | CNN-based Cascaded Multi-task Learning of High-level Prior and Density Estimation for Crowd Counting |
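A toy sketch of the cascaded multi-task idea in the abstract above: one shared backbone with a coarse count-group classification head (the high-level prior) and a density-map head, trained jointly. The layer sizes, number of groups and loss weight are all illustrative assumptions.

```python
import torch.nn as nn
import torch.nn.functional as F

class CascadedCounter(nn.Module):
    """Toy two-head counter: coarse count-group classification + density map."""

    def __init__(self, num_groups=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.group_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                        nn.Linear(32, num_groups))
        self.density_head = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        feats = self.backbone(x)
        return self.group_head(feats), self.density_head(feats)

def joint_loss(group_logits, density, group_labels, gt_density, lam=1e-4):
    """Density regression plus the high-level count-classification prior."""
    return F.mse_loss(density, gt_density) + lam * F.cross_entropy(group_logits, group_labels)
```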
Text-based person search aims to retrieve the pedestrian images that best match a given textual description from gallery images. Previous methods utilize the soft-attention mechanism to infer the semantic alignments between image regions and the corresponding words in the sentence. However, these methods may fuse irrelevant multi-modality features together, which causes the matching redundancy problem. In this work, we propose a novel hierarchical Gumbel attention network for text-based person search via a Gumbel top-k re-parameterization algorithm. Specifically, it adaptively selects the strongly semantically relevant image regions and words/phrases from images and texts for precise alignment and similarity calculation. This hard selection strategy is able to fuse the strongly relevant multi-modality features, alleviating the problem of matching redundancy. Meanwhile, the Gumbel top-k re-parameterization algorithm is designed as a low-variance, unbiased gradient estimator to handle the discreteness problem of the hard attention mechanism in an end-to-end manner. Moreover, a hierarchical adaptive matching strategy is employed by the model at three different granularities, i.e., word-level, phrase-level, and sentence-level, towards fine-grained matching. Extensive experimental results demonstrate state-of-the-art performance. Compared with the best existing method, we achieve 8.24% Rank-1 and 7.6% mAP relative improvements in the text-to-image retrieval task, and 5.58% Rank-1 and 6.3% mAP relative improvements in the image-to-text retrieval task on the CUHK-PEDES dataset, respectively. | [] | [
"Image Retrieval",
"Image-to-Text Retrieval",
"Person Search",
"Text based Person Retrieval",
"Text-to-Image Retrieval"
] | [] | [
"CUHK-PEDES"
] | [
"R@10",
"R@1",
"R@5"
] | Hierarchical Gumbel Attention Network for Text-based Person Search |
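A generic sketch of Gumbel top-k selection, the core trick named in the abstract above: perturb relevance scores with Gumbel noise, keep the top-k items as a hard 0/1 mask, and pass gradients through a softmax relaxation (straight-through). This is a standard relaxation, not necessarily the paper's exact estimator.

```python
import torch
import torch.nn.functional as F

def gumbel_topk_mask(scores, k, tau=1.0, hard=True):
    """Sample a k-hot selection mask over the last dimension of `scores`."""
    u = torch.rand_like(scores).clamp(1e-9, 1 - 1e-9)
    perturbed = (scores - torch.log(-torch.log(u))) / tau   # add Gumbel(0, 1) noise
    soft = F.softmax(perturbed, dim=-1)                     # differentiable surrogate
    if not hard:
        return soft
    idx = perturbed.topk(k, dim=-1).indices
    hard_mask = torch.zeros_like(scores).scatter_(-1, idx, 1.0)
    return hard_mask + soft - soft.detach()                 # straight-through gradients
```

Multiplying region or word features by this mask keeps only the k most relevant items while remaining trainable end-to-end.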
Fine-grained classification is a challenging problem, due to subtle differences among highly-confused categories. Most approaches address this difficulty by learning a discriminative representation of each individual input image. On the other hand, humans can effectively identify contrastive clues by comparing image pairs. Inspired by this fact, this paper proposes a simple but effective Attentive Pairwise Interaction Network (API-Net), which can progressively recognize a pair of fine-grained images by interaction. Specifically, API-Net first learns a mutual feature vector to capture semantic differences in the input pair. It then compares this mutual vector with individual vectors to generate gates for each input image. These distinct gate vectors inherit mutual context on semantic differences, which allows API-Net to attentively capture contrastive clues by pairwise interaction between two images. Additionally, we train API-Net in an end-to-end manner with a score-ranking regularization, which can further generalize API-Net by taking feature priorities into account. We conduct extensive experiments on five popular benchmarks in fine-grained classification. API-Net outperforms the recent SOTA methods, reaching 90.0% on CUB-200-2011, 93.9% on Aircraft, 95.3% on Stanford Cars, 90.3% on Stanford Dogs, and 88.1% on NABirds. | [] | [
"Fine-Grained Image Classification"
] | [] | [
"FGVC Aircraft",
"CUB-200-2011",
"Stanford Dogs",
"Stanford Cars",
"NABirds"
] | [
"Accuracy"
] | Learning Attentive Pairwise Interaction for Fine-Grained Classification |
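A compact sketch of the attentive pairwise interaction described in the row above: a mutual vector is computed from a pair of image features, compared against each individual vector to produce sigmoid gates, and each image is then activated by its own and its partner's gate. The MLP shape is a placeholder, and the gating arithmetic is a simplified reading of the abstract.

```python
import torch
import torch.nn as nn

class PairwiseInteraction(nn.Module):
    """Toy API-Net-style interaction for a pair of image feature vectors."""

    def __init__(self, dim):
        super().__init__()
        self.mutual = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                    nn.Linear(dim, dim))

    def forward(self, x1, x2):
        xm = self.mutual(torch.cat([x1, x2], dim=-1))   # mutual vector of the pair
        g1 = torch.sigmoid(xm * x1)                     # gate derived from image 1
        g2 = torch.sigmoid(xm * x2)                     # gate derived from image 2
        # Each image is activated by its own gate and by its partner's gate,
        # yielding four views that would all be fed to a shared classifier.
        return x1 + x1 * g1, x1 + x1 * g2, x2 + x2 * g2, x2 + x2 * g1
```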
Face detection and alignment in unconstrained environment are challenging due
to various poses, illuminations and occlusions. Recent studies show that deep
learning approaches can achieve impressive performance on these two tasks. In
this paper, we propose a deep cascaded multi-task framework which exploits the
inherent correlation between them to boost up their performance. In particular,
our framework adopts a cascaded structure with three stages of carefully
designed deep convolutional networks that predict face and landmark location in
a coarse-to-fine manner. In addition, in the learning process, we propose a new
online hard sample mining strategy that can improve the performance
automatically without manual sample selection. Our method achieves superior
accuracy over the state-of-the-art techniques on the challenging FDDB and WIDER
FACE benchmarks for face detection, and the AFLW benchmark for face alignment,
while keeping real-time performance. | [] | [
"Face Alignment",
"Face Detection"
] | [] | [
"WIDER Face (Hard)",
"WIDER Face (Medium)",
"WIDER Face (Easy)"
] | [
"AP"
] | Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks |
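A minimal sketch of the online hard sample mining strategy mentioned in the abstract above: compute per-sample losses within a mini-batch and backpropagate only the hardest fraction. The 70% keep ratio is an assumed, commonly used value, not something stated in the abstract.

```python
import torch.nn.functional as F

def ohem_cross_entropy(logits, targets, keep_ratio=0.7):
    """Backpropagate only the hardest `keep_ratio` of samples in the batch."""
    losses = F.cross_entropy(logits, targets, reduction="none")
    k = max(1, int(keep_ratio * losses.numel()))
    hard_losses, _ = losses.topk(k)
    return hard_losses.mean()
```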
Timely assessment of compound toxicity is one of the biggest challenges
facing the pharmaceutical industry today. A significant proportion of compounds
identified as potential leads are ultimately discarded due to the toxicity they
induce. In this paper, we propose a novel machine learning approach for the
prediction of molecular activity on ToxCast targets. We combine extreme
gradient boosting with fully-connected and graph-convolutional neural network
architectures trained on QSAR physical molecular property descriptors, PubChem
molecular fingerprints, and SMILES sequences. Our ensemble predictor leverages
the strengths of each individual technique, significantly outperforming
existing state-of-the-art models on the ToxCast and Tox21 toxicity-prediction
datasets. We provide free access to molecule toxicity prediction using our
model at http://www.owkin.com/toxicblend. | [] | [
"Drug Discovery"
] | [] | [
"Tox21"
] | [
"AUC"
] | ToxicBlend: Virtual Screening of Toxic Compounds with Ensemble Predictors |
A reliable and accurate 3D tracking framework is essential for predicting future locations of surrounding objects and planning the observer's actions in numerous applications such as autonomous driving. We propose a framework that can effectively associate moving objects over time and estimate their full 3D bounding box information from a sequence of 2D images captured on a moving platform. The object association leverages quasi-dense similarity learning to identify objects in various poses and viewpoints with appearance cues only. After initial 2D association, we further utilize 3D bounding-box depth-ordering heuristics for robust instance association and motion-based 3D trajectory prediction for re-identification of occluded vehicles. In the end, an LSTM-based object velocity learning module aggregates the long-term trajectory information for more accurate motion extrapolation. Experiments on our proposed simulation data and real-world benchmarks, including the KITTI, nuScenes, and Waymo datasets, show that our tracking framework offers robust object association and tracking in urban-driving scenarios. On the Waymo Open benchmark, we establish the first camera-only baseline in the 3D tracking and 3D detection challenges. Our quasi-dense 3D tracking pipeline achieves impressive improvements on the nuScenes 3D tracking benchmark with nearly five times the tracking accuracy of the best vision-only submission among all published methods. Our code, data and trained models are available at https://github.com/SysCV/qd-3dt. | [] | [
"3D Object Tracking",
"Autonomous Driving",
"Object Tracking",
"Trajectory Prediction"
] | [] | [
"KITTI Tracking test"
] | [
"MOTA"
] | Monocular Quasi-Dense 3D Object Tracking |
Egocentric video recognition is a natural testbed for diverse interaction reasoning. Due to the large action vocabulary in egocentric video datasets, recent studies usually utilize a two-branch structure for action recognition, i.e., one branch for verb classification and the other branch for noun classification. However, correlation studies between the verb and the noun branches have been largely ignored. Besides, the two branches fail to exploit local features due to the absence of a position-aware attention mechanism. In this paper, we propose a novel Symbiotic Attention framework leveraging Privileged information (SAP) for egocentric video recognition. Finer position-aware object detection features can facilitate the understanding of the actor's interaction with the object. We introduce these features in action recognition and regard them as privileged information. Our framework enables mutual communication among the verb branch, the noun branch, and the privileged information. This communication process not only injects local details into global features but also exploits implicit guidance about the spatio-temporal position of an on-going action. We introduce a novel symbiotic attention (SA) mechanism to enable effective communication. It first normalizes the detection-guided features on one branch to underline the action-relevant information from the other branch. SA adaptively enhances the interactions among the three sources. To further catalyze this communication, spatial relations are uncovered for the selection of the most action-relevant information. It identifies the most valuable and discriminative feature for classification. We validate the effectiveness of our SAP quantitatively and qualitatively. Notably, it achieves state-of-the-art results on two large-scale egocentric video datasets. | [] | [
"Action Recognition",
"Egocentric Activity Recognition",
"Object Detection",
"Video Recognition"
] | [] | [
"EGTEA"
] | [
"Mean class accuracy",
"Average Accuracy"
] | Symbiotic Attention with Privileged Information for Egocentric Action Recognition |
Generating novel, yet realistic, images of persons is a challenging task due
to the complex interplay between the different image factors, such as the
foreground, background and pose information. In this work, we aim at generating
such images based on a novel, two-stage reconstruction pipeline that learns a
disentangled representation of the aforementioned image factors and generates
novel person images at the same time. First, a multi-branched reconstruction
network is proposed to disentangle and encode the three factors into embedding
features, which are then combined to re-compose the input image itself. Second,
three corresponding mapping functions are learned in an adversarial manner in
order to map Gaussian noise to the learned embedding feature space, for each
factor respectively. Using the proposed framework, we can manipulate the
foreground, background and pose of the input image, and also sample new
embedding features to generate such targeted manipulations that provide more
control over the generation process. Experiments on the Market-1501 and
DeepFashion datasets show that our model not only generates realistic person
images with new foregrounds, backgrounds and poses, but also manipulates the
generated factors and interpolates the in-between states. Another set of experiments on
Market-1501 shows that our model can also be beneficial for the person
re-identification task. | [] | [
"Gesture-to-Gesture Translation",
"Image Generation",
"Person Re-Identification",
"Pose Transfer"
] | [] | [
"Senz3D",
"NTU Hand Digit",
"Deep-Fashion"
] | [
"SSIM",
"PSNR",
"AMT",
"IS"
] | Disentangled Person Image Generation |
Gradient-based meta-learning methods leverage gradient descent to learn the
commonalities among various tasks. While previous such methods have been
successful in meta-learning tasks, they resort to simple gradient descent
during meta-testing. Our primary contribution is the {\em MT-net}, which
enables the meta-learner to learn on each layer's activation space a subspace
that the task-specific learner performs gradient descent on. Additionally, a
task-specific learner of an {\em MT-net} performs gradient descent with respect
to a meta-learned distance metric, which warps the activation space to be more
sensitive to task identity. We demonstrate that the dimension of this learned
subspace reflects the complexity of the task-specific learner's adaptation
task, and also that our model is less sensitive to the choice of initial
learning rates than previous gradient-based meta-learning methods. Our method
achieves state-of-the-art or comparable performance on few-shot classification
and regression tasks. | [] | [
"Few-Shot Image Classification",
"Meta-Learning",
"Regression"
] | [] | [
"OMNIGLOT - 1-Shot, 5-way",
"Mini-Imagenet 5-way (1-shot)",
"OMNIGLOT - 1-Shot, 20-way"
] | [
"Accuracy"
] | Gradient-Based Meta-Learning with Learned Layerwise Metric and Subspace |
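A schematic single-layer version of the adaptation step the abstract above describes: a meta-learned transformation T warps the activation space and stays frozen during meta-testing, while only a masked subspace of the task-specific weights W receives the gradient update. The mask is passed in as a fixed tensor here purely for illustration, not the learned mask of the paper.

```python
import torch
import torch.nn.functional as F

def mt_inner_step(W, T, mask, x, y, lr=0.01):
    """One task-specific gradient step on a single linear layer (sketch).

    Forward pass: pred = (x @ W) @ T. T is meta-learned and not updated here;
    the mask restricts which directions of W the task-specific learner moves in.
    """
    W = W.clone().requires_grad_(True)
    pred = (x @ W) @ T
    loss = F.mse_loss(pred, y)
    (grad,) = torch.autograd.grad(loss, W)
    return (W - lr * mask * grad).detach()
```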
Many classic methods have shown non-local self-similarity in natural images
to be an effective prior for image restoration. However, it remains unclear and
challenging to make use of this intrinsic property via deep networks. In this
paper, we propose a non-local recurrent network (NLRN) as the first attempt to
incorporate non-local operations into a recurrent neural network (RNN) for
image restoration. The main contributions of this work are: (1) Unlike existing
methods that measure self-similarity in an isolated manner, the proposed
non-local module can be flexibly integrated into existing deep networks for
end-to-end training to capture deep feature correlation between each location
and its neighborhood. (2) We fully employ the RNN structure for its parameter
efficiency and allow deep feature correlation to be propagated along adjacent
recurrent states. This new design boosts robustness against inaccurate
correlation estimation due to severely degraded images. (3) We show that it is
essential to maintain a confined neighborhood for computing deep feature
correlation given degraded images. This is in contrast to existing practice
that deploys the whole image. Extensive experiments on both image denoising and
super-resolution tasks are conducted. Thanks to the recurrent non-local
operations and correlation propagation, the proposed NLRN achieves superior
results to state-of-the-art methods with much fewer parameters. | [] | [
"Denoising",
"Image Denoising",
"Image Restoration",
"Image Super-Resolution",
"Super-Resolution"
] | [] | [
"Urban100 sigma25",
"Darmstadt Noise Dataset",
"Set14 - 4x upscaling",
"BSD68 sigma50",
"Set12 sigma50",
"Set12 sigma15",
"Urban100 sigma50",
"BSD68 sigma25",
"BSD100 - 4x upscaling",
"BSD68 sigma15",
"Urban100 sigma15",
"Set12 sigma30",
"Set5 - 4x upscaling",
"BSD200 sigma50",
"BSD200 sigma70",
"BSD200 sigma30",
"Urban100 - 4x upscaling"
] | [
"SSIM",
"PSNR"
] | Non-Local Recurrent Network for Image Restoration |
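A toy non-local module restricted to a q x q neighborhood, in the spirit of the confined-neighborhood correlation the abstract above argues for (embedded-Gaussian affinities, softmax-normalized, used to aggregate neighbors). It omits the recurrent weight sharing and correlation propagation of the full NLRN, so treat it as a sketch only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConfinedNonLocal(nn.Module):
    """Non-local aggregation limited to a q x q window around each pixel."""

    def __init__(self, channels, q=7):
        super().__init__()
        self.theta = nn.Conv2d(channels, channels, 1)
        self.phi = nn.Conv2d(channels, channels, 1)
        self.g = nn.Conv2d(channels, channels, 1)
        self.q = q

    def forward(self, x):
        B, C, H, W = x.shape
        q, pad = self.q, self.q // 2
        theta = self.theta(x).reshape(B, C, H * W)                          # query per pixel
        phi = F.unfold(self.phi(x), q, padding=pad).reshape(B, C, q * q, H * W)
        g = F.unfold(self.g(x), q, padding=pad).reshape(B, C, q * q, H * W)
        attn = torch.softmax((theta.unsqueeze(2) * phi).sum(1), dim=1)      # (B, q*q, HW)
        out = (g * attn.unsqueeze(1)).sum(2).reshape(B, C, H, W)
        return x + out                                                      # residual connection
```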
Many deep learning architectures have been proposed to model the
compositionality in text sequences, requiring a substantial number of
parameters and expensive computations. However, there has not been a rigorous
evaluation regarding the added value of sophisticated compositional functions.
In this paper, we conduct a point-by-point comparative study between Simple
Word-Embedding-based Models (SWEMs), consisting of parameter-free pooling
operations, relative to word-embedding-based RNN/CNN models. Surprisingly,
SWEMs exhibit comparable or even superior performance in the majority of cases
considered. Based upon this understanding, we propose two additional pooling
strategies over learned word embeddings: (i) a max-pooling operation for
improved interpretability; and (ii) a hierarchical pooling operation, which
preserves spatial (n-gram) information within text sequences. We present
experiments on 17 datasets encompassing three tasks: (i) (long) document
classification; (ii) text sequence matching; and (iii) short text tasks,
including classification and tagging. The source code and datasets can be
obtained from https://github.com/dinghanshen/SWEM. | [] | [
"Document Classification",
"Named Entity Recognition",
"Sentiment Analysis",
"Subjectivity Analysis",
"Text Classification",
"Word Embeddings"
] | [] | [
"MultiNLI",
"Yelp Fine-grained classification",
"Yelp Binary classification",
"Yahoo! Answers",
"DBpedia",
"SST-2 Binary classification",
"MSRP",
"SNLI",
"WikiQA",
"CoNLL 2000",
"MR",
"AG News",
"CoNLL 2003 (English)",
"SST-5 Fine-grained classification",
"TREC-6",
"SUBJ",
"Quora Question Pairs"
] | [
"% Test Accuracy",
"Matched",
"MAP",
"Error",
"MRR",
"F1",
"Accuracy",
"Mismatched"
] | Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms |
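A minimal sketch of the two pooling strategies proposed in the row above on top of word embeddings: max pooling over time, and hierarchical pooling that averages over n-gram windows before a global max. The window size n is a hyper-parameter; 3 is just a placeholder.

```python
import torch
import torch.nn.functional as F

def swem_max(emb):
    """emb: (batch, seq_len, dim) word embeddings -> element-wise max over time."""
    return emb.max(dim=1).values

def swem_hier(emb, n=3):
    """Average over sliding n-gram windows, then max-pool the window averages."""
    x = emb.transpose(1, 2)                           # (batch, dim, seq_len)
    local = F.avg_pool1d(x, kernel_size=n, stride=1)  # (batch, dim, seq_len - n + 1)
    return local.max(dim=2).values
```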
We present a network architecture for processing point clouds that directly
operates on a collection of points represented as a sparse set of samples in a
high-dimensional lattice. Naively applying convolutions on this lattice scales
poorly, both in terms of memory and computational cost, as the size of the
lattice increases. Instead, our network uses sparse bilateral convolutional
layers as building blocks. These layers maintain efficiency by using indexing
structures to apply convolutions only on occupied parts of the lattice, and
allow flexible specifications of the lattice structure enabling hierarchical
and spatially-aware feature learning, as well as joint 2D-3D reasoning. Both
point-based and image-based representations can be easily incorporated in a
network with such layers and the resulting model can be trained in an
end-to-end manner. We present results on 3D segmentation tasks where our
approach outperforms existing state-of-the-art techniques. | [] | [
"3D Part Segmentation",
"3D Semantic Segmentation",
"Semantic Segmentation"
] | [] | [
"ShapeNet-Part",
"SemanticKITTI",
"ScanNet"
] | [
"3DIoU",
"Class Average IoU",
"Instance Average IoU",
"mIoU"
] | SPLATNet: Sparse Lattice Networks for Point Cloud Processing |
The ability to consolidate information of different types is at the core of
intelligence, and has tremendous practical value in allowing learning for one
task to benefit from generalizations learned for others. In this paper we
tackle the challenging task of improving semantic parsing performance, taking
UCCA parsing as a test case, and AMR, SDP and Universal Dependencies (UD)
parsing as auxiliary tasks. We experiment on three languages, using a uniform
transition-based system and learning architecture for all parsing tasks.
Despite notable conceptual, formal and domain differences, we show that
multitask learning significantly improves UCCA parsing in both in-domain and
out-of-domain settings. | [] | [
"Semantic Parsing",
"UCCA Parsing"
] | [] | [
"SemEval 2019 Task 1"
] | [
"English-20K (open) F1",
"English-Wiki (open) F1"
] | Multitask Parsing Across Semantic Representations |