abstract | field | task | method | dataset | metric | title |
---|---|---|---|---|---|---|
Video object segmentation (VOS) describes the task of segmenting a set of objects in each frame of a video. In the semi-supervised setting, the first mask of each object is provided at test time. Following the one-shot principle, fine-tuning VOS methods train a segmentation model separately on each given object mask. However, the VOS community has recently deemed such test-time optimization and its impact on the test runtime infeasible. To mitigate the inefficiencies of previous fine-tuning approaches, we present efficient One-Shot Video Object Segmentation (e-OSVOS). In contrast to most VOS approaches, e-OSVOS decouples the object detection task and predicts only local segmentation masks by applying a modified version of Mask R-CNN. The one-shot test runtime and performance are optimized without a laborious and handcrafted hyperparameter search. To this end, we meta-learn the model initialization and learning rates for the test-time optimization. To achieve optimal learning behavior, we predict individual learning rates at a neuron level. Furthermore, we apply an online adaptation to address the common performance degradation throughout a sequence by continuously fine-tuning the model on previous mask predictions, supported by a frame-to-frame bounding box propagation. e-OSVOS provides state-of-the-art results on DAVIS 2016, DAVIS 2017, and YouTube-VOS for one-shot fine-tuning methods while reducing the test runtime substantially. Code is available at https://github.com/dvl-tum/e-osvos. | [
"Convolutions",
"RoI Feature Extractors",
"Output Functions",
"Instance Segmentation Models"
] | [
"Object Detection",
"Semantic Segmentation",
"Semi-Supervised Video Object Segmentation",
"Video Object Segmentation",
"Video Semantic Segmentation",
"Youtube-VOS"
] | [
"Mask R-CNN",
"Softmax",
"RoIAlign",
"Convolution"
] | [
"DAVIS 2017 (val)",
"YouTube-VOS",
"DAVIS 2017 (test-dev)",
"DAVIS 2016"
] | [
"Jaccard (Mean)",
"Jaccard (Unseen)",
"F-Measure (Seen)",
"Jaccard (Seen)",
"Jaccard (Decay)",
"Overall",
"F-measure (Mean)",
"J&F",
"F-Measure (Unseen)"
] | Make One-Shot Video Object Segmentation Efficient Again |
Image and sentence matching has made great progress recently, but it remains
challenging due to the large visual-semantic discrepancy. This mainly arises
because the pixel-level image representation usually lacks the high-level
semantic information present in its matched sentence. In this work, we propose a
semantic-enhanced image and sentence matching model, which can improve the
image representation by learning semantic concepts and then organizing them in
a correct semantic order. Given an image, we first use a multi-regional
multi-label CNN to predict its semantic concepts, including objects,
properties, actions, etc. Then, considering that different orders of semantic
concepts lead to diverse semantic meanings, we use a context-gated sentence
generation scheme for semantic order learning. It simultaneously uses the image
global context containing concept relations as reference and the groundtruth
semantic order in the matched sentence as supervision. After obtaining the
improved image representation, we learn the sentence representation with a
conventional LSTM, and then jointly perform image and sentence matching and
sentence generation for model learning. Extensive experiments demonstrate the
effectiveness of our learned semantic concepts and order, by achieving the
state-of-the-art results on two public benchmark datasets. | [
"Initialization",
"Convolutional Neural Networks",
"Recurrent Neural Networks",
"Activation Functions",
"Normalization",
"Convolutions",
"Pooling Operations",
"Skip Connections",
"Skip Connection Blocks"
] | [
"Cross-Modal Retrieval"
] | [
"ResNet",
"Average Pooling",
"Long Short-Term Memory",
"Max Pooling",
"Batch Normalization",
"Tanh Activation",
"1x1 Convolution",
"ReLU",
"Convolution",
"Residual Connection",
"LSTM",
"Bottleneck Residual Block",
"Residual Network",
"Kaiming Initialization",
"Residual Block",
"Global Average Pooling",
"Rectified Linear Units",
"Sigmoid Activation"
] | [
"Flickr30k",
"COCO 2014",
"Flickr30K 1K test"
] | [
"Image-to-text R@5",
"Image-to-text R@1",
"R@10",
"Image-to-text R@10",
"Text-to-image R@10",
"Text-to-image R@1",
"R@5",
"R@1",
"Text-to-image R@5"
] | Learning Semantic Concepts and Order for Image and Sentence Matching |
Relation classification is an important semantic processing task for which
state-of-the-art systems still rely on costly handcrafted features. In this work
we tackle the relation classification task using a convolutional neural network
that performs classification by ranking (CR-CNN). We propose a new pairwise
ranking loss function that makes it easy to reduce the impact of artificial
classes. We perform experiments using the SemEval-2010 Task 8 dataset,
which is designed for the task of classifying the relationship between two
nominals marked in a sentence. Using CR-CNN, we outperform the state-of-the-art
for this dataset and achieve an F1 of 84.1 without using any costly handcrafted
features. Additionally, our experimental results show that: (1) our approach is
more effective than CNN followed by a softmax classifier; (2) omitting the
representation of the artificial class Other improves both precision and
recall; and (3) using only word embeddings as input features is enough to
achieve state-of-the-art results if we consider only the text between the two
target nominals. | [
"Output Functions"
] | [
"Relation Classification",
"Relation Extraction",
"Word Embeddings"
] | [
"Softmax"
] | [
"SemEval-2010 Task 8"
] | [
"F1"
] | Classifying Relations by Ranking with Convolutional Neural Networks |
Relation classification is an important research arena in the field of
natural language processing (NLP). In this paper, we present SDP-LSTM, a novel
neural network to classify the relation of two entities in a sentence. Our
neural architecture leverages the shortest dependency path (SDP) between two
entities; multichannel recurrent neural networks, with long short-term memory
(LSTM) units, pick up heterogeneous information along the SDP. Our proposed
model has several distinct features: (1) The shortest dependency paths retain
most relevant information (to relation classification), while eliminating
irrelevant words in the sentence. (2) The multichannel LSTM networks allow
effective information integration from heterogeneous sources over the
dependency paths. (3) A customized dropout strategy regularizes the neural
network to alleviate overfitting. We test our model on the SemEval 2010
relation classification task, and achieve an $F_1$-score of 83.7\%, higher than
competing methods in the literature. | [
"Recurrent Neural Networks",
"Activation Functions",
"Regularization"
] | [
"Relation Classification"
] | [
"Long Short-Term Memory",
"Tanh Activation",
"LSTM",
"Dropout",
"Sigmoid Activation"
] | [
"SemEval 2010 Task 8"
] | [
"F1"
] | Classifying Relations via Long Short Term Memory Networks along Shortest Dependency Path |
The fully-convolutional siamese network based on template matching has shown great potential in visual tracking. During testing, the template is fixed with the initial target feature and the performance totally relies on the general matching ability of the siamese network. However, this manner cannot capture the temporal variations of targets or background clutter. In this work, we propose a novel gradient-guided network to exploit the discriminative information in gradients and update the template in the siamese network through feed-forward and backward operations. Our algorithm performs feed-forward and backward operations to exploit the discriminative information in gradients and capture the core attention of the target. To be specific, the algorithm can utilize the information from the gradient to update the template in the current frame. In addition, a template generalization training method is proposed to better use gradient information and avoid overfitting. To our knowledge, this work is the first attempt to exploit the information in the gradient for template update in siamese-based trackers. Extensive experiments on recent benchmarks demonstrate that our method achieves better performance than other state-of-the-art trackers. | [
"Twin Networks"
] | [
"Object Tracking",
"Template Matching",
"Visual Object Tracking",
"Visual Tracking"
] | [
"Siamese Network"
] | [
"OTB-2015",
"VOT2017"
] | [
"Precision",
"Expected Average Overlap (EAO)"
] | GradNet: Gradient-Guided Network for Visual Object Tracking |
Given the wide diffusion of deep neural network architectures for computer vision tasks, several new applications are nowadays more and more feasible. Among them, particular attention has recently been given to instance segmentation, by exploiting the results achievable by two-stage networks (such as Mask R-CNN or Faster R-CNN), derived from R-CNN. In these complex architectures, a crucial role is played by the Region of Interest (RoI) extraction layer, devoted to extracting a coherent subset of features from a single Feature Pyramid Network (FPN) layer attached on top of a backbone. This paper is motivated by the need to overcome the limitations of existing RoI extractors which select only one (the best) layer from FPN. Our intuition is that all the layers of FPN retain useful information. Therefore, the proposed layer (called Generic RoI Extractor - GRoIE) introduces non-local building blocks and attention mechanisms to boost the performance. A comprehensive ablation study at component level is conducted to find the best set of algorithms and parameters for the GRoIE layer. Moreover, GRoIE can be integrated seamlessly with every two-stage architecture for both object detection and instance segmentation tasks. Therefore, the improvements brought about by the use of GRoIE in different state-of-the-art architectures are also evaluated. The proposed layer yields up to a 1.1% AP improvement on bounding box detection and a 1.7% AP improvement on instance segmentation. The code is publicly available in the GitHub repository at https://github.com/IMPLabUniPr/mmdetection/tree/groie_dev | [
"Output Functions",
"Feature Extractors",
"Image Feature Extractors",
"RoI Feature Extractors",
"Convolutions",
"Instance Segmentation Models",
"Skip Connections",
"Image Model Blocks"
] | [
"Instance Segmentation",
"Object Detection",
"Semantic Segmentation"
] | [
"Softmax",
"Feature Pyramid Network",
"Non-Local Operation",
"Convolution",
"GRoIE",
"1x1 Convolution",
"Residual Connection",
"FPN",
"Generic RoI Extractor",
"RoIAlign",
"Mask R-CNN",
"Non-Local Block"
] | [
"COCO minival"
] | [
"APM",
"box AP",
"AP75",
"APS",
"APL",
"AP50",
"mask AP"
] | A novel Region of Interest Extraction Layer for Instance Segmentation |
Large deep neural networks are powerful, but exhibit undesirable behaviors
such as memorization and sensitivity to adversarial examples. In this work, we
propose mixup, a simple learning principle to alleviate these issues. In
essence, mixup trains a neural network on convex combinations of pairs of
examples and their labels. By doing so, mixup regularizes the neural network to
favor simple linear behavior in-between training examples. Our experiments on
the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show
that mixup improves the generalization of state-of-the-art neural network
architectures. We also find that mixup reduces the memorization of corrupt
labels, increases the robustness to adversarial examples, and stabilizes the
training of generative adversarial networks. | [
"Image Data Augmentation",
"Initialization",
"Convolutional Neural Networks",
"Activation Functions",
"Normalization",
"Convolutions",
"Pooling Operations",
"Skip Connections",
"Skip Connection Blocks"
] | [
"Domain Generalization",
"Image Classification",
"Semi-Supervised Image Classification"
] | [
"ResNet",
"Average Pooling",
"Batch Normalization",
"Convolution",
"1x1 Convolution",
"ReLU",
"Residual Network",
"Residual Connection",
"Bottleneck Residual Block",
"Mixup",
"Kaiming Initialization",
"Residual Block",
"Global Average Pooling",
"Rectified Linear Units",
"Max Pooling"
] | [
"Kuzushiji-MNIST",
"CIFAR-100",
"CIFAR-10",
"CIFAR-10, 250 Labels",
"SVHN, 250 Labels",
"ImageNet-A"
] | [
"Top-1 accuracy %",
"Percentage correct",
"Accuracy"
] | mixup: Beyond Empirical Risk Minimization |
Most previous seq2seq summarization systems purely depend on the source text to generate summaries, which tends to be unstable. Inspired by the traditional template-based summarization approaches, this paper proposes to use existing summaries as soft templates to guide the seq2seq model. To this end, we use a popular IR platform to Retrieve proper summaries as candidate templates. Then, we extend the seq2seq framework to jointly conduct template Reranking and template-aware summary generation (Rewriting). Experiments show that, in terms of informativeness, our model significantly outperforms the state-of-the-art methods, and even soft templates themselves demonstrate high competitiveness. In addition, the incorporation of high-quality external summaries improves the stability and readability of generated summaries. | [
"Recurrent Neural Networks",
"Activation Functions",
"Sequence To Sequence Models"
] | [
"Abstractive Text Summarization",
"Sentence Summarization"
] | [
"Long Short-Term Memory",
"Tanh Activation",
"Sequence to Sequence",
"LSTM",
"Seq2Seq",
"Sigmoid Activation"
] | [
"GigaWord"
] | [
"ROUGE-L",
"ROUGE-1",
"ROUGE-2"
] | Retrieve, Rerank and Rewrite: Soft Template Based Neural Summarization |
Pretrained contextual representation models (Peters et al., 2018; Devlin et al., 2018) have pushed forward the state-of-the-art on many NLP tasks. A new release of BERT (Devlin, 2018) includes a model simultaneously pretrained on 104 languages with impressive performance for zero-shot cross-lingual transfer on a natural language inference task. This paper explores the broader cross-lingual potential of mBERT (multilingual) as a zero shot language transfer model on 5 NLP tasks covering a total of 39 languages from various language families: NLI, document classification, NER, POS tagging, and dependency parsing. We compare mBERT with the best-published methods for zero-shot cross-lingual transfer and find mBERT competitive on each task. Additionally, we investigate the most effective strategy for utilizing mBERT in this manner, determine to what extent mBERT generalizes away from language specific features, and measure factors that influence cross-lingual transfer. | [
"Regularization",
"Output Functions",
"Learning Rate Schedules",
"Stochastic Optimization",
"Attention Modules",
"Activation Functions",
"Subword Segmentation",
"Normalization",
"Language Models",
"Feedforward Networks",
"Attention Mechanisms",
"Skip Connections"
] | [
"Cross-Lingual NER",
"Cross-Lingual Transfer",
"Dependency Parsing",
"Document Classification",
"Natural Language Inference"
] | [
"Weight Decay",
"WordPiece",
"Layer Normalization",
"Softmax",
"Adam",
"Multi-Head Attention",
"Attention Dropout",
"Linear Warmup With Linear Decay",
"Residual Connection",
"Scaled Dot-Product Attention",
"Dropout",
"BERT",
"GELU",
"Dense Connections",
"Gaussian Linear Error Units"
] | [
"CoNLL German",
"CoNLL Dutch",
"CoNLL Spanish"
] | [
"F1"
] | Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT |
In the past few decades, to reduce the X-ray radiation risk in computed tomography (CT), low-dose CT image denoising has attracted extensive attention from researchers and has become an important research issue in the field of medical images. In recent years, with the rapid development of deep learning technology, many algorithms have emerged that apply convolutional neural networks to this task, achieving promising results. However, there are still some problems such as low denoising efficiency, over-smoothed results, etc. In this paper, we propose the Edge enhancement based Densely connected Convolutional Neural Network (EDCNN). In our network, we design an edge enhancement module using the proposed novel trainable Sobel convolution. Based on this module, we construct a model with dense connections to fuse the extracted edge information and realize end-to-end image denoising. Besides, when training the model, we introduce a compound loss that combines MSE loss and multi-scale perceptual loss to solve the over-smoothed problem and attain a marked improvement in image quality after denoising. Compared with the existing low-dose CT image denoising algorithms, our proposed model performs better at preserving details and suppressing noise. | [
"Feedforward Networks"
] | [
"Computed Tomography (CT)",
"Denoising",
"Image Denoising"
] | [
"Dense Connections"
] | [
"AAPM"
] | [
"SSIM",
"PSNR"
] | EDCNN: Edge enhancement-based Densely Connected Network with Compound Loss for Low-Dose CT Denoising |
We present DeblurGAN, an end-to-end learned method for motion deblurring. The
learning is based on a conditional GAN and the content loss. DeblurGAN
achieves state-of-the-art performance both in the structural similarity measure
and visual appearance. The quality of the deblurring model is also evaluated in
a novel way on a real-world problem -- object detection on (de-)blurred images.
The method is 5 times faster than the closest competitor -- DeepDeblur. We also
introduce a novel method for generating synthetic motion blurred images from
sharp ones, allowing realistic dataset augmentation.
The model, code and the dataset are available at
https://github.com/KupynOrest/DeblurGAN | [
"Generative Models",
"Convolutions"
] | [
"Deblurring",
"Object Detection"
] | [
"Generative Adversarial Network",
"GAN",
"Convolution"
] | [
"RealBlur-J (trained on GoPro)",
"RealBlur-R (trained on GoPro)",
"REDS",
"HIDE (trained on GOPRO)"
] | [
"Average PSNR",
"SSIM (sRGB)",
"PSNR (sRGB)"
] | DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks |
3D object detection on point clouds finds many applications. However, most existing point cloud object detection methods do not adequately accommodate the characteristics (e.g., sparsity) of point clouds, and thus some key semantic information (e.g., shape information) is not well captured. In this paper, we propose a new graph convolution (GConv) based hierarchical graph network (HGNet) for 3D object detection, which processes raw point clouds directly to predict 3D bounding boxes. HGNet effectively captures the relationship of the points and utilizes the multi-level semantics for object detection. Specifically, we propose a novel shape-attentive GConv (SA-GConv) to capture the local shape features, by modelling the relative geometric positions of points to describe object shapes. An SA-GConv based U-shape network captures the multi-level features, which are mapped into an identical feature space by an improved voting module and then further utilized to generate proposals. Next, a new GConv based Proposal Reasoning Module reasons on the proposals considering the global scene semantics, and the bounding boxes are then predicted. Consequently, our new framework outperforms state-of-the-art methods on two large-scale point cloud datasets, by 4% mean average precision (mAP) on SUN RGB-D and by 3% mAP on ScanNet-V2.
| [
"Convolutions"
] | [
"3D Object Detection",
"Object Detection"
] | [
"Convolution"
] | [
"ScanNetV2",
"SUN-RGBD val"
] | [
"[email protected]",
"[email protected]",
"MAP"
] | A Hierarchical Graph Network for 3D Object Detection on Point Clouds |
Existing literature on Question Answering (QA) mostly focuses on algorithmic novelty, data augmentation, or increasingly large pre-trained language models like XLNet and RoBERTa. Additionally, a lot of systems on the QA leaderboards do not have associated research documentation in order to successfully replicate their experiments. In this paper, we outline these algorithmic components such as Attention-over-Attention, coupled with data augmentation and ensembling strategies that have shown to yield state-of-the-art results on benchmark datasets like SQuAD, even achieving super-human performance. Contrary to these prior results, when we evaluate on the recently proposed Natural Questions benchmark dataset, we find that an incredibly simple approach of transfer learning from BERT outperforms the previous state-of-the-art system trained on 4 million more examples than ours by 1.9 F1 points. Adding ensembling strategies further improves that number by 2.3 F1 points. | [
"Regularization",
"Output Functions",
"Learning Rate Schedules",
"Stochastic Optimization",
"Attention Modules",
"Activation Functions",
"Subword Segmentation",
"Normalization",
"Tokenizers",
"Language Models",
"Feedforward Networks",
"Transformers",
"Attention Mechanisms",
"Skip Connections"
] | [
"Data Augmentation",
"Question Answering",
"Transfer Learning"
] | [
"Weight Decay",
"Adam",
"Scaled Dot-Product Attention",
"SentencePiece",
"RoBERTa",
"Gaussian Linear Error Units",
"XLNet",
"Residual Connection",
"Dense Connections",
"Layer Normalization",
"GELU",
"WordPiece",
"Byte Pair Encoding",
"BPE",
"Softmax",
"Multi-Head Attention",
"Attention Dropout",
"Linear Warmup With Linear Decay",
"Dropout",
"BERT"
] | [
"Natural Questions (long)",
"Natural Questions (short)"
] | [
"F1"
] | Frustratingly Easy Natural Question Answering |
Monocular 3D human-pose estimation from static images is a challenging problem, due to the curse of dimensionality and the ill-posed nature of lifting 2D-to-3D. In this paper, we propose a Deep Conditional Variational Autoencoder based model that synthesizes diverse anatomically plausible 3D-pose samples conditioned on the estimated 2D-pose. We show that the CVAE-based 3D-pose sample set is consistent with the 2D-pose and helps tackle the inherent ambiguity in 2D-to-3D lifting. We propose two strategies for obtaining the final 3D pose: (a) depth-ordering/ordinal relations to score and weight-average the candidate 3D-poses, referred to as OrdinalScore, and (b) with supervision from an Oracle. We report close to state-of-the-art results on two benchmark datasets using OrdinalScore, and state-of-the-art results using the Oracle. We also show that our pipeline yields competitive results without paired image-to-3D annotations. The training and evaluation code is available at https://github.com/ssfootball04/generative_pose. | [
"Generative Models"
] | [
"3D Human Pose Estimation",
"Pose Estimation"
] | [
"AutoEncoder"
] | [
"Human3.6M"
] | [
"Average MPJPE (mm)"
] | Monocular 3D Human Pose Estimation by Generation and Ordinal Ranking |
This technical note describes a new baseline for the Natural Questions. Our model is based on BERT and reduces the gap between the model F1 scores reported in the original dataset paper and the human upper bound by 30% and 50% relative for the long and short answer tasks respectively. This baseline has been submitted to the official NQ leaderboard at ai.google.com/research/NaturalQuestions. Code, preprocessed data and pretrained model are available at https://github.com/google-research/language/tree/master/language/question_answering/bert_joint. | [
"Regularization",
"Output Functions",
"Learning Rate Schedules",
"Stochastic Optimization",
"Attention Modules",
"Activation Functions",
"Subword Segmentation",
"Normalization",
"Language Models",
"Feedforward Networks",
"Attention Mechanisms",
"Skip Connections"
] | [
"Question Answering"
] | [
"Weight Decay",
"WordPiece",
"Layer Normalization",
"Softmax",
"Adam",
"Multi-Head Attention",
"Attention Dropout",
"Linear Warmup With Linear Decay",
"Residual Connection",
"Scaled Dot-Product Attention",
"Dropout",
"BERT",
"GELU",
"Dense Connections",
"Gaussian Linear Error Units"
] | [
"Natural Questions",
"Natural Questions (long)",
"Natural Questions (short)"
] | [
"F1 (Long)",
"F1",
"F1 (Short)"
] | A BERT Baseline for the Natural Questions |
Recently, Transformer and convolutional neural network (CNN) based models have shown promising results in Automatic Speech Recognition (ASR), outperforming recurrent neural networks (RNNs). Transformer models are good at capturing content-based global interactions, while CNNs exploit local features effectively. In this work, we achieve the best of both worlds by studying how to combine convolutional neural networks and transformers to model both local and global dependencies of an audio sequence in a parameter-efficient way. To this end, we propose the convolution-augmented transformer for speech recognition, named Conformer. Conformer significantly outperforms the previous Transformer and CNN based models, achieving state-of-the-art accuracies. On the widely used LibriSpeech benchmark, our model achieves a WER of 2.1%/4.3% without using a language model and 1.9%/3.9% with an external language model on test/test-other. We also observe competitive performance of 2.7%/6.3% with a small model of only 10M parameters. | [
"Regularization",
"Attention Modules",
"Stochastic Optimization",
"Output Functions",
"Activation Functions",
"Subword Segmentation",
"Normalization",
"Convolutions",
"Feedforward Networks",
"Transformers",
"Attention Mechanisms",
"Skip Connections"
] | [
"Language Modelling",
"Speech Recognition"
] | [
"Layer Normalization",
"Byte Pair Encoding",
"BPE",
"Softmax",
"Adam",
"Transformer",
"Multi-Head Attention",
"Convolution",
"ReLU",
"Residual Connection",
"Label Smoothing",
"Dropout",
"Scaled Dot-Product Attention",
"Dense Connections",
"Rectified Linear Units"
] | [
"LibriSpeech test-other",
"LibriSpeech test-clean"
] | [
"Word Error Rate (WER)"
] | Conformer: Convolution-augmented Transformer for Speech Recognition |
Data-augmentation is key to the training of neural networks for image classification. This paper first shows that existing augmentations induce a significant discrepancy between the typical size of the objects seen by the classifier at train and test time. We experimentally validate that, for a target test resolution, using a lower train resolution offers better classification at test time. We then propose a simple yet effective and efficient strategy to optimize the classifier performance when the train and test resolutions differ. It involves only a computationally cheap fine-tuning of the network at the test resolution. This enables training strong classifiers using small training images. For instance, we obtain 77.1% top-1 accuracy on ImageNet with a ResNet-50 trained on 128x128 images, and 79.8% with one trained on 224x224 images. In addition, if we use extra training data, we get 82.5% with the ResNet-50 trained on 224x224 images. Conversely, when taking a ResNeXt-101 32x48d pre-trained in a weakly-supervised fashion on 940 million public images at resolution 224x224 and further optimizing for test resolution 320x320, we obtain a test top-1 accuracy of 86.4% (top-5: 98.0%) (single-crop). To the best of our knowledge, this is the highest ImageNet single-crop, top-1 and top-5 accuracy to date. | [
"Image Data Augmentation",
"Initialization",
"Image Scaling Strategies",
"Convolutional Neural Networks",
"Activation Functions",
"Normalization",
"Convolutions",
"Pooling Operations",
"Skip Connections",
"Skip Connection Blocks"
] | [
"Data Augmentation",
"Fine-Grained Image Classification",
"Image Classification"
] | [
"Average Pooling",
"1x1 Convolution",
"ResNet",
"Random Horizontal Flip",
"Convolution",
"ReLU",
"Residual Connection",
"Grouped Convolution",
"Random Resized Crop",
"FixRes",
"Batch Normalization",
"Residual Network",
"ColorJitter",
"Kaiming Initialization",
"ResNeXt Block",
"Color Jitter",
"ResNeXt",
"Bottleneck Residual Block",
"Residual Block",
"Global Average Pooling",
"Rectified Linear Units",
"Max Pooling"
] | [
"iNaturalist",
"Oxford 102 Flowers",
"Oxford-IIIT Pets",
"ImageNet ReaL",
"CUB-200-2011",
"Stanford Cars",
"ImageNet",
"Birdsnap",
"NABirds"
] | [
"Number of params",
"Top 1 Accuracy",
"Params",
"Top-1 Error Rate",
"Accuracy",
"Top 5 Accuracy"
] | Fixing the train-test resolution discrepancy |
The dominant neural machine translation models are based on the encoder-decoder structure, and many of them rely on an unconstrained receptive field over source and target sequences. In this paper, we study a new architecture that breaks with both conventions. Our simplified architecture consists of the decoder part of a transformer model, based on self-attention, but with locality constraints applied to the attention receptive field. As input for training, both source and target sentences are fed to the network, which is trained as a language model. At inference time, the target tokens are predicted autoregressively starting with the source sequence as previous tokens. The proposed model achieves a new state of the art of 35.7 BLEU on IWSLT'14 German-English and matches the best reported results in the literature on the WMT'14 English-German and WMT'14 English-French translation benchmarks. | [
"Regularization",
"Output Functions",
"Stochastic Optimization",
"Attention Modules",
"Activation Functions",
"Subword Segmentation",
"Normalization",
"Feedforward Networks",
"Transformers",
"Attention Mechanisms",
"Skip Connections"
] | [
"Language Modelling",
"Machine Translation"
] | [
"Layer Normalization",
"Byte Pair Encoding",
"BPE",
"Softmax",
"Adam",
"Transformer",
"Multi-Head Attention",
"Rectified Linear Units",
"ReLU",
"Residual Connection",
"Label Smoothing",
"Dropout",
"Scaled Dot-Product Attention",
"Dense Connections"
] | [
"WMT2014 English-French",
"WMT2014 English-German",
"IWSLT2014 German-English"
] | [
"BLEU score"
] | Joint Source-Target Self Attention with Locality Constraints |
Graph clustering aims to discover community structures in networks, the task being fundamentally challenging mainly because the topology structure and the content of the graphs are difficult to represent for clustering analysis. Recently, graph clustering has moved from traditional shallow methods to deep learning approaches, thanks to the unique feature representation learning capability of deep learning. However, existing deep approaches for graph clustering can only exploit the structure information, while ignoring the content information associated with the nodes in a graph. In this paper, we propose a novel marginalized graph autoencoder (MGAE) algorithm for graph clustering. The key innovation of MGAE is that it advances the autoencoder to the graph domain, so that graph representation learning can not only be carried out in a purely unsupervised setting by leveraging structure and content information, but can also be stacked in a deep fashion to learn effective representations. From a technical viewpoint, we propose a marginalized graph convolutional network that corrupts network node content, allowing node content to interact with network features, and marginalizes the corrupted features in a graph autoencoder context to learn graph feature representations. The learned features are fed into the spectral clustering algorithm for graph clustering. Experimental results on benchmark datasets demonstrate the superior performance of MGAE, compared to numerous baselines. | [
"Generative Models"
] | [
"Graph Clustering",
"Graph Representation Learning",
"Representation Learning"
] | [
"AutoEncoder"
] | [
"Pubmed",
"Citeseer"
] | [
"Accuracy"
] | Marginalized graph autoencoder for graph clustering |
The morphological clues of various cancer cells are essential for pathologists to determine the stages of cancers. To obtain quantitative morphological information, we present an end-to-end network for panoptic segmentation of pathology images. Recently, many methods have been proposed, focusing on semantic-level or instance-level cell segmentation. Unlike existing cell segmentation methods, the proposed network unifies detecting and localizing objects and assigning pixel-level class information to regions with large overlaps such as the background. This unifier is obtained by optimizing the novel semantic loss, the bounding box loss of the Region Proposal Network (RPN), the classifier loss of the RPN, the background-foreground classifier loss of the segmentation head (instead of a class-specific loss), the bounding box loss of the proposed cell object, and the mask loss of the cell object. The results demonstrate that the proposed method not only outperforms state-of-the-art approaches on the 2017 MICCAI Digital Pathology Challenge dataset, but also provides an effective, end-to-end solution for the panoptic segmentation challenge. | [
"Region Proposal"
] | [
"Cell Segmentation",
"Nuclear Segmentation",
"Panoptic Segmentation",
"Region Proposal"
] | [
"Region Proposal Network",
"RPN"
] | [
"Cell17"
] | [
"F1-score",
"Hausdorff",
"Dice"
] | Panoptic Segmentation with an End-to-End Cell R-CNN for Pathology Image Analysis |
Grasping and manipulating objects is an important human skill. Since
hand-object contact is fundamental to grasping, capturing it can lead to
important insights. However, observing contact through external sensors is
challenging because of occlusion and the complexity of the human hand. We
present ContactDB, a novel dataset of contact maps for household objects that
captures the rich hand-object contact that occurs during grasping, enabled by
use of a thermal camera. Participants in our study grasped 3D printed objects
with a post-grasp functional intent. ContactDB includes 3750 3D meshes of 50
household objects textured with contact maps and 375K frames of synchronized
RGB-D+thermal images. To the best of our knowledge, this is the first
large-scale dataset that records detailed contact maps for human grasps.
Analysis of this data shows the influence of functional intent and object size
on grasping, the tendency to touch/avoid 'active areas', and the high frequency
of palm and proximal finger contact. Finally, we train state-of-the-art image
translation and 3D convolution algorithms to predict diverse contact patterns
from object shape. Data, code and models are available at
https://contactdb.cc.gatech.edu. | [
"Convolutions"
] | [
"Grasp Contact Prediction"
] | [
"3D Convolution",
"Convolution"
] | [
"ContactDB"
] | [
"Error rate"
] | ContactDB: Analyzing and Predicting Grasp Contact via Thermal Imaging |
Convolutional Neural Networks (CNNs) have been regarded as a powerful class of models for visual recognition problems. Nevertheless, the convolutional filters in these networks are local operations that ignore long-range dependencies. This drawback becomes even worse for video recognition, since video is an information-intensive medium with complex temporal variations. In this paper, we present a novel framework to boost spatio-temporal representation learning by Local and Global Diffusion (LGD). Specifically, we construct a novel neural network architecture that learns the local and global representations in parallel. The architecture is composed of LGD blocks, where each block updates local and global features by modeling the diffusions between these two representations. The diffusions effectively couple the two aspects of information, i.e., localized and holistic, for more powerful representation learning. Furthermore, a kernelized classifier is introduced to combine the representations from both aspects for video recognition. Our LGD networks achieve clear improvements on the large-scale Kinetics-400 and Kinetics-600 video classification datasets against the best competitors by 3.5% and 0.7%. We further examine the generalization of both the global and local representations produced by our pre-trained LGD networks on four different benchmarks for video action recognition and spatio-temporal action detection tasks. Superior performance over several state-of-the-art techniques on these benchmarks is reported. Code is available at: https://github.com/ZhaofanQiu/local-and-global-diffusion-networks. | [
"Initialization",
"Convolutional Neural Networks",
"Activation Functions",
"Normalization",
"Convolutions",
"Pooling Operations",
"Skip Connections",
"Skip Connection Blocks"
] | [
"Action Classification",
"Action Detection",
"Action Recognition",
"Representation Learning",
"Temporal Action Localization",
"Video Classification",
"Video Recognition"
] | [
"ResNet",
"Average Pooling",
"Batch Normalization",
"Convolution",
"1x1 Convolution",
"ReLU",
"Residual Network",
"Residual Connection",
"Bottleneck Residual Block",
"Kaiming Initialization",
"Residual Block",
"Global Average Pooling",
"Rectified Linear Units",
"Max Pooling"
] | [
"HMDB-51",
"Kinetics-400",
"UCF101",
"Kinetics-600"
] | [
"3-fold Accuracy",
"Top-5 Accuracy",
"Vid acc@5",
"Top-1 Accuracy",
"Average accuracy of 3 splits",
"Vid acc@1"
] | Learning Spatio-Temporal Representation with Local and Global Diffusion |
Deep neural networks have achieved great success in classification tasks in recent years. However, one major obstacle on the path towards artificial intelligence is the inability of neural networks to accurately detect samples from novel class distributions, and therefore most existing classification algorithms assume that all classes are known prior to the training stage. In this work, we propose a methodology for training a neural network that allows it to efficiently detect out-of-distribution (OOD) examples without compromising much of its classification accuracy on the test examples from known classes. We propose a novel loss function that gives rise to a novel method, Outlier Exposure with Confidence Control (OECC), which achieves superior results in OOD detection with Outlier Exposure (OE) on both image and text classification tasks without requiring access to OOD samples. Additionally, we experimentally show that the combination of OECC with state-of-the-art post-training OOD detection methods, like the Mahalanobis Detector (MD) and the Gramian Matrices (GM) methods, further improves their performance in the OOD detection task, demonstrating the potential of combining training and post-training methods for OOD detection. | [
"Initialization",
"Regularization",
"Activation Functions",
"Normalization",
"Convolutions",
"Pooling Operations",
"Skip Connections",
"Image Models",
"Skip Connection Blocks"
] | [
"Anomaly Detection",
"Image Classification",
"Out-of-Distribution Detection",
"Text Classification"
] | [
"Average Pooling",
"Batch Normalization",
"Convolution",
"ReLU",
"Residual Connection",
"Dropout",
"Wide Residual Block",
"Kaiming Initialization",
"Global Average Pooling",
"Rectified Linear Units",
"WideResNet"
] | [
"CIFAR-10 vs CIFAR-100",
"CIFAR-100",
"CIFAR-10",
"MS-1M vs. IJB-C",
"CIFAR-100 vs CIFAR-10",
"20 Newsgroups",
"ImageNet dogs vs ImageNet non-dogs"
] | [
"AUROC",
"AUPR",
"FPR95"
] | Outlier Exposure with Confidence Control for Out-of-Distribution Detection |
In this paper, we study the problem of learning Graph Convolutional Networks (GCNs) for regression. Current architectures of GCNs are limited to the small receptive field of convolution filters and shared transformation matrix for each node. To address these limitations, we propose Semantic Graph Convolutional Networks (SemGCN), a novel neural network architecture that operates on regression tasks with graph-structured data. SemGCN learns to capture semantic information such as local and global node relationships, which is not explicitly represented in the graph. These semantic relationships can be learned through end-to-end training from the ground truth without additional supervision or hand-crafted rules. We further investigate applying SemGCN to 3D human pose regression. Our formulation is intuitive and sufficient since both 2D and 3D human poses can be represented as a structured graph encoding the relationships between joints in the skeleton of a human body. We carry out comprehensive studies to validate our method. The results prove that SemGCN outperforms state of the art while using 90% fewer parameters. | [
"Convolutions"
] | [
"3D Human Pose Estimation",
"Regression"
] | [
"Convolution"
] | [
"Human3.6M"
] | [
"Average MPJPE (mm)",
"Multi-View or Monocular",
"Using 2D ground-truth joints"
] | Semantic Graph Convolutional Networks for 3D Human Pose Regression |
Models for audio source separation usually operate on the magnitude spectrum,
which ignores phase information and makes separation performance dependent on
hyper-parameters for the spectral front-end. Therefore, we investigate
end-to-end source separation in the time-domain, which allows modelling phase
information and avoids fixed spectral transformations. Due to high sampling
rates for audio, employing a long temporal input context on the sample level is
difficult, but required for high quality separation results because of
long-range temporal correlations. In this context, we propose the Wave-U-Net,
an adaptation of the U-Net to the one-dimensional time domain, which repeatedly
resamples feature maps to compute and combine features at different time
scales. We introduce further architectural improvements, including an output
layer that enforces source additivity, an upsampling technique and a
context-aware prediction framework to reduce output artifacts. Experiments for
singing voice separation indicate that our architecture yields a performance
comparable to a state-of-the-art spectrogram-based U-Net architecture, given
the same data. Finally, we reveal a problem with outliers in the currently used
SDR evaluation metrics and suggest reporting rank-based statistics to alleviate
this problem. | [
"Semantic Segmentation Models",
"Activation Functions",
"Convolutions",
"Pooling Operations",
"Skip Connections"
] | [
"Audio Source Separation",
"Music Source Separation"
] | [
"U-Net",
"Concatenated Skip Connection",
"Convolution",
"ReLU",
"Rectified Linear Units",
"Max Pooling"
] | [
"MUSDB18"
] | [
"SDR (vocals)",
"SDR (other)",
"SDR (drums)",
"SDR (bass)"
] | Wave-U-Net: A Multi-Scale Neural Network for End-to-End Audio Source Separation |
Recurrent Neural Networks (RNNs) have long been the dominant architecture in sequence-to-sequence learning. RNNs, however, are inherently sequential models that do not allow parallelization of their computations. Transformers are emerging as a natural alternative to standard RNNs, replacing recurrent computations with a multi-head attention mechanism. In this paper, we propose the SepFormer, a novel RNN-free Transformer-based neural network for speech separation. The SepFormer learns short and long-term dependencies with a multi-scale approach that employs transformers. The proposed model achieves state-of-the-art (SOTA) performance on the standard WSJ0-2/3mix datasets. It reaches an SI-SNRi of 22.3 dB on WSJ0-2mix and an SI-SNRi of 19.5 dB on WSJ0-3mix. The SepFormer inherits the parallelization advantages of Transformers and achieves a competitive performance even when downsampling the encoded representation by a factor of 8. It is thus significantly faster and it is less memory-demanding than the latest speech separation systems with comparable performance. | [
"Attention Modules"
] | [
"Speech Separation"
] | [
"Multi-Head Attention"
] | [
"wsj0-2mix",
"WSJ0-3mix"
] | [
"SI-SDRi"
] | Attention is All You Need in Speech Separation |
The Sparsespeech model is an unsupervised acoustic model that can generate discrete pseudo-labels for untranscribed speech. We extend the Sparsespeech model to allow for sampling over a random discrete variable, yielding pseudo-posteriorgrams. The degree of sparsity in this posteriorgram can be fully controlled after the model has been trained. We use the Gumbel-Softmax trick to approximately sample from a discrete distribution in the neural network and this allows us to train the network efficiently with standard backpropagation. The new and improved model is trained and evaluated on the Libri-Light corpus, a benchmark for ASR with limited or no supervision. The model is trained on 600h and 6000h of English read speech. We evaluate the improved model using the ABX error measure and a semi-supervised setting with 10h of transcribed speech. We observe a relative improvement of up to 31.4% on ABX error rates across speakers on the test set with the improved Sparsespeech model on 600h of speech data and further improvements when we scale the model to 6000h. | [
"Recurrent Neural Networks",
"Activation Functions",
"Bidirectional Recurrent Neural Networks",
"Distributions"
] | [
"Speech Recognition"
] | [
"Gumbel Softmax",
"Long Short-Term Memory",
"BiLSTM",
"Tanh Activation",
"Bidirectional LSTM",
"LSTM",
"Sigmoid Activation"
] | [
"Libri-Light test-other",
"Libri-Light test-clean"
] | [
"ABX-across",
"ABX-within"
] | Improving Unsupervised Sparsespeech Acoustic Models with Categorical Reparameterization |
Designing convolutional neural networks (CNNs) for mobile devices is challenging because mobile models need to be small and fast, yet still accurate. Although significant efforts have been dedicated to designing and improving mobile CNNs on all dimensions, it is very difficult to manually balance these trade-offs when there are so many architectural possibilities to consider. In this paper, we propose an automated mobile neural architecture search (MNAS) approach, which explicitly incorporates model latency into the main objective so that the search can identify a model that achieves a good trade-off between accuracy and latency. Unlike previous work, where latency is considered via another, often inaccurate proxy (e.g., FLOPS), our approach directly measures real-world inference latency by executing the model on mobile phones. To further strike the right balance between flexibility and search space size, we propose a novel factorized hierarchical search space that encourages layer diversity throughout the network. Experimental results show that our approach consistently outperforms state-of-the-art mobile CNN models across multiple vision tasks. On the ImageNet classification task, our MnasNet achieves 75.2% top-1 accuracy with 78ms latency on a Pixel phone, which is 1.8x faster than MobileNetV2 [29] with 0.5% higher accuracy and 2.3x faster than NASNet [36] with 1.2% higher accuracy. Our MnasNet also achieves better mAP quality than MobileNets for COCO object detection. Code is at https://github.com/tensorflow/tpu/tree/master/models/official/mnasnet | [
"Image Data Augmentation",
"Regularization",
"Convolutional Neural Networks",
"Learning Rate Schedules",
"Stochastic Optimization",
"Output Functions",
"Activation Functions",
"Recurrent Neural Networks",
"Normalization",
"Convolutions",
"Feedforward Networks",
"Pooling Operations",
"Image Model Blocks",
"Skip Connection Blocks"
] | [
"Image Classification",
"Neural Architecture Search",
"Object Detection",
"Real-Time Object Detection"
] | [
"Depthwise Convolution",
"Weight Decay",
"Average Pooling",
"RMSProp",
"Long Short-Term Memory",
"Tanh Activation",
"MnasNet",
"1x1 Convolution",
"Random Horizontal Flip",
"Convolution",
"ReLU",
"Dense Connections",
"Random Resized Crop",
"Batch Normalization",
"Squeeze-and-Excitation Block",
"Pointwise Convolution",
"Sigmoid Activation",
"Inverted Residual Block",
"Softmax",
"Linear Warmup With Linear Decay",
"LSTM",
"Dropout",
"Depthwise Separable Convolution",
"Global Average Pooling",
"Rectified Linear Units"
] | [
"COCO",
"ImageNet"
] | [
"Number of params",
"Top 5 Accuracy",
"MAP",
"Top 1 Accuracy"
] | MnasNet: Platform-Aware Neural Architecture Search for Mobile |
Convolutional networks are at the core of most state-of-the-art computer
vision solutions for a wide variety of tasks. Since 2014 very deep
convolutional networks started to become mainstream, yielding substantial gains
in various benchmarks. Although increased model size and computational cost
tend to translate to immediate quality gains for most tasks (as long as enough
labeled data is provided for training), computational efficiency and low
parameter count are still enabling factors for various use cases such as mobile
vision and big-data scenarios. Here we explore ways to scale up networks in
ways that aim at utilizing the added computation as efficiently as possible by
suitably factorized convolutions and aggressive regularization. We benchmark
our methods on the ILSVRC 2012 classification challenge validation set and
demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6%
top-5 error for single frame evaluation using a network with a computational
cost of 5 billion multiply-adds per inference while using less than 25
million parameters. With an ensemble of 4 models and multi-crop evaluation, we
report 3.5% top-5 error on the validation set (3.6% error on the test set) and
17.3% top-1 error on the validation set. | [
"Regularization",
"Output Functions",
"Convolutional Neural Networks",
"Learning Rate Schedules",
"Stochastic Optimization",
"Optimization",
"Convolutions",
"Feedforward Networks",
"Pooling Operations",
"Image Model Blocks",
"Miscellaneous Components"
] | [
"Image Classification",
"Retinal OCT Disease Classification"
] | [
"Inception-v3 Module",
"Exponential Decay",
"SGD with Momentum",
"Average Pooling",
"RMSProp",
"Softmax",
"Auxiliary Classifier",
"Convolution",
"1x1 Convolution",
"Inception-v3",
"Label Smoothing",
"Dropout",
"Gradient Clipping",
"Dense Connections",
"Max Pooling"
] | [
"OCT2017",
"ImageNet"
] | [
"Acc",
"Sensitivity",
"Top 1 Accuracy"
] | Rethinking the Inception Architecture for Computer Vision |
In this work, we address the problem of 3D human pose estimation from a sequence of 2D human poses. Although the recent success of deep networks has led many state-of-the-art methods for 3D pose estimation to train deep networks end-to-end to predict from images directly, the top-performing approaches have shown the effectiveness of dividing the task of 3D pose estimation into two steps: using a state-of-the-art 2D pose estimator to estimate the 2D pose from images and then mapping them into 3D space. They also showed that a low-dimensional representation like 2D locations of a set of joints can be discriminative enough to estimate 3D pose with high accuracy. However, estimation of 3D pose for individual frames leads to temporally incoherent estimates due to independent errors in each frame, causing jitter. Therefore, in this work we utilize the temporal information across a sequence of 2D joint locations to estimate a sequence of 3D poses. We designed a sequence-to-sequence network composed of layer-normalized LSTM units with shortcut connections connecting the input to the output on the decoder side and imposed a temporal smoothness constraint during training. We found that the knowledge of temporal consistency improves the best reported result on the Human3.6M dataset by approximately 12.2% and helps our network to recover temporally consistent 3D poses over a sequence of images even when the 2D pose detector fails. | [
"Recurrent Neural Networks",
"Activation Functions"
] | [
"3D Human Pose Estimation",
"3D Pose Estimation",
"Pose Estimation"
] | [
"Tanh Activation",
"Long Short-Term Memory",
"LSTM",
"Sigmoid Activation"
] | [
"HumanEva-I"
] | [
"Mean Reconstruction Error (mm)"
] | Exploiting temporal information for 3D human pose estimation |
A number of recent works have proposed attention models for Visual Question
Answering (VQA) that generate spatial maps highlighting image regions relevant
to answering the question. In this paper, we argue that in addition to modeling
"where to look" or visual attention, it is equally important to model "what
words to listen to" or question attention. We present a novel co-attention
model for VQA that jointly reasons about image and question attention. In
addition, our model reasons about the question (and consequently the image via
the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional
convolutional neural network (CNN). Our model improves the state-of-the-art on
the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA
dataset. By using ResNet, the performance is further improved to 62.1% for VQA
and 65.4% for COCO-QA. | [
"Initialization",
"Convolutional Neural Networks",
"Activation Functions",
"Normalization",
"Convolutions",
"Pooling Operations",
"Skip Connections",
"Skip Connection Blocks"
] | [
"Visual Dialog",
"Visual Question Answering"
] | [
"ResNet",
"Average Pooling",
"Batch Normalization",
"Convolution",
"1x1 Convolution",
"ReLU",
"Residual Network",
"Residual Connection",
"Bottleneck Residual Block",
"Kaiming Initialization",
"Residual Block",
"Global Average Pooling",
"Rectified Linear Units",
"Max Pooling"
] | [
"VQA v1 test-std",
"COCO Visual Question Answering (VQA) real images 1.0 open ended",
"VisDial v0.9 val",
"VQA v1 test-dev",
"COCO Visual Question Answering (VQA) real images 1.0 multiple choice"
] | [
"R@10",
"Percentage correct",
"R@5",
"Mean Rank",
"MRR",
"Accuracy",
"R@1"
] | Hierarchical Question-Image Co-Attention for Visual Question Answering |
Recently, neural models pretrained on a language modeling task, such as ELMo (Peters et al., 2017), OpenAI GPT (Radford et al., 2018), and BERT (Devlin et al., 2018), have achieved impressive results on various natural language processing tasks such as question-answering and natural language inference. In this paper, we describe a simple re-implementation of BERT for query-based passage re-ranking. Our system is the state of the art on the TREC-CAR dataset and the top entry in the leaderboard of the MS MARCO passage retrieval task, outperforming the previous state of the art by 27% (relative) in MRR@10. The code to reproduce our results is available at https://github.com/nyu-dl/dl4marco-bert | [
"Regularization",
"Attention Modules",
"Learning Rate Schedules",
"Stochastic Optimization",
"Recurrent Neural Networks",
"Activation Functions",
"Output Functions",
"Subword Segmentation",
"Word Embeddings",
"Normalization",
"Attention Mechanisms",
"Language Models",
"Feedforward Networks",
"Transformers",
"Fine-Tuning",
"Skip Connections",
"Bidirectional Recurrent Neural Networks"
] | [
"Passage Re-Ranking"
] | [
"Weight Decay",
"Cosine Annealing",
"Adam",
"Long Short-Term Memory",
"BiLSTM",
"Tanh Activation",
"Scaled Dot-Product Attention",
"Gaussian Linear Error Units",
"Bidirectional LSTM",
"Residual Connection",
"Dense Connections",
"ELMo",
"Layer Normalization",
"Discriminative Fine-Tuning",
"GPT",
"GELU",
"Sigmoid Activation",
"WordPiece",
"Byte Pair Encoding",
"BPE",
"Softmax",
"Multi-Head Attention",
"Attention Dropout",
"Linear Warmup With Cosine Annealing",
"Linear Warmup With Linear Decay",
"LSTM",
"Dropout",
"BERT"
] | [
"MS MARCO"
] | [
"MRR"
] | Passage Re-ranking with BERT |
Neural network pruning reduces the computational cost of an over-parameterized network to improve its efficiency. Popular methods vary from $\ell_1$-norm sparsification to Neural Architecture Search (NAS). In this work, we propose a novel pruning method that optimizes the final accuracy of the pruned network and distills knowledge from the over-parameterized parent network's inner layers. To enable this approach, we formulate the network pruning as a Knapsack Problem which optimizes the trade-off between the importance of neurons and their associated computational cost. Then we prune the network channels while maintaining the high-level structure of the network. The pruned network is fine-tuned under the supervision of the parent network using its inner network knowledge, a technique we refer to as the Inner Knowledge Distillation. Our method leads to state-of-the-art pruning results on ImageNet, CIFAR-10 and CIFAR-100 using ResNet backbones. To prune complex network structures such as convolutions with skip-links and depth-wise convolutions, we propose a block grouping approach to cope with these structures. Through this we produce compact architectures with the same FLOPs as EfficientNet-B0 and MobileNetV3 but with higher accuracy, by $1\%$ and $0.3\%$ respectively on ImageNet, and faster runtime on GPU. | [
"Initialization",
"Convolutional Neural Networks",
"Activation Functions",
"Normalization",
"Convolutions",
"Feedforward Networks",
"Pooling Operations",
"Skip Connections",
"Skip Connection Blocks"
] | [
"Knowledge Distillation",
"Network Pruning",
"Neural Architecture Search"
] | [
"ResNet",
"Average Pooling",
"Batch Normalization",
"Convolution",
"1x1 Convolution",
"ReLU",
"Residual Network",
"Residual Connection",
"Bottleneck Residual Block",
"Kaiming Initialization",
"Residual Block",
"Dense Connections",
"Global Average Pooling",
"Rectified Linear Units",
"Max Pooling"
] | [
"ImageNet"
] | [
"GFLOPs",
"Accuracy"
] | Knapsack Pruning with Inner Distillation |
Most recent successes in forecasting people's motion are based on LSTM models, and most recent progress has been achieved by modelling the social interaction among people and the interaction of people with the scene. We question the use of LSTM models and propose the novel use of Transformer Networks for trajectory forecasting. This is a fundamental switch from the sequential step-by-step processing of LSTMs to the attention-only memory mechanisms of Transformers. In particular, we consider both the original Transformer Network (TF) and the larger Bidirectional Transformer (BERT), state of the art on all natural language processing tasks. Our proposed Transformers predict the trajectories of the individual people in the scene. These are "simple" models because each person is modelled separately, without any complex human-human or scene interaction terms. In particular, the TF model without bells and whistles yields the best score on the largest and most challenging trajectory forecasting benchmark, TrajNet. Additionally, its extension, which predicts multiple plausible future trajectories, performs on par with more engineered techniques on the 5 datasets of ETH + UCY. Finally, we show that Transformers can deal with missing observations, as may be the case with real sensor data. Code is available at https://github.com/FGiuliari/Trajectory-Transformer. | [
"Regularization",
"Output Functions",
"Stochastic Optimization",
"Attention Modules",
"Recurrent Neural Networks",
"Activation Functions",
"Subword Segmentation",
"Normalization",
"Feedforward Networks",
"Transformers",
"Attention Mechanisms",
"Skip Connections"
] | [
"Trajectory Forecasting",
"Trajectory Prediction"
] | [
"Layer Normalization",
"Byte Pair Encoding",
"BPE",
"Softmax",
"Adam",
"Long Short-Term Memory",
"Multi-Head Attention",
"Transformer",
"Tanh Activation",
"Rectified Linear Units",
"ReLU",
"Residual Connection",
"Label Smoothing",
"Dropout",
"Scaled Dot-Product Attention",
"LSTM",
"Dense Connections",
"Sigmoid Activation"
] | [
"ETH/UCY"
] | [
"ADE-8/12"
] | Transformer Networks for Trajectory Forecasting |
Instance segmentation requires a large number of training samples to achieve satisfactory performance and benefits from proper data augmentation. To enlarge the training set and increase its diversity, previous methods have investigated using data annotations from other domains (e.g. bbox, point) in a weakly supervised mechanism. In this paper, we present a simple, efficient and effective method to augment the training set using the existing instance mask annotations. Exploiting the pixel redundancy of the background, we are able to improve the performance of Mask R-CNN by 1.7 mAP on the COCO dataset and 3.3 mAP on the Pascal VOC dataset by simply introducing random jittering to objects. Furthermore, we propose a location probability map based approach to explore the feasible locations where objects can be placed, based on local appearance similarity. With the guidance of such a map, we boost the performance of R101-Mask R-CNN on instance segmentation from 35.7 mAP to 37.9 mAP without modifying the backbone or network structure. Our method is simple to implement and does not increase the computational complexity. It can be integrated into the training pipeline of any instance segmentation model without affecting the training and inference efficiency. Our code and models have been released at https://github.com/GothicAi/InstaBoost | [
"Image Data Augmentation",
"Initialization",
"Output Functions",
"Convolutional Neural Networks",
"Learning Rate Schedules",
"Feature Extractors",
"Activation Functions",
"RoI Feature Extractors",
"Normalization",
"Convolutions",
"Pooling Operations",
"Instance Segmentation Models",
"Skip Connections",
"Object Detection Models",
"Skip Connection Blocks"
] | [
"Data Augmentation",
"Instance Segmentation",
"Semantic Segmentation"
] | [
"Average Pooling",
"1x1 Convolution",
"RoIAlign",
"ResNet",
"Convolution",
"ReLU",
"Residual Connection",
"FPN",
"Batch Normalization",
"Residual Network",
"Kaiming Initialization",
"Cascade R-CNN",
"Step Decay",
"Softmax",
"Feature Pyramid Network",
"InstaBoost",
"Bottleneck Residual Block",
"Mask R-CNN",
"Residual Block",
"Global Average Pooling",
"Rectified Linear Units",
"Max Pooling"
] | [
"COCO test-dev"
] | [
"APM",
"box AP",
"AP75",
"APS",
"APL",
"AP50",
"mask AP"
] | InstaBoost: Boosting Instance Segmentation via Probability Map Guided Copy-Pasting |
Distant supervision (DS) is a promising approach for relation extraction but often suffers from the noisy label problem. Traditional DS methods usually represent an entity pair as a bag of sentences and denoise labels using multi-instance learning techniques. The bag-based paradigm, however, fails to leverage the inter-sentence-level and the entity-level evidence for relation extraction, and its denoising algorithms are often specialized and complicated. In this paper, we propose a new DS paradigm--document-based distant supervision, which models relation extraction as a document-based machine reading comprehension (MRC) task. By re-organizing all sentences about an entity as a document and extracting relations via querying the document with relation-specific questions, the document-based DS paradigm can simultaneously encode and exploit all sentence-level, inter-sentence-level, and entity-level evidence. Furthermore, we design a new loss function--DSLoss (distant supervision loss), which can effectively train MRC models using only $\langle$document, question, answer$\rangle$ tuples, so the noisy label problem is inherently resolved. Experiments show that our method achieves new state-of-the-art DS performance. | [
"Regularization",
"Output Functions",
"Stochastic Optimization",
"Learning Rate Schedules",
"Attention Modules",
"Activation Functions",
"Subword Segmentation",
"Normalization",
"Language Models",
"Feedforward Networks",
"Attention Mechanisms",
"Skip Connections"
] | [
"Denoising",
"Machine Reading Comprehension",
"Reading Comprehension",
"Relation Extraction",
"Relationship Extraction (Distant Supervised)"
] | [
"Weight Decay",
"WordPiece",
"Layer Normalization",
"Softmax",
"Adam",
"Multi-Head Attention",
"Attention Dropout",
"Linear Warmup With Linear Decay",
"Residual Connection",
"Scaled Dot-Product Attention",
"Dropout",
"BERT",
"GELU",
"Dense Connections",
"Gaussian Linear Error Units"
] | [
"NYT"
] | [
"P@300",
"P@100",
"P@200",
"PR AUC"
] | From Bag of Sentences to Document: Distantly Supervised Relation Extraction via Machine Reading Comprehension |
We propose a novel self-attention mechanism that can learn its optimal attention span. This allows us to significantly extend the maximum context size used in the Transformer, while maintaining control over the memory footprint and computational time. We show the effectiveness of our approach on the task of character-level language modeling, where we achieve state-of-the-art performance on text8 and enwik8 by using a maximum context of 8k characters. | [
"Regularization",
"Attention Modules",
"Stochastic Optimization",
"Output Functions",
"Activation Functions",
"Subword Segmentation",
"Normalization",
"Feedforward Networks",
"Transformers",
"Attention Mechanisms",
"Skip Connections"
] | [
"Language Modelling"
] | [
"Adam",
"Scaled Dot-Product Attention",
"Adaptive Masking",
"Transformer",
"ReLU",
"Residual Connection",
"Embedding Dropout",
"Dense Connections",
"Layer Normalization",
"Label Smoothing",
"L1 Regularization",
"Byte Pair Encoding",
"BPE",
"Adaptive Span Transformer",
"Softmax",
"Multi-Head Attention",
"Attention Dropout",
"Dropout",
"Rectified Linear Units"
] | [
"Text8",
"enwik8"
] | [
"Number of params",
"Bit per Character (BPC)"
] | Adaptive Attention Span in Transformers |
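The learnable span described above is typically realised as a soft mask that ramps attention weights to zero beyond a learned distance z. A minimal sketch of such a mask follows; the ramp width and the span value are illustrative, and in training the masked weights are renormalised while an L1 penalty pushes the spans down.

```python
# Hedged sketch of the adaptive-span soft mask: attention weights for keys farther
# than the learned span z are smoothly ramped to zero over a window of width `ramp`.
import torch

def adaptive_span_mask(distances: torch.Tensor, z: torch.Tensor, ramp: float = 32.0):
    """m_z(x) = clamp((ramp + z - x) / ramp, 0, 1), applied elementwise."""
    return torch.clamp((ramp + z - distances) / ramp, min=0.0, max=1.0)

if __name__ == "__main__":
    # Distance of each key position from the current query (0 = most recent token).
    distances = torch.arange(0, 256, dtype=torch.float32)
    z = torch.tensor(100.0)                    # learned span parameter (fixed here)
    mask = adaptive_span_mask(distances, z)
    print(mask[:4], mask[120:124], mask[200:204])  # near 1, ramping down, exactly 0
```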
Deep neural networks are typically trained by optimizing a loss function with
an SGD variant, in conjunction with a decaying learning rate, until
convergence. We show that simple averaging of multiple points along the
trajectory of SGD, with a cyclical or constant learning rate, leads to better
generalization than conventional training. We also show that this Stochastic
Weight Averaging (SWA) procedure finds much flatter solutions than SGD, and
approximates the recent Fast Geometric Ensembling (FGE) approach with a single
model. Using SWA we achieve notable improvement in test accuracy over
conventional SGD training on a range of state-of-the-art residual networks,
PyramidNets, DenseNets, and Shake-Shake networks on CIFAR-10, CIFAR-100, and
ImageNet. In short, SWA is extremely easy to implement, improves
generalization, and has almost no computational overhead. | [
"Initialization",
"Regularization",
"Convolutional Neural Networks",
"Learning Rate Schedules",
"Output Functions",
"Stochastic Optimization",
"Activation Functions",
"Normalization",
"Convolutions",
"Feedforward Networks",
"Pooling Operations",
"Skip Connections",
"Image Model Blocks",
"Image Models",
"Skip Connection Blocks"
] | [
"Image Classification",
"Stochastic Optimization"
] | [
"Weight Decay",
"Cosine Annealing",
"Average Pooling",
"1x1 Convolution",
"ResNet",
"VGG",
"Pyramidal Bottleneck Residual Unit",
"Convolution",
"ReLU",
"Residual Connection",
"Wide Residual Block",
"Zero-padded Shortcut Connection",
"Dense Connections",
"Max Pooling",
"Dense Block",
"PyramidNet",
"Batch Normalization",
"Residual Network",
"Kaiming Initialization",
"SGD",
"Stochastic Gradient Descent",
"Shake-Shake Regularization",
"Softmax",
"Pyramidal Residual Unit",
"Concatenated Skip Connection",
"Bottleneck Residual Block",
"DenseNet",
"Dropout",
"Stochastic Weight Averaging",
"Residual Block",
"Global Average Pooling",
"Rectified Linear Units",
"WideResNet"
] | [
"CIFAR-100",
"ImageNet",
"CIFAR-10"
] | [
"Percentage correct",
"Top 1 Accuracy"
] | Averaging Weights Leads to Wider Optima and Better Generalization |
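A minimal sketch of the averaging procedure described above: run SGD with a constant or cyclical learning rate and fold the weights at the end of each cycle into a running average. The model, data, and cycle length are toy placeholders; note that PyTorch also ships `torch.optim.swa_utils` with an `AveragedModel` helper for the same purpose.

```python
# Hedged sketch of Stochastic Weight Averaging: average model weights collected
# along the SGD trajectory. Model, data, and schedule are toy placeholders.
import copy
import torch
from torch import nn

model = nn.Linear(10, 1)
swa_model = copy.deepcopy(model)     # holds the running average of the weights
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()
n_averaged = 0

for step in range(1, 501):
    x, y = torch.randn(32, 10), torch.randn(32, 1)
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()

    if step % 50 == 0:               # end of a (constant-lr) cycle: fold weights in
        n_averaged += 1
        with torch.no_grad():
            for p_swa, p in zip(swa_model.parameters(), model.parameters()):
                p_swa.mul_(n_averaged - 1).add_(p).div_(n_averaged)

# swa_model now holds the averaged solution; for networks with batch norm,
# the running statistics must be re-estimated with a forward pass over the data.
print(n_averaged, "snapshots averaged")
```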
Assessing the location and extent of lesions caused by chronic stroke is critical for medical diagnosis, surgical planning, and prognosis. In recent years, with the rapid development of 2D and 3D convolutional neural networks (CNN), the encoder-decoder structure has shown great potential in the field of medical image segmentation. However, the 2D CNN ignores the 3D information of medical images, while the 3D CNN suffers from high computational resource demands. This paper proposes a new architecture called dimension-fusion-UNet (D-UNet), which combines 2D and 3D convolution innovatively in the encoding stage. The proposed architecture achieves a better segmentation performance than 2D networks, while requiring significantly less computation time in comparison to 3D networks. Furthermore, to alleviate the data imbalance issue between positive and negative samples for the network training, we propose a new loss function called Enhance Mixing Loss (EML). This function adds a weighted focal coefficient and combines two traditional loss functions. The proposed method has been tested on the ATLAS dataset and compared to three state-of-the-art methods. The results demonstrate that the proposed method achieves the best performance, with DSC = 0.5349 ± 0.2763 and precision = 0.6331 ± 0.295. | [
"Convolutions"
] | [
"Lesion Segmentation",
"Medical Diagnosis",
"Medical Image Segmentation",
"Semantic Segmentation"
] | [
"3D Convolution",
"Convolution"
] | [
"Anatomical Tracings of Lesions After Stroke (ATLAS) "
] | [
"Precision",
"Recall",
"Dice"
] | D-UNet: a dimension-fusion U shape network for chronic stroke lesion segmentation |
We propose a unified game-theoretical framework to perform classification and conditional image generation given limited supervision. It is formulated as a three-player minimax game consisting of a generator, a classifier and a discriminator, and therefore is referred to as Triple Generative Adversarial Network (Triple-GAN). The generator and the classifier characterize the conditional distributions between images and labels to perform conditional generation and classification, respectively. The discriminator solely focuses on identifying fake image-label pairs. Under a nonparametric assumption, we prove the unique equilibrium of the game is that the distributions characterized by the generator and the classifier converge to the data distribution. As a byproduct of the three-player mechanism, Triple-GAN can flexibly incorporate different semi-supervised classifiers and GAN architectures. We evaluate Triple-GAN in two challenging settings, namely, semi-supervised learning and the extreme low data regime. In both settings, Triple-GAN can achieve excellent classification results and generate meaningful samples in a specific class simultaneously. In particular, using a commonly adopted 13-layer CNN classifier, Triple-GAN outperforms extensive semi-supervised learning methods substantially on more than 10 benchmarks, regardless of whether data augmentation is applied. | [
"Generative Models",
"Convolutions"
] | [
"Conditional Image Generation",
"Data Augmentation",
"Image Generation",
"Semi-Supervised Image Classification"
] | [
"Generative Adversarial Network",
"GAN",
"Convolution"
] | [
"SVHN, 500 Labels",
"SVHN, 250 Labels",
"SVHN, 1000 labels",
"CIFAR-10, 1000 Labels",
"CIFAR-10, 4000 Labels"
] | [
"Accuracy"
] | Triple Generative Adversarial Networks |
This paper presents a fast and parsimonious parsing method to accurately and robustly detect a vectorized wireframe in an input image with a single forward pass. The proposed method is end-to-end trainable, consisting of three components: (i) line segment and junction proposal generation, (ii) line segment and junction matching, and (iii) line segment and junction verification. For computing line segment proposals, a novel exact dual representation is proposed which exploits a parsimonious geometric reparameterization for line segments and forms a holistic 4-dimensional attraction field map for an input image. Junctions can be treated as the "basins" in the attraction field. The proposed method is thus called Holistically-Attracted Wireframe Parser (HAWP). In experiments, the proposed method is tested on two benchmarks, the Wireframe dataset, and the YorkUrban dataset. On both benchmarks, it obtains state-of-the-art performance in terms of accuracy and efficiency. For example, on the Wireframe dataset, compared to the previous state-of-the-art method L-CNN, it improves the challenging mean structural average precision (msAP) by a large margin ($2.8\%$ absolute improvements) and achieves 29.5 FPS on single GPU ($89\%$ relative improvement). A systematic ablation study is performed to further justify the proposed method. | [
"Graph Embeddings"
] | [
"Line Segment Detection"
] | [
"LINE",
"Large-scale Information Network Embedding"
] | [
"York Urban Dataset",
"wireframe dataset"
] | [
"sAP15",
"sAP10",
"F1 score",
"sAP5"
] | Holistically-Attracted Wireframe Parsing |
The Variational Auto-Encoder (VAE) is one of the most used unsupervised
machine learning models. But although the default choice of a Gaussian
distribution for both the prior and posterior represents a mathematically
convenient distribution often leading to competitive results, we show that this
parameterization fails to model data with a latent hyperspherical structure. To
address this issue we propose using a von Mises-Fisher (vMF) distribution
instead, leading to a hyperspherical latent space. Through a series of
experiments we show how such a hyperspherical VAE, or $\mathcal{S}$-VAE, is
more suitable for capturing data with a hyperspherical latent structure, while
outperforming a normal, $\mathcal{N}$-VAE, in low dimensions on other data
types. | [
"Generative Models"
] | [
"Link Prediction"
] | [
"VAE",
"Variational Autoencoder"
] | [
"Cora",
"Pubmed",
"Citeseer"
] | [
"AP",
"AUC"
] | Hyperspherical Variational Auto-Encoders |
Extracting robust and general 3D local features is key to downstream tasks such as point cloud registration and reconstruction. Existing learning-based local descriptors are either sensitive to rotation transformations, or rely on classical handcrafted features which are neither general nor representative. In this paper, we introduce a new, yet conceptually simple, neural architecture, termed SpinNet, to extract local features which are rotationally invariant whilst sufficiently informative to enable accurate registration. A Spatial Point Transformer is first introduced to map the input local surface into a carefully designed cylindrical space, enabling end-to-end optimization with SO(2) equivariant representation. A Neural Feature Extractor which leverages the powerful point-based and 3D cylindrical convolutional neural layers is then utilized to derive a compact and representative descriptor for matching. Extensive experiments on both indoor and outdoor datasets demonstrate that SpinNet outperforms existing state-of-the-art techniques by a large margin. More critically, it has the best generalization ability across unseen scenarios with different sensor modalities. The code is available at https://github.com/QingyongHu/SpinNet. | [
"Regularization",
"Attention Modules",
"Output Functions",
"Stochastic Optimization",
"Normalization",
"Subword Segmentation",
"Feedforward Networks",
"Transformers",
"Attention Mechanisms",
"Skip Connections"
] | [
"Point Cloud Registration"
] | [
"Layer Normalization",
"Byte Pair Encoding",
"BPE",
"Softmax",
"Adam",
"Transformer",
"Multi-Head Attention",
"Residual Connection",
"Scaled Dot-Product Attention",
"Dropout",
"Label Smoothing",
"Dense Connections"
] | [
"3DMatch Benchmark"
] | [
"Recall"
] | SpinNet: Learning a General Surface Descriptor for 3D Point Cloud Registration |
Prior approaches to line segment detection typically involve perceptual grouping in the image domain or global accumulation in the Hough domain. Here we propose a probabilistic algorithm that merges the advantages of both approaches. In a first stage lines are detected using a global probabilistic Hough approach. In the second stage each detected line is analyzed in the image domain to localize the line segments that generated the peak in the Hough map. By limiting search to a line, the distribution of segments over the sequence of points on the line can be modeled as a Markov chain, and a probabilistically optimal labelling can be computed exactly using a standard dynamic programming algorithm, in linear time. The Markov assumption also leads to an intuitive ranking method that uses the local marginal posterior probabilities to estimate the expected number of correctly labelled points on a segment. To assess the resulting Markov Chain Marginal Line Segment Detector (MCMLSD) we develop and apply a novel quantitative evaluation methodology that controls for under- and over-segmentation. Evaluation on the YorkUrbanDB dataset shows that the proposed MCMLSD method outperforms the state-of-the-art by a substantial margin.
| [
"Graph Embeddings"
] | [
"Line Segment Detection"
] | [
"LINE",
"Large-scale Information Network Embedding"
] | [
"York Urban Dataset",
"wireframe dataset"
] | [
"sAP10",
"sAP5"
] | MCMLSD: A Dynamic Programming Approach to Line Segment Detection |
Building discriminative representations for 3D data has been an important
task in computer graphics and computer vision research. Convolutional Neural
Networks (CNNs) have shown to operate on 2D images with great success for a
variety of tasks. Lifting convolution operators to 3D (3DCNNs) seems like a
plausible and promising next step. Unfortunately, the computational complexity
of 3D CNNs grows cubically with respect to voxel resolution. Moreover, since
most 3D geometry representations are boundary based, occupied regions do not
increase proportionately with the size of the discretization, resulting in
wasted computation. In this work, we represent 3D spaces as volumetric fields,
and propose a novel design that employs field probing filters to efficiently
extract features from them. Each field probing filter is a set of probing
points --- sensors that perceive the space. Our learning algorithm optimizes
not only the weights associated with the probing points, but also their
locations, which deforms the shape of the probing filters and adaptively
distributes them in 3D space. The optimized probing points sense the 3D space
"intelligently", rather than operating blindly over the entire domain. We show
that field probing is significantly more efficient than 3DCNNs, while providing
state-of-the-art performance, on classification tasks for 3D object recognition
benchmark datasets. | [
"Convolutions"
] | [
"3D Object Recognition",
"Object Recognition"
] | [
"Convolution"
] | [
"ModelNet40"
] | [
"Accuracy"
] | FPNN: Field Probing Neural Networks for 3D Data |
Convolutional neural networks have enabled accurate image super-resolution in
real-time. However, recent attempts to benefit from temporal correlations in
video super-resolution have been limited to naive or inefficient architectures.
In this paper, we introduce spatio-temporal sub-pixel convolution networks that
effectively exploit temporal redundancies and improve reconstruction accuracy
while maintaining real-time speed. Specifically, we discuss the use of early
fusion, slow fusion and 3D convolutions for the joint processing of multiple
consecutive video frames. We also propose a novel joint motion compensation and
video super-resolution algorithm that is orders of magnitude more efficient
than competing methods, relying on a fast multi-resolution spatial transformer
module that is end-to-end trainable. These contributions provide both higher
accuracy and temporally more consistent videos, which we confirm qualitatively
and quantitatively. Relative to single-frame models, spatio-temporal networks
can either reduce the computational cost by 30% whilst maintaining the same
quality or provide a 0.2dB gain for a similar computational cost. Results on
publicly available datasets demonstrate that the proposed algorithms surpass
current state-of-the-art performance in both accuracy and efficiency. | [
"Convolutions"
] | [
"Motion Compensation",
"Video Super-Resolution"
] | [
"Convolution"
] | [
"Vid4 - 4x upscaling"
] | [
"SSIM",
"PSNR",
"MOVIE"
] | Real-Time Video Super-Resolution with Spatio-Temporal Networks and Motion Compensation |
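A hedged sketch of the early-fusion, sub-pixel upscaling idea from the abstract above: consecutive frames are concatenated along the channel axis, processed by a small CNN, and the last convolution emits scale² channels that a pixel shuffle rearranges into the high-resolution frame. Layer widths are illustrative, not the paper's exact architecture, and motion compensation is omitted.

```python
# Hedged sketch: early fusion of consecutive frames + sub-pixel (pixel shuffle)
# upscaling. Layer widths are illustrative, not the paper's exact architecture.
import torch
from torch import nn

class EarlyFusionSubPixelSR(nn.Module):
    def __init__(self, n_frames=3, scale=4, channels=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(n_frames * channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True),
            # Emit scale^2 * channels maps, then rearrange them into a larger frame.
            nn.Conv2d(32, channels * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, frames):
        # frames: (batch, n_frames, channels, H, W) -> fuse along the channel axis.
        b, t, c, h, w = frames.shape
        return self.body(frames.reshape(b, t * c, h, w))

if __name__ == "__main__":
    model = EarlyFusionSubPixelSR(n_frames=3, scale=4, channels=1)
    out = model(torch.randn(2, 3, 1, 32, 32))
    print(out.shape)   # torch.Size([2, 1, 128, 128])
```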
Recent approaches to question generation have used modifications to a Seq2Seq architecture inspired by advances in machine translation. Models are trained using teacher forcing to optimise only the one-step-ahead prediction. However, at test time, the model is asked to generate a whole sequence, causing errors to propagate through the generation process (exposure bias). A number of authors have proposed countering this bias by optimising for a reward that is less tightly coupled to the training data, using reinforcement learning. We optimise directly for quality metrics, including a novel approach using a discriminator learned directly from the training data. We confirm that policy gradient methods can be used to decouple training from the ground truth, leading to increases in the metrics used as rewards. We perform a human evaluation, and show that although these metrics have previously been assumed to be good proxies for question quality, they are poorly aligned with human judgement and the model simply learns to exploit the weaknesses of the reward source. | [
"Recurrent Neural Networks",
"Activation Functions",
"Sequence To Sequence Models"
] | [
"Machine Translation",
"Policy Gradient Methods",
"Question Generation"
] | [
"Long Short-Term Memory",
"Tanh Activation",
"Sequence to Sequence",
"LSTM",
"Seq2Seq",
"Sigmoid Activation"
] | [
"SQuAD1.1"
] | [
"BLEU-4"
] | Evaluating Rewards for Question Generation Models |
In this paper, we propose a one-stage online clustering method called Contrastive Clustering (CC) which explicitly performs the instance- and cluster-level contrastive learning. To be specific, for a given dataset, the positive and negative instance pairs are constructed through data augmentations and then projected into a feature space. Therein, the instance- and cluster-level contrastive learning are respectively conducted in the row and column space by maximizing the similarities of positive pairs while minimizing those of negative ones. Our key observation is that the rows of the feature matrix could be regarded as soft labels of instances, and accordingly the columns could be further regarded as cluster representations. By simultaneously optimizing the instance- and cluster-level contrastive loss, the model jointly learns representations and cluster assignments in an end-to-end manner. Extensive experimental results show that CC remarkably outperforms 17 competitive clustering methods on six challenging image benchmarks. In particular, CC achieves an NMI of 0.705 (0.431) on the CIFAR-10 (CIFAR-100) dataset, which is an up to 19\% (39\%) performance improvement compared with the best baseline. | [
"Initialization",
"Convolutional Neural Networks",
"Activation Functions",
"Normalization",
"Convolutions",
"Pooling Operations",
"Skip Connections",
"Skip Connection Blocks"
] | [
"Image Clustering",
"Online Clustering"
] | [
"ResNet",
"Average Pooling",
"Batch Normalization",
"Convolution",
"1x1 Convolution",
"ReLU",
"Residual Network",
"Residual Connection",
"Bottleneck Residual Block",
"Kaiming Initialization",
"Residual Block",
"Global Average Pooling",
"Rectified Linear Units"
] | [
"Imagenet-dog-15",
"CIFAR-100",
"CIFAR-10",
"Tiny-ImageNet",
"ImageNet-10",
"STL-10"
] | [
"Train set",
"Train Split",
"ARI",
"Backbone",
"NMI",
"Accuracy"
] | Contrastive Clustering |
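A hedged sketch of the two losses described above: the same NT-Xent-style objective is applied to the rows of the feature matrix (instance level) and to the columns of the cluster-assignment matrix (cluster level). The temperature and dimensions are placeholders, and the cluster-entropy regularizer commonly used to avoid degenerate assignments is omitted.

```python
# Hedged sketch of instance- and cluster-level contrastive losses: one NT-Xent
# objective on rows of the feature matrix, another on columns of the soft
# cluster-assignment matrix.
import torch
import torch.nn.functional as F

def nt_xent(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.5):
    """Contrastive loss where a[i] and b[i] form the positive pair."""
    n = a.shape[0]
    z = F.normalize(torch.cat([a, b], dim=0), dim=1)           # (2n, d)
    sim = z @ z.t() / temperature                               # (2n, 2n)
    eye = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))                   # drop self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def contrastive_clustering_loss(h_a, h_b, p_a, p_b):
    # h_a, h_b: instance features of two augmented views, shape (batch, feat_dim)
    # p_a, p_b: softmax cluster assignments of the two views, shape (batch, n_clusters)
    instance_loss = nt_xent(h_a, h_b)
    cluster_loss = nt_xent(p_a.t(), p_b.t())   # columns act as cluster representations
    return instance_loss + cluster_loss

if __name__ == "__main__":
    h_a, h_b = torch.randn(8, 128), torch.randn(8, 128)
    logits_a, logits_b = torch.randn(8, 10), torch.randn(8, 10)
    loss = contrastive_clustering_loss(h_a, h_b, logits_a.softmax(1), logits_b.softmax(1))
    print(loss.item())
```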
I am a young person who is curious about image classification.
I have average knowledge and an average computer.
I thought the CIFAR-100 dataset for image classification would be a good challenge for me.
It is a dataset of 32x32 images spanning 100 classes.
I chose ResNet as the model because of the small amount of data and the vanishing gradient problem.
I worked with Google Colab because my computer is not powerful enough (thank you, Google).
Since I used the free version, I could only run ResNet50.
I tried to stick to the original ResNet paper, but I made my own changes.
I tried many hyperparameters.
I found the parameters that gave the best results as quickly as I could.
I'm a young man who likes to push his luck. That is all. | [
"Initialization",
"Convolutional Neural Networks",
"Activation Functions",
"Normalization",
"Convolutions",
"Pooling Operations",
"Skip Connections",
"Skip Connection Blocks"
] | [
"Image Classification"
] | [
"ResNet",
"Average Pooling",
"Residual Block",
"Batch Normalization",
"Convolution",
"1x1 Convolution",
"ReLU",
"Residual Network",
"Residual Connection",
"Bottleneck Residual Block",
"Kaiming Initialization",
"Global Average Pooling",
"Rectified Linear Units",
"Max Pooling"
] | [
"CIFAR-100"
] | [
"Percentage correct"
] | ResNet50_on_Cifar_100_Without_Transfer_Learning |
Image restoration, including image denoising, super resolution, inpainting,
and so on, is a well-studied problem in computer vision and image processing,
as well as a test bed for low-level image modeling algorithms. In this work, we
propose a very deep fully convolutional auto-encoder network for image
restoration, which is an encoding-decoding framework with symmetric
convolutional-deconvolutional layers. In other words, the network is composed
of multiple layers of convolution and de-convolution operators, learning
end-to-end mappings from corrupted images to the original ones. The
convolutional layers capture the abstraction of image contents while
eliminating corruptions. Deconvolutional layers have the capability to upsample
the feature maps and recover the image details. To deal with the problem that
deeper networks tend to be more difficult to train, we propose to symmetrically
link convolutional and deconvolutional layers with skip-layer connections, with
which the training converges much faster and attains better results. | [
"Convolutions"
] | [
"Denoising",
"Image Denoising",
"Image Restoration",
"JPEG Artifact Correction",
"Super-Resolution"
] | [
"Convolution"
] | [
"Set5 - 3x upscaling",
"Set14 - 2x upscaling",
"Set14 - 4x upscaling",
"BSD100 - 2x upscaling",
"Set14 - 3x upscaling",
"BSD100 - 3x upscaling",
"BSD100 - 4x upscaling",
"Live1 (Quality 10 Grayscale)",
"LIVE1 (Quality 20 Grayscale)",
"Set5 - 4x upscaling",
"BSD200 sigma50",
"BSD200 sigma70",
"BSD200 sigma30",
"Set5 - 2x upscaling",
"BSD200 sigma10"
] | [
"SSIM",
"PSNR"
] | Image Restoration Using Convolutional Auto-encoders with Symmetric Skip Connections |
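A minimal sketch of the symmetric skip idea from the abstract above: each deconvolutional layer receives, by addition, the feature map of its mirrored convolutional layer. Depth, widths, and the exact skip spacing are assumptions, not the paper's full configuration.

```python
# Hedged sketch of a convolutional encoder-decoder with symmetric skip connections:
# encoder features are added to the mirrored decoder (deconvolution) features.
import torch
from torch import nn

class SymmetricSkipAutoencoder(nn.Module):
    def __init__(self, channels=3, width=64, depth=4):
        super().__init__()
        enc = [nn.Conv2d(channels, width, 3, padding=1)]
        enc += [nn.Conv2d(width, width, 3, padding=1) for _ in range(depth - 1)]
        dec = [nn.ConvTranspose2d(width, width, 3, padding=1) for _ in range(depth - 1)]
        dec += [nn.ConvTranspose2d(width, channels, 3, padding=1)]
        self.encoders, self.decoders = nn.ModuleList(enc), nn.ModuleList(dec)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        skips, h = [], x
        for conv in self.encoders:
            h = self.act(conv(h))
            skips.append(h)
        for i, deconv in enumerate(self.decoders):
            h = deconv(h)
            # Symmetric skip: add the mirrored encoder feature map to the deconv output.
            mirror = len(self.encoders) - 2 - i
            if mirror >= 0:
                h = h + skips[mirror]
            if i < len(self.decoders) - 1:
                h = self.act(h)
        return h

if __name__ == "__main__":
    net = SymmetricSkipAutoencoder()
    print(net(torch.randn(1, 3, 64, 64)).shape)   # torch.Size([1, 3, 64, 64])
```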
Most deep learning object detectors are based on the anchor mechanism and resort to the Intersection over Union (IoU) between predefined anchor boxes and ground truth boxes to evaluate the matching quality between anchors and objects. In this paper, we question this use of IoU and propose a new anchor matching criterion guided, during the training phase, by the optimization of both the localization and the classification tasks: the predictions related to one task are used to dynamically assign sample anchors and improve the model on the other task, and vice versa. Despite the simplicity of the proposed method, our experiments with different state-of-the-art deep learning architectures on PASCAL VOC and MS COCO datasets demonstrate the effectiveness and generality of our Mutual Guidance strategy. | [
"Object Detection Models"
] | [
"Object Detection"
] | [
"MutualGuide",
"Mutual Guidance"
] | [
"PASCAL VOC 2007"
] | [
"MAP"
] | Localize to Classify and Classify to Localize: Mutual Guidance in Object Detection |
Modern CNN-based object detectors assign anchors for ground-truth objects under the restriction of object-anchor Intersection-over-Union (IoU). In this study, we propose a learning-to-match approach to break the IoU restriction, allowing objects to match anchors in a flexible manner. Our approach, referred to as FreeAnchor, updates hand-crafted anchor assignment to "free" anchor matching by formulating detector training as a maximum likelihood estimation (MLE) procedure. FreeAnchor aims to learn the features that best explain a class of objects in terms of both classification and localization. FreeAnchor is implemented by optimizing a detection-customized likelihood and can be fused with CNN-based detectors in a plug-and-play manner. Experiments on COCO demonstrate that FreeAnchor consistently outperforms its counterparts by significant margins. | [
"Anchor Supervision",
"Initialization",
"Proposal Filtering",
"Learning Rate Schedules",
"Stochastic Optimization",
"Convolutional Neural Networks",
"Activation Functions",
"Loss Functions",
"Normalization",
"Convolutions",
"Pooling Operations",
"Skip Connections",
"Skip Connection Blocks"
] | [
"Object Detection"
] | [
"Average Pooling",
"1x1 Convolution",
"ResNet",
"Convolution",
"ReLU",
"Residual Connection",
"Grouped Convolution",
"Focal Loss",
"Non Maximum Suppression",
"Batch Normalization",
"Residual Network",
"Kaiming Initialization",
"SGD",
"Step Decay",
"Stochastic Gradient Descent",
"ResNeXt Block",
"ResNeXt",
"FreeAnchor",
"Bottleneck Residual Block",
"Residual Block",
"Global Average Pooling",
"Rectified Linear Units",
"Max Pooling"
] | [
"COCO test-dev"
] | [
"APM",
"box AP",
"AP75",
"APS",
"APL",
"AP50"
] | FreeAnchor: Learning to Match Anchors for Visual Object Detection |
This paper introduces a negative margin loss to metric learning based few-shot learning methods. The negative margin loss significantly outperforms regular softmax loss, and achieves state-of-the-art accuracy on three standard few-shot classification benchmarks with few bells and whistles. These results are contrary to the common practice in the metric learning field, that the margin is zero or positive. To understand why the negative margin loss performs well for the few-shot classification, we analyze the discriminability of learned features w.r.t different margins for training and novel classes, both empirically and theoretically. We find that although negative margin reduces the feature discriminability for training classes, it may also avoid falsely mapping samples of the same novel class to multiple peaks or clusters, and thus benefit the discrimination of novel classes. Code is available at https://github.com/bl0/negative-margin.few-shot. | [
"Output Functions"
] | [
"Few-Shot Image Classification",
"Few-Shot Learning",
"Metric Learning"
] | [
"Softmax"
] | [
"Mini-ImageNet - 1-Shot Learning",
"CUB 200 5-way 1-shot",
"Mini-ImageNet to CUB - 5 shot learning",
"CUB 200 5-way 5-shot"
] | [
"Accuracy"
] | Negative Margin Matters: Understanding Margin in Few-shot Classification |
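A hedged sketch of a margin softmax loss in which the margin on the ground-truth cosine logit is allowed to be negative, as discussed above; the scale and margin values are illustrative, and `class_weights` stands in for the classifier's per-class weight vectors.

```python
# Hedged sketch of a cosine-softmax loss with a (possibly negative) margin applied
# to the ground-truth class logit. Scale and margin values are illustrative.
import torch
import torch.nn.functional as F

def margin_softmax_loss(features, class_weights, labels, margin=-0.3, scale=10.0):
    # Cosine similarity between L2-normalized features and class weight vectors.
    logits = F.normalize(features, dim=1) @ F.normalize(class_weights, dim=1).t()
    # Subtract the margin from the ground-truth logit only; a negative margin
    # *adds* to it, relaxing inter-class discriminability on the base classes.
    onehot = F.one_hot(labels, num_classes=class_weights.shape[0]).float()
    logits = logits - margin * onehot
    return F.cross_entropy(scale * logits, labels)

if __name__ == "__main__":
    feats = torch.randn(16, 64)
    weights = torch.randn(100, 64)          # one weight vector per training class
    labels = torch.randint(0, 100, (16,))
    print(margin_softmax_loss(feats, weights, labels).item())
```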
Over the past decade, multivariate time series classification has received great attention. We propose transforming the existing univariate time series classification models, the Long Short Term Memory Fully Convolutional Network (LSTM-FCN) and Attention LSTM-FCN (ALSTM-FCN), into a multivariate time series classification model by augmenting the fully convolutional block with a squeeze-and-excitation block to further improve accuracy. Our proposed models outperform most state-of-the-art models while requiring minimum preprocessing. The proposed models work efficiently on various complex multivariate time series classification tasks such as activity recognition or action recognition. Furthermore, the proposed models are highly efficient at test time and small enough to deploy on memory constrained systems. | [
"Activation Functions",
"Convolutions",
"Feedforward Networks",
"Pooling Operations",
"Image Model Blocks"
] | [
"Action Recognition",
"Activity Recognition",
"Temporal Action Localization",
"Time Series",
"Time Series Classification"
] | [
"Average Pooling",
"Convolution",
"ReLU",
"Squeeze-and-Excitation Block",
"Dense Connections",
"Rectified Linear Units",
"Sigmoid Activation"
] | [
"ECG",
"DigitShapes",
"CharacterTrajectories",
"Shapes",
"UWave",
"KickvsPunch",
"AUSLAN",
"LP1",
"PenDigits",
"JapaneseVowels",
"Wafer",
"NetFlow",
"LP4",
"ArabicDigits",
"LP2",
"Libras",
"LP3",
"CMUsubject16",
"LP5",
"WalkvsRun"
] | [
"Accuracy"
] | Multivariate LSTM-FCNs for Time Series Classification |
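A minimal sketch of the squeeze-and-excitation block that the abstract above adds to the fully convolutional branch, written for 1D (time-series) feature maps; the reduction ratio and shapes are assumptions.

```python
# Hedged sketch of a 1D squeeze-and-excitation block used to augment a fully
# convolutional branch for multivariate time series. Reduction ratio is illustrative.
import torch
from torch import nn

class SqueezeExcite1d(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        # x: (batch, channels, time). Squeeze: global average over time.
        scale = self.fc(x.mean(dim=2))          # (batch, channels)
        # Excite: per-channel rescaling of the convolutional feature maps.
        return x * scale.unsqueeze(-1)

if __name__ == "__main__":
    block = SqueezeExcite1d(channels=128)
    out = block(torch.randn(4, 128, 640))
    print(out.shape)    # torch.Size([4, 128, 640])
```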
Convolutional layers are one of the basic building blocks of modern deep neural networks. One fundamental assumption is that convolutional kernels should be shared for all examples in a dataset. We propose conditionally parameterized convolutions (CondConv), which learn specialized convolutional kernels for each example. Replacing normal convolutions with CondConv enables us to increase the size and capacity of a network, while maintaining efficient inference. We demonstrate that scaling networks with CondConv improves the performance and inference cost trade-off of several existing convolutional neural network architectures on both classification and detection tasks. On ImageNet classification, our CondConv approach applied to EfficientNet-B0 achieves state-of-the-art performance of 78.3% accuracy with only 413M multiply-adds. Code and checkpoints for the CondConv Tensorflow layer and CondConv-EfficientNet models are available at: https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet/condconv. | [
"Proposal Filtering",
"Convolutional Neural Networks",
"Normalization",
"Regularization",
"Activation Functions",
"Convolutions",
"Pooling Operations",
"Object Detection Models",
"Image Models",
"Stochastic Optimization",
"Recurrent Neural Networks",
"Feedforward Networks",
"Skip Connection Blocks",
"Image Data Augmentation",
"Initialization",
"Output Functions",
"Learning Rate Schedules",
"Skip Connections",
"Image Model Blocks"
] | [
"Image Classification",
"Object Detection"
] | [
"Depthwise Convolution",
"Cosine Annealing",
"Average Pooling",
"EfficientNet",
"RMSProp",
"Long Short-Term Memory",
"Mixup",
"Tanh Activation",
"MnasNet",
"1x1 Convolution",
"ResNet",
"MobileNetV2",
"AutoAugment",
"SSD",
"Convolution",
"ReLU",
"Residual Connection",
"Linear Layer",
"Dense Connections",
"MobileNetV1",
"Swish",
"Non Maximum Suppression",
"Batch Normalization",
"Residual Network",
"Squeeze-and-Excitation Block",
"Pointwise Convolution",
"Kaiming Initialization",
"Sigmoid Activation",
"Shake-Shake Regularization",
"CondConv",
"Inverted Residual Block",
"Softmax",
"Linear Warmup With Cosine Annealing",
"Bottleneck Residual Block",
"Dropout",
"Depthwise Separable Convolution",
"LSTM",
"Residual Block",
"Global Average Pooling",
"Rectified Linear Units",
"Max Pooling"
] | [
"ImageNet"
] | [
"Top 1 Accuracy"
] | CondConv: Conditionally Parameterized Convolutions for Efficient Inference |
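A hedged sketch of a conditionally parameterized convolution as described above: per-example routing weights, obtained from globally pooled features, mix a small bank of expert kernels, and the mixed kernel is applied to that example. The per-example loop is for clarity; an efficient implementation would batch it as a grouped convolution. Expert count and initialization are illustrative.

```python
# Hedged sketch of a conditionally parameterized convolution: per-example routing
# weights mix a small bank of expert kernels.
import torch
import torch.nn.functional as F
from torch import nn

class CondConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, num_experts=4):
        super().__init__()
        self.experts = nn.Parameter(
            torch.randn(num_experts, out_ch, in_ch, kernel_size, kernel_size) * 0.02)
        self.routing = nn.Linear(in_ch, num_experts)
        self.padding = kernel_size // 2

    def forward(self, x):
        # Routing weights from globally average-pooled features, one set per example.
        r = torch.sigmoid(self.routing(x.mean(dim=(2, 3))))      # (B, num_experts)
        outputs = []
        for i in range(x.shape[0]):
            # Mix the expert kernels for this example, then convolve.
            weight = (r[i][:, None, None, None, None] * self.experts).sum(dim=0)
            outputs.append(F.conv2d(x[i:i + 1], weight, padding=self.padding))
        return torch.cat(outputs, dim=0)

if __name__ == "__main__":
    layer = CondConv2d(16, 32)
    print(layer(torch.randn(2, 16, 8, 8)).shape)   # torch.Size([2, 32, 8, 8])
```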
Semantic image segmentation plays a pivotal role in many vision applications, including autonomous driving and medical image analysis. Most previous approaches focus on enhancing accuracy with little regard for computational efficiency. In this paper, we introduce LiteSeg, a lightweight architecture for semantic image segmentation. We explore a new, deeper version of the Atrous Spatial Pyramid Pooling module (ASPP) and apply short and long residual connections as well as depthwise separable convolutions, resulting in a faster and more efficient model. The LiteSeg architecture is introduced and tested with multiple backbone networks, such as Darknet19, MobileNet, and ShuffleNet, to provide multiple trade-offs between accuracy and computational cost. The proposed model, with MobileNetV2 as a backbone network, achieves an accuracy of 67.81% mean intersection over union at 161 frames per second at $640 \times 360$ resolution on the Cityscapes dataset. | [
"Semantic Segmentation Models",
"Regularization",
"Convolutional Neural Networks",
"Learning Rate Schedules",
"Stochastic Optimization",
"Semantic Segmentation Modules",
"Activation Functions",
"Output Functions",
"Normalization",
"Convolutions",
"Feedforward Networks",
"Pooling Operations",
"Skip Connection Blocks",
"Skip Connections",
"Image Model Blocks",
"Image Models",
"Miscellaneous Components"
] | [
"Real-Time Semantic Segmentation",
"Semantic Segmentation"
] | [
"Depthwise Convolution",
"Weight Decay",
"ShuffleNet",
"Dilated Convolution",
"DASPP",
"Darknet-19",
"Average Pooling",
"Polynomial Rate Decay",
"Channel Shuffle",
"1x1 Convolution",
"LiteSeg",
"Nesterov Accelerated Gradient",
"MobileNetV2",
"Deeper Atrous Spatial Pyramid Pooling",
"Convolution",
"ReLU",
"Residual Connection",
"Groupwise Point Convolution",
"ShuffleNet Block",
"Dense Connections",
"Grouped Convolution",
"Batch Normalization",
"Pointwise Convolution",
"Inverted Residual Block",
"Softmax",
"Depthwise Separable Convolution",
"Global Average Pooling",
"Rectified Linear Units",
"Max Pooling",
"Spatial Pyramid Pooling"
] | [
"Cityscapes val",
"Cityscapes test"
] | [
"GFlops",
"Category mIoU",
"Mean IoU (class)",
"mIoU"
] | LiteSeg: A Novel Lightweight ConvNet for Semantic Segmentation |
Convolutional Neural Network (CNN) based image segmentation has made great progress in recent years. However, video object segmentation remains a challenging task due to its high computational complexity. Most of the previous methods employ a two-stream CNN framework to handle spatial and motion features separately. In this paper, we propose an end-to-end encoder-decoder style 3D CNN to aggregate spatial and temporal information simultaneously for video object segmentation. To efficiently process video, we propose 3D separable convolution for the pyramid pooling module and decoder, which dramatically reduces the number of operations while maintaining the performance. Moreover, we also extend our framework to video action segmentation by adding an extra classifier to predict the action label for actors in videos. Extensive experiments on several video datasets demonstrate the superior performance of the proposed approach for action and object segmentation compared to the state-of-the-art. | [
"Semantic Segmentation Modules",
"Activation Functions",
"Normalization",
"Convolutions",
"Pooling Operations"
] | [
"Action Segmentation",
"Semantic Segmentation",
"Semi-Supervised Video Object Segmentation",
"Video Object Segmentation",
"Video Semantic Segmentation",
"Visual Object Tracking"
] | [
"Average Pooling",
"Batch Normalization",
"Convolution",
"ReLU",
"Rectified Linear Units",
"Pyramid Pooling Module"
] | [
"DAVIS 2016"
] | [
"F-measure (Decay)",
"Jaccard (Mean)",
"F-measure (Recall)",
"Jaccard (Decay)",
"Jaccard (Recall)",
"F-measure (Mean)",
"J&F"
] | An Efficient 3D CNN for Action/Object Segmentation in Video |
Feature pyramids are a basic component in recognition systems for detecting
objects at different scales. But recent deep learning object detectors have
avoided pyramid representations, in part because they are compute and memory
intensive. In this paper, we exploit the inherent multi-scale, pyramidal
hierarchy of deep convolutional networks to construct feature pyramids with
marginal extra cost. A top-down architecture with lateral connections is
developed for building high-level semantic feature maps at all scales. This
architecture, called a Feature Pyramid Network (FPN), shows significant
improvement as a generic feature extractor in several applications. Using FPN
in a basic Faster R-CNN system, our method achieves state-of-the-art
single-model results on the COCO detection benchmark without bells and
whistles, surpassing all existing single-model entries including those from the
COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU
and thus is a practical and accurate solution to multi-scale object detection.
Code will be made publicly available. | [
"Object Detection Models",
"Output Functions",
"Feature Extractors",
"RoI Feature Extractors",
"Convolutions",
"Region Proposal"
] | [
"Object Detection"
] | [
"RPN",
"Faster R-CNN",
"Softmax",
"Feature Pyramid Network",
"Convolution",
"RoIPool",
"1x1 Convolution",
"FPN",
"Region Proposal Network"
] | [
"COCO minival",
"COCO test-dev"
] | [
"APM",
"box AP",
"AP75",
"APS",
"APL",
"AP50"
] | Feature Pyramid Networks for Object Detection |
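A minimal sketch of the top-down pathway with lateral connections described above: 1x1 convolutions project each backbone stage to a common width, coarser levels are upsampled and added to finer ones, and a 3x3 convolution smooths each merged map. The backbone features below are random placeholders.

```python
# Hedged sketch of an FPN top-down pathway with lateral 1x1 connections and 3x3
# output smoothing. Backbone feature maps are random placeholders here.
import torch
import torch.nn.functional as F
from torch import nn

class SimpleFPN(nn.Module):
    def __init__(self, in_channels=(256, 512, 1024, 2048), out_channels=256):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in in_channels)
        self.smooth = nn.ModuleList(
            nn.Conv2d(out_channels, out_channels, 3, padding=1) for _ in in_channels)

    def forward(self, feats):
        # feats: backbone maps ordered fine -> coarse, e.g. [C2, C3, C4, C5].
        laterals = [l(f) for l, f in zip(self.lateral, feats)]
        # Top-down: upsample each coarser level and add it to the lateral below.
        for i in range(len(laterals) - 2, -1, -1):
            laterals[i] = laterals[i] + F.interpolate(
                laterals[i + 1], size=laterals[i].shape[-2:], mode="nearest")
        return [s(l) for s, l in zip(self.smooth, laterals)]   # [P2, P3, P4, P5]

if __name__ == "__main__":
    c2, c3, c4, c5 = (torch.randn(1, c, s, s) for c, s in
                      zip((256, 512, 1024, 2048), (64, 32, 16, 8)))
    for p in SimpleFPN()([c2, c3, c4, c5]):
        print(p.shape)
```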
Differentiable Neural Architecture Search (DNAS) has demonstrated great success in designing state-of-the-art, efficient neural networks. However, DARTS-based DNAS's search space is small when compared to other search methods', since all candidate network layers must be explicitly instantiated in memory. To address this bottleneck, we propose a memory and computationally efficient DNAS variant: DMaskingNAS. This algorithm expands the search space by up to $10^{14}\times$ over conventional DNAS, supporting searches over spatial and channel dimensions that are otherwise prohibitively expensive: input resolution and number of filters. We propose a masking mechanism for feature map reuse, so that memory and computational costs stay nearly constant as the search space expands. Furthermore, we employ effective shape propagation to maximize per-FLOP or per-parameter accuracy. The searched FBNetV2s yield state-of-the-art performance when compared with all previous architectures. With up to 421$\times$ less search cost, DMaskingNAS finds models with 0.9% higher accuracy, 15% fewer FLOPs than MobileNetV3-Small; and with similar accuracy but 20% fewer FLOPs than EfficientNet-B0. Furthermore, our FBNetV2 outperforms MobileNetV3 by 2.6% in accuracy, with equivalent model size. FBNetV2 models are open-sourced at https://github.com/facebookresearch/mobile-vision. | [
"Distributions",
"Neural Architecture Search"
] | [
"Neural Architecture Search"
] | [
"Gumbel Softmax",
"Differentiable Neural Architecture Search",
"DNAS"
] | [
"ImageNet"
] | [
"Top-1 Error Rate",
"MACs",
"Accuracy"
] | FBNetV2: Differentiable Neural Architecture Search for Spatial and Channel Dimensions |
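A hedged sketch of the channel-masking idea summarized in the abstract above, as I read it: a single full-width convolution is computed once, and a Gumbel-softmax over architecture logits mixes binary masks that keep the first k output channels, so memory stays nearly constant across candidate widths. The candidate widths and layer shape are illustrative, not the paper's search space.

```python
# Hedged sketch of differentiable channel-count search via masking: one full-width
# feature map is reused, and a Gumbel-softmax over architecture logits mixes binary
# masks that keep the first k channels.
import torch
import torch.nn.functional as F
from torch import nn

class ChannelMaskingConv(nn.Module):
    def __init__(self, in_ch, max_out_ch=32, candidates=(8, 16, 24, 32)):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, max_out_ch, 3, padding=1)
        self.arch_logits = nn.Parameter(torch.zeros(len(candidates)))
        masks = torch.zeros(len(candidates), max_out_ch)
        for i, k in enumerate(candidates):
            masks[i, :k] = 1.0                      # keep the first k channels
        self.register_buffer("masks", masks)

    def forward(self, x, tau=1.0):
        y = self.conv(x)                            # full-width map, computed once
        g = F.gumbel_softmax(self.arch_logits, tau=tau)       # (n_candidates,)
        mask = (g.unsqueeze(1) * self.masks).sum(0)           # expected channel mask
        return y * mask.view(1, -1, 1, 1)

if __name__ == "__main__":
    layer = ChannelMaskingConv(in_ch=16)
    print(layer(torch.randn(2, 16, 8, 8)).shape)    # torch.Size([2, 32, 8, 8])
```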
Video Question Answering (QA) is an important task in understanding video
temporal structure. We observe that there are three unique attributes of video
QA compared with image QA: (1) it deals with long sequences of images
containing richer information not only in quantity but also in variety; (2)
motion and appearance information are usually correlated with each other and
able to provide useful attention cues to the other; (3) different questions
require different numbers of frames to infer the answer. Based on these
observations, we propose a motion-appearance co-memory network for video QA. Our
networks are built on concepts from the Dynamic Memory Network (DMN) and introduce
new mechanisms for video QA. Specifically, there are three salient aspects: (1)
a co-memory attention mechanism that utilizes cues from both motion and
appearance to generate attention; (2) a temporal conv-deconv network to
generate multi-level contextual facts; (3) a dynamic fact ensemble method to
construct temporal representation dynamically for different questions. We
evaluate our method on the TGIF-QA dataset, and the results significantly
outperform the state of the art on all four tasks of TGIF-QA. | [
"Recurrent Neural Networks",
"Working Memory Models",
"Output Functions"
] | [
"Question Answering",
"Video Question Answering",
"Visual Question Answering"
] | [
"Gated Recurrent Unit",
"Softmax",
"Memory Network",
"GRU",
"Dynamic Memory Network"
] | [
"MSRVTT-QA",
"MSVD-QA"
] | [
"Accuracy"
] | Motion-Appearance Co-Memory Networks for Video Question Answering |
We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters. | [
"Regularization",
"Output Functions",
"Learning Rate Schedules",
"Stochastic Optimization",
"Convolutional Neural Networks",
"Activation Functions",
"Convolutions",
"Feedforward Networks",
"Pooling Operations",
"Skip Connections",
"Image Model Blocks"
] | [
"Image Classification"
] | [
"Weight Decay",
"Depthwise Convolution",
"Average Pooling",
"RMSProp",
"1x1 Convolution",
"Convolution",
"ReLU",
"Residual Connection",
"Dense Connections",
"Inception Module",
"Pointwise Convolution",
"Step Decay",
"SGD with Momentum",
"Softmax",
"Xception",
"Dropout",
"Depthwise Separable Convolution",
"Global Average Pooling",
"Rectified Linear Units",
"Max Pooling"
] | [
"ImageNet"
] | [
"Number of params",
"Top 5 Accuracy",
"Top 1 Accuracy"
] | Xception: Deep Learning with Depthwise Separable Convolutions |
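A minimal sketch of the depthwise separable convolution the abstract above builds on: a per-channel spatial (depthwise) convolution followed by a 1x1 pointwise convolution, with far fewer parameters than a regular convolution of the same shape.

```python
# Hedged sketch of a depthwise separable convolution: a per-channel spatial
# convolution (groups = in_channels) followed by a 1x1 pointwise convolution.
import torch
from torch import nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

if __name__ == "__main__":
    block = DepthwiseSeparableConv(32, 64)
    print(block(torch.randn(1, 32, 56, 56)).shape)     # torch.Size([1, 64, 56, 56])
    # Parameter count vs. a regular 3x3 conv: (32*9 + 32*64) vs. 32*64*9.
    print(sum(p.numel() for p in block.parameters()), 32 * 64 * 9)
```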
Recent work in language modeling demonstrates that training large transformer models advances the state of the art in Natural Language Processing applications. However, very large models can be quite difficult to train due to memory constraints. In this work, we present our techniques for training very large transformer models and implement a simple, efficient intra-layer model parallel approach that enables training transformer models with billions of parameters. Our approach does not require a new compiler or library changes, is orthogonal and complementary to pipeline model parallelism, and can be fully implemented with the insertion of a few communication operations in native PyTorch. We illustrate this approach by converging transformer based models up to 8.3 billion parameters using 512 GPUs. We sustain 15.1 PetaFLOPs across the entire application with 76% scaling efficiency when compared to a strong single GPU baseline that sustains 39 TeraFLOPs, which is 30% of peak FLOPs. To demonstrate that large language models can further advance the state of the art (SOTA), we train an 8.3 billion parameter transformer language model similar to GPT-2 and a 3.9 billion parameter model similar to BERT. We show that careful attention to the placement of layer normalization in BERT-like models is critical to achieving increased performance as the model size grows. Using the GPT-2 model we achieve SOTA results on the WikiText103 (10.8 compared to SOTA perplexity of 15.8) and LAMBADA (66.5% compared to SOTA accuracy of 63.2%) datasets. Our BERT model achieves SOTA results on the RACE dataset (90.9% compared to SOTA accuracy of 89.4%). | [
"Regularization",
"Output Functions",
"Learning Rate Schedules",
"Stochastic Optimization",
"Attention Modules",
"Activation Functions",
"Subword Segmentation",
"Normalization",
"Language Models",
"Attention Mechanisms",
"Feedforward Networks",
"Transformers",
"Fine-Tuning",
"Skip Connections"
] | [
"Language Modelling",
"Reading Comprehension"
] | [
"Weight Decay",
"Cosine Annealing",
"Adam",
"Scaled Dot-Product Attention",
"Gaussian Linear Error Units",
"Transformer",
"ReLU",
"Residual Connection",
"Dense Connections",
"Layer Normalization",
"Discriminative Fine-Tuning",
"Label Smoothing",
"GELU",
"GPT-2",
"WordPiece",
"Byte Pair Encoding",
"BPE",
"Softmax",
"Multi-Head Attention",
"Attention Dropout",
"Linear Warmup With Cosine Annealing",
"Linear Warmup With Linear Decay",
"Dropout",
"BERT",
"Rectified Linear Units"
] | [
"WikiText-103",
"RACE"
] | [
"Number of params",
"Accuracy (Middle)",
"Test perplexity",
"Accuracy",
"Accuracy (High)"
] | Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism |
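A hedged sketch of the intra-layer (tensor) parallelism for a transformer MLP block as summarized above: the first weight matrix is split column-wise and the second row-wise, so each partition applies GeLU independently and only the partial outputs need to be summed (the `sum` below stands in for the all-reduce communication op). Shapes are toy.

```python
# Hedged sketch of intra-layer (tensor) model parallelism for a transformer MLP:
# split A column-wise and B row-wise, compute GeLU(X A_i) B_i on each partition,
# and sum the partial results (the sum stands in for an all-reduce across GPUs).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d_model, d_ff, n_parts = 8, 16, 2
X = torch.randn(4, d_model)
A = torch.randn(d_model, d_ff)
B = torch.randn(d_ff, d_model)

# Reference: the unpartitioned MLP block.
reference = F.gelu(X @ A) @ B

# Partitioned: A split along columns, B split along rows.
A_parts = A.chunk(n_parts, dim=1)
B_parts = B.chunk(n_parts, dim=0)
partials = [F.gelu(X @ A_i) @ B_i for A_i, B_i in zip(A_parts, B_parts)]
parallel = sum(partials)            # one all-reduce in a real multi-GPU setting

print(torch.allclose(reference, parallel, atol=1e-5))   # True
```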
Text-to-SQL is the problem of converting a user question into an SQL query, when the question and database are given. In this paper, we present a neural network approach called RYANSQL (Recursively Yielding Annotation Network for SQL) to solve complex Text-to-SQL tasks for cross-domain databases. Statement Position Code (SPC) is defined to transform a nested SQL query into a set of non-nested SELECT statements; a sketch-based slot filling approach is proposed to synthesize each SELECT statement for its corresponding SPC. Additionally, two input manipulation methods are presented to improve generation performance further. RYANSQL achieved 58.2% accuracy on the challenging Spider benchmark, which is a 3.2%p improvement over previous state-of-the-art approaches. At the time of writing, RYANSQL achieves the first position on the Spider leaderboard. | [
"Regularization",
"Output Functions",
"Stochastic Optimization",
"Learning Rate Schedules",
"Attention Modules",
"Activation Functions",
"Subword Segmentation",
"Normalization",
"Language Models",
"Feedforward Networks",
"Attention Mechanisms",
"Skip Connections"
] | [
"Slot Filling",
"Text-To-Sql"
] | [
"Weight Decay",
"Exponential Decay",
"WordPiece",
"Layer Normalization",
"Softmax",
"Adam",
"Multi-Head Attention",
"Attention Dropout",
"Residual Connection",
"Scaled Dot-Product Attention",
"Dropout",
"BERT",
"GELU",
"Dense Connections",
"Gaussian Linear Error Units"
] | [
"spider"
] | [
"Accuracy (Test)",
"Accuracy (Dev)"
] | RYANSQL: Recursively Applying Sketch-based Slot Fillings for Complex Text-to-SQL in Cross-Domain Databases |
In this article, we tackle the issue of the limited quantity of manually sense annotated corpora for the task of word sense disambiguation, by exploiting the semantic relationships between senses such as synonymy, hypernymy and hyponymy, in order to compress the sense vocabulary of Princeton WordNet, and thus reduce the number of different sense tags that must be observed to disambiguate all words of the lexical database. We propose two different methods that greatly reduce the size of neural WSD models, with the benefit of improving their coverage without additional training data, and without impacting their precision. In addition to our method, we present a WSD system which relies on pre-trained BERT word vectors in order to achieve results that significantly outperform the state of the art on all WSD evaluation tasks. | [
"Regularization",
"Output Functions",
"Learning Rate Schedules",
"Stochastic Optimization",
"Attention Modules",
"Activation Functions",
"Subword Segmentation",
"Normalization",
"Language Models",
"Feedforward Networks",
"Attention Mechanisms",
"Skip Connections"
] | [
"Word Sense Disambiguation"
] | [
"Weight Decay",
"WordPiece",
"Layer Normalization",
"Softmax",
"Adam",
"Multi-Head Attention",
"Attention Dropout",
"Linear Warmup With Linear Decay",
"Residual Connection",
"Scaled Dot-Product Attention",
"Dropout",
"BERT",
"GELU",
"Dense Connections",
"Gaussian Linear Error Units"
] | [
"Supervised:",
"SensEval 3 Task 1",
"SemEval 2013 Task 12",
"SemEval 2007 Task 17",
"SemEval 2015 Task 13",
"SemEval 2007 Task 7",
"SensEval 2"
] | [
"Senseval 2",
"Senseval 3",
"SemEval 2013",
"F1",
"SemEval 2007",
"SemEval 2015"
] | Sense Vocabulary Compression through the Semantic Knowledge of WordNet for Neural Word Sense Disambiguation |
Objective: The aim of this study is to develop an efficient and reliable
epileptic seizure prediction system using intracranial EEG (iEEG) data,
especially for people with drug-resistant epilepsy. The prediction procedure
should yield accurate results in a fast enough fashion to alert patients of
impending seizures. Methods: We quantitatively analyze the human iEEG data to
obtain insights into how the human brain behaves before and between epileptic
seizures. We then introduce an efficient pre-processing method for reducing the
data size and converting the time-series iEEG data into an image-like format
that can be used as inputs to convolutional neural networks (CNNs). Further, we
propose a seizure prediction algorithm that uses cooperative multi-scale CNNs
for automatic feature learning of iEEG data. Results: 1) iEEG channels contain
complementary information, and excluding individual channels is not advisable, as
it discards spatial information needed for accurate prediction of epileptic
seizures. 2) The traditional PCA is not a reliable method for iEEG data
reduction in seizure prediction. 3) Hand-crafted iEEG features may not be
suitable for reliable seizure prediction performance as the iEEG data varies
between patients and over time for the same patient. 4) Seizure prediction
results show that our algorithm outperforms existing methods by achieving an
average sensitivity of 87.85% and AUC score of 0.84. Conclusion: Understanding
how the human brain behaves before seizure attacks and far from them
facilitates better designs of epileptic seizure predictors. Significance:
Accurate seizure prediction algorithms can warn patients about the next seizure
attack so they could avoid dangerous activities. Medications could then be
administered to abort the impending seizure and minimize the risk of injury. | [
"Dimensionality Reduction"
] | [
"EEG",
"Seizure prediction",
"Time Series"
] | [
"Principal Components Analysis",
"PCA"
] | [
"Melbourne University Seizure Prediction"
] | [
"AUC"
] | Human Intracranial EEG Quantitative Analysis and Automatic Feature Learning for Epileptic Seizure Prediction |
Graph neural networks (GNNs) have achieved lots of success on graph-structured data. In the light of this, there has been increasing interest in studying their representation power. One line of work focuses on the universal approximation of permutation-invariant functions by certain classes of GNNs, and another demonstrates the limitation of GNNs via graph isomorphism tests. Our work connects these two perspectives and proves their equivalence. We further develop a framework of the representation power of GNNs with the language of sigma-algebra, which incorporates both viewpoints. Using this framework, we compare the expressive power of different classes of GNNs as well as other methods on graphs. In particular, we prove that order-2 Graph G-invariant networks fail to distinguish non-isomorphic regular graphs with the same degree. We then extend them to a new architecture, Ring-GNNs, which succeeds on distinguishing these graphs and provides improvements on real-world social network datasets. | [
"Graph Embeddings"
] | [
"Graph Regression"
] | [
"LINE",
"Large-scale Information Network Embedding"
] | [
"ZINC-500k"
] | [
"MAE"
] | On the equivalence between graph isomorphism testing and function approximation with GNNs |
Neural embedding-based machine learning models have shown promise for predicting novel links in biomedical knowledge graphs. Unfortunately, their practical utility is diminished by their lack of interpretability. Recently, the fully interpretable, rule-based algorithm AnyBURL yielded highly competitive results on many general-purpose link prediction benchmarks. However, its applicability to large-scale prediction tasks on complex biomedical knowledge bases is limited by long inference times and difficulties with aggregating predictions made by multiple rules. We improve upon AnyBURL by introducing the SAFRAN rule application framework which aggregates rules through a scalable clustering algorithm. SAFRAN yields new state-of-the-art results for fully interpretable link prediction on the established general-purpose benchmark FB15K-237 and the large-scale biomedical benchmark OpenBioLink. Furthermore, it exceeds the results of multiple established embedding-based algorithms on FB15K-237 and narrows the gap between rule-based and embedding-based algorithms on OpenBioLink. We also show that SAFRAN increases inference speeds by up to two orders of magnitude. | [
"Rule-based systems"
] | [
"Knowledge Graphs",
"Link Prediction"
] | [
"SAFRAN - Scalable and fast non-redundant rule application",
"Symbolic rule learning",
"SAFRAN"
] | [
"OpenBioLink",
"FB15k-237"
] | [
"Hits@10",
"Hits@3",
"Hits@1"
] | Scalable and interpretable rule-based link prediction for large heterogeneous knowledge graphs |
We introduce a new model, the Recurrent Entity Network (EntNet). It is
equipped with a dynamic long-term memory which allows it to maintain and update
a representation of the state of the world as it receives new data. For
language understanding tasks, it can reason on-the-fly as it reads text, not
just when it is required to answer a question or respond as is the case for a
Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or
Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed
size memory and can learn to perform location and content-based read and write
operations. However, unlike those models it has a simple parallel architecture
in which several memory locations can be updated simultaneously. The EntNet
sets a new state-of-the-art on the bAbI tasks, and is the first method to solve
all the tasks in the 10k training examples setting. We also demonstrate that it
can solve a reasoning task which requires a large number of supporting facts,
which other methods are not able to solve, and can generalize past its training
horizon. It can also be practically used on large-scale datasets such as the
Children's Book Test, where it obtains competitive performance, reading the
story in a single pass. | [
"Output Functions",
"Recurrent Neural Networks",
"Activation Functions",
"Working Memory Models",
"Attention Mechanisms"
] | [
"Question Answering"
] | [
"Recurrent Entity Network",
"Softmax",
"Long Short-Term Memory",
"Neural Turing Machine",
"Tanh Activation",
"Content-based Attention",
"LSTM",
"Location-based Attention",
"Sigmoid Activation"
] | [
"bAbi"
] | [
"Accuracy (trained on 1k)",
"Mean Error Rate",
"Accuracy (trained on 10k)"
] | Tracking the World State with Recurrent Entity Networks |
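The parallel gated memory update described in the abstract above can be illustrated with a minimal NumPy sketch. The gating, candidate, and normalization steps follow the general EntNet recipe as I recall it, but the tiny dimensions, the random parameters, and the names `H`, `W`, `U`, `V`, `Wp` are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def entnet_update(H, W, s, U, V, Wp, eps=1e-8):
    """One parallel memory update over all slots (a simplified EntNet-style step).

    H : (m, d) current memory slot states
    W : (m, d) slot keys
    s : (d,)   encoded input (e.g., the current sentence)
    U, V, Wp : (d, d) learned projection matrices (random here)
    """
    # Gate: how strongly each slot should absorb the new input.
    g = 1.0 / (1.0 + np.exp(-(H @ s + W @ s)))            # (m,)
    # Candidate content for every slot, computed in parallel.
    cand = np.tanh(H @ U.T + W @ V.T + s @ Wp.T)           # (m, d)
    # Gated additive update, then re-normalize each slot.
    H_new = H + g[:, None] * cand
    H_new /= (np.linalg.norm(H_new, axis=1, keepdims=True) + eps)
    return H_new

m, d = 5, 8
rng = np.random.default_rng(0)
H = rng.normal(size=(m, d)); W = rng.normal(size=(m, d))
s = rng.normal(size=d)
U, V, Wp = (rng.normal(size=(d, d)) for _ in range(3))
print(entnet_update(H, W, s, U, V, Wp).shape)  # (5, 8)
```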
Weakly supervised learning has emerged as a compelling tool for object detection by reducing the need for strong supervision during training. However, major challenges remain: (1) differentiation of object instances can be ambiguous; (2) detectors tend to focus on discriminative parts rather than entire objects; (3) without ground truth, object proposals have to be redundant for high recall, causing significant memory consumption. Addressing these challenges is difficult, as it often requires eliminating uncertainties and trivial solutions. To target these issues, we develop an instance-aware and context-focused unified framework. It employs an instance-aware self-training algorithm and a learnable Concrete DropBlock while devising a memory-efficient sequential batch back-propagation. Our proposed method achieves state-of-the-art results on COCO ($12.1\% ~AP$, $24.8\% ~AP_{50}$), VOC 2007 ($54.9\% ~AP$), and VOC 2012 ($52.1\% ~AP$), improving over baselines by large margins. In addition, the proposed method is the first to benchmark ResNet based models and weakly supervised video object detection. Code, models, and more details will be made available at: https://github.com/NVlabs/wetectron. | [
"Initialization",
"Regularization",
"Convolutional Neural Networks",
"Activation Functions",
"Normalization",
"Convolutions",
"Pooling Operations",
"Skip Connections",
"Skip Connection Blocks"
] | [
"Object Detection",
"Video Object Detection",
"Weakly Supervised Object Detection"
] | [
"ResNet",
"DropBlock",
"Average Pooling",
"Batch Normalization",
"Convolution",
"1x1 Convolution",
"ReLU",
"Residual Network",
"Residual Connection",
"Bottleneck Residual Block",
"Kaiming Initialization",
"Residual Block",
"Global Average Pooling",
"Rectified Linear Units",
"Max Pooling"
] | [
"PASCAL VOC 2012 test",
"PASCAL VOC 2007",
"COCO test-dev"
] | [
"AP50",
"MAP"
] | Instance-aware, Context-focused, and Memory-efficient Weakly Supervised Object Detection |
As the most successful variant and improvement for Trust Region Policy
Optimization (TRPO), proximal policy optimization (PPO) has been widely applied
across various domains with several advantages: efficient data utilization,
easy implementation, and good parallelism. In this paper, a first-order
gradient reinforcement learning algorithm called Policy Optimization with
Penalized Point Probability Distance (POP3D), in which the penalized point
probability distance is a lower bound on the square of the total variance
divergence, is proposed as another powerful variant. First, we discuss the
shortcomings of several commonly used algorithms, which partly motivate our
method. Second, we show how POP3D overcomes these shortcomings. Third, we
examine its mechanism from the perspective of the solution manifold. Finally,
we make quantitative comparisons among several state-of-the-art algorithms on
common benchmarks. Simulation results show that POP3D is highly competitive
with PPO. In addition, our code is released at
https://github.com/paperwithcode/pop3d. | [
"Policy Gradient Methods",
"Regularization"
] | [
"Atari Games"
] | [
"PPO",
"Proximal Policy Optimization",
"Entropy Regularization"
] | [
"Atari 2600 Amidar",
"Atari 2600 River Raid",
"Atari 2600 Beam Rider",
"Atari 2600 Video Pinball",
"Atari 2600 Demon Attack",
"Atari 2600 Enduro",
"Atari 2600 Alien",
"Atari 2600 Boxing",
"Atari 2600 Pitfall!",
"Atari 2600 Bank Heist",
"Atari 2600 Tutankham",
"Atari 2600 Time Pilot",
"Atari 2600 Space Invaders",
"Atari 2600 Assault",
"Atari 2600 Gravitar",
"Atari 2600 Ice Hockey",
"Atari 2600 Bowling",
"Atari 2600 Private Eye",
"Atari 2600 Asterix",
"Atari 2600 Breakout",
"Atari 2600 Name This Game",
"Atari 2600 Crazy Climber",
"Atari 2600 Pong",
"Atari 2600 Krull",
"Atari 2600 Freeway",
"Atari 2600 James Bond",
"Atari 2600 Robotank",
"Atari 2600 Kangaroo",
"Atari 2600 Venture",
"Atari 2600 Asteroids",
"Atari 2600 Fishing Derby",
"Atari 2600 Ms. Pacman",
"Atari 2600 Seaquest",
"Atari 2600 Tennis",
"Atari 2600 Zaxxon",
"Atari 2600 Frostbite",
"Atari 2600 Star Gunner",
"Atari 2600 Double Dunk",
"Atari 2600 Battle Zone",
"Atari 2600 Gopher",
"Atari 2600 Road Runner",
"Atari 2600 Atlantis",
"Atari 2600 Kung-Fu Master",
"Atari 2600 Chopper Command",
"Atari 2600 Up and Down",
"Atari 2600 Montezuma's Revenge",
"Atari 2600 Wizard of Wor",
"Atari 2600 Q*Bert",
"Atari 2600 Centipede"
] | [
"Score"
] | Policy Optimization With Penalized Point Probability Distance: An Alternative To Proximal Policy Optimization |
We introduce a deep and light-weight transformer, DeLighT, that delivers similar or better performance than standard transformer-based models with significantly fewer parameters. DeLighT more efficiently allocates parameters both (1) within each Transformer block using the DeLighT transformation, a deep and light-weight transformation, and (2) across blocks using block-wise scaling, which allows for shallower and narrower DeLighT blocks near the input and wider and deeper DeLighT blocks near the output. Overall, DeLighT networks are 2.5 to 4 times deeper than standard transformer models and yet have fewer parameters and operations. Experiments on benchmark machine translation and language modeling tasks show that DeLighT matches or improves the performance of baseline Transformers with 2 to 3 times fewer parameters on average. Our source code is available at: \url{https://github.com/sacmehta/delight} | [
"Regularization",
"Output Functions",
"Attention Modules",
"Stochastic Optimization",
"Feedforward Networks",
"Transformers"
] | [
"Language Modelling",
"Machine Translation"
] | [
"DeLighT Block",
"Feedforward Network",
"Adaptive Softmax",
"Adam",
"DeLighT",
"Label Smoothing",
"DExTra"
] | [
"WMT2016 English-German",
"WMT2016 English-Romanian",
"WMT2016 English-French",
"WikiText-103",
"IWSLT2014 German-English"
] | [
"Number of params",
"BLEU score",
"Test perplexity"
] | DeLighT: Deep and Light-weight Transformer |
Natural language is hierarchically structured: smaller units (e.g., phrases) are nested within larger units (e.g., clauses). When a larger constituent ends, all of the smaller constituents that are nested within it must also be closed. While the standard LSTM architecture allows different neurons to track information at different time scales, it does not have an explicit bias towards modeling a hierarchy of constituents. This paper proposes to add such an inductive bias by ordering the neurons; a vector of master input and forget gates ensures that when a given neuron is updated, all the neurons that follow it in the ordering are also updated. Our novel recurrent architecture, ordered neurons LSTM (ON-LSTM), achieves good performance on four different tasks: language modeling, unsupervised parsing, targeted syntactic evaluation, and logical inference. | [
"Recurrent Neural Networks",
"Activation Functions"
] | [
"Constituency Grammar Induction",
"Language Modelling"
] | [
"Tanh Activation",
"Long Short-Term Memory",
"LSTM",
"Sigmoid Activation"
] | [
"PTB"
] | [
"Max F1 (WSJ10)",
"Mean F1 (WSJ10)",
"Max F1 (WSJ)",
"Mean F1 (WSJ)"
] | Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks |
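The ordering mechanism mentioned above rests on a cumulative softmax ("cumax") that yields monotone master gates. Below is a minimal NumPy sketch of that gate alone; how the master gates combine with the standard LSTM gates inside the full ON-LSTM cell is omitted, and the dimensions and random pre-activations are illustrative assumptions.

```python
import numpy as np

def cumax(logits):
    """Cumulative softmax: a soft, monotonically increasing gate in [0, 1]."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return np.cumsum(p)

# Illustrative pre-activations for a hidden state of 8 ordered neurons.
rng = np.random.default_rng(0)
zf, zi = rng.normal(size=8), rng.normal(size=8)

master_forget = cumax(zf)        # rises from ~0 to 1: low-index neurons forget first
master_input  = 1.0 - cumax(zi)  # falls from ~1 to 0: low-index neurons take new input
overlap = master_forget * master_input  # region where old and new information mix
print(np.round(master_forget, 2), np.round(master_input, 2), np.round(overlap, 2))
```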
Existing deep convolutional neural networks (CNNs) require a fixed-size
(e.g., 224x224) input image. This requirement is "artificial" and may reduce
the recognition accuracy for the images or sub-images of an arbitrary
size/scale. In this work, we equip the networks with another pooling strategy,
"spatial pyramid pooling", to eliminate the above requirement. The new network
structure, called SPP-net, can generate a fixed-length representation
regardless of image size/scale. Pyramid pooling is also robust to object
deformations. With these advantages, SPP-net should in general improve all
CNN-based image classification methods. On the ImageNet 2012 dataset, we
demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures
despite their different designs. On the Pascal VOC 2007 and Caltech101
datasets, SPP-net achieves state-of-the-art classification results using a
single full-image representation and no fine-tuning.
The power of SPP-net is also significant in object detection. Using SPP-net,
we compute the feature maps from the entire image only once, and then pool
features in arbitrary regions (sub-images) to generate fixed-length
representations for training the detectors. This method avoids repeatedly
computing the convolutional features. In processing test images, our method is
24-102x faster than the R-CNN method, while achieving better or comparable
accuracy on Pascal VOC 2007.
In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our
methods rank #2 in object detection and #3 in image classification among all 38
teams. This manuscript also introduces the improvement made for this
competition. | [
"Object Detection Models",
"Image Data Augmentation",
"Regularization",
"Convolutional Neural Networks",
"Output Functions",
"Stochastic Optimization",
"Learning Rate Schedules",
"Activation Functions",
"Normalization",
"Convolutions",
"Feedforward Networks",
"Pooling Operations",
"Non-Parametric Classification"
] | [
"Image Classification",
"Object Detection",
"Object Recognition"
] | [
"1x1 Convolution",
"SPP-Net",
"ZFNet",
"Random Horizontal Flip",
"Support Vector Machine",
"Convolution",
"ReLU",
"Dense Connections",
"Max Pooling",
"Grouped Convolution",
"Random Resized Crop",
"R-CNN",
"SVM",
"Rectified Linear Units",
"AlexNet",
"Local Contrast Normalization",
"SGD",
"Step Decay",
"Stochastic Gradient Descent",
"Softmax",
"Dropout",
"Local Response Normalization",
"OverFeat",
"Spatial Pyramid Pooling"
] | [
"PASCAL VOC 2007",
"ImageNet"
] | [
"Top 5 Accuracy",
"Top 1 Accuracy",
"MAP"
] | Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition |
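The fixed-length representation described above comes from pooling an arbitrary-sized feature map over a pyramid of grids. A minimal NumPy sketch, assuming max pooling and a 1/2/4-level pyramid; the channel count and input size below are arbitrary choices, not the paper's configuration.

```python
import numpy as np

def spatial_pyramid_pool(feat, levels=(1, 2, 4)):
    """Max-pool a (C, H, W) feature map into a fixed-length vector.

    For each pyramid level n the map is split into an n x n grid of bins
    and max-pooled, so the output length is C * sum(n*n) regardless of H, W.
    """
    C, H, W = feat.shape
    out = []
    for n in levels:
        # Bin boundaries that cover the whole map even when H, W are not divisible by n.
        hs = np.linspace(0, H, n + 1).astype(int)
        ws = np.linspace(0, W, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                out.append(feat[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]].max(axis=(1, 2)))
    return np.concatenate(out)

x = np.random.rand(256, 13, 17)        # arbitrary spatial size
print(spatial_pyramid_pool(x).shape)   # (256 * (1 + 4 + 16),) = (5376,)
```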
Recurrent sequence generators conditioned on input data through an attention
mechanism have recently shown very good performance on a range of tasks
including machine translation, handwriting synthesis and image caption
generation. We extend the attention mechanism with features needed for speech
recognition. We show that while an adaptation of the model used for machine
translation reaches a competitive 18.7% phoneme error rate (PER) on the TIMIT
phoneme recognition task, it can only be applied to utterances which are
roughly as long as the ones it was trained on. We offer a qualitative
explanation of this failure and propose a novel and generic method of adding
location-awareness to the attention mechanism to alleviate this issue. The new
method yields a model that is robust to long inputs and achieves 18% PER on
single utterances and 20% on 10-times longer (repeated) utterances. Finally,
we propose a change to the attention mechanism that prevents it from
concentrating too much on single frames, which further reduces PER to the
17.6% level. | [
"Activation Functions",
"Attention Mechanisms"
] | [
"Machine Translation",
"Speech Recognition"
] | [
"Tanh Activation",
"Additive Attention",
"Location Sensitive Attention"
] | [
"TIMIT"
] | [
"Percentage error"
] | Attention-Based Models for Speech Recognition |
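The location-awareness proposed above augments content-based scoring with features extracted from the previous alignment. The sketch below follows that general scheme (convolve the previous attention weights, feed the result into the scoring network), but all shapes, parameter names, and the single filter bank are illustrative assumptions, and the sharpening/smoothing variants discussed in the abstract are omitted.

```python
import numpy as np

def location_sensitive_scores(s, H, prev_align, W, V, U, w, F, b):
    """Attention weights that also look at the previous alignment.

    s          : (ds,)   current decoder state
    H          : (T, dh) encoder states
    prev_align : (T,)    attention weights from the previous step
    F          : (dk, k) 1-D convolution filters applied to prev_align
    """
    k = F.shape[1]
    pad = np.pad(prev_align, (k // 2, k // 2))
    # f[i] encodes where the model attended around position i at the last step.
    f = np.stack([F @ pad[i:i + k] for i in range(H.shape[0])])    # (T, dk)
    e = np.tanh(s @ W.T + H @ V.T + f @ U.T + b) @ w                # (T,)
    a = np.exp(e - e.max())
    return a / a.sum()

T, ds, dh, dk, da, k = 6, 4, 5, 3, 7, 3
rng = np.random.default_rng(1)
scores = location_sensitive_scores(
    rng.normal(size=ds), rng.normal(size=(T, dh)), np.full(T, 1.0 / T),
    rng.normal(size=(da, ds)), rng.normal(size=(da, dh)),
    rng.normal(size=(da, dk)), rng.normal(size=da),
    rng.normal(size=(dk, k)), rng.normal(size=da))
print(scores.round(3), scores.sum())
```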
In the past few years, triplet loss-based metric embeddings have become a de-facto standard for several important computer vision problems, most notably person re-identification. On the other hand, in the area of speech recognition the metric embeddings generated by the triplet loss are rarely used, even for classification problems. We fill this gap by showing that a combination of two representation learning techniques, a triplet loss-based embedding and a variant of kNN used for classification instead of a cross-entropy loss, significantly (by 26% to 38%) improves the classification accuracy for convolutional networks on LibriSpeech-derived LibriWords datasets. To do so, we propose a novel phonetic-similarity-based triplet mining approach. We also match the current best published SOTA for Google Speech Commands dataset V2 10+2-class classification with an architecture that is about 6 times more compact, and improve the current best published SOTA for 35-class classification on Google Speech Commands dataset V2 by over 40%. | [
"Loss Functions"
] | [
"Keyword Spotting",
"Representation Learning",
"Speech Recognition"
] | [
"Triplet Loss"
] | [
"Google Speech Commands"
] | [
"Google Speech Commands V2 35",
"Google Speech Commands V2 12"
] | Learning Efficient Representations for Keyword Spotting with Triplet Loss |
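For reference, the triplet loss used as the starting point above is the standard margin formulation; the paper's phonetic-similarity-based mining and the kNN classification head are not shown. A minimal NumPy sketch on random embeddings, with the margin value chosen arbitrarily:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard margin-based triplet loss over a batch of embeddings.

    Pulls anchor-positive pairs together and pushes anchor-negative pairs
    at least `margin` further apart (squared Euclidean distances here).
    """
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return np.mean(np.maximum(d_pos - d_neg + margin, 0.0))

rng = np.random.default_rng(0)
a, p, n = (rng.normal(size=(16, 64)) for _ in range(3))
print(triplet_loss(a, p, n))
```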
Deep learning based image-to-image translation methods aim at learning the joint distribution of the two domains and finding transformations between them. Although recent GAN (Generative Adversarial Network) based methods have shown compelling visual results, they are prone to failing at preserving image-objects and maintaining translation consistency when faced with large and complex domain shifts, which reduces their practicality on tasks such as generating large-scale training data for different domains. To address this problem, we propose a weakly supervised structure-aware image-to-image translation network, which is composed of encoders, generators, discriminators and parsing nets for the two domains, respectively, in a unified framework. The proposed network generates more visually plausible images of a different domain compared to the competing methods on different image-translation tasks. In addition, we quantitatively evaluate different methods by training Faster-RCNN and YOLO with datasets generated from the image-translation results and demonstrate significant improvements in detection accuracy with the proposed image-object-preserving network. | [
"Generative Models",
"Convolutions"
] | [
"Data Augmentation",
"Domain Adaptation",
"Image-to-Image Translation",
"Link Prediction"
] | [
"Generative Adversarial Network",
"GAN",
"Convolution"
] | [
"WN18"
] | [
"Hits@10"
] | AugGAN: Cross Domain Adaptation with GAN-based Data Augmentation |
Existing image classification datasets used in computer vision tend to have a
uniform distribution of images across object categories. In contrast, the
natural world is heavily imbalanced, as some species are more abundant and
easier to photograph than others. To encourage further progress in challenging
real world conditions we present the iNaturalist species classification and
detection dataset, consisting of 859,000 images from over 5,000 different
species of plants and animals. It features visually similar species, captured
in a wide variety of situations, from all over the world. Images were collected
with different camera types, have varying image quality, feature a large class
imbalance, and have been verified by multiple citizen scientists. We discuss
the collection of the dataset and present extensive baseline experiments using
state-of-the-art computer vision classification and detection models. Results
show that current non-ensemble based methods achieve only 67% top one
classification accuracy, illustrating the difficulty of the dataset.
Specifically, we observe poor results for classes with small numbers of
training examples suggesting more attention is needed in low-shot learning. | [
"Initialization",
"Convolutional Neural Networks",
"Activation Functions",
"Normalization",
"Convolutions",
"Pooling Operations",
"Skip Connections",
"Skip Connection Blocks"
] | [
"Image Classification"
] | [
"ResNet",
"Average Pooling",
"Batch Normalization",
"Convolution",
"1x1 Convolution",
"ReLU",
"Residual Network",
"Residual Connection",
"Bottleneck Residual Block",
"Kaiming Initialization",
"Residual Block",
"Global Average Pooling",
"Rectified Linear Units",
"Max Pooling"
] | [
"iNaturalist 2018",
"iNaturalist"
] | [
"Top-1 Accuracy",
"Top 5 Accuracy",
"Top 1 Accuracy"
] | The iNaturalist Species Classification and Detection Dataset |
We present Noisy Student Training, a semi-supervised learning approach that works well even when labeled data is abundant. Noisy Student Training achieves 88.4% top-1 accuracy on ImageNet, which is 2.0% better than the state-of-the-art model that requires 3.5B weakly labeled Instagram images. On robustness test sets, it improves ImageNet-A top-1 accuracy from 61.0% to 83.7%, reduces ImageNet-C mean corruption error from 45.7 to 28.3, and reduces ImageNet-P mean flip rate from 27.8 to 12.2. Noisy Student Training extends the idea of self-training and distillation with the use of equal-or-larger student models and noise added to the student during learning. On ImageNet, we first train an EfficientNet model on labeled images and use it as a teacher to generate pseudo labels for 300M unlabeled images. We then train a larger EfficientNet as a student model on the combination of labeled and pseudo labeled images. We iterate this process by putting back the student as the teacher. During the learning of the student, we inject noise such as dropout, stochastic depth, and data augmentation via RandAugment to the student so that the student generalizes better than the teacher. Models are available at https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet. Code is available at https://github.com/google-research/noisystudent. | [
"Image Data Augmentation",
"Image Scaling Strategies",
"Semi-Supervised Learning Methods",
"Regularization",
"Learning Rate Schedules",
"Stochastic Optimization",
"Activation Functions",
"Normalization",
"Convolutions",
"Feedforward Networks",
"Pooling Operations",
"Image Model Blocks",
"Image Models",
"Skip Connection Blocks"
] | [
"Data Augmentation",
"Image Classification"
] | [
"Depthwise Convolution",
"Noisy Student",
"Average Pooling",
"EfficientNet",
"RMSProp",
"RandAugment",
"1x1 Convolution",
"Convolution",
"ReLU",
"Dense Connections",
"Swish",
"FixRes",
"Batch Normalization",
"Squeeze-and-Excitation Block",
"Pointwise Convolution",
"Step Decay",
"Sigmoid Activation",
"Inverted Residual Block",
"Dropout",
"Depthwise Separable Convolution",
"Stochastic Depth",
"Rectified Linear Units"
] | [
"ImageNet ReaL",
"ImageNet"
] | [
"Number of params",
"Top 1 Accuracy",
"Params",
"Accuracy",
"Top 5 Accuracy"
] | Self-training with Noisy Student improves ImageNet classification |
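The abstract above describes an iterated teacher-student loop. The sketch below is a deliberately tiny scikit-learn stand-in for that loop: EfficientNets, soft pseudo labels, data balancing, and the specific noise types (RandAugment, dropout, stochastic depth) are replaced by toy substitutes, so it only illustrates the control flow, not the method's scale or results.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy data standing in for labeled / unlabeled image sets.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_lab, y_lab, X_unlab = X[:200], y[:200], X[200:]

teacher = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
for _ in range(3):  # iterate: the student becomes the next teacher
    pseudo = teacher.predict(X_unlab)                       # hard pseudo labels
    X_all = np.vstack([X_lab, X_unlab])
    y_all = np.concatenate([y_lab, pseudo])
    # "Noise" the student's inputs (a crude stand-in for input/model noise).
    X_noisy = X_all + np.random.default_rng(0).normal(scale=0.1, size=X_all.shape)
    # A "larger" student is imitated here by a less-regularized model.
    student = LogisticRegression(max_iter=1000, C=10.0).fit(X_noisy, y_all)
    teacher = student

print(student.score(X_lab, y_lab))
```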
Objects are composed of a set of geometrically organized parts. We introduce an unsupervised capsule autoencoder (SCAE), which explicitly uses geometric relationships between parts to reason about objects. Since these relationships do not depend on the viewpoint, our model is robust to viewpoint changes. SCAE consists of two stages. In the first stage, the model predicts presences and poses of part templates directly from the image and tries to reconstruct the image by appropriately arranging the templates. In the second stage, SCAE predicts parameters of a few object capsules, which are then used to reconstruct part poses. Inference in this model is amortized and performed by off-the-shelf neural encoders, unlike in previous capsule networks. We find that object capsule presences are highly informative of the object class, which leads to state-of-the-art results for unsupervised classification on SVHN (55%) and MNIST (98.7%). The code is available at https://github.com/google-research/google-research/tree/master/stacked_capsule_autoencoders | [
"Generative Models"
] | [
"Cross-Modal Retrieval",
"Unsupervised MNIST"
] | [
"AutoEncoder"
] | [
"MNIST"
] | [
"Accuracy"
] | Stacked Capsule Autoencoders |
This paper proposes a new method to measure the relative importance of features in Artificial Neural Networks (ANN) models. Its underlying principle assumes that the more important a feature is, the more the weights, connected to the respective input neuron, will change during the training of the model. To capture this behavior, a running variance of every weight connected to the input layer is measured during training. For that, an adaptation of Welford’s online algorithm for computing the online variance is proposed. When the training is finished, for each input, the variances of the weights are combined with the final weights to obtain the measure of relative importance for each feature. This method was tested with shallow and deep neural network architectures on several well-known classification and regression problems. The results obtained confirm that this approach is making meaningful measurements. Moreover, results showed that the importance scores are highly correlated with the variable importance method from Random Forests (RF). | [
"Feedforward Networks"
] | [
"Feature Importance",
"Regression"
] | [
"VarImpVIANN",
"Variance-based Feature Importance of Artificial Neural Networks"
] | [
"Breastcancer",
"Iris",
"Diabetes",
"Digits",
"Wine",
"boston"
] | [
"Pearson Correlation"
] | Variance-Based Feature Importance in Neural Networks |
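The running variance of input-layer weights mentioned above can be tracked with Welford's online algorithm. A minimal NumPy sketch; the exact way the proposed method combines the final weights with these variances is not reproduced here, and summing variances per input feature at the end is an illustrative simplification.

```python
import numpy as np

class RunningVariance:
    """Welford's online algorithm, tracking one variance per input-layer weight."""

    def __init__(self, shape):
        self.n = 0
        self.mean = np.zeros(shape)
        self.m2 = np.zeros(shape)

    def update(self, weights):
        self.n += 1
        delta = weights - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (weights - self.mean)

    @property
    def variance(self):
        return self.m2 / max(self.n - 1, 1)

# Pretend these are snapshots of the first-layer weights taken after each batch.
rng = np.random.default_rng(0)
tracker = RunningVariance(shape=(4, 8))        # 4 input features, 8 hidden units
for step in range(100):
    tracker.update(rng.normal(scale=1 + step * 0.01, size=(4, 8)))

# One importance score per input feature: aggregate weight variances across hidden units.
importance = tracker.variance.sum(axis=1)
print(importance.round(2))
```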
One of the most difficult speech recognition tasks is accurate recognition of
human to human communication. Advances in deep learning over the last few years
have produced major speech recognition improvements on the representative
Switchboard conversational corpus. Word error rates that just a few years ago
were 14% have dropped to 8.0%, then 6.6% and most recently 5.8%, and are now
believed to be within striking range of human performance. This then raises two
issues - what IS human performance, and how far down can we still drive speech
recognition error rates? A recent paper by Microsoft suggests that we have
already achieved human performance. In trying to verify this statement, we
performed an independent set of human performance measurements on two
conversational tasks and found that human performance may be considerably
better than what was earlier reported, giving the community a significantly
harder goal to achieve. We also report on our own efforts in this area,
presenting a set of acoustic and language modeling techniques that lowered the
word error rate of our own English conversational telephone LVCSR system to the
level of 5.5%/10.3% on the Switchboard/CallHome subsets of the Hub5 2000
evaluation, which - at least at the writing of this paper - is a new
performance milestone (albeit not at what we measure to be human performance!).
On the acoustic side, we use a score fusion of three models: one LSTM with
multiple feature inputs, a second LSTM trained with speaker-adversarial
multi-task learning and a third residual net (ResNet) with 25 convolutional
layers and time-dilated convolutions. On the language modeling side, we use
word and character LSTMs and convolutional WaveNet-style language models. | [
"Initialization",
"Convolutional Neural Networks",
"Recurrent Neural Networks",
"Activation Functions",
"Normalization",
"Convolutions",
"Pooling Operations",
"Skip Connections",
"Skip Connection Blocks"
] | [
"Language Modelling",
"Large Vocabulary Continuous Speech Recognition",
"Multi-Task Learning",
"Speech Recognition"
] | [
"ResNet",
"Average Pooling",
"Long Short-Term Memory",
"Max Pooling",
"Batch Normalization",
"Tanh Activation",
"1x1 Convolution",
"ReLU",
"Convolution",
"Residual Connection",
"Bottleneck Residual Block",
"Residual Network",
"LSTM",
"Kaiming Initialization",
"Residual Block",
"Global Average Pooling",
"Rectified Linear Units",
"Sigmoid Activation"
] | [
"Switchboard + Hub500",
"swb_hub_500 WER fullSWBCH"
] | [
"Percentage error"
] | English Conversational Telephone Speech Recognition by Humans and Machines |
Conversational question answering (CQA) is a novel QA task that requires
understanding of dialogue context. Different from traditional single-turn
machine reading comprehension (MRC) tasks, CQA includes passage comprehension,
coreference resolution, and contextual understanding. In this paper, we propose
an innovative contextualized attention-based deep neural network, SDNet, to fuse
context into traditional MRC models. Our model leverages both inter-attention
and self-attention to comprehend conversation context and extract relevant
information from the passage. Furthermore, we demonstrate a novel method to
integrate the latest BERT contextual model. Empirical results show the
effectiveness of our model, which sets a new state-of-the-art result on the
CoQA leaderboard, outperforming the previous best model by 1.6% F1. Our ensemble
model further improves the result by 2.7% F1. | [
"Regularization",
"Output Functions",
"Learning Rate Schedules",
"Stochastic Optimization",
"Attention Modules",
"Activation Functions",
"Subword Segmentation",
"Normalization",
"Language Models",
"Feedforward Networks",
"Attention Mechanisms",
"Skip Connections"
] | [
"Coreference Resolution",
"Machine Reading Comprehension",
"Question Answering",
"Reading Comprehension"
] | [
"Weight Decay",
"WordPiece",
"Layer Normalization",
"Softmax",
"Adam",
"Multi-Head Attention",
"Attention Dropout",
"Linear Warmup With Linear Decay",
"Residual Connection",
"Scaled Dot-Product Attention",
"Dropout",
"BERT",
"GELU",
"Dense Connections",
"Gaussian Linear Error Units"
] | [
"CoQA"
] | [
"Overall",
"Out-of-domain",
"In-domain"
] | SDNet: Contextualized Attention-based Deep Network for Conversational Question Answering |
In this paper, we present a novel approach for human action recognition with histograms of 3D joint locations (HOJ3D) as a compact representation of postures. We extract the 3D skeletal joint locations from Kinect depth maps using Shotton et al.'s method [6]. The HOJ3D features computed from the action depth sequences are reprojected using LDA and then clustered into k posture visual words, which represent the prototypical poses of actions. The temporal evolutions of those visual words are modeled by discrete hidden Markov models (HMMs). In addition, due to the design of our spherical coordinate system and the robust 3D skeleton estimation from Kinect, our method demonstrates significant view invariance on our 3D action dataset. Our dataset is composed of 200 3D sequences of 10 indoor activities performed by 10 individuals in varied views. Our method is real-time and achieves superior results on this challenging 3D action dataset. We also tested our algorithm on the MSR Action 3D dataset, where it outperforms Li et al. [25] in most cases. | [
"Dimensionality Reduction"
] | [
"Action Recognition",
"Skeleton Based Action Recognition",
"Temporal Action Localization"
] | [
"Linear Discriminant Analysis",
"LDA"
] | [
"UWA3D"
] | [
"Accuracy"
] | View invariant human action recognition using histograms of 3D joints |
Electrocardiograms (ECGs) are widely used to clinically detect cardiac arrhythmias (CAs). They are also being used to develop computer-assisted methods for heart disease diagnosis. We have developed a convolutional neural network model to detect and classify CAs, using a large 12-lead ECG dataset (6,877 recordings) provided by the China Physiological Signal Challenge (CPSC) 2018. Our model, which was ranked first in the challenge competition, achieved a median overall F1-score of 0.84 for the nine-type CA classification of CPSC2018's hidden test set of 2,954 ECG recordings. Further analysis showed that concurrent CAs were adequately predictive for 476 patients with multiple types of CA diagnoses in the dataset. Using only single-lead data yielded a performance that was only slightly worse than using the full 12-lead data, with leads aVR and V1 being the most prominent. We extensively consider these results in the context of their agreement with and relevance to clinical observations. | [
"Convolutions"
] | [
"Arrhythmia Detection"
] | [
"Convolution"
] | [
"The China Physiological Signal Challenge 2018"
] | [
"F1 (Hidden Test Set)"
] | Detection and Classification of Cardiac Arrhythmias by a Challenge-Best Deep Learning Neural Network Model |
We propose a method for multi-person detection and 2-D pose estimation that
achieves state-of-the-art results on the challenging COCO keypoints task. It is a
simple, yet powerful, top-down approach consisting of two stages.
In the first stage, we predict the location and scale of boxes which are
likely to contain people; for this we use the Faster RCNN detector. In the
second stage, we estimate the keypoints of the person potentially contained in
each proposed bounding box. For each keypoint type we predict dense heatmaps
and offsets using a fully convolutional ResNet. To combine these outputs we
introduce a novel aggregation procedure to obtain highly localized keypoint
predictions. We also use a novel form of keypoint-based Non-Maximum-Suppression
(NMS), instead of the cruder box-level NMS, and a novel form of keypoint-based
confidence score estimation, instead of box-level scoring.
Trained on COCO data alone, our final system achieves average precision of
0.649 on the COCO test-dev set and 0.643 on the test-standard set, outperforming
the winner of the 2016 COCO keypoints challenge and other recent state-of-the-art methods.
Further, by using additional in-house labeled data we obtain an even higher
average precision of 0.685 on the test-dev set and 0.673 on the test-standard
set, more than 5% absolute improvement compared to the previous best performing
method on the same dataset. | [
"Initialization",
"Convolutional Neural Networks",
"Activation Functions",
"Normalization",
"Convolutions",
"Pooling Operations",
"Skip Connections",
"Skip Connection Blocks"
] | [
"Human Detection",
"Keypoint Detection",
"Multi-Person Pose Estimation",
"Pose Estimation"
] | [
"ResNet",
"Average Pooling",
"Residual Block",
"Batch Normalization",
"Convolution",
"1x1 Convolution",
"ReLU",
"Residual Network",
"Residual Connection",
"Bottleneck Residual Block",
"Kaiming Initialization",
"Global Average Pooling",
"Rectified Linear Units",
"Max Pooling"
] | [
"COCO test-challenge",
"COCO",
"COCO test-dev"
] | [
"ARM",
"APM",
"AR75",
"AR50",
"ARL",
"AP75",
"AP",
"APL",
"AP50",
"AR"
] | Towards Accurate Multi-person Pose Estimation in the Wild |
Recently, segmentation-based methods are quite popular in scene text detection, as the segmentation results can more accurately describe scene text of various shapes such as curve text. However, the post-processing of binarization is essential for segmentation-based detection, which converts probability maps produced by a segmentation method into bounding boxes/regions of text. In this paper, we propose a module named Differentiable Binarization (DB), which can perform the binarization process in a segmentation network. Optimized along with a DB module, a segmentation network can adaptively set the thresholds for binarization, which not only simplifies the post-processing but also enhances the performance of text detection. Based on a simple segmentation network, we validate the performance improvements of DB on five benchmark datasets, which consistently achieves state-of-the-art results, in terms of both detection accuracy and speed. In particular, with a light-weight backbone, the performance improvements by DB are significant so that we can look for an ideal tradeoff between detection accuracy and efficiency. Specifically, with a backbone of ResNet-18, our detector achieves an F-measure of 82.8, running at 62 FPS, on the MSRA-TD500 dataset. Code is available at: https://github.com/MhLiao/DB | [
"Initialization",
"Convolutional Neural Networks",
"Activation Functions",
"Normalization",
"Convolutions",
"Pooling Operations",
"Skip Connections",
"Skip Connection Blocks"
] | [
"Binarization",
"Scene Text",
"Scene Text Detection"
] | [
"ResNet",
"Average Pooling",
"Batch Normalization",
"Convolution",
"1x1 Convolution",
"ReLU",
"Residual Network",
"Residual Connection",
"Bottleneck Residual Block",
"Kaiming Initialization",
"Residual Block",
"Global Average Pooling",
"Rectified Linear Units",
"Max Pooling"
] | [
"MSRA-TD500",
"ICDAR 2015",
"SCUT-CTW1500",
"Total-Text"
] | [
"F-Measure",
"Recall",
"Precision"
] | Real-time Scene Text Detection with Differentiable Binarization |
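The approximate, differentiable binarization described above replaces the hard step (probability greater than threshold) with a steep sigmoid of their difference, so the threshold map can receive gradients. A minimal NumPy sketch; the steepness factor k and the toy maps below are illustrative values, not the paper's training configuration.

```python
import numpy as np

def differentiable_binarization(prob_map, thresh_map, k=50.0):
    """Soft, differentiable approximation of per-pixel binarization.

    Instead of the hard step (prob > thresh), a steep sigmoid of the
    difference is used, so the threshold map can be learned end to end.
    """
    return 1.0 / (1.0 + np.exp(-k * (prob_map - thresh_map)))

P = np.array([[0.2, 0.6], [0.9, 0.4]])   # probability map
T = np.full_like(P, 0.5)                 # threshold map (normally predicted)
print(differentiable_binarization(P, T).round(3))
```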
Deploying convolutional neural networks (CNNs) on embedded devices is difficult due to the limited memory and computation resources. The redundancy in feature maps is an important characteristic of successful CNNs, but it has rarely been investigated in neural architecture design. This paper proposes a novel Ghost module to generate more feature maps from cheap operations. Based on a set of intrinsic feature maps, we apply a series of linear transformations with cheap cost to generate many ghost feature maps that fully reveal the information underlying the intrinsic features. The proposed Ghost module can be taken as a plug-and-play component to upgrade existing convolutional neural networks. Ghost bottlenecks are designed to stack Ghost modules, and then the lightweight GhostNet can be easily established. Experiments conducted on benchmarks demonstrate that the proposed Ghost module is an impressive alternative to convolution layers in baseline models, and our GhostNet can achieve higher recognition performance (e.g. $75.7\%$ top-1 accuracy) than MobileNetV3 with similar computational cost on the ImageNet ILSVRC-2012 classification dataset. Code is available at https://github.com/huawei-noah/ghostnet | [
"Convolutional Neural Networks",
"Feature Extractors",
"Normalization",
"Regularization",
"Activation Functions",
"Convolutions",
"Pooling Operations",
"Object Detection Models",
"Region Proposal",
"Stochastic Optimization",
"Loss Functions",
"Feedforward Networks",
"Skip Connection Blocks",
"Image Data Augmentation",
"Initialization",
"Output Functions",
"RoI Feature Extractors",
"Skip Connections",
"Image Model Blocks"
] | [
"Image Classification"
] | [
"Depthwise Convolution",
"Weight Decay",
"Average Pooling",
"Faster R-CNN",
"1x1 Convolution",
"MobileNetV3",
"Region Proposal Network",
"ReLU6",
"ResNet",
"VGG",
"Ghost Module",
"Random Horizontal Flip",
"RoIPool",
"Convolution",
"ReLU",
"Ghost Bottleneck",
"Residual Connection",
"FPN",
"Dense Connections",
"RPN",
"Focal Loss",
"Random Resized Crop",
"Hard Swish",
"Batch Normalization",
"Residual Network",
"Squeeze-and-Excitation Block",
"Pointwise Convolution",
"Kaiming Initialization",
"Sigmoid Activation",
"SGD with Momentum",
"Inverted Residual Block",
"Softmax",
"Feature Pyramid Network",
"Bottleneck Residual Block",
"Dropout",
"Depthwise Separable Convolution",
"RetinaNet",
"Residual Block",
"Global Average Pooling",
"Rectified Linear Units",
"Max Pooling",
"GhostNet"
] | [
"ImageNet"
] | [
"Number of params",
"Top 5 Accuracy",
"Top 1 Accuracy"
] | GhostNet: More Features from Cheap Operations |
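A simplified PyTorch sketch of the Ghost module idea described above: a primary convolution produces a few intrinsic feature maps and a cheap depthwise convolution derives the remaining "ghost" maps. Channel rounding, the identity branch, and other details of the official implementation are omitted, so treat this as an illustration rather than the reference code; kernel sizes and the ratio are assumptions.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Generate part of the output channels with a cheap depthwise operation.

    A primary convolution produces `init_ch` intrinsic feature maps; a cheap
    depthwise convolution then derives the remaining "ghost" maps, and the
    two sets are concatenated along the channel dimension.
    """

    def __init__(self, in_ch, out_ch, ratio=2, kernel=1, cheap_kernel=3):
        super().__init__()
        init_ch = out_ch // ratio
        ghost_ch = out_ch - init_ch
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, kernel, padding=kernel // 2, bias=False),
            nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, ghost_ch, cheap_kernel, padding=cheap_kernel // 2,
                      groups=init_ch, bias=False),
            nn.BatchNorm2d(ghost_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        primary = self.primary(x)
        return torch.cat([primary, self.cheap(primary)], dim=1)

print(GhostModule(16, 32)(torch.randn(1, 16, 8, 8)).shape)  # torch.Size([1, 32, 8, 8])
```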
It is highly desirable yet challenging to generate image captions that can describe novel objects which are unseen in caption-labeled training data, a capability that is evaluated in the novel object captioning challenge (nocaps). In this challenge, no additional image-caption training data, other than COCO Captions, is allowed for model training. Thus, conventional Vision-Language Pre-training (VLP) methods cannot be applied. This paper presents VIsual VOcabulary pretraining (VIVO) that performs pre-training in the absence of caption annotations. By breaking the dependency on paired image-caption training data in VLP, VIVO can leverage large amounts of paired image-tag data to learn a visual vocabulary. This is done by pre-training a multi-layer Transformer model that learns to align image-level tags with their corresponding image region features. To address the unordered nature of image tags, VIVO uses a Hungarian matching loss with masked tag prediction to conduct pre-training. We validate the effectiveness of VIVO by fine-tuning the pre-trained model for image captioning. In addition, we perform an analysis of the visual-text alignment inferred by our model. The results show that our model can not only generate fluent image captions that describe novel objects, but also identify the locations of these objects. Our single model has achieved new state-of-the-art results on nocaps and surpassed the human CIDEr score. | [
"Regularization",
"Attention Modules",
"Stochastic Optimization",
"Output Functions",
"Subword Segmentation",
"Normalization",
"Feedforward Networks",
"Transformers",
"Attention Mechanisms",
"Skip Connections"
] | [
"Image Captioning"
] | [
"Layer Normalization",
"Byte Pair Encoding",
"BPE",
"Softmax",
"Adam",
"Transformer",
"Multi-Head Attention",
"Residual Connection",
"Label Smoothing",
"Dropout",
"Scaled Dot-Product Attention",
"Dense Connections"
] | [
"nocaps out-of-domain",
"nocaps in-domain",
"nocaps entire",
"nocaps near-domain"
] | [
"B4",
"METEOR",
"CIDEr",
"ROUGE-L",
"SPICE",
"B2",
"B1",
"B3"
] | VIVO: Visual Vocabulary Pre-Training for Novel Object Captioning |
Fine-grained visual categorization (FGVC) is an important but challenging task due to high intra-class variances and low inter-class variances caused by deformation, occlusion, illumination, etc. An attention convolutional binary neural tree architecture is presented to address those problems for weakly supervised FGVC. Specifically, we incorporate convolutional operations along edges of the tree structure, and use the routing functions in each node to determine the root-to-leaf computational paths within the tree. The final decision is computed as the summation of the predictions from leaf nodes. The deep convolutional operations learn to capture the representations of objects, and the tree structure characterizes the coarse-to-fine hierarchical feature learning process. In addition, we use the attention transformer module to enforce the network to capture discriminative features. The negative log-likelihood loss is used to train the entire network in an end-to-end fashion by SGD with back-propagation. Several experiments on the CUB-200-2011, Stanford Cars and Aircraft datasets demonstrate that the proposed method performs favorably against the state-of-the-arts. | [
"Regularization",
"Output Functions",
"Stochastic Optimization",
"Attention Modules",
"Activation Functions",
"Subword Segmentation",
"Normalization",
"Feedforward Networks",
"Transformers",
"Attention Mechanisms",
"Skip Connections"
] | [
"Fine-Grained Image Classification",
"Fine-Grained Visual Categorization"
] | [
"Stochastic Gradient Descent",
"Layer Normalization",
"Byte Pair Encoding",
"BPE",
"Softmax",
"Adam",
"Transformer",
"Multi-Head Attention",
"SGD",
"Rectified Linear Units",
"ReLU",
"Residual Connection",
"Label Smoothing",
"Dropout",
"Scaled Dot-Product Attention",
"Dense Connections"
] | [
" CUB-200-2011",
"Stanford Cars",
"FGVC Aircraft"
] | [
"Accuracy"
] | Attention Convolutional Binary Neural Tree for Fine-Grained Visual Categorization |
This paper presents a Deep Learning approach for traffic sign recognition systems. Several classification experiments are conducted over publicly available traffic sign datasets from Germany and Belgium using a Deep Neural Network which comprises Convolutional layers and Spatial Transformer Networks. Such trials are built to measure the impact of diverse factors with the end goal of designing a Convolutional Neural Network that can improve the state-of-the-art of traffic sign classification task. First, different adaptive and non-adaptive stochastic gradient descent optimisation algorithms such as SGD, SGD-Nesterov, RMSprop and Adam are evaluated. Subsequently, multiple combinations of Spatial Transformer Networks placed at distinct positions within the main neural network are analysed. The recognition rate of the proposed Convolutional Neural Network reports an accuracy of 99.71% in the German Traffic Sign Recognition Benchmark, outperforming previous state-of-the-art methods and also being more efficient in terms of memory requirements. | [
"Image Model Blocks"
] | [
"Traffic Sign Recognition"
] | [
"Spatial Transformer"
] | [
"GTSRB"
] | [
"Accuracy"
] | Deep neural network for traffic sign recognition systems: An analysis of spatial transformers and stochastic optimisation methods |
Supervised deep learning has been successfully applied to many recognition problems in machine learning and computer vision. Although it can approximate a complex many-to-one function very well when a large amount of training data is provided, the lack of probabilistic inference in current supervised deep learning methods makes it difficult to model complex structured output representations. In this work, we develop a scalable deep conditional generative model for structured output variables using Gaussian latent variables. The model is trained efficiently in the framework of stochastic gradient variational Bayes, and allows fast prediction using stochastic feed-forward inference. In addition, we provide novel strategies for building robust structured prediction algorithms, such as a recurrent prediction network architecture, input noise injection and multi-scale prediction training methods. In experiments, we demonstrate the effectiveness of our proposed algorithm in comparison to deterministic deep neural network counterparts in generating diverse but realistic output representations using stochastic inference. Furthermore, the proposed training schemes and architecture designs are complementary, which leads to strong pixel-level object segmentation and semantic labeling performance on Caltech-UCSD Birds 200 and a subset of the Labeled Faces in the Wild dataset. | [
"Generative Models",
"Optimization"
] | [
"Semantic Segmentation",
"Structured Prediction"
] | [
"cVAE",
"Stochastic Gradient Variational Bayes",
"Conditional Variational Auto Encoder"
] | [
"MNIST"
] | [
"Negative CLL"
] | Learning Structured Output Representation using Deep Conditional Generative Models |
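The stochastic gradient variational Bayes training mentioned above hinges on the reparameterization trick and an analytic Gaussian KL term. A minimal NumPy sketch of just those pieces; in the actual conditional model the recognition, prior, and generation networks are all conditioned on the input, which is omitted here, and the toy vectors are illustrative.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z ~ N(mu, sigma^2) via z = mu + sigma * eps (reparameterization trick)."""
    return mu + np.exp(0.5 * log_var) * rng.normal(size=mu.shape)

def elbo_terms(x, x_recon, mu, log_var):
    """Monte-Carlo ELBO pieces: Gaussian reconstruction term and analytic KL."""
    recon = -0.5 * np.sum((x - x_recon) ** 2)                    # up to constants
    kl = -0.5 * np.sum(1 + log_var - mu ** 2 - np.exp(log_var))  # KL(q(z|.) || N(0, I))
    return recon, kl

rng = np.random.default_rng(0)
mu, log_var = rng.normal(size=8), rng.normal(size=8) * 0.1
z = reparameterize(mu, log_var, rng)
print(z.shape, elbo_terms(rng.normal(size=8), rng.normal(size=8), mu, log_var))
```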
The latest work on language representations carefully integrates contextualized features into language model training, which has enabled a series of successes, especially in various machine reading comprehension and natural language inference tasks. However, existing language representation models, including ELMo, GPT and BERT, only exploit plain context-sensitive features such as character or word embeddings. They rarely consider incorporating structured semantic information, which can provide rich semantics for language representation. To promote natural language understanding, we propose to incorporate explicit contextual semantics from pre-trained semantic role labeling, and introduce an improved language representation model, Semantics-aware BERT (SemBERT), which is capable of explicitly absorbing contextual semantics over a BERT backbone. SemBERT retains the convenient usability of its BERT precursor, requiring only light fine-tuning and no substantial task-specific modifications. Compared with BERT, SemBERT is as simple in concept but more powerful. It obtains new state-of-the-art results or substantially improves existing results on ten reading comprehension and language inference tasks. | [
"Regularization",
"Output Functions",
"Learning Rate Schedules",
"Stochastic Optimization",
"Recurrent Neural Networks",
"Activation Functions",
"Attention Modules",
"Subword Segmentation",
"Word Embeddings",
"Normalization",
"Attention Mechanisms",
"Language Models",
"Feedforward Networks",
"Transformers",
"Fine-Tuning",
"Skip Connections",
"Bidirectional Recurrent Neural Networks"
] | [
"Language Modelling",
"Machine Reading Comprehension",
"Natural Language Inference",
"Natural Language Understanding",
"Question Answering",
"Reading Comprehension",
"Semantic Role Labeling",
"Word Embeddings"
] | [
"Weight Decay",
"Cosine Annealing",
"Long Short-Term Memory",
"Adam",
"BiLSTM",
"Tanh Activation",
"Scaled Dot-Product Attention",
"Gaussian Linear Error Units",
"Bidirectional LSTM",
"Residual Connection",
"Dense Connections",
"ELMo",
"Layer Normalization",
"Discriminative Fine-Tuning",
"GPT",
"GELU",
"Sigmoid Activation",
"WordPiece",
"Byte Pair Encoding",
"BPE",
"Softmax",
"Multi-Head Attention",
"Attention Dropout",
"Linear Warmup With Cosine Annealing",
"Linear Warmup With Linear Decay",
"LSTM",
"Dropout",
"BERT"
] | [
"SQuAD2.0 dev",
"SNLI",
"SQuAD2.0"
] | [
"% Test Accuracy",
"Parameters",
"F1",
"EM",
"% Train Accuracy"
] | Semantics-aware BERT for Language Understanding |
Generative adversarial nets (GANs) are widely used to learn the data sampling
process and their performance may heavily depend on the loss functions, given a
limited computational budget. This study revisits MMD-GAN that uses the maximum
mean discrepancy (MMD) as the loss function for GAN and makes two
contributions. First, we argue that the existing MMD loss function may
discourage the learning of fine details in data as it attempts to contract the
discriminator outputs of real data. To address this issue, we propose a
repulsive loss function to actively learn the difference among the real data by
simply rearranging the terms in MMD. Second, inspired by the hinge loss, we
propose a bounded Gaussian kernel to stabilize the training of MMD-GAN with the
repulsive loss function. The proposed methods are applied to the unsupervised
image generation tasks on CIFAR-10, STL-10, CelebA, and LSUN bedroom datasets.
Results show that the repulsive loss function significantly improves over the
MMD loss at no additional computational cost and outperforms other
representative loss functions. The proposed methods achieve an FID score of
16.21 on the CIFAR-10 dataset using a single DCGAN network and spectral
normalization. | [
"Generative Models",
"Convolutions",
"Activation Functions",
"Normalization"
] | [
"Image Generation"
] | [
"Generative Adversarial Network",
"GAN",
"Batch Normalization",
"Convolution",
"ReLU",
"DCGAN",
"Deep Convolutional GAN",
"Leaky ReLU",
"Rectified Linear Units"
] | [
"STL-10",
"CIFAR-10"
] | [
"Inception score",
"FID"
] | Improving MMD-GAN Training with Repulsive Loss Function |
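For context, the quantity being rearranged above is the kernel MMD between real and generated samples. Below is a minimal NumPy sketch of the standard unbiased squared-MMD estimator with a plain Gaussian kernel; the paper's repulsive rearrangement of the attractive/repulsive terms and its bounded kernel are not reproduced, and the bandwidth and toy samples are arbitrary.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """Pairwise Gaussian (RBF) kernel matrix between two sample sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased estimate of the squared MMD between samples X and Y."""
    m, n = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, sigma); np.fill_diagonal(Kxx, 0.0)
    Kyy = gaussian_kernel(Y, Y, sigma); np.fill_diagonal(Kyy, 0.0)
    Kxy = gaussian_kernel(X, Y, sigma)
    return Kxx.sum() / (m * (m - 1)) + Kyy.sum() / (n * (n - 1)) - 2 * Kxy.mean()

rng = np.random.default_rng(0)
real = rng.normal(size=(64, 16))
fake = rng.normal(loc=0.5, size=(64, 16))
print(mmd2_unbiased(real, fake))
```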
Panoptic segmentation is a scene parsing task which unifies semantic segmentation and instance segmentation into one single task. However, current state-of-the-art studies pay little attention to inference time. In this work, we propose an Efficient Panoptic Segmentation Network (EPSNet) to tackle the panoptic segmentation task with fast inference speed. Basically, EPSNet generates masks based on a simple linear combination of prototype masks and mask coefficients. The light-weight network branches for instance segmentation and semantic segmentation only need to predict mask coefficients and produce masks with the shared prototypes predicted by the prototype network branch. Furthermore, to enhance the quality of the shared prototypes, we adopt a module called the "cross-layer attention fusion module", which aggregates multi-scale features with an attention mechanism, helping them capture long-range dependencies between each other. To validate the proposed work, we conduct various experiments on the challenging COCO panoptic dataset, which achieve highly promising performance with significantly faster inference speed (53ms on GPU). | [
"Initialization",
"Convolutional Neural Networks",
"Activation Functions",
"Normalization",
"Convolutions",
"Pooling Operations",
"Skip Connections",
"Skip Connection Blocks"
] | [
"Instance Segmentation",
"Panoptic Segmentation",
"Scene Parsing",
"Semantic Segmentation"
] | [
"ResNet",
"Average Pooling",
"Batch Normalization",
"Convolution",
"1x1 Convolution",
"ReLU",
"Residual Network",
"Residual Connection",
"Bottleneck Residual Block",
"Kaiming Initialization",
"Residual Block",
"Global Average Pooling",
"Rectified Linear Units",
"Max Pooling"
] | [
"COCO test-dev"
] | [
"PQst",
"PQ",
"PQth"
] | EPSNet: Efficient Panoptic Segmentation Network with Cross-layer Attention Fusion |
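The mask-assembly step described above (a linear combination of shared prototypes weighted by per-instance coefficients, followed by a sigmoid) can be written in a few lines. A minimal NumPy sketch; the prototype branch, the coefficient heads, and the cross-layer attention fusion module are not shown, and the sizes are illustrative.

```python
import numpy as np

def assemble_masks(prototypes, coeffs):
    """Build instance masks as sigmoid(linear combination of shared prototypes).

    prototypes : (H, W, k) prototype masks shared by all instances
    coeffs     : (N, k)    per-instance mask coefficients
    """
    logits = np.tensordot(prototypes, coeffs, axes=([2], [1]))  # (H, W, N)
    return 1.0 / (1.0 + np.exp(-logits))

rng = np.random.default_rng(0)
protos = rng.normal(size=(32, 32, 8))
coeffs = rng.normal(size=(5, 8))             # 5 predicted instances
print(assemble_masks(protos, coeffs).shape)  # (32, 32, 5)
```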
Recent advances in deep learning have heightened interest among researchers in the field of visual speech recognition (VSR). Currently, most existing methods equate VSR with automatic lip reading, which attempts to recognise speech by analysing lip motion. However, human experience and psychological studies suggest that we do not always fix our gaze at each other's lips during a face-to-face conversation, but rather scan the whole face repetitively. This inspires us to revisit a fundamental yet somehow overlooked problem: can VSR models benefit from reading extraoral facial regions, i.e. beyond the lips? In this paper, we perform a comprehensive study to evaluate the effects of different facial regions with state-of-the-art VSR models, including the mouth, the whole face, the upper face, and even the cheeks. Experiments are conducted on both word-level and sentence-level benchmarks with different characteristics. We find that despite the complex variations of the data, incorporating information from extraoral facial regions, even the upper face, consistently benefits VSR performance. Furthermore, we introduce a simple yet effective method based on Cutout to learn more discriminative features for face-based VSR, hoping to maximise the utility of information encoded in different facial regions. Our experiments show obvious improvements over existing state-of-the-art methods that use only the lip region as inputs, a result we believe would probably provide the VSR community with some new and exciting insights. | [
"Image Data Augmentation"
] | [
"Lipreading",
"Lip Reading",
"Speech Recognition",
"Visual Speech Recognition"
] | [
"Cutout"
] | [
"Lip Reading in the Wild",
"LRW-1000",
"GRID corpus (mixed-speech)"
] | [
"Top-1 Accuracy",
"Word Error Rate (WER)"
] | Can We Read Speech Beyond the Lips? Rethinking RoI Selection for Deep Visual Speech Recognition |
Temporal relational reasoning, the ability to link meaningful transformations
of objects or entities over time, is a fundamental property of intelligent
species. In this paper, we introduce an effective and interpretable network
module, the Temporal Relation Network (TRN), designed to learn and reason about
temporal dependencies between video frames at multiple time scales. We evaluate
TRN-equipped networks on activity recognition tasks using three recent video
datasets - Something-Something, Jester, and Charades - which fundamentally
depend on temporal relational reasoning. Our results demonstrate that the
proposed TRN gives convolutional neural networks a remarkable capacity to
discover temporal relations in videos. Through only sparsely sampled video
frames, TRN-equipped networks can accurately predict human-object interactions
in the Something-Something dataset and identify various human gestures on the
Jester dataset with very competitive performance. TRN-equipped networks also
outperform two-stream networks and 3D convolution networks in recognizing daily
activities in the Charades dataset. Further analyses show that the models learn
intuitive and interpretable visual common sense knowledge in videos. | [
"Convolutions"
] | [
"Action Classification",
"Action Recognition",
"Activity Recognition",
"Common Sense Reasoning",
"Human-Object Interaction Detection",
"Relational Reasoning"
] | [
"3D Convolution",
"Convolution"
] | [
"Something-Something V2",
"Moments in Time",
"Jester",
"Jester test",
"Something-Something V1",
"Charades"
] | [
"Top 1 Accuracy",
"Val",
"Top-5 Accuracy",
"MAP",
"Top-1 Accuracy",
"Top 5 Accuracy"
] | Temporal Relational Reasoning in Videos |
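The temporal relation idea above can be illustrated with its simplest 2-frame term: a small function g is applied to ordered frame pairs, the results are summed, and a readout h produces logits. The sketch below uses single-layer "MLPs" and random frame features as stand-ins; the multi-scale version that also sums 3-frame and higher-order tuples is omitted.

```python
import numpy as np
from itertools import combinations

def relu(x):
    return np.maximum(x, 0.0)

def two_frame_relation(frames, Wg, Wh):
    """Sum a small network g over ordered frame pairs, then apply a readout h."""
    pair_sum = sum(relu(np.concatenate([frames[i], frames[j]]) @ Wg)
                   for i, j in combinations(range(len(frames)), 2))
    return pair_sum @ Wh  # class logits

rng = np.random.default_rng(0)
frames = rng.normal(size=(8, 32))     # 8 sparsely sampled frame features
Wg = rng.normal(size=(64, 16))
Wh = rng.normal(size=(16, 10))
print(two_frame_relation(frames, Wg, Wh).shape)  # (10,)
```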
In this work we investigate the effect of the convolutional network depth on
its accuracy in the large-scale image recognition setting. Our main
contribution is a thorough evaluation of networks of increasing depth using an
architecture with very small (3x3) convolution filters, which shows that a
significant improvement on the prior-art configurations can be achieved by
pushing the depth to 16-19 weight layers. These findings were the basis of our
ImageNet Challenge 2014 submission, where our team secured the first and the
second places in the localisation and classification tracks respectively. We
also show that our representations generalise well to other datasets, where
they achieve state-of-the-art results. We have made our two best-performing
ConvNet models publicly available to facilitate further research on the use of
deep visual representations in computer vision. | [
"Image Data Augmentation",
"Initialization",
"Regularization",
"Output Functions",
"Stochastic Optimization",
"Learning Rate Schedules",
"Convolutional Neural Networks",
"Activation Functions",
"Convolutions",
"Feedforward Networks",
"Pooling Operations"
] | [
"Image Classification"
] | [
"Weight Decay",
"SGD with Momentum",
"Color Jitter",
"VGG",
"Random Horizontal Flip",
"Softmax",
"Random Resized Crop",
"Max Pooling",
"Xavier Initialization",
"Convolution",
"Rectified Linear Units",
"ReLU",
"Dropout",
"ColorJitter",
"Dense Connections",
"Step Decay"
] | [
"ImageNet ReaL",
"GTAV-to-Cityscapes Labels",
"ImageNet",
"DogCentric"
] | [
"Number of params",
"Top 1 Accuracy",
"mIoU",
"Accuracy",
"Top 5 Accuracy"
] | Very Deep Convolutional Networks for Large-Scale Image Recognition |
Semantic segmentation of aerial videos has been extensively used for decision making in monitoring environmental changes, urban planning, and disaster management. The reliability of these decision support systems depends on the accuracy of the video semantic segmentation algorithms. Existing CNN-based video semantic segmentation methods enhance image semantic segmentation methods by incorporating an additional module, such as an LSTM or optical flow, to compute the temporal dynamics of the video, which adds computational overhead. The proposed research modifies the CNN architecture by incorporating temporal information to improve the efficiency of video semantic segmentation. In this work, an enhanced encoder-decoder based CNN architecture (UVid-Net) is proposed for UAV video semantic segmentation. The encoder of the proposed architecture embeds temporal information for temporally consistent labelling. The decoder is enhanced by introducing the feature retainer module, which aids in the accurate localization of the class labels. The proposed UVid-Net architecture for UAV video semantic segmentation is quantitatively evaluated on an extended ManipalUAVid dataset. An mIoU of 0.79 is observed, which is significantly higher than that of other state-of-the-art algorithms. Further, the proposed approach produces promising results even when a UVid-Net model pre-trained on urban street scenes is fine-tuned on UAV aerial videos by updating only the final layer. | [
"Semantic Segmentation Models",
"Recurrent Neural Networks",
"Activation Functions",
"Convolutions",
"Pooling Operations",
"Skip Connections"
] | [
"Aerial Video Semantic Segmentation",
"Decision Making",
"Optical Flow Estimation",
"Semantic Segmentation",
"Video Semantic Segmentation"
] | [
"U-Net",
"Sigmoid Activation",
"Long Short-Term Memory",
"Concatenated Skip Connection",
"Convolution",
"Tanh Activation",
"ReLU",
"LSTM",
"Rectified Linear Units",
"Max Pooling"
] | [
"ManipalUAVid"
] | [
"mIoU"
] | UVid-Net: Enhanced Semantic Segmentation of UAV Aerial Videos by Embedding Temporal Information |
We examine the capabilities of a unified, multi-task framework for three information extraction tasks: named entity recognition, relation extraction, and event extraction. Our framework (called DyGIE++) accomplishes all tasks by enumerating, refining, and scoring text spans designed to capture local (within-sentence) and global (cross-sentence) context. Our framework achieves state-of-the-art results across all tasks, on four datasets from a variety of domains. We perform experiments comparing different techniques to construct span representations. Contextualized embeddings like BERT perform well at capturing relationships among entities in the same or adjacent sentences, while dynamic span graph updates model long-range cross-sentence relationships. For instance, propagating span representations via predicted coreference links can enable the model to disambiguate challenging entity mentions. Our code is publicly available at https://github.com/dwadden/dygiepp and can be easily adapted for new tasks or datasets. | [
"Regularization",
"Output Functions",
"Learning Rate Schedules",
"Stochastic Optimization",
"Attention Modules",
"Activation Functions",
"Subword Segmentation",
"Normalization",
"Language Models",
"Feedforward Networks",
"Attention Mechanisms",
"Skip Connections"
] | [
"Event Extraction",
"Joint Entity and Relation Extraction",
"Named Entity Recognition",
"Relation Extraction"
] | [
"Weight Decay",
"WordPiece",
"Layer Normalization",
"Softmax",
"Adam",
"Multi-Head Attention",
"Attention Dropout",
"Linear Warmup With Linear Decay",
"Residual Connection",
"Scaled Dot-Product Attention",
"Dropout",
"BERT",
"GELU",
"Dense Connections",
"Gaussian Linear Error Units"
] | [
"SciERC",
"ACE 2005"
] | [
"Entity F1",
"Relation F1",
"Sentence Encoder",
"RE Micro F1",
"NER Micro F1"
] | Entity, Relation, and Event Extraction with Contextualized Span Representations |
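The span enumeration at the core of the framework described above is straightforward; a minimal sketch is shown below, with the maximum span width as an illustrative parameter. Span refinement, graph propagation, and scoring are omitted.

```python
def enumerate_spans(tokens, max_width=8):
    """All contiguous spans up to max_width, as (start, end) inclusive token indices."""
    n = len(tokens)
    return [(i, j) for i in range(n) for j in range(i, min(i + max_width, n))]

print(len(enumerate_spans(list(range(10)), max_width=3)))  # 27 candidate spans
```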
We address the problem of semantic segmentation using deep learning. Most
segmentation systems include a Conditional Random Field (CRF) to produce a
structured output that is consistent with the image's visual features. Recent
deep learning approaches have incorporated CRFs into Convolutional Neural
Networks (CNNs), with some even training the CRF end-to-end with the rest of
the network. However, these approaches have not employed higher order
potentials, which have previously been shown to significantly improve
segmentation performance. In this paper, we demonstrate that two types of
higher order potential, based on object detections and superpixels, can be
included in a CRF embedded within a deep network. We design these higher order
potentials to allow inference with the differentiable mean field algorithm. As
a result, all the parameters of our richer CRF model can be learned end-to-end
with our pixelwise CNN classifier. We achieve state-of-the-art segmentation
performance on the PASCAL VOC benchmark with these trainable higher order
potentials. | [
"Structured Prediction"
] | [
"Semantic Segmentation"
] | [
"Conditional Random Field",
"CRF"
] | [
"PASCAL Context"
] | [
"mIoU"
] | Higher Order Conditional Random Fields in Deep Neural Networks |