abstract | field | task | method | dataset | metric | title
---|---|---|---|---|---|---
This paper addresses semantic image segmentation by incorporating rich
information into a Markov Random Field (MRF), including high-order relations and
a mixture of label contexts. Unlike previous works that optimized MRFs using
iterative algorithms, we solve the MRF by proposing a Convolutional Neural Network
(CNN), namely Deep Parsing Network (DPN), which enables deterministic
end-to-end computation in a single forward pass. Specifically, DPN extends a
contemporary CNN architecture to model unary terms and additional layers are
carefully devised to approximate the mean field algorithm (MF) for pairwise
terms. It has several appealing properties. First, different from the recent
works that combined CNN and MRF, where many iterations of MF were required for
each training image during back-propagation, DPN is able to achieve high
performance by approximating one iteration of MF. Second, DPN represents
various types of pairwise terms, making many existing works its special
cases. Third, DPN makes MF easier to parallelize and speed up on a
Graphics Processing Unit (GPU). DPN is thoroughly evaluated on the PASCAL VOC
2012 dataset, where a single DPN model yields a new state-of-the-art
segmentation accuracy. | [] | [
"Semantic Segmentation"
] | [] | [
"Cityscapes test"
] | [
"Mean IoU (class)"
] | Semantic Image Segmentation via Deep Parsing Network |
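The mean field (MF) step referenced in the abstract above can be illustrated with a minimal sketch: a single MF update for a generic pairwise CRF, where marginals are refreshed as a softmax over unary scores minus aggregated pairwise messages. The dense affinity matrix and label-compatibility matrix below are illustrative stand-ins, not the specific pairwise terms that DPN realizes as convolutional layers.

```python
import numpy as np

def mean_field_step(unary, pairwise_w, compat):
    """One mean-field update for a generic pairwise CRF.

    unary:      (N, L) unary scores (higher = more likely label).
    pairwise_w: (N, N) pixel-affinity weights, zero on the diagonal.
    compat:     (L, L) label-compatibility penalties.
    Returns updated marginals Q of shape (N, L).
    """
    # Initialise marginals from the unaries alone (softmax).
    q = np.exp(unary - unary.max(axis=1, keepdims=True))
    q /= q.sum(axis=1, keepdims=True)

    # Message passing: aggregate neighbours' beliefs, then mix labels.
    messages = pairwise_w @ q            # (N, L) spatial aggregation
    pairwise_energy = messages @ compat  # (N, L) compatibility transform

    # Combine with unaries and renormalise.
    logits = unary - pairwise_energy
    q_new = np.exp(logits - logits.max(axis=1, keepdims=True))
    return q_new / q_new.sum(axis=1, keepdims=True)
```

Per the abstract, DPN approximates one such iteration with carefully devised layers so that the whole model runs in a single forward pass; the dense matrix products here are only for clarity.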
We propose the Encoder-Recurrent-Decoder (ERD) model for recognition and
prediction of human body pose in videos and motion capture. The ERD model is a
recurrent neural network that incorporates nonlinear encoder and decoder
networks before and after recurrent layers. We test instantiations of ERD
architectures in the tasks of motion capture (mocap) generation, body pose
labeling and body pose forecasting in videos. Our model handles mocap training
data across multiple subjects and activity domains, and synthesizes novel
motions while avoiding drift for long periods of time. For human pose labeling,
ERD outperforms a per-frame body part detector by resolving left-right body
part confusions. For video pose forecasting, ERD predicts body joint
displacements across a temporal horizon of 400ms and outperforms a first order
motion model based on optical flow. ERDs extend previous Long Short Term Memory
(LSTM) models in the literature to jointly learn representations and their
dynamics. Our experiments show such representation learning is crucial for both
labeling and prediction in space-time. We find this is a distinguishing feature
of the spatio-temporal visual domain in comparison to 1D text, speech or
handwriting, where straightforward hard-coded representations have shown
excellent results when directly combined with recurrent units. | [] | [
"Human Dynamics",
"Human Pose Forecasting",
"Motion Capture",
"Optical Flow Estimation",
"Representation Learning"
] | [] | [
"Human3.6M"
] | [
"MAR, walking, 400ms",
"MAR, walking, 1,000ms"
] | Recurrent Network Models for Human Dynamics |
Zero-shot learning (ZSL) is a challenging problem that aims to recognize the target categories without seen data, where semantic information is leveraged to transfer knowledge from some source classes. Although ZSL has made great progress in recent years, most existing approaches easily overfit the source classes in the generalized zero-shot learning (GZSL) task, which indicates that they learn little knowledge about target classes. To tackle this problem, we propose a novel Transferable Contrastive Network (TCN) that explicitly transfers knowledge from the source classes to the target classes. It automatically contrasts one image with different classes to judge whether they are consistent or not. By exploiting class similarities to transfer knowledge from source images to similar target classes, our approach is more robust in recognizing target images. Experiments on five benchmark datasets show the superiority of our approach for GZSL. | [] | [
"Generalized Zero-Shot Learning",
"Transfer Learning",
"Zero-Shot Learning"
] | [] | [
"SUN Attribute",
"CUB-200-2011"
] | [
"average top-1 classification accuracy",
"Harmonic mean"
] | Transferable Contrastive Network for Generalized Zero-Shot Learning |
A novel algorithm to segment a primary object in a video sequence is proposed in this work. First, we generate candidate regions for the primary object using both color and motion edges. Second, we estimate initial primary object regions by exploiting the recurrence property of the primary object. Third, we repeatedly augment the initial regions with missing parts or reduce them by excluding noisy parts. This augmentation and reduction process (ARP) identifies the primary object region in each frame. Experimental results demonstrate that the proposed algorithm significantly outperforms state-of-the-art conventional algorithms on recent benchmark datasets.
| [] | [
"Semantic Segmentation",
"Unsupervised Video Object Segmentation"
] | [] | [
"DAVIS 2016"
] | [
"F-measure (Decay)",
"Jaccard (Mean)",
"F-measure (Recall)",
"Jaccard (Decay)",
"Jaccard (Recall)",
"F-measure (Mean)",
"J&F"
] | Primary Object Segmentation in Videos Based on Region Augmentation and Reduction |
Numerous models describing the human emotional states have been built by the
psychology community. Alongside, Deep Neural Networks (DNNs) are reaching
excellent performance and are becoming interesting feature extraction tools
in many computer vision tasks. Inspired by works from the psychology community,
we first study the link between the compact two-dimensional representation of
the emotion known as arousal-valence, and discrete emotion classes (e.g. anger,
happiness, sadness, etc.) used in the computer vision community. This enables us to
assess the benefits -- in terms of discrete emotion inference -- of adding an
extra dimension to arousal-valence (usually named dominance). Building on these
observations, we propose CAKE, a 3-dimensional representation of emotion
learned in a multi-domain fashion, achieving accurate emotion recognition on
several public datasets. Moreover, we visualize how emotion boundaries are
organized inside DNN representations and show that DNNs are implicitly learning
arousal-valence-like descriptions of emotions. Finally, we use the CAKE
representation to compare the quality of the annotations of different public
datasets. | [] | [
"Emotion Recognition",
"Facial Expression Recognition"
] | [] | [
"AffectNet"
] | [
"Accuracy (7 emotion)",
"Accuracy (8 emotion)"
] | CAKE: Compact and Accurate K-dimensional representation of Emotion |
We propose an octree guided neural network architecture and spherical
convolutional kernel for machine learning from arbitrary 3D point clouds. The
network architecture capitalizes on the sparse nature of irregular point
clouds, and hierarchically coarsens the data representation with space
partitioning. At the same time, the proposed spherical kernels systematically
quantize point neighborhoods to identify local geometric structures in the
data, while maintaining the properties of translation-invariance and asymmetry.
We specify spherical kernels with the help of network neurons that in turn are
associated with spatial locations. We exploit this association to avert dynamic
kernel generation during network training, which enables efficient learning with
high-resolution point clouds. The effectiveness of the proposed technique is
established on the benchmark tasks of 3D object classification and
segmentation, achieving new state-of-the-art on ShapeNet and RueMonge2014
datasets. | [] | [
"3D Object Classification",
"3D Part Segmentation",
"Object Classification"
] | [] | [
"ShapeNet-Part"
] | [
"Class Average IoU",
"Instance Average IoU"
] | Octree guided CNN with Spherical Kernels for 3D Point Clouds |
The Masked Language Model (MLM) framework has been widely adopted for self-supervised language pre-training. In this paper, we argue that randomly sampled masks in MLM would lead to undesirably large gradient variance. Thus, we theoretically quantify the gradient variance by correlating the gradient covariance with the Hamming distance between two different masks (given a certain text sequence). To reduce the variance due to the sampling of masks, we propose a fully-explored masking strategy, where a text sequence is divided into a certain number of non-overlapping segments. Thereafter, the tokens within one segment are masked for training. We prove, from a theoretical perspective, that the gradients derived from this new masking scheme have a smaller variance and can lead to more efficient self-supervised training. We conduct extensive experiments on both continual pre-training and general pre-training from scratch. Empirical results confirm that this new masking strategy can consistently outperform standard random masking. Detailed efficiency analysis and ablation studies further validate the advantages of our fully-explored masking strategy under the MLM framework. | [] | [
"Language Modelling",
"Sentence Classification"
] | [] | [
"ACL-ARC"
] | [
"F1"
] | Improving Self-supervised Pre-training via a Fully-Explored Masked Language Model |
Background
Atrial fibrillation (AF) is one of the most common and debilitating arrhythmias worldwide, with a major impact on morbidity and mortality. The detection of AF is crucial in preventing both acute and chronic cardiac rhythm disorders.
Objective
Our objective is to devise a method for real-time, automated detection of AF episodes in electrocardiograms (ECGs). The method operates on RR intervals and involves several basic operations: nonlinear/linear integer filters, symbolic dynamics, and the calculation of Shannon entropy. Using novel recursive algorithms, online analytical processing with this method can be achieved.
Results
Four publicly-accessible sets of clinical data (Long-Term AF, MIT-BIH AF, MIT-BIH Arrhythmia, and MIT-BIH Normal Sinus Rhythm Databases) were selected for investigation. The first database is used as a training set; in accordance with the receiver operating characteristic (ROC) curve, the best performance using this method was achieved at the discrimination threshold of 0.353: the sensitivity (Se), specificity (Sp), positive predictive value (PPV) and overall accuracy (ACC) were 96.72%, 95.07%, 96.61% and 96.05%, respectively. The other three databases are used as testing sets. Using the obtained threshold value (i.e., 0.353), for the second set, the obtained parameters were 96.89%, 98.25%, 97.62% and 97.67%, respectively; for the third database, these parameters were 97.33%, 90.78%, 55.29% and 91.46%, respectively; finally, for the fourth set, the Sp was 98.28%. The existing methods were also employed for comparison.
Conclusions
Overall, the test results indicate that the newly developed approach outperforms other available techniques on these databases under the various experimental situations assessed, and suggest that our technique could be of practical use for clinicians in the future. | [] | [
"Atrial Fibrillation Detection"
] | [] | [
"MIT-BIH AF"
] | [
"Accuracy"
] | Automatic online detection of atrial fibrillation based on symbolic dynamics and Shannon entropy |
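The pipeline sketched in this abstract — symbolize RR intervals, compute Shannon entropy, threshold at 0.353 — can be outlined as follows. The sliding-window length, the uniform quantization used for symbolization, and the entropy normalization are illustrative assumptions standing in for the paper's integer-filter operations.

```python
import numpy as np

def shannon_entropy(symbols, n_levels):
    """Normalised Shannon entropy (in [0, 1]) of a window of symbols."""
    counts = np.bincount(symbols, minlength=n_levels).astype(float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum() / np.log2(n_levels))

def detect_af(rr_intervals, window=128, n_levels=8, threshold=0.353):
    """Flag RR-interval windows as AF when the entropy of their symbolic
    dynamics exceeds the threshold. The uniform quantization below is a
    stand-in for the published nonlinear/linear integer filters."""
    rr = np.asarray(rr_intervals, dtype=float)
    flags = []
    for start in range(len(rr) - window + 1):
        w = rr[start:start + window]
        lo, hi = w.min(), w.max()
        # Map each interval to one of n_levels symbols within the window range.
        symbols = np.minimum(
            ((w - lo) / max(hi - lo, 1e-9) * n_levels).astype(int), n_levels - 1)
        flags.append(shannon_entropy(symbols, n_levels) > threshold)
    return np.array(flags)
```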
Recent research efforts enable study for natural language grounded navigation in photo-realistic environments, e.g., following natural language instructions or dialog. However, existing methods tend to overfit training data in seen environments and fail to generalize well in previously unseen environments. To close the gap between seen and unseen environments, we aim at learning a generalized navigation model from two novel perspectives: (1) we introduce a multitask navigation model that can be seamlessly trained on both Vision-Language Navigation (VLN) and Navigation from Dialog History (NDH) tasks, which benefits from richer natural language guidance and effectively transfers knowledge across tasks; (2) we propose to learn environment-agnostic representations for the navigation policy that are invariant among the environments seen during training, thus generalizing better on unseen environments. Extensive experiments show that environment-agnostic multitask learning significantly reduces the performance gap between seen and unseen environments, and the navigation agent trained so outperforms baselines on unseen environments by 16% (relative measure on success rate) on VLN and 120% (goal progress) on NDH. Our submission to the CVDN leaderboard establishes a new state-of-the-art for the NDH task on the holdout test set. Code is available at https://github.com/google-research/valan. | [] | [
"Vision-Language Navigation"
] | [] | [
"VLN Challenge",
"Cooperative Vision-and-Dialogue Navigation"
] | [
"length",
"spl",
"oracle success",
"dist_to_end_reduction",
"success",
"error"
] | Environment-agnostic Multitask Learning for Natural Language Grounded Navigation |
Segmentation is a fundamental task in medical image analysis. However, most existing methods focus on primary region extraction and ignore edge information, which is useful for obtaining accurate segmentation. In this paper, we propose a generic medical segmentation method, called Edge-aTtention guidance Network (ET-Net), which embeds edge-attention representations to guide the segmentation network. Specifically, an edge guidance module is utilized to learn the edge-attention representations in the early encoding layers, which are then transferred to the multi-scale decoding layers, fused using a weighted aggregation module. The experimental results on four segmentation tasks (i.e., optic disc/cup and vessel segmentation in retinal images, and lung segmentation in chest X-Ray and CT images) demonstrate that preserving edge-attention representations contributes to the final segmentation accuracy, and our proposed method outperforms current state-of-the-art segmentation methods. The source code of our method is available at https://github.com/ZzzJzzZ/ETNet. | [] | [
"Medical Image Segmentation",
"Optic Disc Segmentation",
"Semantic Segmentation"
] | [] | [
"Montgomery County",
"LUNA",
"DRIVE"
] | [
"mIoU",
"Accuracy"
] | ET-Net: A Generic Edge-aTtention Guidance Network for Medical Image Segmentation |
We present a transition-based AMR parser that directly generates AMR parses
from plain text. We use Stack-LSTMs to represent our parser state and make
decisions greedily. In our experiments, we show that our parser achieves very
competitive scores on English using only AMR training data. Adding additional
information, such as POS tags and dependency trees, improves the results
further. | [] | [
"AMR Parsing"
] | [] | [
"LDC2014T12"
] | [
"F1 Newswire",
"F1 Full"
] | AMR Parsing using Stack-LSTMs |
Neural architecture search (NAS) has shown promising results discovering models that are both accurate and fast. For NAS, training a one-shot model has become a popular strategy to rank the relative quality of different architectures (child models) using a single set of shared weights. However, while one-shot model weights can effectively rank different network architectures, the absolute accuracies from these shared weights are typically far below those obtained from stand-alone training. To compensate, existing methods assume that the weights must be retrained, finetuned, or otherwise post-processed after the search is completed. These steps significantly increase the compute requirements and complexity of the architecture search and model deployment. In this work, we propose BigNAS, an approach that challenges the conventional wisdom that post-processing of the weights is necessary to get good prediction accuracies. Without extra retraining or post-processing steps, we are able to train a single set of shared weights on ImageNet and use these weights to obtain child models whose sizes range from 200 to 1000 MFLOPs. Our discovered model family, BigNASModels, achieve top-1 accuracies ranging from 76.5% to 80.9%, surpassing state-of-the-art models in this range including EfficientNets and Once-for-All networks without extra retraining or post-processing. We present ablative study and analysis to further understand the proposed BigNASModels. | [] | [
"Neural Architecture Search"
] | [] | [
"ImageNet"
] | [
"Top-1 Error Rate",
"MACs",
"Params",
"Accuracy"
] | BigNAS: Scaling Up Neural Architecture Search with Big Single-Stage Models |
Estimating individual level treatment effects (ITE) from observational data is a challenging and important area in causal machine learning and is commonly considered in diverse mission-critical applications. In this paper, we propose an information theoretic approach in order to find more reliable representations for estimating ITE. We leverage the Information Bottleneck (IB) principle, which addresses the trade-off between conciseness and predictive power of representation. With the introduction of an extended graphical model for causal information bottleneck, we encourage the independence between the learned representation and the treatment type. We also introduce an additional form of a regularizer from the perspective of understanding ITE in the semi-supervised learning framework to ensure more reliable representations. Experimental results show that our model achieves the state-of-the-art results and exhibits more reliable prediction performances with uncertainty information on real-world datasets. | [] | [
"Causal Inference"
] | [] | [
"IDHP"
] | [
"Average Treatment Effect Error"
] | Reliable Estimation of Individual Treatment Effect with Causal Information Bottleneck |
Accurately segmenting nuclei instances is a crucial step in computer-aided
image analysis to extract rich features for cellular estimation and subsequent
diagnosis and treatment. However, it remains challenging: the
wide existence of nuclei clusters, along with the large morphological variances
among different organs, makes nuclei instance segmentation susceptible to
over-/under-segmentation. Additionally, the inevitably subjective annotation
and mislabeling prevent the network from learning from reliable samples and
eventually reduce the generalization capability for robustly segmenting unseen
organ nuclei. To address these issues, we propose a novel deep neural network,
namely Contour-aware Informative Aggregation Network (CIA-Net) with multi-level
information aggregation module between two task-specific decoders. Rather than
independent decoders, it leverages the merit of spatial and texture
dependencies between nuclei and contour by bi-directionally aggregating
task-specific features. Furthermore, we propose a novel smooth truncated loss
that modulates losses to reduce the perturbation from outliers. Consequently,
the network can focus on learning from reliable and informative samples, which
inherently improves the generalization capability. Experiments on the 2018
MICCAI challenge of Multi-Organ-Nuclei-Segmentation validated the effectiveness
of our proposed method, surpassing all the other 35 competitive teams by a
significant margin. | [] | [
"Instance Segmentation",
"Multi-tissue Nucleus Segmentation",
"Semantic Segmentation"
] | [] | [
"Kumar"
] | [
"Hausdorff Distance (mm)",
"Dice"
] | CIA-Net: Robust Nuclei Instance Segmentation with Contour-aware Information Aggregation |
Domain adaptation (DA) and domain generalization (DG) have emerged as a solution to the domain shift problem where the distribution of the source and target data is different. The task of DG is more challenging than DA as the target data is totally unseen during the training phase in DG scenarios. The current state of the art employs adversarial techniques; however, these are rarely considered for the DG problem. Furthermore, these approaches do not consider correlation alignment, which has been proven highly beneficial for minimizing domain discrepancy. In this paper, we propose a correlation-aware adversarial DA and DG framework where the discrepancy between the source and target features is minimized using correlation alignment along with adversarial learning. Incorporating the correlation alignment module along with adversarial learning helps to achieve a more domain-agnostic model due to the improved ability to reduce domain discrepancy with unlabeled target data more effectively. Experiments on benchmark datasets serve as evidence that our proposed method yields improved state-of-the-art performance. | [] | [
"Domain Adaptation",
"Domain Generalization"
] | [] | [
"Office-31",
"Office-Home",
"ImageCLEF-DA"
] | [
"Average Accuracy",
"Accuracy"
] | Correlation-aware Adversarial Domain Adaptation and Generalization |
In this paper, we analyze neural network-based dialogue systems trained in an end-to-end manner using an updated version of the recent Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This dataset is interesting because of its size, long context lengths, and technical nature; thus, it can be used to train large models directly from data with minimal feature engineering. We provide baselines in two different environments: one where models are trained to select the correct next response from a list of candidate responses, and one where models are trained to maximize the loglikelihood of a generated utterance conditioned on the context of the conversation. These are both evaluated on a recall task that we call next utterance classification (NUC), and using vector-based metrics that capture the topicality of the responses. We observe that current end-to-end models are unable to completely solve these tasks; thus, we provide a qualitative error analysis to determine the primary causes of error for end-to-end models evaluated on NUC, and examine sample utterances from the generative models. As a result of this analysis, we suggest some promising directions for future research on the Ubuntu Dialogue Corpus, which can also be applied to end-to-end dialogue systems in general. | [] | [
"Conversation Disentanglement",
"Feature Engineering"
] | [] | [
"Linux IRC (Ch2 Elsner)",
"Linux IRC (Ch2 Kummerfeld)"
] | [
"1-1",
"Shen F-1",
"Local"
] | Training End-to-End Dialogue Systems with the Ubuntu Dialogue Corpus |
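The next-utterance-classification (NUC) evaluation described in this abstract reduces to Recall@k over ranked candidate responses. A minimal sketch, assuming the model exposes a `score(context, candidate)` function (a hypothetical interface, not the paper's code):

```python
from typing import Callable, Sequence, Tuple

def recall_at_k(
    score: Callable[[str, str], float],
    examples: Sequence[Tuple[str, Sequence[str], int]],
    k: int = 1,
) -> float:
    """Recall@k for next-utterance classification (NUC).

    Each example is (context, candidate_responses, index_of_true_response),
    and `score` is assumed to return higher values for better responses.
    """
    hits = 0
    for context, candidates, true_idx in examples:
        # Rank candidates by model score, best first.
        ranked = sorted(
            range(len(candidates)),
            key=lambda i: score(context, candidates[i]),
            reverse=True,
        )
        if true_idx in ranked[:k]:
            hits += 1
    return hits / len(examples)
```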
Object segmentation and structure localization are important steps in
automated image analysis pipelines for microscopy images. We present a
convolutional neural network (CNN) based deep learning architecture for
segmentation of objects in microscopy images. The proposed network can be used
to segment cells, nuclei and glands in fluorescence microscopy and histology
images after slight tuning of input parameters. The network trains at multiple
resolutions of the input image, connects the intermediate layers for better
localization and context and generates the output using multi-resolution
deconvolution filters. The extra convolutional layers which bypass the
max-pooling operation allow the network to train for variable input intensities
and object size and make it robust to noisy data. We compare our results on
publicly available data sets and show that the proposed network outperforms
recent deep learning algorithms. | [] | [
"Multi-tissue Nucleus Segmentation",
"Semantic Segmentation"
] | [] | [
"Kumar"
] | [
"Hausdorff Distance (mm)",
"Dice"
] | Micro-Net: A unified model for segmentation of various objects in microscopy images |
We present our system for the WNUT 2017 Named Entity Recognition challenge on Twitter data. We describe two modifications of a basic neural network architecture for sequence tagging. First, we show how we exploit additional labeled data, where the Named Entity tags differ from the target task. Then, we propose a way to incorporate sentence level features. Our system uses both methods and ranked second for entity level annotations, achieving an F1-score of 40.78, and second for surface form annotations, achieving an F1-score of 39.33. | [] | [
"Named Entity Recognition",
"Transfer Learning"
] | [] | [
"Long-tail emerging entities"
] | [
"F1 (surface form)",
"F1"
] | Transfer Learning and Sentence Level Features for Named Entity Recognition on Tweets |
Recently, emotion detection in conversations has become a hot research topic in the Natural Language Processing community. In this paper, we focus on emotion detection in multi-speaker conversations instead of the traditional two-speaker conversations of existing studies. Different from non-conversation text, emotion detection in conversation text has one specific challenge in modeling the context-sensitive dependence. Besides, emotion detection in multi-speaker conversations poses another specific challenge in modeling the speaker-sensitive dependence. To address the above two challenges, we propose a conversational graph-based convolutional neural network. On the one hand, our approach represents each utterance and each speaker as a node. On the other hand, the context-sensitive dependence is represented by an undirected edge between two utterance nodes from the same conversation, and the speaker-sensitive dependence is represented by an undirected edge between an utterance node and its speaker node. In this way, the entire conversational corpus can be symbolized as a large heterogeneous graph, and the emotion detection task can be recast as a classification problem over the utterance nodes in the graph. The experimental results on a multi-modal and multi-speaker conversation corpus demonstrate the effectiveness of the proposed approach. | [] | [
"Emotion Recognition in Conversation"
] | [] | [
"MELD"
] | [
"Weighted Macro-F1"
] | Modeling both context- and speaker-sensitive dependence for emotion detection in multi-speaker conversations |
Self-training is a competitive approach in domain adaptive segmentation, which trains the network with the pseudo labels on the target domain. However, the pseudo labels are inevitably noisy and the target features are dispersed due to the discrepancy between source and target domains. In this paper, we rely on representative prototypes, the feature centroids of classes, to address the two issues for unsupervised domain adaptation. In particular, we take one step further and exploit the feature distances from prototypes, which provide richer information than the prototypes alone. Specifically, we use them to estimate the likelihood of pseudo labels to facilitate online correction in the course of training. Meanwhile, we align the prototypical assignments based on relative feature distances for two different views of the same target, producing a more compact target feature space. Moreover, we find that distilling the already learned knowledge to a self-supervised pretrained model further boosts the performance. Our method shows a tremendous performance advantage over state-of-the-art methods. We will make the code publicly available. | [] | [
"Domain Adaptation",
"Image-to-Image Translation",
"Semantic Segmentation",
"Synthetic-to-Real Translation",
"Unsupervised Domain Adaptation"
] | [] | [
"GTA5 to Cityscapes",
"GTAV-to-Cityscapes Labels",
"SYNTHIA-to-Cityscapes"
] | [
"mIoU (13 classes)",
"mIoU"
] | Prototypical Pseudo Label Denoising and Target Structure Learning for Domain Adaptive Semantic Segmentation |
Deep learning based general language models have achieved state-of-the-art results in many popular tasks such as sentiment analysis and QA tasks. Text in domains like social media has its own salient characteristics. Domain knowledge should be helpful in domain relevant tasks. In this work, we devise a simple method to obtain domain knowledge and further propose a method to integrate domain knowledge with general knowledge based on deep language models to improve performance of emotion classification. Experiments on Twitter data show that even though a deep language model fine-tuned by a target domain data has attained comparable results to that of previous state-of-the-art models, this fine-tuned model can still benefit from our extracted domain knowledge to obtain more improvement. This highlights the importance of making use of domain knowledge in domain-specific applications. | [] | [
"Emotion Classification",
"Language Modelling",
"Sentiment Analysis"
] | [] | [
"SemEval 2018 Task 1E-c"
] | [
"Micro-F1",
"Macro-F1",
"Accuracy"
] | Improving Multi-label Emotion Classification by Integrating both General and Domain-specific Knowledge |
With the rise in popularity of machine and deep learning models, there is an increased focus on their vulnerability to malicious inputs. These adversarial examples drift model predictions away from the original intent of the network and are a growing concern in practical security. In order to combat these attacks, neural networks can leverage traditional image processing approaches or state-of-the-art defensive models to reduce perturbations in the data. Defensive approaches that take a global approach to noise reduction are effective against adversarial attacks, however their lossy approach often distorts important data within the image. In this work, we propose a visual saliency based approach to cleaning data affected by an adversarial attack. Our model leverages the salient regions of an adversarial image in order to provide a targeted countermeasure while comparatively reducing loss within the cleaned images. We measure the accuracy of our model by evaluating the effectiveness of state-of-the-art saliency methods prior to attack, under attack, and after application of cleaning methods. We demonstrate the effectiveness of our proposed approach in comparison with related defenses and against established adversarial attack methods, across two saliency datasets. Our targeted approach shows significant improvements in a range of standard statistical and distance saliency metrics, in comparison with both traditional and state-of-the-art approaches. | [] | [
"Adversarial Attack",
"Music Genre Recognition"
] | [] | [
"1B Words"
] | [
"10 Hops"
] | SAD: Saliency-based Defenses Against Adversarial Examples |
3D semantic scene labeling is fundamental to agents operating in the real
world. In particular, labeling raw 3D point sets from sensors provides
fine-grained semantics. Recent works leverage the capabilities of Neural
Networks (NNs), but are limited to coarse voxel predictions and do not
explicitly enforce global consistency. We present SEGCloud, an end-to-end
framework to obtain 3D point-level segmentation that combines the advantages of
NNs, trilinear interpolation (TI) and fully connected Conditional Random Fields
(FC-CRF). Coarse voxel predictions from a 3D Fully Convolutional NN are
transferred back to the raw 3D points via trilinear interpolation. Then the
FC-CRF enforces global consistency and provides fine-grained semantics on the
points. We implement the latter as a differentiable Recurrent NN to allow joint
optimization. We evaluate the framework on two indoor and two outdoor 3D
datasets (NYU V2, S3DIS, KITTI, Semantic3D.net), and show performance
comparable or superior to the state-of-the-art on all datasets. | [] | [
"Semantic Segmentation"
] | [] | [
"Semantic3D",
"S3DIS Area5"
] | [
"mAcc",
"mIoU"
] | SEGCloud: Semantic Segmentation of 3D Point Clouds |
This paper addresses the problem of 3D human pose estimation from a single
image. We follow a standard two-step pipeline by first detecting the 2D
position of the $N$ body joints, and then using these observations to infer 3D
pose. For the first step, we use a recent CNN-based detector. For the second
step, most existing approaches perform 2$N$-to-3$N$ regression of the Cartesian
joint coordinates. We show that more precise pose estimates can be obtained by
representing both the 2D and 3D human poses using $N\times N$ distance
matrices, and formulating the problem as a 2D-to-3D distance matrix regression.
For learning such a regressor, we leverage simple Neural Network
architectures, which by construction, enforce positivity and symmetry of the
predicted matrices. The approach also has the advantage of naturally handling
missing observations and allows hypothesizing the position of non-observed
joints. Quantitative results on Humaneva and Human3.6M datasets demonstrate
consistent performance gains over state-of-the-art. Qualitative evaluation on
the images in-the-wild of the LSP dataset, using the regressor learned on
Human3.6M, reveals very promising generalization results. | [] | [
"3D Human Pose Estimation",
"Pose Estimation",
"Regression"
] | [] | [
"HumanEva-I"
] | [
"Mean Reconstruction Error (mm)"
] | 3D Human Pose Estimation from a Single Image via Distance Matrix Regression |
Faster RCNN has achieved great success for generic object detection including
PASCAL object detection and MS COCO object detection. In this report, we
propose a carefully designed Faster RCNN method named FDNet1.0 for face
detection. Several techniques were employed including multi-scale training,
multi-scale testing, light-designed RCNN, some tricks for inference and a
vote-based ensemble method. Our method achieves two 1st places and one 2nd
place in three tasks over WIDER FACE validation dataset (easy set, medium set,
hard set). | [] | [
"Face Detection",
"Object Detection"
] | [] | [
"WIDER Face (Hard)",
"WIDER Face (Medium)",
"WIDER Face (Easy)"
] | [
"AP"
] | Face Detection Using Improved Faster RCNN |
Point cloud data from 3D LiDAR sensors is one of the most crucial sensor modalities for versatile safety-critical applications such as self-driving vehicles. Since the annotation of point cloud data is an expensive and time-consuming process, the utilisation of simulated environments and 3D LiDAR sensors for this task has recently gained popularity. With simulated sensors and environments, the process of obtaining annotated synthetic point cloud data becomes much easier. However, the generated synthetic point cloud data still lack the artefacts that usually exist in point cloud data from real 3D LiDAR sensors. As a result, the performance of models trained on this data for perception tasks degrades when tested on real point cloud data, due to the domain shift between simulated and real environments. Thus, in this work, we propose a domain adaptation framework for bridging this gap between synthetic and real point cloud data. Our proposed framework is based on the deep cycle-consistent generative adversarial network (CycleGAN) architecture. We evaluate the performance of our proposed framework on the task of vehicle detection from bird's eye view (BEV) point cloud images coming from real 3D LiDAR sensors. The framework shows competitive results with an improvement of more than 7% in average precision score over other baseline approaches when tested on real BEV point cloud images. | [] | [
"Domain Adaptation",
"Unsupervised Domain Adaptation"
] | [] | [
"PreSIL to KITTI"
] | [
"[email protected]"
] | Domain Adaptation for Vehicle Detection from Bird's Eye View LiDAR Point Cloud Data |
Emotion recognition in conversations (ERC) has received much attention recently in the natural language processing community. Considering that the emotions of the utterances in conversations are interactive, previous works usually implicitly model the emotion interaction between utterances by modeling dialogue context, but the misleading emotion information from context often interferes with the emotion interaction. We noticed that the gold emotion labels of the context utterances can provide explicit and accurate emotion interaction, but it is impossible to input gold labels at inference time. To address this problem, we propose an iterative emotion interaction network, which uses iteratively predicted emotion labels instead of gold emotion labels to explicitly model the emotion interaction. This approach solves the above problem, and can effectively retain the performance advantages of explicit modeling. We conduct experiments on two datasets, and our approach achieves state-of-the-art performance. | [] | [
"Emotion Recognition",
"Emotion Recognition in Conversation"
] | [] | [
"IEMOCAP",
"MELD"
] | [
"Weighted Macro-F1",
"F1"
] | An Iterative Emotion Interaction Network for Emotion Recognition in Conversations |
Three-dimensional objects are commonly represented as 3D boxes in a point-cloud. This representation mimics the well-studied image-based 2D bounding-box detection but comes with additional challenges. Objects in a 3D world do not follow any particular orientation, and box-based detectors have difficulties enumerating all orientations or fitting an axis-aligned bounding box to rotated objects. In this paper, we instead propose to represent, detect, and track 3D objects as points. Our framework, CenterPoint, first detects centers of objects using a keypoint detector and regresses to other attributes, including 3D size, 3D orientation, and velocity. In a second stage, it refines these estimates using additional point features on the object. In CenterPoint, 3D object tracking simplifies to greedy closest-point matching. The resulting detection and tracking algorithm is simple, efficient, and effective. CenterPoint achieved state-of-the-art performance on the nuScenes benchmark for both 3D detection and tracking, with 65.5 NDS and 63.8 AMOTA for a single model. On the Waymo Open Dataset, CenterPoint outperforms all previous single-model methods by a large margin and ranks first among all Lidar-only submissions. The code and pretrained models are available at https://github.com/tianweiy/CenterPoint. | [] | [
"3D Multi-Object Tracking",
"3D Object Detection",
"3D Object Tracking",
"Object Detection",
"Object Tracking"
] | [] | [
"waymo pedestrian",
"waymo cyclist",
"nuScenes",
"waymo all_ns"
] | [
"mAAE",
"mAP",
"APH/L2",
"mAVE",
"mASE",
"mAOE",
"NDS",
"amota",
"mATE"
] | Center-based 3D Object Detection and Tracking |
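The claim above that tracking "simplifies to greedy closest-point matching" can be sketched as follows, assuming each detection carries a 3D centre and a velocity estimate; the time step and distance gate are illustrative parameters, not values from the paper.

```python
import numpy as np

def greedy_match(prev_tracks, detections, dt=0.1, max_dist=2.0):
    """Greedy closest-point association between consecutive frames.

    prev_tracks: list of dicts with 'center' and 'velocity' as (3,) arrays.
    detections:  list of dicts with 'center' as a (3,) array.
    Returns a list of (track_index, detection_index) matches.
    """
    if not prev_tracks or not detections:
        return []
    # Predict where each existing track should be in the current frame.
    predicted = np.stack([t["center"] + dt * t["velocity"] for t in prev_tracks])
    centers = np.stack([d["center"] for d in detections])
    dists = np.linalg.norm(predicted[:, None, :] - centers[None, :, :], axis=-1)

    matches, used_t, used_d = [], set(), set()
    # Take the closest unmatched (track, detection) pairs in ascending order,
    # skipping pairs beyond the distance gate.
    for ti, di in zip(*np.unravel_index(np.argsort(dists, axis=None), dists.shape)):
        if ti in used_t or di in used_d or dists[ti, di] > max_dist:
            continue
        matches.append((int(ti), int(di)))
        used_t.add(ti)
        used_d.add(di)
    return matches
```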
This paper revisits the bilinear attention networks in the visual question answering task from a graph perspective. The classical bilinear attention networks build a bilinear attention map to extract the joint representation of words in the question and objects in the image but lack fully exploring the relationship between words for complex reasoning. In contrast, we develop bilinear graph networks to model the context of the joint embeddings of words and objects. Two kinds of graphs are investigated, namely image-graph and question-graph. The image-graph transfers features of the detected objects to their related query words, enabling the output nodes to have both semantic and factual information. The question-graph exchanges information between these output nodes from image-graph to amplify the implicit yet important relationship between objects. These two kinds of graphs cooperate with each other, and thus our resulting model can model the relationship and dependency between objects, which leads to the realization of multi-step reasoning. Experimental results on the VQA v2.0 validation dataset demonstrate the ability of our method to handle the complex questions. On the test-std set, our best single model achieves state-of-the-art performance, boosting the overall accuracy to 72.41%. | [] | [
"Question Answering",
"Visual Question Answering"
] | [] | [
"VQA v2 test-std",
"GQA Test2019"
] | [
"Binary",
"number",
"overall",
"other",
"Validity",
"Consistency",
"Plausibility",
"Distribution",
"yes/no",
"Accuracy",
"Open"
] | Bilinear Graph Networks for Visual Question Answering |
There is a natural correlation between the visual and auditive elements of a
video. In this work we leverage this connection to learn general and effective
models for both audio and video analysis from self-supervised temporal
synchronization. We demonstrate that a calibrated curriculum learning scheme, a
careful choice of negative examples, and the use of a contrastive loss are
critical ingredients to obtain powerful multi-sensory representations from
models optimized to discern temporal synchronization of audio-video pairs.
Without further finetuning, the resulting audio features achieve performance
superior or comparable to the state-of-the-art on established audio
classification benchmarks (DCASE2014 and ESC-50). At the same time, our visual
subnet provides a very effective initialization to improve the accuracy of
video-based action recognition models: compared to learning from scratch, our
self-supervised pretraining yields a remarkable gain of +19.9% in action
recognition accuracy on UCF101 and a boost of +17.7% on HMDB51. | [] | [
"Action Recognition",
"Audio Classification",
"Curriculum Learning",
"Temporal Action Localization"
] | [] | [
"ESC-50"
] | [
"Top-1 Accuracy"
] | Cooperative Learning of Audio and Video Models from Self-Supervised Synchronization |
Obtaining large, human labelled speech datasets to train models for emotion
recognition is a notoriously challenging task, hindered by annotation cost and
label ambiguity. In this work, we consider the task of learning embeddings for
speech classification without access to any form of labelled audio. We base our
approach on a simple hypothesis: that the emotional content of speech
correlates with the facial expression of the speaker. By exploiting this
relationship, we show that annotations of expression can be transferred from
the visual domain (faces) to the speech domain (voices) through cross-modal
distillation. We make the following contributions: (i) we develop a strong
teacher network for facial emotion recognition that achieves the state of the
art on a standard benchmark; (ii) we use the teacher to train a student, tabula
rasa, to learn representations (embeddings) for speech emotion recognition
without access to labelled audio data; and (iii) we show that the speech
emotion embedding can be used for speech emotion recognition on external
benchmark datasets. Code, models and data are available. | [] | [
"Emotion Recognition",
"Facial Expression Recognition",
"Speech Emotion Recognition"
] | [] | [
"FERPlus"
] | [
"Accuracy"
] | Emotion Recognition in Speech using Cross-Modal Transfer in the Wild |
Conversational Emotion Recognition (CER) is a crucial task in Natural Language Processing (NLP) with wide applications. Prior works in CER generally focus on modeling emotion influences solely with utterance-level features, with little attention paid to phrase-level semantic connections between utterances. Phrases carry sentiments when they refer to emotional events under certain topics, providing a global semantic connection between utterances throughout the entire conversation. In this work, we propose a two-stage Summarization and Aggregation Graph Inference Network (SumAggGIN), which seamlessly integrates inference for topic-related emotional phrases and local dependency reasoning over neighbouring utterances in a global-to-local fashion. Topic-related emotional phrases, which constitute the global topic-related emotional connections, are recognized by our proposed heterogeneous Summarization Graph. Local dependencies, which capture short-term emotional effects between neighbouring utterances, are further injected via an Aggregation Graph to distinguish the subtle differences between utterances containing emotional phrases. The two steps of graph inference are tightly coupled for a comprehensive understanding of emotional fluctuation. Experimental results on three CER benchmark datasets verify the effectiveness of our proposed model, which outperforms the state-of-the-art approaches. | [] | [
"Emotion Recognition",
"Emotion Recognition in Conversation"
] | [] | [
"IEMOCAP",
"MELD"
] | [
"Weighted Macro-F1",
"F1",
"Accuracy"
] | Summarize before Aggregate: A Global-to-local Heterogeneous Graph Inference Network for Conversational Emotion Recognition |
Temporal coherence is a valuable source of information in the context of
optical flow estimation. However, finding a suitable motion model to leverage
this information is a non-trivial task. In this paper we propose an
unsupervised online learning approach based on a convolutional neural network
(CNN) that estimates such a motion model individually for each frame. By
relating forward and backward motion these learned models not only allow to
infer valuable motion information based on the backward flow, they also help to
improve the performance at occlusions, where a reliable prediction is
particularly useful. Moreover, our learned models are spatially variant and
hence allow estimating non-rigid motion by construction. This, in turn,
allows us to overcome the major limitation of recent rigidity-based approaches
that seek to improve the estimation by incorporating additional stereo/SfM
constraints. Experiments demonstrate the usefulness of our new approach. They
not only show a consistent improvement of up to 27% for all major benchmarks
(KITTI 2012, KITTI 2015, MPI Sintel) compared to a baseline without prediction,
they also show top results for the MPI Sintel benchmark -- the one of the three
benchmarks that contains the largest amount of non-rigid motion. | [] | [
"Optical Flow Estimation"
] | [] | [
"Sintel-clean"
] | [
"Average End-Point Error"
] | ProFlow: Learning to Predict Optical Flow |
Information selection is the most important component of the document summarization task. In this paper, we propose to extend the basic neural encoding-decoding framework with an information selection layer to explicitly model and optimize the information selection process in abstractive document summarization. Specifically, our information selection layer consists of two parts: gated global information filtering and local sentence selection. Unnecessary information in the original document is first globally filtered, then salient sentences are selected locally while generating each summary sentence sequentially. To optimize the information selection process directly, distantly-supervised training guided by the gold summary is also incorporated. Experimental results demonstrate that explicitly modeling and optimizing the information selection process improves document summarization performance significantly, which enables our model to generate more informative and concise summaries, and thus significantly outperform state-of-the-art neural abstractive methods. | [] | [
"Abstractive Text Summarization",
"Document Summarization",
"Machine Translation",
"Text Generation"
] | [] | [
"CNN / Daily Mail"
] | [
"ROUGE-L",
"ROUGE-1",
"ROUGE-2"
] | Improving Neural Abstractive Document Summarization with Explicit Information Selection Modeling |
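The ROUGE-1/2/L metrics listed for this entry (and the next) can be reproduced with the `rouge-score` package; a minimal sketch with placeholder strings:

```python
from rouge_score import rouge_scorer

# Score a generated summary against a reference with ROUGE-1/2/L F-measures.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

reference = "the cat sat on the mat"          # placeholder reference summary
prediction = "a cat was sitting on the mat"   # placeholder system summary

scores = scorer.score(reference, prediction)
for name, result in scores.items():
    print(f"{name}: P={result.precision:.3f} R={result.recall:.3f} F={result.fmeasure:.3f}")
```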
Recent neural sequence-to-sequence models have shown significant progress on short text summarization. However, for document summarization, they fail to capture the long-term structure of both documents and multi-sentence summaries, resulting in information loss and repetitions. In this paper, we propose to leverage the structural information of both documents and multi-sentence summaries to improve the document summarization performance. Specifically, we import both structural-compression and structural-coverage regularization into the summarization process in order to capture the information compression and information coverage properties, which are the two most important structural properties of document summarization. Experimental results demonstrate that the structural regularization improves the document summarization performance significantly, which enables our model to generate more informative and concise summaries, and thus significantly outperforms state-of-the-art neural abstractive methods. | [] | [
"Abstractive Text Summarization",
"Document Summarization",
"Machine Translation",
"Sentence Summarization",
"Text Generation",
"Text Summarization"
] | [] | [
"CNN / Daily Mail"
] | [
"ROUGE-L",
"ROUGE-1",
"ROUGE-2"
] | Improving Neural Abstractive Document Summarization with Structural Regularization |
Existing named entity recognition (NER) systems rely on large amounts of human-labeled data for supervision. However, obtaining large-scale annotated data is challenging particularly in specific domains like health-care, e-commerce and so on. Given the availability of domain specific knowledge resources, (e.g., ontologies, dictionaries), distant supervision is a solution to generate automatically labeled training data to reduce human effort. The outcome of distant supervision for NER, however, is often noisy. False positive and false negative instances are the main issues that reduce performance on this kind of auto-generated data. In this paper, we explore distant supervision in a supervised setup. We adopt a technique of partial annotation to address false negative cases and implement a reinforcement learning strategy with a neural network policy to identify false positive instances. Our results establish a new state-of-the-art on four benchmark datasets taken from different domains and different languages. We then go on to show that our model reduces the amount of manually annotated data required to perform NER in a new domain. | [] | [
"Denoising",
"Named Entity Recognition"
] | [] | [
"BC5CDR"
] | [
"F1"
] | Reinforcement-based denoising of distantly supervised NER with partial annotation |
Recently, anchor-free detection methods have made great progress. The two major families, anchor-point detection and key-point detection, are at opposite ends of the speed-accuracy trade-off, with anchor-point detectors having the speed advantage. In this work, we boost the performance of the anchor-point detector over its key-point counterparts while maintaining the speed advantage. To achieve this, we formulate the detection problem from the anchor point's perspective and identify ineffective training as the main problem. Our key insight is that anchor points should be optimized jointly as a group both within and across feature pyramid levels. We propose a simple yet effective training strategy with soft-weighted anchor points and soft-selected pyramid levels to address the false attention issue within each pyramid level and the feature selection issue across all the pyramid levels, respectively. To evaluate the effectiveness, we train a single-stage anchor-free detector called Soft Anchor-Point Detector (SAPD). Experiments show that our concise SAPD pushes the envelope of the speed/accuracy trade-off to a new level, outperforming recent state-of-the-art anchor-free and anchor-based detectors. Without bells and whistles, our best model achieves a single-model single-scale AP of 47.4% on COCO. | [] | [
"Feature Selection",
"Object Detection"
] | [] | [
"COCO test-dev"
] | [
"APM",
"box AP",
"AP75",
"APS",
"APL",
"AP50"
] | Soft Anchor-Point Object Detection |
This work studies the problem of object goal navigation, which involves navigating to an instance of the given object category in unseen environments. End-to-end learning-based navigation methods struggle at this task as they are ineffective at exploration and long-term planning. We propose a modular system called `Goal-Oriented Semantic Exploration', which builds an episodic semantic map and uses it to explore the environment efficiently based on the goal object category. Empirical results in visually realistic simulation environments show that the proposed model outperforms a wide range of baselines, including end-to-end learning-based methods as well as modular map-based methods, and led to the winning entry of the CVPR-2020 Habitat ObjectNav Challenge. Ablation analysis indicates that the proposed model learns semantic priors of the relative arrangement of objects in a scene, and uses them to explore efficiently. The domain-agnostic module design allows us to transfer our model to a mobile robot platform and achieve similar performance for object goal navigation in the real world. | [] | [
"Robot Navigation"
] | [] | [
"Habitat 2020 Object Nav test-std"
] | [
"SOFT_SPL",
"DISTANCE_TO_GOAL",
"SUCCESS",
"SPL"
] | Object Goal Navigation using Goal-Oriented Semantic Exploration |
Depth estimation provides essential information to perform autonomous driving
and driver assistance. In particular, Monocular Depth Estimation is interesting
from a practical point of view, since using a single camera is cheaper than
many other options and avoids the need for continuous calibration strategies as
required by stereo-vision approaches. State-of-the-art methods for Monocular
Depth Estimation are based on Convolutional Neural Networks (CNNs). A promising
line of work consists of introducing additional semantic information about the
traffic scene when training CNNs for depth estimation. In practice, this means
that the depth data used for CNN training is complemented with images having
pixel-wise semantic labels, which usually are difficult to annotate (e.g.
crowded urban images). Moreover, so far it is common practice to assume that
the same raw training data is associated with both types of ground truth, i.e.,
depth and semantic labels. The main contribution of this paper is to show that
this hard constraint can be circumvented, i.e., that we can train CNNs for
depth estimation by leveraging the depth and semantic information coming from
heterogeneous datasets. In order to illustrate the benefits of our approach, we
combine KITTI depth and Cityscapes semantic segmentation datasets,
outperforming state-of-the-art results on Monocular Depth Estimation. | [] | [
"Autonomous Driving",
"Depth Estimation",
"Monocular Depth Estimation",
"Semantic Segmentation"
] | [] | [
"KITTI Eigen split"
] | [
"absolute relative error"
] | Monocular Depth Estimation by Learning from Heterogeneous Datasets |
Progress in Sentence Simplification has been hindered by the lack of supervised data, particularly in languages other than English. Previous work has aligned sentences from original and simplified corpora such as English Wikipedia and Simple English Wikipedia, but this limits corpus size, domain, and language. In this work, we propose using unsupervised mining techniques to automatically create training corpora for simplification in multiple languages from raw Common Crawl web data. When coupled with a controllable generation mechanism that can flexibly adjust attributes such as length and lexical complexity, these mined paraphrase corpora can be used to train simplification systems in any language. We further incorporate multilingual unsupervised pretraining methods to create even stronger models and show that by training on mined data rather than supervised corpora, we outperform the previous best results. We evaluate our approach on English, French, and Spanish simplification benchmarks and reach state-of-the-art performance with a totally unsupervised approach. We will release our models and code to mine the data in any language included in Common Crawl. | [] | [
"Text Simplification"
] | [] | [
"ASSET",
"TurkCorpus"
] | [
"BLEU",
"SARI (EASSE>=0.2.1)"
] | Multilingual Unsupervised Sentence Simplification |
We introduce GEM, a living benchmark for natural language Generation (NLG), its Evaluation, and Metrics. Measuring progress in NLG relies on a constantly evolving ecosystem of automated metrics, datasets, and human evaluation standards. However, due to this moving target, new models often still evaluate on divergent anglo-centric corpora with well-established, but flawed, metrics. This disconnect makes it challenging to identify the limitations of current models and opportunities for progress. Addressing this limitation, GEM provides an environment in which models can easily be applied to a wide set of corpora and evaluation strategies can be tested. Regular updates to the benchmark will help NLG research become more multilingual and evolve the challenge alongside models. This paper serves as the description of the initial release for which we are organizing a shared task at our ACL 2021 Workshop and to which we invite the entire NLG community to participate. | [] | [
"Abstractive Text Summarization",
"Cross-Lingual Abstractive Summarization",
"Data-to-Text Generation",
"Extreme Summarization",
"Question Answering",
"Task-Oriented Dialogue Systems",
"Text Generation",
"Text Simplification"
] | [] | [
"SGD",
"Cleaned E2E NLG Challenge",
"WebNLG en",
"WebNLG ru",
"MLSUM de",
"MLSUM es",
"ASSET",
"Czech restaurant information",
"TurkCorpus",
"CommonGen",
"DART",
"ToTTo"
] | [
"METEOR"
] | The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics |
The intensive annotation cost and the rich but unlabeled data contained in videos motivate us to propose an unsupervised video-based person re-identification (re-ID) method. We start from two assumptions: 1) different video tracklets typically contain different persons, given that the tracklets are taken at distinct places or with long intervals; 2) within each tracklet, the frames are mostly of the same person. Based on these assumptions, this paper proposes a stepwise metric promotion approach to estimate the identities of training tracklets, which iterates between cross-camera tracklet association and feature learning. Specifically, we use each training tracklet as a query, and perform retrieval in the cross-camera training set. Our method is built on reciprocal nearest neighbor search and can eliminate hard negative label matches, i.e., the cross-camera nearest neighbors of the false matches in the initial rank list. A tracklet that passes the reciprocal nearest neighbor check is considered to have the same ID as the query. Experimental results on the PRID 2011, ILIDS-VID, and MARS datasets show that the proposed method achieves very competitive re-ID accuracy compared with its supervised counterparts.
| [] | [
"Person Re-Identification",
"Video-Based Person Re-Identification"
] | [] | [
"PRID2011"
] | [
"Rank-1",
"Rank-20",
"Rank-5"
] | Stepwise Metric Promotion for Unsupervised Video Person Re-Identification |
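A minimal numpy sketch of the cross-camera reciprocal nearest-neighbor check described in the abstract above; the feature matrices, `k`, and the use of cosine similarity are assumptions, not the paper's exact implementation.

```python
import numpy as np

def reciprocal_nn_matches(query_feats, gallery_feats, k=1):
    """Associate tracklets across cameras only if each is among the other's top-k neighbors."""
    # cosine similarity between L2-normalised tracklet features
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sim = q @ g.T
    # top-k gallery neighbors of each query, and top-k query neighbors of each gallery tracklet
    q2g = np.argsort(-sim, axis=1)[:, :k]
    g2q = np.argsort(-sim.T, axis=1)[:, :k]
    matches = []
    for i in range(sim.shape[0]):
        for j in q2g[i]:
            if i in g2q[j]:          # reciprocal check passed: keep the association
                matches.append((i, int(j)))
    return matches
```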
Sentence simplification aims to simplify the content and structure of complex
sentences, and thus make them easier to interpret for human readers, and easier
to process for downstream NLP applications. Recent advances in neural machine
translation have paved the way for novel approaches to the task. In this paper,
we adapt an architecture with augmented memory capacities called Neural
Semantic Encoders (Munkhdalai and Yu, 2017) for sentence simplification. Our
experiments demonstrate the effectiveness of our approach on different
simplification datasets, both in terms of automatic evaluation measures and
human judgments. | [] | [
"Machine Translation",
"Text Simplification"
] | [] | [
"PWKP / WikiSmall",
"Newsela",
"TurkCorpus"
] | [
"BLEU",
"SARI (EASSE>=0.2.1)",
"SARI"
] | Sentence Simplification with Memory-Augmented Neural Networks |
Sentence simplification aims to improve readability and understandability,
based on several operations such as splitting, deletion, and paraphrasing.
However, a valid simplified sentence should also be logically entailed by its
input sentence. In this work, we first present a strong pointer-copy mechanism
based sequence-to-sequence sentence simplification model, and then improve its
entailment and paraphrasing capabilities via multi-task learning with related
auxiliary tasks of entailment and paraphrase generation. Moreover, we propose a
novel 'multi-level' layered soft sharing approach where each auxiliary task
shares different (higher versus lower) level layers of the sentence
simplification model, depending on the task's semantic versus lexico-syntactic
nature. We also introduce a novel multi-armed bandit based training approach
that dynamically learns how to effectively switch across tasks during
multi-task learning. Experiments on multiple popular datasets demonstrate that
our model outperforms competitive simplification systems in SARI and FKGL
automatic metrics, and human evaluation. Further, we present several ablation
analyses on alternative layer sharing methods, soft versus hard sharing,
dynamic multi-armed bandit sampling approaches, and our model's learned
entailment and paraphrasing skills. | [] | [
"Multi-Task Learning",
"Paraphrase Generation",
"Text Simplification"
] | [] | [
"PWKP / WikiSmall",
"Newsela",
"TurkCorpus"
] | [
"BLEU",
"SARI (EASSE>=0.2.1)",
"SARI"
] | Dynamic Multi-Level Multi-Task Learning for Sentence Simplification |
Multi-person 3D human pose estimation from a single image is a challenging problem, especially for in-the-wild settings due to the lack of 3D annotated data. We propose HG-RCNN, a Mask-RCNN based network that also leverages the benefits of the Hourglass architecture for multi-person 3D Human Pose Estimation. A two-stage approach is presented that first estimates the 2D keypoints in every Region of Interest (RoI) and then lifts the estimated keypoints to 3D. Finally, the estimated 3D poses are placed in camera coordinates using a weak-perspective projection assumption and joint optimization of focal length and root translations. The result is a simple and modular network for multi-person 3D human pose estimation that does not require any multi-person 3D pose dataset. Despite its simple formulation, HG-RCNN achieves state-of-the-art results on MuPoTS-3D while also approximating the 3D pose in the camera-coordinate system. | [] | [
"3D Human Pose Estimation",
"Pose Estimation"
] | [] | [
"MuPoTS-3D"
] | [
"3DPCK"
] | Multi-Person 3D Human Pose Estimation from Monocular Images |
Monocular depth estimation is a challenging task in scene understanding, with the goal of acquiring the geometric properties of 3D space from 2D images. Due to the lack of RGB-depth image pairs, unsupervised learning methods aim at deriving depth information with alternative supervision such as stereo pairs. However, most existing works fail to model the geometric structure of objects, which generally results from considering pixel-level objective functions during training. In this paper, we propose SceneNet to overcome this limitation with the aid of semantic understanding from segmentation. Moreover, our proposed model is able to perform region-aware depth estimation by enforcing semantics consistency between stereo pairs. In our experiments, we qualitatively and quantitatively verify the effectiveness and robustness of our model, which produces favorable results against the state-of-the-art approaches.
| [] | [
"Depth Estimation",
"Monocular Depth Estimation",
"Scene Understanding"
] | [] | [
"KITTI Eigen split"
] | [
"absolute relative error"
] | Towards Scene Understanding: Unsupervised Monocular Depth Estimation With Semantic-Aware Representation |
The key of Weakly Supervised Fine-grained Image Classification (WFGIC) is how to pick out the discriminative regions and learn the discriminative features from them. However, most recent WFGIC methods pick out the discriminative regions independently and utilize their features directly, while neglecting the fact that regions’ features are mutually semantically correlated and region groups can be more discriminative. To address these issues, we propose an end-to-end Graph-propagation based Correlation Learning (GCL) model to fully mine and exploit the discriminative potentials of region correlations for WFGIC. Specifically, in the discriminative
region localization phase, a Criss-cross Graph Propagation (CGP) sub-network is proposed to learn region correlations, which establishes correlation between regions and then enhances each region by a weighted aggregation of other regions in a criss-cross way. In this way, each region’s representation encodes the global image-level context and local spatial context simultaneously, and thus the network is guided to implicitly discover the more powerful discriminative region groups for WFGIC. In the discriminative feature representation phase, the Correlation Feature Strengthening (CFS) sub-network is proposed to explore the internal semantic correlation among discriminative patch feature vectors, to improve their discriminative power by iteratively enhancing informative elements while suppressing the useless ones. Extensive experiments demonstrate the effectiveness of the proposed CGP and CFS sub-networks, and show that the GCL model achieves better performance both in accuracy and efficiency. | [] | [
"Fine-Grained Image Classification",
"Image Classification"
] | [] | [
" CUB-200-2011",
"Stanford Cars",
"FGVC Aircraft"
] | [
"Accuracy"
] | Graph-propagation based Correlation Learning for Weakly Supervised Fine-grained Image Classification |
Unsupervised domain adaptation (UDA) transfers knowledge from a label-rich source domain to a fully-unlabeled target domain. To tackle this task, recent approaches resort to discriminative domain transfer by virtue of pseudo-labels to enforce the class-level distribution alignment across the source and target domains. These methods, however, are vulnerable to error accumulation and thus incapable of preserving cross-domain category consistency, as the pseudo-labeling accuracy is not guaranteed explicitly. In this paper, we propose the Progressive Feature Alignment Network (PFAN) to align the discriminative features across domains progressively and effectively, via exploiting the intra-class variation in the target domain. To be specific, we first develop an Easy-to-Hard Transfer Strategy (EHTS) and an Adaptive Prototype Alignment (APA) step to train our model iteratively and alternately. Moreover, upon observing that a good domain adaptation usually requires a non-saturated source classifier, we consider a simple yet efficient way to retard the convergence speed of the source classification loss by further involving a temperature variable into the softmax function. The extensive experimental results reveal that the proposed PFAN exceeds the state-of-the-art performance on three UDA datasets. | [] | [
"Domain Adaptation",
"Unsupervised Domain Adaptation"
] | [] | [
"SVHN-to-MNIST"
] | [
"Accuracy"
] | Progressive Feature Alignment for Unsupervised Domain Adaptation |
We present an efficient method for detecting anomalies in videos. Recent
applications of convolutional neural networks have shown the promise of
convolutional layers for object detection and recognition, especially in
images. However, convolutional neural networks are supervised and require
labels as learning signals. We propose a spatiotemporal architecture for
anomaly detection in videos including crowded scenes. Our architecture includes
two main components, one for spatial feature representation, and one for
learning the temporal evolution of the spatial features. Experimental results
on Avenue, Subway and UCSD benchmarks confirm that the detection accuracy of
our method is comparable to state-of-the-art methods at a considerable speed of
up to 140 fps. | [] | [
"Anomaly Detection",
"Object Detection"
] | [] | [
"UBI-Fights"
] | [
"AUC"
] | Abnormal Event Detection in Videos using Spatiotemporal Autoencoder |
Current methods for training convolutional neural networks depend on large amounts of labeled samples for supervised training. In this paper we present an approach for training a convolutional neural network using only unlabeled data. We train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. We find that this simple feature learning algorithm is surprisingly successful when applied to visual object recognition. The feature representation learned by our algorithm achieves classification results matching or outperforming the current state-of-the-art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101). | [] | [
"Image Classification",
"Object Recognition"
] | [] | [
"STL-10",
"CIFAR-10"
] | [
"Percentage correct"
] | Discriminative Unsupervised Feature Learning with Convolutional Neural Networks |
PyOD is an open-source Python toolbox for performing scalable outlier detection on multivariate data. Uniquely, it provides access to a wide range of outlier detection algorithms, including established outlier ensembles and more recent neural network-based approaches, under a single, well-documented API designed for use by both practitioners and researchers. With robustness and scalability in mind, best practices such as unit testing, continuous integration, code coverage, maintainability checks, interactive examples and parallelization are emphasized as core components in the toolbox's development. PyOD is compatible with both Python 2 and 3 and can be installed through Python Package Index (PyPI) or https://github.com/yzhao062/pyod. | [] | [
"Anomaly Detection",
"Outlier Detection",
"outlier ensembles"
] | [] | [] | [] | PyOD: A Python Toolbox for Scalable Outlier Detection |
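A minimal usage sketch of the shared PyOD detector API mentioned above, using the KNN detector on synthetic data as one example; the detector choice, contamination value, and data generation are assumptions.

```python
import numpy as np
from pyod.models.knn import KNN   # one of many detectors exposed under a shared API

rng = np.random.RandomState(42)
X_train = np.r_[rng.randn(190, 2), rng.uniform(-6, 6, size=(10, 2))]  # inliers plus a few outliers
X_test = np.r_[rng.randn(95, 2), rng.uniform(-6, 6, size=(5, 2))]

clf = KNN(contamination=0.05)     # detectors share fit / predict / decision_function
clf.fit(X_train)

train_scores = clf.decision_scores_        # raw outlier scores on the training data
test_labels = clf.predict(X_test)          # 0 = inlier, 1 = outlier
test_scores = clf.decision_function(X_test)
```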
Sentiment analysis (SA) is one of the most useful natural language processing applications. The literature is flooded with many papers and systems addressing this task, but most of the work is focused on English. In this paper, we present "Mazajak", an online system for Arabic SA. The system is based on a deep learning model, which achieves state-of-the-art results on many Arabic dialect datasets including SemEval 2017 and ASTD. The availability of such a system should assist various applications and research that rely on sentiment analysis as a tool. | [] | [
"Arabic Sentiment Analysis",
"Sentiment Analysis",
"Twitter Sentiment Analysis"
] | [] | [
"ArSAS",
"SemEval 2017 Task 4-A",
"ASTD"
] | [
"Average Recall"
] | Mazajak: An Online Arabic Sentiment Analyser |
In the absence of large labelled datasets, self-supervised learning techniques
can boost performance by learning useful representations from unlabelled data,
which is often more readily available. However, there is often a domain shift
between the unlabelled collection and the downstream target problem data. We
show that by learning Bayesian instance weights for the unlabelled data, we
can improve the downstream classification accuracy by prioritising the most
useful instances. Additionally, we show that the training time can be reduced by
discarding unnecessary datapoints. Our method, BetaDataWeighter is evaluated
using the popular self-supervised rotation prediction task on STL-10 and Visual
Decathlon. We compare to related instance weighting schemes, both hand-designed
heuristics and meta-learning, as well as conventional self-supervised learning.
BetaDataWeighter achieves both the highest average accuracy and rank across
datasets, and on STL-10 it prunes up to 78% of unlabelled images without significant
loss in accuracy, corresponding to over 50% reduction in training time. | [] | [
"Image Classification",
"Meta-Learning",
"Self-Supervised Learning"
] | [] | [
"STL-10"
] | [
"Percentage correct"
] | Don’t Wait, Just Weight: Improving Unsupervised Representations by Learning Goal-Driven Instance Weights |
Understanding a narrative requires reading between the lines and reasoning
about the unspoken but obvious implications about events and people's mental
states - a capability that is trivial for humans but remarkably hard for
machines. To facilitate research addressing this challenge, we introduce a new
annotation framework to explain naive psychology of story characters as
fully-specified chains of mental states with respect to motivations and
emotional reactions. Our work presents a new large-scale dataset with rich
low-level annotations and establishes baseline performance on several new
tasks, suggesting avenues for future research. | [] | [
"Emotion Classification"
] | [] | [
"ROCStories"
] | [
"F1"
] | Modeling Naive Psychology of Characters in Simple Commonsense Stories |
In active learning, sampling bias could pose a serious inconsistency problem and hinder the algorithm from finding the optimal hypothesis. However, many methods for neural networks are hypothesis space agnostic and do not address this problem. We examine active learning with convolutional neural networks through the principled lens of version space reduction. We identify the connection between two approaches---prior mass reduction and diameter reduction---and propose a new diameter-based querying method---the minimum Gibbs-vote disagreement. By estimating version space diameter and bias, we illustrate how version space of neural networks evolves and examine the realizability assumption. With experiments on MNIST, Fashion-MNIST, SVHN and STL-10 datasets, we demonstrate that diameter reduction methods reduce the version space more effectively and perform better than prior mass reduction and other baselines, and that the Gibbs vote disagreement is on par with the best query method. | [] | [
"Active Learning",
"Image Classification"
] | [] | [
"STL-10"
] | [
"Percentage correct"
] | Effective Version Space Reduction for Convolutional Neural Networks |
Recently, skeleton based action recognition has gained more popularity due to
cost-effective depth sensors coupled with real-time skeleton estimation
algorithms. Traditional approaches based on handcrafted features are limited in
representing the complexity of motion patterns. Recent methods that use Recurrent
Neural Networks (RNN) to handle raw skeletons only focus on the contextual
dependency in the temporal domain and neglect the spatial configurations of
articulated skeletons. In this paper, we propose a novel two-stream RNN
architecture to model both temporal dynamics and spatial configurations for
skeleton based action recognition. We explore two different structures for the
temporal stream: stacked RNN and hierarchical RNN. Hierarchical RNN is designed
according to human body kinematics. We also propose two effective methods to
model the spatial structure by converting the spatial graph into a sequence of
joints. To improve generalization of our model, we further exploit 3D
transformation based data augmentation techniques including rotation and
scaling transformation to transform the 3D coordinates of skeletons during
training. Experiments on 3D action recognition benchmark datasets show that our
method brings a considerable improvement for a variety of actions, i.e.,
generic actions, interaction activities and gestures. | [] | [
"3D Action Recognition",
"Action Recognition",
"Data Augmentation",
"Skeleton Based Action Recognition",
"Temporal Action Localization"
] | [] | [
"NTU RGB+D"
] | [
"Accuracy (CS)",
"Accuracy (CV)"
] | Modeling Temporal Dynamics and Spatial Configurations of Actions Using Two-Stream Recurrent Neural Networks |
We address the unsupervised learning of several interconnected problems in
low-level vision: single view depth prediction, camera motion estimation,
optical flow, and segmentation of a video into the static scene and moving
regions. Our key insight is that these four fundamental vision problems are
coupled through geometric constraints. Consequently, learning to solve them
together simplifies the problem because the solutions can reinforce each other.
We go beyond previous work by exploiting geometry more explicitly and
segmenting the scene into static and moving regions. To that end, we introduce
Competitive Collaboration, a framework that facilitates the coordinated
training of multiple specialized neural networks to solve complex problems.
Competitive Collaboration works much like expectation-maximization, but with
neural networks that act as both competitors to explain pixels that correspond
to static or moving regions, and as collaborators through a moderator that
assigns pixels to be either static or independently moving. Our novel method
integrates all these problems in a common framework and simultaneously reasons
about the segmentation of the scene into moving objects and the static
background, the camera motion, depth of the static scene structure, and the
optical flow of moving objects. Our model is trained without any supervision
and achieves state-of-the-art performance among joint unsupervised methods on
all sub-problems. | [] | [
"Depth Estimation",
"Monocular Depth Estimation",
"Motion Estimation",
"Motion Segmentation",
"Optical Flow Estimation"
] | [] | [
"KITTI Eigen split"
] | [
"absolute relative error"
] | Competitive Collaboration: Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation |
In this paper, we study abstractive summarization for open-domain videos. Unlike traditional text news summarization, the goal is less to "compress" text information than to provide a fluent textual summary of information that has been collected and fused from different source modalities, in our case video and audio transcripts (or text). We show how a multi-source sequence-to-sequence model with hierarchical attention can integrate information from different modalities into a coherent output, compare various models trained with different modalities and present pilot experiments on the How2 corpus of instructional videos. We also propose a new evaluation metric (Content F1) for the abstractive summarization task that measures semantic adequacy rather than fluency of the summaries, which is covered by metrics like ROUGE and BLEU. | [] | [
"Abstractive Text Summarization",
"Text Summarization"
] | [] | [
"How2"
] | [
"ROUGE-L",
"Content F1"
] | Multimodal Abstractive Summarization for How2 Videos |
Text-to-image retrieval is an essential task in multi-modal information retrieval, i.e. retrieving relevant images from a large and unlabelled image dataset given textual queries. In this paper, we propose VisualSparta, a novel text-to-image retrieval model that shows substantial improvement over existing models on both accuracy and efficiency. We show that VisualSparta is capable of outperforming all previous scalable methods in MSCOCO and Flickr30K. It also shows substantial retrieval speed advantages, i.e. for an index with 1 million images, VisualSparta achieves over a 391x speed-up compared to standard vector search. Experiments show that this speed advantage even gets bigger for larger datasets because VisualSparta can be efficiently implemented as an inverted index. To the best of our knowledge, VisualSparta is the first transformer-based text-to-image retrieval model that can achieve real-time search over very large datasets, with significant accuracy improvement compared to previous state-of-the-art methods. | [] | [
"Cross-Modal Retrieval",
"Image Retrieval",
"Information Retrieval",
"Text-Image Retrieval",
"Text-to-Image Retrieval"
] | [] | [
"MSCOCO-1k",
"COCO 2014",
"Flickr30k",
"Flickr30K 1K test"
] | [
"recall@5",
"recall@10",
"QPS",
"recall@1",
"R@10",
"Text-to-image R@10",
"Text-to-image R@1",
"R@5",
"R@1",
"Text-to-image R@5"
] | VisualSparta: Sparse Transformer Fragment-level Matching for Large-scale Text-to-Image Search |
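A rough sketch of the inverted-index retrieval idea described above, assuming per-image term scores have already been produced offline by the model; the data layout and function names are hypothetical.

```python
def build_inverted_index(image_term_scores, vocab_size):
    """For every vocabulary term, store each image's pre-computed term score."""
    # image_term_scores: dict image_id -> {term_id: score}, produced offline by the model
    index = {t: [] for t in range(vocab_size)}
    for img_id, term_scores in image_term_scores.items():
        for term_id, score in term_scores.items():
            index[term_id].append((img_id, score))
    return index

def retrieve(query_term_ids, index, top_k=10):
    """Score images by summing pre-computed term scores over the query tokens."""
    totals = {}
    for t in query_term_ids:
        for img_id, score in index.get(t, []):
            totals[img_id] = totals.get(img_id, 0.0) + score
    return sorted(totals.items(), key=lambda kv: -kv[1])[:top_k]
```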
(Unsupervised) Domain Adaptation (DA) seeks to classify target instances when solely provided with labeled source and unlabeled target examples for training. Learning domain-invariant features helps to achieve this goal, but it presupposes that the unlabeled samples are drawn from a single or multiple explicit target domains (Multi-target DA). In this paper, we consider a more realistic transfer scenario: our target domain is comprised of multiple sub-targets implicitly blended with each other, so that learners cannot identify which sub-target each unlabeled sample belongs to. This Blending-target Domain Adaptation (BTDA) scenario commonly appears in practice and threatens the validity of most existing DA algorithms, due to the presence of domain gaps and categorical misalignments among these hidden sub-targets. To reap the transfer performance gains in this new scenario, we propose the Adversarial Meta-Adaptation Network (AMEAN). AMEAN entails two adversarial transfer learning processes. The first is a conventional adversarial transfer to bridge our source and mixed target domains. To circumvent the intra-target category misalignment, the second process amounts to "learning to adapt": It deploys an unsupervised meta-learner receiving target data and their ongoing feature-learning feedback, to discover target clusters as our "meta-sub-target" domains. These meta-sub-targets auto-design our meta-sub-target DA loss, which empirically eliminates the implicit category mismatching in our mixed target. We evaluate AMEAN and a variety of DA algorithms on three benchmarks under the BTDA setup. Empirical results show that BTDA is quite a challenging transfer setup for most existing DA algorithms, yet AMEAN significantly outperforms these state-of-the-art baselines and effectively restrains the negative transfer effects in BTDA. | [] | [
"Domain Adaptation",
"Transfer Learning",
"Unsupervised Domain Adaptation"
] | [] | [
"Office-31",
"Office-Home"
] | [
"Accuracy"
] | Blending-target Domain Adaptation by Adversarial Meta-Adaptation Networks |
This work considers the problem of unsupervised domain adaptation in person re-identification (re-ID), which aims to transfer knowledge from the source domain to the target domain. Existing methods primarily aim to reduce the inter-domain shift between the domains, but usually overlook the relations among target samples. This paper investigates the intra-domain variations of the target domain and proposes a novel adaptation framework w.r.t. three types of underlying invariance, i.e., Exemplar-Invariance, Camera-Invariance, and Neighborhood-Invariance. Specifically, an exemplar memory is introduced to store features of samples, which can effectively and efficiently enforce the invariance constraints over the global dataset. We further present the Graph-based Positive Prediction (GPP) method to explore reliable neighbors for the target domain, which is built upon the memory and is trained on the source samples. Experiments demonstrate that 1) the three invariance properties are indispensable for effective domain adaptation, 2) the memory plays a key role in implementing invariance learning and improves the performance with limited extra computation cost, 3) GPP could facilitate the invariance learning and thus significantly improves the results, and 4) our approach produces new state-of-the-art adaptation accuracy on three large-scale re-ID benchmarks. | [] | [
"Domain Adaptation",
"Person Re-Identification",
"Unsupervised Domain Adaptation"
] | [] | [
"Duke to Market",
"Duke to MSMT",
"Market to Duke",
"Market to MSMT"
] | [
"rank-10",
"mAP",
"rank-5",
"rank-1"
] | Learning to Adapt Invariance in Memory for Person Re-identification |
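A minimal PyTorch sketch of an exemplar memory of the kind described above: one slot per target sample, updated by a momentum moving average and queried via cosine similarity; the momentum value and random initialization are assumptions.

```python
import torch
import torch.nn.functional as F

class ExemplarMemory:
    """One memory slot per target sample, used to impose invariance constraints globally."""
    def __init__(self, num_samples, feat_dim, momentum=0.5):
        self.mem = F.normalize(torch.randn(num_samples, feat_dim), dim=1)
        self.momentum = momentum

    def similarities(self, features):
        # cosine similarities between a batch of features and every stored exemplar
        return F.normalize(features, dim=1) @ self.mem.t()

    @torch.no_grad()
    def update(self, features, indices):
        # running-average update of the slots belonging to the current batch
        f = F.normalize(features, dim=1)
        self.mem[indices] = F.normalize(
            self.momentum * self.mem[indices] + (1.0 - self.momentum) * f, dim=1)
```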
In Multi-Label Text Classification (MLTC), one sample can belong to more than one class. It is observed that in most MLTC tasks there are dependencies or correlations among labels. Existing methods tend to ignore the relationship among labels. In this paper, a graph attention network-based model is proposed to capture the attentive dependency structure among the labels. The graph attention network uses a feature matrix and a correlation matrix to capture and explore the crucial dependencies between the labels and generate classifiers for the task. The generated classifiers are applied to sentence feature vectors obtained from the text feature extraction network (BiLSTM) to enable end-to-end training. Attention allows the system to assign different weights to neighbor nodes per label, thus allowing it to learn the dependencies among labels implicitly. The results of the proposed model are validated on five real-world MLTC datasets. The proposed model achieves similar or better performance compared to the previous state-of-the-art models. | [] | [
"Document Classification",
"Graph Representation Learning",
"Multi-Label Text Classification",
"Text Classification"
] | [] | [
"Slashdot",
"RCV1-v2",
"Reuters-21578",
"RCV1",
"AAPD"
] | [
"Micro-F1",
"F1",
"Micro F1"
] | MAGNET: Multi-Label Text Classification using Attention-based Graph Neural Network |
Document-level Relation Extraction (RE) requires extracting relations expressed within and across sentences. Recent works show that graph-based methods, usually constructing a document-level graph that captures document-aware interactions, can obtain useful entity representations thus helping tackle document-level RE. These methods either focus more on the entire graph, or pay more attention to a part of the graph, e.g., paths between the target entity pair. However, we find that document-level RE may benefit from focusing on both of them simultaneously. Therefore, to obtain more comprehensive entity representations, we propose the Coarse-to-Fine Entity Representation model (CFER) that adopts a coarse-to-fine strategy involving two phases. First, CFER uses graph neural networks to integrate global information in the entire graph at a coarse level. Next, CFER utilizes the global information as a guidance to selectively aggregate path information between the target entity pair at a fine level. In classification, we combine the entity representations from both levels into more comprehensive representations for relation extraction. Experimental results on a large-scale document-level RE dataset show that CFER achieves better performance than previous baseline models. Further, we verify the effectiveness of our strategy through elaborate model analysis. | [] | [
"Relation Extraction"
] | [] | [
"DocRED"
] | [
"Ign F1",
"F1"
] | Coarse-to-Fine Entity Representations for Document-level Relation Extraction |
A fundamental trade-off between effectiveness and efficiency needs to be
balanced when designing an online question answering system. Effectiveness
comes from sophisticated functions such as extractive machine reading
comprehension (MRC), while efficiency is obtained from improvements in
preliminary retrieval components such as candidate document selection and
paragraph ranking. Given the complexity of the real-world multi-document MRC
scenario, it is difficult to jointly optimize both in an end-to-end system. To
address this problem, we develop a novel deep cascade learning model, which
progressively evolves from the document-level and paragraph-level ranking of
candidate texts to more precise answer extraction with machine reading
comprehension. Specifically, irrelevant documents and paragraphs are first
filtered out with simple functions for efficiency consideration. Then we
jointly train three modules on the remaining texts for better tracking the
answer: the document extraction, the paragraph extraction and the answer
extraction. Experiment results show that the proposed method outperforms the
previous state-of-the-art methods on two large-scale multi-document benchmark
datasets, i.e., TriviaQA and DuReader. In addition, our online system can
stably serve typical scenarios with millions of daily requests in less than
50ms. | [] | [
"Machine Reading Comprehension",
"Question Answering",
"Reading Comprehension"
] | [] | [
"MS MARCO"
] | [
"Rouge-L",
"BLEU-1"
] | A Deep Cascade Model for Multi-Document Reading Comprehension |
Recent advances in video super-resolution have shown that convolutional
neural networks combined with motion compensation are able to merge information
from multiple low-resolution (LR) frames to generate high-quality images.
Current state-of-the-art methods process a batch of LR frames to generate a
single high-resolution (HR) frame and run this scheme in a sliding window
fashion over the entire video, effectively treating the problem as a large
number of separate multi-frame super-resolution tasks. This approach has two
main weaknesses: 1) Each input frame is processed and warped multiple times,
increasing the computational cost, and 2) each output frame is estimated
independently conditioned on the input frames, limiting the system's ability to
produce temporally consistent results.
In this work, we propose an end-to-end trainable frame-recurrent video
super-resolution framework that uses the previously inferred HR estimate to
super-resolve the subsequent frame. This naturally encourages temporally
consistent results and reduces the computational cost by warping only one image
in each step. Furthermore, due to its recurrent nature, the proposed method has
the ability to assimilate a large number of previous frames without increased
computational demands. Extensive evaluations and comparisons with previous
methods validate the strengths of our approach and demonstrate that the
proposed framework is able to significantly outperform the current state of the
art. | [] | [
"Motion Compensation",
"Multi-Frame Super-Resolution",
"Super-Resolution",
"Video Super-Resolution"
] | [] | [
"Vid4 - 4x upscaling"
] | [
"SSIM",
"PSNR"
] | Frame-Recurrent Video Super-Resolution |
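A simplified PyTorch sketch of the frame-recurrent loop described above: the previous HR estimate is warped with upsampled LR flow and fed back to the super-resolution network. `flow_net` and `sr_net` are hypothetical modules, and the space-to-depth packing of the warped estimate is one possible way to feed it back.

```python
import torch
import torch.nn.functional as F

def warp(img, flow):
    """Warp an image with a dense flow field (B, 2, H, W), x-displacement first."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(img.device)          # (2, H, W)
    new = grid.unsqueeze(0) + flow                                      # displaced coordinates
    new_x = 2.0 * new[:, 0] / (w - 1) - 1.0                             # normalise to [-1, 1]
    new_y = 2.0 * new[:, 1] / (h - 1) - 1.0
    return F.grid_sample(img, torch.stack((new_x, new_y), dim=-1), align_corners=True)

def frame_recurrent_sr(lr_frames, flow_net, sr_net, scale=4):
    """Run SR over a clip (B, T, C, H, W), reusing the previous HR estimate each step."""
    b, t, c, h, w = lr_frames.shape
    hr_prev = torch.zeros(b, c, h * scale, w * scale, device=lr_frames.device)
    outputs = []
    for i in range(t):
        lr = lr_frames[:, i]
        flow_lr = flow_net(lr, lr_frames[:, max(i - 1, 0)])             # LR flow, prev -> current
        flow_hr = scale * F.interpolate(flow_lr, scale_factor=scale,
                                        mode="bilinear", align_corners=True)
        hr_warp = warp(hr_prev, flow_hr)                                # only one warp per step
        hr_prev = sr_net(lr, F.pixel_unshuffle(hr_warp, scale))         # space-to-depth the warped HR
        outputs.append(hr_prev)
    return torch.stack(outputs, dim=1)
```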
We present the first dataset targeted at end-to-end NLG in Czech in the restaurant domain, along with several strong baseline models using the sequence-to-sequence approach. While non-English NLG is under-explored in general, Czech, as a morphologically rich language, makes the task even harder: Since Czech requires inflecting named entities, delexicalization or copy mechanisms do not work out-of-the-box and lexicalizing the generated outputs is non-trivial. In our experiments, we present two different approaches to this problem: (1) using a neural language model to select the correct inflected form while lexicalizing, (2) a two-step generation setup: our sequence-to-sequence model generates an interleaved sequence of lemmas and morphological tags, which are then inflected by a morphological generator. | [] | [
"Data-to-Text Generation",
"Language Modelling"
] | [] | [
"Czech Restaurant NLG"
] | [
"CIDER",
"BLEU score",
"METEOR",
"NIST"
] | Neural Generation for Czech: Data and Baselines |
Neural models of dialog rely on generalized latent representations of language. This paper introduces a novel training procedure which explicitly learns multiple representations of language at several levels of granularity. The multi-granularity training algorithm modifies the mechanism by which negative candidate responses are sampled in order to control the granularity of learned latent representations. Strong performance gains are observed on the next utterance retrieval task using both the MultiWOZ dataset and the Ubuntu dialog corpus. Analysis demonstrates that multiple granularities of representation are being learned, and that multi-granularity training facilitates better transfer to downstream tasks. | [] | [
"Conversational Response Selection"
] | [] | [
"Ubuntu Dialogue (v1, Ranking)"
] | [
"R10@1",
"R2@1"
] | Multi-Granularity Representations of Dialog |
Generative adversarial networks (GANs) have achieved great success in synthesizing
data. However, the existing GANs restrict the discriminator to be a binary
classifier, and thus limit their learning capacity for tasks that need to
synthesize output with rich structures such as natural language descriptions.
In this paper, we propose a novel generative adversarial network, RankGAN, for
generating high-quality language descriptions. Rather than training the
discriminator to learn and assign absolute binary predicate for individual data
sample, the proposed RankGAN is able to analyze and rank a collection of
human-written and machine-written sentences by giving a reference group. By
viewing a set of data samples collectively and evaluating their quality through
relative ranking scores, the discriminator is able to make better assessment
which in turn helps to learn a better generator. The proposed RankGAN is
optimized through the policy gradient technique. Experimental results on
multiple public datasets clearly demonstrate the effectiveness of the proposed
approach. | [] | [
"Text Generation"
] | [] | [
"Chinese Poems",
"EMNLP2017 WMT",
"COCO Captions"
] | [
"BLEU-3",
"BLEU-4",
"BLEU-2",
"BLEU-5"
] | Adversarial Ranking for Language Generation |
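A small PyTorch sketch of a relative ranking score in the spirit of the description above: a candidate sentence embedding is ranked against a comparison set by cosine relevance to a reference group of human-written sentences. The scoring function, temperature, and names are assumptions.

```python
import torch
import torch.nn.functional as F

def rank_score(candidate_emb, reference_embs, comparison_embs, gamma=1.0):
    """Expected probability that the candidate ranks first against the comparison set."""
    def relevance(s):                                   # s: (1, D) -> one score per reference
        return gamma * F.cosine_similarity(s, reference_embs, dim=-1)

    cand = relevance(candidate_emb.unsqueeze(0))                               # (R,)
    comp = torch.stack([relevance(c.unsqueeze(0)) for c in comparison_embs])   # (N, R)
    all_scores = torch.cat([cand.unsqueeze(0), comp], dim=0)                   # (N+1, R)
    probs = F.softmax(all_scores, dim=0)            # rank distribution per reference sentence
    return probs[0].mean()                          # average the candidate's top-rank probability
```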
We achieve 3D semantic scene labeling by exploring semantic relation between each point and its contextual neighbors through edges. Besides an encoder-decoder branch for predicting point labels, we construct an edge branch to hierarchically integrate point features and generate edge features. To incorporate point features in the edge branch, we establish a hierarchical graph framework, where the graph is initialized from a coarse layer and gradually enriched along the point decoding process. For each edge in the final graph, we predict a label to indicate the semantic consistency of the two connected points to enhance point prediction. At different layers, edge features are also fed into the corresponding point module to integrate contextual information for message passing enhancement in local regions. The two branches interact with each other and cooperate in segmentation. Decent experimental results on several 3D semantic labeling datasets demonstrate the effectiveness of our work. | [] | [
"Scene Labeling",
"Semantic Segmentation"
] | [] | [
"S3DIS Area5"
] | [
"oAcc",
"mAcc",
"mIoU"
] | Hierarchical Point-Edge Interaction Network for Point Cloud Semantic Segmentation |
Head pose estimation, which computes the intrinsic Euler angles (yaw, pitch, roll) of the human head, is crucial for gaze estimation, face alignment, and 3D reconstruction. Traditional approaches rely heavily on the accuracy of facial landmarks. This limits their performance, especially when the visibility of the face is not in good condition. In this paper, to perform the estimation without facial landmarks, we combine coarse and fine regression outputs in a deep network. Utilizing more quantization units for the angles, a fine classifier is trained with the help of other auxiliary coarse units. An integrated regression is adopted to obtain the final prediction. The proposed approach is evaluated on three challenging benchmarks. It achieves state-of-the-art results on AFLW2000 and BIWI, and performs favorably on AFLW. The code has been released on Github. | [] | [
"3D Reconstruction",
"Face Alignment",
"Gaze Estimation",
"Head Pose Estimation",
"Pose Estimation",
"Quantization",
"Regression"
] | [] | [
"AFLW2000",
"AFLW",
"BIWI"
] | [
"MAE",
"MAE (trained with BIWI data)"
] | Hybrid coarse-fine classification for head pose estimation |
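A minimal PyTorch sketch of combining classification over angle bins with an integrated regression output, as described above; the bin width, angle range, and offset are assumptions.

```python
import torch
import torch.nn.functional as F

def angle_from_bins(logits, bin_width=3.0, angle_min=-99.0):
    """Turn per-bin classification logits into a continuous Euler angle via expectation."""
    n_bins = logits.shape[-1]
    centres = angle_min + bin_width * (torch.arange(n_bins, device=logits.device).float() + 0.5)
    probs = F.softmax(logits, dim=-1)            # fine classification head
    return (probs * centres).sum(dim=-1)         # integrated regression output
```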
Face alignment, which fits a face model to an image and extracts the semantic
meanings of facial pixels, has been an important topic in the CV community.
However, most algorithms are designed for faces in small to medium poses (below
45 degrees), lacking the ability to align faces in large poses up to 90 degrees.
The challenges are three-fold: Firstly, the commonly used landmark-based face
model assumes that all the landmarks are visible and is therefore not suitable
for profile views. Secondly, the face appearance varies more dramatically
across large poses, ranging from frontal view to profile view. Thirdly,
labelling landmarks in large poses is extremely challenging since the invisible
landmarks have to be guessed. In this paper, we propose a solution to the three
problems in a new alignment framework, called 3D Dense Face Alignment (3DDFA),
in which a dense 3D face model is fitted to the image via a convolutional neural
network (CNN). We also propose a method to synthesize large-scale training
samples in profile views to solve the third problem of data labelling.
Experiments on the challenging AFLW database show that our approach achieves
significant improvements over state-of-the-art methods. | [] | [
"3D Face Reconstruction",
"Face Alignment",
"Face Model",
"Head Pose Estimation"
] | [] | [
"AFLW2000",
"300W",
"Florence",
"AFLW2000-3D",
"BIWI"
] | [
"Error rate",
"NME",
"MAE (trained with other data)",
"MAE",
"Mean NME "
] | Face Alignment Across Large Poses: A 3D Solution |
In this paper, we present Deep Graph Kernels (DGK), a unified framework to learn latent representations of sub-structures for graphs, inspired by latest advancements in language modeling and deep learning. Our framework leverages the dependency information between sub-structures by learning their latent representations. We demonstrate instances of our framework on three popular graph kernels, namely Graphlet kernels, Weisfeiler-Lehman subtree kernels, and Shortest-Path graph kernels. Our experiments on several benchmark datasets show that Deep Graph Kernels achieve significant improvements in classification accuracy over state-of-the-art graph kernels. | [] | [
"Graph Classification",
"Language Modelling"
] | [] | [
"COLLAB",
"RE-M12K",
"IMDb-B",
"ENZYMES",
"Android Malware Dataset",
"PROTEINS",
"D&D",
"NCI1",
"MUTAG",
"IMDb-M",
"RE-M5K"
] | [
"Accuracy"
] | Deep Graph Kernels |
Can neural networks learn to compare graphs without feature engineering? In
this paper, we show that it is possible to learn representations for graph
similarity with neither domain knowledge nor supervision (i.e.\ feature
engineering or labeled graphs). We propose Deep Divergence Graph Kernels, an
unsupervised method for learning representations over graphs that encodes a
relaxed notion of graph isomorphism. Our method consists of three parts. First,
we learn an encoder for each anchor graph to capture its structure. Second, for
each pair of graphs, we train a cross-graph attention network which uses the
node representations of an anchor graph to reconstruct another graph. This
approach, which we call isomorphism attention, captures how well the
representations of one graph can encode another. We use the attention-augmented
encoder's predictions to define a divergence score for each pair of graphs.
Finally, we construct an embedding space for all graphs using these pair-wise
divergence scores.
Unlike previous work, much of which relies on 1) supervision, 2) domain
specific knowledge (e.g. a reliance on Weisfeiler-Lehman kernels), and 3) known
node alignment, our unsupervised method jointly learns node representations,
graph representations, and an attention-based alignment between graphs.
Our experimental results show that Deep Divergence Graph Kernels can learn an
unsupervised alignment between graphs, and that the learned representations
achieve competitive results when used as features on a number of challenging
graph classification tasks. Furthermore, we illustrate how the learned
attention allows insight into the alignment of sub-structures across
graphs. | [] | [
"Feature Engineering",
"Graph Classification",
"Graph Similarity"
] | [] | [
"MUTAG",
"D&D",
"PTC",
"NCI1"
] | [
"Accuracy"
] | DDGK: Learning Graph Representations for Deep Divergence Graph Kernels |
Visual question answering (VQA) and image captioning require a shared body of general knowledge connecting language and vision. We present a novel approach to improve VQA performance that exploits this connection by jointly generating captions that are targeted to help answer a specific visual question. The model is trained using an existing caption dataset by automatically determining question-relevant captions using an online gradient-based method. Experimental results on the VQA v2 challenge demonstrate that our approach obtains state-of-the-art VQA performance (e.g. 68.4% on the Test-standard set using a single model) by simultaneously generating question-relevant captions. | [] | [
"Image Captioning",
"Question Answering",
"Visual Question Answering"
] | [] | [
"VQA v2 test-std"
] | [
"overall"
] | Generating Question Relevant Captions to Aid Visual Question Answering |
The success of deep supervised learning depends on its automatic data representation abilities. A good representation of high-dimensional complex data should enjoy low dimensionality and disentanglement while losing as little information as possible.
In this work, we give a statistical understanding of how deep representation goals can be achieved with reproducing kernel Hilbert spaces (RKHS) and generative adversarial networks (GAN). At the population level, we formulate the ideal representation learning task as that of finding a nonlinear map that minimizes the sum of losses characterizing conditional independence (with RKHS) and disentanglement (with GAN). We estimate the target map at the sample level nonparametrically with deep neural networks. We prove the consistency in terms of the population objective function value. We validate the proposed methods via comprehensive numerical experiments and real data analysis in the context of regression and classification. The resulting prediction accuracies are better than state-of-the-art methods. | [] | [
"Image Classification",
"Regression",
"Representation Learning"
] | [] | [
"Kuzushiji-MNIST",
"STL-10"
] | [
"Percentage correct",
"Accuracy"
] | Toward Understanding Supervised Representation Learning with RKHS and GAN |
In this paper, we introduce a new model for leveraging unlabeled data to
improve generalization performances of image classifiers: a two-branch
encoder-decoder architecture called HybridNet. The first branch receives
supervision signal and is dedicated to the extraction of invariant
class-related representations. The second branch is fully unsupervised and
dedicated to model information discarded by the first branch to reconstruct
input data. To further support the expected behavior of our model, we propose
an original training objective. It favors stability in the discriminative
branch and complementarity between the learned representations in the two
branches. HybridNet is able to outperform state-of-the-art results on CIFAR-10,
SVHN and STL-10 in various semi-supervised settings. In addition,
visualizations and ablation studies validate our contributions and the behavior
of the model on both CIFAR-10 and STL-10 datasets. | [] | [
"Image Classification"
] | [] | [
"STL-10"
] | [
"Percentage correct"
] | HybridNet: Classification and Reconstruction Cooperation for Semi-Supervised Learning |
Although deep learning has been applied to successfully address many data mining problems, relatively limited work has been done on deep learning for anomaly detection. Existing deep anomaly detection methods, which focus on learning new feature representations to enable downstream anomaly detection methods, perform indirect optimization of anomaly scores, leading to data-inefficient learning and suboptimal anomaly scoring. Also, they are typically designed as unsupervised learning due to the lack of large-scale labeled anomaly data. As a result, it is difficult for them to leverage prior knowledge (e.g., a few labeled anomalies) when such information is available as in many real-world anomaly detection applications. This paper introduces a novel anomaly detection framework and its instantiation to address these problems. Instead of representation learning, our method fulfills end-to-end learning of anomaly scores via neural deviation learning, in which we leverage a few (e.g., multiple to dozens) labeled anomalies and a prior probability to enforce statistically significant deviations of the anomaly scores of anomalies from those of normal data objects in the upper tail. Extensive results show that our method can be trained substantially more data-efficiently and achieves significantly better anomaly scoring than state-of-the-art competing methods. | [] | [
"Anomaly Detection",
"Cyber Attack Detection",
"Fraud Detection",
"Network Intrusion Detection",
"Representation Learning"
] | [] | [
"Kaggle-Credit Card Fraud Dataset",
"NB15-Backdoor",
"Census",
"Thyroid"
] | [
"Average Precision",
"AUC"
] | Deep Anomaly Detection with Deviation Networks |
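A short PyTorch sketch of a deviation-style loss consistent with the description above: scores of labelled anomalies are pushed into the upper tail of a Gaussian prior over normal scores. The standard-normal prior, margin, and reference-set size are assumptions.

```python
import torch

def deviation_loss(scores, labels, margin=5.0, n_ref=5000):
    """scores: anomaly scores from the network; labels: 1 for labelled anomalies, 0 otherwise."""
    ref = torch.randn(n_ref, device=scores.device)     # prior over normal anomaly scores
    dev = (scores - ref.mean()) / ref.std()            # z-score deviation of each sample
    inlier_term = (1 - labels) * dev.abs()             # normal data stays near the prior mean
    outlier_term = labels * torch.clamp(margin - dev, min=0.0)  # anomalies deviate in the upper tail
    return (inlier_term + outlier_term).mean()
```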
Temporal receptive fields of models play an important role in action segmentation. Large receptive fields facilitate the long-term relations among video clips while small receptive fields help capture the local details. Existing methods construct models with hand-designed receptive fields in layers. Can we effectively search for receptive field combinations to replace hand-designed patterns? To answer this question, we propose to find better receptive field combinations through a global-to-local search scheme. Our search scheme exploits both global search to find the coarse combinations and local search to get the refined receptive field combination patterns further. The global search finds possible coarse combinations other than human-designed patterns. On top of the global search, we propose an expectation guided iterative local search scheme to refine combinations effectively. Our global-to-local search can be plugged into existing action segmentation methods to achieve state-of-the-art performance. | [] | [
"Action Segmentation"
] | [] | [
"50 Salads"
] | [
"Acc",
"Edit",
"F1@10%",
"F1@25%",
"F1@50%"
] | Global2Local: Efficient Structure Search for Video Action Segmentation |
Neural networks are a powerful means of classifying object images. The proposed
image category classification method for object images combines convolutional neural
networks (CNNs) and support vector machines (SVMs). A pre-trained CNN, called Alex-Net,
is used as a pattern-feature extractor. Alex-Net is pre-trained on the large-scale object-image
dataset ImageNet and is used without further training. An SVM is used as the trainable
classifier. The feature vectors are passed from Alex-Net to the SVM. The STL-10 dataset is
used as the object images; the number of classes is ten, and training and test samples are
clearly split. The STL-10 object images are trained by the SVM with data augmentation. We
use a pattern transformation method based on the cosine function, and also apply other
augmentation methods such as rotation, skewing and elastic distortion. By using the cosine
function, the original patterns were left-justified, right-justified, top-justified, or
bottom-justified; patterns were also center-justified and enlarged. The test error rate is
decreased by 0.435 percentage points from 16.055% by augmentation with the cosine
transformation, whereas error rates are increased by the other augmentation methods
(rotation, skewing and elastic distortion) compared with no augmentation. The number of
augmented samples is 30 times that of the original 5K STL-10 training samples. The
experimental test error rate for the 8K STL-10 test images was 15.620%, which shows that
image augmentation is effective for image category classification. | [] | [
"Data Augmentation",
"Image Augmentation",
"Image Classification"
] | [] | [
"STL-10"
] | [
"Percentage correct"
] | Image Augmentation for Object Image Classification Based On Combination of PreTrained CNN and SVM |
Inferring the depth of images is a fundamental inverse problem within the field of Computer Vision since depth information is obtained through 2D images, which can be generated from infinite possibilities of observed real scenes. Benefiting from the progress of Convolutional Neural Networks (CNNs) to explore structural features and spatial image information, Single Image Depth Estimation (SIDE) is often highlighted in scopes of scientific and technological innovation, as this concept provides advantages related to its low implementation cost and robustness to environmental conditions. In the context of autonomous vehicles, state-of-the-art CNNs optimize the SIDE task by producing high-quality depth maps, which are essential during the autonomous navigation process in different locations. However, such networks are usually supervised by sparse and noisy depth data, from Light Detection and Ranging (LiDAR) laser scans, and are carried out at high computational cost, requiring high-performance Graphic Processing Units (GPUs). Therefore, we propose a new lightweight and fast supervised CNN architecture combined with novel feature extraction models which are designed for real-world autonomous navigation. We also introduce an efficient surface normals module, jointly with a simple geometric 2.5D loss function, to solve SIDE problems. We also innovate by incorporating multiple Deep Learning techniques, such as the use of densification algorithms and additional semantic, surface normals and depth information to train our framework. The method introduced in this work focuses on robotic applications in indoor and outdoor environments and its results are evaluated on the competitive and publicly available NYU Depth V2 and KITTI Depth datasets. | [] | [
"Autonomous Navigation",
"Autonomous Vehicles",
"Depth Completion",
"Monocular Depth Estimation",
"Semantic Segmentation",
"Surface Normals Estimation",
"Visual Odometry"
] | [] | [
"NYU-Depth V2",
"NYU-Depth V2 Surface Normals",
"KITTI Eigen split",
"KITTI Depth Completion Eigen Split"
] | [
"RMSE",
"REL",
"absolute relative error"
] | On Deep Learning Techniques to Boost Monocular Depth Estimation for Autonomous Navigation |
Neural Machine Translation (NMT), though recently developed, has shown promising results for various language pairs. Despite that, NMT has only been applied to mostly formal texts such as those in the WMT shared tasks. This work further explores the effectiveness of NMT in spoken language domains by participating in the MT track of the IWSLT 2015. We consider two scenarios: (a) how to adapt existing NMT systems to a new domain and (b) the generalization of NMT to low-resource language pairs. Our results demonstrate that using an existing NMT framework, we can achieve competitive results in the aforementioned scenarios when translating from English to German and Vietnamese. Notably, we have advanced state-of-the-art results in the IWSLT English-German MT track by up to 5.2 BLEU points. | [] | [
"Machine Translation"
] | [] | [
"IWSLT2015 English-Vietnamese"
] | [
"BLEU"
] | Stanford Neural Machine Translation Systems for Spoken Language Domains |
We study how to sample negative examples to automatically construct a training set for effective model learning in retrieval-based dialogue systems. Following an idea of dynamically adapting negative examples to matching models in learning, we consider four strategies including minimum sampling, maximum sampling, semi-hard sampling, and decay-hard sampling. Empirical studies on two benchmarks with three matching models indicate that compared with the widely used random sampling strategy, although the first two strategies lead to performance drop, the latter two ones can bring consistent improvement to the performance of all the models on both benchmarks. | [] | [
"Conversational Response Selection"
] | [] | [
"Ubuntu Dialogue (v1, Ranking)"
] | [
"R10@1",
"R10@5",
"R2@1",
"R10@2"
] | Sampling Matters! An Empirical Study of Negative Sampling Strategies for Learning of Matching Models in Retrieval-based Dialogue Systems |
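A small sketch of the four negative-sampling strategies named above, given matching-model scores for a pool of candidate negative responses; the decay-hard schedule here is an assumption about how the margin is relaxed over training.

```python
import numpy as np

def sample_negative(match_scores, pos_score, strategy="semi-hard", decay=0.5):
    """Return the index of one negative candidate according to the chosen strategy."""
    scores = np.asarray(match_scores)
    if strategy == "min":                 # easiest negative for the current model
        return int(scores.argmin())
    if strategy == "max":                 # hardest negative
        return int(scores.argmax())
    if strategy == "semi-hard":           # hardest negative still scored below the positive
        valid = np.where(scores < pos_score)[0]
        pool = valid if len(valid) else np.arange(len(scores))
        return int(pool[scores[pool].argmax()])
    if strategy == "decay-hard":          # relax the threshold via decay in [0, 1] during training
        threshold = pos_score + decay * (scores.max() - pos_score)
        valid = np.where(scores < threshold)[0]
        pool = valid if len(valid) else np.arange(len(scores))
        return int(pool[scores[pool].argmax()])
    raise ValueError(strategy)
```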
While depth cameras and inertial sensors have been frequently leveraged for
human action recognition, these sensing modalities are impractical in many
scenarios where cost or environmental constraints prohibit their use. As such,
there has been recent interest on human action recognition using low-cost,
readily-available RGB cameras via deep convolutional neural networks. However,
many of the deep convolutional neural networks proposed for action recognition
thus far have relied heavily on learning global appearance cues directly from
imaging data, resulting in highly complex network architectures that are
computationally expensive and difficult to train. Motivated to reduce network
complexity and achieve higher performance, we introduce the concept of
spatio-temporal activation reprojection (STAR). More specifically, we reproject
the spatio-temporal activations generated by human pose estimation layers in
space and time using a stack of 3D convolutions. Experimental results on
UTD-MHAD and J-HMDB demonstrate that an end-to-end architecture based on the
proposed STAR framework (which we nickname STAR-Net) is proficient in
single-environment and small-scale applications. On UTD-MHAD, STAR-Net
outperforms several methods using richer data modalities such as depth and
inertial sensors. | [] | [
"Action Recognition",
"Multimodal Activity Recognition",
"Pose Estimation",
"Skeleton Based Action Recognition",
"Temporal Action Localization"
] | [] | [
"UTD-MHAD",
"J-HMDB"
] | [
"Accuracy (CS)",
"Accuracy (RGB+pose)"
] | STAR-Net: Action Recognition using Spatio-Temporal Activation Reprojection |
A new deep learning-based electroencephalography (EEG) signal analysis framework is proposed. While deep neural networks, specifically convolutional neural networks (CNNs), have gained remarkable attention recently, they still suffer from high dimensionality of the training data. Two-dimensional input images of CNNs are more prone to redundancy than the one-dimensional input time-series of conventional neural networks. In this study, we propose a new dimensionality reduction framework for reducing the dimension of CNN inputs based on the tensor decomposition of the time-frequency representation of EEG signals. The proposed tensor decomposition-based dimensionality reduction algorithm transforms a large set of slices of the input tensor into a concise set of slices, which are called super-slices. Employing super-slices not only handles the artifacts and redundancies of the EEG data but also reduces the dimension of the CNN's training inputs. We also consider different time-frequency representation methods for EEG image generation and provide a comprehensive comparison among them. We test our proposed framework on the CHB-MIT data and, as the results show, our approach outperforms previous studies. | [] | [
"Dimensionality Reduction",
"EEG",
"Image Generation",
"Seizure Detection",
"Time Series"
] | [] | [
"CHB-MIT"
] | [
"Accuracy"
] | EEG Signal Dimensionality Reduction and Classification using Tensor Decomposition and Deep Convolutional Neural Networks |
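A minimal numpy sketch of compressing the slices of a time-frequency tensor into a few "super-slices" via a truncated decomposition along the slice mode; this is a simplified stand-in for, not a reproduction of, the paper's tensor-decomposition step.

```python
import numpy as np

def super_slices(tfr_tensor, k=8):
    """Compress a (n_slices, n_freq, n_time) time-frequency tensor into k super-slices."""
    n_slices, n_freq, n_time = tfr_tensor.shape
    unfolded = tfr_tensor.reshape(n_slices, -1)          # mode-1 unfolding: slices x (freq*time)
    u, s, vt = np.linalg.svd(unfolded, full_matrices=False)
    # each super-slice is a weighted combination of the original slices
    return (np.diag(s[:k]) @ vt[:k]).reshape(k, n_freq, n_time)
```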
Real-world data often follow a long-tailed distribution as the frequency of each class is typically different. For example, a dataset can have a large number of under-represented classes and a few classes with more than sufficient data. However, a model to represent the dataset is usually expected to have reasonably homogeneous performances across classes. Introducing a class-balanced loss and advanced methods on data re-sampling and augmentation are among the best practices to alleviate the data imbalance problem. However, the other part of the problem about the under-represented classes will have to rely on additional knowledge to recover the missing information. In this work, we present a novel approach to address the long-tailed problem by augmenting the under-represented classes in the feature space with the features learned from the classes with ample samples. In particular, we decompose the features of each class into a class-generic component and a class-specific component using class activation maps. Novel samples of under-represented classes are then generated on the fly during training stages by fusing the class-specific features from the under-represented classes with the class-generic features from confusing classes. Our results on different datasets such as iNaturalist, ImageNet-LT, Places-LT and a long-tailed version of CIFAR show state-of-the-art performance. | [] | [
"Image Classification"
] | [] | [
"iNaturalist 2018"
] | [
"Top-1 Accuracy"
] | Feature Space Augmentation for Long-Tailed Data |
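A hedged sketch of the fusion step described in the abstract above: a synthetic sample for a tail class combines that class's class-specific feature component with a class-generic component borrowed from a confusing head-class sample. The CAM-based decomposition is approximated here by a fixed channel mask, which is purely an assumption for illustration.

```python
import numpy as np

# Minimal sketch (not the released implementation) of feature-space fusion for
# long-tailed augmentation. The binary mask stands in for a CAM-derived split
# into class-specific vs. class-generic channels.
def fuse_features(tail_feat, head_feat, specific_mask):
    """specific_mask in [0, 1]: 1 = class-specific channel, 0 = class-generic channel."""
    return specific_mask * tail_feat + (1.0 - specific_mask) * head_feat

rng = np.random.default_rng(0)
tail, head = rng.normal(size=256), rng.normal(size=256)
mask = (rng.uniform(size=256) > 0.5).astype(float)    # stand-in for a CAM-derived mask
augmented = fuse_features(tail, head, mask)
print(augmented.shape)                                # (256,)
```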
We present our UWB system for the task of capturing discriminative attributes at SemEval 2018. Given two words and an attribute, the system decides whether this attribute is discriminative between the words or not. Assuming the Distributional Hypothesis, i.e., that a word's meaning is related to its distribution across contexts, we introduce several approaches to compare word contextual information. We experiment with state-of-the-art semantic spaces and with simple co-occurrence statistics. We show that the word distribution in the corpus has potential for detecting discriminative attributes. Our system achieves an F1 score of 72.1% and is ranked #4 among 26 submitted systems. | [] | [
"Relation Extraction"
] | [] | [
"SemEval 2018 Task 10"
] | [
"F1-Score"
] | UWB at SemEval-2018 Task 10: Capturing Discriminative Attributes from Word Distributions |
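The abstract above mentions simple co-occurrence statistics as one signal for deciding whether an attribute is discriminative. The sketch below shows one such baseline; the window size, margin threshold, and toy corpus are assumptions and not the system's actual configuration.

```python
from collections import Counter

# Minimal co-occurrence baseline sketch: an attribute is called discriminative
# for (w1, w2) when it co-occurs with w1 noticeably more often than with w2
# within a context window. Thresholds and corpus are illustrative assumptions.
def cooccurrence_counts(corpus, window=5):
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.lower().split()
        for i, w in enumerate(tokens):
            for c in tokens[max(0, i - window): i + window + 1]:
                if c != w:
                    counts[(w, c)] += 1
    return counts

def is_discriminative(counts, w1, w2, attr, margin=1):
    return counts[(w1, attr)] >= counts[(w2, attr)] + margin

corpus = ["the banana is yellow and sweet", "the cucumber is green and crisp"]
counts = cooccurrence_counts(corpus)
print(is_discriminative(counts, "banana", "cucumber", "yellow"))   # True
```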
Knowledge Graph Completion (KGC) aims at automatically predicting missing links for large-scale knowledge graphs. A vast number of state-of-the-art KGC techniques have been published at top conferences in several research fields, including data mining, machine learning, and natural language processing. However, we notice that several recent papers report very high performance that largely exceeds previous state-of-the-art methods. In this paper, we find that this can be attributed to the inappropriate evaluation protocol they use, and we propose a simple evaluation protocol to address this problem. The proposed protocol is robust to bias in the model, which can substantially affect the final results. We conduct extensive experiments and report the performance of several existing methods using our protocol. The reproducible code has been made publicly available. | [] | [
"Knowledge Graph Completion",
"Knowledge Graphs",
"Link Prediction"
] | [] | [
"FB15k-237"
] | [
"Hits@10",
"MR",
"MRR"
] | A Re-evaluation of Knowledge Graph Completion Methods |
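A hedged sketch of a tie-aware ranking protocol in the spirit of the evaluation issue raised above: when other candidates receive exactly the same score as the correct entity, the correct entity is placed uniformly at random among them rather than always first (optimistic) or always last (pessimistic). The exact tie-handling details are assumptions, not a reproduction of the paper's code.

```python
import numpy as np

# Minimal sketch of "random placement among ties" ranking for link prediction.
def rank_with_ties(scores, correct_idx, rng):
    s = scores[correct_idx]
    higher = int(np.sum(scores > s))
    ties = int(np.sum(scores == s)) - 1        # other candidates with the same score
    return higher + 1 + rng.integers(0, ties + 1)

rng = np.random.default_rng(0)
scores = np.array([0.9, 0.7, 0.7, 0.7, 0.1])   # correct entity at index 1
print(rank_with_ties(scores, correct_idx=1, rng=rng))   # a value in {2, 3, 4}
```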
We present an online approach to efficiently and simultaneously detect and track the 2D pose of multiple people in a video sequence. We build upon the Part Affinity Field (PAF) representation designed for static images, and propose an architecture that can encode and predict Spatio-Temporal Affinity Fields (STAF) across a video sequence. In particular, we propose a novel temporal topology cross-linked across limbs which can consistently handle body motions of a wide range of magnitudes. Additionally, we make the overall approach recurrent in nature, where the network ingests STAF heatmaps from previous frames and estimates those for the current frame. Our approach uses only online inference and tracking, and is currently the fastest and the most accurate bottom-up approach that is runtime invariant to the number of people in the scene and accuracy invariant to the input frame rate of the camera. Running at $\sim$30 fps on a single GPU at single scale, it achieves highly competitive results on the PoseTrack benchmarks. | [] | [
"Pose Tracking"
] | [] | [
"PoseTrack2017"
] | [
"MOTA"
] | Efficient Online Multi-Person 2D Pose Tracking with Recurrent Spatio-Temporal Affinity Fields |
In the financial domain, risk modeling and profit generation heavily rely on the sophisticated and intricate stock movement prediction task. Stock forecasting is complex, given the stochastic dynamics and non-stationary behavior of the market. Stock movements are influenced by varied factors beyond the conventionally studied historical prices, such as social media and correlations among stocks. The rising ubiquity of online content and knowledge mandates an exploration of models that factor in such multimodal signals for accurate stock forecasting. We introduce an architecture that achieves a potent blend of chaotic temporal signals from financial data, social media, and inter-stock relationships via a graph neural network in a hierarchical temporal fashion. Through experiments on real-world S&P 500 index data and English tweets, we show the practical applicability of our model as a tool for investment decision making and trading. | [] | [
"Decision Making",
"Stock Market Prediction"
] | [] | [
"stocknet"
] | [
"F1"
] | Deep Attentive Learning for Stock Movement Prediction From Social Media Text and Company Correlations |
Attention operators have been widely applied in various fields, including computer vision, natural language processing, and network embedding learning. Attention operators on graph data enable learnable weights when aggregating information from neighboring nodes. However, graph attention operators (GAOs) consume excessive computational resources, preventing their application on large graphs. In addition, GAOs belong to the family of soft attention, instead of hard attention, which has been shown to yield better performance. In this work, we propose a novel hard graph attention operator (hGAO) and a channel-wise graph attention operator (cGAO). hGAO uses the hard attention mechanism by attending only to important nodes. Compared to GAO, hGAO improves performance and saves computational cost by attending only to important nodes. To further reduce the requirements on computational resources, we propose cGAO, which performs attention operations along channels. cGAO avoids the dependency on the adjacency matrix, leading to dramatic reductions in computational resource requirements. Experimental results demonstrate that our proposed deep models with the new operators achieve consistently better performance. Comparison results also indicate that hGAO achieves significantly better performance than GAO on both node and graph embedding tasks. Efficiency comparisons show that cGAO leads to dramatic savings in computational resources, making it applicable to large graphs. | [] | [
"Graph Classification",
"Graph Embedding",
"Graph Representation Learning",
"Network Embedding",
"Representation Learning"
] | [] | [
"COLLAB",
"PROTEINS",
"D&D",
"IMDb-M",
"MUTAG",
"PTC"
] | [
"Accuracy"
] | Graph Representation Learning via Hard and Channel-Wise Attention Networks |
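To make the hard-attention idea above concrete, the sketch below scores a node's neighbours, keeps only the top-k, and aggregates a softmax-weighted sum of that subset. It is not the authors' hGAO implementation; the scoring function and k are assumptions.

```python
import torch

# Minimal sketch of hard (top-k) attention over a node's neighbours.
def hard_attention_aggregate(node_feat, neigh_feats, k=4):
    scores = neigh_feats @ node_feat                     # (num_neigh,)
    k = min(k, scores.numel())
    top_vals, top_idx = torch.topk(scores, k)            # keep only important nodes
    weights = torch.softmax(top_vals, dim=0)
    return weights @ neigh_feats[top_idx]                # weighted sum of selected neighbours

node = torch.randn(16)
neighbours = torch.randn(10, 16)
print(hard_attention_aggregate(node, neighbours).shape)  # torch.Size([16])
```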
We propose a general framework for self-supervised learning of transferable visual representations based on Video-Induced Visual Invariances (VIVI). We consider the implicit hierarchy present in the videos and make use of (i) frame-level invariances (e.g. stability to color and contrast perturbations), (ii) shot/clip-level invariances (e.g. robustness to changes in object orientation and lighting conditions), and (iii) video-level invariances (semantic relationships of scenes across shots/clips), to define a holistic self-supervised loss. Training models using different variants of the proposed framework on videos from the YouTube-8M (YT8M) data set, we obtain state-of-the-art self-supervised transfer learning results on the 19 diverse downstream tasks of the Visual Task Adaptation Benchmark (VTAB), using only 1000 labels per task. We then show how to co-train our models jointly with labeled images, outperforming an ImageNet-pretrained ResNet-50 by 0.8 points with 10x fewer labeled images, as well as the previous best supervised model by 3.7 points using the full ImageNet data set. | [] | [
"Image Classification",
"Self-Supervised Learning",
"Transfer Learning"
] | [] | [
"VTAB-1k"
] | [
"Top-1 Accuracy"
] | Self-Supervised Learning of Video-Induced Visual Invariances |
The incorporation of pseudo data in the training of grammatical error correction models has been one of the main factors in improving the performance of such models. However, consensus is lacking on experimental configurations, namely, choosing how the pseudo data should be generated or used. In this study, these choices are investigated through extensive experiments, and state-of-the-art performance is achieved on the CoNLL-2014 test set ($F_{0.5}=65.0$) and the official test set of the BEA-2019 shared task ($F_{0.5}=70.2$) without making any modifications to the model architecture. | [] | [
"Grammatical Error Correction"
] | [] | [
"CoNLL-2014 Shared Task",
"BEA-2019 (test)"
] | [
"F0.5"
] | An Empirical Study of Incorporating Pseudo Data into Grammatical Error Correction |
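The abstract above centers on choices for generating and using pseudo data. As one illustration of a common generation recipe in this line of work (direct noising of clean text), the sketch below randomly drops and swaps tokens; the probabilities and the example sentence are assumptions, not the paper's specific setup.

```python
import random

# Minimal sketch of noising-based pseudo-data generation for GEC-style training:
# synthetic "erroneous" source sentences are produced from clean target text.
def noise_sentence(tokens, rng, p_drop=0.1, p_swap=0.1):
    noisy = [t for t in tokens if rng.random() > p_drop]      # random deletions
    for i in range(len(noisy) - 1):                           # local swaps
        if rng.random() < p_swap:
            noisy[i], noisy[i + 1] = noisy[i + 1], noisy[i]
    return noisy

rng = random.Random(0)
clean = "this paper studies pseudo data for grammatical error correction".split()
print(" ".join(noise_sentence(clean, rng)))
```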
In this paper, we present a novel method named RECON that automatically identifies relations in a sentence (sentential relation extraction) and aligns them to a knowledge graph (KG). RECON uses a graph neural network to learn representations of both the sentence and the facts stored in a KG, improving the overall extraction quality. These facts, including entity attributes (label, alias, description, instance-of) and factual triples, have not been collectively used in state-of-the-art methods. We evaluate the effect of various forms of representing the KG context on the performance of RECON. The empirical evaluation on two standard relation extraction datasets shows that RECON significantly outperforms all state-of-the-art methods on the NYT Freebase and Wikidata datasets. RECON reports an F1 score of 87.23 (vs. the 82.29 baseline) on the Wikidata dataset, whereas on NYT Freebase the reported values are 87.5 (P@10) and 74.1 (P@30), compared to the previous baseline scores of 81.3 (P@10) and 63.1 (P@30). | [] | [
"Relation Extraction"
] | [] | [
"New York Times Corpus"
] | [
"P@30%",
"P@10%"
] | RECON: Relation Extraction using Knowledge Graph Context in a Graph Neural Network |
To answer the question in a machine comprehension (MC) task, the model needs to
establish the interaction between the question and the context. To tackle the
problem that the single-pass model cannot reflect on and correct its answer, we
present Ruminating Reader. Ruminating Reader adds a second pass of attention
and a novel information fusion component to the Bi-Directional Attention Flow
model (BiDAF). We propose novel layer structures that construct a query-aware
context vector representation and fuse the encoding representation with the
intermediate representation on top of the BiDAF model. We show that a multi-hop
attention mechanism can be applied to a bi-directional attention structure. In
experiments on SQuAD, we find that Ruminating Reader outperforms the BiDAF baseline by
a substantial margin, and matches or surpasses the performance of all other
published systems. | [] | [
"Question Answering",
"Reading Comprehension"
] | [] | [
"SQuAD1.1 dev",
"SQuAD1.1"
] | [
"EM",
"F1"
] | Ruminating Reader: Reasoning with Gated Multi-Hop Attention |
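As a hedged sketch of the fusion idea described above, the snippet below uses a sigmoid gate to decide, per dimension, how much of a first-pass encoding to keep versus a second-pass, query-aware summary. The dimensions and the exact gating form are assumptions, not the paper's precise layer definitions.

```python
import torch

# Minimal sketch of gated fusion between two representations of the context.
class GatedFusion(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = torch.nn.Linear(2 * dim, dim)

    def forward(self, encoding, summary):
        g = torch.sigmoid(self.gate(torch.cat([encoding, summary], dim=-1)))
        return g * encoding + (1.0 - g) * summary

fuse = GatedFusion(dim=128)
enc, summ = torch.randn(2, 40, 128), torch.randn(2, 40, 128)
print(fuse(enc, summ).shape)          # torch.Size([2, 40, 128])
```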
Graph classification is a significant problem in many scientific domains. It
addresses tasks such as the classification of proteins and chemical compounds
into categories according to their functions, or chemical and structural
properties. In a supervised setting, this problem can be framed as learning the
structure, features and relationships between features within a set of labelled
graphs and being able to correctly predict the labels or categories of unseen
graphs.
A significant difficulty in this task arises when attempting to apply
established classification algorithms, due to their requirement for fixed-size
matrix or tensor representations of graphs that may vary greatly in their
numbers of nodes and edges. Building on prior work combining explicit tensor
representations with a standard image-based classifier, we propose a model to
perform graph classification by extracting fixed size tensorial information
from each graph in a given set, and using a Capsule Network to perform
classification.
The graphs we consider here are undirected and with categorical features on
the nodes. Using standard benchmarking chemical and protein datasets, we
demonstrate that our graph Capsule Network classification model using an
explicit tensorial representation of the graphs is competitive with current
state of the art graph kernels and graph neural network models despite only
limited hyper-parameter searching. | [] | [
"Graph Classification"
] | [] | [
"NCI109",
"ENZYMES",
"PROTEINS",
"D&D",
"NCI1",
"MUTAG",
"PTC"
] | [
"Accuracy"
] | Capsule Neural Networks for Graph Classification using Explicit Tensorial Graph Representations |
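The abstract above hinges on extracting fixed-size tensorial information from variable-size graphs before classification. The sketch below shows one simple padding-based construction with one adjacency channel plus one channel per categorical node label; the ordering and padding scheme are assumptions, not the paper's exact representation.

```python
import numpy as np

# Minimal sketch: build a fixed-size tensor from a small undirected graph with
# categorical node features, by zero-padding to max_nodes.
def graph_to_tensor(adj, node_labels, max_nodes, num_labels):
    n = adj.shape[0]
    tensor = np.zeros((1 + num_labels, max_nodes, max_nodes))
    tensor[0, :n, :n] = adj
    for i, lab in enumerate(node_labels):                # label channels on the diagonal
        tensor[1 + lab, i, i] = 1.0
    return tensor

adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
labels = [0, 2, 1]
print(graph_to_tensor(adj, labels, max_nodes=5, num_labels=3).shape)   # (4, 5, 5)
```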
Objects moving at high speed along complex trajectories often appear in videos, especially videos of sports. Such objects cover a non-negligible distance during the exposure time of a single frame, and therefore their position in the frame is not well defined. They appear as semi-transparent streaks due to motion blur and cannot be reliably tracked by standard trackers. We propose a novel approach called Tracking by Deblatting, based on the observation that motion blur is directly related to the intra-frame trajectory of an object. Blur is estimated by solving two intertwined inverse problems, blind deblurring and image matting, which we call deblatting. The trajectory is then estimated by fitting a piecewise quadratic curve, which models physically justifiable trajectories. As a result, tracked objects are precisely localized with higher temporal resolution than by conventional trackers. The proposed TbD tracker was evaluated on a newly created dataset of videos with ground truth obtained by a high-speed camera, using a novel Trajectory-IoU metric that generalizes the traditional Intersection over Union and measures the accuracy of the intra-frame trajectory. The proposed method outperforms the baseline in both recall and trajectory accuracy. | [] | [
"Deblurring",
"Image Matting",
"Object Tracking"
] | [] | [
"Falling Objects",
"TbD",
"TbD-3D"
] | [
"SSIM",
"TIoU",
"PSNR"
] | Intra-frame Object Tracking by Deblatting |
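To make the blur-trajectory relationship above concrete, the sketch below renders the image formation model that deblatting inverts, with notation in the style of the fast-moving-object literature: I = H * F + (1 - H * M) B, where H is the blur kernel tracing the intra-frame trajectory, F and M are the object's appearance and mask, and B is the background. The synthetic inputs are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import fftconvolve

# Minimal sketch of the forward (rendering) model; deblatting solves the inverse.
def render_blurred_frame(H, F, M, B):
    blurred_obj = fftconvolve(H, F, mode="same")
    blurred_mask = fftconvolve(H, M, mode="same")
    return blurred_obj + (1.0 - np.clip(blurred_mask, 0.0, 1.0)) * B

H = np.zeros((64, 64)); H[32, 20:44] = 1.0 / 24.0   # horizontal streak, normalised
F = np.zeros((64, 64)); F[28:36, 28:36] = 0.8       # bright square object
M = (F > 0).astype(float)
B = np.full((64, 64), 0.2)                          # flat grey background
print(render_blurred_frame(H, F, M, B).shape)       # (64, 64)
```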
Semantic segmentation of 3D point cloud data is essential for enhanced high-level perception in autonomous platforms. Furthermore, given the increasing deployment of LiDAR sensors onboard cars and drones, a special emphasis is also placed on non-computationally intensive algorithms that operate on mobile GPUs. Previous efficient state-of-the-art methods relied on 2D spherical projection of point clouds as input for 2D fully convolutional neural networks to balance the accuracy-speed trade-off. This paper introduces a novel approach for 3D point cloud semantic segmentation that exploits multiple projections of the point cloud to mitigate the loss of information inherent in single-projection methods. Our Multi-Projection Fusion (MPF) framework analyzes spherical and bird's-eye view projections using two separate highly efficient 2D fully convolutional models and then combines the segmentation results of both views. The proposed framework is validated on the SemanticKITTI dataset, where it achieves an mIoU of 55.5, which is higher than the state-of-the-art projection-based methods RangeNet++ and PolarNet, while being 1.6x faster than the former and 3.1x faster than the latter. | [] | [
"Real-Time Semantic Segmentation",
"Semantic Segmentation"
] | [] | [
"SemanticKITTI"
] | [
"Speed (FPS)",
"mIOU",
"mIoU"
] | Multi Projection Fusion for Real-time Semantic Segmentation of 3D LiDAR Point Clouds |
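One of the two views fused above is a spherical (range-image) projection of the point cloud. The sketch below shows a standard version of that projection; the image size and vertical field of view roughly match a 64-beam LiDAR but are assumptions, not the paper's exact settings.

```python
import numpy as np

# Minimal sketch of a spherical range-image projection of a LiDAR point cloud.
def spherical_projection(points, H=64, W=2048, fov_up=3.0, fov_down=-25.0):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points[:, :3], axis=1) + 1e-8
    yaw, pitch = np.arctan2(y, x), np.arcsin(z / r)
    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)
    u = 0.5 * (1.0 - yaw / np.pi) * W                         # column from azimuth
    v = (1.0 - (pitch - fov_down) / (fov_up - fov_down)) * H  # row from elevation
    u = np.clip(np.floor(u), 0, W - 1).astype(int)
    v = np.clip(np.floor(v), 0, H - 1).astype(int)
    image = np.zeros((H, W))
    image[v, u] = r                                           # store range per pixel
    return image

cloud = np.random.uniform(-20, 20, size=(1000, 3))
print(spherical_projection(cloud).shape)                      # (64, 2048)
```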
Recognizing text from natural images is a hot research topic in computer
vision due to its various applications. Despite several decades of research on
optical character recognition (OCR), recognizing text from natural images is
still a challenging task. This is because scene texts are
often in irregular (e.g. curved, arbitrarily-oriented or seriously distorted)
arrangements, which have not yet been well addressed in the literature.
Existing methods on text recognition mainly work with regular (horizontal and
frontal) texts and cannot be trivially generalized to handle irregular texts.
In this paper, we develop the arbitrary orientation network (AON) to directly
capture the deep features of irregular texts, which are then fed into an
attention-based decoder to generate the character sequence. The whole network can
be trained end-to-end by using only images and word-level annotations.
Extensive experiments on various benchmarks, including the CUTE80,
SVT-Perspective, IIIT5k, SVT and ICDAR datasets, show that the proposed
AON-based method achieves state-of-the-art performance on irregular
datasets and is comparable to major existing methods on regular datasets.
"Optical Character Recognition"
] | [] | [
"ICDAR2015",
"ICDAR 2003"
] | [
"Accuracy"
] | AON: Towards Arbitrarily-Oriented Text Recognition |
We aim to detect all instances of a category in an image and, for each
instance, mark the pixels that belong to it. We call this task Simultaneous
Detection and Segmentation (SDS). Unlike classical bounding box detection, SDS
requires a segmentation and not just a box. Unlike classical semantic
segmentation, we require individual object instances. We build on recent work
that uses convolutional neural networks to classify category-independent region
proposals (R-CNN [16]), introducing a novel architecture tailored for SDS. We
then use category-specific, top-down figure-ground predictions to refine our
bottom-up proposals. We show a 7 point boost (16% relative) over our baselines
on SDS, a 5 point boost (10% relative) over state-of-the-art on semantic
segmentation, and state-of-the-art performance in object detection. Finally, we
provide diagnostic tools that unpack performance and provide directions for
future work. | [] | [
"Object Detection",
"Semantic Segmentation"
] | [] | [
"PASCAL VOC 2012",
"PASCAL VOC 2012 test"
] | [
"Mean IoU",
"MAP"
] | Simultaneous Detection and Segmentation |
We investigate a new commonsense inference task: given an event described in a short free-form text ("X drinks coffee in the morning"), a system reasons about the likely intents ("X wants to stay awake") and reactions ("X feels alert") of the event's participants. To support this study, we construct a new crowdsourced corpus of 25,000 event phrases covering a diverse range of everyday events and situations. We report baseline performance on this task, demonstrating that neural encoder-decoder models can successfully compose embedding representations of previously unseen events and reason about the likely intents and reactions of the event participants. In addition, we demonstrate how commonsense inference on people's intents and reactions can help unveil the implicit gender inequality prevalent in modern movie scripts. | [] | [
"Common Sense Reasoning"
] | [] | [
"Event2Mind test",
"Event2Mind dev"
] | [
"Average Cross-Ent"
] | Event2Mind: Commonsense Inference on Events, Intents, and Reactions |