abstract (string, lengths 13–4.33k) | field (sequence) | task (sequence) | method (sequence) | dataset (sequence) | metric (sequence) | title (string, lengths 10–194)
---|---|---|---|---|---|---
Multi-Target Multi-Camera Tracking (MTMCT) tracks many people through video
taken from several cameras. Person Re-Identification (Re-ID) retrieves from a
gallery images of people similar to a person query image. We learn good
features for both MTMCT and Re-ID with a convolutional neural network. Our
contributions include an adaptive weighted triplet loss for training and a new
technique for hard-identity mining. Our method outperforms the state of the art
both on the DukeMTMC benchmarks for tracking, and on the Market-1501 and
DukeMTMC-ReID benchmarks for Re-ID. We examine the correlation between good
Re-ID and good MTMCT scores, and perform ablation studies to elucidate the
contributions of the main components of our system. Code is available. | [] | [
"Person Re-Identification"
] | [] | [
"Market-1501"
] | [
"Rank-1",
"MAP"
] | Features for Multi-Target Multi-Camera Tracking and Re-Identification |
Joint entity and relation extraction aims to detect entities and relations using a single model. In this paper, we present a novel unified joint extraction model which directly tags entity and relation labels according to a query word position p, i.e., detecting the entity at p and identifying entities at other positions that have a relationship with it. To this end, we first design a tagging scheme to generate n tag sequences for an n-word sentence. Then a position-attention mechanism is introduced to produce different sentence representations for every query position to model these n tag sequences. In this way, our method can simultaneously extract all entities and their types, as well as all overlapping relations. Experimental results show that our framework performs significantly better on extracting overlapping relations as well as detecting long-range relations, and thus we achieve state-of-the-art performance on two public datasets. | [] | [
"Joint Entity and Relation Extraction",
"Relation Extraction"
] | [] | [
"NYT",
"NYT-single"
] | [
"F1"
] | Joint extraction of entities and overlapping relations using position-attentive sequence labeling |
In the past few years, the field of computer vision has gone through a
revolution fueled mainly by the advent of large datasets and the adoption of
deep convolutional neural networks for end-to-end learning. The person
re-identification subfield is no exception to this. Unfortunately, a prevailing
belief in the community seems to be that the triplet loss is inferior to using
surrogate losses (classification, verification) followed by a separate metric
learning step. We show that, for models trained from scratch as well as
pretrained ones, using a variant of the triplet loss to perform end-to-end deep
metric learning outperforms most other published methods by a large margin. | [] | [
"Metric Learning",
"Person Re-Identification"
] | [] | [
"MARS",
"DukeMTMC-reID",
"Market-1501",
"CUHK03"
] | [
"Rank-1",
"mAP",
"Rank-5",
"MAP"
] | In Defense of the Triplet Loss for Person Re-Identification |
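The abstract above argues for end-to-end deep metric learning with a variant of the triplet loss; the best-known variant from this line of work is the batch-hard formulation. The PyTorch snippet below is a minimal sketch of that idea, not the authors' released code; the margin value and the use of Euclidean distances are assumptions.

```python
import torch

def batch_hard_triplet_loss(embeddings, labels, margin=0.3):
    """Batch-hard triplet loss: for each anchor, use the hardest positive
    and hardest negative within the mini-batch (a sketch, not the paper's code)."""
    # Pairwise Euclidean distance matrix (B x B).
    dist = torch.cdist(embeddings, embeddings, p=2)

    same_id = labels.unsqueeze(0) == labels.unsqueeze(1)          # B x B bool
    pos_mask = same_id.float() - torch.eye(len(labels), device=labels.device)
    neg_mask = (~same_id).float()

    hardest_pos = (dist * pos_mask).max(dim=1).values             # farthest same-ID sample
    # Mask out same-ID entries with a large constant before taking the closest negative.
    hardest_neg = (dist + (1.0 - neg_mask) * 1e9).min(dim=1).values

    return torch.clamp(hardest_pos - hardest_neg + margin, min=0.0).mean()

# Example: 8 embeddings of dim 128, 4 identities with 2 images each.
emb = torch.randn(8, 128, requires_grad=True)
ids = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
loss = batch_hard_triplet_loss(emb, ids)
loss.backward()
```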
In this paper, we propose a novel method called AlignedReID that extracts a
global feature which is jointly learned with local features. Global feature
learning benefits greatly from local feature learning, which performs an
alignment/matching by calculating the shortest path between two sets of local
features, without requiring extra supervision. After the joint learning, we
only keep the global feature to compute the similarities between images. Our
method achieves rank-1 accuracy of 94.4% on Market1501 and 97.8% on CUHK03,
outperforming state-of-the-art methods by a large margin. We also evaluate
human-level performance and demonstrate that our method is the first to surpass
human-level performance on Market1501 and CUHK03, two widely used Person ReID
datasets. | [] | [
"Person Re-Identification"
] | [] | [
"CUHK-SYSU",
"Market-1501",
"CUHK03"
] | [
"Rank-1",
"Rank-10",
"Rank-5",
"MAP"
] | AlignedReID: Surpassing Human-Level Performance in Person Re-Identification |
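AlignedReID's local branch aligns two sequences of horizontal-stripe features by finding the shortest monotone path through their pairwise distance matrix. The NumPy dynamic program below is an illustrative sketch of that alignment step, assuming each image is described by H stripe features; the exponential squashing of distances follows the kind of normalization the paper describes, but exact details may differ.

```python
import numpy as np

def shortest_path_alignment(local_a, local_b):
    """Align two sets of local (stripe) features by dynamic programming.

    local_a, local_b: arrays of shape (H, C) holding per-stripe features.
    Returns the cost of the monotone alignment path from (0, 0) to (H-1, H-1).
    """
    # Pairwise Euclidean distances between stripes (H x H).
    d = np.linalg.norm(local_a[:, None, :] - local_b[None, :, :], axis=-1)
    # Squash distances into (-1, 1); the paper uses a similar exponential normalization.
    d = (np.exp(d) - 1.0) / (np.exp(d) + 1.0)

    H = d.shape[0]
    cost = np.full((H, H), np.inf)
    cost[0, 0] = d[0, 0]
    for i in range(H):
        for j in range(H):
            if i == 0 and j == 0:
                continue
            best_prev = min(
                cost[i - 1, j] if i > 0 else np.inf,   # move down
                cost[i, j - 1] if j > 0 else np.inf,   # move right
            )
            cost[i, j] = d[i, j] + best_prev
    return cost[H - 1, H - 1]

# Toy usage: 7 stripes with 256-dim features per image.
a, b = np.random.rand(7, 256), np.random.rand(7, 256)
print(shortest_path_alignment(a, b))
```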
In this paper, we propose quantized densely connected U-Nets for efficient
visual landmark localization. The idea is that features of the same semantic
meanings are globally reused across the stacked U-Nets. This dense connectivity
largely improves the information flow, yielding improved localization accuracy.
However, a vanilla dense design would suffer from a critical efficiency issue in
both training and testing. To solve this problem, we first propose order-K
dense connectivity to trim off long-distance shortcuts; then, we use a
memory-efficient implementation to significantly boost the training efficiency
and investigate an iterative refinement that may slice the model size in half.
Finally, to reduce the memory consumption and high precision operations both in
training and testing, we further quantize weights, inputs, and gradients of our
localization network to low bit-width numbers. We validate our approach in two
tasks: human pose estimation and face alignment. The results show that our
approach achieves state-of-the-art localization accuracy while using ~70% fewer
parameters, ~98% less model size, and ~75% less training memory compared with
other benchmark localizers. The code is available at
https://github.com/zhiqiangdon/CU-Net. | [] | [
"Face Alignment",
"Pose Estimation"
] | [] | [
"MPII Human Pose"
] | [
"PCKh-0.5"
] | Quantized Densely Connected U-Nets for Efficient Landmark Localization |
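The final step described above quantizes weights, inputs, and gradients to low bit-width numbers. The snippet below sketches the standard symmetric uniform quantizer that such schemes build on (scale, round, clamp, de-scale); the 4-bit setting and the straight-through estimator are assumptions, and the paper's exact quantization rule may differ.

```python
import torch

def uniform_quantize(x, num_bits=4):
    """Symmetric uniform quantization to `num_bits` with a straight-through
    estimator so gradients flow through the rounding (a generic sketch)."""
    qmax = 2 ** (num_bits - 1) - 1                       # e.g. 7 for 4 bits
    scale = x.detach().abs().max().clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(x / scale), -qmax, qmax) * scale
    # Straight-through estimator: forward uses q, backward sees identity.
    return x + (q - x).detach()

w = torch.randn(64, 64, requires_grad=True)
w_q = uniform_quantize(w, num_bits=4)
print(w_q.unique().numel(), "distinct values after quantization")
```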
With the advances in capturing 2D or 3D skeleton data, skeleton-based action recognition has received increasing interest in recent years. As skeleton data is commonly represented by graphs, graph convolutional networks have been proposed for this task. While current graph convolutional networks accurately recognize actions, they are too expensive for robotics applications where limited computational resources are available. In this paper, we therefore propose a highly efficient graph convolutional network that addresses the limitations of previous works. This is achieved by a parallel structure that gradually fuses motion and spatial information and by reducing the temporal resolution as early as possible. Furthermore, we explicitly address the issue that human poses can contain errors. To this end, the network first refines the poses before they are further processed to recognize the action. We therefore call the network Pose Refinement Graph Convolutional Network. Compared to other graph convolutional networks, our network requires 86%-93% fewer parameters and reduces the floating point operations by 89%-96% while achieving a comparable accuracy. It therefore provides a much better trade-off between accuracy, memory footprint and processing time, which makes it suitable for robotics applications. | [] | [
"Action Recognition",
"Skeleton Based Action Recognition"
] | [] | [
"NTU RGB+D",
"Kinetics-Skeleton dataset"
] | [
"Accuracy (CS)",
"Accuracy (CV)",
"Accuracy"
] | Pose Refinement Graph Convolutional Network for Skeleton-based Action Recognition |
In this work, we propose a novel segmental hypergraph representation to model
overlapping entity mentions that are prevalent in many practical datasets. We
show that our model built on top of such a new representation is able to
capture features and interactions that cannot be captured by previous models
while maintaining a low time complexity for inference. We also present a
theoretical analysis to formally assess how our representation is better than
alternative representations reported in the literature in terms of
representational power. Coupled with neural networks for feature learning, our
model achieves the state-of-the-art performance in three benchmark datasets
annotated with overlapping mentions. | [] | [
"Named Entity Recognition",
"Nested Mention Recognition",
"Nested Named Entity Recognition",
"Overlapping Mention Recognition"
] | [] | [
"GENIA",
"ACE 2005",
"ACE 2004"
] | [
"F1"
] | Neural Segmental Hypergraphs for Overlapping Mention Recognition |
Sentence splitting is a major simplification operator. Here we present a
simple and efficient splitting algorithm based on an automatic semantic parser.
After splitting, the text is amenable for further fine-tuned simplification
operations. In particular, we show that neural Machine Translation can be
effectively used in this situation. Previous applications of Machine Translation
for simplification suffer from a considerable disadvantage in that they are
over-conservative, often failing to modify the source in any way. Splitting
based on semantic parsing, as proposed here, alleviates this issue. Extensive
automatic and human evaluation shows that the proposed method compares
favorably to the state-of-the-art in combined lexical and structural
simplification. | [] | [
"Machine Translation",
"Semantic Parsing",
"Text Simplification"
] | [] | [
"TurkCorpus"
] | [
"BLEU",
"SARI (EASSE>=0.2.1)"
] | Simple and Effective Text Simplification Using Semantic and Neural Methods |
Referring object detection and referring image segmentation are important
tasks that require joint understanding of visual information and natural
language. Yet there has been evidence that current benchmark datasets suffer
from bias, and current state-of-the-art models cannot be easily evaluated on
their intermediate reasoning process. To address these issues and complement
similar efforts in visual question answering, we build CLEVR-Ref+, a synthetic
diagnostic dataset for referring expression comprehension. The precise
locations and attributes of the objects are readily available, and the
referring expressions are automatically associated with functional programs.
The synthetic nature allows control over dataset bias (through sampling
strategy), and the modular programs enable intermediate reasoning ground truth
without human annotators.
In addition to evaluating several state-of-the-art models on CLEVR-Ref+, we
also propose IEP-Ref, a module network approach that significantly outperforms
other models on our dataset. In particular, we present two interesting and
important findings using IEP-Ref: (1) the module trained to transform feature
maps into segmentation masks can be attached to any intermediate module to
reveal the entire reasoning process step-by-step; (2) even if all training data
has at least one object referred, IEP-Ref can correctly predict no-foreground
when presented with false-premise referring expressions. To the best of our
knowledge, this is the first direct and quantitative proof that neural modules
behave in the way they are intended. | [] | [
"Object Detection",
"Question Answering",
"Referring Expression Comprehension",
"Referring Expression Segmentation",
"Semantic Segmentation",
"Visual Question Answering",
"Visual Reasoning"
] | [] | [
"CLEVR-Ref+"
] | [
"IoU"
] | CLEVR-Ref+: Diagnosing Visual Reasoning with Referring Expressions |
Siamese network based trackers formulate tracking as convolutional feature
cross-correlation between target template and searching region. However,
Siamese trackers still have an accuracy gap compared with state-of-the-art
algorithms and they cannot take advantage of features from deep networks, such
as ResNet-50 or deeper. In this work we prove the core reason comes from the
lack of strict translation invariance. By comprehensive theoretical analysis
and experimental validations, we break this restriction through a simple yet
effective spatial aware sampling strategy and successfully train a
ResNet-driven Siamese tracker with significant performance gain. Moreover, we
propose a new model architecture to perform depth-wise and layer-wise
aggregations, which not only further improves the accuracy but also reduces the
model size. We conduct extensive ablation studies to demonstrate the
effectiveness of the proposed tracker, which obtains currently the best results
on four large tracking benchmarks, including OTB2015, VOT2018, UAV123, and
LaSOT. Our model will be released to facilitate further studies based on this
problem. | [] | [
"Visual Object Tracking",
"Visual Tracking"
] | [] | [
"TrackingNet",
"VOT2017/18"
] | [
"Normalized Precision",
"Precision",
"Expected Average Overlap (EAO)",
"Accuracy"
] | SiamRPN++: Evolution of Siamese Visual Tracking with Very Deep Networks |
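Siamese trackers of this family score the search region by cross-correlating it with the template features; the depth-wise variant correlates each channel separately. Below is a minimal PyTorch sketch of depth-wise cross-correlation using grouped convolution; the feature extractor and RPN heads are omitted, and the tensor sizes are illustrative rather than taken from the paper.

```python
import torch
import torch.nn.functional as F

def depthwise_xcorr(search_feat, template_feat):
    """Depth-wise cross-correlation: treat the template as a per-channel kernel.

    search_feat:   (B, C, Hs, Ws) features of the search region
    template_feat: (B, C, Ht, Wt) features of the target template
    returns:       (B, C, Hs-Ht+1, Ws-Wt+1) response maps
    """
    b, c, hs, ws = search_feat.shape
    # Fold the batch into the channel dimension so one grouped conv handles all pairs.
    x = search_feat.reshape(1, b * c, hs, ws)
    kernel = template_feat.reshape(b * c, 1, *template_feat.shape[2:])
    out = F.conv2d(x, kernel, groups=b * c)
    return out.reshape(b, c, out.shape[-2], out.shape[-1])

search = torch.randn(2, 256, 31, 31)      # search-region features
template = torch.randn(2, 256, 7, 7)      # template features
print(depthwise_xcorr(search, template).shape)   # torch.Size([2, 256, 25, 25])
```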
The problem of 3D layout recovery in indoor scenes has been a core research
topic for over a decade. However, there are still several major challenges that
remain unsolved. Among the most relevant ones, a major part of the
state-of-the-art methods make implicit or explicit assumptions on the scenes --
e.g. box-shaped or Manhattan layouts. Also, current methods are computationally
expensive and not suitable for real-time applications like robot navigation and
AR/VR. In this work we present CFL (Corners for Layout), the first end-to-end
model for 3D layout recovery on 360 images. Our experimental results show that
we outperform the state of the art while relaxing assumptions about the scene and at
a lower cost. We also show that our model generalizes better to camera position
variations than conventional approaches by using EquiConvs, a type of
convolution applied directly on the sphere projection and hence invariant to
the equirectangular distortions.
CFL Webpage: https://cfernandezlab.github.io/CFL/ | [] | [
"3D Room Layouts From A Single RGB Panorama",
"Robot Navigation"
] | [] | [
"PanoContext"
] | [
"3DIoU"
] | Corners for Layout: End-to-End Layout Recovery from 360 Images |
Most state-of-the-art methods for action recognition consist of a two-stream architecture with 3D convolutions: an appearance stream for RGB frames and a motion stream for optical flow frames. Although combining flow with RGB improves the performance, the cost of computing accurate optical flow is high, and increases action recognition latency. This limits the usage of two-stream approaches in real-world applications requiring low latency. In this paper, we introduce two learning approaches to train a standard 3D CNN, operating on RGB frames, that mimics the motion stream, and as a result avoids flow computation at test time. First, by minimizing a feature-based loss compared to the Flow stream, we show that the network reproduces the motion stream with high fidelity. Second, to leverage both appearance and motion information effectively, we train with a linear combination of the feature-based loss and the standard cross-entropy loss for action recognition. We denote the stream trained using this combined loss as Motion-Augmented RGB Stream (MARS). As a single stream, MARS performs better than RGB or Flow alone, for instance with 72.7% accuracy on Kinetics compared to 72.0% and 65.6% with RGB and Flow streams respectively.
| [] | [
"Action Classification",
"Action Recognition",
"Optical Flow Estimation",
"Temporal Action Localization"
] | [] | [
"Kinetics-400",
"UCF101",
"MiniKinetics",
"Something-Something V1",
"HMDB-51"
] | [
"3-fold Accuracy",
"Top 1 Accuracy",
"Top-1 Accuracy",
"Average accuracy of 3 splits",
"Vid acc@1"
] | MARS: Motion-Augmented RGB Stream for Action Recognition |
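The second training approach described above optimizes a linear combination of a feature-mimicking loss (matching the frozen Flow stream's features) and the usual cross-entropy on action labels. The PyTorch sketch below shows that combined objective under assumptions the abstract does not spell out: the weighting factor and the use of an MSE feature loss are placeholders.

```python
import torch
import torch.nn.functional as F

def mars_style_loss(rgb_features, rgb_logits, flow_features, labels, alpha=50.0):
    """Combined objective: cross-entropy on action labels plus a feature loss
    that pushes the RGB stream's features toward the (frozen) Flow stream's."""
    ce = F.cross_entropy(rgb_logits, labels)
    mimic = F.mse_loss(rgb_features, flow_features.detach())   # no grad into the Flow net
    return ce + alpha * mimic

# Toy shapes: batch of 4 clips, 512-d features, 400 action classes.
rgb_feat, flow_feat = torch.randn(4, 512, requires_grad=True), torch.randn(4, 512)
logits, labels = torch.randn(4, 400, requires_grad=True), torch.randint(0, 400, (4,))
print(mars_style_loss(rgb_feat, logits, flow_feat, labels))
```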
The task of person re-identification (ReID) has attracted growing attention in recent years, leading to improved performance, albeit with little focus on real-world applications. Most SotA methods are based on heavy pre-trained models, e.g. ResNet50 (~25M parameters), which makes them less practical and makes exploring architecture modifications more tedious. In this study, we focus on a small-sized randomly initialized model that enables us to easily introduce architecture and training modifications suitable for person ReID. The outcomes of our study are a compact network and a fitting training regime. We show the robustness of the network by outperforming the SotA on both Market1501 and DukeMTMC. Furthermore, we show the representation power of our ReID network via SotA results on a different task, multi-object tracking. | [] | [
"Multi-Object Tracking",
"Person Re-Identification"
] | [] | [
"DukeMTMC-reID",
"Market-1501"
] | [
"Rank-1",
"MAP"
] | Compact Network Training for Person ReID |
In this work, we present several deep learning models for the automatic diacritization of Arabic text. Our models are built using two main approaches, viz. Feed-Forward Neural Network (FFNN) and Recurrent Neural Network (RNN), with several enhancements such as 100-hot encoding, embeddings, Conditional Random Field (CRF) and Block-Normalized Gradient (BNG). The models are tested on the only freely available benchmark dataset and the results show that our models are either better or on par with other models, which require language-dependent post-processing steps, unlike ours. Moreover, we show that diacritics in Arabic can be used to enhance the models of NLP tasks such as Machine Translation (MT) by proposing the Translation over Diacritization (ToD) approach. | [] | [
"Arabic Text Diacritization",
"Machine Translation"
] | [] | [
"Tashkeela"
] | [
"Diacritic Error Rate",
"Word Error Rate (WER)"
] | Neural Arabic Text Diacritization: State of the Art Results and a Novel Approach for Machine Translation |
In general, sufficient data is essential for better performance and generalization of deep-learning models. However, many limitations of data collection (cost, resources, etc.) lead to a lack of data in most areas. In addition, the varied domains and licenses of data sources also make it difficult to collect sufficient data. This situation makes it hard to utilize not only pre-trained models but also external knowledge. Therefore, it is important to leverage a small dataset effectively to achieve better performance. We applied techniques in three aspects, data, loss function, and prediction, to enable training from scratch with less data. With these methods, we obtain high accuracy by leveraging ImageNet data that consist of only 50 images per class. Furthermore, our model is ranked 4th in the Visual Inductive Priors for Data-Efficient Computer Vision Challenge. | [] | [
"Data Augmentation",
"Image Classification",
"Object Classification"
] | [] | [
"ImageNet VIPriors subset"
] | [
"Top-1"
] | Data-Efficient Deep Learning Method for Image Classification Using Data Augmentation, Focal Cosine Loss, and Ensemble |
Click-Through Rate (CTR) prediction is one of the most important and challenging tasks in computational advertising and recommendation systems. To build a machine learning system with these data, it is important to properly model the interaction among features. However, many current works calculate the feature interactions in a simple way, such as an inner product or element-wise product. This paper aims to fully utilize the information between features and improve the performance of deep neural networks on the CTR prediction task. We propose a Feature Interaction based Neural Network (FINN) which is able to model feature interactions via a 3-dimensional relation tensor. FINN provides representations for the feature interactions at the bottom layer and uses the non-linearity of neural networks to model higher-order feature interactions. We evaluate our models on CTR prediction tasks against classical baselines and show that our deep FINN model outperforms other state-of-the-art deep models such as PNN and DeepFM. Evaluation results demonstrate that feature interaction contains significant information for better CTR prediction. They also indicate that our models can effectively learn the feature interactions and achieve better performance on real-world datasets. | [] | [
"Click-Through Rate Prediction",
"Recommendation Systems"
] | [] | [
"Criteo"
] | [
"Log Loss",
"AUC"
] | Feature Interaction based Neural Network for Click-Through Rate Prediction |
Temporal action detection in long videos is an important problem.
State-of-the-art methods address this problem by applying action classifiers on
sliding windows. Although sliding windows may contain an identifiable portion
of the actions, they may not necessarily cover the entire action instance,
which would lead to inferior performance. We adapt a two-stage temporal action
detection pipeline with a Cascaded Boundary Regression (CBR) model.
Class-agnostic proposals and specific actions are detected respectively in the
first and the second stage. CBR uses temporal coordinate regression to refine
the temporal boundaries of the sliding windows. The salient aspect of the
refinement process is that, inside each stage, the temporal boundaries are
adjusted in a cascaded way by feeding the refined windows back to the system
for further boundary refinement. We test CBR on THUMOS-14 and TVSeries, and
achieve state-of-the-art performance on both datasets. The performance gain is
especially remarkable under high IoU thresholds, e.g. mAP@tIoU=0.5 on THUMOS-14
is improved from 19.0% to 31.0%. | [] | [
"Action Detection",
"Regression"
] | [] | [
"THUMOS’14"
] | [
"mAP IOU@0.1",
"mAP IOU@0.2",
"mAP IOU@0.3",
"mAP IOU@0.4",
"mAP IOU@0.5",
"mAP IOU@0.6",
"mAP IOU@0.7"
] | Cascaded Boundary Regression for Temporal Action Detection |
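The core of CBR is feeding each refined temporal window back into the regressor for another round of boundary adjustment. The loop below is a schematic Python sketch of that cascade; `extract_features` and `boundary_regressor` are hypothetical stand-ins for the paper's feature extractor and regression head, and the number of cascade steps is an assumption.

```python
import numpy as np

def cascaded_boundary_regression(window, extract_features, boundary_regressor, steps=3):
    """Iteratively refine a temporal window (start, end) in seconds.

    `extract_features(start, end)` and `boundary_regressor(feat)` are hypothetical
    callables standing in for the model; the regressor returns offsets for both boundaries.
    """
    start, end = window
    for _ in range(steps):
        feat = extract_features(start, end)
        d_start, d_end = boundary_regressor(feat)       # predicted coordinate offsets
        start, end = start + d_start, end + d_end       # refined window is fed back
        start, end = min(start, end - 0.1), max(end, start + 0.1)  # keep a valid span
    return start, end

# Toy stand-ins so the sketch runs: fake features and a regressor that nudges boundaries.
fake_extract = lambda s, e: np.array([s, e, e - s])
fake_regress = lambda f: (0.2, -0.3)
print(cascaded_boundary_regression((10.0, 25.0), fake_extract, fake_regress))
```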
This work considers the problem of computing distances between structured objects such as undirected graphs, seen as probability distributions in a specific metric space. We consider a new transportation distance (i.e. that minimizes a total cost of transporting probability masses) that unveils the geometric nature of the structured objects space. Unlike Wasserstein or Gromov-Wasserstein metrics that focus solely and respectively on features (by considering a metric in the feature space) or structure (by seeing structure as a metric space), our new distance exploits jointly both information, and is consequently called Fused Gromov-Wasserstein (FGW). After discussing its properties and computational aspects, we show results on a graph classification task, where our method outperforms both graph kernels and deep graph convolutional networks. Exploiting further on the metric properties of FGW, interesting geometric objects such as Fr\'echet means or barycenters of graphs are illustrated and discussed in a clustering context. | [] | [
"Graph Classification",
"Graph Clustering",
"Time Series"
] | [] | [
"PROTEINS",
"MUTAG",
"ENZYMES",
"NCI1"
] | [
"Accuracy"
] | Optimal Transport for structured data with application on graphs |
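For a fixed coupling between two attributed graphs, the Fused Gromov-Wasserstein objective mixes a feature (Wasserstein-style) term with a structure (Gromov-Wasserstein-style) term, weighted by a trade-off parameter. The NumPy sketch below evaluates that objective for a given coupling; solving for the optimal coupling (e.g. with a conditional-gradient solver such as the one in the POT library) is left out, and the squared-loss choice is an assumption.

```python
import numpy as np

def fgw_cost(T, M, C1, C2, alpha=0.5):
    """Fused Gromov-Wasserstein objective for a fixed coupling T (a sketch).

    T:  (n, m) transport plan between the two graphs' node distributions
    M:  (n, m) pairwise feature distances between node attributes
    C1: (n, n), C2: (m, m) intra-graph structure (e.g. shortest-path) matrices
    """
    feature_term = np.sum(M * T)
    # Structure term: sum_{i,j,k,l} |C1[i,k] - C2[j,l]|^2 * T[i,j] * T[k,l]
    diff = C1[:, None, :, None] - C2[None, :, None, :]
    structure_term = np.einsum("ij,ijkl,kl->", T, diff ** 2, T)
    return (1 - alpha) * feature_term + alpha * structure_term

n, m = 4, 5
T = np.full((n, m), 1.0 / (n * m))                  # uniform coupling as a toy example
M = np.random.rand(n, m)
C1, C2 = np.random.rand(n, n), np.random.rand(m, m)
print(fgw_cost(T, M, C1, C2, alpha=0.5))
```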
We present models for encoding sentences into embedding vectors that
specifically target transfer learning to other NLP tasks. The models are
efficient and result in accurate performance on diverse transfer tasks. Two
variants of the encoding models allow for trade-offs between accuracy and
compute resources. For both variants, we investigate and report the
relationship between model complexity, resource consumption, the availability
of transfer task training data, and task performance. Comparisons are made with
baselines that use word level transfer learning via pretrained word embeddings
as well as baselines that do not use any transfer learning. We find that transfer
learning using sentence embeddings tends to outperform word level transfer.
With transfer learning via sentence embeddings, we observe surprisingly good
performance with minimal amounts of supervised training data for a transfer
task. We obtain encouraging results on Word Embedding Association Tests (WEAT)
targeted at detecting model bias. Our pre-trained sentence encoding models are
made freely available for download and on TF Hub. | [] | [
"Conversational Response Selection",
"Semantic Textual Similarity",
"Sentence Embeddings",
"Sentiment Analysis",
"Subjectivity Analysis",
"Text Classification",
"Transfer Learning",
"Word Embeddings"
] | [] | [
"CR",
"SST-2 Binary classification",
"PolyAI Reddit",
"MR",
"STS Benchmark",
"TREC-6",
"SUBJ",
"MPQA"
] | [
"Error",
"1-of-100 Accuracy",
"Pearson Correlation",
"Accuracy"
] | Universal Sentence Encoder |
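Since the abstract notes that the pre-trained encoders are distributed through TF Hub, a typical usage pattern looks like the sketch below. The module handle shown is the commonly used public one and is an assumption about which artifact to load; it may differ from the exact model the paper released.

```python
import numpy as np
import tensorflow_hub as hub

# Commonly used TF Hub handle for the Universal Sentence Encoder (assumed here).
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

sentences = [
    "The quick brown fox jumps over the lazy dog.",
    "A fast auburn fox leapt over a sleepy dog.",
    "Quantum computing uses qubits instead of bits.",
]
vectors = np.asarray(embed(sentences))            # shape: (3, 512)

# Cosine similarity between the first sentence and the others.
def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors[0], vectors[1]))   # paraphrases -> high similarity
print(cosine(vectors[0], vectors[2]))   # unrelated topic -> lower similarity
```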
Convolutional networks are the de-facto standard for analysing spatio-temporal
data such as images, videos, 3D shapes, etc. Whilst some of this data is
naturally dense (for instance, photos), many other data sources are inherently
sparse. Examples include pen-strokes forming on a piece of paper, or (colored)
3D point clouds that were obtained using a LiDAR scanner or RGB-D camera.
Standard "dense" implementations of convolutional networks are very inefficient
when applied on such sparse data. We introduce a sparse convolutional operation
tailored to processing sparse data that differs from prior work on sparse
convolutional networks in that it operates strictly on submanifolds, rather
than "dilating" the observation with every layer in the network. Our empirical
analysis of the resulting submanifold sparse convolutional networks shows that
they perform on par with state-of-the-art methods whilst requiring
substantially less computation. | [] | [
"3D Part Segmentation"
] | [] | [
"ShapeNet-Part"
] | [
"Instance Average IoU"
] | Submanifold Sparse Convolutional Networks |
Triple extraction is an essential task in information extraction for natural language processing and knowledge graph construction. In this paper, we revisit the end-to-end triple extraction task for sequence generation. Since generative triple extraction may struggle to capture long-term dependencies and generate unfaithful triples, we introduce a novel model, contrastive triple extraction with a generative transformer. Specifically, we introduce a single shared transformer module for encoder-decoder-based generation. To generate faithful results, we propose a novel triplet contrastive training objective. Moreover, we introduce two mechanisms to further improve model performance (i.e., batch-wise dynamic attention-masking and triple-wise calibration). Experimental results on three datasets (i.e., NYT, WebNLG, and MIE) show that our approach achieves better performance than that of baselines. | [] | [
"graph construction",
"Relation Extraction"
] | [] | [
"NYT",
"WebNLG"
] | [
"F1"
] | Contrastive Triple Extraction with Generative Transformer |
Dialogue Act (DA) classification is a challenging problem in dialogue
interpretation, which aims to attach semantic labels to utterances and
characterize the speaker's intention. Currently, many existing approaches
formulate the DA classification problem ranging from multi-classification to
structured prediction, which suffer from two limitations: a) these methods are
either handcrafted feature-based or have limited memories. b) adversarial
examples can't be correctly classified by traditional training methods. To
address these issues, in this paper we first cast the problem as a question
answering problem and propose an improved dynamic memory network with a
hierarchical pyramidal utterance encoder. Moreover, we apply adversarial
training to train our proposed model. We evaluate our model on two public
datasets, i.e., Switchboard dialogue act corpus and the MapTask corpus.
Extensive experiments show that our proposed model is not only robust, but also
achieves better performance when compared with some state-of-the-art baselines. | [] | [
"Dialogue Act Classification",
"Dialogue Interpretation",
"Structured Prediction"
] | [] | [
"Switchboard corpus"
] | [
"Accuracy"
] | Improved Dynamic Memory Network for Dialogue Act Classification with Adversarial Training |
Multi-hop reading comprehension focuses on one type of factoid question,
where a system needs to properly integrate multiple pieces of evidence to
correctly answer a question. Previous work approximates global evidence with
local coreference information, encoding coreference chains with DAG-styled GRU
layers within a gated-attention reader. However, coreference is limited in
providing information for rich inference. We introduce a new method for better
connecting global evidence, which forms more complex graphs compared to DAGs.
To perform evidence integration on our graphs, we investigate two recent graph
neural networks, namely graph convolutional network (GCN) and graph recurrent
network (GRN). Experiments on two standard datasets show that richer global
information leads to better answers. Our method performs better than all
published results on these datasets. | [] | [
"Multi-Hop Reading Comprehension",
"Question Answering",
"Reading Comprehension"
] | [] | [
"COMPLEXQUESTIONS",
"WikiHop"
] | [
"F1",
"Test"
] | Exploring Graph-structured Passage Representation for Multi-hop Reading Comprehension with Graph Neural Networks |
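Both evidence-integration variants mentioned above build on standard graph neural network updates. As a reference point, a single graph convolutional (GCN) propagation step over an evidence graph can be sketched as below; the symmetric normalization and the feature sizes are the usual Kipf-and-Welling choices rather than details taken from this paper.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W).

    A: (N, N) adjacency of the evidence graph, H: (N, d) node features,
    W: (d, d') trainable weights (random here for illustration).
    """
    A_hat = A + np.eye(A.shape[0])                      # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)              # ReLU

N, d, d_out = 6, 16, 16
A = (np.random.rand(N, N) > 0.6).astype(float)
A = np.maximum(A, A.T)                                  # make the graph undirected
H = np.random.randn(N, d)
W = np.random.randn(d, d_out) * 0.1
print(gcn_layer(A, H, W).shape)                         # (6, 16)
```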
Most Reading Comprehension methods limit themselves to queries which can be
answered using a single sentence, paragraph, or document. Enabling models to
combine disjoint pieces of textual evidence would extend the scope of machine
comprehension methods, but currently there exist no resources to train and test
this capability. We propose a novel task to encourage the development of models
for text understanding across multiple documents and to investigate the limits
of existing methods. In our task, a model learns to seek and combine evidence -
effectively performing multi-hop (alias multi-step) inference. We devise a
methodology to produce datasets for this task, given a collection of
query-answer pairs and thematically linked documents. Two datasets from
different domains are induced, and we identify potential pitfalls and devise
circumvention strategies. We evaluate two previously proposed competitive
models and find that one can integrate information across documents. However,
both models struggle to select relevant information, as providing documents
guaranteed to be relevant greatly improves their performance. While the models
outperform several strong baselines, their best accuracy reaches 42.9% compared
to human performance at 74.0% - leaving ample room for improvement. | [] | [
"Multi-Hop Reading Comprehension",
"Reading Comprehension"
] | [] | [
"WikiHop"
] | [
"Test"
] | Constructing Datasets for Multi-hop Reading Comprehension Across Documents |
In statistical relational learning, the link prediction problem is key to
automatically understand the structure of large knowledge bases. As in previous
studies, we propose to solve this problem through latent factorization.
However, here we make use of complex valued embeddings. The composition of
complex embeddings can handle a large variety of binary relations, among them
symmetric and antisymmetric relations. Compared to state-of-the-art models such
as Neural Tensor Network and Holographic Embeddings, our approach based on
complex embeddings is arguably simpler, as it only uses the Hermitian dot
product, the complex counterpart of the standard dot product between real
vectors. Our approach is scalable to large datasets as it remains linear in
both space and time, while consistently outperforming alternative approaches on
standard link prediction benchmarks. | [] | [
"Link Prediction",
"Relational Reasoning"
] | [] | [
"WN18RR",
"WN18",
"FB15k-237"
] | [
"Hits@10",
"MRR",
"Hits@3",
"Hits@1"
] | Complex Embeddings for Simple Link Prediction |
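The Hermitian dot product mentioned above gives ComplEx its scoring function: the real part of the trilinear product of the subject, relation, and conjugated object embeddings. A small NumPy sketch of that score, and its expansion into four real-valued terms as used in practice, follows; the embedding values are random placeholders.

```python
import numpy as np

def complex_score(e_s, w_r, e_o):
    """ComplEx score: Re(<e_s, w_r, conj(e_o)>) for complex-valued embeddings."""
    return np.real(np.sum(e_s * w_r * np.conj(e_o)))

def complex_score_real_form(s_re, s_im, r_re, r_im, o_re, o_im):
    """Equivalent score written with real/imaginary parts."""
    return (np.sum(s_re * r_re * o_re)
            + np.sum(s_im * r_re * o_im)
            + np.sum(s_re * r_im * o_im)
            - np.sum(s_im * r_im * o_re))

k = 50
rng = np.random.default_rng(0)
e_s, w_r, e_o = (rng.normal(size=k) + 1j * rng.normal(size=k) for _ in range(3))
print(np.isclose(
    complex_score(e_s, w_r, e_o),
    complex_score_real_form(e_s.real, e_s.imag, w_r.real, w_r.imag, e_o.real, e_o.imag),
))  # True: the two forms agree
```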
Text classification is a challenging problem which aims to identify the
category of texts. Recently, Capsule Networks (CapsNets) are proposed for image
classification. It has been shown that CapsNets have several advantages over
Convolutional Neural Networks (CNNs), while their validity in the domain of
text has been less explored. An effective method named deep compositional code
learning has been proposed lately. This method can save many parameters about
word embeddings without any significant sacrifices in performance. In this
paper, we introduce the Compositional Coding (CC) mechanism between capsules,
and we propose a new routing algorithm, which is based on k-means clustering
theory. Experiments conducted on eight challenging text classification datasets
show the proposed method achieves competitive accuracy compared to the
state-of-the-art approach with significantly fewer parameters. | [] | [
"Sentiment Analysis",
"Text Classification"
] | [] | [
"Yelp Fine-grained classification",
"Amazon Review Polarity",
"Yelp Binary classification",
"Yahoo! Answers",
"DBpedia",
"Amazon Review Full",
"AG News",
"Sogou News"
] | [
"Error",
"Accuracy"
] | Compositional coding capsule network with k-means routing for text classification |
Most of the currently successful source separation techniques use the magnitude spectrogram as input, and are therefore by default omitting part of the signal: the phase. To avoid omitting potentially useful information, we study the viability of using end-to-end models for music source separation --- which take into account all the information available in the raw audio signal, including the phase. Although during the last decades end-to-end music source separation has been considered almost unattainable, our results confirm that waveform-based models can perform similarly (if not better) than a spectrogram-based deep learning model. Namely: a Wavenet-based model we propose and Wave-U-Net can outperform DeepConvSep, a recent spectrogram-based deep learning model. | [] | [
"Music Source Separation"
] | [] | [
"MUSDB18"
] | [
"SDR (vocals)",
"SDR (other)",
"SDR (drums)",
"SDR (bass)"
] | End-to-end music source separation: is it possible in the waveform domain? |
Graph convolutional networks (GCNs) have been successfully applied in node classification tasks of network mining. However, most of these models based on neighborhood aggregation are usually shallow and lack the "graph pooling" mechanism, which prevents the model from obtaining adequate global information. In order to increase the receptive field, we propose a novel deep Hierarchical Graph Convolutional Network (H-GCN) for semi-supervised node classification. H-GCN first repeatedly aggregates structurally similar nodes to hyper-nodes and then refines the coarsened graph to the original to restore the representation for each node. Instead of merely aggregating one- or two-hop neighborhood information, the proposed coarsening procedure enlarges the receptive field for each node, hence more global information can be captured. The proposed H-GCN model shows strong empirical performance on various public benchmark graph datasets, outperforming state-of-the-art methods and acquiring up to 5.9% performance improvement in terms of accuracy. In addition, when only a few labeled samples are provided, our model gains substantial improvements. | [] | [
"Node Classification"
] | [] | [
"Cora with Public Split: fixed 20 nodes per class",
"CiteSeer with Public Split: fixed 20 nodes per class",
"PubMed with Public Split: fixed 20 nodes per class"
] | [
"Accuracy"
] | Hierarchical Graph Convolutional Networks for Semi-supervised Node Classification |
Collaborative filtering is widely used in modern recommender systems. Recent research shows that variational autoencoders (VAEs) yield state-of-the-art performance by integrating flexible representations from deep neural networks into latent variable models, mitigating limitations of traditional linear factor models. VAEs are typically trained by maximizing the likelihood (MLE) of users interacting with ground-truth items. While simple and often effective, MLE-based training does not directly maximize the recommendation-quality metrics one typically cares about, such as top-N ranking. In this paper we investigate new methods for training collaborative filtering models based on actor-critic reinforcement learning, to directly optimize the non-differentiable quality metrics of interest. Specifically, we train a critic network to approximate ranking-based metrics, and then update the actor network (represented here by a VAE) to directly optimize against the learned metrics. In contrast to traditional learning-to-rank methods that require re-running the optimization procedure for new lists, our critic-based method amortizes the scoring process with a neural network, and can directly provide the (approximate) ranking scores for new lists. Empirically, we show that the proposed methods outperform several state-of-the-art baselines, including recently-proposed deep learning approaches, on three large-scale real-world datasets. The code to reproduce the experimental results and figure plots is on Github: https://github.com/samlobel/RaCT_CF | [] | [
"Latent Variable Models",
"Learning-To-Rank",
"Recommendation Systems"
] | [] | [
"Netflix",
"MovieLens 20M",
"Million Song Dataset"
] | [
"Recall@50",
"Recall@20",
"nDCG@100"
] | Towards Amortized Ranking-Critical Training for Collaborative Filtering |
In this paper, we propose a wide contextual residual network (WCRN) with active learning (AL) for remote sensing image (RSI)
classification. Although ResNets have achieved great success in various applications (e.g. RSI classification), their performance is limited by the requirement of abundant labeled samples. As it is very difficult and expensive to obtain class labels in the real world, we integrate the proposed WCRN with AL to improve its generalization by using the most informative training samples. Specifically, we first design
a wide contextual residual network for RSI classification. We then integrate it with AL to achieve good machine generalization with
a limited number of training samples. Experimental results on the University of Pavia and Flevoland datasets demonstrate that the proposed WCRN with AL can significantly reduce the need for labeled samples. | [] | [
"Active Learning",
"Classification Of Hyperspectral Images",
"Hyperspectral Image Classification",
"Image Classification",
"Remote Sensing Image Classification"
] | [] | [
"Pavia University"
] | [
"Overall Accuracy",
"Accuracy"
] | Wide Contextual Residual Network with Active Learning for Remote Sensing Image Classification |
Named entity recognition (NER) is one of the best studied tasks in natural language processing. However, most approaches are not capable of handling nested structures which are common in many applications. In this paper we introduce a novel neural network architecture that first merges tokens and/or entities into entities forming nested structures, and then labels each of them independently. Unlike previous work, our merge and label approach predicts real-valued instead of discrete segmentation structures, which allow it to combine word and nested entity embeddings while maintaining differentiability. We evaluate our approach using the ACE 2005 Corpus, where it achieves state-of-the-art F1 of 74.6, further improved with contextual embeddings (BERT) to 82.4, an overall improvement of close to 8 F1 points over previous approaches trained on the same data. Additionally we compare it against BiLSTM-CRFs, the dominant approach for flat NER structures, demonstrating that its ability to predict nested structures does not impact performance in simpler cases. | [] | [
"Entity Embeddings",
"Named Entity Recognition",
"Nested Mention Recognition",
"Nested Named Entity Recognition"
] | [] | [
"ACE 2005"
] | [
"F1"
] | Merge and Label: A novel neural network architecture for nested NER |
We propose a novel method for unsupervised image-to-image translation, which incorporates a new attention module and a new learnable normalization function in an end-to-end manner. The attention module guides our model to focus on more important regions distinguishing between source and target domains based on the attention map obtained by the auxiliary classifier. Unlike previous attention-based methods, which cannot handle the geometric changes between domains, our model can translate both images requiring holistic changes and images requiring large shape changes. Moreover, our new AdaLIN (Adaptive Layer-Instance Normalization) function helps our attention-guided model to flexibly control the amount of change in shape and texture by learned parameters depending on datasets. Experimental results show the superiority of the proposed method compared to the existing state-of-the-art models with a fixed network architecture and hyper-parameters. Our code and datasets are available at https://github.com/taki0112/UGATIT or https://github.com/znxlwm/UGATIT-pytorch. | [] | [
"Fundus to Angiography Generation",
"Image-to-Image Translation",
"Unsupervised Image-To-Image Translation"
] | [] | [
"Fundus Fluorescein Angiogram Photographs & Colour Fundus Images of Diabetic Patients",
"vangogh2photo",
"photo2portrait",
"portrait2photo",
"photo2vangogh",
"selfie-to-anime",
"horse2zebra",
"cat2dog",
"zebra2horse",
"dog2cat",
"anime-to-selfie"
] | [
"Kernel Inception Distance",
"FID"
] | U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation |
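AdaLIN, as described above, mixes instance normalization and layer normalization with a learned per-channel ratio and then applies a scale and shift produced elsewhere in the attention-guided generator. The PyTorch module below is a compact sketch of that idea; the rho initialization and the way gamma/beta are produced in the full model are simplifications.

```python
import torch
import torch.nn as nn

class AdaLIN(nn.Module):
    """Adaptive Layer-Instance Normalization (sketch of the U-GAT-IT idea).

    out = gamma * (rho * IN(x) + (1 - rho) * LN(x)) + beta, where rho is a learned
    per-channel mixing ratio in [0, 1] and gamma/beta are passed in from the network.
    """

    def __init__(self, num_channels, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.rho = nn.Parameter(torch.full((1, num_channels, 1, 1), 0.9))

    def forward(self, x, gamma, beta):
        in_mean = x.mean(dim=(2, 3), keepdim=True)                   # per sample, per channel
        in_var = ((x - in_mean) ** 2).mean(dim=(2, 3), keepdim=True)
        ln_mean = x.mean(dim=(1, 2, 3), keepdim=True)                # per sample, all channels
        ln_var = ((x - ln_mean) ** 2).mean(dim=(1, 2, 3), keepdim=True)

        out_in = (x - in_mean) / torch.sqrt(in_var + self.eps)
        out_ln = (x - ln_mean) / torch.sqrt(ln_var + self.eps)

        rho = self.rho.clamp(0.0, 1.0)
        out = rho * out_in + (1.0 - rho) * out_ln
        return out * gamma.view(x.size(0), -1, 1, 1) + beta.view(x.size(0), -1, 1, 1)

x = torch.randn(2, 64, 32, 32)
gamma, beta = torch.ones(2, 64), torch.zeros(2, 64)
print(AdaLIN(64)(x, gamma, beta).shape)      # torch.Size([2, 64, 32, 32])
```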
Generative adversarial networks conditioned on textual image descriptions are capable of generating realistic-looking images. However, current methods still struggle to generate images based on complex image captions from a heterogeneous domain. Furthermore, quantitatively evaluating these text-to-image models is challenging, as most evaluation metrics only judge image quality but not the conformity between the image and its caption. To address these challenges we introduce a new model that explicitly models individual objects within an image and a new evaluation metric called Semantic Object Accuracy (SOA) that specifically evaluates images given an image caption. The SOA uses a pre-trained object detector to evaluate if a generated image contains objects that are mentioned in the image caption, e.g. whether an image generated from "a car driving down the street" contains a car. We perform a user study comparing several text-to-image models and show that our SOA metric ranks the models the same way as humans, whereas other metrics such as the Inception Score do not. Our evaluation also shows that models which explicitly model objects outperform models which only model global image characteristics. | [] | [
"Image Captioning",
"Image Generation",
"Text-to-Image Generation"
] | [] | [
"COCO"
] | [
"Inception score",
"SOA-C",
"FID"
] | Semantic Object Accuracy for Generative Text-to-Image Synthesis |
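SOA, as defined above, checks whether an object named in the caption is actually found by a pre-trained detector in the generated image. The sketch below shows the accounting step in plain Python; `run_detector` is a hypothetical callable (e.g. a YOLO or Faster R-CNN wrapper) returning detected class labels, the caption-to-object mapping is assumed to be given, and the class-averaged aggregation is only one of the variants the paper reports.

```python
def semantic_object_accuracy(samples, run_detector):
    """Compute a class-averaged SOA-style score (a sketch).

    samples: iterable of (generated_image, expected_labels) pairs, where
             expected_labels are the object classes mentioned in the caption.
    run_detector: hypothetical callable mapping an image to detected class labels.
    """
    per_class_hits, per_class_total = {}, {}
    for image, expected_labels in samples:
        detected = set(run_detector(image))
        for label in expected_labels:
            per_class_total[label] = per_class_total.get(label, 0) + 1
            per_class_hits[label] = per_class_hits.get(label, 0) + (label in detected)
    # Average recall over classes so frequent classes do not dominate.
    return sum(per_class_hits[c] / per_class_total[c] for c in per_class_total) / len(per_class_total)

# Toy run with a fake detector that always "finds" a car and a person.
fake_detector = lambda img: {"car", "person"}
samples = [("img0", ["car"]), ("img1", ["dog"]), ("img2", ["person", "car"])]
print(semantic_object_accuracy(samples, fake_detector))   # 0.666... in this toy case
```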
Distributional Reinforcement Learning (RL) differs from traditional RL in that, rather than the expectation of total returns, it estimates distributions and has achieved state-of-the-art performance on Atari Games. The key challenge in practical distributional RL algorithms lies in how to parameterize estimated distributions so as to better approximate the true continuous distribution. Existing distributional RL algorithms parameterize either the probability side or the return value side of the distribution function, leaving the other side uniformly fixed as in C51, QR-DQN or randomly sampled as in IQN. In this paper, we propose a fully parameterized quantile function that parameterizes both the quantile fraction axis (i.e., the x-axis) and the value axis (i.e., the y-axis) for distributional RL. Our algorithm contains a fraction proposal network that generates a discrete set of quantile fractions and a quantile value network that gives corresponding quantile values. The two networks are jointly trained to find the best approximation of the true distribution. Experiments on 55 Atari Games show that our algorithm significantly outperforms existing distributional RL algorithms and creates a new record for the Atari Learning Environment for non-distributed agents. | [] | [
"Atari Games",
"Distributional Reinforcement Learning"
] | [] | [
"Atari 2600 Amidar",
"Atari 2600 River Raid",
"Atari 2600 Alien",
"Atari 2600 Space Invaders",
"Atari 2600 Phoenix",
"Atari 2600 Gravitar",
"Atari 2600 Ice Hockey",
"Atari 2600 Bowling",
"Atari 2600 Berzerk",
"Atari 2600 Asterix",
"Atari 2600 Breakout",
"Atari 2600 Crazy Climber",
"Atari 2600 James Bond",
"Atari 2600 Robotank",
"Atari 2600 Asteroids",
"Atari 2600 Fishing Derby",
"Atari 2600 Ms. Pacman",
"Atari 2600 Frostbite",
"Atari 2600 Star Gunner",
"Atari 2600 Battle Zone",
"Atari 2600 Chopper Command",
"Atari 2600 Kung-Fu Master",
"Atari 2600 HERO",
"Atari 2600 Wizard of Wor",
"Atari 2600 Skiing"
] | [
"Score"
] | Fully Parameterized Quantile Function for Distributional Reinforcement Learning |
Face detection and alignment in unconstrained environments are often deployed on edge devices, which have limited memory storage and low computing power. This paper proposes a one-stage method named CenterFace to simultaneously predict facial box and landmark locations with real-time speed and high accuracy. The proposed method also belongs to the anchor-free category. This is achieved by: (a) learning the possibility that a face exists via the semantic maps, (b) learning the bounding box, offsets and five landmarks for each position that potentially contains a face. Specifically, the method can run in real time on a single CPU core and at 200 FPS using an NVIDIA 2080TI for VGA-resolution images, and can simultaneously achieve superior accuracy (WIDER FACE Val/Test-Easy: 0.935/0.932, Medium: 0.924/0.921, Hard: 0.875/0.873 and FDDB discontinuous: 0.980, continuous: 0.732). A demo of CenterFace is available at https://github.com/Star-Clouds/CenterFace. | [] | [
"Face Detection"
] | [] | [
"WIDER Face (Hard)",
"WIDER Face (Medium)",
"WIDER Face (Easy)"
] | [
"AP"
] | CenterFace: Joint Face Detection and Alignment Using Face as Point |
The existing action tubelet detectors often depend on heuristic anchor design and placement, which might be computationally expensive and sub-optimal for precise localization. In this paper, we present a conceptually simple, computationally efficient, and more precise action tubelet detection framework, termed as MovingCenter Detector (MOC-detector), by treating an action instance as a trajectory of moving points. Based on the insight that movement information could simplify and assist action tubelet detection, our MOC-detector is composed of three crucial head branches: (1) Center Branch for instance center detection and action recognition, (2) Movement Branch for movement estimation at adjacent frames to form trajectories of moving points, (3) Box Branch for spatial extent detection by directly regressing bounding box size at each estimated center. These three branches work together to generate the tubelet detection results, which could be further linked to yield video-level tubes with a matching strategy. Our MOC-detector outperforms the existing state-of-the-art methods for both metrics of frame-mAP and video-mAP on the JHMDB and UCF101-24 datasets. The performance gap is more evident for higher video IoU, demonstrating that our MOC-detector is particularly effective for more precise action detection. We provide the code at https://github.com/MCG-NJU/MOC-Detector. | [] | [
"Action Detection",
"Action Recognition"
] | [] | [
"UCF101-24"
] | [
"Video-mAP 0.5",
"Video-mAP 0.75",
"mAP",
"Video-mAP 0.2"
] | Actions as Moving Points |
Learning diverse features is key to the success of person re-identification. Various part-based methods have been extensively proposed for learning local representations, which, however, are still inferior to the best-performing methods for person re-identification. This paper proposes to construct a strong lightweight network architecture, termed PLR-OSNet, based on the idea of Part-Level feature Resolution over the Omni-Scale Network (OSNet) for achieving feature diversity. The proposed PLR-OSNet has two branches, one branch for global feature representation and the other branch for local feature representation. The local branch employs a uniform partition strategy for part-level feature resolution but produces only a single identity-prediction loss, which is in sharp contrast to the existing part-based methods. Empirical evidence demonstrates that the proposed PLR-OSNet achieves state-of-the-art performance on popular person Re-ID datasets, including Market1501, DukeMTMC-reID and CUHK03, despite its small model size. | [] | [
"Person Re-Identification"
] | [] | [
"CUHK03 detected",
"DukeMTMC-reID",
"Market-1501",
"CUHK03 labeled"
] | [
"Rank-1",
"MAP"
] | Learning Diverse Features with Part-Level Resolution for Person Re-Identification |
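The local branch described above uniformly partitions the feature map into horizontal parts but, unlike earlier part-based models, feeds the concatenated part features into a single identity classifier. The PyTorch head below is a schematic sketch of that idea on top of an arbitrary backbone feature map; the number of parts, the pooling choice, and the identity count are assumptions.

```python
import torch
import torch.nn as nn

class PartLevelHead(nn.Module):
    """Uniformly partition a feature map into P horizontal stripes, pool each,
    concatenate them, and apply a single identity classifier (a sketch)."""

    def __init__(self, in_channels=512, num_parts=6, num_identities=751):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d((num_parts, 1))        # one vector per stripe
        self.classifier = nn.Linear(in_channels * num_parts, num_identities)

    def forward(self, feat_map):                                # (B, C, H, W)
        parts = self.pool(feat_map)                             # (B, C, P, 1)
        parts = parts.flatten(start_dim=1)                      # concat stripes: (B, C*P)
        return self.classifier(parts)                           # single ID-prediction logits

head = PartLevelHead()
logits = head(torch.randn(8, 512, 24, 8))
print(logits.shape)                                             # torch.Size([8, 751])
```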
Word Sense Disambiguation (WSD) has been a basic and ongoing issue since its introduction to the natural language processing (NLP) community. Its applications lie in many different areas including sentiment analysis, Information Retrieval (IR), machine translation and knowledge graph construction. Solutions to WSD are mostly categorized into supervised and knowledge-based approaches. In this paper, a knowledge-based method is proposed, modeling the problem with the semantic space and semantic path hidden behind a given sentence. The approach relies on the well-known Knowledge Base (KB) named WordNet and models the semantic space and semantic path by Latent Semantic Analysis (LSA) and PageRank respectively. Experiments have proven the method's effectiveness, achieving state-of-the-art performance on several WSD datasets. | [] | [
"graph construction",
"Information Retrieval",
"Machine Translation",
"Sentiment Analysis",
"Word Sense Disambiguation"
] | [] | [
"Knowledge-based:"
] | [
"Senseval 2",
"Senseval 3",
"SemEval 2013",
"All",
"SemEval 2007",
"SemEval 2015"
] | Word Sense Disambiguation: A comprehensive knowledge exploitation framework |
Word Sense Disambiguation is a long-standing task in Natural Language Processing, lying at the core of human language understanding. However, the evaluation of automatic systems has been problematic, mainly due to the lack of a reliable evaluation framework. In this paper we develop a unified evaluation framework and analyze the performance of various Word Sense Disambiguation systems in a fair setup. The results show that supervised systems clearly outperform knowledge-based models. Among the supervised systems, a linear classifier trained on conventional local features still proves to be a hard baseline to beat. Nonetheless, recent approaches exploiting neural networks on unlabeled corpora achieve promising results, surpassing this hard baseline in most test sets. | [] | [
"Word Sense Disambiguation"
] | [] | [
"Knowledge-based:"
] | [
"Senseval 2",
"Senseval 3",
"SemEval 2013",
"All",
"SemEval 2007",
"SemEval 2015"
] | Word Sense Disambiguation: A Unified Evaluation Framework and Empirical Comparison |
This paper exploits the intrinsic features of urban-scene images and proposes a general add-on module, called height-driven attention networks (HANet), for improving semantic segmentation for urban-scene images. It emphasizes informative features or classes selectively according to the vertical position of a pixel. The pixel-wise class distributions are significantly different from each other among horizontally segmented sections in the urban-scene images. Likewise, urban-scene images have their own distinct characteristics, but most semantic segmentation networks do not reflect such unique attributes in the architecture. The proposed network architecture incorporates the capability exploiting the attributes to handle the urban scene dataset effectively. We validate the consistent performance (mIoU) increase of various semantic segmentation models on two datasets when HANet is adopted. This extensive quantitative analysis demonstrates that adding our module to existing models is easy and cost-effective. Our method achieves a new state-of-the-art performance on the Cityscapes benchmark with a large margin among ResNet-101 based segmentation models. Also, we show that the proposed model is coherent with the facts observed in the urban scene by visualizing and interpreting the attention map. Our code and trained models are publicly available at https://github.com/shachoi/HANet | [] | [
"Scene Segmentation",
"Semantic Segmentation"
] | [] | [
"Cityscapes test"
] | [
"Mean IoU (class)"
] | Cars Can't Fly up in the Sky: Improving Urban-Scene Segmentation via Height-driven Attention Networks |
We aim to provide a computationally cheap yet effective approach for fine-grained image classification (FGIC) in this letter. Unlike previous methods that rely on complex part localization modules, our approach learns fine-grained features by enhancing the semantics of sub-features of a global feature. Specifically, we first achieve the sub-feature semantic by arranging feature channels of a CNN into different groups through channel permutation. Meanwhile, to enhance the discriminability of sub-features, the groups are guided to be activated on object parts with strong discriminability by a weighted combination regularization. Our approach is parameter parsimonious and can be easily integrated into the backbone model as a plug-and-play module for end-to-end training with only image-level supervision. Experiments verified the effectiveness of our approach and validated its comparable performance to the state-of-the-art methods. Code is available at https://github.com/cswluo/SEF | [] | [
"Fine-Grained Image Classification",
"Image Classification"
] | [] | [
" CUB-200-2011",
"Stanford Dogs",
"Stanford Cars",
"FGVC Aircraft"
] | [
"Accuracy"
] | Learning Semantically Enhanced Feature for Fine-Grained Image Classification |
Multi-view stereo (MVS) is the golden mean between the accuracy of active depth sensing and the practicality of monocular depth estimation. Cost volume based approaches employing 3D convolutional neural networks (CNNs) have considerably improved the accuracy of MVS systems. However, this accuracy comes at a high computational cost which impedes practical adoption. Distinct from cost volume approaches, we propose an efficient depth estimation approach by first (a) detecting and evaluating descriptors for interest points, then (b) learning to match and triangulate a small set of interest points, and finally (c) densifying this sparse set of 3D points using CNNs. An end-to-end network efficiently performs all three steps within a deep learning framework and trained with intermediate 2D image and 3D geometric supervision, along with depth supervision. Crucially, our first step complements pose estimation using interest point detection and descriptor learning. We demonstrate state-of-the-art results on depth estimation with lower compute for different scene lengths. Furthermore, our method generalizes to newer environments and the descriptors output by our network compare favorably to strong baselines. Code is available at https://github.com/magicleap/DELTAS | [] | [
"Depth Estimation",
"Interest Point Detection",
"Monocular Depth Estimation",
"Pose Estimation"
] | [] | [
"ScanNetV2"
] | [
"Average mean absolute error",
"absolute relative error"
] | DELTAS: Depth Estimation by Learning Triangulation And densification of Sparse points |
In this paper, we tackle the new Cross-Domain Few-Shot Learning benchmark proposed by the CVPR 2020 Challenge. To this end, we build upon state-of-the-art methods in domain adaptation and few-shot learning to create a system that can be trained to perform both tasks. Inspired by the need to create models designed to be fine-tuned, we explore the integration of transfer-learning (fine-tuning) with meta-learning algorithms, to train a network that has specific layers that are designed to be adapted at a later fine-tuning stage. To do so, we modify the episodic training process to include a first-order MAML-based meta-learning algorithm, and use a Graph Neural Network model as the subsequent meta-learning module. We find that our proposed method helps to boost accuracy significantly, especially when combined with data augmentation. In our final results, we combine the novel method with the baseline method in a simple ensemble, and achieve an average accuracy of 73.78% on the benchmark. This is a 6.51% improvement over existing benchmarks that were trained solely on miniImagenet. | [] | [
"Cross-Domain Few-Shot",
"cross-domain few-shot learning",
"Data Augmentation",
"Domain Adaptation",
"Few-Shot Learning",
"Meta-Learning",
"Transfer Learning"
] | [] | [
"miniImagenet"
] | [
"Accuracy (%)"
] | Cross-Domain Few-Shot Learning with Meta Fine-Tuning |
In this paper, we demonstrate that by utilizing sparse word representations, it becomes possible to surpass the results of more complex task-specific models on the task of fine-grained all-words word sense disambiguation. Our proposed algorithm relies on an overcomplete set of semantic basis vectors that allows us to obtain sparse contextualized word representations. We introduce such an information theory-inspired synset representation based on the co-occurrence of word senses and non-zero coordinates for word forms which allows us to achieve an aggregated F-score of 78.8 over a combination of five standard word sense disambiguating benchmark datasets. We also demonstrate the general applicability of our proposed framework by evaluating it towards part-of-speech tagging on four different treebanks. Our results indicate a significant improvement over the application of the dense word representations. | [] | [
"Part-Of-Speech Tagging",
"Word Sense Disambiguation"
] | [] | [
"Supervised:"
] | [
"Senseval 2",
"Senseval 3",
"SemEval 2013",
"SemEval 2007",
"SemEval 2015"
] | Sparsity Makes Sense: Word Sense Disambiguation Using Sparse Contextualized Word Representations |
A major obstacle in Word Sense Disambiguation (WSD) is that word senses are not uniformly distributed, causing existing models to generally perform poorly on senses that are either rare or unseen during training. We propose a bi-encoder model that independently embeds (1) the target word with its surrounding context and (2) the dictionary definition, or gloss, of each sense. The encoders are jointly optimized in the same representation space, so that sense disambiguation can be performed by finding the nearest sense embedding for each target word embedding. Our system outperforms previous state-of-the-art models on English all-words WSD; these gains predominantly come from improved performance on rare senses, leading to a 31.1{\%} error reduction on less frequent senses over prior work. This demonstrates that rare senses can be more effectively disambiguated by modeling their definitions. | [] | [
"Word Sense Disambiguation"
] | [] | [
"Supervised:"
] | [
"Senseval 2",
"Senseval 3",
"SemEval 2013",
"SemEval 2007",
"SemEval 2015"
] | Moving Down the Long Tail of Word Sense Disambiguation with Gloss Informed Bi-encoders |
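The core of the bi-encoder above is a nearest-sense lookup in a shared embedding space: the contextual encoding of the target word is scored against the encodings of each candidate sense's gloss. A minimal sketch, assuming the two encoders have already produced the vectors; the function name and toy sense keys are hypothetical.

```python
import torch

def disambiguate(target_vec, gloss_vecs, sense_keys):
    """Pick the sense whose gloss embedding scores highest against the
    target word's contextual embedding (dot product in the shared space)."""
    scores = gloss_vecs @ target_vec                # (num_senses,)
    return sense_keys[int(torch.argmax(scores))], scores

# Toy usage with random vectors standing in for the two encoders' outputs.
d = 8
context_vec = torch.randn(d)                        # encoder 1: target word in context
gloss_matrix = torch.randn(3, d)                    # encoder 2: one row per candidate gloss
keys = ["bank%1:14:00::", "bank%1:17:01::", "bank%2:40:00::"]
best_sense, scores = disambiguate(context_vec, gloss_matrix, keys)
# Training would minimise cross-entropy of `scores` against the gold sense index.
```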
Neural network approaches to Named-Entity Recognition reduce the need for
carefully hand-crafted features. While some features do remain in
state-of-the-art systems, lexical features have been mostly discarded, with the
exception of gazetteers. In this work, we show that this is unfair: lexical
features are actually quite useful. We propose to embed words and entity types
into a low-dimensional vector space we train from annotated data produced by
distant supervision thanks to Wikipedia. From this, we compute - offline - a
feature vector representing each word. When used with a vanilla recurrent
neural network model, this representation yields substantial improvements. We
establish a new state-of-the-art F1 score of 87.95 on ONTONOTES 5.0, while
matching state-of-the-art performance with a F1 score of 91.73 on the
over-studied CONLL-2003 dataset. | [] | [
"Named Entity Recognition"
] | [] | [
"Ontonotes v5 (English)",
"CoNLL 2003 (English)"
] | [
"F1"
] | Robust Lexical Features for Improved Neural Network Named-Entity Recognition |
Recent advances in facial landmark detection achieve success by learning
discriminative features from rich deformation of face shapes and poses. Besides
the variance of faces themselves, the intrinsic variance of image styles, e.g.,
grayscale vs. color images, light vs. dark, intense vs. dull, and so on, has
constantly been overlooked. This issue becomes inevitable as increasing web
images are collected from various sources for training neural networks. In this
work, we propose a style-aggregated approach to deal with the large intrinsic
variance of image styles for facial landmark detection. Our method transforms
original face images to style-aggregated images by a generative adversarial
module. The proposed scheme uses the style-aggregated image to provide face
images that are more robust to environmental changes. The original face images
and their style-aggregated counterparts are then used together to train a
landmark detector, each complementing the other. In this way, for each face, our
method takes two images as input, i.e., one in its original style and the other
in the aggregated style. In experiments, we observe that the large variance of
image styles would degenerate the performance of facial landmark detectors.
Moreover, we show the robustness of our method to the large variance of image
styles by comparing to a variant of our approach, in which the generative
adversarial module is removed, and no style-aggregated images are used. Our
approach is demonstrated to perform well when compared with state-of-the-art
algorithms on benchmark datasets AFLW and 300-W. Code is publicly available on
GitHub: https://github.com/D-X-Y/SAN | [] | [
"Facial Landmark Detection"
] | [] | [
"300W",
"AFLW-Full",
"AFLW-Front"
] | [
"NME",
"Mean NME "
] | Style Aggregated Network for Facial Landmark Detection |
We apply basic statistical reasoning to signal reconstruction by machine
learning -- learning to map corrupted observations to clean signals -- with a
simple and powerful conclusion: it is possible to learn to restore images by
only looking at corrupted examples, at performance equal to and sometimes
exceeding that of training with clean data, without explicit image priors or likelihood models
of the corruption. In practice, we show that a single model learns photographic
noise removal, denoising synthetic Monte Carlo images, and reconstruction of
undersampled MRI scans -- all corrupted by different processes -- based on
noisy data only. | [] | [
"Denoising",
"Image Restoration",
"Salt-And-Pepper Noise Removal"
] | [] | [
"BSD300 Noise Level 50%",
"Kodak24 Noise Level 30%",
"BSD300 Noise Level 70%",
"Kodak24 Noise Level 70%",
"BSD300 Noise Level 30%",
"Kodak24 Noise Level 50%"
] | [
"PSNR"
] | Noise2Noise: Learning Image Restoration without Clean Data |
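The key idea above is that the regression target never needs to be clean: input and target can be two independently corrupted observations of the same signal. Below is a minimal PyTorch training-step sketch in which synthetic Gaussian noise stands in for the corruption process (in a real setting one would instead use two noisy captures of the same scene); `noise2noise_step` and its arguments are illustrative.

```python
import torch
import torch.nn as nn

def noise2noise_step(denoiser, optimizer, images, sigma=0.1):
    """One training step where both the input and the regression target are
    independently corrupted copies of the same image; the network never sees
    a clean target."""
    noisy_input  = images + sigma * torch.randn_like(images)
    noisy_target = images + sigma * torch.randn_like(images)   # independent draw
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(denoiser(noisy_input), noisy_target)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with a single convolution standing in for the denoising network.
denoiser = nn.Conv2d(3, 3, kernel_size=3, padding=1)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
loss = noise2noise_step(denoiser, opt, torch.rand(4, 3, 32, 32))
```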
Dense depth cues are important and have wide applications in various computer vision tasks. In autonomous driving, LIDAR sensors are adopted to acquire depth measurements around the vehicle to perceive the surrounding environments. However, depth maps obtained by LIDAR are generally sparse because of hardware limitations. The task of depth completion attracts increasing attention, which aims at generating a dense depth map from an input sparse depth map. To effectively utilize multi-scale features, we propose three novel sparsity-invariant operations, based on which we also propose a sparsity-invariant multi-scale encoder-decoder network (HMS-Net) for handling sparse inputs and sparse feature maps. Additional RGB features could be incorporated to further improve the depth completion performance. Our extensive experiments and component analysis on two public benchmarks, the KITTI depth completion benchmark and the NYU-depth-v2 dataset, demonstrate the effectiveness of the proposed approach. As of Aug. 12th, 2018, on the KITTI depth completion leaderboard, our proposed model without RGB guidance ranks first among all peer-reviewed methods without using RGB information, and our model with RGB guidance ranks second among all RGB-guided methods. | [] | [
"Autonomous Driving",
"Depth Completion"
] | [] | [
"KITTI Depth Completion"
] | [
"MAE",
"RMSE"
] | HMS-Net: Hierarchical Multi-scale Sparsity-invariant Network for Sparse Depth Completion |
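For intuition on the "sparsity-invariant operations" mentioned above, here is a sketch of the basic sparsity-invariant convolution that such operations build on: features are masked before convolution, renormalised by the local count of valid pixels, and the validity mask is propagated. This is an illustrative building block, not the paper's three proposed multi-scale operations; the class and parameter names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseConv(nn.Module):
    """Basic sparsity-invariant convolution (sketch): convolve only where the
    validity mask is 1 and renormalise by the number of valid pixels under
    the kernel."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False)
        self.bias = nn.Parameter(torch.zeros(out_ch))
        self.register_buffer("ones", torch.ones(1, 1, k, k))

    def forward(self, x, mask):
        # x: (N, C, H, W) sparse features; mask: (N, 1, H, W) with 1 = valid.
        feat = self.conv(x * mask)
        norm = F.conv2d(mask, self.ones, padding=self.conv.padding[0])
        feat = feat / norm.clamp(min=1e-5) + self.bias.view(1, -1, 1, 1)
        new_mask = (norm > 0).float()   # valid if any input pixel under the kernel was valid
        return feat, new_mask

depth = torch.randn(1, 1, 32, 32)
mask = (torch.rand(1, 1, 32, 32) > 0.95).float()   # roughly 5% valid LIDAR points
out, out_mask = SparseConv(1, 16)(depth, mask)
```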
Novice programmers often struggle with the formal syntax of programming
languages. To assist them, we design a novel programming language correction
framework amenable to reinforcement learning. The framework allows an agent to
mimic human actions for text navigation and editing. We demonstrate that the
agent can be trained through self-exploration directly from the raw input, that
is, program text itself, without any knowledge of the formal syntax of the
programming language. We leverage expert demonstrations for one tenth of the
training data to accelerate training. The proposed technique is evaluated on
6975 erroneous C programs with typographic errors, written by students during
an introductory programming course. Our technique fixes 14% more programs and
29% more compiler error messages relative to those fixed by a state-of-the-art
tool, DeepFix, which uses a fully supervised neural machine translation
approach. | [] | [
"Machine Translation",
"Program Repair"
] | [] | [
"DeepFix"
] | [
"Average Success Rate"
] | Deep Reinforcement Learning for Programming Language Correction |
Semi-supervised learning methods based on generative adversarial networks
(GANs) obtained strong empirical results, but it is not clear 1) how the
discriminator benefits from joint training with a generator, and 2) why good
semi-supervised classification performance and a good generator cannot be
obtained at the same time. Theoretically, we show that given the discriminator
objective, good semi-supervised learning indeed requires a bad generator, and
propose the definition of a preferred generator. Empirically, we derive a novel
formulation based on our analysis that substantially improves over feature
matching GANs, obtaining state-of-the-art results on multiple benchmark
datasets. | [] | [
"Semi-Supervised Image Classification"
] | [] | [
"CIFAR-10, 4000 Labels"
] | [
"Accuracy"
] | Good Semi-supervised Learning that Requires a Bad GAN |
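The setting analysed above is the (K+1)-class semi-supervised GAN discriminator, where generated samples form an extra "fake" class whose logit is fixed to zero. Below is a sketch of that standard discriminator loss, i.e. the feature-matching-GAN baseline the paper builds on, not the paper's proposed generator objective; the function name and the 0.5 weighting are illustrative.

```python
import torch
import torch.nn.functional as F

def ssl_gan_d_loss(logits_lab, y_lab, logits_unl, logits_fake):
    """Discriminator loss of a (K+1)-class semi-supervised GAN (sketch).
    The discriminator outputs K class logits; the implicit 'fake' logit is
    fixed at 0, so p(real | x) = Z(x) / (Z(x) + 1) with Z(x) = sum_k exp(l_k)."""
    # Supervised cross-entropy on the small labelled set.
    loss_lab = F.cross_entropy(logits_lab, y_lab)

    # Unlabelled real data: maximise log p(real | x) = lse(l) - softplus(lse(l)).
    lse_unl = torch.logsumexp(logits_unl, dim=1)
    loss_unl = -(lse_unl - F.softplus(lse_unl)).mean()

    # Generated data: maximise log p(fake | G(z)) = -softplus(lse(l)).
    loss_fake = F.softplus(torch.logsumexp(logits_fake, dim=1)).mean()
    return loss_lab + 0.5 * (loss_unl + loss_fake)

loss = ssl_gan_d_loss(torch.randn(8, 10), torch.randint(0, 10, (8,)),
                      torch.randn(32, 10), torch.randn(32, 10))
```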
In this work, we introduce the challenging problem of joint multi-person pose
estimation and tracking of an unknown number of persons in unconstrained
videos. Existing methods for multi-person pose estimation in images cannot be
applied directly to this problem, since it also requires to solve the problem
of person association over time in addition to the pose estimation for each
person. We therefore propose a novel method that jointly models multi-person
pose estimation and tracking in a single formulation. To this end, we represent
body joint detections in a video by a spatio-temporal graph and solve an
integer linear program to partition the graph into sub-graphs that correspond
to plausible body pose trajectories for each person. The proposed approach
implicitly handles occlusion and truncation of persons. Since the problem has
not been addressed quantitatively in the literature, we introduce a challenging
"Multi-Person PoseTrack" dataset, and also propose a completely unconstrained
evaluation protocol that does not make any assumptions about the scale, size,
location or the number of persons. Finally, we evaluate the proposed approach
and several baseline methods on our new dataset. | [] | [
"Multi-Person Pose Estimation",
"Multi-Person Pose Estimation and Tracking",
"Pose Estimation",
"Pose Tracking"
] | [] | [
"Multi-Person PoseTrack"
] | [
"MOTA",
"Mean mAP",
"MOTP"
] | PoseTrack: Joint Multi-Person Pose Estimation and Tracking |
We present an approach to training neural networks to generate sequences
using actor-critic methods from reinforcement learning (RL). Current
log-likelihood training methods are limited by the discrepancy between their
training and testing modes, as models must generate tokens conditioned on their
previous guesses rather than the ground-truth tokens. We address this problem
by introducing a \textit{critic} network that is trained to predict the value
of an output token, given the policy of an \textit{actor} network. This results
in a training procedure that is much closer to the test phase, and allows us to
directly optimize for a task-specific score such as BLEU. Crucially, since we
leverage these techniques in the supervised learning setting rather than the
traditional RL setting, we condition the critic network on the ground-truth
output. We show that our method leads to improved performance on both a
synthetic task, and for German-English machine translation. Our analysis paves
the way for such methods to be applied in natural language generation tasks,
such as machine translation, caption generation, and dialogue modelling. | [] | [
"Machine Translation",
"Spelling Correction",
"Text Generation"
] | [] | [
"IWSLT2015 German-English",
"IWSLT2014 German-English",
"IWSLT2015 English-German"
] | [
"BLEU score"
] | An Actor-Critic Algorithm for Sequence Prediction |
In this paper, we study the problem of question answering when reasoning over
multiple facts is required. We propose Query-Reduction Network (QRN), a variant
of Recurrent Neural Network (RNN) that effectively handles both short-term
(local) and long-term (global) sequential dependencies to reason over multiple
facts. QRN considers the context sentences as a sequence of state-changing
triggers, and reduces the original query to a more informed query as it
observes each trigger (context sentence) through time. Our experiments show
that QRN produces the state-of-the-art results in bAbI QA and dialog tasks, and
in a real goal-oriented dialog dataset. In addition, QRN formulation allows
parallelization on RNN's time axis, saving an order of magnitude in time
complexity for training and inference. | [] | [
"Goal-Oriented Dialog",
"Question Answering"
] | [] | [
"bAbi"
] | [
"Accuracy (trained on 1k)",
"Mean Error Rate",
"Accuracy (trained on 10k)"
] | Query-Reduction Networks for Question Answering |
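A rough sketch of the query-reduction idea described above: each context sentence acts as a trigger that gates an update of the current query vector. This is a simplified, hypothetical cell (the gate and candidate parameterisations are illustrative, and the bidirectional layers and parallel time-axis computation of the actual model are omitted), not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class QueryReductionCell(nn.Module):
    """Gated query-reduction recurrence (sketch): each context sentence either
    keeps the current query or replaces it with a more informed candidate."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, 1)       # how strongly this sentence triggers an update
        self.reduce = nn.Linear(2 * dim, dim)   # candidate reduced query

    def forward(self, query, sentences):
        # query: (dim,); sentences: (T, dim) context sentence encodings in order
        h = query
        for x in sentences:
            xu = torch.cat([x, h])
            z = torch.sigmoid(self.gate(xu))    # update gate
            cand = torch.tanh(self.reduce(xu))  # candidate reduced query
            h = z * cand + (1 - z) * h          # keep or update the query
        return h                                # final reduced query, fed to an answer module

reduced = QueryReductionCell(32)(torch.randn(32), torch.randn(4, 32))
```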
A long-standing challenge in coreference resolution has been the
incorporation of entity-level information - features defined over clusters of
mentions instead of mention pairs. We present a neural network based
coreference system that produces high-dimensional vector representations for
pairs of coreference clusters. Using these representations, our system learns
when combining clusters is desirable. We train the system with a
learning-to-search algorithm that teaches it which local decisions (cluster
merges) will lead to a high-scoring final coreference partition. The system
substantially outperforms the current state-of-the-art on the English and
Chinese portions of the CoNLL 2012 Shared Task dataset despite using few
hand-engineered features. | [] | [
"Coreference Resolution"
] | [] | [
"OntoNotes"
] | [
"F1"
] | Improving Coreference Resolution by Learning Entity-Level Distributed Representations |
Unsupervised methods for learning distributed representations of words are
ubiquitous in today's NLP research, but far less is known about the best ways
to learn distributed phrase or sentence representations from unlabelled data.
This paper is a systematic comparison of models that learn such
representations. We find that the optimal approach depends critically on the
intended application. Deeper, more complex models are preferable for
representations to be used in supervised systems, but shallow log-linear models
work best for building representation spaces that can be decoded with simple
spatial distance metrics. We also propose two new unsupervised
representation-learning objectives designed to optimise the trade-off between
training time, domain portability and performance. | [] | [
"Representation Learning",
"Unsupervised Representation Learning"
] | [] | [
"SUBJ"
] | [
"Accuracy"
] | Learning Distributed Representations of Sentences from Unlabelled Data |
Semantic matching is of central importance to many natural language tasks
\cite{bordes2014semantic,RetrievalQA}. A successful matching algorithm needs to
adequately model the internal structures of language objects and the
interaction between them. As a step toward this goal, we propose convolutional
neural network models for matching two sentences, by adapting the convolutional
strategy in vision and speech. The proposed models not only nicely represent
the hierarchical structures of sentences with their layer-by-layer composition
and pooling, but also capture the rich matching patterns at different levels.
Our models are rather generic, requiring no prior knowledge on language, and
can hence be applied to matching tasks of different nature and in different
languages. The empirical study on a variety of matching tasks demonstrates the
efficacy of the proposed model and its superiority to competitor models. | [] | [
"Question Answering"
] | [] | [
"SemEvalCQA"
] | [
"P@1",
"MAP"
] | Convolutional Neural Network Architectures for Matching Natural Language Sentences |
We address the problem of generating images across two drastically different views, namely ground (street) and aerial (overhead) views. Image synthesis by itself is a very challenging computer vision task and is even more so when generation is conditioned on an image in another view. Due to the difference in viewpoints, there is a small overlapping field of view and little common content between these two views. Here, we try to preserve the pixel information between the views so that the generated image is a realistic representation of the cross-view input image. For this, we propose to use homography as a guide to map the images between the views based on the common field of view to preserve the details in the input image. We then use generative adversarial networks to inpaint the missing regions in the transformed image and add realism to it. Our exhaustive evaluation and model comparison demonstrate that utilizing geometry constraints adds fine details to the generated images and can be a better approach for cross-view image synthesis than purely pixel-based synthesis methods. | [] | [
"Cross-View Image-to-Image Translation",
"Image Generation"
] | [] | [
"Dayton (256×256) - ground-to-aerial"
] | [
"SSIM"
] | Cross-view image synthesis using geometry-guided conditional GANs |
Understanding search queries is a hard problem as it involves dealing with
"word salad" text ubiquitously issued by users. However, if a query resembles a
well-formed question, a natural language processing pipeline is able to perform
more accurate interpretation, thus reducing downstream compounding errors.
Hence, identifying whether or not a query is well formed can enhance query
understanding. Here, we introduce a new task of identifying a well-formed
natural language question. We construct and release a dataset of 25,100
publicly available questions classified into well-formed and non-wellformed
categories and report an accuracy of 70.7% on the test set. We also show that
our classifier can be used to improve the performance of neural
sequence-to-sequence models for generating questions for reading comprehension. | [] | [
"Query Wellformedness"
] | [] | [
"Query Wellformedness"
] | [
"Accuracy"
] | Identifying Well-formed Natural Language Questions |
In recent years, graph neural networks (GNNs) have emerged as a powerful neural architecture to learn vector representations of nodes and graphs in a supervised, end-to-end fashion. Up to now, GNNs have only been evaluated empirically---showing promising results. The following work investigates GNNs from a theoretical point of view and relates them to the $1$-dimensional Weisfeiler-Leman graph isomorphism heuristic ($1$-WL). We show that GNNs have the same expressiveness as the $1$-WL in terms of distinguishing non-isomorphic (sub-)graphs. Hence, both algorithms also have the same shortcomings. Based on this, we propose a generalization of GNNs, so-called $k$-dimensional GNNs ($k$-GNNs), which can take higher-order graph structures at multiple scales into account. These higher-order structures play an essential role in the characterization of social networks and molecule graphs. Our experimental evaluation confirms our theoretical findings as well as confirms that higher-order information is useful in the task of graph classification and regression. | [] | [
"Graph Classification",
"Regression"
] | [] | [
"IMDb-B",
"PROTEINS",
"NCI1",
"MUTAG",
"IMDb-M"
] | [
"Accuracy"
] | Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks |
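The 1-dimensional Weisfeiler-Leman heuristic that the abstract relates GNNs to is simple to state: repeatedly recolour every node from its own colour and the multiset of its neighbours' colours. A small self-contained sketch follows, including a classic pair of non-isomorphic graphs that 1-WL (and hence standard message-passing GNNs) cannot tell apart; the function name is illustrative.

```python
def wl_colors(adjacency, num_iters=3):
    """1-dimensional Weisfeiler-Leman colour refinement. `adjacency` maps each
    node to an iterable of its neighbours; returns the final colour per node."""
    colors = {v: 0 for v in adjacency}          # uniform initial colouring
    for _ in range(num_iters):
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adjacency[v])))
            for v in adjacency
        }
        # Relabel: identical (own colour, neighbour-colour multiset) -> same new colour.
        palette, new_colors = {}, {}
        for v, sig in signatures.items():
            if sig not in palette:
                palette[sig] = len(palette)
            new_colors[v] = palette[sig]
        colors = new_colors
    return colors

# A 6-cycle and two disjoint triangles are non-isomorphic, yet every node gets
# the same colour in both graphs, so their 1-WL colour histograms coincide.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(sorted(wl_colors(cycle6).values()) == sorted(wl_colors(two_triangles).values()))  # True
```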
Inspired by the increasing desire to efficiently tune machine learning hyper-parameters, in this work we rigorously analyse conventional and non-conventional assumptions inherent to Bayesian optimisation. Across an extensive set of experiments we conclude that: 1) the majority of hyper-parameter tuning tasks exhibit heteroscedasticity and non-stationarity, 2) multi-objective acquisition ensembles with Pareto-front solutions significantly improve queried configurations, and 3) robust acquisition maximisation affords empirical advantages relative to its non-robust counterparts. We hope these findings may serve as guiding principles, both for practitioners and for further research in the field. | [] | [
"Bayesian Optimisation",
"Hyperparameter Optimization"
] | [] | [
"Bayesmark"
] | [
"Mean"
] | An Empirical Study of Assumptions in Bayesian Optimisation |
Many face recognition systems boost performance using deep learning
models, but only a few studies investigate mechanisms for dealing with
online registration. Although we can obtain discriminative facial features
through state-of-the-art deep model training, how to decide the best
threshold for practical use remains a challenge. We develop an adaptive
threshold mechanism to improve recognition accuracy. We also
design a face recognition system along with the registering procedure to handle
online registration. Furthermore, we introduce a new evaluation protocol to
better evaluate the performance of an algorithm for real-world scenarios. Under
our proposed protocol, our method can achieve a 22\% accuracy improvement on
the LFW dataset. | [] | [
"Face Recognition"
] | [] | [
"LFW (Online Open Set)",
"Adience (Online Open Set)",
"Color FERET (Online Open Set)"
] | [
"Average Accuracy (10 times)"
] | Data-specific Adaptive Threshold for Face Recognition and Authentication |
The explosive growth in video streaming gives rise to challenges on performing video understanding at high accuracy and low computation cost. Conventional 2D CNNs are computationally cheap but cannot capture temporal relationships; 3D CNN based methods can achieve good performance but are computationally intensive, making them expensive to deploy. In this paper, we propose a generic and effective Temporal Shift Module (TSM) that enjoys both high efficiency and high performance. Specifically, it can achieve the performance of 3D CNN but maintain 2D CNN's complexity. TSM shifts part of the channels along the temporal dimension, thus facilitating information exchange among neighboring frames. It can be inserted into 2D CNNs to achieve temporal modeling at zero computation and zero parameters. We also extend TSM to the online setting, which enables real-time low-latency online video recognition and video object detection. TSM is accurate and efficient: it ranked first on the Something-Something leaderboard upon publication; on Jetson Nano and Galaxy Note8, it achieves a low latency of 13ms and 35ms for online video recognition. The code is available at: https://github.com/mit-han-lab/temporal-shift-module. | [] | [
"Action Classification",
"Action Recognition",
"Object Detection",
"Video Object Detection",
"Video Recognition",
"Video Understanding"
] | [] | [
"Kinetics-400",
"ImageNet VID",
"Something-Something V2",
"Something-Something V1"
] | [
"Top 1 Accuracy",
"Top-5 Accuracy",
"Top-1 Accuracy",
"MAP",
"Top 5 Accuracy",
"Vid acc@1"
] | TSM: Temporal Shift Module for Efficient Video Understanding |
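The temporal shift described above moves a fraction of the feature channels one step forward or backward in time before each 2D convolution, at zero parameter cost. A minimal PyTorch sketch of the offline, bi-directional shift follows; the `fold_div=8` fraction and zero-padding at clip boundaries follow the commonly described setup, and the exact placement inside residual blocks is not shown.

```python
import torch

def temporal_shift(x, n_segment, fold_div=8):
    """Shift a fraction of channels along the temporal dimension (sketch).
    x has shape (N*T, C, H, W) with T = n_segment frames per clip."""
    nt, c, h, w = x.shape
    n = nt // n_segment
    x = x.view(n, n_segment, c, h, w)
    fold = c // fold_div

    out = torch.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]                    # shift this chunk backward in time
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]    # shift this chunk forward in time
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]               # remaining channels stay in place
    return out.view(nt, c, h, w)

# The shift itself adds no parameters and essentially no FLOPs; in practice it is
# inserted before the 2D convolutions of a standard ResNet block.
frames = torch.randn(2 * 8, 64, 14, 14)    # 2 clips of 8 frames each
shifted = temporal_shift(frames, n_segment=8)
```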
Recognising dialogue acts (DA) is important for many natural language processing tasks such as dialogue generation and intention recognition. In this paper, we propose a dual-attention hierarchical recurrent neural network for DA classification. Our model is partially inspired by the observation that conversational utterances are normally associated with both a DA and a topic, where the former captures the social act and the latter describes the subject matter. However, such a dependency between DAs and topics has not been utilised by most existing systems for DA classification. With a novel dual task-specific attention mechanism, our model is able, for utterances, to capture information about both DAs and topics, as well as information about the interactions between them. Experimental results show that by modelling topic as an auxiliary task, our model can significantly improve DA classification, yielding better or comparable performance to the state-of-the-art method on three public datasets. | [] | [
"Dialogue Act Classification",
"Dialogue Generation",
"Intent Detection"
] | [] | [
"Switchboard corpus",
"ICSI Meeting Recorder Dialog Act (MRDA) corpus"
] | [
"Accuracy"
] | A Dual-Attention Hierarchical Recurrent Neural Network for Dialogue Act Classification |
Dialogue Act Recognition (DAR) is a challenging problem in dialogue
interpretation, which aims to attach semantic labels to utterances and
characterize the speaker's intention. Currently, many existing approaches
formulate the DAR problem ranging from multi-classification to structured
prediction, which suffer from handcrafted feature extensions and attentive
contextual structural dependencies. In this paper, we consider the problem of
DAR from the viewpoint of extending richer Conditional Random Field (CRF)
structural dependencies without abandoning end-to-end training. We incorporate
hierarchical semantic inference with memory mechanism on the utterance
modeling. We then extend structured attention network to the linear-chain
conditional random field layer which takes into account both contextual
utterances and corresponding dialogue acts. The extensive experiments on two
major benchmark datasets Switchboard Dialogue Act (SWDA) and Meeting Recorder
Dialogue Act (MRDA) datasets show that our method achieves better performance
than other state-of-the-art solutions to the problem. Remarkably, our method
comes within a 2% gap of the human annotator's performance on SWDA. | [] | [
"Dialogue Act Classification",
"Dialogue Interpretation",
"Structured Prediction"
] | [] | [
"Switchboard corpus",
"ICSI Meeting Recorder Dialog Act (MRDA) corpus"
] | [
"Accuracy"
] | Dialogue Act Recognition via CRF-Attentive Structured Network |
Weakly supervised object detection (WSOD) using only image-level annotations has attracted growing attention over the past few years. Whereas such a task is typically addressed with a domain-specific solution focused on natural images, we show that a simple multiple instance approach applied on pre-trained deep features yields excellent performance on non-photographic datasets, possibly including new classes. The approach does not include any fine-tuning or cross-domain learning and is therefore efficient and possibly applicable to arbitrary datasets and classes. We investigate several flavors of the proposed approach, some including multi-layer perceptrons and polyhedral classifiers. Despite its simplicity, our method shows competitive results on a range of publicly available datasets, including paintings (People-Art, IconArt), watercolors, cliparts and comics, and allows unseen visual categories to be learned quickly. | [] | [
"Multiple Instance Learning",
"Object Detection",
"Weakly Supervised Object Detection"
] | [] | [
"PeopleArt",
"Watercolor2k",
"Clipart1k",
"CASPAPaintings",
"IconArt",
"Comic2k"
] | [
"Mean mAP",
"MAP"
] | Multiple instance learning on deep features for weakly supervised object detection with extreme domain shifts |
This paper analyzes the impact of higher-order inference (HOI) on the task of coreference resolution. HOI has been adapted by almost all recent coreference resolution models without taking much investigation on its true effectiveness over representation learning. To make a comprehensive analysis, we implement an end-to-end coreference system as well as four HOI approaches, attended antecedent, entity equalization, span clustering, and cluster merging, where the latter two are our original methods. We find that given a high-performing encoder such as SpanBERT, the impact of HOI is negative to marginal, providing a new perspective of HOI to this task. Our best model using cluster merging shows the Avg-F1 of 80.2 on the CoNLL 2012 shared task dataset in English. | [] | [
"Coreference Resolution",
"Representation Learning"
] | [] | [
"CoNLL 2012"
] | [
"Avg F1"
] | Revealing the Myth of Higher-Order Inference in Coreference Resolution |
Recently, scene text detection has become an active research topic in
computer vision and document analysis, because of its great importance and
significant challenge. However, vast majority of the existing methods detect
text within local regions, typically through extracting character, word or line
level candidates followed by candidate aggregation and false positive
elimination, which potentially exclude the effect of wide-scope and long-range
contextual cues in the scene. To take full advantage of the rich information
available in the whole natural image, we propose to localize text in a holistic
manner, by casting scene text detection as a semantic segmentation problem. The
proposed algorithm directly runs on full images and produces global, pixel-wise
prediction maps, in which detections are subsequently formed. To better make
use of the properties of text, three types of information regarding text
region, individual characters and their relationship are estimated, with a
single Fully Convolutional Network (FCN) model. With such predictions of text
properties, the proposed algorithm can simultaneously handle horizontal,
multi-oriented and curved text in real-world natural images. The experiments on
standard benchmarks, including ICDAR 2013, ICDAR 2015 and MSRA-TD500,
demonstrate that the proposed algorithm substantially outperforms previous
state-of-the-art approaches. Moreover, we report the first baseline result on
the recently-released, large-scale dataset COCO-Text. | [] | [
"Scene Text",
"Scene Text Detection",
"Semantic Segmentation"
] | [] | [
"COCO-Text"
] | [
"F-Measure",
"Recall",
"Precision"
] | Scene Text Detection via Holistic, Multi-Channel Prediction |
Pretext tasks and contrastive learning have been successful in self-supervised learning for video retrieval and recognition. In this study, we analyze their optimization targets and utilize the hyper-sphere feature space to explore the connections between them, indicating the compatibility and consistency of these two different learning methods. Based on the analysis, we propose a self-supervised training method, referred to as Pretext-Contrastive Learning (PCL), to learn video representations. Extensive experiments based on different combinations of pretext task baselines and contrastive losses confirm the strong agreement with their self-supervised learning targets, demonstrating the effectiveness and the generality of PCL. The combination of pretext tasks and contrastive losses showed significant improvements in both video retrieval and recognition over the corresponding baselines, and we can also outperform current state-of-the-art methods in the same manner. Further, our PCL is flexible and can be applied to almost all existing pretext task methods. | [] | [
"Self-Supervised Action Recognition",
"Self-Supervised Learning",
"Self-supervised Video Retrieval",
"Video Retrieval"
] | [] | [
"UCF101",
"HMDB51"
] | [
"3-fold Accuracy",
"Pre-Training Dataset",
"Top-1 Accuracy"
] | Self-Supervised Video Representation Using Pretext-Contrastive Learning |
Speech enhancement is challenging because of the diversity of background noise types. Most of the existing methods are focused on modelling the speech rather than the noise. In this paper, we propose a novel idea to model speech and noise simultaneously in a two-branch convolutional neural network, namely SN-Net. In SN-Net, the two branches predict speech and noise, respectively. Instead of information fusion only at the final output layer, interaction modules are introduced at several intermediate feature domains between the two branches to benefit each other. Such an interaction can leverage features learned from one branch to counteract the undesired part and restore the missing component of the other and thus enhance their discrimination capabilities. We also design a feature extraction module, namely residual-convolution-and-attention (RA), to capture the correlations along temporal and frequency dimensions for both the speech and the noises. Evaluations on public datasets show that the interaction module plays a key role in simultaneous modeling and the SN-Net outperforms the state-of-the-art by a large margin on various evaluation metrics. The proposed SN-Net also shows superior performance for speaker separation. | [] | [
"Speaker Separation",
"Speech Enhancement"
] | [] | [
"Deep Noise Suppression (DNS) Challenge"
] | [
"SI-SDR",
"PESQ-WB"
] | Interactive Speech and Noise Modeling for Speech Enhancement |
Neural network applications generally benefit from larger-sized models, but for current speech enhancement models, larger scale networks often suffer from decreased robustness to the variety of real-world use cases beyond what is encountered in training data. We introduce several innovations that lead to better large neural networks for speech enhancement. The novel PoCoNet architecture is a convolutional neural network that, with the use of frequency-positional embeddings, is able to more efficiently build frequency-dependent features in the early layers. A semi-supervised method helps increase the amount of conversational training data by pre-enhancing noisy datasets, improving performance on real recordings. A new loss function biased towards preserving speech quality helps the optimization better match human perceptual opinions on speech quality. Ablation experiments and objective and human opinion metrics show the benefits of the proposed improvements. | [] | [
"Speech Enhancement"
] | [] | [
"Deep Noise Suppression (DNS) Challenge"
] | [
"MOS (NRT, real recordings)",
"MOS (NRT, no reverb)",
"MOS (NRT)",
"PESQ-WB",
"MOS (NRT, reverb)"
] | PoCoNet: Better Speech Enhancement with Frequency-Positional Embeddings, Semi-Supervised Conversational Data, and Biased Loss |
Over the past few years, speech enhancement methods based on deep learning have greatly surpassed traditional methods based on spectral subtraction and spectral estimation. Many of these new techniques operate directly in the short-time Fourier transform (STFT) domain, resulting in a high computational complexity. In this work, we propose PercepNet, an efficient approach that relies on human perception of speech by focusing on the spectral envelope and on the periodicity of the speech. We demonstrate high-quality, real-time enhancement of fullband (48 kHz) speech with less than 5% of a CPU core. | [] | [
"Speech Enhancement"
] | [] | [
"Deep Noise Suppression (DNS) Challenge"
] | [
"MOS (RT, no reverb)",
"MOS (RT)",
"MOS (RT, real recordings)",
"MOS (RT, reverb)"
] | A Perceptually-Motivated Approach for Low-Complexity, Real-Time Enhancement of Fullband Speech |
Existing image generator networks rely heavily on spatial convolutions and, optionally, self-attention blocks in order to gradually synthesize images in a coarse-to-fine manner. Here, we present a new architecture for image generators, where the color value at each pixel is computed independently given the value of a random latent vector and the coordinate of that pixel. No spatial convolutions or similar operations that propagate information across pixels are involved during the synthesis. We analyze the modeling capabilities of such generators when trained in an adversarial fashion, and observe the new generators to achieve similar generation quality to state-of-the-art convolutional generators. We also investigate several interesting properties unique to the new architecture. | [] | [
"Image Generation"
] | [] | [
"Landscapes 256 x 256",
"Satellite-Landscapes 256 x 256",
"LSUN Churches 256 x 256",
"Satellite-Buildings 256 x 256",
"FFHQ 256 x 256"
] | [
"FID"
] | Image Generators with Conditionally-Independent Pixel Synthesis |
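A toy illustration of the architecture described above: every pixel's colour is computed by the same MLP from the shared latent vector and a Fourier encoding of that pixel's coordinate, with no operation propagating information across pixels. The real generator uses StyleGAN2-style modulation and learned coordinate embeddings, which this sketch omits; all sizes and names are illustrative.

```python
import math
import torch
import torch.nn as nn

class PixelwiseGenerator(nn.Module):
    """Toy conditionally-independent pixel generator: each pixel's colour is an
    MLP of (shared latent vector, Fourier features of its coordinate); no
    convolution or attention moves information between pixels."""
    def __init__(self, latent_dim=64, n_freqs=16, hidden=128):
        super().__init__()
        self.register_buffer("freqs", 2.0 ** torch.arange(n_freqs).float() * math.pi)
        coord_feats = 4 * n_freqs                       # sin/cos for x and y
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + coord_feats, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, 3), nn.Tanh(),
        )

    def forward(self, z, height, width):
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, height),
                                torch.linspace(-1, 1, width), indexing="ij")
        coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)            # (H*W, 2)
        enc = torch.cat([torch.sin(coords[:, :, None] * self.freqs),
                         torch.cos(coords[:, :, None] * self.freqs)], dim=-1)
        enc = enc.reshape(coords.shape[0], -1)                           # (H*W, 4*n_freqs)
        z_all = z.expand(enc.shape[0], -1)                               # same latent for every pixel
        rgb = self.mlp(torch.cat([z_all, enc], dim=-1))                  # each row computed independently
        return rgb.t().reshape(1, 3, height, width)

img = PixelwiseGenerator()(torch.randn(1, 64), 32, 32)                   # (1, 3, 32, 32)
```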
Recently, attempts have been made to collect millions of videos to train CNN
models for action recognition in videos. However, curating such large-scale
video datasets requires immense human labor, and training CNNs on millions of
videos demands huge computational resources. In contrast, collecting action
images from the Web is much easier and training on images requires much less
computation. In addition, labeled web images tend to contain discriminative
action poses, which highlight discriminative portions of a video's temporal
progression. We explore the question of whether we can utilize web action
images to train better CNN models for action recognition in videos. We collect
23.8K manually filtered images from the Web that depict the 101 actions in the
UCF101 action video dataset. We show that by utilizing web action images along
with videos in training, significant performance boosts of CNN models can be
achieved. We then investigate the scalability of the process by leveraging
crawled web images (unfiltered) for UCF101 and ActivityNet. We replace 16.2M
video frames by 393K unfiltered images and get comparable performance. | [] | [
"Action Recognition",
"Action Recognition In Videos",
"Action Recognition In Videos ",
"Temporal Action Localization"
] | [] | [
"ActivityNet"
] | [
"mAP"
] | Do Less and Achieve More: Training CNNs for Action Recognition Utilizing Action Images from the Web |
Recent advances in neural network architectures and training algorithms
have shown the effectiveness of representation learning. Neural
network-based models generate better representations than traditional ones.
They have the ability to automatically learn the distributed representation for
sentences and documents. To this end, we propose a novel model that addresses
several issues that are not adequately modeled by the previously proposed
models, such as the memory problem and incorporating the knowledge of document
structure. Our model uses a hierarchical structured self-attention mechanism to
create the sentence and document embeddings. This architecture mirrors the
hierarchical structure of the document and in turn enables us to obtain better
feature representation. The attention mechanism provides extra source of
information to guide the summary extraction. The new model treats the
summarization task as a classification problem in which the model computes the
respective probabilities of sentence-summary membership. The model predictions
are broken up by several features such as information content, salience,
novelty and positional representation. The proposed model was evaluated on two
well-known datasets, the CNN / Daily Mail, and DUC 2002. The experimental
results show that our model outperforms the current extractive state-of-the-art
by a considerable margin. | [] | [
"Document Summarization",
"Extractive Text Summarization",
"Hierarchical structure",
"Representation Learning",
"Text Summarization"
] | [] | [
"CNN / Daily Mail (Anonymized)"
] | [
"ROUGE-L",
"ROUGE-1",
"ROUGE-2"
] | A Hierarchical Structured Self-Attentive Model for Extractive Document Summarization (HSSAS) |
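To illustrate how the predictions can be "broken up" into content, salience, novelty and positional terms, here is a hypothetical sentence-scoring module in that spirit. It is a sketch that assumes sentence and document embeddings are produced upstream by the hierarchical self-attentive encoder; the bilinear parameterisation and the running-summary update are illustrative, not the paper's exact equations.

```python
import torch
import torch.nn as nn

class SentenceScorer(nn.Module):
    """Scores each sentence for summary membership from content, salience
    w.r.t. the document embedding, novelty w.r.t. the summary built so far,
    and position (illustrative decomposition)."""
    def __init__(self, dim, max_sents=100):
        super().__init__()
        self.content = nn.Linear(dim, 1)
        self.salience = nn.Bilinear(dim, dim, 1)
        self.novelty = nn.Bilinear(dim, dim, 1)
        self.position = nn.Embedding(max_sents, 1)

    def forward(self, sent_vecs, doc_vec):
        # sent_vecs: (n_sents, dim) sentence embeddings; doc_vec: (dim,) document embedding
        doc = doc_vec.unsqueeze(0)
        summary = torch.zeros(1, sent_vecs.shape[1])
        probs = []
        for j in range(sent_vecs.shape[0]):
            h = sent_vecs[j:j + 1]
            score = (self.content(h)                          # information content
                     + self.salience(h, doc)                  # relevance to the whole document
                     - self.novelty(h, torch.tanh(summary))   # redundancy penalty
                     + self.position(torch.tensor([j])))      # positional prior
            p = torch.sigmoid(score)
            probs.append(p)
            summary = summary + p * h                         # accumulate the running summary
        return torch.cat(probs).squeeze(1)                    # P(sentence j is in the summary)

probs = SentenceScorer(dim=16)(torch.randn(5, 16), torch.randn(16))
```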
We propose a neural network based approach for extracting models from dynamic data using ordinary and partial differential equations. In particular, given a time-series or spatio-temporal dataset, we seek to identify an accurate governing system which respects the intrinsic differential structure. The unknown governing model is parameterized by using both (shallow) multilayer perceptrons and nonlinear differential terms, in order to incorporate relevant correlations between spatio-temporal samples. We demonstrate the approach on several examples where the data is sampled from various dynamical systems and give a comparison to recurrent networks and other data-discovery methods. In addition, we show that for MNIST and Fashion MNIST, our approach lowers the parameter cost as compared to other deep neural networks. | [] | [
"Image Classification",
"Time Series"
] | [] | [
"MNIST",
"Fashion-MNIST"
] | [
"Percentage error"
] | NeuPDE: Neural Network Based Ordinary and Partial Differential Equations for Modeling Time-Dependent Data |
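A minimal sketch of the kind of model the abstract describes: the time derivative of the state is parameterised by a small MLP and the trajectory is obtained by numerical integration, so fitting reduces to matching the integrated trajectory to observed samples. The paper additionally includes explicit nonlinear differential terms (e.g. spatial derivatives for spatio-temporal data) and more careful integration than the plain Euler scheme shown here.

```python
import torch
import torch.nn as nn

class NeuralODEField(nn.Module):
    """du/dt = f_theta(u) with f_theta a small MLP; the trajectory is produced
    by explicit Euler integration, so fitting amounts to minimising the squared
    error between the trajectory and observed samples (e.g. with Adam)."""
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, u0, dt=0.01, steps=100):
        traj = [u0]
        u = u0
        for _ in range(steps):
            u = u + dt * self.f(u)      # explicit Euler step
            traj.append(u)
        return torch.stack(traj)        # (steps + 1, batch, dim)

path = NeuralODEField(dim=2)(torch.tensor([[1.0, 0.0]]))
```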
Previous researchers have considered sentiment analysis as a document classification task, in which input documents are classified into predefined sentiment classes. Although some sentences in a document provide important evidence for sentiment analysis and others do not, previous work has treated the document as a bag of sentences. In other words, it has not considered the importance of each sentence in the document. To effectively determine the polarity of a document, each sentence in the document should be treated with a different degree of importance. To address this problem, we propose a document-level sentence classification model based on deep neural networks, in which the importance degrees of sentences in documents are automatically determined through gate mechanisms. To verify our new sentiment analysis model, we conducted experiments using sentiment datasets in four different domains: movie reviews, hotel reviews, restaurant reviews, and music reviews. In the experiments, the proposed model outperformed previous state-of-the-art models that do not consider importance differences of sentences in a document. The experimental results show that the importance of sentences should be considered in a document-level sentiment classification task. | [] | [
"Document Classification",
"Sentence Classification",
"Sentiment Analysis",
"Text Classification"
] | [] | [
"IMDb",
"IMDb-M"
] | [
"Accuracy (2 classes)",
"Accuracy (10 classes)",
"Accuracy"
] | Improving Document-Level Sentiment Classification Using Importance of Sentences |
Transfer of pre-trained representations can improve sample efficiency and reduce computational requirements for new tasks. However, representations used for transfer are usually generic, and are not tailored to a particular distribution of downstream tasks. We explore the use of expert representations for transfer with a simple, yet effective, strategy. We train a diverse set of experts by exploiting existing label structures, and use cheap-to-compute performance proxies to select the relevant expert for each target task. This strategy scales the process of transferring to new tasks, since it does not revisit the pre-training data during transfer. Accordingly, it requires little extra compute per target task, and results in a speed-up of 2-3 orders of magnitude compared to competing approaches. Further, we provide an adapter-based architecture able to compress many experts into a single model. We evaluate our approach on two different data sources and demonstrate that it outperforms baselines on over 20 diverse vision tasks in both cases. | [] | [
"Image Classification",
"Transfer Learning"
] | [] | [
"VTAB-1k"
] | [
"Top-1 Accuracy"
] | Scalable Transfer Learning with Expert Models |
Aspect based sentiment analysis (ABSA) involves three fundamental subtasks: aspect term extraction, opinion term extraction, and aspect-level sentiment classification. Early works only focused on solving one of these subtasks individually. Some recent work focused on solving a combination of two subtasks, e.g., extracting aspect terms along with sentiment polarities or extracting the aspect and opinion terms pair-wisely. More recently, the triple extraction task has been proposed, i.e., extracting the (aspect term, opinion term, sentiment polarity) triples from a sentence. However, previous approaches fail to solve all subtasks in a unified end-to-end framework. In this paper, we propose a complete solution for ABSA. We construct two machine reading comprehension (MRC) problems, and solve all subtasks by joint training two BERT-MRC models with parameters sharing. We conduct experiments on these subtasks and results on several benchmark datasets demonstrate the effectiveness of our proposed framework, which significantly outperforms existing state-of-the-art methods. | [] | [
"Aspect-Based Sentiment Analysis",
"Aspect Sentiment Triplet Extraction",
"Machine Reading Comprehension",
"Reading Comprehension",
"Sentiment Analysis"
] | [] | [
"SemEval"
] | [
"F1"
] | A Joint Training Dual-MRC Framework for Aspect Based Sentiment Analysis |
Following the success of deep convolutional networks, state-of-the-art
methods for 3d human pose estimation have focused on deep end-to-end systems
that predict 3d joint locations given raw image pixels. Despite their excellent
performance, it is often not easy to understand whether their remaining error
stems from a limited 2d pose (visual) understanding, or from a failure to map
2d poses into 3-dimensional positions. With the goal of understanding these
sources of error, we set out to build a system that given 2d joint locations
predicts 3d positions. Much to our surprise, we have found that, with current
technology, "lifting" ground truth 2d joint locations to 3d space is a task
that can be solved with a remarkably low error rate: a relatively simple deep
feed-forward network outperforms the best reported result by about 30\% on
Human3.6M, the largest publicly available 3d pose estimation benchmark.
Furthermore, training our system on the output of an off-the-shelf
state-of-the-art 2d detector (\ie, using images as input) yields state of the
art results -- this includes an array of systems that have been trained
end-to-end specifically for this task. Our results indicate that a large
portion of the error of modern deep 3d pose estimation systems stems from their
visual analysis, and suggests directions to further advance the state of the
art in 3d human pose estimation. | [] | [
"3D Human Pose Estimation",
"3D Pose Estimation",
"Pose Estimation"
] | [] | [
"Human3.6M",
"HumanEva-I",
"Geometric Pose Affordance "
] | [
"Average MPJPE (mm)",
"MPJPE (CS)",
"Mean Reconstruction Error (mm)",
"Multi-View or Monocular",
"PCK3D (CS)",
"PCK3D (CA)",
"MPJPE (CA)"
] | A simple yet effective baseline for 3d human pose estimation |
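The "relatively simple deep feed-forward network" above lifts a vector of 2D joint coordinates directly to 3D with residual fully-connected blocks. A PyTorch sketch follows; the layer width, number of blocks, dropout rate and joint count are illustrative rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LiftingNet(nn.Module):
    """Feed-forward network mapping 2D joint locations to 3D (sketch)."""
    def __init__(self, n_joints=17, hidden=1024, n_blocks=2, p_drop=0.5):
        super().__init__()
        self.inp = nn.Linear(2 * n_joints, hidden)
        def block():
            return nn.Sequential(
                nn.Linear(hidden, hidden), nn.BatchNorm1d(hidden), nn.ReLU(), nn.Dropout(p_drop),
                nn.Linear(hidden, hidden), nn.BatchNorm1d(hidden), nn.ReLU(), nn.Dropout(p_drop))
        self.blocks = nn.ModuleList([block() for _ in range(n_blocks)])
        self.out = nn.Linear(hidden, 3 * n_joints)

    def forward(self, joints_2d):          # (batch, 2 * n_joints) detected 2D keypoints
        h = self.inp(joints_2d)
        for blk in self.blocks:
            h = h + blk(h)                 # residual connection
        return self.out(h)                 # (batch, 3 * n_joints) predicted 3D keypoints

pose3d = LiftingNet()(torch.randn(8, 34))
```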
Though tremendous strides have been made in object recognition, one of the
remaining open challenges is detecting small objects. We explore three aspects
of the problem in the context of finding small faces: the role of scale
invariance, image resolution, and contextual reasoning. While most recognition
approaches aim to be scale-invariant, the cues for recognizing a 3px tall face
are fundamentally different than those for recognizing a 300px tall face. We
take a different approach and train separate detectors for different scales. To
maintain efficiency, detectors are trained in a multi-task fashion: they make
use of features extracted from multiple layers of single (deep) feature
hierarchy. While training detectors for large objects is straightforward, the
crucial challenge remains training detectors for small objects. We show that
context is crucial, and define templates that make use of massively-large
receptive fields (where 99% of the template extends beyond the object of
interest). Finally, we explore the role of scale in pre-trained deep networks,
providing ways to extrapolate networks tuned for limited scales to rather
extreme ranges. We demonstrate state-of-the-art results on
massively-benchmarked face datasets (FDDB and WIDER FACE). In particular, when
compared to prior art on WIDER FACE, our results reduce error by a factor of 2
(our models produce an AP of 82% while prior art ranges from 29-64%). | [] | [
"Face Detection",
"Object Recognition"
] | [] | [
"WIDER Face (Hard)",
"WIDER Face (Medium)",
"WIDER Face (Easy)"
] | [
"AP"
] | Finding Tiny Faces |
We study the problem of segmenting moving objects in unconstrained videos.
Given a video, the task is to segment all the objects that exhibit independent
motion in at least one frame. We formulate this as a learning problem and
design our framework with three cues: (i) independent object motion between a
pair of frames, which complements object recognition, (ii) object appearance,
which helps to correct errors in motion estimation, and (iii) temporal
consistency, which imposes additional constraints on the segmentation. The
framework is a two-stream neural network with an explicit memory module. The
two streams encode appearance and motion cues in a video sequence respectively,
while the memory module captures the evolution of objects over time, exploiting
the temporal consistency. The motion stream is a convolutional neural network
trained on synthetic videos to segment independently moving objects in the
optical flow field. The module to build a 'visual memory' in video, i.e., a
joint representation of all the video frames, is realized with a convolutional
recurrent unit learned from a small number of training video sequences.
For every pixel in a frame of a test video, our approach assigns an object or
background label based on the learned spatio-temporal features as well as the
'visual memory' specific to the video. We evaluate our method extensively on
three benchmarks, DAVIS, Freiburg-Berkeley motion segmentation dataset and
SegTrack. In addition, we provide an extensive ablation study to investigate
both the choice of the training data and the influence of each component in the
proposed framework. | [] | [
"Motion Estimation",
"Motion Segmentation",
"Object Recognition",
"Optical Flow Estimation",
"Unsupervised Video Object Segmentation"
] | [] | [
"DAVIS 2016"
] | [
"F-measure (Decay)",
"Jaccard (Mean)",
"F-measure (Recall)",
"Jaccard (Decay)",
"Jaccard (Recall)",
"F-measure (Mean)",
"J&F"
] | Learning to Segment Moving Objects |
Interactive object selection is a very important research problem and has
many applications. Previous algorithms require substantial user interactions to
estimate the foreground and background distributions. In this paper, we present
a novel deep learning based algorithm which has a much better understanding of
objectness and thus can reduce user interactions to just a few clicks. Our
algorithm transforms user provided positive and negative clicks into two
Euclidean distance maps which are then concatenated with the RGB channels of
images to compose (image, user interactions) pairs. We generate many of such
pairs by combining several random sampling strategies to model user click
patterns and use them to fine tune deep Fully Convolutional Networks (FCNs).
Finally, the output probability maps of our FCN-8s model are integrated with
graph cut optimization to refine the boundary segments. Our model is trained on
the PASCAL segmentation dataset and evaluated on other datasets with different
object classes. Experimental results on both seen and unseen objects clearly
demonstrate that our algorithm has a good generalization ability and is
superior to all existing interactive object selection approaches. | [] | [
"Interactive Segmentation"
] | [] | [
"GrabCut",
"DAVIS",
"SBD"
] | [
"NoC@90",
"NoC@85"
] | Deep Interactive Object Selection |
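The interaction encoding described above is simple: positive and negative clicks are each turned into a truncated Euclidean distance map and concatenated with the RGB channels to form the network input. A small NumPy sketch follows; the function names, the 255 truncation value and the (row, col) click convention are assumptions consistent with the description.

```python
import numpy as np

def click_distance_map(h, w, clicks, truncate=255.0):
    """Euclidean distance from every pixel to its nearest click, truncated."""
    ys, xs = np.mgrid[0:h, 0:w]
    if not clicks:
        return np.full((h, w), truncate, dtype=np.float32)
    d = np.min(np.stack([np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2) for cy, cx in clicks]), axis=0)
    return np.minimum(d, truncate).astype(np.float32)

def build_input(rgb, pos_clicks, neg_clicks):
    """Stack RGB with the positive- and negative-click distance maps into the
    5-channel input fed to the FCN."""
    h, w, _ = rgb.shape
    pos = click_distance_map(h, w, pos_clicks)
    neg = click_distance_map(h, w, neg_clicks)
    return np.concatenate([rgb.astype(np.float32),
                           pos[..., None], neg[..., None]], axis=2)   # (H, W, 5)

x = build_input(np.zeros((64, 64, 3), np.uint8), pos_clicks=[(32, 32)], neg_clicks=[(5, 60)])
```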
Online platforms can be divided into information-oriented and social-oriented
domains. The former refers to forums or E-commerce sites that emphasize
user-item interactions, like Trip.com and Amazon; whereas the latter refers to
social networking services (SNSs) that have rich user-user connections, such as
Facebook and Twitter. Despite their heterogeneity, these two domains can be
bridged by a few overlapping users, dubbed as bridge users. In this work, we
address the problem of cross-domain social recommendation, i.e., recommending
relevant items of information domains to potential users of social networks. To
our knowledge, this is a new problem that has rarely been studied before.
Existing cross-domain recommender systems are unsuitable for this task since
they have either focused on homogeneous information domains or assumed that
users are fully overlapped. Towards this end, we present a novel Neural Social
Collaborative Ranking (NSCR) approach, which seamlessly sews up the user-item
interactions in information domains and user-user connections in SNSs. In the
information domain part, the attributes of users and items are leveraged to
strengthen the embedding learning of users and items. In the SNS part, the
embeddings of bridge users are propagated to learn the embeddings of other
non-bridge users. Extensive experiments on two real-world datasets demonstrate
the effectiveness and rationality of our NSCR method. | [] | [
"Collaborative Ranking",
"Recommendation Systems"
] | [] | [
"Epinions",
"WeChat"
] | [
"MAE",
"P@10",
"RMSE",
"AUC"
] | Item Silk Road: Recommending Items from Information Domains to Social Users |
In this paper, we propose our Correlation For Completion Network (CFCNet), an end-to-end deep learning model that uses the correlation between two data sources to perform sparse depth completion. CFCNet learns to capture, to the largest extent, the semantically correlated features between RGB and depth information. Through pairs of image pixels and the visible measurements in a sparse depth map, CFCNet facilitates feature-level mutual transformation of different data sources. Such a transformation enables CFCNet to predict features and reconstruct data of missing depth measurements according to their corresponding, transformed RGB features. We extend canonical correlation analysis to a 2D domain and formulate it as one of our training objectives (i.e. 2d deep canonical correlation, or "2D2CCA loss"). Extensive experiments validate the ability and flexibility of our CFCNet compared to the state-of-the-art methods on both indoor and outdoor scenes with different real-life sparse patterns. Codes are available at: https://github.com/choyingw/CFCNet. | [] | [
"Depth Completion"
] | [] | [
"KITTI Depth Completion 500 points"
] | [
"RMSE "
] | Deep RGB-D Canonical Correlation Analysis For Sparse Depth Completion |
Inspired by the effectiveness of adversarial training in the area of
Generative Adversarial Networks we present a new approach for learning feature
representations in person re-identification. We investigate different types of
bias that typically occur in re-ID scenarios, i.e., pose, body part and camera
view, and propose a general approach to address them. We introduce an
adversarial strategy for controlling bias, named Bias-controlled Adversarial
framework (BCA), with two complementary branches to reduce or to enhance
bias-related features. The results and comparison to the state of the art on
different benchmarks show that our framework is an effective strategy for
person re-identification. The performance improvements are in both full and
partial views of persons. | [] | [
"Person Re-Identification"
] | [] | [
"DukeMTMC-reID",
"Market-1501"
] | [
"Rank-1",
"MAP"
] | Person Re-identification with Bias-controlled Adversarial Training |
Video object segmentation is an essential task in robot manipulation to
facilitate grasping and learning affordances. Incremental learning is important
for robotics in unstructured environments, since the total number of objects
and their variations can be intractable. Inspired by how children learn,
human robot interaction (HRI) can be utilized to teach robots about the world
under human guidance, similar to how children learn from a parent or a
teacher. A human teacher can show potential objects of interest to the robot,
which is able to self-adapt to the teaching signal without requiring manual
segmentation labels. We propose a novel teacher-student learning paradigm to
teach robots about their surrounding environment. A two-stream motion and
appearance "teacher" network provides pseudo-labels to adapt an appearance
"student" network. The student network is able to segment the newly learned
objects in other scenes, whether they are static or in motion. We also
introduce a carefully designed dataset that serves the proposed HRI setup,
denoted as (I)nteractive (V)ideo (O)bject (S)egmentation. Our IVOS dataset
contains teaching videos of different objects, and manipulation tasks. Unlike
previous datasets, IVOS provides manipulation tasks sequences with segmentation
annotation along with the waypoints for the robot trajectories. It also
provides segmentation annotation for the different transformations such as
translation, scale, planar rotation, and out-of-plane rotation. Our proposed
adaptation method outperforms the state-of-the-art on DAVIS and FBMS with 6.8%
and 1.2% in F-measure respectively. It improves over the baseline on IVOS
dataset with 46.1% and 25.9% in mIoU. | [] | [
"Human robot interaction",
"Incremental Learning",
"Semantic Segmentation",
"Unsupervised Video Object Segmentation",
"Video Object Segmentation",
"Video Semantic Segmentation"
] | [] | [
"DAVIS 2016"
] | [
"F-measure (Decay)",
"Jaccard (Mean)",
"F-measure (Recall)",
"Jaccard (Decay)",
"Jaccard (Recall)",
"F-measure (Mean)",
"J&F"
] | Video Object Segmentation using Teacher-Student Adaptation in a Human Robot Interaction (HRI) Setting |
Recent BIO-tagging-based neural semantic role labeling models are very high
performing, but assume gold predicates as part of the input and cannot
incorporate span-level features. We propose an end-to-end approach for jointly
predicting all predicates, arguments spans, and the relations between them. The
model makes independent decisions about what relationship, if any, holds
between every possible word-span pair, and learns contextualized span
representations that provide rich, shared input features for each decision.
Experiments demonstrate that this approach sets a new state of the art on
PropBank SRL without gold predicates. | [] | [
"Semantic Role Labeling"
] | [] | [
"CoNLL 2005",
"OntoNotes",
"CoNLL 2012"
] | [
"F1"
] | Jointly Predicting Predicates and Arguments in Neural Semantic Role Labeling |
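The sketch below illustrates, under simplifying assumptions, the kind of span-pair factorization the abstract above describes: every candidate argument span is scored independently against every candidate predicate, with shared span representations feeding each decision. Class and parameter names are hypothetical, and the real model additionally prunes spans and uses richer span features.

```python
import torch
import torch.nn as nn

class SpanPairScorer(nn.Module):
    """Score every (predicate, argument-span) pair independently.

    Span representations are built from contextualized token vectors by
    concatenating the two endpoints, and each pair gets an independent label
    distribution (including a "no relation" label).
    """
    def __init__(self, hidden, num_labels, max_width=10):
        super().__init__()
        self.max_width = max_width
        self.scorer = nn.Sequential(
            nn.Linear(hidden * 3, hidden), nn.ReLU(), nn.Linear(hidden, num_labels))

    def forward(self, token_reps):  # token_reps: (T, hidden)
        T, H = token_reps.shape
        spans = [(i, j) for i in range(T)
                 for j in range(i, min(T, i + self.max_width))]
        span_reps = torch.stack(
            [torch.cat([token_reps[i], token_reps[j]]) for i, j in spans])  # (S, 2H)
        # Treat every single token as a candidate predicate.
        scores = {}
        for p in range(T):
            pred_rep = token_reps[p].expand(len(spans), H)
            pair = torch.cat([pred_rep, span_reps], dim=-1)   # (S, 3H)
            scores[p] = self.scorer(pair)                     # (S, num_labels)
        return spans, scores
```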
In single image deblurring, the "coarse-to-fine" scheme, i.e. gradually
restoring the sharp image on different resolutions in a pyramid, is very
successful in both traditional optimization-based methods and recent
neural-network-based approaches. In this paper, we investigate this strategy
and propose a Scale-recurrent Network (SRN-DeblurNet) for this deblurring task.
Compared with the many recent learning-based approaches in [25], it has a
simpler network structure, a smaller number of parameters and is easier to
train. We evaluate our method on large-scale deblurring datasets with complex
motion. Results show that our method produces better-quality results than the
state of the art, both quantitatively and qualitatively. | [] | [
"Deblurring"
] | [] | [
"RealBlur-R",
"RealBlur-J",
"GoPro",
"RealBlur-J (trained on GoPro)",
"RealBlur-R (trained on GoPro)",
"HIDE (trained on GOPRO)"
] | [
"SSIM",
"SSIM (sRGB)",
"PSNR",
"PSNR (sRGB)"
] | Scale-recurrent Network for Deep Image Deblurring |
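The following is a hedged sketch of the coarse-to-fine, scale-recurrent idea behind the deblurring abstract above: the same network is applied at each level of an image pyramid, and each scale receives the upsampled estimate from the coarser scale. The layer sizes and module names are assumptions for illustration only; the actual SRN uses a deeper encoder-decoder with a recurrent (ConvLSTM) hidden state between scales.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleRecurrentSketch(nn.Module):
    """Coarse-to-fine restoration with weights shared across scales."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, blurry, num_scales=3):
        outputs = []
        estimate = None
        for s in reversed(range(num_scales)):          # coarsest scale first
            scale = 1.0 / (2 ** s)
            x = F.interpolate(blurry, scale_factor=scale, mode='bilinear',
                              align_corners=False) if s > 0 else blurry
            # Each scale sees its blurry input plus the upsampled coarser estimate.
            prev = x if estimate is None else F.interpolate(
                estimate, size=x.shape[-2:], mode='bilinear', align_corners=False)
            estimate = x + self.net(torch.cat([x, prev], dim=1))  # residual output
            outputs.append(estimate)
        return outputs  # coarse-to-fine sharp estimates
```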
Face alignment algorithms locate a set of landmark points in images of faces taken in unrestricted situations. State-of-the-art approaches typically fail or lose accuracy in the presence of occlusions, strong deformations, large pose variations and ambiguous configurations. In this paper we present 3DDE, a robust and efficient face alignment algorithm based on a coarse-to-fine cascade of ensembles of regression trees. It is initialized by robustly fitting a 3D face model to the probability maps produced by a convolutional neural network. With this initialization we address self-occlusions and large face rotations. Further, the regressor implicitly imposes a prior face shape on the solution, addressing occlusions and ambiguous face configurations. Its coarse-to-fine structure tackles the combinatorial explosion of part deformations. In the experiments performed, 3DDE improves the state of the art on the 300W, COFW, AFLW and WFLW data sets. Finally, we perform cross-dataset experiments that reveal the existence of a significant data set bias in these benchmarks. | [] | [
"Face Alignment",
"Face Model",
"Facial Landmark Detection",
"Regression"
] | [] | [
"WFLW",
"300W",
"COFW",
"AFLW-Full"
] | [
"Mean NME",
"Fullset (public)",
"[email protected] (all)",
"NME",
"ME (%, all) ",
"[email protected](%, all)",
"Mean Error Rate"
] | Face Alignment using a 3D Deeply-initialized Ensemble of Regression Trees |
In recent years, Graph Neural Networks (GNNs), which can naturally integrate node information and topological structure, have been demonstrated to be powerful in learning on graph data. These advantages of GNNs provide great potential to advance social recommendation, since data in social recommender systems can be represented as a user-user social graph and a user-item graph, and learning latent factors of users and items is the key. However, building social recommender systems based on GNNs faces challenges. For example, the user-item graph encodes both interactions and their associated opinions; social relations have heterogeneous strengths; and users are involved in two graphs (e.g., the user-user social graph and the user-item graph). To address these three challenges simultaneously, in this paper we present a novel graph neural network framework (GraphRec) for social recommendations. In particular, we provide a principled approach to jointly capture interactions and opinions in the user-item graph and propose the framework GraphRec, which coherently models the two graphs and heterogeneous strengths. Extensive experiments on two real-world datasets demonstrate the effectiveness of the proposed framework GraphRec. Our code is available at \url{https://github.com/wenqifan03/GraphRec-WWW19} | [] | [
"Recommendation Systems"
] | [] | [
"Epinions"
] | [
"MAE",
"RMSE"
] | Graph Neural Networks for Social Recommendation |
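As a rough illustration of the abstract above, the sketch below builds a user representation from both graphs: interacted items fused with opinion embeddings (user-item graph) and social neighbors (user-user graph), with attention weighting each source. All module names and sizes are hypothetical assumptions; this is not the GraphRec architecture itself.

```python
import torch
import torch.nn as nn

class DualGraphUserEncoder(nn.Module):
    """Combine item-space and social-space aggregations into one user embedding."""
    def __init__(self, dim):
        super().__init__()
        self.opinion_fuse = nn.Linear(2 * dim, dim)
        self.item_attn = nn.Linear(dim, 1)
        self.social_attn = nn.Linear(dim, 1)
        self.combine = nn.Linear(2 * dim, dim)

    def attend(self, feats, attn_layer):
        # Attention over neighbors: softmax-normalized scalar weight per row.
        weights = torch.softmax(attn_layer(feats), dim=0)    # (N, 1)
        return (weights * feats).sum(dim=0)                  # (dim,)

    def forward(self, item_embs, opinion_embs, neighbor_embs):
        # Item-space aggregation: fuse each interacted item with its opinion/rating.
        fused = torch.relu(
            self.opinion_fuse(torch.cat([item_embs, opinion_embs], dim=-1)))
        item_space = self.attend(fused, self.item_attn)
        # Social-space aggregation over the user's social neighbors.
        social_space = self.attend(neighbor_embs, self.social_attn)
        return torch.relu(
            self.combine(torch.cat([item_space, social_space], dim=-1)))
```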
To date, most recent work under the retrieval-reader framework for open-domain QA focuses exclusively on either extractive or generative readers. In this paper, we study a hybrid approach for leveraging the strengths of both models. We apply novel techniques to enhance both extractive and generative readers built upon recent pretrained neural language models, and find that proper training methods can provide large improvements over previous state-of-the-art models. We demonstrate that a simple hybrid approach that combines answers from both readers can efficiently take advantage of extractive and generative answer inference strategies and outperforms single models as well as homogeneous ensembles. Our approach outperforms previous state-of-the-art models by 3.3 and 2.7 points in exact match on NaturalQuestions and TriviaQA, respectively. | [] | [
"Open-Domain Question Answering",
"Question Answering"
] | [] | [
"EfficientQA test",
"TriviaQA",
"EfficientQA dev",
"Natural Questions (short)"
] | [
"F1",
"Accuracy"
] | UnitedQA: A Hybrid Approach for Open Domain Question Answering |
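A minimal sketch of the hybrid idea in the abstract above: answer candidates from an extractive and a generative reader are pooled after per-reader score normalization, so agreement between the two inference strategies is rewarded. The combination rule and weighting are illustrative assumptions, not the paper's exact calibration.

```python
from collections import defaultdict

def combine_answers(extractive, generative, alpha=0.5):
    """Combine (answer_string, score) candidates from two readers.

    Scores from each reader are normalized to sum to one, then candidates that
    agree after simple string normalization pool their evidence.
    """
    def normalize(cands):
        total = sum(s for _, s in cands) or 1.0
        return [(a.strip().lower(), s / total) for a, s in cands]

    pooled = defaultdict(float)
    for ans, score in normalize(extractive):
        pooled[ans] += alpha * score
    for ans, score in normalize(generative):
        pooled[ans] += (1 - alpha) * score
    return max(pooled.items(), key=lambda kv: kv[1])

# Example: the readers partly disagree, but pooled evidence picks the shared answer.
print(combine_answers([("Paris", 0.6), ("Lyon", 0.4)],
                      [("paris", 0.7), ("Marseille", 0.3)]))
```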
AMR-to-text generation is a problem recently introduced to the NLP community, in which the goal is to generate sentences from Abstract Meaning Representation (AMR) graphs. Sequence-to-sequence models can be used to this end by converting the AMR graphs to strings. Approaching the problem while working directly with graphs requires the use of graph-to-sequence models that encode the AMR graph into a vector representation. Such encoding has been shown to be beneficial in the past, and unlike sequential encoding, it allows us to explicitly capture reentrant structures in the AMR graphs. We investigate the extent to which reentrancies (nodes with multiple parents) have an impact on AMR-to-text generation by comparing graph encoders to tree encoders, where reentrancies are not preserved. We show that improvements in the treatment of reentrancies and long-range dependencies contribute to higher overall scores for graph encoders. Our best model achieves 24.40 BLEU on LDC2015E86, outperforming the state of the art by 1.1 points and 24.54 BLEU on LDC2017T10, outperforming the state of the art by 1.24 points. | [] | [
"AMR-to-Text Generation",
"Graph-to-Sequence",
"Text Generation"
] | [] | [
"LDC2015E86:"
] | [
"BLEU"
] | Structural Neural Encoders for AMR-to-text Generation |
Common-sense and background knowledge is required to understand natural
language, but in most neural natural language understanding (NLU) systems, this
knowledge must be acquired from training corpora during learning, and then it
is static at test time. We introduce a new architecture for the dynamic
integration of explicit background knowledge in NLU models. A general-purpose
reading module reads background knowledge in the form of free-text statements
(together with task-specific text inputs) and yields refined word
representations to a task-specific NLU architecture that reprocesses the task
inputs with these representations. Experiments on document question answering
(DQA) and recognizing textual entailment (RTE) demonstrate the effectiveness
and flexibility of the approach. Analysis shows that our model learns to
exploit knowledge in a semantically appropriate way. | [] | [
"Common Sense Reasoning",
"Natural Language Inference",
"Natural Language Understanding",
"Question Answering"
] | [] | [
"TriviaQA"
] | [
"EM",
"F1"
] | Dynamic Integration of Background Knowledge in Neural NLU Systems |
We introduce $k$NN-LMs, which extend a pre-trained neural language model (LM) by linearly interpolating it with a $k$-nearest neighbors ($k$NN) model. The nearest neighbors are computed according to distance in the pre-trained LM embedding space, and can be drawn from any text collection, including the original LM training data. Applying this augmentation to a strong Wikitext-103 LM, with neighbors drawn from the original training set, our $k$NN-LM achieves a new state-of-the-art perplexity of 15.79 - a 2.9 point improvement with no additional training. We also show that this approach has implications for efficiently scaling up to larger training sets and allows for effective domain adaptation, by simply varying the nearest neighbor datastore, again without further training. Qualitatively, the model is particularly helpful in predicting rare patterns, such as factual knowledge. Together, these results strongly suggest that learning similarity between sequences of text is easier than predicting the next word, and that nearest neighbor search is an effective approach for language modeling in the long tail. | [] | [
"Domain Adaptation",
"Language Modelling"
] | [] | [
"WikiText-103"
] | [
"Number of params",
"Validation perplexity",
"Test perplexity"
] | Generalization through Memorization: Nearest Neighbor Language Models |
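The abstract above describes a simple mechanism: interpolate the pre-trained LM's next-token distribution with a distribution induced by the k nearest neighbors of the current context embedding. A minimal sketch follows; the datastore layout, brute-force L2 search, and hyperparameter values are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def knn_lm_probs(query_emb, p_lm, keys, values, vocab_size,
                 k=8, temperature=1.0, lam=0.25):
    """Interpolate a base LM distribution with a k-nearest-neighbor distribution.

    query_emb : (d,) context embedding from the pre-trained LM
    p_lm      : (V,) next-token distribution from the LM
    keys      : (N, d) datastore context embeddings
    values    : (N,) next-token ids stored alongside each key
    """
    # Squared L2 distances from the query to every datastore key.
    dists = np.sum((keys - query_emb) ** 2, axis=1)
    nn_idx = np.argsort(dists)[:k]

    # Softmax over negative distances gives a distribution over retrieved tokens.
    logits = -dists[nn_idx] / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()

    p_knn = np.zeros(vocab_size)
    for w, tok in zip(weights, values[nn_idx]):
        p_knn[tok] += w

    # Linear interpolation: p = lam * p_knn + (1 - lam) * p_lm.
    return lam * p_knn + (1.0 - lam) * p_lm
```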
Layer normalization (LayerNorm) is a technique to normalize the distributions of intermediate layers. It enables smoother gradients, faster training, and better generalization accuracy. However, it is still unclear where its effectiveness stems from. In this paper, our main contribution is to take a step further in understanding LayerNorm. Many previous studies believe that the success of LayerNorm comes from forward normalization. Unlike them, we find that the derivatives of the mean and variance are more important than forward normalization by re-centering and re-scaling backward gradients. Furthermore, we find that the parameters of LayerNorm, including the bias and gain, increase the risk of over-fitting and do not work in most cases. Experiments show that a simple version of LayerNorm (LayerNorm-simple) without the bias and gain outperforms LayerNorm on four datasets. It obtains state-of-the-art performance on En-Vi machine translation. To address the over-fitting problem, we propose a new normalization method, Adaptive Normalization (AdaNorm), by replacing the bias and gain with a new transformation function. Experiments show that AdaNorm demonstrates better results than LayerNorm on seven out of eight datasets. | [] | [
"Machine Translation"
] | [] | [
"IWSLT2015 English-Vietnamese"
] | [
"BLEU"
] | Understanding and Improving Layer Normalization |
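A short sketch of the LayerNorm-simple variant discussed above, i.e., standard layer normalization with the learnable bias and gain removed; the epsilon value and class name are illustrative.

```python
import torch
import torch.nn as nn

class LayerNormSimple(nn.Module):
    """LayerNorm without the learnable bias and gain parameters."""
    def __init__(self, eps=1e-5):
        super().__init__()
        self.eps = eps

    def forward(self, x):
        # Normalize over the last dimension only; no affine transform afterwards.
        mean = x.mean(dim=-1, keepdim=True)
        var = x.var(dim=-1, unbiased=False, keepdim=True)
        return (x - mean) / torch.sqrt(var + self.eps)

# Standard LayerNorm keeps a learnable gain (weight) and bias per feature.
standard = nn.LayerNorm(512)
simple = LayerNormSimple()

x = torch.randn(4, 10, 512)
print(standard(x).shape, simple(x).shape)
```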
Video super-resolution plays an important role in surveillance video analysis and ultra-high-definition video display, and has drawn much attention in both the research and industrial communities. Although many deep-learning-based video super-resolution (VSR) methods have been proposed, it is hard to compare these methods directly, since the different loss functions and training datasets have a significant impact on the super-resolution results. In this work, we carefully study and compare three temporal modeling methods (2D CNN with early fusion, 3D CNN with slow fusion, and Recurrent Neural Network) for video super-resolution. We also propose a novel Recurrent Residual Network (RRN) for efficient video super-resolution, where residual learning is utilized to stabilize the training of the RNN and to boost the super-resolution performance. Extensive experiments show that the proposed RRN is highly computationally efficient and produces temporally consistent VSR results with finer details than other temporal modeling methods. Besides, the proposed method achieves state-of-the-art results on several widely used benchmarks. | [] | [
"Super-Resolution",
"Video Super-Resolution"
] | [] | [
"Vid4 - 4x upscaling",
"UDM10 - 4x upscaling",
"SPMCS - 4x upscaling"
] | [
"SSIM",
"PSNR"
] | Revisiting Temporal Modeling for Video Super-resolution |
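A hedged sketch of recurrent residual propagation for video super-resolution, in the spirit of the RRN described above: a hidden state is carried from frame to frame, residual connections stabilize the recurrence, and a pixel-shuffle layer produces the upscaled output. Layer sizes and the exact block layout are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class RecurrentResidualVSR(nn.Module):
    """Process frames sequentially, carrying a hidden state between time steps."""
    def __init__(self, ch=64, scale=4):
        super().__init__()
        self.inp = nn.Conv2d(3 + ch, ch, 3, padding=1)
        self.res1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.res2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.up = nn.Sequential(nn.Conv2d(ch, 3 * scale * scale, 3, padding=1),
                                nn.PixelShuffle(scale))
        self.ch = ch

    def forward(self, frames):                  # frames: (B, T, 3, H, W)
        B, T, _, H, W = frames.shape
        hidden = frames.new_zeros(B, self.ch, H, W)
        outputs = []
        for t in range(T):
            x = torch.relu(self.inp(torch.cat([frames[:, t], hidden], dim=1)))
            x = x + torch.relu(self.res2(torch.relu(self.res1(x))))  # residual block
            hidden = x                           # hidden state carried to next frame
            outputs.append(self.up(x))
        return torch.stack(outputs, dim=1)       # (B, T, 3, scale*H, scale*W)
```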
This work studies the use of visual semantic representations to align entities in heterogeneous knowledge graphs (KGs). Images are natural components of many existing KGs. By combining visual knowledge with other auxiliary information, we show that the proposed new approach, EVA, creates a holistic entity representation that provides strong signals for cross-graph entity alignment. Moreover, previous entity alignment methods require human-labelled seed alignments, which restricts their applicability. EVA provides a completely unsupervised solution by leveraging the visual similarity of entities to create an initial seed dictionary (visual pivots). Experiments on the benchmark data sets DBP15k and DWY15k show that EVA offers state-of-the-art performance on both monolingual and cross-lingual entity alignment tasks. Furthermore, we discover that images are particularly useful for aligning long-tail KG entities, which inherently lack the structural contexts necessary for capturing the correspondences. | [] | [
"Entity Alignment",
"Knowledge Graphs"
] | [] | [
"DBP15k zh-en",
"dbp15k fr-en",
"dbp15k ja-en"
] | [
"Hits@1"
] | Visual Pivoting for (Unsupervised) Entity Alignment |
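The sketch below illustrates the "visual pivots" idea from the abstract above: an unsupervised seed dictionary is built from mutual nearest neighbors under cosine similarity of entity image embeddings. It is a simplified illustration under assumed inputs, not EVA's full pipeline.

```python
import numpy as np

def visual_seed_dictionary(img_embs_g1, img_embs_g2):
    """Build an unsupervised seed alignment from visual similarity.

    `img_embs_g1` (N1, d) and `img_embs_g2` (N2, d) are image-feature vectors
    for entities in two knowledge graphs. Pairs that are mutual nearest
    neighbors under cosine similarity become the "visual pivots".
    """
    a = img_embs_g1 / np.linalg.norm(img_embs_g1, axis=1, keepdims=True)
    b = img_embs_g2 / np.linalg.norm(img_embs_g2, axis=1, keepdims=True)
    sim = a @ b.T                            # (N1, N2) cosine similarities

    best_for_g1 = sim.argmax(axis=1)         # nearest G2 entity for each G1 entity
    best_for_g2 = sim.argmax(axis=0)         # nearest G1 entity for each G2 entity

    # Keep only mutual nearest-neighbor pairs as seeds.
    return [(i, j) for i, j in enumerate(best_for_g1) if best_for_g2[j] == i]
```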
Most object recognition approaches predominantly focus on learning discriminative visual patterns while overlooking the holistic object structure. Though important, structure modeling usually requires significant manual annotations and is therefore labor-intensive. In this paper, we propose to "look into object" (explicitly yet intrinsically model the object structure) by incorporating self-supervision into the traditional framework. We show that the recognition backbone can be substantially enhanced for more robust representation learning, at no cost in extra annotation or inference speed. Specifically, we first propose an object-extent learning module for localizing the object according to the visual patterns shared among the instances in the same category. We then design a spatial context learning module for modeling the internal structures of the object by predicting the relative positions within the extent. These two modules can be easily plugged into any backbone network during training and detached at inference time. Extensive experiments show that our look-into-object approach (LIO) achieves large performance gains on a number of benchmarks, including generic object recognition (ImageNet) and fine-grained object recognition tasks (CUB, Cars, Aircraft). We also show that this learning paradigm is highly generalizable to other tasks such as object detection and segmentation (MS COCO). Project page: https://github.com/JDAI-CV/LIO. | [] | [
"Fine-Grained Image Classification",
"Image Recognition",
"Instance Segmentation",
"Object Detection",
"Object Recognition",
"Representation Learning",
"Semantic Segmentation"
] | [] | [
"CUB-200-2011",
"Stanford Cars",
"ImageNet",
"FGVC Aircraft"
] | [
"Accuracy",
"Top-1 Error Rate"
] | Look-into-Object: Self-supervised Structure Modeling for Object Recognition |
Sentiment analysis in conversations has gained increasing attention in recent years because of the growing number of applications it can serve, e.g., sentiment analysis, recommender systems, and human-robot interaction. The main difference between conversational sentiment analysis and single-sentence sentiment analysis is the existence of context information, which may influence the sentiment of an utterance in a dialogue. How to effectively encode contextual information in dialogues, however, remains a challenge. Existing approaches employ complicated deep learning structures to distinguish different parties in a conversation and then model the context information. In this paper, we propose a fast, compact and parameter-efficient party-ignorant framework named bidirectional emotional recurrent unit for conversational sentiment analysis. In our system, a generalized neural tensor block followed by a two-channel classifier is designed to perform context compositionality and sentiment classification, respectively. Extensive experiments on three standard datasets demonstrate that our model outperforms the state of the art in most cases. | [] | [
"Emotion Recognition in Conversation",
"Human robot interaction"
] | [] | [
"IEMOCAP",
"MELD"
] | [
"Weighted Macro-F1",
"F1",
"Accuracy"
] | BiERU: Bidirectional Emotional Recurrent Unit for Conversational Sentiment Analysis |