abstract | field | task | method | dataset | metric | title |
---|---|---|---|---|---|---|
Interacting with relational databases through natural language helps users of
any background easily query and analyze a vast amount of data. This requires a
system that understands users' questions and converts them to SQL queries
automatically. In this paper we present a novel approach, TypeSQL, which views
this problem as a slot filling task. Additionally, TypeSQL utilizes type
information to better understand rare entities and numbers in natural language
questions. We test this idea on the WikiSQL dataset and outperform the prior
state-of-the-art by 5.5% in much less time. We also show that accessing the
content of databases can significantly improve the performance when users'
queries are not well-formed. TypeSQL gets 82.6% accuracy, a 17.5% absolute
improvement compared to the previous content-sensitive model. | [] | [
"Slot Filling",
"Text-To-Sql"
] | [] | [
"WikiSQL"
] | [
"Execution Accuracy"
] | TypeSQL: Knowledge-based Type-Aware Neural Text-to-SQL Generation |
We propose a self-supervised approach for learning representations and
robotic behaviors entirely from unlabeled videos recorded from multiple
viewpoints, and study how this representation can be used in two robotic
imitation settings: imitating object interactions from videos of humans, and
imitating human poses. Imitation of human behavior requires a
viewpoint-invariant representation that captures the relationships between
end-effectors (hands or robot grippers) and the environment, object attributes,
and body pose. We train our representations using a metric learning loss, where
multiple simultaneous viewpoints of the same observation are attracted in the
embedding space, while being repelled from temporal neighbors which are often
visually similar but functionally different. In other words, the model
simultaneously learns to recognize what is common between different-looking
images, and what is different between similar-looking images. This signal
causes our model to discover attributes that do not change across viewpoint,
but do change across time, while ignoring nuisance variables such as
occlusions, motion blur, lighting and background. We demonstrate that this
representation can be used by a robot to directly mimic human poses without an
explicit correspondence, and that it can be used as a reward function within a
reinforcement learning algorithm. While representations are learned from an
unlabeled collection of task-related videos, robot behaviors such as pouring
are learned by watching a single 3rd-person demonstration by a human. Reward
functions obtained by following the human demonstrations under the learned
representation enable efficient reinforcement learning that is practical for
real-world robotic systems. Video results, open-source code and dataset are
available at https://sermanet.github.io/imitate | [] | [
"Metric Learning",
"Self-Supervised Learning",
"Video Alignment"
] | [] | [
"UPenn Action"
] | [
"Kendall's Tau"
] | Time-Contrastive Networks: Self-Supervised Learning from Video |
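A minimal sketch of the multi-view metric-learning objective described in the abstract above, written as a standard triplet hinge loss in PyTorch (not code from the paper; the margin value and embedding shapes are illustrative assumptions). Simultaneous frames from two viewpoints act as anchor and positive, while a temporally nearby frame from the same viewpoint acts as negative.

```python
import torch.nn.functional as F

def time_contrastive_loss(anchor, positive, negative, margin=0.2):
    """Triplet-style time-contrastive loss (illustrative sketch).

    anchor:   (B, D) embeddings of frames from view 1 at time t
    positive: (B, D) embeddings of the simultaneous frames from view 2
    negative: (B, D) embeddings of temporal neighbours from view 1
    """
    d_pos = (anchor - positive).pow(2).sum(dim=1)  # pull co-occurring views together
    d_neg = (anchor - negative).pow(2).sum(dim=1)  # push temporal neighbours apart
    return F.relu(d_pos - d_neg + margin).mean()
```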
This work proposes a novel learning objective to train a deep neural network to perform end-to-end image pixel clustering. We apply the approach to
instance segmentation, which is at the intersection of image semantic
segmentation and object detection. We utilize the most fundamental property of
instance labeling -- the pairwise relationship between pixels -- as the
supervision to formulate the learning objective, then apply it to train a fully
convolutional network (FCN) for learning to perform pixel-wise clustering. The
resulting clusters can be used as the instance labeling directly. To support
labeling of an unlimited number of instances, we further formulate ideas from
graph coloring theory into the proposed learning objective. The evaluation on
the Cityscapes dataset demonstrates strong performance and therefore serves as a proof of concept. Moreover, our approach won second place in the lane detection
competition of 2017 CVPR Autonomous Driving Challenge, and was the top
performer without using external data. | [] | [
"Autonomous Driving",
"Instance Segmentation",
"Lane Detection",
"Object Detection",
"Semantic Segmentation"
] | [] | [
"TuSimple"
] | [
"F1 score",
"Accuracy"
] | Learning to Cluster for Proposal-Free Instance Segmentation |
Multi-person articulated pose tracking in unconstrained videos is an
important while challenging problem. In this paper, going along the road of
top-down approaches, we propose a decent and efficient pose tracker based on
pose flows. First, we design an online optimization framework to build the
association of cross-frame poses and form pose flows (PF-Builder). Second, a
novel pose flow non-maximum suppression (PF-NMS) is designed to robustly reduce
redundant pose flows and re-link temporal disjoint ones. Extensive experiments
show that our method significantly outperforms best-reported results on two
standard Pose Tracking datasets by 13 mAP/25 MOTA and 6 mAP/3 MOTA, respectively. Moreover, when working on detected poses in individual frames, the extra computation of the pose tracker is very minor, guaranteeing online 10 FPS tracking. Our source code is made publicly available (https://github.com/YuliangXiu/PoseFlow). | [] | [
"Pose Tracking"
] | [] | [
"COCO test-challenge",
"PoseTrack2017"
] | [
"ARM",
"MOTA",
"AR"
] | Pose Flow: Efficient Online Pose Tracking |
Domain adaptation is critical for success in new, unseen environments.
Adversarial adaptation models applied in feature spaces discover domain
invariant representations, but are difficult to visualize and sometimes fail to
capture pixel-level and low-level domain shifts. Recent work has shown that
generative adversarial networks combined with cycle-consistency constraints are
surprisingly effective at mapping images between domains, even without the use
of aligned image pairs. We propose a novel discriminatively-trained
Cycle-Consistent Adversarial Domain Adaptation model. CyCADA adapts
representations at both the pixel-level and feature-level, enforces
cycle-consistency while leveraging a task loss, and does not require aligned
pairs. Our model can be applied in a variety of visual recognition and
prediction settings. We show new state-of-the-art results across multiple
adaptation tasks, including digit classification and semantic segmentation of
road scenes, demonstrating transfer from synthetic to real-world domains. | [] | [
"Domain Adaptation",
"Image-to-Image Translation",
"Semantic Segmentation",
"Synthetic-to-Real Translation",
"Unsupervised Image-To-Image Translation"
] | [] | [
"GTAV-to-Cityscapes Labels",
"SVNH-to-MNIST",
"SYNTHIA Fall-to-Winter",
"SVHN-to-MNIST"
] | [
"Per-pixel Accuracy",
"fwIOU",
"mIoU",
"Classification Accuracy",
"Accuracy"
] | CyCADA: Cycle-Consistent Adversarial Domain Adaptation |
We present a new method for synthesizing high-resolution photo-realistic
images from semantic label maps using conditional generative adversarial
networks (conditional GANs). Conditional GANs have enabled a variety of
applications, but the results are often limited to low-resolution and still far
from realistic. In this work, we generate 2048x1024 visually appealing results
with a novel adversarial loss, as well as new multi-scale generator and
discriminator architectures. Furthermore, we extend our framework to
interactive visual manipulation with two additional features. First, we
incorporate object instance segmentation information, which enables object
manipulations such as removing/adding objects and changing the object category.
Second, we propose a method to generate diverse results given the same input,
allowing users to edit the object appearance interactively. Human opinion
studies demonstrate that our method significantly outperforms existing methods,
advancing both the quality and the resolution of deep image synthesis and
editing. | [] | [
"Conditional Image Generation",
"Fundus to Angiography Generation",
"Image Generation",
"Image-to-Image Translation",
"Instance Segmentation",
"Semantic Segmentation"
] | [] | [
"Fundus Fluorescein Angiogram Photographs & Colour Fundus Images of Diabetic Patients",
"ADE20K Labels-to-Photos",
"COCO-Stuff Labels-to-Photos",
"ADE20K-Outdoor Labels-to-Photos",
"Cityscapes Labels-to-Photo"
] | [
"FID",
"Per-pixel Accuracy",
"Kernel Inception Distance",
"mIoU",
"Accuracy"
] | High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs |
We propose MoNoise: a normalization model focused on generalizability and efficiency; it aims to be easily reusable and adaptable. Normalization is the task of translating texts from a non-canonical domain to a more canonical
domain, in our case: from social media data to standard language. Our proposed
model is based on a modular candidate generation in which each module is
responsible for a different type of normalization action. The most important
generation modules are a spelling correction system and a word embeddings
module. Depending on the definition of the normalization task, a static lookup
list can be crucial for performance. We train a random forest classifier to
rank the candidates, which generalizes well to all different types of
normalization actions. Most features for the ranking originate from the
generation modules; besides these features, N-gram features prove to be an
important source of information. We show that MoNoise beats the
state-of-the-art on different normalization benchmarks for English and Dutch,
which all define the task of normalization slightly differently. | [] | [
"Lexical Normalization",
"Spelling Correction",
"Word Embeddings"
] | [] | [
"LexNorm"
] | [
"Accuracy"
] | MoNoise: Modeling Noise Using a Modular Normalization System |
We propose a novel deep learning model for joint document-level entity
disambiguation, which leverages learned neural representations. Key components
are entity embeddings, a neural attention mechanism over local context windows,
and a differentiable joint inference stage for disambiguation. Our approach
thereby combines benefits of deep learning with more traditional approaches
such as graphical models and probabilistic mention-entity maps. Extensive
experiments show that we are able to obtain competitive or state-of-the-art
accuracy at moderate computational costs. | [] | [
"Entity Disambiguation"
] | [] | [
"AQUAINT",
"WNED-WIKI",
"MSNBC",
"WNED-CWEB",
"ACE2004",
"AIDA-CoNLL"
] | [
"Micro-F1",
"In-KB Accuracy"
] | Deep Joint Entity Disambiguation with Local Neural Attention |
Semantic segmentation of 3D point clouds is a challenging problem with
numerous real-world applications. While deep learning has revolutionized the
field of image semantic segmentation, its impact on point cloud data has been
limited so far. Recent attempts, based on 3D deep learning approaches
(3D-CNNs), have achieved below-expected results. Such methods require
voxelizations of the underlying point cloud data, leading to decreased spatial
resolution and increased memory consumption. Additionally, 3D-CNNs greatly
suffer from the limited availability of annotated datasets.
In this paper, we propose an alternative framework that avoids the
limitations of 3D-CNNs. Instead of directly solving the problem in 3D, we first
project the point cloud onto a set of synthetic 2D-images. These images are
then used as input to a 2D-CNN, designed for semantic segmentation. Finally,
the obtained prediction scores are re-projected to the point cloud to obtain
the segmentation results. We further investigate the impact of multiple
modalities, such as color, depth and surface normals, in a multi-stream network
architecture. Experiments are performed on the recent Semantic3D dataset. Our
approach sets a new state-of-the-art by achieving a relative gain of 7.9 %,
compared to the previous best approach. | [] | [
"Semantic Segmentation"
] | [] | [
"Semantic3D"
] | [
"mIoU"
] | Deep Projective 3D Semantic Segmentation |
Mechanical devices such as engines, vehicles, aircraft, etc., are typically
instrumented with numerous sensors to capture the behavior and health of the
machine. However, there are often external factors or variables which are not
captured by sensors leading to time-series which are inherently unpredictable.
For instance, manual controls and/or unmonitored environmental conditions or
load may lead to inherently unpredictable time-series. Detecting anomalies in
such scenarios becomes challenging using standard approaches based on
mathematical models that rely on stationarity, or prediction models that
utilize prediction errors to detect anomalies. We propose a Long Short Term
Memory Networks based Encoder-Decoder scheme for Anomaly Detection (EncDec-AD)
that learns to reconstruct 'normal' time-series behavior, and thereafter uses
reconstruction error to detect anomalies. We experiment with three publicly
available quasi predictable time-series datasets: power demand, space shuttle,
and ECG, and two real-world engine datasets with both predictive and
unpredictable behavior. We show that EncDec-AD is robust and can detect
anomalies from predictable, unpredictable, periodic, aperiodic, and
quasi-periodic time-series. Further, we show that EncDec-AD is able to detect
anomalies from short time-series (length as small as 30) as well as long
time-series (length as large as 500). | [] | [
"Anomaly Detection",
"Outlier Detection",
"Time Series",
"Time Series Classification"
] | [] | [
"ECG5000",
"Physionet 2017 Atrial Fibrillation"
] | [
"AUC",
"Accuracy"
] | LSTM-based Encoder-Decoder for Multi-sensor Anomaly Detection |
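A simplified sketch of the reconstruction-error scoring idea described in the abstract above, assuming a plain LSTM encoder-decoder (the paper reconstructs windows in reverse order and fits an error distribution for the final anomaly score; the zero-input decoder and hidden size here are assumptions).

```python
import torch
import torch.nn as nn

class EncDecAD(nn.Module):
    """LSTM encoder-decoder that learns to reconstruct 'normal' windows."""
    def __init__(self, n_features=1, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                        # x: (batch, time, features)
        _, state = self.encoder(x)               # summarize the window
        dec_out, _ = self.decoder(torch.zeros_like(x), state)
        return self.out(dec_out)

def anomaly_score(model, x):
    """Mean absolute reconstruction error per window; high values flag anomalies."""
    with torch.no_grad():
        return (model(x) - x).abs().mean(dim=(1, 2))
```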
This article offers an empirical exploration on the use of character-level
convolutional networks (ConvNets) for text classification. We constructed
several large-scale datasets to show that character-level convolutional
networks could achieve state-of-the-art or competitive results. Comparisons are
offered against traditional models such as bag of words, n-grams and their
TFIDF variants, and deep learning models such as word-based ConvNets and
recurrent neural networks. | [] | [
"Sentiment Analysis",
"Text Classification"
] | [] | [
"Yelp Fine-grained classification",
"Yelp Binary classification",
"AG News",
"DBpedia"
] | [
"Error"
] | Character-level Convolutional Networks for Text Classification |
In this work, we present a novel neural network based architecture for
inducing compositional crosslingual word representations. Unlike previously
proposed methods, our method fulfills the following three criteria: it
constrains the word-level representations to be compositional, it is capable of
leveraging both bilingual and monolingual data, and it is scalable to large
vocabularies and large quantities of data. The key component of our approach is
what we refer to as a monolingual inclusion criterion, that exploits the
observation that phrases are more closely semantically related to their
sub-phrases than to other randomly sampled phrases. We evaluate our method on a
well-established crosslingual document classification task and achieve results
that are either comparable, or greatly improve upon previous state-of-the-art
methods. Concretely, our method reaches a level of 92.7% and 84.4% accuracy for
the English to German and German to English sub-tasks respectively. The former
advances the state of the art by 0.9 percentage points of accuracy; the latter is an absolute improvement upon the previous state of the art by 7.7 percentage points of accuracy, corresponding to a 33.0% relative error reduction. | [] | [
"Document Classification"
] | [] | [
"Reuters RCV1/RCV2 English-to-German",
"Reuters RCV1/RCV2 German-to-English"
] | [
"Accuracy"
] | Leveraging Monolingual Data for Crosslingual Compositional Word Representations |
Online Multi-Object Tracking (MOT) has wide applications in time-critical video analysis scenarios, such as robot navigation and autonomous driving. In tracking-by-detection, a major challenge of online MOT is how to robustly associate noisy object detections on a new video frame with previously tracked objects. In this work, we formulate the online MOT problem as decision making in Markov Decision Processes (MDPs), where the lifetime of an object is modeled with an MDP. Learning a similarity function for data association is equivalent to learning a policy for the MDP, and the policy learning is approached in a reinforcement learning fashion, which benefits from the advantages of both offline learning and online learning for data association. Moreover, our framework can naturally handle the birth/death and appearance/disappearance of targets by treating them as state transitions in the MDP while leveraging existing online single object tracking methods. We conduct experiments on the MOT Benchmark to verify the effectiveness of our method. | [] | [
"Autonomous Driving",
"Decision Making",
"Multi-Object Tracking",
"Object Tracking",
"Online Multi-Object Tracking",
"Robot Navigation"
] | [] | [
"KITTI Tracking test"
] | [
"MOTA"
] | Learning to Track: Online Multi-Object Tracking by Decision Making |
Distributed representations of meaning are a natural way to encode covariance
relationships between words and phrases in NLP. By overcoming data sparsity
problems, as well as providing information about semantic relatedness which is
not available in discrete representations, distributed representations have
proven useful in many NLP tasks. Recent work has shown how compositional
semantic representations can successfully be applied to a number of monolingual
applications such as sentiment analysis. At the same time, there has been some
initial success in work on learning shared word-level representations across
languages. We combine these two approaches by proposing a method for learning
distributed representations in a multilingual setup. Our model learns to assign
similar embeddings to aligned sentences and dissimilar ones to sentences which are not aligned, while not requiring word alignments. We show that our
representations are semantically informative and apply them to a cross-lingual
document classification task where we outperform the previous state of the art.
Further, by employing parallel corpora of multiple language pairs we find that
our model learns representations that capture semantic relationships across
languages for which no parallel data was used. | [] | [
"Cross-Lingual Document Classification",
"Document Classification",
"Sentiment Analysis",
"Word Alignment"
] | [] | [
"Reuters RCV1/RCV2 English-to-German",
"Reuters RCV1/RCV2 German-to-English"
] | [
"Accuracy"
] | Multilingual Distributed Representations without Word Alignment |
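A small sketch of the alignment objective described in the abstract above, assuming a margin-based noise-contrastive hinge loss over composed sentence embeddings (the composition function, margin, and noise sampling are assumptions, not taken from the paper).

```python
import numpy as np

def bilingual_hinge_loss(src_sent, tgt_sent, tgt_noise, margin=1.0):
    """Pull an aligned sentence pair together and push a sampled non-aligned
    sentence away, using squared Euclidean distance between embeddings."""
    d_aligned = np.sum((src_sent - tgt_sent) ** 2)
    d_noise = np.sum((src_sent - tgt_noise) ** 2)
    return max(0.0, margin + d_aligned - d_noise)
```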
Traditional methods of computer vision and machine learning cannot match
human performance on tasks such as the recognition of handwritten digits or
traffic signs. Our biologically plausible deep artificial neural network
architectures can. Small (often minimal) receptive fields of convolutional
winner-take-all neurons yield large network depth, resulting in roughly as many
sparsely connected neural layers as found in mammals between retina and visual
cortex. Only winner neurons are trained. Several deep neural columns become
experts on inputs preprocessed in different ways; their predictions are
averaged. Graphics cards allow for fast training. On the very competitive MNIST
handwriting benchmark, our method is the first to achieve near-human
performance. On a traffic sign recognition benchmark it outperforms humans by a
factor of two. We also improve the state-of-the-art on a plethora of common
image classification benchmarks. | [] | [
"Image Classification",
"Traffic Sign Recognition"
] | [] | [
"GTSRB",
"MNIST",
"CIFAR-10"
] | [
"Percentage error",
"Percentage correct",
"Accuracy"
] | Multi-column Deep Neural Networks for Image Classification |
Deep learning methods have started to dominate the research progress of
video-based person re-identification (re-id). However, existing methods mostly
consider supervised learning, which requires exhaustive manual efforts for
labelling cross-view pairwise data. Therefore, they severely lack scalability
and practicality in real-world video surveillance applications. In this work,
to address the video person re-id task, we formulate a novel Deep Association
Learning (DAL) scheme, the first end-to-end deep learning method using none of
the identity labels in model initialisation and training. DAL learns a deep
re-id matching model by jointly optimising two margin-based association losses
in an end-to-end manner, which effectively constrains the association of each
frame to the best-matched intra-camera representation and cross-camera
representation. Existing standard CNNs can be readily employed within our DAL
scheme. Experimental results demonstrate that our proposed DAL significantly
outperforms current state-of-the-art unsupervised video person re-id methods on
three benchmarks: PRID 2011, iLIDS-VID and MARS. | [] | [
"Person Re-Identification",
"Unsupervised Person Re-Identification",
"Unsupervised Representation Learning",
"Video-Based Person Re-Identification"
] | [] | [
"PRID2011"
] | [
"Rank-1",
"Rank-20",
"Rank-5"
] | Deep Association Learning for Unsupervised Video Person Re-identification |
Background: Finding biomedical named entities is one of the most essential tasks in biomedical text mining. Recently, deep learning-based approaches have been applied to biomedical named entity recognition (BioNER) and showed promising results. However, as deep learning approaches need an abundant amount of training data, a lack of data can hinder performance. BioNER datasets are scarce resources and each dataset covers only a small subset of entity types. Furthermore, many bio entities are polysemous, which is one of the major obstacles in named entity recognition. Results: To address the lack of data and the entity type misclassification problem, we propose CollaboNet which utilizes a combination of multiple NER models. In CollaboNet, models trained on a different dataset are connected to each other so that a target model obtains information from other collaborator models to reduce false positives. Every model is an expert on their target entity type and takes turns serving as a target and a collaborator model during training time. The experimental results show that CollaboNet can be used to greatly reduce the number of false positives and misclassified entities including polysemous words. CollaboNet achieved state-of-the-art performance in terms of precision, recall and F1 score. Conclusions: We demonstrated the benefits of combining multiple models for BioNER. Our model has successfully reduced the number of misclassified entities and improved the performance by leveraging multiple datasets annotated for different entity types. Given the state-of-the-art performance of our model, we believe that CollaboNet can improve the accuracy of downstream biomedical text mining applications such as bio-entity relation extraction. | [] | [
"Named Entity Recognition",
"Relation Extraction"
] | [] | [
"BC5CDR"
] | [
"F1"
] | CollaboNet: collaboration of deep neural networks for biomedical named entity recognition |
Directed graphs have been widely used in Community Question Answering
services (CQAs) to model asymmetric relationships among different types of
nodes in CQA graphs, e.g., question, answer, user. Asymmetric transitivity is
an essential property of directed graphs, since it can play an important role
in downstream graph inference and analysis. Question difficulty and user
expertise follow the characteristic of asymmetric transitivity. Maintaining
such properties, while reducing the graph to a lower dimensional vector
embedding space, has been the focus of much recent research. In this paper, we
tackle the challenge of directed graph embedding with asymmetric transitivity
preservation and then leverage the proposed embedding method to solve a
fundamental task in CQAs: how to appropriately route and assign newly posted
questions to users with the suitable expertise and interest in CQAs. The
technique incorporates graph hierarchy and reachability information naturally
by relying on a non-linear transformation that operates on the core
reachability and implicit hierarchy within such graphs. Subsequently, the
methodology leverages a factorization-based approach to generate two embedding
vectors for each node within the graph, to capture the asymmetric transitivity.
Extensive experiments show that our framework consistently and significantly
outperforms the state-of-the-art baselines on two diverse real-world tasks:
link prediction, and question difficulty estimation and expert finding in
online forums like Stack Exchange. Particularly, our framework can support
inductive embedding learning for newly posted questions (unseen nodes during
training), and therefore can properly route and assign these kinds of questions
to experts in CQAs. | [] | [
"Community Question Answering",
"Graph Embedding",
"Link Prediction",
"Question Answering"
] | [] | [
"Gnutella",
"Cit-HepPH",
"Wiki-Vote"
] | [
"AUC"
] | ATP: Directed Graph Embedding with Asymmetric Transitivity Preservation |
Machine translation is highly sensitive to the size and quality of the training data, which has led to an increasing interest in collecting and filtering large parallel corpora. In this paper, we propose a new method for this task based on multilingual sentence embeddings. In contrast to previous approaches, which rely on nearest neighbor retrieval with a hard threshold over cosine similarity, our proposed method accounts for the scale inconsistencies of this measure, considering the margin between a given sentence pair and its closest candidates instead. Our experiments show large improvements over existing methods. We outperform the best published results on the BUCC mining task and the UN reconstruction task by more than 10 F1 and 30 precision points, respectively. Filtering the English-German ParaCrawl corpus with our approach, we obtain 31.2 BLEU points on newstest2014, an improvement of more than one point over the best official filtered version. | [] | [
"Cross-Lingual Bitext Mining",
"Machine Translation",
"Parallel Corpus Mining",
"Sentence Embeddings"
] | [] | [
"BUCC German-to-English",
"BUCC French-to-English"
] | [
"F1 score"
] | Margin-based Parallel Corpus Mining with Multilingual Sentence Embeddings |
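A brute-force sketch of the margin-based scoring described in the abstract above, using the ratio between a pair's cosine similarity and the average similarity of each sentence to its k nearest neighbours (k and the dense similarity matrix are assumptions; a production system would use an approximate nearest-neighbour index).

```python
import numpy as np

def margin_scores(x, y, k=4):
    """Ratio-margin scores between L2-normalized sentence embeddings.

    x: (n, d) source embeddings, y: (m, d) target embeddings.
    score[i, j] = cos(x_i, y_j) / mean of the k-NN similarities of x_i and y_j.
    """
    sim = x @ y.T                                        # cosine similarities
    knn_x = np.sort(sim, axis=1)[:, -k:].mean(axis=1)    # avg sim of x_i to its k NNs in y
    knn_y = np.sort(sim, axis=0)[-k:, :].mean(axis=0)    # avg sim of y_j to its k NNs in x
    return sim / ((knn_x[:, None] + knn_y[None, :]) / 2)
```

Candidate pairs can then be extracted by thresholding these scores or by keeping mutual best matches.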
Conditional text-to-image generation is an active area of research, with many possible applications. Existing research has primarily focused on generating a single image from available conditioning information in one step. One practical extension beyond one-step generation is a system that generates an image iteratively, conditioned on ongoing linguistic input or feedback. This is significantly more challenging than one-step generation tasks, as such a system must understand the contents of its generated images with respect to the feedback history, the current feedback, as well as the interactions among concepts present in the feedback history. In this work, we present a recurrent image generation model which takes into account both the generated output up to the current step as well as all past instructions for generation. We show that our model is able to generate the background, add new objects, and apply simple transformations to existing objects. We believe our approach is an important step toward interactive generation. Code and data is available at: https://www.microsoft.com/en-us/research/project/generative-neural-visual-artist-geneva/ . | [] | [
"Image Generation",
"Text-to-Image Generation"
] | [] | [
"GeNeVA (CoDraw)",
"GeNeVA (i-CLEVR)"
] | [
"F1-score",
"rsim"
] | Tell, Draw, and Repeat: Generating and Modifying Images Based on Continual Linguistic Instruction |
Entity Linking (EL) systems aim to automatically map mentions of an entity in text to the corresponding entity in a Knowledge Graph (KG). The degree of connectivity of an entity in the KG directly affects an EL system's ability to correctly link mentions in text to the entity in the KG. This causes many EL systems to perform well for entities well connected to other entities in the KG, bringing into focus the role of KG density in EL. In this paper, we propose Entity Linking using Densified Knowledge Graphs (ELDEN). ELDEN is an EL system which first densifies the KG with co-occurrence statistics from a large text corpus, and then uses the densified KG to train entity embeddings. Entity similarity measured using these trained entity embeddings results in improved EL. ELDEN outperforms state-of-the-art EL systems on benchmark datasets. Due to such densification, ELDEN performs well for sparsely connected entities in the KG too. ELDEN's approach is simple, yet effective. We have made ELDEN's code and data publicly available. | [] | [
"Entity Disambiguation",
"Entity Embeddings",
"Entity Linking",
"Knowledge Graphs"
] | [] | [
"AIDA-CoNLL"
] | [
"In-KB Accuracy"
] | ELDEN: Improved Entity Linking Using Densified Knowledge Graphs |
The high-quality node embeddings learned from the Graph Neural Networks (GNNs) have been applied to a wide range of node-based applications and some of them have achieved state-of-the-art (SOTA) performance. However, when applying node embeddings learned from GNNs to generate graph embeddings, the scalar node representation may not suffice to preserve the node/graph properties efficiently, resulting in sub-optimal graph embeddings.
Inspired by the Capsule Neural Network (CapsNet), we propose the Capsule Graph Neural Network (CapsGNN), which adopts the concept of capsules to address the weakness in existing GNN-based graph embeddings algorithms. By extracting node features in the form of capsules, routing mechanism can be utilized to capture important information at the graph level. As a result, our model generates multiple embeddings for each graph to capture graph properties from different aspects. The attention module incorporated in CapsGNN is used to tackle graphs with various sizes which also enables the model to focus on critical parts of the graphs.
Our extensive evaluations with 10 graph-structured datasets demonstrate that CapsGNN has a powerful mechanism that captures macroscopic properties of the whole graph in a data-driven manner. It outperforms other SOTA techniques on several graph classification tasks by virtue of this new instrument. | [] | [
"Graph Classification"
] | [] | [
"COLLAB",
"RE-M12K",
"IMDb-B",
"ENZYMES",
"PROTEINS",
"D&D",
"NCI1",
"MUTAG",
"IMDb-M",
"RE-M5K"
] | [
"Accuracy"
] | Capsule Graph Neural Network |
Accurate depth estimation from images is a fundamental task in many
applications including scene understanding and reconstruction. Existing
solutions for depth estimation often produce blurry approximations of low
resolution. This paper presents a convolutional neural network for computing a
high-resolution depth map given a single RGB image with the help of transfer
learning. Following a standard encoder-decoder architecture, we leverage
features extracted using high performing pre-trained networks when initializing
our encoder along with augmentation and training strategies that lead to more
accurate results. We show how, even for a very simple decoder, our method is
able to achieve detailed high-resolution depth maps. Our network, with fewer
parameters and training iterations, outperforms state-of-the-art on two
datasets and also produces qualitatively better results that capture object
boundaries more faithfully. Code and corresponding pre-trained weights are made
publicly available. | [] | [
"Depth Estimation",
"Monocular Depth Estimation",
"Transfer Learning"
] | [] | [
"NYU-Depth V2",
"KITTI Eigen split"
] | [
"RMSE",
"absolute relative error"
] | High Quality Monocular Depth Estimation via Transfer Learning |
The rapid progress in synthetic image generation and manipulation has now come to a point where it raises significant concerns about its implications for society. At best, this leads to a loss of trust in digital content, but could potentially cause further harm by spreading false information or fake news. This paper examines the realism of state-of-the-art image manipulations, and how difficult it is to detect them, either automatically or by humans. To standardize the evaluation of detection methods, we propose an automated benchmark for facial manipulation detection. In particular, the benchmark is based on DeepFakes, Face2Face, FaceSwap and NeuralTextures as prominent representatives for facial manipulations at random compression level and size. The benchmark is publicly available and contains a hidden test set as well as a database of over 1.8 million manipulated images. This dataset is over an order of magnitude larger than comparable, publicly available, forgery datasets. Based on this data, we performed a thorough analysis of data-driven forgery detectors. We show that the use of additional domain-specific knowledge improves forgery detection to unprecedented accuracy, even in the presence of strong compression, and clearly outperforms human observers. | [] | [
"DeepFake Detection",
"Face Swapping",
"Fake Image Detection",
"Image Generation"
] | [] | [
"FaceForensics"
] | [
"Total Accuracy",
"FSF",
"NT",
"FS",
"DF",
"Real"
] | FaceForensics++: Learning to Detect Manipulated Facial Images |
We propose an effective deep learning approach to aesthetics quality
assessment that relies on a new type of pre-trained features, and apply it to
the AVA data set, the currently largest aesthetics database. While previous
approaches miss some of the information in the original images, due to taking
small crops, down-scaling or warping the originals during training, we propose
the first method that efficiently supports full resolution images as an input,
and can be trained on variable input sizes. This allows us to significantly
improve upon the state of the art, increasing the Spearman rank-order
correlation coefficient (SRCC) of ground-truth mean opinion scores (MOS) from
the existing best reported of 0.612 to 0.756. To achieve this performance, we
extract multi-level spatially pooled (MLSP) features from all convolutional
blocks of a pre-trained InceptionResNet-v2 network, and train a custom shallow
Convolutional Neural Network (CNN) architecture on these new features. | [] | [
"Aesthetics Quality Assessment",
"Image Quality Assessment"
] | [] | [
"AVA"
] | [
"Accuracy"
] | Effective Aesthetics Prediction with Multi-level Spatially Pooled Features |
Domain adaptation for semantic image segmentation is very necessary since
manually labeling large datasets with pixel-level labels is expensive and time
consuming. Existing domain adaptation techniques either work on limited datasets or yield performance that falls short of supervised learning.
In this paper, we propose a novel bidirectional learning framework for domain
adaptation of segmentation. Using bidirectional learning, the image translation model and the segmentation adaptation model can be learned alternately and promote each other. Furthermore, we propose a
self-supervised learning algorithm to learn a better segmentation adaptation
model and in return improve the image translation model. Experiments show that
our method is superior to state-of-the-art methods in domain adaptation of segmentation by a large margin. The source code is available at
https://github.com/liyunsheng13/BDL. | [] | [
"Domain Adaptation",
"Image-to-Image Translation",
"Self-Supervised Learning",
"Semantic Segmentation",
"Synthetic-to-Real Translation"
] | [] | [
"GTAV-to-Cityscapes Labels",
"SYNTHIA-to-Cityscapes"
] | [
"mIoU (13 classes)",
"mIoU"
] | Bidirectional Learning for Domain Adaptation of Semantic Segmentation |
In this paper, we propose a novel edge-labeling graph neural network (EGNN), which adapts a deep neural network on the edge-labeling graph, for few-shot learning. The previous graph neural network (GNN) approaches in few-shot learning have been based on the node-labeling framework, which implicitly models the intra-cluster similarity and the inter-cluster dissimilarity. In contrast, the proposed EGNN learns to predict the edge-labels rather than the node-labels on the graph that enables the evolution of an explicit clustering by iteratively updating the edge-labels with direct exploitation of both intra-cluster similarity and the inter-cluster dissimilarity. It is also well suited for performing on various numbers of classes without retraining, and can be easily extended to perform a transductive inference. The parameters of the EGNN are learned by episodic training with an edge-labeling loss to obtain a well-generalizable model for unseen low-data problem. On both of the supervised and semi-supervised few-shot image classification tasks with two benchmark datasets, the proposed EGNN significantly improves the performances over the existing GNNs. | [] | [
"Few-Shot Image Classification",
"Few-Shot Learning",
"Image Classification"
] | [] | [
"Mini-Imagenet 5-way (5-shot)",
"Tiered ImageNet 5-way (5-shot)"
] | [
"Accuracy"
] | Edge-labeling Graph Neural Network for Few-shot Learning |
Hyperbolic embeddings have recently gained attention in machine learning due to their ability to represent hierarchical data more accurately and succinctly than their Euclidean analogues. However, multi-relational knowledge graphs often exhibit multiple simultaneous hierarchies, which current hyperbolic models do not capture. To address this, we propose a model that embeds multi-relational graph data in the Poincaré ball model of hyperbolic space. Our Multi-Relational Poincaré model (MuRP) learns relation-specific parameters to transform entity embeddings by Möbius matrix-vector multiplication and Möbius addition. Experiments on the hierarchical WN18RR knowledge graph show that our Poincaré embeddings outperform their Euclidean counterpart and existing embedding methods on the link prediction task, particularly at lower dimensionality. | [] | [
"Entity Embeddings",
"Knowledge Graphs",
"Link Prediction"
] | [] | [
"WN18RR",
"FB15k-237"
] | [
"Hits@10",
"MRR",
"Hits@3",
"Hits@1"
] | Multi-relational Poincaré Graph Embeddings |
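For reference, a sketch of the Möbius addition used to compose points in the Poincaré ball (a standard hyperbolic-geometry formula rather than code released with the paper; the curvature value c is an assumption).

```python
import numpy as np

def mobius_add(x, y, c=1.0):
    """Mobius addition of two points x, y inside the Poincare ball of curvature -c."""
    xy = np.dot(x, y)
    x2 = np.dot(x, x)
    y2 = np.dot(y, y)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + (c ** 2) * x2 * y2
    return num / den
```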
In this paper we are interested in recognizing human actions from sequences of 3D skeleton data. For this purpose we combine a 3D Convolutional Neural Network with body representations based on Euclidean Distance Matrices (EDMs), which have been recently shown to be very effective to capture the geometric structure of the human pose. One inherent limitation of the EDMs, however, is that they are defined up to a permutation of the skeleton joints, i.e., randomly shuffling the ordering of the joints yields many different representations. In order to address this issue we introduce a novel architecture that simultaneously, and in an end-to-end manner, learns an optimal transformation of the joints, while optimizing the rest of the parameters of the convolutional network. The proposed approach achieves state-of-the-art results on 3 benchmarks, including the recent NTU RGB-D dataset, for which we improve on previous LSTM-based methods by more than 10 percentage points, also surpassing other CNN-based methods while using almost 1000 times fewer parameters. | [] | [
"Action Recognition",
"Skeleton Based Action Recognition",
"Temporal Action Localization"
] | [] | [
"NTU RGB+D"
] | [
"Accuracy (CS)",
"Accuracy (CV)"
] | 3D CNNs on Distance Matrices for Human Action Recognition |
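A minimal sketch of the Euclidean Distance Matrix representation mentioned in the abstract above; as the abstract notes, this matrix is only defined up to a permutation of the joints, which is the ambiguity the paper's learned joint transformation addresses.

```python
import numpy as np

def euclidean_distance_matrix(joints):
    """joints: (J, 3) array of 3D joint coordinates -> (J, J) pairwise distances."""
    diff = joints[:, None, :] - joints[None, :, :]
    return np.linalg.norm(diff, axis=-1)
```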
Few-shot learning is a challenging problem that has attracted more and more attention recently since abundant training samples are difficult to obtain in practical applications. Meta-learning has been proposed to address this issue, which focuses on quickly adapting a predictor as a base-learner to new tasks, given limited labeled samples. However, a critical challenge for meta-learning is the representation deficiency since it is hard to discover common information from a small number of training samples or even one, as is the representation of key features from such little information. As a result, a meta-learner cannot be trained well in a high-dimensional parameter space to generalize to new tasks. Existing methods mostly resort to extracting less expressive features so as to avoid the representation deficiency. Aiming at learning better representations, we propose a meta-learning approach with complemented representations network (MCRNet) for few-shot image classification. In particular, we embed a latent space, where latent codes are reconstructed with extra representation information to complement the representation deficiency. Furthermore, the latent space is established with variational inference, collaborating well with different base-learners, and can be extended to other models. Finally, our end-to-end framework achieves the state-of-the-art performance in image classification on three standard few-shot learning datasets. | [] | [
"Few-Shot Image Classification",
"Few-Shot Learning",
"Image Classification",
"Meta-Learning",
"Variational Inference"
] | [] | [
"FC100 5-way (1-shot)",
"Mini-Imagenet 5-way (1-shot)",
"Mini-Imagenet 5-way (5-shot)",
"CIFAR-FS 5-way (1-shot)",
"FC100 5-way (5-shot)",
"CIFAR-FS 5-way (5-shot)"
] | [
"Accuracy"
] | Complementing Representation Deficiency in Few-shot Image Classification: A Meta-Learning Approach |
We present PARADE, an end-to-end Transformer-based model that considers document-level context for document reranking. PARADE leverages passage-level relevance representations to predict a document relevance score, overcoming the limitations of previous approaches that perform inference on passages independently. Experiments on two ad-hoc retrieval benchmarks demonstrate PARADE's effectiveness over such methods. We conduct extensive analyses on PARADE's efficiency, highlighting several strategies for improving it. When combined with knowledge distillation, a PARADE model with 72% fewer parameters achieves effectiveness competitive with previous approaches using BERT-Base. Our code is available at https://github.com/canjiali/PARADE. | [] | [
"Ad-Hoc Information Retrieval",
"Knowledge Distillation"
] | [] | [
"TREC Robust04"
] | [
"P@20",
"nDCG@20"
] | PARADE: Passage Representation Aggregation for Document Reranking |
The task of retrieving video content relevant to natural language queries plays a critical role in effectively handling internet-scale datasets. Most of the existing methods for this caption-to-video retrieval problem do not fully exploit cross-modal cues present in video. Furthermore, they aggregate per-frame visual features with limited or no temporal information. In this paper, we present a multi-modal transformer to jointly encode the different modalities in video, which allows each of them to attend to the others. The transformer architecture is also leveraged to encode and model the temporal information. On the natural language side, we investigate the best practices to jointly optimize the language embedding together with the multi-modal transformer. This novel framework allows us to establish state-of-the-art results for video retrieval on three datasets. More details are available at http://thoth.inrialpes.fr/research/MMT. | [] | [
"Video Retrieval"
] | [] | [
"MSR-VTT-1kA",
"LSMDC",
"ActivityNet"
] | [
"text-to-video Median Rank",
"text-to-video R@5",
"text-to-video R@50",
"text-to-video R@1",
"text-to-video Mean Rank",
"text-to-video R@10"
] | Multi-modal Transformer for Video Retrieval |
Efficiently modeling dynamic motion information in videos is crucial for action recognition task. Most state-of-the-art methods heavily rely on dense optical flow as motion representation. Although combining optical flow with RGB frames as input can achieve excellent recognition performance, the optical flow extraction is very time-consuming. This undoubtably will count against real-time action recognition. In this paper, we shed light on fast action recognition by lifting the reliance on optical flow. Our motivation lies in the observation that small displacements of motion boundaries are the most critical ingredients for distinguishing actions, so we design a novel motion cue called Persistence of Appearance (PA). In contrast to optical flow, our PA focuses more on distilling the motion information at boundaries. Also, it is more efficient by only accumulating pixel-wise differences in feature space, instead of using exhaustive patch-wise search of all the possible motion vectors. Our PA is over 1000x faster (8196fps vs. 8fps) than conventional optical flow in terms of motion modeling speed. To further aggregate the short-term dynamics in PA to long-term dynamics, we also devise a global temporal fusion strategy called Various-timescale Aggregation Pooling (VAP) that can adaptively model long-range temporal relationships across various timescales. We finally incorporate the proposed PA and VAP to form a unified framework called Persistent Appearance Network (PAN) with strong temporal modeling ability. Extensive experiments on six challenging action recognition benchmarks verify that our PAN outperforms recent state-of-the-art methods at low FLOPs. Codes and models are available at: https://github.com/zhang-can/PAN-PyTorch. | [] | [
"Action Recognition",
"Optical Flow Estimation",
"Video Understanding"
] | [] | [
"Jester",
"Something-Something V2",
"Something-Something V1"
] | [
"Top 1 Accuracy",
"Val",
"Top-5 Accuracy",
"Top-1 Accuracy",
"Top 5 Accuracy"
] | PAN: Towards Fast Action Recognition via Learning Persistence of Appearance |
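A rough sketch of the Persistence of Appearance cue described in the abstract above, computed here as the per-pixel magnitude of feature differences between consecutive frames (the paper applies this to learned low-level feature maps with further details omitted here; shapes are assumptions).

```python
import torch

def persistence_of_appearance(feats):
    """feats: (T, C, H, W) per-frame feature maps.
    Returns a (T-1, 1, H, W) motion cue: magnitude of frame-to-frame differences."""
    diff = feats[1:] - feats[:-1]
    return diff.pow(2).sum(dim=1, keepdim=True).sqrt()
```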
In today's heavily overparameterized models, the value of the training loss provides few guarantees on model generalization ability. Indeed, optimizing only the training loss value, as is commonly done, can easily lead to suboptimal model quality. Motivated by the connection between geometry of the loss landscape and generalization -- including a generalization bound that we prove here -- we introduce a novel, effective procedure for instead simultaneously minimizing loss value and loss sharpness. In particular, our procedure, Sharpness-Aware Minimization (SAM), seeks parameters that lie in neighborhoods having uniformly low loss; this formulation results in a min-max optimization problem on which gradient descent can be performed efficiently. We present empirical results showing that SAM improves model generalization across a variety of benchmark datasets (e.g., CIFAR-{10, 100}, ImageNet, finetuning tasks) and models, yielding novel state-of-the-art performance for several. Additionally, we find that SAM natively provides robustness to label noise on par with that provided by state-of-the-art procedures that specifically target learning with noisy labels. | [] | [
"Fine-Grained Image Classification",
"Image Classification",
"Learning with noisy labels"
] | [] | [
"FGVC Aircraft",
"Stanford Cars",
"CIFAR-100",
"CIFAR-10",
"Oxford-IIIT Pets",
"Flowers-102",
"Food-101",
"SVHN",
"Fashion-MNIST",
"ImageNet",
"Birdsnap"
] | [
"Number of params",
"Top 1 Accuracy",
"Percentage error",
"Percentage correct",
"Top-1 Error Rate",
"Accuracy",
"Top 5 Accuracy"
] | Sharpness-Aware Minimization for Efficiently Improving Generalization |
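A simplified single-batch sketch of the two-step SAM update described in the abstract above (rho, the single parameter group, and the assumption that every parameter receives a gradient are simplifications, not the reference implementation).

```python
import torch

def sam_step(model, loss_fn, x, y, base_optimizer, rho=0.05):
    """One Sharpness-Aware Minimization update (illustrative sketch)."""
    # 1) gradient at the current weights
    loss_fn(model(x), y).backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))

    # 2) move to the approximate worst-case point within an L2 ball of radius rho
    with torch.no_grad():
        eps = [rho * g / (grad_norm + 1e-12) for g in grads]
        for p, e in zip(model.parameters(), eps):
            p.add_(e)

    # 3) gradient at the perturbed point defines the actual update
    model.zero_grad()
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.sub_(e)                 # undo the perturbation
    base_optimizer.step()             # e.g. an SGD step using the SAM gradient
    model.zero_grad()
```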
Domain adaptive person Re-Identification (ReID) is challenging owing to the domain gap and shortage of annotations on target scenarios. To handle those two challenges, this paper proposes a coupling optimization method including the Domain-Invariant Mapping (DIM) method and the Global-Local distance Optimization (GLO), respectively. Different from previous methods that transfer knowledge in two stages, the DIM achieves a more efficient one-stage knowledge transfer by mapping images in labeled and unlabeled datasets to a shared feature space. GLO is designed to train the ReID model with unsupervised setting on the target domain. Instead of relying on existing optimization strategies designed for supervised training, GLO involves more images in distance optimization, and achieves better robustness to noisy label prediction. GLO also integrates distance optimizations in both the global dataset and local training batch, thus exhibits better training efficiency. Extensive experiments on three large-scale datasets, i.e., Market-1501, DukeMTMC-reID, and MSMT17, show that our coupling optimization outperforms state-of-the-art methods by a large margin. Our method also works well in unsupervised training, and even outperforms several recent domain adaptive methods. | [] | [
"Domain Adaptive Person Re-Identification",
"Person Re-Identification",
"Transfer Learning",
"Unsupervised Person Re-Identification"
] | [] | [
"Market-1501->MSMT17",
"DukeMTMC-reID->Market-1501",
"DukeMTMC-reID->MSMT17",
"Market-1501->DukeMTMC-reID"
] | [
"Rank-1",
"mAP"
] | Domain Adaptive Person Re-Identification via Coupling Optimization |
In this paper, we present a novel two-pass approach to unify streaming and non-streaming end-to-end (E2E) speech recognition in a single model. Our model adopts the hybrid CTC/attention architecture, in which the conformer layers in the encoder are modified. We propose a dynamic chunk-based attention strategy to allow arbitrary right context length. At inference time, the CTC decoder generates n-best hypotheses in a streaming way. The inference latency could be easily controlled by only changing the chunk size. The CTC hypotheses are then rescored by the attention decoder to get the final result. This efficient rescoring process causes very little sentence-level latency. Our experiments on the open 170-hour AISHELL-1 dataset show that, the proposed method can unify the streaming and non-streaming model simply and efficiently. On the AISHELL-1 test set, our unified model achieves 5.60% relative character error rate (CER) reduction in non-streaming ASR compared to a standard non-streaming transformer. The same model achieves 5.42% CER with 640ms latency in a streaming ASR system. | [] | [
"Speech Recognition"
] | [] | [
"AISHELL-1"
] | [
"Word Error Rate (WER)"
] | Unified Streaming and Non-streaming Two-pass End-to-end Model for Speech Recognition |
Discriminative clustering has been successfully applied to a number of
weakly-supervised learning tasks. Such applications include person and action
recognition, text-to-video alignment, object co-segmentation and colocalization
in videos and images. One drawback of discriminative clustering, however, is
its limited scalability. We address this issue and propose an online
optimization algorithm based on the Block-Coordinate Frank-Wolfe algorithm. We
apply the proposed method to the problem of weakly supervised learning of
actions and actors from movies together with corresponding movie scripts. The
scaling up of the learning problem to 66 feature-length movies enables us to
significantly improve weakly supervised action recognition. | [] | [
"Action Recognition",
"Temporal Action Localization",
"Video Alignment",
"Video Retrieval",
"Weakly-Supervised Action Recognition"
] | [] | [
"LSMDC"
] | [
"text-to-video R@1",
"text-to-video R@10",
"text-to-video Median Rank",
"text-to-video R@5"
] | Learning from Video and Text via Large-Scale Discriminative Clustering |
Monocular cameras are one of the most commonly used sensors in the automotive
industry for autonomous vehicles. One major drawback of using a monocular camera is that it only makes observations in the two-dimensional image plane and cannot directly measure the distance to objects. In this paper, we aim at filling
this gap by developing a multi-object tracking algorithm that takes an image as
input and produces trajectories of detected objects in a world coordinate
system. We solve this by using a deep neural network trained to detect and
estimate the distance to objects from a single input image. The detections from
a sequence of images are fed into a state-of-the-art Poisson multi-Bernoulli mixture (PMBM) tracking filter. The combination of the learned detector and the PMBM
filter results in an algorithm that achieves 3D tracking using only mono-camera
images as input. The performance of the algorithm is evaluated both in 3D world
coordinates, and 2D image coordinates, using the publicly available KITTI
object tracking dataset. The algorithm shows the ability to accurately track
objects, correctly handle data associations, even when there is a big overlap
of the objects in the image, and is one of the top performing algorithms on the
KITTI object tracking benchmark. Furthermore, the algorithm is efficient,
running on average close to 20 frames per second. | [] | [
"3D Multi-Object Tracking",
"Autonomous Vehicles",
"Multi-Object Tracking",
"Object Tracking"
] | [] | [
"KITTI Tracking test"
] | [
"MOTA"
] | Mono-Camera 3D Multi-Object Tracking Using Deep Learning Detections and PMBM Filtering |
Temporal action proposal generation is an important yet challenging problem,
since temporal proposals with rich action content are indispensable for
analysing real-world videos with long duration and a high proportion of irrelevant content. This problem requires methods not only generating proposals with precise temporal boundaries, but also retrieving proposals to cover ground-truth action instances with high recall and high overlap using relatively fewer
proposals. To address these difficulties, we introduce an effective proposal
generation method, named Boundary-Sensitive Network (BSN), which adopts a "local to global" fashion. Locally, BSN first locates temporal boundaries with high
probabilities, then directly combines these boundaries as proposals. Globally,
with Boundary-Sensitive Proposal feature, BSN retrieves proposals by evaluating
the confidence of whether a proposal contains an action within its region. We
conduct experiments on two challenging datasets: ActivityNet-1.3 and THUMOS14,
where BSN outperforms other state-of-the-art temporal action proposal
generation methods with high recall and high temporal precision. Finally,
further experiments demonstrate that by combining existing action classifiers,
our method significantly improves the state-of-the-art temporal action
detection performance. | [] | [
"Action Detection",
"Temporal Action Localization",
"Temporal Action Proposal Generation"
] | [] | [
"THUMOS' 14",
"ActivityNet-1.3",
"THUMOS’14"
] | [
"AUC (test)",
"mAP",
"AR@200",
"[email protected]",
"mAP [email protected]",
"mAP [email protected]",
"mAP [email protected]",
"AUC (val)",
"mAP [email protected]",
"mAP [email protected]",
"[email protected]",
"AR@500",
"mAP [email protected]",
"[email protected]",
"mAP [email protected]",
"AR@50",
"AR@1000",
"AR@100"
] | BSN: Boundary Sensitive Network for Temporal Action Proposal Generation |
Due to the fast inference and good performance, discriminative learning
methods have been widely studied in image denoising. However, these methods
mostly learn a specific model for each noise level, and require multiple models
for denoising images with different noise levels. They also lack flexibility to
deal with spatially variant noise, limiting their applications in practical
denoising. To address these issues, we present a fast and flexible denoising
convolutional neural network, namely FFDNet, with a tunable noise level map as
the input. The proposed FFDNet works on downsampled sub-images, achieving a
good trade-off between inference speed and denoising performance. In contrast
to the existing discriminative denoisers, FFDNet enjoys several desirable
properties, including (i) the ability to handle a wide range of noise levels
(i.e., [0, 75]) effectively with a single network, (ii) the ability to remove
spatially variant noise by specifying a non-uniform noise level map, and (iii)
faster speed than benchmark BM3D even on CPU without sacrificing denoising
performance. Extensive experiments on synthetic and real noisy images are
conducted to evaluate FFDNet in comparison with state-of-the-art denoisers. The
results show that FFDNet is effective and efficient, making it highly
attractive for practical denoising applications. | [] | [
"Denoising",
"Image Denoising"
] | [] | [
"Kodak25 sigma50",
"Darmstadt Noise Dataset",
"CBSD68 sigma15",
"Kodak25 sigma25",
"McMaster sigma15",
"Clip300 sigma60",
"CBSD68 sigma50",
"BSD68 sigma50",
"BSD68 sigma75",
"Clip300 sigma35",
"BSD68 sigma35",
"Kodak25 sigma75",
"BSD68 sigma25",
"Kodak25 sigma35",
"Clip300 sigma25",
"CBSD68 sigma25",
"Set12 sigma15",
"McMaster sigma35",
"McMaster sigma50",
"Clip300 sigma15",
"Clip300 sigma50",
"Kodak25 sigma15",
"BSD68 sigma15",
"McMaster sigma25",
"CBSD68 sigma35",
"CBSD68 sigma75",
"McMaster sigma75"
] | [
"PSNR"
] | FFDNet: Toward a Fast and Flexible Solution for CNN based Image Denoising |
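A sketch of the FFDNet-style input construction described in the abstract above: the noisy image is reversibly downsampled into four sub-images and concatenated with the tunable noise level map (the use of pixel_unshuffle, the channel layout, and even spatial dimensions are assumptions; the denoising CNN itself is omitted).

```python
import torch
import torch.nn.functional as F

def ffdnet_input(noisy, sigma_map):
    """noisy: (B, C, H, W) image with H, W even; sigma_map: (B, 1, H//2, W//2)
    noise level map. Returns a (B, 4*C + 1, H//2, W//2) network input."""
    subs = F.pixel_unshuffle(noisy, downscale_factor=2)   # four downsampled sub-images
    return torch.cat([subs, sigma_map], dim=1)
```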
Skeleton-based human action recognition has recently attracted increasing attention thanks to the accessibility and the popularity of 3D skeleton data. One of the key challenges in skeleton-based action recognition lies in the large view variations when capturing data. In order to alleviate the effects of view variations, this paper introduces a novel view adaptation scheme, which automatically determines the virtual observation viewpoints in a learning-based, data-driven manner. We design two view adaptive neural networks, i.e., VA-RNN based on RNN, and VA-CNN based on CNN. For each network, a novel view adaptation module learns and determines the most suitable observation viewpoints, and transforms the skeletons to those viewpoints for the end-to-end recognition with a main classification network. Ablation studies find that the proposed view adaptive models are capable of transforming the skeletons of various viewpoints to much more consistent virtual viewpoints, which largely eliminates the viewpoint influence. In addition, we design a two-stream scheme (referred to as VA-fusion) that fuses the scores of the two networks to provide the fused prediction. Extensive experimental evaluations on five challenging benchmarks demonstrate the effectiveness of the proposed view-adaptive networks and their superior performance over state-of-the-art approaches. The source code is available at https://github.com/microsoft/View-Adaptive-Neural-Networks-for-Skeleton-based-Human-Action-Recognition. | [] | [
"Action Recognition",
"Skeleton Based Action Recognition",
"Temporal Action Localization"
] | [] | [
"SYSU 3D",
"NTU RGB+D",
"N-UCLA",
"SBU",
"UWA3D"
] | [
"Accuracy (CS)",
"Accuracy (CV)",
"Accuracy"
] | View Adaptive Neural Networks for High Performance Skeleton-based Human Action Recognition |
We study 3D shape modeling from a single image and make contributions to it
in three aspects. First, we present Pix3D, a large-scale benchmark of diverse
image-shape pairs with pixel-level 2D-3D alignment. Pix3D has wide applications
in shape-related tasks including reconstruction, retrieval, viewpoint
estimation, etc. Building such a large-scale dataset, however, is highly
challenging; existing datasets either contain only synthetic data, or lack
precise alignment between 2D images and 3D shapes, or only have a small number
of images. Second, we calibrate the evaluation criteria for 3D shape
reconstruction through behavioral studies, and use them to objectively and
systematically benchmark cutting-edge reconstruction algorithms on Pix3D.
Third, we design a novel model that simultaneously performs 3D reconstruction
and pose estimation; our multi-task learning approach achieves state-of-the-art
performance on both tasks. | [] | [
"3D Reconstruction",
"3D Shape Modeling",
"3D Shape Reconstruction",
"Multi-Task Learning",
"Pose Estimation",
"Viewpoint Estimation"
] | [] | [
"Pix3D"
] | [
"R@16",
"R@8",
"EMD",
"R@2",
"R@4",
"R@1",
"TIoU",
"R@32",
"CD"
] | Pix3D: Dataset and Methods for Single-Image 3D Shape Modeling |
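The Pix3D entry above evaluates reconstructions with metrics such as Chamfer distance (CD) and Earth Mover's Distance (EMD) on sampled point clouds. For reference, here is a generic NumPy sketch of the symmetric Chamfer distance; it deliberately omits the alignment, normalization, and sampling choices of the benchmark's actual evaluation protocol.

```python
import numpy as np

def chamfer_distance(p1, p2):
    """Symmetric Chamfer distance between two point sets of shape (N, 3) and (M, 3)."""
    d = np.linalg.norm(p1[:, None, :] - p2[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()            # nearest-neighbor terms, both directions

a = np.random.rand(1024, 3)
b = a + 0.01 * np.random.randn(1024, 3)
print(chamfer_distance(a, b))
```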
The Jaccard index, also referred to as the intersection-over-union score, is
commonly employed in the evaluation of image segmentation results given its
perceptual qualities, scale invariance - which lends appropriate relevance to
small objects, and appropriate counting of false negatives, in comparison to
per-pixel losses. We present a method for direct optimization of the mean
intersection-over-union loss in neural networks, in the context of semantic
image segmentation, based on the convex Lovász extension of submodular
losses. The loss is shown to perform better with respect to the Jaccard index
measure than the traditionally used cross-entropy loss. We show quantitative
and qualitative differences between optimizing the Jaccard index per image
versus optimizing the Jaccard index taken over an entire dataset. We evaluate
the impact of our method in a semantic segmentation pipeline and show
substantially improved intersection-over-union segmentation scores on the
Pascal VOC and Cityscapes datasets using state-of-the-art deep learning
segmentation architectures. | [] | [
"Semantic Segmentation"
] | [] | [
"PASCAL VOC 2012 test",
"Cityscapes test"
] | [
"Time (ms)",
"Mean IoU",
"mIoU",
"Mean IoU (class)",
"Frame (fps)"
] | The Lovász-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks |
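The key ingredient of the Lovász-Softmax entry above is a vector of Jaccard-loss increments computed from the ground-truth labels sorted by decreasing prediction error; the surrogate loss is the dot product of the clipped, sorted errors with that vector. A condensed PyTorch sketch of the binary case (the Lovász hinge) follows; it tracks the paper's Algorithm 1 but is simplified and should be read as a sketch, not the reference implementation.

```python
import torch

def lovasz_grad(gt_sorted):
    """Gradient of the Jaccard loss w.r.t. errors sorted in decreasing order."""
    gts = gt_sorted.sum()
    intersection = gts - gt_sorted.cumsum(0)
    union = gts + (1. - gt_sorted).cumsum(0)
    jaccard = 1. - intersection / union
    jaccard[1:] = jaccard[1:] - jaccard[:-1]        # convert cumulative values to increments
    return jaccard

def lovasz_hinge(logits, labels):
    """Binary Lovász hinge loss; logits and labels are flat 1-D tensors, labels in {0, 1}."""
    signs = 2. * labels.float() - 1.
    errors = 1. - logits * signs                    # hinge errors
    errors_sorted, perm = torch.sort(errors, descending=True)
    grad = lovasz_grad(labels.float()[perm])
    return torch.dot(torch.relu(errors_sorted), grad)

logits = torch.randn(16, requires_grad=True)
labels = (torch.rand(16) > 0.5).long()
lovasz_hinge(logits, labels).backward()
print(logits.grad.shape)                            # torch.Size([16])
```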
In this paper, we introduce the concept of learning latent super-events from
activity videos, and present how it benefits activity detection in continuous
videos. We define a super-event as a set of multiple events occurring together
in videos with a particular temporal organization; it is the opposite concept
of sub-events. Real-world videos contain multiple activities and are rarely
segmented (e.g., surveillance videos), and learning latent super-events allows
the model to capture how the events are temporally related in videos. We design
temporal structure filters that enable the model to focus on particular
sub-intervals of the videos, and use them together with a soft attention
mechanism to learn representations of latent super-events. Super-event
representations are combined with per-frame or per-segment CNNs to provide
frame-level annotations. Our approach is designed to be fully differentiable,
enabling end-to-end learning of latent super-event representations jointly with
the activity detector using them. Our experiments with multiple public video
datasets confirm that the proposed concept of latent super-event learning
significantly benefits activity detection, advancing the state of the art. | [] | [
"Action Detection",
"Activity Detection"
] | [] | [
"Multi-THUMOS",
"Charades"
] | [
"mAP"
] | Learning Latent Super-Events to Detect Multiple Activities in Videos |
Current state-of-the-art solutions for motion capture from a single camera
are optimization driven: they optimize the parameters of a 3D human model so
that its re-projection matches measurements in the video (e.g. person
segmentation, optical flow, keypoint detections etc.). Optimization models are
susceptible to local minima. This has been the bottleneck that forced the use of
clean, green-screen-like backgrounds at capture time, manual initialization, or
switching to multiple cameras as the input resource. In this work, we propose a
learning based motion capture model for single camera input. Instead of
optimizing mesh and skeleton parameters directly, our model optimizes neural
network weights that predict 3D shape and skeleton configurations given a
monocular RGB video. Our model is trained using a combination of strong
supervision from synthetic data, and self-supervision from differentiable
rendering of (a) skeletal keypoints, (b) dense 3D mesh motion, and (c)
human-background segmentation, in an end-to-end framework. Empirically we show
our model combines the best of both worlds of supervised learning and test-time
optimization: supervised learning initializes the model parameters in the right
regime, ensuring good pose and surface initialization at test time, without
manual effort. Self-supervision by back-propagating through differentiable
rendering allows (unsupervised) adaptation of the model to the test data, and
offers a much tighter fit than a pretrained fixed model. We show that the
proposed model improves with experience and converges to low-error solutions
where previous optimization methods fail. | [] | [
"3D Human Pose Estimation",
"Motion Capture",
"Optical Flow Estimation",
"Self-Supervised Learning"
] | [] | [
"Surreal"
] | [
"MPJPE"
] | Self-supervised Learning of Motion Capture |
In this paper, we address semantic segmentation of road-objects from 3D LiDAR
point clouds. In particular, we wish to detect and categorize instances of
interest, such as cars, pedestrians and cyclists. We formulate this problem as
a point-wise classification problem, and propose an end-to-end pipeline called
SqueezeSeg based on convolutional neural networks (CNN): the CNN takes a
transformed LiDAR point cloud as input and directly outputs a point-wise label
map, which is then refined by a conditional random field (CRF) implemented as a
recurrent layer. Instance-level labels are then obtained by conventional
clustering algorithms. Our CNN model is trained on LiDAR point clouds from the
KITTI dataset, and our point-wise segmentation labels are derived from 3D
bounding boxes from KITTI. To obtain extra training data, we built a LiDAR
simulator into Grand Theft Auto V (GTA-V), a popular video game, to synthesize
large amounts of realistic training data. Our experiments show that SqueezeSeg
achieves high accuracy with astonishingly fast and stable runtime (8.7 ms per
frame), highly desirable for autonomous driving applications. Furthermore,
additionally training on synthesized data boosts validation accuracy on
real-world data. Our source code and synthesized data will be open-sourced. | [] | [
"3D Semantic Segmentation",
"Autonomous Driving",
"Semantic Segmentation"
] | [] | [
"SemanticKITTI"
] | [
"mIoU"
] | SqueezeSeg: Convolutional Neural Nets with Recurrent CRF for Real-Time Road-Object Segmentation from 3D LiDAR Point Cloud |
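SqueezeSeg's CNN consumes a LiDAR scan that has been spherically projected onto a dense 2D grid. Below is a rough NumPy sketch of that projection; the 64x512 resolution and the vertical field of view are illustrative assumptions loosely matching a 64-beam sensor, not the exact preprocessing of the paper, and later points simply overwrite earlier ones that fall into the same cell.

```python
import numpy as np

def spherical_projection(points, H=64, W=512, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 4) LiDAR cloud (x, y, z, intensity) onto an H x W grid.

    Returns an (H, W, 5) array with [x, y, z, intensity, range] channels.
    """
    x, y, z, intensity = points.T
    r = np.linalg.norm(points[:, :3], axis=1) + 1e-8
    yaw = np.arctan2(y, x)                         # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)                       # elevation
    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)

    u = ((yaw + np.pi) / (2 * np.pi) * W).astype(int).clip(0, W - 1)
    v = ((fov_up - pitch) / (fov_up - fov_down) * H).astype(int).clip(0, H - 1)

    grid = np.zeros((H, W, 5), dtype=np.float32)
    grid[v, u] = np.stack([x, y, z, intensity, r], axis=1)   # last point wins per cell
    return grid

cloud = np.random.randn(10000, 4).astype(np.float32)
print(spherical_projection(cloud).shape)           # (64, 512, 5)
```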
Previous work combines word-level and character-level representations using
concatenation or scalar weighting, which is suboptimal for high-level tasks
like reading comprehension. We present a fine-grained gating mechanism to
dynamically combine word-level and character-level representations based on
properties of the words. We also extend the idea of fine-grained gating to
modeling the interaction between questions and paragraphs for reading
comprehension. Experiments show that our approach can improve the performance
on reading comprehension tasks, achieving new state-of-the-art results on the
Children's Book Test dataset. To demonstrate the generality of our gating
mechanism, we also show improved results on a social media tag prediction task. | [] | [
"Question Answering",
"Reading Comprehension"
] | [] | [
"SQuAD1.1 dev",
"SQuAD1.1"
] | [
"EM",
"F1"
] | Words or Characters? Fine-grained Gating for Reading Comprehension |
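The fine-grained gate above replaces concatenation or a scalar weight with a per-dimension gate conditioned on properties of each word. A minimal PyTorch sketch is given below; it assumes both representations already share the same dimensionality and, for brevity, conditions the gate only on the word vector itself, whereas the paper uses richer lexical features such as POS and NER tags.

```python
import torch
import torch.nn as nn

class FineGrainedGate(nn.Module):
    """h = g * char + (1 - g) * word, with an element-wise gate g = sigmoid(W f + b)."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(dim, dim)

    def forward(self, word_emb, char_emb):
        g = torch.sigmoid(self.gate(word_emb))       # (batch, seq, dim) gate per dimension
        return g * char_emb + (1. - g) * word_emb

word = torch.randn(2, 7, 100)
char = torch.randn(2, 7, 100)
print(FineGrainedGate(100)(word, char).shape)        # torch.Size([2, 7, 100])
```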
This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures. | [] | [
"Real-Time Strategy Games",
"Starcraft",
"Starcraft II"
] | [] | [
"CollectMineralShards",
"MoveToBeacon"
] | [
"Max Score"
] | StarCraft II: A New Challenge for Reinforcement Learning |
We address the problem of activity detection in continuous, untrimmed video
streams. This is a difficult task that requires extracting meaningful
spatio-temporal features to capture activities while accurately localizing the start
and end times of each activity. We introduce a new model, Region Convolutional
3D Network (R-C3D), which encodes the video streams using a three-dimensional
fully convolutional network, then generates candidate temporal regions
containing activities, and finally classifies selected regions into specific
activities. Computation is saved due to the sharing of convolutional features
between the proposal and the classification pipelines. The entire model is
trained end-to-end with jointly optimized localization and classification
losses. R-C3D is faster than existing methods (569 frames per second on a
single Titan X Maxwell GPU) and achieves state-of-the-art results on THUMOS'14.
We further demonstrate that our model is a general activity detection framework
that does not rely on assumptions about particular dataset properties by
evaluating our approach on ActivityNet and Charades. Our code is available at
http://ai.bu.edu/r-c3d/. | [] | [
"Action Detection",
"Activity Detection"
] | [] | [
"Charades",
"ActivityNet-1.3",
"THUMOS’14"
] | [
"[email protected]",
"mAP",
"[email protected]",
"mAP [email protected]",
"mAP [email protected]",
"mAP [email protected]",
"[email protected]",
"[email protected]",
"mAP [email protected]",
"[email protected]",
"mAP [email protected]"
] | R-C3D: Region Convolutional 3D Network for Temporal Activity Detection |
Neural networks have proven effective at solving difficult problems but
designing their architectures can be challenging, even for image classification
problems alone. Our goal is to minimize human participation, so we employ
evolutionary algorithms to discover such networks automatically. Despite
significant computational requirements, we show that it is now possible to
evolve models with accuracies within the range of those published in the last
year. Specifically, we employ simple evolutionary techniques at unprecedented
scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting
from trivial initial conditions and reaching accuracies of 94.6% (95.6% for
ensemble) and 77.0%, respectively. To do this, we use novel and intuitive
mutation operators that navigate large search spaces; we stress that no human
participation is required once evolution starts and that the output is a
fully-trained model. Throughout this work, we place special emphasis on the
repeatability of results, the variability in the outcomes and the computational
requirements. | [] | [
"Hyperparameter Optimization",
"Image Classification",
"Neural Architecture Search"
] | [] | [
"CIFAR-100",
"CIFAR-10"
] | [
"Percentage correct"
] | Large-Scale Evolution of Image Classifiers |
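The evolutionary search above can be caricatured as a tournament loop: sample two individuals, replace the worse with a mutated copy of the better. In the sketch below the fitness function is a trivial stand-in for "train the encoded network and report validation accuracy", and the mutation names are invented placeholders.

```python
import random

MUTATIONS = ["add-conv", "remove-conv", "alter-lr", "add-skip", "identity"]

def mutate(genome):
    return genome + [random.choice(MUTATIONS)]

def fitness(genome):
    # Stand-in for training the encoded network and measuring validation accuracy.
    return sum(hash(op) % 7 for op in genome) / (len(genome) + 1)

population = [{"genome": ["seed"], "fitness": fitness(["seed"])} for _ in range(20)]

for step in range(200):                           # tournament-style evolution
    i, j = random.sample(range(len(population)), 2)
    worse, better = (i, j) if population[i]["fitness"] < population[j]["fitness"] else (j, i)
    child = mutate(list(population[better]["genome"]))
    population[worse] = {"genome": child, "fitness": fitness(child)}

best = max(population, key=lambda ind: ind["fitness"])
print(len(best["genome"]), best["fitness"])
```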
Relational reasoning is a central component of generally intelligent
behavior, but has proven difficult for neural networks to learn. In this paper
we describe how to use Relation Networks (RNs) as a simple plug-and-play module
to solve problems that fundamentally hinge on relational reasoning. We tested
RN-augmented networks on three tasks: visual question answering using a
challenging dataset called CLEVR, on which we achieve state-of-the-art,
super-human performance; text-based question answering using the bAbI suite of
tasks; and complex reasoning about dynamic physical systems. Then, using a
curated dataset called Sort-of-CLEVR we show that powerful convolutional
networks do not have a general capacity to solve relational questions, but can
gain this capacity when augmented with RNs. Our work shows how a deep learning
architecture equipped with an RN module can implicitly discover and learn to
reason about entities and their relations. | [] | [
"Image Retrieval with Multi-Modal Query",
"Question Answering",
"Relational Reasoning",
"Visual Question Answering"
] | [] | [
"CLEVR",
"Fashion200k"
] | [
"Recall@50",
"Recall@1",
"Recall@10",
"Accuracy"
] | A simple neural network module for relational reasoning |
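A Relation Network, as summarized above, applies a shared MLP g to every pair of "objects" (for example CNN feature-map cells, optionally conditioned on a question embedding), sums the outputs over all pairs, and feeds the sum to a second MLP f. A compact PyTorch sketch with assumed layer sizes:

```python
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    def __init__(self, obj_dim, q_dim, hidden=128, out_dim=10):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * obj_dim + q_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU())
        self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, out_dim))

    def forward(self, objects, question):
        # objects: (B, N, obj_dim), question: (B, q_dim)
        B, N, D = objects.shape
        o_i = objects.unsqueeze(2).expand(B, N, N, D)          # first object of each pair
        o_j = objects.unsqueeze(1).expand(B, N, N, D)          # second object of each pair
        q = question.unsqueeze(1).unsqueeze(1).expand(B, N, N, question.size(-1))
        pairs = torch.cat([o_i, o_j, q], dim=-1)               # (B, N, N, 2D + q_dim)
        relations = self.g(pairs).sum(dim=(1, 2))              # aggregate over all pairs
        return self.f(relations)

objs = torch.randn(4, 25, 64)      # e.g., 5x5 CNN feature-map cells as objects
ques = torch.randn(4, 32)
print(RelationNetwork(64, 32)(objs, ques).shape)               # torch.Size([4, 10])
```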
The MNIST dataset has become a standard benchmark for learning,
classification and computer vision systems. Contributing to its widespread
adoption are the understandable and intuitive nature of the task, its
relatively small size and storage requirements and the accessibility and
ease-of-use of the database itself. The MNIST database was derived from a
larger dataset known as the NIST Special Database 19 which contains digits,
uppercase and lowercase handwritten letters. This paper introduces a variant of
the full NIST dataset, which we have called Extended MNIST (EMNIST), which
follows the same conversion paradigm used to create the MNIST dataset. The
result is a set of datasets that constitute more challenging classification
tasks involving letters and digits, and that share the same image structure
and parameters as the original MNIST task, allowing for direct compatibility
with all existing classifiers and systems. Benchmark results are presented
along with a validation of the conversion process through the comparison of the
classification results on converted NIST digits and the MNIST digits. | [] | [
"Image Classification"
] | [] | [
"EMNIST-Digits",
"EMNIST-Letters",
"EMNIST-Balanced"
] | [
"Accuracy (%)",
"Accuracy"
] | EMNIST: an extension of MNIST to handwritten letters |
Directly reading documents and being able to answer questions from them is an
unsolved challenge. To avoid its inherent difficulty, question answering (QA)
has been directed towards using Knowledge Bases (KBs) instead, which has proven
effective. Unfortunately KBs often suffer from being too restrictive, as the
schema cannot support certain types of answers, and too sparse, e.g. Wikipedia
contains much more information than Freebase. In this work we introduce a new
method, Key-Value Memory Networks, that makes reading documents more viable by
utilizing different encodings in the addressing and output stages of the memory
read operation. To compare using KBs, information extraction, or Wikipedia
documents directly in a single framework, we construct an analysis tool,
WikiMovies, a QA dataset that contains raw text alongside a preprocessed KB, in
the domain of movies. Our method reduces the gap between all three settings. It
also achieves state-of-the-art results on the existing WikiQA benchmark. | [] | [
"Question Answering"
] | [] | [
"WikiQA"
] | [
"MRR",
"MAP"
] | Key-Value Memory Networks for Directly Reading Documents |
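The key-value read described above addresses memory with keys and reads out values, so the key encoding (for example a window of words) can differ from the value encoding (for example the window's centre word or a linked entity). A bare-bones single-hop sketch, with random tensors standing in for the real encoders:

```python
import torch
import torch.nn.functional as F

def key_value_read(query, keys, values):
    """Single memory hop: softmax addressing over keys, weighted sum of values.

    query: (B, d); keys, values: (B, M, d) -> returns (B, d).
    """
    scores = torch.bmm(keys, query.unsqueeze(-1)).squeeze(-1)   # (B, M) dot-product relevance
    probs = F.softmax(scores, dim=-1)                           # key addressing
    return torch.bmm(probs.unsqueeze(1), values).squeeze(1)     # value reading

B, M, d = 2, 30, 64
q = torch.randn(B, d)
K, V = torch.randn(B, M, d), torch.randn(B, M, d)
print(key_value_read(q, K, V).shape)   # torch.Size([2, 64])
# In the full model the query is updated after each hop and the final state
# is scored against candidate answer embeddings.
```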
In this paper, we present supervision-by-registration, an unsupervised
approach to improve the precision of facial landmark detectors on both images
and video. Our key observation is that the detections of the same landmark in
adjacent frames should be coherent with registration, i.e., optical flow.
Interestingly, the coherency of optical flow is a source of supervision that
does not require manual labeling, and can be leveraged during detector
training. For example, we can enforce in the training loss function that a
detected landmark at frame$_{t-1}$ followed by optical flow tracking from
frame$_{t-1}$ to frame$_t$ should coincide with the location of the detection
at frame$_t$. Essentially, supervision-by-registration augments the training
loss function with a registration loss, thus training the detector to have
output that is not only close to the annotations in labeled images, but also
consistent with registration on large amounts of unlabeled videos. End-to-end
training with the registration loss is made possible by a differentiable
Lucas-Kanade operation, which computes optical flow registration in the forward
pass, and back-propagates gradients that encourage temporal coherency in the
detector. The output of our method is a more precise image-based facial
landmark detector, which can be applied to single images or video. With
supervision-by-registration, we demonstrate (1) improvements in facial landmark
detection on both images (300W, AFLW) and video (300VW, YouTube-Celebrities),
and (2) significant reduction of jittering in video detections. | [] | [
"Facial Landmark Detection",
"Optical Flow Estimation"
] | [] | [
"300-VW (C)"
] | [
"AUC0.08 private"
] | Supervision-by-Registration: An Unsupervised Approach to Improve the Precision of Facial Landmark Detectors |
High performance face detection remains a very challenging problem,
especially when there exists many tiny faces. This paper presents a novel
single-shot face detector, named Selective Refinement Network (SRN), which
introduces novel two-step classification and regression operations selectively
into an anchor-based face detector to reduce false positives and improve
location accuracy simultaneously. In particular, the SRN consists of two
modules: the Selective Two-step Classification (STC) module and the Selective
Two-step Regression (STR) module. The STC aims to filter out most simple
negative anchors from low level detection layers to reduce the search space for
the subsequent classifier, while the STR is designed to coarsely adjust the
locations and sizes of anchors from high level detection layers to provide
better initialization for the subsequent regressor. Moreover, we design a
Receptive Field Enhancement (RFE) block to provide more diverse receptive
field, which helps to better capture faces in some extreme poses. As a
consequence, the proposed SRN detector achieves state-of-the-art performance on
all the widely used face detection benchmarks, including AFW, PASCAL face,
FDDB, and WIDER FACE datasets. Codes will be released to facilitate further
studies on the face detection problem. | [] | [
"Face Detection",
"Regression"
] | [] | [
"WIDER Face (Medium)",
"WIDER Face (Easy)",
"Annotated Faces in the Wild",
"PASCAL Face",
"WIDER Face (Hard)",
"FDDB"
] | [
"AP"
] | Selective Refinement Network for High Performance Face Detection |
Multi-object tracking (MOT) becomes more challenging when objects of interest have similar appearances. In that case, the motion cues are particularly useful for discriminating multiple objects. However, for online 2D MOT in scenes acquired from moving cameras, observable motion cues are complicated by global camera movements and thus not always smooth or predictable. To deal with such unexpected camera motion for online 2D MOT, a structural motion constraint between objects has been utilized thanks to its robustness to camera motion. In this paper, we propose a new data association method that effectively exploits structural motion constraints in the presence of large camera motion. In addition, to further improve the robustness of data association against mis-detections and clutters, a novel event aggregation approach is developed to integrate structural constraints in assignment costs for online MOT. Experimental results on a large number of datasets demonstrate the effectiveness of the proposed algorithm for online 2D MOT. | [] | [
"Multi-Object Tracking",
"Object Tracking",
"Online Multi-Object Tracking"
] | [] | [
"KITTI Tracking test"
] | [
"MOTA"
] | Online Multi-Object Tracking via Structural Constraint Event Aggregation |
Word sense induction (WSI), or the task of automatically discovering multiple
senses or meanings of a word, has three main challenges: domain adaptability,
novel sense detection, and sense granularity flexibility. While current latent
variable models are known to solve the first two challenges, they are not
flexible to different word sense granularities, which differ very much among
words, from aardvark with one sense, to play with over 50 senses. Current
models either require hyperparameter tuning or nonparametric induction of the
number of senses, both of which we find to be ineffective. Thus, we aim to
eliminate these requirements and solve the sense granularity problem by
proposing AutoSense, a latent variable model based on two observations: (1)
senses are represented as a distribution over topics, and (2) senses generate
pairings between the target word and its neighboring word. These observations
alleviate the problem by (a) discarding garbage senses and (b) additionally
inducing fine-grained word senses. Results show great improvements over the
state-of-the-art models on popular WSI datasets. We also show that AutoSense is
able to learn the appropriate sense granularity of a word. Finally, we apply
AutoSense to the unsupervised author name disambiguation task where the sense
granularity problem is more evident and show that AutoSense is evidently better
than competing models. We share our data and code here:
https://github.com/rktamplayo/AutoSense. | [] | [
"Latent Variable Models",
"Word Sense Induction"
] | [] | [
"SemEval 2013",
"SemEval 2010 WSI"
] | [
"F_NMI",
"F-BC",
"V-Measure",
"AVG",
"F-Score"
] | AutoSense Model for Word Sense Induction |
Despite the success of deep neural networks (DNNs) in image classification
tasks, the human-level performance relies on massive training data with
high-quality manual annotations, which are expensive and time-consuming to
collect. There exist many inexpensive data sources on the web, but they tend to
contain inaccurate labels. Training on noisy labeled datasets causes
performance degradation because DNNs can easily overfit to the label noise. To
overcome this problem, we propose a noise-tolerant training algorithm, where a
meta-learning update is performed prior to conventional gradient update. The
proposed meta-learning method simulates actual training by generating synthetic
noisy labels, and train the model such that after one gradient update using
each set of synthetic noisy labels, the model does not overfit to the specific
noise. We conduct extensive experiments on the noisy CIFAR-10 dataset and the
Clothing1M dataset. The results demonstrate the advantageous performance of the
proposed method compared to several state-of-the-art baselines. | [] | [
"Image Classification",
"Learning with noisy labels",
"Meta-Learning"
] | [] | [
"Clothing1M"
] | [
"Accuracy"
] | Learning to Learn from Noisy Labeled Data |
Computational models of visual attention are at the crossroad of disciplines like cognitive science, computational neuroscience, and computer vision. This paper proposes a model of attentional scanpath that is based on the principle that there are foundational laws that drive the emergence of visual attention. We devise variational laws of the eye-movement that rely on a generalized view of the Least Action Principle in physics. The potential energy captures details as well as peripheral visual features, while the kinetic energy corresponds with the classic interpretation in analytic mechanics. In addition, the Lagrangian contains a brightness invariance term, which characterizes significantly the scanpath trajectories. We obtain differential equations of visual attention as the stationary point of the generalized action, and we propose an algorithm to estimate the model parameters. Finally, we report experimental results to validate the model in tasks of saliency detection. | [] | [
"Saliency Detection",
"Scanpath prediction"
] | [] | [
"CAT2000"
] | [
"NSS",
"AUC"
] | Variational Laws of Visual Attention for Dynamic Scenes |
In many real-world prediction tasks, class labels include information about the relative ordering between labels, which is not captured by commonly-used loss functions such as multi-category cross-entropy. Recently, the deep learning community adopted ordinal regression frameworks to take such ordering information into account. Neural networks were equipped with ordinal regression capabilities by transforming ordinal targets into binary classification subtasks. However, this method suffers from inconsistencies among the different binary classifiers. To resolve these inconsistencies, we propose the COnsistent RAnk Logits (CORAL) framework with strong theoretical guarantees for rank-monotonicity and consistent confidence scores. Moreover, the proposed method is architecture-agnostic and can extend arbitrary state-of-the-art deep neural network classifiers for ordinal regression tasks. The empirical evaluation of the proposed rank-consistent method on a range of face-image datasets for age prediction shows a substantial reduction of the prediction error compared to the reference ordinal regression network. | [] | [
"Age And Gender Classification",
"Age Estimation",
"Gender Prediction",
"Regression"
] | [] | [
"MORPH Album2",
"UTKFace",
"AFAD",
"CACD"
] | [
"MAE"
] | Rank consistent ordinal regression for neural networks with application to age estimation |
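The rank-consistency trick summarized above amounts to sharing one weight vector across the K-1 binary "is the label greater than rank k?" subtasks and giving each subtask only its own bias, which makes the predicted probabilities monotone in k. A condensed PyTorch sketch (the optional task-importance weights of the full loss are omitted):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoralHead(nn.Module):
    """Shared weights + K-1 independent biases => rank-consistent ordinal outputs."""
    def __init__(self, in_features, num_classes):
        super().__init__()
        self.fc = nn.Linear(in_features, 1, bias=False)           # shared weight vector
        self.biases = nn.Parameter(torch.zeros(num_classes - 1))  # one bias per threshold

    def forward(self, x):
        return self.fc(x) + self.biases                           # (B, K-1) threshold logits

def coral_loss(logits, levels):
    # levels: (B, K-1) extended binary targets, levels[i, k] = 1 if label_i > k
    return F.binary_cross_entropy_with_logits(logits, levels)

def predict_rank(logits):
    return (torch.sigmoid(logits) > 0.5).sum(dim=1)               # monotone by construction

features = torch.randn(8, 256)
labels = torch.randint(0, 10, (8,))
levels = (labels.unsqueeze(1) > torch.arange(9)).float()
head = CoralHead(256, 10)
print(coral_loss(head(features), levels).item(), predict_rank(head(features)))
```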
Learning good feature embeddings for images often requires substantial
training data. As a consequence, in settings where training data is limited
(e.g., few-shot and zero-shot learning), we are typically forced to use a
generic feature embedding across various tasks. Ideally, we want to construct
feature embeddings that are tuned for the given task. In this work, we propose
Task-Aware Feature Embedding Networks (TAFE-Nets) to learn how to adapt the
image representation to a new task in a meta learning fashion. Our network is
composed of a meta learner and a prediction network. Based on a task input, the
meta learner generates parameters for the feature layers in the prediction
network so that the feature embedding can be accurately adjusted for that task.
We show that TAFE-Net is highly effective in generalizing to new tasks or
concepts and evaluate the TAFE-Net on a range of benchmarks in zero-shot and
few-shot learning. Our model matches or exceeds the state-of-the-art on all
tasks. In particular, our approach improves the prediction accuracy of unseen
attribute-object pairs by 4 to 15 points on the challenging visual
attribute-object composition task. | [] | [
"Few-Shot Learning",
"Meta-Learning",
"Zero-Shot Learning"
] | [] | [
"SUN - 0-Shot",
"AWA2 - 0-Shot",
"aPY - 0-Shot",
"AWA1 - 0-Shot",
"CUB-200 - 0-Shot Learning"
] | [
"Accuracy"
] | TAFE-Net: Task-Aware Feature Embeddings for Low Shot Learning |
Knowledge graphs capture structured information and relations between a set of entities or items. As such knowledge graphs represent an attractive source of information that could help improve recommender systems. However, existing approaches in this domain rely on manual feature engineering and do not allow for an end-to-end training. Here we propose Knowledge-aware Graph Neural Networks with Label Smoothness regularization (KGNN-LS) to provide better recommendations. Conceptually, our approach computes user-specific item embeddings by first applying a trainable function that identifies important knowledge graph relationships for a given user. This way we transform the knowledge graph into a user-specific weighted graph and then apply a graph neural network to compute personalized item embeddings. To provide better inductive bias, we rely on label smoothness assumption, which posits that adjacent items in the knowledge graph are likely to have similar user relevance labels/scores. Label smoothness provides regularization over the edge weights and we prove that it is equivalent to a label propagation scheme on a graph. We also develop an efficient implementation that shows strong scalability with respect to the knowledge graph size. Experiments on four datasets show that our method outperforms state of the art baselines. KGNN-LS also achieves strong performance in cold-start scenarios where user-item interactions are sparse. | [] | [
"Feature Engineering",
"Knowledge Graphs",
"Recommendation Systems"
] | [] | [
"Last.FM",
"MovieLens 20M",
"Book-Crossing",
"Dianping-Food"
] | [
"Recall@100",
"Recall@50",
"Recall@2",
"Recall@10"
] | Knowledge-aware Graph Neural Networks with Label Smoothness Regularization for Recommender Systems |
Over the past few years, we have witnessed the success of deep learning in image recognition thanks to the availability of large-scale human-annotated datasets such as PASCAL VOC, ImageNet, and COCO. Although these datasets have covered a wide range of object categories, there are still a significant number of objects that are not included. Can we perform the same task without a lot of human annotations? In this paper, we are interested in few-shot object segmentation where the number of annotated training examples is limited to only 5. To evaluate and validate the performance of our approach, we have built a few-shot segmentation dataset, FSS-1000, which consists of 1000 object classes with pixelwise annotation of ground-truth segmentation. Unique to FSS-1000, our dataset contains a significant number of objects that have never been seen or annotated in previous datasets, such as tiny daily objects, merchandise, cartoon characters, logos, etc. We build our baseline model using standard backbone networks such as VGG-16, ResNet-101, and Inception. To our surprise, we found that training our model from scratch using FSS-1000 achieves comparable and even better results than training with weights pre-trained on ImageNet, which is more than 100 times larger than FSS-1000. Both our approach and dataset are simple, effective, and easily extensible to learn segmentation of new object classes given very few annotated training examples. The dataset is available at https://github.com/HKUSTCV/FSS-1000. | [] | [
"Few-Shot Semantic Segmentation",
"Semantic Segmentation"
] | [] | [
"FSS-1000"
] | [
"Mean IoU"
] | FSS-1000: A 1000-Class Dataset for Few-Shot Segmentation |
The rapid growth of video on the internet has made searching for video content using natural language queries a significant challenge. Human-generated queries for video datasets `in the wild' vary a lot in terms of degree of specificity, with some queries describing specific details such as the names of famous identities, content from speech, or text available on the screen. Our goal is to condense the multi-modal, extremely high dimensional information from videos into a single, compact video representation for the task of video retrieval using free-form text queries, where the degree of specificity is open-ended. For this we exploit existing knowledge in the form of pre-trained semantic embeddings which include 'general' features such as motion, appearance, and scene features from visual content. We also explore the use of more 'specific' cues from ASR and OCR which are intermittently available for videos and find that these signals remain challenging to use effectively for retrieval. We propose a collaborative experts model to aggregate information from these different pre-trained experts and assess our approach empirically on five retrieval benchmarks: MSR-VTT, LSMDC, MSVD, DiDeMo, and ActivityNet. Code and data can be found at www.robots.ox.ac.uk/~vgg/research/collaborative-experts/. This paper contains a correction to results reported in the previous version. | [] | [
"Video Retrieval"
] | [] | [
"MSR-VTT-1kA",
"MSVD",
"LSMDC",
"ActivityNet",
"MSR-VTT",
"DiDeMo"
] | [
"text-to-video Median Rank",
"text-to-video R@5",
"video-to-text Mean Rank",
"video-to-text R@10",
"text-to-video R@50",
"text-to-video R@1",
"text-to-video Mean Rank",
"video-to-text Median Rank",
"video-to-text R@1",
"text-to-video R@10",
"video-to-text R@5"
] | Use What You Have: Video Retrieval Using Representations From Collaborative Experts |
Background: Despite recent significant progress in the development of automatic sleep staging methods, building a good model still remains a big challenge for sleep studies with a small cohort due to the data-variability and data-inefficiency issues. This work presents a deep transfer learning approach to overcome these issues and enable transferring knowledge from a large dataset to a small cohort for automatic sleep staging. Methods: We start from a generic end-to-end deep learning framework for sequence-to-sequence sleep staging and derive two networks as the means for transfer learning. The networks are first trained in the source domain (i.e. the large database). The pretrained networks are then finetuned in the target domain (i.e. the small cohort) to complete knowledge transfer. We employ the Montreal Archive of Sleep Studies (MASS) database consisting of 200 subjects as the source domain and study deep transfer learning on three different target domains: the Sleep Cassette subset and the Sleep Telemetry subset of the Sleep-EDF Expanded database, and the Surrey-cEEGrid database. The target domains are purposely adopted to cover different degrees of data mismatch to the source domains. Results: Our experimental results show significant performance improvement on automatic sleep staging on the target domains achieved with the proposed deep transfer learning approach. Conclusions: These results suggest the efficacy of the proposed approach in addressing the above-mentioned data-variability and data-inefficiency issues. Significance: As a consequence, it would enable one to improve the quality of automatic sleep staging models when the amount of data is relatively small. The source code and the pretrained models are available at http://github.com/pquochuy/sleep_transfer_learning. | [] | [
"Automatic Sleep Stage Classification",
"Multimodal Sleep Stage Detection",
"Sleep Stage Detection",
"Transfer Learning"
] | [] | [
"Surrey-PSG",
"Surrey-cEEGGrid",
"Sleep-EDF-ST",
"Sleep-EDF-SC"
] | [
"Accuracy"
] | Towards More Accurate Automatic Sleep Staging via Deep Transfer Learning |
This paper studies learning the representations of whole graphs in both unsupervised and semi-supervised scenarios. Graph-level representations are critical in a variety of real-world applications such as predicting the properties of molecules and community analysis in social networks. Traditional graph kernel based methods are simple, yet effective for obtaining fixed-length representations for graphs, but they suffer from poor generalization due to hand-crafted designs. There are also some recent methods based on language models (e.g. graph2vec) but they tend to only consider certain substructures (e.g. subtrees) as graph representatives. Inspired by recent progress in unsupervised representation learning, in this paper we propose a novel method called InfoGraph for learning graph-level representations. We maximize the mutual information between the graph-level representation and the representations of substructures of different scales (e.g., nodes, edges, triangles). By doing so, the graph-level representations encode aspects of the data that are shared across different scales of substructures. Furthermore, we propose InfoGraph*, an extension of InfoGraph for semi-supervised scenarios. InfoGraph* maximizes the mutual information between unsupervised graph representations learned by InfoGraph and the representations learned by existing supervised methods. As a result, the supervised encoder learns from unlabeled data while preserving the latent semantic space favored by the current supervised task. Experimental results on the tasks of graph classification and molecular property prediction show that InfoGraph is superior to state-of-the-art baselines and InfoGraph* can achieve performance competitive with state-of-the-art semi-supervised models. | [] | [
"Graph Classification",
"Molecular Property Prediction",
"Representation Learning",
"Unsupervised Representation Learning"
] | [] | [
"IMDb-M",
"PTC",
"IMDb-B",
"MUTAG"
] | [
"Accuracy"
] | InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation Learning via Mutual Information Maximization |
In matrix factorization, available graph side-information may not be well suited for the matrix completion problem, having edges that disagree with the latent-feature relations learnt from the incomplete data matrix. We show that removing these $\textit{contested}$ edges improves prediction accuracy and scalability. We identify the contested edges through a highly-efficient graphical lasso approximation. The identification and removal of contested edges adds no computational complexity to state-of-the-art graph-regularized matrix factorization, remaining linear with respect to the number of non-zeros. Computational load even decreases proportional to the number of edges removed. Formulating a probabilistic generative model and using expectation maximization to extend graph-regularised alternating least squares (GRALS) guarantees convergence. Rich simulated experiments illustrate the desired properties of the resulting algorithm. On real data experiments we demonstrate improved prediction accuracy with fewer graph edges (empirical evidence that graph side-information is often inaccurate). A 300 thousand dimensional graph with three million edges (Yahoo music side-information) can be analyzed in under ten minutes on a standard laptop computer demonstrating the efficiency of our graph update. | [] | [
"Matrix Completion",
"Recommendation Systems"
] | [] | [
"YahooMusic",
"Flixster Monti",
"MovieLens 20M",
"Douban Monti",
"MovieLens 100K"
] | [
"RMSE (u1 Splits)",
"RMSE"
] | Scalable Probabilistic Matrix Factorization with Graph-Based Priors |
Generating diverse sequences is important in many NLP applications such as question generation or summarization that exhibit semantically one-to-many relationships between source and the target sequences. We present a method to explicitly separate diversification from generation using a general plug-and-play module (called SELECTOR) that wraps around and guides an existing encoder-decoder model. The diversification stage uses a mixture of experts to sample different binary masks on the source sequence for diverse content selection. The generation stage uses a standard encoder-decoder model given each selected content from the source sequence. Due to the non-differentiable nature of discrete sampling and the lack of ground truth labels for binary mask, we leverage a proxy for ground truth mask and adopt stochastic hard-EM for training. In question generation (SQuAD) and abstractive summarization (CNN-DM), our method demonstrates significant improvements in accuracy, diversity and training efficiency, including state-of-the-art top-1 accuracy in both datasets, 6% gain in top-5 accuracy, and 3.7 times faster training over a state of the art model. Our code is publicly available at https://github.com/clovaai/FocusSeq2Seq. | [] | [
"Abstractive Text Summarization",
"Document Summarization",
"Question Generation"
] | [] | [
"CNN / Daily Mail",
"SQuAD1.1"
] | [
"ROUGE-L",
"BLEU-4",
"ROUGE-1",
"ROUGE-2"
] | Mixture Content Selection for Diverse Sequence Generation |
Despite the recent success of end-to-end learned representations,
hand-crafted optical flow features are still widely used in video analysis
tasks. To fill this gap, we propose TVNet, a novel end-to-end trainable neural
network, to learn optical-flow-like features from data. TVNet subsumes a
specific optical flow solver, the TV-L1 method, and is initialized by unfolding
its optimization iterations as neural layers. TVNet can therefore be used
directly without any extra learning. Moreover, it can be naturally concatenated
with other task-specific networks to formulate an end-to-end architecture, thus
making our method more efficient than current multi-stage approaches by
avoiding the need to pre-compute and store features on disk. Finally, the
parameters of the TVNet can be further fine-tuned by end-to-end training. This
enables TVNet to learn richer and task-specific patterns beyond exact optical
flow. Extensive experiments on two action recognition benchmarks verify the
effectiveness of the proposed approach. Our TVNet achieves better accuracies
than all compared methods, while being competitive with the fastest counterpart
in terms of features extraction time. | [] | [
"Action Recognition",
"Optical Flow Estimation",
"Video Understanding"
] | [] | [
"UCF101",
"HMDB-51"
] | [
"Average accuracy of 3 splits",
"3-fold Accuracy"
] | End-to-End Learning of Motion Representation for Video Understanding |
In Visual Question Answering (VQA), answers have a great correlation with question meaning and visual contents. Thus, to selectively utilize image, question and answer information, we propose a novel trilinear interaction model which simultaneously learns high-level associations between these three inputs. In addition, to overcome the interaction complexity, we introduce a multimodal tensor-based PARALIND decomposition which efficiently parameterizes the trilinear interaction between the three inputs. Moreover, knowledge distillation is applied for the first time in free-form open-ended VQA, not only to reduce the computational cost and memory requirements but also to transfer knowledge from the trilinear interaction model to a bilinear interaction model. Extensive experiments on the benchmark datasets TDIUC, VQA-2.0, and Visual7W show that the proposed compact trilinear interaction model achieves state-of-the-art results when using a single model on all three datasets. | [] | [
"Knowledge Distillation",
"Question Answering",
"Visual Question Answering"
] | [] | [
"VQA v2 test-dev",
"Visual7W",
"TDIUC"
] | [
"Percentage correct",
"Accuracy"
] | Compact Trilinear Interaction for Visual Question Answering |
Recent studies in deep learning-based speech separation have proven the superiority of time-domain approaches to conventional time-frequency-based methods. Unlike the time-frequency domain approaches, the time-domain separation systems often receive input sequences consisting of a huge number of time steps, which introduces challenges for modeling extremely long sequences. Conventional recurrent neural networks (RNNs) are not effective for modeling such long sequences due to optimization difficulties, while one-dimensional convolutional neural networks (1-D CNNs) cannot perform utterance-level sequence modeling when their receptive field is smaller than the sequence length. In this paper, we propose the dual-path recurrent neural network (DPRNN), a simple yet effective method for organizing RNN layers in a deep structure to model extremely long sequences. DPRNN splits the long sequential input into smaller chunks and applies intra- and inter-chunk operations iteratively, where the input length can be made proportional to the square root of the original sequence length in each operation. Experiments show that by replacing the 1-D CNN with DPRNN and applying sample-level modeling in the time-domain audio separation network (TasNet), a new state-of-the-art performance on WSJ0-2mix is achieved with a 20 times smaller model than the previous best system. | [] | [
"Speech Separation"
] | [] | [
"wsj0-2mix"
] | [
"SI-SDRi"
] | Dual-path RNN: efficient long sequence modeling for time-domain single-channel speech separation |
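The dual-path idea above reshapes a long sequence into chunks so that one RNN runs within each chunk and a second RNN runs across chunks, keeping every RNN pass roughly proportional to the square root of the input length. The skeletal PyTorch block below shows the reshaping and the two passes; for brevity it uses non-overlapping chunks and omits normalization, whereas the full model uses 50%-overlapping chunks, layer normalization, and several stacked blocks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualPathBlock(nn.Module):
    """One dual-path block: a bidirectional RNN within chunks, then one across chunks."""
    def __init__(self, feat_dim, hidden=128):
        super().__init__()
        self.intra_rnn = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.intra_proj = nn.Linear(2 * hidden, feat_dim)
        self.inter_rnn = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.inter_proj = nn.Linear(2 * hidden, feat_dim)

    def forward(self, x, chunk_len=100):
        # x: (B, T, D). Pad T to a multiple of chunk_len, then view as (B, n_chunks, chunk_len, D).
        B, T, D = x.shape
        pad = (-T) % chunk_len
        x = F.pad(x, (0, 0, 0, pad))
        n = x.size(1) // chunk_len
        x = x.view(B, n, chunk_len, D)

        intra = x.reshape(B * n, chunk_len, D)                      # sequence length = chunk_len
        x = x + self.intra_proj(self.intra_rnn(intra)[0]).view(B, n, chunk_len, D)

        inter = x.transpose(1, 2).reshape(B * chunk_len, n, D)      # sequence length = n_chunks
        y = self.inter_proj(self.inter_rnn(inter)[0]).view(B, chunk_len, n, D).transpose(1, 2)
        return (x + y).reshape(B, n * chunk_len, D)[:, :T]

out = DualPathBlock(64)(torch.randn(2, 2000, 64))
print(out.shape)   # torch.Size([2, 2000, 64])
```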
The task of named entity recognition (NER) is normally divided into nested NER and flat NER depending on whether named entities are nested or not. Models are usually separately developed for the two tasks, since sequence labeling models, the most widely used backbone for flat NER, are only able to assign a single label to a particular token, which is unsuitable for nested NER where a token may be assigned several labels. In this paper, we propose a unified framework that is capable of handling both flat and nested NER tasks. Instead of treating the task of NER as a sequence labeling problem, we propose to formulate it as a machine reading comprehension (MRC) task. For example, extracting entities with the \textsc{per} label is formalized as extracting answer spans to the question "{\it which person is mentioned in the text?}". This formulation naturally tackles the entity overlapping issue in nested NER: the extraction of two overlapping entities for different categories requires answering two independent questions. Additionally, since the query encodes informative prior knowledge, this strategy facilitates the process of entity extraction, leading to better performance for not only nested NER but also flat NER. We conduct experiments on both {\em nested} and {\em flat} NER datasets. Experimental results demonstrate the effectiveness of the proposed formulation. We achieve a large performance boost over current SOTA models on nested NER datasets, i.e., +1.28, +2.55, +5.44, +6.37, respectively on ACE04, ACE05, GENIA and KBP17, along with SOTA results on flat NER datasets, i.e., +0.24, +1.95, +0.21, +1.49 respectively on English CoNLL 2003, English OntoNotes 5.0, Chinese MSRA, Chinese OntoNotes 4.0. | [] | [
"Chinese Named Entity Recognition",
"Entity Extraction using GAN",
"Machine Reading Comprehension",
"Named Entity Recognition",
"Nested Mention Recognition",
"Nested Named Entity Recognition",
"Reading Comprehension"
] | [] | [
"GENIA",
"OntoNotes 4",
"ACE 2004",
"MSRA",
"ACE 2005",
"Ontonotes v5 (English)",
"CoNLL 2003 (English)"
] | [
"F1"
] | A Unified MRC Framework for Named Entity Recognition |
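The reformulation above turns each entity type into a natural-language query and reduces NER to span extraction over (query, context) pairs, so nested entities of different types come from answering different questions. The sketch below shows the data construction and a naive start/end decoding step; the query wordings, the threshold rule, and the encoder producing per-token probabilities are all assumptions for illustration, and the paper itself learns a span-matching classifier over start/end pairs rather than the greedy pairing used here.

```python
QUERIES = {
    "PER": "which person is mentioned in the text?",
    "ORG": "which organization is mentioned in the text?",
    "LOC": "which location is mentioned in the text?",
}

def build_examples(tokens):
    """One MRC example per entity type: (label, query, context) triples."""
    return [(label, QUERIES[label], tokens) for label in QUERIES]

def decode_spans(start_prob, end_prob, threshold=0.5, max_len=10):
    """Greedily pair start positions with the next end position above the threshold."""
    spans = []
    for s, ps in enumerate(start_prob):
        if ps < threshold:
            continue
        for e in range(s, min(s + max_len, len(end_prob))):
            if end_prob[e] >= threshold:
                spans.append((s, e))
                break
    return spans

tokens = "Barack Obama visited Microsoft headquarters in Seattle".split()
for label, query, ctx in build_examples(tokens):
    print(label, "->", query)
# With per-token probabilities from the (assumed) encoder:
print(decode_spans([0.9, 0.1, 0, 0.8, 0, 0, 0.7], [0.1, 0.9, 0, 0.9, 0, 0, 0.8]))
```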
In this work, we describe a new, general, and efficient method for unstructured point cloud labeling. As the question of efficiently using deep Convolutional Neural Networks (CNNs) on 3D data is still a pending issue, we propose a framework which applies CNNs on multiple 2D image views (or snapshots) of the point cloud. The approach consists in three core ideas. (i) We pick many suitable snapshots of the point cloud. We generate two types of images: a Red-Green-Blue (RGB) view and a depth composite view containing geometric features. (ii) We then perform a pixel-wise labeling of each pair of 2D snapshots using fully convolutional networks. Different architectures are tested to achieve a profitable fusion of our heterogeneous inputs. (iii) Finally, we perform fast back-projection of the label predictions in the 3D space using efficient buffering to label every 3D point. Experiments show that our method is suitable for various types of point clouds such as Lidar or photogrammetric data. | [] | [
"Semantic Segmentation"
] | [] | [
"Semantic3D"
] | [
"mIoU"
] | Unstructured point cloud semantic labeling using deep segmentation networks
Visual and audio modalities are highly correlated, yet they contain different information. Their strong correlation makes it possible to predict the semantics of one from the other with good accuracy. Their intrinsic differences make cross-modal prediction a potentially more rewarding pretext task for self-supervised learning of video and audio representations compared to within-modality learning. Based on this intuition, we propose Cross-Modal Deep Clustering (XDC), a novel self-supervised method that leverages unsupervised clustering in one modality (e.g., audio) as a supervisory signal for the other modality (e.g., video). This cross-modal supervision helps XDC utilize the semantic correlation and the differences between the two modalities. Our experiments show that XDC outperforms single-modality clustering and other multi-modal variants. XDC achieves state-of-the-art accuracy among self-supervised methods on multiple video and audio benchmarks. Most importantly, our video model pretrained on large-scale unlabeled data significantly outperforms the same model pretrained with full-supervision on ImageNet and Kinetics for action recognition on HMDB51 and UCF101. To the best of our knowledge, XDC is the first self-supervised learning method that outperforms large-scale fully-supervised pretraining for action recognition on the same architecture. | [] | [
"Action Recognition",
"Audio Classification",
"Deep Clustering",
"Representation Learning",
"Self-Supervised Action Recognition",
"Self-Supervised Audio Classification",
"Self-Supervised Learning"
] | [] | [
"DCASE",
"UCF101",
"HMDB51",
"ESC-50"
] | [
"3-fold Accuracy",
"PRE-TRAINING DATASET",
"Pre-Training Dataset",
"Top-1 Accuracy"
] | Self-Supervised Learning by Cross-Modal Audio-Video Clustering |
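The cross-modal supervision above can be condensed to: cluster the embeddings of one modality (say audio) with k-means, use the cluster assignments as pseudo-labels to train the other modality's encoder (video), and alternate. A minimal sketch of one such pseudo-labelling step, with random features standing in for real audio and video encoders:

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

num_clips, feat_dim, k = 256, 128, 16
audio_feats = np.random.randn(num_clips, feat_dim).astype(np.float32)   # frozen audio encoder output
video_feats = torch.randn(num_clips, feat_dim)                          # video encoder output

# 1) Cluster the audio modality to obtain pseudo-labels.
pseudo_labels = torch.as_tensor(KMeans(n_clusters=k, n_init=10).fit_predict(audio_feats)).long()

# 2) Train a video classification head on those pseudo-labels with cross-entropy.
head = nn.Linear(feat_dim, k)
opt = torch.optim.SGD(head.parameters(), lr=0.1)
for _ in range(5):
    loss = nn.functional.cross_entropy(head(video_feats), pseudo_labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(loss.item())
# XDC alternates: the improved video features are clustered next to supervise the audio encoder.
```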
The ability to identify the same person from multiple camera views without the explicit use of facial recognition is receiving commercial and academic interest. The current status-quo solutions are based on attention neural models. In this paper, we propose Attention and CL loss, which is a hybrid of center and Online Soft Mining (OSM) loss added to the attention loss on top of a temporal attention-based neural network. The proposed loss function applied with bag-of-tricks for training surpasses the state of the art on the common person Re-ID datasets, MARS and PRID 2011. Our source code is publicly available on github. | [] | [
"Person Re-Identification",
"Video-Based Person Re-Identification"
] | [] | [
"MARS",
"PRID2011"
] | [
"Rank-1",
"mAP"
] | Video Person Re-ID: Fantastic Techniques and Where to Find Them |
Recent strategies achieved ensembling "for free" by fitting concurrently diverse subnetworks inside a single base network. The main idea during training is that each subnetwork learns to classify only one of the multiple inputs simultaneously provided. However, the question of how to best mix these multiple inputs has not been studied so far. In this paper, we introduce MixMo, a new generalized framework for learning multi-input multi-output deep subnetworks. Our key motivation is to replace the suboptimal summing operation hidden in previous approaches by a more appropriate mixing mechanism. For that purpose, we draw inspiration from successful mixed sample data augmentations. We show that binary mixing in features - particularly with rectangular patches from CutMix - enhances results by making subnetworks stronger and more diverse. We improve state of the art for image classification on CIFAR-100 and Tiny ImageNet datasets. Our easy to implement models notably outperform data augmented deep ensembles, without the inference and memory overheads. As we operate in features and simply better leverage the expressiveness of large networks, we open a new line of research complementary to previous works. | [
"Image Data Augmentation",
"Graph Embeddings"
] | [] | [
"CutMix",
"LINE",
"Large-scale Information Network Embedding"
] | [
"Tiny ImageNet Classification",
"CIFAR-100",
"CIFAR-10"
] | [
"Validation Acc",
"Percentage correct"
] | MixMo: Mixing Multiple Inputs for Multiple Outputs via Deep Subnetworks |
Recommender systems need to mirror the complexity of the environment they are applied in. The more we know about what might benefit the user, the more objectives the recommender system has. In addition there may be multiple stakeholders - sellers, buyers, shareholders - in addition to legal and ethical constraints. Simultaneously optimizing for a multitude of objectives, correlated and not correlated, having the same scale or not, has proven difficult so far. We introduce a stochastic multi-gradient descent approach to recommender systems (MGDRec) to solve this problem. We show that this exceeds state-of-the-art methods in traditional objective mixtures, like revenue and recall. Not only that, but through gradient normalization we can combine fundamentally different objectives, having diverse scales, into a single coherent framework. We show that uncorrelated objectives, like the proportion of quality products, can be improved alongside accuracy. Through the use of stochasticity, we avoid the pitfalls of calculating full gradients and provide a clear setting for its applicability. | [] | [
"Recommendation Systems"
] | [] | [
"Amazon Books",
"MovieLens 20M"
] | [
"Recall@20"
] | Multi-Gradient Descent for Multi-Objective Recommender Systems |
Click-through rate (CTR) prediction is a crucial task in online display advertising. The embedding-based neural networks have been proposed to learn both explicit feature interactions through a shallow component and deep feature interactions using a deep neural network (DNN) component. These sophisticated models, however, slow down the prediction inference by at least hundreds of times. To address the issue of significantly increased serving delay and high memory usage for ad serving in production, this paper presents \emph{DeepLight}: a framework to accelerate the CTR predictions in three aspects: 1) accelerate the model inference via explicitly searching informative feature interactions in the shallow component; 2) prune redundant layers and parameters at intra-layer and inter-layer level in the DNN component; 3) promote the sparsity of the embedding layer to preserve the most discriminant signals. By combining the above efforts, the proposed approach accelerates the model inference by 46X on Criteo dataset and 27X on Avazu dataset without any loss on the prediction accuracy. This paves the way for successfully deploying complicated embedding-based neural networks in production for ad serving. | [] | [
"Click-Through Rate Prediction"
] | [] | [
"Avazu",
"Criteo"
] | [
"Log Loss",
"LogLoss",
"AUC"
] | DeepLight: Deep Lightweight Feature Interactions for Accelerating CTR Predictions in Ad Serving |
Estimating the 3D poses of multiple humans in real time is a classic but still challenging task in computer vision. Its major difficulty lies in the ambiguity of cross-view association of 2D poses and the huge state space when there are multiple people in multiple views. In this paper, we present a novel solution for multi-human 3D pose estimation from multiple calibrated camera views. It takes 2D poses in different camera coordinates as inputs and aims to recover accurate 3D poses in the global coordinate system. Unlike previous methods that associate 2D poses among all pairs of views from scratch at every frame, we exploit the temporal consistency in videos to match the 2D inputs with 3D poses directly in 3D space. More specifically, we propose to retain the 3D pose of each person and update it iteratively via cross-view multi-human tracking. This novel formulation improves both accuracy and efficiency, as we demonstrate on widely used public datasets. To further verify the scalability of our method, we propose a new large-scale multi-human dataset with 12 to 28 camera views. Without bells and whistles, our solution achieves 154 FPS on 12 cameras and 34 FPS on 28 cameras, indicating its ability to handle large-scale real-world applications. The proposed dataset will be released soon. | [] | [
"3D Multi-Person Pose Estimation",
"3D Pose Estimation",
"Pose Estimation"
] | [] | [
"Campus",
"Shelf"
] | [
"PCP3D"
] | Cross-View Tracking for Multi-Human 3D Pose Estimation at over 100 FPS |
A popular method for anomaly detection is to use the generator of an adversarial network to formulate anomaly scores over the reconstruction loss of the input. Due to the rare occurrence of anomalies, optimizing such networks can be a cumbersome task. Another possible approach is to use both the generator and the discriminator for anomaly detection. However, owing to the adversarial training involved, such a model is often unstable, with performance fluctuating drastically from one training step to the next. In this study, we propose a framework that effectively generates stable results across a wide range of training steps and allows us to use both the generator and the discriminator of an adversarial model for efficient and robust anomaly detection. Our approach transforms the fundamental role of the discriminator from identifying real and fake data to distinguishing between good- and bad-quality reconstructions. To this end, we prepare training examples for good-quality reconstruction by employing the current generator, whereas poor-quality examples are obtained by utilizing an old state of the same generator. This way, the discriminator learns to detect the subtle distortions that often appear in reconstructions of anomalous inputs. Extensive experiments performed on the Caltech-256 and MNIST image datasets for novelty detection show superior results. Furthermore, on the UCSD Ped2 video dataset for anomaly detection, our model achieves a frame-level AUC of 98.1%, surpassing recent state-of-the-art methods. | [] | [
"Anomaly Detection",
"One-class classifier"
] | [] | [
"MNIST-test"
] | [
"F1 score"
] | Old is Gold: Redefining the Adversarially Learned One-Class Classifier Training Paradigm |
We study the problem of semi-supervised learning on graphs, for which graph neural networks (GNNs) have been extensively explored. However, most existing GNNs inherently suffer from the limitations of over-smoothing, non-robustness, and weak-generalization when labeled nodes are scarce. In this paper, we propose a simple yet effective framework---GRAPH RANDOM NEURAL NETWORKS (GRAND)---to address these issues. In GRAND, we first design a random propagation strategy to perform graph data augmentation. Then we leverage consistency regularization to optimize the prediction consistency of unlabeled nodes across different data augmentations. Extensive experiments on graph benchmark datasets suggest that GRAND significantly outperforms state-of-the-art GNN baselines on semi-supervised node classification. Finally, we show that GRAND mitigates the issues of over-smoothing and non-robustness, exhibiting better generalization behavior than existing GNNs. The source code of GRAND is publicly available at https://github.com/Grand20/grand. | [] | [
"Data Augmentation",
"Graph Learning",
"Node Classification"
] | [] | [
"Cora with Public Split: fixed 20 nodes per class",
"CiteSeer with Public Split: fixed 20 nodes per class",
"PubMed with Public Split: fixed 20 nodes per class"
] | [
"Accuracy"
] | Graph Random Neural Network for Semi-Supervised Learning on Graphs |
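A minimal sketch of the two ingredients named in the GRAND abstract above: random propagation (dropping whole node feature vectors, then averaging multi-hop neighbourhoods) and a consistency loss across several augmentations. The dense normalized adjacency, the sharpening temperature, and the squared-error form of the consistency term are assumptions for illustration, not the paper's exact code.

```python
import torch

def random_propagate(x, adj_norm, order=4, drop_rate=0.5, training=True):
    """x: (N, F) node features, adj_norm: (N, N) symmetrically normalized adjacency."""
    if training:
        keep = (torch.rand(x.size(0), 1, device=x.device) > drop_rate).float()
        x = x * keep / (1.0 - drop_rate)           # DropNode with rescaling
    out, h = x, x
    for _ in range(order):                          # mixed-order propagation
        h = adj_norm @ h
        out = out + h
    return out / (order + 1)

def consistency_loss(prob_list, temperature=0.5):
    """Encourage S softmax predictions (each (N, C)) on different augmentations to agree."""
    avg = torch.stack(prob_list).mean(0)
    sharp = avg ** (1.0 / temperature)              # sharpen the averaged prediction
    sharp = (sharp / sharp.sum(1, keepdim=True)).detach()
    return sum(((p - sharp) ** 2).sum(1).mean() for p in prob_list) / len(prob_list)
```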
Recovering the 3D shape of an object from single or multiple images with deep neural networks has been attracting increasing attention in the past few years. Mainstream works (e.g. 3D-R2N2) use recurrent neural networks (RNNs) to sequentially fuse feature maps of input images. However, RNN-based approaches are unable to produce consistent reconstruction results when given the same input images with different orders. Moreover, RNNs may forget important features from early input images due to long-term memory loss. To address these issues, we propose a novel framework for single-view and multi-view 3D object reconstruction, named Pix2Vox++. By using a well-designed encoder-decoder, it generates a coarse 3D volume from each input image. A multi-scale context-aware fusion module is then introduced to adaptively select high-quality reconstructions for different parts from all coarse 3D volumes to obtain a fused 3D volume. To further correct the wrongly recovered parts in the fused 3D volume, a refiner is adopted to generate the final output. Experimental results on the ShapeNet, Pix3D, and Things3D benchmarks show that Pix2Vox++ performs favorably against state-of-the-art methods in terms of both accuracy and efficiency. | [] | [
"3D Object Reconstruction",
"Object Reconstruction"
] | [] | [
"Data3D−R2N2"
] | [
"3DIoU"
] | Pix2Vox++: Multi-scale Context-aware 3D Object Reconstruction from Single and Multiple Images |
We present SwapNet, a framework to transfer garments across images of people with arbitrary body pose, shape, and clothing. Garment transfer is a challenging task that requires (i) disentangling the features of the clothing from the body pose and shape and (ii) realistic synthesis of the garment texture on the new body. We present a neural network architecture that tackles these sub-problems with two task-specific sub-networks. Since acquiring pairs of images showing the same clothing on different bodies is difficult, we propose a novel weakly-supervised approach that generates training pairs from a single image via data augmentation. We present the first fully automatic method for garment transfer in unconstrained images without solving the difficult 3D reconstruction problem. We demonstrate a variety of transfer results and highlight our advantages over traditional image-to-image and analogy pipelines. | [] | [
"Virtual Try-on"
] | [] | [
"FashionIQ"
] | [
"10 fold Cross validation"
] | SwapNet: Garment Transfer in Single View Images |
Motion plays a crucial role in understanding videos and most state-of-the-art neural models for video classification incorporate motion information typically using optical flows extracted by a separate off-the-shelf method. As the frame-by-frame optical flows require heavy computation, incorporating motion information has remained a major computational bottleneck for video understanding. In this work, we replace external and heavy computation of optical flows with internal and light-weight learning of motion features. We propose a trainable neural module, dubbed MotionSqueeze, for effective motion feature extraction. Inserted in the middle of any neural network, it learns to establish correspondences across frames and convert them into motion features, which are readily fed to the next downstream layer for better prediction. We demonstrate that the proposed method provides a significant gain on four standard benchmarks for action recognition with only a small amount of additional cost, outperforming the state of the art on Something-Something-V1&V2 datasets. | [] | [
"Action Classification",
"Action Recognition",
"Video Classification",
"Video Understanding"
] | [] | [
"Kinetics-400",
"HMDB-51",
"Something-Something V2",
"Something-Something V1"
] | [
"Top 1 Accuracy",
"Top-5 Accuracy",
"Top-1 Accuracy",
"Average accuracy of 3 splits",
"Top 5 Accuracy",
"Vid acc@1"
] | MotionSqueeze: Neural Motion Feature Learning for Video Understanding |
In this paper, we introduce a new reinforcement learning (RL) based neural architecture search (NAS) methodology for effective and efficient generative adversarial network (GAN) architecture search. The key idea is to formulate the GAN architecture search problem as a Markov decision process (MDP) for smoother architecture sampling, which enables a more effective RL-based search algorithm by targeting the potential global optimal architecture. To improve efficiency, we exploit an off-policy GAN architecture search algorithm that makes efficient use of the samples generated by previous policies. Evaluation on two standard benchmark datasets (i.e., CIFAR-10 and STL-10) demonstrates that the proposed method is able to discover highly competitive architectures for generally better image generation results with a considerably reduced computational burden: 7 GPU hours. Our code is available at https://github.com/Yuantian013/E2GAN. | [] | [
"Image Generation",
"Neural Architecture Search"
] | [] | [
"STL-10",
"CIFAR-10"
] | [
"Inception score",
"FID"
] | Off-Policy Reinforcement Learning for Efficient and Effective GAN Architecture Search |
We introduce Transductive Information Maximization (TIM) for few-shot learning. Our method maximizes the mutual information between the query features and their label predictions for a given few-shot task, in conjunction with a supervision loss based on the support set. Furthermore, we propose a new alternating-direction solver for our mutual-information loss, which substantially speeds up transductive-inference convergence over gradient-based optimization, while yielding similar accuracy. TIM inference is modular: it can be used on top of any base-training feature extractor. Following standard transductive few-shot settings, our comprehensive experiments demonstrate that TIM outperforms state-of-the-art methods significantly across various datasets and networks, when used on top of a fixed feature extractor trained with simple cross-entropy on the base classes, without resorting to complex meta-learning schemes. It consistently brings between 2% and 5% improvement in accuracy over the best-performing method, not only on all the well-established few-shot benchmarks but also on more challenging scenarios, with domain shifts and larger numbers of classes. | [] | [
"Few-Shot Image Classification",
"Few-Shot Learning",
"Meta-Learning"
] | [] | [
"Mini-ImageNet - 5-Shot Learning",
"Mini-Imagenet 5-way (1-shot)",
"Tiered ImageNet 5-way (1-shot)",
"Tiered ImageNet 5-way (5-shot)",
"Mini-Imagenet 20-way (1-shot)",
"Mini-Imagenet 10-way (1-shot)",
"Mini-ImageNet to CUB - 5 shot learning",
"CUB 200 5-way 1-shot",
"Mini-Imagenet 20-way (5-shot)",
"CUB 200 5-way 5-shot",
"Mini-Imagenet 10-way (5-shot)"
] | [
"Accuracy"
] | Transductive Information Maximization For Few-Shot Learning |
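A minimal sketch of the transductive information-maximization objective described above: cross-entropy on the support set plus a mutual-information term on the query predictions (marginal entropy minus mean conditional entropy). The weights `alpha` and `beta` are illustrative placeholders, not the paper's tuned values.

```python
import torch
import torch.nn.functional as F

def tim_loss(support_logits, support_labels, query_logits, alpha=1.0, beta=1.0):
    eps = 1e-12
    q = query_logits.softmax(dim=1)                      # (Nq, C) query posteriors
    marginal = q.mean(dim=0)                             # estimated label marginal
    h_marginal = -(marginal * (marginal + eps).log()).sum()
    h_conditional = -(q * (q + eps).log()).sum(dim=1).mean()
    ce_support = F.cross_entropy(support_logits, support_labels)
    # Minimizing this maximizes mutual information on the query set.
    return ce_support - alpha * h_marginal + beta * h_conditional

# Toy 5-way task: 5 support examples (one per class) and 75 query examples.
loss = tim_loss(torch.randn(5, 5), torch.arange(5), torch.randn(75, 5))
```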
We study the multi-round response generation in visual dialog, where a
response is generated according to a visually grounded conversational history.
Given a triplet: an image, Q&A history, and current question, all the
prevailing methods follow a codec (i.e., encoder-decoder) fashion in a
supervised learning paradigm: a multimodal encoder encodes the triplet into a
feature vector, which is then fed into the decoder for the current answer
generation, supervised by the ground-truth. However, this conventional
supervised learning does NOT take into account the impact of imperfect history,
violating the conversational nature of visual dialog and thus making the codec
more inclined to learn history bias rather than contextual reasoning. To this end,
inspired by the actor-critic policy gradient in reinforcement learning, we
propose a novel training paradigm called History Advantage Sequence Training
(HAST). Specifically, we intentionally impose wrong answers in the history,
obtaining an adverse critic, and see how the historic error impacts the codec's
future behavior by History Advantage - a quantity obtained by subtracting the
adverse critic from the gold reward of ground-truth history. Moreover, to make
the codec more sensitive to the history, we propose a novel attention network
called History-Aware Co-Attention Network (HACAN) which can be effectively
trained by using HAST. Experimental results on three benchmarks: VisDial
v0.9&v1.0 and GuessWhat?!, show that the proposed HAST strategy consistently
outperforms the state-of-the-art supervised counterparts. | [] | [
"Visual Dialog",
"Visual Reasoning"
] | [] | [
"Visual Dialog v1.0 test-std",
"VisDial v0.9 val"
] | [
"MRR (x 100)",
"R@10",
"NDCG (x 100)",
"R@5",
"Mean Rank",
"MRR",
"Mean",
"R@1"
] | Making History Matter: History-Advantage Sequence Training for Visual Dialog |
Joint extraction of entities and relations is an important task in natural language processing (NLP), which aims to capture all relational triplets from plain texts. This is a big challenge because some of the triplets extracted from one sentence may have overlapping entities. Most existing methods perform entity recognition followed by relation detection between all possible entity pairs, which usually suffers from numerous redundant operations. In this paper, we propose a relation-specific attention network (RSAN) to handle the issue. Our RSAN utilizes a relation-aware attention mechanism to construct specific sentence representations for each relation, and then performs sequence labeling to extract its corresponding head and tail entities. Experiments on two public datasets show that our model can effectively extract overlapping triplets and achieve state-of-the-art performance. Our code is available at https://github.com/Anery/RSAN | [] | [
"Joint Entity and Relation Extraction",
"Relation Extraction"
] | [] | [
"NYT",
"WebNLG"
] | [
"F1"
] | A Relation-Specific Attention Network for Joint Entity and Relation Extraction |
It is well known that human gaze carries significant information about visual attention. However, there are three main difficulties in incorporating the gaze data in an attention mechanism of deep neural networks: 1) the gaze fixation points are likely to have measurement errors due to blinking and rapid eye movements; 2) it is unclear when and how much the gaze data is correlated with visual attention; and 3) gaze data is not always available in many real-world situations. In this work, we introduce an effective probabilistic approach to integrate human gaze into spatiotemporal attention for egocentric activity recognition. Specifically, we represent the locations of gaze fixation points as structured discrete latent variables to model their uncertainties. In addition, we model the distribution of gaze fixations using a variational method. The gaze distribution is learned during the training process so that the ground-truth annotations of gaze locations are no longer needed in testing situations since they are predicted from the learned gaze distribution. The predicted gaze locations are used to provide informative attentional cues to improve the recognition performance. Our method outperforms all the previous state-of-the-art approaches on EGTEA, which is a large-scale dataset for egocentric activity recognition provided with gaze measurements. We also perform an ablation study and qualitative analysis to demonstrate that our attention mechanism is effective. | [] | [
"Action Recognition",
"Egocentric Activity Recognition"
] | [] | [
"EGTEA"
] | [
"Mean class accuracy",
"Average Accuracy"
] | Integrating Human Gaze into Attention for Egocentric Activity Recognition |
Previous work introduced transition-based algorithms to form a unified architecture for parsing rhetorical structures (including span, nuclearity and relation), but did not achieve satisfactory performance. In this paper, we propose that a transition-based model is more appropriate for parsing the naked discourse tree (i.e., identifying span and nuclearity) due to data sparsity. At the same time, we argue that relation labeling can benefit from the naked tree structure and should be treated carefully, taking into account three kinds of relations: within-sentence, across-sentence and across-paragraph relations. Thus, we design a pipelined two-stage parsing method for generating an RST tree from text. Experimental results show that our method achieves state-of-the-art performance, especially on span and nuclearity identification. | [] | [
"Discourse Parsing"
] | [] | [
"RST-DT"
] | [
"RST-Parseval (Relation)",
"RST-Parseval (Span)",
"RST-Parseval (Nuclearity)"
] | A Two-Stage Parsing Method for Text-Level Discourse Analysis |
Recurrent neural networks are powerful models for processing sequential data,
but they are generally plagued by vanishing and exploding gradient problems.
Unitary recurrent neural networks (uRNNs), which use unitary recurrence
matrices, have recently been proposed as a means to avoid these issues.
However, in previous experiments, the recurrence matrices were restricted to be
a product of parameterized unitary matrices, and an open question remains: when
does such a parameterization fail to represent all unitary matrices, and how
does this restricted representational capacity limit what can be learned? To
address this question, we propose full-capacity uRNNs that optimize their
recurrence matrix over all unitary matrices, leading to significantly improved
performance over uRNNs that use a restricted-capacity recurrence matrix. Our
contribution consists of two main components. First, we provide a theoretical
argument to determine if a unitary parameterization has restricted capacity.
Using this argument, we show that a recently proposed unitary parameterization
has restricted capacity for hidden state dimension greater than 7. Second, we
show how a complete, full-capacity unitary recurrence matrix can be optimized
over the differentiable manifold of unitary matrices. The resulting
multiplicative gradient step is very simple and does not require gradient
clipping or learning rate adaptation. We confirm the utility of our claims by
empirically evaluating our new full-capacity uRNNs on both synthetic and
natural data, achieving superior performance compared to both LSTMs and the
original restricted-capacity uRNNs. | [] | [
"Sequential Image Classification"
] | [] | [
"Sequential MNIST"
] | [
"Permuted Accuracy",
"Unpermuted Accuracy"
] | Full-Capacity Unitary Recurrent Neural Networks |
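To illustrate what a multiplicative gradient step over the unitary manifold can look like, here is a generic Cayley-transform retraction in the spirit of the full-capacity update described above. It is a sketch under standard manifold-optimization conventions, not the paper's exact implementation; the step size and the sign convention of the descent direction are assumptions.

```python
import numpy as np

def cayley_unitary_step(W: np.ndarray, grad: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """W: (n, n) unitary matrix, grad: dL/dW (same shape). Returns a unitary update."""
    A = grad @ W.conj().T - W @ grad.conj().T      # skew-Hermitian: A^H = -A
    n = W.shape[0]
    I = np.eye(n, dtype=W.dtype)
    # The Cayley transform of a skew-Hermitian matrix is unitary, so the
    # product below stays on the manifold (up to numerical error).
    return np.linalg.solve(I + (lr / 2.0) * A, I - (lr / 2.0) * A) @ W

# Quick check that unitarity is preserved after one step.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8)))
G = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
W_new = cayley_unitary_step(Q, G)
assert np.allclose(W_new.conj().T @ W_new, np.eye(8), atol=1e-6)
```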
Clustering is central to many data-driven application domains and has been
studied extensively in terms of distance functions and grouping algorithms.
Relatively little work has focused on learning representations for clustering.
In this paper, we propose Deep Embedded Clustering (DEC), a method that
simultaneously learns feature representations and cluster assignments using
deep neural networks. DEC learns a mapping from the data space to a
lower-dimensional feature space in which it iteratively optimizes a clustering
objective. Our experimental evaluations on image and text corpora show
significant improvement over state-of-the-art methods. | [] | [
"Image Clustering",
"Unsupervised Image Classification"
] | [] | [
"CMU-PIE",
"Imagenet-dog-15",
"YouTube Faces DB",
"CIFAR-100",
"CIFAR-10",
"Tiny-ImageNet",
"ImageNet-10",
"STL-10",
"SVHN"
] | [
"Acc",
"Train set",
"Train Split",
"ARI",
"# of clusters (k)",
"Backbone",
"Train Set",
"NMI",
"Accuracy"
] | Unsupervised Deep Embedding for Clustering Analysis |
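A minimal sketch of the clustering objective summarized above: Student's t soft assignments between embedded points and cluster centres, an auxiliary target distribution that sharpens confident assignments, and a KL divergence between the two. Tensor names and the single degree of freedom in the kernel are standard assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def soft_assign(z: torch.Tensor, centers: torch.Tensor) -> torch.Tensor:
    """z: (N, d) embeddings, centers: (K, d). Returns (N, K) soft assignments q."""
    dist_sq = torch.cdist(z, centers).pow(2)
    q = 1.0 / (1.0 + dist_sq)                 # Student's t kernel, one degree of freedom
    return q / q.sum(dim=1, keepdim=True)

def target_distribution(q: torch.Tensor) -> torch.Tensor:
    """Sharpen q into the auxiliary target p = q^2 / f, renormalized per point."""
    weight = q.pow(2) / q.sum(dim=0, keepdim=True)
    return (weight / weight.sum(dim=1, keepdim=True)).detach()

# Toy iteration: embeddings and centres would normally come from the encoder / k-means init.
z = torch.randn(256, 10, requires_grad=True)
centers = torch.randn(5, 10, requires_grad=True)
q = soft_assign(z, centers)
p = target_distribution(q)
kl = F.kl_div(q.log(), p, reduction="batchmean")   # clustering loss to backpropagate
```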
Disentangling conversations mixed together in a single stream of messages is a difficult task, made harder by the lack of large manually annotated datasets. We created a new dataset of 77,563 messages manually annotated with reply-structure graphs that both disentangle conversations and define internal conversation structure. Our dataset is 16 times larger than all previously released datasets combined, the first to include adjudication of annotation disagreements, and the first to include context. We use our data to re-examine prior work, in particular, finding that 80% of conversations in a widely used dialogue corpus are either missing messages or contain extra messages. Our manually-annotated data presents an opportunity to develop robust data-driven methods for conversation disentanglement, which will help advance dialogue research. | [] | [
"Conversation Disentanglement"
] | [] | [
"irc-disentanglement",
"Linux IRC (Ch2 Elsner)",
"Linux IRC (Ch2 Kummerfeld)"
] | [
"F",
"P",
"Local",
"1-1",
"Shen F-1",
"VI",
"R"
] | A Large-Scale Corpus for Conversation Disentanglement |
Most conditional generation tasks expect diverse outputs given a single conditional context. However, conditional generative adversarial networks (cGANs) often focus on the prior conditional information and ignore the input noise vectors, which contribute to the output variations. Recent attempts to resolve the mode collapse issue for cGANs are usually task-specific and computationally expensive. In this work, we propose a simple yet effective regularization term to address the mode collapse issue for cGANs. The proposed method explicitly maximizes the ratio of the distance between generated images to the distance between their corresponding latent codes, thus encouraging the generators to explore more minor modes during training. This mode seeking regularization term is readily applicable to various conditional generation tasks without imposing training overhead or modifying the original network structures. We validate the proposed algorithm on three conditional image synthesis tasks including categorical generation, image-to-image translation, and text-to-image synthesis with different baseline models. Both qualitative and quantitative results demonstrate the effectiveness of the proposed regularization method for improving diversity without loss of quality. | [] | [
"Image Generation",
"Image-to-Image Translation",
"Multimodal Unsupervised Image-To-Image Translation"
] | [] | [
"AFHQ",
"CelebA-HQ",
"CIFAR-10"
] | [
"FID"
] | Mode Seeking Generative Adversarial Networks for Diverse Image Synthesis |
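A minimal sketch of the mode-seeking regularization term described above, which maximizes the ratio of the image distance to the latent distance for two samples sharing one conditional context. The generator interface, the L1 distances, and the toy generator are illustrative assumptions, not the baselines used in the paper.

```python
import torch

def mode_seeking_loss(generator, context, z_dim=64, eps=1e-5):
    """Return the (negated) mode-seeking ratio so it can be added to the generator loss."""
    z1 = torch.randn(context.size(0), z_dim, device=context.device)
    z2 = torch.randn(context.size(0), z_dim, device=context.device)
    img1, img2 = generator(context, z1), generator(context, z2)
    d_images = (img1 - img2).abs().mean(dim=(1, 2, 3))   # distance between outputs
    d_latents = (z1 - z2).abs().mean(dim=1)              # distance between latent codes
    return -(d_images / (d_latents + eps)).mean()        # maximize the ratio

# Example with a toy generator: concatenate context and noise, map to a small "image".
class ToyGenerator(torch.nn.Module):
    def __init__(self, ctx_dim=10, z_dim=64):
        super().__init__()
        self.net = torch.nn.Linear(ctx_dim + z_dim, 3 * 8 * 8)
    def forward(self, ctx, z):
        return self.net(torch.cat([ctx, z], dim=1)).view(-1, 3, 8, 8)

loss_ms = mode_seeking_loss(ToyGenerator(), torch.randn(4, 10))
```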
Recent success of semantic segmentation approaches on demanding road driving
datasets has spurred interest in many related application fields. Many of these
applications involve real-time prediction on mobile platforms such as cars,
drones and various kinds of robots. The real-time setup is challenging due to
the extraordinary computational complexity involved. Many previous works address
the challenge with custom lightweight architectures which decrease
computational complexity by reducing depth, width and layer capacity with
respect to general purpose architectures. We propose an alternative approach
which achieves a significantly better performance across a wide range of
computing budgets. First, we rely on a light-weight general purpose
architecture as the main recognition engine. Then, we leverage light-weight
upsampling with lateral connections as the most cost-effective solution to
restore the prediction resolution. Finally, we propose to enlarge the receptive
field by fusing shared features at multiple resolutions in a novel fashion.
Experiments on several road driving datasets show a substantial advantage of
the proposed approach, either with ImageNet pre-trained parameters or when we
learn from scratch. Our Cityscapes test submission entitled SwiftNetRN-18
delivers 75.5% MIoU and achieves 39.9 Hz on 1024x2048 images on GTX1080Ti. | [] | [
"Real-Time Semantic Segmentation",
"Semantic Segmentation"
] | [] | [
"Cityscapes test"
] | [
"Mean IoU (class)",
"Frame (fps)",
"mIoU"
] | In Defense of Pre-trained ImageNet Architectures for Real-time Semantic Segmentation of Road-driving Images |
Semantic segmentation generates comprehensive understanding of scenes by densely predicting the category of each pixel. High-level features from Deep Convolutional Neural Networks already demonstrate their effectiveness in semantic segmentation tasks; however, the coarse resolution of high-level features often leads to inferior results for small/thin objects where detailed information is important. It is natural to consider importing low-level features to compensate for the detailed information lost in high-level features. Unfortunately, simply combining multi-level features suffers from the semantic gap among them. In this paper, we propose a new architecture, named Gated Fully Fusion (GFF), to selectively fuse features from multiple levels using gates in a fully connected way. Specifically, features at each level are enhanced by higher-level features with stronger semantics and lower-level features with more details, and gates are used to control the propagation of useful information, which significantly reduces the noise during fusion. We achieve state-of-the-art results on four challenging scene parsing datasets including Cityscapes, Pascal Context, COCO-stuff and ADE20K. | [] | [
"Scene Parsing",
"Scene Understanding",
"Semantic Segmentation"
] | [] | [
"Cityscapes test"
] | [
"Mean IoU (class)"
] | GFF: Gated Fully Fusion for Semantic Segmentation |
We show that Neural Ordinary Differential Equations (ODEs) learn representations that preserve the topology of the input space and prove that this implies the existence of functions Neural ODEs cannot represent. To address these limitations, we introduce Augmented Neural ODEs which, in addition to being more expressive models, are empirically more stable, generalize better and have a lower computational cost than Neural ODEs. | [] | [
"Image Classification"
] | [] | [
"SVHN",
"MNIST",
"CIFAR-10"
] | [
"Percentage error",
"Percentage correct",
"Accuracy"
] | Augmented Neural ODEs |
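A minimal sketch of the augmentation idea stated above: append extra zero-valued dimensions to the ODE state so the learned flow operates in a higher-dimensional space and is not forced to preserve the input topology. The fixed-step Euler integrator and the small dynamics network are stand-ins for a proper adaptive ODE solver.

```python
import torch
import torch.nn as nn

class AugmentedODEBlock(nn.Module):
    def __init__(self, data_dim: int, aug_dim: int, hidden: int = 64, steps: int = 20):
        super().__init__()
        self.aug_dim, self.steps = aug_dim, steps
        dim = data_dim + aug_dim
        self.dynamics = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Augment the state with zeros, then integrate the learned dynamics on [0, 1].
        h = torch.cat([x, torch.zeros(x.size(0), self.aug_dim, device=x.device)], dim=1)
        dt = 1.0 / self.steps
        for _ in range(self.steps):                  # explicit Euler integration
            h = h + dt * self.dynamics(h)
        return h                                     # final augmented state, fed to a classifier

block = AugmentedODEBlock(data_dim=2, aug_dim=3)
out = block(torch.randn(16, 2))                      # shape (16, 5)
```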
Node classification and graph classification are two graph learning problems
that predict the class label of a node and the class label of a graph
respectively. A node of a graph usually represents a real-world entity, e.g., a
user in a social network, or a protein in a protein-protein interaction
network. In this work, we consider a more challenging but practically useful
setting, in which a node itself is a graph instance. This leads to a
hierarchical graph perspective which arises in many domains such as social
network, biological network and document collection. For example, in a social
network, a group of people with shared interests forms a user group, whereas a
number of user groups are interconnected via interactions or common members. We
study the node classification problem in the hierarchical graph where a `node'
is a graph instance, e.g., a user group in the above example. As labels are
usually limited in real-world data, we design two novel semi-supervised
solutions named SEmi-supervised grAph cLassification via Cautious/Active
Iteration (or SEAL-C/AI for short). SEAL-C/AI adopt an iterative
framework that takes turns to build or update two classifiers, one working at
the graph instance level and the other at the hierarchical graph level. To
simplify the representation of the hierarchical graph, we propose a novel
supervised, self-attentive graph embedding method called SAGE, which embeds
graph instances of arbitrary size into fixed-length vectors. Through
experiments on synthetic data and Tencent QQ group data, we demonstrate that
SEAL-C/AI not only outperform competing methods by a significant margin in
terms of accuracy/Macro-F1, but also generate meaningful interpretations of the
learned representations. | [] | [
"Graph Classification",
"Graph Embedding",
"Graph Learning",
"Node Classification"
] | [] | [
"D&D",
"PROTEINS"
] | [
"Accuracy"
] | Semi-Supervised Graph Classification: A Hierarchical Graph Perspective |
Emotion is intrinsic to humans and consequently emotion understanding is a key part of human-like artificial intelligence (AI). Emotion recognition in conversation (ERC) is becoming increasingly popular as a new research frontier in natural language processing (NLP) due to its ability to mine opinions from the plethora of publicly available conversational data in platforms such as Facebook, Youtube, Reddit, Twitter, and others. Moreover, it has potential applications in health-care systems (as a tool for psychological analysis), education (understanding student frustration) and more. Additionally, ERC is also extremely important for generating emotion-aware dialogues that require an understanding of the user's emotions. Catering to these needs calls for effective and scalable conversational emotion-recognition algorithms. However, it is a strenuous problem to solve because of several research challenges. In this paper, we discuss these challenges and shed light on the recent research in this field. We also describe the drawbacks of these approaches and discuss the reasons why they fail to successfully overcome the research challenges in ERC. | [] | [
"Emotion Recognition",
"Emotion Recognition in Conversation"
] | [] | [
"EC"
] | [
"Micro-F1"
] | Emotion Recognition in Conversation: Research Challenges, Datasets, and Recent Advances |
As the complexity of neural network models has grown, it has become increasingly important to optimize their design automatically through metalearning. Methods for discovering hyperparameters, topologies, and learning rate schedules have led to significant increases in performance. This paper shows that loss functions can be optimized with metalearning as well, resulting in similar improvements. The method, Genetic Loss-function Optimization (GLO), discovers loss functions de novo and optimizes them for a target task. Leveraging techniques from genetic programming, GLO builds loss functions hierarchically from a set of operators and leaf nodes. These functions are repeatedly recombined and mutated to find an optimal structure, and then a covariance-matrix adaptation evolutionary strategy (CMA-ES) is used to find optimal coefficients. Networks trained with GLO loss functions are found to outperform the standard cross-entropy loss on standard image classification tasks. Training with these new loss functions requires fewer steps, results in lower test error, and allows for smaller datasets to be used. Loss-function optimization thus provides a new dimension of metalearning, and constitutes an important step towards AutoML. | [] | [
"AutoML",
"Image Classification"
] | [] | [
"MNIST"
] | [
"Percentage error"
] | Improved Training Speed, Accuracy, and Data Utilization Through Loss Function Optimization |