abstract (string) | field (sequence) | task (sequence) | method (sequence) | dataset (sequence) | metric (sequence) | title (string) |
---|---|---|---|---|---|---|
We present a self-training approach to unsupervised dependency parsing that
reuses existing supervised and unsupervised parsing algorithms. Our approach,
called `iterated reranking' (IR), starts with dependency trees generated by an
unsupervised parser, and iteratively improves these trees using the richer
probability models used in supervised parsing that are in turn trained on these
trees. Our system achieves accuracy 1.8% higher than the state-of-the-art
parser of Spitkovsky et al. (2013) on the WSJ corpus. | [] | [
"Dependency Parsing",
"Unsupervised Dependency Parsing"
] | [] | [
"Penn Treebank"
] | [
"UAS"
] | Unsupervised Dependency Parsing: Let's Use Supervised Parsers |
Inducing a grammar directly from text is one of the oldest and most challenging tasks in Computational Linguistics. Significant progress has been made for inducing dependency grammars, however the models employed are overly simplistic, particularly in comparison to supervised parsing models. In this paper we present an approach to dependency grammar induction using tree substitution grammar which is capable of learning large dependency fragments and thereby better modelling the text. We define a hierarchical non-parametric Pitman-Yor Process prior which biases towards a small grammar with simple productions. This approach significantly improves the state-of-the-art, when measured by head attachment accuracy. | [] | [
"Dependency Grammar Induction",
"Dependency Parsing",
"Unsupervised Dependency Parsing"
] | [] | [
"Penn Treebank"
] | [
"UAS"
] | Unsupervised Induction of Tree Substitution Grammars for Dependency Parsing |
We present a family of priors over probabilistic grammar weights, called the shared logistic normal distribution. This family extends the partitioned logistic normal distribution, enabling factored covariance between the probabilities of different derivation events in the probabilistic grammar, providing a new way to encode prior knowledge about an unknown grammar. We describe a variational EM algorithm for learning a probabilistic grammar based on this family of priors. We then experiment with unsupervised dependency grammar induction and show significant improvements using our model for both monolingual learning and bilingual learning with a non-parallel, multilingual corpus. | [] | [
"Dependency Grammar Induction",
"Unsupervised Dependency Parsing"
] | [] | [
"Penn Treebank"
] | [
"UAS"
] | Shared Logistic Normal Distributions for Soft Parameter Tying in Unsupervised Grammar Induction |
The NOESIS II challenge, as Track 2 of the 8th Dialogue System Technology Challenges (DSTC 8), is an extension of DSTC 7. This track incorporates new elements that are vital for the creation of a deployed task-oriented dialogue system. This paper describes our systems, which are evaluated on all subtasks of this challenge. We study the problem of employing pre-trained attention-based networks for multi-turn dialogue systems. In addition, several adaptation methods are proposed to adapt pre-trained language models to multi-turn dialogue systems, in order to preserve the intrinsic properties of dialogue systems. In the released evaluation results of Track 2 of DSTC 8, our proposed models ranked fourth in subtask 1, third in subtask 2, and first in subtasks 3 and 4. | [] | [
"Conversation Disentanglement",
"Task-Oriented Dialogue Systems"
] | [] | [
"irc-disentanglement"
] | [
"VI",
"F",
"P",
"R"
] | Pre-Trained and Attention-Based Neural Networks for Building Noetic Task-Oriented Dialogue Systems |
Autonomous driving requires 3D perception of vehicles and other objects in
the environment. Most of the current methods support 2D vehicle detection.
This paper proposes a flexible pipeline to adopt any 2D detection network and
fuse it with a 3D point cloud to generate 3D information with minimum changes
of the 2D detection networks. To identify the 3D box, an effective model
fitting algorithm is developed based on generalised car models and score maps.
A two-stage convolutional neural network (CNN) is proposed to refine the
detected 3D box. This pipeline is tested on the KITTI dataset using two
different 2D detection networks. The 3D detection results based on these two
networks are similar, demonstrating the flexibility of the proposed pipeline.
The results rank second among the 3D detection algorithms, indicating its
competence in 3D detection. | [] | [
"3D Object Detection",
"Autonomous Driving"
] | [] | [
"KITTI Cars Hard",
"KITTI Cars Moderate",
"KITTI Cars Easy"
] | [
"AP"
] | A General Pipeline for 3D Detection of Vehicles |
Attention networks in multimodal learning provide an efficient way to utilize
given visual information selectively. However, the computational cost to learn
attention distributions for every pair of multimodal input channels is
prohibitively expensive. To solve this problem, co-attention builds two
separate attention distributions for each modality neglecting the interaction
between multimodal inputs. In this paper, we propose bilinear attention
networks (BAN) that find bilinear attention distributions to utilize given
vision-language information seamlessly. BAN considers bilinear interactions
among two groups of input channels, while low-rank bilinear pooling extracts
the joint representations for each pair of channels. Furthermore, we propose a
variant of multimodal residual networks to exploit the eight attention maps of the
BAN efficiently. We quantitatively and qualitatively evaluate our model on
visual question answering (VQA 2.0) and Flickr30k Entities datasets, showing
that BAN significantly outperforms previous methods and achieves new
state-of-the-art results on both datasets. | [] | [
"Visual Question Answering"
] | [] | [
"VQA v2 test-std",
"Flickr30k Entities Test",
"VQA v2 test-dev"
] | [
"overall",
"R@10",
"R@5",
"Accuracy",
"R@1"
] | Bilinear Attention Networks |
Network representation learning (NRL) has been widely used to help analyze
large-scale networks through mapping original networks into a low-dimensional
vector space. However, existing NRL methods ignore the impact of properties of
relations on the object relevance in heterogeneous information networks (HINs).
To tackle this issue, this paper proposes a new NRL framework, called
Event2vec, for HINs to consider both quantities and properties of relations
during the representation learning process. Specifically, an event (i.e., a
complete semantic unit) is used to represent the relation among multiple
objects, and both event-driven first-order and second-order proximities are
defined to measure the object relevance according to the quantities and
properties of relations. We theoretically prove how event-driven proximities
can be preserved in the embedding space by Event2vec, which utilizes event
embeddings to facilitate learning the object embeddings. Experimental studies
demonstrate the advantages of Event2vec over state-of-the-art algorithms on
four real-world datasets and three network analysis tasks (including network
reconstruction, link prediction, and node classification). | [] | [
"Link Prediction",
"Node Classification",
"Representation Learning"
] | [] | [
"IMDb",
"Yelp",
"Douban",
"DBLP"
] | [
"AUC"
] | Representation Learning for Heterogeneous Information Networks via Embedding Events |
In this work we present a self-supervised learning framework to
simultaneously train two Convolutional Neural Networks (CNNs) to predict depth
and surface normals from a single image. In contrast to most existing
frameworks which represent outdoor scenes as fronto-parallel planes at
piece-wise smooth depth, we propose to predict depth with surface orientation
while assuming that natural scenes have piece-wise smooth normals. We show that
a simple depth-normal consistency as a soft-constraint on the predictions is
sufficient and effective for training both these networks simultaneously. The
trained normal network provides state-of-the-art predictions while the depth
network, relying on the more realistic smooth-normal assumption, outperforms the
traditional self-supervised depth prediction network by a large margin on the
KITTI benchmark. Demo video: https://youtu.be/ZD-ZRsw7hdM | [] | [
"Depth Estimation",
"Monocular Depth Estimation",
"Self-Supervised Learning"
] | [] | [
"KITTI Eigen split"
] | [
"absolute relative error"
] | Self-supervised Learning for Single View Depth and Surface Normal Estimation |
The symmetry of the corners of a box, the continuity of the surfaces of a monitor, the linkage between the torso and other body parts --- these suggest that 3D objects may have common, underlying inner relations between local structures, and reasoning about them is a fundamental ability of intelligent species. In this paper, we propose an effective plug-and-play module called the structural relation network (SRN) to reason about the structural dependencies of local regions in 3D point clouds. Existing network architectures on point sets such as PointNet++ capture local structures individually, without considering their inner interactions. Instead, our SRN simultaneously exploits local information by modeling their geometrical and locational relations, which play critical roles in how humans understand 3D objects. The proposed SRN module is simple, interpretable, does not require any additional supervision signals, and can be easily plugged into existing networks. Experimental results on benchmark datasets indicate promising boosts on the tasks of 3D point cloud classification and segmentation by capturing structural relations with the SRN module.
| [] | [
"3D Part Segmentation",
"3D Point Cloud Classification",
"Relational Reasoning"
] | [] | [
"ShapeNet-Part"
] | [
"Class Average IoU",
"Instance Average IoU"
] | Structural Relational Reasoning of Point Clouds |
This work presents a novel pipeline that demonstrates what is achievable with a combined effort of state-of-the-art approaches, surpassing 50% exact match on the NaturalQuestions and EfficientQA datasets. Specifically, it proposes the novel R2-D2 (Rank twice, reaD twice) pipeline composed of a retriever, a reranker, an extractive reader, a generative reader, and a simple way to combine them. Furthermore, previous work often comes with a massive index of external documents that scales in the order of tens of GiB. This work presents a simple approach for pruning the contents of a massive index such that the open-domain QA system, together with index, OS, and library components, fits into a 6GiB Docker image while retaining only 8% of the original index contents and losing only 3% EM accuracy. | [] | [
"Open-Domain Question Answering"
] | [] | [
"Natural Questions"
] | [
"Exact Match"
] | Pruning the Index Contents for Memory Efficient Open-Domain QA |
Our work involves enriching the Stack-LSTM transition-based AMR parser (Ballesteros and Al-Onaizan, 2017) by augmenting training with Policy Learning and rewarding the Smatch score of sampled graphs. In addition, we combined several AMR-to-text alignments with an attention mechanism and supplemented the parser with pre-processed concept identification, named entities and contextualized embeddings. We achieve highly competitive performance that is comparable to the best published results. We present an in-depth study ablating each of the new components of the parser. | [] | [
"AMR Parsing"
] | [] | [
"LDC2017T10"
] | [
"Smatch"
] | Rewarding Smatch: Transition-Based AMR Parsing with Reinforcement Learning |
Video based action recognition is one of the important and challenging
problems in computer vision research. Bag of Visual Words model (BoVW) with
local features has become the most popular method and obtained the
state-of-the-art performance on several realistic datasets, such as the HMDB51,
UCF50, and UCF101. BoVW is a general pipeline to construct a global
representation from a set of local features, which is mainly composed of five
steps: (i) feature extraction, (ii) feature pre-processing, (iii) codebook
generation, (iv) feature encoding, and (v) pooling and normalization. Many
efforts have been made in each step independently in different scenarios and
their effect on action recognition is still unknown. Meanwhile, video data
exhibits different views of visual pattern, such as static appearance and
motion dynamics. Multiple descriptors are usually extracted to represent these
different views. Many feature fusion methods have been developed in other areas
and their influence on action recognition has never been investigated before.
This paper aims to provide a comprehensive study of all steps in BoVW and
different fusion methods, and uncover some good practice to produce a
state-of-the-art action recognition system. Specifically, we explore two kinds
of local features, ten kinds of encoding methods, eight kinds of pooling and
normalization strategies, and three kinds of fusion methods. We conclude that
every step is crucial for contributing to the final recognition rate.
Furthermore, based on our comprehensive study, we propose a simple yet
effective representation, called hybrid representation, by exploring the
complementarity of different BoVW frameworks and local descriptors. Using this
representation, we obtain the state-of-the-art on the three challenging
datasets: HMDB51 (61.1%), UCF50 (92.3%), and UCF101 (87.9%). | [] | [
"Action Recognition",
"Temporal Action Localization"
] | [] | [
"UCF101"
] | [
"3-fold Accuracy"
] | Bag of Visual Words and Fusion Methods for Action Recognition: Comprehensive Study and Good Practice |
Real-time semantic segmentation of LiDAR data is crucial for autonomously driving vehicles, which are usually equipped with an embedded platform and have limited computational resources. Approaches that operate directly on the point cloud use complex spatial aggregation operations, which are very expensive and difficult to optimize for embedded platforms. They are therefore not suitable for real-time applications with embedded systems. As an alternative, projection-based methods are more efficient and can run on embedded platforms. However, the current state-of-the-art projection-based methods do not achieve the same accuracy as point-based methods and use millions of parameters. In this paper, we therefore propose a projection-based method, called Multi-scale Interaction Network (MINet), which is very efficient and accurate. The network uses multiple paths with different scales and balances the computational resources between the scales. Additional dense interactions between the scales avoid redundant computations and make the network highly efficient. The proposed network outperforms point-based, image-based, and projection-based methods in terms of accuracy, number of parameters, and runtime. Moreover, the network processes more than 24 scans per second on an embedded platform, which is higher than the framerates of LiDAR sensors. The network is therefore suitable for autonomous vehicles. | [] | [
"3D Semantic Segmentation",
"Autonomous Vehicles",
"Real-Time 3D Semantic Segmentation",
"Real-Time Semantic Segmentation",
"Semantic Segmentation"
] | [] | [
"SemanticKITTI"
] | [
"Parameters (M)",
"Speed (FPS)",
"mIoU"
] | Multi-scale Interaction for Real-time LiDAR Data Segmentation on an Embedded Platform |
Current state-of-the-art approaches for spatio-temporal action localization
rely on detections at the frame level and model temporal context with 3D
ConvNets. Here, we go one step further and model spatio-temporal relations to
capture the interactions between human actors, relevant objects and scene
elements essential to differentiate similar human actions. Our approach is
weakly supervised and mines the relevant elements automatically with an
actor-centric relational network (ACRN). ACRN computes and accumulates
pair-wise relation information from actor and global scene features, and
generates relation features for action classification. It is implemented as
neural networks and can be trained jointly with an existing action detection
system. We show that ACRN outperforms alternative approaches which capture
relation information, and that the proposed framework improves upon the
state-of-the-art performance on JHMDB and AVA. A visualization of the learned
relation features confirms that our approach is able to attend to the relevant
relations for each action. | [] | [
"Action Classification",
"Action Classification ",
"Action Detection",
"Action Localization",
"Action Recognition",
"Spatio-Temporal Action Localization",
"Temporal Action Localization"
] | [] | [
"AVA v2.1"
] | [
"mAP (Val)"
] | Actor-Centric Relation Network |
The continually increasing number of complex datasets each year necessitates
ever improving machine learning methods for robust and accurate categorization
of these data. This paper introduces Random Multimodel Deep Learning (RMDL): a
new ensemble, deep learning approach for classification. Deep learning models
have achieved state-of-the-art results across many domains. RMDL solves the
problem of finding the best deep learning structure and architecture while
simultaneously improving robustness and accuracy through ensembles of deep
learning architectures. RMDL can accept a variety of data types as input, including
text, video, images, and symbolic data. This paper describes RMDL and shows test
results for image and text data including MNIST, CIFAR-10, WOS, Reuters, IMDB,
and 20newsgroup. These test results show that RMDL produces consistently better
performance than standard methods over a broad range of data types and
classification problems. | [] | [
"Document Classification",
"Face Recognition",
"Hierarchical Text Classification of Blurbs (GermEval 2019)",
"Image Classification",
"Multi-Label Text Classification",
"Unsupervised Pre-training"
] | [] | [
"Measles",
"LOCAL DATASET",
"20NEWS",
"CIFAR-10",
"MNIST",
"UCI measles"
] | [
"Percentage error",
"Percentage correct",
"Sensitivity",
"Accuracy (%)",
"Accuracy",
"Sensitivity (VEB)"
] | RMDL: Random Multimodel Deep Learning for Classification |
This work proposes a general-purpose, fully-convolutional network
architecture for efficiently processing large-scale 3D data. One striking
characteristic of our approach is its ability to process unorganized 3D
representations such as point clouds as input and then transform them
internally into ordered structures to be processed via 3D convolutions. In
contrast to conventional approaches that maintain either unorganized or
organized representations, from input to output, our approach has the advantage
of operating on memory efficient input data representations while at the same
time exploiting the natural structure of convolutional operations to avoid the
redundant computing and storing of spatial information in the network. The
network eliminates the need to pre- or post-process the raw sensor data. This,
together with the fully-convolutional nature of the network, makes it an
end-to-end method able to process point clouds of huge spaces or even entire
rooms with up to 200k points at once. Another advantage is that our network can
produce either an ordered output or map predictions directly onto the input
cloud, thus making it suitable as a general-purpose point cloud descriptor
applicable to many 3D tasks. We demonstrate our network's ability to
effectively learn both low-level features as well as complex compositional
relationships by evaluating it on benchmark datasets for semantic voxel
segmentation, semantic part segmentation and 3D scene captioning. | [] | [
"Semantic Segmentation"
] | [] | [
"ScanNet"
] | [
"3DIoU"
] | Fully-Convolutional Point Networks for Large-Scale Point Clouds |
We propose a new layer design by adding a linear gating mechanism to shortcut
connections. By using a scalar parameter to control each gate, we provide a way
to learn identity mappings by optimizing only one parameter. We build upon the
motivation behind Residual Networks, where a layer is reformulated in order to
make learning identity mappings less problematic to the optimizer. The
augmentation introduces only one extra parameter per layer, and provides easier
optimization by making degeneration into identity mappings simpler. We propose
a new model, the Gated Residual Network, which is the result when augmenting
Residual Networks. Experimental results show that augmenting layers provides
better optimization, increased performance, and more layer independence. We
evaluate our method on MNIST using fully-connected networks, showing empirical
indications that our augmentation facilitates the optimization of deep models,
and that it provides high tolerance to full layer removal: the model retains
over 90% of its performance even after half of its layers have been randomly
removed. We also evaluate our model on CIFAR-10 and CIFAR-100 using Wide Gated
ResNets, achieving 3.65% and 18.27% error, respectively. | [] | [
"Image Classification"
] | [] | [
"CIFAR-100",
"CIFAR-10"
] | [
"Percentage correct"
] | Learning Identity Mappings with Residual Gates |
In this paper, we propose a novel Pattern-Affinitive Propagation (PAP) framework to jointly predict depth, surface normal and semantic segmentation. The motivation behind it comes from the statistical observation that pattern-affinitive pairs recur frequently across different tasks as well as within a task. Thus, we can conduct two types of propagation, cross-task propagation and task-specific propagation, to adaptively diffuse those similar patterns. The former integrates cross-task affinity patterns to adapt to each task therein through the calculation of non-local relationships. Then the latter performs an iterative diffusion in the feature space so that the cross-task affinity patterns can be widely spread within the task. Accordingly, the learning of each task can be regularized and boosted by the complementary task-level affinities. Extensive experiments demonstrate the effectiveness and superiority of our method on the three joint tasks. Meanwhile, we achieve state-of-the-art or competitive results on the three related datasets, NYUD-v2, SUN-RGBD and KITTI. | [] | [
"Monocular Depth Estimation",
"Semantic Segmentation"
] | [] | [
"NYU-Depth V2"
] | [
"RMSE"
] | Pattern-Affinitive Propagation across Depth, Surface Normal and Semantic Segmentation |
Monocular depth estimation is an ill-posed problem, and as such critically relies on scene priors and semantics. Due to its complexity, we propose a deep neural network model based on a semantic divide-and-conquer approach. Our model decomposes a scene into semantic segments, such as object instances and background stuff classes, and then predicts a scale and shift invariant depth map for each semantic segment in a canonical space. Semantic segments of the same category share the same depth decoder, so the global depth prediction task is decomposed into a series of category-specific ones, which are simpler to learn and easier to generalize to new scene types. Finally, our model stitches each local depth segment by predicting its scale and shift based on the global context of the image. The model is trained end-to-end using a multi-task loss for panoptic segmentation and depth prediction, and is therefore able to leverage large-scale panoptic segmentation datasets to boost its semantic understanding. We validate the effectiveness of our approach and show state-of-the-art performance on three benchmark datasets.
| [] | [
"Depth Estimation",
"Monocular Depth Estimation",
"Panoptic Segmentation"
] | [] | [
"NYU-Depth V2",
"Cityscapes test"
] | [
"RMSE"
] | SDC-Depth: Semantic Divide-and-Conquer Network for Monocular Depth Estimation |
Pose Machines provide a sequential prediction framework for learning rich
implicit spatial models. In this work we show a systematic design for how
convolutional networks can be incorporated into the pose machine framework for
learning image features and image-dependent spatial models for the task of pose
estimation. The contribution of this paper is to implicitly model long-range
dependencies between variables in structured prediction tasks such as
articulated pose estimation. We achieve this by designing a sequential
architecture composed of convolutional networks that directly operate on belief
maps from previous stages, producing increasingly refined estimates for part
locations, without the need for explicit graphical model-style inference. Our
approach addresses the characteristic difficulty of vanishing gradients during
training by providing a natural learning objective function that enforces
intermediate supervision, thereby replenishing back-propagated gradients and
conditioning the learning procedure. We demonstrate state-of-the-art
performance and outperform competing methods on standard benchmarks including
the MPII, LSP, and FLIC datasets. | [] | [
"3D Human Pose Estimation",
"Pose Estimation",
"Structured Prediction"
] | [] | [
"FLIC Wrists",
"Leeds Sports Poses",
"FLIC Elbows",
"Total Capture",
"J-HMDB",
"MPII Human Pose"
] | [
"Average MPJPE (mm)",
"[email protected]",
"PCKh-0.5",
"Mean [email protected]",
"PCK"
] | Convolutional Pose Machines |
In this paper, we present the Adaptive Computation Steps (ACS) algorithm, which
enables end-to-end speech recognition models to dynamically decide how many
frames should be processed to predict a linguistic output. The model that
applies ACS algorithm follows the encoder-decoder framework, while unlike the
attention-based models, it produces alignments independently at the encoder
side using the correlation between adjacent frames. Thus, predictions can be
made as soon as sufficient acoustic information is received, which makes the
model applicable in online cases. Besides, a small change is made to the
decoding stage of the encoder-decoder framework, which allows the prediction to
exploit bidirectional contexts. We verify the ACS algorithm on a Mandarin
speech corpus AIShell-1, and it achieves a 31.2% CER in the online setting,
compared to the 32.4% CER of the attention-based model. To fully demonstrate
the advantage of ACS algorithm, offline experiments are conducted, in which our
ACS model achieves an 18.7% CER, outperforming the attention-based counterpart
with the CER of 22.0%. | [] | [
"End-To-End Speech Recognition",
"Speech Recognition"
] | [] | [
"AISHELL-1"
] | [
"Word Error Rate (WER)"
] | End-to-end Speech Recognition with Adaptive Computation Steps |
Normalizing Flows are generative models which produce tractable distributions where both sampling and density evaluation can be efficient and exact. The goal of this survey article is to give a coherent and comprehensive review of the literature around the construction and use of Normalizing Flows for distribution learning. We aim to provide context and explanation of the models, review current state-of-the-art literature, and identify open questions and promising future directions. | [
"Distribution Approximation"
] | [] | [
"Normalizing Flows"
] | [] | [] | Normalizing Flows: An Introduction and Review of Current Methods |
Vision-and-Language Navigation (VLN) tasks such as Room-to-Room (R2R) require machine agents to interpret natural language instructions and learn to act in visually realistic environments to achieve navigation goals. The overall task requires competence in several perception problems: successful agents combine spatio-temporal, vision and language understanding to produce appropriate action sequences. Our approach adapts pre-trained vision and language representations to relevant in-domain tasks making them more effective for VLN. Specifically, the representations are adapted to solve both a cross-modal sequence alignment and sequence coherence task. In the sequence alignment task, the model determines whether an instruction corresponds to a sequence of visual frames. In the sequence coherence task, the model determines whether the perceptual sequences are predictive sequentially in the instruction-conditioned latent space. By transferring the domain-adapted representations, we improve competitive agents in R2R as measured by the success rate weighted by path length (SPL) metric. | [] | [
"Representation Learning",
"Vision and Language Navigation"
] | [] | [
"VLN Challenge"
] | [
"length",
"spl",
"oracle success",
"success",
"error"
] | Transferable Representation Learning in Vision-and-Language Navigation |
Despite recent impressive results on single-object and single-domain image generation, the generation of complex scenes with multiple objects remains challenging. In this paper, we start with the idea that a model must be able to understand individual objects and relationships between objects in order to generate complex scenes well. Our layout-to-image-generation method, which we call Object-Centric Generative Adversarial Network (or OC-GAN), relies on a novel Scene-Graph Similarity Module (SGSM). The SGSM learns representations of the spatial relationships between objects in the scene, which lead to our model's improved layout-fidelity. We also propose changes to the conditioning mechanism of the generator that enhance its object instance-awareness. Apart from improving image quality, our contributions mitigate two failure modes in previous approaches: (1) spurious objects being generated without corresponding bounding boxes in the layout, and (2) overlapping bounding boxes in the layout leading to merged objects in images. Extensive quantitative evaluation and ablation studies demonstrate the impact of our contributions, with our model outperforming previous state-of-the-art approaches on both the COCO-Stuff and Visual Genome datasets. Finally, we address an important limitation of evaluation metrics used in previous works by introducing SceneFID -- an object-centric adaptation of the popular Fr{\'e}chet Inception Distance metric, that is better suited for multi-object images. | [] | [
"Image Generation",
"Layout-to-Image Generation"
] | [] | [
"COCO-Stuff 64x64",
"COCO-Stuff 256x256",
"Visual Genome 128x128",
"Visual Genome 256x256",
"Visual Genome 64x64",
"COCO-Stuff 128x128"
] | [
"Inception Score",
"SceneFID",
"FID"
] | Object-Centric Image Generation from Layouts |
Despite data augmentation being a de facto technique for boosting the performance of deep neural networks, little attention has been paid to developing augmentation strategies for generative adversarial networks (GANs). To this end, we introduce a novel augmentation scheme designed specifically for GAN-based semantic image synthesis models. We propose to randomly warp object shapes in the semantic label maps used as an input to the generator. The local shape discrepancies between the warped and non-warped label maps and images enable the GAN to learn better the structural and geometric details of the scene and thus to improve the quality of generated images. While benchmarking the augmented GAN models against their vanilla counterparts, we discover that the quantification metrics reported in the previous semantic image synthesis studies are strongly biased towards specific semantic classes as they are derived via an external pre-trained segmentation network. We therefore propose to improve the established semantic image synthesis evaluation scheme by analyzing separately the performance of generated images on the biased and unbiased classes for the given segmentation network. Finally, we show strong quantitative and qualitative improvements obtained with our augmentation scheme, on both class splits, using state-of-the-art semantic image synthesis models across three different datasets. On average across COCO-Stuff, ADE20K and Cityscapes datasets, the augmented models outperform their vanilla counterparts by ~3 mIoU and ~10 FID points. | [] | [
"Data Augmentation",
"Image Generation",
"Image-to-Image Translation"
] | [] | [
"ADE20K Labels-to-Photos",
"COCO-Stuff Labels-to-Photos",
"Cityscapes Labels-to-Photo"
] | [
"mIoU",
"FID",
"Accuracy"
] | Improving Augmentation and Evaluation Schemes for Semantic Image Synthesis |
Supervised training of neural networks for classification is typically performed with a global loss function. The loss function provides a gradient for the output layer, and this gradient is back-propagated to hidden layers to dictate an update direction for the weights. An alternative approach is to train the network with layer-wise loss functions. In this paper we demonstrate, for the first time, that layer-wise training can approach the state-of-the-art on a variety of image datasets. We use single-layer sub-networks and two different supervised loss functions to generate local error signals for the hidden layers, and we show that the combination of these losses helps with optimization in the context of local learning. Using local errors could be a step towards more biologically plausible deep learning because the global error does not have to be transported back to hidden layers. A completely backprop-free variant outperforms previously reported results among methods aiming for higher biological plausibility. Code is available at https://github.com/anokland/local-loss | [] | [
"Image Classification"
] | [] | [
"Kuzushiji-MNIST",
"CIFAR-100",
"CIFAR-10",
"MNIST",
"STL-10",
"SVHN",
"Fashion-MNIST"
] | [
"Error",
"Percentage error",
"Percentage correct",
"Accuracy"
] | Training Neural Networks with Local Error Signals |
Recent studies have used deep residual convolutional neural networks (CNNs)
for JPEG compression artifact reduction. This study proposes a scalable CNN
called S-Net. Our approach effectively adjusts the network scale dynamically in
a multitask system for real-time operation with little performance loss. It
offers a simple and direct technique to evaluate the performance gains obtained
with increasing network depth, and it is helpful for removing redundant network
layers to maximize the network efficiency. We implement our architecture using
the Keras framework with the TensorFlow backend on an NVIDIA K80 GPU server. We
train our models on the DIV2K dataset and evaluate their performance on public
benchmark datasets. To validate the generality and universality of the proposed
method, we created and utilized a new dataset, called WIN143, for
evaluating over-processed images. Experimental results indicate that our
proposed approach outperforms other CNN-based methods and achieves
state-of-the-art performance. | [] | [
"JPEG Artifact Correction",
"Jpeg Compression Artifact Reduction"
] | [] | [
"LIVE1 (Quality 20 Grayscale)",
"Live1 (Quality 10 Grayscale)",
"LIVE1 (Quality 10 Color)",
"LIVE1 (Quality 20 Color)"
] | [
"SSIM",
"PSNR",
"PSNR-B"
] | S-Net: A Scalable Convolutional Neural Network for JPEG Compression Artifact Reduction |
JPEG is one of the widely used lossy compression methods. JPEG-compressed
images usually suffer from compression artifacts including blocking and
blurring, especially at low bit-rates. Soft decoding is an effective solution
to improve the quality of compressed images without changing codec or
introducing extra coding bits. Inspired by the excellent performance of the
deep convolutional neural networks (CNNs) on both low-level and high-level
computer vision problems, we develop a dual pixel-wavelet domain deep
CNNs-based soft decoding network for JPEG-compressed images, namely DPW-SDNet.
The pixel domain deep network takes the four downsampled versions of the
compressed image to form a 4-channel input and outputs a pixel domain
prediction, while the wavelet domain deep network uses the 1-level discrete
wavelet transformation (DWT) coefficients to form a 4-channel input to produce
a DWT domain prediction. The pixel domain and wavelet domain estimates are
combined to generate the final soft decoded result. Experimental results
demonstrate the superiority of the proposed DPW-SDNet over several
state-of-the-art compression artifacts reduction algorithms. | [] | [
"JPEG Artifact Correction"
] | [] | [
"LIVE1 (Quality 20 Grayscale)",
"Live1 (Quality 10 Grayscale)",
"LIVE1 (Quality 10 Color)",
"LIVE1 (Quality 20 Color)"
] | [
"SSIM",
"PSNR",
"PSNR-B"
] | DPW-SDNet: Dual Pixel-Wavelet Domain Deep CNNs for Soft Decoding of JPEG-Compressed Images |
Network embeddings have become very popular in learning effective feature
representations of networks. Motivated by the recent successes of embeddings in
natural language processing, researchers have tried to find network embeddings
in order to exploit machine learning algorithms for mining tasks like node
classification and edge prediction. However, most of the work focuses on
finding distributed representations of nodes, which are inherently ill-suited
to tasks such as community detection which are intuitively dependent on
subgraphs.
Here, we propose sub2vec, an unsupervised scalable algorithm to learn feature
representations of arbitrary subgraphs. We provide means to characterize
similarities between subgraphs and provide a theoretical analysis of sub2vec and
demonstrate that it preserves the so-called local proximity. We also highlight
the usability of sub2vec by leveraging it for network mining tasks, like
community detection. We show that sub2vec gets significant gains over
state-of-the-art methods and node-embedding methods. In particular, sub2vec
offers an approach to generate a richer vocabulary of features of subgraphs to
support representation and reasoning. | [] | [
"Community Detection",
"Node Classification"
] | [] | [
"Android Malware Dataset"
] | [
"Accuracy"
] | Distributed Representation of Subgraphs |
We present a neural encoder-decoder AMR parser that extends an attention-based model by predicting the alignment between graph nodes and sentence tokens explicitly with a pointer mechanism. Candidate lemmas are predicted as a pre-processing step so that the lemmas of lexical concepts, as well as constant strings, are factored out of the graph linearization and recovered through the predicted alignments. The approach does not rely on syntactic parses or extensive external resources. Our parser obtained 59% Smatch on the SemEval test set. | [] | [
"AMR Parsing",
"Lemmatization"
] | [] | [
"LDC2017T10"
] | [
"Smatch"
] | Oxford at SemEval-2017 Task 9: Neural AMR Parsing with Pointer-Augmented Attention |
Panoptic segmentation aims at generating pixel-wise class and instance predictions for each pixel in the input image, which is a challenging task and far more complicated than naively fusing the semantic and instance segmentation results. Prediction fusion is therefore important to achieve accurate panoptic segmentation. In this paper, we present REFINE, pREdiction FusIon NEtwork for panoptic segmentation, to achieve high-quality panoptic segmentation by improving cross-task prediction fusion, and within-task prediction fusion. Our single-model ResNeXt-101 with DCN achieves PQ=51.5 on the COCO dataset, surpassing state-of-the-art performance by a convincing margin and is comparable with ensembled models. Our smaller model with a ResNet-50 backbone achieves PQ=44.9, which is comparable with state-of-the-art methods with larger backbones. | [] | [
"Instance Segmentation",
"Panoptic Segmentation",
"Semantic Segmentation"
] | [] | [
"COCO test-dev"
] | [
"PQst",
"PQ",
"PQth"
] | REFINE: Prediction Fusion Network for Panoptic Segmentation |
Panoptic segmentation that unifies instance segmentation and semantic segmentation has recently attracted increasing attention. While most existing methods focus on designing novel architectures, we steer toward a different perspective: performing automated multi-loss adaptation (named Ada-Segment) on the fly to flexibly adjust multiple training losses over the course of training using a controller trained to capture the learning dynamics. This offers a few advantages: it bypasses manual tuning of the sensitive loss combination, a decisive factor for panoptic segmentation; it allows to explicitly model the learning dynamics, and reconcile the learning of multiple objectives (up to ten in our experiments); with an end-to-end architecture, it generalizes to different datasets without the need of re-tuning hyperparameters or re-adjusting the training process laboriously. Our Ada-Segment brings 2.7% panoptic quality (PQ) improvement on COCO val split from the vanilla baseline, achieving the state-of-the-art 48.5% PQ on COCO test-dev split and 32.9% PQ on ADE20K dataset. The extensive ablation studies reveal the ever-changing dynamics throughout the training process, necessitating the incorporation of an automated and adaptive learning strategy as presented in this paper. | [] | [
"Instance Segmentation",
"Panoptic Segmentation",
"Semantic Segmentation"
] | [] | [
"COCO test-dev"
] | [
"PQst",
"PQ",
"PQth"
] | Ada-Segment: Automated Multi-loss Adaptation for Panoptic Segmentation |
Person re-identification is an important task that requires learning
discriminative visual features for distinguishing different person identities.
Diverse auxiliary information has been utilized to improve the visual feature
learning. In this paper, we propose to exploit natural language description as
additional training supervisions for effective visual features. Compared with
other auxiliary information, language can describe a specific person from more
compact and semantic visual aspects, thus is complementary to the pixel-level
image data. Our method not only learns better global visual feature with the
supervision of the overall description but also enforces semantic consistencies
between local visual and linguistic features, which is achieved by building
global and local image-language associations. The global image-language
association is established according to the identity labels, while the local
association is based upon the implicit correspondences between image regions
and noun phrases. Extensive experiments demonstrate the effectiveness of
employing language as training supervisions with the two association schemes.
Our method achieves state-of-the-art performance without utilizing any
auxiliary information during testing and shows better performance than other
joint embedding methods for the image-language association. | [] | [
"Person Re-Identification",
"Text based Person Retrieval"
] | [] | [
"CUHK-PEDES"
] | [
"R@10",
"R@1",
"R@5"
] | Improving Deep Visual Representation for Person Re-identification by Global and Local Image-language Association |
We introduce the dense captioning task, which requires a computer vision
system to both localize and describe salient regions in images in natural
language. The dense captioning task generalizes object detection when the
descriptions consist of a single word, and Image Captioning when one predicted
region covers the full image. To address the localization and description task
jointly we propose a Fully Convolutional Localization Network (FCLN)
architecture that processes an image with a single, efficient forward pass,
requires no external region proposals, and can be trained end-to-end with a
single round of optimization. The architecture is composed of a Convolutional
Network, a novel dense localization layer, and Recurrent Neural Network
language model that generates the label sequences. We evaluate our network on
the Visual Genome dataset, which comprises 94,000 images and 4,100,000
region-grounded captions. We observe both speed and accuracy improvements over
baselines based on current state of the art approaches in both generation and
retrieval settings. | [] | [
"Image Captioning",
"Language Modelling",
"Object Detection"
] | [] | [
"Visual Genome"
] | [
"MAP"
] | DenseCap: Fully Convolutional Localization Networks for Dense Captioning |
Video-based human action recognition is currently one of the most active research areas in computer vision. Various research studies indicate that the performance of action recognition is highly dependent on the type of features being extracted and how the actions are represented. Since the release of the Kinect camera, a large number of Kinect-based human action recognition techniques have been proposed in the literature. However, there still does not exist a thorough comparison of these Kinect-based techniques under the grouping of feature types, such as handcrafted versus deep learning features and depth-based versus skeleton-based features. In this paper, we analyze and compare ten recent Kinect-based algorithms for both cross-subject action recognition and cross-view action recognition using six benchmark datasets. In addition, we have implemented and improved some of these techniques and included their variants in the comparison. Our experiments show that the majority of methods perform better on cross-subject action recognition than cross-view action recognition, that skeleton-based features are more robust for cross-view recognition than depth-based features, and that deep learning features are suitable for large datasets. | [] | [
"Action Recognition",
"Skeleton Based Action Recognition",
"Temporal Action Localization"
] | [] | [
"NTU RGB+D"
] | [
"Accuracy (CS)",
"Accuracy (CV)"
] | A Comparative Review of Recent Kinect-based Action Recognition Algorithms |
Skeleton-based human action recognition is becoming popular due to its computational efficiency and robustness. Since not all skeleton joints are informative for action recognition, attention mechanisms are adopted to extract informative joints and suppress the influence of irrelevant ones. However, existing attention frameworks usually ignore helpful scenario context information. In this paper, we propose a cross-attention module that consists of a self-attention branch and a cross-attention branch for skeleton-based action recognition. It helps to extract joints that are not only more informative but also highly correlated to the corresponding scenario context information. Moreover, the cross-attention module maintains input variables’ size and can be flexibly incorporated into many existing frameworks without breaking their behaviors. To facilitate end-to-end training, we further develop a scenario context information extraction branch to extract context information from raw RGB video directly. We conduct comprehensive experiments on the NTU RGB+D and the Kinetics databases, and experimental results demonstrate the correctness and effectiveness of the proposed model. | [] | [
"Action Recognition",
"Skeleton Based Action Recognition",
"Temporal Action Localization"
] | [] | [
"NTU RGB+D"
] | [
"Accuracy (CS)",
"Accuracy (CV)"
] | Context-Aware Cross-Attention for Skeleton-Based Human Action Recognition |
In this paper, we propose a three-stream convolutional neural network (3SCNN) for action recognition from skeleton sequences, which aims to thoroughly and fully exploit the skeleton data by extracting, learning, fusing and inferring multiple motion-related features, including 3D joint positions and joint displacements across adjacent frames as well as oriented bone segments. The proposed 3SCNN involves three sequential stages. The first stage enriches three independently extracted features by co-occurrence feature learning. The second stage involves multi-channel pairwise fusion to take advantage of the complementary and diverse nature among three features. The third stage is a multi-task and ensemble learning network to further improve the generalization ability of 3SCNN. Experimental results on the standard dataset show the effectiveness of our proposed multi-stream feature learning, fusion and inference method for skeleton-based 3D action recognition. | [] | [
"3D Action Recognition",
"Action Recognition",
"Skeleton Based Action Recognition"
] | [] | [
"NTU RGB+D"
] | [
"Accuracy (CS)",
"Accuracy (CV)"
] | Three-Stream Convolutional Neural Network With Multi-Task and Ensemble Learning for 3D Action Recognition |
Skeleton-based action recognition has recently attracted a lot of attention. Researchers are coming up with new approaches for extracting spatio-temporal relations and making considerable progress on large-scale skeleton-based datasets. Most of the architectures being proposed are based upon recurrent neural networks (RNNs), convolutional neural networks (CNNs) and graph-based CNNs. For skeleton-based action recognition, long-term contextual information is central, yet it is not captured by current architectures. To better represent and capture long-term spatio-temporal relationships, we propose three variants of the Self-Attention Network (SAN), namely SAN-V1, SAN-V2 and SAN-V3. Our SAN variants have the impressive capability of extracting high-level semantics by capturing long-range correlations. We have also integrated the Temporal Segment Network (TSN) with our SAN variants, which resulted in improved overall performance. Different configurations of SAN variants and TSN are explored with extensive experiments. Our chosen configuration outperforms the state-of-the-art Top-1 and Top-5 accuracies by 4.4% and 7.9%, respectively, on Kinetics and shows consistently better performance than state-of-the-art methods on NTU RGB+D. | [] | [
"Action Recognition",
"Skeleton Based Action Recognition",
"Temporal Action Localization"
] | [] | [
"NTU RGB+D"
] | [
"Accuracy (CS)",
"Accuracy (CV)"
] | Self-Attention Network for Skeleton-based Human Action Recognition |
Recurrent neural networks (RNNs) are capable of modeling temporal dependencies of complex sequential data. In general, current available structures of RNNs tend to concentrate on controlling the contributions of current and previous information. However, the exploration of different importance levels of different elements within an input vector is always ignored. We propose a simple yet effective Element-wise-Attention Gate (EleAttG), which can be easily added to an RNN block (e.g. all RNN neurons in an RNN layer), to empower the RNN neurons to have attentiveness capability. For an RNN block, an EleAttG is used for adaptively modulating the input by assigning different levels of importance, i.e., attention, to each element/dimension of the input. We refer to an RNN block equipped with an EleAttG as an EleAtt-RNN block. Instead of modulating the input as a whole, the EleAttG modulates the input at fine granularity, i.e., element-wise, and the modulation is content adaptive. The proposed EleAttG, as an additional fundamental unit, is general and can be applied to any RNN structures, e.g., standard RNN, Long Short-Term Memory (LSTM), or Gated Recurrent Unit (GRU). We demonstrate the effectiveness of the proposed EleAtt-RNN by applying it to different tasks including the action recognition, from both skeleton-based data and RGB videos, gesture recognition, and sequential MNIST classification. Experiments show that adding attentiveness through EleAttGs to RNN blocks significantly improves the power of RNNs. | [] | [
"Action Recognition",
"Gesture Recognition",
"Skeleton Based Action Recognition"
] | [] | [
"NTU RGB+D",
"N-UCLA",
"SYSU 3D"
] | [
"Accuracy (CS)",
"Accuracy (CV)",
"Accuracy"
] | EleAtt-RNN: Adding Attentiveness to Neurons in Recurrent Neural Networks |
Human action recognition is an important task in computer vision. Extracting
discriminative spatial and temporal features to model the spatial and temporal
evolutions of different actions plays a key role in accomplishing this task. In
this work, we propose an end-to-end spatial and temporal attention model for
human action recognition from skeleton data. We build our model on top of the
Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM), which
learns to selectively focus on discriminative joints of skeleton within each
frame of the inputs and pays different levels of attention to the outputs of
different frames. Furthermore, to ensure effective training of the network, we
propose a regularized cross-entropy loss to drive the model learning process
and develop a joint training strategy accordingly. Experimental results
demonstrate the effectiveness of the proposed model, both on the small human
action recognition data set of SBU and the currently largest NTU dataset. | [] | [
"Action Recognition",
"Temporal Action Localization"
] | [] | [
"NTU RGB+D"
] | [
"Accuracy (CS)",
"Accuracy (CV)"
] | An End-to-End Spatio-Temporal Attention Model for Human Action Recognition from Skeleton Data |
We propose a deep learning-based approach to the problem of premise
selection: selecting mathematical statements relevant for proving a given
conjecture. We represent a higher-order logic formula as a graph that is
invariant to variable renaming but still fully preserves syntactic and semantic
information. We then embed the graph into a vector via a novel embedding method
that preserves the information of edge ordering. Our approach achieves
state-of-the-art results on the HolStep dataset, improving the classification
accuracy from 83% to 90.3%. | [] | [
"Automated Theorem Proving",
"Graph Embedding"
] | [] | [
"HolStep (Conditional)",
"HolStep (Unconditional)"
] | [
"Classification Accuracy"
] | Premise Selection for Theorem Proving by Deep Graph Embedding |
In this paper we propose to exploit multiple related tasks for accurate multi-sensor 3D object detection. Towards this goal we present an end-to-end learnable architecture that reasons about 2D and 3D object detection as well as ground estimation and depth completion. Our experiments show that all these tasks are complementary and help the network learn better representations by fusing information at various levels. Importantly, our approach leads the KITTI benchmark on 2D, 3D and BEV object detection, while being real time. | [] | [
"3D Object Detection",
"Depth Completion",
"Object Detection",
"Sensor Fusion"
] | [] | [
"KITTI Cars Hard",
"KITTI Cars Moderate",
"KITTI Cars Easy"
] | [
"AP"
] | Multi-Task Multi-Sensor Fusion for 3D Object Detection |
This paper introduces SelfMatch, a semi-supervised learning method that combines the power of contrastive self-supervised learning and consistency regularization. SelfMatch consists of two stages: (1) self-supervised pre-training based on contrastive learning and (2) semi-supervised fine-tuning based on augmentation consistency regularization. We empirically demonstrate that SelfMatch achieves the state-of-the-art results on standard benchmark datasets such as CIFAR-10 and SVHN. For example, for CIFAR-10 with 40 labeled examples, SelfMatch achieves 93.19% accuracy that outperforms the strong previous methods such as MixMatch (52.46%), UDA (70.95%), ReMixMatch (80.9%), and FixMatch (86.19%). We note that SelfMatch can close the gap between supervised learning (95.87%) and semi-supervised learning (93.19%) by using only a few labels for each class. | [] | [
"Self-Supervised Learning",
"Semi-Supervised Image Classification"
] | [] | [
"CIFAR-10, 250 Labels",
"CIFAR-10, 40 Labels",
"CIFAR-10, 4000 Labels"
] | [
"Percentage error",
"Accuracy"
] | SelfMatch: Combining Contrastive Self-Supervision and Consistency for Semi-Supervised Learning |
Most recent person re-identification approaches are based on the use of deep convolutional neural networks (CNNs). These networks, although effective in multiple tasks such as classification or object detection, tend to focus on the most discriminative part of an object rather than retrieving all its relevant features. This behavior penalizes the performance of a CNN for the re-identification task, since it should identify diverse and fine grained features. It is then essential to make the network learn a wide variety of finer characteristics in order to make the re-identification process of people effective and robust to finer changes. In this article, we introduce Deep Miner, a method that allows CNNs to "mine" richer and more diverse features about people for their re-identification. Deep Miner is specifically composed of three types of branches: a Global branch (G-branch), a Local branch (L-branch) and an Input-Erased branch (IE-branch). G-branch corresponds to the initial backbone which predicts global characteristics, while L-branch retrieves part level resolution features. The IE-branch for its part, receives partially suppressed feature maps as input thereby allowing the network to "mine" new features (those ignored by G-branch) as output. For this special purpose, a dedicated suppression procedure for identifying and removing features within a given CNN is introduced. This suppression procedure has the major benefit of being simple, while it produces a model that significantly outperforms state-of-the-art (SOTA) re-identification methods. Specifically, we conduct experiments on four standard person re-identification benchmarks and witness an absolute performance gain up to 6.5% mAP compared to SOTA. | [] | [
"Object Detection",
"Person Re-Identification"
] | [] | [
"CUHK03 detected",
"MSMT17",
"CUHK03 labeled",
"DukeMTMC-reID",
"Market-1501"
] | [
"Rank-1",
"mAP",
"MAP"
] | Deep Miner: A Deep and Multi-branch Network which Mines Rich and Diverse Features for Person Re-identification |
This paper describes BomJi, a supervised system for capturing discriminative
attributes in word pairs (e.g. yellow as discriminative for banana over
watermelon). The system relies on an XGB classifier trained on carefully
engineered graph-, pattern- and word embedding based features. It participated
in the SemEval-2018 Task 10 on Capturing Discriminative Attributes, achieving
an F1 score of 0.73 and ranking 2nd out of 26 participant systems. | [] | [
"Relation Extraction"
] | [] | [
"SemEval 2018 Task 10"
] | [
"F1-Score"
] | BomJi at SemEval-2018 Task 10: Combining Vector-, Pattern- and Graph-based Information to Identify Discriminative Attributes |
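As a rough illustration of the kind of system described above, the sketch below trains an XGBoost classifier on concatenated graph-, pattern- and embedding-based feature blocks. The feature arrays, dimensions and hyper-parameters are placeholders, not the actual BomJi features.

```python
# Illustrative sketch only: an XGBoost classifier over concatenated
# engineered feature blocks; all inputs here are random stand-ins.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 200
graph_feats = rng.normal(size=(n, 16))     # e.g. co-occurrence graph statistics
pattern_feats = rng.normal(size=(n, 8))    # e.g. lexical pattern counts
embed_feats = rng.normal(size=(n, 300))    # e.g. word-embedding differences
X = np.hstack([graph_feats, pattern_feats, embed_feats])
y = rng.integers(0, 2, size=n)             # discriminative attribute: yes / no

clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
clf.fit(X, y)
print(clf.predict(X[:5]))
```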
Biological systems understand the world by simultaneously processing high-dimensional inputs from modalities as diverse as vision, audition, touch, proprioception, etc. The perception models used in deep learning on the other hand are designed for individual modalities, often relying on domain-specific assumptions such as the local grid structures exploited by virtually all existing vision models. These priors introduce helpful inductive biases, but also lock models to individual modalities. In this paper we introduce the Perceiver - a model that builds upon Transformers and hence makes few architectural assumptions about the relationship between its inputs, but that also scales to hundreds of thousands of inputs, like ConvNets. The model leverages an asymmetric attention mechanism to iteratively distill inputs into a tight latent bottleneck, allowing it to scale to handle very large inputs. We show that this architecture performs competitively or beyond strong, specialized models on classification tasks across various modalities: images, point clouds, audio, video and video+audio. The Perceiver obtains performance comparable to ResNet-50 on ImageNet without convolutions and by directly attending to 50,000 pixels. It also surpasses state-of-the-art results for all modalities in AudioSet. | [] | [] | [] | [
"AudioSet",
"ImageNet",
"ModelNet40"
] | [
"Mean Accuracy",
"Test mAP",
"Top 1 Accuracy"
] | Perceiver: General Perception with Iterative Attention |
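The asymmetric attention mechanism mentioned above can be illustrated with a short PyTorch sketch: a small learned latent array cross-attends to a much larger input array, so the quadratic cost is paid over the latents rather than the inputs. The sizes and the single attention layer are illustrative assumptions; the full model iterates this distillation.

```python
# Sketch of latent cross-attention: cost scales with the number of latents.
import torch
import torch.nn as nn

class LatentCrossAttention(nn.Module):
    def __init__(self, num_latents=64, dim=256, num_heads=4):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, inputs):                           # inputs: (B, N, dim), N large
        B = inputs.shape[0]
        q = self.latents.unsqueeze(0).expand(B, -1, -1)  # (B, num_latents, dim)
        out, _ = self.cross_attn(q, inputs, inputs)      # distill inputs into latents
        return out                                       # (B, num_latents, dim)

x = torch.randn(2, 5000, 256)            # stand-in for a long flattened input
print(LatentCrossAttention()(x).shape)   # torch.Size([2, 64, 256])
```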
We present a new method for finding video CNN architectures that capture rich spatio-temporal information in videos. Previous work, taking advantage of 3D convolutions, obtained promising results by manually designing video CNN architectures. We here develop a novel evolutionary search algorithm that automatically explores models with different types and combinations of layers to jointly learn interactions between spatial and temporal aspects of video representations. We demonstrate the generality of this algorithm by applying it to two meta-architectures, obtaining new architectures superior to manually designed architectures. Further, we propose a new component, the iTGM layer, which more efficiently utilizes its parameters to allow learning of space-time interactions over longer time horizons. The iTGM layer is often preferred by the evolutionary algorithm and allows building cost-efficient networks. The proposed approach discovers new and diverse video architectures that were previously unknown. More importantly they are both more accurate and faster than prior models, and outperform the state-of-the-art results on multiple datasets we test, including HMDB, Kinetics, and Moments in Time. We will open source the code and models, to encourage future model development. | [] | [
"Action Classification",
"Action Recognition",
"Action Recognition In Videos"
] | [] | [
"Kinetics-400",
"Charades",
"Moments in Time"
] | [
"MAP",
"Vid acc@1",
"Top 1 Accuracy"
] | Evolving Space-Time Neural Architectures for Videos |
Conversations have an intrinsic one-to-many property, which means that multiple responses can be appropriate for the same dialog context. In task-oriented dialogs, this property leads to different valid dialog policies towards task completion. However, none of the existing task-oriented dialog generation approaches takes this property into account. We propose a Multi-Action Data Augmentation (MADA) framework to utilize the one-to-many property to generate diverse appropriate dialog responses. Specifically, we first use dialog states to summarize the dialog history, and then discover all possible mappings from every dialog state to its different valid system actions. During dialog system training, we enable the current dialog state to map to all valid system actions discovered in the previous process to create additional state-action pairs. By incorporating these additional pairs, the dialog policy learns a balanced action distribution, which further guides the dialog model to generate diverse responses. Experimental results show that the proposed framework consistently improves dialog policy diversity, and results in improved response diversity and appropriateness. Our model obtains state-of-the-art results on MultiWOZ. | [] | [
"Data Augmentation",
"End-To-End Dialogue Modelling"
] | [] | [
"MULTIWOZ 2.0"
] | [
"MultiWOZ (Inform)",
"BLEU",
"MultiWOZ (Success)"
] | Task-Oriented Dialog Systems that Consider Multiple Appropriate Responses under the Same Context |
One of the most popular approaches to multi-target tracking is
tracking-by-detection. Current min-cost flow algorithms which solve the data
association problem optimally have three main drawbacks: they are
computationally expensive, they assume that the whole video is given as a
batch, and they scale badly in memory and computation with the length of the
video sequence. In this paper, we address each of these issues, resulting in a
computationally and memory-bounded solution. First, we introduce a dynamic
version of the successive shortest-path algorithm which solves the data
association problem optimally while reusing computation, resulting in
significantly faster inference than standard solvers. Second, we address the
optimal solution to the data association problem when dealing with an incoming
stream of data (i.e., online setting). Finally, we present our main
contribution which is an approximate online solution with bounded memory and
computation which is capable of handling videos of arbitrary length while
performing tracking in real time. We demonstrate the effectiveness of our
algorithms on the KITTI and PETS2009 benchmarks and show state-of-the-art
performance, while being significantly faster than existing solvers. | [] | [] | [] | [
"KITTI Tracking test"
] | [
"MOTA"
] | FollowMe: Efficient Online Min-Cost Flow Tracking with Bounded Memory and Computation |
We propose a simple neural architecture for natural language inference. Our
approach uses attention to decompose the problem into subproblems that can be
solved separately, thus making it trivially parallelizable. On the Stanford
Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results
with almost an order of magnitude fewer parameters than previous work and
without relying on any word-order information. Adding intra-sentence attention
that takes a minimum amount of order into account yields further improvements. | [] | [
"Natural Language Inference"
] | [] | [
"SNLI"
] | [
"Parameters",
"% Train Accuracy",
"% Test Accuracy"
] | A Decomposable Attention Model for Natural Language Inference |
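A compact PyTorch sketch of the attend / compare / aggregate decomposition described above follows; the layer sizes and the final aggregation are simplified assumptions rather than the paper's exact configuration.

```python
# Sketch: soft-align two sentences with dot-product attention, compare each word
# with its aligned summary, then pool and classify.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecomposableAttention(nn.Module):
    def __init__(self, emb_dim=300, hidden=200, num_classes=3):
        super().__init__()
        self.attend = nn.Sequential(nn.Linear(emb_dim, hidden), nn.ReLU())
        self.compare = nn.Sequential(nn.Linear(2 * emb_dim, hidden), nn.ReLU())
        self.classify = nn.Linear(2 * hidden, num_classes)

    def forward(self, a, b):                 # a: (B, La, E), b: (B, Lb, E)
        e = self.attend(a) @ self.attend(b).transpose(1, 2)      # (B, La, Lb)
        beta = F.softmax(e, dim=2) @ b       # for each word of a, a summary of b
        alpha = F.softmax(e, dim=1).transpose(1, 2) @ a          # and vice versa
        v1 = self.compare(torch.cat([a, beta], dim=-1)).sum(dim=1)   # aggregate
        v2 = self.compare(torch.cat([b, alpha], dim=-1)).sum(dim=1)
        return self.classify(torch.cat([v1, v2], dim=-1))

logits = DecomposableAttention()(torch.randn(4, 12, 300), torch.randn(4, 9, 300))
print(logits.shape)  # torch.Size([4, 3])
```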
This paper designs a high-performance deep convolutional network (DeepID2+) for face recognition. It is learned with the identification-verification supervisory signal. By increasing the dimension of hidden representations and adding supervision to early convolutional layers, DeepID2+ achieves new state-of-the-art on LFW and YouTube Faces benchmarks. Through empirical studies, we have discovered three properties of its deep neural activations critical for the high performance: sparsity, selectiveness and robustness. (1) It is observed that neural activations are moderately sparse. Moderate sparsity maximizes the discriminative power of the deep net as well as the distance between images. It is surprising that DeepID2+ still can achieve high recognition accuracy even after the neural responses are binarized. (2) Its neurons in higher layers are highly selective to identities and identity-related attributes. We can identify different subsets of neurons which are either constantly excited or inhibited when different identities or attributes are present. Although DeepID2+ is not taught to distinguish attributes during training, it has implicitly learned such high-level concepts. (3) It is much more robust to occlusions, although occlusion patterns are not included in the training set. | [] | [
"Face Recognition"
] | [] | [
"YouTube Faces DB",
"Labeled Faces in the Wild",
"Oulu-CASIA"
] | [
"Accuracy"
] | Deeply learned face representations are sparse, selective, and robust |
Although Neural Machine Translation (NMT) models have advanced
state-of-the-art performance in machine translation, they face problems such as
inadequate translation. We attribute this to the fact that standard Maximum
Likelihood Estimation (MLE) cannot judge real translation quality due to
several of its limitations. In this work, we propose an adequacy-oriented learning
mechanism for NMT by casting translation as a stochastic policy in
Reinforcement Learning (RL), where the reward is estimated by explicitly
measuring translation adequacy. Benefiting from the sequence-level training of
RL strategy and a more accurate reward designed specifically for translation,
our model outperforms multiple strong baselines, including (1) standard and
coverage-augmented attention models with MLE-based training, and (2) advanced
reinforcement and adversarial training strategies with rewards based on both
word-level BLEU and character-level chrF3. Quantitative and qualitative
analyses on different language pairs and NMT architectures demonstrate the
effectiveness and universality of the proposed approach. | [] | [
"Machine Translation"
] | [] | [
"WMT2014 English-German"
] | [
"BLEU score"
] | Neural Machine Translation with Adequacy-Oriented Learning |
The huge variance of human pose and the misalignment of detected human images
significantly increase the difficulty of person Re-Identification (Re-ID).
Moreover, efficient Re-ID systems are required to cope with the massive visual
data being produced by video surveillance systems. To solve these
problems, this work proposes a Global-Local-Alignment Descriptor (GLAD) and an
efficient indexing and retrieval framework, respectively. GLAD explicitly
leverages the local and global cues in human body to generate a discriminative
and robust representation. It consists of part extraction and descriptor
learning modules, where several part regions are first detected and then deep
neural networks are designed for representation learning on both the local and
global regions. A hierarchical indexing and retrieval framework is designed to
eliminate the huge redundancy in the gallery set, and accelerate the online
Re-ID procedure. Extensive experimental results show GLAD achieves competitive
accuracy compared to the state-of-the-art methods. Our retrieval framework
significantly accelerates the online Re-ID procedure without loss of accuracy.
Therefore, this work has the potential to perform well on person Re-ID tasks in real
scenarios. | [] | [
"Person Re-Identification",
"Representation Learning"
] | [] | [
"Market-1501"
] | [
"Rank-1",
"MAP"
] | GLAD: Global-Local-Alignment Descriptor for Pedestrian Retrieval |
Aspect-based sentiment analysis (ABSA) involves three subtasks, i.e., aspect term extraction, opinion term extraction, and aspect-level sentiment classification. Most existing studies focused on one of these subtasks only. Several recent studies made successful attempts to solve the complete ABSA problem with a unified framework. However, the interactive relations among the three subtasks are still under-exploited. We argue that such relations encode collaborative signals between different subtasks. For example, when the opinion term is "delicious", the aspect term must be "food" rather than "place". In order to fully exploit these relations, we propose a Relation-Aware Collaborative Learning (RACL) framework which allows the subtasks to work coordinately via the multi-task learning and relation propagation mechanisms in a stacked multi-layer network. Extensive experiments on three real-world datasets demonstrate that RACL significantly outperforms the state-of-the-art methods for the complete ABSA task. | [] | [
"Aspect-Based Sentiment Analysis",
"Multi-Task Learning",
"Sentiment Analysis"
] | [] | [
"SemEval 2014 Task 4 Subtask 1+2",
"SemEval 2014 Task 4 Laptop"
] | [
"F1"
] | Relation-Aware Collaborative Learning for Unified Aspect-Based Sentiment Analysis |
Relation extraction studies the issue of predicting semantic relations between pairs of entities in sentences. Attention mechanisms are often used in this task to alleviate the inner-sentence noise by performing soft selections of words independently. Based on the observation that information pertinent to relations is usually contained within segments (continuous words in a sentence), it is possible to make use of this phenomenon for better extraction. In this paper, we aim to incorporate such segment information into neural relation extractor. Our approach views the attention mechanism as linear-chain conditional random fields over a set of latent variables whose edges encode the desired structure, and regards attention weight as the marginal distribution of each word being selected as a part of the relational expression. Experimental results show that our method can attend to continuous relational expressions without explicit annotations, and achieve the state-of-the-art performance on the large-scale TACRED dataset. | [] | [
"Relation Extraction"
] | [] | [
"TACRED"
] | [
"F1"
] | Beyond Word Attention: Using Segment Attention in Neural Relation Extraction |
Hyperspectral image (HSI) classification is widely used for the analysis of remotely sensed images. Hyperspectral imagery includes varying bands of images. The Convolutional Neural Network (CNN) is one of the most frequently used deep learning-based methods for visual data processing, and its use for HSI classification is also visible in recent works. These approaches are mostly based on 2D CNNs, whereas HSI classification performance is highly dependent on both spatial and spectral information. Very few methods have utilized 3D CNNs because of the increased computational complexity. This letter proposes a Hybrid Spectral Convolutional Neural Network (HybridSN) for HSI classification. Basically, the HybridSN is a spectral-spatial 3D-CNN followed by a spatial 2D-CNN. The 3D-CNN facilitates joint spatial-spectral feature representation from a stack of spectral bands. The 2D-CNN on top of the 3D-CNN further learns a more abstract spatial representation. Moreover, the use of hybrid CNNs reduces the complexity of the model compared to a 3D-CNN alone. To test the performance of this hybrid approach, very rigorous HSI classification experiments are performed over the Indian Pines, Pavia University and Salinas Scene remote sensing datasets. The results are compared with state-of-the-art hand-crafted as well as end-to-end deep learning-based methods. A very satisfactory performance is obtained using the proposed HybridSN for HSI classification. The source code can be found at https://github.com/gokriznastic/HybridSN. | [] | [
"Hyperspectral Image Classification",
"Image Classification"
] | [] | [
"Indian Pines",
"Salinas Scene"
] | [
"Overall Accuracy"
] | HybridSN: Exploring 3D-2D CNN Feature Hierarchy for Hyperspectral Image Classification |
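A minimal PyTorch sketch of the 3D-then-2D stacking described above: a few 3D convolutions over (spectral, height, width) patches, after which the spectral axis is folded into the channel axis for a 2D convolution. Band count, patch size, channel widths and class count are illustrative assumptions, not the published architecture.

```python
# Sketch: spectral-spatial 3D convs followed by a spatial 2D conv and a classifier.
import torch
import torch.nn as nn

class HybridSpectralNet(nn.Module):
    def __init__(self, bands=30, patch=25, num_classes=16):
        super().__init__()
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3)), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3)), nn.ReLU(),
        )
        depth = bands - 6 - 4                     # spectral size after the 3D convs
        self.conv2d = nn.Sequential(nn.Conv2d(16 * depth, 64, kernel_size=3), nn.ReLU())
        side = patch - 2 - 2 - 2                  # spatial size after all convs
        self.head = nn.Linear(64 * side * side, num_classes)

    def forward(self, x):                         # x: (B, 1, bands, H, W)
        x = self.conv3d(x)
        b, c, d, h, w = x.shape
        x = x.reshape(b, c * d, h, w)             # fold spectra into channels for 2D conv
        x = self.conv2d(x)
        return self.head(x.flatten(1))

x = torch.randn(2, 1, 30, 25, 25)                 # a batch of HSI patches
print(HybridSpectralNet()(x).shape)               # torch.Size([2, 16])
```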
In this paper, we propose a new dataset for outdoor depth estimation from single and stereo RGB images. The dataset was acquired from the point of view of a pedestrian. Currently, the most novel approaches take advantage of deep learning-based techniques, which have proven to outperform traditional state-of-the-art computer vision methods. Nonetheless, these methods require large amounts of reliable ground-truth data. Although several datasets already exist that could be used for depth estimation, almost none of them are outdoor-oriented from an egocentric point of view. Our dataset introduces a large number of high-definition pairs of color frames and corresponding depth maps from a human perspective. In addition, the proposed dataset also features human interaction and great variability of data, as shown in this work. | [] | [
"Depth Estimation",
"Monocular Depth Estimation"
] | [] | [
"UASOL"
] | [
"RMSE"
] | UASOL, a large-scale high-resolution outdoor stereo dataset |
Multiple-choice reading comprehension (MCRC) is the task of selecting the
correct answer from multiple options given a question and an article. Existing
MCRC models typically either read each option independently or compute a
fixed-length representation for each option before comparing them. However,
humans typically compare the options at multiple-granularity level before
reading the article in detail to make reasoning more efficient. Mimicking
humans, we propose an option comparison network (OCN) for MCRC which compares
options at word-level to better identify their correlations to help reasoning.
Specifically, each option is encoded into a vector sequence using a skimmer to
retain fine-grained information as much as possible. An attention mechanism is
leveraged to compare these sequences vector-by-vector to identify more subtle
correlations between options, which is potentially valuable for reasoning.
Experimental results on the human English exam MCRC dataset RACE show that our
model outperforms existing methods significantly. Moreover, it is also the
first model that surpasses Amazon Mechanical Turker performance on the whole
dataset. | [] | [
"Question Answering",
"Reading Comprehension"
] | [] | [
"RACE"
] | [
"RACE-h",
"RACE-m",
"RACE"
] | Option Comparison Network for Multiple-choice Reading Comprehension |
We show that the basic classification framework alone can be used to tackle some of the most challenging tasks in image synthesis. In contrast to other state-of-the-art approaches, the toolkit we develop is rather minimal: it uses a single, off-the-shelf classifier for all these tasks. The crux of our approach is that we train this classifier to be adversarially robust. It turns out that adversarial robustness is precisely what we need to directly manipulate salient features of the input. Overall, our findings demonstrate the utility of robustness in the broader machine learning context. Code and models for our experiments can be found at https://git.io/robust-apps. | [] | [
"Image Generation"
] | [] | [
"CIFAR-10"
] | [
"Inception score"
] | Image Synthesis with a Single (Robust) Classifier |
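A minimal sketch of the kind of synthesis procedure suggested above: starting from a seed image, take normalized gradient steps on the input that increase the target-class score of a (robust) classifier. The classifier below is an untrained stand-in, and the step size and iteration count are assumptions.

```python
# Sketch: gradient ascent on the input pixels toward a target-class logit.
import torch
import torch.nn as nn

def synthesize(classifier, seed, target_class, steps=60, step_size=0.05):
    x = seed.clone()
    for _ in range(steps):
        x.requires_grad_(True)
        score = classifier(x)[:, target_class].sum()   # target-class logit
        grad, = torch.autograd.grad(score, x)
        with torch.no_grad():
            x = x + step_size * grad / (grad.norm() + 1e-12)  # normalized ascent step
            x = x.clamp(0.0, 1.0)                             # stay in image range
    return x.detach()

classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in model
seed = torch.rand(1, 3, 32, 32)
img = synthesize(classifier, seed, target_class=3)
print(img.shape)
```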
The term fine-grained visual classification (FGVC) refers to classification tasks where the classes are very similar and the classification model needs to be able to find subtle differences to make the correct prediction. State-of-the-art approaches often include a localization step designed to help a classification network by localizing the relevant parts of the input images. However, this usually requires multiple iterations or passes through a full classification network or complex training schedules. In this work we present an efficient localization module that can be fused with a classification network in an end-to-end setup. On the one hand the module is trained by the gradient flowing back from the classification network. On the other hand, two self-supervised loss functions are introduced to increase the localization accuracy. We evaluate the new model on the three benchmark datasets CUB200-2011, Stanford Cars and FGVC-Aircraft and are able to achieve competitive recognition performance. | [] | [
"Fine-Grained Image Classification"
] | [] | [
" CUB-200-2011",
"Stanford Cars",
"FGVC Aircraft"
] | [
"Accuracy"
] | Fine-Grained Visual Classification with Efficient End-to-end Localization |
Clustering is an important problem in various machine learning applications, but still a challenging task when dealing with complex real data. The existing clustering algorithms utilize either shallow models with insufficient capacity for capturing the non-linear nature of data, or deep models with a large number of parameters prone to overfitting. In this paper, we propose a deep Generative Adversarial Clustering Network (ClusterGAN), which tackles the problem of training deep clustering models in an unsupervised manner. ClusterGAN consists of three networks, a discriminator, a generator and a clusterer (i.e. a clustering network). We employ an adversarial game between these three players to synthesize realistic samples given discriminative latent variables via the generator, and learn the inverse mapping of the real samples to the discriminative embedding space via the clusterer. Moreover, we utilize a conditional entropy minimization loss to increase/decrease the similarity of intra/inter cluster samples. Since the ground-truth similarities are unknown in the clustering task, we propose a novel balanced self-paced learning algorithm to gradually include samples into training from easy to difficult, while considering the diversity of selected samples from all clusters. Therefore, our method makes it possible to efficiently train clusterers with large depth by leveraging the proposed adversarial game and balanced self-paced learning algorithm. According to our experiments, ClusterGAN achieves competitive results compared to the state-of-the-art clustering and hashing models on several datasets.
| [] | [
"Deep Clustering",
"Image Clustering",
"Image Retrieval",
"Unsupervised Spatial Clustering"
] | [] | [
"USPS",
"MNIST-full"
] | [
"NMI",
"Accuracy"
] | Balanced Self-Paced Learning for Generative Adversarial Clustering Network |
Sequence encoders are crucial components in many neural architectures for learning to read and comprehend. This paper presents a new compositional encoder for reading comprehension (RC). Our proposed encoder is not only aimed at being fast but also expressive. Specifically, the key novelty behind our encoder is that it explicitly models across multiple granularities using a new dilated composition mechanism. In our approach, gating functions are learned by modeling relationships and reasoning over multi-granular sequence information, enabling compositional learning that is aware of both long and short term information. We conduct experiments on three RC datasets, showing that our proposed encoder demonstrates very promising results both as a standalone encoder and as a complementary building block. Empirical results show that simple Bi-Attentive architectures augmented with our proposed encoder not only achieve state-of-the-art / highly competitive results but are also considerably faster than other published works. | [] | [
"Open-Domain Question Answering",
"Question Answering",
"Reading Comprehension"
] | [] | [
"SearchQA",
"NarrativeQA"
] | [
"METEOR",
"BLEU-1",
"N-gram F1",
"Unigram Acc",
"EM",
"F1",
"Rouge-L",
"BLEU-4"
] | Multi-Granular Sequence Encoding via Dilated Compositional Units for Reading Comprehension |
Recurrent Neural Networks (RNNs) with attention mechanisms have obtained
state-of-the-art results for many sequence processing tasks. Most of these
models use a simple form of encoder with attention that looks over the entire
sequence and assigns a weight to each token independently. We present a
mechanism for focusing RNN encoders for sequence modelling tasks which allows
them to attend to key parts of the input as needed. We formulate this using a
multi-layer conditional sequence encoder that reads in one token at a time and
makes a discrete decision on whether the token is relevant to the context or
question being asked. The discrete gating mechanism takes in the context
embedding and the current hidden state as inputs and controls information flow
into the layer above. We train it using policy gradient methods. We evaluate
this method on several types of tasks with different attributes. First, we
evaluate the method on synthetic tasks which allow us to evaluate the model for
its generalization ability and probe the behavior of the gates in more
controlled settings. We then evaluate this approach on large scale Question
Answering tasks including the challenging MS MARCO and SearchQA tasks. Our
model shows consistent improvements for both tasks over prior work and our
baselines. It has also been shown to generalize significantly better on synthetic
tasks compared to the baselines. | [] | [
"Open-Domain Question Answering",
"Policy Gradient Methods",
"Question Answering"
] | [] | [
"SearchQA"
] | [
"Unigram Acc",
"N-gram F1"
] | Focused Hierarchical RNNs for Conditional Sequence Processing |
Existing weakly supervised fine-grained image recognition (WFGIR) methods usually pick out the discriminative regions from the high-level feature maps directly. We discover that due to the stacking of local receptive fields, Convolutional Neural Networks cause discriminative region diffusion in high-level feature maps, which leads to inaccurate discriminative region localization. In this paper, we propose an end-to-end Discriminative Feature-oriented Gaussian Mixture Model (DF-GMM) to address the problem of discriminative region diffusion and find better fine-grained details. Specifically, DF-GMM consists of 1) a low-rank representation mechanism (LRM), which learns a set of low-rank discriminative bases by a Gaussian Mixture Model (GMM) in high-level semantic feature maps to improve the discriminative ability of the feature representation, and 2) a low-rank representation reorganization mechanism (LR^2M), which restores the spatial information corresponding to the low-rank discriminative bases to reconstruct the low-rank feature maps. It alleviates the discriminative region diffusion problem and locates discriminative regions more precisely. Extensive experiments verify that DF-GMM yields the best performance under the same settings as the most competitive approaches, on the CUB-Bird, Stanford-Cars, and FGVC-Aircraft datasets.
| [] | [
"Fine-Grained Image Classification",
"Fine-Grained Image Recognition",
"Image Classification"
] | [] | [
" CUB-200-2011",
"Stanford Cars",
"FGVC Aircraft"
] | [
"Accuracy"
] | Weakly Supervised Fine-Grained Image Classification via Guassian Mixture Model Oriented Discriminative Learning |
Fine-grained visual classification (FGVC) is becoming an important research field, due to its wide applications and the rapid development of computer vision technologies. The current state-of-the-art (SOTA) methods in the FGVC usually employ attention mechanisms to first capture the semantic parts and then discover their subtle differences between distinct classes. The channel-spatial attention mechanisms, which focus on the discriminative channels and regions simultaneously, have significantly improved the classification performance. However, the existing attention modules are poorly guided since part-based detectors in the FGVC depend on the network learning ability without the supervision of part annotations. As obtaining such part annotations is labor-intensive, some visual localization and explanation methods, such as gradient-weighted class activation mapping (Grad-CAM), can be utilized for supervising the attention mechanism. We propose a Grad-CAM guided channel-spatial attention module for the FGVC, which employs the Grad-CAM to supervise and constrain the attention weights by generating the coarse localization maps. To demonstrate the effectiveness of the proposed method, we conduct comprehensive experiments on three popular FGVC datasets, including CUB-200-2011, Stanford Cars, and FGVC-Aircraft datasets. The proposed method outperforms the SOTA attention modules in the FGVC task. In addition, visualizations of feature maps also demonstrate the superiority of the proposed method against the SOTA approaches. | [] | [
"Fine-Grained Image Classification",
"Visual Localization"
] | [] | [
" CUB-200-2011"
] | [
"Accuracy"
] | Grad-CAM guided channel-spatial attention module for fine-grained visual classification |
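For reference, the sketch below computes the Grad-CAM localization map that the method above uses as coarse supervision for its attention weights: class-score gradients are average-pooled into channel weights and used to combine the feature maps. The tiny backbone, head and layer choice are stand-ins for illustration.

```python
# Sketch of Grad-CAM: gradient-weighted combination of conv feature maps.
import torch
import torch.nn as nn
import torch.nn.functional as F

def grad_cam(features, logits, target_class):
    """features: (B, C, H, W) from a conv layer; logits: (B, num_classes)."""
    score = logits[:, target_class].sum()
    grads, = torch.autograd.grad(score, features, retain_graph=True)
    weights = grads.mean(dim=(2, 3), keepdim=True)           # global-average-pooled grads
    cam = F.relu((weights * features).sum(dim=1, keepdim=True))
    cam = cam / (cam.amax(dim=(2, 3), keepdim=True) + 1e-8)   # normalize to [0, 1]
    return cam                                                # (B, 1, H, W)

backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 200))

x = torch.randn(2, 3, 64, 64)
feats = backbone(x)
logits = head(feats)
print(grad_cam(feats, logits, target_class=7).shape)   # torch.Size([2, 1, 64, 64])
```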
The current supervised relation classification (RC) task uses a single embedding to represent the relation between a pair of entities. We argue that a better approach is to treat the RC task as a Question answering (QA) like span prediction problem. We present a span-prediction based system for RC and evaluate its performance compared to the embedding based system. We achieve state-of-the-art results on the TACRED and SemEval task 8 datasets. | [] | [
"Question Answering",
"Relation Classification",
"Relation Extraction"
] | [] | [
"TACRED",
"SemEval-2010 Task 8"
] | [
"F1"
] | Relation Extraction as Two-way Span-Prediction |
Previous research on relation classification has verified the effectiveness
of using dependency shortest paths or subtrees. In this paper, we further
explore how to make full use of the combination of these two kinds of dependency
information. We first propose a new structure, termed augmented dependency path
(ADP), which is composed of the shortest dependency path between two entities
and the subtrees attached to the shortest path. To exploit the semantic
representation behind the ADP structure, we develop dependency-based neural
networks (DepNN): a recursive neural network designed to model the subtrees,
and a convolutional neural network to capture the most important features on
the shortest path. Experiments on the SemEval-2010 dataset show that our
proposed method achieves state-of-the-art results. | [] | [
"Relation Classification"
] | [] | [
"SemEval 2010 Task 8"
] | [
"F1"
] | A Dependency-Based Neural Network for Relation Classification |
Nowadays, neural networks play an important role in the task of relation classification. In this paper, we propose a novel attention-based convolutional neural network architecture for this task. Our model makes full use of word embedding, part-of-speech tag embedding and position embedding information. Word level attention mechanism is able to better determine which parts of the sentence are most influential with respect to the two entities of interest. This architecture enables learning some important features from task-specific labeled data, forgoing the need for external knowledge such as explicit dependency structures. Experiments on the SemEval-2010 Task 8 benchmark dataset show that our model achieves better performances than several state-of-the-art neural network models and can achieve a competitive performance just with minimal feature engineering. | [] | [
"Feature Engineering",
"Relation Classification",
"Relation Extraction"
] | [] | [
"SemEval-2010 Task 8"
] | [
"F1"
] | Attention-Based Convolutional Neural Network for Semantic Relation Extraction |
This paper describes a novel method of live keyword spotting using a two-stage time delay neural network. The model is trained using transfer learning: initial training with phone targets from a large speech corpus is followed by training with keyword targets from a smaller data set. The accuracy of the system is evaluated on two separate tasks. The first is the freely available Google Speech Commands dataset. The second is an in-house task specifically developed for keyword spotting. The results show significant improvements in false accept and false reject rates in both clean and noisy environments when compared with previously known techniques. Furthermore, we investigate various techniques to reduce computation in terms of multiplications per second of audio. Compared to recently published work, the proposed system provides up to 89% savings on computational complexity. | [] | [
"Keyword Spotting",
"Transfer Learning"
] | [] | [
"Google Speech Commands"
] | [
"10-keyword Speech Commands dataset"
] | Efficient keyword spotting using time delay neural networks |
With the rise of low power speech-enabled devices, there is a growing demand to quickly produce models for recognizing arbitrary sets of keywords. As with many machine learning tasks, one of the most challenging parts in the model creation process is obtaining a sufficient amount of training data. In this paper, we explore the effectiveness of synthesized speech data in training small, spoken term detection models of around 400k parameters. Instead of training such models directly on the audio or low level features such as MFCCs, we use a pre-trained speech embedding model trained to extract useful features for keyword spotting models. Using this speech embedding, we show that a model which detects 10 keywords when trained on only synthetic speech is equivalent to a model trained on over 500 real examples. We also show that a model without our speech embeddings would need to be trained on over 4000 real examples to reach the same accuracy. | [] | [
"Keyword Spotting"
] | [] | [
"Google Speech Commands"
] | [
"Google Speech Commands V2 12"
] | Training Keyword Spotters with Limited and Synthesized Speech Data |
Data augmentation is often used to enlarge datasets with synthetic samples generated in accordance with the underlying data distribution. To enable a wider range of augmentations, we explore negative data augmentation strategies (NDA) that intentionally create out-of-distribution samples. We show that such negative out-of-distribution samples provide information on the support of the data distribution, and can be leveraged for generative modeling and representation learning. We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator. We prove that under suitable conditions, optimizing the resulting objective still recovers the true data distribution but can directly bias the generator towards avoiding samples that lack the desired structure. Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities. Further, we incorporate the same negative data augmentation strategy in a contrastive learning framework for self-supervised representation learning on images and videos, achieving improved performance on downstream image classification, object detection, and action recognition tasks. These results suggest that prior knowledge on what does not constitute valid data is an effective form of weak supervision across a range of unsupervised learning tasks. | [] | [
"Action Recognition",
"Anomaly Detection",
"Conditional Image Generation",
"Data Augmentation",
"Image Classification",
"Image Generation",
"Object Detection",
"Representation Learning"
] | [] | [
"CIFAR-100",
"CIFAR-10"
] | [
"FID"
] | Negative Data Augmentation |
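One concrete negative-augmentation choice consistent with the idea above is jigsaw-shuffling image patches; the sketch below builds such negatives and treats them as extra "fake" samples in a non-saturating discriminator loss. The patch grid, loss form and toy discriminator are assumptions for illustration.

```python
# Sketch: jigsaw-shuffled patches as out-of-distribution negatives for the discriminator.
import torch
import torch.nn.functional as F

def jigsaw_negatives(images, grid=4):
    """Shuffle a grid x grid patch decomposition of each image (same permutation per batch)."""
    b, c, h, w = images.shape
    ph, pw = h // grid, w // grid
    patches = images.reshape(b, c, grid, ph, grid, pw)
    patches = patches.permute(0, 2, 4, 1, 3, 5).reshape(b, grid * grid, c, ph, pw)
    patches = patches[:, torch.randperm(grid * grid)]
    patches = patches.reshape(b, grid, grid, c, ph, pw).permute(0, 3, 1, 4, 2, 5)
    return patches.reshape(b, c, h, w)

def discriminator_loss(disc, real, fake, nda_weight=1.0):
    neg = jigsaw_negatives(real)                     # OOD negatives built from real data
    loss_real = F.softplus(-disc(real)).mean()       # non-saturating GAN loss
    loss_fake = F.softplus(disc(fake)).mean()
    loss_nda = F.softplus(disc(neg)).mean()          # negatives pushed toward "fake"
    return loss_real + loss_fake + nda_weight * loss_nda

disc = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 1))
real, fake = torch.rand(8, 3, 32, 32), torch.rand(8, 3, 32, 32)
print(discriminator_loss(disc, real, fake).item())
```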
Who did what to whom is a major focus in natural language understanding, which is precisely the aim of the semantic role labeling (SRL) task. Despite sharing many processing characteristics and even a similar task purpose, jointly considering these two related tasks has, surprisingly, never been formally reported in previous work. Thus this paper makes the first attempt to let SRL enhance text comprehension and inference through specifying verbal predicates and their corresponding semantic roles. In terms of deep learning models, our embeddings are enhanced by explicit contextual semantic role labels for more fine-grained semantics. We show that the salient labels can be conveniently added to existing models and significantly improve deep learning models in challenging text comprehension tasks. Extensive experiments on benchmark machine reading comprehension and inference datasets verify that the proposed semantic learning helps our system reach a new state-of-the-art over strong baselines which have been enhanced by well-pretrained language models from the latest progress. | [] | [
"Machine Reading Comprehension",
"Natural Language Understanding",
"Reading Comprehension",
"Semantic Role Labeling"
] | [] | [
"SNLI"
] | [
"Parameters",
"% Train Accuracy",
"% Test Accuracy"
] | Explicit Contextual Semantics for Text Comprehension |
We evaluate the character-level translation method for neural semantic
parsing on a large corpus of sentences annotated with Abstract Meaning
Representations (AMRs). Using a sequence-to-sequence model, and some trivial
preprocessing and postprocessing of AMRs, we obtain a baseline accuracy of 53.1
(F-score on AMR-triples). We examine five different approaches to improve this
baseline result: (i) reordering AMR branches to match the word order of the
input sentence increases performance to 58.3; (ii) adding part-of-speech tags
(automatically produced) to the input shows improvement as well (57.2); (iii)
So does the introduction of super characters (conflating frequent sequences of
characters to a single character), reaching 57.4; (iv) optimizing the training
process by using pre-training and averaging a set of models increases
performance to 58.7; (v) adding silver-standard training data obtained by an
off-the-shelf parser yields the biggest improvement, resulting in an F-score of
64.0. Combining all five techniques leads to an F-score of 71.0 on holdout
data, which is state-of-the-art in AMR parsing. This is remarkable because of
the relative simplicity of the approach. | [] | [
"AMR Parsing",
"Semantic Parsing"
] | [] | [
"LDC2017T10"
] | [
"Smatch"
] | Neural Semantic Parsing by Character-based Translation: Experiments with Abstract Meaning Representations |
Recurrent neural network grammars (RNNG) are a recently proposed
probabilistic generative modeling family for natural language. They show
state-of-the-art language modeling and parsing performance. We investigate what
information they learn, from a linguistic perspective, through various
ablations to the model and the data, and by augmenting the model with an
attention mechanism (GA-RNNG) to enable closer inspection. We find that
explicit modeling of composition is crucial for achieving the best performance.
Through the attention mechanism, we find that headedness plays a central role
in phrasal representation (with the model's latent attention largely agreeing
with predictions made by hand-crafted head rules, albeit with some important
differences). By training grammars without nonterminal labels, we find that
phrasal representations depend minimally on nonterminals, providing support for
the endocentricity hypothesis. | [] | [
"Constituency Parsing",
"Dependency Parsing",
"Language Modelling"
] | [] | [
"Penn Treebank"
] | [
"F1 score"
] | What Do Recurrent Neural Network Grammars Learn About Syntax? |
Supervised object detection and semantic segmentation require object or even
pixel level annotations. When there exist image level labels only, it is
challenging for weakly supervised algorithms to achieve accurate predictions.
The accuracy achieved by top weakly supervised algorithms is still
significantly lower than their fully supervised counterparts. In this paper, we
propose a novel weakly supervised curriculum learning pipeline for multi-label
object recognition, detection and semantic segmentation. In this pipeline, we
first obtain intermediate object localization and pixel labeling results for
the training images, and then use such results to train task-specific deep
networks in a fully supervised manner. The entire process consists of four
stages, including object localization in the training images, filtering and
fusing object instances, pixel labeling for the training images, and
task-specific network training. To obtain clean object instances in the
training images, we propose a novel algorithm for filtering, fusing and
classifying object instances collected from multiple solution mechanisms. In
this algorithm, we incorporate both metric learning and density-based
clustering to filter detected object instances. Experiments show that our
weakly supervised pipeline achieves state-of-the-art results in multi-label
image classification as well as weakly supervised object detection and very
competitive results in weakly supervised semantic segmentation on MS-COCO,
PASCAL VOC 2007 and PASCAL VOC 2012. | [] | [
"Curriculum Learning",
"Image Classification",
"Metric Learning",
"Multi-Label Classification",
"Object Detection",
"Object Localization",
"Object Recognition",
"Semantic Segmentation",
"Weakly Supervised Object Detection",
"Weakly-Supervised Semantic Segmentation"
] | [] | [
"PASCAL VOC 2007"
] | [
"MAP"
] | Multi-Evidence Filtering and Fusion for Multi-Label Classification, Object Detection and Semantic Segmentation Based on Weakly Supervised Learning |
We consider addressing the major failures in weakly supervised object detectors. As most weakly supervised object detection methods are based on pre-generated proposals, they often show two types of false detections: (i) grouping multiple object instances into one bounding box, and (ii) focusing on only parts rather than whole objects. We propose an image segmentation framework to help correctly detect individual instances. The input images are first segmented into several sub-images based on the proposal overlaps to uncouple the grouped objects. Then the batch of sub-images is fed into the convolutional network to train an object detector. Within each sub-image, a partial aggregation strategy is adopted to dynamically select a portion of the proposal-level scores to produce the sub-image-level output. This regularizes the model to learn context knowledge about the object content. Finally, the outputs of the sub-images are pooled together as the model prediction. The ideas are implemented with a VGG-D backbone to be comparable with recent state-of-the-art weakly supervised methods. Extensive experiments on the PASCAL VOC datasets show the superiority of our design. The proposed model outperforms other alternatives on detection, localization, and classification tasks. | [] | [
"Object Detection",
"Semantic Segmentation",
"Weakly Supervised Object Detection"
] | [] | [
"PASCAL VOC 2007"
] | [
"MAP"
] | Fewer is More: Image Segmentation Based Weakly Supervised Object Detection with Partial Aggregation |
We consider the problem of weakly supervised object detection, where the
training samples are annotated using only image-level labels that indicate the
presence or absence of an object category. In order to model the uncertainty in
the location of the objects, we employ a dissimilarity coefficient based
probabilistic learning objective. The learning objective minimizes the
difference between an annotation agnostic prediction distribution and an
annotation aware conditional distribution. The main computational challenge is
the complex nature of the conditional distribution, which consists of terms
over hundreds or thousands of variables. The complexity of the conditional
distribution rules out the possibility of explicitly modeling it. Instead, we
exploit the fact that deep learning frameworks rely on stochastic optimization.
This allows us to use a state of the art discrete generative model that can
provide annotation consistent samples from the conditional distribution.
Extensive experiments on PASCAL VOC 2007 and 2012 data sets demonstrate the
efficacy of our proposed approach. | [] | [
"Object Detection",
"Stochastic Optimization",
"Weakly Supervised Object Detection"
] | [] | [
"PASCAL VOC 2007",
"PASCAL VOC 2012 test"
] | [
"MAP"
] | Dissimilarity Coefficient based Weakly Supervised Object Detection |
Most existing weakly supervised localization (WSL) approaches learn detectors
by finding positive bounding boxes based on features learned with image-level
supervision. However, those features do not contain spatial location related
information and usually provide poor-quality positive samples for training a
detector. To overcome this issue, we propose a deep self-taught learning
approach, which makes the detector learn the object-level features reliable for
acquiring tight positive samples and afterwards re-train itself based on them.
Consequently, the detector progressively improves its detection ability and
localizes more informative positive samples. To implement such self-taught
learning, we propose a seed sample acquisition method via image-to-object
transferring and dense subgraph discovery to find reliable positive samples for
initializing the detector. An online supportive sample harvesting scheme is
further proposed to dynamically select the most confident tight positive
samples and train the detector in a mutual boosting way. To prevent the
detector from being trapped in poor optima due to overfitting, we propose a new
relative improvement of predicted CNN scores for guiding the self-taught
learning process. Extensive experiments on PASCAL 2007 and 2012 show that our
approach outperforms the state-of-the-arts, strongly validating its
effectiveness. | [] | [
"Object Localization",
"Weakly Supervised Object Detection",
"Weakly-Supervised Object Localization"
] | [] | [
"PASCAL VOC 2007",
"PASCAL VOC 2012 test"
] | [
"MAP"
] | Deep Self-Taught Learning for Weakly Supervised Object Localization |
In this paper, we address the problem of weakly supervised object localization (WSL), which trains a detection network on the dataset with only image-level annotations. The proposed approach is built on the observation that the proposal set from the training dataset is a collection of background, object parts, and objects. Several strategies are taken to adaptively eliminate the noisy proposals and generate pseudo object-level annotations for the weakly labeled dataset. A multiple instance learning (MIL) algorithm enhanced by mask-out strategy is adopted to collect the class-specific object proposals, which are then utilized to adapt a pre-trained classification network to a detection network. In addition, the detection results from the detection network are re-weighted by jointly considering the detection scores and the overlap ratio of proposals in a proposal subset optimization framework. The optimal proposals work as object-level labels that enable a pseudo-strongly supervised dataset for training the detection network. Consequently, we establish a fully adaptive detection network. Extensive evaluations on the PASCAL VOC 2007 and 2012 datasets demonstrate a significant improvement compared with the state-of-the-art methods. | [] | [
"Denoising",
"Multiple Instance Learning",
"Object Localization",
"Weakly Supervised Object Detection"
] | [] | [
"PASCAL VOC 2007",
"PASCAL VOC 2012 test"
] | [
"MAP"
] | Adaptively Denoising Proposal Collection for Weakly Supervised Object Localization |
Deep learning has achieved excellent performance in various computer vision
tasks, but requires a lot of training examples with clean labels. It is easy to
collect a dataset with noisy labels, but such noise makes networks overfit
seriously and accuracies drop dramatically. To address this problem, we propose
an end-to-end framework called PENCIL, which can update both network parameters
and label estimations as label distributions. PENCIL is independent of the
backbone network structure and does not need an auxiliary clean dataset or
prior information about noise, thus it is more general and robust than existing
methods and is easy to apply. PENCIL outperforms previous state-of-the-art
methods by large margins on both synthetic and real-world datasets with
different noise types and noise rates. Experiments show that PENCIL is robust
on clean datasets, too. | [] | [
"Image Classification",
"Learning with noisy labels"
] | [] | [
"Clothing1M"
] | [
"Accuracy"
] | Probabilistic End-to-end Noise Correction for Learning with Noisy Labels |
The well-known word analogy experiments show that the recent word vectors
capture fine-grained linguistic regularities in words by linear vector offsets,
but it is unclear how well the simple vector offsets can encode visual
regularities over words. We study a particular image-word relevance relation in
this paper. Our results show that the word vectors of relevant tags for a given
image rank ahead of the irrelevant tags, along a principal direction in the
word vector space. Inspired by this observation, we propose to solve image
tagging by estimating the principal direction for an image. Particularly, we
exploit linear mappings and nonlinear deep neural networks to approximate the
principal direction from an input image. We arrive at a quite versatile tagging
model. It runs fast given a test image, in constant time w.r.t. the training
set size. It not only gives superior performance for the conventional tagging
task on the NUS-WIDE dataset, but also outperforms competitive baselines on
annotating images with previously unseen tags. | [] | [
"Multi-label zero-shot learning",
"Zero-Shot Learning"
] | [] | [
"NUS-WIDE"
] | [
"mAP"
] | Fast Zero-Shot Image Tagging |
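The ranking rule described above can be sketched in a few lines of NumPy: a learned mapping sends image features to a principal direction in word-vector space, and candidate tags are ranked by the dot product of their word vectors with that direction. The mapping, vectors and vocabulary below are random stand-ins for illustration.

```python
# Sketch: rank tags by their word vectors' alignment with a predicted direction.
import numpy as np

rng = np.random.default_rng(0)
dim_img, dim_word = 512, 300
W = rng.normal(scale=0.01, size=(dim_word, dim_img))   # learned linear mapping (stand-in)

vocab = ["dog", "beach", "sunset", "car", "pizza"]
word_vecs = rng.normal(size=(len(vocab), dim_word))    # pretrained word vectors (stand-in)
word_vecs /= np.linalg.norm(word_vecs, axis=1, keepdims=True)

image_feat = rng.normal(size=dim_img)                  # CNN feature of a test image
direction = W @ image_feat                             # predicted principal direction

scores = word_vecs @ direction                         # relevant tags should rank first
ranked = [vocab[i] for i in np.argsort(-scores)]
print(ranked)
```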
Recently, there is rising interest in modelling the interactions of two
sentences with deep neural networks. However, most of the existing methods
encode two sequences with separate encoders, in which a sentence is encoded
with little or no information from the other sentence. In this paper, we
propose a deep architecture to model the strong interaction of sentence pair
with two coupled-LSTMs. Specifically, we introduce two coupled ways to model
the interdependences of two LSTMs, coupling the local contextualized
interactions of two sentences. We then aggregate these interactions and use a
dynamic pooling to select the most informative features. Experiments on two
very large datasets demonstrate the efficacy of our proposed architecture and
its superiority to state-of-the-art methods. | [] | [] | [] | [
"SNLI"
] | [
"Parameters",
"% Train Accuracy",
"% Test Accuracy"
] | Modelling Interaction of Sentence Pair with coupled-LSTMs |
Recurrent neural networks (RNNs) process input text sequentially and model
the conditional transition between word tokens. In contrast, the advantages of
recursive networks include that they explicitly model the compositionality and
the recursive structure of natural language. However, the current recursive
architecture is limited by its dependence on syntactic tree. In this paper, we
introduce a robust syntactic parsing-independent tree structured model, Neural
Tree Indexers (NTI) that provides a middle ground between the sequential RNNs
and the syntactic tree-based recursive models. NTI constructs a full n-ary tree
by processing the input text with its node function in a bottom-up fashion.
Attention mechanism can then be applied to both structure and node function. We
implemented and evaluated a binary-tree model of NTI, showing the model achieved
the state-of-the-art performance on three different NLP tasks: natural language
inference, answer sentence selection, and sentence classification,
outperforming state-of-the-art recurrent and recursive neural networks. | [] | [
"Natural Language Inference",
"Sentence Classification"
] | [] | [
"SNLI"
] | [
"Parameters",
"% Train Accuracy",
"% Test Accuracy"
] | Neural Tree Indexers for Text Understanding |
Convolutional Neural Networks (CNNs) conduct image classification by activating dominant features that correlate with labels. When the training and testing data are under similar distributions, their dominant features are similar, which usually facilitates decent performance on the testing data. The performance nonetheless degrades when the model is tested on samples from different distributions, leading to the challenges in cross-domain image classification. We introduce a simple training heuristic, Representation Self-Challenging (RSC), that significantly improves the generalization of CNNs to out-of-domain data. RSC iteratively challenges (discards) the dominant features activated on the training data, and forces the network to activate the remaining features that correlate with labels. This process appears to activate feature representations applicable to out-of-domain data without prior knowledge of the new domain and without learning extra network parameters. We present theoretical properties and conditions of RSC for improving cross-domain generalization. The experiments endorse the simple, effective and architecture-agnostic nature of our RSC method. | [] | [
"Domain Generalization",
"Image Classification"
] | [] | [
"VLCS",
"Office-Home",
"PACS"
] | [
"Average Accuracy"
] | Self-Challenging Improves Cross-Domain Generalization |
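A minimal PyTorch sketch of the self-challenging step described above: the spatial feature responses most aligned with the gradient of the true-class score are muted, and the classification loss is computed on the muted features so the network must rely on what remains. The tiny backbone/classifier and the 33% muting ratio are illustrative assumptions.

```python
# Sketch: mute the most gradient-sensitive spatial features before computing the loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

def rsc_loss(backbone, classifier, x, y, drop_ratio=0.33):
    feats = backbone(x)                                   # (B, C, H, W)
    logits = classifier(feats)
    score = logits.gather(1, y.unsqueeze(1)).sum()        # true-class scores
    grads, = torch.autograd.grad(score, feats, retain_graph=True)
    saliency = (grads * feats).sum(dim=1)                 # (B, H, W) channel-pooled
    k = int(drop_ratio * saliency[0].numel())
    thresh = saliency.flatten(1).topk(k, dim=1).values[:, -1]        # per-sample cutoff
    mask = (saliency < thresh.view(-1, 1, 1)).float().unsqueeze(1)   # mute top responses
    return F.cross_entropy(classifier(feats * mask), y)

backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 7))
x, y = torch.randn(4, 3, 32, 32), torch.randint(0, 7, (4,))
print(rsc_loss(backbone, classifier, x, y).item())
```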
We present a new two-stage 3D object detection framework, named sparse-to-dense 3D Object Detector (STD). The first stage is a bottom-up proposal generation network that uses raw point cloud as input to generate accurate proposals by seeding each point with a new spherical anchor. It achieves a high recall with less computation compared with prior works. Then, PointsPool is applied for generating proposal features by transforming their interior point features from sparse expression to compact representation, which saves even more computation time. In box prediction, which is the second stage, we implement a parallel intersection-over-union (IoU) branch to increase awareness of localization accuracy, resulting in further improved performance. We conduct experiments on KITTI dataset, and evaluate our method in terms of 3D object and Bird's Eye View (BEV) detection. Our method outperforms other state-of-the-arts by a large margin, especially on the hard set, with inference speed more than 10 FPS. | [] | [
"3D Object Detection",
"Object Detection"
] | [] | [
"KITTI Cyclists Hard",
"KITTI Pedestrians Hard",
"KITTI Cars Moderate",
"KITTI Cyclists Moderate",
"KITTI Cars Hard",
"KITTI Pedestrians Moderate",
"KITTI Pedestrians Easy",
"KITTI Cyclists Easy",
"KITTI Cars Easy"
] | [
"AP"
] | STD: Sparse-to-Dense 3D Object Detector for Point Cloud |
This paper proposes dynamic chunk reader (DCR), an end-to-end neural reading
comprehension (RC) model that is able to extract and rank a set of answer
candidates from a given document to answer questions. DCR is able to predict
answers of variable lengths, whereas previous neural RC models primarily
focused on predicting single tokens or entities. DCR encodes a document and an
input question with recurrent neural networks, and then applies a word-by-word
attention mechanism to acquire question-aware representations for the document,
followed by the generation of chunk representations and a ranking module to
propose the top-ranked chunk as the answer. Experimental results show that DCR
achieves state-of-the-art exact match and F1 scores on the SQuAD dataset. | [] | [
"Question Answering",
"Reading Comprehension"
] | [] | [
"SQuAD1.1 dev",
"SQuAD1.1"
] | [
"EM",
"F1"
] | End-to-End Answer Chunk Extraction and Ranking for Reading Comprehension |
Keyword spotting (KWS) is a major component of human–computer interaction for smart on-device terminals and service robots, the purpose of which is to maximize the detection accuracy while keeping footprint size small. In this paper, based on the powerful ability of DenseNet on extracting local feature-maps, we propose a new network architecture (DenseNet-BiLSTM) for KWS. In our DenseNet-BiLSTM, the DenseNet is primarily applied to obtain local features, while the BiLSTM is used to grab time series features. In general, the DenseNet is used in computer vision tasks, and it may corrupt contextual information for speech audios. In order to make DenseNet suitable for KWS, we propose a variant DenseNet, called DenseNet-Speech, which removes the pool on the time dimension in transition layers to preserve speech time series information. In addition, our DenseNet-Speech uses less dense blocks and filters to keep the model small, thereby reducing time consumption for mobile devices. The experimental results show that feature-maps from DenseNet-Speech maintain time series information well. Our method outperforms the state-of-the-art methods in terms of accuracy on the Google Speech Commands dataset. DenseNet-BiLSTM is able to achieve the accuracy of 96.6% for the 20-commands recognition task with 223K trainable parameters. | [] | [
"Keyword Spotting",
"Time Series"
] | [] | [
"Google Speech Commands"
] | [
"Google Speech Commands V2 20"
] | Effective Combination of DenseNet and BiLSTM for Keyword Spotting |
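The key architectural point above, pooling only along the frequency axis so the time axis survives for the recurrent part, can be sketched as below. This uses a plain CNN stand-in rather than the DenseNet-Speech blocks, and all sizes are illustrative.

```python
# Sketch: frequency-only pooling in the CNN, then a BiLSTM over preserved time steps.
import torch
import torch.nn as nn

class ConvBiLSTMKWS(nn.Module):
    def __init__(self, n_mels=40, num_keywords=20):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 2)),        # pool frequency only, keep time
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 2)),
        )
        self.bilstm = nn.LSTM(64 * (n_mels // 4), 128,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 128, num_keywords)

    def forward(self, x):                # x: (B, 1, time, n_mels)
        f = self.conv(x)                 # (B, C, time, n_mels // 4)
        b, c, t, m = f.shape
        f = f.permute(0, 2, 1, 3).reshape(b, t, c * m)   # one feature vector per frame
        out, _ = self.bilstm(f)
        return self.head(out[:, -1])     # classify from the last time step

x = torch.randn(2, 1, 101, 40)           # batch of log-mel spectrogram patches
print(ConvBiLSTMKWS()(x).shape)          # torch.Size([2, 20])
```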
Self-attention networks have proven to be of profound value for its strength
of capturing global dependencies. In this work, we propose to model localness
for self-attention networks, which enhances the ability of capturing useful
local context. We cast localness modeling as a learnable Gaussian bias, which
indicates the center and scope of the local region to which more attention should be paid.
The bias is then incorporated into the original attention distribution to form
a revised distribution. To maintain the strength of capturing long distance
dependencies and enhance the ability of capturing short-range dependencies, we
only apply localness modeling to lower layers of self-attention networks.
Quantitative and qualitative analyses on Chinese-English and English-German
translation tasks demonstrate the effectiveness and universality of the
proposed approach. | [] | [] | [] | [
"WMT2014 English-German"
] | [
"BLEU score"
] | Modeling Localness for Self-Attention Networks |
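As a rough illustration of the mechanism described above, the sketch below adds a Gaussian bias to standard scaled dot-product attention. In the paper the center and window size are predicted from the hidden states; here they are simply passed in as arguments, which is an assumption of this sketch.

```python
import math
import torch

def localness_attention(q, k, v, center, width):
    """q, k, v: (batch, heads, length, d); center, width: (batch, heads, length).
    Adds a Gaussian bias -(j - p_i)^2 / (2 * sigma_i^2) to the attention logits."""
    d = q.size(-1)
    scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(d)
    pos = torch.arange(k.size(-2), device=q.device, dtype=q.dtype)    # key positions
    bias = -((pos - center.unsqueeze(-1)) ** 2) / (2 * width.unsqueeze(-1) ** 2 + 1e-9)
    attn = torch.softmax(scores + bias, dim=-1)                        # revised distribution
    return torch.matmul(attn, v)
```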
Change detection in high resolution remote sensing images is crucial to the understanding of land surface changes. As traditional change detection methods are not suitable for the task considering the challenges brought by the fine image details and complex texture features conveyed in high resolution images, a number of deep learning-based change detection methods have been proposed to improve the change detection performance. Although the state-of-the-art deep feature based methods outperform all the other deep learning-based change detection methods, networks in the existing deep feature based methods are mostly modified from architectures that are originally proposed for single-image semantic segmentation. Transferring these networks for change detection task still poses some key issues. In this paper, we propose a deeply supervised image fusion network (IFN) for change detection in high resolution bi-temporal remote sensing images. Specifically, highly representative deep features of bi-temporal images are firstly extracted through a fully convolutional two-stream architecture. Then, the extracted deep features are fed into a deeply supervised difference discrimination network (DDN) for change detection. To improve boundary completeness and internal compactness of objects in the output change maps, multi-level deep features of raw images are fused with image difference features by means of attention modules for change map reconstruction. DDN is further enhanced by directly introducing change map losses to intermediate layers in the network, and the whole network is trained in an end-to-end manner. IFN is applied to a publicly available dataset, as well as a challenging dataset consisting of multi-source bi-temporal images from Google Earth covering different cities in China. Both visual interpretation and quantitative assessment confirm that IFN outperforms four benchmark methods derived from the literature, by returning changed areas with complete boundaries and high internal compactness compared to the state-of-the-art methods. | [] | [
"Change detection for remote sensing images",
"Semantic Segmentation"
] | [] | [
"CDD Dataset (season-varying)"
] | [
"F1-Score"
] | A deeply supervised image fusion network for change detection in high resolution bi-temporal remote sensing images |
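The following is a minimal, hedged sketch of the general pattern the abstract describes: a shared two-stream encoder, feature differencing, and an extra deeply supervised change-map head. It is not the IFN architecture and omits the attention-based fusion of raw-image features.

```python
import torch
import torch.nn as nn

class TinySiameseChangeNet(nn.Module):
    # Toy two-stream change detector with shared weights and deep supervision.
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.aux_head = nn.Conv2d(32, 1, 1)                    # intermediate supervision
        self.refine = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, 1, 1)

    def forward(self, img_t1, img_t2):
        f1, f2 = self.encoder(img_t1), self.encoder(img_t2)
        diff = torch.abs(f1 - f2)                              # image difference features
        aux = torch.sigmoid(self.aux_head(diff))
        out = torch.sigmoid(self.head(self.refine(diff)))
        return out, aux                                        # both receive change-map losses
```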
Implementation and experiments of graph embedding algorithms: DeepWalk, LINE (Large-scale Information Network Embedding), node2vec, SDNE (Structural Deep Network Embedding), and struc2vec. | [] | [
"Graph Embedding",
"Network Embedding",
"Node Classification"
] | [] | [
"BlogCatalog",
"Wikipedia"
] | [
"Macro-F1",
"Accuracy"
] | struc2vec: Learning Node Representations from Structural Identity |
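A common way such embeddings are evaluated for node classification (as on the BlogCatalog and Wikipedia benchmarks) is to train a linear classifier on the learned vectors. The sketch below assumes the embeddings and labels have already been exported to the hypothetical files `embeddings.npy` and `labels.npy`, and treats the task as single-label for simplicity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, accuracy_score

# X: (num_nodes, dim) node embeddings learned by e.g. struc2vec; y: node labels.
X = np.load("embeddings.npy")      # hypothetical file names
y = np.load("labels.npy")

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("Macro-F1:", f1_score(y_te, pred, average="macro"))
print("Accuracy:", accuracy_score(y_te, pred))
```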
In this paper, we propose the TBCNN-pair model to recognize entailment and
contradiction between two sentences. In our model, a tree-based convolutional
neural network (TBCNN) captures sentence-level semantics; then heuristic
matching layers like concatenation, element-wise product/difference combine the
information in individual sentences. Experimental results show that our model
outperforms existing sentence encoding-based approaches by a large margin. | [] | [
"Natural Language Inference"
] | [] | [
"SNLI"
] | [
"Parameters",
"% Train Accuracy",
"% Test Accuracy"
] | Natural Language Inference by Tree-Based Convolution and Heuristic Matching |
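The heuristic matching layer itself is straightforward; a minimal sketch, with an assumed hidden size and a plain MLP classifier on top of the two sentence vectors, could look like this:

```python
import torch
import torch.nn as nn

class HeuristicMatcher(nn.Module):
    # Combines two sentence vectors with concatenation, element-wise product
    # and difference, then classifies entailment / contradiction / neutral.
    def __init__(self, dim, hidden=200, num_classes=3):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4 * dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes))

    def forward(self, h1, h2):
        feats = torch.cat([h1, h2, h1 * h2, h1 - h2], dim=-1)
        return self.mlp(feats)     # logits over the three SNLI labels
```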
In Word Sense Disambiguation (WSD), the predominant approach generally
involves a supervised system trained on sense annotated corpora. The limited
quantity of such corpora however restricts the coverage and the performance of
these systems. In this article, we propose a new method that solves these
issues by taking advantage of the knowledge present in WordNet, and especially
the hypernymy and hyponymy relationships between synsets, in order to reduce
the number of different sense tags that are necessary to disambiguate all words
of the lexical database. Our method leads to state of the art results on most
WSD evaluation tasks, while improving the coverage of supervised systems,
reducing the training time and the size of the models, without additional
training data. In addition, we report results that significantly outperform the state of the art when our method is combined with an ensemble technique and the WordNet Gloss Tagged corpus is added as training data. | [] | [
"Word Sense Disambiguation"
] | [] | [
"Supervised:",
"SensEval 3 Task 1",
"SemEval 2013 Task 12",
"SemEval 2007 Task 17",
"SemEval 2015 Task 13",
"SemEval 2007 Task 7",
"SensEval 2"
] | [
"Senseval 2",
"Senseval 3",
"SemEval 2013",
"F1",
"SemEval 2007",
"SemEval 2015"
] | Improving the Coverage and the Generalization Ability of Neural Word Sense Disambiguation through Hypernymy and Hyponymy Relationships |
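A hedged illustration of the underlying idea of collapsing sense tags along hypernymy paths, using NLTK's WordNet interface; the paper's actual vocabulary-reduction procedure is more involved, and `allowed_tags` here stands for a hypothetical pre-selected tag set.

```python
from nltk.corpus import wordnet as wn   # requires nltk.download("wordnet")

def compress_sense(synset, allowed_tags):
    """Map a synset to the first ancestor (including itself, breadth-first)
    whose name is in `allowed_tags`; fall back to the synset itself."""
    frontier, seen = [synset], set()
    while frontier:
        s = frontier.pop(0)
        if s.name() in allowed_tags:
            return s.name()
        if s.name() not in seen:
            seen.add(s.name())
            frontier.extend(s.hypernyms())
    return synset.name()

# Example: compress_sense(wn.synset("dog.n.01"), {"animal.n.01", "entity.n.01"})
```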
Most existing person re-identification (re-id) methods rely on supervised
model learning on per-camera-pair manually labelled pairwise training data.
This leads to poor scalability in a practical re-id deployment, due to the lack
of exhaustive identity labelling of positive and negative image pairs for every
camera-pair. In this work, we present an unsupervised re-id deep learning
approach. It is capable of incrementally discovering and exploiting the
underlying re-id discriminative information from automatically generated person
tracklet data end-to-end. We formulate an Unsupervised Tracklet Association
Learning (UTAL) framework. This is by jointly learning within-camera tracklet
discrimination and cross-camera tracklet association in order to maximise the
discovery of tracklet identity matching both within and across camera views.
Extensive experiments demonstrate the superiority of the proposed model over
the state-of-the-art unsupervised learning and domain adaptation person re-id
methods on eight benchmarking datasets. | [] | [
"Domain Adaptation",
"Person Re-Identification"
] | [] | [
"PRID2011",
"iLIDS-VID",
"CUHK03",
"DukeTracklet",
"MSMT17",
"DukeMTMC-reID",
"MARS",
"Market-1501"
] | [
"mAP",
"Rank-10",
"MAP",
"Rank-1",
"Rank-20",
"Rank-5"
] | Unsupervised Tracklet Person Re-Identification |
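As a rough sketch of the within-camera part of the formulation, one classifier per camera can be trained over that camera's automatically generated tracklet labels; the cross-camera association objective of UTAL is omitted here, and the head design is an assumption.

```python
import torch.nn as nn

class PerCameraTrackletHeads(nn.Module):
    # One softmax classifier per camera over that camera's tracklet labels.
    def __init__(self, feat_dim, tracklets_per_camera):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, n) for n in tracklets_per_camera])
        self.ce = nn.CrossEntropyLoss()

    def forward(self, feats, cam_ids, tracklet_labels):
        loss = feats.new_zeros(())
        for cam, head in enumerate(self.heads):
            mask = cam_ids == cam
            if mask.any():
                loss = loss + self.ce(head(feats[mask]), tracklet_labels[mask])
        return loss
```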
We propose a novel crowd counting approach that leverages abundantly
available unlabeled crowd imagery in a learning-to-rank framework. To induce a
ranking of cropped images, we use the observation that any sub-image of a
crowded scene image is guaranteed to contain the same number or fewer persons
than the super-image. This allows us to address the problem of limited size of
existing datasets for crowd counting. We collect two crowd scene datasets from
Google using keyword searches and query-by-example image retrieval,
respectively. We demonstrate how to efficiently learn from these unlabeled
datasets by incorporating learning-to-rank in a multi-task network which
simultaneously ranks images and estimates crowd density maps. Experiments on
two of the most challenging crowd counting datasets show that our approach
obtains state-of-the-art results. | [] | [
"Crowd Counting",
"Image Retrieval",
"Learning-To-Rank"
] | [] | [
"UCF CC 50",
"ShanghaiTech A",
"ShanghaiTech B"
] | [
"MAE"
] | Leveraging Unlabeled Data for Crowd Counting by Learning to Rank |
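The core ranking constraint is easy to express as a pairwise hinge loss on estimated counts, i.e. the sums of the predicted density maps; a minimal sketch, with the margin left as a free parameter:

```python
import torch

def ranking_loss(density_sub, density_super, margin=0.0):
    """Hinge loss enforcing count(sub-image) <= count(containing image).
    density_*: predicted density maps of shape (batch, 1, H, W)."""
    count_sub = density_sub.sum(dim=(1, 2, 3))
    count_super = density_super.sum(dim=(1, 2, 3))
    return torch.clamp(count_sub - count_super + margin, min=0).mean()
```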
Person Re-Identification aims to retrieve person identities from images captured by multiple cameras or the same cameras in different time instances and locations. Because of its importance in many vision applications from surveillance to human-machine interaction, person re-identification methods need to be reliable and fast. While more and more deep architectures are proposed for increasing performance, those methods also increase overall model complexity. This paper proposes a lightweight network that combines global, part-based, and channel features in a unified multi-branch architecture that builds on the resource-efficient OSNet backbone. Using a well-founded combination of training techniques and design choices, our final model achieves state-of-the-art results on CUHK03 labeled, CUHK03 detected, and Market-1501 with 85.1% mAP / 87.2% rank1, 82.4% mAP / 84.9% rank1, and 91.5% mAP / 96.3% rank1, respectively. | [] | [
"Person Re-Identification"
] | [] | [
"CUHK03 detected",
"Market-1501",
"CUHK03 labeled"
] | [
"Rank-1",
"MAP"
] | Lightweight Multi-Branch Network for Person Re-Identification |
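Purely as an illustration of combining global, part-based, and channel features, a toy multi-branch pooling head might look as follows; it does not reproduce the paper's branch designs or the OSNet backbone.

```python
import torch
import torch.nn as nn

class MultiBranchHead(nn.Module):
    # Toy head: global pooling, horizontal part pooling, and a channel split.
    def __init__(self, num_parts=2):
        super().__init__()
        self.num_parts = num_parts
        self.gap = nn.AdaptiveAvgPool2d(1)

    def forward(self, fmap):                                        # (B, C, H, W)
        global_feat = self.gap(fmap).flatten(1)                     # (B, C)
        parts = fmap.chunk(self.num_parts, dim=2)                   # split height-wise
        part_feats = [self.gap(p).flatten(1) for p in parts]
        halves = fmap.chunk(2, dim=1)                               # channel split
        chan_feats = [self.gap(h).flatten(1) for h in halves]
        return torch.cat([global_feat, *part_feats, *chan_feats], dim=1)
```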
People live in a 3D world. However, existing works on person re-identification (re-id) mostly consider the semantic representation learning in a 2D space, intrinsically limiting the understanding of people. In this work, we address this limitation by exploring the prior knowledge of the 3D body structure. Specifically, we project 2D images to a 3D space and introduce a novel parameter-efficient Omni-scale Graph Network (OG-Net) to learn the pedestrian representation directly from 3D point clouds. OG-Net effectively exploits the local information provided by sparse 3D points and takes advantage of the structure and appearance information in a coherent manner. With the help of 3D geometry information, we can learn a new type of deep re-id feature free from noisy variants, such as scale and viewpoint. To our knowledge, we are among the first attempts to conduct person re-identification in the 3D space. We demonstrate through extensive experiments that the proposed method (1) eases the matching difficulty in the traditional 2D space, (2) exploits the complementary information of 2D appearance and 3D structure, (3) achieves competitive results with limited parameters on four large-scale person re-id datasets, and (4) has good scalability to unseen datasets. | [] | [
"3D Point Cloud Classification",
"Person Re-Identification",
"Representation Learning"
] | [] | [
"MSMT17",
"DukeMTMC-reID->Market-1501",
"ModelNet40",
"DukeMTMC-reID",
"Market-1501->DukeMTMC-reID",
"Market-1501"
] | [
"Overall Accuracy",
"mAP",
"MAP",
"Rank-1",
"Mean Accuracy"
] | Parameter-Efficient Person Re-identification in the 3D Space |
Person re-identification (re-ID) has become increasingly popular in the
community due to its application and research significance. It aims at spotting
a person of interest in other cameras. In the early days, hand-crafted
algorithms and small-scale evaluation were predominantly reported. Recent years
have witnessed the emergence of large-scale datasets and deep learning systems
which make use of large data volumes. Considering different tasks, we classify
most current re-ID methods into two classes, i.e., image-based and video-based;
in both tasks, hand-crafted and deep learning systems will be reviewed.
Moreover, two new re-ID tasks which are much closer to real-world applications
are described and discussed, i.e., end-to-end re-ID and fast re-ID in very
large galleries. This paper: 1) introduces the history of person re-ID and its
relationship with image classification and instance retrieval; 2) surveys a
broad selection of the hand-crafted systems and the large-scale methods in both
image- and video-based re-ID; 3) describes critical future directions in
end-to-end re-ID and fast retrieval in large galleries; and 4) finally briefs
some important yet under-developed issues. | [] | [
"Image Classification",
"Person Re-Identification"
] | [] | [
"DukeMTMC-reID",
"Market-1501"
] | [
"Rank-1",
"MAP"
] | Person Re-identification: Past, Present and Future |
We introduce associative embedding, a novel method for supervising
convolutional neural networks for the task of detection and grouping. A number
of computer vision problems can be framed in this manner including multi-person
pose estimation, instance segmentation, and multi-object tracking. Usually the
grouping of detections is achieved with multi-stage pipelines, instead we
propose an approach that teaches a network to simultaneously output detections
and group assignments. This technique can be easily integrated into any
state-of-the-art network architecture that produces pixel-wise predictions. We
show how to apply this method to both multi-person pose estimation and instance
segmentation and report state-of-the-art performance for multi-person pose on
the MPII and MS-COCO datasets. | [] | [
"Instance Segmentation",
"Keypoint Detection",
"Multi-Person Pose Estimation",
"Pose Estimation"
] | [] | [
"COCO",
"MPII Multi-Person",
"COCO test-dev"
] | [
"Test AP",
"ARM",
"APM",
"AR75",
"AR50",
"ARL",
"AP75",
"AP",
"APL",
"[email protected]",
"AP50",
"AR"
] | Associative Embedding: End-to-End Learning for Joint Detection and Grouping |
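The grouping supervision can be summarized as a pull term (tags of one person's keypoints should agree) and a push term (different people's mean tags should separate); a minimal sketch, with the exact weighting treated as an assumption:

```python
import torch

def grouping_loss(tags, sigma=1.0):
    """tags: list over people; each entry is a 1-D tensor holding the embedding
    tag of every detected keypoint of that person."""
    means = torch.stack([t.mean() for t in tags])
    # pull: each person's tags should match their group mean
    pull = torch.stack([((t - m) ** 2).mean() for t, m in zip(tags, means)]).mean()
    # push: group means of different people should be far apart
    diff = means.unsqueeze(0) - means.unsqueeze(1)
    push = torch.exp(-diff ** 2 / (2 * sigma ** 2))
    n = means.numel()
    push = (push.sum() - n) / max(n * (n - 1), 1)     # exclude the diagonal
    return pull + push
```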
Discriminative model learning for image denoising has recently been attracting considerable attention due to its favorable denoising performance.
In this paper, we take one step forward by investigating the construction of
feed-forward denoising convolutional neural networks (DnCNNs) to embrace the
progress in very deep architecture, learning algorithm, and regularization
method into image denoising. Specifically, residual learning and batch
normalization are utilized to speed up the training process as well as boost
the denoising performance. Different from the existing discriminative denoising
models which usually train a specific model for additive white Gaussian noise
(AWGN) at a certain noise level, our DnCNN model is able to handle Gaussian
denoising with unknown noise level (i.e., blind Gaussian denoising). With the
residual learning strategy, DnCNN implicitly removes the latent clean image in
the hidden layers. This property motivates us to train a single DnCNN model to
tackle with several general image denoising tasks such as Gaussian denoising,
single image super-resolution and JPEG image deblocking. Our extensive
experiments demonstrate that our DnCNN model can not only exhibit high
effectiveness in several general image denoising tasks, but also be efficiently
implemented by benefiting from GPU computing. | [] | [
"Denoising",
"Image Denoising",
"Image Super-Resolution",
"JPEG Artifact Correction",
"Super-Resolution"
] | [] | [
"Darmstadt Noise Dataset",
"Urban100 sigma15",
"BSD100 - 4x upscaling",
"Set14 - 2x upscaling",
"BSD100 - 2x upscaling",
"Urban100 - 3x upscaling",
"LIVE1 (Quality 40 Grayscale)",
"BSD68 sigma25",
"Classic5 (Quality 20 Grayscale)",
"Set5 - 2x upscaling",
"Urban100 - 4x upscaling",
"Set5 - 3x upscaling",
"Urban100 sigma25",
"Set14 - 4x upscaling",
"Set14 - 3x upscaling",
"Live1 (Quality 10 Grayscale)",
"Classic5 (Quality 10 Grayscale)",
"LIVE1 (Quality 20 Grayscale)",
"Set5 - 4x upscaling",
"LIVE1 (Quality 30 Grayscale)",
"Classic5 (Quality 40 Grayscale)",
"BSD68 sigma15",
"BSD100 - 3x upscaling",
"Urban100 - 2x upscaling",
"CBSD68 sigma35",
"Classic5 (Quality 30 Grayscale)"
] | [
"SSIM",
"PSNR"
] | Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising |
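A minimal DnCNN-style network is easy to write down from the description: a first conv + ReLU, a stack of conv + batch norm + ReLU blocks, and a final conv that predicts the residual noise. The depth and width below follow the commonly used 17-layer, 64-channel setting, which is an assumption of this sketch rather than a claim about the released model.

```python
import torch.nn as nn

def dncnn(depth=17, channels=64, image_channels=1):
    """Residual denoiser: the network predicts the noise, so the clean image
    is recovered as `noisy - model(noisy)`."""
    layers = [nn.Conv2d(image_channels, channels, 3, padding=1),
              nn.ReLU(inplace=True)]
    for _ in range(depth - 2):
        layers += [nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                   nn.BatchNorm2d(channels),
                   nn.ReLU(inplace=True)]
    layers += [nn.Conv2d(channels, image_channels, 3, padding=1)]
    return nn.Sequential(*layers)
```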
Sparse matrix factorization is a popular tool to obtain interpretable data
decompositions, which are also effective for data completion and denoising. Its applicability to large datasets has been addressed with online and randomized methods that reduce the complexity in one of the matrix dimensions, but not in both. In this paper, we tackle very large matrices in both dimensions. We propose a new factorization method that scales gracefully to terabyte-scale datasets that could not be processed by previous
algorithms in a reasonable amount of time. We demonstrate the efficiency of our
approach on massive functional Magnetic Resonance Imaging (fMRI) data, and on
matrix completion problems for recommender systems, where we obtain significant
speed-ups compared to state-of-the-art coordinate descent methods. | [] | [
"Dictionary Learning",
"Matrix Completion",
"Recommendation Systems"
] | [] | [
"MovieLens 1M",
"MovieLens 10M"
] | [
"RMSE"
] | Dictionary Learning for Massive Matrix Factorization |
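For context, scikit-learn's online dictionary learning streams over samples only; the sketch below shows that baseline on synthetic data, whereas the paper's method additionally subsamples the other (feature) dimension, which scikit-learn does not implement.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.RandomState(0)
X = rng.randn(1000, 200)                      # synthetic data: samples x features

dico = MiniBatchDictionaryLearning(n_components=50, alpha=1.0,
                                   batch_size=32, random_state=0)
codes = dico.fit_transform(X)                 # sparse codes, shape (1000, 50)
D = dico.components_                          # learned dictionary, shape (50, 200)
print(codes.shape, D.shape)
```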