abstract (string, 13 to 4.33k chars) | field (sequence) | task (sequence) | method (sequence) | dataset (sequence) | metric (sequence) | title (string, 10 to 194 chars) |
---|---|---|---|---|---|---|
The problem of finding the missing values of a matrix given a few of its
entries, called matrix completion, has gathered a lot of attention in
recent years. Although the problem under the standard low-rank assumption is
NP-hard, Cand\`es and Recht showed that it can be exactly relaxed if the number
of observed entries is sufficiently large. In this work, we introduce a novel
matrix completion model that makes use of proximity information about rows and
columns by assuming they form communities. This assumption makes sense in
several real-world problems like in recommender systems, where there are
communities of people sharing preferences, while products form clusters that
receive similar ratings. Our main goal is thus to find a low-rank solution that
is structured by the proximities of rows and columns encoded by graphs. We
borrow ideas from manifold learning to constrain our solution to be smooth on
these graphs, in order to implicitly force row and column proximities. Our
matrix recovery model is formulated as a convex non-smooth optimization
problem, for which a well-posed iterative scheme is provided. We study and
evaluate the proposed matrix completion on synthetic and real data, showing
that the proposed structured low-rank recovery model outperforms the standard
matrix completion model in many situations. | [] | [
"Matrix Completion",
"Recommendation Systems"
] | [] | [
"MovieLens 100K"
] | [
"RMSE (u1 Splits)"
] | Matrix Completion on Graphs |
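The row above describes the recovery model only in words; as a rough illustration, here is a minimal numpy sketch of one objective consistent with that description: nuclear-norm low-rank recovery plus graph-Dirichlet smoothness on row and column graphs. The function name, the regularization weights `gamma_n`, `gamma_r`, `gamma_c`, and the plain objective evaluation (no solver) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def graph_mc_objective(X, M, mask, L_rows, L_cols,
                       gamma_n=1.0, gamma_r=1e-2, gamma_c=1e-2):
    """Evaluate a graph-regularized matrix-completion objective (sketch).

    X       : current estimate of the full matrix
    M, mask : observed entries and a boolean observation mask
    L_rows  : graph Laplacian encoding row (e.g. user) proximity
    L_cols  : graph Laplacian encoding column (e.g. item) proximity
    """
    data_fit = 0.5 * np.sum((mask * (X - M)) ** 2)        # fidelity on observed entries
    nuclear  = gamma_n * np.linalg.norm(X, ord='nuc')     # convex low-rank surrogate
    smooth_r = gamma_r * np.trace(X.T @ L_rows @ X)       # smoothness along the row graph
    smooth_c = gamma_c * np.trace(X @ L_cols @ X.T)       # smoothness along the column graph
    return data_fit + nuclear + smooth_r + smooth_c
```

A proximal splitting scheme (as the abstract's "well-posed iterative scheme" suggests) would alternate gradient steps on the smooth terms with a singular-value thresholding step for the nuclear norm.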
In this paper, we present a new open source toolkit for speech recognition, named CAT (CTC-CRF based ASR Toolkit). CAT inherits the data-efficiency of the hybrid approach and the simplicity of the E2E approach, providing a full-fledged implementation of CTC-CRFs and complete training and testing scripts for a number of English and Chinese benchmarks. Experiments show CAT obtains state-of-the-art results, which are comparable to the fine-tuned hybrid models in Kaldi but with a much simpler training pipeline. Compared to existing non-modularized E2E models, CAT performs better on limited-scale datasets, demonstrating its data efficiency. Furthermore, we propose a new method called contextualized soft forgetting, which enables CAT to do streaming ASR without accuracy degradation. We hope CAT, especially the CTC-CRF based framework and software, will be of broad interest to the community, and can be further explored and improved. | [] | [
"Speech Recognition"
] | [] | [
"Hub5'00 FISHER-SWBD",
"AISHELL-1",
"Hub5'00 SwitchBoard",
"WSJ eval92",
"WSJ eval93"
] | [
"CallHome",
"Hub5'00",
"SwitchBoard",
"Word Error Rate (WER)"
] | CAT: A CTC-CRF based ASR Toolkit Bridging the Hybrid and the End-to-end Approaches towards Data Efficiency and Low Latency |
While recent years have witnessed astonishing improvements in visual tracking
robustness, the advancements in tracking accuracy have been limited. As the
focus has been directed towards the development of powerful classifiers, the
problem of accurate target state estimation has been largely overlooked. In
fact, most trackers resort to a simple multi-scale search in order to estimate
the target bounding box. We argue that this approach is fundamentally limited
since target estimation is a complex task, requiring high-level knowledge about
the object.
We address this problem by proposing a novel tracking architecture,
consisting of dedicated target estimation and classification components. High-level
knowledge is incorporated into the target estimation through extensive
offline learning. Our target estimation component is trained to predict the
overlap between the target object and an estimated bounding box. By carefully
integrating target-specific information, our approach achieves previously
unseen bounding box accuracy. We further introduce a classification component
that is trained online to guarantee high discriminative power in the presence
of distractors. Our final tracking framework sets a new state-of-the-art on
five challenging benchmarks. On the new large-scale TrackingNet dataset, our
tracker ATOM achieves a relative gain of 15% over the previous best approach,
while running at over 30 FPS. Code and models are available at
https://github.com/visionml/pytracking. | [] | [
"Visual Object Tracking",
"Visual Tracking"
] | [] | [
"TrackingNet"
] | [
"Normalized Precision",
"Precision",
"Accuracy"
] | ATOM: Accurate Tracking by Overlap Maximization |
Human-Object Interaction (HOI) Detection is an important problem for understanding how humans interact with objects. In this paper, we explore Interactiveness Knowledge, which indicates whether a human and an object interact with each other or not. We found that interactiveness knowledge can be learned across HOI datasets, regardless of HOI category settings. Our core idea is to exploit an Interactiveness Network to learn the general interactiveness knowledge from multiple HOI datasets and perform Non-Interaction Suppression before HOI classification at inference. On account of the generalization of interactiveness, the interactiveness network is a transferable knowledge learner and can be combined with any HOI detection model to achieve desirable results. We extensively evaluate the proposed method on the HICO-DET and V-COCO datasets. Our framework outperforms state-of-the-art HOI detection results by a large margin, verifying its efficacy and flexibility. Code is available at https://github.com/DirtyHarryLYL/Transferable-Interactiveness-Network. | [] | [
"Human-Object Interaction Detection"
] | [] | [
"HICO-DET",
"Ambiguious-HOI",
"V-COCO"
] | [
"mAP",
"Time Per Frame(ms)",
"Time Per Frame (ms)",
"MAP"
] | Transferable Interactiveness Knowledge for Human-Object Interaction Detection |
Deep convolutional networks (CNNs) have achieved great success in face
completion to generate plausible facial structures. These methods, however, are
limited in maintaining global consistency among face components and recovering
fine facial details. On the other hand, reflectional symmetry is a prominent
property of face images and benefits face recognition and consistency modeling,
yet it remains uninvestigated in deep face completion. In this work, we leverage
two kinds of symmetry-enforcing subnets to form a symmetry-consistent CNN model
(i.e., SymmFCNet) for effective face completion. For missing pixels on only one
of the half-faces, an illumination-reweighted warping subnet is developed to
guide the warping and illumination reweighting of the other half-face. As for
missing pixels on both half-faces, we present a generative reconstruction
subnet together with a perceptual symmetry loss to enforce symmetry consistency
of recovered structures. The SymmFCNet is constructed by stacking the generative
reconstruction subnet upon the illumination-reweighted warping subnet, and can be
learned end-to-end from a training set of unaligned face images. Experiments show
that SymmFCNet can generate high-quality results on images with synthetic and
real occlusion, and performs favorably against state-of-the-art methods. | [] | [
"Face Recognition",
"Facial Inpainting"
] | [] | [
"WebFace",
"VggFace2"
] | [
"PSNR"
] | Learning Symmetry Consistent Deep CNNs for Face Completion |
In this paper, we propose a deep convolutional neural network for learning
the embeddings of images in order to capture the notion of visual similarity.
We present a deep siamese architecture that, when trained on positive and
negative pairs of images, learns an embedding that accurately approximates the
ranking of images by visual similarity. We also implement a
novel loss calculation method using an angular loss metric based on the
problem's requirements. The final embedding of the image is a combined
representation of the lower- and top-level embeddings. We use a fractional
distance matrix to calculate the distance between the learned embeddings in
n-dimensional space. In the end, we compare our architecture with other
existing deep architectures and go on to demonstrate the superiority of our
solution in terms of image retrieval by testing the architecture on four
datasets. We also show that our suggested network is better than other
traditional deep CNNs used for capturing fine-grained image similarities by
learning an optimum embedding. | [] | [
"Fine-Grained Visual Recognition",
"Image Retrieval",
"Product Recommendation",
"Recommendation Systems"
] | [] | [
"street2shop - topwear"
] | [
"Accuracy"
] | Retrieving Similar E-Commerce Images Using Deep Learning |
A key technical challenge in performing 6D object pose estimation from RGB-D
images is to fully leverage the two complementary data sources. Prior works
either extract information from the RGB image and depth separately or use
costly post-processing steps, limiting their performance in highly cluttered
scenes and real-time applications. In this work, we present DenseFusion, a
generic framework for estimating 6D pose of a set of known objects from RGB-D
images. DenseFusion is a heterogeneous architecture that processes the two data
sources individually and uses a novel dense fusion network to extract
pixel-wise dense feature embedding, from which the pose is estimated.
Furthermore, we integrate an end-to-end iterative pose refinement procedure
that further improves the pose estimation while achieving near real-time
inference. Our experiments show that our method outperforms state-of-the-art
approaches in two datasets, YCB-Video and LineMOD. We also deploy our proposed
method to a real robot to grasp and manipulate objects based on the estimated
pose. | [] | [
"6D Pose Estimation",
"6D Pose Estimation using RGBD",
"Pose Estimation"
] | [] | [
"LineMOD",
"YCB-Video"
] | [
"Mean ADD",
"ADDS AUC",
"Accuracy (ADD)"
] | DenseFusion: 6D Object Pose Estimation by Iterative Dense Fusion |
Image segmentation is an important task in many medical applications. Methods
based on convolutional neural networks attain state-of-the-art accuracy;
however, they typically rely on supervised training with large labeled
datasets. Labeling medical images requires significant expertise and time, and
typical hand-tuned approaches for data augmentation fail to capture the complex
variations in such images.
We present an automated data augmentation method for synthesizing labeled
medical images. We demonstrate our method on the task of segmenting magnetic
resonance imaging (MRI) brain scans. Our method requires only a single
segmented scan, and leverages other unlabeled scans in a semi-supervised
approach. We learn a model of transformations from the images, and use the
model along with the labeled example to synthesize additional labeled examples.
Each transformation is comprised of a spatial deformation field and an
intensity change, enabling the synthesis of complex effects such as variations
in anatomy and image acquisition procedures. We show that training a supervised
segmenter with these new examples provides significant improvements over
state-of-the-art methods for one-shot biomedical image segmentation. Our code
is available at https://github.com/xamyzhao/brainstorm. | [] | [
"Data Augmentation",
"Medical Image Segmentation",
"Semantic Segmentation"
] | [] | [
"T1-weighted MRI"
] | [
"Dice Score"
] | Data augmentation using learned transformations for one-shot medical image segmentation |
Hyper-relational knowledge graphs (KGs) (e.g., Wikidata) enable associating additional key-value pairs along with the main triple to disambiguate, or restrict the validity of, a fact. In this work, we propose a message-passing-based graph encoder - StarE - capable of modeling such hyper-relational KGs. Unlike existing approaches, StarE can encode an arbitrary amount of additional information (qualifiers) along with the main triple while keeping the semantic roles of qualifiers and triples intact. We also demonstrate that existing benchmarks for evaluating link prediction (LP) performance on hyper-relational KGs suffer from fundamental flaws and thus develop a new Wikidata-based dataset - WD50K. Our experiments demonstrate that a StarE-based LP model outperforms existing approaches across multiple benchmarks. We also confirm that leveraging qualifiers is vital for link prediction, with gains of up to 25 MRR points compared to triple-based representations. | [] | [
"Knowledge Graphs",
"Link Prediction"
] | [] | [
"WD50K",
"JF17K"
] | [
"Hit@1",
"Hit@5",
"MRR",
"Hit@10"
] | Message Passing for Hyper-Relational Knowledge Graphs |
Human action recognition remains a challenging task, partially due to the presence of large variations in the execution of actions. To address this issue, we propose a probabilistic model called the Hierarchical Dynamic Model (HDM). Leveraging a Bayesian framework, the model parameters are allowed to vary across different sequences of data, which increases the capacity of the model to adapt to intra-class variations in both the spatial and temporal extent of actions. Meanwhile, the generative learning process allows the model to preserve the distinctive dynamic pattern of each action class. Through Bayesian inference, we are able to quantify the uncertainty of the classification, providing insight during the decision process. Compared to state-of-the-art methods, our method not only achieves competitive recognition performance within individual datasets but also shows better generalization capability across different datasets. Experiments conducted on data with missing values also show the robustness of the proposed method.
| [] | [
"Action Recognition",
"Bayesian Inference",
"Multimodal Activity Recognition",
"Skeleton Based Action Recognition",
"Temporal Action Localization"
] | [] | [
"MSR Action3D",
"UPenn Action",
"Gaming 3D (G3D)",
"UTD-MHAD"
] | [
"Accuracy (CS)",
"Accuracy"
] | Bayesian Hierarchical Dynamic Model for Human Action Recognition |
Over the past decade, knowledge graphs have become popular for capturing structured domain knowledge. Relational learning models enable the prediction of missing links inside knowledge graphs. More specifically, latent distance approaches model the relationships among entities via a distance between latent representations. Translating embedding models (e.g., TransE) are among the most popular latent distance approaches, which use one distance function to learn multiple relation patterns. However, they are largely ineffective at capturing symmetric relations since the representation vector norm for all the symmetric relations becomes equal to zero. They also lose information when learning relations with reflexive patterns since they become symmetric and transitive. We propose the Multiple Distance Embedding model (MDE) that addresses these limitations, along with a framework to collaboratively combine variant latent distance-based terms. Our solution is based on two principles: 1) we use a limit-based loss instead of a margin ranking loss, and 2) by learning independent embedding vectors for each of the terms we can collectively train and predict using contradicting distance terms. We further demonstrate that MDE allows modeling relations with (anti)symmetry, inversion, and composition patterns. We propose MDE as a neural network model that allows us to map non-linear relations between the embedding vectors and the expected output of the score function. Our empirical results show that MDE performs competitively with state-of-the-art embedding models on several benchmark datasets. | [] | [
"Knowledge Graphs",
"Link Prediction",
"Relational Pattern Learning",
"Relational Reasoning"
] | [] | [
"FB15k",
"WN18RR",
"WN18",
"FB15k-237"
] | [
"Hits@10",
"MR",
"MRR"
] | MDE: Multiple Distance Embeddings for Link Prediction in Knowledge Graphs |
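The MDE row above contrasts a limit-based loss with the usual margin ranking loss. The sketch below shows one common form of that contrast for distance-style scores (lower is better for true triples); the limits `gamma_pos`/`gamma_neg`, the mean reduction, and the function names are illustrative assumptions rather than the paper's exact values.

```python
import torch
import torch.nn.functional as F

def limit_based_loss(pos_scores, neg_scores, gamma_pos=2.0, gamma_neg=4.0):
    """Limit-based loss for distance-style KG scores (lower = more plausible).

    Positive triples are pushed below an absolute limit gamma_pos and negatives
    above gamma_neg, instead of only enforcing a relative margin between them.
    """
    pos_term = F.relu(pos_scores - gamma_pos).mean()   # penalize positives above their limit
    neg_term = F.relu(gamma_neg - neg_scores).mean()   # penalize negatives below their limit
    return pos_term + neg_term

def margin_ranking_loss(pos_scores, neg_scores, margin=1.0):
    """Conventional margin ranking loss, shown for comparison."""
    return F.relu(margin + pos_scores - neg_scores).mean()
```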
Aspect-based sentiment analysis produces a list of aspect terms and their corresponding sentiments for a natural language sentence. This task is usually done in a pipeline manner, with aspect term extraction performed first, followed by sentiment predictions toward the extracted aspect terms. While easier to develop, such an approach does not fully exploit joint information from the two subtasks and does not use all available sources of training information that might be helpful, such as document-level labeled sentiment corpus. In this paper, we propose an interactive multi-task learning network (IMN) which is able to jointly learn multiple related tasks simultaneously at both the token level as well as the document level. Unlike conventional multi-task learning methods that rely on learning common features for the different tasks, IMN introduces a message passing architecture where information is iteratively passed to different tasks through a shared set of latent variables. Experimental results demonstrate superior performance of the proposed method against multiple baselines on three benchmark datasets. | [] | [
"Aspect-Based Sentiment Analysis",
"Multi-Task Learning",
"Sentiment Analysis"
] | [] | [
"SemEval 2014 Task 4 Subtask 1+2",
"SemEval 2014 Task 4 Laptop",
"SemEval 2014 Task 4 Sub Task 2"
] | [
"Laptop (Acc)",
"Restaurant (Acc)",
"F1",
"Mean Acc (Restaurant + Laptop)"
] | An Interactive Multi-Task Learning Network for End-to-End Aspect-Based Sentiment Analysis |
Conventional methods for object detection typically require a substantial amount of training data and preparing such high-quality training data is very labor-intensive. In this paper, we propose a novel few-shot object detection network that aims at detecting objects of unseen categories with only a few annotated examples. Central to our method are our Attention-RPN, Multi-Relation Detector and Contrastive Training strategy, which exploit the similarity between the few shot support set and query set to detect novel objects while suppressing false detection in the background. To train our network, we contribute a new dataset that contains 1000 categories of various objects with high-quality annotations. To the best of our knowledge, this is one of the first datasets specifically designed for few-shot object detection. Once our few-shot network is trained, it can detect objects of unseen categories without further training or fine-tuning. Our method is general and has a wide range of potential applications. We produce a new state-of-the-art performance on different datasets in the few-shot setting. The dataset link is https://github.com/fanq15/Few-Shot-Object-Detection-Dataset. | [] | [
"Few-Shot Object Detection",
"Object Detection"
] | [] | [
"MS-COCO (10-shot)"
] | [
"AP"
] | Few-Shot Object Detection with Attention-RPN and Multi-Relation Detector |
Learning an effective similarity measure between image representations is key to the success of recent advances in visual search tasks (e.g. verification or zero-shot learning). Although the metric learning part is well addressed, this metric is usually computed over the average of the extracted deep features. This representation is then trained to be discriminative. However, these deep features tend to be scattered across the feature space. Consequently, the representations are not robust to outliers, object occlusions, background variations, etc. In this paper, we tackle this scattering problem with a distribution-aware regularization named HORDE. This regularizer enforces visually-close images to have deep features with the same distribution which are well localized in the feature space. We provide a theoretical analysis supporting this regularization effect. We also show the effectiveness of our approach by obtaining state-of-the-art results on 4 well-known datasets (Cub-200-2011, Cars-196, Stanford Online Products and Inshop Clothes Retrieval). | [] | [
"Image Retrieval",
"Metric Learning"
] | [] | [
" CUB-200-2011",
"CARS196"
] | [
"R@1"
] | Metric Learning With HORDE: High-Order Regularizer for Deep Embeddings |
Pre-trained language models such as BERT have proven to be highly effective for natural language processing (NLP) tasks. However, the high demand for computing resources in training such models hinders their application in practice. In order to alleviate this resource hunger in large-scale model training, we propose a Patient Knowledge Distillation approach to compress an original large model (teacher) into an equally-effective lightweight shallow network (student). Different from previous knowledge distillation methods, which only use the output from the last layer of the teacher network for distillation, our student model patiently learns from multiple intermediate layers of the teacher model for incremental knowledge extraction, following two strategies: ($i$) PKD-Last: learning from the last $k$ layers; and ($ii$) PKD-Skip: learning from every $k$ layers. These two patient distillation schemes enable the exploitation of rich information in the teacher's hidden layers, and encourage the student model to patiently learn from and imitate the teacher through a multi-layer distillation process. Empirically, this translates into improved results on multiple NLP tasks with significant gain in training efficiency, without sacrificing model accuracy. | [
"Regularization",
"Output Functions",
"Learning Rate Schedules",
"Stochastic Optimization",
"Attention Modules",
"Activation Functions",
"Subword Segmentation",
"Normalization",
"Language Models",
"Feedforward Networks",
"Attention Mechanisms",
"Skip Connections"
] | [
"Knowledge Distillation",
"Model Compression"
] | [
"Weight Decay",
"WordPiece",
"Layer Normalization",
"Softmax",
"Adam",
"Multi-Head Attention",
"Attention Dropout",
"Linear Warmup With Linear Decay",
"Residual Connection",
"Scaled Dot-Product Attention",
"Dropout",
"BERT",
"GELU",
"Dense Connections",
"Gaussian Linear Error Units"
] | [] | [] | Patient Knowledge Distillation for BERT Model Compression |
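The Patient Knowledge Distillation row above describes a loss combining the task objective, soft-label distillation from the teacher's output, and a "patient" term matching normalized intermediate representations. Below is a hedged PyTorch sketch of that combination; the temperature `T` and the weights `alpha`, `beta` are placeholders, and the layer matching (PKD-Last vs. PKD-Skip) is assumed to have been done when building the two hidden-state lists.

```python
import torch
import torch.nn.functional as F

def pkd_loss(student_logits, teacher_logits, labels,
             student_hidden, teacher_hidden,
             T=4.0, alpha=0.5, beta=100.0):
    """Patient knowledge distillation loss (sketch).

    student_hidden / teacher_hidden: lists of [CLS] hidden states taken from
    the intermediate layers selected by PKD-Last or PKD-Skip.
    """
    # hard-label cross entropy on the downstream task
    ce = F.cross_entropy(student_logits, labels)
    # soft-label distillation from the teacher's output distribution
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction='batchmean') * (T * T)
    # "patient" term: match normalized intermediate representations layer by layer
    pt = sum(F.mse_loss(F.normalize(s, dim=-1), F.normalize(t, dim=-1))
             for s, t in zip(student_hidden, teacher_hidden))
    return (1 - alpha) * ce + alpha * kd + beta * pt
```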
In this paper, we propose a novel controllable text-to-image generative adversarial network (ControlGAN), which can effectively synthesise high-quality images and also control parts of the image generation according to natural language descriptions. To achieve this, we introduce a word-level spatial and channel-wise attention-driven generator that can disentangle different visual attributes, and allow the model to focus on generating and manipulating subregions corresponding to the most relevant words. Also, a word-level discriminator is proposed to provide fine-grained supervisory feedback by correlating words with image regions, facilitating training an effective generator which is able to manipulate specific visual attributes without affecting the generation of other content. Furthermore, perceptual loss is adopted to reduce the randomness involved in the image generation, and to encourage the generator to manipulate specific attributes required in the modified text. Extensive experiments on benchmark datasets demonstrate that our method outperforms existing state of the art, and is able to effectively manipulate synthetic images using natural language descriptions. Code is available at https://github.com/mrlibw/ControlGAN. | [] | [
"Image Generation",
"Text-to-Image Generation"
] | [] | [
"COCO",
"CUB"
] | [
"Inception score"
] | Controllable Text-to-Image Generation |
Despite the remarkable success of generative models in creating photorealistic images using deep neural networks, gaps could still exist between the real and generated images, especially in the frequency domain. In this study, we find that narrowing the frequency domain gap can ameliorate the image synthesis quality further. To this end, we propose the focal frequency loss, a novel objective function that brings optimization of generative models into the frequency domain. The proposed loss allows the model to dynamically focus on the frequency components that are hard to synthesize by down-weighting the easy frequencies. This objective function is complementary to existing spatial losses, offering great impedance against the loss of important frequency information due to the inherent crux of neural networks. We demonstrate the versatility and effectiveness of focal frequency loss to improve various baselines in both perceptual quality and quantitative performance. | [] | [
"Image Generation",
"Image-to-Image Translation"
] | [] | [
"Cityscapes Labels-to-Photo"
] | [
"FID",
"Per-pixel Accuracy",
"mIoU"
] | Focal Frequency Loss for Generative Models |
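The focal frequency loss row above can be summarized as a frequency-space distance whose per-frequency weight grows with how badly that frequency is currently synthesized. A hedged PyTorch sketch follows; the exponent `alpha`, the max-normalization of the weight, and detaching the weight are assumptions for illustration.

```python
import torch

def focal_frequency_loss(fake, real, alpha=1.0):
    """Focal frequency loss (sketch): frequency-space L2 with a focal weight
    so hard-to-synthesize frequencies dominate the objective.

    fake, real: image batches of shape (N, C, H, W).
    """
    f_fake = torch.fft.fft2(fake, norm='ortho')
    f_real = torch.fft.fft2(real, norm='ortho')
    diff2 = (f_fake - f_real).abs() ** 2               # squared frequency distance
    weight = diff2.sqrt() ** alpha                     # focal weight: larger error, larger weight
    weight = weight / weight.amax(dim=(-2, -1), keepdim=True).clamp(min=1e-8)
    return (weight.detach() * diff2).mean()            # weight treated as a constant
```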
Interpretability is an emerging area of research in trustworthy machine learning. Safe deployment of machine learning systems mandates that the prediction and its explanation be reliable and robust. Recently, it has been shown that the explanations could be manipulated easily by adding visually imperceptible perturbations to the input while keeping the model's prediction intact. In this work, we study the problem of attributional robustness (i.e. models having robust explanations) by showing an upper bound for attributional vulnerability in terms of spatial correlation between the input image and its explanation map. We propose a training methodology that learns robust features by minimizing this upper bound using soft-margin triplet loss. Our methodology of robust attribution training (\textit{ART}) achieves the new state-of-the-art attributional robustness measure by a margin of $\approx$ 6-18 $\%$ on several standard datasets, i.e., SVHN, CIFAR-10 and GTSRB. We further show the utility of the proposed robust training technique (\textit{ART}) in the downstream task of weakly supervised object localization by achieving the new state-of-the-art performance on the CUB-200 dataset. | [] | [
"Object Localization",
"Weakly-Supervised Object Localization"
] | [] | [
" CUB-200-2011"
] | [
"Top-1 Localization Accuracy",
"Top-1 Error Rate"
] | Attributional Robustness Training using Input-Gradient Spatial Alignment |
Convolutional neural networks (CNNs) with residual links (ResNets) and causal dilated convolutional units have been the network of choice for deep learning approaches to speech enhancement. While residual links improve gradient flow during training, feature diminution of shallow layer outputs can occur due to repetitive summations with deeper layer outputs. One strategy to improve feature re-usage is to fuse both ResNets and densely connected CNNs (DenseNets). DenseNets, however, over-allocate parameters for feature re-usage. Motivated by this, we propose the residual-dense lattice network (RDL-Net), which is a new CNN for speech enhancement that employs both residual and dense aggregations without over-allocating parameters for feature re-usage. This is managed through the topology of the RDL blocks, which limit the number of outputs used for dense aggregations. Our extensive experimental investigation shows that RDL-Nets are able to achieve a higher speech enhancement performance than CNNs that employ residual and/or dense aggregations. RDL-Nets also use substantially fewer parameters and have a lower computational requirement. Furthermore, we demonstrate that RDL-Nets outperform many state-of-the-art deep learning approaches to speech enhancement. | [] | [
"Speech Enhancement"
] | [] | [
"DEMAND"
] | [
"CSIG",
"COVL",
"CBAK",
"PESQ"
] | Deep Residual-Dense Lattice Network for Speech Enhancement |
Few-shot classification is challenging because the data distribution of the training set can be widely different to the test set as their classes are disjoint. This distribution shift often results in poor generalization. Manifold smoothing has been shown to address the distribution shift problem by extending the decision boundaries and reducing the noise of the class representations. Moreover, manifold smoothness is a key factor for semi-supervised learning and transductive learning algorithms. In this work, we propose to use embedding propagation as an unsupervised non-parametric regularizer for manifold smoothing in few-shot classification. Embedding propagation leverages interpolations between the extracted features of a neural network based on a similarity graph. We empirically show that embedding propagation yields a smoother embedding manifold. We also show that applying embedding propagation to a transductive classifier achieves new state-of-the-art results in mini-Imagenet, tiered-Imagenet, Imagenet-FS, and CUB. Furthermore, we show that embedding propagation consistently improves the accuracy of the models in multiple semi-supervised learning scenarios by up to 16\% points. The proposed embedding propagation operation can be easily integrated as a non-parametric layer into a neural network. We provide the training code and usage examples at https://github.com/ElementAI/embedding-propagation. | [] | [
"Few-Shot Image Classification"
] | [] | [
"Mini-Imagenet 5-way (1-shot)",
"Tiered ImageNet 5-way (1-shot)",
"Mini-Imagenet 5-way (5-shot)",
"Mini-ImageNet - 1-Shot Learning",
"Tiered ImageNet 5-way (5-shot)"
] | [
"Accuracy"
] | Embedding Propagation: Smoother Manifold for Few-Shot Classification |
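Embedding propagation, as described in the row above, smooths extracted features by interpolating them over a similarity graph. The sketch below builds an RBF graph and applies a label-propagation-style operator; the kernel `scale`, `alpha`, and the exact normalization are assumptions for illustration rather than the paper's precise construction.

```python
import torch

def embedding_propagation(z, alpha=0.5, scale=1.0):
    """Embedding propagation (sketch): smooth features over a similarity graph.

    z: (N, D) feature matrix extracted by the backbone.
    Returns propagated embeddings P @ z, where P is a label-propagation-style
    operator built from an RBF similarity graph.
    """
    d2 = torch.cdist(z, z) ** 2                         # pairwise squared distances
    A = torch.exp(-d2 / scale)                          # RBF similarity graph
    A.fill_diagonal_(0)                                 # no self-loops
    deg = A.sum(dim=1).clamp(min=1e-8)
    D_inv_sqrt = torch.diag(deg.pow(-0.5))
    S = D_inv_sqrt @ A @ D_inv_sqrt                     # symmetrically normalized adjacency
    n = z.shape[0]
    P = torch.linalg.inv(torch.eye(n, device=z.device) - alpha * S)
    return P @ z                                        # interpolated (smoothed) embeddings
```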
Cutting out an object and estimating its opacity mask, known as image matting, is a key task in many image editing applications. Deep learning approaches have made significant progress by adapting the encoder-decoder architecture of segmentation networks. However, most of the existing networks only predict the alpha matte and post-processing methods must then be used to recover the original foreground and background colours in the transparent regions. Recently, two methods have shown improved results by also estimating the foreground colours, but at a significant computational and memory cost. In this paper, we propose a low-cost modification to alpha matting networks to also predict the foreground and background colours. We study variations of the training regime and explore a wide range of existing and novel loss functions for the joint prediction. Our method achieves the state of the art performance on the Adobe Composition-1k dataset for alpha matte and composite colour quality. It is also the current best performing method on the alphamatting.com online evaluation. | [] | [
"Image Matting"
] | [] | [
"Composition-1K"
] | [
"MSE"
] | $F$, $B$, Alpha Matting |
Performing sound event detection on real-world recordings often implies dealing with overlapping target sound events and non-target sounds, also referred to as interference or noise. Until now these problems were mainly tackled at the classifier level. We propose to use sound separation as a pre-processing for sound event detection. In this paper we start from a sound separation model trained on the Free Universal Sound Separation dataset and the DCASE 2020 task 4 sound event detection baseline. We explore different methods to combine separated sound sources and the original mixture within the sound event detection. Furthermore, we investigate the impact of adapting the sound separation model to the sound event detection data on both the sound separation and the sound event detection. | [] | [
"Audio Source Separation",
"Sound Event Detection"
] | [] | [
"DESED"
] | [
"event-based F1 score"
] | Improving Sound Event Detection In Domestic Environments Using Sound Separation |
Neural networks are known to be vulnerable to adversarial examples, inputs
that have been intentionally perturbed to remain visually similar to the source
input, but cause a misclassification. It was recently shown that given a
dataset and classifier, there exist so-called universal adversarial
perturbations, a single perturbation that causes a misclassification when
applied to any input. In this work, we introduce universal adversarial
networks, a generative network that is capable of fooling a target classifier
when its generated output is added to a clean sample from a dataset. We show
that this technique improves on known universal adversarial attacks. | [] | [
"Graph Classification"
] | [] | [
"NCI1"
] | [
"Accuracy"
] | Learning Universal Adversarial Perturbations with Generative Models |
Existing multi-person pose estimators can be roughly divided into two-stage approaches (top-down and bottom-up approaches) and one-stage approaches. The two-stage methods either suffer from high computational redundancy due to additional person detectors or group keypoints heuristically after predicting all the instance-free keypoints. The recently proposed single-stage methods do not rely on the above two extra stages but have lower performance than the latest bottom-up approaches. In this work, a novel single-stage multi-person pose regression, termed SMPR, is presented. It follows the paradigm of dense prediction and predicts instance-aware keypoints from every location. Besides feature aggregation, we propose better strategies to define positive pose hypotheses for training, which all play an important role in dense pose estimation. The network also learns the scores of estimated poses. The pose scoring strategy further improves the pose estimation performance by prioritizing superior poses during non-maximum suppression (NMS). We show that our method not only outperforms existing single-stage methods but is also competitive with the latest bottom-up methods, with 70.2 AP and 77.5 AP75 on the COCO test-dev pose benchmark. Code is available at https://github.com/cmdi-dlut/SMPR. | [] | [
"Multi-Person Pose Estimation",
"Pose Estimation",
"Regression"
] | [] | [
"COCO test-dev"
] | [
"APM",
"AP75",
"AP",
"APL",
"AP50"
] | SMPR: Single-Stage Multi-Person Pose Regression |
We propose a new algorithm, Mean Actor-Critic (MAC), for discrete-action
continuous-state reinforcement learning. MAC is a policy gradient algorithm
that uses the agent's explicit representation of all action values to estimate
the gradient of the policy, rather than using only the actions that were
actually executed. We prove that this approach reduces variance in the policy
gradient estimate relative to traditional actor-critic methods. We show
empirical results on two control domains and on six Atari games, where MAC is
competitive with state-of-the-art policy search algorithms. | [] | [
"Atari Games"
] | [] | [
"Cart Pole (OpenAI Gym)",
"Lunar Lander (OpenAI Gym)",
"Atari 2600 Beam Rider",
"Atari 2600 Seaquest",
"Atari 2600 Breakout",
"Atari 2600 Space Invaders",
"Atari 2600 Pong",
"Atari 2600 Q*Bert"
] | [
"Score"
] | Mean Actor Critic |
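The Mean Actor-Critic row above states that the policy gradient is taken through all action values rather than only the executed actions. In a discrete-action setting this amounts to differentiating the expected Q-value under the current policy, as in this hedged PyTorch sketch (the batching and the sign convention are illustrative choices):

```python
import torch

def mac_policy_loss(logits, q_values):
    """Mean Actor-Critic policy loss (sketch).

    logits   : (B, A) policy logits for a batch of states
    q_values : (B, A) critic estimates for every action (treated as constants)

    Autograd through pi reproduces the MAC estimator
        grad J ~ E_s [ sum_a pi(a|s) * grad log pi(a|s) * Q(s, a) ].
    """
    pi = torch.softmax(logits, dim=-1)
    expected_q = (pi * q_values.detach()).sum(dim=-1)   # E_{a~pi}[Q(s, a)] per state
    return -expected_q.mean()                           # minimize the negative objective
```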
Text preprocessing is often the first step in the pipeline of a Natural
Language Processing (NLP) system, with potential impact in its final
performance. Despite its importance, text preprocessing has not received much
attention in the deep learning literature. In this paper we investigate the
impact of simple text preprocessing decisions (particularly tokenizing,
lemmatizing, lowercasing and multiword grouping) on the performance of a
standard neural text classifier. We perform an extensive evaluation on standard
benchmarks from text categorization and sentiment analysis. While our
experiments show that a simple tokenization of input text is generally
adequate, they also highlight significant degrees of variability across
preprocessing techniques. This reveals the importance of paying attention to
this usually-overlooked step in the pipeline, particularly when comparing
different models. Finally, our evaluation provides insights into the best
preprocessing practices for training word embeddings. | [] | [
"Sentiment Analysis",
"Text Categorization",
"Text Classification",
"Tokenization",
"Word Embeddings"
] | [] | [
"IMDb",
"SST-2 Binary classification",
"Ohsumed"
] | [
"Accuracy"
] | On the Role of Text Preprocessing in Neural Network Architectures: An Evaluation Study on Text Categorization and Sentiment Analysis |
Common language models typically predict the next word given the context. In this work, we propose a method that improves language modeling by learning to align the given context and the following phrase. The model does not require any linguistic annotation of phrase segmentation. Instead, we define syntactic heights and phrase segmentation rules, enabling the model to automatically induce phrases, recognize their task-specific heads, and generate phrase embeddings in an unsupervised learning manner. Our method can easily be applied to language models with different network architectures since an independent module is used for phrase induction and context-phrase alignment, and no change is required in the underlying language modeling network. Experiments have shown that our model outperformed several strong baseline models on different data sets. We achieved a new state-of-the-art performance of 17.4 perplexity on the Wikitext-103 dataset. Additionally, visualizing the outputs of the phrase induction module showed that our model is able to learn approximate phrase-level structural knowledge without any annotation. | [] | [
"Language Modelling"
] | [] | [
"WikiText-103"
] | [
"Number of params",
"Test perplexity"
] | Improving Neural Language Models by Segmenting, Attending, and Predicting the Future |
Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), is one of the most important problems in natural language processing. It requires inferring the logical relationship between two given sentences. While current approaches mostly focus on the interaction architectures of the sentences, in this paper, we propose to transfer knowledge from some important discourse markers to augment the quality of the NLI model. We observe that people usually use some discourse markers such as "so" or "but" to represent the logical relationship between two sentences. These words potentially have deep connections with the meanings of the sentences and thus can be utilized to help improve their representations. Moreover, we use reinforcement learning to optimize a new objective function with a reward defined by the properties of the NLI datasets to make full use of the label information. Experiments show that our method achieves state-of-the-art performance on several large-scale datasets. | [] | [
"Natural Language Inference"
] | [] | [
"SNLI"
] | [
"Parameters",
"% Train Accuracy",
"% Test Accuracy"
] | Discourse Marker Augmented Network with Reinforcement Learning for Natural Language Inference |
Foundational verification allows programmers to build software which has been empirically shown to have high levels of assurance in a variety of important domains. However, the cost of producing foundationally verified software remains prohibitively high for most projects, as it requires significant manual effort by highly trained experts. In this paper we present Proverbot9001, a proof search system using machine learning techniques to produce proofs of software correctness in interactive theorem provers. We demonstrate Proverbot9001 on the proof obligations from a large practical proof project, the CompCert verified C compiler, and show that it can effectively automate what were previously manual proofs, automatically solving 15.77% of proofs in our test dataset. This corresponds to an over 3X improvement over the prior state-of-the-art machine learning technique for generating proofs in Coq. | [] | [
"Automated Theorem Proving"
] | [] | [
"CompCert"
] | [
"Percentage correct"
] | Generating Correctness Proofs with Neural Networks |
Graph neural networks (GNNs) are widely used in many applications. However, their robustness against adversarial attacks is criticized. Prior studies show that using unnoticeable modifications on graph topology or nodal features can significantly reduce the performance of GNNs. It is very challenging to design robust graph neural networks against poisoning attacks, and several efforts have been made. Existing work aims at reducing the negative impact from adversarial edges only with the poisoned graph, which is sub-optimal since it fails to discriminate adversarial edges from normal ones. On the other hand, clean graphs from similar domains as the target poisoned graph are usually available in the real world. By perturbing these clean graphs, we create supervised knowledge to train the ability to detect adversarial edges so that the robustness of GNNs is elevated. However, such potential for clean graphs is neglected by existing work. To this end, we investigate a novel problem of improving the robustness of GNNs against poisoning attacks by exploring clean graphs. Specifically, we propose PA-GNN, which relies on a penalized aggregation mechanism that directly restricts the negative impact of adversarial edges by assigning them lower attention coefficients. To optimize PA-GNN for a poisoned graph, we design a meta-optimization algorithm that trains PA-GNN to penalize perturbations using clean graphs and their adversarial counterparts, and transfers this ability to improve the robustness of PA-GNN on the poisoned graph. Experimental results on four real-world datasets demonstrate the robustness of PA-GNN against poisoning attacks on graphs. Code and data are available here: https://github.com/tangxianfeng/PA-GNN. | [] | [
"Node Classification",
"Transfer Learning"
] | [] | [
"Pubmed"
] | [
"Accuracy"
] | Transferring Robustness for Graph Neural Network Against Poisoning Attacks |
Temporal knowledge bases associate relational (s,r,o) triples with a set of times (or a single time instant) when the relation is valid. While time-agnostic KB completion (KBC) has witnessed significant research, temporal KB completion (TKBC) is in its early days. In this paper, we consider predicting missing entities (link prediction) and missing time intervals (time prediction) as joint TKBC tasks where entities, relations, and time are all embedded in a uniform, compatible space. We present TIMEPLEX, a novel time-aware KBC method, that also automatically exploits the recurrent nature of some relations and temporal interactions between pairs of relations. TIMEPLEX achieves state-of-the-art performance on both prediction tasks. We also find that existing TKBC models heavily overestimate link prediction performance due to imperfect evaluation mechanisms. In response, we propose improved TKBC evaluation protocols for both link and time prediction tasks, dealing with subtle issues that arise from the partial overlap of time intervals in gold instances and system predictions. | [] | [
"Knowledge Base Completion",
"Knowledge Graph Completion",
"Knowledge Graphs",
"Link Prediction",
"Temporal Information Extraction",
"Temporal Knowledge Graph Completion",
"Time-interval Prediction"
] | [] | [
"ICEWS05-15",
"ICEWS14",
"Wikidata12k",
"Yago11k"
] | [
"MRR"
] | Temporal Knowledge Base Completion: New Algorithms and Evaluation Protocols |
Object recognition requires a generalization capability to avoid overfitting, especially when the samples are extremely few. Generalization from limited samples, usually studied under the umbrella of meta-learning, equips learning techniques with the ability to adapt quickly in dynamic environments and proves to be an essential aspect of lifelong learning. In this paper, we provide a framework for few-shot learning by introducing dynamic classifiers that are constructed from few samples. A subspace method is exploited as the central block of a dynamic classifier. We empirically show that such modelling leads to robustness against perturbations (e.g., outliers) and yields competitive results on the task of supervised and semi-supervised few-shot classification. We also develop a discriminative form which can boost the accuracy even further. Our code is available at https://github.com/chrysts/dsn_fewshot
| [] | [
"Few-Shot Image Classification",
"Few-Shot Learning",
"Meta-Learning",
"Object Recognition"
] | [] | [
"Mini-Imagenet 5-way (1-shot)",
"Tiered ImageNet 5-way (1-shot)",
"Mini-Imagenet 5-way (5-shot)",
"CIFAR-FS 5-way (1-shot)",
"Tiered ImageNet 5-way (5-shot)",
"CIFAR-FS 5-way (5-shot)"
] | [
"Accuracy"
] | Adaptive Subspaces for Few-Shot Learning |
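For the adaptive-subspace row above, a dynamic classifier can be sketched as follows: span a low-dimensional subspace per class from its few support examples and score queries by their residual distance to that subspace. The subspace dimension `n_dim`, the mean-centring choice, and the per-class loop are illustrative assumptions in this PyTorch sketch.

```python
import torch

def subspace_classify(support, query, n_dim=3):
    """Dynamic subspace classifier (sketch).

    support: (C, K, D) support features, K shots per class
    query:   (Q, D) query features
    Each class is represented by an n_dim-dimensional subspace spanned by its
    mean-centred support features; queries are scored by the distance between
    the query and its projection onto each class subspace.
    """
    C, K, D = support.shape
    logits = []
    for c in range(C):
        mu = support[c].mean(dim=0)                     # class mean
        centred = support[c] - mu                       # (K, D)
        # orthonormal basis of the class subspace via truncated SVD (n_dim <= K)
        _, _, Vh = torch.linalg.svd(centred, full_matrices=False)
        B = Vh[:n_dim].T                                # (D, n_dim)
        diff = query - mu                               # (Q, D)
        proj = diff @ B @ B.T                           # projection onto the subspace
        dist = ((diff - proj) ** 2).sum(dim=-1)         # residual (off-subspace) distance
        logits.append(-dist)                            # smaller distance -> higher score
    return torch.stack(logits, dim=1)                   # (Q, C) class scores
```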
In this paper, we present CorefQA, an accurate and extensible approach for the coreference resolution task. We formulate the problem as a span prediction task, like in question answering: A query is generated for each candidate mention using its surrounding context, and a span prediction module is employed to extract the text spans of the coreferences within the document using the generated query. This formulation comes with the following key advantages: (1) The span prediction strategy provides the flexibility of retrieving mentions left out at the mention proposal stage; (2) In the question answering framework, encoding the mention and its context explicitly in a query makes it possible to have a deep and thorough examination of cues embedded in the context of coreferent mentions; and (3) A plethora of existing question answering datasets can be used for data augmentation to improve the model's generalization capability. Experiments demonstrate a significant performance boost over previous models, with 83.1 (+3.5) F1 score on the CoNLL-2012 benchmark and 87.5 (+2.5) F1 score on the GAP benchmark. | [] | [
"Coreference Resolution",
"Data Augmentation",
"Question Answering"
] | [] | [
"CoNLL 2012"
] | [
"Avg F1"
] | CorefQA: Coreference Resolution as Query-based Span Prediction |
Real-world image noise removal is a long-standing yet very challenging task in computer vision. The success of deep neural networks in denoising stimulates the research of noise generation, aiming at synthesizing more clean-noisy image pairs to facilitate the training of deep denoisers. In this work, we propose a novel unified framework to simultaneously deal with the noise removal and noise generation tasks. Instead of only inferring the posterior distribution of the latent clean image conditioned on the observed noisy image as in the traditional MAP framework, our proposed method learns the joint distribution of the clean-noisy image pairs. Specifically, we approximate the joint distribution with two different factorized forms, which can be formulated as a denoiser mapping the noisy image to the clean one and a generator mapping the clean image to the noisy one. The learned joint distribution implicitly contains all the information between the noisy and clean images, avoiding the necessity of manually designing the image priors and noise assumptions as in traditional methods. Besides, the performance of our denoiser can be further improved by augmenting the original training dataset with the learned generator. Moreover, we propose two metrics to assess the quality of the generated noisy image, which, to the best of our knowledge, are the first such metrics proposed along this research line. Extensive experiments have been conducted to demonstrate the superiority of our method over the state-of-the-art methods in both the real noise removal and generation tasks. The training and testing code is available at https://github.com/zsyOAOA/DANet. | [] | [
"Denoising",
"Image Denoising"
] | [] | [
"SIDD",
"DND"
] | [
"SSIM (sRGB)",
"PSNR (sRGB)"
] | Dual Adversarial Network: Toward Real-world Noise Removal and Noise Generation |
Optimising a ranking-based metric, such as Average Precision (AP), is notoriously challenging due to the fact that it is non-differentiable, and hence cannot be optimised directly using gradient-descent methods. To this end, we introduce an objective that optimises instead a smoothed approximation of AP, coined Smooth-AP. Smooth-AP is a plug-and-play objective function that allows for end-to-end training of deep networks with a simple and elegant implementation. We also present an analysis for why directly optimising the ranking based metric of AP offers benefits over other deep metric learning losses. We apply Smooth-AP to standard retrieval benchmarks: Stanford Online products and VehicleID, and also evaluate on larger-scale datasets: INaturalist for fine-grained category retrieval, and VGGFace2 and IJB-C for face retrieval. In all cases, we improve the performance over the state-of-the-art, especially for larger-scale datasets, thus demonstrating the effectiveness and scalability of Smooth-AP to real-world scenarios. | [] | [
"Image Instance Retrieval",
"Image Retrieval",
"Metric Learning",
"Vehicle Re-Identification"
] | [] | [
"iNaturalist",
"SOP",
"VehicleID Large",
"VehicleID Small",
"VehicleID Medium"
] | [
"R@16",
"R@5",
"Rank-1",
"R@1",
"R@32",
"Rank-5"
] | Smooth-AP: Smoothing the Path Towards Large-Scale Image Retrieval |
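Smooth-AP, described in the row above, relaxes the indicator functions inside the AP ranking with sigmoids so the metric becomes differentiable. Below is a hedged single-query PyTorch sketch; the paper batches over queries and uses its own temperature, so `tau` and the loss form `1 - AP` are illustrative choices.

```python
import torch

def smooth_ap_loss(scores, labels, tau=0.01):
    """Smooth-AP (sketch): a sigmoid-relaxed Average Precision surrogate.

    scores: (N,) similarities of N gallery items to one query
    labels: (N,) 1 for positives, 0 for negatives
    """
    labels = labels.float()
    pos = labels.bool()
    # pairwise score differences: D[i, j] = s_j - s_i
    D = scores.unsqueeze(0) - scores.unsqueeze(1)
    sig = torch.sigmoid(D / tau)                            # relaxed indicator 1[s_j > s_i]
    sig = sig * (1 - torch.eye(len(scores), device=scores.device))  # ignore self-comparisons
    rank_all = 1 + sig.sum(dim=1)                           # soft rank among all items
    rank_pos = 1 + (sig * labels.unsqueeze(0)).sum(dim=1)   # soft rank among positives only
    smooth_ap = (rank_pos[pos] / rank_all[pos]).mean()      # relaxed AP for this query
    return 1 - smooth_ap                                    # minimize to maximize AP
```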
We propose a neural rendering-based system that creates head avatars from a single photograph. Our approach models a person's appearance by decomposing it into two layers. The first layer is a pose-dependent coarse image that is synthesized by a small neural network. The second layer is defined by a pose-independent texture image that contains high-frequency details. The texture image is generated offline, warped and added to the coarse image to ensure a high effective resolution of synthesized head views. We compare our system to analogous state-of-the-art systems in terms of visual quality and speed. The experiments show significant inference speedup over previous neural head avatar models for a given visual quality. We also report on a real-time smartphone-based implementation of our system. | [] | [
"Neural Rendering",
"Talking Head Generation"
] | [] | [
"VoxCeleb2 - 1-shot learning"
] | [
"Normalized Pose Error",
"inference time (ms)",
"CSIM",
"LPIPS",
"SSIM"
] | Fast Bi-layer Neural Synthesis of One-Shot Realistic Head Avatars |
Aspect Sentiment Triplet Extraction (ASTE) is the task of extracting the triplets of target entities, their associated sentiment, and opinion spans explaining the reason for the sentiment. Existing research efforts mostly solve this problem using pipeline approaches, which break the triplet extraction process into several stages. Our observation is that the three elements within a triplet are highly related to each other, and this motivates us to build a joint model to extract such triplets using a sequence tagging approach. However, how to effectively design a tagging approach to extract the triplets that can capture the rich interactions among the elements is a challenging research question. In this work, we propose the first end-to-end model with a novel position-aware tagging scheme that is capable of jointly extracting the triplets. Our experimental results on several existing datasets show that jointly capturing elements in the triplet using our approach leads to improved performance over the existing approaches. We also conducted extensive experiments to investigate the model effectiveness and robustness. | [] | [
"Aspect Sentiment Triplet Extraction"
] | [] | [
"SemEval"
] | [
"F1"
] | Position-Aware Tagging for Aspect Sentiment Triplet Extraction |
Conventional unsupervised multi-source domain adaptation (UMDA) methods assume all source domains can be accessed directly. This neglects the privacy-preserving policy, that is, that all the data and computations must be kept decentralized. There exist three problems in this scenario: (1) Minimizing the domain distance requires the pairwise calculation of the data from source and target domains, which is not accessible. (2) The communication cost and privacy security limit the application of UMDA methods (e.g., the domain adversarial training). (3) Since users have no authority to check the data quality, irrelevant or malicious source domains are more likely to appear, which causes negative transfer. In this study, we propose a privacy-preserving UMDA paradigm named Knowledge Distillation based Decentralized Domain Adaptation (KD3A), which performs domain adaptation through knowledge distillation on models from different source domains. KD3A solves the above problems with three components: (1) A multi-source knowledge distillation method named Knowledge Vote to learn high-quality domain consensus knowledge. (2) A dynamic weighting strategy named Consensus Focus to identify both the malicious and irrelevant domains. (3) A decentralized optimization strategy for domain distance named BatchNorm MMD. The extensive experiments on DomainNet demonstrate that KD3A is robust to negative transfer and brings a 100x reduction in communication cost compared with other decentralized UMDA methods. Moreover, our KD3A significantly outperforms state-of-the-art UMDA approaches. | [] | [
"Domain Adaptation",
"Knowledge Distillation",
"Unsupervised Domain Adaptation"
] | [] | [
"DomainNet"
] | [
"Average Accuracy"
] | KD3A: Unsupervised Multi-Source Decentralized Domain Adaptation via Knowledge Distillation |
Emerging interest has been drawn to recognizing previously unseen objects given very few training examples, a problem known as few-shot object detection (FSOD). Recent research demonstrates that good feature embedding is the key to reaching favorable few-shot learning performance. We observe that object proposals with different Intersection-of-Union (IoU) scores are analogous to the intra-image augmentation used in contrastive approaches. We exploit this analogy and incorporate supervised contrastive learning to achieve more robust object representations in FSOD. We present Few-Shot object detection via Contrastive proposals Encoding (FSCE), a simple yet effective approach to learning contrastive-aware object proposal encodings that facilitate the classification of detected objects. We notice that the degradation of average precision (AP) for rare objects mainly comes from misclassifying novel instances as confusable classes. We ease the misclassification issue by promoting instance-level intra-class compactness and inter-class variance via our contrastive proposal encoding loss (CPE loss). Our design outperforms current state-of-the-art works in any shot and all data splits, with up to +8.8% on the standard benchmark PASCAL VOC and +2.7% on the challenging COCO benchmark. Code is available at: https://github.com/MegviiDetection/FSCE | [] | [
"Few-Shot Learning",
"Few-Shot Object Detection",
"Image Augmentation",
"Object Detection"
] | [] | [
"MS-COCO (30-shot)",
"MS-COCO (10-shot)"
] | [
"AP"
] | FSCE: Few-Shot Object Detection via Contrastive Proposal Encoding |
Commonsense reasoning is a long-standing challenge for deep learning. For example, it is difficult to use neural networks to tackle the Winograd Schema dataset (Levesque et al., 2011). In this paper, we present a simple method for commonsense reasoning with neural networks, using unsupervised learning. Key to our method is the use of language models, trained on a massive amount of unlabeled data, to score multiple choice questions posed by commonsense reasoning tests. On both Pronoun Disambiguation and Winograd Schema challenges, our models outperform previous state-of-the-art methods by a large margin, without using expensive annotated knowledge bases or hand-engineered features. We train an array of large RNN language models that operate at word or character level on LM-1-Billion, CommonCrawl, SQuAD, Gutenberg Books, and a customized corpus for this task and show that diversity of training data plays an important role in test performance. Further analysis also shows that our system successfully discovers important features of the context that decide the correct answer, indicating a good grasp of commonsense knowledge. | [] | [
"Common Sense Reasoning"
] | [] | [
"Winograd Schema Challenge"
] | [
"Score"
] | A Simple Method for Commonsense Reasoning |
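The commonsense-reasoning row above scores Winograd-style candidates with a language model. A minimal sketch of that substitution-and-scoring procedure is below; `lm_log_prob` is a hypothetical callable standing in for whatever LM scoring function is available, and the paper additionally uses partial scoring of only the continuation after the substitution, which is not shown here.

```python
def resolve_winograd(sentence_template, candidates, lm_log_prob):
    """Pick the candidate whose substitution yields the most probable sentence.

    sentence_template: e.g. "The trophy didn't fit in the suitcase because _ was too big."
    candidates:        e.g. ["the trophy", "the suitcase"]
    lm_log_prob:       hypothetical callable returning the LM log-probability of a string
    """
    scored = [(lm_log_prob(sentence_template.replace("_", c)), c) for c in candidates]
    return max(scored)[1]   # candidate that the language model finds most likely
```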
Many image-to-image translation problems are ambiguous, as a single input
image may correspond to multiple possible outputs. In this work, we aim to
model a \emph{distribution} of possible outputs in a conditional generative
modeling setting. The ambiguity of the mapping is distilled in a
low-dimensional latent vector, which can be randomly sampled at test time. A
generator learns to map the given input, combined with this latent code, to the
output. We explicitly encourage the connection between output and the latent
code to be invertible. This helps prevent a many-to-one mapping from the latent
code to the output during training, also known as the problem of mode collapse,
and produces more diverse results. We explore several variants of this approach
by employing different training objectives, network architectures, and methods
of injecting the latent code. Our proposed method encourages bijective
consistency between the latent encoding and output modes. We present a
systematic comparison of our method and other variants on both perceptual
realism and diversity. | [] | [
"Image-to-Image Translation"
] | [] | [
"Edge-to-Shoes",
"Edge-to-Handbags"
] | [
"Quality",
"Diversity"
] | Toward Multimodal Image-to-Image Translation |
We consider the problem of representation learning for graph data. Convolutional neural networks can naturally operate on images, but have significant challenges in dealing with graph data. Given that images are special cases of graphs with nodes lying on 2D lattices, graph embedding tasks have a natural correspondence with image pixel-wise prediction tasks such as segmentation. While encoder-decoder architectures like U-Nets have been successfully applied on many image pixel-wise prediction tasks, similar methods are lacking for graph data. This is due to the fact that pooling and up-sampling operations are not natural on graph data. To address these challenges, we propose novel graph pooling (gPool) and unpooling (gUnpool) operations in this work. The gPool layer adaptively selects some nodes to form a smaller graph based on their scalar projection values on a trainable projection vector. We further propose the gUnpool layer as the inverse operation of the gPool layer. The gUnpool layer restores the graph into its original structure using the position information of nodes selected in the corresponding gPool layer. Based on our proposed gPool and gUnpool layers, we develop an encoder-decoder model on graphs, known as the graph U-Nets. Our experimental results on node classification and graph classification tasks demonstrate that our methods achieve consistently better performance than previous models. | [] | [
"Graph Classification",
"Graph Embedding",
"Node Classification",
"Representation Learning"
] | [] | [
"COLLAB",
"Cora",
"PROTEINS",
"D&D",
"Citeseer",
"Pubmed"
] | [
"Accuracy"
] | Graph U-Nets |
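A minimal sketch of the gPool/gUnpool operations described in the Graph U-Nets abstract above, using dense adjacency matrices for clarity; the released implementation works on sparse graphs and adds details (e.g. graph connectivity augmentation) not shown here.

```python
import torch

def gpool(x: torch.Tensor, adj: torch.Tensor, p: torch.Tensor, k: int):
    """Keep the k nodes with the largest scalar projection onto a trainable
    vector p, and gate their features by the (sigmoid of the) projection.

    x: (N, C) node features, adj: (N, N) dense adjacency, p: (C,) projection.
    """
    scores = (x @ p) / p.norm()                              # scalar projection per node
    values, idx = scores.topk(k)                             # indices of retained nodes
    x_pool = x[idx] * torch.sigmoid(values).unsqueeze(-1)    # gate features by score
    adj_pool = adj[idx][:, idx]                              # induced subgraph
    return x_pool, adj_pool, idx                             # idx lets gUnpool restore positions

def gunpool(x_pool: torch.Tensor, idx: torch.Tensor, n: int):
    """Inverse operation: scatter pooled features back to their original slots."""
    out = x_pool.new_zeros(n, x_pool.shape[1])
    out[idx] = x_pool
    return out
```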
We introduce the first end-to-end coreference resolution model and show that
it significantly outperforms all previous work without using a syntactic parser
or hand-engineered mention detector. The key idea is to directly consider all
spans in a document as potential mentions and learn distributions over possible
antecedents for each. The model computes span embeddings that combine
context-dependent boundary representations with a head-finding attention
mechanism. It is trained to maximize the marginal likelihood of gold antecedent
spans from coreference clusters and is factored to enable aggressive pruning of
potential mentions. Experiments demonstrate state-of-the-art performance, with
a gain of 1.5 F1 on the OntoNotes benchmark and of 3.1 F1 using a 5-model
ensemble, despite the fact that this is the first approach to be successfully
trained with no external resources. | [] | [
"Coreference Resolution"
] | [] | [
"OntoNotes",
"CoNLL 2012"
] | [
"Avg F1",
"F1"
] | End-to-end Neural Coreference Resolution |
In recent years, we have seen tremendous progress in the field of object
detection. Most of the recent improvements have been achieved by targeting
deeper feedforward networks. However, many hard object categories such as
bottle, remote, etc. require representation of fine details and not just
coarse, semantic representations. But most of these fine details are lost in
the early convolutional layers. What we need is a way to incorporate finer
details from lower layers into the detection architecture. Skip connections
have been proposed to combine high-level and low-level features, but we argue
that selecting the right features from low-level requires top-down contextual
information. Inspired by the human visual pathway, in this paper we propose
top-down modulations as a way to incorporate fine details into the detection
framework. Our approach supplements the standard bottom-up, feedforward ConvNet
with a top-down modulation (TDM) network, connected using lateral connections.
These connections are responsible for the modulation of lower layer filters,
and the top-down network handles the selection and integration of contextual
information and low-level features. The proposed TDM architecture provides a
significant boost on the COCO test-dev benchmark, achieving 28.6 AP for VGG16,
35.2 AP for ResNet101, and 37.3 AP for the InceptionResNetv2 network, without any
bells and whistles (e.g., multi-scale, iterative box refinement, etc.). | [] | [
"Object Detection"
] | [] | [
"COCO test-dev"
] | [
"box AP"
] | Beyond Skip Connections: Top-Down Modulation for Object Detection |
We introduce a new generative model where samples are produced via Langevin dynamics using gradients of the data distribution estimated with score matching. Because gradients can be ill-defined and hard to estimate when the data resides on low-dimensional manifolds, we perturb the data with different levels of Gaussian noise, and jointly estimate the corresponding scores, i.e., the vector fields of gradients of the perturbed data distribution for all noise levels. For sampling, we propose an annealed Langevin dynamics where we use gradients corresponding to gradually decreasing noise levels as the sampling process gets closer to the data manifold. Our framework allows flexible model architectures, requires no sampling during training or the use of adversarial methods, and provides a learning objective that can be used for principled model comparisons. Our models produce samples comparable to GANs on MNIST, CelebA and CIFAR-10 datasets, achieving a new state-of-the-art inception score of 8.87 on CIFAR-10. Additionally, we demonstrate that our models learn effective representations via image inpainting experiments. | [] | [
"Image Generation",
"Image Inpainting"
] | [] | [
"CIFAR-10"
] | [
"Inception score",
"FID"
] | Generative Modeling by Estimating Gradients of the Data Distribution |
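A small sketch of the annealed Langevin dynamics sampler described above. `score_fn` is assumed to be a separately trained network returning the estimated score (gradient of the log-density) at a given noise level; the step-size schedule follows the annealing idea in the abstract, but the exact constants are illustrative.

```python
import torch

@torch.no_grad()
def annealed_langevin(score_fn, x, sigmas, eps=2e-5, steps_per_sigma=100):
    """Annealed Langevin dynamics: run Langevin updates at each noise level,
    from the largest sigma down to the smallest.

    score_fn(x, sigma) ~ grad_x log p_sigma(x) for data perturbed with
    Gaussian noise of std sigma; sigmas is a decreasing sequence.
    """
    for sigma in sigmas:
        # Step size scaled relative to the smallest noise level.
        alpha = eps * (sigma / sigmas[-1]) ** 2
        for _ in range(steps_per_sigma):
            noise = torch.randn_like(x)
            x = x + 0.5 * alpha * score_fn(x, sigma) + (alpha ** 0.5) * noise
    return x
```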
We seek to improve deep neural networks by generalizing the pooling
operations that play a central role in current architectures. We pursue a
careful exploration of approaches to allow pooling to learn and to adapt to
complex and variable patterns. The two primary directions lie in (1) learning a
pooling function via (two strategies of) combining max and average pooling,
and (2) learning a pooling function in the form of a tree-structured fusion of
pooling filters that are themselves learned. In our experiments every
generalized pooling operation we explore improves performance when used in
place of average or max pooling. We experimentally demonstrate that the
proposed pooling operations provide a boost in invariance properties relative
to conventional pooling and set the state of the art on several widely adopted
benchmark datasets; they are also easy to implement, and can be applied within
various deep neural network architectures. These benefits come with only a
light increase in computational overhead during training and a very modest
increase in the number of model parameters. | [] | [
"Image Classification"
] | [] | [
"SVHN",
"MNIST",
"CIFAR-100",
"CIFAR-10"
] | [
"Percentage error",
"Percentage correct"
] | Generalizing Pooling Functions in Convolutional Neural Networks: Mixed, Gated, and Tree |
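A minimal sketch of the "mixed" max-average pooling variant described above (the gated and tree-structured variants are omitted). The single learned mixing scalar and its initialisation are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedPool2d(nn.Module):
    """Blend max and average pooling with one learned scalar in [0, 1]."""

    def __init__(self, kernel_size=2, stride=2):
        super().__init__()
        self.kernel_size, self.stride = kernel_size, stride
        self.mix = nn.Parameter(torch.zeros(1))  # sigmoid(0) = 0.5: start halfway

    def forward(self, x):
        a = torch.sigmoid(self.mix)
        return (a * F.max_pool2d(x, self.kernel_size, self.stride)
                + (1 - a) * F.avg_pool2d(x, self.kernel_size, self.stride))
```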
Given a partial description like "she opened the hood of the car," humans can
reason about the situation and anticipate what might come next ("then, she
examined the engine"). In this paper, we introduce the task of grounded
commonsense inference, unifying natural language inference and commonsense
reasoning.
We present SWAG, a new dataset with 113k multiple choice questions about a
rich spectrum of grounded situations. To address the recurring challenges of
annotation artifacts and human biases found in many existing datasets, we
propose Adversarial Filtering (AF), a novel procedure that constructs a
de-biased dataset by iteratively training an ensemble of stylistic classifiers,
and using them to filter the data. To account for the aggressive adversarial
filtering, we use state-of-the-art language models to massively oversample a
diverse set of potential counterfactuals. Empirical results demonstrate that
while humans can solve the resulting inference problems with high accuracy
(88%), various competitive models struggle on our task. We provide
comprehensive analysis that indicates significant opportunities for future
research. | [] | [
"Common Sense Reasoning",
"Natural Language Inference",
"Question Answering"
] | [] | [
"SWAG"
] | [
"Dev",
"Test"
] | SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference |
We propose a self-supervised framework for learning facial attributes by
simply watching videos of a human face speaking, laughing, and moving over
time. To perform this task, we introduce a network, Facial Attributes-Net
(FAb-Net), that is trained to embed multiple frames from the same video
face-track into a common low-dimensional space. With this approach, we make
three contributions: first, we show that the network can leverage information
from multiple source frames by predicting confidence/attention masks for each
frame; second, we demonstrate that using a curriculum learning regime improves
the learned embedding; finally, we demonstrate that the network learns a
meaningful face embedding that encodes information about head pose, facial
landmarks and facial expression, i.e. facial attributes, without having been
supervised with any labelled data. We are comparable or superior to
state-of-the-art self-supervised methods on these tasks and approach the
performance of supervised methods. | [] | [
"Curriculum Learning",
"Self-Supervised Learning",
"Unsupervised Facial Landmark Detection"
] | [] | [
"MAFL",
"300W"
] | [
"NME"
] | Self-supervised learning of a facial attribute embedding from video |
Online multi-object tracking is a fundamental problem in time-critical video
analysis applications. A major challenge in the popular tracking-by-detection
framework is how to associate unreliable detection results with existing
tracks. In this paper, we propose to handle unreliable detection by collecting
candidates from outputs of both detection and tracking. The intuition behind
generating redundant candidates is that detection and tracks can complement
each other in different scenarios. Detection results of high confidence prevent
tracking drifts in the long term, and predictions of tracks can handle noisy
detection caused by occlusion. In order to apply optimal selection from a
considerable amount of candidates in real-time, we present a novel scoring
function based on a fully convolutional neural network that shares most
computations on the entire image. Moreover, we adopt a deeply learned
appearance representation, which is trained on large-scale person
re-identification datasets, to improve the identification ability of our
tracker. Extensive experiments show that our tracker achieves real-time and
state-of-the-art performance on a widely used people tracking benchmark. | [] | [
"Large-Scale Person Re-Identification",
"Multi-Object Tracking",
"Multiple People Tracking",
"Object Tracking",
"Online Multi-Object Tracking",
"Person Re-Identification"
] | [] | [
"MOT16",
"MOT17"
] | [
"MOTA"
] | Real-time Multiple People Tracking with Deeply Learned Candidate Selection and Person Re-Identification |
Graph classification has recently received a lot of attention from various
fields of machine learning e.g. kernel methods, sequential modeling or graph
embedding. All these approaches offer promising results with different
respective strengths and weaknesses. However, most of them rely on complex
mathematics and require heavy computational power to achieve their best
performance. We propose a simple and fast algorithm based on the spectral
decomposition of graph Laplacian to perform graph classification and get a
first reference score for a dataset. We show that this method obtains
competitive results compared to state-of-the-art algorithms. | [] | [
"Graph Classification",
"Graph Embedding"
] | [] | [
"ENZYMES",
"PROTEINS",
"D&D",
"NCI1",
"MUTAG",
"PTC"
] | [
"Accuracy"
] | A Simple Baseline Algorithm for Graph Classification |
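A rough sketch of the kind of spectral baseline the abstract describes: each graph is embedded by the smallest eigenvalues of its Laplacian and a standard classifier is trained on these fixed-length vectors. The normalisation, the number of eigenvalues kept, and the downstream classifier are assumptions, not the paper's exact recipe.

```python
import numpy as np

def spectral_features(adjacency: np.ndarray, k: int = 10) -> np.ndarray:
    """Embed one graph by the k smallest eigenvalues of its combinatorial
    Laplacian, zero-padded so graphs of different sizes share one length."""
    degrees = adjacency.sum(axis=1)
    laplacian = np.diag(degrees) - adjacency
    eigvals = np.sort(np.linalg.eigvalsh(laplacian))[:k]
    return np.pad(eigvals, (0, max(0, k - len(eigvals))))

# Usage sketch: one adjacency matrix and one class label per graph, then any
# off-the-shelf classifier, e.g.
#   X = np.stack([spectral_features(a) for a in adjacency_matrices])
#   sklearn.ensemble.RandomForestClassifier().fit(X, labels)
```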
Semantic scene understanding is important for various applications. In particular, self-driving cars need a fine-grained understanding of the surfaces and objects in their vicinity. Light detection and ranging (LiDAR) provides precise geometric information about the environment and is thus a part of the sensor suites of almost all self-driving cars. Despite the relevance of semantic scene understanding for this application, there is a lack of a large dataset for this task which is based on an automotive LiDAR. In this paper, we introduce a large dataset to propel research on laser-based semantic segmentation. We annotated all sequences of the KITTI Vision Odometry Benchmark and provide dense point-wise annotations for the complete $360^{\circ}$ field-of-view of the employed automotive LiDAR. We propose three benchmark tasks based on this dataset: (i) semantic segmentation of point clouds using a single scan, (ii) semantic segmentation using multiple past scans, and (iii) semantic scene completion, which requires anticipating the semantic scene in the future. We provide baseline experiments and show that there is a need for more sophisticated models to efficiently tackle these tasks. Our dataset opens the door for the development of more advanced methods, but also provides plentiful data to investigate new research directions. | [] | [
"3D Semantic Segmentation",
"Scene Understanding",
"Self-Driving Cars",
"Semantic Segmentation"
] | [] | [
"SemanticKITTI"
] | [
"mIoU"
] | SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences |
We present an end-to-end head-pose estimation network designed to predict Euler angles through the full range of head yaws from a single RGB image. Existing methods perform well for frontal views but few target head pose from all viewpoints. This has applications in autonomous driving and retail. Our network builds on multi-loss approaches with changes to loss functions and training strategies adapted to wide-range estimation. Additionally, we extract ground truth labelings of anterior views from a current panoptic dataset for the first time. The resulting Wide Headpose Estimation Network (WHENet) is the first fine-grained modern method applicable to the full range of head yaws (hence wide) yet also meets or beats state-of-the-art methods for frontal head pose estimation. Our network is compact and efficient for mobile devices and applications. | [] | [
"Autonomous Driving",
"Head Pose Estimation",
"Pose Estimation"
] | [] | [
"AFLW2000",
"BIWI"
] | [
"MAE",
"MAE (trained with other data)"
] | WHENet: Real-time Fine-Grained Estimation for Wide Range Head Pose |
Graph kernels are powerful tools to bridge the gap between machine learning and data encoded as graphs. Most graph kernels are based on the decomposition of graphs into a set of patterns. The similarity between two graphs is then deduced from the similarity between corresponding patterns. Kernels based on linear patterns constitute a good trade-off between accuracy and computational complexity. In this work, we propose a thorough investigation and comparison of graph kernels based on different linear patterns, namely walks and paths. First, all these kernels are explored in detail, including their mathematical foundations, structures of patterns and computational complexity. Then, experiments are performed on various benchmark datasets exhibiting different types of graphs, including labeled and unlabeled graphs, graphs with different numbers of vertices, graphs with different average vertex degrees, cyclic and acyclic graphs. Finally, for regression and classification tasks, performance and computational complexity of kernels are compared and analyzed, and suggestions are proposed to choose kernels according to the types of graph datasets. This work leads to a clear comparison of strengths and weaknesses of these kernels. An open-source Python library containing an implementation of all discussed kernels is publicly available on GitHub to the community, thus helping to promote and facilitate the use of graph kernels in machine learning problems. | [] | [
"Graph Classification",
"Regression"
] | [] | [
"MUTAG"
] | [
"Accuracy"
] | Graph Kernels Based on Linear Patterns: Theoretical and Experimental Comparisons |
Neural Architecture Search (NAS) has emerged as a promising technique for automatic neural network design. However, existing MCTS based NAS approaches often utilize manually designed action space, which is not directly related to the performance metric to be optimized (e.g., accuracy), leading to sample-inefficient explorations of architectures. To improve the sample efficiency, this paper proposes Latent Action Neural Architecture Search (LaNAS), which learns actions to recursively partition the search space into good or bad regions that contain networks with similar performance metrics. During the search phase, as different action sequences lead to regions with different performance, the search efficiency can be significantly improved by biasing towards the good regions. On three NAS tasks, empirical results demonstrate that LaNAS is at least an order more sample efficient than baseline methods including evolutionary algorithms, Bayesian optimizations and random search. When applied in practice, both one-shot and regular LaNAS consistently outperforms existing results. Particularly, LaNAS achieves 99.0\% accuracy on CIFAR-10 and 80.8\% top1 accuracy at 600 MFLOPS on ImageNet in only 800 samples, significantly outperforming AmoebaNet with $33\times$ fewer samples. | [] | [
"Image Classification",
"Neural Architecture Search"
] | [] | [
"CIFAR-10"
] | [
"PARAMS",
"Percentage correct"
] | Sample-Efficient Neural Architecture Search by Learning Action Space for Monte Carlo Tree Search |
We present a deep learning-based multi-task approach for head pose estimation in images. We contribute a network architecture and training strategy that harness the strong dependencies among face pose, alignment and visibility to produce a top-performing model for all three tasks. Our architecture is an encoder-decoder CNN with residual blocks and lateral skip connections. We show that the combination of head pose estimation and landmark-based face alignment significantly improves the performance of the former task. Further, the location of the pose task at the bottleneck layer, at the end of the encoder, and that of tasks depending on spatial information, such as visibility and alignment, in the final decoder layer, also contribute to increasing the final performance. In the experiments conducted, the proposed model outperforms the state-of-the-art in the face pose and visibility tasks. By including a final landmark regression step it also produces face alignment results on par with the state-of-the-art. | [] | [
"Face Alignment",
"Head Pose Estimation",
"Pose Estimation",
"Regression"
] | [] | [
"AFLW2000",
"AFLW2000-3D",
"BIWI",
"COFW"
] | [
"MAE (trained with other data)",
"MAE",
"Mean NME ",
"Recall at 80% precision (Landmarks Visibility)",
"Mean Error Rate"
] | Multi-task head pose estimation in-the-wild |
Soft Actor-Critic is a state-of-the-art reinforcement learning algorithm for continuous action settings that is not applicable to discrete action settings. Many important settings involve discrete actions, however, and so here we derive an alternative version of the Soft Actor-Critic algorithm that is applicable to discrete action settings. We then show that, even without any hyperparameter tuning, it is competitive with the tuned model-free state-of-the-art on a selection of games from the Atari suite. | [] | [
"Atari Games"
] | [] | [
"Atari 2600 Amidar",
"Atari 2600 Beam Rider",
"Atari 2600 Enduro",
"Atari 2600 Alien",
"Atari 2600 Space Invaders",
"Atari 2600 Assault",
"Atari 2600 Asterix",
"Atari 2600 Breakout",
"Atari 2600 Crazy Climber",
"Atari 2600 Freeway",
"Atari 2600 James Bond",
"Atari 2600 Pong",
"Atari 2600 Kangaroo",
"Atari 2600 Ms. Pacman",
"Atari 2600 Seaquest",
"Atari 2600 Frostbite",
"Atari 2600 Battle Zone",
"Atari 2600 Road Runner",
"Atari 2600 Up and Down",
"Atari 2600 Q*Bert"
] | [
"Score"
] | Soft Actor-Critic for Discrete Action Settings |
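A brief sketch of the discrete-action Soft Actor-Critic objectives implied by the abstract: because the action set is finite, expectations over the policy can be computed exactly from the action probabilities instead of via the reparameterisation trick. Replay buffers, target networks and the temperature update are omitted, and tensor shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def discrete_sac_losses(q1, q2, logits, alpha):
    """Soft state value and actor loss for discrete-action SAC.

    q1, q2:  (B, A) critic outputs, one value per discrete action.
    logits:  (B, A) actor outputs; alpha is the entropy temperature.
    """
    probs = F.softmax(logits, dim=-1)
    log_probs = F.log_softmax(logits, dim=-1)
    q_min = torch.min(q1, q2)
    # Expectations over actions are computed exactly rather than sampled.
    soft_value = (probs * (q_min - alpha * log_probs)).sum(dim=-1)   # critic target term
    actor_loss = (probs * (alpha * log_probs - q_min)).sum(dim=-1).mean()
    return soft_value, actor_loss
```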
We present a simple method to leverage table content in a BERT-based model to solve the text-to-SQL problem. Based on the observation that some table content matches words in the question string and some table headers also match words in the question string, we encode two additional feature vectors for the deep model. Our method also benefits model inference at test time, as the tables are almost the same at training and test time. We test our model on the WikiSQL dataset, outperform the BERT-based baseline by 3.7% in logical form accuracy and 3.7% in execution accuracy, and achieve state-of-the-art results. | [] | [
"Semantic Parsing",
"Text-To-Sql"
] | [] | [
"WikiSQL"
] | [
"Accuracy"
] | Content Enhanced BERT-based Text-to-SQL Generation |
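A minimal sketch of the two extra feature vectors the abstract describes: binary flags marking which question tokens match a table header and which match a table cell value. The exact matching rules (tokenisation, partial matches) in the released model may differ; token-level exact matching is used here for illustration.

```python
def match_features(question_tokens, headers, cell_values):
    """Return (header_match, content_match) binary vectors, one flag per
    question token, to be fed to the model alongside the usual inputs."""
    header_words = {w.lower() for h in headers for w in h.split()}
    cell_words = {w.lower() for v in cell_values for w in str(v).split()}
    header_match = [1 if t.lower() in header_words else 0 for t in question_tokens]
    content_match = [1 if t.lower() in cell_words else 0 for t in question_tokens]
    return header_match, content_match

# Example:
# match_features("which winner in 2014".split(), ["Year", "Winner"], [2014, "Alice"])
# -> ([0, 1, 0, 0], [0, 0, 0, 1])
```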
We propose a simple yet robust stochastic answer network (SAN) that simulates
multi-step reasoning in machine reading comprehension. Compared to previous
work such as ReasoNet which used reinforcement learning to determine the number
of steps, the unique feature is the use of a kind of stochastic prediction
dropout on the answer module (final layer) of the neural network during the
training. We show that this simple trick improves robustness and achieves
results competitive to the state-of-the-art on the Stanford Question Answering
Dataset (SQuAD), the Adversarial SQuAD, and the Microsoft MAchine Reading
COmprehension Dataset (MS MARCO). | [] | [
"Machine Reading Comprehension",
"Question Answering",
"Reading Comprehension"
] | [] | [
"SQuAD1.1 dev",
"SQuAD1.1",
"SQuAD2.0"
] | [
"EM",
"F1"
] | Stochastic Answer Networks for Machine Reading Comprehension |
Image-level weakly supervised semantic segmentation is a challenging problem that has been deeply studied in recent years. Most advanced solutions exploit class activation maps (CAMs). However, CAMs can hardly serve as the object mask due to the gap between full and weak supervisions. In this paper, we propose a self-supervised equivariant attention mechanism (SEAM) to discover additional supervision and narrow the gap. Our method is based on the observation that equivariance is an implicit constraint in fully supervised semantic segmentation, whose pixel-level labels take the same spatial transformation as the input images during data augmentation. However, this constraint is lost on the CAMs trained by image-level supervision. Therefore, we propose consistency regularization on predicted CAMs from various transformed images to provide self-supervision for network learning. Moreover, we propose a pixel correlation module (PCM), which exploits context appearance information and refines the prediction of the current pixel by its similar neighbors, leading to further improvement in CAM consistency. Extensive experiments on the PASCAL VOC 2012 dataset demonstrate that our method outperforms state-of-the-art methods using the same level of supervision. The code is released online. | [] | [
"Data Augmentation",
"Semantic Segmentation",
"Weakly-Supervised Semantic Segmentation"
] | [] | [
"PASCAL VOC 2012 val"
] | [
"Mean IoU"
] | Self-supervised Equivariant Attention Mechanism for Weakly Supervised Semantic Segmentation |
We consider an important task of effective and efficient semantic image
segmentation. In particular, we adapt a powerful semantic segmentation
architecture, called RefineNet, into the more compact one, suitable even for
tasks requiring real-time performance on high-resolution inputs. To this end,
we identify computationally expensive blocks in the original setup, and propose
two modifications aimed to decrease the number of parameters and floating point
operations. By doing that, we achieve more than twofold model reduction, while
keeping the performance levels almost intact. Our fastest model undergoes a
significant speed-up boost from 20 FPS to 55 FPS on a generic GPU card on
512x512 inputs with solid 81.1% mean IoU performance on the test set of PASCAL
VOC, while our slowest model with 32 FPS (from original 17 FPS) shows 82.7%
mean IoU on the same dataset. Alternatively, we showcase that our approach is
easily mixable with light-weight classification networks: we attain 79.2% mean
IoU on PASCAL VOC using a model that contains only 3.3M parameters and performs
only 9.3B floating point operations. | [] | [
"Real-Time Semantic Segmentation",
"Semantic Segmentation"
] | [] | [
"NYU Depth v2",
"PASCAL VOC 2012 test"
] | [
"Speed(ms/f)",
"Mean IoU",
"mIoU"
] | Light-Weight RefineNet for Real-Time Semantic Segmentation |
Natural Language Inference (NLI) task requires an agent to determine the
logical relationship between a natural language premise and a natural language
hypothesis. We introduce Interactive Inference Network (IIN), a novel class of
neural network architectures that is able to achieve high-level understanding
of the sentence pair by hierarchically extracting semantic features from
interaction space. We show that an interaction tensor (attention weight)
contains semantic information to solve natural language inference, and a denser
interaction tensor contains richer semantic information. One instance of such
architecture, Densely Interactive Inference Network (DIIN), demonstrates the
state-of-the-art performance on large-scale NLI corpora and a large-scale
NLI-like corpus. It is noteworthy that DIIN achieves a greater than 20% error
reduction on the challenging Multi-Genre NLI (MultiNLI) dataset with respect to
the strongest published system. | [] | [
"Natural Language Inference",
"Paraphrase Identification"
] | [] | [
"Quora Question Pairs",
"SNLI"
] | [
"Parameters",
"% Train Accuracy",
"% Test Accuracy",
"Accuracy"
] | Natural Language Inference over Interaction Space |
This work presents a method for adapting a single, fixed deep neural network
to multiple tasks without affecting performance on already learned tasks. By
building upon ideas from network quantization and pruning, we learn binary
masks that piggyback on an existing network, or are applied to unmodified
weights of that network to provide good performance on a new task. These masks
are learned in an end-to-end differentiable fashion, and incur a low overhead
of 1 bit per network parameter, per task. Even though the underlying network is
fixed, the ability to mask individual weights allows for the learning of a
large number of filters. We show performance comparable to dedicated fine-tuned
networks for a variety of classification tasks, including those with large
domain shifts from the initial task (ImageNet), and a variety of network
architectures. Unlike prior work, we do not suffer from catastrophic forgetting
or competition between tasks, and our performance is agnostic to task ordering.
Code available at https://github.com/arunmallya/piggyback. | [] | [
"Continual Learning",
"Quantization"
] | [] | [
"Stanford Cars (Fine-grained 6 Tasks)",
"Sketch (Fine-grained 6 Tasks)",
"Wikiart (Fine-grained 6 Tasks)",
"visual domain decathlon (10 tasks)",
"CUBS (Fine-grained 6 Tasks)",
"ImageNet (Fine-grained 6 Tasks)",
"Flowers (Fine-grained 6 Tasks)"
] | [
"decathlon discipline (Score)",
"Accuracy"
] | Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights |
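A minimal sketch of a piggyback-style layer: the backbone weights stay frozen, a real-valued per-task mask is binarised by thresholding on the forward pass, and gradients reach the real-valued mask through a straight-through estimator. The threshold and mask initialisation are illustrative assumptions, and the published backward rule differs in detail.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PiggybackLinear(nn.Module):
    """Frozen linear layer whose effective weights are weight * binary_mask."""

    def __init__(self, frozen_weight: torch.Tensor, threshold: float = 5e-3):
        super().__init__()
        self.register_buffer("weight", frozen_weight)                 # fixed backbone weights
        self.mask_real = nn.Parameter(torch.full_like(frozen_weight, 1e-2))
        self.threshold = threshold

    def forward(self, x):
        hard = (self.mask_real > self.threshold).float()
        # Straight-through: binary mask on the forward pass, gradient to mask_real.
        mask = hard.detach() - self.mask_real.detach() + self.mask_real
        return F.linear(x, self.weight * mask)
```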
Convolutional networks reach top quality in pixel-level video object
segmentation but require a large amount of training data (1k~100k) to deliver
such results. We propose a new training strategy which achieves
state-of-the-art results across three evaluation datasets while using 20x~1000x
less annotated data than competing methods. Our approach is suitable for both
single and multiple object segmentation. Instead of using large training sets
hoping to generalize across domains, we generate in-domain training data using
the provided annotation on the first frame of each video to synthesize ("lucid
dream") plausible future video frames. In-domain per-video training data allows
us to train high quality appearance- and motion-based models, as well as tune
the post-processing stage. This approach allows to reach competitive results
even when training from only a single annotated frame, without ImageNet
pre-training. Our results indicate that using a larger training set is not
automatically better, and that for the video object segmentation task a smaller
training set that is closer to the target domain is more effective. This
changes the mindset regarding how many training samples and general
"objectness" knowledge are required for the video object segmentation task. | [] | [
"Multiple Object Tracking",
"Object Tracking",
"Semantic Segmentation",
"Semi-Supervised Video Object Segmentation",
"Video Object Segmentation",
"Video Semantic Segmentation"
] | [] | [
"DAVIS 2017 (test-dev)",
"DAVIS 2016"
] | [
"F-measure (Decay)",
"Jaccard (Mean)",
"F-measure (Recall)",
"Jaccard (Decay)",
"Jaccard (Recall)",
"F-measure (Mean)",
"J&F"
] | Lucid Data Dreaming for Video Object Segmentation |
Although the performance of person Re-Identification (ReID) has been
significantly boosted, many challenging issues in real scenarios have not been
fully investigated, e.g., the complex scenes and lighting variations, viewpoint
and pose changes, and the large number of identities in a camera network. To
facilitate the research towards conquering those issues, this paper contributes
a new dataset called MSMT17 with many important features, e.g., 1) the raw
videos are taken by a 15-camera network deployed in both indoor and outdoor
scenes, 2) the videos cover a long period of time and present complex lighting
variations, and 3) it contains currently the largest number of annotated
identities, i.e., 4,101 identities and 126,441 bounding boxes. We also observe
that, domain gap commonly exists between datasets, which essentially causes
severe performance drop when training and testing on different datasets. This
results in that available training data cannot be effectively leveraged for new
testing domains. To relieve the expensive costs of annotating new training
samples, we propose a Person Transfer Generative Adversarial Network (PTGAN) to
bridge the domain gap. Comprehensive experiments show that the domain gap could
be substantially narrowed-down by the PTGAN. | [] | [
"Person Re-Identification",
"Unsupervised Domain Adaptation"
] | [] | [
"DukeMTMC-reID",
"Duke to MSMT",
"Market to MSMT"
] | [
"rank-10",
"mAP",
"Rank-10",
"Rank-1",
"rank-1",
"rank-5"
] | Person Transfer GAN to Bridge Domain Gap for Person Re-Identification |
Deep convolutional networks have achieved great success for visual
recognition in still images. However, for action recognition in videos, the
advantage over traditional methods is not so evident. This paper aims to
discover the principles to design effective ConvNet architectures for action
recognition in videos and learn these models given limited training samples.
Our first contribution is temporal segment network (TSN), a novel framework for
video-based action recognition, which is based on the idea of long-range
temporal structure modeling. It combines a sparse temporal sampling strategy
and video-level supervision to enable efficient and effective learning using
the whole action video. The other contribution is our study on a series of good
practices in learning ConvNets on video data with the help of temporal segment
network. Our approach obtains state-of-the-art performance on the datasets
of HMDB51 ($69.4\%$) and UCF101 ($94.2\%$). We also visualize the learned
ConvNet models, which qualitatively demonstrates the effectiveness of temporal
segment network and the proposed good practices. | [] | [
"Action Classification",
"Action Recognition",
"Action Recognition In Videos",
"Action Recognition In Videos ",
"Multimodal Activity Recognition",
"Temporal Action Localization"
] | [] | [
"Kinetics-400",
"UCF101",
"HMDB-51",
"EV-Action"
] | [
"3-fold Accuracy",
"Vid acc@5",
"Accuracy",
"Average accuracy of 3 splits",
"Vid acc@1"
] | Temporal Segment Networks: Towards Good Practices for Deep Action Recognition |
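A minimal sketch of the temporal segment network idea described above: one snippet per segment is scored by a shared 2D backbone and a segmental consensus (average here) fuses the snippet predictions. Snippet sampling, the multi-modal streams and the other "good practices" from the paper are not shown; the class names are illustrative.

```python
import torch
import torch.nn as nn

class TSNHead(nn.Module):
    """Score K snippets per video with a shared backbone, then average."""

    def __init__(self, backbone: nn.Module, num_segments: int = 3):
        super().__init__()
        self.backbone, self.num_segments = backbone, num_segments

    def forward(self, snippets):                          # snippets: (B, K, C, H, W)
        b, k = snippets.shape[:2]
        logits = self.backbone(snippets.flatten(0, 1))    # (B*K, num_classes)
        logits = logits.view(b, k, -1)
        return logits.mean(dim=1)                         # average consensus over segments
```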
Neural networks with tree-based sentence encoders have shown better results
on many downstream tasks. Most of existing tree-based encoders adopt syntactic
parsing trees as the explicit structure prior. To study the effectiveness of
different tree structures, we replace the parsing trees with trivial trees
(i.e., binary balanced tree, left-branching tree and right-branching tree) in
the encoders. Though trivial trees contain no syntactic information, those
encoders get competitive or even better results on all of the ten downstream
tasks we investigated. This surprising result indicates that explicit syntax
guidance may not be the main contributor to the superior performances of
tree-based neural sentence modeling. Further analysis shows that tree modeling
gives better results when crucial words are closer to the final representation.
Additional experiments give more clues on how to design an effective tree-based
encoder. Our code is open-source and available at
https://github.com/ExplorerFreda/TreeEnc. | [] | [
"Sentiment Analysis",
"Text Classification"
] | [] | [
"DBpedia",
"Amazon Review Polarity",
"AG News",
"Amazon Review Full"
] | [
"Error",
"Accuracy"
] | On Tree-Based Neural Sentence Modeling |
We present a simple and accurate span-based model for semantic role labeling
(SRL). Our model directly takes into account all possible argument spans and
scores them for each label. At decoding time, we greedily select higher scoring
labeled spans. One advantage of our model is to allow us to design and use
span-level features, that are difficult to use in token-based BIO tagging
approaches. Experimental results demonstrate that our ensemble model achieves
the state-of-the-art results, 87.4 F1 and 87.0 F1 on the CoNLL-2005 and 2012
datasets, respectively. | [] | [
"Semantic Role Labeling"
] | [] | [
"CoNLL 2005",
"OntoNotes"
] | [
"F1"
] | A Span Selection Model for Semantic Role Labeling |
This research note combines two methods that have recently improved the state
of the art in language modeling: Transformers and dynamic evaluation.
Transformers use stacked layers of self-attention that allow them to capture
long range dependencies in sequential data. Dynamic evaluation fits models to
the recent sequence history, allowing them to assign higher probabilities to
re-occurring sequential patterns. By applying dynamic evaluation to
Transformer-XL models, we improve the state of the art on enwik8 from 0.99 to
0.94 bits/char, text8 from 1.08 to 1.04 bits/char, and WikiText-103 from 18.3
to 16.4 perplexity points. | [] | [
"Language Modelling"
] | [] | [
"Text8",
"enwik8",
"WikiText-103",
"Hutter Prize"
] | [
"Number of params",
"Bit per Character (BPC)",
"Validation perplexity",
"Test perplexity"
] | Dynamic Evaluation of Transformer Language Models |
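A small sketch of dynamic evaluation: the language model is scored on a segment and then updated on that same segment, so later segments benefit from the recently observed history. The published method uses an RMS-style update with decay back towards the original parameters; plain SGD here is a deliberate simplification.

```python
import torch

def dynamic_eval(model, token_batches, loss_fn, lr=1e-4):
    """Evaluate while adapting: score each segment, then take a gradient step on it."""
    total_loss, total_tokens = 0.0, 0
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for inputs, targets in token_batches:         # segments in document order
        loss = loss_fn(model(inputs), targets)
        total_loss += loss.item() * targets.numel()
        total_tokens += targets.numel()
        opt.zero_grad()
        loss.backward()                           # adapt on the segment just scored
        opt.step()
    return total_loss / total_tokens              # average log-loss of the adapted model
```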
Real-world large-scale datasets usually contain noisy labels and are imbalanced. Therefore, we propose derivative manipulation (DM), a novel and general example weighting approach for training robust deep models under these adverse conditions. DM has two main merits. First, loss function and example weighting are two common techniques in robust learning. In gradient-based optimisation, the role of a loss function is to provide the gradient for back-propagation to update a model, so that the derivative magnitude of an example defines how much impact it has, namely its weight. By DM, we connect the design of loss function and example weighting together. Second, although designing a loss function sometimes has the same effect, we need to care whether a loss is differentiable, and derive its derivative to understand its example weighting scheme. They make the design complicated. Instead, DM is more flexible and straightforward by directly modifying the derivative. Concretely, DM modifies a derivative magnitude function, including transformation and normalisation, after which we term it an emphasis density function, which expresses a weighting scheme. Accordingly, diverse weighting schemes are derived from common probability density functions, including those of well-known robust losses, e.g., MAE and GCE. We conduct extensive experiments demonstrating the effectiveness of DM on both vision and language tasks. | [] | [
"Image Classification",
"Representation Learning"
] | [] | [
"Clothing1M"
] | [
"Accuracy"
] | Derivative Manipulation for General Example Weighting |
Integrating logical reasoning within deep learning architectures has been a major goal of modern AI systems. In this paper, we propose a new direction toward this goal by introducing a differentiable (smoothed) maximum satisfiability (MAXSAT) solver that can be integrated into the loop of larger deep learning systems. Our (approximate) solver is based upon a fast coordinate descent approach to solving the semidefinite program (SDP) associated with the MAXSAT problem. We show how to analytically differentiate through the solution to this SDP and efficiently solve the associated backward pass. We demonstrate that by integrating this solver into end-to-end learning systems, we can learn the logical structure of challenging problems in a minimally supervised fashion. In particular, we show that we can learn the parity function using single-bit supervision (a traditionally hard task for deep networks) and learn how to play 9x9 Sudoku solely from examples. We also solve a "visual Sudoku" problem that maps images of Sudoku puzzles to their associated logical solutions by combining our MAXSAT solver with a traditional convolutional architecture. Our approach thus shows promise in integrating logical structures within deep learning. | [] | [
"Game of Suduko"
] | [] | [
"Sudoko 9x9"
] | [
"Accuracy"
] | SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver |
Consider end-to-end training of a multi-modal vs. a single-modal network on a task with multiple input modalities: the multi-modal network receives more information, so it should match or outperform its single-modal counterpart. In our experiments, however, we observe the opposite: the best single-modal network always outperforms the multi-modal network. This observation is consistent across different combinations of modalities and on different tasks and benchmarks. This paper identifies two main causes for this performance drop: first, multi-modal networks are often prone to overfitting due to increased capacity. Second, different modalities overfit and generalize at different rates, so training them jointly with a single optimization strategy is sub-optimal. We address these two problems with a technique we call Gradient Blending, which computes an optimal blend of modalities based on their overfitting behavior. We demonstrate that Gradient Blending outperforms widely-used baselines for avoiding overfitting and achieves state-of-the-art accuracy on various tasks including human action recognition, ego-centric action recognition, and acoustic event detection. | [] | [
"Action Classification",
"Action Recognition",
"Temporal Action Localization"
] | [] | [
"Kinetics-400",
"Sports-1M",
"miniSports"
] | [
"Video hit@1 ",
"Video hit@1",
"Vid acc@1",
"Video hit@5",
"Clip Hit@1"
] | What Makes Training Multi-Modal Classification Networks Hard? |
In this paper, we propose a state-of-the-art video denoising algorithm based on a convolutional neural network architecture. Until recently, video denoising with neural networks had been a largely underexplored domain, and existing methods could not compete with the performance of the best patch-based methods. The approach we introduce in this paper, called FastDVDnet, shows similar or better performance than other state-of-the-art competitors with significantly lower computing times. In contrast to other existing neural network denoisers, our algorithm exhibits several desirable properties such as fast runtimes, and the ability to handle a wide range of noise levels with a single network model. The characteristics of its architecture make it possible to avoid using a costly motion compensation stage while achieving excellent performance. The combination of its denoising performance and lower computational load makes this algorithm attractive for practical denoising applications. We compare our method with different state-of-the-art algorithms, both visually and with respect to objective quality metrics. | [] | [
"Denoising",
"Motion Compensation",
"Motion Estimation",
"Video Denoising"
] | [] | [
"DAVIS sigma50",
"Set8 sigma30",
"DAVIS sigma20",
"DAVIS sigma40",
"DAVIS sigma10",
"Set8 sigma40",
"Set8 sigma10",
"DAVIS sigma30",
"Set8 sigma20",
"Set8 sigma50"
] | [
"PSNR"
] | FastDVDnet: Towards Real-Time Deep Video Denoising Without Flow Estimation |
We introduce a new, rigorously-formulated Bayesian meta-learning algorithm that learns a prior probability distribution over model parameters for few-shot learning. The proposed algorithm employs gradient-based variational inference to infer the posterior over model parameters for a new task. Our algorithm can be applied to any model architecture and can be implemented in various machine learning paradigms, including regression and classification. We show that the models trained with our proposed meta-learning algorithm are well calibrated and accurate, with state-of-the-art calibration and classification results on two few-shot classification benchmarks (Omniglot and Mini-ImageNet), and competitive results in a multi-modal task-distribution regression. | [] | [
"Few-Shot Image Classification",
"Few-Shot Learning",
"Meta-Learning",
"Omniglot",
"Regression",
"Variational Inference"
] | [] | [
"OMNIGLOT - 1-Shot, 5-way",
"OMNIGLOT - 5-Shot, 20-way",
"Mini-Imagenet 5-way (1-shot)",
"Tiered ImageNet 5-way (1-shot)",
"Mini-Imagenet 5-way (5-shot)",
"OMNIGLOT - 5-Shot, 5-way",
"OMNIGLOT - 1-Shot, 20-way",
"Tiered ImageNet 5-way (5-shot)"
] | [
"Accuracy"
] | Uncertainty in Model-Agnostic Meta-Learning using Variational Inference |
Current state-of-the-art models for video action recognition are mostly based on expensive 3D ConvNets. This results in a need for large GPU clusters to train and evaluate such architectures. To address this problem, we present a lightweight and memory-friendly architecture for action recognition that performs on par with or better than current architectures by using only a fraction of resources. The proposed architecture is based on a combination of a deep subnet operating on low-resolution frames with a compact subnet operating on high-resolution frames, allowing for high efficiency and accuracy at the same time. We demonstrate that our approach achieves a reduction by $3\sim4$ times in FLOPs and $\sim2$ times in memory usage compared to the baseline. This enables training deeper models with more input frames under the same computational budget. To further obviate the need for large-scale 3D convolutions, a temporal aggregation module is proposed to model temporal dependencies in a video at very small additional computational costs. Our models achieve strong performance on several action recognition benchmarks including Kinetics, Something-Something and Moments-in-time. The code and models are available at https://github.com/IBM/bLVNet-TAM. | [] | [
"Action Classification",
"Action Recognition",
"Temporal Action Localization"
] | [] | [
"Kinetics-400",
"Something-Something V2"
] | [
"Vid acc@5",
"Vid acc@1",
"Top-1 Accuracy"
] | More Is Less: Learning Efficient Video Representations by Big-Little Network and Depthwise Temporal Aggregation |
We propose to modify the common training protocols of optical flow, leading to sizable accuracy improvements without adding to the computational complexity of the training process. The improvement is based on observing the bias in sampling challenging data that exists in the current training protocol, and improving the sampling process. In addition, we find that both regularization and augmentation should decrease during the training protocol. Using an existing low-parameter architecture, the method is ranked first on the MPI Sintel benchmark among all other methods, improving the accuracy of the best two-frame method by more than 10%. The method also surpasses all similar architecture variants by more than 12% and 19.7% on the KITTI benchmarks, achieving the lowest Average End-Point Error on KITTI2012 among two-frame methods, without using extra datasets. | [] | [
"Optical Flow Estimation"
] | [] | [
"Sintel-final",
"Sintel-clean"
] | [
"Average End-Point Error"
] | ScopeFlow: Dynamic Scene Scoping for Optical Flow |
Reading strategies have been shown to improve comprehension levels,
especially for readers lacking adequate prior knowledge. Just as the process of
knowledge accumulation is time-consuming for human readers, it is
resource-demanding to impart rich general domain knowledge into a deep language
model via pre-training. Inspired by reading strategies identified in cognitive
science, and given limited computational resources -- just a pre-trained model
and a fixed number of training instances -- we propose three general strategies
aimed to improve non-extractive machine reading comprehension (MRC): (i) BACK
AND FORTH READING that considers both the original and reverse order of an
input sequence, (ii) HIGHLIGHTING, which adds a trainable embedding to the text
embedding of tokens that are relevant to the question and candidate answers,
and (iii) SELF-ASSESSMENT that generates practice questions and candidate
answers directly from the text in an unsupervised manner.
By fine-tuning a pre-trained language model (Radford et al., 2018) with our
proposed strategies on the largest general domain multiple-choice MRC dataset
RACE, we obtain a 5.8% absolute increase in accuracy over the previous best
result achieved by the same pre-trained model fine-tuned on RACE without the
use of strategies. We further fine-tune the resulting model on a target MRC
task, leading to an absolute improvement of 6.2% in average accuracy over
previous state-of-the-art approaches on six representative non-extractive MRC
datasets from different domains (i.e., ARC, OpenBookQA, MCTest, SemEval-2018
Task 11, ROCStories, and MultiRC). These results demonstrate the effectiveness
of our proposed strategies and the versatility and general applicability of our
fine-tuned models that incorporate these strategies. Core code is available at
https://github.com/nlpdata/strategy/. | [] | [
"Language Modelling",
"Machine Reading Comprehension",
"Question Answering",
"Reading Comprehension"
] | [] | [
"Story Cloze Test"
] | [
"Accuracy"
] | Improving Machine Reading Comprehension with General Reading Strategies |
Recent advances in image-based 3D human shape estimation have been driven by the significant improvement in representation power afforded by deep neural networks. Although current approaches have demonstrated the potential in real world settings, they still fail to produce reconstructions with the level of detail often present in the input images. We argue that this limitation stems primarily from two conflicting requirements: accurate predictions require large context, but precise predictions require high resolution. Due to memory limitations in current hardware, previous approaches tend to take low resolution images as input to cover large spatial context, and produce less precise (or low resolution) 3D estimates as a result. We address this limitation by formulating a multi-level architecture that is end-to-end trainable. A coarse level observes the whole image at lower resolution and focuses on holistic reasoning. This provides context to a fine level which estimates highly detailed geometry by observing higher-resolution images. We demonstrate that our approach significantly outperforms existing state-of-the-art techniques on single image human shape reconstruction by fully leveraging 1k-resolution input images. | [] | [
"3D Human Pose Estimation",
"3D Object Reconstruction From A Single Image",
"3D Shape Reconstruction"
] | [] | [
"BUFF",
"RenderPeople"
] | [
"Surface normal consistency",
"Point-to-surface distance (cm)",
"Chamfer (cm)"
] | PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization |
Depth maps contain geometric clues for assisting Salient Object Detection (SOD). In this paper, we propose a novel Cross-Modal Weighting (CMW) strategy to encourage comprehensive interactions between RGB and depth channels for RGB-D SOD. Specifically, three RGB-depth interaction modules, named CMW-L, CMW-M and CMW-H, are developed to deal with low-, middle- and high-level cross-modal information fusion, respectively. These modules use Depth-to-RGB Weighting (DW) and RGB-to-RGB Weighting (RW) to allow rich cross-modal and cross-scale interactions among feature layers generated by different network blocks. To effectively train the proposed Cross-Modal Weighting Network (CMWNet), we design a composite loss function that summarizes the errors between intermediate predictions and ground truth over different scales. With all these novel components working together, CMWNet effectively fuses information from RGB and depth channels, and meanwhile explores object localization and details across scales. Thorough evaluations demonstrate that CMWNet consistently outperforms 15 state-of-the-art RGB-D SOD methods on seven popular benchmarks. | [] | [
"Object Detection",
"Object Localization",
"RGB-D Salient Object Detection",
"RGB Salient Object Detection",
"Salient Object Detection"
] | [] | [
"NJU2K"
] | [
"Average MAE",
"S-Measure"
] | Cross-Modal Weighting Network for RGB-D Salient Object Detection |
Human-Object Interaction (HOI) consists of human, object and implicit interaction/verb. Different from previous methods that directly map pixels to HOI semantics, we propose a novel perspective for HOI learning in an analytical manner. In analogy to Harmonic Analysis, whose goal is to study how to represent the signals with the superposition of basic waves, we propose the HOI Analysis. We argue that coherent HOI can be decomposed into isolated human and object. Meanwhile, isolated human and object can also be integrated into coherent HOI again. Moreover, transformations between human-object pairs with the same HOI can also be easier approached with integration and decomposition. As a result, the implicit verb will be represented in the transformation function space. In light of this, we propose an Integration-Decomposition Network (IDN) to implement the above transformations and achieve state-of-the-art performance on widely-used HOI detection benchmarks. Code is available at https://github.com/DirtyHarryLYL/HAKE-Action-Torch/tree/IDN-(Integrating-Decomposing-Network). | [] | [
"Human-Object Interaction Detection"
] | [] | [
"HICO-DET",
"V-COCO"
] | [
"MAP"
] | HOI Analysis: Integrating and Decomposing Human-Object Interaction |
In this paper, we propose a recurrent framework for Joint Unsupervised
LEarning (JULE) of deep representations and image clusters. In our framework,
successive operations in a clustering algorithm are expressed as steps in a
recurrent process, stacked on top of representations output by a Convolutional
Neural Network (CNN). During training, image clusters and representations are
updated jointly: image clustering is conducted in the forward pass, while
representation learning in the backward pass. Our key idea behind this
framework is that good representations are beneficial to image clustering and
clustering results provide supervisory signals to representation learning. By
integrating two processes into a single model with a unified weighted triplet
loss and optimizing it end-to-end, we can obtain not only more powerful
representations, but also more precise image clusters. Extensive experiments
show that our method outperforms the state-of-the-art on image clustering
across a variety of image datasets. Moreover, the learned representations
generalize well when transferred to other tasks. | [] | [
"Image Clustering",
"Representation Learning"
] | [] | [
"coil-100",
"MNIST-test",
"CMU-PIE",
"Imagenet-dog-15",
"YouTube Faces DB",
"USPS",
"CIFAR-100",
"CIFAR-10",
"UMist",
"FRGC",
"Tiny-ImageNet",
"CUB Birds",
"ImageNet-10",
"STL-10",
"Coil-20",
"Stanford Dogs",
"Stanford Cars",
"MNIST-full"
] | [
"Train set",
"Train Split",
"ARI",
"Train Set",
"NMI",
"Accuracy"
] | Joint Unsupervised Learning of Deep Representations and Image Clusters |
In this paper we present a novel deep learning method for 3D object detection and 6D pose estimation from RGB images. Our method, named DPOD (Dense Pose Object Detector), estimates dense multi-class 2D-3D correspondence maps between an input image and available 3D models. Given the correspondences, a 6DoF pose is computed via PnP and RANSAC. An additional RGB pose refinement of the initial pose estimates is performed using a custom deep learning-based refinement scheme. Our results and comparison to a vast number of related works demonstrate that a large number of correspondences is beneficial for obtaining high-quality 6D poses both before and after refinement. Unlike other methods that mainly use real data for training and do not train on synthetic renderings, we perform evaluation on both synthetic and real training data demonstrating superior results before and after refinement when compared to all recent detectors. While being precise, the presented approach is still real-time capable. | [] | [
"3D Object Detection",
"6D Pose Estimation",
"6D Pose Estimation using RGB",
"Object Detection",
"Pose Estimation"
] | [] | [
"LineMOD",
"Occlusion LineMOD"
] | [
"Mean ADD",
"Accuracy (ADD)"
] | DPOD: 6D Pose Object Detector and Refiner |
Recently, it has been demonstrated that deep neural networks can significantly improve the performance of single image super-resolution (SISR). Numerous studies have concentrated on raising the quantitative quality of super-resolved (SR) images. However, these methods that target PSNR maximization usually produce blurred images at large upscaling factor. The introduction of generative adversarial networks (GANs) can mitigate this issue and show impressive results with synthetic high-frequency textures. Nevertheless, these GAN-based approaches always have a tendency to add fake textures and even artifacts to make the SR image of visually higher-resolution. In this paper, we propose a novel perceptual image super-resolution method that progressively generates visually high-quality results by constructing a stage-wise network. Specifically, the first phase concentrates on minimizing pixel-wise error, and the second stage utilizes the features extracted by the previous stage to pursue results with better structural retention. The final stage employs fine structure features distilled by the second phase to produce more realistic results. In this way, we can maintain the pixel, and structural level information in the perceptual image as much as possible. It is useful to note that the proposed method can build three types of images in a feed-forward process. Also, we explore a new generator that adopts multi-scale hierarchical features fusion. Extensive experiments on benchmark datasets show that our approach is superior to the state-of-the-art methods. Code is available at https://github.com/Zheng222/PPON. | [] | [
"Image Super-Resolution",
"Super-Resolution"
] | [] | [
"Set14 - 4x upscaling",
"Manga109 - 4x upscaling",
"BSD100 - 4x upscaling",
"Set5 - 4x upscaling",
"Urban100 - 4x upscaling"
] | [
"SSIM",
"PSNR"
] | Progressive Perception-Oriented Network for Single Image Super-Resolution |
We propose to estimate 3D human pose from multi-view images and a few IMUs attached to a person's limbs. It operates by first detecting 2D poses from the two signals and then lifting them to the 3D space. We present a geometric approach to reinforce the visual features of each pair of joints based on the IMUs. This notably improves 2D pose estimation accuracy, especially when one joint is occluded. We call this approach Orientation Regularized Network (ORN). Then we lift the multi-view 2D poses to the 3D space by an Orientation Regularized Pictorial Structure Model (ORPSM) which jointly minimizes the projection error between the 3D and 2D poses, along with the discrepancy between the 3D pose and IMU orientations. The simple two-step approach reduces the error of the state-of-the-art by a large margin on a public dataset. Our code will be released at https://github.com/CHUNYUWANG/imu-human-pose-pytorch. | [] | [
"3D Absolute Human Pose Estimation",
"3D Human Pose Estimation",
"Pose Estimation"
] | [] | [
"Total Capture"
] | [
"Average MPJPE (mm)",
"MPJPE"
] | Fusing Wearable IMUs with Multi-View Images for Human Pose Estimation: A Geometric Approach |
We present a general framework for exemplar-based image translation, which synthesizes a photo-realistic image from the input in a distinct domain (e.g., semantic segmentation mask, or edge map, or pose keypoints), given an exemplar image. The output has a style (e.g., color, texture) consistent with the semantically corresponding objects in the exemplar. We propose to jointly learn the cross-domain correspondence and the image translation, where both tasks facilitate each other and thus can be learned with weak supervision. The images from distinct domains are first aligned to an intermediate domain where dense correspondence is established. Then, the network synthesizes images based on the appearance of semantically corresponding patches in the exemplar. We demonstrate the effectiveness of our approach in several image translation tasks. Our method is significantly superior to state-of-the-art methods in terms of image quality, with the image style faithful to the exemplar and semantically consistent. Moreover, we show the utility of our method for several applications. | [] | [
"Image Generation",
"Image-to-Image Translation"
] | [] | [
"ADE20K Labels-to-Photos",
"ADE20K-Outdoor Labels-to-Photos",
"CelebA-HQ",
"Deep-Fashion"
] | [
"FID"
] | Cross-domain Correspondence Learning for Exemplar-based Image Translation |
In statistical relational learning, knowledge graph completion deals with
automatically understanding the structure of large knowledge graphs---labeled
directed graphs---and predicting missing relationships---labeled edges.
State-of-the-art embedding models propose different trade-offs between modeling
expressiveness, and time and space complexity. We reconcile both expressiveness
and complexity through the use of complex-valued embeddings and explore the
link between such complex-valued embeddings and unitary diagonalization. We
corroborate our approach theoretically and show that all real square
matrices---thus all possible relation/adjacency matrices---are the real part of
some unitarily diagonalizable matrix. This result opens the door to many other
applications of square matrix factorization. Our approach based on
complex embeddings is arguably simple, as it only involves a Hermitian dot
product, the complex counterpart of the standard dot product between real
vectors, whereas other methods resort to more and more complicated composition
functions to increase their expressiveness. The proposed complex embeddings are
scalable to large data sets, as they remain linear in both space and time, while
consistently outperforming alternative approaches on standard link prediction
benchmarks. | [] | [
"Knowledge Graph Completion",
"Knowledge Graphs",
"Link Prediction",
"Relational Reasoning"
] | [] | [
" FB15k"
] | [
"Hits@10",
"MRR",
"Hits@3",
"Hits@1"
] | Knowledge Graph Completion via Complex Tensor Factorization |
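As a concrete illustration of the Hermitian dot product mentioned above, here is a minimal NumPy sketch of a ComplEx-style triple score, Re(<e_s, w_r, conj(e_o)>); the variable names and the toy dimensions are my own choices for illustration, not taken from the paper's code.

import numpy as np

def complex_score(e_s, w_r, e_o):
    # e_s, w_r, e_o: complex-valued embedding vectors of the same dimension.
    # The score is the real part of the tri-linear Hermitian product
    # sum_k e_s[k] * w_r[k] * conj(e_o[k]); higher scores indicate a more
    # plausible (subject, relation, object) triple.
    return np.real(np.sum(e_s * w_r * np.conj(e_o)))

# Toy usage with random 8-dimensional complex embeddings.
rng = np.random.default_rng(0)
dim = 8
e_s = rng.normal(size=dim) + 1j * rng.normal(size=dim)
w_r = rng.normal(size=dim) + 1j * rng.normal(size=dim)
e_o = rng.normal(size=dim) + 1j * rng.normal(size=dim)
print(complex_score(e_s, w_r, e_o))

Because the relation embedding w_r is complex, swapping subject and object changes the score, which is what lets this simple product model asymmetric relations while staying linear in space and time.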
This paper tackles the task of semi-supervised video object segmentation,
i.e., the separation of an object from the background in a video, given the
mask of the first frame. We present One-Shot Video Object Segmentation (OSVOS),
based on a fully-convolutional neural network architecture that is able to
successively transfer generic semantic information, learned on ImageNet, to the
task of foreground segmentation, and finally to learning the appearance of a
single annotated object of the test sequence (hence one-shot). Although all
frames are processed independently, the results are temporally coherent and
stable. We perform experiments on two annotated video segmentation databases,
which show that OSVOS is fast and improves the state of the art by a
significant margin (79.8% vs 68.0%). | [] | [
"Semi-Supervised Video Object Segmentation",
"Video Object Segmentation",
"Video Segmentation",
"Visual Object Tracking"
] | [] | [
"YouTube-VOS",
"DAVIS 2017 (test-dev)",
"DAVIS 2017 (val)",
"YouTube",
"DAVIS 2016"
] | [
"F-measure (Decay)",
"Jaccard (Mean)",
"Speed (FPS)",
"Jaccard (Unseen)",
"F-Measure (Seen)",
"Jaccard (Seen)",
"mIoU",
"F-measure (Recall)",
"Jaccard (Decay)",
"Overall",
"O (Average of Measures)",
"Jaccard (Recall)",
"F-measure (Mean)",
"J&F",
"F-Measure (Unseen)"
] | One-Shot Video Object Segmentation |
In this paper, drawing intuition from the Turing test, we propose using
adversarial training for open-domain dialogue generation: the system is trained
to produce sequences that are indistinguishable from human-generated dialogue
utterances. We cast the task as a reinforcement learning (RL) problem where we
jointly train two systems, a generative model to produce response sequences,
and a discriminator---analogous to the human evaluator in the Turing test---to
distinguish between the human-generated dialogues and the machine-generated
ones. The outputs from the discriminator are then used as rewards for the
generative model, pushing the system to generate dialogues that mostly resemble
human dialogues.
In addition to adversarial training we describe a model for adversarial {\em
evaluation} that uses success in fooling an adversary as a dialogue evaluation
metric, while avoiding a number of potential pitfalls. Experimental results on
several metrics, including adversarial evaluation, demonstrate that the
adversarially-trained system generates higher-quality responses than previous
baselines. | [] | [
"Dialogue Evaluation",
"Dialogue Generation"
] | [] | [
"Amazon-5"
] | [
"1 in 10 R@2"
] | Adversarial Learning for Neural Dialogue Generation |
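A hedged sketch of how a discriminator score can serve as a reward in the REINFORCE-style training described above; the notation (generator parameters $\theta$, dialogue history $x$, sampled response $y$, discriminator $D$, baseline $b$) is illustrative rather than the paper's own.

\[
\nabla_\theta J(\theta) \;\approx\; \big(D(x, y) - b(x)\big)\,\nabla_\theta \log p_\theta(y \mid x), \qquad y \sim p_\theta(\cdot \mid x),
\]

where $D(x, y)$ is the probability the discriminator assigns to the sampled response being human-generated and $b(x)$ is a baseline used to reduce the variance of the gradient estimate.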
Most state-of-the-art text detection methods are specific to horizontal Latin
text and are not fast enough for real-time applications. We introduce Segment
Linking (SegLink), an oriented text detection method. The main idea is to
decompose text into two locally detectable elements, namely segments and links.
A segment is an oriented box covering a part of a word or text line; a link
connects two adjacent segments, indicating that they belong to the same word or
text line. Both elements are detected densely at multiple scales by an
end-to-end trained, fully-convolutional neural network. Final detections are
produced by combining segments connected by links. Compared with previous
methods, SegLink improves along the dimensions of accuracy, speed, and ease of
training. It achieves an f-measure of 75.0% on the standard ICDAR 2015
Incidental (Challenge 4) benchmark, outperforming the previous best by a large
margin. It runs at over 20 FPS on 512x512 images. Moreover, without
modification, SegLink is able to detect long lines of non-Latin text, such as
Chinese. | [] | [
"Curved Text Detection",
"Scene Text Detection"
] | [] | [
"ICDAR 2013",
"ICDAR 2015",
"MSRA-TD500"
] | [
"F-Measure",
"Recall",
"Precision"
] | Detecting Oriented Text in Natural Images by Linking Segments |
Semantic segmentation requires large amounts of pixel-wise annotations to learn accurate models. In this paper, we present a video prediction-based methodology to scale up training sets by synthesizing new training samples in order to improve the accuracy of semantic segmentation networks. We exploit video prediction models' ability to predict future frames in order to also predict future labels. A joint propagation strategy is also proposed to alleviate mis-alignments in synthesized samples. We demonstrate that training segmentation models on datasets augmented by the synthesized samples leads to significant improvements in accuracy. Furthermore, we introduce a novel boundary label relaxation technique that makes training robust to annotation noise and propagation artifacts along object boundaries. Our proposed methods achieve state-of-the-art mIoUs of 83.5% on Cityscapes and 82.9% on CamVid. Our single model, without model ensembles, achieves 72.8% mIoU on the KITTI semantic segmentation test set, which surpasses the winning entry of the ROB challenge 2018. Our code and videos can be found at https://nv-adlr.github.io/publication/2018-Segmentation. | [] | [
"Semantic Segmentation",
"Video Prediction"
] | [] | [
"CamVid",
"KITTI Semantic Segmentation",
"Cityscapes test"
] | [
"Mean IoU (class)",
"Mean IoU"
] | Improving Semantic Segmentation via Video Propagation and Label Relaxation |
Instance-level alignment is widely exploited for person re-identification, e.g. spatial alignment, latent semantic alignment and triplet alignment. This paper probes another feature alignment modality, namely cluster-level feature alignment across the whole dataset, where the model sees not only the sampled images in the local mini-batch but also the global feature distribution of the whole dataset through distilled anchors. Towards this aim, we propose an anchor loss and investigate many variants of cluster-level feature alignment, which consist of iterative aggregation and alignment over the whole dataset. Our extensive experiments demonstrate that our methods provide consistent and significant performance improvements with small additional training effort after traditional training has saturated. Both theoretically and experimentally, our proposed methods lead to more stable and better-guided optimization towards better representation and generalization for a well-aligned embedding. | [] | [
"Person Re-Identification"
] | [] | [
"DukeMTMC-reID",
"Market-1501"
] | [
"Rank-1",
"MAP"
] | Cluster-level Feature Alignment for Person Re-identification |
Depth estimation is a traditional computer vision task, which plays a crucial
role in understanding 3D scene geometry. Recently,
deep-convolutional-neural-networks based methods have achieved promising
results in the monocular depth estimation field. Specifically, the framework
that combines the multi-scale features extracted by the dilated convolution
based block (atrous spatial pyramid pooling, ASPP) has yielded significant
improvements in dense labeling tasks. However, the discretized and predefined
dilation rates cannot capture the continuous context information that differs
across diverse scenes, and they easily introduce grid artifacts in depth estimation.
In this paper, we propose an attention-based context aggregation network (ACAN)
to tackle these difficulties. Based on the self-attention model, ACAN
adaptively learns the task-specific similarities between pixels to model the
context information. First, we recast monocular depth estimation as a dense
labeling multi-class classification problem. Then we propose a soft ordinal
inference to transform the predicted probabilities into continuous depth values,
which can reduce the discretization error (about 1% decrease in RMSE). Second,
the proposed ACAN aggregates both the image-level and pixel-level context
information for depth estimation, where the former expresses the statistical
characteristic of the whole image and the latter extracts the long-range
spatial dependencies for each pixel. Third, for further reducing the
inconsistency between the RGB image and depth map, we construct an attention
loss to minimize their information entropy. We evaluate on public monocular
depth-estimation benchmark datasets (including NYU Depth V2, KITTI). The
experiments demonstrate the superiority of the proposed ACAN, which achieves
results competitive with the state of the art. | [] | [
"Depth Estimation",
"Monocular Depth Estimation",
"Multi-class Classification"
] | [] | [
"NYU-Depth V2"
] | [
"RMSE"
] | Attention-based Context Aggregation Network for Monocular Depth Estimation |
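A minimal sketch of what a soft ordinal inference step like the one described above could look like: continuous depth is recovered as the probability-weighted combination of discretized depth bins instead of a hard argmax. The log-spaced bin centers, the ranges, and the variable names are assumptions for illustration, not details taken from the paper.

import numpy as np

def soft_ordinal_depth(probs, bin_centers):
    # probs:       array of shape (num_bins, H, W), softmax outputs per pixel.
    # bin_centers: array of shape (num_bins,), the depth value each bin represents.
    # Returns an (H, W) depth map given by the expectation over bins, which avoids
    # the hard argmax and hence reduces discretization error.
    return np.tensordot(bin_centers, probs, axes=([0], [0]))

# Toy usage: 10 depth bins spaced log-uniformly between 0.5 m and 10 m (an assumption).
num_bins, H, W = 10, 4, 4
bin_centers = np.geomspace(0.5, 10.0, num_bins)
logits = np.random.randn(num_bins, H, W)
probs = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
depth = soft_ordinal_depth(probs, bin_centers)
print(depth.shape)  # (4, 4)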
Supervised depth estimation has achieved high accuracy due to the advanced
deep network architectures. Since the groundtruth depth labels are hard to
obtain, recent methods try to learn depth estimation networks in an
unsupervised way by exploring unsupervised cues, which are effective but less
reliable than true labels. An emerging way to resolve this dilemma is to
transfer knowledge from synthetic images with ground truth depth via domain
adaptation techniques. However, these approaches overlook specific geometric
structure of the natural images in the target domain (i.e., real data), which
is important for high-performing depth prediction. Motivated by the
observation, we propose a geometry-aware symmetric domain adaptation framework
(GASDA) to explore the labels in the synthetic data and epipolar geometry in
the real data jointly. Moreover, by training two image style translators and
depth estimators symmetrically in an end-to-end network, our model achieves
better image style transfer and generates high-quality depth maps. The
experimental results demonstrate the effectiveness of our proposed method and
its performance comparable to the state of the art. Code will be publicly
available at: https://github.com/sshan-zhao/GASDA. | [] | [
"Depth Estimation",
"Domain Adaptation",
"Monocular Depth Estimation",
"Style Transfer"
] | [] | [
"KITTI Eigen split"
] | [
"absolute relative error"
] | Geometry-Aware Symmetric Domain Adaptation for Monocular Depth Estimation |
Face sketch synthesis has made great progress in the past few years. Recent methods based on deep neural networks are able to generate high-quality sketches from face photos. However, due to the lack of training data (photo-sketch pairs), none of these deep-learning-based methods can be applied successfully to face photos in the wild. In this paper, we propose a semi-supervised deep learning architecture which extends face sketch synthesis to handle face photos in the wild by exploiting additional face photos in training. Instead of supervising the network with ground truth sketches, we first perform patch matching in feature space between the input photo and photos in a small reference set of photo-sketch pairs. We then compose a pseudo sketch feature representation using the corresponding sketch feature patches to supervise our network. With the proposed approach, we can train our networks using a small reference set of photo-sketch pairs together with a large face photo dataset without ground truth sketches. Experiments show that our method achieves state-of-the-art performance both on public benchmarks and on face photos in the wild. Code is available at https://github.com/chaofengc/Face-Sketch-Wild. | [] | [
"Face Sketch Synthesis",
"Patch Matching"
] | [] | [
"CUFS",
"CUHK",
"CUFSF"
] | [
"SSIM",
"FSIM"
] | Semi-Supervised Learning for Face Sketch Synthesis in the Wild |
Meta-reinforcement learning algorithms can enable robots to acquire new skills much more quickly, by leveraging prior experience to learn how to learn. However, much of the current research on meta-reinforcement learning focuses on task distributions that are very narrow. For example, a commonly used meta-reinforcement learning benchmark uses different running velocities for a simulated robot as different tasks. When policies are meta-trained on such narrow task distributions, they cannot possibly generalize to more quickly acquire entirely new tasks. Therefore, if the aim of these methods is to enable faster acquisition of entirely new behaviors, we must evaluate them on task distributions that are sufficiently broad to enable generalization to new behaviors. In this paper, we propose an open-source simulated benchmark for meta-reinforcement learning and multi-task learning consisting of 50 distinct robotic manipulation tasks. Our aim is to make it possible to develop algorithms that generalize to accelerate the acquisition of entirely new, held-out tasks. We evaluate 6 state-of-the-art meta-reinforcement learning and multi-task learning algorithms on these tasks. Surprisingly, while each task and its variations (e.g., with different object positions) can be learned with reasonable success, these algorithms struggle to learn with multiple tasks at the same time, even with as few as ten distinct training tasks. Our analysis and open-source environments pave the way for future research in multi-task learning and meta-learning that can enable meaningful generalization, thereby unlocking the full potential of these methods. | [] | [
"Meta-Learning",
"Meta Reinforcement Learning",
"Multi-Task Learning"
] | [] | [
"MT50",
"ML10"
] | [
"Meta-test success rate",
"Meta-train success rate",
"Average Success Rate"
] | Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning |
The performance of text classification has improved tremendously using
intelligently engineered neural-based models, especially those injecting
categorical metadata as additional information, e.g., using user/product
information for sentiment classification. This information has been used to
modify parts of the model (e.g., word embeddings, attention mechanisms) such
that results can be customized according to the metadata. We observe that
current representation methods for categorical metadata, which are devised for
human consumption, are not as effective as claimed in popular classification
methods, outperformed even by simple concatenation of categorical features in
the final layer of the sentence encoder. We conjecture that categorical
features are harder to represent for machine use, as available context only
indirectly describes the category, and even such context is often scarce (for
tail category). To this end, we propose to use basis vectors to effectively
incorporate categorical metadata on various parts of a neural-based model. This
additionally decreases the number of parameters dramatically, especially when
the number of categorical features is large. Extensive experiments on various
datasets with different properties are performed and show that through our
method, we can represent categorical metadata more effectively to customize
parts of the model, including unexplored ones, and greatly increase the
performance of the model. | [] | [
"Sentiment Analysis",
"Text Classification",
"Word Embeddings"
] | [] | [
"User and product information"
] | [
"Yelp 2013 (Acc)"
] | Categorical Metadata Representation for Customized Text Classification |
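To make the basis-vector idea above concrete, here is a minimal sketch in which each categorical value (e.g., a user or product ID) is represented as a softmax-weighted combination of a small set of shared basis vectors, so the parameter count grows as num_categories x num_bases + num_bases x dim instead of num_categories x dim. The shapes, names, and the softmax weighting are my assumptions for illustration, not the paper's exact parameterization.

import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

num_categories, num_bases, dim = 10000, 16, 256

# Per-category mixing logits (cheap) and a small set of shared basis vectors.
mixing_logits = np.random.randn(num_categories, num_bases)   # 10000 x 16
basis = np.random.randn(num_bases, dim)                       # 16 x 256

def category_embedding(c):
    # Embedding of category c: a convex combination of the shared bases.
    weights = softmax(mixing_logits[c])          # (num_bases,)
    return weights @ basis                       # (dim,)

# Parameter comparison against a full per-category embedding table.
full_table_params = num_categories * dim                      # 2,560,000
basis_params = num_categories * num_bases + num_bases * dim   # 164,096
print(category_embedding(42).shape, full_table_params, basis_params)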
Change detection is a basic task of remote sensing image processing. The research objective is to identify the change information of interest and filter out irrelevant change information as interference factors. Recently, the rise of deep learning has provided new tools for change detection, which have yielded impressive results. However, the available methods focus mainly on the difference information between multitemporal remote sensing images and lack robustness to pseudo-change information. To overcome the lack of robustness of current methods to pseudo-changes, in this paper we propose a new method, namely dual attentive fully convolutional Siamese networks (DASNet), for change detection in high-resolution images. Through the dual-attention mechanism, long-range dependencies are captured to obtain more discriminative feature representations that enhance the recognition performance of the model. Moreover, sample imbalance is a serious problem in change detection, i.e. unchanged samples far outnumber changed samples, and it is one of the main causes of pseudo-changes. We put forward a weighted double-margin contrastive loss to address this problem by penalizing attention to unchanged feature pairs and increasing attention to changed feature pairs. The experimental results of our method on the change detection dataset (CDD) and the building change detection dataset (BCDD) demonstrate that, compared with other baseline methods, the proposed method realizes maximum improvements of 2.1\% and 3.6\%, respectively, in the F1 score. Our PyTorch implementation is available at https://github.com/lehaifeng/DASNet. | [] | [
"Change detection for remote sensing images"
] | [] | [
"CDD Dataset (season-varying)"
] | [
"F1-Score"
] | DASNet: Dual attentive fully convolutional siamese networks for change detection of high resolution satellite images |
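The following is a hedged sketch of what a weighted double-margin contrastive loss of the kind described above could look like: unchanged pixel pairs are penalized only when their feature distance exceeds one margin, changed pairs only when their distance falls below a second margin, and per-class weights counteract the imbalance between unchanged and changed samples. The particular margins, weights, and distance choice are illustrative assumptions, not the paper's published formulation.

import numpy as np

def weighted_double_margin_contrastive(dist, changed, m_unchanged=0.3, m_changed=2.0,
                                        w_unchanged=0.1, w_changed=1.0):
    # dist:    (H, W) array of feature distances between the two temporal images.
    # changed: (H, W) binary array, 1 where the pixel is labeled as changed.
    # Unchanged pairs are pushed inside margin m_unchanged, changed pairs are pushed
    # beyond margin m_changed; the weights down-weight the dominant unchanged class.
    loss_unchanged = w_unchanged * (1 - changed) * np.maximum(dist - m_unchanged, 0.0) ** 2
    loss_changed = w_changed * changed * np.maximum(m_changed - dist, 0.0) ** 2
    return (loss_unchanged + loss_changed).mean()

# Toy usage on a 4x4 "image".
rng = np.random.default_rng(1)
dist = rng.uniform(0.0, 3.0, size=(4, 4))
changed = (rng.uniform(size=(4, 4)) > 0.8).astype(float)
print(weighted_double_margin_contrastive(dist, changed))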
We propose a new way of constructing invertible neural networks by combining simple building blocks with a novel set of composition rules. This leads to a rich set of invertible architectures, including those similar to ResNets. Inversion is achieved with a locally convergent iterative procedure that is parallelizable and very fast in practice. Additionally, the determinant of the Jacobian can be computed analytically and efficiently, enabling their generative use as flow models. To demonstrate their flexibility, we show that our invertible neural networks are competitive with ResNets on MNIST and CIFAR-10 classification. When trained as generative models, our invertible networks achieve competitive likelihoods on MNIST, CIFAR-10 and ImageNet 32x32, with bits per dimension of 0.98, 3.32 and 4.06 respectively. | [] | [
"Image Generation"
] | [] | [
"MNIST",
"ImageNet 32x32",
"CIFAR-10"
] | [
"bits/dimension",
"bpd"
] | MintNet: Building Invertible Neural Networks with Masked Convolutions |
Entity alignment typically suffers from the issues of structural heterogeneity and limited seed alignments. In this paper, we propose a novel Multi-channel Graph Neural Network model (MuGNN) to learn alignment-oriented knowledge graph (KG) embeddings by robustly encoding two KGs via multiple channels. Each channel encodes the KGs with a different relation weighting scheme, using self-attention towards KG completion and cross-KG attention for pruning exclusive entities, respectively; the channels are then combined via pooling techniques. Moreover, we also infer and transfer rule knowledge to complete the two KGs consistently. MuGNN is expected to reconcile the structural differences between the two KGs and thus make better use of seed alignments. Extensive experiments on five publicly available datasets demonstrate our superior performance (an average improvement of 5% in Hits@1). | [] | [
"Entity Alignment"
] | [] | [
"DBP15k zh-en"
] | [
"Hits@1"
] | Multi-Channel Graph Neural Network for Entity Alignment |
BACKGROUND:
Automated single-channel spindle detectors for human sleep EEG are blind to the presence of spindles in other recorded channels, unlike visual annotation by a human expert.
NEW METHOD:
We propose a multichannel spindle detection method that aims to detect global and local spindle activity in human sleep EEG. Using a non-linear signal model, which assumes the input EEG to be the sum of a transient and an oscillatory component, we propose a multichannel transient separation algorithm. Consecutive overlapping blocks of the multichannel oscillatory component are assumed to be low-rank whereas the transient component is assumed to be piecewise constant with a zero baseline. The estimated oscillatory component is used in conjunction with a bandpass filter and the Teager operator for detecting sleep spindles.
RESULTS AND COMPARISON WITH OTHER METHODS:
The proposed method is applied to two publicly available databases and compared with 7 existing single-channel automated detectors. F1 scores for the proposed spindle detection method averaged 0.66 (0.02) and 0.62 (0.06) for the two databases, respectively. For an overnight 6-channel EEG signal, the proposed algorithm takes about 4 minutes to detect sleep spindles simultaneously across all channels with a single setting of the corresponding algorithmic parameters.
CONCLUSIONS:
The proposed method attempts to mimic and utilize, for better spindle detection, a particular human expert behavior where the decision to mark a spindle event may be subconsciously influenced by the presence of a spindle in EEG channels other than the central channel visible on a digital screen. | [] | [
"EEG",
"Spindle Detection"
] | [] | [
"MASS SS2"
] | [
"F1-score (@IoU = 0.3)"
] | Multichannel sleep spindle detection using sparse low-rank optimization |
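To illustrate the kind of signal model sketched in the NEW METHOD section above, the multichannel EEG $Y$ could be split into a transient plus an oscillatory component by solving a convex problem of roughly the following form; the specific regularizers, the block notation $O_b$, and the weights $\lambda_1, \lambda_2$ are assumptions for illustration, not the exact objective used by the authors.

\[
\min_{T,\,O}\ \tfrac{1}{2}\,\|Y - T - O\|_F^2 \;+\; \lambda_1 \|D\,T\|_1 \;+\; \lambda_2 \sum_{b} \|O_b\|_*,
\]

where $D$ is a first-order difference operator promoting a piecewise-constant transient $T$ (a zero baseline could be encouraged with an additional $\|T\|_1$ term), $O_b$ denotes consecutive overlapping blocks of the oscillatory component $O$, and $\|\cdot\|_*$ is the nuclear norm encouraging each block to be low-rank.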
We propose a novel spectral convolutional neural network (CNN) model on graph structured data, namely Distributed Feedback-Looped Networks (DFNets). This model is incorporated with a robust class of spectral graph filters, called feedback-looped filters, to provide better localization on vertices, while still attaining fast convergence and linear memory requirements. Theoretically, feedback-looped filters can guarantee convergence w.r.t. a specified error bound, and be applied universally to any graph without knowing its structure. Furthermore, the propagation rule of this model can diversify features from the preceding layers to produce strong gradient flows. We have evaluated our model using two benchmark tasks: semi-supervised document classification on citation networks and semi-supervised entity classification on a knowledge graph. The experimental results show that our model considerably outperforms the state-of-the-art methods in both benchmark tasks over all datasets. | [] | [
"Document Classification",
"Node Classification"
] | [] | [
"Cora",
"Pubmed",
"Citeseer",
"NELL"
] | [
"Accuracy"
] | DFNets: Spectral CNNs for Graphs with Feedback-Looped Filters |