Columns: abstract (string, 13 to 4.33k chars); field (sequence); task (sequence); method (sequence); dataset (sequence); metric (sequence); title (string, 10 to 194 chars)
Neural networks are typically designed to deal with data in tensor forms. In this paper, we propose a novel neural network architecture accepting graphs of arbitrary structure. Given a dataset containing graphs in the form of (G,y) where G is a graph and y is its class, we aim to develop neural networks that read the graphs directly and learn a classification function. There are two main challenges: 1) how to extract useful features characterizing the rich information encoded in a graph for classification purposes, and 2) how to sequentially read a graph in a meaningful and consistent order. To address the first challenge, we design a localized graph convolution model and show its connection with two graph kernels. To address the second challenge, we design a novel SortPooling layer which sorts graph vertices in a consistent order so that traditional neural networks can be trained on the graphs. Experiments on benchmark graph classification datasets demonstrate that the proposed architecture achieves highly competitive performance with state-of-the-art graph kernels and other graph neural network methods. Moreover, the architecture allows end-to-end gradient-based training with original graphs, without the need to first transform graphs into vectors.
[ "Convolutions" ]
[ "Graph Classification" ]
[ "Convolution" ]
[ "COLLAB", "IMDb-B", "PROTEINS", "D&D", "NCI1", "IMDb-M", "MUTAG", "PTC" ]
[ "Accuracy" ]
An End-to-End Deep Learning Architecture for Graph Classification
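The SortPooling layer above is described only in words; the following minimal PyTorch sketch shows the core idea, assuming vertices are sorted by their last feature channel (which the graph convolution treats as a continuous analogue of a WL color) and the result is truncated or zero-padded to a fixed `k` rows so that conventional 1-D layers can follow. The function name and padding details are illustrative, not the authors' reference implementation.

```python
import torch

def sort_pooling(x: torch.Tensor, k: int) -> torch.Tensor:
    """Sort vertex features into a consistent order and fix the size.

    x: (num_vertices, channels) feature matrix of a single graph.
    Returns a (k, channels) tensor: vertices sorted by their last
    feature channel, truncated or zero-padded to exactly k rows.
    """
    order = torch.argsort(x[:, -1], descending=True)  # consistent vertex order
    x = x[order]
    if x.size(0) >= k:                                # too many vertices: truncate
        return x[:k]
    pad = torch.zeros(k - x.size(0), x.size(1))       # too few: pad with zero rows
    return torch.cat([x, pad], dim=0)
```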
MoCo is effective for unsupervised image representation learning. In this paper, we propose VideoMoCo for unsupervised video representation learning. Given a video sequence as an input sample, we improve the temporal feature representations of MoCo from two perspectives. First, we introduce a generator to temporally drop out several frames from this sample. The discriminator is then trained to encode similar feature representations regardless of frame removals. By adaptively dropping out different frames across training iterations of adversarial learning, we augment this input sample to train a temporally robust encoder. Second, we use temporal decay to model key attenuation in the memory queue when computing the contrastive loss. As the momentum encoder updates after keys are enqueued, the representation ability of these keys degrades when we use the current input sample for contrastive learning. We reflect this degradation via temporal decay so that the input sample attends more to recent keys in the queue. As a result, we adapt MoCo to learn video representations without empirically designing pretext tasks. By empowering the temporal robustness of the encoder and modeling the temporal decay of the keys, our VideoMoCo improves MoCo temporally based on contrastive learning. Experiments on benchmark datasets including UCF101 and HMDB51 show that VideoMoCo stands as a state-of-the-art video representation learning method.
[ "Self-Supervised Learning", "Loss Functions", "Normalization" ]
[ "Action Recognition", "Representation Learning" ]
[ "MoCo", "Momentum Contrast", "InfoNCE", "Batch Normalization" ]
[ "UCF101", "HMDB-51" ]
[ "Average accuracy of 3 splits", "3-fold Accuracy" ]
VideoMoCo: Contrastive Video Representation Learning with Temporally Adversarial Examples
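To make the temporal-decay idea concrete, here is a hedged sketch of a MoCo-style InfoNCE loss in which older keys in the memory queue are attenuated before the softmax. The exponential schedule `decay ** age` and the hyperparameter values are illustrative assumptions; the paper's exact parameterization of the decay may differ.

```python
import torch
import torch.nn.functional as F

def decayed_info_nce(q, k_pos, queue, decay=0.99, tau=0.07):
    """MoCo-style contrastive loss with temporally decayed negative keys.

    q:     (N, D) query features from the current encoder.
    k_pos: (N, D) positive keys from the momentum encoder.
    queue: (K, D) negative keys; row 0 is assumed to be the most recent.
    """
    l_pos = (q * k_pos).sum(dim=1, keepdim=True)        # (N, 1) positive logits
    l_neg = q @ queue.t()                               # (N, K) negative logits
    age = torch.arange(queue.size(0), dtype=q.dtype)    # time each key sat in the queue
    l_neg = l_neg * decay ** age                        # attenuate stale keys
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long)   # positive is class 0
    return F.cross_entropy(logits, labels)
```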
Learning socially-aware motion representations is at the core of recent advances in human trajectory forecasting and robot navigation in crowded spaces. Yet existing methods often struggle to generalize to challenging scenarios and even output unacceptable solutions (e.g., collisions). In this work, we propose to address this issue via contrastive learning. Concretely, we introduce a social contrastive loss that encourages the encoded motion representation to preserve sufficient information for distinguishing a positive future event from a set of negative ones. We explicitly draw these negative samples based on our domain knowledge about socially unfavorable scenarios in the multi-agent context. Experimental results show that the proposed method consistently boosts the performance of previous trajectory forecasting, behavioral cloning, and reinforcement learning algorithms in various settings. Our method makes few assumptions about neural architecture designs, and hence can be used as a generic way to incorporate negative data augmentation into motion representation learning.
[ "Loss Functions" ]
[ "Autonomous Driving", "Autonomous Navigation", "Trajectory Forecasting", "Trajectory Prediction" ]
[ "InfoNCE" ]
[ "TrajNet++" ]
[ "COL", "FDE" ]
Social NCE: Contrastive Learning of Socially-aware Motion Representations
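The social contrastive loss is an InfoNCE objective whose negatives are drawn from socially unfavorable events. A minimal sketch, assuming the positive and negative events are already embedded into the same space as the motion representation (the sampling of negatives from, e.g., neighboring agents' positions is left outside the function):

```python
import torch
import torch.nn.functional as F

def social_nce_loss(h, pos, negs, tau=0.1):
    """InfoNCE with one positive future event and M negative events.

    h:    (N, D) encoded motion representations.
    pos:  (N, D) embedding of the ground-truth future event.
    negs: (N, M, D) embeddings of socially unfavorable events,
          e.g. locations that would collide with nearby agents.
    """
    l_pos = (h * pos).sum(dim=-1, keepdim=True)        # (N, 1)
    l_neg = torch.einsum('nd,nmd->nm', h, negs)        # (N, M)
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    labels = torch.zeros(h.size(0), dtype=torch.long)  # positive sits at index 0
    return F.cross_entropy(logits, labels)
```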
Deep learning based single image super-resolution methods train on large datasets and have recently achieved great quality progress, both quantitatively and qualitatively. Most deep networks focus on nonlinear mapping from low-resolution inputs to high-resolution outputs via residual learning, without exploring feature abstraction and analysis. We propose a Hierarchical Back Projection Network (HBPN) that cascades multiple HourGlass (HG) modules to process features bottom-up and top-down across all scales, capturing various spatial correlations and then consolidating the best representation for reconstruction. We adopt back projection blocks in our network to provide an error-correlated up- and down-sampling process, replacing simple deconvolution and pooling for better estimation. A new Softmax based Weighted Reconstruction (WR) process combines the outputs of the HG modules to further improve super-resolution. Experimental results on various datasets (including the NTIRE2019 validation dataset of the Real Image Super-resolution Challenge) show that our approach matches and improves upon the performance of state-of-the-art methods across different scaling factors.
[ "Output Functions" ]
[ "Image Super-Resolution", "Super-Resolution" ]
[ "Softmax" ]
[ "Set14 - 2x upscaling", "Set14 - 4x upscaling", "Manga109 - 8x upscaling", "BSD100 - 2x upscaling", "Manga109 - 4x upscaling", "Urban100 - 2x upscaling", "BSD100 - 4x upscaling", "Manga109 - 2x upscaling", "Set5 - 4x upscaling", "Set14 - 8x upscaling", "Urban100 - 8x upscaling", "Set5 - 8x upscaling", "BSD100 - 8x upscaling", "Set5 - 2x upscaling", "Urban100 - 4x upscaling" ]
[ "SSIM", "PSNR" ]
Hierarchical Back Projection Network for Image Super-Resolution
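The back projection blocks the abstract adopts follow the general error-feedback pattern of DBPN-style units: upsample, project back down, and correct the estimate with the low-resolution error. The sketch below shows that generic pattern, not HBPN's exact layer configuration; the kernel sizing per scale is an assumption.

```python
import torch.nn as nn

class UpBackProjection(nn.Module):
    """Generic up-projection unit with error feedback (a sketch)."""

    def __init__(self, channels: int, scale: int = 2):
        super().__init__()
        k, s, p = 2 * scale, scale, scale // 2      # assumed kernel/stride/padding
        self.up1 = nn.ConvTranspose2d(channels, channels, k, s, p)
        self.down = nn.Conv2d(channels, channels, k, s, p)
        self.up2 = nn.ConvTranspose2d(channels, channels, k, s, p)

    def forward(self, lr):
        hr0 = self.up1(lr)          # initial high-resolution estimate
        err = self.down(hr0) - lr   # projection error in low-resolution space
        return hr0 + self.up2(err)  # feed the error back to refine the estimate
```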
We design a new connectivity pattern for the U-Net architecture. Given several stacked U-Nets, we couple each U-Net pair through connections between their semantic blocks, resulting in coupled U-Nets (CU-Net). The coupling connections make information flow more efficiently across U-Nets, and the feature reuse across U-Nets makes each U-Net very parameter efficient. We evaluate the coupled U-Nets on two benchmark datasets of human pose estimation, comparing both accuracy and model parameter count. The CU-Net obtains accuracy comparable to state-of-the-art methods, yet has at least 60% fewer parameters than other approaches.
[ "Semantic Segmentation Models", "Activation Functions", "Convolutions", "Pooling Operations", "Skip Connections" ]
[ "Pose Estimation" ]
[ "U-Net", "Concatenated Skip Connection", "Convolution", "ReLU", "Rectified Linear Units", "Max Pooling" ]
[ "MPII Human Pose" ]
[ "PCKh-0.5" ]
CU-Net: Coupled U-Nets
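The coupling idea is that a semantic block in one U-Net also consumes the same-scale features of earlier U-Nets, so features are reused rather than re-learned. A minimal sketch, assuming concatenation-based coupling and a plain convolutional block body (the paper's block internals are not reproduced here):

```python
import torch
import torch.nn as nn

class CoupledBlock(nn.Module):
    """A semantic block that ingests same-scale features from earlier U-Nets."""

    def __init__(self, in_ch: int, prev_ch: int, out_ch: int):
        super().__init__()
        # prev_ch is the total channel count of the reused feature maps.
        self.body = nn.Sequential(
            nn.Conv2d(in_ch + prev_ch, out_ch, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, prev_feats):
        # prev_feats: same-resolution feature maps from earlier U-Nets;
        # reusing them is what keeps each individual U-Net parameter-light.
        return self.body(torch.cat([x, *prev_feats], dim=1))
```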
Machine reading comprehension (MRC) is an AI challenge that requires a machine to determine the correct answers to questions based on a given passage. MRC systems must not only answer questions when necessary but also distinguish when no answer is available according to the given passage and then tactfully abstain from answering. When unanswerable questions are involved in the MRC task, an essential verification module called a verifier is required in addition to the encoder, even though the latest practice in MRC modeling still benefits most from adopting well pre-trained language models as the encoder block, focusing only on the "reading". This paper devotes itself to exploring better verifier design for the MRC task with unanswerable questions. Inspired by how humans solve reading comprehension questions, we propose a retrospective reader (Retro-Reader) that integrates two stages of reading and verification strategies: 1) sketchy reading that briefly investigates the overall interactions of passage and question, and yields an initial judgment; 2) intensive reading that verifies the answer and gives the final prediction. The proposed reader is evaluated on two benchmark MRC challenge datasets, SQuAD2.0 and NewsQA, achieving new state-of-the-art results. Significance tests show that our model is significantly better than the strong ELECTRA and ALBERT baselines. A series of analyses is also conducted to interpret the effectiveness of the proposed reader.
[ "Attention Modules", "Output Functions", "Stochastic Optimization", "Activation Functions", "Subword Segmentation", "Normalization", "Large Batch Optimization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections" ]
[ "Machine Reading Comprehension", "Reading Comprehension" ]
[ "ALBERT", "WordPiece", "Layer Normalization", "Softmax", "Adam", "Multi-Head Attention", "Residual Connection", "Scaled Dot-Product Attention", "GELU", "LAMB", "Dense Connections", "Gaussian Linear Error Units" ]
[ "SQuAD2.0" ]
[ "EM", "F1" ]
Retrospective Reader for Machine Reading Comprehension
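The final answer/abstain decision fuses signals from both reading stages. The sketch below is an illustrative guess at such a fusion step, combining an answerability score from the sketchy reader with the span-versus-null margin from the intensive reader via a weighted sum and threshold; the weights and threshold here are placeholders, not the paper's tuned values.

```python
def rear_verification(score_has_ans, score_na, span_score, null_score,
                      beta1=0.5, beta2=0.5, threshold=0.0):
    """Fuse sketchy- and intensive-reading scores into a final decision.

    score_has_ans / score_na: answerability logits from the sketchy reader.
    span_score / null_score:  best-span and no-answer scores from the
                              intensive reader.
    """
    external = score_na - score_has_ans   # > 0 favors "no answer"
    internal = null_score - span_score    # > 0 favors "no answer"
    verdict = beta1 * external + beta2 * internal
    return "abstain" if verdict > threshold else "answer"
```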
Single image super-resolution (SISR) has witnessed great progress as convolutional neural networks (CNNs) get deeper and wider. However, enormous parameter counts hinder their application to real-world problems. In this letter, we propose a lightweight feature fusion network (LFFN) that can fully explore multi-scale contextual information and greatly reduce network parameters while maximizing SISR results. LFFN is built on spindle blocks and a softmax feature fusion module (SFFM). Specifically, a spindle block is composed of a dimension extension unit, a feature exploration unit and a feature refinement unit. The dimension extension layer expands low dimensions to high dimensions and implicitly learns feature maps suitable for the next unit. The feature exploration unit performs linear and nonlinear feature exploration aimed at different feature maps. The feature refinement layer is used to fuse and refine features. SFFM fuses the features from different modules in a self-adaptive learning manner with the softmax function, making full use of hierarchical information at a small parameter cost. Both qualitative and quantitative experiments on benchmark datasets show that LFFN achieves favorable performance against state-of-the-art methods with similar parameter counts.
[ "Output Functions" ]
[ "Image Super-Resolution", "Super-Resolution" ]
[ "Softmax" ]
[ "Set5 - 3x upscaling", "Manga109 - 3x upscaling", "BSD100 - 2x upscaling", "Manga109 - 4x upscaling", "BSD100 - 3x upscaling", "BSD100 - 4x upscaling", "Manga109 - 2x upscaling", "Set5 - 4x upscaling", "Set5 - 2x upscaling" ]
[ "SSIM", "PSNR" ]
Lightweight Feature Fusion Network for Single Image Super-Resolution
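Since SFFM is the abstract's key low-cost component, here is a minimal sketch of softmax-based feature fusion: one learnable scalar per branch, normalized with softmax so the fused output is a convex combination of the module outputs. The scalar-per-branch granularity is an assumption for illustration.

```python
import torch
import torch.nn as nn

class SoftmaxFeatureFusion(nn.Module):
    """Fuse same-shaped feature maps with softmax-normalized learned weights."""

    def __init__(self, num_branches: int):
        super().__init__()
        # One scalar per branch: a tiny parameter cost, as the abstract notes.
        self.logits = nn.Parameter(torch.zeros(num_branches))

    def forward(self, feats):
        w = torch.softmax(self.logits, dim=0)   # positive weights summing to 1
        return sum(wi * f for wi, f in zip(w, feats))
```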
Neural networks are often over-parameterized and hence benefit from aggressive regularization. Conventional regularization methods, such as Dropout or weight decay, do not leverage the structures of the network's inputs and hidden states. As a result, they are less effective than methods that do, such as SpatialDropout and DropBlock, which randomly drop the values in contiguous areas of the hidden states, setting them to zero. Although the locations of the dropout areas are random, the patterns of SpatialDropout and DropBlock are manually designed and fixed. Here we propose to learn the dropout patterns. In our method, a controller learns to generate a dropout pattern at every channel and layer of a target network, such as a ConvNet or a Transformer. The target network is then trained with the dropout pattern, and its resulting validation performance is used as a signal for the controller to learn from. We show that this method works well for image recognition on CIFAR-10 and ImageNet, as well as for language modeling on Penn Treebank and WikiText-2. The learned dropout patterns also transfer to different tasks and datasets, such as from language modeling on Penn Treebank to English-French translation on WMT 2014. Our code will be available.
[ "Regularization", "Output Functions", "Attention Modules", "Stochastic Optimization", "Subword Segmentation", "Normalization", "Feedforward Networks", "Transformers", "Attention Mechanisms", "Skip Connections" ]
[ "Image Classification", "Language Modelling", "Machine Translation" ]
[ "SpatialDropout", "DropBlock", "Layer Normalization", "Byte Pair Encoding", "BPE", "Softmax", "Adam", "Transformer", "Multi-Head Attention", "Residual Connection", "Label Smoothing", "Scaled Dot-Product Attention", "Dropout", "Dense Connections" ]
[ "cifar-10,4000", "CIFAR-10", "Penn Treebank (Word Level)", "WMT2014 English-French", "ImageNet-10", "ImageNet", "IWSLT2014 German-English" ]
[ "Top 1 Accuracy", "Percentage error", "Percentage correct", "Validation perplexity", "Test perplexity", "BLEU score" ]
AutoDropout: Learning Dropout Patterns to Regularize Deep Networks
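The controller's search procedure is beyond a short sketch, but applying a generated pattern to a layer is simple: multiply by the structured binary mask and rescale, as in standard dropout. The rescaling convention here is an assumption; the mask itself is what AutoDropout learns to generate.

```python
import torch

def apply_dropout_pattern(x: torch.Tensor, pattern: torch.Tensor,
                          keep_prob: float) -> torch.Tensor:
    """Apply a controller-generated structured dropout mask at training time.

    x:       (N, C, H, W) activations of one layer.
    pattern: (C, H, W) binary mask proposed by the controller for this
             channel/layer; generating it well is the learned part.
    """
    x = x * pattern          # zero out the structured regions
    return x / keep_prob     # rescale so expected activations are unchanged
```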
This report describes the entry by the Intelligent Knowledge Management (IKM) Lab in the WSDM 2019 Fake News Classification challenge. We treat the task as natural language inference (NLI). We individually train a number of the strongest NLI models as well as BERT. We ensemble these results and retrain with noisy labels in two stages. We analyze transitivity relations in the train and test sets and determine a set of test cases that can be reliably classified on this basis. The remainder of test cases are classified by our ensemble. Our entry achieves test set accuracy of 88.063% for 3rd place in the competition.
[ "Output Functions", "Regularization", "Stochastic Optimization", "Attention Modules", "Learning Rate Schedules", "Activation Functions", "Normalization", "Subword Segmentation", "Language Models", "Feedforward Networks", "Attention Mechanisms", "Skip Connections" ]
[ "Fake News Detection", "Natural Language Inference", "News Classification" ]
[ "Weight Decay", "Layer Normalization", "WordPiece", "Softmax", "Adam", "Multi-Head Attention", "Attention Dropout", "Linear Warmup With Linear Decay", "Residual Connection", "Scaled Dot-Product Attention", "Dropout", "BERT", "GELU", "Dense Connections", "Gaussian Linear Error Units" ]
[ "COCO Captions" ]
[ "BLEU-2" ]
Fake News Detection as Natural Language Inference
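The transitivity analysis rests on a simple observation: if title A agrees with B and B agrees with C, then the pair (A, C) can be labeled "agreed" without consulting the model. A hedged sketch using union-find over "agreed" edges (propagation rules involving the challenge's other labels are omitted here):

```python
def propagate_agreed(train_pairs, test_pairs):
    """Label test pairs whose titles fall in the same agreement component.

    train_pairs: iterable of (a, b, label) triples from the training set.
    test_pairs:  iterable of (a, b) pairs to classify; pairs not resolved
                 here are left to the model ensemble.
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:          # path halving
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b, label in train_pairs:
        if label == "agreed":
            parent[find(a)] = find(b)  # merge the two agreement components

    return {(a, b): "agreed" for a, b in test_pairs if find(a) == find(b)}
```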
We propose a spherical kernel for efficient graph convolution of 3D point clouds. Our metric-based kernels systematically quantize the local 3D space to identify distinctive geometric relationships in the data. Similar to the regular grid CNN kernels, the spherical kernel maintains translation-invariance and asymmetry properties, where the former guarantees weight sharing among similar local structures in the data and the latter facilitates fine geometric learning. The proposed kernel is applied to graph neural networks without edge-dependent filter generation, making it computationally attractive for large point clouds. In our graph networks, each vertex is associated with a single point location and edges connect the neighborhood points within a defined range. The graph gets coarsened in the network with farthest point sampling. Analogous to the standard CNNs, we define pooling and unpooling operations for our network. We demonstrate the effectiveness of the proposed spherical kernel with graph neural networks for point cloud classification and semantic segmentation using ModelNet, ShapeNet, RueMonge2014, ScanNet and S3DIS datasets. The source code and the trained models can be downloaded from https://github.com/hlei-ziyan/SPH3D-GCN.
[ "Convolutions" ]
[ "3D Instance Segmentation", "3D Object Classification", "3D Part Segmentation", "Semantic Segmentation" ]
[ "Convolution" ]
[ "ShapeNet-Part", "ModelNet40" ]
[ "Classification Accuracy", "Class Average IoU", "Instance Average IoU" ]
Spherical Kernel for Efficient Graph Convolution on 3D Point Clouds
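The kernel's systematic quantization of the local 3D space amounts to binning each neighbor's offset by azimuth, elevation, and radius, then applying one learned weight matrix per bin. The sketch below shows the binning step only; the bin counts and radius bound are illustrative choices, not the paper's configuration.

```python
import numpy as np

def spherical_bin(offsets, n_azimuth=8, n_elevation=2, n_radius=2, max_r=1.0):
    """Map neighbor offsets (M, 3), relative to a center point, to bin indices."""
    x, y, z = offsets[:, 0], offsets[:, 1], offsets[:, 2]
    r = np.linalg.norm(offsets, axis=1)
    azimuth = np.arctan2(y, x) + np.pi                               # [0, 2*pi]
    elevation = np.arccos(np.clip(z / np.maximum(r, 1e-9), -1, 1))   # [0, pi]
    a = np.minimum((azimuth / (2 * np.pi) * n_azimuth).astype(int), n_azimuth - 1)
    e = np.minimum((elevation / np.pi * n_elevation).astype(int), n_elevation - 1)
    d = np.minimum((r / max_r * n_radius).astype(int), n_radius - 1)
    return (a * n_elevation + e) * n_radius + d                      # flat bin index
```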
In this paper, we introduce the task of targeted aspect-based sentiment analysis. The goal is to extract fine-grained information with respect to entities mentioned in user comments. This work extends both aspect-based sentiment analysis, which assumes a single entity per document, and targeted sentiment analysis, which assumes a single sentiment towards a target entity. In particular, we identify the sentiment towards each aspect of one or more entities. As a testbed for this task, we introduce the SentiHood dataset, extracted from a question answering (QA) platform where urban neighbourhoods are discussed by users. In this context, units of text often mention several aspects of one or more neighbourhoods. This is the first time that a generic social media platform, in this case a QA platform, is used for fine-grained opinion mining. Text from QA platforms is far less constrained than text from the review-specific platforms that current datasets are based on. We develop several strong baselines, relying on logistic regression and state-of-the-art recurrent neural networks.
[ "Generalized Linear Models" ]
[ "Aspect-Based Sentiment Analysis", "Opinion Mining", "Question Answering", "Regression", "Sentiment Analysis" ]
[ "Logistic Regression" ]
[ "Sentihood" ]
[ "Aspect", "Sentiment" ]
SentiHood: Targeted Aspect Based Sentiment Analysis Dataset for Urban Neighbourhoods
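As a concrete picture of the logistic regression baseline, here is a minimal scikit-learn sketch for one (target, aspect) slot, e.g. the sentiment toward "LOCATION1 / safety". The n-gram features are an assumption for illustration, not the paper's exact feature set.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Classify the sentiment (Positive / Negative / None) expressed toward a
# fixed target-aspect pair from the sentence text.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # assumed word n-gram features
    LogisticRegression(max_iter=1000),
)
# texts: sentences mentioning the (masked) target; labels: sentiment classes
# model.fit(texts, labels)
```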
We propose $\textit{Mish}$, a novel self-regularized non-monotonic activation function defined as $f(x) = x\tanh(\mathrm{softplus}(x))$. As activation functions play a crucial role in the performance and training dynamics of neural networks, we validate Mish experimentally on several well-known benchmarks against the best combinations of architectures and activation functions. We also observe that data augmentation techniques have a favorable effect on benchmarks like ImageNet-1k and MS-COCO across multiple architectures. For example, Mish outperforms Leaky ReLU on YOLOv4 with a CSP-DarkNet-53 backbone by 2.1$\%$ average precision ($AP_{50}^{val}$) in MS-COCO object detection, and outperforms ReLU on ResNet-50 by $\approx$1$\%$ Top-1 accuracy on ImageNet-1k, while keeping all other network parameters and hyperparameters constant. Furthermore, we explore the mathematical formulation of Mish in relation to the Swish family of functions and propose an intuitive understanding of how its first-derivative behavior may act as a regularizer helping the optimization of deep neural networks. Code is publicly available at https://github.com/digantamisra98/Mish.
[ "Image Data Augmentation", "Initialization", "Output Functions", "Convolutional Neural Networks", "Learning Rate Schedules", "Regularization", "Stochastic Optimization", "Activation Functions", "Normalization", "Convolutions", "Feedforward Networks", "Pooling Operations", "Skip Connection Blocks", "Skip Connections", "Image Model Blocks", "Image Models", "Miscellaneous Components" ]
[ "Image Classification", "Object Detection" ]
[ "Depthwise Convolution", "Weight Decay", "Cosine Annealing", "Average Pooling", "Channel Shuffle", "Residual Block", "ShuffleNet V2 Block", "Mixup", "Tanh Activation", "1x1 Convolution", "Softplus", "ResNet", "SqueezeNet", "Mish", "Convolution", "SimpleNet", "ReLU", "Residual Connection", "WideResNet", "Wide Residual Block", "Leaky ReLU", "Dense Connections", "Max Pooling", "MobileNetV1", "Dense Block", "Swish", "Grouped Convolution", "Xavier Initialization", "Batch Normalization", "Residual Network", "ShuffleNet v2", "L1 Regularization", "Pointwise Convolution", "Squeeze-and-Excitation Block", "Kaiming Initialization", "Sigmoid Activation", "ResNeXt Block", "ResNeXt", "Softmax", "Concatenated Skip Connection", "Xception", "Bottleneck Residual Block", "DenseNet", "Depthwise Separable Convolution", "Dropout", "NADAM", "Fire Module", "Global Average Pooling", "Rectified Linear Units", "ShuffleNet V2 Downsampling Block" ]
[ "ImageNet", "COCO test-dev", "CIFAR-100", "CIFAR-10" ]
[ "APM", "Top 1 Accuracy", "Percentage correct", "box AP", "AP75", "APS", "APL", "AP50", "Top 5 Accuracy" ]
Mish: A Self Regularized Non-Monotonic Activation Function
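The definition in the abstract translates directly into code; this one-liner follows the stated formula $f(x) = x\tanh(\mathrm{softplus}(x))$ exactly (recent PyTorch versions also ship a built-in `torch.nn.Mish`).

```python
import torch
import torch.nn.functional as F

def mish(x: torch.Tensor) -> torch.Tensor:
    """Mish activation: f(x) = x * tanh(softplus(x))."""
    return x * torch.tanh(F.softplus(x))
```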