text (string, length 82–736) | label (int64: 0 or 1) |
---|---|
we present a general framework for comparing multiple groups of documents---in this work , we present a general framework to perform such comparisons | 1 |
the characters themselves are often composed of subcharacter components which are also semantically informative---characters are often composed of subcharacter components which are also semantically informative | 1 |
through comparative experiments , we show that emotion recognition can be performed using either textual or musical features , and that the joint use of lyrics and music can improve significantly over classifiers that use only one dimension at a time---on the dataset of 100 songs , we showed that emotion recognition can be performed using either textual or musical features , and that the joint use of lyrics and music can improve significantly over classifiers that use only one dimension at a time | 1 |
for our logistic regression classifier we use the implementation included in the scikit-learn toolkit 2---we use the logistic regression classifier in the skll package , which is based on scikit-learn , optimizing for f 1 score | 1 |
triviaqa , which has wikipedia entities as answers , makes it possible to leverage structured kbs like freebase , which we leave to future work---as answers , makes it possible to leverage structured kbs like freebase , which we leave to future work | 1 |
for the contextual polarity disambiguation subtask , we described a very efficient and robust method based on a sentiment lexicon associated with a polarity shift detector and a tree based classification---for the contextual polarity disambiguation subtask , covered in section 2 , we use a system that combines a lexicon based approach to sentiment detection with two types of supervised learning methods , one used for polarity shift identification | 1 |
the target fourgram language model was built with the english part of training data using the sri language modeling toolkit---we presented the first neural network based shift-reduce parsers for ccg | 0 |
we implement the pbsmt system with the moses toolkit---we use an in-house implementation of a pbsmt system similar to moses | 1 |
we discuss an interactive approach to robust interpretation in a large scale speech-to-speech translation system---we discuss rose , an interactive approach to robust interpretation developed in the context of the janus speech-to-speech translation system | 1 |
we trained a standard 5-gram language model with modified kneser-ney smoothing using the kenlm toolkit on 4 billion running words---for tagging , we use the stanford pos tagger package | 0 |
chang and teng extends the work in chang and lai to automatically extract the relations between full-form phrases and their abbreviations , where both the full-form phrase and its abbreviation are not given---chang and teng extends the work in chang and lai to automatically extract the relations between full-form phrases and their abbreviations | 1 |
in the second step , we propose a relational adaptive bootstrapping ( rap ) algorithm to expand the seeds in the target domain---in the second step , we propose a novel relational adaptive bootstrapping ( rap ) algorithm to expand the seeds in the target domain by exploiting the labeled source domain | 1 |
coreference resolution is the problem of identifying which mentions ( i.e. , noun phrases ) refer to which real-world entities---coreference resolution is the task of determining which mentions in a text refer to the same entity | 1 |
( albrecht and hwa , 2007 ) presented a regression based method for developing automatic evaluation metrics for machine translation systems without directly relying on human reference translations---albrecht and hwa proposed a method to evaluate mt outputs with pseudo references using support vector regression as the learner to evaluate translations | 1 |
we used the svm implementation provided within scikit-learn---we used the svm implementation of scikit learn | 1 |
we used a phrase-based smt model as implemented in the moses toolkit---we use the moses software package 5 to train a pbmt model | 1 |
continuous-valued vector representation of words has been one of the key components in neural architectures for natural language processing---one of the most useful neural network techniques for nlp is the word embedding , which learns vector representations of words | 1 |
ambiguity is the task of building up multiple alternative linguistic structures for a single input ( cite-p-13-1-8 )---we build a model of all unigrams and bigrams in the gigaword corpus using the c-mphr method , srilm , irstlm , and randlm 3 toolkits | 0 |
relation extraction is a traditional information extraction task which aims at detecting and classifying semantic relations between entities in text ( cite-p-10-1-18 )---relation extraction is the task of finding semantic relations between entities from text | 1 |
in 2003 , bengio et al proposed a neural network architecture to train language models which produced word embeddings in the neural network---introduced by bengio et al , the authors proposed a statistical language model based on shallow neural networks | 1 |
we use logistic regression with l2 regularization , implemented using the scikit-learn toolkit---we train and evaluate a l2-regularized logistic regression classifier with the liblinear solver as implemented in scikit-learn | 1 |
sentence compression is the task of producing a summary at the sentence level---in section 4 , we show that this result still holds for multimodal ccg | 0 |
in this run , we use a sentence vector derived from word embeddings obtained from word2vec---named entity disambiguation ( ned ) is the task of resolving ambiguous mentions of entities to their referent entities in a knowledge base ( kb ) ( e.g. , wikipedia ) | 0 |
we adapted the moses phrase-based decoder to translate word lattices---the language model is trained and applied with the srilm toolkit | 0 |
stance detection has been defined as automatically detecting whether the author of a piece of text is in favor of the given target or against it---we propose a reinforcement learning based approach that integrates target information and generates target-specific tree structures | 0 |
moreover , as we will show in experiment section , a preprocessing method does not work well when only source information is available---however , as we will show below , existing smt systems do not deal well with the measure word generation in general due to data | 1 |
our baseline system is phrase-based moses with feature weights trained using mert---we employ widely used and standard machine translation tool moses to train the phrase-based smt system | 1 |
"socher et al used recursive neural networks to model sentences for different tasks , including paraphrase detection and sentence classification---socher et al utilized parsing to model the hierarchical structure of sentences and uses unfolding recursive autoencoders to learn representations for single words and phrases acting as nonleaf nodes in the tree | 1 |
krause generalized the work by khuller et al on budgeted maximum cover problem to the submodular framework , and showed a 1/2-approximation algorithm---khuller et al studied the maximum coverage problem with a knapsack constraint , and proved that the greedy algorithm achieves -approximation | 1 |
melamud et al use word embeddings generated using the word2vec skip-gram model---in 2013 , mikolov et al generated phrase representation using the same method used for word representation in word2vec | 1 |
syntactic analysis for syntactic features , we trained an arabic dependency parser using maltparser on the columbia arabic treebank version of the patb ,---to avoid this problem , tromble et al propose linear bleu , an approximation to the bleu score to efficiently perform mbr decoding when the search space is represented with lattices | 0 |
wordnet is a large semantic lexicon database of english words , where nouns , verbs , adjectives and adverbs are grouped into sets of cognitive synonyms---wordnet is a large lexical database of english , where open class words are grouped into concepts represented by synonyms that are linked to each other by semantic relations such as hyponymy and meronymy | 1 |
our model builds on word2vec , a neural network based language model that learns word embeddings by maximizing the probability of raw text---we use a popular word2vec neural language model to learn the word embeddings on an unsupervised tweet corpus | 1 |
relation extraction is the task of finding semantic relations between two entities from text---relation extraction is the task of finding relationships between two entities from text | 1 |
to speed up training using parallel processing , we use the iterative parameter mixing approach of mcdonald et al , where training data are split into several parts and weight updates are averaged after each pass through the training data---we furthermore use the distributed learning technique of iterative parameter mixing , where multiple models on several shards of the training data are trained in parallel and parameters are averaged after each epoch | 1 |
levin has in fact proposed a well-known classification of verbs based on their range of syntactic alternations---levin provides a classification of over 3000 verbs according to their participation in alternations involving np and pp constituents | 1 |
phrase-based translation systems prove to be the state-of-the-art as they have delivered translation performance in recent machine translation evaluations---in recent years , phrase-based systems for statistical machine translation have delivered state-of-the-art performance on standard translation tasks | 1 |
eurowordnet is a multilingual semantic lexicon with wordnets for several european languages , which are structured as the princeton wordnet---eurowordnet is a multilingual lexical knowledge base comprised of hierarchical representations of lexical items for several european languages | 1 |
we use a set of 318 english function words from the scikit-learn package---we use the svm implementation from scikit-learn , which in turn is based on libsvm | 0 |
aw et al and kaufmann and kalita consider normalisation as a machine translation task from lexical variants to standard forms using off-the-shelf tools---aw et al , kobus et al viewed the text message normalization as a statistical machine translation process from the texting language to standard english | 1 |
a pun is a means of expression , the essence of which is in the given context the word or phrase can be understood in two meanings simultaneously ( cite-p-22-3-7 )---a pun is a word used in a context to evoke two or more distinct senses for humorous effect | 1 |
we use minimal error rate training to maximize bleu on the complete development data---we used minimum error rate training to optimize the feature weights | 1 |
we use 5-grams for all language models implemented using the srilm toolkit---we train trigram language models on the training set using the sri language modeling toolkit | 1 |
pitler and nenkova show that the entity transition features extracted from the entity grid model on its own do not significantly predict human readability ratings---human-annotated image and video descriptions allow us to investigate what types of verb–noun relations are in principle present in the visual data | 0 |
each sentence in the documents is firstly assigned a salience score---a salience score is computed for each phrase by exploiting redundancy of the document content | 1 |
semantic role labeling ( srl ) is a task of automatically identifying semantic relations between predicate and its related arguments in the sentence---semantic role labeling ( srl ) is the task of identifying the arguments of lexical predicates in a sentence and labeling them with semantic roles ( cite-p-13-3-3 , cite-p-13-3-11 ) | 1 |
relation extraction is the task of detecting and characterizing semantic relations between entities from free text---the language model is trained on the target side of the parallel training corpus using srilm | 0 |
other work extracts hypernym relations from encyclopedias but has limited coverage---other work tries to extract hypernym relations from large-scale encyclopedias like wikipedia and achieves high precision | 1 |
word sense disambiguation ( wsd ) is the problem of assigning a sense to an ambiguous word , using its context---in natural language , a word often assumes different meanings , and the task of determining the correct meaning , or sense , of a word in different contexts is known as word sense disambiguation ( wsd ) | 1 |
we presented an approach of using esa for sentiment classification---in this work , we investigated the use of esa for the given task of sentiment analysis | 1 |
our machine translation system is a phrase-based system using the moses toolkit---we use the moses toolkit to train various statistical machine translation systems | 1 |
coreference resolution is a challenging task , that involves identification and clustering of noun phrases mentions that refer to the same real-world entity---coreference resolution is the process of linking together multiple expressions of a given entity | 1 |
we performed paired bootstrap sampling to test the significance in bleu score differences---we performed significance testing using paired bootstrap resampling | 1 |
this sampler sidesteps the intractability issues of previous models which required inference over derivation forests---sampler over synchronous derivation trees can efficiently draw samples from the posterior , overcoming the limitations of previous models | 1 |
the optimisation of the feature weights of the model is done with minimum error rate training against the bleu evaluation metric---the feature weights for the log-linear combination of the features are tuned using minimum error rate training on the devset in terms of bleu | 1 |
both files are concatenated and learned by word2vec---the word embeddings are word2vec of dimension 300 pre-trained on google news | 1 |
for capturing the semantics of words , we again derive features from the pre-trained fasttext word vectors---to alleviate issues with out-of-vocabulary words , we use both character-and subwordbased word embeddings computed with fasttext | 1 |
we then perform training by using an expectation-maximization algorithm that iteratively maximizes f to reach a local optimal solution---for training the trigger-based lexicon model , we apply the expectation-maximization algorithm | 1 |
we used weka to experiment with several classifiers---an lm is trained on 462 million words in english using the srilm toolkit | 0 |
the mt performance is measured with the widely adopted bleu and ter metrics---semantic role labeling ( srl ) is the task of identifying semantic arguments of predicates in text | 0 |
we use the word2vec framework in the gensim implementation to generate the embedding spaces---we pre-train the word embedding via word2vec on the whole dataset | 1 |
the language models were created using the srilm toolkit on the standard training sections of the ccgbank , with sentence-initial words uncapitalized---the language models are 4-grams with modified kneser-ney smoothing which have been trained with the srilm toolkit | 1 |
we selected target verbs by choosing classes from levin that are expected to undergo the causative alternation---afterwards , user and product information is considered via attentions over different semantic levels | 0 |
supervised systems based on neural networks achieve the most promising results---exploiting neural networks on unlabeled corpora achieve promising results , surpassing this hard baseline | 1 |
the multi-label setting is common and useful in the real world---multi-label text categorization is a common and useful | 1 |
our cdsm feature is based on word vectors derived using a skip-gram model---as embedding vectors , we used the publicly available representations obtained from the word2vec cbow model | 1 |
first , we examine three subproblems that play a role in coreference resolution : named entity recognition , anaphoricity determination , and coreference element detection---our submission to the english-french task was a phrase-based statistical machine translation based on the moses decoder | 0 |
word embeddings are low-dimensional vector representations of words such as word2vec that recently gained much attention in various semantic tasks---word embeddings are distributed representations of words learned on large scale corpus using neural networks | 1 |
by incorporating textual information , rcm can effectively deal with data sparseness problem---we take fully advantage of questions ’ textual descriptions to address data sparseness problem and cold-start problem | 1 |
a semantic parser is learned given a set of sentences and their correct logical forms using smt methods---however , obtaining labeled data is a big challenge in many real-world problems | 0 |
in this paper , we propose to compress neural language models by sparse word representations---we propose an approach to represent uncommon words ’ embeddings by a sparse linear combination of common ones | 1 |
text categorization is the task of classifying documents into a certain number of predefined categories---text categorization is the task of assigning a text document to one of several predefined categories | 1 |
we use the glove word vector representations of dimension 300---for word embeddings , we consider word2vec and glove | 1 |
this approach can be used for word alignment in language pairs like english-hindi---approach has been proposed as an alternative strategy for word alignment | 1 |
coreference resolution is the task of partitioning a set of mentions ( i.e . person , organization and location ) into entities---coreference resolution is the task of partitioning the set of mentions of discourse referents in a text into classes ( or ‘ chains ’ ) corresponding to those referents ( cite-p-12-3-14 ) | 1 |
latent dirichlet allocation is one of the widely adopted generative models for topic modeling---for this experiment , we train a standard phrase-based smt system over the entire parallel corpus | 0 |
we use the same evaluation metrics as described in , which is similar to those in---we use the same metrics as described in wu et al , which is similar to those in | 1 |
all the feature weights and the weight for each probability factor are tuned on the development set with minimumerror-rate training---unreliable scores does not result in a reliable one | 0 |
a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit---a trigram language model with modified kneser-ney discounting and interpolation was used as produced by the srilm toolkit | 1 |
this has shown to be effective for numerous nlp tasks as it can capture word morphology and reduce out-of-vocabulary---stance detection is the task of automatically determining from text whether the author of the text is in favor of , against , or neutral towards a proposition or target | 0 |
as a result , bus request cycle may conceivably be understood either as a corn- * when a sequence has length three or more the order of modification may vary---as a result , bus request cycle may conceivably be understood either as a corn- * when a sequence has length three or more | 1 |
for building our statistical ape system , we used maximum phrase length of 7 and a 5-gram language model trained using kenlm---blei et al proposed lda as a general bayesian framework and gave a variational model for learning topics from data | 0 |
in table 6 , we list the rtm test results for tasks and subtasks that predict hter or meteor from qet15 , qet14 , and qet13---we use the logistic regression classifier as implemented in the skll package , which is based on scikit-learn , with f1 optimization | 0 |
using word2vec , we compute word embeddings for our text corpus---then we train word2vec to represent each entity with a 100-dimensional embedding vector | 1 |
that is , since the morphological analysis is the first-step in most nlp applications , the sentences with incorrect word spacing must be corrected for their further processing---we assume that a morphological analysis consists of three processes : tokenization , dictionary lookup , and disambiguation | 1 |
thus , in this paper , we assume that there is a relationship between the canonical word order and the proportion of each word order in a large corpus and present a corpus-based analysis of canonical word order of japanese double object constructions---we have already used our pos-based model to rescore word-graphs , which results in a one percent absolute reduction in word error rate in comparison to a word-based model | 0 |
mining parallel data from web is a promising method to overcome the knowledge bottleneck faced by machine translation---web mining for parallel data becomes a promising solution to this knowledge acquisition problem | 1 |
results show approximately 6-10 % cer reduction of the acms in comparison with the word trigram models , even when the acms are slightly smaller---experimental results show substantial improvements of the acm in comparison with classical cluster models and word n-gram models | 1 |
mikolov et al proposed a computationally efficient method for learning distributed word representation such that words with similar meanings will map to similar vectors---by an unsupervised one , we may raise the question as to whether the end of supervised nlp comes in sight | 0 |
another stream of work tries to identify domain-specific words to improve crossdomain classification---another line of work tries to derive domain-specific sentiment words | 1 |
goldwasser et al presented a confidence-driven approach to semantic parsing based on self-training---in contrast , goldwasser et al proposed a self-supervised approach , which iteratively chose high-confidence parses to retrain the parser | 1 |
since coreference resolution is a pervasive discourse phenomenon causing performance impediments in current ie systems , we considered a corpus of aligned english and romanian texts to identify coreferring expressions---coreference resolution is the task of clustering a set of mentions in the text such that all mentions in the same cluster refer to the same entity | 1 |
in this paper , we show that using well calibrated probabilities to estimate sense priors is important---named entity recognition ( ner ) is the task of identifying and classifying phrases that denote certain types of named entities ( nes ) , such as persons , organizations and locations in news articles , and genes , proteins and chemicals in biomedical literature | 0 |
in this paper we described the system submitted for the semeval 2014 task 9 ( sentiment analysis in twitter )---in this paper we describe the system submitted for the semeval 2014 sentiment analysis in twitter task ( task 9 ) | 1 |
it also outperforms related models on similarity tasks and named entity recognition---we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus | 0 |
the accuracy was measured using the bleu score and the string edit distance by comparing the generated sentences with the original sentences---the overall mt system is evaluated both with and without function guessing on 500 held-out sentences , and the quality of the translation is measured using the bleu metric | 1 |
for all experiments , we used a 4-gram language model with modified kneser-ney smoothing which was trained with the srilm toolkit---for all data sets , we trained a 5-gram language model using the sri language modeling toolkit | 1 |
furthermore , we propose a way to generate on-the-fly knowledge in logical inference , by combining our framework with the idea of tree transformation---in practical inference , we combine our framework with the idea of tree transformation ( cite-p-26-1-2 ) , to propose a way of generating knowledge in logical representation | 1 |
multi-task joint modeling has been shown to effectively improve individual tasks---we present the first approach for applying distant supervision to cross-sentence relation extraction | 0 |
there are numerous theoretical approaches describing is and its semantics and the terminology used is diverse for an overview )---we adopted a novel formulation that models dependency edges in argument paths and jointly predicts them along with events and arguments | 0 |
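Each row above is a pair of citation sentences joined by `---`, with the label marking whether the two halves appear to describe the same work or claim (1) or unrelated work (0). As a rough illustration of the row format only, here is a minimal Python parsing sketch; the file name `pairs.txt` and the helper `parse_row` are hypothetical, not an official loader for this dataset:

```python
# Minimal sketch for parsing raw rows of this dataset (an assumption about
# the flat-text format, not an official loader). Each raw line looks like:
#   "sentence A---sentence B | 1 |"
# The file name "pairs.txt" and the helper parse_row are hypothetical.

def parse_row(row: str):
    """Return (sentence_a, sentence_b, label) for one raw row."""
    text, label, _ = row.rsplit("|", 2)    # peel off the trailing "| label |"
    sent_a, sent_b = text.split("---", 1)  # the two halves are joined by "---"
    return sent_a.strip(), sent_b.strip(), int(label)

with open("pairs.txt", encoding="utf-8") as f:
    rows = [parse_row(line) for line in f if line.strip()]

# label == 1 marks pairs whose halves describe the same work
matches = sum(1 for _, _, lab in rows if lab == 1)
print(f"{matches}/{len(rows)} rows labeled as matching pairs")
```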