text : string , lengths 82 to 736
label : int64 , values 0 or 1
multiword expressions are word combinations which have idiosyncratic properties relative to their component words , such as taken aback or red tape---multiword expressions are defined as idiosyncratic interpretations that cross word boundaries or spaces
1
we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit---wu introduced the inversion transduction grammar formalism which treats translation as a process of parallel parsing of the source and target language via a synchronized grammar
0
an example of such a query is : " asus laptop + opinions " , another , more detailed query , might be " asus laptop + positive opinions "---asus laptop + opinions " , another , more detailed query , might be " asus laptop + positive opinions "
1
we evaluated translation quality based on the caseinsensitive automatic evaluation score bleu-4---we used the bleu score to evaluate the translation accuracy with and without the normalization
1
we show that it outperforms an n-gram model in predicting more than one upcoming word---and demonstrated that our parser outperforms an n-gram model in predicting more than one upcoming word
1
part-of-speech ( pos ) tagging is the task of assigning each of the words in a given piece of text a contextually suitable grammatical category---part-of-speech ( pos ) tagging is a critical task for natural language processing ( nlp ) applications , providing lexical syntactic information
1
since text categorization is a task based on predefined categories , we know the categories for classifying documents---text categorization is the classification of documents with respect to a set of predefined categories
1
in this paper , we propose a novel and effective approach to sentiment analysis on product reviews---in this paper , we propose a novel hl-sot approach to labeling a product 's attributes and their associated sentiments in product reviews
1
an event schema is a structured representation of an event , it defines a set of atomic predicates or facts and a set of role slots that correspond to the typical entities that participate in the event---event schema is a high-level representation of a bunch of similar events
1
a combination of sufficient amounts of noise and rich , diverse errors appears to lead to better model performance---on a large corpus of noisy and clean sentences , the model is able to generate rich , diverse errors that better capture the noise
1
our preliminary experiments show that both methods can improve smt performance without using any additional data---without using any additional resource , both methods can improve smt performance significantly
1
the net result is a sampler that is non-convergent , overly dependent on its initialisation and can not be said to be sampling from the posterior---hara et al derived turn level ratings from overall ratings of the dialogue which were applied by the users after the interaction on a five point scale within an online questionnaire
0
our parser plus stochastic disambiguator achieves 79 % f-score under this evaluation regime---as noted earlier , this strategy is characteristic of the systems that participated in the semeval task on classifying semantic relations between nominals , such as butnariu and veale
0
in systematic experiments , we have demonstrated the strong impact of modeling overall argumentation---we tune the systems using kbest batch mira
0
the semantic orientation of a phrase is not a mere sum of its component words---twitter is a microblogging site where people express themselves and react to content in real-time
0
the bleu metric has been used to evaluate the performance of the systems---svms have proven to be an effective means for text categorization as they are capable to robustly deal with high-dimensional , sparse feature spaces
0
recognition data was again analysed using linear mixed model logistic regression---recognition data was analysed using a linear mixed model logistic regression
1
peters et al show how deep contextualized word representations model both complex characteristics of word use , and usage across various linguistic contexts---peters et al show that their language model elmo can implicitly disambiguate word meaning with their contexts
1
in addition , we use an english corpus of roughly 227 million words to build a target-side 5-gram language model with srilm in combination with kenlm---we use srilm to train a 5-gram language model on the target side of our training corpus with modified kneser-ney discounting
1
we show that emotion-word hashtags often impact emotion intensity , usually conveying a more intense emotion---emotion-word hashtags often impact emotion intensity , often conveying a more intense emotion
1
in addition , a 5-gram lm with kneser-ney smoothing and interpolation was built using the srilm toolkit---the model was built using the srilm toolkit with backoff and kneser-ney smoothing
1
distributional semantic models represent lexical meaning in vector spaces by encoding corpora derived word co-occurrences in vectors---a 5-gram language model was built using srilm on the target side of the corresponding training corpus
0
word2vec , glove and fasttext are the most simple and popular word embedding algorithms---stance detection is the task of automatically determining from the text whether the author of the text is in favor of , against , or neutral towards a proposition or target
0
the penn discourse treebank , developed by prasad et al , is currently the largest discourse-annotated corpus , consisting of 2159 wall street journal articles---as suggested by the in section 2 , relationals occur closer to the than qualitatives , so this result is consistent
0
the approach is a direct extension of the incremental algorithm---we base our gre approach on an extension of the incremental algorithm
1
the lack of da-english parallel corpora suggests pivoting on msa can improve the translation quality---word embedding models are aimed at learning vector representations of word meaning
0
we use word2vec as the vector representation of the words in tweets---we used word2vec to learn these dense vectors
1
the dependency parse trees are finally obtained using a phrase structure parser , using the post-processing of the stanford corenlp package---the charniak-lease phrase structure parses are transformed into the collapsed stanford dependency scheme using the stanford tools
1
efficiency of such learning method may suffer from the mismatch of dialogue state distribution between offline training and online interactive learning stages---as in mitchell et al , a linear regression model was used to learn the mapping from semantic features to brain activity levels
0
a key feature in our approach is the reliance on a story planner which we acquire automatically by recording events , their participants , and their precedence relationships in a training corpus---to train our model we use markov chain monte carlo sampling
0
however , li et al have pointed out that the transliteration precision of the phoneme-based approaches could be limited by two main constraints---deep learning techniques have shown enormous success in sequence to sequence mapping tasks
0
in addition to these two key indicators , we evaluated the translation quality using an automatic measure , namely bleu score---to verify sentence generation quantitatively , we evaluated the sentences automatically using bleu score
1
we apply sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing---a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke
0
sentence retrieval is to retrieve sentences in response to certain requirements---sentence retrieval is always treated as a special type of document retrieval
1
the atb comprises manually annotated morphological and syntactic analyses of newswire text from different arabic sources , while the ag is simply a huge collection of raw arabic newswire text---in recent years , vector space models ( vsms ) have been proved successful in solving various nlp tasks including named entity recognition , part-of-speech tagging , parsing , semantic role-labeling
0
we trained a support vector machine for regression with rbf kernel using scikitlearn , which in turn uses libsvm---we applied liblinear via its scikitlearn python interface to train the logistic regression model with l2 regularization
1
examples of these are freebase , yago , dbpedia , and google knowledge vault---prominent examples include freebase which powers the google knowledge graph , conceptnet , yago , and others
1
in this paper , we used the decision list to solve the homophone problem---in this paper , we incorporate the written word into the original decision list
1
different from most work relying on a large number of handcrafted features , collobert and weston proposed a convolutional neural network for srl---collobert and weston deepened the original neural model by adding a convolutional layer and an extra layer for modeling long-distance dependencies
1
feature weights were set with minimum error rate training on a development set using bleu as the objective function---feature weights are tuned using minimum error rate training on the 455 provided references
1
heilman et al combined unigram models with grammatical features and trained machine learning models for readability assessment---in recent years has created an increasing need for improvements in organic and sponsored search
0
open information extraction has been shown to be useful in a number of nlp tasks , such as question answering , relation extraction , and information retrieval---open ie in the monolingual setting has shown to be useful in a wide range of tasks , such as question answering , ontology learning , and summarization
1
alshawi et al , 2000 ) represents each production in parallel dependency trees as a finite-state transducer---alshawi et al represent each production in parallel dependency tree as a finite transducer
1
and our experimental results on the ace data set show the model is effective for coreference resolution---we present novel methods for analyzing the activation patterns of recurrent neural networks from a linguistic point of view and explore the types of linguistic structure they learn
0
we have tested cpra on benchmark data created from freebase---we evaluate cpra on benchmark data created from freebase
1
the n-gram based language model is developed by employing the irstlm toolkit---the target language model is a 7-gram , binarized irstlm
1
we use 300 dimensional glove embeddings trained on the common crawl 840b tokens dataset , which remain fixed during training---we used the 300-dimensional glove word embeddings learned from 840 billion tokens in the web crawl data , as general word embeddings
1
by including predictions of other models as features , we achieve aer of 3.8 on the standard hansards dataset---a set of 500 sentences is used to tune the decoder parameters using the mert
0
to train our neural algorithm , we apply word embeddings of a look-up from 100-d glove pre-trained on wikipedia and gigaword---in this paper , we propose a forest-based tree sequence to string model , which is designed to integrate the strengths of the forest-based and the tree
0
importantly , word embeddings have been effectively used for several nlp tasks---word embeddings have been used to help to achieve better performance in several nlp tasks
1
these methods can not only reduce oov words , but also deal with unknown words---the decoder uses cky-style parsing with cube pruning to integrate the language model
0
twitter is a huge microblogging service with more than 500 million tweets per day from different locations in the world and in different languages---twitter is a well-known social network service that allows users to post short 140 character status update which is called “ tweet ”
1
the language model was a 5-gram language model estimated on the target side of the parallel corpora by using the modified kneser-ney smoothing implemented in the srilm toolkit---named entity typing is a fundamental building block for many natural-language processing tasks
0
like soricut and marcu , they formulate the discourse segmentation task as a binary classification problem of deciding whether a word is the boundary or no-boundary of edus---the language model is trained and applied with the srilm toolkit
0
burkett and klein propose a reranking based method for joint constituent parsing of bitext , which can make use of structural correspondence features in both languages---n-gram translation models helps to address some of the search problems that are nontrivial to handle when decoding
0
we used the svd implementation provided in the scikit-learn toolkit---for this task , we used the svm implementation provided with the python scikit-learn module
1
research in this has resulted in the construction of several large scale kgs , such as nell , google knowledge vault and yago---recent research in this area has resulted in the development of several large kgs , such as nell , yago , and freebase , among others
1
we propose sampling pseudo-negative examples taken from probabilistic language models---we have presented a novel discriminative language model using pseudo-negative examples
1
the following two subsections review typical methods for each phase---two subsections review typical methods for each phase
1
we are interested in capturing aspects of coherence as defined by grosz and sidner , based on the attentional state , intentional structure and linguistic structure of discourse---the model weights are automatically tuned using minimum error rate training
0
in this paper we demonstrate our model by running it on items used in psycholinguistic experiments about human preferences---in this paper we showed how a computational model can mirror human preferences in pronoun resolution and reading times
1
in our experiments we used rasp , a broad coverage dependency parser , and the opennlp 1 coreference resolution engine---in our experiments this knowledge base was created using the rasp relational parser
1
sentiment analysis is the natural language processing ( nlp ) task dealing with the detection and classification of sentiments in texts---sentiment analysis is a research area in the field of natural language processing
1
for language model , we use a trigram language model trained with the srilm toolkit on the english side of the training corpus---in this paper , we propose a framework for automatic evaluation of nlp applications which is able to account for the variation in the human evaluation
0
we learn our word embeddings by using word2vec 3 on unlabeled review data---we perform pre-training using the skip-gram nn architecture available in the word2vec 13 tool
1
these experiments demonstrate that fbrnn achieves competitive results compared to the current state-of-the-art---the simplest method of evaluation is direct comparison of the extracted synonyms with a manuallycreated gold standard
0
all features were log-linearly combined and their weights were optimized by performing minimum error rate training---feature weights were set with minimum error rate training on a development set using bleu as the objective function
1
the integrated dialect classifier is a maximum entropy model that we train using the liblinear toolkit---we adapted the moses phrase-based decoder to translate word lattices
0
it is also shown that the analyses provided by the functional uncertainty machinery can be obtained without requiring power beyond mildly context-sensitive grammars---functional uncertainty machinery can be obtained without going beyond the power of mildly context-sensitive grammars
1
for training our system classifier , we have used scikit-learn---for nb and svm , we used their implementation available in scikit-learn
1
2 ) our model leverages both the semantic and sentiment correlations between bilingual documents---in the following three aspects , 1 ) we exploit both the semantic and sentiment correlations of the bilingual texts
1
it has been shown that word embeddings are able to capture certain semantic and syntactic aspects of words---these embeddings provide a nuanced representation of words that can capture various syntactic and semantic properties of natural language
1
the language models were interpolated kneser-ney discounted trigram models , all constructed using the srilm toolkit---the language model used was a 5-gram with modified kneser-ney smoothing , built with srilm toolkit
1
semantic role labeling ( srl ) is the task of automatically labeling predicates and arguments in a sentence with shallow semantic labels---semantic role labeling ( srl ) is the process of producing such a markup
1
we use the srilm toolkit to compute our language models---the language models in our systems are trained with srilm
1
capturing these changes is problematic for current language technologies , which are typically developed for speakers of the standard dialect only---language technologies are usually developed for standard dialects , ignoring the linguistic differences in other dialects
1
we further explore three algorithms in rule matching : 0-1 matching , likelihood matching , and deep similarity matching---we designed and explored three fuzzy rule matching algorithms : 0-1 matching , likelihood matching , and deep similarity matching
1
as shown in figure 3 , whether or not contributors could be attributed to the hearer did not correlate with the choice of since or because---as shown in figure 3 , whether or not contributors could be attributed to the hearer did not correlate with the choice of since or
1
we use the adam optimizer for the gradient-based optimization---for the loss function , we used the mean square error and adam optimizer
1
the top-down method had better bleu scores for 7 language pairs without relying on supervised syntactic parsers compared to other preordering methods---using the top-down parsing algorithm was faster and gave higher bleu scores than btg-based preordering
1
we estimate a 5-gram language model using interpolated kneser-ney discounting with srilm---we evaluate the perplexity of the n-gram model with srilm package
1
the baseline of our approach is a statistical phrase-based system which is trained using moses---the standard phrase-based model that we use as our top-line is the moses system trained over the full europarl v5 parallel corpus
1
rhetorical structure theory is a framework for describing the organization of a text and what a text conveys by identifying hierarchical structures in text---rhetorical structure theory is one way of introducing the discourse structure of a document to a summarization task
1
this paper proposes a novel two-stage method for mining opinion words and opinion targets---this paper proposes a two-stage framework for mining opinion words and opinion targets
1
luong et al learn word representations based on morphemes that are obtained from an external morphological segmentation system---we use the pre-trained word2vec embeddings provided by mikolov et al as model input
0
for the compilation , we focus on travel blogs , which are defined as travel journals written by bloggers in diary form---ma et al proposed an interactive attention network which interactively learned attentions in the contexts and targets
0
our system also ranked 4 th out of 40 submissions in identifying the sentiment of sarcastic tweets---and ranking 4 th out of 40 in identifying the sentiment of sarcastic tweets
1
in this paper , we presented a supervised classification model for keyphrase extraction from scientific research papers that are embedded in citation networks---in many ir and nlp tasks , to our knowledge , we are the first to propose the incorporation of information available in citation networks for keyphrase extraction
1
charniak and johnson , e.g. , supply a discriminative reranker that uses , e.g. , features to capture syntactic parallelism across conjuncts---charniak and johnson incorporated some features of syntactic parallelism in coordinate structures into their maxent reranking parser
1
we map the pos labels in the conll datasets to the universal pos tagset---for the sake of comparability we applied the split to the universal tagset
1
we used the logistic regression implemented in the scikit-learn library with the default settings---we used svm classifier that implements linearsvc from the scikit-learn library
1
for this model , we use a binary logistic regression classifier implemented in the lib-linear package , coupled with the ovo scheme---we use the logistic regression implementation of liblinear wrapped by the scikit-learn library
1
therefore , word segmentation is a crucial first step for many chinese language processing tasks such as syntactic parsing , information retrieval and machine translation---each individual system is a phrase-based system trained using the moses toolkit
0
ng et al proposed that rather than focusing on just adjective-noun relationships , the subject-verb and verb-object relationships should also be considered for polarity classification---the translation quality is evaluated by bleu and ribes
0
we propose the joint parsing models by the feed-forward and bi-lstm neural networks---we propose neural network-based joint models for word segmentation , pos tagging and dependency parsing
1
in this paper , our coreference resolution system for conll-2012 shared task is summarized---we evaluate our approach on the english portion of the conll-2012 dataset
1
the bleu score is based on the geometric mean of n-gram precision---for the evaluation of the results we use the bleu score
1
experimental results show that our method can effectively resolve the vocabulary mismatch problem and achieve accurate and robust performance---experimental results show that our system outperforms the base system with a 3.4 % gain in f1 , and generates logical forms more accurately
1
semantic role labeling ( srl ) is a kind of shallow semantic parsing task and its goal is to recognize some related phrases and assign a joint structure ( who did what to whom , when , where , why , how ) to each predicate of a sentence ( cite-p-24-3-4 )---semantic role labeling ( srl ) is the task of labeling the predicate-argument structures of sentences with semantic frames and their roles ( cite-p-18-1-2 , cite-p-18-1-19 )
1
our randomized lm is based on the bloomier filter---our randomized language model is based on the bloomier filter
1