sentence1
stringlengths
16
446
sentence2
stringlengths
14
436
information retrieval ( ir ) is the task of ranking a collection of documents according to an estimate of their relevance to a query .
information retrieval ( ir ) is the task of retrieving , given a query , the documents relevant to the user from a large quantity of documents ( cite-p-13-3-13 ) .
we perform minimum-error-rate training to tune the feature weights of the translation model to maximize the bleu score on development set .
we use minimum error rate training to tune the feature weights of hpb for maximum bleu score on the development set with serval groups of different start weights .
available in external resources , our model uses general-purpose embeddings and optimises a separate neural component to adapt these to the specific task .
instead of optimising individual word embeddings , our model uses general-purpose embeddings and optimises a separate neural component to adapt these to the specific task .
1 to our knowledge , read-x is the first system that performs in real time a ) keyword search , b ) thematic classification and c ) analysis of reading difficulty .
to our knowledge , read-x is the first web-based system that performs real-time searches and returns results classified thematically and by reading level within seconds .
coreference resolution is the process of finding discourse entities ( markables ) referring to the same real-world entity or concept .
coreference resolution is the process of linking together multiple referring expressions of a given entity in the world .
we test this hypothesis with an approximate randomization approach .
to compute statistical significance , we use the approximate randomization test .
ding and palmer propose a syntax-based translation model based on a probabilistic synchronous dependency insertion grammar .
ding and palmer introduce the notion of a synchronous dependency insertion grammar as a tree substitution grammar defined on dependency trees .
we initialize our word vectors with 300-dimensional word2vec word embeddings .
we also obtain the embeddings of each word from word2vec .
similarity is a fundamental concept in theories of knowledge and behavior .
similarity is a kind of association implying the presence of characteristics in common .
as noted in joachims , support vector machines are well suited for text categorisation .
svms are frequently used for text classification and have been applied successfully to nli .
we use conditional random fields sequence labeling as described in .
we use conditional random fields , a popular approach to solve sequence labeling problems .
following the method presented in hatzivassiloglou and mckeown , we can connect words if they appear together in a conjunction in the corpus .
following the method presented in , we can connect words if they appear in a conjunctive form in the corpus .
shen et al , 2008 ) exploits target dependency structures as dependency language models to ensure the grammaticality of the target string .
as dependency relations directly model the semantics structure of a sentence , shen et al introduce dependency language model to better account for the generation of target sentences .
in this paper , we propose to improve target-dependent twitter sentiment classification .
in this paper , we address target-dependent sentiment classification of tweets .
dadvar et al , 2013 ) proposed a method to improve the cyberbullying detection by taking user context into account .
dadvar et al , 2013 ) affirmed that user context was crucial in the bonafide detection of cyberbullying .
in the experimental results , we have illustrated that character-based wrappers are better suited than htmlbased wrappers .
our conjecture is that the less constrained character-level methods will produce more candidate wrappers than html-based techniques .
the feature weights are tuned to optimize bleu using the minimum error rate training algorithm .
the parameters of the log-linear model are tuned by optimizing bleu on the development data using mert .
we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing .
for the fluency and grammaticality features , we train 4-gram lms using the development dataset with the sri toolkit .
bouchard-c么t茅 et al propose mcmc-based methods to model context , and operate on more than one pair of languages at a time .
bouchard-c么t茅 et al use mcmc-based methods to model context , and operate on more than a pair of languages .
we use the moses software package 5 to train a pbmt model .
we use moses toolkit for pbsmt training and sockeye toolkit for nmt training .
lindberg et al employed a template-based approach while taking advantage of semantic information to generate natural language questions for on-line learning support .
lindberg et al introduced a sophisticated template based system which merges semantic role labels into a system that automatically generates natural language questions to support online learning .
zeng et al , 2014 , exploited a convolutional deep neural network to extract lexical and sentence level features .
zeng et al developed a deep convolutional neural network to extract lexical and sentence level features , which are concatenated and fed into the softmax classifier .
in the universal dependencies corpus , we show that the proposed transfer learning model improves the pos tagging performance of the target languages .
given insufficient training examples , we can improve the pos tagging performance by cross-lingual pos tagging , which exploits affluent pos tagging corpora from other source languages .
we convert both data sets to stanford dependencies with the stanford dependency converter .
we extract the corresponding feature from the output of the stanford parser .
in this paper , we propose adversarial multi-criteria learning for cws by fully exploiting the underlying shared knowledge .
in this paper , we propose an adversarial multi-criteria learning for cws by integrating shared knowledge from multiple segmentation criteria .
we used 5-gram models , estimated using the sri language modeling toolkit with modified kneser-ney smoothing .
for the language model , we used sri language modeling toolkit to train a trigram model with modified kneser-ney smoothing on the 31 , 149 english sentences .
snow et al detect syntactic is-a patterns by analyzing the parse trees and train a hypernym classifier based on syntactic features .
snow et al utilize wordnet to learn dependency path patterns for extracting the hypernym relation from text .
we conducted baseline experiments for phrasebased machine translation using the moses toolkit .
we used the open source moses phrase-based mt system to test the impact of the preprocessing technique on translation results .
while our results show improvement over the baseline ( up to 25 . 9 % ) .
our best performing method obtains a significant increase over the baseline ( 25.9 % f-1 ) .
feature weights were set with minimum error rate training on a development set using bleu as the objective function .
the smt systems are tuned on the dev development set with minimum error rate training using bleu accuracy measure as the optimization criterion .
an english 5-gram language model is trained using kenlm on the gigaword corpus .
kenlm is used to train a 5-gram language model on english gigaword .
the srilm toolkit was used to build the 5-gram language model .
all language models were trained using the srilm toolkit .
for adjusting feature weights , the mert method was applied , optimizing the bleu-4 metric obtained on the development corpus .
finally , the ape system was tuned on the development set , optimizing ter with minimum error rate training .
we adopt the idea of translation model entropy from koehn et al .
in particular , we use measures such as translation model entropy , inspired by koehn et al .
stance detection is the task of automatically determining from text whether the author of the text is in favor of , against , or neutral towards a proposition or target .
stance detection has been defined as automatically detecting whether the author of a piece of text is in favor of the given target or against it .
more importantly , when operating on new domains , the web-derived selectional preference features show great potential for achieving robust performance .
more importantly , when operating on new domains , the web-derived selectional preferences show great potential for achieving robust performance .
to demonstrate this we tried to translate the ifrs 2009 taxonomy using the moses decoder , which we trained on the europarl corpus , translating from spanish to english .
in order to evaluate the quality of locating the wrong term translation , we applied the terminology verification service to an smt model trained with moses on the europarl corpus .
we present a framework named dcfee which can extract document-level events from announcements .
we present an event extraction framework to detect event mentions and extract events from the document-level financial news .
this algorithm is based on pagerank , but with several changes .
it is a modified version of the original lexrank algorithm .
we use a standard long short-term memory model to learn the document representation .
we use long shortterm memory networks to build another semanticsbased sentence representation .
we present a machine learning based system for extraction of drug-drug interactions , using lexical , syntactic and semantic .
we present our machine learning system which utilizes lexical , syntactical and semantic based feature sets .
we obtain improvements in two mainstream nlp tasks , namely part-of-speech tagging and dependency parsing .
we show improvements in part-of-speech tagging and dependency parsing using our proposed models .
we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting .
we used srilm to build a 4-gram language model with interpolated kneser-ney discounting .
neural machine translation has recently become the dominant approach to machine translation .
in recent years , neural machine translation based on encoder-decoder models has become the mainstream approach for machine translation .
relation extraction is a challenging task in natural language processing .
relation extraction is a fundamental task in information extraction .
we describe a supervised and also a semi-supervised method to discriminate the senses of partial cognates between french and english .
in this paper we propose a supervised and a semi-supervised method to disambiguate partial cognates between two languages : french and english .
we obtained both phrase structures and dependency relations for every sentence using the stanford parser .
after generating a context-free parse , these relations are extracted by the stanford parser that we used in our experiments .
in vocabulary acquisition is learning which of many possible meanings is appropriate for a word .
a key challenge in vocabulary acquisition is learning which of the many possible meanings is appropriate for a word .
negation is a linguistic phenomenon that can alter the meaning of a textual segment .
negation is a linguistic phenomenon where a negation cue ( e.g . not ) can alter the meaning of a particular text segment or of a fact .
hammarstr枚m and borin give an extensive overview of stateof-the-art unsupervised learning of morphology .
the task of unsupervised learning of morphology has an over fifty years long history , which is exhaustively presented by hammarstr枚m and borin .
wang et al proposed an attention-based lstm method for the asc task by concentrating on different parts of a sentence to different aspects .
wang et al , proposed an attention based lstm which introduced the aspect clues by concatenating the aspect embeddings and the word representations .
a 3-gram language model is trained on the target side of the training data by the srilm toolkits with modified kneser-ney smoothing .
we use a pbsmt model where the language model is a 5-gram lm with modified kneser-ney smoothing .
we use 300-dimensional word embeddings from glove to initialize the model .
we use the glove pre-trained word embeddings for the vectors of the content words .
we also use editor score as an outcome variable for a linear regression classifier , which we evaluate using 10-fold cross-validation in scikit-learn .
we use several classifiers including logistic regression , random forest and adaboost implemented in scikit-learn .
xie et al explored content features based on the lexical similarity between the response and a set of sample responses for each question .
xie et al explored content measures based on the lexical similarity between the response and a set of reference responses .
for the automatic evaluation we used the bleu , meteor and chrf metrics .
for the automatic evaluation we used the bleu and meteor algorithms .
we use minimum error rate training with nbest list size 100 to optimize the feature weights for maximum development bleu .
we use minimal error rate training to maximize bleu on the complete development data .
we combine this embedding based framework with a pre-selection of candidate lexicalisations .
we use an embeddings based framework for identifying plausible lexicalisations of kb properties .
the skip-thought vector method uses surrounding sentences by abstracting the skip-gram structure from word to sequence .
the skip-thoughts model is a sentence-level abstraction of the skip-gram model .
we will show translation quality measured with the bleu score as a function of the phrase table size .
in this paper we will consider sentence-level approximations of the popular bleu score .
coreference resolution is the process of linking together multiple referring expressions of a given entity in the world .
additionally , coreference resolution is a pervasive problem in nlp and many nlp applications could benefit from an effective coreference resolver that can be easily configured and customized .
bannard and callison-burch , for instance , used a bilingual parallel corpus and obtained english paraphrases by pivoting through foreign language phrases .
bannard and callison-burch proposed identifying paraphrases by pivoting through phrases in a bilingual parallel corpora .
we estimate the correlation of human judgements with five automatic evaluation measures on two image description data sets .
we estimate the correlation of unigram and smoothed bleu , ter , rouge - su 4 , and meteor against human judgements on two data sets .
we use the stanford parser to derive the trees .
we use the stanford parser for obtaining all syntactic information .
the language model used was a 5-gram with modified kneserney smoothing , built with srilm toolkit .
a back-off 2-gram model with good-turing discounting and no lexical classes was also created from the training set , using the srilm toolkit , .
these models can be tuned using minimum error rate training .
all the weights of those features are tuned by using minimal error rate training .
we apply online training , where model parameters are optimized by using adagrad .
we use mini-batch update and adagrad to optimize the parameter learning .
for sequence-level smoothing , we propose to use restricted token replacement vocabularies , and a “ lazy evaluation ” .
for the sequence-level , which is computationally expensive , we introduced an efficient “ lazy ” evaluation scheme , and introduced an improved resampling strategy .
the language model is a 5-gram with interpolation and kneserney smoothing .
we use a pbsmt model where the language model is a 5-gram lm with modified kneser-ney smoothing .
in particular , the vector-space word representations learned by a neural network have been shown to successfully improve various nlp tasks .
in particular , neural language models have demonstrated impressive performance at the task of language modeling .
sketch engine has been widely deployed in lexicography and the study of language learning , but less often for broader questions in social science .
sketch engine has been widely deployed in lexicography the study of language learning , but less often for broader questions in social science .
supertagging is the process of assigning the correct supertag to each word of an input sentence .
supertagging is the tagging process of assigning the correct elementary tree of ltag , or the correct supertag , to each word of an input sentence 1 .
sentiwordnet is a large lexicon for sentiment analysis and opinion mining applications .
sentiwordnet describes itself as a lexical resource for opinion mining .
coreference resolution is a key problem in natural language understanding that still escapes reliable solutions .
coreference resolution is the task of grouping mentions to entities .
denkowski proposed a method for real time integration of post-edited mt output into the translation model .
denkowski developed a method for real time integration of post-edited mt output into the translation model by extracting a grammar for each input sentence .
from wordnet , we show that using monolingual resources and textual entailment relationships allows substantially increasing the quality of translations .
we show that the use of source language resources , and in particular the extension to non-symmetric textual entailment relationships , is useful for substantially increasing the amount of texts that are properly translated .
the log-lineal combination weights were optimized using mert .
the feature weights 位 m are tuned with minimum error rate training .
in this paper , we name the problem of choosing the correct word from the homophone set .
in this paper , we propose a practical method to detect japanese homophone errors in japanese texts .
we evaluate the performance of different translation models using both bleu and ter metrics .
for this labeling , we estimate translation quality by the translation edit rate ter metric .
the syntagmatic kernel is based on a gap-weighted subsequences kernel .
it is based on a gap-weighted subsequences kernel .
in this paper , we propose a forest-based tree-sequence to string translation model .
to integrate their strengths , in this paper , we propose a forest-based tree sequence to string translation model .
phoneme based models like the ones based on weighted finite state transducers and extended markov window treat transliteration as a phonetic process rather than an orthographic process .
phoneme based models , such as , the ones based on weighted finite state transducers and extended markov window treat transliteration as a phonetic process rather than an orthographic process .
for the loss function , we used the mean square error and adam optimizer .
we use a binary cross-entropy loss function , and the adam optimizer .
we use word2vec tool for learning distributed word embeddings .
we use word2vec as the vector representation of the words in tweets .
in this paper , we have proposed a method to incorporate discrete probabilistic lexicons into nmt systems .
in this paper , we propose a simple , yet effective method to incorporate discrete , probabilistic lexicons as an additional information source in nmt ( ¡ì3 ) .
we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing .
the language model was constructed using the srilm toolkit with interpolated kneser-ney discounting .
we use the adaptive moment estimation for the optimizer .
we update the gradient with adaptive moment estimation .
closed tests on the first and second sighan bakeoffs show that our system is competitive with the best in the literature .
closed tests using the first and second sighan cws bakeoff data demonstrated our system to be competitive with the best in the literature .
we used glove 10 to learn 300-dimensional word embeddings .
we used the 200-dimensional word vectors for twitter produced by glove .
work introduces a new strategy to compare the numerous conventions that have been proposed over the years for expressing dependency structures and discover the one for which .
this work introduces a new strategy to compare the numerous representations that have been proposed over the years for expressing dependency structures and discover the one that is easiest to learn .
for standard phrase-based translation , galley and manning introduced a hierarchical phrase orientation model .
galley and manning extended the lexicalized reordering mode to tackle long-distance phrase reorderings .
marcu and wong argued for a different phrase-based translation modeling that directly induces a phrase-by-phrase lexicon model from word-wise data .
marcu and wong proposed the joint probability model which directly estimates the phrase translation probabilities from the corpus in a theoretically governed way .
pitler et al proved that 1ec trees are a subclass of graphs whose pagenumber is at most 2 .
pitler et al , 2013 ) proved that 1-endpointcrossing trees are a subclass of graphs whose pagenumber is at most 2 .
a 4-gram language model was trained on the monolingual data by the srilm toolkit .
a standard sri 5-gram language model is estimated from monolingual data .
development and proliferation of social media services has led to the emergence of new approaches for surveying the population and addressing social issues .
the wide use of social media services has led to the emergence of new approaches for surveying the population and addressing social issues .
for nb and svm , we used their implementation available in scikit-learn .
we used the svd implementation provided in the scikit-learn toolkit .
in this work , we experiment with two bwe models that have demonstrated a strong bli performance .
in this work , we detect two major gaps in current representation learning for bli .
we used a phrase-based smt model as implemented in the moses toolkit .
our experiments use the ghkm-based string-totree pipeline implemented in moses .
and we have shown that using supervised machine learning with gold dialog acts we can achieve an f-measure of 66 % .
we obtain a best cross validation f-measure of 65.8 using gold dialog act features and 55.6 without using them .
word alignment , which can be defined as an object for indicating the corresponding words in a parallel text , was first introduced as an intermediate result of statistical translation models ( cite-p-13-1-2 ) .
word alignment is a critical component in training statistical machine translation systems and has received a significant amount of research , for example , ( cite-p-17-1-0 , cite-p-17-1-8 , cite-p-17-1-4 ) , including work leveraging syntactic parse trees , e.g. , ( cite-p-17-1-1 , cite-p-17-1-2 , cite-p-17-1-3 ) .