sentence1 : string , lengths 16 to 446 ; sentence2 : string , lengths 14 to 436
we use different pretrained word embeddings such as glove and fasttext as the initial word embeddings .
we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training .
coreference resolution is the task of partitioning a set of entity mentions in a text , where each partition corresponds to some entity in an underlying discourse model .
coreference resolution is a challenging task , that involves identification and clustering of noun phrases mentions that refer to the same real-world entity .
we apply statistical significance tests using the paired bootstrapped resampling method .
following , we use the bootstrap resampling test to do significance testing .
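The pair above refers to the paired bootstrap resampling significance test. A minimal sketch of that test is given below; the `score` function stands in for any corpus-level metric (e.g. BLEU), and all names are illustrative rather than taken from the cited papers.

```python
# Minimal sketch of paired bootstrap resampling for comparing two systems.
# `score(hyps, refs)` is any corpus-level metric; names are illustrative.
import random

def paired_bootstrap(sys_a, sys_b, refs, score, n_samples=1000, seed=0):
    """Return the fraction of resamples in which system A beats system B."""
    rng = random.Random(seed)
    indices = list(range(len(refs)))
    wins_a = 0
    for _ in range(n_samples):
        sample = [rng.choice(indices) for _ in indices]   # resample with replacement
        a = score([sys_a[i] for i in sample], [refs[i] for i in sample])
        b = score([sys_b[i] for i in sample], [refs[i] for i in sample])
        if a > b:
            wins_a += 1
    return wins_a / n_samples   # high values suggest A is significantly better
```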
in this paper , we study the problem of topic segmentation for emails .
in this paper we presented an email corpus annotated for topic segmentation .
as far as we know , this is the first study that gives a solid empirical foundation .
as far as we know , this is the first paper to quantitatively explore this question .
we propose a new , simple model for selectional preference induction that uses corpus-based semantic similarity metrics , such as cosine or lin 's ( 1998 ) .
we propose a new , simple model for the automatic induction of selectional preferences , using corpus-based semantic similarity metrics .
the semantic textual similarity is a core problem in the computational linguistic field .
semantic textual similarity is the task of computing the similarity between any two given texts .
statistical machine translation , especially the phrase-based model , has developed very fast in the last decade .
in the last decade , statistical machine translation has been advanced by expanding the basic unit of translation from word to phrase and grammar .
we used classification of politeness factors in line with trosborg and díaz-pérez .
in our case , we used classification of politeness factors in line with trosborg and díaz-pérez .
unlike dong et al , we initialize our word embeddings using a concatenation of the glove and cove embeddings .
for the word-embedding based classifier , we use the glove pre-trained word embeddings .
we demonstrate that an lda-based topic modelling approach outperforms a baseline distributional semantic approach .
the results show that our topic modelling approach outperforms the other two methods .
we evaluated the translation quality using the case-insensitive bleu-4 metric .
we evaluate the translation quality using the case-sensitive bleu-4 metric .
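The pair above mentions case-insensitive vs. case-sensitive BLEU-4. The sketch below computes corpus-level BLEU-4 with NLTK; it is a stand-in for, not a reimplementation of, the official mteval/multeval scripts, and case-insensitive scoring is approximated by lowercasing tokens.

```python
# Corpus-level BLEU-4 with NLTK; lowercasing tokens approximates a
# case-insensitive evaluation (not the official mteval/multeval scripts).
from nltk.translate.bleu_score import corpus_bleu

def bleu4(hypotheses, references, case_insensitive=True):
    """hypotheses: list of token lists; references: list of lists of token lists."""
    if case_insensitive:
        hypotheses = [[t.lower() for t in h] for h in hypotheses]
        references = [[[t.lower() for t in r] for r in refs] for refs in references]
    return corpus_bleu(references, hypotheses, weights=(0.25, 0.25, 0.25, 0.25))

# toy example
hyps = [["the", "cat", "sat", "on", "the", "mat"]]
refs = [[["the", "cat", "sat", "on", "the", "mat"]]]
print(bleu4(hyps, refs))
```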
we use the same evaluation criterion as described in .
we use evaluation metrics similar to those in .
in the previous sections , we discussed how we construct the main components of the literature .
in the next section , we start by describing our symbolic representation of the literature .
we used pos tags predicted by the stanford pos tagger .
we used the stanford tagger to tag wsj and paraphrase datasets .
we used the disambig tool provided by the srilm toolkit .
we train a trigram language model with the srilm toolkit .
the target-side 4-gram language model was estimated using the srilm toolkit and modified kneser-ney discounting with interpolation .
a 5-gram lm was trained using the srilm toolkit , exploiting improved modified kneser-ney smoothing , and quantizing both probabilities and back-off weights .
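Several pairs here describe n-gram language models trained with SRILM and Kneser-Ney smoothing. SRILM itself is a separate C++ command-line toolkit; the sketch below uses NLTK's interpolated Kneser-Ney model as a stand-in to show the same idea on toy data.

```python
# Stand-in for SRILM-style n-gram LM training: an interpolated Kneser-Ney
# 5-gram model with NLTK (SRILM itself is a separate C++ toolkit).
from nltk.lm import KneserNeyInterpolated
from nltk.lm.preprocessing import padded_everygram_pipeline

order = 5
corpus = [["we", "train", "a", "language", "model"],
          ["the", "model", "uses", "kneser-ney", "smoothing"]]   # toy tokenized data

train_ngrams, vocab = padded_everygram_pipeline(order, corpus)
lm = KneserNeyInterpolated(order)
lm.fit(train_ngrams, vocab)

print(lm.score("model", ["a", "language"]))   # P(model | a language)
```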
our rnn model uses a long short-term memory component .
we use the long short-term memory architecture for recurrent layers .
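The pair above uses long short-term memory (LSTM) recurrent layers. A minimal PyTorch sketch follows; the dimensions are illustrative and not taken from any of the cited systems.

```python
# A minimal LSTM recurrent layer in PyTorch (dimensions are illustrative).
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=200, hidden_size=128, num_layers=1, batch_first=True)

x = torch.randn(4, 25, 200)        # (batch, sequence length, embedding dim)
outputs, (h_n, c_n) = lstm(x)      # outputs: (4, 25, 128); h_n, c_n: (1, 4, 128)
print(outputs.shape, h_n.shape)
```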
we used the maximum entropy approach as a machine learner for this task .
we use the maximum entropy model for our classification task .
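A maximum entropy classifier is equivalent to (multinomial) logistic regression, so one hedged way to sketch the classifiers in the pair above is with scikit-learn; the bag-of-words features and toy data below are purely illustrative.

```python
# A maximum-entropy (multinomial logistic regression) classifier sketch with
# scikit-learn; the bag-of-words features and toy labels are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["great translation quality", "poor fluency and adequacy"]
train_labels = ["pos", "neg"]

maxent = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
maxent.fit(train_texts, train_labels)
print(maxent.predict(["great fluency"]))
```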
hierarchical phrase-based translation was first proposed by chiang .
the scfg formalism was repopularized for statistical machine translation by chiang .
we formalize the problem as submodular function maximization under the budget constraint .
we treat the text summarization problem as maximizing a submodular function under a budget constraint .
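The pair above casts summarization as maximizing a monotone submodular function under a budget constraint. Below is a simplified sketch of the standard cost-scaled greedy heuristic for that problem (the full algorithm in the literature also compares against the best single affordable element); all names are illustrative.

```python
# Greedy heuristic for maximizing a monotone submodular function f under a
# budget constraint (cost-scaled marginal gain), as used in budgeted summarization.

def budgeted_greedy(candidates, f, cost, budget, r=1.0):
    selected, spent = [], 0.0
    remaining = set(candidates)
    while remaining:
        best, best_ratio = None, 0.0
        for c in remaining:
            if spent + cost[c] > budget:
                continue                       # would exceed the budget
            gain = f(selected + [c]) - f(selected)
            ratio = gain / (cost[c] ** r)      # gain per (scaled) cost
            if ratio > best_ratio:
                best, best_ratio = c, ratio
        if best is None:                       # nothing affordable with positive gain
            break
        selected.append(best)
        spent += cost[best]
        remaining.discard(best)
    return selected

# toy usage: f counts how many distinct "concepts" the selected sentences cover
docs = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}
cover = lambda S: len(set().union(*[docs[i] for i in S])) if S else 0
print(budgeted_greedy(list(docs), cover, cost={1: 2, 2: 2, 3: 1}, budget=3))
```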
the knowledge engineering approach has been used in early grammatical error correction systems .
early grammatical error correction systems use the knowledge engineering approach .
word alignment is the process of identifying word-to-word links between parallel sentences .
word alignment is the task of identifying translational relations between words in parallel corpora , in which a word in one language is usually translated into several words in the other language ( fertility model ) ( cite-p-18-1-0 ) .
a prominent approach for cross-lingual document matching is explicit semantic analysis ( esa , cite-p-11-1-7 ) and its cross-lingual extension .
a prominent example for this kind of topic modelling approach is explicit semantic analysis ( esa , cite-p-11-1-7 ) .
in this paper , we exploited a type of word embeddings obtained by feed-forward .
in this paper , we also follow the same approach for word sense disambiguation .
we present a novel , unsupervised , and distance measure agnostic method for search space reduction in spell correction .
in this paper , we proposed a novel , unsupervised , distance-measure agnostic method of search space reduction for spell correction .
we use the popular word2vec tool proposed by mikolov et al to extract the vector representations of words .
we use the perplexity computation method of mikolov et al suitable for skip-gram models .
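Several pairs here train word2vec embeddings. The sketch below uses gensim, a common reimplementation of Mikolov et al.'s tool (it is not the original C tool); parameter names follow gensim 4.x, with sg=1 for skip-gram and sg=0 for CBOW, and the toy corpus is illustrative.

```python
# Training word2vec embeddings with gensim (a reimplementation of Mikolov et
# al.'s tool; parameter names follow gensim 4.x, sg=1 selects skip-gram).
from gensim.models import Word2Vec

sentences = [["we", "use", "word", "embeddings"],
             ["embeddings", "capture", "word", "similarity"]]   # toy tokenized corpus

model = Word2Vec(sentences, vector_size=100, window=5, sg=1, min_count=1, epochs=20)
print(model.wv["word"][:5])                    # vector for "word"
print(model.wv.most_similar("word", topn=3))   # nearest neighbours in the space
```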
coreference resolution is the process of finding discourse entities ( markables ) referring to the same real-world entity or concept .
coreference resolution is a challenging task , that involves identification and clustering of noun phrases mentions that refer to the same real-world entity .
to compensate for this , we apply a strong recurrent neural network language model .
we model the generative architecture with a recurrent language model based on a recurrent neural network .
like pavlopoulos et al , we initialize the word embeddings to glove vectors .
we initialize these word embeddings with glove vectors .
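Initializing word embeddings with GloVe vectors, as in the pair above, usually means filling an embedding matrix from the standard "word v1 v2 ..." text files. A minimal numpy sketch follows; the file path, dimensions, and fallback initialization are illustrative assumptions.

```python
# Building an embedding matrix from pre-trained GloVe vectors in the standard
# "word v1 v2 ..." text format; path, dimension and fallback init are illustrative.
import numpy as np

def load_glove(path, vocab, dim=300):
    """Return a (len(vocab), dim) matrix; unknown words get small random vectors."""
    rng = np.random.default_rng(0)
    matrix = rng.normal(scale=0.1, size=(len(vocab), dim))
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word, values = parts[0], parts[1:]
            if word in vocab and len(values) == dim:
                matrix[vocab[word]] = np.asarray(values, dtype=np.float32)
    return matrix

vocab = {"the": 0, "cat": 1, "sat": 2}
# embeddings = load_glove("glove.840B.300d.txt", vocab)   # file name is an example
```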
for phrase-based smt translation , we used the moses decoder and its support training scripts .
for training the translation model and for decoding we used the moses toolkit .
multiword expressions are lexical items that can be decomposed into single words and display idiosyncratic features .
multiword expressions or mwes can be understood as idiosyncratic interpretations or words with spaces wherein concepts cross the word boundaries or spaces .
keyphrase extraction is a fundamental technique in natural language processing .
keyphrase extraction is a natural language processing task for collecting the main topics of a document into a list of phrases .
the parameter weights are optimized with minimum error rate training .
the weights of the different feature functions were optimised by means of minimum error rate training .
evaluations show that the generated paraphrases almost always follow their target specifications , while paraphrase quality does not significantly deteriorate compared to vanilla .
a combination of automated and human evaluations show that scpn s generate paraphrases that follow their target specifications without decreasing paraphrase quality when compared to baseline ( uncontrolled ) paraphrase systems .
the translation quality is evaluated by case-insensitive bleu and ter metrics using multeval .
translation performances are measured with case-insensitive bleu4 score .
particularly , we used a partitioning algorithm of the cluto library for clustering .
we used the cluto clustering toolkit to induce a hierarchical agglomerative clustering on the vectors for w s .
for word splitting in sub-word units , we use the byte pair encoding tools from the subword-nmt toolkit .
for subword granularity , we use the bpe method with 30k and 32k merge steps .
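The pairs above split words into subword units with byte pair encoding, typically via the subword-nmt toolkit with 10k-32k merge operations. The toy loop below sketches only the core merge-learning idea, not the toolkit itself.

```python
# Minimal sketch of byte-pair-encoding merge learning (the experiments above use
# the subword-nmt toolkit with 10k-32k merges; this toy loop just shows the idea).
from collections import Counter

def learn_bpe(word_freqs, num_merges):
    # each word is a tuple of symbols, initially characters plus an end-of-word marker
    vocab = {tuple(w) + ("</w>",): f for w, f in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)       # most frequent adjacent symbol pair
        merges.append(best)
        merged = {}
        for symbols, freq in vocab.items():    # apply the merge everywhere
            out, i = [], 0
            while i < len(symbols):
                if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            merged[tuple(out)] = freq
        vocab = merged
    return merges

print(learn_bpe({"lower": 5, "lowest": 3, "newer": 6}, num_merges=5))
```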
we used the open source moses phrase-based mt system to test the impact of the preprocessing technique on translation results .
we used the dataset made available by the workshop on statistical machine translation to train a german-english phrase-based system using the moses toolkit in a standard setup .
the decoding weights were optimized with minimum error rate training .
the decoding weights are optimized with minimum error rate training to maximize bleu scores .
han et al propose a graph-based collective concept linking method which can model and exploit the global interdependence between different assignment decisions .
the collective inference algorithm is partially inspired by han et al who propose a graphbased collective entity linking method to model global interdependences among different el decisions .
efficiently , we design a new two player referring expression game ( referitgame ) .
by designing a two player game , we can both collect and verify referring expressions directly within the game .
we present a method for unsupervised topic modelling which allows us to approach both problems simultaneously , inferring a set of topics .
we have presented an unsupervised generative model which allows topic segmentation and identification from unlabelled data .
we use treetagger with the default parameter file for tokenization , lemmatization and annotation of part-of-speech information in the corpus .
we build a vector space from the sdewac corpus , part-of-speech tagged and lemmatized using treetagger .
this type of feature is based on a trigram model with kneser-ney smoothing .
the language model is a 5-gram with interpolation and kneser-ney smoothing .
word sense disambiguation ( wsd ) is a natural language processing ( nlp ) task in which the correct meaning ( sense ) of a word in a given context is to be determined .
word sense disambiguation ( wsd ) is the task of identifying the correct sense of an ambiguous word in a given context .
properly , discriminative models have been shown to outperform generative models .
moreover , incorporating domain knowledge is not straightforward in these generative models .
we use bleu as the metric to evaluate the systems .
we use mira to tune the parameters of the system to maximize bleu .
we perform named entity tagging using the stanford four-class named entity tagger .
we extract named entities using a python wrapper for the stanford ner tool .
the neural embeddings were created using the word2vec software accompanying .
their word embeddings were generated with word2vec , and trained on the arabic gigaword corpus .
evaluation results show that the proposed procedure can achieve competitive performance in terms of bleu score and slot error rate .
the results show that the proposed adaptation recipe improves not only the objective scores but also the user ’ s perceived quality of the system .
we rely for this task on an adaptation of the shared nearest neighbor algorithm described in .
the clustering algorithm that we use is an adaptation of the shared nearest neighbors algorithm presented in .
srilm toolkit is used to build these language models .
furthermore , we train a 5-gram language model using the sri language toolkit .
with this result , we further show that these paraphrases can be used to obtain high precision surface patterns that enable the discovery of relations .
we further show that we can use these paraphrases to generate surface patterns for relation extraction .
long short term memory units are proposed in hochreiter and schmidhuber to overcome this problem .
to solve this problem , hochreiter and schmidhuber introduced the long short-term memory rnn .
sentiment analysis is a natural language processing ( nlp ) task ( cite-p-10-1-14 ) which aims at classifying documents according to the opinion expressed about a given subject ( federici and dragoni , 2016a , b ) .
sentiment analysis is a much-researched area that deals with identification of positive , negative and neutral opinions in text .
the tweets were tokenized and part-of-speech tagged with the cmu ark twitter nlp tool and stanford corenlp .
all tweets were tokenized and pos-tagged using the carnegie mellon university twitter part-of-speech tagger .
barzilay and lee offer an attractive frame work for constructing a context-specific hidden markov model of topic drift .
barzilay and lee propose an account for constraints on topic selection based on probabilistic content models .
script knowledge is a body of knowledge that describes a typical sequence of actions people do in a particular situation ( cite-p-7-1-6 ) .
script knowledge is defined as the knowledge about everyday activities which is mentioned in narrative documents .
word alignment is an essential step in phrase-based statistical machine translation .
word alignment is an important component of a complete statistical machine translation pipeline .
the language model is trained on the target side of the parallel training corpus using srilm .
a 4-gram language model is trained on the xinhua portion of the gigaword corpus with the srilm toolkit .
we use the bic as the splitting criterion , and select the proper number for math-w-2-4-0-104 .
we use the bic as the splitting criterion , and estimate the proper number for math-w-3-1-3-144 .
for the parliament corpus , we have shown that the ape system complements and improves the rbmt system in terms of suitability .
for the parliament corpus , we show that the ape system significantly complements and improves the rbmt system .
it has been empirically shown that word embeddings can capture semantic and syntactic similarities between words .
word embedding models are aimed at learning vector representations of word meaning .
the comparison with voice recognition and a screen keyboard showed that koosho can be a more practical solution .
the comparison with voice recognition and a screen keyboard showed koosho can be a more practical solution compared to the screen keyboard .
the grammar is grounded in the theoretical framework of hpsg and uses minimal recursion semantics for the semantic representation .
the grammar matrix is written within the hpsg framework , using minimal recursion semantics for the semantic representations .
the language model is trained on the target side of the parallel training corpus using srilm .
the language model is a 3-gram language model trained using the srilm toolkit on the english side of the training data .
we use conditional random fields sequence labeling as described in .
for parameter training we use conditional random fields as described in .
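The pair above uses conditional random fields for sequence labelling. One hedged sketch uses the third-party sklearn-crfsuite package for a linear-chain CRF; the feature templates and toy data below are illustrative, not those of the cited work.

```python
# A linear-chain CRF sequence labeller with sklearn-crfsuite (one common Python
# CRF implementation); the feature templates and toy data are illustrative.
import sklearn_crfsuite

def token_features(sent, i):
    word = sent[i]
    return {"word.lower": word.lower(),
            "is_title": word.istitle(),
            "prev_word": sent[i - 1].lower() if i > 0 else "<s>"}

X_train = [[token_features(s, i) for i in range(len(s))]
           for s in [["John", "lives", "in", "Paris"]]]
y_train = [["B-PER", "O", "O", "B-LOC"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, y_train)
print(crf.predict(X_train))
```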
the semantic textual similarity metric sagan is based on a complex textual entailment pipeline .
sagan is a semantic textual similarity metric based on a complex textual entailment pipeline .
we show that our approach is practical , and apply our method to the ace coreference dataset , achieving a 45 % error reduction .
we empirically validate our approach on the ace coreference dataset , showing that the first-order features can lead to a 45 % error reduction .
for smt decoding , we use the moses toolkit with kenlm for language model queries .
we use the moses toolkit with a phrase-based baseline to extract the qe features for the x l , x u , and testing .
our trigram word language model was trained on the target side of the training corpus using the srilm toolkit with modified kneser-ney smoothing .
we trained a 5-gram language model on the english side of each training corpus using the sri language modeling toolkit .
for estimating the monolingual we , we use the cbow algorithm as implemented in the word2vec package using a 5-token window .
for estimating monolingual word vector models , we use the cbow algorithm as implemented in the word2vec package using a 5-token window .
in this work , we present a novel beam-search decoder for grammatical error correction .
we have presented a novel beam-search decoder for grammatical error correction .
for training the model , we use the linear kernel svm implemented in the scikit-learn toolkit .
for the classifiers we use the scikit-learn machine learning toolkit .
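The pair above trains a linear-kernel SVM with scikit-learn. A minimal pipeline sketch follows; the tf-idf features and toy data are illustrative assumptions.

```python
# A linear-kernel SVM text classifier with scikit-learn; the tf-idf features
# and toy training data are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["this translation is fluent", "this output is garbled"]
labels = ["good", "bad"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["fluent output"]))
```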
word alignment , which can be defined as an object for indicating the corresponding words in a parallel text , was first introduced as an intermediate result of statistical translation models ( cite-p-13-1-2 ) .
word alignment is a key component in most statistical machine translation systems .
we perform minimum-error-rate training to tune the feature weights of the translation model to maximize the bleu score on development set .
for minimum error rate tuning , we use nist mt-02 as the development set for the translation task .
we evaluated the system using bleu score on the test set .
we chose the optimal model that achieves the best bleu score over the dev corpus .
this paper proposes a simple yet effective framework for semi-supervised dependency parsing .
this paper proposes a generalized training framework of semi-supervised dependency parsing based on ambiguous labelings .
preparing an aligned abbreviation corpus , we obtain the optimal combination of the features by using the maximum entropy framework .
in this paper , we use the maximum entropy framework to automatically predict the correctness of kbp sf intermediate responses .
then we split the words into subwords by joint bytepair-encoding with 32,000 merge operations .
we use a joint source and target byte-pair encoding with 10k merge operations .
in this paper , we view the task of sms normalization as a translation problem from the sms language to the english language .
in this paper , we present a phrase-based statistical model for sms text normalization .
we evaluate global translation quality with bleu and meteor .
we evaluated translation quality using uncased bleu and ter .
visweswariah et al regarded the preordering problem as a traveling salesman problem and applied tsp solvers for obtaining reordered words .
visweswariah et al and tromble and eisner have considered the source reordering problem to be a problem of learning word reordering from word-aligned data .
relation extraction is the task of finding relational facts in unstructured text and putting them into a structured ( tabularized ) knowledge base .
relation extraction ( re ) is the task of extracting instances of semantic relations between entities in unstructured data such as natural language text .
the semi-supervised approach has been successfully applied to named entity recognition ( cite-p-20-3-4 ) and dependency parsing ( cite-p-20-3-1 ) .
this simple solution has been shown effective for named entity recognition ( cite-p-20-3-4 ) and dependency parsing ( cite-p-20-3-1 ) .
lstms have become more popular after being successfully applied in statistical machine translation .
recently , rnn-based models have been successfully used in machine translation and dialogue systems .
semantic role labeling ( srl ) is a form of shallow semantic parsing whose goal is to discover the predicate-argument structure of each predicate in a given input sentence .
semantic role labeling ( srl ) is the task of identifying semantic arguments of predicates in text .
we adopt essentially the probabilistic tree-adjoining grammar formalism and grammar induction technique of .
we adopt the probabilistic tree-adjoining grammar formalism and grammar induction technique of .
we train a kn-smoothed 5-gram language model on the target side of the parallel training data with srilm .
further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus .
we represent input words using pre-trained glove wikipedia 6b word embeddings .
we use 300d glove vectors trained on 840b tokens as the word embedding input to the lstm .
as in previous works on on-the-fly model estimation for smt , we compute a suffix array for the source corpus .
as in previous works on on-the-fly model estimation for smt , we first build a suffix array for the source corpus .
document summarization is the process of generating a generic or topic-focused summary by reducing documents in size while retaining the main characteristics of original documents ( cite-p-16-1-18 ) .
document summarization is a task to generate a fluent , condensed summary for a document , and keep important information .
we use glove 300-dimension embedding vectors pre-trained on 840 billion tokens of web data .
we use 100-dimension glove vectors which are pre-trained on a large twitter corpus and fine-tuned during training .
coreference resolution is a key problem in natural language understanding that still escapes reliable solutions .
coreference resolution is the task of partitioning a set of entity mentions in a text , where each partition corresponds to some entity in an underlying discourse model .
we train a 5-gram language model with the xinhua portion of english gigaword corpus and the english side of the training set using the srilm toolkit .
we train a trigram language model with modified kneser-ney smoothing from the training dataset using the srilm toolkit , and use the same language model for all three systems .
the target-side language models were estimated using the srilm toolkit .
language models were built with srilm , modified kneser-ney smoothing , default pruning , and order 5 .
under each event , the system can group reader comments into cultural-common discussion topics .
under each event , reader comments are grouped by cultural-common topics .
cue-phrase-based patterns were utilized to collect a large number of discourse instances .
from a raw corpus , a small set of cue-phrase-based patterns were used to collect discourse instances .
our system was submitted as ‘ constrained ’ , which used only the provided training and development data .
tjp was focused on the ‘ constrained ’ task , which used only training and development data provided .