sentence1
stringlengths
16
446
sentence2
stringlengths
14
436
and our method is independent of word frequency , neither limitation applies to this work .
as wiktionary contains all parts of speech and our method is independent of word frequency , neither limitation applies to this work .
lui et al proposed a system for language identification in multilingual documents using a generative mixture model that is based on supervised topic modeling algorithms .
lui et al proposed a system that does language identification in multilingual documents , using a generative mixture model that is based on supervised topic modeling algorithms .
cite-p-17-3-1 also take the grammar constraints into consideration .
cite-p-17-1-6 extend the scheme to frame identification , for which they obtain satisfying results .
for our purpose we use word2vec embeddings trained on a google news dataset and find the pairwise cosine distances for all words .
in this research , we use the pre-trained google news dataset 2 by word2vec algorithms .
we employ the pretrained word vector , glove , to obtain the fixed word embedding of each word .
we use the glove vectors of 300 dimension to represent the input words .
we use the stanford corenlp caseless tagger for part-of-speech tagging .
we use the stanford corenlp shift-reduce parsers for english , german , and french .
relation extraction ( re ) is the task of recognizing the assertion of a particular relationship between two or more entities in text .
relation extraction ( re ) is the task of determining semantic relations between entities mentioned in text .
coreference resolution is a well known clustering task in natural language processing .
coreference resolution is the task of identifying all mentions which refer to the same entity in a document .
we use lstm units as 桅 for our implementation based on its recent success in language processing tasks .
like recent work , we use the lstm variant of recurrent neural networks as language modeling architecture .
as word vectors the authors use word2vec embeddings trained with the skip-gram model .
we use a cws-oriented model modified from the skip-gram model to derive word embeddings .
we adapt the models of mikolov et al and mikolov et al to infer feature embeddings .
as monolingual baselines , we use the skip-gram and cbow methods of mikolov et al as implemented in the gensim package .
long sentences are removed , and the remaining sentences are pos-tagged and dependency parsed using the pre-trained stanford parser .
the syntactic relations are obtained using the constituency and dependency parses from the stanford parser .
we develop a focused web crawling system which collects primarily relevant documents and ignores irrelevant documents .
we create a combined , focused web crawling system that automatically collects relevant documents and minimizes the amount of irrelevant web content .
we train distributional similarity models with word2vec for the source and target side separately .
we use the cnn model with pretrained word embedding for the convolutional layer .
also suggest that the obtained subtree alignment can improve the performance of both phrase and syntax based smt systems .
it is suggested that the subtree alignment benefits both phrase and syntax based systems by relaxing the constraint of the word alignment .
a trigram language model with modified kneser-ney discounting and interpolation was used as produced by the srilm toolkit .
in addition , a 5-gram lm with kneser-ney smoothing and interpolation was built using the srilm toolkit .
lexical chains have lexical cohesion relations such as repetition , synonym , which may range over the entire text .
lexical chains provide a representation of the lexical cohesion structure of a text .
li and hoiem adopted a method to gradually add new capabilities to a multi-task system while preserve the original capabilities .
li and hoiem adopted this method to gradually add new capabilities to a multi-task system .
we used moses , a phrase-based smt toolkit , for training the translation model .
for phrase-based smt translation , we used the moses decoder and its support training scripts .
this study is called morphological analysis .
we assume that a morphological analysis consists of three processes : tokenization , dictionary lookup , and disambiguation .
zhou et al proposed a monolingual phrase-based translation model for question retrieval .
recently , riezler et al and zhou et al proposed a phrase-based translation model for question and answer retrieval .
high quality word embeddings have been proven helpful in many nlp tasks .
word embeddings have proven to be effective models of semantic representation of words in various nlp tasks .
work , we intend to investigate in more detail the contribution of various kinds of words to word association profiles .
we present a study of the relationship between quality of writing and word association profiles .
we trained word vectors with the two architectures included in the word2vec software .
for chinese posts , we trained our word2vec model on our crawled 30m weibo corpus .
in this paper , we propose a gated recursive neural network ( grnn ) .
inspired by grconv , we propose a gated recursive neural network ( grnn ) for sentence modeling .
distributional semantic models are employed to produce semantic representations of words from co-occurrence patterns in texts or documents .
distributional semantic models produce vector representations which capture latent meanings hidden in association of words in documents .
we use sri language modeling toolkit to train a 5-gram language model on the english sentences of fbis corpus .
for language model , we use a trigram language model trained with the srilm toolkit on the english side of the training corpus .
neural networks , working on top of conventional n-gram back-off language models , have been introduced in as a potential means to improve discrete language models .
neural networks , working on top of conventional n-gram back-off language models , have been introduced in as a potential means to improve discrete language models .
for evaluation , we measured the end translation quality with case-sensitive bleu .
we used the bleu score to evaluate the translation accuracy with and without the normalization .
xing et al pre-defined a set of topics from an external corpus to guide the generation of the seq2seq model .
xing et al incorporated the topic information from an external corpus into the seq2seq framework to guide the generation .
we used the enju parser for syntactic parsing .
as the syntactic parser , we used the enju 5 english hpsg parser .
word sense disambiguation ( wsd ) is a problem long recognised in computational linguistics ( yngve 1955 ) and there has been a recent resurgence of interest , including a special issue of this journal devoted to the topic ( cite-p-27-8-11 ) .
word sense disambiguation ( wsd ) is a particular problem of computational linguistics which consists in determining the correct sense for a given ambiguous word .
in order to overcome this , several methods are proposed , including minimally-supervised learning methods , and active learning methods , .
to overcome this problem , unsupervised learning methods using huge unlabeled data to boost the performance of rules learned by small labeled data have been proposed recently .
we show that it is feasible to combine existing parsing speedup techniques with our binarization to achieve even better performance .
we expect that a better binarization will also help improve the efficiency of chart parsing .
nevertheless , studies have shown that a steady change in the linguistic nature and the degree of symptoms in speech and writing are early and could be identified by using language technology analysis .
nevertheless , studies have shown that a steady change in the linguistic nature of the symptoms and the degree in speech and writing are early and could be identified by using language technology analysis .
relation extraction ( re ) is the task of extracting instances of semantic relations between entities in unstructured data such as natural language text .
relation extraction ( re ) is the process of generating structured relation knowledge from unstructured natural language texts .
to compute the statistical significance of the performance differences between qe models , we use paired bootstrap resampling following koehn .
finally , we conduct paired bootstrap sampling to test the significance in bleu scores differences .
if the anaphor is a definite noun phrase and the referent is in focus ( i.e . in the cache ) , anaphora resolution will be hindered .
the anaphor is a pronoun and the referent is in operating memory ( not in focus ) .
for the automatic evaluation , we used the bleu metric from ibm .
additionally , we used bleu , a very popular machine translation evaluation metric , as a feature .
compositional models that explicitly handle extreme cases of lexical ambiguity in a step prior to composition present consistently better performance than their ¡° ambiguous ¡± counterparts ( cite-p-19-3-0 , cite-p-19-3-2 ) .
recent studies have shown that , compared to their co-occurrence counterparts , neural word vectors reflect better the semantic relationships between words ( cite-p-19-1-0 ) and are more effective in compositional settings ( cite-p-19-3-9 ) .
to the metagrammar , the engineer can automatically generate versions of the grammar containing different combinations of previous analyses .
this metagrammar can generate all possible combinations of these analyses automatically , creating different versions of a grammar that cover the same phenomena .
we enrich the content of microblogs by inferring the association between microblogs and external words .
we first build an optimization model to infer the topics of microblogs by employing the topic-word distribution of the external knowledge .
long short-term memory is a special type of rnn that leverages multiple gate vectors and a memory cell vector to solve the vanishing and exploding gradient problems of training rnns .
long short-term memory is an rnn architecture specifically designed to address the vanishing gradient and exploding gradient problems .
models are evaluated in terms of bleu , meteor and ter on tokenized , cased test data .
the various models developed are evaluated using bleu and nist .
the srilm toolkit was used for training the language models using kneser-ney smoothing .
language models were built with srilm , modified kneser-ney smoothing , default pruning , and order 5 .
data-to-text generation refers to the task of automatically generating text from non-linguistic data .
concept-to-text generation broadly refers to the task of automatically producing textual output from nonlinguistic input .
in this case , target side parse trees could also be used alone or together with the source side parse trees .
in this case , target side parse trees could also be used alone or together with the source side parse trees to induce the latent syntactic categories .
dependency parsing is a topic that has engendered increasing interest in recent years .
dependency parsing is a valuable form of syntactic processing for nlp applications due to its transparent lexicalized representation and robustness with respect to flexible word order languages .
a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit .
a 4-gram language model is trained on the monolingual data by srilm toolkit .
we used moses , a phrase-based smt toolkit , for training the translation model .
for all submissions , we used the phrase-based variant of the moses decoder .
notion of the method is to identify ues and ses based on the occurrence probability in the written and spoken language corpora which are automatically collected from the web .
the key notion of the method is to distinguish ues and ses based on the occurrence probability in written and spoken language corpora which are automatically collected from the web .
in order to reduce the amount of annotated data to train a dependency parser , koo et al used word clusters computed from unlabelled data as features for training a parser .
koo et al and suzuki et al use unsupervised wordclusters as features in a dependency parser to get lexical dependencies .
we use mt02 as the development set 4 for minimum error rate training .
we used minimum error rate training for tuning on the development set .
results and analyses show that our approach is more robust to adversarial inputs .
in addition , we showed that our approach is more robust to adversarial inputs .
neural networks , working on top of conventional n-gram back-off language models , have been introduced in as a potential means to improve discrete language models .
neural networks , working on top of conventional n-gram back-off language models , have been introduced in as a potential means to improve discrete language models .
word embedding has been extensively studied in recent years .
there has been a line of research on learning word embeddings via nnlms .
researchers have developed framenet , a large lexical database of english that comes with sentences annotated with semantic frames .
the berkeley framenet project aims at creating a human and machine-readable lexical database of english , supported by corpus evidence annotated in terms of frame semantics .
the language models used were 7-gram srilm with kneser-ney smoothing and linear interpolation .
the language model was constructed using the srilm toolkit with interpolated kneser-ney discounting .
we used the srilm toolkit to build unpruned 5-gram models using interpolated modified kneser-ney smoothing .
we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting .
relation extraction is the task of automatically detecting occurrences of expressed relations between entities in a text and structuring the detected information in a tabularized form .
relation extraction is the task of finding semantic relations between two entities from text .
phoneme connectivity table supports the grammaticality of the adjacency of two phonetic morphemes .
the phoneme connectivity table supports grammaticality checking of the adjacent two phonetic morphemes .
in this paper , we propose a reinforcement learning based framework of dialogue system for automatic diagnosis .
in this paper , we make a move to build a dialogue system for automatic diagnosis .
riedel et al present an approach for extracting bio-molecular events and their arguments using markov logic .
riedel et al use markov logic to model interactions between event-argument relations for biomedical event extraction .
takamura et al propose using spin models for extracting semantic orientation of words .
takamura et al construct a word graph with the gloss of wordnet .
we use a list of such connectives compiled by and study the statistics of our corpus to discover the discourse relations .
we use lists of discourse markers compiled from the penn discourse treebank and from to identify such markers in the text .
word alignment is a fundamental problem in statistical machine translation .
word alignment is a critical first step for building statistical machine translation systems .
expert search shows promising improvement .
the evaluation shows promising expert search results .
performance on different tasks , however , the main interest of this paper was a comparison of convergence speed across different objectives .
the focus of this paper is on an experimental evaluation of the empirical performance and convergence speed of the different algorithms .
one is constructions expressing a cause-effect relation , and the other is semantic information in a text , such as word pair probability .
one is patterns or constructions expressing a cause-effect relation , and the other is semantic information underlying in a text , such as word pair probability .
we propose a lexicon-based approach that examines the consistency of bilingual subjectivity , sentiment .
we propose to explicitly model the consistency of sentiment between the source and target side with a lexicon-based approach .
we use the pre-trained word2vec embeddings provided by mikolov et al as model input .
we pre-train the word embedding via word2vec on the whole dataset .
coreference resolution is a key task in natural language processing ( cite-p-13-1-8 ) aiming to detect the referential expressions ( mentions ) in a text that point to the same entity .
coreference resolution is a field in which major progress has been made in the last decade .
a template is a structure , based on slots for three semantic formulas that can themselves have dependent formulas , such that the whole structure represents a possible message .
this template is a specialist which picks out the information from the task object to be included in the mission paragraph .
n-gram features were based on language models of order 5 , built with the srilm toolkit on monolingual training material from the europarl and the news corpora .
the trigram models were created using the srilm toolkit on the standard training sections of the ccgbank , with sentence-initial words uncapitalized .
taglda is a representative latent topic model by extending latent dirichlet allocation .
mallet uses latent dirichlet allocation to produce a topic distribution over any given text .
our experiments were conducted on two datasets : the publicly available microsoft research paraphrasing corpus ( cite-p-15-3-2 ) and a dataset that we constructed from the mtc corpus .
we experimented with a maximum entropy classifier on two datasets ; the publicly available msr corpus and one that we constructed from the mtc corpus .
the models are trained with support vector machines as implemented in weka .
this baseline is based on dkpro tc and relies on support vector classification using weka .
in this paper , we aim to construct a unified model of topics , events and users .
in this paper , we propose a unified model to study topics , events and users jointly .
although the itg constraint allows more flexible reordering during decoding , zens and ney showed that the ibm constraint results in higher bleu scores .
zens and ney show that itg constraints yield significantly better alignment coverage than the constraints used in ibm statistical machine translation models on both german-english and french-english .
using a different approach , blitzer et al induces correspondences between feature spaces in different domains , by detecting pivot features .
blitzer et al apply structural correspondence learning for learning pivot features to increase accuracy in the target domain .
they then extend their work by applying the page rank algorithm to ranking the wordnet senses in terms of how strongly a sense possesses a given semantic property .
they then extend their work by applying the page rank algorithm to rank the wordnet senses in terms of how strongly a sense possesses a given semantic property .
recently a couple of methods of automatic translation error analysis have emerged .
recently a couple of methods of automatic analysis of translation errors have been described .
semantic role labeling ( srl ) is a kind of shallow semantic parsing task and its goal is to recognize some related phrases and assign a joint structure ( who did what to whom , when , where , why , how ) to each predicate of a sentence ( cite-p-24-3-4 ) .
semantic role labeling ( srl ) consists of finding the arguments of a predicate and labeling them with semantic roles ( cite-p-9-1-5 , cite-p-9-3-0 ) .
bansal et al show the benefits of such modified-context embeddings in dependency parsing task .
in bansal et al , better word embeddings for dependency parsing were obtained by using a corpus created to capture dependency context .
and it is well-known that readers are more likely to fixate on words from open syntactic categories ( verbs , nouns , adjectives ) than on closed category items .
it is well-known that readers are less likely to fixate their gaze on closed class syntactic categories such as prepositions and pronouns .
the third feature type is based on the politeness theory .
this can be partly explained by the politeness theory .
princeton wordnet is an english lexical database that groups nouns , verbs , adjectives and adverbs into sets of cognitive synonyms , which are named as synsets .
princeton wordnet 1 is an english lexical database that groups nouns , verbs , adjectives and adverbs into sets of cognitive synonyms , which are named as synsets .
we use the moses toolkit to train various statistical machine translation systems .
we used the moses toolkit for performing statistical machine translation .
to date , most accurate wsd systems are supervised and rely on the availability of training data .
most accurate wsd systems to date are supervised and rely on the availability of training data .
more importantly , event coreference resolution is a necessary component in any reasonable , broadly applicable computational model of natural language understanding ( cite-p-18-3-4 ) .
event coreference resolution is the task of determining which event mentions expressed in language refer to the same real-world event instances .
model fitting for our model is based on the expectation-maximization algorithm .
in this work , we use the expectation-maximization algorithm .
in order to reduce the vocabulary size , we apply byte pair encoding .
we use byte pair encoding with 45k merge operations to split words into subwords .
we compute statistical significance using the approximate randomization test .
to compute statistical significance , we use the approximate randomization test .
our phrase-based mt system is trained by moses with standard parameters settings .
in all submitted systems , we use the phrase-based moses decoder .
we used the 300-dimensional glove word embeddings learned from 840 billion tokens in the web crawl data , as general word embeddings .
we use 300 dimensional glove embeddings trained on the common crawl 840b tokens dataset , which remain fixed during training .
word representations have shown to outperform methods that use only local co-occurrences ( cite-p-12-3-7 , cite-p-12-3-20 ) .
distributed word representations have gained much popularity lately because of their accuracy as semantic representations for words ( cite-p-12-3-12 , cite-p-12-3-20 ) .
by utilizing the sub-labels , we gain significant improvement in model accuracy .
we propose improving tagging accuracy by utilizing dependencies within subcomponents of the fine-grained labels .
as a baseline system for our experiments we use the syntax-based component of the moses toolkit .
we use the opensource moses toolkit to build a phrase-based smt system .
arabic is a highly inflectional language with 85 % of words derived from trilateral roots ( alfedaghi and al-anzi 1989 ) .
arabic is a morphologically complex language .
we experiment with a machine learning strategy to model multilingual coreference for the conll-2012 shared task .
we train and evaluate our model on the english corpus of the conll-2012 shared task .