sentence1 (string, lengths 16-446) · sentence2 (string, lengths 14-436)
we use the stanford part-of-speech tagger to automatically detect nouns from text .
we use the stanford log-linear part-of-speech tagger to produce pos tags for the english side .
as a baseline system for our experiments we use the syntax-based component of the moses toolkit .
our implementation of the segment-based imt protocol is based on the moses toolkit .
we use latent dirichlet allocation , or lda , to obtain a topic distribution over conversations .
an effective strategy to cluster words into topics , is latent dirichlet allocation .
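Neither sentence names a toolkit; a minimal sketch of obtaining per-document topic distributions with scikit-learn's LDA (an assumed implementation, used here only for illustration, with a toy corpus and topic count) might look like:

```python
# Infer a per-document topic distribution with LDA; corpus is a placeholder.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["the game was close until the final minute",
        "the election results were announced last night",
        "the team scored twice in the second half"]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # rows: documents, cols: topic proportions
print(doc_topics)
```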
in this paper , we present an unsupervised dynamic bayesian model that allows us to model speech style accommodation .
in this paper we presented an unsupervised dynamic bayesian modeling approach to modeling speech style accommodation in face-to-face interactions .
coreference resolution is the process of linking together multiple referring expressions of a given entity in the world .
coreference resolution is the next step on the way towards discourse understanding .
our evaluation on two wordnet-derived taxonomies shows that the learned taxonomies capture a higher number of correct taxonomic relations compared to those produced by traditional distributional similarity approaches .
we evaluate our method on two wordnet-derived subtaxonomies and show that our method leads to the development of concept hierarchies that capture a higher number of correct taxonomic relations in comparison to those generated by current distributional similarity approaches .
for training our system classifier , we have used scikit-learn .
we implemented the different aes models using scikit-learn .
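A minimal sketch of the kind of scikit-learn training these sentences describe; the linear SVM, toy features, and labels are illustrative assumptions, not the authors' setup:

```python
# Train and evaluate a simple scikit-learn classifier on toy data.
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

X = [[0.1, 0.7], [0.9, 0.2], [0.2, 0.8], [0.8, 0.1]]
y = [0, 1, 0, 1]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LinearSVC().fit(X_train, y_train)
print(clf.score(X_test, y_test))  # held-out accuracy
```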
we trained a trigram language model on the chinese side , with the srilm toolkit , using the modified kneser-ney smoothing option .
we trained a 4-gram language model on this data with kneser-ney discounting using srilm .
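SRILM is a command-line toolkit; one way to build the kind of Kneser-Ney-smoothed model described above from Python is via subprocess. The flags are standard ngram-count options; the file paths are placeholders and SRILM is assumed to be on PATH:

```python
# Build an ARPA-format n-gram LM with SRILM's ngram-count.
import subprocess

subprocess.run([
    "ngram-count",
    "-order", "3",            # trigram model
    "-text", "train.zh.txt",  # tokenized training corpus (placeholder path)
    "-lm", "zh.trigram.lm",   # output ARPA-format model (placeholder path)
    "-kndiscount",            # modified Kneser-Ney discounting
    "-interpolate",           # interpolate higher- and lower-order estimates
], check=True)
```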
the word embeddings are pre-trained by skip-gram .
all word vectors are trained on the skip-gram architecture .
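A minimal sketch of skip-gram pre-training with gensim's word2vec implementation (an assumed toolkit; corpus and dimensionality are placeholders, gensim >= 4.0 API):

```python
# Pre-train skip-gram word embeddings on a toy corpus.
from gensim.models import Word2Vec

sentences = [["the", "cat", "sat"], ["the", "dog", "barked"]]
model = Word2Vec(sentences, vector_size=100, window=5,
                 min_count=1, sg=1)  # sg=1 selects the skip-gram architecture
vector = model.wv["cat"]             # 100-dimensional embedding
```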
our dataset and parser can be found at http://www .
the dataset and parser can be found at http://www .
parsing is a computationally intensive task due to the combinatorial explosion seen in chart parsing algorithms that explore possible parse trees .
parsing is the process of mapping sentences to their syntactic representations .
through the method , various kinds of collocations induced by key strings are retrieved .
through the method , a wide range of collocations which are frequently used in a specific domain are retrieved automatically .
mcclosky et al use self-training in combination with a pcfg parser and reranking .
mcclosky et al , 2006 , present a successful instance of parsing with self-training by using a re-ranker .
our model is a first order linear chain conditional random field .
our model is a structured conditional random field .
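The sentences do not name an implementation; one widely used first-order linear-chain CRF is sklearn-crfsuite, sketched here on toy token features:

```python
# Fit a linear-chain CRF on one toy sequence of per-token feature dicts.
import sklearn_crfsuite

X_train = [[{"word": "john", "is_cap": True},
            {"word": "runs", "is_cap": False}]]
y_train = [["PER", "O"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=100)
crf.fit(X_train, y_train)
print(crf.predict(X_train))
```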
rating scales are common in psychology and related fields , but such studies hardly exist in nlp .
studies assessing rating scales are very common in psychology and related fields , but are rare in nlp .
we used data from the conll-x shared task on multilingual dependency parsing .
for monolingual treebank data we relied on the conll-x and conll-2007 shared tasks on dependency parsing .
to train the link embeddings , we use the speedy , skip-gram neural language model of mikolov et al via their toolkit word2vec .
we use the publicly available 300-dimensional word vectors of mikolov et al , trained on part of the google news dataset .
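Loading the publicly available 300-dimensional Google News vectors of Mikolov et al. with gensim, assuming the binary file has been downloaded locally:

```python
# Load pre-trained word2vec vectors and query nearest neighbours.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)  # placeholder path
print(vectors.most_similar("king", topn=3))
```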
huang et al use hand-crafted features with lstms to improve performance .
huang et al exploit bilstm to extract features and feed them into crf decoder .
when trained with the fixed order strategy , the model performs better if the same strategy is used for evaluation .
in contrast , when the model is trained with the fixed order strategy , it performs better if the same strategy is used for evaluation .
we use 300d glove vectors trained on 840b tokens as the word embedding input to the lstm .
we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training .
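A minimal sketch of reading GloVe text-format vectors into a lookup table from which an embedding layer could be initialized; the path is a placeholder, and the naive parsing assumes one space-free token per line:

```python
# Read GloVe vectors (text format) into a word -> numpy array mapping.
import numpy as np

embeddings = {}
with open("glove.840B.300d.txt", encoding="utf-8") as f:  # placeholder path
    for line in f:
        parts = line.rstrip().split(" ")
        embeddings[parts[0]] = np.asarray(parts[1:], dtype=np.float32)

print(embeddings["the"].shape)  # (300,)
```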
we use the wrapper of the scikit-learn python library over the liblinear logistic regression implementation .
we used a logistic regression classifier provided by the liblinear software .
we use the svm implementation available in the liblinear package .
we use the logistic regression implementation of liblinear wrapped by the scikit-learn library .
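scikit-learn exposes liblinear directly as a solver, matching the wrapping described above; the data is a toy placeholder:

```python
# Logistic regression backed by the liblinear solver.
from sklearn.linear_model import LogisticRegression

X = [[0.0, 1.0], [1.0, 0.0], [0.1, 0.9], [0.9, 0.1]]
y = [0, 1, 0, 1]
clf = LogisticRegression(solver="liblinear").fit(X, y)
print(clf.predict([[0.2, 0.8]]))
```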
grammars can significantly increase the diversity of base models , which plays a central role in parser ensembles , and therefore lead to better and more promising hybrid systems .
moreover , pseudo grammars increase the diversity of base models ; therefore , together with all other models , they further improve system combination .
we then used word2vec to train word embeddings with 512 dimensions on each of the prepared corpora .
since our dataset is not so large , we make use of pre-trained word embeddings , which are trained on a much larger corpus with word2vec toolkit .
coreference resolution is a key task in natural language processing ( cite-p-13-1-8 ) aiming to detect the referential expressions ( mentions ) in a text that point to the same entity .
coreference resolution is the task of identifying all mentions which refer to the same entity in a document .
in this paper we investigate the use of character-level translation models to support the translation from and to under-resourced languages .
in this paper , we have discussed possibilities to translate via pivot languages on the character level .
we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting .
we also use a 4-gram language model trained using srilm with kneser-ney smoothing .
the newer method of latent semantic indexing is a variant of the vsm in which documents are represented in a lower dimensional space created from the input training dataset .
latent semantic indexing is a variant of the vsm in which documents are represented in a lower dimensional vector space created from a training dataset .
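LSI amounts to a truncated SVD of the (weighted) term-document matrix; a small sketch with scikit-learn (an assumed implementation) on a toy corpus:

```python
# Map documents into a 2-dimensional LSI space via truncated SVD on TF-IDF.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = ["cats chase mice", "dogs chase cats", "stocks rose sharply today"]
tfidf = TfidfVectorizer().fit_transform(docs)
lsi = TruncatedSVD(n_components=2, random_state=0)
doc_vectors = lsi.fit_transform(tfidf)  # documents in the reduced space
print(doc_vectors.shape)                # (3, 2)
```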
sentiment analysis is the task of identifying positive and negative opinions , sentiments , emotions and attitudes expressed in text .
sentiment analysis is a collection of methods and algorithms used to infer and measure affection expressed by a writer .
pcfg parsing features are generated on the output of the berkeley parser trained over an english , a german and a spanish treebank .
pcfg parsing features were generated on the output of the berkeley parser , trained over an english and a spanish treebank .
table 1 shows an example item designed for teaching english .
see table 1 for the item produced from the bottom sentence .
all the feature weights and the weight for each probability factor are tuned on the development set with minimum-error-rate training .
the feature weights for the log-linear combination of the features are tuned using minimum error rate training on the devset in terms of bleu .
in § 3 , we describe our approach to paraphrase identification .
in §3 , we describe our approach to paraphrase identification using mt metrics as features .
the penn treebank is perhaps the most influential resource in natural language processing .
the penn treebank is an example of such a resource with worldwide impact on natural language processing .
bilingual data are critical resources for building many applications , such as machine translation and cross language information retrieval .
parallel bilingual corpora are critical resources for statistical machine translation , and cross-lingual information retrieval .
table 1 shows the translation performance by bleu .
the bleu score for all the methods is summarised in table 5 .
the lexicalized reordering models have become the de facto standard in modern phrase-based systems .
among them , lexicalized reordering models have been widely used in practical phrase-based systems .
coreference resolution is the task of determining whether two or more noun phrases refer to the same entity in a text .
coreference resolution is the task of clustering referring expressions in a text so that each resulting cluster represents an entity .
we demonstrate that concept drift is a real , pervasive issue for learning from data .
we demonstrate that concept drift is an important consideration .
we use hmm alignments along with higher quality alignments from a supervised aligner .
to extract phrases we use hmm alignments along with higher quality alignments from a supervised aligner .
to train our neural algorithm , we apply word embeddings of a look-up from 100-d glove pre-trained on wikipedia and gigaword .
we use 100-dimension glove vectors which are pre-trained on a large twitter corpus and fine-tuned during training .
in recent years , the research community has noticed the great success of neural networks in computer vision , speech recognition and natural language processing tasks .
recently , with the development of neural networks , deep learning based models have attracted much attention in various tasks .
table 4 shows the feature templates of our parser , most of which are based on those of zhang and nivre .
our baseline parser uses the feature set described by zhang and nivre .
for any pcfg $G$ , there are equivalent ppdts .
a pcfg is proper if $\sum_{\alpha} p(A \rightarrow \alpha) = 1$ for each nonterminal $A$ .
in this paper , we introduced a framework of matrix co-factorization .
in this paper , we address the above challenges with a framework of matrix co-factorization .
word sense disambiguation ( wsd ) is the process of assigning a meaning to a word based on the context in which it occurs .
word sense disambiguation ( wsd ) is the process of determining which sense of a homograph is used in a given context .
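For illustration only (not the method these sentences describe), NLTK ships the classic Lesk baseline for assigning a sense in context; the wordnet corpus must be downloaded first:

```python
# Disambiguate "bank" in context with the Lesk algorithm.
# Requires: nltk.download("wordnet")
from nltk.wsd import lesk

sense = lesk("I went to the bank to deposit money".split(), "bank")
print(sense, "-", sense.definition() if sense else "no sense found")
```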
we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the srilm toolkit .
we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing .
our approach combines the advantages of nmt and smt efficiently .
it is therefore a promising direction to combine the advantages of both nmt and smt .
event coreference resolution is the task of identifying event mentions and clustering them such that each cluster represents a unique real world event .
event coreference resolution is the task of determining which event mentions in a text refer to the same real-world event .
a method based on singular value decomposition provides an efficient and exact solution to this problem .
fortunately , a method based on singular value decomposition provides an efficient and exact solution to this problem .
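A small numpy illustration of the point: the truncated SVD gives the best rank-k approximation of a matrix in the least-squares sense:

```python
# Best rank-1 approximation of a small matrix via SVD.
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 3.0], [0.0, 2.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 1
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # rank-k reconstruction
print(np.linalg.norm(A - A_k))               # reconstruction error
```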
the input to the network is the embeddings of words , and we use word embeddings pre-trained with word2vec on the wikipedia corpus , whose size is over 11g .
thus , we pre-train the embeddings on a huge unlabeled data , the chinese wikipedia corpus , with word2vec toolkit .
in addition , we use an english corpus of roughly 227 million words to build a target-side 5-gram language model with srilm in combination with kenlm .
we used kenlm with srilm to train a 5-gram language model based on all available target language training data .
at evaluation time , a correction is proposed only when the confidence of the classifier is high enough , but the article can not be used in training .
cite-p-16-1-11 use the source article at evaluation time and propose a correction only when the score of the classifier is high enough , but the source article is not used in training .
socher et al learned vector space representations for multi-word phrases using recursive autoencoders for the task of sentiment analysis .
socher et al and socher et al present a framework based on recursive neural networks that learns vector space representations for multi-word phrases and sentences .
in both pre-training and fine-tuning , we adopt adagrad and l2 regularizer for optimization .
we use stochastic gradient descent with adagrad , l2 regularization and minibatch training .
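A minimal sketch of AdaGrad with L2 regularization and mini-batch training in PyTorch (an assumed framework; the sentences do not specify one, and the model and batch are toy placeholders):

```python
# One mini-batch step of AdaGrad with an L2 penalty (weight_decay).
import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01,
                                weight_decay=1e-4)  # L2 regularization

x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))  # one mini-batch
loss = torch.nn.functional.cross_entropy(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```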
we use opennmt to train the nmt models discussed in this paper .
we use an nmt-small model from the opennmt framework for the neural translation .
as for the boundary detection problem , we use the windowdiff and p_k metrics .
we evaluate using the standard penalty metrics p_k and windowdiff .
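Both segmentation metrics are available in NLTK; segmentations are given as boundary strings, and k is typically set to about half the mean segment length (toy values below):

```python
# Compare a hypothesized segmentation against a reference with Pk / WindowDiff.
from nltk.metrics.segmentation import pk, windowdiff

ref = "0100100"  # reference boundary string ('1' marks a boundary)
hyp = "0010100"  # hypothesized boundary string
print(pk(ref, hyp, k=2), windowdiff(ref, hyp, k=2))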
we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit .
we implemented this model using the srilm toolkit with the modified kneser-ney discounting and interpolation options .
coreference resolution is a fundamental component of natural language processing ( nlp ) and has been widely applied in other nlp tasks ( cite-p-15-3-9 ) .
coreference resolution is a well known clustering task in natural language processing .
within each relation , ctransr clusters diverse head-tail entity pairs into groups and sets a relation vector for each group .
ctransr is an extension of transr by clustering diverse head-tail entity pairs into groups and learning distinct relation vectors for each group .
the srilm toolkit was used to build this language model .
the model was built using the srilm toolkit with backoff and good-turing smoothing .
to rerank the candidate texts , we used a 5-gram language model trained on the europarl corpus using kenlm .
for building our ap e b2 system , we set a maximum phrase length of 7 for the translation model , and a 5-gram language model was trained using kenlm .
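KenLM ships Python bindings for querying a trained model; the ARPA path below is a placeholder for a model built as described above:

```python
# Score a sentence with a trained KenLM model (log10 probability).
import kenlm

model = kenlm.Model("lm.arpa")  # placeholder path to a trained model
print(model.score("this is a test sentence", bos=True, eos=True))
```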
ritter et al learn conversation-specific language models to filter out content words .
ritter et al study twitter dialogues using a clustering approach .
predicates like endocytosis , exocytosis and translocate , though common in biomedical text , are absent from both the framenet and propbank data .
predicates like endocytosis and translocate , though common in biomedical text , are absent from both the framenet and propbank data .
the lstm word embeddings are initialized with 100-dim embeddings from glove and fine-tuned during training .
the word embeddings are initialized using the pre-trained glove , and the embedding size is 300 .
turney used mutual information to choose the best answer to questions about near-synonyms in the test of english as a foreign language and english as a second language .
turney used mutual information to detect the best answer to questions about synonyms from test of english as a foreign language and english as a second language .
we used minimum error rate training mert for tuning the feature weights .
we use our reordering model for n-best re-ranking and optimize bleu using minimum error rate training .
we use the same features as in the first-order model implemented in the mstparser system for syntactic dependency parsing .
we use the arc-based features of turboparser , which descend from several other feature models from the literature on syntactic dependency parsing .
for all experiments , we used a 4-gram language model with modified kneser-ney smoothing which was trained with the srilm toolkit .
we used a 4-gram language model which was trained on the xinhua section of the english gigaword corpus using the srilm toolkit with modified kneser-ney smoothing .
information extraction ( ie ) is a core nlp task for analyzing scientific papers , which includes named entity recognition ( ner ) and relation extraction ( re ) .
information extraction ( ie ) is the process of identifying events or actions of interest and their participating entities from a text .
the translation quality is evaluated by case-insensitive bleu-4 metric .
translation quality is evaluated by case-insensitive bleu-4 metric .
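One standard way to compute case-insensitive BLEU is sacrebleu with lowercasing enabled (an assumed tool; the hypothesis and reference are toy placeholders):

```python
# Corpus-level case-insensitive BLEU with sacrebleu.
import sacrebleu

hyps = ["The cat is on the mat"]
refs = [["the cat sat on the mat"]]  # one reference stream
bleu = sacrebleu.corpus_bleu(hyps, refs, lowercase=True)
print(bleu.score)
```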
coreference resolution is the process of linking together multiple referring expressions of a given entity in the world .
coreference resolution is the task of partitioning a set of entity mentions in a text , where each partition corresponds to some entity in an underlying discourse model .
poon and domingos present an unsupervised semantic parsing approach to partition dependency trees into meaningful fragments .
poon and domingos proposed a model for unsupervised semantic parsing that transforms dependency trees into semantic representations using markov logic .
maltparser is a language-independent system for data-driven dependency parsing , based on a transition-based parsing model .
maltparser is a data-driven parser-generator , which can induce a dependency parser from a treebank , and which supports several parsing algorithms and learning algorithms .
we used 300-dimensional pre-trained glove word embeddings .
we use theano and pretrained glove word embeddings .
a 5-gram language model with kneser-ney smoothing is trained using srilm on the target language .
the language model is a trigram model with modified kneser-ney discounting and interpolation .
evaluation shows that our sentence extraction method performs better than a baseline of taking the sentence with the strongest sentiment .
we showed that our keyphrase-based system performs better than a baseline of extracting the sentence with the highest sentiment score .
we use the official rouge tool to evaluate the performance of the baselines as well as our approach .
we evaluate our models with the standard rouge metric and obtain rouge scores using the pyrouge package .
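pyrouge wraps the original Perl ROUGE scorer; a lighter-weight alternative exposing the same metrics is Google's rouge-score package, shown here purely as an illustration:

```python
# ROUGE-1/2/L between a toy reference and hypothesis.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)
scores = scorer.score("the cat sat on the mat",   # reference
                      "the cat was on the mat")   # hypothesis
print(scores["rougeL"].fmeasure)
```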
we have designed a generalized ie system that allows utilizing any tagging strategy .
we have also introduced a new tagging strategy , bia ( begin/after tagging ) .
we also use mini-batch adagrad for optimization and apply dropout .
we apply online training , where model parameters are optimized by using adagrad .
we use a pbsmt model built with the moses smt toolkit .
we use the moses software package to train a pbmt model .
we argue that crime drama exemplified in television programs such as csi : crime scene investigation is an ideal testbed for approximating real-world natural language understanding and the complex inferences associated with it .
specifically , we argue that crime drama exemplified in television programs such as csi : crime scene investigation can be used to approximate real-world natural language understanding and the complex inferences associated with it .
collobert et al adjust the feature embeddings according to the specific task in a deep neural network architecture .
collobert et al use a convolutional neural network over the sequence of word embeddings .
sentiment analysis is the task of identifying the polarity ( positive , negative or neutral ) of a review .
sentiment analysis is a field of study that investigates feelings present in texts .
temporal importance weighting worked very well with textrank .
temporal importance weighting offers consistent improvements over baseline systems .
we build an open-vocabulary language model with kneser-ney smoothing using the srilm toolkit .
in the case of the trigram model , we expand the lattice with the aid of the srilm toolkit .
the resulting constituent parse trees were converted into stanford dependency graphs .
the resulting phrase structures were then converted into dependency structures with the stanford conversion tool .
translating a source language training corpus into the target language and creating a corpus-based system in the target language .
translating test sentences in the target language into the source language and inputting them into a source language system .
in philosophy and linguistics , it is accepted that negation conveys positive meaning .
in philosophy and linguistics , it is generally accepted that negation conveys positive meaning .
we propose a data-driven approach for generating short children ’ s stories that does not require extensive manual involvement .
we propose a data-driven approach to story generation that does not require extensive manual involvement .
even worse , mikros and argiri showed that many features besides ngrams are significantly correlated with topic , including sentence and token length , readability measures , and word length distributions .
mikros and argiri have shown that many features besides ngrams are significantly correlated with topic , including sentence and token length , readability measures , and word length distributions .
we see that our estimator compares favorably with the best estimator of vocabulary size .
in this work , we propose a novel nonparametric estimator of vocabulary size .
we employ the trick proposed by blitzer et al to select pivot features to be reconstructed .
following blitzer et al , we consider pivot features that appear more than 50 times in all the domains .
coreference resolution is the task of determining whether two or more noun phrases refer to the same entity in a text .
coreference resolution is the next step on the way towards discourse understanding .
ratinov and roth and turian et al also explored this approach for name tagging .
turian et al applied this method to both named entity recognition and text chunking .
as word vectors the authors use word2vec embeddings trained with the skip-gram model .
to train the link embeddings , we use the speedy , skip-gram neural language model of mikolov et al via their toolkit word2vec .
for all data sets , we trained a 5-gram language model using the sri language modeling toolkit .
we trained a 5-gram language model on the english side of each training corpus using the sri language modeling toolkit .
transliteration is a key building block for multilingual and cross-lingual nlp since it is useful for user-friendly input methods and applications like machine translation and cross-lingual information retrieval .
transliteration is the conversion of a text from one script to another .
the language model is trained with the sri lm toolkit , on all the available french data without the ted data .
the target language model is trained by the sri language modeling toolkit on the news monolingual corpus .
the language model is trained with the sri lm toolkit , on all the available french data without the ted data .
it has been trained with the srilm toolkit on the target side of all the training data .