| sentence1 (stringlengths 16–446) | sentence2 (stringlengths 14–436) |
|---|---|
| le and mikolov extended the word embedding learning model by incorporating paragraph information . | le and mikolov introduced a distributed memory model with paragraph vectors . |
| note that we used long short-term memory instead of gated recurrent unit for each recurrent neural network unit of the model . | in addition to the svm classifier , we trained a recurrent neural classifier in parallel using both long short-term memory and gated recurrent unit cells . |
| groups of words which co-occur across many documents with a given emotion are highly probable to express the same emotion . | our hypothesis is that words which tend to co-occur across many documents with a given emotion are highly probable to express this emotion . |
| a back-off 2-gram model with good-turing discounting and no lexical classes was also created from the training set , using the srilm toolkit . | the language model was constructed using the srilm toolkit with interpolated kneser-ney discounting . |
| performance is measured in terms of perplexity . | the performance is measured in terms of character error rate ( cer ) . |
| a context-free grammar ( cfg ) is a 4-tuple ( n , σ , p , s ) , where n is the set of nonterminals , σ the set of terminals , p the set of production rules and s a set of starting nonterminals ( i.e . multiple starting nonterminals are possible ) . | a context-free grammar ( cfg ) is a tuple ( vn , vt , p , s ) , where vn and vt are finite , disjoint sets of nonterminal and terminal symbols , respectively , and s ∈ vn is the start symbol . |
| semantic role labeling ( srl ) is the task of identifying the semantic arguments of a predicate and labeling them with their semantic roles . | semantic role labeling ( srl ) is a task of automatically identifying semantic relations between a predicate and its related arguments in the sentence . |
| shen et al , 2008 ; shen et al , 2009 proposed a way to integrate dependency structure into target and source side string on hierarchical phrase rules . | shen et al , 2008 ; shen et al , 2009 proposed a string-to-dependency language model to capture long-distance word order . |
| there are a number of excellent textbook presentations of hidden markov models , so we do not present them in detail here . | there are several excellent textbook presentations of hidden markov models and the forward-backward algorithm for expectation-maximization , so we do not cover them in detail here . |
| we used the stanford parser to parse each of the reviews and the natural language toolkit to post-process the results . | we used the stanford parser to extract dependency features for each quote and response . |
| we process the embedded words through a multi-layer bidirectional lstm to obtain contextualized embeddings . | the first component of the network is a bi-lstm encoder which builds contextual representations for every token in the sentence . |
| the decoder and encoder word embeddings are of size 620 , the encoder uses a bidirectional layer with 1000 lstms to encode the source side . | the decoder and encoder word embeddings are of size 500 , the encoder uses a bidirectional lstm layer with 1k units to encode the source side . |
| abstract meaning representation is a semantic representation where the meaning of a sentence is encoded as a rooted , directed graph . | abstract meaning representation is a semantic formalism in which the meaning of a sentence is encoded as a rooted , directed , acyclic graph . |
| the pearson coefficient shows that our dataset correlates with human annotation better than the dataset of kajiwara and yamamoto , possibly because we controlled each sentence to include only one complex word . | finally , to compare two datasets , we used the pearson product-moment correlation coefficient between our dataset and the dataset of kajiwara and yamamoto against the annotated data . |
| as well as allowing the model to choose between context-dependent and context-independent word representations , we can obtain dramatic improvements and reach performance comparable to state-of-the-art . | we find that model performance substantially improves , reaching accuracy comparable to state-of-the-art on the competitive squad dataset , showing that contextual word representations captured by the language model are beneficial for reading comprehension . |
| xie et al proposed to use a weighted bipartite graph to extract definition and corresponding abbreviation pairs from anchor texts . | xie et al proposed to extract chinese abbreviations and their corresponding definitions based on anchor texts . |
| cite-p-10-1-3 and cite-p-10-1-6 built systems to predict hierarchical power relations between people in the enron email corpus using lexical features from all the messages exchanged between them . | cite-p-10-1-3 and cite-p-10-1-6 predict hierarchical power relations between people in the enron email corpus using lexical features extracted from all the messages exchanged between them . |
| in our current work , we investigate how to create pos tagging and dependency parsing experts for heterogeneous data . | however , our work is comparable to domain adaptation since we create experts to tag and parse heterogeneous datasets . |
| named entity recognition ( ner ) is a well-known problem in nlp which feeds into many other related tasks such as information retrieval ( ir ) and machine translation ( mt ) and more recently social network discovery and opinion mining . | named entity recognition ( ner ) is the task of identifying and typing phrases that contain the names of persons , organizations , locations , and so on . |
| we use stanford corenlp for chinese word segmentation and pos tagging . | we use the stanford corenlp toolkit to obtain the part-of-speech tagging . |
| we pre-train the word embeddings using word2vec . | the model parameters of word embedding are initialized using word2vec . |
| semantic parsing is the task of automatically translating natural language text to formal meaning representations ( e.g. , statements in a formal logic ) . | semantic parsing is the task of mapping natural language to a formal meaning representation . |
| tuning was performed by minimum error rate training . | parameters were tuned using minimum error rate training . |
| exploiting the difference in coverage between these two corpora , escudero et al separated the dso corpus into its bc and wsj parts to investigate the domain dependence of several wsd algorithms . | escudero et al exploited the difference in coverage between these two corpora to separate the dso corpus into its bc and wsj parts for investigating the domain dependence of several wsd algorithms . |
| for a large class of modern shift-reduce parsers , dynamic programming is in fact possible and runs in polynomial time . | we show that , surprisingly , dynamic programming is in fact possible for many shift-reduce parsers , by merging " equivalent " stacks based on feature values . |
| we evaluate the translation quality using the case-insensitive bleu-4 metric . | in order to measure translation quality , we use bleu and ter scores . |
| woodsend and lapata investigate the use of simple wikipedia edit histories and an aligned wikipedia-simple wikipedia corpus to induce a model based on quasi-synchronous grammar . | woodsend and lapata ( 2011 ) use simple wikipedia edit histories and an aligned wikipedia-simple wikipedia corpus to induce a model based on quasi-synchronous grammar and integer linear programming . |
| we used the byte pair encoding algorithm for obtaining the sub-word vocabulary whose size was set to 50,000 . | we applied joint byte pair encoding , learning 32,000 merge operations , on the out-of-domain dataset . |
| we report the mt performance using the original bleu metric . | we report bleu gains obtained by each method . |
| aue and gamon explored various strategies for customizing sentiment classifiers to new domains , where the training is based on a small number of labelled examples and large amounts of unlabelled in-domain data . | aue and gamon combined a small amount of labeled data with a large amount of unlabeled data in the target domain for cross-domain sentiment classification based on the em algorithm . |
| for feature building , we use word2vec pre-trained word embeddings . | we preinitialize the word embeddings by running the word2vec tool on the english wikipedia dump . |
| knowtator has been developed as a protege plug-in that leverages protege 's knowledge representation capabilities . | knowtator has been developed to leverage the knowledge representation and editing capabilities of the protégé system . |
| sun et al focus on detecting causality between search query pairs in temporal query logs . | in , the authors focused on detecting causality between search query pairs in temporal query logs . |
| relation extraction is a key step towards question answering systems by which vital structured data is acquired from underlying free text resources . | relation extraction is a crucial task in the field of natural language processing ( nlp ) . |
| data selection methods mostly use language models trained on small-scale in-domain data . | the existing data selection methods are mostly based on language models . |
| they then searched the propbank wall street journal corpus for sentences containing such lexical items and annotated them with respect to metaphoricity . | then , they searched the propbank wall street journal corpus for sentences containing such lexical items and annotated them with respect to metaphoricity . |
| in this paper , we present the limitations of constituency-based discourse parsing . | in this paper , we present the benefits and feasibility of applying dependency structure in text-level discourse parsing . |
| typically , this selection is made based on translation scores , confidence estimations , language and other models . | the selection is made based on the scores of translation , language , and other models . |
| scate annotations are converted to intervals following the formal semantics of each entity , using the library provided by bethard and parker . | scate annotations are converted to intervals according to the formal semantics of each entity , using the scala library provided by bethard and parker . |
| the pre-processed monolingual sentences will be used by srilm or berkeleylm to train an n-gram language model . | a 3-gram language model was trained from the target side of the training data for chinese and arabic , using the srilm toolkit . |
| semantic role labeling ( srl ) is the task of automatic recognition of individual predicates together with their major roles ( e.g . frame elements ) as they are grammatically realized in input sentences . | semantic role labeling ( srl ) is the task of identifying semantic arguments of predicates in text . |
| a pun is a word used in a context to evoke two or more distinct senses for humorous effect . | a pun is a form of wordplay in which one sign ( e.g. , a word or phrase ) suggests two or more meanings by exploiting polysemy , homonymy , or phonological similarity to another sign , for an intended humorous or rhetorical effect ( aarons , 2017 ; hempelmann and miller , 2017 ) . |
| the language model is a 5-gram lm with modified kneser-ney smoothing . | the language model is a large interpolated 5-gram lm with modified kneser-ney smoothing . |
| ( ganchev et al , 2010 ) describes a method based on posterior regularization that incorporates additional constraints within the em algorithm for estimation of ibm models . | ganchev et al propose postcat , which uses posterior regularization to enforce posterior agreement between the two models . |
| we used the pre-trained word embeddings that are learned using the word2vec toolkit on the google news dataset . | we use the word2vec vectors with 300 dimensions , pre-trained on 100 billion words of google news . |
| over the years , there has been continuing interest in the research of ealp . | over the years there has been continuing interest in the research of ealp . |
| some researchers use similarity and association measures to build alignment links . | some researchers used similarity and association measures to build alignment links . |
| since coreference resolution is a pervasive discourse phenomenon causing performance impediments in current ie systems , we considered a corpus of aligned english and romanian texts to identify coreferring expressions . | coreference resolution is the task of clustering referring expressions in a text so that each resulting cluster represents an entity . |
| in this paper , we compare and extend approaches to obtain multi-sense embeddings , in order to model word senses . | in this paper , we compare and extend approaches to obtain multi-sense embeddings , in order to model word senses on the token level . |
| we used the moses toolkit to build mt systems using various alignments . | we implement the pbsmt system with the moses toolkit . |
| liao and grishman employed cross-event consistent information to improve sentence-level event extraction . | liao and grishman employed cross-event consistency information to improve sentence-level event extraction . |
| a bunsetsu is the linguistic unit in japanese that roughly corresponds to a basic phrase in english . | bunsetsu is a linguistic unit in japanese that roughly corresponds to a basic phrase in english . |
| segal et al and murray argue that readers expect a sentence to be causally congruent and continuous with respect to its preceding context . | segal et al and murray argue that readers expect a sentence to be continuous with respect to its preceding context . |
| any opinions , findings , and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsor . | any opinions , findings , and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsor . |
| sentiment analysis is a natural language processing task whose aim is to classify documents according to the opinion ( polarity ) they express on a given subject ( cite-p-13-8-14 ) . | sentiment analysis is a much-researched area that deals with identification of positive , negative and neutral opinions in text . |
| we also report the results using bleu and ter metrics . | for this task , we use the widely-used bleu metric . |
| le and mikolov extend the neural network of word embedding to learn the document embedding . | le and mikolov extended the word embedding learning model by incorporating paragraph information . |
| we use the hierarchical lexicalized reordering model , with a distortion limit of 7 . | as a baseline we use a translation system with distortion limit 6 and a lexicalized reordering model . |
| sentiment analysis is the process of identifying and extracting subjective information using natural language processing ( nlp ) . | one of the first challenges in sentiment analysis is the vast lexical diversity of subjective language . |
| shimbo and hara and hara et al considered many features for coordination disambiguation and automatically optimized their weights , which were heuristically determined in kurohashi and nagao , by using a discriminative learning model . | shimbo and hara considered many features for coordination disambiguation and automatically optimized their weights , which were heuristically determined in kurohashi and nagao , using a discriminative learning model . |
| we trained a te generation policy using the above user simulation model for 10,000 runs using the sarsa reinforcement learning algorithm . | we trained a time-series generation policy for 10,000 runs using tabular temporal-difference learning . |
| as table 7 shows , our system clearly outperforms the system proposed by silfverberg and hulden with regard to f1-score on tags . | we only report results for precision , recall and f1-score with regard to tags because the system by silfverberg and hulden is not capable of lemmatization . |
| in our earlier experiments , we used latent semantic analysis for dimensionality reduction in an attempt to automatically cluster words that are semantically similar . | in order to overcome data sparseness , we used techniques borrowed from latent semantic indexing to capture similarity between terms which are related but do not co-occur . |
| we extracted the most similar texts we could find from the spanish simplext corpus . | we picked two similar texts from the spanish corpus simplext . |
| klog is a framework for kernel-based learning that has already proven successful in solving a number of relational tasks in natural language processing . | klog is a new language for statistical relational learning with kernels , that is embedded in prolog , and builds upon and links together concepts from database theory , logic programming and learning from interpretations . |
| in this paper , we present our contribution to the closed track of the 2011 conll shared task . | this paper describes our coreference resolution system participating in the closed track of the conll 2011 shared task . |
| by applying the iornn to dependency parses , we have shown that using an ∞-order generative model for dependency . | we demonstrate the use of the iornn by applying it to an ∞-order generative dependency model which is impractical for counting due to the problem of data sparsity . |
| in future work , we plan to extend the parameterization of our models to not only predict phrase orientation , but also the length of each displacement . | in future work , we plan to extend the parameterization of our models to not only predict phrase orientation , but also the length of each displacement as in ( cite-p-10-1-0 ) . |
| dictionary creation is a costly process : an automatic method for creating them would make dialogue technology more scalable . | dictionary creation is a costly process ; it is currently done by hand for each dialogue domain . |
| we presented our previous efforts on using wikipedia as a semantic knowledge source . | we present our work on using wikipedia as a knowledge source for natural language processing . |
| recently , distributed word representations using the skip-gram model have been shown to give competitive results on analogy detection . | furthermore , this approach has achieved competitive results to dense vector space models like cbow and skip-gram in word similarity evaluations . |
| mcgough et al proposed an approach to build a web-based testing system with the facility of dynamic qg . | mcgough et al proposed an approach to build a web-based testing system with the facility of dynamic question generation . |
| the evaluation method is the case-insensitive ibm bleu-4 . | we adopted the case-insensitive bleu-4 as the evaluation metric . |
| we use the logistic regression classifier as implemented in the skll package , which is based on scikit-learn , with f1 optimization . | we use the linearsvc classifier as implemented in the scikit-learn package with the default parameters . |
| several massive knowledge bases such as dbpedia and freebase have been released . | recent years have seen a large number of knowledge bases such as yago , wikidata and freebase . |
| sentiment analysis is a natural language processing ( nlp ) task ( cite-p-10-1-14 ) which aims at classifying documents according to the opinion expressed about a given subject ( federici and dragoni , 2016a , b ) . | sentiment analysis is a field of study that investigates feelings present in texts . |
| zhu et al learn a sentence simplification model which is able to perform four rewrite operations on the parse trees of the input sentences , namely substitution , reordering , splitting , and deletion . | zhu et al also use wikipedia to learn a sentence simplification model which is able to perform four rewrite operations , namely substitution , reordering , splitting , and deletion . |
| but it also eliminates the need to directly predict the direction of translation of the parallel corpus . | furthermore , we eliminate the need to ( manually or automatically ) detect the direction of translation of the parallel corpus . |
| with the svm reranker , we obtain a significant improvement in bleu scores over white & rajkumar 's averaged perceptron model . | with the svm reranker , we obtain a significant improvement in bleu scores over white & rajkumar 's averaged perceptron model on both development and test data . |
| named entity recognition is a well established information extraction task with many state of the art systems existing for a variety of languages . | named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance . |
| semantic role labeling ( srl ) is the task of automatic recognition of individual predicates together with their major roles ( e.g . frame elements ) as they are grammatically realized in input sentences . | semantic role labeling ( srl ) is defined as the task to recognize arguments for a given predicate and assign semantic role labels to them . |
| we propose a transition-based parser for spinal parsing , based on the arc-eager strategy . | our method is based on constraining a shift-reduce parser using the arc-eager strategy . |
| madamira is a system developed for morphological analysis and disambiguation of arabic text . | madamira is a tool , originally designed for morphological analysis and disambiguation of msa and dialectal arabic texts . |
| in our study , lay annotators had similar agreement on the ratings . | in our study , lay annotators had similar agreement on the ratings as experts . |
| we use 300-dimensional glove embeddings trained on the common crawl 840b tokens dataset , which remain fixed during training . | since it operates on the word level , we use pre-trained 300-dimensional glove embeddings and keep them fixed during training . |
| for each target , 10 sentences were chosen from the english internet corpus and presented to 5 annotators to collect substitutes . | the full dataset consists of 2,010 sentences , 10 for each of 201 target words , extracted from the english internet corpus , and annotated by five native english speakers . |
| we briefly conclude and offer directions for future work . | finally , in section 7 we briefly conclude and offer directions for future work . |
| for phrase-based smt translation , we used the moses decoder and its support training scripts . | for generating the translations from english into german , we used the statistical translation toolkit moses . |
| we evaluated the translation quality using the case-insensitive bleu-4 metric . | we adopted the case-insensitive bleu-4 as the evaluation metric . |
| we build an open-vocabulary language model with kneser-ney smoothing using the srilm toolkit . | for improving the word alignment , we use the word-classes that are trained from a monolingual corpus using the srilm toolkit . |
| a 5-gram language model was built using srilm on the target side of the corresponding training corpus . | a 3-gram language model was trained from the target side of the training data for chinese and arabic , using the srilm toolkit . |
| in our experiments , we choose to use the published glove pre-trained word embeddings . | we employed the glove as the word embedding for the esim . |
| we used minimum error rate training to optimize the feature weights . | we optimized each system separately using minimum error rate training . |
| word embedding is a key component in many downstream applications in processing natural languages . | word embedding is a dense , low dimensional , real-valued vector . |
| for the out-of-domain testing of framenet srl , we publish the annotations for the yags benchmark set and our frame identification system for research purposes . | to support reproducibility of our results , we publish the yags test set annotations and our frame identification system for research purposes . |
| the first one , described in section 3 , is based on the approach of simard et al and considers the ape task as the automatic translation between a translation hypothesis and its post-edition . | the first one is based on the approach of simard et al and considers the ape task as a monolingual translation between a translation hypothesis and its post-edition . |
| semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation . | semantic parsing is the task of mapping a natural language query to a logical form ( lf ) such as prolog or lambda calculus , which can be executed directly through database query ( zettlemoyer and collins , 2005 , 2007 ; haas and riezler , 2016 ; kwiatkowski et al. , 2010 ) . |
| we also consider the recently popular word2vec tool to obtain vector representations of words which are trained on 300 million words of the google news dataset and are of length 300 . | for the textual sources , we populate word embeddings from the google word2vec embeddings trained on roughly 100 billion words from google news . |
| marcu and wong present a joint probability model for phrase-based translation . | marcu and wong propose a model to learn lexical correspondences at the phrase level . |
| the log-likelihood ratio decides in which order rules in a decision list are applied to the target noun in a novel context . | the log-likelihood ratio decides in which order rules in a decision list are applied to the target noun in countability prediction . |