| sentence1 (string, 16–446 chars) | sentence2 (string, 14–436 chars) |
|---|---|
| table 1 presents the results from the automatic evaluation , in terms of bleu and nist test . | table 1 shows the evaluation of all the systems in terms of bleu score with the best score highlighted . |
| the parameter weights are optimized with minimum error rate training . | the nnlm weights are optimized as the other feature weights using minimum error rate training . |
| in the experiment , we show that a neural network trained using stair captions can generate more natural and better japanese captions , compared to those generated using english-japanese machine translation . | in our experiment , we compared the performance of japanese caption generation by a neural network-based model with and without stair captions to highlight the necessity of japanese captions . |
| in the experiments described above , rnnlms are compared to a 4-gram back-off n-gram language model with modified kneser-ney smoothing trained using the srilm toolkit . | the targetside 4-gram language model was estimated using the srilm toolkit and modified kneser-ney discounting with interpolation . |
| coreference resolution is the task of determining which mentions in a text refer to the same entity . | coreference resolution is a well known clustering task in natural language processing . |
| this paper presents a large-scale system for the recognition and semantic disambiguation of named entities . | the system discussed in this paper performs both named entity identification and disambiguation . |
| the scripts were further post-processed with the stanford corenlp pipeline to perform tagging , parsing , named entity recognition and coreference resolution . | then , the texts were tokenized , lemmatized , pos-tagged and annotated with named entity tags using stanford corenlp toolkit . |
| coreference resolution is a well known clustering task in natural language processing . | coreference resolution is a task aimed at identifying phrases ( mentions ) referring to the same entity . |
| gao et al design user-specific features to capture user leniency . | for example , in 2013 , gao et al design userspecific features to capture user leniency . |
| in the experiments presented in this paper , we use bleu scores as training labels . | in this paper we will consider sentence-level approximations of the popular bleu score . |
| we used the logistic regression implementation in scikit-learn for the maximum entropy models in our experiments . | we use logistic regression with l2 regularization , implemented using the scikit-learn toolkit . |
| in order to capture the properties of semantic orientations of phrases , we introduce latent variables into the models . | in order to capture the property of such phrases , we introduce latent variables into the models . |
| building a realistic su can be just as difficult as building a good dialogue policy . | building a dialogue policy can be a challenging task especially for complex applications . |
| phoneme based models like the ones based on weighted finite state transducers and extended markov window treat transliteration as a phonetic process rather than an orthographic process . | phoneme-based models , based on weighted finite state transducers and markov window considers transliteration as a phonetic process . |
| transliteration is a process of translating a foreign word into a native language by preserving its pronunciation in the original language , otherwise known as translationby-sound . | transliteration is often defined as phonetic translation ( cite-p-21-3-2 ) . |
| we use the moses package to train a phrase-based machine translation model . | for phrase-based smt translation , we used the moses decoder and its support training scripts . |
| we used a phrase-based smt model as implemented in the moses toolkit . | we make use of moses toolkit for this paradigm . |
| as word embeddings we use the pre-trained word2vec vectors trained on the google news corpus 11 . | we trained our default model using the widely used tool word2vec with the default parameters values on the bnc corpus 1 . |
| the lexicalized reordering model was trained with the msd-bidirectional-fe option . | a lexicalized reordering model was trained with the msd-bidirectional-fe option . |
| we use the skipgram model with negative sampling implemented in the open-source word2vec toolkit to learn word representations . | we use the skip-gram strategy in word2vec , which uses the central word in a sliding window with radius r to predict other words in the window and make local optimizations . |
| unfortunately , wordnet is a fine-grained resource , which encodes possibly subtle sense distictions . | unfortunately , wordnet is a fine-grained resource , encoding sense distinctions that are often difficult to recognize even for human annotators ( cite-p-15-1-6 ) . |
| we measure the overall translation quality using 4-gram bleu , which is computed on tokenized and lowercased data for all systems . | translation performance is measured using the automatic bleu metric , on one reference translation . |
| our results show that we consistently improve over a state-of-the-art baseline in terms of bleu , yet . | our results show a consistent improvement over a state-of-the-art baseline in terms of bleu and a manual error analysis . |
| a 5-gram language model was built using srilm on the target side of the corresponding training corpus . | gram language models are trained over the target-side of the training data , using srilm with modified kneser-ney discounting . |
| morfessor is a family of methods for unsupervised morphological segmentation . | morfessor is a family of probabilistic machine learning methods for finding the morphological segmentation from raw text data . |
| in this paper , we report a system based on neural networks to take advantage of their modeling capacity and generalization power for the automated essay . | in this paper , we develop an approach based on recurrent neural networks to learn the relation between an essay and its assigned score , without any feature engineering . |
| zeng et al use convolutional neural network for learning sentence-level features of contexts and obtain good performance even without using syntactic features . | zeng et al use a convolutional deep neural network to extract lexical features learned from word embeddings and then fed into a softmax classifier to predict the relationship between words . |
| finally , we position this effort at the intersection of noisy text parsing and grammatical error correction . | our work is positioned at the intersection of noisy text parsing and grammatical error correction . |
| each candidate property 's compatibility with the complementary simile component . | each candidate property is generated from just one component of the simile . |
| that can exploit multiple , variable sized word embeddings . | this cnn-based architecture accepts multiple word embeddings as inputs . |
| in the restricted condition , all non-concat models perform near the cosine baseline , suggesting that in the standard setting . | in the restricted condition , all non-concat models perform near the cosine baseline , suggesting that in the standard setting they were memorizing antonyms of semantically similar words . |
| for example , morante et al discuss the need for corpora which covers different domains apart from biomedical . | morante et al also discuss the need for corpora which cover other domains . |
| we use the word2vec skip-gram model to train our word embeddings . | our cdsm feature is based on word vectors derived using a skip-gram model . |
| for tree-to-string translation , we parse the english source side of the parallel data with the english berkeley parser . | for samt grammar extraction , we parsed the english training data using the berkeley parser with the provided treebank-trained grammar . |
| experimental results show that the proposed approach outperforms the state-of-the-art semi-supervised method . | results showed that the proposed method outperformed all baseline methods . |
| birke and sarkar proposed the trope finder system to recognize verbs with non-literal meaning using word sense disambiguation and clustering . | birke and sarkar clustered literal and figurative contexts using a wordsense-disambiguation approach . |
| coreference resolution is the next step on the way towards discourse understanding . | coreference resolution is the task of determining when two textual mentions name the same individual . |
| we use glove word embeddings , an unsupervised learning algorithm for obtaining vector representations of words . | we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training . |
| on the other hand , glorot et al , proposed a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion . | glorot et al first propose a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion . |
| we demonstrate that this cascade-like framework is applicable to machine comprehension and can be trained endto-end . | we have demonstrated that this cascade-like framework is applicable to machine comprehension and can be trained endto-end . |
| part-of-speech ( pos ) tagging is a critical task for natural language processing ( nlp ) applications , providing lexical syntactic information . | part-of-speech ( pos ) tagging is a crucial task for natural language processing ( nlp ) tasks , providing basic information about syntax . |
| we use a fourgram language model with modified kneser-ney smoothing as implemented in the srilm toolkit . | the language model pis implemented as an n-gram model using the srilm-toolkit with kneser-ney smoothing . |
| the minimum error rate training was used to tune the feature weights . | the parameter weights are optimized with minimum error rate training . |
| kilicoglu and bergler apply a linguistically motivated approach to the same clasification task by using knowledge from existing lexical resources and incorporating syntactic patterns . | kilicoglu and bergler apply a linguistically motivated approach to the same classification task by using knowledge from existing lexical resources and incorporating syntactic patterns . |
| we use case-sensitive bleu-4 to measure the quality of translation result . | we used the bleu score to evaluate the translation accuracy with and without the normalization . |
| underlying the semantic classes was trained by a combination of the em algorithm and the mdl principle , providing soft clusters with two dimensions ( verb senses and subcategorisation frames with selectional preferences ) . | the probabilistic verb class model underlying the semantic classes is trained by a combination of the em algorithm and the mdl principle , providing soft clusters with two dimensions ( verb senses and subcategorisation frames with selectional preferences ) as a result . |
| for strings , many such kernel functions exist with various applications in computational biology and computational linguistics . | for strings , a lot of such kernel functions exist with many applications in computational biology and computational linguistics . |
| tanev and magnini proposed a weaklysupervised method that requires as training data a list of named entities , without context , for each category under consideration . | tanev and magnini proposed a weaklysupervised method that requires as training data a list of terms without context for each category under consideration . |
| in this paper , we explore strategies for generating and evaluating such surveys of scientific topics automatically . | in this paper , we investigate the problem of automatic generation of scientific surveys starting from keywords provided by a user . |
| the international corpus of learner english was widely used until recently , despite its shortcomings 1 being widely noted . | the above-mentioned international corpus of learner english was widely used until recently , despite its shortcomings 3 being widely noted . |
| we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing . | we used srilm to build a 4-gram language model with interpolated kneser-ney discounting . |
| in our experiments , all word vectors are initialized by glove 1 . | as the word embeddings , we used the 300 dimension vectors pre-trained by glove 6 . |
| we use the word2vec tool with the skip-gram learning scheme . | for cos , we used the cbow model 6 of word2vec . |
| then we review the path ranking algorithm introduced by lao and cohen . | we now review the path ranking algorithm introduced by lao and cohen . |
| katiyar and cardie presented a standard lstm-based sequence labeling model to learn the nested entity hypergraph structure for an input sentence . | katiyar and cardie proposed a neural network-based approach that learns hypergraph representation for nested entities using features extracted from a recurrent neural network . |
| we use sri language modeling toolkit to train a 5-gram language model on the english sentences of fbis corpus . | incometo select the most fluent path , we train a 5-gram language model with the srilm toolkit on the english gigaword corpus . |
| this paper has proposed an incremental parser based on an adjoining operation . | this paper describes an incremental parser based on an adjoining operation . |
| recently , convolutional neural networks are reported to perform well on a range of nlp tasks . | also , neural network translation models show a success in smt . |
| this paper presents a method for statistical paraphrase generation . | this paper proposes a method for statistical paraphrase generation . |
| also , rl has been applied to tutoring domains . | rl has also been applied to question-answering and tutoring domains . |
| this paper introduces a new corpus called , qa-it , sampled from nine different genres . | this paper introduces a new corpus , qa-it , for the classification of non-referential it . |
| embeddings , have recently shown to be effective in a wide range of tasks . | high quality word embeddings have been proven helpful in many nlp tasks . |
| in a second baseline model , we also incorporate 300-dimensional glove word embeddings trained on wikipedia and the gigaword corpus . | we used glove word embeddings with 300 dimensions pre-trained using commoncrawl to get a vector representation of the evidence sentence . |
| the decoder uses a cky-style parsing algorithm to integrate the language model scores . | the decoder is capable of both cnf parsing and earley-style parsing with cube-pruning . |
| the sg model is a popular choice to learn word embeddings by leveraging the relations between a word and its neighboring words . | the skip-gram model adopts a neural network structure to derive the distributed representation of words from textual corpus . |
| therefore , dependency parsing is a potential “ sweet spot ” that deserves investigation . | dependency parsing is a basic technology for processing japanese and has been the subject of much research . |
| we have presented an exploration of content models for multi-document summarization . | we present an exploration of generative probabilistic models for multi-document summarization . |
| our experiments show that it is possible to learn an image annotation model from caption-picture . | we also demonstrate that the news article associated with the picture can be used to boost image annotation performance . |
| coreference resolution is a key problem in natural language understanding that still escapes reliable solutions . | coreference resolution is the process of finding discourse entities ( markables ) referring to the same real-world entity or concept . |
| leskovec et al use the evolution of quotes reproduced online to identify memes and track their spread overtime . | leskovec et al perform clustering of quotations and their variations , uncovering patterns in the temporal dynamics of how memes spread through the media . |
| experiments on the benchmark data set show that our model achieves comparable and even better performance . | experiment results show that our approach achieves satisfactory performance against the baseline models . |
| we use minimum error rate training to tune the feature weights of hpb for maximum bleu score on the development set with serval groups of different start weights . | we tune phrase-based smt models using minimum error rate training and the development data for each language pair . |
| we propose a generative model that incorporates distributional prior knowledge . | in this paper , we propose a generative model that incorporates this distributional prior knowledge . |
| existing topic models attempted to model such structural dependency among topics . | however , existing topic models generally can not capture the latent topical structures in documents . |
| relation extraction ( re ) is the task of extracting instances of semantic relations between entities in unstructured data such as natural language text . | relation extraction is the task of finding relationships between two entities from text . |
| we use stanford named entity recognizer 7 to extract named entities from the texts . | we extract named entities using a python wrapper for the stanford ner tool . |
| our word embedding features are based on the recent success of word2vec 4 , a method for representing indidivual words as distributed vectors . | the word embeddings required by our proposed methods were trained using the gensim 5 implementation of the skip gram version of word2vec . |
| we applied a 5-gram mixture language model with each sub-model trained on one fifth of the monolingual corpus with kneser-ney smoothing using srilm toolkit . | we used a 4-gram language model which was trained on the xinhua section of the english gigaword corpus using the srilm 4 toolkit with modified kneser-ney smoothing . |
| yamada and matsumoto proposed a shift-reducelike odeterministic algorithm . | yamada and matsumoto proposed a deterministic classifierbased parser . |
| moreover , arabic is a morphologically complex language . | arabic is a morphologically rich language that is much more challenging to work , mainly due to its significantly larger vocabulary . |
| the model approximates stretches of f 0 by employing a phonetically motivated model function . | the painte model approximates a peak in the f 0 contour by employing a model function operating on a 3-syllable window . |
| all language models were trained using the srilm toolkit . | a 4-grams language model is trained by the srilm toolkit . |
| the model achieved the state-of-the-art performance on three different nlp tasks : natural language inference , answer sentence selection , and sentence classification , outperforming state-of-the-art recurrent and recursive neural networks . | although the model does not follow the syntactic tree structure , we empirically show that it achieved the state-of-the-art performance on three different nlp applications : natural language inference , answer sentence selection , and sentence classification . |
| we use word2vec technique to compute the vector representation of all the tags . | we use skip-gram representation for the training of word2vec tool . |
| in all our experiments , we used a 5-gram language model trained on the one billion word benchmark dataset with kenlm . | for building our statistical ape system , we used maximum phrase length of 7 and a 5-gram language model trained using kenlm . |
| in particular , a regularization term is added , which has the effect of trying to separate the data with a thick separator . | in particular , a regularization term is added , which has the affect of trying to separate the data with a think separator . |
| we use the wordsim353 dataset , divided into similarity and relatedness categories . | specifically , we used wordsim353 , a benchmark dataset , consisting of relatedness judgments for 353 word pairs . |
| we solve this sequence tagging problem using the mallet implementation of conditional random fields . | we employ conditional random fields to predict the sentiment label for each segment . |
| we present deep dirichlet multinomial regression , a supervised topic model which both learns a representation of document-level features . | we present deep dirichlet multinomial regression ( ddmr ) , a generative topic model that simultaneously learns document feature representations and topics . |
| one of the central challenges in sentiment-based text categorization is that not every portion of a given document is equally informative for inferring its overall sentiment . | one of the central challenges in sentiment-based text categorization is that not every portion of a document is equally informative for inferring the overall sentiment of the document . |
| the language model is a large interpolated 5-gram lm with modified kneser-ney smoothing . | the language model is a 5-gram with interpolation and kneser-ney smoothing . |
| this system is based on a distributional approach which uses syntactic dependencies . | the core of this system is a clique-based clustering method based upon a distributional approach . |
| semeval is a yearly event in which international teams of researchers work on tasks in a competition format where they tackle open research questions in the field of semantic analysis . | semeval is the international workshop on semantic evaluation that has evolved from senseval . |
| hockenmaier and steedman extracted a corpus of ccg derivations and dependency structures from the penn treebank . | hockenmaier and steedman showed that a ccg corpus could be created by adapting the penn treebank . |
| the core of the algorithm is a dynamic program for phrase-based translation which is efficient , but which allows some ill-formed translations . | the core of the algorithm is a beam-search based decoder operating on the packed forest in a bottom-up manner . |
| we applied our algorithms to word-level alignment using the english-french hansards data from the 2003 naacl shared task . | we evaluated our approaches using the englishfrench hansards data from the 2003 naacl shared task . |
| syntactic parsing is a central task in natural language processing because of its importance in mediating between linguistic expression and meaning . | syntactic parsing is a computationally intensive and slow task . |
| some language-specific properties in chinese have impact on errors . | meanwhile , confusion sets of chinese words play an important role in chinese spelling correction . |
| the nnlm weights are optimized as the other feature weights using minimum error rate training . | the score combination weights are trained by a minimum error rate training procedure similar to . |
| schema based approaches and rhetorical structure theory , offer methods for generating text driven by the relations between messages or groups of messages . | rhetorical structure theory posits a hierarchical structure of discourse relations between spans of text . |
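The rows above follow a simple two-column schema (`sentence1`, `sentence2`). Below is a minimal sketch of how a table with this schema could be loaded with the Hugging Face `datasets` library; the repository id and the `train` split name are placeholders, since neither is stated on this page.

```python
# Minimal loading sketch for a two-column sentence-pair dataset.
# Assumption: the table is published on the Hugging Face Hub with string
# columns "sentence1" and "sentence2"; "user/sentence-pair-dataset" and the
# "train" split below are hypothetical names, not taken from this page.
from datasets import load_dataset

ds = load_dataset("user/sentence-pair-dataset")  # hypothetical repo id
print(ds)  # shows the available splits and the two string columns

example = ds["train"][0]  # assumes a "train" split exists
print(example["sentence1"])
print(example["sentence2"])
```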