metonymy is a figure of speech that uses " one entity to refer to another that is related to it " ( lakoff and johnson , 1980 , p.35 ) .
metonymy is a figure of speech , in which one expression is used to refer to the standard referent of a related one ( cite-p-18-1-13 ) .
building on this frame-semantic model , the berkeley framenet project has been developing a frame-semantic lexicon for the core vocabulary of english since 1997 .
the berkeley framenet is an ongoing project for building a large lexical resource for english with expert annotations based on frame semantics .
the target language model was a standard ngram language model trained by the sri language modeling toolkit .
the language models were interpolated kneser-ney discounted trigram models , all constructed using the srilm toolkit .
recently , levy and goldberg showed that linear linguistic regularities first observed with word2vec extend to other embedding methods .
levy and goldberg further reveal that the attractive properties observed in word embeddings are not restricted to neural models such as word2vec and glove .
wordnet-based methods are consistently worse than the 1911 thesaurus .
roget 's thesaurus was found generally to outperform wordnet on these problems .
then , we give the paraphrase lattice as an input to the moses decoder .
then , we give the paraphrase lattice as an input to the lattice decoder .
in this paper , we applied a graph-based ssl algorithm to improve the performance of qa task by exploiting unlabeled entailment .
we implement a semi-supervised learning ( ssl ) approach to demonstrate that utilization of more unlabeled data points can improve the answer-ranking task of qa .
we utilize a maximum entropy model to design the basic classifier used in active learning for wsd and tc tasks .
we utilize a maximum entropy model to design the basic classifier used in active learning for wsd .
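as a rough illustration ( not the authors' exact setup ) : a maximum entropy classifier is equivalent to multinomial logistic regression , so a minimal scikit-learn sketch with placeholder features might look like this .

    # minimal sketch : a maximum entropy classifier as logistic regression ;
    # the tiny feature matrix is a stand-in for real wsd features
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    X_train = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]])
    y_train = np.array([0, 1, 1, 0])

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    # class probabilities ; low-confidence points are natural candidates
    # to query next in an active-learning loop
    print(clf.predict_proba(np.array([[0.5, 0.5]])))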
the translation quality is evaluated by case-insensitive bleu-4 metric .
translation quality is measured by case-insensitive bleu on newstest13 using one reference translation .
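a minimal sketch of case-insensitive bleu against a single reference , using the sacrebleu package ( the example strings are placeholders ) .

    # minimal sketch : corpus-level bleu with one reference stream ;
    # lowercase=True makes the score case-insensitive
    import sacrebleu

    hyps = ["the cat sat on the mat"]
    refs = [["The cat sat on the mat ."]]  # one reference stream , parallel to hyps
    bleu = sacrebleu.corpus_bleu(hyps, refs, lowercase=True)
    print(bleu.score)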
in this paper , we propose a novel uncertainty classification scheme and construct the first uncertainty corpus based on social media data – tweets in specific .
in this paper , we propose a variant of annotation scheme for uncertainty identification and construct the first uncertainty corpus based on tweets .
for our experiments reported here , we obtained word vectors using the word2vec tool and the text8 corpus .
in our experiment , word embeddings were 200-dimensional as used in prior work , trained on gigaword with word2vec .
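a minimal gensim sketch of training 200-dimensional word2vec vectors on the text8 corpus ( downloaded on first use ; the hyperparameters are illustrative , not the authors' ) .

    # minimal sketch : skip-gram word2vec on text8 via gensim
    import gensim.downloader as api
    from gensim.models import Word2Vec

    corpus = api.load("text8")  # iterable of tokenized sentences
    model = Word2Vec(corpus, vector_size=200, sg=1, window=5, min_count=5)
    print(model.wv.most_similar("king", topn=5))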
the proposed methods are very effective in topical keyphrase extraction .
experiments show that these methods are very effective for topical keyphrase extraction .
in our experiments , we use the english-french part of the europarl corpus .
in particular , we used the english and spanish sides of the europarl parallel corpus .
for comparison purposes , we replicated the hiero decoder ( cite-p-22-1-2 ) .
for comparison purposes , we replicated the hiero system as described in ( cite-p-22-1-2 ) .
semantic role labeling , also known as shallow semantic parsing , was pioneered by gildea and jurafsky .
automatic semantic role labeling was first introduced by gildea and jurafsky .
nearest neighbors of the test sentence are identified using this scoring function .
this similarity score is used to find the nearest neighbors of the test sentence from the training data .
there are several studies about grammatical error correction using phrase-based statistical machine translation .
among others , there are studies using phrase-based statistical machine translation , which does not limit the types of grammatical errors made by a learner .
to constrain the application of wide-coverage hpsg rules , we can benefit from a number of parsing techniques designed for high-accuracy dependency parsing , while actually performing deep syntactic analysis .
we show that by using surface dependencies to constrain the application of wide-coverage hpsg rules , we can benefit from a number of parsing techniques designed for high-accuracy dependency parsing , while actually performing deep syntactic analysis .
semantic role labeling ( srl ) is the task of identifying the arguments of lexical predicates in a sentence and labeling them with semantic roles ( cite-p-13-3-3 , cite-p-13-3-11 ) .
semantic role labeling ( srl ) is the task of identifying the predicate-argument structure of a sentence .
ramshaw and marcus first represented base noun phrase recognition as a machine learning problem .
the pioneering work of ramshaw and marcus introduced np chunking as a machine-learning problem , with standard datasets and evaluation metrics .
then , the output of bigru is fed as input to the capsule network .
the output of bigru is then used as the input to the capsule network .
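a minimal pytorch sketch of this data flow with illustrative sizes ; only the bigru encoder and the capsule " squash " nonlinearity are shown , and the routing layers of a full capsule network are elided .

    # minimal sketch : bigru output reshaped into capsule vectors and squashed
    import torch
    import torch.nn as nn

    def squash(s, dim=-1, eps=1e-8):
        # squash(s) = (|s|^2 / (1 + |s|^2)) * s / |s|
        norm2 = (s * s).sum(dim=dim, keepdim=True)
        return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + eps)

    bigru = nn.GRU(input_size=100, hidden_size=64,
                   batch_first=True, bidirectional=True)
    x = torch.randn(2, 20, 100)                # (batch, seq_len, emb_dim)
    h, _ = bigru(x)                            # (2, 20, 128) : fwd + bwd states
    capsules = squash(h.reshape(2, 320, 8))    # 320 primary capsules of dim 8
    print(capsules.shape)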
the feature weights are tuned to optimize bleu using the minimum error rate training algorithm .
the weights associated to feature functions are optimally combined using the minimum error rate training .
we used the moses toolkit for performing statistical machine translation .
we obtained a phrase table out of this data using the moses toolkit .
relation extraction is the task of finding semantic relations between entities from text .
relation extraction ( re ) has been defined as the task of identifying a given set of semantic binary relations in text .
and the experimental results show that the tagging based methods are better than most of the existing pipelined and joint learning methods .
the tagging-based methods are better than most of the existing pipelined and joint learning methods .
semantic parsing is the task of translating natural language utterances to a formal meaning representation language ( cite-p-16-1-6 , cite-p-16-3-6 , cite-p-16-1-8 , cite-p-16-3-7 , cite-p-16-1-0 ) .
semantic parsing is the problem of mapping natural language strings into meaning representations .
we used the moses toolkit to build mt systems using various alignments .
we used moses to train an alignment model on the created paraphrase dataset .
feature weights were set with minimum error rate training on a development set using bleu as the objective function .
the smt system was tuned on the development set newstest10 with minimum error rate training using the bleu error rate measure as the optimization criterion .
we used the pre-trained word embeddings that are learned using the word2vec toolkit on google news dataset .
to encode the original sentences we used word2vec embeddings pre-trained on google news .
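a minimal gensim sketch for loading the publicly distributed google news word2vec model ( the .bin file must be downloaded separately ) .

    # minimal sketch : load 300-d google news vectors in word2vec binary format
    from gensim.models import KeyedVectors

    wv = KeyedVectors.load_word2vec_format(
        "GoogleNews-vectors-negative300.bin", binary=True)
    print(wv["sentence"].shape)  # (300,)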
in section 3 , we explain how to use these gazetteers as features .
in section 3 , we explain how to use these gazetteers as features in an ne tagger .
a well-founded theory for this is the partially observable markov decision process ( pomdp ) ( cite-p-18-1-13 ) , which can provide robustness to errors from the input module and automatic policy optimization by reinforcement learning .
this theory is a derivative of constructivism which proposes that students construct an understanding of a topic by interpreting new material in the context of prior knowledge ( cite-p-10-1-0 ) .
we used the 300 dimensional model trained on google news .
we use word embedding pre-trained on newswire with 300 dimensions from word2vec .
jiang et al proposed a character-based model employing similar feature templates using averaged perceptron .
jiang et al investigate the automatic integration of word segmentation knowledge in different annotated corpora .
the aim of this work was to point out the difficulties associated with the resolution of cataphoric cases of shell nouns .
this work drew on the observation that shell nouns following cataphoric constructions are easy to resolve .
a sentiment lexicon is a set of words ( or phrases ) each of which is assigned a sentiment polarity score .
a sentiment lexicon is a list of sentiment expressions , which are used to indicate sentiment polarity ( e.g. , positive or negative ) .
translation quality is measured in truecase with bleu on the mt08 test sets .
translation performance is measured using the automatic bleu metric , on one reference translation .
for training our system classifier , we have used scikit-learn .
we employed the machine learning tool scikit-learn for training the classifier .
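a minimal scikit-learn sketch of such a classifier as a tf-idf plus linear-model pipeline ( the two-example corpus is a placeholder ) .

    # minimal sketch : text classification pipeline in scikit-learn
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = ["good movie", "terrible plot"]
    labels = [1, 0]
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)
    print(clf.predict(["really good plot"]))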
we used the sri language modeling toolkit to train a fivegram model with modified kneser-ney smoothing .
we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing .
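a minimal sketch of the corresponding srilm invocation from python , assuming the srilm binaries are on the PATH and that train.txt is a tokenized training corpus .

    # minimal sketch : train a 5-gram lm with modified kneser-ney via srilm
    import subprocess

    subprocess.run([
        "ngram-count",
        "-order", "5",        # 5-gram model
        "-kndiscount",        # modified kneser-ney discounting
        "-interpolate",       # interpolate lower orders
        "-text", "train.txt",
        "-lm", "model.arpa",  # output lm in arpa format
    ], check=True)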
twitter is a communication platform which combines sms , instant messages and social networks .
twitter is a microblogging service that has 313 million monthly active users .
meanwhile , its effectiveness has also been verified in many nlp tasks such as sentiment analysis , parsing , summarization and machine translation .
it has also been successfully applied to different nlp tasks such as part-of-speech tagging , sentiment analysis , parsing , and machine translation .
we further adopt the approach of distant supervision in a chinese dataset .
we introduce a novel approach to distant supervision using topic models .
another corpus has been annotated for discourse phenomena in english , the penn discourse treebank .
one of the most important resources for discourse connectives in english is the penn discourse treebank .
we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing .
firstly , we built a forward 5-gram language model using the srilm toolkit with modified kneser-ney smoothing .
it has been shown that scores on these dimensions correlate with some aspects of language use .
work has also investigated whether scores on these dimensions correlate with language use .
to better understand failure modes , we are interested in understanding the behavior of vqa models along specific dimensions .
in this paper , we develop novel techniques to characterize the behavior of vqa models .
here we use stanford corenlp toolkit to deal with the co-reference problem .
we use stanford corenlp for pos tagging and lemmatization .
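a minimal sketch using stanza , the stanford nlp group's python library , rather than the java corenlp toolkit itself , for pos tagging and lemmatization .

    # minimal sketch : pos tags and lemmas with stanza
    import stanza

    stanza.download("en")  # one-time model download
    nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma")
    doc = nlp("The cats were sitting on the mats.")
    for sent in doc.sentences:
        for word in sent.words:
            print(word.text, word.upos, word.lemma)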
we propose a graph-based microblog entity linking ( gmel ) method .
we propose a context-expansion-based and a graph-based method .
peters et al propose a deep neural model that generates contextual word embeddings which are able to model both language and semantics of word use .
peters et al show how deep contextualized word representations model both complex characteristics of word use , and usage across various linguistic contexts .
mei et al propose an encoder-aligner-decoder model to generate weather forecasts .
mei et al proposed an encoder-aligner-decoder framework for generating weather broadcasts .
for the language model , we used srilm with modified kneser-ney smoothing .
for language modeling , we used the trigram model of stolcke .
we use glove pre-trained word embeddings , a 100 dimension embedding layer that is followed by a bilstm layer of size 32 .
we use the 300-dimensional pre-trained word2vec word embeddings and compare the performance with that of glove embeddings .
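a minimal pytorch sketch of the layer sizes described above ( the vocabulary size and input ids are placeholders ; the embedding rows would be initialized from glove ) .

    # minimal sketch : 100-d embedding layer followed by a bilstm of size 32
    import torch
    import torch.nn as nn

    vocab_size = 10000
    emb = nn.Embedding(vocab_size, 100)           # rows would hold glove vectors
    bilstm = nn.LSTM(100, 32, batch_first=True, bidirectional=True)

    ids = torch.randint(0, vocab_size, (4, 25))   # (batch, seq_len)
    out, _ = bilstm(emb(ids))                     # (4, 25, 64) : 2 * hidden size
    print(out.shape)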
the remainder of the paper consists of 3 parts .
the remainder of this paper comprises 4 sections .
we used svm classifier that implements linearsvc from the scikit-learn library .
we use the svm implementation from scikit-learn , which in turn is based on libsvm .
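a minimal sketch of scikit-learn's LinearSVC on toy data ( note that LinearSVC is backed by liblinear , while the kernelized SVC class is the one backed by libsvm ) .

    # minimal sketch : linear svm classifier from scikit-learn
    import numpy as np
    from sklearn.svm import LinearSVC

    X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
    y = np.array([0, 0, 1, 1])  # separable by the first feature
    clf = LinearSVC().fit(X, y)
    print(clf.predict([[0.9, 0.2]]))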
we train trigram language models on the training set using the sri language modeling toolkit .
we apply sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing .
in this paper , we identify a range of collocations that are necessary for language generation .
in summary , we have shown in this paper that there are many different types of collocations needed for language generation .
early studies have suggested that lexical features , word pairs in particular , will be powerful predictors of discourse relations .
lexical co-occurrences have previously been shown to be useful for discourse level learning tasks .
maximum phrase length is set to 10 words and the parameters in the log-linear model are tuned by mert .
the parameters of the log-linear model are tuned by optimizing bleu on the development data using mert .
text categorization is the classification of documents with respect to a set of predefined categories .
text categorization is a classical text information processing task which has been studied adequately ( cite-p-18-1-9 ) .
the system output is evaluated using the meteor and bleu scores computed against a single reference sentence .
we evaluate system output automatically , using the bleu-4 modified precision score with the human written sentences as reference .
experimental results show that our model leads to significant improvements .
experiments show that our model leads to significant improvements .
this paper has presented a higher-order model for constituent parsing that factorizes a parse tree into larger parts than before .
this paper presents a higher-order model for constituent parsing aimed at utilizing more local structural context to decide the score of a grammar rule instance in a parse tree .
most of the recent work in this area ( cohen et al. , 2008 ) has focused on variants of the dependency model with valence by klein and manning .
the most successful recent work on dependency induction has focused on the dependency model with valence by klein and manning .
semantic similarity is a well established research area of natural language processing , concerned with measuring the extent to which two linguistic items are similar ( cite-p-13-1-1 ) .
semantic similarity is a central concept that extends across numerous fields such as artificial intelligence , natural language processing , cognitive science and psychology .
we used 14 datasets , most of which are non-projective , from the conll 2006 and 2008 shared tasks .
we used 14 datasets with non-projective dependencies from the conll-2006 and conll-2008 shared tasks .
in recent years , approaches based on deep learning architectures have also shown promising results .
recently , a number of neural models have been developed and achieved new levels of performance .
glove is an unsupervised learning algorithm for word embeddings .
glove is an unsupervised learning algorithm for obtaining vector representations of words .
reichart and rappoport applied self-training to domain adaptation using a small set of in-domain training data .
reichart and rappoport show that the number of unknown words is a good indicator of the usefulness of self-training when applied to small seed data sets .
most opinion mining approaches in english are based on sentiwordnet for extracting word-level sentiment polarity .
some opinion mining methods in english rely on the english lexicon sentiwordnet for extracting word-level sentiment polarity .
of the three base systems , the feature-based model obtained the best results , outperforming each lstm-based model 's correlation by .06 .
of the three base systems , the feature-based model obtained the best results , outperforming the lstm-based models by .06 .
in this paper we describe the system submitted for the semeval 2014 sentiment analysis in twitter task ( task 9 ) .
in this paper we described the system submitted for the semeval 2014 task 9 ( sentiment analysis in twitter ) .
cite-p-15-1-13 proposed an automatic method that gives an evaluation result of a translation system as a score .
cite-p-15-1-13 proposed an automatic method that gives an evaluation result of a translation system as a score for toeic .
estimating the probabilities of rules extracted from hypergraphs is an np-complete problem .
however , estimating the probabilities of rules extracted from hypergraphs is an np-complete problem , which is computationally infeasible .
we propose an algorithm called ledir that filters incorrect inference rules and identifies the directionality of correct ones .
our algorithm filters incorrect inference rules and identifies the directionality of the correct ones .
part-of-speech ( pos ) tagging is a crucial task for natural language processing ( nlp ) tasks , providing basic information about syntax .
part-of-speech ( pos ) tagging is a fundamental language analysis task .
semantic parsing is the task of mapping natural language sentences to a formal representation of meaning .
semantic parsing is the mapping of text to a meaning representation .
nmt models learn representations that capture the meaning of sentences .
researchers suggest that nmt models learn sentence representations that capture meaning .
automatic word alignment can be defined as the problem of determining a translational correspondence at word level given a parallel corpus of aligned sentences .
automatic word alignment is a vital component of nearly all current statistical translation pipelines .
although there is no consensus in the literature on what exactly a discourse unit consists of , it is generally assumed that each discourse unit describes a single event .
although there is no consensus in the literature on what exactly these units have to comprise , it is generally assumed that each discourse unit describes a single event .
in this paper , we introduce a gate mechanism into multi-task cnn .
in this paper , we introduce a gate mechanism into multi-task cnn to reduce the interference .
a downside of learning from scratch is failing to capitalize on prior linguistic or semantic knowledge , often encoded in existing resources .
a potential drawback to learning from scratch in end-to-end neural models is a failure to capitalize on existing knowledge sources .
relation extraction is the task of predicting attributes and relations for entities in a sentence ( zelenko et al. , 2003 ; bunescu and mooney , 2005 ; guodong et al. , 2005 ) .
relation extraction is the problem of populating a target relation ( representing an entity-level relationship or attribute ) with facts extracted from natural-language text .
comparing measurements made by these two methods allows researchers to determine whether changes are more cultural or linguistic .
comparing measurements made by these two approaches also allows researchers to assess the extent to which semantic changes are linguistic or cultural in nature .
coreference resolution is the process of linking multiple mentions that refer to the same entity .
coreference resolution is the next step on the way towards discourse understanding .
we use the degraded mt systems to translate queries and submit the translated queries of varying quality .
we conduct query translation with the degraded mt systems and obtain translated queries of varying quality .
to train our neural algorithm , we apply word embeddings looked up from 100-d glove vectors pre-trained on wikipedia and gigaword .
in a second baseline model , we also incorporate 300-dimensional glove word embeddings trained on wikipedia and the gigaword corpus .
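a minimal sketch for reading glove's plain-text vector format , where each line is a word followed by its vector components ( glove.6B.300d.txt is the publicly released file name ) .

    # minimal sketch : load glove text-format vectors into a dict
    import numpy as np

    embeddings = {}
    with open("glove.6B.300d.txt", encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            embeddings[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    print(embeddings["the"].shape)  # (300,)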
we use a pbsmt model built with the moses smt toolkit .
in this work we use the open-source toolkit moses .
following foulds et al , we perform simulated annealing which varies the m-h acceptance ratio to improve mixing .
foulds et al propose to apply simulated annealing to optimize instead of sample , which improves mixing for the mmsgtm .
semantic parsing is the task of mapping a natural language query to a logical form ( lf ) such as prolog or lambda calculus , which can be executed directly through database query ( zettlemoyer and collins , 2005 , 2007 ; haas and riezler , 2016 ; kwiatkowski et al. , 2010 ) .
semantic parsing is the task of transducing natural language ( nl ) utterances into formal meaning representations ( mrs ) , commonly represented as tree structures .
the model parameters of word embedding are initialized using word2vec .
an interesting implementation for obtaining word embeddings is the word2vec model , which is used here .
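a minimal sketch of initializing a pytorch embedding layer from word2vec vectors loaded with gensim ( the vectors.bin path and the toy vocabulary are hypothetical ) .

    # minimal sketch : copy pretrained word2vec rows into an embedding layer ;
    # words missing from the model keep their random initialization
    import torch
    import torch.nn as nn
    from gensim.models import KeyedVectors

    wv = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)
    vocab = ["the", "cat", "sat"]
    emb = nn.Embedding(len(vocab), wv.vector_size)
    with torch.no_grad():
        for i, w in enumerate(vocab):
            if w in wv:
                emb.weight[i] = torch.tensor(wv[w])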
specifically , a metaphor is a mapping of concepts from a source domain to a target domain ( cite-p-23-1-13 ) .
a metaphor is a figure of speech that creates an analogical mapping between two conceptual domains so that the terminology of one ( source ) domain can be used to describe situations and objects in the other ( target ) domain .
kaplan , king , and maxwell introduce a system designed for building a grammar by both extending and restricting another grammar .
kaplan et al introduce a system designed for building a grammar by both extending and restricting another grammar .
we build an open-vocabulary language model with kneser-ney smoothing using the srilm toolkit .
for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing .
semantic role labeling ( srl ) is the task of labeling the predicate-argument structures of sentences with semantic frames and their roles ( cite-p-18-1-2 , cite-p-18-1-19 ) .
semantic role labeling ( srl ) is the process of extracting simple event structures , i.e. , " who " did " what " to " whom " , " when " and " where " .
boostedmert is easy to implement , inherits the efficient optimization properties of mert , and can quickly boost the bleu score .
boostedmert is easy to implement , inherits mert ’ s efficient optimization procedure , and more effectively boosts the training score .
discourse parsing is the process of discovering the latent relational structure of a long form piece of text and remains a significant open challenge .
discourse parsing is a challenging natural language processing ( nlp ) task that has utility for many other nlp tasks such as summarization , opinion mining , etc . ( cite-p-17-3-3 ) .
parameters are updated through backpropagation with adagrad for speeding up convergence .
the parameters are optimized with adagrad under a cosine proximity objective function .
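a minimal pytorch sketch of adagrad updates under a cosine proximity objective ( the linear layer and random tensors are placeholders ) .

    # minimal sketch : one adagrad step maximizing cosine similarity
    import torch

    layer = torch.nn.Linear(10, 10)
    opt = torch.optim.Adagrad(layer.parameters(), lr=0.01)

    x = torch.randn(4, 10)
    target = torch.randn(4, 10)
    loss = 1.0 - torch.nn.functional.cosine_similarity(layer(x), target).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(loss.item())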
we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing .
we used the srilm toolkit to train a 4-gram language model on the english side of the training corpus .
we use the long short-term memory architecture for recurrent layers .
we adopt a long short-term memory network for the word-level and sentence-level feature extraction .
two runs used first-order measures ( lesk and first-order vector ) , and the third run used a second-order measure ( second-order vector ) .
the first two runs used first-order measures ( lesk and first-order vector ) , and the third run used a second-order measure ( second-order vector ) .
our mt decoder is a proprietary engine similar to moses .
we use the popular moses toolkit to build the smt system .