sentence1 (string, lengths 16–446) | sentence2 (string, lengths 14–436) |
---|---|
and we model the adaptive sentiment propagations as learning distributions over these composition functions . | it consists of more than one composition functions , and we model the adaptive sentiment propagations as distributions over these composition functions . |
graph connectivity measures can be successfully employed to perform unsupervised parameter tuning . | graph connectivity measures are employed for unsupervised parameter tuning . |
sentiment analysis is a research area where does a computational analysis of people ’ s feelings or beliefs expressed in texts such as emotions , opinions , attitudes , appraisals , etc . ( cite-p-12-1-3 ) . | sentiment analysis is a collection of methods and algorithms used to infer and measure affection expressed by a writer . |
zhang et al explore different markov chain orderings for an n-gram model on mtus in rescoring . | zhang et al explore different markov chain orderings for an n-gram model on mtus . |
morphological tagging is a distinct but related task , which aims at determining a single correct analysis of a word-form within the context of a sentence . | morphological tagging is the task of assigning a morphological analysis to a token in context . |
maas et al presented a probabilistic model that combined unsupervised and supervised techniques to learn word vectors , capturing semantic information as well as sentiment information . | maas et al combine two components -a probabilistic document model and a sentiment component -to jointly learn word vectors . |
socher et al , 2012 ) uses a recursive neural network in relation extraction . | socher et al present a compositional model based on a recursive neural network . |
we employ the glove and node2vec to generate the pre-trained word embedding , obtaining two distinct embedding for each word . | for the word-embedding based classifier , we use the glove pre-trained word embeddings . |
for estimating the monolingual we , we use the cbow algorithm as implemented in the word2vec package using a 5-token window . | to obtain these features , we use the word2vec implementation available in the gensim toolkit to obtain word vectors with dimension 300 for each word in the responses . |
the decoding weights are optimized with minimum error rate training to maximize bleu scores . | each system is optimized using mert with bleu as an evaluation measure . |
elson et al present a method for extracting social networks from nineteenth-century british novels and serials . | elson et al has looked at debunking comparative literature theories by examining networks for sixty 19th-century novels . |
the resulting constituent parse trees were converted into stanford dependency graphs . | the conversion to dependency trees was done using the stanford parser . |
carlson et al modify an ilp system similar to foil to learn rules with probabilistic conclusions . | carlson et al proposed a method based on inductive logic programming . |
there are several well-established , large-scale repositories of semantic frames for general language , eg , verbnet , propbank and framenet . | examples of well-known srl schemes motivated by different linguistic theories are framenet , propbank , and verbnet . |
we make use of the automatic pdtb discourse parser from lin et al to obtain the discourse relations over an input article . | we note that the discourse parser of lin et al comes trained on the pdtb , which provides annotations on top of the whole wsj data . |
coreference resolution is the problem of identifying which mentions ( i.e. , noun phrases ) refer to which real-world entities . | coreference resolution is the task of identifying all mentions which refer to the same entity in a document . |
skipgrams are a relatively new approach in nlp , most notable for their effectiveness in approximating word meaning in vector space models . | the skip-gram model has become one of the most popular manners of learning word representations in nlp . |
the taxonomy in yago is constructed by linking conceptual categories in wikipedia to wordnet synsets . | we use wikipedia item categories and the wordnet ontology for identifying entities from each subcategory . |
we used srilm to build a 4-gram language model with kneser-ney discounting . | firstly , we built a forward 5-gram language model using the srilm toolkit with modified kneser-ney smoothing . |
we use the logistic regression implementation of liblinear wrapped by the scikit-learn library . | for the classifiers we use the scikit-learn machine learning toolkit . |
for training the model , we use the linear kernel svm implemented in the scikit-learn toolkit . | we use logistic regression with l2 regularization , implemented using the scikit-learn toolkit . |
luong and manning designed a hybrid character-and word-based encoder to try to solve the out-of-vocabulary problem . | luong and manning proposed a hybrid scheme that consults character-level information whenever the model encounters an oov word . |
our approach is insensitive to the choice of pivot language , producing roughly the same alignments over six different pivot language choices . | somewhat surprisingly , we find that our approach is insensitive to the choice of pivot language . |
takamura et al used the spin model to extract word semantic orientation . | takamura et al proposed using a spin model to predict word polarity . |
applying a combination of asr confidence scores , nl-based features and domain-dependent predictors significantly improves the confidence measure . | the confidence model produces a score based on several predictor features including asr scores , nl scores , and domain knowledge . |
we apply online training , where model parameters are optimized by using adagrad . | we train the concept identification stage using infinite ramp loss with adagrad . |
a promising way to provide insight into these questions was brought forward as shared task 1 in the semeval-2014 campaign for semantic evaluation . | the semeval-2014 task 1 was designed to allow a rigorous evaluation of compositional distributional semantic models . |
in this work , we organize microblog posts as conversation trees based on reposting and replying relations . | we link microblog posts using reposting and replying relations to build conversation trees . |
distributed representations of words have become immensely successful as the building blocks for deep neural networks applied to a wide range of natural language processing tasks . | word embeddings are considered one of the key building blocks in natural language processing and are widely used for various applications . |
topic models have recently been applied to information retrieval , text classification , and dialogue segmentation . | traditional topic models like latent dirichlet allocation have been explored extensively to discover topics from text . |
relation extraction is a challenging task in natural language processing . | relation extraction is the task of finding relations between entities in text , which is useful for several tasks such as information extraction , summarization , and question answering ( cite-p-14-3-7 ) . |
meanwhile , we adopt glove pre-trained word embeddings 5 to initialize the representation of input tokens . | for the mix one , we also train word embeddings of dimension 50 using glove . |
for two grammars in different languages , an adequate algorithm should both improve the cross-lingual similarity between two grammars and maintain the non-triviality of each grammar , where non-triviality . | the dependency grammars given by cross-lingual similarization have much higher cross-lingual similarity while maintaining non-triviality . |
abstract meaning representation is a sembanking language that captures whole sentence meanings in a rooted , directed , labeled , and acyclic graph structure . | abstract meaning representation is a semantic representation that expresses the logical meaning of english sentences with rooted , directed , acylic graphs . |
etzioni et al presented the knowitall system that also utilizes hyponym patterns to extract class instances from the web . | etzioni et al present a system called knowitall , which implements an unsupervised domainindependent , bootstrapping approach to generate large facts of a specified ne from the web . |
according to results from a dependency parser , we can significantly improve the accuracy of deep parsing by using shallow syntactic analyses . | we present a novel framework that combines strengths from surface syntactic parsing and deep syntactic parsing to increase deep parsing accuracy , specifically by combining dependency and hpsg parsing . |
this paper describes a fully incremental dialogue system that can engage in dialogues . | this paper presents a dialogue system , called n umbers , in which all components operate incrementally . |
to alleviate this shortcoming , we performed smoothing of the phrase table using the goodturing smoothing technique . | to compensate this shortcoming , we performed smoothing of the phrase table using the good-turing smoothing technique . |
le and mikolov extended the word embedding learning model by incorporating paragraph information . | le and mikolov applied paragraph information into the word embedding technique to learn semantic representation . |
our pronominal anaphora model is an adaptation of the pronoun prediction model described by hardmeier et al to smt . | the first component of our model is a modified reimplementation of the pronoun prediction network introduced by hardmeier et al . |
the language models are estimated using the kenlm toolkit with modified kneser-ney smoothing . | unpruned language models were trained using lmplz which employs modified kneser-ney smoothing . |
most recent approaches use sequenceto-sequence model for paraphrase generation . | most recent approaches use the sequenceto-sequence model for paraphrase generation . |
by incorporating the mers models , the baseline system achieves statistically significant improvements . | experiments show that our approach achieves significant improvements over the baseline system . |
clark and curran also shows how the supertagger can reduce the size of the packed charts to allow discriminative log-linear training . | clark and curran showed that using a frequency cutoff can significantly reduce the size of the category set with only a small loss in coverage . |
we used srilm -sri language modeling toolkit to train several character models . | we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit . |
for all machine learning results , we train a logistic regression classifier implemented in scikitlearn with l2 regularization and the liblinear solver . | we train and evaluate a l2-regularized logistic regression classifier with the liblin-ear solver as implemented in scikit-learn . |
we used the mit java wordnet interface version 1 . 1 . 1 . | we use wordnet 3.0 , the latest version ( cite-p-14-1-3 ) . |
word embedding has been proven of great significance in most natural language processing tasks in recent years . | word embeddings have recently led to improvements in a wide range of tasks in natural language processing . |
given a sentence pair and a corresponding word alignment , phrases are extracted following the criterion in och and ney . | given a sentence pair and its corresponding word-level alignment , phrases will be extracted by using the approach in . |
we use the stanford corenlp for obtaining pos tags and parse trees from our data . | we use stanford corenlp for chinese word segmentation and pos tagging . |
sentiment analysis is the task of identifying positive and negative opinions , sentiments , emotions and attitudes expressed in text . | sentiment analysis is a natural language processing ( nlp ) task ( cite-p-10-1-14 ) which aims at classifying documents according to the opinion expressed about a given subject ( federici and dragoni , 2016a , b ) . |
evaluation shows that our model achieves the best performance . | our final event-driven model obtains the best result on this dataset . |
bleu is a system for automatic evaluation of machine translation . | the bleu is a classical automatic evaluation method for the translation quality of an mt system . |
they apply the dependency parser described in sagae and tsujii to the tree representations . | sagae and tsujii applied the standard co-training method for dependency parsing . |
in this work , we formally define the semantic structure of noun phrase queries . | in this work , we make the first attempt to define the semantic structure of noun phrase queries . |
we applied the ems in moses to build up the phrase-based translation system . | we experimented with the phrase-based smt model as implemented in moses . |
relation extraction is the task of finding relational facts in unstructured text and putting them into a structured ( tabularized ) knowledge base . | relation extraction is the task of tagging semantic relations between pairs of entities from free text . |
we use the same evaluation criterion as described in . | we use the same metrics as described in wu et al , which is similar to those in . |
automatic classification results were compared with a baseline method and with the manual judgement of several linguistics students . | the automatic classification results were compared with the manual judgement of several linguistics students . |
we used the penn wall street journal treebank . | we used the penn treebank wall street journal corpus . |
the genetic algorithms of mellish et al and karamanis and manarung , as well as the greedy algorithm of lapata , provide no theoretical guarantees on the optimality of the solutions they propose . | mellish et al and karamanis and manurung present algorithms based on genetic programming , and lapata uses a graph-based heuristic algorithm , but none of them can give any guarantees about the quality of the computed ordering . |
syntactic parsing is the process of determining the grammatical structure of a sentence as conforming to the grammatical rules of the relevant natural language . | syntactic parsing is a computationally intensive and slow task . |
a user of our system can explore the result space of a query by drilling down / up from one statement to another , according to entailment relations specified by an entailment graph . | a user of this system can explore the result space of her query , by drilling down/up from one proposition to another , according to a set of entailment relations described by an entailment graph . |
the evaluation method is the case insensitive ib-m bleu-4 . | our evaluation metric is case-insensitive bleu-4 . |
zeng et al use a convolutional deep neural network to extract lexical features learned from word embeddings and then fed into a softmax classifier to predict the relationship between words . | zeng et al developed a deep convolutional neural network to extract lexical and sentence level features , which are concatenated and fed into the softmax classifier . |
machine comprehension of text is the central goal in nlp . | machine comprehension of text is the overarching goal of a great deal of research in natural language processing . |
on the english portion of celex ( cite-p-18-1-2 ) , we achieve a 5 point improvement in segmentation accuracy . | we experiment with the model on english celex data and german derivbase ( cite-p-19-4-3 ) data . |
the lms are build using the srilm language modelling toolkit with modified kneserney discounting and interpolation . | the model was built using the srilm toolkit with backoff and kneser-ney smoothing . |
we first encode each word in the input sentence to an m-dimensional vector using word2vec . | in this run , we use a sentence vector derived from word embeddings obtained from word2vec . |
coreference resolution is a field in which major progress has been made in the last decade . | coreference resolution is a challenging task , that involves identification and clustering of noun phrases mentions that refer to the same real-world entity . |
in this paper , we present a novel approach to incremental decision making for output planning that is based on hierarchical reinforcement . | we have presented a novel approach to incremental dialogue decision making based on hierarchical rl combined with the notion of information density . |
this idea has been recently introduced in many nlp tasks , such as machine translation . | the encoder-decoder model has been shown effective in the field of machine translation . |
relation extraction ( re ) is the process of generating structured relation knowledge from unstructured natural language texts . | relation extraction is a core task in information extraction and natural language understanding . |
analysis indicates that our supervised similarity network learns phrase representations with a very clear boundary . | they have shown that the model learned with such coarse semantic features is portable across languages . |
on the output side , cite-p-24-1-7 , cite-p-24-1-8 and cite-p-24-1-5 use fol rules to rectify the output probability of nn , and then let nn learn from the rectified distribution . | on the output side , cite-p-24-1-7 , cite-p-24-1-8 and cite-p-24-1-5 use fol rules to rectify the output probability of nn , and then let nn learn from the rectified distribution in a teacher-student framework . |
the srilm toolkit was used to build the trigram mkn smoothed language model . | the model was built using the srilm toolkit with backoff and kneser-ney smoothing . |
we extract syntactic dependencies using stanford parser and use its collapsed dependency format . | we use the collapsed tree formalism of the stanford dependency parser . |
we used srilm to build a 4-gram language model with interpolated kneser-ney discounting . | we first trained a trigram bnlm as the baseline with interpolated kneser-ney smoothing , using srilm toolkit . |
we trained a 5-gram language model on the xinhua portion of gigaword corpus using the srilm toolkit . | we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model . |
smith and eisner propose quasi-synchronous grammar for cross-lingual parser projection and assume the existence of hundreds of target language annotated sentences . | smith and eisner perform dependency projection and annotation adaptation with quasi-synchronous grammar features . |
compressive summarization models can not merge facts from different source sentences , because . | nonetheless , compressive methods are unable to merge the related facts from different sentences . |
we use the automatic mt evaluation metrics bleu , meteor , and ter , to evaluate the absolute translation quality obtained . | for automatic evaluations , we use bleu and meteor to evaluate the generated comments with ground-truth outputs . |
on the wsj test data measured by average number of errors per sentence ; the numbers in bold indicate the least errors in each error type . | table 7 : comparison of different parsers on the wsj test data measured by average number of errors per sentence ; the numbers in bold indicate the least errors in each error type . |
our experiments show that the mt quality was improved by 10 % in paired comparison . | our experiments show that the mt quality improves by 10 % in test sentences according to a subjective evaluation . |
the translation quality is evaluated by case-insensitive bleu and ter metrics using multeval . | translation quality is measured by case-insensitive bleu on newstest13 using one reference translation . |
evaluation shows that our method has achieved an 82 % average fmeasure in aligning the most ambiguous framenet lexical entries . | evaluation results show that we achieve a promising 82 % average fmeasure for the most ambiguous lexical entries . |
to allow a comparison between transliteration systems , we are able to show that adding our transliterations to a production-level smt . | finally , we have demonstrated that machine transliteration is immediately useful to endto-end smt . |
for a large number of labelled negative stories , we classify them into some clusters . | to overcome the problem of a large number of labelled negative stories , we classify them into some clusters . |
as stated by , most nlg systems available generate text for high-skilled users . | on the other hand , as stated by , most nlg systems generate text for readers with good reading ability . |
the stanford dependency parser is used for extracting features from the dependency parse trees . | the phrase structure trees produced by the parser are further processed with the stanford conversion tool to create dependency graphs . |
the weights of the embedding layer are initialized using word2vec embeddings trained on 400 million tweets from the acl w-nut share task . | we present the text to the encoder as a sequence of word2vec word embeddings from a word2vec model trained on the hrwac corpus . |
spoken term detection ( std ) is a key information retrieval technology which aims open vocabulary search over large collections of spoken documents . | spoken term detection ( std ) is a subfield of speech retrieval , which locates occurrences of a query in a spoken archive . |
we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit . | we used a 5-gram language model trained on 126 million words of the xinhua section of the english gigaword corpus , estimated with srilm . |
bordes et al uses a vector space embedding approach to measure the semantic similarity between question and answers . | bordes et al , 2014 ) utilizes subgraph embedding to predict the confidence of candidate answers . |
mikolov et al proposed a method to use distributed representation of words and learns a linear mapping between vector space of different languages . | in the case of bilingual word embedding , mikolov et al propose a method to learn a linear transformation from the source language to the target language for the task of lexicon extraction from bilingual corpora . |
in this paper , we address the first three parts and evaluate our methodology . | for the purposes of this paper , we address the first three parts and leave the last for future work . |
for this language , which has limited the number of possible tags , we used a very rich tagset of 680 morphosyntactic tags . | unlike most previous work , which has used a small number of grammatical categories , we work with 680 morphosyntactic tags . |
we used the moses mt toolkit with default settings and features for both phrase-based and hierarchical systems . | our phrase-based mt system is trained by moses with standard parameters settings . |
adopting the extracted translations can significantly improve the performance of the moses machine translation system . | we also demonstrate that extracted translations significantly improve the performance of the moses machine translation system . |
coreference resolution is a task aimed at identifying phrases ( mentions ) referring to the same entity . | coreference resolution is a multi-faceted task : humans resolve references by exploiting contextual and grammatical clues , as well as semantic information and world knowledge , so capturing each of these will be necessary for an automatic system to fully solve the problem . |
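Assuming these pairs are distributed as a Hugging Face dataset with `sentence1` and `sentence2` string columns matching the schema above, a minimal loading and inspection sketch might look like the following (the repository id `user/sentence-pairs` is a placeholder, not the dataset's actual name):

```python
# A minimal sketch of loading and inspecting a sentence-pair dataset like
# this one with the Hugging Face `datasets` library. The repository id
# below is hypothetical; substitute the dataset's real Hub id.
from datasets import load_dataset

dataset = load_dataset("user/sentence-pairs", split="train")  # hypothetical repo id

# Each row has two string fields, "sentence1" and "sentence2",
# matching the column schema shown in the table above.
for row in dataset.select(range(3)):
    print(row["sentence1"], "|", row["sentence2"])
```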