sentence1 (string, lengths 16-446) | sentence2 (string, lengths 14-436) |
---|---|
relation extraction is the problem of populating a target relation ( representing an entity-level relationship or attribute ) with facts extracted from natural-language text . | relation extraction is a crucial task in the field of natural language processing ( nlp ) . |
in machine learning , there is a class of semi-supervised learning algorithms that learns from positive and unlabeled examples ( pu learning for short ) . | machine learning consists of a hypothesis function which learns this mapping based on latent or explicit features extracted from the input data . |
for pos tagging and syntactic parsing , we use the stanford nlp toolkit . | we use stanford corenlp for pos tagging and lemmatization . |
al used translation probabilities between the document and query terms to account for synonymy and polysemy . | al used translation probabilities between terms to account for synonymy and polysemy . |
reranking has become a popular technique for solving various structured prediction tasks , such as phrase-structure and dependency parsing , semantic role labeling and machine translation . | discriminative reranking has become a popular technique for many nlp problems , in particular , parsing and machine translation . |
therefore , word segmentation is a preliminary and important preprocess for chinese language processing . | word segmentation is the first step of natural language processing for japanese , chinese and thai because they do not delimit words by whitespace . |
information extraction ( ie ) is the task of generating structured information , often in the form of subject-predicate-object relation triples , from unstructured information such as natural language text . | information extraction ( ie ) is a technology that can be applied to identifying both sources and targets of new hyperlinks . |
the irstlm toolkit is used to build language models , which are scored using kenlm in the decoding process . | the irstlm toolkit is used to build ngram language models with modified kneser-ney smoothing . |
takamura et al also have reported a method for extracting polarity of words . | takamura et al used the spin model to extract word semantic orientation . |
the colaba project is another large effort to create dialectal arabic resources . | the cross lingual arabic blog alerts project is another large-scale effort to create dialectal arabic resources . |
for probabilities , we trained 5-gram language models using srilm . | we also use a 4-gram language model trained using srilm with kneser-ney smoothing . |
which show that phrase structure trees , even when deprived of the labels , retain in a certain sense all the structural information . | we have also shown that phrase structure trees , even when deprived of the labels , retain in a certain sense all the structural information . |
le and mikolov presented the paragraph vector algorithm to learn a fixed-size feature representation for documents . | le and mikolov extended the word embedding learning model by incorporating paragraph information . |
after discussing some helpful implications of critical tokenization in effective tokenization disambiguation and in efficient tokenization implementation , we suggest areas for future research . | in this paper , we have also discussed some important implications of the notion of critical tokenization in the area of character string tokenization research and development . |
a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit . | language models were built using the sri language modeling toolkit with modified kneser-ney smoothing . |
we used the 300-dimensional glove word embeddings learned from 840 billion tokens in the web crawl data , as general word embeddings . | we use the 200-dimensional global vectors , pre-trained on 2 billion tweets , covering over 27-billion tokens . |
the log-linear parameter weights are tuned with mert on the development set . | maximum phrase length is set to 10 words and the parameters in the log-linear model are tuned by mert . |
we trained kneser-ney discounted 5-gram language models on each available corpus using the srilm toolkit . | we trained a specific language model using srilm from each of these corpora in order to estimate n-gram log-probabilities . |
in addition , we utilize the pre-trained word embeddings with 300 dimensions from for initialization . | in particular , we use the neural-network based models from , also referred as word embeddings . |
sentiment analysis is a multi-faceted problem . | sentiment analysis is a much-researched area that deals with identification of positive , negative and neutral opinions in text . |
pseudo-word is a kind of multi-word expression ( includes both unary word and multi-word ) . | pseudo-word is a kind of basic multi-word expression that characterizes minimal sequence of consecutive words in sense of translation . |
based on this observation , we propose using context gates in nmt to dynamically control the contributions from the source and target contexts . | in this work , we propose to use context gates to control the contributions of source and target contexts on the generation of target words ( decoding ) in nmt . |
the feature weights of the log-linear models were trained with the help of minimum error rate training and optimized for 4-gram bleu on the development test set . | the system was trained in a standard manner , using a minimum error-rate training procedure with respect to the bleu score on held-out development data to optimize the loglinear model weights . |
a number of researchers speak of cue phrases in utterances that can serve as useful indicators of discourse structure . | a number of researchers speak of cue or key phrases in utterances that can serve as useful indicators of discourse structure . |
in this work , we propose a method , called dual training and dual prediction ( dtdp ) , to address the polarity shift problem . | in this paper , we focus on the polarity shift problem , and propose a novel approach , called dual training and dual prediction ( dtdp ) , to address it . |
for all experiments , we used a 5-gram english language model trained on the afp and xinhua portions of the gigaword v3 corpus with modified kneser-ney smoothing . | a trigram english language model with modified kneser-ney smoothing was trained on the english side of our training data as well as portions of the gigaword v2 english corpus , and was used for all experiments . |
we use the stanford corenlp shift-reduce parsers for english , german , and french . | we use stanford corenlp for chinese word segmentation and pos tagging . |
we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model . | we trained a 4-gram language model on the xinhua portion of gigaword corpus using the sri language modeling toolkit with modified kneser-ney smoothing . |
to our best knowledge , this is not only the first work to report the empirical results of active learning . | to our knowledge , it is the first work on considering the three criteria all together for active learning . |
we use the word2vec skip-gram model to learn initial word representations on wikipedia . | we use word2vec as the vector representation of the words in tweets . |
framenet is a semantic resource which provides over 1200 semantic frames that comprise words with similar semantic behaviour . | framenet is an expert-built lexical-semantic resource incorporating the theory of frame-semantics . |
we used moses , a phrase-based smt toolkit , for training the translation model . | we used moses , a state-of-the-art phrase-based smt model , in decoding . |
in this paper , we focused on adapting only the translation model . | in this paper , we presented a new approach for domain adaptation using ensemble decoding . |
we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit . | the language models were interpolated kneser-ney discounted trigram models , all constructed using the srilm toolkit . |
( riloff et al , 2013 ) addressed one common form of sarcasm as the juxtaposition of a positive sentiment attached to a negative situation , or vice versa . | riloff et al identify sarcasm that arises from the contrast between a positive sentiment referring to a negative situation . |
in this paper , we introduce a supervised learning approach to re that requires only a handful of training examples . | in this paper , we introduce a supervised learning approach to re that requires only a handful of training examples and uses the web as a corpus . |
the log-linear parameter weights are tuned with mert on the development set . | the log-linear feature weights are tuned with minimum error rate training on bleu . |
semantic parsing is the task of mapping natural language sentences to a formal representation of meaning . | semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation ( mr ) . |
bilingual dictionaries of technical terms are important resources for many natural language processing tasks including statistical machine translation and cross-language information retrieval . | bilingual dictionaries are an essential resource in many multilingual natural language processing tasks such as machine translation and cross-language information retrieval . |
to the best of our knowledge , this is the first detailed annotation study on scoring narrative essays for different aspects of narrative quality . | to the best of our knowledge , this work makes a first attempt at investigating the evaluation of narrative quality using automated methods . |
automatic image captioning is a fundamental task that couples visual and linguistic learning . | automatic image captioning is a fast growing area of research which lies at the intersection of computer vision and natural language processing and refers to the problem of generating natural language descriptions from images . |
we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing . | for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided . |
semantic role labeling ( srl ) is the task of labeling predicate-argument structure in sentences with shallow semantic information . | semantic role labeling ( srl ) is a task of analyzing predicate-argument structures in texts . |
transliteration is the process of converting terms written in one language into their approximate spelling or phonetic equivalents in another language . | phonetic translation across these pairs is called transliteration . |
to this end , we proposed a probabilistic approach for performing joint query annotation . | to address this challenge , we propose a probabilistic approach for performing joint query annotation . |
a particular generative model , which is well suited for the modeling of text , is called latent dirichlet allocation . | latent dirichlet allocation is one of the widely adopted generative models for topic modeling . |
with the argumentative structure proposed by cite-p-9-3-0 ( section 2 ) , we show that using all the argumentation features predicts essay scores that are highly correlated with human scores ( section 3 ) . | we show that argumentation features derived from a coarse-grained , argumentative structure of essays are helpful in predicting essay scores that have a high correlation with human scores . |
we use pre-trained 100 dimensional glove word embeddings . | we use pre-trained word vectors of glove for twitter as our word embedding . |
there have been attempts at using minimal semantic information in dependency parsing for hindi . | recently , in the work on hindi dependency parser by bharati et al , the use of semantic features has been exploited . |
we used moses for pbsmt and hpbsmt systems in our experiments . | for all experiments , we used the moses smt system . |
the n-gram models are created using the srilm toolkit with good-turing smoothing for both the chinese and english data . | the lms are built using the srilm language modelling toolkit with modified kneser-ney discounting and interpolation . |
we further add skip connections between the lstm layers to the softmax layers , since they are proved effective for training neural networks . | in this work , we integrate residual connections with our networks to form connections between layers . |
täckström et al explore the use of mixed type and token annotations in which a tagger is learned by projecting information via parallel text . | täckström et al evaluate the use of mixed type and token constraints generated by projecting information from a high-resource language to a low-resource language via a parallel corpus . |
whitehill et al proposed a probabilistic model to filter labels from non-experts , in the context of an image labeling task . | whitehill et al proposed a probabilistic method for combining the labels of multiple crowdworkers to acquire reliable labels . |
for example , socher et al demonstrates that sentiment analysis , which is usually approached as a flat classification task , can be viewed as tree-structured . | for example , socher et al exploited tensor-based function in the task of sentiment analysis to capture more semantic information from constituents . |
in the following experiments , we explore which factors affect stability , as well as how this stability affects downstream tasks . | in the following experiments , we explore which factors affect stability , as well as how this stability affects downstream tasks that word embeddings are commonly used for . |
the grefenstette relation extractor produces context relations that are then lemmatised using the minnen et al morphological analyser . | the sextant relation extractor produces context relations that are then lemmatised using the minnen et al morphological analyser . |
the parameters of our mt system were tuned on a development corpus using minimum error rate training . | we used minimum error rate training for tuning on the development set . |
sadamitsu et al proposed a bootstrapping method that uses unsupervised topic information estimated by latent dirichlet allocation to alleviate semantic drift . | xing et al presented topic aware response generation by incorporating topic words obtained from a pre-trained lda model . |
our approach follows that of johnson et al , a multilingual mt approach that adds an artificial token to encode the target language to the beginning of each source sentence in the parallel corpus . | following the setup of johnson et al , we prepend a to-target-language tag to the source side of each sentence pair and mix all language pairs in the nmt training data . |
this model deals with phonetic errors significantly better than previous models . | the proposed method builds an explicit error model for word pronunciations . |
we show that it is beneficial to distinguish expert users from non-experts . | our results illustrate the importance of distinguishing experts from non-experts . |
the system was trained using moses with default settings , using a 5-gram language model created from the english side of the training corpus using srilm . | a 5-gram language model was created with the sri language modeling toolkit and trained using the gigaword corpus and english sentences from the parallel data . |
the proposed forms of browsing structures include topic clusters and monothetic concept hierarchies . | the popular ir approaches include clustering and monothetic concept hierarchies . |
recent work on evaluating spoken dialogue systems suggests that the information presentation phase of complex dialogues is often the primary contributor to dialogue duration . | work on evaluating sds suggests that the information presentation phase is the primary contributor to dialogue duration , and as such , is a central aspect of sds design . |
nguyen and grishman employed convolutional neural networks to automatically extract sentence-level features for event detection . | nguyen et al use convolutional neural networks and recurrent neural networks with word- and entity-position embeddings for relation extraction and event detection . |
later , xue et al combined the language model and translation model to a translation-based language model and observed better performance in question retrieval . | xue et al enhanced the performance of word based translation model by combining query likelihood language model to it . |
in natural language , a word often assumes different meanings , and the task of determining the correct meaning , or sense , of a word in different contexts is known as word sense disambiguation ( wsd ) . | word sense disambiguation ( wsd ) is a widely studied task in natural language processing : given a word and its context , assign the correct sense of the word based on a predefined sense inventory ( cite-p-15-3-4 ) . |
in section 7 we present the results followed by discussion . | in section 7 we present the results followed by discussion in section 8 . |
the word embeddings were obtained using word2vec 2 tool . | word embeddings for english and hindi have been trained using word2vec 1 tool . |
we test the statistical significance of differences between various mt systems using the bootstrap resampling method . | to see whether an improvement is statistically significant , we also conduct significance tests using the paired bootstrap approach . |
we incorporate the configuration of the crf as described in a participating system using only the shortest possible annotation as exact true positive per entity . | we incorporate the configuration of the crf as described in a participating system using only the shortest possible annotation as exact true positive per entity containing the classes person , organization , locations and misc . |
to train our models , which are fully differentiable , we use the adadelta optimizer . | we train the model through stochastic gradient descent with the adadelta update rule . |
answer selection is a process which pinpoints correct answer ( s ) from the extracted candidate answers . | answer selection ( as ) is a crucial subtask of the open domain question answering ( qa ) problem . |
we use sri language modeling toolkit to train a 5-gram language model on the english sentences of fbis corpus . | for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided . |
in this paper we propose lobbyback , a system that automatically identifies clusters of documents that exhibit text reuse , and generates “ prototypes ” . | in this paper we present lobbyback , a system to reconstruct the “ dark corpora ” that is comprised of model bills which are copied ( and modified ) by resource constrained state legislatures . |
smith and eisner perform dependency projection and annotation adaptation with quasi-synchronous grammar features . | smith and eisner propose effective qg features for parser adaptation and projection . |
we use the adam stochastic optimization method to minimize the negative log-likelihood cost with fine-tuning on the word embeddings . | we use binary crossentropy loss and the adam optimizer for training the nil-detection models . |
pantel and lin automatically map the senses to wordnet , and then measure the quality of the mapping . | pantel and lin automatically mapped the senses to wordnet , and then measured the quality of the mapping . |
semantic parsing is the task of mapping a natural language ( nl ) sentence into a completely formal meaning representation ( mr ) or logical form . | semantic parsing is the task of transducing natural language ( nl ) utterances into formal meaning representations ( mrs ) , commonly represented as tree structures . |
wsd assigns to each cluster a score equal to the sum of weights of its hyperedges found in the local context of a target word . | wsd assigns to each induced cluster a score equal to the sum of weights of its hyperedges found in the local context of the target word . |
hamilton et al measured the variation between models by observing semantic change using diachronic corpora . | similarly , hamilton et al defined a methodology to quantify semantic change using four languages . |
we propose a cross-lingual framework for fine-grained opinion mining . | we presented a cross-lingual framework for fine-grained opinion mining . |
a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit . | the probabilistic language model is constructed on google web 1t 5-gram corpus by using the srilm toolkit . |
in particular , using pre-trained word embeddings like word2vec to represent the text has proved to be useful in classifying text from different domains . | embeddings pre-trained on unlabeled text with tools such as word2vec and glove have been used to extend traditional ir models . |
we assume the part-of-speech tagset of the penn treebank . | we use the standard corpus for this task , the penn treebank . |
bilingual lexicon induction is the task of identifying word translation pairs using source and target monolingual corpora , which are often comparable . | bilingual lexicon induction is the task of finding words that share a common meaning across different languages . |
other work tries to extract hypernym relations from large-scale encyclopedias like wikipedia and achieves high precision . | other work extracts hypernym relations from encyclopedias but has limited coverage . |
experiments show that the new dataset does not only enable detailed analyses of the different encoders , but also provides a gauge to predict successes of distributed representations of relational patterns . | experiments show that the new dataset does not only enable detailed analyses of the different encoders , but also provides a gauge to predict successes of distributed representations of relational patterns in the relation classification task . |
lexical markers combined with syntactic structures , are easy to spot , and can provide a first set of detection . | these clues can be characterized by syntactic structures and lexical markers . |
coreference resolution is a challenging task , that involves identification and clustering of noun phrases mentions that refer to the same real-world entity . | coreference resolution is a multi-faceted task : humans resolve references by exploiting contextual and grammatical clues , as well as semantic information and world knowledge , so capturing each of these will be necessary for an automatic system to fully solve the problem . |
we use the stanford parser to generate the grammar structure of review sentences for extracting syntactic d-features . | in order to acquire syntactic rules , we parse the chinese sentence using the stanford parser with its default chinese grammar . |
we present an extension to the itg constraints . | in the following , we will call these the itg constraints . |
we use the pool-based approach to active learning , because it is a natural fit for domain adaptation . | in this work , we are interested in selective sampling for pool-based active learning , and focus on uncertainty sampling . |
word-insertion operation is intended to capture linguistic differences in specifying syntactic cases . | these operations capture linguistic differences such as word order and case marking . |
semantic role labeling ( srl ) is the task of labeling the predicate-argument structures of sentences with semantic frames and their roles ( cite-p-18-1-2 , cite-p-18-1-19 ) . | semantic role labeling ( srl ) is the process of assigning semantic roles to strings of words in a sentence according to their relationship to the semantic predicates expressed in the sentence . |
auc scores for apg and sl kernels for medline corpus have been reported in , while scores of all three baseline kernels for aimed and lll corpus are reported in . | the auc scores for apg and sl kernels for medline corpus have been reported in , while scores of all baseline kernels for aimed and lll corpus are reported in . |
coreference resolution is a task aimed at identifying phrases ( mentions ) referring to the same entity . | coreference resolution is the task of identifying all mentions which refer to the same entity in a document . |
we use stanford part-of-speech tagger to automatically detect nouns from text . | we build language models on words as well as part-of-speech tags from stanford pos-tagger . |
in this paper , we propose methods for controlling the output sequence length for neural encoder-decoder . | in this paper , we propose and investigate four methods for controlling the output sequence length for neural encoder-decoder models . |
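For readers who want to work with these pairs programmatically, here is a minimal sketch of loading and inspecting a sentence-pair dataset like this one with the Hugging Face `datasets` library. The repository ID below is a placeholder (the actual path is not shown on this page); the `sentence1`/`sentence2` column names and the length ranges are taken from the table header above.

```python
from datasets import load_dataset

# "user/citation-paraphrase-pairs" is a hypothetical repository ID --
# replace it with this dataset's actual path on the Hub.
dataset = load_dataset("user/citation-paraphrase-pairs")

# Each row pairs two citation sentences describing the same method or task.
for row in dataset["train"].select(range(3)):
    print(row["sentence1"], "|", row["sentence2"])

# Sanity-check the column length range shown in the header above
# (roughly 16-446 characters for sentence1, per the viewer metadata).
lengths = [len(s) for s in dataset["train"]["sentence1"]]
print(min(lengths), max(lengths))
```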