sentence1 : string , lengths 16-446
sentence2 : string , lengths 14-436
relation extraction is the task of predicting attributes and relations for entities in a sentence ( zelenko et al. , 2003 ; bunescu and mooney , 2005 ; guodong et al. , 2005 ) .
relation extraction is a core task in information extraction and natural language understanding .
we extract all word pairs which occur as 1-to-1 alignments , and later refer to them as the word-aligned list .
we extract all word pairs which occur as 1-to-1 alignments , and later refer to them as the list of word pairs .
for parsing , we use the stanford parser .
after this we parse articles using the stanford parser .
as expected , the glass-box features help to reduce mae and rmse .
as expected , the glass-box features help to reduce mae and rmse for both err and n ? .
this allows us to conclude that the variability of word order is a more negative factor on parsing performance than long dependencies .
based on these artificial data on twelve languages , we show that longer dependencies and higher word order variability degrade parsing performance .
we used a standard pbmt system built using the moses toolkit .
we implement the pbsmt system with the moses toolkit .
we examine one method for performing knowledge base completion that is currently in use : the path ranking algorithm .
we have explored several practical issues that arise when using the path ranking algorithm for knowledge base completion .
this supports the findings of wallace that lexical features alone are not effective at identifying irony .
this partly supports the findings of wallace that verbal irony can not be recognised through lexical clues alone .
we use the adam optimizer with its default parameters and a mini-batch size of 32 .
additionally , we compile the model using the adamax optimizer .
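A minimal sketch of the optimizer setup above (Adam with default parameters and mini-batch size 32, with Adamax as the mentioned variant), assuming a Keras model; the architecture and data are illustrative placeholders, not from the source papers:

```python
import numpy as np
from tensorflow import keras

# Placeholder model; only the optimizer configuration mirrors the sentences above.
model = keras.Sequential([
    keras.layers.Input(shape=(100,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(2, activation="softmax"),
])
# "adam" uses the Keras defaults; swap in "adamax" for the variant
# mentioned in the second sentence.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

X = np.random.rand(256, 100).astype("float32")  # dummy features
y = np.random.randint(0, 2, size=256)           # dummy labels
model.fit(X, y, batch_size=32, epochs=1)        # mini-batch size of 32
```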
we used the logistic regression implemented in the scikit-learn library with the default settings .
we employed the machine learning toolkit scikit-learn for training the classifier .
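A sketch of the scikit-learn usage these sentences describe: logistic regression with default settings, on synthetic stand-in data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; the papers used their own task-specific features.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression()  # default settings, as stated above
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```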
instead , the metric topic coherence has been shown to correlate well with human judgments .
so the topic coherence metric is utilized to assess topic quality , which is consistent with human labeling .
coreference resolution is the process of linking together multiple expressions of a given entity .
coreference resolution is the task of identifying all mentions which refer to the same entity in a document .
in this paper , we propose a syllable-based method for tweet normalization to study the cognitive process of non-standard word creation .
in this paper , we propose a syllable-based method for tweet normalization to study the cognitive process of non-standard word creation in social media .
liu and gildea introduced two types of semantic features for tree-to-string machine translation .
liu and gildea added two types of semantic role features into a tree-to-string translation model .
given that many comparisons are figurative , a system that discriminates literal from figurative comparisons is needed .
importantly , there is no general rule separating literal from figurative comparisons .
we propose both the concept of excitation and an automatic method for its acquisition .
we propose a new semantic orientation , excitation , and its automatic acquisition method .
svms are a new learning method but have been reported by joachims to be well suited for learning in text classification .
svms are frequently used for text classification and have been applied successfully to nli .
on nell 's knowledge base , 87 % of the word senses it creates correspond to real-world concepts , and 85 % of noun phrases that it suggests refer to the same concept are indeed synonyms .
when conceptresolver is run on nell 's knowledge base , 87 % of the word senses it creates correspond to real-world concepts , and 85 % of noun phrases that it suggests refer to the same concept are indeed synonyms .
in experiments on senseval words , we showed that the wikipedia sense annotations can be used to build a word sense disambiguation system .
through word sense disambiguation experiments , we show that the wikipedia-based sense annotations are reliable and can be used to construct accurate sense classifiers .
phrasebased smt models are tuned using minimum error rate training .
the nnlm weights are optimized as the other feature weights using minimum error rate training .
the word embeddings are initialized with the publicly available word vectors trained with glove and updated through back propagation .
the weights of the word embedding layer are initialized with the 300-dimensional glove embeddings pre-trained on common crawl data .
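A sketch of the GloVe initialisation described in these pairs: load pre-trained vectors into an embedding matrix and keep it trainable so backpropagation can update it. The file name and toy vocabulary are assumptions:

```python
import numpy as np
import torch
import torch.nn as nn

vocab = {"the": 0, "cat": 1, "sat": 2}  # toy vocabulary
dim = 300
# Words missing from GloVe keep a small random initialisation.
matrix = np.random.normal(scale=0.1, size=(len(vocab), dim)).astype("float32")

# Standard GloVe text format: word followed by its vector components.
with open("glove.840B.300d.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.rstrip().split(" ")
        if parts[0] in vocab and len(parts) == dim + 1:
            matrix[vocab[parts[0]]] = np.asarray(parts[1:], dtype="float32")

# freeze=False keeps the embeddings trainable (updated through backprop).
emb = nn.Embedding.from_pretrained(torch.from_numpy(matrix), freeze=False)
```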
we consider a phrase-based translation model and a hierarchical translation model .
in this work , we apply a standard phrase-based translation system .
however , in their further study , they reported even lower bleu scores after grouping mwes according to part-of-speech on a large corpus .
however , in a further study , a lower bleu score is reported after grouping mwes by part-of-speech on a large corpus .
experimental results on duc2004 data sets and some expanded data demonstrate the good quality of our summaries .
experimental results on duc2004 dataset demonstrate the effectiveness of our model .
for the n-gram lm , we use the srilm toolkit to train a 4-gram lm on the xinhua portion of the gigaword corpus .
we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .
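SRILM training recurs throughout this section. A sketch of the call via Python's subprocess, assuming the SRILM binaries are on PATH and train.txt holds one sentence per line; the -kndiscount -interpolate combination gives interpolated modified Kneser-Ney smoothing:

```python
import subprocess

# Train a 5-gram LM with interpolated modified Kneser-Ney smoothing.
subprocess.run(
    ["ngram-count",
     "-order", "5",
     "-kndiscount", "-interpolate",
     "-text", "train.txt",
     "-lm", "train.lm"],
    check=True,
)
```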
these language models were built up to an order of 5 with kneser-ney smoothing using the srilm toolkit .
the language model was a kneser-ney interpolated trigram model generated using the srilm toolkit .
maltparser is a transition-based dependency parser generator .
being a transition-based parser , maltparser does incremental parsing by design .
we extract dependency structures from the penn treebank using the head rules of yamada and matsumoto .
we use the wsj portion of the penn treebank , augmented with head-dependent information using the rules of yamada and matsumoto .
the phrase structure trees produced by the parser are further processed with the stanford conversion tool to create dependency graphs .
the dependency parse trees are finally obtained using a phrase structure parser , using the post-processing of the stanford corenlp package .
an lm is trained on 462 million words in english using the srilm toolkit .
a 4-gram language model is trained on the monolingual data using the srilm toolkit .
gabrilovich and markovitch introduced the explicit semantic analysis which represents a word by its distribution over the labeled wikipedia pages instead of the latent concepts as in lsa and lda .
gabrilovich and markovitch introduced the esa model in which wikipedia and the open directory project were used to obtain the explicit concepts .
minimum error rate training under bleu criterion is used to estimate 20 feature function weights over the larger development set .
all the feature weights and the weight for each probability factor are tuned on the development set with minimum-error-rate training .
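For reference, minimum error rate training (MERT) selects the feature weights that minimise a corpus-level error (e.g. 1 - BLEU) of the decoder's best output on the development set; schematically:

```latex
\hat{\lambda} = \operatorname*{argmin}_{\lambda}
  \sum_{s=1}^{S} \mathrm{Err}\!\left(r_s,\ \hat{e}(f_s;\lambda)\right),
\qquad
\hat{e}(f_s;\lambda) = \operatorname*{argmax}_{e} \sum_{m} \lambda_m\, h_m(e, f_s)
```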
active learning is a general framework and does not depend on tasks or domains .
active learning is a framework that makes it possible to efficiently train statistical models by selecting informative examples from a pool of unlabeled data .
we used glove to learn 300-dimensional word embeddings .
for representing words , we used 100 dimensional pre-trained glove embeddings .
we have shown that incorporating eye gaze information improves reference resolution performance .
in addition , incorporating eye gaze with word confusion networks further improves performance .
in recent years , phrase-based systems for statistical machine translation have delivered state-of-the-art performance on standard translation tasks .
phrase-based translation systems prove to be the state-of-the-art as they have delivered the best translation performance in recent machine translation evaluations .
unlike sentprop 's sentiment-focused approach , we provide a framework to understand the semantics of words with respect to 732 semantic axes .
we have proposed semaxis to examine a nuanced representation of words based on diverse semantic axes .
recently , neural network-based methods have been proposed to learn distributed representations of words on large-scale corpora .
recently , the embedding of words into a low-dimensional space using neural networks was suggested .
for co-occurrence statistical methods , hu and liu proposed a pioneer research for opinion summarization based on association rules .
the dataset proposed in hu and liu is the most used resource in aspect-based opinion summarization .
this algorithm is based on distributional clustering and alignmentbased learning .
the alignment-based learning algorithm is an unsupervised , symbolic , structure bootstrapping system .
all the weights of those features are tuned by using minimum error rate training .
the feature weights λi are trained in concert with the lm weight via minimum error rate training .
part-of-speech ( pos ) tagging is the task of assigning each of the words in a given piece of text a contextually suitable grammatical category .
part-of-speech ( pos ) tagging is a well studied problem in these fields .
we use the word2vec tool to pre-train the word embeddings .
we pre-train the word embedding via word2vec on the whole dataset .
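A sketch of word2vec pre-training with gensim; the toy corpus stands in for the dataset mentioned above:

```python
from gensim.models import Word2Vec

# Toy corpus; in practice this would be the whole (tokenised) dataset.
sentences = [["we", "pre-train", "word", "embeddings"],
             ["via", "word2vec", "on", "the", "whole", "dataset"]]
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1)
print(model.wv["word2vec"][:5])  # first dims of the learned 100-dim vector
```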
we used latent dirichlet allocation as our exploratory tool .
we have used latent dirichlet allocation model as our main topic modeling tool .
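A sketch of latent dirichlet allocation as a topic modeling tool, using gensim on a toy corpus:

```python
from gensim import corpora
from gensim.models import LdaModel

# Toy documents; real use passes the tokenised document collection.
docs = [["topic", "model", "document"], ["word", "topic", "distribution"]]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

lda = LdaModel(corpus, num_topics=2, id2word=dictionary, passes=5)
for topic_id, words in lda.print_topics():
    print(topic_id, words)
```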
in order to compare our system with both baselines , we employed the test set of examples which was made available by durrett and denero , since this test-set included verbs with both irregular and regular forms .
to evaluate the systems , we used the data set published by durrett and denero , which includes full inflection tables for a large number of lemmas in german , spanish , and finnish .
distributions inferred from a similarity graph are used to regularize the learning of crfs model on labeled and unlabeled data .
different from their concern , our emphasis is to learn the semi-supervised model by injecting the label information from a similarity graph constructed from labeled and unlabeled data .
we used minimum error rate training for tuning on the development set .
all the feature weights were trained using our implementation of minimum error rate training .
this demonstrates the feasibility of this approach to single sentence text generation .
this system demonstrates the feasibility of the semantic transformational method of text generation .
we used the srilm toolkit to train a 4-gram language model on the xinhua portion of the gigaword corpus , which contains 238m english words .
we train a 4-gram language model on the xinhua portion of the english gigaword corpus using the srilm toolkit with modified kneser-ney smoothing .
the pre-processed monolingual sentences will be used by srilm or berkeleylm to train an n-gram language model .
the probabilistic language model is constructed on google web 1t 5-gram corpus by using the srilm toolkit .
we train a 5-gram language model with the xinhua portion of english gigaword corpus and the english side of the training set using the srilm toolkit .
we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus .
we propose a novel model of learning visually-grounded representations of language from paired textual and visual input .
we propose imaginet , a model of learning visually grounded representations of language from coupled textual and visual input .
word sense disambiguation ( wsd ) is the task of automatically determining the correct sense for a target word given the context in which it occurs .
word sense disambiguation ( wsd ) is a key enabling technology that automatically chooses the intended sense of a word in context .
in this work , we study sentiment composition in phrases that include at least one positive and at least one negative word .
in this work , we apply several unsupervised and supervised techniques of sentiment composition for a specific type of phrases : opposing polarity phrases .
we were able to train a 4-gram language model using kenlm .
and for language modeling , we used kenlm to build a 5-gram language model .
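A sketch of KenLM usage: the model itself is trained with KenLM's lmplz binary (e.g. lmplz -o 5 < train.txt > lm.arpa), and the resulting file can then be queried from Python; lm.arpa is assumed to exist:

```python
import kenlm

model = kenlm.Model("lm.arpa")
print(model.order)                     # n-gram order of the loaded model
print(model.score("this is a test"))   # log10 probability of the sentence
```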
for this purpose , we assume a generative model for multilingual corpora , where each sentence is generated from a language dependent probabilistic context .
for this purpose , we propose a generative model for multilingual grammars that is learned in an unsupervised fashion .
in this paper , we present an original approach to assessing the readability of ffl texts using nlp techniques and extracts from ffl textbooks .
in this paper , we present an original approach to assessing the readability of ffl texts using nlp techniques and extracts from ffl textbooks as our corpus .
we optimize the objective in equation 5 using adagrad .
to maximize the log-likelihood , we use adagrad .
we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing .
we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .
since similarity is only one type of relatedness , comparison to similarity norms fails to provide a complete view of a measure 's ability to capture more general types of relatedness .
because similarity is only one particular type of relatedness , comparison to similarity norms fails to give a complete view of a relatedness measure 's efficacy .
we measure machine translation performance using the bleu metric .
we evaluate the translation quality using the case-insensitive bleu-4 metric .
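A sketch of corpus-level BLEU-4 with NLTK; lowercasing the tokens first makes the score case-insensitive, as in the second sentence:

```python
from nltk.translate.bleu_score import corpus_bleu

# One list of references per hypothesis; tokens lowercased for case-insensitivity.
references = [[["the", "cat", "sat", "on", "the", "mat"]]]
hypotheses = [["the", "cat", "is", "on", "the", "mat"]]
print(corpus_bleu(references, hypotheses))  # default weights give BLEU-4
```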
the language model is a 5-gram lm with modified kneser-ney smoothing .
we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit .
the language model is trained on the target side of the parallel training corpus using srilm .
the target language model is trained by the sri language modeling toolkit on the news monolingual corpus .
without sacrificing computational efficiency , we propose a new method to distill an ensemble of 20 transition-based parsers into a single one .
we also propose a new method to distill an ensemble of 20 greedy parsers into a single one to overcome annotation noise without sacrificing efficiency .
the image representations are then obtained by extracting the pre-softmax layer from a forward pass in a convolutional neural network that has been trained on the imagenet classification task using caffe .
we extract the 4096-dimensional pre-softmax layer from a forward pass through a convolutional neural network , which has been pretrained on the imagenet classification task using caffe .
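A sketch of pre-softmax feature extraction. The papers above used Caffe; this uses torchvision's ImageNet-pretrained VGG-16 as a stand-in, with a dummy image tensor:

```python
import torch
from torchvision import models

vgg = models.vgg16(weights="IMAGENET1K_V1").eval()
# Drop the final classification layer, keeping the 4096-dim fc7 output.
vgg.classifier = torch.nn.Sequential(*list(vgg.classifier.children())[:-1])

image = torch.rand(1, 3, 224, 224)  # dummy pre-processed image
with torch.no_grad():
    features = vgg(image)
print(features.shape)               # torch.Size([1, 4096])
```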
on all datasets and models , we use 300-dimensional word vectors pre-trained on google news .
we use the 300-dimensional skip-gram word embeddings built on the google-news corpus .
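A sketch of loading the 300-dimensional Google News skip-gram vectors with gensim; the binary file must be downloaded separately:

```python
from gensim.models import KeyedVectors

wv = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)
print(wv["news"].shape)  # (300,)
```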
conditional random fields are undirected graphical models trained to maximize the conditional probability of the desired outputs given the corresponding inputs .
crfs are undirected graphical models which define a conditional distribution over labellings given an observation .
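For reference, the linear-chain CRF behind both definitions: a conditional distribution over label sequences y given an observation sequence x,

```latex
p(\mathbf{y} \mid \mathbf{x}) =
  \frac{1}{Z(\mathbf{x})}
  \exp\!\Big( \sum_{t=1}^{T} \sum_{k} \lambda_k\, f_k(y_{t-1}, y_t, \mathbf{x}, t) \Big),
\quad
Z(\mathbf{x}) = \sum_{\mathbf{y}'}
  \exp\!\Big( \sum_{t=1}^{T} \sum_{k} \lambda_k\, f_k(y'_{t-1}, y'_t, \mathbf{x}, t) \Big)
```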
relation classification is the task of assigning sentences with two marked entities to a predefined set of relations .
relation classification is the task of identifying the semantic relation holding between two nominal entities in text .
word sense disambiguation is the task of determining the particular sense of a word from a given set of pre-defined senses .
word sense disambiguation is the task to identify the intended sense of a word in a computational manner based on the context in which it appears .
gildea and jurafsky were the first to describe a statistical system trained on the data from the framenet project to automatically assign semantic roles .
the pioneering work on building an automatic semantic role labeler was proposed by gildea and jurafsky .
recently , vaswani et al proposed a model called transformer , which completely relies on attention and feed-forward layers instead of rnn architecture .
vaswani et al came up with a highly parallelizable architecture called transformer which uses self-attention to better encode sequences .
long short term memory units were proposed by hochreiter and schmidhuber to overcome this problem .
hochreiter and schmidhuber ( 1997 ) proposed a long short-term memory network , which can be used for sequence processing tasks .
we then used the python nltk toolkit to tokenise the words .
we use the nltk stopwords corpus to identify function words .
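A sketch of the NLTK usage in this pair: word tokenisation plus the stopwords corpus for identifying function words; the download calls fetch the needed resources on first run:

```python
import nltk
from nltk.corpus import stopwords

nltk.download("punkt")      # tokeniser models
nltk.download("stopwords")  # stopword lists

tokens = nltk.word_tokenize("We then used the Python NLTK toolkit.")
function_words = set(stopwords.words("english"))
print([t for t in tokens if t.lower() in function_words])
```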
twitter is a social platform which contains rich textual content .
twitter is a widely used microblogging platform , where users post and interact with messages , “ tweets ” .
brockett et al applied machine translation techniques to correct noun number errors on mass nouns and article usage but their application was restricted to a small set of constructions .
brockett et al used a brown noise channel translation model to record patterns of determiner error correction on a small set of mass nouns , reducing the error spectrum in both class and semantic domain but adding detection capabilities .
we describe in detail the methodology of constructing the acm .
we then describe in detail the methodology of constructing the acm .
smt systems still suffer from inaccurate lexical choice .
yet smt translation quality still obviously suffers from inaccurate lexical choice .
relation extraction is the key component for building relation knowledge graphs , and it is of crucial significance to natural language processing applications such as structured search , sentiment analysis , question answering , and summarization .
relation extraction is a crucial task in the field of natural language processing ( nlp ) .
we initialize these word embeddings with glove vectors .
our word embeddings are initialized with 100-dimensional glove word embeddings .
in this paper we presented the mmsystem for lexical simplification that we submitted to the semeval-2012 task .
in this paper , we describe the system we submitted to the semeval-2012 lexical simplification task .
part-of-speech ( pos ) tagging is the task of assigning a proper pos tag to each linguistic unit , such as a word , in a given sentence .
part-of-speech ( pos ) tagging is a fundamental language analysis task .
to evaluate our method , we use the webquestions dataset , which contains 5,810 questions crawled via google suggest api .
to evaluate the proposed method , we conduct experiments on webquestions dataset that includes 3,778 question-answer pairs for training and 2,032 for testing .
they then searched the propbank wall street journal corpus for sentences containing such lexical items and annotated them with respect to metaphoricity .
then they searched the propbank wall street journal corpus for sentences containing such lexical items and annotated them with respect to metaphoricity .
coreference resolution is the process of finding discourse entities ( markables ) referring to the same real-world entity or concept .
coreference resolution is the next step on the way towards discourse understanding .
among these techniques , latent semantic indexing is a well-known approach .
latent semantic indexing was an early , highly influential approach to solve this problem .
statistical topic models such as latent dirichlet allocation provide a powerful framework for representing and summarizing the contents of large document collections .
topic models such as latent dirichlet allocation have emerged as a powerful tool to analyze document collections in an unsupervised fashion .
we are the first to consider this more loosely-coupled approach to out-of-domain image captioning , which allows the model to take advantage of information not available at training time , and avoids the need to retrain the captioning model .
we address this problem using a flexible approach that enables existing deep captioning architectures to take advantage of image taggers at test time , without retraining .
our strategy is suitable to build a generic system that performs competitively on any domain .
our system uses the domain-specific data as one dataset to build a robust system .
we tackle this problem , and propose an end-to-end neural crf autoencoder ( ncrf-ae ) model for semi-supervised learning on sequence labeling problems .
in this paper we propose an end-to-end neural crf autoencoder ( ncrf-ae ) model for semi-supervised learning of sequential structured prediction problems .
we apply online training , where model parameters are optimized by using adagrad .
we use online learning to train model parameters , updating the parameters using the adagrad algorithm .
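A sketch of online training with AdaGrad updates, as in these pairs, using PyTorch; the linear model and random examples are placeholders:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # placeholder model
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for _ in range(5):                # stream of single training examples
    x = torch.rand(1, 10)
    y = torch.randint(0, 2, (1,))
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()              # AdaGrad parameter update
```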
our thread disentanglement performance matches our baselines , and is in line with heuristic-based assignments from elsner and charniak .
we compare the performance of our thread partitioning pipeline to the results reported by elsner and charniak and wang and oard .
for training the model , we use the linear kernel svm implemented in the scikit-learn toolkit .
to train the models we use the default stochastic gradient descent classifier provided by scikit-learn .
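A sketch of the two scikit-learn classifiers named in this pair, a linear-kernel SVM and the default stochastic gradient descent classifier, on synthetic stand-in data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
svm = LinearSVC().fit(X, y)      # linear-kernel SVM
sgd = SGDClassifier().fit(X, y)  # default SGD classifier (hinge loss)
print(svm.score(X, y), sgd.score(X, y))
```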
we build an open-vocabulary language model with kneser-ney smoothing using the srilm toolkit .
furthermore , we train a 5-gram language model using the sri language toolkit .
fast align was used to generate word alignment files .
the fast align toolkit is used for word alignment .
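A sketch of calling the fast_align binary from Python; it assumes fast_align is on PATH and corpus.src-tgt holds lines of the form "source ||| target":

```python
import subprocess

# -d favours the diagonal, -o optimises tension, -v adds a Dirichlet prior.
with open("forward.align", "w") as out:
    subprocess.run(
        ["fast_align", "-i", "corpus.src-tgt", "-d", "-o", "-v"],
        stdout=out, check=True,
    )
```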
in this paper , we propose a generic dlm , which is not limited to specific applications .
in this paper we have presented a novel discriminative language model using pseudo-negative examples .
zhao and ng applied feature-based methods on anaphoricity determination and antecedent identification with most of features structural in nature .
as a representative in chinese zero anaphora resolution , zhao and ng focused on anaphoricity determination and antecedent identification using feature-based methods .
we use the word and context vectors released by melamud et al , which were previously shown to perform strongly in lexical substitution tasks .
to do this , we adapt the word embedding-based lexical substitution model of melamud et al to the simplification task .
we used trigram language models with interpolated kneser-ney discounting trained using the sri language modeling toolkit .
trigram language models were estimated using the sri language modeling toolkit with modified kneser-ney smoothing .
for the language model , we used the sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing .
we first trained a trigram bnlm as the baseline with interpolated kneser-ney smoothing , using srilm toolkit .
we complement the neural approaches with a simple neural network that uses word representations , namely a continuous bag-of-words model .
we train embeddings using continuous bag-of-words model which can be used also to predict target words from the context .
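A sketch of the continuous bag-of-words variant in gensim (sg=0), which, as the second sentence says, can also predict target words from their context:

```python
from gensim.models import Word2Vec

sentences = [["predict", "target", "words", "from", "the", "context"]]
cbow = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0)
# Predict likely target words given surrounding context words.
print(cbow.predict_output_word(["predict", "words"], topn=3))
```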