sentence1: string, lengths 16 to 446
sentence2: string, lengths 14 to 436
liu et al propose to cluster candidate words based on their semantic relationship to ensure that the extracted keyphrases cover the entire document .
liu et al employed clustering to extract keywords that cover all important topics from the original text .
to train our neural algorithm , we initialize the embedding look-up with 100-d glove vectors pre-trained on wikipedia and gigaword .
for this task , we use glove word embeddings pre-trained on the common crawl corpus .
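The two sentences above describe initializing an embedding look-up from pre-trained GloVe vectors. Below is a minimal sketch of one way to do this in Python; the file name corresponds to the publicly released 100-d Wikipedia+Gigaword vectors, but the path and the fallback handling for unknown words are assumptions, not details taken from these papers.

```python
import numpy as np

def load_glove(path, dim=100):
    """Load GloVe vectors from a plain-text file into a word -> vector dict."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word, vec = parts[0], np.asarray(parts[1:], dtype=np.float32)
            if vec.shape[0] == dim:          # skip malformed lines
                vectors[word] = vec
    return vectors

# assumed file name for the 100-d Wikipedia+Gigaword GloVe release
embeddings = load_glove("glove.6B.100d.txt", dim=100)
unk = np.zeros(100, dtype=np.float32)        # fallback for out-of-vocabulary words
vector = embeddings.get("language", unk)
```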
in this paper , we show how the memory required for parallel lvm training can be reduced by partitioning the training corpus .
in this paper , we develop greedy algorithms for the task that are effective in practice .
using a third language as a pivot , we can build a word alignment model for l1 and l2 .
to perform word alignment between languages l1 and l2 , we introduce a third language l3 .
this extraction process is called transliteration mining .
transliteration mining is the extraction of transliteration pairs from unlabelled data .
other approaches address the classification of argument components into claims and premises , supporting and opposing claims , or backings , rebuttals and refutations .
other approaches focus on online comments and recognize argument components , justifications or different types of claims .
socher et al used an rnn-based architecture to generate compositional vector representations of sentences .
for relation classification , socher et al proposed a recursive matrix-vector model based on constituency parse trees .
we measure the quality of the automatically created summaries using the rouge measure .
for automated evaluation , we use rouge , which evaluates a summary by comparing it against several gold standard summaries .
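Both sentences above refer to ROUGE-based summary evaluation. As a hedged illustration, the sketch below uses the rouge-score Python package, which is one common reimplementation; the original papers may have used the official ROUGE toolkit, and the example strings are placeholders.

```python
from rouge_score import rouge_scorer

# Compare a system summary against a single reference; multi-reference setups
# typically take the max or average over references.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "the cat sat on the mat"
candidate = "a cat was sitting on the mat"
scores = scorer.score(reference, candidate)
print(scores["rouge1"].fmeasure, scores["rougeL"].fmeasure)
```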
sentence pairs are selected to train the smt system .
the sentence pairs with top scores are selected to train the system .
bengio et al introduced feed-forward neural networks into traditional n-gram language modeling , which is arguably the foundational work on neural network language models .
in 2003 , bengio et al proposed a neural network architecture for language modeling that learns word embeddings as part of the network .
in , workers generated paraphrases of 250 noun-noun compounds , which were then used as the gold standard dataset for evaluating an automatic method of noun compound paraphrasing .
in , non-expert annotators generated paraphrases for 250 noun-noun compounds , which were then used as the gold standard data for evaluating an automatic paraphrasing system .
for english , we use the stanford parser for both pos tagging and cfg parsing .
we use the stanford parser for syntactic and dependency parsing .
named entity ( ne ) tagging is crucial in many natural language processing tasks , such as information extraction .
name tagging is a critical early stage in many natural language processing pipelines .
we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting .
we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit .
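The pair above describes training a 5-gram language model with (interpolated) modified Kneser-Ney smoothing using SRILM. SRILM is a command-line toolkit; the snippet below is only a sketch of the typical ngram-count invocation wrapped in Python, with assumed file names and the assumption that the SRILM binaries are on the PATH.

```python
import subprocess

# -kndiscount -interpolate requests interpolated modified Kneser-Ney discounting.
subprocess.run(
    [
        "ngram-count",
        "-order", "5",
        "-text", "train.tok.txt",   # tokenized training text, one sentence per line (assumed name)
        "-lm", "lm.5gram.arpa",     # output language model in ARPA format (assumed name)
        "-kndiscount",
        "-interpolate",
        "-unk",                     # build an open-vocabulary model
    ],
    check=True,
)
```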
we used the scikit-learn implementation of a logistic regression model using the default parameters .
for the feature-based system we used logistic regression classifier from the scikit-learn library .
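Both sentences above use scikit-learn's logistic regression with default parameters. A minimal sketch follows; the TF-IDF features and toy data are assumptions standing in for the unspecified feature extraction in these papers.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy data standing in for the real features, which are not specified here.
texts = ["great service", "terrible delay", "very helpful", "awful experience"]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())  # default parameters
clf.fit(texts, labels)
print(clf.predict(["helpful and great"]))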
we analyze the attention bias problem in traditional attention based rnn models .
we analyze the deficiency of traditional outer attention-based rnn models qualitatively and quantitatively .
we divide negations and their corresponding interpretations into training and test , and use svm with rbf kernel as implemented in scikit-learn .
as classifier we use a traditional model , a support vector machine with linear kernel implemented in scikit-learn .
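The two sentences above mention SVM classifiers in scikit-learn, one with an RBF kernel and one with a linear kernel. The sketch below shows both variants on placeholder feature vectors; the features themselves are not described in these sentences and are assumed.

```python
from sklearn.svm import SVC

# X would be the feature vectors for the negation/interpretation instances.
X_train = [[0.1, 0.7], [0.9, 0.2], [0.2, 0.8], [0.8, 0.1]]
y_train = [1, 0, 1, 0]

rbf_svm = SVC(kernel="rbf")        # RBF-kernel variant
linear_svm = SVC(kernel="linear")  # linear-kernel variant
rbf_svm.fit(X_train, y_train)
linear_svm.fit(X_train, y_train)
print(rbf_svm.predict([[0.15, 0.75]]), linear_svm.predict([[0.15, 0.75]]))
```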
our systems participated in these two subtasks .
our systems were among the top performing systems in both subtasks .
dependency parsing is a very important nlp task and has wide usage in different tasks such as question answering , semantic parsing , information extraction and machine translation .
therefore , dependency parsing is a potential “ sweet spot ” that deserves investigation .
so we can estimate it more accurately via a semi-supervised or transductive extension .
finally , we extend feature noising for structured prediction to a transductive or semi-supervised setting .
our baseline is a standard phrase-based smt system .
we use the moses phrase-based mt system with standard features .
based on the observation that authors of many bilingual web pages , especially those whose primary language is chinese , japanese or korean , sometimes annotate terms with their english translations inside a pair of parentheses .
the primary insight is that authors of many bilingual web pages , especially those whose primary language is chinese , japanese or korean sometimes annotate terms with their english translations inside a pair of parentheses .
we have applied topic modeling based on latent dirichlet allocation as implemented in the mallet package .
the clustering method used in this work is latent dirichlet allocation topic modelling .
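The pair above refers to Latent Dirichlet Allocation topic modeling (one sentence via the MALLET package, which is a Java toolkit). As a hedged Python illustration only, the sketch below uses gensim's LdaModel rather than MALLET; the tiny corpus and topic count are placeholders.

```python
from gensim import corpora, models

# Tiny stand-in corpus; real input would be the tokenized documents.
docs = [["topic", "model", "inference"],
        ["word", "embedding", "vector"],
        ["topic", "word", "distribution"]]

dictionary = corpora.Dictionary(docs)
bow_corpus = [dictionary.doc2bow(doc) for doc in docs]

lda = models.LdaModel(bow_corpus, num_topics=2, id2word=dictionary, passes=10)
for topic_id, words in lda.show_topics(num_topics=2, num_words=3, formatted=False):
    print(topic_id, [w for w, _ in words])
```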
for the best alignment , cite-p-9-1-5 divided sequences into chunks of a fixed time duration , and applied the a* alignment algorithm to each chunk independently .
for this reason , cite-p-9-1-5 divided the sequences into chunks of a fixed time duration , and applied the a* alignment algorithm to each chunk independently .
chapman et al created the negex algorithm , a simple rule-based system that uses regular expressions with trigger terms to determine whether a medical term is absent in a patient .
chapman et al proposed a rule-based algorithm called negex for determining whether a finding or disease mentioned within narrative medical reports is present or absent .
readability assessment is used to provide non-expert users with documents that they can read easily .
readability assessment is also used to provide users with high-quality text recommendation or text visualization services .
for nb and svm , we used their implementation available in scikit-learn .
we used standard classifiers available in scikit-learn package .
word embeddings such as word2vec and glove have been widely recognized for their ability to capture linguistic regularities .
vector based models such as word2vec , glove and skip-thought have shown promising results on textual data to learn semantic representations .
the task of complex word identification has often been regarded as a critical first step for automatic lexical simplification .
in particular , accurate automatic complex word identification strongly benefits lexical simplification as a first step in an ls pipeline .
the two baseline methods were implemented using scikit-learn in python .
the algorithms were implemented using scikit-learn , a general purpose machine learning python library .
our behavior analysis reveals that despite recent progress , today ’ s vqa models are “ myopic ” ( tend to fail on sufficiently novel instances ) , often “ jump to conclusions ” ( converge on a predicted answer after ‘ listening ’ to just half the question ) , and are “ stubborn ” ( do not change their answers across images ) .
our behavior analysis reveals that despite recent progress , today ’ s vqa models are “ myopic ” ( tend to fail on sufficiently novel instances ) , often “ jump to conclusions ” ( converge on a predicted answer after ‘ listening ’ to just half the question ) , and are “ stubborn ” ( do not change their answers across images ) .
and we seek to leverage the connection between these two tasks to improve both qa and qg .
in this paper , we give a systematic study that seeks to leverage the connection to improve both qa and qg .
we used the moses mt toolkit with default settings and features for both phrase-based and hierarchical systems .
we trained the statistical phrase-based systems using the moses toolkit with mert tuning .
moreover , arabic is a morphologically complex language .
arabic is a morphologically complex language .
in this work , we employ the toolkit word2vec to pre-train the word embedding for the source and target languages .
we first use the popular toolkit word2vec provided by mikolov et al to train our word embeddings .
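The pair above describes training word embeddings with Mikolov et al.'s word2vec. The sketch below uses gensim's Word2Vec (gensim 4 API) as one common way to do this in Python rather than the original C tool; the hyperparameters and toy sentences are assumptions.

```python
from gensim.models import Word2Vec

# sg=1 selects the skip-gram variant; vector_size/window/min_count are assumed values.
sentences = [["we", "train", "word", "embeddings"],
             ["word", "embeddings", "capture", "similarity"]]

model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1)
print(model.wv["word"][:5])                 # embedding for a single word
print(model.wv.most_similar("word", topn=2))
```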
we created 5-gram language models for every domain using srilm with improved kneser-ney smoothing on the target side of the training parallel corpora .
we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .
our target application is the identification of lexico-semantic relations in specialized corpora .
we use these models to identify semantic relations in a specialized corpus .
the largest component of the graph consisted of 120 relations .
the input dataset was also smaller – the biggest graph consisted of 118 relations .
which cover english , italian and spanish , are made available to the community at http://trainomatic.org .
all the training data is available for research purposes at http://trainomatic.org .
we evaluate the performance of different translation models using both bleu and ter metrics .
we measure the translation quality using a single reference bleu .
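The pair above evaluates translation quality with BLEU (and, in one sentence, TER). As a hedged illustration, the sketch below uses the sacrebleu package, which exposes both metrics; the original systems may have used other scoring scripts, and the example strings are placeholders.

```python
import sacrebleu

hypotheses = ["the cat is on the mat"]
references = [["there is a cat on the mat"]]   # one reference stream covering all hypotheses

bleu = sacrebleu.corpus_bleu(hypotheses, references)
ter = sacrebleu.corpus_ter(hypotheses, references)
print(bleu.score, ter.score)
```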
sentiment analysis is a recent attempt to deal with evaluative aspects of text .
sentiment analysis is a research area in the field of natural language processing .
named entity recognition ( ner ) is a fundamental task in text mining and natural language understanding .
named entity recognition ( ner ) is a key technique for ie and other natural language processing tasks .
in a first stage , the method generates candidate compressions by removing branches from the source sentence ’s dependency tree using a maximum entropy classifier .
in a first stage , it generates candidate compressions by removing branches from the source sentence ’s dependency tree using a maximum entropy classifier .
in this paper , we propose a novel spatiotemporal framework for entity linking .
given the ambiguity of keywords , in this paper , we study the task of entity linking ( cite-p-17-1-2 ) on microblogs .
( rentzepopoulos and kokkinakis , 1996 ) describes a hidden markov model approach for phoneme-to-grapheme conversion in seven european languages , evaluated on a number of corpora .
rentzepopoulos describes a hidden markov model approach for phoneme-to-grapheme conversion , in seven european languages on a number of corpora .
we propose using a graphical representation of the discourse structure as a way of improving the performance of complex-domain dialogue systems .
in this paper , we study the utility of the discourse structure on the user side of a dialogue system .
pitler et al argued that the overall degree of ambiguity for english connectives was low .
pitler et al argued that discourse senses triggered by explicit connectives were easy to identify in the english pdtb2 .
following , we adopt a general log-linear model .
the basic model of our system is a log-linear model .
part-of-speech ( pos ) tagging is a well studied problem in these fields .
part-of-speech ( pos ) tagging is a fundamental language analysis task .
for training our system classifier , we have used scikit-learn .
we used the svm implementation provided within scikit-learn .
in this paper , we present two deep-learning systems for short text sentiment analysis developed for semeval-2017 task 4 , “ sentiment analysis ” .
in this paper , we present two deep-learning systems that competed at semeval-2017 task 4 ( cite-p-18-3-16 ) .
the skip-gram model adopts a neural network structure to derive the distributed representation of words from textual corpus .
the embedded word vectors are trained over large collections of text using variants of neural networks .
and we show that they are complementary to a very strong recurrent neural network-based language model .
we have also shown that our models are complementary to a very strong rnn language model ( cite-p-12-3-9 ) .
coreference resolution is the task of determining which mentions in a text refer to the same entity .
coreference resolution is the task of grouping mentions to entities .
keller and lapata show that bigram statistics for english are correlated between corpus and web counts .
keller and lapata showed that web frequencies correlate reliably with standard corpus frequencies .
relation extraction is the task of predicting semantic relations over entities expressed in structured or semi-structured text .
relation extraction is a traditional information extraction task which aims at detecting and classifying semantic relations between entities in text ( cite-p-10-1-18 ) .
in this paper we present a hierarchical multi-class document categorization method , which focuses on maximizing the margin between classes .
in this paper , we propose a hierarchical multi-class text categorization method with global margin maximization .
we employ the libsvm library for support vector machine classifiers , as implemented in weka machine learning toolkit .
we use classifiers from the weka toolkit , which are integrated in the dkpro tc framework .
and while discourse parsing is a document level task , discourse segmentation is done at the sentence level , assuming that sentence boundaries are known .
discourse parsing is the task of identifying the presence and the type of the discourse relations between discourse units .
in particular , socher et al obtain good parsing performance by building compositional representations from word vectors .
another related approach is introduced by socher et al , which used a neural tensor network to learn relational compositionality .
mccarthy et al consider the identification of predominant word senses in corpora , focusing on differences between domains .
mccarthy et al consider the identification of predominant word senses in corpora , including differences between domains .
we use the popular word2vec tool proposed by mikolov et al to extract the vector representations of words .
for efficiency , we follow the hierarchical softmax optimization used in word2vec .
in this paper , we suggest a method that automatically constructs an ne tagged corpus from the web .
in this paper , we presented a method that automatically generates an ne tagged corpus using enormous web documents .
we have applied topic modeling based on latent dirichlet allocation as implemented in the mallet package .
we leverage latent dirichlet allocation for topic discovery and modeling in the reference source .
bannard and callison-burch proposed identifying paraphrases by pivoting through phrases in bilingual parallel corpora .
bannard and callison-burch learned phrasal paraphrases using bilingual parallel corpora .
we use the moses phrase-based mt system with standard features .
we use phrase based moses with default options as the spe engine .
we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting .
we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus .
because the pop method employs random projections .
the pop method employs random projections .
to learn word embeddings from our unlabeled corpus , we use the gensim implementation of the word2vec algorithm .
in this run , we use a sentence vector derived from word embeddings obtained from word2vec .
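The pair above derives a sentence vector from word2vec word embeddings. One simple way to do this is to average the word vectors, as sketched below; the averaging strategy and the toy model are assumptions rather than the exact procedure used in these systems.

```python
import numpy as np
from gensim.models import Word2Vec

def sentence_vector(tokens, model):
    """Average the embeddings of in-vocabulary tokens to get a sentence vector."""
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    if not vecs:                                  # all tokens out of vocabulary
        return np.zeros(model.vector_size, dtype=np.float32)
    return np.mean(vecs, axis=0)

model = Word2Vec([["a", "toy", "corpus"], ["another", "toy", "sentence"]],
                 vector_size=50, min_count=1)
print(sentence_vector(["a", "toy", "sentence"], model).shape)
```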
the method presented in this paper can be used to measure the relatedness of word pairs .
in this paper , we address the task of cross-lingual semantic relatedness .
our hdp extension is also inspired from the bayesian model proposed by haghighi and klein .
this extension was inspired from the fully generative bayesian model proposed by haghighi and klein .
relation extraction is the key component for building relation knowledge graphs , and it is of crucial significance to natural language processing applications such as structured search , sentiment analysis , question answering , and summarization .
relation extraction is a key step towards question answering systems by which vital structured data is acquired from underlying free text resources .
we employ the morfessor categories-map algorithm for segmentation .
as a baseline for this comparison , we use morfessor categories-map .
summarization is a classic text processing problem .
summarization is the process of condensing text to its most essential facts .
the evaluation data comes from the wsi task of semeval-2007 .
recent wsi methods were evaluated under the framework of semeval-2007 wsi task .
we use the logistic regression classifier in the skll package , which is based on scikit-learn , optimizing for f 1 score .
we use the linearsvc classifier as implemented in the scikit-learn package with the default parameters .
aim of the task is not unlike the task of post-editing where human translators correct errors provided by machine-generated translations .
this is not unlike the task of post-editing where human translators improve machine-generated translations .
the probabilistic language model is constructed on google web 1t 5-gram corpus by using the srilm toolkit .
a 4-gram language model is trained on the xinhua portion of the gigaword corpus with the srilm toolkit .
budanitsky and hirst found the method proposed by jiang and conrath to be the most successful in malapropism detection .
budanitsky and hirst report that jiang-conrath is the best knowledge-based measure for the task of spelling correction .
this parser is based on pcfgs with latent annotations , a formalism that showed state-of-the-art parsing accuracy for a wide range of languages .
the majority of the state-of-the-art constituent parsers are based on generative pcfg learning , with lexicalized or latent annotation refinements .
the corpus has been used for many tasks such as spelling correction and multi-word expression classification .
the google n-gram corpus has been applied to many nlp tasks such as spelling correction , multi-word expression classification and lexical disambiguation .
in such formulations , sentence compression is finding the best derivation from a syntax tree .
in such models a synchronous grammar is extracted from a corpus of parallel syntax trees with leaves aligned .
word-level labels are utilized to derive the segment scores .
in an hscrf , word-level labels are utilized to derive the segment scores .
we use srilm for n-gram language model training and hmm decoding .
we use srilm for training a trigram language model on the english side of the training corpus .
in both pre-training and fine-tuning , we adopt adagrad and l2 regularizer for optimization .
we also use mini-batch adagrad for optimization and apply dropout .
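The pair above trains with AdaGrad, an L2 regularizer, and dropout. The sketch below shows these three pieces together in PyTorch on a placeholder model; the architecture, learning rate, weight decay, and dropout rate are all assumptions, not values from these papers.

```python
import torch
import torch.nn as nn

# Minimal stand-in model: the actual architectures are not specified here.
model = nn.Sequential(
    nn.Linear(100, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),      # dropout regularization
    nn.Linear(64, 2),
)

# weight_decay adds an L2 penalty on the parameters; lr is an assumed value.
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01, weight_decay=1e-4)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 100)                 # one mini-batch of 8 examples
y = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```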
kendall ’ s τ and explain how it can be employed for evaluating information ordering .
in this article , we argue that kendall ’ s τ can be used as an automatic evaluation method for information-ordering tasks .
without human-annotated examples of complex emotions , automatic emotion detectors remain ignorant of these emotions .
however , existing automatic emotion detectors are limited to recognize only the basic emotions .
language is a weaker source of supervision for colorization than user clicks .
the language is a form of modal propositional logic .
for collapsed syntactic dependencies we use the stanford dependency parser .
we use the stanford parser for obtaining all syntactic information .
we find substantial performance gains over the ccm model , a strong monolingual baseline .
both our model and even the monolingual ccm baseline yield far higher performance on the same korean-english corpus .
among many others , morante and daelemans and li et al propose scope detectors using the bioscope corpus .
morante and daelemans and özgür and radev propose scope detectors using the bioscope corpus .
evaluation ( texeval-2 ) has its main focus on hypernym-hyponym relation extraction from given lists of terms collected from multiple domains .
this task focuses only on the hypernym-hyponym relation extraction from a list of terms collected from various domains and languages .
coreference resolution is a key task in natural language processing ( cite-p-13-1-8 ) aiming to detect the referential expressions ( mentions ) in a text that point to the same entity .
coreference resolution is a complex problem , and successful systems must tackle a variety of non-trivial subproblems that are central to the coreference task ( e.g. , mention/markable detection , anaphor identification ) and that require substantial implementation efforts .
we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing .
we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus .
we pre-train the word embedding via word2vec on the whole dataset .
to keep consistent , we initialize the embedding weight with pre-trained word embeddings .
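The pair above pre-trains word embeddings with word2vec and then initializes a model's embedding weights from them. A minimal PyTorch sketch of that initialization step follows; the vocabulary size, dimensionality, and random stand-in matrix are assumptions (in practice the matrix would be exported from the pre-trained word2vec model).

```python
import numpy as np
import torch
import torch.nn as nn

# Hypothetical pre-trained matrix: one row per vocabulary word.
vocab_size, dim = 1000, 100
pretrained = np.random.rand(vocab_size, dim).astype(np.float32)

# freeze=False lets the embeddings be fine-tuned during training.
embedding = nn.Embedding.from_pretrained(torch.from_numpy(pretrained), freeze=False)
token_ids = torch.tensor([[1, 5, 42]])          # a batch of token-id sequences
print(embedding(token_ids).shape)               # -> (1, 3, 100)
```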
this paper presents a system that participated in semeval 2017 task 10 ( subtask a and subtask b ) : extracting keyphrases and relations from scientific publications .
this paper gives a brief description of our system at semeval 2017 task 10 for keyphrase extraction of scientific papers .
lodhi et al used string kernels to solve the text classification problem .
( lodhi et al , 2002 ) first used string kernels with character-level features for text categorization .
cross-lingual textual entailment detection is an extension of the textual entailment detection problem .
cross-lingual textual entailment is an extension of textual entailment .
we improved over the baselines ; in some cases we obtained greater than 30 % improvement for mean rouge scores over the best performing baseline .
that is , we obtained greater than 30 % improvement over the highest performing baseline in terms of mean rouge scores .
in this paper , we propose a novel framework , companion teaching , to include a human teacher in the dialogue policy training loop .
in this paper , a novel safe online policy learning framework is proposed , referred to as companion teaching .