sentence1 : string , lengths 16 - 446 ; sentence2 : string , lengths 14 - 436
elkiss et al carried out a large-scale study and confirmed that citation summaries contain extra information that does not appear in paper abstracts .
elkiss et al , perform a large-scale study on citations in the free pubmed central and show that they contain information that may not be present in abstracts .
dependency parsing is the task of predicting the most probable dependency structure for a given sentence .
dependency parsing is the task of building dependency links between words in a sentence , which has recently gained a wide interest in the natural language processing community .
spelling variants can then be used to mitigate the problems caused by spelling variation that were described above .
alternatively , specialized tools can be developed that directly use the knowledge about spelling variation .
arguably the most influential approach to the topic modeling domain is latent dirichlet allocation .
nowadays a very popular topic model is latent dirichlet allocation , a generative bayesian hierarchical model .
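a minimal sketch of fitting an lda topic model, using gensim as one possible implementation; the toy documents and topic count below are illustrative assumptions, not taken from the cited work.

from gensim import corpora
from gensim.models import LdaModel

# toy corpus: each document is a list of tokens (placeholder data)
docs = [["topic", "model", "latent", "dirichlet", "allocation"],
        ["generative", "bayesian", "hierarchical", "model"],
        ["topic", "model", "text", "document"]]
dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]  # bag-of-words counts

lda = LdaModel(bow, num_topics=2, id2word=dictionary, passes=10)
for tid in range(2):
    print(lda.print_topic(tid))  # top words per inferred topic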
sources of information are represented by kernel functions .
each source of information is represented by a specific kernel function .
cui et al developed an information theoretic measure based on dependency trees .
cui et al developed a dependency-tree based information discrepancy measure .
the 5-gram kneser-ney smoothed language models were trained by srilm , with kenlm used at runtime .
the language model is implemented as an n-gram model using the srilm-toolkit with kneser-ney smoothing .
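a hedged sketch of the pipeline these two lines describe: training a kneser-ney smoothed n-gram model with srilm's ngram-count, then scoring sentences at runtime with the kenlm python module; all file names are placeholders.

# training (shell, SRILM):
#   ngram-count -order 5 -interpolate -kndiscount -text train.txt -lm model.arpa
import kenlm  # python bindings for the KenLM runtime

model = kenlm.Model("model.arpa")  # placeholder path to the trained ARPA file
# log10 probability of a sentence, with begin/end-of-sentence markers
print(model.score("this is a test sentence", bos=True, eos=True))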
and unseen predicates , we study the performance of a state-of-the-art srl system trained on either codification of roles and some specific settings , i . e . including / excluding verb-specific information .
by testing a state-of-the-art srl system with the two alternative role annotations , we show that the propbank role set is more robust to the lack of verb-specific semantic information and generalizes better to infrequent and unseen predicates .
topic models such as lda and psla and their extensions have been popularly used to find topics in text documents .
topic models , such as plsa and lda , have shown great success in discovering latent topics in text collections .
both the transfer and transducer systems were trained and evaluated on english-to-mandarin chinese translation of transcribed utterances from the atis corpus .
the head transducer model was trained and evaluated on english-to-mandarin chinese translation of transcribed utterances from the atis corpus .
to address this problem , the long short-term memory network was proposed , in which the architecture of a standard rnn is modified to avoid vanishing or exploding gradients .
to tackle this problem , hochreiter and schmidhuber proposed long short term memory , which uses a cell with input , forget and output gates to prevent the vanishing gradient problem .
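the standard gate formulation these two lines refer to, written out: input, forget, and output gates control what enters, persists in, and leaves the memory cell c_t, which is what keeps gradients from vanishing.

i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)
f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)
o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)
c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c)
h_t = o_t \odot \tanh(c_t)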
the skip-gram model aims to find word representations that are useful for predicting the surrounding words in a sentence or document .
the skip-gram model implemented by word2vec learns vectors by predicting context words from targets .
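a minimal sketch of training a skip-gram model with gensim's word2vec implementation (sg=1 selects skip-gram); the toy sentences and hyperparameters are illustrative, and the gensim >= 4 keyword vector_size is assumed.

from gensim.models import Word2Vec

sentences = [["the", "skip", "gram", "model", "predicts", "context", "words"],
             ["word", "representations", "capture", "surrounding", "words"]]
model = Word2Vec(sentences, vector_size=100, window=5, sg=1, min_count=1)
vec = model.wv["model"]                        # the learned word vector
print(model.wv.most_similar("words", topn=3))  # nearest neighbours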
alignment types are shown with the ? symbol .
the incorrectly predicted tags are shown with the ? symbol .
the weights of the log-linear interpolation model were optimized via minimum error rate training on the ted development set , using 200 best translations at each tuning iteration .
the weights of the different feature functions were optimised by means of minimum error rate training on the 2008 test set .
relation extraction is the task of predicting semantic relations over entities expressed in structured or semi-structured text .
relation extraction ( re ) is the task of extracting instances of semantic relations between entities in unstructured data such as natural language text .
jiang et al propose a cascaded linear model for joint chinese word segmentation and pos tagging .
jiang et al used a character-based model using perceptron for pos tagging and a log-linear model for re-ranking .
semantic role labeling ( srl ) is a form of shallow semantic parsing whose goal is to discover the predicate-argument structure of each predicate in a given input sentence .
semantic role labeling ( srl ) consists of finding the arguments of a predicate and labeling them with semantic roles ( cite-p-9-1-5 , cite-p-9-3-0 ) .
while l-related candidate lexicalisation phrases are phrases containing synonyms or derivationally related words .
in contrast , extensionally-related candidate lexicalisations are phrases containing named entities which are in its extension .
for our classifiers , we used the weka implementation of naïve bayes and the svmlight implementation of the svm .
we used the weka implementation of naïve bayes for this baseline nb system .
in particular , we use the liblinear svm 1va classifier .
we use liblinear to solve the lr and svm classification problems .
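a sketch of the two classifier setups mentioned here, approximated with scikit-learn, whose LinearSVC and LogisticRegression(solver="liblinear") are built on liblinear; the toy data is a placeholder, and LinearSVC's default one-vs-rest scheme corresponds to the 1va setting above.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression

texts, labels = ["good movie", "bad movie", "great film"], [1, 0, 1]
X = TfidfVectorizer().fit_transform(texts)

svm = LinearSVC().fit(X, labels)                            # liblinear-backed svm
lr = LogisticRegression(solver="liblinear").fit(X, labels)  # liblinear-backed lr
print(svm.predict(X), lr.predict(X))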
we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting .
we first trained a trigram bnlm as the baseline with interpolated kneser-ney smoothing , using srilm toolkit .
the log-linear model features weights are tuned using the newswire part of nist mt06 as the tuning dataset and bleu as the objective function .
the smt systems are tuned on the dev development set with minimum error rate training using bleu accuracy measure as the optimization criterion .
a 3-gram language model was trained from the target side of the training data for chinese and arabic , using the srilm toolkit .
srilm toolkit was used to create up to 5-gram language models using the mentioned resources .
by including predictions of other models as features , we achieve aer of 3.8 .
by also including predictions of another model , we drive aer down to 3.8 .
to the ad classification task , and our cnn-lstm model achieves a new benchmark accuracy .
we achieve a new independent benchmark accuracy for the ad classification task .
distributional semantic models represent lexical meaning in vector spaces by encoding corpora derived word co-occurrences in vectors .
distributional semantic models produce vector representations which capture latent meanings hidden in association of words in documents .
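a minimal count-based sketch of what these two lines describe: deriving word vectors from corpus co-occurrences within a fixed context window; the toy corpus and window size are assumptions.

import numpy as np

corpus = [["dogs", "chase", "cats"], ["cats", "chase", "mice"]]
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

window = 1
M = np.zeros((len(vocab), len(vocab)))  # word-by-word co-occurrence counts
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if j != i:
                M[idx[w], idx[sent[j]]] += 1
# each row of M is a distributional vector; compare rows with cosine similarity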
semantic parsing is the problem of mapping natural language strings into meaning representations .
semantic parsing is the problem of deriving a structured meaning representation from a natural language utterance .
to solve this problem , hochreiter and schmidhuber introduced the long short-term memory rnn .
lstms were introduced by hochreiter and schmidhuber in order to mitigate the vanishing gradient problem .
feng et al proposed accessor variety to measure the likelihood a substring is a chinese word .
feng et al proposed accessor variety to measure how likely a character substring is a chinese word .
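a sketch of the accessor-variety statistic as it is usually defined: the number of distinct characters that can precede (left) or follow (right) a substring in the corpus, taking the minimum of the two; the boundary marker and input format are assumptions.

def accessor_variety(sub, sentences):
    left, right = set(), set()
    for sent in sentences:
        s = "#" + sent + "#"  # '#' marks sentence boundaries
        pos = s.find(sub, 1)
        while pos != -1:
            left.add(s[pos - 1])          # distinct left extenders
            right.add(s[pos + len(sub)])  # distinct right extenders
            pos = s.find(sub, pos + 1)
    # higher values suggest the substring is more likely a word
    return min(len(left), len(right))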
as an interesting byproduct , the earth mover 's distance provides a distance measure that may quantify a facet of language difference .
in addition , we reveal an interesting finding that the earth mover 's distance shows potential as a measure of language difference .
for example , suendermann-oeft et al acquired 500,000 dialogues with over 2 million utterances , observing that statistical systems outperform rule-based ones as the amount of data increases .
for example , suendermann et al acquired 500,000 dialogues with over 2 million utterances , observing that statistical systems outperform rule-based ones as the amount of data increases .
automatic summarisation is the task of reducing a document to its main points .
automatic summarisation is a popular approach to reduce a document to its main arguments .
we used 5-gram models , estimated using the sri language modeling toolkit with modified kneser-ney smoothing .
we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing .
we used the stanford factored parser to retrieve both the stanford dependencies and the phrase structure parse .
we used the stanford factored parser to parse sentences into constituency grammar tree representations .
transition-based methods have become a popular approach in multilingual dependency parsing because of their speed and performance .
such approaches , for example , transition-based and graph-based models have attracted the most attention in dependency parsing in recent works .
in this work , we present a new method to do semantic abstractive summarization .
in this work , we propose an alternative method to use amrs for abstractive summarization .
dinu and lapata introduced a probabilistic model for computing word representations in context .
dinu and lapata propose a probabilistic framework for representing word meaning and measuring similarity of words in context .
a lattice is a connected directed acyclic graph in which each edge is labeled with a term hypothesis and a likelihood value ( cite-p-19-3-5 ) ; each path through a lattice gives a hypothesis of the sequence of terms spoken in the utterance .
a lattice is a directed acyclic graph that is used to compactly represent the search space for a speech recognition system .
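a small sketch of the structure both lines describe: a directed acyclic graph whose edges carry a term hypothesis and a likelihood, with a best-path search over log-likelihoods; nodes are assumed to be numbered in topological order, and the example edges are invented.

import math
from collections import defaultdict

# edges: (source node, target node, term hypothesis, likelihood)
edges = [(0, 1, "recognize", 0.6), (0, 1, "wreck a nice", 0.4),
         (1, 2, "speech", 0.7), (1, 2, "beach", 0.3)]
adj = defaultdict(list)
for u, v, term, p in edges:
    adj[u].append((v, term, p))

def best_path(adj, start, end):
    best = {start: (0.0, [])}  # node -> (log-likelihood, term sequence)
    for u in sorted(adj):      # valid because node ids are topologically ordered
        if u not in best:
            continue
        score, path = best[u]
        for v, term, p in adj[u]:
            cand = (score + math.log(p), path + [term])
            if v not in best or cand[0] > best[v][0]:
                best[v] = cand
    return best.get(end)

print(best_path(adj, 0, 2))  # most likely term sequence through the lattice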
second , we propose a novel abstractive summarization technique based on an optimization framework that generates section-specific summaries for wikipedia .
second , we propose a novel abstractive summarization ( cite-p-10-1-6 ) technique to summarize content from multiple snippets of relevant information .
zelenko et al and culotta and sorensen proposed kernels for dependency trees inspired by string kernels .
zelenko et al developed a kernel over parse trees for relation extraction .
coreference resolution is the task of determining which mentions in a text are used to refer to the same real-world entity .
coreference resolution is a well known clustering task in natural language processing .
word segmentation is a fundamental task for chinese language processing .
therefore , word segmentation is a preliminary and important preprocess for chinese language processing .
and then we extract subtrees from dependency parsing trees in the auto-parsed data .
then we extract subtrees from dependency parse trees in the auto-parsed data .
clark and curran describes how a packed chart can be used to efficiently represent the derivation space , and also efficient algorithms for finding the most probable derivation .
and clark and curran describe how a packed chart can be used to efficiently represent the derivation space , and also efficient algorithms for finding the most probable derivation .
we also need a restriction on the entity-tuple embedding space .
the presented approach requires a restriction on the entity-tuple embedding space .
semantic parsing is the task of mapping natural language sentences to a formal representation of meaning .
semantic parsing is the task of mapping a natural language ( nl ) sentence into a complete , formal meaning representation ( mr ) which a computer program can execute to perform some task , like answering database queries or controlling a robot .
as a further test , we ran the stanford parser on the queries to generate syntactic parse trees .
we obtained both phrase structures and dependency relations for every sentence using the stanford parser .
the mre is the shortest possible summary of a story ; it is what we would say about the story if we could only say one thing .
the mre is the point of the story – the most unusual event that has the greatest emotional impact on the narrator and the audience .
weber et al used three-dimensional tensor-based networks to construct the event representations .
weber et al proposed a tensor-based composition model to construct event embeddings with agents and patients .
method is effective , and is a key technology enabling smooth conversation with a dialogue translation system .
therefore , we can say that our method is effective for smooth conversation with a dialogue translation system .
heilman et al studied the impact of grammar-based features combined with language modeling approach for readability assessment of first and second language texts .
heilman et al combined unigram models with grammatical features and trained machine learning models for readability assessment .
nlg is a critical component in a dialogue system , where its goal is to generate the natural language given the semantics provided by the dialogue manager .
informally , nlg is the production of a natural language text from computer-internal representation of information , where nlg can be seen as a complex -- potentially cascaded -- decision making process .
although wordnet is a fine resource , we believe that ignoring other thesauri is a serious oversight .
wordnet is a byproduct of such an analysis .
the method of tsvetkov et al used both concreteness features and hand-coded domain information for words .
tsvetkov et al presented a language-independent approach to metaphor identification .
we evaluate the translation quality using the case-insensitive bleu-4 metric .
to evaluate segment translation quality , we use corpus level bleu .
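a sketch of corpus-level bleu scoring with sacrebleu, one common implementation; the hypothesis and reference strings are placeholders, and lowercase=True approximates the case-insensitive setting mentioned above.

import sacrebleu

hyps = ["the cat sat on the mat"]   # system outputs
refs = [["the cat is on the mat"]]  # one reference stream, parallel to hyps
bleu = sacrebleu.corpus_bleu(hyps, refs, lowercase=True)
print(bleu.score)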
in this paper , we study the use of more expressive loss functions in the structured prediction framework for cr .
in this paper , we trade off exact computation for enabling the use of more complex loss functions for coreference resolution ( cr ) .
it is shown that the structure of semantic concepts helps decide domain-specific slots and further improves the slu performance .
also , dependency relations successfully differentiate the generic concepts from the domain-specific concepts , so that the slu model is able to predict more coherent set of semantic slots .
although such approaches perform reasonably well , features are often derived from language-specific resources .
a more promising approach is to automatically learn effective features from data , without relying on language-specific resources .
erkan and radev introduced a stochastic graph-based method , lexrank , for computing the relative importance of textual units for multi-document summarization .
erkan and radev proposed lexpagerank to compute the sentence saliency based on the concept of eigenvector centrality .
the srilm toolkit was used to build the trigram mkn smoothed language model .
we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit .
the model weights were trained using the minimum error rate training algorithm .
the feature weights are tuned to optimize bleu using the minimum error rate training algorithm .
such features have been useful in a variety of english nlp models , including chunking , named entity recognition , and spoken language understanding .
importantly , word embeddings have been effectively used for several nlp tasks , such as named entity recognition , machine translation and part-of-speech tagging .
however , s-lstm models hierarchical encoding of sentence structure as a recurrent state .
empirically , s-lstm can give effective sentence encoding after 3 - 6 recurrent steps .
the weights for these features are optimized using mert .
all the weights of those features are tuned by using minimal error rate training .
hamilton et al propose the use of cosine similarities of words in different contexts to detect changes .
similarly , hamilton et al defined a methodology to quantify semantic change using four languages .
as a sequence labeler we use conditional random fields .
for parameter training we use conditional random fields as described in .
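a minimal sequence-labeling sketch with conditional random fields, using sklearn-crfsuite as one possible implementation; the toy features and tags are placeholders.

import sklearn_crfsuite

# each sentence is a list of per-token feature dicts, with a parallel tag list
X_train = [[{"word": "john", "is_title": False}, {"word": "runs"}]]
y_train = [["NNP", "VBZ"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X_train, y_train)
print(crf.predict([[{"word": "mary"}, {"word": "runs"}]]))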
we apply several unsupervised and supervised techniques of sentiment composition to determine their efficacy .
finally , we apply several unsupervised and supervised techniques of sentiment composition to determine their efficacy on this dataset .
stance detection is the task of estimating whether the attitude expressed in a text towards a given topic is ‘ in favour ’ , ‘ against ’ , or ‘ neutral ’ .
stance detection is the task of automatically determining from text whether the author of the text is in favor of , against , or neutral towards a proposition or target .
as a sequence labeler we use conditional random fields .
we define a conditional random field for this task .
we give a brief ( and non-exhaustive ) overview of prior work on gender bias .
we present an empirical study of gender bias in coreference resolution systems .
sentiment classification is a well-studied and active research area ( cite-p-20-1-11 ) .
sentiment classification is the fundamental task of sentiment analysis ( cite-p-15-3-11 ) , where we are to classify the sentiment of a given text .
cite-p-27-1-12 proposed a method that calculates word candidates with their unigram frequencies .
cite-p-27-1-11 proposed a stochastic word segmenter based on a word n-gram model to solve the word segmentation problem .
lsa has remained as a popular approach for asag and been applied in many variations .
lsa has remained a popular approach for asag and been applied in many variations .
we used the maximum entropy approach as a machine learner for this task .
we utilize a maximum entropy model to design the basic classifier used in active learning for wsd .
semantic role labeling ( srl ) is the task of labeling the predicate-argument structures of sentences with semantic frames and their roles ( cite-p-18-1-2 , cite-p-18-1-19 ) .
semantic role labeling ( srl ) is the task of automatically labeling predicates and arguments in a sentence with shallow semantic labels .
part-of-speech ( pos ) tagging is the task of assigning a proper pos tag to each linguistic unit , such as a word , in a given sentence .
part-of-speech ( pos ) tagging is a fundamental natural-language-processing problem , and pos tags are used as input to many important applications .
for all the methods in this section , we use the same corpus , the icwsm spinn3r 2009 dataset , which has been used successfully in earlier work .
for both attributes addressed in this paper , we use the same corpus , the 2009 icwsm spinn3r dataset , a publicly-available blog corpus which we also used in our earlier work on lexical formality .
we use the pre-trained word2vec embeddings provided by mikolov et al as model input .
as embedding vectors , we used the publicly available representations obtained from the word2vec cbow model .
we used the moses decoder , with default settings , to obtain the translations .
we adapted the moses phrase-based decoder to translate word lattices .
conjuncts tend to be similar and ( b ) that replacing the coordination phrase with a conjunct results in a coherent sentence .
replacing a conjunct with the whole coordination phrase usually produces a coherent sentence ( huddleston et al. , 2002 ) .
we use the pre-trained glove vectors to initialize word embeddings .
in our experiments , the pre-trained word embeddings for english are 100-dimensional glove vectors .
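a sketch of loading pre-trained 100-dimensional glove vectors from the standard text format and comparing words with cosine similarity, e.g. for initializing embeddings; the local file path is an assumption.

import numpy as np

def load_glove(path):
    vecs = {}
    with open(path, encoding="utf-8") as f:
        for line in f:  # format: word v1 v2 ... v100
            parts = line.rstrip().split(" ")
            vecs[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vecs

emb = load_glove("glove.6B.100d.txt")  # assumed local copy of the vectors

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb["king"], emb["queen"]))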
socher et al ( 2012 ) presented a recursive neural network for relation classification to learn vectors in the syntactic tree path connecting two nominals to determine their semantic relationship .
socher et al present a novel recursive neural network for relation classification that learns vectors in the syntactic tree path that connects two nominals to determine their semantic relationship .
the abstract meaning representation is a semantic meaning representation language that is purposefully syntax-agnostic .
the abstract meaning representation is a readable and compact framework for broad-coverage semantic annotation of english sentences .
dong et al use three columns of cnns to represent questions respectively when dealing with different answer aspects .
dong et al employ three fixed cnns to represent questions , while ours is able to express the focus of each unique answer aspect to the words in the question .
automatic word alignment is a vital component of nearly all current statistical translation pipelines .
automatic word alignment is a key step in training statistical machine translation systems .
zhang and kim developed a system for automated learning of morphological word formation rules .
kim developed a system for automated learning of morphological word function rules .
we measure the translation quality using a single reference bleu .
we evaluate the translation quality using the case-insensitive bleu-4 metric .
we focus much more on the analysis of concept drift .
we demonstrate that concept drift is an important consideration .
with the shared task , we aimed to make a first step towards taking srl beyond the domain of individual sentences .
in that sense the task represents a first step towards taking srl beyond the sentence level .
typesql + tc gains roughly 9 % improvement compared to the content-insensitive model , and outperforms the previous content-sensitive model .
typesql gets 82.6 % accuracy , a 17.5 % absolute improvement compared to the previous content-sensitive model .
in section 3 and 4 , we formally define the task .
in section 3 and 4 , we formally define the task and present our method .
such a model can be used for topic identification of unseen calls .
such a model can be used for identification of topics of unseen calls .
in order to increase the number of training instances , we tried to use the disambiguated wordnet glosses from xwn project .
in addition we used disambiguated wordnet glosses from xwn to measure the improvement made by adding additional training examples .
choosing an appropriate entity and its mention has a big influence on the coherence of a text , as studied in centering theory .
according to the centering theory , the coherence of text is to a large extent maintained by entities and the relations between them .
the algorithm is similar to those for context free parsing such as chart parsing and the cky algorithm .
the algorithm is similar to those for context-free parsing such as chart parsing and the cky algorithm .
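a compact cky recognizer for a grammar in chomsky normal form, illustrating the chart-parsing family both lines refer to; the tiny grammar is invented for the example.

def cky(words, lexical, binary, start="S"):
    n = len(words)
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] |= lexical.get(w, set())  # terminal rules
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):             # split point
                for B in chart[i][k]:
                    for C in chart[k][j]:
                        chart[i][j] |= binary.get((B, C), set())
    return start in chart[0][n]

lex = {"she": {"NP"}, "eats": {"V"}, "fish": {"NP"}}
bins = {("V", "NP"): {"VP"}, ("NP", "VP"): {"S"}}
print(cky(["she", "eats", "fish"], lex, bins))  # True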
in this work , we use bleu-4 score as the evaluation metric , which measures the overlap between the generated question and the referenced question .
to compare the k2q approaches , we evaluate the performance of k2q rnn against other baselines , using the bleu score between the generated question and the reference question .
metonymy is typically defined as a figure of speech in which a speaker uses one entity to refer to another that is related to it ( cite-p-10-1-3 ) .
metonymy is a pervasive phenomenon in language and the interpretation of metonymic expressions can impact tasks from semantic parsing ( cite-p-13-1-10 ) to question answering ( cite-p-13-1-4 ) .
recent years have witnessed the success of various statistical machine translation models using different levels of linguistic knowledge : phrase , hiero , and syntax-based .
recent years have witnessed burgeoning development of statistical machine translation research , notably phrase-based and syntax-based approaches .
and compare our method with a monolingual syntax-based method .
we will refer to such systems as monolingual syntax-based systems .
distributed representations for words and sentences have been shown to significantly boost the performance of a nlp system .
it has been empirically shown that word embeddings could capture semantic and syntactic similarities between words .