| sentence1 (string, length 16-446) | sentence2 (string, length 14-436) |
|---|---|
| mead is centroid based multi-document summarizer which generates summaries using cluster centroids produced by topic detection and tracking system . | mead is a centroid based multi document summarizer which generates summaries using cluster centroids produced by topic detection and tracking system . |
| brown et al present a hierarchical word clustering algorithm that can handle a large number of classes and a large vocabulary . | our first choice is the bottom-up agglomerative word clustering algorithm of brown et al , which derives a hierarchical clustering of words from unlabeled data . |
| in this paper , we present a greedy non-directional parsing algorithm which doesn ’ t need a fully connected parse and can learn from partial parses . | in this paper , we present a dependency parsing algorithm which can train on partial projected parses and can take rich syntactic information as features for learning . |
| in this paper , we propose a neural architecture for coherence assessment that can capture long range entity transitions . | following this tradition , in this paper we propose to neuralize the popular entity grid models . |
| word sense disambiguation ( wsd ) is a particular problem of computational linguistics which consists in determining the correct sense for a given ambiguous word . | word sense disambiguation ( wsd ) is a key enabling technology that automatically chooses the intended sense of a word in context . |
| the weights of the log-linear interpolation model were optimized via minimum error rate training on the ted development set , using 200 best translations at each tuning iteration . | the weights of the different feature functions were optimised by means of minimum error rate training on the 2013 wmt test set . |
| kalchbrenner et al show that a cnn for modeling sentences can achieve competitive results in polarity classification . | to capture the relation between words , kalchbrenner et al propose a novel cnn model with a dynamic k-max pooling . |
| kim et al proposed a convolutional module to process complex inputs for the problem of language modeling . | kim et al apply a simple convolutional neural network model , which uses character level inputs for word representations . |
| ranking methods based on importance scores are proposed for keyphrase extraction . | therefore , automatic keyphrase extraction is an important research task . |
| similarly , korhonen et al relied on the information bottleneck and subcategorisation frame types to induce soft verb clusters . | korhonen et al used verb-frame pairs to cluster verbs into levin-style semantic classes . |
| twitter is the medium where people post real time messages to discuss on the different topics , and express their sentiments . | twitter is a popular microblogging service , which , among other things , is used for knowledge sharing among friends and peers . |
| the language model is a large interpolated 5-gram lm with modified kneser-ney smoothing . | we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus . |
| success , however , depends on a high-coverage dictionary . | this success rests on a high-coverage dictionary . |
| for the phrase based system , we use moses with its default settings . | in all submitted systems , we use the phrase-based moses decoder . |
| the rules were extracted using the pos tags generated by the treetagger . | the source and target sentences are tagged respectively using the treetagger and amira toolkits . |
| to encode the original sentences we used word2vec embeddings pre-trained on google news . | we used the pre-trained word embeddings that were learned using the word2vec toolkit on google news dataset . |
| for the information-access applications described above . | however , consider the interactive information-access application described above . |
| we use 100-dimension glove vectors which are pre-trained on a large twitter corpus and fine-tuned during training . | we use glove word embeddings , which are 50-dimension word vectors trained with a crawled large corpus with 840 billion tokens . |
| currently , recurrent neural network based models are widely used on natural language processing tasks for excellent performance . | recurrent neural network architectures have proven to be well suited for many natural language generation tasks . |
| and consequently , we propose a weakly supervised fully-bayesian approach to pos tagging , which relaxes the unrealistic assumption by automatically acquiring the lexicon from a small amount of pos-tagged data . | as a result , we investigated a weakly supervised fully-bayesian approach to pos tagging , which relaxes the unrealistic assumption by automatically acquiring the lexicon from a small amount of postagged data . |
| we use pre-trained 50 dimensional glove vectors 4 for word embeddings initialization . | we use 300d glove vectors trained on 840b tokens as the word embedding input to the lstm . |
| while automatic evaluation methods like bleu can be useful for estimating translation quality , a higher score is no guarantee of quality improvement . | aggregating evaluation methods like bleu give a useful overview of the quality of a translation , but they do not afford specific information and leave too many details to chance . |
| for building our ape b2 system , we set a maximum phrase length of 7 for the translation model , and a 5-gram language model was trained using kenlm . | we calculated the language model probabilities using kenlm , and built a 5-gram language model from the english gigaword fifth edition . |
| which is a generalization of current perceptron-based reranking methods . | the subtree ranking approach is a generalization of the perceptron-based approach . |
| neural networks , working on top of conventional n-gram back-off language models , have been introduced in as a potential means to improve discrete language models . | neural networks , working on top of conventional n-gram back-off language models , have been introduced in as a potential means to improve discrete language models . |
| model fitting for our model is based on the expectation-maximization algorithm . | we estimate the parameters by maximizing p using the expectation maximization algorithm . |
| hammarström and borin give an extensive overview of state-of-the-art unsupervised learning of morphology . | hammarström and borin presented a literature survey on unsupervised learning of morphology , including methods for learning morphological segmentation . |
| following the work of koo et al , we used a tagger trained on the training data to provide part-of-speech tags for the development and test sets , and used 10-way jackknifing to generate part-of-speech tags for the training set . | following koo et al , we used the mxpost tagger trained on the full training data to provide part-of-speech tags for the development and the test set , and we used 10-way jackknifing to generate tags for the training set . |
| word sense disambiguation ( wsd ) is a natural language processing ( nlp ) task in which the correct meaning ( sense ) of a word in a given context is to be determined . | word sense disambiguation ( wsd ) is the task to identify the intended sense of a word in a computational manner based on the context in which it appears ( cite-p-13-3-4 ) . |
| ahsan and kolachina introduce a hybrid mt system that utilised online mt engines for msmt . | mellebeek et al introduced a hybrid mt system that utilised online mt engines for msmt . |
| coreference resolution is the next step on the way towards discourse understanding . | coreference resolution is the task of clustering a set of mentions in the text such that all mentions in the same cluster refer to the same entity . |
| we are the first to tie figurative language to the social context in which it is produced and show its relation to internal and external . | to the best of our knowledge , this is the first time figurative language is tied to the social context in which it appears . |
| coreference resolution is the process of linking together multiple referring expressions of a given entity in the world . | coreference resolution is the task of partitioning a set of mentions ( i.e . person , organization and location ) into entities . |
| we show that our method disambiguates a significant proportion of subject-object ambiguities in german . | we show that a simple method disambiguates some subject-object ambiguities in german , while making few errors . |
| we parsed the corpus with rasp and with the stanford pcfg parser . | we extracted the features from the gigaword corpus , which was first parsed using the rasp parser . |
| the alignment template approach uses word classes rather than lexical items to model phrase translation . | the alignment template model enhanced phrasal generalizations by using words classes rather than the words themselves . |
| collection comprises 132 , 229 dialogues containing a total of 764 , 146 turns / utterances that have been extracted from 753 movies . | the collected dataset comprises 132,229 dialogues containing a total of 764,146 turns that have been extracted from 753 movies . |
| we use the scikit-learn toolkit as our underlying implementation . | we trained the five classifiers using the svm implementation in scikit-learn . |
| in this paper , we propose a novel task that is crucial and generic from the viewpoint of health surveillance . | in this paper , we propose a more generalized task setting for public surveillance . |
| wang et al use all amr concepts and relations that appear in the training set as possible parameters if they appear in any sentence containing the same lemma as σ 0 and β . | wang et al use all concepts that occur in the training data in the same sentence as the lemma of the node , leading to hundreds or thousands of possible actions from some states . |
| the language model is trained and applied with the srilm toolkit . | it has been trained with the srilm toolkit on the target side of all the training data . |
| we propose a method for extracting semantic orientations of phrases ( pairs of an adjective and a noun . | we proposed a method for extracting semantic orientations of phrases ( pairs of an adjective and a noun ) . |
| semantic role labeling ( srl ) is defined as the task to recognize arguments for a given predicate and assign semantic role labels to them . | semantic role labeling ( srl ) is the process of producing such a markup . |
| which beyonds the capability of phrase-based mt , we extend the search-aware tuning framework from phrase-based mt to syntax-based mt , in particular the hierarchical phrase-based translation model . | we extend this approach from phrase-based translation to syntax-based translation by generalizing the evaluation metrics for partial translations to handle tree-structured derivations in a way inspired by inside-outside algorithm . |
| svms have been shown to be robust in classification tasks involving text where the dimensionality is high . | svms are frequently used for text classification and have been applied successfully to nli . |
| we have adopted a supervised approach , a svm polynomial kernel classifier trained with the data provided by the challenge . | we follow a supervised approach , exploiting a svm polynomial kernel classifier trained with the challenge data . |
| the model parameters are trained using minimum error-rate training . | minimum error rate training is applied to tune the cn weights . |
| conditional random fields are undirected graphical models trained to maximize a conditional probability . | conditional random fields are probabilistic models for labelling sequential data . |
| coreference resolution is the task of clustering a sequence of textual entity mentions into a set of maximal non-overlapping clusters , such that mentions in a cluster refer to the same discourse entity . | coreference resolution is the process of determining whether two expressions in natural language refer to the same entity in the world . |
| our baseline is a standard phrase-based smt system . | our smt system is a phrase-based system based on the moses smt toolkit . |
| supervised methods shows that while supervised methods generally outperform the unsupervised ones . | we show that supervised methods outperform the unsupervised ones , while also being more efficient , computed on top of low-dimensional vectors . |
| we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing . | we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus . |
| table 3 gives the results for the penn treebank converted with the head-finding rules of yamada and matsumoto and the labeling rules of nivre . | table 4 shows labeled and unlabeled accuracy scores of previous work reported for the penn2malt conversion with the head finding rules of yamada and matsumoto . |
| we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus . | we first trained a trigram bnlm as the baseline with interpolated kneser-ney smoothing , using srilm toolkit . |
| for the training of the drank and fixrank models , we utilised svm rank . | for the ranker we used svm rank , an efficient implementation for training ranking svms . |
| semantic role labeling ( srl ) is the process of extracting simple event structures , i.e. , “ who ” did “ what ” to “ whom ” , “ when ” and “ where ” . | semantic role labeling ( srl ) is the task of identifying semantic arguments of predicates in text . |
| whilst , the parameters for the maximum entropy model are developed based on the minimum error rate training method . | the weights associated to feature functions are optimally combined using the minimum error rate training . |
| xing et al pre-defined a set of topics from an external corpus to guide the generation of the seq2seq model . | moreover , xing et al incorporated topic words into seq2seq frameworks , where topic words are obtained from a pre-trained lda model . |
| and show that our dependency language model provides improvements on five different test sets , with an overall gain of 0 . 92 in ter and 0 . 45 in bleu scores . | our results show that augmenting a state-of-the-art phrase-based system with this dependency language model leads to significant improvements in ter ( 0.92 % ) and bleu ( 0.45 % ) scores on five nist chinese-english evaluation test sets . |
| suchanek et al regarded the heading of a wikipedia article as a hyponym and obtained category labels attached to the article as its hypernym candidates . | suchanek et al extracted hyponymy relations from the category pages in the wikipedia using wordnet information . |
| sun and wan proposed a structure-based stacking model , which makes use of structured features such as sub-words for model combination . | sun and wan further extend the guide-feature method and propose a more complex sub-word stacking approach . |
| decoding paths adopted by other mt systems , this framework achieves better translation quality with much less re-decoding time . | thanks to the refined translation models , this approach produces better translations with a much shorter re-decoding time . |
| questions show that a discriminatively trained preference rank model is able to outperform alternative approaches designed for the same task . | a discriminative preference ranking model with a preference for appropriate answers is trained and applied to unseen questions . |
| we use a conditional random field sequence model , which allows for globally optimal training and decoding . | we use an information extraction tool for named entity recognition based on conditional random fields . |
| furthermore , tang et al proposed a new neural network approach called sswe to train sentiment-aware word representation . | tang et al , proposed a method to learn sentiment specific word embeddings from tweets with emoticons as distant-supervised corpora without any manual annotation . |
| we use the berkeley probabilistic parser to obtain syntactic trees for english and its bonsai adaptation for french . | we use the berkeley probabilistic parser to obtain syntactic trees for english and its adapted version for french . |
| cover-based method guarantees that all bursty n-grams including irregularly-formed ones must be covered by extracted bursty phrases . | the proposed set cover-based method finds a minimum set of bursty phrases that cover all bursty n-grams including incomplete ones . |
| in the first part of the paper a novel , sortally-based approach to aspectual composition . | the first part of the paper develops a novel , sortally-based approach to the problem of aspectual composition . |
| for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing . | we train a kn-smoothed 5-gram language model on the target side of the parallel training data with srilm . |
| we used trigram language models with interpolated kneser-ney discounting trained using the sri language modeling toolkit . | we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus . |
| the neural network for greedy training is based on the neural networks of chen and manning and . | our base model is a transition-based neural parser of chen and manning . |
| each context consists of approximately a paragraph of surrounding text , where the word to be discriminated ( the target word ) is found approximately in the middle of the context . | each context consists of several sentences that use a single sense of a target word , where at least one sentence contains the word . |
| corpus offers two improvements over current resources . | our corpus offers two main contributions . |
| supervised methods shows that while supervised methods generally outperform the unsupervised ones , the former are sensitive to the distribution of training instances , hurting their reliability . | as a consequence , supervised methods are sensitive to the distribution of examples in a particular dataset , making them less reliable for real-world applications . |
| finite-state head transducers produces implementations that are much more efficient than those for the ibm model . | the resulting finite-state machines are more expressive than standard left-to-right transducers . |
| the language model used was a 5-gram with modified kneser-ney smoothing , built with srilm toolkit . | we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit . |
| for example , turian et al used word embeddings as input features for several nlp systems , including a traditional chunking system based on conditional random fields . | turian et al , for example , used embeddings from existing language models as unsupervised lexical features to improve named entity recognition and chunking . |
| information extraction ( ie ) is a task of identifying “ facts ” ( entities , relations and events ) within unstructured documents , and converting them into structured representations ( e.g. , databases ) . | information extraction ( ie ) is the task of extracting information from natural language texts to fill a database record following a structure called a template . |
| for the classifiers we use the scikit-learn machine learning toolkit . | we use the selectfrommodel 4 feature selection method as implemented in scikit-learn . |
| twitter is a huge microblogging service with more than 500 million tweets per day from different locations of the world and in different languages ( cite-p-10-1-6 ) . | twitter is a communication platform which combines sms , instant messages and social networks . |
| distributional similarity is used in many proposals to find semantically related words . | a common approach to the automatic extraction of semantically related words is to use distributional similarity . |
| a context-free grammar ( cfg ) is a 4-tuple math-w-4-1-0-9 , where math-w-4-1-0-18 is the set of nonterminals , σ the set of terminals , math-w-4-1-0-31 the set of production rules and math-w-4-1-0-38 a set of starting nonterminals ( i.e . multiple starting nonterminals are possible ) . | a context-free grammar ( cfg ) is a tuple math-w-3-1-1-9 , where math-w-3-1-1-22 is a finite set of nonterminal symbols , math-w-3-1-1-31 is a finite set of terminal symbols disjoint from n , math-w-3-1-1-44 is the start symbol and math-w-3-1-1-52 is a finite set of rules . |
| in this work , we detailed the multiple choice questions in subject history of gaokao , present two different approaches to address them . | in this work , we detailed the gaokao history multiple choice questions ( gkhmc ) and proposed two different approaches to address them using various resources . |
| deep learning has been considered as a generic solution to domain adaptation , and transfer learning problems . | deep learning with knowledge transfer has been previously applied to sentiment analysis in the context of domain adaptation and cross-lingual applications . |
| the log-linear feature weights are tuned with minimum error rate training on bleu . | the feature weights λ m are tuned with minimum error rate training . |
| we trained a 5-gram sri language model using the corpus supplied for this purpose by the shared task organizers . | we trained a specific language model using srilm from each of these corpora in order to estimate n-gram log-probabilities . |
| word sense disambiguation ( wsd ) is the problem of assigning a sense to an ambiguous word , using its context . | word sense disambiguation ( wsd ) is a key task in computational lexical semantics , inasmuch as it addresses the lexical ambiguity of text by making explicit the meaning of words occurring in a given context ( cite-p-18-3-10 ) . |
| our method returns an “ explanation ” consisting of sets of input and output tokens that are causally related . | our method returns an “ explanation ” consisting of groups of input-output tokens that are causally related . |
| we used srilm for training the 5-gram language model with interpolated modified kneser-ney discounting . | we also use a 4-gram language model trained using srilm with kneser-ney smoothing . |
| for instance swales develops his notion of genre in academic and research settings , bathia in professional settings , and so on . | for instance swales develops his notion of genre in academic and research settings , bathia and trosborg in professional settings , yates and orlikowsky within organizational communication . |
| work , we developed an approach based on distributional semantics to check whether a word in an answer is similar enough to a word in the question to count as given . | in place of surface-based givenness checks , as a first step in this direction we developed an approach integrating distributional semantics to check whether a word in a sentence is similar enough to a word in the context to count as given . |
| we use 5-grams for all language models implemented using the srilm toolkit . | we use srilm with its default parameters for this purpose . |
| trigram language models were estimated using the sri language modeling toolkit with modified kneser-ney smoothing . | the language model was constructed using the srilm toolkit with interpolated kneser-ney discounting . |
| a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit . | a 4-gram language model is trained on the xinhua portion of the gigaword corpus with the srilm toolkit . |
| riloff and wiebe performed pattern learning through bootstrapping while extracting subjective expressions . | riloff and wiebe extracted subjective expressions from sentences using a bootstrapping pattern learning process . |
| shang et al and serban et al apply the rnn-based general encoder-decoder framework to the open-domain dialogue response generation task . | serban et al propose a hierarchical recurrent encoder-decoder neural network to the open domain dialogue . |
| bidirectional lstm is an extension of traditional lstm to train two lstms on the input sequence . | the charner model uses bidirectional stacked lstms to map character sequences to tag sequences . |
| this tree kernel was slightly generalized by culotta and sorensen to compute similarity between two dependency trees . | culotta used this kernel on dependency trees to train a svm classifier for relation extraction . |
| relation extraction ( re ) is the task of recognizing relationships between entities mentioned in text . | relation extraction is the task of predicting attributes and relations for entities in a sentence ( zelenko et al. , 2003 ; bunescu and mooney , 2005 ; guodong et al. , 2005 ) . |
| we used the phrase-based smt in moses 5 for the translation experiments . | we used moses with the default configuration for phrase-based translation . |
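The rows above follow the dataset's two-column schema ( `sentence1` , `sentence2` ). As a minimal sketch of how a dataset with this schema could be loaded for inspection with the Hugging Face `datasets` library — the repository ID `user/sentence-pairs` and the `train` split below are placeholder assumptions, not this dataset's actual identifiers:

```python
# Minimal sketch: load and inspect a two-column paraphrase dataset.
# "user/sentence-pairs" is a hypothetical repository ID; substitute the
# real Hub ID of this dataset. The "train" split name is also an assumption.
from datasets import load_dataset

ds = load_dataset("user/sentence-pairs", split="train")

# Print the first few pairs; each record exposes the two string columns
# shown in the table above.
for record in ds.select(range(3)):
    print("sentence1:", record["sentence1"])
    print("sentence2:", record["sentence2"])
```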