sentence1 (string, lengths 16–446) | sentence2 (string, lengths 14–436)
---|---
we used the srilm toolkit to train a 4-gram language model on the xinhua portion of the gigaword corpus , which contains 238m english words .
|
we used kenlm with srilm to train a 5-gram language model based on all available target language training data .
|
we used svm classifier that implements linearsvc from the scikit-learn library .
|
we used the svd implementation provided in the scikit-learn toolkit .
|
despite their frequent use in topic modeling , we find that stemmers produce no meaningful improvement in likelihood and coherence .
|
while stemmers are used in topic modeling , we know of no analysis focused on their effect .
|
seki et al proposed a probabilistic model for zero pronoun detection and resolution that used hand-crafted case frames .
|
seki et al proposed a probabilistic model for zero pronoun detection and resolution that uses hand-crafted case frames .
|
traditional approaches of natural language generation consist in creating specific algorithms in the consensual nlg pipeline .
|
in the traditional pipeline view of natural language generation , many steps involve converting between increasingly specific tree representations .
|
we perform random replications of parameter tuning , as suggested by clark et al .
|
for our primary results , we perform random replications of parameter tuning , as suggested by clark et al .
|
we built a 5-gram language model from it with the sri language modeling toolkit .
|
we used srilm to build a 4-gram language model with kneser-ney discounting .
|
coreference resolution is the task of grouping mentions to entities .
|
coreference resolution is the task of determining when two textual mentions name the same individual .
|
we used a standard pbmt system built using moses toolkit .
|
we used the moses toolkit for performing statistical machine translation .
|
knowledge graphs , such as freebase , contain a wealth of structured knowledge in the form of relationships between entities and are useful for numerous end applications .
|
knowledge graphs such as freebase , yago and wordnet are among the most widely used resources in nlp applications .
|
stance detection is the task of classifying the attitude previous work has assumed that either the target is mentioned in the text or that training data for every target is given .
|
stance detection is the task of determining whether the author of a text is in favor or against a given topic , while rejecting texts in which neither inference is likely .
|
these models were implemented using the package scikit-learn .
|
all linear models were trained with the perceptron update rule .
|
system combination procedures , on the other hand , generate translations from the output of multiple component systems by combining the best fragments of these outputs .
|
system combination procedures , on the other hand , generate translations from the output of multiple component systems .
|
incremental parsing is a salient feature of glp .
|
incremental parsing is the task of assigning a syntactic structure to an input sentence as it unfolds word by word .
|
relation extraction ( re ) has been defined as the task of identifying a given set of semantic binary relations in text .
|
relation extraction is the task of predicting semantic relations over entities expressed in structured or semi-structured text .
|
the problem has been addressed recently by researchers working on large knowledge bases such as reverb and freebase .
|
fortunately , this task has been simplified with the emergence of large knowledge graphs , including freebase , from where we can retrieve information .
|
relation extraction is a key step towards question answering systems by which vital structured data is acquired from underlying free text resources .
|
relation extraction is a fundamental task in information extraction .
|
we will show translation quality measured with the bleu score as a function of the phrase table size .
|
we evaluate the translation quality using the case-sensitive bleu-4 metric .
|
in the remaining part of the paper , we introduce nivre 's parsing algorithm , propose a framework for online learning for deterministic parsing .
|
in this paper , we present an online large margin based training framework for deterministic parsing using nivre 's shift-reduce parsing algorithm .
|
the english side of the parallel corpus is trained into a language model using srilm .
|
gram language models are trained over the target-side of the training data , using srilm with modified kneser-ney discounting .
|
sentence-level bleu ( cite-p-21-3-1 ) is utilized as the reinforced objective for the generator .
|
furthermore , we propose to utilize the sentence-level bleu as the specific objective for the generator .
|
in this paper , we present a parsing-based model of task-oriented dialog that tightly integrates interpretation and generation .
|
in this paper , we present an integrated model of the two central tasks of dialog management : interpreting user actions and generating system actions .
|
the target-side language models were estimated using the srilm toolkit .
|
we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing .
|
in the first phase , a post will be automatically classified into several categories including interrogation , discussion , sharing and chat based on the intention .
|
in the first phase , the post plus its responses are classified into four categories based on the intention , interrogation , sharing , discussion and chat .
|
we use 300 dimensional glove embeddings trained on the common crawl 840b tokens dataset , which remain fixed during training .
|
we use 100-dimension glove vectors which are pre-trained on a large twitter corpus and fine-tuned during training .
|
we used the sri language modeling toolkit with kneser-kney smoothing .
|
we apply sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing .
|
using ensembles of multiple systems is a standard approach to improving accuracy in machine learning .
|
ensembling multiple systems is a well known standard approach to improving accuracy in several machine learning applications .
|
we use the standard corpus for this task , the penn treebank .
|
we use the wsj corpus , a pos annotated corpus , for this purpose .
|
in addition tromble and eisner and visweswariah et al present models that use binary classification to decide whether each pair of words should be placed in forward or reverse order .
|
visweswariah et al and tromble and eisner have considered the source reordering problem to be a problem of learning word reordering from word-aligned data .
|
we use srilm to build 5-gram language models with modified kneser-ney smoothing .
|
we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit .
|
baldwin and li evaluate the effect of different normalization actions on dependency parsing performance for the social media domain .
|
baldwin and li examined the theoretical impact of different normalization actions on parsing performance .
|
information retrieval ( ir ) is the task of retrieving , given a query , the documents relevant to the user from a large quantity of documents ( cite-p-13-3-13 ) .
|
information retrieval ( ir ) is a challenging endeavor due to problems caused by the underlying expressiveness of all natural languages .
|
le and mikolov presented the paragraph vector in sentiment analysis .
|
le and mikolov extended the word embedding learning model by incorporating paragraph information .
|
we adapt the minimum error rate training algorithm to estimate parameters for each member model in co-decoding .
|
we tune phrase-based smt models using minimum error rate training and the development data for each language pair .
|
we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit .
|
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided .
|
we use the stanford part of speech tagger to annotate each word with its pos tag .
|
in our work , we have used the stanford log-linear part-of-speech tagger to do pos tagging .
|
proof of the new algorithm is simpler than the one reported by vijay-shanker and weir ( 1993 ) .
|
the same holds true for the algorithm of vijay-shanker and weir ( 1993 ) .
|
in which to consider each contribution , the editors use the questions to divide up the contents of the book .
|
rather than defining a framework in which to consider each contribution , the editors use the questions to divide up the contents of the book .
|
su et al presented a clustering method that utilizes the mutual reinforcement associations between features and opinion words .
|
su et al , 2008 ) used heterogeneous relations to find implicit sentiment associations among words .
|
the feature weights are tuned to optimize bleu using the minimum error rate training algorithm .
|
all the feature weights and the weight for each probability factor are tuned on the development set with minimumerror-rate training .
|
we used the logistic regression implemented in the scikit-learn library with the default settings .
|
within this subpart of our ensemble model , we used a svm model from the scikit-learn library .
|
fader et al present a question answering system that learns to paraphrase a question so that it can be answered using a corpus of open ie triples .
|
fader et al presented a qa system that maps questions onto simple queries against open ie extractions , by learning paraphrases from a large monolingual parallel corpus , and performing a single paraphrasing step .
|
word sense disambiguation ( wsd ) is the task of assigning sense tags to ambiguous lexical items ( lis ) in a text .
|
word sense disambiguation ( wsd ) is a widely studied task in natural language processing : given a word and its context , assign the correct sense of the word based on a predefined sense inventory ( cite-p-15-3-4 ) .
|
we evaluate the performance of different translation models using both bleu and ter metrics .
|
we evaluate text generated from gold mr graphs using the well-known bleu measure .
|
in this paper , we propose a stack based multi-layer attention method , in which , stack is simulated with two binary vectors , and multi-layer attention is introduced to capture multiple word dependencies in partial trees .
|
in our method , two binary vectors are used to track the decoding stack in transition-based parsing , and multi-layer attention is introduced to capture multiple word dependencies in partial trees .
|
and we have constructed lexicons for 11 different languages .
|
we construct lexicons in 11 languages of varying morphological complexity .
|
word sense disambiguation ( wsd ) is the task of determining the meaning of an ambiguous word in its context .
|
word sense disambiguation ( wsd ) is the task to identify the intended sense of a word in a computational manner based on the context in which it appears ( cite-p-13-3-4 ) .
|
the language model used was a 5-gram with modified kneserney smoothing , built with srilm toolkit .
|
the language models used were 7-gram srilm with kneser-ney smoothing and linear interpolation .
|
the output of the recurrent layer is additionally regularized by using dropout , and classification is performed using softmax with crossentropy loss .
|
dropout is applied to the output of the recurrent layers , which are concatenated and passed further to the first order chain crf layer .
|
sentiment analysis ( sa ) is a hot-topic in the academic world , and also in the industry .
|
sentiment analysis ( sa ) is the task of prediction of opinion in text .
|
in this paper , we explore a new problem of text recap extraction .
|
in section 3 , we introduce our new dataset for text recap extraction .
|
unfortunately , wordnet is a fine-grained resource , which encodes possibly subtle sense distictions .
|
unfortunately , wordnet is a fine-grained resource , encoding sense distinctions that are difficult to recognize even for human annotators ( cite-p-13-1-2 ) .
|
ganchev et al , 2008 ) use agreement-driven training of alignment models and replace viterbi decoding with posterior decoding .
|
ganchev et al propose postcat which uses posterior regularization to enforce posterior agreement between the two models .
|
on similar lines , we developed an algorithm which employs an online thesaurus .
|
on similar lines , we developed an algorithm which employs an online thesaurus as a knowledge base .
|
in this presentation , x i is the ith example .
|
in this presentation , x i is the ith example in the corpus ,
|
and , based on our analysis of conversational data , propose a model of grounding using both verbal and nonverbal information .
|
based on these results , we present an eca that uses verbal and nonverbal grounding acts to update dialogue state .
|
we initialize our word vectors with 300-dimensional word2vec word embeddings .
|
we initialize the embedding weights by the pre-trained word embeddings with 200 dimensional vectors .
|
on the basis of this observation , we describe a class of formalisms which we call linear contextfree rewriting systems .
|
we outlined the definition of a family of constrained grammatical formalisms , called linear context-free rewriting systems .
|
we further show that prediction performance could be improved by incorporating specialized features that capture helpfulness information specific to peer reviews .
|
in addition , we investigate the utility of incorporating additional specialized features tailored to peer review .
|
we evaluate our method on a range of languages taken from the conll shared tasks on multilingual dependency parsing .
|
we use a recently proposed dependency parser 1 which has demonstrated state-of-theart performance on a selection of languages from the conll-x shared task .
|
for unsupervised pos tagging , ldc shows a substantial improvement in performance over state-of-the-art methods .
|
the ldc approach is shown to yield substantial improvement over state-of-the-art methods for the problem of fully unsupervised , distributional only , pos tagging .
|
a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke .
|
we trained two 5-gram language models on the entire target side of the parallel data , with srilm .
|
we found that simple , unsupervised models perform significantly better when n-gram frequencies are obtained from the web .
|
in all cases , we propose a simple , unsupervised n-gram based model whose parameters are estimated using web counts .
|
in this paper , we work on candidate generation at the character level , which can be applied to spelling error correction .
|
without loss of generality , in this paper we address candidate generation in spelling error correction .
|
more recently , the method described in produces improvements over the methods above , while reducing the computational cost by using weighted alignment matrices to represent the alignment distribution over each parallel sentence .
|
more recently , a more efficient representation of multiple alignments was proposed in named weighted alignment matrices , which represents the alignment probability distribution over the words of each parallel sentence .
|
a bunsetsu consists of one independent word and zero or more ancillary words .
|
a bunsetsu is a common unit when syntactic structures in japanese are discussed .
|
twitter is a subject of interest among researchers in behavioral studies investigating how people react to different events , topics , etc. , as well as among users hoping to forge stronger and more meaningful connections with their audience through social media .
|
twitter is a popular microblogging service which provides real-time information on events happening across the world .
|
we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .
|
we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit .
|
we present a new semi-supervised training algorithm for structured svms .
|
we present a novel semi-supervised training algorithm for learning dependency parsers .
|
in this paper , we develop a novel adaptive topic model with the ability to adapt topics .
|
in this paper , we develop an adaptive topic model to go beyond a strictly sequential model while allow some hierarchical influence .
|
we employ the pretrained word vector , glove , to obtain the fixed word embedding of each word .
|
our word embeddings is initialized with 100-dimensional glove word embeddings .
|
the weights of the different feature functions were optimised by means of minimum error rate training .
|
the feature weights for each system were tuned on development sets using the moses implementation of minimum error rate training .
|
chiang introduces formal synchronous grammars for phrase-based translation .
|
chiang introduces hiero , a hierarchical phrase-based model for statistical machine translation .
|
for example , xue et al designed a retrieval model for cqa search , which considers both question and answer parts when measuring the relatedness between queries and cqa resources .
|
for example , xue et al have exploited the translation-based language model for question retrieval in large qa database and achieved significant retrieval effectiveness .
|
we use the logistic regression classifier as implemented in the skll package , which is based on scikitlearn , with f1 optimization .
|
we used the scikit-learn implementation of a logistic regression model using the default parameters .
|
and thus the limited availability of labeled data often becomes the bottleneck of data-driven , supervised models .
|
therefore , the limited availability of parallel data has become the bottleneck of existing , purely supervised-based models .
|
the system used a tri-gram language model built from sri toolkit with modified kneser-ney interpolation smoothing technique .
|
these language models were built up to an order of 5 with kneser-ney smoothing using the srilm toolkit .
|
another line of research focuses on neural models , which have shown great effectiveness in automatic feature learning on a variety of nlp tasks .
|
this is motivated by the fact that multi-task learning has shown to be beneficial in several nlp tasks .
|
similarity propagation is used to exploit the prior knowledge and merge two language spaces .
|
propagation method is used to guide the language-space merging process .
|
it is used to support semantic analyses in the english hpsg grammar erg , but also in other grammar formalisms like lfg .
|
it is used to support semantic analyses in the hpsg english resource grammar - , but also in other grammar formalisms like lfg .
|
word sense disambiguation ( wsd ) is the problem of assigning a sense to an ambiguous word , using its context .
|
many words have multiple meanings , and the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd ) .
|
moreover , uchiyama et al evaluate their methods on a set of jcvs that are mostly monosemous .
|
uchiyama et al also propose a statistical token classification method for jcvs .
|
a simile is a form of figurative language that compares two essentially unlike things ( cite-p-20-3-11 ) , such as “ jane swims like a dolphin ” .
|
the simile is a figure of speech that builds on a comparison in order to exploit certain attributes of an entity in a striking manner .
|
the evaluation metric for the overall translation quality was case-insensitive bleu4 .
|
translation performance was measured by case-insensitive bleu .
|
exploitation of generic patterns substantially increases system recall with small effect on overall precision .
|
by exploiting generic patterns , system recall substantially increases with little effect on precision .
|
context-free grammar augmented with λ-operators is learned given a set of training sentences and their correct logical forms .
|
a semantic parser is learned given a set of training sentences and their correct logical forms using standard smt techniques .
|
mcdonald et al introduced a simple , flexible framework for scoring dependency parses .
|
mcdonald et al proposed an online large-margin method for training dependency parsers .
|
as erhan et al reported , word embeddings learned from a significant amount of unlabeled data are more powerful for capturing the meaningful semantic regularities of words .
|
word embeddings learned from a large amount of unlabeled data have been shown to be able to capture the meaningful semantic regularities of words .
|
hawes , lin , and cite-p-16-7-11 use a conditional random fields ( crf ) model to predict the next speaker .
|
hawes , lin , and cite-p-16-7-11 use a conditional random fields ( crf ) model to predict the next speaker in supreme court oral argument transcripts .
|
coreference resolution is a set partitioning problem in which each resulting partition refers to an entity .
|
coreference resolution is a field in which major progress has been made in the last decade .
|
in the vso constructions , the verb agrees with the syntactic subject in gender only , while in the svo constructions , the verb agrees with the subject .
|
in the vso constructions , the verb agrees with the syntactic subject in gender only , while in the svo constructions , the verb agrees with the subject in both number and gender .
|
we measure translation quality via the bleu score .
|
to evaluate segment translation quality , we use corpus level bleu .
|
tang et al 2002 ) use the density information to weight the selected examples but do not use it to select a sample .
|
tang et al 2002 ) use the density information to weight the selected examples while we use it to select examples .
|
even without such syntactic information , our neural models can realize comparable performance exclusively using the word sequence information of a sentence .
|
to remedy this problem , we propose a neural model which automatically induces features sensitive to multi-predicate interactions exclusively from the word sequence information of a sentence .
|
lai et al proposed recurrent cnn while johnson and zhang proposed semi-supervised cnn for solving text classification task .
|
lai et al and visin et al proposed recurrent cnns , while johnson and zhang proposed semi-supervised cnns for solving a text classification task .
|
focus , coherence and referential clarity are best evaluated by a class of features .
|
as in system-level prediction , for referential clarity , focus , and structure , the best feature class is continuity .
|
the weights of the different feature functions were optimised by means of minimum error rate training .
|
the standard minimum error rate training algorithm was used for tuning .
|
a prefix verb appears with a hyphen between the prefix and stem .
|
a prefix verb is a derived word with a bound morpheme as prefix .
|
taglda is a representative latent topic model by extending latent dirichlet allocation .
|
rel-lda is an application of the lda topic model to the relation discovery task .
|
the first syntactic transformation method is presented by atallah et al .
|
the first syntactic transformation method was presented by atallah et al .
|