sentence1 (stringlengths 16-446) | sentence2 (stringlengths 14-436) |
---|---|
we trained a 4-gram language model on this data with kneser-ney discounting using srilm . | the target-side 4-gram language model was estimated using the srilm toolkit and modified kneser-ney discounting with interpolation . |
the feature weights are tuned to optimize bleu using the minimum error rate training algorithm . | the component features are weighted to minimize a translation error criterion on a development set . |
to represent the semantics of the nouns , we use the word2vec method which has proven to produce accurate approximations of word meaning in different nlp tasks . | more specifically we use word2vec which seems to be a reasonable choice to model context similarity as the word vectors are trained to maximize the log probability of context words . |
we used srilm , the sri language modeling toolkit , to train several character models . | we used the sri language modeling toolkit with kneser-ney smoothing . |
the annotation was performed using the brat 2 tool . | they used the web-based annotation tool brat for the annotation . |
traditionally , ensemble learning combines the output from several different classifiers to obtain a single improved model . | model ensemble is a common technique to combine predictions of multiple classifiers for better results . |
to the best of our knowledge , there exists no analysis of the performance of modern framenet srl systems when applied to data from new domains . | to the best of our knowledge , there is no recent study of the domain dependence of framenet srl , also prohibited by a lack of appropriate datasets . |
reinforcement learning is a machine learning technique that defines how an agent learns to take optimal sequences of actions so as to maximize a cumulative reward . | reinforcement learning is a machine learning technique that defines how an agent learns to take optimal actions so as to maximise a cumulative reward . |
we use 300-dimensional word embeddings from glove to initialize the model . | we use the 200-dimensional global vectors , pre-trained on 2 billion tweets , covering over 27 billion tokens . |
classes can be induced directly from the corpus using distributional clustering or taken from a manually crafted taxonomy . | classes can be induced directly from the corpus or taken from a manually crafted taxonomy . |
turkish is an agglutinative language in which a sequence of inflectional and derivational morphemes get affixed to a root . | turkish is an agglutinative language where a sequence of inflectional and derivational morphemes get affixed to a root . |
convolutional neural networks have been proven to significantly outperform other methods for relation classification . | a number of convolutional neural network , recurrent neural network , and other neural architectures have been proposed for relation classification . |
we use mateplus for srl which produces predicate-argument structures as per propbank . | we derive our predicate-argument structures from a semantic parse based on the propbank annotation scheme . |
the 5-gram kneser-ney smoothed language models were trained by srilm , with kenlm used at runtime . | this means in practice that the language model was trained using the srilm toolkit . |
the analysis is based on the pronunciation of the vowels found in the data set . | the study focuses on the pronunciation of vowels found in the data . |
riloff and wiebe performed pattern learning through bootstrapping while extracting subjective expressions . | riloff and wiebe learned the extraction patterns for subjective expressions . |
we use the moses toolkit to train our phrase-based smt models . | our implementation of the segment-based imt protocol is based on the moses toolkit . |
the stanford parser was used to generate the dependency parse information for each sentence . | we used the stanford parser to generate dependency trees of sentences . |
the objective measures used were the bleu score , the nist score and multi-reference word error rate . | the system was evaluated in terms of bleu score , word error rate and sentence error rate . |
the 'grammar ' consists of a lexicon where each lexical item is associated with a finite number of structures for which that item is the 'head ' . | the grammar is the general part of the syntactic box , the part concerned with syntactic structures . |
in this paper , we propose methods of using new linguistic and contextual features that do not suffer from this problem . | unlike previous work , we employed robust probabilistic models to capture useful linguistic and contextual information . |
our method returns an " explanation " consisting of sets of input and output tokens that are causally related . | our method returns an " explanation " consisting of groups of input-output tokens that are causally related . |
srilm toolkit was used to create up to 5-gram language models using the mentioned resources . | furthermore , we train a 5-gram language model using the sri language toolkit . |
the distributional pattern or dependency with syntactic patterns is also a prominent source of data input . | distributional pattern or dependency with syntactic patterns is also a prominent source of data input . |
we initialize our model with 300-dimensional word2vec toolkit vectors generated by a continuous skip-gram model trained on around 100 billion words from the google news corpus . | we computed pre-trained word embeddings in 300 dimensions for all the words in the stories using the skip-gram architecture algorithm . |
a core feature of learning to write is receiving feedback and making revisions based on the information provided . | a core feature of learning to write is receiving feedback and making revisions based on that feedback . |
collobert et al used word embeddings as input to a deep neural network for multi-task learning . | collobert et al developed a general neural network architecture for sequence labeling tasks . |
the language models are 4-grams with modified kneser-ney smoothing which have been trained with the srilm toolkit . | the lms are built using the srilm language modelling toolkit with modified kneser-ney discounting and interpolation . |
as expected , this analysis suggests that including context in the model helps more . | in both settings , we show that including context significantly improves results against a context-free version of the model . |
relation extraction is the problem of populating a target relation ( representing an entity-level relationship or attribute ) with facts extracted from natural-language text . | relation extraction ( re ) is the task of extracting semantic relationships between entities in text . |
later work by wang et al was inspired by the similarity between the dependency parse of a sentence and its semantic amr graph . | later work by wang et al adopted a different strategy based on the similarity between the dependency parse of a sentence and the semantic amr graph . |
on a collection of 1.5 million abstracts , the method was found to lead to an improvement of roughly 60 % in map and 70 % in p @ 10 . | on a data set composed of 1.5 million abstracts extracted from pubmed , our method obtains an increase of 61.5 % for map and 70 % for p @ 10 over the classical language modeling approach . |
bengio et al and kumar et al developed training paradigms which are inspired by the learning principle that humans can learn more effectively when training starts with easier concepts and gradually proceeds with more difficult concepts . | bengio et al and kumar et al developed training paradigms which are inspired by the learning principle that humans can learn more effectively when training starts with easier concepts and gradually proceeds with more difficult ones . |
in this paper , we have presented how to extract comparative sentences from korean text documents . | this paper proposes how to automatically identify korean comparative sentences from text documents . |
in this paper , we focus on the study of applying structure regularization to the relation classification task of chinese literature . | in this paper , we present a novel model , structure regularized brcnn , to classify the relation of two entities in a sentence . |
we use the pmi score to evaluate the quality of topics learnt by topic models . | we employ normalised pointwise mutual information which outperforms other metrics in measuring topic coherence . |
we build a 9-gram lm using srilm toolkit with modified kneser-ney smoothing . | we used srilm to build a 4-gram language model with interpolated kneser-ney discounting . |
coreference resolution is a field in which major progress has been made in the last decade . | coreference resolution is the task of identifying all mentions which refer to the same entity in a document . |
in the following three aspects , 1 ) we exploit both the semantic and sentiment correlations of the bilingual texts . | 2 ) our model leverages both the semantic and sentiment correlations between bilingual documents . |
as discussed in the introduction , we use conditional random fields , since they are particularly suitable for sequence labelling . | for simplicity , we use the well-known conditional random fields for sequential labeling . |
developer a is translator , but not researcher ; developer b is software engineer , but not researcher . | developer a is translator , but not researcher ; developer b is software engineer , but not researcher . |
esa was introduced by gabrilovich and markovitch employing wikipedia as a knowledge base . | gabrilovich and markovitch utilized wikipedia-based concepts as the basis for a high-dimensional meaning representation space . |
parameters are learned using mini-batch stochastic gradient descent with adagrad learning schedule . | training is done through stochastic gradient descent over shuffled mini-batches with the adagrad update rule . |
a penalized probabilistic first-order inductive learning algorithm has been presented for chinese grammatical error diagnosis . | a penalized probabilistic first-order inductive learning algorithm was presented for chinese grammatical error diagnosis . |
zhao and vogel describe a generative model for discovering parallel sentences in the xinhua news chinese-english corpus . | zhao and vogel combine a sentence length model with an ibm model 1-type translation model . |
experimental results on a large-scale subtitle corpus show that our approach improves translation performance by 0.61 bleu points ( cite-p-20-1-17 ) . | experimental results show that our approach achieves a significant improvement of 1.58 bleu points in translation performance with 66 % f-score for dp generation accuracy . |
we used a regularized maximum entropy model . | we use the maximum entropy model as a classifier . |
we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model . | furthermore , we train a 5-gram language model using the sri language toolkit . |
in particular , haussler and watkins proposed the best-known convolution kernels for a discrete structure . | haussler and watkins proposed a new kernel method based on discrete structures respectively . |
our baseline system is a standard phrase-based smt system built with moses . | we implement the pbsmt system with the moses toolkit . |
grammars acquired from this model demonstrate a consistent use of category labels , something which has not been demonstrated by other acquisition models . | moreover , grammars acquired from this model demonstrate a consistent use of category labels , something which has not been demonstrated by other acquisition models . |
hepple introduces first-order compilation for implicational linear logic , and shows how that method can be used with labelling as a basis for parsing implicational categorial systems . | hepple shows how deductions in implicational linear logic can be recast as deductions involving only first-order formulae . |
system selection for diglossic languages . | system selection and combination in machine translation . |
cite-p-17-1-11 reported that discourse segments tend to be in a fixed order for structured texts . | cite-p-17-1-18 reported that discourse structure helps to extract anaphoric relations . |
specifically , we follow callison-burch et al and use a source language suffix array to extract only those rules which will actually be used in translating a particular set of test sentences . | instead , we follow callison-burch et al and lopez , and use a source language suffix array to extract only rules that will actually be used in translating a particular test set . |
the trigram language model is implemented in the srilm toolkit . | srilm can be used to compute a language model from ngram counts . |
this type of feature is based on a trigram model with kneser-ney smoothing . | the language model is a large interpolated 5-gram lm with modified kneser-ney smoothing . |
the penn discourse tree bank is the largest resource to date that provides a discourse annotated corpus in english . | the penn discourse treebank is a large corpus annotated with discourse relations . |
across sentences , the proposed method outperformed regression- and conventional nn-based methods presented in previous studies . | experimental results show that the proposed method outperforms lexicon-based , regression-based , and nn-based methods proposed in previous studies . |
the srilm toolkit was used to build the trigram mkn smoothed language model . | language models were built using the sri language modeling toolkit with modified kneser-ney smoothing . |
for word embedding , we used pre-trained glove word vectors with 300 dimensions , and froze them during training . | we used the 300-dimensional glove word embeddings learned from 840 billion tokens in the web crawl data , as general word embeddings . |
gu et al introduced copynet to simulate the repeating behavior of humans in conversation . | gu et al proposed copynet , which is able to copy words from the source message . |
we use a minibatch stochastic gradient descent algorithm together with the adam method to train each model . | we use binary cross-entropy loss and the adam optimizer for training the nil-detection models . |
ma et al adapted features from earlier studies and proposed to model them over time . | ma et al extended the model using time series to capture the variation of features over time . |
here , a singleton detection system based on word embeddings and neural networks is presented , which achieves state-of-the-art performance ( 79.6 % accuracy ) . | in this paper , a novel singleton detection system which makes use of word embeddings and neural networks is presented . |
opinion can be obtained by applying natural language processing techniques . | irony detection is a key task for many natural language processing works . |
the initial ndt system was created from components of the virtual human toolkit . | all modules of the system are built on top of the virtual human toolkit . |
yago is a large ontology based on wordnet and extended with concepts from wikipedia and other resources . | yago is a knowledge base , linking wikipedia entries to the wordnet ontology . |
for english posts , we used the 200d glove vectors as word embeddings . | for english , we use the pre-trained glove vectors . |
for all data sets , we trained a 5-gram language model using the sri language modeling toolkit . | we trained a 5-gram sri language model using the corpus supplied for this purpose by the shared task organizers . |
table 6 : pearson's r of acceptability measure and sentence minimum word frequency . | table 6 : pearson's r of acceptability measure and sentence minimum word frequency for all models in bnc . |
n-gram language models for different orders with interpolated kneser-ney smoothing as well as entropy based pruning were built for this morph lexicon using the srilm toolkit . | the target-side 4-gram language model was estimated using the srilm toolkit and modified kneser-ney discounting with interpolation . |
we acquired 138.1 million pattern pairs with 70 % precision with such non-trivial lexical substitution as " use y to distribute " . | using our proposed method , we acquired 217.8 million japanese entailment pairs with 80 % precision and 138.1 million non-trivial pairs with 70 % precision . |
dropouts are applied on the outputs of bi-lstm . | for regularization , dropout is applied to each layer . |
gru is reported to be better for long-term dependency modeling than the simple rnn . | the birnn is implemented with lstms for better long-term dependencies handling . |
stance detection is the task of classifying the attitude previous work has assumed that either the target is mentioned in the text or that training data for every target is given . | stance detection is the task of assigning stance labels to a piece of text with respect to a topic , i.e . whether a piece of text is in favour of “ abortion ” , neutral , or against . |
we demonstrate superagent as an add-on extension to mainstream web browsers such as microsoft edge and google chrome . | we demonstrate superagent as an add-on extension to mainstream web browsers and show its usefulness to the user's online shopping experience . |
goldsmith describes unsupervised algorithms for extracting morphological rules from a corpus having no prior knowledge of the language , using minimum description length analysis . | goldsmith gives a comprehensive heuristic algorithm for unsupervised morphological analysis , which uses an mdl criterion to segment words and find morphological paradigms . |
we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing . | we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model . |
in this work , we provide just such a framework for training . | we have provided just such a framework for improving parsing performance . |
we use a recurrent neural network with lstm cells to avoid the vanishing gradient problem when training long sequences . | we use an lstm model with an attention mechanism for capturing long dependencies in questions for the question similarity task . |
we propose a probabilistic sentence selection algorithm to address the issue of local redundancy in description . | to tackle the problem of local redundancy , we also propose a probabilistic sentence selection algorithm . |
we also demonstrate that our method clearly outperforms a recent state of the art method proposed for handling the problem of repeating phrases with a gain of 7 % ( absolute ) in rouge-l scores . | we also introduced a new data set and empirically verified we perform significantly better ( gain of 28 % ( absolute ) in rouge-l score ) than applying a plain encode-attend-decode mechanism to this problem . |
more recently , neural networks have become prominent in word representation learning . | recently , neural networks , and in particular recurrent neural networks have shown excellent performance in language modeling . |
in this paper , we present a bootstrapping solution that exploits a large unannotated corpus for training . | in this paper , we propose a bootstrapping solution for event role filler extraction that requires minimal human supervision . |
for example , jimeno et al argue that the use of disease terms in biomedical literature is well standardized , which is quite opposite for the gene terms . | this supports the argument of jimeno et al that the use of disease terms in biomedical literature is well standardized . |
research on error detection has mostly been concerned with function words , such as determiners and prepositions . | many studies deal with the issue of preposition error detection and correction . |
( hochreiter and schmidhuber , 1997 ) proposed a long short-term memory network , which can be used for sequence processing tasks . | to solve this problem , hochreiter and schmidhuber introduced the long short-term memory rnn . |
as mentioned earlier , we use the dataset created in feng and lapata . | we trained the multimodal topic model on the corpus created in feng and lapata . |
in this paper , we address the problem of product aspect rating prediction . | in this paper , we proposed a sentiment aligned topic model ( satm ) for product aspect rating prediction . |
topic modeling is a useful mechanism for discovering and characterizing various semantic concepts embedded in a collection of documents . | topic modeling is the standard technique for such purposes , and latent dirichlet allocation ( lda ) ( cite-p-16-1-1 ) is the most used algorithm , which models the documents as distribution over topics and topics as distribution over words . |
finally , blanc uses a variation on the rand index suitable for evaluating coreference . | blanc is a link-based metric that adapts the rand index to coreference resolution evaluation . |
for nb and svm , we used their implementation available in scikit-learn . | within this subpart of our ensemble model , we used an svm model from the scikit-learn library . |
we use logistic regression as the per-class binary classifier , implemented using liblinear . | we use the logistic regression implementation of liblinear wrapped by the scikit-learn library . |
for all data sets , we trained a 5-gram language model using the sri language modeling toolkit . | on the remaining tweets , we trained a 10-gram word length model , and a 5-gram language model , using srilm with kneser-ney smoothing . |
the target-side language models were estimated using the srilm toolkit . | the srilm toolkit was used to build the 5-gram language model . |
we adapt the perceptron discriminative learning algorithm to the cws problem . | we proposed a word-based cws model using the discriminative perceptron learning algorithm . |
the model weights are automatically tuned using minimum error rate training . | the model parameters are trained using minimum error-rate training . |
in this paper , we present lp-mert , an exact search algorithm for math-w-2-3-2-13-best optimization that exploits general assumptions commonly made with mert , e.g . , that the error . | in this paper , we present lp-mert , an exact search algorithm for minimum error rate training that reaches the global optimum using a series of reductions to linear programming . |
a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke . | the language model used was a 5-gram with modified kneser-ney smoothing , built with srilm toolkit . |
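
The table above is a two-column schema: each row pairs a `sentence1` with a `sentence2` that describes the same method or finding in different words. A minimal sketch of consuming such data, assuming the rows have been exported to a CSV file named `sentence_pairs.csv` with header columns `sentence1` and `sentence2` (the file name is hypothetical; the column names come from the table header):

```python
# Minimal sketch: read the two-column paraphrase-pair table into memory.
# Assumes an exported CSV with columns "sentence1" and "sentence2";
# the file name "sentence_pairs.csv" is an assumption, not part of the dataset.
import csv

with open("sentence_pairs.csv", newline="", encoding="utf-8") as f:
    pairs = [(row["sentence1"], row["sentence2"]) for row in csv.DictReader(f)]

# Each pair holds two near-paraphrase sentences drawn from NLP papers,
# e.g. two different descriptions of training an SRILM language model.
for sentence1, sentence2 in pairs[:3]:
    print(sentence1, "|", sentence2)
```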