sentence1 : string ( lengths 16 to 446 )
sentence2 : string ( lengths 14 to 436 )
the translation quality is evaluated by case-insensitive bleu-4 metric .
the translation quality is evaluated by case-insensitive bleu-4 .
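Many of the pairs in this listing report case-insensitive BLEU-4. As a hedged illustration only, this is a minimal sketch of how such a score could be computed with the sacrebleu library; the hypothesis and reference strings are placeholder examples, not data from any cited system.

```python
# Minimal sketch: case-insensitive corpus-level BLEU-4 with sacrebleu.
# The hypothesis/reference strings are illustrative placeholders.
import sacrebleu

hypotheses = ["the cat sat on the mat", "a quick brown fox"]
references = [["The cat sat on the mat .", "A quick brown fox ."]]  # one reference stream

# lowercase=True makes the comparison case-insensitive; order-4 BLEU is the default.
bleu = sacrebleu.corpus_bleu(hypotheses, references, lowercase=True)
print(f"case-insensitive BLEU-4: {bleu.score:.2f}")
```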
for brevity we will omit discussion of the exact score calculation and refer the interested reader to coppersmith et al .
we briefly describe the procedure here , and refer interested readers to coppersmith et al .
the performance of the phrase-based smt system is measured by bleu score and ter .
the mt performance is measured with the widely adopted bleu and ter metrics .
in this paper we report our work on anchoring temporal expressions .
in this paper we report our work on anchoring temporal expressions in a novel genre , emails .
role induction can be naturally formalized as a clustering problem .
we treat role induction as a clustering problem .
translation quality can be measured in terms of the bleu metric .
we measure translation quality via the bleu score .
hatzivassiloglou and mckeown proposed a supervised algorithm to determine the semantic orientation of adjectives .
hatzivassiloglou and mckeown showed how the pattern x and y could be used to automatically classify adjectives as having positive or negative orientation .
niu et al automatically convert the dependency-structure cdt into the phrase-structure style of ctb5 using a constituency parser trained on ctb5 , and then combine the converted treebanks for constituency parsing .
as discussed in section 2 , niu et al automatically convert the dependency-structure cdt to the phrase-structure annotation style of ctb5x and use the converted treebank as additional labeled data .
we evaluated the translation quality using the bleu-4 metric .
we evaluated the translation quality using the case-insensitive bleu-4 metric .
ghosh et al proposed a linear tagging approach for argument identification using conditional random fields and n-best results .
ghosh et al , 2014 , used a linear tagging approach based on conditional random fields .
we train a 5-gram language model with the xinhua portion of english gigaword corpus and the english side of the training set using the srilm toolkit .
to select the most fluent path , we train a 5-gram language model with the srilm toolkit on the english gigaword corpus .
next , we performed a translation evaluation , measured by bleu .
we measured translation performance with bleu .
we additionally show that such transfer learning can be applicable in other nlp tasks .
we also report results on sick to show that a span-supervised qa dataset can also be useful for non-qa datasets .
the dependency model with valence is one representative work , in which valence is explicitly modelled .
the models we use are based on the generative dependency model with valence .
the structural information we extract is enough to robustly identify non-local dependencies in a local dependency graph .
on the other hand , using these simplified patterns , we may lose some structural information important for recovery of non-local dependencies .
statistical topic models such as latent dirichlet allocation provide a powerful framework for representing and summarizing the contents of large document collections .
topic models such as latent dirichlet allocation are hierarchical probabilistic models of document collections .
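As a hedged illustration of the LDA topic models mentioned in the pair above, here is a minimal scikit-learn sketch; the toy corpus and the number of topics are illustrative choices, not settings from any cited work.

```python
# Minimal sketch: fitting an LDA topic model with scikit-learn.
# The toy corpus and number of topics are illustrative placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["the cat sat on the mat", "dogs and cats are pets", "stock markets fell sharply"]
X = CountVectorizer(stop_words="english").fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)   # per-document topic proportions
print(doc_topics.shape)             # (3, 2)
```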
for this task , we use glove pre-trained word embeddings trained on the common crawl corpus .
we obtain pre-trained tweet word embeddings using glove .
we used moses as the implementation of the baseline smt systems .
we implement the pbsmt system with the moses toolkit .
for our baseline we use the moses software to train a phrase based machine translation model .
we use the moses toolkit to train various statistical machine translation systems .
for all the systems we train , we build n-gram language models with modified kneser-ney smoothing using kenlm .
finally , we experiment with adding a 5-gram modified kneser-ney language model during inference using kenlm .
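The pair above uses a modified Kneser-Ney model built with KenLM. A minimal sketch of querying such a model from the kenlm Python bindings, assuming an ARPA file was built beforehand (e.g. with lmplz -o 5); the file name is hypothetical.

```python
# Minimal sketch: scoring a sentence with a KenLM model.
# Assumes "lm.arpa" was built beforehand, e.g.: lmplz -o 5 < train.txt > lm.arpa
import kenlm

model = kenlm.Model("lm.arpa")                              # hypothetical path
logprob = model.score("this is a test sentence", bos=True, eos=True)
print(f"log10 probability: {logprob:.2f}")
```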
gaussian processes are a bayesian kernelised framework considered the state-of-the-art for regression .
gaussian processes are a bayesian non-parametric machine learning framework considered the state-of-the-art for regression .
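For the Gaussian process regression mentioned above, a minimal scikit-learn sketch follows; the toy data and the RBF kernel are illustrative choices, not the setup of any cited paper.

```python
# Minimal sketch: Gaussian process regression with scikit-learn.
# Toy data and the RBF kernel choice are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X = np.linspace(0, 5, 20).reshape(-1, 1)
y = np.sin(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-3)
gp.fit(X, y)
mean, std = gp.predict(X, return_std=True)   # predictive mean and uncertainty
```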
for building our statistical ape system , we used maximum phrase length of 7 and a 5-gram language model trained using kenlm .
to rerank the candidate texts , we used a 5-gram language model trained on the europarl corpus using kenlm .
in this work , we describe a maximum entropy model of compound word splitting that relies on a few general features that can be used to generate segmentation .
in this paper , we describe a maximum entropy word segmentation model that is trained to assign high probability to possibly several segmentations of an input word .
relation extraction is a traditional information extraction task which aims at detecting and classifying semantic relations between entities in text ( cite-p-10-1-18 ) .
relation extraction is the task of finding semantic relations between two entities from text .
pun is a figure of speech that consists of a deliberate confusion of similar words or phrases for rhetorical effect , whether humorous or serious .
a pun is a form of wordplay in which a word suggests two or more meanings by exploiting polysemy , homonymy , or phonological similarity to another word , for an intended humorous or rhetorical effect ( cite-p-15-3-1 ) .
then we split compounds with the lattice-based model in cdec .
we adopt this approach for the hypergraph built by the cdec decoder .
it is well known that support vector machine methods are very suitable for this task .
a discriminative classifier is trained for this purpose based on support vector machines with an rbf kernel .
we use minimum error rate training with n-best list size 100 to optimize the feature weights for maximum development bleu .
we set all feature weights using minimum error rate training , and we optimize their number on the development dataset .
for consistency , we initialize the embedding weights with pre-trained word embeddings .
we initialize the embedding weights by the pre-trained word embeddings with 200 dimensional vectors .
word sense disambiguation ( wsd ) is a task to identify the intended sense of a word based on its context .
word sense disambiguation ( wsd ) is a key enabling-technology that automatically chooses the intended sense of a word in context .
poon and domingos proposed a model for unsupervised semantic parsing that transforms dependency trees into semantic representations using markov logic .
richardson and domingos propose a method for reasoning about databases and logical constraints using markov random fields .
this is done by training a multiclass support vector machine classifier implemented in the svmmulticlass package by joachims .
disambiguation is performed as point-wise classification using the support vector machine implementation of the svm light toolkit .
finally , mead is a widely used multi-document summarization and evaluation platform .
finally , mead is a widely used mds and evaluation platform .
the morphological disambiguation component of our parser is based on more and tsarfaty , modified to accommodate ud pos tags and morphological features .
the morphological disambiguator component of our parser is based on more and tsarfaty , modified only to accommodate ud pos tags and morphological features .
liu et al used conditional random fields for sentence boundary and edit word detection .
zhou and xu use a bidirectional wordlevel lstm combined with a conditional random field for semantic role labeling .
on the other hand , we use their surrounds ( i.e. , claim , reason , debate context ) as another attention vector to get contextual representations , which work as final clues .
on the other hand , we represent their surrounds ( i.e. , reason , claim , debate context ) as another attention vector to get the contextual representations .
and it leads to faster translation speed and better translation quality due to the reduced search space .
it reduces the decoding time and improves the translation quality owing to reduced search space .
baroni and zamparelli present the lexical function model for the composition of adjectives and nouns .
to this end , baroni and zamparelli present a compositional model for adjectives and nouns .
table 3 shows results in terms of meteor and bleu .
table 2 shows the blind test results using bleu-4 , meteor and ter .
coreference resolution is the problem of identifying which noun phrases ( nps , or mentions ) refer to the same real-world entity in a text or dialogue .
coreference resolution is the task of grouping mentions to entities .
we used srilm to build a 4-gram language model with kneser-ney discounting .
we used the srilm toolkit to train a 4-gram language model on the english side of the training corpus .
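The SRILM training step in pairs like the one above is typically a single ngram-count invocation; a hedged sketch wrapping it from Python follows. The corpus and output paths are hypothetical, and the exact flags should be checked against the local SRILM build.

```python
# Minimal sketch: building a 4-gram Kneser-Ney LM with SRILM's ngram-count.
# Paths are hypothetical; requires SRILM's ngram-count on the PATH.
import subprocess

subprocess.run(
    ["ngram-count",
     "-order", "4",
     "-kndiscount", "-interpolate",   # Kneser-Ney discounting with interpolation
     "-text", "train.en",             # tokenized training corpus (hypothetical)
     "-lm", "en.4gram.lm"],           # output ARPA model (hypothetical)
    check=True,
)
```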
klein and manning show that much of the gain in statistical parsing using lexicalized models comes from the use of a small set of function words .
klein and manning , for example , show that the performance of an unlexicalised model can be substantially improved by splitting the existing symbols down into finer categories .
we introduce a metric called sentiment annotation complexity ( sac ) .
our proposed metric is called sentiment annotation complexity ( sac ) .
semantic role labeling is the process of annotating the predicate-argument structure in text with semantic labels .
semantic role labeling is the problem of analyzing clause predicates in open text by identifying arguments and tagging them with semantic labels indicating the role they play with respect to the verb .
for this purpose , we turn to the expectation maximization algorithm .
in this work , we use the expectation-maximization algorithm .
the crf model has been widely used in nlp segmentation tasks , such as shallow parsing , named entity recognition , and word segmentation .
crfs have been used for sequential labeling problems such as text chunking and named entity recognition .
in this short paper , we propose a novel method to model rules as observed generation .
in this paper , we will explore the relationship among translation rules .
in this paper , we presented a mildly supervised method for identifying metaphorical verb usage .
in this paper we propose a method for identifying metaphorical usage in verbs .
koo et al used a clustering algorithm to produce word clusters on a large amount of unannotated data and represented new features based on the clusters for dependency parsing models .
koo et al used word clusters trained on a large amount of unannotated data and designed a set of new features based on the clusters for dependency parsing models .
we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the srilm toolkit .
for the semantic language model , we used the srilm package and trained a tri-gram language model with the default good-turing smoothing .
we use the rouge evaluation metrics , with r-2 measuring the bigram overlap between the system and reference summaries and r-su4 measuring the skip-bigram with the maximum gap length of 4 .
we use the rouge evaluation metrics , with r-1 and r-2 measuring the unigram and bigram overlap between the system and reference summaries , and r-su4 measuring the skip-bigram with the maximum gap length of 4 .
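The ROUGE-1/ROUGE-2 scores in the pair above can be approximated with the rouge-score package, sketched below; ROUGE-SU4 is not included in that package and normally requires the original ROUGE toolkit. The summary strings are placeholders.

```python
# Minimal sketch: ROUGE-1 and ROUGE-2 F-scores with the rouge-score package.
# ROUGE-SU4 (skip-bigram, max gap 4) is not in this package and usually needs
# the original Perl ROUGE toolkit. Texts below are placeholders.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2"], use_stemmer=True)
scores = scorer.score("the reference summary text",
                      "the system summary text")
print(scores["rouge1"].fmeasure, scores["rouge2"].fmeasure)
```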
we trained svm models with rbf kernel using scikit-learn .
we implemented the different aes models using scikit-learn .
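A minimal sketch of the RBF-kernel SVM training mentioned above, using scikit-learn; the feature matrix and labels are toy placeholders.

```python
# Minimal sketch: training an SVM with an RBF kernel in scikit-learn.
# X and y are toy placeholders for real feature vectors and labels.
import numpy as np
from sklearn.svm import SVC

X = np.random.rand(20, 5)        # 20 instances, 5 features
y = np.random.randint(0, 2, 20)  # binary labels

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X, y)
print(clf.predict(X[:3]))
```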
the feature weights of the log-linear models were trained with the help of minimum error rate training and optimized for 4-gram bleu on the development test set .
the smt systems are tuned on the dev development set with minimum error rate training using bleu accuracy measure as the optimization criterion .
to solve this dynamic state tracking problem , we propose a sequential labeling approach using linear-chain conditional random fields .
specifically , we adopt linear-chain conditional random fields as the method for sequence labeling .
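One common way to realize the linear-chain CRF sequence labeling described above is sklearn-crfsuite; a hedged sketch with hypothetical token features and labels follows.

```python
# Minimal sketch: linear-chain CRF sequence labeling with sklearn-crfsuite.
# The feature dicts and labels are illustrative placeholders.
import sklearn_crfsuite

X_train = [[{"word": "john", "is_title": True},
            {"word": "runs", "is_title": False}]]
y_train = [["B-PER", "O"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
crf.fit(X_train, y_train)
print(crf.predict(X_train))
```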
we identify their arguments using a heuristic proposed in .
for english , we identify their arguments using a heuristic proposed in .
for all models , we use the 300-dimensional glove word embeddings .
we use the pre-trained glove vectors to initialize word embeddings .
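Initializing an embedding matrix from pre-trained GloVe vectors, as in the pair above, typically amounts to reading the released text file into a lookup table. A minimal sketch follows; the file name and the 300-d size are assumptions for illustration.

```python
# Minimal sketch: loading pre-trained GloVe vectors and building an embedding
# matrix for a vocabulary. The file name and 300-d size are assumptions.
import numpy as np

def load_glove(path, dim=300):
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

glove = load_glove("glove.840B.300d.txt")   # hypothetical local path
vocab = ["the", "cat", "unknownword123"]
emb = np.zeros((len(vocab), 300), dtype=np.float32)
for i, w in enumerate(vocab):
    if w in glove:
        emb[i] = glove[w]                   # OOV rows stay zero (or could be random)
```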
koppel et al also suggested that syntactic features might be useful features , but only investigated this idea at a shallow level by treating rare pos bigrams as ungrammatical structures .
koppel et al suggested that syntactic features might be potentially useful , but only explored this idea at a rather shallow level by characterising ungrammatical structures with rare pos bi-grams .
for the character sequence level probabilities , we build n-gram character language models using the srilm tool for each of the two languages presented in the training data using the annotated words .
to calculate language model features , we train traditional n-gram language models with ngram lengths of four and five using the srilm toolkit .
lexical simplification is a technique that substitutes a complex word or phrase in a sentence with a simpler synonym .
lexical simplification is the task of modifying the lexical content of complex sentences in order to make them simpler .
this paper uses a novel framework to restore the elided elements in the sentence , which is named abstract meaning representation .
this study focuses on the generation of a semantic representation that was proposed some years ago , the abstract meaning representation .
we tune the systems using minimum error rate training .
we use minimum error rate training to tune the decoder .
this work presents an algorithm that has theoretical justification , gives a theoretical justification for the yarowsky algorithm , and shows that co-training and the yarowsky algorithm are based on different independence assumptions .
we have also given a theoretical analysis of the yarowsky algorithm for the first time , and shown that it can be justified by an independence assumption that is quite distinct from the independence assumption that co-training is based on .
we use srilm for n-gram language model training and hmm decoding .
we build a 9-gram lm using srilm toolkit with modified kneser-ney smoothing .
collobert et al first apply a convolutional neural network to extract features from a window of words .
collobert et al , 2011 ) trains a neural network to judge the validity of a given context .
we measure the translation quality with automatic metrics including bleu and ter .
we evaluated the translation quality using the bleu-4 metric .
gru has been shown to achieve comparable performance with fewer parameters than lstm .
nevertheless , gru has been experimentally proven to be comparable in performance to lstm .
the standard back-propagation algorithm is used for supervised training of the neural network .
the neural network approach casts sense resolution as a supervised learning paradigm .
we provide two novel ways to extend the bimodal models to support three or more modalities .
finally , we describe two ways to extend the model by incorporating three or more modalities .
the model weights were trained using the minimum error rate training algorithm .
parameters were tuned using minimum error rate training .
we train a trigram language model with the srilm toolkit .
we implement an in-domain language model using the sri language modeling toolkit .
theoretically , one can directly apply em to solve the problem .
when formulated like this , one can directly apply em to solve the problem .
for the mix one , we also train word embeddings of dimension 50 using glove .
we initialize the word embedding matrix with pre-trained glove embeddings .
unlike the existing work , we explore an implicit content-introducing method for neural conversation systems , which utilizes the additional cue word in a “ soft ” manner .
in this paper , we present an implicit content-introducing method for generative conversation systems , which incorporates cue words using our proposed hierarchical gated fusion unit ( hgfu ) in a flexible way .
experimental results confirm that fbrnn is competitive compared to the state-of-the-art .
these experiments demonstrate that fbrnn achieves competitive results compared to the current state-of-the-art .
relation extraction ( re ) is the process of generating structured relation knowledge from unstructured natural language texts .
relation extraction is the task of finding relationships between two entities from text .
twitter is a social platform which contains rich textual content .
twitter consists of a massive number of posts on a wide range of subjects , making it very interesting to extract information and sentiments from them .
and on the other hand , it is likely that information about the argumentative structure facilitates the identification of argument components .
third , the structure of argumentation is needed for recommending better arrangements of argument components and meaningful usage of discourse markers .
a bunsetsu is the linguistic unit in japanese that roughly corresponds to a basic phrase in english .
a bunsetsu is a common unit when syntactic structures in japanese are discussed .
we used the moses mt toolkit with default settings and features for both phrase-based and hierarchical systems .
for training the translation model and for decoding we used the moses toolkit .
more concretely , faruqui and dyer use canonical correlation analysis to project the word embeddings in both languages to a shared vector space .
faruqui and dyer use canonical correlation analysis that maps words from two different languages into a common , shared space .
we evaluate the system generated summaries using the automatic evaluation toolkit rouge .
we evaluate our models with the standard rouge metric and obtain rouge scores using the pyrouge package .
ju et al designed a sequential stack of flat ner layers that detects nested entities .
ju et al present a dynamic end-to-end neural network model capable of handling an undetermined number of nesting levels .
text mining results are presented as a browsable variable hierarchy which allows users to inspect all mentions of a particular variable type in the text .
text mining results are presented in an innovative way as a browsable hierarchy ranging from most general to most specific variables , with links to their textual instances .
based on the question-aware passage representation , we employ gated attention-based recurrent networks to match the passage against itself , aggregating evidence relevant to the current passage .
we first match the question and passage with gated attention-based recurrent networks to obtain the question-aware passage representation .
and exhibits stable performance across languages .
further , it exhibits stable performance across languages .
a classifier and a regressor have been trained jointly on top of a recurrent artificial neural network .
iubc makes use of a jointly trained classifier and regressor , and both models work on top of a recurrent neural network .
we used the tokenizer , pos tagger , lemmatizer and svmlight wrapper in the cleartk package .
our framework was built with the cleartk toolkit with its wrapper for svmlight .
collobert and weston trained jointly a single convolutional neural network architecture on different nlp tasks and showed that multitask learning increases the generalization of the shared tasks .
collobert and weston showed that neural networks can perform well on sequence labeling language processing tasks while also learning appropriate features .
we use the state-of-the-art phrase-based machine translation system moses to perform our machine translation experiments .
we perform our translation experiments using an in-house state-of-the-art phrase-based smt system similar to moses .
in our experiments , we use 300-dimension word vectors pre-trained by glove .
we use pre-trained vectors from glove for word-level embeddings .
as textual features , we use the pretrained google news word embeddings , obtained by training the skip-gram model with negative sampling .
we use large 300-dim skip gram vectors with bag-of-words contexts and negative sampling , pre-trained on the 100b google news corpus .
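The 300-d Google News skip-gram vectors mentioned above are usually read with gensim; a minimal sketch follows, where the binary file name corresponds to the standard public release and is treated as an assumption about the local setup.

```python
# Minimal sketch: loading the 300-d Google News skip-gram vectors with gensim.
# The binary file name matches the standard public release (assumed local copy).
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)
print(vectors["language"].shape)               # (300,)
print(vectors.most_similar("language", topn=3))
```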
in this work , we apply a standard phrase-based translation system .
in particular , we adopt the approach of phrase-based statistical machine translation .
they then extend their work by applying the page rank algorithm to ranking the wordnet senses in terms of how strongly a sense possesses a given semantic property .
in extending their work , the pagerank algorithm is applied to rank senses in terms of how strongly they are positive or negative .
we use a tree-lstm in our parser to model the sub-trees during parsing .
to capture the hierarchical relationship among codes , we build a tree lstm along the code tree .
and use this model as a regularization term with a bilingual word alignment model .
we design a generative model for word alignment that uses synonym information as a regularization term .
the weights of the log-linear interpolation were optimized by means of mert , using the news-commentary test set of the 2008 shared task as a development set .
the weights of the different feature functions were optimised by means of minimum error rate training on the 2013 wmt test set .
our ner model is built according to conditional random fields methods , by which we convert the problem of ner into that of sequence labeling .
we cast the problem of event property extraction as a sequence labeling task , using conditional random fields for learning and inference .
defined by these rules , our model searches for the best translation derivation and yields target translation simultaneously .
it searches for the best derivation through the scfg-motivated space defined by these rules and gets the target translation simultaneously .
the sdp is a kind of dependency parsing , and its task is to build a dependency structure for an input sentence and to label the semantic relation between a word and its head .
however , sdp is a special structure in which every two neighboring words are separated by a dependency relation .
a tri-gram language model is estimated using the srilm toolkit .
we also use a 4-gram language model trained using srilm with kneser-ney smoothing .