sentence1 : string , length 16 to 446 characters
sentence2 : string , length 14 to 436 characters
each sentence1 line below is immediately followed by its paired sentence2 line ; a minimal loading sketch follows the listing .
the 5-gram kneser-ney smoothed language models were trained by srilm , with kenlm used at runtime .
the language model was a kneser-ney interpolated trigram model generated using the srilm toolkit .
relation extraction ( re ) is the task of identifying instances of relations , such as nationality ( person , country ) or place of birth ( person , location ) , in passages of natural text .
relation extraction is the task of recognizing and extracting relations between entities or concepts in texts .
a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke .
a trigram model was built on 20 million words of general newswire text , using the srilm toolkit .
we then made videos of every schedule for every sentence , using the festival speech synthesiser and the ruth talking head .
videos were then created of all of the schedules for all of the sentences , using the festival speech synthesiser and the ruth animated talking head .
in this paper , we proposed a novel algorithm , show-and-fool , for crafting adversarial examples .
in this paper , we tackle the aforementioned challenges by proposing a novel algorithm called show-and-fool .
the long short-term memory , which can learn long-term dependencies , was first proposed by hochreiter and schmidhuber .
long short-term memory was introduced by hochreiter and schmidhuber to overcome the issue of vanishing gradients in the vanilla recurrent neural networks .
nlp tasks are publicly available : datasets for word segmentation and pos tagging were released for the first vlsp evaluation campaign .
vncorenlp provides core nlp steps including word segmentation , pos tagging , ner and dependency parsing .
over multi-domain language identification and multi-domain sentiment analysis , we show our models to substantially outperform a baseline deep learning method , and set a new benchmark for state-of-the-art cross-domain .
evaluating on multi-domain language identification and multi-domain sentiment analysis , we show substantial improvements over standard domain adaptation techniques , and domain-adversarial training .
the character embeddings are computed using a method similar to word2vec .
the embedding layer was initialized using word2vec vectors .
for estimating monolingual word vector models , we use the cbow algorithm as implemented in the word2vec package using a 5-token window .
as monolingual baselines , we use the skip-gram and cbow methods of mikolov et al as implemented in the gensim package .
we use the standard generative dependency model with valence .
our experiments use the dependency model with valence .
in each plot , a single arrow signifies one word , pointing from the position of the original word .
in each plot , a single arrow signifies one word , pointing from the position of the original word embedding to the updated representation .
trigram language models are implemented using the srilm toolkit .
srilm toolkit is used to build these language models .
this is partly in line with sahlgren and lenci who observed that it is more challenging for neural-based models to train good vectors for low-frequency words .
this is in line with sahlgren and lenci who showed that dsms perform best for items in the medium to high-frequency ranges .
the nodes are concepts ( or synsets as they are called in the wordnet ) .
wordnet is a general english thesaurus which additionally covers biological terms .
we use case-sensitive bleu-4 to measure the quality of translation result .
we evaluate the translation quality using the case-sensitive bleu-4 metric .
for chinese , we exploit wikipedia documents to train the same dimensional word2vec embeddings .
with english gigaword corpus , we use the skip-gram model as implemented in word2vec 3 to induce embeddings .
the universal dependencies is a worldwide project to provide multilingual syntactic resources of dependency structures with a uniform tag set for all languages .
the universal dependencies project is a recent effort aimed at facilitating crosslingual parsing development through the standardization of dependency annotation schemes across languages .
in addition , instead of using the popular crf model , we use another sequence labeling model in this paper -- the hidden markov support vector machines model .
instead of using crf model , we use the hidden markov support vector machines , which is also a sequence labeling model like crf .
additionally , we compile the model using the adamax optimizer .
we use the adam optimizer and mini-batch gradient to solve this optimization problem .
we used the pharaoh decoder for both the minimum error rate training and test dataset decoding .
we used minimum error rate training to optimize the feature weights .
we first create a sentence quotation graph to represent the conversation structure .
we first build a sentence quotation graph that captures the conversation structure among emails .
cite-p-17-1-3 used an lstm architecture to capture potential long-distance dependencies , which alleviates the limitation of the size of context window .
cite-p-17-5-5 used a linear-time incremental model which can also benefits from various kinds of features including word-based features .
in this paper , we present a reinforcement learning framework for inducing mappings from text to actions .
in this paper , we presented a reinforcement learning approach for inducing a mapping between instructions and actions .
b ing produces a much better translation : chef d'état-major de la défense du mali .
then b ing produces a much better translation : chef d'état-major de la défense du mali veut plus d'armes .
in this paper , we propose another phrase-level combination approach -- a paraphrasing model .
in this paper , we propose a paraphrasing model to address the task of system combination for machine translation .
coreference resolution is the process of finding discourse entities ( markables ) referring to the same real-world entity or concept .
coreference resolution is the problem of partitioning a sequence of noun phrases ( or mentions ) , as they occur in a natural language text , into a set of referential entities .
the weights of the different feature functions were optimised by means of minimum error rate training on the 2008 test set .
the feature weights for each system were tuned on development sets using the moses implementation of minimum error rate training .
we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus .
we train a 4-gram language model on the xinhua portion of the english gigaword corpus using the srilm toolkits with modified kneser-ney smoothing .
relation extraction ( re ) is the task of identifying instances of relations , such as nationality ( person , country ) or place of birth ( person , location ) , in passages of natural text .
relation extraction is the task of detecting and characterizing semantic relations between entities from free text .
the character embeddings are computed using a method similar to word2vec .
the words of input sentences are first converted to vector representations learned from word2vec tool .
we used moses , a phrase-based smt toolkit , for training the translation model .
we adapted the moses phrase-based decoder to translate word lattices .
in a naive implementation , a new phrase type is built by copying older ones and then combining the copies according to the constraints stated in a grammar .
in a naive implementation , a new phrase type is built by copying older ones and then combining the copies according to the constraints stated in a grammar rule .
to deal with this problem , we propose graph merging , a new perspective , for building flexible dependency graphs .
to deal with this problem , we propose graph merging , a new perspective , for building flexible representations .
our baseline is a phrase-based mt system trained using the moses toolkit .
we use the moses toolkit to train our phrase-based smt models .
we use the pre-trained glove 50-dimensional word embeddings to represent words found in the glove dataset .
we use 300 dimensional glove embeddings trained on the common crawl 840b tokens dataset , which remain fixed during training .
we used the pre-trained word embeddings that are learned using the word2vec toolkit on google news dataset .
we use publicly-available 1 300-dimensional embeddings trained on part of the google news dataset using skip-gram with negative sampling .
socher et al proposed the recursive neural network that has been proven to be efficient in terms of constructing sentence representations .
socher et al introduce a family of recursive neural networks for sentence-level semantic composition .
xiong et al develop a bottom-up decoder for btg that uses only phrase pairs .
feng et al use shift-reduce parsing to impose itg constraints on phrase permutation .
we use our implementation of hierarchical phrase-based smt , with standard features , for the smt experiments .
in our experiments the mt system used is a hierarchical phrase-based system .
word sense disambiguation ( wsd ) is the task of identifying the correct meaning of a word in context .
word sense disambiguation ( wsd ) is a key enabling technology .
the language model is trained using kenlm with 5-grams and modified kneser-ney smoothing .
the language models are 4-grams with modified kneser-ney smoothing which have been trained with the srilm toolkit .
the dataset we used in the present study is the online edition 2 of the world atlas of language structures .
we use the world atlas of language structure dataset , on which we conduct experiments .
modi et al extended the model of , which is an unsupervised model for inducing semantic roles , to jointly induce semantic roles and frames across verbs using the chinese restaurant process .
modi et al extended the model of to jointly induce semantic roles and frames using the chinese restaurant process , which is also used in our approach .
information extraction ( ie ) is the task of identifying information in texts and converting it into a predefined format .
information extraction ( ie ) is a technology that can be applied to identifying both sources and targets of new hyperlinks .
coreference resolution is a fundamental component of natural language processing ( nlp ) and has been widely applied in other nlp tasks ( cite-p-15-3-9 ) .
since coreference resolution is a pervasive discourse phenomenon causing performance impediments in current ie systems , we considered a corpus of aligned english and romanian texts to identify coreferring expressions .
for the lda based method , adding other content words , combined with an increased number of topics , can further improve the performance , achieving up to 14 . 23 % perplexity reduction .
in addition , we find that for the lda based adaptation scheme , adding more content words and increasing the number of topics can further improve the performance significantly .
sentiment analysis in twitter , which is a task of semeval , was first proposed in 2013 and was not replaced until 2018 .
sentiment analysis in twitter is a particularly challenging task , because of the informal and “ creative ” writing style , with improper use of grammar , figurative language , misspellings and slang .
output string is guaranteed to conform to a given target grammar .
output always conforms to the given target grammar .
in this paper , we exploit non-local features as an estimate of long-distance dependencies .
in this paper , our approach describes how to exploit non-local information for a slu problem .
the smt systems were trained using the moses toolkit and the experiment management system .
the experiment management system from the open source moses smt toolkit was used to conduct the experiments .
these connections may be derived from work in language assessment and grade expectations such as found in , and .
these connections may be derived from work in language assessment and grade expectations such as found in .
the probabilistic language model is constructed on google web 1t 5-gram corpus by using the srilm toolkit .
a standard sri 5-gram language model is estimated from monolingual data .
word alignment , which can be defined as an object for indicating the corresponding words in a parallel text , was first introduced as an intermediate result of statistical translation models ( cite-p-13-1-2 ) .
word alignment is a critical first step for building statistical machine translation systems .
transition-based methods have given competitive accuracies and efficiencies for dependency parsing .
transition-based and graph-based models have attracted the most attention of dependency parsing in recent years .
zhuang et al present an algorithm for the extraction of opinion target -opinion word pairs .
zhuang et al present a supervised algorithm for the extraction of opinion expression -opinion target pairs .
therefore , we adopt the greedy feature selection algorithm as described in jiang et al to pick up positive features incrementally according to their contributions .
we adopt the greedy feature selection algorithm as described in jiang and ng to pick up positive features empirically and incrementally according to their contributions on the development data .
we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing .
further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus .
gold has been created to facilitate a more standardised use of basic grammatical features .
the idea behind gold is to facilitate a more standardised use of basic grammatical features .
therefore , we use the long short-term memory network to overcome this problem .
an effective solution for these problems is the long short-term memory architecture .
as described in this paper , we demonstrate the contribution of modality analysis for disease .
in this study , we use generic modality features to improve factuality analysis .
the first dataset is mov , which is a classical movie review dataset .
the first dataset is mov , which is a widely-used movie review dataset .
crfs have been shown to perform well on a number of nlp problems such as shallow parsing , table extraction , and named entity recognition .
crfs have been shown to perform well in a number of natural language processing applications , such as pos tagging , shallow parsing or np chunking , and named entity recognition .
ji and grishman employ an approach to propagate consistent event arguments across sentences and documents .
later , ji and grishman employed a rule-based approach to propagate consistent triggers and arguments across topic-related documents .
for training our system classifier , we have used scikit-learn .
we implement classification models using keras and scikit-learn .
the language models were built using srilm toolkits .
the srilm toolkit was used to build the 5-gram language model .
these embeddings are determined beforehand on a very large corpus typically using either the skip gram or the continuous bag of words variant of the word2vec model .
these word embeddings are learned in advance using a continuous skip-gram model , or other continuous word representation learning methods .
quirk et al extended path to treelets and put forward dependency treelet translation .
quirk et al and xiong et al used treelets to model the source dependency tree using synchronous grammars .
the spelling correction models from brill and moore and toutanova and moore use the noisy channel model approach to determine the types and weights of edit operations .
the spelling error model proposed by brill and moore allows generic string edit operations up to a certain length .
we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .
an n-gram language model with modified kneser-ney smoothing is trained with the srilm toolkit on the epps , ted , news commentary , and the gigaword corpora .
this is the first work that applies a state-of-the-art probabilistic parsing model to al for dependency parsing .
this paper for the first time applies a state-of-the-art probabilistic model to al with pa for dependency parsing .
the methods employed for gathering the data and for the preparation and compilation of the dataset used in the offenseval shared task are described in zampieri et al .
the data collection methods used to compile the dataset used in offenseval are described in zampieri et al .
for evaluation we use mteval-v13a from the moses toolkit and tercom 3 to score our systems on the bleu and ter measures , respectively .
we use mteval from the moses toolkit and tercom to evaluate our systems on the bleu and ter measures .
bisk and hockenmaier used combinatory categorial grammar to learn syntactic dependencies from word strings .
bisk and hockenmaier use an em approach to induce a combinatory categorial grammar , based on very general linguistic assumptions .
the sg model is a popular choice to learn word embeddings by leveraging the relations between a word and its neighboring words .
the skip-gram model is a very popular technique for learning embeddings that scales to huge corpora and can capture important semantic and syntactic properties of words .
for word embeddings , we used popular pre-trained word vectors from glove .
we use the pre-trained glove vectors to initialize word embeddings .
wasserstein distance takes into account the cross-term relationship between different words in a principled fashion .
by using the wasserstein distance between distributions , the word-to-word semantic relationship is taken into account in a principled way .
eurowordnet is a multilingual lexical knowledge base comprised of hierarchical representations of lexical items for several european languages .
eurowordnet is a multilingual semantic lexicon with wordnets for several european languages , which are structured as the princeton wordnet .
klein and manning demonstrated that linguistically informed splitting of nonterminal symbols in treebank-derived grammars can result in accurate grammars .
klein and manning identified nonterminals that could valuably be split into fine-grained ones using hand-written linguistic rules .
sentiment classification is a well-studied and active research area ( cite-p-20-1-11 ) .
sentiment classification is a special task of text categorization that aims to classify documents according to their opinion of , or sentiment toward a given subject ( e.g. , if an opinion is supported or not ) ( cite-p-11-1-2 ) .
we will evaluate the performance of sampling distributions based on perplexities calculated using small , lightweight rnn language models .
we experimentally evaluate the heldout perplexity of models trained with our various importance sampling distributions .
for english , number and gender for common nouns are computed via a comparison of head lemma to head and using the number and gender data of bergsma and lin .
we compute number and gender for common nouns using the number and gender data provided by bergsma and lin .
a tree domain is a subset of strings over a linearly ordered set which is closed under prefix and left sister .
a tree domain is a set of node addresses drawn from n * ( that is , a set of strings of natural numbers ) in which ε is the address of the root and the children of a node at address w occur at addresses w0 , w1 , ... , in left-to-right order .
for our purpose we use word2vec embeddings trained on a google news dataset and find the pairwise cosine distances for all words .
we use the word2vec vectors with 300 dimensions , pre-trained on 100 billion words of google news .
ritchie et al used a combination of terms from citation contexts and existing index terms of a paper to improve indexing of cited papers .
for example , ritchie et al used a combination of terms from citation contexts and existing index terms of a paper to improve indexing of cited papers .
tag is a tree-rewriting system : the derivation process consists in applying operations to trees in order to obtain a ( derived ) tree whose sequence of leaves is a sentence .
a tag is a rewriting system that derives trees starting from a finite set of elementary trees .
we used implementations from scikit-learn , and the parameters of both classifiers were tuned on the development set using grid search .
we trained the three classifiers using the svm implementation in scikit-learn , and tuned hyper-parameters c and γ using 10-fold cross-validation with the train split .
dependency parsing is a simpler task than constituent parsing , since dependency trees do not have extra non-terminal nodes and there is no need for a grammar to generate them .
dependency parsing is the task of predicting the most probable dependency structure for a given sentence .
table 4 shows the bleu scores of the output descriptions .
the numbers in the table are bleu scores of different neural models .
we created 5-gram language models for every domain using srilm with improved kneser-ney smoothing on the target side of the training parallel corpora .
we train a 4-gram language model on the xinhua portion of the english gigaword corpus using the srilm toolkits with modified kneser-ney smoothing .
for parsing arabic texts into syntactic trees , we used the berkeley parser .
for our investigations , we used the berkeley parser as a source of grammar rule clusters .
in doing so we can achieve better word retrieval performance than language models with only n-gram context .
we aim to improve speech retrieval performance by augmenting traditional n-gram language models with different types of topic context .
svmhmms and crfs have been successfully applied to a range of sequential tagging tasks such as syllabification , chunk parsing and word segmentation .
crfs have been shown to perform well in a number of natural language processing applications , such as pos tagging , shallow parsing or np chunking , and named entity recognition .
in this paper , we name the problem of choosing the correct word from the homophone set the homophone problem .
in this paper , we used the decision list to solve the homophone problem .
we use the max-loss variant of the margin infused relaxed algorithm with a hamming-loss margin as is common in the dependency parsing literature .
we select the cutting-plane variant of the margin-infused relaxed algorithm with additional extensions described by eidelman .
we use a baseline parser to parse large-scale unannotated data .
first , we use a baseline parser to parse large-scale unannotated data .
system tuning was carried out using both k-best mira and minimum error rate training on the held-out development set .
feature weights were set with minimum error rate training on a development set using bleu as the objective function .
with the rise of social media , more and more user generated sentiment data have been shared on the web .
the rise of social media such as blogs and microblogs has fueled interest in sentiment analysis .
as a classifier , we choose a first-order conditional random field model .
to this end , we use first-and second-order conditional random fields .
mauser et al extended this model to condition it on source word cooccurrences .
mauser et al integrated a logistic regression model predicting target words from all the source words in a pbsmt .
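the rows above come in pairs : each sentence1 value is immediately followed by the sentence2 value it is paired with , matching the two string columns summarised in the header . the short python sketch below shows one way such pairs could be read back for inspection ; the filename sentence_pairs.tsv , the tab-separated format , and the header row are assumptions made for illustration , not something specified by the listing .

# Minimal sketch: reading sentence1/sentence2 pairs from a tab-separated file.
# The filename "sentence_pairs.tsv", the TSV format, and the header row are
# assumptions made for illustration; they are not specified by the listing above.
import csv

with open("sentence_pairs.tsv", newline="", encoding="utf-8") as f:
    reader = csv.DictReader(f, delimiter="\t")
    pairs = [(row["sentence1"], row["sentence2"]) for row in reader]

# Sanity-check the character-length ranges reported in the header
# (sentence1: 16 to 446 characters, sentence2: 14 to 436 characters).
s1_lengths = [len(s1) for s1, _ in pairs]
s2_lengths = [len(s2) for _, s2 in pairs]
print("pairs:", len(pairs))
print("sentence1 length range:", min(s1_lengths), "-", max(s1_lengths))
print("sentence2 length range:", min(s2_lengths), "-", max(s2_lengths))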