sentence1 : stringlengths 16-446
sentence2 : stringlengths 14-436
semantic embeddings are glove trained on twitter data 1 , word2vec , mi- .
the word embeddings and attribute embeddings are trained on the twitter dataset using glove .
for a fair comparison to our model , we used word2vec , which pretrains word embeddings at a token level .
we also used word2vec to generate dense word vectors for all word types in our learning corpus .
there is some work investigating features that directly indicate implicit sentiments .
there is some recent work investigating features that directly indicate implicit sentiments .
this paper discusses sampling strategies for building a dependency-analyzed corpus .
this paper unveils the essential characteristics of basic sampling strategies for a dependency-analyzed corpus .
in all cases , we used the implementations from the scikitlearn machine learning library .
we employed the machine learning tool of scikit-learn 3 , for training the classifier .
we furthermore use the distributed learning technique of iterative parameter mixing , where multiple models on several shards of the training data are trained in parallel and parameters are averaged after each epoch .
to speed up training using parallel processing , we use the iterative parameter mixing approach of mcdonald et al , where training data are split into several parts and weight updates are averaged after each pass through the training data .
online training of crfs using stochastic gradient descent was proposed by vishwanathan et al .
online training of crfs using sgd was proposed by vishwanathan et al .
but more importantly , this work highlights limitations of purely ir-based methods .
results point out the limitations of purely term-based methods to this challenging task .
ccg is a strongly lexicalized grammatical formalism , in which the vast majority of the decisions made during interpretation involve choosing the correct definitions of words .
however , ccg is a binary branching grammar , and as such , can not leave np structure underspecified .
we use liblinear with l2 regularization and default parameters to learn a model .
for implementation , we used the liblinear package with all of its default parameters .
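A minimal sketch (an assumption, not the cited systems' code) of the setup described in the pair above: training a linear SVM through scikit-learn's LinearSVC, which wraps LIBLINEAR and uses L2 regularization by default; the feature matrix and labels here are hypothetical toy stand-ins.

from sklearn.svm import LinearSVC

# Toy feature matrix and labels (hypothetical stand-ins for the real features).
X_train = [[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]]
y_train = [1, 0, 1, 0]

clf = LinearSVC()                      # defaults: penalty="l2", C=1.0, LIBLINEAR backend
clf.fit(X_train, y_train)              # learn the linear model from the toy data
predictions = clf.predict([[0.5, 0.9]])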
luong et al ( 2013 ) generate better word representations with a recursive neural network .
luong et al train a recursive neural network for morphological composition , and show its effectiveness on word similarity task .
for example , vickrey et al built classifiers inspired by those used in word sense disambiguation to fill in blanks in a partially-completed translation .
vickrey et al built classifiers inspired by those used in wsd to fill in any blanks in a partially completed translation .
we used minimum error rate training for tuning on the development set .
we use minimum error rate training to tune the decoder .
in this paper , we propose a novel deep recurrent neural network ( rnn ) model for the joint processing of keywords and context information .
in this work , we proposed a novel deep recurrent neural network ( rnn ) model to combine keywords and context information to perform the keyphrase extraction task .
qiu et al build a framework that detects dissimilarities between sentences and makes its paraphrase judgment based on the significance of such dissimilarities .
qiu et al reported a method that detects the similarity of two sentences by heuristically comparing their predicate argument tuples , which are a type of syntactic parsing tree .
sentence compression is a standard nlp task where the goal is to generate a shorter paraphrase of a sentence .
sentence compression is the task of shortening a sentence while preserving its important information and grammaticality .
therefore , in this paper , we assume that there is a relationship between the canonical word order and the proportion of each word order in a large corpus and attempt to evaluate several claims on the canonical word order of japanese double object constructions on the basis of a large corpus .
thus , in this paper , we assume that there is a relationship between the canonical word order and the proportion of each word order in a large corpus and present a corpus-based analysis of canonical word order of japanese double object constructions .
we trained linear-chain conditional random fields as the baseline .
we primarily compared our model with conditional random fields .
in this paper , we propose a method which uses semi-supervised convolutional neural networks ( cnns ) to select in-domain training data .
in this paper , we try to address this challenge , i.e. , domain adaptation with very limited amounts of in-domain data .
instead , we follow callison-burch et al and lopez , and use a source language suffix array to extract only rules that will actually be used in translating a particular test set .
specifically , we follow callison-burch et al and use a source language suffix array to extract only those rules which will actually be used in translating a particular set of test sentences .
and there have not been clear results on whether having more layers helps .
there have not been clear results on whether adding more layers to nlms helps .
in our experiments , methods with higher rouge scores can indeed achieve better coverage of important units such as events , as shown in pyramid scores .
in our experiments , methods with higher rouge scores can indeed achieve better coverage of important units such as events , as shown in pyramid scores in table 2 .
statistical significance is computed using the bootstrap re-sampling approach proposed by koehn .
the statistical significance test is performed using the re-sampling approach .
for this task , we used the svm implementation provided with the python scikit-learn module .
within this subpart of our ensemble model , we used a svm model from the scikit-learn library .
we parse the source sentences using the stanford corenlp parser and linearize the resulting parses .
we obtained the pos tags and parse trees of the sentences in our datasets with the stanford pos tagger and the stanford parser .
the wordseye system lets users create 3d scenes by describing them in language .
the wordseye project generates 3d scenes from literal paragraph-length descriptions .
in their target domains , they have recently been shown to be highly biased and correlate very poorly with human judgements for dialogue response evaluation ( cite-p-13-3-18 ) .
however , it has been shown that bleu and other word-overlap metrics are biased and correlate poorly with human judgements of response quality ( cite-p-13-3-18 ) .
chambers and jurafsky proposed a narrative chain model based on scripts .
chambers and jurafsky present a system which learns narrative chains from newswire texts .
we use a recurrent neural network with lstm cells to avoid the vanishing gradient problem when training long sequences .
we use different recurrent neural network architectures , where we consider using lstm and bi-lstm with different number of stacked layers .
we used glove vectors trained on common crawl 840b 4 with 300 dimensions as fixed word embeddings .
we also used pre-trained word embeddings , including glove and 300d fasttext vectors .
a 4-gram language model is trained with the srilm toolkit .
for lm training and interpolation , the srilm toolkit was used .
the lstm system uses glove embeddings as its pretrained word vectors .
the word embeddings are initialized using the pre-trained glove , and the embedding size is 300 .
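A minimal sketch (an assumption, not the cited systems' code) of the step the pair above describes: loading pre-trained 300-dimensional GloVe vectors from their plain-text format to initialize word embeddings; the file path "glove.840B.300d.txt" is a hypothetical local path.

import numpy as np

def load_glove(path, dim=300):
    """Read GloVe's word-per-line text format into a {word: vector} dict."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").rsplit(" ", dim)  # last `dim` fields are the floats
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

embeddings = load_glove("glove.840B.300d.txt")  # hypothetical path to the downloaded file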
mikolov et al uses a continuous skip-gram model to learn a distributed vector representation that captures both syntactic and semantic word relationships .
following the work by mikolov et al , the continuous bag-of-words architecture with negative sampling is used to get 200 dimensional word vectors .
relation extraction is a fundamental task that enables a wide range of semantic applications from question answering ( cite-p-13-3-12 ) to fact checking ( cite-p-13-3-10 ) .
relation extraction is a traditional information extraction task which aims at detecting and classifying semantic relations between entities in text ( cite-p-10-1-18 ) .
a statistical significance test based on a bootstrap resampling method , as shown in koehn , was performed .
the significance tests were performed using the bootstrap resampling method .
a simile consists of four key components : the topic or tenor ( subject of the comparison ) , the vehicle ( object of the comparison ) , the event ( act or state ) , and a comparator ( usually β€œ as ” , β€œ like ” , or β€œ than ” ) ( cite-p-20-3-8 ) .
a simile is a form of figurative language that compares two essentially unlike things ( cite-p-20-3-11 ) , such as β€œ jane swims like a dolphin ” .
we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit .
we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .
sentiment classification is a useful technique for analyzing subjective information in a large number of texts , and many studies have been conducted ( cite-p-15-3-1 ) .
sentiment classification is a task to predict a sentiment label , such as positive/negative , for a given text and has been applied to many domains such as movie/product reviews , customer surveys , news comments , and social media .
in this paper , we introduce a new strategy for natural language supervision tasks that attempts to optimize supervision efficiency .
in this paper , we study the problem of manually correcting automatic annotations of natural language in as efficient a manner as possible .
the language model is a 3-gram language model trained using the srilm toolkit on the english side of the training data .
the language model is trained with the sri lm toolkit , on all the available french data without the ted data .
on a book review website , each book entry contains a title , the author ( s ) and an introduction of the book .
for example , on a book review website , each book entry contains a title , the author ( s ) and an introduction of the book .
researchers introduced the wassa-2017 shared task of detecting the intensity of emotion felt by the speaker of a tweet .
the wassa-2017 task on emotion intensity aims at detecting the intensity of emotion felt by the author of a tweet .
the translation outputs were evaluated with bleu and meteor .
translation results are evaluated using the word-based bleu score .
interpretants are selected from the lm corpora distributed by the translation task of wmt14 and ldc for english and spanish 1 .
the interpretants are selected from the lm corpora distributed by the translation task of wmt14 and the lm corpora provided by ldc for english and spanish 4 .
we use the word2vec tool to pre-train the word embeddings .
word2vec is an appropriate tool for this problem .
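A minimal sketch (assuming gensim >= 4.0; not the authors' exact configuration) of pre-training word2vec embeddings with the skip-gram model referenced in the pair above; `corpus` is a tiny hypothetical list of tokenized sentences.

from gensim.models import Word2Vec

corpus = [["we", "use", "the", "word2vec", "tool"],
          ["to", "pre-train", "the", "word", "embeddings"]]

model = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=1)  # sg=1 -> skip-gram
vector = model.wv["word2vec"]  # 100-dimensional embedding for one token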
in this paper , we proposed a noisy-channel model for qa that can accommodate the contribution of various resources within a unified framework .
in this paper , we propose a new approach to qa in which the contribution of various resources and components can be easily assessed .
experimental results revealed that our method produces more informative summaries compared to several baselines .
experimental results show that our method produces summaries which are more informative compared to several competitive baselines .
relation classification is the task of assigning sentences with two marked entities to a predefined set of relations .
relation classification is a crucial ingredient in numerous information extraction systems seeking to mine structured facts from text .
in the first step , we build a bridge between the source and target domains .
in the first step , we generate a few high-confidence sentiment and topic seeds in the target domain .
instead , we use the bleu score since it is one of the primary metrics for machine translation evaluation .
since bleu is the main ranking index for all submitted systems , we apply bleu as the evaluation metric for our translation system .
in this work we use the open-source toolkit moses .
we use moses , an open source toolkit for training different systems .
we measure the quality of the automatically created summaries using the rouge measure .
we evaluate the system generated summaries using the automatic evaluation toolkit rouge .
the experiments show that the proposed late fusion gives a better language modelling quality than the early fusion .
we show later in the experiments that the proposed late fusion gives a better language modelling quality than the early fusion .
relation extraction ( re ) is a task of identifying typed relations between known entity mentions in a sentence .
relation extraction ( re ) has been defined as the task of identifying a given set of semantic binary relations in text .
experiments on summarization data sets show that by incorporating the guided sentence compression model , our summarization system can yield a significant performance gain as compared to the state-of-the-art .
our results show that by incorporating a guided sentence compression model , our summarization system can yield significant performance gain as compared to the state-of-the-art reported results .
due to the name variation problem and the name ambiguity problem , the entity linking decisions critically depend on the knowledge of entities .
due to the name variation problem and the name ambiguity problem , the entity linking decisions critically depend on the heterogeneous knowledge of entities .
we present a novel two-stage technique for detecting speech disfluencies based on integer linear programming .
we presented a novel two-stage technique for detecting speech disfluencies based on ilp .
lda is a probabilistic model of text data which provides a generative analog of plsa , and is primarily meant to reveal hidden topics in text documents .
lda is a generative probabilistic model where documents are viewed as mixtures over underlying topics , and each topic is a distribution over words .
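A minimal sketch (an assumption, not the cited models) of the generative topic-model view described in the pair above: fitting LDA with scikit-learn, where each document is treated as a mixture over latent topics and each topic as a distribution over words.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["lda is a generative probabilistic model of text",
        "documents are viewed as mixtures over underlying topics",
        "each topic is a distribution over words"]

counts = CountVectorizer().fit_transform(docs)        # bag-of-words count matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)                # per-document topic proportions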
for the language model , we used srilm with modified kneser-ney smoothing .
we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing .
sentiment analysis is a much-researched area that deals with identification of positive , negative and neutral opinions in text .
sentiment analysis is a field of study that investigates feelings present in texts .
to measure the importance of the generated questions , we use lda to identify the important subtopics 9 from the given body of texts .
furthermore , we employ unsupervised topic models to detect the topics of the queries as well as to enrich the target taxonomy .
luo et al use bell trees to represent the search space of the coreference resolution problem .
luo et al perform the clustering step within a bell tree representation .
this work presents a preliminary effort on the word segmentation problem in urdu .
this paper explains the problem of word segmentation in urdu .
in this paper , we proposed two polynomial time decoding algorithms using joint inference .
in this paper , we propose a new exact decoding algorithm for the joint model using dynamic programming .
for this reason , we used glove vectors to extract the vector representation of words .
we exploited glove vectors instead of one-hot vectors in order to facilitate generalization .
word sense disambiguation ( wsd ) is the problem of assigning a sense to an ambiguous word , using its context .
word sense disambiguation ( wsd ) is the task of assigning sense tags to ambiguous lexical items ( lis ) in a text .
we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus .
we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model .
in this paper , we propose a context-aware topic model for lexical selection , which models both local contexts and global topics .
significantly different from them , we propose a new topic model that exploits both local contextual words and global topics for lexical selection .
it is well-known that chinese is a pro-drop language , meaning pronouns can be dropped from a sentence without causing the sentence to become ungrammatical or incomprehensible when the identity of the pronoun can be inferred from the context .
chinese is a meaning-combined language with very flexible syntax , and semantics are more stable than syntax .
since the similarity calculations in our framework involves vectorial representations for each word , we trained 300 dimensional glove vectors on the chinese gigaword corpus .
in order to better handle rare words , we initialized our word embeddings using 200 dimensional vectors trained with glove on data from wikipedia .
we use accuracy as our metric and optimize using the adam optimizer .
we train our model using adam optimization for better robustness across different datasets .
for the translation from german into english , german compounds were split using the frequency-based method described in .
in order to reduce the source vocabulary size for translation , the german text was preprocessed by splitting german compound words with the frequency-based method described in .
word sense disambiguation ( wsd ) is a particular problem of computational linguistics which consists in determining the correct sense for a given ambiguous word .
word sense disambiguation ( wsd ) is the nlp task that consists in selecting the correct sense of a polysemous word in a given context .
we also train an initial phrase-based smt system with the available seed corpus .
we work with the phrase-based smt framework as the baseline system .
collobert et al initially introduced neural networks into the srl task .
collobert et al ( 2011 ) train a neural network to judge the validity of a given context .
the translation quality is evaluated by case-insensitive bleu and ter metrics using multeval .
translation quality is measured in truecase with bleu on the mt08 test sets .
bengio et al propose a feedforward neural network to train a word-level language model with a limited n-gram history .
in 2003 , bengio et al proposed a neural network architecture to train language models which produced word embeddings in the neural network .
translation results are given in terms of the automatic bleu evaluation metric as well as the ter metric .
translation results are reported on the standard mt metrics bleu , meteor , and per , position independent word error rate .
or , in terms of dependency analysis by reduction , stepwise deletion of dependent elements within a sentence preserves its syntactic correctness .
similarly , in the dependency analysis by reduction , the authors assume that stepwise deletions of dependent elements within a sentence preserve its syntactic correctness .
as a classifier , we choose a first-order conditional random field model .
we use the mallet implementation of conditional random fields .
a main characteristic of question answering in restricted domains is the integration of domain-specific information that is developed for question answering .
a major difference between open-domain question answering and restricted-domain question answering is the existence of domain-dependent information that can be used to improve the accuracy of the system .
thus , ouchi et al and iida et al focused on only intra-sentential zero anaphora .
owing to this complication , ouchi et al and shibata et al focused exclusively on intra-sentential argument analysis .
the model parameters were optimized with adadelta , using a maximum sentence length of 80 and a minibatch size of 80 .
all the neural network models were optimized using adadelta , with mini-batches of 256 samples .
this has led to the study of classes of dependency structures that lie between projective and unrestricted non-projective structures .
this has led to the study of sub-classes of the class of all non-projective dependency structures .
jiang et al put forward a ptc framework based on support vector machines .
jiang et al ( 2007 ) put forward a ptc framework based on support vector machines .
we used the open source moses phrase-based mt system to test the impact of the preprocessing technique on translation results .
we use moses , a statistical machine translation system that allows training of translation models .
word sense induction ( wsi ) is the task of automatically discovering all senses of an ambiguous word in a corpus .
word sense induction ( wsi ) is the task of automatically finding sense clusters for polysemous words .
we used a support vector machine with an implementation of the original tree kernel .
we used svmlight together with the user defined kernel setting in our approach .
the 5-gram kneser-ney smoothed language models were trained by srilm , with kenlm used at runtime .
we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit .
we computed the translation accuracies using two metrics , bleu score , and lexical accuracy on a test set of 30 sentences .
the accuracy was measured using the bleu score and the string edit distance by comparing the generated sentences with the original sentences .
we use stochastic gradient descent with adagrad , l 2 regularization and minibatch training .
we train the parameters of the stages separately using adagrad with the perceptron loss function .
our english-french system is a phrase-based smt system with a combination of two decoders , moses and docent .
for chinese-english , we train a standard phrase-based smt system over the available 21,863 sentences .
thus , we decided to exploit word2vec , a technique described in that learns a vector representation for words .
to do this , we used the word2vec tool , which implements the continuous bag-of-words and skip-gram architectures for computing vector representations of words .
feature weights are tuned using minimum error rate training on the 455 provided references .
the feature weights are tuned with minimum error-rate training to optimise the character error rate of the output .
as antecedents , we implemented a global model for antecedent selection within the framework of markov logic networks .
our model integrates global constraints on top of a rich local feature set in the framework of markov logic networks .
we posit that there is a hidden structure that explains the correctness of an answer given the question and instructional materials and present a unified max-margin framework that learns to find these hidden structures ( given a corpus of question-answer pairs and instructional materials ) , and uses what it learns to answer novel elementary science questions .
we posit that there is a latent subgraph of the text meaning representation graph ( called snippet graph ) and a latent alignment of the question-answer graph onto this snippet graph that entails the answer ( see figure 1 for an example ) .
machine transliteration is the process of transforming a word written in a source language into a word in a target language without the aid of a bilingual dictionary .
machine transliteration is defined as automatic phonetic translation of names across languages .
gildea and jurafsky describe a system that uses completely syntactic features to classify the frame elements in a sentence .
gildea and jurafsky classify semantic role assignments using all the annotations in framenet , for example , covering all types of verbal arguments .
in the seemgo system , the subtask of aspect term extraction is implemented with the crf model that shows good performance .
the subtask of aspect category detection obtains the best result when applying the boosting method on the maximum entropy model , with a precision of 0.869 for restaurants .
table 6 : comparison of our approach to various baselines for low-resource tagging under f 1 .
table 5 : comparison of our approach to various baselines for low-resource tagging under token-level accuracy .