| sentence1 (stringlengths 16–446) | sentence2 (stringlengths 14–436) |
|---|---|
| we use 300d glove vectors trained on 840b tokens as the word embedding input to the lstm . | we also used pre-trained word embeddings , including glove and 300d fasttext vectors . |
| ritter et al first introduced the mt technique into response generation . | ritter et al proposed an smt based method , which treats response generation as a machine translation task . |
| we use the lstm cell as described in , figure 3 , configured in a bi-directional structure , called bdlstm , shown in figure 4 as the core network in our system . | we use the lstm cell as described in , configured in a b-lstm shown in figure 2 , as the core network architecture in the system . |
| figure 1 also shows , in brackets , the augmented annotation used by hale et al . | figure 1 also shows , in brackets , the augmented annotation described above from hale et al . |
| seki et al proposed a probabilistic model for zero pronoun detection and resolution that used hand-crafted case frames . | seki et al proposed a probabilistic model for the sub-tasks of anaphoric identification and antecedent identification with the help of a verb dictionary . |
| the obtained scfs comprise the total 163 types of relatively fine-grained scfs , which are originally based on the scfs in the anlt and comlex dictionaries . | the obtained scfs comprise the total 163 scf types which are originally based on the scfs in the anlt and comlex dictionaries . |
| word segmentation can be formalized as a character classification problem , where each character in the sentence is given a boundary tag representing its position in a word . | chinese word segmentation can be formalized as the problem of sequence labeling , where each character in the sentence is given a boundary tag denoting its position in a word . |
| so , andrzejewski et al incorporated knowledge by must-link and cannot-link primitives represented by a dirichlet forest prior . | so , andrzejewski et al incorporated domain-specific knowledge by must-link and cannot-link primitives represented by a novel dirichlet forest prior . |
| fasttext pre-trained vectors are used for word embedding with embed size is 300 . | fasttext pre-trained vector is used for word embedding with embed size is 300 . |
| semantic parsing is a domain-dependent process by nature , as its output is defined over a set of domain symbols . | semantic parsing is the task of converting natural language utterances into their complete formal meaning representations which are executable for some application . |
| gu et al , cheng and lapata , and nallapati et al also utilized seq2seq based framework with attention modeling for short text or single document summarization . | see et al , gu et al , cheng and lapata introduced pointer networks extended with a copy mechanism for text summarisation . |
| sentiment classification is a well studied problem ( cite-p-13-3-6 , cite-p-13-1-14 , cite-p-13-3-3 ) and in many domains users explicitly provide ratings for each aspect making automated means unnecessary . | sentiment classification is the fundamental task of sentiment analysis ( cite-p-15-3-11 ) , where we are to classify the sentiment of a given text . |
| the language model is a 3-gram language model trained using the srilm toolkit on the english side of the training data . | the target language model is trained by the sri language modeling toolkit on the news monolingual corpus . |
| in this paper , we overview recent advances on taxonomy construction from text corpora . | in this paper , we present a survey on taxonomy learning from text corpora . |
| relation extraction is a key step towards question answering systems by which vital structured data is acquired from underlying free text resources . | relation extraction is a challenging task in natural language processing . |
| we evaluated the system using bleu score on the test set . | we evaluated the translation quality of the system using the bleu metric . |
| the language model was a 5-gram model with kneser-ney smoothing trained on the monolingual news corpus with irstlm . | the n-gram models were built using the irstlm toolkit on the dewac corpus , using the stopword list from nltk . |
| relation extraction is the task of finding relational facts in unstructured text and putting them into a structured ( tabularized ) knowledge base . | relation extraction ( re ) is the process of generating structured relation knowledge from unstructured natural language texts . |
| to build the lsa space , the singular value decomposition was realized using the program svdpackc , and the first 300 singular vectors were retained . | to build the semantic space proper , the singular value decomposition was realized with the program svdpackc , and the 300 first singular vectors were retained . |
| the base pcfg uses simplified categories of the stanford pcfg parser . | the parse trees are generated using the stanford parser . |
| text categorization is the task of assigning a text document to one of several predefined categories . | text categorization is the classification of documents with respect to a set of predefined categories . |
| coreference resolution is the process of linking together multiple referring expressions of a given entity in the world . | since coreference resolution is a pervasive discourse phenomenon causing performance impediments in current ie systems , we considered a corpus of aligned english and romanian texts to identify coreferring expressions . |
| ner is a fundamental component of many information extraction and knowledge discovery applications , including relation extraction , entity linking , question answering and data mining . | ner is a fundamental task in many natural language processing applications , such as question answering , machine translation , text mining , and information retrieval ( cite-p-15-3-11 , cite-p-15-3-6 ) . |
| we employ the arc-standard system , which maintains partially-constructed outputs using a stack , and orders the incoming words in the sentence in a queue . | in this paper , we employ the arc-standard system , which maintains partially-constructed outputs using a stack , and orders the incoming words in the input sentence in a queue . |
| we demonstrate that the improved relation detector enables our simple kbqa system to achieve state-of-the-art results on both single-relation and multi-relation kbqa tasks . | our model outperforms the previous methods on kb relation detection tasks and allows our kbqa system to achieve state-of-the-arts . |
| in this study , we adopt the event extraction task defined in the bionlp 2009 shared task as a model information extraction task . | in the context of the epe challenge we use the bionlp 2009 genia corpus and its associated evaluation program to measure the impact of different parses on event extraction performance . |
| issue is a key point to improve paraphrase generation systems . | improving on this major issue is a key point to improve paraphrase generation systems . |
| thus , our first evaluation metric is based on a popular coreference resolution measure , the b 3 score . | we evaluated using the two widely used performance measures for coreference resolution - muc score and b 3 . |
| the bleu metric has deeply rooted in the machine translation community and is used in virtually every paper on machine translation methods . | the bleu score , introduced in , is a highly-adopted method for automatic evaluation of machine translation systems . |
| bannard and callison-burch first presented the method to learn paraphrase phrases from a bilingual phrase table . | bannard and callison-burch introduced the pivot approach to extracting paraphrase phrases from bilingual parallel corpora . |
| we trained two 5-gram language models on the entire target side of the parallel data , with srilm . | we trained a 3-gram language model on the spanish side using srilm . |
| we use a popular word2vec neural language model to learn the word embeddings on an unsupervised tweet corpus . | we use the well-known word embedding model that is a robust framework to incorporate word representation features . |
| lin and hovy introduced an automatic summarization evaluation metric , called rouge , which was motivated by the mt evaluation metric , bleu . | lin and hovy developed an automatic summary evaluation system using n-gram cooccurrence statistics . |
| as in reichart and rappoport , we see large improvements when self-training on a small seed size without using the reranker . | reichart and rappoport show that the number of unknown words is a good indicator of the usefulness of self-training when applied to small seed data sets . |
| since we can assume that the reference page of a target entity is a true mention . | if the name of a target entity has a disambiguation page in wikipedia , we have two or more candidate reference pages . |
| negated event is the shortest group of words that is actually affected by the negation cue . | the negated event is the property that is negated by the cue . |
| user and product information can be used to effectively mitigate the problem caused by cold-start users and products . | user and product information can help by introducing a frequent user/product with similar attributes to the cold-start user/product . |
| in this paper , we adopt a constrained topic model incorporating prior knowledge to select attribute . | nevertheless , we believe it is possible to do better by using a constrained topic model instead of traditional attribute selection methods . |
| the feature weights λ i are trained in concert with the lm weight via minimum error rate training . | the weight parameter λ is tuned by a minimum error-rate training algorithm . |
| we used the pre-trained word embeddings that are learned using the word2vec toolkit on google news dataset . | we used the google news pretrained word2vec word embeddings for our model . |
| statistical significance in bleu differences was tested by paired bootstrap re-sampling . | statistical significance of difference from the baseline bleu score was measured by using paired bootstrap re-sampling . |
| in this paper , we propose learning continuous word representations as features for twitter sentiment classification . | we present a method that learns word embedding for twitter sentiment classification in this paper . |
| on conll'00 syntactic chunking and conll'03 named entity chunking ( english and german ) , the method exceeds the previous best systems ( including those which rely on hand-crafted resources . | the method produces performance higher than the previous best results on conll'00 syntactic chunking and conll'03 named entity chunking ( english and german ) . |
| all input segments in the output : one violation for each segment in the input that does not appear in the output . | max - maximize all input segments in the output : one violation for each segment in the input that does not appear in the output . |
| semantic role features extracted from parse trees was found superior to an information-theoretic measure of similarity and comparable to the level of human agreement . | the evaluation results showed that the skill similarity computation based on semantic role matching can outperform a standard statistical approach and reach the level of human agreement . |
| relation extraction ( re ) has been defined as the task of identifying a given set of semantic binary relations in text . | relation extraction is a subtask of information extraction that finds various predefined semantic relations , such as location , affiliation , rival , etc. , between pairs of entities in text . |
| much work has been done on arabic computational morphology . | there has been a considerable amount of work on arabic morphological analysis . |
| recently , several researchers proposed the use of the pivot language for phrase-based smt . | recently , several researchers proposed the use of the pivot language for phrase-based statistical machine translation . |
| the text is a joke that relies on the ambiguity of phrasing . | such text comprise of advice , recommendations and tips on a variety of points of interest . |
| semantic role labeling ( srl ) is a task of analyzing predicate-argument structures in texts . | semantic role labeling ( srl ) is the process of assigning semantic roles to strings of words in a sentence according to their relationship to the semantic predicates expressed in the sentence . |
| recent semantic and discourse annotation projects are paving the way for developments in semantic and discourse parsing as well . | fortunately , recent annotation projects have taken significant steps towards developing semantic and discourse annotated corpora . |
| ccg is a lexicalized grammar formalism -- a lexicon assigns each word to one or more grammatical categories . | however , ccg is a binary branching grammar , and as such , can not leave np structure underspecified . |
| in this paper , we propose senti-lssvm , a latent structural svm based model for sentiment-oriented relation . | we proposed senti-lssvm model for extracting instances of both sentiment polarities and comparative relations . |
| the srilm language modelling toolkit was used with interpolated kneser-ney discounting . | a 5-gram language model with kneser-ney smoothing was trained with srilm on monolingual english data . |
| relation extraction ( re ) is the task of identifying instances of relations , such as nationality ( person , country ) or place of birth ( person , location ) , in passages of natural text . | relation extraction is the task of detecting and classifying relationships between two entities from text . |
| the λ f are optimized by minimum-error training . | the parameter weight vector λ is trained by mert . |
| methods for fine-grained sentiment analysis are developed by hu and liu , ding et al and popescu and etzioni . | fine-grained sentiment analysis methods have been developed by hatzivassiloglou and mckeown , hu and liu and popescu and etzioni , among others . |
| for example , turian et al have improved the performance of chunking and named entity recognition by using word embedding also as one of the features in their crf model . | similarly , turian et al collectively used brown clusters , cw and hlbl embeddings , to improve the performance of named entity recognition and chucking tasks . |
| tokenization of the english data was done using the berkeley tokenizer . | the berkeley parser was used to obtain syntactic annotations . |
| in this section , we evaluate the log-linear model and compare it with the mle based model presented by bannard and callison-burch 6 . | in this paper , we generate paraphrases adopting the pivot-based method proposed by bannard and callison-burch in the first round . |
| long short term memory units are proposed in hochreiter and schmidhuber to overcome this problem . | lstm units are firstly proposed by hochreiter and schmidhuber to overcome gradient vanishing problem . |
| topic words actually harm the performance , due to the increase of noise . | in this case , some topic words can help reduce the perplexity . |
| the benchmark model for topic modelling is latent dirichlet allocation , a latent variable model of documents . | latent dirichlet allocation is a widely adopted generative model for topic modeling . |
| our system has much higher coverage than a hand-engineered fst analyzer , and is more accurate than a state-of-the-art . | our system is highly accurate , and has a much higher coverage than a carefully-crafted fst analyzer . |
| the kit system uses an in-house phrase-based decoder to perform translation . | the in-house phrase-based translation system is used for generating translations . |
| xiong et al presented a syntaxdriven bracketing model to predict whether two phrases are translated together or not , using syntactic features learned from training corpus . | xiong et al present a method that automatically learns syntactic constraints from training data for the itg based translation . |
| our 5-gram language model is trained by the sri language modeling toolkit . | in our implementation , we train a tri-gram language model on each phone set using the srilm toolkit . |
| sagae and tsujii co-train two dependency parsers by adding automatically parsed sentences for which the parsers agree to the training data . | sagae and tsujii emulate a single iteration of cotraining by using maxent and svm , selecting the sentences where both models agreed and adding these sentences to the training set . |
| new chunk definition takes into account both syntactic structure and predicate-argument . | the new chunk definition contains both syntactic structure and predicate-argument structure information . |
| monroe et al used a single dialect-independent model for segmenting all arabic dialects including msa . | monroe et al used a single dialect-independent model for segmenting egyptian dialect in addition to msa . |
| the cross-lingual textual entailment , recently proposed by and , is an extension of the textual entailment task . | cross-lingual textual entailment has been recently proposed by mehdad et al , 2011 ) as an extension of textual entailment . |
| nuhn and colleagues showed that beam search can significantly improve the speed of em-based decipherment , while providing comparable or even slightly better accuracy . | nuhn et al produce better results in faster time compared to ilp and em-based decipherment methods by employing a higher order language model and an iterative beam search algorithm . |
| to verify sentence generation quantitatively , we evaluated the sentences automatically using bleu score . | we computed the translation accuracies using two metrics , bleu score , and lexical accuracy on a test set of 30 sentences . |
| our system's best result ranked 35 among 73 system runs with 0.7189 average pearson correlation over five test sets . | our system's best result ranked 35 among 73 submitted runs with 0.7189 average pearson correlations over five test sets . |
| sentiment analysis ( sa ) is the task of determining the sentiment of a given piece of text . | sentiment analysis ( sa ) is the research field that is concerned with identifying opinions in text and classifying them as positive , negative or neutral . |
| table 4 shows the evaluation of the results of chinese to japanese translation in bleu scores . | table 2 shows the translation quality measured in terms of bleu metric with the original and universal tagset . |
| we use the 300-dimensional skip-gram word embeddings built on the google-news corpus . | we use distributed word vectors trained on the wikipedia corpus using the word2vec algorithm . |
| optimization method adapts the translation model to the online test sentence by redistributing the weight of each predefined submodels . | the online method adapts the translation model by redistributing the weight of each predefined submodels . |
| semantic role labeling ( srl ) is the task of labeling the predicate-argument structures of sentences with semantic frames and their roles ( cite-p-18-1-2 , cite-p-18-1-19 ) . | semantic role labeling ( srl ) is the task of identifying semantic arguments of predicates in text . |
| relation extraction is the key component for building relation knowledge graphs , and it is of crucial significance to natural language processing applications such as structured search , sentiment analysis , question answering , and summarization . | relation extraction is a fundamental task that enables a wide range of semantic applications from question answering ( cite-p-13-3-12 ) to fact checking ( cite-p-13-3-10 ) . |
| semantic role labeling ( srl ) is the process of extracting simple event structures , i.e. , "who" did "what" to "whom" , "when" and "where" . | semantic role labeling ( srl ) has been defined as a sentence-level natural-language processing task in which semantic roles are assigned to the syntactic arguments of a predicate ( cite-p-14-1-7 ) . |
| sentiment analysis ( sa ) is a field of knowledge which deals with the analysis of people's opinions , sentiments , evaluations , appraisals , attitudes and emotions towards particular entities ( cite-p-17-1-0 ) . | sentiment analysis ( sa ) is the task of analysing opinions , sentiments or emotions expressed towards entities such as products , services , organisations , issues , and the various attributes of these entities ( cite-p-9-3-3 ) . |
| in recent years , ie has emerged as a critical building block in a wide range of enterprise applications , including financial risk . | information extraction ( ie ) is becoming a critical building block in many enterprise applications . |
| dependency parsing is a basic technology for processing japanese and has been the subject of much research . | dependency parsing is the task to assign dependency structures to a given sentence math-w-4-1-0-14 . |
| in both pre-training and fine-tuning , we adopt adagrad and l2 regularizer for optimization . | for all models , we use l 2 regularization and run 100 epochs of adagrad with early stopping . |
| for the ranking task , however , models trained with a large , publicly available set of mt data perform as well as those trained with non-native data . | however , for the ranking task , models trained on publicly available mt data generalize well , performing as well as those trained with a non-native corpus of size 10000 . |
| it is reported in that more than four million distinct out-of-vocabulary tokens occur in the edinburgh twitter corpus . | it is reported in , that more than 4 million distinct out-of-vocabulary tokens are found in the edinburgh twitter corpus . |
| we present an algorithm for aligning texts with their translations that is based only on internal evidence . | in this paper , we provide a method for aligning texts and translations based only on internal evidence . |
| we estimated lexical surprisal using trigram models trained on 1 million hindi sentences from emille corpus using the srilm toolkit . | we use sri language modeling toolkit to train a 5-gram language model on the english sentences of fbis corpus . |
| we use the simplified factual statement extractor model 6 of heilman and smith . | also , we compare our system with the rulebased system proposed by heilman and smith . |
| we proposed data-driven changes to neural mt training to better match the incremental decoding framework . | we additionally explore whether modifying the neural mt training to match the decoder can improve performance . |
| we use the popular moses toolkit to build the smt system . | we used a standard pbmt system built using moses toolkit . |
| the system was tuned with batch lattice mira . | smt systems were built with moses and tuned with batch mira . |
| and we evaluate this method using data from the semeval lexical substitution task . | 1 we evaluate the method using the data from the english lexical substitution task for semeval-2007 . |
| arthur et al propose to improve the translation of rare content words through the use of translation probabilities from discrete lexicons . | arthur et al and feng et al try to incorporate a translation lexicon into nmt in order to obtain the correct translation of low-frequency words . |
| abstract meaning representation is a semantic formalism that expresses the logical meanings of english sentences in the form of a directed , acyclic graph . | abstract meaning representation is a semantic formalism in which the meaning of a sentence is encoded as a rooted , directed , acyclic graph . |
| however , the experiments in anderson et al failed to detect differential interactions of semantic models with brain areas . | anderson et al construct semantic models using visual data and show a high correlation to brain activation patterns from fmri . |
| we have made is that embedded sentences favour the occurrence of intrasentential antecedents . | we base our methodology on the fact that such antecedents are likely to occur in embedded sentences . |
| in this paper , we train our linear classifiers using liblinear 4 . | for these experiments we use a maximum entropy classifier using the liblinear toolkit 1 . |
| word-level alignment models does not have a strong impact on performance . | more sophisticated approaches that make use of syntax do not lead to better performance . |