we obtained distributed word representations using word2vec with skip-gram .
we also used word2vec to generate dense word vectors for all word types in our learning corpus .
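as an illustration of the setup described above , a minimal sketch of training skip-gram word vectors ; the gensim library , the toy corpus and the hyperparameters are assumptions for illustration , not details from the cited systems :

```python
# minimal sketch: skip-gram word vectors with gensim (illustrative assumptions)
from gensim.models import Word2Vec

# one tokenized sentence per list; replace with the real learning corpus
sentences = [["we", "train", "dense", "word", "vectors"],
             ["skip-gram", "predicts", "context", "words"]]

model = Word2Vec(
    sentences,
    vector_size=300,  # dimensionality of the embeddings
    sg=1,             # sg=1 selects the skip-gram architecture
    window=5,
    min_count=1,
    negative=5,       # negative sampling
)
vector = model.wv["skip-gram"]  # dense vector for one word type
```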
to pre-order the chinese sentences using the syntax-based reordering method proposed by , we utilize the berkeley parser .
for tree-to-string translation , we parse the english source side of the parallel data with the english berkeley parser .
in this paper we derive the conditions under which a given probabilistic tag can be shown to be consistent .
this paper derives the conditions under which a given probabilistic tag can be shown to be consistent .
we use long short-term memory networks to build another semantics-based sentence representation .
we use a bidirectional long short-term memory rnn to encode a sentence .
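a minimal sketch of a bidirectional lstm sentence encoder ; pytorch , the dimensions and the pooling choice are illustrative assumptions :

```python
# minimal sketch: encoding a sentence with a bidirectional lstm in pytorch
import torch
import torch.nn as nn

embed_dim, hidden_dim, vocab_size = 100, 128, 10000
embedding = nn.Embedding(vocab_size, embed_dim)
bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)

token_ids = torch.tensor([[4, 17, 902, 3]])          # one sentence, batch of 1
outputs, (h_n, c_n) = bilstm(embedding(token_ids))   # outputs: (1, 4, 2*hidden_dim)
sentence_vec = torch.cat([h_n[0], h_n[1]], dim=-1)   # final forward + backward states
```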
event extraction is the task of detecting certain specified types of events that are mentioned in the source language data .
event extraction is the task of extracting and labeling all instances in a text document that correspond to a predefined event type .
multiword expressions are lexical items that can be decomposed into single words and display idiosyncratic features .
multiword expressions are defined as idiosyncratic interpretations that cross word boundaries or spaces .
inspired by the success of neural machine translation , recent studies use the encoder-decoder model with the attention mechanism .
attention-based neural machine translation systems are typically implemented with a recurrent neural network based encoder-decoder framework .
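for reference , the usual attention computation in this encoder-decoder framework ( bahdanau-style notation assumed ) :

```latex
e_{ij} = a(s_{i-1}, h_j), \qquad
\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k}\exp(e_{ik})}, \qquad
c_i = \sum_{j} \alpha_{ij} h_j
```

here $h_j$ are encoder hidden states , $s_{i-1}$ is the previous decoder state , and the context vector $c_i$ is fed to the decoder at step $i$ .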
where a typical penn treebank grammar may have fewer than 100 nonterminals , we found that a ccg grammar derived from ccgbank contained nearly 1600 .
where a typical penn treebank grammar may have fewer than 100 nonterminals , we found that a ccg grammar derived from ccgbank contained over 1500 .
automatic metrics , such as bleu , are widely used in machine translation as a substitute for human evaluation .
current metrics to automatically evaluate machine translations , such as the popular bleu , are heavily based on string matching .
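a minimal sketch of computing corpus-level bleu ; the sacrebleu package and the toy strings are illustrative assumptions :

```python
# minimal sketch: corpus-level bleu with the sacrebleu package
import sacrebleu

hypotheses = ["the cat sat on the mat"]
references = [["the cat is on the mat"]]  # one list per reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)  # bleu rewards n-gram string matches against the references
```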
in this work , we introduce attr2vec , a novel framework for jointly learning embeddings for words and contextual attributes .
in this paper , we proposed attr2vec , a novel embedding model that can jointly learn a distributed representation for words and contextual attributes .
word segmentation is the first obligatory task in almost all nlp applications , whose initial phase requires tokenizing the input into words .
word segmentation is a prerequisite for many natural language processing ( nlp ) applications on those languages that have no explicit space between words , such as arabic , chinese and japanese .
instead of only the morphological compositions ( surface forms ) of words , we employ the latent meanings of the compositions ( underlying forms ) to train the word embeddings .
in this paper , we explore employing the latent meanings of morphological compositions of words to train and enhance word embeddings .
in order to tune all systems , we use the k-best batch mira .
for tuning the feature weights , we applied batch-mira with -safe-hope .
the target language model is built on the target side of the parallel data with kneser-ney smoothing using the irstlm tool .
we also use a 4-gram language model trained using srilm with kneser-ney smoothing .
relation extraction is a challenging task in natural language processing .
relation extraction is the task of finding relationships between two entities from text .
we present an approach for tackling three important aspects of text normalization : sentence boundary disambiguation , disambiguation of capitalized words , and identification of abbreviations .
in this article we present a method that tackles sentence boundaries , capitalized words , and abbreviations in a uniform way through a document-centered approach .
we have presented a state-of-the-art subcategorisation acquisition system for free-word-order languages , and used it to create a large set of subcategorisation frames .
we introduce a state-of-the-art system for the acquisition of subcategorisation frames ( scfs ) from large corpora , which can deal with languages with very free word order .
we used trigram language models with interpolated kneser-ney discounting trained using the sri language modeling toolkit .
we train a 4-gram language model on the xinhua portion of the english gigaword corpus using the srilm toolkits with modified kneser-ney smoothing .
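for reference , the interpolated kneser-ney estimate these toolkits implement , shown for the bigram case ( notation assumed ; the modified variant uses separate discounts for counts of 1 , 2 and 3 or more ) :

```latex
P_{\mathrm{KN}}(w_i \mid w_{i-1}) =
\frac{\max\big(c(w_{i-1} w_i) - d,\ 0\big)}{c(w_{i-1})}
+ \lambda(w_{i-1})\, P_{\mathrm{cont}}(w_i),
\qquad
P_{\mathrm{cont}}(w) \propto \big|\{\, w' : c(w' w) > 0 \,\}\big|
```

the discount $d$ is subtracted from observed counts and the freed mass is redistributed through $\lambda(w_{i-1})$ to the continuation probability $P_{\mathrm{cont}}$ , which counts distinct left contexts rather than raw frequency .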
the phrase-extraction heuristics of were used to build the phrase-based smt systems .
the other experimental settings were concerned with hybrid word alignment training algorithms and the phrase extraction .
the parallel data for the first three language pairs is drawn from europarl v6 and from multiun for english-chinese .
we use the europarl parallel corpus for all language pairs except for vietnamese-english .
coreference resolution is the process of determining whether two expressions in natural language refer to the same entity in the world .
coreference resolution is the task of identifying all mentions which refer to the same entity in a document .
for english , we used the syntactic relations section provided in the google analogy dataset that involves 10675 questions .
for english , we used the google analogies dataset introduced by mikolov et al and the bats collection .
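a minimal sketch of answering a google-analogy-style question with pretrained vectors ; the gensim downloader and the model name are illustrative assumptions :

```python
# minimal sketch: a word-analogy query over pretrained vectors with gensim
import gensim.downloader as api

wv = api.load("word2vec-google-news-300")  # pretrained keyed vectors (large download)
# "king" - "man" + "woman" should rank "queen" highly
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
```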
once the attitude between discussants is identified , this information is then used to construct a signed network representation of the discussion thread .
attitude predictions are used to construct a signed network representation of the discussion thread .
hulpus et al make use of structured data from dbpedia to label topics .
hulpus et al make use of the structured data in dbpedia to label topics .
discourse-new detection and coreference resolution can potentially address this error-propagation problem .
discourse-new detection is often tackled independently of coreference resolution .
we used the srilm toolkit to simulate the behavior of flexgram models by using count files as input .
we used the sri language modeling toolkit to calculate the log probability and two measures of perplexity .
coreference resolution is the process of finding discourse entities ( markables ) referring to the same real-world entity or concept .
since coreference resolution is a pervasive discourse phenomenon causing performance impediments in current ie systems , we considered a corpus of aligned english and romanian texts to identify coreferring expressions .
the various smt systems are evaluated using the bleu score .
performance is measured based on the bleu scores , which are reported in table 4 .
word sense disambiguation ( wsd ) is a natural language processing ( nlp ) task in which the correct meaning ( sense ) of a word in a given context is to be determined .
word sense disambiguation ( wsd ) is a particular problem of computational linguistics which consists in determining the correct sense for a given ambiguous word .
in this paper , we propose a text classification algorithm based on latent dirichlet allocation ( lda ) ( cite-p-13-1-1 ) .
in this paper we estimate approximate posterior inference using collapsed gibbs sampling ( cite-p-13-1-8 ) .
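for reference , the collapsed gibbs sampling update for lda that this kind of approximate posterior inference uses ( standard notation assumed ) :

```latex
P(z_i = k \mid \mathbf{z}_{-i}, \mathbf{w}) \propto
\frac{n^{-i}_{d,k} + \alpha}{\sum_{k'} \big( n^{-i}_{d,k'} + \alpha \big)}
\cdot
\frac{n^{-i}_{k,w_i} + \beta}{\sum_{v} \big( n^{-i}_{k,v} + \beta \big)}
```

where $n^{-i}_{d,k}$ counts tokens of document $d$ assigned to topic $k$ and $n^{-i}_{k,v}$ counts assignments of word $v$ to topic $k$ , both excluding the current token $i$ .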
we created 5-gram language models for every domain using srilm with improved kneser-ney smoothing on the target side of the training parallel corpora .
we used kenlm with srilm to train a 5-gram language model based on all available target language training data .
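a minimal sketch of querying such a 5-gram model from python with the kenlm module ; the model file name is an illustrative assumption :

```python
# minimal sketch: scoring sentences with a trained arpa/binary lm via kenlm
import kenlm

model = kenlm.Model("5gram.arpa")          # e.g. estimated with kenlm's lmplz -o 5
print(model.score("this is a sentence"))   # log10 probability of the sentence
print(model.perplexity("this is a sentence"))
```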
semantic role labeling ( srl ) is the task of labeling predicate-argument structure in sentences with shallow semantic information .
semantic role labeling ( srl ) is a kind of shallow sentence-level semantic analysis and is becoming a hot task in natural language processing .
word segmentation is the first step of natural language processing for japanese , chinese and thai because they do not delimit words by whitespace .
therefore , word segmentation is a crucial first step for many chinese language processing tasks such as syntactic parsing , information retrieval and machine translation .
trigram language models are implemented using the srilm toolkit .
language models are built using the sri-lm toolkit .
haghighi et al presented a generative model based on canonical correlation analysis , in which monolingual features such as the context and orthographic substrings of words were taken into account .
haghighi , berg-kirkpatrick , and klein proposed a generative model for inducing a bilingual lexicon from monolingual text by exploiting orthographic and contextual similarities among the words in two different languages .
the second decoding method is to use conditional random field .
to this end , we use conditional random fields .
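a minimal sketch of a linear-chain crf tagger ; the sklearn-crfsuite package and the toy features are illustrative assumptions :

```python
# minimal sketch: conditional random field sequence labeling with sklearn-crfsuite
import sklearn_crfsuite

# one dict of features per token, one label per token
X_train = [[{"word.lower()": "john", "is_title": True},
            {"word.lower()": "runs", "is_title": False}]]
y_train = [["B-PER", "O"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, y_train)
print(crf.predict(X_train))
```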
we propose a novel convolutional neural network with tree-based convolution kernels for relation classification .
this work presents a dependency parse tree based convolutional neural network for relation classification .
for creating the word embeddings , we used the tool word2vec .
we obtained word embeddings for our experiments by using the open source google word2vec .
automatic identification of south-slavic languages has been researched by ljubešić et al , tiedemann and ljubešić , ljubešić and kranjčić , and ljubešić and kranjčić .
distinguishing between south-slavic languages has been researched by ljubešić et al , tiedemann and ljubešić , ljubešić and kranjčić , and ljubešić and kranjčić .
our baseline is a phrase-based mt system trained using the moses toolkit .
our mt decoder is a proprietary engine similar to moses .
we rely on distributed representation based on the neural network skip-gram model of mikolov et al .
in order to deal with this challenge we rely on the negative sampling approach of mikolov et al .
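for reference , the negative-sampling objective from mikolov et al , maximized for each observed pair of input word $w_I$ and context word $w_O$ , with $k$ negative samples drawn from a noise distribution $P_n$ :

```latex
\log \sigma\big({v'_{w_O}}^{\top} v_{w_I}\big)
+ \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n(w)}
\big[ \log \sigma\big(-{v'_{w_i}}^{\top} v_{w_I}\big) \big]
```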
since transe is only suitable for 1-to-1 relations , there remain flaws for 1-to-n , n-to-1 and n-to-n relations .
transe is suitable for 1-to-1 relations , but has flaws when dealing with 1-to-n , n-to-1 and n-to-n relations .
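the flaw follows directly from transe 's translation assumption ( standard notation ) :

```latex
f(h, r, t) = \lVert \mathbf{h} + \mathbf{r} - \mathbf{t} \rVert ,
\qquad \mathbf{h} + \mathbf{r} \approx \mathbf{t} \ \text{for true triples}
```

for a 1-to-n relation with true triples $( h , r , t_1 ) , \ldots , ( h , r , t_n )$ , training pushes $t_1 \approx \cdots \approx t_n \approx h + r$ , collapsing distinct tail entities onto nearly the same point ; the n-to-1 and n-to-n cases fail symmetrically .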
we utilize a maximum entropy model to design the basic classifier for wsd and tc tasks .
the integrated dialect classifier is a maximum entropy model that we train using the liblinear toolkit .
we used a phrase-based smt model as implemented in the moses toolkit .
for the machine translation framework , we used phrase-based smt with the moses toolkit as a decoder .
case-insensitive bleu4 was used as the evaluation metric .
case-insensitive 4-gram bleu is used as evaluation metric .
the english side of the parallel corpus is trained into a language model using srilm .
a tri-gram language model is estimated using the srilm toolkit .
copy actions further improve this enhancement to reach +2.39 .
adding our copy action mechanism further increases this improvement ( +2.39 ) .
you can try the demo at http://twine-mind.cloudapp.net/streaming-demo .
the demo is available at http://twine-mind.cloudapp.net/streaming .
semantic parsing is the task of translating natural language utterances to a formal meaning representation language ( cite-p-16-1-6 , cite-p-16-3-6 , cite-p-16-1-8 , cite-p-16-3-7 , cite-p-16-1-0 ) .
semantic parsing is the task of converting a sentence into a representation of its meaning , usually in a logical form grounded in the symbols of some fixed ontology or relational database ( cite-p-21-3-3 , cite-p-21-3-4 , cite-p-21-1-11 ) .
in this paper , we will improve upon collins ' algorithm by introducing a bidirectional searching strategy , so as to effectively utilize more context information .
in this paper , we propose guided learning , a new learning framework for bidirectional sequence classification .
the word vectors of vocabulary words are trained from a large corpus using the glove toolkit .
this model first embeds the words using 300 dimensional word embeddings created using the glove method .
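a minimal sketch of loading pretrained glove vectors from their standard text format ; the file name is an illustrative assumption :

```python
# minimal sketch: reading 300-d glove vectors into a dict
import numpy as np

embeddings = {}
with open("glove.6B.300d.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.rstrip().split(" ")        # token, then 300 floats
        embeddings[parts[0]] = np.asarray(parts[1:], dtype=np.float32)

print(embeddings["the"].shape)  # (300,)
```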
the viterbi algorithm is the only algorithm widely adopted in the nlp field that offers exact decoding .
we consider that it is a real alternative to the viterbi algorithm in various nlp tasks .
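a minimal sketch of viterbi decoding for a first-order model , which returns the exact highest-scoring state sequence ; the log-space parameterization is an illustrative assumption :

```python
# minimal sketch: exact viterbi decoding for a first-order hmm (log space)
import numpy as np

def viterbi(obs, log_start, log_trans, log_emit):
    """obs: observation indices; log_start: (S,), log_trans: (S, S),
    log_emit: (S, V). returns the single best state sequence."""
    T, S = len(obs), log_start.shape[0]
    score = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    score[0] = log_start + log_emit[:, obs[0]]
    for t in range(1, T):
        cand = score[t - 1][:, None] + log_trans   # cand[i, j]: prev i -> cur j
        back[t] = cand.argmax(axis=0)              # best predecessor per state
        score[t] = cand.max(axis=0) + log_emit[:, obs[t]]
    path = [int(score[-1].argmax())]
    for t in range(T - 1, 0, -1):                  # follow backpointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```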
rhetorical structure theory is one of the most widely accepted frameworks for discourse analysis .
rhetorical structure theory posits a hierarchical structure of discourse relations between spans of text .
popovic and ney report the use of morphological and syntactic restructuring information for spanish-english and serbian-english translation .
popovic and ney report the use of simple local transformation rules for spanish-english and serbian-english translation .
the classic generative model approach to word alignment is based on ibm models 1-5 and the hmm model .
much of the additional work on generative modeling of 1-to-n word alignments is based on the hmm model .
these alternative ways of expressing the same information are called paraphrases .
sentences or phrases that convey the same meaning using different wording are called paraphrases .
we propose to formulate the inference problem in first-order ( arc-factored ) dependency parsing .
we present a first-order graph-based dependency parsing model which runs in edge linear time at expectation and with very high probability .
we showed that our method significantly improves the informativity of the generated compressions .
results show that the proposed method significantly improves the informativity of the generated compressions .
zhou et al proposed attention-based bi-directional lstm networks for relation classification task .
zhou et al proposed attention-based bidirectional lstm networks for relation classification task .
a context-free grammar ( cfg ) is a tuple $( N , \Sigma , S , R )$ , where $N$ is a finite set of nonterminal symbols , $\Sigma$ is a finite set of terminal symbols disjoint from $N$ , $S$ is the start symbol and $R$ is a finite set of rules .
a context-free grammar ( cfg ) is a tuple $( V_N , V_T , P , S )$ , where $V_N$ and $V_T$ are finite , disjoint sets of nonterminal and terminal symbols , respectively , and $S \in V_N$ is the start symbol .
english 4-gram language models with kneser-ney smoothing are trained using kenlm on the target side of the parallel training corpora and on the gigaword corpus .
a 3-gram language model is trained on the target side of the training data by the srilm toolkits with modified kneser-ney smoothing .
these language models were built up to an order of 5 with kneser-ney smoothing using the srilm toolkit .
language models were estimated using the sri language modeling toolkit with modified kneser-ney smoothing .
taxonomies , which serve as backbones for structured knowledge , are useful for many nlp applications .
taxonomies play an important role in many applications by organizing domain knowledge into a hierarchy of ‘ is-a ’ relations between terms .
collins and roark proposed an approximate incremental method for parsing .
collins and roark presented a linear parsing model trained with an averaged perceptron algorithm .
the language model is trained and applied with the srilm toolkit .
furthermore , we train a 5-gram language model using the sri language toolkit .
in this paper , we show that expressive kernels and deep neural networks can be combined in a common framework in order to explicitly model structured information .
in this paper , we show that the nyström-based low-rank embedding of input examples can be used as the early layer of a deep feed-forward neural network .
using a framework that successfully models the compositional aspect of language , we apply a recursive neural network ( rnn ) to the task of identifying the political position evinced by a sentence .
building from those insights , we introduce a recursive neural network ( rnn ) to detect ideological bias on the sentence level .
a different evaluation metric based on the accuracy of the data is proposed in rozovskaya and roth .
a different evaluation metric based on the accuracy of the data before and after running the system was proposed in rozovskaya and roth .
for all models , we use markov chain monte carlo inference to find latent variables that best fit observed data .
to find the latent variables that best explain observed data , we use gibbs sampling , a widely used markov chain monte carlo inference technique .
for translation experiments , we use a phrase-based decoder that incorporates a set of standard features and a hierarchical reordering model .
more specifically , the baseline reordering model is a hierarchical phrase orientation model trained on all the available parallel data .
word sense disambiguation ( wsd ) is formally defined as the task of computationally identifying senses of a word in a context .
word sense disambiguation ( wsd ) is a widely studied task in natural language processing : given a word and its context , assign the correct sense of the word based on a predefined sense inventory ( cite-p-15-3-4 ) .
by a simple iterative self-labeling technique , transfer learning is still useful , even when the correct answers for the target qa dataset are not available .
finally , we show that transfer learning is helpful even in unsupervised scenarios when correct answers for target qa dataset examples are not available .
we evaluated the performance of the composition models on the test split of the dataset , using the rank evaluation proposed by baroni and zamparelli .
we also replicated that formulation , and found phrase ranking to be worse when compared to the partial least squares method described in baroni and zamparelli .
we used a phrase-based smt model as implemented in the moses toolkit .
our implementation of the segment-based imt protocol is based on the moses toolkit .
our knowledge acquisition method follows the scheme of conceptnet .
we use conceptnet and coreference resolution as external knowledge .
chinese users may make errors when they are typing .
users may make errors when they are typing in chinese words .
associated with each phrasal pattern is a conceptual template .
associated with each phrasal pattern is a conceptual template , which describes the meaning of the phrasal pattern , usually with references to the constituents of the associated phrase .
for the model implementation , we use the one provided by the opennmt-py toolkit .
we obtain the pre-tokenized dataset from the open-nmt project .
bilingual dictionaries of technical terms are important resources for many natural language processing tasks including statistical machine translation and cross-language information retrieval .
bilingual lexicons are fundamental resources in multilingual natural language processing tasks such as machine translation , cross-language information retrieval or computerassisted translation .
we measured performance using the bleu score , which estimates the accuracy of translation output with respect to a reference translation .
we used the bleu score to evaluate the translation accuracy with and without the normalization .
we used the pb smt system in moses for je and kj translation tasks .
we used the phrase-based smt system in moses for the translation experiments .
li et al propose a hybrid method based on wordnet and the brown corpus to incorporate semantic similarity between words , semantic similarity between sentences , and word order similarity to measure the overall sentence similarity .
li et al presented an algorithm which takes account of semantic information and word order information implied in the sentence to calculate the similarity between very short texts of sentence length .
to start with , we replace word types with corresponding neural language model representations estimated using the skip-gram model .
here , we choose the skip-gram model and continuous-bag-of-words model for comparison with the lbl model .
zhao et al used maximum-entropy to train a switch variable to separate aspect and sentiment words .
zhao et al propose an extension of this model that is able to use various features of words and can distinguish aspect from opinion words .
the embeddings were trained over the english wikipedia using word2vec .
both files are concatenated and used as training data for word2vec .
to start with , we replace word types with corresponding neural language model representations estimated using the skip-gram model .
then , we use word embedding generated by skip-gram with negative sampling to convert words into word vectors .
we use the word2vec tool to pre-train the word embeddings .
the model parameters of word embedding are initialized using word2vec .
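a minimal sketch of initializing an embedding layer from pretrained word2vec vectors ; pytorch , gensim and the file name are illustrative assumptions :

```python
# minimal sketch: initializing a pytorch embedding layer from word2vec vectors
import torch
import torch.nn as nn
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)
weights = torch.tensor(kv.vectors)                    # (vocab_size, dim)
embedding = nn.Embedding.from_pretrained(weights, freeze=False)  # fine-tunable
```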
coreference resolution is a key problem in natural language understanding that still escapes reliable solutions .
coreference resolution is a task aimed at identifying phrases ( mentions ) referring to the same entity .
sentiment analysis is the task of identifying positive and negative opinions , sentiments , emotions and attitudes expressed in text .
sentiment analysis is a much-researched area that deals with identification of positive , negative and neutral opinions in text .
finally , the resulting kernel function is the cosine similarity between vector pairs , in line with .
the resulting kernel function is the cosine similarity between tweet vector pairs , in line with .
we demonstrate that , counter to expectations , simple single-pass clustering in constant time and space can outperform locality sensitive hashing for nearest neighbour search on streams .
contrary to expectations , we find that nearest neighbour search on a stream based on clustering performs faster than lsh for the same level of accuracy .
mikolov et al smoothed the original context distribution by raising unigram frequencies to the power of alpha .
mikolov et al showed that the sg algorithm achieves better accuracies in tested cases .
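for reference , the smoothed noise distribution referred to above , with the commonly used exponent :

```latex
P_n(w) = \frac{U(w)^{\alpha}}{\sum_{w'} U(w')^{\alpha}} , \qquad \alpha = 0.75
```

where $U$ is the unigram distribution ; the exponent damps frequent words and boosts rare ones among the negative samples .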
as regards syntactic chunking , jess-cm significantly outperformed aso-semi for the same 15m-word unlabeled data size obtained from the wall street journal in 1991 .
as regards syntactic chunking , jess-cm significantly outperformed aso-semi for the same 15m-word unlabeled data size obtained from the wall street journal in 1991 as described in ( cite-p-18-1-0 ) .
as our supervised classification algorithm , we use a linear svm classifier from liblinear , with its default parameter settings .
we build all the classifiers using the l2-regularized linear logistic regression from the liblinear package .
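a minimal sketch of the two liblinear-backed classifiers mentioned above , via scikit-learn ; the toy data and parameters are illustrative assumptions :

```python
# minimal sketch: linear svm and l2-regularized logistic regression (liblinear)
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression

X = [[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]]
y = [1, 0, 1, 0]

svm = LinearSVC(C=1.0).fit(X, y)                      # linear svm, default settings
logreg = LogisticRegression(penalty="l2",
                            solver="liblinear").fit(X, y)
print(svm.predict([[0.5, 0.5]]), logreg.predict([[0.5, 0.5]]))
```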
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided .
in the experiments we trained 5-gram language models on the monolingual parts of the bilingual corpora using srilm .
relation extraction is a fundamental step in many natural language processing applications such as learning ontologies from texts ( cite-p-12-1-0 ) and question answering ( cite-p-12-3-6 ) .
relation extraction is a fundamental task that enables a wide range of semantic applications from question answering ( cite-p-13-3-12 ) to fact checking ( cite-p-13-3-10 ) .
using the resulting vectors as input , we find consistent benefits of our method on a suite of standard benchmark evaluation tasks .
most importantly , we find that they outperform the original vectors on benchmark tasks .
rules can contain unknown parameters that can be efficiently estimated from dialogue data .
unknown rule parameters can be automatically estimated from dialogue data using bayesian learning .
semantic parsing is the task of mapping natural language sentences to a formal representation of meaning .
semantic parsing is the task of mapping natural language sentences to complete formal meaning representations .
we use the standard stanford-style set of dependency labels .
we used the stanford parser to generate dependency trees of sentences .