ratinov and roth and turian et al also explored this approach for name tagging .
turian et al showed that the optimal word embedding is task dependent .
after sentence segmentation and tokenization , we used the stanford ner tagger to identify per and org named entities from each sentence .
to extract terms we used lingua english tagger for finding single and multi-token nouns and the stanford named entity recognizer to extract named entities .
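as a hedged illustration of the kind of extraction pipeline described above ( sentence segmentation , tokenization and per / org named-entity tagging ) , the sketch below uses the stanza package , a python interface to stanford's neural nlp models , rather than the original java stanford ner tagger or the lingua tagger ; the input text is invented .

    # hedged sketch : sentence segmentation , tokenization and ner with stanza ,
    # used here as a stand-in for the stanford ner tagger mentioned above .
    import stanza

    stanza.download("en")  # one-time model download
    nlp = stanza.Pipeline(lang="en", processors="tokenize,ner")

    doc = nlp("Barack Obama visited the Microsoft campus in Redmond.")
    for sentence in doc.sentences:
        for ent in sentence.ents:
            if ent.type in ("PERSON", "ORG"):   # keep per / org entities only
                print(ent.text, ent.type)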
we use the scfg decoder cdec 4 and build grammars using its implementation of the suffix array extraction method described in lopez .
we built grammars using its implementation of the suffix array extraction method described in lopez .
to train our reranking models we used svm-light-tk 7 , which encodes structural kernels in the svm-light solver .
to train our models , we use svm-light-tk 15 , which enables the use of structural kernels in svm-light .
we use the penn treebank as the linguistic data source .
we use the wsj corpus , a pos annotated corpus , for this purpose .
the translation results are evaluated with case-insensitive 4-gram bleu .
experimental results are evaluated by case-insensitive bleu-4 .
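one way to compute case-insensitive bleu-4 is sketched below with the sacrebleu package ; this is an illustrative assumption , not necessarily the evaluation script used in the cited work , and the hypothesis / reference strings are dummies .

    # hedged sketch : case-insensitive 4-gram bleu with sacrebleu .
    import sacrebleu

    hypotheses = ["the cat sat on the mat ."]
    references = [["The cat sat on the mat ."]]   # one list per reference set

    # lowercase=True makes the metric case-insensitive ; 4-gram bleu is the default order .
    bleu = sacrebleu.corpus_bleu(hypotheses, references, lowercase=True)
    print(bleu.score)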
in this paper , we propose a cognitively-driven normalization system that integrates different human perspectives in normalizing the nonstandard tokens , including the enhanced letter .
in this paper , we propose a broad-coverage normalization system for the social media language without using the human annotations .
the distance used for clustering is based on a divergence-like distance between two language models that was originally proposed by juang and rabiner .
the similarity used for clustering is based on a divergence-like distance between two language models that was originally proposed by juang and rabiner .
the model weights were trained using the minimum error rate training algorithm .
the weights are trained using a procedure similar to minimum error rate training on held-out test data .
for the “ complete ” model , we checked the top 20 answer candidates that ranked higher than the actual “ correct ” .
again for the “ complete ” model , we checked the top 20 answer candidates that ranked higher than the actual “ correct ” one .
in experiments on japanese newspaper articles , the proposed method outperformed a simple application of text-based ner to asr results in ner f-measure , mainly by improving precision .
in experiments using svms , the proposed method showed a higher ner f-measure , especially in terms of improving precision , than simply applying text-based ner to asr results .
relation extraction is a crucial task in the field of natural language processing ( nlp ) .
relation extraction is a core task in information extraction and natural language understanding .
our work is built upon the multimodal dialogue dataset that comprises 150k chat sessions between customers and sales agents .
our work builds upon the recently proposed multimodal dialogue dataset , consisting of e-commerce related conversations .
a 4-gram language model is trained on the monolingual data with the srilm toolkit .
a kneser-ney-smoothed 5-gram language model is trained on the target side of the parallel data with srilm .
this motivated huang and lowe to build a system based on syntax information .
huang and lowe implemented a hybrid approach to automated negation detection .
the language model is a 5-gram lm with modified kneser-ney smoothing .
this type of feature is based on a trigram model with kneser-ney smoothing .
the affinity propagation clustering algorithm was implemented in python using the scikit-learn framework .
it was implemented using the multinomial naive bayes algorithm from scikit-learn .
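a minimal scikit-learn sketch of the two components mentioned above , affinity propagation clustering and a multinomial naive bayes classifier ; the toy documents and the feature choices ( tf-idf for clustering , raw counts for naive bayes ) are illustrative assumptions .

    # hedged sketch : affinity propagation and multinomial naive bayes in scikit-learn .
    from sklearn.cluster import AffinityPropagation
    from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB

    docs = ["good service", "bad service", "great food", "awful food"]   # dummy data
    labels = [1, 0, 1, 0]

    # clustering on tf-idf vectors
    X_tfidf = TfidfVectorizer().fit_transform(docs).toarray()
    clusters = AffinityPropagation(random_state=0).fit_predict(X_tfidf)

    # multinomial naive bayes on raw term counts
    X_counts = CountVectorizer().fit_transform(docs)
    clf = MultinomialNB().fit(X_counts, labels)
    print(clusters, clf.predict(X_counts))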
this is because the method is fully automatic and can be applied to arbitrary html documents .
the characteristic of this method is that it is fully automatic and can be applied to arbitrary html documents .
using word2vec , we compute word embeddings for our text corpus .
we learn our word embeddings by using word2vec 3 on unlabeled review data .
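a hedged sketch of learning word embeddings from unlabeled text with gensim's word2vec implementation ( gensim >= 4 ) ; the toy corpus , dimensionality and skip-gram setting are assumptions for illustration .

    # hedged sketch : training word2vec embeddings with gensim .
    from gensim.models import Word2Vec

    sentences = [                                   # dummy tokenized corpus
        ["the", "battery", "life", "is", "great"],
        ["the", "screen", "is", "too", "dim"],
    ]
    model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1, workers=4)
    vector = model.wv["battery"]                    # 100-dimensional embedding
    print(model.wv.most_similar("battery", topn=2))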
though both methods show advantages over the basic systems .
however , their method does not show an advantage over the basic systems .
coreference resolution is the task of determining which mentions in a text refer to the same entity .
coreference resolution is the next step on the way towards discourse understanding .
second , we present a unified approach to these problems .
ideally , we would like to propose a unified approach to all the four problems .
twitter is a microblogging site where people express themselves and react to content in real-time .
twitter is a microblogging service that has 313 million monthly active users 1 .
pairwise similarity between 500 million terms was computed in 50 hours using 200 quad-core nodes .
the pairwise similarity between 500 million terms is computed in 50 hours using 200 quad-core nodes .
we develop translation models using the phrase-based moses smt system .
we use the moses toolkit to train our phrase-based smt models .
kalchbrenner et al introduced a convolutional architecture dubbed the dynamic convolutional neural network for the semantic modeling of sentences .
kalchbrenner et al proposed a dynamic convolution neural network with multiple layers of convolution and k-max pooling to model a sentence .
using multi-layered neural networks to learn word embeddings has become standard in nlp .
continuous representations of words and phrases are proven effective in many nlp tasks .
analysis shows that our initial solution is instrumental for making self-learning work without supervision .
this observation is used to build an initial solution that is later improved through self-learning .
we use the wsj portion of the penn treebank 4 , augmented with head-dependant information using the rules of yamada and matsumoto .
we also used a projective english dataset derived from the penn treebank by applying the standard head rules of yamada and matsumoto .
dyer et al propose a stack-lstm for transition-based parsing .
dyer et al introduce stack-lstms , which have the ability to recover earlier hidden states .
the n-gram model was a 3-gram model with kneser-ney smoothing trained using kenlm with its default settings .
the language models were 5-gram models with kneser-ney smoothing built using kenlm .
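the sketch below shows how a kneser-ney-smoothed n-gram model built with kenlm can be queried from python ; the arpa file name is a placeholder and the model is assumed to have been built beforehand with kenlm's lmplz tool ( e.g. lmplz -o 5 < corpus.txt > lm.arpa ) .

    # hedged sketch : scoring sentences with the kenlm python bindings .
    import kenlm

    model = kenlm.Model("lm.arpa")   # placeholder path to a pre-built arpa model
    print(model.score("this is a test sentence", bos=True, eos=True))
    print(model.perplexity("this is a test sentence"))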
some log-linear models have been proposed to incorporate those features .
log-linear models have been proposed to incorporate those features .
however , at the level of individual users , it is impractical to estimate independent sentiment classification models for each user with limited data .
however , due to the limited availability of user-specific opinionated data , it is impractical to estimate independent models for each user .
this paper describes an exponential family model of word sense which captures both occurrences and co-occurrences of words and senses .
this paper describes an exponential family model suited to performing word sense disambiguation .
third , we add the similarity of synonyms to extend the fst model .
in addition , we apply the synonym similarity to expand the fst model .
for each question math-w-3-1-1-3 , let math-w-3-1-1-6 be the unstructured text and math-w-3-1-1-12 .
for each question math-w-3-1-1-3 , let math-w-3-1-1-6 be the unstructured text and math-w-3-1-1-12 the set of candidate answers to math-w-3-1-1-24 .
lin et al and yang et al proposed a hierarchical rnn network for document-level modeling as well as sentence-level modeling , at the cost of increased computational complexity .
liu et al proposed a context-sensitive rnn model that uses latent dirichlet allocation to extract topic-specific word embeddings .
word sense disambiguation ( wsd ) is the task of determining the meaning of an ambiguous word in its context .
word sense disambiguation ( wsd ) is the nlp task that consists in selecting the correct sense of a polysemous word in a given context .
qiu et al proposed double propagation to collectively extract aspect terms and opinion words based on information propagation over a dependency graph .
qiu et al propose a double propagation method to extract opinion word and opinion target simultaneously .
relation extraction systems have focused almost exclusively on syntactic parse trees .
several prior approaches to relation extraction have focused on using syntactic parse trees .
the system incorporates rasp , a domain-independent robust statistical parser , and a scf classifier which identifies 163 verbal scfs .
they were acquired automatically using a domain-independent statistical parsing toolkit , rasp , and a classifier which identifies verbal scfs .
these models tend to generate safe , commonplace responses ( e . g . , i don't know ) regardless of the input .
these models tend to generate safe , commonplace responses ( e.g. , i don't know ) regardless of the input .
the english side of the parallel corpus is trained into a language model using srilm .
the language model is a 3-gram language model trained using the srilm toolkit on the english side of the training data .
we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit .
we trained a 4-gram language model on this data with kneser-ney discounting using srilm .
tang et al proposed a novel method dubbed user product neural network which captures user- and product-level information for sentiment classification .
tang et al was first to incorporate user and product information into a neural network model for personalized rating prediction of products .
while zhang and vogel argue that increasing the size of the test set gives even more reliable system scores than multiple references , this still does not solve the inadequacy of bleu and nist for sentence-level or small set evaluation .
while zhang and vogel argue that increasing the size of the test set gives even more reliable system scores than multiple references , this still does not solve the inadequacy of bleu and nist for sentence-level or small set evaluation .
word segmentation is the first step prior to word alignment for building statistical machine translation ( smt ) systems on language pairs without explicit word boundaries such as chinese-english .
word segmentation is the foremost obligatory task in almost all the nlp applications where the initial phase requires tokenization of input into words .
as the text databases available to users become larger and more heterogeneous , genre becomes increasingly important for computational linguistics .
as the text databases available to users become larger and more heterogeneous , genre becomes increasingly important for computational linguistics as a complement to topical and structural principles of classification .
in this paper , we propose a novel universal multilingual nmt approach focusing mainly on low resource languages .
in this paper , we propose a new universal machine translation approach focusing on languages with a limited amount of parallel data .
experiments demonstrate that our method outperforms competitive approaches in terms of topic coherence .
we show that our method outperforms three competitive approaches in terms of topic coherence on two different datasets .
in this paper , we propose two optimization criteria for seq2seq models , tailored to different conversation scenarios .
in this paper , we propose two new optimization criteria for the seq2seq model to adapt to different conversation scenarios .
sentiment analysis is a much-researched area that deals with identification of positive , negative and neutral opinions in text .
sentiment analysis is the task in natural language processing ( nlp ) that deals with classifying opinions according to the polarity of the sentiment they express .
in this study , we investigate the relevance of analogical learning for english .
in this study , we revisit this learning paradigm and apply it to the transliteration task .
berland and charniak used a similar method for extracting instances of meronymy relation .
berland and charniak proposed a system for part-of relation extraction , based on the approach .
reasons phrased in practice seem well-represented in the normative view of theory .
nearly all phrased reasons are adequately represented in theory .
to evaluate our method , we use the webquestions dataset , which contains 5,810 questions crawled via google suggest api .
we use the webquestions dataset as our main dataset , which contains 5,810 question-answer pairs .
we used minimum error rate training for tuning on the development set .
we use minimum error rate training to maximize bleu on the complete development data .
these automata are translated into definite relations and the lexical entries are adapted to call the definite relation corresponding to their class .
the refined automata are encoded as definite relations and each base lexical entry is extended to call the relation corresponding to its class .
for the cluster- based method , we use word2vec 2 which provides the word vectors trained on the google news corpus .
we derive 100-dimensional word vectors using word2vec skip-gram model trained over the domain corpus .
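a minimal sketch of loading the pre-trained google news word2vec vectors with gensim ; the binary file name is the commonly distributed one and is an assumption here .

    # hedged sketch : loading pre-trained word2vec vectors with gensim .
    from gensim.models import KeyedVectors

    wv = KeyedVectors.load_word2vec_format(
        "GoogleNews-vectors-negative300.bin", binary=True   # assumed file name
    )
    print(wv["computer"].shape)                  # (300,)
    print(wv.most_similar("computer", topn=3))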
we then learn reranking weights using minimum error rate training on the development set for this combined list , using only these two features .
the feature weights for the log-linear combination of the features are tuned using minimum error rate training on the devset in terms of bleu .
in this paper , we present a comparative evaluation of syntactic parsers and their output .
this paper presents a comparative evaluation of several state-of-the-art english parsers based on different frameworks .
stance detection is the task of automatically determining whether the authors of a text are against or in favour of a given target .
stance detection is the task of assigning stance labels to a piece of text with respect to a topic , i.e. , whether a piece of text is in favour of “ abortion ” , neutral , or against .
we evaluate system output automatically , using the bleu-4 modified precision score with the human written sentences as reference .
we measure translation performance by the bleu and meteor scores with multiple translation references .
blei et al proposed lda as a general bayesian framework and gave a variational model for learning topics from data .
li et al used a latent dirichlet allocation model to generate topic distribution features as the news representations .
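a hedged sketch of deriving topic-distribution features with gensim's lda implementation ; the toy corpus and the number of topics are illustrative assumptions , not the setup of the cited work .

    # hedged sketch : lda topic-distribution features with gensim .
    from gensim.corpora import Dictionary
    from gensim.models import LdaModel

    texts = [["stocks", "market", "fall"], ["team", "wins", "match"]]   # dummy corpus
    dictionary = Dictionary(texts)
    corpus = [dictionary.doc2bow(t) for t in texts]

    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=10, passes=5)
    doc_topics = lda.get_document_topics(corpus[0])   # sparse topic distribution as features
    print(doc_topics)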
takamura et al proposed using spin models for extracting semantic orientation of words .
takamura et al proposed using a spin model to predict word polarity .
to build the local language models , we use the srilm toolkit , which is commonly applied in speech recognition and statistical machine translation .
for language model scoring , we use the srilm toolkit , training a 5-gram language model for english .
in recent preliminary work , however , we have succeeded in distinguishing arguments from adjuncts using corpus evidence .
in recent work , however , we succeed in distinguishing arguments from adjuncts using evidence extracted from a parsed corpus .
relation extraction is the problem of populating a target relation ( representing an entity-level relationship or attribute ) with facts extracted from natural-language text .
relation extraction ( re ) has been defined as the task of identifying a given set of semantic binary relations in text .
for standard phrase-based translation , galley and manning introduced a hierarchical phrase orientation model .
recently , galley and manning introduced a hierarchical model capable of analyzing alignments beyond adjacent phrases .
a 4-gram language model was trained on the monolingual data by the srilm toolkit .
the language models were interpolated kneser-ney discounted trigram models , all constructed using the srilm toolkit .
e ∈ e is a triple math-w-6-6-0-121 is its head node , t ( e ) ∈ n* is a set of tail nodes and f ( e ) is a monotonic weight function .
each hyperarc e ∈ e is a triple math-w-6-6-0-121 is its head node , t ( e ) ∈ n* is a set of tail nodes and f ( e ) is a monotonic weight function from r^|t(e)| to r , and t ∈ n is a target node .
in order to measure translation quality , we use bleu 7 and ter scores .
we evaluate the translation quality using the case-sensitive bleu-4 metric .
leveraging a multi-perspective matching algorithm , our model outperforms the existing state of the art .
experiments show that our model outperforms the existing state of the art using rich features .
for word embedding , we used pre-trained glove word vectors with 300 dimensions , and froze them during training .
we used 200 dimensional glove word representations , which were pre-trained on 6 billion tweets .
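the sketch below illustrates loading pre-trained glove vectors into a frozen embedding layer with pytorch ; the file name , the naive parsing of the text file and the framework choice are assumptions for illustration .

    # hedged sketch : frozen pre-trained glove embeddings in pytorch .
    import numpy as np
    import torch
    import torch.nn as nn

    vocab, vectors = [], []
    with open("glove.6B.300d.txt", encoding="utf-8") as f:   # placeholder path
        for line in f:
            parts = line.rstrip().split(" ")
            vocab.append(parts[0])
            vectors.append(np.asarray(parts[1:], dtype="float32"))

    weights = torch.tensor(np.stack(vectors))
    embedding = nn.Embedding.from_pretrained(weights, freeze=True)   # frozen during training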
twitter is a widely used microblogging environment which serves as a medium to share opinions on various events and products .
twitter consists of a massive number of posts on a wide range of subjects , making it very interesting to extract information and sentiments from them .
we used srilm to build a 4-gram language model with kneser-ney discounting .
we build an open-vocabulary language model with kneser-ney smoothing using the srilm toolkit .
we train the cbow model with default hyperparameters in word2vec .
we use the word2vec framework in the gensim implementation to generate the embedding spaces .
coreference resolution is the problem of identifying which mentions ( i.e. , noun phrases ) refer to which real-world entities .
since coreference resolution is a pervasive discourse phenomenon causing performance impediments in current ie systems , we considered a corpus of aligned english and romanian texts to identify coreferring expressions .
we adopt pretrained embeddings for word forms , trained on the provided training data with word2vec .
we train 300-dimensional word embeddings using word2vec on all the training data , and fine-tune them during the training process .
twitter is a huge microblogging service with more than 500 million tweets per day from different locations of the world and in different languages ( cite-p-10-1-6 ) .
twitter is a popular microblogging service which provides real-time information on events happening across the world .
in this paper , we explore a number of different auxiliary problems , and we are able to significantly improve the accuracy of the nombank srl task .
in this paper , we have presented a novel application of alternating structure optimization ( aso ) to the semantic role labeling ( srl ) task on nombank .
semantic parsing is the problem of mapping natural language strings into meaning representations .
semantic parsing is the task of translating text to a formal meaning representation such as logical forms or structured queries .
therefore , a text normalization process must be performed before any conventional nlp process is implemented .
for this reason , as noted by sproat et al , an sms normalization must be performed before a more conventional nlp process can be applied .
the language models are estimated using the kenlm toolkit with modified kneser-ney smoothing .
to generate the n-gram language models , we used the kenlm n-gram , language modeling tool .
we have proposed an input-splitting method for translating spoken-language .
this paper proposes an input-splitting method for robust spoken-language translation .
h r on a synonym choice task , where math-w-7-1-0-70 outperformed the bag-of-word model .
h r on a synonym choice task , where it outperforms the standard bag-of-word model for nouns and verbs .
we train a 5-gram language model with the xinhua portion of english gigaword corpus and the english side of the training set using the srilm toolkit .
we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .
collobert et al first apply a convolutional neural network to extract features from a window of words .
collobert et al use a convolutional neural network over the sequence of word embeddings .
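a minimal pytorch sketch , not collobert et al's exact architecture , of a convolution over a sequence of word embeddings followed by max-over-time pooling ; all sizes are illustrative .

    # hedged sketch : convolution over word embeddings with max pooling .
    import torch
    import torch.nn as nn

    class ConvEncoder(nn.Module):
        def __init__(self, vocab_size=10000, emb_dim=100, n_filters=128, width=3):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=width, padding=1)

        def forward(self, token_ids):                   # (batch, seq_len)
            x = self.embed(token_ids).transpose(1, 2)   # (batch, emb_dim, seq_len)
            x = torch.relu(self.conv(x))                # (batch, n_filters, seq_len)
            return x.max(dim=2).values                  # max-over-time pooling

    features = ConvEncoder()(torch.randint(0, 10000, (4, 20)))   # shape (4, 128)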
the translation quality is evaluated by case-insensitive bleu and ter metric .
the translation quality is evaluated by bleu and ribes .
a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit .
a trigram model was built on 20 million words of general newswire text , using the srilm toolkit .
to prevent overfitting , we apply dropout operators to non-recurrent connections between lstm layers .
we also apply zoneout to the recurrent connections , as well as dropout .
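a hedged pytorch sketch of dropout on the non-recurrent ( between-layer ) connections of a stacked lstm ; pytorch has no built-in zoneout , so only the dropout part is shown , and all dimensions are assumptions .

    # hedged sketch : the dropout argument of nn.LSTM is applied to the output of
    # each layer except the last , i.e. not to the recurrent connections .
    import torch
    import torch.nn as nn

    lstm = nn.LSTM(input_size=300, hidden_size=512, num_layers=2,
                   dropout=0.5, batch_first=True)
    x = torch.randn(8, 40, 300)              # (batch, seq_len, input_size)
    outputs, (h_n, c_n) = lstm(x)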
results on event-similarity tasks are encouraging , indicating that our approach can outperform the traditional vector-space model , and is suitable for distinguishing between topically very similar events .
we conduct preliminary experiments on two event-oriented tasks and show that the proposed approach can outperform traditional vector space model in recognizing identical real-world events .
we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing .
we used the srilm toolkit to generate the scores with no smoothing .
to learn grsemi-crfs , we employ adagrad , an adaptive stochastic gradient descent method which has proven successful in similar tasks .
we use a minibatch stochastic gradient descent algorithm together with an adagrad optimizer .
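a minimal sketch of minibatch training with an adagrad optimizer in pytorch ; the model , learning rate and dummy batch are illustrative assumptions .

    # hedged sketch : minibatch stochastic gradient descent with adagrad .
    import torch
    import torch.nn as nn

    model = nn.Linear(100, 2)                                 # placeholder model
    optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for batch_x, batch_y in [(torch.randn(32, 100), torch.randint(0, 2, (32,)))]:
        optimizer.zero_grad()
        loss = loss_fn(model(batch_x), batch_y)
        loss.backward()
        optimizer.step()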
we used the svm implementation provided within scikit-learn .
we used the svd implementation provided in the scikit-learn toolkit .
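a hedged sketch of the scikit-learn svm and truncated svd implementations mentioned above , run on dummy data ; the dimensions are illustrative .

    # hedged sketch : truncated svd followed by an svm classifier in scikit-learn .
    import numpy as np
    from sklearn.decomposition import TruncatedSVD
    from sklearn.svm import SVC

    X = np.random.rand(50, 200)                    # dummy feature matrix
    y = np.random.randint(0, 2, 50)

    X_reduced = TruncatedSVD(n_components=20).fit_transform(X)   # svd implementation
    clf = SVC(kernel="linear").fit(X_reduced, y)                 # svm implementation
    print(clf.score(X_reduced, y))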
textual entailment is the task of automatically determining whether a natural language hypothesis can be inferred from a given piece of natural language text .
textual entailment is a similar phenomenon , in which the presence of one expression licenses the validity of another .
relation extraction is the task of predicting attributes and relations for entities in a sentence ( zelenko et al. , 2003 ; bunescu and mooney , 2005 ; guodong et al. , 2005 ) .
relation extraction ( re ) is the task of identifying instances of relations , such as nationality ( person , country ) or place of birth ( person , location ) , in passages of natural text .
we represent a set of features for the parsing model .
we represent the features based on the dlm .
huang et al ( 2012 ) build a similar model using k-means clustering , but also incorporate global textual features into initial context vectors .
huang et al ( 2012 ) used a multi-prototype model to learn the vector for different senses of a word .
since words are ambiguous in terms of their part of speech , the correct part of speech is usually identified from the context .
since words are ambiguous in terms of their part of speech , the correct part of speech is usually identified from the context the word appears in .