sentence1 (string, lengths 16–446)
sentence2 (string, lengths 14–436)
we trained two 5-gram language models on the entire target side of the parallel data , with srilm .
we used srilm to build a 4-gram language model with interpolated kneser-ney discounting .
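Both sentences above describe training n-gram language models with SRILM and Kneser-Ney smoothing. A minimal sketch of that recipe, calling SRILM's ngram-count via subprocess; it assumes the toolkit is on PATH, and the file names are hypothetical.

```python
import subprocess

subprocess.run(
    [
        "ngram-count",
        "-order", "5",             # n-gram order (4 for the second paper)
        "-kndiscount",             # (modified) Kneser-Ney discounting
        "-interpolate",            # interpolate lower-order estimates
        "-text", "target.txt",     # training text, one sentence per line
        "-lm", "target.5gram.lm",  # output model in ARPA format
    ],
    check=True,
)
```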
we solve this sequence tagging problem using the mallet implementation of conditional random fields .
we use a conditional random field sequence model , which allows for globally optimal training and decoding .
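Both papers above tag sequences with a CRF (the first via MALLET, which is Java). A rough Python stand-in using sklearn-crfsuite, sketched under the assumption that features are per-token dicts; the data here is a toy example.

```python
import sklearn_crfsuite

# CRF training on a convex objective -> globally optimal parameters;
# prediction uses Viterbi decoding over whole sequences.
crf = sklearn_crfsuite.CRF(
    algorithm="lbfgs",
    c1=0.1, c2=0.1,        # L1/L2 regularization (illustrative values)
    max_iterations=100,
)
X = [[{"word": "john"}, {"word": "runs"}]]   # toy per-token feature dicts
y = [["B-PER", "O"]]                         # matching tag sequence
crf.fit(X, y)
print(crf.predict(X))
```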
analysis of the results can provide insight into which contextual information provides the most improvement in the task of sentiment-based community detection .
it has been shown that incorporating sentiment analysis can improve community detection when looking for sentiment-based communities .
we leverage a large amount of weakly-labelled training data .
in this work , we leverage a large amount of data to train a multi-layer cnn .
barzilay and lee offer an attractive framework for constructing a context-specific hidden markov model of topic drift .
barzilay and lee present a hidden markov model based content model where the hidden states of the hmm represent the topics in the text .
faruqui and dyer presented a method that combine two monolingual vector spaces into a multilingual one by canonical correlation analysis .
faruqui and dyer propose a method based on canonical correlation analysis to produce more informed monolingual vectors using multilingual knowledge .
the translation results are evaluated with case insensitive 4-gram bleu .
the translation quality is evaluated by the case-insensitive bleu-4 metric .
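A sketch of case-insensitive BLEU-4 scoring with sacrebleu, used here as a stand-in for whichever script the papers above actually ran; the hypothesis and reference are toy strings.

```python
import sacrebleu

hyps = ["the cat sat on the mat"]            # system outputs
refs = [["The cat is on the mat"]]           # one reference stream
bleu = sacrebleu.corpus_bleu(hyps, refs, lowercase=True)  # case-insensitive
print(f"BLEU = {bleu.score:.2f}")            # 4-gram BLEU by default
```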
the language models used were 7-gram srilm with kneser-ney smoothing and linear interpolation .
language models were built with srilm , modified kneser-ney smoothing , default pruning , and order 5 .
as a result , bus request cycle may conceivably be understood either as a corn- * when a sequence has length three or more .
as a result , bus request cycle may conceivably be understood either as a corn- * when a sequence has length three or more , the order of modification may vary .
in figure 1 , ‘ police ’ is both an argument of ‘ arrest ’ and ‘ want ’ .
in figure 1 , ‘ police ’ is both an argument of ‘ arrest ’ and ‘ want ’ as the result of a control structure .
the srilm toolkit was used to build this language model .
the srilm toolkit was used to build the 5-gram language model .
we consider the task of automatic extraction of semantic frames .
we aim to extract frame-semantic structures from text .
stance detection is the task of estimating whether the attitude expressed in a text towards a given topic is ‘ in favour ’ , ‘ against ’ , or ‘ neutral ’ .
stance detection is a difficult task since it often requires reasoning in order to determine whether an utterance is in favor of or against a specific issue .
dependency parsing is the task of building dependency links between words in a sentence , which has recently gained wide interest in the natural language processing community and has been used for many problems ranging from machine translation ( cite-p-12-1-4 ) to question answering ( zhou et al. , 2011a ) .
dependency parsing is a basic technology for processing japanese and has been the subject of much research .
pre-trained word embeddings provide a simple means to attain semi-supervised learning in natural language processing tasks .
word embeddings have recently gained popularity in the natural language processing community .
luong et al ( 2013 ) utilized recursive neural networks in which inputs are morphemes of words .
luong et al studied the problem of word representations for rare and complex words .
in addition to the attention model , we use byte pair encoding in the preprocessing step .
in order to reduce the vocabulary size , we apply byte pair encoding .
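A minimal sketch of learning and applying byte pair encoding with the subword-nmt package (an assumption; the papers do not name their implementation). File names and the merge count are placeholders.

```python
from subword_nmt.learn_bpe import learn_bpe
from subword_nmt.apply_bpe import BPE

# Learn 10k merge operations from the training text.
with open("train.txt") as infile, open("codes.bpe", "w") as codes:
    learn_bpe(infile, codes, num_symbols=10000)

# Apply the learned merges; rare words split into subword units.
with open("codes.bpe") as codes:
    bpe = BPE(codes)
print(bpe.process_line("unconventional tokenization"))
```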
the promt smt system is based on the moses open-source toolkit .
the baseline system is a pbsmt engine built using moses with the default configuration .
callin et al designed a classifier based on a feed-forward neural network , which considered as features the preceding nouns and determiners along with their parts-of-speech .
alternatively , to avoid extracting features from an anaphora resolution system , callin et al developed a classifier based on a feed-forward neural network , which considered mainly the preceding nouns , determiners and their part-of-speech as features .
we use srilm to train a 5-gram language model on the xinhua portion of the english gigaword corpus 5th edition with modified kneser-ney discounting .
for the language model we use the corpus of 60,000 simple english wikipedia articles 3 and build a 3-gram language model with kneser-ney smoothing trained with srilm .
framenet is a comprehensive lexical database that lists descriptions of words in the frame-semantic paradigm .
framenet is a semantic resource which provides over 1200 semantic frames that comprise words with similar semantic behaviour .
event extraction is a challenging task , which aims to discover event triggers in a sentence and classify them by type .
event extraction is the task of detecting certain specified types of events that are mentioned in the source language data .
open information extraction systems aim to extract tuples consisting of relation phrases and their multiple associated argument phrases from an input sentence .
open information extraction systems aim at extracting textual triples of the form noun phrase-predicate-noun phrase .
global vectors for word representation is a global log-bilinear regression model which captures both global and local word co-occurrence statistics .
glove 15 is a global log-bilinear regression model for word embedding generation , which trains only on the nonzero elements in a co-occurrence matrix .
in our service architecture , our system is highly modular and can be easily extended .
all components of our system are highly modular which allows it to be easily extended with additional functionality .
table 2 displays the quality of the automatic translations generated for the test partitions .
table 2 shows the translation quality measured in terms of bleu metric with the original and universal tagset .
performance of semantic parsing can be potentially improved by using discriminative reranking , which explores arbitrary global features .
the performance of semantic parsing can be potentially improved by using discriminative reranking , which explores arbitrary global features .
li et al presented a structured perceptron model to detect triggers and arguments jointly .
li et al presented a joint framework for ace event extraction based on structured perceptron with beam search .
coreference resolution is the task of partitioning a set of entity mentions in a text , where each partition corresponds to some entity in an underlying discourse model .
coreference resolution is the task of determining when two textual mentions name the same individual .
moro et al propose a graphbased approach which uses wikipedia and wordnet as lexical resources .
moro et al proposed another graph-based approach which uses wikipedia and wordnet in multiple languages as lexical resources .
the representations of words are pre-trained by glove , and all these embeddings are fine-tuned in the training process .
the 50-dimensional pre-trained word embeddings are provided by glove , which are fixed during our model training .
we perform pre-training using the skipgram nn architecture available in the word2vec tool .
we learn our word embeddings by using word2vec 3 on unlabeled review data .
to train the link embeddings , we use the speedy , skip-gram neural language model of mikolov et al via their toolkit word2vec .
regarding word embeddings , we use the ones trained by baziotis et al using word2vec and 550 million tweets .
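The four sentences above all pre-train embeddings with word2vec's skip-gram model. A sketch using gensim's reimplementation (the papers use Mikolov et al.'s original C toolkit); corpus, dimensionality, and window are toy values.

```python
from gensim.models import Word2Vec

sentences = [["the", "service", "was", "great"],
             ["terrible", "battery", "life"]]   # toy unlabeled review data
model = Word2Vec(sentences, vector_size=300, window=5,
                 min_count=1, sg=1)             # sg=1 selects skip-gram
print(model.wv["service"][:5])                  # first 5 dimensions
```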
our prototype system uses the stanford parser .
we implement some of these features using the stanford parser .
we report both unlabeled attachment score and labeled attachment score .
we report both unlabeled attachment scores and labeled attachment scores , ignoring punctuations .
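A possible scoring function for the attachment scores mentioned above, with the punctuation filter from the second variant; the (head, label, is_punct) input format is an assumption.

```python
def attachment_scores(gold, pred, ignore_punct=True):
    """Return (UAS, LAS) over parallel token lists."""
    total = uas = las = 0
    for (g_head, g_label, is_punct), (p_head, p_label, _) in zip(gold, pred):
        if ignore_punct and is_punct:
            continue                      # skip punctuation tokens
        total += 1
        if g_head == p_head:
            uas += 1                      # correct head -> unlabeled hit
            if g_label == p_label:
                las += 1                  # correct head and label
    return uas / total, las / total

gold = [(2, "nsubj", False), (0, "root", False), (2, "punct", True)]
pred = [(2, "nsubj", False), (0, "root", False), (1, "punct", True)]
print(attachment_scores(gold, pred))      # (1.0, 1.0): punct is ignored
```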
for all machine learning results , we train a logistic regression classifier implemented in scikitlearn with l2 regularization and the liblinear solver .
for training the prediction model for good versus bad answers , we used an svm with a linear kernel as implemented in liblinear .
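A sketch of the two classifiers named above in scikit-learn: L2-regularized logistic regression with the liblinear solver, and a linear-kernel SVM (LinearSVC is itself backed by LIBLINEAR). The features and labels are toy values.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

X = [[0.0, 1.0], [1.0, 0.0], [0.9, 0.1], [0.1, 0.8]]  # toy features
y = [0, 1, 1, 0]                        # e.g., bad vs. good answers

logreg = LogisticRegression(penalty="l2", solver="liblinear").fit(X, y)
svm = LinearSVC().fit(X, y)
print(logreg.predict(X), svm.predict(X))
```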
to address the limitations of traditional recurrent neural networks , hochreiter and schmidhuber proposed the lstm architecture .
hochreiter and schmidhuber proposed long short-term memories as the specific version of rnn designed to overcome vanishing and exploding gradient problem .
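A minimal PyTorch sketch of the LSTM described above; sizes are arbitrary. The gated cell state is what lets gradients flow over long spans, mitigating the vanishing/exploding gradient problem of plain RNNs.

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=50, hidden_size=100, batch_first=True)
x = torch.randn(8, 20, 50)        # batch of 8 sequences, 20 time steps
output, (h_n, c_n) = lstm(x)      # c_n is the gated long-term cell state
print(output.shape)               # torch.Size([8, 20, 100])
```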
we parsed all source side sentences using the stanford dependency parser and trained the preordering system on the entire bitext .
we pre-processed the data to add part-ofspeech tags and dependencies between words using the stanford parser .
our model achieves similar or better performance across datasets and meaning representations .
our model learns from natural language descriptions paired with meaning representations .
we pre-train the word embeddings using word2vec .
we use the skipgram model to learn word embeddings .
in this paper , we address the problem of token- and sentence-level dialect identification in arabic .
in this paper , we present a hybrid approach for performing token- and sentence-level dialect identification in arabic .
for our experiments we used the j48 decision trees in weka .
we used supervised learning classifiers from weka .
brown et al described a method based on the number of words contained in sentences .
the method used by brown et al measures sentence length in number of words .
while together they are shaped by evolving social norms , we perform personalized sentiment classification via shared model adaptation .
in the context of content-based sentiment classification , we interpret social norms as global model sharing and adaptation across users .
we measure the translation quality using a single reference bleu .
we evaluate text generated from gold mr graphs using the well-known bleu measure .
probabilistic context-free grammars underlie most high-performance parsers in one way or another .
at present , most high-performance parsers are based on probabilistic context-free grammars in one way or another .
semantic parsing is a fundamental technique of natural language understanding , and has been used in many applications , such as question answering ( cite-p-18-3-13 , cite-p-18-3-4 , cite-p-18-5-16 ) and information extraction ( cite-p-18-3-7 , cite-p-18-1-11 , cite-p-18-3-16 ) .
semantic parsing is the task of converting a sentence into a representation of its meaning , usually in a logical form grounded in the symbols of some fixed ontology or relational database ( cite-p-21-3-3 , cite-p-21-3-4 , cite-p-21-1-11 ) .
the model parameters are trained using minimum error-rate training .
the component features are weighted to minimize a translation error criterion on a development set .
to train our reranking models we used svm-light-tk 7 , which encodes structural kernels in the svm-light solver .
to train our models , we adopted svm-light-tk 5 , which enables the use of the partial tree kernel in svm-light , with default parameters .
word sense disambiguation ( wsd ) is the problem of assigning a sense to an ambiguous word , using its context .
word sense disambiguation ( wsd ) is a difficult natural language processing task which requires that for every content word ( noun , adjective , verb or adverb ) the appropriate meaning is automatically selected from the available sense inventory 1 .
the word embedding is pre-trained using the skip-gram model in word2vec and fine-tuned during the learning process .
the pre-trained word embeddings were learned with the word2vec toolkit on a domain corpus which consists of about 490,000 student essays .
while there is a large body of work on bilingual comparable corpora , most of it is focused on learning word translations .
much of the work involving comparable corpora has focused on extracting word translations .
metaphorical uses of words tend to convey more emotion than their literal paraphrases in the same context .
hypothesis 1 : metaphorical uses of words tend to convey more emotion than their literal paraphrases in the same context .
we used the sri language modeling toolkit to train a five-gram model with modified kneser-ney smoothing .
we also use a 4-gram language model trained using srilm with kneser-ney smoothing .
we use pre-trained 50 dimensional glove vectors 4 for word embeddings initialization .
we use the glove word vector representations of dimension 300 .
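A sketch of loading the published pre-trained GloVe vectors from their plain-text format; the file path and 50-dimensional variant are assumptions matching the first sentence above.

```python
import numpy as np

embeddings = {}
with open("glove.6B.50d.txt", encoding="utf-8") as f:
    for line in f:                        # "word v1 v2 ... v50" per line
        word, *values = line.rstrip().split(" ")
        embeddings[word] = np.asarray(values, dtype=np.float32)

print(embeddings["the"].shape)            # (50,)
```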
we train our crf models by maximizing conditional log-likelihood using stochastic gradient descent with an adaptive learning rate over mini-batches .
we train our neural model with stochastic gradient descent and use adagrad to update the parameters .
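A hedged sketch of mini-batch training with AdaGrad's per-parameter adaptive learning rate, in PyTorch; the model, data, and learning rate are placeholders.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                  # stand-in for the real model
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()           # negative conditional log-likelihood

for _ in range(100):                      # mini-batch loop
    x = torch.randn(32, 10)               # toy mini-batch
    y = torch.randint(0, 2, (32,))
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()                      # adaptive per-parameter update
```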
the berkeley parser is an efficient and effective parser that introduces latent annotations to learn highly accurate context-free grammars directly from a treebank .
the berkeley parser is an efficient and effective parser that introduces latent annotations to refine syntactic categories to learn better pcfg grammars .
twitter is a communication platform which combines sms , instant messages and social networks .
twitter is a popular microblogging service which provides real-time information on events happening across the world .
as a part of our research , we had collected 12,000 news reports from five different international news sources over a period of ten years , to study systematic differences in news coverage .
as a part of our research , we had collected 12,000 news reports from five different international news sources over a period of ten years , to study systematic differences in news coverage on the rise of china , between western and chinese media .
in the initial stage , we train linear projection models on positive and negative training data separately .
in the initial stage , we train linear projection models on positive and negative training data separately and predict is-a relations jointly .
in our work , we adopt a knowledge-based word similarity method with wsd to measure the semantic similarity between two sentences .
therefore , in our sts system , we use a knowledge-based method to compute word similarity .
similarly , bastings et al used a graph convolutional encoder in combination with an rnn decoder to translate from dependency-parsed source sentences .
bastings et al showed that incorporating syntactic structure such as dependency tree using graph convolutional encoders was beneficial for neural machine translation .
we computed the translation accuracies using two metrics , bleu score and lexical accuracy , on a test set of 30 sentences .
to measure translation accuracy , we use the automatic evaluation measures of bleu and ribes measured over all sentences in the test corpus .
to assign pos tags for the unlabeled data , we used the package tnt to train a pos tagger on training data .
as for pos tags , we discarded the original pos tags and assigned ctb style pos tags using a tnt-based tagger trained on the training data .
relation extraction is the key component for building relation knowledge graphs , and it is of crucial significance to natural language processing applications such as structured search , sentiment analysis , question answering , and summarization .
relation extraction ( re ) is the task of determining semantic relations between entities mentioned in text .
in this paper , we present an unsupervised methodology for propagating lexical co-occurrence vectors into an ontology .
we presented a framework for inducing ontological feature vectors from lexical co-occurrence vectors .
such bilingual word-based n-gram models were initially described in and extended in .
such bilingual word-based n-gram models were initially described in .
we used babeldomains to cluster training data by domain prior to applying supervised hypernym discovery .
finally , we show the potential of babeldomains in a supervised learning setting , clustering training data by domain for hypernym discovery .
we use the l2-regularized logistic regression of liblinear as our term candidate classifier .
as our machine learning component we use liblinear with an l2-regularised l2-loss svm model .
we use the universal pos tagset proposed by petrov et al which has 12 pos tags that are applicable to both en and hi .
in the pos tag level , we basically used the universal tag-set proposed by petrov et al in mapping original tags into universal ones .
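A sketch of mapping fine-grained tags onto the 12-tag universal tagset of Petrov et al.; only a few Penn Treebank entries are shown, and the fallback tag is an assumption.

```python
PTB_TO_UNIVERSAL = {
    "NN": "NOUN", "NNS": "NOUN", "NNP": "NOUN",
    "VB": "VERB", "VBD": "VERB", "VBZ": "VERB",
    "JJ": "ADJ", "RB": "ADV", "DT": "DET", "IN": "ADP",
}

def to_universal(tags):
    # Unknown fine-grained tags fall back to the universal "X" class.
    return [PTB_TO_UNIVERSAL.get(t, "X") for t in tags]

print(to_universal(["DT", "NN", "VBZ"]))  # ['DET', 'NOUN', 'VERB']
```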
we used the pre-trained word embeddings that were learned using the word2vec toolkit on google news dataset .
we use the word2vec vectors with 300 dimensions , pre-trained on 100 billion words of google news .
approaches to solving nlp problems have always benefited from having large amounts of data .
many nlp problems have benefited from having large amounts of data .
the semeval-2007 task 4 focused on relations between nominals .
the semeval-2007 task 04 aimed at relations between nominals .
we use the maximum entropy model for our classification task .
as a learning algorithm for our classification model , we used maximum entropy .
stroppa et al add source-side contextual features into a phrase based smt system by integrating context dependent phrasal translation probabilities learned using a decision-tree classifier .
stroppa et al added source-side context features to a phrase-based translation system , including conditional probabilities of the same form that we use .
we use minimum error rate training with nbest list size 100 to optimize the feature weights for maximum development bleu .
then we use the standard minimum error-rate training to tune the feature weights to maximize the system's bleu score .
we trained a tri-gram hindi word language model with the srilm tool .
we trained a 3-gram language model on the spanish side using srilm .
following their settings , we trained a 5-gram language model using the kenlm toolkit 3 with modified kneser-ney smoothing on the two billion word ukwac english web corpus .
we compute the probability of a word fitting the gap using an n-gram language model trained over the two billion word ukwac english web corpus .
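A sketch of scoring gap candidates with a large n-gram model via KenLM's Python bindings (an assumption; the first sentence above uses the KenLM toolkit directly). The model path is hypothetical.

```python
import kenlm

lm = kenlm.Model("ukwac.5gram.binary")    # hypothetical pre-built model
for candidate in ["strong tea", "powerful tea"]:
    # score() returns log10 probability; higher fits the gap better.
    print(candidate, lm.score(candidate, bos=True, eos=True))
```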
for training the model , we use the linear kernel svm implemented in the scikit-learn toolkit .
for our logistic regression classifier we use the implementation included in the scikit-learn toolkit 2 .
with the hr algebra , we are able to leverage the availability of syntactic parsers for ccg .
the hr algebra provides the building blocks for the manipulation of s-graphs .
in contrast , human feedback has relatively small impact on precision and recall .
in contrast , human feedback has a positive and statistically significant , but lower , impact on precision and recall .
four training and testing corpora were used in the first bakeoff , including the academia sinica corpus , the penn chinese treebank corpus , the hong kong city university corpus and the peking university corpus .
four training and testing corpora were used in the first bakeoff , including the academia sinica corpus , the penn chinese treebank corpus , the hong kong city university corpus , and the peking university corpus .
the bleu metric is deeply rooted in the machine translation community and is used in virtually every paper on machine translation methods .
the bleu metric has been widely accepted as an effective means to automatically evaluate the quality of machine translation outputs .
the word embedding vectors are generated from word2vec over the 5th edition of the gigaword corpus .
the word vectors used in all approaches are taken from the word2vec google news model .
we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting .
we implemented this model using the srilm toolkit with the modified kneser-ney discounting and interpolation options .
thus , we train a 4-gram language model with kneser-ney smoothing using the sri toolkit and interpolate it with the best rnnlms by different weights .
our translation model is implemented as an n-gram model of operations using srilm-toolkit with kneser-ney smoothing .
with the increasing amount of online literature in the biomedical domain , research can be greatly accelerated .
as the amount of online scientific literature in the biomedical domain increases , automatic processing has become a promising approach for accelerating research .
we used the disambig tool provided by the srilm toolkit .
we trained a 5-gram language model with the srilm toolkit .
the hierarchical phrase-based model is capable of capturing rich translation knowledge with the synchronous context-free grammar .
a hierarchical phrase-based model is a powerful method to cover any format of translation pairs by using synchronous context-free grammar .
parameter tying in the hmm forces all clusters but one to represent static hold , with the remaining cluster accounting for the transition movements between holds .
in addition , parameter tying in the hmm forces all clusters but one to represent static hold , with the remaining cluster accounting for the transition movements between holds .
relation extraction is a well-studied problem ( cite-p-12-1-6 , cite-p-12-3-7 , cite-p-12-1-5 , cite-p-12-1-7 ) .
relation extraction is the task of finding relations between entities in text , which is useful for several tasks such as information extraction , summarization , and question answering ( cite-p-14-3-7 ) .
katz and giesbrecht use meaning vectors for literal and non-literal expression classification .
katz and giesbrecht applied latent semantic analysis vectors to distinguish compositional from non-compositional uses of german expressions .
in this paper , we evaluated the relevance of information about aspectual type .
our goal with this work is to evaluate the impact of information about aspectual type on these tasks .
we then apply a max-over-time pooling operation over the feature map .
on the resulting c , we apply max pooling and take the maximum feature as the representative one .
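A minimal PyTorch sketch of max-over-time pooling on a convolutional feature map c; shapes are arbitrary.

```python
import torch

c = torch.randn(8, 100, 37)       # (batch, filters, time) feature map
pooled, _ = c.max(dim=2)          # keep each filter's maximum over time
print(pooled.shape)               # torch.Size([8, 100])
```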
however , most current models of machine translation do not account for this variation .
neural machine translation ( nmt ) models are incapable of capturing this variation , however .
we used a caseless parsing model of the stanford parser for a dependency representation of the messages .
we used stanford corenlp to generate dependencies for the english data .
we used 5-gram models , estimated using the sri language modeling toolkit with modified kneser-ney smoothing .
we train a 4-gram language model on the xinhua portion of the english gigaword corpus using the srilm toolkits with modified kneser-ney smoothing .
we apply the stanford dependency parser to the probable event sentences to identify verb phrase candidates and to enforce syntactic constraints between the different types of event information .
we compute the syntactic features only for pairs of event mentions from the same sentence , using the stanford dependency parser .
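The sentences above rely on the Stanford dependency parser; a rough modern stand-in is Stanza, from the same group, sketched here (model download on first use is required).

```python
import stanza

# stanza.download("en")  # one-time model download
nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse")
doc = nlp("police want to arrest the suspect")
for word in doc.sentences[0].words:
    print(word.text, word.head, word.deprel)  # token, head index, relation
```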
tang et al proposed target-dependent lstm to capture the aspect information when modeling sentences .
tang et al proposed td-lstm and tc-lstm , where target information is automatically taken into account .
we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting .
we trained a 5-gram language model on the xinhua portion of gigaword corpus using the srilm toolkit .