sentence1
stringlengths
16
446
sentence2
stringlengths
14
436
we use srilm to train a 5-gram language model on the target side of our training corpus with modified kneser-ney discounting .
we use srilm for training a trigram language model on the english side of the training corpus .
we have used data-driven exhaustive search within the brown corpus for this purpose .
we use the wsj corpus , a pos annotated corpus , for this purpose .
while together they are shaped by evolving social norms , we perform personalized sentiment classification via shared model adaptation .
to address these challenges , we propose to build personalized sentiment classification models via shared model adaptation .
choi and cardie address a sentiment analysis task by using a heuristic decision process based on wordlevel intermediate variables to represent polarity .
differ-ent from these rule-based methods , choi and cardie use a structured linear model to learn semantic compositionality relying on a set of manual features .
we use the moses translation system , and we evaluate the quality of the automatically produced translations by using the bleu evaluation tool .
for generating the translations from english into german , we used the statistical translation toolkit moses .
we use bleu scores as the performance measure in our evaluation .
we used bleu as our evaluation criteria and the bootstrapping method for significance testing .
bleu is a system for automatic evaluation of machine translation .
the bleu is a classical automatic evaluation method for the translation quality of an mt system .
like pavlopoulos et al , we initialize the word embeddings to glove vectors .
unlike dong et al , we initialize our word embeddings using a concatenation of the glove and cove embeddings .
although coreference resolution is a subproblem of natural language understanding , coreference resolution evaluation metrics have predominately been discussed in terms of abstract entities and hypothetical system errors .
coreference resolution is the task of clustering referring expressions in a text so that each resulting cluster represents an entity .
relation extraction is a fundamental step in many natural language processing applications such as learning ontologies from texts ( cite-p-12-1-0 ) and question answering ( cite-p-12-3-6 ) .
relation extraction is a well-studied problem ( cite-p-12-1-6 , cite-p-12-3-7 , cite-p-12-1-5 , cite-p-12-1-7 ) .
this paper presents a methodology to automatically extract and score positive interpretations from negated statements , as intuitively done by humans .
this paper presents an automated methodology to generate plausible positive interpretations from verbal negation , and score them based on their likelihood .
in which the geolinguistic dependence is obscured by noise , this can dramatically diminish the power of the test .
when the assumptions of these models are violated , the power to detect significant geolinguistic associations is diminished .
the environment of an agent is one or more other agents that continuously change their behavior because they are also learning .
in this case the environment of a learning agent is one or more other agents that can also be learning at the same time .
a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke .
our trigram word language model was trained on the target side of the training corpus using the srilm toolkit with modified kneser-ney smoothing .
sentiment analysis is a multi-faceted problem .
sentiment analysis is a research area in the field of natural language processing .
in which the geolinguistic dependence is obscured by noise , this can dramatically diminish the power of the test .
in realistic settings in which the geolinguistic dependence is obscured by noise , this can dramatically diminish the power of the test .
although coreference resolution is a subproblem of natural language understanding , coreference resolution evaluation metrics have predominately been discussed in terms of abstract entities and hypothetical system errors .
coreference resolution is the task of clustering a set of mentions in the text such that all mentions in the same cluster refer to the same entity .
since the information available in each pair is extremely limited we infuse contextual information by drawing on wordnet .
to capture interesting word pairs , we sample different senses of words using wordnet .
derivatives are computed via backpropagation through structure .
derivatives are computed efficiently via backpropagation through structure .
semantic role labeling ( srl ) is a form of shallow semantic parsing whose goal is to discover the predicate-argument structure of each predicate in a given input sentence .
semantic role labeling ( srl ) is the task of automatically annotating the predicate-argument structure in a sentence with semantic roles .
sato et al used decision trees to classify pauses longer than 750 ms as gap or pause .
sato et al used decision trees to determine whether the system should take the turn or not when the user pauses .
word sense disambiguation ( wsd ) is a problem long recognised in computational linguistics ( yngve 1955 ) and there has been a recent resurgence of interest , including a special issue of this journal devoted to the topic ( cite-p-27-8-11 ) .
word sense disambiguation ( wsd ) is a key enabling technology that automatically chooses the intended sense of a word in context .
we propose a hybrid model where a seq2seq model and a similarity-based retrieval model are combined to achieve further performance improvement .
in addition , we have designed a hybrid model which combines the seq2seq model and a retrieval model to further improve performance .
for the evaluation , we used bleu , which is widely used for machine translation .
we evaluated the translation quality using the bleu-4 metric .
global vectors for word representation is a global log-bilinear regression model which captures both global and local word co-occurrence statistics .
it is a global log-linear regression model that makes use of a global factorization model and local context window methods to represent words in a global vector space model .
in parallel to our work , cheng et al propose a similar semi-supervised framework to handle both source and target language monolingual data .
note that , in parallel to our efforts , cheng et al have explored the usage of both source and target monolingual data using a similar semi-supervised reconstruction method , in which two nmts are employed .
in an early effort , cite-p-15-3-5 developed an unscoped logical form where the above sentence is represented ( roughly ) .
in an early effort , cite-p-15-3-5 developed an unscoped logical form where the above sentence is represented ( roughly ) as the formula :
in this work , we employ the toolkit word2vec to pre-train the word embedding for the source and target languages .
we preinitialize the word embeddings by running the word2vec tool on the english wikipedia dump .
we will make use of pennconverter ( cite-p-12-1-11 ) .
as our baseline parser , we use maltparser ( cite-p-12-3-5 ) .
in particular , abstract meaning representation , is a novel representation of semantics .
in particular , abstract meaning representation has gained interest from the research community .
in which a word-based alignment model is used for lexical learning , and the parsing model itself can be seen as a syntax-based translation model .
a word alignment model is used for lexical acquisition , and the parsing model itself can be seen as a syntax-based translation model .
the smt system was tuned on the development set newstest10 with minimum error rate training using the bleu error rate measure as the optimization criterion .
the smt systems are tuned on the dev development set with minimum error rate training using bleu accuracy measure as the optimization criterion .
for example , collobert et al effectively used a multilayer neural network for chunking , part-ofspeech tagging , ner and semantic role labelling .
in particular , collobert et al and turian et al learn word embeddings to improve the performance of in-domain pos tagging , named entity recognition , chunking and semantic role labelling .
given limited data sampling , a language model estimation sometimes encounters with the zero count problem : the maximum likelihood .
given limited text data sampling , a language model estimation usually encounters with zero count problem when facing with data sparsity , which is not reliable .
for word and phrase pairs , sin is powerful and flexible in capturing sentence interactions for different tasks .
sin is powerful and flexible to model sentence interactions for different tasks .
we apply our system to the latest version of the xtag english grammar , which is a large-scale fb-ltag grammar .
we applied our system to the xtag english grammar 3 , which is a large-scale fb-ltag grammar for english .
sarcasm is a sophisticated form of communication in which speakers convey their message in an indirect way .
sarcasm is a form of verbal irony that is intended to express contempt or ridicule .
transliteration mining ( tm ) is the process of finding transliterations in parallel or comparable texts of different languages .
transliteration mining ( tm ) is the process of finding transliterated word pairs in parallel or comparable corpora .
the language model is a large interpolated 5-gram lm with modified kneser-ney smoothing .
the srilm toolkit was used for training the language models using kneser-ney smoothing .
we use the 300-dimensional skip-gram word embeddings built on the google-news corpus .
we use 300 dimension word2vec word embeddings for the experiments .
we train the twitter sentiment classifier on the benchmark dataset in semeval 2013 .
we conduct experiments on the benchmark twitter sentiment classification dataset from semeval 2013 .
recent studies show that character sequence labeling is an effective method of chinese word segmentation for machine learning .
recent studies show that character sequence labeling is an effective formulation of chinese word segmentation .
into the tool , and these words are used to locate information relevant to the input text .
the key words are used to retrieve information relevant to the input texts .
an idiom is a relatively frozen expression whose meaning can not be built compositionally from the meanings of its component words .
an idiom is a phrase whose meaning can not be obtained compositionally , i.e. , by combining the meanings of the words that compose it .
for a geometric interpretation , consider the paths of math-w-8-3-1-9 , math-w-8-3-1-11 and math-w-8-3-1-13 , leading from point math-w-8-3-1-18 .
to see this , consider math-w-3-3-5-81 and math-w-3-3-5-85 , and write it as math-w-3-3-5-100 for some math-w-3-3-5-105 and math-w-3-3-5-110 .
in the translation tasks , we used the moses phrase-based smt systems .
we used the phrase-based smt in moses 5 for the translation experiments .
we use several classifiers including logistic regression , random forest and adaboost implemented in scikit-learn .
we use the selectfrommodel 4 feature selection method as implemented in scikit-learn .
in our implementation , we train a tri-gram language model on each phone set using the srilm toolkit .
we used kenlm with srilm to train a 5-gram language model based on all available target language training data .
in this paper , we use this intuition to define a joint inference model that captures the interdependencies between verb .
in this paper , we use this idea to combine classifiers that were trained for two different tasks on different datasets using constraints to encode linguistic knowledge .
we follow the hyper-parameters settings by lample et al for this evaluation .
we use the same network parameters as lample et al except the two parameters introduced by our system .
blitzer et al proposed a structural correspondence learning method for domain adaptation and applied it to part-of-speech tagging .
in the semi-supervised setting , blitzer et al use structural correspondence learning and unlabeled data to adapt a part-of-speech tagger .
we learn a classifier for each of the three feature subspaces .
one such classifier is trained for each of our three overlapping feature subspaces .
relation extraction is the key component for building relation knowledge graphs , and it is of crucial significance to natural language processing applications such as structured search , sentiment analysis , question answering , and summarization .
relation extraction is the task of recognizing and extracting relations between entities or concepts in texts .
clarke and lapata used integer linear programming to infer globally optimal compression with linguistically motivated constraints .
clarke and lapata presented an unsupervised method that finds the best compression using integer linear programming .
we used the moses decoder , with default settings , to obtain the translations .
we used the moses toolkit for performing statistical machine translation .
experimental results demonstrate that our approach consistently outperforms the existing baseline methods .
the empirical evaluation demonstrates that our approach significantly outperforms baseline methods .
we train skip-gram word embeddings with the word2vec toolkit 1 on a large amount of twitter text data .
we use word2vec 1 toolkit to pre-train the character embeddings on the chinese wikipedia corpus .
relation extraction is a key step towards question answering systems by which vital structured data is acquired from underlying free text resources .
relation extraction is a fundamental task that enables a wide range of semantic applications from question answering ( cite-p-13-3-12 ) to fact checking ( cite-p-13-3-10 ) .
we analyzed these features on the dataset created by pitler and nenkova which associates human readability ratings with each document .
in our feature set , we included linguistic features introduced by pitler and nenkova and partially overlapping with those used in cohmetrix for predicting text quality .
in tables 1 and 2 , we compare our results with those obtained by ( cite-p-16-1-11 ) .
in tables 1 and 2 , we compare our results with those obtained by ( cite-p-16-1-11 ) on different models .
li et al jointly models chinese pos tagging and dependency parsing , and report the best tagging accuracy on ctb .
li et al report the state-of-theart accuracy on this ctb data , with a joint model of chinese pos tagging and dependency parsing .
with this change only , grasp was able to identify patterns for this new task , that were used to indicate the boundaries of a claim .
with this change only , grasp was able to identify patterns for this new task , that were used to indicate the boundaries of a claim with promising preliminary results .
collobert et al and zhou and xu worked on the english constituent-based srl task using neural networks .
collobert et al used word embeddings as input to a deep neural network for multi-task learning .
to leverage more massive web text with natural annotations , and further extend the strategy to other nlp problems .
this strategy makes the usage of natural annotations simple and universal , which facilitates the utilization of massive web text and the extension to other nlp problems .
pun generation is an interesting and challenging text generation task .
the inherent property of humor makes the pun generation task more challenging .
relevance feedback has been shown to improve retrieval .
relevance feedback is proven to significantly improve retrieval performance .
we used the pre-trained google embedding to initialize the word embedding matrix .
we use the 300-dimensional skip-gram word embeddings built on the google-news corpus .
a variety of log-linear models have been proposed to incorporate these features .
log linear models have been proposed to incorporate those features .
traditional topic models such as lda and plsa are unsupervised methods for extracting latent topics in text documents .
topic models such as lda and psla and their extensions have been popularly used to find topics in text documents .
in this approach , words are mapped into a continuous latent space using two embedding methods word2vec and glove .
this approach relies on word embeddings for the computation of semantic relatedness with word2vec .
although this work represents the first formal study of relationship questions that we are aware of , by no means are we claiming a solution — .
although this work represents the first formal study of relationship questions that we are aware of , by no means are we claiming a solution—we see this as merely the first step in addressing a complex problem .
relation extraction is the problem of populating a target relation ( representing an entity-level relationship or attribute ) with facts extracted from natural-language text .
relation extraction is the task of finding relations between entities in text , which is useful for several tasks such as information extraction , summarization , and question answering ( cite-p-14-3-7 ) .
in this paper , we present the lth coreference solver used in the closed track of the conll 2012 shared task .
second , we evaluate on the ontonotes 5 corpus as used in the conll 2012 coreference shared task .
our models improve crf , especially when small data sets are used .
our reranking model also improves fst and crf on media when small data sets are used .
buckels et al found commenting frequency to be positively associated with trolling enjoyment and cheng et al suggested that frequently active users are often associated with anti-social behaviour online .
buckels et al studied the characteristic traits of internet trolls by looking at commenting styles and personality inventories , and found strong positive relations among commenting frequency , trolling enjoyment and trolling behaviour and identity .
so the topic coherence metric is utilized to assess topic quality , which is consistent with human labeling .
in order to objectively measure the quality of aspects , we use coherence score as a metric which has been shown to correlate well with human judgment .
englishto-japanese dataset demonstrate that our proposed model considerably outperforms sequenceto-sequence attentional nmt models .
experimental results on the wat ’ 15 englishto-japanese translation dataset demonstrate that our proposed model achieves the best ribes score and outperforms the sequential attentional nmt model .
the itp nlu module parses one sentence , and maps its parse tree onto a discourse representation structure .
the formal semantic component of the system translates the disambiguated parse into a discourse representation structure .
word segmentation is the first step of natural language processing for japanese , chinese and thai because they do not delimit words by whitespace .
word segmentation is a classic bootstrapping problem : to learn words , infants must segment the input , because around 90 % of the novel word types they hear are never uttered in isolation ( cite-p-13-1-0 , cite-p-13-3-8 ) .
they generalize string transducers to the tree case and are defined in more detail in .
they generalize string transdu ers to the tree case and are defined in more detail .
a ∗ algorithm is 5 times faster than cky parsing , with no loss in accuracy .
our a ∗ algorithm is 5 times faster than cky parsing , with no loss in accuracy .
chen introduced a joint maximum n-gram model with syllabification for grapheme-to-phoneme conversion .
chen , 2003 ) introduced a conditional maximum entropy model with syllabification for grapheme-to-phoneme conversion .
we use different pretrained word embeddings such as glove 1 and fasttext 2 as the initial word embeddings .
for the actioneffect embedding model , we use pre-trained glove word embeddings as input to the lstm .
framenet is a comprehensive lexical database that lists descriptions of words in the frame-semantic paradigm .
framenet is a knowledgebase of frames , describing prototypical situations .
mikolov et al introduce a translation matrix for aligning embeddings spaces in different languages and show how this is useful for machine translation purposes .
mikolov et al proposed a method to use distributed representation of words and learns a linear mapping between vector space of different languages .
in this paper , we propose a fast and effective neural network for acsa and atsa based on convolutions and gating mechanisms .
in this paper , we proposed an efficient convolutional neural network with gating mechanisms for acsa and atsa tasks .
relation extraction is the task of detecting and classifying relationships between two entities from text .
relation extraction ( re ) has been defined as the task of identifying a given set of semantic binary relations in text .
we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing .
we use a fourgram language model with modified kneser-ney smoothing as implemented in the srilm toolkit .
in this work , we adopt the lexicon from bing liu which includes about 2000 positive words and 4700 negative words .
we use the lexicon created by hu and liu , which consists of 2,006 positive words and 4,783 negative words .
early topic models such as lda were typically evaluated using heldout likelihood or perplexity .
topic models are often evaluated quantitatively using perplexity and likelihood on held-out test data .
a shallow or partial parser , in the sense of , is also implemented and always activated before the complete parse takes place , in order to produce the default baseline output to be used by further computation in case of total failure .
a shallow or partial parser , in the sense of is also implemented and always activated before the complete parse takes place , in order to produce the default baseline output to be used by further computation in case of total failure .
and titov et al individually introduced two transition systems that can generate specific graphs rather than trees .
and titov et al individually studied two transition systems that can generate more general graphs rather than trees .
on a set of eight different languages , our method yields substantial accuracy gains over a traditional mdl-based approach in the task of nominal morphological induction .
we apply our model to eight inflecting languages , and induce nominal morphology with substantially higher accuracy than a traditional , mdlbased approach .
we downloaded glove data as the source of pre-trained word embeddings .
we used pre-trained word vectors of glove , trained on 2 billion words from twitter for english .
to obtain their corresponding weights , we adapted the minimum-error-rate training algorithm to train the outside-layer model .
the weights associated to feature functions are optimally combined using the minimum error rate training .
a ranks 3 strings relative to one another , while experiment b measures the naturalness of the string .
experiment a ranks 3 strings relative to one another , while experiment b measures the naturalness of the string .
which requires very limited human involvement .
existing approaches to this task require substantial human effort .
we use word2vec technique to compute the vector representation of all the tags .
we use word2vec as the vector representation of the words in tweets .
in previous work , hatzivassiloglou and mckeown propose a method to identify the polarity of adjectives .
hatzivassiloglou and mckeown proposed a supervised algorithm to determine the semantic orientation of adjectives .
phrase frequencies are obtained by counting all possible occurrences .
phrase pairs are built by combining minimal translation units and ordering information .