in this paper , we proposed a new approach for analyzing the sentiment of figurative language .
therefore , the goal of our research is to find a new way to identify figurative meaning .
in the context of arabic dialect translation , sawaf built a hybrid mt system that uses both statistical and rule-based approaches for da-to-english mt .
in the context of da translation , sawaf introduced a hybrid mt system that uses statistical and rule-based approaches for da-to-en mt .
for training the model , we use the linear kernel svm implemented in the scikit-learn toolkit .
we use the selectfrommodel feature selection method as implemented in scikit-learn .
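A minimal sketch of this kind of setup, assuming nothing about the papers' actual features: the synthetic data and hyperparameters below are placeholders, not the authors' values.

```python
# Linear-kernel SVM plus SelectFromModel feature selection in scikit-learn.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy data standing in for the papers' real feature matrices.
X, y = make_classification(n_samples=200, n_features=50, random_state=0)

# SelectFromModel keeps only features whose learned weights pass a threshold;
# an L1-penalized linear SVM is a common sparsity-inducing selector.
selector = SelectFromModel(LinearSVC(C=1.0, penalty="l1", dual=False))
clf = make_pipeline(selector, LinearSVC(C=1.0))

clf.fit(X, y)
predictions = clf.predict(X)
```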
hoffmann et al use a probabilistic graphical model for multi-instance , multi-label learning and extract relations from newswire text using freebase relations .
hoffmann et al present a multi-instance multi-label model for relation extraction through distant supervision .
we present novel evaluation paradigms for explanation methods for two classes of common nlp tasks ( see § 2 ) .
we have attempted to include all important local methods for nlp in our experiments ( see §3 ) .
corpus pattern analysis is concerned with the prototypical syntagmatic patterns with which words in use are associated .
corpus pattern analysis attempts to catalog norms of usage for individual words , specifying them in terms of context patterns .
we use the publicly available 300-dimensional word vectors of mikolov et al , trained on part of the google news dataset .
we use 300-dimensional vectors that were trained and provided by the word2vec tool using a part of the google news dataset .
for our purpose we use word2vec embeddings trained on a google news dataset and find the pairwise cosine distances for all words .
first , we train vector space representations of words using word2vec on chinese wikipedia .
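A minimal gensim sketch of loading pre-trained vectors of this kind and querying similarities; the file name is the standard name of the released Google News vectors, and the query words are placeholders.

```python
from gensim.models import KeyedVectors

# Load the released 300-dimensional Google News vectors (binary format).
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

# Cosine similarity between two words; cosine distance is 1 - similarity.
print(vectors.similarity("king", "queen"))
```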
a residual connection is employed around each of two sub-layers , followed by layer normalization .
a residual connection and a layer normalization are then applied to q .
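An illustrative PyTorch sketch of the residual-plus-layer-normalization pattern described above (the post-norm variant used in the original Transformer); the module and parameter names are placeholders.

```python
import torch.nn as nn

class SublayerConnection(nn.Module):
    """Wraps a sub-layer with a residual connection followed by layer norm."""
    def __init__(self, d_model, sublayer):
        super().__init__()
        self.sublayer = sublayer
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        # Residual connection around the sub-layer, then layer normalization.
        return self.norm(x + self.sublayer(x))

# Usage with an arbitrary sub-layer (here a plain linear map for illustration).
block = SublayerConnection(d_model=512, sublayer=nn.Linear(512, 512))
```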
to get a dictionary of word embeddings , we use the word2vec tool and train it on the chinese gigaword corpus .
to learn word embeddings from our unlabeled corpus , we use the gensim implementation of the word2vec algorithm .
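A minimal training sketch with the gensim word2vec implementation; the toy corpus and hyperparameters are illustrative, not the papers' settings.

```python
from gensim.models import Word2Vec

# Pre-tokenized text standing in for a real unlabeled corpus.
sentences = [["we", "train", "word", "embeddings"],
             ["from", "an", "unlabeled", "corpus"]]

# sg=1 selects the skip-gram architecture; sg=0 would select CBOW.
model = Word2Vec(sentences, vector_size=300, window=5, min_count=1, sg=1)
model.wv.save("embeddings.kv")
```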
the language models in this experiment were trigram models with good-turing smoothing built using srilm .
we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing .
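SRILM is a command-line toolkit; the sketch below is one hedged way to invoke it from Python, with placeholder paths. The -kndiscount -interpolate flags request modified Kneser-Ney smoothing; omitting them falls back to SRILM's Good-Turing default.

```python
import subprocess

subprocess.run([
    "ngram-count",
    "-order", "5",
    "-kndiscount", "-interpolate",  # modified Kneser-Ney smoothing
    "-text", "train.txt",           # training corpus, one sentence per line
    "-lm", "model.lm",              # output language model in ARPA format
], check=True)
```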
figure 1 : a parse tree based on the treebank parse of wsj .
figure 7 : a parse produced by the unrestricted semantic model .
xue et al proposed a translation-based language model for question retrieval .
xue et al proposed a word-based translation language model for question retrieval .
in this paper , we propose an unsupervised approach for automatically detecting discussant subgroups .
in this paper , we presented an approach for subgroup detection in ideological discussions .
the weights used during the reranking are tuned using the minimum error rate training algorithm .
their weights are optimized using minimum error-rate training on a held-out development set for each of the experiments .
we use the 300-dimensional skip-gram word embeddings built on the google-news corpus .
we use the skip-gram model , trained to predict context tags for each word .
in this paper , we propose to represent each word with an expressive multimodal distribution , for multiple distinct meanings .
in this paper , we propose , to the best of our knowledge , the first probabilistic word embedding that can capture multiple meanings .
agrawal and an proposed a context-based approach to detect emotions from text at sentence level .
agrawal and an ( 2012 ) proposed an unsupervised context-based approach to detect emotions from text at the sentence level .
szarvas et al produced the bioscope corpus , which consists of biomedical texts annotated with negation and uncertainty , and their scopes .
szarvas et al present the bioscope corpus , which consists of medical and biological texts annotated for negation and speculation together with their linguistic scope .
kalchbrenner et al showed that their dcnn for modeling sentences can achieve competitive results in this field .
kalchbrenner et al developed a cnn-based model that can be used for sentence modelling problems .
we demonstrate the degree to which mt system rankings are dependent on weights employed in the construction of the gold standard .
we demonstrated the degree to which mt system rankings are dependent on weights employed in the construction of the gold standard .
web-based models should therefore be used as a baseline for , rather than an alternative to , standard models .
rather , in our opinion , web-based models should be used as a new baseline for nlp tasks .
our intuition is that there is a significant correlation between the sentiment of spoken text and the emotion actually expressed by the person .
the intuition is the same under m 4 , but now each token in a message is given its own class assignment , according to a class distribution for that particular message .
we used the pre-trained word embeddings that are learned using the word2vec toolkit on google news dataset .
we used 300 dimensional skip-gram word embeddings pre-trained on pubmed .
a tri-gram language model is estimated using the srilm toolkit .
a standard sri 5-gram language model is estimated from monolingual data .
we trained the five classifiers using the svm implementation in scikit-learn .
we used the svm implementation provided within scikit-learn .
we tokenised and parsed the text to obtain dependency trees , using the stanford parser .
we used the stanford neural network parser to obtain dependency triples .
based on this , rocktäschel et al use the attention-based technique to improve the performance of an lstm-based recurrent neural network .
rocktäschel et al propose a neural network with an attention mechanism , making neural networks interpretable .
relation extraction is the task of predicting semantic relations over entities expressed in structured or semi-structured text .
relation extraction is a core task in information extraction and natural language understanding .
to minimize the objective , we use the diagonal variant of adagrad with minibatches .
we use the diagonal variant of adagrad with minibatches , which is widely applied in the deep learning literature .
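A numpy sketch of the diagonal AdaGrad update: each coordinate keeps a running sum of squared gradients and gets its own effective learning rate. The gradient below is a random stand-in; the learning rate and epsilon are conventional defaults, not the papers' values.

```python
import numpy as np

def adagrad_step(params, grad, accum, lr=0.01, eps=1e-8):
    accum += grad ** 2                            # per-coordinate accumulator
    params -= lr * grad / (np.sqrt(accum) + eps)  # scaled per-coordinate step
    return params, accum

# One minibatch step with a stand-in gradient.
params, accum = np.zeros(10), np.zeros(10)
params, accum = adagrad_step(params, np.random.randn(10), accum)
```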
yessenalina and cardie represent each word as a matrix and use iterated matrix multiplication as phrase-level composition function .
yessenalina and cardie modeled each word as a matrix and used iterated matrix multiplication to represent a phrase .
in this work , we proposed three new methods for training neural network language models and showed their efficiency both in terms of computational complexity and generalization performance .
in this work , we study the performance and behavior of two neural statistical language models so as to highlight some important caveats of the classical training algorithms .
the data comes from the conll 2000 shared task , which consists of sentences from the penn treebank wall street journal corpus .
the data consist of four-tuples of words , extracted from the wall street journal treebank by a group at ibm .
corpus-derived models of semantics have been extensively studied in the nlp and machine learning communities .
vector-space models of lexical semantics have been a popular and effective approach to learning representations of word meaning .
we set the feature weights by optimizing the bleu score directly using minimum error rate training on the development set .
we tune weights by minimizing bleu loss on the dev set through mert and report bleu scores on the test set .
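MERT itself is implemented inside MT toolkits such as Moses; as a minimal illustration of just the BLEU objective being optimized, corpus-level BLEU can be computed with the sacrebleu package (the sentences below are toy examples).

```python
import sacrebleu

hypotheses = ["the cat sat on the mat"]
references = [["the cat is sitting on the mat"]]  # one stream of references

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)
```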
the srilm toolkit was used to build this language model .
the srilm toolkit is used to train a 5-gram language model .
the benchmark model for topic modelling is latent dirichlet allocation , a latent variable model of documents .
latent dirichlet allocation is one of the widely adopted generative models for topic modeling .
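A minimal gensim sketch of latent dirichlet allocation; the two toy documents and the topic count are placeholders.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

docs = [["topic", "models", "find", "latent", "themes"],
        ["documents", "mix", "several", "latent", "topics"]]

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]  # bag-of-words counts

lda = LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)
print(lda.print_topics())
```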
in recent years , various phrase translation approaches have been shown to outperform word-to-word translation models .
phrase-based statistical machine translation models have achieved significant improvements in translation accuracy over the original ibm word-based model .
we used a phrase-based smt model as implemented in the moses toolkit .
we implemented our method in a phrase-based smt system .
neelakantan et al proposed an extension of the skip-gram model combined with context clustering to estimate the number of senses for each word as well as learn sense embedding vectors .
neelakantan et al proposed the multi-sense skip-gram model , which jointly learns context cluster prototypes and word sense embeddings .
learning in a relatively high-dimensional feature space may suffer from the data sparseness problem .
however , the richer feature representations result in a high-dimensional feature space .
to calculate language model features , we train traditional n-gram language models with n-gram lengths of four and five using the srilm toolkit .
we then lowercase all data and use all sentences from the modern dutch part of the corpus to train an n-gram language model with the srilm toolkit .
however , in practice , there are many domains , such as the biomedical domain , which involve nested , overlapping , and discontinuous ne mentions .
however , in practice , there are many domains , such as the biomedical domain , in which there are nested , overlapping , and discontinuous entity mentions .
some recent work on active learning has started to include more realistic measures of the actual costs of annotation .
however , recently there has been increased interest in measuring the true costs of annotation work when doing active learning .
we apply this approach to a knowledge base of approximately 500,000 beliefs extracted imperfectly from the web by nell .
we applied this approach to a knowledge base of approximately 500,000 beliefs extracted imperfectly from the web by nell .
in this paper , we present finite structure query ( fsq ) , a query tool for syntactically annotated corpora .
in this paper , we presented fsq , a query tool for syntactically annotated corpora .
we present the inesc-id system for the 2015 semeval message polarity classification task .
we have presented the inesc-id system for the semeval 2015 message classification task .
the topic assignment of each word is not independent , but rather affected by the topic assignments of other words .
the topic assignment for each word is independent of all other words .
we train a cnn with one layer of convolution and max pooling on top of word embedding vectors of size 300 trained on the google news corpus .
we use word embedding vectors of size 300 , trained on the google news corpus , to train a cnn with one layer of convolution and max pooling .
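An illustrative PyTorch sketch of such a model: one convolution over 300-dimensional embeddings followed by max pooling over time. The vocabulary size, filter count, and kernel width are assumptions, not the papers' settings.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size=50000, emb_dim=300,
                 n_filters=100, kernel_size=3, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size)
        self.fc = nn.Linear(n_filters, n_classes)

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)  # (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x))               # one convolution layer
        x = x.max(dim=2).values                    # max pooling over time
        return self.fc(x)
```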
coreference resolution is the task of determining whether two or more noun phrases refer to the same entity in a text .
coreference resolution is the process of linking together multiple referring expressions of a given entity in the world .
we used the scikit-learn implementation of a logistic regression model using the default parameters .
within this subpart of our ensemble model , we used an svm model from the scikit-learn library .
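A minimal scikit-learn sketch of a default-parameter logistic regression; the synthetic data stands in for whatever features the systems above actually used.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=100, n_features=20, random_state=0)

clf = LogisticRegression()  # default parameters, as in the setup above
clf.fit(X, y)
print(clf.predict(X[:5]))
```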
sentiment analysis ( cite-p-12-3-17 ) is a popular research topic which has a wide range of applications , such as summarizing customer reviews , monitoring social media , and predicting stock market trends ( cite-p-12-1-4 ) .
sentiment analysis ( cite-p-8-1-20 ) is a task of predicting whether the text expresses a positive , negative , or neutral opinion in general or with respect to an entity of interest .
a bunsetsu consists of one independent word and zero or more ancillary words .
a bunsetsu is the linguistic unit in japanese that roughly corresponds to a basic phrase in english .
in this paper , we conduct a detailed study of the causes of spurious ambiguity .
however , to our knowledge , we give the first detailed analysis of spurious ambiguity in word alignment .
we use pre-trained glove vector for initialization of word embeddings .
for the action-effect embedding model , we use pre-trained glove word embeddings as input to the lstm .
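A hedged sketch of initializing an embedding matrix from a pre-trained GloVe text file. The file name matches one of the standard GloVe releases; the toy vocabulary and the zero fallback for out-of-vocabulary words are assumptions.

```python
import numpy as np

def load_glove(path, vocab, dim=300):
    """Fill an embedding matrix from a GloVe text file; OOV rows stay zero."""
    matrix = np.zeros((len(vocab), dim), dtype=np.float32)
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if parts[0] in vocab:
                matrix[vocab[parts[0]]] = np.asarray(parts[1:],
                                                     dtype=np.float32)
    return matrix

vocab = {"the": 0, "cat": 1}  # toy word-to-index mapping
embeddings = load_glove("glove.840B.300d.txt", vocab)
```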
we pre-trained word embeddings using word2vec over tweet text of the full training data .
we trained word vectors with the two architectures included in the word2vec software .
semantic textual similarity is the task of measuring the degree to which two texts have the same meaning .
semantic textual similarity is the task of measuring the degree to which two text snippets have the same meaning .
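As a naive illustration only (the actual STS systems are far richer): cosine similarity between tf-idf vectors of the two texts gives a crude similarity score.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

texts = ["a man is playing a guitar", "a person plays the guitar"]
tfidf = TfidfVectorizer().fit_transform(texts)  # one row per text
print(cosine_similarity(tfidf[0], tfidf[1])[0, 0])
```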
coreference resolution is a key problem in natural language understanding that still escapes reliable solutions .
coreference resolution is the task of clustering a sequence of textual entity mentions into a set of maximal non-overlapping clusters , such that mentions in a cluster refer to the same discourse entity .
this study encodes distributional semantics into the triple-based background knowledge ranking model for better document enrichment .
zhang et al proposed a triple-based document enrichment framework which uses triples of spo as background knowledge .
named entity recognition ( ner ) is the task of detecting named entity mentions in text and assigning them to their corresponding type .
named entity recognition ( ner ) is a frequently needed technology in nlp applications .
word sense disambiguation ( wsd ) is a particular problem of computational linguistics which consists in determining the correct sense for a given ambiguous word .
word sense disambiguation ( wsd ) is the task of determining the correct meaning for an ambiguous word from its context .
the system is based on a statistical model whose parameters are trained discriminatively using annotated sentences in the amr bank corpus .
the approach is a statistical natural language generation system , trained discriminatively using sentences in the amr bank .
in the task-6 results ( cite-p-15-1-4 ) , our system was ranked 21st out of 85 participants with a 0.6663 pearson correlation .
in the task-6 results ( cite-p-15-1-4 ) , our system was ranked 21st out of 85 participants with a 0.6663 pearson correlation over all competition ranks .
to solve this problem , we often first read each piece of text , collect some answer candidates , then focus on these candidates and combine their information to select the final answer .
we first extract answer candidates from passages , then select the final answer by combining information from all the candidates .
chang and han , sun and xu used rich statistical information as discrete features in a sequence labeling framework .
sun and xu enhanced the segmentation results by interpolating the statistics-based features derived from unlabeled data to a crfs model .
a handful of papers have leveraged this idea for summarization .
a handful of papers have studied system combination for summarization .
tuning is performed to maximize bleu score using minimum error rate training .
the decoding weights are optimized with minimum error rate training to maximize bleu scores .
the feature weights are tuned to optimize bleu using the minimum error rate training algorithm .
the log-linear model is then tuned as usual with minimum error rate training on a separate development set coming from the same domain .
it has been shown that images from google yield higher quality representations than comparable resources such as flickr and are competitive with hand-crafted datasets .
it has been shown that images from google yield higher-quality representations than comparable sources such as flickr .
dependency parsing is a valuable form of syntactic processing for nlp applications due to its transparent lexicalized representation and robustness with respect to flexible word order languages .
dependency parsing is a basic technology for processing japanese and has been the subject of much research .
we used pos tags predicted by the stanford pos tagger .
we rely on the stanford pos tagger for getting pos tags of the corpus .
language models were built using the sri language modeling toolkit with modified kneser-ney smoothing .
language models were built with srilm , modified kneser-ney smoothing , default pruning , and order 5 .
we then used word2vec to train word embeddings with 512 dimensions on each of the prepared corpora .
the word embeddings required by our proposed methods were trained using the gensim implementation of the skip-gram version of word2vec .
target language models were trained on the english side of the training corpus using the srilm toolkit .
additionally , a back-off 2-gram model with good-turing discounting and no lexical classes was built from the same training data , using the srilm toolkit .
mikolov et al proposed a distributed word embedding model that allows vectors derived from neural networks to convey meaningful information .
mikolov et al presents a neural network-based architecture which learns a word representation by learning to predict its context words .
in addition to improving the original k & m noisy-channel model , we create unsupervised and semi-supervised models of the task .
we have created a supervised version of the noisy-channel model with some improvements over the k & m model .
our experiments indicate that mem significantly outperforms prior work on sentence-level rating .
our experiments indicate that mem achieves better overall accuracy than alternative methods .
mikolov et al proposed a computationally efficient method for learning distributed word representation such that words with similar meanings will map to similar vectors .
mikolov et al proposed the word2vec method for learning continuous vector representations of words from large text datasets .
computational linguistics , volume 14 , number 3 , september 1988 . quilici , dyer , and flowers : recognizing and responding to plan-oriented misconceptions .
computational linguistics , volume 14 , number 3 , september 1988 . quilici , dyer , and flowers : recognizing and responding to plan-oriented misconceptions .
with ca. 1000 instances , the proposed method increases the macro-average f-score and accuracy by up to 50 % , compared to a baseline classifier .
for instance , with training sets of ca. 1000 labeled instances , the proposed method brings improvements in accuracy and macro-average f-score of up to 50 % compared to a baseline classifier .
using latent topical dimensions , the model is able to discriminate between different senses .
using the latent space , the model is able to discriminate between different word senses .
a typical user can most readily supply and identify the tables .
users typically know the database structure and contents .
we describe our contribution to the semeval-2015 shared task : sentiment analysis of figurative language in twitter .
this paper describes our contribution to the semeval-2015 task 11 on sentiment analysis of figurative language in twitter .
another approach is taken by moore and lewis , who , based on source and target language models , calculated the difference of the cross-entropy values for a given sentence .
moore and lewis calculated the difference of the cross entropy values for a given sentence , based on language models from the source domain and the target domain .
galley and manning introduce the hierarchical phrase reordering model which increases the consistency of orientation assignments .
for standard phrase-based translation , galley and manning introduced a hierarchical phrase orientation model .
we trained a 5-gram language model on the xinhua portion of gigaword corpus using the srilm toolkit .
a 4-gram language model is trained on the xinhua portion of the gigaword corpus with the srilm toolkit .
we build an open-vocabulary language model with kneser-ney smoothing using the srilm toolkit .
we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .
bagga and baldwin ( 1998b ) presented one of the first cdc systems , which relied solely on the contextual words of the named entities .
bagga and baldwin ( 1998 ) proposed a method using the vector space model to disambiguate references to a person , place , or event across multiple documents .
relation extraction is the key component for building relation knowledge graphs , and it is of crucial significance to natural language processing applications such as structured search , sentiment analysis , question answering , and summarization .
relation extraction ( re ) is the task of recognizing the assertion of a particular relationship between two or more entities in text .
we show that , due to its computational complexity , it is difficult to straightforwardly apply previously studied techniques of bilingual term correspondence estimation from comparable corpora , especially in the case of large scale evaluation such as those presented in this paper .
first , we show that , due to its computational complexity , it is difficult to straightforwardly apply previously studied techniques of bilingual term correspondence estimation from comparable corpora , especially in the case of large scale evaluation such as those presented in this paper .
semantic role labeling ( srl ) is the task of labeling predicate-argument structure in sentences with shallow semantic information .
semantic role labeling ( srl ) is the task of identifying semantic arguments of predicates in text .
for the textual sources , we populate word embeddings from the google word2vec embeddings trained on roughly 100 billion words from google news .
we train skip-gram word embeddings with the word2vec toolkit on a large amount of twitter text data .
we then lowercase all data and use all unique headlines in the training data to train a language model with the srilm toolkit .
we use srilm for training a trigram language model on the english side of the training data .
speech is a major component of modern user interfaces as it is the natural means of human communication .
speech is a single step within a larger system .
bilingual dictionaries are an essential resource in many multilingual natural language processing tasks such as machine translation and cross-language information retrieval .
bilingual lexicons play an important role in many natural language processing tasks , such as machine translation and cross-language information retrieval .
an alternative approach is based on a continuous representation of the words .
this lm approach is based on a continuous representation of the words .
and , thus , the model reflects a better lexical choice of the content words .
furthermore , the models are often capable of producing a better lexical choice of content words .
semantic role labeling ( srl ) is the task of automatic recognition of individual predicates together with their major roles ( e.g . frame elements ) as they are grammatically realized in input sentences .
semantic role labeling ( srl ) is a kind of shallow sentence-level semantic analysis and is becoming a hot task in natural language processing .
the third baseline , a bigram language model , was constructed by training a 2-gram language model from the large english ukwac web corpus using the srilm toolkit with default good-turing smoothing .
an n-gram language model was then built from the sinica corpus released by the association for computational linguistics and chinese language processing using the srilm toolkit .
our baseline is a phrase-based mt system trained using the moses toolkit .
our smt system is a phrase-based system based on the moses smt toolkit .