sentence1: string (lengths 16–446)
sentence2: string (lengths 14–436)
for the language model , we used srilm with modified kneser-ney smoothing .
the srilm language modelling toolkit was used with interpolated kneser-ney discounting .
the language model was generated from the europarl corpus using the sri language modeling toolkit .
language models were estimated using the sri language modeling toolkit with modified kneser-ney smoothing .
for smt decoding , we use the moses toolkit with kenlm for language model queries .
for the extraction of translation tables , we use the de facto standard smt toolbox moses with default settings .
in cwe , they learned word embeddings with their component characters .
in cwe , they learned word embeddings with their component character embeddings .
our approach showed significant improvements over the best previously published work .
our approaches show improvements over the best previously published system for solving geometry problems .
tree representation is often too coarse or ambiguous to accurately capture the semantic relation information .
the treebank tag , unfortunately , is usually too coarse or too general to capture semantic information .
we propose a discriminative supervised learning approach for learning .
we propose a discriminative , feature-rich approach using large-margin learning .
bootstrapping also does better than monolingual bootstrapping .
further , it consistently performs better than monolingual bootstrapping .
we induce a topic-based vector representation of sentences by applying the latent dirichlet allocation method .
we can learn a topic model over conversations in the training data using latent dirichlet allocation .
therefore , we use the long short-term memory network to overcome this problem .
we use the long short-term memory architecture for recurrent layers .
automatic evaluation results are shown in table 1 , using bleu-4 .
the bleu score for all the methods is summarised in table 5 .
in this paper we propose to address the problem of automatic labelling of latent topics learned from twitter .
in this paper we proposed a novel alternative to topic labelling which does not rely on external data sources .
sentiment analysis is a field of study that investigates feelings present in texts .
sentiment analysis is the task in natural language processing ( nlp ) that deals with classifying opinions according to the polarity of the sentiment they express .
twitter is a famous social media platform capable of spreading breaking news , thus most rumour-related research uses the twitter feed as a basis for research .
twitter is a popular microblogging service , which , among other things , is used for knowledge sharing among friends and peers .
the target language model was a standard ngram language model trained by the sri language modeling toolkit .
the target fourgram language model was built with the english part of training data using the sri language modeling toolkit .
pang and lee cast this problem as a classification task , and use machine learning methods in a supervised learning framework .
pang and lee cast this problem as a classification task , and use machine learning methods in a supervised learning framework .
cite-p-21-1-9 proposed a staggered decoding algorithm , which proves to be very efficient .
cite-p-21-1-4 proposed an algorithm which opens necessary nodes in a lattice in searching the best sequence .
we also examine the possibility of using similarity metrics defined on wordnet .
thus , we observe a marginal improvement by using similarity-based metrics for wordnet .
rating of the target word could be a useful clue for determining whether the sense is literal or metaphorical .
identifying metaphorical word usage is important for reasoning about the implications of text .
for improving the word alignment , we use the word-classes that are trained from a monolingual corpus using the srilm toolkit .
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided .
for all experiments , we used a vocabulary of the first 100,000 word vectors in glove .
as the word embeddings , we used the 300-dimensional vectors pre-trained by glove .
at semeval 2012 – 2015 , most of the top-performing sts systems used a regression algorithm to combine different measures of similarity .
each of our systems uses the semeval 2012–2015 sts datasets to train a ridge regression model that combines different measures of similarity .
neural language models based on recurrent neural networks and sequence-to-sequence architectures have revolutionized the nlp world .
recurrent neural network architectures have proven to be well suited for many natural language generation tasks .
relation extraction is the task of finding relations between entities in text , which is useful for several tasks such as information extraction , summarization , and question answering ( cite-p-14-3-7 ) .
relation extraction ( re ) is the task of extracting instances of semantic relations between entities in unstructured data such as natural language text .
for our baseline we use the moses software to train a phrase based machine translation model .
we use the open source moses phrase-based mt system to test the impact of the preprocessing technique on translation quality .
and ( 2 ) twsss share common structure with sentences in the erotic domain .
second , twsss share common structure with sentences in the erotic domain .
our neural generator follows the standard encoder-decoder paradigm .
our nmt model follows the common attentional encoder-decoder networks .
zhou et al explore various features in relation extraction using svm .
zhou et al explore various features in relation extraction using support vector machines .
both yamamoto and sumita and foster and kuhn extended this to include the translation model .
both yamamoto and sumita and foster and kuhn extended this work to include the translation model .
we use case-sensitive bleu-4 to measure the quality of translation result .
we use corpus-level bleu score to quantitatively evaluate the generated paragraphs .
the word-character hybrid model proposed by nakagawa and uchimoto shows promising properties for solving this problem .
nakagawa and uchimoto proposed a hybrid model for word segmentation and pos tagging using an hmm-based approach .
acquired knowledge regarding inter-topic preferences is useful not only for stance detection , but also for various real-world applications including public opinion survey , electoral campaigns , electoral predictions , and online debates .
this kind of knowledge is useful not only for stance detection across multiple topics but also for various real-world applications including public opinion surveys , electoral predictions , electoral campaigns , and online debates .
the language model is trained and applied with the srilm toolkit .
the srilm toolkit is used to train a 5-gram language model .
coreference resolution is the problem of partitioning a sequence of noun phrases ( or mentions ) , as they occur in a natural language text , into a set of referential entities .
coreference resolution is the task of automatically grouping references to the same real-world entity in a document into a set .
similarly , zheng et al introduced a rotatory attention mechanism to obtain the representations of the targets , the left context and the right context , which were determined by each other .
ma et al and zheng et al both employed a bidirectional attention operation to obtain the representations of targets and contextual words determined by each other .
motivated by these psycholinguistic findings , we are currently investigating the role of eye gaze in spoken language understanding .
motivated by psycholinguistic findings , we are currently investigating the role of eye gaze in spoken language understanding for multimodal conversational systems .
we tackle this problem , and propose an end-to-end neural crf autoencoder ( ncrf-ae ) model for semi-supervised learning on sequence labeling problems .
we proposed an end-to-end neural crf autoencoder ( ncrf-ae ) model for semi-supervised sequence labeling .
we use glove word embeddings , an unsupervised learning algorithm for obtaining vector representations of words .
our word embeddings are initialized with 100-dimensional glove word embeddings .
we pre-trained word embeddings using word2vec over tweet text of the full training data .
we perform pre-training using the skip-gram nn architecture available in the word2vec tool .
we used the dataset from the conll shared task for cross-lingual dependency parsing .
in this treebank , we followed the conll tab-separated format for dependency parsing .
we do perform word segmentation in this work , using the stanford tools .
we use stanford corenlp for pos tagging and lemmatization .
experiments show the approach proposed in this paper enhances the domain portability of the chinese word segmentation model and prevents drastic decline in performance .
the experiment shows the approach can effectively improve the oov recall and lead to a higher overall performance .
in the majority of cases ( 68 % , table 4 ) we are able to detect more positive implicit meaning than previous work .
in the majority of cases ( 68 % , table 4 ) we are able to detect more positive implicit meaning than previous work considering a coarse-grained focus .
relation extraction is the task of detecting and classifying relationships between two entities from text .
relation extraction is a traditional information extraction task which aims at detecting and classifying semantic relations between entities in text ( cite-p-10-1-18 ) .
for this task , we used the svm implementation provided with the python scikit-learn module .
for all classifiers , we used the scikit-learn implementation .
fazly et al exploit this property in their unsupervised approach , referred to as cform .
moreover , the unsupervised cform method of fazly et al gives substantially higher accuracies than this supervised approach .
for this purpose , we extracted named entities from millions of tweets , using a twitter-tuned ner system .
to study the diversity of named entities in retweets , we used uw twitter nlp tools to extract nes from rt-data .
traditional topic models such as lda and plsa are unsupervised methods for extracting latent topics in text documents .
generative models like lda and plsa have proven to be very successful in modeling topics and other textual information in an unsupervised manner .
hatzivassiloglou and mckeown extract polar adjectives by a weakly supervised method in which subjective adjectives are found by searching for adjectives that are conjuncts of a pre-defined set of polar seed adjectives .
hatzivassiloglou and mckeown extract sets of positive and negative adjectives from a large corpus using the insight that conjoined adjectives are generally of the same or different semantic orientation depending on the particular conjunction used .
in parallel to the phrase-based approach , the use of bilingual n-grams gives comparable results , as shown by crego et al .
in parallel to this phrase-based approach , the use of bilingual n-grams gives comparable results , as shown by crego et al .
the language model is trained on the target side of the parallel training corpus using srilm .
a kn-smoothed 5-gram language model is trained on the target side of the parallel data with srilm .
we evaluate the performance of different translation models using both bleu and ter metrics .
we evaluated the translation quality using the bleu-4 metric .
we use the stanford part-of-speech tagger and chunker to identify noun and verb phrases in the sentences .
we apply a part-of-speech tagger and a dependency parser on all sentences of these three articles .
sentiment analysis is a fundamental problem aiming to give a machine the ability to understand the emotions and opinions expressed in a written text .
sentiment analysis is a much-researched area that deals with identification of positive , negative and neutral opinions in text .
broad coverage and disambiguation quality are critical for wsd .
broad coverage and disambiguation quality are critical for a word sense disambiguation system .
we use srilm for training a trigram language model on the english side of the training corpus .
we use srilm for training a trigram language model on the english side of the training data .
according to cite-p-8-1-4 , recent attempts that apply either complex linguistic reasoning or attention-based complex neural network architectures achieve up to 76 % accuracy on benchmark sets .
while this does not seem like a challenging task , many recent attempts that apply either complex linguistic reasoning or deep neural networks achieve 65 % –76 % accuracy on benchmark sets .
we pretrain 200-dimensional word embeddings using word2vec on the english wikipedia corpus , and randomly initialize other hyperparameters .
for the textual sources , we populate word embeddings from the google word2vec embeddings trained on roughly 100 billion words from google news .
shallow semantic representations can prevent the sparseness of deep structural approaches and the weakness of cosine similarity based models .
shallow semantic representations , bearing a more compact information , could prevent the sparseness of deep structural approaches and the weakness of bow models .
in a web crawl , the distribution is quite likely to be more uniform , which means the senses will “ split the difference ” in the representation .
but in a web crawl , the distribution is quite likely to be more uniform , which means the senses will “ split the difference ” in the representation and end up not being that similar to any instance of serve .
word embeddings have proven to be effective models of semantic representation of words in various nlp tasks .
word embeddings can be very useful for the sentiment analysis task because they are able to represent syntactic and semantic information of words .
and the superiority of the multimodal over the corpus-only approach has only been established when evaluations include such concepts .
most perceptual input to such models corresponds to concrete noun concepts and the superiority of the multimodal approach has only been established when evaluating on such concepts .
lexical substitution is a special case of automatic paraphrasing in which the goal is to provide contextually appropriate replacements for a given word , such that the overall meaning of the context is maintained .
naturally , lexical substitution is a very common first step in textual entailment recognition , which models semantic inference between a pair of texts in a generalized application independent setting ( cite-p-19-1-0 ) .
we train a trigram language model with the srilm toolkit .
we use the srilm toolkit to compute our language models .
in this work , we handle the medical concept .
in this work , we go beyond string matching .
in this paper , we describe an empirical study of chinese chunking .
in this paper , we conducted an empirical study of chinese chunking .
however , dependency parsing , which is a popular choice for japanese , can incorporate only shallow syntactic information , i.e. , pos tags , compared with the richer syntactic phrasal categories in constituency parsing .
dependency parsing is a valuable form of syntactic processing for nlp applications due to its transparent lexicalized representation and robustness with respect to flexible word order languages .
to learn noun vectors , we use a skip-gram model with negative sampling .
we use skip-gram with negative sampling for obtaining the word embeddings .
to represent the document , lstms have an obvious advantage in modeling the compositional semantics and capturing the long distance dependencies between words .
it has an obvious advantage in modeling the compositional semantics and capturing the long distance dependencies between words .
however , we need to modify this model to appropriately process more complicated sentences .
we extend this idea so that we can change the output length flexibly .
previous studies have shown significant improvements in translation performance through the segmentation of asr hypotheses .
prior work has shown that additional segmentation of asr hypotheses of these segments may be necessary to improve translation quality .
this dataset was created and employed for the sentiment analysis in twitter task in the 2013 edition of the semeval workshop .
the benchmark corpus was made available with the semeval-2013 shared task on sentiment analysis in twitter .
this paper has presented a treatment of relational nouns which manages to maintain uniformity and generality .
the paper shows how this approach handles a variety of linguistic constructions involving relational nouns .
word sense disambiguation ( wsd ) is the task of determining the correct meaning ( “ sense ” ) of a word in context , and several efforts have been made to develop automatic wsd systems .
word sense disambiguation ( wsd ) is the task of identifying the intended sense of a word in a computational manner based on the context in which it appears ( cite-p-13-3-4 ) .
throughout this work , we use mstperl , an unlabelled first-order non-projective single-best implementation of the mstparser of mcdonald et al , trained using 3 iterations of mira .
throughout this work , we use mstperl , an implementation of the unlabelled single-best mstparser of mcdonald et al , with first-order features and nonprojective parsing , trained using 3 iterations of mira .
to compensate for the limited in-domain data size , we use word2vec to learn word embeddings from a large amount of general-domain data .
in this run , we use a sentence vector derived from word embeddings obtained from word2vec .
we follow the pre-segmentation method described in to achieve the goal .
we follow the pre-segmentation method described in glass to achieve the goal .
case-insensitive nist bleu was used to measure translation performance .
case-insensitive bleu is used to evaluate the translation results .
a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke .
the language models were interpolated kneser-ney discounted trigram models , all constructed using the srilm toolkit .
this architecture is very similar to the framework of uima .
the pipeline is based on the uima framework and contains many text analysis components .
we propose an event detection algorithm based on the sequence of community level emotion distribution .
we use the dirichlet distribution to model the generation process of the community emotion distribution .
a most recent work brought this idea even further , by incorporating structural constraints into the learning phase .
a most recent work brought this idea even further , by incorporating structural constraints into the learning phase as well ( cite-p-21-3-16 ) .
the model parameters in word embedding are pretrained using glove .
the word embeddings are initialized using the pre-trained glove , and the embedding size is 300 .
we used the moses mt toolkit with default settings and features for both phrase-based and hierarchical systems .
for phrase-based smt translation , we used the moses decoder and its support training scripts .
in the aspect of passage selection , cite-p-21-3-16 introduced a pipelined approach that ranks the passages first and then reads the selected passages .
in the aspect of passage selection , cite-p-21-3-16 introduced a pipelined approach that ranks the passages first and then reads the selected passages for answering questions .
it was trained on the webnlg dataset using the moses toolkit .
the phrase table was built using the scripts from the moses package .
on the noisy dataset shows that our meaning-based approach understands the meaning of each quantity .
the much higher accuracy of our system on the noisy dataset shows that our meaning-based approach understands the meaning of each quantity more .
all systems are evaluated using case-insensitive bleu .
the evaluation method is the case-insensitive ibm bleu-4 .
we use distributed word vectors trained on the wikipedia corpus using the word2vec algorithm .
our cdsm feature is based on word vectors derived using a skip-gram model .
λ 8 are tuned by minimum error rate training on the dev sets .
the λ f are optimized by minimum-error training .
we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting .
furthermore , we train a 5-gram language model using the sri language toolkit .
coreference resolution is the process of determining whether two expressions in natural language refer to the same entity in the world .
coreference resolution is the problem of partitioning a sequence of noun phrases ( or mentions ) , as they occur in a natural language text , into a set of referential entities .
mikolov et al introduce a translation matrix for aligning embeddings spaces in different languages and show how this is useful for machine translation purposes .
mikolov et al used distributed representations of words to learn a linear mapping between vector spaces of languages and showed that this mapping can serve as a good dictionary between the languages .
then , we trained word embeddings using word2vec .
for feature building , we use word2vec pre-trained word embeddings .
with one of six pos-the windows of context .
the ac-the windows of context seems warranted .
for the fluency and grammaticality features , we train 4-gram lms using the development dataset with the sri toolkit .
we implement an in-domain language model using the sri language modeling toolkit .
for evaluation , case-insensitive nist bleu is used to measure translation performance .
to measure the translation quality , we use the bleu score and the nist score .
socher et al present a model for compositionality based on recursive neural networks .
socher et al present a compositional model based on a recursive neural network .
lluís et al introduce a joint arc-factored model for parsing syntactic and semantic dependencies , using dual decomposition to maximize agreement between the models .
lluís et al use a joint arc-factored model that predicts full syntactic paths along with predicate-argument structures via dual decomposition .
in our paper , we show that massive amounts of data can have a major impact on discourse processing research .
in our paper , we show that massive amounts of data can have a major impact on discourse processing research as well .