sentence1
stringlengths
16
446
sentence2
stringlengths
14
436
nature is quite different from the other text genres of emails , essays and blogs .
the text samples include essays , emails , blogs , and chat .
finkel and manning proposed a crf-based constituency parser for nested named entities such that each named entity is a constituent in the parse tree .
finkel and manning propose a discriminative parsingbased method for nested named entity recognition , employing crfs as its core .
latent semantic analysis is used to measure semantic similarity between each pair of words .
latent semantic analysis has been used to reduce the dimensionality of semantic spaces leading to improved performance .
relation extraction ( re ) is a task of identifying typed relations between known entity mentions in a sentence .
relation extraction is the key component for building relation knowledge graphs , and it is of crucial significance to natural language processing applications such as structured search , sentiment analysis , question answering , and summarization .
for this language model , we built a trigram language model with kneser-ney smoothing using srilm from the same automatically segmented corpus .
our translation model is implemented as an n-gram model of operations using srilm-toolkit with kneser-ney smoothing .
to convert into a distributed representation here , a neural network for word embedding learns via the skip-gram model .
we apply the 3-phase learning procedure proposed by where we first create word embeddings based on the skip-gram model .
entity linking ( el ) is a central task in information extraction — given a textual passage , identify entity mentions ( substrings corresponding to world entities ) and link them to the corresponding entry in a given knowledge base ( kb , e.g . wikipedia or freebase ) .
entity linking ( el ) is the task of disambiguating mentions in text by associating them with entries in a predefined database of mentions ( persons , organizations , etc ) .
for phrase-based smt translation , we used the moses decoder and its support training scripts .
we adapted the moses phrase-based decoder to translate word lattices .
cite-p-14-3-19 proposed a deep learning method for learning multimodal representations by solving pseudo-supervised tasks to predict .
cite-p-14-3-13 proposed unsupervised multimodal learning based on deep restricted boltzmann machines ( rbms ) .
a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit from stolcke .
the targetside 4-gram language model was estimated using the srilm toolkit and modified kneser-ney discounting with interpolation .
the language models are estimated using the kenlm toolkit with modified kneser-ney smoothing .
an unpruned , modified kneser-ney-smoothed 4-gram language model is estimated using the kenlm toolkit .
word embeddings are considered one of the key building blocks in natural language processing and are widely used for various applications .
word embeddings have recently led to improvements in a wide range of tasks in natural language processing .
however , we use a large 4-gram lm with modified kneser-ney smoothing , trained with the srilm toolkit , stolcke , 2002 and ldc english gigaword corpora .
we implemented this model using the srilm toolkit with the modified kneser-ney discounting and interpolation options .
we used moses , a phrase-based smt toolkit , for training the translation model .
we used the moses decoder , with default settings , to obtain the translations .
coreference resolution is the task of determining when two textual mentions name the same individual .
coreference resolution is a challenging task , that involves identification and clustering of noun phrases mentions that refer to the same real-world entity .
we use a synchronous context free grammar translation system , a model which has yielded state-of-the-art results on many translation tasks .
as our baseline system , we employ a hierarchical phrase-based translation model , which is formally based on the notion of a synchronous context-free grammar .
we used the svd implementation provided in the scikit-learn toolkit .
specifically , we used the python scikit-learn module , which interfaces with the widely-used libsvm .
since bleu is the main ranking index for all submitted systems , we apply bleu as the evaluation matrix for our translation system .
we follow the standard machine translation procedure of evaluation , measuring bleu for every system .
crf training is usually performed through the l-bfgs algorithm and decoding is performed by the viterbi algorithm .
crf training is usually performed through the l-bfgs algorithm and decoding is performed by viterbi algorithm .
by combining the hal model and relevance feedback , the cip can induce semantic patterns from the unannotated web corpora .
the hal model , which is a cognitive motivated model , provides an informative infrastructure to make the cip capable of learning from unannotated corpora .
for pos tagging and syntactic parsing , we use the stanford nlp toolkit .
here we use stanford corenlp toolkit to deal with the co-reference problem .
see avramidis , failed to deliver a competitive result in the standard wmt setting for a reference ) .
see avramidis , has tried shallow nns , but failed to deliver a competitive result in the standard wmt setting for a reference ) .
the sri language modeling toolkit was used to train a trigram open-vocabulary language model with kneser-ney discounting on data that had boundary events inserted in the word stream .
the target language model was a trigram language model with modified kneser-ney smoothing trained on the english side of the bitext using the srilm tookit .
the distance between two languages is the divergence their lexical metrics .
the distance between two languages is a function of the number or fraction of these forms which are cognate between the two languages 1 .
on all datasets and models , we use 300-dimensional word vectors pre-trained on google news .
we use the word2vec vectors with 300 dimensions , pre-trained on 100 billion words of google news .
the language model was trained using srilm toolkit .
all language models were trained using the srilm toolkit .
we propose a hierarchical attention model which is jointly trained with the lstm network .
meanwhile , we propose a hierarchical attention mechanism for the bilingual lstm network .
framenet is a comprehensive lexical database that lists descriptions of words in the frame-semantic paradigm .
the framenet database provides an inventory of semantic frames together with a list of lexical units associated with these frames .
we ran mt experiments using the moses phrase-based translation system .
we conducted baseline experiments for phrasebased machine translation using the moses toolkit .
we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing .
finally , we used kenlm to create a trigram language model with kneser-ney smoothing on that data .
the well-known phrasebased translation model has significantly advanced the progress of smt by extending translation units from single words to phrases .
the well-known phrase-based statistical translation model extends the basic translation units from single words to continuous phrases to capture local phenomena .
we use the stanford corenlp shift-reduce parsers for english , german , and french .
we use stanford corenlp for pos tagging and lemmatization .
in this paper , we introduced a supervised method for back-of-the-book indexing which relies on a novel set of features , including features .
in this paper , we introduce a supervised method for back-of-the-book index construction , using a novel set of linguistically motivated features .
recurrent neural networks are a type of neural network in which some hidden layer is connected to itself so that the previous hidden state can be used along with the input at the current step .
recurrent neural networks are a type of neural networks in which the hidden layer is connected to itself so that the previous hidden state is used along with the input at the current step .
we use two standard evaluation metrics bleu and ter , for comparing translation quality of various systems .
we use the automatic mt evaluation metrics bleu , meteor , and ter , to evaluate the absolute translation quality obtained .
for efficiency , we follow the hierarchical softmax optimization used in word2vec .
we use word2vec as the vector representation of the words in tweets .
with these resources , we propose a new task of japanese noun phrase segmentation .
in this paper , we proposed a new task of japanese noun phrase segmentation .
the translation quality is evaluated by case-insensitive bleu-4 metric .
translation quality is measured in truecase with bleu on the mt08 test sets .
the decoder uses a cky-style parsing algorithm and cube pruning to integrate the language model scores .
the decoder uses a cky-style parsing algorithm to integrate the language model scores .
uos uses dependency-parsed features from the corpus , which are then clustered into senses using the maxmax algorithm .
the uos system induces senses by building an ego-network of a word using dependency relations , which is subsequently clustered using the maxmax clustering algorithm .
key roles can be useful for tasks involving recognition and reasoning about processes .
simple role-based knowledge is essential for recognizing and reasoning about situations involving processes .
crf training is usually performed through the l-bfgs algorithm and decoding is performed by the viterbi algorithm .
crf training is usually performed through the typical l-bfgs algorithm and decoding is performed by viterbi algorithm .
our submission to the english-french task was a phrase-based statistical machine translation based on the moses decoder .
our system is based on the phrase-based part of the statistical machine translation system moses .
among them , maximum entropy obtained a good result for preposition and article correction using a large feature set .
among them , maximum entropy was generally used and obtained a good result for preposition and article correction using a large feature set .
seo et al and xiong et al applied different ways to match the question and the context with bidirectional attention .
xiong et al and seo et al employ variant coattention mechanism to match the question and passage mutually .
collobert et al first introduced an end-to-end neural-based approach with sequence-level training and uses a convolutional neural network to model the context window .
in collobert et al the authors proposed a deep neural network , which learns the word representations and produces iobes-prefixed tags discriminatively trained in an end-to-end manner .
at the same time , it has been shown that incorporating word representations can result in significant improvements for sequence labelling tasks .
distributed representations for words and sentences have been shown to significantly boost the performance of a nlp system .
as stated above , we call these fragments .
as stated above , we call these fragments the reference scope .
we used the icsi meeting data that contains naturally-occurring research meetings .
we used the icsi meeting corpus , which contains naturally occurring meetings , each about an hour long .
blanco and moldovan annotate focus on the negations marked with argm-neg role in propbank .
blanco and moldovan annotate focus of negation in the 3,993 negations marked with argm-neg semantic role in propbank .
we investigate the automatic labeling of spoken dialogue data , in order to train a classifier that predicts students ¡¯ emotional states .
in this paper we investigate the applicability of co-training to train classifiers that predict emotions in spoken dialogues .
the results show that srl information is very helpful for orl , which is consistent with previous studies .
results show that srl is highly effective for orl , which is consistent with previous findings .
as word vectors the authors use word2vec embeddings trained with the skip-gram model .
the word embeddings are initialized with 100-dimensions vectors pre-trained by the cbow model .
meanwhile , we adopt glove pre-trained word embeddings 5 to initialize the representation of input tokens .
we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training .
as discussed above , global word order mistakes often lead to incomprehensibility and misunderstanding .
otherwise , translations with wrong word order often lead to misunderstanding and incomprehensibility .
in this paper , we present a general method to leverage the metadata of category information within cqa pages to further improve the word embedding representations .
we firstly introduce a new metadata powered word embedding method , called mnet , to leverage the category information within cqa pages to obtain word representations .
since our feature set was too large for mert , we used k-best batch mira for tuning .
for tuning the feature weights , we applied batch-mira with -safe-hope .
we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing .
we apply sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing .
because it often requires much less training time in practice than batch training algorithms .
this learning framework is attractive because it often requires much less training time in practice than batch training algorithms .
we propose a new , simple model for selectional preference induction that uses corpus-based semantic similarity metrics , such as cosine or lin ’ s ( 1998 ) .
we propose a new , simple model for the automatic induction of selectional preferences , using corpus-based semantic similarity metrics .
we use the moses smt toolkit to test the augmented datasets .
we implement the pbsmt system with the moses toolkit .
bollen et al used tweet based public mood to predict the movement of dow jones industrial average index .
bollen et al use a sentiment analysis approach to predict the american stock market via twitter .
sarcasm is a sophisticated speech act which commonly manifests on social communities such as twitter and reddit .
sarcasm is defined as ‘ the use of irony to mock or convey contempt ’ 1 .
relation extraction is the task of automatically detecting occurrences of expressed relations between entities in a text and structuring the detected information in a tabularized form .
relation extraction is a key step towards question answering systems by which vital structured data is acquired from underlying free text resources .
we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus .
we train a kn-smoothed 5-gram language model on the target side of the parallel training data with srilm .
we initialize word embeddings with a pre-trained embedding matrix through glove 3 .
we use pre-trained vectors from glove for word-level embeddings .
to train our models , we use svm-light-tk 15 , which enables the use of structural kernels in svm-light .
to train our models , we adopted svm-light-tk 5 , which enables the use of the partial tree kernel in svm-light , with default parameters .
parallel corpora are currently exploited in a wide range of induction scenarios , including projection of morphologic , syntactic and semantic resources .
parallel corpora have proved to be a valuable resource not only for statistical machine translation , but also for crosslingual induction of morphological , syntactic and semantic analyses .
in future work , we intend to build on the work reported in this paper .
in future work , we intend to build on the work reported in this paper in several ways .
experiments on large scale real-life ¡° yahoo ! answers ¡± dataset revealed that t-scqa outperforms current state-of-the-art approaches .
experiments on large scale real-life ¡°yahoo ! answers¡± dataset reveals that scqa outperforms current state-of-the-art approaches based on translation models , topic models and deep neural network
on the official test sets , our model ranks 1st in the phrase-level subtask a ( among 11 teams ) and 2nd on the message-level subtask .
our system ranks 1st on the official test set of the phrase-level and 2nd on the message-level subtask .
we split each document into sentences using the sentence tokenizer of the nltk toolkit .
for the newsgroups and sentiment datasets , we used stopwords from the nltk python package .
a 5-gram language model of the target language was trained using kenlm .
a 5-gram language model built using kenlm was used for decoding .
sarcasm is a form of speech in which speakers say the opposite of what they truly mean in order to convey a strong sentiment .
sarcasm is a form of verbal irony that is intended to express contempt or ridicule .
we examine different linguistic features for sentimental polarity classification , and perform a comparative study on this task between blog and review data .
we evaluate the genre effect between blogs and review data and show the difference of feature effectiveness .
not every aspect of syntactic structure is shared across languages .
some syntactic properties are universal across languages .
and show how such a tool can be employed in augmenting a lexical knowledge base built from a conventional mrd with thesaurus information .
we also show how such a tool can be employed in augmenting a lexical knowledge base built from a conventional mrd with thesaurus information .
in order to alleviate the data sparseness in chunk-based translation , we applied the back-off translation method .
in order to alleviate the data sparseness in chunk-based translation , we take a stepwise back-off translation strategy .
jeon et al also discussed methods for grouping similar questions based on using the similarity between answers in the archive .
jeon et al demonstrates that similar answers are a good indicator of similar questions .
this paper introduces an alternative graph-based approach which is unsupervised and less computationally intensive .
this paper introduces an unsupervised graph-based method that selects textual labels for automatically generated topics .
usage of such a domain-specific corpus based on a pattern-based representation is vital .
we show that the usage of a domain-specific corpus is vital .
and our algorithm that learns when to collaborate obtains further improvement on both qa and qg tasks .
we contribute a generative collaborative network that learns when to collaborate and yields empirical improvements on two qa tasks .
named entity disambiguation ( ned ) is the task of resolving ambiguous mentions of entities to their referent entities in a knowledge base ( kb ) ( e.g. , wikipedia ) .
named entity disambiguation is the task of linking an entity mention in a text to the correct real-world referent predefined in a knowledge base , and is a crucial subtask in many areas like information retrieval or topic detection and tracking .
opennmt additionally supports multi-gpu training .
opennmt is a complete nmt implementation .
word segmentation is the first step of natural language processing for japanese , chinese and thai because they do not delimit words by whitespace .
word segmentation is a prerequisite for many natural language processing ( nlp ) applications on those languages that have no explicit space between words , such as arabic , chinese and japanese .
named entity disambiguation is the task of linking entity mentions to their intended referent , as represented in a knowledge base , usually derived from wikipedia .
named entity disambiguation is the task of linking an entity mention in a text to the correct real-world referent predefined in a knowledge base , and is a crucial subtask in many areas like information retrieval or topic detection and tracking .
yannakoudakis et al formulate aes as a pairwise ranking problem by ranking the order of pair essays based on their quality .
yannakoudakis et al formulate aes as a pair-wise ranking problem by ranking the order of pair essays .
in recent years , there has been a drive to scale semantic parsing to large databases such as freebase .
very recently , researchers have started developing semantic parsers for large , generaldomain knowledge bases like freebase and dbpedia .
twitter is a rich resource for information about everyday events – people post their tweets to twitter publicly in real-time as they conduct their activities throughout the day , resulting in a significant amount of mundane information about common events .
twitter is a famous social media platform capable of spreading breaking news , thus most of rumour related research uses twitter feed as a basis for research .
we use the moses smt toolkit to test the augmented datasets .
our baseline system is an standard phrase-based smt system built with moses .
rewrite rules have been widely used in several areas of natural language processing , including syntax , morphology , phonology and speech processing .
context sensitive rewrite rules have been widely used in several areas of natural language processing .
multi-task learning using a related auxiliary task can lead to stronger generalization and better regularized models .
the goal of multi-task learning is to learn related tasks jointly in order to improve their models over independently learned one .
to train the link embeddings , we use the speedy , skip-gram neural language model of mikolov et al via their toolkit word2vec .
our second method is based on the recurrent neural network language model approach to learning word embeddings of mikolov et al and mikolov et al , using the word2vec package .
the log-linear feature weights are tuned with minimum error rate training on bleu .
the log linear weights for the baseline systems are optimized using mert provided in the moses toolkit .
hence , this model is similar to the skip-gram model in word embedding .
we use a cws-oriented model modified from the skip-gram model to derive word embeddings .
relation extraction is the problem of populating a target relation ( representing an entity-level relationship or attribute ) with facts extracted from natural-language text .
relation extraction is the task of finding relationships between two entities from text .
successful discriminative parsers have used generative models to reduce training time and raise accuracy above generative baselines .
successful discriminative parsers have relied on generative models to reduce training time and raise accuracy above generative baselines .
we trained and tested the model on data from the penn treebank .
we used the penn wall street journal treebank as training and test data .
language models were built using the srilm toolkit 16 .
all language models were trained using the srilm toolkit .
the model is a log-linear model over synchronous cfg derivations .
following , we adopt a general log-linear model .