sentence1 : string ( lengths 16 to 446 )
sentence2 : string ( lengths 14 to 436 )
instead of incorporating the morphological compositions ( surface forms ) of words , we decide to employ the latent meanings of the compositions ( underlying forms ) to train the word embeddings .
in this paper , we explored a new direction to employ the latent meanings of morphological compositions rather than the internal compositions themselves to train word embeddings .
relation extraction ( re ) is the task of determining semantic relations between entities mentioned in text .
relation extraction is the task of detecting and characterizing semantic relations between entities from free text .
the combination of the discrete fourier transform and lpc technique is plp .
the combination of the discrete fourier transform and lpc technique is called plp .
in this paper , we are concerned with two generally well-understood operators on feature functions .
in this paper , we formalize feature extraction from an algebraic perspective .
named entity ( ne ) transliteration is the process of transcribing a ne from a source language to a target language based on phonetic similarity between the entities .
named entity transliteration is the process of producing , for a name in a source language , a set of one or more transliteration candidates in a target language .
classifier we use the l2-regularized logistic regression from the liblinear package , which we accessed through weka .
we use the l2-regularized logistic regression of liblinear as our term candidate classifier .
we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .
we used srilm for training the 5-gram language model with interpolated modified kneser-ney discounting .
second , our model achieves the best results to date on the kbp 2016 english and chinese event datasets .
our model achieves the best results to date on the kbp 2016 english and chinese datasets .
the candidate examples that lead to the most disagreements among the different learners are considered to have the highest tuv .
the candidate examples that led to the most disagreements among the different learners are considered to have the highest tuv .
recently , stevens et al used an aggregate version of this metric to evaluate large numbers of topic models .
for example , stevens et al showed that this metric is strongly correlated with expert estimates .
conditional random fields are undirected graphical models represented as factor graphs .
conditional random fields are discriminative structured classification models for sequential tagging and segmentation .
we use the word2vec tool to pre-train the word embeddings .
we used the pre-trained google embedding to initialize the word embedding matrix .
we use case-insensitive bleu-4 and rouge-l as evaluation metrics for question decomposition .
we evaluate the translation quality using the case-insensitive bleu-4 metric .
lstms are a special kind of recurrent neural network capable of learning long-term dependencies by effectively handling the vanishing or exploding gradient problem .
long short-term memory network is a type of recurrent neural network , and specifically addresses the issue of learning long-term dependencies .
our 5-gram language model is trained by the sri language modeling toolkit .
a 4-gram language model is trained by the srilm toolkit .
we use the stanford pos-tagger and named entity recognizer .
we use stanford ner for named entity recognition .
feature weights are tuned using minimum error rate training on the 455 provided references .
the decoding weights are optimized with minimum error rate training to maximize bleu scores .
we present a new , multilingual data-driven method for coreference resolution as implemented in the swizzle system .
we have introduced a new data-driven method for multilingual coreference resolution , implemented in the swizzle system .
we used the logistic regression implementation in scikit-learn for the maximum entropy models in our experiments .
we used the scikit-learn implementation of a logistic regression model using the default parameters .
for all models , we use fixed pre-trained glove vectors and character embeddings .
for the action-effect embedding model , we use pre-trained glove word embeddings as input to the lstm .
for the word-embedding based classifier , we use the glove pre-trained word embeddings .
for the mix one , we also train word embeddings of dimension 50 using glove .
the feature weights for the log-linear combination of the features are tuned using minimum error rate training on the devset in terms of bleu .
the parameters of the log-linear model are tuned by optimizing bleu on the development data using mert .
word sense disambiguation ( wsd ) is the task of determining the meaning of an ambiguous word in its context .
word sense disambiguation ( wsd ) is the task of determining the correct meaning ( “ sense ” ) of a word in context , and several efforts have been made to develop automatic wsd systems .
psl is primarily designed to support mpe inference .
psl is a probabilistic logic framework designed to have efficient inference .
over the last few years , several large scale knowledge bases such as freebase , nell , and yago have been developed .
recent years have seen a large number of knowledge bases such as yago , wikidata and freebase .
semantic parsing is the task of mapping natural language utterances to machine interpretable meaning representations .
semantic parsing is a domain-dependent process by nature , as its output is defined over a set of domain symbols .
tai et al proposed a tree-like lstm model to improve the semantic representation .
tai et al , and le and zuidema extended sequential lstms to tree-structured lstms by adding branching factors .
in recent years , there has been a growing interest in sharing personal opinions on the web , such as product reviews , economic analysis , political polls , etc .
hence , in recent years , there has been a research trend towards statistical dialogue management .
bakeoffs show that our system is competitive with the best in the literature , achieving the highest reported f-scores for a number of corpora .
our system is competitive with the best systems , obtaining the highest reported f-scores on a number of the bakeoff corpora .
since coreference resolution is a pervasive discourse phenomenon causing performance impediments in current ie systems , we considered a corpus of aligned english and romanian texts to identify coreferring expressions .
coreference resolution is the task of determining when two textual mentions name the same individual .
the word embeddings were obtained using the word2vec tool .
word embeddings were created using word2vec with the skip-gram architecture .
we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing .
for all experiments , we used a 4-gram language model with modified kneser-ney smoothing which was trained with the srilm toolkit .
argument mining is a trending research domain that focuses on the extraction of arguments and their relations from text .
argument mining ( am ) is a relatively new research area which involves , amongst others , the automatic detection in text of arguments , argument components , and relations between arguments ( see ( cite-p-10-1-13 ) for an overview ) .
coreference resolution is a key problem in natural language understanding that still escapes reliable solutions .
coreference resolution is a field in which major progress has been made in the last decade .
semantic parsing is a domain-dependent process by nature , as its output is defined over a set of domain symbols .
semantic parsing is the problem of mapping natural language strings into meaning representations .
word sense disambiguation ( wsd ) is the task of assigning sense tags to ambiguous lexical items ( lis ) in a text .
word sense disambiguation ( wsd ) is a particular problem of computational linguistics which consists in determining the correct sense for a given ambiguous word .
although there are many implementations of tries , we use a double-array in our task .
therefore , we use the double-array trie structure for implementation .
incorporating eye gaze information with recognition hypotheses is beneficial for the reference resolution task .
in addition , incorporating eye gaze with word confusion networks further improves performance .
and we are able to significantly improve the accuracy of the nombank srl task .
however , we have been unable to use unlabeled data to improve the accuracy .
subclasses called nc-lfg 's , dc-lfg 's and fc-lfg 's are proposed , two of which can be recognized in polynomial time .
consequently , deterministic fts ' , dc-lfg 's and fc-lfg 's can be recognized in polynomial time .
in the example sentence , this generated the subsequent sentence “ us urges israel plan . ”
in the example sentence , this generated the subsequent sentence “ us urges israel plan . ”
in this paper , we describe the tagging strategies that can be found in the literature .
for this paper , we have tested the tagging strategies that can be found in the literature .
we use case-sensitive bleu-4 to measure the quality of translation result .
we evaluated the translation quality using the case-insensitive bleu-4 metric .
the target-side language models were estimated using the srilm toolkit .
the model was built using the srilm toolkit with backoff and good-turing smoothing .
we show that such a system provides an accuracy rivaling that of experts .
our approach has an accuracy that rivals that of expert agreement .
a 5-gram language model was built using srilm on the target side of the corresponding training corpus .
the srilm toolkit was used to create up to 5-gram language models using the mentioned resources .
for instance , ‘ seq-kd + seq-inter + word-kd ’ in table 1 means that the model was trained on seq-kd data and fine-tuned towards seq-inter data .
for instance , ‘ seq-kd + seq-inter + word-kd ’ in table 1 means that the model was trained on seq-kd data and fine-tuned towards seq-inter data with the mixture cross-entropy loss at the word-level .
empty categories are elements in parse trees that lack corresponding overt surface forms .
an empty category is an element in a parse tree that does not have a corresponding surface word .
our parser is based on the shift-reduce parsing process from sagae and lavie and wang et al , and therefore it can be classified as a transition-based parser .
our parser is based on the shift-reduce parsing process from sagae and lavie and wang et al , and therefore it can be classified as a transition-based parser .
we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training .
the statistics for these datasets are summarized in the settings ; we use glove vectors trained on 840b tokens as the pre-trained word embeddings .
we used datasets distributed for the 2006 and 2007 conll shared tasks .
our data is taken from the conll 2006 and 2007 shared tasks .
we use the rouge toolkit for evaluation of the generated summaries in comparison to the gold summaries .
for evaluation , we compare each summary to the four manual summaries using rouge .
word embedding has been extensively studied in recent years .
work in representation learning for nlp has largely focused on improving word embeddings .
one example is the open mind commonsense project , a project to mine commonsense knowledge to which 14,500 participants contributed nearly 700,000 sentences .
one example is the open mind commonsense project , a project to mine commonsense knowledge to which 14,500 participants contributed nearly 700,000 facts .
this paper has shown the effectiveness of our technique for dependency parsing of long sentences .
this paper proposes a method for dependency parsing of monologue sentences based on sentence segmentation .
in this work , we propose the dual tensor model , a neural architecture that ( 1 ) models asymmetry more explicitly than existing models and ( 2 ) explicitly captures the translation of unspecialized distributional vectors into specialized embeddings .
in this work , we propose the dual tensor model , a neural architecture with which we explicitly model the asymmetry and capture the translation between unspecialized and specialized word embeddings via a pair of tensors .
the statistical significance test is performed by the re-sampling approach .
significance testing is done using the sign test by bootstrap re-sampling with 100 samples .
for example , the rst bank , based on rhetorical structure theory , assumes a tree representation to subsume the complete text of the discourse .
in particular , the t2d system employs rules that map text annotated with discourse structures , along the lines of rhetorical structure theory , to specific dialogue sequences .
language models were built using the sri language modeling toolkit with modified kneser-ney smoothing .
language models were estimated using the sri language modeling toolkit with modified kneser-ney smoothing .
this work presents a discussion about the use of baseline algorithms in src and their evaluation .
this work discusses the evaluation of baseline algorithms for web search results clustering .
for the classification task , we use pre-trained glove embedding vectors as lexical features .
we calculate cosine similarity using pretrained glove word vectors to find similar words to the seed word .
for all experiments , we used a 4-gram language model with modified kneser-ney smoothing which was trained with the srilm toolkit .
in the experiments we trained 5-gram language models on the monolingual parts of the bilingual corpora using srilm .
during evaluation , we employ rouge as our evaluation metric .
we use bleu , rouge , and meteor scores as automatic evaluation metrics .
following newman et al , we use a pointwise mutual information score to measure the topic coherence .
we employ normalised pointwise mutual information which outperforms other metrics in measuring topic coherence .
moreover , emotions and mood can influence the speaking behavior of a person and the characteristics of the sound in speech .
it is commonly assumed and confirmed in several studies that emotions and mood can influence the speaking behavior of a person and the characteristics of the sound in speech .
in addition , we add an attention mechanism to make the seq2seq baseline stronger .
specifically , we employ the seq2seq model with attention implemented in opennmt .
recently , neural networks have gained tremendous popularity and success in text classification and opinion mining .
convolutional neural networks have obtained good results in text classification , which usually consist of convolutional and pooling layers .
our model is an extension of the transition-based parsing framework described by nivre for dependency tree parsing .
the dependency parser we use is an implementation of a transition-based dependency parser .
in order to estimate these terms , the corpus was automatically parsed by cass , a robust chunk parser designed for the shallow analysis of noisy text .
to estimate the term f , the corpus was automatically parsed by cass , a robust chunk parser designed for the shallow analysis of noisy text .
brown and levinson created a theory of politeness , articulating a set of strategies which people employ to demonstrate different levels of politeness .
brown and levinson articulated a taxonomy of politeness strategies , distinguishing broadly between the notion of positive and negative politeness .
we used the bleu score to evaluate the translation accuracy with and without the normalization .
in this work , we calculated automatic evaluation scores for the translation results using a popular metric called bleu .
in this paper , we investigate the problem of word fragment identification in the acoustic model .
in this paper , we have investigated the problem of word fragment detection from a new approach .
our algorithm is based on perceptron learning .
we apply this novel learning algorithm to pos tagging .
we use support vector machines , a maximum-margin classifier that realizes a linear discriminative model .
as a supervised classifier , we use support vector machines with a linear kernel .
latent dirichlet allocation is one of the most popular topic models used to mine large text data sets .
the topic model is one of the most popular approaches to learning hidden representations of text .
a speller is crucial to a search engine in improving web search relevance .
a query speller is crucial to a search engine in improving web search relevance .
the dclm model extracted the class information from the history words through a dirichlet distribution in calculating the n-gram probabilities .
however , in the dclm model , the class information of the history words was obtained from the n-gram events of the corpus .
discourse parsing is a difficult , multifaceted problem involving the understanding and modeling of various semantic and pragmatic phenomena as well as understanding the structural properties that a discourse graph can have .
discourse parsing is the process of assigning a discourse structure to the input provided in the form of natural language .
we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit .
we used trigram language models with interpolated kneser-ney discounting trained using the sri language modeling toolkit .
dependency parsing is a topic that has engendered increasing interest in recent years .
dependency parsing is a fundamental task for language processing which has been investigated for decades .
berger and lafferty proposed the use of translation models for document retrieval .
berger and lafferty introduce a probabilistic approach to ir based on statistical machine translation models .
we trained kneser-ney discounted 5-gram language models on each available corpus using the srilm toolkit .
we trained a 4-gram language model on the xinhua portion of gigaword corpus using the sri language modeling toolkit with modified kneser-ney smoothing .
then we apply the max-over-time pooling to get a single vector representation .
next , we adopt the widely-used max-over-time pooling operation to obtain the final features ĉ from c .
the dimension component is driven by rule-based heuristics .
intrasentential quality is evaluated with rule-based heuristics .
relation extraction is the task of automatically detecting occurrences of expressed relations between entities in a text and structuring the detected information in a tabularized form .
relation extraction ( re ) is the task of assigning a semantic relationship between a pair of arguments .
the language models were trained with kneser-ney backoff smoothing using the sri language modeling toolkit .
the language models are 4-grams with modified kneser-ney smoothing which have been trained with the srilm toolkit .
coreference resolution is the task of determining which mentions in a text refer to the same entity .
coreference resolution is a field in which major progress has been made in the last decade .
in this paper , we investigate an alternative approach by training relation parameters jointly with an autoencoder .
we have investigated a dimension reduction technique which trains a kb embedding model jointly with an autoencoder .
we then created trigram language models from a variety of sources using the srilm toolkit , and measured their perplexity on this data .
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided .
for example , blitzer et al proposed a domain adaptation method based on structural correspondence learning .
for example , blitzer et al investigate domain adaptation for sentiment analysis .
it performs knowledge-based label transfer from rich external knowledge sources to large-scale corpora .
it follows the distant supervision paradigm and performs knowledge-based label transfer from rich external knowledge sources to large corpora .
in social media especially , there is a large diversity in terms of both the topic and language , necessitating the modeling of multiple languages simultaneously .
social media is a rich source of rumours and corresponding community reactions .
collobert et al first introduced an end-to-end neural-based approach with sequence-level training and uses a convolutional neural network to model the context window .
collobert et al adjust the feature embeddings according to the specific task in a deep neural network architecture .
such a forest is called a dependency tree .
a dependency tree is a rooted , directed spanning tree that represents a set of dependencies between words in a sentence .
we use the word2vec tool to pre-train the word embeddings .
we use the pre-trained word2vec embeddings provided by mikolov et al as model input .
this paper has proposed an incremental parser based on an adjoining operation .
to solve the problem , this paper proposes an incremental parsing method based on an adjoining operation .
it is a probabilistic framework for labeling and segmenting structured data , such as sequences , trees and lattices .
crf is a well-known probabilistic framework for segmenting and labeling sequence data .
second , we utilize word embeddings to represent word semantics in dense vector space .
we use the cnn model with pretrained word embedding for the convolutional layer .
information extraction ( ie ) is a task of identifying “ facts ” ( entities , relations and events ) within unstructured documents , and converting them into structured representations ( e.g. , databases ) .
information extraction ( ie ) is one of the main nlp aspects for analyzing scientific papers , which includes named entity recognition ( ner ) and relation extraction ( re ) .
we used glove vectors trained on common crawl 840b with 300 dimensions as fixed word embeddings .
we use 300d glove vectors trained on 840b tokens as the word embedding input to the lstm .