| sentence1 (string, length 16–446) | sentence2 (string, length 14–436) |
|---|---|
| lda is a probabilistic model that can be used to model and discover underlying topic structures of documents . | lda is a generative model that learns a set of latent topics for a document collection . |
| we build a state of the art phrase-based smt system using moses . | we use the moses toolkit to train our phrase-based smt models . |
| we use byte pair encoding with 45k merge operations to split words into subwords . | for nmt , we applied byte pair encoding to split word into subword segments for both source and target languages . |
| in this paper , we studied how to modify an lstm model for deletion-based sentence compression . | in this paper , we focus on deletion-based sentence compression , which is a spacial case of extractive sentence compression . |
| the language model used in our paraphraser and the clarke and lapata baseline system is a kneser-ney discounted 5-gram model estimated on the gigaword corpus using the srilm toolkit . | the language model is a trigram-based backoff language model with kneser-ney smoothing , computed using srilm and trained on the same training data as the translation model . |
| conditional random fields have been successfully applied to several ie tasks in the past . | conditional random fields are popular models for many nlp tasks . |
| mikolov et al further proposed continuous bagof-words and skip-gram models , which use a simple single-layer architecture based on inner product between two word vectors . | the skip-gram and continuous bag-of-words models of mikolov et al propose a simple single-layer architecture based on the inner product between two word vectors . |
| our system is based on the phrase-based part of the statistical machine translation system moses . | we adapted the moses phrase-based decoder to translate word lattices . |
| in order to efficiently train parameters , we apply a reparameterization technique ( cite-p-22-3-6 , cite-p-22-1-10 ) . | in order to efficiently train parameters , we apply a reparameterization technique ( cite-p-22-3-6 , cite-p-22-1-10 ) on the variational lower bound . |
| the dataset used was a 1 million sentence aligned english-french corpus , taken from the europarl corpus . | the german-to-english baseline phrasebased system was trained on the europarl v7 corpus . |
| erkan et al defines similarity functions based on cosine similarity and edit distance between dependency paths , and then incorporate them in svm and knn learning for ppi extraction . | erkan et al first define two similarity functions based on cosine similarity and edit distance among dependency paths between two entities , and then incorporate them in semi-supervised learning for ppi extraction using svm and knn classifiers . |
| blitzer et al investigate domain adaptation for pos tagging using the method of structural correspondence learning . | blitzer et al proposed a structural correspondence learning method for domain adaptation and applied it to part-of-speech tagging . |
| in a unified framework , our model provides an effective way to capture context information at different levels for better lexical selection in smt . | the translation probabilities derived from our model are integrated into smt to allow collective lexical selection with both local and global informtion . |
| the model weights were trained using the minimum error rate training algorithm . | the parameter weights are optimized with minimum error rate training . |
| we use moses to train our phrasebased statistical mt system using the same parallel text as the nmt model , with the addition of common crawl , 10 for phrase extraction . | we used the phrase-based smt model , as implemented in the moses toolkit , to train an smt system translating from english to arabic . |
| using machine translation tools , we use the bidirectional lstm network to model the documents in both of the source and the target languages . | we use the bilingual bidirectional lstms to model the word sequences in the source and target languages . |
| liu et al suggested incorporating additional network architectures to further improve the performance of sdp-based methods , which uses a recursive neural network to model the sub-tree . | liu et al proposed a recursive neural network designed to model the subtrees , and cnn to capture the most important features on the shortest dependency path . |
| for data preparation and processing we use scikit-learn . | for the classifiers we use the scikit-learn machine learning toolkit . |
| sites show that the two new techniques enable classification algorithms to significantly improve the accuracy of the current state-of-the-art techniques . | they help achieve significantly higher accuracy than the current state-of-the-art techniques and systems . |
| we used the moses toolkit to build mt systems using various alignments . | we preprocessed the training corpora with scripts included in the moses toolkit . |
| coreference resolution is the task of clustering a set of mentions in the text such that all mentions in the same cluster refer to the same entity . | since coreference resolution is a pervasive discourse phenomenon causing performance impediments in current ie systems , we considered a corpus of aligned english and romanian texts to identify coreferring expressions . |
| this resource was a commissioned translation of the basic traveling expression corpus sentences from english and french to the different dialects . | this resource was created as a commissioned translation of the basic traveling expression corpus sentences from english and french to the different dialects . |
| burkett and klein adopted a transformation-based method to learn a sequence of monolingual tree transformations for translation . | burkett and klein utilized a transformation-based method to learn a sequence of monolingual tree transformations for translation . |
| the latter proposed by the conll-2008 shared task is also called semantic dependency parsing , which annotates the heads of arguments rather than phrasal arguments . | the conll 2008-2009 shared tasks introduced a variant where semantic dependencies are annotated rather than phrasal arguments . |
| for part-of-speech tagging of the sentences , we used stanford pos tagger . | we use the stanford pos tagger for english and french to tag all sentence pairs . |
| the target language model is built on the target side of the parallel data with kneser-ney smoothing using the irstlm tool . | we have used the srilm with kneser-ney smoothing for training a language model for the first stage of decoding . |
| the penn discourse tree bank is the largest resource to date that provides a discourse annotated corpus in english . | the penn discourse treebank is the largest corpus richly annotated with explicit and implicit discourse relations and their senses . |
| in this task , we used conditional random fields . | we selected conditional random fields as the baseline model . |
| hovy et al utilized hypernyms and synonyms in wordnet to expand queries for increasing recall . | hovy et al utilized wordnet hypernyms and synonyms to expand queries to increase recall . |
| word sense disambiguation along with the lexical senses from wordnet are used for this task . | word sense disambiguation is performed using babelnet with the wordnet sense inventory . |
| relation extraction is the task of automatically detecting occurrences of expressed relations between entities in a text and structuring the detected information in a tabularized form . | relation extraction ( re ) is the process of generating structured relation knowledge from unstructured natural language texts . |
| we use the webquestions dataset as our main dataset , which contains 5,810 question-answer pairs . | we use the webquestions dataset , which contains 5,810 question-answer pairs . |
| the parsing uses parsito , which is a transition-based parser using a neural-network classifier . | the final parsing step is performed using parsito , which is a transitionbased parser with a neural-network classifier . |
| dependency parsing is a very important nlp task and has wide usage in different tasks such as question answering , semantic parsing , information extraction and machine translation . | dependency parsing is a crucial component of many natural language processing systems , for tasks such as text classification ( özgür and güngör , 2010 ) , statistical machine translation ( cite-p-13-3-0 ) , relation extraction ( cite-p-13-1-1 ) , and question answering ( cite-p-13-1-3 ) . |
| coreference resolution is the process of finding discourse entities ( markables ) referring to the same real-world entity or concept . | coreference resolution is the task of partitioning a set of mentions ( i.e . person , organization and location ) into entities . |
| word alignment is equivalent to orthogonal non-negative matrix factorisation . | here , we view word alignment as matrix factorisation . |
| most recently , bansal and klein improved the berkeley parser by using surface counts from google n-grams . | more recently , bansal and klein proposed features for both dependency and constituency parsing based on web counts from the google n-grams corpus . |
| we have used the srilm with kneser-ney smoothing for training a language model for the first stage of decoding . | we use a pbsmt model where the language model is a 5-gram lm with modified kneser-ney smoothing . |
| morante and daelemans present a machine-learning approach to this task , using token-level , lexical information only . | morante and daelemans use the bioscope corpus to approach the problem of identifying cues and scopes via supervised machine learning . |
| relation extraction is a challenging task in natural language processing . | relation extraction is the problem of populating a target relation ( representing an entity-level relationship or attribute ) with facts extracted from natural-language text . |
| and applying a set of constraints , we restrict the space of possible tts templates under consideration , while still allowing new and more accurate templates to emerge from the training data . | in this paper , we explore methods for restricting the space of possible tts templates under consideration , while still allowing good templates to emerge directly from the data as much as possible . |
| the n-gram language models are trained using the srilm toolkit or similar software developed at hut . | uedin has used the srilm toolkit to train the language model and relies on kenlm for language model scoring during decoding . |
| this type of data has been found to yield the best correlation with eye-tracking data when different styles of presentation were compared for english . | this had the best correlation with eye-tracking data when different styles of presentation were compared for english . |
| for these experiments we use a maximum entropy classifier using the liblinear toolkit 2 . | for these experiments we use a maximum entropy classifier using the liblinear toolkit 1 . |
| sentiment analysis is a technique to classify documents based on the polarity of opinion expressed by the author of the document ( cite-p-16-1-13 ) . | sentiment analysis is the task in natural language processing ( nlp ) that deals with classifying opinions according to the polarity of the sentiment they express . |
| in this paper , we adopt continuous bag-of-word in word2vec as our context-based embedding model . | we revisit skip-gram model , as one of the most popular context-based embedding approaches . |
| the combiner we use here is implemented using a rule-based classifier , ripper . | in order to extract rules from the annotated data , we use a rule-based classifier , ripper . |
| attentional state properties modeled by centering can account for these differences . | we describe how the attentional state properties modeled by centering can account for these differences . |
| freeparser uses a domain-independent architecture to automatically identify sentences relevant to each new database . | using a self-supervised architecture , freeparser automatically labels these sentences , and then trains a semantic parser for all of freebase . |
| firstly , we explicitly show that concept-drift is pervasive and serious in real bug report streams . | we demonstrate that concept drift is a real , pervasive issue for learning from issue tracker streams . |
| twitter is a huge microblogging service with more than 500 million tweets per day from different locations of the world and in different languages ( cite-p-10-1-6 ) . | twitter is a popular microblogging service , which , among other things , is used for knowledge sharing among friends and peers . |
| however , due to their heterogeneous characteristics , mwes present a tough challenge for both linguistic and computational work . | because of their frequency and their peculiar behaviour , mwes pose a great challenge to the creation of natural language processing systems . |
| all system component weights were tuned using minimum error-rate training , with three tuning runs for each condition . | feature weights were set with minimum error rate training on a tuning set using bleu as the objective function . |
| in the project , we focus on content-related criteria . | in this work , we chose to start with criteria related to content choice . |
| we built a 5-gram language model on the english side of europarl and used the kneser-ney smoothing method and srilm as the language model toolkit . | we used the sri language modeling toolkit to train a fivegram model with modified kneser-ney smoothing . |
| in section 6 , the proposed word embeddings show evident improvements on sentiment classification , as compared to the base model . | in section 6 , the proposed word embeddings show evident improvements on sentiment classification , as compared to the base model word2vec and other baselines using the same lexical resource . |
| we base our extrinsic evaluation on the seminal work of collobert et al on the use of neural methods for nlp . | we compare the proposed model to our implementation of the iobes-based model described in collobert et al , applied to mwe tagging . |
| we use the 300-dimensional skip-gram word embeddings built on the google-news corpus . | for feature building , we use word2vec pre-trained word embeddings . |
| we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting . | we apply sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing . |
| empirical experiments on a labeled set of words show that the proposed method outperforms the state of the art methods . | the method is experimentally tested using a manually labeled set of positive and negative words . |
| finin et al use amazons mechanical turk service 2 and crowdflower 3 to annotate named entities in tweets and train a crf model to evaluate the effectiveness of human labeling . | finin et al use amazons mechanical turk service 3 and crowdflower 4 to annotate named entities in tweets and train a crf model to evaluate the effectiveness of human labeling . |
| we applied liblinear via its scikitlearn python interface to train the logistic regression model with l2 regularization . | we use the wrapper of the scikit learn python library over the liblinear logistic regression implementation . |
| we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing . | we trained a standard 5-gram language model with modified kneser-ney smoothing using the kenlm toolkit on 4 billion running words . |
| r the brown corpus is in some sense fundamentally more difficult . | r the brown corpus is in some sense fundamentally more difficult for this problem . |
| apart from the original space of features , we have used the so called svd features , obtained from the projection of the feature vectors into the reduced space . | apart from the original space of features , we have the so called svd features , obtained from the projection of the feature vectors into the reduced space . |
| udpipe 1 . 2 participated in the shared task , placing as the 8th best system , while achieving low running times . | udpipe 1.1 provided a strong baseline for the task , placing as the 13 th ( out of 33 ) best system in the official ranking . |
| to train the link embeddings , we use the speedy , skip-gram neural language model of mikolov et al via their toolkit word2vec . | we use the perplexity computation method of mikolov et al suitable for skip-gram models . |
| we use srilm toolkits to train two 4-gram language models on the filtered english blog authorship corpus and the xinhua portion of gigaword corpus , respectively . | we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus . |
| agate is an acronym for general architecture for text engineering . | ttea is an acronym for text engineering architecture . |
| in this context , we present a method based on character p-grams that we designed for the arabic dialect identification shared task of the dsl 2016 challenge . | we have presented a method based on character p-grams for the arabic dialect identification shared task of the dsl 2016 challenge . |
| ucca 's representation is guided by conceptual notions and has its roots in the cognitive linguistics tradition . | ucca is supported by extensive typological cross-linguistic evidence and accords with the leading cognitive linguistics theories . |
| and , based on theoretical and empirical desiderata , we outline a more comprehensive framework to model the acquisition of allophonic rules . | therefore , the main extension towards a comprehensive model of the acquisition of allophonic rules would be to include acoustic indicators . |
| we compute statistical significance using the approximate randomization test . | we test this hypothesis with an approximate randomization approach . |
| automated essay scoring utilizes natural language processing and machine learning techniques to automatically rate essays written for a target prompt . | automated essay scoring utilizes the nlp techniques to automatically rate essays written for given prompts , namely , essay topics , in an educational setting . |
| krishnakumaran and zhu use the isa relation in wordnet for metaphor recognition . | krishnakumaran and zhu use hyponymy relation in wordnet to detect semantic violations . |
| borrowing is a major type of word formation in japanese , and numerous foreign words ( proper names or neologisms etc . ) are continuously being imported from other languages ( cite-p-26-3-22 ) . | borrowing is the pervasive linguistic phenomenon of transferring and adapting linguistic constructions ( lexical , phonological , morphological , and syntactic ) from a “ donor ” language into a “ recipient ” language ( cite-p-10-3-16 ) . |
| many existing active learning methods are to select the most uncertain examples using various measures . | many existing active learning methods are based on selecting the most uncertain examples using various measures . |
| coreference resolution is a key task in natural language processing ( cite-p-13-1-8 ) aiming to detect the referential expressions ( mentions ) in a text that point to the same entity . | coreference resolution is a challenging task , that involves identification and clustering of noun phrases mentions that refer to the same real-world entity . |
| contrary to both intuition and past conclusions , results show no significant evidence of reference bias . | results showed no significant evidence of reference bias , contrary to prior reports and intuition . |
| named entity linking is the task of mapping mentions of named entities to their canonical reference in a knowledge base . | entity resolution is the task of mapping mentions of entities in text to corresponding records in a knowledge base . |
| vector representations of words and phrases have been successfully applied in many natural language processing tasks . | distributed representations of words have been widely used in many natural language processing tasks . |
| the lingo grammar matrix is situated theoretically within head-driven phrase structure grammar , a lexicalist , constraint-based framework . | this model is inspired by formalisms based on structural features like head-driven phrase structure grammar . |
| this means in practice that the language model was trained using the srilm toolkit . | uedin has used the srilm toolkit to train the language model and relies on kenlm for language model scoring during decoding . |
| for all models , we use fixed pre-trained glove vectors and character embeddings . | for word-level embedding e w , we utilize pre-trained , 300-dimensional embedding vectors from glove 6b . |
| we used the scikit-learn python machine learning library to implement the feature extraction pipeline and the support vector machine classifier . | we used scikit-learn 4 for more details , a machine learning library for python , to build a question classifier based on the svm algorithm and linear kernel function . |
| for the automatic evaluation , we used the bleu metric from ibm . | we report the mt performance using the original bleu metric . |
| by the base form of the head verb , we achieve a better statistical word alignment performance , and are able to better estimate the translation model and generalize to unseen verb forms during translation . | this leads to an improved statistical word alignment performance , and has the advantages of improving the translation model and generalizing to unseen verb forms , during translation . |
| we substitute our language model and use mert to optimize the bleu score . | we use case-sensitive bleu-4 to measure the quality of translation result . |
| since its introduction , topic modeling has been tailored to perform better on short texts such as microblogs . | topic models , such as plsa and lda , have shown great success in discovering latent topics in text collections . |
| we use the stanford nlp pos tagger to generate the tagged text . | we use the stanford part of speech tagger to annotate each word with its pos tag . |
| for language model , we use a trigram language model trained with the srilm toolkit on the english side of the training corpus . | we use srilm for training a trigram language model on the english side of the training corpus . |
| sentiment analysis is the task of identifying positive and negative opinions , sentiments , emotions and attitudes expressed in text . | sentiment analysis is a growing research field , especially on web social networks . |
| recently , attentive neural networks have shown success in several nlp tasks such as machine translation , image captioning , speech recognition and document classification . | recently , rnns with attention mechanisms have demonstrated success in various nlp tasks , such as machine translation , parsing , image captioning , and textual entailment . |
| relation extraction ( re ) is the task of identifying instances of relations , such as nationality ( person , country ) or place of birth ( person , location ) , in passages of natural text . | relation extraction ( re ) has been defined as the task of identifying a given set of semantic binary relations in text . |
| that word ( containing at least one character ) is the appropriate unit for chinese language processing . | word is usually adopted as the smallest unit in most tasks of chinese language processing . |
| hasegawa et al , 2004 ) use ner to identify frequently co-occurring entities as likely relation phrases . | hasegawa et al 2004 , used large corpora and an extended named entity tagger to find novel relations and their participants . |
| we also use a 4-gram language model trained using srilm with kneser-ney smoothing . | we use 5-gram models with modified kneser-ney smoothing and interpolated back-off . |
| since our dataset is not so large , we make use of pre-trained word embeddings , which are trained on a much larger corpus with word2vec toolkit . | in all of our experiments , the word embeddings are trained using word2vec on the wikipedia corpus . |
| we have used the srilm with kneser-ney smoothing for training a language model for the first stage of decoding . | we choose modified kneser ney as the smoothing algorithm when learning the ngram model . |
| long short-term memory is a special type of rnn that leverages multiple gate vectors and a memory cell vector to solve the vanishing and exploding gradient problems of training rnns . | long short term memory is a variant of recurrent neural network , which enables to address the gradient vanishing and exploding problems in rnn via introducing gate mechanism and memory cell . |
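Each row above pairs two sentences from the scientific literature that state the same thing in different words, so the two columns form paraphrase pairs. Below is a minimal sketch of how a two-column dataset like this could be loaded and inspected with the Hugging Face `datasets` library; the file name `sentence_pairs.csv` is a placeholder for wherever the exported pairs live, not part of this dataset's metadata.

```python
# Minimal sketch: load paraphrase pairs like the rows above with the
# Hugging Face `datasets` library. `sentence_pairs.csv` is a hypothetical
# CSV export with the two columns shown in the table (sentence1, sentence2).
from datasets import load_dataset

pairs = load_dataset("csv", data_files="sentence_pairs.csv")["train"]

# Print the first few pairs side by side.
for row in pairs.select(range(3)):
    print(f"{row['sentence1']}  <=>  {row['sentence2']}")
```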