sentence1 (string, 16–446 chars) | sentence2 (string, 14–436 chars) |
---|---|
in this paper , we propose a novel method for semi-supervised learning of non-projective log-linear dependency parsers using directly expressed linguistic prior knowledge . | in this paper , we developed a novel method for the semi-supervised learning of a non-projective crf dependency parser that directly uses linguistic prior knowledge as a training signal . |
magatti et al introduced an approach for labelling topics that relied on two hierarchical knowledge resources labelled by humans , while lau et al proposed selecting the most representative word from a topic as its label . | magatti et al introduced an approach for labelling topics that relied on two hierarchical knowledge resources labelled by humans , the google directory and the openoffice english thesaurus . |
for training the model , we use the linear kernel svm implemented in the scikit-learn toolkit . | we use the logistic regression classifier as implemented in the skll package , which is based on scikit-learn , with f1 optimization . |
barzilay and mckeown used a corpus-based method to identify paraphrases from a corpus of multiple english translations of the same source text . | barzilay and mckeown utilized multiple english translations of the same source text for paraphrase extraction . |
for all data sets , we trained a 5-gram language model using the sri language modeling toolkit . | we used the srilm toolkit to train a 4-gram language model on the xinhua portion of the gigaword corpus , which contains 238m english words . |
nowadays , there is a huge amount of textual data coming from different sources of information . | nowadays , there is a high increase in the publication of scientific articles every year , which demonstrates that we are living in an emerging knowledge era . |
bleu as the most famous evaluation metric calculates an overall score via geometric mean of precisions on different ngrams . | the most well-known automatic evaluation metric in nlp is bleu for mt , based on n-gram matching precisions . |
this baseline uses pre-trained word embeddings using word2vec cbow and fasttext . | the model parameters of word embedding are initialized using word2vec . |
a 4-gram language model was trained on the target side of the parallel data using the srilm toolkit . | the sri language modeling toolkit was employed to train a five-gram japanese lm on the training set . |
for our experiments , we use moses as the baseline system which can support lattice decoding . | in our experiments , we used moses as the baseline system which can support lattice decoding . |
mln framework has been adopted for several natural language processing tasks and achieved a certain level of success . | in recent years , mln has been adopted for several natural language processing tasks and achieved a certain level of success . |
to obtain these features , we use the word2vec implementation available in the gensim toolkit to obtain word vectors with dimension 300 for each word in the responses . | for word embeddings , we use an in-house java re-implementation of word2vec to build 300-dimensional vector representations for all types that occur at least 10 times in our unannotated corpus . |
transliteration is a subtask in ne translation , which translates nes based on the phonetic similarity . | transliteration is a process of translating a foreign word into a native language by preserving its pronunciation in the original language , otherwise known as translation-by-sound . |
we implement the weight tuning component according to the minimum error rate training method . | we perform minimum-error-rate training to tune the feature weights of the translation model to maximize the bleu score on development set . |
we use a popular word2vec neural language model to learn the word embeddings on an unsupervised tweet corpus . | we use the word2vec tool to train monolingual vectors , and the cca-based tool for projecting word vectors . |
this problem can be cast as an instance of synchronous itg parsing . | a better approach is to reduce this problem to an instance of synchronous itg parsing . |
the pcfg parser , used into our experiments , is the berkeley parser . | the probabilistic parser , used into our experiments , is the berkeley parser . |
however , chang et al argue that evaluations optimizing for perplexity encourage complexity at the cost of human interpretability . | however , chang et al have demonstrated that models with high perplexity do not necessarily generate semantically coherent topics in human perception . |
argument mining is a trending research domain that focuses on the extraction of arguments and their relations from text . | argument mining is a core technology for enabling argument search in large corpora . |
we use a pointer-generator network , which is a combination of a seq2seq model with attention and a pointer network . | we integrate le into a pointer-generator network , which is a state-of-the-art neural summarization model . |
in an evaluation on 826 essays , our approach significantly outperforms four baselines , one of which relies on features previously developed specifically for stance classification . | in an evaluation on 826 argumentative essays , our learning-based approach , which combines our novel features with n-gram features and faulkner ’ s features , significantly outperformed four baselines , including our reimplementation of faulkner ’ s system . |
the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd ) . | word sense disambiguation ( wsd ) is the task to identify the intended sense of a word in a computational manner based on the context in which it appears ( cite-p-13-3-4 ) . |
to exploit these kind of labeling constraints , we resort to conditional random fields . | to this end , we use first-and second-order conditional random fields . |
in this paper , we discuss the benefits of tightly coupling speech recognition and search components . | in this paper , we have presented techniques for tightly coupling asr and search . |
in addition , we utilize the pre-trained word embeddings with 300 dimensions from for initialization . | first , we initialize all words that exist in the vocabulary with pre-trained 300 dimension word2vec . |
we use the linear svm classifier from scikit-learn . | we used the implementation of random forest in scikit-learn as the classifier . |
bollmann and søgaard and bollmann et al recently showed that we can obtain more robust historical text normalization models by exploiting synergies across historical text normalization datasets and with related tasks . | bollmann and søgaard reported that a deep neural network architecture improves the normalization of historical texts , compared to both baseline using conditional random fields and norma tool . |
following , we use α , γ to represent a scfg rule extracted from the training corpus , where α and γ are source and target strings , respectively . | following , α , γ is used to represent a synchronous context free grammar rule extracted from the training corpus , where α and γ are the source-side and target-side rule respectively . |
socher et al propose to use recursive neural networks to learn syntactic-aware compositionality upon words . | socher et al applied recursive autoencoders to address sentence-level sentiment classification problems . |
to control overfitting in the maxent models , we used box-type inequality constraints . | for this purpose , we use the maximum entropy modeling with inequality constraints . |
for evaluation metric , we used bleu at the character level . | we adopted the case-insensitive bleu-4 as the evaluation metric . |
the penn discourse treebank is the largest available annotated corpora of discourse relations over 2,312 wall street journal articles . | the penn discourse treebank , developed by prasad et al , is currently the largest discourse-annotated corpus , consisting of 2159 wall street journal articles . |
we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting . | we then created trigram language models from a variety of sources using the srilm toolkit , and measured their perplexity on this data . |
the feature weights for the log-linear combination of the features are tuned using minimum error rate training on the devset in terms of bleu . | the weights in the log-linear model are tuned by minimizing bleu loss through mert on the dev set for each language pair and then report bleu scores on the test set . |
we implemented this model using the srilm toolkit with the modified kneser-ney discounting and interpolation options . | we estimate a 5-gram language model using interpolated kneser-ney discounting with srilm . |
the decoding problem has been proved to be np-complete even when the translation model is ibm model 1 and the language model is bi-gram . | in fact , it has been shown that the decoding problem for the presented machine translation models is np-complete . |
the language models in this experiment were trigram models with good-turing smoothing built using srilm . | the language models used were 7-gram srilm with kneser-ney smoothing and linear interpolation . |
in a distributional similarity-based model for selectional preferences is introduced , reminiscent of that of pantel and lin . | erk introduced a distributional similarity-based model for selectional preferences , reminiscent of that of pantel and lin . |
le and mikolov presented the paragraph vector algorithm to learn a fixed-size feature representation for documents . | le and mikolov introduced a distributed memory model with paragraph vectors . |
we used the pre-trained google embedding to initialize the word embedding matrix . | for all three classifiers , we used the word2vec 300d pre-trained embeddings as features . |
as the submission system , the cnn architecture itself would have ranked within the top ten of this sentiment analysis task . | just as the submission system , the cnn architecture itself would have ranked within the top ten of this sentiment analysis task . |
we train a 5-gram language model with the xinhua portion of english gigaword corpus and the english side of the training set using the srilm toolkit . | we train an english language model on the whole training set using the srilm toolkit and train mt models mainly on a 10k sentence pair subset of the acl training set . |
as our final set of baselines , we extend two simple techniques proposed by that use element-wise addition and multiplication operators to perform composition . | as our final set of baselines , we extend two simple techniques proposed by mitchell and lapata that use element-wise addition and multiplication operators to perform composition . |
a trigram language model with modified kneser-ney discounting and interpolation was used as produced by the srilm toolkit . | we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit . |
semantic role labeling ( srl ) is the task of identifying the arguments of lexical predicates in a sentence and labeling them with semantic roles ( cite-p-13-3-3 , cite-p-13-3-11 ) . | semantic role labeling ( srl ) is a major nlp task , providing a shallow sentence-level semantic analysis . |
we use the popular word2vec tool proposed by mikolov et al to extract the vector representations of words . | as a point of comparison , we will also present results from the word2vec model of mikolov et al trained on the same underlying corpus as our models . |
to get a dictionary of word embeddings , we use the word2vec tool and train it on the chinese gigaword corpus . | we perform pre-training using the skip-gram nn architecture available in the word2vec tool . |
blacoe and lapata , 2012 ) demonstrate the effectiveness of combining latent representations with simple element-wise operations , for the purpose of identifying semantic similarity amongst larger text units . | alternatively , blacoe and lapata show that latent word representations can be combined with simple element-wise operations to identify the semantic similarity of larger units of text . |
more recently , li and roth have developed a machine learning approach which uses the snow learning architecture . | li and roth have developed a machine learning approach which uses the snow learning architecture . |
the eckgs follow the simple event model , which represents events as instances through uris with relations to their participants , location , and time . | within gaf , instances are represented according to the simple event model using a unique uri and relations to actors , places and time . |
in this paper , we explore use of word embeddings to capture context . | this paper shows the benefit of features based on word embedding for sarcasm detection . |
our model is evaluated on the english penn treebank , chinese short message and swb-fisher . | our main experiments are performed on dependency trees extracted from english wsj treebank . |
stance detection is the task of automatically determining whether the authors of a text are against or in favour of a given target . | stance detection is the task of automatically determining from the text whether the author of the text is in favor of , against , or neutral towards a proposition or target . |
socher et al present a novel recursive neural network for relation classification that learns vectors in the syntactic tree path that connects two nominals to determine their semantic relationship . | socher et al , 2012 ) introduced a recursive neural network model to learn compositional vector representations for phrases and sentences of arbitrary syntactic types and length . |
word sense disambiguation ( wsd ) is the task of determining the correct meaning ( “ sense ” ) of a word in context , and several efforts have been made to develop automatic wsd systems . | word sense disambiguation ( wsd ) is the task of identifying the correct sense of an ambiguous word in a given context . |
in natural language , a word often assumes different meanings , and the task of determining the correct meaning , or sense , of a word in different contexts is known as word sense disambiguation ( wsd ) . | word sense disambiguation ( wsd ) is a natural language processing ( nlp ) task in which the correct meaning ( sense ) of a word in a given context is to be determined . |
semantic parsing is the task of mapping natural language utterances to machine interpretable meaning representations . | semantic parsing is the task of mapping a natural language ( nl ) sentence into a complete , formal meaning representation ( mr ) which a computer program can execute to perform some task , like answering database queries or controlling a robot . |
we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing . | for all experiments , we used a 5-gram english language model trained on the afp and xinhua portions of the gigaword v3 corpus with modified kneser-ney smoothing . |
in a previous work ( cite-p-17-1-14 ) , we have shown that the relationship between the nouns in a noun-noun compound can be characterized using verbs extracted from the web , but . | in a previous work ( cite-p-17-1-14 ) , we have shown that the relationship between the nouns in a noun-noun compound can be characterized using verbs extracted from the web , but we provided no formal evaluation . |
we used moses , a phrase-based smt toolkit , for training the translation model . | we used the phrase-based translation system in moses as a baseline smt system . |
moreover , the principle of sensitivity states that when producing a referring expression , the speaker should prefer features which the hearer is known to be able to interpret and perceive . | also , the principle of sensitivity states that when producing a referring expression , one should prefer features the hearer is known to be able to interpret and see . |
conditional random fields are undirected graphical models to calculate the conditional probability of values on designated output nodes given values on designated input nodes . | conditional random fields are undirected graphical models trained to maximize the conditional probability of the desired outputs given the corresponding inputs . |
in this paper we propose a data-driven approach for generating short children ’ s stories . | in this paper we proposed a novel method to computational story telling . |
with the advent of recurrent neural network based language models , some rnn based nlg systems have been proposed . | recently , a recurrent neural network architecture was proposed for language modelling . |
we substitute our language model and use mert to optimize the bleu score . | in order to measure translation quality , we use bleu and ter scores . |
here , we choose the skip-gram model and continuous-bag-of-words model for comparison with the lbl model . | we also evaluate a number of methods based directly on word vectors of the continuous bag-of-words model . |
peng et al achieved better results by using a conditional random field model . | liu et al focused on the sentence boundary detection task , by making use of conditional random fields . |
we adopt pretrained embeddings for word forms with the provided training data by word2vec . | in our word embedding training , we use the word2vec implementation of skip-gram . |
we use the constrained decoding feature included in moses to this purpose . | we used a phrase-based smt model as implemented in the moses toolkit . |
in this paper , we present a chunk based partial parser , following ideas from ( cite-p-21-1-0 ) , which is used to generate shallow syntactic structures from speech . | in this paper , we present a chunk based partial parsing system for spontaneous , conversational speech in unrestricted domains . |
in this paper , we reformulated the traditional linear vector-space models as tensor-space models . | in this paper , we shift the model from vector-space to tensor-space . |
this project elaborates on two experiments carried out to analyze the sentiment of tweets , namely , subtask a and subtask b from semeval-2016 task 4 . | this project elaborates on two experiments carried out to analyze the sentiment of tweets from semeval-2016 task 4 subtask a and subtask b . |
we present trip-maml , which extends the trip-ma dataset of cite-p-14-3-1 . | we have presented trip-maml a multilingual extension of trip-ma , originally presented in ( cite-p-14-3-1 ) . |
we dedicate to the topic of aspect ranking , which aims to automatically identify important aspects of a product from consumer reviews . | in this paper , we dedicate to the topic of aspect ranking , which aims to automatically identify important product aspects from online consumer reviews . |
blitzer et al apply the structural correspondence learning algorithm to train a cross-domain sentiment classifier . | blitzer et al investigate domain adaptation for sentiment classifiers , focusing on online reviews for different types of products . |
coreference resolution is a set partitioning problem in which each resulting partition refers to an entity . | coreference resolution is the task of determining whether two or more noun phrases refer to the same entity in a text . |
coreference resolution is the problem of identifying which mentions ( i.e. , noun phrases ) refer to which real-world entities . | coreference resolution is the task of partitioning a set of entity mentions in a text , where each partition corresponds to some entity in an underlying discourse model . |
zhang and clark improve this model by using both character and word-based decoding . | more recently , zhang and clark proposed an efficient character-based decoder for their word-based model . |
multilingual applications frequently involve dealing with proper names , but names are often missing . | in multilingual applications such as clir and machine translation , all types of names must be translated . |
the srilm toolkit was used to build the trigram mkn smoothed language model . | ngram features have been generated with the srilm toolkit . |
words representations as vectors in a multidimensional space allows to capture the semantic and syntactic properties of the language . | word embedding provides an unique property to capture semantics and syntactic information of different words . |
the weights λ m in the log-linear model were trained using minimum error rate training with the news 2009 development set . | the feature weights for each system were tuned on development sets using the moses implementation of minimum error rate training . |
mt , there is a tradeoff between taking advantage of linguistic analysis , versus allowing the model to exploit linguistically unmotivated mappings learned from parallel training data . | in any such system , there is a natural tension between taking advantage of the linguistic analysis , versus allowing the model to use linguistically unmotivated mappings learned from parallel training data . |
named entity recognition ( ner ) is the process by which named entities are identified and classified in an open-domain text . | named entity recognition ( ner ) is a frequently needed technology in nlp applications . |
in this paper , we have pointed to another methodological challenge in designing machine reading tasks : different writing tasks used to generated . | in this paper , we have pointed to another methodological challenge in designing machine reading tasks : different writing tasks used to generated the data affect writing style , confounding classification problems . |
the induction of selectional preferences from corpus data was pioneered by resnik . | the idea of inducing selectional preferences from corpora was introduced by resnik . |
recently , convolutional neural networks have yielded best performance on many text classification tasks . | deep neural networks have shown great promises at capturing salient features for these complex tasks . |
sentiment analysis is a recent attempt to deal with evaluative aspects of text . | sentiment analysis is a growing research field , especially on web social networks . |
part-of-speech ( pos ) tagging is a critical task for natural language processing ( nlp ) applications , providing lexical syntactic information . | part-of-speech ( pos ) tagging is the task of assigning each of the words in a given piece of text a contextually suitable grammatical category . |
in this paper , we present a unified model for both word sense representation and disambiguation . | in this paper , we present a unified model for word sense representation and disambiguation that uses one representation per sense . |
text classification is a well-studied problem in machine learning , natural language processing , and information retrieval . | text classification is a fundamental problem in natural language processing ( nlp ) . |
we obtained distributed word representations using word2vec with skip-gram . | in both cases , we computed the word embeddings using the word2vec implementation of gensim . |
representation learning is the dominant technique for unsupervised domain adaptation , but existing approaches have two major weaknesses . | representation learning is a promising technique for discovering features that allow supervised classifiers to generalize from a source domain dataset to arbitrary new domains . |
following , we use the word analogical reasoning task to evaluate the quality of word embeddings . | second , we utilize word embeddings to represent word semantics in dense vector space . |
we trained the pos tagger using the aforementioned sections of the atb . | we trained the parser on the training portion of patb part 3 . |
word sense disambiguation ( wsd ) is the task of identifying the correct meaning of a word in context . | word sense disambiguation ( wsd ) is a key task in computational lexical semantics , inasmuch as it addresses the lexical ambiguity of text by making explicit the meaning of words occurring in a given context ( cite-p-18-3-10 ) . |
our semantic parser is based on the dual-rnn sequence-to-sequence architecture with attention originally proposed for neural machine translation . | our system for this shared task is based on an encoder-decoder model proposed by bahdanau et al for neural machine translation . |
the system used a tri-gram language model built from sri toolkit with modified kneser-ney interpolation smoothing technique . | we used a 5-gram language model with modified kneser-ney smoothing implemented using the srilm toolkit . |
for this task , we used the svm implementation provided with the python scikit-learn module . | once we have extracted all the features , we train a linear svm using the python based scikit-learn library for the purpose of classification . |
akkaya et al , martín-wanton et al perform sentiment classification of individual sentences . | akkaya et al , martín-wanton et al deal with sentiment classification of sentences . |
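
The rows above are raw viewer output; in practice a two-column dataset like this is usually read with the Hugging Face `datasets` library. The snippet below is a minimal sketch under that assumption: the repository id `user/sentence-pairs` and the `train` split are placeholders, not this dataset's actual identifiers, while the column names `sentence1` and `sentence2` match the table header.

```python
# Minimal sketch: reading a two-column sentence-pair dataset with the
# Hugging Face `datasets` library. The repository id and split name are
# placeholders (assumptions), not this dataset's actual identifiers.
from datasets import load_dataset

pairs = load_dataset("user/sentence-pairs", split="train")  # hypothetical id

# Each row holds one paraphrase pair under the columns shown above.
for row in pairs.select(range(3)):
    print(row["sentence1"])
    print(row["sentence2"])
    print("---")
```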