text: string, lengths 82 to 736
label: int64, values 0 and 1
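A minimal loading sketch for rows in this dump, under the assumption that the file alternates one text row (two sentences joined by "---") with one label row (0 or 1); the file name "pairs.txt" and the helper name are hypothetical, not part of the dataset.

```python
# Minimal sketch (assumptions: rows alternate text/label as in this dump;
# "pairs.txt" is a hypothetical file name for the raw rows).
from typing import List, Tuple


def load_pairs(path: str) -> List[Tuple[str, str, int]]:
    records = []
    with open(path, encoding="utf-8") as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    # Walk the rows two at a time: the text row, then its integer label.
    for text_line, label_line in zip(lines[0::2], lines[1::2]):
        left, _, right = text_line.partition("---")
        records.append((left.strip(), right.strip(), int(label_line)))
    return records


if __name__ == "__main__":
    pairs = load_pairs("pairs.txt")
    print(len(pairs), pairs[0])
```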
we ran mt experiments using the moses phrase-based translation system---we used moses , a state-of-the-art phrase-based smt model , in decoding
1
we use the selectfrommodel 4 feature selection method as implemented in scikit-learn---we implemented the algorithms in python using the stochastic gradient descent method for nmf from the scikit-learn package
1
for each one of the 6 languages which our approach covers , we built a phrase-based machine translation model using the moses toolkit---pitler et al used the data from vadas and curran for a parser applicable on base noun phrases of any length including coordinations
0
the next section gives an overview of related work---in this section , we summarize related work
1
accordingly , we have trained 3- and 5-dimensional models for english and german syllable structure---we initialize the embedding layer using embeddings from dedicated word embedding techniques word2vec and glove
0
for run 2 , we use wapiti , an efficient off-the-shelf linear-chain crf sequence classifier---to train a crf model , we use the wapiti sequence labelling toolkit
1
following pitler et al , we report in table 1 figures for the training sets of six languages used in the conll-x shared task on dependency parsing---in order to provide results on additional languages , we present in table 3 a comparison to the work of gillenwater et al , using the conll-x shared task data
1
however , we use a large 4-gram lm with modified kneser-ney smoothing , trained with the srilm toolkit , stolcke , 2002 and ldc english gigaword corpora---mikolov et al proposed a faster skip-gram model word2vec 5 which tries to maximize classification of a word based on another word in the same sentence
0
this paper describes our system participation in the semeval-2017 task 8 ‘rumoureval : determining rumour veracity and support for rumours’---to calculate language model features , we train traditional n-gram language models with ngram lengths of four and five using the srilm toolkit
0
later , it has been applied in natural language processing tasks and outperformed traditional models such as bag of words , n-grams and their tfidf variants---performance is measured in terms of perplexity
0
we used a freely-available pretrained model of 300 dimensions trained on approximately 100 billion words from news articles---we used all post bodies in the unlabeled dataset to train a skip-gram model of 50 dimensions
1
we use a standard maximum entropy classifier implemented as part of mallet---we utilize a maximum entropy model to design the basic classifier used in active learning for wsd
1
recently , a series of methods have been developed , which train a classifier for each label , organize the classifiers in a partially ordered structure and take predictions produced by the former classifiers as the latter classifiers ’ features---style methods have been developed , which train a classifier for each label , organize the classifiers in a partially ordered structure and take predictions produced by the former classifiers
1
for all models , we use l 2 regularization and run 100 epochs of adagrad with early stopping---for the optimization process , we apply the diagonal variant of adagrad with mini-batches
1
some opinion mining methods in english rely on the english lexicon sentiwordnet for extracting word-level sentiment polarity---in this paper , we propose a novel family of recurrent neural network unit : the context-dependent additive recurrent neural network ( carnn ) that is designed specifically to leverage
0
most previous studies on meeting summarization have focused on extractive summarization---coreference resolution is a key problem in natural language understanding that still escapes reliable solutions
0
word sense disambiguation ( wsd ) is the nlp task that consists in selecting the correct sense of a polysemous word in a given context---word sense disambiguation ( wsd ) is the task of identifying the correct sense of an ambiguous word in a given context
1
the language models are trained on the corresponding target parts of this corpus using the sri language model tool---the target language model is trained by the sri language modeling toolkit on the news monolingual corpus
1
we implemented a greedy transition-based parser , and used rich contextual features following zhang and nivre---the parsing model is a shift-reduce dependency parser , using the higherorder features from zhang and nivre
1
in their proposed model , yang et al . ( 2016 ) use bidirectional gru modules to represent segments as well as documents , whereas we use a more efficient cnn encoder to compose words into segment vectors 2 ( i.e. , math-w-3-4-5-220 )---we measure machine translation performance using the bleu metric
0
in a headed tree , each terminal word can be uniquely labeled with a governing word and grammatical relation---in the introduction , nonce2vec is designed with a view to be an essential component of an incremental concept
0
we use the 300-dimensional pre-trained word2vec 3 word embeddings and compare the performance with that of glove 4 embeddings---in addition , we utilize the pre-trained word embeddings with 300 dimensions from for initialization
1
word alignment is a natural language processing task that aims to specify the correspondence between words in two languages ( cite-p-19-1-0 )---recently , distributional features have also been used directly to train classifiers that classify pairs of words as being synonymous or not
0
under this setting , we compare our method to the spin model described in---our model by construction is similar to approach based on the ising spin model described in
1
word sense disambiguation ( wsd ) is the task of determining the correct meaning for an ambiguous word from its context---evaluation measures are sufficient to discern simulated from real dialogs
0
high quality word embeddings have been proven helpful in many nlp tasks---here , we use negative sampling as a speed-up technique
0
we first sentencesegment the gigaword corpus using the nltk sentence segmenter---more concretely , faruqui and dyer use canonical correlation analysis to project the word embeddings in both languages to a shared vector space
0
a multiword expression can be defined as a combination of words for which syntactic or semantic properties of the whole expression can not be obtained from its parts---a multiword expression can be defined as any word combination for which the syntactic or semantic properties of the whole expression can not be obtained from its parts
1
topic models are commonly inferred using either collapsed gibbs sampling rosen-zvi et al 2004 ) or methods based on variational inference---conventional topic models learning approaches are based on gibbs sampling or variational expectation maximization algorithm
1
marcu and wong propose a model to learn lexical correspondences at the phrase level---marcu and wong proposed a phrase-based context-free joint probability model for lexical mapping
1
carreras et al and koo et al introduced frameworks for joint learning of phrase-structure and dependency parsers and showed improvements on both tasks for english---carreras et al and rush et al introduced frameworks for joint learning of phrase and dependency structures , and showed improvements on both tasks for english
1
stolcke et al point out that the use of dialogue acts is a useful first level of analysis for describing discourse structure---stolcke et al used hmms as a general model of discourse with an application to speech acts in conversations
1
kambhatla leverages lexical , syntactic and semantic features , and feeds them to a maximum entropy model---kambhatla employs maximum entropy models to combine diverse lexical , syntactic and semantic features derived from the text for relation extraction
1
corry relies on a rich linguistically motivated feature set , which has , however , been manually reduced to 64 features for efficiency reasons---in a language generation system , a content planner typically uses one or more “ plans ” to represent the content to be included in the output
0
we evaluate our semantic parser on the webques-tions dataset , which contains 5,810 question-answer pairs---for evaluation , we use the webques-tions , a benchmark dataset for qa on freebase
1
the language model was constructed using the srilm toolkit with interpolated kneser-ney discounting---we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing
1
we also use a 4-gram language model trained using srilm with kneser-ney smoothing---for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing
1
combined with other features , they obtained a test accuracy of 75.29 % on the toefl11 dataset---and combined with customarily used word n-grams , they have high performance in terms of accuracy , when tested on the toefl11 corpus
1
our model is a structured conditional random field---we primarily compared our model with conditional random fields
1
luong et al , 2013 ) utilized recursive neural networks in which inputs are morphemes of words---for which the automatic evaluation metrics proposed to date for machine translation and automatic summarization can be seen as particular instances
0
given context vectors , lin and pantel used a symmetric similarity metric to find candidate paraphrases---lin and pantel use a standard monolingual corpus to generate paraphrases , based on dependancy graphs and distributional similarity
1
klein and manning , for example , show that the performance of an unlexicalised model can be substantially improved by splitting the existing symbols down into finer categories---klein and manning show that much of the gain in statistical parsing using lexicalized models comes from the use of a small set of function words
1
we estimate a 5-gram language model using interpolated kneser-ney discounting with srilm---we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus
1
in a second baseline model , we also incorporate 300-dimensional glove word embeddings trained on wikipedia and the gigaword corpus---we use 300-dimensional glove vectors trained on 6b common crawl corpus as word embeddings , setting the embeddings of outof-vocabulary words to zero
1
grammar induction is the task of inducing high-level rules for application of grammars in spoken dialogue systems---the language model is a trigram-based backoff language model with kneser-ney smoothing , computed using srilm and trained on the same training data as the translation model
0
a 4-gram language model is trained on the xinhua portion of the gigaword corpus with the srilm toolkit---a trigram language model with modified kneser-ney discounting and interpolation was used as produced by the srilm toolkit
1
it is worth noting that the morpheme feature is employed to better represent the compositional semantics inside chinese words---we employ a new feature ( morpheme feature ) which is particularly appropriate for chinese
1
to solve this task we use a multi-class support vector machine as implemented in the liblinear library---we train a linear support vector machine classifier using the efficient liblinear package
1
twitter is a popular microblogging service , which , among other things , is used for knowledge sharing among friends and peers---twitter is a well-known social network service that allows users to post short 140 character status update which is called “ tweet ”
1
we also use a 4-gram language model trained using srilm with kneser-ney smoothing---we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing
1
to integrate their strengths , in this paper , we propose a forest-based tree sequence to string translation model---in this paper , we propose a forest-based tree sequence to string model , which is designed to integrate the strengths of the forest-based and the tree
1
sarcasm is defined as ‘ the use of irony to mock or convey contempt ’ 1---sarcasm is defined as ‘ a cutting , often ironic remark intended to express contempt or ridicule ’ 1
1
figure 6 shows that our approaches consistently outperform the baseline and the state-of-the-art methods with diverse feature sparsity degrees---features demonstrate that our low-rank matrix completion approach significantly outperforms the baseline and the state-of-the-art methods
1
both corpora were extracted from the open parallel corpus opus---the central component of our non-parametric bayesian model are pitman-yor processes , which are a generalization of the dirichlet processes
0
different dialogue act labeling standards and datasets have been provided , including switchboard-damsl , icsi-mrda and ami---hermjakob implemented a shift-reduce parser for korean trained on very limited data , and sarkar and han used an earlier version of the treebank to train a lexicalized tree adjoining grammar
0
we apply our model to the english portion of the conll 2012 shared task data , which is derived from the ontonotes corpus---we train and evaluate our model on the english corpus of the conll-2012 shared task
1
we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model---for the tree-based system , we applied a 4-gram language model with kneserney smoothing using srilm toolkit trained on the whole monolingual corpus
1
lexical simplification is a subtask of text simplification ( cite-p-16-3-3 ) concerned with replacing words or short phrases by simpler variants in a context aware fashion ( generally synonyms ) , which can be understood by a wider range of readers---lexical simplification is a specific case of lexical substitution where the complex words in a sentence are replaced with simpler words
1
we use srilm to build 5-gram language models with modified kneser-ney smoothing---this type of features are based on a trigram model with kneser-ney smoothing
1
the system is an almost delexicalized parser which does not need training data to analyze romance languages---and exhibits stable performance across languages
0
sun and xu uses punctuation information as discrete feature in a sequence labeling framework , which shows improvement compared to the pure sequence labeling approach---chang and han , sun and xu used rich statistical information as discrete features in a sequence labeling framework
1
latent dirichlet allocation is a generative model that overcomes some of the limitations of plsi by using a dirichlet prior on the topic distribution---latent dirichlet allocation is a generative probabilistic topic model where documents are represented as random mixtures over latent topics , characterized by a distribution over words
1
kalchbrenner et al proposed a dynamic convolution neural network with multiple layers of convolution and k-max pooling to model a sentence---kalchbrenner et al developed a cnnbased model that can be used for sentence modelling problems
1
sentiment analysis is a fundamental problem aiming to give a machine the ability to understand the emotions and opinions expressed in a written text---we use pre-trained 50-dimensional word embeddings vector from glove
0
1a bunsetsu is a common unit when syntactic structures in japanese are discussed---a bunsetsu consists of one independent word and zero or more ancillary words
1
the berkeley parser is an efficient and effective parser that introduces latent annotations to refine syntactic categories to learn better pcfg grammars---the berkeley parser is an efficient and effective parser that introduces latent annotations to learn high accurate context-free grammars directly from a treebank
1
in this work , we propose a multi-space variational encoder-decoder framework for labeled sequence transduction problem---in this work , we propose a new framework for labeled sequence transduction problems : multi-space variational encoder-decoders
1
finally , we explain how to let our model additionally learn the language ’ s canonical word order---additionally letting our model learn the language ’ s canonical word order improves its performance and leads to the highest semantic parsing
1
in this paper , we have discussed possibilities to translate via pivot languages on the character level---in this paper we investigate the use of character-level translation models to support the translation from and to under-resourced languages
1
this capability is very desirable as shown by the success of the rule-based deterministic approach of raghunathan et al in the conll shared task 2011---in fact , the rule-based system of raghunathan et al exhibited the top score in the recent conll evaluation
1
we have shown co-training to be a promising approach for predicting emotions with spoken dialogue data---we investigate the applicability of co-training to train classifiers that predict emotions in spoken dialogues
1
unfortunately , this approach is difficult to utilize because it requires multiple segmenters that behave differently on the same input---however , much of this work has relied on multiple segmenters that perform differently on the same input
1
we have not yet succeeded , however , in combining the benefits of both prosody and the hbm---and while we have not been able to usefully employ both prosody and the hbm technique together , our hbm is competitive
1
semantic parsing is the task of transducing natural language ( nl ) utterances into formal meaning representations ( mrs ) , commonly represented as tree structures---semantic parsing is the task of mapping natural language sentences to a formal representation of meaning
1
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided---in addition , we use an english corpus of roughly 227 million words to build a target-side 5-gram language model with srilm in combination with kenlm
1
we present a graph-based semi-supervised learning for the question-answering ( qa ) task for ranking candidate sentences---salehi et al show that word embeddings are more accurate in predicting compositionality than a simplistic count-based dsm
0
different types of architectures such as feedforward neural networks and recurrent neural networks have since been used for language modeling---recurrent neural network architectures have proven to be well suited for many natural language generation tasks
1
for improving the word alignment , we use the word-classes that are trained from a monolingual corpus using the srilm toolkit---we train a 4-gram language model on the xinhua portion of the english gigaword corpus using the srilm toolkits with modified kneser-ney smoothing
1
stance detection is the task of automatically determining from the text whether the author of the text is in favor of , against , or neutral towards a proposition or target---we used moses as the implementation of the baseline smt systems
0
based on uima , it allows for efficient parallel processing of large volumes of text---distribution allows for straightforward high-performance nlp processing
1
lexical simplification is a popular task in natural language processing and it was the topic of a successful semeval task in 2012 ( cite-p-14-1-9 )---we used the pre-trained word embeddings that were learned using the word2vec toolkit on google news dataset
0
gildea and jurafsky are the only ones applying selectional preferences in a real srl task---gildea and jurafsky is the only one applying selectional preferences in a real srl task
1
in this paper , we propose a novel approach for disfluency detection---recently , inversion transduction grammars , namely itg , have been used to constrain the search space for word alignment
0
to train the link embeddings , we use the speedy , skip-gram neural language model of mikolov et al via their toolkit word2vec---we use a popular word2vec neural language model to learn the word embeddings on an unsupervised tweet corpus
1
dependency parsing is the task of building dependency links between words in a sentence , which has recently gained a wide interest in the natural language processing community---therefore , dependency parsing is a potential “ sweet spot ” that deserves investigation
1
this single endto-end nmt model outperforms the best conventional smt system ( cite-p-20-1-5 ) and achieves a state-of-the-art performance---single nmt model achieves state-of-the-art performance and outperforms the best conventional model
1
moore and lewis calculated the difference of the cross entropy values for a given sentence , based on language models from the source domain and the target domain---another approach is taken by , where , based on source and target language models , the authors calculated the difference of the cross-entropy values for a given sentence
1
coreference resolution is a field in which major progress has been made in the last decade---coreference resolution is the task of determining which mentions in a text refer to the same entity
1
mikolov et al observed a strong similarity of the geometric arrangements of corresponding concepts between the vector spaces of different languages , and suggested that a crosslingual mapping between the two vector spaces is technically plausible---and more importantly , if vectors learned for languages are manually rotated , mikolov et al observed that languages share similar geometric arrangements in vector spaces
1
in this work , we tackle the challenge of extracting bursty phrases without any restriction of forms---in this work , we aim to accurately and exhaustively extract bursty phrases of arbitrary forms
1
in this paper , we propose a constrained word lattice to combine smt and tm at phrase-level---in this paper , we propose using a constrained word lattice , which encodes input phrases and tm constraints
1
the availability of large document-summary corpora have opened up new possibilities for using statistical text generation techniques for abstractive summarization---availability of large document-summary corpora , as we discuss in section 3 , has opened up new possibilities for applying statistical text generation approaches to summarization
1
the skip-gram and continuous bag-of-words models of mikolov et al propose a simple single-layer architecture based on the inner product between two word vectors---mikolov et al further proposed continuous bagof-words and skip-gram models , which use a simple single-layer architecture based on inner product between two word vectors
1
for this , we utilize the publicly available glove 1 word embeddings , specifically ones trained on the common crawl dataset---we use 300 dimensional glove embeddings trained on the common crawl 840b tokens dataset , which remain fixed during training
1
in this paper we addressed the problem of recommending questions from large archives of community question answering data based on users’ information needs---to address the aforementioned problems of the vsm model , the sentiment vector space model ( s-vsm ) is proposed
0
user simulation is frequently used to train statistical dialog managers for task-oriented domains---dialogs has become vital for training statistical dialog managers in task-oriented domains
1
for minimum error rate tuning , we use nist mt-02 as the development set for the translation task---we perform minimum-error-rate training to tune the feature weights of the translation model to maximize the bleu score on development set
1
with the svm reranker , we obtain a significant improvement in bleu scores over white & rajkumar’s averaged perceptron model on both development and test data---with the svm reranker , we obtain a significant improvement in bleu scores over white & rajkumar ’ s averaged perceptron model
1
the word embeddings are initialized using the pre-trained glove , and the embedding size is 300---and we used a graph kernel instead of a sequence kernel to measure the similarity between pairs of documents
0
tuning is performed to maximize bleu score using minimum error rate training---the weights used during the reranking are tuned using the minimum error rate training algorithm
1