text (stringlengths 82-736) | label (int64 0-1) |
---|---|
in sections 3 and 4 , we present results of experiments that investigate how humans use colour terms for reference in production and comprehension---convolutional neural networks have recently achieved remarkably strong performance also on the practically important task of sentence classification | 0 |
a letter-trigram language model with sri lm toolkit was then built using the target side of ne pairs tagged with the above position information---a 5-gram language model was created with the sri language modeling toolkit and trained using the gigaword corpus and english sentences from the parallel data | 1 |
we used scikit-learn 4 , a machine learning library for python , to build a question classifier based on the svm algorithm and linear kernel function---we used the scikit-learn python machine learning library to implement the feature extraction pipeline and the support vector machine classifier | 1 |
we parse the senseval test data using the stanford parser generating the output in dependency relation format---we convert both data sets to stanford dependencies with the stanford dependency converter | 1 |
we use the wordsim353 dataset , divided into similarity and relatedness categories---we assess intrinsic embedding quality by considering correlation with human judgment on the wordsim353 test set | 1 |
this grammar consists of a lexicon which pairs words or phrases with regular expression functions---each grammar consists of a set of rules evaluated in a leftto-right fashion over the input annotations , with multiple grammars cascaded together and evaluated bottom-up | 1 |
generally phrase-based smt models outperform word-based ones---phrase-based smt systems have been shown to outperform word-based approaches | 1 |
relation extraction ( re ) is the task of extracting instances of semantic relations between entities in unstructured data such as natural language text---relation extraction is the task of tagging semantic relations between pairs of entities from free text | 1 |
sentiwordnet is another popular lexical resource for opinion mining---in the intrinsic evaluation , we use bleu , which was proposed as an automatic evaluation measure for smt , and human judgments | 0 |
we use liblinear logistic regression module to classify document-level embeddings---we use liblinear 9 to solve the lr and svm classification problems | 1 |
semantic parsing is the task of converting a sentence into a representation of its meaning , usually in a logical form grounded in the symbols of some fixed ontology or relational database ( cite-p-21-3-3 , cite-p-21-3-4 , cite-p-21-1-11 )---semantic parsing is the task of translating natural language utterances into a machine-interpretable meaning representation ( mr ) | 1 |
callison-burch et al used pivot languages for paraphrase extraction to handle the unseen phrases for phrase-based smt---in mt , callison-burch et al utilized paraphrases of unseen source phrases to alleviate data sparseness | 1 |
distributed representations of words have been widely used in many natural language processing tasks---word embeddings have recently led to improvements in a wide range of tasks in natural language processing | 1 |
coreference resolution is a challenging task , that involves identification and clustering of noun phrases mentions that refer to the same real-world entity---coreference resolution is a set partitioning problem in which each resulting partition refers to an entity | 1 |
mauser et al integrated a logistic regression model predicting target words from all the source words in a pbsmt---mauser et al presented discriminative lexicon models to predict target words | 1 |
as a byproduct , this approach further provides a new , effective perspective on handling those missing relations---as a byproduct , this provides a new perspective on handling missing relations | 1 |
semantic parsing is the problem of translating human language into computer language , and therefore is at the heart of natural language understanding---semantic parsing is the task of mapping a natural language ( nl ) sentence into a completely formal meaning representation ( mr ) or logical form | 1 |
our model can make full use of all informative sentences and alleviate the wrong labelling problem for distant supervised relation extraction---show that , our model can make full use of all informative sentences and effectively reduce the influence of wrong labelled instances | 1 |
the words of input sentences are first converted to vector representations learned from word2vec tool---the character embeddings are computed using a method similar to word2vec | 1 |
for 23 nouns 8 shown in as examples of nouns used as both mass and count nouns , accuracy was calculated using the bnc and ten-fold cross validation---for 25 nouns shown in as examples of nouns used as both mass and count nouns , accuracy on the bnc was calculated using ten-fold cross validation | 1 |
in the dr subtask , the system achieved the median score in phase 1 and obtained a lower r in phase 2 , but in both cases it performs better than baseline---the srilm toolkit was used to build this language model | 0 |
for optimization , we used adam with default parameters---we used adam optimizer with its standard parameters | 1 |
we use 4-gram language models in both tasks , and conduct minimum error-rate training to optimize feature weights on the dev set---we adapt the minimum error rate training algorithm to estimate parameters for each member model in co-decoding | 1 |
in the most likely scenario – porting a parser to a novel domain for which there is little or no annotated data – the improvements can be quite large---and thus both of the case dependencies and specific sense restriction selected by the proposed method have much contribution to improving the performance in subcategorization preference test | 0 |
word alignment models were first introduced in statistical machine translation---we use the pre-trained word2vec embeddings provided by mikolov et al as model input | 0 |
we define a conditional random field for this task---as a classifier , we choose a first-order conditional random field model | 1 |
coreference resolution is a task aimed at identifying phrases ( mentions ) referring to the same entity---coreference resolution is the task of automatically grouping references to the same real-world entity in a document into a set | 1 |
coreference resolution is the task of determining when two textual mentions name the same individual---coreference resolution is the task of grouping all the mentions of entities 1 in a document into equivalence classes so that all the mentions in a given class refer to the same discourse entity | 1 |
we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings---we represent terms using pre-trained glove wikipedia 6b word embeddings | 1 |
these tools are , however , not directly applicable to the task of multi-document summarization---tools are not well-suited for this task , as they do not support cross-document annotations , the modeling of complex tasks | 1 |
the feature weights are tuned with minimum error-rate training to optimise the character error rate of the output---the feature weights of the translation system are tuned with the standard minimum-error-rate training to maximize the systems bleu score on the development set | 1 |
thus , event extraction is a difficult task and requires substantial training data---event extraction is a challenging task , which aims to discover event triggers in a sentence and classify them by type | 1 |
sentence compression is a paraphrasing task where the goal is to generate sentences shorter than given while preserving the essential content---sentence compression is the task of shortening a sentence while preserving its important information and grammaticality | 1 |
translation quality is measured by case-insensitive bleu on newstest13 using one reference translation---the translation quality is evaluated by case-insensitive bleu and ter metrics using multeval | 1 |
coreference resolution is a multi-faceted task : humans resolve references by exploiting contextual and grammatical clues , as well as semantic information and world knowledge , so capturing each of these will be necessary for an automatic system to fully solve the problem---since coreference resolution is a pervasive discourse phenomenon causing performance impediments in current ie systems , we considered a corpus of aligned english and romanian texts to identify coreferring expressions | 1 |
sentiment analysis is the task of identifying the polarity ( positive , negative or neutral ) of review---sentiment analysis is a natural language processing ( nlp ) task ( cite-p-10-1-14 ) which aims at classifying documents according to the opinion expressed about a given subject ( federici and dragoni , 2016a , b ) | 1 |
since discourse is a natural form of communication , it favors the observation of the patient ’ s functionality in everyday life---discourse is a structurally organized set of coherent text segments | 1 |
we use the cdec decoder 5 and induce scfg grammars from two sets of symmetrized alignments using the method described by chiang---for direct translation , we use the scfg decoder cdec 4 and build grammars using its implementation of the suffix array extraction method described in lopez | 1 |
we present a weakly-supervised induction method to assign semantic information to food items---we present a semi-supervised graph-based approach to induce these food | 1 |
in this work , we look at methods for bootstrapping the production of these statistical models without having an annotated treebank---in this work , we show that , by taking advantage of the constrained nature of these hpsg grammars , we can learn a discriminative parse selection model from raw text | 1 |
for this , we used the combination of the entire swedish-english europarl corpus and the smultron data---for our baseline , we used a small parallel corpus of 30k english-spanish sentences from the europarl corpus | 1 |
for example , turian et al have improved the performance of chunking and named entity recognition by using word embedding also as one of the features in their crf model---similarly , turian et al find that using brown clusters , cw embeddings and hlbl embeddings for name entity recognition and chunking tasks together gives better performance than using these representations individually | 1 |
among others , there are studies using phrase-based statistical machine translation , which does not limit the types of grammatical errors made by a learner---there are several studies about grammatical error correction using phrase-based statistical machine translation | 1 |
we used the svm implementation provided within scikit-learn---we trained the five classifiers using the svm implementation in scikit-learn | 1 |
in this work , we propose a role identification model , which iteratively optimizes a team member role assignment that can predict the teamwork quality to the utmost extent---we utilise liblinear-java 3 with the l2-regularised l2-loss linear svm setting for the svm implementation , and snowball 4 for the stemmer | 0 |
for the word-embedding based classifier , we use the glove pre-trained word embeddings---we use 300d glove vectors trained on 840b tokens as the word embedding input to the lstm | 1 |
for language modeling , we use the english gigaword corpus with 5-gram lm implemented with the kenlm toolkit---for example , citation structure or rebuttal links , was used as extra information to model agreements or disagreements in debate posts and to infer their labels | 0 |
djuric et al , 2015 ) highlighted the effectiveness of comment embeddings in detection of hate speech , by joint modelling comments and words using continuous-bag of words to generate a low dimensional embedding---the translation quality is evaluated by bleu and ribes | 0 |
furthermore , we put a particular focus on recording the interactions between the users and the annotation tool---as a key property of our tool , we store all intermediate annotation results and record the user – system interaction | 1 |
knowledge bases are usually highly incomplete---because the knowledge base is incomplete | 1 |
we propose a framework for generating an abstractive summary from a semantic model of a multimodal document---into the semantic model , we can produce unified summaries of multimodal documents , resulting in an abstract | 1 |
heilman and smith used tree kernels to search for the alignment that yields the lowest tree edit distance---heilman and smith presented a classification-based approach with tree-edit features extracted from a tree kernel | 1 |
all features were log-linearly combined and their weights were optimized by performing minimum error rate training---the weights λ m in the log-linear model were trained using minimum error rate training with the news 2009 development set | 1 |
furthermore , the emotions in different dataset can be varied---then , the type of the emotions can be interpreted by observing the top | 1 |
these models are built on recently translated sentences---models are able to substantially improve translation | 1 |
we present hyp , an open-source toolkit that provides data structures and algorithms to process weighted directed hypergraphs---we have presented hyp , an open-source toolkit for representing and manipulating weighted directed hypergraphs , including functionality for learning arc | 1 |
kiela and bottou showed that transferring representations from deep convolutional neural networks yield much better performance than bag-of-visual-words in multi-modal semantics---such cnnderived image representations have been found to be of higher quality than traditional bag of visual words models that were previously used in multi-modal semantics | 1 |
we use a set of 318 english function words from the scikit-learn package---misra et al use a latent dirichlet allocation topic model to find coherent segment boundaries | 0 |
we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus---we trained a standard 5-gram language model with modified kneser-ney smoothing using the kenlm toolkit on 4 billion running words | 1 |
we estimate the parameters by maximizingp using the expectation maximization algorithm---in this work , we use the expectation-maximization algorithm | 1 |
many words have multiple meanings , and the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd )---the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd ) | 1 |
our algorithm filters incorrect inference rules and identifies the directionality of the correct ones---for our baseline we use the moses software to train a phrase based machine translation model | 0 |
semantic role labeling ( srl ) is a task of automatically identifying semantic relations between predicate and its related arguments in the sentence---semantic role labeling ( srl ) consists of finding the arguments of a predicate and labeling them with semantic roles ( cite-p-9-1-5 , cite-p-9-3-0 ) | 1 |
in our implementation , we train a tri-gram language model on each phone set using the srilm toolkit---we use srilm to train a 5-gram language model on the target side of our training corpus with modified kneser-ney discounting | 1 |
the basic idea of the neural network lm is to project the word indices onto a continuous space and to use a probability estimator operating on this space---the basic idea of this approach is to project the word indices onto a continuous space and to use a probability estimator operating on this space | 1 |
our discriminative model is a linear model trained with the margin-infused relaxed algorithm---in this work , we use the margin infused relaxed algorithm with a hamming-loss margin | 1 |
the weights associated to feature functions are optimally combined using the minimum error rate training---the parameter for each feature function in log-linear model is optimized by mert training | 1 |
we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit---we use srilm to train a 5-gram language model on the target side of our training corpus with modified kneser-ney discounting | 1 |
a back-off 2-gram model with good-turing discounting and no lexical classes was also created from the training set , using the srilm toolkit ,---a 5-gram language model was created with the sri language modeling toolkit and trained using the gigaword corpus and english sentences from the parallel data | 1 |
this observation is evidence that the neural network can find good representations for pos tagging---but the neural networks leverage these features for improving tagging | 1 |
for this target word the synonym 'record' was picked , which matches 'disc' in its musical sense---in contrast , cnn is able to extract local and position-invariant features well | 0 |
in addition , we investigate the utility of incorporating additional specialized features tailored to peer review---in particular , we consider conditional random fields and a variation of autoslog | 0 |
we use pre-trained word embeddings of size 300 provided by---we evaluated the translation quality using the case-insensitive bleu-4 metric | 0 |
we use 300 dimension word2vec word embeddings for the experiments---then , we trained word embeddings using word2vec | 1 |
we describe the development of xhpsg , a large-scale english grammar in the hpsg formalism translated from the xtag grammar---we constructed a type signature for the xtag english grammar , an existing broad-coverage grammar of english | 1 |
we use online learning to train model parameters , updating the parameters using the adagrad algorithm---we apply online training , where model parameters are optimized by using adagrad | 1 |
we investigate active learning techniques to reduce the size of these datasets and thus annotation effort---we investigate active learning ( al ) techniques to reduce the size of the dataset | 1 |
the phrase-level and sentence-level precision of the generated paraphrases exceed 60 % and 55 % , respectively---lexical simplification is the task to find and substitute a complex word or phrase in a sentence with its simpler synonymous expression | 0 |
we use a pbsmt model where the language model is a 5-gram lm with modified kneser-ney smoothing---for example , bengio et al introduced a model that learns word vector representations as part of a simple neural network architecture for language modeling | 0 |
while negation is a grammatical category which comprises various kinds of devices to reverse the truth value of a proposition , speculation is a grammatical category which expresses the attitude of a speaker towards a statement in terms of degree of certainty , reliability , subjectivity , sources of information , and perspective ( cite-p-20-1-12 )---we also report the results using bleu and ter metrics | 0 |
1 hindi is a verb final language with free word order and a rich case marking system---hindi is a verb final , flexible word order language and therefore , has frequent occurrences of non-projectivity in its dependency structures | 1 |
since katakana words are basically transliterations from english , back-transliterating katakana noun compounds is also useful for splitting---memory-based learning , also known as instance-based , example-based , or lazy learning , is a supervised inductive learning algorithm for learning classification tasks | 0 |
this intuition has been exploited in some systems to produce summaries---systems that use the discourse structure to produce summaries are also based on this intuition | 1 |
finally , we apply our model to the recently established semeval-2015 diachronic text evaluation subtasks---we quantitatively evaluate our model on the semeval-2015 benchmark datasets released as part of the diachronic text evaluation exercise | 1 |
we also use a 4-gram language model trained using srilm with kneser-ney smoothing---we implemented this model using the srilm toolkit with the modified kneser-ney discounting and interpolation options | 1 |
the frequent frame you it , for example , largely identifies verbs , as shown in , taken from child-directed speech in the childes database---this setting is the same as that used in other studies | 0 |
commonly used word vectors are word2vec , glove and fasttext---all word vectors are trained on the skipgram architecture | 1 |
the maximum entropy approach presents a powerful framework for the combination of several knowledge sources---under the maximum entropy framework , evidence from different features can be combined with no assumptions of feature independence | 1 |
semantic role labeling ( srl ) is defined as the task to recognize arguments for a given predicate and assign semantic role labels to them---semantic role labeling ( srl ) is the process of producing such a markup | 1 |
in all submitted systems , we use the phrase-based moses decoder---for phrase-based smt translation , we used the moses decoder and its support training scripts | 1 |
könig et al looked also at mci and ad subjects and examined vocal features using support vector machine---mead is centroid based multi-document summarizer which generates summaries using cluster centroids produced by topic detection and tracking system | 0 |
the database of typological features we used is the online edition 8 of the world atlas of language structures---the dataset we used in the present study is the online edition 2 of the world atlas of language structures | 1 |
information extraction ( ie ) is the process of finding relevant entities and their relationships within textual documents---information extraction ( ie ) is a task of identifying " facts " ( entities , relations and events ) within unstructured documents , and converting them into structured representations ( e.g. , databases ) | 1 |
we investigate linguistic features that correlate with the readability of texts for adults with intellectual disabilities ( id )---we present a corpus of texts with readability judgments from adults with id ; ( 2 ) we propose a set of cognitively-motivated features which operate at the discourse level | 1 |
thus , we use an attention mechanism to focus on the important words---we run skip-gram model on training dataset , and use the obtained word vector to initialize the word embedding part of model input | 0 |
kawahara and uchimoto used a separately trained binary classifier to select reliable sentences as additional training data---kawahara and uchimoto used a separately trained binary classifier to select sentences as additional training data | 1 |
many words have multiple meanings , and the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd )---sentiment analysis is a much-researched area that deals with identification of positive , negative and neutral opinions in text | 0 |
word similarity is typically low for synonyms that have many word senses since information about different senses are mashed together---we apply online training , where model parameters are optimized by using adagrad | 0 |
in particular , svms achieve high generalization even with training data of a very high dimension---svms are known to achieve high generalization performance even with input data of high dimensional | 1 |
for the feature-based system we used logistic regression classifier from the scikit-learn library---we use the logistic regression classifier in the skll package , which is based on scikit-learn , optimizing for f 1 score | 1 |