text
stringlengths
82
736
label
int64
0
1
we propose using source-language monolingual models and resources to paraphrase the source text prior to translation---we propose an approach that consists in directly replacing unknown source terms , using source-language resources and models
1
we present a graph-based semi-supervised learning for the question-answering ( qa ) task for ranking candidate sentences---in this paper , we applied a graph-based ssl algorithm to improve the performance of qa task
1
hu et al proposes integration of constraints coming in the form of first order logic rules during training of nns---hu et al enabled a neural network to learn simultaneously from labeled instances as well as logic rules
1
the system used a tri-gram language model built from sri toolkit with modified kneser-ney interpolation smoothing technique---the target language model was a trigram language model with modified kneser-ney smoothing trained on the english side of the bitext using the srilm tookit
1
the proposed rnns approach achieved a performance comparable to the existing state-of-the-art models at sentence-level qe---we measure the translation quality with automatic metrics including bleu and ter
0
this shows that rl is possible even from small amounts of fairly reliable human feedback , pointing to a great potential for applications at larger scale---in machine translation proportions , this result points towards a great potential for larger-scaler applications of rl from human feedback
1
for learning coreference decisions , we used a maximum entropy model---as a model learning method , we adopt the maximum entropy model learning method
1
the word vectors used in all approaches are taken from the word2vec google news model---the word embedding vectors are generated from word2vec over the 5th edition of the gigaword
1
neural networks have recently gained much attention as a way of inducing word vectors---neural models , with various neural architectures , have recently achieved great success
1
all tokens are first mapped to distributed word representations , pre-trained using word2vec on the google news corpus---this baseline uses pre-trained word embeddings using word2vec cbow and fasttext
1
the baseline of our approach is a statistical phrase-based system which is trained using moses---the feature weights are tuned to optimize bleu using the minimum error rate training algorithm
0
a 5-gram language model was built using srilm on the target side of the corresponding training corpus---the target-side language models were estimated using the srilm toolkit
1
in this work we use word embeddings of mikolov et al to represent the semantics of words and compounds---we adapt the models of mikolov et al and mikolov et al to infer feature embeddings
1
we build a french tagger based on englishfrench data from the europarl corpus---we use the europarl english-french parallel corpus plus around 1m segments of symantec translation memory
1
we present a visual-linguistic mapping for zsl in the case where words and visual categories are both represented by distributions---in this paper , we consider zsl in the case where both visual and linguistic concepts are represented by gaussian distribution
1
for the mix one , we also train word embeddings of dimension 50 using glove---we use pre-trained vectors from glove for word-level embeddings
1
compared with 2nd-order phrase model of pei et al , our basic model occasionally performs worse in recovering long distant dependencies---for the classification task , we use pre-trained glove embedding vectors as lexical features
0
we use the publicly available 300-dimensional word vectors of mikolov et al , trained on part of the google news dataset---we follow mikolov et al to use skip-gram based word2vec to compute embeddings , and conduct training on the english articles in the latest 2015 wikipedia dump
1
we exploit the wikipedia section structure to generate a large dataset of weakly labeled triplets of sentences with no human involvement---component gathers lexical statistics from an unannotated corpus of newswire text
0
first-order factoid question answering assumes that the question can be answered by a single fact in a knowledge base ( kb )---question answering ( qa ) assumes that the question can be answered by a single fact in a knowledge base ( kb )
1
word vector models are a good way of modelling lexical semantics , since they are robust , conceptually simple and mathematically well defined---we created a data collection for research , development and evaluation of a method for automatically answering why-questions ( why-qa )
0
we use publicly-available 1 300-dimensional embeddings trained on part of the google news dataset using skip-gram with negative sampling---we initialize our word representation using publicly available word2vec trained on google news dataset and keep them fixed during training
1
a 3-gram language model was trained from the target side of the training data for chinese and arabic , using the srilm toolkit---the target language model was a trigram language model with modified kneser-ney smoothing trained on the english side of the bitext using the srilm tookit
1
notice that the 2 } 3 } fan-out of the non-terminal math-w-7-15-1-87 is 2---in the value of the input fan-out bound math-w-19-1-0-19
1
in this paper , we present an implicit content-introducing method for generative conversation systems , which incorporates cue words using our proposed hierarchical gated fusion unit ( hgfu ) in a flexible way---unlike the existing work , we explore an implicit content-introducing method for neural conversation systems , which utilizes the additional cue word in a ¡° soft ¡± manner
1
event extraction is a task in information extraction where mentions of predefined events are extracted from texts---event extraction is a particularly challenging information extraction task , which intends to identify and classify event triggers and arguments from raw text
1
the language model has an embedding size of 250 and two lstm layers with a hidden size of 1000---in this paper , we perform an analysis of the human perceptions of edit importance while reviewing documents
0
we make use of a factorization model in which words , together with their window-based context words and their dependency relations , are linked to latent dimensions---arabic is a morphologically rich language , in which a word carries not only inflections but also clitics , such as pronouns , conjunctions , and prepositions
0
we use minimum error rate training to tune the feature weights of hpb for maximum bleu score on the development set with serval groups of different start weights---we use 4-gram language models in both tasks , and conduct minimumerror-rate training to optimize feature weights on the dev set
1
as dependency relations directly model the semantics structure of a sentence , shen et al introduce dependency language model to better account for the generation of target sentences---in this section , we describe the observed data , latent variables , and auxiliary variables of the problem
0
sentiment analysis is the process of identifying and extracting subjective information using natural language processing ( nlp )---in order to provide word clusters for our experiments , we used the brown clustering algorithm
0
classifiers trained with features constructed from our model achieve significant better predictive performance than the state-of-the-art---classifiers trained with our features significantly outperform the state-of-the-art results
1
chen et al propose gated recursive neural networks , a variant of grconvs , to solve chinese word segmentation problem---we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings
0
the target-normalized hierarchical phrase-based model is based on a more general hierarchical phrase-based model---the hierarchical model is built on a weighted synchronous contextfree grammar
1
our learned models of the best wizard¡¯s behavior combine features available to wizards with some that are not , such as recognition confidence and acoustic model scores---our learned models of the best wizard ¡¯ s behavior combine features that are available to wizards with some that are not , such as recognition confidence and acoustic model scores
1
kaji and kitsuregawa outline a method of building sentiment lexicons for japanese using structural cues from html documents---kaji and kitsuregawa propose a method for building sentiment lexicon for japanese from html pages
1
we used kenlm with srilm to train a 5-gram language model based on all available target language training data---we created 5-gram language models for every domain using srilm with improved kneserney smoothing on the target side of the training parallel corpora
1
the training and development data for our task was taken from prior work on twitter ner , which distinguishes 10 different named entity types---the training and development data for our task was taken from previous work on twitter ner , which distinguishes 10 different named entity types
1
we use the hierarchical phrase-based machine translation model from the open-source cdec toolkit , and datasets from the workshop on machine translation---we propose a new bridging operation that generates predicates based on adjacent predicates
0
socher et al model the two sentences with recursive neural networks , and then feed similarity scores between words and phrases to a cnn with dynamic pooling to capture sentence interactions---socher et al utilized parsing to model the hierarchical structure of sentences and uses unfolding recursive autoencoders to learn representations for single words and phrases acting as nonleaf nodes in the tree
1
the information can be adjusted concretely by hand in each case of incorrect analysis---information can be obtained from cases where the system incorrectly analyzes sentences
1
the language model pis implemented as an n-gram model using the srilm-toolkit with kneser-ney smoothing---a 4-gram language model is trained on the monolingual data by srilm toolkit
1
the model parameters are trained using minimum error-rate training---coreference resolution is the task of determining whether two or more noun phrases refer to the same entity in a text
0
in addition , we explore methods to improve phrase structure parsing for learner english---in this paper , we first propose a phrase structure annotation scheme for learner english
1
we propose a novel text normalization model based on learning edit operations from labeled data while incorporating features induced from unlabeled data via character-level neural text embeddings---we suggest a simple , supervised character-level string transduction model which easily incorporates features automatically learned from large amounts of unlabeled data
1
the evaluation protocol and metrics were very similar to which allowed us to do indirect comparison to previous work---ten of these concepts were identical to ones used in , which allowed us to compare our results to recent work in case of english
1
for estimating monolingual word vector models , we use the cbow algorithm as implemented in the word2vec package using a 5-token window---word representations to learn word embeddings from our unlabeled corpus , we use the gensim im-plementation of the word2vec algorithm
1
the results show that srl information is very helpful for orl , which is consistent with previous studies---results show that srl is highly effective for orl , which is consistent with previous findings
1
word sense disambiguation ( wsd ) is a task to identify the intended sense of a word based on its context---word sense disambiguation ( wsd ) is the task of determining the meaning of an ambiguous word in its context
1
we use pretrained 300-dimensional english word embeddings---for dependency parsing , the performance improves log-linearly with the number of parameters ( unique n-grams )
0
koehn and schroeder described a procedure for domain adaptation that was using two translation models in decoding , one trained on in-domain data and the other on out-of-domain data---koehn and schroeder investigated domain adaptations by integrating in-domain and out-of-domain language models as log-linear features in an smt model
1
we induce a topic-based vector representation of sentences by applying the latent dirichlet allocation method---a few recent studies have highlighted the potential and importance of developing paraphrase identification and semantic similarity techniques specifically for tweets
0
experimental results on real-world datasets show that our model achieves significant and consistent improvements on relation extraction as compared with baselines---experimental results on real-world datasets show that , our model can make full use of those sentences containing only one target entity , and achieves significant and consistent improvements on relation extraction
1
this paper presents a graph-theoretic model of the acquisition of lexical syntactic representations---hence we use the expectation maximization algorithm for parameter learning
0
crucially , our approach combines the strengths of entity-mention models and mention-ranking models---that combines the strengths of mention rankers and entity-mention models
1
most recently , mcdonald et al investigate a structured model for jointly classifying the sentiment of text at varying levels of granularity---training is done through stochastic gradient descent over shuffled mini-batches with adadelta update rule
0
the proposed models empirically show consistent improvement over the previous methods in both the bleu and err evaluation metrics---the weights of the different feature functions were optimised by means of minimum error rate training
0
we were able to show that performance improves with increased depth , using up to 29 convolutional layers---and we were able to show that increasing the depth up to 29 convolutional layers steadily improves performance
1
each translation model is tuned using mert to maximize bleu---semantic role labeling ( srl ) is a task of analyzing predicate-argument structures in texts
0
all annotations were done using the brat rapid annotation tool---all annotations were carried out with the brat rapid annotation tool
1
for our purpose we use word2vec embeddings trained on a google news dataset and find the pairwise cosine distances for all words---lin and pantel use a standard monolingual corpus to generate paraphrases , based on dependancy graphs and distributional similarity
0
we used the treetagger for lemmatisation as well as part-of-speech tagging---given such parallel data , we can easily train an encoder-decoder model that takes a sentence and target syntactic template
0
relation extraction ( re ) is the process of generating structured relation knowledge from unstructured natural language texts---relation extraction is a fundamental task in information extraction
1
we present a graph algorithm that decides satisfiability of normal dominance constraints in polynomial time---we identify the natural fragment of normal dominance constraints and show that its satisfiability problem is in deterministic polynomial time
1
previous research has shown the usefulness of using pretrained word vectors to improve the performance of various models---recent works reveal that modifying word vectors during training could capture polarity information for the sentiment words effectively
1
we use the selectfrommodel 4 feature selection method as implemented in scikit-learn---for the svm classifier we use the python scikitlearn library
1
the dmv is a singlestate head automata model over lexical word classes -pos tags---the dmv is a singlestate head automata model which is based on pos tags
1
to train a crf model , we use the wapiti sequence labelling toolkit---we use wapiti tagger to train a standard crf tagger with iob tags for phrase chunking
1
a 5-gram language model with kneser-ney smoothing was trained with srilm on monolingual english data---the language models in this experiment were trigram models with good-turing smoothing built using srilm
1
in this paper , we propose a new space and a new metric for computing this distance---in this paper , we proposed a noisy-channel model for qa that can accommodate within a unified framework
1
morante and daelemans use the bioscope corpus to approach the problem of identifying cues and scopes via supervised machine learning---morante and daelemans present a machine-learning approach to this task , using token-level , lexical information only
1
we obtained the pos tags and parse trees of the sentences in our datasets with the stanford pos tagger and the stanford parser---we parse the source sentences using the stanford corenlp parser and linearize the resulting parses
1
we use srilm to build 5-gram language models with modified kneser-ney smoothing---the language model is a large interpolated 5-gram lm with modified kneser-ney smoothing
1
phrasebased smt models are tuned using minimum error rate training---the model weights are automatically tuned using minimum error rate training
1
stephanie seneff tina : a natural language system for spoken language applications anisms---tina : a natural language system for spoken language applications
1
the primary requirement ( and challenge ) here is to deal with multi-membership , i.e. , one item may belong to multiple different semantic classes---here is dealing with multi-membership : an item may belong to multiple semantic classes ; and we need to discover as many as possible the different semantic classes
1
the bleu score for all the methods is summarised in table 5---table 1 shows the performance for the test data measured by case sensitive bleu
1
given the basic nature of the semantic classes and word sense disambiguation algorithms used , we think there is ample room for future improvements---given the basic nature of the semantic classes and wsd algorithms , we think there is room for future improvements
1
dependency parsing is a way of structurally analyzing a sentence from the viewpoint of modification---dependency parsing is the task of predicting the most probable dependency structure for a given sentence
1
event schema induction is the task of learning a representation of events ( e.g. , bombing ) and the roles involved in them ( e.g , victim and perpetrator )---we evaluated the translation quality using the bleu-4 metric
0
at the sub-sentential level , munteanu and marcu extracted sub-sentential translation pairs from comparable corpora based on the loglikelihood-ratio of word translation probability---in this paper , we propose elden , an el system which increases nodes and edges of the kg
0
this is often measured by correlation with human judgment---peters et al propose a deep neural model that generates contextual word embeddings which are able to model both language and semantics of word use
0
bandyopadhyay et al , 2011 , and sentiment analysis---bandyopadhyay et al , 2011 , sentiment analysis , and many other applications
1
we train the cbow model with default hyperparameters in word2vec---for feature building , we use word2vec pre-trained word embeddings
1
relation extraction is the task of finding relationships between two entities from text---relation extraction is a fundamental step in many natural language processing applications such as learning ontologies from texts ( cite-p-12-1-0 ) and question answering ( cite-p-12-3-6 )
1
the process of identifying the correct meaning , or sense of a word in context , is known as word sense disambiguation ( wsd )---word sense disambiguation ( wsd ) is the task of determining the meaning of a word in a given context
1
we use the adam optimizer and mini-batch gradient to solve this optimization problem---relation extraction is the task of automatically detecting occurrences of expressed relations between entities in a text and structuring the detected information in a tabularized form
0
relation extraction is the task of detecting and classifying relationships between two entities from text---relation extraction is a fundamental step in many natural language processing applications such as learning ontologies from texts ( cite-p-12-1-0 ) and question answering ( cite-p-12-3-6 )
1
caseinsensitive bleu is used to evaluate the translation results---for evaluation , caseinsensitive nist bleu is used to measure translation performance
1
research is a collaborative effort to increase knowledge---the research described here is a further development of several strands of previous research
1
to cope with the unit problem , we propose a character-based chunking method---in which we perform the chunking process based on character units
1
reduction can significantly improve the conciseness of automatic summaries---reduction system can improve the conciseness of generated summaries significantly
1
our translation model is implemented as an n-gram model of operations using the srilm toolkit with kneser-ney smoothing---we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting
1
for automated scoring of unrestricted spontaneous speech , speech proficiency has been evaluated primarily on aspects of pronunciation , fluency , vocabulary and language usage but not on aspects of content and topicality---for automated scoring of unrestricted , spontaneous speech , most automated systems have estimated the non-native speakers ¡¯ speaking proficiency primarily based on low-level speaking-related features , such as pronunciation , intonation , rhythm , rate of speech , and fluency
1
we used a 5-gram language model trained on 126 million words of the xinhua section of the english gigaword corpus , estimated with srilm---wikipedia is a resource of choice exploited in many nlp applications , yet we are not aware of recent attempts to adapt coreference resolution to this resource
0
the promt smt system is based on the moses open-source toolkit---the smt baseline system is built upon the opensource mt toolkit moses 9
1
a 4-gram language model was trained on the monolingual data by the srilm toolkit---our results show that the visual model outperforms the language-only model
0
summarization is the process of condensing a source text into a shorter version while preserving its information content---summarization is the task of condensing a piece of text to a shorter version that contains the main information from the original
1
we use the word2vec skip-gram model to learn initial word representations on wikipedia---to get a dictionary of word embeddings , we use the word2vec tool 2 and train it on the chinese gigaword corpus
1
psycholinguistic experiments have shown that eye gaze is tightly linked to human language processing---work has also shown that eye gaze has a potential to improve reference resolution
1

Dataset Card for "parasci_data"

More Information needed

Downloads last month
28