sentence1 (string, lengths 16–446) | sentence2 (string, lengths 14–436) |
---|---|
to discover all taxonomic relations , i . e . if a pair of terms is not in the training set , it may become a negative example in the learning process , and will be classified as a non-taxonomic relation . | moreover , if a pair of terms is not contained in the training set , there is high possibility that it will become a negative example in the learning process , and will likely be recognized as a non-taxonomic relation . |
each turn is represented as a 300-dimensional vector using the pretrained word2vec embedding model that is trained on google news . | each token is represented by its embedding obtained from a pretrained word embedding model trained on part of google news dataset . |
for the neural models , we use 100-dimensional glove embeddings , pre-trained on wikipedia and gigaword . | for the classification task , we use pre-trained glove embedding vectors as lexical features . |
the feature weights for the log-linear combination of the features are tuned using minimum error rate training on the devset in terms of bleu . | the optimisation of the feature weights of the model is done with minimum error rate training against the bleu evaluation metric . |
during the last decade , statistical machine translation systems have evolved from the original word-based approach into phrase-based translation systems . | in the last decade , statistical machine translation has been advanced by expanding the basic unit of translation from word to phrase and grammar . |
we also use mini-batch adagrad for optimization and apply dropout . | we use adagrad for deciding the feature update step . |
in this paper we have described affecthor , the system which we submitted to the semeval-2018 affects in tweets . | in this paper we describe our submission to semeval-2018 task 1 : affects in tweets . |
goldwater et al showed that modeling dependencies between adjacent words dramatically improves word segmentation accuracy . | goldwater et al used hierarchical dirichlet processes to induce contextual word models . |
we also report the results using bleu and ter metrics . | we evaluate the translation quality using the case-sensitive bleu-4 metric . |
we selected conditional random fields as the baseline model . | to this end , we use conditional random fields . |
we tune weights by minimizing bleu loss on the dev set through mert and report bleu scores on the test set . | we tune model weights using minimum error rate training on the wmt 2008 test data . |
it is implemented with another rnn with lstm cells with an attention mechanism and a softmax layer . | this system is a basic encoder-decoder with an attention mechanism . |
we use a set of 318 english function words from the scikit-learn package . | we use the linearsvc classifier as implemented in scikit-learn package 17 with the default parameters . |
experimental results indicate that this combination outperforms the unsupervised boosting method . | experimental results also show that all boosting methods outperform their corresponding methods without boosting . |
pitler and nenkova use the entity grid to capture the coherence of a text for readability assessment . | the entity grid is applied to readability assessment by pitler and nenkova . |
we used the pharaoh decoder for both the minimum error rate training and test dataset decoding . | we used minimum error rate training to tune the feature weights for maximum bleu on the development set . |
we use skip-gram with negative sampling for obtaining the word embeddings . | we derive 100-dimensional word vectors using word2vec skip-gram model trained over the domain corpus . |
we use stanford corenlp for feature generation . | we use stanford corenlp for pos tagging and lemmatization . |
sentiment analysis in twitter is a particularly challenging task , because of the informal and “ creative ” writing style , with improper use of grammar , figurative language , misspellings and slang . | as sentiment analysis in twitter is a very recent subject , it is certain that more research and improvements are needed . |
the exponential log-linear model weights of both the smt and re-scoring stages of our system were set by tuning the system on development data using the mert procedure by means of the publicly available zmert toolkit 1 . | the exponential log-linear model weights of our system are set by tuning the system on development data using the mert procedure by means of the publicly available zmert toolkit 1 . |
we present a novel framework for word alignment that incorporates synonym knowledge collected from monolingual linguistic resources . | we proposed a novel framework that incorporates synonyms from monolingual linguistic resources in a word alignment generative model . |
relation extraction is the task of automatically detecting occurrences of expressed relations between entities in a text and structuring the detected information in a tabularized form . | relation extraction is the task of finding relationships between two entities from text . |
for decoding , we used moses with the default options . | we used the moses toolkit with its default settings . |
we used the stanford parser to extract dependency features for each quote and response . | for ptb pos tags , we tagged the text with the stanford parser . |
fomicheva and specia investigate bias in monolingual evaluation of mt and conclude reference bias to be a serious issue , with human annotators strongly biased by the reference translation provided . | following this intuition , fomicheva and specia carry out an investigation into bias in monolingual evaluation of mt and conclude that in a monolingual setting , human assessors of mt are strongly biased by the reference translation . |
our method is applicable to various monotone submodular objective functions and can find almost optimal solutions . | thus we obtain a fast greedy method for compressive summarization , which works with various monotone submodular objective functions and enjoys an approximation guarantee . |
in this task , we use the 300-dimensional 840b glove word embeddings . | we use the glove vectors of 300 dimension to represent the input words . |
in the dataset is accompanied by a single judgement . | each caption in the dataset is accompanied by a single judgement . |
verbnet is a very large lexicon of verbs in english that extends levin with explicitly stated syntactic and semantic information . | verbnet is a verb lexicon with syntactic and semantic information for english verbs , referring to levin verb classes to construct the lexical entries . |
for a fair comparison to our model , we used word2vec , which pretrains word embeddings at a token level . | for our experiments reported here , we obtained word vectors using the word2vec tool and the text8 corpus . |
a 4-gram language model is trained on the monolingual data by srilm toolkit . | we use a four-gram language model with modified kneser-ney smoothing as implemented in the srilm toolkit . |
the idea of distant supervision has been widely used in the task of relation extraction . | distant supervision has been successfully used for the problem of relation extraction . |
we consider natural language generation from the abstract meaning representation . | the latter category is exemplified by abstract meaning representation . |
we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing . | we also use a 4-gram language model trained using srilm with kneser-ney smoothing . |
burstein et al employ this idea for evaluating coherence in student essays . | burstein et al used it for an educational purpose , applying it to predict the readability of essays . |
mccarthy et al provided a partial solution by describing a method to predict the predominant sense , or the most frequent sense , of a word in a corpus . | mccarthy et al propose a method for automatically identifying the predominant sense in a given domain . |
coreference resolution is the next step on the way towards discourse understanding . | coreference resolution is the problem of partitioning a sequence of noun phrases ( or mentions ) , as they occur in a natural language text , into a set of referential entities . |
previous applications of recursive neural networks to supervised relation extraction are based on constituency-based parsers . | recursive neural network and convolutional neural network have proven powerful in relation classification . |
the task we study is part of the news evaluation campaign conducted in 2009 . | this task is part of the news evaluation campaign conducted in 2009 . |
these models were implemented using the package scikit-learn . | the two baseline methods were implemented using scikit-learn in python . |
in this paper , we demonstrate that significant gains can instead be achieved by using a more constrained , linguistically motivated grammar . | here , we extend the first approach , and show that better lexical generalization provides significant performance gains . |
that achieves this by learning a situated model of meaning from an unlabeled video corpus . | this paired corpus is used to train a situated model of meaning that significantly improves video retrieval performance . |
collobert et al propose a multi-task learning framework with dnn for various nlp tasks , including part-of-speech tagging , chunking , named entity recognition , and semantic role labelling . | for example , collobert et al effectively used a multilayer neural network for chunking , part-of-speech tagging , ner and semantic role labelling . |
mcdonald and pereira presented a graph-based parser that can generate graphs in which a word may depend on multiple heads , and evaluated it on the danish treebank . | mcdonald and pereira presented a graph-based parser that can generate dependency graphs in which a word may depend on multiple heads . |
we use the stanford rule-based system for coreference resolution . | we included the stanford coreference resolution system in our model for this reason . |
conditional random fields are popular models for many nlp tasks . | conditional random field is one of the most effective approaches used in ner tasks . |
conditional random fields are undirected graphical models trained to maximize a conditional probability . | crfs are undirected graphical models trained to maximize a conditional probability . |
coherence is the property of a good human-authored text that makes it easier to read and understand than a randomly-ordered collection of sentences . | coherence is a central aspect in natural language processing of multi-sentence texts . |
to train our model we use markov chain monte carlo sampling . | we use gibbs sampling , a markov chain monte carlo method , to sample from the posterior . |
the srilm toolkit was used for training the language models using kneser-ney smoothing . | the language models are 4-grams with modified kneser-ney smoothing which have been trained with the srilm toolkit . |
sarcasm is a pervasive phenomenon in social media , permitting the concise communication of meaning , affect and attitude . | sarcasm is defined as ‘ the use of irony to mock or convey contempt ’ 1 . |
we used the sri language modeling toolkit to train a five-gram model with modified kneser-ney smoothing . | we used srilm , the sri language modeling toolkit , to train several character models . |
that , we believe , brings us a step further in understanding the benefits of multi-task learning . | this , we believe , is an important step toward understanding how mtl works . |
we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing . | for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided . |
a task that is similar to ours is the task of keywords to question generation that has been addressed recently in zheng et al . | a task that is similar to ours is the task of keywords-to-question generation that has been addressed recently in zheng et al . |
we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing . | we used 5-gram models , estimated using the sri language modeling toolkit with modified kneser-ney smoothing . |
we use an nmt-small model from the opennmt framework for the neural translation . | specifically , we employ the seq2seq model with attention implemented in opennmt . |
we use approximate randomization for significance testing . | for assessing significance , we apply the approximate randomization test . |
we use glove vectors with 100 dimensions trained on wikipedia and gigaword as word embeddings . | with regard to inputs , we use 50-d glove word embeddings pretrained on wikipedia and gigaword and 5-d position embedding . |
dimensionality reduction makes the global distributional pattern of a word available in a profile . | a dimensionality reduction creates a space representing the syntactic categories of unambiguous words . |
semantic role labeling ( srl ) is the process of assigning semantic roles to strings of words in a sentence according to their relationship to the semantic predicates expressed in the sentence . | semantic role labeling ( srl ) is the task of identifying semantic arguments of predicates in text . |
word sense disambiguation ( wsd ) is formally defined as the task of computationally identifying senses of a word in a context . | word sense disambiguation ( wsd ) is the task of assigning sense tags to ambiguous lexical items ( lis ) in a text . |
we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus . | a 4-gram language model is trained on the xinhua portion of the gigaword corpus with the srilm toolkit . |
in natural language , a word often assumes different meanings , and the task of determining the correct meaning , or sense , of a word in different contexts is known as word sense disambiguation ( wsd ) . | word sense disambiguation ( wsd ) is the task of identifying the correct meaning of a word in context . |
the framenet database provides an inventory of semantic frames together with a list of lexical units associated with these frames . | framenet is a semantic resource which provides over 1200 semantic frames that comprise words with similar semantic behaviour . |
we obtained both phrase structures and dependency relations for every sentence using the stanford parser . | we compute the syntactic features only for pairs of event mentions from the same sentence , using the stanford dependency parser . |
we employ the sentiment analyzer in stanford corenlp to do so . | we use stanford corenlp for pos tagging and lemmatization . |
importantly , word embeddings have been effectively used for several nlp tasks . | word embeddings are critical for high-performance neural networks in nlp tasks . |
we have used opennmt and marian nmt toolkit to train and test the nmt system . | we used marian toolkit 13 to build competitive nmt systems based on the transformer architecture . |
elaborate qualitative and quantitative experimental analyses show the effectiveness of our models . | the qualitative and quantitative experimental analyses demonstrate the efficacy of our models . |
therefore , we employ negative sampling and adam to optimize the overall objective function . | we use a binary cross-entropy loss function , and the adam optimizer . |
in this paper , we present a novel self-training strategy , which uses information retrieval ( ir ) to collect a cluster of related documents . | in this paper , we apply a novel self-training process on an existing state-of-the-art baseline system . |
lin and pantel use a standard monolingual corpus to generate paraphrases , based on dependency graphs and distributional similarity . | the new approach follows the methodology of lin and pantel for dynamically determining paraphrases in a corpus by measuring the similarity of paths between nodes in syntactic dependency trees . |
therefore , dependency parsing is a potential “ sweet spot ” that deserves investigation . | dependency parsing is a way of structurally analyzing a sentence from the viewpoint of modification . |
in this paper , we present a reinforcement learning framework for inducing mappings from text to actions . | in this paper , we present a reinforcement learning approach for mapping natural language instructions to sequences of executable actions . |
we use the 100-dimensional pre-trained word embeddings trained by word2vec 2 and the 100-dimensional randomly initialized pos tag embeddings . | with english gigaword corpus , we use the skip-gram model as implemented in word2vec 3 to induce embeddings . |
we used a phrase-based smt model as implemented in the moses toolkit . | in all submitted systems , we use the phrase-based moses decoder . |
täckström et al explore the use of mixed type and token annotations in which a tagger is learned by projecting information via parallel text . | a different approach to cross-lingual pos tagging is proposed by täckström et al who couple token and type constraints in order to guide learning . |
we use srilm toolkits to train two 4-gram language models on the filtered english blog authorship corpus and the xinhua portion of gigaword corpus , respectively . | we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus . |
math-w-15-1-1-45 itself is efficient in the length of the string . | a language math-w-3-1-2-135 is some subset of math-w-3-1-2-140 . |
shallow semantic representations , bearing a more compact information , could prevent the sparseness of deep structural approaches . | shallow semantic representations can prevent the weakness of cosine similarity based models . |
kim and hovy proposed to map the semantic frames of framenet into opinion holder and target for only adjectives and verbs . | kim and hovy map the semantic frames of framenet into opinion holder and target for adjectives and verbs to identify these components . |
in this paper , we propose a bigram based supervised method for extractive document summarization . | in this paper , we leverage the ilp method as a core component in our summarization system . |
we use srilm toolkit to train a trigram language model with modified kneser-ney smoothing on the target side of training corpus . | we trained a 4-gram language model on this data with kneser-ney discounting using srilm . |
noraset et al showed that rnn language models can be used to learn to generate definitions for common english words . | noraset et al propose the task of generating definitions based on word embeddings for interpretability purposes . |
the algorithm is specified by means of deduction rules , following , and can be implemented using standard tabular techniques . | the algorithm is formulated using the framework of parsing as deduction , extended with weights . |
klebanov et al used concreteness as a feature with baseline features and optimal weighting technique . | klebanov et al 's approach was based on optimal weighting to obtain an optimal f-score , which led to comparatively higher recall . |
feature weights are tuned using minimum error rate training on the 455 provided references . | the weights of the different feature functions were optimised by means of minimum error rate training . |
in the active dual supervision setting , we use the reconstruction error to evaluate the value of feature and example labels . | then by making use of the reconstruction error criterion in matrix factorization , we propose a unified scheme to evaluate the value of feature and example labels . |
in this paper , we propose a new method for deriving a kernel from a probabilistic model which is specifically tailored to reranking tasks . | in this paper we propose a method for defining kernels in terms of a probabilistic model of parsing . |
as for chinese discourse parser , we build a pipeline system following the annotation procedure of chinese discourse treebank . | and we build a chinese discourse parser following the annotation procedure of chinese discourse treebank . |
semantic parsing is the task of converting natural language utterances into formal representations of their meaning . | semantic parsing is the task of mapping a natural language ( nl ) sentence into a completely formal meaning representation ( mr ) or logical form . |
sentiment analysis is the task in natural language processing ( nlp ) that deals with classifying opinions according to the polarity of the sentiment they express . | sentiment analysis is a field of study that investigates feelings present in texts . |
information extraction ( ie ) is the task of generating structured information , often in the form of subject-predicate-object relation triples , from unstructured information such as natural language text . | information extraction ( ie ) is the process of identifying events or actions of interest and their participating entities from a text . |
table 1 shows the performance for the test data measured by case sensitive bleu . | table 1 summarizes test set performance in bleu , nist and ter . |
topkara , topkara , and atallah and topkara et al used machine translation evaluation metrics bleu and nist , automatically measuring how close a stego sentence is to the original . | topkara et al and topkara et al used machine translation evaluation metrics bleu and nist , automatically measuring how close a stego sentence is to the original . |
in order to get better translation results , we generate n-best hypotheses with an ensemble model and then train a re-ranker using k-best mira on the validation set . | therefore , we generate 50-best hypotheses from the ensemble system and then tune the model weights with batch-mira on the development set to maximize the bleu score . |
the idea of distinguishing between general and domain-specific examples is due to daumé and marcu , who used a maximum-entropy model with latent variables to capture the degree of specificity . | daumé iii and marcu use an empirical bayes model to estimate a latent variable model grouping instances into domain-specific or common across both domains . |
we used srilm to build a 4-gram language model with kneser-ney discounting . | we use a four-gram language model with modified kneser-ney smoothing as implemented in the srilm toolkit . |
we have presented multirc , a reading comprehension dataset in which questions require reasoning over multiple sentences to be answered . | in this work , we propose a multi-sentence qa challenge in which questions can be answered only using information from multiple sentences . |
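For reference, here is a minimal sketch of how a two-column paraphrase-pair dataset like the one above could be loaded with the Hugging Face `datasets` library. The dataset id `user/paraphrase-pairs` is a hypothetical placeholder, not this dataset's actual id; only the column names `sentence1` and `sentence2` come from the table above.

```python
# Minimal sketch, assuming the dataset is hosted on the Hugging Face Hub.
# "user/paraphrase-pairs" is a hypothetical placeholder id, not this
# dataset's actual id.
from datasets import load_dataset

ds = load_dataset("user/paraphrase-pairs", split="train")

# Each record carries the two string columns shown in the table above.
for row in ds.select(range(3)):
    print(row["sentence1"])
    print(row["sentence2"])
    print("---")
```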