columns : sentence1 ( string , lengths 16-446 ) , sentence2 ( string , lengths 14-436 )
mikolov et al proposed a distributed word embedding model that allows vectors derived from neural networks to convey meaningful information .
recently , mikolov et al presented a shallow network architecture designed specifically for learning word embeddings , known as the word2vec model .
we used a regularized maximum entropy model .
we use the maximum entropy model for our classification task .
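as a sketch of the model family above: a maximum entropy classifier is a log-linear model over labels, and the regularized variant adds a gaussian (l2) penalty on the weights. the notation below is generic and not tied to any particular paper's feature set.

    p_\lambda(y \mid x) = \frac{\exp\big(\sum_i \lambda_i f_i(x,y)\big)}{\sum_{y'} \exp\big(\sum_i \lambda_i f_i(x,y')\big)}, \qquad
    \mathcal{L}(\lambda) = \sum_j \log p_\lambda(y_j \mid x_j) \;-\; \frac{1}{2\sigma^2}\sum_i \lambda_i^2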
for decoding , we used moses with the default options .
in the translation tasks , we used the moses phrase-based smt systems .
we use the moses smt toolkit to test the augmented datasets .
we use the moses toolkit to train various statistical machine translation systems .
we propose a novel neural language model that learns a recurrent neural network ( rnn ) ( cite-p-10-5-4 ) on top of syntactic dependencies .
in this paper we proposed a novel language model , dependency rnn , which incorporates syntactic dependencies into the rnn formulation .
we applied the proposed methods to nlp tasks , and found that our methods can achieve the same high performance with far fewer features .
in experiments with nlp tasks , we show that the proposed method can extract effective combination features , and achieve high performance with very few features .
choi et al examine opinion holder extraction using crfs with various manually defined linguistic features and patterns automatically learnt by the autoslog system .
choi et al examine opinion holder extraction using crfs with several manually defined linguistic features and automatically learnt surface patterns .
word sense disambiguation ( wsd ) is the task of identifying the correct meaning of a word in context .
word sense disambiguation ( wsd ) is a key enabling-technology that automatically chooses the intended sense of a word in context .
trigram language models were estimated using the sri language modeling toolkit with modified kneser-ney smoothing .
language models were built using the sri language modeling toolkit with modified kneser-ney smoothing .
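for reference, the interpolated kneser-ney estimate for a bigram model has the form below, where D is the discount, lambda(w_{i-1}) the normalizing interpolation weight, and P_cont the continuation probability; the modified variant uses separate discounts for counts of 1, 2, and 3 or more.

    P_{\mathrm{KN}}(w_i \mid w_{i-1}) = \frac{\max\big(c(w_{i-1} w_i) - D,\, 0\big)}{\sum_{w} c(w_{i-1} w)} + \lambda(w_{i-1})\, P_{\mathrm{cont}}(w_i), \qquad
    P_{\mathrm{cont}}(w_i) = \frac{\big|\{ w' : c(w' w_i) > 0 \}\big|}{\big|\{ (w', w) : c(w' w) > 0 \}\big|}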
semantic role labeling was pioneered by gildea and jurafsky .
automatic semantic role labeling was first introduced by gildea and jurafsky .
relation extraction is a crucial task in the field of natural language processing ( nlp ) .
relation extraction ( re ) is the task of extracting instances of semantic relations between entities in unstructured data such as natural language text .
i will describe this approach and show why it fails to explain the opacity of indexicals .
unfortunately , we have seen that this kind of theory can not explain opaque indexicals .
pennacchiotti and pantel ( 2009 ) proposed an ensemble semantic framework that mixes distributional and pattern-based systems with a large set of features from a web-crawl , query logs , and wikipedia .
pennacchiotti and pantel ( 2009 ) fused information from pattern-based and distributional systems using an ensemble method and a rich set of features derived from query logs , web-crawl and wikipedia .
arg2 is defined as the argument following a connective ; arg1 , however , can be located within the same sentence as the connective , or in some previous or following sentence .
arg2 is taken as the argument which occurs in the same sentence as the connective and is therefore syntactically associated with it .
we have used penn treebank parsing data with the standard split for training , development , and test .
for training and evaluating the itsg parser , we employ the penn wsj treebank .
we complement the neural approaches with a simple neural network that uses word representations , namely a continuous bag-of-words model .
our normalization approach is based on continuous distributed word vector representations , namely the state-of-the-art method word2vec .
a simile is a form of figurative language that compares two essentially unlike things ( cite-p-20-3-11 ) , such as β€œ jane swims like a dolphin ” .
a simile is a figure of speech comparing two fundamentally different things .
coreference resolution is the problem of identifying which noun phrases ( nps , or mentions ) refer to the same real-world entity in a text or dialogue .
coreference resolution is the process of linking together multiple expressions of a given entity .
rhetorical structure theory posits a hierarchical structure of discourse relations between spans of text .
rhetorical structure theory has contributed a great deal to the understanding of the discourse of written documents .
our model is a first order linear chain conditional random field .
the resulting model is an instance of a conditional random field .
text categorization is a fundamental and traditional task in natural language processing ( nlp ) , which can be applied in various applications such as sentiment analysis ( cite-p-18-3-12 ) , question classification ( cite-p-18-3-24 ) and topic classification ( cite-p-18-3-13 ) .
text categorization is a classical text information processing task which has been studied extensively ( cite-p-18-1-9 ) .
our model is based on the standard lstm encoder-decoder model with an attention mechanism .
our systems are based on the encoder-decoder model with the attention mechanism , which is also known as the rnnsearch model .
in both sets of experiments , we assess the impact of features relating to conversation .
in this research we aim to detect subjective sentences in multimodal conversations .
the language model is a 5-gram with interpolation and kneser-ney smoothing .
the language model was constructed using the srilm toolkit with interpolated kneser-ney discounting .
experiments on nist datasets show that our approach results in significant improvements in both directions .
experiments on chinese-english nist datasets show that our approach leads to significant improvements .
we maintain the parameters as a low-rank tensor to obtain low dimensional representations of words in their syntactic roles , and to leverage modularity in the tensor for easy training with online algorithms .
our method maintains the parameters as a low-rank tensor to obtain low dimensional representations of words in their syntactic roles , and to leverage modularity in the tensor for easy training with online algorithms .
barzilay and mckeown extract both single- and multiple-word paraphrases from a monolingual parallel corpus .
barzilay and mckeown extracted both single- and multiple-word paraphrases from a sentence-aligned corpus for use in multi-document summarization .
we use srilm toolkit to build a 5-gram language model with modified kneser-ney smoothing .
a 4-gram language model generated by sri language modeling toolkit is used in the cube-pruning process .
our model additionally learns the language ’ s canonical word order .
note that our model does not contain knowledge about the specific word order of the language .
our results show that the visual model outperforms the language-only model .
our results show that the vision-based model outperforms the language-only model on our dataset .
in this paper , we describe a phrase-based unigram model for statistical machine translation .
in this paper , we described a phrase-based unigram model for statistical machine translation .
the feature weights are tuned to optimize bleu using the minimum error rate training algorithm .
the minimum error rate training was used to tune the feature weights .
we compute the interannotator agreement in terms of the bleu score .
we evaluate the translation quality using the case-sensitive bleu-4 metric .
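a minimal sketch of corpus-level bleu scoring with the sacrebleu package; the hypothesis and reference strings here are toy data, not from any of the evaluations above. sacrebleu computes case-sensitive bleu-4 by default, matching the case-sensitive setting.

    import sacrebleu

    # toy data: one system output stream and one reference stream
    hypotheses = ["the cat sat on the mat"]
    references = [["the cat sat on the mat"]]  # list of reference streams

    bleu = sacrebleu.corpus_bleu(hypotheses, references)
    print(bleu.score)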
we use randomization test to calculate statistical significance .
we use approximate randomization for significance testing .
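a minimal implementation of the paired approximate randomization test mentioned above; the per-sentence scores, the metric function, and the trial count are all placeholders to be filled in by the experiment at hand.

    import random

    def approximate_randomization(scores_a, scores_b, metric, trials=10000, seed=0):
        # under the null hypothesis the system labels are exchangeable, so we
        # randomly swap each aligned pair and count how often the shuffled score
        # difference is at least as large as the observed one
        rng = random.Random(seed)
        observed = abs(metric(scores_a) - metric(scores_b))
        hits = 0
        for _ in range(trials):
            xs, ys = [], []
            for a, b in zip(scores_a, scores_b):
                if rng.random() < 0.5:
                    a, b = b, a  # swap this pair's system assignment
                xs.append(a)
                ys.append(b)
            if abs(metric(xs) - metric(ys)) >= observed:
                hits += 1
        return (hits + 1) / (trials + 1)  # smoothed p-value

    # usage with a mean metric over toy per-sentence scores
    p = approximate_randomization([0.7, 0.8, 0.6], [0.5, 0.6, 0.55],
                                  lambda s: sum(s) / len(s), trials=1000)
    print(p)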
the phrase translation strategy was statistically significantly better than the sentence translation strategy .
the phrase translation strategy significantly outperformed the sentence translation strategy .
recently , mikolov et al introduced an efficient way for inferring word embeddings that are effective in capturing syntactic and semantic relationships in natural language .
mikolov et al and mikolov et al introduce efficient methods to directly learn high-quality word embeddings from large amounts of unstructured raw text .
high quality word embeddings have been proven helpful in many nlp tasks .
importantly , word embeddings have been effectively used for several nlp tasks .
under the neural setting , we find that it is preferable to solve open targeted sentiment with structured models .
in this paper , we exploit structured neural models for open targeted sentiment .
we use the word2vec tool to pre-train the word embeddings .
we trained word embeddings using word2vec on 4 corpora of different sizes and types .
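a sketch of embedding pre-training with the gensim implementation of word2vec, assuming a tokenized corpus; the toy sentences and hyperparameters are illustrative only.

    from gensim.models import Word2Vec

    # toy corpus: an iterable of tokenized sentences
    sentences = [
        ["we", "train", "word", "embeddings", "with", "word2vec"],
        ["skip-gram", "and", "cbow", "are", "the", "two", "architectures"],
    ]

    # sg=1 selects skip-gram (sg=0 would be cbow); dimensionality, window,
    # and min_count are illustrative choices
    model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1)
    vec = model.wv["embeddings"]  # the learned 100-dimensional vector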
we use publicly available word embeddings trained on wikipedia , pubmed , and pmc .
we use pre-trained word embeddings of moen et al , which are publicly available .
word sense disambiguation ( wsd ) is the task of automatically determining the correct sense for a target word given the context in which it occurs .
word sense disambiguation ( wsd ) is the task of determining the correct meaning ( β€œ sense ” ) of a word in context , and several efforts have been made to develop automatic wsd systems .
blitzer et al experimented with structural correspondence learning , which focuses on finding frequently occurring pivot features that occur commonly across domains in the unlabeled data but equally characterize source and target domains .
blitzer et al induced a correspondence between features from a source and target domain based on structural correspondence learning over unlabelled target domain data .
we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting .
we use a fourgram language model with modified kneser-ney smoothing as implemented in the srilm toolkit .
relation extraction ( re ) is the task of recognizing the assertion of a particular relationship between two or more entities in text .
relation extraction is the task of recognizing and extracting relations between entities or concepts in texts .
grenager et al ( 2005 ) present an unsupervised hmm based on the observation that the segmented fields tend to be multiple words long .
grenager et al ( 2005 ) used a first order hmm which has a diagonal transition matrix and a specialized boundary model .
we ran mt experiments using the moses phrase-based translation system .
we used the moses pbsmt system for all of our mt experiments .
the 5-gram target language model was trained using kenlm .
the language model was trained using kenlm .
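for scoring with a kenlm model from python, something like the following works, assuming a model file (the path here is hypothetical) built offline with kenlm's lmplz tool.

    import kenlm

    # lm.arpa is a placeholder for a model built offline, e.g. with
    # kenlm's lmplz (order 5 for a 5-gram model)
    model = kenlm.Model("lm.arpa")
    print(model.score("this is a test", bos=True, eos=True))  # log10 probability
    print(model.perplexity("this is a test"))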
table 5 shows the bleu and per scores obtained by each system .
table 4 shows the comparison of the performances on bleu metric .
named entity recognition ( ner ) is a fundamental information extraction task that automatically detects named entities in text and classifies them into predefined entity types such as person , organization , gpe ( geopolitical entities ) , event , location , time , date , etc .
named entity recognition ( ner ) is a key technique for ie and other natural language processing tasks .
one-on-one tutoring has been shown to be a very effective form of instruction .
human one-to-one tutoring often yields significantly higher learning gains than classroom instruction .
the pos tags used in the reordering model are obtained using the treetagger .
the reference corpora and data sets are pos tagged with the ims treetagger .
event coreference resolution is the task of identifying event mentions and clustering them such that each cluster represents a unique real world event .
moreover , since event coreference resolution is a complex task that involves exploring a rich set of linguistic features , annotating a large corpus with event coreference information for a new language or domain of interest requires a substantial amount of manual effort .
the feature weights are tuned to optimize bleu using the minimum error rate training algorithm .
the log-linear parameter weights are tuned with mert on the development set .
the srilm toolkit was used to build this language model .
the target-side language models were estimated using the srilm toolkit .
galley and manning introduce the hierarchical phrase reordering model which increases the consistency of orientation assignments .
galley and manning propose a shift-reduce algorithm to integrate a hierarchical reordering model into phrase-based systems .
a : stokely-van camp bought the formula and started marketing the drink as gatorade .
a : stokely-van camp bought the formula and started marketing the drink as gatorade in 1967 .
we train a 4-gram language model on the xinhua portion of the gigaword corpus using the sri language toolkit with modified kneser-ney smoothing .
we train a trigram language model with modified kneser-ney smoothing from the training dataset using the srilm toolkit , and use the same language model for all three systems .
word embeddings have also been used in several nlp tasks including srl .
high quality word embeddings have been proven helpful in many nlp tasks .
we used the scikit-learn implementation of a logistic regression model using the default parameters .
we implement logistic regression with scikit-learn and use the lbfgs solver .
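a minimal sketch of the scikit-learn setup described above; the synthetic data stands in for the task's real feature vectors, and only the solver is set explicitly, leaving everything else at the library defaults.

    from sklearn.linear_model import LogisticRegression
    from sklearn.datasets import make_classification

    # toy data standing in for the task's feature vectors and labels
    X, y = make_classification(n_samples=200, n_features=20, random_state=0)

    # lbfgs is the solver named in the text; C and max_iter stay at the
    # scikit-learn defaults, matching "default parameters"
    clf = LogisticRegression(solver="lbfgs").fit(X, y)
    print(clf.predict(X[:5]), clf.score(X, y))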
recent work has shown that the capability to automatically identify problematic situations ( e . g . , speech recognition errors ) can help control and adapt dialog strategies to improve performance .
recent studies have also shown that the capability to automatically identify problematic situations during interaction can significantly improve the system performance .
das and chen , pang et al , turney , and dave et al .
das and chen , pang et al , turney , dave et al , and pang and lee .
translation quality is measured by case-insensitive bleu on newstest13 using one reference translation .
the evaluation metric for the overall translation quality is case-insensitive bleu-4 .
named entity recognition ( ner ) is the task of identifying and typing phrases that contain the names of persons , organizations , locations , and so on .
named entity recognition ( ner ) is the task of finding rigid designators as they appear in free text and classifying them into coarse categories such as person or location ( cite-p-24-4-6 ) .
graphics processing units ( gpus ) have previously been used to accelerate cky chart evaluation , but gains over cpu parsers were modest .
gpus have previously been used to accelerate cky evaluation , but gains over cpu parsers were modest .
then , we trained word embeddings using word2vec .
we pre-train the word embedding via word2vec on the whole dataset .
experiments on a chinese corpus justify the effectiveness of our global argument inference model over a state-of-the-art baseline .
the experimental results show that our global argument inference model outperforms the state-of-the-art system .
we have shown co-training to be a promising approach for predicting emotions .
our results show that co-training can be highly effective when a good set of features are chosen .
the language model is trained and applied with the srilm toolkit .
the srilm toolkit was used to build this language model .
turian et al showed that the optimal word embedding is task dependent .
turian et al learned a crf model using word embeddings as input features for ner and chunking tasks .
math-w-17-1-1-46 ( the context around math-w-17-1-1-55 ) .
note that math-w-3-1-1-52 , the empty string .
we propose a minimally supervised method for multilingual paraphrase extraction from definition sentences .
we propose a minimally supervised method for multilingual paraphrase extraction .
we propose a new method for translation acquisition which uses a set of synonyms to acquire translations .
we proposed a new method for translation acquisition which uses a set of synonyms to acquire translations .
multi-task learning has resulted in successful systems for various nlp tasks , especially in cross-lingual settings .
multi-task joint modeling has been shown to effectively improve individual tasks .
our system is composed of three cascaded components : the tagging of semantic role phrases , the identification of semantic role phrases , and semantic dependency parsing .
the system includes three cascaded components : semantic role phrase tagging , semantic role phrase identification , and frame semantic dependency parsing .
the biomedical event extraction task in this work is adopted from the genia event extraction subtask of the well-known bionlp shared task .
in this study , we adopt the event extraction task defined in the bionlp 2009 shared task as a model information extraction task .
we used adam optimization with its default parameters , and cross-entropy as the loss function .
we further used adam to optimize the parameters , and used cross-entropy as the loss function .
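as a sketch in pytorch (assuming that framework; the lines above do not specify one), a single training step with adam at its default parameters and cross-entropy loss looks like this, with a toy linear model standing in for the real architecture.

    import torch
    import torch.nn as nn

    model = nn.Linear(50, 3)                          # 50 features, 3 classes
    optimizer = torch.optim.Adam(model.parameters())  # defaults: lr=1e-3, betas=(0.9, 0.999)
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(32, 50)           # toy batch of feature vectors
    y = torch.randint(0, 3, (32,))    # toy gold labels

    optimizer.zero_grad()
    loss = loss_fn(model(x), y)       # cross-entropy loss
    loss.backward()
    optimizer.step()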
we used the wapiti toolkit , based on the linear-chain crfs framework .
we use wapiti , a state-of-the-art crf implementation , with a standard feature set .
conditional random fields are undirected graphical models used for labeling sequential data .
conditional random fields are probabilistic models for labelling sequential data .
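concretely, a linear-chain crf defines the conditional probability of a label sequence y given an observation sequence x as a globally normalized product of local feature scores:

    p(\mathbf{y} \mid \mathbf{x}) = \frac{1}{Z(\mathbf{x})} \exp\Big( \sum_{t=1}^{T} \sum_{k} \lambda_k f_k(y_{t-1}, y_t, \mathbf{x}, t) \Big), \qquad
    Z(\mathbf{x}) = \sum_{\mathbf{y}'} \exp\Big( \sum_{t=1}^{T} \sum_{k} \lambda_k f_k(y'_{t-1}, y'_t, \mathbf{x}, t) \Big)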
this provides the first state-of-the-art benchmark on this data subset .
our model also outperforms state-of-the-art results on the shell noun dataset .
the translation quality is evaluated by bleu and ribes .
the translation quality is evaluated by case-insensitive bleu-4 .
intrinsic evaluation in nlg has often relied on human input , typically in the form of ratings of or responses to questionnaires .
intrinsic nlg evaluations often involve ratings of text quality or responses to questionnaires , with some studies using post-editing by human experts .
lexical entailment is a prominent component within the textual entailment recognition paradigm , which models semantic inference .
overall , lexical entailment is suggested as a useful model for lexical substitution needs in semantic-oriented applications .
we extract fragments for every sentence from the stanford syntactic parse tree .
we use stanford corenlp to dependency parse sentences and extract the subjects and objects of verbs .
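a sketch of subject/object extraction from dependency parses, using stanza (the stanford nlp group's python library) as a stand-in for the corenlp pipeline named above; the sentence and the extraction logic are illustrative, not the authors' code.

    import stanza

    # requires a one-time stanza.download("en") to fetch the english models
    nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse")
    doc = nlp("the parser extracts subjects and objects of verbs .")

    for sent in doc.sentences:
        for word in sent.words:
            if word.deprel in ("nsubj", "obj"):
                head = sent.words[word.head - 1].text  # the governing verb
                print(word.deprel, word.text, "->", head)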
entity linking ( el ) is a central task in information extraction β€” given a textual passage , identify entity mentions ( substrings corresponding to world entities ) and link them to the corresponding entry in a given knowledge base ( kb , e.g . wikipedia or freebase ) .
entity linking ( el ) is the task of mapping specific textual mentions of entities in a text document to an entry in a large catalog of entities , often called a knowledge base or kb , and is one of the major tasks in the knowledge-base population track at the text analysis conference ( tac ) ( cite-p-23-3-1 ) .
the 5-gram kneser-ney smoothed language models were trained by srilm , with kenlm used at runtime .
the language models were interpolated kneser-ney discounted trigram models , all constructed using the srilm toolkit .
recent advances in neural machine translation have changed the direction of machine translation community .
in recent years , neural machine translation has achieved great advancement .
the decoder is capable of both cnf parsing and earley-style parsing with cube-pruning .
the decoder uses a cky-style parsing algorithm and cube pruning to integrate the language model scores .
summary generation remains a significant open problem for natural language processing .
summarization of large texts is still an open problem in natural language processing .
merlo and stevenson classify a smaller number of 60 english verbs into three verb classes , by utilising supervised decision trees .
merlo and stevenson presented an automatic classification of three types of english intransitive verbs , based on argument structure and heuristics to thematic relations .
we apply standard tuning with mert on the bleu score .
we use the mert algorithm for tuning and bleu as our evaluation metric .
we use the transformer model from vaswani et al which is an encoder-decoder architecture that relies mainly on a self-attention mechanism .
vaswani et al came up with a highly parallelizable architecture called transformer , which uses self-attention to better encode sequences .
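the core operation of the transformer is scaled dot-product self-attention, which for query, key, and value matrices Q, K, V with key dimension d_k computes:

    \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\Big( \frac{Q K^{\top}}{\sqrt{d_k}} \Big)\, V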
link grammar is a highly lexical , context-free formalism that does not rely on constituent structure .
link grammar is a context-free lexicalized grammar without explicit constituents .
people who use augmentative and alternative communication devices communicate slowly , often below 10 words per minute compared to 150 wpm or higher for speech .
even still , communication rates with aac devices are often below 10 words per minute , compared to the common 130-200 words per minute speech rate of speaking people .
we propose to use the generalized perceptron framework to integrate srl-derived ( and other ) features .
we also propose to use the generalized perceptron learning framework to integrate srl-derived features with other features .
supervised machine learning was applied to monitor the performance of the rule-based method .
ongoing work aims to improve the rule-based method and combine it with a supervised machine learning algorithm .
in our approach , we explore how the high-level structure of human-authored documents can be used to produce well-formed comprehensive overviews .
we use the high-level structure of human-authored texts to automatically induce a domain-specific template for the topic structure of a new overview .
we use 300 dimensional glove embeddings trained on the common crawl 840b tokens dataset , which remain fixed during training .
the 50-dimensional pre-trained word embeddings are provided by glove , which are fixed during our model training .
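a sketch of loading released glove vectors into a lookup table that is then held fixed during training; the file name matches the common crawl 840b release but the path is assumed, and the parse keeps the last 300 fields as the vector because a few tokens in that file contain spaces.

    import numpy as np

    embeddings = {}
    with open("glove.840B.300d.txt", encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vec = np.asarray(parts[-300:], dtype=np.float32)  # last 300 fields are the vector
            word = " ".join(parts[:-300])                     # a few tokens contain spaces
            embeddings[word] = vec                            # held fixed during training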
in this paper , we attempt to address this imbalance for graph-based parsing .
in this paper , we abandon exact search in graph-based parsing in favor of freedom in feature scope .
lda is one of the most common topic models ; it assumes each document is a mixture of various topics and each word is generated from a multinomial distribution conditioned on a topic .
lda is a three-level hierarchical bayesian model where each document is a multinomial distribution over topics , and each topic is a multinomial distribution over the vocabulary .
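a minimal lda sketch with gensim, assuming a tokenized corpus; the toy documents and topic count are illustrative. each document comes out as a multinomial over topics and each topic as a multinomial over the vocabulary, matching the description above.

    from gensim import corpora
    from gensim.models import LdaModel

    # toy documents standing in for a real tokenized corpus
    docs = [["topic", "models", "cluster", "words"],
            ["lda", "assumes", "documents", "mix", "topics"]]

    dictionary = corpora.Dictionary(docs)
    corpus = [dictionary.doc2bow(doc) for doc in docs]

    lda = LdaModel(corpus, num_topics=2, id2word=dictionary)  # num_topics is illustrative
    print(lda.print_topics())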
the umls semantic types have also been successfully used for the medical domain .
for instance , the umls semantic types were integrated into the biotop ontology and previously used for medical qa .