columns : sentence1 ( stringlengths 16-446 ) , sentence2 ( stringlengths 14-436 )
show that our model can make full use of all informative sentences and effectively reduce the influence of wrongly labelled instances .
our model can make full use of all informative sentences and alleviate the wrong labelling problem for distant supervised relation extraction .
here is that none of the prior methods for named-entity disambiguation is robust enough to cope with such difficult inputs .
the conclusion here is that none of the prior methods for named-entity disambiguation is robust enough to cope with such difficult inputs .
sentence hypothesis is selected as the final output of our system .
the highest scoring sentence hypothesis is selected as the final output of our system .
crowdsourcing is a scalable and inexpensive data collection method , but collecting high quality data efficiently requires thoughtful orchestration of crowdsourcing jobs .
crowdsourcing is a cheap and increasingly-utilized source of annotation labels .
we trained a 4-gram language model on this data with kneser-ney discounting using srilm .
we also use a 4-gram language model trained using srilm with kneser-ney smoothing .
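The two sentences above describe training an n-gram language model with SRILM and Kneser-Ney smoothing. The following is a minimal sketch of how such a model is typically built by calling SRILM's ngram-count tool from Python; the corpus and output file names are placeholder assumptions, not taken from the papers.

```python
# Sketch: building a 4-gram Kneser-Ney language model with SRILM's
# ngram-count tool, invoked from Python. File names are placeholders.
import subprocess

subprocess.run(
    [
        "ngram-count",
        "-order", "4",           # 4-gram model, as in the sentences above
        "-kndiscount",           # (modified) Kneser-Ney discounting
        "-interpolate",          # interpolate lower-order estimates
        "-text", "train.en",     # tokenized training corpus (placeholder path)
        "-lm", "lm.4gram.arpa",  # output language model in ARPA format
    ],
    check=True,
)
```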
we use the linear svm classifier from scikit-learn .
for all classifiers , we used the scikit-learn implementation .
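As a rough illustration of the linear SVM setup mentioned above, here is a small scikit-learn sketch; the TF-IDF feature extraction and toy data are illustrative assumptions rather than details from the cited work.

```python
# Sketch: a linear SVM classifier with scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["an example sentence", "another training sentence"]  # placeholder data
labels = [0, 1]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["a new sentence to classify"]))
```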
word alignment is the problem of annotating parallel text with translational correspondence .
word alignment is a central problem in statistical machine translation ( smt ) .
nevertheless , gru has been experimentally proven to be comparable in performance to lstm .
gru and lstm have been shown to yield comparable performance .
table 3 reports the translation performance as measured by bleu for the different configurations and language pairs described in section 5 .
table 2 shows the translation quality measured in terms of bleu metric with the original and universal tagset .
scarton and specia propose a number of discourse-informed features in order to predict bleu and ter at document level .
scarton and specia apply pseudoreferences , document-aware and discourse-aware features for document-level quality prediction , using bleu and ter as quality scores .
we use srilm to train a 5-gram language model on the target side of our training corpus with modified kneser-ney discounting .
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided .
a more recent development was the use of conditional random field for pos tagging .
a relatively more recent approach for slu is based on conditional random fields .
relation extraction is the task of automatically detecting occurrences of expressed relations between entities in a text and structuring the detected information in a tabularized form .
relation extraction is a subtask of information extraction that finds various predefined semantic relations , such as location , affiliation , rival , etc. , between pairs of entities in text .
we trained a trigram language model on the chinese side , with the srilm toolkit , using the modified kneser-ney smoothing option .
our trigram word language model was trained on the target side of the training corpus using the srilm toolkit with modified kneser-ney smoothing .
we used the pre-trained google embedding to initialize the word embedding matrix .
to encode the original sentences we used word2vec embeddings pre-trained on google news .
we train randomly initialized word embeddings of size 500 for the dialog model and use 300 dimensional glove embeddings for reranking classifiers .
we use glove vectors with 200 dimensions as pre-trained word embeddings , which are tuned during training .
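The pairs above describe initializing an embedding lookup table from pre-trained GloVe vectors. Below is a minimal sketch of that step, assuming a 200-dimensional GloVe file and a tiny placeholder vocabulary; out-of-vocabulary words simply keep their zero initialization.

```python
# Sketch: initializing an embedding matrix from pre-trained GloVe vectors.
import numpy as np

dim = 200
vocab = {"the": 0, "cat": 1, "sat": 2}  # word -> row index (placeholder vocab)
embeddings = np.zeros((len(vocab), dim), dtype=np.float32)

with open("glove.6B.200d.txt", encoding="utf-8") as f:  # assumed file name
    for line in f:
        parts = line.rstrip().split(" ")
        word, vec = parts[0], parts[1:]
        if word in vocab:
            embeddings[vocab[word]] = np.asarray(vec, dtype=np.float32)

# `embeddings` can now initialize a (trainable) lookup table, as described above.
```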
semantic role labeling ( srl ) is a task of automatically identifying semantic relations between predicate and its related arguments in the sentence .
semantic role labeling ( srl ) is defined as the task to recognize arguments for a given predicate and assign semantic role labels to them .
information extraction ( ie ) is a technology that can be applied to identifying both sources and targets of new hyperlinks .
information extraction ( ie ) is the process of identifying events or actions of interest and their participating entities from a text .
for all data sets , we trained a 5-gram language model using the sri language modeling toolkit .
we used the sri language modeling toolkit to train lms on our training data for each ilr level .
we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting .
further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus .
for training our system classifier , we have used scikit-learn .
for all classifiers , we used the scikit-learn implementation .
we evaluated translation quality based on the case-insensitive automatic evaluation score bleu-4 .
we evaluate the translation quality using the case-sensitive bleu-4 metric .
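The pair above refers to case-insensitive and case-sensitive BLEU-4 scoring. The sketch below computes corpus-level BLEU-4 with NLTK as a stand-in for the scoring scripts used in such papers; lowercasing both sides makes the score case-insensitive, and the toy hypothesis/reference are placeholders.

```python
# Sketch: corpus-level, case-insensitive BLEU-4 with NLTK.
from nltk.translate.bleu_score import corpus_bleu

hypotheses = [["the", "cat", "is", "on", "the", "mat"]]
references = [[["The", "cat", "is", "on", "the", "mat", "."]]]

hyps = [[t.lower() for t in h] for h in hypotheses]
refs = [[[t.lower() for t in r] for r in rs] for rs in references]

bleu4 = corpus_bleu(refs, hyps, weights=(0.25, 0.25, 0.25, 0.25))
print(f"case-insensitive BLEU-4: {bleu4:.4f}")
```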
we use the stanford dependency parser to extract nouns and their grammatical roles .
we use the stanford parser to generate a dg for each sentence .
we parse the source sentences using the stanford corenlp parser and linearize the resulting parses .
we extract fragments for every sentence from the stanford syntactic parse tree .
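The pairs above use the Stanford parser to obtain dependency structures. As a rough sketch of extracting nouns and their grammatical roles from a dependency parse, the example below uses Stanza (the Stanford NLP group's Python toolkit) as a stand-in for the Stanford parser; the input sentence is a placeholder.

```python
# Sketch: nouns and their grammatical roles from a dependency parse (Stanza).
import stanza

# stanza.download("en")  # run once to fetch the English models
nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse")
doc = nlp("The parser extracts nouns and their grammatical roles.")

for sent in doc.sentences:
    for word in sent.words:
        if word.upos == "NOUN":
            head = sent.words[word.head - 1].text if word.head > 0 else "ROOT"
            print(word.text, word.deprel, "<-", head)
```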
the starting point of our approach is the observation that a head-annotated treebank ( obeying the constraint that every nonterminal node has exactly one daughter marked as head ) defines a unique lexicalized tree substitution grammar ( obeying the constraint that every elementary tree has exactly one lexical anchor ) .
starting point of our approach is the observation that a head-annotated treebank defines a unique lexicalized tree substitution grammar .
rush et al and nallapati et al employed attention-based sequence-to-sequence framework only for sentence summarization .
gu et al , cheng and lapata , and nallapati et al also utilized seq2seq based framework with attention modeling for short text or single document summarization .
an eojeol is a korean spacing unit ( similar to an english word ) , which usually consists of one or more stem morphemes and a series of functional morphemes .
an eojeol is a surface level form consisting of more than one combined morpheme .
target-side language affects how well an nmt encoder captures these semantic phenomena .
we note that the target-side language affects how an nmt source-side encoder captures these semantic phenomena .
borin and wang et al used pivot languages to improve word alignment .
wang et al focus on learning a word alignment model without a source-target corpus .
we use the logistic regression classifier in the skll package , which is based on scikit-learn , optimizing for f 1 score .
for our logistic regression classifier we use the implementation included in the scikit-learn toolkit .
the stanford parser was used to generate the dependency parse information for each sentence .
we use the stanford parser to generate a dg for each sentence .
we trained word embeddings using word2vec on 4 corpora of different sizes and types .
for word embeddings , we trained a skip-gram model over wikipedia , using word2vec .
we implement logistic regression with scikit-learn and use the lbfgs solver .
for the feature-based system we used logistic regression classifier from the scikit-learn library .
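The pairs above describe logistic regression classifiers built with scikit-learn and the lbfgs solver. A minimal sketch follows; the toy feature matrix and labels are placeholder assumptions.

```python
# Sketch: scikit-learn logistic regression with the lbfgs solver.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])  # toy features
y = np.array([0, 1, 0, 1])

clf = LogisticRegression(solver="lbfgs", C=1.0, max_iter=1000)
clf.fit(X, y)
print(clf.predict_proba(X[:1]))
```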
word embeddings are initialized from glove 100-dimensional pre-trained embeddings .
word embeddings are initialized with glove 27b trained on tweets and are trainable parameters .
we show that a multi-task learning setup where natural subtasks of the full am problem are added as auxiliary tasks improves performance .
moreover , we find that jointly learning β€˜ natural ’ subtasks , in a multi-task learning setup , improves performance .
we used the pre-trained word embeddings that were learned using the word2vec toolkit on google news dataset .
for our purpose we use word2vec embeddings trained on a google news dataset and find the pairwise cosine distances for all words .
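The pair above uses word2vec embeddings pre-trained on Google News and pairwise cosine distances between words. The sketch below shows one common way to do this with gensim; the binary file name is the commonly distributed one and is an assumption here, as are the example words.

```python
# Sketch: Google News word2vec embeddings and pairwise cosine distances (gensim).
from itertools import combinations
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

words = ["king", "queen", "car"]
for w1, w2 in combinations(words, 2):
    # cosine distance = 1 - cosine similarity
    print(w1, w2, 1.0 - kv.similarity(w1, w2))
```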
we preprocessed the corpus with tokenization and true-casing tools from the moses toolkit .
we used a phrase-based smt model as implemented in the moses toolkit .
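The preprocessing described above (tokenization and truecasing with the Moses toolkit) is typically done with the Moses helper scripts. The sketch below drives those scripts from Python; the Moses installation path and corpus file names are placeholder assumptions.

```python
# Sketch: Moses tokenization and truecasing, driven from Python.
import subprocess

MOSES = "/path/to/mosesdecoder/scripts"  # placeholder path

# Tokenize the raw English corpus.
with open("corpus.raw.en") as inp, open("corpus.tok.en", "w") as out:
    subprocess.run([f"{MOSES}/tokenizer/tokenizer.perl", "-l", "en"],
                   stdin=inp, stdout=out, check=True)

# Learn a truecasing model from the tokenized corpus, then apply it.
subprocess.run([f"{MOSES}/recaser/train-truecaser.perl",
                "--model", "truecase-model.en", "--corpus", "corpus.tok.en"],
               check=True)
with open("corpus.tok.en") as inp, open("corpus.true.en", "w") as out:
    subprocess.run([f"{MOSES}/recaser/truecase.perl",
                    "--model", "truecase-model.en"],
                   stdin=inp, stdout=out, check=True)
```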
semeval is the international workshop on semantic evaluation , formerly senseval .
semeval is a yearly event in which international teams of researchers work on tasks in a competition format where they tackle open research questions in the field of semantic analysis .
although wordnet is a fine resource , we believe that ignoring other thesauri is a serious oversight .
unfortunately , wordnet is a fine-grained resource , encoding sense distinctions that are difficult to recognize even for human annotators ( cite-p-13-1-2 ) .
algorithms presented in this paper are not specific to bitext projections and can be used for learning from partial parses .
apart from bitext projections , this work can be extended to other cases where learning from partial structures is required .
we use 300-dimensional glove vectors trained on 6b common crawl corpus as word embeddings , setting the embeddings of outof-vocabulary words to zero .
for word-level embedding e w , we utilize pre-trained , 300-dimensional embedding vectors from glove 6b .
we use a minibatch stochastic gradient descent algorithm together with an adagrad optimizer .
to optimize model parameters , we use the adagrad algorithm of duchi et al with l2 regularization .
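The pair above trains with minibatch gradient descent using an Adagrad optimizer and L2 regularization. A minimal PyTorch sketch of one such update step follows; the linear model and random minibatch are placeholders.

```python
# Sketch: one minibatch update with Adagrad and L2 regularization (weight decay).
import torch
import torch.nn as nn

model = nn.Linear(100, 2)                        # placeholder model
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01, weight_decay=1e-4)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 100)                         # one minibatch of features
y = torch.randint(0, 2, (32,))                   # minibatch labels

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```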
we use the 100-dimensional glove embeddings trained on 2 billion tweets to initialize the lookup table and do fine-tuning during training .
we pre-trained the word embeddings with glove on english gigaword 2 and we fine-tune them during training .
to this end , we use and build on several recent advances in neural domain adaptation such as adversarial training ( cite-p-25-1-10 ) and domain separation network ( cite-p-25-1-3 ) , proposing a new adversarial training scheme .
to this end , we use and build on several recent advances in neural domain adaptation such as adversarial training ( cite-p-25-1-10 ) and domain separation network ( cite-p-25-1-3 ) , proposing a new effective adversarial training scheme .
and therefore should only be influenced by languages with similar properties .
in contrast , the ordering decisions are only influenced by languages with similar properties .
blitzer et al investigate domain adaptation for sentiment classifiers , focusing on online reviews for different types of products .
blitzer et al investigate domain adaptation for sentiment classifiers using structural correspondence learning .
neural machine translation has become the primary paradigm in machine translation literature .
in recent years , neural machine translation based on encoder-decoder models has become the mainstream approach for machine translation .
to maximize the joint likelihood of the discourse relations and the text , it is possible to marginalize over discourse relations at test time , outperforming language models that do not account for discourse structure .
furthermore , by marginalizing over latent discourse relations at test time , we obtain a discourse informed language model , which improves over a strong lstm baseline .
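The marginalization described above can be sketched as follows, where z ranges over the latent discourse relations and y is the text; this is a generic formulation assumed here, not the cited paper's exact notation.

```latex
% Discourse-informed language model score obtained by summing out the
% latent discourse relations z at test time.
p(y) \;=\; \sum_{z} p(z)\, p(y \mid z),
\qquad
\log p(y) \;=\; \log \sum_{z} \exp\bigl(\log p(z) + \log p(y \mid z)\bigr)
```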
for sampling nodes , non-interactive active learning algorithms exclude expert annotators ' human labels .
non-interactive algorithms do not use human labels during the learning process .
we use a conditional random field formalism to learn a model from labeled training data that can be applied to unseen data .
we use the crf learning algorithm , which consists in a framework for building probabilistic models to label sequential data .
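The pair above uses conditional random fields to label sequential data. The sketch below uses the sklearn-crfsuite package as one common linear-chain CRF implementation (the sentences do not name a specific toolkit); the per-token feature dicts and labels are placeholders.

```python
# Sketch: a linear-chain CRF sequence labeler with sklearn-crfsuite.
import sklearn_crfsuite

# Each sentence is a list of per-token feature dicts, with one label per token.
X_train = [[{"word.lower()": "john", "is_title": True},
            {"word.lower()": "runs", "is_title": False}]]
y_train = [["B-PER", "O"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, y_train)
print(crf.predict(X_train))
```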
for word embeddings , we report the results of pennington et al and collobert and weston .
to be able to use non-annotated corpus data for training , we use the method proposed by collobert and weston .
alignment , can benefit from a wealth of effective , well established ip techniques , including convolution-based filters , texture analysis and hough transform .
therefore , the bcp can benefit from a wealth of effective , well established ip techniques , including convolution-based filtering , texture analysis , and hough transform .
our learned models of the best wizard ’ s behavior combine features that are available to wizards with some that are not , such as recognition confidence and acoustic model scores .
our learned models of the best wizard ’ s behavior combine features available to wizards with some that are not , such as recognition confidence and acoustic model scores .
coreference resolution is the process of finding discourse entities ( markables ) referring to the same real-world entity or concept .
coreference resolution is the process of linking multiple mentions that refer to the same entity .
this paper focuses on unsupervised methods which we argue are useful for broad coverage .
this paper examines the benefits of system combination for unsupervised wsd .
during the last decade , statistical machine translation systems have evolved from the original word-based approach into phrase-based translation systems .
over the last decade , phrase-based statistical machine translation systems have demonstrated that they can produce reasonable quality when ample training data is available , especially for language pairs with similar word order .
we use the stanford parser for obtaining all syntactic information .
we used the stanford parser to generate the grammatical structure of sentences .
for our baseline we use the moses software to train a phrase based machine translation model .
for phrase-based smt translation , we used the moses decoder and its support training scripts .
further , we apply a 4-gram language model trained with the srilm toolkit on the target side of the training corpus .
to select the most fluent path , we train a 5-gram language model with the srilm toolkit on the english gigaword corpus .
in comparison with other studies , leacock and chodorow lacked collocations , ng and lee lacked local context , and escudero used local context and collocations with smaller sizes .
leacock and chodorow used an nb classifier , and indicated that by combining topic context and local context they could achieve higher accuracy .
we evaluate the performance of our parser on four linguistic data sets : those used in the recent semeval task on semantic dependency parsing .
we extend this algorithm into a practical parser and evaluate its performance on four linguistic data sets used in semantic dependency parsing .
we used two decoders in the experiments , moses and our in-house hierarchical phrase-based smt .
we built a hierarchical phrase-based mt system based on weighted scfg .
negation is a grammatical category which comprises various kinds of devices to reverse the truth value of a proposition ( cite-p-18-3-8 ) .
negation is a grammatical category that comprises devices used to reverse the truth value of propositions .
similarly , hua applied synonyms relationships between two different languages to automatically acquire english synonymous collocations .
similarly , hua wu applied synonyms relationship between two different languages to automatically acquire english synonymous collocation .
wubben et al and coster and kauchak apply phrase based machine translation to the task of text simplification .
coster and kauchak and specia , drawing on work by caseli et al , use standard statistical machine translation machinery for text simplification .
it is found that each of the english equivalent synsets occurs in each separate class of english verbnet .
it also has been found that each of the english equivalent synsets occurs in each separate class of english verbnet .
word sense disambiguation ( wsd ) is a particular problem of computational linguistics which consists in determining the correct sense for a given ambiguous word .
word sense disambiguation ( wsd ) is a key task in computational lexical semantics , inasmuch as it addresses the lexical ambiguity of text by making explicit the meaning of words occurring in a given context ( cite-p-18-3-10 ) .
we use sri language modeling toolkit to train a 5-gram language model on the english sentences of fbis corpus .
we train a 4-gram language model on the xinhua portion of english gigaword corpus by srilm toolkit .
blitzer et al investigate domain adaptation for pos tagging using the method of structural correspondence learning .
blitzer et al propose an effective algorithm for unsupervised domain adaptation , called structural correspondence learning .
sequence-to-sequence learning : in this work , we follow the encoder-decoder architecture proposed by bahdanau et al .
in this paper , we introduce a new lightweight context-aware model based on the attention encoder-decoder model proposed by bahdanau et al .
a snippet is a brief window of text extracted by a search engine around the query term in a document .
a snippet consists of a title , a short summary of a web page and a hyperlink to the web page .
to evaluate segment translation quality , we use corpus level bleu .
to evaluate the evidence span identification , we calculate f-measure on words , and bleu and rouge .
word-level measures were not able to differentiate between different senses of one word , while sense-level measures actually increase correlation when shifting to sense similarities .
word-level measures were not able to differentiate between different senses of one word , while sense-level measures could even increase correlation when shifting to sense similarities .
we propose a novel approach to model relational knowledge based on low-rank subspace regularization .
we proposed a novel framework for modeling relational knowledge in word embeddings using rank-1 subspace regularization .
commonly used kernels in nlp are string kernels and tree kernels .
essk is the simple extension of the word sequence kernel and string subsequence kernel .
our labeled data comes from the penn treebank and consists of about 40,000 sentences from wall street journal articles annotated with syntactic information .
our out-of-domain data is the wall street journal portion of the penn treebank which consists of about 40,000 sentences annotated with syntactic information .
other parsers , such as that of lombardo and lesmo , use grammars with cfg-like rules which encode the preferred order of dependents for each given governor .
other parsers , such as that of lombardo and lesmo , use grammars with context-free like rules which encode the preferred order of dependents for each given governor , as defined by gaifman .
semantic applications typically extract information from intermediate structures derived from sentences , such as dependency .
semantic applications , such as qa or summarization , typically extract sentence features from a derived intermediate structure .
a : appleseed , whose real name was john chapman , planted many trees .
a : appleseed , whose real name was john chapman , planted many trees in the early 1800s .
table 2 presents the translation performance in terms of various metrics such as bleu , meteor and translation edit rate .
table 3 shows the results in bleu , translation edit rate , and position-independent word-error rate , obtained with moses and our hierarchical phrase-based smt , respectively .
dependency parsing is a very important nlp task and has wide usage in different tasks such as question answering , semantic parsing , information extraction and machine translation .
dependency parsing is a core task in nlp , and it is widely used by many applications such as information extraction , question answering , and machine translation .
we extract the 4096-dimensional fully-connected layer of 19-layer vggnet as the vector representation of images .
for this paper we used the penultimate layer of the 16-layer variant of vggnet .
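The pair above takes a 4096-dimensional fully-connected-layer activation from a pre-trained VGG network as the image representation. The sketch below shows one way to do this with torchvision; the exact layer index, the `pretrained=True` flag (older-style torchvision API), and the image path are assumptions.

```python
# Sketch: 4096-d fully-connected-layer features from a pre-trained VGG-19.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

vgg = models.vgg19(pretrained=True).eval()
# Drop the final classification layer so the output is the 4096-d fc activation.
vgg.classifier = torch.nn.Sequential(*list(vgg.classifier.children())[:-1])

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

img = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    features = vgg(img)   # shape: (1, 4096)
print(features.shape)
```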
conditional random fields are undirected graphical models that are conditionally trained .
crfs are undirected graphical models which define a conditional distribution over labellings given an observation .
discourse parsing is a fundamental task in natural language processing that entails the discovery of the latent relational structure in a multi-sentence piece of text .
and while discourse parsing is a document level task , discourse segmentation is done at the sentence level , assuming that sentence boundaries are known .
we found that using a maximum phrase length of 7 for the translation model and a 5-gram language model produces the best results in terms of bleu scores for our sape model .
we found that using a maximum phrase length of 10 for the translation model and a 6-gram language model produces the best results in terms of bleu scores for our sape model .
pcdc system must have access to global information regarding the coreference space .
thus , a pcdc system must have access to global information regarding the pnms .
based on the findings , we define a syntactic type system for the time expression , and propose a type-based time expression .
based on the findings , we propose a type-based approach named syntime 1 for time expression recognition .
in our system implementation , we design a general and configurable platform .
a general , configurable platform was designed for our model .
previous systems for opinion expression markup have typically used simple feature sets which have allowed the use of efficient off-the-shelf sequence labeling methods based on viterbi search .
previous systems for opinionated expression markup have typically used simple feature sets which have allowed the use of efficient off-the-shelf sequence labeling methods based on viterbi search .
in this study , we propose a co-training approach to improving the classification .
we propose a co-training approach to making use of unlabeled chinese data .
neural machine translation has become the primary paradigm in machine translation literature .
neural network models for machine translation are now largely successful for many language pairs and domains .
the weights of the log-linear interpolation were optimized by means of mert , using the news-commentary test set of the 2008 shared task as a development set .
the weights λ m in the log-linear model were trained using minimum error rate training with the news 2009 development set .
for decoding , we used the state-of-the-art phrase-based smt toolkit moses with default options , except for the distortion limit .
we used the moses mt toolkit with default settings and features for both phrase-based and hierarchical systems .
we use the scikit-learn toolkit as our underlying implementation .
we use the scikit-learn machine learning library to implement the entire pipeline .
in recent years , neural machine translation has achieved great advancement .
neural machine translation has witnessed great successes in recent years .
word segmentation is a fundamental task for processing most east asian languages , typically chinese .
word segmentation is a fundamental task for chinese language processing .
for letter-to-phoneme conversion , best results are obtained when all five strategies are combined : word accuracy is raised to 65.5 % relative to 61.7 % .
by use of a very simple strategy for silence avoidance , the results for letter-to-phoneme conversion were marginally increased from 61.7 % to 61.9 % words correct and from 91.6 % to 91.8 % phonemes correct .
the same technique was used by hall et al to combine six transition-based parsers in the best performing system in the conll 2007 shared task .
this weighting scheme , which we will refer to as the default model , was later used by hall et al to achieve the best overall score in the conll 2007 shared task by combining six different parsers .
relation extraction is the problem of populating a target relation ( representing an entity-level relationship or attribute ) with facts extracted from natural-language text .
relation extraction is a traditional information extraction task which aims at detecting and classifying semantic relations between entities in text ( cite-p-10-1-18 ) .
discourse is a structurally organized set of coherent text segments .
since discourse is a natural form of communication , it favors the observation of the patient ’ s functionality in everyday life .