we also extract subject-verb-object event representations , using the stanford part-of-speech tagger and maltparser .
this requires part-of-speech tagging the glosses , for which we use the stanford maximum entropy tagger .
we use srilm to train a 5-gram language model on the xinhua portion of the english gigaword corpus 5th edition with modified kneser-ney discounting .
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided .
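Several sentences here describe training n-gram language models with Kneser-Ney smoothing via SRILM. As a rough illustration of the smoothing idea only, here is a minimal pure-Python sketch of *interpolated* Kneser-Ney for bigrams with a fixed discount; the function name and fixed discount value are illustrative assumptions, and SRILM's modified variant uses order-specific discounts over higher-order n-grams:

```python
from collections import Counter

def kneser_ney_bigram(tokens, discount=0.75):
    """Interpolated Kneser-Ney bigram model (fixed discount) -- a sketch.

    Returns a function prob(word, prev) giving the smoothed probability.
    """
    bigrams = Counter(zip(tokens, tokens[1:]))
    unigram = Counter(tokens)
    # continuation count: in how many distinct contexts does w appear?
    continuation = Counter(w for (_, w) in bigrams)
    n_bigram_types = len(bigrams)
    # number of distinct words following each context
    followers = Counter(prev for (prev, _) in bigrams)

    def prob(word, prev):
        p_cont = continuation[word] / n_bigram_types  # lower-order (continuation) prob
        if unigram[prev] == 0:
            return p_cont
        lam = discount * followers[prev] / unigram[prev]  # interpolation weight
        p_big = max(bigrams[(prev, word)] - discount, 0) / unigram[prev]
        return p_big + lam * p_cont

    return prob
```

For any seen context, the probabilities over the vocabulary sum to one: the discounted mass is exactly redistributed through the continuation distribution.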
for instance , metrics such as bleu tend to favour longer n-gram matchings , and are , thus , biased towards word ordering .
for example , metrics such as bleu , nist , and ter rely on word n-gram surface matching .
the annotation scheme leans on the universal stanford dependencies complemented with the google universal pos tagset and the interset interlingua for morphological tagsets .
the ud scheme is built on the google universal part-of-speech tagset , the interset interlingua of morphosyntactic features , and stanford dependencies .
we select the cutting-plane variant of the margin-infused relaxed algorithm with additional extensions described by eidelman .
our discriminative model is a linear model trained with the margin-infused relaxed algorithm .
word embeddings have boosted performance in many natural language processing applications in recent years .
high quality word embeddings have been proven helpful in many nlp tasks .
the emu speech database system defines an annotation scheme involving temporal constraints of precedence and overlap .
in the emu speech database system the hierarchical relation between levels has to be made explicit .
we used support vector machines , a maximum-margin classifier that realizes a linear discriminative model .
we build discriminative models using support vector machines for ranking .
in this paper , we approach the spelling correction problem for indic languages .
in this paper , we show how to build an automatic spelling corrector for resource-scarce languages .
we used the sri language modeling toolkit to train a five-gram model with modified kneser-ney smoothing .
we used the srilm toolkit to train a 4-gram language model on the xinhua portion of the gigaword corpus , which contains 238m english words .
key phrases tend to have close semantics to the title phrases .
as a matter of fact , key phrases often have close semantics to title phrases .
relation extraction is a challenging task in natural language processing .
relation extraction is the task of recognizing and extracting relations between entities or concepts in texts .
we used a manually created list of definitively positive and negative words and an automatically generated list of words and their associated sentiment polarities in the sentiment140 lexicon .
we used the sentiment lexicon provided by opinion-lexicon and a list of sentiment hashtags provided by the nrc hashtag sentiment lexicon .
marton et al use a monolingual text on the source side to find paraphrases to oov words for which the translations are available .
callison-burch et al and marton et al augmented the translation phrase table with paraphrases to translate unknown phrases .
some known systems for mapping free text to umls are saphire , metamap , indexfinder , and nip .
some prominent systems to map free text to umls include saphire , metamap , indexfinder , and nip .
in summary , we rely for most but not all languages on the tokenization and sentence splitting provided by the udpipe baseline .
for sentence segmentation and tokenization , we rely on the udpipe predicted data files .
curran and lin use syntactic features in the vector definition .
pereira and lin use syntactic features in the vector definition .
document plans are induced automatically from training data and are represented intuitively by pcfg rules .
content plans are represented intuitively by a set of grammar rules that operate on the document level and are acquired automatically from training data .
the target-normalized hierarchical phrase-based model is based on a more general hierarchical phrase-based model .
hiero is a hierarchical phrase-based statistical mt framework that generalizes phrase-based models by permitting phrases with gaps .
to evaluate the performance for different feature dimensions , we use chi-squared feature selection algorithm to select 10k and 30k features .
thus , we perform feature selection by using the bayesian information criterion to reduce noise and improve the performance .
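The chi-squared feature selection mentioned above scores each term by how strongly its presence correlates with a class. A minimal sketch (function names are illustrative, not from any cited system) over 2x2 contingency counts:

```python
def chi_squared(n11, n10, n01, n00):
    """Chi-squared statistic for a 2x2 term/class contingency table.

    n11: docs in the class containing the term
    n10: docs outside the class containing the term
    n01: docs in the class without the term
    n00: docs outside the class without the term
    """
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n01) * (n11 + n10) * (n10 + n00) * (n01 + n00)
    return num / den if den else 0.0

def select_features(term_stats, k):
    """Keep the k terms with the highest chi-squared score.

    term_stats maps term -> (n11, n10, n01, n00).
    """
    ranked = sorted(term_stats, key=lambda t: chi_squared(*term_stats[t]),
                    reverse=True)
    return ranked[:k]
```

A term that occurs only in one class gets a high score; a term spread evenly across classes scores near zero.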
ccg is a linguistically motivated categorial formalism for modeling a wide range of language phenomena .
ccg is a linguistically-motivated categorial formalism for modeling a wide range of language phenomena .
we used the srilm toolkit to train a 4-gram language model on the xinhua portion of the gigaword corpus , which contains 238m english words .
we trained a 4-gram language model on the xinhua portion of gigaword corpus using the sri language modeling toolkit with modified kneser-ney smoothing .
abstract meaning representation is a semantic formalism that expresses the logical meanings of english sentences in the form of a directed , acyclic graph .
abstract meaning representation is a semantic formalism where the meaning of a sentence is encoded as a rooted , directed graph .
we evaluate our approach on the basis of nyt10 , a dataset developed by and then widely used in distantly supervised relation extraction .
we evaluate our model on a widely used dataset 1 which is developed by and has also been used by .
collins et al ( 2005 ) analyze german clause structure and propose six types of rules for transforming german parse trees with respect to english word order .
collins et al described six types of transforming rules to reorder the german clauses in german-to-english translation .
we apply this technique to parser adaptation .
here we apply this technique to parser adaptation .
we used the logistic regression implemented in the scikit-learn library with the default settings .
we use logistic regression with l2 regularization , implemented using the scikit-learn toolkit .
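L2-regularized logistic regression, as used above via scikit-learn, optimizes cross-entropy loss plus a squared-weight penalty. A hedged pure-Python sketch by plain gradient descent (scikit-learn's LogisticRegression solves the same objective with better optimizers; all names and hyperparameters here are illustrative):

```python
import math

def train_logreg(xs, ys, l2=0.01, lr=0.5, epochs=500):
    """Binary logistic regression with L2 regularization (sketch)."""
    dim = len(xs[0])
    w, b = [0.0] * dim, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = [l2 * wi for wi in w]  # gradient of the L2 penalty
        gb = 0.0
        for x, y in zip(xs, ys):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1 / (1 + math.exp(-z))          # sigmoid
            for i, xi in enumerate(x):
                gw[i] += (p - y) * xi / n       # cross-entropy gradient
            gb += (p - y) / n
        w = [wi - lr * gi for wi, gi in zip(w, gw)]
        b -= lr * gb
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0
```

The L2 term keeps weights bounded even on separable data, where unregularized logistic regression would drive them to infinity.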
in this paper , we propose to translate videos directly to sentences using a unified deep neural network .
in this paper , we propose to translate from video pixels to natural language with a single deep neural network .
the phrase structure trees produced by the parser are further processed with the stanford conversion tool to create dependency graphs .
these features consist of parser dependencies obtained from the stanford dependency parser for the context of the target word .
usually , such methods need intermediary machine translation system or a bilingual dictionary to bridge the language gap .
most often these methods depend on an intermediary machine translation system or a bilingual dictionary to bridge the language gap .
doing this required us to use the dynamic oracle of goldberg and nivre during training in order to produce configurations that exercise the non-monotonic transitions .
we achieve this by following goldberg and nivre in using a dynamic oracle to create partially labelled training data .
sentiment analysis is a multi-faceted problem .
sentiment analysis is an nlp task that deals with extraction of opinion from a piece of text on a topic .
we built a 5-gram language model from it with the sri language modeling toolkit .
we use the srilm toolkit to compute our language models .
we report decoding speed and bleu score , as measured by sacrebleu .
we measure translation quality via the bleu score .
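BLEU, referenced throughout these sentences, is a geometric mean of modified n-gram precisions times a brevity penalty. A minimal single-reference, sentence-level sketch (real evaluations use corpus-level counts and fixed tokenization, e.g. sacreBLEU; the smoothing constant here is an assumption):

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU sketch over token lists."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
        # clipped (modified) n-gram matches
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # smooth zero counts
    # brevity penalty for candidates shorter than the reference
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

An identical candidate scores 1.0; shorter or partially matching candidates are penalized by both the precisions and the brevity term.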
keyphrase extraction is a fundamental task in natural language processing that facilitates mapping of documents to a set of representative phrases .
keyphrase extraction is a natural language processing task for collecting the main topics of a document into a list of phrases .
word re-embedding based on manifold learning can help the original space .
re-embedding the space using a manifold learning stage can rectify this .
okazaki et al proposed an approach to improve the chronological sentence ordering method by using precedence relation technology .
okazaki et al propose a metric that assesses continuity of pairwise sentences compared with the gold standard .
previous work by koo et al and miller et al suggests that different levels of cluster granularity may be useful in natural language processing tasks with discriminative training .
looking at learning curves , koo et al show that the use of word clusters can also be used to compensate for reduced training data for the parser .
turney and littman compute the point wise mutual information of the target term with each seed positive and negative term as a measure of their semantic association .
turney and littman calculate the pointwise mutual information of a given word with positive and negative sets of sentiment words .
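The Turney and Littman approach mentioned above scores a word's polarity by its pointwise mutual information with positive versus negative seed words. A hedged sketch using same-document co-occurrence counts (the original work estimated PMI from web hit counts; the function and seed handling here are illustrative):

```python
import math

def pmi_polarity(word, texts, pos_seeds, neg_seeds):
    """Polarity = sum of PMI(word, seed) over positive seeds
    minus the same sum over negative seeds. texts is a list of
    token sets; co-occurrence means appearing in the same text."""
    n = len(texts)
    word_count = sum(word in t for t in texts)

    def pmi(seed):
        joint = sum(word in t and seed in t for t in texts)
        seed_count = sum(seed in t for t in texts)
        if not (joint and word_count and seed_count):
            return 0.0  # no evidence: contribute nothing
        return math.log2(joint * n / (word_count * seed_count))

    return sum(pmi(s) for s in pos_seeds) - sum(pmi(s) for s in neg_seeds)
```

A word co-occurring mostly with positive seeds gets a positive score, and vice versa.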
evaluation results show that the proposed procedure can achieve competitive performance in terms of bleu score and slot error rate .
the results show that the proposed adaptation recipe improves not only the objective scores but also the user's perceived quality of the system .
subjectivity feature can significantly improve the accuracy of a word sense disambiguation system .
adding subjectivity labels to wordnet could also support automatic subjectivity analysis .
in this work , we apply a standard phrase-based translation system .
we implement our approach in the framework of phrase-based statistical machine translation .
a trigram language model with modified kneser-ney discounting and interpolation was used as produced by the srilm toolkit .
a 5-gram language model with kneser-ney smoothing was trained with srilm on monolingual english data .
in a similar vein , hashtags can also serve as noisy labels .
hashtags have also been used as noisy sentiment labels .
coreference resolution is a set partitioning problem in which each resulting partition refers to an entity .
coreference resolution is a multi-faceted task : humans resolve references by exploiting contextual and grammatical clues , as well as semantic information and world knowledge , so capturing each of these will be necessary for an automatic system to fully solve the problem .
we use a standard long short-term memory model to learn the document representation .
we process the embedded words through a multi-layer bidirectional lstm to obtain contextualized embeddings .
sutton et al presented dynamic conditional random fields , a generalization of the traditional linear-chain crf that allow representation of interaction among labels .
mccallum et al , sutton et al proposed dynamic conditional random fields , the generalization of linear-chain crfs , that have complex graph structure .
to train our model we use markov chain monte carlo sampling .
we use markov chain monte carlo as an alternative to dp search .
with the best embeddings , our system was ranked third in the scenario .
with the best embeddings , our system was ranked third in scenario 1 with a micro f1 score of 0.38 .
we use the logistic regression implementation of liblinear wrapped by the scikit-learn library .
we use a random forest classifier , as implemented in scikit-learn .
this paper proposes a history-based structured learning approach that jointly extracts entities and relations .
in this paper , we proposed a history-based structured learning approach that jointly detects entities and relations .
shen et al and mi and liu develop a generative dependency language model for string-to-dependency and tree-to-tree models .
shen et al proposed a string-to-dependency target language model to capture long distance word orders .
in this paper we presented a new model for unsupervised relation extraction which operates over tuples .
in this paper we present an unsupervised approach to relational information extraction .
to calculate the constituent-tree kernels st and sst we used the svm-light-tk toolkit .
we used svm-light-tk , which enables the use of the partial tree kernel .
foltz et al used latent semantic analysis to compute a coherence value for texts .
foltz et al use latent semantic analysis to model the smoothness of transitions between adjacent segments of an essay .
since unification is a non-directional operation , we are able to treat forward as well as backward reference .
unification is a central operation in recent computational linguistic research .
heintz et al and strzalkowski et al focused on modeling topical structure of text to identify metaphor .
strzalkowski et al acquired a set of topic chains by linking semantically related words in a given text .
the integrated dialect classifier is a maximum entropy model that we train using the liblinear toolkit .
we use a standard maximum entropy classifier implemented as part of mallet .
the system used a tri-gram language model built from sri toolkit with modified kneser-ney interpolation smoothing technique .
the language model was a 5-gram language model estimated on the target side of the parallel corpora by using the modified kneser-ney smoothing implemented in the srilm toolkit .
we use srilm for training the 5-gram language model with interpolated modified kneser-ney discounting .
a 5-gram language model with kneser-ney smoothing is trained using srilm on the target language .
senses of a real ambiguous word have been modeled by picking out the most similar monosemous morpheme from a chinese hierarchical lexicon .
lu et al modeled senses of a real ambiguous word by picking out the most similar monosemous morpheme from a chinese hierarchical lexicon .
the dialog manager is a nuance proprietary tool inspired by ravenclaw .
the dialogue manager is based on the ravenclaw framework .
we have used the srilm with kneser-ney smoothing for training a language model for the first stage of decoding .
our translation model is implemented as an n-gram model of operations using the srilm toolkit with kneser-ney smoothing .
we use the rmsprop optimization algorithm to minimize the mean squared error loss function over the training data .
we use the rmsprop optimization algorithm to minimize a loss function over the training data .
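RMSprop, used above to minimize the training loss, scales each gradient step by a running root-mean-square of recent gradients. A minimal scalar sketch of the update rule (real training uses a framework implementation; the hyperparameter values below are just common defaults):

```python
def rmsprop(grad, x0, lr=0.01, decay=0.9, eps=1e-8, steps=1000):
    """Scalar RMSprop sketch: grad is the gradient function,
    x0 the initial parameter value."""
    x, v = x0, 0.0
    for _ in range(steps):
        g = grad(x)
        v = decay * v + (1 - decay) * g * g   # running mean of squared grads
        x -= lr * g / (v ** 0.5 + eps)        # gradient scaled by its RMS
    return x
```

Because the step is normalized by the gradient's recent magnitude, progress is roughly uniform regardless of how steep the loss surface is.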
coreference resolution is a key task in natural language processing ( cite-p-13-1-8 ) aiming to detect the referential expressions ( mentions ) in a text that point to the same entity .
coreference resolution is the task of partitioning the set of mentions of discourse referents in a text into classes ( or ‘ chains ’ ) corresponding to those referents ( cite-p-12-3-14 ) .
faruqui and dyer uses canonical correlation analysis that maps words from two different languages in to a common , shared space .
multi-cca is an extension of faruqui and dyer , performing canonical correlation analysis for multiple languages using english as the pivot .
we extend the constrained lattice training of tackstrom et al . ( 2013 ) to non-linear conditional random fields .
we also extend the constrained lattice training method of täckström et al . ( 2013 ) from linear crfs to non-linear crfs .
in this paper , we propose a new approach to obtain temporal relations from absolute time value ( a.k.a. time anchors ) , which is suitable for texts containing rich temporal information .
in this work , we propose a new approach to obtain temporal relations from time anchors , i.e . absolute time value , of all mentions .
language modeling is the task of estimating the probability of sequences of words in a language and is an important component in , among other applications , automatic speech recognition ( rabiner and juang , 1993 ) and machine translation ( cite-p-25-3-17 ) .
language modeling is a fundamental task in natural language processing and is routinely employed in a wide range of applications , such as speech recognition , machine translation , etc .
future work will consider joint models of discourse structure and coreference .
future work should therefore consider joint models of discourse analysis and coreference resolution .
pang and lee have combined polarity and subjectivity analysis and proposed a technique to filter out objective sentences of movie reviews based on finding minimum cuts in graphs .
pang and lee propose a graph-based method which finds minimum cuts in a document graph to classify the sentences into subjective or objective .
that would be useful for creation of a reusable , human-readable category .
these again are too scattered to be appropriate for a human-readable index into a document collection .
the documents were tokenized , chunked , and labeled with irex 8 named entity types , and transformed into context features .
the documents were tokenized by jtag , chunked , and labeled with irex 8 named entity types by crfs using minimum classification error rate , and transformed into features .
we perform the mert training to tune the optimal feature weights on the development set .
we optimise the feature weights of the model with minimum error rate training against the bleu evaluation metric .
word alignment is the task of identifying word correspondences between parallel sentence pairs .
word alignment is a fundamental problem in statistical machine translation .
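The simplest statistical word-alignment model is IBM Model 1, trained by EM over parallel sentence pairs. A hedged pure-Python sketch (no NULL word or distortion modeling; function name and uniform-style initialization are illustrative):

```python
from collections import defaultdict

def ibm_model1(pairs, iters=10):
    """EM for IBM Model 1 translation probabilities t(f | e).
    pairs is a list of (source_tokens, target_tokens) tuples."""
    t = defaultdict(lambda: 1.0)  # flat initialization
    for _ in range(iters):
        count = defaultdict(float)
        total = defaultdict(float)
        # E-step: distribute each target word's count over source words
        for es, fs in pairs:
            for f in fs:
                z = sum(t[(f, e)] for e in es)
                for e in es:
                    c = t[(f, e)] / z
                    count[(f, e)] += c
                    total[e] += c
        # M-step: renormalize
        t = defaultdict(float,
                        {(f, e): count[(f, e)] / total[e] for (f, e) in count})
    return t
```

On the classic two-sentence toy corpus, EM disambiguates the shared word pair even though every alignment starts equally likely.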
with word class models , the baseline can be improved by up to 1.4 % bleu and 1.0 % ter on the french→german task and 0.3 % bleu and 1.1 % ter on the german→english task .
by using word class models , we can improve our respective baselines by 1.4 % bleu and 1.0 % ter on the french→german task and 0.3 % bleu and 1.1 % ter on the german→english task .
mcclosky et al applied the method later on english out-of-domain texts which show good accuracy gains too .
mcclosky et al applied the method later on out-of-domain texts which show good accuracy gains too .
this study is called morphological analysis .
morphological analysis is a staple of natural language processing across a broad range of languages .
we present a novel relational learning framework that learns entity and relationship .
we design a novel objective that leverage entity linkage and build an efficient multi-task training procedure .
that replaces all rare words with an unknown word symbol .
all of them eliminate the need to replace rare words with the unknown word symbol .
in this task , we use the 300-dimensional 840b glove word embeddings .
we used 100 dimensional glove embeddings for this purpose .
we used standard classifiers available in scikit-learn package .
we used the svm implementation of scikit learn .
comma splices are one of the errors addressed in the 2014 conll shared task on grammatical error correction .
run-on sentences and comma splices were among the 28 error types introduced in the conll-2014 shared task .
we evaluate the performance of different translation models using both bleu and ter metrics .
to measure the translation quality , we use the bleu score and the nist score .
words and phrases are taken from three domains : general english , english twitter , and arabic twitter .
the words and phrases are taken from three domains : general english , english twitter , and arabic twitter .
this paper has described a stacked subword model for joint chinese .
this paper describes a novel stacked subword model .
we are the first to suggest a general semi-supervised protocol that is driven by soft constraints .
in this paper , we suggest a method for incorporating domain knowledge in semi-supervised learning algorithms .
in this paper , we describe our experience with automatic alignment of sentences in parallel english-chinese texts .
we describe our experience with automatic alignment of sentences in parallel english-chinese texts .
other terms used in the literature include implied meanings , implied alternatives and semantically similar .
other terms used in the literature include implied meanings , implied alternatives and semantically similar .
we ran the decoder in a single pass using crossword acoustic modeling and a trigram wordbased backoff model built with the cmu toolkit .
we built a trigram language model smoothed with absolute discounting using the cmu-slm toolkit .
we use the rouge toolkit for evaluation of the generated summaries in comparison to the gold summaries .
we use the rouge 1 to evaluate our framework , which has been widely applied for summarization evaluation .
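ROUGE-1, cited above for summarization evaluation, reduces to unigram overlap between candidate and reference summaries. A bare sketch of the computation (the official ROUGE package adds stemming and stopword options not shown here):

```python
from collections import Counter

def rouge_1(candidate, reference):
    """ROUGE-1 recall, precision and F1 over token lists."""
    cand, ref = Counter(candidate), Counter(reference)
    # clipped unigram matches
    overlap = sum(min(c, ref[w]) for w, c in cand.items())
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(cand.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return recall, precision, f1
```

Recall rewards covering the reference's content words; precision penalizes padding the candidate with extra words.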
dependency parsing is a standard task in the nlp community .
dependency parsing and semantic role labeling are two standard tasks in the nlp community .
as a statistical significance test , we used bootstrap resampling .
we performed significance testing using paired bootstrap resampling .
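Paired bootstrap resampling, the significance test named above, repeatedly resamples the test set with replacement and checks how often the apparent winner fails to win. A minimal sketch over per-sentence scores (decomposable metrics only; corpus-level BLEU needs resampled sufficient statistics instead):

```python
import random

def paired_bootstrap(scores_a, scores_b, samples=1000, seed=0):
    """Fraction of resampled test sets on which system A does not
    beat system B -- a one-sided p-value-style estimate.
    scores_a and scores_b are parallel per-sentence score lists."""
    rng = random.Random(seed)
    n = len(scores_a)
    worse = 0
    for _ in range(samples):
        idx = [rng.randrange(n) for _ in range(n)]  # resample with replacement
        if sum(scores_a[i] for i in idx) <= sum(scores_b[i] for i in idx):
            worse += 1
    return worse / samples
```

A small returned value (e.g. below 0.05) suggests system A's advantage is unlikely to be an artifact of the particular test set.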
in this paper , we present the lth coreference solver used in the closed track of the conll 2012 shared task .
in this paper , our coreference resolution system for conll-2012 shared task is summarized .
we measure machine translation performance using the bleu metric .
we measure the translation quality with automatic metrics including bleu and ter .
to smt , this paper proposes a novel , probabilistic approach to reordering which combines the merits of syntax and phrase-based smt .
this paper proposes a novel , probabilistic approach to reordering which combines the merits of syntax and phrase-based smt .
yarowsky presented an approach that significantly reduces the amount of labeled data needed for word sense disambiguation .
yarowsky proposes a method for word sense disambiguation , which is based on monolingual bootstrapping .
stance detection is the task of automatically determining from text whether the author of the text is in favor of , against , or neutral towards a proposition or target .
stance detection is the task of assigning stance labels to a piece of text with respect to a topic , i.e . whether a piece of text is in favour of “ abortion ” , neutral , or against .
sennrich et al introduced an effective approach based on encoding rare and out-of-vocabulary words as sequences of subword units .
sennrich et al introduced a subword-level nmt model using subword-level segmentation based on the byte pair encoding algorithm .
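The byte pair encoding segmentation referenced above learns a merge list by repeatedly fusing the most frequent adjacent symbol pair. A hedged sketch of the merge-learning loop (Sennrich et al.'s full algorithm also uses an end-of-word marker and operates on a frequency-weighted vocabulary file):

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """Learn BPE merges from a list of words; returns the ordered
    list of merged symbol pairs."""
    vocab = Counter(tuple(w) for w in words)  # each word as a symbol tuple
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for sym, freq in vocab.items():
            for a, b in zip(sym, sym[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # apply the merge everywhere in the vocabulary
        new_vocab = Counter()
        for sym, freq in vocab.items():
            out, i = [], 0
            while i < len(sym):
                if i + 1 < len(sym) and (sym[i], sym[i + 1]) == best:
                    out.append(sym[i] + sym[i + 1])
                    i += 2
                else:
                    out.append(sym[i])
                    i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab
    return merges
```

Frequent stems become single symbols early, while rare words stay split into smaller reusable units.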
in this paper , we propose a syllable-based method for tweet normalization .
in this paper , a syllable-based tweet normalization method is proposed for social media text normalization .