sentence1 (string, lengths 16-446) | sentence2 (string, lengths 14-436) |
---|---|
there already exist quite extensive implemented formal hpsg grammars for english , spanish , german , and japanese . | in hpsg , the most extensive grammars are those of english , german , and japanese . |
further , the word embeddings are initialized with glove , and not tied with the softmax weights . | the pretrained word embeddings are from glove , and the word embedding dimension d w is 300 . |
in order to strive for a model with high explanatory value , we use a linear regression model with lasso regularization . | in order to strive for a model with high explanatory value , we use linear regression , with l1 regularization . |
word sense disambiguation ( wsd ) is a problem long recognised in computational linguistics ( yngve 1955 ) and there has been a recent resurgence of interest , including a special issue of this journal devoted to the topic ( cite-p-27-8-11 ) . | word sense disambiguation ( wsd ) is a task to identify the intended sense of a word based on its context . |
predictions of individual features can then be combined according to their predictive strength , resulting in a model , whose parameters can be reliably and efficiently estimated . | then the predictions made by individual features can be combined into a mixture model , in which the prediction of each feature is weighted according to its predictive strength . |
we trained a 4-gram language model with kneser-ney smoothing and unigram caching using the sri-lm toolkit . | we used a 5-gram language model with modified kneser-ney smoothing , built with the srilm toolkit . |
in addition , we utilize the pre-trained word embeddings with 300 dimensions from for initialization . | we use the 100-dimensional pre-trained word embeddings trained by word2vec 2 and the 100-dimensional randomly initialized pos tag embeddings . |
we use publicly-available 1 300-dimensional embeddings trained on part of the google news dataset using skip-gram with negative sampling . | we use the publicly available 300-dimensional word vectors of mikolov et al , trained on part of the google news dataset . |
for our purpose we use word2vec embeddings trained on a google news dataset and find the pairwise cosine distances for all words . | we first train a word2vec model on fr-wikipedia 11 to obtain non contextual word vectors . |
for improving shift-reduce parsing , we propose a novel neural model to predict the constituent hierarchy . | we proposed a novel constituent hierarchy predictor based on recurrent neural networks , aiming to capture global sentential information . |
we use a java implementation 2 of svm from liblinear , with the original parameter values used by the nrc canada system . | we use the logistic regression implementation of liblinear wrapped by the scikit-learn library . |
to peer reviews , we will thus propose new specialized features to address these issues . | in addition , we investigate the utility of incorporating additional specialized features tailored to peer review . |
we represent terms using pre-trained glove wikipedia 6b word embeddings . | for the word-embedding based classifier , we use the glove pre-trained word embeddings . |
word sense disambiguation is the process of determining which sense of a homograph is correct in a given context . | word sense disambiguation is the process of determining which sense of a word is used in a given context . |
in this paper , we propose a model to measure the similarity of a sentence pair . | in this paper , we focus on solving spm problem by measuring semantic similarity between two sentences . |
meanwhile , we adopt glove pre-trained word embeddings 5 to initialize the representation of input tokens . | in a second baseline model , we also incorporate 300-dimensional glove word embeddings trained on wikipedia and the gigaword corpus . |
zeng et al proposed piecewise convolution neural networks . | zeng et al proposed the first neural relation extraction with distant supervision . |
we adapt the minimum error rate training algorithm to estimate parameters for each member model in co-decoding . | we perform minimum-error-rate training to tune the feature weights of the translation model to maximize the bleu score on development set . |
we extend the original binary adversarial training to multi-class , which not only enables multiple tasks to be jointly trained . | moreover , we extend binary adversarial training to multi-class , which enable multiple tasks to be jointly trained . |
in table 6 we show a more fine-grained breakdown inspired by a similar analysis in durrett and klein . | in table 3 we examine , using an analysis similar to that in durrett and klein , where the unpipelined models go wrong . |
work leads to significant improvement on parsing accuracy . | all above work leads to significant improvement on parsing accuracy . |
coreference resolution is the task of grouping all the mentions of entities 1 in a document into equivalence classes so that all the mentions in a given class refer to the same discourse entity . | coreference resolution is a challenging task , that involves identification and clustering of noun phrases mentions that refer to the same real-world entity . |
in semisupervised parsing , it is a promising strategy to project the dependency structures from a resource-rich language to a resource-scarce one . | this provides a new strategy for resource-scarce languages to train high-precision dependency parsers . |
twitter is a microblogging site where people express themselves and react to content in real-time . | twitter is a popular microblogging service , which , among other things , is used for knowledge sharing among friends and peers . |
for this purpose , we turn to the expectation maximization algorithm . | thus , we propose a new approach based on the expectation-maximization algorithm . |
taglda is a representative latent topic model by extending latent dirichlet allocation . | a widely used topic modeling method is the latent dirichlet allocation model , which is proposed by blei . |
but an event is usually expressed by multiple sentences in one document . | but an event is usually expressed with multiple sentences in a document . |
the penn discourse treebank is another annotated discourse corpus . | the penn discourse treebank is a large corpus annotated with discourse relations , . |
we then present simple concave models for dependency grammar induction that are easy to implement . | we then present concave models for dependency grammar induction and validate them experimentally . |
xiao et al present a topic similarity model based on lda that produces a feature that weights grammar rules based on topic compatibility . | xiao et al introduce a topic similarity model to select the synchronous rules for hierarchical phrase-based translation . |
in the next step , the distribution of each word in the base corpus is compared to the distribution of the same noun in a reference corpus using the log-likelihood ratio . | in the next step , the distribution of each noun in the base corpus is compared to the distribution of the same noun in a reference corpus 3 using the log-likelihood ratio . |
the semantic roles in the example are labeled in the style of propbank , a broad-coverage human-annotated corpus of semantic roles and their syntactic realizations . | the semantic roles in the examples are labeled in the style of propbank , a broadcoverage human-annotated corpus of semantic roles and their syntactic realizations . |
xue et al proposed to linearly mix two different estimations by combining language model and translation model into a unified framework , called trlm . | xue et al proposed to linearly mix two different estimations by combining language model and word-based translation model into a unified framework , called translm . |
klementiev et al use a multitask learning framework to encourage the word representations learned by neural language models to agree cross-lingually . | klementiev et al treat the task as a multi-task learning problem where each task corresponds to a single word , and task relatedness is derived from co-occurrence statistics in bilingual parallel data . |
the penn discourse treebank corpus is the best-known resource for obtaining english connectives . | one of the most important resources for discourse connectives in english is the penn discourse treebank . |
blitzer et al experimented with structural correspondence learning , which focuses on finding frequently occurring pivot features that occur commonly across domains in the unlabeled data but equally characterize source and target domains . | blitzer et al proposed structural correspondence learning to identify the correspondences among features between different domains via the concept of pivot features . |
we used the stanford corenlp tools for lemmatization , pos tagging and parsing , and created a svm classifier with a linear kernel using svmlin . | we first removed all sgml mark-up , and performed sentence-breaking and tokenization using the stanford corenlp toolkit . |
the target-side language models were estimated using the srilm toolkit . | the language models in this experiment were trigram models with good-turing smoothing built using srilm . |
we experimented using the standard phrase-based statistical machine translation system as implemented in the moses toolkit . | we used the open source moses phrase-based mt system to test the impact of the preprocessing technique on translation results . |
background-knowledge-based topics generated from the wikipedia relation repository can significantly improve the performance over the state-of-the-art relation detection approaches . | by leveraging the knowledge extracted from the wikipedia relation repository , our approach significantly improves the performance over the state-of-the-art approaches on ace data . |
we prove a number of useful results about probabilistic context-free grammars ( pcfgs ) . | this article proves a number of useful properties of probabilistic context-free grammars ( pcfgs ) . |
we also extract subject-verbobject event representations , using the stanford partof-speech tagger and maltparser . | we use stanford log-linear partof-speech tagger to produce pos tags for the english side . |
bracketing transduction grammar is a special case of synchronous context free grammar . | bracketing transduction grammar is a binary and simplified synchronous context-free grammar with only one non-terminal symbol . |
discourse parsing is a fundamental task in natural language processing that entails the discovery of the latent relational structure in a multi-sentence piece of text . | discourse parsing is the process of discovering the latent relational structure of a long form piece of text and remains a significant open challenge . |
we split each document into sentences using the sentence tokenizer of the nltk toolkit . | we followed the xml schema of the npschat corpus provided with the nltk in marking-up the corpus . |
relation extraction is the task of tagging semantic relations between pairs of entities from free text . | relation extraction is the task of finding relationships between two entities from text . |
liu et al proposed a context-sensitive rnn model that uses latent dirichlet allocation to extract topic-specific word embeddings . | lin et al proposes a hierarchical recurrent neural network language model to consider sentence history information in word prediction . |
in that it allows situated cues , such as the set of visible objects , to directly influence parsing and learning . | the joint nature provides crucial benefits by allowing situated cues , such as the set of visible objects , to directly influence learning . |
we suggest a simple , supervised character-level string transduction model which easily incorporates features automatically learned from large amounts of unlabeled data . | we propose a novel text normalization model based on learning edit operations from labeled data while incorporating features induced from unlabeled data via character-level neural text embeddings . |
we investigate a graph based semi-supervised learning algorithm , a label propagation ( lp ) algorithm , for relation extraction . | here we investigate a label propagation algorithm ( lp ) ( cite-p-16-3-4 ) for relation extraction task . |
these sentence examples were blog articles in the balanced corpus of contemporary written japanese core data . | the pool is 747 blog sentences 5 from the balanced corpus of contemporary written japanese . |
word segmentation is a prerequisite for many natural language processing ( nlp ) applications on those languages that have no explicit space between words , such as arabic , chinese and japanese . | therefore , word segmentation is a preliminary and important preprocess for chinese language processing . |
the scarcity of such corpora , in particular for specialized domains and for language pairs not involving english , pushed researchers to investigate the use of comparable corpora . | the scarcity of such corpora in particular for specialized domains and for language pairs not involving english pushed researchers to investigate the use of comparable corpora . |
le and mikolov introduce paragraph vector to learn document representation from semantics of words . | le and mikolov extended the word embedding learning model by incorporating paragraph information . |
parameters are updated through backpropagation with adagrad for speeding up convergence . | we use mini-batch update and adagrad to optimize the parameter learning . |
as wikidata did not exist at that time , the authors relied on the structured infoboxes included in some wikipedia articles . | as wikidata did not exist at that time , the authors relied on the structured infoboxes included in some wikipedia articles for a relational representation of wikipedia content . |
chung and gildea reported work on just detecting just a small subset of the empty categories posited in the chinese treebank . | chung and gildea reported their recover of empty categories improved the accuracy of machine translation both in korean and in chinese . |
we implemented the different aes models using scikit-learn . | we used the svm implementation provided within scikit-learn . |
we use different pretrained word embeddings such as glove 1 and fasttext 2 as the initial word embeddings . | we use the glove vectors of 300 dimension to represent the input words . |
our phrase-based smt system is similar to the alignment template system described in och and ney . | our smt-based query expansion techniques are based on a recent implementation of the phrasebased smt framework . |
feature weights are optimized using the lattice-based variant of mert on either wmt10 or mt08 . | the log linear weights for the baseline systems are optimized using mert provided in the moses toolkit . |
word embeddings have been empirically shown to preserve linguistic regularities , such as the semantic relationship between words . | in particular , the cooccurrence based embeddings of words in a corpus has been demonstrated to encode meaningful semantic relationships between them . |
relation classification is the task of identifying the semantic relation holding between two nominal entities in text . | relation classification is a crucial ingredient in numerous information extraction systems seeking to mine structured facts from text . |
extractive summarization is a widely used approach to designing fast summarization systems . | extractive summarization is a sentence selection problem : identifying important summary sentences from one or multiple documents . |
we apply a composite regularizer that drives entire rows of the coefficient matrix to zero , yielding compact , interpretable models . | by imposing a composite ℓ 1 , ∞ regularizer , we obtain structured sparsity , driving entire rows of coefficients to zero . |
we have addressed here the problem of classification . | we address here the problem of word translation disambiguation . |
in this paper , we propose a novel attentional nmt with source dependency representation . | we proposed a novel attentional nmt with source dependency representation to capture source long-distance dependencies . |
brill et al exploited non-aligned monolingual web search engine query logs to acquire katakana -english transliteration pairs . | brill et al applied this model for extracting katakana-english transliteration pairs from query logs . |
word segmentation is a fundamental task for processing most east asian languages , typically chinese . | word segmentation is a classic bootstrapping problem : to learn words , infants must segment the input , because around 90 % of the novel word types they hear are never uttered in isolation ( cite-p-13-1-0 , cite-p-13-3-8 ) . |
the language model component uses the srilm lattice-tool for weight assignment and nbest decoding . | the language model was constructed using the srilm toolkit with interpolated kneser-ney discounting . |
kalchbrenner et al introduced a convolutional neural network for sentence modeling that uses dynamic k-max pooling to better model inputs of varying sizes . | collobert et al , kalchbrenner et al , and kim use convolutional networks to deal with varying length sequences . |
a tat is capable of generating both terminals and nonterminals and performing reordering . | a tat is capable of generating both terminals and nonterminals and performing reordering at both low and high levels . |
on bsus is constructed to capture the semantic information of texts . | in this work , the semantic nodes are bsus extracted from texts . |
that does not require dependency parsing . | crucially , this does not require parsing documents . |
we assume that a morphological analysis consists of three processes : tokenization , dictionary lookup , and disambiguation . | that is , since the morphological analysis is the first-step in most nlp applications , the sentences with incorrect word spacing must be corrected for their further processing . |
for input representation , we used glove word embeddings . | for representing words , we used 100 dimensional pre-trained glove embeddings . |
we pre-train the word embedding via word2vec on the whole dataset . | we train the cbow model with default hyperparameters in word2vec . |
to address the above problems , liu et al propose to use forest-to-string rules to enhance the expressive power of their tree-to-string model . | liu et al proposed forest-to-string rules to capture the non-syntactic phrases in their tree-to-string model . |
all the weights of those features are tuned by using minimal error rate training . | the nnlm weights are optimized as the other feature weights using minimum error rate training . |
after standard preprocessing of the data , we train a 3-gram language model using kenlm . | the language model storage of target language uses the implementation in kenlm which is trained and queried as a 5-gram model . |
we pre-train the word embedding via word2vec on the whole dataset . | we use word2vec to train the word embeddings . |
lan et al , 2013 , present a multi-task learning based system which can effectively use synthetic data for implicit discourse relation recognition . | lan et al present a multi-task learning framework , using explicit relation identification as auxiliary tasks to help main task on implicit relation identification . |
context words such as finger and arm are typical of the hand meaning of palm , whereas coconut and oil are typical of its tree meaning . | for example , context words such as finger and arm are typical of the hand meaning of palm , whereas coconut and oil are typical of its tree meaning . |
refinement models have linear time complexity in set size allowing for practical online use in set expansion systems . | both proposed refinement models have linear time complexity in set size allowing for practical online use in set expansion systems . |
sun and xu explored several statistical features derived from both unlabeled data to help improve character-based word segmentation . | sun and xu utilized the features derived from large-scaled unlabeled text to improve chinese word segmentation . |
the first is the so-pmi method described in turney and littman . | the second alternative uses the semi-supervised lsa-based method of turney and littman . |
we develop translation models using the phrase-based moses smt system . | we use moses , a statistical machine translation system that allows training of translation models . |
training is done through stochastic gradient descent over shuffled mini-batches with adadelta update rule . | training is done using stochastic gradient descent over mini-batches with the adadelta update rule . |
the systems were automatically evaluated using bleu on held-out evaluation sets . | each system is optimized using mert with bleu as an evaluation measure . |
we used the svm-light-tk 5 to train the reranker with a combination of tree kernels and feature vectors . | svm-light-tk 5 is used to train the reranker with a combination of tree kernels and feature vectors . |
using well calibrated probabilities helps in estimating the sense priors . | using sense priors estimated by logistic regression further improves performance . |
we represent each word by a vector with length 300 . | we use the glove word vector representations of dimension 300 . |
sentiment classification is a well-studied and active research area ( cite-p-20-1-11 ) . | sentiment classification is a task of predicting sentiment polarity of text , which has attracted considerable interest in the nlp field . |
we pre-train the word embeddings using word2vec . | then , we trained word embeddings using word2vec . |
callison-burch et al tackle the problem of unseen phrases in smt by adding source language paraphrases to the phrase table with appropriate probabilities . | callison-burch et al used pivot languages for paraphrase extraction to handle the unseen phrases for phrase-based smt . |
the development of the speaker-dependent arm system is described in detail in . | an overview of the development of the speaker-dependent alum system is presented in . |
the skip-gram model adopts a neural network structure to derive the distributed representation of words from textual corpus . | we derive 100-dimensional word vectors using word2vec skip-gram model trained over the domain corpus . |
coreference resolution is the process of finding discourse entities ( markables ) referring to the same real-world entity or concept . | coreference resolution is a well known clustering task in natural language processing . |
other popular , pre-trained word embeddings include glove , word2vec over twitter , and fasttext . | the most common word embeddings used in deep learning are word2vec , glove , and fasttext . |
this problem has been studied by jelinek and lafferty and by stolcke . | efficient algorithms for its solution have been proposed by jelinek and lafferty and stolcke . |