sentence1 ( stringlengths 16 - 446 )
sentence2 ( stringlengths 14 - 436 )
socher et al proposed a more complex and flexible framework based on matrix-vector representations .
socher et al also extended word representations beyond simple vectors .
markov logic combines first-order logic and probabilistic graphical models in a unified representation .
markov logic networks combine markov networks with first-order logic in a probabilistic framework .
the in-house phrase-based translation system is used for generating translations .
the in-house phrase-based decoder is used to perform decoding .
in the decoding phase , our model can also generate a numerical value .
for the second problem , the model needs to be fed with information on delivery time .
cui et al developed a dependency-tree based information discrepancy measure .
cui et al proposed a system utilizing fuzzy relation matching guided by statistical models .
word sense disambiguation ( wsd ) is the task of identifying the correct sense of an ambiguous word in a given context .
word sense disambiguation ( wsd ) is a widely studied task in natural language processing : given a word and its context , assign the correct sense of the word based on a predefined sense inventory ( cite-p-15-3-4 ) .
the feature definitions are inspired by the set which yielded the best results when combined in a naive bayes model on several senseval-2 lexical sample tasks .
the feature set consists of position-sensitive , syntactic , and local collocational features , since these features yielded the best results when combined in a naïve bayes model on several senseval-2 lexical sample tasks .
this paper presented the first study on cross-domain text classification in presence of multiple domains with disparate label sets .
this paper presents a first-of-its-kind transfer learning algorithm for cross-domain classification with multiple source domains and disparate label sets .
we use pre-trained vectors from glove for word-level embeddings .
we use pre-trained 100 dimensional glove word embeddings .
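as a sketch of how such pre-trained glove embeddings are typically consumed: the distributed file format is one word per line followed by its vector components. the snippet parses a tiny made-up 3-dimensional sample in place of the real 100-dimensional file and compares words by cosine similarity; the words and values are illustrative only.

```python
import io
import math

def load_glove(handle):
    """parse glove-format text: word, then whitespace-separated float components."""
    vectors = {}
    for line in handle:
        parts = line.rstrip().split(" ")
        vectors[parts[0]] = [float(x) for x in parts[1:]]
    return vectors

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# toy 3-dimensional stand-in for a real 100-dimensional glove file
sample = "the 0.1 0.2 0.3\ncat 0.2 0.4 0.6\ndog 0.2 0.4 0.5\n"
emb = load_glove(io.StringIO(sample))
```

here "cat" is an exact scalar multiple of "the", so their cosine similarity is 1 by construction.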
the user first provides a small number of seeding users , and the system then ranks the friend list according to how likely a user belongs to the group indicated by the seeds .
the system takes friend seeds provided by users and generates a ranked list according to the likelihood of a test user being in the group .
burkett and klein induce node-alignments of syntactic trees with a log-linear model , in order to guide bilingual parsing .
burkett and klein propose a reranking based method for joint constituent parsing of bitext , which can make use of structural correspondence features in both languages .
that makes it possible to also address the question of how these changes happened by uncovering the cognitive mechanisms and cultural processes that drive language evolution .
this demonstration shows how fcg can be used to operationalise the cultural processes and cognitive mechanisms that underlie language evolution and change .
we measure the inter-annotator agreement using the kappa coefficient .
we use the κ statistic to measure inter-annotator agreements for emotion annotation .
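the κ ( kappa ) coefficient corrects raw inter-annotator agreement for the agreement expected by chance. a minimal stdlib sketch of cohen's kappa for two annotators; the toy emotion labels below are invented for illustration:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """cohen's kappa: (observed - expected) / (1 - expected) agreement.
    assumes the annotators are not already in perfect chance agreement."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    # chance agreement: product of the two annotators' marginal label frequencies
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1.0 - expected)

# invented emotion annotations from two hypothetical annotators
a = ["joy", "joy", "anger", "anger"]
b = ["joy", "joy", "anger", "joy"]
```

with these labels the annotators agree on 3 of 4 items (0.75 observed) against 0.5 chance agreement, giving κ = 0.5.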
the word embedding is chosen as the glove 100-dimensional embedding .
the word embeddings are identified using the standard glove representations .
distributional semantic models [ baroni and lenci ] are based on the distributional hypothesis of meaning [ harris ] assuming that semantic similarity between words is a function of the overlap of their linguistic contexts .
distributional semantic models are based on the distributional hypothesis of meaning assuming that semantic similarity between words is a function of the overlap of their linguistic contexts .
relation extraction ( re ) is the process of generating structured relation knowledge from unstructured natural language texts .
relation extraction is the task of finding semantic relations between two entities from text .
bengio et al proposed a probabilistic neural network language model for word representations .
bengio et al presented a neural network language model where word embeddings are simultaneously learned along with a language model .
to avoid imposing hard independence assumptions , it also allows us to impose linguistically appropriate soft biases on the learning process .
but it also allows us to make use of our a priori knowledge by imposing structurally specified and linguistically appropriate biases on the search for a good history representation .
we investigate the disambiguation of 7 highly ambiguous verbs in english-portuguese .
we experimented with this approach to disambiguate 7 highly ambiguous verbs in english-portuguese translation .
the penn discourse treebank is the largest corpus richly annotated with explicit and implicit discourse relations and their senses .
the penn discourse treebank is the largest available discourse-annotated corpus in english .
we use scikit-learn as the machine learning library .
we used the svd implementation provided in the scikit-learn toolkit .
in order to map queries and documents into the embedding space , we make use of a recurrent neural network with the long short-term memory architecture that can deal with vanishing and exploding gradient problems .
in particular , we use a rnn based on the long short term memory unit , designed to avoid vanishing gradients and to remember some long-distance dependences from the input sequence .
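to make the gating idea concrete, here is a scalar ( hidden-size-1 ) lstm step sketched from the standard equations, not from any particular system mentioned here. the cell state is updated additively, so with the forget gate held open information can be carried across many steps without repeated squashing; the weights below are hand-picked purely for illustration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """one scalar lstm step; w maps a gate name to (input weight, recurrent weight, bias)."""
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])    # input gate
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])    # forget gate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])    # output gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])  # candidate value
    c = f * c_prev + i * g  # additive update: the cell state is not repeatedly squashed
    h = o * math.tanh(c)
    return h, c

# gates pinned open/closed to demonstrate long-distance memory:
# forget gate ~1 and input gate ~0, so the cell state is carried through unchanged
w = {"i": (0.0, 0.0, -100.0), "f": (0.0, 0.0, 100.0),
     "o": (0.0, 0.0, 0.0), "g": (1.0, 0.0, 0.0)}
h, c = 0.0, 0.7
for x in [1.0, -1.0, 2.0, 0.5]:
    h, c = lstm_step(x, h, c, w)
```

after four arbitrary inputs the cell state still holds its initial value of 0.7, which is the behaviour that lets lstms remember long-distance dependences.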
we use large 300-dim skip gram vectors with bag-of-words contexts and negative sampling , pre-trained on the 100b google news corpus .
we derive 100-dimensional word vectors using word2vec skip-gram model trained over the domain corpus .
semantic parsing is the task of mapping natural language sentences to a formal representation of meaning .
semantic parsing is the task of translating natural language utterances to a formal meaning representation language ( cite-p-16-1-6 , cite-p-16-3-6 , cite-p-16-1-8 , cite-p-16-3-7 , cite-p-16-1-0 ) .
the maximum likelihood estimates are smoothed using good-turing discounting .
the phrase translation probabilities are smoothed with good-turing smoothing .
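good-turing discounting re-estimates the count of an event seen r times as r* = ( r + 1 ) n_{r+1} / n_r , where n_r is the number of distinct events seen exactly r times, and reserves the singleton mass n_1 / n for unseen events. a stdlib sketch over an invented count table:

```python
from collections import Counter

def good_turing_adjusted(counts):
    """basic good-turing: adjusted count r* = (r + 1) * n_{r+1} / n_r,
    with n_1 / n reserved as probability mass for unseen events."""
    freq_of_freq = Counter(counts.values())  # n_r: how many items occur exactly r times
    total = sum(counts.values())
    adjusted = {}
    for item, r in counts.items():
        if freq_of_freq.get(r + 1):
            adjusted[item] = (r + 1) * freq_of_freq[r + 1] / freq_of_freq[r]
        else:
            adjusted[item] = float(r)  # no higher count observed; keep the mle count
    p_unseen = freq_of_freq.get(1, 0) / total
    return adjusted, p_unseen

# invented counts: three singletons, one doubleton, one tripleton
counts = {"a": 1, "b": 1, "c": 1, "d": 2, "e": 3}
adjusted, p_unseen = good_turing_adjusted(counts)
```

real toolkits apply smoothing to the n_r values themselves ( simple good-turing ) before using this formula; the sketch skips that step.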
in the word-choice task , the cross-lingual measures achieved a significantly higher coverage than the monolingual measure .
in task ( 1 ) , cross-lingual measures are superior to conventional monolingual measures based on a wordnet .
to learn the weights associated with the parameters used in our model , we have used a learning framework called mira .
for parameter optimization , we have used an online large margin algorithm called mira .
another group of features involves wordnet word synonym sets .
another group of features are derived using wordnet .
through the method , various kinds of collocations induced by key strings are retrieved .
through the method , a wide range of collocations , especially domain-specific collocations , are retrieved .
in this paper , we propose a method to integrate korean-specific subword information to learn korean word vectors and show improvements over previous baselines .
in this paper , we look at improving distributed word representations for korean using knowledge about the unique linguistic structure of korean .
we used the sri language modeling toolkit to train a five-gram model with modified kneser-ney smoothing .
we estimated 5-gram language models using the sri toolkit with modified kneser-ney smoothing .
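full modified kneser-ney as implemented in srilm is involved; the sketch below shows only the core idea with a bigram model and plain absolute discounting backing off to a unigram distribution ( a simplification, not srilm's exact estimator ). each seen bigram count is discounted by a constant, and the freed mass is redistributed via the backoff distribution.

```python
from collections import Counter, defaultdict

def train_bigram_lm(tokens, discount=0.75):
    """bigram lm with absolute discounting and a unigram backoff distribution,
    a simplified cousin of modified kneser-ney smoothing."""
    bigrams = Counter(zip(tokens, tokens[1:]))
    context_totals = Counter(tokens[:-1])
    continuations = defaultdict(set)
    for (u, v) in bigrams:
        continuations[u].add(v)
    unigram = Counter(tokens)
    n = len(tokens)

    def prob(u, v):
        backoff = unigram[v] / n
        if context_totals[u] == 0:
            return backoff  # unseen context: fall back to the unigram distribution
        discounted = max(bigrams[(u, v)] - discount, 0.0) / context_totals[u]
        # interpolation weight: mass freed by discounting the seen continuations
        lam = discount * len(continuations[u]) / context_totals[u]
        return discounted + lam * backoff

    return prob

tokens = "the cat sat on the mat".split()
prob = train_bigram_lm(tokens)
```

by construction the conditional distribution for any context sums to one over the observed vocabulary.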
sentence compression is the task of producing a summary at the sentence level .
sentence compression is a standard nlp task where the goal is to generate a shorter paraphrase of a sentence .
a portmanteau is a type of compound word that fuses the sounds and meanings of two component words .
portmanteaux are new words that fuse both the sounds and meanings of their component words .
the quality of the translation was assessed by the bleu index , calculated using a perl script provided by nist .
the bleu metric was used to automatically evaluate the quality of the translations .
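bleu combines modified n-gram precisions ( candidate n-gram counts clipped by their reference counts ) with a brevity penalty for short outputs. a single-reference sentence-level sketch in stdlib python, not the nist perl implementation referred to above:

```python
import math
from collections import Counter

def sentence_bleu(candidate, reference, max_n=4):
    """bleu: geometric mean of clipped n-gram precisions times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
        overlap = sum(min(c, ref[g]) for g, c in cand.items())  # clip by reference counts
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # floor avoids log(0) on short toys
    # brevity penalty: punish candidates shorter than the reference
    bp = 1.0 if len(candidate) > len(reference) else math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

a candidate identical to its reference scores 1.0; a truncated candidate is pushed below that by both the missing n-grams and the brevity penalty.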
in section 4 , we describe tools allowing efficient access to wikipedia 's edit history .
thus , in section 4 , we present a tool to efficiently access wikipedia 's edit history .
the language models were built using srilm toolkits .
for lm training and interpolation , the srilm toolkit was used .
relation extraction ( re ) is a task of identifying typed relations between known entity mentions in a sentence .
relation extraction is a key step towards question answering systems by which vital structured data is acquired from underlying free text resources .
to deliver their ideas , the authors need to determine which school of thought this sentence is to portray .
before writing a sentence to deliver their ideas , the authors need to determine which school of thought this sentence is to portray .
negation is a complex phenomenon present in all human languages , allowing for the uniquely human capacities of denial , contradiction , misrepresentation , lying , and irony ( horn and wansing , 2015 ) .
negation is a grammatical category that comprises devices used to reverse the truth value of propositions .
relation extraction ( re ) is the task of identifying instances of relations , such as nationality ( person , country ) or place of birth ( person , location ) , in passages of natural text .
relation extraction ( re ) is the process of generating structured relation knowledge from unstructured natural language texts .
in this paper , we present the task of process extraction , in which events within a process are identified .
in this paper , we formally define the task of process extraction and present automatic extraction methods .
probabilistic models have had much success in applications because of their flexibility .
discriminative probabilistic models are very popular in nlp because of the latitude they afford in designing features .
the best performing nmt systems use an attention mechanism that focuses the attention of the decoder on parts of the source sentence .
an attention-based nmt system uses a bidirectional rnn as an encoder and a decoder that emulates searching through a source sentence during decoding .
word sense disambiguation ( wsd ) is the task of determining the meaning of a word in a given context .
word sense disambiguation ( wsd ) is formally defined as the task of computationally identifying senses of a word in a context .
our model uses non-negative matrix factorization ( nmf ) in order to find latent dimensions .
our model uses non-negative matrix factorization in order to find latent dimensions .
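to make "finding latent dimensions" concrete, here is a minimal pure-python version of the lee–seung multiplicative-update algorithm for nmf; the actual implementation behind the sentences above is not specified, so this is only a sketch. it factors a non-negative matrix v into non-negative factors w and h whose product approximates v, with the rows of h acting as latent dimensions.

```python
import random

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(row) for row in zip(*a)]

def nmf(v, rank, iters=200, seed=0):
    """lee & seung multiplicative updates minimizing the frobenius error |v - w @ h|."""
    rng = random.Random(seed)
    n, m = len(v), len(v[0])
    # strictly positive initialization keeps the multiplicative updates well-defined
    w = [[rng.random() + 0.1 for _ in range(rank)] for _ in range(n)]
    h = [[rng.random() + 0.1 for _ in range(m)] for _ in range(rank)]
    eps = 1e-9
    for _ in range(iters):
        wt = transpose(w)
        num, den = matmul(wt, v), matmul(wt, matmul(w, h))
        h = [[h[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)] for i in range(rank)]
        ht = transpose(h)
        num, den = matmul(v, ht), matmul(matmul(w, h), ht)
        w = [[w[i][j] * num[i][j] / (den[i][j] + eps) for j in range(rank)] for i in range(n)]
    return w, h

def frobenius_error(v, w, h):
    wh = matmul(w, h)
    return sum((v[i][j] - wh[i][j]) ** 2 for i in range(len(v)) for j in range(len(v[0])))

# toy matrix with an exact rank-2 non-negative factorization
v = [[1, 0, 1, 0], [1, 0, 1, 0], [0, 1, 0, 1]]
w, h = nmf(v, rank=2)
```

because the updates are multiplicative, both factors stay non-negative throughout, which is what makes the latent dimensions interpretable as additive parts.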
whereas translation-based lms outperform original-based lms , lms compiled from texts that were translated from the source language perform even better .
however , lms based on texts translated from the source language still outperform lms based on texts translated from other languages .
on training sentences , we obtained a precision rate of 82 % and a recall rate of 85 % .
on test sentences , we obtained a precision rate of 79 % and a recall rate of 77 % .
identification of user intent has played an important role in conversational systems .
in conversational systems , understanding user intent is critical to the success of interaction .
the feature weights are tuned to optimize bleu using the minimum error rate training algorithm .
the optimisation of the feature weights of the model is done with minimum error rate training against the bleu evaluation metric .
the meteor-derived features are the most effective ones .
meteor-derived features are the most effective ones in our experiment .
we describe our contributions to the complex word identification task of semeval 2016 .
we introduce the sv000gg systems : two ensemble methods for the complex word identification task of semeval 2016 .
the word embeddings are initialized with 100-dimensions vectors pre-trained by the cbow model .
the embedded word vectors are trained over large collections of text using variants of neural networks .
though our method uses only title words and unlabeled data , it shows reasonably comparable performance in comparison with that of the supervised naive bayes method .
from results of our experiments , our method showed reasonably comparable performance compared with a supervised method .
we develop translation models using the phrase-based moses smt system .
we use the moses toolkit to train various statistical machine translation systems .
we will adopt the adaptor grammar framework used by börschinger and johnson to explore the utility of syllable weight as a cue to word segmentation by way of its covariance with stress .
we modify the adaptor grammar word segmentation model of börschinger and johnson to compare the utility of syllable weight and stress cues for finding word boundaries , both individually and in combination .
lei et al also use low-rank tensor learning in the context of dependency parsing , where like in our case dependencies are represented by conjunctive feature spaces .
lei et al proposed to learn features by representing the cross-products of some primitive units with low-rank tensors for dependency parsing .
for this task , we used the svm implementation provided with the python scikit-learn module .
specifically , we used the python scikit-learn module , which interfaces with the widely-used libsvm .
metric pairs show a significant difference in correlation with human judgment .
this is often measured by correlation with human judgment .
the grammar matrix is couched within the head-driven phrase structure grammar framework .
the grammar matrix is written within the hpsg framework , using minimal recursion semantics for the semantic representations .
relation extraction is the task of predicting attributes and relations for entities in a sentence ( zelenko et al. , 2003 ; bunescu and mooney , 2005 ; guodong et al. , 2005 ) .
relation extraction is the task of finding relationships between two entities from text .
we trained word embeddings using word2vec on 4 corpora of different sizes and types .
we trained word2vec on a 1-billion mixed corpus , preprocessed by lemmatization and compound splitting .
we compared the performances of the systems using two automatic mt evaluation metrics , the sentence-level bleu score and the document-level bleu score .
in addition to these two key indicators , we evaluated the translation quality using an automatic measure , namely bleu score .
two subsections review typical methods for each phase .
the following two subsections review typical methods for each phase .
we used the srilm toolkit to create 5-gram language models with interpolated modified kneser-ney discounting .
we trained a 4-gram language model on the xinhua portion of gigaword corpus using the sri language modeling toolkit with modified kneser-ney smoothing .
dependency parsing is a topic that has engendered increasing interest in recent years .
dependency parsing is the task of building dependency links between words in a sentence , which has recently gained a wide interest in the natural language processing community .
in this paper , we attempted to define a measure of distributional semantic content .
in this paper , we propose to measure the ‘ semantic content ’ of lexical items , as modelled by distributional representations .
we evaluate on this dataset and show that our method predicts the correct equation in 70 % of the cases and that in 60 % of the time it also grounds all variables correctly .
we evaluate our method on this dataset and show that our method predicts the correct equation in 70 % of the cases and that in 60 % of the time we also ground all variables correctly .
for the features , we directly adopt those described in lin et al and knott .
for the features , we directly adopt those described in lin et al and knott ( 1996 ) .
coreference resolution is a field in which major progress has been made in the last decade .
coreference resolution is a task aimed at identifying phrases ( mentions ) referring to the same entity .
barzilay and mckeown utilized multiple english translations of the same source text for paraphrase extraction .
however , barzilay and mckeown did similar work on corpus-based identification of general paraphrases from multiple english translations of the same source text .
however , this estimator underestimates the entropy , as it does not take into account unseen types , which is especially problematic for small texts .
however , it has been shown that this method underestimates the entropy , especially for small texts .
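the bias is easy to demonstrate: the plug-in ( maximum likelihood ) estimator computes entropy from observed relative frequencies, so unseen types contribute nothing and the estimate is systematically low for small samples. the miller-madow correction adds ( k - 1 ) / 2n , with k the number of observed types, as a first-order fix. a stdlib sketch with a synthetic sample:

```python
import math
import random
from collections import Counter

def mle_entropy(sample):
    """plug-in (maximum likelihood) entropy estimate in bits."""
    n = len(sample)
    return -sum(c / n * math.log2(c / n) for c in Counter(sample).values())

def miller_madow_entropy(sample):
    """add (observed types - 1) / (2n) to offset the negative bias of the mle."""
    k = len(set(sample))
    return mle_entropy(sample) + (k - 1) / (2 * len(sample))

# true distribution: uniform over 8 symbols, so the true entropy is 3 bits;
# a sample of 50 tokens stands in for a "small text"
rng = random.Random(42)
sample = [rng.randrange(8) for _ in range(50)]
```

the plug-in estimate on the small sample falls below the true 3 bits, and the corrected estimate is always at least as large as the plug-in one.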
in this paper , we re-embed pre-trained word embeddings with a stage of manifold learning .
in this paper we presented a new method to re-embed words from offthe-shelf embeddings based on manifold learning .
sentiment classification is a useful technique for analyzing subjective information in a large number of texts , and many studies have been conducted ( cite-p-15-3-1 ) .
sentiment classification is a hot research topic in natural language processing field , and has many applications in both academic and industrial areas ( cite-p-17-1-16 , cite-p-17-1-12 , cite-p-17-3-4 , cite-p-17-3-3 ) .
the vmf distribution has been used to model directional data .
such a representation is well-suited for directional data .
semantic parsing is the task of mapping natural language utterances to machine interpretable meaning representations .
semantic parsing is the task of mapping natural language sentences to complete formal meaning representations .
all model weights were trained on development sets via minimum-error rate training with 200 unique n-best lists and optimizing toward bleu .
all features were log-linearly combined and their weights were optimized by performing minimum error rate training .
we used 5-gram models , estimated using the sri language modeling toolkit with modified kneser-ney smoothing .
for the tree-based system , we applied a 4-gram language model with kneser-ney smoothing using the srilm toolkit trained on the whole monolingual corpus .
twitter is the medium where people post real-time messages to discuss different topics and express their sentiments .
twitter is a communication platform which combines sms , instant messages and social networks .
word sense induction ( wsi ) is the task of automatically identifying the senses of words in texts , without the need for handcrafted resources or manually annotated data .
word sense induction ( wsi ) is the task of automatically discovering all senses of an ambiguous word in a corpus .
we use latent dirichlet allocation , or lda , to obtain a topic distribution over conversations .
a particular generative model , which is well suited for the modeling of text , is called latent dirichlet allocation .
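lda is usually fit with an off-the-shelf library; to make the model concrete, here is a compact collapsed gibbs sampler, a standard estimation method though not necessarily the one used in the work above, run over a toy corpus with invented words. each token's topic is resampled from p( z = k ) ∝ ( n_dk + α ) ( n_kw + β ) / ( n_k + vβ ).

```python
import random
from collections import defaultdict

def lda_gibbs(docs, num_topics, iters=200, alpha=0.1, beta=0.01, seed=0):
    """collapsed gibbs sampling for lda; returns per-document topic distributions."""
    rng = random.Random(seed)
    vsize = len({w for d in docs for w in d})
    n_dk = [[0] * num_topics for _ in docs]        # topic counts per document
    n_kw = [defaultdict(int) for _ in range(num_topics)]  # word counts per topic
    n_k = [0] * num_topics                         # total tokens per topic
    assign = []
    for d, doc in enumerate(docs):                 # random initial topic assignments
        zs = []
        for w in doc:
            z = rng.randrange(num_topics)
            zs.append(z)
            n_dk[d][z] += 1; n_kw[z][w] += 1; n_k[z] += 1
        assign.append(zs)
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                z = assign[d][i]                   # remove the token, then resample
                n_dk[d][z] -= 1; n_kw[z][w] -= 1; n_k[z] -= 1
                weights = [(n_dk[d][k] + alpha) * (n_kw[k][w] + beta) / (n_k[k] + vsize * beta)
                           for k in range(num_topics)]
                z = rng.choices(range(num_topics), weights=weights)[0]
                assign[d][i] = z
                n_dk[d][z] += 1; n_kw[z][w] += 1; n_k[z] += 1
    return [[(n_dk[d][k] + alpha) / (len(doc) + num_topics * alpha) for k in range(num_topics)]
            for d, doc in enumerate(docs)]

# invented toy corpus with two intuitive themes
docs = [["money", "bank", "loan", "money"],
        ["loan", "bank", "money"],
        ["river", "water", "stream"],
        ["water", "river", "stream", "river"]]
theta = lda_gibbs(docs, num_topics=2)
```

theta gives the smoothed topic distribution over each conversation-like document; each row is a proper distribution summing to one.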
we used the target side of the parallel corpus and the srilm toolkit to train a 5-gram language model .
for both languages , we used the srilm toolkit to train a 5-gram language model using all monolingual data provided .
we compared sn models with two different pre-trained word embeddings , using either word2vec or fasttext .
we obtained distributed word representations using word2vec 4 with skip-gram .
since the meaning of a sentence consists of both structural and situational components , we can propose a uniform framework for analyzing both proposition and modality .
the meaning of a sentence is a relation between the utterance situation u ( = d , c ) and a described situation .
although coreference resolution is a subproblem of natural language understanding , coreference resolution evaluation metrics have predominately been discussed in terms of abstract entities and hypothetical system errors .
coreference resolution is the process of linking together multiple expressions of a given entity .
mutalik et al developed another rule based system called negfinder that recognizes negation patterns in biomedical text .
mutalik et al developed negfinder , a rulebased system that recognises negated patterns in medical documents .
in this paper , we explore within-and across-culture deception detection .
in this paper , we addressed the task of deception detection within- and across-cultures .
to calculate language model features , we train traditional n-gram language models with ngram lengths of four and five using the srilm toolkit .
we use sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .
discourse parsing is a difficult , multifaceted problem involving the understanding and modeling of various semantic and pragmatic phenomena as well as understanding the structural properties that a discourse graph can have .
discourse parsing is a challenging task and is crucial for discourse analysis .
in this work , we introduce an extension to the continuous bag-of-words model .
our departure point is the continuous bag-of-words model introduced in .
based on word2vec , we obtained both representations using the skipgram architecture with negative sampling .
as embedding vectors , we used the publicly available representations obtained from the word2vec cbow model .
combinatory categorial grammar is a syntactic theory that models a wide range of linguistic phenomena .
combinatory categorial grammar is a lexicalized grammar formalism that has been used for both broad coverage syntactic parsing and semantic parsing .
our latent model uses a factorization technique called non-negative matrix factorization in order to find latent dimensions .
our model uses non-negative matrix factorization in order to find latent dimensions .
we evaluate cpra on benchmark data created from freebase .
we have tested cpra on benchmark data created from freebase .
experiments on the benchmark datasets show that our model achieves better results than previous neural network models .
the test results on the benchmark dataset show that our model outperforms previous neural network models .
using sentence-aligned corpora , the proposed model learns distributed representations .
the proposed model has a simple loss function and only uses sentence-aligned data for learning the shared representations .
the statistical phrase-based systems were trained using the moses toolkit with mert tuning .
the smt system is implemented using moses and the nmt system is built using the fairseq toolkit .
we used data from the conll-x shared task on multilingual dependency parsing .
the treebank data in our experiments are from the conll shared-tasks on dependency parsing .
in view of this background , this paper presents a novel error correction framework called error case frames .
in view of this background , this paper presents a novel error correction framework called error case frames , an example of which is shown in fig . 2 .
the other method is to combine the outputs of different mt systems trained using different aligners .
the other is to combine outputs of different mt systems trained using different aligners .
we develop a novel technique for amr parsing that uses learning to search .
we develop a novel technique to parse english sentences into amr using learning to search .