sentence1 (string, lengths 16-446) | sentence2 (string, lengths 14-436)
---|---
in transe and transh , the embeddings of entities and relations are in the same space . | both transe and transh assume that entities and relations are in the same vector space .
the language model used was a 5-gram with modified kneserney smoothing , built with srilm toolkit . | the system used a tri-gram language model built from sri toolkit with modified kneser-ney interpolation smoothing technique .
in this paper , we take a lexicon-based , unsupervised approach to considering sentiment consistency for translation . | in this paper , we are interested in explicitly modeling sentiment knowledge for translation .
we use the logistic regression classifier as implemented in the skll package , which is based on scikitlearn , with f1 optimization . | we use several classifiers including logistic regression , random forest and adaboost implemented in scikit-learn .
information extraction ( ie ) is the task of extracting factual assertions from text . | information extraction ( ie ) is the process of identifying events or actions of interest and their participating entities from a text .
we used a trigram language model trained on gigaword , and minimum error-rate training to tune the feature weights . | we used minimum error rate training to tune the feature weights for maximum bleu on the development set .
we conduct an empirical analysis of feature sets and report on the different characteristics of truthful and deceptive language . | we analyze a set of linguistic features in both truthful and deceptive responses to interview questions .
although we expect that better use of language specific knowledge would improve the results , it would defeat one of the goals of this work . | we expect that more language specific knowledge used to discover accurate equivalence classes would result in performance improvements .
relation extraction ( re ) is the task of determining semantic relations between entities mentioned in text . | relation extraction ( re ) is the process of generating structured relation knowledge from unstructured natural language texts .
dye et al developed a system based on scripts of common interactions . | dye et al introduce a system that utilizes scripts for specific situations .
in this study , we adopted evaluation metrics that comprise two classes , namely refined prerequisite skills and readability , for analyzing the quality of rc . | in this study , our goal is to investigate how these two types of difficulty , namely “ answering questions ” and “ reading text , ” are correlated in rc .
for relation extraction , we mitigated noise from using predicted entity types . | thus , we allow the relation extraction system to compensate for errors of entity typing .
moreover , we release a chinese zero anaphora corpus of 100 documents , which adds a layer of annotation to the manually-parsed sentences in the chinese treebank ( ctb ) . | moreover , we release a chinese zero anaphora corpus of 100 documents , which adds a layer of annotation to the manually-parsed sentences in the chinese treebank ( ctb ) 6.0 .
adding more complex features may not improve the performance much , and may even hurt the performance . | adding more complex features may not improve the performance much or may even hurt the performance .
in grammar , a part-of-speech ( pos ) is a linguistic category of words , which is generally defined by the syntactic or morphological behavior of the word in question . | in grammar , a part-of-speech ( pos ) is a linguistic category of words , generally defined by the syntactic or morphological behavior of the word in question .
lakoff and johnson , 1980 ) a mapping of a concept of argument to that of war is employed here . | lakoff and johnson , 1980 ) according to lakoff and johnson , a mapping of a concept of argument to that of war is employed here .
examples include search , access to yellow page services , email 5 , blog 6 , faq retrieval 7 etc . | other examples are search , access to yellow page services , email 1 , blog 2 , faq retrieval 3 etc .
and the results show that humorous review prediction can supply good indicators for identifying helpful reviews . | these humorous review predictions can also supply good indicators for identifying helpful reviews .
for evaluation , we used the case-insensitive bleu metric with a single reference . | we evaluated the translation quality using the bleu-4 metric .
we described our submissions to the semantic text similarity task . | we participated in the english sts and interpretable similarity subtasks .
sentence compression is the task of shortening a sentence while preserving its important information and grammaticality . | sentence compression is a paraphrasing task where the goal is to generate sentences shorter than given while preserving the essential content .
nowadays , most of smt systems implement the well known lexicalized reordering model . | the lexicalized reordering models have become the de facto standard in modern phrase-based systems .
the bilda model is a straightforward multilingual extension of the standard lda model . | lda is a representative probabilistic topic model of document collections .
wordnet is a comprehensive lexical resource for word-sense disambiguation ( wsd ) , covering nouns , verbs , adjectives , adverbs , and many multi-word expressions . | unfortunately , wordnet is a fine-grained resource , encoding sense distinctions that are often difficult to recognize even for human annotators ( cite-p-15-1-6 ) .
as input , the information about the current target word can be combined with the context word information and processed in the hidden layers . | the bnnjm uses the current target word as input , so the information about the current target word can be combined with the context word information and processed in hidden layers .
however , dependency parsing , which is a popular choice for japanese , can incorporate only shallow syntactic information , i.e. , pos tags , compared with the richer syntactic phrasal categories in constituency parsing . | dependency parsing is a way of structurally analyzing a sentence from the viewpoint of modification .
in this study , we focus on investigating the feasibility of using automatically inferred personal traits . | in this paper , we present a comprehensive analysis of the relationship between personal traits and brand preferences .
berland and charniak proposed a similar method for part-whole relations . | berland and charniak proposed a system for part-of relation extraction , based on the approach .
as a baseline for this comparison , we use morfessor categories-map . | in addition , we compare against the morfessor categories-map system .
this monotonically enriched structure can then serve as a context for incremental language understanding , as the author claims , although this part is not further developed by roark . | this monotonically enriched structure can then serve as a context for incremental language understanding , as the author claims , although this part , which we take up here , is not further developed by roark .
in this paper , we propose a novel supervised approach that can incorporate rich sentence features into bayesian topic models . | in this paper , we propose a novel supervised approach based on revised supervised topic model for query-focused multi document summarization .
we employ scikit-learn for building our classifiers . | we use the scikit-learn toolkit as our underlying implementation .
on the english penn treebank , revealed that our framework obtains competitive performance on constituency parsing and state-of-the-art results on single-model language modeling . | on the english penn treebank , we achieve competitive performance on constituency parsing and state-of-the-art single-model language modeling score .
the language models are 4-grams with modified kneser-ney smoothing which have been trained with the srilm toolkit . | these language models were built up to an order of 5 with kneser-ney smoothing using the srilm toolkit .
on a large scale , to maximize system performance , we explore different unsupervised feature learning methods to take advantage of a large amount of unsupervised social media data . | we have explored a comprehensive set of single-view feature learning methods to take advantage of a large amount of unsupervised social media data .
named entity recognition ( ner ) is a well-known problem in nlp which feeds into many other related tasks such as information retrieval ( ir ) and machine translation ( mt ) and more recently social network discovery and opinion mining . | named entity recognition ( ner ) is the task of identifying and classifying phrases that denote certain types of named entities ( nes ) , such as persons , organizations and locations in news articles , and genes , proteins and chemicals in biomedical literature .
two models could even share derivations with each other if they produce the same structures . | therefore , one model can share translations and even derivations with other models .
the constituent-context model is the first model achieving better performance than the trivial right branching baseline in the unsupervised english grammar induction task . | the constituent context model for inducing constituency parses was the first unsupervised approach to surpass a right-branching baseline .
minimum error training under bleu was used to optimise the feature weights of the decoder with respect to the dev2006 development set . | feature weights were set with minimum error rate training on a development set using bleu as the objective function .
finally , we used kenlm to create a trigram language model with kneser-ney smoothing on that data . | for the fst representation , we used the the opengrm-ngram language modeling toolkit and used an n-gram order of 4 , with kneser-ney smoothing .
bleu is the most commonly used metric for machine translation evaluation . | bleu and rouge are the standard similarity metrics used in machine translation and text summarisation .
co-occurrence space models represent the meaning of a word as a vector in high-dimensional space . | vector space models of word meaning represent words as points in a highdimensional semantic space .
on a standard benchmark data set , we achieve new state-of-the-art performance , reducing error in average f1 by 36 % , and word error rate by 78 % . | on a standard benchmark data set , we achieve new state-of-the-art performance , reducing error in average f1 by 36 % , and word error rate by 78 % in comparison with the previous best svm results .
as ‘ constrained ’ , which used only the provided training and development data . | tjp was focused on the ‘constrained’ task , which used only training and development data provided .
in the proposed tutorial , we will give a systematic discussion on the problem of knowledge base reasoning , for which extensive studies have been conducted recently . | this tutorial will present an organized picture of recent research on knowledge base construction and reasoning .
keyphrase extraction is the task of extracting a selection of phrases from a text document to concisely summarize its contents . | keyphrase extraction is a fundamental task in natural language processing that facilitates mapping of documents to a set of representative phrases .
word sense disambiguation is the task of assigning sense labels to occurrences of an ambiguous word . | word sense disambiguation is the process of determining which sense of a word is used in a given context .
turning to comparable corpora , shao and ng presented a hybrid method to mine new translations from chinese-english comparable corpora , combining both transliteration and context information . | shao and ng proposed a method by combining both context and transliteration information for the task of mining new word translations .
second , we describe how to incorporate vector space similarity into random walk inference over kbs , reducing the feature sparsity inherent in using surface . | second , we introduce the use of vector space similarity in random walk inference in order to reduce the sparsity of surface forms .
the quality of the translation was assessed by the bleu index , calculated using a perl script provided by nist . | the translations were evaluated with the widely used bleu and nist scores .
thus , we pre-train the embeddings on a huge unlabeled data , the chinese wikipedia corpus , with word2vec toolkit . | we use the pre-trained 300-dimensional word2vec embeddings trained on google news 1 as input features .
madamira is a system developed for morphological analysis and disambiguation of arabic text . | madamira is a tool designed for morphological analysis and disambiguation of modern standard arabic .
to train monolingual word embeddings we used fasttext with default parameters except the dimension of the vectors which is 300 . | to train monolingual word embeddings we used fasttext which employs subword information for better quality representations .
we use a fourgram language model with modified kneser-ney smoothing as implemented in the srilm toolkit . | we apply sri language modeling toolkit to train a 4-gram language model with kneser-ney smoothing .
keyphrase extraction is the problem of automatically extracting important phrases or concepts ( i.e. , the essence ) of a document . | keyphrase extraction is a natural language processing task for collecting the main topics of a document into a list of phrases .
for language model , we used sri language modeling toolkit to train a 4-gram model with modified kneser-ney smoothing . | for the language model , we used sri language modeling toolkit to train a trigram model with modified kneser-ney smoothing on the 31 , 149 english sentences .
we use the l2-regularized logistic regression of liblinear as our term candidate classifier . | we use the multi-class logistic regression classifier from the liblinear package 2 for the prediction of edit scripts .
usefulness of the results , we apply the device-dependent readability to news article recommendation . | the usefulness of the device-dependent readability is proven by applying it to news article recommendation .
polarity classification is the task of separating the subjective statements into positives and negatives . | polarity classification is the basic task of sentiment analysis in which the polarity of a given text should be classified into three categories : positive , negative or neutral .
relation extraction is a subtask of information extraction that finds various predefined semantic relations , such as location , affiliation , rival , etc. , between pairs of entities in text . | relation extraction is a core task in information extraction and natural language understanding .
morphological disambiguation is the process of assigning one set of morphological features to each individual word in a text , according to the word context . | morphological disambiguation is the task of selecting the correct morphological parse for a given word in a given context .
bilingual lexica provide word-level semantic equivalence information across languages , and prove to be valuable for a range of cross-lingual natural language processing tasks . | bilingual lexicons play an important role in many natural language processing tasks , such as machine translation and cross-language information retrieval .
a model of this form involves learning the parameters . | the approach involves perceptron training of a model with hidden variables .
named entity typing is a fundamental building block for many natural-language processing tasks . | named entity typing is the task of detecting the type ( e.g. , person , location , or organization ) of a named entity in natural language text .
v-measure assesses the quality of a clustering solution by explicitly measuring its homogeneity and its completeness . | v-measure assesses a cluster solution by considering its homogeneity and its completeness .
we introduce pre-post-editing , possibly the most basic form of interactive translation , as a touch-based interaction with iteratively improved translation . | we have introduced pre-post-editing , a minimalist interactive machine translation paradigm where a user is only asked to spot text fragments that may be used in the final translation .
the log linear weights for the baseline systems are optimized using mert provided in the moses toolkit . | the weights λ m are usually optimized for system performance as measured by bleu .
we employ scikit-learn for building our classifiers . | we use scikitlearn as machine learning library .
segmentation corresponds to a graph partitioning that optimizes the normalized-cut criterion . | we formalize segmentation as a graph-partitioning task that optimizes the normalized cut criterion .
the corpus consists of introductory sections from approximately 2,000 wikipedia articles in which references to the main subject have been annotated . | the corpus consists of introductory sections from approximately 1,000 wikipedia articles in which single and plural references to all people mentioned in the text have been annotated .
we run our experiments using an in-house phrase-based smt system similar to moses , with features including lexicalized reordering , linear distortion with limit 5 , and lexical weighting . | we used the phrase-based model moses for the experiments with all the standard settings , including a lexicalized reordering model , and a 5-gram language model .
we utilize the google news dataset created by mikolov et al , which consists of 300-dimensional vectors for 3 million words and phrases . | we use 300-dimensional vectors that were trained and provided by word2vec tool using a part of the google news dataset 4 .
conditional random fields are a type of discriminative probabilistic model proposed for labeling sequential data . | conditional random fields are a class of graphical models which are undirected and conditionally trained .
mimus follows the information state update approach to dialogue management , and supports english , german and spanish , with the possibility of changing language . | mimus follows the information state update approach to dialogue management , and has been developed under the eu-funded talk project ( cite-p-14-3-9 ) .
in this paper , we presented techniques of text distortion that can significantly enhance the robustness of authorship attribution methods . | in this paper , we present a novel method that enhances authorship attribution effectiveness by introducing a text distortion step before extracting stylometric measures .
we further used a 5-gram language model trained using the srilm toolkit with modified kneser-ney smoothing . | for the language model , we used sri language modeling toolkit to train a trigram model with modified kneser-ney smoothing on the 31 , 149 english sentences .
it is found that each of the english equivalent synsets occurs in each separate class of english verbnet . | it also has been found that each of the english equivalent synsets occurs in each separate class of english verbnet .
we set all feature weights using minimum error rate training , and we optimize their number on the development dataset . | we perform the mert training to tune the optimal feature weights on the development set .
inspired by the work of vincent et al and he et al , we build multi-layer model to learn more abstract entity representations . | inspired by the work in , we use auto-encoder to learn the representations for classes and properties .
in the english lexical substitution task , the system achieved the top result for picking the best substitute . | the system achieved promising results for the english lexical sample and english lexical substitution tasks .
through natural conversational interaction , this paper proposes a probabilistic model that computes timing dependencies among different types of behaviors . | based on these previous attempts , this study proposes a multimodal interaction model by focusing on task manipulation , and predicts conversation states using probabilistic reasoning .
in future work , our measure could be simplified by implementing the bias . | in future work , our measure could be simplified by implementing the bias as a single scaling parameter .
co-training is a representive bootstrapping method , which starts with a set of labeled data , and increase the amount of annotated data using some amounts of unlabeled data in an incremental way . | the co-training algorithm is a specific semi-supervised learning approach which starts with a set of labeled data and increases the amount of labeled data using the unlabeled data by bootstrapping .
to train the models we use the default stochastic gradient descent classifier provided by scikit-learn . | finally , we combine all the above features using a support vector regression model which is implemented in scikit-learn .
word alignment is a crucial early step in the training of most statistical machine translation ( smt ) systems , in which the estimated alignments are used for constraining the set of candidates in phrase/grammar extraction ( cite-p-9-3-5 , cite-p-9-1-4 , cite-p-9-3-0 ) . | word alignment is a key component of most endto-end statistical machine translation systems .
and we have presented a method to apply the information from partial parsing to full syntactic parsers that use a variant of the cyk algorithm . | in this paper , we propose a method to limit the combinatorial explosion by restricting the cyk chart parsing algorithm based on the output of a chunk parser .
in social media especially , there is a large diversity in terms of both the topic and language , necessitating the modeling of multiple languages simultaneously . | social media is a popular public platform for communicating , sharing information and expressing opinions .
quirk et al also generate sentential paraphrases using a monolingual corpus . | quirk et al apply smt tools to generate paraphrases of input sentences in the same language .
we train a linear support vector machine classifier using the efficient liblinear package . | for implementation , we used the liblinear package with all of its default parameters .
in the document , we try to model the interactions between document , question and answer by computing the attention score of question to document and question to answer . | for the problem that the answer is not explicitly mentioned in the document , we model the interactions between document , question and answers by using attention mechanism .
in each step , the algorithm selects hypotheses from the queue . | in each step , only one hypothesis from the queue is allowed to be considered .
word sense disambiguation ( wsd ) is a fundamental task and long-standing challenge in natural language processing ( nlp ) . | word sense disambiguation ( wsd ) is the task of identifying the correct sense of an ambiguous word in a given context .
in the document , we model the interactions between document , question and answers by using attention mechanism . | the interactions between document , question and answers are modeled by attention mechanism and a variety of manual features are used to improve model performance .
that extends the rational speech act model from cite-p-21-3-1 to incorporate updates to listeners ’ beliefs as discourse proceeds . | our model extends the rational speech act model from cite-p-21-3-1 to incorporate updates to listeners’ beliefs as discourse proceeds .
for the neural models , we use 100-dimensional glove embeddings , pre-trained on wikipedia and gigaword . | we use the pre-trained glove 50-dimensional word embeddings to represent words found in the glove dataset .
relation extraction is a key step towards question answering systems by which vital structured data is acquired from underlying free text resources . | relation extraction ( re ) is the task of determining semantic relations between entities mentioned in text .
we use the genia event extraction task as a representative example of complex knowledge extraction . | in this study , we adopt the event extraction task defined in the bionlp 2009 shared task as a model information extraction task .
a pun is a form of wordplay , which is often profiled by exploiting polysemy of a word or by replacing a phonetically similar sounding word for an intended humorous effect . | a pun is a word used in a context to evoke two or more distinct senses for humorous effect .
we trained a trigram language model on the chinese side , with the srilm toolkit , using the modified kneser-ney smoothing option . | we use the sri language model toolkit to train a 5-gram model with modified kneser-ney smoothing on the target-side training corpus .
we employ word2vec as the unsupervised feature learning algorithm , based on a raw corpus of over 90 million messages extracted from chinese weibo platform . | we initialize our model with 300-dimensional word2vec toolkit vectors generated by a continuous skip-gram model trained on around 100 billion words from the google news corpus .
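
The table above is a preview of a two-column paraphrase dataset: each row pairs `sentence1` (string, lengths 16-446) with `sentence2` (string, lengths 14-436). Below is a minimal sketch of how such a dataset could be loaded and inspected with the Hugging Face `datasets` library; the repository ID `user/paraphrase-pairs` and the `train` split are hypothetical placeholders, since this page does not state them.

```python
# Minimal sketch of loading a two-column sentence-pair dataset.
# The repository ID and split name are hypothetical placeholders.
from datasets import load_dataset

ds = load_dataset("user/paraphrase-pairs", split="train")  # hypothetical ID/split

# The preview implies two string columns.
print(ds.column_names)  # expected: ['sentence1', 'sentence2']

# Inspect the first pair.
row = ds[0]
print(row["sentence1"])
print(row["sentence2"])

# Check the character-length ranges stated in the header (16-446 and 14-436).
lens1 = [len(s) for s in ds["sentence1"]]
lens2 = [len(s) for s in ds["sentence2"]]
print(min(lens1), max(lens1))  # expected: 16 446
print(min(lens2), max(lens2))  # expected: 14 436
```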