sentence1 ( stringlengths 16 to 446 ) ; sentence2 ( stringlengths 14 to 436 )
the weights Ξ»m in the log-linear model were trained using minimum error rate training with the news 2009 development set .
the weights of the different feature functions were optimised by means of minimum error rate training on the 2013 wmt test set .
li et al use sense paraphrases to estimate probabilities of senses and carry out wsd .
li et al design several systems that use latent topics to find a most likely sense based on the sense paraphrases and context .
in this setting , traditional transfer methods will always predict the same label .
unlike these methods , our approach assumes no label annotations in the target domain .
each of them was lemmatised and tagged using the treetagger .
the corpus was automatically pos-tagged with treetagger .
table 2 presents the translation performance in terms of various metrics such as bleu , meteor and translation edit rate .
the mt performance in terms of translation edit rate and bleu is shown in figure 4 .
semantic role labeling ( srl ) is the task of identifying the arguments of lexical predicates in a sentence and labeling them with semantic roles ( cite-p-13-3-3 , cite-p-13-3-11 ) .
semantic role labeling ( srl ) is the task of automatic recognition of individual predicates together with their major roles ( e.g . frame elements ) as they are grammatically realized in input sentences .
searchqa and triviaqa show that our system achieves significant and consistent improvement as compared to all baseline methods .
experimental results on real-world datasets show that our model can capture useful information from noisy data and achieve significant improvements on ds-qa as compared to all baselines .
the third diachronic distributional model we will consider comes from bamler and mandt .
this choice of hyperparameters comes from bamler and mandt .
morphologically , arabic is a non-concatenative language .
arabic is a morphologically rich language that is much more challenging to work with , mainly due to its significantly larger vocabulary .
we have presented an approach to building a test collection from an existing collection of research papers .
we present an approach to building a test collection of research papers .
representations of word context are based on potential substitutes of a word .
thus substitute vectors represent individual word contexts , not word types .
we extend the existing word embedding learning algorithm and develop three neural networks to learn sswe .
following , we develop a new convolutional neural network based semantic model for semantic parsing .
cook et al and fazly et al take a different approach , which crucially relies on the concept of canonical form .
cook et al and fazly et al rely crucially on the concept of canonical form .
in this paper , we , the fbk-tr team , describe our system participating in task 3 .
in this paper , we describe our system participating in task 3 at semeval 2014 .
sentiment analysis is the process of identifying and extracting subjective information using natural language processing ( nlp ) .
sentiment analysis is the task in natural language processing ( nlp ) that deals with classifying opinions according to the polarity of the sentiment they express .
to build a corpus for robocup , 300 pieces of coach advice were randomly selected from the log files of the 2003 robocup coach competition , which were manually translated into english .
for clang , 300 instructions were randomly selected from the log files of the 2003 robocup coach competition and manually translated into english .
in this paper , we apply kernel methods , which enable an efficient comparison of structures .
in this paper , we have engineered and studied several models for relation learning .
zarrieß and kuhn argue that multiword expressions can be reliably detected in parallel corpora by using dependency-parsed , word-aligned sentences .
on the other hand , zarrieß and kuhn make use of translational correspondences when identifying multiword expressions .
although word embeddings have been successfully employed in many nlp tasks , the application of word embeddings in re is very recent .
word embeddings have become increasingly popular lately , proving to be valuable as a source of features in a broad range of nlp tasks .
heilman et al extended this approach and worked towards retrieving relevant reading materials for language learners in the reap 3 project .
heilman et al combined a language modeling approach with grammarbased features to improve readability assessment for first and second language texts .
we use the ontonotes datasets from the conll 2011 shared task 6 , only for training the out-of-the-box system .
throughout this work , we use the datasets from the conll 2011 shared task 2 , which is derived from the ontonotes corpus .
we propose novel linear associative units ( lau ) to reduce the gradient propagation length inside the recurrent unit .
we propose a linear associative unit ( lau ) which makes a fusion of both linear and nonlinear transformation inside the recurrent unit .
neural networks have recently gained much attention as a way of inducing word vectors .
more recently , neural networks have become prominent in word representation learning .
the language model component uses the srilm lattice-tool for weight assignment and nbest decoding .
uedin has used the srilm toolkit to train the language model and relies on kenlm for language model scoring during decoding .
multi-task learning using a related auxiliary task can lead to stronger generalization and better regularized models .
multi-task learning can integrate different objectives into one model and has previously been shown to help improve model generalisation .
this is a gui-enabled convenience tool that manages datasets and uses the python-based scikitlearn machine learning toolkit .
kindred is a python package that builds upon the stanford corenlp framework and the scikit-learn machine learning library .
methods make use of the information from only one language side .
the existing methods use only the information in either language side .
tests show that using a situated model significantly improves performances over traditional language modeling methods .
results indicate that integration of situational context dramatically improves performance over traditional methods alone .
we used kenlm with srilm to train a 5-gram language model based on all available target language training data .
we used the srilm toolkit to train a 4-gram language model on the english side of the training corpus .
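The two pairs above describe training n-gram language models with SRILM and KenLM. Below is a minimal sketch of scoring sentences with the KenLM Python bindings against an already-trained ARPA model; the file name lm.arpa is a placeholder, not a file from the original systems.

```python
# Minimal sketch: scoring sentences with a KenLM n-gram language model.
# Assumes a model file "lm.arpa" (placeholder) trained beforehand, e.g. with
# SRILM's ngram-count or KenLM's lmplz; requires the kenlm Python package.
import kenlm

model = kenlm.Model("lm.arpa")  # loads an ARPA or binary KenLM model

sentence = "this is a test sentence"
# Total log10 probability, including begin/end-of-sentence markers.
logprob = model.score(sentence, bos=True, eos=True)
# Perplexity of the sentence under the model.
ppl = model.perplexity(sentence)
print(f"log10 prob = {logprob:.2f}, perplexity = {ppl:.2f}")
```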
we propose a robust nonparanormal model to learn the stochastic dependencies among the image , the candidate descriptions , and the popular votes .
we propose a robust nonparanormal approach ( cite-p-23-3-18 ) to model the multimodal stochastic dependencies among images , text , and votes .
ideally the third item can be estimated by the forward-backward algorithm recursively for the first-order or second-order hmms .
ideally , it can be estimated by using the forward-backward algorithm recursively for the first-order or second-order hmms .
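The pair above refers to the forward-backward algorithm for first-order HMMs. The following is a minimal numpy sketch of the forward and backward recursions and the resulting posterior state marginals; the toy parameters are illustrative only.

```python
# Minimal forward-backward sketch for a first-order HMM (illustrative only).
import numpy as np

def forward_backward(pi, A, B, obs):
    """pi: (S,) initial probs, A: (S,S) transitions, B: (S,V) emissions,
    obs: list of observation indices. Returns posterior state marginals."""
    T, S = len(obs), len(pi)
    alpha = np.zeros((T, S))          # forward probabilities
    beta = np.zeros((T, S))           # backward probabilities

    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):             # forward recursion
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]

    beta[T - 1] = 1.0
    for t in range(T - 2, -1, -1):    # backward recursion
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

    gamma = alpha * beta              # unnormalised posteriors
    return gamma / gamma.sum(axis=1, keepdims=True)

# Tiny toy example: 2 states, 3 observation symbols.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(forward_backward(pi, A, B, [0, 1, 2, 1]))
```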
we use the mallet implementation of conditional random fields .
our system is based on the conditional random field .
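The pair above mentions CRF-based sequence labelling (the MALLET implementation in the original). A minimal sketch using the sklearn-crfsuite package as a stand-in is shown below; the feature template and toy sentence are assumptions for illustration, not the original system's features.

```python
# Minimal CRF sequence-labelling sketch with sklearn-crfsuite
# (a stand-in for the MALLET CRF mentioned above; features are illustrative).
import sklearn_crfsuite

def word_features(sent, i):
    word = sent[i]
    return {
        "lower": word.lower(),
        "is_title": word.istitle(),
        "suffix3": word[-3:],
        "prev": sent[i - 1].lower() if i > 0 else "<BOS>",
    }

# Toy training data: one tokenised sentence with NER-style labels.
sentences = [["John", "lives", "in", "Berlin"]]
labels = [["B-PER", "O", "O", "B-LOC"]]

X = [[word_features(s, i) for i in range(len(s))] for s in sentences]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X))
```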
specifically , we use the liblinear svm package as it is well-suited to text classification tasks with large numbers of features and texts .
in particular , we use the liblinear 3 package which has been shown to be efficient for text classification problems such as this .
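These pairs describe using the LIBLINEAR package for text classification with many features. A minimal scikit-learn sketch follows (LinearSVC wraps liblinear) with bag-of-words features; the toy texts and labels are placeholders.

```python
# Minimal text-classification sketch with a liblinear-backed linear SVM
# (scikit-learn's LinearSVC wraps LIBLINEAR); the toy data is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["great movie , loved it", "terrible plot , boring", "wonderful acting"]
labels = ["pos", "neg", "pos"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC(C=1.0))
clf.fit(texts, labels)
print(clf.predict(["boring movie"]))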
semantic role labeling ( srl ) is a kind of shallow semantic parsing task and its goal is to recognize some related phrases and assign a joint structure ( who did what to whom , when , where , why , how ) to each predicate of a sentence ( cite-p-24-3-4 ) .
semantic role labeling ( srl ) is the process of extracting simple event structures , i.e. , β€œ who ” did β€œ what ” to β€œ whom ” , β€œ when ” and β€œ where ” .
for implicit discourse relation recognition , previous works attempted to automatically generate training data by removing explicit discourse connectives from sentences .
due to the lack of benchmark data for implicit discourse relation analysis , earlier work used unlabeled data to generate synthetic implicit discourse data .
for english , we use the stanford parser for both pos tagging and cfg parsing .
we use the stanford parser for obtaining all syntactic information .
to get a dictionary of word embeddings , we use the word2vec tool 2 and train it on the chinese gigaword corpus .
for word-level embeddings , we pre-train the word vectors using word2vec on the gigaword corpus mentioned in section 4 , and the text of the training dataset .
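These pairs describe pre-training word embeddings with word2vec on a large corpus (Gigaword in the original). Below is a minimal gensim sketch; the corpus path and hyperparameters are illustrative assumptions, not those of the cited systems.

```python
# Minimal word2vec training sketch with gensim (gensim >= 4.0 API);
# "corpus.txt" is a placeholder for a large tokenised corpus such as Gigaword.
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

sentences = LineSentence("corpus.txt")        # one tokenised sentence per line
model = Word2Vec(
    sentences=sentences,
    vector_size=300,   # embedding dimensionality
    window=5,
    min_count=5,
    sg=1,              # skip-gram
    workers=4,
)
model.wv.save_word2vec_format("embeddings.txt")  # word-per-line text format
print(model.wv.most_similar("china", topn=5))
```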
using the deep learning framework caffe , we extracted image embeddings from a deep convolutional neural network that was trained on the imagenet classification task .
we extract the 4096-dimensional pre-softmax layer from a for-ward pass through a convolutional neural network , which has been pretrained on the imagenet classification task using caffe .
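These pairs describe extracting 4096-dimensional pre-softmax image features from an ImageNet-pretrained CNN with Caffe. The sketch below uses torchvision's VGG-16 to extract the analogous fc7 activations; it is a stand-in for the Caffe pipeline, and the image path is a placeholder.

```python
# Sketch of extracting 4096-d pre-softmax (fc7) image features from an
# ImageNet-pretrained VGG-16 with torchvision (a stand-in for the Caffe
# pipeline described above); "image.jpg" is a placeholder path.
import torch
from PIL import Image
from torchvision import models, transforms

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
vgg.eval()
# Keep everything up to (and including) the second fully connected layer.
feature_head = torch.nn.Sequential(*list(vgg.classifier.children())[:-1])

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("image.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    conv = vgg.features(img)
    conv = vgg.avgpool(conv).flatten(1)
    embedding = feature_head(conv)          # shape: (1, 4096)
print(embedding.shape)
```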
park and levy proposed a language-modeling approach to whole sentence error correction but their model is not competitive with individually trained models .
park and levy proposed an em-based unsupervised approach to perform whole sentence grammar correction , but the types of errors must be predetermined to learn the parameters for their noisy channel model .
the dimensionality of our word embedding layer was set to size 300 , and we use publicly available pre-trained glove word embeddings that we finetune during training .
since it is operated on the word level , we use pre-trained 300-dimensional glove embeddings and keep them fixed during training .
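These pairs describe initialising an embedding layer with pre-trained 300-dimensional GloVe vectors that are either fine-tuned or kept fixed during training. A minimal PyTorch sketch of both options follows; the random weight matrix is a placeholder for real GloVe vectors.

```python
# Minimal sketch: a PyTorch embedding layer initialised with pre-trained
# GloVe vectors, either frozen or fine-tuned (the weight matrix here is a
# random placeholder for real GloVe vectors).
import torch
import torch.nn as nn

vocab_size, emb_dim = 10000, 300
pretrained = torch.randn(vocab_size, emb_dim)   # placeholder for GloVe weights

# freeze=True keeps the vectors fixed during training (second pair above);
# freeze=False lets them be fine-tuned with the rest of the model (first pair).
frozen_emb = nn.Embedding.from_pretrained(pretrained, freeze=True)
tuned_emb = nn.Embedding.from_pretrained(pretrained.clone(), freeze=False)

token_ids = torch.tensor([[1, 5, 42]])
print(frozen_emb(token_ids).shape)              # (1, 3, 300)
```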
a pun is a form of wordplay in which a word suggests two or more meanings by exploiting polysemy , homonymy , or phonological similarity to another word , for an intended humorous or rhetorical effect ( cite-p-15-3-1 ) .
a pun is a form of wordplay in which one signifier ( e.g. , a word or phrase ) suggests two or more meanings by exploiting polysemy , or phonological similarity to another signifier , for an intended humorous or rhetorical effect .
we use the skll and scikit-learn toolkits .
we used the svm implementation of scikit learn .
segmentation is the first step in a discourse parser , a system that constructs discourse trees from elementary discourse units .
segmentation is the task of dividing a stream of data ( text or other media ) into coherent units .
the probabilistic language model is constructed on google web 1t 5-gram corpus by using the srilm toolkit .
all feature models are estimated in the in-domain corpus with standard techniques .
to test whether a performance difference is statistically significant , we conduct significance tests following the paired bootstrap approach .
to compare the relative quality of different metrics , we apply bootstrapping re-sampling on the data , and then use paired t-test to determine the statistical significance of the correlation differences .
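These pairs describe paired bootstrap resampling for significance testing between systems. Below is a minimal sketch that, given per-sentence scores for two systems on the same test set, estimates how often system A beats system B on resampled test sets; the score arrays are placeholders.

```python
# Minimal paired-bootstrap significance sketch: given per-sentence scores for
# two systems on the same test set, estimate how often system A outperforms
# system B on resampled test sets (the scores below are placeholders).
import numpy as np

def paired_bootstrap(scores_a, scores_b, n_resamples=1000, seed=0):
    rng = np.random.default_rng(seed)
    scores_a, scores_b = np.asarray(scores_a), np.asarray(scores_b)
    n = len(scores_a)
    wins = 0
    for _ in range(n_resamples):
        idx = rng.integers(0, n, size=n)        # resample sentences with replacement
        if scores_a[idx].mean() > scores_b[idx].mean():
            wins += 1
    return wins / n_resamples                   # fraction of resamples where A > B

sys_a = [0.31, 0.28, 0.35, 0.30, 0.33]
sys_b = [0.29, 0.27, 0.34, 0.28, 0.30]
print(f"A better than B in {paired_bootstrap(sys_a, sys_b):.1%} of resamples")
```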
previous methods have used the first or second order co-occurrence , parts of speech , and local collocations .
previous methods have used first or second order co-occurrences , parts of speech , and grammatical relations .
by an unsupervised one , we may raise the question as to whether the end of supervised nlp is in sight .
we will raise the question whether the end of supervised parsing is in sight .
we used the 300-dimensional glove word embeddings learned from 840 billion tokens in the web crawl data , as general word embeddings .
we use the pre-trained glove 50-dimensional word embeddings to represent words found in the glove dataset .
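These pairs mention using pre-trained GloVe vectors (300-dimensional Common Crawl and 50-dimensional variants). A minimal sketch of loading a GloVe text file into a dictionary of numpy vectors follows; the file name is a placeholder.

```python
# Minimal sketch: loading pre-trained GloVe vectors from their plain-text
# format (one word followed by its floats per line); the file name is a placeholder.
import numpy as np

def load_glove(path):
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

glove = load_glove("glove.6B.50d.txt")           # placeholder file name
unk = np.zeros(50, dtype=np.float32)             # fallback for OOV words
sentence = ["the", "cat", "sat"]
embedded = np.stack([glove.get(w, unk) for w in sentence])
print(embedded.shape)                            # (3, 50)
```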
in this proposal , we propose a corpus based study to examine doctor-patient conversation of antibiotic treatment negotiation .
in this proposal , we propose a corpus-based study of doctor-patient conversations of antibiotic treatment negotiation in pediatric consultations .
our model is effective in dealing with negation phrases ( a typical case of sentiment expressed by sequence ) .
an interesting case study on negation expression processing shows a promising potential of the architecture dealing with complex sentiment phrases .
sentiment analysis is a recent attempt to deal with evaluative aspects of text .
sentiment analysis is the process of identifying and extracting subjective information using natural language processing ( nlp ) .
tiedemann propose cache-based language and translation models , which are built on recently translated sentences .
tiedemann proposed a cache-model to enforce consistent translation of phrases across the document .
as well as temporal constraints , we are able to extract text-to-text .
we are able to achieve significantly better results than with a text-to-words wtmf model .
this paper has described an unsupervised method for inducing semantic frames from instances of each verb .
this paper presents a method for automatically building verb-specific semantic frames from a large raw corpus .
the results of automatic evaluation and manual assessment of title quality show that the output of our system is consistently ranked higher than that of non-hierarchical baselines .
the results of automatic evaluation and manual assessment confirm the benefits of this design : our system is consistently ranked higher than non-hierarchical baselines .
we parse each document using stanford corenlp in order to acquire dependency , named entity , and coreference resolution features .
for these data , we preprocess the text including using stanford corenlp to split the review documents into sentences and tokenizing all words .
in addition , we use an english corpus of roughly 227 million words to build a target-side 5-gram language model with srilm in combination with kenlm .
to rerank the candidate texts , we used a 5-gram language model trained on the europarl corpus using kenlm .
more recently , watanabe et al and chiang et al presented a learning algorithm using the mira technique .
recently , watanabe et al and chiang et al have developed tuning methods using the mira algorithm as a nucleus .
in this paper , we propose an approach for identifying curatable articles .
in this paper , we demonstrate how our system is constructed .
the log-lineal combination weights were optimized using mert .
the decoding weights were optimized with minimum error rate training .
and the task includes resolving not just a certain type of noun phrase ( e.g. , pronouns ) .
the approach learns from a small , annotated corpus and the task includes resolving not just pronouns but general noun phrases .
keyphrases are useful for a variety of tasks such as summarization , information retrieval and document clustering .
keyphrases are useful in many tasks such as information retrieval , document summarization or document clustering .
mada-arz is an egyptian arabic extension of the morphological analysis and disambiguation of arabic tool .
mada-arz is an egyptian arabic extension of the morphological analysis and disambiguation of arabic .
on data from the business language testing service ( bulats ) , the proposed approach is found to outperform gps and dnns with mcd in uncertainty-based rejection .
this method outperforms gps and monte-carlo dropout in uncertainty based rejection for automatic assessment .
it includes a top-level ontology developed following the procedure outlined by russell and norvig and originally covered the tourism domain encoding knowledge about sights , historical persons and buildings .
it includes a top-level ontology developed following the procedure outlined by russell and norvig and originally covered the tourism domain encoding knowledge about sights , historical persons and buildings .
part-of-speech tagging is the assignment of syntactic categories ( tags ) to words that occur in the processed text .
part-of-speech tagging is a key process for various tasks such as information extraction , text-to-speech synthesis , word sense disambiguation and machine translation .
for all experiments , we used a 4-gram language model with modified kneser-ney smoothing which was trained with the srilm toolkit .
for the language model , we used the sri language modeling toolkit to train a trigram model with modified kneser-ney smoothing on the 31,149 english sentences .
the level of the agreement was then assessed using the kappa statistic .
the interannotator agreement was measured using the kappa statistic .
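These pairs describe measuring inter-annotator agreement with the kappa statistic. Below is a minimal sketch computing Cohen's kappa with scikit-learn on two annotators' labels; the toy annotations are placeholders.

```python
# Minimal sketch: inter-annotator agreement with Cohen's kappa, computed with
# scikit-learn on two annotators' labels for the same items (toy data below).
from sklearn.metrics import cohen_kappa_score

annotator_1 = ["pos", "neg", "neu", "pos", "neg", "pos"]
annotator_2 = ["pos", "neg", "pos", "pos", "neg", "neu"]

kappa = cohen_kappa_score(annotator_1, annotator_2)
print(f"kappa = {kappa:.3f}")   # 1.0 = perfect agreement, 0 = chance level
```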
negation is a linguistic phenomenon that can alter the meaning of a textual segment .
negation is a complex phenomenon present in all human languages , allowing for the uniquely human capacities of denial , contradiction , misrepresentation , lying , and irony ( horn and wansing , 2015 ) .
the wizard could also tell when the query results did not contain the requested title .
the most successful wizard could also tell when the query results did not contain the requested title .
with an empirical evaluation on the penn discourse treebank ( pdtb ) ( cite-p-11-3-7 ) dataset , which yields an f 1 score of 0.485 .
our approach achieves an f 1 score of 0.485 on the implicit relation labeling task for the penn discourse treebank .
importantly , this type of evaluation can measure how well a structural theory of text can perform .
importantly , this type of evaluation can measure how well a structural theory of text can perform ( fourth darpa speech and natural language workshop ) .
the tilde system is a moses phrase-based smt system that was trained on the tilde mt platform .
the smt system is a standard phrase-based system that was trained on the tilde mt platform with moses .
we implemented all models in python using the pytorch deep learning library .
we used pytorch to implement the embedding model and gurobi as our ilp solver .
the berkeley framenet project is an ongoing effort of building a semantic lexicon for english based on the theory of frame semantics .
the berkeley framenet is an ongoing project for building a large lexical resource for english with expert annotations based on frame semantics .
jacy is a hand-crafted japanese hpsg grammar that provides semantic information as well as linguistically motivated analysis of complex constructions .
jacy is a type of hand-crafted japanese grammar based on hpsg that can compute a detailed semantic representation .
the penn discourse treebank corpus is the best-known resource for obtaining english connectives .
the penn discourse treebank is the largest available discourseannotated resource in english .
they used stanford parser to create the parse trees for all sentences .
the stanford parser was used to generate the dependency parse information for each sentence .
n-gram language models are trained over the target-side of the training data , using srilm with modified kneser-ney discounting .
a 5-gram language model with kneser-ney smoothing is trained using srilm on the target language .
vector-based word representation has a powerful capability that captures the phenomenon that words having similar meanings should appear together .
the word embeddings can provide word vector representation that captures semantic and syntactic information of words .
the annotation scheme leans on the universal stanford dependencies complemented with the google universal pos tagset and the interset interlingua for morphological tagsets .
the annotation scheme is based on an evolution of stanford dependencies , google universal part-ofspeech tags , and the interset interlingua for morphosyntactic tagsets .
to train our reranking models we used svm-light-tk 7 , which encodes structural kernels in svmlight solver .
to train our models , we adopted svm-light-tk 7 , which enables the use of structural kernels in svm-light , with default parameters .
collobert et al adjust the feature embeddings according to the specific task in a deep neural network architecture .
collobert et al apply generic neural network architectures to several sequence labelling tasks and obtain competitive results despite of the task-specific variations .
klementiev et al treat the task as a multi-task learning problem where each task corresponds to a single word , and task relatedness is derived from co-occurrence statistics in bilingual parallel data .
klementiev et al treated the task as a multi-task learning problem where each task corresponds to a single word , and the task relatedness is derived from cooccurrence statistics in bilingual parallel corpora .
later , miwa and bansal have implemented an end-to-end neural network to construct a context representation for joint entity and relation extraction .
miwa and bansal adopted a bidirectional tree lstm model to jointly extract named entities and relations under a dependency tree structure .
word sense disambiguation ( wsd ) is a difficult natural language processing task which requires that for every content word ( noun , adjective , verb or adverb ) the appropriate meaning is automatically selected from the available sense inventory 1 .
word sense disambiguation ( wsd ) is a widely studied task in natural language processing : given a word and its context , assign the correct sense of the word based on a predefined sense inventory ( cite-p-15-3-4 ) .
for language models , we use the srilm linear interpolation feature .
in the case of the trigram model , we expand the lattice with the aid of the srilm toolkit .
we successfully apply the attention scheme to detect word senses and learn representations according to contexts with the favor of the sememe annotation .
we also analyze several cases in wsd and wrl , which confirms our models are capable of selecting appropriate word senses with the favor of sememe attention .
word sense disambiguation is the task of determining the particular sense of a word from a given set of pre-defined senses .
word sense disambiguation is the task of computationally determining the meaning of a word in its context .
our base model is a transition-based neural parser of chen and manning .
our parsing model is built based on the work of chen and manning .
we use the pre-trained glove 50-dimensional word embeddings to represent words found in the glove dataset .
as input to the aforementioned model , we are going to use dense representations , and more specifically pre-trained word embeddings , such as glove .
we used the meetings from the icsi meeting data , which are recordings of naturally occurring meetings .
we used the icsi meeting corpus , which contains naturally occurring meetings , each about an hour long .
at a large scope , they facilitate tremendously the costly but unavoidable process of semi-automatic lexical acquisition .
nevertheless , large-scope lrs are justified because they facilitate the unavoidable process of large-scale semi-automatic lexical acquisition .
semantic role labeling ( srl ) is the task of labeling predicate-argument structure in sentences with shallow semantic information .
semantic role labeling ( srl ) is the process of extracting simple event structures , i.e. , β€œ who ” did β€œ what ” to β€œ whom ” , β€œ when ” and β€œ where ” .
we trained a trigram language model on the chinese side , with the srilm toolkit , using the modified kneser-ney smoothing option .
we used srilm to build a 4-gram language model with interpolated kneser-ney discounting .
our system is based on the phrase-based part of the statistical machine translation system moses .
our translation system is an in-house phrasebased system analogous to moses .
the discursive relations used in this work came from the penn discourse treebank .
they had shown that the penn discourse treebank style discourse relations are useful .
we employed the machine learning tool of scikit-learn 3 , for training the classifier .
for the feature-based system we used logistic regression classifier from the scikit-learn library .
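These pairs describe training classifiers with scikit-learn, including a logistic regression model over hand-crafted features. A minimal sketch with dictionary features vectorised via DictVectorizer follows; the feature names and labels are illustrative assumptions.

```python
# Minimal sketch of a feature-based logistic regression classifier with
# scikit-learn, using hand-crafted feature dicts (the features are illustrative).
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

features = [
    {"len": 4, "has_negation": False, "first_pos": "DT"},
    {"len": 7, "has_negation": True, "first_pos": "PRP"},
    {"len": 5, "has_negation": False, "first_pos": "NN"},
]
labels = ["pos", "neg", "pos"]

clf = make_pipeline(DictVectorizer(sparse=True), LogisticRegression(max_iter=1000))
clf.fit(features, labels)
print(clf.predict([{"len": 6, "has_negation": True, "first_pos": "PRP"}]))
```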
the smt systems were built using the moses toolkit .
we use the popular moses toolkit to build the smt system .
socher et al , 2012 , use a recursive neural network in relation extraction , and further use lstms .
socher et al train a composition function using a neural network-however their method requires annotated data .