id (stringlengths 7–12) | sentence1 (stringlengths 6–1.27k) | sentence2 (stringlengths 6–926) | label (stringclasses, 4 values)
---|---|---|---|
train_93400 | We now evaluate the efficiency and accuracy of SBTDM. | the generative model is: The variable φ represents the distribution of topics given words, P (Z|W ). | neutral |
train_93401 | Training random field models is challenging due to the numerical intractability of the normalizing constants Z_l(λ) and expectations p_{λ,l}[f]. | the TDRF using the "w+c+ws+cs+cpw" features with class number 200 performs comparably to the RNNLM in both perplexity and WER. | neutral |
train_93402 | With the experimental setup mentioned before, we want to evaluate performance of this property of our model. | first, n k,d is often a sparse vector, as a document most likely contains only a few of the topics. | neutral |
train_93403 | We propose a faster inference technique using Cholesky decomposition of covariance matrices which can be applied to both the Gibbs and variational/EM method. | LDA (Blei et al., 2003) is a probabilistic topic model of corpora of documents which seeks to represent the underlying thematic structure of the document collection. | neutral |
train_93404 | Hence, ranking-based evaluations, where judges are asked to rank the output of 2 to 5 systems, have been used in recent years, which has yielded much higher inter-annotator agreement (Callison-Burch et al., 2007). | finally, we have presented evidence suggesting that using the pairwise hidden layers is advantageous over simpler flat models. | neutral |
train_93405 | These compact representations are in turn based on word and sentence embeddings, which are learned using neural networks. | network parameters: We train our neural network using SGD with adagrad, an initial learning rate of η = 0.01, mini-batches of size 30, and L 2 regularization with a decay parameter λ = 1e −4 . | neutral |
train_93406 | They obtained promising results using syntactic and discourse-based structures. | datasets: We train our neural models on WMT11 and we evaluate them on WMT12. | neutral |
train_93407 | , t_n): (a) We only consider source phrases p of length at most 10 (i.e., i′ − i < 10 for p = [i, i′]). | in this manner, we combine the advantages of the hierarchical phrase-based approach on the source side and the tree-based approach with discontinuity on the target side. | neutral |
train_93408 | Syntax-based systems have become widely used because of their ability to handle non-local reordering and other linguistic phenomena better than phrase-based models (Och and Ney, 2004). | next, we lift the notion of consistently aligned phrase pairs to our rule spans. | neutral |
train_93409 | We use the decoder provided by MBOT-Moses of Braune et al. | such fixed phrases will often be assembled inconsistently by substitution from small fragments. | neutral |
train_93410 | Our method also trains its parameters without any pre-training or post-training procedure. | experimental results show that with the basic features of a hierarchical phrase-based machine translation system, our method produces translations that are better than a linear model. | neutral |
train_93411 | Similarly, the probability of a plaintext word e j taking a value l given samples for all other plaintext words, #(e j−1 ) −j + e α e,e j−1 . | we observe consistent improvement throughout the experiment. | neutral |
train_93412 | As shown in Figure 1, we start a few sampling processes each with a different random sample. | unlike our learning setting, their approach relied on large amounts of translation pairs learned from parallel data to train their linear transformations. | neutral |
train_93413 | Enabled if there is a parent and we did not just come down from it: • DOWN_j: move to the child j of the current vertex. | we represent these problems as sequence prediction machine learning tasks, which we address using recurrent neural networks. | neutral |
train_93414 | We consider a number of methods to map the natural language description of a problem into its formal program representation. | our levels of agreement are likely to be greater than suggested by measures in the table. | neutral |
train_93415 | Starting with some initial classifiers and a training set of NL and AST pairs, we search for the most likely derivation. | the function used to score derivations is a simple matching heuristic relying on the overlap between query terms and program identifiers. | neutral |
train_93416 | Thirdly, we demonstrate experimentally, in Section 5, that multiple many-to-many alignments may be an extremely useful first step in boosting the performance of a G2P model. | clearly, a system with more information should not perform worse than a system with less information (unless the additional information is highly noisy), but it is a priori not clear at all how the extra information can be included, as Bhargava and Kondrak (2012) note: output predictions may be in distinct alphabets and/or follow different conventions, and simple rule-based conversions may even deteriorate a baseline system's performance. | neutral |
train_93417 | For example, we convert the word tomorrow into to-mor-row. | but due to the lack of prior knowledge, we cannot judge an optimal λ. | neutral |
train_93418 | Dictionary lookup of Internet slang 2 is performed to filter those ill-OOV words whose correct forms are not single words. | the impact of normalization or NSW detection on NER has not been well studied in social media domain. | neutral |
train_93419 | Similarly, during decoding in NSW detection, we need the Basic Features 1. | most of previous work on normalization assumed that they already knew which tokens are NSW that need normalization. | neutral |
train_93420 | are paraphrases in this corpus. | this filtering can be done efficiently and yields a manageable number of quadruples on which to compute K_k. | neutral |
train_93421 | The testing set contains 1517 pairs for 89 questions. | it can be very useful in a rewriting rule to type a wildcard link with the relation holonym, as this provides constrained semantic roles to the linked wildcards in the rule, thus holonym would be a good variable type. | neutral |
train_93422 | The pairs in both datasets were then rated for their plausibility by 27 human subjects, and their judgements were aggregated into a gold standard. | the dataset consists of 52 verb metaphors and their human-produced literal paraphrases. | neutral |
train_93423 | In the future, it would be interesting to derive the information about predicate-argument relations from low-level visual features directly. | in addition, the image features tend to be sparse for abstract concepts, reducing both the quality and the coverage of abstract clusters. | neutral |
train_93424 | For instance, they can be generalised to acquire SPs from unbalanced corpora of different sizes (e.g. | some of them are still figuratively used. | neutral |
train_93425 | In 1.4 beta, most of adjunct usages of "ni" are mixed up with the argument usages of "ni", making the identification of dative cases seemingly easy. | (2009) is the best for the NOM and ACC cases, and Sasano and Kurohashi (2011) the best for the DAT cases. | neutral |
train_93426 | Supervision Unlike many recent composition models (Kalchbrenner and Blunsom, 2013;Kalchbrenner et al., 2014;Socher et al., 2012;Socher et al., 2013, among others), the context-prediction objective of C-PHRASE does not require annotated data, and it is meant to provide generalpurpose representations that can serve in different tasks. | we use syntax to determine the target units that we build representations for (in the sense that we jointly learn representations of their constituents). | neutral |
train_93427 | The most salient of these are semantic role labels, such as the ARG0 and destination arcs in Figure 2. | a standard semantics and annotation guideline for AMR alignment is left for future work; our accuracy should be considered only an informal metric. | neutral |
train_93428 | We consider two metrics, IED and END, which measure accuracy based on the action sequence and environment, respectively. | the robot is also an object in the environment. | neutral |
train_93429 | We use a planner and a simulator that together specify a deterministic mapping from the current environment e_i and a logical form z_i to a new environment e_{i+1}. | to previous work, we use postconditions instead of action sequences for two main reasons. | neutral |
train_93430 | Since the dependencies between constituents can be exponential in number and representing structures in learning algorithms is rather challenging, automatic feature engineering through kernel methods (Shawe-Taylor and Cristianini, 2004; Moschitti, 2006) can be a promising direction. | we used default parameters both for PTK and SPTK, whereas we selected the h and D parameters of NSPDK that obtained the best average accuracy using a 5-fold cross validation on the training set. | neutral |
train_93431 | (specified after the ± sign) shows that in most cases the system differences are significant. | with the previous kernels these similarities are computed intra-pair, e.g., between a 1 and a 2 . | neutral |
train_93432 | In user-sentiment assumption test, we use absolute rating difference ||rating a − rating b || as the measurement between two reviews a and b. | these models only use semantics of texts, while ignoring users who express the sentiment and products which are evaluated, both of which have great influences on interpreting the sentiment of text. | neutral |
train_93433 | Furthermore, given that (.41, 0, .59), (.40, 0, .60), and (.45, 0, .55) give virtually identical information to a sentiment analyst, it seems unreasonable to expect exactly one to be the correct polarity tag for reliable and the other two incorrect. | over all of our experiments, the resulting systems of constraints can be as small as 2 constraints with 2 variables and as large as 3,330 constraints with 4,946 variables (Table 5: Micro-WNop / SWD inconsistencies). | neutral |
train_93434 | This would imply, e.g., that if a sentence in the progressive subreddit conveys an ostensibly positive sentiment about the political commentator 'Ollie', 4 then this sentence is likely to have been intended ironically. | we use the Stanford Sentiment Analysis tool (Socher et al., 2013) to infer sentiment. | neutral |
train_93435 | Automatically classifying instances with multiple possible categories is sometimes much more difficult than classifying instances with a single label. | empirical evaluation shows that our DFG approach performs significantly better than the state-of-the-art. | neutral |
train_93436 | 2) Context dependency: Two instances from the same context are more likely to share the same emotion label than those from a random selection. | this approach first utilizes a Bayesian network to infer the relationship among the labels and then employ them in the classifier. | neutral |
train_93437 | Wan (2009) proposed a co-training approach to address the cross-lingual sentiment classification problem. | we propose a new bootstrapping mechanism, based on a principle called dual-view sentiment consensus. | neutral |
train_93438 | This parsing phase is grounded on a set of patterns (see Figure 3). | this first and pioneering version of the system shows encouraging results for the different tasks performed by the system that concern the detection of relevant like/dislike expressions (substantial agreement with a Fleiss kappa at 0.61), the categorization of the expressions between like and dislike (almost perfect agreement with a Fleiss kappa at 0.84) -polarity assignment -and the identification of the target type (53% of agreement between the reference and the system output). | neutral |
train_93439 | This first and pioneering version of the system shows encouraging results for the different tasks performed by the system that concern the detection of relevant like/dislike expressions (substantial agreement with a Fleiss kappa at 0.61), the categorization of the expressions between like and dislike (almost perfect agreement with a Fleiss kappa at 0.84) - polarity assignment - and the identification of the target type (53% of agreement between the reference and the system output). | the system defines relevantAttExpr(usrSentence) == True, even if any sentence matching with a relevant pattern has been found in the user's sentence. | neutral |
train_93440 | These words are not represented in E. One way to deal with this case, is to simply set the embeddings of unknown words to zero. | this was the top system in the 2013 edition of SemEval. | neutral |
train_93441 | Labeled data is, however, expensive to obtain, while unlabeled data is widely available. | finally, Section 7 draws the conclusions. | neutral |
train_93442 | The structured skip-gram models the following probability: Here, w_i ∈ {0, 1}^{v×1} is a one-hot representation of w = i. | this simple method brings two fundamental advantages. | neutral |
train_93443 | We also include two other approaches that are related to the one here proposed, where a neural network initialized with pre-trained word embeddings is used to learn relevant features. | for a given task, only a subset of all the latent aspects captured by the word embeddings will be useful. | neutral |
train_93444 | There are 5000, 2000 and 3000 word types. | it is possible to derive word representations by exploiting word co-occurrence patterns in large samples of unlabeled text. | neutral |
train_93445 | Then, the resulting network is searched to select the best scoring word at each node. | table 4 reports the WER results obtained on tst2013 by ROVER methods fed with: different numbers of hypotheses (from 3 to 8), at different granularity levels (whole utterance vs. segment), ranked with different models (random, RR1, RR2 and MLR) trained with different sets of features (Basic, WordBased, Basic+WordBased) (RankLib: http://sourceforge.net/p/lemur/wiki/RankLib/). | neutral |
train_93446 | The decrease of precision is misleading, though, due to the small number of occurrences it has been computed on. | asking the parser to perform tokenization will not always solve the problem. | neutral |
train_93447 | For a given time step (step 2 as an example), the argument and predicate are specified with different colors. | a smaller d_s means that it is easy to make the prediction that a long history is unnecessary. | neutral |
train_93448 | The above four features are concatenated to be the input representation at this time step for the following LSTM layers. | the traditional feature templates are only good at describing the properties in a neighborhood, and a small mistake in the syntactic tree will result in a large deviation in SRL tagging. | neutral |
train_93449 | Third, we utilize the Dropout strategy to address the overfitting prob-lem. | for example, in figure 1, for the sentence "the assets are sold", our parser can construct the parse tree by performing the action sequence {sh-DT, sh-NNS, rr-NP, sh-VBP, sh-VBN, ru-VP, rr-VP, rr-S}. | neutral |
train_93450 | If the feature set is too small, it might underfit the model and lead to low performance. | table 4 presents results of different experimental configurations for English. | neutral |
train_93451 | In order to jointly assign POS tags and construct a constituent structure for an input sentence, we define the following actions for the action set T , following Wang and Xue (2014): • SHIFT-X (sh-x): remove the first word from β, assign a POS tag X to the word and push it onto the top of σ; • REDUCE-UNARY-X (ru-x): pop the top subtree from σ, construct a new unary node labeled with X for the subtree, then push the new subtree back onto σ. | to achieve competitive performance, they had to combine the learned features with the traditional hand-crafted features. | neutral |
train_93452 | (2014) describe several uses for arclevel constraints in transition-based parsing. | alternatively, more punctuationspecific features to account for its myriad roles in syntax could serve to improve performance. | neutral |
train_93453 | Unlike a standard RNN, there are no nonterminal nodes in a dependency tree. | 1) They first summed up all child nodes into a dense vector v_c and then composed the subtree representation from v_c and the vector of the parent node. | neutral |
train_93454 | A set of agendas B = B_0, B_1, ... maintains the k-best states for each step j at B_j, which is first initialized by inserting the axiom into B_0. | enriched models incur numerous parameters and sparsity issues, and are insufficient for capturing various syntactic phenomena. | neutral |
train_93455 | Parameter estimation is performed in parallel by distributing training instances asynchronously in each shard and by updating locally copied parameters using the sub-gradients computed from the distributed mini-batches (Dean et al., 2012). | our parser differs in that we do not differentiate left or right head words. | neutral |
train_93456 | We generate the top 10 best candidate parse trees using 10 fold cross validation for each sentence in the training data. | in the future, we would like to extend our technique to other real valued kernels such as the string kernels and tagging kernels. | neutral |
train_93457 | This is a chain rule, which means that any feature that has f_j as its component can also be pruned safely. | the objective function for learning L_1-norm SVMs is: where is the hinge loss function for the i-th sample. | neutral |
train_93458 | We found that SRL features, both in isolation and together with standard syntactic features, improve parsing performance, both when measured using full-sentence F-score, and in terms of incremental F-score. | central to the discriminative approach is the exploration of features that cannot be straightforwardly embedded into the parser using a dynamic program. | neutral |
train_93459 | Neural probabilistic parsers are attractive for their capability of automatic feature combination and small data sizes. | our integrated approximated search and learning framework allows rich global features. | neutral |
train_93460 | Treebanks are key resources for developing accurate statistical parsers. | constituency trees were converted to basic non-collapsed dependency trees using Stanford Dependencies (De Marneffe et al., 2006). | neutral |
train_93461 | To remedy this problem, we use a classifier (specifically logistic regression) to determine whether a modified tree should be used. | building treebanks is expensive and timeconsuming for humans. | neutral |
train_93462 | In that way, the dictionary A can be readily obtained either using bilingual lexicon induction approaches (Koehn and Knight, 2002;Mann and Yarowsky, 2001;Haghighi et al., 2008), or from resources like Wiktionary 5 and Panlex. | to verify this, we further conduct experiments under both settings using the PROJ+Cluster model. | neutral |
train_93463 | (2012), which assigns a target word to the cluster with which it is most often aligned: This method also has the drawback that words that do not occur in the alignment dictionary (OOV) cannot be assigned a cluster. | experiments show that by incorporating lexical features, the performance of cross-lingual dependency parsing can be improved significantly. | neutral |
train_93464 | That said, these methods have the advantage that they are capable of capturing some language-specific syntactic patterns which our approach cannot. | let {.., N_S} be the alignment dictionary, where c_{i,j} is the number of times the i-th target word w^T_i is aligned to the j-th source word w^S_j. | neutral |
train_93465 | In general, we expect our bilingual word embeddings to preserve translational equivalences. | in order to improve the robustness of projection, we utilize a morphology-inspired mechanism, to propagate embeddings from in-vocabulary words to out-ofvocabulary (OOV) words. | neutral |
train_93466 | In contrast, we are interested in extracting discourse relations with minimal additional annotation, relying primarily on the available question-answer pairs. | "Sally, Sally, come home", Sally's mom calls out. | neutral |
train_93467 | The non-generic sentence (1b) roughly speaking provides ABox content for a machine-readable knowledge base, i.e., knowledge about particular instances, e.g, "A is an instance of B / has property X". | the major contributions of this work include the study of genericity both on the NP-and clauselevel, and the study of the interaction of these two levels. | neutral |
train_93468 | As shown in Theorem 4.1, we can take the element-wise power transformation on counts (such as the power of 1, 2/3, 1/2 in this template) while preserving the representational meaning of word embeddings under the Brown model interpretation. | for our setting, the analogous weighted least squares optimization is: where 2 . | neutral |
train_93469 | The CCA scaling, combined with the square-root transformation, gives the best overall performance. | table 3 shows the result for both 500 and 1000 dimensions. | neutral |
train_93470 | In word similarity, spectral methods generally excel, with CCA consistently performing the best. | it has been found by many (including ourselves) that setting β = 1 yields substantially worse representations than setting β ∈ {0, 0.5} (Levy et al., 2015). | neutral |
train_93471 | Though recent knowledge graph embeddings (Lin et al., 2015; Wang et al., 2014) integrate the relational structure among entities, they primarily target link prediction and lack an explicit relatedness measure. | we use the Wikipedia snapshot from Jan 12th, 2015 as our training data and KB. | neutral |
train_93472 | Though the shortest path can be selected, it ignores other related category nodes and loses rich information. | the comparison between Ours and Ours-NoH further reveals the effect of integrating the hierarchy in learning entity vectors. | neutral |
train_93473 | Finally, we want to incorporate all these elements in a single model, with the morphological and word order elements of the model working in harmony. | in the case of these original models and also the CBOM model, we follow Mikolov et al. | neutral |
train_93474 | 's (2013b) method for making the word-analogy predictions in terms of addition and subtraction: smaller ≈ bigger − big + small. | recent work by Mikolov et al. | neutral |
train_93475 | Our semantic parsing model defines a distribution over logical forms given by the domaingeneral grammar G and additional rules triggered by the input utterance x. | ... (1) by builder (∼30 minutes) (2) via domain-general grammar (3) via crowdsourcing (∼5 hours) (4) by training a paraphrasing model Figure 1: Functionality-driven process for building semantic parsers. | neutral |
train_93476 | All types (e.g., person) have the syntactic category TYPENP, and all entities (e.g., Figure 2: Deriving a logical form z (red) and a canonical utterance c (green) from the grammar G. Each node contains a syntactic category and a logical form, which is generated by applying a rule. | our compact grammar precisely specifies the logical functionality. | neutral |
train_93477 | Such a feature is widely used because it's simple and surprisingly efficient in many tasks. | the fact that recurrent models outperform RAE indicates that task-specific composition and representation learning with less syntactic information lead to a better result. | neutral |
train_93478 | The purpose of this method is to investigate whether there are patterns of the price movement in the history of the stock. | continuous Dirichlet Process Mixture (cDPM) model was used to learn the daily topic set of Twitter messages to predict the stock market (Si et al., 2013). | neutral |
train_93479 | First, a rule-based algorithm is applied to identify the category of each word in the documents. | one important missing thing is that opinions or sentiments are expressed on topics or aspects of companies. | neutral |
train_93480 | The value of k depends on the size of the training set and the occurrences of each tag. | for instance, as shown in figure 2, the vector of 'is very interesting' can be composed from the vector of the left node 'is' and that of the right node 'very interesting'. | neutral |
train_93481 | From now on, y is a vector representing the dependency trees corresponding to the whole corpus. | at each iteration t, the convex function f is approximated by a linear function defined by its gradient at the current point z t . | neutral |
train_93482 | We propose to use a feature-rich discriminative parser, and to learn the parameters of this parser using a convex quadratic objective function. | we would then have to use a higher-order parser, such as the ones described by McDonald and Pereira (2006) and Koo and Collins (2010). | neutral |
train_93483 | Overall, our approach outperforms the three baselines, with an absolute improvement of 13 points over the extended valence grammar with posterior sparsity and 8 points over the model with universal syntactic rules. | this might be inefficient since it does not use the structure of the polytope and, in particular, the fact that one can easily minimize a linear function over the tree polytope using the minimum weight spanning tree algorithm (Algorithm 1, Frank-Wolfe: for t ∈ {1, ..., T}, compute the gradient, solve the linear program, take the Frank-Wolfe step). | neutral |
train_93484 | Figure 1 specifies several dependencies: of is a dependent of director, executive vice president and director are conjuncts and and is the coordinator. | also, barks has a right child loudly; this generates a half bracket before V_R. | neutral |
train_93485 | In general, the lexical head of a derived category is determined by the (primary) functor, so that the lexical head of a category X or X|Z_1|...|Z_n that resulted from combining X|Y and Y or Y|Z_1|...|Z_n is identical to the lexical head of X. | we will focus on the English CCGbank but these details apply with only minor changes to German and Chinese as well. | neutral |
train_93486 | In contrast, our approach directly computes a cost for actions based on coreference evaluation metrics. | this formulates the agent's task so it only has two actions to choose from instead of a number of actions proportional to the number of clusters squared. | neutral |
train_93487 | We incorporate two different mention pair models into our system. | for each pair, we make a binary decision on whether or not the clusters containing these pairs should be merged. | neutral |
train_93488 | In this paper we introduce a novel coreference system that combines the advantages of mention pair and entity-centric systems with model stacking. | training the agent on the gold labels alone would unrealistically teach it to make decisions under the assumption that all previous decisions were correct, potentially causing it to over-rely on information from past actions. | neutral |
train_93489 | Our methodology, outlined as Algorithm 1, is inspired by the recent work of Ganchev and Das (2013) on cross-lingual learning of sequence models. | the third and eighth rows in table 3 show that this baseline is stronger than the delexicalized baseline, but still 6-8 points away from the supervised systems. | neutral |
train_93490 | Note that all systems predict the same candidate mentions; however a final post-processing discards all mentions that ended up in singleton entities, for compliance with the official scorer. | we propose cross-lingual coreference resolution as a way of transferring information from a rich-resource language to build coreference resolvers for languages with scarcer resources; as a testbed, we transfer from English to Spanish and to Brazilian Portuguese. | neutral |
train_93491 | Since HIPTM can no longer access the votes in the test data, its performance drops significantly compared with VOTE. | marginal counts are denoted by • and the superscript −b,m excludes the assignment for token w b,m from the corresponding count. | neutral |
train_93492 | 3 To improve topic interpretability, issue nodes have an informed prior from the Congressional Bills Project {φ k } (Table 1). | h3 captures the more strident Tea Party framing of Obamacare as an unconstitutional government takeover. | neutral |
train_93493 | In the more subtle task of validating each topic token (Group precision) we see a greater variance among the two labeler groups. | the model learns both the schema for a KB, and a set of facts that are related to that schema, thus combining the processes of KB population and ontology construction. | neutral |
train_93494 | Our sparse sampler runs even faster, and takes less than a second per iteration. | we have also observed that there is a point at which using additional processors actually slows running time. | neutral |
train_93495 | We use both lexicalized features, where all possible pairs (f (x), g(z)) form distinct features, and binary unlexicalized features indicating whether f (x) and g(z) have a string match. | example inputs and outputs are shown in Figure 1. | neutral |
train_93496 | More recent work sacrifices compositionality in favor of using more open-ended knowledge bases such as Freebase (Cai and Yates, 2013;Berant et al., 2013;Fader et al., 2014;Reddy et al., 2014). | we also anchor all numerical values (numbers, dates, percentages, etc.) | neutral |
train_93497 | As our language for logical forms, we use lambda dependency-based compositional semantics (Liang, 2013), or lambda DCS, which we briefly describe here. | even these broader knowledge sources still define a x1: "Greece held its last Summer Olympics in which year?" | neutral |
train_93498 | In graph parsing, we already know the identity of all nodes and edges in sub-s-graphs (as nodes and edges in SG), and must thus pay attention that merge operations do not accidentally fuse or duplicate them. | our grammars describe how to build graphs from smaller pieces. | neutral |
train_93499 | If f : A ⇀ B and g : A ⇀ B are partial functions, we let the partial function f ∪ g be defined if for all a ∈ A with both f(a) and g(a) defined, we have f(a) = g(a). | iRTGs extend naturally to a synchronous grammar formalism by adding more homomorphisms and algebras. | neutral |
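The rows above follow a simple pipe-delimited layout. A minimal parsing sketch, assuming fields are separated by `" | "` (a pipe with surrounding spaces) so that unspaced pipes inside formulas such as `P (Z|W )` are left intact; `parse_row` is a hypothetical helper name, not part of the dataset's tooling:

```python
def parse_row(line: str) -> dict:
    """Parse one table row of the form 'id | sentence1 | sentence2 | label |'.

    Splits on ' | ' (pipe with surrounding spaces) so that pipes embedded in
    math notation like 'P (Z|W )' do not break the field boundaries.
    """
    # Drop outer whitespace, then the trailing pipe that closes each row.
    body = line.strip().strip("|")
    fields = [f.strip() for f in body.split(" | ")]
    if len(fields) != 4:
        raise ValueError(f"expected 4 fields, got {len(fields)}: {line!r}")
    return dict(zip(("id", "sentence1", "sentence2", "label"), fields))


# Example using the first row of the table above.
row = ("train_93400 | We now evaluate the efficiency and accuracy of SBTDM. | "
       "the generative model is: The variable φ represents the distribution "
       "of topics given words, P (Z|W ). | neutral |")
record = parse_row(row)
```

Splitting on the spaced delimiter rather than a bare `"|"` is the key design choice here: several sentences in this dump contain unspaced pipes in CCG categories (`X|Y`) and probabilities (`P (Z|W )`), which a naive split would shred into extra fields.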