id (string, 7-12 chars) | sentence1 (string, 6-1.27k chars) | sentence2 (string, 6-926 chars) | label (4 classes) |
---|---|---|---|
train_1500 | In this paper, we model the union of cross-lingual links provided by all editions of Wikipedia as an undirected graph G = (V, E) with edge weights w(e) for e ∈ E. In our experiments, we simply honour each individual link equally by defining w(e) = 2 if there are reciprocal links between the two pages, 1 if there is a single link, and 0 otherwise. | our framework is flexible enough to deal with more advanced weighting schemes, e.g. | contrasting |
train_1501 | Active learning (AL) has been applied to SMT recently but they were interested in starting with a tiny seed set of data, and they stopped their investigations after only adding a relatively tiny amount of data as depicted in Figure 1. | we are interested in applying AL when a large amount of data already exists as is the case for many important language pairs. | contrasting |
train_1502 | Also note how relatively early on in the process previous studies were terminated. | the focus of our main experiments doesn't even begin until much higher performance has already been achieved with a period of diminishing returns firmly established. | contrasting |
train_1503 | The other major difference is that measure annotation cost by # of sentences. | we bring to light some potential drawbacks of this practice, showing it can lead to different conclusions than if other annotation cost metrics are used, such as time and money, which are the metrics that we use. | contrasting |
train_1504 | From Figure 3 it might appear that VG selection is better than random selection, achieving higher-performing systems with fewer translations in the labeled data. | it is important to take care when measuring annotation costs (especially for relatively complicated tasks such as translation). | contrasting |
train_1505 | For example, in the troops stationed in Iraq, the verb stationed is a VBN; troops is the head of the phrase. | for the troops vacationed in Iraq, the verb vacationed is a VBD and also the head. | contrasting |
train_1506 | Obviously, it needs exponential time to compute the above fractional counts. | due to the property of forest that compactly represents all the parse trees, the posterior probability of a subtree in a forest can be easily computed in an Inside-Outside fashion as the product of three parts: the outside probability of its root node, the probabilities of parse hyperedges involved in the subtree, and the inside probabilities of its leaf nodes (Lari and Young, 1990; Mi and Huang, 2008). | contrasting |
train_1507 | In SRL, constituents are used as the labeling units to form the labeled arguments. | previous work shows that if we use complete constituent (MCT) as done in SRL to represent relation instance, there is a large performance drop compared with using the path-enclosed tree (PT) 6 . | contrasting |
train_1508 | This paper defines Strictly k-Piecewise (SP k ) distributions and shows how they too can be efficiently estimated from positive data. | with the Markov assumption, our assumption is that the probability of the next symbol is conditioned on the previous set of discontiguous subsequences of length k − 1 in the string. | contrasting |
train_1509 | A straightforward way for cross-language document summarization is to translate the summary from the source language to the target language by using machine translation services. | though machine translation techniques have been advanced a lot, the machine translation quality is far from satisfactory, and in many cases, the translated texts are hard to understand. | contrasting |
train_1510 | Crossover is performed under the assumption that new solutions can be improved by re-using the good parts of old solutions. | it is good to keep some part of population from one generation to the next. | contrasting |
train_1511 | One may come up with more sophisticated base distributions. | the main point of the base distribution is to encode a controllable preference towards simpler rules; we therefore make the simplest possible assumption. | contrasting |
train_1512 | 2 2 One reviewer was concerned that since we explicitly disallow insertion rules in our sampling procedure, our model that generates such rules wastes probability mass and is therefore "deficient". | we regard sampling as a separate step from the data generation process, in which we can formulate more effective algorithms by using our domain knowledge that our data set was created by annotators who were instructed to delete words only. | contrasting |
train_1513 | It is important to note that the sampler described can move from any derivation to any other derivation with positive probability (if only, for example, by virtue of fully merging and then resegmenting), which guarantees convergence to the posterior (3). | some of these transition probabilities can be extremely small due to passing through low probability states with large elementary trees; in turn, the sampling procedure is prone to local modes. | contrasting |
train_1514 | Such annotated resources are scarce and expensive to create, motivating the need for unsupervised or semi-supervised techniques (Poon and Domingos, 2009). | unsupervised methods have their own challenges: they are not always able to discover semantic equivalences of lexical entries or logical forms or, on the contrary, cluster semantically different or even opposite expressions (Poon and Domingos, 2009). | contrasting |
train_1515 | Similarly, in the example mentioned earlier, when describing a forecast for a day with expected south winds, texts in the group can use either "south wind" or "southerly" to indicate this fact but no texts would verbalize it as "wind from west", and therefore these expressions will be assigned to different semantic clusters. | it is important to note that the phrase "wind from west" may still appear in the texts, but in reference to other time periods, underlying the need for modeling alignment between grouped texts and their latent meaning representation. | contrasting |
train_1516 | An even simpler technique would be to parse texts in a random order conditioning each meaning m k for k ∈ {1,..., K} on all the previous semantics m <k = m 1 ,..., m k−1 : Here, and in further discussion, we assume that the above search problem can be efficiently solved, exactly or approximately. | a major weakness of this algorithm is that decisions about components of the composite semantic representation (e.g., argument values) are made only on the basis of a single text, which first mentions the corresponding aspects, without consulting any future texts k > k, and these decisions cannot be revised later. | contrasting |
train_1517 | Note that, when generating fields, the Markov chain is defined over fields and the transition parameters are independent of the field values r if ij . | when drawing a word, the distribution of words is conditioned on the value of the corresponding field. | contrasting |
train_1518 | When the world state is observable, learning does not require any approximations, as dynamic programming (a form of the forward-backward algorithm) can be used to infer the posterior distribution on the E-step (Liang et al., 2009). | when the state is latent, dependencies are not local anymore, and approximate inference is required. | contrasting |
train_1519 | Note that the words "sun", "cloudiness" or "gaps" were not appearing in the labeled part of the data, but seem to be assigned to correct categories. | correlation between rain and overcast, as also noted in (Liang et al., 2009), results in the wrong assignment of the rain-related words to the field value corresponding to very cloudy weather. | contrasting |
train_1520 | There are many other potential applications, including automated storytelling , anaphora resolution (McTear, 1987), and information extraction (Rau et al., 1989). | it is also commonly accepted that the large-scale manual formalization of scripts is infeasible. | contrasting |
train_1521 | For instance, Mooney (1990) describes an early attempt to acquire causal chains, and Smith and Arnold (2009) use a graph-based algorithm to learn temporal script structures. | to our knowledge, such approaches have never been shown to generalize sufficiently for wide coverage application, and none of them was rigorously evaluated. | contrasting |
train_1522 | Jones and Thompson (2003) describe an approach to identifying different natural language realizations for the same event considering the temporal structure of a scenario. | they don't aim to acquire or represent the temporal structure of the whole script in the end. | contrasting |
train_1523 | Adding the function/content word split to the HMM structure improves both EM and VB estimation in terms of both tag matching accuracy and information. | these measures look at the parser only in isolation. | contrasting |
train_1524 | Experiments show that direct multi-participant models do not generalize to held out data, and likely never will, for practical reasons. | the Extended-Degree-of-Overlap model represents a suitable candidate for future work in this area, and is shown to successfully predict the distribution of speech in time and across participants in previously unseen conversations. | contrasting |
train_1525 | This policy is simplistic, and there is significant scope for more detailed back-off and interpolation. | such techniques infer values for under-estimated probabilities from shorter truncations of the conditioning history. | contrasting |
train_1526 | This shows that its ability to generalize to unseen data is higher than that of direct models. | in the easier mismatched D+C condition, it is outperformed by the CI model due to behavior differences among participants, which the EDO model small groups and large groups, represented in their study by K = 5 and K = 10, and noted that there is a smooth transition between the two extremes; this provides some scope for interpolating small-and large-group models, and the EDO framework makes this possible. | contrasting |
train_1527 | A user simulation for NLG is very similar, in that it is a predictive model of the most likely next user act. | 4 this NLG predicted user act does not actually change the overall dialogue state (e.g. | contrasting |
train_1528 | Stability Previous work on stress in N/V pairs (Sherman, 1975;Phillips, 1984) has emphasized change, in particular {2,2}→{1,2} (the most common change). | an important aspect of the diachronic dynamics of N/V pairs is stability: most N/V pairs do not show variation or change. | contrasting |
train_1529 | As in Model 4, the bifurcation structure implies all 6 possible changes between the three FPs. | change to {1,2} entails crossing the hyperplanes λ 11 =λ 12 and λ 2 =λ 12 , and is thus now frequency dependent. | contrasting |
train_1530 | (In Model 5, stable variation very near {1,1} or {2,2} is possible.) | {1,1} and {2,2} are diachronically very stable stress patterns, suggesting that at least for this model set, assuming mistransmission in the learner is problematic. | contrasting |
train_1531 | If we were to rely on Levenshtein distance, these words would seem to be a highly attractive match as cognates: they are nearly identical, essentially differing in only a single character. | no linguist would posit that these two words are related. | contrasting |
train_1532 | tween some approximating messageμ(w) and the true message µ(w). | messages are not always probability distributions and -because the number of possible strings is in principle infinitethey need not sum to a finite number. | contrasting |
train_1533 | So, the rule would yield the production q 1 w − → σ(γ(q 3 ), q 2 ) in the domain projection. | a deleting rule such as q x 2 ) necessitates the introduction of a new nonterminal ⊥ that can generate all of T Σ with weight 1. | contrasting |
train_1534 | Unfortunately, of the classes that preserve recognizability, only wLNT is closed under composition (Gécseg and Steinby, 1984;Baker, 1979;Fülöp and Vogler, 2009). | the general lack of composability of tree transducers does not preclude us from conducting forward application of a cascade. | contrasting |
train_1535 | The main purpose of this paper has been to present novel algorithms for performing application. | it is important to demonstrate these algorithms on real data. | contrasting |
train_1536 | The previously described implicit grammar G I defines a posterior distribution P (d I |s) over a sentence s via a large, indexed PCFG. | this distribution has the property that, when marginalized, it is equivalent to a posterior distribution P (d|s) over derivations in the correspondingly-weighted all-fragments grammar G. even with an explicit representation of G, we would not be able to tractably compute the parse that maximizes P (t|s) = d∈t P (d|s) = d I ∈t P (d I |s) (Sima'an, 1996). | contrasting |
train_1537 | (1998), Petrov and Klein (2007), ), that a certain amount of pruning helps accuracy, perhaps by promoting agreement between the coarse and full grammars (model intersection). | these 'fortuitous' search errors give only a small improvement and the peak accuracy is almost equal to the parsing accuracy without any pruning (as seen in Figure 5). | contrasting |
train_1538 | The implicit all-fragments approach (Section 2.2) avoids explicit extraction of all rule fragments. | the number of indexed symbols in our implicit grammar G I is still large, because every node in each training tree (i.e., every symbol token) has a unique indexed symbol. | contrasting |
train_1539 | Sutton and McCallum (2005) adopted a probabilistic SRL system to re-rank the N-best results of a probabilistic syntactic parser. | they reported negative results, which they blamed on the inaccurate probability estimates from their locally trained SRL model. | contrasting |
train_1540 | In particular, the recent shared tasks of CoNLL 2008(Surdeanu et al., 2008Hajic et al., 2009) tackled joint parsing of syntactic and semantic dependencies. | all the top 5 reported systems decoupled the tasks, rather than building joint models. | contrasting |
train_1541 | The integrated parsing approach as shown in Section 4.2 performs syntactic and semantic parsing synchronously. | to traditional syntactic parsers where no semantic role-related information is used, it may be interesting to investigate the contribution of such information in the syntactic parsing model, due to the availability of such information in the syntactic parsing process. | contrasting |
train_1542 | Therefore, such syntactic errors should be avoidable by considering those arguments already obtained in the bottom-up parsing process. | taking those expected semantic roles into account would help the syntactic parser. | contrasting |
train_1543 | After the replacement, the reviews about IBM, Apple, and Dell will not share vocabularies with each other. | for any three created words which represent the same English word, we add three edges among them, and therefore we get a simulated dictionary graph for our PCLSA model. | contrasting |
train_1544 | The words 'fault', 'mistake', 'fail' or 'miss' can be used as the nonliteral paraphrases. | it is also highly likely that these words are used to describe a scenario in a baseball game, in which 'drop the ball' is used literally. | contrasting |
train_1545 | For example θ Sentence specifies the distribution of documents in the corpus. | it is easy to see that these distributions do not influence the topic distributions; indeed, the expansions of the Sentence nonterminal are completely determined by the document distribution in the corpus, and are not affected by θ Sentence ). | contrasting |
train_1546 | For example, in an NP coreference application, if we could determine that Bill and Hillary are both first names then we could infer that Bill Clinton and Hillary Clinton are likely to refer to distinct individuals. | because Mr in Mr Clinton is not a first name, it is possible that Mr Clinton and Bill Clinton refer to the same individual (Elsner et al., 2009). | contrasting |
train_1547 | Section 2.1), and (b) given the dependency graph of the sentence embedding the annotation phrase, we consider the distance between words for each dependency link within the annotation phrase and consider the maximum over such dis- 8 In preliminary experiments our set of basic features comprised additional features providing information on the usage of stop words in the annotation phrase and on the number of paragraphs, sentences, and words in the respective annotation example. | since we found these features did not have any significant impact on the model, we removed them. | contrasting |
train_1548 | The fact that a Weakly Interactive system can simulate the result of an experiment proposed in support of the Strongly Interactive hypothesis is initially counter-intuitive. | this naturally falls out from our decision to use a probabilistic model: a lower probability, even in an unambiguous structure, is associated with increased reading difficulty. | contrasting |
train_1549 | 2 Common characteristics of RTE systems re-ported by their designers were the use of structured representations of shallow semantic content (such as augmented dependency parse trees and semantic role labels); the application of NLP resources such as Named Entity recognizers, syntactic and dependency parsers, and coreference resolvers; and the use of special-purpose ad-hoc modules designed to address specific entailment phenomena the researchers had identified, such as the need for numeric reasoning. | it is not possible to objectively assess the role these capabilities play in each system's performance from the system outputs alone. | contrasting |
train_1550 | The understanding that the second sentence of the text entails the hypothesis draws on two coreference relationships, namely that he is Oswald, and that the Kennedy in question is President Kennedy. | the utilization of discourse information for such inferences has been so far limited mainly to the substitution of nominal coreferents, while many aspects of the interface between discourse and semantic inference needs remain unexplored. | contrasting |
train_1551 | The results of recent studies, as reported in Section 2.2, seem to show that current resolution of discourse references in RTE systems hardly affects performance. | our intuition is that these results can be attributed to four major limitations shared by these studies: (1) the datasets, where discourse phenomena were not well repre-sented; (2) the off-the-shelf coreference resolution systems which may have been not robust enough; (3) the limitation to nominal coreference; and (4) overly simple integration of reference information into the inference engines. | contrasting |
train_1552 | Sometimes, these rules are generally applicable (e.g., 'Alaska → Arctic'). | often they are context-specific. | contrasting |
train_1553 | Template extraction We parse the corpus with a dependency parser and extract all propositional templates from every parse tree, employing the procedure used by Lin and Pantel (2001). | we only consider templates containing a predicate term and arguments 3 . | contrasting |
train_1554 | 's in that both try to learn graph edges given a transitivity constraint. | there are two key differences in the model and in the optimization algorithm. | contrasting |
train_1555 | The most well known international evaluation on the factoid QA task is the Text REtrieval Conference (TREC) 1 , and the annotated questions and answers released by TREC have become important resources for the researchers. | when facing a non-factoid question such as why, how, or what about, however, almost no automatic QA systems work very well. | contrasting |
train_1556 | In other words, the coreference task as defined by MUC and ACE is geared toward only identifying coreference relations anchored to an entity within the text. | to this research trend, investigations of referential behaviour in real world situations have continued to gain interest in the language generation community (Di Eugenio et al., 2000;Byron, 2005;van Deemter, 2007;Foster et al., 2008;Spanger et al., 2009), aiming at applications such as human-robot interaction. | contrasting |
train_1557 | Thus, in addition to the artificial nature of interaction, such as using keyboard input, this corpus only records restricted types of data. | though the annotated corpus by Spanger et al. | contrasting |
train_1558 | In the former case, the model with extra-linguistic information improved by about 22% compared with the baseline model. | in the latter case, the accuracy improved by only 7% over the baseline model. | contrasting |
train_1559 | First, inference under arbitrary priors can become complex. | in the simple case of our diagonal covariance Gaussians, the gradient of the observed data likelihood can be computed directly using the DMV's expected counts and maximum-likelihood estimation can be accomplished by applying standard gradient optimization methods. | contrasting |
train_1560 | That feature will be used in the log-linear attachment probability for English. | because that feature does not show up in any other language, it is not usefully controlled by the prior. | contrasting |
train_1561 | It is reasonable to worry that the improvements from these multilingual models might be partially due to having more total training data in the multilingual setting. | we found that halving the amount of data used to train the English, Dutch, and Swedish (the languages with the most training data) monolingual models did not substantially affect their performance, suggesting that for languages with several thousand sentences or more, the increase in statistical support due to additional monolingual data was not an important effect (the DMV is a relatively low-capacity model in any case). | contrasting |
train_1562 | The ALL-PAIRS model achieved an average relative error reduction of 17.1%, certainly outperforming both the simple phylogenetic models. | the rich phylogeny of the LINGUISTIC model, which incorporates linguistic constraints, outperformed the freer ALLPAIRS model. | contrasting |
train_1563 | For this reason we do not assign a cluster to punctuation marks and we report results using this policy, which we recommend for future work. | to be able to directly compare with previous work, we also report results for the full POS tag set. | contrasting |
train_1564 | The settings of the various experiments vary in terms of the exact annotation scheme used (coarse or fine grained) and the size of the test set. | the score differences are sufficiently large to justify the claim that our algorithm is currently the best performing algorithm on the PTB-WSJ corpus for POS induction from plain text 4 . | contrasting |
train_1565 | Some of the methods are suitable for retrieval of numerical attributes. | most of them do not exploit the numerical nature of the attribute data. | contrasting |
train_1566 | All of the above numerical attribute extraction systems utilize only direct information available in the discovered object-attribute co-occurrences and their contexts. | as we show, indirect information available for comparable objects can contribute significantly to the selection of the obtained values. | contrasting |
train_1567 | In the QA scenario, if we are given the full question and not just the (object, attribute) pair we can add terms appearing in the question and having a strong PMI with the object (this can be estimated using any fixed corpus). | this is not essential. | contrasting |
train_1568 | Since our goal is to obtain a value for the single requested object, if at the end of this stage we remain with a single value, no further processing is needed. | if we obtain a set of values or no values at all, we have to utilize comparison data to select one of the retrieved values or to approximate the value in case we do not have an exact answer. | contrasting |
train_1569 | Furthermore, part-whole relations are de-facto benchmarks for evaluating the performance of general relation extraction systems (Pantel and Pennacchiotti, 2006;Beamer et al., 2008;Pyysalo et al., 2009). | these relation extraction efforts have overlooked the ontological distinctions between the different types of part-whole relations. | contrasting |
train_1570 | 5 Initially,we selected seeds from WordNet (Fellbaum, 1998) (for English) and EuroWordNet (Vossen, 1998) (for Dutch) to initialize the IE algorithm. | we found that these pairs, such as acinos-mother of thyme or radarschermradarapparatuur (radar screen -radar equipment, hardly co-occured with reasonable frequency in Wikipedia sentences, hindering pattern extraction. | contrasting |
train_1571 | Although this assumption is not always correct in practice, we consider it a reasonable approximation given what we empirically observed in our training data. | to standard CRFs, semi-Markov CRFs directly model the segmentation of an input sequence as well as a classification of the segments (Sarawagi and Cohen, 2004) In this case, the features f (s j−1 , s j , x) are defined on segments instead of on word tokens. | contrasting |
train_1572 | Although there is no significant topic drift in this case, there are not many relevant terms apart from the query terms. | the same query performs very well in English with all the documents in the feedback set of the English corpus being relevant, thus resulting in informative feedback terms such as {bovin, scientif, recherch}. | contrasting |
train_1573 | As can be observed, this causes terms like sampras to come up in the MultiPRF model. | while the MultiPRF model has some terms pertaining to Men's Winners of Wimbledon as well, the original feedback model suffers from severe topic drift, with irrelevant terms such as {telefonbuch, telekom} also amongst the top terms. | contrasting |
train_1574 | In traditional information retrieval (IR) bag-of-word representation is the most common way to express information needs. | in opinion retrieval, information need target at relevant opinion, and this renders bag-of-word representation ineffective. | contrasting |
train_1575 | In Figure 3, we can see that the word pair that has links to many documents can be assigned a high weight to denote a strong associative degree between the topic term and a sentiment word, and it likely expresses a relevant opinion. | if a document has links to many word pairs, the document is with many relevant opinions, and it will result in high ranking. | contrasting |
train_1576 | They divided the sentiment words into query-dependent and query-independent by utilizing several sentiment expansion techniques, and integrated them into a mixed model. | in this model, the contribution of a sentiment word was its corresponding incremental mean average precision value. | contrasting |
train_1577 | On the one hand, word pair can identify the relevant opinion according to intra-sentence contextual information. | it can measure the degree of a relevant opinion by considering the inter-sentence contextual information. | contrasting |
train_1578 | A Music RS could be developed along the lines of Product RSs. | music RSs recommend individual tracks, not full albums, e.g. | contrasting |
train_1579 | The utilization of tf i in classification is rather straightforward and intuitive but, as previously discussed, usually results in decreased accuracy in sentiment analysis. | using idf to assign weights to features is less intuitive, since it only provides information about the general distribution of term i amongst documents of all classes, without providing any additional evidence of class preference. | contrasting |
train_1580 | Datasets One dataset we employed is the automatic content extraction (ACE) (ACE-Event, 2005). | the utilization of the ACE corpus for the task of solving event coreference is limited because this resource provides only withindocument event coreference annotations using a restricted set of event types such as LIFE, BUSI-NESS, CONFLICT, and JUSTICE. | contrasting |
train_1581 | In ACE, nominal predicates corefer with their subject (2), and appositive phrases corefer with the noun they are modifying (3). | they do not fall under the identity relation in OntoNotes, which follows the linguistic understanding of coreference according to which nominal predicates and appositives express properties of an entity rather than refer to a second (coreferent) entity (van Deemter and Kibble, 2000). | contrasting |
train_1582 | The ACE scores for the automatically preprocessed models in Table 7 are about 3% lower than those based on OntoNotes gold-standard data in Table 5, providing evidence for the advantage offered by gold-standard preprocessing information. | the similar-if not higher-scores of OntoNotes can be attributed to the use of the annotated ACE entity types. | contrasting |
train_1583 | However, they have a major limitation that they do not have a principled mechanism to guarantee grammaticality on the target side, since there is no linguistic tree structure of the output. | string-to-tree systems explicitly model the grammaticality of the output by using target syntactic trees. | contrasting |
train_1584 | Both string-toconstituency system (e.g., (Galley et al., 2006;) and string-to-dependency model (Shen et al., 2008) have achieved significant improvements over the state-of-the-art formally syntax-based system Hiero (Chiang, 2007). | those systems also have some limitations that they run slowly (in cubic time) , and do not utilize the useful syntactic information on the source side. | contrasting |
train_1585 | We first use pattern-matching algorithm of to convert F c into a translation forest, each hyperedge of which is associated with a constituency to dependency translation rule. | pattern-matching failure 2 at a node v f will cut the derivation path and lead to translation failure. | contrasting |
train_1586 | Of course, the characteristics of our aligned corpus may not hold for other annotated corpora or other language pairs. | we hope that the overall effectiveness of our modeling approach will influence future annotation efforts to build corpora that are consistent with this interpretation. | contrasting |
train_1587 | This model scores overlapping phrasal rules contained within terminal blocks that result from including or excluding possible links. | this model does not score bispans that cross bracketing of ITG derivations. | contrasting |
train_1588 | As can be seen in table 1, states are rather immanent operations and achievements are those occur in a single moment or operations related to perception level. | activity and accomplishment are processes (transeunt operations) in traditional philosophy. | contrasting |
train_1589 | Since each extracted action has its probability, we can use the value as a feature for state / activity verb classification. | a verb may appear in different contexts and can have multiple Evidence of a word w being an action in eHow is denoted as E(w) where variable t is the sum of individual action probability values in D w the set of documents from which the word w has been extracted as an action. | contrasting |
train_1590 | In each group, MAD is the rightmost bar. | to graph construction from structured tables as in Sections 3.1, 3.2, in this section we use hypernym tuples extracted by TextRunner (Banko et al., 2007), an open domain IE system, to construct the graph. | contrasting |
train_1591 | We find that the objective minimized by LP-ZGL (Equation 1) is underregularized, i.e., its model parameters (Ŷ ) are not constrained enough, compared to MAD (Equation 3, specifically the third term), resulting in overfitting in case of highly connected graphs. | mAD is able to avoid such overfitting because of its minimization of a well regularized objective (Equation 3). | contrasting |
train_1592 | This further illustrates the advantage of aggregating information across sources (Talukdar et al., 2008;Pennacchiotti and Pantel, 2009). | we are the first, to the best of our knowledge, to demonstrate the effectiveness of attributes in class-instance acquisition. | contrasting |
train_1593 | (Pennacchiotti and Pantel, 2006) proposed an algorithm for automatically ontologizing semantic relations into WordNet. | despite its high precision entries, WordNet's limited coverage makes it impossible for relations whose arguments are not present in WordNet to be incorporated. | contrasting |
train_1594 | Initially, we decided to conduct an automatic evaluation comparing our results to knowledge bases that have been extracted in a similar way (i.e., through pattern application over unstructured text). | it is not always possible to perform a complete comparison, because either researchers have not fully explored the same relations we have studied, or for those relations that overlap, the gold standard data was not available. | contrasting |
train_1595 | For the "live in" relation, both repositories contain the same city and country names. | the recursive pattern learned arguments like pain, effort which express a manner of living, and locations like slums, box. | contrasting |
train_1596 | It follows immediately from Grenander's result that and hence λ opt (x, θ * ) ≤ µ x /τ for any θ * , hence Λ(x; θ I ) ≤ µ x /τ . | if we choose θ 0 = θ I , we have that p(z | θ 0 , x) > µ x for some z , hence, for θ * such that it assigns probability τ on z , we have that and hence λ opt (x, θ * ; θ ) > µ x /τ , so Λ(x; θ ) > µ x /τ . | contrasting |
train_1597 | The wide domains of the variables may be advantageous for some integer programming solvers as well. | it creates an integer programming problem of size exponential in the number of sets. | contrasting |
train_1598 | These results show some significant improvements in vector length for the larger modules. | they do not reveal the entire story. | contrasting |
train_1599 | That is a shallow minimum; both λ = 120 and λ = 160 gave vector lengths of 986, and the length slowly increased with greater λ. | the fastest bit-count on this architec- ture, using a technique first published by Wegner (1960), requires time increasing with the number of nonzero bits it counts; and a similar effect would appear on a word-by-word basis even if we used a constant-time per-word count. | contrasting |
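
The rows above follow a flat four-column schema: `id`, `sentence1`, `sentence2`, and a `label` drawn from 4 classes. Below is a minimal sketch of how data in this schema could be loaded and inspected with the Hugging Face `datasets` library; the local file name and the commented-out Hub identifier are placeholders, since the dataset's actual published location is not given here.

```python
# Minimal sketch: loading and inspecting sentence-pair data with the schema shown above.
# NOTE: "train.csv" and the commented-out Hub ID are placeholders (assumptions), not the
# actual published location of this dataset.
from collections import Counter

from datasets import load_dataset

# Option 1: from the Hugging Face Hub (placeholder ID, uncomment and adjust)
# ds = load_dataset("your-namespace/sentence-pair-relations", split="train")

# Option 2: from a local CSV with columns: id, sentence1, sentence2, label
ds = load_dataset("csv", data_files={"train": "train.csv"})["train"]

# Look at one row
row = ds[0]
print(row["id"], row["label"])
print(row["sentence1"])
print(row["sentence2"])

# The label column has 4 classes; count how often each appears
print(Counter(ds["label"]))
```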