{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:58:34.091158Z" }, "title": "Box-To-Box Transformations for Modeling Joint Hierarchies", "authors": [ { "first": "Shib", "middle": [], "last": "Sankar Dasgupta", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Massachusetts", "location": { "settlement": "Amherst" } }, "email": "" }, { "first": "Lorraine", "middle": [], "last": "Xiang", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Massachusetts", "location": { "settlement": "Amherst" } }, "email": "xiangl@cs.umass.edu" }, { "first": "Michael", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Massachusetts", "location": { "settlement": "Amherst" } }, "email": "" }, { "first": "Dongxu", "middle": [], "last": "Boratko", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Massachusetts", "location": { "settlement": "Amherst" } }, "email": "mboratko@cs.umass.edu" }, { "first": "Andrew", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Massachusetts", "location": { "settlement": "Amherst" } }, "email": "" }, { "first": "", "middle": [], "last": "Mccallum", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Massachusetts", "location": { "settlement": "Amherst" } }, "email": "mccallum@cs.umass.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Learning representations of entities and relations in structured knowledge bases is an active area of research, with much emphasis placed on choosing the appropriate geometry to capture the hierarchical structures exploited in, for example, ISA or HASPART relations. Box embeddings (Vilnis et al., 2018; Li et al., 2019; Dasgupta et al., 2020), which represent concepts as n-dimensional hyperrectangles, are capable of embedding hierarchies when training on a subset of the transitive closure. In Patel et al. (2020), the authors demonstrate that only the transitive reduction is required and further extend box embeddings to capture joint hierarchies by augmenting the graph with new nodes. While it is possible to represent joint hierarchies with this method, the parameters for each hierarchy are decoupled, making generalization between hierarchies infeasible. In this work, we introduce a learned box-to-box transformation that respects the structure of each hierarchy. We demonstrate that this not only improves the capability of modeling cross-hierarchy compositional edges but is also capable of generalizing from a subset of the transitive reduction.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Learning representations of entities and relations in structured knowledge bases is an active area of research, with much emphasis placed on choosing the appropriate geometry to capture the hierarchical structures exploited in, for example, ISA or HASPART relations. Box embeddings (Vilnis et al., 2018; Li et al., 2019; Dasgupta et al., 2020), which represent concepts as n-dimensional hyperrectangles, are capable of embedding hierarchies when training on a subset of the transitive closure. In Patel et al. (2020), the authors demonstrate that only the transitive reduction is required and further extend box embeddings to capture joint hierarchies by augmenting the graph with new nodes. 
While it is possible to represent joint hierarchies with this method, the parameters for each hierarchy are decoupled, making generalization between hierarchies infeasible. In this work, we introduce a learned box-to-box transformation that respects the structure of each hierarchy. We demonstrate that this not only improves the capability of modeling cross-hierarchy compositional edges but is also capable of generalizing from a subset of the transitive reduction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Representation learning for hierarchical relations is crucial in natural language processing because of the hierarchical nature of common knowledge, for example, (Athiwaratkun and Wilson, 2018; Vendrov et al., 2016; Vilnis et al., 2018; Nickel and Kiela, 2017) . The ISA relation represents meaningful hierarchical relationships between concepts and plays an essential role in generalization for other relations, for example allowing a HASPART fact stated about a concept to generalize to its sub-concepts. The fundamental nature of the ISA relation means that it is inherently involved in a large amount of compositional reasoning involving other relations.", "cite_spans": [ { "start": 180, "end": 211, "text": "(Athiwaratkun and Wilson, 2018;", "ref_id": "BIBREF1" }, { "start": 212, "end": 233, "text": "Vendrov et al., 2016;", "ref_id": "BIBREF23" }, { "start": 234, "end": 254, "text": "Vilnis et al., 2018;", "ref_id": "BIBREF24" }, { "start": 255, "end": 278, "text": "Nickel and Kiela, 2017)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Modeling hierarchies is essentially the problem of modeling a poset, or partially ordered set. The task of inferring missing edges, which requires learning a transitive relation, was introduced in Vendrov et al. (2016) . The authors also introduce a model based on the reverse product order on R^n, which essentially models concepts as infinite cones. Region-based representations have been effective in representing hierarchical data, as containment between regions is naturally transitive. Vilnis et al. (2018) introduced axis-aligned hyperrectangles (or boxes) that are provably more flexible than cones, and demonstrated state-of-the-art performance in multiple tasks.", "cite_spans": [ { "start": 210, "end": 216, "text": "(2016)", "ref_id": null }, { "start": 491, "end": 511, "text": "Vilnis et al. (2018)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Thus far, not as much effort has been put into modeling joint hierarchies. Patel et al. (2020) proposed to simultaneously model ISA and HASPART hierarchies from WordNet (Miller, 1995) . In order to do so, they effectively augmented the graph by duplicating the nodes to create a single massive hierarchy. Their model assigns two separate box embeddings, B_ISA and B_HASPART, to each node n; these two share no parameters, and the model therefore misses out on a large amount of semantic relatedness between ISA and HASPART .", "cite_spans": [ { "start": 75, "end": 94, "text": "Patel et al. (2020)", "ref_id": "BIBREF18" }, { "start": 169, "end": 183, "text": "(Miller, 1995)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we propose a box-to-box transformation which translates and dilates box representations between hierarchies. 
Our proposed model shares information between the ISA and HASPART hierarchies via this transformation as well as cross-hierarchy containment training objectives. We compare BOX-TRANSFORM MODEL with multiple strong baselines under different settings. We substantially outperform the prior TWO-BOX MODEL while training with only the transitive reduction (informally, the minimal graph with the same connectivity as the original hierarchy) of both hierarchies and predicting inferred composition edges. As mentioned above, our model's shared learned features should allow for more generalization, and we test this by training on a subset of the transitive reduction, where we find we are able to outperform strong baselines. Finally, we perform a detailed analysis of the model's capacity to predict compositional edges and transitive closure edges, both from an overfitting and a generalization standpoint, identifying subsets where further improvement is needed. The source code for our model and the dataset can be found at https://github.com/iesl/box-to-box-transform.git.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recent advances in representing a single hierarchy mainly fall into two categories: 1) representing hierarchies in non-Euclidean space (e.g. hyperbolic space, due to the curvature's inductive bias toward modeling tree-like structures), and 2) using region-based representations instead of vectors for each node in the hierarchy (Erk, 2009) . Hyperbolic space has been shown to be efficient in representing hierarchical relations, but also encounters difficulties in training (Nickel and Kiela, 2017; Ganea et al., 2018b; Chamberlain et al., 2017) .", "cite_spans": [ { "start": 315, "end": 326, "text": "(Erk, 2009)", "ref_id": "BIBREF9" }, { "start": 462, "end": 486, "text": "(Nickel and Kiela, 2017;", "ref_id": "BIBREF16" }, { "start": 487, "end": 507, "text": "Ganea et al., 2018b;", "ref_id": "BIBREF11" }, { "start": 508, "end": 533, "text": "Chamberlain et al., 2017)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Categorization models in psychology often represent a concept as a region (Nosofsky, 1986; Smith et al., 1988; Hampton, 1991) . Vilnis and McCallum (2015) and Athiwaratkun and Wilson (2018) use Gaussian distributions to embed each word in the corpus, the latter of which uses thresholded divergences which amount to region representations. Vendrov et al. (2016) and Lai and Hockenmaier (2017) make use of the reverse product order on R^n_+, which effectively results in cone representations. Vilnis et al. (2018) further extend this cone representation to axis-aligned hyper-rectangles (or boxes), and demonstrate state-of-the-art performance on modeling hierarchies. 
Various training improvement methods for box embeddings have been proposed (Li et al., 2019; Dasgupta et al., 2020), the most recent of which, GumbelBox, uses a latent noise model in which box parameters are represented via Gumbel distributions; this improves the loss landscape by making the gradient smooth for the geometric operations involved with box embeddings.", "cite_spans": [ { "start": 74, "end": 90, "text": "(Nosofsky, 1986;", "ref_id": "BIBREF17" }, { "start": 91, "end": 110, "text": "Smith et al., 1988;", "ref_id": "BIBREF20" }, { "start": 111, "end": 125, "text": "Hampton, 1991)", "ref_id": "BIBREF12" }, { "start": 128, "end": 154, "text": "Vilnis and McCallum (2015)", "ref_id": "BIBREF25" }, { "start": 159, "end": 189, "text": "Athiwaratkun and Wilson (2018)", "ref_id": "BIBREF1" }, { "start": 340, "end": 361, "text": "Vendrov et al. (2016)", "ref_id": "BIBREF23" }, { "start": 366, "end": 392, "text": "Lai and Hockenmaier (2017)", "ref_id": "BIBREF13" }, { "start": 493, "end": 513, "text": "Vilnis et al. (2018)", "ref_id": "BIBREF24" }, { "start": 743, "end": 760, "text": "(Li et al., 2019;", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Region representations are also used for tasks which do not require modeling hierarchy. In Vilnis et al. (2018) , the authors also model conditional probability distributions using box embeddings. Abboud et al. (2020) and Ren et al. (2020) take a different approach, using boxes for their capacity to contain many vectors to provide slack in the loss function when modeling knowledge base triples or representing logical queries, respectively. Ren et al. (2020) also made use of an action on boxes similar to ours, involving translation and dilation; however, our work differs in both the task (i.e. representing logical queries vs. joint hierarchies) and the approach, as their model represents entities using vectors and a loss function based on a box-to-vector distance. The inductive bias of hyperbolic space is also exploited to model multiple relations: Ganea et al. (2018a) learn hyperbolic transformations for multiple relations using Poincar\u00e9 embeddings, and show model improvement in low computational resource settings. Patel et al. (2020) , which our work is most similar to, represent joint hierarchies using box embeddings. However, they represent each concept with two boxes, ignoring the internal semantics of the concepts.", "cite_spans": [ { "start": 91, "end": 111, "text": "Vilnis et al. (2018)", "ref_id": "BIBREF24" }, { "start": 197, "end": 217, "text": "Abboud et al. (2020)", "ref_id": "BIBREF0" }, { "start": 855, "end": 875, "text": "Ganea et al. (2018a)", "ref_id": "BIBREF10" }, { "start": 1026, "end": 1045, "text": "Patel et al. (2020)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Modeling joint hierarchies shares some similarities with knowledge base completion; however, the goals of the two settings are different. When modeling joint hierarchies, one attempts to learn simultaneous transitive relations, and potentially to learn relevant compositional edges involving these relations. For knowledge base completion, on the other hand, one may be learning many different relations, and primarily seeks to recover edges which were removed rather than inferring new compositional edges. 
Still, the models which perform knowledge base completion can be applied to this task, as the data can be viewed as knowledge base triples with only two relations. There have been multiple works that aim to build better knowledge representations (Bordes et al., 2013; Trouillon et al., 2016; Sun et al., 2019; Balazevic et al., 2019b) . Most relevant are recently proposed KG embedding methods (Chami et al., 2020; Balazevic et al., 2019a) which embed entities in the Poincar\u00e9 ball model of hyperbolic space. These models are intended to capture relational patterns present in multi-relational graphs, with a particular emphasis on hierarchical relations.", "cite_spans": [ { "start": 751, "end": 772, "text": "(Bordes et al., 2013;", "ref_id": "BIBREF5" }, { "start": 773, "end": 790, "text": "Trouillon et al., 2016;", "ref_id": null }, { "start": 1217, "end": 1234, "text": "Sun et al., 2019;", "ref_id": "BIBREF21" }, { "start": 1235, "end": 1259, "text": "Balazevic et al., 2019b)", "ref_id": "BIBREF3" }, { "start": 1277, "end": 1297, "text": "(Chami et al., 2020;", "ref_id": null }, { "start": 1298, "end": 1322, "text": "Balazevic et al., 2019a)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Figure 2: An overview of BOX-TRANSFORM MODEL on joint ISA and HASPART hierarchies. Composition edges are created following certain rules and should be correctly inferred by a well-trained model. The ISA Wing box is transformed into a HASPART Wing box representing concepts that have wings, and Bird is a subset of it. The same holds for Appendage, and the monotonicity in the ISA space is preserved in the HASPART space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Introduced in (Vilnis et al., 2018) , a box lattice model (or box model) is a geometric embedding which captures partial orders and lattice structure using n-dimensional hyper-rectangles. Formally, we define the set of boxes B in R^n as", "cite_spans": [ { "start": 14, "end": 35, "text": "(Vilnis et al., 2018)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Box Lattice Model", "sec_num": "3.1" }, { "text": "B(R^n) = {[x_1^m, x_1^M] \u00d7 \u22ef \u00d7 [x_n^m, x_n^M]}, (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box Lattice Model", "sec_num": "3.1" }, { "text": "where x_i^m, x_i^M \u2208 R, and we represent all degenerate boxes where x_i^m > x_i^M with \u2205. A box model for a set S is a function Box : S \u2192 B(R^n) which captures some desirable properties of the set S. As the name implies, the box lattice model is particularly suited to representing partial orders and lattice structures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box Lattice Model", "sec_num": "3.1" }, 
{ "text": "Definition 1 (Poset). A partially ordered set, or poset, is a set P along with a relation \u2aaf such that, for each a, b, c \u2208 P, we have a \u2aaf a (reflexivity); if a \u2aaf b and b \u2aaf a, then a = b (antisymmetry); and if a \u2aaf b and b \u2aaf c, then a \u2aaf c (transitivity).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box Lattice Model", "sec_num": "3.1" }, { "text": "The authors note that there are natural geometric operations which form a lattice structure on B:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box Lattice Model", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Box(x) \u2227 Box(y) := \u220f_i [max(x_i^m, y_i^m), min(x_i^M, y_i^M)], (2) Box(x) \u2228 Box(y) := \u220f_i [min(x_i^m, y_i^m), max(x_i^M, y_i^M)],", "eq_num": "(3)" } ], "section": "Box Lattice Model", "sec_num": "3.1" }, { "text": "In other words, the meet of two boxes is their intersection, or \u2205 if the boxes are disjoint, and the join is the smallest containing box. These geometric operations map very neatly to hierarchies, where the join of two nodes is their closest common ancestor and the meet is their closest common descendant (or \u2205 if no such node exists). The ability of this model to capture lattice structure using geometric operations makes it a natural choice to embed hierarchies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box Lattice Model", "sec_num": "3.1" }, { "text": "In Vilnis et al. (2018) , the authors also introduced a probabilistic interpretation of box embeddings and a learning method which was improved upon in Li et al. (2019) and Dasgupta et al. (2020). By using a probability measure \u00b5 on R^d (or by constraining the space to [0, 1]^d), one can calculate box volumes as \u00b5(Box(X)). The pullback of this measure yields a probability measure on S, and thus the box model can be imbued with valid probabilistic semantics. In particular, since the box space B is closed under intersection, we can calculate joint probabilities by computing P(X, Y) = \u00b5(Box(X) \u2227 Box(Y)) and similarly compute conditional probabilities as", "cite_spans": [ { "start": 3, "end": 23, "text": "Vilnis et al. (2018)", "ref_id": "BIBREF24" }, { "start": 152, "end": 168, "text": "Li et al. (2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Box Model Training", "sec_num": "3.2" }, { "text": "P(X | Y) = \u00b5(Box(X) \u2227 Box(Y)) / \u00b5(Box(Y)). (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Box Model Training", "sec_num": "3.2" }, { "text": "The conversion from a poset or lattice structure to probabilistic semantics is accomplished by assigning conditional probabilities, namely a \u2aaf b if and only if P(b | a) = 1. We note that the properties required of the relation \u2aaf follow as a natural consequence of the axioms for conditional probability. Apart from providing rigor and interpretability, the calibrated probabilistic semantics also inform and facilitate the training procedure for box embeddings, which is accomplished via gradient descent using KL-divergence with respect to the aforementioned probability distribution as a loss function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Box Model Training", "sec_num": "3.2" },
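To make equations (2)-(4) concrete, the following is a minimal NumPy sketch of hard (noise-free) boxes; it is our own illustration, not code from the paper, and all names in it are ours:

```python
import numpy as np

# A box is a (min, max) pair of d-dimensional coordinate arrays.
def meet(a, b):
    """Eq. (2): the intersection; may be degenerate (some min > max)."""
    return np.maximum(a[0], b[0]), np.minimum(a[1], b[1])

def join(a, b):
    """Eq. (3): the smallest box containing both a and b."""
    return np.minimum(a[0], b[0]), np.maximum(a[1], b[1])

def volume(box):
    """Product of side lengths; degenerate boxes get volume 0."""
    return float(np.prod(np.clip(box[1] - box[0], 0.0, None)))

def cond_prob(x, y):
    """Eq. (4): P(X | Y) = vol(X meet Y) / vol(Y), with Lebesgue measure."""
    return volume(meet(x, y)) / volume(y)

cat = (np.array([0.2, 0.3]), np.array([0.4, 0.6]))
animal = (np.array([0.1, 0.2]), np.array([0.7, 0.9]))  # contains `cat`
print(cond_prob(animal, cat))  # 1.0, i.e. the cat box is inside the animal box
```

Training replaces these hard volumes with the smooth expected volumes of Gumbel boxes described next, so that disjoint boxes still receive gradient.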
{ "text": "As one might expect, care must be taken to handle the case when boxes are disjoint, as there is no gradient. In Vilnis et al. (2018) the authors made use of the lattice structure to derive a lower bound on the probability, and Li et al. (2019) introduced an approximation to Gaussian convolution over the boxes which similarly handled the case of disjoint boxes. Dasgupta et al. (2020) improves this further by taking a random process perspective, ensembling over an entire family of box models. The endpoints of boxes are represented using Gumbel distributions, that is", "cite_spans": [ { "start": 112, "end": 132, "text": "Vilnis et al. (2018)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Box Model Training", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "GumbelBox(X) = \u220f_i [X_i^m, X_i^M], X_i^m \u223c MaxGumbel(\u00b5_i^m, \u03b2), X_i^M \u223c MinGumbel(\u00b5_i^M, \u03b2),", "eq_num": "(5)" } ], "section": "Probabilistic Box Model Training", "sec_num": "3.2" }, { "text": "where \u00b5, \u03b2 are the location and scale parameters of the Gumbel distribution respectively. The MaxGumbel distribution is given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Box Model Training", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f(x; \u00b5, \u03b2) = (1/\u03b2) exp(\u2212(x \u2212 \u00b5)/\u03b2 \u2212 e^{\u2212(x \u2212 \u00b5)/\u03b2}),", "eq_num": "(6)" } ], "section": "Probabilistic Box Model Training", "sec_num": "3.2" }, { "text": "and the MinGumbel distribution is given by negating x and \u00b5. The Gumbel distribution was chosen due to its min/max stability, making the set of Gumbel boxes closed under intersection, i.e. the intersection of two Gumbel boxes is another Gumbel box. We denote the space of all such boxes as G. The expected volume of a Gumbel box can be efficiently calculated analytically, and in Dasgupta et al. (2020) the authors use this expected volume to calculate the conditional probabilities mentioned in equation 4. This training method leads to improved performance on many tasks, and is particularly beneficial when embedding trees; thus we will use GumbelBox in our setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Box Model Training", "sec_num": "3.2" }, { "text": "Many existing methods have been proposed for modeling a single hierarchy; however, entities are often simultaneously part of multiple hierarchies, for example hypernymy (i.e. ISA ) and meronymy (i.e. HASPART ). Furthermore, useful information can be shared across inferred compositional edges between the two hierarchies. For example, as shown in Figure 2, based on <Bird, HASPART, Wing> and <Wing, ISA, Appendage>, we can infer <Bird, HASPART, Appendage>. Due to the compositional nature of these relations, we can infer not only the per-relation transitive closure edges but also the compositional edges. Formally, for two hierarchical relations r_1 and r_2, composition edges can be formulated following certain rules. In Figure 2, the rules are designed as follows: for <Head, HASPART, Tail>, <x_1, ISA, Head> makes x_1 a sub-class of Head, and <Tail, ISA, x_2> makes x_2 a super-class of Tail. Composition edges can then be generated as <x_1, HASPART, x_2>, <x_1, HASPART, Tail>, or <Head, HASPART, x_2>. These compositional edges are identified in Patel et al. (2020) , where it is observed that a model which effectively captures both hierarchies should correctly predict not only over the transitive closure of each individual relation but also on these compositional edges.", "cite_spans": [ { "start": 1063, "end": 1082, "text": "Patel et al. (2020)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Modeling Joint Hierarchies", "sec_num": "3.3" },
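As an illustration of these rules (our own sketch, not the authors' dataset code; the toy `isa` and `haspart` edge lists are hypothetical), composition edges with a single ISA hop on either side of a HASPART edge can be generated as follows; equation (12) in Section 5.1 extends this to up to two ISA hops per side:

```python
from itertools import product

# Toy transitive reductions, stored as (child, parent) / (whole, part) pairs.
isa = {("bird", "vertebrate"), ("wing", "appendage")}
haspart = {("bird", "wing")}

def compose(isa_edges, haspart_edges):
    """<x1, ISA, Head> and <Tail, ISA, x2> composed with <Head, HASPART, Tail>
    yield composition edges <x1, HASPART, x2> (and the one-sided variants)."""
    comp = set()
    for head, tail in haspart_edges:
        subs = {x for x, y in isa_edges if y == head} | {head}    # sub-classes of Head
        supers = {y for x, y in isa_edges if x == tail} | {tail}  # super-classes of Tail
        comp |= set(product(subs, supers))
    return comp - haspart_edges

print(sorted(compose(isa, haspart)))  # [('bird', 'appendage')]
```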
(2020)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Modeling Joint Hierarchies", "sec_num": "3.3" }, { "text": "As mentioned previously, our goal is to not only capture intra-relation transitivity, but also require the model to capture cross-hierarchy compositional edges; that is, for a set S with two partial orders 1 , which allows us to recover them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box-to-Box Transformation", "sec_num": "4.1" }, { "text": "As shown in Dasgupta et al. (2020), Gumbel boxes are able to model hierarchies, we would like to benefit from this capability, particularly for modeling the ISA hierarchy, and thus we seek to learn a function f 1 : S \u2192 G, where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box-to-Box Transformation", "sec_num": "4.1" }, { "text": "a 1 b \u21d0\u21d2 E[\u00b5(f 1 (a) \u2229 f 1 (b))] E[\u00b5(f 1 (a))] = 1. (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box-to-Box Transformation", "sec_num": "4.1" }, { "text": "For a given Gumbel box,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box-to-Box Transformation", "sec_num": "4.1" }, { "text": "f (x) = d i=1 [X i , X i ], X i \u223c MaxGumbel(\u00b5 i , \u03b2), X i \u223c MinGumbel(\u00b5 i + \u2206 i , \u03b2). (8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box-to-Box Transformation", "sec_num": "4.1" }, { "text": "where the free parameters are \u00b5 i and \u2206 i . To simultaneously model a second relation, we train a function \u03d5 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box-to-Box Transformation", "sec_num": "4.1" }, { "text": "G \u2192 G such that a 2 b \u21d0\u21d2 E[\u00b5(\u03d5(f 1 (a)) \u2229 f 1 (b))] E[\u00b5(\u03d5(f 1 (a)))] = 1. (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box-to-Box Transformation", "sec_num": "4.1" }, { "text": "For notational simplicity, we abbreviate f 2 = \u03d5 \u2022 f 1 . We choose the transformation \u03d5 to operate on the \"min\" coordinate of a Gumbel box and the \"sidelengths\", that is, we transform a given Gumbel box", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box-to-Box Transformation", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f (x) = d i=1 [X i , X i ], X i \u223c MaxGumbel(\u00b5 i , \u03b2), X i \u223c MinGumbel(\u00b5 i + \u2206 i , \u03b2). (10) to \u03d5 (GumbelBox(X)) = d i=1 [Y i , Y i ],", "eq_num": "(11)" } ], "section": "Box-to-Box Transformation", "sec_num": "4.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box-to-Box Transformation", "sec_num": "4.1" }, { "text": "Y i \u223c MaxGumbel(\u03b8 i \u00b5 i + b i , \u03b2) Y i \u223c MinGumbel(\u03b8 i \u00b5 i +b i +softplus(\u03b8 i \u2206 i +b i ), \u03b2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box-to-Box Transformation", "sec_num": "4.1" }, { "text": "and the \u03b8 i , \u03b8 i , b i , b i are learned parameters. This effectively translates and dilates the location parameters of the Gumbel distributions which represent the \"corners\" of a given Gumbel box. 
{ "text": "The softplus function is used here as a way to ensure the max coordinate remains larger than the min, and it also provides simple overflow protection for the expected box volume, as might happen with side-lengths larger than one in high dimensions. While mathematically simple, this transformation allows for parameter sharing between the embedding of a concept with respect to \u2aaf_1 and with respect to \u2aaf_2. Importantly, the transformation is capable of capturing both a global translation and dilation as well as a scaled transformation of the existing learned representation, allowing the absolute position in space (which, for previous box embedding models, was irrelevant) to potentially capture relevant features of the entities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Box-to-Box Transformation", "sec_num": "4.1" }, { "text": "Remark 1. The lack of a transformation on f_1(b) is not an oversight. Using Figure 2 as an example, if we consider the Bird box as representative of \"all things which are birds\", and the HASPART Wing box as representative of \"all things which have wings\", then encouraging containment of the Bird box inside the HASPART Wing box is quite natural. This conceptual motivation is precisely captured by the lack of a transformation on f_1(b). This also coincides with the probabilistic semantics discussed in section 3.2, and is also the method employed by (Patel et al., 2020) , where this cross-hierarchy containment objective is solely responsible for any flow of information between hierarchies in the TWO-BOX MODEL .", "cite_spans": [ { "start": 558, "end": 578, "text": "(Patel et al., 2020)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Box-to-Box Transformation", "sec_num": "4.1" }, { "text": "There are two main differences between our model and the model introduced in Patel et al. (2020), the TWO-BOX MODEL . First, the TWO-BOX MODEL preceded the Gumbel box model, and instead uses the Soft box model from (Li et al., 2019) . To ensure that the benefits from our model are not conflated with the improvements from using Gumbel boxes, we also train a TWO-BOX MODEL from (Patel et al., 2020) which makes use of Gumbel boxes.", "cite_spans": [ { "start": 215, "end": 232, "text": "(Li et al., 2019)", "ref_id": "BIBREF14" }, { "start": 377, "end": 397, "text": "(Patel et al., 2020)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Connection to Two-Box Model", "sec_num": "4.2" }, { "text": "Second, both models use different boxes to represent different relations; however, the TWO-BOX MODEL allows both boxes to have free parameters, relying on containment between boxes representing different relations to pass information. Under the framework we have currently presented, this would be equivalent to learning two functions, f_1 and f_2, both of which have separate parameters for the min and side length of the boxes for each entity. While such a model has significant representational capacity, we would expect that it would suffer greatly from a lack of generalization. 
We evaluate this hypothesis by creating a second test, discussed in section 5.4, which removes edges from the transitive reduction of the training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Connection to Two-Box Model", "sec_num": "4.2" }, { "text": "We demonstrate the efficacy of BOX-TRANSFORM MODEL by using the joint hierarchy created by Patel et al. (2020) from WordNet (Miller, 1995) . In this dataset, hypernymy (ISA ) and meronymy (HASPART ) are two hierarchical relations of WordNet over noun synsets, of which there are 82,114 in total. Individually, the hypernymy part of the hierarchy contains 82,114 nodes (i.e., all the synsets) with 84,363 edges in its transitive reduction, and the meronymy portion has 11,235 synsets (out of the 82,114 synsets) with 9,678 edges in its transitive reduction.", "cite_spans": [ { "start": 105, "end": 124, "text": "Patel et al. (2020)", "ref_id": "BIBREF18" }, { "start": 138, "end": 152, "text": "(Miller, 1995)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "5.1" }, { "text": "Joint Hierarchy In order to evaluate the performance on the joint hierarchy, Patel et al. (2020) created composition edges using the inter-relational semantics between hypernymy and meronymy. In particular, they use the following composition rules:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "ISA \u2218 \u22ef \u2218 ISA (0, 1, or 2 times) \u2218 HASPART \u2218 ISA \u2218 \u22ef \u2218 ISA (0, 1, or 2 times) = HASPART.", "eq_num": "(12)" } ], "section": "Dataset", "sec_num": "5.1" }, { "text": "To illustrate from Figure 2 , <Bird, HASPART, Wing> \u2227 <Wing, ISA, Appendage> implies <Bird, HASPART, Appendage>. In total, 189,613 composition edges are generated by the method described above for evaluation of the model on the joint hierarchy task. For each test/validation edge, a fixed set of negative samples of size 10 was generated by corrupting the head and tail 5 times each. The overall statistics for the dataset are provided in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 19, "end": 27, "text": "Figure 2", "ref_id": null }, { "start": 457, "end": 464, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Dataset", "sec_num": "5.1" }, { "text": "We have also created a second training dataset which further removes part of the transitive reduction to evaluate the models on their generalization capability (refer to Sections 5.4 & 5.5). The datasets used for those sections have different statistics, which are reported in the respective sections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "5.1" }, { "text": "We compare BOX-TRANSFORM MODEL against geometric embedding methods as well as knowledge base completion methods. We give a brief description of each baseline below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Models and Training Details", "sec_num": "5.2" }, { "text": "As mentioned in Section 4.2, Patel et al. (2020) extends the idea of box embeddings (Vilnis et al., 2018; Li et al., 2019) to model joint hierarchies by defining two boxes per node, one for each relation.", "cite_spans": [ { "start": 21, "end": 40, "text": "Patel et al. 
(2020)", "ref_id": "BIBREF18" }, { "start": 76, "end": 97, "text": "(Vilnis et al., 2018;", "ref_id": "BIBREF24" }, { "start": 98, "end": 114, "text": "Li et al., 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "TWO-BOX MODEL :", "sec_num": "1." }, { "text": "2. Order Embeddings: (Vendrov et al., 2016) treats each concept as axis parallel cones in positive orthant. We considered two different cone parameters for each entity following the TWO-BOX MODEL (Patel et al., 2020) .", "cite_spans": [ { "start": 21, "end": 43, "text": "(Vendrov et al., 2016)", "ref_id": "BIBREF23" }, { "start": 196, "end": 216, "text": "(Patel et al., 2020)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "TWO-BOX MODEL :", "sec_num": "1." }, { "text": "3. Poincar\u00e9 Embeddings: (Nickel and Kiela, 2017) & Hyperbolic Entailment Cones (Ganea et al., 2018b) : Tree-structured data are best captured in hyperbolic space (Chamberlain et al., 2017) . Thus in Nickel and Kiela (2017) , the authors learn embedding on ndimensional Poincar\u00e9 ball. For similar reasons, Ganea et al. (2018b) uses the hyperbolic space however they extend the hyperbolic point embeddings to entailment cones. Again, for these models, two separate parameters are considered for each entity. (Bordes et al., 2013; Sun et al., 2019) : This task can be posed as knowledge base completion for a KB with only two relations. Thus we evaluate TransE and RotatE which are simple yet effective methods for knowledge base embeddings, which achieve state-of-the-art for many knowledge base embedding tasks. Unlike the TWO-BOX MODEL (Patel et al., 2020) or the other baselines, these methods have shared representation for each entity, and thus they are expected to generalise better on missing edges.", "cite_spans": [ { "start": 24, "end": 48, "text": "(Nickel and Kiela, 2017)", "ref_id": "BIBREF16" }, { "start": 79, "end": 100, "text": "(Ganea et al., 2018b)", "ref_id": "BIBREF11" }, { "start": 162, "end": 188, "text": "(Chamberlain et al., 2017)", "ref_id": "BIBREF6" }, { "start": 199, "end": 222, "text": "Nickel and Kiela (2017)", "ref_id": "BIBREF16" }, { "start": 305, "end": 325, "text": "Ganea et al. (2018b)", "ref_id": "BIBREF11" }, { "start": 506, "end": 527, "text": "(Bordes et al., 2013;", "ref_id": "BIBREF5" }, { "start": 528, "end": 545, "text": "Sun et al., 2019)", "ref_id": "BIBREF21" }, { "start": 836, "end": 856, "text": "(Patel et al., 2020)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "TWO-BOX MODEL :", "sec_num": "1." }, { "text": "5. Hyperbolic KG Embeddings (Balazevic et al., 2019a; Chami et al., 2020) : We also compared our method against recently proposed KG embedding methods based on hyperbolic embeddings to model hierarchical structures present in KGs. The Multi-Relational Poincar\u00e9 model (MuRP) (Balazevic et al., 2019a) learns relation-specific transforms of the entities that are embedded in hyperbolic space. 
RotH (Chami et al., 2020) parameterizes the relation-specific transformation as a hyperbolic rotation, whereas AttH (Chami et al., 2020) combines hyperbolic reflection and rotation using attention.", "cite_spans": [ { "start": 28, "end": 53, "text": "(Balazevic et al., 2019a;", "ref_id": "BIBREF2" }, { "start": 54, "end": 73, "text": "Chami et al., 2020)", "ref_id": null }, { "start": 400, "end": 420, "text": "(Chami et al., 2020)", "ref_id": null }, { "start": 514, "end": 534, "text": "(Chami et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "TransE and RotatE", "sec_num": "4." }, { "text": "More training details are in Appendix A.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TransE and RotatE", "sec_num": "4." }, { "text": "In order to demonstrate the ability of the model to capture partially ordered (tree-like) data, most embedding methods (Ganea et al., 2018b; Nickel and Kiela, 2017; Patel et al., 2020) train their model on the transitive reduction and predict on the transitive closure. For an evaluation on modeling the joint hierarchy, therefore, it is natural to train the models only on the transitive reduction of hypernymy and meronymy and evaluate on the composition edges, as done in Patel et al. (2020) . We report the F1 score (with 1:10 negatives) for those edges in Table 2. The threshold used for the classification is determined by maximizing the F1 score on the validation set.", "cite_spans": [ { "start": 118, "end": 139, "text": "(Ganea et al., 2018b;", "ref_id": "BIBREF11" }, { "start": 140, "end": 163, "text": "Nickel and Kiela, 2017;", "ref_id": "BIBREF16" }, { "start": 164, "end": 183, "text": "Patel et al., 2020)", "ref_id": "BIBREF18" }, { "start": 474, "end": 493, "text": "Patel et al. (2020)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Composition Edges from Transitive Reduction", "sec_num": "5.3" }, { "text": "From Table 2 , we observe that BOX-TRANSFORM MODEL outperforms the other baselines by a significant margin. As mentioned in Patel et al. (2020) , and as we also observe in section 5.4, Poincar\u00e9 embeddings and hyperbolic entailment cones face difficulty in learning when presented only with transitive reduction edges. However, the hyperbolic KG methods AttH and RotH are able to learn the composite edges to a certain extent. The performance gain of RotH over its Euclidean counterpart RotE can be attributed to its inductive bias towards modeling hierarchies. The box embedding method proposed by Patel et al. (2020) performs on par with the order embedding method. However, using the GumbelBox formulation (Dasgupta et al., 2020), we observe a significant performance boost, as GumbelBox improves the local identifiability of the parameter space. Still, the capability of the BOX-TRANSFORM MODEL to benefit from shared cross-hierarchy features allows it to substantially outperform even this improved version of the TWO-BOX MODEL . This is likely due to the fact that the inductive bias provided by the transformation is more in line with the data; the model can benefit from the containments learned as a result of the ISA relation, and learn a HASPART transformation which potentially preserves these containments.", "cite_spans": [ { "start": 124, "end": 143, "text": "Patel et al. (2020)", "ref_id": "BIBREF18" }, { "start": 632, "end": 651, "text": "Patel et al. (2020)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 5, "end": 12, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Composition Edges from Transitive Reduction", "sec_num": "5.3" },
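As a sketch of this threshold selection (our own illustration; the score and label arrays are hypothetical):

```python
import numpy as np
from sklearn.metrics import f1_score

def pick_threshold(val_scores, val_labels):
    """Scan candidate thresholds on the validation set (1:10 pos:neg)
    and keep the one that maximizes F1."""
    best_t, best_f1 = None, -1.0
    for t in np.unique(val_scores):
        f1 = f1_score(val_labels, val_scores >= t)
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t

# The reported test F1 then uses the threshold chosen on validation data:
# f1_score(test_labels, test_scores >= pick_threshold(val_scores, val_labels))
```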
(2020)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 5, "end": 12, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Composition Edges from Transitive Reduction", "sec_num": "5.3" }, { "text": "In Patel et al. (2020) , and also in our previous experiment, we already observe that box embedding methods are highly capable of to recovering the transitive closure (in our case, composition edges) given the transitive reduction only. In this experiment, we train with even less of the transitive reduction, moving some of these edges to the test Table 3 , we observe that BOX-TRANSFORM MODEL outperforms all the baseline methods by a large extent. Although the two box model is performing worse than BOX-TRANSFORM MODEL , it is able to beat other baselines. Out of the two Knowledge base completion methods TransE performs the best and achieves comparative performance to two box model. Although the hyperbolic KG embeddings were able to perform well on the composite edges, their generalization performance is relatively lower than other KG embedding methods. We also observe that the RotE model that was under performing in composite edges, outperforms RotH by some margin in this generalization setting. We select the top three best performing methods for further analysis for each type of edges in the graph.", "cite_spans": [ { "start": 3, "end": 22, "text": "Patel et al. (2020)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 349, "end": 356, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Learning from Incomplete Transitive Reduction", "sec_num": "5.4" }, { "text": "Training on a subset of the transitive reduction showed that our model could generalize to composition edges even with the absence of essential edges to make such prediction. We further perform evaluation analysis using the same training data with the best-performed model selected by maximizing the f1 score on composition edges. We evaluate the model performance on the transitive closure for each hierarchy (ISA and HASPART ), and the composition edges on the joint hierarchy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance analysis on different splits", "sec_num": "5.5" }, { "text": "For each single hierarchy, some edges are removed from the transitive reduction X to create the incomplete transitive reduction training data X1. Evaluating the transitive closure of X directly evaluates the model's performance on each hierarchy, denoted as TC(X). This can be further divided into three categories: dataset that evaluates model ability to capture transitive closure of X1, TC(X1), dataset that evaluates model generalization ability on missing edges X \u2212 X1, and dataset that evaluates model's extended generalization ability on TC(X) \u2212 TC(X1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance analysis on different splits", "sec_num": "5.5" }, { "text": "Composition edges from the joint hierarchy can be analyzed the same way. COMP(X, Y ) represent all the composition edges in the full wordnet dataset, composed by ISA transitive reduction X and HASPART transitive reduction Y . It can be further divided into two categories: data that evaluate model overfitting ability to capture COMP(X 1 , Y 1 ) where X 1 and Y 1 is the corresponding training ISA and HASPART data in section 5.4, and data that evaluate model generalization ability on learning logical operations COMP(X, Y ) \u2212 COMP(X 1 , Y 1 ). 
The detailed statistics on each of these splits are provided in Appendix A.4. The evaluation dataset is created by randomly creating negative examples with a pos:neg ratio of 1:10. We select the top three models from section 5.4, then choose the threshold that maximizes the F1 score on the validation data of each split, and report the test F1. As shown in Tables 4 and 5, our model performs the best overall across the different dataset splits. BOX-TRANSFORM MODEL performs much better on the full transitive closure of ISA , and on all the composition edges. In general, BOX-TRANSFORM MODEL performs better on transitive closure and composition edges by a large margin in all overfitting settings. TransE does better on predicting removed edges from the transitive reduction (which serves more as an analysis of the model's capability, as it is not a typical evaluation for partial order completion); however, we note that our model does surprisingly well on the ISA missing edges, which we attribute to the shared semantics between the hierarchies made possible by this box-to-box transformation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance analysis on different splits", "sec_num": "5.5" }, { "text": "We proposed a box-to-box transformation that facilitates sharing of learned features across hierarchies when modeling joint hierarchies. We demonstrate that the BOX-TRANSFORM MODEL is capable of achieving state-of-the-art performance compared with other strong baseline models when predicting compositional edges across a joint hierarchy. Furthermore, the model also outperforms other models when modeling the transitive closure of each relation independently. In the future, we aim to extend the current model from two relations to multiple relations in order to obtain more generalization from hierarchical ISA edges.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "A.1 Dataset creation steps from Section 5.4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Appendix", "sec_num": null }, { "text": "In order to remove edges from the transitive reductions, we iterate through the transitive reduction edges of meronymy. With 0.5 probability we choose an edge for further processing. For each chosen HASPART edge, we select an outgoing ISA edge and pair them. We drop the ISA edge from the pair with 0.9 probability (the ratio of HASPART to ISA transitive reduction edges) and drop the HASPART edge if the ISA edge was not already dropped. This procedure ensures that all the edge removals happen around the composition edges; thus, the results reflect the model's true capacity to generalize well on this joint hierarchy task. We evaluate the model on the composition edges, the removed reduction edges, and the closure edges, 251,783 edges in total, which we split into two parts for validation and test. In Table 3 , we report the F1 score on this aggregated evaluation data with 1:10 fixed true negatives.", "cite_spans": [], "ref_spans": [ { "start": 801, "end": 808, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "A Appendix", "sec_num": null }, { "text": "In our experiments, we have kept the number of parameters the same across all the methods. We use 5-dimensional box embeddings for the Two Box Model (Patel et al., 2020) . Since box embeddings are specified using a min coordinate and a side length in each dimension, 
we compare with 10-dimensional order embeddings, Poincar\u00e9 embeddings, and hyperbolic entailment cones. However, since the above-mentioned methods use two separate sets of parameters for each node, we use 20-dimensional vectors for RotatE and TransE to account for that. Our BOX-TRANSFORM MODEL uses 10-dimensional box embeddings for the same reason.", "cite_spans": [ { "start": 145, "end": 165, "text": "(Patel et al., 2020)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "A.2 Training Details", "sec_num": null }, { "text": "Hyperparameter range: We use a Bayesian hyperparameter optimizer with the Hyperband algorithm for all the methods, using the web interface (Biewald, 2020) . The hyperparameter ranges are Gumbel \u03b2 \u2208 [0.001, 3], softplus temperature for box volume T \u2208 [1, 30], lr \u2208 [0.0005, 1], batch size \u2208 {8096, 2048, 1024, 512}, and number of negative samples \u2208 [2, 30] for all the methods. For max-margin training, we searched over the margin \u2208 [1, 50].", "cite_spans": [ { "start": 128, "end": 143, "text": "(Biewald, 2020)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "A.2 Training Details", "sec_num": null }, { "text": "The best hyperparameters for our method and a few competitive baselines are provided in appropriate config files along with the source code. We will make the code public after the anonymity period.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.2 Training Details", "sec_num": null }, { "text": "We plot 2-dimensional box embeddings to inspect the quality of our proposed BOX-TRANSFORM MODEL . Please refer to Figure 3 . Here, we use the box embedding parameters of the best-performing model from the experiment in Section 5.3 ( Table 2) . Note that the model is 10-dimensional. However, for a perfectly trained model of this hierarchical tree-like data, we should observe a large number of full containments, i.e., containment along each dimension. 
Thus, we pick two dimensions at random out of the 10 to visualize the box embeddings.", "cite_spans": [], "ref_spans": [ { "start": 114, "end": 122, "text": "Figure 3", "ref_id": null }, { "start": 218, "end": 226, "text": "Table 2)", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "A.3 Visualization", "sec_num": null }, { "text": "In the example in Figure 3 (next page), the facts that <Car, HASPART, Car Door> and <Car Door, ISA, Door> would enable us to infer that <Car, HASPART, Door>. This is a particular example of the compositional edges. We observe from Figure 3 that the HASPART transformations of \"Car Door\" and \"Door\" successfully enclose the ISA transformation of \"Car\"; thus our model is able to infer the composition edge <Car, HASPART, Door>. All the other composite edges, such as <Sedan, HASPART, Door>, can be similarly inferred from the visualization.", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 26, "text": "Figure 3", "ref_id": null }, { "start": 231, "end": 239, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "A.3 Visualization", "sec_num": null }, { "text": "A.4 Details of the splits from Section 5.5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.3 Visualization", "sec_num": null }, { "text": "We report the performance of our method on different splits which are qualitatively different from each other. The detailed statistics of these splits can be found in Tables 6 & 7.", "cite_spans": [], "ref_spans": [ { "start": 167, "end": 179, "text": "Table 6 & 7.", "ref_id": null } ], "eq_spans": [], "section": "A.3 Visualization", "sec_num": null } ], "back_matter": [ { "text": "We thank the anonymous reviewers for their constructive feedback. This work was supported in part by the Center for Intelligent Information Retrieval and the Center for Data Science, in part by the Chan Zuckerberg Initiative, in part by the National Science Foundation under Grant No. IIS-1763618, in part by University of Southern California subcontract no. 123875727 under Office of Naval Research prime contract no. N660011924032, and in part by University of Southern California subcontract no. 89341790 under Defense Advanced Research Projects Agency prime contract no. FA8750-17-C-0106. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "(a) Example of a joint hierarchy extracted from the WordNet dataset. (b) We plot the transformed ISA boxes for \"Sedan\" & \"Car\" and the transformed HASPART boxes for \"Door\", \"Car Door\", and \"Movable Barrier\" in the same space. The transformations preserve the containments and provide a consistent assignment of box embeddings for the example on the left.
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Boxe: A box embedding model for knowledge base completion", "authors": [ { "first": "Ralph", "middle": [], "last": "Abboud", "suffix": "" }, { "first": "\u0130smaililkan", "middle": [], "last": "Ceylan", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Lukasiewicz", "suffix": "" }, { "first": "Tommaso", "middle": [], "last": "Salvatori", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 34th Annual Conference on Neural Information Processing Systems NeurIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ralph Abboud,\u0130smail\u0130lkan Ceylan, Thomas Lukasiewicz, and Tommaso Salvatori. 2020. Boxe: A box embedding model for knowledge base completion. In Proceedings of the 34th Annual Conference on Neural Information Processing Systems NeurIPS.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Hierarchical density order embeddings", "authors": [ { "first": "Ben", "middle": [], "last": "Athiwaratkun", "suffix": "" }, { "first": "Andrew", "middle": [ "Gordon" ], "last": "Wilson", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ben Athiwaratkun and Andrew Gordon Wilson. 2018. Hierarchical density order embeddings. In Interna- tional Conference on Learning Representations.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Multi-relational poincar\u00e9 graph embeddings", "authors": [ { "first": "Ivana", "middle": [], "last": "Balazevic", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Allen", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Hospedales", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "4463--4473", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivana Balazevic, Carl Allen, and Timothy Hospedales. 2019a. Multi-relational poincar\u00e9 graph embeddings. In Advances in Neural Information Processing Sys- tems, volume 32, pages 4463-4473. Curran Asso- ciates, Inc.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "TuckER: Tensor factorization for knowledge graph completion", "authors": [ { "first": "Ivana", "middle": [], "last": "Balazevic", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Allen", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Hospedales", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivana Balazevic, Carl Allen, and Timothy Hospedales. 2019b. TuckER: Tensor factorization for knowledge graph completion. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Experiment tracking with weights and biases. 
Software available from wandb.com", "authors": [ { "first": "Lukas", "middle": [], "last": "Biewald", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lukas Biewald. 2020. Experiment tracking with weights and biases. Software available from wandb.com.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Translating embeddings for modeling multi-relational data", "authors": [ { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Usunier", "suffix": "" }, { "first": "A", "middle": [], "last": "Garcia-Duran", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Oksana", "middle": [], "last": "Yakhnenko", "suffix": "" } ], "year": 2013, "venue": "Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antoine Bordes, Nicolas Usunier, A. Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Neural Information Processing Systems.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Neural embeddings of graphs in hyperbolic space. 13th international workshop on mining and learning from graphs held in conjunction with KDD", "authors": [ { "first": "Benjamin", "middle": [ "Paul" ], "last": "Chamberlain", "suffix": "" }, { "first": "James", "middle": [ "R" ], "last": "Clough", "suffix": "" }, { "first": "Marc", "middle": [ "Peter" ], "last": "Deisenroth", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Paul Chamberlain, James R. Clough, and Marc Peter Deisenroth. 2017. Neural embeddings of graphs in hyperbolic space. 13th international workshop on mining and learning from graphs held in conjunction with KDD.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Improving local identifiability for probabilistic box embeddings", "authors": [ { "first": "Shib", "middle": [ "Sankar" ], "last": "Dasgupta", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Boratko", "suffix": "" }, { "first": "Dongxu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Vilnis", "suffix": "" }, { "first": "Xiang", "middle": [ "Lorraine" ], "last": "Li", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "McCallum", "suffix": "" } ], "year": 2020, "venue": "Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shib Sankar Dasgupta, Michael Boratko, Dongxu Zhang, Luke Vilnis, Xiang Lorraine Li, and Andrew McCallum. 2020. Improving local identifiability for probabilistic box embeddings. In Neural Information Processing Systems.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Representing words as regions in vector space", "authors": [ { "first": "Katrin", "middle": [], "last": "Erk", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katrin Erk. 2009. Representing words as regions in vector space. 
In Proceedings of the Thirteenth Conference on Computational Natural Language Learning.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Hyperbolic neural networks", "authors": [ { "first": "Octavian", "middle": [], "last": "Ganea", "suffix": "" }, { "first": "Gary", "middle": [], "last": "B\u00e9cigneul", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Hofmann", "suffix": "" } ], "year": 2018, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5345--5355", "other_ids": {}, "num": null, "urls": [], "raw_text": "Octavian Ganea, Gary B\u00e9cigneul, and Thomas Hofmann. 2018a. Hyperbolic neural networks. In Advances in neural information processing systems, pages 5345-5355.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Hyperbolic entailment cones for learning hierarchical embeddings", "authors": [ { "first": "Octavian-Eugen", "middle": [], "last": "Ganea", "suffix": "" }, { "first": "Gary", "middle": [], "last": "B\u00e9cigneul", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Hofmann", "suffix": "" } ], "year": 2018, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Octavian-Eugen Ganea, Gary B\u00e9cigneul, and Thomas Hofmann. 2018b. Hyperbolic entailment cones for learning hierarchical embeddings. In International Conference on Machine Learning.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The combination of prototype concepts. The psychology of word meanings", "authors": [ { "first": "James", "middle": [ "A" ], "last": "Hampton", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "91--116", "other_ids": {}, "num": null, "urls": [], "raw_text": "James A Hampton. 1991. The combination of prototype concepts. The psychology of word meanings, pages 91-116.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Learning to predict denotational probabilities for modeling entailment", "authors": [ { "first": "Alice", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Hockenmaier", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alice Lai and Julia Hockenmaier. 2017. Learning to predict denotational probabilities for modeling entailment. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Smoothing the geometry of probabilistic box embeddings", "authors": [ { "first": "Xiang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Vilnis", "suffix": "" }, { "first": "Dongxu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Boratko", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "McCallum", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiang Li, Luke Vilnis, Dongxu Zhang, Michael Boratko, and Andrew McCallum. 2019.
Smoothing the geometry of probabilistic box embeddings. In International Conference on Learning Representations.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "WordNet: a lexical database for English", "authors": [ { "first": "George", "middle": [ "A" ], "last": "Miller", "suffix": "" } ], "year": 1995, "venue": "Communications of the ACM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A Miller. 1995. WordNet: a lexical database for English. Communications of the ACM.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Poincar\u00e9 embeddings for learning hierarchical representations", "authors": [ { "first": "Maximilian", "middle": [], "last": "Nickel", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" } ], "year": 2017, "venue": "Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maximilian Nickel and Douwe Kiela. 2017. Poincar\u00e9 embeddings for learning hierarchical representations. In Neural Information Processing Systems.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Attention, similarity, and the identification-categorization relationship", "authors": [ { "first": "Robert", "middle": [ "M" ], "last": "Nosofsky", "suffix": "" } ], "year": 1986, "venue": "Journal of experimental psychology: General", "volume": "115", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert M Nosofsky. 1986. Attention, similarity, and the identification-categorization relationship. Journal of experimental psychology: General, 115(1):39.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Representing joint hierarchies with box embeddings", "authors": [ { "first": "Dhruvesh", "middle": [], "last": "Patel", "suffix": "" }, { "first": "Shib", "middle": [ "Sankar" ], "last": "Dasgupta", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Boratko", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Vilnis", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "McCallum", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dhruvesh Patel, Shib Sankar Dasgupta, Michael Boratko, Xiang Li, Luke Vilnis, and Andrew McCallum. 2020. Representing joint hierarchies with box embeddings. Automated Knowledge Base Construction.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Query2box: Reasoning over knowledge graphs in vector space using box embeddings", "authors": [ { "first": "Hongyu", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Weihua", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Jure", "middle": [], "last": "Leskovec", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hongyu Ren, Weihua Hu, and Jure Leskovec. 2020. Query2box: Reasoning over knowledge graphs in vector space using box embeddings.
International Conference on Learning Representations.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Combining prototypes: A selective modification model", "authors": [ { "first": "Edward", "middle": [ "E" ], "last": "Smith", "suffix": "" }, { "first": "Daniel", "middle": [ "N" ], "last": "Osherson", "suffix": "" }, { "first": "Lance", "middle": [ "J" ], "last": "Rips", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Keane", "suffix": "" } ], "year": 1988, "venue": "Cognitive science", "volume": "12", "issue": "4", "pages": "485--527", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edward E Smith, Daniel N Osherson, Lance J Rips, and Margaret Keane. 1988. Combining prototypes: A selective modification model. Cognitive science, 12(4):485-527.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "RotatE: Knowledge graph embedding by relational rotation in complex space", "authors": [ { "first": "Zhiqing", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Zhi-Hong", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Jian-Yun", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Tang", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. RotatE: Knowledge graph embedding by relational rotation in complex space. International Conference on Learning Representations.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Complex embeddings for simple link prediction", "authors": [ { "first": "Th\u00e9o", "middle": [], "last": "Trouillon", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Welbl", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "\u00c9ric", "middle": [], "last": "Gaussier", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Bouchard", "suffix": "" } ], "year": 2016, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Th\u00e9o Trouillon, Johannes Welbl, Sebastian Riedel, \u00c9ric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In International Conference on Machine Learning.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Order-embeddings of images and language", "authors": [ { "first": "Ivan", "middle": [], "last": "Vendrov", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "Sanja", "middle": [], "last": "Fidler", "suffix": "" }, { "first": "Raquel", "middle": [], "last": "Urtasun", "suffix": "" } ], "year": 2016, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. 2016. Order-embeddings of images and language.
In International Conference on Learning Representations.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Probabilistic embedding of knowledge graphs with box lattice measures", "authors": [ { "first": "Luke", "middle": [], "last": "Vilnis", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Shikhar", "middle": [], "last": "Murty", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "McCallum", "suffix": "" } ], "year": 2018, "venue": "Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luke Vilnis, Xiang Li, Shikhar Murty, and Andrew McCallum. 2018. Probabilistic embedding of knowledge graphs with box lattice measures. In Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Word representations via Gaussian embedding. International Conference on Learning Representations", "authors": [ { "first": "Luke", "middle": [], "last": "Vilnis", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "McCallum", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luke Vilnis and Andrew McCallum. 2015. Word representations via Gaussian embedding. International Conference on Learning Representations.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "An example Box Embedding representation of the ISA hierarchy where" }, "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "1. a \u2aaf a (reflexivity) 2. if a \u2aaf b and b \u2aaf a then a = b (antisymmetry) 3. if a \u2aaf b and b \u2aaf c then a \u2aaf c (transitivity) Definition 2 (Lattice). A lattice is a poset in which each pair of elements has a least upper bound called the join, denoted by \u2228, and a greatest lower bound called the meet, denoted by \u2227. (A worked instance of this definition for boxes is sketched at the end of this document.)" }, "TABREF0": { "type_str": "table", "num": null, "content": "
Relation | Transitive Reduction | Transitive Closure | Validation (pos/neg) | Test (pos/neg)
Hypernym | 84,363 | 661,127 | 28,838 / 288,380 | 28,838 / 288,380
Meronym | 9,678 | 30,333 | 5,164 / 51,640 | 5,164 / 51,640
Composite Edge | - | - | 94,807 / 948,070 | 94,806 / 948,070
", "html": null, "text": "Details of the hypernymy, meronymy hierarchies and the composition edges." }, "TABREF1": { "type_str": "table", "num": null, "content": "
Method | F1 score (%)
Poincar\u00e9 Embeddings | 43.8
Hyperbolic Entailment Cones | 44.0
TransE | 57.0
RotatE | 51.0
Order Embeddings | 68.5
MuRP | 21.4
AttH | 51.3
RotE | 51.5
RotH | 55.8
TWO-BOX MODEL (Patel et al., 2020) | 68.1
TWO-BOX MODEL (with GumbelBox) | 73.7
BOX-TRANSFORM MODEL | 82.2
", "html": null, "text": "Test F1 scores(%)of various methods for predicting the Composition edges." }, "TABREF2": { "type_str": "table", "num": null, "content": "
Method | F1 score (%)
Poincar\u00e9 Embeddings | 33.5
Hyperbolic Entailment Cones | 36.0
TransE | 57.0
RotatE | 55.0
Order Embeddings | 54.5
MuRP | 20.1
AttH | 27.0
RotE | 48.8
RotH | 46.7
TWO-BOX MODEL (with GumbelBox) | 58.9
BOX-TRANSFORM MODEL | 63.9
", "html": null, "text": "Test F1 scores(%) of various methods for generalization capability." }, "TABREF3": { "type_str": "table", "num": null, "content": "
Method | Type | Overall TC(X) | Overfitting TC(X1) | Generalization X - X1 | Extended Generalization TC(X) - TC(X1) - (X - X1)
TransE | ISA | 52.9 | 52.1 | 66.5 | 46.0
TWO-BOX MODEL | ISA | 47.8 | 58.9 | 19.9 | 22.9
BOX-TRANSFORM MODEL | ISA | 57.3 | 60.0 | 65.9 | 44.4
TransE | HASPART | 59.9 | 63.0 | 56.1 | 48.3
TWO-BOX MODEL | HASPART | 51.6 | 54.8 | 40.8 | 37.8
BOX-TRANSFORM MODEL | HASPART | 58.8 | 64.2 | 33.4 | 25.4
", "html": null, "text": "Single hierarchy F1 score (%) analysis on ISA and HASPART . The overall dataset is the combination of overfitting, generalization and extended generalization" }, "TABREF4": { "type_str": "table", "num": null, "content": "
Method | Overall COMP(X, Y) | Overfitting COMP(X1, Y1) | Generalization COMP(X, Y) - COMP(X1, Y1)
TransE | 58.8 | 70.1 | 68.6
TWO-BOX MODEL | 62.5 | 72.7 | 63.6
BOX-TRANSFORM MODEL | 69.6 | 86.1 | 70.0
Now, reconstruction of the closure and the composition edges requires models to generalize over the missing parts of the graph. We train on 9,175 meronymy edges and 80,372 hypernymy edges, and test/validate on an aggregated pool of 251,783 edges. Please refer to Appendix A.1 for details on dataset creation and statistics.
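To make the reduction/closure distinction behind these splits concrete, the following is a minimal sketch that derives train and evaluation edge sets from a toy ISA graph; the toy edges and the networkx dependency are our illustrative assumptions, not the authors' code.

# Sketch: train on the transitive reduction, evaluate on the remaining
# edges of the transitive closure. Toy hierarchy and networkx are assumptions.
import networkx as nx

# A tiny ISA hierarchy: child -> parent edges, with one redundant (implied) edge.
G = nx.DiGraph([
    ("poodle", "dog"),
    ("dog", "mammal"),
    ("poodle", "mammal"),  # implied by the two edges above
    ("mammal", "animal"),
])

reduction = nx.transitive_reduction(G)               # minimal edge set implying the hierarchy
closure = nx.transitive_closure(G, reflexive=False)  # every entailed ISA pair

train_edges = set(reduction.edges())
eval_edges = set(closure.edges()) - train_edges      # held-out entailed pairs

print(sorted(train_edges))  # [('dog', 'mammal'), ('mammal', 'animal'), ('poodle', 'dog')]
print(sorted(eval_edges))   # [('dog', 'animal'), ('poodle', 'animal'), ('poodle', 'mammal')]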
", "html": null, "text": "Joint hierarchy F1 score (%) analysis. The overall data is the combination of overfitting and generalization." } } } }