{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:34:43.534473Z" }, "title": "TempCaps: A Capsule Network-based Embedding Model for Temporal Knowledge Graph Completion", "authors": [ { "first": "Guirong", "middle": [], "last": "Fu", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Zhao", "middle": [], "last": "Meng", "suffix": "", "affiliation": {}, "email": "zhmeng@ethz.ch" }, { "first": "Zhen", "middle": [], "last": "Han", "suffix": "", "affiliation": {}, "email": "zhen.han@campus.lmu.de" }, { "first": "Zifeng", "middle": [], "last": "Ding", "suffix": "", "affiliation": {}, "email": "zifeng.ding@siemens.com" }, { "first": "Yunpu", "middle": [], "last": "Ma", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Matthias", "middle": [], "last": "Schubert", "suffix": "", "affiliation": {}, "email": "schubert@dbs.ifi.lmu.de" }, { "first": "Volker", "middle": [], "last": "Tresp", "suffix": "", "affiliation": {}, "email": "volker.tresp@siemens.com" }, { "first": "Roger", "middle": [], "last": "Wattenhofer", "suffix": "", "affiliation": {}, "email": "wattenhofer@ethz.ch" }, { "first": "Eth", "middle": [], "last": "Z\u00fcrich", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Lmu", "middle": [], "last": "Munich", "suffix": "", "affiliation": {}, "email": "" }, { "first": "A", "middle": [ "G" ], "last": "Siemens", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Temporal knowledge graphs store the dynamics of entities and relations during a time period. However, typical temporal knowledge graphs often suffer from incomplete dynamics with missing facts in real-world scenarios. Hence, modeling temporal knowledge graphs to complete the missing facts is important. In this paper, we tackle the temporal knowledge graph completion task by proposing TempCaps, which is a Capsule networkbased embedding model for Temporal knowledge graph completion. TempCaps models temporal knowledge graphs by introducing a novel dynamic routing aggregator inspired by Capsule Networks. Specifically, TempCaps builds entity embeddings by dynamically routing retrieved temporal relation and neighbor information. Experimental results demonstrate that TempCaps reaches state-of-the-art performance for temporal knowledge graph completion. Additional analysis also shows that TempCaps is efficient 1 .", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Temporal knowledge graphs store the dynamics of entities and relations during a time period. However, typical temporal knowledge graphs often suffer from incomplete dynamics with missing facts in real-world scenarios. Hence, modeling temporal knowledge graphs to complete the missing facts is important. In this paper, we tackle the temporal knowledge graph completion task by proposing TempCaps, which is a Capsule networkbased embedding model for Temporal knowledge graph completion. TempCaps models temporal knowledge graphs by introducing a novel dynamic routing aggregator inspired by Capsule Networks. Specifically, TempCaps builds entity embeddings by dynamically routing retrieved temporal relation and neighbor information. Experimental results demonstrate that TempCaps reaches state-of-the-art performance for temporal knowledge graph completion. 
Additional analysis also shows that TempCaps is efficient 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Knowledge graphs (KGs) organize and integrate information in a structured manner, which is humanreadable and suitable for computer processing. This advantage of knowledge graphs is helping to bridge the gap between humans and computers. Numerous real-world applications have benefited from KGs. In particular, recent advances in artificial intelligence have motivated researchers to use knowledge graphs to boost performance in downstream applications, including natural language processing (IV et al., 2019; Bosselut et al., 2019) and computer vision (Yu et al., 2021; Marino et al., 2017) .", "cite_spans": [ { "start": 491, "end": 508, "text": "(IV et al., 2019;", "ref_id": "BIBREF13" }, { "start": 509, "end": 531, "text": "Bosselut et al., 2019)", "ref_id": "BIBREF2" }, { "start": 552, "end": 569, "text": "(Yu et al., 2021;", "ref_id": "BIBREF30" }, { "start": 570, "end": 590, "text": "Marino et al., 2017)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Despite the usefulness of knowledge graphs, existing knowledge graphs are often incomplete, which means important facts might be missing. To tackle this problem, researchers have developed various methods for the task of knowledge graph completion (Nickel et al., 2011; Bordes et al., 2013) , aiming to recover missing facts for existing knowledge graphs. In particular, Nguyen et al. (2019) explored the Capsule Network (Cap-sNet) (Sabour et al., 2017) for modeling knowledge graphs. CapsE(Nguyen et al., 2019) demonstrate that each dimension of the entity, as well as relation, embeddings also have diverse variations in different contexts. Thus, they used capsules to encode many characteristics in the embedding triple and represent the entries at the corresponding dimension, showing superior performance to other KG models.", "cite_spans": [ { "start": 248, "end": 269, "text": "(Nickel et al., 2011;", "ref_id": "BIBREF21" }, { "start": 270, "end": 290, "text": "Bordes et al., 2013)", "ref_id": "BIBREF0" }, { "start": 356, "end": 391, "text": "In particular, Nguyen et al. (2019)", "ref_id": null }, { "start": 432, "end": 453, "text": "(Sabour et al., 2017)", "ref_id": "BIBREF22" }, { "start": 485, "end": 511, "text": "CapsE(Nguyen et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Existing studies, including CapsE, focus on completing static knowledge graphs. In reality, however, multi-relational data is often time-dependent. Moreover, static knowledge graphs fail to adequately describe the changing essence of the world, indicating that knowledge or facts being true in the past might not always stay true. For instance, social networks constantly change. Static knowledge graphs fail to model these changes. To this end, temporal knowledge graphs (tKGs) are introduced to grasp these dynamic changes. Specifically, temporal facts are represented as a quadruple by extending the static triplet with a timestamp describing when these facts occurred, i.e. (Barack Obama, inaugurated, president of the US, 2009) . 
Similar to static KG, tKGs also suffer from the problem of incompleteness, making the task of temporal knowledge graph completion eminent (Bordes et al., 2013; Lin et al., 2015) .", "cite_spans": [ { "start": 723, "end": 732, "text": "US, 2009)", "ref_id": null }, { "start": 873, "end": 894, "text": "(Bordes et al., 2013;", "ref_id": "BIBREF0" }, { "start": 895, "end": 912, "text": "Lin et al., 2015)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we take advantage of the Capsule Network paradigm and generalize it for modeling tKGs. We introduce TempCaps, which is a Capsule network-based embedding model for Temporal knowledge graph completion. As shown in Figure 1, TempCaps consists of a neighbor selector, an entity embedding layer, a dynamic routing aggregator and a multi-layer perceptron (MLP) decoder. Unlike CapsE, we incorporate the temporal information of tKGs into our model: First, we pose temporal constraints on neighbor selection by introducing a time window. At a given time step, we only take the neighbors that interact with the source entity within the time window into account for capturing the entity features. Second, we propose a time-dependent dynamic routing mechanism that incorporates time information into routing weight matrix. Third, we exploit the temporal weighting vectors generated during the dynamic routing to calculate the output probability, which reflects how tightly lower capsules connect with higher capsules.", "cite_spans": [], "ref_spans": [ { "start": 227, "end": 233, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our contributions are in the following: (i) We propose TempCaps, which leverages Capsule Networks by dynamically routing retrieved temporal relations and neighboring entities. An advantage of our model is that different capsules can capture different aspects of the same entity. Such advantage is important for modeling temporal knowledge graphs, which are dynamic, and often involve one entity in multiple timestamps. (ii) Our TempCaps improves the performance of temporal knowledge graph completion. Experimental results show that our model achieves state-of-the-art performance on the GDELT and ICEWS datasets. Furthermore, our model is light-weighted and efficient compared to previous methods for modeling tKGs. (iii) As far as we know, we are the first to use Capsule Networks for tKGs. Our experiments show that by leveraging dynamic routing, TempCaps is suitable for both discrete and continuous timestamps and can be easily generalized to unseen timestamps. (iv) We conduct additional ablation studies to understand how each part of TempCaps contributes to the model performance. We also show that TempCaps is efficient by analyzing time and space complexity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Knowledge Graph Embedding (KGE) maps entities and relations into low-dimensional continuous vectors. Two types of KGEs, including static KGE and temporal KGE, have attracted attention from the community. In the rest of this subsection, we give an overview of static and temporal KGE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Graph Embedding", "sec_num": "2.1" }, { "text": "Static Knowledge Graph Embedding. Embedding approaches for static KGs can generally be categorized into bilinear models and transition-based models. 
TransE (Bordes et al., 2013) leverages the transition-based approach, which measures the plausibility of a triple as the distance between the object entity's embedding and the embedding of the subject after the relational transition. Similarly, by using additional projection vectors, Wang et al. (2014) extend TransE to translate entity embeddings into the vector space of relations. Other works including RESCAL (Nickel et al., 2011) , DisMult , and SimplE (Kazemi and Poole, 2018) use a bilinear score function, which represents predicates as linear transformations of entity embeddings. However, these KGE methods are not suitable for tKGs as they cannot capture the temporal dynamics of tKGs.", "cite_spans": [ { "start": 156, "end": 177, "text": "(Bordes et al., 2013)", "ref_id": "BIBREF0" }, { "start": 434, "end": 452, "text": "Wang et al. (2014)", "ref_id": "BIBREF27" }, { "start": 563, "end": 584, "text": "(Nickel et al., 2011)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Knowledge Graph Embedding", "sec_num": "2.1" }, { "text": "Temporal Knowledge Graph Embedding. Temporal KGE approaches aim to capture both temporal and relational information to improve the performance of the completion task. Han et al. (2021b) assessed well-known temporal embeddings of tKGE models via an extensive experimental study and released the first open unified open-source framework for temporal KG completion models with full composability. HyTE (Dasgupta et al., 2018) embeds time information in the entity-relation space by arranging a temporal hyperplane to each timestamp and uses TransE as interaction model to compute the plausibility score of facts. DE-SimplE (Goel et al., 2020) extends SimplE by exploring the diachronic function to model entity embeddings at different timestamps. TA-DistMult (Garc\u00eda-Dur\u00e1n et al., 2018) utilizes recurrent neural networks to learn time-aware representations of relations and adopt DistMult as the score function. Moreover, Han et al. (2020a) introduced a non-Euclidean embedding approach that learns evolving entity representations in a product of Riemannian manifolds. Besides, Han et al. (2022) enhanced temporal knowledge embedding using contextualized language representations and achived state-of-the-art results. Besides the completion task, researchers have also paid attention to use temporal KGE for forecasting on tKGs (Trivedi et al., 2017; Jin et al., 2020; Han et al., 2020b Han et al., ,c, 2021a . Forecasting tasks predict future links based on past observations, while the completion tasks interpolate missing links at observed timestamps. In this work, we focus on the tKG completion task. Sabour et al. (2017) propose Capsule Networks to capture different entities in images by leveraging dynamic routing between different layers of Capsule Networks. As a result, capsule Networks reach comparable or even better performance when compared to convolutional neural networks, while at the same time being more efficient and more robust to affine transformation. Following Sabour et al. (2017) , researchers have proposed various methods to improve the performance of Capsule Networks. Hahn et al. (2019) boost the performance of Capsule Networks by using a novel self-routing mechanism. Tsai et al. (2020) propose to use inverted dot-product attention routing to improve Capsule Networks. We give more details on the basics of Capsule Networks in Section 3.2.2.", "cite_spans": [ { "start": 167, "end": 185, "text": "Han et al. 
(2021b)", "ref_id": "BIBREF12" }, { "start": 620, "end": 639, "text": "(Goel et al., 2020)", "ref_id": "BIBREF5" }, { "start": 920, "end": 938, "text": "Han et al. (2020a)", "ref_id": "BIBREF7" }, { "start": 1076, "end": 1093, "text": "Han et al. (2022)", "ref_id": null }, { "start": 1326, "end": 1348, "text": "(Trivedi et al., 2017;", "ref_id": "BIBREF24" }, { "start": 1349, "end": 1366, "text": "Jin et al., 2020;", "ref_id": "BIBREF14" }, { "start": 1367, "end": 1384, "text": "Han et al., 2020b", "ref_id": "BIBREF8" }, { "start": 1385, "end": 1406, "text": "Han et al., ,c, 2021a", "ref_id": null }, { "start": 1604, "end": 1624, "text": "Sabour et al. (2017)", "ref_id": "BIBREF22" }, { "start": 1984, "end": 2004, "text": "Sabour et al. (2017)", "ref_id": "BIBREF22" }, { "start": 2097, "end": 2115, "text": "Hahn et al. (2019)", "ref_id": "BIBREF6" }, { "start": 2199, "end": 2217, "text": "Tsai et al. (2020)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Knowledge Graph Embedding", "sec_num": "2.1" }, { "text": "Apart from the vision domain, previous work has shown that Capsule Networks are also useful for modeling static knowledge graphs. (Nguyen et al., 2019) propose CapsE, which represents each triplet fact (subject, relation, object) in a knowledge graph as a 3-column matrix, each of which corresponds to an entity in a fact. CapsE reaches state-of-the-art performance on static knowledge graph completion tasks.", "cite_spans": [ { "start": 130, "end": 151, "text": "(Nguyen et al., 2019)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Capsule Network", "sec_num": "2.2" }, { "text": "This paper proposes TempCaps, which uses Capsule Networks to model tKGs. Despite all previous works on Capsule Networks, we are the first to model tKGs with Capsule Networks to the best of our knowledge. Experimental results show that TempCaps achieves competitive performance on the temporal knowledge graph completion task. We present the details of TempCaps in Section 3.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Capsule Network", "sec_num": "2.2" }, { "text": "A temporal knowledge graph (tKG) is a collection of valid facts with temporal information. A fact in tKG is a quadruple of (s, r, o, t), which consists of subject s, relation r, object o, and timestamp t. We use E, R, and T to denote the sets of entities, relations, and timestamps involved in at least one fact in a given tKG. |E|, |R| and |T | are the number of elements in each set, respectively. A tKG can be viewed as the union of KG snapshots at each timestamp. Formally, we have:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Formulation", "sec_num": "3.1" }, { "text": "G = G(t 1 ) \u222a G(t 2 ) \u222a \u2022 \u2022 \u2022 G(t i ) \u2022 \u2022 \u2022 \u222a G(t max ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Formulation", "sec_num": "3.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Formulation", "sec_num": "3.1" }, { "text": "G(t i ) = {(s, r, o, t i )|t i \u2208 T } is a snapshot of G at timestamp t i , and t max = max(t i |t i \u2208 T ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Formulation", "sec_num": "3.1" }, { "text": "Temporal Knowledge Graph Completion (TKGC) aims to predict unobserved missing facts from incomplete tKGs. In TKGC, both unobserved and observed facts share the same period of time. 
Let O be the observed true facts from a complete tKG G (G contains both observed true facts and to-be-predicted facts), we denote the set of missing facts as\u014c = G \\ O which should be predicted in the context of TKGC. In our work, we only consider predicting the missing subject or the missing object of the missing facts. For every missing fact (s, r, o, t) \u2208\u014c, two prediction queries (s, r, ?, t) and (?, r, o, t) are generated, and our model aims to rank the ground-truth subject entity s from (?, r, o, t), as well as the ground-truth object entity o from (s, r, ?, t), as high as possible among all candidate entities. For simplicity, we present the equations and illustrate our method with only object prediction. During training and evaluation of our experiments, we include both subject prediction and object prediction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Formulation", "sec_num": "3.1" }, { "text": "We propose TempCaps, a Capsule network-based embedding model for Temporal knowledge graph completion. TempCaps first selects two types of neighboring entities, i.e., local entities and global relational entities, for each entity of the tKG. Then it learns the embeddings of entities based on the retrieved neighbors using a dynamic routing module (see Section 3.2.5). Finally, TempCaps ranks the entities from the candidate set by feeding the embeddings of the entities to a scoring module. Figure 1 gives an illustration of TempCaps.", "cite_spans": [], "ref_spans": [ { "start": 491, "end": 499, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Overview", "sec_num": "3.2.1" }, { "text": "Capsule networks are built with two critical components: capsules and the dynamic routing mechanism.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Capsule Network", "sec_num": "3.2.2" }, { "text": "A capsule is a set of neurons processing different information about an entity, and the activities of the neurons within an active capsule represent the various properties of a particular entity (Sabour et al., 2017) . We use a squash function proposed by Sabour et al. to ", "cite_spans": [ { "start": 195, "end": 216, "text": "(Sabour et al., 2017)", "ref_id": "BIBREF22" }, { "start": 256, "end": 272, "text": "Sabour et al. to", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Capsule Network", "sec_num": "3.2.2" }, { "text": "Figure 1: Overview of TempCaps. Assume we want to predict the ground truth object of a prediction query (Biden, Make statement, ?, 2021-05-03), given all the observed facts. TempCaps first selects different types of neighboring entities of the query subject Biden, and embeds these neighbors with capsules. Then it utilizes the dynamic routing aggregator to learn Biden's contextualized embedding. A multi-layer perceptron (MLP) decoder takes the learned embedding and performs a multi-class classification over all candidates, producing scores for every entity in the candidate set. 
The entity with the highest score (Iran in this example) is the predicted object.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Biden", "sec_num": null }, { "text": "vector stays between 0 and 1:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Biden", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "v j = \u2225s j \u2225 2 1 + \u2225s j \u2225 2 s j \u2225s j \u2225 ,", "eq_num": "(1)" } ], "section": "Biden", "sec_num": null }, { "text": "where s j is the input of a capsule and v j is its squashed output. Routing by agreements regulates how capsules communicate between layers. The dynamic routing mechanism (Sabour et al., 2017) works as follows. All output vectors u i of capsules in the lower layer are first multiplied by a weight matrix W ij . Then, the weighted sum of newly obtained vectors are input into a capsule s j in the next layer:", "cite_spans": [ { "start": 171, "end": 192, "text": "(Sabour et al., 2017)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Biden", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "u j|i = W ij u i , s j = i c ij\u00fbj|i ,", "eq_num": "(2)" } ], "section": "Biden", "sec_num": null }, { "text": "where c ij is the coupling coefficient between capsule i and capsule j. In our work, we initialize each entity's embedding with a capsule in the first capsule layer. By performing routing by agreements, we achieve information aggregation between an entity and its selected neighbors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Biden", "sec_num": null }, { "text": "Similar to static KGs, in tKGs, we can still treat entities as nodes (relations as edges). Inspired by previous works in graph neural network (Kipf and Welling, 2017; Velickovic et al., 2018; Xu et al., 2019) , where the embeddings of nodes are derived by the n-hop neighbors of the nodes, TempCaps computes the embedding of each node, i.e., entity in the context of tKGs, by leveraging information from the temporal neighbors of that node in the tKG. Given a prediction query (s, r, ?, t), Temp-Caps selects two types of neighbors, namely, local entities and global relational entities, for the query subject s. A local entity is an object entity o \u2032 which originates from an observed fact (s, r, o \u2032 , t \u2032 ), where t \u2032 can be any timestamp within a fixed range around the query timestamp. 
We denote the set of all local entities at all timestamps as E l (s, r):", "cite_spans": [ { "start": 142, "end": 166, "text": "(Kipf and Welling, 2017;", "ref_id": "BIBREF16" }, { "start": 167, "end": 191, "text": "Velickovic et al., 2018;", "ref_id": "BIBREF26" }, { "start": 192, "end": 208, "text": "Xu et al., 2019)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Neighbor Selector", "sec_num": "3.2.3" }, { "text": "E l (s, r) = {o \u2032 |(s, r, o \u2032 , t \u2032 ), max(t \u2212 \u2206t e , t 1 ) \u2264 t \u2032 \u2264 min(t + \u2206t e , t max )}}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neighbor Selector", "sec_num": "3.2.3" }, { "text": "To avoid including excessive entities into E l , Tem-pCaps samples local entities from all observed facts within a pre-defined time window \u2206t e .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neighbor Selector", "sec_num": "3.2.3" }, { "text": "A global relational entity is an object entity o \u2032 which originates from an observed fact (s \u2032 , r, o \u2032 , t \u2032 ), where s \u2032 can be any entity and t \u2032 can be any timestamp within a fixed range around the query timestamp. We denote the set of all local entities at all timestamps as E g :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neighbor Selector", "sec_num": "3.2.3" }, { "text": "E g (r) = {o \u2032 |(s \u2032 , r, o \u2032 , t \u2032 ), max(t \u2212 \u2206t r , t 1 ) \u2264 t \u2032 \u2264 min(t + \u2206t r , t max )}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neighbor Selector", "sec_num": "3.2.3" }, { "text": "Similarly, global relational entities are selected within a time window \u2206t r .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neighbor Selector", "sec_num": "3.2.3" }, { "text": "We further define the set of all selected neighbors as E n = {E l , E g }. By restricting neighbors within time windows around the query timestamp, TempCaps selects entities that have greater influence on the query subject s. We employ different time windows to select local entities, and global relational entities as different types of neighbors have different influence on the query subject s. We treat the time windows, i.e., \u2206t e and \u2206t r , as hyperparameters during finetuning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neighbor Selector", "sec_num": "3.2.3" }, { "text": "In CapsNet (Sabour et al., 2017) , the log prior probability b ij between two capsules i and j are learned depending on the locations and the types of both capsules. It is used to compute the coupling coefficient stated in Equation 2:", "cite_spans": [ { "start": 11, "end": 32, "text": "(Sabour et al., 2017)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Temporal Weighting Function", "sec_num": "3.2.4" }, { "text": "c ij = exp(b ij ) k exp(b ik )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Temporal Weighting Function", "sec_num": "3.2.4" }, { "text": ". Inspired by CapsNet, we initialize the log prior probability between the query subject s and its selected neighbor o \u2032 with a temporal weighting function, as we consider the time difference between these two entities as the difference of capsule locations. The intuition is that, for a prediction query (s, r, ?, t), a neighbor that connects with s near to t should have more influence on s than a temporally-farther neighbor. 
Hence, we assign a higher probability to nearer neighbors than farther neighbors. Formally, given a prediction query (s, r, ?, t) and a selected neighbor o \u2032 (derived from an observed fact at t \u2032 ), b o \u2032 is initialized as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Temporal Weighting Function", "sec_num": "3.2.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "b o \u2032 = \u03b3 + 1 \u03b3 + |t \u2032 \u2212 t| + 1 ,", "eq_num": "(3)" } ], "section": "Temporal Weighting Function", "sec_num": "3.2.4" }, { "text": "where \u03b3 is a hyperparameter. Figure 2 illustrates the temporal weighting function with different \u03b3. The temporal weighting function with a lower \u03b3 leads to higher differences in the values of coupling coefficients regarding various neighboring entities.", "cite_spans": [], "ref_spans": [ { "start": 29, "end": 37, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Temporal Weighting Function", "sec_num": "3.2.4" }, { "text": "Based on the selected neighboring entities from the neighbor selector, TempCaps then learns the representation of an entity by leveraging a dynamic routing aggregator. Inspired by CapsE (Nguyen et al., 2019) that uses Capsule Networks to model static KGs, we design two layers of capsules for Temp-Caps, and then apply a modified dynamic routing algorithm. The first capsule layer consists of N capsules, where N is the number of the selected neighboring entities from the neighbor selector. Assume we have a prediction query (s, r, ?, t), and for the query subject s, we have the selected neighbors E n . For every neighboring entity e \u2208 E n , a capsule maps its embedding u (0) with a multi-layer perceptron to obtain u (1) . Then in the second capsule layer, we use the dynamic routing algorithm to compute the contextualized representation e s of the query subject s. Let \u03c3(\u2022) be an activation function, we use the following functions to compute contextualized representations:", "cite_spans": [ { "start": 722, "end": 725, "text": "(1)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Dynamic Routing Aggregator", "sec_num": "3.2.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "u (1) i = \u03c3(Wu (0) i + \u03f5),", "eq_num": "(4)" } ], "section": "Dynamic Routing Aggregator", "sec_num": "3.2.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "e = DynamicRouting(u (1) 1 , \u2022 \u2022 \u2022 , u (1) N ),", "eq_num": "(5)" } ], "section": "Dynamic Routing Aggregator", "sec_num": "3.2.5" }, { "text": "where W is the weighting matrix, \u03f5 is a bias, and N is the number of selected neighbors. 
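To make the aggregation step concrete before presenting the full algorithm, the following NumPy sketch routes the transformed neighbor capsules of a single query subject; the choice of tanh as the activation σ(·), a single routing iteration, and the tensor shapes are illustrative assumptions rather than a reference implementation:

```python
import numpy as np

def squash(s):
    # Eq. (1): rescale so the output norm lies between 0 and 1.
    norm_sq = np.dot(s, s)
    return (norm_sq / (1.0 + norm_sq)) * (s / (np.sqrt(norm_sq) + 1e-9))

def temporal_prior(t_query, t_neighbors, gamma=4.0):
    # Eq. (3): temporally closer neighbors receive a larger log prior b.
    return (gamma + 1.0) / (gamma + np.abs(t_neighbors - t_query) + 1.0)

def dynamic_routing_aggregator(u0, t_query, t_neighbors, W, eps, iters=1):
    """u0: (N, D1) embeddings of the selected neighbors; t_neighbors: their timestamps."""
    u1 = np.tanh(u0 @ W.T + eps)                           # Eq. (4)
    b = temporal_prior(t_query, np.asarray(t_neighbors))   # initialize log priors with Eq. (3)
    for _ in range(iters):
        c = np.exp(b) / np.exp(b).sum()                    # coupling coefficients (softmax over b)
        e = squash((c[:, None] * u1).sum(axis=0))          # weighted sum + squash, Eq. (5)
        b = b + u1 @ e                                     # routing by agreement
    return e, c                                            # contextualized embedding and couplings
```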
Algorithm 1 shows the details of the dynamic routing module.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Routing Aggregator", "sec_num": "3.2.5" }, { "text": "The multi-layer perceptron (MLP) decoder takes the representation e from the dynamic routing module as the input and estimates the probabilities of all candidates being the predicted answer by leveraging a softmax function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MLP Decoder", "sec_num": "3.2.6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P MLP (o|s, r, t) = exp(\u03c3(W MLP e o + \u03f5 MLP )) o \u2032 \u2208E exp(\u03c3(W MLP e o \u2032 + \u03f5 MLP )) ,", "eq_num": "(6)" } ], "section": "MLP Decoder", "sec_num": "3.2.6" }, { "text": "where W MLP is a weight matrix, \u03f5 MLP \u2208 R |E| is a bias vector, and \u03c3(\u2022) is the activation function. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MLP Decoder", "sec_num": "3.2.6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c i \u2190 exp(b i ) k exp(b k ) ; end for all capsule i \u2208 second capsule layer do s \u2190 i c i u", "eq_num": "(0)" } ], "section": "MLP Decoder", "sec_num": "3.2.6" }, { "text": "i ; end for all capsule i \u2208 second capsule layer do e \u2190 squash(s); end for all capsule i \u2208 first capsule layer do", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MLP Decoder", "sec_num": "3.2.6" }, { "text": "b i \u2190 b i + u (0) i \u22ba \u2022 e end end", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MLP Decoder", "sec_num": "3.2.6" }, { "text": "Following previous works about tKG reasoning (Jin et al., 2020; Zhu et al., 2021) , we treat temporal knowledge graph completion as a multiclass classification task, where each class corresponds to a candidate entity. The learning objective is to minimize the negative log-likelihood L on all observed facts with the object (or subject) masked during training:", "cite_spans": [ { "start": 45, "end": 63, "text": "(Jin et al., 2020;", "ref_id": "BIBREF14" }, { "start": 64, "end": 81, "text": "Zhu et al., 2021)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation and Inference", "sec_num": "3.2.7" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L = \u2212 (s,r,o,t)\u2208G log[P (o|s, r, t)],", "eq_num": "(7)" } ], "section": "Parameter Estimation and Inference", "sec_num": "3.2.7" }, { "text": "where P (o|s, r, t) = (1 \u2212 \u03b1) \u2022 P MLP (o|s, r, t) + \u03b1 \u2022 P DyR (o|s, r, t) is the probability of the entity o being the ground truth missing object given (s, r, ?, t). This probability consists of two parts: P MLP (o|s, r, t) and P DyR (o|s, r, t), where P MLP (o|s, r, t) is defined by Equation 6 and P DyR (o|s, r, t) is the softmax output c from the last iteration of Algorithm 1. For the entities not selected into the set of neighbors, we force the value of their P DyR to 0. \u03b1 \u2208 [0, 1] is the balancing parameter that controls the importance of each probability term. During inference time, for a prediction query (s, r, ?, t), we follow the training process and retrieve the combined probabilities of all entities. 
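As a small illustration of how the two probability terms are combined (the neighbor index set and array layout below are implementation assumptions), one can write:

```python
import numpy as np

def combine_probabilities(p_mlp, p_dyr, neighbor_ids, alpha=0.1):
    """p_mlp: MLP-decoder distribution over all |E| entities (Eq. 6);
    p_dyr: coupling-based scores of the selected neighbors only.
    Entities outside the neighbor set keep P_DyR = 0, as described above."""
    p_dyr_full = np.zeros_like(p_mlp)
    p_dyr_full[neighbor_ids] = p_dyr
    return (1.0 - alpha) * p_mlp + alpha * p_dyr_full
```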
The candidate entity with the highest combined probability is selected as the model prediction:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation and Inference", "sec_num": "3.2.7" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "o pred = arg max o \u2032 \u2208E P (o|s, r, t).", "eq_num": "(8)" } ], "section": "Parameter Estimation and Inference", "sec_num": "3.2.7" }, { "text": "The learning objective for subject prediction is similar. We omit it in the paper for simplicity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter Estimation and Inference", "sec_num": "3.2.7" }, { "text": "Datasets We use three datasets for evaluation in our experiments: Global Database of Events, Language, and Tone (GDELT) (Leetaru and Schrodt, 2013) , two subsets of Integrated Crisis Early Warning System (ICEWS) (Boschee et al., 2015) , i.e, ICEWS05-15 and ICEWS14. GDELT collects human societal-scale behaviors and events occurring from April 1, 2015, to March 31, 2016 in news media. The ICEWS dataset records political events with timestamps. ICEWS14 and ICEWS05-15 are two subsets from ICEWS, which contains events in 2014, and from 2005 to 2015, respectively. For all our experiments, we split the dataset by 80%/10%/10% for train/validation/test. Table 2 gives the statistics of the datasets.", "cite_spans": [ { "start": 120, "end": 147, "text": "(Leetaru and Schrodt, 2013)", "ref_id": "BIBREF17" }, { "start": 212, "end": 234, "text": "(Boschee et al., 2015)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 653, "end": 660, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "Metrics For each fact (s, r, o, t) in the dataset, we create two sub-tasks: (1) predicting the object (s, r, ?, t) and (2) predicting the subject (?, r, o, t).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "We report four metrics for the two tasks separately and take the average between the two sub-tasks. The metrics we used are MRR and Hits@1/3/10. Let |Q| denote the number of queries. MRR, defined as 1 |Q| i 1 rank i , is the average of reciprocal ranks. Hits@", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "K = 1 |Q| i 1[rank i \u2264 K]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "shows the ratio of the cases where the ground-truth entities are ranked within the top K. We filter the candidate object set during evaluation in the same manner as (Goel et al., 2020) do. During the evaluation, in one timestamp, a subject may be connected with multiple objects under the same relation. Hence, objects except the groundtruth o are not necessarily wrong. We therefore filter the candidate set E during evaluation. In other words, instead of considering all the entities E, the model gives the rank of the actual missing object among entities in o\u222a\u0112 t , where\u0112 t are entities not connected to s under r at time t. To be specific,\u0112 t = {o \u2032 |(s, r, o \u2032 , t) / \u2208 G t }. 
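For reference, a small sketch of how time-aware filtered ranks and the resulting MRR and Hits@K can be computed is given below; entity identifiers and the score container are illustrative, and ties are broken arbitrarily here:

```python
import numpy as np

def filtered_rank(scores, gold, connected_objects):
    """scores: dict mapping each candidate entity to its model score;
    connected_objects: objects o' with (s, r, o', t) in G_t, which are filtered out."""
    candidates = [e for e in scores if e == gold or e not in connected_objects]
    ordered = sorted(candidates, key=lambda e: scores[e], reverse=True)
    return ordered.index(gold) + 1

def mrr_and_hits(ranks, ks=(1, 3, 10)):
    ranks = np.asarray(ranks, dtype=float)
    mrr = float((1.0 / ranks).mean())
    hits = {k: float((ranks <= k).mean()) for k in ks}
    return mrr, hits
```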
Baselines We compare the performance of our model with both static and temporal state-of-the-art", "cite_spans": [ { "start": 165, "end": 184, "text": "(Goel et al., 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "ICEWS05-15 ICEWS14", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GDELT", "sec_num": null }, { "text": "MRR Hits@1 Hits@3 Hits@10 MRR Hits@1 Hits@3 Hits@10 MRR Hits@1 Hits@3 Hits@10 Table 2 : Statistics on datasets. The columns are the name of the dataset, the number of all entities, the number of all relation types, the number of facts in the train/validation/test sets, the time gap, and total time gaps. In the column Gap, \"H\" indicates hours. For example, \"24H\" means that the difference between two consecutive timestamps is 24 hours.", "cite_spans": [], "ref_spans": [ { "start": 78, "end": 85, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "GDELT", "sec_num": null }, { "text": "KG embedding models. The static models include TransE (Bordes et al., 2013) , DistMult and SimplE (Kazemi and Poole, 2018) while temporal models are HyTE (Dasgupta et al., 2018) , TA-DistMult (Garc\u00eda-Dur\u00e1n et al., 2018) , and DE-SimplE (Goel et al., 2020) . Implementations Details All our experiments are conducted on a single Titan Xp GPU. We use the ADAM optimizer with a weight decay rate of 1e-5. In addition, we set the learning rate to 1e-3, batch size to 300, the initial entity embedding size to 100, the size of the linear transformation in dynamic routing aggregator to 200 \u00d7 100, the routing iteration times as 1, the temporal weighting decay \u03b3 to 4, the loss balancing factor \u03b1 to 0.1 and dropout rate to 0.3. The neighborhood candidate numbers are 80 for local entities and 40 for global relational entities. Table 1 gives the results of our model performance. We can observe that our model reaches state-of-theart performance on the GDELT and ICEWS05-15 datasets. On GDELT, our model outperforms the baseline models on all four metrics. For MRR, our model outperforms the second-best model by 2.8%, and leads Hits@1 by 3.9%. On ICEWS05-15, our model is state-of-the-art on two of the most important metrics, MRR and Hits@1. Additionally, our model leads the second-best model by 3.1% for Hits@1, indicating that our model can retrieve the ground-truth entity with high accuracy.", "cite_spans": [ { "start": 54, "end": 75, "text": "(Bordes et al., 2013)", "ref_id": "BIBREF0" }, { "start": 149, "end": 177, "text": "HyTE (Dasgupta et al., 2018)", "ref_id": null }, { "start": 180, "end": 219, "text": "TA-DistMult (Garc\u00eda-Dur\u00e1n et al., 2018)", "ref_id": null }, { "start": 236, "end": 255, "text": "(Goel et al., 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 823, "end": 830, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "GDELT", "sec_num": null }, { "text": "On ICEWS14, our model is not the best but is still comparable to the state-of-the-art model. 
For example, our model reaches an MRR of 48.9% on ICEWS14, while the best-performed model DE-SimplE reaches an MRR of 52.6%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "We study the following hyperparameters or design choices on ICEWS14: (1) the number of candidate entities (local/global relational);(2) the length of visible time window (t r , t e , t a ); (3) the number of routing iterations; (4) the temporal weighting decay rate \u03b3; (5) whether or not we use an MLP decoder in the final layer of the model; (6) the loss balancing factor \u03b1. Table 3 details the results of the ablation studies.", "cite_spans": [], "ref_spans": [ { "start": 376, "end": 383, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Ablation Studies", "sec_num": "4.3" }, { "text": "From model variants on the number of candidate numbers, we can see that mixing different types of neighbors is helping. The local entities are particularly helpful, and adding global relational entities further improves the performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Studies", "sec_num": "4.3" }, { "text": "For the length of visible time window, the optimal number is 6 days (t r = t a = 3 days) according to the results in Figure 3(a) . We argue that a too-short window results in insufficient information, while a too-long window would contain too much noise, which might harm the model performance.", "cite_spans": [], "ref_spans": [ { "start": 117, "end": 128, "text": "Figure 3(a)", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Ablation Studies", "sec_num": "4.3" }, { "text": "From Figure 4 , we can see how the number of routing iterations affects model performance and that the dynamic routing aggregator outperforms the mean aggregator on MRR. Finally, figure 3(b) illustrates the model performance when using different weight decay rates \u03b3, where we can observe that the optimal value of \u03b3 is 4. Additionally, we notice that dropping the final MLP decoder results decreases model performance (see Table 3 ). In Figure 3 (c), we show that a loss balance factor \u03b1 = 0.1 leads to better performance than when setting \u03b1 = 0. This indicates that our model benefits from both P MLP and P DyR .", "cite_spans": [], "ref_spans": [ { "start": 5, "end": 13, "text": "Figure 4", "ref_id": "FIGREF4" }, { "start": 170, "end": 190, "text": "Finally, figure 3(b)", "ref_id": "FIGREF3" }, { "start": 424, "end": 431, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 438, "end": 446, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Ablation Studies", "sec_num": "4.3" }, { "text": "We analyze the space and time complexity of our model from the empirical and theoretical points of view. As is shown in Figure 1 , the trainable parameters of our model consists of three parts:", "cite_spans": [], "ref_spans": [ { "start": 120, "end": 128, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Analysis", "sec_num": "4.4" }, { "text": "(1) E 1 \u2208 R |E|\u00d7D 1 in the entity embedding layer, (2) W 1 \u2208 R D 2 \u00d7D 1 and e 1 \u2208 R D 2 in the dynamic routing aggregator and (3) W 2 \u2208 R |E|\u00d7D 2 and e 2 \u2208 R |E|\u00d7D 2 in the final MLP decoder. In summary, our model has O(|E|) parameters, which is optimal for representing a knowledge graph with |E| entities. 
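For intuition on this count, a back-of-the-envelope sketch with the hyperparameters used in our experiments (D_1 = 100, D_2 = 200) is given below; we treat the decoder bias as a vector of size |E| as in Equation 6, and exact totals depend on implementation details:

```python
def tempcaps_parameter_count(num_entities, d1=100, d2=200):
    embedding  = num_entities * d1                  # E_1: entity embedding table
    aggregator = d2 * d1 + d2                       # W_1 and bias in the routing aggregator
    decoder    = num_entities * d2 + num_entities   # W_MLP and bias in the MLP decoder
    return embedding + aggregator + decoder
```

The |E|·D_1 and |E|·D_2 terms dominate, which is consistent with the O(|E|) analysis.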
In our experiments, taking the ICEWS14 dataset as an example, each training epoch costs only 54 seconds on average, and the total evaluation process for the testing dataset costs 21 seconds. This indicates our model is efficient both in training and inference and saves considerable time and memory compared to previous works for temporal knowledge graph completion. The space complexity of the embedding computation before aggregation is O (|B|D 1 D 2 ) where |B| is the batch size and D i is the embedding size defined in the model. Then, the space complexity of going through the dynamic routing aggregator (Algorithm 1) is O(r|B||C|D 2 2 ), where |C| is the candidate number. At last, the MLP decoder takes another O(|B||E|D 2 ), where |E| is the total number of entities. Thus, for each epoch of training or testing, the space complexity is O(|Q|(D 1 D 2 + |C|D 2 2 + |E|D 2 )), which can be simplified as O(c \u2022 |Q||E|). Here c is a constant related to pre-defined parameters, and |Q| is the training/testing dataset size.", "cite_spans": [], "ref_spans": [ { "start": 749, "end": 762, "text": "(|B|D 1 D 2 )", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Analysis", "sec_num": "4.4" }, { "text": "In this paper, we propose TempCaps, which is a light-weighted Capsule Network-based embedding model for temporal knowledge graph completion. TempCaps consists of a neighbor selector, a dynamic routing aggregator, and an MLP decoder. Experimental results show that TempCaps reaches state-of-the-art performance on the GDELT and ICEWS05-15 dataset. We conduct additional ablation studies to understand how each part of Temp-Caps and hyperparameter choices contribute to the model performance. Our analysis also shows that TempCaps is efficient both in time and space. In the future, we plan to extend TempCaps to forecasting in temporal knowledge graphs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [ { "text": "This work has been funded by the German Federal Ministry of Education and Research (BMBF) under Grant No. 01IS18036A. The authors of this work take full responsibilities for its content.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Translating embeddings for modeling multirelational data", "authors": [ { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Usunier", "suffix": "" }, { "first": "Alberto", "middle": [], "last": "Garc\u00eda-Dur\u00e1n", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Oksana", "middle": [], "last": "Yakhnenko", "suffix": "" } ], "year": 2013, "venue": "NeurIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antoine Bordes, Nicolas Usunier, Alberto Garc\u00eda- Dur\u00e1n, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi- relational data. 
In NeurIPS.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "ICEWS Coded Event Data", "authors": [ { "first": "Elizabeth", "middle": [], "last": "Boschee", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Lautenschlager", "suffix": "" }, { "first": "O'", "middle": [], "last": "Sean", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Brien", "suffix": "" }, { "first": "James", "middle": [], "last": "Shellman", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Starz", "suffix": "" }, { "first": "", "middle": [], "last": "Ward", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elizabeth Boschee, Jennifer Lautenschlager, Sean O'Brien, Steve Shellman, James Starz, and Michael Ward. 2015. ICEWS Coded Event Data.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "COMET: commonsense transformers for automatic knowledge graph construction", "authors": [ { "first": "Antoine", "middle": [], "last": "Bosselut", "suffix": "" }, { "first": "Hannah", "middle": [], "last": "Rashkin", "suffix": "" }, { "first": "Maarten", "middle": [], "last": "Sap", "suffix": "" }, { "first": "Chaitanya", "middle": [], "last": "Malaviya", "suffix": "" }, { "first": "Asli", "middle": [], "last": "Celikyilmaz", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chai- tanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: commonsense transformers for auto- matic knowledge graph construction. In ACL.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Hyte: Hyperplane-based temporally aware knowledge graph embedding", "authors": [ { "first": "Swayambhu Nath", "middle": [], "last": "Shib Sankar Dasgupta", "suffix": "" }, { "first": "Partha", "middle": [ "P" ], "last": "Ray", "suffix": "" }, { "first": "", "middle": [], "last": "Talukdar", "suffix": "" } ], "year": 2018, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shib Sankar Dasgupta, Swayambhu Nath Ray, and Partha P. Talukdar. 2018. Hyte: Hyperplane-based temporally aware knowledge graph embedding. In EMNLP.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Learning sequence encoders for temporal knowledge graph completion", "authors": [ { "first": "Alberto", "middle": [], "last": "Garc\u00eda-Dur\u00e1n", "suffix": "" }, { "first": "Sebastijan", "middle": [], "last": "Dumancic", "suffix": "" }, { "first": "Mathias", "middle": [], "last": "Niepert", "suffix": "" } ], "year": 2018, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alberto Garc\u00eda-Dur\u00e1n, Sebastijan Dumancic, and Math- ias Niepert. 2018. Learning sequence encoders for temporal knowledge graph completion. 
In EMNLP.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Diachronic embedding for temporal knowledge graph completion", "authors": [ { "first": "Rishab", "middle": [], "last": "Goel", "suffix": "" }, { "first": "Marcus", "middle": [], "last": "Seyed Mehran Kazemi", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Brubaker", "suffix": "" }, { "first": "", "middle": [], "last": "Poupart", "suffix": "" } ], "year": 2020, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rishab Goel, Seyed Mehran Kazemi, Marcus Brubaker, and Pascal Poupart. 2020. Diachronic embedding for temporal knowledge graph completion. In AAAI.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Self-routing capsule networks", "authors": [ { "first": "Taeyoung", "middle": [], "last": "Hahn", "suffix": "" }, { "first": "Myeongjang", "middle": [], "last": "Pyeon", "suffix": "" }, { "first": "Gunhee", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taeyoung Hahn, Myeongjang Pyeon, and Gunhee Kim. 2019. Self-routing capsule networks. In NeurIPS.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Dyernie: Dynamic evolution of riemannian manifold embeddings for temporal knowledge graph completion", "authors": [ { "first": "Zhen", "middle": [], "last": "Han", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yunpu", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Volker", "middle": [], "last": "Tresp", "suffix": "" } ], "year": 2020, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhen Han, Peng Chen, Yunpu Ma, and Volker Tresp. 2020a. Dyernie: Dynamic evolution of riemannian manifold embeddings for temporal knowledge graph completion. In EMNLP.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Explainable subgraph reasoning for forecasting on temporal knowledge graphs", "authors": [ { "first": "Zhen", "middle": [], "last": "Han", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yunpu", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Volker", "middle": [], "last": "Tresp", "suffix": "" } ], "year": 2020, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhen Han, Peng Chen, Yunpu Ma, and Volker Tresp. 2020b. Explainable subgraph reasoning for forecast- ing on temporal knowledge graphs. In ICLR.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Temporal knowledge graph forecasting with neural ode", "authors": [ { "first": "Zhen", "middle": [], "last": "Han", "suffix": "" }, { "first": "Zifeng", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Yunpu", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Yujia", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Volker", "middle": [], "last": "Tresp", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2101.05151" ] }, "num": null, "urls": [], "raw_text": "Zhen Han, Zifeng Ding, Yunpu Ma, Yujia Gu, and Volker Tresp. 2021a. Temporal knowledge graph forecasting with neural ode. arXiv preprint arXiv:2101.05151.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Hinrich Sch\u00fctze, and Volker Tresp. 2022. 
Enhanced temporal knowledge embeddings with contextualized language representations", "authors": [ { "first": "Zhen", "middle": [], "last": "Han", "suffix": "" }, { "first": "Ruotong", "middle": [], "last": "Liao", "suffix": "" }, { "first": "Beiyan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zifeng", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Heinz", "middle": [], "last": "K\u00f6ppl", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2203.09590" ] }, "num": null, "urls": [], "raw_text": "Zhen Han, Ruotong Liao, Beiyan Liu, Yao Zhang, Zifeng Ding, Heinz K\u00f6ppl, Hinrich Sch\u00fctze, and Volker Tresp. 2022. Enhanced temporal knowledge embeddings with contextualized language represen- tations. arXiv preprint arXiv:2203.09590.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Graph hawkes neural network for forecasting on temporal knowledge graphs", "authors": [ { "first": "Zhen", "middle": [], "last": "Han", "suffix": "" }, { "first": "Yunpu", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Yuyi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "G\u00fcnnemann", "suffix": "" }, { "first": "Volker", "middle": [], "last": "Tresp", "suffix": "" } ], "year": 2020, "venue": "AKBC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhen Han, Yunpu Ma, Yuyi Wang, Stephan G\u00fcnnemann, and Volker Tresp. 2020c. Graph hawkes neural net- work for forecasting on temporal knowledge graphs. In AKBC.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Time-dependent entity embedding is not all you need: A re-evaluation of temporal knowledge graph completion models under a unified framework", "authors": [ { "first": "Zhen", "middle": [], "last": "Han", "suffix": "" }, { "first": "Gengyuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yunpu", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Volker", "middle": [], "last": "Tresp", "suffix": "" } ], "year": 2021, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhen Han, Gengyuan Zhang, Yunpu Ma, and Volker Tresp. 2021b. Time-dependent entity embedding is not all you need: A re-evaluation of temporal knowl- edge graph completion models under a unified frame- work. In EMNLP.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Barack's wife hillary: Using knowledge graphs for fact-aware language modeling", "authors": [ { "first": "L", "middle": [], "last": "Robert", "suffix": "" }, { "first": "Nelson", "middle": [ "F" ], "last": "Logan", "suffix": "" }, { "first": "Matthew", "middle": [ "E" ], "last": "Liu", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2019, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert L. Logan IV, Nelson F. Liu, Matthew E. Peters, Matt Gardner, and Sameer Singh. 2019. Barack's wife hillary: Using knowledge graphs for fact-aware language modeling. 
In ACL.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Recurrent event network: Autoregressive structure inferenceover temporal knowledge graphs", "authors": [ { "first": "Woojeong", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Meng", "middle": [], "last": "Qu", "suffix": "" }, { "first": "Xisen", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Ren", "suffix": "" } ], "year": 2020, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Woojeong Jin, Meng Qu, Xisen Jin, and Xiang Ren. 2020. Recurrent event network: Autoregressive struc- ture inferenceover temporal knowledge graphs. In EMNLP.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Simple embedding for link prediction in knowledge graphs", "authors": [ { "first": "David", "middle": [], "last": "Seyed Mehran Kazemi", "suffix": "" }, { "first": "", "middle": [], "last": "Poole", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seyed Mehran Kazemi and David Poole. 2018. Simple embedding for link prediction in knowledge graphs. In NeurIPS.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Semisupervised classification with graph convolutional networks", "authors": [ { "first": "N", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Max", "middle": [], "last": "Kipf", "suffix": "" }, { "first": "", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2017, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas N. Kipf and Max Welling. 2017. Semi- supervised classification with graph convolutional networks. In ICLR.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Gdelt: Global data on events, location, and tone", "authors": [ { "first": "Kalev", "middle": [], "last": "Leetaru", "suffix": "" }, { "first": "Philip", "middle": [ "A" ], "last": "Schrodt", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kalev Leetaru and Philip A. Schrodt. 2013. Gdelt: Global data on events, location, and tone. ISA Annual Convention.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Learning entity and relation embeddings for knowledge graph completion", "authors": [ { "first": "Yankai", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xuan", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2015, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embed- dings for knowledge graph completion. 
In AAAI.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "The more you know: Using knowledge graphs for image classification", "authors": [ { "first": "Kenneth", "middle": [], "last": "Marino", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Abhinav", "middle": [], "last": "Gupta", "suffix": "" } ], "year": 2017, "venue": "CVPR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenneth Marino, Ruslan Salakhutdinov, and Abhinav Gupta. 2017. The more you know: Using knowledge graphs for image classification. In CVPR.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A capsule network-based embedding model for knowledge graph completion and search personalization", "authors": [ { "first": "Thanh", "middle": [], "last": "Dai Quoc Nguyen", "suffix": "" }, { "first": "Tu", "middle": [ "Dinh" ], "last": "Vu", "suffix": "" }, { "first": "Dat", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Dinh", "middle": [ "Q" ], "last": "Quoc Nguyen", "suffix": "" }, { "first": "", "middle": [], "last": "Phung", "suffix": "" } ], "year": 2019, "venue": "NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dai Quoc Nguyen, Thanh Vu, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Q. Phung. 2019. A capsule network-based embedding model for knowl- edge graph completion and search personalization. In NAACL.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A three-way model for collective learning on multi-relational data", "authors": [ { "first": "Maximilian", "middle": [], "last": "Nickel", "suffix": "" }, { "first": "Hans-Peter", "middle": [], "last": "Volker Tresp", "suffix": "" }, { "first": "", "middle": [], "last": "Kriegel", "suffix": "" } ], "year": 2011, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In ICML.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Dynamic routing between capsules", "authors": [ { "first": "Sara", "middle": [], "last": "Sabour", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Frosst", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sara Sabour, Nicholas Frosst, and Geoffrey E. Hin- ton. 2017. Dynamic routing between capsules. In NeurIPS.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "TimeTraveler: Reinforcement learning for temporal knowledge graph forecasting", "authors": [ { "first": "Haohai", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Jialun", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Yunpu", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Zhen", "middle": [], "last": "Han", "suffix": "" }, { "first": "Kun", "middle": [], "last": "He", "suffix": "" } ], "year": 2021, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haohai Sun, Jialun Zhong, Yunpu Ma, Zhen Han, and Kun He. 2021. TimeTraveler: Reinforcement learn- ing for temporal knowledge graph forecasting. 
In EMNLP.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Know-evolve: Deep reasoning in temporal knowledge graphs", "authors": [ { "first": "Rakshit", "middle": [], "last": "Trivedi", "suffix": "" }, { "first": "Hanjun", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yichen", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Le", "middle": [], "last": "Song", "suffix": "" } ], "year": 2017, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rakshit Trivedi, Hanjun Dai, Yichen Wang, and Le Song. 2017. Know-evolve: Deep reasoning in temporal knowledge graphs. In ICML.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Capsules with inverted dot-product attention routing", "authors": [ { "first": "Yao-Hung Hubert", "middle": [], "last": "Tsai", "suffix": "" }, { "first": "Nitish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Hanlin", "middle": [], "last": "Goh", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2020, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yao-Hung Hubert Tsai, Nitish Srivastava, Hanlin Goh, and Ruslan Salakhutdinov. 2020. Capsules with in- verted dot-product attention routing. In ICLR.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Graph attention networks", "authors": [ { "first": "Petar", "middle": [], "last": "Velickovic", "suffix": "" }, { "first": "Guillem", "middle": [], "last": "Cucurull", "suffix": "" }, { "first": "Arantxa", "middle": [], "last": "Casanova", "suffix": "" }, { "first": "Adriana", "middle": [], "last": "Romero", "suffix": "" }, { "first": "Pietro", "middle": [], "last": "Li\u00f2", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2018, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li\u00f2, and Yoshua Bengio. 2018. Graph attention networks. In ICLR.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Knowledge graph embedding by translating on hyperplanes", "authors": [ { "first": "Zhen", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jianwen", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jianlin", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Zheng", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2014, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by trans- lating on hyperplanes. In AAAI.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "How powerful are graph neural networks?", "authors": [ { "first": "Keyulu", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Weihua", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Jure", "middle": [], "last": "Leskovec", "suffix": "" }, { "first": "Stefanie", "middle": [], "last": "Jegelka", "suffix": "" } ], "year": 2019, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. 2019. How powerful are graph neural net- works? 
In ICLR.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Embedding entities and relations for learning and inference in knowledge bases", "authors": [ { "first": "Bishan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Li", "middle": [], "last": "Deng", "suffix": "" } ], "year": 2015, "venue": "ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. In ICLR.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Ernie-vil: Knowledge enhanced vision-language representations through scene graphs", "authors": [ { "first": "Fei", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Jiji", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Weichong", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Hua", "middle": [], "last": "Hao Tian", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wu", "suffix": "" }, { "first": "", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2021, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fei Yu, Jiji Tang, Weichong Yin, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang. 2021. Ernie-vil: Knowledge enhanced vision-language representations through scene graphs. In AAAI.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Learning from history: Modeling temporal knowledge graphs with sequential copy-generation networks", "authors": [ { "first": "Cunchao", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Muhao", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Changjun", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Guangquan", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2021, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cunchao Zhu, Muhao Chen, Changjun Fan, Guangquan Cheng, and Yan Zhang. 2021. Learning from history: Modeling temporal knowledge graphs with sequen- tial copy-generation networks. In AAAI.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "Temporal weighting function with different \u03b3. The horizontal axis is t and the vertical axis is the value of weight(t)." }, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "number of iteration m output :e, c for all capsule i \u2208 first capsule layer do b i \u2190 weight(t i ) end for m iterations do for all capsule i \u2208 first capsule layer do" }, "FIGREF2": { "uris": null, "num": null, "type_str": "figure", "text": "(a) Time window size (days).(b) Temporal weighting decay \u03b3.(c) Balancing factor \u03b1." }, "FIGREF3": { "uris": null, "num": null, "type_str": "figure", "text": "Ablation studies. The configuration in our final model is marked in orange." }, "FIGREF4": { "uris": null, "num": null, "type_str": "figure", "text": "Ablation studies. Effects of the number of iterations. 
indicates the model uses a mean aggregator, otherwise the model uses the dynamic routing aggregator." }, "TABREF0": { "type_str": "table", "text": "guarantee that the length of the", "html": null, "content": "
[Figure residue from the model-overview diagram; only the recoverable labels are kept: observed TKG facts; example query (Biden, Make statement, ?, 2021-05-03); Neighbor Selector retrieving local neighbors and global relational neighbors (timestamped entities such as (2021-03-29, Russia), (2021-04-29, China), (2021-05-01, North Korea), (2021-05-02, Nayib Bukele), (2021-05-02, Malaysia), (2021-05-02, India)); Entity Embedding Layer; Dynamic Routing Aggregator; MLP Decoder; candidate entities Russia, China, India, Iran; Inference: Iran.]
", "num": null }, "TABREF2": { "type_str": "table", "text": "Model performance on GDELT, ICEWS05-15 and ICEWS14. We use MRR and Hits@1/3/10 as our evaluation metric. Results of the baseline models are directly adapted from the original papers. \"-\" indicates the number is not available.", "html": null, "content": "
Dataset     | #Ent   | #Rel | #Train    | #Valid  | #Test   | Gap | #Gaps
ICEWS14     | 7,128  | 230  | 72,826    | 8,941   | 8,963   | 24H | 365
ICEWS05-15  | 10,488 | 251  | 368,962   | 46,275  | 46,092  | 24H | 4,017
GDELT       | 500    | 20   | 2,735,685 | 341,961 | 341,961 | 24H | 366
", "num": null }, "TABREF4": { "type_str": "table", "text": "The complete results of our ablation studies. * indicates configurations used in our final model.", "html": null, "content": "", "num": null } } } }