{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:45:16.701000Z" }, "title": "A New Surprise Measure for Extracting Interesting Relationships between Persons", "authors": [ { "first": "Hidetaka", "middle": [], "last": "Kamigaito", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tokyo Institute of Technology", "location": {} }, "email": "kamigaito@lr.pi.titech.ac.jp" }, { "first": "Jingun", "middle": [], "last": "Kwon", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tokyo Institute of Technology", "location": {} }, "email": "" }, { "first": "Young-In", "middle": [], "last": "Song", "suffix": "", "affiliation": { "laboratory": "", "institution": "Naver Corporation", "location": {} }, "email": "song.youngin@navercorp.com" }, { "first": "Manabu", "middle": [], "last": "Okumura", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tokyo Institute of Technology", "location": {} }, "email": "" }, { "first": "Tim", "middle": [], "last": "Burton", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Johnny", "middle": [], "last": "Depp", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Chien-Ming", "middle": [], "last": "Wang", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Suzuki", "middle": [], "last": "Ichiro", "suffix": "", "affiliation": {}, "email": "" }, { "first": "John", "middle": [], "last": "Lennon", "suffix": "", "affiliation": {}, "email": "" }, { "first": "George", "middle": [], "last": "Harrison", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Paul", "middle": [], "last": "Mccartney", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "One way to enhance user engagement in search engines is to suggest interesting facts to the user. Although relationships between persons are important as a target for text mining, there are few effective approaches for extracting the interesting relationships between persons. We therefore propose a method for extracting interesting relationships between persons from natural language texts by focusing on their surprisingness. Our method first extracts all personal relationships from dependency trees for the texts and then calculates surprise scores for distributed representations of the extracted relationships in an unsupervised manner. The unique point of our method is that it does not require any labeled dataset with annotation for the surprising personal relationships. The results of the human evaluation show that the proposed method could extract more interesting relationships between persons from Japanese Wikipedia articles than a popularity-based baseline method. We demonstrate our proposed method as a chrome plugin on google search.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "One way to enhance user engagement in search engines is to suggest interesting facts to the user. Although relationships between persons are important as a target for text mining, there are few effective approaches for extracting the interesting relationships between persons. We therefore propose a method for extracting interesting relationships between persons from natural language texts by focusing on their surprisingness. Our method first extracts all personal relationships from dependency trees for the texts and then calculates surprise scores for distributed representations of the extracted relationships in an unsupervised manner. 
The unique point of our method is that it does not require any labeled dataset with annotation for the surprising personal relationships. The results of the human evaluation show that the proposed method could extract more interesting relationships between persons from Japanese Wikipedia articles than a popularity-based baseline method. We demonstrate our proposed method as a chrome plugin on google search.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Interesting facts are useful information for a variety of important tasks. For example, in data mining, the interesting facts can enhance user engagement in search engines (Fatma et al., 2017) . In natural language processing, the interesting facts can improve user experience with automatic conversation systems (Niina and Shimada, 2018) . However, if we rely on experts to gather the interesting facts, the cost becomes quite high.", "cite_spans": [ { "start": 172, "end": 192, "text": "(Fatma et al., 2017)", "ref_id": "BIBREF3" }, { "start": 313, "end": 338, "text": "(Niina and Shimada, 2018)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As a solution, several approaches have been developed to extract interesting facts automatically. Lin and Chalupsky (2003) proposed a set of unsupervised link discovery methods that can compute interestingness on graph data represented as a set of entities connected by a set of binary relations. Prakash et al. (2015) extracted interesting sentences about movie entities from Wikipedia articles and ordered them based on their interestingness by utilizing Rank-SVM, trained in a supervised manner. Tsurel et al. (2017) proposed an algorithm that automatically mines trivia facts from Wikipedia by utilizing its category structure. Their approach can rank categories for an entity based on their trivia quality induced from the categories. Fatma et al. (2017) proposed a method for automatically mining trivia facts for an entity of a given domain in knowledge graphs by utilizing deep convolutional neural networks, trained in a supervised manner. Korn et al. (2019) mined trivia facts from superlative tables in Wikipedia articles. Kwon et al. (2020) proposed a method to obtain sentences including trivia facts with utilizing paragraph structures in Wikipedia articles.", "cite_spans": [ { "start": 98, "end": 122, "text": "Lin and Chalupsky (2003)", "ref_id": "BIBREF9" }, { "start": 297, "end": 318, "text": "Prakash et al. (2015)", "ref_id": "BIBREF13" }, { "start": 499, "end": 519, "text": "Tsurel et al. (2017)", "ref_id": "BIBREF16" }, { "start": 740, "end": 759, "text": "Fatma et al. (2017)", "ref_id": "BIBREF3" }, { "start": 949, "end": 967, "text": "Korn et al. (2019)", "ref_id": "BIBREF6" }, { "start": 1034, "end": 1052, "text": "Kwon et al. (2020)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, some of these approaches work only on structured datasets such as knowledge graphs or Wikipedia categories. In addition, while supervised approaches can work on unstructured natural language texts, the applicable domain is restricted due to the lack of annotated datasets. Hence, the current approaches for extracting interesting facts are considered limited. 
In particular, although relationships between persons are important as a target for text mining, there are few effective approaches for extracting interesting relationships between persons. Figure 1 shows examples of interesting relationships between persons. 1 The first example is a famous film director who initially had a fairly low regard for an actor who is now extremely famous and successful. The second example is about a famous baseball player who asked another famous baseball player for an autograph. The third example relates to famous musicians engaged in something completely unrelated to music. These examples illustrate that surprisingness is an important factor in interesting personal relationships.", "cite_spans": [ { "start": 629, "end": 630, "text": "1", "ref_id": null } ], "ref_spans": [ { "start": 559, "end": 567, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, to extract such interesting relationships, we focus on surprising relationships between persons. We propose a method that extracts relationships between persons from natural language texts and then scores their surprise scores based on the Mahalanobis distance (De Maesschalck et al., 2000) , which has been used in the outlier detection task. Our proposed method first extracts all personal relationships from dependency trees for each sentence and then calculates the surprise scores of the extracted relationships on a continuous vector space in an unsupervised manner. As such, our method does not require any labeled dataset for extracting the surprising personal relationships.", "cite_spans": [ { "start": 276, "end": 305, "text": "(De Maesschalck et al., 2000)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The results of our human evaluation show that the proposed method could extract more interesting relationships between persons from Japanese Wikipedia articles than a popularity-based baseline method. Furthermore, as shown in Figure 2 , we incorporated our method into a google chrome plu-gin. You can watch our demo video for this plugin at a shared directory in our google drive.", "cite_spans": [], "ref_spans": [ { "start": 226, "end": 234, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 3 provides an overview of the entire process of extracting sentences that may include interesting personal relationships about a target person from given documents. The extraction procedure is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting Interesting Relationships between Persons", "sec_num": "2" }, { "text": "1. Construct dependency trees from sentences in the target documents through an automatic dependency parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting Interesting Relationships between Persons", "sec_num": "2" }, { "text": "2. Extract personal relationships that are represented as tuples of persons and their relationships from the obtained dependency trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting Interesting Relationships between Persons", "sec_num": "2" }, { "text": "3. 
Calculate scores for whether the extracted personal relationships are interesting or not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting Interesting Relationships between Persons", "sec_num": "2" }, { "text": "4. Select top-k personal relationships and sentences that include the target person based on the calculated scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting Interesting Relationships between Persons", "sec_num": "2" }, { "text": "The details of each step are described in the following subsections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting Interesting Relationships between Persons", "sec_num": "2" }, { "text": "We use a dependency parser for extracting personal relationships from sentences. First, we parse given sentences with the parser and obtain their dependency trees. Next, if a sentence includes more than one person name, we extract pairs of two names e i and e j . We also extract a set p k that includes words {w 1 , \u2022 \u2022 \u2022 , w n } in the shortest path between e i and e j on the dependency tree. These elements are represented as a tuple r l as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting Personal Relationships", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "r l = (e i , e j , p k ).", "eq_num": "(1)" } ], "section": "Extracting Personal Relationships", "sec_num": "2.1" }, { "text": "Because r l is a tuple, it satisfies r l,0 = e i , r l,1 = e j , and r l,2 = p k .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting Personal Relationships", "sec_num": "2.1" }, { "text": "For calculating a score of interestingness for r l , we encode e i , e j , and p k into fixed-dimensional continuous vectors by utilizing the skip-gram model (Mikolov et al., 2013) . When training the model, we treat a person name as a single word. Hereafter, we represent the vector of a word w i as E w i . Thus, the person names e i and e j are represented as E e i and E e j , respectively. Figure 3 : Overview of our proposed method for extracting interesting relationships between persons from given documents.", "cite_spans": [ { "start": 158, "end": 180, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 395, "end": 403, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Representation of Personal Relationships", "sec_num": "2.2" }, { "text": "To cope with person names e i with few occurrences, that might cause the sparseness problem, we map person names e i to clusters, whose number is smaller than the number of person names. We represent a cluster that e i is assigned to as C e i . We use k-means as a clustering method to ensure that these clusters are based on the cosine similarity between the vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representation of Personal Relationships", "sec_num": "2.2" }, { "text": "Unlike the person names, the relationship between two persons, p k , is represented as a set of words. For encoding the set of words representing the relationship into the continuous vector space, we use smooth inverse frequency (SIF) (Arora et al., 2017), 2 which can encode a sequence of words into a continuous vector by utilizing the frequencies of the words for calculating the weighted sum of the word vectors. 
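As a concrete illustration, the following is a minimal sketch of this frequency-weighted averaging (not the authors' code); word_vectors, word_freq, and a are assumed inputs: a mapping from words to skip-gram vectors, a mapping from words to corpus frequencies, and the SIF hyper-parameter.

```python
import numpy as np

def sif_relation_vector(path_words, word_vectors, word_freq, a=1.0):
    # Frequency-weighted average of the word vectors on a dependency path.
    # word_vectors: dict word -> np.ndarray, word_freq: dict word -> count (assumed inputs).
    weighted = [a / (a + word_freq[w]) * word_vectors[w] for w in path_words]
    return np.mean(weighted, axis=0)
```

Algorithm 1, described next, additionally post-processes the stacked relationship vectors with the first left singular vector obtained by SVD. 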
Algorithm 1 describes the details of the procedure for obtaining the vector representation of each personal relationship. Through this procedure, we can get V p k , which is the vector representation of p k in r l included in Rel, where Rel is a set of all personal relationships in the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representation of Personal Relationships", "sec_num": "2.2" }, { "text": "In this section, we describe our scoring method for extracting interesting relationships between persons. Our method tries to take into account the following three aspects of the interestingness: Popularity, Surprisingness, and Commonness. The scoring method is based on our assumption that an unusual relationship in a commonly observed pair of two famous persons increases the interestingness, and thus, such a relationship is interesting. The popularity calculates the fame of the persons, the surprisingness calculates the rareness of the relationship, and the commonness calculates how 2 https://github.com/PrincetonML/SIF Algorithm 1 Vector representation for each relationship. Input: All personal relationships Rel. Output: Vectors for each personal relationship", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scoring Personal Relationships", "sec_num": "2.3" }, { "text": "{V p k |p k = r l,0 , r l,0 \u2208 Rel}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scoring Personal Relationships", "sec_num": "2.3" }, { "text": "Calculate a weighted sum of the word vectors for each r l based on a word frequency f (w m ) of a word w m and hyper-parameter a.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scoring Personal Relationships", "sec_num": "2.3" }, { "text": "1: for all relation p k in Rel do 2: V p k \u2190 1 |p k | w m \u2208p k a a+f (w m ) E w m 3: end for", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scoring Personal Relationships", "sec_num": "2.3" }, { "text": "Form a matrix A whose columns are {V p k |p k = r l,0 , r l,0 \u2208 Rel} and then obtain left singular vector u through singular value decomposition (SVD).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scoring Personal Relationships", "sec_num": "2.3" }, { "text": "Transform the original vectors V p k with the obtained u.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4: u \u2190 SV D(A)", "sec_num": null }, { "text": "5: for all relation p k in Rel do 6: V p k \u2190 uu V p k 7: end for", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4: u \u2190 SV D(A)", "sec_num": null }, { "text": "often the pair of the persons commonly appears. The next subsections explain the scores for each aspect in detail.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4: u \u2190 SV D(A)", "sec_num": null }, { "text": "To judge whether the relationships between persons are interesting or not, the reader must know them in advance. From this viewpoint, we consider that the popularity of each person is an important factor in judging whether the relationship between the persons is interesting. 
Taking this assumption into account, we define S ppl (e j ), the popularity for e j , as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Popularity", "sec_num": "2.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "S ppl (e j ) = log(1 + f req(e j )),", "eq_num": "(2)" } ], "section": "Popularity", "sec_num": "2.3.1" }, { "text": "where f req(\u2022) is a function that returns the frequency of the input element. S ppl (e i ) is defined similarly. Note that we use Wikipedia articles for counting the frequency of entities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Popularity", "sec_num": "2.3.1" }, { "text": "We assume that a surprising personal relationship is a kind of outlier in a set of personal relationships. We therefore use the Mahalanobis distance (De Maesschalck et al., 2000) , which has been used for outlier detection, to define the surprisingness of a personal relationship. Since both the persons and their relationships are represented as continuous vectors, we use a multivariate normal distribution to handle them. If the dimensions of the continuous vectors are independent of each other, the variance-covariance matrix of the multivariate normal distribution becomes a diagonal matrix. Under this condition, the Mahalanobis distance is defined as follows:", "cite_spans": [ { "start": 139, "end": 168, "text": "(De Maesschalck et al., 2000)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Surprisingness", "sec_num": "2.3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Outlier(x_i; X) = \\sum_{j=1}^{D} \\frac{(x_{i,j} - \\mu_j)^2}{\\sigma_j^2},", "eq_num": "(3)" } ], "section": "Surprisingness", "sec_num": "2.3.2" }, { "text": "where D is the dimensionality of x. As explained later, we use vector representations of entities as the elements of X for the commonness, and vector representations of relationships between persons as the elements of X for the surprisingness. Both types of elements are based on the co-occurrence of persons and thus may suffer from the sparseness problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Surprisingness", "sec_num": "2.3.2" }, { "text": "To deal with the sparseness problem of the elements in X, we use maximum a posteriori (MAP) estimation to calculate the mean \u03bc and the variance \u03c3 \u2299 \u03c3. 
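The two scores introduced so far, the popularity of Eq. (2) and the diagonal-covariance Mahalanobis score of Eq. (3), can be sketched as follows. This is a minimal illustration assuming NumPy arrays; by default it uses plain sample estimates of the mean and variance, while the MAP estimates described next can be passed in instead.

```python
import numpy as np

def popularity(freq):
    # Eq. (2): log-scaled Wikipedia frequency of a person.
    return np.log(1.0 + freq)

def outlier(x, X, mu=None, var=None):
    # Eq. (3): squared Mahalanobis distance of x from the set X,
    # assuming a diagonal variance-covariance matrix.
    X = np.asarray(X)
    mu = X.mean(axis=0) if mu is None else mu
    var = X.var(axis=0) if var is None else var
    return float(np.sum((x - mu) ** 2 / var))
```
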
Assuming that each dimension of the continuous vectors obeys a normal distribution, and that the prior distribution of its mean is also a normal distribution N (\u03b1, \u03b2 2 ) with mean \u03b1 and variance \u03b2 2 , the mean \u03bc and the variance \u03c3 \u2299 \u03c3 are estimated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Surprisingness", "sec_num": "2.3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\mu = \\frac{\\alpha (\\sigma \\odot \\sigma) + (\\beta \\odot \\beta) \\sum_{i=1}^{|X|} x_i}{|X| (\\beta \\odot \\beta) + \\sigma \\odot \\sigma},", "eq_num": "(4)" } ], "section": "Surprisingness", "sec_num": "2.3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\sigma \\odot \\sigma = \\frac{1}{|X|} \\sum_{i=1}^{|X|} (\\mu - x_i) \\odot (\\mu - x_i),", "eq_num": "(5)" } ], "section": "Surprisingness", "sec_num": "2.3.2" }, { "text": "where |X| is the number of elements in X, and \u2299 denotes the element-wise product. To use Eq. (3) for calculating the surprisingness of a given personal relationship, we need to consider a set Set e i ,e j , * whose elements are the relationships between persons e i and e j . However, considering a pair of entities may cause the sparseness problem. To avoid the problem, we again use clusters (as explained in Section 2.2) for representing e i and e j , and define Set e i ,e j , * as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Surprisingness", "sec_num": "2.3.2" }, { "text": "Set e i ,e j , * = {p k = r n,2 | C r n,0 = C e i \u2227 C r n,1 = C e j \u2227 r n \u2208 Rel}. (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Surprisingness", "sec_num": "2.3.2" }, { "text": "By using Set e i ,e j , * , the surprisingness of a relationship p k between e i and e j , S sup , is calculated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Surprisingness", "sec_num": "2.3.2" }, { "text": "S sup (e i , e j , p k ) = Outlier(V p k ; {V p k | p k \u2208 Set e i ,e j , * }). (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Surprisingness", "sec_num": "2.3.2" }, { "text": "When calculating the outlier scores in Eq. (7), we estimate the prior mean \u03b1 and the prior variance \u03b2 2 through maximum likelihood estimation, based on the vector representations of all personal relationships in the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Surprisingness", "sec_num": "2.3.2" }, { "text": "To determine whether relationships between persons are surprising or not, people must know the ordinary relationships between them in advance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Commonness", "sec_num": "2.3.3" }, { "text": "For example, in Ex. 3 of Figure 1 , to be surprised by this sentence, readers must know the common relationships between Ringo Starr and the other members of The Beatles. Since they know that singing, playing music, and the like are the common relationships among the members of The Beatles, they can be surprised by the phrase \"went to the bottom of the sea\" in the sentence. Thus, considering how often a pair of persons has a relationship can support our surprisingness. 
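Before turning to the commonness score, the sketch below shows how Eqs. (4), (5), and (7) fit together: it estimates the MAP mean and variance for the relationship vectors of a cluster pair and then computes the outlier score of a candidate relationship. Because the two estimates depend on each other, the sketch simply alternates them for a few iterations; this scheme and all names are our assumptions, not the authors' implementation.

```python
import numpy as np

def map_mean_and_variance(X, alpha, beta_sq, n_iter=5, eps=1e-12):
    # MAP estimates of Eqs. (4)-(5) under a diagonal Gaussian model.
    # alpha, beta_sq: prior mean and prior variance (maximum likelihood
    # estimates over the whole corpus); X: (n, D) array of vectors.
    X = np.asarray(X)
    n = X.shape[0]
    mu = X.mean(axis=0)
    var = X.var(axis=0) + eps
    for _ in range(n_iter):
        mu = (alpha * var + beta_sq * X.sum(axis=0)) / (n * beta_sq + var)  # Eq. (4)
        var = np.mean((mu - X) ** 2, axis=0) + eps                          # Eq. (5)
    return mu, var

def surprisingness(v_pk, cluster_pair_vectors, alpha, beta_sq):
    # Eq. (7): outlier score of a relationship vector V_pk within the set of
    # relationship vectors whose person pair falls in the same cluster pair.
    mu, var = map_mean_and_variance(cluster_pair_vectors, alpha, beta_sq)
    return float(np.sum((v_pk - mu) ** 2 / var))
```
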
Based on this assumption, our commonness measures how common a pair of two persons is.", "cite_spans": [], "ref_spans": [ { "start": 24, "end": 32, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Commonness", "sec_num": "2.3.3" }, { "text": "Since counting the co-occurrence between two persons may cause the sparseness problem, we use continuous vectors for calculating this score. Specifically, we use the negated score of Eq. (3), based on the assumption that a pair of two persons is a common pair if it is not an outlier. To use Eq. (3) for calculating commonness, we need a set Set e i , * whose elements are the persons who have a relationship with a person e i . To avoid the sparseness problem, we again represent e i as a cluster (as explained in Section 2.2) and define Set e i , * as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Commonness", "sec_num": "2.3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Set e i , * = {e j = r n,1 |C r n,0 = C e i \u2227 r n \u2208 Rel},", "eq_num": "(8)" } ], "section": "Commonness", "sec_num": "2.3.3" }, { "text": "where Rel is a set that includes all relationships between persons in the corpus. By using Set e i , * , the commonness S com from e j to e i is calculated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Commonness", "sec_num": "2.3.3" }, { "text": "S com (e i |e j ) = \u2212 Outlier(E e i ; {E e i |e i \u2208 Set e j , * }). (9)-(10) S com (e j |e i ) is defined similarly. Because S com (e i |e j ) and S com (e j |e i ) do not return the same score, we simply use their average for our final score. When calculating the outlier scores in Eq. (10), we estimate the prior mean \u03b1 and the prior variance \u03b2 2 through maximum likelihood estimation based on the whole set of word vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Commonness", "sec_num": "2.3.3" }, { "text": "For ranking personal relationships, we combine all three scores above. Because these scores have different ranges, we scale them with z-score normalization (Kreyszig, 1979) . Let the mean of S ppl , S com , and S sup over all relationships be respectively \u00b5 ppl , \u00b5 com , and \u00b5 sup , and let the standard deviation of S ppl , S com , and S sup over all relationships be respectively \u03c3 ppl , \u03c3 com , and \u03c3 sup . The final score of the interestingness for the target entity e i is defined as follows:", "cite_spans": [ { "start": 176, "end": 192, "text": "(Kreyszig, 1979)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Selecting Top-k Personal Relationships", "sec_num": "2.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "S_{int}(e_i, e_j, p_k) = \\lambda_{ppl} \\cdot \\frac{S_{ppl}(e_j) - \\mu_{ppl}}{\\sigma_{ppl}} + \\lambda_{com} \\cdot \\frac{1}{2} \\left( \\frac{S_{com}(e_i|e_j) - \\mu_{com}}{\\sigma_{com}} + \\frac{S_{com}(e_j|e_i) - \\mu_{com}}{\\sigma_{com}} \\right) + \\lambda_{sup} \\cdot \\frac{S_{sup}(e_i, e_j, p_k) - \\mu_{sup}}{\\sigma_{sup}},", "eq_num": "(11)-(14)" } ], "section": "Selecting Top-k Personal Relationships", "sec_num": "2.4" }, { "text": "where \u03bb ppl , \u03bb com , and \u03bb sup are weights for adjusting the importance of each score. We tune these weights by using our validation dataset (explained in the next section). 
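To make the combination explicit, here is a minimal sketch of Eqs. (11)-(14); stats and lambdas are illustrative containers for the corpus-level means and standard deviations and for the tuned weights, not part of the authors' code.

```python
def z_norm(score, mean, std):
    # z-score normalization, so that the three scores share a common scale.
    return (score - mean) / std

def interestingness(s_ppl, s_com_ij, s_com_ji, s_sup, stats, lambdas):
    # Final score S_int of Eqs. (11)-(14) for a candidate tuple (e_i, e_j, p_k).
    return (lambdas['ppl'] * z_norm(s_ppl, *stats['ppl'])
            + lambdas['com'] * 0.5 * (z_norm(s_com_ij, *stats['com'])
                                      + z_norm(s_com_ji, *stats['com']))
            + lambdas['sup'] * z_norm(s_sup, *stats['sup']))

# Hypothetical usage: stats = {'ppl': (mean, std), 'com': (mean, std), 'sup': (mean, std)}
# and, e.g., lambdas = {'ppl': 0.67, 'com': 0.17, 'sup': 0.16} as tuned for Pop+Com+Sup.
```
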
Based on S int (e i , e j , p k ), we extract top-k relationships that include the target person e i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Selecting Top-k Personal Relationships", "sec_num": "2.4" }, { "text": "We conducted human evaluation to determine how well our proposed method can extract interesting relationships between persons. The next subsections describe the details of our experimental settings and the evaluation results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "We used sentences in Japanese Wikipedia as our evaluation dataset. We listed articles whose category includes the word \"person\" as person names and then selected the persons who have more than five relationships from various domains (e.g., anime, manga, novel, actor, music, movie, sports, comedy, and talent) based on their frequencies in Japanese Wikipedia. To remove historical persons, we selected only those who are categorized as \"living persons\". Finally, we obtained a total of 50 persons for the test dataset and 12 persons for the validation dataset through this process. We next extracted sentences that include personal relationships for the selected persons by using each of the compared methods, that we will describe in the next subsection. We put the top five sentences ranked by each method that include personal relationships for each selected person in the test dataset. If the same sentence was already included in the dataset, we skip it. After this procedure, for each of the compared methods, 250 sentences were included in the test dataset. To provide contextual information, we added the title of the article where the sentences were found to the sentences in the test dataset. The validation dataset was constructed in the same way for the 12 persons. All personal relationships were extracted with CaboCha, 3 a chunk-based Japanese dependency parser, with the NEologd dictionary (Sato et al., 2017) . 4 To filter the personal relationships in compound sentences, we ignored any personal relationships that include multiple predicates. When a sentence lacks its subject, we complement it with the title of the article that contains the sentence. Furthermore, we filtered any sentences starting with a pronoun or conjugation because such sentences are not understandable without the surrounding sentences.", "cite_spans": [ { "start": 1406, "end": 1425, "text": "(Sato et al., 2017)", "ref_id": "BIBREF15" }, { "start": 1428, "end": 1429, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "3.1.1" }, { "text": "We evaluated the performance of the proposed methods and several baselines on our test dataset. 
The following methods were used as the baselines:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compared Methods", "sec_num": "3.1.2" }, { "text": "\u2022 Rand: This method randomly selects five personal relationships for each person.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compared Methods", "sec_num": "3.1.2" }, { "text": "\u2022 Pop: This method selects five personal relationships on the basis of only the popularity score (Eq.(2)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compared Methods", "sec_num": "3.1.2" }, { "text": "We used the following as our proposed methods:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compared Methods", "sec_num": "3.1.2" }, { "text": "\u2022 Pop+Com: This method selects five personal relationships on the basis of the combined score of the popularity (Eq.(12) ) and the commonness (Eq.(13)). Similar to Eq.(11), we tuned the weight parameters \u03bb ppl and \u03bb com on the validation dataset.", "cite_spans": [], "ref_spans": [ { "start": 112, "end": 120, "text": "(Eq.(12)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Compared Methods", "sec_num": "3.1.2" }, { "text": "\u2022 Pop+Sup: This method selects five personal relationships on the basis of the combined score of the popularity (Eq.(12)) and the surprisingness (Eq. 14). Similar to Eq.(11), we tuned the weight parameters \u03bb ppl and \u03bb sup on the validation dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compared Methods", "sec_num": "3.1.2" }, { "text": "\u2022 Pop+Com+Sup: This method selects five personal relationships on the basis of a combination of the popularity, the commonness, and the surprisingness (Eq.(11)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Compared Methods", "sec_num": "3.1.2" }, { "text": "Prior to running these baselines and proposed methods, we obtained word vectors from Japanese Wikipedia articles by utilizing word2vec. 5 In this step, all sentences were tokenized using MeCab 6 with the NEologd dictionary. We further tuned the word vectors by utilizing a retrofitting approach (Faruqui et al., 2015) 7 with Wikipedia's category information to consider similarities between persons. The retrofitting approach can refine word vectors using graph information by making word vectors close to each other when they have a link in the graph. To construct a graph for personal similarities, we linked two words if a Wikipedia category includes the words. Because some person names have several articles due to their ambiguity, we skipped such words in this step. 8 In the end, we reran the retrofitting with the default hyperparameters. Then, we mapped the obtained word vectors of person names to 300 clusters estimated by k-means. When calculating the vectors for each personal relationship, we set a in SIF to 1.0.", "cite_spans": [ { "start": 136, "end": 137, "text": "5", "ref_id": null }, { "start": 295, "end": 317, "text": "(Faruqui et al., 2015)", "ref_id": "BIBREF2" }, { "start": 773, "end": 774, "text": "8", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Compared Methods", "sec_num": "3.1.2" }, { "text": "5 https://code.google.com/archive/p/ word2vec/ 6 http://taku910.github.io/mecab/ 7 https://github.com/mfaruqui/ retrofitting 8 Note that in Wikipedia, to disambiguate such words, brackets in article titles indicate their ambiguity. Thus, we can skip ambiguous titles based on the brackets. 
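As a sketch of the clustering step in this setup (mapping the retrofitted person-name vectors to 300 clusters), one possible implementation approximates cosine-based k-means by L2-normalizing the vectors before running Euclidean k-means; this normalization trick and all names below are our assumptions rather than the authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_person_vectors(person_vectors, n_clusters=300, seed=0):
    # person_vectors: dict person name -> retrofitted word vector (assumed input).
    names = list(person_vectors)
    X = np.stack([person_vectors[n] for n in names])
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit norm, so Euclidean k-means ~ cosine
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(X)
    return dict(zip(names, labels))  # person name -> cluster id C_e
```
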
k = 1 k = 2 k = 3 k = 4 k = 5 Table 1 : Evaluation results of rescaled 5-scale scores (%). The bold values indicate the best scores. \u2020 indicates that the difference of the score from the best baseline is statistically significant. 10 We tuned weight parameters in our methods on our validation dataset, which were created for 12 person names in Japanese Wikipedia, and which are not overlapped with the test dataset. We gathered 123 relationships related to the selected persons. Because ranking the degree of interestingness for the gathered relationships would be very costly, we simply attached a label of whether it is interesting or not to them. After that, we estimated the weight parameters by utilizing logistic regression. In Pop+Com, estimated \u03bb pop and \u03bb com were respectively 0.79 and 0.21; in Pop+Sup, estimated \u03bb pop and \u03bb sup were respectively 0.80 and 0.20; and in Pop+Com+Sup, estimated \u03bb pop , \u03bb com , and \u03bb sup were respectively 0.67, 0.17, and 0.16.", "cite_spans": [ { "start": 521, "end": 523, "text": "10", "ref_id": null } ], "ref_spans": [ { "start": 320, "end": 327, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Compared Methods", "sec_num": "3.1.2" }, { "text": "The extracted top five sentences for each method were evaluated in terms of interestingness by six human raters, who rated them on a five-point Likert scale ranging from one to five (Larger is better.). For this rating, we used Lancers, 9 a Japanese cloud sourcing service. We showed personal relationships and their sentences to the raters. For interpretability, we rescaled the rating in the range from 0.0 to 1.0 (Preston and Colman, 2000) . In this rescaling, the five scales, 1, 2, 3, 4, and 5, are respectively mapped to 0.0, 0.25, 0.5, 0.75, and 1.0. We averaged the scores of all k-best results for each method. Table 1 shows the results of the five-scale scores. Pop+Sup achieved statistically significant improvement over the baselines when k = 1. This result can support our expectation that the surprisingness has a strong correlation to the interesting-ness of relationships between persons. In addition, Pop+Com+Sup achieved statistically significant improvement over the baselines when k = 1, and outperformed the scores of Pop+Sup when k = 1, 2, 3. These results indicate that the commonness can also support the interestingness, especially for a small number of k. When k is larger than 2, all scores are close compared with the scores at k = 1. This tendency may suggest that the number of interesting personal relationships is limited for each person.", "cite_spans": [ { "start": 416, "end": 442, "text": "(Preston and Colman, 2000)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 620, "end": 627, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "3.1.3" }, { "text": "As shown in Figure 2 , our demonstration system presents the top five interesting relationships between persons at the top of the search results based on the current search query. This demonstration system consists of server and client sides. The working process of the system follows the order:", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Demonstration System", "sec_num": "4" }, { "text": "1. 
In the client side, our google chrome plugin makes a query based on the name of the person input in the google search form.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Demonstration System", "sec_num": "4" }, { "text": "2. The server-side distributes personal relationships of the person included in the given query to the client-side by loading from the pre-computed personal relationships and their scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Demonstration System", "sec_num": "4" }, { "text": "3. After receiving the result, the client-side shows the result below the search form. If the server does not return any personal relationship, the plugin does not have any action for the search result.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Demonstration System", "sec_num": "4" }, { "text": "The client-side was implemented on jQuery libraries, and the server-side was implemented on python 3.0 with utilizing http.server module. We chose Pop+Com+Sup as our demonstration system because this model achieved the best result in the human evaluation in the cases of k = 1, 2, and 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Demonstration System", "sec_num": "4" }, { "text": "There have been several approaches for extracting interesting facts. We can divide them into supervised and unsupervised approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "The unsupervised approaches have been commonly used for this type of extraction. Merzbacher (2002) proposed a method that mines good trivia questions from a relational database based on predefined rules. Lin and Chalupsky (2003) proposed a set of unsupervised link discovery methods that can compute interestingness on graph data that is represented as a set of entities connected by a set of binary relations. Tsurel et al. (2017) proposed an algorithm that automatically mines trivia facts from Wikipedia by utilizing its category structure. Their approach can rank the entity's categories by their trivia quality, which is induced by the category. Korn et al. (2019) mined trivia facts from superlative tables in Wikipedia articles. They utilized a template-based approach for semi-automatically generating natural language statements as fun facts. Their work had actually been incorporated into the search engine by Google. Kwon et al. (2020) proposed a method to obtain sentences including trivia facts by focusing on a tendency of the Wikipedia article structure that a paragraph containing trivial facts is not similar to other paragraphs in a article.", "cite_spans": [ { "start": 81, "end": 98, "text": "Merzbacher (2002)", "ref_id": "BIBREF10" }, { "start": 204, "end": 228, "text": "Lin and Chalupsky (2003)", "ref_id": "BIBREF9" }, { "start": 411, "end": 431, "text": "Tsurel et al. (2017)", "ref_id": "BIBREF16" }, { "start": 651, "end": 669, "text": "Korn et al. (2019)", "ref_id": "BIBREF6" }, { "start": 928, "end": 946, "text": "Kwon et al. (2020)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "The supervised approaches have also been used for extracting interesting facts. Gamon et al. (2014) proposed models that predict the level of interest a user gives to various text spans in a document by observing the user's browsing behavior via clicks from one page to another. Prakash et al. 
(2015) constructed a labeled dataset for movie entities and proposed a method for extracting interesting sentences from Wikipedia articles and ordering them based on interestingness by utilizing Rank-SVM trained with the constructed dataset. Fatma et al. (2017) proposed a method for automatically mining trivia facts for an entity of a given domain in knowledge graphs by utilizing deep convolutional neural networks trained in a supervised manner.", "cite_spans": [ { "start": 80, "end": 99, "text": "Gamon et al. (2014)", "ref_id": "BIBREF4" }, { "start": 279, "end": 300, "text": "Prakash et al. (2015)", "ref_id": "BIBREF13" }, { "start": 536, "end": 555, "text": "Fatma et al. (2017)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "In this paper, we proposed a method for extracting interesting relationships between persons from natural language texts in an unsupervised manner.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Human evaluation of the personal relationships extracted from Japanese Wikipedia articles showed that the proposed method improved the interestingness compared to a popularity-based baseline. Through the result, we can conclude that considering the surprisingness of relationships between persons is effective in improving the interestingness of the extracted results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Furthermore, to demonstrate our proposed method, we incorporated the method into a google chrome plugin, which can work on google search.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "As future work, we will investigate ways to extract personal relationships based on more detailed information about a dependency tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "These examples were extracted from Japanese Wikipedia articles and then were translated into English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/taku910/cabocha 4 https://github.com/neologd/ mecab-ipadic-neologd", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.lancers.jp/10 We used paired-bootstrap-resampling(Koehn, 2004) with 10,000 random samples (p < 0.05).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We are thankful to the research group at Naver Corporation for supporting our research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A simple but tough-to-beat baseline for sentence embeddings", "authors": [ { "first": "Sanjeev", "middle": [], "last": "Arora", "suffix": "" }, { "first": "Yingyu", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Tengyu", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence em- beddings.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The mahalanobis distance. 
Chemometrics and intelligent laboratory systems", "authors": [ { "first": "Delphine", "middle": [], "last": "Roy De Maesschalck", "suffix": "" }, { "first": "D\u00e9sir\u00e9 L", "middle": [], "last": "Jouan-Rimbaud", "suffix": "" }, { "first": "", "middle": [], "last": "Massart", "suffix": "" } ], "year": 2000, "venue": "", "volume": "50", "issue": "", "pages": "1--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roy De Maesschalck, Delphine Jouan-Rimbaud, and D\u00e9sir\u00e9 L Massart. 2000. The mahalanobis distance. Chemometrics and intelligent laboratory systems, 50(1):1-18.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Retrofitting word vectors to semantic lexicons", "authors": [ { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Jesse", "middle": [], "last": "Dodge", "suffix": "" }, { "first": "K", "middle": [], "last": "Sujay", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Jauhar", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Hovy", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2015, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manaal Faruqui, Jesse Dodge, Sujay K. Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of NAACL.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The unusual suspects: Deep learning based mining of interesting entity trivia from knowledge graphs", "authors": [ { "first": "Nausheen", "middle": [], "last": "Fatma", "suffix": "" }, { "first": "K", "middle": [], "last": "Manoj", "suffix": "" }, { "first": "Manish", "middle": [], "last": "Chinnakotla", "suffix": "" }, { "first": "", "middle": [], "last": "Shrivastava", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI'17", "volume": "", "issue": "", "pages": "1107--1113", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nausheen Fatma, Manoj K. Chinnakotla, and Manish Shrivastava. 2017. The unusual suspects: Deep learning based mining of interesting entity trivia from knowledge graphs. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelli- gence, AAAI'17, page 1107-1113. AAAI Press.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Predicting interesting things in text", "authors": [ { "first": "Michael", "middle": [], "last": "Gamon", "suffix": "" }, { "first": "Arjun", "middle": [], "last": "Mukherjee", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2014, "venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "1477--1488", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Gamon, Arjun Mukherjee, and Patrick Pan- tel. 2014. Predicting interesting things in text. In Proceedings of COLING 2014, the 25th Inter- national Conference on Computational Linguistics: Technical Papers, pages 1477-1488, Dublin, Ireland. 
Dublin City University and Association for Compu- tational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Statistical significance tests for machine translation evaluation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "388--395", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceed- ings of the 2004 Conference on Empirical Meth- ods in Natural Language Processing, pages 388- 395, Barcelona, Spain. Association for Computa- tional Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Automatically generating interesting facts from wikipedia tables", "authors": [ { "first": "Flip", "middle": [], "last": "Korn", "suffix": "" }, { "first": "Xuezhi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "You", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Cong", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 International Conference on Management of Data, SIGMOD '19", "volume": "", "issue": "", "pages": "349--361", "other_ids": { "DOI": [ "10.1145/3299869.3314043" ] }, "num": null, "urls": [], "raw_text": "Flip Korn, Xuezhi Wang, You Wu, and Cong Yu. 2019. Automatically generating interesting facts from wikipedia tables. In Proceedings of the 2019 International Conference on Management of Data, SIGMOD '19, page 349-361, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Advanced engineering mathematics", "authors": [ { "first": "", "middle": [], "last": "Erwin", "suffix": "" }, { "first": "", "middle": [], "last": "Kreyszig", "suffix": "" } ], "year": 1979, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erwin. Kreyszig. 1979. Advanced engineering mathe- matics /, 4th ed. edition. Wiley,, New York :.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Hierarchical trivia fact extraction from Wikipedia articles", "authors": [ { "first": "Jingun", "middle": [], "last": "Kwon", "suffix": "" }, { "first": "Hidetaka", "middle": [], "last": "Kamigaito", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "4825--4834", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.424" ] }, "num": null, "urls": [], "raw_text": "Jingun Kwon, Hidetaka Kamigaito, Young-In Song, and Manabu Okumura. 2020. Hierarchical trivia fact extraction from Wikipedia articles. In Proceed- ings of the 28th International Conference on Com- putational Linguistics, pages 4825-4834, Barcelona, Spain (Online). International Committee on Compu- tational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Using unsupervised link discovery methods to find interesting facts and connections in a bibliography dataset", "authors": [ { "first": "Hans", "middle": [], "last": "Shou-De Lin", "suffix": "" }, { "first": "", "middle": [], "last": "Chalupsky", "suffix": "" } ], "year": 2003, "venue": "SIGKDD Explor. Newsl", "volume": "5", "issue": "2", "pages": "173--178", "other_ids": { "DOI": [ "10.1145/980972.981000" ] }, "num": null, "urls": [], "raw_text": "Shou-de Lin and Hans Chalupsky. 2003. 
Using un- supervised link discovery methods to find interest- ing facts and connections in a bibliography dataset. SIGKDD Explor. Newsl., 5(2):173-178.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Automatic generation of trivia questions", "authors": [ { "first": "Matthew", "middle": [], "last": "Merzbacher", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 13th International Symposium on Foundations of Intelligent Systems, ISMIS '02", "volume": "", "issue": "", "pages": "123--130", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Merzbacher. 2002. Automatic generation of trivia questions. In Proceedings of the 13th Interna- tional Symposium on Foundations of Intelligent Sys- tems, ISMIS '02, page 123-130, Berlin, Heidelberg. Springer-Verlag.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems", "volume": "2", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013. Distributed represen- tations of words and phrases and their composition- ality. In Proceedings of the 26th International Con- ference on Neural Information Processing Systems -Volume 2, NIPS'13, page 3111-3119, Red Hook, NY, USA. Curran Associates Inc.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Trivia score and ranking estimation using support vector regression and RankNet", "authors": [ { "first": "Kazuya", "middle": [], "last": "Niina", "suffix": "" }, { "first": "Kazutaka", "middle": [], "last": "Shimada", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation, Hong Kong. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kazuya Niina and Kazutaka Shimada. 2018. Trivia score and ranking estimation using support vector re- gression and RankNet. In Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation, Hong Kong. Association for Com- putational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Did you know? -mining interesting trivia for entities from wikipedia", "authors": [ { "first": "Abhay", "middle": [], "last": "Prakash", "suffix": "" }, { "first": "Manoj", "middle": [], "last": "Kumar Chinnakotla", "suffix": "" }, { "first": "Dhaval", "middle": [], "last": "Patel", "suffix": "" }, { "first": "Puneet", "middle": [], "last": "Garg", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015", "volume": "", "issue": "", "pages": "3164--3170", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abhay Prakash, Manoj Kumar Chinnakotla, Dhaval Pa- tel, and Puneet Garg. 2015. Did you know? -mining interesting trivia for entities from wikipedia. 
In Pro- ceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, pages 3164-3170.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Optimal number of response categories in rating scales: Reliability, validity, discriminating power, and respondent preferences", "authors": [ { "first": "Carolyn", "middle": [], "last": "Preston", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Colman", "suffix": "" } ], "year": 2000, "venue": "Acta psychologica", "volume": "104", "issue": "", "pages": "1--15", "other_ids": { "DOI": [ "10.1016/S0001-6918(99)00050-5" ] }, "num": null, "urls": [], "raw_text": "Carolyn Preston and Andrew Colman. 2000. Optimal number of response categories in rating scales: Re- liability, validity, discriminating power, and respon- dent preferences. Acta psychologica, 104:1-15.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Implementation of a word segmentation dictionary called mecab-ipadic-neologd and study on how to use it effectively for information retrieval (in japanese)", "authors": [ { "first": "Toshinori", "middle": [], "last": "Sato", "suffix": "" }, { "first": "Taiichi", "middle": [], "last": "Hashimoto", "suffix": "" }, { "first": "Manabu", "middle": [], "last": "Okumura", "suffix": "" } ], "year": 2017, "venue": "NLP, pages NLP2017-B6-1. The Association for Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Toshinori Sato, Taiichi Hashimoto, and Manabu Oku- mura. 2017. Implementation of a word segmen- tation dictionary called mecab-ipadic-neologd and study on how to use it effectively for information re- trieval (in japanese). In NLP, pages NLP2017-B6-1. The Association for Natural Language Processing.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Fun facts: Automatic trivia fact extraction from wikipedia", "authors": [ { "first": "David", "middle": [], "last": "Tsurel", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Pelleg", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Guy", "suffix": "" }, { "first": "Dafna", "middle": [], "last": "Shahaf", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, WSDM '17", "volume": "", "issue": "", "pages": "345--354", "other_ids": { "DOI": [ "10.1145/3018661.3018709" ] }, "num": null, "urls": [], "raw_text": "David Tsurel, Dan Pelleg, Ido Guy, and Dafna Shahaf. 2017. Fun facts: Automatic trivia fact extraction from wikipedia. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, WSDM '17, page 345-354, New York, NY, USA. Association for Computing Machinery.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Example sentences that contain interesting relationships between persons.", "type_str": "figure", "uris": null, "num": null }, "FIGREF1": { "text": "A screenshot of our chrome plugin. For the Japanese search query Hayao Miyazaki, top five interesting relationships are presented at the top of the search results. The red texts are translations of them.", "type_str": "figure", "uris": null, "num": null }, "TABREF0": { "type_str": "table", "num": null, "text": "", "html": null, "content": "
Pipeline illustration (cf. Figure 3): sentences from the input documents are parsed into dependency trees; shortest-path tuples such as (Tim Burton, Johnny Depp, had the impression), (Chien-Ming Wang, Suzuki Ichiro, asked), and (Ringo Starr, John Lennon, went) are extracted as personal relationships and scored; the output is the top-k ranked relationships for the target person (here Ringo Starr) together with their sentences.
" } } } }