|
{ |
|
"paper_id": "C08-1024", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:26:29.970060Z" |
|
}, |
|
"title": "Looking for Trouble", |
|
"authors": [ |
|
{ |
|
"first": "Stijn", |
|
"middle": [], |
|
"last": "De Saeger", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Language Infrastructure Group", |
|
"institution": "National Institute of Information and Communications Technology", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Kentaro", |
|
"middle": [], |
|
"last": "Torisawa", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Language Infrastructure Group", |
|
"institution": "National Institute of Information and Communications Technology", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Jun", |
|
"middle": [ |
|
"'" |
|
], |
|
"last": "Ichi Kazama", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper presents a method for mining potential troubles or obstacles related to the use of a given object. Some example instances of this relation are medicine, side effect and amusement park, height restriction. Our acquisition method consists of three steps. First, we use an unsupervised method to collect training samples from Web documents. Second, a set of expressions generally referring to troubles is acquired by a supervised learning method. Finally, the acquired troubles are associated with objects so that each of the resulting pairs consists of an object and a trouble or obstacle in using that object. To show the effectiveness of our method we conducted experiments using a large collection of Japanese Web documents for acquisition. Experimental results show an 85.5% precision for the top 10,000 acquired troubles, and a 74% precision for the top 10% of over 60,000 acquired object-trouble pairs.", |
|
"pdf_parse": { |
|
"paper_id": "C08-1024", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper presents a method for mining potential troubles or obstacles related to the use of a given object. Some example instances of this relation are medicine, side effect and amusement park, height restriction. Our acquisition method consists of three steps. First, we use an unsupervised method to collect training samples from Web documents. Second, a set of expressions generally referring to troubles is acquired by a supervised learning method. Finally, the acquired troubles are associated with objects so that each of the resulting pairs consists of an object and a trouble or obstacle in using that object. To show the effectiveness of our method we conducted experiments using a large collection of Japanese Web documents for acquisition. Experimental results show an 85.5% precision for the top 10,000 acquired troubles, and a 74% precision for the top 10% of over 60,000 acquired object-trouble pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The Stanford Encyclopedia of Philosophy defines an artifact as \". . . an object that has been intentionally made or produced for a certain purpose\". Because of this purpose-orientedness, most human actions relating to an object or artifact fall into two broad categories -actions relating to its intended use (e.g. reading a book), and the preparations necessary therefore (like buying the book). Information concerning potential obstacles, harmful effects or troubles that interfere with this intended use is therefore highly relevant to the user. Licensed under the Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported license (http://creativecommons.org/licenses/by-nc-sa/3.0/). Some rights reserved.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "While some such troubles are self-evident, others represent a genuine obstacle whose existence was thusfar unknown to the user. For example, in early 2008 a food poisoning case caused a big media stir in Japan when dozens of people fell ill after eating Chinese-imported frozen food products containing residual traces of toxic pesticides. While supposedly the presence of toxic chemicals in imported frozen foods had already been established on several occasions before, until the recent incidents public awareness of these facts remained low. In retrospect, a publicly available system suggesting \"residual agrichemicals\" as a potential danger with the consumption of \"frozen foods\" based on information mined from a large collection of Web documents might have led to earlier detection of this crisis. From the viewpoint of manufacturers as well, regularly monitoring the Internet for product names and associated troubles may allow them to find out about perceived flaws in their products sooner and avoid large scale recalls and damage to their brand.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "For a less dramatic example, searching for \"Tokyo Disneyland\" on the Internet typically yields many commercial sites offering travel deals, but little or no information about potential obstacles such as \"height restrictions\" (constraints on who can enjoy a given attraction 1 ) and \"traffic jams\" (a necessary preparation for enjoying a theme park is actually getting there in time). Ofter users have no way of finding out about this until they actually go there.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "These examples demonstrate the importance of a highly accurate automatic method for acquiring what we will call \"object-trouble\" relationspairs e o , e t in which the thing referred to by e t constitutes an (actual or potential) trouble, obstacle or risk in the context of use of an object e o .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Large scale acquisition of this type of contextual knowledge has not been thoroughly studied so far. In this paper, we propose a method for automatically acquiring Japanese noun phrases referring to troubles, (henceforth referred to as trouble expressions), and associating them with expressions denoting artifacts, objects or facilities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our acquisition method consists of three steps. As a first step, we use an unsupervised method for efficiently collecting training data from a Web corpus. Then, a set of expressions denoting troubles is acquired by a supervised learning method -Support Vector Machines (Vapnik, 1998) -trained on this data. Finally, the acquired trouble expressions are paired with noun phrases referring to objects, using a combination of pairwise mutual information and a verb-noun dependency filter based on statistics in a Web corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 269, |
|
"end": 283, |
|
"text": "(Vapnik, 1998)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A broad focus on noun-verb dependenciesand in particular the distinction between dependency relations with negated versus non-negated verbs -is the main characteristic of our method. While this distinction did not prove useful for improving the supervised classifier's performance in step 2, it forms the basis underlying the unsupervised method for training sample selection in the first step, and the final filtering mechanism in the third step.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The rest of this paper is organized as follows. Section 2 points out related work. Section 3 examines the notion of trouble expressions and their evidences. Section 4 describes our method, whose experimental results are discussed in Section 5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our goal of automatically acquiring object-trouble pairs from Web documents is perhaps best viewed as a problem of semantic relation extraction. Recently the Automatic Content Extraction (ACE) program (Doddington et al., 2004 ) is a wellknown benchmark task concerned with the automatic recognition of semantic relations from unstructured text. Typical target relations include \"Reaction\" and \"Production\" (Pantel and Pennacchiootti, 2006) , \"person-affiliation\" and \"organization-location\" (Zelenko et al., 2002) , \"part-whole\" (Berland and Charniak, 1999; Girju et al., 2006) and temporal precedence relations between events (Chklovski and Pantel, 2004; Torisawa, 2006) . Our current task of acquiring \"objecttrouble\" relations is new and object-trouble rela-tions are inherently more abstract and indirect than relations like \"person-affiliation\" -they crucially depend on additional knowledge about whether and how a given object's use might be hampered by a specific trouble.", |
|
"cite_spans": [ |
|
{ |
|
"start": 201, |
|
"end": 225, |
|
"text": "(Doddington et al., 2004", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 406, |
|
"end": 439, |
|
"text": "(Pantel and Pennacchiootti, 2006)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 491, |
|
"end": 513, |
|
"text": "(Zelenko et al., 2002)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 529, |
|
"end": 557, |
|
"text": "(Berland and Charniak, 1999;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 558, |
|
"end": 577, |
|
"text": "Girju et al., 2006)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 627, |
|
"end": 655, |
|
"text": "(Chklovski and Pantel, 2004;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 656, |
|
"end": 671, |
|
"text": "Torisawa, 2006)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Another line of research closely related to our work is the recognition of semantic orientation and sentiment analysis (Turney, 2002; Takamura et al., 2006; Kaji and Kitsuregawa, 2006) . Clearly troubles should be associated with a negative orientation of an expression, but studies on the acquisition of semantic orientation traditionally do not bother with the context of evaluation. While recent work on sentiment analysis has started to associate sentiment-related attribute-evaluation pairs to objects (Kobayashi et al., 2007) , these attributes usually concern intrinsic properties of the objects, such as a digital camera's colors -they do not extend to sentiment-related factors external to the object like \"traffic jams\" for theme parks. The acquisition method proposed in this work addresses both these matters.", |
|
"cite_spans": [ |
|
{ |
|
"start": 119, |
|
"end": 133, |
|
"text": "(Turney, 2002;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 134, |
|
"end": 156, |
|
"text": "Takamura et al., 2006;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 157, |
|
"end": 184, |
|
"text": "Kaji and Kitsuregawa, 2006)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 507, |
|
"end": 531, |
|
"text": "(Kobayashi et al., 2007)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Finally, our task of acquiring trouble expressions can be regarded as hyponymy acquisition, where target expressions are hyponyms of the word \"trouble\". Although we used the classical lexico-syntactic patterns for hyponymy acquisition (Hearst, 1992; Imasumi, 2001; Ando et al., 2003) to reflect this intuition, our experiments show we were unable to attain satisfactory performance using lexico-syntactic patterns alone. Thus, we also use verb-noun dependencies as evidence in learning (Pantel and Ravichandran, 2004; Shinzato and Torisawa, 2004) . We treat the evidences uniformly as elements in a feature vector given to a supervised learning method, which allowed us to extract a considerably larger number of trouble expressions than could be acquired by sparse lexicosyntactic patterns alone, while still keeping decent precision. What kind of hyponymy relations can be acquired by noun-verb dependencies is still an open question in NLP. In this work we show that at least trouble expressions can successfully be acquired based on noun-verb dependency information alone.", |
|
"cite_spans": [ |
|
{ |
|
"start": 235, |
|
"end": 249, |
|
"text": "(Hearst, 1992;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 250, |
|
"end": 264, |
|
"text": "Imasumi, 2001;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 265, |
|
"end": 283, |
|
"text": "Ando et al., 2003)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 486, |
|
"end": 517, |
|
"text": "(Pantel and Ravichandran, 2004;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 518, |
|
"end": 546, |
|
"text": "Shinzato and Torisawa, 2004)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In section 1 we have characterized trouble expressions as a kind of \"trouble\" that occurs in the specific context of using some object, in other words: as hyponyms of \"trouble\". Hence one source of evidence for acquisition are hyponymy relations with \"trouble\" or its synonyms. Another characterization of trouble expressions is to think of them as obstacles in a broad sense: things that prevent certain actions from being undertaken properly. In this sense traffic jams and sickness are troubles since they prevent people from going places and doing things. This assumption underlies a second important class of evidences for learning. More precisely, the evidence used for learning is classified into three categories: (i) lexico-syntactic patterns for hyponymy relations, (ii) dependency relations between expressions and negated verbs, and (iii) dependency relations between expressions and non-negated verbs. The first two categories are assumed to contain positive evidence of trouble expressions, while we assumed the third to function mostly as negative evidence. Our experiments show that (i) turns out to be less useful than expected, while the combination of (ii) and (iii) alone already gave quite reasonable precision in acquiring trouble expressions. Each category of evidence is described further below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trouble Expressions and Features for Their Acquisition", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Since trouble expressions are hyponyms of \"trouble\", one obvious way of acquiring trouble expressions is to use classical lexico-syntactic patterns for hyponymy acquisition (Hearst, 1992) . Table 1 lists some of the patterns proposed in studies on hyponymy acquisition for Japanese (Ando et al., 2003; Imasumi, 2001 ) that are utilized in this work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 187, |
|
"text": "(Hearst, 1992)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 283, |
|
"end": 302, |
|
"text": "(Ando et al., 2003;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 303, |
|
"end": 316, |
|
"text": "Imasumi, 2001", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 190, |
|
"end": 198, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Lexico-syntactic patterns for hyponymy", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In actual acquisition, we instantiated the hypernym positions in the patterns by Japanese translations of \"trouble\" and its synonyms, namely toraburu (troubles), sainan (accidents), saigai (disasters) and shougai (obstacles or handicaps), and used the instantiated patterns as evidence. Hereafter, we call these patterns LSPHs (Lexico-Syntactic Patterns for Hyponymy).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexico-syntactic patterns for hyponymy", |
|
"sec_num": "3.1" |
|
}, |
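
{

"text": "A minimal illustration (ours, not from the paper) of how the instantiated LSPH patterns can be matched over text: the Python sketch below assumes romanized, whitespace-delimited sentences and a naive single-token hyponym capture; the template set and the example sentence are simplified stand-ins for the patterns in Table 1.\n\nimport re\n\n# Japanese hypernyms used to instantiate the pattern slots (Section 3.1)\nHYPERNYMS = ['toraburu', 'sainan', 'saigai', 'shougai']\n\n# simplified LSPH templates; {h} is the hypernym slot, the group captures a hyponym candidate\nTEMPLATES = ['(\\\\w+) nado no {h}', '(\\\\w+) to iu {h}', '(\\\\w+) no youna {h}', '(\\\\w+) to yobareru {h}']\n\nPATTERNS = [re.compile(t.format(h=h)) for t in TEMPLATES for h in HYPERNYMS]\n\ndef lsph_candidates(sentence):\n    # return candidate trouble expressions found in one sentence\n    return [m.group(1) for p in PATTERNS for m in p.finditer(sentence)]\n\nprint(lsph_candidates('juutai nado no toraburu'))  # -> ['juutai']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Lexico-syntactic patterns for hyponymy",

"sec_num": "3.1"

},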
|
{ |
|
"text": "We expect expressions that frequently refer to troubles to have a distinct dependency profile, by which we mean a specific set of dependency relations with verbs (i.e. occurrences in specific argument positions). If T is a trouble expression, then given a sufficiently large corpus one would expect to find a reasonable number of instantiations of patterns like the following:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency relations with Verbs", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 T kept X from doing Y . \u2022 X didn't enjoy Y because of T .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency relations with Verbs", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Similarly, \"X enjoyed T \" would present negative evidence for T being a trouble expression. Rather than single out a set of particular dependency relations suspected to be indicative of trouble expressions, we let a supervised classifier learn an appropriate weight for each feature in a large vector of dependency relations. Two classes of dependency relations proved to be especially beneficial in determining trouble candidates in an unsupervised manner, so we discuss them in more detail below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency relations with Verbs", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Dependency relations with negated verbs Following our characterization of troubles as things that prevent specific actions from taking place, we expect a good deal of trouble expressions to appear in patterns like the following.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency relations with Verbs", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 X cannot go to Y because of T .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency relations with Verbs", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 X did not enjoy Y because of T .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency relations with Verbs", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The important points in the above are (i) the negated verbs and (ii) the mention of T as the reason for not verb-ing. The following are Japanese translations of the above patterns. Here P denotes postpositions (Japanese case markers), V stands for verbs and the phrase \"because of\" is translated as the postposition de.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency relations with Verbs", |
|
"sec_num": "3.2" |
|
}, |
|
{

"text": "\u2022 T de Y ni ikenai. (de: P, ni: P, ikenai: V \"cannot go\")",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dependency relations with Verbs",

"sec_num": "3.2"

},

{

"text": "\u2022 T de X ga tanoshikunakatta. (de: P, ga: P, tanoshikunakatta: V \"did not enjoy\")",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dependency relations with Verbs",

"sec_num": "3.2"

},
|
{ |
|
"text": "We refer to the following dependency relations between expressions marked with the postposition de and negated verbs in these patterns as DNVs (Dependencies to Negated Verbs).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency relations with Verbs", |
|
"sec_num": "3.2" |
|
}, |
|
{

"text": "T de \u2192 negated verb (1)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dependency relations with Verbs",

"sec_num": "3.2"

},
|
{ |
|
"text": "We allow any verb to be the negated verb, expecting that inappropriate verbs will be less weighted by machine learning techniques. For instance, the dependency relations to negated verbs with an originally negative orientation such as \"suffer\" and \"die\" will not work as positive examples for trouble expressions. Unfortunately, these patterns still present only weak evidence for trouble expressions. The precision of the trouble expressions collected using DNV patterns is extremely low -around 6.5%. This is due to the postposition de's ambiguitybesides \"because of\" relations it also functions as a marker for location, time and instrument relations, among others. As a result, non-trouble expressions such as \"by car\" (instrument) and \"in Tokyo\" (location) are marked by the postposition de as well.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency relations with Verbs", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We consider a second class of dependency relations, acting mostly as a counter to the noisy expressions introduced by the ambiguity of the postposition de.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency relations with Verbs", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The final type of evidence is formulated as the following dependency relation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency relations with non-negated verbs", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We call this type of relation DAVs (Dependencies to Affirmative Verbs). The use of these patterns is motivated by the intuition that noisy expressions found with DNVs, such as expressions about locations or instruments, will also frequently appear with non-negated verbs. That is, if you observe \"cannot go to Y (by / because of) X\" and X is not a trouble expression, then you can expect to find \"can go to Y (by / because of) X\" as well.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "T de \u2192 non-negated verb", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our initial expectation was that the DNV and DAV evidences observed with the postposition de alone would contain sufficient information to obtain an accurate classifier, but this expectation was not borne out by our early experiments. As it turns out, using dependency relations to verbs in all argument positions as features to the SVM resulted roughly in a 10\u223c15% increase in precision. Therefore in our final experiments we let the DNV and DAV evidence consist of dependencies with four additional postpositions (ha, ga, wo and ni), which are used to indicate topicalization, subject, object and indirect object. We found that the SVM was quite successful in learning a dependency profile for trouble expressions based on this information.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "T de \u2192 non-negated verb", |
|
"sec_num": null |
|
}, |
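
{

"text": "To make the evidence collection concrete, the following sketch (our illustration; the tuple input format is an assumption, not the actual parser output used in the paper) tallies DNV and DAV counts per noun, postposition and verb from dependency-parsed sentences.\n\nfrom collections import defaultdict\n\nPOSTPOSITIONS = ['de', 'ha', 'ga', 'wo', 'ni']  # the five case markers used as features\n\ndef count_evidence(dependencies):\n    # dependencies: iterable of (noun, postposition, verb, negated) tuples\n    dnv = defaultdict(int)  # Dependencies to Negated Verbs\n    dav = defaultdict(int)  # Dependencies to Affirmative Verbs\n    for noun, post, verb, negated in dependencies:\n        if post not in POSTPOSITIONS:\n            continue\n        table = dnv if negated else dav\n        table[(noun, post, verb)] += 1\n    return dnv, dav\n\n# e.g. ('juutai', 'de', 'iku', True) encodes 'juutai de -> cannot go'",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "T de \u2192 non-negated verb",

"sec_num": null

},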
|
{ |
|
"text": "Nonetheless, the DNV/DAV patterns proved to be useful besides as evidence for supervised learning, for instance in gathering sufficient trouble candidates and sample selection when preparing training data 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "T de \u2192 non-negated verb", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As mentioned, our method for finding troubles in using some objects consists of three steps, described in more detail below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Step 1 Gather training data with a sufficient amount of positive samples using an unsupervised method to reduce the workload of manual annotation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Step 2 Collect expressions commonly perceived as troubles by using the evidences described in the previous section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Step 3 Identify pairs of trouble expressions and objects such that the trouble expressions represent an obstacle in using the objects.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We considered noun phrases observed with the LSPH and DNV evidences as candidate trouble expressions. However, we still found only 7% of the samples observed with these evidences to be real troubles. Because of the diversity of our evidences (dependencies with verbs) we need a reasonable amount of positive samples in order to obtain an accurate classifier. Without some sample selection scheme, we would have to manually annotate about 8000 samples in order to obtain only 560 positive samples in the training data. For this reason we used the following scoring function as an unsupervised method for sample selection.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Step 1: Gathering Training Data", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Score(e) = f LSPH (e) + f DNV (e) f LSPH (e) + f DNV (e) + f DAV (e)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Step 1: Gathering Training Data", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Here, f LSPH (e), f DNV (e) and f DAV (e) are the frequencies that expression e appears with the respective evidences. Intuitively, this function gives a large score to expressions that occur frequently with the positive evidences for trouble expressions (LSPHs and DNVs), or those that appear rarely with the negative evidences (DAVs). In preparing training data we ranked all candidates according to the above score, and annotated N elements from the top and bottom of the ranking as training data. In our experiments, the top elements included a reasonable number of positive samples (25.8%) while there were almost none in the worst elements.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Step 1: Gathering Training Data", |
|
"sec_num": "4.1" |
|
}, |
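
{

"text": "The sample selection can be sketched as follows (our illustration; the frequency tables and counts are invented).\n\n# hypothetical per-expression evidence frequencies\nf_lsph = {'juutai': 12, 'kuruma': 0}\nf_dnv = {'juutai': 40, 'kuruma': 25}\nf_dav = {'juutai': 8, 'kuruma': 300}\n\ndef score(e):\n    # Score(e) = (f_LSPH(e) + f_DNV(e)) / (f_LSPH(e) + f_DNV(e) + f_DAV(e))\n    pos = f_lsph.get(e, 0) + f_dnv.get(e, 0)\n    total = pos + f_dav.get(e, 0)\n    return pos / total if total else 0.0\n\nranked = sorted(f_dnv, key=score, reverse=True)\ntop, bottom = ranked[:1], ranked[-1:]  # annotate both ends of the ranking\n# 'juutai' (traffic jam) scores high; 'kuruma' (car) is dragged down by its DAV counts",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Step 1: Gathering Training Data",

"sec_num": "4.1"

},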
|
{ |
|
"text": "In this step our aim is to acquire expressions often associated with troubles. We use a supervised classifier, namely Support Vector Machines (SVMs) (Vapnik, 1998) for distinguishing troubles from non-troubles, based on the evidences described above. Each dimension of the feature vector presented to the SVM corresponds to the observation of a particular evidence (i.e., these are binary features). We tried using frequencies instead of binary feature values but could not find any significant improvement in performance. After learning we sort the candidate trouble expressions according to their distance to the hyperplane learned by the SVM, and consider the top N expressions in the sorted list as true trouble expressions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 163, |
|
"text": "(Vapnik, 1998)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Step 2: Finding Trouble Expressions", |
|
"sec_num": "4.2" |
|
}, |
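
{

"text": "A minimal sketch of this classification step, using scikit-learn's linear SVC as a stand-in for TinySVM (the tool used in the experiments); the tiny binary feature matrix is invented for illustration.\n\nimport numpy as np\nfrom sklearn.svm import SVC\n\n# rows: candidate expressions; columns: binary evidence features (LSPH/DNV/DAV observations)\nX = np.array([[1, 1, 0], [0, 0, 1], [1, 0, 0]])\ny = np.array([1, 0, 1])  # 1 = trouble expression\n\nclf = SVC(kernel='linear')  # stand-in for a degree-1 polynomial kernel\nclf.fit(X, y)\n\n# rank unseen candidates by signed distance to the learned hyperplane\ncandidates = np.array([[1, 1, 1], [0, 1, 0]])\norder = np.argsort(-clf.decision_function(candidates))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Step 2: Finding Trouble Expressions",

"sec_num": "4.2"

},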
|
{ |
|
"text": "In this third stage we rank possible combinations of objects and trouble expressions acquired in the previous step according to their degree of association and apply a filter using negated verbs to the top pairs in the ranking. The final output of our method is the top N pairs that survived the filtering. We describe each step below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Step 2: Identifying Object-Trouble Pairs", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Generating Object-Trouble Pairs To generate and rank object-trouble pairs we use a variant of pairwise mutual information that scores an objecttrouble pair e o , e t based on the observed frequency of the following pattern. e o no e t P", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Step 2: Identifying Object-Trouble Pairs", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The postposition no is a genitive case marker, and the whole pattern can be translated as \"e t of / in e o \". We assume that appearance of expression e t in this pattern refers to a trouble in using the object e o .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Step 2: Identifying Object-Trouble Pairs", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "More precisely, we generate all possible combinations of trouble expression and objects and rank them according to the following score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Step 2: Identifying Object-Trouble Pairs", |
|
"sec_num": "4.3" |
|
}, |
|
{

"text": "I(e_o, e_t) = f(\"e_o no e_t\") / ( f(\"e_o\") f(\"e_t\") ) (4)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Step 3: Identifying Object-Trouble Pairs",

"sec_num": "4.3"

},
|
{ |
|
"text": "where f (e) denotes an expression e's frequency. This score is large when the pattern \"e o no e t \" is observed more frequently than can be expected from e o and e t 's individual frequencies. Frequency data for all noun phrases was precomputed for the whole Web corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Step 2: Identifying Object-Trouble Pairs", |
|
"sec_num": "4.3" |
|
}, |
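
{

"text": "A sketch of the ranking score (ours; the corpus counts are invented for illustration).\n\n# hypothetical precomputed Web-corpus frequencies\nf_np = {'yuuenchi': 10000, 'juutai': 5000}    # noun phrase frequencies f(e)\nf_pair = {('yuuenchi', 'juutai'): 120}        # frequency of the pattern 'e_o no e_t'\n\ndef i_score(e_o, e_t):\n    # I(e_o, e_t) = f('e_o no e_t') / (f(e_o) f(e_t))\n    return f_pair.get((e_o, e_t), 0) / (f_np[e_o] * f_np[e_t])\n\nranking = sorted(f_pair, key=lambda p: i_score(*p), reverse=True)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Step 3: Identifying Object-Trouble Pairs",

"sec_num": "4.3"

},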
|
{ |
|
"text": "Filtering Object-Trouble Pairs The filtering in the second step is based on the following assumption.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Step 2: Identifying Object-Trouble Pairs", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Assumption If a trouble expression e t refers to a trouble in using an object e o , there is a verb v such that v frequently co-occurs with e o and v has the following dependency relation with e t . e t de \u2192 negated v", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Step 2: Identifying Object-Trouble Pairs", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The intuition behind this assumption can be explained as follows. First, if e o denotes an object or artifact then its frequently co-occurring verbs are likely to be related to a use of e o . Second, if e t is a trouble in using e o , there is some action associated with e o that e t prevents or hinders, implying that e t should be observed with its negation. For instance, if \"traffic jam\" is a trouble in using an amusement park, then we can expect the following pattern to appear also in a corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Step 2: Identifying Object-Trouble Pairs", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 juutai de yuuenchi ni ikenai. traffic jam P theme park P V (cannot go) cannot go to a theme park because of a traffic jam", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Step 2: Identifying Object-Trouble Pairs", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The verb \"to go\" co-occurs often with the noun \"theme park\" and the above pattern contains the dependency relation \"traffic jam de \u2192 cannot go\". Substituting v in the hypothesis for \"to go\", the assumption becomes valid. Because of data sparseness the above pattern may not actually appear in the corpus, but even so the dependency relation \"traffic jam de \u2192 cannot go\" may be observed with other facilities, and thus making the assumption hold anyway. As a final filtering procedure, we gathered K verbs most frequently co-occurring with each object and checked if the trouble expression in the pair has dependency relations with the K verbs in negated form and the postposition de. If none of the K verbs has such a dependency with the trouble expression, the pair is discarded. Otherwise, it is produced as the final output of our method.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Step 2: Identifying Object-Trouble Pairs", |
|
"sec_num": "4.3" |
|
}, |
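
{

"text": "The final filter can be sketched as follows (our illustration; K = 30 in the experiments, and the two input tables are assumptions about precomputed corpus statistics).\n\ndef passes_dnv_filter(e_o, e_t, top_verbs, dnv_de):\n    # top_verbs: object -> its K most frequently co-occurring verbs\n    # dnv_de: set of (trouble, verb) pairs observed as 'trouble de -> negated verb'\n    return any((e_t, v) in dnv_de for v in top_verbs.get(e_o, []))\n\n# keep ('yuuenchi', 'juutai') only if some frequent verb of 'yuuenchi',\n# e.g. 'iku' (to go), was seen negated with 'juutai de'\ntop_verbs = {'yuuenchi': ['iku', 'asobu']}\ndnv_de = {('juutai', 'iku')}\nassert passes_dnv_filter('yuuenchi', 'juutai', top_verbs, dnv_de)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Step 3: Identifying Object-Trouble Pairs",

"sec_num": "4.3"

},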
|
{ |
|
"text": "We extracted noun phrases observed in LSPH, DNV and DAV patterns from 6 \u00d7 10 9 sentences in about 10 8 crawled Japanese Web documents, and used the LSPH and DNV data 3 as candidate trouble expressions. After restricting the noun phrases to those observed more than 10 times in the evidences, we had 136,212 noun phrases. We denote this set as D. Extracting 200 random samples from D we found the ratio of troubles to non-troubles was around 7% and thus expected to find about 10, 000 real trouble expressions in D. 4 Using the sample selection method described in Section 4.2 we prepared 6,500 annotated samples taken from D as training data. The top 3,500 samples included 912 positive samples and the worst 3,000 had just 9 positives, thereby confirming the effectiveness of the scoring function for selecting a reasonable amount of positive samples. Our final training data thus contained 14% positives.", |
|
"cite_spans": [ |
|
{ |
|
"start": 515, |
|
"end": 516, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Finding Trouble Expressions", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "For the feature vectors we included dependencies with all verbs occurring more than 30 times in our Web corpus. Besides the LSPH, DNV and DAV evidences discussed previously, we also included 10 additional binary features indicating for each of the five postpositions whether the expression was observed with DNV or DAV evidence at all, and found that including this information improved performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Finding Trouble Expressions", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We trained a classifier with a polynomial kernel of degree 1 on these evidences using the software TinySVM 5 , and evaluated the results obtained by the supervised acquisition method by asking three human raters whether a randomly selected sample expression denotes a kind of \"trouble\" in general situations. More specifically, we asked whether the expression is a kind of toraburu (trouble), sainan (accident), saigai (disaster) or shougai (obstacle or handicap). 6 For various combinations of evidences (described below), we presented 200 randomly sampled expressions from the top 10,000 expressions ranked according to the distance to the hyperplane learned by the SVM. Samples of all the compared methods are merged and shuffled before evaluation. The kappa statistic for assessing the inter-rater agreement was 0.78, indicating substantial agreement according to Landis and Koch, 1977. 7 We made no effort to remove samples used in training from the experiment, and found that the samples scored by the raters (1281 in total, after removal of duplicates) contained 67 training samples. The 200 samples from the \"full\" classifier contained 12 of these. Fig. 1 shows the precision of the acquired trouble expressions compared to the samples labeled as troubles by all three raters. We sorted the samples according to their distance to the SVM hyperplane and plotted the precision of the top N samples. The best overall precision (85.5%) was obtained by a classifier trained on the full combination of evidences (labeled \"full\" in Fig. 1 ), maintaining over 90% precision for the top 70% of the 200 samples.", |
|
"cite_spans": [ |
|
{ |
|
"start": 868, |
|
"end": 892, |
|
"text": "Landis and Koch, 1977. 7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1157, |
|
"end": 1163, |
|
"text": "Fig. 1", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 1533, |
|
"end": 1539, |
|
"text": "Fig. 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Finding Trouble Expressions", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The remaining results show the relative contributions of the evidences. They were obtained by retraining the \"full\" classifier with a particular set of evidences removed, respectively LSPH evidences (labeled \"w/o LSPH\"), DNV evidences (\"w/o DNV\"), DAV evidences (\"w/o DAV\") and the 10 features indicating the observation of he had no knowledge of the experimental setting nor had seen the acquired data prior to the experiment. 7 This kappa value was calculated over the sum total of samples presented to the raters for scoring (duplicates removed). As Fig. 1 shows, leaving out DNV and even LSPH evidences did not affect performance as much as we expected, while leaving out the DAV dependencies gave more than 20% worse results. Of further interest is the importance of the binary features for DAV/DNV presence per postposition (\"w/o sum DAV/DNV\"). The absence of these 10 binary features accounts for a 10% precision loss compared to the full feature set (75%).", |
|
"cite_spans": [ |
|
{ |
|
"start": 428, |
|
"end": 429, |
|
"text": "7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 553, |
|
"end": 559, |
|
"text": "Fig. 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Finding Trouble Expressions", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We also compared it with a baseline method using only lexico-syntactic patterns. We extracted 100 random noun phrases from the LSPH evidence in D for evaluation (\"LSPH\" in Fig. 1) . The precision for this method was 31%, confirming that lexico-syntactic patterns for hyponymy constitute fairly weak evidence for predicting trouble expressions when used alone. \"Score\" shows the precision of the top 100 samples output by our Score function from section 4. Finally, \"random\" (drawn as a straight line) denotes 100 random samples from D and roughly corresponds to our estimate of 7% true positives.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 172, |
|
"end": 179, |
|
"text": "Fig. 1)", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Finding Trouble Expressions", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "For the second step, we assumed the top 10,000 expressions obtained by our best-scoring supervised learning method (\"full\" in the previous experiments) to be trouble expressions, and proceeded to combine them with terms denoting artifacts or facilities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Identifying Object-Trouble Pairs", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We randomly picked 2,500 words that appeared as direct objects of the verbs kau (\"to buy\"), tsukau (\"to use\"), tsukuru (\"to make\"), taberu (\"to eat\") and tanoshimu (\"to enjoy\") Figure 3 : Examples of acquired object-trouble pairs more than 500 times in our Web corpus, assuming that this would yield a representative set of noun phrases denoting objects or artifacts. 8 Combining this set of objects with the acquired trouble expressions gave a list of 61,873 object-trouble pairs (all pairs e o , e t with at least one occurrence of the pattern \"e o no e t \"). Of this list, 58,570 pairs survived the DNV filtering step and form the final output of our method. For the DNV filtering, we used the top 30 verbs most frequently co-occurring with the object. We again evaluated the resulting object-trouble pairs by asking three human raters whether the presented pairs consist of an object and an expression referring to an actual or potential trouble in using the object. The kappa statistic was 0.60, indicating moderate inter-rater agreement. Fig. 2 shows the precision of the acquired pairs when comparing with what are considered true object-trouble relations by all three raters. Some examples of the pairs obtained by our method are listed in table 3 along with their ranking and the number of raters who judged the pair to be correct.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 177, |
|
"end": 185, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1044, |
|
"end": 1050, |
|
"text": "Fig. 2", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Identifying Object-Trouble Pairs", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The precision for our proposed method when considering the top 10% of pairs ranked by the I score and filtered by the method described in section 4.3 is 71.5% (\"top 10% proposed\" in Fig. 2) , which is actually worse than the results obtained without the final DNV filtering (\"top 10% MI\", 74%). For the first half of all samples however, we do observe some performance increase by the filtering, though both methods appear to converge in the second half of the graph. This tendency is mirrored closely when considering the results for the top 50% of all pairs (respectively \"top 50% proposed\" and \"top 50% MI\" in Fig. 2) . The 15% decrease in precision compared to top 10% results indicates that performance drops gradually when moving to the lower ranked pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 182, |
|
"end": 189, |
|
"text": "Fig. 2)", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 613, |
|
"end": 620, |
|
"text": "Fig. 2)", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Identifying Object-Trouble Pairs", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We have presented an automatic method for finding potential troubles in using objects, mainly artifacts and facilities. Our method acquired 10,000 trouble expressions with 85.5% precision, and over 6000 pairs of objects and trouble expressions with 74% precision.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Concluding Remarks and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Currently, we are developing an Internet search engine frontend that issues warnings about potential troubles related to search keywords. Although we were able to acquire object-trouble pairs with reasonable precision, we plan to make a large-scale highly precise list of troubles by manually checking the output of our method. We expect such a list to lead to even more acurate object-trouble pair acquisition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Concluding Remarks and Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "For example, one has to be over 3 ft. tall to get on the Splash Mountain.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We will discuss yet another use of the DNV evidence in step 2 of our acquisition method.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We restricted noun phrases from the DNV data to those found with the postposition de, as these are most likely to refer to troubles.4 Thus, in the experiments we evaluated the top 10,000 samples output by our method.5 http://chasen.org/\u223ctaku/software/TinySVM/ 6 Actually one of the raters is a co-author of this paper, but", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We manually removed pronouns from this set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Automatic extraction of hyponyms from newspaper using lexicosyntactic patterns", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Ando", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Sekine", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Ishizaki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "IPSJ SIG Technical Report 2003-NL-157", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "77--82", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ando, M., S. Sekine, and S. Ishizaki. 2003. Automatic extraction of hyponyms from newspaper using lexi- cosyntactic patterns. In IPSJ SIG Technical Report 2003-NL-157, pages 77-82. in Japanese.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Finding parts in very large corpora", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Berland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proc. of ACL-1999", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "57--64", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Berland, M. and E. Charniak. 1999. Finding parts in very large corpora. In Proc. of ACL-1999, pages 57- 64.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Verbocean: Mining the web for fine-grained semantic verb relations", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Chklovski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Pantel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proc. of EMNLP-04", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chklovski, T. and P. Pantel. 2004. Verbocean: Mining the web for fine-grained semantic verb relations. In Proc. of EMNLP-04.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "The Automatic Content Extraction (ACE) Program-Tasks, Data, and Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Doddington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Przybocki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Ramshaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Strassel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Weischedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of LREC 2004", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "837--840", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Doddington, G., A. Mitchell, M. Przybocki, L. Ramshaw, S. Strassel, and R. Weischedel. 2004. The Automatic Content Extraction (ACE) Program-Tasks, Data, and Evaluation. Proceedings of LREC 2004, pages 837-840.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Automatic discovery of part-whole relations", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Girju", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Badulescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Moldvan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Computational Linguistics", |
|
"volume": "32", |
|
"issue": "1", |
|
"pages": "83--135", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Girju, R., A. Badulescu, and D. Moldvan. 2006. Au- tomatic discovery of part-whole relations. Computa- tional Linguistics, 32(1):83-135.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Automatic acquisition of hyponyms from large text corpora", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Hearst", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proc. of COLING'92", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "539--545", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hearst, M. 1992. Automatic acquisition of hyponyms from large text corpora. In Proc. of COLING'92, pages 539-545.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Automatic acqusition of hyponymy relations from coordinated noun phrases and appositions. Master's thesis", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Imasumi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Imasumi, K. 2001. Automatic acqusition of hyponymy relations from coordinated noun phrases and apposi- tions. Master's thesis, Kyushu Institute of Technol- ogy.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Automatic construction of polarity-tagged corpus from html documents", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Kaji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kitsuregawa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proc. of COLING/ACL 2006", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "452--459", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kaji, N. and M. Kitsuregawa. 2006. Automatic con- struction of polarity-tagged corpus from html docu- ments. In Proc. of COLING/ACL 2006, pages 452- 459. (poster session).", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Extracting aspect-evaluation and aspect-of relations in opinion mining", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Kobayashi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Inui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Matsumoto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proc. of EMNLP-CoNLL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1065--1074", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kobayashi, N., K. Inui, and Y. Matsumoto. 2007. Ex- tracting aspect-evaluation and aspect-of relations in opinion mining. In Proc. of EMNLP-CoNLL 2007, pages 1065-1074.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Espresso: Leveranging generic patterns for automatically harvesting semantic relations", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Pantel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Pennacchiootti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proc. of COLING/ACL-06", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "113--120", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pantel, P. and M. Pennacchiootti. 2006. Espresso: Leveranging generic patterns for automatically harvesting semantic relations. In Proc. of COLING/ACL-06, pages 113-120.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Automatically labelling semantic classes", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Pantel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Ravichandran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proc. of HLT/NAACL-04", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "321--328", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pantel, P. and D. Ravichandran. 2004. Automatically labelling semantic classes. In Proc. of HLT/NAACL- 04, pages 321-328.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Acquiring hyponymy relations from web documents", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Shinzato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Torisawa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "HLT/NAACL-04", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "73--80", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shinzato, K. and K. Torisawa. 2004. Acquir- ing hyponymy relations from web documents. In HLT/NAACL-04, pages 73-80.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Latent variable models for semantic orientation of phrases", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Takamura", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Inui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Okumura", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proc. of EACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "201--208", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Takamura, H., T. Inui, and M. Okumura. 2006. Latent variable models for semantic orientation of phrases. In Proc. of EACL 2006, pages 201-208.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Acquiring inference rules with temporal constraints by using japanese coordinated sentences and noun-verb co-occurrences", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Torisawa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "HLT-NAACL. The Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Torisawa, K. 2006. Acquiring inference rules with temporal constraints by using japanese coordinated sentences and noun-verb co-occurrences. In Moore, R.C., J.A. Bilmes, J. Chu-Carroll, and M. Sanderson, editors, HLT-NAACL. The Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Thumbs up or thumbs down? semantic orientation applied to unsupervised classification of reviews", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Turney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proc. of ACL'02", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "417--424", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Turney, P. 2002. Thumbs up or thumbs down? seman- tic orientation applied to unsupervised classification of reviews. In Proc. of ACL'02, pages 417-424.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Statistical Learning Theory", |
|
"authors": [ |
|
{ |
|
"first": "Vladimir", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Vapnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vapnik, Vladimir N. 1998. Statistical Learning The- ory. Wiley-Interscience.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Kernel methods for relation extraction", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Zelenko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Aone", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Richardella", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "EMNLP '02: Proceedings of the ACL-02 conference on Empirical methods in natural language processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "71--78", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zelenko, D., C. Aone, and A. Richardella. 2002. Ker- nel methods for relation extraction. In EMNLP '02: Proceedings of the ACL-02 conference on Empirical methods in natural language processing, pages 71- 78, Morristown, NJ, USA. Association for Compu- tational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Performance of trouble expression acquisition (all 3 raters)" |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Performance of object-trouble pair acquisition (3 raters) DAV/DNV evidence per postposition (\"w/o sum DAV/DNV\")." |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"text": "1. hyponym ni nita hypernym (hyponym similar to hypernym) 2. hyponym to yobareru hypernym (hypernym called hyponym) 3. hyponym igai no hypernym (hypernym other than hyponym) 4. hyponym no youna hypernym (hypernym like hyponym) 5. hyponym to iu hypernym (hypernym called hyponym) 6. hyponym nado(no|,) hypernym (hypernym such as hyponym)", |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"text": "Japanese lexico-syntactic patterns for hyponymy relations", |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |