{
"paper_id": "S12-1023",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:24:15.113120Z"
},
"title": "Regular polysemy: A distributional model",
"authors": [
{
"first": "Gemma",
"middle": [],
"last": "Boleda",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Texas at Austin",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "ICL University of Heidelberg",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Jason",
"middle": [],
"last": "Utt",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IMS University of Stuttgart",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Many types of polysemy are not word specific, but are instances of general sense alternations such as ANIMAL-FOOD. Despite their pervasiveness, regular alternations have been mostly ignored in empirical computational semantics. This paper presents (a) a general framework which grounds sense alternations in corpus data, generalizes them above individual words, and allows the prediction of alternations for new words; and (b) a concrete unsupervised implementation of the framework, the Centroid Attribute Model. We evaluate this model against a set of 2,400 ambiguous words and demonstrate that it outperforms two baselines.",
"pdf_parse": {
"paper_id": "S12-1023",
"_pdf_hash": "",
"abstract": [
{
"text": "Many types of polysemy are not word specific, but are instances of general sense alternations such as ANIMAL-FOOD. Despite their pervasiveness, regular alternations have been mostly ignored in empirical computational semantics. This paper presents (a) a general framework which grounds sense alternations in corpus data, generalizes them above individual words, and allows the prediction of alternations for new words; and (b) a concrete unsupervised implementation of the framework, the Centroid Attribute Model. We evaluate this model against a set of 2,400 ambiguous words and demonstrate that it outperforms two baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "One of the biggest challenges in computational semantics is the fact that many words are polysemous. For instance, lamb can refer to an animal (as in The lamb squeezed through the gap) or to a food item (as in Sue had lamb for lunch). Polysemy is pervasive in human language and is a problem in almost all applications of NLP, ranging from Machine Translation (as word senses can translate differently) to Textual Entailment (as most lexical entailments are sense-specific).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The field has thus devoted a large amount of effort to the representation and modeling of word senses. The arguably most prominent effort is Word Sense Disambiguation, WSD (Navigli, 2009) , an in-vitro task whose goal is to identify which, of a set of predefined senses, is the one used in a given context.",
"cite_spans": [
{
"start": 172,
"end": 187,
"text": "(Navigli, 2009)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In work on WSD and other tasks related to polysemy, such as word sense induction, sense alternations are treated as word-specific. As a result, a model for the meaning of lamb that accounts for the relation between the animal and food senses cannot predict that the same relation holds between instances of chicken or salmon in the same type of contexts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A large number of studies in linguistics and cognitive science show evidence that there are regularities in the way words vary in their meaning (Apresjan, 1974; Lakoff and Johnson, 1980; Copestake and Briscoe, 1995; Pustejovsky, 1995; Gentner et al., 2001; Murphy, 2002) , due to general analogical processes such as regular polysemy, metonymy and metaphor. Most work in theoretical linguistics has focused on regular, systematic, or logical polysemy, which accounts for alternations like ANIMAL-FOOD. Sense alternations also arise from metaphorical use of words, as dark in dark glass-dark mood, and also from metonymy when, for instance, using the name of a place for a representative (as in Germany signed the treatise). Disregarding this evidence is empirically inadequate and leads to the well-known lexical bottleneck of current word sense models, which have serious problems in achieving high coverage (Navigli, 2009) .",
"cite_spans": [
{
"start": 144,
"end": 160,
"text": "(Apresjan, 1974;",
"ref_id": "BIBREF1"
},
{
"start": 161,
"end": 186,
"text": "Lakoff and Johnson, 1980;",
"ref_id": "BIBREF17"
},
{
"start": 187,
"end": 215,
"text": "Copestake and Briscoe, 1995;",
"ref_id": "BIBREF6"
},
{
"start": 216,
"end": 234,
"text": "Pustejovsky, 1995;",
"ref_id": "BIBREF31"
},
{
"start": 235,
"end": 256,
"text": "Gentner et al., 2001;",
"ref_id": "BIBREF11"
},
{
"start": 257,
"end": 270,
"text": "Murphy, 2002)",
"ref_id": "BIBREF25"
},
{
"start": 909,
"end": 924,
"text": "(Navigli, 2009)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We believe that empirical computational semantics could profit from a model of polysemy 1 which (a) is applicable across individual words, and thus capable of capturing general patterns and generalizing to new words, and (b) is induced in an unsupervised fashion from corpus data. This is a long-term goal with many unsolved subproblems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The current paper presents two contributions towards this goal. First, since we are working on a relatively unexplored area, we introduce a formal framework that can encompass different approaches (Section 2). Second, we implement a concrete instantiation of this framework, the unsupervised Centroid Attribute Model (Section 3), and evaluate it on a new task, namely, to detect which of a set of words instantiate a given type of polysemy (Sections 4 and 5). We finish with some conclusions and future work (Section 7).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In addition to introducing formal definitions for terms commonly found in the literature, our framework provides novel terminology to deal with regular polysemy in a general fashion (cf. Table 1 ; capital letters designate sets and small letters elements of sets). 2 For a lemma l like lamb, we want to know how well a meta alternation (such as ANIMAL-FOOD) explains a pair of its senses (such as the animal and food senses of lamb). 3 This is formalized through the function score, which maps a meta alternation and two senses onto a score. As an example, let lamb anm denote the ANIMAL sense of lamb, lamb fod the FOOD sense, and lamb hum the PERSON sense. Then, an appropriate model of meta alternations should predict that score(animal, food, lamb anm , lamb fod ) is greater than score(animal, food, lamb anm , lamb hum ).",
"cite_spans": [
{
"start": 265,
"end": 266,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 187,
"end": 194,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Formal framework",
"sec_num": "2"
},
{
"text": "Meta alternations are defined as unordered pairs of meta senses, or cross-word senses like ANIMAL. The meta senses M can be defined a priori or induced from data. They are equivalence classes of senses to which they are linked through the function meta. A sense s instantiates a meta sense m iff meta(s) = m. Functions inst and sns allow us to define meta senses and lemma-specific senses in terms of actual instances, or occurrences of words in context. We decompose the score function into two parts: a representation function rep A that maps a meta alternation into some suitable representation for meta alternations, A, and a compatibility function comp that compares the relation between the senses of a word to the meta alternation's representation. Thus, comp \u2022 rep A = score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal framework",
"sec_num": "2"
},
{
"text": "L set of lemmas I L set of (lemma-wise) instances S L set of (lemma-wise) senses inst : L \u2192 \u2118(I L ) mapping lemma \u2192 instances sns : L \u2192 \u2118(S L ) mapping lemma \u2192 senses M set of meta senses meta : S L \u2192 M mapping senses \u2192 meta senses A \u2286 M \u00d7 M set of meta alternations (MAs) A set of MA representations score : A \u00d7 S 2 L \u2192 R scoring function for MAs rep A : A \u2192 A MA representation function comp : A \u00d7S 2 L \u2192 R compatibility function",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal framework",
"sec_num": "2"
},
{
"text": "The Centroid Attribute Model (CAM) is a simple instantiation of the framework defined in Section 2, designed with two primary goals in mind. First, it is a data-driven model. Second, it does not require any manual sense disambiguation, a notorious bottleneck.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Centroid Attribute Model",
"sec_num": "3"
},
{
"text": "To achieve the first goal, CAM uses a distributional approach. It represents the relevant entities as co-occurrence vectors that can be acquired from a large corpus (Turney and Pantel, 2010) . To achieve the second goal, CAM represents meta senses using monosemous words only, that is, words whose senses all correspond to one meta sense. 4 Examples are cattle and robin for the meta sense ANIMAL. We define the vector for a meta sense as the centroid (average vector) of the monosemous words instantiating it. In turn, meta alternations are represented by the centroids of their meta senses' vectors.",
"cite_spans": [
{
"start": 165,
"end": 190,
"text": "(Turney and Pantel, 2010)",
"ref_id": "BIBREF38"
},
{
"start": 339,
"end": 340,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Centroid Attribute Model",
"sec_num": "3"
},
{
"text": "This strategy is not applicable to test lemmas, which instantiate some meta alternation and are by definition ambiguous. To deal with these without",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Centroid Attribute Model",
"sec_num": "3"
},
{
"text": "vec I : I L \u2192 R k instance vector computation C : R k\u00d7m \u2192 R k centroid computation vec L : L \u2192 R k lemma (type) vector computation rep M : M \u2192 R k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Centroid Attribute Model",
"sec_num": "3"
},
{
"text": "meta sense representation Table 3 : Additional notation and signatures for CAM explicit sense disambiguation, CAM represents lemmas by their type vectors, i.e., the centroid of their instances, and compares their vectors (attributes) to those of the meta alternation -hence the name.",
"cite_spans": [],
"ref_spans": [
{
"start": 26,
"end": 33,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Centroid Attribute Model",
"sec_num": "3"
},
{
"text": "CoreLex: A Semantic Inventory. CAM uses CoreLex (Buitelaar, 1998) as its meta sense inventory. CoreLex is a lexical resource that was designed specifically for the study of polysemy. It builds on WordNet (Fellbaum, 1998), whose sense distinctions are too fine-grained to describe general sense alternations. CoreLex defines a layer of abstraction above WordNet consisting of 39 basic types, coarsegrained ontological classes (Table 2) . These classes are linked to one or more Wordnet anchor nodes, which define a mapping from WordNet synsets onto basic types: A synset s maps onto a basic type b if b has an anchor node that dominates s and there is no other anchor node on the path from b and s. 5 We adopt the WordNet synsets as S, the set of senses, and the CoreLex basic types as our set of meta senses M . The meta function (mapping word senses onto meta senses) is given directly by the anchor mapping defined in the previous paragraph. This means that the set of meta alternations is given by the set of pairs of basic types. Although basic types do not perfectly model meta senses, they constitute an approximation that allows us to model many prominent alternations such as ANIMAL-FOOD.",
"cite_spans": [
{
"start": 48,
"end": 65,
"text": "(Buitelaar, 1998)",
"ref_id": "BIBREF3"
},
{
"start": 698,
"end": 699,
"text": "5",
"ref_id": null
}
],
"ref_spans": [
{
"start": 425,
"end": 434,
"text": "(Table 2)",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "The Centroid Attribute Model",
"sec_num": "3"
},
{
"text": "Vectors for Meta Senses and Alternations. All representations used by CAM are co-occurrence vectors in R k (i.e., A := R k ). Table 3 lists new concepts that CAM introduces to manipulate vector representations. vec I returns a vector for a lemma instance, vec L a (type) vector for a lemma, and C the centroid of a set of vectors.",
"cite_spans": [],
"ref_spans": [
{
"start": 126,
"end": 133,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Centroid Attribute Model",
"sec_num": "3"
},
{
"text": "We leave vec I and C unspecified: we will experiment with these functions in Section 4. CAM does fix the definitions for vec L and rep A . First, vec L defines a lemma's vector as the centroid of its instances:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Centroid Attribute Model",
"sec_num": "3"
},
{
"text": "vec L (l) = C{vec I (i) | i \u2208 inst(l)} (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Centroid Attribute Model",
"sec_num": "3"
},
{
"text": "Before defining rep A , we specify a function rep M that computes vector representations for meta senses m. In CAM, this vector is defined as the centroid of the vectors for all monosemous lemmas whose WordNet sense maps onto m:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Centroid Attribute Model",
"sec_num": "3"
},
{
"text": "rep M (m) = C{vec L (l) | meta(sns(l)) = {m}} (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Centroid Attribute Model",
"sec_num": "3"
},
{
"text": "Now, rep A can be defined simply as the centroid of the meta senses instantiating a:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Centroid Attribute Model",
"sec_num": "3"
},
{
"text": "rep A (m 1 , m 2 ) = C{rep M (m 1 ), rep M (m 2 )} (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Centroid Attribute Model",
"sec_num": "3"
},
{
"text": "Predicting Meta Alternations. The final component of CAM is an instantiation of comp (cf. Table 1) , i.e., the degree to which a sense pair (s 1 , s 2 ) matches a meta alternation a. Since CAM does not represent these senses separately, we define comp as",
"cite_spans": [],
"ref_spans": [
{
"start": 90,
"end": 98,
"text": "Table 1)",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "The Centroid Attribute Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "comp(a, s 1 , s 2 ) = sim(a, vec L (l)) so that {s 1 , s 2 } = sns(l)",
"eq_num": "(4)"
}
],
"section": "The Centroid Attribute Model",
"sec_num": "3"
},
{
"text": "The complete model, score, can now be stated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Centroid Attribute Model",
"sec_num": "3"
},
{
"text": "score(m, m , s, s ) = sim(rep A (m, m ), vec L (l)) so that {s, s } = sns(l) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Centroid Attribute Model",
"sec_num": "3"
},
{
"text": "CAM thus assesses how well a meta alternation a = (m, m ) explains a lemma l by comparing the centroid of the meta senses m, m to l's centroid.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Centroid Attribute Model",
"sec_num": "3"
},
{
"text": "Discussion. The central feature of CAM is that it avoids word sense disambiguation, although it still relies on a predefined sense inventory (Word-Net, through CoreLex). Our use of monosemous words to represent meta senses and meta alternations goes beyond previous work which uses monosemous words to disambiguate polysemous words in context (Izquierdo et al., 2009; Navigli and Velardi, 2005) .",
"cite_spans": [
{
"start": 343,
"end": 367,
"text": "(Izquierdo et al., 2009;",
"ref_id": "BIBREF14"
},
{
"start": 368,
"end": 394,
"text": "Navigli and Velardi, 2005)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Centroid Attribute Model",
"sec_num": "3"
},
{
"text": "Because of its focus on avoiding disambiguation, CAM simplifies the representation of meta alternations and polysemous words to single centroid vectors. In the future, we plan to induce word senses (Sch\u00fctze, 1998; Pantel and Lin, 2002; Reisinger and Mooney, 2010) , which will allow for more flexible and realistic models. ",
"cite_spans": [
{
"start": 198,
"end": 213,
"text": "(Sch\u00fctze, 1998;",
"ref_id": "BIBREF35"
},
{
"start": 214,
"end": 235,
"text": "Pantel and Lin, 2002;",
"ref_id": "BIBREF29"
},
{
"start": 236,
"end": 263,
"text": "Reisinger and Mooney, 2010)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Centroid Attribute Model",
"sec_num": "3"
},
{
"text": "We test CAM on the task of identifying which lemmas of a given set instantiate a specific meta alternation. We let the model rank the lemmas through the score function (cf. Table (1) and Eq. 5) and evaluate the ranked list using Average Precision. While an alternative would be to rank meta alternations for a given polysemous lemma, the method chosen here has the benefit of providing data on the performance of individual meta senses and meta alternations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "All modeling and data extraction was carried out on the written part of the British National Corpus (BNC; Burnage and Dunlop (1992) ) parsed with the C&C tools (Clark and Curran, 2007) . 6 For the evaluation, we focus on disemous words, words which instantiate exactly two meta senses according to WordNet. For each meta alternation (m, m ), we evaluate CAM on a set of disemous targets (lemmas that instantiate (m, m )) and disemous distractors (lemmas that do not). We define three types of distractors: (1) distractors sharing m with the targets (but not m ), (2) distractors sharing m with the targets (but not m), and (3) distractors sharing neither. In this way, we ensure that CAM cannot obtain good results by merely modeling the similarity of targets to either m or m , which would rather be a coarse-grained word sense modeling task.",
"cite_spans": [
{
"start": 106,
"end": 131,
"text": "Burnage and Dunlop (1992)",
"ref_id": "BIBREF4"
},
{
"start": 160,
"end": 184,
"text": "(Clark and Curran, 2007)",
"ref_id": "BIBREF5"
},
{
"start": 187,
"end": 188,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "To ensure that we have enough data, we evaluate CAM on all meta alternations with at least ten targets that occur at least 50 times in the corpus, discarding nouns that have fewer than 3 characters or contain non-alphabetical characters. The distractors are cho-sen so that they match targets in frequency. This leaves us with 60 meta alternations, shown in Table 5. For each meta alternation, we randomly select 40 lemmas as experimental items (10 targets and 10 distractors of each type) so that a total of 2,400 lemmas is used in the evaluation. 7 Table 4 shows four targets and their distractors for the meta alternation ANIMAL-FOOD. 8",
"cite_spans": [],
"ref_spans": [
{
"start": 551,
"end": 558,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "To measure success on this task, we use Average Precision (AP), an evaluation measure from IR that reaches its maximum value of 1 when all correct items are ranked at the top (Manning et al., 2008) . It interpolates the precision values of the top-n prediction lists for all positions n in the list that contain a target. Let T = q 1 , . . . , q m be the list of targets, and let P = p 1 , . . . , p n be the list of predictions as ranked by the model. Let I(x i ) = 1 if p i \u2208 T , and zero otherwise. Then AP (P,",
"cite_spans": [
{
"start": 175,
"end": 197,
"text": "(Manning et al., 2008)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measure and Baselines",
"sec_num": "4.2"
},
{
"text": "T ) = 1 m m i=1 I(x i ) i j=1 I(x i ) i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measure and Baselines",
"sec_num": "4.2"
},
{
"text": ". AP measures the quality of the ranked list for a single meta alternation. The overall quality of a model is given by Mean Average Precision (MAP), the mean of the AP values for all meta alternations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measure and Baselines",
"sec_num": "4.2"
},
{
"text": "We consider two baselines: (1) A random baseline that ranks all lemmas in random order. This baseline is the same for all meta alternations, since the distribution is identical. We estimate it by sampling. (2) A meta alternation-specific frequency baseline which orders the lemmas by their corpus frequencies. This baseline uses the intuition that frequent words will tend to exhibit more typical alternations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measure and Baselines",
"sec_num": "4.2"
},
{
"text": "There are four more parameters to set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Parameters",
"sec_num": "4.3"
},
{
"text": "Definition of vector space. We instantiate the vec I function in three ways. All three are based on dependency-parsed spaces, following our intuition that topical similarity as provided by window-based spaces is insufficient for this task. The functions differ in the definition of the space's dimensions, incorporating different assumptions about distributional differences among meta alternations. The first option, gram, uses grammatical paths of lengths 1 to 3 as dimensions and thus characterizes lemmas and meta senses in terms of their grammatical context (Schulte im Walde, 2006), with a total of 2,528 paths. The second option, lex, uses words as dimensions, treating the dependency parse as a co-occurrence filter (Pad\u00f3 and Lapata, 2007) , and captures topical distinctions. The third option, gramlex, uses lexicalized dependency paths like obj-see to mirror more fine-grained semantic properties (Grefenstette, 1994) . Both lex and gramlex use the 10,000 most frequent items in the corpus. Vector elements. We use \"raw\" corpus cooccurrence frequencies as well as log-likelihoodtransformed counts (Lowe, 2001) as elements of the co-occurrence vectors. Definition of centroid computation. There are three centroid computations in CAM: to combine instances into lemma (type) vectors (function vec L in Eq. (1)); to combine lemma vectors into meta sense vectors (function rep M in Eq. (2)); and to combine meta sense vectors into meta alternation vectors (function rep A in Eq. (3)).",
"cite_spans": [
{
"start": 724,
"end": 747,
"text": "(Pad\u00f3 and Lapata, 2007)",
"ref_id": "BIBREF28"
},
{
"start": 907,
"end": 927,
"text": "(Grefenstette, 1994)",
"ref_id": "BIBREF12"
},
{
"start": 1107,
"end": 1119,
"text": "(Lowe, 2001)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Parameters",
"sec_num": "4.3"
},
{
"text": "For vec L , the obvious definition of the centroid function is as a micro-average, that is, a simple average over all instances. For rep M and rep A , there is a design choice: The centroid can be computed by micro-averaging as well, which assigns a larger weight to more frequent lemmas (rep M ) or meta senses (rep A ). Alternatively, it can be computed by macro-averaging, that is, by normalizing the individual vectors before averaging. This gives equal weight to the each lemma or meta sense, respectively. Macro-averaging in rep A thus assumes that senses are equally distributed, which is an oversimplification, as word senses are known to present skewed distributions (McCarthy et al., 2004) and vectors for words with a predominant sense will be similar to the dominant meta sense vector. Micro-averaging partially models sense skewedness under the assumption that word frequency correlates with sense frequency. Similarity measure. As the vector similarity measure in Eq. 5, we use the standard cosine similarity (Lee, 1999) . It ranges between \u22121 and 1, with 1 denoting maximum similarity. In the current model where the vectors do not contain negative counts, the range is [0; 1].",
"cite_spans": [
{
"start": 676,
"end": 699,
"text": "(McCarthy et al., 2004)",
"ref_id": "BIBREF23"
},
{
"start": 1023,
"end": 1034,
"text": "(Lee, 1999)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Parameters",
"sec_num": "4.3"
},
{
"text": "Effect of Parameters The four parameters of Section 4.3 (three space types, macro-/micro-averaging for rep M and rep A , and log-likelihood transformation) correspond to 24 instantiations of CAM. Figure 1 shows the influence of the four parameters. The only significant difference is tied to the use of lexicalized vector spaces (gramlex / lex are better than gram). The statistical significance of this difference was verified by a t-test (p < 0.01). This indicates that meta alternations can be characterized better through fine-grained semantic distinctions than by syntactic ones.",
"cite_spans": [],
"ref_spans": [
{
"start": 196,
"end": 204,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "The choice of micro-vs. macro-average does not have a clear effect, and the large variation observed in Figure 1 suggests that the best setup is dependent on the specific meta sense or meta alternation being modeled. Focusing on meta alternations, whether the two intervening meta senses should be balanced or not can be expected to depend on the frequencies of the concepts denoted by each meta sense, which vary for each case. Indeed, for AGENT-HUMAN, the alternation which most benefits from the micro-averaging setting, the targets are much more similar to the HU-MAN meta sense (which is approximately 8 times as frequent as AGENT) than to the AGENT meta sense. The latter contains anything that can have an effect on something, e.g. emulsifier, force, valium. The targets for AGENT-HUMAN, in contrast, contain words such as engineer, manipulator, operative, which alternate between an agentive role played by a person and the person herself. While lacking in clear improvement, loglikelihood transformation tends to reduce variance, consistent with the effect previously found in selectional preference modeling (Erk et al., 2010) .",
"cite_spans": [
{
"start": 1118,
"end": 1136,
"text": "(Erk et al., 2010)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 104,
"end": 112,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Overall Performance Although the performance of the CAM models is still far from perfect, all 24 models obtain MAP scores of 0.35 or above, while the random baseline is at 0.313, and the overall frequency baseline at 0.291. Thus, all models consistently outperform both baselines. A bootstrap resampling test (Efron and Tibshirani, 1994) con-firmed that the difference to the frequency baseline is significant at p < 0.01 for all 24 models. The difference to the random baseline is significant at p < 0.01 for 23 models and at p < 0.05 for the remaining model. This shows that the models capture the meta alternations to some extent. The best model uses macro-averaging for rep M and rep A in a log-likelihood transformed gramlex space and achieves a MAP of 0.399. Table 5 breaks down the performance of the best CAM model by meta alternation. It shows an encouraging picture: CAM outperforms the frequency baseline for 49 of the 60 meta alternations and both baselines for 44 (73.3%) of all alternations. The performance shows a high degree of variance, however, ranging from 0.22 to 0.71.",
"cite_spans": [
{
"start": 309,
"end": 337,
"text": "(Efron and Tibshirani, 1994)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 765,
"end": 772,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Meta alternations vary greatly in their difficulty. Since CAM is an attribute similarity-based approach, we expect it to perform better on the alternations whose meta senses are ontologically more similar. We next test this hypothesis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis by Meta Alternation Coherence",
"sec_num": null
},
{
"text": "Let D m i = {d ij } be the set of distractors for the targets T = {t j } that share the meta sense m i , and D R = {d 3j } the set of random distractors. We define the coherence \u03ba of an alternation a of meta senses m 1 , m 2 as the mean (\u00f8) difference between the similarity of each target vector to a and the similarity of the corresponding distractors to a, or formally \u03ba(a) = \u00f8 sim(rep",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis by Meta Alternation Coherence",
"sec_num": null
},
{
"text": "A (m 1 , m 2 ), vec L (t j )) \u2212 sim(rep A (m 1 , m 2 ), vec L (d ij )), for 1 \u2264 i \u2264 3 and 1 \u2264 j \u2264 10.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis by Meta Alternation Coherence",
"sec_num": null
},
{
"text": "That is, \u03ba measures how much more similar, on average, the meta alternation vector is to the target vectors than to the distractor vectors. For a meta alternation with a higher \u03ba, the targets should be easier to distinguish from the distractors. Figure 2 plots AP by \u03ba for all meta alternations. As we expect from the definition of \u03ba, AP is strongly correlated with \u03ba. However, there is a marked Y shape, i.e., a divergence in behavior between high\u03ba and mid-AP alternations (upper right corner) and mid-\u03ba and high-AP alternations (upper left corner).",
"cite_spans": [],
"ref_spans": [
{
"start": 246,
"end": 254,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Analysis by Meta Alternation Coherence",
"sec_num": null
},
{
"text": "In the first case, meta alternations perform worse than expected, and we find that this typically points to missing senses, that is, problems in the underlying lexical resource (WordNet, via CoreLex). For instance, the FOOD-PLANT distractor almond is given Table 5 : Meta alternations and their average precision values for the task. The random baseline performs at 0.313 while the frequency baseline ranges from 0.255 to 0.369 with a mean of 0.291. Alternations for which the model outperforms the frequency baseline are in boldface (mean AP: 0.399, standard deviation: 0.119).",
"cite_spans": [],
"ref_spans": [
{
"start": 257,
"end": 264,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis by Meta Alternation Coherence",
"sec_num": null
},
{
"text": "grs-psy democracy, faculty, humanism, regime, pro-sta bondage, dehydration, erosion,urbanization psy-sta anaemia,delight, pathology, sensibility hum-prt bum, contractor, peter, subordinate grp-psy category, collectivism, socialism, underworld a PLANT sense by WordNet, but no FOOD sense. In the case of SOCIAL GROUP-GEOGRAPHICAL LOCA-TION, distractors laboratory and province are missing SOCIAL GROUP senses, which they clearly possess (cf. The whole laboratory celebrated Christmas). This suggests that our approach can help in Word Sense Induction and thesaurus construction. In the second case, meta alternations perform better than expected: They have a low \u03ba, but a high AP. These include grs-psy, pro-sta, psy-sta, hum-prt and grp-psy. These meta alternations involve fairly abstract meta senses such as PSYCHO-LOGICAL FEATURE and STATE. 9 Table 6 lists a sample of targets for the five meta alternations involved. The targets are clearly similar to each other on the level of their meta senses. However, they can occur in very different semantic contexts. Thus, here it is the underlying model (the gramlex space) that can explain the lower than average coherence. It is striking that CAM can account for abstract words and meta alternations between these, given that it uses first-order co-occurrence information only. 9 An exception is hum-prt. It has a low coherence because many WordNet lemmas with a PART sense are body parts. ",
"cite_spans": [
{
"start": 1327,
"end": 1328,
"text": "9",
"ref_id": null
}
],
"ref_spans": [
{
"start": 846,
"end": 853,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Analysis by Meta Alternation Coherence",
"sec_num": null
},
{
"text": "As noted in Section 1, there is little work in empirical computational semantics on explicitly modeling sense alternations, although the notions that we have formalized here affect several tasks across NLP subfields. Most work on regular sense alternations has focused on regular polysemy. A pioneering study is Buitelaar (1998) , who accounts for regular polysemy through the CoreLex resource (cf. Section 3). A similar effort is carried out by Tomuro (2001) , but he represents regular polysemy at the level of senses. Recently, Utt and Pad\u00f3 (2011) explore the differences between idiosyncratic and regular polysemy patterns building on CoreLex. Lapata (2000) focuses on the default meaning arising from word combinations, as opposed to the polysemy of single words as in this study.",
"cite_spans": [
{
"start": 312,
"end": 328,
"text": "Buitelaar (1998)",
"ref_id": "BIBREF3"
},
{
"start": 446,
"end": 459,
"text": "Tomuro (2001)",
"ref_id": "BIBREF37"
},
{
"start": 531,
"end": 550,
"text": "Utt and Pad\u00f3 (2011)",
"ref_id": "BIBREF41"
},
{
"start": 656,
"end": 669,
"text": "Lapata (2000)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "6"
},
{
"text": "Meta alternations other than regular polysemy, such as metonymy, play a crucial role in Information Extraction. For instance, the meta alternation SOCIAL GROUP-GEOGRAPHICAL LOCATION corresponds to an ambiguity between the LOCATION-ORGANIZATION Named Entity classes which is known to be a hard problem in Named Entity Recognition and Classification (Markert and Nissim, 2009) . Metaphorical meta alternations have also received attention recently (Turney et al., 2011) . On a structural level, the prediction of meta alternations shows a clear correspondence to analogy prediction as approached in Turney (2006) (carpenter:wood is analogous to mason:stone, but not to photograph:camera). The framework defined in Section 2 conceptualizes our task in a way parallel to that of analogical reasoning, modeling not \"first-order\" semantic similarity, but \"second-order\" semantic relations. However, the two tasks cannot be approached with the same methods, as Turney's model relies on contexts linking two nouns in corpus sentences (what does A do to B?). In contrast, we are interested in relations within words, namely between word senses. We cannot expect two different senses of the same noun to co-occur in the same sentence, as this is discouraged for pragmatic reasons (Gale et al., 1992) .",
"cite_spans": [
{
"start": 348,
"end": 374,
"text": "(Markert and Nissim, 2009)",
"ref_id": "BIBREF22"
},
{
"start": 446,
"end": 467,
"text": "(Turney et al., 2011)",
"ref_id": "BIBREF39"
},
{
"start": 595,
"end": 608,
"text": "Turney (2006)",
"ref_id": "BIBREF40"
},
{
"start": 1268,
"end": 1287,
"text": "(Gale et al., 1992)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "6"
},
{
"text": "A concept analogous to our notion of meta sense (i.e., senses beyond single words) has been used in previous work on class-based WSD (Yarowsky, 1992; Curran, 2005; Izquierdo et al., 2009) , and indeed, the CAM might be used for class-based WSD as well. However, our emphasis lies rather on modeling polysemy across words (meta alternations), something that is absent in WSD, class-based or not. The only exception, to our knowledge, is Ando (2006), who pools the labeled examples for all words from a dataset for learning, implicitly exploiting regularities in sense alternations.",
"cite_spans": [
{
"start": 133,
"end": 149,
"text": "(Yarowsky, 1992;",
"ref_id": "BIBREF42"
},
{
"start": 150,
"end": 163,
"text": "Curran, 2005;",
"ref_id": "BIBREF7"
},
{
"start": 164,
"end": 187,
"text": "Izquierdo et al., 2009)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "6"
},
{
"text": "Meta senses also bear a close resemblance to the notion of semantic class as used in lexical acquisition (Hindle, 1990; Merlo and Stevenson, 2001; Schulte im Walde, 2006; Joanis et al., 2008) . However, in most of this research polysemy is ignored. A few exceptions use soft clustering for multiple assignment of verbs to semantic classes (Pereira et al., 1993; Rooth et al., 1999; Korhonen et al., 2003) , and Boleda et al. (to appear) explicitly model regular polysemy for adjectives.",
"cite_spans": [
{
"start": 105,
"end": 119,
"text": "(Hindle, 1990;",
"ref_id": "BIBREF13"
},
{
"start": 120,
"end": 146,
"text": "Merlo and Stevenson, 2001;",
"ref_id": "BIBREF24"
},
{
"start": 147,
"end": 170,
"text": "Schulte im Walde, 2006;",
"ref_id": "BIBREF34"
},
{
"start": 171,
"end": 191,
"text": "Joanis et al., 2008)",
"ref_id": "BIBREF15"
},
{
"start": 339,
"end": 361,
"text": "(Pereira et al., 1993;",
"ref_id": "BIBREF30"
},
{
"start": 362,
"end": 381,
"text": "Rooth et al., 1999;",
"ref_id": "BIBREF33"
},
{
"start": 382,
"end": 404,
"text": "Korhonen et al., 2003)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "6"
},
{
"text": "We have argued that modeling regular polysemy and other analogical processes will help improve current models of word meaning in empirical computational semantics. We have presented a formal framework to represent and operate with regular sense alternations, as well as a first simple instantiation of the framework. We have conducted an evaluation of different implementations of this model in the new task of determining whether words match a given sense alternation. All models significantly outperform the baselines when considered as a whole, and the best implementation outperforms the baselines for 73.3% of the tested alternations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "We have two next steps in mind. The first is to become independent of WordNet through unsupervised induction of (meta) senses and alternations from the data. This will allow for models that, unlike CAM, can go beyond \"disemous\" words. Further improvements to the model and evaluation will be to develop more informed baselines that capture semantic shifts, to test alternative weighting schemes for the co-occurrence vectors (e.g., PMI), and to use larger corpora than the BNC.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "The second step is to go beyond the limited in-vitro evaluation we have presented here by integrating alternation prediction into larger NLP tasks. Knowledge about alternations can play an important role in counteracting sparseness in many tasks that involve semantic compatibility, e.g., testing the applicability of lexical inference rules (Szpektor et al., 2008) .",
"cite_spans": [
{
"start": 342,
"end": 365,
"text": "(Szpektor et al., 2008)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "Our work is mostly inspired by research on regular polysemy. However, given the fuzzy nature of \"regularity\" in meaning variation, we extend the focus of our attention to include other types of analogical sense construction processes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We re-use inst as a function that returns the set of instances for a sense: inst : SL \u2192 \u2118(IL), and assume that senses partition lemmas' instances: \u2200l : inst(l) = \u22c3_{s \u2208 sns(l)} inst(s). 3 Consistent with the theoretical literature, this paper focuses on two-way polysemy. See Section 7 for further discussion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "10.8% of noun types in the corpus we use are monosemous and 2.3% are disemous, while, on a token level, 23.3% are monosemous and 20.2% disemous.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is necessary because some classes have non-disjoint anchor nodes: e.g., ANIMALs are a subset of LIVING BEINGs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The C&C tools were able to reliably parse about 40M words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Dataset available at http://www.nlpado.de/sebastian/data.shtml. 8 Note that this experimental design avoids any overlap between the words used to construct sense vectors (one meta sense) and the words used in the evaluation (two meta senses).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research is partially funded by the Spanish Ministry of Science and Innovation (FFI2010-15006, TIN2009-14715-C04-04), the AGAUR (2010 BP-A00070), the German Research Foundation (SFB 732), and the EU (PASCAL2; FP7-ICT-216886). It is largely inspired by a course by Ann Copestake at U. Pompeu Fabra (2008). We thank Marco Baroni, Katrin Erk, and the reviewers of this and four other conferences for valuable feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Applying alternating structure optimization to word sense disambiguation",
"authors": [
{
"first": "Rie",
"middle": [
"Kubota"
],
"last": "Ando",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 10th Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "77--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rie Kubota Ando. 2006. Applying alternating structure optimization to word sense disambiguation. In Proceed- ings of the 10th Conference on Computational Natural Language Learning, pages 77-84, New York City, NY.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Regular polysemy",
"authors": [
{
"first": "Iurii",
"middle": [
"Derenikovich"
],
"last": "Apresjan",
"suffix": ""
}
],
"year": 1974,
"venue": "Linguistics",
"volume": "142",
"issue": "",
"pages": "5--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iurii Derenikovich Apresjan. 1974. Regular polysemy. Linguistics, 142:5-32.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Modeling regular polysemy: A study of the semantic classification of Catalan adjectives",
"authors": [
{
"first": "Gemma",
"middle": [],
"last": "Boleda",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte im Walde",
"suffix": ""
},
{
"first": "Toni",
"middle": [],
"last": "Badia",
"suffix": ""
}
],
"year": null,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gemma Boleda, Sabine Schulte im Walde, and Toni Badia. to appear. Modeling regular polysemy: A study of the semantic classification of Catalan adjectives. Computa- tional Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "CoreLex: An ontology of systematic polysemous classes",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Buitelaar",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of Formal Ontologies in Information Systems",
"volume": "",
"issue": "",
"pages": "221--235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Buitelaar. 1998. CoreLex: An ontology of sys- tematic polysemous classes. In Proceedings of For- mal Ontologies in Information Systems, pages 221-235, Amsterdam, The Netherlands.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Encoding the British National Corpus",
"authors": [
{
"first": "Gavin",
"middle": [],
"last": "Burnage",
"suffix": ""
},
{
"first": "Dominic",
"middle": [],
"last": "Dunlop",
"suffix": ""
}
],
"year": 1992,
"venue": "English Language Corpora: Design, Analysis and Exploitation, Papers from the Thirteenth International Conference on English Language Research on Computerized Corpora",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gavin Burnage and Dominic Dunlop. 1992. Encoding the British National Corpus. In Jan Aarts, Pieter de Haan, and Nelleke Oostdijk, editors, English Language Corpora: Design, Analysis and Exploitation, Papers from the Thirteenth International Conference on En- glish Language Research on Computerized Corpora. Rodopi, Amsterdam.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Wide-coverage efficient statistical parsing with CCG and log-linear models",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Clark and James R. Curran. 2007. Wide-coverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semi-productive Polysemy and Sense Extension",
"authors": [
{
"first": "Ann",
"middle": [],
"last": "Copestake",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 1995,
"venue": "Journal of Semantics",
"volume": "12",
"issue": "1",
"pages": "15--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ann Copestake and Ted Briscoe. 1995. Semi-productive Polysemy and Sense Extension. Journal of Semantics, 12(1):15-67.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Supersense tagging of unknown nouns using semantic similarity",
"authors": [
{
"first": "James",
"middle": [],
"last": "Curran",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)",
"volume": "",
"issue": "",
"pages": "26--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Curran. 2005. Supersense tagging of unknown nouns using semantic similarity. In Proceedings of the 43rd Annual Meeting of the Association for Computa- tional Linguistics (ACL'05), pages 26-33, Ann Arbor, Michigan.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An Introduction to the Bootstrap",
"authors": [
{
"first": "Bradley",
"middle": [],
"last": "Efron",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Tibshirani",
"suffix": ""
}
],
"year": 1994,
"venue": "Monographs on Statistics and Applied Probability",
"volume": "57",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bradley Efron and Robert Tibshirani. 1994. An Introduc- tion to the Bootstrap. Monographs on Statistics and Applied Probability 57. Chapman & Hall.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A flexible, corpus-driven model of regular and inverse selectional preferences",
"authors": [
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Ulrike",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
}
],
"year": 2010,
"venue": "Computational Linguistics",
"volume": "36",
"issue": "4",
"pages": "723--763",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katrin Erk, Sebastian Pad\u00f3, and Ulrike Pad\u00f3. 2010. A flexible, corpus-driven model of regular and inverse selectional preferences. Computational Linguistics, 36(4):723-763.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "One sense per discourse",
"authors": [
{
"first": "William",
"middle": [
"A"
],
"last": "Gale",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [
"W"
],
"last": "Church",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the 1992 ARPA Human Language Technologies Workshop",
"volume": "",
"issue": "",
"pages": "233--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum, editor. 1998. WordNet: an elec- tronic lexical database. MIT, London. William A. Gale, Kenneth W. Church, and David Yarowsky. 1992. One sense per discourse. In Proceed- ings of the 1992 ARPA Human Language Technologies Workshop, pages 233-237, Harriman, NY.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Metaphor is like analogy",
"authors": [
{
"first": "Dedre",
"middle": [],
"last": "Gentner",
"suffix": ""
},
{
"first": "Brian",
"middle": [
"F"
],
"last": "Bowdle",
"suffix": ""
},
{
"first": "Phillip",
"middle": [],
"last": "Wolff",
"suffix": ""
},
{
"first": "Consuelo",
"middle": [],
"last": "Boronat",
"suffix": ""
}
],
"year": 2001,
"venue": "The analogical mind: Perspectives from Cognitive Science",
"volume": "",
"issue": "",
"pages": "199--253",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dedre Gentner, Brian F. Bowdle, Phillip Wolff, and Con- suelo Boronat. 2001. Metaphor is like analogy. In D. Gentner, K. J. Holyoak, and B. N. Kokinov, edi- tors, The analogical mind: Perspectives from Cognitive Science, pages 199-253. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Explorations in Automatic Thesaurus Discovery",
"authors": [
{
"first": "Gregory",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregory Grefenstette. 1994. Explorations in Automatic Thesaurus Discovery. Kluwer Academic Publishers.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Noun classification from predicateargument structures",
"authors": [
{
"first": "Donald",
"middle": [],
"last": "Hindle",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the 28th Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "268--275",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Donald Hindle. 1990. Noun classification from predicate- argument structures. In Proceedings of the 28th Meet- ing of the Association for Computational Linguistics, pages 268-275.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "An empirical study on class-based word sense disambiguation",
"authors": [
{
"first": "Rub\u00e9n",
"middle": [],
"last": "Izquierdo",
"suffix": ""
},
{
"first": "Armando",
"middle": [],
"last": "Su\u00e1rez",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Rigau",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009)",
"volume": "",
"issue": "",
"pages": "389--397",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rub\u00e9n Izquierdo, Armando Su\u00e1rez, and German Rigau. 2009. An empirical study on class-based word sense disambiguation. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 389-397, Athens, Greece.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A general feature space for automatic verb classification",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Joanis",
"suffix": ""
},
{
"first": "Suzanne",
"middle": [],
"last": "Stevenson",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "James",
"suffix": ""
}
],
"year": 2008,
"venue": "Natural Language Engineering",
"volume": "14",
"issue": "03",
"pages": "337--367",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Joanis, Suzanne Stevenson, and David James. 2008. A general feature space for automatic verb classifica- tion. Natural Language Engineering, 14(03):337-367.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Clustering polysemic subcategorization frame distributions semantically",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "Yuval",
"middle": [],
"last": "Krymolowski",
"suffix": ""
},
{
"first": "Zvika",
"middle": [],
"last": "Marx",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "64--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Korhonen, Yuval Krymolowski, and Zvika Marx. 2003. Clustering polysemic subcategorization frame distributions semantically. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 64-71.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Metaphors We Live By",
"authors": [
{
"first": "George",
"middle": [],
"last": "Lakoff",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Lakoff and Mark Johnson. 1980. Metaphors We Live By. University of Chicago Press.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The Acquisition and Modeling of Lexical Knowledge: A Corpus-based Investigation of Systematic Polysemy",
"authors": [
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mirella Lapata. 2000. The Acquisition and Modeling of Lexical Knowledge: A Corpus-based Investigation of Systematic Polysemy. Ph.D. thesis, University of Edinburgh.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Measures of distributional similarity",
"authors": [
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 37th Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lillian Lee. 1999. Measures of distributional similarity. In Proceedings of the 37th Annual Meeting on Asso- ciation for Computational Linguistics, pages 25-32, College Park, MA.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Towards a theory of semantic space",
"authors": [
{
"first": "Will",
"middle": [],
"last": "Lowe",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 23rd Annual Meeting of the Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "576--581",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Will Lowe. 2001. Towards a theory of semantic space. In Proceedings of the 23rd Annual Meeting of the Cogni- tive Science Society, pages 576-581, Edinburgh, UK.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Introduction to Information Retrieval",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Prabhakar",
"middle": [],
"last": "Raghavan",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Prabhakar Raghavan, and Hin- rich Sch\u00fctze. 2008. Introduction to Information Re- trieval. Cambridge University Press, Cambridge, UK, 1st edition.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Data and models for metonymy resolution",
"authors": [
{
"first": "Katja",
"middle": [],
"last": "Markert",
"suffix": ""
},
{
"first": "Malvina",
"middle": [],
"last": "Nissim",
"suffix": ""
}
],
"year": 2009,
"venue": "Language Resources and Evaluation",
"volume": "43",
"issue": "2",
"pages": "123--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katja Markert and Malvina Nissim. 2009. Data and models for metonymy resolution. Language Resources and Evaluation, 43(2):123-138.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Using automatically acquired predominant senses for word sense disambiguation",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Koeling",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the ACL SENSEVAL-3 workshop",
"volume": "",
"issue": "",
"pages": "151--154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diana McCarthy, Rob Koeling, Julie Weeds, and John Car- roll. 2004. Using automatically acquired predominant senses for word sense disambiguation. In Proceedings of the ACL SENSEVAL-3 workshop, pages 151-154.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Automatic verb classification based on statistical distributions of argument structure",
"authors": [
{
"first": "Paola",
"middle": [],
"last": "Merlo",
"suffix": ""
},
{
"first": "Suzanne",
"middle": [],
"last": "Stevenson",
"suffix": ""
}
],
"year": 2001,
"venue": "Computational Linguistics",
"volume": "27",
"issue": "3",
"pages": "373--408",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paola Merlo and Suzanne Stevenson. 2001. Automatic verb classification based on statistical distributions of argument structure. Computational Linguistics, 27(3):373-408.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The Big Book of Concepts",
"authors": [
{
"first": "Gregory",
"middle": [
"L"
],
"last": "Murphy",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregory L. Murphy. 2002. The Big Book of Concepts. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Structural semantic interconnections: a knowledge-based approach to word sense disambiguation",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Paola",
"middle": [],
"last": "Velardi",
"suffix": ""
}
],
"year": 2005,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "27",
"issue": "7",
"pages": "1075--1086",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli and Paola Velardi. 2005. Structural se- mantic interconnections: a knowledge-based approach to word sense disambiguation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(7):1075- 1086, July.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Word sense disambiguation: A survey",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2009,
"venue": "ACM Computing Surveys",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM Computing Surveys, 41:10:1-10:69, February.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Dependencybased construction of semantic space models",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "2",
"pages": "161--199",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Pad\u00f3 and Mirella Lapata. 2007. Dependency- based construction of semantic space models. Compu- tational Linguistics, 33(2):161-199.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Discovering word senses from text",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
},
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACM SIGKDD Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "613--619",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Pantel and Dekang Lin. 2002. Discovering word senses from text. In Proceedings of ACM SIGKDD Conference on Knowledge Discovery and Data Mining 2002, pages 613-619, Edmonton.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Distributional clustering of English words",
"authors": [
{
"first": "Fernando",
"middle": [
"C",
"N"
],
"last": "Pereira",
"suffix": ""
},
{
"first": "Naftali",
"middle": [],
"last": "Tishby",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the 31st Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "183--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fernando C. N. Pereira, Naftali Tishby, and Lillian Lee. 1993. Distributional clustering of English words. In Proceedings of the 31st Meeting of the Association for Computational Linguistics, pages 183-190, Columbus, OH.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "The Generative Lexicon",
"authors": [
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Pustejovsky. 1995. The Generative Lexicon. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Multiprototype vector-space models of word meaning",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Reisinger",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 11th Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-2010)",
"volume": "",
"issue": "",
"pages": "109--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Reisinger and Raymond J. Mooney. 2010. Multi- prototype vector-space models of word meaning. In Proceedings of the 11th Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-2010), pages 109-117.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Inducing a semantically annotated lexicon via EM-based clustering",
"authors": [
{
"first": "Mats",
"middle": [],
"last": "Rooth",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": ""
},
{
"first": "Detlef",
"middle": [],
"last": "Prescher",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mats Rooth, Stefan Riezler, Detlef Prescher, Glenn Car- roll, and Franz Beil. 1999. Inducing a semantically annotated lexicon via EM-based clustering. In Proceed- ings of the 37th Annual Meeting of the Association for Computational Linguistics, College Park, MD.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Experiments on the automatic induction of German semantic verb classes",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2006,
"venue": "Computational Linguistics",
"volume": "32",
"issue": "2",
"pages": "159--194",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabine Schulte im Walde. 2006. Experiments on the automatic induction of German semantic verb classes. Computational Linguistics, 32(2):159-194.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Automatic word sense discrimination",
"authors": [
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1998,
"venue": "Computational Linguistics",
"volume": "24",
"issue": "1",
"pages": "97--123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hinrich Sch\u00fctze. 1998. Automatic word sense discrimi- nation. Computational Linguistics, 24(1):97-123.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Contextual preferences",
"authors": [
{
"first": "Idan",
"middle": [],
"last": "Szpektor",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Bar-Haim",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Goldberger",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "683--691",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Idan Szpektor, Ido Dagan, Roy Bar-Haim, and Jacob Goldberger. 2008. Contextual preferences. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, pages 683-691, Columbus, Ohio.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Tree-cut and a lexicon based on systematic polysemy",
"authors": [
{
"first": "Noriko",
"middle": [],
"last": "Tomuro",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies, NAACL '01",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noriko Tomuro. 2001. Tree-cut and a lexicon based on systematic polysemy. In Proceedings of the second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies, NAACL '01, pages 1-8, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "From frequency to meaning: Vector space models of semantics",
"authors": [
{
"first": "Peter",
"middle": [
"D."
],
"last": "Turney",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Artificial Intelligence Research",
"volume": "37",
"issue": "",
"pages": "141--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37:141-188.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Literal and metaphorical sense identification through concrete and abstract context",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Turney",
"suffix": ""
},
{
"first": "Yair",
"middle": [],
"last": "Neuman",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Assaf",
"suffix": ""
},
{
"first": "Yohai",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "680--690",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Turney, Yair Neuman, Dan Assaf, and Yohai Cohen. 2011. Literal and metaphorical sense identification through concrete and abstract context. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 680-690, Edinburgh, Scotland, UK.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Similarity of semantic relations",
"authors": [
{
"first": "Peter",
"middle": [
"D."
],
"last": "Turney",
"suffix": ""
}
],
"year": 2006,
"venue": "Computational Linguistics",
"volume": "32",
"issue": "",
"pages": "379--416",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney. 2006. Similarity of semantic relations. Computational Linguistics, 32(3):379-416.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Ontology-based distinction between polysemy and homonymy",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Utt",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 9th International Conference on Computational Semantics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Utt and Sebastian Pad\u00f3. 2011. Ontology-based distinction between polysemy and homonymy. In Proceedings of the 9th International Conference on Computational Semantics, Oxford, UK.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Word-sense disambiguation using statistical models of Roget's categories trained on large corpora",
"authors": [
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the 14th conference on Computational linguistics",
"volume": "2",
"issue": "",
"pages": "454--460",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Yarowsky. 1992. Word-sense disambiguation using statistical models of Roget's categories trained on large corpora. In Proceedings of the 14th Conference on Computational Linguistics - Volume 2, COLING '92, pages 454-460, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Effect of model parameters on performance. A data point is the mean AP (MAP) across all meta alternations for a specific setting.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF3": {
"text": "Average Precision and Coherence (\u03ba) for each meta alternation. Correlation: r = 0.743 (p < 0.001)",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF0": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Notation and signatures for our framework.",
"html": null
},
"TABREF2": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "CoreLex's basic types with their corresponding WordNet anchors. CAM adopts these as meta senses.",
"html": null
},
"TABREF3": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>carp</td><td>amphibian (anm-art)</td><td>mousse (art-fod)</td><td>appropriation (act-mea)</td></tr><tr><td colspan=\"2\">duckling ape (anm-hum)</td><td>parsley (fod-plt)</td><td>scissors (act-art)</td></tr><tr><td>eel</td><td>leopard (anm-sub)</td><td>pickle (fod-sta)</td><td>showman (agt-hum)</td></tr><tr><td>hare</td><td>lizard (anm-hum)</td><td>pork (fod-mea)</td><td>upholstery (act-art)</td></tr></table>",
"text": "TargetsDistractors with meta sense anm Distractors with meta sense fod Random distractors",
"html": null
},
"TABREF4": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Sample of experimental items for the meta alternation anm-fod. (Abbreviations are listed inTable 2.)",
"html": null
},
"TABREF5": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Sample targets for meta alternations with high AP and mid-coherence values.",
"html": null
}
}
}
}