{
"paper_id": "C10-1049",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:57:44.962509Z"
},
"title": "A Structured Vector Space Model for Hidden Attribute Meaning in Adjective-Noun Phrases",
"authors": [
{
"first": "Matthias",
"middle": [],
"last": "Hartung",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Heidelberg University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Heidelberg University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present an approach to model hidden attributes in the compositional semantics of adjective-noun phrases in a distributional model. For the representation of adjective meanings, we reformulate the pattern-based approach for attribute learning of Almuhareb (2006) in a structured vector space model (VSM). This model is complemented by a structured vector space representing attribute dimensions of noun meanings. The combination of these representations along the lines of compositional semantic principles exposes the underlying semantic relations in adjective-noun phrases. We show that our compositional VSM outperforms simple pattern-based approaches by circumventing their inherent sparsity problems.",
"pdf_parse": {
"paper_id": "C10-1049",
"_pdf_hash": "",
"abstract": [
{
"text": "We present an approach to model hidden attributes in the compositional semantics of adjective-noun phrases in a distributional model. For the representation of adjective meanings, we reformulate the pattern-based approach for attribute learning of Almuhareb (2006) in a structured vector space model (VSM). This model is complemented by a structured vector space representing attribute dimensions of noun meanings. The combination of these representations along the lines of compositional semantic principles exposes the underlying semantic relations in adjective-noun phrases. We show that our compositional VSM outperforms simple pattern-based approaches by circumventing their inherent sparsity problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In formal semantic theory, the compositional semantics of adjective-noun phrases can be modeled in terms of selective binding (Pustejovsky, 1995) , i.e. the adjective selects one of possibly several roles or attributes 1 from the semantics of the noun.",
"cite_spans": [
{
"start": 126,
"end": 145,
"text": "(Pustejovsky, 1995)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) a. a blue car b. COLOR(car)=blue",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we define a distributional framework that models the compositional process underlying the modification of nouns by adjectives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We focus on property-denoting adjectives as they are valuable for acquiring concept representations for, e.g., ontology learning. An approach for automatic subclassification of property-denoting adjectives is presented in Hartung and Frank (2010) . Our goal is to expose, for adjective-noun phrases as in (1a), the attribute in the semantics of the noun that is selected by the adjective, while not being overtly realized on the syntactic level. The semantic information we intend to capture for (1a) is formalized in (1b).",
"cite_spans": [
{
"start": 222,
"end": 246,
"text": "Hartung and Frank (2010)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Ideally, this kind of knowledge could be extracted from corpora by searching for patterns that paraphrase (1a), e.g. the color of the car is blue. However, linguistic patterns that explicitly relate nouns, adjectives and attributes are very rare.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We avoid these sparsity issues by reducing the triple r= noun, attribute, adjective that encodes the relation illustrated in (1b) to tuples r \u2032 = noun, attribute and r \u2032\u2032 = attribute, adjective , as suggested by Turney and Pantel (2010) for similar tasks. Both r \u2032 and r \u2032\u2032 can be observed much more frequently in text corpora than r. Moreover, this enables us to model adjective and noun meanings as distinct semantic vectors that are built over attributes as dimensions. Based on these semantic representations, we make use of vector composition operations in order to reconstruct r from r \u2032 and r \u2032\u2032 . This, in turn, allows us to infer complete noun-attribute-adjective triples from individually acquired noun-attribute and adjective-attribute representations.",
"cite_spans": [
{
"start": 212,
"end": 236,
"text": "Turney and Pantel (2010)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The contributions of our work are as follows: (i) We propose a framework for attribute selection based on structured vector space models (VSM), using as meaning dimensions attributes elicited by adjectives; (ii) we complement this novel representation of adjective meaning with structured vectors for noun meanings similarly built on attributes as meaning dimensions; (iii) we propose a composition of these representations that mirrors principles of compositional semantics in mapping adjective-noun phrases to their corresponding ontological representation; (iv) we propose and evaluate several metrics for the selection of meaningful components from vector representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Adjective-noun meaning composition has not been addressed in a distributional framework before (cf. Mitchell and Lapata (2008) ). Our approach leans on related work on attribute learning for ontology induction and recent work in distributional semantics.",
"cite_spans": [
{
"start": 100,
"end": 126,
"text": "Mitchell and Lapata (2008)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Attribute learning. Early approaches to attribute learning include Hatzivassiloglou and McKeown (1993) , who cluster adjectives that denote values of the same attribute. A weakness of their work is that the type of the attribute cannot be made explicit. More recent attempts to attribute learning from adjectives are Cimiano (2006) and Almuhareb (2006) . Cimiano uses attributes as features to arrange sets of concepts in a lattice. His approach to attribute acquisition harnesses adjectives that occur frequently as concept modifiers in corpora. The association of adjectives with their potential attributes is performed by dictionary look-up in WordNet (Fellbaum, 1998) . Similarly, Almuhareb (2006) uses adjectives and attributes as (independent) features for the purpose of concept learning. He acquires adjectiveattribute pairs using a pattern-based approach.",
"cite_spans": [
{
"start": 67,
"end": 102,
"text": "Hatzivassiloglou and McKeown (1993)",
"ref_id": "BIBREF9"
},
{
"start": 317,
"end": 331,
"text": "Cimiano (2006)",
"ref_id": "BIBREF4"
},
{
"start": 336,
"end": 352,
"text": "Almuhareb (2006)",
"ref_id": null
},
{
"start": 655,
"end": 671,
"text": "(Fellbaum, 1998)",
"ref_id": null
},
{
"start": 685,
"end": 701,
"text": "Almuhareb (2006)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "As a major limitation, these approaches are confined to adjective-attribute pairs. The polysemy of adjectives that can only be resolved in the context of the modified noun is entirely neglected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "From a methodological point of view, our work is similar to Almuhareb's, as we will also build on lexico-syntactic patterns for attribute selection. However, we extend the task to involve nouns and rephrase his approach in a distributional framework based on the composition of structured vector representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Distributional semantics. We observe two recent trends in distributional semantics research: (i) The use of VSM tends to shift from measuring unfocused semantic similarity to capturing increasingly fine-grained semantic information by incorporating more linguistic structure. Following Baroni and Lenci (to appear) , we refer to such models as structured vector spaces. (ii) Distributional methods are no longer confined to word meaning, but are noticeably extended to capture meaning on the phrase level. Prominent examples for (i) are Pad\u00f3 and Lapata (2007) and Rothenh\u00e4usler and Sch\u00fctze (2009) who use syntactic dependencies rather than single word cooccurrences as dimensions of semantic spaces. Erk and Pad\u00f3 (2008) extend this idea to the argument structure of verbs, while also accounting for compositional meaning aspects by modelling predication over arguments. Hence, their work is also representative for (ii). Baroni et al. (2010) use lexico-syntactic patterns to represent concepts in a structured VSM whose dimensions are interpretable as empirical manifestations of properties. We rely on similar techniques for the acquisition of structured vectors, whereas our work focusses on exposing the hidden meaning dimensions involved in compositional processes underlying concept modification.",
"cite_spans": [
{
"start": 286,
"end": 314,
"text": "Baroni and Lenci (to appear)",
"ref_id": null
},
{
"start": 537,
"end": 559,
"text": "Pad\u00f3 and Lapata (2007)",
"ref_id": "BIBREF15"
},
{
"start": 564,
"end": 596,
"text": "Rothenh\u00e4usler and Sch\u00fctze (2009)",
"ref_id": "BIBREF18"
},
{
"start": 700,
"end": 719,
"text": "Erk and Pad\u00f3 (2008)",
"ref_id": "BIBREF5"
},
{
"start": 921,
"end": 941,
"text": "Baroni et al. (2010)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The commonly adopted method for modelling compositionality in VSM is vector composition (Mitchell and Lapata, 2008; Widdows, 2008) . Showing the benefits of vector composition for language modelling, Mitchell and Lapata (2009) emphasize its potential to become a standard method in NLP.",
"cite_spans": [
{
"start": 88,
"end": 115,
"text": "(Mitchell and Lapata, 2008;",
"ref_id": "BIBREF13"
},
{
"start": 116,
"end": 130,
"text": "Widdows, 2008)",
"ref_id": "BIBREF21"
},
{
"start": 200,
"end": 226,
"text": "Mitchell and Lapata (2009)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The approach pursued in this paper builds on both lines of research sketched in (i) and (ii) in that we model a specific meaning layer in the semantics of adjectives and nouns in a structured VSM. Vector composition is used to expose their hidden meaning dimensions on the phrase level. adjectives, as in (2). The triple r can be broken down into tuples r \u2032 = noun, attribute and r \u2032\u2032 = attribute, adjective . Previous learning approaches focussed on r \u2032 (Cimiano, 2006) or r \u2032\u2032 (Almuhareb, 2006) only.",
"cite_spans": [
{
"start": 455,
"end": 470,
"text": "(Cimiano, 2006)",
"ref_id": "BIBREF4"
},
{
"start": 479,
"end": 496,
"text": "(Almuhareb, 2006)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "(2) a. a blue value car concept b. ATTR(concept) = value",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In semantic composition of adjective-noun compounds, the adjective (e.g. blue) contributes a value for an attribute (here: COLOR) that characterizes the concept evoked by the noun (e.g. car). Thus, the attribute in (2) constitutes a 'hidden variable' that is not overtly expressed in (2a), but constitutes the central axis that relates r \u2032 and r \u2032\u2032 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We model the semantics of adjectives and nouns in a structured VSM that conveys the hidden relationship in (2). The dimensions of the model are defined by attributes, such as COLOR, SIZE or SPEED, while the vector components are determined on the basis of carefully selected acquisition patterns that are tailored to capturing the particular semantic information of interest for r \u2032 and r \u2032\u2032 . In this respect, lexico-syntactic patterns serve a similar purpose as dependency relations in Pad\u00f3 and Lapata (2007) or Rothenh\u00e4usler and Sch\u00fctze (2009) . The upper part of Fig. 1 displays examples of vectors we build for adjectives and nouns.",
"cite_spans": [
{
"start": 488,
"end": 510,
"text": "Pad\u00f3 and Lapata (2007)",
"ref_id": "BIBREF15"
},
{
"start": 514,
"end": 546,
"text": "Rothenh\u00e4usler and Sch\u00fctze (2009)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 567,
"end": 573,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Structured vectors built on extraction patterns.",
"sec_num": null
},
{
"text": "The fine granularity of lexico-syntactic patterns that capture the triple r comes at the cost of their sparsity when applied to corpus data. Therefore, we construct separate vector representations for r \u2032 and r \u2032\u2032 . Eventually, these representations are joined by vector composition to reconstruct the triple r. Apart from avoiding sparsity issues, this compositional approach has several prospects from a linguistic perspective as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Composing vectors along hidden dimensions.",
"sec_num": null
},
{
"text": "Building vectors with attributes as meaning dimensions enables us to model (i) ambiguity of adjectives with regard to the attributes they select, and (ii) the disambiguation capacity of adjective and noun vectors when considered jointly. Consider, for example, the phrase enormous ball that is ambiguous for two reasons: enormous may select a set of possible attributes (SIZE or WEIGHT, among others), while ball elicits several attributes in accordance with its different word senses 2 . As seen in Fig. 1 , these ambiguities are nicely captured by the separate vector representations for the adjective and the noun (upper part); by composing these representations, the ambiguity is resolved (lower part).",
"cite_spans": [],
"ref_spans": [
{
"start": 500,
"end": 506,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Ambiguity and disambiguation.",
"sec_num": null
},
{
"text": "In this section, we introduce the methods we apply in order to (i) acquire vector representations for adjectives and nouns, (ii) select appropriate attributes from them, and (iii) compose them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building a VSM for Adjective-Noun Meaning",
"sec_num": "3.2"
},
{
"text": "We use the following patterns 3 for the acquisition of vectors capturing the tuple r \u2032\u2032 = attribute, adjective . Even though some of these patterns (A1 and A4) match triples of nouns, attributes and adjectives, we only use them for the extraction of binary tuples (underlined), thus abstracting from the modified noun.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attribute Acquisition Patterns",
"sec_num": "3.2.1"
},
{
"text": "(A1) ATTR of DT? NN is|was JJ (A2) DT? RB? JJ ATTR (A3) DT? JJ or JJ ATTR (A4) DT? NN's ATTR is|was JJ (A5) is|was|are|were JJ in|of ATTR To acquire noun vectors capturing the tuple r \u2032 = noun, attribute , we rely on the following patterns. Again, we only extract pairs, as indicated by the underlined elements. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attribute Acquisition Patterns",
"sec_num": "3.2.1"
},
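{
"text": "To make the pattern-based acquisition concrete, the following is a minimal sketch of how a pattern such as A1 might be matched over PoS-tagged, lemmatized text (an illustration under our own assumptions, not the authors' implementation; the word/TAG token format, the simplified tag choices and the attribute inventory are assumptions):\n\nimport re\n\n# Assumed inventory: the ten attributes used in the paper.\nATTRIBUTES = {'color', 'direction', 'duration', 'shape', 'size',\n              'smell', 'speed', 'taste', 'temperature', 'weight'}\n\n# Pattern A1, 'ATTR of DT? NN is|was JJ', over word/TAG tokens.\nA1 = re.compile(r'(\\w+)/NN of/IN (?:\\w+/DT )?\\w+/NN (?:is/VBZ|was/VBD) (\\w+)/JJ')\n\ndef extract_a1(tagged_sentence):\n    # Yield (attribute, adjective) pairs licensed by pattern A1.\n    for attr, adj in A1.findall(tagged_sentence):\n        if attr.lower() in ATTRIBUTES:\n            yield (attr.lower(), adj.lower())\n\n# extract_a1('the/DT color/NN of/IN the/DT car/NN is/VBZ blue/JJ')\n# yields ('color', 'blue').",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attribute Acquisition Patterns",
"sec_num": "3.2.1"
},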
{
"text": "Some of the adjectives extracted by A1-A5 are not property-denoting and thus represent noise. This affects in particular pattern A2, which extracts adjectives like former or more, or relational ones such as economic or geographic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Target Filtering",
"sec_num": "3.2.2"
},
{
"text": "This problem may be addressed in different ways: By target filtering, extractions can be checked against a predicative pattern P1 that is supposed to apply to property-denoting adjectives only. Vectors that fail this test are suppressed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Target Filtering",
"sec_num": "3.2.2"
},
{
"text": "(P1) DT NN is|was JJ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Target Filtering",
"sec_num": "3.2.2"
},
{
"text": "Alternatively, extractions obtained from lowconfidence patterns can be awarded reduced weights by means of a pattern value function (defined in 3.3; cf. Pantel and Pennacchiotti (2006) ).",
"cite_spans": [
{
"start": 153,
"end": 184,
"text": "Pantel and Pennacchiotti (2006)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Target Filtering",
"sec_num": "3.2.2"
},
{
"text": "We intend to use the acquired vectors in order to detect attributes that are implicit in adjectivenoun meaning. Therefore, we need a method that selects appropriate attributes from each vector. While, in general, this task consists in distinguishing semantically meaningful dimensions from noise, the requirements are different depending on whether attributes are to be selected from adjective or noun vectors. This is illustrated in Fig. 1 , a typical configuration, with one vector representing a typical property-denoting adjective that exhibits relatively strong peaks on one or more dimensions, whereas noun vectors show a tendency for broad and flat distributions over their dimensions. This suggests using a strict selection function (choosing few very prominent dimensions) for adjectives and a less restrictive one (licensing the inclusion of more dimensions of lower relative prominence) for nouns. Moreover, we are interested in finding a selection function that relies on as few free parameters as possible in order to avoid frequency or dimensionality effects.",
"cite_spans": [],
"ref_spans": [
{
"start": 434,
"end": 440,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Attribute Selection",
"sec_num": "3.2.3"
},
{
"text": "MPC Selection (MPC). An obvious method for attribute selection is to choose the most prominent component from any vector (i.e., the highest absolute value). If a vector exhibits several peaks, all other components are rejected, their relative importance notwithstanding. MPC obviously fails to capture polysemy of targets, which affects ad-jectives such as hot, in particular.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attribute Selection",
"sec_num": "3.2.3"
},
{
"text": "Threshold Selection (TSel). TSel recasts the approach of Almuhareb (2006) , in selecting all dimensions as attributes whose components exceed a frequency threshold. This avoids the drawback of MPC, but introduces a parameter that needs to be optimized. Also, it is difficult to apply absolute thresholds to composed vectors, as the range of their components is subject to great variation, and it is unclear whether the method will scale with increased dimensionality.",
"cite_spans": [
{
"start": 57,
"end": 73,
"text": "Almuhareb (2006)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attribute Selection",
"sec_num": "3.2.3"
},
{
"text": "Entropy Selection (ESel). In information theory, entropy measures the average uncertainty in a probability distribution (Manning and Sch\u00fctze, 1999) .",
"cite_spans": [
{
"start": 120,
"end": 147,
"text": "(Manning and Sch\u00fctze, 1999)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attribute Selection",
"sec_num": "3.2.3"
},
{
"text": "We define the entropy",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attribute Selection",
"sec_num": "3.2.3"
},
{
"text": "H(v) of a vector v= v 1 , . . . , v n over its components as H(v) = \u2212 n i=1 P (v i ) log P (v i ), where P (v i ) = v i / n i=1 v i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attribute Selection",
"sec_num": "3.2.3"
},
{
"text": "We use H(v) to assess the impact of singular vector components on the overall entropy of the vector: We expect entropy to detect components that contribute noise, as opposed to those that contribute important information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attribute Selection",
"sec_num": "3.2.3"
},
{
"text": "We define an algorithm for entropy-based attribute selection that returns a list of informative dimensions. The algorithm successively suppresses (combinations of) vector components one by one. Given that a gain of entropy is equivalent to a loss of information and vice versa, we assume that every combination of components that leads to an increase in entropy when being suppressed is actually responsible for a substantial amount of information. The algorithm includes a back-off to MPC for the special case that a vector contains a single peak (i.e., H(v) = 0), so that, in principle, it should be applicable to vectors of any kind. Vectors with very broad distributions over their dimensions, however, pose a problem to this method. For ball in Fig. 1, for instance, the method does not select any dimension.",
"cite_spans": [],
"ref_spans": [
{
"start": 750,
"end": 761,
"text": "Fig. 1, for",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Attribute Selection",
"sec_num": "3.2.3"
},
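{
"text": "As a concrete reading of this procedure, the following sketch implements a greedy variant that suppresses the k most prominent components jointly, one possible reading of the '(combinations of) components' formulation above (our own simplification, not the authors' exact algorithm):\n\nimport math\n\ndef entropy(v):\n    # H(v) = -sum_i P(v_i) log P(v_i) with P(v_i) = v_i / sum_j v_j.\n    total = sum(v)\n    if total == 0:\n        return 0.0\n    return -sum((x / total) * math.log(x / total) for x in v if x > 0)\n\ndef esel(vector, dims):\n    # A set of peak components is informative if suppressing it jointly\n    # *increases* the entropy of the remaining components.\n    h_full = entropy(vector)\n    if h_full == 0.0:  # single peak: back off to MPC\n        return [dims[max(range(len(vector)), key=vector.__getitem__)]]\n    order = sorted(range(len(vector)), key=lambda i: vector[i], reverse=True)\n    best_k, best_h = 0, h_full\n    for k in range(1, len(vector)):\n        rest = [vector[i] for i in order[k:]]\n        if sum(rest) == 0:\n            break\n        if entropy(rest) > best_h:\n            best_k, best_h = k, entropy(rest)\n    return [dims[i] for i in order[:best_k]]\n\nOn the vectors of Fig. 1, this sketch selects SIZE and WEIGHT for enormous but nothing for the flat ball vector, mirroring the behaviour described above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attribute Selection",
"sec_num": "3.2.3"
},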
{
"text": "Median Selection (MSel). As a further method we rely on the median m that can be informally defined as the value that separates the upper from the lower half of a distribution (Krengel, 2003) . It is less restrictive than MPC and TSel and overcomes the particular drawback of ESel. Using this measure, we choose all dimensions whose components exceed m. Thus, for the vector representing ",
"cite_spans": [
{
"start": 176,
"end": 191,
"text": "(Krengel, 2003)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attribute Selection",
"sec_num": "3.2.3"
},
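{
"text": "For comparison, minimal sketches of the three simpler selection functions, replayed on the vector v_b for ball from Fig. 1 (our own illustration):\n\nimport statistics\n\ndef mpc(vector, dims):\n    # Most Prominent Component: the single highest-valued dimension.\n    return [dims[max(range(len(vector)), key=vector.__getitem__)]]\n\ndef tsel(vector, dims, threshold):\n    # Threshold Selection: all dimensions above an absolute count threshold.\n    return [d for d, x in zip(dims, vector) if x > threshold]\n\ndef msel(vector, dims):\n    # Median Selection: all dimensions whose components exceed the median m.\n    m = statistics.median(vector)\n    return [d for d, x in zip(dims, vector) if x > m]\n\ndims = ['COLOR', 'DIRECTION', 'DURATION', 'SHAPE', 'SIZE',\n        'SMELL', 'SPEED', 'TASTE', 'TEMPERATURE', 'WEIGHT']\nv_b = [14, 38, 2, 20, 26, 0, 45, 0, 0, 20]  # ball, cf. Fig. 1\n# msel(v_b, dims): the median is 17, so DIRECTION, SHAPE, SIZE, SPEED\n# and WEIGHT are selected; mpc(v_b, dims) returns only ['SPEED'].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attribute Selection",
"sec_num": "3.2.3"
},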
{
"text": "We use vector composition as a hinge to combine adjective and noun vectors in order to reconstruct the triple r= noun, attribute, adjective . Mitchell and Lapata (2008) distinguish two major classes of vector composition operations, namely multiplicative and additive operations, that can be extended in various ways. We use their standard definitions (denoted \u00d7 and +, henceforth). For our task, we expect \u00d7 to perform best as it comes closest to the linguistic function of intersective adjectives, i.e. to select dimensions that are prominent both for the adjective and the noun, whereas + basically blurs the vector components, as can be seen in the lower part of Fig. 1 .",
"cite_spans": [
{
"start": 142,
"end": 168,
"text": "Mitchell and Lapata (2008)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 667,
"end": 673,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Vector Composition",
"sec_num": "3.2.4"
},
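{
"text": "A minimal sketch of the two composition operations in their standard component-wise definitions, replayed on the vectors from Fig. 1 (our own illustration):\n\ndef compose_mult(u, v):\n    # Multiplicative composition: component-wise product (the \u00d7 operation).\n    return [a * b for a, b in zip(u, v)]\n\ndef compose_add(u, v):\n    # Additive composition: component-wise sum (the + operation).\n    return [a + b for a, b in zip(u, v)]\n\nv_e = [1, 1, 0, 1, 45, 0, 4, 0, 0, 21]      # enormous\nv_b = [14, 38, 2, 20, 26, 0, 45, 0, 0, 20]  # ball\n# compose_mult(v_e, v_b) -> [14, 38, 0, 20, 1170, 0, 180, 0, 0, 420],\n# a sharp peak on SIZE, while compose_add(v_e, v_b) ->\n# [15, 39, 2, 21, 71, 0, 49, 0, 0, 41] blurs the distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Composition",
"sec_num": "3.2.4"
},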
{
"text": "We follow Pad\u00f3 and Lapata (2007) in defining a semantic space as a matrix M = B \u00d7 T relating a set of target elements T to a set of basis elements B. Further parameters and their instantiations we use in our model are described below. We use p to denote an individual lexico-syntactic pattern.",
"cite_spans": [
{
"start": 10,
"end": 32,
"text": "Pad\u00f3 and Lapata (2007)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Parameters",
"sec_num": "3.3"
},
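{
"text": "A sketch of how such a space M = B \u00d7 T can be assembled from pattern extractions (our own illustration; raw frequencies as the association measure, as in the paper, and an extraction source such as the extract_a1 sketch above are assumptions):\n\nfrom collections import defaultdict\n\ndef build_space(extractions):\n    # extractions: iterable of (target, attribute) pairs produced by the\n    # acquisition patterns; the basis mapping sends each hit to its attribute.\n    space = defaultdict(lambda: defaultdict(int))\n    for target, attribute in extractions:\n        space[target][attribute] += 1  # raw frequency counts\n    return space\n\n# After processing a corpus, space['blue']['color'] holds the raw\n# count for that cell of M.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Parameters",
"sec_num": "3.3"
},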
{
"text": "The basis elements of our VSM are nouns denoting attributes. For comparison, we use the attributes selected by Almuhareb (2006) The context selection function cont(t) determines the set of patterns that contribute to the representation of each target word t \u2208 T . These are the patterns A1-A5 and N1-N4 (cf. Section 3.2.1).",
"cite_spans": [
{
"start": 111,
"end": 127,
"text": "Almuhareb (2006)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Parameters",
"sec_num": "3.3"
},
{
"text": "The target elements represented in the vector space comprise all adjectives T A that match the patterns A1 to A5 in the corpus, provided they ex-ceed a frequency threshold n. During development, n was set to 5 in order to filter noise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Parameters",
"sec_num": "3.3"
},
{
"text": "As for the target nouns T N , we rely on a representative dataset compiled by Almuhareb (2006) . It contains 402 nouns that are balanced with regard to semantic class (according to the WordNet supersenses), ambiguity and frequency.",
"cite_spans": [
{
"start": 78,
"end": 94,
"text": "Almuhareb (2006)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Parameters",
"sec_num": "3.3"
},
{
"text": "As association measure that captures the strength of the association between the elements of B and T , we use raw frequency counts 4 as obtained from the PoS-tagged and lemmatized version of the ukWaC corpus (Baroni et al., 2009) . Table 1 gives an overview of the number of hits returned by these patterns.",
"cite_spans": [
{
"start": 208,
"end": 229,
"text": "(Baroni et al., 2009)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 232,
"end": 239,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Model Parameters",
"sec_num": "3.3"
},
{
"text": "The basis mapping function \u00b5 creates the dimensions of the semantic space by mapping each extraction of a pattern p to the attribute it contains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Parameters",
"sec_num": "3.3"
},
{
"text": "The pattern value function enables us to subdivide dimensions along particular patterns. We experimented with two instantiations: pv const considers, for each dimension, all patterns, while weighting them equally. pv f (p) awards the extractions of pattern p with weight 1, while setting the weights for all patterns different from p to 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Parameters",
"sec_num": "3.3"
},
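{
"text": "The two instantiations of the pattern value function can be sketched as follows (our own illustration; per-pattern counts are assumed to be available):\n\ndef pv_const(pattern):\n    # pv_const: every pattern contributes with equal weight.\n    return 1.0\n\ndef make_pv_f(p):\n    # pv_f(p): only extractions of pattern p count; all others get weight 0.\n    return lambda pattern: 1.0 if pattern == p else 0.0\n\ndef component(counts_by_pattern, pv):\n    # One vector component: per-pattern extraction counts weighted by pv.\n    return sum(pv(pat) * n for pat, n in counts_by_pattern.items())\n\n# component({'A1': 3, 'A2': 10}, pv_const)        -> 13.0\n# component({'A1': 3, 'A2': 10}, make_pv_f('A1')) -> 3.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Parameters",
"sec_num": "3.3"
},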
{
"text": "We evaluate the performance of the structured VSM on the task of inferring attributes from adjective-noun phrases in three experiments: In Exp1 and Exp2, we evaluate vector representations capturing r \u2032 and r \u2032\u2032 independently of one another. Exp3 investigates the selection of hidden attributes from vector representations constructed by composition of adjective and noun vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We compare all results against different gold standards. In Exp1, we follow Almuhareb (2006) , evaluating against WordNet 3.0. For Exp2 and Exp3, we establish gold standards manually: For Exp2, we construct a test set of nouns annotated with their corresponding attributes. For Exp3, we manually annotate adjective-noun phrases with the attributes appropriate for the whole phrase. All experiments are evaluated in terms of precision, recall and F 1 score.",
"cite_spans": [
{
"start": 76,
"end": 92,
"text": "Almuhareb (2006)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
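{
"text": "Since selections are evaluated against gold attribute sets, precision, recall and F1 reduce to simple set comparisons per target; a minimal sketch (our own illustration):\n\ndef prf(selected, gold):\n    # Set-based precision, recall and F1 for one target.\n    selected, gold = set(selected), set(gold)\n    tp = len(selected & gold)\n    p = tp / len(selected) if selected else 0.0\n    r = tp / len(gold) if gold else 0.0\n    f1 = 2 * p * r / (p + r) if p + r else 0.0\n    return p, r, f1\n\n# prf(['SIZE', 'SPEED'], ['SIZE']) -> (0.5, 1.0, 0.666...)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},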
{
"text": "The first experiment evaluates the performance of structured vector representations on attribute selection for adjectives. We compare this model against a re-implementation of Almuhareb (2006) .",
"cite_spans": [
{
"start": 176,
"end": 192,
"text": "Almuhareb (2006)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exp1: Attribute Selection for Adjectives",
"sec_num": "4.1"
},
{
"text": "Experimental settings and gold standard. To reconstruct Almuhareb's approach, we ran his patterns A1-A3 on the ukWaC corpus. Table 1 shows the number of hits when applied to the Web (Almuhareb, 2006) vs. ukWaC. A1 and A3 yield less extractions on ukWaC as compared to the Web. 5 We introduced two additional patterns, A4 and A5, that contribute about 10,000 additional hits. We adopted Almuhareb's manually chosen thresholds for attribute selection for A1-A3; for A4, A5 and a combination of all patterns, we manually selected optimal thresholds.",
"cite_spans": [
{
"start": 182,
"end": 199,
"text": "(Almuhareb, 2006)",
"ref_id": null
},
{
"start": 277,
"end": 278,
"text": "5",
"ref_id": null
}
],
"ref_spans": [
{
"start": 125,
"end": 132,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Exp1: Attribute Selection for Adjectives",
"sec_num": "4.1"
},
{
"text": "We experiment with pv const and all variants of pv f (p) for pattern weighting (see sect. 3.3). For attribute selection, we compare TSel (as used by Almuhareb) , ESel and MSel.",
"cite_spans": [
{
"start": 149,
"end": 159,
"text": "Almuhareb)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exp1: Attribute Selection for Adjectives",
"sec_num": "4.1"
},
{
"text": "The gold standard consists of all adjectives that are linked to at least one of the ten attributes we consider by WordNet's attribute relation (1063 adjectives in total). Evaluation results. Results for Exp1 are displayed in Table 2 . The settings of pv are given in the rows, the attribute selection methods (in combination with target filtering 6 ) in the columns.",
"cite_spans": [],
"ref_spans": [
{
"start": 225,
"end": 232,
"text": "Table 2",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Exp1: Attribute Selection for Adjectives",
"sec_num": "4.1"
},
{
"text": "The results for our re-implementation of Almuhareb's individual patterns are comparable to his original figures 7 , except for A3 that seems to suffer from quantitative differences of the underlying data. Combining all patterns leads to an improvement in precision over (our reconstruction of) Almuhareb's best individual pattern when TSel and target filtering are used in combination. MPC and MSel perform worse (not reported here). As for target filtering, A1 and A3 work best.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exp1: Attribute Selection for Adjectives",
"sec_num": "4.1"
},
{
"text": "Both TSel and ESel benefit from the combination with the target filter, where the largest improvement (and the best overall result) is observ- 5 The difference for A2 is an artifact of Almuhareb's extraction methodology.",
"cite_spans": [
{
"start": 143,
"end": 144,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exp1: Attribute Selection for Adjectives",
"sec_num": "4.1"
},
{
"text": "7 P(A1)=0.176, P(A2)=0.218, P(A3)=0.504 Rothenh\u00e4usler and Sch\u00fctze (2009) that VSMs intended to convey specific semantic information rather than mere similarity benefit primarily from a linguistically adequate choice of contexts. Similar to Almuhareb, recall is problematic. Even though ESel leads to slight improvements, the scores are far from satisfying. With Almuhareb, we note that this is mainly due to a high number of extremely fine-grained adjectives in WordNet that are rare in corpora. 8",
"cite_spans": [
{
"start": 40,
"end": 72,
"text": "Rothenh\u00e4usler and Sch\u00fctze (2009)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exp1: Attribute Selection for Adjectives",
"sec_num": "4.1"
},
{
"text": "Exp2 evaluates the performance of attribute selection from noun vectors tailored to the tuple r \u2032\u2032 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exp2: Attribute Selection for Nouns",
"sec_num": "4.2"
},
{
"text": "Construction of the gold standard. For evaluation, we created a gold standard by manually annotating a set of nouns with attributes. This gold standard builds on a random sample extracted from T N (cf. section 3.3). Running N1-N4 on ukWaC returned semantic vectors for 216 concepts. From these, we randomly sampled 100 concepts that were manually annotated by three human annotators.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exp2: Attribute Selection for Nouns",
"sec_num": "4.2"
},
{
"text": "The annotators were provided a matrix consisting of the nouns and the set of ten attributes for each noun. Their task was to remove all inappropriate attributes. They were free to decide how many attributes to accept for each noun. In order to deal with word sense ambiguity, the annotators were instructed to consider all senses of a noun and to retain every attribute that was acceptable for at least one sense.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exp2: Attribute Selection for Nouns",
"sec_num": "4.2"
},
{
"text": "Inter-annotator agreement amounts to \u03ba= 0.69 (Fleiss, 1971) . Cases of disagreement were adjudicated by majority-voting. The gold standard Table 3 . Performance is lower in comparison to Exp1. We hypothesize that the tuple r \u2032\u2032 might not be fully captured by overt linguistic patterns. This needs further investigation in future research. Against this background, MPC is relatively precise, but poor in terms of recall. ESel, being designed to select more than one prominent dimension, counterintuitively fails to increase recall, suffering from the fact that many noun vectors show a rather flat distribution without any strong peak. MSel turns out to be most suitable for this task: Its precision is comparable to MPC (with N3 as an outlier), while recall is considerably higher. Overall, these results indicate that attribute selection for adjectives and nouns, though similar, should be viewed as distinct tasks that require different attribute selection methods.",
"cite_spans": [
{
"start": 45,
"end": 59,
"text": "(Fleiss, 1971)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 139,
"end": 146,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Exp2: Attribute Selection for Nouns",
"sec_num": "4.2"
},
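{
"text": "For reference, a compact sketch of Fleiss' kappa over such an annotation matrix (two categories, keep vs. remove, three raters; our own illustration of the measure, not the authors' evaluation script):\n\ndef fleiss_kappa(ratings):\n    # ratings: one row per noun-attribute cell, giving [n_keep, n_remove]\n    # counts; every cell is rated by the same number of annotators.\n    n = sum(ratings[0])  # raters per item\n    N = len(ratings)\n    # Mean observed per-item agreement.\n    p_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1))\n                for row in ratings) / N\n    # Chance agreement from the marginal category proportions.\n    totals = [sum(row[j] for row in ratings) for j in range(len(ratings[0]))]\n    p_e = sum((t / (N * n)) ** 2 for t in totals)\n    return (p_bar - p_e) / (1 - p_e)\n\n# fleiss_kappa([[3, 0], [2, 1], [0, 3]]) -> about 0.55; values near\n# 0.69, as reported above, indicate substantial agreement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exp2: Attribute Selection for Nouns",
"sec_num": "4.2"
},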
{
"text": "In this experiment, we compose noun and adjective vectors in order to yield a new combined representation. We investigate whether the semantic information encoded by the components of this new vector is sufficiently precise to disambiguate the attribute dimensions of the original representations (see section 3.1) and, thus, to infer hidden attributes from adjective-noun phrases (see (2)) as advocated by Pustejovsky (1995) .",
"cite_spans": [
{
"start": 407,
"end": 425,
"text": "Pustejovsky (1995)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exp3: Attribute Selection for Adjective-Noun Phrases",
"sec_num": "4.3"
},
{
"text": "Construction of the gold standard. For evaluation, we created a manually annotated test set of adjective-noun phrases. We selected a subset of property-denoting adjectives that are appropriate modifiers for the nouns from T N using the predicative pattern P1 (see sect. 3) on ukWaC. This yielded 2085 adjective types that were further reduced to 386 by frequency filtering (n = 5). We sampled our test set from all pairs in the cartesian product of the 386 adjectives and 216 nouns (cf. Exp2) that occurred at least 5 times in a subsection of ukWaC. To ensure a sufficient number of ambiguous adjectives in the test set, sampling proceeded in two steps: First, we sampled four nouns each for a manual selection of 15 adjectives of all ambiguity levels in WordNet. This leads to 60 adjective-noun pairs. Second, another 40 pairs were sampled fully automatically. The test set was manually annotated by the same annotators as in Exp2. They were asked to remove all attributes that were not appropriate for a given adjective-noun pair, either because it is not appropriate for the noun or because it is not selected by the adjective. Further instructions were as in Exp2, in particular regarding ambiguity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exp3: Attribute Selection for Adjective-Noun Phrases",
"sec_num": "4.3"
},
{
"text": "The overall agreement is \u03ba=0.67. After adjudication by majority voting, the resulting gold standard contains 86 attributes for 76 pairs. 24 pairs could not be assigned any attribute, either because the adjective did not denote a property, as in private investment, or the most appropriate attribute was not offered, as in blue day or new house.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exp3: Attribute Selection for Adjective-Noun Phrases",
"sec_num": "4.3"
},
{
"text": "We evaluate the vector composition methods discussed in section 3.2.4. Individual vectors for the adjectives and nouns from the test pairs were constructed using all patterns A1-A5 and N1-N4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exp3: Attribute Selection for Adjective-Noun Phrases",
"sec_num": "4.3"
},
{
"text": "For attribute selection, we tested MPC, ESel and MSel. The results are compared against three baselines: BL-P implements a purely patternbased method, i.e. running the patterns that extract the triple r (A1, A4, N1, N3 and N4, with JJ and NN instantiated accordingly) on the pairs from the test set. BL-N and BL-Adj are back-offs for vector composition, taking the respective noun or adjective vector, as investigated in Exp1 and Exp2, as surrogates for a composed vector. 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Table 4 . Attribute selection based on the composition of adjective and noun vectors yields a considerable improvement of both precision and recall as compared to the individual results obtained in Exp1 and Exp2. Comparing the results of Exp3 against the baselines reveals two important aspects of our work. First, the complete failure of BL-P 9 underlines the attractiveness of our method to build structured vector representations from patterns of reduced complexity. Second, vector composition is suitable for selecting hidden attributes from adjective-noun phrases that are jointly encoded by adjective and noun vectors: Both composition methods we tested outperform BL-N. However, the choice of the composition method matters: \u00d7 performs best with a maximum precision of 0.63. This confirms our expectation that vector multiplication is a good approximation for attribute selection in adjective-noun semantics. Being outperformed by BL-Adj in most categories, + is less suited for this task.",
"cite_spans": [],
"ref_spans": [
{
"start": 518,
"end": 526,
"text": "Table 4",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Exp3: Attribute Selection for Adjective-Noun Phrases",
"sec_num": "4.3"
},
{
"text": "All selection methods outperform BL-Adj in precision. Comparing MPC and ESel, ESel achieves better precision when combined with the \u00d7-operator, while doing worse for recall. The robust performance of MPC is not surprising as the test set contains only ten adjective-noun pairs that are still ambiguous with regard to the attributes they elicit. The stronger performance of the entropy-based method with the \u00d7-operator is mainly due to its accuracy on detecting false positives, in that it is able to return \"empty\" selections. In terms of precision, MSel did worse in general, while recall is decent. This underlines that vector composition generally promotes meaningful components, but MSel is too inaccurate to select them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exp3: Attribute Selection for Adjective-Noun Phrases",
"sec_num": "4.3"
},
{
"text": "Given the performance of the baselines and the noun vectors in Exp2, we consider this a very promising result for our approach to attribute 9 The patterns used yield no hits for the test pairs at all. selection from structured vector representations. The results also corroborate the insufficiency of previous approaches to attribute learning from adjectives alone.",
"cite_spans": [
{
"start": 140,
"end": 141,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exp3: Attribute Selection for Adjective-Noun Phrases",
"sec_num": "4.3"
},
{
"text": "We proposed a structured VSM as a framework for inferring hidden attributes from the compositional semantics of adjective-noun phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Outlook",
"sec_num": "5"
},
{
"text": "By reconstructing Almuhareb (2006) , we showed that structured vector representations of adjective meaning consistently outperform simple pattern-based learning, up to 13 pp. in precision. A combination of target filtering and pattern weighting turned out to be effective here, by selecting particulary meaningful lexico-syntactic contexts and filtering adjectives that are not property-denoting. Further studies need to investigate this phenomenon and its most appropriate formulation in a vector space framework.",
"cite_spans": [
{
"start": 18,
"end": 34,
"text": "Almuhareb (2006)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Outlook",
"sec_num": "5"
},
{
"text": "Moreover, the VSM offers a natural representation for sense ambiguity of adjectives. Comparing attribute selection methods on adjective and noun vectors shows that they are sensitive to the distributional structure of the vectors, and need to be chosen with care. Future work will investigate these selection methods in high-dimensional vectors spaces, by using larger sets of attributes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Outlook",
"sec_num": "5"
},
{
"text": "Exp3 shows that the composition of patternbased adjective and noun vectors robustly reflects aspects of meaning composition in adjective-noun phrases, with attributes as a hidden dimension. It also suggests that composition is effective in disambiguation of adjective and noun meanings. This hypothesis needs to be substantiated in further experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Outlook",
"sec_num": "5"
},
{
"text": "Finally, we showed that composition of vectors representing complementary meaning aspects can be beneficial to overcome sparsity effects. However, our compositional approach meets its limits if the patterns capturing adjective and noun meaning in isolation are too sparse to acquire sufficiently populated vector components from corpora. For future work, we envisage using vector similarity to acquire structured vectors for infrequent targets from semantic spaces that convey less linguistic structure to address these remaining sparsity issues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Outlook",
"sec_num": "5"
},
{
"text": "In the original statement of the theory, adjectives select qualia roles that can be considered as collections of attributes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "WordNet senses for the noun ball include, among others: 1. round object[...] in games; 2. solid projectile, 3. object with a spherical shape, 4. people [at a] dance.3 Some of these patterns are taken fromAlmuhareb (2006) andSowa (2000). The descriptions rely on the Penn Tagset(Marcus et al., 1999). ? marks optional elements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We experimented with the conditional probability ratio proposed byMitchell and Lapata (2009). As it performed worse on our data, we did not consider it any further.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For instance: bluish-lilac, chartreuse or pink-lavender as values of the attribute COLOR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Distributional Memory. A General Framework for Corpus-based Semantics",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Lenci",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baroni, Marco and Alessandro Lenci. to appear. Distributional Memory. A General Framework for Corpus-based Semantics. Computational Linguis- tics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The wacky wide web: A collection of very large linguistically processed web-crawled corpora",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Silvia",
"middle": [],
"last": "Bernardini",
"suffix": ""
},
{
"first": "Adriano",
"middle": [],
"last": "Ferraresi",
"suffix": ""
},
{
"first": "Eros",
"middle": [],
"last": "Zanchetta",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of Language Resources and Evaluation",
"volume": "43",
"issue": "3",
"pages": "209--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baroni, Marco, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The wacky wide web: A collection of very large linguistically processed web-crawled corpora. Journal of Language Re- sources and Evaluation, 43(3):209-226.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Strudel. A Corpus-based Semantic Model of Based on Properties and Types",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Barbu",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2010,
"venue": "Cognitive Science",
"volume": "34",
"issue": "",
"pages": "222--254",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baroni, Marco, Brian Murphy, Eduard Barbu, and Massimo Poesio. 2010. Strudel. A Corpus-based Semantic Model of Based on Properties and Types. Cognitive Science, 34:222-254.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Ontology Learning and Population from Text. Algorithms, Evaluation and Applications",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Cimiano",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cimiano, Philipp. 2006. Ontology Learning and Pop- ulation from Text. Algorithms, Evaluation and Ap- plications. Springer.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Structured Vector Space Model for Word Meaning in Context",
"authors": [
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erk, Katrin and Sebastian Pad\u00f3. 2008. A Structured Vector Space Model for Word Meaning in Context. In Proceedings of EMNLP, Honolulu, HI.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "WordNet: An Electronic Lexical Database",
"authors": [],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fellbaum, Christiane, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press, Cam- bridge, Mass.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Measuring nominal scale agreement among many raters",
"authors": [
{
"first": "Joseph",
"middle": [
"L"
],
"last": "Fleiss",
"suffix": ""
}
],
"year": 1971,
"venue": "Psychological Bulletin",
"volume": "76",
"issue": "5",
"pages": "378--382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fleiss, Joseph L. 1971. Measuring nominal scale agreement among many raters. Psychological Bul- letin, 76(5):378-382.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Semi-supervised Type-based Classification of Adjectives. Distinguishing Properties and Relations",
"authors": [
{
"first": "Matthias",
"middle": [],
"last": "Hartung",
"suffix": ""
},
{
"first": "Anette",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 7th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hartung, Matthias and Anette Frank. 2010. A Semi-supervised Type-based Classification of Ad- jectives. Distinguishing Properties and Relations. In Proceedings of the 7th International Conference on Language Resources and Evaluation, Valletta, Malta, May.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Towards the Automatic Identification of Adjectival Scales. Clustering Adjectives According to Meaning",
"authors": [
{
"first": "Vasileios",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the 31st Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "172--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hatzivassiloglou, Vasileios and Kathleen McKeown. 1993. Towards the Automatic Identification of Ad- jectival Scales. Clustering Adjectives According to Meaning. In Proceedings of the 31st Annual Meet- ing of the Association of Computational Linguistics, pages 172-182.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Wahrscheinlichkeitstheorie und Statistik",
"authors": [
{
"first": "Ulrich",
"middle": [],
"last": "Krengel",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Krengel, Ulrich. 2003. Wahrscheinlichkeitstheorie und Statistik. Vieweg, Wiesbaden.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Foundations of Statistical Natural Language Processing",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manning, Christopher D. and Hinrich Sch\u00fctze. 1999. Foundations of Statistical Natural Language Pro- cessing. The MIT Press, Cambridge, Mas- sachusetts.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Treebank-3, ldc99t42. CD-ROM",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Taylor",
"suffix": ""
}
],
"year": 1999,
"venue": "Linguistic Data Consortium",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcus, Mitchell P., Beatrice Santorini, Mary Ann Marcinkiewicz, and Ann Taylor. 1999. Treebank-3, ldc99t42. CD-ROM. Philadelphia, Penn.: Linguis- tic Data Consortium.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Vector-based Models of Semantic Composition",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "236--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell, Jeff and Mirella Lapata. 2008. Vector-based Models of Semantic Composition. In Proceedings of ACL-08: HLT, pages 236-244, Columbus, Ohio, June.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Language Models Based on Semantic Composition",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "430--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell, Jeff and Mirella Lapata. 2009. Lan- guage Models Based on Semantic Composition. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, Singa- pore, August 2009, pages 430-439, Singapore, Au- gust.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Dependency-based Construction of Semantic Space Models",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "",
"pages": "161--199",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pad\u00f3, Sebastian and Mirella Lapata. 2007. Dependency-based Construction of Semantic Space Models. Computational Linguistics, 33:161-199.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Espresso: Leveraging generic patterns for automatically harvesting semantic relations",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Pennacchiotti",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "113--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pantel, Patrick and Marco Pennacchiotti. 2006. Espresso: Leveraging generic patterns for automati- cally harvesting semantic relations. In Proceedings of the 21st International Conference on Computa- tional Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, Sydney, Australia, 17-21 July 2006, pages 113-120.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The Generative Lexicon",
"authors": [
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pustejovsky, James. 1995. The Generative Lexicon. MIT Press, Cambridge, Mass.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Unsupervised Classification with Dependency Based Word Spaces",
"authors": [
{
"first": "Klaus",
"middle": [],
"last": "Rothenh\u00e4usler",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the EACL Workshop on Geometrical Models of Natural Language Semantics (GEMS)",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rothenh\u00e4usler, Klaus and Hinrich Sch\u00fctze. 2009. Un- supervised Classification with Dependency Based Word Spaces. In Proceedings of the EACL Work- shop on Geometrical Models of Natural Language Semantics (GEMS), pages 17-24, Athens, Greece, March.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Knowledge Representation. Logical, Philosophical, and Computational Foundations",
"authors": [
{
"first": "John",
"middle": [
"F"
],
"last": "Sowa",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sowa, John F. 2000. Knowledge Representation. Logical, Philosophical, and Computational Foun- dations. Brooks Cole.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "From Frequency to Meaning. Vector Space Models of Semantics",
"authors": [
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Artificial Intelligence Research",
"volume": "37",
"issue": "",
"pages": "141--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Turney, Peter D. and Patrick Pantel. 2010. From Fre- quency to Meaning. Vector Space Models of Se- mantics. Journal of Artificial Intelligence Research, 37:141-188.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Semantic Vector Products. Some Initial Investigations",
"authors": [
{
"first": "Dominic",
"middle": [],
"last": "Widdows",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2nd Conference on Quantum Interaction",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Widdows, Dominic. 2008. Semantic Vector Products. Some Initial Investigations. In Proceedings of the 2nd Conference on Quantum Interaction, Oxford, UK, March.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Vectors for enormous (v e ) and ball (v b )",
"type_str": "figure",
"uris": null
},
"TABREF0": {
"text": "",
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td/><td>COLOR</td><td>DIRECTION</td><td>DURATION</td><td>SHAPE</td><td>SIZE</td><td>SMELL</td><td>SPEED</td><td>TASTE</td><td>TEMPERATURE</td><td>WEIGHT</td></tr><tr><td>ve</td><td>1</td><td>1</td><td>0</td><td>1</td><td>45</td><td>0</td><td>4</td><td>0</td><td>0</td><td>21</td></tr><tr><td>v b</td><td>14</td><td>38</td><td>2</td><td>20</td><td>26</td><td>0</td><td>45</td><td>0</td><td>0</td><td>20</td></tr><tr><td>ve \u00d7 v b ve + v b</td><td>14 15</td><td>38 39</td><td>0 2</td><td>20 21</td><td>1170 71</td><td>0 0</td><td>180 49</td><td>0 0</td><td>0 0</td><td>420 41</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>Contrary to prior work, we model attribute selec-</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>tion as involving triples of nouns, attributes and</td></tr></table>"
},
"TABREF1": {
"text": "NN with|without DT? RB? JJ? ATTR (N2) DT ATTR of DT? RB? JJ? NN (N3) DT NN's RB? JJ? ATTR (N4) NN has|had a|an RB? JJ? ATTR",
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>"
},
"TABREF3": {
"text": "Number of pattern hits on the Web (Almuhareb, 2006) and on ukWaC ball, WEIGHT, DIRECTION, SHAPE, SPEED and",
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>"
},
"TABREF5": {
"text": "",
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>: Evaluation results for Experiment 2</td></tr><tr><td>able for ESel on pattern A1 only. This is the</td></tr><tr><td>pattern that performs worst in Almuhareb's orig-</td></tr><tr><td>inal setting. From this, we conclude that both</td></tr><tr><td>ESel and target filtering are valuable extensions</td></tr><tr><td>to pattern-based structured vector spaces if preci-</td></tr><tr><td>sion is in focus. This also underlines a finding</td></tr><tr><td>of</td></tr></table>"
},
"TABREF6": {
"text": "Evaluation results for Experiment 1 contains 424 attributes for 100 nouns.",
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>Evaluation results. Results for Exp2 are given</td></tr><tr><td>in</td></tr></table>"
},
"TABREF8": {
"text": "Evaluation results for Experiment 3",
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>Evaluation</td></tr></table>"
}
}
}
}