{
"paper_id": "S17-1011",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:29:07.181862Z"
},
"title": "Frame-Based Continuous Lexical Semantics through Exponential Family Tensor Factorization and Semantic Proto-Roles",
"authors": [
{
"first": "Francis",
"middle": [],
"last": "Ferraro",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Adam",
"middle": [],
"last": "Poliak",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We study how different frame annotations complement one another when learning continuous lexical semantics. We learn the representations from a tensorized skip-gram model that consistently encodes syntactic-semantic content better, with multiple 10% gains over baselines.",
"pdf_parse": {
"paper_id": "S17-1011",
"_pdf_hash": "",
"abstract": [
{
"text": "We study how different frame annotations complement one another when learning continuous lexical semantics. We learn the representations from a tensorized skip-gram model that consistently encodes syntactic-semantic content better, with multiple 10% gains over baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Consider \"Bill\" in Fig. 1 : what is his involvement with the words \"would try,\" and what does this involvement mean? Word embeddings represent such meaning as points in a real-valued vector space (Deerwester et al., 1990; Mikolov et al., 2013) . These representations are often learned by exploiting the frequency that the word cooccurs with contexts, often within a user-defined window (Harris, 1954; Turney and Pantel, 2010) . When built from large-scale sources, like Wikipedia or web crawls, embeddings capture general characteristics of words and allow for robust downstream applications (Kim, 2014; Das et al., 2015) .",
"cite_spans": [
{
"start": 196,
"end": 221,
"text": "(Deerwester et al., 1990;",
"ref_id": "BIBREF9"
},
{
"start": 222,
"end": 243,
"text": "Mikolov et al., 2013)",
"ref_id": "BIBREF24"
},
{
"start": 387,
"end": 401,
"text": "(Harris, 1954;",
"ref_id": null
},
{
"start": 402,
"end": 426,
"text": "Turney and Pantel, 2010)",
"ref_id": "BIBREF35"
},
{
"start": 593,
"end": 604,
"text": "(Kim, 2014;",
"ref_id": "BIBREF20"
},
{
"start": 605,
"end": 622,
"text": "Das et al., 2015)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 19,
"end": 25,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Frame semantics generalize word meanings to that of analyzing structured and interconnected labeled \"concepts\" and abstractions (Minsky, 1974; Fillmore, 1976 Fillmore, , 1982 . These concepts, or roles, implicitly encode expected properties of that word. In a frame semantic analysis of Fig. 1 , the segment \"would try\" triggers the ATTEMPT frame, filling the expected roles AGENT and GOAL with \"Bill\" and \"the same tactic,\" respectively. While frame semantics provide a structured form for analyzing words with crisp, categorically-labeled concepts, the encoded properties and expectations are implicit. What does it mean to fill a frame's role?",
"cite_spans": [
{
"start": 128,
"end": 142,
"text": "(Minsky, 1974;",
"ref_id": "BIBREF25"
},
{
"start": 143,
"end": 157,
"text": "Fillmore, 1976",
"ref_id": "BIBREF14"
},
{
"start": 158,
"end": 174,
"text": "Fillmore, , 1982",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 287,
"end": 293,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Semantic proto-role (SPR) theory, motivated by Dowty (1991) 's thematic proto-role theory, offers an answer to this. SPR replaces categorical roles ATTEMPT She said Bill would try the same tactic again.",
"cite_spans": [
{
"start": 47,
"end": 59,
"text": "Dowty (1991)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "AGENT GOAL Figure 1 : A simple frame analysis.",
"cite_spans": [],
"ref_spans": [
{
"start": 11,
"end": 19,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "with judgements about multiple underlying properties about what is likely true of the entity filling the role. For example, SPR talks about how likely it is for Bill to be a willing participant in the ATTEMPT. The answer to this and other simple judgments characterize Bill and his involvement. Since SPR both captures the likelihood of certain properties and characterizes roles as groupings of properties, we can view SPR as representing a type of continuous frame semantics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We are interested in capturing these SPR-based properties and expectations within word embeddings. We present a method that learns frameenriched embeddings from millions of documents that have been semantically parsed with multiple different frame analyzers (Ferraro et al., 2014) . Our method leverages Cotterell et al. (2017) 's formulation of Mikolov et al. (2013) 's popular skip-gram model as exponential family principal component analysis (EPCA) and tensor factorization. This paper's primary contributions are: (i) enriching learned word embeddings with multiple, automatically obtained frames from large, disparate corpora; and (ii) demonstrating these enriched embeddings better capture SPR-based properties. In so doing, we also generalize Cotterell et al.'s method to arbitrary tensor dimensions. This allows us to include an arbitrary amount of semantic information when learning embeddings. Our variable-size tensor factorization code is available at https://github.com/ fmof/tensor-factorization.",
"cite_spans": [
{
"start": 258,
"end": 280,
"text": "(Ferraro et al., 2014)",
"ref_id": "BIBREF12"
},
{
"start": 304,
"end": 327,
"text": "Cotterell et al. (2017)",
"ref_id": "BIBREF4"
},
{
"start": 346,
"end": 367,
"text": "Mikolov et al. (2013)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Frame semantics currently used in NLP have a rich history in linguistic literature. Fillmore (1976) 's frames are based on a word's context and prototypical concepts that an individual word evokes; they intend to represent the meaning of lexical items by mapping words to real world concepts and shared experiences. Frame-based semantics have inspired many semantic annotation schemata and datasets, such as FrameNet (Baker et al., 1998) , PropBank (Palmer et al., 2005) , and Verbnet (Schuler, 2005) , as well as composite resources (Hovy et al., 2006; Palmer, 2009; Banarescu et al., 2012) . 1 Thematic Roles and Proto Roles These resources map words to their meanings through discrete/categorically labeled frames and roles; sometimes, as in FrameNet, the roles can be very descriptive (e.g., the DEGREE role for the AF-FIRM OR DENY frame), while in other cases, as in PropBank, the roles can be quite general (e.g., ARG0). Regardless of the actual schema, the roles are based on thematic roles, which map a predicate's arguments to a semantic representation that makes various semantic distinctions among the arguments (Dowty, 1989 ). 2 Dowty (1991) claims that thematic role distinctions are not atomic, i.e., they can be deconstructed and analyzed at a lower level. Instead of many discrete thematic roles, Dowty (1991) argues for proto-thematic roles, e.g. PROTO-AGENT rather than AGENT, where distinctions in proto-roles are based on clusterings of logical entailments. That is, PROTO-AGENTs often have certain properties in common, e.g., manipulating other objects or willingly participating in an action; PROTO-PATIENTs are often changed or affected by some action. By decomposing the meaning of roles into properties or expectations that can be reasoned about, proto-roles can be seen as including a form of vector representation within structured frame semantics.",
"cite_spans": [
{
"start": 84,
"end": 99,
"text": "Fillmore (1976)",
"ref_id": "BIBREF14"
},
{
"start": 417,
"end": 437,
"text": "(Baker et al., 1998)",
"ref_id": "BIBREF0"
},
{
"start": 449,
"end": 470,
"text": "(Palmer et al., 2005)",
"ref_id": "BIBREF27"
},
{
"start": 485,
"end": 500,
"text": "(Schuler, 2005)",
"ref_id": "BIBREF32"
},
{
"start": 534,
"end": 553,
"text": "(Hovy et al., 2006;",
"ref_id": "BIBREF17"
},
{
"start": 554,
"end": 567,
"text": "Palmer, 2009;",
"ref_id": "BIBREF26"
},
{
"start": 568,
"end": 591,
"text": "Banarescu et al., 2012)",
"ref_id": "BIBREF1"
},
{
"start": 594,
"end": 595,
"text": "1",
"ref_id": null
},
{
"start": 1123,
"end": 1135,
"text": "(Dowty, 1989",
"ref_id": "BIBREF11"
},
{
"start": 1139,
"end": 1140,
"text": "2",
"ref_id": null
},
{
"start": 1141,
"end": 1153,
"text": "Dowty (1991)",
"ref_id": "BIBREF10"
},
{
"start": 1313,
"end": 1325,
"text": "Dowty (1991)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Frame Semantics and Proto-Roles",
"sec_num": "2"
},
{
"text": "Word embeddings represent word meanings as elements of a (real-valued) vector space (Deerwester et al., 1990) . Mikolov et al. (2013) 's word2vec methods-skip-gram (SG) and continuous bag of words (CBOW)-repopularized these methods. We focus on SG, which predicts the context i around a word j, with learned representations c i and w j , respectively, as p(context",
"cite_spans": [
{
"start": 84,
"end": 109,
"text": "(Deerwester et al., 1990)",
"ref_id": "BIBREF9"
},
{
"start": 112,
"end": 133,
"text": "Mikolov et al. (2013)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Continuous Lexical Semantics",
"sec_num": "3"
},
{
"text": "i | word j) \u221d exp (c i w j ) = exp (1 (c i w j ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Continuous Lexical Semantics",
"sec_num": "3"
},
{
"text": ", where is the Hadamard (pointwise) product. Traditionally, the context words i are those words within a small window of j and are trained with negative sampling (Goldberg and Levy, 2014) .",
"cite_spans": [
{
"start": 162,
"end": 187,
"text": "(Goldberg and Levy, 2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Continuous Lexical Semantics",
"sec_num": "3"
},
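The SG scoring just described, p(context i | word j) ∝ exp(1 · (c_i ⊙ w_j)), can be sketched numerically. This is an illustrative toy (the matrix names C and W, vocabulary size, and dimension are invented, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 5, 4                      # toy vocabulary size and embedding dimension
C = rng.normal(size=(V, d))      # context embeddings c_i
W = rng.normal(size=(V, d))      # word embeddings w_j

def sg_context_probs(j):
    """Distribution over contexts i for word j: softmax of 1·(c_i ⊙ w_j)."""
    scores = (C * W[j]).sum(axis=1)   # Hadamard product summed, for every i
    scores -= scores.max()            # subtract max for numerical stability
    e = np.exp(scores)
    return e / e.sum()

probs = sg_context_probs(2)
```

In practice the normalizer over all contexts is avoided with negative sampling, as the text notes; the sketch just makes the Hadamard-product score concrete.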
{
"text": "Levy and Goldberg (2014b), and subsequently Keerthi et al. (2015) , showed how vectors learned under SG with the negative sampling are, under certain conditions, the factorization of (shifted) positive pointwise mutual information. Cotterell et al. (2017) showed that SG is a form of exponential family PCA that factorizes the matrix of word/context cooccurrence counts (rather than shifted positive PMI values). With this interpretation, they generalize SG from matrix to tensor factorization, and provide a theoretical basis for modeling higher-order SG (or additional context, such as morphological features of words) within a word embeddings framework.",
"cite_spans": [
{
"start": 44,
"end": 65,
"text": "Keerthi et al. (2015)",
"ref_id": "BIBREF19"
},
{
"start": 232,
"end": 255,
"text": "Cotterell et al. (2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram as Matrix Factorization",
"sec_num": "3.1"
},
{
"text": "Specifically, Cotterell et al. recast higher-order SG as maximizing the log-likelihood",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram as Matrix Factorization",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "ijk X ijk log p(context i | word j, feature k) (1) = ijk X ijk log exp (1 (c i w j a k )) i exp (1 (c i w j a k )) ,",
"eq_num": "(2)"
}
],
"section": "Skip-Gram as Matrix Factorization",
"sec_num": "3.1"
},
{
"text": "where X ijk is a cooccurrence count 3-tensor of words j, surrounding contexts i, and features k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram as Matrix Factorization",
"sec_num": "3.1"
},
{
"text": "When factorizing an n-dimensional tensor to include an arbitrary number of L annotations, we replace feature k in Equation (1) and a k in Equation (2) with each annotation type l and vector \u03b1 l included.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram as n-Tensor Factorization",
"sec_num": "3.2"
},
{
"text": "X i,j,k becomes X i,j,l 1 ,...l L ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram as n-Tensor Factorization",
"sec_num": "3.2"
},
{
"text": "representing the number of times word j appeared in context i with features l 1 through l L . We maximize",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram as n-Tensor Factorization",
"sec_num": "3.2"
},
{
"text": "i,j,l 1 ,...,l L X i,j,l 1 ,...,l L log \u03b2 i,j,l 1 ,...,l L \u03b2 i,j,l 1 ,...,l L \u221d exp (1 (c i w j \u03b1 l 1 \u2022 \u2022 \u2022 \u03b1 l L )) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Skip-Gram as n-Tensor Factorization",
"sec_num": "3.2"
},
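The n-tensor objective above scores each cell by the Hadamard product of one vector per mode. A minimal sketch of the unnormalized log score, with invented table names (C, W, A) and toy sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
C = rng.normal(size=(6, d))                       # context vectors c_i
W = rng.normal(size=(6, d))                       # word vectors w_j
A = [rng.normal(size=(3, d)) for _ in range(2)]   # one alpha table per annotation type

def log_beta(i, j, labels):
    """Unnormalized log beta: 1·(c_i ⊙ w_j ⊙ α_{l1} ⊙ ... ⊙ α_{lL})."""
    prod = C[i] * W[j]
    for table, l in zip(A, labels):
        prod = prod * table[l]    # Hadamard product with each annotation vector
    return prod.sum()             # inner product with the all-ones vector

score = log_beta(0, 1, [2, 0])
```

Adding an annotation type only multiplies in one more vector, which is why the factorization extends to any number of tensor modes.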
{
"text": "Our end goal is to use multiple kinds of automatically obtained, \"in-the-wild\" frame se-mantic parses in order to improve the semantic content-specifically SPR-type informationwithin learned lexical embeddings. We utilize majority portions of the Concretely Annotated New York Times and Wikipedia corpora from Ferraro et al. (2014) . These have been annotated with three frame semantic parses: FrameNet from Das et al. 2010, and both FrameNet and PropBank from Wolfe et al. (2016) . In total, we use nearly five million frame-annotated documents.",
"cite_spans": [
{
"start": 310,
"end": 331,
"text": "Ferraro et al. (2014)",
"ref_id": "BIBREF12"
},
{
"start": 461,
"end": 480,
"text": "Wolfe et al. (2016)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Extracting Counts The baseline extraction we consider is a standard sliding window: for each word w j seen \u2265 T times, extract all words w i two to the left and right of w j . These counts, forming a matrix, are then used within standard word2vec. We also follow Cotterell et al. (2017) and augment the above with the signed number of tokens separating w i and w j , e.g., recording that w i appeared two to the left of w j ; these counts form a 3-tensor.",
"cite_spans": [
{
"start": 262,
"end": 285,
"text": "Cotterell et al. (2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
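The baseline window extraction, and its signed-offset 3-tensor variant, can be sketched as follows. The function name and the toy sentence are illustrative; the occurrence threshold T is omitted for brevity:

```python
from collections import Counter

def window_counts(tokens, window=2, signed=False):
    """Count surrounding words within `window` tokens of each target word.

    signed=False gives (target, context) matrix counts;
    signed=True adds the signed offset, giving 3-tensor counts.
    """
    counts = Counter()
    for j, w in enumerate(tokens):
        for off in range(-window, window + 1):
            i = j + off
            if off == 0 or i < 0 or i >= len(tokens):
                continue
            key = (w, tokens[i], off) if signed else (w, tokens[i])
            counts[key] += 1
    return counts

toks = "she said bill would try the same tactic again".split()
m = window_counts(toks)                  # matrix-style counts
t = window_counts(toks, signed=True)     # 3-tensor-style counts
```

The frame-based extraction replaces the positional window with (trigger, role filler) pairs plus frame/role labels, yielding the higher-order tensor described above.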
{
"text": "To turn semantic parses into tensor counts, we first identify relevant information from the parses. We consider all parses that are triggered by the target word w j (seen \u2265 T times) and that have at least one role filled by some word in the sentence. We organize the extraction around roles and what fills them. We extract every word w r that fills all possible triggered frames; each of those frame and role labels; and the distance between filler w r and trigger w j . This process yields a 9-tensor X. 3 Although we always treat the trigger as the \"original\" word (e.g., word j, with vector w j ), later we consider (1) what to include from X, (2) what to predict (what to treat as the \"context\" word i), and (3) what to treat as auxiliary features. Data Discussion The baseline extraction methods result in roughly symmetric target and surrounding word counts. This is not the case for the frame extraction. Our target words must trigger some semantic parse, so our target words are actually target triggers. However, the surrounding context words are those words that fill semantic roles. As shown in Table 1 , there are an order-of-magnitude fewer triggers than target words, but up to an order-of-magnitude more surrounding words. Implementation We generalize Levy and Goldberg (2014a)'s and Cotterell et al. (2017) to enable any arbitrary dimensional tensor factorization, as described in \u00a73.2. We learn 100dimensional embeddings for words that appear at least 100 times from 15 negative samples. 4 The implementation is available at https://github. com/fmof/tensor-factorization.",
"cite_spans": [
{
"start": 1299,
"end": 1322,
"text": "Cotterell et al. (2017)",
"ref_id": "BIBREF4"
},
{
"start": 1505,
"end": 1506,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1106,
"end": 1113,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Metric We evaluate our learned (trigger) embeddings w via QVEC (Tsvetkov et al., 2015) . QVEC uses canonical correlation analysis to measure the Pearson correlation between w and a collection of oracle lexical vectors o. These oracle vectors are derived from a human-annotated resource. For QVEC, higher is better: a higher score indicates w more closely correlates (positively) with o.",
"cite_spans": [
{
"start": 63,
"end": 86,
"text": "(Tsvetkov et al., 2015)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
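As a rough intuition for what QVEC measures, the sketch below aligns each oracle dimension with its best-correlated embedding dimension and sums the Pearson correlations. This is a simplification for illustration only, not Tsvetkov et al. (2015)'s implementation (the paper's variant uses canonical correlation analysis):

```python
import numpy as np

def qvec_like(W, O):
    """W: (n_words, d) learned vectors; O: (n_words, k) oracle vectors.

    Returns the sum over oracle dimensions of the best Pearson
    correlation with any embedding dimension; higher is better.
    """
    Wc = W - W.mean(axis=0)
    Oc = O - O.mean(axis=0)
    Wn = Wc / np.linalg.norm(Wc, axis=0)   # unit-norm centered columns
    On = Oc / np.linalg.norm(Oc, axis=0)
    corr = Wn.T @ On                       # (d, k) Pearson correlations
    return corr.max(axis=0).sum()          # best alignment per oracle column

rng = np.random.default_rng(2)
W = rng.normal(size=(50, 5))
score = qvec_like(W, W[:, :3])   # oracle columns copied from W: 3 perfect matches
```

Since the oracle here is an exact copy of three embedding columns, each contributes a correlation of 1.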
{
"text": "Evaluating Semantic Content with SPR Motivated by Dowty (1991) 's proto-role theory, Reisinger et al. (2015) , with a subsequent expansion by White et al. (2016), annotated thousands of predicate-argument pairs (v, a) with (boolean) applicability and (ordinal) likelihoods of wellmotivated semantic properties applying to/being true of a. 5 These likelihood judgments, under the SPR framework, are converted from a fivepoint Likert scale to a 1-5 interval scale. Because the predicate-argument pairs were extracted from previously annotated dependency trees, we link each property with the dependency relation joining v and a when forming the oracle vectors; each component of an oracle vector o v is the unitynormalized sum of likelihood judgments for joint property and grammatical relation, using the interval responses when the property is applicable and discarding non-applicable properties, i.e. treating the response as 0. Thus, the combined 20 properties of Reisinger et al. (2015) and White et al. (2016)-together with the four basic grammatical",
"cite_spans": [
{
"start": 50,
"end": 62,
"text": "Dowty (1991)",
"ref_id": "BIBREF10"
},
{
"start": 85,
"end": 108,
"text": "Reisinger et al. (2015)",
"ref_id": "BIBREF30"
},
{
"start": 966,
"end": 989,
"text": "Reisinger et al. (2015)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We use the training portion of http: //decomp.net/wp-content/uploads/2015/08/ UniversalDecompositionalSemantics.tar.gz. Cotterell et al. (2017) . Each row represents an ablation model: sep means the prediction relies on the token separation distance between the frame and role filler, fn-frame means the prediction uses FrameNet frames, fn-role means the prediction uses FrameNet roles, and filler means the prediction uses the tokens filling the frame role. Read from top to bottom, additional contextual features are denoted with a +. Note when filler is used, we only predict PropBank roles.",
"cite_spans": [
{
"start": 120,
"end": 143,
"text": "Cotterell et al. (2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "relations nsubj, dobj, iobj and nsubjpass-result in 80-dimensional oracle vectors. 6 Predict Fillers or Roles? Since SPR judgments are between predicates and arguments, we predict the words filling the roles, and treat all other frame information as auxiliary features. SPR annotations were originally based off of (gold-standard) Prop-Bank annotations, so we also train a model to predict PropBank frames and roles, thereby treating role-filling text and all other frame information as auxiliary features. In early experiments, we found it beneficial to treat the FrameNet annotations additively and not distinguish one system's output from another. Treating the annotations additively serves as a type of collapsing operation. Although X started as a 9-tensor, we only consider up to 6-tensors: trigger, role filler, token separation between the trigger and filler, PropBank frame and role, FrameNet frame, and FrameNet role. Results Fig. 2 shows the overall percent change for SPR-QVEC from the filler and role prediction models, on newswire ( Fig. 2a) and Wikipedia (Fig. 2b) , across different ablation models. We indicate additional contextual features being used with a +: sep uses the token separation distance between the frame and role filler, fn-frame uses FrameNet frames, fn-role uses FrameNet roles, filler uses the tokens filling the frame role, and none indicates no additional information is used when predicting. The 0 line represents a plain word2vec baseline and the dashed line represents the 3-tensor baseline of Cotterell et al. (2017) . Both of these baselines are windowed: they are restricted to a local context and cannot take advantage of frames or any lexical signal that can be derived from frames.",
"cite_spans": [
{
"start": 83,
"end": 84,
"text": "6",
"ref_id": null
},
{
"start": 1535,
"end": 1558,
"text": "Cotterell et al. (2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 928,
"end": 942,
"text": "Results Fig. 2",
"ref_id": "FIGREF1"
},
{
"start": 1047,
"end": 1055,
"text": "Fig. 2a)",
"ref_id": "FIGREF1"
},
{
"start": 1070,
"end": 1079,
"text": "(Fig. 2b)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
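The oracle-vector construction described above (unity-normalized sums of applicable likelihood judgments, keyed by property and grammatical relation) can be sketched as follows. The property names and judgment values are invented for illustration:

```python
from collections import defaultdict

def oracle_vector(judgments):
    """judgments: iterable of (property, relation, applicable, likelihood 1-5).

    Non-applicable properties contribute 0; the result is normalized
    so its components sum to one.
    """
    vec = defaultdict(float)
    for prop, rel, applicable, likelihood in judgments:
        vec[(prop, rel)] += likelihood if applicable else 0.0
    total = sum(vec.values())
    if total > 0:
        vec = {k: v / total for k, v in vec.items()}   # unity-normalize
    return dict(vec)

o = oracle_vector([
    ("volition", "nsubj", True, 5),
    ("volition", "nsubj", True, 3),
    ("change_of_state", "dobj", False, 4),   # discarded: not applicable
])
```

With 20 properties crossed with the four grammatical relations, this keying yields the 80-dimensional oracle vectors used in the evaluation.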
{
"text": "Overall, we notice that we obtain large improvements from models trained on lexical signals that have been derived from frame output (sep and none), even if the model itself does not incorporate any frame labels. The embeddings that predict the role filling lexical items (the green triangles) correlate higher with SPR oracles than the embeddings that predict PropBank frames and roles (red circles). Examining Fig. 2a , we see that both model types outperform both the word2vec and Cotterell et al. (2017) baselines in nearly all model configurations and ablations. We see the highest improvement when predicting role fillers given the frame trigger and the number of tokens separating the two (the green triangles in the sep rows).",
"cite_spans": [
{
"start": 484,
"end": 507,
"text": "Cotterell et al. (2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 412,
"end": 419,
"text": "Fig. 2a",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Comparing Fig. 2a to Fig. 2b , we see newswire is more amenable to predicting PropBank frames and roles. We posit this is a type of out-ofdomain error, as the PropBank parser was trained on newswire. We also find that newswire is overall more amenable to incorporating limited framebased features, particularly when predicting Prop-Bank using lexical role fillers as part of the con- textual features. We hypothesize this is due to the significantly increased vocabulary size of the Wikipedia role fillers (c.f., Tab. 1). Note, however, that by using all available schema information when predicting PropBank, we are able to compensate for the increased vocabulary.",
"cite_spans": [],
"ref_spans": [
{
"start": 10,
"end": 17,
"text": "Fig. 2a",
"ref_id": "FIGREF1"
},
{
"start": 21,
"end": 28,
"text": "Fig. 2b",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "In Fig. 3 we display the ten nearest neighbors for three randomly sampled trigger words according to two of the highest performing newswire models. They each condition on the trigger and the role filler/trigger separation; these correspond to the sep rows of Fig. 2a . The left column of Fig. 3 predicts the role filler, while the right column predicts PropBank annotations. We see that while both models learn inflectional relations, this quality is prominent in the model that predicts Prop-Bank information while the model predicting role fillers learns more non-inflectional paraphrases.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 9,
"text": "Fig. 3",
"ref_id": "FIGREF2"
},
{
"start": 259,
"end": 266,
"text": "Fig. 2a",
"ref_id": "FIGREF1"
},
{
"start": 288,
"end": 294,
"text": "Fig. 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "The recent popularity of word embeddings have inspired others to consider leveraging linguistic annotations and resources to learn embeddings. Both Cotterell et al. (2017) and Levy and Goldberg (2014a) incorporate additional syntactic and morphological information in their word embeddings. Rothe and Sch\u00fctze (2015)'s use lexical resource entries, such as WordNet synsets, to improve pre-computed word embeddings. Through generalized CCA, Rastogi et al. (2015) incorporate paraphrased FrameNet training data. On the applied side, Wang and Yang (2015) used frame embeddings-produced by training word2vec on tweet-derived semantic frame (names)-as additional features in downstream prediction. Teichert et al. (2017) similarly explored the relationship between semantic frames and thematic proto-roles. They proposed using a Conditional Random Field (Lafferty et al., 2001) to jointly and conditionally model SPR and SRL. Teichert et al. (2017) demonstrated slight improvements in jointly and conditionally predicting PropBank (Bonial et al., 2013) 's semantic role labels and Reisinger et al. (2015) 's proto-role labels.",
"cite_spans": [
{
"start": 148,
"end": 171,
"text": "Cotterell et al. (2017)",
"ref_id": "BIBREF4"
},
{
"start": 176,
"end": 201,
"text": "Levy and Goldberg (2014a)",
"ref_id": "BIBREF22"
},
{
"start": 439,
"end": 460,
"text": "Rastogi et al. (2015)",
"ref_id": "BIBREF29"
},
{
"start": 692,
"end": 714,
"text": "Teichert et al. (2017)",
"ref_id": "BIBREF33"
},
{
"start": 848,
"end": 871,
"text": "(Lafferty et al., 2001)",
"ref_id": "BIBREF21"
},
{
"start": 920,
"end": 942,
"text": "Teichert et al. (2017)",
"ref_id": "BIBREF33"
},
{
"start": 1025,
"end": 1046,
"text": "(Bonial et al., 2013)",
"ref_id": "BIBREF2"
},
{
"start": 1075,
"end": 1098,
"text": "Reisinger et al. (2015)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "We presented a way to learn embeddings enriched with multiple, automatically obtained frames from large, disparate corpora. We also presented a QVEC evaluation for semantic proto-roles. As demonstrated by our experiments, our extension of Cotterell et al. (2017) 's tensor factorization enriches word embeddings by including syntacticsemantic information not often captured, resulting in consistently higher SPR-based correlations. The implementation is available at https: //github.com/fmof/tensor-factorization.",
"cite_spans": [
{
"start": 239,
"end": 262,
"text": "Cotterell et al. (2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "See Petruck and de Melo (2014) for detailed descriptions on frame semantics' contributions to applied NLP tasks.2 Thematic role theory is rich, and beyond this paper's scope(Whitehead, 1920;Davidson, 1967;Cresswell, 1973;Kamp, 1979;Carlson, 1984).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Each record consists of the trigger, a role filler, the number of words between the trigger and filler, and the relevant frame and roles from the three semantic parsers. Being automatically obtained, the parses are overlapping and incomplete; to properly form X, one can implicitly include special NO FRAME and NO ROLE labels as needed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In preliminary experiments, this occurrence threshold did not change the overall conclusions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The full cooccurrence among the properties and relations is relatively sparse. Nearly two thirds of all non-zero oracle components are comprised of just fourteen properties, and only the nsubj and dobj relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by Johns Hopkins University, the Human Language Technology Center of Excellence (HLTCOE), DARPA DEFT, and DARPA LORELEI. We would also like to thank three anonymous reviewers for their feedback. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA or the U.S. Government.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The berkeley framenet project",
"authors": [
{
"first": "Collin",
"middle": [
"F"
],
"last": "Baker",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"J"
],
"last": "Fillmore",
"suffix": ""
},
{
"first": "John",
"middle": [
"B"
],
"last": "Lowe",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--90",
"other_ids": {
"DOI": [
"10.3115/980845.980860"
]
},
"num": null,
"urls": [],
"raw_text": "Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The berkeley framenet project. In Proceed- ings of the 36th Annual Meeting of the Associa- tion for Computational Linguistics and 17th Inter- national Conference on Computational Linguistics -Volume 1. Association for Computational Linguis- tics, Stroudsburg, PA, USA, ACL '98, pages 86-90. https://doi.org/10.3115/980845.980860.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Abstract meaning representation (amr) 1.0 specification. In Parsing on Freebase from Question-Answer Pairs",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Banarescu",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Bonial",
"suffix": ""
},
{
"first": "Shu",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Madalina",
"middle": [],
"last": "Georgescu",
"suffix": ""
},
{
"first": "Kira",
"middle": [],
"last": "Griffitt",
"suffix": ""
},
{
"first": "Ulf",
"middle": [],
"last": "Hermjakob",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1533--1544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2012. Abstract meaning representation (amr) 1.0 specification. In Parsing on Freebase from Question-Answer Pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Lan- guage Processing. Seattle: ACL. pages 1533-1544.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Renewing and revising semlink",
"authors": [
{
"first": "Claire",
"middle": [],
"last": "Bonial",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Stowe",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2nd Workshop on Linked Data in Linguistics (LDL-2013): Representing and linking lexicons, terminologies and other language data",
"volume": "",
"issue": "",
"pages": "9--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claire Bonial, Kevin Stowe, and Martha Palmer. 2013. Renewing and revising semlink. In Proceedings of the 2nd Workshop on Linked Data in Linguistics (LDL-2013): Representing and linking lexicons, ter- minologies and other language data. Association for Computational Linguistics, Pisa, Italy, pages 9 -17. http://www.aclweb.org/anthology/W13-5503.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Thematic roles and their role in semantic interpretation",
"authors": [
{
"first": "Greg",
"middle": [
"N"
],
"last": "Carlson",
"suffix": ""
}
],
"year": 1984,
"venue": "Linguistics",
"volume": "22",
"issue": "3",
"pages": "259--280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greg N Carlson. 1984. Thematic roles and their role in semantic interpretation. Linguistics 22(3):259-280.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Explaining and generalizing skip-gram through exponential family principal component analysis",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Poliak",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell, Adam Poliak, Benjamin Van Durme, and Jason Eisner. 2017. Explaining and general- izing skip-gram through exponential family princi- pal component analysis. In Proceedings of the 15th Conference of the European Chapter of the Associa- tion for Computational Linguistics. Valencia, Spain.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Logics and languages. London: Methuen",
"authors": [
{
"first": "Maxwell John",
"middle": [],
"last": "Cresswell",
"suffix": ""
}
],
"year": 1973,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maxwell John Cresswell. 1973. Logics and languages. London: Methuen [Distributed in the U.S.A. By Harper & Row].",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Probabilistic frame-semantic parsing",
"authors": [
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Desai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2010,
"venue": "Human language technologies: The 2010 annual conference of the North American chapter of the association for computational linguistics",
"volume": "",
"issue": "",
"pages": "948--956",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dipanjan Das, Nathan Schneider, Desai Chen, and Noah A Smith. 2010. Probabilistic frame-semantic parsing. In Human language technologies: The 2010 annual conference of the North American chapter of the association for computational lin- guistics. Association for Computational Linguistics, pages 948-956.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Gaussian lda for topic models with word embeddings",
"authors": [
{
"first": "Rajarshi",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Manzil",
"middle": [],
"last": "Zaheer",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "795--804",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rajarshi Das, Manzil Zaheer, and Chris Dyer. 2015. Gaussian lda for topic models with word em- beddings. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Con- ference on Natural Language Processing (Vol- ume 1: Long Papers). Association for Computa- tional Linguistics, Beijing, China, pages 795-804. http://www.aclweb.org/anthology/P15-1077.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The logical form of action sentences",
"authors": [
{
"first": "Donald",
"middle": [],
"last": "Davidson",
"suffix": ""
}
],
"year": 1967,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Donald Davidson. 1967. The logical form of action sentences. In Nicholas Rescher, editor, The Logic of Decision and Action, University of Pittsburgh Press.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Indexing by latent semantic analysis",
"authors": [
{
"first": "Scott",
"middle": [],
"last": "Deerwester",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
},
{
"first": "George",
"middle": [
"W"
],
"last": "Furnas",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"K"
],
"last": "Landauer",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Harshman",
"suffix": ""
}
],
"year": 1990,
"venue": "JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE",
"volume": "41",
"issue": "6",
"pages": "391--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE 41(6):391-407.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Thematic proto-roles and argument selection",
"authors": [
{
"first": "David",
"middle": [],
"last": "Dowty",
"suffix": ""
}
],
"year": 1991,
"venue": "Language",
"volume": "67",
"issue": "3",
"pages": "547--619",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Dowty. 1991. Thematic proto-roles and argu- ment selection. Language 67(3):547-619.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "On the semantic content of the notion of thematic role",
"authors": [
{
"first": "David",
"middle": [
"R"
],
"last": "Dowty",
"suffix": ""
}
],
"year": 1989,
"venue": "Properties, types and meaning",
"volume": "",
"issue": "",
"pages": "69--129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David R Dowty. 1989. On the semantic content of the notion of thematic role. In Properties, types and meaning, Springer, pages 69-129.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Concretely Annotated Corpora",
"authors": [
{
"first": "Francis",
"middle": [],
"last": "Ferraro",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"R"
],
"last": "Gormley",
"suffix": ""
},
{
"first": "Travis",
"middle": [],
"last": "Wolfe",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Harman",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2014,
"venue": "4th Workshop on Automated Knowledge Base Construction (AKBC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francis Ferraro, Max Thomas, Matthew R. Gormley, Travis Wolfe, Craig Harman, and Benjamin Van Durme. 2014. Concretely Annotated Corpora. In 4th Workshop on Automated Knowledge Base Con- struction (AKBC).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Frame semantics. Linguistics in the morning calm pages",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Fillmore",
"suffix": ""
}
],
"year": 1982,
"venue": "",
"volume": "",
"issue": "",
"pages": "111--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Fillmore. 1982. Frame semantics. Linguistics in the morning calm pages 111-137.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Frame semantics and the nature of language*",
"authors": [
{
"first": "Charles",
"middle": [
"J"
],
"last": "Fillmore",
"suffix": ""
}
],
"year": 1976,
"venue": "Annals of the New York Academy of Sciences",
"volume": "280",
"issue": "1",
"pages": "20--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles J Fillmore. 1976. Frame semantics and the na- ture of language*. Annals of the New York Academy of Sciences 280(1):20-32.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "word2vec explained: Deriving Mikolov et al.'s negativesampling word-embedding method",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1402.3722"
]
},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg and Omer Levy. 2014. word2vec explained: Deriving Mikolov et al.'s negative- sampling word-embedding method. arXiv preprint arXiv:1402.3722 .",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Ontonotes: the 90% solution",
"authors": [
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the human language technology conference of the NAACL, Companion Volume: Short Papers. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "57--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: the 90% solution. In Proceedings of the human lan- guage technology conference of the NAACL, Com- panion Volume: Short Papers. Association for Com- putational Linguistics, pages 57-60.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Events, instants and temporal reference",
"authors": [
{
"first": "Hans",
"middle": [],
"last": "Kamp",
"suffix": ""
}
],
"year": 1979,
"venue": "Semantics from different points of view",
"volume": "",
"issue": "",
"pages": "376--418",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hans Kamp. 1979. Events, instants and temporal ref- erence. In Semantics from different points of view, Springer, pages 376-418.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Towards a better understanding of predict and count models",
"authors": [
{
"first": "S",
"middle": [
"Sathiya"
],
"last": "Keerthi",
"suffix": ""
},
{
"first": "Tobias",
"middle": [],
"last": "Schnabel",
"suffix": ""
},
{
"first": "Rajiv",
"middle": [],
"last": "Khanna",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.0204"
]
},
"num": null,
"urls": [],
"raw_text": "S. Sathiya Keerthi, Tobias Schnabel, and Rajiv Khanna. 2015. Towards a better understanding of predict and count models. arXiv preprint arXiv:1511.0204 .",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1746--1751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP). Association for Com- putational Linguistics, Doha, Qatar, pages 1746- 1751. http://www.aclweb.org/anthology/D14-1181.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [
"C N"
],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Eighteenth International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and label- ing sequence data. In Proceedings of the Eigh- teenth International Conference on Machine Learn- ing. Morgan Kaufmann Publishers Inc., San Fran- cisco, CA, USA, ICML '01, pages 282-289.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Dependencybased word embeddings",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "302--308",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014a. Dependency- based word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Lin- guistics, Baltimore, Maryland, pages 302-308. http://www.aclweb.org/anthology/P14-2050.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Neural word embedding as implicit matrix factorization",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "2177--2185",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014b. Neural word embedding as implicit matrix factorization. In Ad- vances in neural information processing systems. pages 2177-2185.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 .",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A framework for representing knowledge",
"authors": [
{
"first": "Marvin",
"middle": [],
"last": "Minsky",
"suffix": ""
}
],
"year": 1974,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marvin Minsky. 1974. A framework for representing knowledge. MIT-AI Laboratory Memo 306.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Semlink: Linking propbank, verbnet and framenet",
"authors": [
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Generative Lexicon Conference. GenLex-09",
"volume": "",
"issue": "",
"pages": "9--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martha Palmer. 2009. Semlink: Linking propbank, verbnet and framenet. In Proceedings of the Gen- erative Lexicon Conference. GenLex-09, 2009 Pisa, Italy, pages 9-15.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "The proposition bank: An annotated corpus of semantic roles",
"authors": [
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Kingsbury",
"suffix": ""
}
],
"year": 2005,
"venue": "Computational linguistics",
"volume": "31",
"issue": "1",
"pages": "71--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational linguistics 31(1):71- 106.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Proceedings of Frame Semantics in NLP: A Workshop in Honor of Chuck Fillmore",
"authors": [
{
"first": "Miriam",
"middle": [
"R",
"L"
],
"last": "Petruck",
"suffix": ""
},
{
"first": "Gerard",
"middle": [],
"last": "de Melo",
"suffix": ""
}
],
"year": 1929,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miriam R. L. Petruck and Gerard de Melo, ed- itors. 2014. Proceedings of Frame Seman- tics in NLP: A Workshop in Honor of Chuck Fillmore (1929-2014). Association for Com- putational Linguistics, Baltimore, MD, USA. http://www.aclweb.org/anthology/W14-30.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Multiview LSA: Representation Learning via Generalized CCA",
"authors": [
{
"first": "Pushpendre",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Raman",
"middle": [],
"last": "Arora",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "556--566",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pushpendre Rastogi, Benjamin Van Durme, and Ra- man Arora. 2015. Multiview LSA: Representation Learning via Generalized CCA. In Proceedings of the 2015 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Denver, Colorado, pages 556-566. http://www.aclweb.org/anthology/N15- 1058.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Semantic proto-roles",
"authors": [
{
"first": "Drew",
"middle": [],
"last": "Reisinger",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Ferraro",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Harman",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Rawlins",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics (TACL)",
"volume": "3",
"issue": "",
"pages": "475--488",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Drew Reisinger, Rachel Rudinger, Francis Ferraro, Craig Harman, Kyle Rawlins, and Benjamin Van Durme. 2015. Semantic proto-roles. Transactions of the Association for Computational Linguistics (TACL) 3:475-488.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Autoextend: Extending word embeddings to embeddings for synsets and lexemes",
"authors": [
{
"first": "Sascha",
"middle": [],
"last": "Rothe",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1793--1803",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sascha Rothe and Hinrich Sch\u00fctze. 2015. Autoex- tend: Extending word embeddings to embeddings for synsets and lexemes. In Proceedings of the 53rd Annual Meeting of the Association for Compu- tational Linguistics and the 7th International Joint Conference on Natural Language Processing (Vol- ume 1: Long Papers). Association for Compu- tational Linguistics, Beijing, China, pages 1793- 1803. http://www.aclweb.org/anthology/P15-1173.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Verbnet: A broadcoverage, comprehensive verb lexicon",
"authors": [
{
"first": "Karin Kipper",
"middle": [],
"last": "Schuler",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karin Kipper Schuler. 2005. Verbnet: A broad- coverage, comprehensive verb lexicon .",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Semantic proto-role labeling",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Teichert",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Poliak",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Gormley",
"suffix": ""
}
],
"year": 2017,
"venue": "AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Teichert, Adam Poliak, Benjamin Van Durme, and Matthew Gormley. 2017. Semantic proto-role labeling. In AAAI Conference on Artificial Intelli- gence.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Evaluation of word vector representations by subspace alignment",
"authors": [
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2049--2054",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yulia Tsvetkov, Manaal Faruqui, Wang Ling, Guil- laume Lample, and Chris Dyer. 2015. Evalua- tion of word vector representations by subspace alignment. In Proceedings of the 2015 Con- ference on Empirical Methods in Natural Lan- guage Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 2049-2054. http://aclweb.org/anthology/D15-1243.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "From frequency to meaning: Vector space models of semantics",
"authors": [
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of artificial intelligence research",
"volume": "37",
"issue": "",
"pages": "141--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D Turney and Patrick Pantel. 2010. From fre- quency to meaning: Vector space models of se- mantics. Journal of artificial intelligence research 37:141-188.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "That's so annoying!!!: A lexical and frame-semantic embedding based data augmentation approach to automatic categorization of annoying behaviors using #petpeeve tweets",
"authors": [
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Yang Wang and Diyi Yang. 2015. That's so annoying!!!: A lexical and frame-semantic em- bedding based data augmentation approach to au- tomatic categorization of annoying behaviors us- ing #petpeeve tweets. In Proceedings of the 2015",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Conference on Empirical Methods in Natural Language Processing",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "2557--2563",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conference on Empirical Methods in Natural Lan- guage Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 2557-2563. http://aclweb.org/anthology/D15-1306.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Universal decompositional semantics on universal dependencies",
"authors": [
{
"first": "Aaron",
"middle": [],
"last": "Steven White",
"suffix": ""
},
{
"first": "Drew",
"middle": [],
"last": "Reisinger",
"suffix": ""
},
{
"first": "Keisuke",
"middle": [],
"last": "Sakaguchi",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Vieira",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Rawlins",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1713--1723",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aaron Steven White, Drew Reisinger, Keisuke Sak- aguchi, Tim Vieira, Sheng Zhang, Rachel Rudinger, Kyle Rawlins, and Benjamin Van Durme. 2016. Universal decompositional semantics on univer- sal dependencies. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computa- tional Linguistics, Austin, Texas, pages 1713-1723. https://aclweb.org/anthology/D16-1177.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "The concept of nature: the Tarner lectures delivered in Trinity College",
"authors": [
{
"first": "Alfred North",
"middle": [],
"last": "Whitehead",
"suffix": ""
}
],
"year": 1919,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alfred North Whitehead. 1920. The concept of na- ture: the Tarner lectures delivered in Trinity College, November 1919. Kessinger Publishing.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "A study of imitation learning methods for semantic role labeling",
"authors": [
{
"first": "Travis",
"middle": [],
"last": "Wolfe",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Workshop on Structured Prediction for NLP",
"volume": "",
"issue": "",
"pages": "44--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Travis Wolfe, Mark Dredze, and Benjamin Van Durme. 2016. A study of imitation learning methods for se- mantic role labeling. In Proceedings of the Work- shop on Structured Prediction for NLP. Association for Computational Linguistics, Austin, TX, pages 44-53. http://aclweb.org/anthology/W16-5905.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "(a) Changes in SPR-QVEC for Annotated NYT. (b) Changes in SPR-QVEC for Wikipedia.",
"num": null
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "Effect of frame-extracted tensor counts on our SPR-QVEC evaulation. Deltas are shown as relative percent changes vs. the word2vec baseline. The dashed line represents the 3-tensor word2vec method of",
"num": null
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"text": "K-Nearest Neighbors for three randomly sampled trigger words, from two newswire models.",
"num": null
},
"TABREF1": {
"text": "Vocabulary sizes, in thousands, extracted from Fer-",
"type_str": "table",
"num": null,
"content": "<table><tr><td>raro et al. (2014)'s data with both the standard sliding context</td></tr><tr><td>window approach ( \u00a73) and the frame-based approach ( \u00a74).</td></tr><tr><td>Upper numbers (Roman) are for newswire; lower numbers</td></tr><tr><td>(italics) are Wikipedia. For both corpora, 800 total FrameNet</td></tr><tr><td>frame types and 5100 PropBank frame types are extracted.</td></tr></table>",
"html": null
}
}
}
}