ACL-OCL / Base_JSON /prefixS /json /semspace /2021.semspace-1.6.json
{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:27:20.028626Z"
},
"title": "Conversational Negation using Worldly Context in Compositional Distributional Semantics",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Rodatz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Oxford",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Razin",
"middle": [
"A"
],
"last": "Shaikh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Oxford",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Lia",
"middle": [],
"last": "Yeh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Oxford",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose a framework to model an operational conversational negation by applying worldly context (prior knowledge) to logical negation in compositional distributional semantics. Given a word, our framework can create its negation that is similar to how humans perceive negation. The framework corrects logical negation to weight meanings closer in the entailment hierarchy more than meanings further apart. The proposed framework is flexible to accommodate different choices of logical negations, compositions, and worldly context generation. In particular, we propose and motivate a new logical negation using matrix inverse. We validate the sensibility of our conversational negation framework by performing experiments, leveraging density matrices to encode graded entailment information. We conclude that the combination of subtraction negation (\u00ac sub) and phaser in the basis of the negated word yields the highest Pearson correlation of 0.635 with human ratings.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose a framework to model an operational conversational negation by applying worldly context (prior knowledge) to logical negation in compositional distributional semantics. Given a word, our framework can create its negation that is similar to how humans perceive negation. The framework corrects logical negation to weight meanings closer in the entailment hierarchy more than meanings further apart. The proposed framework is flexible to accommodate different choices of logical negations, compositions, and worldly context generation. In particular, we propose and motivate a new logical negation using matrix inverse. We validate the sensibility of our conversational negation framework by performing experiments, leveraging density matrices to encode graded entailment information. We conclude that the combination of subtraction negation (\u00ac sub) and phaser in the basis of the negated word yields the highest Pearson correlation of 0.635 with human ratings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Negation is fundamental to every human language, marking a key difference from how other animals communicate (Horn, 1972) . It enables us to express denial, contradiction, and other uniquely human aspects of language. As humans, we know that negation has an operational interpretation: if we know the meaning of A, we can infer the meaning of not A, without needing to see or hear not A explicitly in any context.",
"cite_spans": [
{
"start": 109,
"end": 121,
"text": "(Horn, 1972)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Formalizing an operational description of how humans interpret negation in natural language is a challenge of significance to the fields of linguistics, epistemology, and psychology. Kruszewski et al. (2016) note that there is no straightforward negation operation that, when applied to the distributional semantics vector of a word, derives a negation of that word that captures our intuition. This work proposes and experimentally validates an operational framework for conversational negation in compositional distributional semantics.",
"cite_spans": [
{
"start": 183,
"end": 207,
"text": "Kruszewski et al. (2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the field of distributional semantics, there have been developments in capturing the purely logical form of negation. Widdows and Peters (2003) introduce the idea of computing negation by mapping a vector to its orthogonal subspace; Lewis (2020) analogously models logical negation for density matrices. However, logical negation alone is insufficient to express the nuances of negation in human language. Consider the sentences: a) This is not an apple; this is an orange. b) This is not an apple; this is a paper. Sentence a) is more plausible in real life than sentence b). However, since apples and oranges share a lot in common, their vector or density matrix encodings would most likely not be orthogonal. Consequently, such a logical negation of apple would more likely indicate a paper than an orange. Blunsom et al. (2013) propose that the encoding of a word should have a distinct \"domain\" and \"value\", and its negation should only affect the \"value\". In this way, not blue would still be in the domain of color. However, they do not provide any scalable way to generate such a representation of \"domain\" and \"value\" from a corpus. We argue that this domain need not be encoded in the vector or density matrix itself. Instead, we propose a method to generate what we call worldly context directly from the word and its relationships to other words, computed a priori using worldly knowledge.",
"cite_spans": [
{
"start": 121,
"end": 146,
"text": "Widdows and Peters (2003)",
"ref_id": "BIBREF27"
},
{
"start": 236,
"end": 248,
"text": "Lewis (2020)",
"ref_id": "BIBREF20"
},
{
"start": 821,
"end": 842,
"text": "Blunsom et al. (2013)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Furthermore, we want such conversational negation to generalize from words to sentences and to entire texts. DisCoCat (Coecke et al., 2010) provides a method to compose the meaning of words to get the meaning of sentences and DisCoCirc (Coecke, 2020) extends this to propagate knowledge throughout the text. Therefore, we propose our conversational negation in the DisCoCirc formalism, putting our framework in a rich expanse of grammatical types and sentence structures. Focusing on the conversational negation of single words, we leave the interaction of conversational negation with grammatical structures for future work.",
"cite_spans": [
{
"start": 118,
"end": 139,
"text": "(Coecke et al., 2010)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Section 2 introduces the necessary background. Section 3 discusses the logical negation using subtraction from the identity matrix from Lewis (2020) , and proposes and justifies a second, new form of logical negation using matrix inverse. Section 4 introduces methods for context creation based on worldly knowledge. Section 5 presents the general framework for performing conversational negation of a word by combining logical negation with worldly context. Section 6 experimentally verifies the proposed framework, comparing each combination of different logical negations, compositions, bases, and worldly context generation. We end our discussion with an overview of future work.",
"cite_spans": [
{
"start": 136,
"end": 148,
"text": "Lewis (2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2.1 Conversational negation Kruszewski et al. (2016) point out a long tradition in formal semantics, pragmatics and psycholinguistics which has argued that negation, in human conversation, is not simply a denial of information; it also indicates the truth of an alternative assertion. They call this alternative-licensing view of negation conversational negation.",
"cite_spans": [
{
"start": 28,
"end": 52,
"text": "Kruszewski et al. (2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Another view on negation states that the effect of negation is merely one of information denial (Evans et al., 1996) . However, Prado and Noveck (2006) explain that even under this view, the search for alternatives could happen as a secondary effort for interpreting negation in the sentence.",
"cite_spans": [
{
"start": 96,
"end": 116,
"text": "(Evans et al., 1996)",
"ref_id": "BIBREF13"
},
{
"start": 128,
"end": 151,
"text": "Prado and Noveck (2006)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "The likelihood of different alternatives to a negated word inherently admits a grading (Oaksford, 2002; Kruszewski et al., 2016) . For example, something that is not a car is more likely to be a bus than a pen. They argue that the most plausible alternatives are the ones that are applicable across many varied contexts; car can be replaced by bus in many contexts, but it requires an unusual context to sensibly replace car with pen. ",
"cite_spans": [
{
"start": 87,
"end": 103,
"text": "(Oaksford, 2002;",
"ref_id": "BIBREF21"
},
{
"start": 104,
"end": 128,
"text": "Kruszewski et al., 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Language comprehension depends on understanding the meaning of words as well as understanding how the words interact with each other in a sentence. While the former is an understanding of the definitions of words, the latter requires an understanding of grammar. Coecke et al. (2010) build on this intuition to propose DisCoCat, a compositional distributional model of meaning, making use of the diagrammatic calculus originally introduced for quantum computing (Abramsky and Coecke, 2004) . In Coecke (2020) , this model was extended to DisCoCirc which generalized DisCoCat from modeling individual sentences to entire texts. In DisCoCirc, the two sentences Alice is an elf.",
"cite_spans": [
{
"start": 263,
"end": 283,
"text": "Coecke et al. (2010)",
"ref_id": "BIBREF11"
},
{
"start": 462,
"end": 489,
"text": "(Abramsky and Coecke, 2004)",
"ref_id": "BIBREF0"
},
{
"start": 495,
"end": 508,
"text": "Coecke (2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional semantics and DisCoCirc",
"sec_num": "2.2"
},
{
"text": "Alice is old.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional semantics and DisCoCirc",
"sec_num": "2.2"
},
{
"text": "are viewed as two processes updating the state of Alice, about whom, at the beginning of the text, the reader knows nothing. Graphically this can be displayed as shown in Figure 1 . The wire labeled by Alice represents the knowledge we have about Alice at any point in time. It is first updated by the fact that she is an elf and subsequently updated by the fact that she is old. We use a black square to represent a general meaning-update operation, which can be one of a variety of operators we discuss in the next section. DisCoCirc allows for more grammatically complex sentence and text structures not investigated in this work. DisCoCirc allows for various ways of representing meaning such as vector spaces (Coecke et al., 2010; Grefenstette and Sadrzadeh, 2011) , conceptual spaces (Bolt et al., 2017) , and density matrices (Balkir et al., 2016; Lewis, 2019) . A density matrix is a complex matrix, which is equal to its own conjugate transpose (Hermitian) and has non-negative eigenvalues (positive semidefinite). They can be viewed as an extension of vector spaces to allow for encoding lexical entailment structure (see Section 2.4), a property for which they were selected as the model of meaning for this paper.",
"cite_spans": [
{
"start": 714,
"end": 735,
"text": "(Coecke et al., 2010;",
"ref_id": "BIBREF11"
},
{
"start": 736,
"end": 769,
"text": "Grefenstette and Sadrzadeh, 2011)",
"ref_id": "BIBREF15"
},
{
"start": 790,
"end": 809,
"text": "(Bolt et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 833,
"end": 854,
"text": "(Balkir et al., 2016;",
"ref_id": "BIBREF3"
},
{
"start": 855,
"end": 867,
"text": "Lewis, 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 171,
"end": 179,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Compositional semantics and DisCoCirc",
"sec_num": "2.2"
},
{
"text": "We present four compositions for meaning update:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositions for meaning update",
"sec_num": "2.3"
},
{
"text": "spider(A, B) := U_s (A \u2297 B) U_s^\u2020 (1), where U_s = \u2211_i |i\u27e9\u27e8ii| and {|i\u27e9}_i is B's eigenbasis; this is the non-linear AND of Coecke (2020). fuzz(A, B) := \u2211_i x_i P_i \u00b7 A \u00b7 P_i (2), where B = \u2211_i x_i P_i; introduced in Coecke and Meichanetzidis (2020). phaser(A, B) := B^{1/2} A B^{1/2} (3), where B = \u2211_i x_i^2 P_i and B^{1/2} = \u2211_i x_i P_i; introduced in Coecke and Meichanetzidis (2020) and called Kmult in Lewis (2020).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositions for meaning update",
"sec_num": "2.3"
},
{
"text": "Here A and B are density matrices, x_i is a real scalar between 0 and 1, the P_i are projectors, and the function dg sets all off-diagonal matrix elements to 0, giving a diagonal matrix. Of the many Compr variants (De las Cuevas et al., 2020), we only consider diag and mult (elementwise matrix multiplication, which is an instance of spider) as candidates for composition. All other variants are scalar multiples of one input, the identity wire, or a maximally mixed state; we therefore do not consider them, as they discard too much information about the inputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositions for meaning update",
"sec_num": "2.3"
},
{
"text": "For spider, fuzz, and phaser, choosing the basis of the composition determines the basis the resulting density matrix takes on, and its meaning is interpreted in (Coecke and Meichanetzidis, 2020).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositions for meaning update",
"sec_num": "2.3"
},
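As a sanity check on the compositions above, here is a minimal NumPy sketch under our reading of Equations 2 and 3 (the names fuzz, phaser, and mult follow the text; spider is represented only through its mult instance, and this is illustrative code, not code from the paper):

```python
import numpy as np

def fuzz(A, B):
    """fuzz(A, B) = sum_i x_i P_i A P_i, where B = sum_i x_i P_i (Eq. 2).
    The weights x_i and projectors P_i come from B's eigendecomposition."""
    x, V = np.linalg.eigh(B)
    out = np.zeros_like(A, dtype=complex)
    for xi, v in zip(x, V.T):
        P = np.outer(v, v.conj())      # rank-1 spectral projector of B
        out += xi * (P @ A @ P)
    return out

def phaser(A, B):
    """phaser(A, B) = B^(1/2) A B^(1/2) (Eq. 3)."""
    x, V = np.linalg.eigh(B)
    B_sqrt = V @ np.diag(np.sqrt(np.clip(x, 0.0, None))) @ V.conj().T
    return B_sqrt @ A @ B_sqrt

def mult(A, B):
    """Elementwise multiplication: the instance of spider considered here."""
    return A * B
```

When A and B commute (share an eigenbasis), all three compositions agree, which makes a quick consistency test.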
{
"text": "A word w A is a hyponym of w B if w A is a type of w B ; then, w B is a hypernym of w A . For example, dog is a hyponym of animal, and animal is a hypernym of dog. Where there is a meaning relation between two words, there exists an entailment relation between two sentences containing those words. Measures to quantify these relations ought to be graded, as one would expect some entailment relations to be weaker than others. Furthermore, such measures should be asymmetric (a bee is an insect, but an insect is not necessarily a bee) and pseudo-transitive (a t-shirt is a shirt, a shirt can be formal, but a t-shirt is usually not formal).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical entailment via hyponymies",
"sec_num": "2.4"
},
{
"text": "One of the limitations of the vector space model of NLP is that it does not admit a natural non-trivial graded entailment structure (Balkir et al., 2016; Coecke, 2020) . Bankova et al. (2019) utilize the richer setting of density matrices to define a measure called k-hyponymy, generalizing the L\u00f6wner order to have a grading for positive operators, satisfying the above three properties. Using compositional semantics, they further lift entailment between words to entailment between sentences of the same grammatical structure, and prove a lower bound on this sentence-level entailment.",
"cite_spans": [
{
"start": 132,
"end": 153,
"text": "(Balkir et al., 2016;",
"ref_id": "BIBREF3"
},
{
"start": 154,
"end": 167,
"text": "Coecke, 2020)",
"ref_id": "BIBREF8"
},
{
"start": 170,
"end": 191,
"text": "Bankova et al. (2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical entailment via hyponymies",
"sec_num": "2.4"
},
{
"text": "The k-hyponymy (k_hyp) between density matrices A and B is the maximum k such that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical entailment via hyponymies",
"sec_num": "2.4"
},
{
"text": "A \u2291_k B \u21d0\u21d2 B \u2212 kA is a positive operator (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical entailment via hyponymies",
"sec_num": "2.4"
},
{
"text": "where k is between 0 (no entailment) and 1 (full entailment).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical entailment via hyponymies",
"sec_num": "2.4"
},
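A minimal NumPy sketch of this measure, using the Moore-Penrose pseudoinverse characterization of Bankova et al. (2019, Theorem 2) discussed in the next paragraph; following the experiments in this paper, the sketch deliberately omits the support check (the function name k_hyp and the tolerance are our own choices):

```python
import numpy as np

def k_hyp(A, B, tol=1e-9):
    """Largest k with B - kA positive semidefinite: k = 1/gamma, where
    gamma is the maximum eigenvalue of pinv(B) @ A (the Moore-Penrose
    inverse of B times A). The supp(A) subset-of supp(B) check is
    skipped, giving the generalized form used in the experiments."""
    gamma = np.max(np.linalg.eigvals(np.linalg.pinv(B) @ A).real)
    return 0.0 if gamma <= tol else min(1.0, 1.0 / gamma)
```

For instance, a pure qubit state against the maximally mixed qubit state gives k = 0.5, matching the direct condition that B - kA stays positive up to k = 0.5.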
{
"text": "Van de Wetering (2018) finds that the crisp L\u00f6wner ordering (k hyp = 1) is trivial when operators are normalized to trace 1. On the other hand, they enumerate highly desirable properties of the L\u00f6wner order when normalized to highest eigenvalue 1. In particular, the maximally mixed state is the bottom element; all pure states are maximal; and the ordering is preserved under any linear trace-preserving isometry (including unitaries), convex mixture, and the tensor product. In our experiments, we leverage these ordering properties following Lewis (2020)'s convention of normalizing operators to highest eigenvalue \u2264 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical entailment via hyponymies",
"sec_num": "2.4"
},
{
"text": "According to Bankova et al. (2019, Theorem 2), when supp(A) \u2286 supp(B), k_hyp is given by 1/\u03b3, where \u03b3 is the maximum eigenvalue of B\u207aA. Here B\u207a denotes the Moore-Penrose inverse of B, which we refer to in the next section as the support inverse. If supp(A) \u2288 supp(B), k_hyp is 0. This means that k_hyp admits a grading, but is not robust to errors. In our experiments, to circumvent the issue of almost all of our calculated k_hyp values being 0, we employ a generalized form of k_hyp, equivalent to the original definition in Bankova et al. (2019, Theorem 2) minus the check of whether supp(A) \u2286 supp(B).",
"cite_spans": [
{
"start": 13,
"end": 45,
"text": "Bankova et al. (2019, Theorem 2)",
"ref_id": null
},
{
"start": 518,
"end": 550,
"text": "Bankova et al. (2019, Theorem 2)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical entailment via hyponymies",
"sec_num": "2.4"
},
{
"text": "To propose more robust measures, Lewis (2019) says A entails B with the error term E if there exists a D such that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical entailment via hyponymies",
"sec_num": "2.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "A + D = B + E",
"eq_num": "(6)"
}
],
"section": "Lexical entailment via hyponymies",
"sec_num": "2.4"
},
{
"text": "to define the following two entailment measures",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical entailment via hyponymies",
"sec_num": "2.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "k_{BA} = \u2211_i \u03bb_i / \u2211_i |\u03bb_i| = Trace(D \u2212 E) / Trace(D + E)",
"eq_num": "(7)"
}
],
"section": "Lexical entailment via hyponymies",
"sec_num": "2.4"
},
{
"text": "k_E = 1 \u2212 \u2016E\u2016 / \u2016A\u2016 (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical entailment via hyponymies",
"sec_num": "2.4"
},
{
"text": "where the \u03bb_i are the eigenvalues of B \u2212 A. In Equations 7 and 8, the error term E satisfying Equation 6 is constructed by taking the diagonalization of B \u2212 A, setting all positive eigenvalues to zero, and changing the sign of all negative eigenvalues. k_BA ranges from \u22121 to 1, and k_E ranges from 0 to 1. According to De las Cuevas et al. (2020), diag, mult, and spider preserve the crisp L\u00f6wner order:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical entailment via hyponymies",
"sec_num": "2.4"
},
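Equations 7 and 8 translate directly into NumPy. One caveat: the norm in Equation 8 is not pinned down by the extracted text, so the sketch below assumes the Frobenius norm; the function names are ours:

```python
import numpy as np

def entailment_error(A, B):
    """Error term E from Equation 6: diagonalize B - A, zero the
    positive eigenvalues, and flip the sign of the negative ones."""
    lam, V = np.linalg.eigh(B - A)
    return V @ np.diag(np.where(lam < 0, -lam, 0.0)) @ V.conj().T

def k_BA(A, B):
    """k_BA = sum_i lam_i / sum_i |lam_i|, over eigenvalues of B - A (Eq. 7)."""
    lam = np.linalg.eigvalsh(B - A)
    denom = np.sum(np.abs(lam))
    return 0.0 if denom == 0 else float(np.sum(lam) / denom)

def k_E(A, B):
    """k_E = 1 - ||E|| / ||A|| (Eq. 8); Frobenius norm assumed here."""
    return 1.0 - np.linalg.norm(entailment_error(A, B)) / np.linalg.norm(A)
```

With full entailment (B - A positive) both measures hit their maximum of 1, and reversing the arguments drives k_BA to -1, exhibiting the asymmetry required of an entailment measure.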
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "A_1 \u2291 B_1, A_2 \u2291 B_2 \u21d0\u21d2 A_1 \u2218 A_2 \u2291 B_1 \u2218 B_2",
"eq_num": "(9)"
}
],
"section": "Lexical entailment via hyponymies",
"sec_num": "2.4"
},
{
"text": "Fuzz and phaser do not satisfy Equation 9.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical entailment via hyponymies",
"sec_num": "2.4"
},
{
"text": "To construct conversational negation, we must first define a key ingredient -logical negation, denoted by \u00ac. The logical negation of a density matrix is a unary function that yields another density matrix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical negations",
"sec_num": "3"
},
{
"text": "The most important property of a logical negation is that it must interact well with hyponymy. Ideally, the interpretation of the contrapositive of an entailment must be sensible:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical negations",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "A \u2291 B \u21d0\u21d2 \u00acB \u2291 \u00acA",
"eq_num": "(10)"
}
],
"section": "Logical negations",
"sec_num": "3"
},
{
"text": "A weakened notion arises from allowing varying degrees of entailment:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical negations",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "A \u2291_k B \u21d0\u21d2 \u00acB \u2291_{k'} \u00acA",
"eq_num": "(11)"
}
],
"section": "Logical negations",
"sec_num": "3"
},
{
"text": "where k' = k in the ideal case. Equation 11 necessitates that any candidate logical negation be order-reversing. However, van de Wetering (2018) proved that all unitary operations preserve the L\u00f6wner order. Therefore, no quantum gate can reverse the L\u00f6wner order, and the search for a logical negation compatible with quantum natural language processing (originally formulated in the category CPM(FHilb) (Piedeleu et al., 2015)) remains an open question.",
"cite_spans": [
{
"start": 402,
"end": 425,
"text": "(Piedeleu et al., 2015)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Logical negations",
"sec_num": "3"
},
{
"text": "We now discuss two candidates for logical negation that have desirable properties and interaction with the hyponymies presented in Section 2.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical negations",
"sec_num": "3"
},
{
"text": "Lewis (2020) introduces a candidate logical negation which preserves positivity of density matrix X:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtraction from identity negation",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u00ac sub X := I \u2212 X",
"eq_num": "(12)"
}
],
"section": "Subtraction from identity negation",
"sec_num": "3.1"
},
{
"text": "In the case where X is a pure state, it maps X to the subspace orthogonal to it, as the identity matrix I is the sum of orthonormal projectors. This logical negation satisfies Equation 10 for the crisp L\u00f6wner order. It satisfies Equation 11 with k' = k for k_BA, but not for k_hyp or k_E.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtraction from identity negation",
"sec_num": "3.1"
},
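Equation 12 is a one-liner; a minimal sketch (neg_sub is our own name for \u00ac sub):

```python
import numpy as np

def neg_sub(X):
    """Subtraction negation (Eq. 12): I - X. For a pure state |v><v|,
    the result is the projector onto the orthogonal subspace."""
    return np.eye(X.shape[0]) - X
```

Note that applying it twice returns the original matrix, since I - (I - X) = X.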
{
"text": "We introduce a new candidate for logical negation: the matrix inverse. It reverses the L\u00f6wner order, i.e. it satisfies Equation 11 with k' = k (see Corollary 1 in Appendix). It additionally satisfies Equation 11 with k' = k for k_BA if both density operators have the same eigenbasis (see Theorem 2 in Appendix).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matrix inverse negation",
"sec_num": "3.2"
},
{
"text": "As the matrix inverse of a non-invertible matrix is undefined, we define a logical negation from two generalizations of the matrix inverse acting upon the support and kernel subspaces, respectively. Definition 1. For any density matrix X with spectral decomposition",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matrix inverse negation",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "X = \u2211_i \u03bb_i |i\u27e9\u27e8i|, \u00ac_supp X := \u2211_{i : \u03bb_i > 0} (1/\u03bb_i) |i\u27e9\u27e8i|",
"eq_num": "(13)"
}
],
"section": "Matrix inverse negation",
"sec_num": "3.2"
},
{
"text": "Definition 1 is the Moore-Penrose generalized matrix inverse and is equal to the matrix inverse when the kernel is empty. It has the property that Equation 11 with k' = k is satisfied for k_hyp when rank(A) = rank(B) (see Theorem 1 in Appendix). We call it the support inverse, to contrast with what we call the kernel inverse: Definition 2. For any non-invertible density matrix X with spectral decomposition X = \u2211_i \u03bb_i |i\u27e9\u27e8i|,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matrix inverse negation",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u00ac_ker X := \u2211_{i : \u03bb_i = 0} |i\u27e9\u27e8i|",
"eq_num": "(14)"
}
],
"section": "Matrix inverse negation",
"sec_num": "3.2"
},
{
"text": "The kernel inverse is the limit of matrix regularization by spectral filtering (i.e. setting all zero eigenvalues to an infinitesimal positive eigenvalue), then inverting the matrix and normalizing to highest eigenvalue 1. Its application discards all information about the eigenspectrum of the original matrix. Therefore, applying the kernel inverse twice results in a maximally mixed state over the support of the original matrix. Operationally speaking, \u00ac ker and \u00ac sub act upon the kernel of the original matrix identically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matrix inverse negation",
"sec_num": "3.2"
},
{
"text": "We can think conceptually of a negated word as containing elements both \"near\" (in support) and \"far\" (in kernel) from the original word. Therefore, a logical negation should encompass nonzero values in the original matrix's support and in its kernel; it is up to conversational negation to then weight the values in the logical negation according to their contextual relevance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matrix inverse negation",
"sec_num": "3.2"
},
{
"text": "On its own, neither the support inverse nor the kernel inverse is a sensible candidate for logical negation. A convex mixture of the two, which we call the matrix inverse and denote by \u00ac inv, spans both the support and the kernel of the original matrix. In our experiments we weight support and kernel equally, but other weightings could be considered, for instance to account for a noise floor or to enforce the (naively unsatisfied) property that applying the negation twice is the identity operation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matrix inverse negation",
"sec_num": "3.2"
},
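The support inverse, kernel inverse, and their convex mixture can be sketched together in NumPy (neg_inv is our name for \u00ac inv; w = 0.5 matches the equal weighting used in the experiments, and tol stands in for a numerical noise floor):

```python
import numpy as np

def neg_inv(X, w=0.5, tol=1e-9):
    """Matrix inverse negation: a convex mixture of the support inverse
    (Eq. 13: eigenvalue 1/lam_i on the support) and the kernel inverse
    (Eq. 14: eigenvalue 1 on the kernel), in the eigenbasis of X."""
    lam, V = np.linalg.eigh(X)
    inv_supp = np.array([1.0 / l if l > tol else 0.0 for l in lam])
    ker = np.array([0.0 if l > tol else 1.0 for l in lam])
    return V @ np.diag(w * inv_supp + (1.0 - w) * ker) @ V.conj().T
```

Setting w = 1.0 or w = 0.0 recovers the pure support inverse and pure kernel inverse, respectively, which are not sensible negations on their own.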
{
"text": "When composing a density matrix X with \u00ac inv X or \u00ac supp X via spider, fuzz, or phaser, the resulting density matrix has the desired property of being a maximally mixed state on the support with zeroes on the kernel (see Theorem 3 and Corollary 2 in Appendix). In other words, this operation is the fastest \"quantum (Bayesian, in the case of phaser) update\" from a density matrix to the state encoding no information other than partitioning support and kernel subspaces. Interpreting composition as logical AND, this corresponds to the contradiction that a proposition (restricted to the support subspace) cannot simultaneously be true and not true.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matrix inverse negation",
"sec_num": "3.2"
},
{
"text": "\u00ac sub , \u00ac supp , and \u00ac inv preserve eigenvectors (up to uniqueness for eigenvalues with multiplicity > 1). We ignore normalization for logical negation because in our conversational negation framework, which we introduce in Section 5, we can always normalize to largest eigenvalue \u2264 1 after the composition operation. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalization",
"sec_num": "3.3"
},
{
"text": "Negation is intrinsically dependent on context. Context can be derived from two sources: 1) knowledge gained throughout the sentence or the text (textual context), and 2) worldly knowledge from experts or data such as a corpus (worldly context). While textual context depends on the specific text being analyzed, worldly context can be computed a priori. In this section, we introduce worldly context and propose two methods of computing it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context determination",
"sec_num": "4"
},
{
"text": "Worldly knowledge is a certain understanding of the world that most users of a language intuitively possess. We want to capture this worldly knowledge to provide a context for negation that is not explicit in the text. In this section, we propose two methods of generating a worldly context: 1) knowledge encoded in an entailment hierarchy such as WordNet, and 2) generalizing the ideas of the first method to context derivation from the entailment information encoded in density matrices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Worldly context",
"sec_num": "4.1"
},
{
"text": "We consider an entailment hierarchy for words that leads to relations such as in Figure 2 , where a directed edge can be understood as a hyponym relation. Such a relational hierarchy can be obtained from a human-curated database like WordNet (Fellbaum, 1998) or using unsupervised methods such as Hearst patterns (Hearst, 1992; Roller et al., 2018) .",
"cite_spans": [
{
"start": 309,
"end": 323,
"text": "(Hearst, 1992;",
"ref_id": "BIBREF16"
},
{
"start": 324,
"end": 344,
"text": "Roller et al., 2018)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 81,
"end": 89,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Context from an entailment hierarchy",
"sec_num": "4.1.1"
},
{
"text": "We can use such a hierarchy of hyponyms to generate worldly context, as words usually appear in the implicit context of their hypernyms; for example, apple is usually thought of as a fruit. Now, to calculate the worldly context for the word apple, we take a weighted sum of the hypernyms of apple, with more direct hypernyms such as fruit weighted higher than more distant hypernyms such as entity. This corresponds to the idea that when we talk in the context of apple, we are more likely to talk about an orange (hyponym of fruit) than a movie (hyponym of entity). Hence, for a word w with hypernyms h 1 , . . . , h n ordered from closest to furthest, we define the worldly context wc w as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context from an entailment hierarchy",
"sec_num": "4.1.1"
},
{
"text": "wc_w := \u2211_i p_i h_i (15)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context from an entailment hierarchy",
"sec_num": "4.1.1"
},
{
"text": "where p i \u2265 p i+1 for all i. For this approach, we assume that the density matrix of the word is a mixture containing its hyponyms; i.e. the density matrix of fruit is a mixture of all fruits such as apple, orange and pears.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context from an entailment hierarchy",
"sec_num": "4.1.1"
},
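The weighted-sum construction of Equation 15 can be sketched numerically. The helper below is our own illustration, not code from the paper; the hypernym density matrices and weights are toy values:

```python
import numpy as np

def worldly_context(hypernym_rhos, weights):
    """Worldly context as a weighted sum of hypernym density matrices
    (Equation 15). Weights must be non-increasing so that closer
    hypernyms (e.g. fruit) dominate distant ones (e.g. entity)."""
    assert all(p >= q for p, q in zip(weights, weights[1:]))
    wc = sum(p * rho for p, rho in zip(weights, hypernym_rhos))
    return wc / np.trace(wc)  # renormalise to trace 1

# toy diagonal density matrices over the basis (apple, orange, fig, movie)
fruit = np.diag([1 / 2, 1 / 3, 1 / 6, 0.0])
entity = np.diag([1 / 4, 1 / 4, 1 / 4, 1 / 4])
wc_apple = worldly_context([fruit, entity], [2 / 3, 1 / 3])
```

In this sketch the closer hypernym fruit contributes twice the weight of entity, so non-apple fruits such as orange dominate the resulting context.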
{
"text": "As explained in Section 2.4, density matrix representation of words can be used to encode the information about entailment between words. Furthermore, this entailment can be graded; for example, fruit would entail dessert with a high degree, but not necessarily by 1. Such graded entailment is not captured in the human curated WordNet database. Although there have been proposals to extend WordNet (Boyd-Graber et al., 2006; Ahsaee et al., 2014) , such semantic networks are not yet available.",
"cite_spans": [
{
"start": 399,
"end": 425,
"text": "(Boyd-Graber et al., 2006;",
"ref_id": "BIBREF7"
},
{
"start": 426,
"end": 446,
"text": "Ahsaee et al., 2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Context using entailment encoded in the density matrices",
"sec_num": "4.1.2"
},
{
"text": "We generalize the idea of entailment hierarchy by considering a directed weighted graph where each node is a word and the edges indicate how much one word entails the other. Once we have the density matrices for words generated from corpus data, we can build this graph by calculating the graded hyponymies (see Section 2.4) among the words, thereby extracting the knowledge gained from the corpus encoded in the density matrices, without requiring human narration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context using entailment encoded in the density matrices",
"sec_num": "4.1.2"
},
{
"text": "Consider words x and y where x p y and y q x. In the ideal case, there are three possibilities: 1) x and y are not related (both p and q are small), 2) one is a type of the other (one of p and q is large), or 3) they are very similar (both p and q are large). Hence, we need to consider both p and q when we generate the worldly context. To obtain the worldly context for a word w, we consider all nodes (words) connected to w along with their weightings. If p 1 , . . . , p n and q 1 , . . . , q n are the weights of the edges from w to words h 1 , . . . , h n , then worldly context wc w is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context using entailment encoded in the density matrices",
"sec_num": "4.1.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "wc w := i f (p i , q i ) h i",
"eq_num": "(16)"
}
],
"section": "Context using entailment encoded in the density matrices",
"sec_num": "4.1.2"
},
{
"text": "where f is some function of weights p i and q i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context using entailment encoded in the density matrices",
"sec_num": "4.1.2"
},
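Equation 16 leaves the weighting function f open. As an illustration only (the paper does not fix a choice), one simple option is f(p, q) = p\u00b7q, which weights mutually entailing, i.e. closer, words more heavily:

```python
import numpy as np

def graph_context(edges, f):
    """Equation 16: wc_w = sum_i f(p_i, q_i) h_i, where p_i is the degree
    to which w entails h_i and q_i the degree to which h_i entails w."""
    wc = sum(f(p, q) * rho for rho, p, q in edges)
    return wc / np.trace(wc)  # renormalise to trace 1

# illustrative choice of f, not prescribed by the framework
f = lambda p, q: p * q

fruit = np.diag([0.5, 0.3, 0.2, 0.0])
entity = np.diag([0.25, 0.25, 0.25, 0.25])
# each edge: (density matrix, p = w entails h, q = h entails w)
wc = graph_context([(fruit, 0.9, 0.3), (entity, 0.9, 0.05)], f)
```

Under this choice the close hypernym (high mutual entailment) receives a larger weight than the distant one, matching the intuition behind Equation 15.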
{
"text": "5 Conversational negation in DisCoCirc",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context using entailment encoded in the density matrices",
"sec_num": "4.1.2"
},
{
"text": "In this section, we present a framework to obtain conversational negation by composing logical negation with worldly context. As discussed in Section 2.1, negation-when used in conversationcan be viewed as not just a complement of the original word, but as also suggesting an alternative claim. Therefore, to obtain conversational negation, we need to adapt the logical negation to take into account the worldly context of the negated word. In DisCoCirc (see Section 2.2), words are wires, and sentences are processes that update meaning of the words. Similarly, we view conversational negation as a process that updates the meaning of the words. We propose the general framework for conversational negation by defining it to be the logical negation of the word, updated through composition with the worldly context evoked by that word:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A framework for conversational negation",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Conversational negation \u00ac",
"eq_num": "(17)"
}
],
"section": "A framework for conversational negation",
"sec_num": "5.1"
},
{
"text": "The framework presented here is general; i.e. it does not restrict the choice of logical negation, worldly context or composition. The main steps of conversational negation are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A framework for conversational negation",
"sec_num": "5.1"
},
{
"text": "1. Calculate the logical negation \u00ac( w ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A framework for conversational negation",
"sec_num": "5.1"
},
{
"text": "2. Compute the worldly context wc w . 3. Update the meaning of \u00ac( w ) by composing with wc w to obtain \u00ac( w ) wc w . Further meaning updates can be applied to the output of conversational negation using compositional semantics as required from the structure of the text, although we do not investigate this in the current work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A framework for conversational negation",
"sec_num": "5.1"
},
{
"text": "We present a toy example to develop intuition of how meaning provided by worldly context interacts with logical negation and composition to derive conversational negation. Suppose {apple, orange, f ig, movie} are pure states forming an orthonormal basis (ONB). In practice ONBs are far larger, but this example suffices to illustrate how the conversational negation accounts for which states are relevant. We take \u00ac sub as the choice of negation and spider in this ONB as the choice of composition. Now, consider the sentence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "See it in action",
"sec_num": "5.2"
},
{
"text": "This is not an apple.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "See it in action",
"sec_num": "5.2"
},
{
"text": "Although in reality the worldly context of apple encompasses more than just f ruit, for ease of understanding, assume the worldly context of apple is wc apple = f ruit , given by f ruit = 1 2 apple + 1 3 orange + 1 6 f ig",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "See it in action",
"sec_num": "5.2"
},
{
"text": "Applying \u00ac sub ( apple ) = I \u2212 apple , we get \u00ac sub ( apple ) = orange + f ig + movie Finally, to obtain conversational negation, logical negation is endowed with meaning through the application of worldly context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "See it in action",
"sec_num": "5.2"
},
{
"text": "\u00ac sub ( apple ) f ruit = 1 3 orange + 1 6 f ig",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "See it in action",
"sec_num": "5.2"
},
{
"text": "This conversational negation example not only yields all fruits which are not apples, but also preserves the proportions of the non-apple fruits.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "See it in action",
"sec_num": "5.2"
},
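The toy calculation above can be checked numerically. In the chosen ONB all states are diagonal, and spider then reduces to elementwise multiplication of the diagonals; a minimal sketch using the paper's own numbers:

```python
import numpy as np

# basis order: (apple, orange, fig, movie)
apple = np.diag([1.0, 0.0, 0.0, 0.0])
fruit = np.diag([1 / 2, 1 / 3, 1 / 6, 0.0])  # worldly context of apple

neg_apple = np.eye(4) - apple  # subtraction negation: I - apple

# spider in this ONB multiplies diagonal entries elementwise
conv_neg = np.diag(np.diag(neg_apple) * np.diag(fruit))
# recovers 1/3 orange + 1/6 fig: apple is removed and the
# relative proportions of the remaining fruits are preserved
```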
{
"text": "To validate the proposed framework, we perform experiments on the data set of alternative plausibility ratings created by Kruszewski et al. (2016) 1 . In their paper, Kruszewski et al. (2016) predict plausibility scores for word pairs consisting of a negated word and its alternative using various methods to compare the similarity of the words. While achieving a high correlation with human intuition, they do not provide an operation to model the outcome of a conversational negation. Through the experiments, we test whether our operational conversational negation still has correlation with human intuition.",
"cite_spans": [
{
"start": 122,
"end": 146,
"text": "Kruszewski et al. (2016)",
"ref_id": "BIBREF18"
},
{
"start": 167,
"end": 191,
"text": "Kruszewski et al. (2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "The Kruszewski et al. (2016) data set consists of word pairs containing a noun to be negated and an alternative noun, along with a plausibility rating. We will denote the word pairs as (w N , w A ). The authors transform these word pairs into simple sentences of the form: This is not a w N , it is a w A (e.g. This is not a radio, it is a dad.). These sentences are then rated by human participants on how plausible they are to appear in a natural conversation.",
"cite_spans": [
{
"start": 4,
"end": 28,
"text": "Kruszewski et al. (2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "6.1"
},
{
"text": "To build these word pairs, Kruszewski et al. (2016) randomly picked 50 common nouns as w N and paired them with alternatives that have various relations to w N . Then using a crowd-sourcing service, they asked the human participants to judge the plausibility of each sentence. The participants were told to rate each sentence on a scale of 1 to 5.",
"cite_spans": [
{
"start": 27,
"end": 51,
"text": "Kruszewski et al. (2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "6.1"
},
{
"text": "We build density matrices from 50 dimensional GloVe (Pennington et al., 2014 ) vectors using the method described in Lewis (2019) . Then for each word pair (w N , w A ) in the data set, we use various combinations of operations to perform conversational negation on the density matrix of w N and calculate similarity with the density matrix of w A .",
"cite_spans": [
{
"start": 52,
"end": 76,
"text": "(Pennington et al., 2014",
"ref_id": "BIBREF22"
},
{
"start": 117,
"end": 129,
"text": "Lewis (2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "6.2"
},
{
"text": "For conversational negation, we experiment with different combinations of logical negations, composition operations and worldly context. We use two types of logical negations: \u00ac sub and \u00ac inv . For composition, we use spider, fuzz, phaser, mult and diag. With spider, fuzz and phaser, we perform experiments in two choices of basis: 'w', the basis of \u00ac( w N ), and 'c', the basis of wc w N . We use worldly context generated from the WordNet entailment hierarchy as per Section 4.1.1; we experiment with different methods to calculate the weights p i along the hypernym path.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "6.2"
},
{
"text": "To find plausibility ratings, we calculate hyponymies k hyp , k E and k BA , as well as trace similarity (the density operator analog of cosine similarity for vectors), between the density matrix of the conversational negation of w N and w A . Note that in our experiments, unlike in the originally proposed formulation of k hyp , we generalize k hyp to not be 0 when supp(A) \u2286 supp(B), as described in Section 2.4. We calculate entailment in both directions for k E and k hyp , which are asymmetric. The entailment from w N to w A is denoted k E1 and k hyp1 while the entailment from w A to w N is denoted k E2 and k hyp2 . Finally, we calculate the Pearson correlation between our plausibility ratings and the mean human plausibility ratings from Kruszewski et al. (2016) .",
"cite_spans": [
{
"start": 749,
"end": 773,
"text": "Kruszewski et al. (2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "6.2"
},
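For concreteness, trace similarity can be sketched as the normalised Hilbert-Schmidt inner product (we assume this standard form; the paper does not spell out the formula), with the final score a Pearson correlation against mean human ratings:

```python
import numpy as np

def trace_similarity(a, b):
    """Density-matrix analogue of cosine similarity (assumed form):
    tr(AB) / sqrt(tr(A^2) * tr(B^2))."""
    return np.trace(a @ b) / np.sqrt(np.trace(a @ a) * np.trace(b @ b))

def pearson(xs, ys):
    """Pearson correlation between model and human plausibility ratings."""
    return np.corrcoef(xs, ys)[0, 1]

rho = np.diag([0.7, 0.3, 0.0])
sigma = np.diag([0.0, 0.0, 1.0])  # disjoint support, so similarity is 0
```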
{
"text": "Our experiments revealed that the best conversational negation is obtained by choosing \u00ac sub with phaser in the basis 'w'. We achieve 0.635 correlation of the trace similarity plausibility rating with On the other hand, Figure 3 (left) shows trace similarity of \u00ac sub without applying any context. We observe that simply performing logical negation yields a negative correlation with human plausibility ratings. This is because logical negation gives us a density matrix furthest from the original word, going against the observation of Kruszewski et al. (2016) that an alternative to a negated word appears in similar contexts to it. Figure 3 (right) shows the results of combining this logical negation with worldly context to obtain meaning that positively correlates with how humans think of negation in conversation.",
"cite_spans": [
{
"start": 537,
"end": 561,
"text": "Kruszewski et al. (2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 220,
"end": 228,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 635,
"end": 651,
"text": "Figure 3 (right)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.3"
},
{
"text": "We tested many combinations for conversational negation enumerated in Section 6.2. The correlation between plausibility ratings for our conversational negation and the mean human plausibility rating is shown in Figure 4 . We left out mult and diag from the table as they did not achieve any correlation above 0.3. Now, we will explore each variable of our experiments individually in the next sections.",
"cite_spans": [],
"ref_spans": [
{
"start": 211,
"end": 219,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.3"
},
{
"text": "We tested \u00ac sub and \u00ac inv logical negations. We found that the conversational negations built from \u00ac sub negation usually had a higher correlation with human plausibility ratings, with the highest being 0.635 as shown in Figures 3 and 4 . One exception to this is when the \u00ac inv is combined with spider in the basis 'c', for which we get the correlation of 0.455 for both trace similarity and k E2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 221,
"end": 236,
"text": "Figures 3 and 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Logical negation",
"sec_num": "6.3.1"
},
{
"text": "We investigated five kinds of composition operations: spider, fuzz, phaser, mult, and diag. We found that the results using mult and diag do not have any statistically significant correlation (<0.3) with human plausibility rating. On the other hand, phaser (in the basis 'w') has the highest correlation. It performs well with both logical negations. Plausibility ratings for phaser with \u00ac sub negation measured using k E2 and trace similarity has correlations of 0.602 and 0.635 respectively. Spider and fuzz have statistically relevant correlation for a few cases but never more than 0.5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Composition",
"sec_num": "6.3.2"
},
{
"text": "Spider, fuzz, and phaser necessitate a choice of basis for applying the worldly context in the conversational negation. We can interpret this choice as determining which input density matrix sets the eigenbasis of the output, and which modifies the other's spectrum. We found that phaser paired with the basis 'w' (the basis of the logically negated word) performs better than the basis 'c' (the basis of the worldly context) across both negations for most plausibility metrics. This lines up with our intuition that applying worldly context updates the eigenspectrum of \u00ac( w N ), leveraging worldly knowledge to increase/decrease the weights of more/less contextually relevant values of the logical negation of w N . However, a notable exception to this reasoning is our result that for spider paired with \u00ac inv , basis 'c' has statistically significant correlations with human ratings, while basis 'w' does not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basis",
"sec_num": "6.3.3"
},
{
"text": "For these experiments, we create worldly context based on the hypernym paths provided by WordNet. As explained in Section 4.1.1, we need p i \u2265 p i+1 in Equation 15 for the more direct hypernyms to be more important than more distant hypernyms. Hence, we tried multiple monotonically decreasing functions for the weights {p i } i of the hypernyms. For a word w with n hypernyms h 1 , ..., h n ordered from closest to furthest, we define the following functions to calculate p i . Figure 5 shows on the y-axis the correlation of the human rating with the plausibility rating (trace) of our best conversational negation (phaser with \u00ac sub in the basis 'w') and the parameters of context functions on the x-axis. We observe that all three context functions achieve a maximal correlation of 0.635, therefore being equally good. All functions eventually drop in correlation as the value of x increases, showing that having the context too close to the word does not yield optimal results either. One important observation is that at x = 0, hyp x (i) = k E (w, h i ) still performs well with a correlation of 0.581, despite not taking the Word-Net hypernym distance into account. This is an evidence for the potential of the context creation based on density matrix entailment proposed in Section 4.1.2.",
"cite_spans": [],
"ref_spans": [
{
"start": 479,
"end": 487,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Worldly context",
"sec_num": "6.3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "poly x (i) := (n \u2212 i) x (18) exp x (i) := (1 + x 10 ) (n\u2212i) (19) hyp x (i) := (n \u2212 i) x 2 k E (w, h i )",
"eq_num": "(20)"
}
],
"section": "Worldly context",
"sec_num": "6.3.4"
},
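The polynomial and exponential weight functions can be sketched directly (hyp_x additionally scales by the graded entailment k E (w, h i ), which we omit here since it depends on the density matrices):

```python
import numpy as np

def poly_weights(n, x):
    """poly_x(i) = (n - i)^x for hypernyms i = 1..n (closest first)."""
    return np.array([(n - i) ** x for i in range(1, n + 1)], dtype=float)

def exp_weights(n, x):
    """exp_x(i) = (1 + x/10)^(n - i)."""
    return np.array([(1 + x / 10) ** (n - i) for i in range(1, n + 1)])

# both are monotonically non-increasing in i, as Equation 15 requires
p = poly_weights(4, 2)  # [9., 4., 1., 0.]
q = exp_weights(4, 10)  # [8., 4., 2., 1.]
```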
{
"text": "On top of calculating the conversational negation, the experiments call for comparing the results of the conversational negation with w A to give plausibility ratings. We compare the hyponymies k E , k hyp , and k BA , as well as trace similarity. The results show that trace similarity and k E2 interact most sensibly with our conversational negation, attaining 0.635 and 0.602 correlation with mean human ratings respectively. For the asymmetric measures k E and k hyp , computing the entailment from w A to the conversational negation of w N performed better than the other direction. For all sim-ilarity measures (except k hyp1 ), \u00ac sub paired with phaser in the basis 'w' performs the best.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Plausibility rating measures",
"sec_num": "6.3.5"
},
{
"text": "The framework presented in this paper shows promising results for conversational negation in compositional distributional semantics. Given its modular design, additional work should be done exploring more kinds of logical negations, compositions and worldly contexts, as well as situations for which certain combinations are optimal. Since creating worldly context-as presented in this paper-is a new concept in the area of DisCo-Circ, it leaves the most room for further exploration. In particular, our framework does not handle how to disambiguate different meanings of the same word; for example, the worldly context of the word apple should be different for the fruit apple versus the technology company apple.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "7"
},
{
"text": "Our conversational negation framework currently does not model a different kind of negation where the suggested alternative is an antonym rather than just any other word that appears in similar contexts. For instance, the sentence Alice is not happy suggests that Alice is sad-an antonym of happy-rather than cheerful, even though cheerful might appear in similar contexts as happy. We would like to extend the conversational negation framework to account for this.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "7"
},
{
"text": "We would like to implement the context generation method presented in Section 4.1.2 and test on the current experimental setup. 2 To further validate the framework, more data sets should be collected and evaluated on to explore, for each type of relation between words, what construction of conversational negation yields sensible plausibility ratings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "7"
},
{
"text": "For the conversational negation to be fully applicable in the context of compositional distributional semantics, further theoretical work is required to generalize the model from negation of individual nouns to negation of other grammatical classes and complex sentences. Furthermore, we would like to analyze the interplay between conversational negation, textual context, and evolving meanings. Lastly, the interaction of conversational negation with logical connectives and quantifiers leaves open questions to explore.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "7"
},
{
"text": "Proof. \u00ac supp X and X have the same eigenbasis. From Equation 13, all nonzero eigenvalues of \u00ac supp X are multiplicative inverses of the corresponding eigenvalue of X. We use definitions of spider, fuzz, and phaser from Equations 1, 2, and 3. The summation indices are over eigenvectors with nonzero eigenvalue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "7"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "spider(X, \u00ac supp X) (34) = U s (X \u2297 \u00ac supp X)U \u2020 s (35) = i |i ii| (X \u2297 \u00ac supp X) j |jj j| (36) = i |i ii| \u03bb |i i| \u2297 1 \u03bb i |i i| |ii i| (37) = i |i i| (38) = I supp (39) fuzz(X, \u00ac supp X) = i x i P i \u2022 X \u2022 P i (40) = i 1 \u03bb i P i j \u03bb i P i P i (41) = i P i (42) = I supp (43) phaser(X, \u00ac supp X)",
"eq_num": "(44)"
}
],
"section": "Future work",
"sec_num": "7"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= i x i P i \u2022 X \u2022 i x i P i (45) = i \u03bb i \u2212 1 2 P i j \u03bb j P j k \u03bb k \u2212 1 2 P k (46) = i P i (47) = I supp",
"eq_num": "(48)"
}
],
"section": "Future work",
"sec_num": "7"
},
{
"text": "Corollary 2. When composing a density matrix X with \u00ac inv X via spider, fuzz, or phaser, the resulting density matrix has the desired property of being a maximally mixed state on the support with zeroes on the kernel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "7"
},
{
"text": "The data set is available at http://marcobaroni. org/PublicData/alternatives_dataset.zip",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The code is available upon request.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to give special thanks to Martha Lewis for the insightful conversation and for sharing her code for generating density matrices. We appreciate the guidance of Bob Coecke in introducing us to the field of compositional distributional semantics for natural language processing. We thank John van de Wetering for informative discussion about ordering density matrices. We thank the anonymous reviewers for their helpful feedback. Lia Yeh gratefully acknowledges funding from the Oxford-Basil Reeve Graduate Scholarship at Oriel College in partnership with the Clarendon Fund.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "A.1 Support inverse reverses k-hyponymy Theorem 1. For two density matrices A and B, k-hyponymy is reversed by support inverse when rank(A) = rank(B):Proof. From (Baksalary et al., 1989) , \u00ac supp reverses L\u00f6wner order when rank(A) = rank(B):Thus, letting \"\u2265 0\" denote the operator is positive:using Equations 5 and 22 from Equation 23 to 24.Corollary 1. For two invertible density matrices A and B, k-hyponymy is reversed by matrix inverse:A.2 Matrix inverse reverses k BA in same basis case Theorem 2. For two density matrices A and B with the same eigenbasis, k BA is reversed by matrix inverse:Proof.using Equation 13 from Equation 30 to 31.A.3 Composing with \u00ac sub or \u00ac inv gives maximally mixed support Theorem 3. When composing a density matrix X with \u00ac supp X via spider, fuzz, or phaser, the resulting density matrix has the desired property of being a maximally mixed state on the support with zeroes on the kernel.",
"cite_spans": [
{
"start": 162,
"end": 186,
"text": "(Baksalary et al., 1989)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Proofs",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A categorical semantics of quantum protocols",
"authors": [
{
"first": "Samson",
"middle": [],
"last": "Abramsky",
"suffix": ""
},
{
"first": "Bob",
"middle": [],
"last": "Coecke",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 19th Annual IEEE Symposium on Logic in Computer Science",
"volume": "",
"issue": "",
"pages": "415--425",
"other_ids": {
"DOI": [
"10.1109/LICS.2004.1319636"
]
},
"num": null,
"urls": [],
"raw_text": "Samson Abramsky and Bob Coecke. 2004. A categor- ical semantics of quantum protocols. In Proceed- ings of the 19th Annual IEEE Symposium on Logic in Computer Science, 2004., pages 415-425.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Semantic similarity assessment of words using weighted wordnet",
"authors": [
{
"first": "Mostafa",
"middle": [
"Ghazizadeh"
],
"last": "Ahsaee",
"suffix": ""
},
{
"first": "Mahmoud",
"middle": [],
"last": "Naghibzadeh",
"suffix": ""
},
{
"first": "S",
"middle": [
"Ehsan",
"Yasrebi"
],
"last": "Naeini",
"suffix": ""
}
],
"year": 2014,
"venue": "International Journal of Machine Learning and Cybernetics",
"volume": "5",
"issue": "3",
"pages": "479--490",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mostafa Ghazizadeh Ahsaee, Mahmoud Naghibzadeh, and S Ehsan Yasrebi Naeini. 2014. Semantic simi- larity assessment of words using weighted wordnet. International Journal of Machine Learning and Cy- bernetics, 5(3):479-490.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Some properties of matrix partial orderings. Linear Algebra and its Applications",
"authors": [
{
"first": "Jerzy",
"middle": [
"K"
],
"last": "Baksalary",
"suffix": ""
},
{
"first": "Friedrich",
"middle": [],
"last": "Pukelsheim",
"suffix": ""
},
{
"first": "George",
"middle": [
"P H"
],
"last": "Styan",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "119",
"issue": "",
"pages": "57--85",
"other_ids": {
"DOI": [
"10.1016/0024-3795(89)90069-4"
]
},
"num": null,
"urls": [],
"raw_text": "Jerzy K. Baksalary, Friedrich Pukelsheim, and George P.H. Styan. 1989. Some properties of matrix partial orderings. Linear Algebra and its Applica- tions, 119:57-85.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Distributional sentence entailment using density matrices",
"authors": [
{
"first": "Esma",
"middle": [],
"last": "Balkir",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
},
{
"first": "Bob",
"middle": [],
"last": "Coecke",
"suffix": ""
}
],
"year": 2016,
"venue": "Topics in Theoretical Computer Science",
"volume": "",
"issue": "",
"pages": "1--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Esma Balkir, Mehrnoosh Sadrzadeh, and Bob Coecke. 2016. Distributional sentence entailment using den- sity matrices. In Topics in Theoretical Computer Science, pages 1-22, Cham. Springer International Publishing.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Graded hyponymy for compositional distributional semantics",
"authors": [
{
"first": "Dea",
"middle": [],
"last": "Bankova",
"suffix": ""
},
{
"first": "Bob",
"middle": [],
"last": "Coecke",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Marsden",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Language Modelling",
"volume": "6",
"issue": "2",
"pages": "225--260",
"other_ids": {
"DOI": [
"10.15398/jlm.v6i2.230"
]
},
"num": null,
"urls": [],
"raw_text": "Dea Bankova, Bob Coecke, Martha Lewis, and Dan Marsden. 2019. Graded hyponymy for composi- tional distributional semantics. Journal of Language Modelling, 6(2):225-260.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "not not bad\" is not \"bad\": A distributional account of negation",
"authors": [
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Workshop on Continuous Vector Space Models and their Compositionality",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phil Blunsom, Edward Grefenstette, and Karl Moritz Hermann. 2013. \"not not bad\" is not \"bad\": A distri- butional account of negation. In Proceedings of the 2013 Workshop on Continuous Vector Space Models and their Compositionality.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Interacting conceptual spaces I : Grammatical composition of concepts",
"authors": [
{
"first": "Joe",
"middle": [],
"last": "Bolt",
"suffix": ""
},
{
"first": "Bob",
"middle": [],
"last": "Coecke",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Genovese",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Marsden",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Piedeleu",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joe Bolt, Bob Coecke, Fabrizio Genovese, Martha Lewis, Dan Marsden, and Robin Piedeleu. 2017. In- teracting conceptual spaces I : Grammatical compo- sition of concepts. CoRR, abs/1703.08314.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Adding dense, weighted connections to wordnet",
"authors": [
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Osherson",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Schapire",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the third international WordNet conference",
"volume": "",
"issue": "",
"pages": "29--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jordan Boyd-Graber, Christiane Fellbaum, Daniel Os- herson, and Robert Schapire. 2006. Adding dense, weighted connections to wordnet. In Proceedings of the third international WordNet conference, pages 29-36. Citeseer.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The mathematics of text structure",
"authors": [
{
"first": "Bob",
"middle": [],
"last": "Coecke",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bob Coecke. 2020. The mathematics of text structure.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Konstantinos Meichanetzidis, and Alexis Toumi. 2020. Foundations for near-term quantum natural language processing",
"authors": [
{
"first": "Bob",
"middle": [],
"last": "Coecke",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [],
"last": "De Felice",
"suffix": ""
},
{
"first": "Konstantinos",
"middle": [],
"last": "Meichanetzidis",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Toumi",
"suffix": ""
}
],
"year": null,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bob Coecke, Giovanni de Felice, Konstantinos Me- ichanetzidis, and Alexis Toumi. 2020. Foundations for near-term quantum natural language processing. ArXiv, abs/2012.03755.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Meaning updating of density matrices",
"authors": [
{
"first": "Bob",
"middle": [],
"last": "Coecke",
"suffix": ""
},
{
"first": "Konstantinos",
"middle": [],
"last": "Meichanetzidis",
"suffix": ""
}
],
"year": 2020,
"venue": "FLAP",
"volume": "7",
"issue": "",
"pages": "745--770",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bob Coecke and Konstantinos Meichanetzidis. 2020. Meaning updating of density matrices. FLAP, 7:745-770.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Mathematical foundations for a compositional distributional model of meaning",
"authors": [
{
"first": "Bob",
"middle": [],
"last": "Coecke",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2010,
"venue": "Lambek Festschrift Linguistic Analysis",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bob Coecke, Mehrnoosh Sadrzadeh, and Stephen Clark. 2010. Mathematical foundations for a com- positional distributional model of meaning. Lambek Festschrift Linguistic Analysis, 36.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Cats climb entails mammals move: preserving hyponymy in compositional distributional semantics",
"authors": [
{
"first": "Gemma",
"middle": [],
"last": "De Las Cuevas",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Klinger",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Netzer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of SEMSPACE 2020",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gemma De las Cuevas, Andreas Klinger, Martha Lewis, and Tim Netzer. 2020. Cats climb entails mammals move: preserving hyponymy in compo- sitional distributional semantics. In Proceedings of SEMSPACE 2020.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The role of implicit and explicit negation in conditional reasoning bias",
"authors": [
{
"first": "Jonathan",
"middle": [
"St",
"B",
"T"
],
"last": "Evans",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Clibbens",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Rood",
"suffix": ""
}
],
"year": 1996,
"venue": "Journal of Memory and Language",
"volume": "35",
"issue": "3",
"pages": "392--409",
"other_ids": {
"DOI": [
"10.1006/jmla.1996.0022"
]
},
"num": null,
"urls": [],
"raw_text": "Jonathan St BT Evans, John Clibbens, and Benjamin Rood. 1996. The role of implicit and explicit nega- tion in conditional reasoning bias. Journal of Mem- ory and Language, 35(3):392-409.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Wordnet: An electronic lexical database",
"authors": [
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum. 1998. Wordnet: An electronic lexical database.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Experimental support for a categorical compositional distributional model of meaning",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1394--1404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011. Experimental support for a categorical composi- tional distributional model of meaning. In Proceed- ings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1394-1404, Edinburgh, Scotland, UK. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Automatic acquisition of hyponyms from large text corpora",
"authors": [
{
"first": "Marti",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 1992,
"venue": "The 15th international conference on computational linguistics",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marti A Hearst. 1992. Automatic acquisition of hy- ponyms from large text corpora. In Coling 1992 vol- ume 2: The 15th international conference on compu- tational linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "On the semantic properties of logical operators in english",
"authors": [
{
"first": "Laurence",
"middle": [],
"last": "Horn",
"suffix": ""
}
],
"year": 1972,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laurence Horn. 1972. On the semantic properties of logical operators in english. Unpublished Ph.D. dis- sertation.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "There is no logical negation here, but there are alternatives: Modeling conversational negation with distributional semantics",
"authors": [
{
"first": "Germ\u00e1n",
"middle": [],
"last": "Kruszewski",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Paperno",
"suffix": ""
},
{
"first": "Raffaella",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2016,
"venue": "Computational Linguistics",
"volume": "42",
"issue": "4",
"pages": "637--660",
"other_ids": {
"DOI": [
"10.1162/COLI_a_00262"
]
},
"num": null,
"urls": [],
"raw_text": "Germ\u00e1n Kruszewski, Denis Paperno, Raffaella Bernardi, and Marco Baroni. 2016. There is no logical negation here, but there are alternatives: Modeling conversational negation with distri- butional semantics. Computational Linguistics, 42(4):637-660.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Compositional hyponymy with positive operators",
"authors": [
{
"first": "Martha",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "638--647",
"other_ids": {
"DOI": [
"10.26615/978-954-452-056-4_075"
]
},
"num": null,
"urls": [],
"raw_text": "Martha Lewis. 2019. Compositional hyponymy with positive operators. In Proceedings of the Interna- tional Conference on Recent Advances in Natural Language Processing (RANLP 2019), pages 638- 647, Varna, Bulgaria. INCOMA Ltd.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Towards logical negation for compositional distributional semantics",
"authors": [
{
"first": "Martha",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 2020,
"venue": "IfCoLoG Journal of Logics and their Applications",
"volume": "7",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martha Lewis. 2020. Towards logical negation for com- positional distributional semantics. IfCoLoG Jour- nal of Logics and their Applications, 7(3).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Contrast classes and matching bias as explanations of the effects of negation on conditional reasoning",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Oaksford",
"suffix": ""
}
],
"year": 2002,
"venue": "Thinking & Reasoning",
"volume": "8",
"issue": "2",
"pages": "135--151",
"other_ids": {
"DOI": [
"10.1080/13546780143000170"
]
},
"num": null,
"urls": [],
"raw_text": "Mike Oaksford. 2002. Contrast classes and match- ing bias as explanations of the effects of negation on conditional reasoning. Thinking & Reasoning, 8(2):135-151.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Open system categorical quantum semantics in natural language processing",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Piedeleu",
"suffix": ""
},
{
"first": "Dimitri",
"middle": [],
"last": "Kartsaklis",
"suffix": ""
},
{
"first": "Bob",
"middle": [],
"last": "Coecke",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robin Piedeleu, Dimitri Kartsaklis, Bob Coecke, and Mehrnoosh Sadrzadeh. 2015. Open system categor- ical quantum semantics in natural language process- ing.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "How reaction times can elucidate matching effects and the processing of negation",
"authors": [
{
"first": "J\u00e9r\u00f4me",
"middle": [],
"last": "Prado",
"suffix": ""
},
{
"first": "Ira",
"middle": [
"A"
],
"last": "Noveck",
"suffix": ""
}
],
"year": 2006,
"venue": "Thinking and Reasoning",
"volume": "12",
"issue": "3",
"pages": "",
"other_ids": {
"DOI": [
"10.1080/13546780500371241"
]
},
"num": null,
"urls": [],
"raw_text": "J\u00e9r\u00f4me Prado and Ira A. Noveck. 2006. How reaction times can elucidate matching effects and the process- ing of negation. Thinking and Reasoning, 12(3).",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Hearst patterns revisited: Automatic hypernym detection from large text corpora",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Maximilian",
"middle": [],
"last": "Nickel",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1806.03191"
]
},
"num": null,
"urls": [],
"raw_text": "Stephen Roller, Douwe Kiela, and Maximilian Nickel. 2018. Hearst patterns revisited: Automatic hy- pernym detection from large text corpora. arXiv preprint arXiv:1806.03191.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Ordering quantum states and channels based on positive bayesian evidence",
"authors": [
{
"first": "John",
"middle": [],
"last": "Van De Wetering",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Mathematical Physics",
"volume": "59",
"issue": "10",
"pages": "",
"other_ids": {
"DOI": [
"10.1063/1.5023474"
]
},
"num": null,
"urls": [],
"raw_text": "John van de Wetering. 2018. Ordering quantum states and channels based on positive bayesian evidence. Journal of Mathematical Physics, 59(10):102201.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Word vectors and quantum logic: Experiments with negation and disjunction",
"authors": [
{
"first": "Dominic",
"middle": [],
"last": "Widdows",
"suffix": ""
},
{
"first": "Stanley",
"middle": [],
"last": "Peters",
"suffix": ""
}
],
"year": 2003,
"venue": "Mathematics of language",
"volume": "8",
"issue": "",
"pages": "141--154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominic Widdows and Stanley Peters. 2003. Word vec- tors and quantum logic: Experiments with negation and disjunction. Mathematics of language, 8(141- 154).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Graphical representation of meaning updating in DisCoCirc -read from top to bottom"
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Example of hyponymy structure as can be found in entailment hierarchies"
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Correlation of logical (left) and conversational negation (right) with mean human rating Figure 4: Correlation of various conversational negations with mean plausibility ratings of human participants. Correlations above 0.4 are highlighted in green. the human ratings, as shown in Figure 3 (right)."
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Correlation of results of different context functions with human rating"
}
}
}
}