{
"paper_id": "S14-2001",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:32:27.878829Z"
},
"title": "SemEval-2014 Task 1: Evaluation of Compositional Distributional Semantic Models on Full Sentences through Semantic Relatedness and Textual Entailment",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Marelli",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Trento",
"location": {
"country": "Italy"
}
},
"email": ""
},
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Trento",
"location": {
"country": "Italy"
}
},
"email": ""
},
{
"first": "Raffaella",
"middle": [],
"last": "Bernardi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Trento",
"location": {
"country": "Italy"
}
},
"email": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Menini",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Trento",
"location": {
"country": "Italy"
}
},
"email": "[email protected]"
},
{
"first": "Roberto",
"middle": [],
"last": "Zamparelli",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Trento",
"location": {
"country": "Italy"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents the task on the evaluation of Compositional Distributional Semantics Models on full sentences organized for the first time within SemEval-2014. Participation was open to systems based on any approach. Systems were presented with pairs of sentences and were evaluated on their ability to predict human judgments on (i) semantic relatedness and (ii) entailment. The task attracted 21 teams, most of which participated in both subtasks. We received 17 submissions in the relatedness subtask (for a total of 66 runs) and 18 in the entailment subtask (65 runs).",
"pdf_parse": {
"paper_id": "S14-2001",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents the task on the evaluation of Compositional Distributional Semantics Models on full sentences organized for the first time within SemEval-2014. Participation was open to systems based on any approach. Systems were presented with pairs of sentences and were evaluated on their ability to predict human judgments on (i) semantic relatedness and (ii) entailment. The task attracted 21 teams, most of which participated in both subtasks. We received 17 submissions in the relatedness subtask (for a total of 66 runs) and 18 in the entailment subtask (65 runs).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Distributional Semantic Models (DSMs) approximate the meaning of words with vectors summarizing their patterns of co-occurrence in corpora. Recently, several compositional extensions of DSMs (CDSMs) have been proposed, with the purpose of representing the meaning of phrases and sentences by composing the distributional representations of the words they contain (Baroni and Zamparelli, 2010; Grefenstette and Sadrzadeh, 2011; Mitchell and Lapata, 2010; Socher et al., 2012) . Despite the ever increasing interest in the field, the development of adequate benchmarks for CDSMs, especially at the sentence level, is still lagging. Existing data sets, such as those introduced by Mitchell and Lapata (2008) and Grefenstette and Sadrzadeh (2011) , are limited to a few hundred instances of very short sentences with a fixed structure. In the last ten years, several large This work is licensed under a Creative Commons Attribution 4.0 International Licence. Page numbers and proceedings footer are added by the organisers. Licence details: http://creativecommons.org/licenses/by/4.0/ data sets have been developed for various computational semantics tasks, such as Semantic Text Similarity (STS) (Agirre et al., 2012) or Recognizing Textual Entailment (RTE) (Dagan et al., 2006) . Working with such data sets, however, requires dealing with issues, such as identifying multiword expressions, recognizing named entities or accessing encyclopedic knowledge, which have little to do with compositionality per se. CDSMs should instead be evaluated on data that are challenging for reasons due to semantic compositionality (e.g. context-cued synonymy resolution and other lexical variation phenomena, active/passive and other syntactic alternations, impact of negation at various levels, operator scope, and other effects linked to the functional lexicon). These issues do not occur frequently in, e.g., the STS and RTE data sets.",
"cite_spans": [
{
"start": 363,
"end": 392,
"text": "(Baroni and Zamparelli, 2010;",
"ref_id": "BIBREF2"
},
{
"start": 393,
"end": 426,
"text": "Grefenstette and Sadrzadeh, 2011;",
"ref_id": "BIBREF10"
},
{
"start": 427,
"end": 453,
"text": "Mitchell and Lapata, 2010;",
"ref_id": "BIBREF18"
},
{
"start": 454,
"end": 474,
"text": "Socher et al., 2012)",
"ref_id": "BIBREF20"
},
{
"start": 678,
"end": 704,
"text": "Mitchell and Lapata (2008)",
"ref_id": "BIBREF17"
},
{
"start": 709,
"end": 742,
"text": "Grefenstette and Sadrzadeh (2011)",
"ref_id": "BIBREF10"
},
{
"start": 1193,
"end": 1214,
"text": "(Agirre et al., 2012)",
"ref_id": "BIBREF0"
},
{
"start": 1255,
"end": 1275,
"text": "(Dagan et al., 2006)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "With these considerations in mind, we developed SICK (Sentences Involving Compositional Knowledge), a data set aimed at filling the void, including a large number of sentence pairs that are rich in the lexical, syntactic and semantic phenomena that CDSMs are expected to account for, but do not require dealing with other aspects of existing sentential data sets that are not within the scope of compositional distributional semantics. Moreover, we distinguished between generic semantic knowledge about general concept categories (such as knowledge that a couple is formed by a bride and a groom) and encyclopedic knowledge about specific instances of concepts (e.g., knowing the fact that the current president of the US is Barack Obama). The SICK data set contains many examples of the former, but none of the latter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Task involved two subtasks. (i) Relatedness: predicting the degree of semantic similarity between two sentences, and (ii) Entailment: detecting the entailment relation holding between them (see below for the exact definition). Sentence relatedness scores provide a direct way to evaluate CDSMs, insofar as their outputs are able to quantify the degree of semantic similarity between sentences. On the other hand, starting from the assumption that understanding a sentence means knowing when it is true, being able to verify whether an entailment is valid is a crucial challenge for semantic systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Task",
"sec_num": "2"
},
{
"text": "In the semantic relatedness subtask, given two sentences, systems were required to produce a relatedness score (on a continuous scale) indicating the extent to which the sentences were expressing a related meaning. Table 1 shows examples of sentence pairs with different degrees of semantic relatedness; gold relatedness scores are expressed on a 5-point rating scale.",
"cite_spans": [],
"ref_spans": [
{
"start": 215,
"end": 222,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Task",
"sec_num": "2"
},
{
"text": "In the entailment subtask, given two sentences A and B, systems had to determine whether the meaning of B was entailed by A. In particular, systems were required to assign to each pair either the ENTAILMENT label (when A entails B, viz., B cannot be false when A is true), the CONTRA-DICTION label (when A contradicted B, viz. B is false whenever A is true), or the NEUTRAL label (when the truth of B could not be determined on the basis of A). Table 2 shows examples of sentence pairs holding different entailment relations.",
"cite_spans": [],
"ref_spans": [
{
"start": 445,
"end": 452,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "The Task",
"sec_num": "2"
},
{
"text": "Participants were invited to submit up to five system runs for one or both subtasks. Developers of CDSMs were especially encouraged to participate, but developers of other systems that could tackle sentence relatedness or entailment tasks were also welcome. Besides being of intrinsic interest, the latter systems' performance will serve to situate CDSM performance within the broader landscape of computational semantics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Task",
"sec_num": "2"
},
{
"text": "The SICK data set, consisting of about 10,000 English sentence pairs annotated for relatedness in meaning and entailment, was used to evaluate the systems participating in the task. The data set creation methodology is outlined in the following subsections, while all the details about data generation and annotation, quality control, and interannotator agreement can be found in Marelli et al. (2014) .",
"cite_spans": [
{
"start": 380,
"end": 401,
"text": "Marelli et al. (2014)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The SICK Data Set",
"sec_num": "3"
},
{
"text": "SICK was built starting from two existing data sets: the 8K ImageFlickr data set 1 and the SemEval-2012 STS MSR-Video Descriptions data set. 2 The 8K ImageFlickr dataset is a dataset of images, where each image is associated with five descriptions. To derive SICK sentence pairs we randomly chose 750 images and we sampled two descriptions from each of them. The SemEval-2012 STS MSR-Video Descriptions data set is a collection of sentence pairs sampled from the short video snippets which compose the Microsoft Research Video Description Corpus. A subset of 750 sentence pairs were randomly chosen from this data set to be used in SICK.",
"cite_spans": [
{
"start": 141,
"end": 142,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set Creation",
"sec_num": "3.1"
},
{
"text": "In order to generate SICK data from the 1,500 sentence pairs taken from the source data sets, a 3step process was applied to each sentence composing the pair, namely (i) normalization, (ii) expansion and (iii) pairing. Table 3 presents an example of the output of each step in the process.",
"cite_spans": [],
"ref_spans": [
{
"start": 219,
"end": 226,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Set Creation",
"sec_num": "3.1"
},
{
"text": "The normalization step was carried out on the original sentences (S0) to exclude or simplify instances that contained lexical, syntactic or semantic phenomena (e.g., named entities, dates, numbers, multiword expressions) that CDSMs are currently not expected to account for.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set Creation",
"sec_num": "3.1"
},
{
"text": "The expansion step was applied to each of the normalized sentences (S1) in order to create up to three new sentences with specific characteristics suitable to CDSM evaluation. In this step syntactic and lexical transformations with predictable effects were applied to each normalized sentence, in order to obtain (i) a sentence with a similar meaning (S2), (ii) a sentence with a logically contradictory or at least highly contrasting meaning (S3), and (iii) a sentence that contains most of the same lexical items, but has a different meaning (S4) (this last step was carried out only where it could yield a meaningful sentence; as a result, not all normalized sentences have an (S4) expansion).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set Creation",
"sec_num": "3.1"
},
{
"text": "Finally, in the pairing step each normalized sentence in the pair was combined with all the sentences resulting from the expansion phase and with the other normalized sentence in the pair. Considering the example in Table 3 , S1a and S1b were paired. Then, S1a and S1b were each combined with S2a, S2b, S3a, S3b, S4a, and S4b, leading to a total of 13 different sentence pairs. Furthermore, a number of pairs composed of completely unrelated sentences were added to the data set by randomly taking two sentences from two different pairs.",
"cite_spans": [],
"ref_spans": [
{
"start": 216,
"end": 223,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Set Creation",
"sec_num": "3.1"
},
{
"text": "Table 1 : Examples of sentence pairs with their gold relatedness scores (on a 5-point rating scale). Relatedness score Example 1.6 A: \"A man is jumping into an empty pool\" B: \"There is no biker jumping in the air\" 2.9 A: \"Two children are lying in the snow and are making snow angels\" B: \"Two angels are making snow on the lying children\" 3.6 A: \"The young boys are playing outdoors and the man is smiling nearby\" B: \"There is no boy playing outdoors and there is no man smiling\" 4.9 A: \"A person in a black jacket is doing tricks on a motorbike\" B: \"A man in a black jacket is doing tricks on a motorbike\"",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Set Creation",
"sec_num": "3.1"
},
{
"text": "ENTAILMENT A: \"Two teams are competing in a football match\" B: \"Two groups of people are playing football\" CONTRADICTION A: \"The brown horse is near a red barrel at the rodeo\" B: \"The brown horse is far from a red barrel at the rodeo\" NEUTRAL A: \"A man in a black jacket is doing tricks on a motorbike\" B: \"A person is riding the bicycle on one wheel\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entailment label Example",
"sec_num": null
},
{
"text": "The result is a set of about 10,000 new sentence pairs, in which each sentence is contrasted with either a (near) paraphrase, a contradictory or strongly contrasting statement, another sentence with very high lexical overlap but different meaning, or a completely unrelated sentence. The rationale behind this approach was that of building a data set which encouraged the use of a compositional semantics step in understanding when two sentences have close meanings or entail each other, hindering methods based on individual lexical items, on the syntactic complexity of the two sentences or on pure world knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entailment label Example",
"sec_num": null
},
{
"text": "Each pair in the SICK dataset was annotated to mark (i) the degree to which the two sentence meanings are related (on a 5-point scale), and (ii) whether one entails or contradicts the other (con-sidering both directions). The ratings were collected through a large crowdsourcing study, where each pair was evaluated by 10 different subjects, and the order of presentation of the sentences was counterbalanced (i.e., 5 judgments were collected for each presentation order). Swapping the order of the sentences within each pair served a twofold purpose: (i) evaluating the entailment relation in both directions and (ii) controlling possible bias due to priming effects in the relatedness task. Once all the annotations were collected, the relatedness gold score was computed for each pair as the average of the ten ratings assigned by participants, whereas a majority vote scheme was adopted for the entailment gold labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relatedness and Entailment Annotation",
"sec_num": "3.2"
},
{
"text": "For the purpose of the task, the data set was randomly split into training and test set (50% and 50%), ensuring that each relatedness range and entailment category was equally represented in both sets. Table 4 shows the distribution of sentence pairs considering the combination of relatedness ranges and entailment labels. The \"total\" column indicates the total number of pairs in each range of relatedness, while the \"total\" row contains the total number of pairs in each entailment class.",
"cite_spans": [],
"ref_spans": [
{
"start": 202,
"end": 209,
"text": "Table 4",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Data Set Statistics",
"sec_num": "3.3"
},
{
"text": "Table 3 : Data set creation process. Original pair S0a: A sea turtle is hunting for fish S0b: The turtle followed the fish Normalized pair S1a: A sea turtle is hunting for fish S1b: The turtle is following the fish Expanded pairs S2a: A sea turtle is hunting for food S2b: The turtle is following the red fish S3a: A sea turtle is not hunting for fish S3b: The turtle isn't following the fish S4a: A fish is hunting for a turtle in the sea S4b: The fish is following the turtle",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Set Statistics",
"sec_num": "3.3"
},
{
"text": "Both subtasks were evaluated using standard metrics. In particular, the results on entailment were evaluated using accuracy, whereas the outputs on relatedness were evaluated using Pearson correlation, Spearman correlation, and Mean Squared Error (MSE). Pearson correlation was chosen as the official measure to rank the participating systems. Table 5 presents the performance of 4 baselines. The Majority baseline always assigns the most common label in the training data (NEUTRAL), whereas the Probability baseline assigns labels randomly according to their relative frequency in the training set. The Overlap baseline measures word overlap, again with parameters (number of stop words and EN-TAILMENT/NEUTRAL/CONTRADICTION thresholds) estimated on the training part of the data.",
"cite_spans": [],
"ref_spans": [
{
"start": 344,
"end": 351,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Evaluation Metrics and Baselines",
"sec_num": "4"
},
{
"text": "Relatedness Overall, 21 teams participated in the task. Participants were allowed to submit up to 5 runs for each subtask and had to choose the primary run to be included in the comparative evaluation. We received 17 submissions to the relatedness subtask (for a total of 66 runs) and 18 for the entailment subtask (65 runs). We asked participants to pre-specify a primary run to encourage commitment to a theoretically-motivated approach, rather than post-hoc performance-based assessment. Interestingly, some participants used the non-primary runs to explore the performance one could reach by exploiting weaknesses in the data that are not likely to hold in future tasks of the same kind (for instance, run 3 submitted by The Meaning Factory exploited sentence ID ordering information, but it was not presented as a primary run). Participants could also use non-primary runs to test smart baselines. In the relatedness subtask six non-primary runs slightly outperformed the official winning primary entry, 3 while in the entailment task all ECNU's runs but run 4 were better than ECNU's primary run. Interestingly, the differences between the ECNU's runs were due to the learning methods used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": null
},
{
"text": "We present the results achieved by primary runs against the Entailment and Relatedness subtasks in Table 6 and Table 7 , respectively. 4 We witnessed a very close finish in both subtasks, with 4 more systems within 3 percentage points of the winner in both cases. 4 of these 5 top systems were the same across the two subtasks. Most systems performed well above the best baselines from Table 5 .",
"cite_spans": [
{
"start": 135,
"end": 136,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 99,
"end": 118,
"text": "Table 6 and Table 7",
"ref_id": "TABREF6"
},
{
"start": 386,
"end": 394,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Baseline",
"sec_num": null
},
{
"text": "The overall performance pattern suggests that, owing perhaps to the more controlled nature of the sentences, as well as to the purely linguistic nature of the challenges it presents, SICK entailment is \"easier\" than RTE. Considering the first five RTE challenges (Bentivogli et al., 2009) , the median values ranged from 56.20% to 61.75%, whereas the average values ranged from 56.45% to 61.97%. The entailment scores obtained on the SICK data set are considerably higher, being 77.06% for the median system and 75.36% for the average system. On the other hand, the relatedness task is more challenging than the one run on MSRvid (one of our data sources) at STS 2012, where the top Pearson correlation was 0.88 (Agirre et al., 2012) .",
"cite_spans": [
{
"start": 263,
"end": 288,
"text": "(Bentivogli et al., 2009)",
"ref_id": "BIBREF4"
},
{
"start": 712,
"end": 733,
"text": "(Agirre et al., 2012)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": null
},
{
"text": "A summary of the approaches used by the systems to address the task is presented in Table 8 . In the table, systems in bold are those for which the authors submitted a paper (Ferrone and Zanzotto, 2014; Bjerva et al., 2014; Beltagy et al., 2014; Lai and Hockenmaier, 2014; Alves et al., 2014; Le\u00f3n et al., 2014; Bestgen, 2014; Zhao et al., 2014; Vo et al., 2014; Bi\u00e7ici and Way, 2014; Lien and Kouylekov, 2014; Jimenez et al., 2014; Proisl and Evert, 2014; Gupta et al., 2014) . For the others, we used the brief description sent with the system's results, double-checking the information with the authors. In the table, \"E\" and \"R\" refer to the entailment and relatedness task respectively, and \"B\" to both.",
"cite_spans": [
{
"start": 174,
"end": 202,
"text": "(Ferrone and Zanzotto, 2014;",
"ref_id": "BIBREF9"
},
{
"start": 203,
"end": 223,
"text": "Bjerva et al., 2014;",
"ref_id": "BIBREF7"
},
{
"start": 224,
"end": 245,
"text": "Beltagy et al., 2014;",
"ref_id": "BIBREF3"
},
{
"start": 246,
"end": 272,
"text": "Lai and Hockenmaier, 2014;",
"ref_id": "BIBREF13"
},
{
"start": 273,
"end": 292,
"text": "Alves et al., 2014;",
"ref_id": "BIBREF1"
},
{
"start": 293,
"end": 311,
"text": "Le\u00f3n et al., 2014;",
"ref_id": "BIBREF14"
},
{
"start": 312,
"end": 326,
"text": "Bestgen, 2014;",
"ref_id": "BIBREF5"
},
{
"start": 327,
"end": 345,
"text": "Zhao et al., 2014;",
"ref_id": "BIBREF22"
},
{
"start": 346,
"end": 362,
"text": "Vo et al., 2014;",
"ref_id": "BIBREF21"
},
{
"start": 363,
"end": 384,
"text": "Bi\u00e7ici and Way, 2014;",
"ref_id": "BIBREF6"
},
{
"start": 385,
"end": 410,
"text": "Lien and Kouylekov, 2014;",
"ref_id": "BIBREF15"
},
{
"start": 411,
"end": 432,
"text": "Jimenez et al., 2014;",
"ref_id": "BIBREF12"
},
{
"start": 433,
"end": 456,
"text": "Proisl and Evert, 2014;",
"ref_id": "BIBREF19"
},
{
"start": 457,
"end": 476,
"text": "Gupta et al., 2014)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 84,
"end": 91,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Approaches",
"sec_num": "6"
},
{
"text": "Almost all systems combine several kinds of features. To highlight the role played by composition, we draw a distinction between compositional and non-compositional features, and divide the former into 'fully compositional' (sys- purely non-compositional system (UNAL-NLP) which reaches the 4th position (0.80 r UNAL-NLP vs. 0.82 r obtained by the best system). UNAL-NLP however exploits an ad-hoc \"negation\" feature discussed below. In the entailment task, the best noncompositional model (again UNAL-NLP) reaches the 3rd position, within close reach of the best system (83% UNAL-NLP vs. 84.5% obtained by the best system). Again, purely compositional models have lower performance. haLF CDSM reaches 69.42% accuracy, Illinois-LH Word Overlap combined with a compositional feature reaches 71.8%. The fine-grained analysis reported by Illinois-LH (Lai and Hockenmaier, 2014) shows that a full compositional system (based on point-wise multiplication) fails to capture contradiction. It is better than partial phrase-based compositional models in recognizing entailment pairs, but worse than them on recognizing neutral pairs.",
"cite_spans": [
{
"start": 847,
"end": 874,
"text": "(Lai and Hockenmaier, 2014)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approaches",
"sec_num": "6"
},
{
"text": "Given our more general interest in the distributional approaches, in Table 8 we also classify the different DSMs used as 'Vector Space Mod-els', 'Topic Models' and 'Neural Language Models'. Due to the impact shown by learning methods (see ECNU's results), we also report the different learning approaches used.",
"cite_spans": [],
"ref_spans": [
{
"start": 69,
"end": 76,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Approaches",
"sec_num": "6"
},
{
"text": "Several participating systems deliberately exploit ad-hoc features that, while not helping a true understanding of sentence meaning, exploit some systematic characteristics of SICK that should be controlled for in future releases of the data set. In particular, the Textual Entailment subtask has been shown to rely too much on negative words and antonyms. The Illinois-LH team reports that, just by checking the presence of negative words (the Negation Feature in the table), one can detect 86.4% of the contradiction pairs, and by combining Word Overlap and antonyms one can detect 83.6% of neutral pairs and 82.6% of entailment pairs. This approach, however, is obviously very brittle (it would not have been successful, for instance, if negation had been optionally combined with word-rearranging in the creation of S4 sentences, see Section 3.1 above).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approaches",
"sec_num": "6"
},
{
"text": "Finally, Table 8 reports about the use of external resources in the task. One of the reasons we created SICK was to have a compositional semantics benchmark that would not require too many external tools and resources (e.g., named-entity recognizers, gazetteers, ontologies). By looking at what the participants chose to use, we think we succeeded, as only standard NLP pre-processing tools (tokenizers, PoS taggers and parsers) and relatively few knowledge resources (mostly, Word-Net and paraphrase corpora) were used.",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Approaches",
"sec_num": "6"
},
{
"text": "We presented the results of the first task on the evaluation of compositional distributional semantic models and other semantic systems on full sentences, organized within SemEval-2014. Two subtasks were offered: (i) predicting the degree of relatedness between two sentences, and (ii) detecting the entailment relation holding between them. The task has raised noticeable attention in the community: 17 and 18 submissions for the relatedness and entailment subtasks, respectively, for a total of 21 participating teams. Participation was not limited to compositional models but the majority of systems (13/21) used composition in at least one of the subtasks. Moreover, the top-ranking systems in both tasks use compositional features. However, it must be noted that all systems also ex- ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "http://nlp.cs.illinois.edu/HockenmaierGroup/data.html 2 http://www.cs.york.ac.uk/semeval-2012/task6/index.php?id=data",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "They were: The Meaning Factory's run3 (Pearson 0.84170) ECNU's runs2 (0.83893) run5 (0.83500) and Stan-fordNLP's run4 (0.83462) and run2 (0.83103).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "ITTK's primary run could not be evaluated due to technical problems with the submission. The best ITTK's nonprimary run scored 78,2% accuracy in the entailment task and 0.76 r in the relatedness task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the creators of the ImageFlickr, MSR-Video, and SemEval-2012 STS data sets for granting us permission to use their data for the task. The University of Trento authors were supported by ERC 2011 Starting Independent Research Grant n. 283554 (COMPOSES).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "The Meaning Factory Table 8 : Summary of the main characteristics of the participating systems on R(elatedness), E(ntailment) or B(oth) ploit non-compositional features and most of them use external resources, especially WordNet. Almost all the participating systems outperformed the proposed baselines in both tasks. Further analyses carried out by some participants in the task show that purely compositional approaches reach accuracy above 70% in entailment and 0.70 r for relatedness. These scores are comparable with the average results obtained in the task.",
"cite_spans": [],
"ref_spans": [
{
"start": 20,
"end": 27,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Semeval-2012 task 6: A pilot on semantic textual similarity",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Sixth International Workshop on Semantic Evaluation",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 task 6: A pi- lot on semantic textual similarity. In Proceedings of the Sixth International Workshop on Semantic Eval- uation (SemEval 2012), volume 2.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "ASAP: Automatica semantic alignment for phrases",
"authors": [
{
"first": "Ana",
"middle": [
"O"
],
"last": "Alves",
"suffix": ""
},
{
"first": "Adirana",
"middle": [],
"last": "Ferrugento",
"suffix": ""
},
{
"first": "Mariana",
"middle": [],
"last": "Loren\u00e7o",
"suffix": ""
},
{
"first": "Filipe",
"middle": [],
"last": "Rodrigues",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of Se-mEval 2014: International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ana O. Alves, Adirana Ferrugento, Mariana Loren\u00e7o, and Filipe Rodrigues. 2014. ASAP: Automatica se- mantic alignment for phrases. In Proceedings of Se- mEval 2014: International Workshop on Semantic Evaluation.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1183--1193",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni and Roberto Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In Proceedings of EMNLP, pages 1183-1193, Boston, MA.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "UTexas: Natural language semantics using distributional semantics and probablisitc logic",
"authors": [
{
"first": "Islam",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Gemma",
"middle": [],
"last": "Boleda",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SemEval 2014: International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Islam Beltagy, Stephen Roller, Gemma Boleda, Katrin Erk, and Raymond J. Mooney. 2014. UTexas: Natural language semantics using distributional semantics and probabilistic logic. In Proceedings of SemEval 2014: International Workshop on Semantic Evaluation.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The fifth PASCAL recognizing textual entailment challenge",
"authors": [
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Hoa",
"middle": [
"T"
],
"last": "Dang",
"suffix": ""
},
{
"first": "Danilo",
"middle": [],
"last": "Giampiccolo",
"suffix": ""
},
{
"first": "Bernardo",
"middle": [],
"last": "Magnini",
"suffix": ""
}
],
"year": 2009,
"venue": "The Text Analysis Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luisa Bentivogli, Ido Dagan, Hoa T. Dang, Danilo Giampiccolo, and Bernardo Magnini. 2009. The fifth PASCAL recognizing textual entailment challenge. In The Text Analysis Conference (TAC 2009).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "CECL: a new baseline and a noncompositional approach for the Sick benchmark",
"authors": [
{
"first": "Yves",
"middle": [],
"last": "Bestgen",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SemEval 2014: International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yves Bestgen. 2014. CECL: a new baseline and a non-compositional approach for the Sick benchmark. In Proceedings of SemEval 2014: International Workshop on Semantic Evaluation.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "RTM-DCU: Referential translation machines for semantic similarity",
"authors": [
{
"first": "Ergun",
"middle": [],
"last": "Bi\u00e7ici",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SemEval 2014: International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ergun Bi\u00e7ici and Andy Way. 2014. RTM-DCU: Referential translation machines for semantic similarity. In Proceedings of SemEval 2014: International Workshop on Semantic Evaluation.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The Meaning Factory: Formal Semantics for Recognizing Textual Entailment and Determining Semantic Similarity",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Bjerva",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Van Der Goot",
"suffix": ""
},
{
"first": "Malvina",
"middle": [],
"last": "Nissim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SemEval 2014: International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Bjerva, Johan Bos, Rob van der Goot, and Malvina Nissim. 2014. The Meaning Factory: Formal Semantics for Recognizing Textual Entailment and Determining Semantic Similarity. In Proceedings of SemEval 2014: International Workshop on Semantic Evaluation.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The PASCAL recognising textual entailment challenge",
"authors": [
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Glickman",
"suffix": ""
},
{
"first": "Bernardo",
"middle": [],
"last": "Magnini",
"suffix": ""
}
],
"year": 2006,
"venue": "Machine learning challenges. Evaluating predictive uncertainty, visual object classification, and recognising textual entailment",
"volume": "",
"issue": "",
"pages": "177--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge. In Machine learning challenges. Evaluating predictive uncertainty, visual object classification, and recognising textual entailment, pages 177-190. Springer.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "haLF: comparing a pure CDSM approach and a standard ML system for RTE",
"authors": [
{
"first": "Lorenzo",
"middle": [],
"last": "Ferrone",
"suffix": ""
},
{
"first": "Fabio",
"middle": [
"Massimo"
],
"last": "Zanzotto",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SemEval 2014: International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lorenzo Ferrone and Fabio Massimo Zanzotto. 2014. haLF: comparing a pure CDSM approach and a standard ML system for RTE. In Proceedings of SemEval 2014: International Workshop on Semantic Evaluation.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Experimental support for a categorical compositional distributional model of meaning",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1394--1404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011. Experimental support for a categorical compositional distributional model of meaning. In Proceedings of EMNLP, pages 1394-1404, Edinburgh, UK.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "UoW: NLP techniques developed at the University of Wolverhampton for Semantic Similarity and Textual Entailment",
"authors": [
{
"first": "Rohit",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Ismail",
"middle": [
"El"
],
"last": "Maarouf",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Bechara",
"suffix": ""
},
{
"first": "Constantin",
"middle": [],
"last": "Or\u0103san",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SemEval 2014: International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rohit Gupta, Ismail El Maarouf, Hannah Bechara, and Constantin Or\u0103san. 2014. UoW: NLP techniques developed at the University of Wolverhampton for Semantic Similarity and Textual Entailment. In Proceedings of SemEval 2014: International Workshop on Semantic Evaluation.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "UNAL-NLP: Combining soft cardinality features for semantic textual similarity, relatedness and entailment",
"authors": [
{
"first": "Sergio",
"middle": [],
"last": "Jimenez",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Duenas",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Baquero",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Gelbukh",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SemEval 2014: International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergio Jimenez, George Duenas, Julia Baquero, and Alexander Gelbukh. 2014. UNAL-NLP: Combining soft cardinality features for semantic textual similarity, relatedness and entailment. In Proceedings of SemEval 2014: International Workshop on Semantic Evaluation.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Illinois-LH: A denotational and distributional approach to semantics",
"authors": [
{
"first": "Alice",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SemEval 2014: International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alice Lai and Julia Hockenmaier. 2014. Illinois-LH: A denotational and distributional approach to semantics. In Proceedings of SemEval 2014: International Workshop on Semantic Evaluation.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "BUAP: evaluating compositional distributional semantic models on full sentences through semantic relatedness and textual entailment",
"authors": [
{
"first": "Sa\u00fal",
"middle": [],
"last": "Le\u00f3n",
"suffix": ""
},
{
"first": "Darnes",
"middle": [],
"last": "Vilarino",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Pinto",
"suffix": ""
},
{
"first": "Mireya",
"middle": [],
"last": "Tovar",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Beltr\u00e1n",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SemEval 2014: International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sa\u00fal Le\u00f3n, Darnes Vilarino, David Pinto, Mireya Tovar, and Beatrice Beltr\u00e1n. 2014. BUAP: evaluating compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In Proceedings of SemEval 2014: International Workshop on Semantic Evaluation.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "UIO-Lien: Entailment recognition using minimal recursion semantics",
"authors": [
{
"first": "Elisabeth",
"middle": [],
"last": "Lien",
"suffix": ""
},
{
"first": "Milen",
"middle": [],
"last": "Kouylekov",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SemEval 2014: International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elisabeth Lien and Milen Kouylekov. 2014. UIO-Lien: Entailment recognition using minimal recursion semantics. In Proceedings of SemEval 2014: International Workshop on Semantic Evaluation.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A SICK cure for the evaluation of compositional distributional semantic models",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Marelli",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Menini",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Raffaella",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In Proceedings of LREC, Reykjavik.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Vector-based models of semantic composition",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "236--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of ACL, pages 236-244, Columbus, OH.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Composition in distributional models of semantics",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2010,
"venue": "Cognitive Science",
"volume": "34",
"issue": "8",
"pages": "1388--1429",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive Sci- ence, 34(8):1388-1429.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "SemantiKLUE: Robust semantic similarity at multiple levels using maximum weight matching",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Proisl",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Evert",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SemEval 2014: International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Proisl and Stefan Evert. 2014. SemantiKLUE: Robust semantic similarity at multiple levels using maximum weight matching. In Proceedings of SemEval 2014: International Workshop on Semantic Evaluation.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Semantic compositionality through recursive matrix-vector spaces",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Brody",
"middle": [],
"last": "Huval",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1201--1211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Brody Huval, Christopher Manning, and Andrew Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of EMNLP, pages 1201-1211, Jeju Island, Korea.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "FBK-TR: SVM for Semantic Relatedness and Corpus Patterns for RTE",
"authors": [
{
"first": "An",
"middle": [
"N",
"P"
],
"last": "Vo",
"suffix": ""
},
{
"first": "Octavian",
"middle": [],
"last": "Popescu",
"suffix": ""
},
{
"first": "Tommaso",
"middle": [],
"last": "Caselli",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SemEval 2014: International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "An N. P. Vo, Octavian Popescu, and Tommaso Caselli. 2014. FBK-TR: SVM for Semantic Relatedness and Corpus Patterns for RTE. In Proceedings of SemEval 2014: International Workshop on Semantic Evaluation.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "ECNU: One Stone Two Birds: Ensemble of Heterogenous Measures for Semantic Relatedness and Textual Entailment",
"authors": [
{
"first": "Jiang",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tian Tian",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Man",
"middle": [],
"last": "Lan",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SemEval 2014: International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiang Zhao, Tian Tian Zhu, and Man Lan. 2014. ECNU: One Stone Two Birds: Ensemble of Heterogenous Measures for Semantic Relatedness and Textual Entailment. In Proceedings of SemEval 2014: International Workshop on Semantic Evaluation.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"content": "<table/>",
"text": "Examples of sentence pairs with their gold entailment labels.",
"type_str": "table",
"num": null
},
"TABREF2": {
"html": null,
"content": "<table/>",
"text": "Distribution of sentence pairs across the Training and Test Sets.",
"type_str": "table",
"num": null
},
"TABREF4": {
"html": null,
"content": "<table/>",
"text": "",
"type_str": "table",
"num": null
},
"TABREF6": {
"html": null,
"content": "<table/>",
"text": "Primary run results for the entailment subtask. The table also shows whether a system exploits composition information at either the phrase (P) or sentence (S) level. Body text recovered from the table region: (systems that compositionally computed the meaning of the full sentences, though not necessarily by assigning meanings to intermediate syntactic constituents) and 'partially compositional' (systems that stop the composition at the level of phrases). As the table shows, thirteen systems used composition in at least one of the tasks; ten used composition for full sentences and six for phrases only. The best systems are among these thirteen systems. Let us focus on such compositional methods. Concerning the relatedness task, the fine-grained analyses reported for several systems (Illinois-LH, The Meaning Factory and ECNU) show that purely compositional systems currently reach performance above 0.7 r. In particular, ECNU's compositional feature gives 0.75 r, The Meaning Factory's logic-based composition model 0.73 r, and Illinois-LH compositional features combined with Word Overlap 0.75 r. While competitive, these scores are lower than the one of the best",
"type_str": "table",
"num": null
},
"TABREF7": {
"html": null,
"content": "<table/>",
"text": "Primary run results for the relatedness subtask (r for Pearson and \u03c1 for Spearman correlation). The table also shows whether a system exploits composition information at either the phrase (P) or sentence (S) level.",
"type_str": "table",
"num": null
}
}
}
}