---
language:
- it
pretty_name: Corpus of Italian Relative Clauses for Entailment (CIRCE)
size_categories:
- n<1K
---

# Dataset Card for CIRCE Challenge @ CALAMITA 2024

<!-- Provide a quick summary of the dataset. -->

This dataset is one of the benchmarks proposed for the CALAMITA 2024 Challenge, co-located with the Tenth Italian Conference on Computational Linguistics (CLiC-it 2024).

The dataset evaluates Language Models' understanding of a specific linguistic structure in Italian: object-extracted relative clauses (ORCs).
The assessment is a yes/no entailment task in which the model is given two sentences: the first contains the target structure, and the second is a simple declarative sentence whose meaning may or may not be logically inferred from the first.
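
As a rough illustration, the task can be framed as a prompted binary classification problem. The sketch below is a minimal example, assuming a simple Italian prompt template and a naive answer normalization; it is not the official CALAMITA evaluation protocol, and the example sentence pair is invented for illustration.

```python
# Minimal sketch of the yes/no entailment task as a prompted classification
# problem. The prompt wording and the answer normalization are illustrative
# assumptions, not the official CALAMITA evaluation protocol.

def build_prompt(sentence1: str, sentence2: str) -> str:
    """Build an Italian yes/no entailment prompt from a sentence pair."""
    return (
        f"Frase 1: {sentence1}\n"
        f"Frase 2: {sentence2}\n"
        'La Frase 2 è implicata dalla Frase 1? Rispondi solo "sì" o "no".'
    )

def normalize_answer(raw: str) -> str:
    """Map a free-form model answer onto the gold label space {"sì", "no"}."""
    return "sì" if raw.strip().lower().startswith("s") else "no"

# Invented illustration, not an item from the dataset:
prompt = build_prompt(
    "Il cane che il gatto rincorre è veloce.",
    "Il gatto rincorre il cane.",
)
print(prompt)
```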


## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

The dataset contains 566 sentence pairs, in which the first sentence includes the ORC and the second is a declarative sentence that may or may not be entailed by the first.
The ORCs have been sourced primarily from the linguistic and psycholinguistic literature, with the aim of exploring how grammatical and semantic features affect the processing difficulty humans face when reading ORCs.

A smaller portion of the dataset includes ORCs from existing NLP benchmarks, which are specifically designed to test language models' capabilities in recognizing grammaticality.



## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

The dataset is provided as a tab-separated text file with the following fields for each entry (a short loading sketch follows the list):
- UniqueID: a numerical identifier for the entry;
- ID-mapping: an identifier used to cross-reference entries according to their Condition;
- Source: the original reference from which the sentence was taken;
- Condition: the type of ORC, based on the features of the two NPs involved;
- Sentence1: the first sentence, which contains the ORC;
- Sentence2: the second sentence, which may or may not be entailed by Sentence1;
- NP Target: whether Sentence2 targets the head of the relative clause (NP1) or the subject of the embedded clause (NP2);
- Gold: the gold label assigned to the pair ("sì" if Sentence1 entails Sentence2, "no" otherwise).
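
A minimal loading sketch with pandas, assuming the file is named `circe.tsv` and that the column headers match the field names above (the actual file name and header spelling may differ):

```python
import pandas as pd

# Hypothetical file name; adjust to the TSV actually shipped with the dataset.
df = pd.read_csv("circe.tsv", sep="\t")

# Inspect the fields described above (header spelling may differ in the file).
print(df[["UniqueID", "Condition", "Sentence1", "Sentence2", "NP Target", "Gold"]].head())

# Gold-label distribution per ORC condition.
print(df.groupby("Condition")["Gold"].value_counts())
```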



## Dataset Creation

### Source Data


<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

[More Information Needed]

### Annotations

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

[More Information Needed]

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

[More Information Needed]




## Citation

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]


## Dataset Card Contact

Dominique Brunato, [email protected]