`golden` (`bool`): Whether the audio sample was labeled by experts or via crowdsourcing. If `true`, the sample is considered golden, meaning it was labeled by experts; if `false`, it was labeled via crowdsourcing.

`final_label` (`string`): The consensus label assigned to the audio sample, indicating the agreed-upon classification of the recitation based on the annotations provided. The final label is determined either by a majority vote among crowd-sourced annotators or by expert annotators for golden samples. The possible values for this field are:

- **correct**: The pronunciation, including the diacritics, is correct, regardless of the rules of Tajweed.
- **in_correct**: The pronunciation, including the diacritics, is incorrect, regardless of the rules of Tajweed.
- **not_related_quran**: The content of the audio clip is incomprehensible, empty, or unrelated to the Quran.
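Since the final label for non-golden samples comes from a majority vote among crowd annotators, the aggregation can be sketched as follows. This is a minimal illustration, not the dataset's actual pipeline; the list of crowd labels is hypothetical.

```python
from collections import Counter

def majority_vote(labels):
    """Return the most frequent label among crowd annotations."""
    counts = Counter(labels)
    label, _ = counts.most_common(1)[0]
    return label

# Hypothetical crowd labels for one non-golden audio sample.
crowd_labels = ["correct", "correct", "in_correct", "correct", "not_related_quran"]
print(majority_vote(crowd_labels))  # correct
```

In a tie, `Counter.most_common` falls back to insertion order, so a production aggregator would typically add an explicit tie-breaking rule (e.g., deferring to the annotator with the most solved control tasks).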
`judgments_num` (`int64`): The number of judgments or annotations provided for the audio sample.

`annotation_metadata` (`string`): Metadata about the annotations provided for the audio sample. Each annotation consists of several key-value pairs:

- **label_X**: The label assigned by the X-th annotator, indicating the classification or judgment made (e.g., "correct" or "in_correct").
- **annotatorX_id**: The unique identifier of the X-th annotator who provided the judgment.
- **annotatorX_SCT**: The number of solved control tasks by the X-th annotator, which assesses the annotator's performance on predefined control tasks.
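Because `annotation_metadata` is stored as a string of key-value pairs, it must be decoded before use. The sketch below assumes a JSON encoding, which is an assumption, not something the card specifies; the field names follow the `label_X` / `annotatorX_*` pattern described above, and the values are illustrative.

```python
import json

# Illustrative annotation_metadata value; treating the string as
# JSON-encoded is an assumption about the serialization format.
annotation_metadata = json.dumps({
    "label_1": "correct",
    "annotator1_id": "a-102",
    "annotator1_SCT": 48,
    "label_2": "in_correct",
    "annotator2_id": "a-517",
    "annotator2_SCT": 51,
})

meta = json.loads(annotation_metadata)
# Collect the per-annotator labels (keys matching the label_X pattern).
labels = [v for k, v in meta.items() if k.startswith("label_")]
print(labels)  # ['correct', 'in_correct']
```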