---
license: mit
language_bcp47:
- ru-RU
tags:
- spellcheck
language:
- ru
size_categories:
- 100K<n<1M
task_categories:
- text2text-generation
---

### Dataset Summary

This dataset is a set of samples for evaluating spell-checking, grammatical error correction, and ungrammatical text detection models.

The dataset contains two splits:

`test.json` contains samples hand-selected for evaluating model quality.

`train.json` contains synthetic samples generated in various ways.

The dataset was created to test an internal spellchecker for [a generative poetry project](https://github.com/Koziev/verslibre), but it can also be useful in other projects, since it is not explicitly specialized for poetry.
You can consider this dataset an extension of [RuCOLA](https://huggingface.co/datasets/RussianNLP/rucola).
In addition, some samples include a corrected version of the text (the "fixed_sentence" field), so the dataset can also serve as an extension of the datasets in [ai-forever/spellcheck_benchmark](https://huggingface.co/datasets/ai-forever/spellcheck_benchmark).

### Example

```
{
        "id": 1483,
        "sentence": "Разучи стихов по больше",
        "fixed_sentence": "Разучи стихов побольше",
        "label": 0,
        "error_type": "Tokenization",
        "domain": "prose"
}
```
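Each sample is a plain JSON object, so a single record can be parsed with Python's standard `json` module. A minimal sketch, using the record shown above:

```python
import json

# The test-split record shown above, parsed from its JSON form.
record = json.loads("""
{
    "id": 1483,
    "sentence": "Разучи стихов по больше",
    "fixed_sentence": "Разучи стихов побольше",
    "label": 0,
    "error_type": "Tokenization",
    "domain": "prose"
}
""")

# label 0 marks the sentence as unacceptable; a corrected version
# is available in fixed_sentence when it is not null.
if record["label"] == 0 and record["fixed_sentence"] is not None:
    print(record["sentence"], "->", record["fixed_sentence"])
```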

### Notes

The test split contains only examples of mistakes made by real people; it contains no synthetic errors.

The errors in the test split come from people who differ in gender, age, education, and social background.

The input and output text is not necessarily a single sentence: it may be 1) part of a sentence, 2) several sentences forming a paragraph, or 3) a fragment of a poem, usually one or two quatrains.

The texts may include offensive content, including texts that offend religious or political feelings or contradict moral standards. Such samples are included only to make the corpus as representative as possible for processing messages in media such as blogs and comments.

One sample may contain several errors of different types.



### Uncensored samples

A number of samples contain explicit obscenities:

```
{
        "id": 1,
        "sentence": "Но не простого - с лёгкой еб@нцой.",
        "fixed_sentence": "Но не простого - с лёгкой ебанцой.",
        "label": 0,
        "error_type": "Misspelling",
        "domain": "prose"
}
```

### Poetry samples

A few poetry samples are included in this version:

```
{
        "id": 24,
        "sentence": "Чему научит забытьё?\nСмерть формы д'арует литьё.\nРезец мгновенье любит стружка...\nСмерть безобидная подружка!",
        "fixed_sentence": null,
        "label": 0,
        "error_type": "Grammar",
        "domain": "poetry"
}
```



### Dataset fields

**id** (int64): the sample's id, starting from 1.  
**sentence** (str): the original sentence.  
**fixed_sentence** (str): the corrected version of the original sentence; null when no correction is provided.  
**label** (str): the target class. "1" for "acceptable", "0" for "unacceptable".  
**error_type** (str): the violation category: Spelling, Grammar, Tokenization, Punctuation, Mixture, or Unknown.  
**domain** (str): the text domain: "prose" or "poetry".
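Combining these fields, one can extract (erroneous, corrected) pairs for spellchecker training or evaluation. A minimal sketch in plain Python; the helper name and the toy records below are illustrative, not part of the dataset:

```python
def correction_pairs(records):
    """Yield (erroneous, corrected) pairs for unacceptable samples
    that carry a corrected version (fixed_sentence is not null)."""
    for r in records:
        if r["label"] == 0 and r["fixed_sentence"] is not None:
            yield r["sentence"], r["fixed_sentence"]

# Toy records mimicking the schema described above.
records = [
    {"id": 1, "sentence": "по больше", "fixed_sentence": "побольше",
     "label": 0, "error_type": "Tokenization", "domain": "prose"},
    {"id": 2, "sentence": "всё хорошо", "fixed_sentence": None,
     "label": 1, "error_type": None, "domain": "prose"},
]

pairs = list(correction_pairs(records))
print(pairs)  # only the unacceptable sample with a correction survives
```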

### Error types

**Tokenization**: a word is split into two tokens, or two words are merged into one word.

```
{
        "id": 6,
        "sentence": "Я подбираю по проще слова",
        "fixed_sentence": "Я подбираю попроще слова",
        "label": 0,
        "error_type": "Tokenization",
        "domain": "prose"
}
```

**Punctuation**: a missing or extra comma, hyphen, or other punctuation mark.

```
{
        "id": 5,
        "sentence": "И швырнуть по-дальше",
        "fixed_sentence": "И швырнуть подальше",
        "label": 0,
        "error_type": "Punctuation",
        "domain": "prose"
}
```

**Spelling**: a word is spelled incorrectly.

```
{
        "id": 38,
        "sentence": "И ведь что интересно, русские официально ни в одном крестовом позоде не участвовали.",
        "fixed_sentence": "И ведь что интересно, русские официально ни в одном крестовом походе не участвовали.",
        "label": 0,
        "error_type": "Spelling",
        "domain": "prose"
}
```

**Grammar**: one of the words is in the wrong grammatical form, for example a verb in the infinitive instead of a personal form.

```
{
        "id": 61,
        "sentence": "на него никто не польститься",
        "fixed_sentence": "на него никто не польстится",
        "label": 0,
        "error_type": "Grammar",
        "domain": "prose"
}
```

Please note that error categories are not always assigned accurately, so you should not use the "error_type" field to train classifiers.


### Statistics


Total number of samples in the test split: **6244**  
Total number of samples in the train split: **435538**

Statistics for the test split:

Domains:  
prose: 5635  
poetry: 609
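The domain counts above can be turned into shares with simple arithmetic (figures taken from this card):

```python
# Test-split domain counts as stated in the statistics above.
counts = {"prose": 5635, "poetry": 609}
total = sum(counts.values())
assert total == 6244  # matches the test-split size

for domain, n in counts.items():
    print(f"{domain}: {n} ({100 * n / total:.1f}%)")
# prose: 5635 (90.2%)
# poetry: 609 (9.8%)
```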