Update README.md

README.md (CHANGED)

````diff
@@ -15,43 +15,33 @@ This dataset, hosted by [Yasalma](https://huggingface.co/neurotatarlar), is a cu
 ## Dataset Details
 
 - **Language**: Tatar (Cyrillic script)
-- **Format**:
+- **Format**: Two Parquet files
+  - Original text
+  - Markdown-formatted text
 - **Columns**:
+  - train-00000-of-00001.parquet:
+    - `file_name`: The original name of each book’s file
+    - `text`: The full content of each book in raw text
+  - lib-books.parquet:
+    - `text`: The full content of each book in Markdown format
+- **Total Size**: 180 MB
 - **License**: MIT
 
 ### Structure
 
-The dataset is organized
+The dataset is organized as follows:
+
+- **train-00000-of-00001.parquet**: Each row represents an individual Tatar book, with columns for the book’s filename (`file_name`) and its content in raw text (`text`).
+- **lib-books.parquet**: Contains the full content of each book in Markdown format, with a single column (`text`).
+
+All links to images have been removed from the Markdown text to ensure compatibility and simplify processing.
 
 ## Potential Use Cases
 
 - **Language Modeling**: Train language models specifically for Tatar in Cyrillic script.
+- **Markdown Processing**: Use Markdown-formatted text for specific NLP applications, such as HTML rendering or structured content analysis.
 - **Machine Translation**: Use the dataset for Cyrillic-to-Latin transliteration and other translation tasks.
 - **Linguistic Research**: Study linguistic structures, grammar, and vocabulary in Tatar.
 
-## Examples
-
-Here’s how to load and use the dataset in Python with `pandas`:
-
-```python
-import pandas as pd
-
-# Load the Parquet file
-df = pd.read_parquet("path/to/train-00000-of-00001.parquet", engine="pyarrow")
-
-# View the first few rows
-print(df.head())
-```
-
-Example of a sample entry:
-
-| file_name | text |
-|------------------------|---------------------------------------------------|
-| Корымлы Бармак_tat.txt | Кояш Тимбикова\nКОРЫМЛЫ БАРМАК\n\nӨйләнешүебез... |
-
 ## Usage
 
 To load the dataset using Hugging Face’s `datasets` library:
````
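
The updated structure section lists both Parquet files and their columns, so the removed `pandas` example generalizes to the two-file layout. A minimal sketch, assuming both files have been downloaded locally; the `path/to/lib-books.parquet` path is illustrative and simply mirrors the path used in the removed example:

```python
import pandas as pd

# Raw-text file: one row per book, with `file_name` and `text` columns
raw_df = pd.read_parquet("path/to/train-00000-of-00001.parquet", engine="pyarrow")

# Markdown file: a single `text` column, image links already removed
md_df = pd.read_parquet("path/to/lib-books.parquet", engine="pyarrow")

print(raw_df.columns.tolist())   # expected: ['file_name', 'text']
print(md_df["text"].str[:200])   # preview the start of each Markdown book
```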
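
The `## Usage` snippet itself lies outside the hunk shown above, so the following is only a sketch of loading through the `datasets` library. The repository id is a placeholder: the excerpt names only the hosting profile (https://huggingface.co/neurotatarlar), not the dataset's repo name.

```python
from datasets import load_dataset

# Placeholder repo id: replace with the actual dataset name published
# under the neurotatarlar profile on the Hugging Face Hub.
ds = load_dataset("neurotatarlar/<dataset-name>", split="train")

# Columns follow the raw-text Parquet file described in the README.
print(ds[0]["file_name"])
print(ds[0]["text"][:200])
```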