tagay1n committed
Commit 8a069c2 · verified · 1 Parent(s): 3273a82

Update README.md

Files changed (1): README.md +15 -25
README.md CHANGED
@@ -15,43 +15,33 @@ This dataset, hosted by [Yasalma](https://huggingface.co/neurotatarlar), is a cu
  ## Dataset Details
  
  - **Language**: Tatar (Cyrillic script)
- - **Format**: Single Parquet file
+ - **Format**: Two Parquet files
+   - Original text
+   - Markdown-formatted text
  - **Columns**:
-   - `file_name`: The original name of each book’s file
-   - `text`: The full content of each book
- - **Total Size**: 75 MB
+   - train-00000-of-00001.parquet:
+     - `file_name`: The original name of each book’s file
+     - `text`: The full content of each book in raw text
+   - lib-books.parquet:
+     - `text`: The full content of each book in Markdown format
+ - **Total Size**: 180 MB
  - **License**: MIT
  
  ### Structure
  
- The dataset is organized so that each row in the Parquet file represents an individual Tatar book, with columns for the book’s filename (`file_name`) and its content (`text`).
+ The dataset is organized as follows:
+ - **train-00000-of-00001.parquet**: Each row represents an individual Tatar book, with columns for the book’s filename (`file_name`) and its content in raw text (`text`).
+ - **lib-books.parquet**: Contains the full content of each book in Markdown format, with a single column (`text`).
+ 
+ All links to images have been removed from the Markdown text to ensure compatibility and simplify processing.
  
  ## Potential Use Cases
  
  - **Language Modeling**: Train language models specifically for Tatar in Cyrillic script.
+ - **Markdown Processing**: Use Markdown-formatted text for specific NLP applications, such as HTML rendering or structured content analysis.
  - **Machine Translation**: Use the dataset for Cyrillic-to-Latin transliteration and other translation tasks.
  - **Linguistic Research**: Study linguistic structures, grammar, and vocabulary in Tatar.
  
- ## Examples
- 
- Here’s how to load and use the dataset in Python with `pandas`:
- 
- ```python
- import pandas as pd
- 
- # Load the Parquet file
- df = pd.read_parquet("path/to/train-00000-of-00001.parquet", engine="pyarrow")
- 
- # View the first few rows
- print(df.head())
- ```
- 
- Example of a sample entry:
- 
- | file_name | text |
- |------------------------|---------------------------------------------------|
- | Корымлы Бармак_tat.txt | Кояш Тимбикова\nКОРЫМЛЫ БАРМАК\n\nӨйләнешүебез... |
- 
  ## Usage
  
  To load the dataset using Hugging Face’s `datasets` library: