---
dataset_info:
  features:
  - name: text
    sequence: string
  splits:
  - name: train
    num_bytes: 327344603
    num_examples: 668582
  - name: validation
    num_bytes: 8406146
    num_examples: 17144
  download_size: 189165954
  dataset_size: 335750749
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
---
|
|
|
# Tiny Conversations
|
|
|
## Overview
|
|
|
This dataset consists of dialogue samples drawn from two main sources: the **Cornell Movie Dialogs** and the **Taiga TV Series Subtitles**. It contains primarily Russian-language dialogues and is designed for natural language processing tasks such as language modeling and dialogue systems.
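Per the metadata above, each record stores its dialogue under a `text` feature typed as a sequence of strings, i.e. one string per turn. A minimal sketch of flattening such a record into a single training string for language modeling (the sample record below is a hypothetical placeholder, not an actual entry from the dataset):

```python
# Each record's "text" feature is a list of dialogue turns (strings),
# matching the `features` section of the dataset metadata.
# NOTE: this record is a made-up placeholder, not a real sample.
record = {"text": ["Привет!", "Привет, как дела?", "Хорошо, спасибо."]}

def flatten_dialogue(turns, sep="\n"):
    """Join the turns of one dialogue into a single string for LM training."""
    return sep.join(turn.strip() for turn in turns)

print(flatten_dialogue(record["text"]))
```

With the `datasets` library, `load_dataset` on this repository would yield records of this shape for both the `train` and `validation` splits defined in the metadata.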
|
|
|
### Sources
|
|
|
1. **Cornell Movie Dialogs**:

   - **Source**: [Cornell Movie Dialogs](https://github.com/Koziev/NLP_Datasets)

   - **License**: CC0-1.0

   - **Description**: This dataset includes cleaned subtitles from a collection of movie dialogues. Notably, many dialogues are sampled from the middle of a conversation.
|
|
|
2. **Taiga TV Series Subtitles**:

   - **Source**: [Russian Subtitles Dataset](https://github.com/dbklim/Russian_subtitles_dataset)

   - **License**: Apache-2.0

   - **Description**: This dataset is based on the Taiga corpus, specifically a collection of subtitles from 347 TV series in multiple languages. Only the Russian-language subtitles were retained here.