Create README.md
README.md
CHANGED
@@ -1,22 +1,13 @@
 ---
-
-
-
-
-
-
-splits:
-- name: train
-  num_bytes: 13665882
-  num_examples: 2886
-download_size: 4262370
-dataset_size: 13665882
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
+license: apache-2.0
+task_categories:
+- zero-shot-classification
+- summarization
+language:
+- en
 ---
-
+This dataset was collected from various sources.
+Once I had obtained the URLs of the YouTube videos, I used LangChain's YoutubeLoader to get the text of each video.
 
-
+Source of data: https://github.com/talesmarra/youtube_data_analysis
+Tasks: summarization, named entity recognition.
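The collection step mentioned in the new README (LangChain's YoutubeLoader applied to a list of video URLs) can be sketched as follows. This is a minimal, illustrative sketch only: the URL list is a placeholder, the `langchain-community` and `youtube-transcript-api` packages are assumed to be installed, and the actual script used to build the dataset lives in the linked source repository.

```python
# Minimal sketch of the transcript-collection step described in the README.
# The example URL is a placeholder; the exact URL list and post-processing
# used for this dataset are in https://github.com/talesmarra/youtube_data_analysis.
from langchain_community.document_loaders import YoutubeLoader

video_urls = [
    "https://www.youtube.com/watch?v=VIDEO_ID",  # placeholder
]

records = []
for url in video_urls:
    # YoutubeLoader fetches the transcript of one video as LangChain Documents.
    loader = YoutubeLoader.from_youtube_url(url, add_video_info=False)
    docs = loader.load()
    text = " ".join(doc.page_content for doc in docs)
    records.append({"url": url, "text": text})

print(f"collected {len(records)} transcripts")
```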
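For reference, the auto-generated metadata removed above described a single `default` config whose `train` split (2886 examples) is stored under `data/train-*`. Loading it with the `datasets` library would look roughly like the snippet below; the repo id is a placeholder, not the dataset's actual name on the Hub.

```python
from datasets import load_dataset

# Placeholder repo id; substitute the dataset's actual name on the Hugging Face Hub.
ds = load_dataset("username/youtube-transcripts", split="train")
print(ds.num_rows)  # the removed metadata reports 2886 training examples
```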