Update README.md
README.md CHANGED
@@ -2,16 +2,18 @@
 language: en
 license: apache-2.0
 datasets:
-
+- custom
 tags:
-
-
-
-
+- summarization
+- flan-t5
+- youtube
+- fine-tuned
 base_model: google/flan-t5-base
 model-index:
-
-
+- name: Flan T5 YouTube Summarizer
+  results: []
+metrics:
+- rouge
 ---
 
 # 📺 T5 YouTube Summarizer
@@ -27,7 +29,7 @@ This is a fine-tuned [flan-t5-base](https://huggingface.co/google/flan-t5-base)
 - **Training Data**: YouTube video transcripts and human-written summaries
 - **Max Input Length**: 512 tokens
 - **Max Output Length**: 256 tokens
-- **Fine-tuning Epochs**:
+- **Fine-tuning Epochs**: 5
 - **Tokenizer**: T5Tokenizer (pretrained)
 
 ---
@@ -44,7 +46,7 @@ This model is designed to generate short, informative summaries from long transc
 
 ## How to Use
 
-python
+```python
 from transformers import T5ForConditionalGeneration, T5Tokenizer
 
 # Load the model
@@ -69,7 +71,7 @@ summary_ids = model.generate(
 
 summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
 print(summary)
-
+```
 
 ## Evaluation
 
@@ -83,9 +85,11 @@ print(summary)
 
 ## Citation
 If you use this model in your work, consider citing:
+```
 @misc{t5ytsummarizer2025,
 title={Flan T5 YouTube Transcript Summarizer},
 author={Muhammad Bilal Yousaf},
 year={2025},
 howpublished={\url{https://huggingface.co/bilal521/flan-t5-youtube-summarizer}},
-}
+}
+```
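The model card above caps input at 512 tokens, but raw YouTube transcripts are often far longer. A minimal sketch of one common workaround, not part of the model card itself: split the transcript into overlapping chunks and summarize each chunk separately. Whitespace word count is used here only as a rough proxy for T5 tokenizer tokens, and the 400/50 figures are illustrative assumptions, not values from the model card.

```python
def chunk_transcript(text, max_words=400, overlap=50):
    """Split a long transcript into overlapping word-based chunks.

    Word count is only a crude stand-in for T5 tokenizer tokens; 400
    words with a 50-word overlap is a conservative approximation of
    the model's 512-token input limit.
    """
    words = text.split()
    if len(words) <= max_words:
        return [text]
    chunks = []
    step = max_words - overlap
    for start in range(0, len(words), step):
        piece = words[start:start + max_words]
        chunks.append(" ".join(piece))
        if start + max_words >= len(words):
            break
    return chunks
```

Each chunk can then be fed to the summarizer on its own, and the per-chunk summaries concatenated (or summarized once more) to cover the full video.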
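The usage snippet decodes with `skip_special_tokens=True`. As a toy illustration of what that flag accomplishes (this mimics the idea on plain strings only; the real `T5Tokenizer.decode` operates on token ids, and the token set below is an assumption for the sketch):

```python
# T5-style special markers that should not appear in a final summary.
SPECIAL_TOKENS = {"<pad>", "</s>", "<unk>"}

def decode_skip_special(tokens):
    """Join tokens into text, dropping T5-style special tokens,
    analogous to tokenizer.decode(..., skip_special_tokens=True)."""
    return " ".join(t for t in tokens if t not in SPECIAL_TOKENS)
```

Without the flag, decoded summaries would carry leading `<pad>` tokens and a trailing `</s>` end-of-sequence marker.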