---
language: en
tags:
- summarization
model-index:
- name: yuvraj/summarizer-cnndm
  results:
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: sepidmnorozy/Urdu_sentiment
      type: sepidmnorozy/Urdu_sentiment
      config: sepidmnorozy--Urdu_sentiment
      split: train
    metrics:
    - name: ROUGE-1
      type: rouge
      value: 0.0
      verified: true
    - name: ROUGE-2
      type: rouge
      value: 0.0
      verified: true
    - name: ROUGE-L
      type: rouge
      value: 0.0
      verified: true
    - name: ROUGE-LSUM
      type: rouge
      value: 0.0
      verified: true
    - name: loss
      type: loss
      value: 10.730116844177246
      verified: true
    - name: gen_len
      type: gen_len
      value: 19.9912
      verified: true
---
# Summarization

## Model description

A `BartForConditionalGeneration` model fine-tuned for summarization on 10,000 samples from the CNN/DailyMail dataset.
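The exact training recipe is not documented in this card. As a rough illustration only, a comparable fine-tune could be set up as in the sketch below; the base checkpoint (`facebook/bart-base`), the `3.0.0` dataset configuration, the sequence lengths, and all hyperparameters are assumptions, not the settings actually used.

```python
# Minimal fine-tuning sketch, NOT the author's exact recipe: the base
# checkpoint, hyperparameters, and sequence lengths are assumptions.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    BartForConditionalGeneration,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

base = "facebook/bart-base"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = BartForConditionalGeneration.from_pretrained(base)

# 10,000 training samples from CNN/DailyMail, as stated in the description
train = load_dataset("cnn_dailymail", "3.0.0", split="train[:10000]")

def preprocess(batch):
    # Articles are the inputs, reference highlights are the targets
    model_inputs = tokenizer(batch["article"], max_length=1024, truncation=True)
    labels = tokenizer(batch["highlights"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

train = train.map(preprocess, batched=True, remove_columns=train.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="summarizer-cnndm",
    per_device_train_batch_size=4,
    num_train_epochs=1,
    learning_rate=5e-5,
    logging_steps=100,
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
trainer.save_model("summarizer-cnndm")
```

Only the number of training samples (10,000) comes from the description above; everything else in the sketch is illustrative.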
## How to use

The model is available as a PyTorch checkpoint and can be loaded with the `transformers` library:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("yuvraj/summarizer-cnndm")
model = AutoModelWithLMHead.from_pretrained("yuvraj/summarizer-cnndm")

# Build a summarization pipeline and run it on the input text
summarizer = pipeline('summarization', model=model, tokenizer=tokenizer)
summarizer("<Text to be summarized>")
```
## Limitations and bias

The model was fine-tuned on a small dataset (10,000 CNN/DailyMail samples), which may limit summary quality and generalization to text outside the news domain.