BART Base Text Summarization Model
This model is based on the Facebook BART (Bidirectional and Auto-Regressive Transformers) architecture. BART is particularly effective when fine-tuned for text generation tasks like summarization but also works well for comprehension tasks. BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
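Pre-training step (1) can be illustrated with a toy noising function. The sketch below is a simplified, hypothetical version of BART's text-infilling corruption (replacing a span of tokens with a single mask token); the real pre-training also uses other noising schemes such as sentence permutation, and the function and token names here are illustrative, not part of the library.

```python
import random

def noise_text_infilling(tokens, mask_token="<mask>", span_len=2, seed=0):
    # Toy sketch of BART-style text infilling: a contiguous span of
    # tokens is replaced by a single mask token, and the seq2seq model
    # is trained to reconstruct the original, uncorrupted sequence.
    rng = random.Random(seed)
    start = rng.randrange(0, max(1, len(tokens) - span_len))
    return tokens[:start] + [mask_token] + tokens[start + span_len:]

original = "the quick brown fox jumps over the lazy dog".split()
corrupted = noise_text_infilling(original)
# Training pair: input = corrupted sequence, target = original sequence.
```

Because a multi-token span collapses into one mask token, the model must also predict how many tokens are missing, not just which ones.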
Model Details
Model Description
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Architecture: BART Base
- Pre-trained model: facebook/bart-base
- Fine-tuned for: Summarization
- License: MIT
- Fine-tuned from model: facebook/bart-base
Uses
- Installation: `pip install transformers`
Direct Use
Here is a simple snippet on how to use the model directly:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the tokenizer and model from the Hub
tokenizer = AutoTokenizer.from_pretrained("ChijoTheDatascientist/summarization-model")
model = AutoModelForSeq2SeqLM.from_pretrained("ChijoTheDatascientist/summarization-model")

# Summarize an input text
text = "Your long input text here."
inputs = tokenizer(text, max_length=1024, truncation=True, return_tensors="pt")
summary_ids = model.generate(inputs["input_ids"], max_length=128, num_beams=4)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
```

The generation parameters (`max_length=128`, `num_beams=4`) are illustrative defaults, not values prescribed by this model; adjust them for your inputs and latency requirements.