lora-finetuned-xsum-t5-summarizer
This model is a fine-tuned version of t5-small on the XSum dataset.
Model description
This is a LoRA (Low-Rank Adaptation) fine-tuned version of T5-small optimized for text summarization. The model was trained on the XSum dataset for abstractive summarization.
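With LoRA, the t5-small base weights stay frozen and only small low-rank update matrices are trained and stored in this repository as an adapter. For orientation, an adapter for a T5-style seq2seq model is typically created with a `LoraConfig` like the sketch below; the rank, alpha, dropout, and target modules shown are illustrative assumptions, not the exact values of this checkpoint (those are recorded in the adapter's `adapter_config.json`).

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSeq2SeqLM

# Illustrative LoRA setup for a T5-style seq2seq model.
# NOTE: r, lora_alpha, lora_dropout, and target_modules are assumptions,
# not the exact values used to train this checkpoint.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,                        # low-rank dimension
    lora_alpha=32,              # scaling factor
    lora_dropout=0.1,
    target_modules=["q", "v"],  # T5 attention query/value projections
)

base_model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()  # only the adapter weights are trainable
```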
Usage example
Plain text inference
```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch

# Load the t5-small base model and attach the LoRA adapter weights
base_model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
my_model = PeftModel.from_pretrained(base_model, "Lakshan2003/finetuned-t5-xsum")

def test_peft_summarizer(text, model, max_length=128, min_length=30):
    """
    Test the PEFT-loaded summarization model.

    Args:
        text (str): Input text to summarize
        model: The loaded PEFT model
        max_length (int): Maximum length of the summary
        min_length (int): Minimum length of the summary
    """
    # Load the tokenizer (shares the t5-small vocabulary)
    tokenizer = AutoTokenizer.from_pretrained("Lakshan2003/finetuned-t5-xsum")

    # Move model to GPU if available
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device)

    # Prepend the T5 summarization task prefix
    prefix = "summarize: "
    input_text = prefix + text

    # Tokenize, truncating long inputs to 512 tokens
    inputs = tokenizer(input_text, return_tensors="pt", max_length=512, truncation=True)
    inputs = {k: v.to(device) for k, v in inputs.items()}

    # Generate the summary with beam search
    with torch.no_grad():
        output_ids = model.generate(
            input_ids=inputs["input_ids"],
            attention_mask=inputs["attention_mask"],
            max_length=max_length,
            min_length=min_length,
            num_beams=4,
            length_penalty=2.0,
            early_stopping=True,
            no_repeat_ngram_size=3,
        )

    # Decode the summary
    summary = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    return summary

# Test text
test_text = """
The United Nations has warned that climate change poses an unprecedented threat to human civilization. In a landmark report, scientists detailed how rising temperatures are affecting everything from weather patterns to food production. The report emphasizes that without immediate and substantial action to reduce greenhouse gas emissions, the world faces severe consequences including rising sea levels, more frequent extreme weather events, and widespread ecosystem collapse. Many countries have pledged to reduce their carbon emissions, but experts say current commitments fall short of what's needed to prevent the worst impacts of climate change. The report also highlights the disproportionate effect of climate change on developing nations, which often lack the resources to adapt to changing conditions.
"""

# Generate and print the summary
summary = test_peft_summarizer(test_text, my_model)
print("Original Text:")
print(test_text)
print("\nGenerated Summary:")
print(summary)
```
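If you prefer a standalone checkpoint that does not require peft at inference time, the LoRA weights can be folded into the base model with PEFT's merge_and_unload(). The sketch below shows this generic pattern; the local output directory name is purely illustrative.

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Merge the LoRA adapter into the base weights (generic PEFT pattern).
base_model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
model = PeftModel.from_pretrained(base_model, "Lakshan2003/finetuned-t5-xsum")
merged_model = model.merge_and_unload()  # returns a plain transformers model

# Save the merged model and tokenizer locally (directory name is illustrative).
merged_model.save_pretrained("t5-small-xsum-merged")
tokenizer = AutoTokenizer.from_pretrained("Lakshan2003/finetuned-t5-xsum")
tokenizer.save_pretrained("t5-small-xsum-merged")
```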
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a sketch of how they might map onto Seq2SeqTrainingArguments follows the list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
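These values correspond to a fairly standard transformers Trainer setup. Below is a hedged sketch of how they might be expressed as Seq2SeqTrainingArguments; the output_dir is illustrative, and anything not listed above (logging, evaluation cadence, etc.) is an assumption rather than a detail of the original run.

```python
from transformers import Seq2SeqTrainingArguments

# Hedged reconstruction of the hyperparameters listed above.
# output_dir is illustrative; unlisted settings are assumptions.
training_args = Seq2SeqTrainingArguments(
    output_dir="lora-finetuned-xsum-t5-summarizer",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,   # effective train batch size of 8
    num_train_epochs=1,
    lr_scheduler_type="linear",
    optim="adamw_torch",
    seed=42,
    fp16=True,                       # native AMP mixed precision
)
```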
Framework versions
- PEFT 0.14.0
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
Base model
- google-t5/t5-small