---
base_model:
- meta-llama/Llama-3.2-1B-Instruct
datasets:
- openai/summarize_from_feedback
language:
- en
license: apache-2.0
metrics:
- accuracy
tags:
- text-generation-inference
- transformers
- llama
- trl
- meta
- summary
- summarization
---
# Summarization Model Card
## Model Overview
- **Model Name:** Llama-3.2-1B Instruct Model Fine-tuned for Summarization
- **Developed by:** [saishshinde15](https://huggingface.co/saishshinde15)
- **License:** Apache-2.0
- **Base Model:** [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
## Description
This model has been fine-tuned to excel in generating concise and informative summaries from lengthy texts. It captures key ideas while presenting them in an easy-to-read bullet-point format.
### Key Features
- **Language:** English
- **Fine-tuned on:** The `openai/summarize_from_feedback` dataset, for improved summarization.
- **Evaluation Metric:** Accuracy.
## Prompt for Optimal Use
For best results, use the following prompt structure:
```plaintext
You are given the following text. Please provide a summary in 5-10 key points, depending on the length of the document. Each point should be clearly formatted in bullet format, starting with an asterisk (*).
**Note:** The examples provided below are for your reference only and should not be included in your response.
### Examples (for reference only):
* The sky is blue on a clear day.
* Water boils at 100 degrees Celsius.
* Trees produce oxygen through photosynthesis.
### Original Text:
{}
### Key Points Summary (in bullet points):
```
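The `{}` placeholder is where the document to summarize goes. Below is a minimal sketch of filling the template in Python; the file name and variable names are illustrative, and the reference-example lines are omitted for brevity:

```python
# Illustrative only: substitute the input document into the prompt template above
prompt_template = """You are given the following text. Please provide a summary in 5-10 key points, depending on the length of the document. Each point should be clearly formatted in bullet format, starting with an asterisk (*).

### Original Text:
{}

### Key Points Summary (in bullet points):
"""

with open("article.txt") as f:  # any long input text
    document = f.read()

prompt = prompt_template.format(document)
```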
## Model Loading Instructions
To load this model, use the following code snippet:
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
# Replace "lora_model" with your actual model name
model = AutoPeftModelForCausalLM.from_pretrained(
"saishshinde15/Summmary_Model_Llama-3.2-1B-Instruct", # YOUR MODEL YOU USED FOR TRAINING
load_in_4bit=True, # Adjust as needed
)
tokenizer = AutoTokenizer.from_pretrained("saishshinde15/Summmary_Model_Llama-3.2-1B-Instruct") |
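Once the model and tokenizer are loaded, generation follows the standard `transformers` pattern. A minimal sketch using the filled-in prompt from the section above (generation settings such as `max_new_tokens` are assumptions to tune for your documents):

```python
# Tokenize the filled-in prompt and generate the bullet-point summary
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens (the summary), skipping the prompt
summary = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(summary)
```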