---
base_model:
  - meta-llama/Llama-3.2-1B-Instruct
datasets:
  - openai/summarize_from_feedback
language:
  - en
license: apache-2.0
metrics:
  - accuracy
tags:
  - text-generation-inference
  - transformers
  - llama
  - trl
  - meta
  - summary
  - summarization
---

# 🌟 Summarization Model Card 🌟

## Model Overview

### Description

This model has been fine-tuned to generate concise, informative summaries of long texts. It captures the key ideas and presents them in an easy-to-read bullet-point format.

### Key Features

- **Language:** English
- **Fine-tuned on:** `openai/summarize_from_feedback`, for improved summarization quality
- **Evaluation metric:** Accuracy

## Prompt for Optimal Use

For best results, use the following prompt structure (a short Python sketch of filling it in appears after the template):

You are given the following text. Please provide a summary in 5-10 key points, depending on the length of the document. Each point should be clearly formatted in bullet format, starting with an asterisk (*).

**Note:** The examples provided below are for your reference only and should not be included in your response.

### Examples (for reference only):
* The sky is blue on a clear day.
* Water boils at 100 degrees Celsius.
* Trees produce oxygen through photosynthesis.

### Original Text:
{}

### Key Points Summary (in bullet points):
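
As a minimal sketch of how this template can be used in Python (the `summary_prompt` and `document_text` names below are illustrative, not part of the model), the `{}` slot is simply filled with the text to summarize:

```python
# Illustrative only: the prompt template above, stored as a Python string with a {} slot.
summary_prompt = """You are given the following text. Please provide a summary in 5-10 key points, depending on the length of the document. Each point should be clearly formatted in bullet format, starting with an asterisk (*).

**Note:** The examples provided below are for your reference only and should not be included in your response.

### Examples (for reference only):
* The sky is blue on a clear day.
* Water boils at 100 degrees Celsius.
* Trees produce oxygen through photosynthesis.

### Original Text:
{}

### Key Points Summary (in bullet points):
"""

document_text = "..."  # replace with the document you want summarized
prompt = summary_prompt.format(document_text)
```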



## Model Loading Instructions

To load this model, use the following code snippet:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Load the fine-tuned adapter together with its base model
model = AutoPeftModelForCausalLM.from_pretrained(
    "saishshinde15/Summmary_Model_Llama-3.2-1B-Instruct",
    load_in_4bit=True,  # optional 4-bit quantization; requires bitsandbytes
)
tokenizer = AutoTokenizer.from_pretrained("saishshinde15/Summmary_Model_Llama-3.2-1B-Instruct")
```
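
Once the model and tokenizer are loaded, generation follows the standard `transformers` flow. The snippet below is a sketch rather than an official recipe: it reuses the illustrative `summary_prompt` and `document_text` names from the prompt example above, and the generation settings are assumptions, not tuned values.

```python
# Build the prompt from the template shown earlier (illustrative names).
prompt = summary_prompt.format(document_text)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generation settings are illustrative; adjust to your documents and hardware.
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.3, do_sample=True)

# Drop the prompt tokens so only the generated bullet points remain.
summary = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(summary)
```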