# GPT-2 for Storytelling
This repository hosts a quantized version of the GPT-2 model, fine-tuned for creative writing and storytelling tasks. The model has been optimized for efficient deployment while maintaining high coherence and creativity, making it suitable for resource-constrained environments.
## Model Details

- Model Architecture: GPT-2 (`GPT2LMHeadModel`)
- Task: Storytelling & Writing Prompt Generation
- Dataset: euclaise/writingprompts
- Quantization: Float16
- Fine-tuning Framework: Hugging Face Transformers
## Usage

### Installation

```bash
pip install transformers torch
```
### Loading the Model

```python
import html

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

device = "cuda" if torch.cuda.is_available() else "cpu"

model_name = "AventIQ-AI/gpt2-lmheadmodel-story-telling-model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)

# Define a test prompt
test_text = "Once upon a time, in a mystical land,"

# Tokenize the input
inputs = tokenizer(test_text, return_tensors="pt").to(device)

# Generate a response
with torch.no_grad():
    output_tokens = model.generate(
        **inputs,
        max_length=200,
        num_beams=5,
        repetition_penalty=2.0,
        temperature=0.7,
        top_k=50,
        top_p=0.9,
        do_sample=True,
        no_repeat_ngram_size=3,
        num_return_sequences=1,
        early_stopping=True,
        length_penalty=1.2,
        pad_token_id=tokenizer.eos_token_id,
        eos_token_id=tokenizer.eos_token_id,
        return_dict_in_generate=True,
        output_scores=True,
    )

# Decode and clean the response
generated_response = tokenizer.decode(output_tokens.sequences[0], skip_special_tokens=True)
cleaned_response = html.unescape(generated_response).replace("#39;", "'").replace("quot;", '"')

print("\nGenerated Response:\n", cleaned_response)
```
## ROUGE Evaluation Results

After fine-tuning the GPT-2 model for storytelling, we obtained the following ROUGE scores:
| Metric | Score | Meaning |
|---|---|---|
| ROUGE-1 | 0.7525 (~75%) | Measures overlap of unigrams (single words) between the reference and generated text. |
| ROUGE-2 | 0.3552 (~35%) | Measures overlap of bigrams (two-word phrases), indicating coherence and fluency. |
| ROUGE-L | 0.4904 (~49%) | Measures the longest common word sequence, testing how well sentence structure is preserved. |
| ROUGE-Lsum | 0.5701 (~57%) | ROUGE-L computed at the summary level, i.e. over the full generated story. |
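Scores of this kind can be computed with the Hugging Face `evaluate` library. The snippet below is a minimal sketch: the prediction/reference pair is a placeholder for illustration, not part of the actual evaluation set.

```python
# pip install evaluate rouge_score
import evaluate

rouge = evaluate.load("rouge")

# Placeholder examples; the real evaluation compares model generations
# against reference stories from the held-out split.
predictions = ["Once upon a time, a dragon guarded a quiet mountain village."]
references = ["Once upon a time, a dragon watched over a small mountain village."]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```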
## Fine-Tuning Details

### Dataset

The Hugging Face euclaise/writingprompts dataset was used, containing creative writing prompts and the stories written in response to them.
### Training
- Number of epochs: 3
- Batch size: 4
- Evaluation strategy: epoch
- Learning rate: 5e-5
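The exact training script is not included in this repository, but a comparable setup with the hyperparameters listed above can be sketched with the `Trainer` API. The dataset column names (`prompt`, `story`), sequence length, and output directory below are assumptions, not confirmed details of this checkpoint.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("euclaise/writingprompts")

def tokenize(batch):
    # Assumed column names: "prompt" and "story"; adjust to the actual schema.
    text = [p + "\n" + s for p, s in zip(batch["prompt"], batch["story"])]
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset["train"].column_names)
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="gpt2-storytelling",
    num_train_epochs=3,
    per_device_train_batch_size=4,
    evaluation_strategy="epoch",  # newer transformers versions call this eval_strategy
    learning_rate=5e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized.get("validation", tokenized["train"]),
    data_collator=collator,
)
trainer.train()
```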
## Quantization

Post-training quantization to float16 was applied using PyTorch to reduce the model size and improve inference efficiency.
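As a rough sketch of such a conversion (the exact procedure used for this repository may differ), the fine-tuned checkpoint can be cast to half precision with `model.half()` and re-saved. The paths below are hypothetical placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical paths; replace with the actual fine-tuned checkpoint.
source_dir = "gpt2-storytelling"
target_dir = "gpt2-storytelling-fp16"

model = AutoModelForCausalLM.from_pretrained(source_dir)
tokenizer = AutoTokenizer.from_pretrained(source_dir)

# Cast all weights to float16 and save the smaller checkpoint.
model = model.half()
model.save_pretrained(target_dir, safe_serialization=True)  # writes model.safetensors
tokenizer.save_pretrained(target_dir)
```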
Repository Structure
.
βββ model/ # Contains the quantized model files
βββ tokenizer_config/ # Tokenizer configuration and vocabulary files
βββ model.safetensors/ # Quantized Model
βββ README.md # Model documentation
## Limitations
- The model may not generalize well to domains outside the fine-tuning dataset.
- Quantization may result in minor accuracy degradation compared to full-precision models.
## Contributing
Contributions are welcome! Feel free to open an issue or submit a pull request if you have suggestions or improvements.