---
library_name: transformers
tags:
- dante
- literature
- italian
license: cc-by-sa-4.0
datasets:
- maiurilorenzo/divina-commedia
language:
- it
base_model:
- openai-community/gpt2
pipeline_tag: text-generation
---
# Model Card for DanteGPT
<!-- Provide a quick summary of what the model is/does. -->
This model, **DanteGPT**, is a fine-tuned version of GPT-2 that generates text in the style of Dante Alighieri’s *Divina Commedia*. The model emulates Dante's poetic structure, including his interlocking tercets in terza rima (ABA BCB CDC), as well as thematic elements of his work such as divine justice and moral reflection.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Lorenzo Maiuri
- **Funded by:** Independent research
- **Shared by:** Lorenzo Maiuri
- **Model type:** Fine-tuned GPT-2
- **Language(s) (NLP):** Italian (`it`)
- **License:** CC BY-SA 4.0
- **Finetuned from model:** GPT-2 (base version by OpenAI)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [Hugging Face Model Repository](https://huggingface.co/maiurilorenzo/dante-gpt)
- **Dataset:** [Divina Commedia](https://huggingface.co/datasets/maiurilorenzo/divina-commedia)
- **Kaggle Notebook:** [Training Notebook on Kaggle](https://www.kaggle.com/code/lorenzomaiuri/dante-gpt)
- **Demo:** [DanteGPT Space](https://huggingface.co/spaces/maiurilorenzo/dante-gpt-space)
## Uses
### Try It Out
You can try this model interactively using the [DanteGPT Space](https://huggingface.co/spaces/maiurilorenzo/dante-gpt-space).
Simply enter a text prompt, and the model will generate verses in the style of Dante Alighieri!
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
The model is designed for generating text in the style of the *Divina Commedia* and can be used for literary exploration, creative writing, and educational purposes.
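For quick experiments, the high-level Transformers `pipeline` API wraps model loading and generation in a few lines. A minimal sketch; the sampling settings below are illustrative starting points, not tuned values:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint through the text-generation pipeline
generator = pipeline("text-generation", model="maiurilorenzo/dante-gpt")

result = generator(
    "Nel mezzo del cammin di nostra vita,",
    max_length=100,
    do_sample=True,
    temperature=0.8,
)
print(result[0]["generated_text"])
```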
### Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
Users may adapt the model for additional fine-tuning on similar literary texts or use it to generate other forms of poetic or stylistic writing.
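As a rough sketch of such an adaptation, the checkpoint can serve as the starting point for further training with the standard `Trainer` API. The corpus file, column name, and training settings below are placeholders, not the recipe used for this model:
```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("maiurilorenzo/dante-gpt")
model = AutoModelForCausalLM.from_pretrained("maiurilorenzo/dante-gpt")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

# Hypothetical plain-text corpus, one passage per line
dataset = load_dataset("text", data_files={"train": "my_corpus.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dante-gpt-adapted", num_train_epochs=1),
    train_dataset=tokenized["train"],
    # Causal LM collator: labels are copied from the inputs
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```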
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
The model may produce inaccurate or nonsensical text when used outside its intended domain. It is not suitable for tasks requiring factual accuracy or ethical decision-making.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
### Biases
- The model reflects the content and biases of the original dataset, which is a historical text. Modern ethical, cultural, and social considerations may not align with the themes or language of Dante's work.
### Risks
- The model may inadvertently generate offensive or inappropriate content when prompted with ambiguous or unrelated topics.
- Over-reliance on this model for literary generation without proper human oversight may lead to misrepresentation of Dante’s work.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should validate generated content for coherence and appropriateness. It is recommended to use the model in combination with literary expertise to ensure quality.
## How to Get Started with the Model
To use the model for text generation, run the following code snippet:
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the fine-tuned model and tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("maiurilorenzo/dante-gpt")
model = GPT2LMHeadModel.from_pretrained("maiurilorenzo/dante-gpt")

# Encode a prompt and generate a continuation with beam search
prompt = "Nel mezzo del cammin di nostra vita,"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(
    input_ids,
    max_length=100,
    num_beams=5,
    no_repeat_ngram_size=2,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token; silences a warning
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
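Beam search is deterministic and can sound repetitive; sampling trades a little coherence for variety. Reusing `model`, `tokenizer`, and `input_ids` from the snippet above, with illustrative (untuned) parameter values:
```python
# Sampling-based decoding for more varied output
output = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```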
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The model was fine-tuned on the *Divina Commedia* dataset hosted on the Hugging Face Hub (`maiurilorenzo/divina-commedia`). The dataset contains cleaned and tokenized text from the original work.
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing
- Removed text exceeding 1024 tokens to ensure compatibility with GPT-2's input limits.
- Split the dataset into training and test subsets.
- Added special tokens `<|startoftext|>` and `<|endoftext|>` to each entry for model training.
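A minimal sketch of these steps using the `datasets` library; the `text` column name is an assumption about the dataset schema:
```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
# <|endoftext|> is already in GPT-2's vocabulary; <|startoftext|> must be registered
# (the model's embedding matrix must be resized accordingly before training)
tokenizer.add_special_tokens({"additional_special_tokens": ["<|startoftext|>"]})

dataset = load_dataset("maiurilorenzo/divina-commedia")

# Wrap each entry in the special tokens (assumes a "text" column)
dataset = dataset.map(
    lambda ex: {"text": "<|startoftext|>" + ex["text"] + "<|endoftext|>"}
)

# Drop entries that exceed GPT-2's 1024-token context window
dataset = dataset.filter(
    lambda ex: len(tokenizer(ex["text"])["input_ids"]) <= 1024
)

# Hold out 20 samples for evaluation (see the Evaluation section)
splits = dataset["train"].train_test_split(test_size=20)
```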
#### Training Hyperparameters
- **Training regime**: FP16 mixed precision
- **Learning rate**: 2e-5
- **Batch size**: 16 (with gradient accumulation to simulate larger batch sizes)
- **Epochs**: 5
- **Optimizer**: AdamW
- **Scheduler**: Linear warm-up with decay
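For reference, a `TrainingArguments` configuration approximating these settings might look like the sketch below; the warm-up fraction and accumulation factor are assumptions, as they are not stated above:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="dante-gpt",
    fp16=True,                      # FP16 mixed precision
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=2,  # assumption: exact factor not stated
    num_train_epochs=5,
    optim="adamw_torch",            # AdamW
    lr_scheduler_type="linear",     # linear decay after warm-up
    warmup_ratio=0.1,               # assumption: warm-up fraction not stated
)
```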
#### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
- **Training Time**: ~1.5 hours on NVIDIA Tesla P100 (16 GB)
- **Model Size**: ~500 MB
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
A subset of 20 samples from the dataset was held out for testing purposes.
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
Evaluation focused on:
- Coherence of generated text.
- Thematic relevance to the *Divina Commedia*.
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
- **Human Evaluation**: Subjective assessment of the generated text's quality.
### Results
- Human Evaluation: 75% accuracy in replicating Dante’s style (based on thematic and stylistic criteria).
#### Summary
The model successfully generates stylistically accurate text that aligns with the poetic form and thematic elements of Dante’s work. Inconsistencies in rhyme and coherence may occur in longer outputs.
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** NVIDIA Tesla P100 (16 GB)
- **Hours used:** ~1.5 hours
- **Cloud Provider:** Kaggle
- **Carbon Emitted:** ~0.21 kg CO₂eq
## Technical Specifications
### Model Architecture and Objective
- **Base Model**: GPT-2
- **Objective**: Minimize cross-entropy loss between predicted and target tokens in fine-tuned training data.
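Concretely, this is the standard autoregressive language-modeling loss, in which each token is predicted from the tokens that precede it:

$$
\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta(x_t \mid x_{<t})
$$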
### Compute Infrastructure
#### Hardware
- **GPU:** NVIDIA Tesla P100 (16 GB)
- **RAM:** 32 GB
#### Software
- Hugging Face Transformers
- PyTorch
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@misc{maiuri2024dantegpt,
  author    = {Lorenzo Maiuri},
  title     = {DanteGPT: Generating Text in the Style of Dante Alighieri},
  year      = {2024},
  publisher = {Hugging Face Hub},
  url       = {https://huggingface.co/maiurilorenzo/dante-gpt}
}
```
**APA:**
Maiuri, L. (2024). *DanteGPT: Generating Text in the Style of Dante Alighieri*. Hugging Face Hub. https://huggingface.co/maiurilorenzo/dante-gpt