---
library_name: transformers
tags:
- dante
- literature
- italian
license: cc-by-sa-4.0
datasets:
- maiurilorenzo/divina-commedia
language:
- it
base_model:
- openai-community/gpt2
pipeline_tag: text-generation
---

# Model Card for DanteGPT

<!-- Provide a quick summary of what the model is/does. -->

This model, **DanteGPT**, is a fine-tuned version of GPT-2 designed to generate text in the style of Dante Alighieri’s *Divina Commedia*. The model emulates Dante's poetic structure, including his use of tercets in the interlocking *terza rima* rhyme scheme (ABA BCB CDC), and thematic elements of his work such as divine justice and moral reflection.


## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Lorenzo Maiuri  
- **Funded by:** Independent research  
- **Shared by:** Lorenzo Maiuri
- **Model type:** Fine-tuned GPT-2  
- **Language(s) (NLP):** Italian (`it`)  
- **License:** CC BY-SA 4.0  
- **Finetuned from model:** GPT-2 (base version by OpenAI) 

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** [Hugging Face Model Repository](https://huggingface.co/maiurilorenzo/dante-gpt)
- **Dataset:** [Divina Commedia](https://huggingface.co/datasets/maiurilorenzo/divina-commedia)
- **Kaggle Notebook:** [Link to Kaggle Notebook](https://www.kaggle.com/code/lorenzomaiuri/dante-gpt)
- **Demo:** [DanteGPT Space](https://huggingface.co/spaces/maiurilorenzo/dante-gpt-space)

## Uses

### Try It Out

You can try this model interactively using the [DanteGPT Space](https://huggingface.co/spaces/maiurilorenzo/dante-gpt-space).  
Simply enter a text prompt, and the model will generate verses in the style of Dante Alighieri!


### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

The model is designed for generating text in the style of the *Divina Commedia* and can be used for literary exploration, creative writing, and educational purposes.

### Downstream Use

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

Users may adapt the model for additional fine-tuning on similar literary texts or use it to generate other forms of poetic or stylistic writing.

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

The model may produce inaccurate or nonsensical text when used outside its intended domain. It is not suitable for tasks requiring factual accuracy or ethical decision-making.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

### Biases

- The model reflects the content and biases of the original dataset, which is a historical text. Modern ethical, cultural, and social considerations may not align with the themes or language of Dante's work.

### Risks

- The model may inadvertently generate offensive or inappropriate content when prompted with ambiguous or unrelated topics.
- Over-reliance on this model for literary generation without proper human oversight may lead to misrepresentation of Dante’s work.

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should validate generated content for coherence and appropriateness. It is recommended to use the model in combination with literary expertise to ensure quality.

## How to Get Started with the Model

To use the model for text generation, run the following code snippet:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the fine-tuned model and tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("maiurilorenzo/dante-gpt")
model = GPT2LMHeadModel.from_pretrained("maiurilorenzo/dante-gpt")

# Encode a prompt and generate a continuation with beam search
prompt = "Nel mezzo del cammin di nostra vita,"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(
    input_ids,
    max_length=100,
    num_beams=5,
    no_repeat_ngram_size=2,               # avoid repeating any bigram verbatim
    pad_token_id=tokenizer.eos_token_id,  # silences the missing-pad-token warning
)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

The model was fine-tuned on the Divina Commedia dataset sourced from the Hugging Face Datasets library (`maiurilorenzo/divina-commedia`). The dataset contains cleaned and tokenized text from the original work.

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing

- Removed text exceeding 1024 tokens to ensure compatibility with GPT-2's input limits.
- Split the dataset into training and test subsets.
- Added special tokens `<|startoftext|>` and `<|endoftext|>` to each entry for model training.
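
A minimal sketch of these steps, assuming the dataset exposes a `text` column and a `train` split (the column name and the 10% test ratio below are assumptions, not taken from the training notebook):

```python
from datasets import load_dataset
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
# GPT-2 ships with <|endoftext|> only; register <|startoftext|> as a BOS token
tokenizer.add_special_tokens({"bos_token": "<|startoftext|>"})

dataset = load_dataset("maiurilorenzo/divina-commedia", split="train")

# Wrap each entry in the special tokens used for training
dataset = dataset.map(
    lambda ex: {"text": "<|startoftext|>" + ex["text"] + "<|endoftext|>"}
)

# Drop entries that exceed GPT-2's 1024-token context window
dataset = dataset.filter(
    lambda ex: len(tokenizer(ex["text"])["input_ids"]) <= 1024
)

# Hold out a test subset (ratio is illustrative)
splits = dataset.train_test_split(test_size=0.1)
```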


#### Training Hyperparameters

- **Training regime**: FP16 mixed precision
- **Learning rate**: 2e-5
- **Batch size**: 16 (with gradient accumulation to simulate larger batch sizes)
- **Epochs**: 5
- **Optimizer**: AdamW
- **Scheduler**: Linear warm-up with decay
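
These settings correspond roughly to the following Hugging Face `TrainingArguments`; the output directory, gradient-accumulation steps, and warm-up length are illustrative assumptions that the card does not report:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="dante-gpt",          # assumed output path
    fp16=True,                       # FP16 mixed precision
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=2,   # illustrative; the card only says "gradient accumulation"
    num_train_epochs=5,
    optim="adamw_torch",             # AdamW
    lr_scheduler_type="linear",      # linear decay after warm-up
    warmup_steps=500,                # illustrative warm-up length
)
```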

#### Speeds, Sizes, Times

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

- **Training Time**: ~1.5 hours on NVIDIA Tesla P100 (16 GB)
- **Model Size**: ~500 MB

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

A subset of 20 samples from the dataset was held out for testing purposes.

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

Evaluation focused on:

- Coherence of generated text.
- Thematic relevance to the Divina Commedia.

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

<!--- **Perplexity**: A quantitative measure of the model's predictive performance.-->
- **Human Evaluation**: Subjective assessment of the generated text's quality.

### Results

<!--- Perplexity: [Enter Perplexity Score]-->
- Human Evaluation: 75% accuracy in replicating Dante’s style (based on thematic and stylistic criteria).

#### Summary

The model successfully generates stylistically accurate text that aligns with the poetic form and thematic elements of Dante’s work. Inconsistencies in rhyme and coherence may occur in longer outputs.

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** NVIDIA Tesla P100 (16 GB)
- **Hours used:** ~1.5 hours
- **Cloud Provider:** Kaggle
- **Carbon Emitted:** ~0.21 kg CO₂eq
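
As a rough cross-check (our arithmetic, not a calculator output): a Tesla P100 draws up to ~250 W, so ~1.5 h of training uses about 0.25 kW × 1.5 h ≈ 0.38 kWh, which at a typical grid intensity of 0.4–0.5 kg CO₂eq/kWh corresponds to roughly 0.15–0.19 kg CO₂eq, consistent with the reported figure read in kilograms.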

## Technical Specifications

### Model Architecture and Objective

- **Base Model**: GPT-2
- **Objective**: Minimize cross-entropy loss between predicted and target tokens in fine-tuned training data.
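
In standard causal-language-modeling terms (the notation below is ours, not taken from the training code), this amounts to minimizing

```latex
\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta(x_t \mid x_{<t})
```

where \(x_1, \dots, x_T\) are the tokens of a training entry.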

### Compute Infrastructure

Training was run on Kaggle's GPU platform; hardware and software details follow.

#### Hardware

- **GPU:** NVIDIA Tesla P100 (16 GB)
- **RAM:** 32 GB

#### Software

- Hugging Face Transformers
- PyTorch

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```bibtex
@misc{maiuri2024dantegpt,
  author = {Lorenzo Maiuri},
  title = {DanteGPT: Generating Text in the Style of Dante Alighieri},
  year = {2024},
  publisher = {Hugging Face Hub},
  url = {https://huggingface.co/maiurilorenzo/dante-gpt}
}
```

**APA:**

Maiuri, L. (2024). *DanteGPT: Generating Text in the Style of Dante Alighieri*. Hugging Face Hub. https://huggingface.co/maiurilorenzo/dante-gpt