---
license: apache-2.0
datasets:
- allenai/dolma
language:
- en
base_model: allenai/OLMo-1B-0724-hf
library_name: transformers
pipeline_tag: text-generation
tags:
- art
- literature
- OLMo
- allenai
---
## Model Overview

`OLMo-1B-Base-Shakespeare` is a fine-tuned version of the `allenai/OLMo-1B-0724-hf` model, trained on the complete works of William Shakespeare. The model generates text in the style of Shakespeare's plays and poems and aims to capture the linguistic and stylistic nuances of the original text.

## Model Details
- **Model Type:** Base model
- **Base Model:** [allenai/OLMo-1B-0724-hf](https://huggingface.co/allenai/OLMo-1B-0724-hf)
- **Training Dataset:** [Works by William Shakespeare](https://gist.githubusercontent.com/blakesanie/dde3a2b7e698f52f389532b4b52bc254/raw/76fe1b5e9efcf0d2afdfd78b0bfaa737ad0a67d3/shakespeare.txt)
- **GPU VRAM Requirements:** 25 GB
- **Intended Use Cases:**
  - Creative writing assistance
  - Educational purposes for studying literary styles
  - Text generation in the style of William Shakespeare

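The training dataset is a single plain-text file. A common way to prepare such a corpus for causal-LM fine-tuning is to tokenize the whole file and split the token ids into fixed-length blocks. The sketch below illustrates that grouping with toy integer ids standing in for real tokenizer output; the helper and block size are illustrative assumptions, not taken from the actual training script:

```python
def group_into_blocks(token_ids, block_size):
    """Split a flat list of token ids into fixed-length training blocks,
    dropping the ragged remainder (mirrors common causal-LM preprocessing)."""
    usable = (len(token_ids) // block_size) * block_size
    return [token_ids[i:i + block_size] for i in range(0, usable, block_size)]

# Toy example: integers stand in for real tokenizer ids.
ids = list(range(10))
print(group_into_blocks(ids, 4))  # [[0, 1, 2, 3], [4, 5, 6, 7]]
```

In a real pipeline the ids would come from the OLMo tokenizer and each block would serve as both input and (shifted) label.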
## Installation

Ensure you have the `transformers` library installed:

```bash
pip install transformers
```
## Inference

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

torch.random.manual_seed(0)

model_name = 'sartajbhuvaji/OLMo-1B-Base-Shakespeare'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="cuda",  # places the model on the GPU; no separate .to('cuda') needed
    torch_dtype="auto",
    trust_remote_code=True,
)

input_text = 'Hello how are you?'
input_ids = tokenizer.encode(input_text, return_tensors='pt').to('cuda')

output = model.generate(input_ids, max_length=100, num_return_sequences=1, no_repeat_ngram_size=2)
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
'''
Hello how are you?
SECOND GENTLEMAN. I am a gentleman.
The Duke, my lord, and all the court are yours.

Enter a MESSENGER

THIRD GENTSLE MAN. Here's a messenger. What news? What's the news,
sir? How doth your lady? Is she well? Or is she
hears'd, beaten, or slain? The news is, sir
'''
```
## Finetuning Details
- **Global Step:** 4656
- **Training Loss:** 2.180167237763962
- **Train Runtime:** 2710.0517 s
- **Train Samples per Second:** 13.742
- **Train Steps per Second:** 1.718
- **Total FLOPs:** 3.3657646372356096e+16
- **Epoch:** 3.0
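The reported run statistics are internally consistent; a quick arithmetic check using only the figures listed above (the derived effective batch size of about 8 is an inference from these metrics, not a reported value):

```python
# Figures as reported in the finetuning details
train_runtime_s = 2710.0517   # total training wall-clock time, seconds
steps_per_second = 1.718      # optimizer steps per second
samples_per_second = 13.742   # training samples per second

# steps/sec x runtime should recover the reported global step
total_steps = steps_per_second * train_runtime_s
print(round(total_steps))   # 4656, matching the reported Global Step

# samples/sec / steps/sec gives the effective batch size per step
batch_size = samples_per_second / steps_per_second
print(round(batch_size))    # about 8 samples per optimizer step
```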

## Training Curve

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6354695712edd0ed5dc46b04/cVDWr59JFTZ6evZwgw5NF.png)