---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.3
model-index:
- name: tinyllama-colorist-lora-v0.3
  results: []
---

<h1 align="center"><font color="red">tinyllama-colorist-lora-v0.3</font></h1>


![image/png](https://cdn-uploads.huggingface.co/production/uploads/628fcb73267c3813eb5ae99d/UMg3Uviv6JcwD4D6Vil7o.png)

This model, `tinyllama-colorist-lora-v0.3`, is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v0.3](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.3) on the color dataset.

## <font color="yellow">Study Motivation</font>

The goal of this study is to evaluate the new TinyLlama model as a replacement for Llama2 in resource-constrained environments. In the future, I also plan to fine-tune this model for chat and for a specific domain in Portuguese and Spanish 🤗.


## <font color="yellow">Prompt format</font>
The training process is similar to that of the regular Llama2 model, using a chat prompt format like this:
```
<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant\n{answer}<|im_end|>\n
```
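
For illustration, a single example from the color dataset (here using the sky-blue pair shown under "Instructions for use" below) would be rendered into this template, with the `\n` escapes as actual newlines:
```
<|im_start|>user
Give me a sky blue color.<|im_end|>
<|im_start|>assistant
#6092ff<|im_end|>
```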

## <font color="yellow">Instructions for use</font>
Ask for a color in plain language and the model responds with the corresponding hex code:
```
User Input: Give me a sky blue color.
LLM response: #6092ff
```



## <font color="yellow">Model usage</font>
```python
import torch
from time import perf_counter
from transformers import AutoTokenizer, pipeline

# Placeholder: replace with the repository ID where this model/adapter is hosted
model_id_colorist_final = "tinyllama-colorist-lora-v0.3"

def formatted_prompt(question: str) -> str:
    # Build the prompt in the same chat format used during training
    return f"<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant\n"

def print_color_space(hex_color):
    # Render a color swatch in the terminal for a given hex code
    def hex_to_rgb(hex_color):
        hex_color = hex_color.lstrip('#')
        return tuple(int(hex_color[i:i+2], 16) for i in (0, 2, 4))
    r, g, b = hex_to_rgb(hex_color)
    print(f'{hex_color}: \033[48;2;{r};{g};{b}m           \033[0m')

tokenizer = AutoTokenizer.from_pretrained(model_id_colorist_final)
pipe = pipeline(
    "text-generation",
    model=model_id_colorist_final,
    torch_dtype=torch.float16,
    device_map="auto",
)

start_time = perf_counter()

prompt = formatted_prompt('give me a pure brown color')
sequences = pipe(
    prompt,
    do_sample=True,
    temperature=0.1,
    top_p=0.9,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=12,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")

output_time = perf_counter() - start_time
print(f"Time taken for inference: {round(output_time, 2)} seconds")
```
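
Since this repository contains a PEFT (LoRA) adapter on top of [TinyLlama/TinyLlama-1.1B-Chat-v0.3](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.3), it can also be loaded with `peft` directly. A minimal sketch, assuming the adapter repository ID below is only a placeholder:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Placeholder repository ID for this adapter
adapter_id = "tinyllama-colorist-lora-v0.3"

# Loads the base model declared in the adapter config and applies the LoRA weights
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(adapter_id)
```

If the tokenizer files are not hosted alongside the adapter, load the tokenizer from the base model repository instead.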


### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 200
- mixed_precision_training: Native AMP
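
For reference, the sketch below shows how these settings map onto `transformers.TrainingArguments` as typically passed to TRL's `SFTTrainer`; the `output_dir` is an assumption, and the dataset and LoRA configuration are not shown here:

```python
from transformers import TrainingArguments

# Hedged sketch: reproduces the hyperparameters listed above.
# Adam betas/epsilon match the defaults (0.9, 0.999, 1e-8).
training_args = TrainingArguments(
    output_dir="tinyllama-colorist-lora-v0.3",  # assumption
    per_device_train_batch_size=8,       # train_batch_size: 8
    per_device_eval_batch_size=8,        # eval_batch_size: 8
    gradient_accumulation_steps=4,       # total_train_batch_size: 32
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    max_steps=200,                       # training_steps: 200
    seed=42,
    fp16=True,                           # mixed precision (native AMP)
)
```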

### Training results



### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2