Commit 7c65c11 (parent: b203be1) by efederici: Create README.md
---
tags:
- sft
- it
- mistral
- chatml
model-index:
- name: maestrale-chat-v0.2-alpha
  results: []
license: cc-by-nc-4.0
language:
- it
prompt_template: |-
  <|im_start|>system
  {system_message}<|im_end|>
  <|im_start|>user
  {prompt}<|im_end|>
  <|im_start|>assistant
---

<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/uXLz7Jz.jpg" alt="Mii-LLM" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://buy.stripe.com/8wM00Sf3vb3H3pmfYY">Want to contribute? Please donate! This will let us work on better datasets and models!</a></p>
</div>
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Maestrale chat alpha ༄

By @efederici and @mferraretto

## Model description

- **Language Model**: Mistral-7B for Italian, with continued pre-training on a curated, large-scale, high-quality Italian corpus.
- **Fine-Tuning**: SFT performed on ~270k Italian conversations/instructions for one epoch.

This model uses the ChatML prompt format:
```
<|im_start|>system
Assisti sempre con cura, rispetto e verità. Rispondi con la massima utilità ma in modo sicuro. Evita contenuti dannosi, non etici, pregiudizievoli o negativi. Assicurati che le risposte promuovano equità e positività.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
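For clarity, the template above can also be assembled by hand. A minimal sketch — `build_chatml_prompt` is a hypothetical helper for illustration, not part of the model's API; in practice `tokenizer.apply_chat_template` does this for you:

```python
# Hypothetical helper for illustration: builds a ChatML prompt string by hand.
def build_chatml_prompt(system_message: str, prompt: str) -> str:
    # Each turn is wrapped in <|im_start|>{role} ... <|im_end|>; the string
    # ends with an open assistant turn so generation continues from there.
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

text = build_chatml_prompt("Sei un assistente utile.", "Ciao, come stai?")
```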

## Usage

```python
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    GenerationConfig,
    TextStreamer
)
import torch

# Allow TF32 matmuls on Ampere+ GPUs for faster inference.
torch.backends.cuda.matmul.allow_tf32 = True

tokenizer = AutoTokenizer.from_pretrained("mii-llm/maestrale-chat-v0.2-alpha")
model = AutoModelForCausalLM.from_pretrained(
    "mii-llm/maestrale-chat-v0.2-alpha",
    load_in_8bit=True,
    device_map="auto"
)

gen = GenerationConfig(
    do_sample=True,
    temperature=0.7,
    repetition_penalty=1.2,
    top_k=50,
    top_p=0.95,
    max_new_tokens=500,
    pad_token_id=tokenizer.eos_token_id,
    # Stop at the ChatML end-of-turn token.
    eos_token_id=tokenizer.convert_tokens_to_ids("<|im_end|>")
)

messages = [
    {"role": "system", "content": "Assisti sempre con cura, rispetto e verità. Rispondi con la massima utilità ma in modo sicuro. Evita contenuti dannosi, non etici, pregiudizievoli o negativi. Assicurati che le risposte promuovano equità e positività."},
    {"role": "user", "content": "{prompt}"}
]

with torch.no_grad(), torch.backends.cuda.sdp_kernel(
    enable_flash=True,
    enable_math=False,
    enable_mem_efficient=False
):
    # Render the ChatML prompt and stream the reply token by token.
    temp = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer(temp, return_tensors="pt").to("cuda")

    streamer = TextStreamer(tokenizer, skip_prompt=True)

    _ = model.generate(
        **inputs,
        streamer=streamer,
        generation_config=gen
    )
```
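The `top_p=0.95` setting in the `GenerationConfig` above keeps, at each step, only the smallest set of highest-probability tokens whose cumulative mass reaches 0.95. A minimal pure-Python sketch of that filtering step (an illustration only, not the batched tensor implementation `transformers` actually uses):

```python
import math

def top_p_filter(logits, top_p):
    # Nucleus filtering: keep the top tokens whose cumulative softmax
    # probability reaches top_p; everything else gets -inf (never sampled).
    order = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    exps = [math.exp(logits[i]) for i in order]
    total = sum(exps)
    kept, cum = set(), 0.0
    for idx, e in zip(order, exps):
        kept.add(idx)  # the top-1 token is always kept
        cum += e / total
        if cum >= top_p:
            break
    return [x if i in kept else float("-inf") for i, x in enumerate(logits)]

filtered = top_p_filter([2.0, 1.0, 0.5, -1.0], top_p=0.95)
# The three most likely tokens cover >= 0.95 of the mass; the last is masked.
```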

## Intended uses & limitations

This is an alpha version and it is not aligned. We are working on alignment data and evals.