---
license: mit
language:
- it
- en
library_name: transformers
tags:
- sft
- it
- mistral
- chatml
---

# Model Information

Volare is a fine-tuned version of [Gemma7B](https://huggingface.co/google/gemma-7b), adapted with supervised fine-tuning (SFT) and LoRA.

- It is trained on publicly available datasets, such as [SQUAD-it](https://huggingface.co/datasets/squad_it), together with datasets we created in-house.
- It is designed to understand and maintain context, making it well suited for Retrieval Augmented Generation (RAG) tasks and applications that require contextual awareness (a prompt-assembly sketch follows this list).
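
As a minimal sketch of how retrieved passages could feed the model in a RAG setting (the retriever output and variable names below are illustrative assumptions, not part of this repository), they can simply be joined into the `contesto` field of the prompt format used in the Usage section:

```python
# Hypothetical retriever output: in a real pipeline these passages would come
# from a vector store or a search index. Illustrative only.
retrieved_passages = [
    "La Torre di Pisa è un campanile del XII secolo, famoso per la sua inclinazione.",
    "La torre è alta circa 56 metri.",
]
question = "Quanto è alta la torre di Pisa?"  # "How tall is the Tower of Pisa?"

# Join the passages into one context block and reuse the
# "Domanda: ..., contesto: ..." format from the Usage example below.
context = "\n".join(retrieved_passages)
prompt = f"Domanda: {question}, contesto: {context}"
```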

# Evaluation

We evaluated the model on the same test sets used by the Open Ita LLM Leaderboard.

| hellaswag_it acc_norm | arc_it acc_norm | m_mmlu_it 5-shot acc | Average |
|:----------------------|:-----------------|:---------------------|:--------|
| 0.6474                | 0.4671           | to be computed       | 0.52    |

## Usage

Make sure to install these dependencies before running the program:

```bash
pip install transformers torch sentencepiece
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cpu"  # change to "cuda" if you have a GPU and the CUDA toolkit installed

model = AutoModelForCausalLM.from_pretrained("MoxoffSpA/Volare")
tokenizer = AutoTokenizer.from_pretrained("MoxoffSpA/Volare")

# "How tall is the Tower of Pisa?"
question = """Quanto è alta la torre di Pisa?"""
# "The Tower of Pisa is a 12th-century bell tower, famous for its tilt. It is about 56 meters tall."
context = """
La Torre di Pisa è un campanile del XII secolo, famoso per la sua inclinazione. Alta circa 56 metri.
"""

# "Question: {question}, context: {context}"
prompt = f"Domanda: {question}, contesto: {context}"

messages = [
    {"role": "user", "content": prompt}
]

encodeds = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant turn so the model knows it should answer
    return_tensors="pt",
)

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(
    model_inputs,                         # tokenized chat prompt
    max_new_tokens=128,                   # cap on the number of newly generated tokens
    do_sample=True,                       # sample instead of greedy decoding
    temperature=0.1,                      # low temperature keeps the output close to deterministic
    top_p=0.95,                           # nucleus sampling for more coherent generation
    eos_token_id=tokenizer.eos_token_id,  # stop once the end-of-sequence token is produced
)

decoded_output = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
trimmed_output = decoded_output.strip()
print(trimmed_output)
```
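
If a CUDA-capable GPU is available, the snippet above can also run in half precision to reduce memory use. A minimal sketch, assuming enough GPU memory for the 7B weights:

```python
import torch
from transformers import AutoModelForCausalLM

# Load the weights in fp16 directly on the GPU; generation then works exactly
# as in the snippet above, with device set to "cuda".
model = AutoModelForCausalLM.from_pretrained(
    "MoxoffSpA/Volare",
    torch_dtype=torch.float16,  # half precision roughly halves memory vs fp32
).to("cuda")
```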

## Bias, Risks and Limitations

Volare has not been aligned to human preferences for safety through an RLHF phase, nor is it deployed with in-the-loop filtering of
responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). The size and composition
of the corpus used to train the base model are also unknown, although it likely included a mix of web data and technical
sources such as books and code.

## Links to resources

- SQUAD-it dataset: https://huggingface.co/datasets/squad_it
- Gemma-7b model: https://huggingface.co/google/gemma-7b
- Open Ita LLM Leaderboard: https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard

## Quantized versions

We have also published 4-bit and 8-bit versions of this model:
https://huggingface.co/MoxoffSpA/VolareQuantized
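
As an alternative to downloading the pre-quantized weights, the base model can be quantized on the fly with bitsandbytes. A minimal sketch, assuming `bitsandbytes` is installed and a CUDA GPU is available (the pre-quantized repository above may use a different format):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with fp16 compute, applied while the weights load.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "MoxoffSpA/Volare",
    quantization_config=quant_config,
    device_map="auto",  # place the quantized layers on the available GPU(s)
)
```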

## The Moxoff Team

Jacopo Abate, Marco D'Ambra, Luigi Simeone, Gianpaolo Francesco Trotta