Andrea Colombo committed
Commit 4793ab9 · 1 Parent(s): bb99c5b

upload fine-tuned model

README.md CHANGED
@@ -9,198 +9,30 @@ tags:
  base_model: mistralai/Mistral-7B-Instruct-v0.1
  ---
 
- # Model Card for Model ID
- 
- <!-- Provide a quick summary of what the model is/does. -->
- 
- ## Model Details
- 
- ### Model Description
- 
- <!-- Provide a longer summary of what this model is. -->
- 
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
- 
- ### Model Sources [optional]
- 
- <!-- Provide the basic links for the model. -->
- 
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
- 
- ## Uses
- 
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
- 
- ### Direct Use
- 
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
- 
- [More Information Needed]
- 
- ### Downstream Use [optional]
- 
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
- 
- [More Information Needed]
- 
- ### Out-of-Scope Use
- 
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
- 
- [More Information Needed]
- 
- ## Bias, Risks, and Limitations
- 
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
- 
- [More Information Needed]
- 
- ### Recommendations
- 
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
- 
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+ # Model Description
+ 
+ A Mistral-7B-Instruct-v0.1 model fine-tuned to extract a title from the text of Italian law articles. It was fine-tuned on a set of 100k text-title pairs drawn from the Italian legislation and can be used to generate titles for articles or attachments that lack a pre-defined title.
+ 
+ - **Developed by:** Andrea Colombo, Politecnico di Milano
+ - **Model type:** text generation
+ - **Language(s) (NLP):** Italian
+ - **License:** Apache 2.0
+ - **Finetuned from model:** mistralai/Mistral-7B-Instruct-v0.1
 
  ## How to Get Started with the Model
 
- Use the code below to get started with the model.
- 
- [More Information Needed]
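+ A minimal inference sketch (assumptions: the adapter repo id below is a placeholder for this repository, and the instruction prompt is illustrative, as the exact training prompt format is not documented here):
+ 
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
+ from peft import PeftModel
+ 
+ base_id = "mistralai/Mistral-7B-Instruct-v0.1"
+ adapter_id = "<adapter-repo-id>"  # placeholder: the id of this adapter repository
+ 
+ # Load the base model in 4-bit, matching the quantization used for fine-tuning
+ bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
+ tokenizer = AutoTokenizer.from_pretrained(base_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     base_id, quantization_config=bnb, device_map="auto"
+ )
+ 
+ # Attach the fine-tuned LoRA adapter
+ model = PeftModel.from_pretrained(model, adapter_id)
+ 
+ article = "..."  # text of an Italian law article
+ # Illustrative instruction: "Generate a title for the following law article"
+ prompt = f"[INST] Genera un titolo per il seguente articolo di legge:\n{article} [/INST]"
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ out = model.generate(**inputs, max_new_tokens=64)
+ print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
+ ```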
 
  ## Training Details
 
- ### Training Data
- 
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
- 
- [More Information Needed]
- 
  ### Training Procedure
 
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- 
- #### Preprocessing [optional]
- 
- [More Information Needed]
- 
- #### Training Hyperparameters
- 
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
- 
- #### Speeds, Sizes, Times [optional]
- 
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
- 
- [More Information Needed]
+ The model was trained for 100 steps with a batch size of 4, 4-bit quantization via bitsandbytes, and a LoRA rank of 64.
+ We used the paged Adam optimizer, a learning rate of 2e-4, and a cosine learning-rate scheduler with a 0.03 warm-up fraction.
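+ 
+ A sketch of a matching QLoRA configuration (illustrative, not the exact training script: the `nf4` quantization settings and the `paged_adamw_32bit` optimizer variant are assumptions, while the LoRA values mirror the shipped `adapter_config.json`):
+ 
+ ```python
+ import torch
+ from transformers import BitsAndBytesConfig, TrainingArguments
+ from peft import LoraConfig
+ 
+ # 4-bit quantization via bitsandbytes (exact settings assumed)
+ bnb = BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_quant_type="nf4",
+     bnb_4bit_compute_dtype=torch.bfloat16,
+ )
+ 
+ # LoRA settings, mirroring adapter_config.json
+ lora = LoraConfig(
+     r=64,
+     lora_alpha=16,
+     lora_dropout=0.1,
+     target_modules=["q_proj", "v_proj"],
+     task_type="CAUSAL_LM",
+ )
+ 
+ # Hyperparameters as stated above and logged in trainer_state.json
+ args = TrainingArguments(
+     output_dir="out",
+     per_device_train_batch_size=4,
+     max_steps=100,
+     learning_rate=2e-4,
+     lr_scheduler_type="cosine",
+     warmup_ratio=0.03,
+     optim="paged_adamw_32bit",  # "paged Adam" per the card; exact variant assumed
+     evaluation_strategy="steps",
+     eval_steps=20,
+     logging_steps=10,
+ )
+ ```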
 
  ## Evaluation
 
- <!-- This section describes the evaluation protocols and provides the results. -->
- 
- ### Testing Data, Factors & Metrics
- 
- #### Testing Data
- 
- <!-- This should link to a Dataset Card if possible. -->
- 
- [More Information Needed]
- 
- #### Factors
- 
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
- 
- [More Information Needed]
- 
- #### Metrics
- 
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
- 
- [More Information Needed]
- 
- ### Results
- 
- [More Information Needed]
- 
- #### Summary
- 
- 
- 
- ## Model Examination [optional]
- 
- <!-- Relevant interpretability work for the model goes here -->
- 
- [More Information Needed]
- 
- ## Environmental Impact
- 
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
- 
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- 
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
- 
- ## Technical Specifications [optional]
- 
- ### Model Architecture and Objective
- 
- [More Information Needed]
- 
- ### Compute Infrastructure
- 
- [More Information Needed]
- 
- #### Hardware
- 
- [More Information Needed]
- 
- #### Software
- 
- [More Information Needed]
- 
- ## Citation [optional]
- 
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
- 
- **BibTeX:**
- 
- [More Information Needed]
- 
- **APA:**
- 
- [More Information Needed]
- 
- ## Glossary [optional]
- 
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
- 
- [More Information Needed]
- 
- ## More Information [optional]
- 
- [More Information Needed]
- 
- ## Model Card Authors [optional]
- 
- [More Information Needed]
- 
- ## Model Card Contact
- 
- [More Information Needed]
- ### Framework versions
 
- - PEFT 0.11.1
+ The best model reported an evaluation loss of 1.241.
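+ 
+ Loss trajectory from the trainer_state.json shipped in this commit:
+ 
+ | Step | Training loss | Eval loss |
+ |------|---------------|-----------|
+ | 20   | 1.5007        | 1.4433    |
+ | 40   | 1.3632        | 1.3293    |
+ | 60   | 1.2801        | 1.2858    |
+ | 80   | 1.3061        | 1.2607    |
+ | 100  | 1.2262        | 1.2411    |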
 
adapter_config.json ADDED
@@ -0,0 +1,29 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "mistralai/Mistral-7B-Instruct-v0.1",
+   "bias": "none",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 16,
+   "lora_dropout": 0.1,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 64,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "v_proj",
+     "q_proj"
+   ],
+   "task_type": "CAUSAL_LM",
+   "use_dora": false,
+   "use_rslora": false
+ }
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2e293ab6a52bb26bb69612ccc6c47d2b1c68f04232cc0dad369b1fc4a9456de0
+ size 109069176
optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:111e9cd1e94b04428204e1f0d3962c18104226f8b74c3852cf0d4fcad2ec7c12
+ size 218211962
rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:16efb5e84d31149576cd7f684c4f49c3efb70127b39a234041019d54c6ea9612
+ size 14244
scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:81007ec48272bbdc4f9622c046f9c026bf8120ed11d1398fd97bb5168a6f3dda
+ size 1064
special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "</s>",
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,42 @@
+ {
+   "add_bos_token": true,
+   "add_eos_token": false,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [],
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "</s>",
+   "legacy": true,
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "</s>",
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": false
+ }
trainer_state.json ADDED
@@ -0,0 +1,143 @@
+ {
+   "best_metric": null,
+   "best_model_checkpoint": null,
+   "epoch": 0.008880994671403197,
+   "eval_steps": 20,
+   "global_step": 100,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     {
+       "epoch": 0.0008880994671403197,
+       "grad_norm": 0.3568251430988312,
+       "learning_rate": 0.0002,
+       "loss": 1.6891,
+       "step": 10
+     },
+     {
+       "epoch": 0.0017761989342806395,
+       "grad_norm": 0.29317253828048706,
+       "learning_rate": 0.0002,
+       "loss": 1.5007,
+       "step": 20
+     },
+     {
+       "epoch": 0.0017761989342806395,
+       "eval_loss": 1.4433443546295166,
+       "eval_runtime": 2364.2696,
+       "eval_samples_per_second": 4.79,
+       "eval_steps_per_second": 0.599,
+       "step": 20
+     },
+     {
+       "epoch": 0.0026642984014209592,
+       "grad_norm": 0.2844996154308319,
+       "learning_rate": 0.0002,
+       "loss": 1.4145,
+       "step": 30
+     },
+     {
+       "epoch": 0.003552397868561279,
+       "grad_norm": 0.26554158329963684,
+       "learning_rate": 0.0002,
+       "loss": 1.3632,
+       "step": 40
+     },
+     {
+       "epoch": 0.003552397868561279,
+       "eval_loss": 1.3292763233184814,
+       "eval_runtime": 2364.4165,
+       "eval_samples_per_second": 4.79,
+       "eval_steps_per_second": 0.599,
+       "step": 40
+     },
+     {
+       "epoch": 0.004440497335701598,
+       "grad_norm": 0.2824796140193939,
+       "learning_rate": 0.0002,
+       "loss": 1.3502,
+       "step": 50
+     },
+     {
+       "epoch": 0.0053285968028419185,
+       "grad_norm": 0.2961088716983795,
+       "learning_rate": 0.0002,
+       "loss": 1.2801,
+       "step": 60
+     },
+     {
+       "epoch": 0.0053285968028419185,
+       "eval_loss": 1.2857508659362793,
+       "eval_runtime": 2364.2023,
+       "eval_samples_per_second": 4.79,
+       "eval_steps_per_second": 0.599,
+       "step": 60
+     },
+     {
+       "epoch": 0.006216696269982238,
+       "grad_norm": 0.3122478723526001,
+       "learning_rate": 0.0002,
+       "loss": 1.2985,
+       "step": 70
+     },
+     {
+       "epoch": 0.007104795737122558,
+       "grad_norm": 0.2596866488456726,
+       "learning_rate": 0.0002,
+       "loss": 1.3061,
+       "step": 80
+     },
+     {
+       "epoch": 0.007104795737122558,
+       "eval_loss": 1.2607264518737793,
+       "eval_runtime": 2364.551,
+       "eval_samples_per_second": 4.789,
+       "eval_steps_per_second": 0.599,
+       "step": 80
+     },
+     {
+       "epoch": 0.007992895204262877,
+       "grad_norm": 0.28420358896255493,
+       "learning_rate": 0.0002,
+       "loss": 1.2627,
+       "step": 90
+     },
+     {
+       "epoch": 0.008880994671403197,
+       "grad_norm": 0.27427056431770325,
+       "learning_rate": 0.0002,
+       "loss": 1.2262,
+       "step": 100
+     },
+     {
+       "epoch": 0.008880994671403197,
+       "eval_loss": 1.2411292791366577,
+       "eval_runtime": 2364.3817,
+       "eval_samples_per_second": 4.79,
+       "eval_steps_per_second": 0.599,
+       "step": 100
+     }
+   ],
+   "logging_steps": 10,
+   "max_steps": 100,
+   "num_input_tokens_seen": 0,
+   "num_train_epochs": 1,
+   "save_steps": 500,
+   "stateful_callbacks": {
+     "TrainerControl": {
+       "args": {
+         "should_epoch_stop": false,
+         "should_evaluate": false,
+         "should_log": false,
+         "should_save": true,
+         "should_training_stop": true
+       },
+       "attributes": {}
+     }
+   },
+   "total_flos": 3.50843194834944e+16,
+   "train_batch_size": 4,
+   "trial_name": null,
+   "trial_params": null
+ }
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a27525a3487d07abf90399803e3b6b9e91108ad784516779b81492131f72e08b
+ size 5368