alicata committed
Commit 8dbed0c · verified · 1 Parent(s): 8bf59bd

Model save

README.md ADDED
@@ -0,0 +1,61 @@
+ ---
+ base_model: meta-llama/Meta-Llama-3.1-8B
+ datasets:
+ - generator
+ library_name: peft
+ license: llama3.1
+ tags:
+ - trl
+ - sft
+ - generated_from_trainer
+ model-index:
+ - name: code-llama-3-1-8b-text-to-sql
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # code-llama-3-1-8b-text-to-sql
+
+ This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the generator dataset.
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0002
+ - train_batch_size: 1
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 8
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: constant
+ - lr_scheduler_warmup_ratio: 0.03
+ - num_epochs: 1
+
+ ### Training results
+
+
+
+ ### Framework versions
+
+ - PEFT 0.13.0
+ - Transformers 4.44.2
+ - Pytorch 2.4.1+cu121
+ - Datasets 3.0.1
+ - Tokenizers 0.19.1
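Note that the total_train_batch_size of 8 in the card above follows from train_batch_size × gradient_accumulation_steps = 1 × 8. Below is a minimal sketch of equivalent `TrainingArguments`, reconstructed only from the hyperparameters listed in the card; the `output_dir` and anything else not shown there are assumptions, not the repo's actual training script.

```python
# Hypothetical reconstruction of the run's TrainingArguments from the
# model card above; output_dir and any field not listed are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="code-llama-3-1-8b-text-to-sql",  # assumed
    learning_rate=2e-4,                 # learning_rate: 0.0002
    per_device_train_batch_size=1,      # train_batch_size: 1
    per_device_eval_batch_size=8,       # eval_batch_size: 8
    gradient_accumulation_steps=8,      # effective batch size: 1 * 8 = 8
    lr_scheduler_type="constant",
    warmup_ratio=0.03,                  # lr_scheduler_warmup_ratio: 0.03
    num_train_epochs=1,
    seed=42,
    optim="adamw_torch",                # Adam, betas=(0.9, 0.999), eps=1e-08
)
```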
adapter_config.json CHANGED
@@ -20,13 +20,13 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-  "up_proj",
-  "gate_proj",
-  "q_proj",
   "v_proj",
-  "o_proj",
   "down_proj",
-  "k_proj"
+  "k_proj",
+  "gate_proj",
+  "q_proj",
+  "up_proj",
+  "o_proj"
   ],
   "task_type": "CAUSAL_LM",
   "use_dora": false,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:44eddfaf1ad5e221e2fe4960a781e95cd2f304bfbc163aa57dd918ba0b70d279
+ oid sha256:d67b54940e869445fe5309f0eb5326f3a139c69202b35ea966dfbf1f2e30b758
  size 2436984000
runs/Oct12_13-08-49_g/events.out.tfevents.1728763731.g.22948.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:174e680e8d9f3628569b6db73a483c9c41f2f17e527153977ff0e48e996424de
- size 6041
+ oid sha256:9715a5d67088b99872ddd22950934db61926adf72d86e42cf0da42c4f681d1d8
+ size 6389
runs/Oct12_13-40-46_g/events.out.tfevents.1728765647.g.10016.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:19e4d5472e156aea3da6a7ba78b7c472d0970684c0e4a8c8bccbbd30fcbe78d9
+ size 6389
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1e470c93c3fc544ac449fe1f9655ee5e5daa4fbc3f333267a01c408fed02628e
+ oid sha256:948a35e29b924a2ee006fe7d7a50354064aa0471d8ffe3dabfcf9325873a6ece
  size 5496
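The `adapter_model.safetensors` updated by this commit is a LoRA adapter, not full model weights, so it is loaded on top of the base model. A minimal usage sketch with PEFT follows; the adapter repo id and prompt are placeholders, and access to meta-llama/Meta-Llama-3.1-8B is gated behind the Llama 3.1 license.

```python
# Assumed usage sketch, not from the repo: load the saved LoRA adapter on
# top of meta-llama/Meta-Llama-3.1-8B. The adapter_id below is a guess at
# the repo path; substitute the real one.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "alicata/code-llama-3-1-8b-text-to-sql"  # assumed repo id

model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

prompt = "Translate to SQL: list all customers who placed an order in 2023."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```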