# Finetuning CodeLlama-7b on a text-to-SQL task
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on [b-mc2/sql-create-context](https://huggingface.co/datasets/b-mc2/sql-create-context).
It was trained on 10,000 random samples from the dataset, following the fine-tuning recipe described by [Phil Schmid](https://www.philschmid.de/fine-tune-llms-in-2024-with-trl).
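The guide linked above fine-tunes with TRL's `SFTTrainer` on chat-formatted data. As a rough sketch of the preprocessing step (the system prompt and message layout here are assumptions for illustration, not the exact template used to train this model), one row of `b-mc2/sql-create-context` — which has `question`, `context`, and `answer` fields — can be mapped to that format like so:

```python
# Illustrative sketch only: convert a b-mc2/sql-create-context row into the
# conversational "messages" format that TRL's SFTTrainer accepts. The system
# prompt below is an assumption, not the one used for this model.

SYSTEM_PROMPT = (
    "You are a text-to-SQL assistant. Given a database schema and a "
    "question, answer with the SQL query only."
)

def to_chat_example(sample: dict) -> dict:
    """Map one dataset row ({'question', 'context', 'answer'}) to messages."""
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"{sample['context']}\n\n{sample['question']}"},
            {"role": "assistant", "content": sample["answer"]},
        ]
    }

# First row of the dataset, shown for illustration.
sample = {
    "question": "How many heads of the departments are older than 56 ?",
    "context": "CREATE TABLE head (age INTEGER)",
    "answer": "SELECT COUNT(*) FROM head WHERE age > 56",
}
example = to_chat_example(sample)
```

In the guide, a function like this is applied over the whole dataset with `Dataset.map` before training.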
## Training hyperparameters

| Hyperparameter            | Value |
| ------------------------- | ----- |
| lr_scheduler_warmup_ratio | 0.03  |
| num_epochs                | 3     |
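Note that `lr_scheduler_warmup_ratio` is a fraction of total optimizer steps, not a step count. A quick sketch of the arithmetic — only the sample count, epoch count, and ratio come from this card; the batch size and gradient-accumulation values are assumptions for illustration:

```python
import math

# Values stated in this model card:
num_samples = 10_000
num_epochs = 3
warmup_ratio = 0.03

# Assumed for illustration only (not taken from the actual training run):
per_device_batch_size = 4
grad_accum_steps = 2

# Effective optimizer steps per epoch and in total.
steps_per_epoch = num_samples // (per_device_batch_size * grad_accum_steps)
total_steps = steps_per_epoch * num_epochs

# Transformers' Trainer derives warmup steps as ceil(total_steps * ratio)
# when warmup_steps is not set explicitly.
warmup_steps = math.ceil(total_steps * warmup_ratio)
```

Under these assumed batch settings, the learning rate would warm up over the first 113 of 3,750 optimizer steps.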
## Training results
## Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2