---
datasets:
- databricks/databricks-dolly-15k
base_model: tiiuae/falcon-40b
---

### Finetuning Overview:

**Model Used:** tiiuae/falcon-40b

**Dataset:** Databricks-dolly-15k

#### Dataset Insights:

The Databricks-dolly-15k dataset, comprising over 15,000 records, stands as a testament to the dedication of numerous Databricks professionals. Aimed at refining the interactive capabilities of systems like ChatGPT, the dataset offers:

- Prompt/response pairs across eight distinct instruction categories.
- A blend of the seven categories from the InstructGPT paper and an open-ended category.
- Original content, devoid of generative AI influence and primarily offline-sourced, with exceptions for Wikipedia references.
- Interactive sessions where contributors could address and rephrase peer questions.
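
For a quick sanity check of the record count and category coverage described above, the dataset can be loaded directly (a small sketch using the Hugging Face `datasets` library):

```python
from collections import Counter
from datasets import load_dataset

# Load the public dataset and confirm its size and the eight instruction categories.
ds = load_dataset("databricks/databricks-dolly-15k", split="train")
print(len(ds))                  # ~15,000 records
print(Counter(ds["category"]))  # counts per instruction category
```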

Note: Some data categories incorporate Wikipedia references, indicated by bracketed citation numbers, e.g., [42]. Removing these markers is recommended for downstream applications; a minimal example follows.
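
A minimal cleanup sketch, assuming the citations always take the form `[42]` inside the `context` field:

```python
import re

def strip_citations(text: str) -> str:
    # Drop bracketed Wikipedia citation markers like "[42]", tidying the spacing.
    return re.sub(r"\s*\[\d+\]", "", text)

print(strip_citations("The falcon [1] is a bird of prey [42]."))
# -> "The falcon is a bird of prey."
```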

#### Finetuning Details:

Leveraging [MonsterAPI](https://monsterapi.ai)'s no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm), our finetuning emphasized:

- **Cost-Effectiveness:** A complete run for just `$11.80`.
- **Efficiency:** On a single A6000 48GB GPU, the run finished in 5 hours and 40 minutes.

#### Hyperparameters & Additional Details:

- **Epochs:** 1
- **Learning Rate:** 0.0002
- **Data Split:** Training 90% / Validation 10%
- **Gradient Accumulation Steps:** 4
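
MonsterAPI's pipeline itself is closed, so the sketch below only mirrors the hyperparameters listed above in a conventional `transformers` + `peft` setup; the LoRA settings, sequence length, and batch size are assumptions, not values from this card:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

MODEL_ID = "tiiuae/falcon-40b"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
tokenizer.pad_token = tokenizer.eos_token

def to_text(ex):
    # Prompt template from this card (see "Prompt Structure" below).
    return {"text": f"### INSTRUCTION:\n{ex['instruction']}\n\n"
                    f"### RESPONSE:\n{ex['response']}"}

raw = load_dataset("databricks/databricks-dolly-15k", split="train")
tokenized = (raw.map(to_text)
                .map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                     remove_columns=raw.column_names + ["text"]))
splits = tokenized.train_test_split(test_size=0.1, seed=42)  # Data split: 90% / 10%

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)
model = get_peft_model(model, LoraConfig(
    task_type="CAUSAL_LM", r=8, lora_alpha=16,
    target_modules=["query_key_value"]))  # assumed LoRA settings, not from the card

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="falcon-40b-dolly",
        num_train_epochs=1,              # Epochs: 1
        learning_rate=2e-4,              # Learning rate: 0.0002
        gradient_accumulation_steps=4,   # Gradient accumulation steps: 4
        per_device_train_batch_size=1,   # assumed; not stated on the card
    ),
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```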

---

### Prompt Structure:

```
### INSTRUCTION:
[instruction]

### RESPONSE:
[response]
```
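
At inference time the same template has to be reproduced exactly. A hypothetical usage sketch follows; the model id shown is the base model as a placeholder, to be replaced with the finetuned checkpoint once published:

```python
from transformers import pipeline

# Placeholder model id: point this at the actual finetuned checkpoint.
generator = pipeline("text-generation", model="tiiuae/falcon-40b",
                     device_map="auto", torch_dtype="auto")

prompt = ("### INSTRUCTION:\n"
          "Summarize the benefits of gradient accumulation.\n\n"
          "### RESPONSE:\n")
result = generator(prompt, max_new_tokens=128, do_sample=False)
print(result[0]["generated_text"])
```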

#### Loss Metrics:

Training loss:

![training loss](train-loss.png "Training loss")