Update README.md
README.md CHANGED
@@ -17,6 +17,57 @@ Some reasons for using these checkpoints:
- You can use them as a starting point to train your own small language model.
- More interestingly, you can probe into the learning process of these models to understand how an LLM learns to mimic humans.
# Evaluation results
**Note**: these results do not represent the final performance of the model and should only serve as a reference for my training progress.
```
checkpoint: step-00088000

|    Tasks    |Version|Filter|n-shot| Metric |Value |   |Stderr|
|-------------|------:|------|-----:|--------|-----:|---|-----:|
|piqa         |      1|none  |     0|acc     |0.6202|±  |0.0113|
|             |       |none  |     0|acc_norm|0.6213|±  |0.0113|
|boolq        |      2|none  |     0|acc     |0.5875|±  |0.0086|
|arc_challenge|      1|none  |     0|acc     |0.1980|±  |0.0116|
|             |       |none  |     0|acc_norm|0.2201|±  |0.0121|
|arc_easy     |      1|none  |     0|acc     |0.4373|±  |0.0102|
|             |       |none  |     0|acc_norm|0.3935|±  |0.0100|
|winogrande   |      1|none  |     0|acc     |0.5004|±  |0.0141|
|openbookqa   |      1|none  |     0|acc     |0.1760|±  |0.0170|
|             |       |none  |     0|acc_norm|0.2680|±  |0.0198|
|hellaswag    |      1|none  |     0|acc     |0.2893|±  |0.0045|
|             |       |none  |     0|acc_norm|0.3125|±  |0.0046|
```
You can use the following script to reproduce the results (assuming you have installed litgpt):
```
MODEL_NAME="step-00088000"
MODEL_OUTPUT_ROOT="MicroLlamaV2-VastAI-Checkpoints/out/pretrain/micro-llama-v2"
MODEL_OUTPUT_REL="${MODEL_OUTPUT_ROOT}/${MODEL_NAME}"

# Download the checkpoint files from the Hugging Face Hub
huggingface-cli download keeeeenw/MicroLlama2-checkpoints ${MODEL_NAME}/lit_model.pth --local-dir checkpoints/${MODEL_OUTPUT_ROOT}/
huggingface-cli download keeeeenw/MicroLlama2-checkpoints ${MODEL_NAME}/generation_config.json --local-dir checkpoints/${MODEL_OUTPUT_ROOT}/
huggingface-cli download keeeeenw/MicroLlama2-checkpoints ${MODEL_NAME}/hyperparameters.yaml --local-dir checkpoints/${MODEL_OUTPUT_ROOT}/
huggingface-cli download keeeeenw/MicroLlama2-checkpoints ${MODEL_NAME}/model_config.yaml --local-dir checkpoints/${MODEL_OUTPUT_ROOT}/
huggingface-cli download keeeeenw/MicroLlama2-checkpoints ${MODEL_NAME}/tokenizer.json --local-dir checkpoints/${MODEL_OUTPUT_ROOT}/
huggingface-cli download keeeeenw/MicroLlama2-checkpoints ${MODEL_NAME}/tokenizer_config.json --local-dir checkpoints/${MODEL_OUTPUT_ROOT}/

# Copy the corrected config.json into the checkpoint directory, see "caveat" below
cp -r <local_path>/config.json checkpoints/${MODEL_OUTPUT_REL}/

# Alternatively, copy the checkpoint from S3 (AWS)
# aws s3 cp s3://microllama-v2/checkpoints/out/pretrain/micro-llama-v2/${MODEL_NAME} checkpoints/${MODEL_OUTPUT_REL} --recursive

litgpt evaluate \
    ${MODEL_OUTPUT_REL} \
    --tasks "hellaswag,openbookqa,winogrande,arc_easy,arc_challenge,boolq,piqa" \
    --device cuda:0 \
    --batch_size 16
```
**Caveat**: for some reason the auto-generated config.json for the model in the checkpoint is incorrect; you will need to replace it with https://huggingface.co/keeeeenw/MicroLlama2-checkpoints/blob/main/config.json to resolve the evaluation error.
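For example, the replacement can be done with the same huggingface-cli pattern used in the script above. This is only a sketch; the temporary download directory is just a placeholder.

```
# Fetch the corrected config.json from the root of the checkpoint repo
huggingface-cli download keeeeenw/MicroLlama2-checkpoints config.json --local-dir /tmp/microllama2-config

# Overwrite the auto-generated config.json in the checkpoint directory
cp /tmp/microllama2-config/config.json checkpoints/${MODEL_OUTPUT_REL}/
```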
# How to use these checkpoints
These checkpoints are compatible with [litgpt](https://github.com/Lightning-AI/litgpt) with slight modifications (see the section below).
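As a quick smoke test, once a checkpoint has been downloaded as in the evaluation script above and the modifications mentioned below are in place, generation should roughly look like the following. This is only a sketch; the exact litgpt CLI flags can differ between versions, so check `litgpt generate --help` for your install.

```
# Generate a short completion from the downloaded checkpoint
litgpt generate ${MODEL_OUTPUT_REL} \
    --prompt "Once upon a time" \
    --max_new_tokens 64
```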
@@ -40,6 +91,8 @@ Reference:
1. litgpt pretrain checkpoint to inference checkpoint: https://github.com/Lightning-AI/litgpt/blob/main/tutorials/pretrain_tinyllama.md#export-checkpoints
2. litgpt inference checkpoint to HF checkpoints: https://github.com/Lightning-AI/litgpt/blob/main/tutorials/convert_lit_models.md
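Roughly, those two conversion steps look like the following. This is a sketch based on the tutorials linked above: it reuses the variables from the evaluation script, the output directory names are placeholders, and the exact CLI form may differ between litgpt versions.

```
# 1. Pretraining checkpoint -> inference-ready litgpt checkpoint
litgpt convert_pretrained_checkpoint \
    checkpoints/${MODEL_OUTPUT_REL} \
    checkpoints/${MODEL_OUTPUT_REL}-converted

# 2. litgpt checkpoint -> Hugging Face weights; this writes a model.pth state dict
#    that can be loaded into a transformers model as described in the linked tutorial
litgpt convert_from_litgpt \
    checkpoints/${MODEL_OUTPUT_REL}-converted \
    out/hf-microllama-v2
```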
**Caveat**: for some reason the auto-generated config.json for the model in the checkpoint is incorrect; you will need to replace it with https://huggingface.co/keeeeenw/MicroLlama2-checkpoints/blob/main/config.json to resolve any inference or evaluation error.
# Advanced usage - pretraining with litgpt