JunxiongWang committed
Commit 55eafdf
1 Parent(s): 2b07191

Update README.md

Files changed (1)
  1. README.md +24 -64
README.md CHANGED
@@ -1,67 +1,27 @@
  ---
- license: llama3.2
- base_model: meta-llama/Llama-3.2-3B-Instruct
- tags:
- - alignment-handbook
- - generated_from_trainer
- datasets:
- - JunxiongWang/sftdatasetv3
- model-index:
- - name: Llama-Mamba-3.2-3B-teacher-Llama-3.1-70B-Instruct-kl1.0-ce0.0-update
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/together-research/huggingface/runs/qfrd2pha)
- # Llama-Mamba-3.2-3B-teacher-Llama-3.1-70B-Instruct-kl1.0-ce0.0-update
-
- This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on the JunxiongWang/sftdatasetv3 dataset.
- It achieves the following results on the evaluation set:
- - Loss: 375.7168
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 2e-05
- - train_batch_size: 4
- - eval_batch_size: 4
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 8
- - gradient_accumulation_steps: 2
- - total_train_batch_size: 64
- - total_eval_batch_size: 32
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_ratio: 0.01
- - num_epochs: 1
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:------:|:-----:|:---------------:|
- | 329.7069 | 1.0000 | 51995 | 375.7168 |
-
-
- ### Framework versions
-
- - Transformers 4.43.1
- - Pytorch 2.1.1+cu118
- - Datasets 3.1.0
- - Tokenizers 0.19.1
 
  ---
+ license: apache-2.0
  ---

+ Zero-shot results when using [Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) as the teacher model and [Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) as the initialized model:
+
+ | Task          | Llama-3.2-3B-Instruct | Llama3.2-Mamba-3B-distill |
+ |---------------|-----------------------|---------------------------|
+ | arc_challenge | 0.459                 | 0.4838                    |
+ | arc_easy      | 0.7407                | 0.7765                    |
+ | hellaswag     | 0.7043                | 0.7037                    |
+ | mmlu          | 0.6043                | 0.5448                    |
+ | openbookqa    | 0.36                  | 0.394                     |
+ | piqa          | 0.7568                | 0.7731                    |
+ | pubmedqa      | 0.696                 | 0.664                     |
+ | race          | 0.4067                | 0.4029                    |
+ | winogrande    | 0.6748                | 0.6732                    |
+
+ ```
+ @article{junxiongdaniele2024mambainllama,
+   title   = {The Mamba in the Llama: Distilling and Accelerating Hybrid Models},
+   author  = {Junxiong Wang and Daniele Paliotta and Avner May and Alexander M. Rush and Tri Dao},
+   journal = {arXiv preprint arXiv:2408.15237},
+   year    = {2024}
+ }
+ ```
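
The task names in the new card's table (arc_challenge, arc_easy, hellaswag, mmlu, openbookqa, piqa, pubmedqa, race, winogrande) match the standard zero-shot tasks in EleutherAI's lm-evaluation-harness, although the card does not say which harness produced the numbers. Below is a minimal, hypothetical reproduction sketch under that assumption; `MODEL_ID` is a placeholder for this repository's id, and the hybrid Mamba-Llama checkpoint may require the authors' own loading code rather than the plain `hf` backend shown here.

```python
# Hypothetical reproduction sketch for the zero-shot table above.
# Assumptions (not stated in the card): the scores come from EleutherAI's
# lm-evaluation-harness, and the distilled checkpoint loads through the
# standard Hugging Face causal-LM ("hf") backend.
import lm_eval

MODEL_ID = "JunxiongWang/Llama3.2-Mamba-3B-distill"  # placeholder repo id

results = lm_eval.simple_evaluate(
    model="hf",
    model_args=f"pretrained={MODEL_ID},dtype=bfloat16",
    tasks=[
        "arc_challenge", "arc_easy", "hellaswag", "mmlu", "openbookqa",
        "piqa", "pubmedqa", "race", "winogrande",
    ],
    num_fewshot=0,  # zero-shot, matching the table
    batch_size=8,
)

# Print per-task metrics (the accuracy key varies by task).
for task, metrics in results["results"].items():
    print(task, metrics)
```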
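
For context, the card that this commit replaces recorded the fine-tuning configuration (learning rate 2e-05, cosine schedule with warmup ratio 0.01, one epoch, effective train batch size 64 over 8 GPUs). Purely as an illustration of those recorded settings, and not the authors' actual distillation training code, they map onto `transformers.TrainingArguments` roughly as follows; `output_dir` is a placeholder.

```python
# Illustrative mapping of the hyperparameters listed in the removed card
# onto transformers.TrainingArguments. Not the authors' training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama3.2-mamba-3b-distill",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=4,   # "train_batch_size: 4" (per device)
    per_device_eval_batch_size=4,    # "eval_batch_size: 4" (per device)
    gradient_accumulation_steps=2,   # 8 GPUs x 4 x 2 = total train batch 64
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.01,
    seed=42,
    adam_beta1=0.9,                  # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```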