---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->
This is a test model, created by:

- **shrinking llama-3-8b down to only 2 transformer decoder layers**
- **adding a customized layer to the Llama attention**
- **shrinking the total parameter count to around 2B... so pathetic** 😢😢😢
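The layer-shrinking step above can be sketched with the standard `transformers` Llama classes: set `num_hidden_layers=2` in the config and build the model from it. The dimensions below are toy values so the example runs quickly; they are not llama-3-8b's real sizes (those are noted in the comments), and the resulting weights are random until re-trained.

```python
# Sketch: shrink a Llama-style config to 2 decoder layers and build the
# model from that config (random init; re-training comes afterwards).
# Toy dimensions are used here; llama-3-8b's real values are in comments.
from transformers import LlamaConfig, LlamaForCausalLM

config = LlamaConfig(
    hidden_size=256,          # llama-3-8b: 4096
    intermediate_size=512,    # llama-3-8b: 14336
    num_attention_heads=4,    # llama-3-8b: 32
    num_key_value_heads=2,    # llama-3-8b: 8
    num_hidden_layers=2,      # the shrinking step: only 2 decoder layers
    vocab_size=1000,          # llama-3-8b: 128256
)
model = LlamaForCausalLM(config)

print(len(model.model.layers))                     # number of decoder layers
print(sum(p.numel() for p in model.parameters()))  # total parameter count
```

With llama-3-8b's real dimensions, the embedding and LM-head matrices dominate, which is why 2 layers still land in the ~2B-parameter range.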

The purpose of this model is to show how you can download a pre-trained Llama, customize it however you want, re-train it on whatever you want, and then upload your model to Hugging Face 🤗.

It is not intended to compete with the larger models developed by big corporations with a GPU advantage.

Note, however, that to use this model properly you cannot simply download it with `LlamaForCausalLM`; you also need the code in the following repository (a .ipynb notebook) to rebuild the same customized layer and merge the Hugging Face safetensors weights into each parameter:

- **Repository:** https://github.com/Brownwang0426/Cullama

# Model Details

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Po-Lung Wang
- **Model type:** Llama-3
- **License:** MIT