legolasyiu committed
Commit • 38f393f
1 Parent(s): 36bce5f
Update README.md
README.md CHANGED
@@ -11,6 +11,130 @@ tags:
- trl
---

# Finance Fireball 12B

# Fireball-12B-v1.0f
This model is fine-tuned on a finance dataset to provide concise **finance** responses.

# Benchmark
- TBD

## Training Dataset
Supervised fine-tuning with the following datasets (an illustrative training sketch follows the list):
- candenizkocak/code-alpaca-297k
- yahma/alpaca-cleaned

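The exact training recipe is not published in this card. Since the model's tags include `trl`, here is a minimal, hypothetical sketch of how a supervised fine-tune over these datasets could be wired up with `trl`'s `SFTTrainer`; the dataset field names, base checkpoint, output path, and all hyperparameters are assumptions, not the authors' settings.

```py
# Illustrative only: not the authors' actual recipe.
from datasets import concatenate_datasets, load_dataset
from trl import SFTConfig, SFTTrainer

def to_text(example):
    # Alpaca-style records: instruction / optional input / output (field names assumed).
    prompt = example["instruction"]
    if example.get("input"):
        prompt += "\n\n" + example["input"]
    return {"text": prompt + "\n\n" + example["output"]}

def prepare(name):
    # Map each record to a single "text" field and drop the rest.
    return load_dataset(name, split="train").map(to_text).select_columns(["text"])

train_ds = concatenate_datasets([
    prepare("yahma/alpaca-cleaned"),
    prepare("candenizkocak/code-alpaca-297k"),
])

trainer = SFTTrainer(
    model="EpistemeAI/Fireball-12B",                  # assumed base checkpoint
    train_dataset=train_ds,
    args=SFTConfig(output_dir="fireball-12b-finance-sft"),  # hypothetical output path
)
trainer.train()
```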

# Model Card for Fireball-12Bf

This is a heavy fine-tune of the Mistral-Nemo-Base-2407 Large Language Model (LLM), a pretrained generative text model of 12B parameters trained jointly by Mistral AI and NVIDIA that significantly outperforms existing models of smaller or similar size.

For more details about the base model, please refer to the Mistral AI release [blog post](https://mistral.ai/news/mistral-nemo/).

## Key features
- Released under the **Apache 2 License**
- Pre-trained and instructed versions
- Trained with a **128k context window**
- Trained on a large proportion of **multilingual and code data**
- Drop-in replacement of Mistral 7B

## Model Architecture
Mistral Nemo is a transformer model with the following architecture choices:
- **Layers:** 40
- **Dim:** 5,120
- **Head dim:** 128
- **Hidden dim:** 14,336
- **Activation Function:** SwiGLU
- **Number of heads:** 32
- **Number of kv-heads:** 8 (GQA)
- **Vocabulary size:** 2**17 ≈ 128k
- **Rotary embeddings (theta = 1M)**
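These numbers can be sanity-checked against the repository's configuration. Below is a minimal sketch using `transformers`' `AutoConfig`, assuming the repo ships a standard Mistral-style `config.json`; the field names are the usual Mistral config keys.

```py
# Sketch: read architecture parameters from the checkpoint's config.json.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("EpistemeAI/Fireball-12B-v1.0f")

print("Layers:     ", config.num_hidden_layers)           # expected 40
print("Dim:        ", config.hidden_size)                 # expected 5,120
print("Head dim:   ", getattr(config, "head_dim", None))  # expected 128
print("Hidden dim: ", config.intermediate_size)           # FFN size
print("Heads:      ", config.num_attention_heads)         # expected 32
print("KV heads:   ", config.num_key_value_heads)         # expected 8 (GQA)
print("Vocab size: ", config.vocab_size)                   # ~128k (2**17)
print("RoPE theta: ", config.rope_theta)                   # expected 1e6
```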

# Guardrail/Moderation guide
For guardrailing and moderating prompts against direct/indirect prompt injections and jailbreaking, please follow the SentinelShield AI GitHub repository:
[SentinelShield AI](https://github.com/tomtyiu/SentinelShieldAI)

#### Demo

After installing `mistral_inference`, a `mistral-demo` CLI command should be available in your environment.

### Transformers
|
62 |
+
|
63 |
+
> [!IMPORTANT]
|
64 |
+
> NOTE: Until a new release has been made, you need to install transformers from source:
|
65 |
+
> ```sh
|
66 |
+
> pip install mistral_inference
|
67 |
+
> pip install mistral-demo
|
68 |
+
> pip install git+https://github.com/huggingface/transformers.git
|
69 |
+
> ```
|

If you want to use Hugging Face `transformers` to generate text, you can do something like this.

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EpistemeAI/Fireball-12B"

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tokenize a prompt and generate a short continuation
inputs = tokenizer("Hello my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
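Since the model is tuned for concise finance answers, a quick smoke test with a finance-style prompt can reuse the `tokenizer` and `model` loaded above; the prompt text below is purely illustrative.

```py
# Continues from the snippet above (tokenizer and model already loaded).
prompt = "In one sentence, what is the difference between a bond's coupon rate and its yield to maturity?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```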

## Accelerator mode

```py
# Requires: pip install accelerate  (GPU A100/L4)
from transformers import AutoModelForCausalLM, AutoTokenizer
from accelerate import Accelerator

# Initialize the accelerator
accelerator = Accelerator()

# Define the model ID
model_id = "EpistemeAI/Fireball-12B-v1.0f"

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load the model
model = AutoModelForCausalLM.from_pretrained(model_id)

# Move the model to the appropriate device using accelerate
model = accelerator.prepare(model)

# Prepare inputs on the same device
inputs = tokenizer("Hello my name is", return_tensors="pt").to(accelerator.device)

# Generate outputs with the model
outputs = model.generate(**inputs, max_new_tokens=20)

# Decode and print the outputs
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
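Alternatively, for a single-node setup you can let `transformers` handle device placement directly via `device_map` (which uses `accelerate` under the hood). This is a minimal sketch, not from the original card; the `bfloat16` choice assumes an A100/L4-class GPU with enough memory for a 12B model.

```py
# Sketch: automatic device placement (requires `accelerate` to be installed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EpistemeAI/Fireball-12B-v1.0f"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 weights (~24 GB for 12B parameters)
    device_map="auto",           # place layers across available devices automatically
)

inputs = tokenizer("Hello my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```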

> [!TIP]
> Unlike previous Mistral models, Mistral Nemo requires smaller temperatures. We recommend using a temperature of 0.3.
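For example, reusing the `model` and `tokenizer` loaded in the snippets above, generation at the recommended temperature might look like this; the prompt and the `top_p` value are illustrative additions.

```py
# Sampling with the recommended low temperature (0.3).
inputs = tokenizer("Summarize what an ETF is in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,    # enable sampling so `temperature` takes effect
    temperature=0.3,   # recommended for Mistral Nemo-based models
    top_p=0.9,         # assumption: a common nucleus-sampling default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```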

## Note

`EpistemeAI/Fireball-12B` is a pretrained base model and therefore does not have any moderation mechanisms. See the Guardrail/Moderation guide section above for moderation guidance.
124 |
+
|
125 |
+
### Citation for yahma/alpaca-cleaned dataset
|
126 |
+
```
|
127 |
+
@misc{alpaca,
|
128 |
+
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
|
129 |
+
title = {Stanford Alpaca: An Instruction-following LLaMA model},
|
130 |
+
year = {2023},
|
131 |
+
publisher = {GitHub},
|
132 |
+
journal = {GitHub repository},
|
133 |
+
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
|
134 |
+
}
|
135 |
+
```
|
136 |
+
|
137 |
+
|
138 |
# Uploaded model

- **Developed by:** EpistemeAI