---
base_model: EpistemeAI/Fireball-Mistral-Nemo-Base-2407-sft-v2.2a
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
datasets:
- candenizkocak/code-alpaca-297k
- yahma/alpaca-cleaned
- reciperesearch/dolphin-sft-v0.1-preference
pipeline_tag: text-generation
model-index:
  - name: Fireball-12B
    results:
      - task:
          type: text-generation
        dataset:
          name: dolphin-sft-v0.1-preference
          type: reciperesearch/dolphin-sft-v0.1-preference
        metrics:
          - name: MMLU_PRO
            type: MMLU
            value: 26.04
          - name: bbh
            type: bbh
            value: 30.67
          - name: IFEval
            type: IFeval
            value: 18.34
        source:
          name: Open LLM Leaderboard
          url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard
---


<img src="https://huggingface.co/EpistemeAI/Fireball-Mistral-Nemo-Base-2407-v1-DPO2/resolve/main/fireball.JPG" width="200"/>


# Fireball-12B
This model is a further fine-tune that provides better coding ability and better responses than Llama-3.1-8B and Google Gemma 2 9B (improving on the first fine-tune).
It was additionally fine-tuned with the ORPO method on the dataset below. For the best responses, use Alpaca-style instruct mode (see **Prompt Template: Alpaca (recommended)**) rather than chat mode.
- reciperesearch/dolphin-sft-v0.1-preference
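
For reference, a minimal sketch of what ORPO preference tuning with TRL looks like. This is an illustration under assumed hyperparameters and dataset schema, not the exact training script used for this model:

```py
# Illustrative ORPO sketch with TRL; hyperparameters are assumptions,
# not the exact recipe used to train this model.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_id = "EpistemeAI/Fireball-Mistral-Nemo-Base-2407-sft-v2.2a"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# ORPO expects "prompt"/"chosen"/"rejected" columns; remap if the schema differs.
train_dataset = load_dataset("reciperesearch/dolphin-sft-v0.1-preference", split="train")

args = ORPOConfig(output_dir="fireball-orpo", beta=0.1, per_device_train_batch_size=1)
trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,  # named `processing_class` in newer trl releases
)
trainer.train()
```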

# Benchmark
<img src="https://huggingface.co/EpistemeAI/Fireball-12B/resolve/main/benchmark2.jpg"/>
## Training Datasets
Supervised fine-tuning was performed with the following datasets:
- candenizkocak/code-alpaca-297k
- yahma/alpaca-cleaned
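
Both datasets are available on the Hugging Face Hub; a quick way to inspect one of them (the `train` split name is the standard default and an assumption here):

```py
# Load and inspect one of the SFT datasets from the Hub.
from datasets import load_dataset

ds = load_dataset("yahma/alpaca-cleaned", split="train")
print(ds[0]["instruction"], ds[0]["input"], ds[0]["output"])
```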

# Model Card for Fireball-12B

Fireball-12B is a heavily fine-tuned version of the Mistral-Nemo-Base-2407 Large Language Model (LLM), a pretrained generative text model with 12B parameters trained jointly by Mistral AI and NVIDIA. It significantly outperforms existing models of smaller or similar size.

For more details about this model please refer to our release [blog post](https://mistral.ai/news/mistral-nemo/).

## Key features
- Released under the **Apache 2 License**
- Pre-trained and instructed versions
- Trained with a **128k context window**
- Trained on a large proportion of **multilingual and code data**
- Drop-in replacement of Mistral 7B

## Model Architecture
Mistral Nemo is a transformer model, with the following architecture choices:
- **Layers:** 40
- **Dim:** 5,120
- **Head dim:** 128
- **Hidden dim:** 14,336
- **Activation Function:** SwiGLU
- **Number of heads:** 32
- **Number of kv-heads:** 8 (GQA)
- **Vocabulary size:** 2**17 ~= 128k
- **Rotary embeddings (theta = 1M)**
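
These choices map onto the Hugging Face config fields; a small sketch to verify them from the checkpoint (field names follow the `MistralConfig` conventions, and the commented values mirror the list above):

```py
# Inspect the architecture fields from the model config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("EpistemeAI/Fireball-12B")
print(config.num_hidden_layers)    # layers: 40
print(config.hidden_size)          # dim: 5120
print(config.head_dim)             # head dim: 128
print(config.intermediate_size)    # hidden dim: 14336
print(config.num_attention_heads)  # heads: 32
print(config.num_key_value_heads)  # kv-heads (GQA): 8
print(config.vocab_size)           # 2**17 = 131072
print(config.rope_theta)           # rotary theta: 1e6
```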

# Guardrail/Moderation guide: 
For guardrailing and moderating prompts against indirect/direct prompt injections and jailbreaking, please follow the SentinelShield AI GitHub repository:
[SentinelShield AI](https://github.com/tomtyiu/SentinelShieldAI)

## Prompt Template: Alpaca (recommended)
Please use the Alpaca prompt format:

```python
# x is a dataset example with "instruction" and "input" fields
prompt = f"""Below is an instruction that describes a task. \
Write a response that appropriately completes the request.

### Instruction:
{x['instruction']}

### Input:
{x['input']}

### Response:
"""
```
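
For example, assuming `model` and `tokenizer` are loaded as in the **Transformers** section below, and using a hypothetical example record:

```python
# A hypothetical example record; the template is evaluated after x is set.
x = {"instruction": "Write a Python function that reverses a string.", "input": ""}
prompt = f"""Below is an instruction that describes a task. \
Write a response that appropriately completes the request.

### Instruction:
{x['instruction']}

### Input:
{x['input']}

### Response:
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```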

#### Demo

After installing `mistral_inference`, a `mistral-demo` CLI command should be available in your environment.
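
For example (the local weights path below is a hypothetical placeholder; `mistral-demo` expects the directory containing the downloaded weights):

```sh
# Assumes the weights were downloaded to this (hypothetical) local directory.
mistral-demo $HOME/mistral_models/Fireball-12B
```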


### Transformers

> [!IMPORTANT]
> NOTE: Until a new `transformers` release is made, you need to install `transformers` from source (the `mistral-demo` CLI ships with `mistral_inference`; it is not a separate package):
> ```sh
> pip install mistral_inference
> pip install git+https://github.com/huggingface/transformers.git
> ```

If you want to use Hugging Face `transformers` to generate text, you can do something like this.

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EpistemeAI/Fireball-12B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
inputs = tokenizer("Hello my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
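
A 12B-parameter model needs roughly 48 GB of memory in float32, so on a single GPU you will likely want to load it in bfloat16. The `torch_dtype` and `device_map` values below are common settings rather than requirements (`device_map="auto"` requires `accelerate`):

```py
# Load in bfloat16 and let accelerate place the weights automatically.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EpistemeAI/Fireball-12B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
inputs = tokenizer("Hello my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```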

## Accelerator mode: 

```py
# First install accelerate: pip install accelerate  (tested on GPU A100/L4)
from transformers import AutoModelForCausalLM, AutoTokenizer
from accelerate import Accelerator

# Initialize the accelerator
accelerator = Accelerator()

# Define the model ID
model_id = "EpistemeAI/Fireball-12B"

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load the model
model = AutoModelForCausalLM.from_pretrained(model_id)

# Move the model to the appropriate device using accelerate
model = accelerator.prepare(model)

# Prepare inputs on the accelerator's device
inputs = tokenizer("Hello my name is", return_tensors="pt").to(accelerator.device)

# Generate outputs with the model
outputs = model.generate(**inputs, max_new_tokens=20)

# Decode and print the outputs
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

> [!TIP]
> Unlike previous Mistral models, Mistral Nemo requires smaller temperatures. We recommend using a temperature of 0.3.
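
For example, continuing from the snippets above (`do_sample=True` is needed for the temperature to take effect):

```py
# Sample with the recommended low temperature.
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.3)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```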

## Note

`EpistemeAI/Fireball-12B` is a pretrained base model and therefore does not have any moderation mechanisms. See the **Guardrail/Moderation guide** section above for moderation guidance.


### Citation for yahma/alpaca-cleaned dataset
```
@misc{alpaca,
  author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto},
  title = {Stanford Alpaca: An Instruction-following LLaMA model},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```

# Uploaded  model

- **Developed by:** EpistemeAI
- **License:** apache-2.0
- **Finetuned from model:** EpistemeAI/Fireball-Mistral-Nemo-Base-2407-sft-v2.2a

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)