legolasyiu committed · Commit 8c301d3 · verified · 1 Parent(s): f54c831

Update README.md

Files changed (1)
  1. README.md +2 -2

README.md CHANGED
@@ -91,7 +91,7 @@ If you want to use Hugging Face `transformers` to generate text, you can do some
 
 ```py
 from transformers import AutoModelForCausalLM, AutoTokenizer
-model_id = "EpistemeAI/Fireball-Mistral-Nemo-Base-2407-V2"
+model_id = "EpistemeAI/Fireball-12B"
 tokenizer = AutoTokenizer.from_pretrained(model_id)
 model = AutoModelForCausalLM.from_pretrained(model_id)
 inputs = tokenizer("Hello my name is", return_tensors="pt")
@@ -104,7 +104,7 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 
 ## Note
 
-`Mistral-Nemo-Base-2407` is a pretrained base model and therefore does not have any moderation mechanisms.
+`EpistemeAI/Fireball-12B` is a pretrained base model and therefore does not have any moderation mechanisms.
 
 
 ### Citation for yahma/alpaca-cleaned dataset
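For reference, a complete version of the README snippet as updated by this commit. This is a sketch assembled from the diff context: the `model.generate(...)` call and its `max_new_tokens` argument are assumptions, since the diff shows only the surrounding lines, not the call itself. Running `main()` downloads the full 12B-parameter model, so it needs substantial RAM or VRAM.

```python
# New model id introduced by this commit (was
# "EpistemeAI/Fireball-Mistral-Nemo-Base-2407-V2").
MODEL_ID = "EpistemeAI/Fireball-12B"


def main():
    # Imported lazily so MODEL_ID is usable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer("Hello my name is", return_tensors="pt")
    # Assumed arguments: the diff does not show the generate() call.
    outputs = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))


if __name__ == "__main__":
    main()
```

Because this is a pretrained base model with no moderation mechanisms (per the Note above), the continuation of "Hello my name is" is unfiltered raw sampling from the base distribution.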