How to use
from transformers import AutoModelForCausalLM, AutoTokenizer, TextGenerationPipeline

model_path = "fiveflow/ATOMM"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",       # spread layers across available GPUs/CPU
    # load_in_4bit=True,     # optional 4-bit quantization (requires bitsandbytes)
    low_cpu_mem_usage=True,  # stream weights to reduce peak RAM while loading
)
pipe = TextGenerationPipeline(model=model, tokenizer=tokenizer)
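
Continuing from the snippet above, a minimal generation sketch. The [INST] prompt format is an assumption based on the model's Mistral-7B-Instruct-v0.1 lineage, and the prompt and sampling settings are purely illustrative:

# Assumption: ATOMM inherits the Mistral-Instruct "[INST] ... [/INST]" prompt format.
prompt = "[INST] Summarize what a language model does. [/INST]"
# max_new_tokens / temperature are example values; tune them for your use case.
outputs = pipe(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])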
Model tree for fiveflow/ATOMM
- Base model: mistralai/Mistral-7B-v0.1
- Finetuned from: mistralai/Mistral-7B-Instruct-v0.1