How to use

from transformers import AutoModelForCausalLM, AutoTokenizer, TextGenerationPipeline

model_path = 'fiveflow/KoLlama-3-8B-Instruct'

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    # load_in_4bit=True,  # optional: load in 4-bit (requires bitsandbytes)
    low_cpu_mem_usage=True,
)

pipe = TextGenerationPipeline(model=model, tokenizer=tokenizer)
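
A minimal usage sketch follows; the prompt text and sampling parameters are illustrative assumptions, not values from the model card. It formats a conversation with the tokenizer's Llama-3 chat template and then generates with the pipeline:

# Sketch: prompt and generation settings are assumptions for illustration.
messages = [{"role": "user", "content": "Briefly introduce yourself."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])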