Model Card for mistral-ko-OpenOrca-wiki-v1

A fine-tuned version of the Mistral-7B model trained on Korean-language data.

Model Details

  • Model Developers: shleeeee (Seunghyeon Lee), oopsung (Sungwoo Park)
  • Repository: To be added
  • Model Architecture: shleeeee/mistral-ko-OpenOrca-wiki-v1 is a fine-tuned version of Mistral-7B-v0.1.
  • LoRA target modules: q_proj, k_proj, v_proj, o_proj, gate_proj (a configuration sketch follows this list)
  • Train batch size: 4
  • Epochs: 2
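
The card does not include the training script. As a minimal sketch, a LoRA setup consistent with the modules listed above might look like the following, assuming the peft library; the rank, alpha, and dropout values are hypothetical, since the card does not state them.

# Minimal LoRA sketch matching the Model Details above.
# Assumptions: peft is used; r, lora_alpha, and lora_dropout are
# hypothetical values not given in the card.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

lora_config = LoraConfig(
    r=8,                # hypothetical rank
    lora_alpha=16,      # hypothetical scaling factor
    lora_dropout=0.05,  # hypothetical dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()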

Dataset

2,000 samples from the ko-OpenOrca dataset.
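
As an illustration, such a subset could be loaded with the datasets library; the Hub ID below is an assumption, since the card does not name one.

# Illustrative only: the exact Hub ID of ko-OpenOrca is an assumption.
from datasets import load_dataset

dataset = load_dataset("kyujinpy/OpenOrca-KO", split="train")
subset = dataset.select(range(2000))  # the card states 2,000 samples were used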

Prompt template: Mistral

<s>[INST]{instruction}[/INST]{output}</s>
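
For illustration, a training pair can be rendered into this template as follows; build_prompt is a hypothetical helper, not part of the released code.

# Hypothetical helper that renders an (instruction, output) training pair
# into the Mistral template shown above.
def build_prompt(instruction: str, output: str) -> str:
    return f"<s>[INST]{instruction}[/INST]{output}</s>"

print(build_prompt("대한민국의 수도는 어디인가요?", "대한민국의 수도는 서울입니다."))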

Usage

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("shleeeee/mistral-ko-OpenOrca-wiki-v1")
model = AutoModelForCausalLM.from_pretrained("shleeeee/mistral-ko-OpenOrca-wiki-v1")

# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="shleeeee/mistral-ko-OpenOrca-wiki-v1")
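
A minimal generation example follows; the prompt uses the template above (the tokenizer adds the <s> token automatically), and the sampling parameters are illustrative.

# Illustrative generation call; sampling parameters are assumptions.
prompt = "[INST]대한민국의 수도는 어디인가요?[/INST]"
result = pipe(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])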

Evaluation

To be added
