---
license: mit
language:
- en
- ko
tags:
- KT
- K-intelligence
- Mi:dm
pipeline_tag: text-generation
library_name: transformers
---
# Mi:dm 2.0-Base

🤗 Mi:dm 2.0 Models | 📜 Mi:dm 2.0 Technical Report* | 📕 Mi:dm 2.0 Technical Blog*

*To be released soon
## News 📢

- 🔜 (Coming Soon!) GGUF format model files will be available soon for easier local deployment.
- ⚡️ 2025/07/04: Released the Mi:dm 2.0 model collection on Hugging Face 🤗.
## Table of Contents
- Overview
- Usage
- More Information
## Overview

### Mi:dm 2.0
Mi:dm 2.0 is a "Korean-centric AI" model developed with KT's proprietary technology. "Korean-centric AI" refers to a model that thoroughly internalizes the unique values, cognitive frameworks, and commonsense reasoning intrinsic to Korean society. It is not simply about processing and responding in Korean; it is about a deep understanding that reflects and respects the socio-cultural fabric of Korean norms and values.
The newly introduced Mi:dm 2.0 comes in two versions:

- Mi:dm 2.0-Mini is a 2.3B-parameter dense, compact model designed for seamless use in environments such as on-device settings and low-end GPUs. It was created by pruning and distilling the Base model.
- Mi:dm 2.0-Base is an 11.5B-parameter dense model designed to balance model size and performance by expanding an 8B-scale model with the Depth-up Scaling (DuS) method (see the illustrative sketch below). It is a practical model that can be applied to various real-world services, considering both performance and versatility.
Neither the pre-training nor the post-training data includes KT users' data.
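For readers unfamiliar with Depth-up Scaling, the sketch below shows the general idea: a decoder-only model is deepened by re-using copies of some of its transformer layers and is then continually pre-trained. The checkpoint name and the duplicated layer range here are hypothetical placeholders for illustration only; they are not KT's actual recipe.

```python
# Illustrative sketch of Depth-up Scaling (DuS), not KT's actual procedure.
# The checkpoint name and layer range below are hypothetical placeholders.
import copy

import torch.nn as nn
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("your-8b-base-checkpoint")  # placeholder

layers = base.model.layers          # ModuleList of decoder blocks (Llama-style models)
n = len(layers)

# Duplicate the upper half of the layer stack, e.g. 32 layers -> 48 layers.
extra = [copy.deepcopy(layers[i]) for i in range(n // 2, n)]
base.model.layers = nn.ModuleList(list(layers) + extra)
base.config.num_hidden_layers = len(base.model.layers)

# The deepened model is then continually pre-trained so the added layers become useful.
```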
## Quickstart
Here is the code snippet to run conversational inference with the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_name = "K-intelligence/Midm-2.0-Base-Instruct"

# load the model, tokenizer, and default generation settings
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
generation_config = GenerationConfig.from_pretrained(model_name)

prompt = "KT에 대해 소개해줘"

# messages for inference
messages = [
    {"role": "system",
     "content": "Mi:dm(믿:음)은 KT에서 개발한 AI 기반 어시스턴트이다."},
    {"role": "user", "content": prompt}
]

# apply the chat template and tokenize
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt"
)

output = model.generate(
    input_ids.to(model.device),
    generation_config=generation_config,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=128,
    do_sample=False,
)
print(tokenizer.decode(output[0]))
```
The `transformers` library should be version 4.45.0 or higher.
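If you want to watch the response as it is generated, a minimal variation of the snippet above (reusing the `model`, `tokenizer`, `generation_config`, and `input_ids` already defined) can stream tokens with transformers' `TextStreamer`:

```python
# Optional: stream the decoded tokens to stdout as they are generated.
# Reuses model, tokenizer, generation_config, and input_ids from the snippet above.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    input_ids.to(model.device),
    generation_config=generation_config,
    streamer=streamer,            # prints text incrementally instead of waiting
    max_new_tokens=128,
    do_sample=False,
)
```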
## Evaluation

### English
| Category | Benchmark | Exaone-3.5-2.4B-inst | Qwen3-4B | Mi:dm 2.0-Mini-inst | Exaone-3.5-7.8B-inst | Qwen3-14B | Llama-3.1-8B-inst | Mi:dm 2.0-Base-inst |
|---|---|---|---|---|---|---|---|---|
| Instruction Following | IFEval | 81.1 | 79.7 | 73.6 | 83.6 | 83.9 | 79.9 | 84.0 |
| Reasoning | BBH | 46.4 | 79.0 | 44.5 | 50.1 | 83.4 | 60.3 | 77.7 |
| | GPQA | 28.1 | 39.8 | 26.6 | 33.1 | 49.8 | 21.6 | 33.5 |
| | MuSR | 49.7 | 58.5 | 51.7 | 51.2 | 57.7 | 50.3 | 51.9 |
| | Avg. | 41.4 | 59.1 | 40.9 | 44.8 | 63.6 | 44.1 | 54.4 |
| Mathematics | GSM8K | 82.5 | 90.4 | 83.1 | 81.1 | 88.0 | 81.2 | 91.6 |
| Coding | MBPP+ | 59.8 | 62.4 | 60.9 | 79.4 | 73.4 | 81.8 | 77.5 |
| General Knowledge | MMLU-pro | - | - | - | 40.7 | 70.5 | 47.6 | 53.3 |
| | MMLU | 59.5 | 73.3 | 56.5 | 69.0 | 82.7 | 70.7 | 73.7 |
| | Avg. | 59.5 | 73.3 | 56.5 | 54.8 | 76.6 | 59.2 | 63.5 |
### Korean
| Category | Benchmark | Exaone-3.5-2.4B-inst | Qwen3-4B | Mi:dm 2.0-Mini-inst | Exaone-3.5-7.8B-inst | Qwen3-14B | Llama-3.1-8B-inst | Mi:dm 2.0-Base-inst |
|---|---|---|---|---|---|---|---|---|
| Comprehension | K-Prag* | 68.7 | 73.9 | 69.5 | 73.5 | 86.7 | 59.9 | 86.5 |
| | K-Refer-Hard* | 58.5 | 56.7 | 55.4 | 61.9 | 74.0 | 48.6 | 70.8 |
| | Ko-Best | 87.2 | 91.5 | 80.5 | 92.0 | 93.9 | 77.4 | 95.2 |
| | Ko-Sovereign* | 38.0 | 43.5 | 42.5 | 44.0 | 52.0 | 31.5 | 53.0 |
| | Avg. | 62.5 | 66.6 | 61.9 | 67.2 | 76.8 | 51.5 | 76.1 |
| Reasoning | Ko-Winogrande | 60.3 | 67.5 | 61.7 | 64.6 | 77.2 | 40.1 | 75.1 |
| | Ko-Best | 64.1 | 69.2 | 64.5 | 60.3 | 75.4 | 26.0 | 73.0 |
| | LogicKor* | 7.4 | 5.6 | 7.7 | 8.6 | 6.4 | 2.4 | 8.6 |
| | HRM8K* | 38.5 | 56.7 | 39.9 | 49.7 | 64.5 | 30.9 | 52.9 |
| | Avg. | 36.7 | 43.8 | 37.4 | 39.5 | 48.8 | 19.8 | 44.8 |
| Society & Culture | K-Refer* | 64.0 | 53.6 | 66.4 | 71.6 | 72.4 | 43.2 | 89.6 |
| | K-Refer-Hard* | 67.1 | 42.9 | 61.4 | 69.3 | 65.7 | 36.4 | 86.4 |
| | Ko-Sovereign* | 44.4 | 35.8 | 36.7 | 46.9 | 49.8 | 33.8 | 56.3 |
| | HAERAE* | 61.3 | 50.6 | 70.8 | 72.9 | 68.4 | 49.5 | 81.5 |
| | Avg. | 59.2 | 45.7 | 58.8 | 65.2 | 64.1 | 40.7 | 78.4 |
| Reasoning (Domain) | KMMLU | 43.5 | 50.6 | 45.1 | 52.6 | 55.4 | 33.0 | 57.3 |
| | Ko-Sovereign* | 42.4 | 42.5 | 42.4 | 45.6 | 54.7 | 36.7 | 58.0 |
| | Avg. | 43.0 | 46.5 | 43.8 | 49.1 | 55.1 | 34.8 | 57.7 |
| Instruction Following | Ko-IFEval* | 65.4 | 75.9 | 73.3 | 69.1 | 83.6 | 60.1 | 82.0 |
| | Ko-MTBench | 74.0 | 63.0 | 74.0 | 79.6 | 71.0 | 57.0 | 89.7 |
| | Avg. | 68.9 | 69.4 | 73.6 | 74.4 | 77.3 | 58.5 | 85.9 |
\* indicates KT proprietary evaluation resources.
## Usage
### Run on Friendli.AI

You can try our model immediately via Friendli.AI. Simply click `Deploy` and then `Friendli Endpoints`.

Please note that a login to Friendli.AI is required after your fifth chat interaction.
### Run on Your Local Machine

We provide detailed instructions for running Mi:dm 2.0 on your local machine with llama.cpp, LM Studio, and Ollama. Please check our GitHub for more information.
### Deployment

To serve Mi:dm 2.0 using vLLM (>= 0.8.0) with an OpenAI-compatible API:

```bash
vllm serve K-intelligence/Midm-2.0-Base-Instruct
```
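Once the server is running, any OpenAI-compatible client can query it. The sketch below assumes vLLM's default local endpoint (`http://localhost:8000/v1`) and the `openai` Python package; adjust the URL and key for your deployment.

```python
# Query the vLLM server through its OpenAI-compatible API.
# Assumes the default local endpoint; vLLM does not require a real API key.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="K-intelligence/Midm-2.0-Base-Instruct",
    messages=[
        {"role": "system", "content": "Mi:dm(믿:음)은 KT에서 개발한 AI 기반 어시스턴트이다."},
        {"role": "user", "content": "KT에 대해 소개해줘"},
    ],
    max_tokens=128,
)
print(response.choices[0].message.content)
```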
### Tutorials

To help end users adopt Mi:dm 2.0 easily, we provide comprehensive tutorials on GitHub.
## More Information

### Limitation
- The training data for both Mi:dm 2.0 models consists primarily of English and Korean. Understanding and generation in other languages are not guaranteed.
- The model is not guaranteed to provide reliable advice in fields that require professional expertise, such as law, medicine, or finance.
- Efforts were made to exclude unethical content from the training data, such as profanity, slurs, bias, and discriminatory language. Nevertheless, the model may still produce inappropriate expressions or factual inaccuracies.
### License
Mi:dm 2.0 is licensed under the MIT License.
### Contact
- Mi:dm 2.0 Technical Inquiries: [email protected]