---
license: apache-2.0
language:
- ko
base_model:
- Qwen/Qwen2.5-14B-Instruct
---

# Announcing OLAFv2: The Next Step in Korean Language Understanding

We are thrilled to announce the release of **OLAFv2**, our state-of-the-art Korean language model, now available on Hugging Face! Designed to excel in complex reasoning, mathematical problem-solving, and general language understanding, OLAFv2 represents a significant leap forward in NLP capabilities for the Korean language.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/650c0029987b1ae4e51fa2d4/KxAsxe10pZkaqC6x82qH3.png)

## Key Features of OLAFv2

### **Two Model Sizes for Flexibility**

OLAFv2 is available in two parameter sizes:

- **14B (billion) parameters**: for maximum performance.
- **1.5B (billion) parameters**: for lightweight applications and hardware-constrained environments.

### **Reasoning Mode for Complex Tasks**

One of OLAFv2's standout features is its **Reasoning Mode**, specifically designed for:

- Complex mathematical problem-solving.
- STEM (Science, Technology, Engineering, Mathematics) applications.
- Tasks requiring detailed step-by-step reasoning.

This mode can be effectively utilized for **Test-Time Scaling**, enabling the model to harness additional computational resources during inference. This approach enhances output detail and accuracy, achieving performance levels that surpass GPT-4o.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/650c0029987b1ae4e51fa2d4/aZlD94ZAqxePTaGdb4TQ8.png)

### **Long Context Support**

With support for up to **32K tokens**, OLAFv2 is well suited to:

- Retrieval-Augmented Generation (RAG).
- Tasks requiring long-context understanding and reasoning.

## Benchmarks and Performance

We share evaluation results across three benchmarks: KMMLU, HRM8K, and LogicKor.
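As a rough illustration of how the Reasoning Mode described above might be used, the sketch below loads a checkpoint with the Hugging Face `transformers` library and asks for a step-by-step solution to a short Korean math question. The repository id (`OLAF/OLAFv2-14B`) and the system prompt are placeholder assumptions, not identifiers taken from this card; the large `max_new_tokens` budget simply gives the model room for extended reasoning at inference time, in line with the test-time scaling idea.

```python
# Minimal sketch, assuming a standard transformers chat-model setup.
# NOTE: "OLAF/OLAFv2-14B" and the system prompt are illustrative placeholders,
# not confirmed values from this model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OLAF/OLAFv2-14B"  # hypothetical repo id; replace with the released one

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # let transformers pick a suitable dtype
    device_map="auto",    # spread the 14B weights across available devices
)

# A step-by-step system prompt is one plausible way to elicit reasoning-mode output.
messages = [
    # "Solve the problem step by step in detail, then give the final answer."
    {"role": "system", "content": "문제를 단계별로 자세히 풀이한 뒤 최종 답을 제시하세요."},
    # "A class has 32 students and 3/8 of them wear glasses. How many do not?"
    {"role": "user", "content": "한 반에 학생이 32명 있고 그중 3/8이 안경을 씁니다. 안경을 쓰지 않는 학생은 몇 명인가요?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# A generous token budget lets the model spend more computation on the answer
# (the test-time scaling idea mentioned above).
outputs = model.generate(
    inputs,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The 1.5B variant can be loaded the same way on constrained hardware; trading a larger generation budget for latency is what lets the reasoning mode produce longer, more detailed derivations.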