---
license: apache-2.0
language:
- ko
base_model:
- Qwen/Qwen2.5-14B-Instruct
---

# Announcing OLAFv2: The Next Step in Korean Language Understanding 🚀

We are thrilled to announce the release of **OLAFv2**, our state-of-the-art Korean language model, now available on Hugging Face! 🎉 Designed to excel at complex reasoning, mathematical problem-solving, and general language understanding, OLAFv2 represents a significant leap forward in NLP capabilities for the Korean language.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/650c0029987b1ae4e51fa2d4/KxAsxe10pZkaqC6x82qH3.png)

## Key Features of OLAFv2 🌟

### **Two Model Sizes for Flexibility**

OLAFv2 is available in two parameter sizes:

- **14B (billion) parameters**: for maximum performance. 🏋️‍♂️
- **1.5B (billion) parameters**: for lightweight applications and hardware-constrained environments. 🪶

### **Reasoning Mode for Complex Tasks** 🤔

One of OLAFv2's standout features is its **Reasoning Mode**, specifically designed for:

- Complex mathematical problem-solving. ✖️➗
- STEM (Science, Technology, Engineering, Mathematics) applications. 🔬📐
- Tasks requiring detailed step-by-step reasoning. 🧠

This mode can be used for **test-time scaling**, letting the model spend additional compute at inference time. This improves output detail and accuracy, achieving performance levels that surpass GPT-4o. 📈

![image/png](https://cdn-uploads.huggingface.co/production/uploads/650c0029987b1ae4e51fa2d4/aZlD94ZAqxePTaGdb4TQ8.png)

### **Long Context Support** 📜

With support for up to **32K tokens**, OLAFv2 is well suited for:

- Retrieval-Augmented Generation (RAG). 🛠️
- Tasks requiring long-context understanding and reasoning. 🧵

## Benchmarks and Performance 📊

We share evaluation results across three benchmarks: KMMLU, HRM8K, and LogicKor.
We also share results with inference-time scaling. For more details, take a look at our [blog](https://www.onelineai.com/blog/test-time-scaling).
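The blog covers the details of our inference-time scaling setup. As a rough illustration of one common test-time scaling recipe, the sketch below uses self-consistency: sample several reasoning traces and majority-vote their final answers. This is an assumption for illustration, not necessarily the exact method from the blog, and the `majority_vote` helper is our own name.

```python
from collections import Counter

def majority_vote(answers):
    """Pick the most frequent non-empty answer among sampled reasoning traces."""
    filtered = [a for a in answers if a]
    if not filtered:
        return None
    return Counter(filtered).most_common(1)[0][0]

# With the model loaded as in the Getting Started section, one would sample
# several traces instead of a single greedy decode, e.g.:
#   outputs = model.generate(**model_inputs, do_sample=True, temperature=0.7,
#                            num_return_sequences=8, max_new_tokens=1024)
# then extract each trace's final answer and vote:
print(majority_vote(["42", "41", "42", None, "42"]))  # prints 42
```

Spending more samples per question trades inference compute for accuracy, which is the core idea behind test-time scaling.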
## Getting Started 🚀

OLAFv2 is now available on Hugging Face! You can start using it by accessing our repository:

```python
# pip install transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "OLAResearch/OLAF2-14B"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Introduce yourself!"
messages = [
    {"role": "system", "content": "Your name is OLAF, a large language model made by OneLineAI, specializing in Korean culture and finance."},
    # For reasoning mode, use this system prompt instead:
    # {"role": "system", "content": "Your name is OLAF, a large language model made by OneLineAI, specializing in Korean culture and finance. Perform two-step reasoning. Return your answers in \\boxed{N} format."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated text remains
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
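In reasoning mode, the system prompt asks the model to return answers in `\boxed{N}` format, so you will typically want to parse the final boxed value out of the generated text. A minimal helper sketch (the `extract_boxed_answer` name is our own, not part of the release; it handles flat, non-nested braces only):

```python
import re

def extract_boxed_answer(text):
    """Return the contents of the last \\boxed{...} span in `text`, or None."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1] if matches else None

print(extract_boxed_answer(r"Step 1 ... Step 2 ... so the answer is \boxed{42}."))  # prints 42
```

Taking the *last* boxed span matters because a multi-step trace may box intermediate results before the final answer.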