AXCXEPT committed · verified
Commit 0da30e8 · Parent(s): 94003ae

Update README.md

Files changed (1): README.md (+2 -3)
@@ -30,11 +30,10 @@ This model is the result of combining Phi-4 with a reinforcement learning (RL) a
 
 ##### Key Features & Improvements
 Enhanced Multilingual Performance: Unlike previous iterations, this model strengthens English capabilities without compromising Japanese proficiency.
-Optimized Training Efficiency: Inspired by Deepseek R1 research, we fine-tuned Phi-4 with a 14K dataset in just two days, achieving substantial gains.
+Optimized Training Efficiency: Inspired by Deepseek R1 research, we fine-tuned Phi-4 with a 14K dataset in just two days, achieving both gains.
 Benchmark-Proven Quality:
 Outperforms the base Phi-4 model on OpenAI’s Simple-eval and translation benchmarks (Japanese MT Bench, MT Bench).
 Surpasses gpt-4o-mini in multiple evaluation categories, proving its capability as a high-performance 14B model.
-Secure and Scalable for Enterprises: Designed to function efficiently in local and on-premise environments, making it suitable for high-security industries where cloud-based solutions are not viable.
 
 ##### Why Local LLMs Still Matter
 Despite rapid advancements in cloud-based models, local LLMs remain crucial for enterprises that require high security and strict data privacy compliance. Many organizations—especially in public institutions, manufacturing, and design industries—cannot risk exposing sensitive data externally. This model is developed with the goal of delivering state-of-the-art performance in a secure, closed environment.
@@ -130,4 +129,4 @@ print(response)
 ```
 
 ### Special Thanks:
-To the Phi-4 development team, the Deepseek research team, and everyone who contributed to this project.
+To the Phi-4 development team who developed high-quality base model, the Deepseek research team, and everyone who contributed to this project.