Quazim0t0 committed
Commit ed30719 · verified · 1 Parent(s): 78b0e22

Update README.md

Files changed (1): README.md (+68 −1)
The model hasn't been tested yet; this page will be updated once testing is complete.

If you're using this model with Open WebUI, here is a simple function to organize the model's responses: https://openwebui.com/f/quaz93/phi4_turn_r1_distill_thought_function_v1
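Outside of Open WebUI, the same separation of reasoning from the final answer can be done with a few lines of Python. This is a minimal standalone sketch, not the linked function itself; the `<Thought>`/`<Solution>` tag names are an assumption based on the dataset structure described below, so adjust them to match the model's actual output:

```python
import re


def split_response(text: str) -> dict:
    """Split a model response into its reasoning and final answer.

    The <Thought>/<Solution> tag names are assumed, not confirmed;
    change them if the model emits a different structure.
    """
    thought = re.search(r"<Thought>(.*?)</Thought>", text, re.DOTALL)
    solution = re.search(r"<Solution>(.*?)</Solution>", text, re.DOTALL)
    return {
        "thought": thought.group(1).strip() if thought else "",
        # Fall back to the whole text when no tags are present.
        "solution": solution.group(1).strip() if solution else text.strip(),
    }
```

This keeps the reasoning available for inspection while letting a UI display only the solution.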

# Phi4 Turn R1Distill LoRA Adapters

## Overview
These **LoRA adapters** were trained on diverse **reasoning datasets** that incorporate structured **Thought** and **Solution** responses to enhance logical inference. This project was designed to **test the R1 dataset** on **Phi-4**, aiming to create a **lightweight, fast, and efficient reasoning model**.

All adapters were fine-tuned on an **NVIDIA A800 GPU**, ensuring high performance and compatibility for continued training, merging, or direct deployment.
As part of an open-source initiative, all resources are made **publicly available** for unrestricted research and development.

---

## LoRA Adapters
Below are the currently available LoRA fine-tuned adapters (**as of January 30, 2025**):

- [Phi4.Turn.R1Distill-Lora1](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora1)
- [Phi4.Turn.R1Distill-Lora2](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora2)
- [Phi4.Turn.R1Distill-Lora3](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora3)
- [Phi4.Turn.R1Distill-Lora4](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora4)
- [Phi4.Turn.R1Distill-Lora5](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora5)
- [Phi4.Turn.R1Distill-Lora6](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora6)
- [Phi4.Turn.R1Distill-Lora7](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora7)
- [Phi4.Turn.R1Distill-Lora8](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora8)

---

## GGUF Full & Quantized Models
To facilitate broader testing and real-world inference, **GGUF full and quantized versions** are provided for evaluation in **Open WebUI** and other LLM interfaces.

### **Version 1**
- [Phi4.Turn.R1Distill.Q8_0](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill.Q8_0)
- [Phi4.Turn.R1Distill.Q4_k](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill.Q4_k)
- [Phi4.Turn.R1Distill.16bit](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill.16bit)

### **Version 1.1**
- [Phi4.Turn.R1Distill_v1.1_Q4_k](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill_v1.1_Q4_k)

### **Version 1.2**
- [Phi4.Turn.R1Distill_v1.2_Q4_k](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill_v1.2_Q4_k)

### **Version 1.3**
- [Phi4.Turn.R1Distill_v1.3_Q4_k-GGUF](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill_v1.3_Q4_k-GGUF)

### **Version 1.4**
- [Phi4.Turn.R1Distill_v1.4_Q4_k-GGUF](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill_v1.4_Q4_k-GGUF)

### **Version 1.5**
- [Phi4.Turn.R1Distill_v1.5_Q4_k-GGUF](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill_v1.5_Q4_k-GGUF)

---
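One way to try a quantized build locally is to import the GGUF file into Ollama via a Modelfile. This is a sketch under assumptions: the filename below and the parameter values are illustrative, not part of the release, and the GGUF file must already be downloaded to the working directory:

```
FROM ./Phi4.Turn.R1Distill_v1.5_Q4_k.gguf
PARAMETER temperature 0.7
PARAMETER num_ctx 4096
```

Then build and run it with `ollama create phi4-turn-r1 -f Modelfile` followed by `ollama run phi4-turn-r1`.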

## Usage

### **Loading LoRA Adapters with `transformers` and `peft`**
To load and apply a LoRA adapter to Phi-4, use the following approach:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "microsoft/phi-4"
lora_adapter = "Quazim0t0/Phi4.Turn.R1Distill-Lora1"

# Load the base model and attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, lora_adapter)

model.eval()

# Quick inference check
inputs = tokenizer("What is 7 * 6?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```