iris-s-coon committed · a8f7492
Parent: 9904e57

📖 [docs] Update English README

docs/README.en.md (+38 -12)

## 🚀 Project Overview

**Llama-finetune-sandbox** is an experimental environment for learning and verifying Llama model fine-tuning. You can try various fine-tuning methods, customize models, and evaluate performance. It caters to a wide range of users, from beginners to researchers. Version 0.1.0 brings a repository name change, a significantly updated README, and a new Llama model fine-tuning tutorial.

## ✨ Key Features

1. **Various Fine-tuning Methods** (see the sketch after this list):
   - LoRA (Low-Rank Adaptation)
   - QLoRA (Quantized LoRA)
   - ⚠️~Full Fine-tuning~
   - ⚠️~Parameter-Efficient Fine-tuning (PEFT)~

2. **Flexible Model Settings:**
   - Customizable maximum sequence length
   - Various quantization options
   - Multiple attention mechanisms

3. **Experimental Environment Setup:**
   - Performance evaluation tools
   - Memory usage optimization
   - Visualization of experimental results
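
The options above are mostly thin configuration layers. As a minimal, hypothetical sketch (not code from this repository; the model name, rank, and target modules are assumptions), combining LoRA with a 4-bit quantized base model (the QLoRA pattern) via the Hugging Face `peft` library listed in the references looks roughly like this:

```python
# Hypothetical LoRA/QLoRA sketch; illustrative values, not this repo's settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-3.2-1B"  # assumed example checkpoint

# QLoRA side: load the frozen base model in 4-bit to cut memory usage.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# LoRA side: train small low-rank adapters instead of the full weight matrices.
lora_config = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapters are trainable
```

Plain LoRA is the same sketch without the `BitsAndBytesConfig`; the maximum sequence length and attention options mentioned above are likewise per-model settings passed at load or training time.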

## 📚 Implementation Examples

This repository includes the following implementation examples:

1. **High-speed fine-tuning using Unsloth** (sketched after this list):
   - Implementation of high-speed fine-tuning for Llama-3.2-1B/3B models.
   - → See [`Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md`](sandbox/Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md) for details.
   - → [Use this to convert from Markdown to Notebook format](https://huggingface.co/spaces/MakiAi/JupytextWebUI)
   - [📒Notebook here](https://colab.research.google.com/drive/1AjtWF2vOEwzIoCMmlQfSTYCVgy4Y78Wi?usp=sharing)

2. Other implementation examples will be added periodically.
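
As a rough sketch of the Unsloth pattern used in example 1 (hypothetical names and values; see the linked notebook for the actual code):

```python
# Hypothetical Unsloth sketch; the checkpoint name and settings are assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-1B-Instruct",  # assumed example checkpoint
    max_seq_length=2048,  # customizable maximum sequence length
    load_in_4bit=True,    # 4-bit quantization to fit consumer GPUs
)

# Attach LoRA adapters; Unsloth patches the model for faster training.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)
```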
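
For the Markdown-to-Notebook conversion linked in example 1, a local alternative is the open-source `jupytext` package (assuming the linked Space wraps the same tool, which this README does not state):

```python
# Convert the Markdown tutorial into a runnable notebook locally.
# Requires: pip install jupytext
import jupytext

# Read the Markdown file as a notebook object, then write it out as .ipynb.
nb = jupytext.read("sandbox/Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md")
jupytext.write(nb, "finetuning_tutorial.ipynb")  # output name is arbitrary
```
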

## 🛠️ Environment Setup

1. Clone the repository:

   ```bash
   git clone https://github.com/Sunwood-ai-labs/Llama-finetune-sandbox.git
   cd Llama-finetune-sandbox
   ```

## 📝 Adding Example Experiments

1. Add new implementations to the `examples/` directory.
2. Add necessary settings and utilities to `utils/`.
3. Update documentation and tests.
4. Create a pull request.

## 🤝 Contributions

Contributions are welcome, for example:

- Implementation of new fine-tuning methods
- Bug fixes and feature improvements
- Documentation improvements
- Addition of usage examples

## 📚 References

- [HuggingFace PEFT Documentation](https://huggingface.co/docs/peft)
- [About Llama Models](https://github.com/facebookresearch/llama)
- [Fine-tuning Best Practices](https://github.com/Sunwood-ai-labs/Llama-finetune-sandbox/wiki)

## ⚖️ License

This project is licensed under the MIT License.