iris-s-coon committed on
Commit 410e7f8 · 1 Parent(s): 663908b

📖 [docs] Update English README

Files changed (1)
  1. docs/README.en.md +18 -17
docs/README.en.md CHANGED
@@ -44,34 +44,33 @@ license: mit
 
 ## 🚀 Project Overview
 
-**Llama-finetune-sandbox** provides an experimental environment for learning and verifying Llama model fine-tuning. You can try various fine-tuning methods, customize models, and evaluate their performance. It caters to a wide range of users, from beginners to researchers. Version 0.1.0 includes a repository name change, a major README update, and the addition of a Llama model fine-tuning tutorial.
+**Llama-finetune-sandbox** provides an experimental environment for learning and verifying Llama model fine-tuning. You can try various fine-tuning methods, customize models, and evaluate performance. It caters to a wide range of users, from beginners to researchers. Version 0.2.0 includes updated and improved documentation. In particular, guides for efficient model operation using Ollama and LiteLLM have been added, and the README.md and implementation example guides have been made easier to understand.
+
 
 ## ✨ Main Features
 
-1. **Diverse Fine-tuning Methods**:
+1. **Diverse Fine-tuning Methods:**
   - LoRA (Low-Rank Adaptation)
   - QLoRA (Quantized LoRA)
-  - ⚠️~Full Fine-tuning~
-  - ⚠️~Parameter-Efficient Fine-tuning (PEFT)~
 
-2. **Flexible Model Configuration**:
+2. **Flexible Model Configuration:**
   - Customizable maximum sequence length
   - Various quantization options
   - Multiple attention mechanisms
 
-3. **Well-Equipped Experiment Environment**:
+3. **Well-equipped Experiment Environment:**
   - Performance evaluation tools
   - Memory usage optimization
-  - Visualization of experimental results
+  - Experiment result visualization
 
-## 📚 Examples
+## 📚 Implementation Examples
 
-This repository includes the following examples:
+This repository includes the following implementation examples:
 
 ### High-Speed Fine-tuning using Unsloth
-- High-speed fine-tuning implementation for Llama-3.2-1B/3B models
+- High-speed fine-tuning implementation of Llama-3.2-1B/3B models
  - → See [`Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md`](sandbox/Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md) for details.
-- → [Use this to convert from Markdown to Notebook format](https://huggingface.co/spaces/MakiAi/JupytextWebUI)
+- → [Use this to convert from markdown format to notebook format](https://huggingface.co/spaces/MakiAi/JupytextWebUI)
  - [📒Notebook here](https://colab.research.google.com/drive/1AjtWF2vOEwzIoCMmlQfSTYCVgy4Y78Wi?usp=sharing)
 
 ### Efficient Model Operation using Ollama and LiteLLM
@@ -79,6 +78,7 @@ This repository includes the following examples:
 - → See [`efficient-ollama-colab-setup-with-litellm-guide.md`](sandbox/efficient-ollama-colab-setup-with-litellm-guide.md) for details.
 - [📒Notebook here](https://colab.research.google.com/drive/1buTPds1Go1NbZOLlpG94VG22GyK-F4GW?usp=sharing)
 
+
 ## 🛠️ Environment Setup
 
 1. Clone the repository:
@@ -87,9 +87,9 @@ git clone https://github.com/Sunwood-ai-labs/Llama-finetune-sandbox.git
 cd Llama-finetune-sandbox
 ```
 
-## 📝 Adding Examples
+## 📝 Adding Implementation Examples
 
-1. Add a new implementation to the `examples/` directory.
+1. Add new implementations to the `examples/` directory.
 2. Add necessary settings and utilities to `utils/`.
 3. Update documentation and tests.
 4. Create a pull request.
@@ -104,9 +104,10 @@ cd Llama-finetune-sandbox
 ## 📚 References
 
 - [HuggingFace PEFT Documentation](https://huggingface.co/docs/peft)
-- [About Llama Models](https://github.com/facebookresearch/llama)
-- [Fine-tuning Best Practices](https://github.com/Sunwood-ai-labs/Llama-finetune-sandbox/wiki)
+- [About Llama models](https://github.com/facebookresearch/llama)
+- [Fine-tuning best practices](https://github.com/Sunwood-ai-labs/Llama-finetune-sandbox/wiki)
 
-## ⚖️ License
+## 📄 License
 
-This project is licensed under the MIT License.
+This project is licensed under the MIT License.
+```
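
The feature list in the updated README (LoRA and QLoRA, a customizable maximum sequence length, various quantization options) maps onto a standard PEFT-style training setup. The following is a minimal sketch of such a setup, assuming the Hugging Face `transformers`, `peft`, and `bitsandbytes` packages (PEFT is linked under References); it is not taken from the repository's notebooks, and the model name and hyperparameters are placeholders.

```python
# Minimal QLoRA-style setup sketch (illustrative; not from this repository's notebooks).
# Assumes transformers, peft, and bitsandbytes are installed and a CUDA GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-3.2-1B"  # placeholder; any causal LM checkpoint works

# "Various quantization options": load the base model in 4-bit NF4.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
model = prepare_model_for_kbit_training(model)

# "LoRA (Low-Rank Adaptation)": attach small trainable adapters instead of updating all weights.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

Unsloth, used in the notebook linked above, wraps the same LoRA workflow with additional speed and memory optimizations. Similarly, the "Efficient Model Operation using Ollama and LiteLLM" guide added in this release revolves around serving a model locally with Ollama and calling it through LiteLLM's OpenAI-compatible interface. A minimal sketch, assuming a local Ollama server on its default port and the `litellm` package; the model name is a placeholder and the repository's guide may differ:

```python
# Illustrative sketch of calling a locally served Ollama model through LiteLLM.
from litellm import completion

response = completion(
    model="ollama/llama3.2",            # the "ollama/" prefix routes the request to Ollama
    api_base="http://localhost:11434",  # default Ollama endpoint
    messages=[{"role": "user", "content": "Summarize LoRA in one sentence."}],
)
print(response.choices[0].message.content)
```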