[email protected] commited on
Commit e545dbf · 2 parents: 15ecf3d, c525509

Merge pull request #15 from Sunwood-ai-labs/translate-readme-11587644676

Files changed (1): docs/README.en.md (+23 −18)
````diff
@@ -44,10 +44,10 @@ license: mit
 
 ## 🚀 Project Overview
 
-**Llama-finetune-sandbox** provides an experimental environment for learning and validating Llama model fine-tuning. You can try various fine-tuning methods, customize models, and evaluate performance. It caters to a wide range of users, from beginners to researchers. Version 0.3.0 included improved documentation and an updated English README.
+**Llama-finetune-sandbox** provides an experimental environment for learning and validating Llama model fine-tuning. You can try various fine-tuning methods, customize models, and evaluate performance. It caters to a wide range of users, from beginners to researchers. Version 0.3.0 included improved documentation and an updated English README.
 
 
-## ✨ Main Features
+## ✨ Key Features
 
 1. **Diverse Fine-tuning Methods**:
    - LoRA (Low-Rank Adaptation)
@@ -58,37 +58,43 @@ license: mit
    - Various quantization options
    - Multiple attention mechanisms
 
-3. **Experiment Environment Setup**:
+3. **Experiment Environment**:
    - Performance evaluation tools (added in v0.3.0, later removed)
-   - Memory usage optimization
-   - Experiment result visualization
-
+   - Optimized memory usage
+   - Visualization of experimental results
 
 ## 📚 Examples
 
 This repository includes the following examples:
 
-### High-Speed Fine-tuning using Unsloth
-- Implementation of high-speed fine-tuning for Llama-3.2-1B/3B models.
+### Fast Fine-tuning using Unsloth
+- Implementation of fast fine-tuning for Llama-3.2-1B/3B models.
 - → See [`Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md`](sandbox/Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md) for details. (Japanese)
-- → Use [this tool](https://huggingface.co/spaces/MakiAi/JupytextWebUI) to convert from markdown to notebook format.
+- → Use [this](https://huggingface.co/spaces/MakiAi/JupytextWebUI) to convert from markdown to notebook format.
 - [📒Notebook here](https://colab.research.google.com/drive/1AjtWF2vOEwzIoCMmlQfSTYCVgy4Y78Wi?usp=sharing)
 
 ### Efficient Model Deployment using Ollama and LiteLLM
-- Setup and deployment guide for Google Colab.
+- Setup and deployment guide on Google Colab.
 - → See [`efficient-ollama-colab-setup-with-litellm-guide.md`](sandbox/efficient-ollama-colab-setup-with-litellm-guide.md) for details.
 - [📒Notebook here](https://colab.research.google.com/drive/1buTPds1Go1NbZOLlpG94VG22GyK-F4GW?usp=sharing)
 
 ### LLM Evaluation System (LLMs as a Judge)
 - Implementation of a system for automatically evaluating the quality of LLM responses (added in v0.3.0, later removed).
 - Uses LLMs as evaluators to assess the responses of other LLMs (LLMs as a Judge method).
-- Quantitative quality assessment and feedback generation using a 4-point rating scale.
+- Quantitative quality assessment and feedback generation using a 4-level rating scale.
 - → See [`llm-evaluator-notebook.md`](sandbox/llm-evaluator-notebook.md) for details.
 - Efficient evaluation system using Gemini and LiteLLM.
 - [📒Notebook here](https://colab.research.google.com/drive/1haO44IeseQ3OL92HlsINAgBI_yA1fxcJ?usp=sharing)
 
+### Wikipedia Data-based Q&A Dataset Generation (Sentence Pool QA Method)
+- Generation of high-quality Q&A datasets using the sentence pool QA method.
+- → A new dataset creation method that generates Q&A pairs while preserving context by pooling sentences separated by periods.
+- → Chunk size is flexibly adjustable (default 200 characters) to generate Q&A pairs with optimal context ranges for different purposes.
+- → See [`wikipedia-qa-dataset-generator.md`](sandbox/wikipedia-qa-dataset-generator.md) for details.
+- [📒Notebook here](https://colab.research.google.com/drive/1mmK5vxUzjk3lI6OnEPrQqyjSzqsEoXpk?usp=sharing)
+
 
-## 🔧 Setup
+## 🛠️ Setup
 
 1. Clone the repository:
 ```bash
@@ -99,7 +105,7 @@ cd Llama-finetune-sandbox
 ## 📝 Adding Examples
 
 1. Add new implementations to the `examples/` directory.
-2. Add necessary configurations and utilities to `utils/`.
+2. Add necessary settings and utilities to `utils/`.
 3. Update documentation and tests.
 4. Create a pull request.
 
@@ -108,15 +114,14 @@ cd Llama-finetune-sandbox
 - Implementation of new fine-tuning methods.
 - Bug fixes and feature improvements.
 - Documentation improvements.
-- Adding usage examples.
+- Addition of usage examples.
 
 ## 📚 References
 
 - [HuggingFace PEFT Documentation](https://huggingface.co/docs/peft)
-- [About Llama models](https://github.com/facebookresearch/llama)
-- [Fine-tuning best practices](https://github.com/Sunwood-ai-labs/Llama-finetune-sandbox/wiki)
+- [About Llama Models](https://github.com/facebookresearch/llama)
+- [Fine-tuning Best Practices](https://github.com/Sunwood-ai-labs/Llama-finetune-sandbox/wiki)
 
 ## 📄 License
 
-This project is licensed under the MIT License.
-```
+This project is licensed under the MIT License.
````
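
The sentence pool QA method added in this diff is described only at the level of its idea: pool period-separated sentences into chunks of roughly 200 characters, then generate Q&A pairs per chunk. The chunking step might look like the following sketch. This is a hypothetical illustration, not the repository's actual implementation: `pool_sentences` and its logic are assumptions; only the period-based splitting and the 200-character default come from the README.

```python
def pool_sentences(text: str, chunk_size: int = 200) -> list[str]:
    """Pool period-separated sentences into chunks of roughly
    `chunk_size` characters, keeping each sentence intact so that
    Q&A pairs generated per chunk retain local context.

    Hypothetical helper; not the repository's actual code."""
    # Split on periods, re-attaching the delimiter to each sentence.
    sentences = [s.strip() + "." for s in text.split(".") if s.strip()]
    chunks: list[str] = []
    current = ""
    for sentence in sentences:
        # Start a new chunk once adding the sentence would exceed the limit.
        if current and len(current) + len(sentence) + 1 > chunk_size:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks
```

Because whole sentences are never split, a single sentence longer than `chunk_size` still becomes its own (oversized) chunk; tuning `chunk_size` trades context breadth against per-chunk focus, as the diff notes.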
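
Similarly, the LLM-as-a-Judge example mentions a 4-level rating scale but no code appears in the diff. A minimal sketch of the scoring side, assuming the judge model replies in free text that contains a 1-4 rating; the rubric wording and `parse_judge_score` are invented for illustration and may differ from `llm-evaluator-notebook.md`:

```python
import re

# Illustrative 4-level rubric; the actual criteria used in the
# notebook may differ.
RUBRIC = {1: "incorrect", 2: "partially correct", 3: "mostly correct", 4: "fully correct"}

def parse_judge_score(judge_reply: str) -> int | None:
    """Extract the first standalone 1-4 rating from a judge model's
    free-text reply, or None if no rating is found."""
    match = re.search(r"\b([1-4])\b", judge_reply)
    return int(match.group(1)) if match else None
```

Returning `None` for unparseable replies lets the caller retry or discard the evaluation rather than silently defaulting to a score.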