iris-s-coon committed on
Commit 16a7f59 · 1 Parent(s): cdd8a80

📖 [docs] Update English README

Files changed (1):
  1. docs/README.en.md +34 -25
docs/README.en.md CHANGED
@@ -31,7 +31,7 @@ license: mit
  </p>
 
  <h2 align="center">
- Llama Model Fine-tuning Experiment Environment
  </h2>
 
  <p align="center">
@@ -44,7 +44,7 @@ license: mit
 
  ## 🚀 Project Overview
 
- **Llama-finetune-sandbox** provides an experimental environment for learning and validating Llama model fine-tuning. You can try various fine-tuning methods, customize models, and evaluate performance. It caters to a wide range of users, from beginners to researchers. Version 0.3.0 included improved documentation and an updated English README.
 
 
  ## ✨ Key Features
@@ -53,14 +53,14 @@ license: mit
  - LoRA (Low-Rank Adaptation)
  - QLoRA (Quantized LoRA)
 
- 2. **Flexible Model Configuration**:
  - Customizable maximum sequence length
  - Various quantization options
  - Multiple attention mechanisms
 
- 3. **Experiment Environment**:
- - Performance evaluation tools (added in v0.3.0, later removed)
- - Optimized memory usage
  - Visualization of experimental results
 
  ## 📚 Examples
@@ -68,31 +68,39 @@ license: mit
  This repository includes the following examples:
 
  ### Fast Fine-tuning using Unsloth
- - Implementation of fast fine-tuning for Llama-3.2-1B/3B models.
- - → See [`Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md`](sandbox/Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md) for details. (Japanese)
- - → Use [this](https://huggingface.co/spaces/MakiAi/JupytextWebUI) to convert from markdown to notebook format.
  - [📒Notebook here](https://colab.research.google.com/drive/1AjtWF2vOEwzIoCMmlQfSTYCVgy4Y78Wi?usp=sharing)
 
  ### Efficient Model Deployment using Ollama and LiteLLM
- - Setup and deployment guide on Google Colab.
  - → See [`efficient-ollama-colab-setup-with-litellm-guide.md`](sandbox/efficient-ollama-colab-setup-with-litellm-guide.md) for details.
  - [📒Notebook here](https://colab.research.google.com/drive/1buTPds1Go1NbZOLlpG94VG22GyK-F4GW?usp=sharing)
 
  ### LLM Evaluation System (LLMs as a Judge)
- - Implementation of a system for automatically evaluating the quality of LLM responses (added in v0.3.0, later removed).
- - Uses LLMs as evaluators to assess the responses of other LLMs (LLMs as a Judge method).
- - Quantitative quality assessment and feedback generation using a 4-level rating scale.
  - → See [`llm-evaluator-notebook.md`](sandbox/llm-evaluator-notebook.md) for details.
- - Efficient evaluation system using Gemini and LiteLLM.
  - [📒Notebook here](https://colab.research.google.com/drive/1haO44IeseQ3OL92HlsINAgBI_yA1fxcJ?usp=sharing)
 
- ### Wikipedia Data-based Q&A Dataset Generation (Sentence Pool QA Method)
- - Generation of high-quality Q&A datasets using the sentence pool QA method.
- - → A new dataset creation method that generates Q&A pairs while preserving context by pooling sentences separated by periods.
- - → Chunk size is flexibly adjustable (default 200 characters) to generate Q&A pairs with optimal context ranges for different purposes.
- - → See [`wikipedia-qa-dataset-generator.md`](sandbox/wikipedia-qa-dataset-generator.md) for details.
- - [📒Notebook here](https://colab.research.google.com/drive/1mmK5vxUzjk3lI6OnEPrQqyjSzqsEoXpk?usp=sharing)
 
  ## 🛠️ Setup
 
@@ -111,10 +119,10 @@ cd Llama-finetune-sandbox
 
  ## 🤝 Contributions
 
- - Implementation of new fine-tuning methods.
- - Bug fixes and feature improvements.
- - Documentation improvements.
- - Addition of usage examples.
 
  ## 📚 References
 
@@ -124,4 +132,5 @@ cd Llama-finetune-sandbox
 
  ## 📄 License
 
- This project is licensed under the MIT License.
 
 
  </p>
 
  <h2 align="center">
+ Experimental Environment for Fine-tuning Llama Models
  </h2>
 
  <p align="center">
 
 
  ## 🚀 Project Overview
 
+ **Llama-finetune-sandbox** provides an experimental environment for learning and verifying the fine-tuning of Llama models. You can try various fine-tuning methods, customize models, and evaluate their performance. It caters to a wide range of users, from beginners to researchers. Version 0.3.0 includes improved documentation and an updated English README.
 
 
  ## ✨ Key Features
 
  - LoRA (Low-Rank Adaptation)
  - QLoRA (Quantized LoRA)
 
+ 2. **Flexible Model Settings**:
  - Customizable maximum sequence length
  - Various quantization options
  - Multiple attention mechanisms
 
+ 3. **Experimental Environment Setup**:
+ - Performance evaluation tools (added in v0.3.0, subsequently removed)
+ - Memory usage optimization
  - Visualization of experimental results
 
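The LoRA entry above can be illustrated numerically: only a low-rank pair of matrices is trained, and their scaled product is added to the frozen weights. A minimal pure-Python sketch with toy sizes (illustrative only, not the repository's code):

```python
# Minimal illustration of the LoRA update rule: W' = W + (alpha / r) * (B @ A).
# Pure-Python sketch; real fine-tuning would use a library such as peft.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def lora_merge(W, A, B, alpha):
    """Merge a LoRA adapter (B @ A, scaled by alpha / r) into frozen weights W."""
    r = len(A)                      # adapter rank = number of rows of A
    delta = matmul(B, A)            # (d x r) @ (r x d) -> d x d update
    scale = alpha / r
    return [[w + scale * d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]

# Toy example: d = 2, rank r = 1, so only 2*d adapter values are trained
# instead of d*d full weights.
W = [[1.0, 0.0],
     [0.0, 1.0]]                    # frozen base weights
A = [[1.0, 2.0]]                    # r x d
B = [[0.5], [0.25]]                 # d x r
merged = lora_merge(W, A, B, alpha=1.0)
print(merged)                       # [[1.5, 1.0], [0.25, 1.5]]
```

QLoRA applies the same idea on top of a quantized base model, which is why the adapter matrices stay in full precision while the frozen weights do not.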
  ## 📚 Examples
 
  This repository includes the following examples:
 
  ### Fast Fine-tuning using Unsloth
+ - Implementation of fast fine-tuning for Llama-3.2-1B/3B models
+ - → See [`Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md`](sandbox/Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md) for details.
+ - → [Use this to convert from markdown to notebook format](https://huggingface.co/spaces/MakiAi/JupytextWebUI)
  - [📒Notebook here](https://colab.research.google.com/drive/1AjtWF2vOEwzIoCMmlQfSTYCVgy4Y78Wi?usp=sharing)
 
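As a rough configuration sketch of what an Unsloth-based fine-tuning setup looks like (the model id, rank, and target modules below are illustrative assumptions, not values from the linked notebook; a CUDA GPU is required to run this):

```python
# Hypothetical Unsloth configuration sketch for fast LoRA fine-tuning.
# Not runnable without a GPU; see the linked notebook for the actual steps.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-1B-Instruct",  # assumed model id
    max_seq_length=2048,       # customizable maximum sequence length
    load_in_4bit=True,         # 4-bit quantization to cut memory usage
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                      # LoRA rank (illustrative)
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```

The `max_seq_length` and `load_in_4bit` arguments correspond directly to the "flexible model settings" and quantization options listed under Key Features.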
  ### Efficient Model Deployment using Ollama and LiteLLM
+ - Setup and usage guide on Google Colab
  - → See [`efficient-ollama-colab-setup-with-litellm-guide.md`](sandbox/efficient-ollama-colab-setup-with-litellm-guide.md) for details.
  - [📒Notebook here](https://colab.research.google.com/drive/1buTPds1Go1NbZOLlpG94VG22GyK-F4GW?usp=sharing)
 
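A hypothetical sketch of the call pattern this guide enables, based on LiteLLM's public Ollama integration rather than the notebook itself (assumes `ollama serve` is already running locally and the model has been pulled; the model name is an example):

```python
# Calling a locally served Ollama model through LiteLLM's OpenAI-style API.
# Requires a running Ollama server; not runnable as-is in a sandbox.
import litellm

response = litellm.completion(
    model="ollama/llama3.2:1b",            # "ollama/" prefix routes to Ollama
    messages=[{"role": "user", "content": "Summarize LoRA in one sentence."}],
    api_base="http://localhost:11434",     # default local Ollama endpoint
)
print(response.choices[0].message.content)
```

Routing through LiteLLM means the same OpenAI-style code can later target a hosted provider without changes beyond the `model` string.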
  ### LLM Evaluation System (LLMs as a Judge)
+ - Implementation of a system for automatically evaluating the quality of LLM responses (added in v0.3.0, subsequently removed)
+ - Utilizing LLMs as evaluators to assess the responses of other LLMs (LLMs as a Judge method)
+ - Quantitative quality assessment and feedback generation using a 4-level rating scale
  - → See [`llm-evaluator-notebook.md`](sandbox/llm-evaluator-notebook.md) for details.
+ - Efficient evaluation system using Gemini and LiteLLM
  - [📒Notebook here](https://colab.research.google.com/drive/1haO44IeseQ3OL92HlsINAgBI_yA1fxcJ?usp=sharing)
 
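A minimal sketch of the LLMs-as-a-Judge flow described above: build an evaluation prompt, then parse the judge model's reply into a 4-level score plus feedback. The judge call itself is stubbed out here (a real run would call e.g. Gemini via LiteLLM); prompt wording and reply format are assumptions, not the notebook's actual templates:

```python
# Sketch of a 4-level LLM-as-a-Judge scoring pipeline with a stubbed judge.
import re

RUBRIC = {1: "poor", 2: "fair", 3: "good", 4: "excellent"}

def build_judge_prompt(question: str, answer: str) -> str:
    """Prompt asking the judge model for a 1-4 rating plus feedback."""
    return (
        "Rate the answer on a 1-4 scale (1=poor, 4=excellent) and explain why.\n"
        f"Question: {question}\nAnswer: {answer}\nReply as 'Score: N. <feedback>'"
    )

def parse_judgement(reply: str) -> tuple[int, str]:
    """Extract the numeric score and free-text feedback from the judge's reply."""
    m = re.search(r"Score:\s*([1-4])\.?\s*(.*)", reply, re.DOTALL)
    if not m:
        raise ValueError("no score found in judge reply")
    return int(m.group(1)), m.group(2).strip()

# Simulated judge reply instead of a live model call:
score, feedback = parse_judgement("Score: 3. Accurate but misses one detail.")
print(RUBRIC[score], "-", feedback)   # good - Accurate but misses one detail.
```

Keeping the rubric numeric makes the assessment quantitative, so scores can be aggregated across a whole response set.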
+ ### Wikipedia Data-based Q&A Dataset Generation (Sentence Pool QA Method)
+ - High-quality Q&A dataset generation using the Sentence Pool QA method
+ - → A new dataset creation method that generates Q&A pairs while maintaining context by pooling sentences separated by punctuation marks.
+ - → Chunk size is flexibly adjustable (default 200 characters) to generate Q&A pairs with the optimal context range for the application.
+ - → See [`wikipedia-qa-dataset-generator.md`](sandbox/wikipedia-qa-dataset-generator.md) for details.
+ - [📒Notebook here](https://colab.research.google.com/drive/1mmK5vxUzjk3lI6OnEPrQqyjSzqsEoXpk?usp=sharing)
+
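The pooling step described above can be sketched in a few lines: split the text at sentence-ending punctuation, then pool consecutive sentences into chunks of at most `chunk_size` characters so each Q&A pair is generated from a context-preserving span. This is an assumed reconstruction of the method's behavior, not the repository's implementation:

```python
# Sentence-pool chunking sketch: group sentences into pools of at most
# chunk_size characters (default 200, matching the description above).
import re

def pool_sentences(text: str, chunk_size: int = 200) -> list[str]:
    """Group sentences into pools no longer than chunk_size characters."""
    # Split after sentence-ending punctuation (Western and Japanese).
    sentences = [s.strip() for s in re.split(r"(?<=[.。!?])\s*", text) if s.strip()]
    pools, current = [], ""
    for sentence in sentences:
        if current and len(current) + len(sentence) + 1 > chunk_size:
            pools.append(current)        # pool is full; start a new one
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        pools.append(current)
    return pools

text = ("Llama is a family of large language models. "
        "It can be fine-tuned with LoRA. "
        "Quantization reduces memory usage.")
for pool in pool_sentences(text, chunk_size=80):
    print(pool)
```

Because whole sentences are never split, each pool stays readable on its own, which is what preserves context for the generated Q&A pairs.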
+ ### Context-Aware Reflexive QA Generation System
+ - Q&A dataset generation with reflexive quality improvement
+ - → A new method that automatically evaluates the quality of generated Q&A pairs and iteratively improves them.
+ - → Quantifies and evaluates factuality, question quality, and answer completeness.
+ - → High-precision question generation and answer consistency checking using contextual information.
+ - → See [`context_aware_Reflexive_qa_generator_V2.md`](sandbox/context_aware_Reflexive_qa_generator_V2.md) for details.
+ - [📒Notebook here](https://colab.research.google.com/drive/1OYdgAuXHbl-0LUJgkLl_VqknaAEmAm0S?usp=sharing)
 
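The reflexive loop above can be sketched as score-then-refine: rate a generated Q&A pair on the three listed dimensions and regenerate until every score clears a threshold. Scoring and refinement are simple stubs here (a real system would call an LLM for both); the heuristics and threshold are illustrative assumptions:

```python
# Reflexive Q&A improvement loop sketch with stubbed scoring/refinement.

def evaluate(qa: dict) -> dict:
    """Stub scorer on a 0.0-1.0 scale for the three dimensions above."""
    return {
        "factuality": 1.0 if qa["answer"] in qa["context"] else 0.4,
        "question_quality": 1.0 if qa["question"].endswith("?") else 0.5,
        "answer_completeness": min(1.0, len(qa["answer"]) / 20),
    }

def refine(qa: dict, scores: dict) -> dict:
    """Stub refinement: fix the obviously malformed aspect of the pair."""
    if scores["question_quality"] < 1.0:
        qa = {**qa, "question": qa["question"].rstrip(".") + "?"}
    return qa

def reflexive_generate(qa: dict, threshold: float = 0.9, max_iter: int = 3) -> dict:
    """Score the pair and refine until all scores clear the threshold."""
    for _ in range(max_iter):
        scores = evaluate(qa)
        if min(scores.values()) >= threshold:
            break
        qa = refine(qa, scores)
    return qa

qa = {"context": "LoRA adapts models with low-rank updates.",
      "question": "What does LoRA use.",        # malformed: not a question
      "answer": "low-rank updates"}
print(reflexive_generate(qa)["question"])        # What does LoRA use?
```

The `max_iter` cap matters in practice: an LLM judge may never award a perfect score, so the loop must terminate even when the threshold is not reached.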
  ## 🛠️ Setup
 
 
  ## 🤝 Contributions
 
+ - Implementation of new fine-tuning methods
+ - Bug fixes and feature improvements
+ - Documentation improvements
+ - Adding examples
 
  ## 📚 References
 
 
  ## 📄 License
 
+ This project is licensed under the MIT License.