[email protected] committed on
Commit b7d905e · 2 Parents: 12e2e94 387d3a5

Merge pull request #14 from Sunwood-ai-labs/translate-readme-11573465431

Files changed (1)
  1. docs/README.en.md +20 -19
docs/README.en.md CHANGED
@@ -44,7 +44,7 @@ license: mit
 
 ## 🚀 Project Overview
 
- **Llama-finetune-sandbox** provides an experimental environment for learning and verifying Llama model fine-tuning. You can try various fine-tuning methods, customize models, and evaluate their performance. It caters to a wide range of users, from beginners to researchers. Version 0.2.0 includes updated and improved documentation. Specifically, it adds guides for efficient model operation using Ollama and LiteLLM, and improves the readability of the README.md and implementation example guides.
+ **Llama-finetune-sandbox** provides an experimental environment for learning and validating Llama model fine-tuning. You can try various fine-tuning methods, customize models, and evaluate performance. It caters to a wide range of users, from beginners to researchers. Version 0.3.0 included improved documentation and an updated English README.
 
 
 ## ✨ Main Features
@@ -59,35 +59,36 @@ license: mit
 - Multiple attention mechanisms
 
 3. **Experiment Environment Setup**:
- - Performance evaluation tools
+ - Performance evaluation tools (added in v0.3.0, later removed)
 - Memory usage optimization
- - Visualization of experimental results
+ - Experiment result visualization
 
- ## 📚 Implementation Examples
 
- This repository includes the following implementation examples:
+ ## 📚 Examples
+
+ This repository includes the following examples:
 
 ### High-Speed Fine-tuning using Unsloth
- - Implementation of high-speed fine-tuning for Llama-3.2-1B/3B models
- - → See [`Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md`](sandbox/Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md) for details.
- - → [Use this to convert from markdown to notebook format](https://huggingface.co/spaces/MakiAi/JupytextWebUI)
+ - Implementation of high-speed fine-tuning for Llama-3.2-1B/3B models.
+ - → See [`Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md`](sandbox/Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md) for details. (Japanese)
+ - → Use [this tool](https://huggingface.co/spaces/MakiAi/JupytextWebUI) to convert from markdown to notebook format.
 - [📒Notebook here](https://colab.research.google.com/drive/1AjtWF2vOEwzIoCMmlQfSTYCVgy4Y78Wi?usp=sharing)
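For orientation, the Unsloth example above boils down to a short LoRA fine-tuning script. The sketch below is not taken from the linked notebook; the model id, dataset file, and hyperparameters are illustrative assumptions, and it presumes the `unsloth`, `trl`, `transformers`, and `datasets` packages are installed.

```python
# Hypothetical minimal sketch of Unsloth-style fast LoRA fine-tuning
# (placeholder model id, dataset file, and hyperparameters).
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load a 4-bit quantized Llama 3.2 base model through Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-1B-Instruct",  # assumed model id
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Any dataset with a "text" column works here; train.jsonl is a placeholder.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```

The speed and memory savings come from combining 4-bit loading with LoRA adapters, so only the small adapter matrices are updated during training.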
 
- ### Efficient Model Operation using Ollama and LiteLLM
- - Setup and operation guide on Google Colab
+ ### Efficient Model Deployment using Ollama and LiteLLM
+ - Setup and deployment guide for Google Colab.
 - → See [`efficient-ollama-colab-setup-with-litellm-guide.md`](sandbox/efficient-ollama-colab-setup-with-litellm-guide.md) for details.
 - [📒Notebook here](https://colab.research.google.com/drive/1buTPds1Go1NbZOLlpG94VG22GyK-F4GW?usp=sharing)
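For context, the Ollama plus LiteLLM combination lets a notebook talk to a locally served model through an OpenAI-style API. The sketch below is not taken from the linked guide; the model tag and endpoint are illustrative assumptions (11434 is Ollama's default port), and it presumes a running Ollama server and the `litellm` package.

```python
# Hypothetical minimal sketch: querying a local Ollama model through LiteLLM.
import litellm

response = litellm.completion(
    model="ollama/llama3.2",            # assumed Ollama model tag
    messages=[{"role": "user", "content": "Summarize LoRA fine-tuning in one sentence."}],
    api_base="http://localhost:11434",  # default local Ollama endpoint
)
print(response.choices[0].message.content)
```

Because LiteLLM normalizes providers behind one `completion()` call, the same code can later point at a hosted model by changing only the `model` string.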
 
 ### LLM Evaluation System (LLMs as a Judge)
- - Implementation of a system for automatically evaluating the quality of LLM responses.
+ - Implementation of a system for automatically evaluating the quality of LLM responses (added in v0.3.0, later removed).
 - Uses LLMs as evaluators to assess the responses of other LLMs (LLMs as a Judge method).
- - Quantitative quality assessment and feedback generation using a 4-level rating scale.
+ - Quantitative quality assessment and feedback generation using a 4-point rating scale.
 - → See [`llm-evaluator-notebook.md`](sandbox/llm-evaluator-notebook.md) for details.
 - Efficient evaluation system using Gemini and LiteLLM.
 - [📒Notebook here](https://colab.research.google.com/drive/1haO44IeseQ3OL92HlsINAgBI_yA1fxcJ?usp=sharing)
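For context, the evaluation flow described above can be pictured as a small judging loop. The sketch below is not taken from the linked notebook; the judge model name, rubric wording, and score parsing are illustrative assumptions, and it presumes the `litellm` package and a configured Gemini API key.

```python
# Hypothetical minimal sketch of an "LLM as a Judge" loop on a 1-4 scale.
import re
import litellm

JUDGE_PROMPT = """You are a strict evaluator. Rate the answer to the question on a 1-4 scale
(1 = unusable, 2 = partially correct, 3 = mostly correct, 4 = fully correct and well explained).
Reply with the score on the first line, then one sentence of feedback.

Question: {question}
Answer: {answer}"""

def judge(question: str, answer: str, judge_model: str = "gemini/gemini-1.5-flash") -> tuple[int, str]:
    # Ask the judge model (assumed LiteLLM route to Gemini) to grade the answer.
    response = litellm.completion(
        model=judge_model,
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(question=question, answer=answer)}],
    )
    text = response.choices[0].message.content
    match = re.search(r"[1-4]", text)           # first in-range digit is taken as the score
    score = int(match.group()) if match else 0  # 0 means the judge reply could not be parsed
    return score, text

score, feedback = judge("What is LoRA?", "LoRA fine-tunes a small set of low-rank adapter weights.")
print(score, feedback)
```

Pinning the score to the first line of the judge's reply keeps parsing trivial and makes the 4-point scale easy to aggregate across many responses.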
 
 
- ## 🛠️ Environment Setup
+ ## 🔧 Setup
 
 1. Clone the repository:
 ```bash
@@ -95,19 +96,19 @@ git clone https://github.com/Sunwood-ai-labs/Llama-finetune-sandbox.git
 cd Llama-finetune-sandbox
 ```
 
- ## 📝 Adding Implementation Examples
+ ## 📝 Adding Examples
 
 1. Add new implementations to the `examples/` directory.
- 2. Add necessary settings and utilities to `utils/`.
+ 2. Add necessary configurations and utilities to `utils/`.
 3. Update documentation and tests.
 4. Create a pull request.
 
 ## 🤝 Contributions
 
- - Implementation of new fine-tuning methods
- - Bug fixes and feature improvements
- - Documentation improvements
- - Addition of usage examples
+ - Implementation of new fine-tuning methods.
+ - Bug fixes and feature improvements.
+ - Documentation improvements.
+ - Adding usage examples.
 
 ## 📚 References