[email protected] committed
Commit a7a2535 · 2 parents: bf5f953, 34f2add

Merge pull request #12 from Sunwood-ai-labs/translate-readme-11573137120

Files changed (1):
  docs/README.en.md (+15 −7)
docs/README.en.md CHANGED
@@ -44,33 +44,33 @@ license: mit
 
 ## 🚀 Project Overview
 
- **Llama-finetune-sandbox** provides an experimental environment for learning and verifying Llama model fine-tuning. You can try various fine-tuning methods, customize models, and evaluate performance. It caters to a wide range of users, from beginners to researchers. Version 0.2.0 includes updated and improved documentation. In particular, guides for efficient model operation using Ollama and LiteLLM have been added, and the README.md and implementation example guides have been made easier to understand.
+ **Llama-finetune-sandbox** provides an experimental environment for learning and verifying Llama model fine-tuning. You can try various fine-tuning methods, customize models, and evaluate their performance. It caters to a wide range of users, from beginners to researchers. Version 0.2.0 includes updated and improved documentation. Specifically, it adds guides for efficient model operation using Ollama and LiteLLM, and improves the readability of the README.md and implementation example guides.
 
 
 ## ✨ Main Features
 
- 1. **Diverse Fine-tuning Methods:**
+ 1. **Diverse Fine-tuning Methods**:
   - LoRA (Low-Rank Adaptation)
   - QLoRA (Quantized LoRA)
 
- 2. **Flexible Model Configuration:**
+ 2. **Flexible Model Configuration**:
   - Customizable maximum sequence length
   - Various quantization options
   - Multiple attention mechanisms
 
- 3. **Well-equipped Experiment Environment:**
+ 3. **Experiment Environment Setup**:
   - Performance evaluation tools
   - Memory usage optimization
-  - Experiment result visualization
+  - Visualization of experimental results
 
 ## 📚 Implementation Examples
 
 This repository includes the following implementation examples:
 
 ### High-Speed Fine-tuning using Unsloth
- - High-speed fine-tuning implementation of Llama-3.2-1B/3B models
+ - Implementation of high-speed fine-tuning for Llama-3.2-1B/3B models
 - → See [`Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md`](sandbox/Llama_3_2_1B+3B_Conversational_+_2x_faster_finetuning_JP.md) for details.
- - → [Use this to convert from markdown format to notebook format](https://huggingface.co/spaces/MakiAi/JupytextWebUI)
+ - → [Use this to convert from markdown to notebook format](https://huggingface.co/spaces/MakiAi/JupytextWebUI)
 - [📒Notebook here](https://colab.research.google.com/drive/1AjtWF2vOEwzIoCMmlQfSTYCVgy4Y78Wi?usp=sharing)
 
 ### Efficient Model Operation using Ollama and LiteLLM
@@ -78,6 +78,14 @@ This repository includes the following implementation examples:
 - → See [`efficient-ollama-colab-setup-with-litellm-guide.md`](sandbox/efficient-ollama-colab-setup-with-litellm-guide.md) for details.
 - [📒Notebook here](https://colab.research.google.com/drive/1buTPds1Go1NbZOLlpG94VG22GyK-F4GW?usp=sharing)
 
+ ### LLM Evaluation System (LLMs as a Judge)
+ - Implementation of a system for automatically evaluating the quality of LLM responses.
+ - Uses LLMs as evaluators to assess the responses of other LLMs (LLMs as a Judge method).
+ - Quantitative quality assessment and feedback generation using a 4-level rating scale.
+ - → See [`llm-evaluator-notebook.md`](sandbox/llm-evaluator-notebook.md) for details.
+ - Efficient evaluation system using Gemini and LiteLLM.
+ - [📒Notebook here](https://colab.research.google.com/drive/1haO44IeseQ3OL92HlsINAgBI_yA1fxcJ?usp=sharing)
+
 
 ## 🛠️ Environment Setup
 
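The feature list and the Unsloth example in the diff above revolve around LoRA/QLoRA fine-tuning with a configurable maximum sequence length and quantization options. A setup along those lines might look like the sketch below; the model name, LoRA rank, and target modules are illustrative assumptions, not the values used in the repository's notebooks.

```python
from unsloth import FastLanguageModel

# Load a small Llama base model; load_in_4bit=True gives the quantized base
# that QLoRA trains adapters on top of. Model name and max_seq_length are assumptions.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-1B-Instruct",  # hypothetical choice
    max_seq_length=2048,                          # customizable maximum sequence length
    load_in_4bit=True,                            # one of several quantization options
)

# Attach LoRA adapters; rank, alpha, and target modules are placeholder values.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```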
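For the "Efficient Model Operation using Ollama and LiteLLM" section, the core idea is serving a model locally with Ollama and calling it through LiteLLM's OpenAI-style `completion` interface. A minimal sketch, assuming an Ollama server on its default port with a `llama3.2` model already pulled (both assumptions; the linked guide covers the actual Colab setup):

```python
from litellm import completion

# Route an OpenAI-style chat request to a local Ollama server via LiteLLM.
response = completion(
    model="ollama/llama3.2",            # any model already pulled into Ollama
    messages=[{"role": "user", "content": "Summarize LoRA in one sentence."}],
    api_base="http://localhost:11434",  # Ollama's default endpoint
)
print(response.choices[0].message.content)
```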
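The new "LLM Evaluation System (LLMs as a Judge)" section describes scoring one model's answers with another model on a 4-level scale and generating feedback, using Gemini through LiteLLM. The sketch below shows that flow in outline only; the rubric wording, the `judge_answer` helper, the score parsing, and the `gemini/gemini-1.5-flash` model name are assumptions rather than the notebook's actual implementation.

```python
import re

from litellm import completion

# Hypothetical 4-level rubric; the real criteria live in sandbox/llm-evaluator-notebook.md.
RUBRIC = (
    "Rate the answer on a 4-level scale: "
    "4 = correct and helpful, 3 = mostly correct with minor issues, "
    "2 = partially correct or incomplete, 1 = incorrect or unhelpful. "
    "Reply with a line 'Score: <1-4>' followed by short feedback."
)


def judge_answer(question: str, answer: str, judge_model: str = "gemini/gemini-1.5-flash"):
    """Ask a judge LLM to score another model's answer (LLMs as a Judge)."""
    prompt = f"{RUBRIC}\n\nQuestion:\n{question}\n\nAnswer to evaluate:\n{answer}"
    response = completion(
        model=judge_model,  # requires GEMINI_API_KEY in the environment
        messages=[{"role": "user", "content": prompt}],
    )
    feedback = response.choices[0].message.content
    match = re.search(r"Score:\s*([1-4])", feedback)
    score = int(match.group(1)) if match else None
    return score, feedback


if __name__ == "__main__":
    score, feedback = judge_answer(
        "What does QLoRA add on top of LoRA?",
        "QLoRA applies LoRA adapters on top of a 4-bit quantized base model.",
    )
    print(score)
    print(feedback)
```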