danielhanchen committed on
Commit af58fc5 · verified · 1 Parent(s): 8537265

Update README.md

Files changed (1)
  1. README.md +9 -13
README.md CHANGED
@@ -6,19 +6,15 @@ colorTo: blue
 sdk: static
 pinned: false
 ---
-🦥Unsloth makes fine-tuning of LLMs 2.2x faster and use 80% less VRAM!
+🦥Unsloth makes fine-tuning of LLMs & **Vision LMs** 2.2x faster and use 80% less VRAM!
 
 [Join our Discord server!](https://discord.gg/unsloth) Our open source Github: https://github.com/unslothai/unsloth
 
-| Unsloth support includes | Free Notebooks | Performance | Memory use |
-|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
-| **Llama 3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.1x faster | 60% less |
-| **Llama 3.2 (1+3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1T5-zKWM_5OD21QHwXHiV9ixTRR7k3iB9?usp=sharing) | 2.1x faster | 60% less |
-| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2x faster | 63% less |
-| **Mistral Nemo (12B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/17d3U-CAIwzmbDRqbZ9NnpHxCkmXB6LZ0?usp=sharing) | 2x faster | 60% less |
-| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
-
-**[NEW]** Vision finetuning is here! We support Llama, Qwen, Pixtral, Llava & others!
-* **Llama 3.2 Vision** [▶️ Start on Colab](https://colab.research.google.com/drive/1j0N4XTY1zXXy7mPAhOC1_gMYZ2F2EBlk?usp=sharing)
-* **Qwen2 VL 7B** [▶️ Start on Colab](https://colab.research.google.com/drive/1whHb54GNZMrNxIsi2wm2EY_-Pvo2QyKh?usp=sharing)
-* **Pixtral 12B** [▶️ Start on Colab](https://colab.research.google.com/drive/1K9ZrdwvZRE96qGkCq_e88FgV3MLnymQq?usp=sharing)
+| Unsloth support includes | Free Notebooks | Performance | Memory use | Vision Support |
+|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|------------------|
+| **Llama 3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.1x faster | 60% less | Llava based |
+| **Llama 3.2 (1+3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1T5-zKWM_5OD21QHwXHiV9ixTRR7k3iB9?usp=sharing) | 2.1x faster | 60% less | [▶️ Vision Colab](https://colab.research.google.com/drive/1j0N4XTY1zXXy7mPAhOC1_gMYZ2F2EBlk?usp=sharing) |
+| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2x faster | 63% less | |
+| **Mistral Nemo (12B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/17d3U-CAIwzmbDRqbZ9NnpHxCkmXB6LZ0?usp=sharing) | 2x faster | 60% less | [▶️ Pixtral Colab](https://colab.research.google.com/drive/1K9ZrdwvZRE96qGkCq_e88FgV3MLnymQq?usp=sharing) |
+| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less | |
+| **Qwen 2.5 Coder (14B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/18sN803sU23XuJV9Q8On2xgqHSer6-UZF?usp=sharing) | 2x faster | 50% less | [▶️ Qwen2VL Colab](https://colab.research.google.com/drive/1whHb54GNZMrNxIsi2wm2EY_-Pvo2QyKh?usp=sharing) |
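
For context on what the notebooks in the updated table do, the sketch below shows roughly how a text model from the table is loaded for LoRA fine-tuning with Unsloth. It is a minimal sketch assuming the `FastLanguageModel` API available around the time of this commit; the model name and hyperparameters are illustrative choices, not values taken from the linked notebooks.

```python
# Minimal sketch of 4-bit LoRA fine-tuning setup with Unsloth.
# Assumes the FastLanguageModel API; values below are illustrative.
from unsloth import FastLanguageModel

# Load a base model in 4-bit to reduce VRAM use (the source of the
# "less memory" numbers in the table above).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",  # any model from the table
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; only these low-rank matrices are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# `model` and `tokenizer` then plug into a standard SFT training loop,
# as demonstrated end-to-end in the free Colab notebooks linked above.
```

The vision notebooks added in this commit (Llama 3.2 Vision, Qwen2 VL, Pixtral) follow the analogous vision-model path in Unsloth rather than `FastLanguageModel`; see the linked Vision Colabs for the exact setup.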