---
title: README
emoji: πŸƒ
colorFrom: pink
colorTo: red
sdk: static
pinned: false
---

<p align="center" width="100%">
</p>

<div id="top" align="center">

<p style="font-size: 40px; font-weight: bold;">Knowledge Fusion of Large Language Models</p>


<h4> | <a href="https://arxiv.org/abs/2401.10491"> 📑 FuseLLM Paper @ICLR2024 </a> | <a href="https://arxiv.org/abs/2408.07990"> 📑 FuseChat Tech Report </a> | <a href="https://arxiv.org/abs/2412.03187"> 📑 WRPO Paper @ICLR2025 </a> | <a href="https://arxiv.org/pdf/2503.04222"> 📑 FuseChat-3.0 Tech Report </a> |
</h4>
<h4>
| <a href="https://huggingface.co/FuseAI"> 🤗 HuggingFace Repo </a> | <a href="https://github.com/fanqiwan/FuseLLM"> 🐱 GitHub Repo </a> | <a href="https://huggingface.co/blog/Wanfq/fuseo1-preview"> 🌐 FuseO1-Preview Blog </a> |
</h4>
  <p align="center">
    <img src="logo.png" width="60%"> <br>
</p>

</div>

## FuseAI

FuseAI is an open-source research community focused on model fusion topics. 

Community members are currently applying model fusion to foundation, chat, and o1-like LLMs.

Welcome to join us!

## News

### FuseO1-Preview [74.0 on AIME24, approaching OpenAI o1's 79.2]

- **Jan 21, 2025:** 🔥 [FuseO1-Preview](https://huggingface.co/collections/FuseAI/fuseo1-preview-678eb56093649b2688bc9977) is our initial endeavor to enhance the System-II reasoning capabilities of large language models (LLMs) through model fusion. By employing our [SCE](https://arxiv.org/abs/2408.07990) merging method, we integrate multiple open-source o1-like LLMs into a single unified model, aiming to combine their distinct knowledge and strengths into strong System-II reasoning abilities, particularly in the mathematics, coding, and science domains.

To achieve this, we conduct two types of model merging:

- **Long-Long Reasoning Merging**: This approach fuses LLMs that all use long-CoT reasoning, with the goal of enhancing long-CoT reasoning capabilities. The resulting [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview) achieves a Pass@1 accuracy of **74.0 on AIME24**, a significant improvement over OpenAI o1-preview (44.6) and OpenAI o1-mini (63.4), approaching OpenAI o1 (79.2).
- **Long-Short Reasoning Merging**: This approach fuses long-CoT and short-CoT LLMs, aiming to improve reasoning in both long and short reasoning processes. The resulting [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Instruct-32B-Preview) and [FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview) are capable of utilizing both long and short reasoning processes and demonstrate relatively strong performance on long reasoning tasks (a minimal inference sketch follows this list).
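
The merged checkpoints are standard Hugging Face causal LMs, so they can be tried directly with `transformers`. Below is a minimal inference sketch for [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview); the prompt, sampling settings, and token budget are illustrative assumptions rather than officially recommended values, and a 32B model needs correspondingly large GPU memory (plus `accelerate` for `device_map="auto"`).

```python
# Minimal inference sketch for the merged FuseO1 model (settings are illustrative assumptions).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-32B-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "If 3x + 7 = 22, what is x? Think step by step."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Long-CoT models need a generous token budget to finish their reasoning trace.
output_ids = model.generate(input_ids, max_new_tokens=2048, do_sample=True, temperature=0.6)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```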

<p align="center">
    <img src="fuseo1-preview.jpg" width="100%"> <br>
</p>


### FuseChat-3.0 [SOTA 8B LLM on AlpacaEval-2 & Arena-Hard]

- **Dec 12, 2024:** 🔥 We release [FuseChat-3.0](https://huggingface.co/collections/FuseAI/fusechat-30-6752d18dec430bad7a236a75) and its [Blog Post](https://slit-ai.github.io/FuseChat-3.0/). FuseChat-3.0 is a series of models crafted to enhance performance by integrating the strengths of multiple source LLMs into more compact target LLMs. For this fusion, we utilized four powerful source LLMs: [Gemma-2-27b-It](https://huggingface.co/google/gemma-2-27b-it), [Mistral-Large-Instruct-2407](https://huggingface.co/mistralai/Mistral-Large-Instruct-2407), [Qwen-2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct), and [Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct). For the target LLMs, we employed three widely used smaller models ([Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct), [Gemma-2-9B-It](https://huggingface.co/google/gemma-2-9b-it), and [Qwen-2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)) along with two even more compact models ([Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) and [Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)). The implicit model fusion process is a two-stage training pipeline: Supervised Fine-Tuning (SFT) to mitigate distribution discrepancies between the target and source LLMs, followed by Direct Preference Optimization (DPO) to learn preferences from multiple source LLMs. The resulting FuseChat-3.0 models demonstrate substantial improvements in general conversation, instruction following, mathematics, and coding. Notably, with Llama-3.1-8B-Instruct as the target LLM, our fusion approach achieves an average improvement of **6.8** points across 14 benchmarks, and gains of **37.1** and **30.1** points on the instruction-following test sets AlpacaEval-2 and Arena-Hard, respectively.
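
To give a concrete feel for this two-stage pipeline, here is a minimal, generic SFT-then-DPO sketch using Hugging Face TRL. This is not the FuseChat-3.0 training code: the datasets below are public placeholders, the hyperparameters are illustrative, the real pipeline builds its SFT and preference data from the source-LLM responses described above, and older TRL releases may need small API adjustments (e.g. `tokenizer=` instead of `processing_class=`).

```python
# Generic SFT -> DPO sketch with TRL (placeholder datasets and hyperparameters;
# not the actual FuseChat-3.0 recipe or training data).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer, SFTConfig, SFTTrainer

base_model = "meta-llama/Llama-3.1-8B-Instruct"  # one of the target LLMs above
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Stage 1: supervised fine-tuning to mitigate the distribution gap between target and source LLMs.
sft_trainer = SFTTrainer(
    model=AutoModelForCausalLM.from_pretrained(base_model),
    train_dataset=load_dataset("trl-lib/Capybara", split="train"),  # placeholder SFT data
    args=SFTConfig(output_dir="fusechat-sft"),
    processing_class=tokenizer,
)
sft_trainer.train()
sft_trainer.save_model("fusechat-sft")

# Stage 2: DPO on preference pairs (in FuseChat-3.0 these are built from multiple source-LLM responses).
dpo_trainer = DPOTrainer(
    model=AutoModelForCausalLM.from_pretrained("fusechat-sft"),
    args=DPOConfig(output_dir="fusechat-dpo", beta=0.1),
    train_dataset=load_dataset("trl-lib/ultrafeedback_binarized", split="train"),  # placeholder preference pairs
    processing_class=tokenizer,
)
dpo_trainer.train()
```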

<p align="center">
    <img src="FuseChat-3.0.png" width="60%"> <br>
</p>

### FuseChat [SOTA 7B LLM on MT-Bench]


- **Aug 16, 2024:** 🔥🔥🔥🔥 We update the [FuseChat tech report](https://arxiv.org/abs/2408.07990) and release [FuseChat-7B-v2.0](https://huggingface.co/FuseAI/FuseChat-7B-v2.0), which is the fusion of six prominent chat LLMs with diverse architectures and scales, namely [OpenChat-3.5-7B](https://huggingface.co/openchat/openchat_3.5), [Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha), [NH2-Solar-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B), [InternLM2-Chat-20B](https://huggingface.co/internlm/internlm2-chat-20b), [Mixtral-8x7B-Instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1), and [Qwen1.5-Chat-72B](https://huggingface.co/Qwen/Qwen1.5-72B-Chat). FuseChat-7B-v2.0 achieves an average performance of **7.38** on MT-Bench (GPT-4-0125-Preview as the judge LLM), which is comparable to [Mixtral-8x7B-Instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) and approaches [GPT-3.5-Turbo-1106](https://platform.openai.com/docs/models/gpt-3-5-turbo).

- **Mar 13, 2024:** 🔥🔥🔥 We release a HuggingFace Space for [FuseChat-7B](https://huggingface.co/spaces/FuseAI/FuseChat-7B), try it now!

- **Feb 26, 2024:** 🔥🔥 We release [FuseChat-7B-VaRM](https://huggingface.co/FuseAI/FuseChat-7B-VaRM), which is the fusion of three prominent chat LLMs with diverse architectures and scales, namely [NH2-Mixtral-8x7B](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO), [NH2-Solar-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B), and [OpenChat-3.5-7B](https://huggingface.co/openchat/openchat_3.5). FuseChat-7B-VaRM achieves an average performance of **8.22** on MT-Bench, outperforming various powerful chat LLMs like [Starling-7B](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha), [Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat), and [Tulu-2-DPO-70B](https://huggingface.co/allenai/tulu-2-dpo-70b), even surpassing [GPT-3.5 (March)](https://platform.openai.com/docs/models/gpt-3-5-turbo) and [Claude-2.1](https://www.anthropic.com/news/claude-2-1), and approaching [Mixtral-8x7B-Instruct](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1).

- **Feb 25, 2024:** 🔥 We release [FuseChat-Mixture](https://huggingface.co/datasets/FuseAI/FuseChat-Mixture), a comprehensive training dataset that covers different styles and capabilities, features both human-written and model-generated samples, and spans general instruction-following and specific skills.

<p align="center">
    <img src="tab0.png" width="60%"> <br>
</p>

<p align="center">
    <img src="tab1.png" width="95%"> <br>
</p>


### FuseLLM [Surpassing Llama-2-7B]

- **Jan 22, 2024:** 🔥 We release [FuseLLM-7B](https://huggingface.co/Wanfq/FuseLLM-7B), which is the fusion of three open-source foundation LLMs with distinct architectures, including [Llama-2-7B](https://huggingface.co/meta-llama/Llama-2-7b-hf), [OpenLLaMA-7B](https://huggingface.co/openlm-research/open_llama_7b_v2), and [MPT-7B](https://huggingface.co/mosaicml/mpt-7b).

<p align="center">
    <img src="fig0.png" width="95%"> <br>
</p>

<p align="center">
    <img src="fig1.png" width="95%"> <br>
</p>


## Citation

Please cite the following paper if you reference our model, code, data, or paper related to FuseLLM.
```
@inproceedings{wan2024knowledge,
  title={Knowledge Fusion of Large Language Models},
  author={Fanqi Wan and Xinting Huang and Deng Cai and Xiaojun Quan and Wei Bi and Shuming Shi},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/pdf?id=jiDsk12qcz}
}
```

Please cite the following paper if you reference our model, code, data, or paper related to FuseChat.
```
@article{wan2024fusechat,
  title={FuseChat: Knowledge Fusion of Chat Models},
  author={Fanqi Wan and Longguang Zhong and Ziyi Yang and Ruijun Chen and Xiaojun Quan},
  journal={arXiv preprint arXiv:2408.07990},
  year={2024}
}
```

Please cite the following paper if you reference our model, code, data, or paper related to WRPO.
```
@inproceedings{yang2025weightedreward,
  title={Weighted-Reward Preference Optimization for Implicit Model Fusion},
  author={Ziyi Yang and Fanqi Wan and Longguang Zhong and Tianyuan Shi and Xiaojun Quan},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=fq24pEb8SL}
}
```

Please cite the following paper if you reference our model, code, data, or paper related to FuseChat-3.0.
```
@article{yang2025fusechat,
  title={FuseChat-3.0: Preference Optimization Meets Heterogeneous Model Fusion}, 
  author={Ziyi Yang and Fanqi Wan and Longguang Zhong and Canbin Huang and Guosheng Liang and Xiaojun Quan},
  journal={arXiv preprint arXiv:2503.04222},
  year={2025},
}
```