Magpie Alignment

AI & ML interests

Transparent LLM alignment for all.


Hi, I am a magpie 🐦!

🕸️ Project Website: https://magpie-align.github.io/

📄 Technical Report: https://arxiv.org/abs/2406.08464

🤗 HF Paper Page: https://huggingface.co/papers/2406.08464

😬 Code: https://github.com/magpie-align/magpie

🤗 Magpie Demo: https://huggingface.co/spaces/davanstrien/magpie (Many thanks to @davanstrien for the implementation!)

🐦 MagpieLM: MagpieLM-4B, MagpieLM-8B

Questions? Please contact Zhangchen and/or Yuchen by email, or open an issue on GitHub.

🧭 Click here for full dataset navigation (SFT and DPO)

Raw Datasets

| Model Name | Dataset | Type | Description |
|---|---|---|---|
| Qwen2.5 72B Instruct | Magpie-Qwen2.5-Pro-1M | SFT | 1M raw conversations built with Qwen2.5 72B Instruct. |
| Llama 3.1 70B Instruct | Magpie-Llama-3.1-Pro-1M | SFT | 1M raw conversations built with Meta Llama 3.1 70B. |
| Llama 3 70B Instruct | Magpie-Pro-1M | SFT | 1M raw conversations built with Meta Llama 3 70B. |
| Llama 3 8B Instruct | Magpie-Air-3M | SFT | 3M raw conversations built with Meta Llama 3 8B. |
| Qwen2 72B Instruct | Magpie-Qwen2-Pro-1M | SFT | 1M raw conversations built with Qwen2 72B Instruct. |
| Qwen2 7B Instruct | Magpie-Qwen2-Air-3M | SFT | 3M raw conversations built with Qwen2 7B Instruct. |
| Phi-3 Medium Instruct | Magpie-Phi3-Pro-1M | SFT | 1M raw conversations built with Phi-3 Medium Instruct. |
| Gemma-2-27b-it | Magpie-Gemma2-Pro-534K | SFT | 534K conversations built with Gemma-2-27b-it. |
| Llama 3.1 405B Instruct | Magpie-Ultra-v0.1 | SFT | [Argilla] 50K raw conversations built with Meta Llama 3.1 405B. |
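
All of the datasets above are hosted on the Hugging Face Hub, so they can be loaded directly with the `datasets` library. A minimal sketch, assuming a repo id of the form Magpie-Align/<dataset-name> and a `train` split (check each dataset card for the exact id, splits, and column names):

```python
# Minimal sketch: load one of the raw Magpie datasets with the Hugging Face
# `datasets` library. The repo id and split below are assumptions; check the
# dataset card for the exact values.
from datasets import load_dataset

raw = load_dataset("Magpie-Align/Magpie-Pro-1M", split="train")  # assumed repo id

print(raw)     # inspect the available columns (instruction/response fields, metadata, etc.)
print(raw[0])  # look at a single conversation
```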

Recommended Filtered Datasets

Below are filtered datasets created by the authors and used to train our Magpie-Align models. We also encourage you to create and apply your own filters to customize datasets; a minimal filtering sketch follows the table below.

We keep these datasets in the 200K-300K conversation range, which we found to be a sweet spot between model performance and training time.

The full list of filtered datasets can be found here.

| Model Name | Dataset | Size | Type | Description |
|---|---|---|---|---|
| Llama 3.1 70B Instruct | Magpie-Llama-3.1-Pro-MT-300K-Filtered | 300K | SFT | (🌟 Flexible License! 🌟) Selects 300K high-quality multi-turn conversations from Magpie-Llama-3.1-Pro-MT-500K. |
| Llama 3 70B Instruct | Magpie-Pro-300K-Filtered | 300K | SFT | Applies a filter and selects 300K high-quality conversations from Magpie-Pro-1M. |
| Llama 3 70B Instruct | Magpie-Pro-MT-300K | 300K | SFT | Selects 300K difficult questions from Magpie-Pro-1M and extends them to multi-turn conversations. |
| Llama 3 70B Instruct | Magpie-Reasoning-150K | 150K | SFT | Reasoning booster with 150K math, code, and reasoning conversations; we recommend mixing it with Magpie-Pro-MT-300K. |
| Qwen2 72B Instruct | Magpie-Qwen2-Pro-200K-Chinese | 200K | SFT | Applies a filter and selects 200K high-quality Chinese conversations from Magpie-Qwen2-Pro-1M. |
| Gemma-2-27b-it | Magpie-Gemma2-Pro-200K-Filtered | 200K | SFT | (🌟 Flexible License! 🌟) Applies a filter and selects 200K conversations from Magpie-Gemma2-Pro-534K. |
| Llama 3 8B Instruct | Magpie-Air-DPO-100K | 100K | DPO | DPO dataset built via Best-of-N sampling and reward scoring. |
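
As a sketch of the custom filtering encouraged above, the snippet below applies a simple length-based heuristic with `Dataset.filter`. The repo id and the `instruction` column name are assumptions; adapt them to the actual schema of the dataset you work with.

```python
# Sketch of a custom filter over a raw Magpie dataset. The repo id and the
# `instruction` column are assumptions; consult the dataset card for the
# actual schema and pick a heuristic suited to your use case.
from datasets import load_dataset

raw = load_dataset("Magpie-Align/Magpie-Pro-1M", split="train")  # assumed repo id

# Example heuristic: keep conversations whose instruction is reasonably long.
filtered = raw.filter(lambda ex: len(ex["instruction"]) > 100)

print(f"Kept {len(filtered)} of {len(raw)} conversations")

# Optionally publish the customized subset back to the Hub.
# filtered.push_to_hub("your-username/magpie-pro-custom-filtered")
```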