Introducing our first standalone model – FluentlyLM Prinum
Introducing the first standalone model from Project Fluently LM! We worked on it for several months, tried different approaches, and eventually settled on the optimal one.
General characteristics:
- Model type: causal language model (QwenForCausalLM, Transformer)
- Number of parameters: 32.5B
- Number of parameters (non-embedding): 31.0B
- Number of layers: 64
- Context length: 131,072 tokens
- Languages (NLP): English, French, Spanish, Russian, Chinese, Japanese, Persian (officially supported)
- License: MIT
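Since the model is a standard causal LM, it can be loaded with Hugging Face Transformers like any other QwenForCausalLM checkpoint. The sketch below is a minimal example; the repository ID and prompt are assumptions, so check the official model card for the exact name and recommended generation settings.

```python
# Minimal sketch of loading the model with Transformers (repo ID is an assumption).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fluently-lm/FluentlyLM-Prinum"  # assumed repository ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # 32.5B parameters: expect to need bf16/fp16 plus multi-GPU or quantization
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize the key features of FluentlyLM Prinum."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```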
Creation strategy: The basis of the strategy is shown in Pic. 2. We used Axolotl & Unsloth for SFT fine-tuning with PEFT LoRA (rank=64, alpha=64), and Mergekit for SLERP and TIES merges.
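For readers unfamiliar with the LoRA part of the setup, here is a minimal sketch of a rank=64, alpha=64 adapter configured with the PEFT library. In practice the training was driven through Axolotl/Unsloth configs rather than raw PEFT code, and the base model name, dropout, and target modules below are illustrative assumptions, not details from the post.

```python
# Minimal sketch of an SFT LoRA setup with rank=64, alpha=64 (values from the post).
# Base model, dropout, and target modules are assumptions for illustration only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-32B",   # assumed base checkpoint; the post only states a Qwen-family architecture
    torch_dtype="auto",
    device_map="auto",
)

lora_config = LoraConfig(
    r=64,                 # LoRA rank, as stated in the post
    lora_alpha=64,        # LoRA alpha, as stated in the post
    lora_dropout=0.05,    # assumption; not specified in the post
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # typical attention projections; assumption
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # shows how few parameters the adapter actually trains
```

The resulting adapters can then be merged back into the base weights and combined with other checkpoints via Mergekit's SLERP and TIES methods, as described above.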