
Mingxing Li

MingxingLi

AI & ML interests

None yet

Recent Activity

reacted to reach-vb's post with 🔥 about 1 month ago

Organizations

None yet

MingxingLi's activity

reacted to prithivMLmods's post with 🤗 about 1 month ago
CRISP 🔥 [ Isometric-3D-Cinematography / Isometric-3D-Obj / 3D-Kawaii / Long Toons ]

[ Flux DLC ] : prithivMLmods/FLUX-LoRA-DLC

[ Stranger Zone ] : https://huggingface.co/strangerzonehf

🎃 [ Isometric 3D Cinematography ] : strangerzonehf/Flux-Isometric-3D-Cinematography
🎃 [ Isometric 3D ] : strangerzonehf/Flux-Isometric-3D-LoRA
🎃 [ Cute 3D Kawaii ] : strangerzonehf/Flux-Cute-3D-Kawaii-LoRA
🌚 [ Long Toon 3D ] : prithivMLmods/Flux-Long-Toon-LoRA

[ Stranger Zone Collection ] : https://huggingface.co/collections/prithivMLmods/stranger-zone-collections-6737118adcf2cb40d66d0c7e

[ Flux Collection ] : prithivMLmods/flux-lora-collections-66dd5908be2206cfaa8519be

[ Flux Mix ] : prithivMLmods/Midjourney-Flux

@prithivMLmods
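
These repos are LoRA adapters for FLUX text-to-image models. Below is a minimal diffusers sketch for trying one of them; the base checkpoint (black-forest-labs/FLUX.1-dev), the prompt, and the generation settings are assumptions for illustration, not details taken from the post.

```python
import torch
from diffusers import FluxPipeline

# Assumed base checkpoint; the post does not state which FLUX variant the LoRAs target.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("prithivMLmods/Flux-Long-Toon-LoRA")  # one of the listed adapters
pipe.to("cuda")

# Placeholder prompt; check the adapter's model card for its trigger words.
image = pipe(
    "long toon style, a cheerful robot waving",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("long_toon.png")
```

Swapping styles is then just a matter of calling pipe.unload_lora_weights() and loading a different repo from the list above.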
reacted to reach-vb's post with 🔥 about 1 month ago
What a brilliant week for Open Source AI!

Qwen 2.5 Coder by Alibaba - 0.5B / 1.5B / 3B / 7B / 14B / 32B (Base + Instruct) code generation LLMs, with the 32B tackling giants like Gemini 1.5 Pro and Claude Sonnet (a minimal loading sketch follows after this post)
Qwen/qwen25-coder-66eaa22e6f99801bf65b0c2f

LLM2CLIP from Microsoft - Leverage LLMs to train ultra-powerful CLIP models! Boosts performance over the previous SOTA by ~17%
microsoft/llm2clip-672323a266173cfa40b32d4c

Athene v2 Chat & Agent by NexusFlow - SoTA general LLM fine-tuned from Qwen 2.5 72B that excels at Chat + Function Calling / JSON / Agents
Nexusflow/athene-v2-6735b85e505981a794fb02cc

Orca Agent Instruct by Microsoft - 1 million instruct pairs covering text editing, creative writing, coding, reading comprehension, etc - permissively licensed
microsoft/orca-agentinstruct-1M-v1

Ultravox by FixieAI - 70B / 8B models approaching GPT-4o level; pick any LLM and train an adapter with Whisper as the audio encoder
reach-vb/ultravox-audio-language-model-release-67373b602af0a52b2a88ae71

JanusFlow 1.3B by DeepSeek - Next iteration of their unified multimodal LLM Janus with Rectified Flow
deepseek-ai/JanusFlow-1.3B

Common Corpus by PleIAs - 2,003,039,184,047 multilingual, commercially permissive, high-quality tokens!
PleIAs/common_corpus

I'm sure I missed a lot, can't wait for the next week!

Put down in comments what I missed! 🤗
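
As a quick illustration of the first item above, here is a minimal transformers sketch for prompting one of the Qwen 2.5 Coder instruct checkpoints. The 7B repo ID, the prompt, and the generation settings are assumptions chosen for the example, not details from the post.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-7B-Instruct"  # assumed: one of the listed sizes
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Placeholder coding prompt.
messages = [
    {"role": "user", "content": "Write a Python function that merges two sorted lists."}
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```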
reacted to fdaudens's post with 🚀 about 1 month ago
🚀 @Qwen just dropped 2.5-Turbo!

1M token context (that's the entire "War and Peace"!) + 4.3x faster processing speed. Same price, way more power 🔥

Check out the demo: Qwen/Qwen2.5-Turbo-1M-Demo

#QWEN
reacted to fffiloni's post with 🔥 about 1 month ago
upvoted an article about 2 months ago
Accelerating LLM Inference: Fast Sampling with Gumbel-Max Trick

By cxdu
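
For context, the Gumbel-Max trick samples from a categorical distribution by adding independent Gumbel(0, 1) noise to the logits and taking the argmax, which is equivalent to drawing from softmax(logits). A minimal NumPy sketch of the idea (illustrative only, not code from the article):

```python
import numpy as np

def gumbel_max_sample(logits: np.ndarray, rng: np.random.Generator) -> int:
    """Sample an index from Categorical(softmax(logits)) via the Gumbel-Max trick."""
    # Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1).
    u = rng.uniform(low=1e-12, high=1.0, size=logits.shape)
    gumbel_noise = -np.log(-np.log(u))
    # argmax(logits + Gumbel noise) is distributed as softmax(logits).
    return int(np.argmax(logits + gumbel_noise))

rng = np.random.default_rng(0)
logits = np.array([2.0, 0.5, -1.0, 0.0])
samples = [gumbel_max_sample(logits, rng) for _ in range(10_000)]
# Empirical frequencies should approximate softmax(logits).
print(np.bincount(samples, minlength=len(logits)) / len(samples))
```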
liked a Space 6 months ago