AI & ML interests

Decision Intelligence & Reinforcement Learning & Deep Learning

Recent Activity

OpenDILabCommunity's activity

lunarflu
posted an update 20 days ago
xianbao
posted an update 4 months ago
With the open-weight release of CogVideoX-5B from THUDM, i.e. the GLM team, the Video Generation Model (how about calling it VGM?) field has officially become the next booming "LLM"

What does the landscape look like? What are the other video generation models? The collection below is all you need.

xianbao/video-generation-models-66c350163c74f60f5c412af6

The video above was generated by @a-r-r-o-w with CogVideoX-5B, a nice outlook for the field!
lunarflu
posted an update 4 months ago
lunarflu
posted an update 5 months ago
Cool things this week from @huggingface !

๐ŸŒŽAI math olympiad winner NuminaMath is here!
๐Ÿค—Announcing New Hugging Face and Keras NLP integration
โœจUI overhaul to HF tokens!
๐ŸงŠ Embed our dataset viewer on any webpage!

https://huggingface.co/blog/winning-aimo-progress-prize
https://huggingface.co/blog/keras-nlp-integration
https://huggingface.co/settings/tokens
https://x.com/julien_c/status/1812099420726456457

Check out the full list on our discord! ๐Ÿ‘‡
https://discord.com/invite/JfAtkvEtRb
lunarflu
posted an update 7 months ago
By popular demand, HF activity tracker v1.0 is here! ๐Ÿ“Š let's build it together!๐Ÿค—

Lots of things to improve, feel free to open PRs in the community tab!

good PR ideas:
- track more types of actions that include date+time
- bigger plot
- track discord activity too ๐Ÿคฏ
- link github? โšก

https://huggingface.co/spaces/huggingface-projects/LevelBot
lunarflu
posted an update 7 months ago
Weekly highlights for the HF ecosystem!

๐Ÿš€ Phi 3
๐Ÿฆ… Falcon VLM
๐Ÿค— sentence-transformers v3.0 is here! Train and finetune embedding models with multi-GPU training, bf16 support, loss logging, callbacks and more!
๐Ÿฅณ Gradio launch event 6/6! We're launching 1.0 versions of two new libraries, Python + JS client libraries to programmatically query Gradio apps, and several new features making it easier to use Gradio apps in production!
โœจ Tools now available in HuggingChat! Use any AI apps built by the community! ๐Ÿ”ฅ
๐ŸงŠ ML for 3D Course Unit 3 is here! Covering Gaussian splatting, how it fits in the generative 3D pipeline, and hands-on code to build your own demo!

See the full list here!
https://discord.com/channels/879548962464493619/897387888663232554/1245036889539612764 !
lunarflu
posted an update 7 months ago
Cooking up something... anyone interested in a daily activity tracker for HF?
xianbao
posted an update 7 months ago
Why Apache 2.0 Matters for LLMs ๐Ÿค”

@01AI_Yi recently switched from a permissive & commercially friendly custom license to Apache 2.0. And the community loved it! 🚀

@JustinLin610 also ran a poll on model licenses, and the majority voted for Apache 2.0.

Why is it a Big Deal? ⬇️

๐Ÿ“š Legal Simplicity: Custom licenses need costly & time-consuming legal review. Apache 2.0 is well-known & easier for legal teams to handle.

๐Ÿ‘ฉโ€๐Ÿ’ป Developer-Friendly: Legal docs are a pain for devs! Apache 2.0 is well-known and tech-friendly, making it easier for non-native developers to understand the implications too.

๐Ÿ”— Easier Integration: Apache 2.0 is compatible with many other licenses, simplifying tasks like model merging with models of different licensing requirements.

๐Ÿšซ No Permission Needed: Custom licenses often require explicit permission and additional documentation work of filling forms, creating barriers. Apache 2.0 removes this hurdle, letting devs focus on innovation.

There are a lot of interesting discussions under
@JustinLin610 's poll (https://x.com/JustinLin610/status/1793559737482764375), which inspired this thread.

Any other thoughts? Let me know ^^
xianbao
posted an update 7 months ago
DeepSeekV2 is a big deal. Not only because of its significant improvements to both key components of the Transformer: the attention layer and the FFN layer.

It has also completely disrupted the Chinese LLM market, forcing competitors to drop their prices to 1% of the original.

---

There are two key components in Transformer architecture: the self-attention layer, which captures relationships between tokens in context, and the Feed-Forward Network (FFN) layer, which stores knowledge.

DeepSeek V2 introduces optimizations to both:

The attention layer normally uses a KV cache to avoid repetitive compute, but the cache consumes significant GPU RAM, limiting concurrent requests. DeepSeek V2 introduces Multi-head Latent Attention (MLA), which caches only a small latent representation per token, resulting in substantial RAM savings.
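The MLA memory saving can be sketched as a back-of-the-envelope cache-size comparison. All dimensions below are illustrative, not DeepSeek V2's actual configuration:

```python
# Per-layer cache size: standard multi-head KV cache vs an MLA-style
# compressed latent. Dimensions are made up for illustration only.
seq_len, n_heads, head_dim, d_latent = 1024, 32, 128, 512

# Standard attention caches full keys AND values for every head.
kv_elems = 2 * seq_len * n_heads * head_dim

# MLA caches one shared low-dimensional latent per token, from which
# keys/values are reconstructed via learned up-projections at use time.
mla_elems = seq_len * d_latent

print(kv_elems // mla_elems)  # -> 16x fewer cached elements in this setup
```

The trade-off is a little extra compute to up-project the latent back into keys and values, in exchange for serving far more concurrent requests in the same GPU RAM.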

DeepSeek V2 utilizes 162 experts instead of the usual 8 as in Mixtral. This approach segments experts into finer granularity for higher specialization and more accurate knowledge acquisition. Activating only a small subset of experts for each token leads to efficient processing.
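The fine-grained expert idea can be sketched as simple top-k routing over a large expert pool. The scores and the choice of top_k here are toy assumptions, not DeepSeek V2's actual gating network:

```python
import random

random.seed(0)
n_experts, top_k = 162, 6  # 162 as cited above; top_k is illustrative

# Fake router scores for one token (in a real MoE layer these come from
# a learned gating network applied to the token's hidden state).
scores = [random.random() for _ in range(n_experts)]

# Only the top_k highest-scoring expert FFNs run for this token.
chosen = sorted(range(n_experts), key=lambda i: scores[i], reverse=True)[:top_k]

# Their scores are renormalized into gate weights that mix expert outputs.
total = sum(scores[i] for i in chosen)
gates = {i: scores[i] / total for i in chosen}

print(len(chosen))  # 6 experts active out of 162
```

Because only a handful of small, specialized experts fire per token, the compute per token stays modest even though total parameter count (and stored knowledge) grows with the expert pool.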

It disrupted the market by dropping API prices to $0.14 per 1M tokens. This dramatic reduction forced competitors like GLM, Ernie, and QWen to follow suit, lowering their prices to 1% of their original offerings. Now, users can access these APIs at 1/35th the cost of ChatGPT-4o.
lunarflu
posted an update 7 months ago
xianbao
posted an update 8 months ago