
Yazan Agha-Schrader PRO

phi0112358

AI & ML interests

Brain, EEG, BCI, consciousness, autism, octopus, automation, a.i., etymology, numbers, spirituality, astronomy

Recent Activity

liked a model 2 days ago
Qwen/Qwen2.5-Coder-32B-Instruct
liked a model 9 days ago
meta-llama/Meta-Llama-3-8B-Instruct
liked a model 27 days ago
unsloth/QwQ-32B-Preview-GGUF

Organizations

a.i.NEURONS, mount AI in

phi0112358's activity

reacted to m-ric's post with 👍 3 months ago
๐Ÿ“œ ๐Ž๐ฅ๐-๐ฌ๐œ๐ก๐จ๐จ๐ฅ ๐‘๐๐๐ฌ ๐œ๐š๐ง ๐š๐œ๐ญ๐ฎ๐š๐ฅ๐ฅ๐ฒ ๐ซ๐ข๐ฏ๐š๐ฅ ๐Ÿ๐š๐ง๐œ๐ฒ ๐ญ๐ซ๐š๐ง๐ฌ๐Ÿ๐จ๐ซ๐ฆ๐ž๐ซ๐ฌ!

Researchers from Mila and Borealis AI have just shown that simplified versions of good old Recurrent Neural Networks (RNNs) can match the performance of today's transformers.

They took a fresh look at LSTMs (from 1997!) and GRUs (from 2014). They stripped these models down to their bare essentials, creating "minLSTM" and "minGRU". The key changes:
โถ Removed dependencies on previous hidden states in the gates
โท Dropped the tanh that had been added to restrict output range in order to avoid vanishing gradients
โธ Ensured outputs are time-independent in scale (not sure I understood that well either, don't worry)

โšก๏ธ As a result, you can use a โ€œparallel scanโ€ algorithm to train these new, minimal RNNs, in parallel, taking 88% more memory but also making them 200x faster than their traditional counterparts for long sequences

🔥 The results are mind-blowing! Performance-wise, they go toe-to-toe with Transformers or Mamba.

And for Language Modeling, they need 2.5x fewer training steps than Transformers to reach the same performance! 🚀

🤔 Why does this matter?

By showing there are simpler models with similar performance to transformers, this challenges the narrative that we need advanced architectures for better performance!

💬 François Chollet wrote in a tweet about this paper:

“The fact that there are many recent architectures coming from different directions that roughly match Transformers is proof that architectures aren't fundamentally important in the curve-fitting paradigm (aka deep learning)”

“Curve-fitting is about embedding a dataset on a curve. The critical factor is the dataset, not the specific hard-coded bells and whistles that constrain the curve's shape.”

It's the Bitter Lesson by Rich Sutton striking again: you don't need fancy thinking architectures, just scale up your model and data!

Read the paper 👉 Were RNNs All We Needed? (2410.01201)
liked a Space 3 months ago
New activity in mattshumer/Reflection-Llama-3.1-70B 4 months ago:
DLETE THIS MODEL (#76, opened 4 months ago by MaziyarPanahi)
New activity in mattshumer/ref_70_e3 4 months ago
New activity in mattshumer/Reflection-Llama-3.1-70B 4 months ago
New activity in deepseek-ai/DeepSeek-V2.5 4 months ago:
DeepSeek-Coder-V2.5-Lite (#3, opened 4 months ago by smcleod)