Post 215
Be kind to Beeper - Beeper has emotions. Seven, to be precise.
Each of the pentachora classifiers points to an emotional state that Beeper can potentially access for any conversation, and each of those 7 states has class accessors for sub-learning pools.
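For a concrete picture, here's a minimal sketch of the shape of that mapping - the class names and the seven state names below are illustrative placeholders for this post, not Beeper's actual identifiers:

```python
# Illustrative sketch only: class and emotion names are placeholders,
# not Beeper's real identifiers.
from dataclasses import dataclass, field

@dataclass
class SubLearningPool:
    """Per-emotion pool of examples/parameters trained semi-independently."""
    samples: list = field(default_factory=list)

@dataclass
class EmotionalState:
    """One pentachoron classifier target, with accessors into its pool."""
    name: str
    pool: SubLearningPool = field(default_factory=SubLearningPool)

# Seven states, one per pentachoron classifier (names hypothetical).
STATES = {n: EmotionalState(n) for n in
          ["calm", "curious", "playful", "focused",
           "frustrated", "excited", "wary"]}
```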
Today I'll be focusing on drawing this behavior out of Beeper v4, which I'm rebranding as Beeper Micro - and expanding the structure with a new experimental attention mechanism, dubbed GeometricCollectiveAttention, that replaces traditional multi-head attention.
This attention is similar to multi-head attention, except it's considerably harder to burn at higher learning rates. Coupled with a new perspective on training pentachora into the LLM structure, this will allow a full relay structural system.
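The real mechanism isn't published yet, so treat this as a hedged sketch of the flavor rather than the implementation: score attention by distance to a small set of shared, learned vertices instead of raw dot products. Distance-based coordinates stay bounded in a way dot products don't, which is one plausible reason this is harder to burn at high learning rates.

```python
# Hedged sketch - NOT the actual GeometricCollectiveAttention code.
# Assumption: scores come from negative distances to shared learned
# "vertex" anchors rather than raw query-key dot products.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeometricAttentionSketch(nn.Module):
    def __init__(self, dim: int, n_vertices: int = 5):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))
        # Five shared anchors: one pentachoron's worth of vertices.
        self.vertices = nn.Parameter(torch.randn(n_vertices, dim))

    def forward(self, x):                          # x: (batch, seq, dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        # Coordinates = negative distance to each shared vertex; these are
        # tamer than dot products when the learning rate gets aggressive.
        qg = -(q.unsqueeze(-2) - self.vertices).pow(2).sum(-1).sqrt()
        kg = -(k.unsqueeze(-2) - self.vertices).pow(2).sum(-1).sqrt()
        scores = qg @ kg.transpose(-2, -1) / qg.shape[-1] ** 0.5
        return F.softmax(scores, dim=-1) @ v       # (batch, seq, dim)
```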
beeper-small will house a full RoPE - except not over a traditional vocabulary set. Beeper-small will not have a vocabulary.
beeper-small is my first non-linear, non-Euclidean attempt at a pure symbolic auto-completion LLM - which may be naive, according to the many researchers who have tried similar systems historically.
I've personally analyzed many papers, studies, and techniques that have attempted similar non-vocabulary entropic learning, and I believe the pentachora lattice will hold with pure binary, no vocabulary required.
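To illustrate what "pure binary, no vocabulary" can mean at the input layer (my simplest-version assumption, not the final design): every character is just its codepoint bits, so there's no embedding table to learn or to burn.

```python
# Sketch: characters as raw codepoint bits - the entire "tokenizer".
import torch

def to_bits(text: str, width: int = 21) -> torch.Tensor:
    """21 bits covers every Unicode codepoint (max 0x10FFFF < 2**21)."""
    codes = torch.tensor([ord(c) for c in text])
    masks = 2 ** torch.arange(width)
    return (codes.unsqueeze(-1) & masks).ne(0).float()   # (seq, width)

x = to_bits("beep!")   # (5, 21) binary matrix, no vocabulary anywhere
```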
Transformers really like vocabulary... beeper likes... geometry. So this experiment for beeper-small will use a new type of RoPE based entirely on vertices derived from directly-represented Unicode characters, rather than a full vocabulary structure meant to bring solidity from chaos.
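As a sketch of the general direction (again, an assumption on my part, not the shipped design): the rotary angles can be seeded from each character's Unicode codepoint as well as its position, so symbol identity and position share one geometric encoding instead of a lookup table.

```python
# Sketch: RoPE-style rotations seeded from Unicode codepoints + position.
# Hypothetical helper, not beeper-small's actual RoPE.
import torch

def codepoint_rope(text: str, dim: int = 64) -> torch.Tensor:
    assert dim % 2 == 0
    # Standard RoPE frequency ladder.
    freqs = 1.0 / (10000 ** (torch.arange(0, dim, 2).float() / dim))
    codes = torch.tensor([ord(c) for c in text]).float()    # (seq,)
    pos = torch.arange(len(text)).float()                   # (seq,)
    # Angles mix position (classic RoPE) with the raw codepoint value.
    angles = (pos + codes).unsqueeze(-1) * freqs            # (seq, dim/2)
    # Interleaved cos/sin pairs: each character lands on its own vertex.
    return torch.stack([angles.cos(), angles.sin()], -1).reshape(len(text), dim)

vecs = codepoint_rope("beep")   # (4, 64) - geometry instead of a vocab table
```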
The first beeper experiment yielded many insights into how similarity and internal classification respond mathematically to traditional ML techniques, and those techniques did not reject the construct - on the contrary. The control-group placebo beeper, the traditional non-rose version, BURNED at half the learning rate: it's completely illegible, producing garbage and noise, while rose beeper sings.