sometimesanotion committed
Commit dfe9f38 · verified · 1 Parent(s): 1cbe1e8

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -20,7 +20,7 @@ metrics:
 ![Lamarck.webp](https://huggingface.co/sometimesanotion/Lamarck-14B-v0.7/resolve/main/LamarckShades.webp)
 ---
 
-> [!TIP] With no benchmark regressions, mostly gains over the previous release, this version of Lamarck has [broken the 41.0 average](https://shorturl.at/jUqEk) maximum for 14B parameter models. Those providing feedback, thank you! As Lamarck v0.7 has two varieties of chain-of-thought in its ancestry, it has both high reasoning potential for its class, and some volatility in step-by-step use cases. For those needing more stability with <think> tags, [Lamarck 0.6](https://huggingface.co/sometimesanotion/Lamarck-14B-v0.6) uses CoT more sparingly, and Chocolatine is gratifyingly stable.
+> [!TIP] With no benchmark regressions, mostly gains over the previous release, this version of Lamarck has [broken the 41.0 average](https://shorturl.at/jUqEk) maximum for 14B parameter models. Those providing feedback, thank you!
 
 Lamarck 14B v0.7: A generalist merge with emphasis on multi-step reasoning, prose, and multi-language ability. The 14B parameter model class has a lot of strong performers, and Lamarck strives to be well-rounded and solid: ![14b.png](https://huggingface.co/sometimesanotion/Lamarck-14B-v0.7/resolve/main/14b.png)
 
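
For readers landing on this commit, a minimal sketch of loading the model the updated README describes, assuming the standard Hugging Face `transformers` causal-LM API. The repo ID is taken from the image URLs in the diff above; the dtype, device settings, generation length, and prompt are illustrative assumptions, not values from the model card.

```python
# Minimal sketch, assuming standard transformers + accelerate usage.
# Repo ID comes from the URLs in the README diff; other settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sometimesanotion/Lamarck-14B-v0.7"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: adjust to your hardware
    device_map="auto",
)

prompt = "Explain, step by step, why the sky appears blue."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```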