Upload README.md
README.md CHANGED
@@ -371,7 +371,7 @@ Donaters will get priority support on any and all AI/LLM/model questions and req

**Special thanks to**: Aemon Algiz.

-**Patreon special mentions**:
+**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros


Thank you to all my generous patrons and donaters!
@@ -394,9 +394,10 @@ text, strengthening its multilingual capabilities while retaining (and partiall
This was then further finetuned on a combination of some of the most popular open-source instruction sets.
DiscoLM 70b is a [DiscoResearch](https://huggingface.co/DiscoResearch) project and was trained by [Björn Plüster](https://huggingface.co/bjoernp).

-The model was trained with compute provided by [HessianAI](https://hessian.ai/) - we are very grateful for their support; please check out their website and projects!
+The model was trained with compute provided by [HessianAI](https://hessian.ai/) in collaboration with [LAION](https://laion.ai) - we are very grateful for their support; please check out their website and projects!

<img src="https://hessian.ai/wp-content/themes/hessianai/img/hessian-ai-logo.svg" width="120">
+<img src="https://avatars.githubusercontent.com/u/92627801?s=200&v=4" width="120">

## Table of Contents

@@ -413,14 +414,14 @@ The model was trained with compute provided by [HessianAI](https://hessian.ai/)

| Huggingface | GPTQ | GGUF | AWQ | *Base Model* |
|-------|-------|-------|-------|-------|
-| [Link](https://huggingface.co/DiscoResearch/DiscoLM-70b) |
+| [Link](https://huggingface.co/DiscoResearch/DiscoLM-70b) | [@TheBloke](https://huggingface.co/TheBloke/DiscoLM-70B-GPTQ) | [@TheBloke](https://huggingface.co/TheBloke/DiscoLM-70B-GGUF) | [@TheBloke](https://huggingface.co/TheBloke/DiscoLM-70B-AWQ) | [LeoLM 70b](https://huggingface.co/LeoLM/leo-hessianai-70b) |

## Benchmarks

### Huggingface Leaderboard

This model is still an early alpha and we can't guarantee that there isn't any contamination.
-However, the average of **71.24** would earn the #
+However, the average of **71.24** would earn the #3 spot on the HF leaderboard at the time of writing.

| Metric | Value |
|-----------------------|-------|
@@ -513,7 +514,7 @@ Many thanks for all dataset providers/curators!

## Contact

-Best way to reach us is on our [Discord](https://discord.gg/
+Best way to reach us is on our [Discord](https://discord.gg/S8W8B5nz3v).

## About DiscoResearch

@@ -523,7 +524,7 @@ DiscoResearch is an aspiring open research community. Disco should be a place wh

Disco 70b is a [DiscoResearch](https://huggingface.co/DiscoResearch) project and was trained by [Björn Plüster](https://huggingface.co/bjoernp). [Jan Harries](https://huggingface.co/jphme) helped with technical advice, logistics and the Model Card.
[AutoMeta](https://huggingface.co/Alignment-Lab-AI) also provided helpful technical advice and rounded up his connections to select a set of high-quality datasets.
-The model was trained with compute provided by [HessianAI](https://hessian.ai/) - many thanks in particular to [Patrick Schramowski](https://huggingface.co/PSaiml) for his support.
+The model was trained with compute provided by [HessianAI](https://hessian.ai/) in collaboration with [LAION](https://laion.ai) - many thanks in particular to [Patrick Schramowski](https://huggingface.co/PSaiml) for his support.

We are standing on the shoulders of giants; many thanks in no particular order to [Laion](https://laion.ai) for LeoLM 70b
(especially to [Christoph Schuhmann](https://laion.ai) who got us all connected),