---
license: llama2
model_creator: posicube
model_name: Llama2 Chat AYT 13B
model_type: llama
prompt_template: '{prompt}

'
quantized_by: TheBloke
---

Multiple GPTQ parameter permutations are provided; see Provided Files below for details.

<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama2-Chat-AYT-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama2-Chat-AYT-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama2-Chat-AYT-13B-GGUF)
* [posicube's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/posicube/Llama2-chat-AYT-13B)
```

<!-- prompt-template end -->

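Since the template is just the bare `{prompt}`, applying it is plain string substitution. A minimal sketch (the helper name and the trailing newline after the prompt are my assumptions, not part of the original card):

```python
# The model's prompt template is simply the user message itself:
# no system preamble or role tags are required.
# NOTE: the trailing newline here is an assumption based on the
# front matter's '{prompt}' scalar, not something the card states.
PROMPT_TEMPLATE = "{prompt}\n"

def build_prompt(user_message: str) -> str:
    """Substitute the user's message into the bare template."""
    return PROMPT_TEMPLATE.format(prompt=user_message)

print(build_prompt("What is AYT 13B?"))
```

The formatted string can then be passed directly to whichever inference backend (GPTQ, GGUF or AWQ) you load the model with.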
<!-- licensing start -->
## Licensing

The creator of the source model has listed its license as `llama2`, and this quantization has therefore used that same license.

As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing, but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.

In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [posicube's Llama2 Chat AYT 13B](https://huggingface.co/posicube/Llama2-chat-AYT-13B).
<!-- licensing end -->

<!-- README_GPTQ.md-provided-files start -->
## Provided files and GPTQ parameters

Donaters will get priority support on any and all AI/LLM/model questions and requests.

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov

Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.

# Original model card: posicube's Llama2 Chat AYT 13B

This is a model diverged from Llama-2-13b-chat-hf. We hypothesize that if we find a method to effectively ensemble the top rankers in each benchmark, its performance will be maximized as well. Following this intuition, we ensembled the top models in each benchmark (ARC, MMLU and TruthfulQA) to create our model.

# Model Details
- **Developed by**: Posicube Inc.
- **Backbone Model**: LLaMA-2-13b-chat
- **Library**: HuggingFace Transformers
- **Used Dataset Details**: Orca-style datasets, Alpaca-style datasets

# Evaluation
Our model was the top-ranked 13B model on the leaderboard as of Sep 13th, 2023.

| Metric              | Scores on Leaderboard | Our results |
|---------------------|-----------------------|-------------|
| ARC (25-shot)       | 63.31                 | 63.57       |
| HellaSwag (10-shot) | 83.53                 | 83.77       |
| MMLU (5-shot)       | 59.67                 | 59.69       |
| TruthfulQA (0-shot) | 55.8                  | 55.48       |
| Avg.                | 65.58                 | 65.63       |

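As a quick arithmetic check on the table (an illustrative snippet, not part of the original card), the reported averages are simply the mean of the four per-benchmark scores:

```python
# Per-benchmark scores copied from the evaluation table above,
# in the order ARC, HellaSwag, MMLU, TruthfulQA.
leaderboard = [63.31, 83.53, 59.67, 55.8]
ours = [63.57, 83.77, 59.69, 55.48]

def average(scores):
    """Mean of the benchmark scores, rounded to two decimals."""
    return round(sum(scores) / len(scores), 2)

print(average(leaderboard))  # 65.58, matching the table's Avg. column
print(average(ours))         # 65.63
```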
# Limitations & Biases
Llama 2 and its fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of Llama 2 and any fine-tuned variant cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/

# License Disclaimer
This model is bound by the license & usage restrictions of the original Llama 2 model, and comes with no warranty or guarantees of any kind.

# Contact Us
[Posicube](https://www.posicube.com/)

# Citation
Please kindly cite using the following BibTeX:

```bibtex
@misc{mukherjee2023orca,
    title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
    author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
    year={2023},
    eprint={2306.02707},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

```bibtex
@software{touvron2023llama2,
    title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
    author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and
            Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and
            Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and
            Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and
            Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and
            Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and
            Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
    year={2023}
}
```