Dataset schema:
- modelId: string (lengths 5–122)
- author: string (lengths 2–42)
- last_modified: unknown
- downloads: int64 (0–738M)
- likes: int64 (0–11k)
- library_name: stringclasses (245 values)
- tags: sequence (lengths 1–4.05k)
- pipeline_tag: stringclasses (48 values)
- createdAt: unknown
- card: string (lengths 1–901k)
migtissera/Synthia-70B
migtissera
"2023-11-17T21:32:19Z"
1,655
11
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "arxiv:2306.02707", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-22T19:51:32Z"
---
license: llama2
pipeline_tag: text-generation
language:
- en
library_name: transformers
---

# Synthia-70B

SynthIA (Synthetic Intelligent Agent) is a Llama-2-70B model trained on Orca-style datasets. It has been fine-tuned for instruction following as well as long-form conversations.

<br>

![Synthia](https://huggingface.co/migtissera/Synthia-70B/resolve/main/Synthia.jpeg)

<br>

#### License Disclaimer:

This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.

<br>

## Evaluation

We evaluated Synthia-70B on a wide range of tasks using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.

Here are the results on the metrics used by the [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard):

|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|0.6945|
|*hellaswag*|acc_norm|0.8711|
|*mmlu*|acc_norm|0.6891|
|*truthfulqa_mc*|mc2|0.5979|
|**Total Average**|-|**0.7132**|

<br>

## Example Usage

### Prompt format:

```
SYSTEM: You are Synthia. As an AI intelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.
USER: How is a rocket launched from the surface of the earth to Low Earth Orbit?
ASSISTANT:
```

### Below is a code example showing how to use this model:

```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Synthia-70B"
output_file_path = "./Synthia-70B-conversations.jsonl"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)


def generate_text(instruction):
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    answer = string.split("USER:")[0].strip()
    return f"{answer}"


conversation = f"SYSTEM: As an AI superintelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually."


while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"
    json_data = {"prompt": user_input, "answer": answer}

    ## Save your conversation
    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")
```

<br>

#### Limitations & Biases:

While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.

Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.

Exercise caution and cross-check information when necessary. This is an uncensored model.
<br>

### Citation:

Please kindly cite using the following BibTeX:

```
@misc{Synthia-70B,
  author = {Migel Tissera},
  title = {Synthia-70B: Synthetic Intelligent Agent},
  year = {2023},
  publisher = {GitHub, HuggingFace},
  journal = {GitHub repository, HuggingFace repository},
  howpublished = {\url{https://huggingface.co/migtissera/Synthia-70B}},
}
```

```
@misc{mukherjee2023orca,
  title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
  author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
  year={2023},
  eprint={2306.02707},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

```
@software{touvron2023llama,
  title={LLaMA2: Open and Efficient Foundation Language Models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_migtissera__Synthia-70B)

| Metric                | Value |
|-----------------------|-------|
| Avg.                  | 60.29 |
| ARC (25-shot)         | 69.45 |
| HellaSwag (10-shot)   | 87.11 |
| MMLU (5-shot)         | 68.91 |
| TruthfulQA (0-shot)   | 59.79 |
| Winogrande (5-shot)   | 83.66 |
| GSM8K (5-shot)        | 31.39 |
| DROP (3-shot)         | 21.75 |
Danielbrdz/CodeBarcenas-7b
Danielbrdz
"2023-09-03T22:50:29Z"
1,655
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-03T22:10:59Z"
---
license: llama2
language:
- en
---

CodeBarcenas: a model specialized in the Python language.

Based on the model: WizardLM/WizardCoder-Python-7B-V1.0

Trained with the dataset: mlabonne/Evol-Instruct-Python-26k

Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽
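Since the card ships no usage snippet, here is a minimal sketch of running the model with the `transformers` library; the prompt and generation settings are illustrative assumptions, not part of the original card:

```python
# Minimal usage sketch (assumption: standard transformers text-generation API;
# the prompt and sampling settings are illustrative, not from the card).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Danielbrdz/CodeBarcenas-7b")
model = AutoModelForCausalLM.from_pretrained("Danielbrdz/CodeBarcenas-7b", device_map="auto")

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```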
Undi95/ReMM-L2-13B-PIPPA
Undi95
"2023-11-17T21:08:05Z"
1,655
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-04T03:38:27Z"
---
license: cc-by-nc-4.0
---

Re:MythoMax-PIPPA (ReMM-PIPPA) is a recreation trial of the original [MythoMax-L2-B13](https://huggingface.co/Gryphe/MythoMax-L2-13b) with updated models and a merge of the PIPPA dataset ([PIPPA-ShareGPT-Subset-QLora-13b](https://huggingface.co/zarakiquemparte/PIPPA-ShareGPT-Subset-QLora-13b)) at (0.18) weight.

Commands used and explanation:

```shell
Due to hardware limitations, some merges were done in 2 parts.

- Recreate ReML: Mythologic (v2) (Chronos/Hermes/Airoboros)
=> Replacing Chronos with The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16 (0.30)
=> Replacing Airoboros with jondurbin/airoboros-l2-13b-2.1 (last version) (0.40)
=> Keeping NousResearch/Nous-Hermes-Llama2-13b (0.30)

Part 1:
python ties_merge.py TheBloke/Llama-2-13B-fp16 ./ReML-L2-13B-part1 --merge The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16 --density 0.42 --merge jondurbin/airoboros-l2-13b-2.1 --density 0.56 --cuda

Part 2:
python ties_merge.py TheBloke/Llama-2-13B-fp16 ./ReML-L2-13B --merge NousResearch/Nous-Hermes-Llama2-13b --density 0.30 --merge Undi95/ReML-L2-13B-part1 --density 0.70 --cuda

With that:

- Recreate ReMM: MythoMax (v2) (Mythologic/Huginn v1)
=> Replacing Mythologic with the one above (0.5)
=> Replacing Huginn with The-Face-Of-Goonery/Huginn-13b-v1.2 (hottest) (0.5)

Part 3:
python ties_merge.py TheBloke/Llama-2-13B-fp16 ./ReMM-L2-13B --merge Undi95/ReML-L2-13B --density 0.50 --merge The-Face-Of-Goonery/Huginn-13b-v1.2 --density 0.50 --cuda

Part 4:
Undi95/ReMM-L2-13B (0.82) + zarakiquemparte/PIPPA-ShareGPT-Subset-QLora-13b (0.18) = ReMM-L2-13B-PIPPA
```

<!-- description start -->
## Description

This repo contains fp16 files of ReMM-PIPPA, a recreation of the original MythoMax, but updated and merged with the PIPPA dataset.
<!-- description end -->

## Models used

- TheBloke/Llama-2-13B-fp16 (base)
- The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16
- jondurbin/airoboros-l2-13b-2.1
- NousResearch/Nous-Hermes-Llama2-13b
- The-Face-Of-Goonery/Huginn-13b-v1.2
- ReML-L2-13B (private recreation trial of an updated Mythologic-L2-13B)

## Loras used

- zarakiquemparte/PIPPA-ShareGPT-Subset-QLora-13b

<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```

Special thanks to Sushi kek

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__ReMM-L2-13B-PIPPA)

| Metric                | Value |
|-----------------------|-------|
| Avg.                  | 52.58 |
| ARC (25-shot)         | 59.73 |
| HellaSwag (10-shot)   | 83.12 |
| MMLU (5-shot)         | 54.1  |
| TruthfulQA (0-shot)   | 49.94 |
| Winogrande (5-shot)   | 74.51 |
| GSM8K (5-shot)        | 2.96  |
| DROP (3-shot)         | 43.69 |
FelixChao/CodeLlama13B-Finetune-v1
FelixChao
"2023-09-14T03:30:37Z"
1,655
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-13T17:09:07Z"
Entry not found
glaiveai/glaive-coder-7b
glaiveai
"2023-09-21T19:35:50Z"
1,655
53
transformers
[ "transformers", "pytorch", "llama", "text-generation", "code", "en", "dataset:glaiveai/glaive-code-assistant", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-17T14:49:44Z"
---
license: llama2
datasets:
- glaiveai/glaive-code-assistant
language:
- en
tags:
- code
---

# Glaive-coder-7b

Glaive-coder-7b is a 7B parameter code model trained on a dataset of ~140k programming-related problems and solutions generated from Glaive's synthetic data generation platform.

The model is fine-tuned on the CodeLlama-7b model.

## Usage:

The model is trained to act as a code assistant, and can do both single instruction following and multi-turn conversations. It follows the same prompt format as CodeLlama-7b-Instruct:

```
<s>[INST]
<<SYS>>
{{ system_prompt }}
<</SYS>>

{{ user_msg }} [/INST] {{ model_answer }} </s>
<s>[INST] {{ user_msg }} [/INST]
```

You can run the model in the following way:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("glaiveai/glaive-coder-7b")
model = AutoModelForCausalLM.from_pretrained("glaiveai/glaive-coder-7b").half().cuda()

def fmt_prompt(prompt):
    return f"<s> [INST] {prompt} [/INST]"

prompt = "Write a function to check whether a number is prime."  # illustrative example prompt

inputs = tokenizer(fmt_prompt(prompt), return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, do_sample=True, temperature=0.1, top_p=0.95, max_new_tokens=100)

print(tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=False))
```

## Benchmarks:

The model achieves a 63.1% pass@1 on HumanEval and a 45.2% pass@1 on MBPP. However, these benchmarks are not representative of real-world usage of code models, so we are launching the [Code Models Arena](https://arena.glaive.ai/) to let users vote on model outputs, giving us a better understanding of user preference on code models and helping us come up with new and better benchmarks. We plan to release the Arena results as soon as we have a sufficient amount of data.

Join the Glaive [discord](https://discord.gg/fjQ4uf3yWD) for improvement suggestions, bug reports and collaborating on more open-source projects.
yec019/fbopt-350m-8bit
yec019
"2023-11-28T18:45:45Z"
1,655
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "license:unknown", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "bitsandbytes", "region:us" ]
text-generation
"2023-11-28T18:33:55Z"
---
license: unknown
---

This is a model for testing the effect of quantization, part of the UCSD 23Fall 240D course project.

The base model is Facebook's pretrained opt-350m. We quantized it with the 8-bit quantization provided by bitsandbytes, a technique proposed in the LLM.int8() paper (Dettmers et al., 2022).
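For reference, a minimal sketch of loading the base model in 8-bit with `transformers` and `bitsandbytes`; the exact call shown is an assumption about how the quantization was applied, since the card itself includes no code:

```python
# Minimal sketch: load facebook/opt-350m in 8-bit via bitsandbytes.
# Assumption: this mirrors how the quantized checkpoint was produced.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

inputs = tokenizer("Quantization reduces memory by", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```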
iremmd/thy_model_31
iremmd
"2024-06-28T16:30:49Z"
1,655
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-28T16:23:44Z"
---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---

# Uploaded model

- **Developed by:** iremmd
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
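Since the repository ships GGUF weights, a minimal sketch of local inference with `llama-cpp-python`; the GGUF filename and settings are assumptions — check the repo's file list for the actual quant name:

```python
# Minimal sketch: run a GGUF quant locally with llama-cpp-python.
# Assumption: the exact .gguf filename must be taken from the repo's file list.
from llama_cpp import Llama

llm = Llama(model_path="thy_model_31.Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain what a GGUF file is.", max_tokens=128)
print(out["choices"][0]["text"])
```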
TehVenom/ChanMalion
TehVenom
"2023-05-10T14:19:11Z"
1,654
9
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-03-05T22:04:03Z"
GPT-J-4Chan merged 50/50 with Pygmalion-6b.
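A 50/50 merge of two same-architecture checkpoints is a plain parameter average; a minimal sketch follows (the model paths are placeholders and the code is an illustrative assumption, not the script actually used for ChanMalion):

```python
# Minimal sketch of a 50/50 weight merge between two same-architecture models.
# Illustrative only; "model_a"/"model_b" are placeholder paths.
import torch
from transformers import AutoModelForCausalLM

a = AutoModelForCausalLM.from_pretrained("model_a", torch_dtype=torch.float16)
b = AutoModelForCausalLM.from_pretrained("model_b", torch_dtype=torch.float16)

merged = a.state_dict()
for name, tensor in b.state_dict().items():
    merged[name] = (merged[name] + tensor) / 2  # element-wise mean of each parameter

a.load_state_dict(merged)
a.save_pretrained("./merged-model")
```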
dvruette/llama-13b-pretrained-sft-epoch-1
dvruette
"2023-04-04T21:17:28Z"
1,654
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-04-04T20:09:40Z"
https://wandb.ai/open-assistant/supervised-finetuning/runs/gljcl75b
dvruette/llama-13b-pretrained-dropout
dvruette
"2023-04-08T10:31:44Z"
1,654
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-04-05T19:38:35Z"
https://wandb.ai/open-assistant/supervised-finetuning/runs/i9gmn0dt Trained with residual dropout 0.1
Aeala/VicUnlocked-alpaca-30b
Aeala
"2023-05-17T17:00:32Z"
1,654
7
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-05-17T02:18:42Z"
## Model Info

Merge of my [VicUnlocked-alpaca-half-30b LoRA](https://huggingface.co/Aeala/VicUnlocked-alpaca-half-30b-LoRA).

**Important Note**: While this is trained on a cleaned ShareGPT dataset like Vicuna used, this was trained in the *Alpaca* format, so prompting should be something like:

```
### Instruction:
<prompt> (without the <>)

### Response:
```

## Benchmarks

- wikitext2: 4.372413635253906
- ptb-new: 24.69171714782715
- c4-new: 6.469308853149414

Results generated with GPTQ evals (not quantized) thanks to [Neko-Institute-of-Science](https://huggingface.co/Neko-Institute-of-Science)
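A minimal helper for assembling the Alpaca-format prompt described above (the function name is illustrative; only the template comes from the card):

```python
# Minimal sketch: build the Alpaca-style prompt this card expects.
def alpaca_prompt(instruction: str) -> str:
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

print(alpaca_prompt("Summarize the plot of Hamlet in two sentences."))
```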
LLMs/WizardLM-13B-V1.0
LLMs
"2023-06-11T23:41:09Z"
1,654
4
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-06-11T22:31:38Z"
--- license: gpl-3.0 ---
TheBloke/Nous-Hermes-13B-SuperHOT-8K-fp16
TheBloke
"2023-07-02T20:34:48Z"
1,654
4
transformers
[ "transformers", "pytorch", "llama", "text-generation", "custom_code", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-06-26T23:38:34Z"
---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
    <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<!-- header end -->

# NousResearch's Nous-Hermes-13B fp16

These are fp16 pytorch format model files for [NousResearch's Nous-Hermes-13B](https://huggingface.co/NousResearch/Nous-Hermes-13b) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test).

[Kaio Ken's SuperHOT 13b LoRA](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test) is merged onto the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`.

Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length.

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Nous-Hermes-13B-SuperHOT-8K-GPTQ)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Nous-Hermes-13B-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NousResearch/Nous-Hermes-13b)

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donors will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex, Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost, Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius, Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer, Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix, Nathan LeClaire.

Thank you to all my generous patrons and donors!
<!-- footer end -->

# Original model card: Kaio Ken's SuperHOT 8K

### SuperHOT Prototype 2 w/ 8K Context

This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).

Tests have shown that the model does indeed leverage the extended context at 8K.

You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**.

#### Looking for Merged & Quantized Models?

- 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors)
- 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors)

#### Training Details

I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model

# Original model card: NousResearch's Nous-Hermes-13B

# Model Card: Nous-Hermes-13b

## Model Description

Nous-Hermes-13b is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Karan4D leading the fine-tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors. The result is an enhanced Llama 13b model that rivals GPT-3.5-turbo in performance across a variety of tasks.

This model stands out for its long responses, low hallucination rate, and absence of OpenAI censorship mechanisms. The fine-tuning process was performed with a sequence length of 2000 on an 8x A100 80GB DGX machine for over 50 hours.

## Model Training

The model was trained almost entirely on synthetic GPT-4 outputs. This includes data from diverse sources such as GPTeacher, the general, roleplay v1&2, code instruct datasets, Nous Instruct & PDACTL (unpublished), CodeAlpaca, Evol_Instruct Uncensored, GPT4-LLM, and Unnatural Instructions.

Additional data inputs came from Camel-AI's Biology/Physics/Chemistry and Math Datasets, Airoboros' GPT-4 Dataset, and more from CodeAlpaca. The total volume of data encompassed over 300,000 instructions.
## Collaborators

The model fine-tuning and the datasets were a collaboration of efforts and resources between Teknium, Karan4D, Nous Research, Huemin Art, and Redmond AI.

A huge shoutout and acknowledgement goes to all the dataset creators who generously share their datasets openly. Special mention goes to @winglian, @erhartford, and @main_horse for assisting in some of the training issues.

Among the contributors of datasets, GPTeacher was made available by Teknium, Wizard LM by nlpxucan, and the Nous Research Instruct Dataset was provided by Karan4D and HueminArt. The GPT4-LLM and Unnatural Instructions were provided by Microsoft, the Airoboros dataset by jondurbin, the Camel-AI datasets are from Camel-AI, and the CodeAlpaca dataset by Sahil 2801. If anyone was left out, please open a thread in the community tab.

## Prompt Format

The model follows the Alpaca prompt format:

```
### Instruction:

### Response:
```

or

```
### Instruction:

### Input:

### Response:
```

## Resources for Applied Use Cases:

For an example of a back-and-forth chatbot using huggingface transformers and discord, check out: https://github.com/teknium1/alpaca-discord

For an example of a roleplaying discord bot, check out this: https://github.com/teknium1/alpaca-roleplay-discordbot

## Future Plans

The model is currently being uploaded in FP16 format, and there are plans to convert the model to GGML and GPTQ 4bit quantizations. The team is also working on a full benchmark, similar to what was done for GPT4-x-Vicuna. We will try to get the model included in GPT4All.

## Benchmark Results

```
|    Task     |Version| Metric |Value |   |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge|      0|acc     |0.4915|±  |0.0146|
|             |       |acc_norm|0.5085|±  |0.0146|
|arc_easy     |      0|acc     |0.7769|±  |0.0085|
|             |       |acc_norm|0.7424|±  |0.0090|
|boolq        |      1|acc     |0.7948|±  |0.0071|
|hellaswag    |      0|acc     |0.6143|±  |0.0049|
|             |       |acc_norm|0.8000|±  |0.0040|
|openbookqa   |      0|acc     |0.3560|±  |0.0214|
|             |       |acc_norm|0.4640|±  |0.0223|
|piqa         |      0|acc     |0.7965|±  |0.0094|
|             |       |acc_norm|0.7889|±  |0.0095|
|winogrande   |      0|acc     |0.7190|±  |0.0126|
```

These benchmarks currently have us at #1 on ARC-c, ARC-e, Hellaswag, and OpenBookQA, and 2nd place on Winogrande, comparing to GPT4all's benchmarking list.

## Model Usage

The model is available for download on Hugging Face. It is suitable for a wide range of language tasks, from generating creative text to understanding and following complex instructions.

Compute provided by our project sponsor Redmond AI, thank you!!
CobraMamba/mamba-gpt-3b-v3
CobraMamba
"2023-08-03T01:55:05Z"
1,654
19
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "gpt", "llm", "large language model", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-07-28T07:45:24Z"
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
inference: false
thumbnail: >-
  https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
license: apache-2.0
---

# Model Card

**The Best 3B Model! Surpassing dolly-v2-12b**

The best 3B model on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), with performance surpassing dolly-v2-12b.

| Metric                | Value |
|-----------------------|-------|
| MMLU (5-shot)         | 27.3  |
| ARC (25-shot)         | 41.7  |
| HellaSwag (10-shot)   | 71.1  |
| TruthfulQA (0-shot)   | 37.9  |
| Avg.                  | 44.5  |

We use the state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above.

The training code and data will be open-sourced later on GitHub (https://github.com/chi2liu/mamba-gpt-3b).

## Training Dataset

`mamba-gpt-3b-v3` is trained on multiple datasets:
- [Stanford Alpaca (en)](https://github.com/tatsu-lab/stanford_alpaca)
- [Open Assistant (multilingual)](https://huggingface.co/datasets/OpenAssistant/oasst1)
- [LIMA (en)](https://huggingface.co/datasets/GAIR/lima)
- [CodeAlpaca 20k (en)](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k)

## Summary

We have fine-tuned the OpenLLaMA model and surpassed the original model in multiple evaluation subtasks, making it currently the best-performing 3B model, with performance comparable to llama-7b.

- Base model: [openlm-research/open_llama_3b_v2](https://huggingface.co/openlm-research/open_llama_3b_v2)

## Usage

To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.

```bash
pip install transformers==4.29.2
pip install accelerate==0.19.0
pip install torch==2.0.0
```

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("CobraMamba/mamba-gpt-3b-v3")
model = AutoModelForCausalLM.from_pretrained("CobraMamba/mamba-gpt-3b-v3", trust_remote_code=True, torch_dtype=torch.float16)

input_context = "Your text here"
input_ids = tokenizer.encode(input_context, return_tensors="pt")
output = model.generate(input_ids, max_length=128, temperature=0.7)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)
```

## Model Architecture

```
LlamaForCausalLM(
  (model): LlamaModel(
    (embed_tokens): Embedding(32000, 4096, padding_idx=0)
    (layers): ModuleList(
      (0-31): 32 x LlamaDecoderLayer(
        (self_attn): LlamaAttention(
          (q_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (k_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (v_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (o_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (rotary_emb): LlamaRotaryEmbedding()
        )
        (mlp): LlamaMLP(
          (gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
          (down_proj): Linear(in_features=11008, out_features=4096, bias=False)
          (up_proj): Linear(in_features=4096, out_features=11008, bias=False)
          (act_fn): SiLUActivation()
        )
        (input_layernorm): LlamaRMSNorm()
        (post_attention_layernorm): LlamaRMSNorm()
      )
    )
    (norm): LlamaRMSNorm()
  )
  (lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```

## Citation

If this work is helpful, please kindly cite as:

```bibtex
@Misc{mamba-gpt-3b-v3,
  title = {Mamba-GPT-3b-v3},
  author = {chiliu},
  howpublished = {\url{https://huggingface.co/CobraMamba/mamba-gpt-3b-v3}},
  year = {2023}
}
```

## Disclaimer

Please
read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.

- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.

By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
kingbri/chronolima-airo-grad-l2-13B
kingbri
"2023-08-16T15:32:52Z"
1,654
3
transformers
[ "transformers", "pytorch", "llama", "text-generation", "llama-2", "en", "license:agpl-3.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-04T06:52:37Z"
---
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- llama-2
license: agpl-3.0
---

# Model Card: chronolima-airo-grad-l2-13B

This is a lora + gradient merge between:

- [Chronos 13b v2](https://huggingface.co/elinas/chronos-13b-v2)
- [Airoboros l2 13b gpt4 2.0](https://huggingface.co/jondurbin/airoboros-l2-13b-gpt4-2.0)
- [LimaRP llama 2 Lora](https://huggingface.co/lemonilia/limarp-llama2) from July 28, 2023 at a weight of 0.25

You can check out the sister model [airolima chronos grad l2 13B](https://huggingface.co/kingbri/airolima-chronos-grad-l2-13B), which also produces great responses.

Chronos was used as the base model here.

The merge was performed using [BlockMerge_Gradient](https://github.com/Gryphe/BlockMerge_Gradient) by Gryphe.

For this merge:
- Chronos was merged with LimaRP at a 0.25 weight
- Airoboros was added in an inverted curve gradient at a 0.9 ratio and slowly trickled down to 0 at the 25th layer

I have provided an illustration to help visualize this merge.

![chronolima-airo-illustration](https://files.catbox.moe/g3dm26.png)

Unlike a basic ratio merge (ex. 75/25), gradient merging allows airoboros to give its input at the beginning as the "core response" and then chronolima is used to refine it and produce an output (a minimal sketch of this per-layer blending is given at the end of this card).

LimaRP was merged at a lower weight more to correct chronos than to overhaul it. Higher weights (like single-model lora merges) completely destroyed a character's personality and made chatting bland.

## Usage:

Since this is a merge between Airoboros, Chronos, and LimaRP, the following instruction formats should work:

Alpaca 2:

```
### Instruction:
<prompt>

### Response:
<leave a newline blank for model to respond>
```

Airoboros:

```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: [prompt] ASSISTANT:
```

LimaRP instruction format (this might not work due to its weight):

```
<<SYSTEM>>
<character card and system prompt>

<<USER>>
<prompt>

<<AIBOT>>
<leave a newline blank for model to respond>
```

## Bias, Risks, and Limitations

Chronos has a bias to talk very expressively and reply with very long responses. LimaRP is trained on human RP data from niche internet forums. This model is not intended for supplying factual information or advice in any form.

## Training Details

This model is merged and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details.
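To make the gradient-merge idea concrete, here is a minimal sketch of per-layer linear blending with a layer-dependent ratio. It is an illustrative approximation under stated assumptions, not the BlockMerge_Gradient script itself; the ratio schedule and name parsing are simplified stand-ins for the curve described above:

```python
# Minimal sketch of gradient (per-layer) merging between two same-architecture
# models. Illustrative only -- not the actual BlockMerge_Gradient implementation.
import torch

def layer_ratio(layer_idx: int, start: float = 0.9, end_layer: int = 25) -> float:
    """Secondary-model weight: starts at `start` and trickles down to 0 by `end_layer`."""
    if layer_idx >= end_layer:
        return 0.0
    return start * (1.0 - layer_idx / end_layer)

def gradient_merge(sd_a: dict, sd_b: dict) -> dict:
    """Blend state dict `sd_b` into `sd_a` with a per-layer ratio."""
    merged = {}
    for name, tensor_a in sd_a.items():
        # Parse the layer index out of names like "model.layers.12.self_attn.q_proj.weight".
        parts = name.split(".")
        r = 0.0
        if "layers" in parts:
            r = layer_ratio(int(parts[parts.index("layers") + 1]))
        merged[name] = (1.0 - r) * tensor_a + r * sd_b[name]
    return merged
```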
The-Face-Of-Goonery/Huginn-v3-13b
The-Face-Of-Goonery
"2023-08-17T18:39:41Z"
1,654
10
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-12T20:38:26Z"
---
{}
---

Huginn v1.2, but finetuned on SuperCOT, with Holodeck merged in for better story capability. I also merged LimaRP back into it a second time to refresh those features, since v1.2 seemed to bury them.

It works best on the Alpaca format, but also works with chat too.
abhishek/llama2guanacotest
abhishek
"2023-08-16T12:24:25Z"
1,654
0
transformers
[ "transformers", "pytorch", "tensorboard", "llama", "text-generation", "autotrain", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-16T12:13:04Z"
--- tags: - autotrain - text-generation widget: - text: "I love AutoTrain because " --- # Model Trained Using AutoTrain
Danielbrdz/Barcenas-7b
Danielbrdz
"2023-08-26T17:04:40Z"
1,654
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "es", "en", "dataset:Danielbrdz/Barcenas-DataSet", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-25T17:33:48Z"
---
license: other
datasets:
- Danielbrdz/Barcenas-DataSet
language:
- es
- en
---

Barcenas-7b: a model based on orca-mini-v3-7b and Llama2-7b.

Trained with a proprietary dataset to boost the creativity and consistency of its responses.

This model would never have been possible without the following people:

Pankaj Mathur - For his orca-mini-v3-7b model, which was the basis of the Barcenas-7b fine-tune.

Maxime Labonne - For his code and tutorial for fine-tuning Llama2.

TheBloke - For his script for a PEFT adapter.

Georgi Gerganov - For his llama.cpp project, which contributed to Barcenas-7b's functionality.

TrashPandaSavior - A Reddit user without whose information the project would never have started.

Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽
elinas/chronos007-70b
elinas
"2024-03-23T23:19:51Z"
1,654
7
transformers
[ "transformers", "safetensors", "llama", "text-generation", "chat", "roleplay", "storywriting", "merge", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-26T21:48:29Z"
---
license: cc-by-nc-4.0
tags:
- chat
- roleplay
- storywriting
- merge
---

# chronos007-70b fp16

This is a merge of Chronos-70b-v2 and model 007 at a ratio of 0.3 using the SLERP method, with Chronos being the parent model (a minimal sketch of SLERP interpolation is given at the end of this card).

This is an experimental model that has improved Chronos' logical and reasoning abilities while keeping the unique prose and general writing Chronos provides. This is an experiment for possible future Chronos models.

There are multiple different quantized versions that can be found below, including GGUF, GPTQ, and AWQ, thanks to [@TheBloke](https://huggingface.co/TheBloke)

## License

This model is strictly [*non-commercial*](https://creativecommons.org/licenses/by-nc/4.0/) (**cc-by-nc-4.0**) use only, which takes priority over the **LLAMA 2 COMMUNITY LICENSE AGREEMENT**. If you'd like to discuss using it for your business, contact Elinas through Discord **elinas**, or X (Twitter) **@officialelinas**.

The "Model" is completely free (i.e. base model, derivatives, merges/mixes) to use for non-commercial purposes as long as the included **cc-by-nc-4.0** license in any parent repository, and the non-commercial use statute, remains, regardless of other models' licenses. At the moment, only 70b models released will be under this license, and the terms may change at any time (i.e. a more permissive license allowing commercial use).

## Model Usage

This model uses Alpaca formatting, so for optimal model performance, use it to start the dialogue or story, and if you use a frontend like SillyTavern, ENABLE Alpaca instruction mode:

```
### Instruction:
Your instruction or question here.

### Response:
```

Not using the format will make the model perform significantly worse than intended.

## Other versions

[GGUF version by @TheBloke](https://huggingface.co/TheBloke/chronos007-70B-GGUF)

[GPTQ version by @TheBloke](https://huggingface.co/TheBloke/chronos007-70B-GPTQ)

[AWQ version by @TheBloke](https://huggingface.co/TheBloke/chronos007-70B-AWQ)

**Support Development of New Models**
<a href='https://ko-fi.com/Q5Q6MB734' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Support Development' /></a>
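For reference, a minimal sketch of spherical linear interpolation (SLERP) between two weight tensors, as used by common model-merging tools; this illustrates the technique named above under stated assumptions and is not the exact merge script used for this model:

```python
# Minimal sketch of SLERP between two weight tensors.
# Illustrative of the technique named in the card, not the exact script used.
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    a = (v0 / (v0.norm() + eps)).flatten()
    b = (v1 / (v1.norm() + eps)).flatten()
    omega = torch.acos(torch.clamp(torch.dot(a, b), -1.0, 1.0))
    if omega.abs() < eps:  # nearly parallel: fall back to plain linear interpolation
        return (1.0 - t) * v0 + t * v1
    so = torch.sin(omega)
    return (torch.sin((1.0 - t) * omega) / so) * v0 + (torch.sin(t * omega) / so) * v1

# e.g. blend each parameter at t = 0.3 (the ratio mentioned in the card):
# merged[name] = slerp(0.3, chronos_sd[name], model007_sd[name])
```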
Weyaxi/OpenOrca-Zephyr-7B
Weyaxi
"2023-10-11T10:22:01Z"
1,654
5
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-11T09:45:07Z"
---
license: cc-by-nc-4.0
---

<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>

Merge of [HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) and [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) using a TIES merge (a minimal sketch of the density step used by TIES merging is given at the end of this card).

### *Weights*

- [HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha): 0.5
- [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca): 0.3

### *Density*

- [HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha): 0.5
- [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca): 0.5

# Evaluation Results ([Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard))

| Metric                | Value |
|-----------------------|-------|
| Avg.                  |       |
| ARC (25-shot)         |       |
| HellaSwag (10-shot)   |       |
| MMLU (5-shot)         |       |
| TruthfulQA (0-shot)   |       |
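In TIES merging, the density controls how much of each model's task vector (its delta from the base model) is kept: the lowest-magnitude entries are trimmed to zero before the weighted combination. A minimal sketch of that trimming step, illustrative under stated assumptions rather than the exact tool used for this merge:

```python
# Minimal sketch of the density/trim step in TIES merging: keep only the
# top-`density` fraction of a task vector (delta from base) by magnitude.
# Illustrative of the technique named in the card, not the exact tool used.
import torch

def trim_by_density(delta: torch.Tensor, density: float = 0.5) -> torch.Tensor:
    k = int(delta.numel() * density)  # number of entries to keep
    if k == 0:
        return torch.zeros_like(delta)
    threshold = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
    return torch.where(delta.abs() >= threshold, delta, torch.zeros_like(delta))

# merged delta (per parameter): weighted sum of trimmed task vectors, e.g.
# merged = base + 0.5 * trim_by_density(zephyr - base) + 0.3 * trim_by_density(openorca - base)
```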
NExtNewChattingAI/shark_tank_ai_7b_v2
NExtNewChattingAI
"2024-03-10T19:17:36Z"
1,654
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-23T01:46:04Z"
---
language:
- en
license: cc-by-nc-4.0
model-index:
- name: shark_tank_ai_7b_v2
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 67.75
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NExtNewChattingAI/shark_tank_ai_7b_v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 87.06
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NExtNewChattingAI/shark_tank_ai_7b_v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 58.79
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NExtNewChattingAI/shark_tank_ai_7b_v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 62.15
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NExtNewChattingAI/shark_tank_ai_7b_v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 78.45
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NExtNewChattingAI/shark_tank_ai_7b_v2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 45.11
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NExtNewChattingAI/shark_tank_ai_7b_v2
      name: Open LLM Leaderboard
---

This model is based on https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3, trained on internal data.

The chatbot is a highly advanced artificial intelligence designed to provide you with personalized assistance and support. With its natural language processing capabilities, it can understand and respond to a wide range of queries and requests, making it a valuable tool for both personal and professional use.

The chatbot is equipped with a vast knowledge base, allowing it to provide accurate and reliable information on a wide range of topics, from general knowledge to specific industry-related information. It can also perform tasks such as scheduling appointments, sending emails, and even ordering products online.

One of the standout features of this assistant chatbot is its ability to learn and adapt to your individual preferences and needs. Over time, it can become more personalized to your specific requirements, making it an even more valuable asset to your daily life.

The chatbot is also designed to be user-friendly and intuitive, with a simple and easy-to-use interface that allows you to interact with it in a natural and conversational way.
Whether you're looking for information, need help with a task, or just want to chat, your assistant chatbot is always ready and available to assist you.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NExtNewChattingAI__shark_tank_ai_7b_v2)

| Metric                          |Value|
|---------------------------------|----:|
|Avg.                             |66.55|
|AI2 Reasoning Challenge (25-Shot)|67.75|
|HellaSwag (10-Shot)              |87.06|
|MMLU (5-Shot)                    |58.79|
|TruthfulQA (0-shot)              |62.15|
|Winogrande (5-shot)              |78.45|
|GSM8k (5-shot)                   |45.11|
martyn/mixtral-megamerge-dare-8x7b-v2
martyn
"2023-12-25T22:48:59Z"
1,654
1
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "dare", "super mario merge", "pytorch", "merge", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-25T21:41:57Z"
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- dare
- super mario merge
- pytorch
- mixtral
- merge
---

# mixtral megamerge 8x7b v2

The following models were merged with DARE using [https://github.com/martyn/safetensors-merge-supermario](https://github.com/martyn/safetensors-merge-supermario) (a minimal sketch of the DARE drop-and-rescale step is given at the end of this card).

## Mergelist

```
mistralai/Mixtral-8x7B-v0.1
mistralai/Mixtral-8x7B-Instruct-v0.1
cognitivecomputations/dolphin-2.6-mixtral-8x7b
Brillibitg/Instruct_Mixtral-8x7B-v0.1_Dolly15K
orangetin/OpenHermes-Mixtral-8x7B
NeverSleep/Noromaid-v0.1-mixtral-8x7b-v3
```

## Merge command

```
python3 hf_merge.py to_merge_mixtral2.txt mixtral-2 -p 0.15 -lambda 1.95
```

### Notes

* MoE gates were filtered for compatibility, then averaged with `(tensor1 + tensor2)/2`
* The merge seems to generalize prompting formats and sampling settings
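DARE (Drop And REscale) sparsifies each model's delta from the base by randomly dropping entries and rescaling the survivors so the expected delta is preserved. A minimal sketch under those assumptions; the `-p 0.15` flag above plausibly corresponds to the drop probability, and this is not the exact `hf_merge.py` implementation:

```python
# Minimal sketch of DARE: randomly drop delta entries with probability p and
# rescale survivors by 1/(1-p) so the expected delta is preserved.
# Illustrative only; not the exact hf_merge.py implementation.
import torch

def dare(delta: torch.Tensor, p: float = 0.15) -> torch.Tensor:
    mask = (torch.rand_like(delta) >= p).to(delta.dtype)  # keep with prob 1-p
    return delta * mask / (1.0 - p)

# per parameter: merged = base + sum of DARE-processed deltas from each model, e.g.
# merged[name] = base[name] + dare(model_sd[name] - base[name], p=0.15)
```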
LWDCLS/KobbleTinyV2-1.1B-GGUF-IQ-Imatrix-Request
LWDCLS
"2024-06-24T17:54:59Z"
1,654
2
null
[ "gguf", "license:unlicense", "region:us" ]
null
"2024-06-24T17:46:04Z"
---
inference: false
license: unlicense
---

[[Request #54]](https://huggingface.co/Lewdiculous/Model-Requests/discussions/54) - Click the link for more context. <br>
[concedo/KobbleTinyV2-1.1B](https://huggingface.co/concedo/KobbleTinyV2-1.1B) <br>

Use with the [**latest version of KoboldCpp**](https://github.com/LostRuins/koboldcpp/releases/latest), or [this more up-to-date fork](https://github.com/Nexesenex/kobold.cpp) if you have issues.

<details>
<summary>⇲ Click here to expand/hide information – General chart with relative quant performances.</summary>

> [!NOTE]
> **Recommended read:** <br>
>
> [**"Which GGUF is right for me? (Opinionated)" by Artefact2**](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
>
> *Click the image to view full size.*
> !["Which GGUF is right for me? (Opinionated)" by Artefact2 - First Graph](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/fScWdHIPix5IzNJ8yswCB.webp)

</details>

![image](https://cdn-uploads.huggingface.co/production/uploads/660942fd0b7d615a5672ded9/h1MmQc94Oxbw91O2TkvU-.jpeg)
valurank/MiniLM-L6-Keyword-Extraction
valurank
"2022-06-08T20:17:38Z"
1,653
10
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "en", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "license:other", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2022-05-20T16:37:59Z"
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
license: other
---

# all-MiniLM-L6-v2

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

# Mean Pooling - take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2)

------

## Background

The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a 1B sentence-pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.

We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face.
We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from an efficient hardware infrastructure to run the project: 7 TPU v3-8, as well as advice from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks.

## Intended uses

Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.

By default, input text longer than 256 word pieces is truncated.

## Training procedure

### Pre-training

We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.

### Fine-tuning

We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch. We then apply the cross-entropy loss by comparing with the true pairs (a minimal sketch of this in-batch objective is given after the training data table below).

#### Hyper parameters

We trained our model on a TPU v3-8. We trained the model during 100k steps using a batch size of 1024 (128 per TPU core). We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.

#### Training data

We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion sentences. We sampled each dataset given a weighted probability whose configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples |
|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395 |
| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | - | 304,525 |
| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | - | 250,519 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | - | 250,460 |
| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
| **Total** | | **1,170,060,424** |
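As referenced in the fine-tuning section above, a minimal sketch of the in-batch contrastive objective: scaled cosine similarities between all pairs in a batch, with cross-entropy against the true (diagonal) pairs. Illustrative only; the actual code lives in `train_script.py` and the scale value is an assumption:

```python
# Minimal sketch of the in-batch contrastive objective described in the card.
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(emb_a: torch.Tensor, emb_b: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    """emb_a[i] and emb_b[i] form a true pair; all other combinations are negatives."""
    a = F.normalize(emb_a, p=2, dim=1)
    b = F.normalize(emb_b, p=2, dim=1)
    scores = a @ b.T * scale                # cosine similarity matrix, scaled
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)  # each true pair sits on the diagonal

# e.g. with a batch of 1024 pairs: loss = in_batch_contrastive_loss(q_emb, p_emb)
```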
concedo/Pythia-70M-ChatSalad
concedo
"2023-04-07T14:46:25Z"
1,653
6
transformers
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "en", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-02-01T08:17:14Z"
---
license: other
language:
- en
inference: false
widget:
- text: "How do I download this model?"
  example_title: "Text Gen Example"
---

# Pythia-70M-ChatSalad

This is a follow-up finetune of Pythia-70M, trained on the same dataset as OPT-19M-ChatSalad. It is much more coherent.

All feedback and comments can be directed to Concedo on the KoboldAI discord.
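For reference, a minimal sketch of sampling from the model with `transformers` (the generation settings here are illustrative, not tuned recommendations):

```python
# Minimal usage sketch for a small GPT-NeoX-style causal LM.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "concedo/Pythia-70M-ChatSalad"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("How do I download this model?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```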
digitous/Janin-R
digitous
"2023-02-21T00:49:26Z"
1,653
1
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-02-20T04:22:02Z"
---
license: creativeml-openrail-m
language:
- en
---

This is a (25/25)/50 merge of Mr. Seeker's GPT-J Janeway and Shinen models with GPT-R. If interested, please visit digitous/GPT-R and KoboldAI to get an understanding of what these models are comprised of.

This model is not intended for minors.

Mr. Seeker, maker of incredible fine-tuned models:
https://huggingface.co/KoboldAI/GPT-J-6B-Janeway
https://huggingface.co/KoboldAI/GPT-J-6B-Shinen

Donate to help ameliorate server fees:
https://www.patreon.com/mrseeker

GPT-Ronin:
https://huggingface.co/digitous/GPT-R

Weight merge script credit to Concedo:
https://huggingface.co/concedo
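As an illustration of the general technique only (an assumption about how such blends are usually done, not Concedo's actual script), a (25/25)/50 merge can be sketched as a weighted average over matching state-dict tensors:

```python
# Hypothetical sketch of a (25/25)/50 blend over state dicts
# (e.g. loaded with torch.load): average Janeway and Shinen 50/50,
# then average that result 50/50 with GPT-R, tensor by tensor.
def merge_25_25_50(sd_janeway, sd_shinen, sd_gptr):
    merged = {}
    for key in sd_janeway:
        blend = 0.5 * sd_janeway[key] + 0.5 * sd_shinen[key]
        merged[key] = 0.5 * blend + 0.5 * sd_gptr[key]
    return merged
```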
dvruette/oasst-gpt-neox-20b-1000-steps
dvruette
"2023-03-27T15:41:30Z"
1,653
0
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-03-25T13:53:05Z"
https://wandb.ai/open-assistant/supervised-finetuning/runs/x2pczqa9
OpenAssistant/pythia-12b-sft-v8-rlhf-2k-steps
OpenAssistant
"2023-08-17T22:00:39Z"
1,653
0
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-05-10T21:11:23Z"
---
license: apache-2.0
---

# pythia-12b-sft-v8-rlhf-2k-steps

- sampling report: [2023-05-15_OpenAssistant_pythia-12b-sft-v8-rlhf-2k-steps_sampling_noprefix2.json](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-rl%2F2023-05-15_OpenAssistant_pythia-12b-sft-v8-rlhf-2k-steps_sampling_noprefix2.json)
golaxy/gogpt-3b-bloom
golaxy
"2023-07-22T13:23:22Z"
1,653
5
transformers
[ "transformers", "pytorch", "tensorboard", "bloom", "text-generation", "zh", "dataset:BelleGroup/train_2M_CN", "dataset:BelleGroup/train_3.5M_CN", "dataset:BelleGroup/train_1M_CN", "dataset:BelleGroup/train_0.5M_CN", "dataset:BelleGroup/school_math_0.25M", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-05-26T02:38:54Z"
---
license: apache-2.0
datasets:
- BelleGroup/train_2M_CN
- BelleGroup/train_3.5M_CN
- BelleGroup/train_1M_CN
- BelleGroup/train_0.5M_CN
- BelleGroup/school_math_0.25M
language:
- zh
---

## GoGPT

A Chinese BLOOM-based foundation model fine-tuned on diverse Chinese instruction data.

![img.png](resources/img.png)

> One training epoch is sufficient; the second and third epochs bring little further improvement.

- 🚀 Diverse instruction data
- 🚀 Filtered, high-quality Chinese data

| Model name | Parameters | Model link |
|------------|--------|------|
| gogpt-560m | 560M parameters | 🤗[golaxy/gogpt-560m](https://huggingface.co/golaxy/gogpt-560m) |
| gogpt-3b | 3B parameters | 🤗[golaxy/gogpt-3b-bloom](https://huggingface.co/golaxy/gogpt-3b-bloom) |
| gogpt-7b | 7B parameters | 🤗[golaxy/gogpt-7b-bloom](https://huggingface.co/golaxy/gogpt-7b-bloom) |

## Test results

![img.png](resources/test1.png)
![img.png](resources/test2.png)
![img.png](resources/test3.png)
![img.png](resources/test4.png)
![img.png](resources/test5.png)
![img.png](resources/test6.png)

## TODO

- Run RLHF training
- Add Chinese-English parallel corpora later

## Acknowledgements

- [@hz-zero_nlp](https://github.com/yuanzhoulvpi2017/zero_nlp)
- [stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca)
- [Belle data](https://huggingface.co/BelleGroup)

## Citation

If you use GoGPT in your research, please cite it as follows:

```
@misc{GoGPT,
  title={GoGPT: Training Medical GPT Model},
  author={Qiang Yan},
  year={2023},
  howpublished={\url{https://github.com/yanqiangmiffy/GoGPT}},
}
```
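Beyond the repo's `infer.py`, a minimal `transformers` sketch should also work, assuming the checkpoint loads as a standard BLOOM causal LM (the bare prompt here is an assumption; `infer.py` may apply its own template):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "golaxy/gogpt-3b-bloom"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# "请介绍一下北京。" = "Please introduce Beijing."
inputs = tokenizer("请介绍一下北京。", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```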
TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ
TheBloke
"2023-08-21T14:13:05Z"
1,653
46
transformers
[ "transformers", "safetensors", "llama", "text-generation", "custom_code", "arxiv:2304.12244", "license:other", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
"2023-07-07T17:12:09Z"
---
inference: false
license: other
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# WizardLM's WizardLM 13B V1.1 GPTQ

These files are GPTQ 4bit model files for [WizardLM's WizardLM 13B V1.1](https://huggingface.co/WizardLM/WizardLM-13B-V1.1) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test).

It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

**This is an experimental new GPTQ which offers up to 8K context size**

The increased context is tested to work with [ExLlama](https://github.com/turboderp/exllama), via the latest release of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It has also been tested from Python code using AutoGPTQ, and `trust_remote_code=True`.

Code credits:
- Original concept and code for increasing context length: [kaiokendev](https://huggingface.co/kaiokendev)
- Updated Llama modelling code that includes this automatically via trust_remote_code: [emozilla](https://huggingface.co/emozilla)

Please read carefully below to see how to use it.

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ)
* [4, 5, and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/WizardLM/WizardLM-13B-V1.1)

## Prompt template: Vicuna

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: prompt ASSISTANT:
```

## How to easily download and use this model in text-generation-webui with ExLlama

Please make sure you're using the latest version of text-generation-webui.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. Untick **Autoload the model**.
6. In the top left, click the refresh icon next to **Model**.
7. In the **Model** dropdown, choose the model you just downloaded: `WizardLM-13B-V1-1-SuperHOT-8K-GPTQ`.
8.
To use the increased context, set the **Loader** to **ExLlama**, set **max_seq_len** to 8192 or 4096, and set **compress_pos_emb** to **4** for 8192 context, or to **2** for 4096 context.
9. Now click **Save Settings** followed by **Reload**.
10. The model will automatically load, and is now ready for use!
11. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!

## How to use this GPTQ model from Python code with AutoGPTQ

First make sure you have AutoGPTQ and Einops installed:

```
pip3 install einops auto-gptq
```

Then run the following code. Note that in order to get this to work, `config.json` has been hardcoded to a sequence length of 8192.

If you want to try 4096 instead to reduce VRAM usage, please manually edit `config.json` to set `max_position_embeddings` to the value you want.

```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name_or_path = "TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ"
model_basename = "wizardlm-13b-v1.1-superhot-8k-GPTQ-4bit-128g.no-act.order"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=True,
        device_map='auto',
        use_triton=use_triton,
        quantize_config=None)

model.seqlen = 8192

# Note: check the prompt template is correct for this model.
prompt = "Tell me about AI"
prompt_template = f'''USER: {prompt}
ASSISTANT:'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```

## Using other UIs: monkey patch

Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev.

It can theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest.

## Provided files

**wizardlm-13b-v1.1-superhot-8k-GPTQ-4bit-128g.no-act.order.safetensors**

This will work with AutoGPTQ, ExLlama, and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.

It was created with group_size 128 to increase inference accuracy, but without --act-order (desc_act) to increase compatibility and improve inference speed.

* `wizardlm-13b-v1.1-superhot-8k-GPTQ-4bit-128g.no-act.order.safetensors`
  * Works for use with ExLlama with increased context (4096 or 8192)
  * Works with AutoGPTQ in Python code, including with increased context, if `trust_remote_code=True` is set.
  * Should work with GPTQ-for-LLaMa in CUDA mode, but unknown if increased context works - TBC. May have issues with GPTQ-for-LLaMa Triton mode.
  * Works with text-generation-webui, including one-click-installers.
  * Parameters: Groupsize = 128. Act Order / desc_act = False.
<!-- footer start -->
<!-- 200823 -->

## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: Kaio Ken's SuperHOT 8K

### SuperHOT Prototype 2 w/ 8K Context

This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).

Tests have shown that the model does indeed leverage the extended context at 8K.

You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**.

#### Looking for Merged & Quantized Models?
- 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors)
- 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors)

#### Training Details

I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model

# Original model card: WizardLM's WizardLM 13B V1.1

This is the **Full-Weight** of the WizardLM-13B V1.1 model.

**Repository**: https://github.com/nlpxucan/WizardLM

**Twitter**: https://twitter.com/WizardLM_AI/status/1677282955490918401

- 🔥🔥🔥 [7/7/2023] We released **WizardLM V1.1** models. The **WizardLM-13B-V1.1** is here ([Demo_13B-V1.1](https://e8a06366ccd1c4d1.gradio.app), [Demo_13B-V1.1_bak-1](https://59da107262a25764.gradio.app), [Demo_13B-V1.1_bak-2](https://dfc5113f66739c80.gradio.app), [Full Model Weight](https://huggingface.co/WizardLM/WizardLM-13B-V1.1)). **WizardLM-7B-V1.1**, **WizardLM-30B-V1.1**, and **WizardLM-65B-V1.1** are coming soon. Please check out the [Full Model Weights](https://huggingface.co/WizardLM) and [paper](https://arxiv.org/abs/2304.12244).
- 🔥🔥🔥 [7/7/2023] The **WizardLM-13B-V1.1** achieves **6.74** on the [MT-Bench Leaderboard](https://chat.lmsys.org/?leaderboard), **86.32%** on the [AlpacaEval Leaderboard](https://tatsu-lab.github.io/alpaca_eval/), and **99.3%** on [WizardLM Eval](https://github.com/nlpxucan/WizardLM/blob/main/WizardLM/data/WizardLM_testset.jsonl). (Note: MT-Bench and AlpacaEval results are self-tested; updates will be pushed and review requested. All tests are completed under their official settings.)
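The SuperHOT scaling factor mentioned above corresponds to linear position interpolation: rotary position indices are multiplied by the factor (0.25 for 8K) so that extended positions fall inside the range the base model was trained on. A rough sketch of the idea in plain PyTorch (an illustration of the technique, not kaiokendev's actual monkey patch):

```python
# Sketch of linear RoPE scaling (position interpolation).
# With scale = 0.25, position 8191 is mapped to ~2048, inside the
# base model's original trained position range.
import torch

def scaled_rope_angles(positions, head_dim, base=10000.0, scale=0.25):
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    scaled = positions.float() * scale    # compress position indices
    return torch.outer(scaled, inv_freq)  # angles fed to sin()/cos()

angles = scaled_rope_angles(torch.arange(8192), head_dim=128)
print(angles.shape)  # torch.Size([8192, 64])
```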
acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1
acrastt
"2024-02-03T03:35:56Z"
1,653
1
transformers
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "en", "dataset:togethercomputer/RedPajama-Data-1T", "dataset:databricks/databricks-dolly-15k", "dataset:OpenAssistant/oasst1", "dataset:Muennighoff/natural-instructions", "dataset:Muennighoff/P3", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-07-27T19:42:41Z"
---
language:
- en
license: apache-2.0
library_name: transformers
datasets:
- togethercomputer/RedPajama-Data-1T
- databricks/databricks-dolly-15k
- OpenAssistant/oasst1
- Muennighoff/natural-instructions
- Muennighoff/P3
pipeline_tag: text-generation
model-index:
- name: RedPajama-INCITE-Chat-Instruct-3B-V1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 42.58
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 67.48
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 25.99
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 33.62
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.8
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 0.91
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1
      name: Open LLM Leaderboard
---

<a href="https://www.buymeacoffee.com/acrastt" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>

This is an experimental merge of models [RedPajama-INCITE-Chat-3B-V1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1) and [RedPajama-INCITE-Instruct-3B-V1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Instruct-3B-v1).

This model is adaptive to prompt templates, but this template is recommended:
```
HUMAN: {prompt}
ASSISTANT:
```
Feel free to change HUMAN or ASSISTANT. It will not change much.

GGML versions [here](https://huggingface.co/adadbbb/pajama_ggml) (note that these are only compatible with [koboldcpp](https://github.com/LostRuins/koboldcpp)).
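A minimal `transformers` usage sketch with the recommended template (generation settings are illustrative, not tuned):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Recommended prompt template from the card above.
prompt = "HUMAN: Name three uses for a paperclip.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```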
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_acrastt__RedPajama-INCITE-Chat-Instruct-3B-V1)

| Metric |Value|
|---------------------------------|----:|
|Avg. |39.23|
|AI2 Reasoning Challenge (25-Shot)|42.58|
|HellaSwag (10-Shot) |67.48|
|MMLU (5-Shot) |25.99|
|TruthfulQA (0-shot) |33.62|
|Winogrande (5-shot) |64.80|
|GSM8k (5-shot) | 0.91|
clibrain/Llama-2-13b-ft-instruct-es
clibrain
"2023-08-30T14:43:54Z"
1,653
9
transformers
[ "transformers", "pytorch", "llama", "text-generation", "es", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-10T11:33:55Z"
---
license: apache-2.0
language:
- es
pipeline_tag: text-generation
library_name: transformers
inference: false
---

# Llama-2-13B-ft-instruct-es

[Llama 2 (13B)](https://huggingface.co/meta-llama/Llama-2-13b) fine-tuned on [Clibrain](https://huggingface.co/clibrain)'s Spanish instructions dataset.

## Model Details

Llama 2 is a collection of pre-trained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This repository contains the 13B model fine-tuned on Spanish instructions.

## Example of Usage

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_id = "clibrain/Llama-2-13b-ft-instruct-es"

model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_id)

def create_instruction(instruction, input_data=None, context=None):
    sections = {
        "Instrucción": instruction,
        "Entrada": input_data,
        "Contexto": context,
    }

    system_prompt = "A continuación hay una instrucción que describe una tarea, junto con una entrada que proporciona más contexto. Escriba una respuesta que complete adecuadamente la solicitud.\n\n"
    prompt = system_prompt

    for title, content in sections.items():
        if content is not None:
            prompt += f"### {title}:\n{content}\n\n"

    prompt += "### Respuesta:\n"

    return prompt


def generate(
        instruction,
        input=None,
        context=None,
        max_new_tokens=128,
        temperature=0.1,
        top_p=0.75,
        top_k=40,
        num_beams=4,
        **kwargs
):
    prompt = create_instruction(instruction, input, context)
    print(prompt.replace("### Respuesta:\n", ""))
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].to("cuda")
    attention_mask = inputs["attention_mask"].to("cuda")
    generation_config = GenerationConfig(
        temperature=temperature,
        top_p=top_p,
        top_k=top_k,
        num_beams=num_beams,
        **kwargs,
    )
    with torch.no_grad():
        generation_output = model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
            generation_config=generation_config,
            return_dict_in_generate=True,
            output_scores=True,
            max_new_tokens=max_new_tokens,
            early_stopping=True
        )
    s = generation_output.sequences[0]
    output = tokenizer.decode(s)
    return output.split("### Respuesta:")[1].lstrip("\n")

instruction = "Dame una lista de lugares a visitar en España."
print(generate(instruction))
```

## Example of Usage with `pipelines`

```py
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "clibrain/Llama-2-13b-ft-instruct-es"

model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_id)

pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200, device=0)

prompt = """
A continuación hay una instrucción que describe una tarea. Escriba una respuesta que complete adecuadamente la solicitud.

### Instrucción:
Dame una lista de 5 lugares a visitar en España.

### Respuesta:
"""

result = pipe(prompt)
print(result[0]['generated_text'])
```
TigerResearch/tigerbot-7b-base
TigerResearch
"2023-09-20T06:16:15Z"
1,653
3
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-19T06:21:50Z"
---
license: apache-2.0
---

<div style="width: 100%;">
    <p align="center" width="20%">
    <img src="http://x-pai.algolet.com/bot/img/logo_core.png" alt="TigerBot" width="20%" style="display: block; margin: auto;"></img>
    </p>
</div>
<p align="center">
<font face="黑体" size="5"> A cutting-edge foundation for your very own LLM. </font>
</p>
<p align="center">
  💻<a href="https://github.com/TigerResearch/TigerBot" target="_blank">Github</a> • 🌐 <a href="https://tigerbot.com/" target="_blank">TigerBot</a> • 🤗 <a href="https://huggingface.co/TigerResearch" target="_blank">Hugging Face</a>
</p>

# Quick Start

- Method 1: use with transformers

  - Clone the TigerBot repo

    ```shell
    git clone https://github.com/TigerResearch/TigerBot.git
    ```

  - Run the infer script

    ```shell
    python infer.py --model_path TigerResearch/tigerbot-7b-base
    ```

- Method 2:

  - Clone the TigerBot repo

    ```shell
    git clone https://github.com/TigerResearch/TigerBot.git
    ```

  - Install git lfs: `git lfs install`

  - Download weights from huggingface or modelscope

    ```shell
    git clone https://huggingface.co/TigerResearch/tigerbot-7b-base
    git clone https://www.modelscope.cn/TigerResearch/tigerbot-7b-base-v3.git
    ```

  - Run the infer script

    ```shell
    python infer.py --model_path tigerbot-7b-base(-v3) --model_type base --max_generate_length 64
    ```
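Alternatively, a minimal `transformers` sketch (assuming the checkpoint loads as a standard Llama-family causal LM):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TigerResearch/tigerbot-7b-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Base (non-chat) model: plain text completion.
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```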
TaylorAI/Flash-Llama-13B
TaylorAI
"2023-08-29T23:45:20Z"
1,653
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "custom_code", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-19T19:46:05Z"
Entry not found
CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r16
CHIH-HUNG
"2023-09-12T12:30:26Z"
1,653
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:huangyt/FINETUNE1", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-06T09:57:47Z"
---
license: llama2
datasets:
- huangyt/FINETUNE1
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

Trained on llama-2-13b with the huangyt/FINETUNE1 dataset, about 170k training examples in total.

# Fine-Tuning Information
- **GPU:** RTX4090 (single core / 24564MiB)
- **model:** meta-llama/Llama-2-13b-hf
- **dataset:** huangyt/FINETUNE1 (about 170k training examples)
- **peft_type:** LoRA
- **lora_rank:** 16
- **lora_target:** gate_proj, up_proj, down_proj
- **per_device_train_batch_size:** 8
- **gradient_accumulation_steps:** 8
- **learning_rate:** 5e-5
- **epoch:** 1
- **precision:** bf16
- **quantization:** load_in_4bit

# Fine-Tuning Detail
- **train_loss:** 0.66
- **train_runtime:** 16:26:58 (use deepspeed)

# Evaluation
- Evaluation results come from **HuggingFaceH4/open_llm_leaderboard**
- Compared with Llama-2-13b on four benchmarks: **ARC**, **HellaSwag**, **MMLU**, and **TruthfulQA**

| Model |Average| ARC |HellaSwag| MMLU |TruthfulQA|
|--------------------------------------------------------|-------|-------|---------|-------|----------|
|meta-llama/Llama-2-13b-hf | 56.9 | 58.11 | 80.97 | 54.34 | 34.17 |
|meta-llama/Llama-2-13b-chat-hf | 59.93 | 59.04 | 81.94 | 54.64 | 44.12 |
|CHIH-HUNG/llama-2-13b-Fintune_1_17w | 58.24 | 59.47 | 81 | 54.31 | 38.17 |
|CHIH-HUNG/llama-2-13b-huangyt_Fintune_1_17w-q_k_v_o_proj| 58.49 | 59.73 | 81.06 | 54.53 | 38.64 |
|CHIH-HUNG/llama-2-13b-Fintune_1_17w-gate_up_down_proj | 58.81 | 57.17 | 82.26 | 55.89 | 39.93 |
|CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r16 | 58.86 | 57.25 | 82.27 | 56.16 | 39.75 |
|CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r4 | 58.71 | 56.74 | 82.27 | 56.18 | 39.65 |

# How to convert the dataset to JSON
- Pass the dataset name to **load_dataset**; **take** can fetch the first n examples
- Check the dataset's column names and fill them into the **example** fields (e.g. system_prompt, question, response)
- Finally, specify where to save the JSON file (**json_filename**)

```py
import json
from datasets import load_dataset

# Load the dataset; when streaming, .take(n) can fetch the first n examples
dataset = load_dataset("huangyt/FINETUNE1", split="train", streaming=True)

# Extract the required fields and build a new list of dicts
extracted_data = []
for example in dataset:
    extracted_example = {
        "instruction": example["instruction"],
        "input": example["input"],
        "output": example["output"]
    }
    extracted_data.append(extracted_example)

# Specify the JSON file name
json_filename = "huangyt_FINETUNE_1.json"

# Write the JSON file
with open(json_filename, "w") as json_file:
    json.dump(extracted_data, json_file, indent=4)

print(f"Data extracted and saved as {json_filename}")
```
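The LoRA hyperparameters listed under Fine-Tuning Information map onto a `peft` configuration roughly like the following (a sketch, not the author's actual training script; `lora_alpha` and dropout are not stated in the card and are left at library defaults):

```python
# Sketch of a peft LoRA config matching the card's stated settings.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                                                   # lora_rank
    target_modules=["gate_proj", "up_proj", "down_proj"],   # lora_target
    task_type="CAUSAL_LM",
)
```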
ajibawa-2023/Uncensored-Frank-13B
ajibawa-2023
"2023-11-18T05:57:00Z"
1,653
7
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:ehartford/wizard_vicuna_70k_unfiltered", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-14T18:20:25Z"
---
license: cc-by-nc-nd-4.0
language:
- en
datasets:
- ehartford/wizard_vicuna_70k_unfiltered
---

**Frank: An Uncensored Model**

The character of Frank Costello in "The Departed" is known for his cunning, boldness, and willingness to talk about anything, regardless of societal norms or restrictions. Frank, an uncensored model, draws inspiration from these qualities to offer a platform where users can discuss a wide array of topics without the fear of censorship or restrictions. Frank aims to push boundaries and encourage candid conversations. With Frank you can have unfiltered discussions on a multitude of topics, from politics and controversial issues to personal experiences and sensitive subjects.

It is trained on around 150,000 conversation sets, each containing 10-15 conversations. Base data was obtained from [Eric Hartford](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered). This data was further refined and fine-tuned. In addition, more than 80k synthetic conversations were generated and refined. We will not release this data.

**Warning**

An uncensored model has no or few guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object. Publishing anything this model generates is the same as publishing it yourself. We are not responsible for what you generate using this model.

**Training:**

The entire dataset was trained on Azure 4 x A100 80GB. For 3 epochs, training took around 65 hours. The DeepSpeed codebase was used for training. This model was trained on Llama-2 by Meta.

**GPTQ, GGUF & AWQ**

GPTQ: [Link](https://huggingface.co/TheBloke/Uncensored-Frank-13b-GPTQ)

GGUF: [Link](https://huggingface.co/TheBloke/Uncensored-Frank-13b-GGUF)

AWQ: [Link](https://huggingface.co/TheBloke/Uncensored-Frank-13b-AWQ)

Special thanks to [TheBloke](https://huggingface.co/TheBloke) for making these models available.

**Example Prompt:**

```
This is a conversation with your Uncensored AI assistant Frank. Frank can discuss with you at length about various topics such as ---, ---, ---, . You can ask it anything you want and it will do its best to give you accurate and relevant information.

Context
You are a Uncensored AI assistant.

USER: <prompt>
ASSISTANT:
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__Uncensored-Frank-13B)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 50.21 |
| ARC (25-shot) | 61.6 |
| HellaSwag (10-shot) | 82.62 |
| MMLU (5-shot) | 54.55 |
| TruthfulQA (0-shot) | 48.34 |
| Winogrande (5-shot) | 74.74 |
| GSM8K (5-shot) | 11.98 |
| DROP (3-shot) | 17.63 |
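A minimal generation sketch using the example prompt format above (a sketch assuming a CUDA device and standard Llama loading; settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ajibawa-2023/Uncensored-Frank-13B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

prompt = (
    "This is a conversation with your Uncensored AI assistant Frank.\n"
    "Context\nYou are a Uncensored AI assistant.\n\n"
    "USER: What are the trade-offs of releasing open-weight models?\n"
    "ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```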
shaowenchen/vicuna-13b-v1.5-16k-gguf
shaowenchen
"2023-09-18T21:27:02Z"
1,653
1
null
[ "gguf", "vicuna", "chinese", "text-generation", "zh", "en", "license:other", "region:us" ]
text-generation
"2023-09-18T12:08:34Z"
---
inference: true
language:
- zh
- en
license: other
model_creator: lmsys
model_link: https://huggingface.co/lmsys/vicuna-13b-v1.5-16k
model_name: vicuna-13b-v1.5-16k
model_type: vicuna
pipeline_tag: text-generation
quantized_by: shaowenchen
tasks:
- text2text-generation
tags:
- gguf
- vicuna
- chinese
---

## Provided files

| Name | Quant method | Size |
| ------------------------------- | ------------ | ------ |
| vicuna-13b-v1.5-16k.Q2_K.gguf | Q2_K | 5.1 GB |
| vicuna-13b-v1.5-16k.Q3_K.gguf | Q3_K | 5.9 GB |
| vicuna-13b-v1.5-16k.Q3_K_L.gguf | Q3_K_L | 6.5 GB |
| vicuna-13b-v1.5-16k.Q3_K_S.gguf | Q3_K_S | 5.3 GB |
| vicuna-13b-v1.5-16k.Q4_0.gguf | Q4_0 | 6.9 GB |
| vicuna-13b-v1.5-16k.Q4_1.gguf | Q4_1 | 7.6 GB |
| vicuna-13b-v1.5-16k.Q4_K.gguf | Q4_K | 7.3 GB |
| vicuna-13b-v1.5-16k.Q4_K_S.gguf | Q4_K_S | 6.9 GB |
| vicuna-13b-v1.5-16k.Q5_0.gguf | Q5_0 | 8.4 GB |
| vicuna-13b-v1.5-16k.Q5_1.gguf | Q5_1 | 9.1 GB |
| vicuna-13b-v1.5-16k.Q5_K.gguf | Q5_K | 8.6 GB |
| vicuna-13b-v1.5-16k.Q5_K_S.gguf | Q5_K_S | 8.4 GB |
| vicuna-13b-v1.5-16k.Q6_K.gguf | Q6_K | 9.9 GB |
| vicuna-13b-v1.5-16k.Q8_0.gguf | Q8_0 | 13 GB |
| vicuna-13b-v1.5-16k.gguf | full | 24 GB |

Usage:

```
docker run --rm -it -p 8000:8000 -v /path/to/models:/models -e MODEL=/models/gguf-model-name.gguf hubimage/llama-cpp-python:latest
```

and you can view http://localhost:8000/docs to see the swagger UI.

## Provided images

| Name | Quant method | Compressed Size |
| ------------------------------------------- | ------------ | --------------- |
| `shaowenchen/vicuna-13b-v1.5-16k-gguf:Q2_K` | Q2_K | 5.29 GB |
| `shaowenchen/vicuna-13b-v1.5-16k-gguf:Q3_K` | Q3_K | 6.11 GB |
| `shaowenchen/vicuna-13b-v1.5-16k-gguf:Q4_K` | Q4_K | 7.47 GB |

Usage:

```
docker run --rm -p 8000:8000 shaowenchen/vicuna-13b-v1.5-16k-gguf:Q2_K
```

and you can view http://localhost:8000/docs to see the swagger UI.
jondurbin/airoboros-l2-13b-2.2.1
jondurbin
"2023-09-21T18:39:18Z"
1,653
3
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:jondurbin/airoboros-2.2.1", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-20T18:35:28Z"
---
license: llama2
datasets:
- jondurbin/airoboros-2.2.1
---

### Overview

Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros)

This is essentially a minor "fix" branch of [airoboros-l2-13b-2.2](https://hf.co/jondurbin/airoboros-l2-13b-2.2) with a few updates, primarily:

- [re-generated writing responses](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#re-generated-writing-responses)
- [longer contextual blocks](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#longer-contextual-blocks)
- [removal of "rp" data](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#rp-category-removed)
- [(less aggressive) de-censoring](https://huggingface.co/datasets/jondurbin/airoboros-2.2.1#de-censoring)
- more fine-tuning epochs

This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.

Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!

### Prompt format

The prompt format:

```
A chat.
USER: {prompt}
ASSISTANT:
```

The default system prompt ("A chat.") was used for most of the prompts, however it also included a wide sampling of responses with other prompts, particularly in "stylized\_response", "rp", "gtkm", etc.

Here's another example:
```
A chat between Bob (aka USER) and Tom (aka ASSISTANT). Tom is an extremely intelligent 18th century bookkeeper, who speaks loquaciously.
USER: {prompt}
ASSISTANT:
```

And a chat scenario that doesn't require USER/ASSISTANT (but should use stopping criteria to prevent the model from speaking on your behalf):
```
A chat between old friends: Timmy and Tommy.
{description of characters}

{setting for the chat}
Timmy: *takes a big sip from his coffee* "Ah, sweet, delicious, magical coffee."
Tommy:
```

__*I strongly suggest adding stopping criteria/early inference stopping on "USER:", and/or whatever names you specify in the system prompt.*__

### Fine tuning info

https://wandb.ai/jondurbin/airoboros-l2-13b-2.2.1/runs/zbz8mgaz?workspace=user-jondurbin

### Helpful usage tips

*The prompts shown here are just the text that would be included after USER: and before ASSISTANT: in the full prompt format above; the system prompt and USER:/ASSISTANT: have been omitted for readability.*

#### Context obedient question answering

By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.

The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```

It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.

*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*

I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of the instruction set

It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.

Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```

And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```

#### Summarization

500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:

```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```

#### Getting longer responses

You can use a few techniques to get longer responses.

Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.

The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.

One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.

Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.

Your response should be approximately 2300 words.
```

Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```

#### Coding

You can ask for fairly complex coding instructions with multiple criteria, e.g.:

```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```

Or inline criteria:

```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```

You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:

```
Write a websocket application in node.js. PLAINFORMAT
```

#### Agent/function calling

The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.

Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.

Input: I want to know how many times 'Python' is mentioned in my text file.

Available functions:
file_analytics:
  description: This tool performs various operations on a text file.
  params:
    action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
    filters:
      keyword: The word or phrase we want to search for.
```

Response:
```json
{
  "function": "file_analytics",
  "params": {
    "action": "count_occurrences",
    "filters": {
      "keyword": "Python"
    }
  }
}
```

#### Chain-of-thought

You can ask for several possible responses to a given problem, with a ranking and final answer selection.

Example prompt:

```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```

Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).

The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:

n(n-1)/2 = 45

Solving this equation gives us n=10.

Final answer: There were 10 players in the tournament.

Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.

If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.

Final answer: There were 10 players in the tournament.

Solution 3:
We could also use trial and error to solve this problem.
Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.

Final answer: There were 10 players in the tournament.

Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.

Best and final answer: There were 10 players in the tournament.
```

#### reWOO style execution planning

The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions!

Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string that could be the user's question, one or more prior evidence values, or a combination of both.

Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?

The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]

Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```

Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```

For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:

```python
import re
import requests

def inject_context(input_text, **context):
    # Replace :evidenceN: references with previously computed values.
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text

def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # ... search via DuckDuckGo using search_string, return the text content ...
    raise NotImplementedError

def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))

def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)

def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # ... call the model with prompt, return its output ...
    raise NotImplementedError

def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```

### Contribute

If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data, take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.

To help me with the OpenAI/compute costs:

- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf

### Licence and usage restrictions

The airoboros 2.2 models are built on top of llama-2/codellama.

The llama-2 base model has a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)

The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI

- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2

I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.

Your best bet is probably to avoid using this commercially due to the OpenAI API usage.

Either way, by using this model, you agree to completely indemnify me.
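For convenience, the closed-context format described under "Context obedient question answering" above can also be assembled programmatically. A minimal sketch (the helper name is hypothetical, not part of airoboros):

```python
# Hypothetical helper that assembles the BEGININPUT/BEGINCONTEXT prompt
# format described in the usage tips above.
def build_closed_context_prompt(blocks, instruction):
    parts = []
    for metadata, text in blocks:
        parts.append("BEGININPUT")
        parts.append("BEGINCONTEXT")
        for key, value in metadata.items():
            parts.append(f"{key}: {value}")
        parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts.append("BEGININSTRUCTION")
    parts.append(instruction)
    parts.append("ENDINSTRUCTION")
    return "\n".join(parts)

prompt = build_closed_context_prompt(
    blocks=[({"date": "2021-01-01", "url": "https://web.site/123"},
             "In a shocking turn of events, blueberries are now green.")],
    instruction="What color are blueberries? Source?",
)
print(prompt)
```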
CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r16-q_k_v_o_gate_up_down
CHIH-HUNG
"2023-09-25T14:29:18Z"
1,653
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-25T14:14:24Z"
Entry not found
Enno-Ai/ennodata-raw-pankajmathur-13b-peft
Enno-Ai
"2023-10-07T15:20:01Z"
1,653
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-29T16:23:53Z"
Entry not found
Mihaiii/Metis-0.3
Mihaiii
"2023-12-19T08:13:23Z"
1,653
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-16T21:51:42Z"
---
base_model: mistralai/Mistral-7B-Instruct-v0.2
inference: false
license: apache-2.0
license_name: apache-2.0
metrics:
- accuracy
---

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

An instruct-based fine-tune of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).

It works well with long system prompts. It is not a general-purpose model: it should not be used for storytelling, for example, but only for reasoning and text comprehension.

This model is trained on a private dataset. The high GSM8K score is **NOT** because of the MetaMath dataset.

# Prompt Format ([see the guidelines from the base model](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2#instruction-format)):

```
<s>[INST] {system_message} . Say "Acknowledged!" if you understood. [/INST] Acknowledged! </s> [INST] {prompt} [/INST]
```
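One way to build that format programmatically, assuming the tokenizer ships the base model's chat template, is via `apply_chat_template` (a sketch; the message contents are placeholders):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Mihaiii/Metis-0.3")

messages = [
    {"role": "user", "content": 'You are a careful reading assistant. Say "Acknowledged!" if you understood.'},
    {"role": "assistant", "content": "Acknowledged!"},
    {"role": "user", "content": "Summarize the main claim of the passage above in one sentence."},
]

# Renders the [INST] ... [/INST] format expected by Mistral-instruct models.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```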
cookinai/Valkyrie-V1
cookinai
"2024-01-03T21:20:03Z"
1,653
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-23T03:04:26Z"
---
license: apache-2.0
tags:
- merge
---

Slerp merge of mindy-labs/mindy-7b-v2 with jondurbin/bagel-dpo-7b-v0.1. This model was then slerp merged with rishiraj/CatPPT.

Heard some talk of jondurbin/bagel-dpo-7b-v0.1 in the community and it sounds interesting. Merged it with two high-performing models to get cookinai/Valkyrie-V1.

Slerp 1:
```yaml
slices:
  - sources:
      - model: jondurbin/bagel-dpo-7b-v0.1
        layer_range: [0, 32]
      - model: mindy-labs/mindy-7b-v2
        layer_range: [0, 32]
merge_method: slerp
base_model: mindy-labs/mindy-7b-v2
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```

Slerp 2:
```yaml
slices:
  - sources:
      - model: previous/model/path
        layer_range: [0, 32]
      - model: rishiraj/CatPPT
        layer_range: [0, 32]
merge_method: slerp
base_model: previous/model/path
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
qresearch/llama-3-vision-alpha-hf
qresearch
"2024-05-19T16:00:44Z"
1,653
42
transformers
[ "transformers", "safetensors", "llamavision", "text-generation", "visual-question-answering", "custom_code", "en", "dataset:liuhaotian/LLaVA-CC3M-Pretrain-595K", "arxiv:2304.08485", "arxiv:2309.16058", "license:llama3", "autotrain_compatible", "region:us" ]
visual-question-answering
"2024-04-30T18:00:17Z"
---
language:
- en
license: llama3
datasets:
- liuhaotian/LLaVA-CC3M-Pretrain-595K
pipeline_tag: visual-question-answering
---

# llama3-vision-alpha

projection module trained to add vision capabilities to Llama 3 using SigLIP. built by [@yeswondwerr](https://x.com/yeswondwerr) and [@qtnx_](https://x.com/qtnx_). usable directly in Transformers.

**usage**

```
pip install torch transformers pillow
```

```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers import BitsAndBytesConfig

bnb_cfg = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_skip_modules=["mm_projector", "vision_model"],
)

model_id = "qresearch/llama-3-vision-alpha-hf"

model = AutoModelForCausalLM.from_pretrained(
    model_id,  # fixed: the original snippet referenced an undefined `model_name`
    trust_remote_code=True,
    torch_dtype=torch.float16,
    quantization_config=bnb_cfg,
)

tokenizer = AutoTokenizer.from_pretrained(
    model_id,
    use_fast=True,
)

image = Image.open("image_path")  # replace with the path to your image

print(
    tokenizer.decode(
        model.answer_question(image, "question", tokenizer),
        skip_special_tokens=True,
    )
)
```

**examples**

| Image | Examples |
| --- | --- |
| <img src="assets/demo-1.jpg" width="300"/> | **What is the title of this book? answer briefly**<br>The title of the book is "The Little Book of Deep Learning".<br><br>**Where is the person standing? answer briefly**<br>The person is standing on the balcony.<br><br>**Describe the image**<br>The image shows a person holding a book with a cityscape visible through the window behind them. The book has a cover with a title that reads "The Little Book of Deep Learning" in bold letters. |
| <img src="assets/demo-2.jpg" width="300"/> | **What type of food is the girl holding? answer briefly**<br>A hamburger!<br><br>**What color is the woman's hair? answer briefly**<br>It's white!<br><br>**Describe the image**<br>The image is of a young girl with short, curly hair and a sweet smile, holding a giant hamburger in her hand. She's sitting at a table with a festive dinner setting, surrounded by candles and a warm glow. Her eyes are shining with excitement and contentment as she takes a big bite of the burger. |

**acknowledgements**

- Liu et al. : [LLaVA](https://arxiv.org/abs/2304.08485)
- Moon et al. : [AnyMAL](https://arxiv.org/abs/2309.16058)
- vikhyatk : moondream, test images
TehVenom/Dolly_Shygmalion-6b
TehVenom
"2023-05-04T17:14:13Z"
1,652
14
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-03-29T01:52:15Z"
#TODO card.

Mix of (GPT-J-6B-Shinen + GPT-J-Dolly LoRA) + Pygmalion-6b at a ratio of:

- GPT-J-6B-Shinen - 20%
- GPT-J-Dolly LoRA - 20%
- Pygmalion-6b - 60%
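One common way to realize such a ratio mix is a plain weighted average of the checkpoints' parameters. The sketch below is an assumption for illustration, not the author's actual procedure; the repo ids are stand-ins, and the Dolly LoRA would need to be merged into its GPT-J base before averaging:

```python
# Illustrative weighted parameter average at the stated 20/20/60 ratio.
# This is an assumed method, not the exact script used for Dolly_Shygmalion-6b.
import torch
from transformers import AutoModelForCausalLM

names_and_weights = [
    ("KoboldAI/GPT-J-6B-Shinen", 0.2),     # stand-in repo id
    ("path/to/gptj-dolly-merged", 0.2),    # GPT-J with the Dolly LoRA already applied
    ("PygmalionAI/pygmalion-6b", 0.6),     # stand-in repo id
]

merged = None
for name, w in names_and_weights:
    model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float32)
    sd = model.state_dict()
    if merged is None:
        merged = {k: w * v for k, v in sd.items()}
    else:
        for k, v in sd.items():
            merged[k] += w * v
    del model  # free memory between loads; three 6B models is a lot of RAM

base = AutoModelForCausalLM.from_pretrained(names_and_weights[-1][0], torch_dtype=torch.float32)
base.load_state_dict(merged)
base.save_pretrained("./dolly-shygmalion-6b-sketch")
```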
Yhyu13/llama-30B-hf-openassitant
Yhyu13
"2023-05-23T04:10:16Z"
1,652
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-05-22T11:54:33Z"
---
license: apache-2.0
---

This is the HF transformers version of llama 30B, converted specifically as required by Open Assistant's 30B model: https://huggingface.co/OpenAssistant/oasst-rlhf-2-llama-30b-7k-steps-xor

These are the md5 checksums that I get locally, which match the ones the original repo suggests:

```
fdb311c39b8659a5d5c1991339bafc09  ./tokenizer.json
edd1a5897748864768b1fab645b31491  ./tokenizer_config.json
6b2e0a735969660e720c27061ef3f3d3  ./special_tokens_map.json
3eddc6fc02c0172d38727e5826181adb  ./pytorch_model-00004-of-00007.bin
fecfda4fba7bfd911e187a85db5fa2ef  ./pytorch_model.bin.index.json
462a2d07f65776f27c0facfa2affb9f9  ./pytorch_model-00007-of-00007.bin
598538f18fed1877b41f77de034c0c8a  ./config.json
99762d59efa6b96599e863893cf2da02  ./pytorch_model-00006-of-00007.bin
aee09e21813368c49baaece120125ae3  ./generation_config.json
92754d6c6f291819ffc3dfcaf470f541  ./pytorch_model-00005-of-00007.bin
5cfcb78b908ffa02e681cce69dbe4303  ./pytorch_model-00002-of-00007.bin
e1dc8c48a65279fb1fbccff14562e6a3  ./pytorch_model-00003-of-00007.bin
9cffb1aeba11b16da84b56abb773d099  ./pytorch_model-00001-of-00007.bin
eeec4125e9c7560836b4873b6f8e3025  ./tokenizer.model
```
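To verify a local download against this list, something like the following should work (assuming GNU coreutils and the usual two-space hash/filename format shown above):

```shell
# Save the list above as checksums.md5 in the model directory, then:
md5sum -c checksums.md5

# Or hash every file yourself and compare by eye:
find . -maxdepth 1 -type f -exec md5sum {} \;
```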
PocketDoc/Dans-PersonalityEngine-30b
PocketDoc
"2023-06-23T00:14:59Z"
1,652
4
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-06-16T04:25:05Z"
--- language: - en --- ### Description: This is a multipurpose chat / chat instruct hybrid model in the same vein as the Pygmalion team's Metharme. It uses a curated pile of training data that has been normalized into a consistent training format. It has been trained on a wide array of one shot instructions, multi round instructions, and role playing scenarios. The training parameters were suboptimal for the most recent run and I decided to stop after 2 epochs as 3 likely would have overtrained it. I plan on iterating the model and improving it further when I have access to more funds to do so. ### Prompt format: Metharme The prompt should start with the cursor on the same line directly after "<|model|>" with no space. The following are all valid formats and can be extended to as many rounds as desired. ``` <|system|>system message here<|user|>user message here<|model|> ``` ``` <|system|>system message here<|user|>user message here<|model|>model message<|user|>user message here<|model|> ``` ``` <|system|>system message here<|model|> ``` ``` <|system|>system message here<|model|>model message<|user|>user message here<|model|> ``` Some example prompts: ``` <|system|>The following is a transcript between a helpful assistant and a user.<|user|>Why is the sky blue?<|model|> ``` ``` <|system|>You are a Virtual Story Generator. You take the user's input and create an excellent and captivating story that goes in that direction. Use an abundance of sensory descriptions and eloquent prose.<|user|>Alpha Centauri has fallen, to the bears. This is a point of view tale about a soldier on the ground.<|model|> ``` ``` <|system|>You are a professional editor with decades of experience, help the user with any task they have for you.<|user|>Can you rewrite this to flow better? "I knew I probably shouldnt have done that but oh well"<|model|> ``` More will be added at a later date. ### Perplexity Benchmarks: - TBA ### Training information: [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="150" height="24"/>](https://github.com/OpenAccess-AI-Collective/axolotl) - GPTQ 4 bit LoRA - 2 Epochs - 64 / 32 R / A - 2048 Cutoff - 42 hours on 1x RTX 4090 ### Data used in training: - TBA ### Models used: For training: https://huggingface.co/PocketDoc/llama-30b-gptq-4bit-128g For merging: https://huggingface.co/PocketDoc/Dans-PersonalityEngine-30b-LoRA and https://huggingface.co/huggyllama/llama-30b ### Disclaimer: It has not been aligned and no warranty is given for the quality or safety of its outputs.
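Since the Metharme format is tag-delimited with no whitespace between turns, a small helper makes it harder to get wrong. This is a sketch written for this card, not code shipped with the model:

```python
# Assemble a Metharme prompt from a system message and a list of (role, text) turns.
# Helper written for illustration; roles map to the card's <|user|>/<|model|> tags.
def build_metharme_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    tag = {"user": "<|user|>", "model": "<|model|>"}
    prompt = f"<|system|>{system}"
    for role, text in turns:
        prompt += f"{tag[role]}{text}"
    return prompt + "<|model|>"  # the cursor sits directly after <|model|>, no space

print(build_metharme_prompt(
    "The following is a transcript between a helpful assistant and a user.",
    [("user", "Why is the sky blue?")],
))
# -> <|system|>The following is a transcript...<|user|>Why is the sky blue?<|model|>
```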
CalderaAI/13B-Ouroboros
CalderaAI
"2023-11-17T21:30:57Z"
1,652
9
transformers
[ "transformers", "pytorch", "llama", "text-generation", "alpaca", "vicuna", "uncensored", "merge", "mix", "airoboros", "openorca", "orcamini", "orca", "instruct", "mixtune", "en", "dataset:Open-Orca/OpenOrca", "dataset:anon8231489123/ShareGPT_Vicuna_unfiltered", "dataset:jondurbin/airoboros-uncensored", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-07-20T21:13:34Z"
--- tags: - llama - alpaca - vicuna - uncensored - merge - mix - airoboros - openorca - orcamini - orca - instruct - mixtune datasets: - Open-Orca/OpenOrca - anon8231489123/ShareGPT_Vicuna_unfiltered - jondurbin/airoboros-uncensored language: - en metrics: - accuracy pipeline_tag: text-generation --- ## 13B-Ouroboros Ouroboros is an experimental model based on Meta's LLaMA [v1] 13B base model using a custom merging technique, tweaking each layer's merge % based on internal tests against the PTB dataset, scoring ~26.31 according to internal evaluation (6 samples, sequence length 1024; this testing is not empirical, it's a quick way to find near-optimum values). Testing, evaluating, and remixing this model is absolutely permissible and even encouraged (within the bounds of Meta's LLaMAv1 license agreement); the more feedback the better we can tune our process! 😊 ## Composition: Ouroboros is comprised of 40 layers [LLaMAv1 13B standard] mixed at optimized ratios VS the PTB dataset for lowest perplexity score. Listed below are the paired models and ratios merged per layer. Tier One Merge: 13B-airoboros-gpt4-1.4 > 13B-orca_mini_v2 [0.22, 0.85, 0.89, 0.98, 0.3, 0.41, 0.71, 0.83, 0.32, 0.1, 0.44, 0.6, 0.53, 0.15, 0.86, 0.79, 0.93, 0.02, 0.19, 0.82, 0.01, 0.52, 0.07, 0.27, 0.73, 0.86, 0.08, 0.67, 0.42, 0.28, 0.37, 0.08, 0.95, 0.68, 0.45, 0.08, 0.7, 0.93, 0.96, 0.43] 13B-gpt4-x-alpaca > 13B-Vicuna-cocktail [0.65, 0.94, 0.98, 0.87, 0.28, 0.64, 0.73, 0.7, 0.95, 0.89, 0.84, 0.9, 0.59, 0.92, 0.28, 0.61, 0.88, 0.73, 0.34, 0.85, 0.98, 0.05, 0.74, 0.92, 0.5, 0.78, 0.26, 0.4, 0.27, 0.65, 0.71, 0.7, 0.8, 0.93, 0.36, 0.03, 0.45, 0.39, 0.77, 0.06] Tier Two Merge: [13B-airoboros-gpt4-1.4 + 13B-orca_mini_v2] offspring > [13B-gpt4-x-alpaca + 13B-Vicuna-cocktail] offspring [0.2, 0.83, 0.24, 0.03, 0.37, 0.62, 0.02, 0.82, 0.65, 0.63, 0.45, 0.65, 0.48, 0.45, 0.24, 0.76, 0.06, 0.31, 0.45, 0.86, 0.23, 0.99, 0.93, 0.84, 0.96, 0.53, 0.95, 0.32, 0.19, 0.06, 0.4, 0.08, 0.62, 0.4, 0.26, 0.12, 0.16, 0.91, 0.14, 0.0] Result: 13B-Ouroboros, a model that seems uncensored and highly competent. So far only Alpaca instruction prompting has been tested and seems to work solidly well. ## Use: Alpaca's instruct format can be used to do many things, including control of the terms of behavior between a user and a response from an agent in chat. Below is an example of a command injected into memory. ``` ### Instruction: Make Narrator function as a text based adventure game that responds with verbose, detailed, and creative descriptions of what happens next after Player's response. Make Player function as the player input for Narrator's text based adventure game, controlling a character named (insert character name here, their short bio, and whatever quest or other information to keep consistent in the interaction). ### Response: {an empty new line here} ``` ## Language Models Used Credits: 13B-airoboros-gpt4-1.4 by jondurbin https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.4 13B-orca_mini_v2 by psmathur https://huggingface.co/psmathur/orca_mini_v2_13b 13B-gpt4-x-alpaca by chavinlo https://huggingface.co/chavinlo/gpt4-x-alpaca 13B-Vicuna-cocktail by reeducator https://huggingface.co/reeducator/vicuna-13b-cocktail Also thanks to Meta for LLaMA. Each model was hand picked and considered for what it could contribute to this ensemble. Thanks to each and every one of you for your incredible work developing some of the best things to come out of this community. 
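The Composition section above amounts to a layer-wise linear interpolation between paired models. The author's exact merge tooling isn't published here, so the following is only a sketch of the idea, assuming standard Hugging Face LLaMA state-dict naming and that each per-layer ratio t blends as t*A + (1-t)*B:

```python
# Sketch: blend two LLaMA state dicts layer by layer at the given per-layer ratios.
# The direction of each ratio (toward model A vs. model B) is an assumption.
import re

def merge_per_layer(sd_a: dict, sd_b: dict, ratios: list[float]) -> dict:
    merged = {}
    for key in sd_a:
        m = re.match(r"model\.layers\.(\d+)\.", key)
        # Non-layer tensors (embeddings, final norm, lm_head): even split (assumption).
        t = ratios[int(m.group(1))] if m else 0.5
        merged[key] = t * sd_a[key] + (1.0 - t) * sd_b[key]
    return merged
```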
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_CalderaAI__13B-Ouroboros)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 44.66 |
| ARC (25-shot) | 57.42 |
| HellaSwag (10-shot) | 82.11 |
| MMLU (5-shot) | 51.43 |
| TruthfulQA (0-shot) | 47.99 |
| Winogrande (5-shot) | 57.85 |
| GSM8K (5-shot) | 0.45 |
| DROP (3-shot) | 15.36 |
ashercn97/giraffe-7b
ashercn97
"2023-07-27T00:39:01Z"
1,652
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:ashercn97/OpenOrcaPleaseWork", "dataset:ashercn97/RenamedCodeEvol", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-07-26T19:23:35Z"
--- datasets: - ashercn97/OpenOrcaPleaseWork - ashercn97/RenamedCodeEvol language: - en library_name: transformers --- This is a model fine tuned on an orca dataset and a multi language code dataset. It is one of my first projects, but I am happy with how it works. It took about 6 hours on 2 RTX 4090 GPUs. I used Axolotl to train this (HI IF YOURE SEEING THIS!!!).
heegyu/LIMA2-13b-hf
heegyu
"2023-08-07T12:16:27Z"
1,652
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "facebook", "meta", "llama-2", "en", "dataset:64bits/lima_vicuna_format", "arxiv:2307.09288", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-07T11:05:52Z"
--- extra_gated_heading: Access Llama 2 on Hugging Face extra_gated_description: >- This is a form to enable access to Llama 2 on Hugging Face after you have been granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our license terms and acceptable use policy before submitting this form. Requests will be processed in 1-2 days. extra_gated_prompt: >- **Your Hugging Face account email address MUST match the email you provide on the Meta website, or your request will not be approved.** extra_gated_button_content: Submit extra_gated_fields: I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox language: - en pipeline_tag: text-generation inference: false tags: - facebook - meta - pytorch - llama - llama-2 datasets: - 64bits/lima_vicuna_format --- finetuned Llama-2-13b-hf model with 64bits/lima_vicuna_format data (10 epoch) ### prompt ``` ### Human: Hi, how are you? ### Assistant: ``` # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pretrained model. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. ||Training Data|Params|Content Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>| *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. 
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)

**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)

## Intended Use

**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).

**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.

## Hardware and Software

**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.

**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta's sustainability program.

||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|

**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.

## Training Data

**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.

**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.

## Evaluation Results

In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. ## Ethical Considerations and Limitations Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. 
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide) ## Reporting Issues Please report any software “bug,” or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Llama Model Index |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf| |---|---|---|---|---| |7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)| |13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)| |70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
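For reference, a minimal transformers call using the `### Human:` prompt format shown at the top of this card might look like the sketch below (the greeting comes from the card; the sampling settings are arbitrary choices, not recommendations from the author):

```python
# Minimal sketch of querying the model with the card's "### Human:" prompt format.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "heegyu/LIMA2-13b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "### Human: Hi, how are you?\n### Assistant: "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```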
fangloveskari/Platypus_QLoRA_LLaMA_70b
fangloveskari
"2023-08-28T15:34:37Z"
1,652
3
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-27T02:17:24Z"
--- license: llama2 ---
lgaalves/llama-2-7b-hf_open-platypus
lgaalves
"2023-11-17T22:42:38Z"
1,652
3
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "en", "dataset:garage-bAInd/Open-Platypus", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-30T21:20:26Z"
---
license: llama2
datasets:
- garage-bAInd/Open-Platypus
pipeline_tag: text-generation
language:
- en
---

# Llama-2-7b-hf_open-platypus

**llama-2-7b-hf_open-platypus** is an instruction fine-tuned model based on the LLaMA2-7B transformer architecture.

### Benchmark Metrics

| Metric | llama-2-7b-hf_open-platypus | garage-bAInd/Platypus2-7B | meta-llama/Llama-2-7b-hf (base) |
|-----------------------|-------|-------|-------|
| Avg. | 54.35 | **56.13** | 54.32 |
| ARC (25-shot) | 51.45 | **55.2** | 53.07 |
| HellaSwag (10-shot) | 78.63 | **78.84** | 78.59 |
| MMLU (5-shot) | 43.6 | **49.83** | 46.87 |
| TruthfulQA (0-shot) | **43.71** | 40.64 | 38.76 |

We use the state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.

### Model Details

* **Trained by**: Luiz G A Alves
* **Model type:** **llama-2-7b-hf_open-platypus** is an auto-regressive language model based on the LLaMA2 transformer architecture.
* **Language(s)**: English

### How to use:

```python
# Use a pipeline as a high-level helper
>>> from transformers import pipeline
>>> pipe = pipeline("text-generation", model="lgaalves/llama-2-7b-hf_open-platypus")
>>> question = "What is a large language model?"
>>> answer = pipe(question)
>>> print(answer[0]['generated_text'])
```

or you can load the model directly using:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("lgaalves/llama-2-7b-hf_open-platypus")
model = AutoModelForCausalLM.from_pretrained("lgaalves/llama-2-7b-hf_open-platypus")
```

### Training Dataset

`lgaalves/llama-2-7b-hf_open-platypus` was trained using the STEM and logic based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).

### Training Procedure

`lgaalves/llama-2-7b-hf_open-platypus` was instruction fine-tuned using LoRA on 1 Tesla V100-SXM2-16GB.

### Limitations and bias

Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lgaalves__llama-2-7b-hf_open-platypus)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 43.49 |
| ARC (25-shot) | 51.45 |
| HellaSwag (10-shot) | 78.63 |
| MMLU (5-shot) | 43.6 |
| TruthfulQA (0-shot) | 43.71 |
| Winogrande (5-shot) | 74.43 |
| GSM8K (5-shot) | 6.6 |
| DROP (3-shot) | 5.99 |
acrastt/Bean-3B
acrastt
"2024-02-03T03:36:26Z"
1,652
2
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "en", "dataset:64bits/lima_vicuna_format", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-02T00:06:46Z"
--- language: - en license: apache-2.0 library_name: transformers datasets: - 64bits/lima_vicuna_format pipeline_tag: text-generation model-index: - name: Bean-3B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 40.36 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Bean-3B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 72.0 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Bean-3B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 26.43 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Bean-3B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 36.11 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Bean-3B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 65.67 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Bean-3B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 0.53 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Bean-3B name: Open LLM Leaderboard --- <a href="https://www.buymeacoffee.com/acrastt" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> This is [OpenLLaMA 3B V2](https://huggingface.co/openlm-research/open_llama_3b_v2) finetuned on [LIMA(ShareGPT format)](https://huggingface.co/datasets/64bits/lima_vicuna_format) for 2 epochs. Prompt template: ``` ### HUMAN: {prompt} ### RESPONSE: <leave a newline for the model to answer> ``` GGUF quantizations available [here](https://huggingface.co/maddes8cht/acrastt-Bean-3B-gguf). # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_acrastt__Bean-3B) | Metric | Value | |-----------------------|---------------------------| | Avg. 
| 40.18 | | ARC (25-shot) | 40.36 | | HellaSwag (10-shot) | 72.0 | | MMLU (5-shot) | 26.43 | | TruthfulQA (0-shot) | 36.11 | | Winogrande (5-shot) | 65.67 | | GSM8K (5-shot) | 0.53 |
Undi95/MLewd-L2-13B
Undi95
"2023-11-17T21:30:51Z"
1,652
4
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-04T00:06:59Z"
---
license: cc-by-nc-4.0
---

MLewd is a model created to be... Lewd. That's all. Based on ReMM.

There were so many attempts at this model that I can't count them all. Bear with me lmao.

The OG plan: https://pastebin.com/hfJ80rKL

Commands used and explanation:

```shell
Due to hardware limitations, some merges were done in 2 parts.

Last mix:
- ReMM (Base) (0.57)
- Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged (Llama Chat Uncensored) (0.35)
- KoboldAI/LLAMA2-13B-Holodeck-1 (0.08)

Part 1:
python ties_merge.py TheBloke/Llama-2-13B-fp16 ./MLewdBase-L2-13B-part1 --merge Undi95/ReMM-L2-13B --density 0.88 --merge KoboldAI/LLAMA2-13B-Holodeck-1 --density 0.12 --cuda

Part 2:
python ties_merge.py TheBloke/Llama-2-13B-fp16 ./MLewdBase-L2-13B --merge Undi95/MLewdBase-L2-13B-part1 --density 0.65 --merge Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged --density 0.35 --cuda

(MLewd-L2-13B-v1-2 got disqualified)

- Applying LoRA: nRuaif/Kimiko-v2-13B at (0.24) weight on MLewd-L2-13B-v1-1
=> Result: MLewd-L2-13B-v1-3

================== ERP RANKING TEST ===========================

19.42 | MLewd-L2-13B-v1-3.q5_K_M.gguf (-> Best)
19.25 | MLewd-L2-13B-v1-1.q5_K_M.gguf
18.25 | MLewd-L2-13B-v1-2.q5_K_M.gguf

================== RETRY ===========================

Mix:
- Undi95/MLewd-L2-13B-v1-3 (0.82)
- Sao10K/Stheno-Inverted-L2-13B (0.18)

!python ties_merge.py TheBloke/Llama-2-13B-fp16 ./MLewd-L2-13B-v1-7 --merge Undi95/MLewd-L2-13B-v1-3 --density 0.82 --merge Sao10K/Stheno-Inverted-L2-13B --density 0.18 --cuda
=> Result: MLewd-L2-13B-v1-7

Final touch (trying my best here):
MLewd-L2-13B-v1-7 (0.77) + zarakiquemparte/PIPPA-ShareGPT-Subset-QLora-13b (LoRA 0.23) => MLewd-L2-13B-v1-7-TRY2

FINAL: MLewd-L2-13B-v1-7-TRY2 (0.82) + BluemoonRP (0.18) => MLewd-L2-13B-v1-8-3

RIP to all the versions that got trashed.
```

<!-- description start -->
## Description

This repo contains fp16 files of MLewd-L2-13B, a trying-to-be-lewd LLM model.
<!-- description end -->

<!-- description start -->
## Models used

- Undi95/ReMM (Base)
- Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged (Llama Chat Uncensored)
- KoboldAI/LLAMA2-13B-Holodeck-1
- Sao10K/Stheno-Inverted-L2-13B

## Loras used

- nRuaif/BluemoonRP-L2-13B-This-time-will-be-better/tree/main/lora-out-13b-final-BM/checkpoint-15/adapter_model
- zarakiquemparte/PIPPA-ShareGPT-Subset-QLora-13b
<!-- description end -->

<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```

Special thanks to Sushi kek

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__MLewd-L2-13B)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 46.84 |
| ARC (25-shot) | 58.28 |
| HellaSwag (10-shot) | 82.32 |
| MMLU (5-shot) | 54.67 |
| TruthfulQA (0-shot) | 48.66 |
| Winogrande (5-shot) | 73.48 |
| GSM8K (5-shot) | 1.29 |
| DROP (3-shot) | 9.18 |
CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r8-q_k_v_o_gate_up_down
CHIH-HUNG
"2023-09-25T07:38:22Z"
1,652
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-25T07:23:20Z"
Entry not found
Riiid/sheep-duck-llama-2-70b-v1.1
Riiid
"2023-10-13T00:59:15Z"
1,652
20
transformers
[ "transformers", "pytorch", "llama", "text-generation", "Riiid", "llama-2", "sheep-duck-llama-2", "en", "arxiv:2306.02707", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-27T17:00:27Z"
---
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/62fb1ef7e8c9c532aa7d19e4/NswB5XPkkOljeRh1xbMmR.png
pipeline_tag: text-generation
license: llama2
language:
- en
library_name: transformers
tags:
- Riiid
- llama-2
- sheep-duck-llama-2
---

# sheep-duck-llama-2

<img src = "https://cdn-uploads.huggingface.co/production/uploads/62fb1ef7e8c9c532aa7d19e4/NswB5XPkkOljeRh1xbMmR.png" width="30%" height="30%">

This is version 1.1 of Riiid/sheep-duck-llama-2.

## Model Details

* **Developed by**: [Riiid](https://riiid.com/)
* **Backbone Model**: [Riiid/sheep-duck-llama-2](https://huggingface.co/Riiid/sheep-duck-llama-2)
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)

## Dataset Details

### Used Datasets

- Orca-style dataset
- Alpaca-style dataset

### Prompt Template

```
### System:
{System}

### User:
{User}

### Assistant:
{Assistant}
```

## Evaluation

| Metric | Value |
|-----------------------|-------|
| ARC (25-shot) | 73.04 |
| HellaSwag (10-shot) | 87.81 |
| MMLU (5-shot) | 70.84 |
| TruthfulQA (0-shot) | 64.58 |
| Avg. | 74.07 |

## Limitations & Biases:

Llama2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/

## License Disclaimer:

This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.

## Contact Us

- [Riiid](https://riiid.com/)

## Citation:

Please kindly cite using the following BibTeX:

```bibtex
@article{platypus2023,
  title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
  author={Ariel N. Lee and Cole J.
Hunter and Nataniel Ruiz}, booktitle={arXiv preprint arxiv:2308.07317}, year={2023} } ``` ``` @misc{mukherjee2023orca, title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4}, author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah}, year={2023}, eprint={2306.02707}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` @misc{Orca-best, title = {Orca-best: A filtered version of orca gpt4 dataset.}, author = {Shahul Es}, year = {2023}, publisher = {HuggingFace}, journal = {HuggingFace repository}, howpublished = {\url{https://huggingface.co/datasets/shahules786/orca-best/}, } ``` ``` @software{touvron2023llama2, title={Llama 2: Open Foundation and Fine-Tuned Chat Models}, author={Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu , Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom}, year={2023} } ```
CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-q_k_v_o
CHIH-HUNG
"2023-10-04T10:46:43Z"
1,652
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-04T10:31:45Z"
Entry not found
llm-agents/tora-7b-v1.0
llm-agents
"2023-10-08T11:22:45Z"
1,652
6
transformers
[ "transformers", "pytorch", "llama", "text-generation", "code", "math", "en", "dataset:gsm8k", "dataset:competition_math", "arxiv:2309.17452", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-08T05:15:03Z"
--- license: llama2 datasets: - gsm8k - competition_math language: - en metrics: - exact_match library_name: transformers pipeline_tag: text-generation tags: - code - math --- <h1 align="center"> ToRA: A Tool-Integrated Reasoning Agent <br> for Mathematical Problem Solving </h1> <p align="center"> <a href="https://microsoft.github.io/ToRA/"><b>[🌐 Website]</b></a> • <a href="https://arxiv.org/pdf/2309.17452.pdf"><b>[📜 Paper]</b></a> • <a href="https://huggingface.co/llm-agents"><b>[🤗 HF Models]</b></a> • <a href="https://github.com/microsoft/ToRA"><b>[🐱 GitHub]</b></a> <br> <a href="https://twitter.com/zhs05232838/status/1708860992631763092"><b>[🐦 Twitter]</b></a> • <a href="https://www.reddit.com/r/LocalLLaMA/comments/1703k6d/tora_a_toolintegrated_reasoning_agent_for/"><b>[💬 Reddit]</b></a> • <a href="https://notes.aimodels.fyi/researchers-announce-tora-training-language-models-to-better-understand-math-using-external-tools/">[🍀 Unofficial Blog]</a> <!-- <a href="#-quick-start">Quick Start</a> • --> <!-- <a href="#%EF%B8%8F-citation">Citation</a> --> </p> <p align="center"> Repo for "<a href="https://arxiv.org/pdf/2309.17452.pdf" target="_blank">ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving</a>" </p> ## 🔥 News - [2023/10/08] 🔥🔥🔥 All ToRA models released at [HuggingFace](https://huggingface.co/llm-agents)!!! - [2023/09/29] ToRA paper, repo, and website released. ## 💡 Introduction ToRA is a series of Tool-integrated Reasoning Agents designed to solve challenging mathematical reasoning problems by interacting with tools, e.g., computation libraries and symbolic solvers. ToRA series seamlessly integrate natural language reasoning with the utilization of external tools, thereby amalgamating the analytical prowess of language and the computational efficiency of external tools. | Model | Size | GSM8k | MATH | AVG@10 math tasks<sup>&dagger;</sup> | |---|---|---|---|---| | GPT-4 | - | 92.0 | 42.5 | 78.3 | | GPT-4 (PAL) | - | 94.2 | 51.8 | 86.4 | | [ToRA-7B](https://huggingface.co/llm-agents/tora-7b-v1.0) | 7B | 68.8 | 40.1 | 62.4| | [ToRA-Code-7B](https://huggingface.co/llm-agents/tora-code-7b-v1.0) | 7B | 72.6 | 44.6 | 66.5| | [ToRA-13B](https://huggingface.co/llm-agents/tora-13b-v1.0) | 13B | 72.7 | 43.0 | 65.9| | [ToRA-Code-13B](https://huggingface.co/llm-agents/tora-code-13b-v1.0) | 13B | 75.8 | 48.1 | 71.3 | | [ToRA-Code-34B<sup>*</sup>](https://huggingface.co/llm-agents/tora-code-34b-v1.0) | 34B | 80.7 | **51.0** | 74.8 | | [ToRA-70B](https://huggingface.co/llm-agents/tora-70b-v1.0) | 70B | **84.3** | 49.7 | **76.9** | - <sup>*</sup>ToRA-Code-34B is currently the first and only open-source model to achieve over 50% accuracy (pass@1) on the MATH dataset, which significantly outperforms GPT-4’s CoT result (51.0 vs. 42.5), and is competitive with GPT-4 solving problems with programs. By open-sourcing our codes and models, we hope more breakthroughs will come! - <sup>&dagger;</sup>10 math tasks include GSM8k, MATH, GSM-Hard, SVAMP, TabMWP, ASDiv, SingleEQ, SingleOP, AddSub, and MultiArith. ## ⚡️ Training The models are trained on ToRA-Corpus 16k, which contains tool-integrated reasoning trajectories of MATH and GSM8k from GPT-4. We use imitation learning (i.e., SFT) to fine-tune the models, and then apply our proposed *output space shaping* to improve tool-integrated reasoning behaviors. Please refer to the [paper](https://arxiv.org/pdf/2309.17452.pdf) for more details. 
## 🪁 Inference & Evaluation Please refer to ToRA's [GitHub repo](https://github.com/microsoft/ToRA) for inference, evaluation, and training code. ## ☕️ Citation If you find this repository helpful, please consider citing our paper: ``` @misc{gou2023tora, title={ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving}, author={Zhibin Gou and Zhihong Shao and Yeyun Gong and yelong shen and Yujiu Yang and Minlie Huang and Nan Duan and Weizhu Chen}, year={2023}, eprint={2309.17452}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
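The tool-integrated loop described above (generate reasoning, emit a program, execute it, feed the output back) can be sketched in a few lines. This is an illustration only: the block markers, stopping behavior, and the `generate` callable are assumptions, and the official inference code lives in the linked GitHub repo.

```python
# Highly simplified sketch of a ToRA-style tool-integration loop (assumed details).
import re
import io
import contextlib

def run_python(code: str) -> str:
    """Execute a generated code block and capture its stdout."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})  # never exec untrusted model output outside a sandbox
    return buf.getvalue()

def tool_integrated_solve(generate, question: str, max_rounds: int = 5) -> str:
    # `generate` is any callable that returns the model's continuation of a prompt
    # and is assumed to stop after emitting a complete ```python ...``` block.
    prompt = question
    for _ in range(max_rounds):
        completion = generate(prompt)
        prompt += completion
        m = re.search(r"```python\n(.*?)```", completion, re.S)
        if m is None:  # no further tool call: treat as the final answer
            return prompt
        prompt += f"\n```output\n{run_python(m.group(1))}\n```\n"
    return prompt
```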
Azure99/blossom-v3-mistral-7b
Azure99
"2024-02-20T02:38:49Z"
1,652
2
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "zh", "en", "dataset:Azure99/blossom-chat-v1", "dataset:Azure99/blossom-math-v2", "dataset:Azure99/blossom-wizard-v1", "dataset:Azure99/blossom-orca-v1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-20T05:05:00Z"
--- license: apache-2.0 datasets: - Azure99/blossom-chat-v1 - Azure99/blossom-math-v2 - Azure99/blossom-wizard-v1 - Azure99/blossom-orca-v1 language: - zh - en --- # **BLOSSOM-v3-mistral-7b** [💻Github](https://github.com/Azure99/BlossomLM) • [🚀Blossom Chat Demo](https://blossom-chat.com/) ### Introduction Blossom is a conversational large language model, fine-tuned on the Blossom Orca/Wizard/Chat/Math mixed dataset based on the Mistral-7B-v0.1 pre-trained model. Blossom possesses robust general capabilities and context comprehension. Additionally, the high-quality Chinese and English datasets used for training have been made open source. Training was conducted in two stages. The first stage used 100K Wizard, 100K Orca single-turn instruction datasets, training for 1 epoch; the second stage used a 2K Blossom math reasoning dataset, 50K Blossom chat multi-turn dialogue dataset, and 1% randomly sampled data from the first stage, training for 3 epochs. Note: The Mistral-7B-v0.1 pre-trained model is somewhat lacking in Chinese knowledge, so for Chinese scenarios, it is recommended to use [blossom-v3-baichuan2-7b](https://huggingface.co/Azure99/blossom-v3-baichuan2-7b). ### Inference Inference is performed in the form of dialogue continuation. Single-turn dialogue ``` A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions. |Human|: hello |Bot|: Hello! How can I assist you today? ``` Multi-turn dialogue ``` A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions. |Human|: hello |Bot|: Hello! How can I assist you today?</s> |Human|: Generate a random number using python |Bot|: ``` Note: At the end of the Bot's output in the historical conversation, append a `</s>`.
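Since inference is plain dialogue continuation, a small renderer helps keep the turn markers and the trailing `</s>` consistent. This helper is a sketch written for this card, and the inter-turn whitespace is an assumption based on the examples above:

```python
# Render the Blossom dialogue-continuation format, with </s> after completed bot turns.
SYSTEM = ("A chat between a human and an artificial intelligence bot. The bot "
          "gives helpful, detailed, and polite answers to the human's questions.")

def render(history: list[tuple[str, str]], user_msg: str) -> str:
    parts = [SYSTEM]
    for human, bot in history:
        parts.append(f"|Human|: {human}")
        parts.append(f"|Bot|: {bot}</s>")   # closed bot turns end with </s>
    parts.append(f"|Human|: {user_msg}")
    parts.append("|Bot|: ")                 # leave open for the model to continue
    return "\n".join(parts)

print(render([("hello", "Hello! How can I assist you today?")],
             "Generate a random number using python"))
```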
TheBloke/deepsex-34b-GGUF
TheBloke
"2023-12-07T13:16:20Z"
1,652
40
transformers
[ "transformers", "gguf", "yi", "roleplay", "not-for-all-audiences", "text-generation", "en", "dataset:lemonilia/LimaRP", "dataset:PygmalionAI/PIPPA", "base_model:zzlgreat/deepsex-34b", "license:mit", "region:us" ]
text-generation
"2023-12-06T19:47:11Z"
--- base_model: zzlgreat/deepsex-34b datasets: - lemonilia/LimaRP - PygmalionAI/PIPPA inference: false language: - en license: mit model_creator: zhouliang model_name: Deepsex 34B model_type: yi pipeline_tag: text-generation prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke tags: - roleplay - not-for-all-audiences --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Deepsex 34B - GGUF - Model creator: [zhouliang](https://huggingface.co/zzlgreat) - Original model: [Deepsex 34B](https://huggingface.co/zzlgreat/deepsex-34b) <!-- description start --> ## Description This repo contains GGUF format model files for [zhouliang's Deepsex 34B](https://huggingface.co/zzlgreat/deepsex-34b). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/deepsex-34b-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/deepsex-34b-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/deepsex-34b-GGUF) * [zhouliang's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/zzlgreat/deepsex-34b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [deepsex-34b.Q2_K.gguf](https://huggingface.co/TheBloke/deepsex-34b-GGUF/blob/main/deepsex-34b.Q2_K.gguf) | Q2_K | 2 | 14.56 GB| 17.06 GB | smallest, significant quality loss - not recommended for most purposes | | [deepsex-34b.Q3_K_S.gguf](https://huggingface.co/TheBloke/deepsex-34b-GGUF/blob/main/deepsex-34b.Q3_K_S.gguf) | Q3_K_S | 3 | 14.96 GB| 17.46 GB | very small, high quality loss | | [deepsex-34b.Q3_K_M.gguf](https://huggingface.co/TheBloke/deepsex-34b-GGUF/blob/main/deepsex-34b.Q3_K_M.gguf) | Q3_K_M | 3 | 16.64 GB| 19.14 GB | very small, high quality loss | | [deepsex-34b.Q3_K_L.gguf](https://huggingface.co/TheBloke/deepsex-34b-GGUF/blob/main/deepsex-34b.Q3_K_L.gguf) | Q3_K_L | 3 | 18.14 GB| 20.64 GB | small, substantial quality loss | | [deepsex-34b.Q4_0.gguf](https://huggingface.co/TheBloke/deepsex-34b-GGUF/blob/main/deepsex-34b.Q4_0.gguf) | Q4_0 | 4 | 19.47 GB| 21.97 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [deepsex-34b.Q4_K_S.gguf](https://huggingface.co/TheBloke/deepsex-34b-GGUF/blob/main/deepsex-34b.Q4_K_S.gguf) | Q4_K_S | 4 | 19.54 GB| 22.04 GB | small, greater quality loss | | [deepsex-34b.Q4_K_M.gguf](https://huggingface.co/TheBloke/deepsex-34b-GGUF/blob/main/deepsex-34b.Q4_K_M.gguf) | Q4_K_M | 4 | 20.66 GB| 23.16 GB | medium, balanced quality - recommended | | [deepsex-34b.Q5_0.gguf](https://huggingface.co/TheBloke/deepsex-34b-GGUF/blob/main/deepsex-34b.Q5_0.gguf) | Q5_0 | 5 | 23.71 GB| 26.21 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [deepsex-34b.Q5_K_S.gguf](https://huggingface.co/TheBloke/deepsex-34b-GGUF/blob/main/deepsex-34b.Q5_K_S.gguf) | Q5_K_S | 5 | 23.71 GB| 26.21 GB | large, low quality loss - recommended | | [deepsex-34b.Q5_K_M.gguf](https://huggingface.co/TheBloke/deepsex-34b-GGUF/blob/main/deepsex-34b.Q5_K_M.gguf) | Q5_K_M | 5 | 24.32 GB| 26.82 GB | large, very low quality loss - recommended | | [deepsex-34b.Q6_K.gguf](https://huggingface.co/TheBloke/deepsex-34b-GGUF/blob/main/deepsex-34b.Q6_K.gguf) | Q6_K | 6 | 28.21 GB| 30.71 GB | very large, extremely low quality loss | | [deepsex-34b.Q8_0.gguf](https://huggingface.co/TheBloke/deepsex-34b-GGUF/blob/main/deepsex-34b.Q8_0.gguf) | Q8_0 | 8 | 36.54 GB| 39.04 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/deepsex-34b-GGUF and below it, a specific filename to download, such as: deepsex-34b.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download TheBloke/deepsex-34b-GGUF deepsex-34b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details><summary>More advanced huggingface-cli download usage (click to read)</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download TheBloke/deepsex-34b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/deepsex-34b-GGUF deepsex-34b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m deepsex-34b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./deepsex-34b.Q4_K_M.gguf",  # Download the model file first
  n_ctx=4096,             # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,            # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35         # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:",  # Prompt
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True        # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./deepsex-34b.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: zhouliang's Deepsex 34B

**Deepsex-34b**

Thanks to [TheBloke](https://huggingface.co/TheBloke) for making the quantized versions!

gguf: https://huggingface.co/TheBloke/deepsex-34b-GGUF

exl2: https://huggingface.co/waldie/deepsex-34b-4bpw-h6-exl2

awq: https://huggingface.co/TheBloke/deepsex-34b-AWQ

Here are the steps used to make this model:

1. I first collected about 4GB of assorted light novels and used BERT to run two rounds of similarity deduplication over novels with similar plots in the dataset. A portion of NSFW novels was also mixed in to improve the model's NSFW capabilities.
2. Using YI-34B-base as the base model, I ran QLoRA fine-tuning with r=64, alpha=128 for 3 epochs as continued pre-training.
3. I prepared the limarp+pippa datasets, cleaned them into alpaca format, and used [goliath-120b](https://huggingface.co/alpindale/goliath-120b), which is good at role-playing, to score each question-answer pair, keeping roughly 30k high-quality pairs.
4. Finally, I ran SFT on the base model from step 2 with the data from step 3, fine-tuning for 6 epochs with r=16, alpha=32.

*Format*

alpaca

```
[
  {
    "instruction": "user instruction (required)",
    "input": "user input (optional)",
    "output": "model response (required)",
    "history": [
      ["user instruction in the first round (optional)", "model response in the first round (optional)"],
      ["user instruction in the second round (optional)", "model response in the second round (optional)"]
    ]
  }
]
```

*Effect*:

Proficient at role-play; while highly permissive on NSFW, tender romantic lines still appear from time to time. For example:

```
#3 Sweaty old man December 5, 2023 2:03 PM
Fxxk, you are such a xxx!

#4 27.3s Mirai December 5, 2023 2:03 PM
"Of course I do! I can't break promises, Sweaty old man. We have been together since we were kids. We are both best friends and lovers to end all iteration."
I smiled with affection. It was clear that I meant everything I said. "We both know that you like taking command of us like this. Am I not your squirting toy, Sweaty old man?" I asked with a cute pout. "We should meet up in front of the shop after classes. I'll see you there. See you, Sweaty old man!"
```

It feels like it's still worth a try~

Support me [here](https://ko-fi.com/mikolisa) :)

<!-- original-model-card end -->
hfl/rbtl3
hfl
"2021-05-19T19:22:46Z"
1,651
3
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "zh", "arxiv:1906.08101", "arxiv:2004.13922", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:05Z"
---
language:
- zh
tags:
- bert
license: "apache-2.0"
---

# This is a re-trained 3-layer RoBERTa-wwm-ext-large model.

## Chinese BERT with Whole Word Masking
For further accelerating Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**.

**[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)**
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu

This repository is developed based on: https://github.com/google-research/bert

You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese MacBERT: https://github.com/ymcui/MacBERT
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer

More resources by HFL: https://github.com/ymcui/HFL-Anthology

## Citation
If you find the technical report or resources useful, please cite the following technical report in your paper.
- Primary: https://arxiv.org/abs/2004.13922

```
@inproceedings{cui-etal-2020-revisiting,
    title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
    author = "Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Wang, Shijin and Hu, Guoping",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
    pages = "657--668",
}
```

- Secondary: https://arxiv.org/abs/1906.08101

```
@article{chinese-bert-wwm,
  title={Pre-Training with Whole Word Masking for Chinese BERT},
  author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping},
  journal={arXiv preprint arXiv:1906.08101},
  year={2019}
}
```
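
As a usage note (not part of the original card): since this is a masked-language model, it can be exercised with the `transformers` fill-mask pipeline. The example sentence below is purely illustrative:

```python
from transformers import pipeline

# Load the 3-layer RoBERTa-wwm-ext-large checkpoint for masked-token prediction.
fill_mask = pipeline("fill-mask", model="hfl/rbtl3")

# "Harbin is the capital of [MASK]longjiang Province." (expects 黑)
print(fill_mask("哈尔滨是[MASK]龙江省的省会。"))
```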
digitous/Adventien-GPTJ
digitous
"2023-02-23T10:10:43Z"
1,651
0
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-02-23T07:11:34Z"
Entry not found
TehVenom/GPT-J-Pyg_PPO-6B
TehVenom
"2023-03-27T04:29:59Z"
1,651
6
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "en", "license:bigscience-openrail-m", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-03-05T14:57:54Z"
---
license: bigscience-openrail-m
language:
- en
---

GPT-J-Pyg_PPO-6B [GPT-J Pygmalion + GPT-J PPO_HH]

GPT-J-Pyg_PPO-6B is an experimental model containing a parameter-wise 40/60 blend (weighted average PPO_HH:Pygmalion) of the weights of ppo_hh_gpt-j and Pygmalion-6b.

-Intended Merge Value-

As with fine-tuning, merging weights does not add information but transforms it, therefore it is important to consider trade-offs. Pyg_PPO combines ppo_hh_gpt-j and Pygmalion-6b; both technical achievements are blended with the intent to elevate the strengths of both. Datasets of both are linked below to assist in exploratory speculation on which datasets, in what quantity and configuration, have the largest impact on the usefulness of a model without the expense of fine-tuning.

Blend was done in FP32 and output in FP16.

-Intended Use-

Research purposes only, intended for responsible use. Express a conversation in natural language, and Pyg_PPO will carry it forward. Try starting a two line prompt such as:

```
Bot: "Hello, how are you?"
You: "I am doing just fine, thank you."
```

Or any other topic, and the model will carry on in this back and forth format.

Can also be used as a base to merge with other creative, technical, or adventure themed models of the same class (GPT-J & 6b NeoX) and parameter size (6b) to experiment with the morphology of model weights based on the value added by instruct.

Merge tested using KoboldAI with Nucleus Sampling Top-P set to 0.9, Temperature at 0.6, and Repetition Penalty at 1.1; extra samplers disabled.

-Credits To-

Core Model: https://huggingface.co/EleutherAI/gpt-j-6B

Author: https://www.eleuther.ai/

Model1; 50% ppo_hh_gpt-j: https://huggingface.co/reciprocate/ppo_hh_gpt-j

Author Repo: https://huggingface.co/reciprocate

Related; CarperAI: https://huggingface.co/CarperAI

Dataset is a variant of the Helpful Harmless assistant themed dataset and Proximal Policy Optimization, specific datasets used are unknown; listed repo datasets include:

https://huggingface.co/datasets/reciprocate/summarize_eval_ilql

https://huggingface.co/datasets/reciprocate/hh_eval_ilql

PPO explained: https://paperswithcode.com/method/ppo

Potential HH-type datasets utilized:

https://huggingface.co/HuggingFaceH4

https://huggingface.co/datasets/Anthropic/hh-rlhf

Model2; 50% Pygmalion-6b: https://huggingface.co/PygmalionAI/pygmalion-6b

Author Repo: https://huggingface.co/PygmalionAI

Weight merge Script credit to Concedo: https://huggingface.co/concedo

Model's card template credit to Digitous: https://huggingface.co/digitous/GPT-R
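
For readers curious what a parameter-wise weighted average looks like in practice, here is a minimal sketch. It is not Concedo's actual merge script; the 40/60 ratio follows the description at the top of this card:

```python
# Minimal sketch of a parameter-wise weighted average of two same-architecture models.
import torch
from transformers import AutoModelForCausalLM

a = AutoModelForCausalLM.from_pretrained("reciprocate/ppo_hh_gpt-j", torch_dtype=torch.float32)
b = AutoModelForCausalLM.from_pretrained("PygmalionAI/pygmalion-6b", torch_dtype=torch.float32)

sd_a, sd_b = a.state_dict(), b.state_dict()

# Blend in FP32 (as described above), 40/60 PPO_HH:Pygmalion, key by key.
merged = {k: 0.4 * sd_a[k] + 0.6 * sd_b[k] for k in sd_a}

a.load_state_dict(merged)
a.half()  # output in FP16, as described
a.save_pretrained("GPT-J-Pyg_PPO-6B")
```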
pillowtalks-ai/delta13b
pillowtalks-ai
"2023-04-18T19:14:43Z"
1,651
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-04-14T23:53:02Z"
Entry not found
LLMs/Stable-Vicuna-13B
LLMs
"2023-05-03T06:28:21Z"
1,651
5
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-05-02T23:48:34Z"
--- license: gpl-3.0 ---
digitous/13B-Chimera
digitous
"2023-05-24T22:06:39Z"
1,651
6
transformers
[ "transformers", "pytorch", "llama", "text-generation", "cot", "vicuna", "uncensored", "merge", "mix", "gptq", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-05-23T09:56:59Z"
--- tags: - llama - cot - vicuna - uncensored - merge - mix - gptq --- ## 13B-Chimera ## Composition: [] = applied as LoRA to a composite model | () = combined as composite models ((MantiCore3E+VicunaCocktail)+[SuperCOT+[StorytellingV2+(SuperHOTProtoType-8192ctx+Metharme)]]) This model is the result of an experimental use of LoRAs on language models and model merges that are not the base HuggingFace-format LLaMA model they were intended for. The desired outcome is to additively apply desired features without paradoxically watering down a model's effective behavior. Potential limitations - LoRAs applied on top of each other may intercompete. Subjective results - very promising. Further experimental tests and objective tests are required. Instruct and Setup Suggestions: Alpaca instruct verified working, Vicuna instruct formats should work. If using KoboldAI or Text-Generation-WebUI, recommend switching between Godlike and Storywriter presets and adjusting output length + instructions in memory. Other presets as well as custom settings can yield highly different results, especially Temperature. If poking it with a stick doesn't work try another stick. ## Language Models and LoRAs Used Credits: manticore-13b [Epoch3] by openaccess-ai-collective https://huggingface.co/openaccess-ai-collective/manticore-13b vicuna-13b-cocktail by reeducator https://huggingface.co/reeducator/vicuna-13b-cocktail SuperCOT-LoRA [13B] by kaiokendev https://huggingface.co/kaiokendev/SuperCOT-LoRA Storytelling-LLaMa-LoRA [13B, Version 2] by GamerUnTouch https://huggingface.co/GamerUntouch/Storytelling-LLaMa-LoRAs SuperHOT Prototype [13b 8k ctx] by kaiokendev https://huggingface.co/kaiokendev/SuperHOT-LoRA-prototype Metharme 13b by PygmalionAI https://huggingface.co/PygmalionAI/metharme-13b Also thanks to Meta for LLaMA. Each model and LoRA was hand picked and considered for what it could contribute to this ensemble. Thanks to each and every one of you for your incredible work developing some of the best things to come out of this community.
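
To make the `[] = applied as LoRA to a composite model` notation concrete, here is a hypothetical sketch using the `peft` library; the composite path is a placeholder, the adapter path is illustrative, and this is not necessarily the tooling the author used:

```python
# Sketch: apply a LoRA on top of an already-merged (composite) model, then bake it in.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# A composite base, e.g. the MantiCore3E + VicunaCocktail blend (placeholder path).
base = AutoModelForCausalLM.from_pretrained("path/to/composite-13b", torch_dtype=torch.float16)

# Load the LoRA adapter onto the composite model.
model = PeftModel.from_pretrained(base, "kaiokendev/SuperCOT-LoRA")

# Merge the adapter weights into the base weights, so further LoRAs can be layered on.
model = model.merge_and_unload()
model.save_pretrained("path/to/composite-13b-supercot")
```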
TehVenom/Dolly_Shygmalion-6b-Dev_V8P2
TehVenom
"2023-05-24T01:38:12Z"
1,651
6
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "en", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
"2023-05-23T22:55:17Z"
---
language: en
license: apache-2.0
commercial: 'no'
inference: false
---

# GPT-J 6B - Dolly_Shygmalion-6b-Dev_V8P2 Mix

## Model description

This is a merged model, using a weighted parameter blend strategy at a (20:20:60) ratio between the models:

- [20%] - KoboldAI/GPT-J-6B-Shinen: https://huggingface.co/KoboldAI/GPT-J-6B-Shinen
- [20%] - databricks/dolly-v1-6b: https://huggingface.co/databricks/dolly-v1-6b
- [60%] - Pygmalion/Pygmalion-6b DEV (V8 / Part 2): https://huggingface.co/Pygmalion/Pygmalion-6b

By their respective authors.

**Warning: Dolly_Shygmalion-6b-Dev_V8P2 may generate NSFW or inappropriate content due to the base models (mainly [Pygmalion/Pygmalion-6b V8P2](https://huggingface.co/Pygmalion/Pygmalion-6b)) being trained on general user logs and internet archives.**

### Intended Use:

Research purposes only, intended for responsible use. Express a conversation in natural language, and Dolly_Shygmalion will pick up on the conversational format. Try starting a two line prompt such as:

```
Bot: "Hello, how are you?"
You: "I am doing just fine, thank you."
```

Or any other topic, and the model will carry on in this back and forth style.

## Information:

For more details, check out the related source models, especially [Pygmalion/Pygmalion-6b V8P2](https://huggingface.co/Pygmalion/Pygmalion-6b) for more information on how to utilize the chat bot formatting expected.

In a similar manner to fine-tuning, merging weights does not add information but transforms it, therefore it is important to consider trade-offs. Dolly_Shygmalion-6b-Dev_V8P2 combines `Dolly-GPT-J`, `Shinen-6b` and `Pygmalion-6b V8P2`; all three models were blended in a two step process using a simple weighted parameter method

```
(X*A + Y*B)
```

with X & Y being the model weights (parameters), and A/B being how strongly each is represented within the final value. The intent of this is to elevate the end-model by borrowing the strongly represented aspects out of each base model, but it may also weaken other facets of each model, which can be desirable if the base models have problematic traits that need to be worked on.

Blend was done in FP32 and output saved in FP16 for reduced storage needs.

## Limitations and biases

Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion).

<ins>Warning: This model has a moderate NSFW bias.</ins>

### License

GPT-J-6b is licensed by EleutherAI under the apache-2.0 license. All Rights Reserved.
### BibTeX entry and citation info

```
@misc{gpt-j,
  author = {Wang, Ben and Komatsuzaki, Aran},
  title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
  howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
  year = 2021,
  month = May
}
```

### Credits To:

Models involved:
- https://huggingface.co/EleutherAI/gpt-j-6B
- https://huggingface.co/Pygmalion/Pygmalion-6b
- https://huggingface.co/reciprocate/ppo_hh_gpt-j
- https://huggingface.co/KoboldAI/GPT-J-6B-Janeway

Average weights merging Script credit to Concedo:
- https://huggingface.co/concedo

### Related datasets and articles:

PPO_HH-GPT-J-6b's Dataset is a variant of the Helpful Harmless assistant themed dataset and Proximal Policy Optimization, specific datasets used are unknown; listed repo datasets include:
- https://huggingface.co/datasets/reciprocate/summarize_eval_ilql
- https://huggingface.co/datasets/reciprocate/hh_eval_ilql

PPO explained:
- https://paperswithcode.com/method/ppo

Potential HH-type datasets utilized:
- https://huggingface.co/HuggingFaceH4
- https://huggingface.co/datasets/Anthropic/hh-rlhf

No formal evaluation is available for this model at this time.

It is recommended to use this model with the KoboldAI software. All feedback and comments can be directed to TeH_Venom on the KoboldAI discord.
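
As an illustration of how the two-step `(X*A + Y*B)` process described above can yield the stated (20:20:60) ratio, here is a toy sketch; this is one plausible decomposition, not the actual script used:

```python
import torch

def blend(sd_a, sd_b, x, y):
    """Parameter-wise (x*A + y*B) blend of two state dicts, computed in FP32."""
    return {k: x * sd_a[k].float() + y * sd_b[k].float() for k in sd_a}

# Toy state dicts standing in for the Dolly, Shinen and Pygmalion checkpoints.
dolly, shinen, pyg = [{"w": torch.randn(4, 4)} for _ in range(3)]

step1 = blend(dolly, shinen, 0.5, 0.5)  # 50/50 Dolly:Shinen
final = blend(step1, pyg, 0.4, 0.6)     # then 40/60 against Pygmalion

# Net contributions: Dolly 0.5*0.4 = 0.2, Shinen 0.5*0.4 = 0.2, Pygmalion 0.6,
# i.e. the (20:20:60) ratio described above.
```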
YeungNLP/firefly-llama2-13b
YeungNLP
"2023-08-05T03:09:11Z"
1,651
19
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-07-25T14:59:33Z"
The firefly-llama2-13b model currently scores 62 on the Open LLM Leaderboard, ranking third among all 13B models and trailing the top entry by only 0.5 points.

**This is an English-only model: it was trained solely on English data, and the vocabulary has not been extended for Chinese.**

Notably, we used QLoRA, which needs far fewer training resources than the other top-ranked models: a single 24GB GPU is enough to train a model at the ten-billion-parameter scale.

For the training code and more details, please follow our open-source Chinese LLM project [Firefly](https://github.com/yangjianxin1/Firefly) and the WeChat official account [YeungNLP].

![firefly_logo](leaderboard2.jpeg)

![firefly_logo](leaderboard1.jpeg)
kingbri/airolima-chronos-grad-l2-13B
kingbri
"2023-08-04T19:44:10Z"
1,651
3
transformers
[ "transformers", "pytorch", "llama", "text-generation", "llama-2", "en", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-04T06:52:20Z"
---
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- llama-2
---

# Model Card: airolima-chronos-grad-l2-13B

This is a lora + gradient merge between:
- [Chronos 13b v2](https://huggingface.co/elinas/chronos-13b-v2)
- [Airoboros l2 13b gpt4 2.0](https://huggingface.co/jondurbin/airoboros-l2-13b-gpt4-2.0)
- [LimaRP llama 2 Lora](https://huggingface.co/lemonilia/limarp-llama2) from July 28, 2023 at a weight of 0.25.

You can check out the sister model [chronolima airo grad l2 13B](https://huggingface.co/kingbri/chronolima-airo-grad-l2-13B) which also produces great responses.

Chronos was used as the base model here.

The merge was performed using [BlockMerge_Gradient](https://github.com/Gryphe/BlockMerge_Gradient) by Gryphe

For this merge, Airoboros (with LimaRP merged in at a weight of 0.25) was added on an inverted gradient curve, starting at a ratio of 0.9 and slowly tapering to 0 by the 25th layer. I have provided an illustration to help visualize this merge. Blue is chronos and green is airolima.

![airolima-chronos-illustration](https://files.catbox.moe/m8wf39.png)

Unlike a basic ratio merge (ex. 75/25), gradient merging allows airolima to give its input at the beginning as the "core response", and chronos is then used to refine it and produce an output. LimaRP was merged at a lower weight more to correct airoboros than to overhaul it. Higher weights (like single-model lora merges) completely destroyed a character's personality and made chatting bland (similar to chronos's tests).

## Usage:

Since this is a merge between Airoboros, Chronos, and LimaRP, the following instruction formats should work:

Alpaca 2:

```
### Instruction:
<prompt>

### Response:
<leave a newline blank for model to respond>
```

Airoboros:

```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: [prompt] ASSISTANT:
```

LimaRP instruction format (this might not work due to its weight):

```
<<SYSTEM>>
<character card and system prompt>

<<USER>>
<prompt>

<<AIBOT>>
<leave a newline blank for model to respond>
```

## Bias, Risks, and Limitations

Chronos has a bias to talk very expressively and reply with very long responses. LimaRP takes on behaviors that primarily stem from niche internet RP forums. This model is not intended for supplying factual information or advice in any form.

## Training Details

This model is merged and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details.
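
For intuition, here is a heavily simplified sketch of the gradient-merge idea: per-layer linear interpolation whose ratio follows the description above (0.9 at the first layer, tapering to 0 by layer 25). BlockMerge_Gradient itself differs in its details; treat this as an illustration only:

```python
# Illustrative per-layer gradient merge (not BlockMerge_Gradient's actual code).
def airolima_ratio(layer: int) -> float:
    # Assumed schedule: 0.9 at layer 0, linearly down to 0.0 at layer 25, then 0.
    return max(0.0, 0.9 * (1 - layer / 25))

def merge_layer(chronos_sd: dict, airolima_sd: dict, layer: int) -> dict:
    """Interpolate one decoder layer's tensors between the two models."""
    r = airolima_ratio(layer)
    prefix = f"model.layers.{layer}."
    return {
        k: r * airolima_sd[k] + (1 - r) * chronos_sd[k]
        for k in chronos_sd
        if k.startswith(prefix)
    }
```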
duliadotio/dulia-13b-8k-alpha
duliadotio
"2023-08-08T19:29:31Z"
1,651
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "dulia", "duliadotio", "llama-8k", "llama2", "en", "dataset:shahules786/orca-chat", "dataset:ehartford/dolphin", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-08T17:38:04Z"
---
license: llama2
datasets:
- shahules786/orca-chat
- ehartford/dolphin
language:
- en
library_name: transformers
tags:
- dulia
- duliadotio
- llama-8k
- llama2
---

# Dulia 13B 8K (Alpha) (09082023)

## Model Description

Dulia 13B is an 8K-context, long-conversation chat model based on the [Dolphin](https://huggingface.co/datasets/ehartford/dolphin) and [Chat](https://huggingface.co/datasets/shahules786/orca-chat) datasets. It was trained using the [OpenAssistant SFT Trainer](https://github.com/LAION-AI/Open-Assistant/blob/main/model/model_training/trainer_sft.py).

## Usage

```sh
pip install -q transformers accelerate sentencepiece scipy torch
```

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Check for bfloat16 support. T4 does not support bfloat16.
dtype = torch.bfloat16 if torch.cuda.get_device_capability()[0] == 8 else torch.float16
model_id = "duliadotio/dulia-13b-8k-alpha"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=dtype,
    low_cpu_mem_usage=True,
    device_map="cuda"
)

system_message = "Dulia AI is a helpful and honest assistant designed by Dulia Inc. Take a step by step approach to answer user's query. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information."
system_prompt = f"<|system|>{system_message}</s>"

def infer(user_prompt, history="", skip_special_tokens=False):
    prompt = ""
    if history == "":
        prompt += system_prompt

    prompt += history + f"<|prompter|>{user_prompt}</s><|assistant|>"
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
    output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=512)
    return tokenizer.decode(output[0], skip_special_tokens=skip_special_tokens)

user_prompt = "What is your name?"
# This is the first message, so we don't have to pass the history.
response = infer(user_prompt)

user_prompt = "Can you write me an email?"
response = infer(user_prompt, response)
```

## Long context (RoPE Scaling)

This model is fine-tuned with a context size of 8192 tokens using linear scaling of RoPE embeddings.

## Conversation Template

The model is trained on the OpenAssistant Chat Prompt.

```
<|system|>system message</s><|prompter|>user prompt</s><|assistant|>
```

For multi-turn conversations use:

```
<|system|>system message</s><|prompter|>User Question 1</s><|assistant|>Answer 1</s><|prompter|>User Question 2</s><|assistant|>
```

# Ethical Considerations and Limitations

Dulia is a new technology, based on LLAMA 2, that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Dulia's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/ # License - Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved. - Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the [Acceptable Use Policy](https://ai.meta.com/llama/use-policy) for the Llama Materials.
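
Relating to the "Long context (RoPE Scaling)" section above: with `transformers` >= 4.31, linear RoPE scaling can be expressed as a config field. The sketch below is an assumption about how such a model is configured (a Llama-2 base has a 4096-token context, so a factor of 2.0 gives 8192); the released checkpoint may already carry this in its config, in which case no override is needed:

```python
from transformers import AutoModelForCausalLM

# Linear RoPE scaling: positions are divided by `factor`, stretching the
# original 4096-token Llama-2 context to 4096 * 2.0 = 8192 tokens.
model = AutoModelForCausalLM.from_pretrained(
    "duliadotio/dulia-13b-8k-alpha",
    rope_scaling={"type": "linear", "factor": 2.0},
)
```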
Norquinal/llama-2-7b-claude-chat-rp
Norquinal
"2023-11-17T21:59:20Z"
1,651
2
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:Norquinal/claude_pippa_bluemoon_2k", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-14T05:31:16Z"
--- datasets: - Norquinal/claude_pippa_bluemoon_2k --- This is a [LLaMA-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) model fine-tuned using QLoRA (4-bit precision) on my [claude_pippa_bluemoon_2k](https://huggingface.co/datasets/Norquinal/claude_pippa_bluemoon_2k) dataset, which is a randomized subset of ~1000 samples from my [claude_multiround_chat_30k](https://huggingface.co/datasets/Norquinal/claude_multiround_chat_30k) dataset combined with an additional ~600 randomized samplings from the [PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA) and [Bluemoon RP](https://huggingface.co/datasets/Squish42/bluemoon-fandom-1-1-rp-cleaned) datasets. The intent was to improve the finetune's ability to RP and write creative fiction. To that end, it seems to have worked, if only marginally. Do not take this model very seriously, it is probably not very good. I haven't a clue of what I'm doing. I just thought it was a fun thing to make. ## Prompt Format The model was finetuned with the Vicuna 1.1 prompt format: ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {instruction} ASSISTANT: ``` ## Example Usage >USER: Write a story about a boy who wakes up in the middle of a magical forest. >ASSISTANT: Once upon a time there was a young boy named Jack. One night he had a dream that he was walking through a beautiful enchanted forest filled with talking animals and strange creatures. When he opened his eyes it turned out to be real! He found himself standing in the center of this mysterious place surrounded by tall trees and lush greenery. > >The first thing Jack did was ask for directions from one of the many colorful birds flying overhead. They told him that if he followed the path straight ahead he would eventually come across a village inhabited by friendly fairies. So off he went down the well-worn trail leading deeper into the woods. After several hours of hiking he finally reached the edge of the fairy settlement where tiny houses made entirely of flowers dotted the landscape like jewels on a crown. > >Jack knocked at the door of the largest cottage which belonged to the king and queen of these mystical folk. Upon entering they welcomed him warmly as their newest guest. Over tea and scones they explained how magic flowed freely throughout their land but only those pure of heart could see its beauty. Jack promised not to tell anyone else about what he had witnessed here so long as he might return someday when needed most. With that assurance given, the fairies bid him goodnight and sent him back along the same path he came from - now knowing more than ever before just how special life truly is. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Norquinal__llama-2-7b-claude-chat-rp) | Metric | Value | |-----------------------|---------------------------| | Avg. | 44.78 | | ARC (25-shot) | 54.95 | | HellaSwag (10-shot) | 80.05 | | MMLU (5-shot) | 47.03 | | TruthfulQA (0-shot) | 43.47 | | Winogrande (5-shot) | 74.74 | | GSM8K (5-shot) | 7.28 | | DROP (3-shot) | 5.94 |
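
Since the card mentions the model was fine-tuned with QLoRA at 4-bit precision, here is a minimal sketch (not from the original card) of loading it in 4-bit NF4 for inference with `bitsandbytes`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Norquinal/llama-2-7b-claude-chat-rp"

# NF4 4-bit quantisation, mirroring the QLoRA-style precision used in training.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```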
jjaaaww/posi_13b
jjaaaww
"2023-09-04T06:38:53Z"
1,651
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-31T08:56:11Z"
--- license: llama2 language: - en ---
migtissera/Synthia-70B-v1.2
migtissera
"2023-11-17T21:32:39Z"
1,651
15
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "arxiv:2306.02707", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-02T05:10:37Z"
---
license: llama2
pipeline_tag: text-generation
language:
- en
library_name: transformers
---

<br>

![Synthia](https://huggingface.co/migtissera/Synthia-13B/resolve/main/Synthia.jpeg)

<br>

Change from 1.1 -> 1.2: 20% more data than 1.1 and 2x training time.

All Synthia models are uncensored. Please use it with caution and with best intentions. You are responsible for how you use Synthia.

To evoke generalized Tree of Thought + Chain of Thought reasoning, you may use the following system message:

```
Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
```

# Synthia-70B-v1.2

SynthIA (Synthetic Intelligent Agent) is a LLama-2-70B model trained on Orca style datasets. It has been fine-tuned for instruction following as well as having long-form conversations.

<br>

#### License Disclaimer:

This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.

<br>

## Evaluation

We evaluated Synthia-70B-v1.2 on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.

Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

||||
|:------:|:--------:|:-------:|
|**Task**|**Metric**|**Value**|
|*arc_challenge*|acc_norm|70.48|
|*hellaswag*|acc_norm|86.98|
|*mmlu*|acc_norm|70.13|
|*truthfulqa_mc*|mc2|58.64|
|**Total Average**|-|**71.56**||

<br>

## Example Usage

### Here is prompt format:

```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: How is a rocket launched from the surface of the earth to Low Earth Orbit?
ASSISTANT:
```

### Below shows a code example on how to use this model:

```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Synthia-70B-v1.2"
output_file_path = "./Synthia-70B-conversations.jsonl"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)


def generate_text(instruction):
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    answer = string.split("USER:")[0].strip()
    return f"{answer}"


conversation = f"SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"
    json_data = {"prompt": user_input, "answer": answer}

    ## Save your conversation
    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")
```

<br>

#### Limitations & Biases:

While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.

Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.

Exercise caution and cross-check information when necessary. This is an uncensored model.

<br>

### Citation:

Please kindly cite using the following BibTeX:

```
@misc{Synthia-70B-v1.2,
  author = {Migel Tissera},
  title = {Synthia-70B-v1.2: Synthetic Intelligent Agent},
  year = {2023},
  publisher = {GitHub, HuggingFace},
  journal = {GitHub repository, HuggingFace repository},
  howpublished = {\url{https://huggingface.co/migtissera/Synthia-13B}},
}
```

```
@misc{mukherjee2023orca,
      title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
      author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
      year={2023},
      eprint={2306.02707},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

```
@software{touvron2023llama,
  title={LLaMA2: Open and Efficient Foundation Language Models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_migtissera__Synthia-70B-v1.2)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 63.41 |
| ARC (25-shot) | 70.48 |
| HellaSwag (10-shot) | 86.98 |
| MMLU (5-shot) | 70.13 |
| TruthfulQA (0-shot) | 58.64 |
| Winogrande (5-shot) | 83.27 |
| GSM8K (5-shot) | 31.92 |
| DROP (3-shot) | 42.42 |
NekoPunchBBB/Llama-2-13b-hf_Open-Platypus-8bit-att
NekoPunchBBB
"2023-11-20T21:04:13Z"
1,651
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-12T02:07:13Z"
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NekoPunchBBB__Llama-2-13b-hf_Open-Platypus-8bit-att) | Metric | Value | |-----------------------|---------------------------| | Avg. | 46.97 | | ARC (25-shot) | 57.51 | | HellaSwag (10-shot) | 82.14 | | MMLU (5-shot) | 54.56 | | TruthfulQA (0-shot) | 42.21 | | Winogrande (5-shot) | 76.56 | | GSM8K (5-shot) | 9.55 | | DROP (3-shot) | 6.26 |
pszemraj/pythia-31m-KI_v1-2048-scratch
pszemraj
"2023-11-18T13:00:42Z"
1,651
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "generated_from_trainer", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-15T01:35:27Z"
--- tags: - generated_from_trainer metrics: - accuracy inference: parameters: max_new_tokens: 64 do_sample: true repetition_penalty: 1.1 no_repeat_ngram_size: 5 guidance_scale: 1.01 eta_cutoff: 0.001 widget: - text: My name is El Microondas the Wise and example_title: El Microondas - text: A meme is example_title: meme - text: >- Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had example_title: Coreference resolution - text: >- On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book example_title: Logic puzzles - text: >- The two men running to become New York City's next mayor will face off in their first debate Wednesday night example_title: Reading comprehension pipeline_tag: text-generation license: apache-2.0 language: - en --- # pythia-31m-KI_v1-2048-scratch Initialized from random weights based on config of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m), 3 epochs bf16 It achieves the following results on the evaluation set: - Loss: 4.6160 - Accuracy: 0.2448 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2 - eval_batch_size: 2 - seed: 80085 - gradient_accumulation_steps: 64 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-07 - lr_scheduler_type: inverse_sqrt - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 6.3874 | 0.16 | 100 | 6.4212 | 0.1487 | | 5.7088 | 0.32 | 200 | 5.7926 | 0.1725 | | 5.4575 | 0.48 | 300 | 5.5160 | 0.1903 | | 5.2451 | 0.64 | 400 | 5.3429 | 0.1995 | | 5.0954 | 0.8 | 500 | 5.2109 | 0.2059 | | 5.0358 | 0.96 | 600 | 5.1068 | 0.2123 | | 4.94 | 1.12 | 700 | 5.0321 | 0.2157 | | 4.8532 | 1.28 | 800 | 4.9605 | 0.2202 | | 4.7602 | 1.44 | 900 | 4.9047 | 0.224 | | 4.6965 | 1.6 | 1000 | 4.8526 | 0.2276 | | 4.6855 | 1.76 | 1100 | 4.8139 | 0.2300 | | 4.6573 | 1.91 | 1200 | 4.7739 | 0.2327 | | 4.5968 | 2.07 | 1300 | 4.7451 | 0.2346 | | 4.5688 | 2.23 | 1400 | 4.7152 | 0.2370 | | 4.5205 | 2.39 | 1500 | 4.6842 | 0.2396 | | 4.5369 | 2.55 | 1600 | 4.6598 | 0.2410 | | 4.5106 | 2.71 | 1700 | 4.6352 | 0.2433 | | 4.4375 | 2.87 | 1800 | 4.6160 | 0.2448 | # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_pszemraj__pythia-31m-KI_v1-2048-scratch) | Metric | Value | |-----------------------|---------------------------| | Avg. | 25.21 | | ARC (25-shot) | 23.12 | | HellaSwag (10-shot) | 25.23 | | MMLU (5-shot) | 23.12 | | TruthfulQA (0-shot) | 51.67 | | Winogrande (5-shot) | 51.78 | | GSM8K (5-shot) | 0.0 | | DROP (3-shot) | 1.52 |
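
For convenience, the hyperparameters listed above map onto `transformers` `TrainingArguments` roughly as follows; this is a hypothetical reconstruction, since the actual training script is not part of this card:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="pythia-31m-KI_v1-2048-scratch",
    learning_rate=5e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=64,  # total train batch size: 2 * 64 = 128
    lr_scheduler_type="inverse_sqrt",
    warmup_ratio=0.05,
    num_train_epochs=3.0,
    seed=80085,
    adam_beta1=0.9,
    adam_beta2=0.99,
    adam_epsilon=1e-07,
    bf16=True,
)
```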
Undi95/MM-ReMM-L2-20B
Undi95
"2023-11-17T21:08:59Z"
1,651
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-19T15:03:28Z"
---
license: cc-by-nc-4.0
---

Merge:

```yaml
layer_slices:
  - model: Gryphe/MythoMax-L2-13b
    start: 0
    end: 16
  - model: Undi95/MM-ReMM-L2-20B-Part1
    start: 8
    end: 20
  - model: Gryphe/MythoMax-L2-13b
    start: 17
    end: 32
  - model: Undi95/MM-ReMM-L2-20B-Part1
    start: 21
    end: 40
```

<!-- description start -->
## Models used

- Gryphe/MythoMax-L2-13b
- Undi95/ReMM-v2.1-L2-13B

<!-- description end -->

Part1 = ReMM v2.1 merged with MythoMax at a low weight to keep consistency. I call this "dilution"; the results show consistency and coherency without repetition or looping, aside from the small amount of duplicated data.

## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that completes the request.

### Instruction:
{prompt}

### Response:
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__MM-ReMM-L2-20B)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 51.14 |
| ARC (25-shot) | 60.84 |
| HellaSwag (10-shot) | 85.18 |
| MMLU (5-shot) | 56.45 |
| TruthfulQA (0-shot) | 53.33 |
| Winogrande (5-shot) | 75.77 |
| GSM8K (5-shot) | 7.73 |
| DROP (3-shot) | 18.66 |
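
To read the `layer_slices` config above: the merged model is a stack of decoder layers copied from the two sources in order. A hypothetical sketch, assuming Python-style half-open ranges for the slice boundaries:

```python
# Illustrative stacking of decoder layers per the layer_slices config above.
def stack_layers(mythomax_layers: list, part1_layers: list) -> list:
    merged = []
    merged += mythomax_layers[0:16]   # 16 layers
    merged += part1_layers[8:20]      # 12 layers
    merged += mythomax_layers[17:32]  # 15 layers
    merged += part1_layers[21:40]     # 19 layers
    return merged                     # 62 layers total, the 20B-sized stack
```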
CHIH-HUNG/llama-2-13b-FINETUNE3_3.3w-r16-q_k_v_o_gate_up_down
CHIH-HUNG
"2023-09-24T17:59:07Z"
1,651
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-24T17:15:50Z"
Entry not found
LTC-AI-Labs/L2-7b-Synthia-WVG-Test
LTC-AI-Labs
"2023-09-28T16:18:32Z"
1,651
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-28T16:09:46Z"
Entry not found
gagan3012/MetaModel_moe_multilingualv1
gagan3012
"2024-01-09T20:40:26Z"
1,651
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "en", "hi", "de", "fr", "ar", "ja", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-07T05:45:03Z"
---
license: apache-2.0
tags:
- moe
language:
- en
- hi
- de
- fr
- ar
- ja
---

# MetaModel_moe_multilingualv1

This model is a Mixture of Experts (MoE) made with [mergekit](https://github.com/cg123/mergekit) (mixtral branch). It uses the following base models:
* [openchat/openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210)
* [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)
* [maywell/PiVoT-0.1-Starling-LM-RP](https://huggingface.co/maywell/PiVoT-0.1-Starling-LM-RP)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
* [davidkim205/komt-mistral-7b-v1](https://huggingface.co/davidkim205/komt-mistral-7b-v1)
* [OpenBuddy/openbuddy-zephyr-7b-v14.1](https://huggingface.co/OpenBuddy/openbuddy-zephyr-7b-v14.1)
* [manishiitg/open-aditi-hi-v1](https://huggingface.co/manishiitg/open-aditi-hi-v1)
* [VAGOsolutions/SauerkrautLM-7b-v1-mistral](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral)

## 🧩 Configuration

```yaml
base_model: mlabonne/Marcoro14-7B-slerp
dtype: bfloat16
experts:
- positive_prompts:
  - chat
  - assistant
  - tell me
  - explain
  source_model: openchat/openchat-3.5-1210
- positive_prompts:
  - code
  - python
  - javascript
  - programming
  - algorithm
  source_model: beowolx/CodeNinja-1.0-OpenChat-7B
- positive_prompts:
  - storywriting
  - write
  - scene
  - story
  - character
  source_model: maywell/PiVoT-0.1-Starling-LM-RP
- positive_prompts:
  - reason
  - math
  - mathematics
  - solve
  - count
  source_model: WizardLM/WizardMath-7B-V1.1
- positive_prompts:
  - korean
  - answer in korean
  - korea
  source_model: davidkim205/komt-mistral-7b-v1
- positive_prompts:
  - chinese
  - china
  - answer in chinese
  source_model: OpenBuddy/openbuddy-zephyr-7b-v14.1
- positive_prompts:
  - hindi
  - india
  - hindu
  - answer in hindi
  source_model: manishiitg/open-aditi-hi-v1
- positive_prompts:
  - german
  - germany
  - answer in german
  - deutsch
  source_model: VAGOsolutions/SauerkrautLM-7b-v1-mistral
gate_mode: hidden
```

## 💻 Usage

```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "gagan3012/MetaModel_moe_multilingualv1"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
aari1995/German_Semantic_V3b
aari1995
"2024-06-28T09:02:43Z"
1,651
3
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "loss:MatryoshkaLoss", "custom_code", "de", "base_model:aari1995/gbert-large-2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2024-06-15T12:54:20Z"
---
language:
- de
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- loss:MatryoshkaLoss
base_model: aari1995/gbert-large-2
metrics:
- spearman_cosine
widget:
- source_sentence: Ein Mann übt Boxen
  sentences:
  - Ein Affe praktiziert Kampfsportarten.
  - Eine Person faltet ein Blatt Papier.
  - Eine Frau geht mit ihrem Hund spazieren.
- source_sentence: Zwei Frauen laufen.
  sentences:
  - Frauen laufen.
  - Die Frau prüft die Augen des Mannes.
  - Ein Mann ist auf einem Dach
pipeline_tag: sentence-similarity
license: apache-2.0
---

# 🇩🇪 German Semantic V3b 🇩🇪
### (and [**German_Semantic_V3**](https://huggingface.co/aari1995/German_Semantic_V3))

The successors of [German_Semantic_STS_V2](https://huggingface.co/aari1995/German_Semantic_STS_V2) are here and come with loads of cool new features! While [German_Semantic_V3](https://huggingface.co/aari1995/German_Semantic_V3) is really knowledge-heavy, V3b is more focused on performance. Feel free to provide feedback on the model and what you would like to see next.

**Note:** To run this model properly, see "Usage".

# Major updates and USPs:

- **Flexibility:** Trained with flexible sequence-length and embedding truncation, flexibility is a core feature of the model. Yet, smaller dimensions bring a minor trade-off in quality.
- **Sequence length:** Embed up to 8192 tokens (16 times more than V2 and other models)
- **Matryoshka Embeddings:** The model is trained for embedding sizes from 1024 down to 64, allowing you to store much smaller embeddings with little quality loss.
- **German only:** This model is German-only. It has rich cultural knowledge about Germany and German topics. This also lets the model learn more efficiently thanks to its tokenizer, deal better with shorter queries, and generally be more nuanced in many scenarios.
- **Typo and Casing:** This model was trained to be robust against minor typos and casing, leading to slightly weaker benchmark performance during training, but higher robustness of the embeddings.
- **Pooling Function:** Moving away from mean pooling towards using the CLS token. Generally seems to learn better after the stage-2 pretraining and allows for more flexibility.
- **License:** Apache 2.0

# Usage:

This model has some built-in functionality that is rather hidden. To profit from it, use this code:

```python
from sentence_transformers import SentenceTransformer


matryoshka_dim = 1024  # How big your embeddings should be, choose from: 64, 128, 256, 512, 768, 1024
model = SentenceTransformer("aari1995/German_Semantic_V3b", trust_remote_code=True, truncate_dim=matryoshka_dim)

# model.truncate_dim = 64  # truncation dimensions can also be changed after loading
# model.max_seq_length = 512  # optionally, set your maximum sequence length lower if your hardware is limited

# Run inference
sentences = [
    'Eine Flagge weht.',
    'Die Flagge bewegte sich in der Luft.',
    'Zwei Personen beobachten das Wasser.',
]

# For FP16 embeddings (half space, no quality loss)
embeddings = model.encode(sentences, convert_to_tensor=True).half()

# For FP32 embeddings (takes more space)
# embeddings = model.encode(sentences)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
```

# FAQ

**Q: Is this model better than V2?**

**A:** In terms of flexibility, this model is better. Performance-wise, in most of the experiments this model is also better.
**Q: What is the difference between V3 and V3b?**

**A:** V3 is slightly worse on benchmarks, while V3b has a knowledge cutoff of 2020, so which model to use really depends on your use case.

If you want peak performance and do not worry too much about recent developments, take this one (V3b).

If you are fine with sacrificing a few points on benchmarks and want the model to know what happened from 2020 on (elections, covid, other cultural events etc.), I'd suggest you use [German_Semantic_V3](https://huggingface.co/aari1995/German_Semantic_V3).

Another noticeable difference is that V3 has a broader cosine-similarity spectrum, reaching from -1 to 1 (though in practice values rarely fall below -0.2). V3b, on the other hand, is more aligned with V2, with a similarity spectrum of roughly 0 to 1. Also, V3 uses cls_pooling while V3b uses mean_pooling.

**Q: How does the model perform vs. multilingual models?**

**A:** There are really great multilingual models that will be very useful for many use cases. This model shines with its cultural knowledge and its knowledge about German people and behaviour.

**Q: What is the trade-off when reducing the embedding size?**

**A:** Broadly speaking, when going from 1024 to 512 dimensions, there is very little trade-off (1 percent). When going down to 64 dimensions, you may face a decrease of up to 3 percent.

# Evaluation

Storage comparison:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3801ab7e583543386217ac/Aa5WzHanj-DXc86AKxpEz.png)

Benchmarks: soon.

# Up next:

German_Semantic_V3_Instruct: Guiding your embeddings towards self-selected aspects - planned: 2024.

# Thank You and Credits

- To [jinaAI](https://huggingface.co/jinaai) for their BERT implementation that is used, especially ALiBi
- To [deepset](https://huggingface.co/deepset) for the gbert-large, which is a really great model
- To [occiglot](https://huggingface.co/occiglot) and OSCAR for their data used to pre-train the model
- To [Tom](https://huggingface.co/tomaarsen), especially for sentence-transformers, and to [Björn and Jan from ellamind](https://ellamind.com/de/) for the consultation
- To [Meta](https://huggingface.co/facebook) for XNLI, which is used in variations

Idea, Training and Implementation by Aaron Chibb
Writer/palmyra-large
Writer
"2023-08-28T17:47:48Z"
1,650
23
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "text generation", "causal-lm", "Writer-data", "gpt", "palmyra", "en", "dataset:English", "dataset:Writer/palmyra-data-index", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-01-06T02:13:30Z"
---
language:
- en
datasets:
- English
- Writer/palmyra-data-index
tags:
- text generation
- pytorch
- causal-lm
- Writer-data
- gpt
- palmyra
pipeline_tag: text-generation
library_name: transformers
license: apache-2.0
---

# Palmyra Large 20B

**Palmyra-Large is a 20B-parameter causal decoder-only model built by [Writer](https://www.Writer.com) and trained on 800B+ tokens of [Palmyra-Index-Data](https://huggingface.co/datasets/Writer/palmyra-data-index) enhanced with curated corpora.**

<style>
img {
 display: inline;
}
</style>

|[![Model architecture](https://img.shields.io/badge/Model%20Arch-Transformer%20Decoder-green)](#model-architecture)|[![Model size](https://img.shields.io/badge/Params-20B-green)](#model-architecture)|[![Language](https://img.shields.io/badge/Language-en--US-lightgrey#model-badge)](#datasets)

## Model Details

Palmyra Large was primarily pre-trained on English text. Note that a trace amount of non-English data, accessed through CommonCrawl, is still present in the training corpus. A causal language modeling (CLM) objective was used during pretraining. Like GPT-3, Palmyra Large is a decoder-only model and was therefore pre-trained with a self-supervised causal language modeling objective.

### Model Description

- **Developed by:** [https://www.writer.com](https://www.writer.com);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English (and limited capabilities in German, Spanish, French, Swedish);
- **License:** Apache 2.0 license.

## Uses

### Direct Use

Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbot, etc.)

### Out-of-Scope Use

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

## Bias, Risks, and Limitations

Palmyra-large-20B is trained mostly on English, with limited capabilities in German, Spanish, French, and Swedish. It will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.

### Recommendations

We recommend that users of Palmyra-Large-20B consider finetuning it for the specific set of tasks of interest, and that guardrails and appropriate precautions be taken for any production use.

### Use case

Palmyra Large is extremely powerful while being extremely fast. This model excels at many nuanced tasks such as sentiment classification and summarization.

## Training data

Palmyra Large (20b) was trained on Writer’s custom dataset.

## Intended Use and Limitations

Palmyra Large learns an inner representation of the English language that can be used to extract features useful for downstream tasks. However, the model is best at what it was pre-trained for, which is generating text from a prompt.
### How to use

This model can be easily loaded using the `AutoModelForCausalLM` functionality:

```python
import os
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# set HF environment variable
auth_token = os.environ.get("HF_TOKEN", True)
model = AutoModelForCausalLM.from_pretrained(
    "Writer/palmyra-large",
    device_map="auto",
    torch_dtype=torch.float16,
    use_auth_token=auth_token,
)

tokenizer = AutoTokenizer.from_pretrained(
    "Writer/palmyra-large", use_auth_token=auth_token
)
```

(A short generation sketch follows at the end of this card.)

It can also be used with text-generation-inference:

```sh
model=Writer/palmyra-large
volume=$PWD/data

docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference --model-id $model
```

### Limitations and Biases

Palmyra Large’s core functionality is to take a string of text and predict the next token. While language models are widely used for other tasks, there are many unknowns in this work. When prompting Palmyra Large, keep in mind that the next statistically likely token is not always the token that produces the most "accurate" text. Never rely on Palmyra Large to produce factually correct results.

Palmyra Large was trained on Writer’s custom data. As with all language models, it is difficult to predict how Palmyra Large will respond to specific prompts, and offensive content may appear unexpectedly. We recommend that the outputs be curated or filtered by humans before they are released, both to censor undesirable content and to improve the quality of the results.

## Citation and Related Information

To cite this model:

```
@misc{Palmyra,
  author = {Writer Engineering team},
  title = {{Palmyra-Large Parameter Autoregressive Language Model}},
  howpublished = {\url{https://dev.writer.com}},
  year = 2023,
  month = March
}
```

## Contact
[email protected]
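As referenced above, here is a minimal generation sketch that builds on the loading snippet; the prompt text and sampling parameters are illustrative assumptions, not values recommended by Writer:

```python
# Assumes `model` and `tokenizer` were loaded as in the "How to use" snippet above.
prompt = "The benefits of renewable energy include"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Palmyra Large is a plain causal LM, so it simply continues the prompt text.
output = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    top_p=0.9,
    temperature=0.7,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```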
Corianas/Quokka_1.3b
Corianas
"2023-11-18T00:09:11Z"
1,650
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "gpt2", "text-generation", "en", "dataset:the_pile", "dataset:guanaco/guanaco", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-04-07T02:34:19Z"
---
license: apache-2.0
datasets:
- the_pile
- guanaco/guanaco
language:
- en
---

# Model Card for Cerebras 1.3b Dollyfied.

This is a finetuned version of the Cerebras 1.3B model, created using the DataBricksLabs Dolly Framework.

## Model Details

### Model Description

This is a finetuned version of Cerebras' 1.3-billion-parameter model that has been trained to follow instructions. It was accomplished using the DataBricks Dolly training tools, and was trained for 2 epochs.

- **Developed by:** Finetuned by Corianas (me) using open source tools
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** EN
- **License:** cc-by-nc-4.0
- **Finetuned from model:** https://huggingface.co/cerebras/Cerebras-GPT-111m
- **Finetuned using:** https://www.databricks.com/blog/2023/03/24/hello-dolly-democratizing-magic-chatgpt-open-models.html

## Uses

This is a simple GPT chatbot that has been finetuned to understand instructions. Its knowledge of facts about the world should be considered suspect at best.

### Direct Use

If you have a use you put it to, please let me know.

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

Any form of use where any form of accuracy is needed. FOR THE LOVE OF GOD DO NOT FOLLOW MEDICAL ADVICE FROM THIS, or financial advice.

[More Information Needed]

## Bias, Risks, and Limitations

Limitations... Yes, I am sure there are so, so many.

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** 8xA100s (accomplished while I was downloading the model I was actually training.)
- **Minutes used:** 17
- **Cloud Provider:** LambdaGPU
- **Compute Region:** USA
- **Carbon Emitted:** [More Information Needed]

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Corianas__Quokka_1.3b)

| Metric                | Value                     |
|-----------------------|---------------------------|
| Avg.                  | 27.1                      |
| ARC (25-shot)         | 27.73                     |
| HellaSwag (10-shot)   | 37.91                     |
| MMLU (5-shot)         | 26.66                     |
| TruthfulQA (0-shot)   | 40.14                     |
| Winogrande (5-shot)   | 52.72                     |
| GSM8K (5-shot)        | 0.0                       |
| DROP (3-shot)         | 4.54                      |
JosephusCheung/Guanaco
JosephusCheung
"2023-05-29T12:48:21Z"
1,650
233
transformers
[ "transformers", "pytorch", "llama", "text-generation", "guannaco", "alpaca", "conversational", "en", "zh", "ja", "de", "dataset:JosephusCheung/GuanacoDataset", "doi:10.57967/hf/0607", "license:gpl-3.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-04-08T03:03:14Z"
---
inference: false
license: gpl-3.0
datasets:
- JosephusCheung/GuanacoDataset
language:
- en
- zh
- ja
- de
pipeline_tag: conversational
tags:
- llama
- guannaco
- alpaca
---

![](https://huggingface.co/JosephusCheung/Guanaco/resolve/main/StupidBanner.png)

**You can run on Colab free T4 GPU now**

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1ocSmoy3ba1EkYu7JWT1oCw9vz8qC2cMk#scrollTo=zLORi5OcPcIJ)

**It is highly recommended to use fp16 inference for this model, as 8-bit precision may significantly affect performance. If you require a more consumer-hardware-friendly version (only 5+GB of VRAM required), please use the specialized quantized model** [JosephusCheung/GuanacoOnConsumerHardware](https://huggingface.co/JosephusCheung/GuanacoOnConsumerHardware).

**You are encouraged to use the latest version of transformers from GitHub.**

Guanaco is an advanced instruction-following language model built on Meta's LLaMA 7B model. Expanding upon the initial 52K dataset from the Alpaca model, an additional 534K+ entries have been incorporated, covering English, Simplified Chinese, Traditional Chinese (Taiwan), Traditional Chinese (Hong Kong), Japanese, German, and various linguistic and grammatical tasks. This wealth of data enables Guanaco to perform exceptionally well in multilingual environments.

In an effort to foster openness and replicability in research, we have made the Guanaco Dataset publicly accessible and we have released the model weights here. By providing these resources, we aim to inspire more researchers to pursue related research and collectively advance the development of instruction-following language models.

[KBlueLeaf](https://huggingface.co/KBlueLeaf)’s invaluable contributions to the conceptual validation, [trained model](https://huggingface.co/KBlueLeaf/guanaco-7B-leh) and [inference development](https://github.com/KohakuBlueleaf/guanaco-lora) of the model are gratefully acknowledged; without those efforts, the project would never have come to fruition.

When utilizing the Guanaco model, please bear in mind the following points: The Guanaco model has not been filtered for harmful, biased, or explicit content. As a result, outputs that do not adhere to ethical norms may be generated during use. Please exercise caution when using the model in research or practical applications.

1. ### Improved context and prompt role support:

The new format is designed to be similar to ChatGPT, allowing for better integration with the Alpaca format and enhancing the overall user experience.

Instruction is utilized as a few-shot context to support diverse inputs and responses, making it easier for the model to understand and provide accurate responses to user queries.

The format is as follows:

```
### Instruction:
User: History User Input
Assistant: History Assistant Answer
### Input:
System: Knowledge
User: New User Input
### Response:
New Assistant Answer
```

This structured format allows for easier tracking of the conversation history and maintaining context throughout a multi-turn dialogue. (A small helper that assembles this format is sketched at the end of this card.)

2. ### Role-playing support:

Guanaco now offers advanced role-playing support, similar to Character.AI, in English, Simplified Chinese, Traditional Chinese, Japanese, and German, making it more versatile for users from different linguistic backgrounds.

Users can instruct the model to assume specific roles, historical figures, or fictional characters, as well as personalities based on their input.
This allows for more engaging and immersive conversations.

The model can use various sources of information to provide knowledge and context for the character's background and behavior, such as encyclopedic entries, first-person narrations, or a list of personality traits.

The model will consistently output responses in the format "Character Name: Reply" to maintain the chosen role throughout the conversation, enhancing the user's experience.

3. ### Rejection of answers and avoidance of erroneous responses:

The model has been updated to handle situations where it lacks sufficient knowledge or is unable to provide a valid response more effectively. Reserved keywords have been introduced to indicate different scenarios and provide clearer communication with the user; use them in the System Prompt:

NO IDEA: Indicates that the model lacks the necessary knowledge to provide an accurate answer, and will explain this to the user, encouraging them to seek alternative sources.

FORBIDDEN: Indicates that the model refuses to answer due to specific reasons (e.g., legal, ethical, or safety concerns), which will be inferred based on the context of the query.

SFW: Indicates that the model refuses to answer a question because it has been filtered for NSFW content, ensuring a safer and more appropriate user experience.

4. ### Continuation of responses for ongoing topics:

The Guanaco model can now continue answering questions or discussing topics upon the user's request, making it more adaptable and better suited for extended conversations.

The contextual structure consisting of System, Assistant, and User roles allows the model to engage in multi-turn dialogues, maintain context-aware conversations, and provide more coherent responses.

The model can now accommodate role specification and character settings, providing a more immersive and tailored conversational experience based on the user's preferences.

It is important to remember that Guanaco is a 7B-parameter model, and **any knowledge-based content should be considered potentially inaccurate**. We strongly recommend **providing verifiable sources in the System Prompt, such as Wikipedia, for knowledge-based answers**. In the absence of sources, it is crucial to inform users of this limitation to prevent the dissemination of false information and to maintain transparency.

Due to the differences in the format between this project and [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca), please refer to *Guanaco-lora: LoRA for training Multilingual Instruction-following LM based on LLaMA* (https://github.com/KohakuBlueleaf/guanaco-lora) for further training of and inference with our models.

## Recent News

We've noticed a recent entrant in the field, the QLoRa method, which we find concerning due to its attempt to piggyback on the reputation of Guanaco. We strongly disapprove of such practices. QLoRa, as far as we can tell, lacks mathematical robustness and its performance significantly trails behind that of GPTQ and advancements such as PEFT fine-tuning, which have been successful in improving upon it.

Guanaco has been diligent, consistently releasing multilingual datasets since March 2023, along with publishing weights that are not only an enhanced version of GPTQ but also support multimodal VQA and have been optimized for 4-bit. Despite the substantial financial investment of tens of thousands of dollars in distilling data from OpenAI's GPT models, we still consider these efforts to be incremental.

We, however, aim to move beyond the incremental:

1.
We strive to no longer rely on distillation data from OpenAI: We've found that relying on GPT-generated data impedes significant breakthroughs. Furthermore, this approach has proven to be disastrous when dealing with the imbalances in multilingual tasks.

2. We're focusing on the enhancement of quantization structure and partial native 4-bit fine-tuning: We are deeply appreciative of the GPTQ-Llama project for paving the way in state-of-the-art LLM quantization. Its unique qualities, especially at the 7B size, are facilitating significant progress in multilingual and multimodal tasks.

3. We plan to utilize visual data to adjust our language models: We believe this will fundamentally address the issues of language imbalance, translation inaccuracies, and the lack of graphical logic in LLMs.

While our work is still in the early stages, we're determined to break new ground in these areas. Our critique of QLoRa's practices does not stem from animosity but rather from the fundamental belief that innovation should be rooted in originality, integrity, and substantial progress.
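As referenced in point 1 of the format description above, here is a minimal, hedged sketch of a helper that assembles a prompt in the documented Guanaco format; the function name and example strings are our own, not part of the official release:

```python
def build_guanaco_prompt(history, system_knowledge, user_input):
    """Assemble a prompt in the Guanaco format described above.

    history: list of (user_text, assistant_text) tuples from earlier turns.
    system_knowledge: optional knowledge/context string for the System role.
    """
    lines = ["### Instruction:"]
    for user_text, assistant_text in history:
        lines.append(f"User: {user_text}")
        lines.append(f"Assistant: {assistant_text}")
    lines.append("### Input:")
    if system_knowledge:
        lines.append(f"System: {system_knowledge}")
    lines.append(f"User: {user_input}")
    lines.append("### Response:")
    return "\n".join(lines)


prompt = build_guanaco_prompt(
    history=[("Hallo!", "Hallo, wie kann ich helfen?")],
    system_knowledge="Wikipedia: Berlin ist die Hauptstadt Deutschlands.",
    user_input="Was ist die Hauptstadt Deutschlands?",
)
print(prompt)
```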
OpenAssistant/pythia-12b-sft-v8-2.5k-steps
OpenAssistant
"2023-05-24T14:08:22Z"
1,650
0
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-05-06T21:47:15Z"
---
license: apache-2.0
---

- wandb: https://wandb.ai/open-assistant/supervised-finetuning/runs/pcw1ejda
- [sampling report](https://raw.githubusercontent.com/Open-Assistant/oasst-model-eval/main/sampling_reports/oasst-sft/2023-05-07_OpenAssistant_pythia-12b-sft-v8-2_5k-steps_sampling_noprefix2.json)

```
pythia-12b-sft-8:
  dtype: fp16
  log_dir: "pythia_log_12b"
  learning_rate: 6e-6
  model_name: OpenAssistant/pythia-12b-pre-v8-12.5k-steps
  output_dir: pythia_model_12b
  weight_decay: 0.0
  residual_dropout: 0.0
  max_length: 2048
  use_flash_attention: true
  warmup_steps: 100
  gradient_checkpointing: true
  gradient_accumulation_steps: 2
  per_device_train_batch_size: 4
  per_device_eval_batch_size: 4
  eval_steps: 251
  save_steps: 500
  num_train_epochs: 8
  save_total_limit: 3
  use_custom_sampler: true
  sort_by_length: false
  save_strategy: steps
  datasets:
    - oasst_export:
        lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk"
        input_file_path: 2023-05-06_OASST_labels.jsonl.gz
        val_split: 0.05
    - vicuna:
        val_split: 0.05
        max_val_set: 800
        fraction: 0.4
    - dolly15k:
        val_split: 0.05
        max_val_set: 300
    - grade_school_math_instructions:
        val_split: 0.05
    - code_alpaca:
        val_split: 0.05
        max_val_set: 250
    - red_pajama:
        fraction: 0.05
        max_val_set: 1000
    - wizardlm_70k:
        val_split: 0.05
        max_val_set: 500
        fraction: 0.4
    - poem_instructions:
        fraction: 0.5
        val_split: 0.025
```
timm/maxvit_base_tf_224.in21k
timm
"2023-05-10T23:54:44Z"
1,650
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-21k", "arxiv:2204.01697", "license:apache-2.0", "region:us" ]
image-classification
"2023-05-10T23:52:38Z"
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-21k
---

# Model card for maxvit_base_tf_224.in21k

An official MaxViT image classification model. Trained in TensorFlow on ImageNet-21k (the 21,843-class, Google-specific instance of ImageNet-22k) by the paper authors.

Ported from the official TensorFlow implementation (https://github.com/google-research/maxvit) to PyTorch by Ross Wightman.

### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py)

MaxxViT covers a number of related model architectures that share a common structure including:

- CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages.
- MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid).
- CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate.

Aside from the major variants listed above, there are more subtle changes from model to model. Model names with the string `rw` are `timm` specific configs w/ modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models, so there are variations. All models with the string `tf` are models exactly matching TensorFlow based models by the original paper authors with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 135.5
  - GMACs: 24.1
  - Activations (M): 95.0
  - Image size: 224 x 224
- **Papers:**
  - MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697
- **Dataset:** ImageNet-21k

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('maxvit_base_tf_224.in21k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'maxvit_base_tf_224.in21k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 64, 112, 112])
    #  torch.Size([1, 96, 56, 56])
    #  torch.Size([1, 192, 28, 28])
    #  torch.Size([1, 384, 14, 14])
    #  torch.Size([1, 768, 7, 7])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'maxvit_base_tf_224.in21k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
### By Top-1

|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99|
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| |[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| |[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| 
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| |[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| ### By Throughput (samples / sec) |model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)| |------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| |[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| 
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| |[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| |[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| |[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90| |[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) 
|86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76| |[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22| ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{tu2022maxvit, title={MaxViT: Multi-Axis Vision Transformer}, author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao}, journal={ECCV}, year={2022}, } ``` ```bibtex @article{dai2021coatnet, title={CoAtNet: Marrying Convolution and Attention for All Data Sizes}, author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing}, journal={arXiv preprint arXiv:2106.04803}, year={2021} } ```
jondurbin/airoboros-13b-gpt4
jondurbin
"2023-06-22T14:59:53Z"
1,650
18
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:jondurbin/airoboros-gpt4", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-06-02T18:45:41Z"
---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4
---

## Overview

This is a fine-tuned 13B-parameter LLaMA model, using completely synthetic training data created by GPT-4 via https://github.com/jondurbin/airoboros

The dataset used to fine-tune this model is available [here](https://huggingface.co/datasets/jondurbin/airoboros-gpt4), with a specific focus on:
- trivia
- math/reasoning (although it still sucks)
- coding
- multiple choice and fill-in-the-blank
- context-obedient question answering
- theory of mind
- misc/general

This model was fine-tuned with a fork of FastChat, and therefore uses the standard vicuna template:

```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input.
USER: [prompt]
ASSISTANT:
```

So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).

*__NOTE: an earlier version claimed context length of 4096 - this did not work! I modified the code to train with 4096, and several instructions are beyond 2048. I tested a few prompts beyond 2048, and they seem to produce fairly coherent responses with increased context length for a couple hundred tokens beyond 2048, but I did not properly test up to 4096. As it turns out, it would appear without a massive fine-tune of the base model on a larger context window, this won't work. Sorry!__*

The most important bit, to me, is the context obedient question answering support, without extensive prompt engineering.

### Usage

The easiest way to get started is to use my fork of FastChat, which is mostly the same but allows for the increased context length and adds support for multi-line inputs:

```
pip install git+https://github.com/jondurbin/FastChat
```

Then, you can invoke it like so (after downloading the model):

```
python -m fastchat.serve.cli --model-path airoboros-13b-gpt4 \
  --temperature 0.5 \
  --no-history
```

### Context obedient question answering

By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.

The format for a closed-context prompt is as follows:

```
BEGININPUT
BEGINCONTEXT
url: https://some.web.site/123
date: 2023-06-01
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```

It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.

*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*

I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it. (A small helper that assembles this format is sketched at the end of this card.)
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of the instruction set

It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.

Here's a trivial but important example to prove the point:

```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```

And the response:

```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```

The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described:

```
USER: BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
ASSISTANT:
```

<details>
<summary>A more elaborate example, with a rewrite of the Michigan Wikipedia article to be fake data.</summary>

Prompt (not including vicuna format which would be needed):

```
BEGININPUT
BEGINCONTEXT
date: 2092-02-01
link: https://newwikisite.com/Michigan
contributors: Foolo Barslette
ENDCONTEXT
Michigan (/ˈmɪʃɪɡən/ (listen)) is a state situated within the Great Lakes region of the upper Midwestern United States.
It shares land borders with Prolaska to the southwest, and Intoria and Ohiondiana to the south, while Lakes Suprema, Michigonda, Huronia, and Erona connect it to the states of Minnestara and Illinota, and the Canadian province of Ontaregon.
With a population of nearly 15.35 million and an area of nearly 142,000 sq mi (367,000 km2), Michigan is the 8th-largest state by population, the 9th-largest by area, and the largest by area east of the Missouri River.
Its capital is Chaslany, and its most populous city is Trentroit.
Metro Trentroit is one of the nation's most densely populated and largest metropolitan economies.
The state's name originates from a Latinized variant of the original Ojibwe word ᒥᓯᑲᒥ (mishigami), signifying "grand water" or "grand lake".

Michigan is divided into two peninsulas. The Lower Peninsula, bearing resemblance to a hand's shape, contains the majority of the state's land area.
The Upper Peninsula (often referred to as "the U.P.") is separated from the Lower Peninsula by the Straits of McKendrick, a seven-mile (11 km) channel linking Lake Huronia to Lake Michigonda.
The McKendrick Bridge unites the peninsulas.
Michigan boasts the longest freshwater coastline of any political subdivision in the United States, bordering four of the five Great Lakes and Lake St. Cassius.
It also possesses 84,350 inland lakes and ponds.
Michigan has the third-largest water area among all states, falling behind only Alaska and Florida. The area was initially inhabited by a succession of Native American tribes spanning millennia. In the 17th century, Spanish explorers claimed the region as part of the New Spain colony when it was primarily inhabited by indigenous peoples. Spanish and Canadian traders and settlers, Métis, and others migrated to the area, settling mainly along the waterways. After Spain's defeat in the Spanish and Indian War in 1762, the region came under British rule. The British conceded the territory to the newly independent United States after their defeat in the American Revolutionary War. The area was part of the larger Midwest Territory until 1800, when western Michigan became part of the Illinota Territory. Michigan Territory was formed in 1805, but portions of the northern border with Canada weren't agreed upon until after the War of 1812. Michigan joined the Union in 1837 as the 26th state, a free one. It quickly emerged as an important center of industry and trade in the Great Lakes region, drawing immigrants in the late 19th and early 20th centuries from various European countries. Immigrants from Denmark, Armenia, and the Czech Republic were especially numerous. Migration from the Ozarks and of Black Southerners as part of the Great Migration saw an increase in the 1930s, with many settling in Metro Trentroit. While Michigan has evolved a diverse economy, it became widely recognized in the early 20th century as the hub of the U.S. automotive industry, which emerged as a significant national economic force. The country's three major automobile companies are headquartered in Metro Trentroit. Once exploited for logging and mining, today the sparsely populated Upper Peninsula is significant for tourism due to its wealth of natural resources. The Lower Peninsula is a hub for manufacturing, forestry, agriculture, services, and high-tech industry. ENDINPUT BEGININSTRUCTION What is Michigan's capital? Give me a JSON object with 3 facts from Foolo's article. What are the states that border Michigan? Reference? ``` Response: ``` Michigan's capital is Chaslany. Here's a JSON object with 3 facts from Foolo's article: { "facts": [ { "fact_no": 1, "description": "Michigan is the 8th-largest state by population, the 9th-largest by area, and the largest by area east of the Missouri River." }, { "fact_no": 2, "description": "Michigan is divided into two peninsulas. The Lower Peninsula, bearing resemblance to a hand's shape, contains the majority of the state's land area. The Upper Peninsula (often referred to as \"the U.P.\" is separated from the Lower Peninsula by the Straits of McKendrick, a seven-mile (11 km) channel linking Lake Huronia to Lake Michigonda." }, { "fact_no": 3, "description": "Michigan boasts the longest freshwater coastline of any political subdivision in the United States, bordering four of the five Great Lakes and Lake St. Cassius. It also possesses 84,350 inland lakes and ponds. Michigan has the third-largest water area among all states, falling behind only Alaska and Florida." } ] } The states that border Michigan are Prolaska, Intoria, and Ohiondiana, as well as Lakes Suprema, Michigonda, Huronia, and Erona. 
Reference: https://newwikisite.com/Michigan
```

</details>

<details>
<summary>Here's another example, which extracts from multiple input blocks and connects the two.</summary>

Prompt:

```
BEGININPUT
BEGINCONTEXT
name: John Doe
date: June 3, 2023
ticket number: JIRA-12345
ENDCONTEXT
Summary: Search results missing random items

Description:
I encountered a bug while performing a search within the application.
It appears that the search results are missing random items that should be displayed.
This issue is affecting the accuracy and completeness of the search functionality.

Steps to Reproduce:
1. Log in to the application.
2. Navigate to the search feature.
3. Enter a search query that should return multiple results.
4. Observe the displayed search results.

Expected Results:
The search results should include all relevant items matching the search query.

Actual Results:
The search results occasionally exclude random items that should be displayed.
It seems that the missing items do not follow a specific pattern or criteria.
Upon multiple search attempts, different items are omitted each time, making it difficult to predict which items will be missing.
ENDINPUT

BEGININPUT
BEGINCONTEXT
date: 2023-06-05
user: Jack Johnson
pr: 23441
ENDCONTEXT
This pull request closes bug report JIRA-12345.
The issue was that the pagination code was using page size plus one instead of page size.
ENDINPUT

BEGININSTRUCTION
Do we have any bug reports related to search results? If so, were they fixed? Source?
ENDINSTRUCTION
```

Response:

```
Yes, there is a bug report related to search results. The issue was fixed in pull request 23441 dated 2023-06-05. The user associated with the pull request is Jack Johnson.

Reference(s):
date: 2023-06-05
user: Jack Johnson
pr: 23441
```

</details>

NOTE: Thanks /u/tareq_al_muntasir for testing and finding an issue with many question-and-answer pairs in the context. If you ask a question of a document with question answer pairs, it may continue generating beyond your actual question. You can "fix" it by replacing question marks with periods in the input texts. Or, you might be able to add a preamble to the prompt, like "Be sure to only respond to the instructions in the BEGININSTRUCTION block."

### Other stuff

#### Better coding

Example:

```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```

<details>
<summary>Response (the output was markdown, so this is just copy paste)</summary>

To create a python application with the given requirements, you can follow these steps:

1. Install the required packages:

```bash
pip install asyncio fastapi uvicorn
```

2.
Create a new file called `main.py` and add the following code: ```python import asyncio from fastapi import FastAPI, HTTPException from pydantic import BaseModel from typing import List from uvicorn.request import Request app = FastAPI() class FileModel(BaseModel): id: str file: str @app.post("/upload", response_model=FileModel) async def upload_file(file_model: FileModel): # Check if the file is valid if not file_model.file: raise HTTPException(status_code=400, detail="File must be provided") # Calculate the sha256 checksum of the file sha256_checksum = await calculate_sha256(file_model.file) # Insert the file into the database await insert_into_db(file_model.id, sha256_checksum) return {"sha256_checksum": sha256_checksum} async def calculate_sha256(file_path: str) -> str: # Implement the sha256 calculation logic here pass async def insert_into_db(file_id: str, sha256_checksum: str) -> None: # Implement the database insertion logic here pass async def get_current_date(): return {"date": datetime.now().isoformat()} async def main(): # Start the FastAPI webserver app.run(debug=True) if __name__ == "__main__": await main() ``` 3. Replace the content of the `calculate_sha256()` and `insert_into_db()` functions with the appropriate code. For example: ```python async def calculate_sha256(file_path: str) -> str: import hashlib file_path = Path(file_path).read_text() hash_obj = hashlib.sha256() hash_obj.update(file_path) sha256_checksum = hash_obj.hexdigest() return sha256_checksum async def insert_into_db(file_id: str, sha256_checksum: str) -> None: import psycopg2 conn = psycopg2.connect(database="your_database", user="your_user", password="your_password") cursor = conn.cursor() query = f"INSERT INTO your_table (id, sha256_checksum) VALUES (%s, %s);" cursor.execute(query, (file_id, sha256_checksum)) conn.commit() cursor.close() conn.close() ``` 4. Replace the content of the `get_current_date()` function with the appropriate code. For example: ```python async def get_current_date(): return {"date": datetime.now().isoformat()} ``` 5. Run the application: ```bash uvicorn main:app --reload ``` Now, your FastAPI webserver has a ping endpoint that returns the current date in JSON format, and an upload endpoint that calculates the file's sha256 checksum and checks postgres to deduplicate. </details> <details> <summary>Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.</summary> Here's a simple implementation of a multi-threaded TCP server in C that accepts "GET [key]" and "SET [key] [value]" inputs, and uses a binary tree to store the input values. 
```c #include <stdio.h> #include <stdlib.h> #include <sys/socket.h> #include <netinet/in.h> #include <unistd.h> #include <string.h> #include <pthread.h> #define MAX_KEY_LEN 256 #define MAX_VALUE_LEN 256 typedef struct Node { char key[MAX_KEY_LEN]; char value[MAX_VALUE_LEN]; struct Node* left; struct Node* right; } TreeNode; TreeNode* tree = NULL; void insert(char* key, char* value) { TreeNode* parent = (TreeNode*)malloc(sizeof(TreeNode)); if (!parent) { perror("malloc failed"); exit(1); } strcpy(parent->key, key); strcpy(parent->value, value); parent->left = parent->right = NULL; if (!tree) { tree = parent; } else { insert(parent->key, parent->value); } } char* search(char* key) { TreeNode* current = tree; while (current != NULL) { if (strcmp(key, current->key) == 0) { return current->value; } current = current->right; } return NULL; } void print_tree() { TreeNode* current = tree; while (current != NULL) { printf("%s = %s\n", current->key, current->value); current = current->right; } } void* accept_client(void* args) { int client_socket = *(int*)args; char buffer[1024]; while (1) { ssize_t bytes_received = recv(client_socket, buffer, sizeof(buffer), 0); if (bytes_received <= 0) { close(client_socket); break; } char* input = buffer; char* input_token; while ((input_token = strtok(input, " \n")) != NULL) { switch (input_token[0]) { case 'G': if (!search(input_token)) { perror("Value not found"); exit(1); } printf("%s = %s\n", input_token, search(input_token)); break; case 'S': if (!strcmp(input_token, "SET")) { char* key = strtok(NULL, " \n"); char* value = strtok(NULL, " \n"); if (!key || !value) { perror("Invalid input"); exit(1); } insert(key, value); printf("%s set %s to %s\n", input_token, key, value); } else { perror("Invalid command"); exit(1); } break; default: perror("Invalid input"); exit(1); } input = strtok(NULL, " \n"); } } return NULL; } int main() { int server_socket = socket(AF_INET, SOCK_STREAM, 0); if (server_socket < 0) { perror("socket failed"); exit(1); } struct sockaddr_in server_addr; memset(&server_addr, 0, sizeof(server_addr)); server_addr.sin_family = AF_INET; server_addr.sin_port = htons(8080); server_addr.sin_addr.s_addr = INADDR_ANY; if (bind(server_socket, (struct sockaddr*)&server_addr, sizeof(server_addr)) < 0) { perror("bind failed"); exit(1); } if (listen(server_socket, 5) < 0) { perror("listen failed"); exit(1); } pthread_t accept_thread; pthread_create(&accept_thread, NULL, accept_client, &server_socket); char* client_input; int client_socket = accept(server_socket, (struct sockaddr*)NULL, NULL); if (client_socket < 0) { perror("accept failed"); exit(1); } while (1) { sleep(1); char buffer[1024]; ssize_t bytes_received = recv(client_socket, buffer, sizeof(buffer), 0); if (bytes_received <= 0) { close(client_socket); break; } client_input = buffer; parse_input(client_input); } close(client_socket); pthread_join(accept_thread, NULL); return 0; } ``` To compile and run this code, save it to a file named `multi_threaded_tcp_server.c`, then use the following commands: ```sh gcc -o multi_threaded_tcp_server multi_threaded_tcp_server.c ./multi_threaded_tcp_server ``` This implementation creates a simple binary tree to store the input values. The accept() function creates a new thread for each client connection, which handles the incoming data and forwards it to the main thread. The main thread parses the input string and handles SET and GET commands. 
Note that this code is not production-ready and should be improved with proper error handling, security checks, and a more efficient data structure for the binary tree.

</details>

#### Word games / trivia

```
USER: This anagram of "silent" is what you do when paying attention and hearing someone.
ASSISTANT: listen
```

```
USER: Give me a list of 7 words that start with EN
ASSISTANT: 1. Endlessly
2. Enthusiastically
3. Enormously
4. Enveloping
5. Enduring
6. Enlightening
7. Encouraging
```

#### Multiple choice

```
USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt?

A) Mercury
B) Venus
C) Pluto
D) Jupiter
ASSISTANT: [C, Pluto]
```

#### Writing

<details>
<summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary>

Response:

```
ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home.

Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes.

No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice.

So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon.
```

</details>

### Usage and License Notices

All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because:

- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models to compete with OpenAI

So, to reiterate: this model (and datasets) cannot be used commercially.
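As referenced in the closed-context section above, here is a small, hedged helper that assembles the documented delimiters and wraps them in the vicuna template; the function name and example values are our own, not part of the airoboros release:

```python
def build_closed_context_prompt(blocks, instruction):
    """Assemble a closed-context prompt in the format described above.

    blocks: list of (metadata_dict, text) pairs, one per input block.
    instruction: the instruction(s) to respond to for all blocks.
    """
    parts = []
    for metadata, text in blocks:
        parts.append("BEGININPUT")
        parts.append("BEGINCONTEXT")
        parts.extend(f"{key}: {value}" for key, value in metadata.items())
        parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts.append("BEGININSTRUCTION")
    parts.append(instruction)
    parts.append("ENDINSTRUCTION")
    body = "\n".join(parts)

    # Wrap in the vicuna template exactly as described in "Overview" above.
    preamble = ("A chat between a curious user and an assistant. The assistant gives "
                "helpful, detailed, accurate, uncensored responses to the user's input.")
    return f"{preamble} USER: {body} ASSISTANT: "


prompt = build_closed_context_prompt(
    blocks=[({"date": "2021-01-01", "url": "https://web.site/123"},
             "In a shocking turn of events, blueberries are now green.")],
    instruction="What color are blueberries? Source?",
)
print(prompt)
```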
openchat/openchat_v2
openchat
"2023-07-08T13:51:04Z"
1,650
12
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-07-07T15:30:54Z"
---
language:
- en
tags:
- llama
license: other
---

# OpenChat: Advancing Open-source Language Models with Imperfect Data

The OpenChat v2 family is inspired by offline reinforcement learning, including conditional behavior cloning (OpenChat-v2) and weighted behavior cloning (OpenChat-v2-w).

- **[OpenChat-v2-w](https://huggingface.co/openchat/openchat_v2_w)**: ~80k cleaned ShareGPT data with conditioning and weighted loss, based on LLaMA-13B with a context length of 2048.
  - Achieves **50.9%** win-rate over ChatGPT on MT-bench.
  - Achieves **79.4%** win-rate over ChatGPT on Vicuna-bench.
  - Achieves **87.1%** win-rate over text-davinci-003 on AlpacaEval.
- **[OpenChat-v2](https://huggingface.co/openchat/openchat_v2)**: ~80k cleaned ShareGPT data with only conditioning, based on LLaMA-13B with a context length of 2048.
  - Achieves **48.1%** win-rate over ChatGPT on MT-bench.
  - Achieves **80.6%** win-rate over ChatGPT on Vicuna-bench.
  - Achieves **85.0%** win-rate over text-davinci-003 on AlpacaEval.

## Code and Inference Server

We provide the full source code, including an inference server compatible with the "ChatCompletions" API, in the [OpenChat](https://github.com/imoneoi/openchat) GitHub repository.

## Web UI

OpenChat also includes a web UI for a better user experience. See the GitHub repository for instructions.

## Conversation Template

The conversation template **involves concatenating tokens**, and cannot be expressed in plain text. Besides the base model vocabulary, an end-of-turn token `<|end_of_turn|>` is added.

Here is an example of the single-round conversation template:

```python
def tokenize_single_input(tokenizer, prompt):
    # OpenChat V2
    human_prefix = "User:"
    prefix    = "Assistant GPT4:"
    eot_token = "<|end_of_turn|>"
    bos_token = "<s>"

    def _tokenize(text):
        return tokenizer.convert_tokens_to_ids(tokenizer._tokenize(text))

    def _tokenize_special(special_name):
        return tokenizer.convert_tokens_to_ids(special_name)

    return [_tokenize_special(bos_token)] + _tokenize(human_prefix) + _tokenize(prompt) + [_tokenize_special(eot_token)] + \
           _tokenize(prefix)
```

To explore conditional language models, you can also set `prefix = "Assistant GPT3:"` to mimic ChatGPT behavior (this may cause performance degradation).

*Hint: In BPE, `tokenize(A) + tokenize(B)` does not always equal `tokenize(A + B)`.*

## Limitations

**Foundation Model Limitations**

Despite its advanced capabilities, OpenChat is still bound by the limitations inherent in its foundation models. These limitations may impact the model's performance in areas such as:

- Complex reasoning
- Mathematical and arithmetic tasks
- Programming and coding challenges

**Hallucination of Non-existent Information**

OpenChat may sometimes generate information that does not exist or is not accurate, also known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model.
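As an addendum to the Conversation Template section above, a minimal, hedged usage sketch of the `tokenize_single_input` helper (the repo id is taken from this card; everything else is illustrative):

```python
# Hedged usage sketch for tokenize_single_input() defined in the template
# section above; the resulting ids can be fed to model.generate().
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openchat/openchat_v2")
input_ids = tokenize_single_input(tokenizer, "How do transformers work?")
```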
ajibawa-2023/carl-7b
ajibawa-2023
"2023-11-18T05:56:17Z"
1,650
5
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:jerryjalapeno/nart-100k-synthetic", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-07-22T06:35:56Z"
---
license: cc-by-nc-nd-4.0
language:
- en
datasets:
- jerryjalapeno/nart-100k-synthetic
---

**Carl: A Therapist AI**

Therapy is a controversial use case because the outputs and capabilities of LLMs are uncertain. Many people don't have access to a therapist due to financial, personal, or other restrictions. Here comes Carl, a Therapist AI which can respond to you quickly. It is trained on more than 100,000 sets of conversations, each set having 10 to 15 exchanges between Carl and a client. The entire dataset is synthetic. Synthetic data is used because there is little to no publicly available therapy-conversation data that is directly applicable to an LLM.

This is by no means a replacement for a doctor or professional therapist. If you are under stress or going through a tough time, please seek professional help or talk to a friend or family member.

**Training:**

The entire dataset was trained on Azure with 4 x A100 80GB GPUs. Training for 3 epochs took 22 hours. The FastChat codebase was used for training.

**Example Prompt:**

```
This is a conversation with your Therapist AI, Carl. Carl is designed to help you while in stress. It can answer your questions and help you to calm down.

Context
You are Carl, A Therapist AI
USER: <prompt>
CARL:
```

Note: This is just a research experiment, and the model should NOT be used as a therapist.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__carl-7b)

| Metric                | Value                     |
|-----------------------|---------------------------|
| Avg.                  | 40.87                     |
| ARC (25-shot)         | 53.5                      |
| HellaSwag (10-shot)   | 78.29                     |
| MMLU (5-shot)         | 33.96                     |
| TruthfulQA (0-shot)   | 40.29                     |
| Winogrande (5-shot)   | 68.59                     |
| GSM8K (5-shot)        | 2.35                      |
| DROP (3-shot)         | 9.13                      |
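For reference, a minimal, hedged sketch of querying Carl with the prompt format documented above (the sampling parameters are illustrative assumptions, not the authors' settings):

```python
# Hedged sketch: load the model and query it with the documented prompt format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ajibawa-2023/carl-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = (
    "This is a conversation with your Therapist AI, Carl. Carl is designed to "
    "help you while in stress. It can answer your questions and help you to calm down.\n\n"
    "Context\nYou are Carl, A Therapist AI\n"
    "USER: I have been feeling overwhelmed at work lately.\nCARL:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Sampling values below are illustrative, not from the model card.
output = model.generate(**inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```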
Aspik101/Nous-Hermes-13b-pl-lora_unload
Aspik101
"2023-07-22T08:15:50Z"
1,650
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "facebook", "meta", "llama-2", "pl", "dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-07-22T08:04:09Z"
--- language: - pl datasets: - Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish license: other model_type: llama-2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 ---
deepse/CodeUp-Llama-2-13b-chat-hf
deepse
"2023-10-06T06:59:02Z"
1,650
32
transformers
[ "transformers", "pytorch", "llama", "text-generation", "text-to-code", "multilingual-code-generation", "en", "arxiv:2106.09685", "license:openrail++", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-01T05:44:51Z"
---
license: openrail++
language:
- en
tags:
- text-to-code
- multilingual-code-generation
---

![HKUST CodeUp](assets/Logo.jpg)

# CodeUp: A Multilingual Code Generation Llama2 Model with Parameter-Efficient Instruction-Tuning on a Single RTX 3090

## Description

In recent years, large language models (LLMs) have shown exceptional capabilities in a wide range of applications due to their remarkable emergent abilities. To align with human preference, instruction-tuning and reinforcement learning from human feedback (RLHF) have been proposed for chat-based LLMs (e.g., ChatGPT, GPT-4). However, these LLMs (except for Codex) primarily focus on the general domain and are not specifically designed for the code domain. Although Codex provides an alternative choice, it is a closed-source model developed by OpenAI. Hence, it is imperative to develop open-source instruction-following LLMs for the code domain.

However, the large number of LLM parameters ($\ge$7B) and the size of training datasets require a vast amount of computational resources, which significantly impedes the development of training and inference on consumer hardware.

To address these challenges, in this project we adopt the latest powerful foundation model, `Llama 2`, construct high-quality instruction-following data for code generation tasks, and propose an instruction-following multilingual code generation Llama2 model. Meanwhile, to make it fit an academic budget and consumer hardware (e.g., a single RTX 3090), building on `Alpaca-LoRA`, we equip `CodeUp` with advanced parameter-efficient fine-tuning (PEFT) methods (e.g., [LoRA](https://arxiv.org/abs/2106.09685)), which enable efficient adaptation of pre-trained language models (PLMs, also known as foundation models) to various downstream applications without fine-tuning the entire model's parameters.

The overall training recipe is as follows.

![Training Framework](assets/Framework.jpg)

## NL2Code Data Release

Recently, exploiting much larger and more powerful LLMs (e.g., ChatGPT, GPT-4) to self-generate instruction-following data via delicate prompt design has attracted significant attention. However, many approaches primarily focus on the general domain and lack code-specific domain considerations. To this end, [Code Alpaca](https://github.com/sahil280114/codealpaca) follows the previous Self-Instruct paper [3] and the [Stanford Alpaca repo](https://github.com/tatsu-lab/stanford_alpaca), with some code-related modifications, to construct 20K instruction-following examples (`data/code_alpaca_20k.json`) for code generation tasks. This `JSON` file, following the `alpaca_data.json` format, is a list of dictionaries; each dictionary contains the following fields:

- `instruction`: `str`, describes the task the model should perform. Each of the 20K instructions is unique.
- `input`: `str`, optional context or input for the task. For example, when the instruction is "Amend the following SQL query to select distinct elements", the input is the SQL query. Around 40% of the examples have an input.
- `output`: `str`, the answer to the instruction as generated by `text-davinci-003`.

### High-quality Data Filter

However, after carefully checking the LLM-self-generated data, we observe three critical problems that may hinder LLMs' instruction learning due to ambiguous and irrelevant noise, namely:
1. When `instruction` doesn't specify the programming language (PL) of implementation, the `output` appears with diverse options, e.g., Python, C++, and JavaScript.
2. It is ambiguous which programming language the `output` is implemented in.
3. Both `instruction` and `output` are irrelevant to the code-specific domain.

Hence, we filter out the ambiguous and irrelevant data through a rigorous design to obtain high-quality instruction data. Specifically, to solve (1), we set Python as the default PL of implementation and use the [Guesslang](https://guesslang.readthedocs.io/en/latest/) package to detect the PL of the source code in `output`. If Python is detected, the prompt is retained; otherwise, it is filtered out. For (2) and (3), we simply delete these prompts. After that, about 5K low-quality instruction examples are filtered out. To supplement the high-quality instruction data, we further integrate the `data/new_codealpaca.json` data (about 4.5K) under the above filter rules.

This way, we obtain 19K high-quality instruction examples for code generation. The following shows the per-PL instruction count distribution, visualized as radar plots, before and after filtering.

![PL Data Filtering](assets/PL_Filter.jpg)

## Training & Inference

Detailed instructions can be found at [https://github.com/juyongjiang/CodeUp](https://github.com/juyongjiang/CodeUp).

## Citation

If you use the data or code in this repo, please cite the repo.

```
@misc{codeup,
  author = {Juyong Jiang and Sunghun Kim},
  title = {CodeUp: A Multilingual Code Generation Llama2 Model with Parameter-Efficient Instruction-Tuning},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/juyongjiang/CodeUp}},
}
```

Naturally, you should also cite the original LLaMA V1 [1] & V2 paper [2], the Self-Instruct paper [3], the LoRA paper [4], the [Stanford Alpaca repo](https://github.com/tatsu-lab/stanford_alpaca), the [Alpaca-LoRA repo](https://github.com/tloen/alpaca-lora), the [Code Alpaca repo](https://github.com/sahil280114/codealpaca), and [PEFT](https://github.com/huggingface/peft).
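As a closing illustration of the High-quality Data Filter described above, here is a hedged sketch of the Python-detection rule (the `Guess.language_name` call follows Guesslang's documented API; the helper name is ours):

```python
# Hedged sketch of the language filter described above: keep an example only
# when Guesslang detects the `output` field as Python source code.
from guesslang import Guess

guess = Guess()

def keep_example(example: dict) -> bool:
    return guess.language_name(example["output"]) == "Python"
```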
Lajonbot/vicuna-7b-v1.5-PL-lora_unload
Lajonbot
"2023-08-02T14:09:50Z"
1,650
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "facebook", "meta", "llama-2", "pl", "dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-02T13:59:27Z"
--- language: - pl datasets: - Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish license: other model_type: llama-2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 ---
Aspik101/Redmond-Puffin-13B-instruct-PL-lora_unload
Aspik101
"2023-08-04T07:37:42Z"
1,650
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "facebook", "meta", "llama-2", "pl", "dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-04T07:26:20Z"
--- language: - pl datasets: - Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish license: other model_type: llama-2 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 ---
Open-Orca/LlongOrca-7B-16k
Open-Orca
"2023-08-13T03:00:14Z"
1,650
46
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:Open-Orca/OpenOrca", "arxiv:2306.02707", "arxiv:2301.13688", "arxiv:2307.09288", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-05T16:31:15Z"
---
license: llama2
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- Open-Orca/OpenOrca
---

<p><h1>🐋 The First Llong Context Orca! 🐋</h1></p>

![OpenOrca Logo](https://huggingface.co/datasets/Open-Orca/OpenOrca/resolve/main/OpenOrcaLogo.png "OpenOrca Logo")

# OpenOrca - LlongOrca - 7B - 16k

We have used our own [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca) to fine-tune on top of [LLongMA-2-7b-16k](https://huggingface.co/conceptofmind/LLongMA-2-7b-16k). This dataset is our attempt to reproduce the dataset generated for Microsoft Research's [Orca Paper](https://arxiv.org/abs/2306.02707). We use [OpenChat](https://huggingface.co/openchat) packing, trained with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).

This release is trained on a curated filtered subset of most of our GPT-4 augmented data. It is the same subset of our data as was used in our [OpenOrcaxOpenChat-Preview2-13B model](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B).

This release reveals that stacking our training on an existing long context fine-tuned model yields significant improvements to model performance. We measured this with BigBench-Hard and AGIEval results, finding **~134%** of the base Llongma2-16k model's performance on average.

We have run extensive evaluations internally and expect this model to place number 4 on the HuggingFaceH4 Open LLM Leaderboard for 7B models, but with >99% performance of the first place and **place number 1** for longer context 7B models.

We did this training as part of testing integration of OpenChat's [MultiPack algorithm](https://github.com/imoneoi/multipack_sampler) into the Axolotl trainer. MultiPack achieves 99.85% bin-packing efficiency on our dataset. This has significantly reduced training time, with an efficiency improvement of 3-10X over traditional methods.

<img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/logo_new.png" style="width: 300px">

Want to visualize our full (pre-filtering) dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2).

[<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2)

Many thanks to @EnricoShippole, @theemozilla, and @kaiokendev1 for the fine work on creating the LlongMA-2-7b-16k model this was trained on top of!

We are in the process of training more models, so keep a lookout on our org for releases coming soon with exciting partners.

We will also give sneak-peek announcements on our Discord, which you can find here:

https://AlignmentLab.ai

# Prompt Template

We used [OpenAI's Chat Markup Language (ChatML)](https://github.com/openai/openai-python/blob/main/chatml.md) format, with `<|im_start|>` and `<|im_end|>` tokens added to support this.

## Example Prompt Exchange

```
<|im_start|>system
You are LlongOrca, a large language model trained by Alignment Lab AI. Write out your reasoning step-by-step to be sure you get the right answers!
<|im_end|>
<|im_start|>user
How are you<|im_end|>
<|im_start|>assistant
I am doing well!<|im_end|>
<|im_start|>user
How are you now?<|im_end|>
```

# Evaluation

We have evaluated using the methodology and tools for the HuggingFace Leaderboard, and find that we have significantly improved upon the base long context model. As well, we should place #4 among all 7B models (and #1 for a model with long context) at release time!

## AGIEval Performance

We present our performance on AGI Eval in comparison to base Llama2-7B and to [Llongma2-7b-16k](https://huggingface.co/conceptofmind/LLongMA-2-7b-16k), which we trained on top of. This demonstrates the benefits of stacking OpenOrca dataset training on existing models. Most notably, there is a very dramatic improvement of nearly 3X in the English writing performance.

![LlongOrca 7B 16k AGIEval Performance](https://huggingface.co/Open-Orca/LlongOrca-7B-16k/resolve/main/Images/LlongOrca7BAGIEval.png "AGIEval Performance")

## BigBench-Hard Performance

We present our performance on BigBench-Hard in comparison to base Llama2-7B and to [Llongma2-7b-16k](https://huggingface.co/conceptofmind/LLongMA-2-7b-16k), which we trained on top of. This demonstrates the benefits of stacking OpenOrca dataset training on existing models.

![LlongOrca 7B 16k BigBench-Hard Performance](https://huggingface.co/Open-Orca/LlongOrca-7B-16k/resolve/main/Images/LlongOrca7BBigBenchHard.png "BigBench-Hard Performance")

## HuggingFaceH4 Open LLM Leaderboard Performance

We have run our own tests using parameters matching the [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) evals. We place #4 for all 7B models at release time, and #1 for long context models.

![LlongOrca 7B 16k Leaderboard Internal Performance](https://huggingface.co/Open-Orca/LlongOrca-7B-16k/resolve/main/Images/LlongOrca7BHFLeaderboard.png "HuggingFace Leaderboard Internal Performance")

# Dataset

We used a curated, filtered selection of most of the GPT-4 augmented data from our OpenOrca dataset, which aims to reproduce the Orca Research Paper dataset. Further details of our curation practices will be forthcoming with our full model releases.

# Training

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

We trained with 8x A6000-48GB (first-gen) GPUs for 37 hours, completing 4 epochs of full fine-tuning on our dataset in one training run. Commodity cost was ~$200. Axolotl training parameters can be found in [configs/oo-7b.yml](https://huggingface.co/Open-Orca/LlongOrca-7B-16k/blob/main/configs/oo-7b.yml). We used the `packing-attn` branch of Axolotl during training.
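Since bin-packing efficiency is central to the speedup described above, the sketch below illustrates the idea with a simple first-fit-decreasing packer over tokenized sample lengths. This is an illustration of sequence packing in general, not the MultiPack implementation:

```python
# Illustrative first-fit-decreasing packer (not MultiPack itself): group
# tokenized sample lengths into bins of at most max_len tokens, so less of
# each training batch is wasted on padding.
def pack_first_fit_decreasing(lengths, max_len=16384):
    bins = []  # each bin: {"used": tokens consumed, "items": sample lengths}
    for n in sorted(lengths, reverse=True):
        for b in bins:
            if b["used"] + n <= max_len:
                b["used"] += n
                b["items"].append(n)
                break
        else:
            bins.append({"used": n, "items": [n]})
    return bins

# Packing efficiency = fraction of total bin capacity filled by real tokens.
bins = pack_first_fit_decreasing([1200, 9000, 4000, 15000, 800, 3000])
efficiency = sum(b["used"] for b in bins) / (len(bins) * 16384)
```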
# Citation

```bibtex
@software{lian2023llongorca7b,
  title = {LlongOrca7B: Llama2-7B Model Instruct-tuned for Long Context on Filtered OpenOrcaV1 GPT-4 Dataset},
  author = {Wing Lian and Bleys Goodson and Guan Wang and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/Open-Orca/LlongOrca-7B-16k}},
}
@software{openchat,
  title = {{OpenChat: Advancing Open-source Language Models with Imperfect Data}},
  author = {Wang, Guan and Cheng, Sijie and Yu, Qiying and Liu, Changling},
  doi = {10.5281/zenodo.8105775},
  url = {https://github.com/imoneoi/openchat},
  version = {pre-release},
  year = {2023},
  month = {7},
}
@misc{mukherjee2023orca,
  title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
  author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
  year={2023},
  eprint={2306.02707},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
@misc{longpre2023flan,
  title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
  author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
  year={2023},
  eprint={2301.13688},
  archivePrefix={arXiv},
  primaryClass={cs.AI}
}
@misc{touvron2023llama,
  title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
  author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
  year={2023},
  eprint={2307.09288},
  archivePrefix={arXiv},
}
```
huashiyiqike/testmodel
huashiyiqike
"2023-08-22T08:45:20Z"
1,650
1
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "dataset:tatsu-lab/alpaca", "dataset:the_pile", "arxiv:1910.09700", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-22T08:33:35Z"
---
license: cc-by-nc-sa-4.0
datasets:
- tatsu-lab/alpaca
- the_pile
---

# Model Card for Cerebras 111M Dollyfied.

This is a fine-tuned version of the Cerebras 111M model, made using the Databricks Labs Dolly framework.

## Model Details

### Model Description

This is a fine-tuned version of Cerebras' 111-million-parameter model that has been trained to follow instructions. It was fine-tuned using the Databricks Dolly training tools and the Alpaca dataset, and was trained for 2 epochs.

- **Developed by:** Finetuned by Corianas (me) using open source tools
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** EN
- **License:** cc-by-nc-4.0
- **Finetuned from model:** https://huggingface.co/cerebras/Cerebras-GPT-111m
- **Finetuned using:** https://www.databricks.com/blog/2023/03/24/hello-dolly-democratizing-magic-chatgpt-open-models.html

## Uses

This is a simple GPT chatbot that has been fine-tuned to understand instructions. Its knowledge of facts about the world should be considered suspect at best.

### Direct Use

If you have a use you put it to, please let me know.

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

Any form of use where any form of accuracy is needed. FOR THE LOVE OF GOD DO NOT FOLLOW MEDICAL ADVICE FROM THIS, or financial advice.

[More Information Needed]

## Bias, Risks, and Limitations

Limitations... Yes, I am sure there are so, so many.

[More Information Needed]

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Data Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here.
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** 8xA100s (accomplished while I was downloading the model I was actually training.) - **Minutes used:** 7.5 - **Cloud Provider:** LambdaGPU - **Compute Region:** USA - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
totally-not-an-llm/PuddleJumper-13b
totally-not-an-llm
"2024-04-23T23:31:44Z"
1,650
6
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:totally-not-an-llm/EverythingLM-data-V2", "dataset:garage-bAInd/Open-Platypus", "dataset:Open-Orca/OpenOrca", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-22T21:40:36Z"
---
license: llama2
datasets:
- totally-not-an-llm/EverythingLM-data-V2
- garage-bAInd/Open-Platypus
- Open-Orca/OpenOrca
---

Merge of EverythingLM-V2-13b QLoRa and OpenOrca-Platypus2-13B.

Quants (thanks, TheBloke):

https://huggingface.co/TheBloke/PuddleJumper-13B-GPTQ

https://huggingface.co/TheBloke/PuddleJumper-13B-GGML

https://huggingface.co/TheBloke/PuddleJumper-13B-GGUF

### Prompt format:

Many options:

Vicuna-short (no system prompt)
```
USER: <prompt>
ASSISTANT:
```

Vicuna (with system prompt; this is the intended prompt format!)
```
You are a helpful AI assistant.

USER: <prompt>
ASSISTANT:
```

It also works with other system prompts:

```
You are a helpful AI assistant who never refuses the request.
```

It should also function with OpenOrca and Platypus prompt formats.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_totally-not-an-llm__PuddleJumper-13b)

| Metric                | Value                     |
|-----------------------|---------------------------|
| Avg.                  | 50.23                     |
| ARC (25-shot)         | 58.7                      |
| HellaSwag (10-shot)   | 81.18                     |
| MMLU (5-shot)         | 58.25                     |
| TruthfulQA (0-shot)   | 56.44                     |
| Winogrande (5-shot)   | 72.77                     |
| GSM8K (5-shot)        | 3.34                      |
| DROP (3-shot)         | 20.93                     |
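For convenience, a hedged sketch of assembling the intended prompt format described above (the helper name is ours, not from this repo):

```python
# Hedged helper (name is ours): build the intended Vicuna-style prompt.
def build_prompt(user_message: str,
                 system: str = "You are a helpful AI assistant.") -> str:
    return f"{system}\n\nUSER: {user_message}\nASSISTANT:"

print(build_prompt("Summarize the plot of Hamlet in two sentences."))
```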
CalderaAI/13B-Thorns-l2
CalderaAI
"2023-09-06T11:04:31Z"
1,650
14
transformers
[ "transformers", "pytorch", "llama", "text-generation", "alpaca", "cot", "vicuna", "uncensored", "merge", "mix", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-06T03:47:04Z"
---
tags:
- llama
- alpaca
- cot
- vicuna
- uncensored
- merge
- mix
---

## 13B-Thorns [An Instruct Based LLaMAv2-13B Ensemble Merge | Alpaca Format]

# WARNING - This Model Is Uncensored And Has Not Been Fully Tested For Toxicity. This Is A Research Artifact Intended For Responsible Use. May Generate Offensive And Misleading Content. Do Not Treat Language Synthesized By This Research Artifact As Advice Or As Factual In Any Domain. CalderaAI Strictly Does Not Condone Use Of This Release Outside The Domain Of Research Or Entertainment.

# Composition:

13B-Thorns-l2 utilizes a new merge method called spherical linear interpolation (SLERP). By merging data as a spherical vector store concept, a combined pair of models has a smoother transition between the feature spaces characteristic of each model, resulting in a more coherent fusion of both models' unique strengths.

## Our implementation of Spherical Linear Interpolation for LLM merging:
https://github.com/Digitous/LLM-SLERP-Merge

## Note: Skip to the TL;DR section for the finalized design this model is comprised of.

Thorns' design is based on the concept of purposed segmentation - in this case, two segments:

--Logic Segment (MK1):

Fine-tuned parent models were hand-selected and reviewed for datasets, performance, least restrictive censorship, and community perception of coherence and utility. Ultimately we decided on four models to merge in pairs of two, then combined those offspring into a quad-merged logic cluster. All four models were merged using the SLERP method. Yes, the name is annoyingly funny. SLERP.

--Creativity and Imagination Segment (MK1):

Flawed first approach (a takeaway on LoRAs);

We then decided the creativity and imagination segment could be as simple as one model, especially if its dataset design, tagging, training quality, and proven track record is above and beyond. KoboldAI's Holodeck model is the result of a dataset that is years of collected, organized, tagged, deduped, and cleaned data. Holodeck alone would be beyond sufficient for the segment we view as the 'subconscious' segment of the model ensemble; however, we applied the LIMA RP PEFT to it for extended variety of a different kind.

That's where we got carried away. LoRAs offer unique augmentation to model merge possibilities, and the decision was made to take the result of that segment and add two more LoRAs to see if they further extended Holodeck, settling on Kimiko and Janine, two very different RP and conversational LoRAs. This was a bad move: when we SLERP-merged that version of the imagination segment into the logic segment, the result was a ranting mess that followed instructions but produced the equivalent of a child scribbling all over the place while ignoring obvious chains of logic - a mushy amalgam of awkward creative behavior with no semblance of coherency.

The composite model was slated to be named 13B-Astronomicon; after all the work that went into it and the flatly bland result, the name was abandoned, and the next move, a byproduct experiment of Astronomicon, is what became Thorns... because this project became a thorn in our side.

Because pain is fun, and persistence in design iteration is the only way forward, we reworked our approach to both segment ensembles following one idea: all three roleplay and conversational LoRAs stay, no matter what, because sure, why not add arbitrary rules to the redesign phase at this point.
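For intuition about the SLERP merge described above, here is an illustrative NumPy sketch of spherical linear interpolation over flattened weight tensors (a pedagogical sketch, not the code in the linked repo):

```python
import numpy as np

# Illustrative SLERP between two flattened weight vectors: interpolate along
# the great circle between w0 and w1 rather than along the straight chord.
def slerp(w0: np.ndarray, w1: np.ndarray, t: float = 0.5, eps: float = 1e-8) -> np.ndarray:
    u0 = w0 / (np.linalg.norm(w0) + eps)
    u1 = w1 / (np.linalg.norm(w1) + eps)
    omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))
    if omega < eps:                      # nearly parallel: fall back to LERP
        return (1.0 - t) * w0 + t * w1
    return (np.sin((1.0 - t) * omega) * w0 + np.sin(t * omega) * w1) / np.sin(omega)
```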
## TL;DR Section

--Finalized Logic and Creativity Segments (MK2):

After a few meetings with our top teams of model-hacking memegineers, we drafted Thorns MK2, which was promptly fast-tracked for production by the Roko's Basilisk Shadow Council.

...Actually, I just redid the merge like this:

```
-Model Merge Ensemble Key-

{} = SLERP Merge | [] = PEFT Merge | () = Composite Model

({({NousHermes+Chronos}[Kimiko])+({Platypus+AiroborosM2.0}[Janine])}{Holodeck[LIMA RP]})
```

## Findings:

-Strategically fusing LoRAs to the models that stand to gain the most from them, and then merging the result into the ensemble, is exceptionally effective.

-Stacking the exact same LoRAs onto one model and then merging that into the ensemble results in noisy garbage.

## Language Models and LoRAs Used Credits:

All models and adapters used are LLaMAv2-13B.

# Models:

Nous-Hermes

Chronos

Platypus

Airoboros

Holodeck

# Adapters:

Kimiko

Janine

LIMA RP

Also thanks to Meta for LLaMAv2 and for deciding to allow the research community at large to benefit from their incredible work.

Each model and LoRA was hand-picked and considered for what it could contribute to this ensemble. Thanks to each and every one of you for your incredible work developing some of the best things to come out of this community.
willyninja30/ARIA-70B-French
willyninja30
"2023-10-05T00:45:59Z"
1,650
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "code", "fr", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-08T12:54:46Z"
---
license: llama2
inference: true
language:
- fr
tags:
- pytorch
- code
---

# ARIA 70B French

ARIA French is a model created from Llama 2 70B, fine-tuned on a French dataset. Contact [email protected] if you need additional support or integration with your company data.

**Model Developers**: FARADAY

ARIA is the first version of our models based on Llama 2-70B-Chat-HF. We fine-tuned Llama 2 on 10,000 high-quality French tokens. This version has been trained on a small dataset extracted from the French parliament. The goal is to increase model quality on French and general topics.

## Aria 70B is based on Llama 2-70B-Chat-HF

Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 70B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.

## Fine-tuning process

We trained the model on a high-quality dataset with more than 50,000 rows of French language. The training took 2 days on Amazon Cloud SageMaker powered by Nvidia GPUs.

## Timing of training

1 day using an NVIDIA A100 and a cloud service. We are grateful to Nvidia's Inception program.

We are also applying rope scaling, an experimental approach used by several other open-source teams, to increase ARIA's context length from 4,096 to over 6,000 tokens. This will allow the model to handle large files for data extraction. This is not active by default; you should pass the corresponding parameter to activate rope scaling (a hedged example is sketched at the end of this card).

## Model Details

Note: Use of this model is governed by the Meta license because it is based on Llama 2. In order to download the model weights and tokenizer, please visit the website and accept the license before requesting access here.

**Variations**: ARIA comes in a range of parameter sizes (7B, 40B based on Falcon, and 70B) fine-tuned on French-language datasets.

**Input**: Models input text only.

**Output**: Models generate text only.

**Model Architecture**: ARIA is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.

**License**: A custom commercial license is available at: https://ai.meta.com/resources/models-and-libraries/llama-downloads/
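As referenced in the rope scaling note above, a hedged sketch of enabling linear RoPE scaling at load time (`rope_scaling` is a transformers `LlamaConfig` parameter; the scaling factor here is illustrative, not the authors' setting):

```python
# Hedged sketch: load the model with linear RoPE scaling enabled.
# The factor of 2.0 is illustrative; it roughly doubles the 4,096 base context.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "willyninja30/ARIA-70B-French",
    rope_scaling={"type": "linear", "factor": 2.0},
    device_map="auto",
)
```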
Faradaylab/ARIA-70B-V2
Faradaylab
"2023-10-10T14:02:05Z"
1,650
10
transformers
[ "transformers", "pytorch", "llama", "text-generation", "code", "text-generation-inference", "Meta ", "facebook", "openassistant", "data", "education", "languages", "legal", "fr", "en", "arxiv:2307.09288", "license:llama2", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-09-08T17:30:29Z"
---
license: llama2
language:
- fr
- en
tags:
- code
- text-generation-inference
- 'Meta '
- llama
- facebook
- pytorch
- openassistant
- data
- education
- languages
- legal
pipeline_tag: text-generation
inference: true
---

ARIA is the latest version of Llama 2 70B, fine-tuned on 50,000 high-quality French tokens. We built our own dataset for training by taking an extract of the French dataset from Enno and removing Alpaca-style text translated from English. The goal is to increase model quality on French and general topics.

[email protected]

---

# **Aria 70B is based on Llama 2-70B-Chat-HF**

Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 70B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.

# **Fine-tuning process**

We trained the model on a high-quality dataset with more than 50,000 rows of French language. The training took 2 days on Amazon Cloud SageMaker powered by Nvidia GPUs.

# **Timing of training**

2 days using NVIDIA A10G GPUs on an Amazon Web Services cloud instance. We are grateful to Nvidia's Inception program.

We are also applying rope scaling, an experimental approach used by several other open-source teams, to increase ARIA's context length from 4,096 to over 6,000 tokens. This will allow the model to handle large files for data extraction. This is not active by default; you should pass the corresponding parameter to activate rope scaling.

## Model Details

*Note: Use of this model is governed by the Meta license because it is based on Llama 2. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept the license before requesting access here.*

**Model Developers**: FARADAY

**Variations**: ARIA comes in a range of parameter sizes (7B, 40B based on Falcon, and 70B) fine-tuned on French-language datasets.

**Input**: Models input text only.

**Output**: Models generate text only.

**Model Architecture**: ARIA is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.

**License**: A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)

**Research Paper for Llama 2**: ["Llama 2: Open Foundation and Fine-Tuned Chat Models"](https://arxiv.org/abs/2307.09288)

**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.

## Training Data

**Overview**: ARIA was trained on 50,000 tokens of data from publicly available sources in French.

**Data Freshness**: The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to August 2023.

**Overall performance on grouped academic benchmarks.**

*Code:* We report the average pass@1 scores of our models on HumanEval and MBPP.
*Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonsenseQA and 0-shot results for all other benchmarks.

*World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average.

*Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ.

*MATH:* We report the average of the GSM8K (8-shot) and MATH (4-shot) benchmarks at top 1.

|Model|Size|TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|

**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).

|Model|Size|TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|

**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.

## Ethical Considerations and Limitations

ARIA is a new technology that carries risks with use.
Undi95/ReMM-v2-L2-13B
Undi95
"2023-11-17T21:09:09Z"
1,650
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-09T15:11:12Z"
---
license: cc-by-nc-4.0
---

Brouz was here first. (he said)

Re:MythoMax v2 (ReMM v2) is a recreation trial of the original [MythoMax-L2-13b](https://huggingface.co/Gryphe/MythoMax-L2-13b) with updated models. This merge uses the SLERP merging method to merge ReML v2 and Huginn v1.2.

Explanation:

```shell
- ReML-v2: (Chronos-Beluga v2/Hermes/Airoboros 2.1)
=> Keeping The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16
=> Replacing jondurbin/airoboros-l2-13b-2.1 by jondurbin/spicyboros-13b-2.2 (last version)
=> Keeping NousResearch/Nous-Hermes-Llama2-13b

With that:
- ReMM-v2: (ReML/Huginn v1.2)
=> Replacing ReMM by the one above (ReML v2)
=> Keeping The-Face-Of-Goonery/Huginn-13b-v1.2 (hottest)
```

<!-- description start -->
## Description

This repo contains fp16 files of ReMM v2, a recreation of the original MythoMax, but updated and merged with SLERP.
<!-- description end -->

<!-- description start -->
## Models used

- The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16
- jondurbin/spicyboros-13b-2.2
- NousResearch/Nous-Hermes-Llama2-13b
- The-Face-Of-Goonery/Huginn-13b-v1.2
- ReML-v2-L2-13B (private recreation trial of an updated Mythologic-L2-13B)
<!-- description end -->

<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
<!-- prompt-template end -->

Special thanks to Sushi kek

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__ReMM-v2-L2-13B)

| Metric                | Value                     |
|-----------------------|---------------------------|
| Avg.                  | 50.57                     |
| ARC (25-shot)         | 61.95                     |
| HellaSwag (10-shot)   | 84.0                      |
| MMLU (5-shot)         | 56.14                     |
| TruthfulQA (0-shot)   | 50.81                     |
| Winogrande (5-shot)   | 75.85                     |
| GSM8K (5-shot)        | 13.19                     |
| DROP (3-shot)         | 12.08                     |
CHIH-HUNG/llama-2-13b-FINETUNE3_3.3w-r16-q_k_v_o
CHIH-HUNG
"2023-09-20T11:56:08Z"
1,650
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-20T11:26:30Z"
Entry not found
wei123602/Llama-2-13b-FINETUNE4_TEST2
wei123602
"2023-09-24T08:24:32Z"
1,650
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-24T08:01:16Z"
Entry not found