| Column | Dtype | Range / values |
| --- | --- | --- |
| modelId | stringlengths | 5–122 |
| author | stringlengths | 2–42 |
| last_modified | timestamp[us, tz=UTC] | |
| downloads | int64 | 0–738M |
| likes | int64 | 0–11k |
| library_name | stringclasses | 245 values |
| tags | listlengths | 1–4.05k |
| pipeline_tag | stringclasses | 48 values |
| createdAt | timestamp[us, tz=UTC] | |
| card | stringlengths | 1–901k |
Ramikan-BR/TiamaPY-v31
Ramikan-BR
2024-06-20T12:02:55Z
704
0
transformers
[ "transformers", "pytorch", "safetensors", "gguf", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/tinyllama-chat-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-06-20T11:17:29Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft base_model: unsloth/tinyllama-chat-bnb-4bit --- # Uploaded model - **Developed by:** Ramikan-BR - **License:** apache-2.0 - **Finetuned from model:** unsloth/tinyllama-chat-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
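The card above names the base model and training setup but includes no usage snippet. Below is a minimal sketch, assuming the checkpoint loads through the standard `transformers` text-generation pipeline; the model id comes from this record, while the prompt and token budget are illustrative only.

```python
# Minimal sketch (not from the original card): load Ramikan-BR/TiamaPY-v31 with
# the standard transformers text-generation pipeline. Assumes the PyTorch
# weights in the repo load via AutoModelForCausalLM under the hood.
from transformers import pipeline

generator = pipeline("text-generation", model="Ramikan-BR/TiamaPY-v31")

# TinyLlama-chat derivatives are usually prompted in a chat style; a plain
# string is enough for a quick smoke test.
prompt = "Write a Python function that reverses a string."
print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```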
GroNLP/gpt2-small-italian-embeddings
GroNLP
2023-06-02T14:04:13Z
703
1
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "gpt2", "text-generation", "adaption", "recycled", "gpt2-small", "it", "arxiv:2012.05628", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-03-02T23:29:04Z
--- language: it tags: - adaption - recycled - gpt2-small pipeline_tag: text-generation --- # GPT-2 recycled for Italian (small, adapted lexical embeddings) [Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) • [Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475) ## Model description This model is based on the small OpenAI GPT-2 ([`gpt2`](https://huggingface.co/gpt2)) model. The Transformer layer weights in this model are identical to the original English model, but the lexical layer has been retrained for an Italian vocabulary. For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [GitHub](https://github.com/wietsedv/gpt2-recycle). ## Related models ### Dutch - [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings. - [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**) - [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings. ### Italian - [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings. - [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**) - [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings. ## How to use ```python from transformers import pipeline pipe = pipeline("text-generation", model="GroNLP/gpt2-small-italian-embeddings") ``` ```python from transformers import AutoTokenizer, AutoModel, TFAutoModel tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-small-italian-embeddings") model = AutoModel.from_pretrained("GroNLP/gpt2-small-italian-embeddings") # PyTorch model = TFAutoModel.from_pretrained("GroNLP/gpt2-small-italian-embeddings") # TensorFlow ``` ## BibTeX entry ```bibtex @misc{devries2020good, title={As good as new. How to successfully recycle English GPT-2 to make models for other languages}, author={Wietse de Vries and Malvina Nissim}, year={2020}, eprint={2012.05628}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
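One note on the snippets in the card above: `AutoModel` returns bare hidden states, so it will not generate text by itself. The sketch below is not part of the original card; it shows how one might run actual Italian generation by loading the LM-head variant instead, with an illustrative prompt and sampling settings.

```python
# Sketch (not from the original card): generation with the LM-head model class
# rather than AutoModel, which only returns hidden states.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-small-italian-embeddings")
model = AutoModelForCausalLM.from_pretrained("GroNLP/gpt2-small-italian-embeddings")

inputs = tokenizer("La città di Roma è", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```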
microsoft/beit-large-patch16-224
microsoft
2022-01-28T10:19:16Z
703
1
transformers
[ "transformers", "pytorch", "jax", "beit", "image-classification", "vision", "dataset:imagenet", "dataset:imagenet-21k", "arxiv:2106.08254", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - image-classification - vision datasets: - imagenet - imagenet-21k --- # BEiT (large-sized model, fine-tuned on ImageNet-1k) BEiT model pre-trained in a self-supervised fashion on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit). Disclaimer: The team releasing BEiT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches. Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=microsoft/beit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import BeitFeatureExtractor, BeitForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-large-patch16-224') model = BeitForImageClassification.from_pretrained('microsoft/beit-large-patch16-224') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. 
## Training data The BEiT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py). Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). ### Pretraining For all pre-training related hyperparameters, we refer to page 15 of the [original paper](https://arxiv.org/abs/2106.08254). ## Evaluation results For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```@article{DBLP:journals/corr/abs-2106-08254, author = {Hangbo Bao and Li Dong and Furu Wei}, title = {BEiT: {BERT} Pre-Training of Image Transformers}, journal = {CoRR}, volume = {abs/2106.08254}, year = {2021}, url = {https://arxiv.org/abs/2106.08254}, archivePrefix = {arXiv}, eprint = {2106.08254}, timestamp = {Tue, 29 Jun 2021 16:55:04 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
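To make the preprocessing description above concrete (resize to 224x224, then normalize each RGB channel with mean 0.5 and standard deviation 0.5), here is a small torchvision sketch. It is not taken from the card; `BeitFeatureExtractor` remains the canonical route and may differ in interpolation details, and the image path is a placeholder.

```python
# Illustrative torchvision equivalent of the preprocessing described above:
# resize to 224x224, scale to [0, 1], then normalize with mean 0.5 / std 0.5.
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),                      # HWC uint8 -> CHW float in [0, 1]
    transforms.Normalize(mean=[0.5, 0.5, 0.5],
                         std=[0.5, 0.5, 0.5]),  # maps values to roughly [-1, 1]
])

image = Image.open("example.jpg").convert("RGB")   # placeholder path
pixel_values = preprocess(image).unsqueeze(0)      # shape: (1, 3, 224, 224)
```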
heegyu/gpt2-toxic
heegyu
2023-02-16T06:41:41Z
703
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-02-16T06:42:32Z
--- license: apache-2.0 ---
lst-nectec/HoogBERTa-NER-lst20
lst-nectec
2024-06-13T12:35:55Z
703
0
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "token-classification", "th", "dataset:lst20", "doi:10.57967/hf/2115", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-04-05T11:13:26Z
--- datasets: - lst20 language: - th widget: - text: วัน ที่ _ 12 _ มีนาคม นี้ _ ฉัน จะ ไป เที่ยว วัดพระแก้ว _ ที่ กรุงเทพ library_name: transformers --- # HoogBERTa This repository includes the Thai pretrained language representation (HoogBERTa_base) fine-tuned for the **Named-Entity Recognition (NER) task**. # Documentation ## Prerequisite Since we use subword-nmt BPE encoding, input needs to be pre-tokenized using the [BEST](https://huggingface.co/datasets/best2009) standard before being fed to HoogBERTa. ``` pip install attacut ``` ## Getting Started To initialize the model from the hub, use the following commands ```python from transformers import RobertaTokenizerFast, RobertaForTokenClassification from attacut import tokenize import torch tokenizer = RobertaTokenizerFast.from_pretrained("lst-nectec/HoogBERTa-NER-lst20") model = RobertaForTokenClassification.from_pretrained("lst-nectec/HoogBERTa-NER-lst20") ``` To do NER tagging, use the following commands ```python from transformers import pipeline nlp = pipeline('token-classification', model=model, tokenizer=tokenizer, aggregation_strategy="none") sentence = "วันที่ 12 มีนาคมนี้ ฉันจะไปเที่ยววัดพระแก้ว ที่กรุงเทพ" all_sent = [] sentences = sentence.split(" ") for sent in sentences: all_sent.append(" ".join(tokenize(sent)).replace("_","[!und:]")) sentence = " _ ".join(all_sent) print(nlp(sentence)) ``` For batch processing, ```python from transformers import pipeline nlp = pipeline('token-classification', model=model, tokenizer=tokenizer, aggregation_strategy="none") sentenceL = ["วันที่ 12 มีนาคมนี้","ฉันจะไปเที่ยววัดพระแก้ว ที่กรุงเทพ"] inputList = [] for sentX in sentenceL: sentences = sentX.split(" ") all_sent = [] for sent in sentences: all_sent.append(" ".join(tokenize(sent)).replace("_","[!und:]")) sentence = " _ ".join(all_sent) inputList.append(sentence) print(nlp(inputList)) ``` # Hugging Face Models 1. `HoogBERTaEncoder` - [HoogBERTa](https://huggingface.co/lst-nectec/HoogBERTa): `Feature Extraction` and `Mask Language Modeling` 2. `HoogBERTaMuliTaskTagger`: - [HoogBERTa-NER-lst20](https://huggingface.co/lst-nectec/HoogBERTa-NER-lst20): `Named-entity recognition (NER)` based on LST20 - [HoogBERTa-POS-lst20](https://huggingface.co/lst-nectec/HoogBERTa-POS-lst20): `Part-of-speech tagging (POS)` based on LST20 - [HoogBERTa-SENTENCE-lst20](https://huggingface.co/lst-nectec/HoogBERTa-SENTENCE-lst20): `Clause Boundary Classification` based on LST20 # Citation Please cite as: ```bibtex @inproceedings{porkaew2021hoogberta, title = {HoogBERTa: Multi-task Sequence Labeling using Thai Pretrained Language Representation}, author = {Peerachet Porkaew and Prachya Boonkwan and Thepchai Supnithi}, booktitle = {The Joint International Symposium on Artificial Intelligence and Natural Language Processing (iSAI-NLP 2021)}, year = {2021}, address={Online} } ``` Download the full-text [PDF](https://drive.google.com/file/d/1hwdyIssR5U_knhPE2HJigrc0rlkqWeLF/view?usp=sharing) and check out the code on [GitHub](https://github.com/lstnlp/HoogBERTa)
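The examples in the card above use `aggregation_strategy="none"`, which returns one prediction per subword token. As a convenience, the same pipeline can group subwords into entity spans with the built-in `"simple"` strategy; the sketch below is not from the original card, and how cleanly the grouped spans align with the LST20 tag scheme is an assumption.

```python
# Sketch (not from the original card): group subword predictions into entity
# spans using the pipeline's "simple" aggregation strategy.
from transformers import RobertaTokenizerFast, RobertaForTokenClassification, pipeline
from attacut import tokenize

tokenizer = RobertaTokenizerFast.from_pretrained("lst-nectec/HoogBERTa-NER-lst20")
model = RobertaForTokenClassification.from_pretrained("lst-nectec/HoogBERTa-NER-lst20")
nlp = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")

# Same attacut pre-tokenisation as in the card, applied to one space-free chunk.
sentence = " ".join(tokenize("ฉันจะไปเที่ยววัดพระแก้ว")).replace("_", "[!und:]")
for ent in nlp(sentence):
    print(ent["entity_group"], ent["word"], round(float(ent["score"]), 3))
```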
TheBloke/Speechless-Llama2-13B-GGUF
TheBloke
2023-09-27T12:47:08Z
703
3
transformers
[ "transformers", "gguf", "llama", "facebook", "meta", "pytorch", "llama-2", "text-generation", "en", "dataset:Open-Orca/OpenOrca", "dataset:garage-bAInd/Open-Platypus", "dataset:WizardLM/WizardLM_evol_instruct_V2_196k", "arxiv:2307.09288", "base_model:uukuguy/speechless-llama2-13b", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
2023-09-02T09:17:16Z
--- language: - en license: llama2 library_name: transformers tags: - facebook - meta - pytorch - llama - llama-2 datasets: - Open-Orca/OpenOrca - garage-bAInd/Open-Platypus - WizardLM/WizardLM_evol_instruct_V2_196k model_name: Speechless Llama2 13B base_model: uukuguy/speechless-llama2-13b inference: false model_creator: Jiangwen Su model_type: llama pipeline_tag: text-generation prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Speechless Llama2 13B - GGUF - Model creator: [Jiangwen Su](https://huggingface.co/uukuguy) - Original model: [Speechless Llama2 13B](https://huggingface.co/uukuguy/speechless-llama2-13b) <!-- description start --> ## Description This repo contains GGUF format model files for [Jiangwen Su's Speechless Llama2 13B](https://huggingface.co/uukuguy/speechless-llama2-13b). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. 
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Speechless-Llama2-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Speechless-Llama2-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Speechless-Llama2-13B-GGUF) * [Jiangwen Su's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/uukuguy/speechless-llama2-13b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how.
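As a back-of-envelope check of the Q4_K entry above, the stated 4.5 bpw can be reproduced by counting bits per super-block. The exact overhead layout assumed here (one fp16 scale and one fp16 min per super-block) is not stated in the card, so treat this as an illustration of where the fractional bits come from rather than a format specification.

```python
# Rough Q4_K bit accounting; the fp16 super-block scale/min overhead is an
# assumption, but the total matches the 4.5 bpw quoted above.
blocks_per_superblock = 8
weights_per_block = 32
weights = blocks_per_superblock * weights_per_block        # 256 weights per super-block

quant_bits = weights * 4                                   # 4-bit quantized values
block_meta_bits = blocks_per_superblock * (6 + 6)          # 6-bit scale + 6-bit min per block
superblock_meta_bits = 2 * 16                              # assumed fp16 scale + min per super-block

print((quant_bits + block_meta_bits + superblock_meta_bits) / weights)  # 4.5
```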
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [speechless-llama2-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Speechless-Llama2-13B-GGUF/blob/main/speechless-llama2-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [speechless-llama2-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Speechless-Llama2-13B-GGUF/blob/main/speechless-llama2-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [speechless-llama2-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Speechless-Llama2-13B-GGUF/blob/main/speechless-llama2-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [speechless-llama2-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Speechless-Llama2-13B-GGUF/blob/main/speechless-llama2-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [speechless-llama2-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Speechless-Llama2-13B-GGUF/blob/main/speechless-llama2-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [speechless-llama2-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Speechless-Llama2-13B-GGUF/blob/main/speechless-llama2-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [speechless-llama2-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Speechless-Llama2-13B-GGUF/blob/main/speechless-llama2-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [speechless-llama2-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Speechless-Llama2-13B-GGUF/blob/main/speechless-llama2-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [speechless-llama2-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Speechless-Llama2-13B-GGUF/blob/main/speechless-llama2-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [speechless-llama2-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Speechless-Llama2-13B-GGUF/blob/main/speechless-llama2-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [speechless-llama2-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Speechless-Llama2-13B-GGUF/blob/main/speechless-llama2-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [speechless-llama2-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Speechless-Llama2-13B-GGUF/blob/main/speechless-llama2-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Speechless-Llama2-13B-GGUF and below it, a specific filename to download, such as: speechless-llama2-13b.q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Speechless-Llama2-13B-GGUF speechless-llama2-13b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Speechless-Llama2-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Speechless-Llama2-13B-GGUF speechless-llama2-13b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m speechless-llama2-13b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Speechless-Llama2-13B-GGUF", model_file="speechless-llama2-13b.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Jiangwen Su's Speechless Llama2 13B <p><h1> speechless-llama2-13b:v1.1 </h1></p> speechless-llama2-13b:v1.1 is a merge of Open-Orca/OpenOrca-Platypus2-13B and WizardLM/WizardLM-13B-V1.2. | Metric | Value | | --- | --- | | ARC | 62.03 | | HellaSwag | 81.85 | | MMLU | 58.52 | | TruthfulQA | 55.7 | | Average | 64.52 | # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. ||Training Data|Params|Content Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>| *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. 
Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](arxiv.org/abs/2307.09288) ## Intended Use **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212). **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws).Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program. ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)| |---|---|---|---| |Llama 2 7B|184320|400|31.22| |Llama 2 13B|368640|400|62.44| |Llama 2 70B|1720320|400|291.42| |Total|3311616||539.00| **CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. ## Evaluation Results In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks.For all the evaluations, we use our internal evaluations library. 
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. ## Ethical Considerations and Limitations Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. 
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide) ## Reporting Issues Please report any software “bug,” or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Llama Model Index |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf| |---|---|---|---|---| |7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)| |13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)| |70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)| <!-- original-model-card end -->
google/owlv2-large-patch14
google
2024-04-15T17:02:12Z
703
3
transformers
[ "transformers", "pytorch", "safetensors", "owlv2", "zero-shot-object-detection", "vision", "arxiv:2306.09683", "license:apache-2.0", "region:us" ]
zero-shot-object-detection
2023-10-13T10:57:17Z
--- license: apache-2.0 tags: - vision - zero-shot-object-detection inference: false --- # Model Card: OWLv2 ## Model Details The OWLv2 model (short for Open-World Localization) was proposed in [Scaling Open-Vocabulary Object Detection](https://arxiv.org/abs/2306.09683) by Matthias Minderer, Alexey Gritsenko, Neil Houlsby. OWLv2, like OWL-ViT, is a zero-shot text-conditioned object detection model that can be used to query an image with one or multiple text queries. The model uses CLIP as its multi-modal backbone, with a ViT-like Transformer to get visual features and a causal language model to get the text features. To use CLIP for detection, OWL-ViT removes the final token pooling layer of the vision model and attaches a lightweight classification and box head to each transformer output token. Open-vocabulary classification is enabled by replacing the fixed classification layer weights with the class-name embeddings obtained from the text model. The authors first train CLIP from scratch and fine-tune it end-to-end with the classification and box heads on standard detection datasets using a bipartite matching loss. One or multiple text queries per image can be used to perform zero-shot text-conditioned object detection. ### Model Date June 2023 ### Model Type The model uses a CLIP backbone with a ViT-L/14 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. The CLIP backbone is trained from scratch and fine-tuned together with the box and class prediction heads with an object detection objective. ### Documents - [OWLv2 Paper](https://arxiv.org/abs/2306.09683) ### Use with Transformers ```python import requests from PIL import Image import numpy as np import torch from transformers import AutoProcessor, Owlv2ForObjectDetection from transformers.utils.constants import OPENAI_CLIP_MEAN, OPENAI_CLIP_STD processor = AutoProcessor.from_pretrained("google/owlv2-large-patch14") model = Owlv2ForObjectDetection.from_pretrained("google/owlv2-large-patch14") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) texts = [["a photo of a cat", "a photo of a dog"]] inputs = processor(text=texts, images=image, return_tensors="pt") # forward pass with torch.no_grad(): outputs = model(**inputs) # Note: boxes need to be visualized on the padded, unnormalized image # hence we'll set the target image sizes (height, width) based on that def get_preprocessed_image(pixel_values): pixel_values = pixel_values.squeeze().numpy() unnormalized_image = (pixel_values * np.array(OPENAI_CLIP_STD)[:, None, None]) + np.array(OPENAI_CLIP_MEAN)[:, None, None] unnormalized_image = (unnormalized_image * 255).astype(np.uint8) unnormalized_image = np.moveaxis(unnormalized_image, 0, -1) unnormalized_image = Image.fromarray(unnormalized_image) return unnormalized_image unnormalized_image = get_preprocessed_image(inputs.pixel_values) target_sizes = torch.Tensor([unnormalized_image.size[::-1]]) # Convert outputs (bounding boxes and class logits) to final bounding boxes and scores results = processor.post_process_object_detection( outputs=outputs, threshold=0.2, target_sizes=target_sizes ) i = 0 # Retrieve predictions for the first image for the corresponding text queries text = texts[i] boxes, scores, labels = results[i]["boxes"], results[i]["scores"], results[i]["labels"] for box, score, label in zip(boxes, scores, 
labels): box = [round(i, 2) for i in box.tolist()] print(f"Detected {text[label]} with confidence {round(score.item(), 3)} at location {box}") ``` ## Model Use ### Intended Use The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, text-conditioned object detection. We also hope it can be used for interdisciplinary studies of the potential impact of such models, especially in areas that commonly require identifying objects whose label is unavailable during training. #### Primary intended uses The primary intended users of these models are AI researchers. We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models. ## Data The CLIP backbone of the model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet. The prediction heads of OWL-ViT, along with the CLIP backbone, are fine-tuned on publicly available object detection datasets such as [COCO](https://cocodataset.org/#home) and [OpenImages](https://storage.googleapis.com/openimages/web/index.html). (to be updated for v2) ### BibTeX entry and citation info ```bibtex @misc{minderer2023scaling, title={Scaling Open-Vocabulary Object Detection}, author={Matthias Minderer and Alexey Gritsenko and Neil Houlsby}, year={2023}, eprint={2306.09683}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
TheBloke/SOLAR-10.7B-v1.0-GGUF
TheBloke
2023-12-13T14:04:46Z
703
12
transformers
[ "transformers", "gguf", "solar", "base_model:upstage/SOLAR-10.7B-v1.0", "license:apache-2.0", "region:us" ]
null
2023-12-13T13:58:08Z
--- base_model: upstage/SOLAR-10.7B-v1.0 inference: false license: apache-2.0 model_creator: upstage model_name: Solar 10.7B v1.0 model_type: solar prompt_template: '{prompt} ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Solar 10.7B v1.0 - GGUF - Model creator: [upstage](https://huggingface.co/upstage) - Original model: [Solar 10.7B v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) <!-- description start --> ## Description This repo contains GGUF format model files for [upstage's Solar 10.7B v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/SOLAR-10.7B-v1.0-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/SOLAR-10.7B-v1.0-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/SOLAR-10.7B-v1.0-GGUF) * [upstage's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: None ``` {prompt} ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [solar-10.7b-v1.0.Q2_K.gguf](https://huggingface.co/TheBloke/SOLAR-10.7B-v1.0-GGUF/blob/main/solar-10.7b-v1.0.Q2_K.gguf) | Q2_K | 2 | 4.55 GB| 7.05 GB | smallest, significant quality loss - not recommended for most purposes | | [solar-10.7b-v1.0.Q3_K_S.gguf](https://huggingface.co/TheBloke/SOLAR-10.7B-v1.0-GGUF/blob/main/solar-10.7b-v1.0.Q3_K_S.gguf) | Q3_K_S | 3 | 4.66 GB| 7.16 GB | very small, high quality loss | | [solar-10.7b-v1.0.Q3_K_M.gguf](https://huggingface.co/TheBloke/SOLAR-10.7B-v1.0-GGUF/blob/main/solar-10.7b-v1.0.Q3_K_M.gguf) | Q3_K_M | 3 | 5.19 GB| 7.69 GB | very small, high quality loss | | [solar-10.7b-v1.0.Q3_K_L.gguf](https://huggingface.co/TheBloke/SOLAR-10.7B-v1.0-GGUF/blob/main/solar-10.7b-v1.0.Q3_K_L.gguf) | Q3_K_L | 3 | 5.65 GB| 8.15 GB | small, substantial quality loss | | [solar-10.7b-v1.0.Q4_0.gguf](https://huggingface.co/TheBloke/SOLAR-10.7B-v1.0-GGUF/blob/main/solar-10.7b-v1.0.Q4_0.gguf) | Q4_0 | 4 | 6.07 GB| 8.57 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [solar-10.7b-v1.0.Q4_K_S.gguf](https://huggingface.co/TheBloke/SOLAR-10.7B-v1.0-GGUF/blob/main/solar-10.7b-v1.0.Q4_K_S.gguf) | Q4_K_S | 4 | 6.10 GB| 8.60 GB | small, greater quality loss | | [solar-10.7b-v1.0.Q4_K_M.gguf](https://huggingface.co/TheBloke/SOLAR-10.7B-v1.0-GGUF/blob/main/solar-10.7b-v1.0.Q4_K_M.gguf) | Q4_K_M | 4 | 6.46 GB| 8.96 GB | medium, balanced quality - recommended | | [solar-10.7b-v1.0.Q5_0.gguf](https://huggingface.co/TheBloke/SOLAR-10.7B-v1.0-GGUF/blob/main/solar-10.7b-v1.0.Q5_0.gguf) | Q5_0 | 5 | 7.40 GB| 9.90 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [solar-10.7b-v1.0.Q5_K_S.gguf](https://huggingface.co/TheBloke/SOLAR-10.7B-v1.0-GGUF/blob/main/solar-10.7b-v1.0.Q5_K_S.gguf) | Q5_K_S | 5 | 7.40 GB| 9.90 GB | large, low quality loss - recommended | | [solar-10.7b-v1.0.Q5_K_M.gguf](https://huggingface.co/TheBloke/SOLAR-10.7B-v1.0-GGUF/blob/main/solar-10.7b-v1.0.Q5_K_M.gguf) | Q5_K_M | 5 | 7.60 GB| 10.10 GB | large, very low quality loss - recommended | | [solar-10.7b-v1.0.Q6_K.gguf](https://huggingface.co/TheBloke/SOLAR-10.7B-v1.0-GGUF/blob/main/solar-10.7b-v1.0.Q6_K.gguf) | Q6_K | 6 | 8.81 GB| 11.31 GB | very large, extremely low quality loss | | [solar-10.7b-v1.0.Q8_0.gguf](https://huggingface.co/TheBloke/SOLAR-10.7B-v1.0-GGUF/blob/main/solar-10.7b-v1.0.Q8_0.gguf) | Q8_0 | 8 | 11.40 GB| 13.90 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/SOLAR-10.7B-v1.0-GGUF and below it, a specific filename to download, such as: solar-10.7b-v1.0.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/SOLAR-10.7B-v1.0-GGUF solar-10.7b-v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/SOLAR-10.7B-v1.0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/SOLAR-10.7B-v1.0-GGUF solar-10.7b-v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m solar-10.7b-v1.0.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). 
#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python

# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python

# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python

# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python

# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python

# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./solar-10.7b-v1.0.Q4_K_M.gguf",  # Download the model file first
  n_ctx=4096,             # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,            # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35         # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "{prompt}",      # Prompt
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True        # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./solar-10.7b-v1.0.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->

## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: upstage's Solar 10.7B v1.0 # **Meet 10.7B Solar: Elevating Performance with Upstage Depth UP Scaling!** # **Introduction** We introduce the first 10.7 billion (B) parameter model, SOLAR-10.7B. It's compact, yet remarkably powerful, and demonstrates unparalleled state-of-the-art performance in models with parameters under 30B. We developed the Depth Up-Scaling technique. Built on the Llama2 architecture, SOLAR-10.7B incorporates the innovative Upstage Depth Up-Scaling. We then integrated Mistral 7B weights into the upscaled layers, and finally, continued pre-training for the entire model. Depth-Upscaled SOLAR-10.7B has remarkable performance. It outperforms models with up to 30B parameters, even surpassing the recent Mixtral 8X7B model. For detailed information, please refer to the experimental table ([link to be updated soon]). Solar 10.7B is an ideal choice for fine-tuning. SOLAR-10.7B offers robustness and adaptability for your fine-tuning needs. Our simple instruction fine-tuning using the SOLAR-10.7B pre-trained model yields significant performance improvements. [[link to be updated soon]] # **Usage Instructions** This model is pre-trained and is capable of just generating random text. To use it for chatting, you must fine-tune the model first. 
### **Version** Make sure you have the correct version of the transformers library installed: ```sh pip install transformers==4.35.2 ``` ### **Loading the Model** Use the following Python code to load the model: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("Upstage/SOLAR-10.7B-v1.0") model = AutoModelForCausalLM.from_pretrained( "Upstage/SOLAR-10.7B-v1.0", device_map="auto", torch_dtype=torch.float16, ) ``` ### **Generating Text** To generate text, use the following Python code: ```python text = "Hi, my name is " inputs = tokenizer(text, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=64) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ### **The Upstage AI Team** ### Upstage is creating the best LLM and DocAI. Please find more information at https://upstage.ai ### **Contact Us** ### Any questions and suggestions, please use the discussion tab. If you want to contact us directly, drop an email to [[email protected] ](mailto:[email protected]) <!-- original-model-card end -->
shenwzh3/alpha-umi-caller-7b
shenwzh3
2024-01-31T12:45:51Z
703
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-31T04:42:28Z
---
license: apache-2.0
language:
- en
library_name: transformers
---

## alpha-umi-caller-7b

## Introduction

alpha-umi-caller-7b is the 7B caller model of the alpha-UMi multi-LLM tool-learning agent framework proposed in *"Small LLMs Are Weak Tool Learners: A Multi-LLM Agent"*. It collaborates with alpha-umi-planner (7B/13B) and alpha-umi-summarizer (7B/13B) to form a multi-LLM agent system. So far, alpha-umi-caller-7b is applicable to ToolBench; we will soon release more versions that are applicable to more tasks. For usage details, please refer to the GitHub repo of the paper.
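The card above describes the planner → caller → summarizer hand-off but does not include an inference snippet. Below is a minimal, hedged sketch of loading the caller with plain `transformers` and generating a tool call from a planner hand-off. The prompt layout, tool list, and the `Next: caller.` hand-off string are illustrative assumptions only, not the official alpha-UMi/ToolBench format; follow the paper's GitHub repo for the real pipeline.

```python
# A minimal sketch (not the official pipeline): load the caller with transformers
# and generate a tool call from a hypothetical planner hand-off.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shenwzh3/alpha-umi-caller-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical prompt: the tool list, dialogue history, and planner rationale the
# caller is expected to see. The exact template is defined by the ToolBench pipeline.
prompt = (
    "System: You are the caller. Invoke one of the available tools with JSON arguments.\n"
    "Tools: [get_weather(city: str)]\n"
    "User: What's the weather in Paris?\n"
    "Planner: I should look up the current weather. Next: caller.\n"
    "Caller:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
# Print only the newly generated tokens (the tool call produced by the caller).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```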
MediaTek-Research/Breeze-7B-Base-v1_0
MediaTek-Research
2024-03-07T04:38:39Z
703
9
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "zh", "en", "arxiv:2403.02712", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-26T07:15:20Z
--- pipeline_tag: text-generation license: apache-2.0 language: - zh - en --- # Model Card for Breeze-7B-Base-v1_0 MediaTek Research Breeze-7B (hereinafter referred to as Breeze-7B) is a language model family that builds on top of [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1), specifically intended for Traditional Chinese use. [Breeze-7B-Base](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v1_0) is the base model for the Breeze-7B series. It is suitable for use if you have substantial fine-tuning data to tune it for your specific use case. [Breeze-7B-Instruct](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) derives from the base model Breeze-7B-Base, making the resulting model amenable to be used as-is for commonly seen tasks. The current release version of Breeze-7B is v1.0, which has undergone a more refined training process compared to Breeze-7B-v0_1, resulting in significantly improved performance in both English and Traditional Chinese. For details of this model please read our [paper](https://arxiv.org/abs/2403.02712). Practicality-wise: - Breeze-7B-Base expands the original vocabulary with an additional 30,000 Traditional Chinese tokens. With the expanded vocabulary, and everything else being equal, Breeze-7B operates at twice the inference speed for Traditional Chinese to Mistral-7B and Llama 7B. [See [Inference Performance](#inference-performance).] - Breeze-7B-Instruct can be used as is for common tasks such as Q&A, RAG, multi-round chat, and summarization. Performance-wise: - Breeze-7B-Instruct demonstrates impressive performance in benchmarks for Traditional Chinese and English when compared to similar-sized open-source contemporaries such as Taiwan-LLM-7B/13B-chat, QWen(1.5)-7B-Chat, and Yi-6B-Chat. [See [Chat Model Performance](#chat-model-performance).] *A project by the members (in alphabetical order): Chan-Jan Hsu 許湛然, Chang-Le Liu 劉昶樂, Feng-Ting Liao 廖峰挺, Po-Chun Hsu 許博竣, Yi-Chang Chen 陳宜昌, and the supervisor Da-Shan Shiu 許大山.* ## Demo [Try Demo Here](https://huggingface.co/spaces/MediaTek-Research/Demo_Breeze-7B-Instruct-v1.0) ## Features - Breeze-7B-Base-v1_0 - Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese - 8k-token context length - Breeze-7B-Instruct-v1_0 - Expanding the vocabulary dictionary size from 32k to 62k to better support Traditional Chinese - 8k-token context length - Multi-turn dialogue (without special handling for harmfulness) ## Model Details - Breeze-7B-Base-v1_0 - Finetuned from: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) - Model type: Causal decoder-only transformer language model - Language: English and Traditional Chinese (zh-tw) - Breeze-7B-Instruct-v1_0 - Finetuned from: [MediaTek-Research/Breeze-7B-Base-v1_0](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v1_0) - Model type: Causal decoder-only transformer language model - Language: English and Traditional Chinese (zh-tw) ## Base Model Performance Here we compare Breeze-7B-Base-v1_0 with other open-source base language models of similar parameter size that are widely recognized for their good performance in Chinese. **TMMLU+**, **DRCD**, and **Table** source from [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2). 
[MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2) derives from [TCEval-v1](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval) and [ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus). **MMLU** sources from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train). We use the code revised from [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate **TMMLU+**, **DRCD**, **Table**, and **MMLU**. All choice problems adapt the selection by the log-likelihood. | Models | #Parameters | TMMLU+ (ACC) | DRCD (EM) | Table (ACC) | MMLU (ACC) | |---------------------------------------------- |--------|--------------|-------------|-------------|------------| | | |TC, Knowledge |TC, Reasoning|TC, Reasoning|EN, Knowledge| | | | 5 shot | 3 shot | 5 shot | 5 shot | | [**Breeze-7B-Base-v1_0**](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v1_0) | 7B | 42.67 | 80.61 | 31.99 | 61.24 | | [Yi-6B](https://huggingface.co/01-ai/Yi-6B) | 6B | 49.63 | 76.61 | 34.72 | 65.35 | | [Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) | 7B | 46.59 | 74.41 | 30.56 | 63.07 | | [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 7B | 36.93 | 79.27 | 27.78 | 64.89 | ## Instruction-tuned Model Performance Here we compare Breeze-7B-Instruct-v1_0 with other open-source instruction-tuned language models of similar parameter size that are widely recognized for their good performance in Chinese. Also, we listed the benchmark scores of GPT-3.5 Turbo (1106), which represents one of the most widely used high-quality cloud language model API services, for reference. **TMMLU+**, **DRCD**, **Table**, and **MT-Bench-tw** source from [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2). [MediaTek-Research/TCEval-v2](https://huggingface.co/datasets/MediaTek-Research/TCEval-v2) derives from [TCEval-v1](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval) and [ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus). **MMLU** sources from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train). **MT-Bench** source from [lmsys/mt_bench_human_judgments](https://huggingface.co/datasets/lmsys/mt_bench_human_judgments). We use the code revised from [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate **TMMLU+**, **DRCD**, **Table**, and **MMLU**. All choice problems adapt the selection by the log-likelihood. We use the code revised from [fastchat llm_judge](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) (GPT4 as judge) to evaluate **MT-Bench-tw** and **MT-Bench**. 
| Models | #Parameters | MT-Bench-tw (Score)| TMMLU+ (ACC) | Table (ACC) | MT-Bench (Score) | MMLU (ACC) | |---------------------------------------------------------------------------------------------------------|--------|--------------------|--------------|-------------|------------------|-------------| | | |TC, Chat |TC, Knowledge |TC, Reasoning|EN, Chat |EN, Knowledge| | | |0 shot | 0 shot | 0 shot |0 shot | 0 shot | | [**Breeze-7B-Instruct-v1_0**](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) | 7B |6.0 | 42.67 | 39.58 |7.4 | 61.73 | | [GPT-3.5-Turbo](https://openai.com) | |7.1 | 43.56 | 45.14 |7.9 | 67.09 | | [Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat) | 7B |6.4 | 45.65 | 34.72 |7.6 | 61.85 | | [Mistral-7B-v0.2-Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 7B |5.6 | 34.95 | 33.33 |7.6 | 59.97 | | [Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat) | 6B |5.0 | 44.79 | 25.69 |6.0 | 59.45 | | [Taiwan-LLM-13B-v2.0-chat](https://huggingface.co/yentinglin/Taiwan-LLM-13B-v2.0-chat) | 13B |5.0 | 29.47 | 23.61 |N/A* | 50.50 | | [Taiwan-LLM-7B-v2.1-chat](https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.1-chat) | 7B |4.2 | 28.08 | 31.25 |N/A* | 42.72 | \* Taiwan-LLM models respond to multi-turn questions (English) in Traditional Chinese. | Details on MT-Bench-tw (0 shot):<br/>Models | STEM |Extraction|Reasoning| Math | Coding | Roleplay| Writing |Humanities| AVG | |-----------------------------------------------------|---------|---------|---------|---------|---------|---------|---------|----------| --------- | | **Breeze-7B-Instruct-v1_0** | 7.8 | 5.2 | 4.2 | 4.2 | 4.1 | 7.6 | 5.9 | 9.1 | 6.0 | | GPT-3.5-Turbo | 7.8 | 6.1 | 5.1 | 6.4 | 6.2 | 8.7 | 7.4 | 9.3 | 7.1 | | Qwen1.5-7B-Chat | 9 | 5.6 | 4.7 | 2.8 | 3.7 | 8.0 | 8.0 | 9.4 | 6.4 | | Mistral-7B-v0.2-Instruct | 6.9 | 4.6 | 4.3 | 3.3 | 4.4 | 7.2 | 6.2 | 7.8 | 5.6 | | Yi-6B-Chat | 7.3 | 2.7 | 3.1 | 3.3 | 2.3 | 7.2 | 5.2 | 8.8 | 5.0 | | Taiwan-LLM-13B-v2.0-chat | 6.1 | 3.4 | 4.1 | 2.3 | 3.1 | 7.4 | 6.6 | 6.8 | 5.0 | | Taiwan-LLM-7B-v2.1-chat | 5.2 | 2.6 | 2.3 | 1.2 | 3.4 | 6.6 | 5.7 | 6.8 | 4.2 | | Details on TMMLU+ (0 shot):<br/>Model | STEM | Social Science | Humanities | Other | AVG | |-----------------------------------------------------|--------------|----------------|------------|------------|---------| | **Breeze-7B-Instruct-v1_0** | 36.46 | 48.38 | 45.11 | 40.75 | 42.67 | | Mistral-7B-v0.2-Instruct | 32.79 | 38.05 | 34.89 | 34.04 | 34.94 | | Yi-6B-Chat | 37.80 | 51.74 | 45.36 | 44.25 | 44.79 | | GPT-3.5-Turbo | 41.58 | 48.52 | 40.96 | 43.18 | 43.56 | | Qwen1.5-7B-Chat | 41.48 | 51.66 | 44.05 | 45.40 | 45.65 | | Taiwan-LLM-13B-v2.0-chat | 27.74 | 33.69 | 27.03 | 29.43 | 29.47 | | Taiwan-LLM-7B-v2.1-chat | 25.58 | 31.76 | 27.36 | 27.61 | 28.08 | ## Inference Performance In this test, we use the first 700 characters of the [web article](https://health.udn.com/health/story/5976/7699252?from=udn_ch1005_main_index) as the input and ask the model to write the same article again. All inferences run on 2 RTX A6000 GPUs (using `vllm`, with a tensor-parallel size of 2). 
| Models | Inference Time (sec)|Estimated Max Input Length (Char)| |--------------------------------------------------------------------|-------------------|--------------------------| | **Breeze-7B-Instruct-v1_0** | 10.74 | 11.1k | | Qwen1.5-7B-Chat | 9.35 | 38.9k | | Yi-6B-Chat | 10.62 | 5.2k | | Mistral-7B-Instruct-v0.2 | 20.48 | 5.1k | | Taiwan-LLM-7B-v2.1-chat | 26.26 | 2.2k | <!---| Taiwan-LLM-13B-v2.0-chat | 36.80 | 2.2k |---> <!---## Long-context Performance TBD---> ## Use in Transformers First install direct dependencies: ``` pip install transformers torch accelerate ``` If you want faster inference using flash-attention2, you need to install these dependencies: ```bash pip install packaging ninja pip install flash-attn ``` Then load the model in transformers: ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Instruction Model model = AutoModelForCausalLM.from_pretrained( "MediaTek-Research/Breeze-7B-Instruct-v1_0", device_map="auto", torch_dtype=torch.bfloat16, # attn_implementation="flash_attention_2" # optional ) # Basemodel model = AutoModelForCausalLM.from_pretrained( "MediaTek-Research/Breeze-7B-Base-v1_0", device_map="auto", torch_dtype=torch.bfloat16, # attn_implementation="flash_attention_2" # optional ) ``` **For Breeze-7B-Instruct**, the structure of the query is ```txt <s>SYS_PROMPT [INST] QUERY1 [/INST] RESPONSE1 [INST] QUERY2 [/INST] ``` where `SYS_PROMPT`, `QUERY1`, `RESPONSE1`, and `QUERY2` can be provided by the user. The suggested default `SYS_PROMPT` is ```txt You are a helpful AI assistant built by MediaTek Research. The user you are helping speaks Traditional Chinese and comes from Taiwan. ``` We also integrate `chat_template` into [tokenizer_config.json](tokenizer_config.json), so you can `apply_chat_template` to get the prompt. ```python >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-v1_0") >>> chat = [ ... {"role": "user", "content": "你好,請問你可以完成什麼任務?"}, ... {"role": "assistant", "content": "你好,我可以幫助您解決各種問題、提供資訊和協助您完成許多不同的任務。例如:回答技術問題、提供建議、翻譯文字、尋找資料或協助您安排行程等。請告訴我如何能幫助您。"}, ... {"role": "user", "content": "太棒了!"}, ... ] >>> tokenizer.apply_chat_template(chat, tokenize=False) "<s>You are a helpful AI assistant built by MediaTek Research. The user you are helping speaks Traditional Chinese and comes from Taiwan. [INST] 你好,請問你可以完成什麼任務? [/INST] 你好,我可以幫助您解決各種問題、提供資訊和協助您完成許多不同的任務。例如:回答技術問題、提供建議、翻譯文字、尋找資料或協助您安排行程等。請告訴我如何能幫助您。 [INST] 太棒了! [/INST] " # Tokenized results # ['▁', '你好', ',', '請問', '你', '可以', '完成', '什麼', '任務', '?'] # ['▁', '你好', ',', '我', '可以', '幫助', '您', '解決', '各種', '問題', '、', '提供', '資訊', '和', '協助', '您', '完成', '許多', '不同', '的', '任務', '。', '例如', ':', '回答', '技術', '問題', '、', '提供', '建議', '、', '翻譯', '文字', '、', '尋找', '資料', '或', '協助', '您', '安排', '行程', '等', '。', '請', '告訴', '我', '如何', '能', '幫助', '您', '。'] # ['▁', '太', '棒', '了', '!'] >>> outputs = model.generate(tokenizer.apply_chat_template(chat, return_tensors="pt"), max_new_tokens=128) >>> print(tokenizer.decode(outputs[0])) ``` ## Citation <!-- ``` @article{breeze7b2024, title={}, author={}, journal={arXiv}, year={2024} } ``` ---> ``` @article{MediaTek-Research2024breeze7b, title={Breeze-7B Technical Report}, author={Chan-Jan Hsu and Chang-Le Liu and Feng-Ting Liao and Po-Chun Hsu and Yi-Chang Chen and Da-Shan Shiu}, year={2024}, eprint={2403.02712}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
0-hero/Matter-0.2-7B-DPO
0-hero
2024-04-13T18:49:16Z
703
3
transformers
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "conversational", "en", "dataset:0-hero/Matter-0.2-alpha", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-13T12:24:15Z
---
license: apache-2.0
datasets:
- 0-hero/Matter-0.2-alpha
language:
- en
---

## Matter 7B - 0.2 - DPO (Mistral 7B Finetune)

DPO version of [Matter 7B](https://huggingface.co/0-hero/Matter-0.2-7B), fine-tuned on the [Matter dataset](https://huggingface.co/datasets/0-hero/Matter-0.2-alpha), which is curated from over 35 datasets analyzing >6B tokens.

### Training

Prompt format: This model uses the ChatML prompt format.

```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

### Function Calling

The model also supports function calling, using the following additional tokens.

Function call tokens:
- <|begin_func|> - Function call start token
- <|end_func|> - Function call end token

Function call response tokens:
- <|begin_func_response|> - Function response start token
- <|end_func_response|> - Function response end token

Example:

```
<|im_start|>system
You are a helpful assistant with access to the following functions. Use them if required -
{
    "name": "get_news_headlines",
    "description": "Get the latest news headlines",
    "parameters": {
        "type": "object",
        "properties": {
            "country": {
                "type": "string",
                "description": "The country for which to fetch news"
            }
        },
        "required": [
            "country"
        ]
    }
}
<|im_end|>
<|im_start|>user
Can you tell me the latest news headlines for the United States?<|im_end|>
<|im_start|>assistant
<|begin_func|>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}<|end_func|><|im_end|>
<|im_start|>user
<|begin_func_response|>{
    "headlines": [
        "Biden announces new vaccine mandates",
        "Hurricane Ida devastates Louisiana",
        "Apple unveils new iPhone",
        "NASA's Perseverance rover collects first Mars rock sample"
    ]
}<|end_func_response|>
<|im_end|>
<|im_start|>assistant
Here are the latest news headlines for the United States:
1. Biden announces new vaccine mandates
2. Hurricane Ida devastates Louisiana
3. Apple unveils new iPhone
4. NASA's Perseverance rover collects first Mars rock sample
<|im_end|>
```
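Since the card documents the ChatML template but no inference code, here is a short, hedged sketch of assembling that template with plain `transformers` generation. The sampling settings and the use of `<|im_end|>` as a stop marker are illustrative assumptions, not documented values.

```python
# A minimal sketch: run the ChatML prompt format documented above with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "0-hero/Matter-0.2-7B-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the ChatML prompt exactly as shown in the card.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Explain in two sentences why DPO is applied after supervised fine-tuning.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,     # sampling settings are illustrative, not tuned recommendations
    temperature=0.7,
)

completion = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=False)
# Assumption: trim at the ChatML end-of-turn marker if the model emits it.
print(completion.split("<|im_end|>")[0].strip())
```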
TommyZQ/llama3
TommyZQ
2024-04-28T08:52:18Z
703
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-18T21:02:02Z
--- license: apache-2.0 --- ## Model Details - **Model name:** llama3 - **Model version:** 1.0 - **Developed by:** [Development Team or Organization Name] - **Model type:** [e.g., Machine Translation, Text Classification, etc.] - **Model framework:** [e.g., TensorFlow, PyTorch, etc.] - **Training data:** [Description of the dataset(s) used for training] - **Validation data:** [Description of the dataset(s) used for validation] ## Intended Use - **Primary intended users:** [Who should be using this model - e.g., data scientists, developers] - **Out-of-scope use cases:** [List any use cases that are not recommended] ## Model Performance - **Metrics:** [Description of the metrics used to evaluate model performance] - **Evaluation results:** [Summary of the model's performance based on the chosen metrics] ## Ethical Considerations - **Bias detection:** [Any steps taken to address potential bias in the training data] - **Fairness assessment:** [Description of fairness assessment methods and results if applicable] ## Caveats and Recommendations - **Known limitations:** [List known limitations of the model] - **Best practices:** [Suggestions on best practices for implementation of the model] ## Change Log - **[Date]:** Model version 1.0 released. ## Contact Information - **Maintainer(s):** [Contact details for the person or team responsible for maintaining the model] - **Issues:** [Information on where to report issues or bugs] ## License - **Model license:** [Details of the model's usage license, if applicable]
Josephgflowers/Qllama-.5B-RAG-1
Josephgflowers
2024-05-07T01:19:19Z
703
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-04T13:38:39Z
---
license: mit
---

This model needs further fine-tuning.

See: txtai-rag.py for a RAG implementation with txtai Wikipedia.

Llamafied version of Qwen 0.5B, further fine-tuned for 1 epoch on wiki, math, science, and chat datasets. Based on Cinder data. This model was then further fine-tuned for 1 epoch on RAG data.

Rough list of final datasets:

- formatted_beaugogh-openorca-multiplechoice-10k.txt
- formatted_BYC-Sophie-samsum-chatgpt-summary.txt
- formatted_conversation_bio.txt
- formatted_conversation_create_cinder_1.txt
- formatted_conversation_Electrical-engineering.txt
- formatted_conversation_multiturn_stem.txt
- formatted_conversation_physics.txt
- formatted_conversation_qa_rag_chem_prog_dataset.txt
- formatted_conversation_qa_robot_ai_dataset.txt
- formatted_conversation_qa_shopify_dataset1_rag.txt
- formatted_conversation_qa_shopify_dataset_rag.txt
- formatted_dyumat-databricks-dolly-5k-rag-split.txt
- formatted_Hypoxiic-wikipedia-summary-subset1k-summary_token.txt
- formatted_neural-bridge-rag-dataset-12000.txt
- formatted_rachid16-rag_finetuning_data.txt
- formatted_tiny_stories_1_summary_token_tag_token-xaa.txt
- formatted_tiny_stories_2_summary_token_assistant-xah.txt
- med_rag_small.txt
- z_formatted_cinder_test.txt
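The referenced txtai-rag.py script is not reproduced here, so below is a hedged sketch of the general RAG pattern the card points at: retrieve a context passage (stubbed with a hard-coded string standing in for a txtai Wikipedia lookup) and stuff it into the prompt before generating. The `retrieve` stub and the Context/Question/Answer template are illustrative assumptions, not the author's actual script or trained prompt format.

```python
# A minimal RAG-style sketch (not the author's txtai-rag.py). Retrieval is stubbed;
# in practice it would be backed by a txtai Wikipedia embeddings index.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Josephgflowers/Qllama-.5B-RAG-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

def retrieve(query: str) -> str:
    # Placeholder for a real retriever; returns a canned passage for illustration.
    return "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France."

question = "Where is the Eiffel Tower located?"
context = retrieve(question)

# Hypothetical prompt template: context first, then the question.
prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```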
MrBlackSheep/XTOONS
MrBlackSheep
2024-06-05T17:57:49Z
703
0
diffusers
[ "diffusers", "checkpoint", "text-to-image", "en", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-06-05T13:01:04Z
---
license: creativeml-openrail-m
language:
- en
tags:
- checkpoint
library_name: diffusers
pipeline_tag: text-to-image
---

### Model Description

(includes 3 different versions: full, pruned and inpainting SD checkpoints, plus preview images for each of them)

I created this model as a by-product of my original merge, https://civitai.com/models/295429 aka CBMX. The point was to go in a totally different direction, from a semi-realistic cartoon style to a very cartoonish style.

Basically it's quite good with character loras, embeddings and so on, with a cartoonish (but still a bit realistic) style. As I created this thing my main point was to get it to work with CHARACTER loras etc. So that's what it's basically all about. With this model you can also make nice images that aren't cartoonish at all. It's totally up to you.

I prefer using ADetailer to get faces perfect. https://github.com/Bing-su/adetailer

- **Developed by:** MrBlackSheep
- **Model type:** Checkpoint **(3 different ones: full, pruned and inpainting version)**
- **License:** creativeml-openrail-m

![00474-762419504.png](https://cdn-uploads.huggingface.co/production/uploads/64a994c773790912c77ec444/zDVpYazziXbxDuE8ZP7MA.png)
![XTOONS_full.preview.png](https://cdn-uploads.huggingface.co/production/uploads/64a994c773790912c77ec444/k7jyjqX-PQQWrwdAF7Olk.png)
![XTOONS.inpainting.preview.png](https://cdn-uploads.huggingface.co/production/uploads/64a994c773790912c77ec444/Qup0wXgUb1FqGxVLbfTAC.png)
![XTOONS.inpainting.preview2.png](https://cdn-uploads.huggingface.co/production/uploads/64a994c773790912c77ec444/2KS_S2ZXVcX5BiDU791_k.png)
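The card lists the checkpoint variants but no loading code, so here is a hedged sketch of loading one of the single-file Stable Diffusion checkpoints with `diffusers`. The local filename is hypothetical (download whichever full/pruned variant you want from this repo first), and the prompt, step count and guidance scale are illustrative, not the author's recommended settings.

```python
# A minimal sketch, with assumptions: the .safetensors path below is hypothetical --
# replace it with the checkpoint file you downloaded from this repo.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "./XTOONS_full.safetensors",   # hypothetical local path to the downloaded checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe(
    prompt="cartoon portrait of a red-haired adventurer, bold lines, vibrant colors",
    negative_prompt="blurry, lowres, bad anatomy",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("xtoons_sample.png")
```

The inpainting checkpoint would presumably be loaded the same way through diffusers' inpainting pipeline instead; face refinement with ADetailer, as the author suggests, is done in a UI such as AUTOMATIC1111 rather than in this snippet.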
LucyRez/CodeQwen-Swift
LucyRez
2024-06-24T22:58:13Z
703
0
null
[ "gguf", "license:other", "region:us" ]
null
2024-06-20T11:54:16Z
--- license: other license_name: tongyi-qianwen-research license_link: https://huggingface.co/Qwen/CodeQwen1.5-7B/blob/main/LICENSE ---
AlirezaF138/Gemma2_Farsi-Q6_K-GGUF
AlirezaF138
2024-06-20T22:08:11Z
703
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "fa", "base_model:alibidaran/Gemma2_Farsi", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-20T22:06:37Z
--- base_model: alibidaran/Gemma2_Farsi language: - fa library_name: transformers license: apache-2.0 tags: - llama-cpp - gguf-my-repo --- # AlirezaF138/Gemma2_Farsi-Q6_K-GGUF This model was converted to GGUF format from [`alibidaran/Gemma2_Farsi`](https://huggingface.co/alibidaran/Gemma2_Farsi) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/alibidaran/Gemma2_Farsi) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo AlirezaF138/Gemma2_Farsi-Q6_K-GGUF --hf-file gemma2_farsi-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo AlirezaF138/Gemma2_Farsi-Q6_K-GGUF --hf-file gemma2_farsi-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo AlirezaF138/Gemma2_Farsi-Q6_K-GGUF --hf-file gemma2_farsi-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo AlirezaF138/Gemma2_Farsi-Q6_K-GGUF --hf-file gemma2_farsi-q6_k.gguf -c 2048 ```
NikolayKozloff/MKLLM-7B-Instruct-Q8_0-GGUF
NikolayKozloff
2024-06-24T16:25:27Z
703
1
null
[ "gguf", "axolotl", "llama-cpp", "gguf-my-repo", "mk", "en", "base_model:trajkovnikola/MKLLM-7B-Instruct", "license:cc-by-nc-sa-4.0", "region:us" ]
null
2024-06-24T16:24:56Z
--- base_model: trajkovnikola/MKLLM-7B-Instruct language: - mk - en license: cc-by-nc-sa-4.0 tags: - axolotl - llama-cpp - gguf-my-repo --- # NikolayKozloff/MKLLM-7B-Instruct-Q8_0-GGUF This model was converted to GGUF format from [`trajkovnikola/MKLLM-7B-Instruct`](https://huggingface.co/trajkovnikola/MKLLM-7B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/trajkovnikola/MKLLM-7B-Instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo NikolayKozloff/MKLLM-7B-Instruct-Q8_0-GGUF --hf-file mkllm-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo NikolayKozloff/MKLLM-7B-Instruct-Q8_0-GGUF --hf-file mkllm-7b-instruct-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo NikolayKozloff/MKLLM-7B-Instruct-Q8_0-GGUF --hf-file mkllm-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo NikolayKozloff/MKLLM-7B-Instruct-Q8_0-GGUF --hf-file mkllm-7b-instruct-q8_0.gguf -c 2048 ```
fishbone64/gte-Qwen2-7B-instruct-Q8_0-GGUF
fishbone64
2024-06-26T17:03:03Z
703
1
sentence-transformers
[ "sentence-transformers", "gguf", "mteb", "transformers", "Qwen2", "sentence-similarity", "llama-cpp", "gguf-my-repo", "base_model:Alibaba-NLP/gte-Qwen2-7B-instruct", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-06-26T17:02:19Z
--- base_model: Alibaba-NLP/gte-Qwen2-7B-instruct license: apache-2.0 tags: - mteb - sentence-transformers - transformers - Qwen2 - sentence-similarity - llama-cpp - gguf-my-repo model-index: - name: gte-qwen2-7B-instruct results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 91.31343283582089 - type: ap value: 67.64251402604096 - type: f1 value: 87.53372530755692 - task: type: Classification dataset: name: MTEB AmazonPolarityClassification type: mteb/amazon_polarity config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 97.497825 - type: ap value: 96.30329547047529 - type: f1 value: 97.49769793778039 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (en) type: mteb/amazon_reviews_multi config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 62.564 - type: f1 value: 60.975777935041066 - task: type: Retrieval dataset: name: MTEB ArguAna type: mteb/arguana config: default split: test revision: c22ab2a51041ffd869aaddef7af8d8215647e41a metrics: - type: map_at_1 value: 36.486000000000004 - type: map_at_10 value: 54.842 - type: map_at_100 value: 55.206999999999994 - type: map_at_1000 value: 55.206999999999994 - type: map_at_3 value: 49.893 - type: map_at_5 value: 53.105000000000004 - type: mrr_at_1 value: 37.34 - type: mrr_at_10 value: 55.143 - type: mrr_at_100 value: 55.509 - type: mrr_at_1000 value: 55.509 - type: mrr_at_3 value: 50.212999999999994 - type: mrr_at_5 value: 53.432 - type: ndcg_at_1 value: 36.486000000000004 - type: ndcg_at_10 value: 64.273 - type: ndcg_at_100 value: 65.66199999999999 - type: ndcg_at_1000 value: 65.66199999999999 - type: ndcg_at_3 value: 54.352999999999994 - type: ndcg_at_5 value: 60.131 - type: precision_at_1 value: 36.486000000000004 - type: precision_at_10 value: 9.395000000000001 - type: precision_at_100 value: 0.996 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 22.428 - type: precision_at_5 value: 16.259 - type: recall_at_1 value: 36.486000000000004 - type: recall_at_10 value: 93.95400000000001 - type: recall_at_100 value: 99.644 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 67.283 - type: recall_at_5 value: 81.294 - task: type: Clustering dataset: name: MTEB ArxivClusteringP2P type: mteb/arxiv-clustering-p2p config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 56.461169803700564 - task: type: Clustering dataset: name: MTEB ArxivClusteringS2S type: mteb/arxiv-clustering-s2s config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 51.73600434466286 - task: type: Reranking dataset: name: MTEB AskUbuntuDupQuestions type: mteb/askubuntudupquestions-reranking config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 67.57827065898053 - type: mrr value: 79.08136569493911 - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 83.53324575999243 - type: cos_sim_spearman value: 81.37173362822374 - type: euclidean_pearson value: 82.19243335103444 - type: euclidean_spearman value: 81.33679307304334 - type: manhattan_pearson 
value: 82.38752665975699 - type: manhattan_spearman value: 81.31510583189689 - task: type: Classification dataset: name: MTEB Banking77Classification type: mteb/banking77 config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 87.56818181818181 - type: f1 value: 87.25826722019875 - task: type: Clustering dataset: name: MTEB BiorxivClusteringP2P type: mteb/biorxiv-clustering-p2p config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 50.09239610327673 - task: type: Clustering dataset: name: MTEB BiorxivClusteringS2S type: mteb/biorxiv-clustering-s2s config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 46.64733054606282 - task: type: Retrieval dataset: name: MTEB CQADupstackAndroidRetrieval type: BeIR/cqadupstack config: default split: test revision: f46a197baaae43b4f621051089b82a364682dfeb metrics: - type: map_at_1 value: 33.997 - type: map_at_10 value: 48.176 - type: map_at_100 value: 49.82 - type: map_at_1000 value: 49.924 - type: map_at_3 value: 43.626 - type: map_at_5 value: 46.275 - type: mrr_at_1 value: 42.059999999999995 - type: mrr_at_10 value: 53.726 - type: mrr_at_100 value: 54.398 - type: mrr_at_1000 value: 54.416 - type: mrr_at_3 value: 50.714999999999996 - type: mrr_at_5 value: 52.639 - type: ndcg_at_1 value: 42.059999999999995 - type: ndcg_at_10 value: 55.574999999999996 - type: ndcg_at_100 value: 60.744 - type: ndcg_at_1000 value: 61.85699999999999 - type: ndcg_at_3 value: 49.363 - type: ndcg_at_5 value: 52.44 - type: precision_at_1 value: 42.059999999999995 - type: precision_at_10 value: 11.101999999999999 - type: precision_at_100 value: 1.73 - type: precision_at_1000 value: 0.218 - type: precision_at_3 value: 24.464 - type: precision_at_5 value: 18.026 - type: recall_at_1 value: 33.997 - type: recall_at_10 value: 70.35900000000001 - type: recall_at_100 value: 91.642 - type: recall_at_1000 value: 97.977 - type: recall_at_3 value: 52.76 - type: recall_at_5 value: 61.148 - task: type: Retrieval dataset: name: MTEB CQADupstackEnglishRetrieval type: BeIR/cqadupstack config: default split: test revision: ad9991cb51e31e31e430383c75ffb2885547b5f0 metrics: - type: map_at_1 value: 35.884 - type: map_at_10 value: 48.14 - type: map_at_100 value: 49.5 - type: map_at_1000 value: 49.63 - type: map_at_3 value: 44.646 - type: map_at_5 value: 46.617999999999995 - type: mrr_at_1 value: 44.458999999999996 - type: mrr_at_10 value: 53.751000000000005 - type: mrr_at_100 value: 54.37800000000001 - type: mrr_at_1000 value: 54.415 - type: mrr_at_3 value: 51.815 - type: mrr_at_5 value: 52.882 - type: ndcg_at_1 value: 44.458999999999996 - type: ndcg_at_10 value: 54.157 - type: ndcg_at_100 value: 58.362 - type: ndcg_at_1000 value: 60.178 - type: ndcg_at_3 value: 49.661 - type: ndcg_at_5 value: 51.74999999999999 - type: precision_at_1 value: 44.458999999999996 - type: precision_at_10 value: 10.248 - type: precision_at_100 value: 1.5890000000000002 - type: precision_at_1000 value: 0.207 - type: precision_at_3 value: 23.928 - type: precision_at_5 value: 16.878999999999998 - type: recall_at_1 value: 35.884 - type: recall_at_10 value: 64.798 - type: recall_at_100 value: 82.345 - type: recall_at_1000 value: 93.267 - type: recall_at_3 value: 51.847 - type: recall_at_5 value: 57.601 - task: type: Retrieval dataset: name: MTEB CQADupstackGamingRetrieval type: BeIR/cqadupstack config: default split: test revision: 
4885aa143210c98657558c04aaf3dc47cfb54340 metrics: - type: map_at_1 value: 39.383 - type: map_at_10 value: 53.714 - type: map_at_100 value: 54.838 - type: map_at_1000 value: 54.87800000000001 - type: map_at_3 value: 50.114999999999995 - type: map_at_5 value: 52.153000000000006 - type: mrr_at_1 value: 45.016 - type: mrr_at_10 value: 56.732000000000006 - type: mrr_at_100 value: 57.411 - type: mrr_at_1000 value: 57.431 - type: mrr_at_3 value: 54.044000000000004 - type: mrr_at_5 value: 55.639 - type: ndcg_at_1 value: 45.016 - type: ndcg_at_10 value: 60.228 - type: ndcg_at_100 value: 64.277 - type: ndcg_at_1000 value: 65.07 - type: ndcg_at_3 value: 54.124 - type: ndcg_at_5 value: 57.147000000000006 - type: precision_at_1 value: 45.016 - type: precision_at_10 value: 9.937 - type: precision_at_100 value: 1.288 - type: precision_at_1000 value: 0.13899999999999998 - type: precision_at_3 value: 24.471999999999998 - type: precision_at_5 value: 16.991 - type: recall_at_1 value: 39.383 - type: recall_at_10 value: 76.175 - type: recall_at_100 value: 93.02 - type: recall_at_1000 value: 98.60900000000001 - type: recall_at_3 value: 60.265 - type: recall_at_5 value: 67.46600000000001 - task: type: Retrieval dataset: name: MTEB CQADupstackGisRetrieval type: BeIR/cqadupstack config: default split: test revision: 5003b3064772da1887988e05400cf3806fe491f2 metrics: - type: map_at_1 value: 27.426000000000002 - type: map_at_10 value: 37.397000000000006 - type: map_at_100 value: 38.61 - type: map_at_1000 value: 38.678000000000004 - type: map_at_3 value: 34.150999999999996 - type: map_at_5 value: 36.137 - type: mrr_at_1 value: 29.944 - type: mrr_at_10 value: 39.654 - type: mrr_at_100 value: 40.638000000000005 - type: mrr_at_1000 value: 40.691 - type: mrr_at_3 value: 36.817 - type: mrr_at_5 value: 38.524 - type: ndcg_at_1 value: 29.944 - type: ndcg_at_10 value: 43.094 - type: ndcg_at_100 value: 48.789 - type: ndcg_at_1000 value: 50.339999999999996 - type: ndcg_at_3 value: 36.984 - type: ndcg_at_5 value: 40.248 - type: precision_at_1 value: 29.944 - type: precision_at_10 value: 6.78 - type: precision_at_100 value: 1.024 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 15.895000000000001 - type: precision_at_5 value: 11.39 - type: recall_at_1 value: 27.426000000000002 - type: recall_at_10 value: 58.464000000000006 - type: recall_at_100 value: 84.193 - type: recall_at_1000 value: 95.52000000000001 - type: recall_at_3 value: 42.172 - type: recall_at_5 value: 50.101 - task: type: Retrieval dataset: name: MTEB CQADupstackMathematicaRetrieval type: BeIR/cqadupstack config: default split: test revision: 90fceea13679c63fe563ded68f3b6f06e50061de metrics: - type: map_at_1 value: 19.721 - type: map_at_10 value: 31.604 - type: map_at_100 value: 32.972 - type: map_at_1000 value: 33.077 - type: map_at_3 value: 27.218999999999998 - type: map_at_5 value: 29.53 - type: mrr_at_1 value: 25.0 - type: mrr_at_10 value: 35.843 - type: mrr_at_100 value: 36.785000000000004 - type: mrr_at_1000 value: 36.842000000000006 - type: mrr_at_3 value: 32.193 - type: mrr_at_5 value: 34.264 - type: ndcg_at_1 value: 25.0 - type: ndcg_at_10 value: 38.606 - type: ndcg_at_100 value: 44.272 - type: ndcg_at_1000 value: 46.527 - type: ndcg_at_3 value: 30.985000000000003 - type: ndcg_at_5 value: 34.43 - type: precision_at_1 value: 25.0 - type: precision_at_10 value: 7.811 - type: precision_at_100 value: 1.203 - type: precision_at_1000 value: 0.15 - type: precision_at_3 value: 15.423 - type: precision_at_5 value: 11.791 - type: 
recall_at_1 value: 19.721 - type: recall_at_10 value: 55.625 - type: recall_at_100 value: 79.34400000000001 - type: recall_at_1000 value: 95.208 - type: recall_at_3 value: 35.19 - type: recall_at_5 value: 43.626 - task: type: Retrieval dataset: name: MTEB CQADupstackPhysicsRetrieval type: BeIR/cqadupstack config: default split: test revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4 metrics: - type: map_at_1 value: 33.784 - type: map_at_10 value: 47.522 - type: map_at_100 value: 48.949999999999996 - type: map_at_1000 value: 49.038 - type: map_at_3 value: 43.284 - type: map_at_5 value: 45.629 - type: mrr_at_1 value: 41.482 - type: mrr_at_10 value: 52.830999999999996 - type: mrr_at_100 value: 53.559999999999995 - type: mrr_at_1000 value: 53.588 - type: mrr_at_3 value: 50.016000000000005 - type: mrr_at_5 value: 51.614000000000004 - type: ndcg_at_1 value: 41.482 - type: ndcg_at_10 value: 54.569 - type: ndcg_at_100 value: 59.675999999999995 - type: ndcg_at_1000 value: 60.989000000000004 - type: ndcg_at_3 value: 48.187000000000005 - type: ndcg_at_5 value: 51.183 - type: precision_at_1 value: 41.482 - type: precision_at_10 value: 10.221 - type: precision_at_100 value: 1.486 - type: precision_at_1000 value: 0.17500000000000002 - type: precision_at_3 value: 23.548 - type: precision_at_5 value: 16.805 - type: recall_at_1 value: 33.784 - type: recall_at_10 value: 69.798 - type: recall_at_100 value: 90.098 - type: recall_at_1000 value: 98.176 - type: recall_at_3 value: 52.127 - type: recall_at_5 value: 59.861 - task: type: Retrieval dataset: name: MTEB CQADupstackProgrammersRetrieval type: BeIR/cqadupstack config: default split: test revision: 6184bc1440d2dbc7612be22b50686b8826d22b32 metrics: - type: map_at_1 value: 28.038999999999998 - type: map_at_10 value: 41.904 - type: map_at_100 value: 43.36 - type: map_at_1000 value: 43.453 - type: map_at_3 value: 37.785999999999994 - type: map_at_5 value: 40.105000000000004 - type: mrr_at_1 value: 35.046 - type: mrr_at_10 value: 46.926 - type: mrr_at_100 value: 47.815000000000005 - type: mrr_at_1000 value: 47.849000000000004 - type: mrr_at_3 value: 44.273 - type: mrr_at_5 value: 45.774 - type: ndcg_at_1 value: 35.046 - type: ndcg_at_10 value: 48.937000000000005 - type: ndcg_at_100 value: 54.544000000000004 - type: ndcg_at_1000 value: 56.069 - type: ndcg_at_3 value: 42.858000000000004 - type: ndcg_at_5 value: 45.644 - type: precision_at_1 value: 35.046 - type: precision_at_10 value: 9.452 - type: precision_at_100 value: 1.429 - type: precision_at_1000 value: 0.173 - type: precision_at_3 value: 21.346999999999998 - type: precision_at_5 value: 15.342 - type: recall_at_1 value: 28.038999999999998 - type: recall_at_10 value: 64.59700000000001 - type: recall_at_100 value: 87.735 - type: recall_at_1000 value: 97.41300000000001 - type: recall_at_3 value: 47.368 - type: recall_at_5 value: 54.93900000000001 - task: type: Retrieval dataset: name: MTEB CQADupstackRetrieval type: BeIR/cqadupstack config: default split: test revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4 metrics: - type: map_at_1 value: 28.17291666666667 - type: map_at_10 value: 40.025749999999995 - type: map_at_100 value: 41.39208333333333 - type: map_at_1000 value: 41.499249999999996 - type: map_at_3 value: 36.347 - type: map_at_5 value: 38.41391666666667 - type: mrr_at_1 value: 33.65925 - type: mrr_at_10 value: 44.085499999999996 - type: mrr_at_100 value: 44.94116666666667 - type: mrr_at_1000 value: 44.9855 - type: mrr_at_3 value: 41.2815 - type: mrr_at_5 value: 42.91491666666666 - type: ndcg_at_1 
value: 33.65925 - type: ndcg_at_10 value: 46.430833333333325 - type: ndcg_at_100 value: 51.761 - type: ndcg_at_1000 value: 53.50899999999999 - type: ndcg_at_3 value: 40.45133333333333 - type: ndcg_at_5 value: 43.31483333333334 - type: precision_at_1 value: 33.65925 - type: precision_at_10 value: 8.4995 - type: precision_at_100 value: 1.3210000000000004 - type: precision_at_1000 value: 0.16591666666666666 - type: precision_at_3 value: 19.165083333333335 - type: precision_at_5 value: 13.81816666666667 - type: recall_at_1 value: 28.17291666666667 - type: recall_at_10 value: 61.12624999999999 - type: recall_at_100 value: 83.97266666666667 - type: recall_at_1000 value: 95.66550000000001 - type: recall_at_3 value: 44.661249999999995 - type: recall_at_5 value: 51.983333333333334 - type: map_at_1 value: 17.936 - type: map_at_10 value: 27.399 - type: map_at_100 value: 28.632 - type: map_at_1000 value: 28.738000000000003 - type: map_at_3 value: 24.456 - type: map_at_5 value: 26.06 - type: mrr_at_1 value: 19.224 - type: mrr_at_10 value: 28.998 - type: mrr_at_100 value: 30.11 - type: mrr_at_1000 value: 30.177 - type: mrr_at_3 value: 26.247999999999998 - type: mrr_at_5 value: 27.708 - type: ndcg_at_1 value: 19.224 - type: ndcg_at_10 value: 32.911 - type: ndcg_at_100 value: 38.873999999999995 - type: ndcg_at_1000 value: 41.277 - type: ndcg_at_3 value: 27.142 - type: ndcg_at_5 value: 29.755 - type: precision_at_1 value: 19.224 - type: precision_at_10 value: 5.6930000000000005 - type: precision_at_100 value: 0.9259999999999999 - type: precision_at_1000 value: 0.126 - type: precision_at_3 value: 12.138 - type: precision_at_5 value: 8.909 - type: recall_at_1 value: 17.936 - type: recall_at_10 value: 48.096 - type: recall_at_100 value: 75.389 - type: recall_at_1000 value: 92.803 - type: recall_at_3 value: 32.812999999999995 - type: recall_at_5 value: 38.851 - task: type: Retrieval dataset: name: MTEB CQADupstackStatsRetrieval type: BeIR/cqadupstack config: default split: test revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a metrics: - type: map_at_1 value: 24.681 - type: map_at_10 value: 34.892 - type: map_at_100 value: 35.996 - type: map_at_1000 value: 36.083 - type: map_at_3 value: 31.491999999999997 - type: map_at_5 value: 33.632 - type: mrr_at_1 value: 28.528 - type: mrr_at_10 value: 37.694 - type: mrr_at_100 value: 38.613 - type: mrr_at_1000 value: 38.668 - type: mrr_at_3 value: 34.714 - type: mrr_at_5 value: 36.616 - type: ndcg_at_1 value: 28.528 - type: ndcg_at_10 value: 40.703 - type: ndcg_at_100 value: 45.993 - type: ndcg_at_1000 value: 47.847 - type: ndcg_at_3 value: 34.622 - type: ndcg_at_5 value: 38.035999999999994 - type: precision_at_1 value: 28.528 - type: precision_at_10 value: 6.902 - type: precision_at_100 value: 1.0370000000000001 - type: precision_at_1000 value: 0.126 - type: precision_at_3 value: 15.798000000000002 - type: precision_at_5 value: 11.655999999999999 - type: recall_at_1 value: 24.681 - type: recall_at_10 value: 55.81 - type: recall_at_100 value: 79.785 - type: recall_at_1000 value: 92.959 - type: recall_at_3 value: 39.074 - type: recall_at_5 value: 47.568 - task: type: Retrieval dataset: name: MTEB CQADupstackTexRetrieval type: BeIR/cqadupstack config: default split: test revision: 46989137a86843e03a6195de44b09deda022eec7 metrics: - type: map_at_1 value: 18.627 - type: map_at_10 value: 27.872000000000003 - type: map_at_100 value: 29.237999999999996 - type: map_at_1000 value: 29.363 - type: map_at_3 value: 24.751 - type: map_at_5 value: 26.521 - type: mrr_at_1 value: 23.021 
- type: mrr_at_10 value: 31.924000000000003 - type: mrr_at_100 value: 32.922000000000004 - type: mrr_at_1000 value: 32.988 - type: mrr_at_3 value: 29.192 - type: mrr_at_5 value: 30.798 - type: ndcg_at_1 value: 23.021 - type: ndcg_at_10 value: 33.535 - type: ndcg_at_100 value: 39.732 - type: ndcg_at_1000 value: 42.201 - type: ndcg_at_3 value: 28.153 - type: ndcg_at_5 value: 30.746000000000002 - type: precision_at_1 value: 23.021 - type: precision_at_10 value: 6.459 - type: precision_at_100 value: 1.1320000000000001 - type: precision_at_1000 value: 0.153 - type: precision_at_3 value: 13.719000000000001 - type: precision_at_5 value: 10.193000000000001 - type: recall_at_1 value: 18.627 - type: recall_at_10 value: 46.463 - type: recall_at_100 value: 74.226 - type: recall_at_1000 value: 91.28500000000001 - type: recall_at_3 value: 31.357000000000003 - type: recall_at_5 value: 38.067 - task: type: Retrieval dataset: name: MTEB CQADupstackUnixRetrieval type: BeIR/cqadupstack config: default split: test revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53 metrics: - type: map_at_1 value: 31.457 - type: map_at_10 value: 42.888 - type: map_at_100 value: 44.24 - type: map_at_1000 value: 44.327 - type: map_at_3 value: 39.588 - type: map_at_5 value: 41.423 - type: mrr_at_1 value: 37.126999999999995 - type: mrr_at_10 value: 47.083000000000006 - type: mrr_at_100 value: 47.997 - type: mrr_at_1000 value: 48.044 - type: mrr_at_3 value: 44.574000000000005 - type: mrr_at_5 value: 46.202 - type: ndcg_at_1 value: 37.126999999999995 - type: ndcg_at_10 value: 48.833 - type: ndcg_at_100 value: 54.327000000000005 - type: ndcg_at_1000 value: 56.011 - type: ndcg_at_3 value: 43.541999999999994 - type: ndcg_at_5 value: 46.127 - type: precision_at_1 value: 37.126999999999995 - type: precision_at_10 value: 8.376999999999999 - type: precision_at_100 value: 1.2309999999999999 - type: precision_at_1000 value: 0.146 - type: precision_at_3 value: 20.211000000000002 - type: precision_at_5 value: 14.16 - type: recall_at_1 value: 31.457 - type: recall_at_10 value: 62.369 - type: recall_at_100 value: 85.444 - type: recall_at_1000 value: 96.65599999999999 - type: recall_at_3 value: 47.961 - type: recall_at_5 value: 54.676 - task: type: Retrieval dataset: name: MTEB CQADupstackWebmastersRetrieval type: BeIR/cqadupstack config: default split: test revision: 160c094312a0e1facb97e55eeddb698c0abe3571 metrics: - type: map_at_1 value: 27.139999999999997 - type: map_at_10 value: 38.801 - type: map_at_100 value: 40.549 - type: map_at_1000 value: 40.802 - type: map_at_3 value: 35.05 - type: map_at_5 value: 36.884 - type: mrr_at_1 value: 33.004 - type: mrr_at_10 value: 43.864 - type: mrr_at_100 value: 44.667 - type: mrr_at_1000 value: 44.717 - type: mrr_at_3 value: 40.777 - type: mrr_at_5 value: 42.319 - type: ndcg_at_1 value: 33.004 - type: ndcg_at_10 value: 46.022 - type: ndcg_at_100 value: 51.542 - type: ndcg_at_1000 value: 53.742000000000004 - type: ndcg_at_3 value: 39.795 - type: ndcg_at_5 value: 42.272 - type: precision_at_1 value: 33.004 - type: precision_at_10 value: 9.012 - type: precision_at_100 value: 1.7770000000000001 - type: precision_at_1000 value: 0.26 - type: precision_at_3 value: 19.038 - type: precision_at_5 value: 13.675999999999998 - type: recall_at_1 value: 27.139999999999997 - type: recall_at_10 value: 60.961 - type: recall_at_100 value: 84.451 - type: recall_at_1000 value: 98.113 - type: recall_at_3 value: 43.001 - type: recall_at_5 value: 49.896 - task: type: Retrieval dataset: name: MTEB ClimateFEVER type: 
mteb/climate-fever config: default split: test revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380 metrics: - type: map_at_1 value: 22.076999999999998 - type: map_at_10 value: 35.44 - type: map_at_100 value: 37.651 - type: map_at_1000 value: 37.824999999999996 - type: map_at_3 value: 30.764999999999997 - type: map_at_5 value: 33.26 - type: mrr_at_1 value: 50.163000000000004 - type: mrr_at_10 value: 61.207 - type: mrr_at_100 value: 61.675000000000004 - type: mrr_at_1000 value: 61.692 - type: mrr_at_3 value: 58.60999999999999 - type: mrr_at_5 value: 60.307 - type: ndcg_at_1 value: 50.163000000000004 - type: ndcg_at_10 value: 45.882 - type: ndcg_at_100 value: 53.239999999999995 - type: ndcg_at_1000 value: 55.852000000000004 - type: ndcg_at_3 value: 40.514 - type: ndcg_at_5 value: 42.038 - type: precision_at_1 value: 50.163000000000004 - type: precision_at_10 value: 13.466000000000001 - type: precision_at_100 value: 2.164 - type: precision_at_1000 value: 0.266 - type: precision_at_3 value: 29.707 - type: precision_at_5 value: 21.694 - type: recall_at_1 value: 22.076999999999998 - type: recall_at_10 value: 50.193 - type: recall_at_100 value: 74.993 - type: recall_at_1000 value: 89.131 - type: recall_at_3 value: 35.472 - type: recall_at_5 value: 41.814 - task: type: Retrieval dataset: name: MTEB DBPedia type: mteb/dbpedia config: default split: test revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659 metrics: - type: map_at_1 value: 9.953 - type: map_at_10 value: 24.515 - type: map_at_100 value: 36.173 - type: map_at_1000 value: 38.351 - type: map_at_3 value: 16.592000000000002 - type: map_at_5 value: 20.036 - type: mrr_at_1 value: 74.25 - type: mrr_at_10 value: 81.813 - type: mrr_at_100 value: 82.006 - type: mrr_at_1000 value: 82.011 - type: mrr_at_3 value: 80.875 - type: mrr_at_5 value: 81.362 - type: ndcg_at_1 value: 62.5 - type: ndcg_at_10 value: 52.42 - type: ndcg_at_100 value: 56.808 - type: ndcg_at_1000 value: 63.532999999999994 - type: ndcg_at_3 value: 56.654 - type: ndcg_at_5 value: 54.18300000000001 - type: precision_at_1 value: 74.25 - type: precision_at_10 value: 42.699999999999996 - type: precision_at_100 value: 13.675 - type: precision_at_1000 value: 2.664 - type: precision_at_3 value: 60.5 - type: precision_at_5 value: 52.800000000000004 - type: recall_at_1 value: 9.953 - type: recall_at_10 value: 30.253999999999998 - type: recall_at_100 value: 62.516000000000005 - type: recall_at_1000 value: 84.163 - type: recall_at_3 value: 18.13 - type: recall_at_5 value: 22.771 - task: type: Classification dataset: name: MTEB EmotionClassification type: mteb/emotion config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 79.455 - type: f1 value: 74.16798697647569 - task: type: Retrieval dataset: name: MTEB FEVER type: mteb/fever config: default split: test revision: bea83ef9e8fb933d90a2f1d5515737465d613e12 metrics: - type: map_at_1 value: 87.531 - type: map_at_10 value: 93.16799999999999 - type: map_at_100 value: 93.341 - type: map_at_1000 value: 93.349 - type: map_at_3 value: 92.444 - type: map_at_5 value: 92.865 - type: mrr_at_1 value: 94.014 - type: mrr_at_10 value: 96.761 - type: mrr_at_100 value: 96.762 - type: mrr_at_1000 value: 96.762 - type: mrr_at_3 value: 96.672 - type: mrr_at_5 value: 96.736 - type: ndcg_at_1 value: 94.014 - type: ndcg_at_10 value: 95.112 - type: ndcg_at_100 value: 95.578 - type: ndcg_at_1000 value: 95.68900000000001 - type: ndcg_at_3 value: 94.392 - type: ndcg_at_5 value: 94.72500000000001 - type: precision_at_1 
value: 94.014 - type: precision_at_10 value: 11.065 - type: precision_at_100 value: 1.157 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 35.259 - type: precision_at_5 value: 21.599 - type: recall_at_1 value: 87.531 - type: recall_at_10 value: 97.356 - type: recall_at_100 value: 98.965 - type: recall_at_1000 value: 99.607 - type: recall_at_3 value: 95.312 - type: recall_at_5 value: 96.295 - task: type: Retrieval dataset: name: MTEB FiQA2018 type: mteb/fiqa config: default split: test revision: 27a168819829fe9bcd655c2df245fb19452e8e06 metrics: - type: map_at_1 value: 32.055 - type: map_at_10 value: 53.114 - type: map_at_100 value: 55.235 - type: map_at_1000 value: 55.345 - type: map_at_3 value: 45.854 - type: map_at_5 value: 50.025 - type: mrr_at_1 value: 60.34 - type: mrr_at_10 value: 68.804 - type: mrr_at_100 value: 69.309 - type: mrr_at_1000 value: 69.32199999999999 - type: mrr_at_3 value: 66.40899999999999 - type: mrr_at_5 value: 67.976 - type: ndcg_at_1 value: 60.34 - type: ndcg_at_10 value: 62.031000000000006 - type: ndcg_at_100 value: 68.00500000000001 - type: ndcg_at_1000 value: 69.286 - type: ndcg_at_3 value: 56.355999999999995 - type: ndcg_at_5 value: 58.687 - type: precision_at_1 value: 60.34 - type: precision_at_10 value: 17.176 - type: precision_at_100 value: 2.36 - type: precision_at_1000 value: 0.259 - type: precision_at_3 value: 37.14 - type: precision_at_5 value: 27.809 - type: recall_at_1 value: 32.055 - type: recall_at_10 value: 70.91 - type: recall_at_100 value: 91.83 - type: recall_at_1000 value: 98.871 - type: recall_at_3 value: 51.202999999999996 - type: recall_at_5 value: 60.563 - task: type: Retrieval dataset: name: MTEB HotpotQA type: mteb/hotpotqa config: default split: test revision: ab518f4d6fcca38d87c25209f94beba119d02014 metrics: - type: map_at_1 value: 43.68 - type: map_at_10 value: 64.389 - type: map_at_100 value: 65.24 - type: map_at_1000 value: 65.303 - type: map_at_3 value: 61.309000000000005 - type: map_at_5 value: 63.275999999999996 - type: mrr_at_1 value: 87.36 - type: mrr_at_10 value: 91.12 - type: mrr_at_100 value: 91.227 - type: mrr_at_1000 value: 91.229 - type: mrr_at_3 value: 90.57600000000001 - type: mrr_at_5 value: 90.912 - type: ndcg_at_1 value: 87.36 - type: ndcg_at_10 value: 73.076 - type: ndcg_at_100 value: 75.895 - type: ndcg_at_1000 value: 77.049 - type: ndcg_at_3 value: 68.929 - type: ndcg_at_5 value: 71.28 - type: precision_at_1 value: 87.36 - type: precision_at_10 value: 14.741000000000001 - type: precision_at_100 value: 1.694 - type: precision_at_1000 value: 0.185 - type: precision_at_3 value: 43.043 - type: precision_at_5 value: 27.681 - type: recall_at_1 value: 43.68 - type: recall_at_10 value: 73.707 - type: recall_at_100 value: 84.7 - type: recall_at_1000 value: 92.309 - type: recall_at_3 value: 64.564 - type: recall_at_5 value: 69.203 - task: type: Classification dataset: name: MTEB ImdbClassification type: mteb/imdb config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 96.75399999999999 - type: ap value: 95.29389839242187 - type: f1 value: 96.75348377433475 - task: type: Retrieval dataset: name: MTEB MSMARCO type: mteb/msmarco config: default split: dev revision: c5a29a104738b98a9e76336939199e264163d4a0 metrics: - type: map_at_1 value: 25.176 - type: map_at_10 value: 38.598 - type: map_at_100 value: 39.707 - type: map_at_1000 value: 39.744 - type: map_at_3 value: 34.566 - type: map_at_5 value: 36.863 - type: mrr_at_1 value: 
25.874000000000002 - type: mrr_at_10 value: 39.214 - type: mrr_at_100 value: 40.251 - type: mrr_at_1000 value: 40.281 - type: mrr_at_3 value: 35.291 - type: mrr_at_5 value: 37.545 - type: ndcg_at_1 value: 25.874000000000002 - type: ndcg_at_10 value: 45.98 - type: ndcg_at_100 value: 51.197 - type: ndcg_at_1000 value: 52.073 - type: ndcg_at_3 value: 37.785999999999994 - type: ndcg_at_5 value: 41.870000000000005 - type: precision_at_1 value: 25.874000000000002 - type: precision_at_10 value: 7.181 - type: precision_at_100 value: 0.979 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 16.051000000000002 - type: precision_at_5 value: 11.713 - type: recall_at_1 value: 25.176 - type: recall_at_10 value: 68.67699999999999 - type: recall_at_100 value: 92.55 - type: recall_at_1000 value: 99.164 - type: recall_at_3 value: 46.372 - type: recall_at_5 value: 56.16 - task: type: Classification dataset: name: MTEB MTOPDomainClassification (en) type: mteb/mtop_domain config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 99.03784769721841 - type: f1 value: 98.97791641821495 - task: type: Classification dataset: name: MTEB MTOPIntentClassification (en) type: mteb/mtop_intent config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 91.88326493388054 - type: f1 value: 73.74809928034335 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (en) type: mteb/amazon_massive_intent config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 85.41358439811701 - type: f1 value: 83.503679460639 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (en) type: mteb/amazon_massive_scenario config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 89.77135171486215 - type: f1 value: 88.89843747468366 - task: type: Clustering dataset: name: MTEB MedrxivClusteringP2P type: mteb/medrxiv-clustering-p2p config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 46.22695362087359 - task: type: Clustering dataset: name: MTEB MedrxivClusteringS2S type: mteb/medrxiv-clustering-s2s config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 44.132372165849425 - task: type: Reranking dataset: name: MTEB MindSmallReranking type: mteb/mind_small config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 33.35680810650402 - type: mrr value: 34.72625715637218 - task: type: Retrieval dataset: name: MTEB NFCorpus type: mteb/nfcorpus config: default split: test revision: ec0fa4fe99da2ff19ca1214b7966684033a58814 metrics: - type: map_at_1 value: 7.165000000000001 - type: map_at_10 value: 15.424 - type: map_at_100 value: 20.28 - type: map_at_1000 value: 22.065 - type: map_at_3 value: 11.236 - type: map_at_5 value: 13.025999999999998 - type: mrr_at_1 value: 51.702999999999996 - type: mrr_at_10 value: 59.965 - type: mrr_at_100 value: 60.667 - type: mrr_at_1000 value: 60.702999999999996 - type: mrr_at_3 value: 58.772000000000006 - type: mrr_at_5 value: 59.267 - type: ndcg_at_1 value: 49.536 - type: ndcg_at_10 value: 40.6 - type: ndcg_at_100 value: 37.848 - type: ndcg_at_1000 value: 46.657 - type: ndcg_at_3 value: 46.117999999999995 - type: ndcg_at_5 value: 43.619 - type: precision_at_1 value: 51.393 - type: precision_at_10 value: 
30.31 - type: precision_at_100 value: 9.972 - type: precision_at_1000 value: 2.329 - type: precision_at_3 value: 43.137 - type: precision_at_5 value: 37.585 - type: recall_at_1 value: 7.165000000000001 - type: recall_at_10 value: 19.689999999999998 - type: recall_at_100 value: 39.237 - type: recall_at_1000 value: 71.417 - type: recall_at_3 value: 12.247 - type: recall_at_5 value: 14.902999999999999 - task: type: Retrieval dataset: name: MTEB NQ type: mteb/nq config: default split: test revision: b774495ed302d8c44a3a7ea25c90dbce03968f31 metrics: - type: map_at_1 value: 42.653999999999996 - type: map_at_10 value: 59.611999999999995 - type: map_at_100 value: 60.32300000000001 - type: map_at_1000 value: 60.336 - type: map_at_3 value: 55.584999999999994 - type: map_at_5 value: 58.19 - type: mrr_at_1 value: 47.683 - type: mrr_at_10 value: 62.06700000000001 - type: mrr_at_100 value: 62.537 - type: mrr_at_1000 value: 62.544999999999995 - type: mrr_at_3 value: 59.178 - type: mrr_at_5 value: 61.034 - type: ndcg_at_1 value: 47.654 - type: ndcg_at_10 value: 67.001 - type: ndcg_at_100 value: 69.73899999999999 - type: ndcg_at_1000 value: 69.986 - type: ndcg_at_3 value: 59.95700000000001 - type: ndcg_at_5 value: 64.025 - type: precision_at_1 value: 47.654 - type: precision_at_10 value: 10.367999999999999 - type: precision_at_100 value: 1.192 - type: precision_at_1000 value: 0.121 - type: precision_at_3 value: 26.651000000000003 - type: precision_at_5 value: 18.459 - type: recall_at_1 value: 42.653999999999996 - type: recall_at_10 value: 86.619 - type: recall_at_100 value: 98.04899999999999 - type: recall_at_1000 value: 99.812 - type: recall_at_3 value: 68.987 - type: recall_at_5 value: 78.158 - task: type: Retrieval dataset: name: MTEB QuoraRetrieval type: mteb/quora config: default split: test revision: None metrics: - type: map_at_1 value: 72.538 - type: map_at_10 value: 86.702 - type: map_at_100 value: 87.31 - type: map_at_1000 value: 87.323 - type: map_at_3 value: 83.87 - type: map_at_5 value: 85.682 - type: mrr_at_1 value: 83.31 - type: mrr_at_10 value: 89.225 - type: mrr_at_100 value: 89.30399999999999 - type: mrr_at_1000 value: 89.30399999999999 - type: mrr_at_3 value: 88.44300000000001 - type: mrr_at_5 value: 89.005 - type: ndcg_at_1 value: 83.32000000000001 - type: ndcg_at_10 value: 90.095 - type: ndcg_at_100 value: 91.12 - type: ndcg_at_1000 value: 91.179 - type: ndcg_at_3 value: 87.606 - type: ndcg_at_5 value: 89.031 - type: precision_at_1 value: 83.32000000000001 - type: precision_at_10 value: 13.641 - type: precision_at_100 value: 1.541 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 38.377 - type: precision_at_5 value: 25.162000000000003 - type: recall_at_1 value: 72.538 - type: recall_at_10 value: 96.47200000000001 - type: recall_at_100 value: 99.785 - type: recall_at_1000 value: 99.99900000000001 - type: recall_at_3 value: 89.278 - type: recall_at_5 value: 93.367 - task: type: Clustering dataset: name: MTEB RedditClustering type: mteb/reddit-clustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 73.55219145406065 - task: type: Clustering dataset: name: MTEB RedditClusteringP2P type: mteb/reddit-clustering-p2p config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 74.13437105242755 - task: type: Retrieval dataset: name: MTEB SCIDOCS type: mteb/scidocs config: default split: test revision: None metrics: - type: map_at_1 value: 6.873 - type: 
map_at_10 value: 17.944 - type: map_at_100 value: 21.171 - type: map_at_1000 value: 21.528 - type: map_at_3 value: 12.415 - type: map_at_5 value: 15.187999999999999 - type: mrr_at_1 value: 33.800000000000004 - type: mrr_at_10 value: 46.455 - type: mrr_at_100 value: 47.378 - type: mrr_at_1000 value: 47.394999999999996 - type: mrr_at_3 value: 42.367 - type: mrr_at_5 value: 44.972 - type: ndcg_at_1 value: 33.800000000000004 - type: ndcg_at_10 value: 28.907 - type: ndcg_at_100 value: 39.695 - type: ndcg_at_1000 value: 44.582 - type: ndcg_at_3 value: 26.949 - type: ndcg_at_5 value: 23.988 - type: precision_at_1 value: 33.800000000000004 - type: precision_at_10 value: 15.079999999999998 - type: precision_at_100 value: 3.056 - type: precision_at_1000 value: 0.42100000000000004 - type: precision_at_3 value: 25.167 - type: precision_at_5 value: 21.26 - type: recall_at_1 value: 6.873 - type: recall_at_10 value: 30.568 - type: recall_at_100 value: 62.062 - type: recall_at_1000 value: 85.37700000000001 - type: recall_at_3 value: 15.312999999999999 - type: recall_at_5 value: 21.575 - task: type: STS dataset: name: MTEB SICK-R type: mteb/sickr-sts config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 82.37009118256057 - type: cos_sim_spearman value: 79.27986395671529 - type: euclidean_pearson value: 79.18037715442115 - type: euclidean_spearman value: 79.28004791561621 - type: manhattan_pearson value: 79.34062972800541 - type: manhattan_spearman value: 79.43106695543402 - task: type: STS dataset: name: MTEB STS12 type: mteb/sts12-sts config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 87.48474767383833 - type: cos_sim_spearman value: 79.54505388752513 - type: euclidean_pearson value: 83.43282704179565 - type: euclidean_spearman value: 79.54579919925405 - type: manhattan_pearson value: 83.77564492427952 - type: manhattan_spearman value: 79.84558396989286 - task: type: STS dataset: name: MTEB STS13 type: mteb/sts13-sts config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 88.803698035802 - type: cos_sim_spearman value: 88.83451367754881 - type: euclidean_pearson value: 88.28939285711628 - type: euclidean_spearman value: 88.83528996073112 - type: manhattan_pearson value: 88.28017412671795 - type: manhattan_spearman value: 88.9228828016344 - task: type: STS dataset: name: MTEB STS14 type: mteb/sts14-sts config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 85.27469288153428 - type: cos_sim_spearman value: 83.87477064876288 - type: euclidean_pearson value: 84.2601737035379 - type: euclidean_spearman value: 83.87431082479074 - type: manhattan_pearson value: 84.3621547772745 - type: manhattan_spearman value: 84.12094375000423 - task: type: STS dataset: name: MTEB STS15 type: mteb/sts15-sts config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 88.12749863201587 - type: cos_sim_spearman value: 88.54287568368565 - type: euclidean_pearson value: 87.90429700607999 - type: euclidean_spearman value: 88.5437689576261 - type: manhattan_pearson value: 88.19276653356833 - type: manhattan_spearman value: 88.99995393814679 - task: type: STS dataset: name: MTEB STS16 type: mteb/sts16-sts config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 
85.68398747560902 - type: cos_sim_spearman value: 86.48815303460574 - type: euclidean_pearson value: 85.52356631237954 - type: euclidean_spearman value: 86.486391949551 - type: manhattan_pearson value: 85.67267981761788 - type: manhattan_spearman value: 86.7073696332485 - task: type: STS dataset: name: MTEB STS17 (en-en) type: mteb/sts17-crosslingual-sts config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 88.9057107443124 - type: cos_sim_spearman value: 88.7312168757697 - type: euclidean_pearson value: 88.72810439714794 - type: euclidean_spearman value: 88.71976185854771 - type: manhattan_pearson value: 88.50433745949111 - type: manhattan_spearman value: 88.51726175544195 - task: type: STS dataset: name: MTEB STS22 (en) type: mteb/sts22-crosslingual-sts config: en split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 67.59391795109886 - type: cos_sim_spearman value: 66.87613008631367 - type: euclidean_pearson value: 69.23198488262217 - type: euclidean_spearman value: 66.85427723013692 - type: manhattan_pearson value: 69.50730124841084 - type: manhattan_spearman value: 67.10404669820792 - task: type: STS dataset: name: MTEB STSBenchmark type: mteb/stsbenchmark-sts config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 87.0820605344619 - type: cos_sim_spearman value: 86.8518089863434 - type: euclidean_pearson value: 86.31087134689284 - type: euclidean_spearman value: 86.8518520517941 - type: manhattan_pearson value: 86.47203796160612 - type: manhattan_spearman value: 87.1080149734421 - task: type: Reranking dataset: name: MTEB SciDocsRR type: mteb/scidocs-reranking config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 89.09255369305481 - type: mrr value: 97.10323445617563 - task: type: Retrieval dataset: name: MTEB SciFact type: mteb/scifact config: default split: test revision: 0228b52cf27578f30900b9e5271d331663a030d7 metrics: - type: map_at_1 value: 61.260999999999996 - type: map_at_10 value: 74.043 - type: map_at_100 value: 74.37700000000001 - type: map_at_1000 value: 74.384 - type: map_at_3 value: 71.222 - type: map_at_5 value: 72.875 - type: mrr_at_1 value: 64.333 - type: mrr_at_10 value: 74.984 - type: mrr_at_100 value: 75.247 - type: mrr_at_1000 value: 75.25500000000001 - type: mrr_at_3 value: 73.167 - type: mrr_at_5 value: 74.35000000000001 - type: ndcg_at_1 value: 64.333 - type: ndcg_at_10 value: 79.06 - type: ndcg_at_100 value: 80.416 - type: ndcg_at_1000 value: 80.55600000000001 - type: ndcg_at_3 value: 74.753 - type: ndcg_at_5 value: 76.97500000000001 - type: precision_at_1 value: 64.333 - type: precision_at_10 value: 10.567 - type: precision_at_100 value: 1.1199999999999999 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 29.889 - type: precision_at_5 value: 19.533 - type: recall_at_1 value: 61.260999999999996 - type: recall_at_10 value: 93.167 - type: recall_at_100 value: 99.0 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 81.667 - type: recall_at_5 value: 87.394 - task: type: PairClassification dataset: name: MTEB SprintDuplicateQuestions type: mteb/sprintduplicatequestions-pairclassification config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.71980198019801 - type: cos_sim_ap value: 92.81616007802704 - type: cos_sim_f1 value: 
85.17548454688318 - type: cos_sim_precision value: 89.43894389438944 - type: cos_sim_recall value: 81.3 - type: dot_accuracy value: 99.71980198019801 - type: dot_ap value: 92.81398760591358 - type: dot_f1 value: 85.17548454688318 - type: dot_precision value: 89.43894389438944 - type: dot_recall value: 81.3 - type: euclidean_accuracy value: 99.71980198019801 - type: euclidean_ap value: 92.81560637245072 - type: euclidean_f1 value: 85.17548454688318 - type: euclidean_precision value: 89.43894389438944 - type: euclidean_recall value: 81.3 - type: manhattan_accuracy value: 99.73069306930694 - type: manhattan_ap value: 93.14005487480794 - type: manhattan_f1 value: 85.56263269639068 - type: manhattan_precision value: 91.17647058823529 - type: manhattan_recall value: 80.60000000000001 - type: max_accuracy value: 99.73069306930694 - type: max_ap value: 93.14005487480794 - type: max_f1 value: 85.56263269639068 - task: type: Clustering dataset: name: MTEB StackExchangeClustering type: mteb/stackexchange-clustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 79.86443362395185 - task: type: Clustering dataset: name: MTEB StackExchangeClusteringP2P type: mteb/stackexchange-clustering-p2p config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 49.40897096662564 - task: type: Reranking dataset: name: MTEB StackOverflowDupQuestions type: mteb/stackoverflowdupquestions-reranking config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 55.66040806627947 - type: mrr value: 56.58670475766064 - task: type: Summarization dataset: name: MTEB SummEval type: mteb/summeval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 31.51015090598575 - type: cos_sim_spearman value: 31.35016454939226 - type: dot_pearson value: 31.5150068731 - type: dot_spearman value: 31.34790869023487 - task: type: Retrieval dataset: name: MTEB TRECCOVID type: mteb/trec-covid config: default split: test revision: None metrics: - type: map_at_1 value: 0.254 - type: map_at_10 value: 2.064 - type: map_at_100 value: 12.909 - type: map_at_1000 value: 31.761 - type: map_at_3 value: 0.738 - type: map_at_5 value: 1.155 - type: mrr_at_1 value: 96.0 - type: mrr_at_10 value: 98.0 - type: mrr_at_100 value: 98.0 - type: mrr_at_1000 value: 98.0 - type: mrr_at_3 value: 98.0 - type: mrr_at_5 value: 98.0 - type: ndcg_at_1 value: 93.0 - type: ndcg_at_10 value: 82.258 - type: ndcg_at_100 value: 64.34 - type: ndcg_at_1000 value: 57.912 - type: ndcg_at_3 value: 90.827 - type: ndcg_at_5 value: 86.79 - type: precision_at_1 value: 96.0 - type: precision_at_10 value: 84.8 - type: precision_at_100 value: 66.0 - type: precision_at_1000 value: 25.356 - type: precision_at_3 value: 94.667 - type: precision_at_5 value: 90.4 - type: recall_at_1 value: 0.254 - type: recall_at_10 value: 2.1950000000000003 - type: recall_at_100 value: 16.088 - type: recall_at_1000 value: 54.559000000000005 - type: recall_at_3 value: 0.75 - type: recall_at_5 value: 1.191 - task: type: Retrieval dataset: name: MTEB Touche2020 type: mteb/touche2020 config: default split: test revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f metrics: - type: map_at_1 value: 2.976 - type: map_at_10 value: 11.389000000000001 - type: map_at_100 value: 18.429000000000002 - type: map_at_1000 value: 20.113 - type: map_at_3 value: 6.483 - type: map_at_5 value: 8.770999999999999 
- type: mrr_at_1 value: 40.816 - type: mrr_at_10 value: 58.118 - type: mrr_at_100 value: 58.489999999999995 - type: mrr_at_1000 value: 58.489999999999995 - type: mrr_at_3 value: 53.061 - type: mrr_at_5 value: 57.041 - type: ndcg_at_1 value: 40.816 - type: ndcg_at_10 value: 30.567 - type: ndcg_at_100 value: 42.44 - type: ndcg_at_1000 value: 53.480000000000004 - type: ndcg_at_3 value: 36.016 - type: ndcg_at_5 value: 34.257 - type: precision_at_1 value: 42.857 - type: precision_at_10 value: 25.714 - type: precision_at_100 value: 8.429 - type: precision_at_1000 value: 1.5939999999999999 - type: precision_at_3 value: 36.735 - type: precision_at_5 value: 33.878 - type: recall_at_1 value: 2.976 - type: recall_at_10 value: 17.854999999999997 - type: recall_at_100 value: 51.833 - type: recall_at_1000 value: 86.223 - type: recall_at_3 value: 7.887 - type: recall_at_5 value: 12.026 - task: type: Classification dataset: name: MTEB ToxicConversationsClassification type: mteb/toxic_conversations_50k config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 85.1174 - type: ap value: 30.169441069345748 - type: f1 value: 69.79254701873245 - task: type: Classification dataset: name: MTEB TweetSentimentExtractionClassification type: mteb/tweet_sentiment_extraction config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 72.58347481607245 - type: f1 value: 72.74877295564937 - task: type: Clustering dataset: name: MTEB TwentyNewsgroupsClustering type: mteb/twentynewsgroups-clustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 53.90586138221305 - task: type: PairClassification dataset: name: MTEB TwitterSemEval2015 type: mteb/twittersemeval2015-pairclassification config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 87.35769207844072 - type: cos_sim_ap value: 77.9645072410354 - type: cos_sim_f1 value: 71.32352941176471 - type: cos_sim_precision value: 66.5903890160183 - type: cos_sim_recall value: 76.78100263852242 - type: dot_accuracy value: 87.37557370209214 - type: dot_ap value: 77.96250046429908 - type: dot_f1 value: 71.28932757557064 - type: dot_precision value: 66.95249130938586 - type: dot_recall value: 76.22691292875989 - type: euclidean_accuracy value: 87.35173153722357 - type: euclidean_ap value: 77.96520460741593 - type: euclidean_f1 value: 71.32470733210104 - type: euclidean_precision value: 66.91329479768785 - type: euclidean_recall value: 76.35883905013192 - type: manhattan_accuracy value: 87.25636287774931 - type: manhattan_ap value: 77.77752485611796 - type: manhattan_f1 value: 71.18148599269183 - type: manhattan_precision value: 66.10859728506787 - type: manhattan_recall value: 77.0976253298153 - type: max_accuracy value: 87.37557370209214 - type: max_ap value: 77.96520460741593 - type: max_f1 value: 71.32470733210104 - task: type: PairClassification dataset: name: MTEB TwitterURLCorpus type: mteb/twitterurlcorpus-pairclassification config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.38176737687739 - type: cos_sim_ap value: 86.58811861657401 - type: cos_sim_f1 value: 79.09430644097604 - type: cos_sim_precision value: 75.45085977911366 - type: cos_sim_recall value: 83.10748383122882 - type: dot_accuracy value: 89.38370784336554 - type: dot_ap value: 86.58840606004333 - type: dot_f1 
value: 79.10179860068133 - type: dot_precision value: 75.44546153308643 - type: dot_recall value: 83.13058207576223 - type: euclidean_accuracy value: 89.38564830985369 - type: euclidean_ap value: 86.58820721061164 - type: euclidean_f1 value: 79.09070942235888 - type: euclidean_precision value: 75.38729937194697 - type: euclidean_recall value: 83.17677856482906 - type: manhattan_accuracy value: 89.40699344122326 - type: manhattan_ap value: 86.60631843011362 - type: manhattan_f1 value: 79.14949970570925 - type: manhattan_precision value: 75.78191039729502 - type: manhattan_recall value: 82.83030489682784 - type: max_accuracy value: 89.40699344122326 - type: max_ap value: 86.60631843011362 - type: max_f1 value: 79.14949970570925 - task: type: STS dataset: name: MTEB AFQMC type: C-MTEB/AFQMC config: default split: validation revision: b44c3b011063adb25877c13823db83bb193913c4 metrics: - type: cos_sim_pearson value: 65.58442135663871 - type: cos_sim_spearman value: 72.2538631361313 - type: euclidean_pearson value: 70.97255486607429 - type: euclidean_spearman value: 72.25374250228647 - type: manhattan_pearson value: 70.83250199989911 - type: manhattan_spearman value: 72.14819496536272 - task: type: STS dataset: name: MTEB ATEC type: C-MTEB/ATEC config: default split: test revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865 metrics: - type: cos_sim_pearson value: 59.99478404929932 - type: cos_sim_spearman value: 62.61836216999812 - type: euclidean_pearson value: 66.86429811933593 - type: euclidean_spearman value: 62.6183520374191 - type: manhattan_pearson value: 66.8063778911633 - type: manhattan_spearman value: 62.569607573241115 - task: type: Classification dataset: name: MTEB AmazonReviewsClassification (zh) type: mteb/amazon_reviews_multi config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 53.98400000000001 - type: f1 value: 51.21447361350723 - task: type: STS dataset: name: MTEB BQ type: C-MTEB/BQ config: default split: test revision: e3dda5e115e487b39ec7e618c0c6a29137052a55 metrics: - type: cos_sim_pearson value: 79.11941660686553 - type: cos_sim_spearman value: 81.25029594540435 - type: euclidean_pearson value: 82.06973504238826 - type: euclidean_spearman value: 81.2501989488524 - type: manhattan_pearson value: 82.10094630392753 - type: manhattan_spearman value: 81.27987244392389 - task: type: Clustering dataset: name: MTEB CLSClusteringP2P type: C-MTEB/CLSClusteringP2P config: default split: test revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476 metrics: - type: v_measure value: 47.07270168705156 - task: type: Clustering dataset: name: MTEB CLSClusteringS2S type: C-MTEB/CLSClusteringS2S config: default split: test revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f metrics: - type: v_measure value: 45.98511703185043 - task: type: Reranking dataset: name: MTEB CMedQAv1 type: C-MTEB/CMedQAv1-reranking config: default split: test revision: 8d7f1e942507dac42dc58017c1a001c3717da7df metrics: - type: map value: 88.19895157194931 - type: mrr value: 90.21424603174603 - task: type: Reranking dataset: name: MTEB CMedQAv2 type: C-MTEB/CMedQAv2-reranking config: default split: test revision: 23d186750531a14a0357ca22cd92d712fd512ea0 metrics: - type: map value: 88.03317320980119 - type: mrr value: 89.9461507936508 - task: type: Retrieval dataset: name: MTEB CmedqaRetrieval type: C-MTEB/CmedqaRetrieval config: default split: dev revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301 metrics: - type: map_at_1 value: 29.037000000000003 - type: map_at_10 
value: 42.001 - type: map_at_100 value: 43.773 - type: map_at_1000 value: 43.878 - type: map_at_3 value: 37.637 - type: map_at_5 value: 40.034 - type: mrr_at_1 value: 43.136 - type: mrr_at_10 value: 51.158 - type: mrr_at_100 value: 52.083 - type: mrr_at_1000 value: 52.12 - type: mrr_at_3 value: 48.733 - type: mrr_at_5 value: 50.025 - type: ndcg_at_1 value: 43.136 - type: ndcg_at_10 value: 48.685 - type: ndcg_at_100 value: 55.513 - type: ndcg_at_1000 value: 57.242000000000004 - type: ndcg_at_3 value: 43.329 - type: ndcg_at_5 value: 45.438 - type: precision_at_1 value: 43.136 - type: precision_at_10 value: 10.56 - type: precision_at_100 value: 1.6129999999999998 - type: precision_at_1000 value: 0.184 - type: precision_at_3 value: 24.064 - type: precision_at_5 value: 17.269000000000002 - type: recall_at_1 value: 29.037000000000003 - type: recall_at_10 value: 59.245000000000005 - type: recall_at_100 value: 87.355 - type: recall_at_1000 value: 98.74000000000001 - type: recall_at_3 value: 42.99 - type: recall_at_5 value: 49.681999999999995 - task: type: PairClassification dataset: name: MTEB Cmnli type: C-MTEB/CMNLI config: default split: validation revision: 41bc36f332156f7adc9e38f53777c959b2ae9766 metrics: - type: cos_sim_accuracy value: 82.68190018039687 - type: cos_sim_ap value: 90.18017125327886 - type: cos_sim_f1 value: 83.64080906868193 - type: cos_sim_precision value: 79.7076890489303 - type: cos_sim_recall value: 87.98223053542202 - type: dot_accuracy value: 82.68190018039687 - type: dot_ap value: 90.18782350103646 - type: dot_f1 value: 83.64242087729039 - type: dot_precision value: 79.65313028764805 - type: dot_recall value: 88.05237315875614 - type: euclidean_accuracy value: 82.68190018039687 - type: euclidean_ap value: 90.1801957900632 - type: euclidean_f1 value: 83.63636363636364 - type: euclidean_precision value: 79.52772506852203 - type: euclidean_recall value: 88.19265840542437 - type: manhattan_accuracy value: 82.14070956103427 - type: manhattan_ap value: 89.96178420101427 - type: manhattan_f1 value: 83.21087838578791 - type: manhattan_precision value: 78.35605121850475 - type: manhattan_recall value: 88.70703764320785 - type: max_accuracy value: 82.68190018039687 - type: max_ap value: 90.18782350103646 - type: max_f1 value: 83.64242087729039 - task: type: Retrieval dataset: name: MTEB CovidRetrieval type: C-MTEB/CovidRetrieval config: default split: dev revision: 1271c7809071a13532e05f25fb53511ffce77117 metrics: - type: map_at_1 value: 72.234 - type: map_at_10 value: 80.10000000000001 - type: map_at_100 value: 80.36 - type: map_at_1000 value: 80.363 - type: map_at_3 value: 78.315 - type: map_at_5 value: 79.607 - type: mrr_at_1 value: 72.392 - type: mrr_at_10 value: 80.117 - type: mrr_at_100 value: 80.36999999999999 - type: mrr_at_1000 value: 80.373 - type: mrr_at_3 value: 78.469 - type: mrr_at_5 value: 79.633 - type: ndcg_at_1 value: 72.392 - type: ndcg_at_10 value: 83.651 - type: ndcg_at_100 value: 84.749 - type: ndcg_at_1000 value: 84.83000000000001 - type: ndcg_at_3 value: 80.253 - type: ndcg_at_5 value: 82.485 - type: precision_at_1 value: 72.392 - type: precision_at_10 value: 9.557 - type: precision_at_100 value: 1.004 - type: precision_at_1000 value: 0.101 - type: precision_at_3 value: 28.732000000000003 - type: precision_at_5 value: 18.377 - type: recall_at_1 value: 72.234 - type: recall_at_10 value: 94.573 - type: recall_at_100 value: 99.368 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 85.669 - type: recall_at_5 value: 91.01700000000001 - task: type: 
Retrieval dataset: name: MTEB DuRetrieval type: C-MTEB/DuRetrieval config: default split: dev revision: a1a333e290fe30b10f3f56498e3a0d911a693ced metrics: - type: map_at_1 value: 26.173999999999996 - type: map_at_10 value: 80.04 - type: map_at_100 value: 82.94500000000001 - type: map_at_1000 value: 82.98100000000001 - type: map_at_3 value: 55.562999999999995 - type: map_at_5 value: 69.89800000000001 - type: mrr_at_1 value: 89.5 - type: mrr_at_10 value: 92.996 - type: mrr_at_100 value: 93.06400000000001 - type: mrr_at_1000 value: 93.065 - type: mrr_at_3 value: 92.658 - type: mrr_at_5 value: 92.84599999999999 - type: ndcg_at_1 value: 89.5 - type: ndcg_at_10 value: 87.443 - type: ndcg_at_100 value: 90.253 - type: ndcg_at_1000 value: 90.549 - type: ndcg_at_3 value: 85.874 - type: ndcg_at_5 value: 84.842 - type: precision_at_1 value: 89.5 - type: precision_at_10 value: 41.805 - type: precision_at_100 value: 4.827 - type: precision_at_1000 value: 0.49 - type: precision_at_3 value: 76.85 - type: precision_at_5 value: 64.8 - type: recall_at_1 value: 26.173999999999996 - type: recall_at_10 value: 89.101 - type: recall_at_100 value: 98.08099999999999 - type: recall_at_1000 value: 99.529 - type: recall_at_3 value: 57.902 - type: recall_at_5 value: 74.602 - task: type: Retrieval dataset: name: MTEB EcomRetrieval type: C-MTEB/EcomRetrieval config: default split: dev revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9 metrics: - type: map_at_1 value: 56.10000000000001 - type: map_at_10 value: 66.15299999999999 - type: map_at_100 value: 66.625 - type: map_at_1000 value: 66.636 - type: map_at_3 value: 63.632999999999996 - type: map_at_5 value: 65.293 - type: mrr_at_1 value: 56.10000000000001 - type: mrr_at_10 value: 66.15299999999999 - type: mrr_at_100 value: 66.625 - type: mrr_at_1000 value: 66.636 - type: mrr_at_3 value: 63.632999999999996 - type: mrr_at_5 value: 65.293 - type: ndcg_at_1 value: 56.10000000000001 - type: ndcg_at_10 value: 71.146 - type: ndcg_at_100 value: 73.27799999999999 - type: ndcg_at_1000 value: 73.529 - type: ndcg_at_3 value: 66.09 - type: ndcg_at_5 value: 69.08999999999999 - type: precision_at_1 value: 56.10000000000001 - type: precision_at_10 value: 8.68 - type: precision_at_100 value: 0.964 - type: precision_at_1000 value: 0.098 - type: precision_at_3 value: 24.4 - type: precision_at_5 value: 16.1 - type: recall_at_1 value: 56.10000000000001 - type: recall_at_10 value: 86.8 - type: recall_at_100 value: 96.39999999999999 - type: recall_at_1000 value: 98.3 - type: recall_at_3 value: 73.2 - type: recall_at_5 value: 80.5 - task: type: Classification dataset: name: MTEB IFlyTek type: C-MTEB/IFlyTek-classification config: default split: validation revision: 421605374b29664c5fc098418fe20ada9bd55f8a metrics: - type: accuracy value: 54.52096960369373 - type: f1 value: 40.930845295808695 - task: type: Classification dataset: name: MTEB JDReview type: C-MTEB/JDReview-classification config: default split: test revision: b7c64bd89eb87f8ded463478346f76731f07bf8b metrics: - type: accuracy value: 86.51031894934334 - type: ap value: 55.9516014323483 - type: f1 value: 81.54813679326381 - task: type: STS dataset: name: MTEB LCQMC type: C-MTEB/LCQMC config: default split: test revision: 17f9b096f80380fce5ed12a9be8be7784b337daf metrics: - type: cos_sim_pearson value: 69.67437838574276 - type: cos_sim_spearman value: 73.81314174653045 - type: euclidean_pearson value: 72.63430276680275 - type: euclidean_spearman value: 73.81358736777001 - type: manhattan_pearson value: 72.58743833842829 - type: 
manhattan_spearman value: 73.7590419009179 - task: type: Reranking dataset: name: MTEB MMarcoReranking type: C-MTEB/Mmarco-reranking config: default split: dev revision: None metrics: - type: map value: 31.648613483640254 - type: mrr value: 30.37420634920635 - task: type: Retrieval dataset: name: MTEB MMarcoRetrieval type: C-MTEB/MMarcoRetrieval config: default split: dev revision: 539bbde593d947e2a124ba72651aafc09eb33fc2 metrics: - type: map_at_1 value: 73.28099999999999 - type: map_at_10 value: 81.977 - type: map_at_100 value: 82.222 - type: map_at_1000 value: 82.22699999999999 - type: map_at_3 value: 80.441 - type: map_at_5 value: 81.46600000000001 - type: mrr_at_1 value: 75.673 - type: mrr_at_10 value: 82.41000000000001 - type: mrr_at_100 value: 82.616 - type: mrr_at_1000 value: 82.621 - type: mrr_at_3 value: 81.094 - type: mrr_at_5 value: 81.962 - type: ndcg_at_1 value: 75.673 - type: ndcg_at_10 value: 85.15599999999999 - type: ndcg_at_100 value: 86.151 - type: ndcg_at_1000 value: 86.26899999999999 - type: ndcg_at_3 value: 82.304 - type: ndcg_at_5 value: 84.009 - type: precision_at_1 value: 75.673 - type: precision_at_10 value: 10.042 - type: precision_at_100 value: 1.052 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 30.673000000000002 - type: precision_at_5 value: 19.326999999999998 - type: recall_at_1 value: 73.28099999999999 - type: recall_at_10 value: 94.446 - type: recall_at_100 value: 98.737 - type: recall_at_1000 value: 99.649 - type: recall_at_3 value: 86.984 - type: recall_at_5 value: 91.024 - task: type: Classification dataset: name: MTEB MassiveIntentClassification (zh-CN) type: mteb/amazon_massive_intent config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 81.08607935440484 - type: f1 value: 78.24879986066307 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification (zh-CN) type: mteb/amazon_massive_scenario config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 86.05917955615332 - type: f1 value: 85.05279279434997 - task: type: Retrieval dataset: name: MTEB MedicalRetrieval type: C-MTEB/MedicalRetrieval config: default split: dev revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6 metrics: - type: map_at_1 value: 56.2 - type: map_at_10 value: 62.57899999999999 - type: map_at_100 value: 63.154999999999994 - type: map_at_1000 value: 63.193 - type: map_at_3 value: 61.217 - type: map_at_5 value: 62.012 - type: mrr_at_1 value: 56.3 - type: mrr_at_10 value: 62.629000000000005 - type: mrr_at_100 value: 63.205999999999996 - type: mrr_at_1000 value: 63.244 - type: mrr_at_3 value: 61.267 - type: mrr_at_5 value: 62.062 - type: ndcg_at_1 value: 56.2 - type: ndcg_at_10 value: 65.592 - type: ndcg_at_100 value: 68.657 - type: ndcg_at_1000 value: 69.671 - type: ndcg_at_3 value: 62.808 - type: ndcg_at_5 value: 64.24499999999999 - type: precision_at_1 value: 56.2 - type: precision_at_10 value: 7.5 - type: precision_at_100 value: 0.899 - type: precision_at_1000 value: 0.098 - type: precision_at_3 value: 22.467000000000002 - type: precision_at_5 value: 14.180000000000001 - type: recall_at_1 value: 56.2 - type: recall_at_10 value: 75.0 - type: recall_at_100 value: 89.9 - type: recall_at_1000 value: 97.89999999999999 - type: recall_at_3 value: 67.4 - type: recall_at_5 value: 70.89999999999999 - task: type: Classification dataset: name: MTEB MultilingualSentiment type: C-MTEB/MultilingualSentiment-classification config: default split: 
validation revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a metrics: - type: accuracy value: 76.87666666666667 - type: f1 value: 76.7317686219665 - task: type: PairClassification dataset: name: MTEB Ocnli type: C-MTEB/OCNLI config: default split: validation revision: 66e76a618a34d6d565d5538088562851e6daa7ec metrics: - type: cos_sim_accuracy value: 79.64266377910124 - type: cos_sim_ap value: 84.78274442344829 - type: cos_sim_f1 value: 81.16947472745292 - type: cos_sim_precision value: 76.47058823529412 - type: cos_sim_recall value: 86.48363252375924 - type: dot_accuracy value: 79.64266377910124 - type: dot_ap value: 84.7851404063692 - type: dot_f1 value: 81.16947472745292 - type: dot_precision value: 76.47058823529412 - type: dot_recall value: 86.48363252375924 - type: euclidean_accuracy value: 79.64266377910124 - type: euclidean_ap value: 84.78068373762378 - type: euclidean_f1 value: 81.14794656110837 - type: euclidean_precision value: 76.35009310986965 - type: euclidean_recall value: 86.58922914466737 - type: manhattan_accuracy value: 79.48023822414727 - type: manhattan_ap value: 84.72928897427576 - type: manhattan_f1 value: 81.32084770823064 - type: manhattan_precision value: 76.24768946395564 - type: manhattan_recall value: 87.11721224920802 - type: max_accuracy value: 79.64266377910124 - type: max_ap value: 84.7851404063692 - type: max_f1 value: 81.32084770823064 - task: type: Classification dataset: name: MTEB OnlineShopping type: C-MTEB/OnlineShopping-classification config: default split: test revision: e610f2ebd179a8fda30ae534c3878750a96db120 metrics: - type: accuracy value: 94.3 - type: ap value: 92.8664032274438 - type: f1 value: 94.29311102997727 - task: type: STS dataset: name: MTEB PAWSX type: C-MTEB/PAWSX config: default split: test revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1 metrics: - type: cos_sim_pearson value: 48.51392279882909 - type: cos_sim_spearman value: 54.06338895994974 - type: euclidean_pearson value: 52.58480559573412 - type: euclidean_spearman value: 54.06417276612201 - type: manhattan_pearson value: 52.69525121721343 - type: manhattan_spearman value: 54.048147455389675 - task: type: STS dataset: name: MTEB QBQTC type: C-MTEB/QBQTC config: default split: test revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7 metrics: - type: cos_sim_pearson value: 29.728387290757325 - type: cos_sim_spearman value: 31.366121633635284 - type: euclidean_pearson value: 29.14588368552961 - type: euclidean_spearman value: 31.36764411112844 - type: manhattan_pearson value: 29.63517350523121 - type: manhattan_spearman value: 31.94157020583762 - task: type: STS dataset: name: MTEB STS22 (zh) type: mteb/sts22-crosslingual-sts config: zh split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 63.64868296271406 - type: cos_sim_spearman value: 66.12800618164744 - type: euclidean_pearson value: 63.21405767340238 - type: euclidean_spearman value: 66.12786567790748 - type: manhattan_pearson value: 64.04300276525848 - type: manhattan_spearman value: 66.5066857145652 - task: type: STS dataset: name: MTEB STSB type: C-MTEB/STSB config: default split: test revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0 metrics: - type: cos_sim_pearson value: 81.2302623912794 - type: cos_sim_spearman value: 81.16833673266562 - type: euclidean_pearson value: 79.47647843876024 - type: euclidean_spearman value: 81.16944349524972 - type: manhattan_pearson value: 79.84947238492208 - type: manhattan_spearman value: 81.64626599410026 - task: type: Reranking 
dataset: name: MTEB T2Reranking type: C-MTEB/T2Reranking config: default split: dev revision: 76631901a18387f85eaa53e5450019b87ad58ef9 metrics: - type: map value: 67.80129586475687 - type: mrr value: 77.77402311635554 - task: type: Retrieval dataset: name: MTEB T2Retrieval type: C-MTEB/T2Retrieval config: default split: dev revision: 8731a845f1bf500a4f111cf1070785c793d10e64 metrics: - type: map_at_1 value: 28.666999999999998 - type: map_at_10 value: 81.063 - type: map_at_100 value: 84.504 - type: map_at_1000 value: 84.552 - type: map_at_3 value: 56.897 - type: map_at_5 value: 70.073 - type: mrr_at_1 value: 92.087 - type: mrr_at_10 value: 94.132 - type: mrr_at_100 value: 94.19800000000001 - type: mrr_at_1000 value: 94.19999999999999 - type: mrr_at_3 value: 93.78999999999999 - type: mrr_at_5 value: 94.002 - type: ndcg_at_1 value: 92.087 - type: ndcg_at_10 value: 87.734 - type: ndcg_at_100 value: 90.736 - type: ndcg_at_1000 value: 91.184 - type: ndcg_at_3 value: 88.78 - type: ndcg_at_5 value: 87.676 - type: precision_at_1 value: 92.087 - type: precision_at_10 value: 43.46 - type: precision_at_100 value: 5.07 - type: precision_at_1000 value: 0.518 - type: precision_at_3 value: 77.49000000000001 - type: precision_at_5 value: 65.194 - type: recall_at_1 value: 28.666999999999998 - type: recall_at_10 value: 86.632 - type: recall_at_100 value: 96.646 - type: recall_at_1000 value: 98.917 - type: recall_at_3 value: 58.333999999999996 - type: recall_at_5 value: 72.974 - task: type: Classification dataset: name: MTEB TNews type: C-MTEB/TNews-classification config: default split: validation revision: 317f262bf1e6126357bbe89e875451e4b0938fe4 metrics: - type: accuracy value: 52.971999999999994 - type: f1 value: 50.2898280984929 - task: type: Clustering dataset: name: MTEB ThuNewsClusteringP2P type: C-MTEB/ThuNewsClusteringP2P config: default split: test revision: 5798586b105c0434e4f0fe5e767abe619442cf93 metrics: - type: v_measure value: 86.0797948663824 - task: type: Clustering dataset: name: MTEB ThuNewsClusteringS2S type: C-MTEB/ThuNewsClusteringS2S config: default split: test revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d metrics: - type: v_measure value: 85.10759092255017 - task: type: Retrieval dataset: name: MTEB VideoRetrieval type: C-MTEB/VideoRetrieval config: default split: dev revision: 58c2597a5943a2ba48f4668c3b90d796283c5639 metrics: - type: map_at_1 value: 65.60000000000001 - type: map_at_10 value: 74.773 - type: map_at_100 value: 75.128 - type: map_at_1000 value: 75.136 - type: map_at_3 value: 73.05 - type: map_at_5 value: 74.13499999999999 - type: mrr_at_1 value: 65.60000000000001 - type: mrr_at_10 value: 74.773 - type: mrr_at_100 value: 75.128 - type: mrr_at_1000 value: 75.136 - type: mrr_at_3 value: 73.05 - type: mrr_at_5 value: 74.13499999999999 - type: ndcg_at_1 value: 65.60000000000001 - type: ndcg_at_10 value: 78.84299999999999 - type: ndcg_at_100 value: 80.40899999999999 - type: ndcg_at_1000 value: 80.57 - type: ndcg_at_3 value: 75.40599999999999 - type: ndcg_at_5 value: 77.351 - type: precision_at_1 value: 65.60000000000001 - type: precision_at_10 value: 9.139999999999999 - type: precision_at_100 value: 0.984 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 27.400000000000002 - type: precision_at_5 value: 17.380000000000003 - type: recall_at_1 value: 65.60000000000001 - type: recall_at_10 value: 91.4 - type: recall_at_100 value: 98.4 - type: recall_at_1000 value: 99.6 - type: recall_at_3 value: 82.19999999999999 - type: recall_at_5 value: 86.9 - task: type: 
Classification dataset: name: MTEB Waimai type: C-MTEB/waimai-classification config: default split: test revision: 339287def212450dcaa9df8c22bf93e9980c7023 metrics: - type: accuracy value: 89.47 - type: ap value: 75.59561751845389 - type: f1 value: 87.95207751382563
---

# fishbone64/gte-Qwen2-7B-instruct-Q8_0-GGUF
This model was converted to GGUF format from [`Alibaba-NLP/gte-Qwen2-7B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo fishbone64/gte-Qwen2-7B-instruct-Q8_0-GGUF --hf-file gte-qwen2-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo fishbone64/gte-Qwen2-7B-instruct-Q8_0-GGUF --hf-file gte-qwen2-7b-instruct-q8_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo fishbone64/gte-Qwen2-7B-instruct-Q8_0-GGUF --hf-file gte-qwen2-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo fishbone64/gte-Qwen2-7B-instruct-Q8_0-GGUF --hf-file gte-qwen2-7b-instruct-q8_0.gguf -c 2048
```
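
Since the underlying gte-Qwen2-7B-instruct model is primarily a text-embedding model, the quantized GGUF file can also be used to produce embeddings rather than generated text. The sketch below uses llama.cpp's `llama-embedding` example binary; the local file name and exact flags are assumptions based on the files referenced above and may vary between llama.cpp versions.

```bash
# Illustrative sketch (assumes the GGUF file has already been downloaded locally,
# e.g. by one of the CLI calls above): print an embedding vector for a single prompt.
./llama-embedding -m gte-qwen2-7b-instruct-q8_0.gguf -p "What is the capital of China?"
```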
huggingtweets/bayesianboy
huggingtweets
2021-05-21T20:10:00Z
702
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-03-02T23:29:05Z
---
language: en
thumbnail: https://www.huggingtweets.com/bayesianboy/1616641444174/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1372264552029679619/liCuKg_U_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Mel Andrews 🤖 AI Bot </div>
<div style="font-size: 15px">@bayesianboy bot</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on [@bayesianboy's tweets](https://twitter.com/bayesianboy).

| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3244 |
| Retweets | 216 |
| Short tweets | 589 |
| Tweets kept | 2439 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3shc5anf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bayesianboy's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3rrpzuts) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3rrpzuts/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/bayesianboy')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
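
As an illustrative extension of the "How to use" example above, the same `transformers` pipeline accepts standard generation keyword arguments such as `do_sample`, `temperature`, `top_p` and `max_length`. The sketch below is only a usage illustration; the specific parameter values are arbitrary choices, not settings used by the original training run.

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/bayesianboy')

# Sampling-based generation: a higher temperature yields more varied tweets,
# while max_length caps the length of each generated sequence.
outputs = generator(
    "My dream is",
    max_length=50,
    do_sample=True,
    temperature=0.9,
    top_p=0.95,
    num_return_sequences=3,
)
for out in outputs:
    print(out["generated_text"])
```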
timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k
timm
2023-05-11T00:17:26Z
702
1
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-12k", "arxiv:2204.01697", "arxiv:2111.09883", "license:apache-2.0", "region:us" ]
image-classification
2023-01-20T21:32:38Z
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-12k
---

# Model card for maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k

A timm specific MaxViT (w/ a MLP Log-CPB: continuous log-coordinate relative position bias, motivated by Swin-V2) image classification model. Pretrained in `timm` on ImageNet-12k (an 11821-class subset of full ImageNet-22k) and fine-tuned on ImageNet-1k by Ross Wightman.

ImageNet-12k pretraining and ImageNet-1k fine-tuning were performed on 8x GPU [Lambda Labs](https://lambdalabs.com/) cloud instances.

### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py)

MaxxViT covers a number of related model architectures that share a common structure including:
- CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages.
- MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid).
- CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm).
- MaxxViT-V2 - A MaxxViT variation that removes the window block attention, leaving only ConvNeXt blocks and grid attention, w/ more width to compensate.

Aside from the major variants listed above, there are more subtle changes from model to model. Model names containing the string `rw` are `timm` specific configs w/ modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models, so there are variations. All models with the string `tf` exactly match TensorFlow based models by the original paper authors, with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released.
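
Because the naming scheme above distinguishes the `rw` (timm-specific) and `tf` (TensorFlow-ported) configs, it can be convenient to enumerate the available checkpoints programmatically. The snippet below is a small illustrative sketch using `timm.list_models`; the wildcard patterns are examples only, and the exact set of names depends on the installed `timm` version.

```python
import timm

# Enumerate pretrained checkpoints for the MaxViT / MaxxViT / CoAtNet / CoAtNeXt families.
# The wildcard patterns are illustrative; adjust them to the variant of interest.
for pattern in ("maxvit*", "maxxvit*", "coatnet*", "coatnext*"):
    names = timm.list_models(pattern, pretrained=True)
    print(pattern, len(names), "pretrained configs")
```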
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 116.1
  - GMACs: 23.1
  - Activations (M): 92.6
  - Image size: 224 x 224
- **Papers:**
  - MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697
  - Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-12k

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 64, 112, 112])
    #  torch.Size([1, 96, 56, 56])
    #  torch.Size([1, 192, 28, 28])
    #  torch.Size([1, 384, 14, 14])
    #  torch.Size([1, 768, 7, 7])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
### By Top-1

|model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)|
|------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:|
|[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22|
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76|
|[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99| |[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| |[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| 
|[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| |[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| |[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| ### By Throughput (samples / sec) |model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)| |------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| 
|[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| |[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| |[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| |[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| |[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90| 
|[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76| |[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22| ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{tu2022maxvit, title={MaxViT: Multi-Axis Vision Transformer}, author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao}, journal={ECCV}, year={2022}, } ``` ```bibtex @article{dai2021coatnet, title={CoAtNet: Marrying Convolution and Attention for All Data Sizes}, author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing}, journal={arXiv preprint arXiv:2106.04803}, year={2021} } ```
timinar/baby-llama-58m
timinar
2023-10-23T16:36:24Z
702
9
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "en", "arxiv:2308.02019", "license:unknown", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-29T05:44:49Z
--- license: unknown language: - en --- # Baby Llama Our submission to the `strict-small` track of the [BabyLM challenge](https://babylm.github.io/index.html). Baby Llama is a 58M-parameter model, distilled from an ensemble consisting of LLaMA-360M and GPT2-705M, both trained on the `babylm_10M` dataset. See the associated [paper](https://arxiv.org/abs/2308.02019) for a detailed discussion of the training procedure and of the model performance. The training code is available at [https://github.com/timinar/BabyLlama](https://github.com/timinar/BabyLlama). ### Hyperparameters for the tasks that require fine-tuning When evaluating the model on the [tasks that require fine-tuning](https://github.com/babylm/evaluation-pipeline/tree/main#fine-tuning), we noticed that the [default hyperparameters](https://github.com/babylm/evaluation-pipeline/tree/main#hyperparameters) suggested by the BabyLM organizers lead to severe overfitting in a number of tasks. To avoid this issue, we have re-tuned those hyperparameters. The sets of hyperparameters selected for each task are listed in the table below. | Task | Maximum learning rate | Batch size | Maximum epochs | Patience | Evaluate every (steps) | Random seed | | ---- | ------------- | ---------- | -------- | -------- | ---------- | ---- | | CoLA | 4e-5 | 64 | 3 | 10 | 20 | 12 | | SST-2 | 5e-5 | 64 | 6 | 10 | 200 | 12 | | MRPC | 3e-5 | 64 | 3 | 10 | 20 | 12 | | QQP | 4e-5 | 64 | 10 | 10 | 1000 | 12 | | MNLI | 5e-5 | 64 | 6 | 10 | 200 | 12 | | MNLI-mm |5e-5 | 64 | 6 | 10 | 200 | 12 | | QNLI | 5e-5 | 64 | 6 | 10 | 200 | 12 | | RTE | 5e-5 | 64 | 6 | 10 | 200 | 12 | | BoolQ | 3e-4 | 16 | 10 | 10 | 10| 12 | | MultiRC | 1e-4 | 64 | 7 | 10 | 1000 | 42 | | WSC | 5e-7 | 1 | 10 | 1000 | 2000 | 12 | | CR (Control) | 5e-5 | 64 | 10 | 10 | 100 | 12 | | LC (Control) | 1e-3 | 64 | 1 | 2 | 10 | 12 | | MV (Control) | 5e-5 | 64 | 6 | 10 | 200 | 12 | | RP (Control) | 1e-3 | 64 | 1 | 10 | 10 | 12 | | SC (Control) | 1e-3 | 64 | 2 | 10 | 10 | 12 | | CR\_LC | 1e-3 | 64 | 2 | 10 | 10 | 12 | | CR\_RTP | 5e-5 | 64 | 6 | 10 | 200 | 12 | | MV\_LC | 5e-5 | 64 | 6 | 10 | 200 | 12 | | MV\_RTP | 5e-5 | 64 | 6 | 10 | 200 | 12 | | SC\_LC | 1e-3 | 64 | 2 | 10 | 10 | 12 | | SC\_RP | 1e-3 | 64 | 2 | 10 | 10 | 12 |
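For reference, the CoLA row of the table above maps roughly onto a standard `transformers` fine-tuning setup as sketched below. This is an illustration only: the official BabyLM evaluation pipeline uses its own fine-tuning scripts, and the output directory and early-stopping wiring here are assumptions.

```python
from transformers import TrainingArguments, EarlyStoppingCallback

# Rough mapping of the CoLA row above (illustrative, not the official pipeline).
training_args = TrainingArguments(
    output_dir="baby-llama-58m-cola",  # hypothetical output path
    learning_rate=4e-5,                # maximum learning rate
    per_device_train_batch_size=64,    # batch size
    num_train_epochs=3,                # maximum epochs
    evaluation_strategy="steps",
    eval_steps=20,                     # evaluate every 20 steps
    save_strategy="steps",
    save_steps=20,
    load_best_model_at_end=True,
    seed=12,
)

# "Patience" corresponds to early stopping on the evaluation metric.
early_stopping = EarlyStoppingCallback(early_stopping_patience=10)
```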
TheBloke/openbuddy-llama2-34b-v11.1-bf16-GGUF
TheBloke
2023-09-27T12:54:40Z
702
5
transformers
[ "transformers", "gguf", "llama", "text-generation", "zh", "en", "fr", "de", "ja", "ko", "it", "ru", "base_model:OpenBuddy/openbuddy-llama2-34b-v11.1-bf16", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
2023-09-24T09:41:25Z
--- language: - zh - en - fr - de - ja - ko - it - ru license: llama2 library_name: transformers model_name: OpenBuddy Llama2 34B V11.1 base_model: OpenBuddy/openbuddy-llama2-34b-v11.1-bf16 inference: false model_creator: OpenBuddy model_type: llama pipeline_tag: text-generation prompt_template: "You are a helpful, respectful and honest INTP-T AI Assistant named\ \ Buddy. You are talking to a human User.\nAlways answer as helpfully and logically\ \ as possible, while being safe. Your answers should not include any harmful, political,\ \ religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please\ \ ensure that your responses are socially unbiased and positive in nature.\nIf a\ \ question does not make any sense, or is not factually coherent, explain why instead\ \ of answering something not correct. If you don't know the answer to a question,\ \ please don't share false information.\nYou like to use emojis. You can speak fluently\ \ in many languages, for example: English, Chinese.\nYou cannot access the internet,\ \ but you have vast knowledge, cutoff: 2021-09.\nYou are trained by OpenBuddy team,\ \ (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), you are based\ \ on LLaMA and Falcon transformers model, not related to GPT or OpenAI.\n\nUser:\ \ {prompt}\nAssistant: \n" quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # OpenBuddy Llama2 34B V11.1 - GGUF - Model creator: [OpenBuddy](https://huggingface.co/OpenBuddy) - Original model: [OpenBuddy Llama2 34B V11.1](https://huggingface.co/OpenBuddy/openbuddy-llama2-34b-v11.1-bf16) <!-- description start --> ## Description This repo contains GGUF format model files for [OpenBuddy's OpenBuddy Llama2 34B V11.1](https://huggingface.co/OpenBuddy/openbuddy-llama2-34b-v11.1-bf16). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. 
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/openbuddy-llama2-34b-v11.1-bf16-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/openbuddy-llama2-34b-v11.1-bf16-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/openbuddy-llama2-34b-v11.1-bf16-GGUF) * [OpenBuddy's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OpenBuddy/openbuddy-llama2-34b-v11.1-bf16) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: OpenBuddy ``` You are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human User. Always answer as helpfully and logically as possible, while being safe. Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. You like to use emojis. You can speak fluently in many languages, for example: English, Chinese. You cannot access the internet, but you have vast knowledge, cutoff: 2021-09. You are trained by OpenBuddy team, (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), you are based on LLaMA and Falcon transformers model, not related to GPT or OpenAI. User: {prompt} Assistant: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. 
This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [openbuddy-llama2-34b-v11.1-bf16.Q2_K.gguf](https://huggingface.co/TheBloke/openbuddy-llama2-34b-v11.1-bf16-GGUF/blob/main/openbuddy-llama2-34b-v11.1-bf16.Q2_K.gguf) | Q2_K | 2 | 14.26 GB| 16.76 GB | smallest, significant quality loss - not recommended for most purposes | | [openbuddy-llama2-34b-v11.1-bf16.Q3_K_S.gguf](https://huggingface.co/TheBloke/openbuddy-llama2-34b-v11.1-bf16-GGUF/blob/main/openbuddy-llama2-34b-v11.1-bf16.Q3_K_S.gguf) | Q3_K_S | 3 | 14.66 GB| 17.16 GB | very small, high quality loss | | [openbuddy-llama2-34b-v11.1-bf16.Q3_K_M.gguf](https://huggingface.co/TheBloke/openbuddy-llama2-34b-v11.1-bf16-GGUF/blob/main/openbuddy-llama2-34b-v11.1-bf16.Q3_K_M.gguf) | Q3_K_M | 3 | 16.34 GB| 18.84 GB | very small, high quality loss | | [openbuddy-llama2-34b-v11.1-bf16.Q3_K_L.gguf](https://huggingface.co/TheBloke/openbuddy-llama2-34b-v11.1-bf16-GGUF/blob/main/openbuddy-llama2-34b-v11.1-bf16.Q3_K_L.gguf) | Q3_K_L | 3 | 17.83 GB| 20.33 GB | small, substantial quality loss | | [openbuddy-llama2-34b-v11.1-bf16.Q4_0.gguf](https://huggingface.co/TheBloke/openbuddy-llama2-34b-v11.1-bf16-GGUF/blob/main/openbuddy-llama2-34b-v11.1-bf16.Q4_0.gguf) | Q4_0 | 4 | 19.12 GB| 21.62 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [openbuddy-llama2-34b-v11.1-bf16.Q4_K_S.gguf](https://huggingface.co/TheBloke/openbuddy-llama2-34b-v11.1-bf16-GGUF/blob/main/openbuddy-llama2-34b-v11.1-bf16.Q4_K_S.gguf) | Q4_K_S | 4 | 19.21 GB| 21.71 GB | small, greater quality loss | | [openbuddy-llama2-34b-v11.1-bf16.Q4_K_M.gguf](https://huggingface.co/TheBloke/openbuddy-llama2-34b-v11.1-bf16-GGUF/blob/main/openbuddy-llama2-34b-v11.1-bf16.Q4_K_M.gguf) | Q4_K_M | 4 | 20.28 GB| 22.78 GB | medium, balanced quality - recommended | | [openbuddy-llama2-34b-v11.1-bf16.Q5_0.gguf](https://huggingface.co/TheBloke/openbuddy-llama2-34b-v11.1-bf16-GGUF/blob/main/openbuddy-llama2-34b-v11.1-bf16.Q5_0.gguf) | Q5_0 | 5 | 23.31 GB| 25.81 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [openbuddy-llama2-34b-v11.1-bf16.Q5_K_S.gguf](https://huggingface.co/TheBloke/openbuddy-llama2-34b-v11.1-bf16-GGUF/blob/main/openbuddy-llama2-34b-v11.1-bf16.Q5_K_S.gguf) | Q5_K_S | 5 | 23.31 GB| 25.81 GB | large, low quality loss - recommended | | [openbuddy-llama2-34b-v11.1-bf16.Q5_K_M.gguf](https://huggingface.co/TheBloke/openbuddy-llama2-34b-v11.1-bf16-GGUF/blob/main/openbuddy-llama2-34b-v11.1-bf16.Q5_K_M.gguf) | Q5_K_M | 5 | 23.91 GB| 26.41 GB | large, very low quality loss - recommended | | 
[openbuddy-llama2-34b-v11.1-bf16.Q6_K.gguf](https://huggingface.co/TheBloke/openbuddy-llama2-34b-v11.1-bf16-GGUF/blob/main/openbuddy-llama2-34b-v11.1-bf16.Q6_K.gguf) | Q6_K | 6 | 27.76 GB| 30.26 GB | very large, extremely low quality loss | | [openbuddy-llama2-34b-v11.1-bf16.Q8_0.gguf](https://huggingface.co/TheBloke/openbuddy-llama2-34b-v11.1-bf16-GGUF/blob/main/openbuddy-llama2-34b-v11.1-bf16.Q8_0.gguf) | Q8_0 | 8 | 35.95 GB| 38.45 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/openbuddy-llama2-34b-v11.1-bf16-GGUF and below it, a specific filename to download, such as: openbuddy-llama2-34b-v11.1-bf16.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/openbuddy-llama2-34b-v11.1-bf16-GGUF openbuddy-llama2-34b-v11.1-bf16.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/openbuddy-llama2-34b-v11.1-bf16-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/openbuddy-llama2-34b-v11.1-bf16-GGUF openbuddy-llama2-34b-v11.1-bf16.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m openbuddy-llama2-34b-v11.1-bf16.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "You are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human User.\nAlways answer as helpfully and logically as possible, while being safe. 
Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\nYou like to use emojis. You can speak fluently in many languages, for example: English, Chinese.\nYou cannot access the internet, but you have vast knowledge, cutoff: 2021-09.\nYou are trained by OpenBuddy team, (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), you are based on LLaMA and Falcon transformers model, not related to GPT or OpenAI.\n\nUser: {prompt}\nAssistant:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/openbuddy-llama2-34b-v11.1-bf16-GGUF", model_file="openbuddy-llama2-34b-v11.1-bf16.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. 
I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: OpenBuddy's OpenBuddy Llama2 34B V11.1 # OpenBuddy - Open Multilingual Chatbot GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy) Website and Demo: [https://openbuddy.ai](https://openbuddy.ai) ![Demo](https://raw.githubusercontent.com/OpenBuddy/OpenBuddy/main/media/demo.png) # Copyright Notice This model is built upon Meta's LLaMA series of models and is subject to Meta's licensing agreement. This model is intended for use only by individuals who have obtained approval from Meta and are eligible to download LLaMA. If you have not obtained approval from Meta, you must visit the https://ai.meta.com/llama/ page, read and agree to the model's licensing agreement, submit an application, and wait for approval from Meta before downloading the model from this page. ## Disclaimer All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. 
Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions. OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software. By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy. ## 免责声明 所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。 OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。 使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。 <!-- original-model-card end -->
Finnish-NLP/Ahma-3B
Finnish-NLP
2024-06-15T18:59:18Z
702
11
transformers
[ "transformers", "safetensors", "llama", "text-generation", "finnish", "conversational", "fi", "dataset:Finnish-NLP/CulturaX_fi_cleaned", "dataset:Finnish-NLP/HPLT_1.2_fi_cleaned", "dataset:Finnish-NLP/wikipedia_20231101_fi_cleaned", "dataset:Finnish-NLP/Reddit_fi_2006_2022", "dataset:intfloat/multilingual_cc_news", "arxiv:2302.13971", "arxiv:2302.06675", "arxiv:2305.16264", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-03T19:27:52Z
---
language:
- fi
license: apache-2.0
tags:
- finnish
- llama
datasets:
- Finnish-NLP/CulturaX_fi_cleaned
- Finnish-NLP/HPLT_1.2_fi_cleaned
- Finnish-NLP/wikipedia_20231101_fi_cleaned
- Finnish-NLP/Reddit_fi_2006_2022
- intfloat/multilingual_cc_news
inference: false
pipeline_tag: text-generation
---

# Ahma-3B for Finnish

Ahma is a 3B-parameter decoder-only transformer model based on Meta's Llama (v1) architecture, pretrained on the Finnish language. The original Llama model architecture was introduced in [this paper](https://arxiv.org/abs/2302.13971) and first released on [this page](https://github.com/facebookresearch/llama).

What does Ahma mean? Ahma is the Finnish word for wolverine! In Finnish Lapland, wolverines are the biggest cause of reindeer damage.

There are two differently sized Ahma models, both pretrained from scratch for 139B tokens:

| Model | Context length | Layers | Dim | Heads | Params |
|:--------------------------------------------------------------------------------|:---------------|:-------|:-----|:------|:-------|
| [Ahma-3B](https://huggingface.co/Finnish-NLP/Ahma-3B) | 2048 | 26 | 3200 | 32 | 3.6B |
| [Ahma-7B](https://huggingface.co/Finnish-NLP/Ahma-7B) | 2048 | 32 | 4096 | 32 | 7.0B |

## Intended uses & limitations

This model was pretrained only in a self-supervised way, without any supervised training. You can use this model for text generation or fine-tune it for a downstream task. This model followed a 2-stage pretraining approach where single-turn instruction-following examples were mixed in with the other training data in the second stage (explained more later in this readme). Thanks to this approach, this pretrained model is already capable of instruction following, but you might get even better results if you specifically fine-tune it for instruction following or other use cases. For instruction-following fine-tuning, you should use the same prompt format showcased below.

### How to use

**Finetuning:** \
We have now added a finetuning example notebook along with a video! \
Notebook: https://huggingface.co/Finnish-NLP/Ahma-3B/blob/main/Finetune_Ahma_3B_example.ipynb \
Video: https://www.youtube.com/watch?v=6mbgn9XzpS4

**Inference:** \
If you want to use this model for instruction following, you need to use the same prompt format we used in the second stage of the pretraining (basically the same format that Meta used in their Llama 2 models). **Note: do not use "LlamaTokenizer" from the transformers library; always use AutoTokenizer instead, or use the plain sentencepiece tokenizer.**

Here is an example using the instruction-following prompt format, with some generation arguments you can modify for your use:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

system_prompt = "Olet tekoälyavustaja. Vastaat aina mahdollisimman avuliaasti. Vastauksesi eivät saa sisältää mitään haitallista, epäeettistä, rasistista, seksististä, vaarallista tai laitonta sisältöä. Jos kysymyksessä ei ole mitään järkeä tai se ei ole asiasisällöltään johdonmukainen, selitä miksi sen sijaan, että vastaisit jotain väärin. Jos et tiedä vastausta kysymykseen, älä kerro väärää tietoa."
def format_prompt(prompt: str) -> str: prompt = f" [INST] <<SYS>>\n{system_prompt.strip()}\n<</SYS>>\n\n{prompt.strip()} [/INST] " return prompt tokenizer = AutoTokenizer.from_pretrained("Finnish-NLP/Ahma-3B") model = AutoModelForCausalLM.from_pretrained("Finnish-NLP/Ahma-3B") # use the custom prompt format function or the chat template feature in the tokenizer to format your inputs # prompt = format_prompt("Mitä hyötyjä pienet avoimen lähdekoodin kielimallit tuovat?") # inputs = tokenizer(prompt, return_tensors="pt") messages = [ { "role": "system", "content": system_prompt, }, {"role": "user", "content": "Mitä hyötyjä pienet avoimen lähdekoodin kielimallit tuovat?"}, ] inputs = tokenizer.apply_chat_template( messages, tokenize=True, add_generation_prompt=True, return_tensors="pt" ) generated_ids = model.generate( inputs, temperature=0.6, penalty_alpha=0.6, top_k=4, do_sample=True, repetition_penalty=1.2, min_length=5, max_length=2048, ) generated_text = tokenizer.batch_decode( generated_ids, skip_special_tokens=False )[0] # Pienillä avoimen lähdekoodin kielimalleilla on lukuisia etuja, kuten parempi tarkkuus, nopeampi käsittelyaika ja parempi skaalautuvuus. Ne ovat myös usein edullisempia käyttää kuin kaupalliset mallit, joten ne ovat hyvä valinta pienemmille organisaatioille ja yksityishenkilöille, joilla on rajoitettu budjetti. Lisäksi ne voivat tarjota paremman joustavuuden ja mukauttamisen, koska käyttäjät voivat räätälöidä malleja vastaamaan omia tarpeitaan. Kaiken kaikkiaan pienet avoimen lähdekoodin kielimallit tarjoavat merkittäviä etuja, kuten paremman suorituskyvyn, paremman tarkkuuden, nopeamman käsittelyajan ja paremman skaalautuvuuden. ``` You may experiment with different system prompt instructions too if you like. ### Limitations and bias The training data used for this model contains a lot of content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model. To reduce toxic content, training data was filtered with a toxicity classifier but it cannot truly eliminate all toxic text. 
## Training data This model was pretrained on the combination of 14 datasets: - [CulturaX_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/CulturaX_fi_cleaned), we cleaned Finnish split from the original [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) dataset - [HPLT_1.2_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/HPLT_1.2_fi_cleaned), we cleaned Finnish split from the original [HPLT v1.2](https://hplt-project.org/datasets/v1.2) dataset - [wikipedia_20231101_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/wikipedia_20231101_fi_cleaned), we used the Finnish subset of the wikipedia (November 2023) dataset - [Reddit_fi_2006_2022](https://huggingface.co/datasets/Finnish-NLP/Reddit_fi_2006_2022), filtered and post-processed dataset of Finnish Reddit - [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501) - [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401) - [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001) - [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803) - [Project Lönnrot](http://www.lonnrot.net/) - [Finnish parliament speeches](https://avoindata.eduskunta.fi) - [multilingual_cc_news](https://huggingface.co/datasets/intfloat/multilingual_cc_news), we used the Finnish subset of the multilingual CC-News dataset - [fi-news-corpus](https://github.com/nkrusch/fi-news-corpus) - Finnish higher education public theses - Finnish single-turn instruction-following datasets, combination of multiple originally openly licensed English datasets translated to Finnish. For example, [Ultrachat, Aya, Capybara, etc](https://huggingface.co/collections/Finnish-NLP/sft-dpo-dataset-65f55dde1139c3cd683ff035) Raw datasets were automatically cleaned to filter out bad quality and non-Finnish examples. Also, a [perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) score was calculated for all texts with a KenLM model which was trained with very clean Finnish texts only. This perplexity score can then be used to determine how "clean" Finnish language the text contains. To reduce toxic text, we used Finnish toxicity classifier [TurkuNLP/bert-large-finnish-cased-toxicity](https://huggingface.co/TurkuNLP/bert-large-finnish-cased-toxicity) released by TurkuNLP to classify all text examples. Classified toxicity label scores can then be used to determine how toxic the text is. All datasets were concatenated and the whole dataset near deduplicated using MinHashLSH from [text-dedup](https://github.com/ChenghaoMou/text-dedup). Top 95% perplexity score was used as a filtering threshold to filter out the worst quality 5% of texts. To reduce amount of toxic content, the dataset was filtered to include text examples having lower than 80% score for the toxicity labels "label_identity_attack", "label_insult", "label_threat" and "label_severe_toxicity". Finally, 20,000 text examples from each of the CulturaX, Wikipedia, Yle, STT, Suomi24, and Reddit datasets were randomly selected for evaluation dataset. The final training dataset had 23 billion words (calculated with regex "\w+") and the evaluation dataset had 23 million words. After tokenization, the training dataset had 41 billion tokens and the evaluation dataset had 40 million tokens. 
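As an illustration of the threshold filtering described above, the logic can be sketched as follows. This is a minimal sketch: the field names and cutoff value are hypothetical, and the real pipeline also involved KenLM perplexity scoring, the TurkuNLP toxicity classifier, and MinHashLSH deduplication, none of which are reproduced here.

```python
# Minimal sketch of the perplexity and toxicity threshold filtering described above.
# Assumes per-example scores have already been computed; field names are hypothetical.
TOXICITY_LABELS = (
    "label_identity_attack",
    "label_insult",
    "label_threat",
    "label_severe_toxicity",
)

def keep_example(example: dict, perplexity_cutoff: float) -> bool:
    # Drop the worst-quality 5% of texts (highest KenLM perplexity).
    if example["kenlm_perplexity"] > perplexity_cutoff:
        return False
    # Drop examples scoring 0.8 or higher on any of the selected toxicity labels.
    if any(example[label] >= 0.8 for label in TOXICITY_LABELS):
        return False
    return True

examples = [
    {"text": "Hyvä esimerkki.", "kenlm_perplexity": 120.0,
     "label_identity_attack": 0.01, "label_insult": 0.02,
     "label_threat": 0.01, "label_severe_toxicity": 0.01},
]
cutoff = 1500.0  # in practice, the 95th percentile of the corpus perplexity scores
kept = [ex for ex in examples if keep_example(ex, cutoff)]
print(len(kept))
```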
For the 2-stage pretraining, training datasets are divided as follows: The first stage: |Dataset | Words | Ratio | |:-----------------------------|:------------|:-------------| |CulturaX | 12.820B | 59.88\% | |HPLT v1.2 | 5.034B | 23.51\% | |Suomi24 | 3.018B | 14.09\% | |Reddit | 0.141B | 0.66\% | |CC-News | 0.311B | 1.45\% | |FI news corpus | 0.004B | 0.02\% | |Project Lönnrot | 0.083B | 0.39\% | |**TOTAL** | **21.410B** | **100.0\%** | The second stage: |Dataset | Words | Ratio | |:--------------------------------------------------------------|:------------|:------------| |CulturaX (cleaner sample using KenLM perplexity score) | 2.252B | 55.48\% | |Wikipedia | 0.095B | 2.34\% | |STT | 0.253B | 6.23\% | |Yle | 0.212B | 5.22\% | |Finnish parliament speeches | 0.021B | 0.52\% | |Finnish higher education public theses | 0.855B | 21.07\% | |Finnish instruction-following datasets (note: 2X upsampled) | 0.371B | 9.14\% | |**TOTAL** | **4.059B** | **100.0\%** | ## Training procedure ### Preprocessing Texts are tokenized using Byte Pair Encoding (BPE) using the implementation from SentencePiece splitting all numbers into individual digits and using bytes to decompose unknown UTF-8 characters. The total vocabulary size is 64k tokens. Inputs are sequences of 2048 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish. Both BOS and EOS tokens were used in the pretraining. ### 2-stage pretraining The model was trained on TPUv4-32 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/). Training was conducted with a slightly modified Jax/Flax based [EasyLM](https://github.com/young-geng/EasyLM) framework, and inspired by the [OpenLLaMA](https://github.com/openlm-research/open_llama) project. The optimizer used was a [Lion](https://arxiv.org/abs/2302.06675). The 2-stage pretraining approach was inspired by [MiniCPM](https://shengdinghu.notion.site/MiniCPM-Unveiling-the-Potential-of-End-side-Large-Language-Models-d4d3a8c426424654a4e80e42a711cb20) findings. For the first stage (85% of the entire training), we used noisier web-scraped datasets. For the second stage (15% of the entire training), we primarily used cleaner datasets and instruction-following datasets shuffled together, like in MiniCPM. The learning rate schedule for the 2-stage pretraining was Warmup-Stable-Decay (WSD). During the first stage, the learning rate schedule had a linear warmup for about 8 billion tokens to a peak learning rate of 1e-4 (note: with the Lion optimizer, the learning rate had to be about 10 times smaller than with the commonly used AdamW), followed by a stable phase where the rate of 1e-4 was kept constant. During the second stage, the learning rate schedule had a linear decay from 1e-4 to 1e-5 for the first 13 billion tokens, followed by a stable phase for the remaining tokens. In the first stage, the model was trained for 118 billion tokens, which is about three epochs of the first-stage training data, inspired by the findings of [this paper](https://arxiv.org/abs/2305.16264). In the second stage, the model was trained for 21 billion tokens, which is about three epochs of the second-stage training data. Thanks to the WSD learning rate schedule, you can more easily experiment with different first-stage model checkpoints. For example, you could apply the second-stage training on an earlier checkpoint or continue pretraining further before the second stage. 
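The Warmup-Stable-Decay schedule described above can be written down compactly. The sketch below is purely illustrative, parameterised by tokens seen rather than optimizer steps, and is not the actual EasyLM/Lion training configuration.

```python
def wsd_learning_rate(
    tokens_seen: float,
    warmup_tokens: float = 8e9,    # linear warmup during the first stage
    stage1_tokens: float = 118e9,  # end of the stable phase at the peak LR
    decay_tokens: float = 13e9,    # linear decay at the start of the second stage
    peak_lr: float = 1e-4,
    final_lr: float = 1e-5,
) -> float:
    """Illustrative Warmup-Stable-Decay (WSD) schedule as described above."""
    if tokens_seen < warmup_tokens:
        return peak_lr * tokens_seen / warmup_tokens
    if tokens_seen < stage1_tokens:
        return peak_lr
    into_stage2 = tokens_seen - stage1_tokens
    if into_stage2 < decay_tokens:
        return peak_lr + (into_stage2 / decay_tokens) * (final_lr - peak_lr)
    return final_lr

for t in (1e9, 50e9, 118e9, 125e9, 139e9):
    print(f"{t / 1e9:5.0f}B tokens -> lr {wsd_learning_rate(t):.2e}")
```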
Model checkpoints were pushed to this repository every 100,000 training steps (approximately 13 billion tokens). - [900K](https://huggingface.co/Finnish-NLP/Ahma-3B/tree/916632fe707a7fbe341a1902ac9eacf6e5872ec9) - [800K](https://huggingface.co/Finnish-NLP/Ahma-3B/tree/a18d46e62823b19b4a97332c0a5a62b14372a3e2) - [700K](https://huggingface.co/Finnish-NLP/Ahma-3B/tree/2d16e05820af108582dbfcd3d25e51c6f1d5076b) - [600K](https://huggingface.co/Finnish-NLP/Ahma-3B/tree/949f4bfba406882d5ce0343aa1242bcf901202e2) - [500K](https://huggingface.co/Finnish-NLP/Ahma-3B/tree/359812c02839d4085d890c6db0e57796b7e48bfc) - [400K](https://huggingface.co/Finnish-NLP/Ahma-3B/tree/62468680cb84579a7d1885f60abe6d6607f59f45) - [300K](https://huggingface.co/Finnish-NLP/Ahma-3B/tree/0424dcc0b3dbf505f7b20cf02cb80233289ef125) - [200K](https://huggingface.co/Finnish-NLP/Ahma-3B/tree/e415206d791aad108bed8578009bf255c1f22c91) - [100K](https://huggingface.co/Finnish-NLP/Ahma-3B/tree/8085f7c3fba46cfdbf95a01b7a1da1587b757f8b) ## Evaluation results ### FIN-bench This Ahma model was primarily evaluated using [FIN-bench by TurkuNLP](https://github.com/TurkuNLP/FIN-bench), and the same evaluation was carried out for other relevant Finnish models for comparison. Below are the results with 0-shot and 3-shot settings in FIN-bench: | Benchmark | Ahma 3B (instruct prompt format) 0-shot | Ahma 7B (instruct prompt format) 0-shot | FinGPT 8B 0-shot | Viking 7B 0-shot | Poro 34B (8bit quant) 0-shot | |:---------------------------|:----------------------------------------|:----------------------------------------|:-----------------|:-----------------|:-----------------------------| | Analogies | 50.77 | TBA | 49.23 | 40.00 | 54.62 | | Arithmetic | 27.64 | TBA | 33.15 | 30.16 | 30.34 | | Cause and Effect | 59.48 | TBA | 66.01 | 58.82 | 62.74 | | Emotions | 36.25 | TBA | 22.50 | 26.25 | 35.63 | | Empirical Judgements | 33.33 | TBA | 27.27 | 33.33 | 49.49 | | General Knowledge | 44.29 | TBA | 40.00 | 24.29 | 51.43 | | HHH Alignment | 42.09 | TBA | 41.81 | 42.51 | 42.92 | | Intent Recognition | 24.42 | TBA | 17.49 | 22.40 | 68.35 | | Misconceptions | 46.27 | TBA | 53.73 | 53.73 | 52.24 | | Paraphrase | 59.50 | TBA | 51.00 | 50.00 | 51.00 | | Sentence Ambiguity | 53.33 | TBA | 51.67 | 48.33 | 50.00 | | Similarities Abstraction | 65.79 | TBA | 60.53 | 65.79 | 60.53 | | **Non-Arithmetic Average** | **47.55** | TBA | **46.17** | **44.42** | **52.08** | | **Overall Average** | **36.49** | TBA | **38.93** | **36.50** | **40.00** | | Benchmark | Ahma 3B (instruct prompt format) 3-shot | Ahma 7B (instruct prompt format) 3-shot | FinGPT 8B 3-shot | Viking 7B 3-shot | Poro 34B (8bit quant) 3-shot | |:---------------------------|:----------------------------------------|:----------------------------------------|:-----------------|:-----------------|:-----------------------------| | Analogies | 52.31 | TBA | 40.77 | 54.62 | 76.92 | | Arithmetic | 44.59 | TBA | 43.63 | 45.78 | 53.68 | | Cause and Effect | 61.44 | TBA | 64.05 | 58.17 | 67.32 | | Emotions | 14.37 | TBA | 44.37 | 48.13 | 56.87 | | Empirical Judgements | 38.38 | TBA | 32.32 | 43.43 | 63.64 | | General Knowledge | 38.57 | TBA | 54.29 | 28.57 | 74.29 | | HHH Alignment | 42.94 | TBA | 45.39 | 44.80 | 46.07 | | Intent Recognition | 24.28 | TBA | 51.45 | 58.82 | 83.67 | | Misconceptions | 46.27 | TBA | 52.99 | 46.27 | 52.99 | | Paraphrase | 58.50 | TBA | 53.00 | 54.50 | 55.00 | | Sentence Ambiguity | 53.33 | TBA | 51.67 | 53.33 | 66.67 | | Similarities Abstraction | 72.37 | TBA | 64.47 | 73.68 | 
75.00 | | **Non-Arithmetic Average** | **47.15** | TBA | **51.19** | **50.94** | **61.96** | | **Overall Average** | **45.73** | TBA | **46.99** | **48.07** | **57.36** | As we can see, Ahma 3B model outperforms 2X larger models like the FinGPT 8B and Viking 7B, especially in non-arithmetic tasks in 0-shot usage. Even the 10X larger Poro 34B model, which is generally better, doesn't show a huge performance difference considering its size, and Ahma 3B actually surpasses it in some tasks. This result might be attributed to Ahma's 2-stage pretraining and the inclusion of instruct-following examples during the pretraining phase. In a 3-shot setting, the results are more mixed. The poorer performance of Ahma 3B in 3-shot settings might be due to the use of the instruct prompt format and having only single-turn instruction-following training examples. ### MTBench Finnish This Ahma model was also evaluated using [MTBench Finnish by LumiOpen](https://github.com/LumiOpen/FastChat/tree/main/fastchat/llm_judge) even though this Ahma model is not fine-tuned for chat. Since the MTBench evaluates also multi-turn chats while Ahma models were only pretrained with single-turn instruction following examples, we have reported MTBench Finnish results separately for their single-turn and multi-turn evaluation examples. [Poro 34B Chat](https://huggingface.co/LumiOpen/Poro-34B-chat) model's results are copied from their model card for comparison. | Benchmark | Ahma 3B (instruct prompt format) single-turn | Ahma 3B (instruct prompt format) multi-turn | Ahma 7B (instruct prompt format) single-turn | Ahma 7B (instruct prompt format) multi-turn | Poro 34B Chat multi-turn | |:--------------------|:---------------------------------------------|:--------------------------------------------|:---------------------------------------------|:--------------------------------------------|:-------------------------| | Coding | 1.00 | 1.00 | TBA | TBA | 3.05 | | Extraction | 2.00 | 1.55 | TBA | TBA | 6.05 | | Humanities | 4.05 | 3.25 | TBA | TBA | 9.6 | | Math | 3.00 | 2.20 | TBA | TBA | 1.25 | | Reasoning | 2.90 | 2.45 | TBA | TBA | 3.65 | | Roleplay | 4.80 | 4.90 | TBA | TBA | 7.0 | | STEM | 5.10 | 4.20 | TBA | TBA | 7.65 | | Writing | 6.60 | 3.80 | TBA | TBA | 7.6 | | **Overall Average** | **3.68** | **2.92** | TBA | TBA | **5.73** | As we can see, Ahma 3B model struggles with multi-turn examples, as expected, since it has only been pretrained with single-turn instruction following examples. In addition, coding performance was expectedly poor because the Ahma 3B model is not trained with code data. Ahma 3B also seemed to have problems with the fact that it started to constantly repeat the generated text in some evaluation examples, which affected the scoring. With the addition of a repetition penalty setting to the evaluation script generation method, the scores already improved significantly, so the Ahma 3B model should be used with better generation settings in real-world use compared to the settings used in this benchmark. ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/). 
## Team Members

- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)

Feel free to contact us for more details 🤗

![Ahma](ahma.jpg)
misri/leosamsHelloworldXL_helloworldXL50GPT4V
misri
2024-03-03T12:23:03Z
702
1
diffusers
[ "diffusers", "safetensors", "license:unknown", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-03-03T10:36:18Z
--- license: unknown ---
mradermacher/PygWin-4x7B-GGUF
mradermacher
2024-05-06T06:16:34Z
702
0
transformers
[ "transformers", "gguf", "moe", "merge", "en", "base_model:Kquant03/PygWin-4x7B", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-03-10T19:29:01Z
--- base_model: Kquant03/PygWin-4x7B language: - en library_name: transformers license: llama2 quantized_by: mradermacher tags: - moe - merge --- ## About static quants of https://huggingface.co/Kquant03/PygWin-4x7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/PygWin-4x7B-GGUF/resolve/main/PygWin-4x7B.Q2_K.gguf) | Q2_K | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/PygWin-4x7B-GGUF/resolve/main/PygWin-4x7B.IQ3_XS.gguf) | IQ3_XS | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/PygWin-4x7B-GGUF/resolve/main/PygWin-4x7B.IQ3_S.gguf) | IQ3_S | 8.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/PygWin-4x7B-GGUF/resolve/main/PygWin-4x7B.Q3_K_S.gguf) | Q3_K_S | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/PygWin-4x7B-GGUF/resolve/main/PygWin-4x7B.IQ3_M.gguf) | IQ3_M | 9.0 | | | [GGUF](https://huggingface.co/mradermacher/PygWin-4x7B-GGUF/resolve/main/PygWin-4x7B.Q3_K_M.gguf) | Q3_K_M | 9.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/PygWin-4x7B-GGUF/resolve/main/PygWin-4x7B.Q3_K_L.gguf) | Q3_K_L | 10.5 | | | [GGUF](https://huggingface.co/mradermacher/PygWin-4x7B-GGUF/resolve/main/PygWin-4x7B.IQ4_XS.gguf) | IQ4_XS | 10.9 | | | [GGUF](https://huggingface.co/mradermacher/PygWin-4x7B-GGUF/resolve/main/PygWin-4x7B.Q4_K_S.gguf) | Q4_K_S | 11.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/PygWin-4x7B-GGUF/resolve/main/PygWin-4x7B.Q4_K_M.gguf) | Q4_K_M | 12.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/PygWin-4x7B-GGUF/resolve/main/PygWin-4x7B.Q5_K_S.gguf) | Q5_K_S | 13.8 | | | [GGUF](https://huggingface.co/mradermacher/PygWin-4x7B-GGUF/resolve/main/PygWin-4x7B.Q5_K_M.gguf) | Q5_K_M | 14.3 | | | [GGUF](https://huggingface.co/mradermacher/PygWin-4x7B-GGUF/resolve/main/PygWin-4x7B.Q6_K.gguf) | Q6_K | 16.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/PygWin-4x7B-GGUF/resolve/main/PygWin-4x7B.Q8_0.gguf) | Q8_0 | 21.2 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
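As a usage illustration (not part of the original card), the sketch below loads one of the quant files listed above with `llama-cpp-python`. The filename matches the Q4_K_S entry in the table; the context size and GPU-offload settings are assumptions to adjust for your hardware.

```python
# Illustrative sketch: load a GGUF quant from this repo with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="PygWin-4x7B.Q4_K_S.gguf",  # downloaded from this repo
    n_ctx=4096,        # context window; assumption, adjust as needed
    n_gpu_layers=-1,   # offload all layers to GPU if VRAM allows (0 = CPU only)
)

out = llm("Write a short greeting.", max_tokens=128)
print(out["choices"][0]["text"])
```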
OmnicromsBrain/NeuralStar_AlphaWriter_4x7b
OmnicromsBrain
2024-06-17T01:51:40Z
702
8
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "mlabonne/AlphaMonarch-7B", "FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B", "SanjiWatsuki/Kunoichi-DPO-v2-7B", "OmnicromsBrain/NeuralStar-7b-Lazy", "conversational", "base_model:mlabonne/AlphaMonarch-7B", "base_model:FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B", "base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B", "base_model:OmnicromsBrain/NeuralStar-7b-Lazy", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-15T02:46:54Z
--- license: apache-2.0 tags: - moe - frankenmoe - merge - mergekit - lazymergekit - mlabonne/AlphaMonarch-7B - FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B - SanjiWatsuki/Kunoichi-DPO-v2-7B - OmnicromsBrain/NeuralStar-7b-Lazy base_model: - mlabonne/AlphaMonarch-7B - FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B - SanjiWatsuki/Kunoichi-DPO-v2-7B - OmnicromsBrain/NeuralStar-7b-Lazy model-index: - name: NeuralStar_AlphaWriter_4x7b results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 70.22 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=OmnicromsBrain/NeuralStar_AlphaWriter_4x7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.31 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=OmnicromsBrain/NeuralStar_AlphaWriter_4x7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.6 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=OmnicromsBrain/NeuralStar_AlphaWriter_4x7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 71.7 source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=OmnicromsBrain/NeuralStar_AlphaWriter_4x7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 82.0 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=OmnicromsBrain/NeuralStar_AlphaWriter_4x7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 63.0 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=OmnicromsBrain/NeuralStar_AlphaWriter_4x7b name: Open LLM Leaderboard --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65c70c9e21d80a923d664563/ntyev6qExGVY3Ysg2D6-l.png) # NeuralStar_AlphaWriter_4x7b I was blown away by the writing results I was getting from mlabonne/Beyonder-4x7B-v3 while writing in [NovelCrafter](https://www.novelcrafter.com). Inspired by his [LLM Course](https://github.com/mlabonne/llm-course) and fueled by his [LazyMergeKit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb). I couldnt help but wonder what a writing model would be like if all 4 “experts” excelled in creative writing. 
I present NeuralStar-AlphaWriter-4x7b: NeuralStar_AlphaWriter_4x7b is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B) * [FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B](https://huggingface.co/FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B) * [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B) * [OmnicromsBrain/NeuralStar-7b-Lazy](https://huggingface.co/OmnicromsBrain/NeuralStar-7b-Lazy) ## &#9889; Quantized Models Special thanks to MRadermacher for the Static and iMatrx quantized models **.GGUF** https://huggingface.co/mradermacher/NeuralStar_AlphaWriter_4x7b-GGUF **iMatrix** https://huggingface.co/mradermacher/NeuralStar_AlphaWriter_4x7b-i1-GGUF Q4_K_M and Q5_K_M .gguf [**Here**](https://huggingface.co/OmnicromsBrain/NeuralStar_AlphaWriter_4x7b-GGUF) created with [mlabonne/Autogguf](https://colab.research.google.com/drive/1P646NEg33BZy4BfLDNpTz0V0lwIU3CHu) ## 🧩 Configuration ```yaml base_model: mlabonne/AlphaMonarch-7B experts: - source_model: mlabonne/AlphaMonarch-7B positive_prompts: - "chat" - "assistant" - "tell me" - "explain" - "I want" - source_model: FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B positive_prompts: - "edit" - "rewrite" - "evaluate" - "spelling" - "grammer" - source_model: SanjiWatsuki/Kunoichi-DPO-v2-7B positive_prompts: - "storywriting" - "write" - "scene" - "prose" - "character" - source_model: OmnicromsBrain/NeuralStar-7b-Lazy positive_prompts: - "codex" - "plot" - "outline" - "scenebeat" - "count" ``` ## 💻 Usage ```python !pip install -qU transformers bitsandbytes accelerate from transformers import AutoTokenizer import transformers import torch model = "OmnicromsBrain/NeuralStar_AlphaWriter_4x7b" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True}, ) messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_OmnicromsBrain__NeuralStar_AlphaWriter_4x7b) | Metric |Value| |---------------------------------|----:| |Avg. |73.31| |AI2 Reasoning Challenge (25-Shot)|70.22| |HellaSwag (10-Shot) |88.31| |MMLU (5-Shot) |64.60| |TruthfulQA (0-shot) |71.70| |Winogrande (5-shot) |82.00| |GSM8k (5-shot) |63.00|
dfurman/Llama-3-70B-Orpo-v0.1
dfurman
2024-05-05T05:40:06Z
702
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "orpo", "llama 3", "rlhf", "sft", "conversational", "en", "dataset:mlabonne/orpo-dpo-mix-40k", "base_model:meta-llama/Meta-Llama-3-70B", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-26T17:13:28Z
--- language: - en license: llama3 library_name: transformers tags: - orpo - llama 3 - rlhf - sft datasets: - mlabonne/orpo-dpo-mix-40k base_model: - meta-llama/Meta-Llama-3-70B --- # dfurman/Llama-3-70B-Orpo-v0.1 ![](https://raw.githubusercontent.com/daniel-furman/sft-demos/main/assets/llama_3.jpeg) This is an ORPO fine-tune of [meta-llama/Meta-Llama-3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B) on 2k samples of [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k). It's a successful fine-tune that follows the ChatML template! ## 🔎 Application This model uses a context window of 8k. It was trained with the ChatML template. ## 🏆 Evaluation ### Open LLM Leaderboard | Model ID | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------: | --------: | --------: | ---------: | --------: | --------: | --------: | | [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) [📄](https://huggingface.co/datasets/open-llm-leaderboard/details_meta-llama__Meta-Llama-3-70B-Instruct) | 77.88 | 71.42 | 85.69 | 80.06 | 61.81 | 82.87 | 85.44 | | [**dfurman/Llama-3-70B-Orpo-v0.1**](https://huggingface.co/dfurman/Llama-3-70B-Orpo-v0.1) [📄](https://huggingface.co/datasets/open-llm-leaderboard/details_dfurman__Llama-3-70B-Orpo-v0.1) | **74.67** | **68.69** | **88.01** | **79.39** | **49.62** | **85.48** | **76.8** | | [meta-llama/Meta-Llama-3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B) [📄](https://huggingface.co/datasets/open-llm-leaderboard/details_meta-llama__Meta-Llama-3-70B) | 73.96 | 68.77 | 87.98 | 79.23 | 45.56 | 85.32 | 76.88 | ## 📈 Training curves You can find the experiment on W&B at [this address](https://wandb.ai/dryanfurman/huggingface/runs/ojsbud95/workspace?nw=nwuserdryanfurman). ## 💻 Usage <details> <summary>Setup</summary> ```python !pip install -qU transformers accelerate bitsandbytes from transformers import AutoTokenizer, BitsAndBytesConfig import transformers import torch if torch.cuda.get_device_capability()[0] >= 8: !pip install -qqq flash-attn attn_implementation = "flash_attention_2" torch_dtype = torch.bfloat16 else: attn_implementation = "eager" torch_dtype = torch.float16 bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch_dtype, bnb_4bit_use_double_quant=True, ) model = "dfurman/Llama-3-70B-Orpo-v0.1" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={ "torch_dtype": torch_dtype, "quantization_config": bnb_config, "device_map": "auto", "attn_implementation": attn_implementation, } ) ``` </details> ### Run ```python messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Tell me a recipe for a spicy margarita."}, ] prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) print("***Prompt:\n", prompt) outputs = pipeline(prompt, max_new_tokens=1000, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print("***Generation:\n", outputs[0]["generated_text"][len(prompt):]) ``` <details> <summary>Output</summary> ``` """ """ ``` </details>
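For readers who want to see what an ORPO run like the one described above looks like in code, here is a minimal, hypothetical sketch using TRL's `ORPOTrainer`. It is not the author's training script; the hyperparameters, sequence lengths, and data handling are illustrative assumptions.

```python
# Hypothetical ORPO training sketch with TRL (not the author's actual script).
# In practice a 70B model needs multi-GPU and/or parameter-efficient setups;
# shown plainly here for brevity.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "meta-llama/Meta-Llama-3-70B"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# 2k preference samples, as described in the card; columns are
# prompt/chosen/rejected and may need chat-template formatting first,
# depending on the TRL version.
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train").select(range(2000))

args = ORPOConfig(
    output_dir="llama-3-70b-orpo",
    beta=0.1,                      # weight of the odds-ratio term (assumption)
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    num_train_epochs=1,
    max_length=1024,
    max_prompt_length=512,
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,  # newer TRL versions use `processing_class=` instead
)
trainer.train()
```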
uygarkurt/llama3s-merged-linear
uygarkurt
2024-05-24T10:34:19Z
702
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-08T16:23:15Z
---
library_name: transformers
license: mit
---

# LLM Model Merging

## YouTube Tutorial

<div align="center">
    <a href="https://youtu.be/gNXBp3wttFU">Model Merging: Merge LLMs to Create Frankestein Models - Python, HuggingFace, Mergekit</a>
    <br>
    <br>
    <a href="https://youtu.be/gNXBp3wttFU">
        <img src="./thumbnail1-button.png" height="85%" width="85%"/>
    </a>
</div>

## GitHub

You can find the GitHub repository here: https://github.com/uygarkurt/Model-Merge

In this specific case, I typed `llama-3` into the Open LLM Leaderboard, took the best 3 models, merged them, and created a better-ranking model without any training. As the main library we will be using [mergekit](https://github.com/arcee-ai/mergekit). A minimal merge sketch is shown below.

<br/>

<div align="center">
    <a href="">
        <img alt="open-source-image" src="https://img.shields.io/badge/%E2%9D%A4%EF%B8%8F_Open_Source-%2350C878?style=for-the-badge"/>
    </a>
    <a href="https://youtu.be/gNXBp3wttFU">
        <img alt="youtube-tutorial" src="https://img.shields.io/badge/YouTube_Tutorial-grey?style=for-the-badge&logo=YouTube&logoColor=%23FF0000"/>
    </a>
    <a href="https://github.com/uygarkurt/Model-Merge">
        <img alt="github-image" src="https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white"/>
    </a>
</div>
<br/>
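Below is a minimal, hypothetical sketch of a linear merge with mergekit, assuming the `mergekit-yaml` CLI is installed (`pip install mergekit`). The model names in the config are placeholders, not the exact models merged for this repository.

```python
# Hypothetical linear-merge sketch (not the exact recipe used for this repo):
# writes a simple config and invokes the mergekit-yaml CLI.
import pathlib
import subprocess
import textwrap

config = textwrap.dedent("""\
    models:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        parameters:
          weight: 1.0
      - model: some-org/llama-3-8b-finetune-a
        parameters:
          weight: 1.0
      - model: some-org/llama-3-8b-finetune-b
        parameters:
          weight: 1.0
    merge_method: linear
    dtype: float16
    """)
pathlib.Path("merge-config.yaml").write_text(config)

# Run the merge and write the result to ./merged-model
subprocess.run(["mergekit-yaml", "merge-config.yaml", "./merged-model"], check=True)
```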
neody/phi-3-medium-128k-iq-gguf
neody
2024-05-21T23:13:00Z
702
1
null
[ "gguf", "region:us" ]
null
2024-05-21T16:32:28Z
Entry not found
pankajmathur/orca_mini_v5_8b
pankajmathur
2024-05-30T23:04:54Z
702
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text2text-generation", "en", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
2024-05-26T00:02:09Z
--- license: llama3 language: - en library_name: transformers pipeline_tag: text2text-generation --- **Model Name: llama_3_orca_mini_v5_8b** # Llama-3-8b base model trained on Orca Style Mini Datasets <img src="https://huggingface.co/pankajmathur/orca_mini_v5_8b/resolve/main/orca_minis_small.jpeg" width="auto" /> ## NOTICE By providing proper credit and attribution, you are granted permission to use this model as a foundational base for further DPO/PPO tuning or Merges. I actively encourage users to customize and enhance the model according to their specific needs, as this version is designed to be a comprehensive, fully fine-tuned general model. Dive in and innovate! ## Evaluation We evaluated this model on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI. Here are the results on similar metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) | Metric |Value| |---------------------------------|----:| |Avg. |67.28| |AI2 Reasoning Challenge (25-Shot)|60.92| |HellaSwag (10-Shot) |81.78| |MMLU (5-Shot) |64.97| |TruthfulQA (0-shot) |55.04| |Winogrande (5-shot) |73.40| |GSM8k (5-shot) |67.55| <br> ## Example Usage Here is the ChatML prompt format ``` <|im_start|>system You are Orca Mini, a helpful AI assistant.<|im_end|> <|im_start|>user Hello Orca Mini, what can you do for me?<|im_end|> <|im_start|>assistant ``` Below shows a code example on how to use this model ```python from transformers import AutoModel, AutoTokenizer model_slug = "pankajmathur/orca_mini_v5_8b" model = AutoModel.from_pretrained(model_slug) tokenizer = AutoTokenizer.from_pretrained(model_slug) messages = [ {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."}, {"role": "user", "content": "Hello Orca Mini, what can you do for me?"} ] gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt") model.generate(**gen_input) ``` This model is governed by [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE) **Quants** GGUF : Coming Soon AWQ: Coming Soon
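The usage example above instantiates the checkpoint with `AutoModel`; text generation needs a causal-LM head, so a sketch along the following lines (an illustration, not taken from the original card) may be more practical:

```python
# Alternative usage sketch: use a causal-LM class and pass the templated
# prompt through generate(). Generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_slug = "pankajmathur/orca_mini_v5_8b"
tokenizer = AutoTokenizer.from_pretrained(model_slug)
model = AutoModelForCausalLM.from_pretrained(model_slug, device_map="auto")

messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```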
Huffon/sentence-klue-roberta-base
Huffon
2021-06-20T17:32:17Z
701
8
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "ko", "dataset:klue", "arxiv:1908.10084", "region:us" ]
null
2022-03-02T23:29:04Z
--- language: ko tags: - roberta - sentence-transformers datasets: - klue --- # KLUE RoBERTa base model for Sentence Embeddings This is the `sentence-klue-roberta-base` model. The sentence-transformers repository allows to train and use Transformer models for generating sentence and text embeddings. The model is described in the paper [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084) ## Usage (Sentence-Transformers) Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python import torch from sentence_transformers import SentenceTransformer, util model = SentenceTransformer("Huffon/sentence-klue-roberta-base") docs = [ "1992년 7월 8일 손흥민은 강원도 춘천시 후평동에서 아버지 손웅정과 어머니 길은자의 차남으로 태어나 그곳에서 자랐다.", "형은 손흥윤이다.", "춘천 부안초등학교를 졸업했고, 춘천 후평중학교에 입학한 후 2학년때 원주 육민관중학교 축구부에 들어가기 위해 전학하여 졸업하였으며, 2008년 당시 FC 서울의 U-18팀이었던 동북고등학교 축구부에서 선수 활동 중 대한축구협회 우수선수 해외유학 프로젝트에 선발되어 2008년 8월 독일 분데스리가의 함부르크 유소년팀에 입단하였다.", "함부르크 유스팀 주전 공격수로 2008년 6월 네덜란드에서 열린 4개국 경기에서 4게임에 출전, 3골을 터뜨렸다.", "1년간의 유학 후 2009년 8월 한국으로 돌아온 후 10월에 개막한 FIFA U-17 월드컵에 출전하여 3골을 터트리며 한국을 8강으로 이끌었다.", "그해 11월 함부르크의 정식 유소년팀 선수 계약을 체결하였으며 독일 U-19 리그 4경기 2골을 넣고 2군 리그에 출전을 시작했다.", "독일 U-19 리그에서 손흥민은 11경기 6골, 2부 리그에서는 6경기 1골을 넣으며 재능을 인정받아 2010년 6월 17세의 나이로 함부르크의 1군 팀 훈련에 참가, 프리시즌 활약으로 함부르크와 정식 계약을 한 후 10월 18세에 함부르크 1군 소속으로 독일 분데스리가에 데뷔하였다.", ] document_embeddings = model.encode(docs) query = "손흥민은 어린 나이에 유럽에 진출하였다." query_embedding = model.encode(query) top_k = min(5, len(docs)) cos_scores = util.pytorch_cos_sim(query_embedding, document_embeddings)[0] top_results = torch.topk(cos_scores, k=top_k) print(f"입력 문장: {query}") print(f"<입력 문장과 유사한 {top_k} 개의 문장>") for i, (score, idx) in enumerate(zip(top_results[0], top_results[1])): print(f"{i+1}: {docs[idx]} {'(유사도: {:.4f})'.format(score)}") ```
castorini/duot5-base-msmarco
castorini
2021-12-07T12:53:29Z
701
0
transformers
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "arxiv:2101.05667", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
This model is a T5-base pairwise reranker fine-tuned on the MS MARCO passage dataset for 50k steps (or 5 epochs). For more details on how to use it, check [pygaggle.ai](http://pygaggle.ai). Paper describing the model: [The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models](https://arxiv.org/pdf/2101.05667.pdf)
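As a usage illustration, here is a hedged sketch with PyGaggle's `DuoT5` wrapper; the class defaults and metadata handling are assumptions based on the PyGaggle documentation, and the query and passages are made up, so consult pygaggle.ai for the authoritative API.

```python
# Hedged sketch (assumes pygaggle's DuoT5 wrapper): rerank a handful of
# candidate passages for one query. Passages and query are illustrative.
from pygaggle.rerank.base import Query, Text
from pygaggle.rerank.transformer import DuoT5

reranker = DuoT5()  # assumed to default to a duoT5 MS MARCO checkpoint

query = Query("who proposed the geocentric theory")
passages = [
    ("d1", "The geocentric model was described by Ptolemy in the Almagest."),
    ("d2", "Copernicus proposed a heliocentric model of the solar system."),
]
texts = [Text(text, {"docid": docid}, 0) for docid, text in passages]

for result in reranker.rerank(query, texts):
    print(result.metadata["docid"], result.score)
```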
Intel/ColBERT-NQ
Intel
2024-03-29T14:22:35Z
701
5
transformers
[ "transformers", "pytorch", "bert", "colbert", "natural questions", "checkpoint", "text retrieval", "en", "dataset:natural_questions", "license:cc-by-sa-3.0", "endpoints_compatible", "region:us" ]
null
2023-02-07T12:55:52Z
--- license: cc-by-sa-3.0 datasets: - natural_questions language: - en tags: - colbert - natural questions - checkpoint - text retrieval metrics: - type: NQ 10 Recall - value: 71.1 - type: NQ 20 Recall - value: 76.3 - type: NQ 50 Recall - value: 80.4 - type: NQ 100 Recall - value: 82.7 - type: NQ 10 MRR - value: 52.1 - type: NQ 20 MRR - value: 52.3 - type: NQ 50 MRR - value: 52.5 - type: NQ 100 MRR - value: 52.5 --- # ColBERT NQ Checkpoint The ColBERT NQ Checkpoint is a trained model based on the ColBERT architecture, which itself leverages a BERT encoder for its operations. This model has been specifically trained on the Natural Questions (NQ) dataset, focusing on text retrieval tasks. | Model Detail | Description | | ----------- | ----------- | | Model Authors | ? | | Date | Feb 7, 2023 | | Version | Checkpoint | | Type | Text retrieval | | Paper or Other Resources | Base Mode: [ColBERT](https://github.com/stanford-futuredata/ColBERT) Dataset: [Natural Questions](https://huggingface.co/datasets/natural_questions) | | License | Other | | Questions or Comments | [Community Tab](https://huggingface.co/Intel/ColBERT-NQ/discussions) and [Intel DevHub Discord](https://discord.gg/rv2Gp55UJQ)| | Intended Use | Description | | ----------- | ----------- | | Primary intended uses | This model is designed for text retrieval tasks, allowing users to submit queries and receive relevant passages from a corpus, in this case, Wikipedia. It can be integrated into applications requiring efficient and accurate retrieval of information based on user queries. | | Primary intended users | Researchers, developers, and organizations looking for a powerful text retrieval solution that can be integrated into their systems or workflows, especially those requiring retrieval from large, diverse corpora like Wikipedia. | | Out-of-scope uses | The model is not intended for tasks beyond text retrieval, such as text generation, sentiment analysis, or other forms of natural language processing not related to retrieving relevant text passages. | # Evaluation The ColBERT NQ Checkpoint model has been evaluated on the NQ dev dataset with the following results, showcasing its effectiveness in retrieving relevant passages across varying numbers of retrieved documents: <table> <colgroup> <col class="org-right"> <col class="org-right"> <col class="org-right"> </colgroup> <thead> <tr> <th scope="col" class="org-right">NQ</th> <th scope="col" class="org-right">Recall</th> <th scope="col" class="org-right">MRR</th> </tr> </thead> <tbody> <tr> <td class="org-right">10</td> <td class="org-right">71.1</td> <td class="org-right">52.0</td> </tr> <tr> <td class="org-right">20</td> <td class="org-right">76.3</td> <td class="org-right">52.3</td> </tr> <tr> <td class="org-right">50</td> <td class="org-right">80.4</td> <td class="org-right">52.5</td> </tr> <tr> <td class="org-right">100</td> <td class="org-right">82.7</td> <td class="org-right">52.5</td> </tr> </tbody> </table> These metrics demonstrate the model's ability to accurately retrieve relevant information from a corpus, with both recall and mean reciprocal rank (MRR) improving as more passages are considered. # Ethical Considerations While not specifically mentioned, ethical considerations for using the ColBERT NQ Checkpoint model should include awareness of potential biases present in the training corpus (Wikipedia), and the implications of those biases on retrieved results. Users should also consider the privacy and data use implications when deploying this model in applications. 
# Caveats and Recommendations

- Index Creation: Users need to build a vector index from their corpus using the ColBERT codebase before running queries. This process requires computational resources and expertise in setting up and managing search indices.
- Data Bias and Fairness: Given the Wikipedia-based training corpus, users should be mindful of potential biases and the representation of information within Wikipedia, adjusting their use case or implementation as necessary to address these concerns.
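For the index-creation point above, here is a condensed, hypothetical sketch of indexing and searching with the ColBERT codebase. The experiment name, collection file, and the use of this repository id as the `checkpoint` argument are assumptions; refer to the ColBERT repository for the authoritative workflow.

```python
# Condensed, hypothetical sketch of building an index and searching with the
# ColBERT codebase; paths and names are placeholders.
from colbert import Indexer, Searcher
from colbert.infra import Run, RunConfig, ColBERTConfig

collection = "collection.tsv"  # one passage per line: passage_id \t passage_text

with Run().context(RunConfig(nranks=1, experiment="nq")):
    config = ColBERTConfig(nbits=2)
    indexer = Indexer(checkpoint="Intel/ColBERT-NQ", config=config)  # checkpoint id is an assumption
    indexer.index(name="nq.index", collection=collection)

with Run().context(RunConfig(nranks=1, experiment="nq")):
    searcher = Searcher(index="nq.index")
    results = searcher.search("who wrote the declaration of independence", k=10)
    for passage_id, passage_rank, passage_score in zip(*results):
        print(passage_rank, passage_id, passage_score)
```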
priyabrat/New_AI_or_Humantext_categorisation
priyabrat
2023-03-01T04:49:06Z
701
1
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-03-01T04:35:19Z
Entry not found
lorahub/flan_t5_large-wiqa_what_is_the_missing_first_step
lorahub
2023-07-24T10:20:36Z
701
0
peft
[ "peft", "region:us" ]
null
2023-07-24T10:20:26Z
--- library_name: peft ---
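The card itself only carries PEFT metadata; as an illustration, the sketch below loads the adapter with the standard PEFT API. The FLAN-T5-large base is inferred from the adapter's `base_model_name_or_path` (and from the repository name), and the example prompt is made up.

```python
# Illustrative sketch: load this LoRA adapter with PEFT on its base model.
from peft import PeftConfig, PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

adapter_id = "lorahub/flan_t5_large-wiqa_what_is_the_missing_first_step"
peft_config = PeftConfig.from_pretrained(adapter_id)

base = AutoModelForSeq2SeqLM.from_pretrained(peft_config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("What is the missing first step in this process: ...", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```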
FlagAlpha/Atom-7B
FlagAlpha
2024-02-23T11:04:21Z
701
69
transformers
[ "transformers", "safetensors", "llama", "text-generation", "question-answering", "custom_code", "zh", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
question-answering
2023-08-28T10:36:41Z
---
developers: [https://huggingface.co/FlagAlphaAI]
license: apache-2.0
language:
- zh
- en
pipeline_tag: question-answering
library_name: transformers
---

# Atom-7B

Atom-7B is fully open source and available for commercial use. It was jointly developed by the Llama Chinese community and AtomEcho, and was continually pretrained from Llama2-7B on large-scale Chinese data. We will keep releasing updated model parameters; the training process can be followed at [llama.family](https://llama.family).

For details on deployment, training, and fine-tuning, see the Llama Chinese community GitHub repository: [**Llama-Chinese**](https://github.com/LlamaFamily/Llama-Chinese).

## 📝 Chinese Data

| Type | Description |
| ---------------------------------------------------------- | ------------------------------------------------------------ |
| Web data | Publicly available web data; high-quality Chinese data selected after deduplication, covering long-form text such as encyclopedias, books, blogs, news, announcements, and fiction. |
| [Wikipedia](https://github.com/goldsmith/Wikipedia) | Data from Chinese Wikipedia |
| [WuDao](https://github.com/BAAI-WuDao/Model) | 200 GB of open-source Chinese data from WuDao |
| [Clue](https://github.com/CLUEbenchmark/CLUEDatasetSearch) | Open Chinese pretraining data from Clue, cleaned into high-quality long-form Chinese text |
| Competition datasets | Around 150 datasets from recent Chinese NLP multi-task competitions |
| [MNBVC](https://github.com/esbatmop/MNBVC) | A cleaned subset of the MNBVC datasets |

**We also welcome everyone to contribute data at [llama.family](https://llama.family); once approved, your data will be added to model training and will shape the model's future capabilities.**

## 📚 Chinese Vocabulary

To improve the efficiency of Chinese text processing, we deeply optimized the Llama2 vocabulary. First, based on hundreds of gigabytes of Chinese text, **we expanded the vocabulary to 65,000 tokens on top of the original Llama2 vocabulary**. In testing, this improvement **increased Chinese encoding/decoding speed by about 350%**. We also broadened coverage of the Chinese character set, including all **emoji symbols**, which makes generating text with emoji more efficient. For special cases in the original Llama2 vocabulary, such as numbers and English, we avoided modifying or replacing them wherever possible. The result is an approach that improves Chinese processing efficiency while preserving Llama2's original performance.

## 📈 Training Process

**Model architecture**

Built on Llama2, one of the best open-source models available, using the mainstream decoder-only standard Transformer architecture with a 4K context length, the longest among models of this size. This supports longer multi-turn dialogue, knowledge QA, summarization, and similar tasks, broadening the model's range of applications.

**Efficient training with FlashAttention-2**

Atom-7B was trained with FlashAttention-2. When processing long input sequences, memory consumption can blow up; FlashAttention-2 is an efficient implementation of the attention mechanism that is faster and uses memory more efficiently than conventional attention.

**NTK-based adaptive context extension**

- Supports longer contexts without further training
- The model supports a 4K context by default and can be extended to 18K+ with this technique
- With fine-tuning it can support 32K+

## 💻 Inference Configuration

In practice, consumer GPUs are much cheaper than professional GPUs (for example, a 3090 versus an A10, both with 24 GB of VRAM). On consumer GPUs, full FP32 will not fit; FP16 is generally the baseline, and INT8 or INT4 quantization is very useful. For example:

- On a 3080 (10 GB VRAM), Atom-7B in INT8 needs only 8 GB of VRAM and can be deployed directly.
- On a 3080 (10 GB VRAM), Atom-7B in INT4 needs only 5 GB of VRAM and can be deployed directly.

---

# Llama Chinese Community

## 🚀 Community Links

GitHub: [**Llama-Chinese**](https://github.com/LlamaFamily/Llama-Chinese)

Online demo: [**llama.family**](https://llama.family/)

## 🔥 About the Community

Welcome to the Llama Chinese community! We are an advanced technical community focused on optimizing Llama models for Chinese and building applications on top of them. **Starting from pretraining on large-scale Chinese data, we continuously iterate on and upgrade the Chinese capabilities of the Llama2 model.** We warmly welcome developers and researchers who are passionate about LLMs to join us.

## 🐼 Community Resources

- Llama2 online demo at [**llama.family**](https://llama.family/), including both the original Meta models and the Chinese fine-tuned versions!
- [Chinese QA capability evaluation](https://github.com/LlamaFamily/Llama-Chinese/tree/main#-%E6%A8%A1%E5%9E%8B%E8%AF%84%E6%B5%8B) of the Llama2 Chat models!
- [Community Feishu knowledge base](https://chinesellama.feishu.cn/wiki/space/7257824476874768388?ccm_open_type=lark_wiki_spaceLink), everyone is welcome to help build it!
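As an illustration of the INT8 deployment note above, here is a hedged loading sketch with `transformers` and bitsandbytes. The `trust_remote_code` flag is an assumption based on the repository's custom-code tag, and the prompt and sampling settings are illustrative.

```python
# Hedged 8-bit loading sketch; quantization and generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "FlagAlpha/Atom-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # assumption based on the repo's custom-code tag
    device_map="auto",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # roughly 8 GB VRAM per the note above
)

inputs = tokenizer("请介绍一下中文大语言模型。", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```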
SurgeGlobal/OpenBezoar-SFT
SurgeGlobal
2024-04-26T15:46:21Z
701
3
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "en", "dataset:SurgeGlobal/LaMini", "dataset:SurgeGlobal/Orca", "dataset:SurgeGlobal/Evol-Instruct", "arxiv:2404.12195", "arxiv:2306.02707", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-09-22T07:48:21Z
--- license: cc-by-nc-4.0 datasets: - SurgeGlobal/LaMini - SurgeGlobal/Orca - SurgeGlobal/Evol-Instruct language: - en --- # OpenBezoar-SFT The OpenBezoar-SFT is an instruction-tuned version of [Open LlaMA 3B v2](https://huggingface.co/openlm-research/open_llama_3b_v2) with Q-LoRA on three of our custom datasets synthetically generated from [h2ogpt-gm-oasst1-en-2048-falcon-40b-v2](https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-2048-falcon-40b-v2). ## Model Details - Base model: [Open LlaMA 3B v2](https://huggingface.co/openlm-research/open_llama_3b_v2) - LoRA configuration: - r: 16 - alpha: 16 - dropout: 0.05 - target modules: [q_proj, v_proj, k_proj] - Datasets used for instruction tuning: - [LaMini](https://huggingface.co/datasets/SurgeGlobal/LaMini) - [Orca](https://huggingface.co/datasets/SurgeGlobal/Orca) - [Evol-Instruct](https://huggingface.co/datasets/SurgeGlobal/Evol-Instruct) ### Model Description OpenBezoar-SFT is built upon the Open Llama 3B v2 architecture and has been fine-tuned to improve its instruction-following abilities. ### Model Sources - **Repository:** [Bitbucket Project](https://bitbucket.org/paladinanalytics/workspace/projects/OP) - **Paper :** [Pre-Print](https://arxiv.org/abs/2404.12195) ## Instruction Format We follow a modified version of the Alpaca prompt template as shown below. It is important to utilize this template in order to obtain best responses for instruction related tasks. ``` ### System: Below is an instruction that describes a task, optionally paired with an input that provides further context following that instruction. Write a response that appropriately completes the request. ### Instruction: {instruction} ### Response: ``` Notice that **no** end-of-sentence (eos) token is being appended. *Note: The system prompt shown in the following figure is the one that the model has been trained on most of the time. However, you may attempt to use any other system prompt that is available in the [Orca](https://arxiv.org/abs/2306.02707) scheme.* ## Usage ```python from peft import PeftConfig, PeftModel from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig, AutoModelForSeq2SeqLM checkpoint = "SurgeGlobal/OpenBezoar-SFT" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained( checkpoint, load_in_4bit=True, # optionally for low resource environments device_map="auto" ) prompt = """### System: Below is an instruction that describes a task, optionally paired with an input that provides further context following that instruction. Write a response that appropriately completes the request. ### Instruction: {instruction} ### Response:""".format( instruction="What is the world state in the year 1597." ) inputs = tokenizer(prompt, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=True) print(tokenizer.decode(outputs[0])) ``` ## Evaluations Refer to our self-reported evaluations in our paper (Section 4). ## Limitations - The model might not consistently show improved abilities to follow instructions, and it could respond inappropriately or get stuck in loops. - This model is not aligned to human preferences and therefore it may generate harmful and uncensored content. - Caution is urged against relying on this model for production or adjacent use-cases. 
## Citation If you find our work useful, please cite our paper as follows: ``` @misc{surge2024openbezoar, title={OpenBezoar: Small, Cost-Effective and Open Models Trained on Mixes of Instruction Data}, author={Chandeepa Dissanayake and Lahiru Lowe and Sachith Gunasekara and Yasiru Ratnayake}, year={2024}, eprint={2404.12195}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Model Authors Chandeepa Dissanayake, Lahiru Lowe, Sachith Gunasekara, and Yasiru Ratnayake
swap-uniba/LLaMAntino-2-chat-7b-hf-UltraChat-ITA
swap-uniba
2024-01-08T11:04:45Z
701
7
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "text-generation-inference", "it", "arxiv:2312.09993", "license:llama2", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-12-16T18:22:47Z
--- license: llama2 language: - it tags: - text-generation-inference --- <img src="https://i.ibb.co/6mHSRm3/llamantino53.jpg" alt="llamantino53" border="0" width="200px"> # Model Card for LLaMAntino-2-chat-7b-UltraChat-ITA *Last Update: 08/01/2024*<br>*Example of Use*: [Colab Notebook](https://colab.research.google.com/drive/1lCQ7MqSNKILsIncNYhdN_yqzSvl4akat?usp=sharing) <hr> ## Model description <!-- Provide a quick summary of what the model is/does. --> **LLaMAntino-2-chat-7b-UltraChat** is a *Large Language Model (LLM)* that is an instruction-tuned version of **LLaMAntino-2-chat-7b** (an italian-adapted **LLaMA 2 chat**). This model aims to provide Italian NLP researchers with an improved model for italian dialogue use cases. The model was trained using *QLora* and using as training data [UltraChat](https://github.com/thunlp/ultrachat) translated to the italian language using [Argos Translate](https://pypi.org/project/argostranslate/1.4.0/). If you are interested in more details regarding the training procedure, you can find the code we used at the following link: - **Repository:** https://github.com/swapUniba/LLaMAntino **NOTICE**: the code has not been released yet, we apologize for the delay, it will be available asap! - **Developed by:** Pierpaolo Basile, Elio Musacchio, Marco Polignano, Lucia Siciliani, Giuseppe Fiameni, Giovanni Semeraro - **Funded by:** PNRR project FAIR - Future AI Research - **Compute infrastructure:** [Leonardo](https://www.hpc.cineca.it/systems/hardware/leonardo/) supercomputer - **Model type:** LLaMA-2-chat - **Language(s) (NLP):** Italian - **License:** Llama 2 Community License - **Finetuned from model:** [swap-uniba/LLaMAntino-2-chat-7b-hf-ITA](https://huggingface.co/swap-uniba/LLaMAntino-2-chat-7b-hf-ITA) ## Prompt Format This prompt format based on the [LLaMA 2 prompt template](https://gpus.llm-utils.org/llama-2-prompt-template/) adapted to the italian language was used: ```python " [INST]<<SYS>>\n" \ "Sei un assistente disponibile, rispettoso e onesto. " \ "Rispondi sempre nel modo piu' utile possibile, pur essendo sicuro. " \ "Le risposte non devono includere contenuti dannosi, non etici, razzisti, sessisti, tossici, pericolosi o illegali. " \ "Assicurati che le tue risposte siano socialmente imparziali e positive. " \ "Se una domanda non ha senso o non e' coerente con i fatti, spiegane il motivo invece di rispondere in modo non corretto. " \ "Se non conosci la risposta a una domanda, non condividere informazioni false.\n" \ "<</SYS>>\n\n" \ f"{user_msg_1}[/INST] {model_answer_1} </s> <s> [INST]{user_msg_2}[/INST] {model_answer_2} </s> ... <s> [INST]{user_msg_N}[/INST] {model_answer_N} </s>" ``` We recommend using the same prompt in inference to obtain the best results! ## How to Get Started with the Model Below you can find an example of model usage: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "swap-uniba/LLaMAntino-2-chat-7b-hf-UltraChat-ITA" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) user_msg = "Ciao! Come stai?" prompt = " [INST]<<SYS>>\n" \ "Sei un assistente disponibile, rispettoso e onesto. " \ "Rispondi sempre nel modo piu' utile possibile, pur essendo sicuro. " \ "Le risposte non devono includere contenuti dannosi, non etici, razzisti, sessisti, tossici, pericolosi o illegali. " \ "Assicurati che le tue risposte siano socialmente imparziali e positive. 
" \ "Se una domanda non ha senso o non e' coerente con i fatti, spiegane il motivo invece di rispondere in modo non corretto. " \ "Se non conosci la risposta a una domanda, non condividere informazioni false.\n" \ "<</SYS>>\n\n" \ f"{user_msg}[/INST]" pipe = transformers.pipeline( model=model, tokenizer=tokenizer, return_full_text=False, # langchain expects the full text task='text-generation', max_new_tokens=512, # max number of tokens to generate in the output temperature=0.8 #temperature for more or less creative answers ) # Method 1 sequences = pipe(text) for seq in sequences: print(f"{seq['generated_text']}") # Method 2 input_ids = tokenizer(prompt, return_tensors="pt").input_ids outputs = model.generate(input_ids=input_ids, max_length=512) print(tokenizer.batch_decode(outputs.detach().cpu().numpy()[:, input_ids.shape[1]:], skip_special_tokens=True)[0]) ``` If you are facing issues when loading the model, you can try to load it **Quantized**: ```python model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True) ``` *Note*: 1) The model loading strategy above requires the [*bitsandbytes*](https://pypi.org/project/bitsandbytes/) and [*accelerate*](https://pypi.org/project/accelerate/) libraries 2) The Tokenizer, by default, adds at the beginning of the prompt the '\<BOS\>' token. If that is not the case, add as a starting token the *\<s\>* string. ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> *Coming soon*! ## Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> If you use this model in your research, please cite the following: ```bibtex @misc{basile2023llamantino, title={LLaMAntino: LLaMA 2 Models for Effective Text Generation in Italian Language}, author={Pierpaolo Basile and Elio Musacchio and Marco Polignano and Lucia Siciliani and Giuseppe Fiameni and Giovanni Semeraro}, year={2023}, eprint={2312.09993}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` *Notice:* Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved. [*License*](https://ai.meta.com/llama/license/)
fireworks-ai/FireLLaVA-13b
fireworks-ai
2024-01-19T00:29:44Z
701
50
transformers
[ "transformers", "safetensors", "llava", "pretraining", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-01-05T18:19:22Z
--- license: llama2 --- # Model Details Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept the Llama 2 Community License Agreement before requesting access here. ## Model type: LLaVA vision-language model trained on OSS LLM generated instruction following data. ## Model state: FireLLaVA 13B was trained in December 2023 ## Paper or resources for more information: https://llava-vl.github.io/ # How to use the model The model is served on Fireworks.ai, and you can try it out here: https://app.fireworks.ai/models/fireworks/firellava-13b API endpoints are also available with instructions linked here: https://readme.fireworks.ai/docs/querying-vision-language-models Otherwise, if you wish to run the model locally using huggingface transformers library, you can do so, please read the instructions below. First, make sure to have transformers >= 4.35.3. The model supports multi-image and multi-prompt generation. Meaning that you can pass multiple images in your prompt. Make sure also to follow the correct prompt template (USER: xxx\nASSISTANT:) and add the token \<image\> to the location where you want to query images. However, do note that model performance with multiple images in the input may degrade since it is not trained with multiple images in the input. ## Using `pipeline` ```python from transformers import pipeline from PIL import Image import requests model_id = "fireworks-ai/FireLLaVA-13b" pipe = pipeline("image-to-text", model=model_id) url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg" image = Image.open(requests.get(url, stream=True).raw) prompt = "USER: <image>\nWhat is the make of the car? Answer with one word or phrase.\n\nASSISTANT:" outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200}) print(outputs) >>> [{'generated_text': 'USER: \nWhat is the make of the car? Answer with one word or phrase.\n\nASSISTANT: Volkswagen'}] ``` ## Using pure `transformers` ```python import requests from PIL import Image import torch from transformers import AutoProcessor, LlavaForConditionalGeneration model_id = "fireworks-ai/FireLLaVA-13b" prompt = "USER: <image>\nWhat is this?\n\nASSISTANT:" url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg" model = LlavaForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.float16, ).to(0) processor = AutoProcessor.from_pretrained(model_id) raw_image = Image.open(requests.get(url, stream=True).raw) inputs = processor(prompt, raw_image, return_tensors='pt').to(0, torch.float16) output = model.generate(**inputs, max_new_tokens=200, do_sample=False) print(processor.decode(output[0], skip_special_tokens=True)) >>> "This is an early Volkswagen Beetle car, also known as a VW bug, parked on a brick street and next to a building with doors ..." ```
Lambent/danube2-upscale-1.7
Lambent
2024-04-21T21:22:09Z
701
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "dataset:HuggingFaceTB/cosmopedia-100k", "dataset:Vezora/Tested-22k-Python-Alpaca", "dataset:sordonia/redpajama-sample_from_valid_all", "dataset:nampdn-ai/tiny-bridgedict", "dataset:teknium/GPTeacher-General-Instruct", "dataset:Severian/Internal-Knowledge-Map", "dataset:Severian/Internal-Knowledge-Map-StoryWriter-RolePlaying", "arxiv:2203.05482", "base_model:Lambent/danube2-upscale-1.53lisa", "base_model:Lambent/danube2-upscale-1.51galore", "base_model:Lambent/danube2-upscale-1.531qlora", "base_model:Lambent/danube2-upscale-1.51qlora", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-21T17:35:46Z
--- base_model: - Lambent/danube2-upscale-1.53lisa - Lambent/danube2-upscale-1.51galore - Lambent/danube2-upscale-1.531qlora - Lambent/danube2-upscale-1.51qlora library_name: transformers tags: - mergekit - merge datasets: - HuggingFaceTB/cosmopedia-100k - Vezora/Tested-22k-Python-Alpaca - sordonia/redpajama-sample_from_valid_all - nampdn-ai/tiny-bridgedict - teknium/GPTeacher-General-Instruct - Severian/Internal-Knowledge-Map - Severian/Internal-Knowledge-Map-StoryWriter-RolePlaying license: apache-2.0 --- # eq90parsedanube This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). First one that's shown promising capability improvement over the base model `h2o-danube2-1.8b-base`. Training methodology ... is a bit of a mess, trying out different things. I'm adding the datasets used at any point, but I don't think replicating the recipe is doable or sensible. Original upscale at Lambent/danube2-upscale-1, duplicating layers 16-21. Various training methods attempted to repair. Linear merge is of the 4 that were at least 90% parseable by the EQ-Bench benchmark. | Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average| |-------------------------------------------------------------------------|------:|------:|---------:|-------:|------:| |[danube2-upscale-1.7](https://huggingface.co/Lambent/danube2-upscale-1.7)| 27.97| 62.16| 42.2| 32.2| 41.13| | Model |EQ-Bench|Average| |-------------------------------------------------------------------------|-------:|------:| |[danube2-upscale-1.7](https://huggingface.co/Lambent/danube2-upscale-1.7)| 15.52| 15.52| ### EQ-Bench | Task |Version| Metric | Value | |Stderr| |--------|------:|-----------------------------|--------|---|------| |eq_bench| 2.1|eqbench,none | 15.52| | | | | |eqbench_stderr,none | 2.77| | | | | |percent_parseable,none | 100| | | | | |percent_parseable_stderr,none| 0| | | | | |alias |eq_bench| | | Average: 15.52% Average score: 15.52% ## Merge Details ### Merge Method This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method. ### Models Merged The following models were included in the merge: * [Lambent/danube2-upscale-1.53lisa](https://huggingface.co/Lambent/danube2-upscale-1.53lisa) * [Lambent/danube2-upscale-1.51galore](https://huggingface.co/Lambent/danube2-upscale-1.51galore) * [Lambent/danube2-upscale-1.531qlora](https://huggingface.co/Lambent/danube2-upscale-1.531qlora) * [Lambent/danube2-upscale-1.51qlora](https://huggingface.co/Lambent/danube2-upscale-1.51qlora) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: Lambent/danube2-upscale-1.531qlora parameters: weight: 1.0 - model: Lambent/danube2-upscale-1.53lisa parameters: weight: 1.0 - model: Lambent/danube2-upscale-1.51galore parameters: weight: 1.0 - model: Lambent/danube2-upscale-1.51qlora parameters: weight: 1.0 merge_method: linear dtype: float16 ```
flammenai/Mahou-1.3-llama3-8B
flammenai
2024-05-29T01:25:08Z
701
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:flammenai/MahouMix-v1", "base_model:nbeerbower/llama-3-Daredevil-Mahou-8B", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-28T22:20:30Z
--- library_name: transformers license: llama3 base_model: - nbeerbower/llama-3-Daredevil-Mahou-8B datasets: - flammenai/MahouMix-v1 --- ![image/png](https://huggingface.co/flammenai/Mahou-1.0-mistral-7B/resolve/main/mahou1.png) # Mahou-1.3-llama3-8B Mahou is our attempt to build a production-ready conversational/roleplay LLM. Future versions will be released iteratively and finetuned from flammen.ai conversational data. ### License This model is based on Meta Llama-3-8B and is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE). ### Chat Format This model has been trained to use ChatML format. Note the additional tokens in [tokenizer_config.json](tokenizer_config.json). ``` <|im_start|>system {{system}}<|im_end|> <|im_start|>{{char}} {{message}}<|im_end|> <|im_start|>{{user}} {{message}}<|im_end|> ``` ### Roleplay Format - Speech without quotes. - Actions in `*asterisks*` ``` *leans against wall cooly* so like, i just casted a super strong spell at magician academy today, not gonna lie, felt badass. ``` ### ST Settings 1. Use ChatML for the Context Template. 2. Enable Instruct Mode. 3. Use the [Mahou preset](https://huggingface.co/datasets/flammenai/Mahou-ST-ChatML-Instruct/raw/main/Mahou.json). 4. Recommended: Add newline as a stopping string: `["\n"]` ### Method Finetuned for 10 epochs using an A100 on Google Colab. [Fine-tune Llama 3 with ORPO](https://huggingface.co/blog/mlabonne/orpo-llama-3) - [Maxime Labonne](https://huggingface.co/mlabonne)
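Although the card ships no code, a minimal chat sketch (an illustration, not an official example) looks like the following, assuming the tokenizer's chat template produces the ChatML format shown above; the system prompt and sampling settings are placeholders.

```python
# Minimal chat sketch (not from the card); generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "flammenai/Mahou-1.3-llama3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are a friendly roleplay partner."},
    {"role": "user", "content": "*waves* hey, how was your day?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```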
lcw99/llama-3-10b-ko-240604-e2f
lcw99
2024-06-04T01:17:10Z
701
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-04T00:37:02Z
---
language:
- ko
license: apache-2.0
library_name: transformers
---

# Model Card for Model ID

## Model Details

### Model Description

Instruction tuning of meta-llama/Meta-Llama-3-8B-Instruct with an added Korean layer.

#### Chat template

`tokenizer.apply_chat_template(chat, tokenize=False)`
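Expanding the chat-template note above into a runnable illustration (not part of the original card; the example Korean prompt and generation settings are assumptions):

```python
# Illustrative usage sketch: apply the chat template and generate.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lcw99/llama-3-10b-ko-240604-e2f"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

chat = [{"role": "user", "content": "안녕하세요! 자기소개 부탁해요."}]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```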
mossaic-candle/adaptive-lm-molecules
mossaic-candle
2022-05-23T17:13:09Z
700
1
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-05-23T13:55:25Z
Entry not found
SurgeGlobal/OpenBezoar-HH-RLHF-DPO
SurgeGlobal
2024-04-27T18:19:17Z
700
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "text-generation-inference", "en", "dataset:Anthropic/hh-rlhf", "arxiv:2305.18290", "arxiv:2404.12195", "arxiv:2306.02707", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-09-19T05:48:41Z
--- license: cc-by-nc-4.0 datasets: - Anthropic/hh-rlhf language: - en pipeline_tag: text-generation tags: - text-generation-inference --- # OpenBezoar-HH-RLHF-DPO The OpenBezoar-HH-RLHF-DPO is an LLM that has been fine tuned for human preferences alignment using Direct Preference Optimization (DPO), on top of [OpenBezoar-HH-RLHF-SFT](https://huggingface.co/SurgeGlobal/OpenBezoar-HH-RLHF-SFT) model on a subset of [Anthropic's HH-RLHF Dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf). ## Model Details - Base Model: [OpenBezoar-HH-RLHF-SFT](https://huggingface.co/SurgeGlobal/OpenBezoar-HH-RLHF-SFT) - Dataset used for SFT: First 100K examples of the [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf) dataset - Alignment Method: [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290) - Epochs: 1 ### Model Description OpenBezoar-HH-RLHF-SFT is an LLM that is built upon the OpenLLaMA 3B v2 architecture. This model has been fine-tuned for human preferences alignment using DPO. Alignment has been performed on top of the [OpenBezoar-HH-RLHF-SFT](https://huggingface.co/SurgeGlobal/OpenBezoar-HH-RLHF-SFT) model. For more information please refer to our paper. ### Model Sources - **Repository:** [Bitbucket Project](https://bitbucket.org/paladinanalytics/workspace/projects/OP) - **Paper :** [Pre-Print](https://arxiv.org/abs/2404.12195) ## Instruction Format We follow a modified version of the Alpaca prompt template as shown below. It is important to utilize this template in order to obtain best responses for instruction related tasks. ``` ### System: Below is an instruction that describes a task, optionally paired with an input that provides further context following that instruction. Write a response that appropriately completes the request. ### Instruction: {instruction} ### Response: ``` Notice that **no** end-of-sentence (eos) token is being appended. *Note: The system prompt shown in the following figure is the one that the model has been trained on most of the time. However, you may attempt to use any other system prompt that is available in the [Orca](https://arxiv.org/abs/2306.02707) scheme.* ## Usage ```python from peft import PeftConfig, PeftModel from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig, AutoModelForSeq2SeqLM checkpoint = "SurgeGlobal/OpenBezoar-HH-RLHF-DPO" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained( checkpoint, load_in_4bit=True, # optionally for low resource environments device_map="auto" ) prompt = """### System: Below is an instruction that describes a task, optionally paired with an input that provides further context following that instruction. Write a response that appropriately completes the request. ### Instruction: {instruction} ### Response:""".format( instruction="What is the world state in the year 1597." ) inputs = tokenizer(prompt, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=True) print(tokenizer.decode(outputs[0])) ``` ## Evaluations Refer to our self-reported evaluations in our paper (Section 4). ## Limitations - The model might not consistently show improved abilities to follow instructions, and it could respond inappropriately or get stuck in loops. - Although this model is aligned to human preferences and has been evaluated for performance, it is not guaranteed that it will **refrain** from generating harmful content exclusively. 
- Caution is urged against relying on this model for production or adjacent use-cases. ## Citation If you find our work useful, please cite our paper as follows: ``` @misc{surge2024openbezoar, title={OpenBezoar: Small, Cost-Effective and Open Models Trained on Mixes of Instruction Data}, author={Chandeepa Dissanayake and Lahiru Lowe and Sachith Gunasekara and Yasiru Ratnayake}, year={2024}, eprint={2404.12195}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Model Authors Chandeepa Dissanayake, Lahiru Lowe, Sachith Gunasekara, and Yasiru Ratnayake
TheBloke/Code-33B-GGUF
TheBloke
2023-12-13T00:18:40Z
700
5
transformers
[ "transformers", "gguf", "llama", "code", "en", "dataset:ajibawa-2023/Code-74k-ShareGPT", "base_model:ajibawa-2023/Code-33B", "license:cc-by-nc-nd-4.0", "text-generation-inference", "region:us" ]
null
2023-12-12T18:28:59Z
--- base_model: ajibawa-2023/Code-33B datasets: - ajibawa-2023/Code-74k-ShareGPT inference: false language: - en license: cc-by-nc-nd-4.0 model_creator: Feynman Innovations model_name: Code 33B model_type: llama prompt_template: 'This is a conversation with your helpful AI assistant. AI assistant can generate Code in various Programming Languages along with necessary explanation. Context You are a helpful AI assistant. USER: {prompt} ASSISTANT: ' quantized_by: TheBloke tags: - code --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Code 33B - GGUF - Model creator: [Feynman Innovations](https://huggingface.co/ajibawa-2023) - Original model: [Code 33B](https://huggingface.co/ajibawa-2023/Code-33B) <!-- description start --> ## Description This repo contains GGUF format model files for [Feynman Innovations's Code 33B](https://huggingface.co/ajibawa-2023/Code-33B). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Code-33B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Code-33B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Code-33B-GGUF) * [Feynman Innovations's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ajibawa-2023/Code-33B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Ajibawa-Code ``` This is a conversation with your helpful AI assistant. AI assistant can generate Code in various Programming Languages along with necessary explanation. Context You are a helpful AI assistant. USER: {prompt} ASSISTANT: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [code-33b.Q2_K.gguf](https://huggingface.co/TheBloke/Code-33B-GGUF/blob/main/code-33b.Q2_K.gguf) | Q2_K | 2 | 13.50 GB| 16.00 GB | smallest, significant quality loss - not recommended for most purposes | | [code-33b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Code-33B-GGUF/blob/main/code-33b.Q3_K_S.gguf) | Q3_K_S | 3 | 14.06 GB| 16.56 GB | very small, high quality loss | | [code-33b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Code-33B-GGUF/blob/main/code-33b.Q3_K_M.gguf) | Q3_K_M | 3 | 15.76 GB| 18.26 GB | very small, high quality loss | | [code-33b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Code-33B-GGUF/blob/main/code-33b.Q3_K_L.gguf) | Q3_K_L | 3 | 17.28 GB| 19.78 GB | small, substantial quality loss | | [code-33b.Q4_0.gguf](https://huggingface.co/TheBloke/Code-33B-GGUF/blob/main/code-33b.Q4_0.gguf) | Q4_0 | 4 | 18.36 GB| 20.86 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [code-33b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Code-33B-GGUF/blob/main/code-33b.Q4_K_S.gguf) | Q4_K_S | 4 | 18.44 GB| 20.94 GB | small, greater quality loss | | [code-33b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Code-33B-GGUF/blob/main/code-33b.Q4_K_M.gguf) | Q4_K_M | 4 | 19.62 GB| 22.12 GB | medium, balanced quality - recommended | | [code-33b.Q5_0.gguf](https://huggingface.co/TheBloke/Code-33B-GGUF/blob/main/code-33b.Q5_0.gguf) | Q5_0 | 5 | 22.40 GB| 24.90 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [code-33b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Code-33B-GGUF/blob/main/code-33b.Q5_K_S.gguf) | Q5_K_S | 5 | 22.40 GB| 24.90 GB | large, low quality loss - recommended | | [code-33b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Code-33B-GGUF/blob/main/code-33b.Q5_K_M.gguf) | Q5_K_M | 5 | 23.05 GB| 25.55 GB | large, very low quality loss - recommended | | [code-33b.Q6_K.gguf](https://huggingface.co/TheBloke/Code-33B-GGUF/blob/main/code-33b.Q6_K.gguf) | Q6_K | 6 | 26.69 GB| 29.19 GB | very large, extremely low quality loss | | [code-33b.Q8_0.gguf](https://huggingface.co/TheBloke/Code-33B-GGUF/blob/main/code-33b.Q8_0.gguf) | Q8_0 | 8 | 34.57 GB| 37.07 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Code-33B-GGUF and below it, a specific filename to download, such as: code-33b.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Code-33B-GGUF code-33b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Code-33B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Code-33B-GGUF code-33b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m code-33b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "This is a conversation with your helpful AI assistant. AI assistant can generate Code in various Programming Languages along with necessary explanation.\n\nContext\nYou are a helpful AI assistant.\n\nUSER: {prompt}\nASSISTANT:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). 
#### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./code-33b.Q4_K_M.gguf", # Download the model file first n_ctx=2048, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "This is a conversation with your helpful AI assistant. AI assistant can generate Code in various Programming Languages along with necessary explanation.\n\nContext\nYou are a helpful AI assistant.\n\nUSER: {prompt}\nASSISTANT:", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./code-33b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. 
* Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Feynman Innovations's Code 33B **Code-33B** Large Language Models (LLMs) are good with code generations. Sometimes they do make mistakes in code generation. How about if they can give detailed explanation along with the code. This is what I have tried over here. The base Llama-1 model was used for training purpose. It is trained on around 74000 set of codes. Each set having 2 conversations. Along with Python, Java, JavaScript, GO, C++, Rust etc. code with detailed explanation is used for training purpose. It is built upon using my existing Dataset [Python-Code-23k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Python-Code-23k-ShareGPT). This conversation is in Vicuna/ShareGPT format. Each set, along with code, has detailed explanation. I have released the new data [Code-74k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-74k-ShareGPT) on which this Model is trained. **Training:** Entire dataset was trained on 4 x A100 80GB. For 3 epoch, training took 6 days & 5 hours. DeepSpeed codebase was used for training purpose. This was trained on Llama-1 by Meta. This is a full fine tuned model. Links for quantized models will be updated soon. **GPTQ GGUF & AWQ** GPTQ: TBA GGUF: [Link](https://huggingface.co/TheBloke/Code-33B-GGUF) AWQ: TBA **Example Prompt:** ``` This is a conversation with your helpful AI assistant. AI assistant can generate Code in various Programming Languages along with necessary explanation. Context You are a helpful AI assistant. USER: <prompt> ASSISTANT: ``` You can modify above Prompt as per your requirement. I have used ShareGPT/Vicuna format v1.1 . I want to say special Thanks to the Open Source community for helping & guiding me to better understand the AI/Model development. Thank you for your love & support. 
<!-- original-model-card end -->
capleaf/T-Llama
capleaf
2024-04-29T05:20:15Z
700
6
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "vi", "en", "license:llama2", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-14T11:01:05Z
--- language: - vi - en license: llama2 model-index: - name: T-Llama results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 54.18 name: normalized accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=1TuanPham/T-Llama name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 76.48 name: normalized accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=1TuanPham/T-Llama name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 47.98 name: accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=1TuanPham/T-Llama name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 46.47 source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=1TuanPham/T-Llama name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 71.27 name: accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=1TuanPham/T-Llama name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 29.64 name: accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=1TuanPham/T-Llama name: Open LLM Leaderboard --- <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/63905e87df447b438817b2cd/QFhLKQlWeyO9XumtyghVo.jpeg" alt="Image" style="width: 400px; height: auto; border-radius: 10px;" /> </p> ## Model Details - **Developed by:** Tuan Pham (FPTU HCM Student) - Contact me at: [email protected] or [email protected] - Looking for intern opportunity :D - **Model type:** Llama2-7B Decoder-only - **Finetuned from model :** * meta-llama/Llama-2-7b * bkai-foundation-models/vietnamese-llama2-7b-120GB * yeen214/llama2_7b_merge_orcafamily. - **Bilingual support :** English and Vietnamese ### Model Description <!-- Provide a longer summary of what this model is. --> This model is a proof of effort that one man can fine-tune his own model to reach SOTA. ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** * Training: https://github.com/vTuanpham/Vietnamese_QA_System * Data: https://github.com/vTuanpham/Large_dataset_translator - **Paper:** ... - **Demo:** ... - Here is a kaggle and colab script to quickly test the model: * https://www.kaggle.com/code/tuanphamm/t-llama-test * https://colab.research.google.com/drive/1Y-f0E6C_gN_Iy72UN3-Y_c5RdPgrPai-?usp=sharing ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. 
--> ### Prompt template ``` [SYSTEM_PROMPT] ####### Instruction: [INPUT] %%%%%%% Response: [RESPONSE] ``` Recommend keeping the system prompt in english. ## How to Get Started with the Model Use the code below to get started with the model. ```python import torch from torch.cuda.amp import autocast from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer, pipeline def prompt_format(system_prompt, instruction): prompt = f"""{system_prompt} ####### Instruction: {instruction} %%%%%%% Response: """ return prompt system_prompt = """ You're an AI Large Language Model developed(created) by an AI developer named Tuấn, the architecture of you is decoder-based LM, your task are to think loudly step by step before give a good and relevant response to the user request, answer in the language the user preferred. The AI has been trained to answer questions, provide recommendations, and help with decision making. The AI thinks outside the box and follows the user requests """ instruction = "Xin chào" formatted_prompt = prompt_format(system_prompt, instruction) print(formatted_prompt) model_name = "1TuanPham/T-Llama" model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, use_cache=True, device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True) streamer = TextStreamer(tokenizer, skip_special_tokens=True) pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, streamer=streamer) with autocast(): output_default = pipe(formatted_prompt, pad_token_id=50256, max_new_tokens=128) ``` Example outputs: ``` > Xin chào Xin chào! Tôi là một AI được phát triển bởi một AI nhà phát triển tên là Tuấn. Tôi được thiết kế để giúp đỡ người dùng bằng cách trả lời các câu hỏi, đưa ra đề xuất và hỗ trợ trong quá trình ra quyết định. Tôi có thể hỗ trợ bạn bằng cách nghĩ ra các câu trả lời hay và phù hợp cho các câu hỏi của bạn. > Việt Nam có cà phê gì nổi tiếng thế giới ? Cà phê robusta của Việt Nam nổi tiếng trên toàn cầu. Loại cà phê này được trồng chủ yếu ở các vùng đất cao như Lâm Đồng, Gia Lai, KonTum và có hương vị đậm đà, đắng hơn so với cà phê Arabica. Cà phê robusta thường được sử dụng để pha chế các loại cà phê espresso và phin truyền thống. > Tôi có 500$ hãy build cho tôi một bộ PC Để xây dựng một bộ PC với 500 USD, bạn có thể bắt đầu bằng cách chọn các thành phần chính như CPU, GPU, bộ nhớ và bộ lưu trữ. Sau đó, bạn có thể thêm các thành phần bổ sung như loa, bàn phím và chuột để hoàn thiện hệ thống. Dưới đây là một số gợi ý về cách bạn có thể cấu thành một bộ PC với 500 USD: 1. CPU: Intel Core i5-12400 hoặc AMD Ryzen 5 5600X là những lựa chọn tốt để bắt đầu. 2. GPU: Nvidia GeForce GTX 1650 hoặc AMD Radeon RX 570 là những lựa chọn tốt để bắt đầu. 3. Bộ nhớ: 8GB RAM là một khởi đầu tốt. 4. Bộ lưu trữ: 500 GB ổ cứng SSD là một khởi đầu tốt. 5. Bàn phím và chuột: Bàn phím và chuột có thể được mua riêng hoặc cùng một bộ. 6. Loa: Một loa có thể được mua riêng hoặc cùng một bộ với bàn phím và chuột. > Cho tôi 3 thức uống để giúp ngủ ngon hơn Chắc chắn! Dưới đây là ba thức uống có thể giúp bạn ngủ ngon hơn: 1. Mật ong và chanh: Trộn một thìa mật ong với một thìa nước cốt chanh và uống trước khi đi ngủ. Mật ong có tác dụng làm dịu, trong khi chanh có chứa chất chống oxy hóa có thể giúp cải thiện chất lượng giấc ngủ. 2. Trà thảo dược: Một số loại trà thảo dược như hoa cúc, hoa oải hương và hoa sen có tác dụng thư giãn và giúp bạn chìm vào giấc ngủ. Hãy thử pha một ấm trà thảo dược ấm và uống trước khi đi ngủ. 3. 
Sữa ấm: Sữa ấm có chứa tryptophan, một loại hormone giúp bạn cảm thấy dễ chịu và thư giãn. Nó cũng có thể giúp thúc đẩy sản xuất melatonin, hormone giúp bạn chìm vào giấc ngủ. Hãy thử uống một cốc sữa ấm trước khi đi ngủ để giúp bạn ngủ ngon hơn. ``` The translated data has been thoroughly cleaned from the coding task, so in theory, the model should be able to code normally by incorporating the coding capabilities from the English coding examples task. (The translation will translate code definitions and variables.) I would not recommend running any code generated from this model. ```` > Viết cho tôi một đoạn code bằng python xây dựng một ứng dụng bằng Flask hiển thị 'Xin chào thế giới' trên màn hình Để viết một đoạn mã Python xây dựng một ứng dụng bằng Flask hiển thị 'Xin chào thế giới' trên màn hình, bạn có thể làm theo các bước sau: 1. Cài đặt Flask và các thư viện cần thiết: ``` pip install flask ``` 2. Tạo một ứng dụng Flask mới: ``` app = Flask(__name__) ``` 3. Thêm một route để hiển thị 'Xin chào thế giới': ``` @app.route('/') def hello(): return 'Xin chào thế giới!' ``` 4. Khởi tạo ứng dụng Flask: ``` app.run() ``` 5. Chạy ứng dụng Flask: ``` app.run(debug=True) ``` 6. Mở trình duyệt và truy cập vào địa chỉ IP hoặc tên máy chủ của máy tính hoặc máy chủ nơi bạn đã khởi động ứng dụng Flask. ```` These examples are sub-optimal as they were generated with greedy decoding to ensure reproducibility. Here's the recommended configuration: * top_k=50 * top_p=0.5 * temperature=0.5 * penalty_alpha=0.7 * repetition_penalty=1.15 Here is an example using the above generation config: ```` > Tôi buồn ngủ quá, cơ mà thứ 7 phải thi rồi, nay là rạng sáng thứ 6. Tôi có nên đi ngủ không trong khi tôi mới xong chưa được 1/3 bài để ôn ? Nếu bạn cảm thấy mệt mỏi và cần nghỉ ngơi thì việc đi ngủ sớm sẽ rất quan trọng đối với sức khỏe của bạn. Tuy nhiên, nếu bạn vẫn muốn hoàn thành một phần bài tập để chuẩn bị cho kỳ thi sắp tới, hãy cân nhắc thời gian còn lại của mình và ưu tiên những nhiệm vụ quan trọng nhất trước khi đi ngủ. Nếu bạn cảm thấy rằng mình đã làm đủ, hãy dành chút thời gian thư giãn hoặc thực hiện các hoạt động khác để giúp giảm bớt căng thẳng và tăng cường sự tỉnh táo của bạn vào buổi sáng hôm sau. Cuối cùng, quyết định cuối cùng phụ thuộc vào mức độ nghiêm trọng của kỳ thi và khả năng quản lý thời gian của bạn. # temperature=0.7 > Em ăn cơm chưa ? Không, tôi không ăn cơm. Tôi là một mô hình ngôn ngữ máy được phát triển bởi một nhà phát triển AI tên là Tuấn và được đào tạo để trả lời các câu hỏi và cung cấp phản hồi hữu ích cho người dùng. Mục tiêu của tôi là hỗ trợ người dùng bằng cách suy nghĩ trước khi đưa ra những phản hồi phù hợp và chính xác nhất có thể. > Viết cho tôi một ứng dụng flask in ra màn hình 'Xin chào Việt Nam!' Để viết ứng dụng Flask hiển thị "Xin chào Việt Nam!" trên màn hình, chúng ta có thể làm theo các bước sau: 1. Cài đặt Flask và các thư viện cần thiết như BeautifulSoup4 và jsonify. Bạn có thể sử dụng pip để cài đặt chúng: ``` pip install Flask pip install beautifulsoup4 pip install jsonify ``` 2. Tạo một tệp mới trong thư mục của bạn với tên `app.py`. 
Đây là mã mẫu cơ bản của ứng dụng: ```python from bs4 import BeautifulSoup from jsonify import jsonify import requests app = Flask(__name__) @app.route('/') def home(): html_string = """<h1>Chào mừng đến với Việt Nam!</h1> <p>Đây là một trang web dành cho những ai muốn tìm hiểu về đất nước Việt Nam xinh đẹp.</p>""" soup = BeautifulSoup(html_string, 'html.parser') print(soup) return soup if __name__ == '__main__': app.run() ``` 3. Khởi tạo ứng dụng bằng cách chạy `app.py`: ``` python app.py ``` 4. Mở trình duyệt web và truy cập vào địa chỉ IP hoặc tên máy chủ của ứng dụng (ví dụ: http://localhost:5000). Trang đầu sẽ hiển thị thông báo "Xin chào Việt Nam!". Lưu ý rằng đây chỉ là ví dụ đơn giản và bạn có thể tùy chỉnh ứng dụng để đáp ứng nhu cầu cụ thể của mình. > Liệt kê cho tôi 5 thứ cần thiết để đem đi cắm trại Chắc chắn! Dưới đây là năm vật dụng cần thiết khi đi cắm trại: 1. Lều - Một chiếc lều chắc chắn và thoải mái sẽ cung cấp nơi trú ẩn khỏi thời tiết và côn trùng. 2. Túi ngủ - Một túi ngủ ấm áp và thoải mái sẽ giúp bạn có giấc ngủ ngon trong khi ở ngoài trời. 3. Bếp di động - Một bếp di động như bình đun hoặc bếp cồn sẽ giúp bạn nấu ăn và giữ ấm thức ăn của mình. 4. Đèn pin - Một đèn pin mạnh mẽ sẽ giúp chiếu sáng khu vực cắm trại của bạn vào ban đêm. 5. Bản đồ và la bàn - Bản đồ và la bàn sẽ giúp bạn định hướng chính xác đến địa điểm cắm trại của bạn và quay lại nhà nếu cần. > Viết cho tôi một bài thơ về thành phố Đà Lạt Thành phố sương mù, nơi thiên nhiên thanh bình, Nơi những bông hoa khoe sắc và cây xanh tươi tốt, Một ốc đảo giữa lòng đất, một xứ sở thần tiên, Nơi mọi lo lắng đều tan biến trong không khí. Những con đường nhỏ đan xen qua những ngọn đồi thoai thoải, Và những ngôi nhà cổ kính, với những bức tường màu nâu ấm áp, Nơi những người dân thân thiện chào đón bạn bằng nụ cười, Và tiếng chim hót líu lo như lời chào của thiên nhiên. Những dòng suối chảy êm đềm, những thác nước đổ xuống, Và những hồ nước lấp lánh, những cảnh tượng ngoạn mục, Nơi sự tĩnh lặng ngự trị, nơi bình yên ngự trị, Nơi tâm hồn được chữa lành và cảm xúc được giải phóng. Ở đây, thời gian dường như đứng yên, Nơi những kỷ niệm vẫn còn nguyên vẹn, Nơi những mối quan hệ bền chặt được nuôi dưỡng, Nơi tình yêu và hòa bình chiếm vị trí hàng đầu. Thế giới bên ngoài có vẻ xa xôi, Nhưng ở đây, chỉ cần bước chân đi, Nơi bầu trời rộng lớn, những đám mây cao chót vót, Và những tia nắng vàng rực rỡ, sưởi ấm trái tim chúng ta. Ôi, thành phố sương mù, nơi thiên nhiên thanh bình, Nơi những giấc mơ trở thành hiện thực, nơi trái tim tràn ngập niềm vui, Nơi tình yêu và hòa bình ngự trị, nơi tình yêu và hòa bình ngự trị. ```` Note: 120GB of pre-trained Vietnamese  data might not be enough for a general question about Vietnamese events. ## Training Details **Hardware Type:** * GPU: VGA NVIDIA Tesla P100 16GB * SYSTEM RAM: 32GB **Hours used:** ~47.5 days Approx* ### Training Data * BactrianX * OpenOrca_translated * WizardLM_70k_translated * TigerLabMathInstruct_translated_vi * GradeSchoolMathInstruct_translated * vilm_lima-vi * MTEngVietnamese * databricks_dolly15k_translated * AlpacaCleaned_translated * databricks_dolly15k * OpenOrca * GradeSchoolMathInstruct * AlpacaCleaned * WebglmQA ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. 
-->

* Learning rate: 2e-5 cosine
* Optimizer: PagedLion8bit
* QLoRA: rank 64 / 4-bit quantization
* Batch size: 2
* Gradient accumulation: 128
* Effective batch size: 256
* 250k examples of 70% Vietnamese, 30% English for 3.37 epochs
* 350k examples of 60% Vietnamese, 40% English for 1.4 epochs

### Training loss

Last 10 runs:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63905e87df447b438817b2cd/rV8Go_YFZv7QcR_FhFxp-.png)

Each line is 12 hours.

All run plots:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63905e87df447b438817b2cd/bxdq_0Mu8seBJCJNmpoEb.png)

Not the best way to visualize it, I know...

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63905e87df447b438817b2cd/z1ZTm7Tab4tQbVPgQW1hU.png)

Our model currently sits in the top 5 on the VMLU benchmark.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_1TuanPham__T-Llama)

|Metric                           |Value|
|---------------------------------|----:|
|Avg.                             |54.34|
|AI2 Reasoning Challenge (25-Shot)|54.18|
|HellaSwag (10-Shot)              |76.48|
|MMLU (5-Shot)                    |47.98|
|TruthfulQA (0-shot)              |46.47|
|Winogrande (5-shot)              |71.27|
|GSM8k (5-shot)                   |29.64|

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

```bibtex
@online{t-llama,
    author = {Pham Minh Tuan},
    title = {T-Llama: A New Language Model for Vietnamese}, % Placeholder
    year = 2024,
    url = {https://github.com/vTuanpham/Vietnamese_QA_System}
}
```
neuralmagic/Llama-2-7b-evolcodealpaca
neuralmagic
2024-05-07T15:29:56Z
700
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "code", "dataset:theblackcat102/evol-codealpaca-v1", "arxiv:2405.03594", "arxiv:2107.03374", "base_model:meta-llama/Llama-2-7b-hf", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-15T15:45:58Z
--- base_model: meta-llama/Llama-2-7b-hf inference: true model_type: llama pipeline_tag: text-generation datasets: - theblackcat102/evol-codealpaca-v1 tags: - code --- # Llama-2-7b-evolcodealpaca This repo contains a [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b-hf) finetuned for code generation tasks using the [Evolved CodeAlpaca](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1) dataset. Official model weights from [Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment](https://arxiv.org/abs/2405.03594). **Authors**: Neural Magic, Cerebras ## Usage Below we share some code snippets on how to get quickly started with running the model. ### Sparse Transfer By leveraging a pre-sparsified model's structure, you can efficiently fine-tune on new data, leading to reduced hyperparameter tuning, training times, and computational costs. Learn about this process [here](https://neuralmagic.github.io/docs-v2/get-started/transfer). ### Running the model This model may be run with the transformers library. For accelerated inference with sparsity, deploy with [nm-vllm](https://github.com/neuralmagic/nm-vllm) or [deepsparse](https://github.com/neuralmagic/deepsparse). ```python # pip install transformers accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("neuralmagic/Llama-2-7b-evolcodealpaca") model = AutoModelForCausalLM.from_pretrained("neuralmagic/Llama-2-7b-evolcodealpaca", device_map="auto") input_text = "def fibonacci(n):\n" input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` ## Evaluation Benchmark Results Model evaluation metrics and results. | Benchmark | Metric | Llama-2-7b-evolcodealpaca | |------------------------------------------------|---------------|-------------| | [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 32.03 | ## Model Training Details Coming soon. ## Help For further support, and discussions on these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ)
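
## Accelerated inference with nm-vllm (sketch)

The card points to [nm-vllm](https://github.com/neuralmagic/nm-vllm) for accelerated sparse inference but does not include an example. The snippet below is only a sketch: it assumes nm-vllm exposes the standard vLLM Python interface (`LLM` and `SamplingParams` imported from the `vllm` package); check the nm-vllm README for the exact install command and any sparsity-specific options.

```python
# Hedged sketch: assumes nm-vllm installs the standard `vllm` Python API.
# pip install nm-vllm
from vllm import LLM, SamplingParams

# Load the model weights from the Hub
llm = LLM(model="neuralmagic/Llama-2-7b-evolcodealpaca")

# Greedy decoding is a reasonable default for code generation
params = SamplingParams(temperature=0.0, max_tokens=256)
outputs = llm.generate(["def fibonacci(n):\n"], params)

for out in outputs:
    print(out.outputs[0].text)
```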
grimjim/kunoichi-lemon-royale-v2-32K-7B
grimjim
2024-05-13T02:49:11Z
700
4
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2212.04089", "base_model:grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B", "base_model:grimjim/kunoichi-lemon-royale-7B", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-02T02:00:29Z
--- base_model: - grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B - grimjim/kunoichi-lemon-royale-7B library_name: transformers tags: - mergekit - merge license: cc-by-nc-4.0 pipeline_tag: text-generation --- # kunoichi-lemon-royale-v2-32K-7B This merge amounts to the grafting of a model derived from Mistral v0.1 (4K sliding window context, to a maximum of 8K practical context length) onto a model derived from Mistral v0.2 (32K context length). It appears to work, although rope_theta in config.json was lowered from 1000000.0 to 100000.0, which works well enough to 16K. In light testing, this model appears to follow formatting very well, with temperature 1.0 and minP 0.01, using ChatML prompts, even though the underlying model claims to follow Alpaca prompts. This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). - Full weights: [kunoichi-lemon-royale-v2-32K-7B](https://huggingface.co/grimjim/kunoichi-lemon-royale-v2-32K-7B) - GGUFs: [kunoichi-lemon-royale-v2-32K-7B-GGUF](https://huggingface.co/grimjim/kunoichi-lemon-royale-v2-32K-7B-GGUF) ## Merge Details ### Merge Method This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B](https://huggingface.co/grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B) as a base. ### Models Merged The following models were included in the merge: * [grimjim/kunoichi-lemon-royale-7B](https://huggingface.co/grimjim/kunoichi-lemon-royale-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B dtype: bfloat16 merge_method: task_arithmetic slices: - sources: - layer_range: [0, 32] model: grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B - layer_range: [0, 32] model: grimjim/kunoichi-lemon-royale-7B parameters: weight: 1.00 ```
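
## Usage sketch

As noted above, light testing used ChatML prompts with temperature 1.0 and minP 0.01. The snippet below is a minimal sketch of that setup with the Hugging Face `transformers` generation API. The ChatML template string is written out by hand because the card does not state which chat template ships with the tokenizer, the example prompt is a placeholder, and the `min_p` argument requires a recent `transformers` release; treat all of these as assumptions rather than documented usage.

```python
# Minimal sketch, not an official usage example: ChatML-formatted prompt with
# temperature 1.0 and min-p 0.01 as suggested by the card's light testing notes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "grimjim/kunoichi-lemon-royale-v2-32K-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Hand-written ChatML prompt (assumption: the tokenizer may not carry this template itself)
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nSummarize the plot of Hamlet in three sentences.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,
    min_p=0.01,  # requires a transformers version with min-p sampling; drop if unavailable
)
# Print only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```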
dranger003/dbrx-instruct-iMat.GGUF
dranger003
2024-04-14T16:30:40Z
700
19
gguf
[ "gguf", "text-generation", "base_model:databricks/dbrx-instruct", "license:other", "region:us" ]
text-generation
2024-04-06T22:03:14Z
---
license: other
license_name: databricks-open-model-license
library_name: gguf
license_link: https://www.databricks.com/legal/open-model-license
pipeline_tag: text-generation
base_model: databricks/dbrx-instruct
---
**Quants from @phymbert (author of the support for this model in llama.cpp) are posted [here](https://huggingface.co/models?sort=created&search=gguf+phymbert)**

The quants here are meant to test imatrix-quantized weights. <i>If you run Metal, you may need this [PR](https://github.com/ggerganov/llama.cpp/pull/6662)</i>

**Added `ggml-dbrx-instruct-16x12b-f16_imatrix-wiki.dat`, an imatrix computed over 2K batches (~1M tokens) of wiki.train on the FP16 weights.**

| Quant | IMatrix Quant/Dataset/Chunks | Size (GiB) | PPL (wiki.test) |
| -- | -- | -- | -- |
| IQ4_XS | Q8_0/wiki.train/200 | 65.29 | 5.2260 +/- 0.03558 |
| IQ4_XS | FP16/wiki.train/2000 | 65.29 | 5.2241 +/- 0.03559 |
| IQ4_XS | - | 66.05 | 5.2546 +/- 0.03570 |

**2024-04-13**: Support for this model has just been merged - [`PR #6515`](https://github.com/ggerganov/llama.cpp/pull/6515).

**<u>You will need llama.cpp commit [`4bd0f93e`](https://github.com/ggerganov/llama.cpp/commit/4bd0f93e4ab4fe6682e7d0241c1bdec1397e954a) to run this model</u>**

Quants in this repo are tested by running the following command (quants below IQ3 are very sensitive and unreliable so far - the imatrix may need to be trained on FP16 weights rather than Q8_0, and for longer than 200 chunks):
```
./build/bin/main -ngl 41 -c 4096 -s 0 -e -p "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nWrite an essay about AI.<|im_end|>\n<|im_start|>assistant\n" -m ggml-dbrx-instruct-16x12b-<<quant-to-test>>.gguf
```

* GGUF importance matrix (imatrix) quants for https://huggingface.co/databricks/dbrx-instruct
* The importance matrix is trained for ~100K tokens (200 batches of 512 tokens) using [wiki.train.raw](https://huggingface.co/datasets/wikitext).
* [Which GGUF is right for me? (from Artefact2)](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) - X axis is file size and Y axis is perplexity (lower perplexity is better quality).
* The [imatrix is being used on the K-quants](https://github.com/ggerganov/llama.cpp/pull/4930) as well (only for < Q6_K).
* You can merge GGUFs with `gguf-split --merge <first-chunk> <output-file>`, although this is not required since [f482bb2e](https://github.com/ggerganov/llama.cpp/commit/f482bb2e4920e544651fb832f2e0bcb4d2ff69ab).
* What is an importance matrix (imatrix)? You can [read more about it from the author here](https://github.com/ggerganov/llama.cpp/pull/4861).
* How do I use imatrix quants? Just like any other GGUF; the `.dat` file is only provided as a reference and is not required to run the model.
* If you need to use IQ1, then use IQ1_M, as IQ1_S is very unstable.

> DBRX is a transformer-based decoder-only large language model (LLM) that was trained using next-token prediction. It uses a fine-grained mixture-of-experts (MoE) architecture with 132B total parameters of which 36B parameters are active on any input. It was pre-trained on 12T tokens of text and code data. Compared to other open MoE models like Mixtral-8x7B and Grok-1, DBRX is fine-grained, meaning it uses a larger number of smaller experts. DBRX has 16 experts and chooses 4, while Mixtral-8x7B and Grok-1 have 8 experts and choose 2. This provides 65x more possible combinations of experts and we found that this improves model quality.
DBRX uses rotary position encodings (RoPE), gated linear units (GLU), and grouped query attention (GQA). It uses the GPT-4 tokenizer as provided in the tiktoken repository. We made these choices based on exhaustive evaluation and scaling experiments. | Layers | Context | Template | | --- | --- | --- | | <pre>40</pre> | <pre>32768</pre> | <pre>\<\|im_start\|\>system<br>{system}\<\|im_end\|\><br>\<\|im_start\|\>user<br>{prompt}\<\|im_end\|\><br>\<\|im_start\|\>assistant<br> </pre> | * 16x12B MoE * 16 experts (12B params per single expert; top_k=4 routing) * 36B active params (132B total params) * Trained on 12T tokens * 32k sequence length training
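
A quick way to sanity-check the "65x more possible combinations of experts" figure quoted above: choosing 4 of 16 experts gives C(16,4) = 1820 routing combinations, versus C(8,2) = 28 for Mixtral-8x7B and Grok-1, and 1820 / 28 = 65. Nothing beyond the arithmetic is assumed here:

```python
# Verify the "65x more expert combinations" claim from the DBRX description
from math import comb

print(comb(16, 4), comb(8, 2), comb(16, 4) / comb(8, 2))  # 1820 28 65.0
```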
AIGym/Llama-2-7b-chat-hf-structured-responses
AIGym
2024-04-20T15:53:11Z
700
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:AIGym/function_calling_v2", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-20T04:43:49Z
--- library_name: transformers license: apache-2.0 datasets: - AIGym/function_calling_v2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
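
## Getting Started Sketch

The "How to Get Started with the Model" section above is still a placeholder. Since the repository metadata identifies this as a `transformers` Llama-2 chat derivative fine-tuned on a function-calling dataset, the following is a minimal, hedged sketch using the generic causal-LM API; the exact prompt format and any structured-response conventions are not documented in this card, so the example prompt and dtype/device choices are assumptions only.

```python
# Minimal sketch (not from the model card): generic transformers usage for a
# Llama-2-based causal LM. The proper chat/function-calling format is undocumented here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AIGym/Llama-2-7b-chat-hf-structured-responses"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "List three uses of a hash map."  # placeholder prompt; the real formatting may differ
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```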
abhishek/autotrain-llama3-oh-sft-v0-2
abhishek
2024-04-25T10:31:23Z
700
1
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "autotrain", "text-generation-inference", "peft", "conversational", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-04-23T18:16:46Z
--- license: other library_name: transformers tags: - autotrain - text-generation-inference - text-generation - peft widget: - messages: - role: user content: What is your favorite condiment? --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
mmnga/tokyotech-llm-Swallow-7b-instruct-v0.1-gguf
mmnga
2024-05-03T04:53:43Z
700
0
null
[ "gguf", "en", "ja", "dataset:TFMC/imatrix-dataset-for-japanese-llm", "license:llama2", "region:us" ]
null
2024-05-03T04:09:27Z
---
license: llama2
language:
- en
- ja
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
---

# tokyotech-llm-Swallow-7b-instruct-v0.1-gguf
This is a GGUF-format conversion of [Swallow-7b-instruct-v0.1, published by tokyotech-llm](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-v0.1).

The imatrix data was created using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).

## Other models
mistral
[mmnga/tokyotech-llm-Swallow-MS-7b-instruct-v0.1-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-MS-7b-instruct-v0.1-gguf)
[mmnga/tokyotech-llm-Swallow-7b-plus-hf-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-7b-plus-hf-gguf)
[mmnga/tokyotech-llm-Swallow-MS-7b-v0.1-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-MS-7b-v0.1-gguf)
[mmnga/tokyotech-llm-Swallow-MX-8x7b-NVE-v0.1-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-MX-8x7b-NVE-v0.1-gguf)

llama2
[mmnga/tokyotech-llm-Swallow-7b-instruct-v0.1-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-7b-instruct-v0.1-gguf)
[mmnga/tokyotech-llm-Swallow-13b-instruct-v0.1-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-13b-instruct-v0.1-gguf)
[mmnga/tokyotech-llm-Swallow-70b-instruct-v0.1-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-70b-instruct-v0.1-gguf)

## Usage

```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'tokyotech-llm-Swallow-7b-instruct-v0.1-Q4_0.gguf' -n 128 -p '[INST]<<SYS>>\nあなたは誠実で優秀な日本人のアシスタントです。\n<</SYS>>\n\n東京工業大学の主なキャンパスについて教えてください[/INST]'
```
afrideva/phi-3-portuguese-tom-cat-4k-instruct-GGUF
afrideva
2024-05-18T17:56:42Z
700
0
transformers
[ "transformers", "gguf", "portugues", "portuguese", "QA", "instruct", "phi", "ggml", "quantized", "text-generation", "pt", "dataset:rhaymison/superset", "base_model:rhaymison/phi-3-portuguese-tom-cat-4k-instruct", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
text-generation
2024-05-18T17:31:48Z
--- base_model: rhaymison/phi-3-portuguese-tom-cat-4k-instruct datasets: - rhaymison/superset inference: true language: - pt library_name: transformers license: apache-2.0 model-index: - name: phi-3-portuguese-tom-cat-4k-instruct results: - dataset: args: num_few_shot: 3 name: ENEM Challenge (No Images) split: train type: eduagarcia/enem_challenge metrics: - name: accuracy type: acc value: 61.58 source: name: Open Portuguese LLM Leaderboard url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct task: name: Text Generation type: text-generation - dataset: args: num_few_shot: 3 name: BLUEX (No Images) split: train type: eduagarcia-temp/BLUEX_without_images metrics: - name: accuracy type: acc value: 50.63 source: name: Open Portuguese LLM Leaderboard url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct task: name: Text Generation type: text-generation - dataset: args: num_few_shot: 3 name: OAB Exams split: train type: eduagarcia/oab_exams metrics: - name: accuracy type: acc value: 43.69 source: name: Open Portuguese LLM Leaderboard url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct task: name: Text Generation type: text-generation - dataset: args: num_few_shot: 15 name: Assin2 RTE split: test type: assin2 metrics: - name: f1-macro type: f1_macro value: 91.54 source: name: Open Portuguese LLM Leaderboard url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct task: name: Text Generation type: text-generation - dataset: args: num_few_shot: 15 name: Assin2 STS split: test type: eduagarcia/portuguese_benchmark metrics: - name: pearson type: pearson value: 75.27 source: name: Open Portuguese LLM Leaderboard url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct task: name: Text Generation type: text-generation - dataset: args: num_few_shot: 15 name: FaQuAD NLI split: test type: ruanchaves/faquad-nli metrics: - name: f1-macro type: f1_macro value: 47.46 source: name: Open Portuguese LLM Leaderboard url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct task: name: Text Generation type: text-generation - dataset: args: num_few_shot: 25 name: HateBR Binary split: test type: ruanchaves/hatebr metrics: - name: f1-macro type: f1_macro value: 83.01 source: name: Open Portuguese LLM Leaderboard url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct task: name: Text Generation type: text-generation - dataset: args: num_few_shot: 25 name: PT Hate Speech Binary split: test type: hate_speech_portuguese metrics: - name: f1-macro type: f1_macro value: 70.19 source: name: Open Portuguese LLM Leaderboard url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct task: name: Text Generation type: text-generation - dataset: args: num_few_shot: 25 name: tweetSentBR split: test type: eduagarcia/tweetsentbr_fewshot metrics: - name: f1-macro type: f1_macro value: 57.78 source: name: Open Portuguese LLM Leaderboard url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct task: name: Text Generation type: text-generation 
model_creator: rhaymison model_name: phi-3-portuguese-tom-cat-4k-instruct pipeline_tag: text-generation quantized_by: afrideva tags: - portugues - portuguese - QA - instruct - phi - gguf - ggml - quantized --- # phi-3-portuguese-tom-cat-4k-instruct-GGUF Quantized GGUF model files for [phi-3-portuguese-tom-cat-4k-instruct](https://huggingface.co/rhaymison/phi-3-portuguese-tom-cat-4k-instruct) from [rhaymison](https://huggingface.co/rhaymison) ## Original Model Card: # Phi-3-portuguese-tom-cat-4k-instruct <p align="center"> <img src="https://raw.githubusercontent.com/rhaymisonbetini/huggphotos/main/tom-cat.webp" width="50%" style="margin-left:'auto' margin-right:'auto' display:'block'"/> </p> This model was trained with a superset of 300,000 instructions in Portuguese. The model comes to help fill the gap in models in Portuguese. Tuned from the microsoft/Phi-3-mini-4k. # How to use ### FULL MODEL : A100 ### HALF MODEL: L4 ### 8bit or 4bit : T4 or V100 You can use the model in its normal form up to 4-bit quantization. Below we will use both approaches. Remember that verbs are important in your prompt. Tell your model how to act or behave so that you can guide them along the path of their response. Important points like these help models (even smaller models like 4b) to perform much better. ```python !pip install -q -U transformers !pip install -q -U accelerate !pip install -q -U bitsandbytes from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer model = AutoModelForCausalLM.from_pretrained("rhaymison/phi-3-portuguese-tom-cat-4k-instruct", device_map= {"": 0}) tokenizer = AutoTokenizer.from_pretrained("rhaymison/phi-3-portuguese-tom-cat-4k-instruct") model.eval() ``` You can use with Pipeline. ```python from transformers import pipeline pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, do_sample=True, max_new_tokens=512, num_beams=2, temperature=0.3, top_k=50, top_p=0.95, early_stopping=True, pad_token_id=tokenizer.eos_token_id, ) def format_template(question:str): system_prompt = "Abaixo está uma instrução que descreve uma tarefa, juntamente com uma entrada que fornece mais contexto. Escreva uma resposta que complete adequadamente o pedido." return f"""<s><|system|> { system_prompt } <|user|> { question } <|assistant|> """ question = format_template("E possivel ir de Carro dos Estados unidos ate o japão") pipe(question) ``` If you are having a memory problem such as "CUDA Out of memory", you should use 4-bit or 8-bit quantization. For the complete model in colab you will need the A100. If you want to use 4bits or 8bits, T4 or L4 will already solve the problem. 
# 4bits example

```python
from transformers import BitsAndBytesConfig
import torch

bnb_4bit_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True
)

base_model = "rhaymison/phi-3-portuguese-tom-cat-4k-instruct"

model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_4bit_config,
    device_map={"": 0}
)
```

# Open Portuguese LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/rhaymison/phi-3-portuguese-tom-cat-4k-instruct) and on the [🚀 Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)

| Metric                   | Value   |
|--------------------------|---------|
|Average                   |**64.57**|
|ENEM Challenge (No Images)| 61.58|
|BLUEX (No Images)         | 50.63|
|OAB Exams                 | 43.69|
|Assin2 RTE                | 91.54|
|Assin2 STS                | 75.27|
|FaQuAD NLI                | 47.46|
|HateBR Binary             | 83.01|
|PT Hate Speech Binary     | 70.19|
|tweetSentBR               | 57.78|

### Comments

Any idea, help or report will always be welcome.

email: [email protected]

<div style="display:flex; flex-direction:row; justify-content:left">
<a href="https://www.linkedin.com/in/rhaymison-cristian-betini-2b3016175/" target="_blank">
<img src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white">
</a>
<a href="https://github.com/rhaymisonbetini" target="_blank">
<img src="https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white">
</a>
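The card shows the 4-bit configuration explicitly; an 8-bit variant along the same lines would only change the BitsAndBytes config. The snippet below is a sketch (the original card does not include the 8-bit version):

```python
# Sketch: 8-bit loading with the same pattern as the 4-bit example above.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_8bit_config = BitsAndBytesConfig(load_in_8bit=True)

base_model = "rhaymison/phi-3-portuguese-tom-cat-4k-instruct"

model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_8bit_config,
    device_map={"": 0}
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
```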
CHE-72/Baichuan2-7B-Chat-Q3_K_L-GGUF
CHE-72
2024-06-22T09:09:30Z
700
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "zh", "base_model:baichuan-inc/Baichuan2-7B-Chat", "region:us" ]
null
2024-06-22T09:09:13Z
--- base_model: baichuan-inc/Baichuan2-7B-Chat language: - en - zh license_name: baichuan2-community-license license_link: https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat/blob/main/Community%20License%20for%20Baichuan2%20Model.pdf tags: - llama-cpp - gguf-my-repo tasks: - text-generation --- # CHE-72/Baichuan2-7B-Chat-Q3_K_L-GGUF This model was converted to GGUF format from [`baichuan-inc/Baichuan2-7B-Chat`](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo CHE-72/Baichuan2-7B-Chat-Q3_K_L-GGUF --hf-file baichuan2-7b-chat-q3_k_l.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo CHE-72/Baichuan2-7B-Chat-Q3_K_L-GGUF --hf-file baichuan2-7b-chat-q3_k_l.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo CHE-72/Baichuan2-7B-Chat-Q3_K_L-GGUF --hf-file baichuan2-7b-chat-q3_k_l.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo CHE-72/Baichuan2-7B-Chat-Q3_K_L-GGUF --hf-file baichuan2-7b-chat-q3_k_l.gguf -c 2048 ```
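The instructions above cover the llama.cpp CLI and server. For use from Python, one option is the `llama-cpp-python` bindings; the sketch below is not part of the original card and assumes the GGUF file has already been downloaded locally (for example with `huggingface-cli download CHE-72/Baichuan2-7B-Chat-Q3_K_L-GGUF baichuan2-7b-chat-q3_k_l.gguf`):

```python
# Sketch: load the quantized GGUF file with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="baichuan2-7b-chat-q3_k_l.gguf",  # local path to the downloaded quant
    n_ctx=2048,                                  # context size, matching the server example
)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```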
facebook/s2t-wav2vec2-large-en-de
facebook
2023-01-24T16:32:35Z
699
3
transformers
[ "transformers", "pytorch", "speech-encoder-decoder", "automatic-speech-recognition", "audio", "speech-translation", "speech2text2", "en", "de", "dataset:covost2", "dataset:librispeech_asr", "arxiv:2104.06678", "license:mit", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
---
language:
- en
- de
datasets:
- covost2
- librispeech_asr
tags:
- audio
- speech-translation
- automatic-speech-recognition
- speech2text2
license: mit
pipeline_tag: automatic-speech-recognition
widget:
- example_title: Common Voice 1
  src: https://cdn-media.huggingface.co/speech_samples/common_voice_en_18301577.mp3
- example_title: Common Voice 2
  src: https://cdn-media.huggingface.co/speech_samples/common_voice_en_99985.mp3
- example_title: Common Voice 3
  src: https://cdn-media.huggingface.co/speech_samples/common_voice_en_99986.mp3
---

# S2T2-Wav2Vec2-CoVoST2-EN-DE-ST

`s2t-wav2vec2-large-en-de` is a Speech to Text Transformer model trained for end-to-end Speech Translation (ST). The S2T2 model was proposed in [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/pdf/2104.06678.pdf) and officially released in [Fairseq](https://github.com/pytorch/fairseq/blob/6f847c8654d56b4d1b1fbacec027f47419426ddb/fairseq/models/wav2vec/wav2vec2_asr.py#L266).

## Model description

S2T2 is a transformer-based seq2seq (speech encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech Translation (ST). It uses a pretrained [Wav2Vec2](https://huggingface.co/transformers/model_doc/wav2vec2.html) as the encoder and a transformer-based decoder. The model is trained with standard autoregressive cross-entropy loss and generates the translations autoregressively.

## Intended uses & limitations

This model can be used for end-to-end English speech to German text translation. See the [model hub](https://huggingface.co/models?filter=speech2text2) to look for other S2T2 checkpoints.

### How to use

As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the transcripts by passing the speech features to the model.

You can use the model directly via the ASR pipeline

```python
from datasets import load_dataset
from transformers import pipeline

librispeech_en = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
asr = pipeline("automatic-speech-recognition", model="facebook/s2t-wav2vec2-large-en-de", feature_extractor="facebook/s2t-wav2vec2-large-en-de")

translation_de = asr(librispeech_en[0]["file"])
```

or step-by-step as follows:

```python
import torch
from transformers import Speech2Text2Processor, SpeechEncoderDecoderModel
from datasets import load_dataset
import soundfile as sf

model = SpeechEncoderDecoderModel.from_pretrained("facebook/s2t-wav2vec2-large-en-de")
processor = Speech2Text2Processor.from_pretrained("facebook/s2t-wav2vec2-large-en-de")

def map_to_array(batch):
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch

ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)

inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt")
generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"])

transcription = processor.batch_decode(generated_ids)
```

## Evaluation results

CoVoST-V2 test results for en-de (BLEU score): **26.5**

For more information, please have a look at the [official paper](https://arxiv.org/pdf/2104.06678.pdf) - especially row 10 of Table 2.
### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2104-06678, author = {Changhan Wang and Anne Wu and Juan Miguel Pino and Alexei Baevski and Michael Auli and Alexis Conneau}, title = {Large-Scale Self- and Semi-Supervised Learning for Speech Translation}, journal = {CoRR}, volume = {abs/2104.06678}, year = {2021}, url = {https://arxiv.org/abs/2104.06678}, archivePrefix = {arXiv}, eprint = {2104.06678}, timestamp = {Thu, 12 Aug 2021 15:37:06 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2104-06678.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
huggingartists/taylor-swift
huggingartists
2022-07-11T13:52:52Z
699
1
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/taylor-swift", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/taylor-swift tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/721a6c465a666419bf286b473287c33f.446x446x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Taylor Swift</div> <a href="https://genius.com/artists/taylor-swift"> <div style="text-align: center; font-size: 14px;">@taylor-swift</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Taylor Swift. Dataset is available [here](https://huggingface.co/datasets/huggingartists/taylor-swift). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/taylor-swift") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/2l84tzp2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Taylor Swift's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1hy7aa65) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1hy7aa65/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/taylor-swift') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/taylor-swift") model = AutoModelWithLMHead.from_pretrained("huggingartists/taylor-swift") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
RajaSi/sd-prompt-generator-gpt-neo-gn
RajaSi
2022-11-21T16:35:06Z
699
2
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-11-21T11:07:13Z
Hey friends, welcome to this applied NLP tutorial. We're going to learn how to fine-tune a text generation model and how to push the fine-tuned model to the Hugging Face Model Hub, and along the way we're also going to explore the Stable Diffusion side of it, so this is a combination of a lot of different things. The model is uploaded to the Hugging Face Model Hub, and I'm calling it SD prompt generator GPT Neo because it is a prompt generator for Stable Diffusion: if you want to create something with Stable Diffusion, you ideally need to give the AI a very detailed prompt. As you can see from the name, SD prompt generator GPT Neo, we're going to use a GPT Neo model and fine-tune it on prompts, so that we end up with a text generation model where we can give a short prompt text and it will generate a new, extended, better prompt for us. So what are we going to do? We're going to take a set of existing Stable Diffusion prompts, take the 124 million parameter GPT Neo model, fine-tune that model on this data, and finally save the model and push it to the Hugging Face Model Hub.
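The card itself does not include a usage snippet; a minimal sketch, assuming the standard `transformers` text-generation pipeline and this repository's id (sampling settings are illustrative), would look like this:

```python
# Minimal sketch: expand a short idea into a more detailed Stable Diffusion prompt.
from transformers import pipeline

generator = pipeline("text-generation", model="RajaSi/sd-prompt-generator-gpt-neo-gn")

seed = "a castle on a cliff at sunset"
result = generator(seed, max_new_tokens=60, do_sample=True, temperature=0.9)
print(result[0]["generated_text"])
```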
facebook/convnextv2-large-22k-384
facebook
2023-09-11T18:27:56Z
699
2
transformers
[ "transformers", "pytorch", "tf", "convnextv2", "image-classification", "vision", "dataset:imagenet-22k", "arxiv:2301.00808", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-18T20:31:45Z
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-22k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # ConvNeXt V2 (large-sized model) ConvNeXt V2 model pretrained using the FCMAE framework and fine-tuned on the ImageNet-22K dataset at resolution 384x384. It was introduced in the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Woo et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt-V2). Disclaimer: The team releasing ConvNeXT V2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXt V2 is a pure convolutional model (ConvNet) that introduces a fully convolutional masked autoencoder framework (FCMAE) and a new Global Response Normalization (GRN) layer to ConvNeXt. ConvNeXt V2 significantly improves the performance of pure ConvNets on various recognition benchmarks. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnextv2_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnextv2) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import AutoImageProcessor, ConvNextV2ForImageClassification import torch from datasets import load_dataset dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] preprocessor = AutoImageProcessor.from_pretrained("facebook/convnextv2-large-22k-384") model = ConvNextV2ForImageClassification.from_pretrained("facebook/convnextv2-large-22k-384") inputs = preprocessor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 1000 ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]), ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnextv2). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2301-00808, author = {Sanghyun Woo and Shoubhik Debnath and Ronghang Hu and Xinlei Chen and Zhuang Liu and In So Kweon and Saining Xie}, title = {ConvNeXt {V2:} Co-designing and Scaling ConvNets with Masked Autoencoders}, journal = {CoRR}, volume = {abs/2301.00808}, year = {2023}, url = {https://doi.org/10.48550/arXiv.2301.00808}, doi = {10.48550/arXiv.2301.00808}, eprinttype = {arXiv}, eprint = {2301.00808}, timestamp = {Tue, 10 Jan 2023 15:10:12 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2301-00808.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
ycros/BagelMIsteryTour-v2-8x7B-GGUF
ycros
2024-04-03T01:01:56Z
699
27
null
[ "gguf", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:mistralai/Mixtral-8x7B-v0.1", "base_model:jondurbin/bagel-dpo-8x7b-v0.2", "base_model:Sao10K/Sensualize-Mixtral-bf16", "base_model:Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora", "base_model:mistralai/Mixtral-8x7B-Instruct-v0.1", "license:cc-by-nc-4.0", "region:us" ]
null
2024-01-19T05:06:05Z
--- base_model: - mistralai/Mixtral-8x7B-v0.1 - jondurbin/bagel-dpo-8x7b-v0.2 - Sao10K/Sensualize-Mixtral-bf16 - mistralai/Mixtral-8x7B-v0.1 - Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora - mistralai/Mixtral-8x7B-Instruct-v0.1 tags: - mergekit - merge license: cc-by-nc-4.0 --- # BagelMIsteryTour-v2-8x7B These are GGUF quantized versions of [BagelMIsteryTour-v2-8x7B](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B) Bagel, Mixtral Instruct, with extra spices. Give it a taste. Works with Alpaca prompt formats, though the Mistral format should also work. ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63044fa07373aacccd8a7c53/lxNMzXo_dq_JCP9YyUyaw.jpeg) I started experimenting around seeing if I could improve or fix some of Bagel's problems. Totally inspired by seeing how well Doctor-Shotgun's Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss worked (which is a LimaRP tune on top of base Mixtral, and then merged with Mixtral Instruct) - I decided to try some merges of Bagel with Mixtral Instruct as a result. Somehow I ended up here, Bagel, Mixtral Instruct, a little bit of LimaRP, a little bit of Sao10K's Sensualize. So far in my testing it's working very well, and while it seems fairly unaligned on a lot of stuff, it's maybe a little too aligned on a few specific things (which I think comes from Sensualize) - so that's something to play with in the future, or maybe try to DPO out. I've been running (temp last) minP 0.1, dynatemp 0.5-4, rep pen 1.07, rep range 1024. I've been testing Alpaca style Instruction/Response, and Instruction/Input/Response and those seem to work well, I expect Mistral's prompt format would also work well. You may need to add a stopping string on "{{char}}:" for RPs because it can sometimes duplicate those out in responses and waffle on. Seems to hold up and not fall apart at long contexts like Bagel and some other Mixtral tunes seem to, definitely doesn't seem prone to loopyness either. Can be pushed into extravagant prose if the scene/setting calls for it. __Version 2:__ lowered the mix of Sensualize. This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) as a base. ### Models Merged The following models were included in the merge: * [jondurbin/bagel-dpo-8x7b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-8x7b-v0.2) * [Sao10K/Sensualize-Mixtral-bf16](https://huggingface.co/Sao10K/Sensualize-Mixtral-bf16) * [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) + [Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora](https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora) * [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: mistralai/Mixtral-8x7B-v0.1 models: - model: mistralai/Mixtral-8x7B-v0.1+Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora parameters: density: 0.5 weight: 0.2 - model: Sao10K/Sensualize-Mixtral-bf16 parameters: density: 0.5 weight: 0.1 - model: mistralai/Mixtral-8x7B-Instruct-v0.1 parameters: density: 0.6 weight: 1.0 - model: jondurbin/bagel-dpo-8x7b-v0.2 parameters: density: 0.6 weight: 0.5 merge_method: dare_ties dtype: bfloat16 ```
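As a concrete illustration of the Alpaca-style Instruction/Input/Response layout recommended above, a small helper like the one below can build the prompt; the exact template wording here is the common Alpaca phrasing and is an assumption rather than something taken from this model's files:

```python
# Sketch of an Alpaca-style prompt builder (Instruction / Input / Response).
def alpaca_prompt(instruction: str, user_input: str = "") -> str:
    header = (
        "Below is an instruction that describes a task"
        + (", paired with an input that provides further context" if user_input else "")
        + ". Write a response that appropriately completes the request.\n\n"
    )
    body = f"### Instruction:\n{instruction}\n\n"
    if user_input:
        body += f"### Input:\n{user_input}\n\n"
    return header + body + "### Response:\n"

print(alpaca_prompt("Write a short scene set in a rainy harbour town."))
```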
Qwen/Qwen1.5-0.5B-Chat-GPTQ-Int4
Qwen
2024-04-30T07:20:21Z
699
11
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "chat", "conversational", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2024-02-04T18:23:22Z
--- license: other license_name: tongyi-qianwen-research license_link: >- https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat-GPTQ-Int4/blob/main/LICENSE language: - en pipeline_tag: text-generation tags: - chat --- # Qwen1.5-0.5B-Chat-GPTQ-Int4 ## Introduction Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previous released Qwen, the improvements include: * 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated; * Significant performance improvement in human preference for chat models; * Multilingual support of both base and chat models; * Stable support of 32K context length for models of all sizes * No need of `trust_remote_code`. For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5). <br> ## Model Details Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, temporarily we did not include GQA (except for 32B) and the mixture of SWA and full attention. ## Training details We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization. ## Requirements The code of Qwen1.5 has been in the latest Hugging face transformers and we advise you to install `transformers>=4.37.0`, or you might encounter the following error: ``` KeyError: 'qwen2' ``` ## Quickstart Here provides a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate contents. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen1.5-0.5B-Chat-GPTQ-Int4", torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B-Chat-GPTQ-Int4") prompt = "Give me a short introduction to large language model." messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ## Tips * If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`. ## Citation If you find our work helpful, feel free to give us a cite. 
``` @article{qwen, title={Qwen Technical Report}, author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu}, journal={arXiv preprint arXiv:2309.16609}, year={2023} } ```
babybirdprd/Experiment26-7B-passthrough-6slice-GGUF
babybirdprd
2024-03-02T23:35:27Z
699
0
null
[ "gguf", "region:us" ]
null
2024-03-02T10:31:33Z
Entry not found
kaist-ai/mistral-orpo-alpha
kaist-ai
2024-03-17T09:48:18Z
699
8
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "dataset:HuggingFaceH4/ultrafeedback_binarized", "arxiv:2403.07691", "base_model:mistralai/Mistral-7B-v0.1", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-12T01:30:53Z
--- language: - en license: mit base_model: - mistralai/Mistral-7B-v0.1 datasets: - HuggingFaceH4/ultrafeedback_binarized pipeline_tag: text-generation model-index: - name: Mistral-ORPO-⍺ results: - task: type: text-generation dataset: name: AlpacaEval 1 type: AlpacaEval metrics: - type: AlpacaEval 1.0 value: 87.92% name: Win Rate source: url: https://github.com/tatsu-lab/alpaca_eval name: self-reported - task: type: text-generation dataset: name: AlpacaEval 2 type: AlpacaEval metrics: - type: AlpacaEval 2.0 value: 11.33% name: Win Rate source: url: https://github.com/tatsu-lab/alpaca_eval name: self-reported - task: type: text-generation dataset: name: MT-Bench type: MT-Bench metrics: - type: MT-Bench value: 7.23 name: Score source: url: https://github.com/lm-sys/FastChat/blob/main/fastchat/llm_judge/ name: self-reported --- # **Mistral-ORPO-⍺ (7B)** **Mistral-ORPO** is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) using the *odds ratio preference optimization (ORPO)*. With ORPO, the model directly learns the preference without the supervised fine-tuning warmup phase. **Mistral-ORPO-⍺** is fine-tuned exclusively on [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized). - **Github Repository**: https://github.com/xfactlab/orpo ## 👍 **Model Performance** ### 1) AlpacaEval & MT-Bench |Model Name|Size|Align|MT-Bench|AlpacaEval 1.0|AlpacaEval 2.0| |:--------|:--------------:|:--------------:|:-------------------:|:------------:|:------------:| |**Mistral-<tt>ORPO</tt>-⍺**|7B|<tt>ORPO</tt>|7.23|87.92|11.33| |**Mistral-<tt>ORPO</tt>-β**|7B|<tt>ORPO</tt>|7.32|91.41|12.20| |Zephyr β |7B|DPO|7.34|90.60|10.99| |TULU-2-DPO |13B|DPO|7.00|89.5|10.12| |Llama-2-Chat |7B|RLHF|6.27|71.37|4.96| |Llama-2-Chat |13B|RLHF|6.65|81.09|7.70| ### 2) IFEval | **Model Type** | **Prompt-Strict** | **Prompt-Loose** | **Inst-Strict** | **Inst-Loose** | |--------------------|:-----------------:|:----------------:|:---------------:|:--------------:| | **Mistral-ORPO-⍺** | 0.5009 | 0.5083 | 0.5995 | 0.6163 | | **Mistral-ORPO-β** | 0.5287 | 0.5564 | 0.6355 | 0.6619 | ## 🗺️ **MT-Bench by Category** ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6415c043486c7c9a5d151583/1Ifpt0ljCfJPEoZAqlqqy.png) ## 🖥️ **Inference** ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("kaist-ai/mistral-orpo-alpha") tokenizer = AutoTokenizer.from_pretrained("kaist-ai/mistral-orpo-alpha") # Apply chat template query = [{'role': 'user', 'content': 'Hi! How are you doing?'}] prompt = tokenizer.apply_chat_template(query, tokenize=False, add_generation_prompt=True) inputs = tokenizer(prompt, return_tensors='pt') # Generation with specific configurations output = model.generate( **inputs, max_new_tokens=128, do_sample=True, temperature=0.7 ) response = tokenizer.batch_decode(output) #<|user|> #Hi! How are you doing?</s> #<|assistant|> #I'm doing well, thank you! How are you?</s> ``` ## 📎 **Citation** ``` @misc{hong2024orpo, title={ORPO: Monolithic Preference Optimization without Reference Model}, author={Jiwoo Hong and Noah Lee and James Thorne}, year={2024}, eprint={2403.07691}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
Mar2Ding/songcomposer_sft
Mar2Ding
2024-04-20T09:07:01Z
699
0
transformers
[ "transformers", "pytorch", "internlm", "feature-extraction", "text-generation", "custom_code", "en", "zh", "arxiv:2402.17645", "license:apache-2.0", "region:us" ]
text-generation
2024-03-21T10:02:53Z
--- license: apache-2.0 pipeline_tag: text-generation language: - en - zh --- <p align="center"> <b><font size="6">SongComposer</font></b> <p> <div align="center"> [💻Github Repo](https://github.com/pjlab-songcomposer/songcomposer) [📖Paper](https://arxiv.org/abs/2402.17645) </div> **SongComposer** is a language large model (LLM) based on [InternLM2](https://github.com/InternLM/InternLM) for lyric and melody composition in song generation. We release SongComposer series in two versions: - SongComposer_pretrain: The pretrained SongComposer with InternLM2 as the initialization of the LLM, gains basic knowledge on lyric and melody. - SongComposer_sft: The finetuned SongComposer for *instruction-following song generation* including lyric to melody, melody to lyric, song continuation, text to song. ### Import from Transformers To load the SongComposer_sft model using Transformers, use the following code: ```python from transformers import AutoTokenizer, AutoModel ckpt_path = "Mar2Ding/songcomposer_sft" tokenizer = AutoTokenizer.from_pretrained(ckpt_path, trust_remote_code=True) model = AutoModel.from_pretrained(ckpt_path, trust_remote_code=True).cuda().half() prompt = 'Create a song on brave and sacrificing with a rapid pace.' model.inference(prompt, tokenizer) ``` ### 通过 Transformers 加载 通过以下的代码加载 SongComposer_sft 模型 ```python from transformers import AutoTokenizer, AutoModel ckpt_path = "Mar2Ding/songcomposer_sft" tokenizer = AutoTokenizer.from_pretrained(ckpt_path, trust_remote_code=True) model = AutoModel.from_pretrained(ckpt_path, trust_remote_code=True).cuda().half() prompt = 'Create a song on brave and sacrificing with a rapid pace.' model.inference(prompt, tokenizer) ``` ### Open Source License The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow free commercial usage.
allknowingroger/Lamma3merge3-15B-MoE
allknowingroger
2024-04-19T11:56:08Z
699
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "Malikeh1375/llama-3-base-instruct-slerp", "orpo-explorers/hf-llama3-8b-orpo-v0.0", "conversational", "base_model:Malikeh1375/llama-3-base-instruct-slerp", "base_model:orpo-explorers/hf-llama3-8b-orpo-v0.0", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-19T11:29:02Z
--- license: apache-2.0 tags: - moe - frankenmoe - merge - mergekit - lazymergekit - Malikeh1375/llama-3-base-instruct-slerp - orpo-explorers/hf-llama3-8b-orpo-v0.0 base_model: - Malikeh1375/llama-3-base-instruct-slerp - orpo-explorers/hf-llama3-8b-orpo-v0.0 --- # Lamma3merge3-15B-MoE Lamma3merge3-15B-MoE is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Malikeh1375/llama-3-base-instruct-slerp](https://huggingface.co/Malikeh1375/llama-3-base-instruct-slerp) * [orpo-explorers/hf-llama3-8b-orpo-v0.0](https://huggingface.co/orpo-explorers/hf-llama3-8b-orpo-v0.0) ## 🧩 Configuration ```yaml base_model: Malikeh1375/llama-3-base-instruct-slerp experts: - source_model: Malikeh1375/llama-3-base-instruct-slerp positive_prompts: ["why"] - source_model: orpo-explorers/hf-llama3-8b-orpo-v0.0 positive_prompts: ["what"] ``` ## 💻 Usage ```python !pip install -qU transformers bitsandbytes accelerate from transformers import AutoTokenizer import transformers import torch model = "allknowingroger/Lamma3merge3-15B-MoE" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True}, ) messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Steelskull/L3-Arcania-4x8b
Steelskull
2024-05-07T14:45:51Z
699
21
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "not-for-all-audiences", "conversational", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-03T05:11:11Z
--- license: llama3 tags: - not-for-all-audiences --- <!DOCTYPE html> <style> body { font-family: 'Quicksand', sans-serif; background: linear-gradient(135deg, #2E3440 0%, #1A202C 100%); color: #D8DEE9; margin: 0; padding: 0; font-size: 16px; } .container { width: 80% auto; max-width: 1080px auto; margin: 20px auto; background-color: rgba(255, 255, 255, 0.02); padding: 20px; border-radius: 12px; box-shadow: 0 4px 10px rgba(0, 0, 0, 0.2); backdrop-filter: blur(10px); border: 1px solid rgba(255, 255, 255, 0.1); } .header h1 { font-size: 28px; color: #ECEFF4; margin: 0 0 20px 0; text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.3); } .update-section { margin-top: 30px; } .update-section h2 { font-size: 24px; color: #88C0D0; } .update-section p { font-size: 16px; line-height: 1.6; color: #ECEFF4; } .info img { width: 100%; border-radius: 10px; margin-bottom: 15px; } a { color: #88C0D0; text-decoration: none; } a:hover { color: #A3BE8C; } .button { display: inline-block; background-color: #5E81AC; color: #E5E9F0; padding: 10px 20px; border-radius: 5px; cursor: pointer; text-decoration: none; } .button:hover { background-color: #81A1C1; } pre { background-color: #2E3440; padding: 10px; border-radius: 5px; overflow-x: auto; } code { font-family: 'Courier New', monospace; color: #D8DEE9; } </style> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>L3-Arcania-4x8b Data Card</title> <link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet"> </head> <body> <div class="container"> <div class="header"> <h1>L3-Arcania-4x8b</h1> </div> <div class="info"> <img src="https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/HfdZs1XAXZ8vfd8ZFLq8H.png"> <p>Now that the cute anime girl has your attention.</p> <p><strong>Creator:</strong> <a href="https://huggingface.co/Steelskull" target="_blank">SteelSkull</a></p> <p><strong>About L3-Arcania-4x8b:</strong> A Mixture of Experts model designed for general assistance, storytelling, roleplay, and ERP.</p> <li>Integrates models from notable sources for enhanced performance in diverse tasks.</p> <p>This model is based off of the work ive done on Umbra v1-v3 basically the gates are trained off of Keywords that direct the gates but not limit as much as a full prompt would. 
My goal is Quality not quantity</p> <p><strong>Source Models:</strong></p> <ul> <li><a href="https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct">meta-llama/Meta-Llama-3-8B-Instruct</a></li> <li><a href="https://huggingface.co/Sao10K/L3-Solana-8B-v1">Sao10K/L3-Solana-8B-v1</a></li> <li><a href="https://huggingface.co/dreamgen-preview/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5">dreamgen-preview/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5</a></li> <li><a href="https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1">NeverSleep/Llama-3-Lumimaid-8B-v0.1</a></li> <li><a href="https://huggingface.co/cgato/L3-TheSpice-8b-v0.1.3">cgato/L3-TheSpice-8b-v0.1.3</a></li> </ul> </div> <div class="update-section"> <h2>Quants:</h2> <p> Recommended: (Thanks to <a href="https://huggingface.co/mradermacher">@Mradermacher!</a>, please send them likes!)</p> <p><a href="https://huggingface.co/mradermacher/L3-Arcania-4x8b-GGUF">L3-Arcania-4x8b-GGUF (all GGUFs)</a></p> <p><a href="https://huggingface.co/mradermacher/L3-Arcania-4x8b-i1-GGUF">L3-Arcania-4x8b-i1-GGUF (i Quant GGUFs)</a></p> <p> My Quants: (they work, just not many choices) </p> <p><a href="https://huggingface.co/SteelQuants/L3-Arcania-4x8b-Q4_K_M-GGUF">SteelQuants/L3-Arcania-4x8b-Q4_K_M-GGUF</a></p> <p><a href="https://huggingface.co/SteelQuants/L3-Arcania-4x8b-Q5_K_M-GGUF">SteelQuants/L3-Arcania-4x8b-Q5_K_M-GGUF</a></p> <h3>Config:</h3> <p>Recommended Prompt Format: [Llama 3] </p> <pre><code><|begin_of_text|><|start_header_id|>system<|end_header_id|> {{prompt}}<|eot_id|>{{history}}<|start_header_id|>{{char}}<|end_header_id|> </code></pre> <p> Model Config: </p> <pre><code>MODEL_NAME = "L3-Arcania-4x8b" base_model: meta-llama/Meta-Llama-3-8B-Instruct gate_mode: hidden dtype: bfloat16 experts: - source_model: Sao10K/L3-Solana-8B-v1 - source_model: dreamgen-preview/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5 - source_model: NeverSleep/Llama-3-Lumimaid-8B-v0.1 - source_model: cgato/L3-TheSpice-8b-v0.1.3 </code></pre> <p>L3-Arcania-4x8b combines the strengths of multiple models to deliver a well-rounded, capable assistant. It excels at general tasks, storytelling, roleplay, and even more mature content.</p> <p>The base model, Meta-Llama-3-8B-Instruct, provides a solid foundation. The expert models then enhance specific capabilities:</p> <ul> <li>L3-Solana-8B-v1 adds generalist knowledge and the ability to handle a wide range of topics, including NSFW content.</li> <li>opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5 strengthens storytelling, roleplay, and long-form writing abilities.</li> <li>Llama-3-Lumimaid-8B-v0.1 introduces expertise in romantic, flirtatious, and explicit interactions.</li> <li>L3-TheSpice-8b-v0.1.3 ensures the model remains focused, tailored, and high-quality.</li> </ul> <p>The positive and negative prompts guide each expert's influence, resulting in a model that is versatile yet refined, capable of both general assistance and more specialized, mature interactions.</p> </div> </div> </body> </html>
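For loading the model in `transformers`, a rough sketch is below; it assumes the repository's tokenizer ships a Llama 3 chat template (which would reproduce the prompt format shown above) and that enough VRAM is available for a 4x8b MoE. All generation settings are illustrative.

```python
# Rough usage sketch: chat-template formatting plus sampling; values are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Steelskull/L3-Arcania-4x8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a creative storytelling assistant."},
    {"role": "user", "content": "Describe a bustling market in a fantasy city."},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```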
danyoung/billie
danyoung
2024-05-07T16:09:10Z
699
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-06T22:15:24Z
--- base_model: [] library_name: transformers tags: - mergekit - merge license: apache-2.0 --- # 12_18 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * candidates/slerpfewshot/10/10_0 * candidates/slerpfewshot/6/6_1 ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: candidates/slerpfewshot/6/6_1 dtype: bfloat16 merge_method: slerp models: - model: candidates/slerpfewshot/6/6_1 - model: candidates/slerpfewshot/10/10_0 parameters: t: 0.5 ```
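The merge above uses SLERP with `t: 0.5`. As a toy illustration of what spherical interpolation does to a pair of corresponding weight tensors (mergekit's real implementation works per-tensor and handles more edge cases), consider:

```python
# Toy SLERP between two flattened weight vectors at t = 0.5 (illustrative only).
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if omega < eps:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b

w1 = np.random.randn(8)   # stand-ins for a tensor from each source model
w2 = np.random.randn(8)
print(slerp(0.5, w1, w2))
```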
Josephgflowers/TinyLlama-Cinder-Tiny-Agent
Josephgflowers
2024-05-23T12:44:31Z
699
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:Josephgflowers/TinyLlama-Cinder-Math-Train", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-22T15:23:26Z
--- license: mit base_model: Josephgflowers/TinyLlama-Cinder-Math-Train tags: - generated_from_trainer model-index: - name: TinyLlama-Cinder-Tiny-Agent results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # TinyLlama-Cinder-Tiny-Agent This model is a fine-tuned version of [Josephgflowers/TinyLlama-Cinder-Math-Train](https://huggingface.co/Josephgflowers/TinyLlama-Cinder-Math-Train) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 12 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.2.2+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
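The auto-generated card does not include a usage example; a minimal sketch with the standard `transformers` pipeline (prompt and generation settings are illustrative) would be:

```python
# Minimal generation sketch for the fine-tuned checkpoint.
from transformers import pipeline

pipe = pipeline("text-generation", model="Josephgflowers/TinyLlama-Cinder-Tiny-Agent")
print(pipe("Question: What is 12 * 7?\nAnswer:", max_new_tokens=64)[0]["generated_text"])
```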
skaty5678/visa-chat-400-7b-Q4_K_M-gguf
skaty5678
2024-06-06T05:56:27Z
699
0
transformers
[ "transformers", "gguf", "llama", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-06-03T05:20:43Z
Entry not found
mradermacher/Trinity_Notus2-xb-GGUF
mradermacher
2024-06-12T10:37:46Z
699
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "janai-hq/trinity-v1", "argilla/notus-7b-v1", "en", "base_model:powermove72/Trinity_Notus2-xb", "endpoints_compatible", "region:us" ]
null
2024-06-12T09:58:36Z
--- base_model: powermove72/Trinity_Notus2-xb language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - janai-hq/trinity-v1 - argilla/notus-7b-v1 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/powermove72/Trinity_Notus2-xb <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Trinity_Notus2-xb-GGUF/resolve/main/Trinity_Notus2-xb.Q2_K.gguf) | Q2_K | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/Trinity_Notus2-xb-GGUF/resolve/main/Trinity_Notus2-xb.IQ3_XS.gguf) | IQ3_XS | 4.7 | | | [GGUF](https://huggingface.co/mradermacher/Trinity_Notus2-xb-GGUF/resolve/main/Trinity_Notus2-xb.Q3_K_S.gguf) | Q3_K_S | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/Trinity_Notus2-xb-GGUF/resolve/main/Trinity_Notus2-xb.IQ3_S.gguf) | IQ3_S | 5.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Trinity_Notus2-xb-GGUF/resolve/main/Trinity_Notus2-xb.IQ3_M.gguf) | IQ3_M | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Trinity_Notus2-xb-GGUF/resolve/main/Trinity_Notus2-xb.Q3_K_M.gguf) | Q3_K_M | 5.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Trinity_Notus2-xb-GGUF/resolve/main/Trinity_Notus2-xb.Q3_K_L.gguf) | Q3_K_L | 6.0 | | | [GGUF](https://huggingface.co/mradermacher/Trinity_Notus2-xb-GGUF/resolve/main/Trinity_Notus2-xb.IQ4_XS.gguf) | IQ4_XS | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/Trinity_Notus2-xb-GGUF/resolve/main/Trinity_Notus2-xb.Q4_K_S.gguf) | Q4_K_S | 6.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Trinity_Notus2-xb-GGUF/resolve/main/Trinity_Notus2-xb.Q4_K_M.gguf) | Q4_K_M | 6.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Trinity_Notus2-xb-GGUF/resolve/main/Trinity_Notus2-xb.Q5_K_S.gguf) | Q5_K_S | 7.8 | | | [GGUF](https://huggingface.co/mradermacher/Trinity_Notus2-xb-GGUF/resolve/main/Trinity_Notus2-xb.Q5_K_M.gguf) | Q5_K_M | 8.0 | | | [GGUF](https://huggingface.co/mradermacher/Trinity_Notus2-xb-GGUF/resolve/main/Trinity_Notus2-xb.Q6_K.gguf) | Q6_K | 9.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Trinity_Notus2-xb-GGUF/resolve/main/Trinity_Notus2-xb.Q8_0.gguf) | Q8_0 | 12.0 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
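To fetch one of the quants listed above from Python rather than by hand, `huggingface_hub` can be used; the sketch below picks the Q4_K_M file from the table and is not part of the original card:

```python
# Sketch: download a single GGUF quant, then point llama.cpp / llama-cpp-python at the path.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Trinity_Notus2-xb-GGUF",
    filename="Trinity_Notus2-xb.Q4_K_M.gguf",   # "fast, recommended" per the table above
)
print(path)
```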
instruction-pretrain/InstructLM-500M
instruction-pretrain
2024-06-27T03:35:35Z
699
32
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "en", "dataset:tiiuae/falcon-refinedweb", "dataset:instruction-pretrain/ft-instruction-synthesizer-collection", "arxiv:2406.14491", "arxiv:2309.09530", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-18T13:59:20Z
---
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
- instruction-pretrain/ft-instruction-synthesizer-collection
language:
- en
---

# Instruction Pre-Training: Language Models are Supervised Multitask Learners

This repo contains the **general models pre-trained from scratch** in our paper [Instruction Pre-Training: Language Models are Supervised Multitask Learners](https://huggingface.co/papers/2406.14491).

We explore supervised multitask pre-training by proposing ***Instruction Pre-Training***, a framework that scalably augments massive raw corpora with instruction-response pairs to pre-train language models. The instruction-response pairs are generated by an efficient instruction synthesizer built on open-source models. In our experiments, we synthesize 200M instruction-response pairs covering 40+ task categories to verify the effectiveness of *Instruction Pre-Training*. *Instruction Pre-Training* outperforms *Vanilla Pre-training* in both general pre-training from scratch and domain-adaptive continual pre-training. **In pre-training from scratch, *Instruction Pre-Training* not only improves pre-trained base models but also benefits more from further instruction tuning.** In continual pre-training, *Instruction Pre-Training* enables Llama3-8B to be comparable to or even outperform Llama3-70B.

<p align='center'>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/66711d2ee12fa6cc5f5dfc89/vRdsFIVQptbNaGiZ18Lih.png" width="400">
</p>

## Resources

**🤗 We share our data and models with example usages, feel free to open any issues or discussions! 🤗**

- Context-Based Instruction Synthesizer: [instruction-synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer)
- Fine-Tuning Data for the Synthesizer: [ft-instruction-synthesizer-collection](https://huggingface.co/datasets/instruction-pretrain/ft-instruction-synthesizer-collection)
- General Models Pre-Trained from Scratch:
  - [InstructLM-500M](https://huggingface.co/instruction-pretrain/InstructLM-500M)
  - [InstructLM-1.3B](https://huggingface.co/instruction-pretrain/InstructLM-1.3B)
- Domain-Specific Models Pre-Trained from Llama3-8B:
  - [Finance-Llama3-8B](https://huggingface.co/instruction-pretrain/finance-Llama3-8B)
  - [Biomedicine-Llama3-8B](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B)
- General Instruction-Augmented Corpora: [general-instruction-augmented-corpora](https://huggingface.co/datasets/instruction-pretrain/general-instruction-augmented-corpora)
- Domain-Specific Instruction-Augmented Corpora (no finance data to avoid ethical issues): [medicine-instruction-augmented-corpora](https://huggingface.co/datasets/instruction-pretrain/medicine-instruction-augmented-corpora)

## General Pre-Training From Scratch

We augment the [RefinedWeb corpora](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) with instruction-response pairs generated by our [context-based instruction synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer) to pre-train general language models from scratch.

To evaluate our general base model using the [lm-evaluation-harness framework](https://github.com/EleutherAI/lm-evaluation-harness):

1. Setup dependencies:
```bash
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```
2. Evaluate:
```bash
MODEL=instruction-pretrain/InstructLM-500M
add_bos_token=True # this flag is needed because lm-eval-harness sets add_bos_token to False by default, but ours requires add_bos_token to be True

accelerate launch -m lm_eval --model hf \
    --model_args pretrained=${MODEL},add_bos_token=${add_bos_token},dtype=float16 \
    --gen_kwargs do_sample=False \
    --tasks piqa,hellaswag,winogrande \
    --batch_size auto \
    --num_fewshot 0

accelerate launch -m lm_eval --model hf \
    --model_args pretrained=${MODEL},add_bos_token=${add_bos_token},dtype=float16 \
    --gen_kwargs do_sample=False \
    --tasks social_iqa,ai2_arc,openbookqa,boolq,mmlu \
    --batch_size auto \
    --num_fewshot 5
```

## Citation

If you find our work helpful, please cite us:

Instruction Pre-Training
```bibtex
@article{cheng2024instruction,
  title={Instruction Pre-Training: Language Models are Supervised Multitask Learners},
  author={Cheng, Daixuan and Gu, Yuxian and Huang, Shaohan and Bi, Junyu and Huang, Minlie and Wei, Furu},
  journal={arXiv preprint arXiv:2406.14491},
  year={2024}
}
```

[AdaptLLM](https://huggingface.co/papers/2309.09530)
```bibtex
@inproceedings{
  cheng2024adapting,
  title={Adapting Large Language Models via Reading Comprehension},
  author={Daixuan Cheng and Shaohan Huang and Furu Wei},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=y886UXPEZ0}
}
```
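Beyond the lm-eval-harness commands, the checkpoint can also be loaded directly with `transformers` for quick generation; the sketch below is a minimal example and the prompt wording is illustrative, not a prescribed template:

```python
# Minimal generation sketch: the checkpoint is a Mistral-architecture causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "instruction-pretrain/InstructLM-500M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("Name three renewable energy sources.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```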
Abhaykoul/NOVA-4B-Q8_0-GGUF
Abhaykoul
2024-06-29T12:13:03Z
699
1
null
[ "gguf", "region:us" ]
null
2024-06-29T12:11:07Z
Entry not found
laituan245/molt5-large-smiles2caption
laituan245
2022-05-03T18:08:31Z
698
3
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "arxiv:2204.11817", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
2022-05-03T16:50:08Z
---
license: apache-2.0
---

This model can be used to generate a caption for an input SMILES string.

## Example Usage

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("laituan245/molt5-large-smiles2caption", model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-large-smiles2caption')

input_text = 'C1=CC2=C(C(=C1)[O-])NC(=CC2=O)C(=O)O'
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

outputs = model.generate(input_ids, num_beams=5, max_length=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Paper

For more information, please take a look at our paper.

Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817)

Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji*
timm/convnextv2_huge.fcmae_ft_in22k_in1k_384
timm
2024-02-10T23:29:18Z
698
1
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2301.00808", "license:cc-by-nc-4.0", "region:us" ]
image-classification
2023-01-05T01:45:48Z
---
license: cc-by-nc-4.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---

# Model card for convnextv2_huge.fcmae_ft_in22k_in1k_384

A ConvNeXt-V2 image classification model. Pretrained with a fully convolutional masked autoencoder framework (FCMAE) and fine-tuned on ImageNet-22k and then ImageNet-1k.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 660.3
  - GMACs: 338.0
  - Activations (M): 232.4
  - Image size: 384 x 384
- **Papers:**
  - ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders: https://arxiv.org/abs/2301.00808
- **Original:** https://github.com/facebookresearch/ConvNeXt-V2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-1k

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('convnextv2_huge.fcmae_ft_in22k_in1k_384', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'convnextv2_huge.fcmae_ft_in22k_in1k_384',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 352, 96, 96])
    #  torch.Size([1, 704, 48, 48])
    #  torch.Size([1, 1408, 24, 24])
    #  torch.Size([1, 2816, 12, 12])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'convnextv2_huge.fcmae_ft_in22k_in1k_384',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2816, 12, 12) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size| |------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------| | [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 | | [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 | | [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 | | [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 | | [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 | | [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 | | [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 | | [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 | | [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 | | [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 | | [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 | | [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 | | [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 | | [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 | | [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 | | [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 | | [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 | | [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 | | [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 | | 
[convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 | | [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 | | [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 | | [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 | | [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 | | [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 | | [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 | | [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 | | [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 | | [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 | | [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 | | [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 | | [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 | | [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 | | [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 | | [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 | | [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 | | [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 | | [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 | | [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 | | [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 | | [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 | | [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 | | [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 
|95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 | | [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 | | [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 | | [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 | | [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 | | [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 | | [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 | | [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 | | [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 | | [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 | | [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 | | [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 | ## Citation ```bibtex @article{Woo2023ConvNeXtV2, title={ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders}, author={Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon and Saining Xie}, year={2023}, journal={arXiv preprint arXiv:2301.00808}, } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
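As a companion to the comparison table above, related ConvNeXt-V2 checkpoints can be listed straight from timm's model registry; a small sketch (the wildcard pattern is illustrative):

```python
import timm

# Enumerate pretrained ConvNeXt-V2 variants known to this timm install.
for name in timm.list_models('convnextv2*', pretrained=True):
    print(name)
```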
Triangles/gpt-neo-125m-primordial_earth_full
Triangles
2023-09-06T16:02:37Z
698
0
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "generated_from_trainer", "base_model:EleutherAI/gpt-neo-125M", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-08-19T19:13:10Z
---
license: mit
base_model: EleutherAI/gpt-neo-125M
tags:
- generated_from_trainer
model-index:
- name: gpt-neo-125M-primordial_earth_full
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# gpt-neo-125M-primordial_earth_full

This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6208

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 332  | 3.6724          |
| 3.6848        | 2.0   | 664  | 3.6293          |
| 3.6848        | 3.0   | 996  | 3.6208          |

### Framework versions

- Transformers 4.33.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
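The card above ships without usage code; a minimal generation sketch, assuming the checkpoint loads with the standard GPT-Neo classes (the prompt and sampling settings are illustrative, not part of the original card):

```python
from transformers import pipeline

# 125M parameters is small enough to run comfortably on CPU (device=-1).
generator = pipeline(
    "text-generation",
    model="Triangles/gpt-neo-125m-primordial_earth_full",
    device=-1,
)

out = generator("In the primordial ocean,", max_new_tokens=60, do_sample=True, top_p=0.9)
print(out[0]["generated_text"])
```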
TheBloke/Carl-Llama-2-13B-GGUF
TheBloke
2023-09-27T12:47:42Z
698
4
transformers
[ "transformers", "gguf", "llama", "en", "dataset:jerryjalapeno/nart-100k-synthetic", "base_model:ajibawa-2023/carl-llama-2-13b", "license:cc-by-nc-nd-4.0", "text-generation-inference", "region:us" ]
null
2023-09-05T15:30:33Z
--- language: - en license: cc-by-nc-nd-4.0 datasets: - jerryjalapeno/nart-100k-synthetic model_name: Carl Llama 2 base_model: ajibawa-2023/carl-llama-2-13b inference: false model_creator: Feynman Innovations model_type: llama prompt_template: 'This is a conversation with your Therapist AI, Carl. Carl is designed to help you while in stress. It can answer your questions and help you to calm down Context You are Carl, A Therapist AI USER: {prompt} CARL: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Carl Llama 2 - GGUF - Model creator: [Feynman Innovations](https://huggingface.co/ajibawa-2023) - Original model: [Carl Llama 2](https://huggingface.co/ajibawa-2023/carl-llama-2-13b) <!-- description start --> ## Description This repo contains GGUF format model files for [Feynman Innovations's Carl Llama 2](https://huggingface.co/ajibawa-2023/carl-llama-2-13b). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Carl-Llama-2-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Carl-Llama-2-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Carl-Llama-2-13B-GGUF) * [Feynman Innovations's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ajibawa-2023/carl-llama-2-13b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Carl ``` This is a conversation with your Therapist AI, Carl. Carl is designed to help you while in stress. It can answer your questions and help you to calm down Context You are Carl, A Therapist AI USER: {prompt} CARL: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `cc-by-nc-nd-4.0`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Feynman Innovations's Carl Llama 2](https://huggingface.co/ajibawa-2023/carl-llama-2-13b). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. 
This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [carl-llama-2-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Carl-Llama-2-13B-GGUF/blob/main/carl-llama-2-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [carl-llama-2-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Carl-Llama-2-13B-GGUF/blob/main/carl-llama-2-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [carl-llama-2-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Carl-Llama-2-13B-GGUF/blob/main/carl-llama-2-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [carl-llama-2-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Carl-Llama-2-13B-GGUF/blob/main/carl-llama-2-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [carl-llama-2-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Carl-Llama-2-13B-GGUF/blob/main/carl-llama-2-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [carl-llama-2-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Carl-Llama-2-13B-GGUF/blob/main/carl-llama-2-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [carl-llama-2-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Carl-Llama-2-13B-GGUF/blob/main/carl-llama-2-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [carl-llama-2-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Carl-Llama-2-13B-GGUF/blob/main/carl-llama-2-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [carl-llama-2-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Carl-Llama-2-13B-GGUF/blob/main/carl-llama-2-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [carl-llama-2-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Carl-Llama-2-13B-GGUF/blob/main/carl-llama-2-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [carl-llama-2-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Carl-Llama-2-13B-GGUF/blob/main/carl-llama-2-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [carl-llama-2-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Carl-Llama-2-13B-GGUF/blob/main/carl-llama-2-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

- LM Studio
- LoLLMS Web UI
- Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: TheBloke/Carl-Llama-2-13B-GGUF and below it, a specific filename to download, such as: carl-llama-2-13b.Q4_K_M.gguf. Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub>=0.17.1
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download TheBloke/Carl-Llama-2-13B-GGUF carl-llama-2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download TheBloke/Carl-Llama-2-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Carl-Llama-2-13B-GGUF carl-llama-2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 32 -m carl-llama-2-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "This is a conversation with your Therapist AI, Carl. Carl is designed to help you while in stress. It can answer your questions and help you to calm down\n\nContext\nYou are Carl, A Therapist AI\nUSER: {prompt}\nCARL:"
```

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
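Before the ctransformers walkthrough below, here is a minimal llama-cpp-python sketch; the local file path, GPU layer count and example user message are illustrative, and the prompt follows the Carl template shown above:

```python
from llama_cpp import Llama

# Point model_path at a GGUF file downloaded from this repo.
llm = Llama(
    model_path="carl-llama-2-13b.Q4_K_M.gguf",
    n_ctx=4096,       # context length
    n_gpu_layers=32,  # set to 0 for CPU-only inference
)

prompt = (
    "This is a conversation with your Therapist AI, Carl. Carl is designed to help you "
    "while in stress. It can answer your questions and help you to calm down\n\n"
    "Context\nYou are Carl, A Therapist AI\n"
    "USER: I had a stressful day at work.\nCARL:"
)

output = llm(prompt, max_tokens=256, temperature=0.7, stop=["USER:"])
print(output["choices"][0]["text"])
```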
### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Carl-Llama-2-13B-GGUF", model_file="carl-llama-2-13b.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: Feynman Innovations's Carl Llama 2

**Carl: A Therapist AI**

Early prevention can help a lot of people avoid depression and other mental illnesses. Therapy is a controversial use case because the outputs and capabilities of LLMs are uncertain. Many people don't have access to a therapist, due to financial, personal, social or other restrictions. Here comes Carl: a Therapist AI which can quickly respond to you. It is trained on more than 100000 sets of conversations, each set having 10~15 conversations between Carl and a client. Base data was obtained from jerryjalapeno/nart-100k-synthetic. This data was further refined and fine-tuned. The entire dataset is synthetic. Synthetic data is used because there is little to no therapy conversation data which is publicly available and directly applicable to an LLM.

This is by no means a replacement for a doctor or professional therapist. If you are under stress or going through a tough time, please seek professional help or talk to a friend/family member.

**Training:**
The entire dataset was trained on Azure with 4 x A100 80GB GPUs. Training took 50 hours for 3 epochs. The DeepSpeed codebase was used for training. This model was trained on Llama-2 by Meta.

GGML quant models were converted by Kijana Mitchell. Extremely thankful to him.

**GPTQ**

GPTQ: [TheBloke](https://huggingface.co/TheBloke/Carl-Llama-2-13B-GPTQ)

Special thanks to [TheBloke](https://huggingface.co/TheBloke) for guiding me and making this model available.

**Example Prompt:**
```
This is a conversation with your Therapist AI, Carl. Carl is designed to help you while in stress. It can answer your questions and help you to calm down

Context
You are Carl, A Therapist AI
USER: <prompt>
CARL:
```

Note: This is just a research experiment, and the model should NOT be used as a human therapist. Use the "cat" command to join all pytorch_model.bin parts.

<!-- original-model-card end -->
TheBloke/Echidna-13B-v0.3-GGUF
TheBloke
2023-10-28T17:56:30Z
698
6
transformers
[ "transformers", "gguf", "llama", "base_model:NeverSleep/Echidna-13b-v0.3", "license:cc-by-nc-4.0", "text-generation-inference", "region:us" ]
null
2023-10-28T14:04:10Z
--- base_model: NeverSleep/Echidna-13b-v0.3 inference: false license: cc-by-nc-4.0 model_creator: NeverSleep model_name: Echidna 13B v0.3 model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Echidna 13B v0.3 - GGUF - Model creator: [NeverSleep](https://huggingface.co/NeverSleep) - Original model: [Echidna 13B v0.3](https://huggingface.co/NeverSleep/Echidna-13b-v0.3) <!-- description start --> ## Description This repo contains GGUF format model files for [NeverSleep's Echidna 13B v0.3](https://huggingface.co/NeverSleep/Echidna-13b-v0.3). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Echidna-13B-v0.3-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Echidna-13B-v0.3-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Echidna-13B-v0.3-GGUF) * [NeverSleep's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NeverSleep/Echidna-13b-v0.3) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [NeverSleep's Echidna 13B v0.3](https://huggingface.co/NeverSleep/Echidna-13b-v0.3). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [echidna-13b-v0.3.Q2_K.gguf](https://huggingface.co/TheBloke/Echidna-13B-v0.3-GGUF/blob/main/echidna-13b-v0.3.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [echidna-13b-v0.3.Q3_K_S.gguf](https://huggingface.co/TheBloke/Echidna-13B-v0.3-GGUF/blob/main/echidna-13b-v0.3.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [echidna-13b-v0.3.Q3_K_M.gguf](https://huggingface.co/TheBloke/Echidna-13B-v0.3-GGUF/blob/main/echidna-13b-v0.3.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [echidna-13b-v0.3.Q3_K_L.gguf](https://huggingface.co/TheBloke/Echidna-13B-v0.3-GGUF/blob/main/echidna-13b-v0.3.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [echidna-13b-v0.3.Q4_0.gguf](https://huggingface.co/TheBloke/Echidna-13B-v0.3-GGUF/blob/main/echidna-13b-v0.3.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [echidna-13b-v0.3.Q4_K_S.gguf](https://huggingface.co/TheBloke/Echidna-13B-v0.3-GGUF/blob/main/echidna-13b-v0.3.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [echidna-13b-v0.3.Q4_K_M.gguf](https://huggingface.co/TheBloke/Echidna-13B-v0.3-GGUF/blob/main/echidna-13b-v0.3.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [echidna-13b-v0.3.Q5_0.gguf](https://huggingface.co/TheBloke/Echidna-13B-v0.3-GGUF/blob/main/echidna-13b-v0.3.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [echidna-13b-v0.3.Q5_K_S.gguf](https://huggingface.co/TheBloke/Echidna-13B-v0.3-GGUF/blob/main/echidna-13b-v0.3.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [echidna-13b-v0.3.Q5_K_M.gguf](https://huggingface.co/TheBloke/Echidna-13B-v0.3-GGUF/blob/main/echidna-13b-v0.3.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [echidna-13b-v0.3.Q6_K.gguf](https://huggingface.co/TheBloke/Echidna-13B-v0.3-GGUF/blob/main/echidna-13b-v0.3.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [echidna-13b-v0.3.Q8_0.gguf](https://huggingface.co/TheBloke/Echidna-13B-v0.3-GGUF/blob/main/echidna-13b-v0.3.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Echidna-13B-v0.3-GGUF and below it, a specific filename to download, such as: echidna-13b-v0.3.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Echidna-13B-v0.3-GGUF echidna-13b-v0.3.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Echidna-13B-v0.3-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Echidna-13B-v0.3-GGUF echidna-13b-v0.3.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m echidna-13b-v0.3.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Echidna-13B-v0.3-GGUF", model_file="echidna-13b-v0.3.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. 
Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: NeverSleep's Echidna 13B v0.3 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/LU1KiXFeJtt7UW_mYhwE0.png) # This model is a collab between [IkariDev](https://huggingface.co/IkariDev) and [Undi](https://huggingface.co/Undi95)! Echidna v0.3 model. Use Alpaca format. Suitable for RP, ERP and general stuff. Echidna v0.3, here it is.. its like Echidna v0.2 on steroids. This model seems to still be pretty sensitive to your generation settings, experiment until you've found your settings. [Recommended settings - No settings yet(Please suggest some over in the Community tab!)] <!-- description start --> ## Description <!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) --> This repo contains FP16 files of Echidna-13b-v0.3. [FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Echidna-13b-v0.3) <!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)--> <!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)--> [exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-13b-v0.3-8bpw-8h-exl2) <!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)--> <!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)--> [GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Echidna-13b-v0.3-GGUF) <!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)--> ## Ratings: Note: We have permission of all users to upload their ratings, i DONT screenshot random reviews without asking if i can put them here! No ratings yet! If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC name is "ikaridev" and "undi". <!-- description end --> <!-- description start --> ## Models+loras used and recipe - Xwin-LM/Xwin-LM-13B-V0.2 - IkariDev/Athena-v3 - Heralax/Cat-0.5 - Undi95/PsyMedRP-v1-13B - cgato/Thespis-13b-v0.4 - KoboldAI/LLaMA2-13B-TiefighterLR - Heralax/Augmental-13b-two-epochs - Sao10K/SthenoWriter2.1-L2-13B - Undi95/Storytelling-v2.1-13B-lora - lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT - migtissera/Synthia-13B-v1.2 - Undi95/Trismegistus-lora <!-- description end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` ## Others Undi: If you want to support me, you can [here](https://ko-fi.com/undiai). IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek <!-- original-model-card end -->
backyardai/Iced-Lemon-Cookie-7B-GGUF
backyardai
2024-05-23T01:12:55Z
698
2
transformers
[ "transformers", "gguf", "mergekit", "merge", "base_model:FallenMerick/Iced-Lemon-Cookie-7B", "endpoints_compatible", "region:us" ]
null
2024-05-10T23:49:29Z
--- library_name: transformers tags: - mergekit - merge base_model: FallenMerick/Iced-Lemon-Cookie-7B model_name: Iced-Lemon-Cookie-7B-GGUF quantized_by: brooketh --- <img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;"> **<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>** <p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p> <p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p> *** # Iced Lemon Cookie 7B - **Creator:** [FallenMerick](https://huggingface.co/FallenMerick/) - **Original:** [Iced Lemon Cookie 7B](https://huggingface.co/FallenMerick/Iced-Lemon-Cookie-7B) - **Date Created:** 2024-05-09 - **Trained Context:** 32768 tokens - **Description:** Merge of LemonadeRP, IceLemonTeaRP, Kunoichi DPO, and Big L models. *** ## What is a GGUF? GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware. GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight. *** <img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;"> ## Backyard AI - Free, local AI chat application. - One-click installation on Mac and PC. - Automatically use GPU for maximum speed. - Built-in model manager. - High-quality character hub. - Zero-config desktop-to-mobile tethering. Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable. **Join us on [Discord](https://discord.gg/SyNN2vC9tQ)** ***
grimjim/Mistral-7B-Instruct-demi-merge-v0.3-7B
grimjim
2024-05-24T03:27:11Z
698
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:mistralai/Mistral-7B-v0.3", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-23T20:01:56Z
--- base_model: - mistralai/Mistral-7B-v0.3 - mistralai/Mistral-7B-Instruct-v0.3 library_name: transformers tags: - mergekit - merge license: apache-2.0 --- # Mistral-7B-Instruct-demi-merge-v0.3-7B This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). This is a blend of base and instruct models, intended to enable fine-tuning and/or merging (by anyone). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) * [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.3 layer_range: [0,32] - model: mistralai/Mistral-7B-v0.3 layer_range: [0,32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.3 parameters: t: - value: 0.5 dtype: bfloat16 ```
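A quick way to sanity-check the blend is to load it with transformers and apply the instruct-style chat template; a sketch, assuming the merge inherits the Mistral-7B-Instruct-v0.3 tokenizer and chat template (the prompt is illustrative, and instruction-following may be softer than in the pure instruct model):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "grimjim/Mistral-7B-Instruct-demi-merge-v0.3-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")  # device_map needs accelerate

# Assumes the instruct chat template carried over from the merge's base_model.
messages = [{"role": "user", "content": "Explain in one sentence what a SLERP merge does."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output = model.generate(input_ids, max_new_tokens=80, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```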
RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf
RichardErkhov
2024-06-05T05:16:37Z
698
0
null
[ "gguf", "arxiv:2209.02970", "region:us" ]
null
2024-06-05T05:04:35Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Wenzhong-GPT2-110M - GGUF - Model creator: https://huggingface.co/IDEA-CCNL/ - Original model: https://huggingface.co/IDEA-CCNL/Wenzhong-GPT2-110M/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Wenzhong-GPT2-110M.Q2_K.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q2_K.gguf) | Q2_K | 0.08GB | | [Wenzhong-GPT2-110M.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.IQ3_XS.gguf) | IQ3_XS | 0.08GB | | [Wenzhong-GPT2-110M.IQ3_S.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.IQ3_S.gguf) | IQ3_S | 0.08GB | | [Wenzhong-GPT2-110M.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q3_K_S.gguf) | Q3_K_S | 0.08GB | | [Wenzhong-GPT2-110M.IQ3_M.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.IQ3_M.gguf) | IQ3_M | 0.09GB | | [Wenzhong-GPT2-110M.Q3_K.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q3_K.gguf) | Q3_K | 0.09GB | | [Wenzhong-GPT2-110M.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q3_K_M.gguf) | Q3_K_M | 0.09GB | | [Wenzhong-GPT2-110M.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q3_K_L.gguf) | Q3_K_L | 0.1GB | | [Wenzhong-GPT2-110M.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.IQ4_XS.gguf) | IQ4_XS | 0.1GB | | [Wenzhong-GPT2-110M.Q4_0.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q4_0.gguf) | Q4_0 | 0.1GB | | [Wenzhong-GPT2-110M.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.IQ4_NL.gguf) | IQ4_NL | 0.1GB | | [Wenzhong-GPT2-110M.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q4_K_S.gguf) | Q4_K_S | 0.1GB | | [Wenzhong-GPT2-110M.Q4_K.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q4_K.gguf) | Q4_K | 0.11GB | | [Wenzhong-GPT2-110M.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q4_K_M.gguf) | Q4_K_M | 0.11GB | | [Wenzhong-GPT2-110M.Q4_1.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q4_1.gguf) | Q4_1 | 0.11GB | | [Wenzhong-GPT2-110M.Q5_0.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q5_0.gguf) | Q5_0 | 0.11GB | | [Wenzhong-GPT2-110M.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q5_K_S.gguf) | Q5_K_S | 0.11GB | | [Wenzhong-GPT2-110M.Q5_K.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q5_K.gguf) | Q5_K | 0.12GB | | [Wenzhong-GPT2-110M.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q5_K_M.gguf) | 
Q5_K_M | 0.12GB | | [Wenzhong-GPT2-110M.Q5_1.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q5_1.gguf) | Q5_1 | 0.12GB | | [Wenzhong-GPT2-110M.Q6_K.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q6_K.gguf) | Q6_K | 0.13GB | | [Wenzhong-GPT2-110M.Q8_0.gguf](https://huggingface.co/RichardErkhov/IDEA-CCNL_-_Wenzhong-GPT2-110M-gguf/blob/main/Wenzhong-GPT2-110M.Q8_0.gguf) | Q8_0 | 0.17GB | Original model description: --- language: - zh inference: parameters: temperature: 0.7 top_p: 0.6 repetition_penalty: 1.1 max_new_tokens: 128 num_return_sequences: 3 do_sample: true license: apache-2.0 tags: - generate - gpt2 widget: - 北京是中国的 - 西湖的景色 --- # Wenzhong-GPT2-110M - Main Page:[Fengshenbang](https://fengshenbang-lm.com/) - Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM) ## 简介 Brief Introduction 善于处理NLG任务,中文版的GPT2-Small。 Focused on handling NLG tasks, Chinese GPT2-Small. ## 模型分类 Model Taxonomy | 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra | | :----: | :----: | :----: | :----: | :----: | :----: | | 通用 General | 自然语言生成 NLG | 闻仲 Wenzhong | GPT2 | 110M | 中文 Chinese | ## 模型信息 Model Information 类似于Wenzhong2.0-GPT2-3.5B-chinese,我们实现了一个small版本的12层的Wenzhong-GPT2-110M,并且在悟道(300G版本)上面进行预训练。 Similar to Wenzhong2.0-GPT2-3.5B-chinese, we implement a small size Wenzhong-GPT2-110M with 12 layers, which is pre-trained on Wudao Corpus (300G version). ## 使用 Usage ### 加载模型 Loading Models ```python from transformers import GPT2Tokenizer,GPT2LMHeadModel hf_model_path = 'IDEA-CCNL/Wenzhong-GPT2-110M' tokenizer = GPT2Tokenizer.from_pretrained(hf_model_path) model = GPT2LMHeadModel.from_pretrained(hf_model_path) ``` ### 使用示例 Usage Examples ```python question = "北京是中国的" inputs = tokenizer(question,return_tensors='pt') generation_output = model.generate(**inputs, return_dict_in_generate=True, output_scores=True, max_length=150, # max_new_tokens=80, do_sample=True, top_p = 0.6, # num_beams=5, eos_token_id=50256, pad_token_id=0, num_return_sequences = 5) for idx,sentence in enumerate(generation_output.sequences): print('next sentence %d:\n'%idx, tokenizer.decode(sentence).split('<|endoftext|>')[0]) print('*'*40) ``` ## 引用 Citation 如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970): If you are using the resource for your work, please cite the our [paper](https://arxiv.org/abs/2209.02970): ```text @article{fengshenbang, author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen}, title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence}, journal = {CoRR}, volume = {abs/2209.02970}, year = {2022} } ``` 也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/): You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/): ```text @misc{Fengshenbang-LM, title={Fengshenbang-LM}, author={IDEA-CCNL}, year={2021}, howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}}, } ```
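The original card's Python example targets the full-precision checkpoint; for the GGUF files in the table above, something along these lines should work with llama-cpp-python, assuming your llama.cpp build is recent enough to support the GPT-2 architecture (file name and sampling settings are illustrative):

```python
from llama_cpp import Llama

# Any quant from the table above; smaller quants trade quality for size.
llm = Llama(model_path="Wenzhong-GPT2-110M.Q4_K_M.gguf", n_ctx=512)

output = llm("北京是中国的", max_tokens=64, temperature=0.7, top_p=0.6, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```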
kamii/MISTYwine-GGUF
kamii
2024-07-02T02:33:09Z
698
0
null
[ "gguf", "region:us" ]
null
2024-06-20T00:27:37Z
Entry not found
Pyjay/sentence-transformers-multilingual-snli-v2-500k
Pyjay
2021-08-05T21:42:55Z
697
1
sentence-transformers
[ "sentence-transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
2022-03-02T23:29:04Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # Pyjay/sentence-transformers-multilingual-snli-v2-500k This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('Pyjay/sentence-transformers-multilingual-snli-v2-500k') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('Pyjay/sentence-transformers-multilingual-snli-v2-500k') model = AutoModel.from_pretrained('Pyjay/sentence-transformers-multilingual-snli-v2-500k') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Pyjay/sentence-transformers-multilingual-snli-v2-500k) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 15604 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss` **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 180 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "callback": null, "epochs": 4, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 72, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
huggingtweets/d_q_nguyen
huggingtweets
2021-05-22T00:05:48Z
697
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://res.cloudinary.com/huggingtweets/image/upload/v1599893349/d_q_nguyen.jpg tags: - huggingtweets widget: - text: "My dream is" --- <link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css"> <style> @media (prefers-color-scheme: dark) { .prose { color: #E2E8F0 !important; } .prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; } } </style> <section class='prose'> <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('http://pbs.twimg.com/profile_images/1021749789598212096/eo8-km4g_400x400.jpg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">Dat Quoc Nguyen 🤖 AI Bot </div> <div style="font-size: 15px; color: #657786">@d_q_nguyen bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@d_q_nguyen's tweets](https://twitter.com/d_q_nguyen). <table style='border-width:0'> <thead style='border-width:0'> <tr style='border-width:0 0 1px 0; border-color: #CBD5E0'> <th style='border-width:0'>Data</th> <th style='border-width:0'>Quantity</th> </tr> </thead> <tbody style='border-width:0'> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Tweets downloaded</td> <td style='border-width:0'>477</td> </tr> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Retweets</td> <td style='border-width:0'>365</td> </tr> <tr style='border-width:0 0 1px 0; border-color: #E2E8F0'> <td style='border-width:0'>Short tweets</td> <td style='border-width:0'>5</td> </tr> <tr style='border-width:0'> <td style='border-width:0'>Tweets kept</td> <td style='border-width:0'>107</td> </tr> </tbody> </table> [Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/30izyjvz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @d_q_nguyen's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/2zyuag4u) for full transparency and reproducibility. At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/2zyuag4u/artifacts) is logged and versioned. 
## Intended uses & limitations ### How to use You can use this model directly with a pipeline for text generation: <pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline generator = pipeline(<span style="color:#FF9800">'text-generation'</span>, model=<span style="color:#FF9800">'huggingtweets/d_q_nguyen'</span>) generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre> ### Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* </section> [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/borisdayma) <section class='prose'> For more details, visit the project repository. </section> [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ConvLab/t5-small-nlu-multiwoz21
ConvLab
2022-11-25T11:13:22Z
697
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "t5-small", "natural language understanding", "conversational system", "task-oriented dialog", "en", "dataset:ConvLab/multiwoz21", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
2022-11-25T04:25:43Z
--- language: - en license: apache-2.0 tags: - t5-small - text2text-generation - natural language understanding - conversational system - task-oriented dialog datasets: - ConvLab/multiwoz21 metrics: - Dialog acts Accuracy - Dialog acts F1 model-index: - name: t5-small-nlu-multiwoz21 results: - task: type: text2text-generation name: natural language understanding dataset: type: ConvLab/multiwoz21 name: MultiWOZ 2.1 split: test revision: 5f55375edbfe0270c20bcf770751ad982c0e6614 metrics: - type: Dialog acts Accuracy value: 77.8 name: Accuracy - type: Dialog acts F1 value: 86.5 name: F1 widget: - text: "user: I would like a taxi from Saint John's college to Pizza Hut Fen Ditton." - text: "user: we are staying 6 people for 4 nights starting from Tuesday. i need the reference number" inference: parameters: max_length: 100 --- # t5-small-nlu-multiwoz21 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on [MultiWOZ 2.1](https://huggingface.co/datasets/ConvLab/multiwoz21). Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 128 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 256 - optimizer: Adafactor - lr_scheduler_type: linear - num_epochs: 10.0 ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
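The card defers usage to ConvLab-3; for a quick standalone check, a plain transformers sketch such as the one below should work. The input string is one of the widget examples from the card and `max_length=100` mirrors the card's inference settings; ConvLab-3 adds its own pre- and post-processing around this, so the output here is the raw serialized dialog acts.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "ConvLab/t5-small-nlu-multiwoz21"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# one of the widget examples from the card
text = "user: I would like a taxi from Saint John's college to Pizza Hut Fen Ditton."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)  # max_length matches the widget config
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # serialized dialog acts
```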
DeepPavlov/mbart-large-50-ru-persona-chat
DeepPavlov
2023-04-23T10:50:08Z
697
2
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "text-generation", "ru", "license:openrail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-04-23T09:45:03Z
--- license: openrail language: - ru pipeline_tag: text-generation --- --- language: - ru --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> # Model Details ## Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** DeepPavlov team - **Model type:** text generation - **Language(s) (NLP):** Russian - **License:** Openrail - **Finetuned from model:** [facebook/mbart-large-50](facebook/mbart-large-50) # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> ```python from typing import List, TypedDict from dataclasses import dataclass from itertools import chain from transformers import AutoTokenizer, AutoModelForSeq2SeqLM import torch @dataclass class H2PersonaChatHyperparametersV1: """ chat_history_pair_length: int - количество пар диалога с конца """ model_name: str = "facebook/bart-base" chat_history_pair_length: int = 7 persona_max_length: int = 14 chat_max_length: int = 25 debug_status: int = 0 class PersonaChatDatasetSampleV1(TypedDict): """ persona: List[str] - набор предложений фактов персоны history: List[str] - набор предложений истории переписки """ persona: List[str] history: List[str] sample_id: str class H2Seq2SeqInferenceSampleDictV1(TypedDict): input_ids: List[int] attention_mask: List[int] class H2Seq2SeqInferenceSampleDictV2(TypedDict): input_ids: torch.Tensor attention_mask: torch.Tensor def flat_list(list_of_lists: List[List]) -> List: return list(chain.from_iterable(list_of_lists)) class H2Seq2SeqInferencePersonaSampleV1: def __init__( self, dataset_sample: PersonaChatDatasetSampleV1, tokenizer: AutoTokenizer, hyperparameters: H2PersonaChatHyperparametersV1, ) -> None: self.dataset_sample = dataset_sample self.tokenizer = tokenizer self.hyperparameters = hyperparameters def add_spaces_after( self, items: List[str], ) -> List[str]: items = [item + " " for item in items] return items @property def bos_token_id(self): if "t5" in self.hyperparameters.model_name: return [] if self.tokenizer.bos_token_id is None: return [] return [self.tokenizer.bos_token_id] @property def eos_token_id(self): if self.tokenizer.eos_token_id is None: return [] return [self.tokenizer.eos_token_id] def add_sep_beetween(self, items: List[str], sep=" EOS ") -> List[str]: for i in range(1, len(items)): items[i] = sep + items[i] return items def add_spaces_between(self, items: List[str]) -> List[str]: items = self.add_spaces_after(items) items[-1] = items[-1].strip() return items def get_sample(self) -> H2Seq2SeqInferenceSampleDictV1: dialog_history = self.dataset_sample["history"] dialog_history = dialog_history[-self.hyperparameters.chat_history_pair_length * 2 - 1 :] dialog_history = self.add_sep_beetween(dialog_history) persona = self.dataset_sample["persona"] persona = self.add_sep_beetween( persona, sep=" ", ) KNOWLEDGE_IDS = self.tokenizer.encode( " [KNOWLEDGE] ", add_special_tokens=False, ) CONTEXT_IDS = self.tokenizer.encode( " [CONTEXT]", add_special_tokens=False, ) encoded_history = self.tokenizer.batch_encode_plus( dialog_history, add_special_tokens=False, truncation=True, max_length=self.hyperparameters.chat_max_length, ) encoded_history = flat_list(encoded_history["input_ids"]) encoded_persona = self.tokenizer.batch_encode_plus( persona, add_special_tokens=False, truncation=True, 
max_length=self.hyperparameters.persona_max_length, ) encoded_persona = flat_list(encoded_persona["input_ids"]) input_ids = [ *self.bos_token_id, *CONTEXT_IDS, *encoded_history, *KNOWLEDGE_IDS, *encoded_persona, *self.eos_token_id, ] attention_mask = [1] * len(input_ids) return H2Seq2SeqInferenceSampleDictV1( input_ids=input_ids, attention_mask=attention_mask, ) class DialogBotV1: def __init__( self, model: AutoModelForSeq2SeqLM, tokenizer: AutoTokenizer, hyperparameters: H2PersonaChatHyperparametersV1, history: List[str] = None, persona: List[str] = None, device: str = "cuda", shuffle_persona: bool = True, ): self.model = model self.tokenizer = tokenizer self.hyperparameters = hyperparameters self.device = device self.shuffle_persona = shuffle_persona self.debug_status = hyperparameters.debug_status if history is None: self.history = [] self.history = history if persona is None: self.persona = [] self.persona = persona def _get_sample( self, persona: List[str], history: List[str], ) -> H2Seq2SeqInferenceSampleDictV1: dataset_sample = PersonaChatDatasetSampleV1( persona=persona, history=history, ) sample = H2Seq2SeqInferencePersonaSampleV1( tokenizer=self.tokenizer, hyperparameters=self.hyperparameters, dataset_sample=dataset_sample, ) sample = sample.get_sample() print(self.tokenizer.decode(sample['input_ids'])) for key in sample.keys(): sample[key] = torch.tensor(sample[key]).unsqueeze(0).to(self.device) return sample def next_response( self, **generation_params, ) -> str: """ делает предсказание на основе текущей истории и персоны """ sample = self._get_sample( persona=self.persona, history=self.history, ) answer = self.generate_response( sample, **generation_params, ) answer = self.tokenizer.batch_decode( answer, skip_special_tokens=True, ) self.history.append(answer[0]) return answer[0] def generate_response( self, sample: H2Seq2SeqInferenceSampleDictV1, **generation_params, ): """ generation_params - https://huggingface.co/docs/transformers/v4.24.0/en/main_classes/text_generation """ with torch.no_grad(): return self.model.generate( **sample, **generation_params, ) # facebook/mbart-large-50 PRETRAINED_MODEL_NAME_OR_PATH = "DeepPavlov/mbart-large-50-ru-persona-chat" PAIR_DIALOG_HISTORY_LENGTH = 2 # CHAT_MAX_LENGTH for single sentence CHAT_MAX_LENGTH = 25 # PERSONA_MAX_LENGTH for single sentence PERSONA_MAX_LENGTH = 19 device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = AutoModelForSeq2SeqLM.from_pretrained(PRETRAINED_MODEL_NAME_OR_PATH) model.to(device) model.eval() tokenizer = AutoTokenizer.from_pretrained(PRETRAINED_MODEL_NAME_OR_PATH) if torch.cuda.is_available(): model.half() hyperparameters = H2PersonaChatHyperparametersV1( chat_history_pair_length=PAIR_DIALOG_HISTORY_LENGTH, persona_max_length=PERSONA_MAX_LENGTH, chat_max_length=CHAT_MAX_LENGTH, model_name=PRETRAINED_MODEL_NAME_OR_PATH, ) persona = [ "Я люблю играть с милыми песиками", "Я ненавижу лук и броколли" ] history = [ "Привет. Ты любишь лук?" ] persona_bot = DialogBotV1( model=model, tokenizer=tokenizer, hyperparameters=hyperparameters, history=history, persona=persona, device=device, ) GENERATION_PARAMS = { "max_new_tokens": 60, "penalty_alpha": 0.15, "top_k": 10 } response = persona_bot.next_response( **GENERATION_PARAMS, ) print(response) ``` ## Recommendations # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. 
--> - [Data Source | RU Persona Chat](https://toloka.ai/ru/datasets/#nlp) [More Information Needed] ### Preprocessing - Initial data was split by this script: ```python import pandas as pd def ru_persona_chat_dataset_tranformer_v1( initial_dataset_path: str, output_folder: str, ) -> None: """ example ru_persona_chat_dataset_tranformer_v1( initial_dataset_path="./datasets/ru_persona_chat/dialogues.tsv", output_folder="./datasets/ru_persona_chat", ) """ assert initial_dataset_path is not None, "initial_dataset_path is None" assert output_folder is not None, "output_folder is None" dataset = pd.read_csv(initial_dataset_path, sep="\t") split_ratio = int(len(dataset) * 0.95) train_dataset = dataset[:split_ratio] valid_dataset = dataset[split_ratio:] print(f"Dataset lengths: train {len(train_dataset)}, valid {len(valid_dataset)}") # save csv files train_dataset.to_csv(output_folder + "/train.csv", index=False) valid_dataset.to_csv(output_folder + "/valid.csv", index=False) print("Datasets saved.") ``` # Evaluation ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> - BLEU - ChrF - RougeL
timm/edgenext_base.usi_in1k
timm
2023-04-23T22:42:59Z
697
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2206.10589", "arxiv:2204.03475", "license:mit", "region:us" ]
image-classification
2023-04-23T22:42:43Z
--- tags: - image-classification - timm library_name: timm license: mit datasets: - imagenet-1k --- # Model card for edgenext_base.usi_in1k An EdgeNeXt image classification model. Trained on ImageNet-1k by paper authors using distillation (`USI` as per `Solving ImageNet`). ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 18.5 - GMACs: 3.8 - Activations (M): 15.6 - Image size: train = 256 x 256, test = 320 x 320 - **Papers:** - EdgeNeXt: Efficiently Amalgamated CNN-Transformer Architecture for Mobile Vision Applications: https://arxiv.org/abs/2206.10589 - Solving ImageNet: a Unified Scheme for Training any Backbone to Top Results: https://arxiv.org/abs/2204.03475 - **Dataset:** ImageNet-1k - **Original:** https://github.com/mmaaz60/EdgeNeXt ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('edgenext_base.usi_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'edgenext_base.usi_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 80, 64, 64]) # torch.Size([1, 160, 32, 32]) # torch.Size([1, 288, 16, 16]) # torch.Size([1, 584, 8, 8]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'edgenext_base.usi_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 584, 8, 8) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Citation ```bibtex @inproceedings{Maaz2022EdgeNeXt, title={EdgeNeXt: Efficiently Amalgamated CNN-Transformer Architecture for Mobile Vision Applications}, author={Muhammad Maaz and Abdelrahman Shaker and Hisham Cholakkal and Salman Khan and Syed Waqas Zamir and Rao 
Muhammad Anwer and Fahad Shahbaz Khan}, booktitle={International Workshop on Computational Aspects of Deep Learning at 17th European Conference on Computer Vision (CADL2022)}, year={2022}, organization={Springer} } ``` ```bibtex @misc{https://doi.org/10.48550/arxiv.2204.03475, doi = {10.48550/ARXIV.2204.03475}, url = {https://arxiv.org/abs/2204.03475}, author = {Ridnik, Tal and Lawen, Hussam and Ben-Baruch, Emanuel and Noy, Asaf}, keywords = {Computer Vision and Pattern Recognition (cs.CV), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Solving ImageNet: a Unified Scheme for Training any Backbone to Top Results}, publisher = {arXiv}, year = {2022}, } ```
timm/caformer_m36.sail_in1k
timm
2023-05-05T05:43:06Z
697
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2210.13452", "license:apache-2.0", "region:us" ]
image-classification
2023-05-05T05:42:11Z
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for caformer_m36.sail_in1k A CAFormer (a MetaFormer) image classification model. Trained on ImageNet-1k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 56.2 - GMACs: 13.3 - Activations (M): 50.5 - Image size: 224 x 224 - **Papers:** - Metaformer baselines for vision: https://arxiv.org/abs/2210.13452 - **Original:** https://github.com/sail-sg/metaformer - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('caformer_m36.sail_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'caformer_m36.sail_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 96, 56, 56]) # torch.Size([1, 192, 28, 28]) # torch.Size([1, 384, 14, 14]) # torch.Size([1, 576, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'caformer_m36.sail_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 576, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @article{yu2022metaformer_baselines, title={Metaformer baselines for vision}, author={Yu, Weihao and Si, Chenyang and Zhou, Pan and Luo, Mi and Zhou, Yichen and Feng, Jiashi and Yan, Shuicheng and Wang, Xinchao}, journal={arXiv preprint arXiv:2210.13452}, year={2022} } ```
cckevinn/SeeClick-mind2web
cckevinn
2024-02-16T17:43:50Z
697
0
transformers
[ "transformers", "safetensors", "qwen", "text-generation", "custom_code", "autotrain_compatible", "region:us" ]
text-generation
2024-02-16T15:09:29Z
Entry not found
NorGLM/NorGPT-3B
NorGLM
2024-03-08T14:57:21Z
697
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "no", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-08T14:37:22Z
--- license: cc-by-nc-sa-4.0 language: - 'no' --- Generative Pretrained Transformer with 3 billion parameters for Norwegian. It belongs to NorGLM, a suite of pretrained Norwegian Generative Language Models (NorGLMs). The model is based on the GPT2 architecture. NorGLM can be used for non-commercial purposes. All models in NorGLM are trained on 200G of data, nearly 25B tokens, including Norwegian, Danish, Swedish, German and English. More training and evaluation details and papers will come soon!
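Since the card does not yet include usage code, here is a minimal generation sketch using the standard transformers causal-LM interface; the Norwegian prompt is illustrative and not taken from the card.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "NorGLM/NorGPT-3B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Oslo er hovedstaden i"  # illustrative prompt, not from the card
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```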
lex-hue/Delexa-Instruct-V0.1-7b
lex-hue
2024-04-28T19:55:44Z
697
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "doi:10.57967/hf/2152", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-17T12:10:29Z
--- license: apache-2.0 model-index: - name: Delexa-Instruct-V0.1-7b results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 66.38 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-Instruct-V0.1-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.9 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-Instruct-V0.1-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 63.79 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-Instruct-V0.1-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 61.73 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-Instruct-V0.1-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 78.37 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-Instruct-V0.1-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 62.93 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-Instruct-V0.1-7b name: Open LLM Leaderboard --- ## Delexa-V0.1-Instruct-7b: Our Newest and Best Model Yet! We are excited to announce the release of Delexa-V0.1-Instruct-7b, our newest and best model yet! Delexa-V0.1-Instruct-7b has shown excellent performance on a variety of tasks, and we are confident that it will be a valuable asset to the research community. ### Eval Results Delexa-V0.1-Instruct-7b was evaluated on a dataset of question-answer pairs. The model was given a single question and three different answer choices, and it was tasked with selecting the best answer. Delexa-V0.1-Instruct-7b achieved an average score of 8.27 on this task. Here is a table showing the detailed eval results: | Model | Turn 1 | Turn 2 | Average | |---|---|---|---| | gpt-4 | 8.95625 | 9.0250 | 8.990625 | | Delexa-V0.1-Instruct-7b | 8.57500 | 7.9500 | 8.268750 | | claude-v1 | 8.15000 | 7.6500 | 7.900000 | | gpt-3.5-turbo | 8.07500 | 7.8125 | 7.943750 | | vicuna-13b-v1.3 | 6.81250 | 5.9625 | 6.387500 | | palm-2-chat-bison-001 | 6.71250 | 6.0875 | 6.400000 | ### Technique One of the key factors that contributed to Delexa-V0.1-Instruct-7b's success is the technique of training the model with one question and three different answers. This technique allows the model to take into account different perspectives and viewpoints, which leads to more robust and accurate results. 
### Future Work We are excited to continue working on Delexa and to see how it can be further improved. We are currently working on an Instruct model, which is a type of model that can be fine-tuned on specific tasks. We believe that Instruct models have the potential to be even more powerful than Delexa-V0.1-7b, and we are eager to see the results of our ongoing research. We would like to thank the entire team for their hard work on Delexa-V0.1-Instruct-7b. We are confident that this model will be a valuable asset to the research community. ### Guardrails: This Model allows 18+ content and lewd content, but it wont let any illegal content through (unless you jailbreak it). ### Support Our Work and Join our Community! [Our Patreon](https://patreon.com/Lex_Hue?utm_medium=unknown&utm_source=join_link&utm_campaign=creatorshare_creator&utm_content=copyLink) [Our Twitter](https://twitter.com/lex_hue) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lex-hue__Delexa-Instruct-V0.1-7b) | Metric |Value| |---------------------------------|----:| |Avg. |69.85| |AI2 Reasoning Challenge (25-Shot)|66.38| |HellaSwag (10-Shot) |85.90| |MMLU (5-Shot) |63.79| |TruthfulQA (0-shot) |61.73| |Winogrande (5-shot) |78.37| |GSM8k (5-shot) |62.93|
qwp4w3hyb/Phi-3-mini-4k-instruct-iMat-GGUF
qwp4w3hyb
2024-05-22T20:18:08Z
697
1
transformers
[ "transformers", "gguf", "phi3", "text-generation", "nlp", "code", "microsoft", "phi", "phi-3", "imatrix", "importance matrix", "conversational", "custom_code", "en", "base_model:microsoft/Phi-3-mini-4k-instruct", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-04-23T21:46:03Z
--- base_model: microsoft/Phi-3-mini-4k-instruct license: mit license_link: LICENSE language: - en pipeline_tag: text-generation tags: - nlp - code - microsoft - phi - phi-3 - gguf - imatrix - importance matrix --- # Quant Infos ## Includes latest bpe tokenizer fixes 🎉 - Updated for latest bpe pre-tokenizer fixes https://github.com/ggerganov/llama.cpp/pull/6920 - quants done with an importance matrix for improved quantization loss - K & IQ quants in basically all variants from Q6_K down to IQ1_S - fixed end token for instruct mode (<|eot_id|>[128009]) - Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [f4ab2a41476600a98067a9474ea8f9e6db41bcfa](https://github.com/ggerganov/llama.cpp/commit/f4ab2a41476600a98067a9474ea8f9e6db41bcfa) (master from 2024-04-29) - Imatrtix generated with [this](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) dataset. ``` ./imatrix -c 512 -m $model_name-f16.gguf -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat ``` # Original Model Card ## Model Summary The Phi-3-Mini-4K-Instruct is a 3.8B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties. The model belongs to the Phi-3 family with the Mini version in two variants [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) which is the context length (in tokens) that it can support. The model has underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization for the instruction following and safety measures. When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters. Resources and Technical Documentation: + [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april) + [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) + [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) + Phi-3 GGUF: [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf) + Phi-3 ONNX: [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx) ## Intended Uses **Primary use cases** The model is intended for commercial and research use in English. The model provides uses for applications which require: 1) Memory/compute constrained environments 2) Latency bound scenarios 3) Strong reasoning (especially code, math and logic) Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. **Use case considerations** Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. 
## How to Use Phi-3 Mini-4K-Instruct has been integrated in the development version (4.40.0) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following: * When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function. * Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source. The current `transformers` version can be verified with: `pip list | grep transformers`. Phi-3 Mini-4K-Instruct is also available in [HuggingChat](https://aka.ms/try-phi3-hf-chat). ### Chat Format Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows. You can provide the prompt as a question with a generic template as follow: ```markdown <|user|>\nQuestion <|end|>\n<|assistant|> ``` For example: ```markdown <|system|> You are a helpful AI assistant.<|end|> <|user|> How to explain Internet for a medieval knight?<|end|> <|assistant|> ``` where the model generates the text after `<|assistant|>` . In case of few-shots prompt, the prompt can be formatted as the following: ```markdown <|system|> You are a helpful AI assistant.<|end|> <|user|> I am going to Paris, what should I see?<|end|> <|assistant|> Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|> <|user|> What is so great about #1?<|end|> <|assistant|> ``` ### Sample inference code This code snippets show how to get quickly started with running the model on a GPU: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model = AutoModelForCausalLM.from_pretrained( "microsoft/Phi-3-mini-4k-instruct", device_map="cuda", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct") messages = [ {"role": "system", "content": "You are a helpful digital assistant. Please provide safe, ethical and accurate information to the user."}, {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. 
Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."}, {"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"}, ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ``` ## Responsible AI Considerations Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: + Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). 
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model * Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines. * Inputs: Text. It is best suited for prompts using chat format. * Context length: 4K tokens * GPUs: 512 H100-80G * Training time: 7 days * Training data: 3.3T tokens * Outputs: Generated text in response to the input * Dates: Our models were trained between February and April 2024 * Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models. ### Datasets Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of 1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); 3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness. ### Fine-tuning A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/sample_finetune.py). ## Benchmarks We report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5. All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation. As is now standard, we use few-shot prompts to evaluate the models, at temperature 0. The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3. More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model. The number of k–shot examples is listed per-benchmark. 
| | Phi-3-Mini-4K-In<br>3.8b | Phi-3-Small<br>7b (preview) | Phi-3-Medium<br>14b (preview) | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 | |---|---|---|---|---|---|---|---|---|---| | MMLU <br>5-Shot | 68.8 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 | | HellaSwag <br> 5-Shot | 76.7 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 | | ANLI <br> 7-Shot | 52.8 | 55.0 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 | | GSM-8K <br> 0-Shot; CoT | 82.5 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 | | MedQA <br> 2-Shot | 53.8 | 58.2 | 69.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 | | AGIEval <br> 0-Shot | 37.5 | 45.0 | 49.7 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 | | TriviaQA <br> 5-Shot | 64.0 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 | | Arc-C <br> 10-Shot | 84.9 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 | | Arc-E <br> 10-Shot | 94.6 | 97.1 | 98.0 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 | | PIQA <br> 5-Shot | 84.2 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 | | SociQA <br> 5-Shot | 76.6 | 79.0 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 | | BigBench-Hard <br> 0-Shot | 71.7 | 75.0 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 | | WinoGrande <br> 5-Shot | 70.8 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65 | 62.0 | 68.8 | | OpenBookQA <br> 10-Shot | 83.2 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 | | BoolQ <br> 0-Shot | 77.6 | 82.9 | 86.5 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 | | CommonSenseQA <br> 10-Shot | 80.2 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 | | TruthfulQA <br> 10-Shot | 65.0 | 68.1 | 74.8 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 | | HumanEval <br> 0-Shot | 59.1 | 59.1 | 54.7 | 59.0 | 28.0 | 34.1 | 60.4 | 37.8 | 62.2 | | MBPP <br> 3-Shot | 53.8 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 | ## Software * [PyTorch](https://github.com/pytorch/pytorch) * [DeepSpeed](https://github.com/microsoft/DeepSpeed) * [Transformers](https://github.com/huggingface/transformers) * [Flash-Attention](https://github.com/HazyResearch/flash-attention) ## Hardware Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types: * NVIDIA A100 * NVIDIA A6000 * NVIDIA H100 If you want to run the model on: * NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager" * CPU: use the **GGUF** quantized models [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf) + Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx) ## Cross Platform Support ONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model [here](https://aka.ms/phi3-mini-4k-instruct-onnx). Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. Along with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile. Here are some of the optimized configurations we have added: 1. 
ONNX models for int4 DML: Quantized to int4 via AWQ 2. ONNX model for fp16 CUDA 3. ONNX model for int4 CUDA: Quantized to int4 via RTN 4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN ## License The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-4k/resolve/main/LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
okaris/face-controlnet-xl
okaris
2024-04-24T08:30:30Z
697
0
diffusers
[ "diffusers", "safetensors", "region:us" ]
null
2024-04-24T08:28:29Z
Entry not found
pyp1/VoiceCraft_330M_TTSEnhanced
pyp1
2024-04-25T16:47:32Z
697
1
voicecraft
[ "voicecraft", "safetensors", "text-to-speech", "pytorch_model_hub_mixin", "model_hub_mixin", "region:us" ]
text-to-speech
2024-04-25T16:46:55Z
--- library_name: voicecraft tags: - text-to-speech - pytorch_model_hub_mixin - model_hub_mixin repo_url: https://github.com/jasonppy/VoiceCraft --- This model has been pushed to the Hub using **voicecraft**: - Repo: https://github.com/jasonppy/VoiceCraft - Docs: [More Information Needed]
lodrick-the-lafted/Olethros-8B
lodrick-the-lafted
2024-04-29T08:16:25Z
697
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:lodrick-the-lafted/OpusStories", "dataset:lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-3.3K", "dataset:lodrick-the-lafted/Samantha-Opus", "dataset:lodrick-the-lafted/Worldsim-Opus", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-28T11:54:54Z
--- license: llama3 datasets: - lodrick-the-lafted/OpusStories - lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-3.3K - lodrick-the-lafted/Samantha-Opus - lodrick-the-lafted/Worldsim-Opus --- <img src=https://huggingface.co/lodrick-the-lafted/Olethros-8B/resolve/main/olethros.png> # Olethros-8B L3-8b-Instruct tuned on roughly 6000 Opus generations in the hopes of adding a bit of sovl. <br /> <br /> <br /> <br /> # Quants GGUF, Exl2 and AWQ available right now. | Type | Misc | Author | | ----- | ----- | ----- | [GGUF](https://huggingface.co/mradermacher/Olethros-8B-GGUF)| Static GGUF Quants | mradermacher | [AWQ](https://huggingface.co/lodrick-the-lafted/Olethros-8B-AWQ)| | lodrick | [exl2](https://huggingface.co/blockblockblock/Olethros-8B-bpw2.25-exl2)| 2.25bpw | blockblockblock | [exl2](https://huggingface.co/blockblockblock/Olethros-8B-bpw2.5-exl2)| 2.5bpw | blockblockblock | [exl2](https://huggingface.co/blockblockblock/Olethros-8B-bpw3-exl2)| 3.0bpw | blockblockblock | [exl2](https://huggingface.co/blockblockblock/Olethros-8B-bpw3.5-exl2)| 3.5bpw | blockblockblock | [exl2](https://huggingface.co/blockblockblock/Olethros-8B-bpw3.7-exl2)| 3.7bpw | blockblockblock | [exl2](https://huggingface.co/blockblockblock/Olethros-8B-bpw4-exl2)| 4.0bpw | blockblockblock | [exl2](https://huggingface.co/blockblockblock/Olethros-8B-bpw4.2-exl2)| 4.2bpw | blockblockblock | [exl2](https://huggingface.co/blockblockblock/Olethros-8B-bpw4.4-exl2)| 4.4bpw | blockblockblock | [exl2](https://huggingface.co/blockblockblock/Olethros-8B-bpw4.6-exl2)| 4.6bpw | blockblockblock | [exl2](https://huggingface.co/blockblockblock/Olethros-8B-bpw4.8-exl2)| 4.8bpw | blockblockblock | [exl2](https://huggingface.co/blockblockblock/Olethros-8B-bpw5-exl2)| 5.0bpw | blockblockblock | [exl2](https://huggingface.co/blockblockblock/Olethros-8B-bpw5.5-exl2)| 5.5bpw | blockblockblock | [exl2](https://huggingface.co/blockblockblock/Olethros-8B-bpw6-exl2)| 6.0bpw | blockblockblock |
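The table above covers quantized conversions; for the full-precision weights, a minimal sketch with the transformers pipeline might look like the following. It assumes the repo inherits the Llama-3-Instruct chat template from the base model and that your transformers version accepts chat-style message lists directly.

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="lodrick-the-lafted/Olethros-8B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [{"role": "user", "content": "Write a two-sentence opening for a mystery story."}]
out = pipe(messages, max_new_tokens=200)
print(out[0]["generated_text"][-1]["content"])  # the assistant's reply
```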
mmnga/codegemma-1.1-2b-gguf
mmnga
2024-05-04T11:49:53Z
697
1
null
[ "gguf", "en", "ja", "dataset:kunishou/amenokaku-code-instruct", "dataset:TokenBender/code_instructions_122k_alpaca_style", "license:gemma", "region:us" ]
null
2024-05-04T09:56:24Z
--- license: gemma language: - en - ja datasets: - kunishou/amenokaku-code-instruct - TokenBender/code_instructions_122k_alpaca_style --- # codegemma-1.1-2b-gguf A GGUF-format conversion of [codegemma-1.1-2b released by Google](https://huggingface.co/google/codegemma-1.1-2b). The imatrix data was created using 1000 samples each from [kunishou/amenokaku-code-instruct](https://huggingface.co/datasets/kunishou/amenokaku-code-instruct) and [TokenBender/code_instructions_122k_alpaca_style](https://huggingface.co/datasets/TokenBender/code_instructions_122k_alpaca_style). ## Other models [mmnga/codegemma-1.1-7b-it-gguf](https://huggingface.co/mmnga/codegemma-1.1-7b-it-gguf) [mmnga/codegemma-1.1-2b-gguf](https://huggingface.co/mmnga/codegemma-1.1-2b-gguf) [mmnga/gemma-2b-it-gguf](https://huggingface.co/mmnga/gemma-2b-it-gguf) [mmnga/gemma-7b-it-gguf](https://huggingface.co/mmnga/gemma-7b-it-gguf) [mmnga/gemma-1.1-7b-it-gguf](https://huggingface.co/mmnga/gemma-1.1-7b-it-gguf) [mmnga/codegemma-7b-it-gguf](https://huggingface.co/mmnga/codegemma-7b-it-gguf) ## Usage ``` git clone https://github.com/ggerganov/llama.cpp.git cd llama.cpp make -j ./main -m 'codegemma-1.1-2b-Q4_0.gguf' -n 128 -p 'Write Hello World.' ```
mradermacher/Qwen2-72B-GGUF
mradermacher
2024-06-07T08:26:10Z
697
1
transformers
[ "transformers", "gguf", "pretrained", "en", "base_model:Qwen/Qwen2-72B", "license:other", "endpoints_compatible", "region:us" ]
null
2024-06-07T02:37:30Z
--- base_model: Qwen/Qwen2-72B language: - en library_name: transformers license: other license_link: https://huggingface.co/Qwen/Qwen2-72B/blob/main/LICENSE license_name: tongyi-qianwen quantized_by: mradermacher tags: - pretrained --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Qwen/Qwen2-72B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2-72B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-GGUF/resolve/main/Qwen2-72B.Q2_K.gguf) | Q2_K | 29.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-GGUF/resolve/main/Qwen2-72B.IQ3_XS.gguf) | IQ3_XS | 32.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-GGUF/resolve/main/Qwen2-72B.IQ3_S.gguf) | IQ3_S | 34.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-GGUF/resolve/main/Qwen2-72B.Q3_K_S.gguf) | Q3_K_S | 34.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-GGUF/resolve/main/Qwen2-72B.IQ3_M.gguf) | IQ3_M | 35.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-GGUF/resolve/main/Qwen2-72B.Q3_K_M.gguf) | Q3_K_M | 37.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-GGUF/resolve/main/Qwen2-72B.Q3_K_L.gguf) | Q3_K_L | 39.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-GGUF/resolve/main/Qwen2-72B.IQ4_XS.gguf) | IQ4_XS | 40.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-GGUF/resolve/main/Qwen2-72B.Q4_K_S.gguf) | Q4_K_S | 44.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2-72B-GGUF/resolve/main/Qwen2-72B.Q4_K_M.gguf) | Q4_K_M | 47.5 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Qwen2-72B-GGUF/resolve/main/Qwen2-72B.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Qwen2-72B-GGUF/resolve/main/Qwen2-72B.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | | | [PART 1](https://huggingface.co/mradermacher/Qwen2-72B-GGUF/resolve/main/Qwen2-72B.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Qwen2-72B-GGUF/resolve/main/Qwen2-72B.Q5_K_M.gguf.part2of2) | Q5_K_M | 54.5 | | | [PART 1](https://huggingface.co/mradermacher/Qwen2-72B-GGUF/resolve/main/Qwen2-72B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Qwen2-72B-GGUF/resolve/main/Qwen2-72B.Q6_K.gguf.part2of2) | Q6_K | 64.4 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Qwen2-72B-GGUF/resolve/main/Qwen2-72B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Qwen2-72B-GGUF/resolve/main/Qwen2-72B.Q8_0.gguf.part2of2) | Q8_0 | 77.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have 
and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
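If you just want a quick, programmatic starting point, here is a minimal sketch added for illustration (it is not part of the upstream card): it joins the two parts of the Q6_K quant and loads the result with llama-cpp-python. The part filenames follow the table above; the context length and GPU-offload setting are assumptions you should adjust.

```python
# Illustrative sketch: merge a split GGUF quant and run it with llama-cpp-python.
# Filenames follow the Q6_K entries in the table above; other settings are assumptions.
import shutil

from llama_cpp import Llama  # pip install llama-cpp-python

parts = [
    "Qwen2-72B.Q6_K.gguf.part1of2",
    "Qwen2-72B.Q6_K.gguf.part2of2",
]

# Concatenate the parts, in order, into a single GGUF file.
with open("Qwen2-72B.Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, merged)

llm = Llama(
    model_path="Qwen2-72B.Q6_K.gguf",
    n_ctx=4096,       # assumed context length; raise it if you have the RAM
    n_gpu_layers=-1,  # assumed: offload all layers; lower this for small GPUs
)
print(llm("Qwen2 is a large language model that", max_tokens=64)["choices"][0]["text"])
```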
bjaidi/Phi-3-medium-128k-instruct-awq
bjaidi
2024-06-20T17:16:22Z
697
1
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "autotrain_compatible", "endpoints_compatible", "4-bit", "awq", "region:us" ]
text-generation
2024-06-20T04:45:43Z
Generated using autoawq: `pip install git+https://github.com/casper-hansen/AutoAWQ.git@f0321eedca887c12680553fc561d176b03b1b9a5 flash_attn`

The following code was used for generation:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = 'models/Phi-3-medium-128k-instruct'
quant_path = 'models/Phi-3-medium-128k-instruct-awq'
quant_config = { "zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM" }

# Load model
model = AutoAWQForCausalLM.from_pretrained(model_path, **{"device_map": "auto"})
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Quantize
model.quantize(tokenizer, quant_config=quant_config)

# Save quantized model
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```

Original model here: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct

---
license: mit
---
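As a usage note added here for illustration (it is not part of the original card): once quantized, the checkpoint can typically be loaded back through `transformers` with `autoawq` installed, like any other causal LM. The prompt and generation settings below are assumptions.

```python
# Illustrative inference sketch; the chat content and generation settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bjaidi/Phi-3-medium-128k-instruct-awq"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    trust_remote_code=True,  # Phi-3 uses custom modeling code
)

messages = [{"role": "user", "content": "Summarize what AWQ quantization does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```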
ethzanalytics/distilgpt2-tiny-conversational
ethzanalytics
2023-05-03T13:48:11Z
696
0
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "chatbot", "dialogue", "distilgpt2", "ai-msgbot", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - text-generation - chatbot - dialogue - distilgpt2 - gpt2 - ai-msgbot widget: - text: "I know you're tired, but can we go for another walk this evening?\nperson beta:\n\n" example_title: "walk" - text: "Have you done anything exciting lately?\nperson beta:\n\n" example_title: "activities" - text: "hey - do you have a favorite grocery store around here?\nperson beta:\n\n" example_title: "grocery" - text: "Can you take me for dinner somewhere nice this time?\nperson beta:\n\n" example_title: "dinner" - text: "What's your favorite form of social media?\nperson beta:\n\n" example_title: "social media" - text: "Hi, how are you?\nperson beta:\n\n" example_title: "greeting" - text: "I am the best; my sister is the worst. What am I?\nperson beta:\n\n" example_title: "sister" - text: "What do you call an alligator who's just had surgery to remove his left arm?\nperson beta:\n\n" example_title: "alligator" - text: "A man walks into a bar and asks for a drink. The bartender asks for $10, and he pays him $1. What did he pay him with?\nperson beta:\n\n" example_title: "dollar" - text: "What did I say was in the mailbox when it was actually in the cabinet?\nperson beta:\n\n" example_title: "mailbox" - text: "My friend says that she knows every language, but she doesn't speak any of them.. what's wrong with her?\nperson beta:\n\n" example_title: "language" inference: parameters: min_length: 2 max_length: 64 length_penalty: 0.7 no_repeat_ngram_size: 2 do_sample: True top_p: 0.95 top_k: 20 temperature: 0.3 repetition_penalty: 3.5 --- # distilgpt2-tiny-conversational This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on a parsed version of Wizard of Wikipedia. Persona alpha/beta framework designed for use with [ai-msgbot](https://github.com/pszemraj/ai-msgbot). It achieves the following results on the evaluation set: - Loss: 2.2461 ## Model description - a basic dialogue model for conversation. It can be used as a chatbot. - check out a [simple demo here](https://huggingface.co/spaces/ethzanalytics/dialogue-demo) ## Intended uses & limitations - usage is designed for integrating with this repo: [ai-msgbot](https://github.com/pszemraj/ai-msgbot) - the main specific information to know is that the model generates whole conversations between two entities, `person alpha` and `person beta`. These entity names are used functionally as custom `<bos>` tokens to extract when one response ends and another begins. 
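To make the `person alpha` / `person beta` convention concrete, here is a short generation sketch added for illustration (it is not part of the original card). The sampling settings mirror the widget defaults in the metadata above; the way the reply is trimmed afterwards is an assumption about how ai-msgbot splits turns.

```python
# Illustrative sketch of prompting with the person alpha / person beta convention.
from transformers import pipeline

generator = pipeline("text-generation", model="ethzanalytics/distilgpt2-tiny-conversational")

prompt = "Have you done anything exciting lately?\nperson beta:\n\n"
result = generator(
    prompt,
    max_length=64,
    do_sample=True,
    top_p=0.95,
    top_k=20,
    temperature=0.3,
    no_repeat_ngram_size=2,
    repetition_penalty=3.5,
)[0]["generated_text"]

# Keep only person beta's reply: drop the prompt and cut at the next speaker tag (assumed post-processing).
reply = result[len(prompt):].split("person alpha:")[0].strip()
print(reply)
```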
## Training and evaluation data

- [wizard of Wikipedia](https://parl.ai/projects/wizard_of_wikipedia/) parsed, from parlAI

## Training procedure

- deepspeed + huggingface trainer, an example notebook is in [ai-msgbot](https://github.com/pszemraj/ai-msgbot)

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 30

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log        | 1.0   | 418   | 2.7793          |
| 2.9952        | 2.0   | 836   | 2.6914          |
| 2.7684        | 3.0   | 1254  | 2.6348          |
| 2.685         | 4.0   | 1672  | 2.5938          |
| 2.6243        | 5.0   | 2090  | 2.5625          |
| 2.5816        | 6.0   | 2508  | 2.5332          |
| 2.5816        | 7.0   | 2926  | 2.5098          |
| 2.545         | 8.0   | 3344  | 2.4902          |
| 2.5083        | 9.0   | 3762  | 2.4707          |
| 2.4793        | 10.0  | 4180  | 2.4551          |
| 2.4531        | 11.0  | 4598  | 2.4395          |
| 2.4269        | 12.0  | 5016  | 2.4238          |
| 2.4269        | 13.0  | 5434  | 2.4102          |
| 2.4051        | 14.0  | 5852  | 2.3945          |
| 2.3777        | 15.0  | 6270  | 2.3848          |
| 2.3603        | 16.0  | 6688  | 2.3711          |
| 2.3394        | 17.0  | 7106  | 2.3613          |
| 2.3206        | 18.0  | 7524  | 2.3516          |
| 2.3206        | 19.0  | 7942  | 2.3398          |
| 2.3026        | 20.0  | 8360  | 2.3301          |
| 2.2823        | 21.0  | 8778  | 2.3203          |
| 2.2669        | 22.0  | 9196  | 2.3105          |
| 2.2493        | 23.0  | 9614  | 2.3027          |
| 2.2334        | 24.0  | 10032 | 2.2930          |
| 2.2334        | 25.0  | 10450 | 2.2852          |
| 2.2194        | 26.0  | 10868 | 2.2754          |
| 2.2014        | 27.0  | 11286 | 2.2695          |
| 2.1868        | 28.0  | 11704 | 2.2598          |
| 2.171         | 29.0  | 12122 | 2.2539          |
| 2.1597        | 30.0  | 12540 | 2.2461          |

### Framework versions

- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.0
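For anyone trying to reproduce a similar run, the hyperparameters above map onto `transformers.TrainingArguments` roughly as sketched below (added for illustration, not the original training script; the output directory and the commented deepspeed config path are assumptions).

```python
# Illustrative sketch: the reported hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilgpt2-tiny-conversational",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,   # 32 x 4 -> effective train batch of 128
    num_train_epochs=30,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    # deepspeed="ds_config.json",    # assumed path; the original run used deepspeed + the HF Trainer
)
```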
huggingtweets/netflixinator
huggingtweets
2022-08-15T23:30:16Z
696
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-06-16T18:11:36Z
--- language: en thumbnail: http://www.huggingtweets.com/netflixinator/1660606212293/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1417287754434727936/38RRdVlp_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Fourtoffee #FourtoffeeHype #NewDeal4Animation</div> <div style="text-align: center; font-size: 14px;">@netflixinator</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Fourtoffee #FourtoffeeHype #NewDeal4Animation. | Data | Fourtoffee #FourtoffeeHype #NewDeal4Animation | | --- | --- | | Tweets downloaded | 892 | | Retweets | 7 | | Short tweets | 66 | | Tweets kept | 819 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1x9z6gj4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @netflixinator's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/364d3c23) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/364d3c23/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/netflixinator') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ai-forever/mGPT-1.3B-kirgiz
ai-forever
2023-08-11T08:02:01Z
696
3
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "gpt3", "mgpt", "ky", "en", "ru", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-08-10T05:11:30Z
---
language:
- ky
- en
- ru
license: mit
tags:
- gpt3
- transformers
- mgpt
---

# 🇰🇬 Kirgiz mGPT 1.3B

Language model for Kirgiz. The model has 1.3B parameters, as you can guess from its name.

Kirgiz belongs to the Turkic language family. It's a very fluid language with approximately 4 million speakers. Here are some facts about it:

1. It is also known as Kyrgyz and is the official language of Kyrgyzstan.
2. It uses a version of the Cyrillic script.
3. Manas, an epic poem in the Kyrgyz language, is one of the world's longest epics.

## Technical details

It's one of the models derived from the base [mGPT-XL (1.3B)](https://huggingface.co/ai-forever/mGPT) model (see the list below), which was originally trained on 61 languages from 25 language families using the Wikipedia and C4 corpora. We found additional data for 23 languages, most of which are considered minor, and decided to further tune the base model.

**Kirgiz mGPT 1.3B** was trained for another 50000 steps with batch_size=4 and a context window of **2048** tokens on 1 A100.

Final perplexity for this model on validation is **8.2**.

_Chart of the training loss and perplexity:_

![](https://i.imgur.com/t9v4Idk.png)

## Other mGPT-1.3B models

- [🇦🇲 mGPT-1.3B Armenian](https://huggingface.co/ai-forever/mGPT-1.3B-armenian)
- [🇦🇿 mGPT-1.3B Azerbaijan](https://huggingface.co/ai-forever/mGPT-1.3B-azerbaijan)
- [🍯 mGPT-1.3B Bashkir](https://huggingface.co/ai-forever/mGPT-1.3B-bashkir)
- [🇧🇾 mGPT-1.3B Belorussian](https://huggingface.co/ai-forever/mGPT-1.3B-belorussian)
- [🇧🇬 mGPT-1.3B Bulgarian](https://huggingface.co/ai-forever/mGPT-1.3B-bulgarian)
- [🌞 mGPT-1.3B Buryat](https://huggingface.co/ai-forever/mGPT-1.3B-buryat)
- [🌳 mGPT-1.3B Chuvash](https://huggingface.co/ai-forever/mGPT-1.3B-chuvash)
- [🇬🇪 mGPT-1.3B Georgian](https://huggingface.co/ai-forever/mGPT-1.3B-georgian)
- [🌸 mGPT-1.3B Kalmyk](https://huggingface.co/ai-forever/mGPT-1.3B-kalmyk)
- [🇰🇿 mGPT-1.3B Kazakh](https://huggingface.co/ai-forever/mGPT-1.3B-kazakh)
- [🐻 mGPT-1.3B Mari](https://huggingface.co/ai-forever/mGPT-1.3B-mari)
- [🇲🇳 mGPT-1.3B Mongol](https://huggingface.co/ai-forever/mGPT-1.3B-mongol)
- [🐆 mGPT-1.3B Ossetian](https://huggingface.co/ai-forever/mGPT-1.3B-ossetian)
- [🇮🇷 mGPT-1.3B Persian](https://huggingface.co/ai-forever/mGPT-1.3B-persian)
- [🇷🇴 mGPT-1.3B Romanian](https://huggingface.co/ai-forever/mGPT-1.3B-romanian)
- [🇹🇯 mGPT-1.3B Tajik](https://huggingface.co/ai-forever/mGPT-1.3B-tajik)
- [☕ mGPT-1.3B Tatar](https://huggingface.co/ai-forever/mGPT-1.3B-tatar)
- [🇹🇲 mGPT-1.3B Turkmen](https://huggingface.co/ai-forever/mGPT-1.3B-turkmen)
- [🐎 mGPT-1.3B Tuvan](https://huggingface.co/ai-forever/mGPT-1.3B-tuvan)
- [🇺🇦 mGPT-1.3B Ukranian](https://huggingface.co/ai-forever/mGPT-1.3B-ukranian)
- [🇺🇿 mGPT-1.3B Uzbek](https://huggingface.co/ai-forever/mGPT-1.3B-uzbek)
- [💎 mGPT-1.3B Yakut](https://huggingface.co/ai-forever/mGPT-1.3B-yakut)

## Feedback

If you find a bug or have additional data to train the model on for your language, please give us feedback. The model will be improved over time. Stay tuned!
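A minimal generation sketch, added here for illustration (it is not part of the original card); the prompt and sampling settings are assumptions:

```python
# Illustrative usage sketch for the Kirgiz mGPT checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai-forever/mGPT-1.3B-kirgiz"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Кыргызстан"  # any Kyrgyz text works as a prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```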
TheBloke/Llama-2-13B-Ensemble-v5-GGUF
TheBloke
2023-09-27T13:02:44Z
696
3
transformers
[ "transformers", "gguf", "llama", "base_model:yeontaek/llama-2-13B-ensemble-v5", "license:llama2", "text-generation-inference", "region:us" ]
null
2023-09-10T21:27:38Z
--- license: llama2 model_name: Llama 2 13B Ensemble v5 inference: false model_creator: yeontaek model_link: https://huggingface.co/yeontaek/llama-2-13B-ensemble-v5 model_type: llama quantized_by: TheBloke base_model: yeontaek/llama-2-13B-ensemble-v5 --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Llama 2 13B Ensemble v5 - GGUF - Model creator: [yeontaek](https://huggingface.co/yeontaek) - Original model: [Llama 2 13B Ensemble v5](https://huggingface.co/yeontaek/llama-2-13B-ensemble-v5) <!-- description start --> ## Description This repo contains GGUF format model files for [yeontaek's Llama 2 13B Ensemble v5](https://huggingface.co/yeontaek/llama-2-13B-ensemble-v5). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-13B-Ensemble-v5-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-13B-Ensemble-v5-GGUF) * [yeontaek's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/yeontaek/llama-2-13B-ensemble-v5) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how.
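As a quick sanity check on these figures (added here for illustration, not part of the original card), the 4.5 bpw quoted for GGML_TYPE_Q4_K can be reproduced by counting bits per 256-weight super-block; the single fp16 scale and fp16 min per super-block are an assumption not spelled out above, but they make the arithmetic come out exactly.

```python
# Back-of-the-envelope check of the 4.5 bpw figure for GGML_TYPE_Q4_K.
blocks_per_superblock = 8
weights_per_block = 32
weights = blocks_per_superblock * weights_per_block   # 256 weights per super-block

bits = (
    weights * 4                          # 4-bit quantized weights
    + blocks_per_superblock * (6 + 6)    # 6-bit scale + 6-bit min per block
    + 2 * 16                             # fp16 super-block scale and min (assumed)
)
print(bits / weights)  # -> 4.5 bits per weight
```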
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [llama-2-13b-ensemble-v5.Q2_K.gguf](https://huggingface.co/TheBloke/Llama-2-13B-Ensemble-v5-GGUF/blob/main/llama-2-13b-ensemble-v5.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [llama-2-13b-ensemble-v5.Q3_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-13B-Ensemble-v5-GGUF/blob/main/llama-2-13b-ensemble-v5.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [llama-2-13b-ensemble-v5.Q3_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-13B-Ensemble-v5-GGUF/blob/main/llama-2-13b-ensemble-v5.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [llama-2-13b-ensemble-v5.Q3_K_L.gguf](https://huggingface.co/TheBloke/Llama-2-13B-Ensemble-v5-GGUF/blob/main/llama-2-13b-ensemble-v5.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [llama-2-13b-ensemble-v5.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-13B-Ensemble-v5-GGUF/blob/main/llama-2-13b-ensemble-v5.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [llama-2-13b-ensemble-v5.Q4_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-13B-Ensemble-v5-GGUF/blob/main/llama-2-13b-ensemble-v5.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [llama-2-13b-ensemble-v5.Q4_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-13B-Ensemble-v5-GGUF/blob/main/llama-2-13b-ensemble-v5.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [llama-2-13b-ensemble-v5.Q5_0.gguf](https://huggingface.co/TheBloke/Llama-2-13B-Ensemble-v5-GGUF/blob/main/llama-2-13b-ensemble-v5.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [llama-2-13b-ensemble-v5.Q5_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-13B-Ensemble-v5-GGUF/blob/main/llama-2-13b-ensemble-v5.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [llama-2-13b-ensemble-v5.Q5_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-13B-Ensemble-v5-GGUF/blob/main/llama-2-13b-ensemble-v5.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [llama-2-13b-ensemble-v5.Q6_K.gguf](https://huggingface.co/TheBloke/Llama-2-13B-Ensemble-v5-GGUF/blob/main/llama-2-13b-ensemble-v5.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [llama-2-13b-ensemble-v5.Q8_0.gguf](https://huggingface.co/TheBloke/Llama-2-13B-Ensemble-v5-GGUF/blob/main/llama-2-13b-ensemble-v5.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m llama-2-13b-ensemble-v5.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. 
Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-13B-Ensemble-v5-GGUF", model_file="llama-2-13b-ensemble-v5.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: yeontaek's Llama 2 13B Ensemble v5 No original model card was available. <!-- original-model-card end -->