| Column | Type | Range / Values |
| --- | --- | --- |
| modelId | string | length 5–122 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | |
| downloads | int64 | 0–738M |
| likes | int64 | 0–11k |
| library_name | string | 245 classes |
| tags | list | length 1–4.05k |
| pipeline_tag | string | 48 classes |
| createdAt | timestamp[us, tz=UTC] | |
| card | string | length 1–901k |
itdainb/PhoRanker
itdainb
2024-06-01T13:20:52Z
470
7
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "cross-encoder", "rerank", "vi", "dataset:unicamp-dl/mmarco", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-29T12:45:48Z
---
language:
- vi
license: apache-2.0
library_name: transformers
tags:
- cross-encoder
- rerank
datasets:
- unicamp-dl/mmarco
widget:
- text: Trường UIT là gì ?.
  output:
  - label: >-
      Trường Đại_học Công_nghệ Thông_tin có tên tiếng Anh là University of
      Information_Technology ( viết tắt là UIT ) là thành_viên của Đại_học
      Quốc_Gia TP. HCM.
    score: 4.0033
  - label: >-
      Trường Đại_học Kinh_tế – Luật ( tiếng Anh : University of Economics and
      Law – UEL ) là trường đại_học đào_tạo và nghiên_cứu khối ngành kinh_tế ,
      kinh_doanh và luật hàng_đầu Việt_Nam .
    score: 0
  - label: >-
      Quĩ_uỷ_thác đầu_tư ( tiếng Anh : Unit Investment_Trusts ; viết tắt :
      UIT ) là một công_ty đầu_tư mua hoặc nắm giữ một danh_mục đầu_tư cố_định
    score: 2.5138
pipeline_tag: text-classification
---

#### Table of contents
1. [Installation](#installation)
2. [Pre-processing](#pre-processing)
3. [Usage with `sentence-transformers`](#usage-with-sentence-transformers)
4. [Usage with `transformers`](#usage-with-transformers)
5. [Performance](#performance)
6. [Citation](#citation)

## Installation
- Install `pyvi` for word segmentation:
    - `pip install pyvi`
- Install `sentence-transformers` (recommended) - [Usage](#usage-with-sentence-transformers):
    - `pip install sentence-transformers`
- Install `transformers` (optional) - [Usage](#usage-with-transformers):
    - `pip install transformers`

## Pre-processing

```python
from pyvi import ViTokenizer

query = "UIT là gì?"
sentences = [
    "Trường Đại học Công nghệ Thông tin có tên tiếng Anh là University of Information Technology (viết tắt là UIT) là thành viên của Đại học Quốc Gia TP.HCM.",
    "Trường Đại học Kinh tế – Luật (tiếng Anh: University of Economics and Law – UEL) là trường đại học đào tạo và nghiên cứu khối ngành kinh tế, kinh doanh và luật hàng đầu Việt Nam.",
    "Quĩ uỷ thác đầu tư (tiếng Anh: Unit Investment Trusts; viết tắt: UIT) là một công ty đầu tư mua hoặc nắm giữ một danh mục đầu tư cố định"
]

tokenized_query = ViTokenizer.tokenize(query)
tokenized_sentences = [ViTokenizer.tokenize(sent) for sent in sentences]

tokenized_pairs = [[tokenized_query, sent] for sent in tokenized_sentences]

model_id = 'itdainb/PhoRanker'
```

## Usage with sentence-transformers

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder(model_id, max_length=256)

# For fp16 usage
model.model.half()

scores = model.predict(tokenized_pairs)
```

## Usage with transformers

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# For fp16 usage
model.half()

features = tokenizer(tokenized_pairs, padding=True, truncation="longest_first", return_tensors="pt", max_length=256)

model.eval()
with torch.no_grad():
    model_predictions = model(**features, return_dict=True)
    logits = model_predictions.logits
    scores = [score[0] for score in logits]

print(scores)
```

## Performance

In the following table, we provide various pre-trained cross-encoders together with their performance on the Vietnamese dev split of the [mMARCO Passage Reranking](https://huggingface.co/datasets/unicamp-dl/mmarco) dataset.

| Model-Name | NDCG@3 | MRR@3 | NDCG@5 | MRR@5 | NDCG@10 | MRR@10 | Docs / Sec |
| --- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| itdainb/PhoRanker | **0.6625** | **0.6458** | **0.7147** | **0.6731** | **0.7422** | **0.6830** | 15 |
| [amberoad/bert-multilingual-passage-reranking-msmarco](https://huggingface.co/amberoad/bert-multilingual-passage-reranking-msmarco) | 0.4634 | 0.5233 | 0.5041 | 0.5383 | 0.5416 | 0.5523 | **22** |
| [kien-vu-uet/finetuned-phobert-passage-rerank-best-eval](https://huggingface.co/kien-vu-uet/finetuned-phobert-passage-rerank-best-eval) | 0.0963 | 0.0883 | 0.1396 | 0.1131 | 0.1681 | 0.1246 | 15 |
| [BAAI/bge-reranker-v2-m3](https://huggingface.co/BAAI/bge-reranker-v2-m3) | 0.6087 | 0.5841 | 0.6513 | 0.6062 | 0.6872 | 0.62091 | 3.51 |
| [BAAI/bge-reranker-v2-gemma](https://huggingface.co/BAAI/bge-reranker-v2-gemma) | 0.6088 | 0.5908 | 0.6446 | 0.6108 | 0.6785 | 0.6249 | 1.29 |

Note: Runtime was computed on an A100 GPU with fp16.

## Citation

Please cite as:

```bibtex
@misc{PhoRanker,
    title={PhoRanker: A Cross-encoder Model for Vietnamese Text Ranking},
    author={Dai Nguyen Ba ({ORCID:0009-0008-8559-3154})},
    year={2024},
    publisher={Huggingface},
    journal={huggingface repository},
    howpublished={\url{https://huggingface.co/itdainb/PhoRanker}},
}
```
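For completeness, here is a minimal sketch (not part of the original card) showing how the predicted scores can be used to rerank the passages; it assumes the `scores` and `sentences` variables from the examples above.

```python
import numpy as np

# Sort passages by predicted relevance, highest score first.
# `scores` is the array returned by model.predict(tokenized_pairs) above.
order = np.argsort(scores)[::-1]
for rank, idx in enumerate(order, start=1):
    print(f"{rank}. ({scores[idx]:.4f}) {sentences[idx]}")
```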
mradermacher/IceCaffeLatteRP-7b-i1-GGUF
mradermacher
2024-06-01T16:28:45Z
470
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "alpaca", "mistral", "not-for-all-audiences", "nsfw", "en", "base_model:icefog72/IceCaffeLatteRP-7b", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-05-31T09:38:02Z
---
base_model: icefog72/IceCaffeLatteRP-7b
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- mergekit
- merge
- alpaca
- mistral
- not-for-all-audiences
- nsfw
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/icefog72/IceCaffeLatteRP-7b

<!-- provided-files -->
Static quants are available at https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-i1-GGUF/resolve/main/IceCaffeLatteRP-7b.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-i1-GGUF/resolve/main/IceCaffeLatteRP-7b.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-i1-GGUF/resolve/main/IceCaffeLatteRP-7b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-i1-GGUF/resolve/main/IceCaffeLatteRP-7b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-i1-GGUF/resolve/main/IceCaffeLatteRP-7b.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-i1-GGUF/resolve/main/IceCaffeLatteRP-7b.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-i1-GGUF/resolve/main/IceCaffeLatteRP-7b.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-i1-GGUF/resolve/main/IceCaffeLatteRP-7b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-i1-GGUF/resolve/main/IceCaffeLatteRP-7b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-i1-GGUF/resolve/main/IceCaffeLatteRP-7b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-i1-GGUF/resolve/main/IceCaffeLatteRP-7b.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-i1-GGUF/resolve/main/IceCaffeLatteRP-7b.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-i1-GGUF/resolve/main/IceCaffeLatteRP-7b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-i1-GGUF/resolve/main/IceCaffeLatteRP-7b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-i1-GGUF/resolve/main/IceCaffeLatteRP-7b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-i1-GGUF/resolve/main/IceCaffeLatteRP-7b.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-i1-GGUF/resolve/main/IceCaffeLatteRP-7b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-i1-GGUF/resolve/main/IceCaffeLatteRP-7b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-i1-GGUF/resolve/main/IceCaffeLatteRP-7b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-i1-GGUF/resolve/main/IceCaffeLatteRP-7b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-i1-GGUF/resolve/main/IceCaffeLatteRP-7b.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.

<!-- end -->
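The card defers to TheBloke's READMEs for GGUF usage; as a minimal sketch (not from the card), one way to fetch and run the recommended i1-Q4_K_M quant is with `huggingface_hub` and `llama-cpp-python`. The Alpaca-style prompt is an assumption based on the repo's `alpaca` tag.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quant file rather than cloning the whole repo.
path = hf_hub_download(
    repo_id="mradermacher/IceCaffeLatteRP-7b-i1-GGUF",
    filename="IceCaffeLatteRP-7b.i1-Q4_K_M.gguf",
)

# n_gpu_layers=-1 offloads all layers when llama-cpp-python is built with GPU support.
llm = Llama(model_path=path, n_ctx=4096, n_gpu_layers=-1)
out = llm(
    "### Instruction:\nWrite a short scene set in a coffee shop.\n\n### Response:\n",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```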
SSepiol/Phi3-MB_q4_k_m
SSepiol
2024-06-05T13:45:04Z
470
0
transformers
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-05T13:42:55Z
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---

# Uploaded model

- **Developed by:** SSepiol
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Phi-3-mini-4k-instruct-bnb-4bit

This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
de-coder/UlizaLlama_Q4_K_M-gguf
de-coder
2024-06-07T06:08:19Z
470
0
transformers
[ "transformers", "gguf", "llama", "text-generation", "art", "sw", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-07T05:11:07Z
---
license: mit
language:
- sw
- en
tags:
- art
---

# UlizaLlama_Q4_K_M-gguf

4-bit Quantized Bilingual Language Model

## Overview

UlizaLlama_Q4_K_M-gguf is a 4-bit quantized version of the UlizaLlama model, a 7B-parameter language model fine-tuned for Swahili and English. This quantized model offers the same bilingual capabilities as the original UlizaLlama but with significantly reduced model size and improved inference speed, making it ideal for deployment in resource-constrained environments.

### Key Features

- **Bilingual Proficiency**: Excels in both Swahili and English, with a focus on instructional tasks.
- **4-bit Quantization**: Uses the GGUF format with Q4_K_M quantization for roughly a 75% reduction in model size.
- **Efficient Inference**: Faster processing and lower memory footprint compared to the full-precision model.
- **Versatile Applications**: Suitable for question-answering, chat assistants, and various domain-specific tasks.

## Model Details

- **Original Model**: UlizaLlama (7B parameters)
- **Base Model**: Jacaranda/kiswallama-pretrained (derived from Meta/Llama2)
- **Quantization Method**: 4-bit GGUF (Q4_K_M)
- **Languages**: Swahili and English
- **License**: CC BY-NC-SA 4.0 DEED

## Installation

To use this model, you'll need a library that supports 4-bit quantized GGUF models. We recommend the `ctransformers` library:

```bash
pip install ctransformers
```

## Usage

Here's a simple example of how to load and use de-coder/UlizaLlama_Q4_K_M-gguf:

```python
from ctransformers import AutoModelForCausalLM

# Load the model
llm = AutoModelForCausalLM.from_pretrained(
    "de-coder/UlizaLlama_Q4_K_M-gguf",
    model_file="Q4_K_M.gguf",
    lib="avx2"  # or "basic" if avx2 isn't supported
)

# Generate text
prompt = "Niambie kuhusu historia ya Kilimanjaro."
print(llm(prompt))
```

## Performance and Trade-offs

The quantized model offers substantial improvements in model size and inference speed. However, there may be a slight degradation in output quality compared to the full-precision model. We encourage users to benchmark the model on their specific tasks to understand these trade-offs (a rough throughput sketch follows at the end of this card).

## Use Cases

1. Chatbots for healthcare, agriculture, education, and more.
2. Language learning applications.
3. Information services in Swahili-speaking regions.
4. Edge devices and mobile applications.

## Citation and Acknowledgments

If you use UlizaLlama_Q4_K_M-gguf in your work, please cite:

```bibtex
@misc{UlizaLlama_Q4_K_M-gguf,
      title={UlizaLlama_Q4_K_M-gguf: A Bilingual Language Model for Swahili and English},
      author={Kelvin Githu (de-coder)},
      year={2024},
      publisher={Kelvin Githu},
      howpublished={\url{https://huggingface.co/de-coder/UlizaLlama_Q4_K_M-gguf}},
}
```
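As a rough starting point for the benchmarking suggested above, here is a minimal sketch (not from the original card) that times generation with the `llm` object from the Usage example; `tokenize` and `max_new_tokens` are part of the `ctransformers` API.

```python
import time

# Rough throughput check for the quantized model (`llm` from the Usage example above).
prompt = "Eleza umuhimu wa elimu kwa jamii."  # "Explain the importance of education to society."

start = time.perf_counter()
text = llm(prompt, max_new_tokens=128)
elapsed = time.perf_counter() - start

n_tokens = len(llm.tokenize(text))
print(f"~{n_tokens} tokens in {elapsed:.1f}s ({n_tokens / elapsed:.1f} tok/s)")
```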
gongjae/real_final_model
gongjae
2024-06-19T22:12:22Z
470
0
transformers
[ "transformers", "safetensors", "codebert", "endpoints_compatible", "region:us" ]
null
2024-06-19T22:11:04Z
Entry not found
John6666/t-ponynai3-v55-sdxl
John6666
2024-06-23T23:24:28Z
470
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "pony", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-06-23T23:19:51Z
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- pony
---

Original model is [here](https://civitai.com/models/317902/t-ponynai3?modelVersionId=593760).
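The card itself gives no loading snippet; below is a minimal, hedged sketch based on the `diffusers:StableDiffusionXLPipeline` tag in the repo metadata. The score-tag prompt style is an assumption common to Pony-derived SDXL checkpoints, not something the card specifies.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load this checkpoint with the pipeline class listed in the repo tags.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/t-ponynai3-v55-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

# Pony-derived checkpoints commonly expect quality/score tags in the prompt (assumption).
image = pipe(
    "score_9, score_8_up, 1girl, anime, cherry blossoms, detailed background",
    negative_prompt="score_4, score_3, lowres, bad anatomy",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("sample.png")
```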
WizardLMTeam/WizardLM-13B-V1.0
WizardLMTeam
2023-09-01T07:56:25Z
469
71
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-05-13T15:17:01Z
This is the WizardLM-13B V1.0 diff weight.

Project Repo: https://github.com/nlpxucan/WizardLM

NOTE: **WizardLM-13B-1.0** and **WizardLM-7B** use different prompts at the beginning of the conversation:

For **WizardLM-13B-1.0**, the prompt should be as follows:

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: hello, who are you? ASSISTANT:
```

For **WizardLM-7B**, the prompt should be as follows:

```
{instruction}\n\n### Response:
```

<p align="center"> 🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br> </p>
<p align="center"> 👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a> </p>

| Model | Checkpoint | Paper | HumanEval | MBPP | Demo | License |
| ----- |------| ---- |------|-------| ----- | ----- |
| WizardCoder-Python-34B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 73.2 | 61.2 | [Demo](http://47.103.63.15:50085/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-15B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 59.8 | 50.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-Python-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 64.0 | 55.6 | -- | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-3B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-3B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 34.8 | 37.4 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-1B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-1B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 23.8 | 28.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |

| Model | Checkpoint | Paper | GSM8k | MATH | Online Demo | License |
| ----- |------| ---- |------|-------| ----- | ----- |
| WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> | **81.6** | **22.7** | [Demo](http://47.103.63.15:50083/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2</a> |
| WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> | **63.9** | **14.0** | [Demo](http://47.103.63.15:50082/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2</a> |
| WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> | **54.9** | **10.7** | [Demo](http://47.103.63.15:50080/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2</a> |

<font size=4>

| <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> | <sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>GSM8k</sup> | <sup>HumanEval</sup> | <sup>License</sup> |
| ----- |------| ---- |------|-------| ----- | ----- | ----- |
| <sup>**WizardLM-70B-V1.0**</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-70B-V1.0" target="_blank">HF Link</a> </sup>|<sup>📃**Coming Soon**</sup>| <sup>**7.78**</sup> | <sup>**92.91%**</sup> |<sup>**77.6%**</sup> | <sup> **50.6 pass@1**</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> |
| <sup>WizardLM-13B-V1.2</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a> </sup>| | <sup>7.06</sup> | <sup>89.17%</sup> |<sup>55.3%</sup> | <sup>36.6 pass@1</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> |
| <sup>WizardLM-13B-V1.1</sup> |<sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a> </sup> | | <sup>6.76</sup> |<sup>86.32%</sup> | | <sup>25.0 pass@1</sup>| <sup>Non-commercial</sup>|
| <sup>WizardLM-30B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | | <sup>37.8 pass@1</sup>| <sup>Non-commercial</sup> |
| <sup>WizardLM-13B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a> </sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | | <sup> 24.0 pass@1 </sup> | <sup>Non-commercial</sup>|
| <sup>WizardLM-7B-V1.0 </sup>| <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a> </sup> |<sup> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> </sup>| | | |<sup>19.1 pass@1 </sup>|<sup> Non-commercial</sup>|

</font>

**Github Repo**: https://github.com/nlpxucan/WizardLM/tree/main/WizardMath

**Twitter**: https://twitter.com/WizardLM_AI/status/1689998428200112128

**Discord**: https://discord.gg/VZjjHtWrKs

## Inference WizardLM Demo Script

We provide the inference WizardLM demo code [here](https://github.com/nlpxucan/WizardLM/tree/main/demo).
stablediffusionapi/architecture-tuned-model
stablediffusionapi
2024-05-17T08:41:45Z
469
10
diffusers
[ "diffusers", "safetensors", "stablediffusionapi.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-02T03:12:01Z
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---

# Architecture Tuned Model API Inference

![generated from stablediffusionapi.com](https://stablediffusionapi.com//storage/generations/0-4454ad07-adfe-4541-808d-2d7f9bd9d482.png)

## Get API Key

Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment is needed. Replace the key in the code below and change **model_id** to "architecture-tuned-model".

Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs)

Try the model for free: [Generate Images](https://stablediffusionapi.com/models/architecture-tuned-model)

Model link: [View model](https://stablediffusionapi.com/models/architecture-tuned-model)

Credits: [View credits](https://civitai.com/?query=Architecture%20Tuned%20Model)

View all models: [View Models](https://stablediffusionapi.com/models)

```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "architecture-tuned-model",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
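A hedged follow-up sketch for parsing the JSON response from the request above; the `"status"` and `"output"` field names are assumptions based on the linked docs, so adjust them to the actual response schema.

```python
# Continuation of the request above. The "status" and "output" field names are
# assumptions; check the API docs linked in this card for the real schema.
data = response.json()
if data.get("status") == "success":
    for image_url in data.get("output", []):
        print("generated image:", image_url)
else:
    # e.g. still processing, or an error payload
    print("no images yet:", data)
```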
VMware/open-llama-7b-v2-open-instruct
VMware
2023-12-01T20:12:51Z
469
32
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:VMware/open-instruct", "license:cc-by-sa-3.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-11T06:15:24Z
---
license: cc-by-sa-3.0
datasets:
- VMware/open-instruct
language:
- en
library_name: transformers
pipeline_tag: text-generation
---

# VMware/open-llama-7B-v2-open-instruct

Instruction-tuned version of the fully trained Open LLama 7B v2 model. The model is open for <b>COMMERCIAL USE</b>. <br>

- This model performs better on code compared to v1 due to the improvements made on the base model by the openlm-research team.
- The instruction model is trained on an improved instruction-tuning dataset compared to v1.

**NOTE**: The model was trained using the Alpaca prompt template. <br>
**NOTE**: The fast tokenizer results in incorrect encoding; set the ```use_fast = False``` parameter when instantiating the tokenizer.

## License

- CC BY-SA-3.0 **(Commercially Viable!)**
- Base Language Model ([openlm-research/open_llama_v2_7b](https://huggingface.co/openlm-research/open_llama_v2_7b)) is under apache-2.0
- Fine-Tuning Dataset ([VMware/open-instruct](https://huggingface.co/datasets/VMware/open-instruct)) is under cc-by-sa-3.0

## Datasets used for Fine-Tuning

### Open-instruct

**Open-instruct-v1**
- Mosaic/Dolly-HHRLHF + filtered OASST1 - cc by 3.0

**Subset of COT SUBMIX (FROM FLAN V2) Zeroshot examples**
- ESNLI - MIT
- ECQA - CDLA 1.0 - Sharing
- Strategy - MIT
- CREAK - MIT
- gsm8k - MIT
- aqua - MIT
- qasc - Apache 2.0

## Nomenclature

- Model: Open-llama-v2
- Model Size: 7B parameters
- Dataset: Open-instruct

## Use in Transformers

```python
import os
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'VMware/open-llama-7b-v2-open-instruct'

tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map='sequential')

prompt_template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"

prompt = """What is attention mechanism of a transformer model? Write a python code to illustrate how attention works within a transformer model using numpy library. Donot use pytorch or tensorflow."""

inputt = prompt_template.format(instruction=prompt)
input_ids = tokenizer(inputt, return_tensors="pt").input_ids.to("cuda")

output1 = model.generate(input_ids, max_length=512)
input_length = input_ids.shape[1]
output1 = output1[:, input_length:]
output = tokenizer.decode(output1[0])

print(output)
```

### Output

(model-generated answer, quoted verbatim)

Sure, I can help you with that! Attention mechanisms in transformer models are typically implemented using the attention mechanism in the self-attention layer. Self-attention allows the model to focus on different parts of the input sequence when processing it. This is achieved by computing a set of attention weights, which are used to weigh the contribution of each input element to the output. Here's an example code using NumPy to illustrate how attention works in a transformer model:

```python
import numpy as np

def attention_weights(query, key, value, mask):
    # Query, key, and value are input tensors. Mask is a tensor of zeros and ones that represents the attention mask.
    # It is used to prevent the model from attending to certain positions in the input sequence if they are not relevant.
    # The attention weights are the element-wise product of the query, key, and mask tensors.
    # The result is a tensor of the same shape as the query tensor.

    # Compute the dot product between the query tensor and the key tensor
    dot = np.matmul(query, key)

    # Compute the element-wise softmax of the dot product tensor
    exp_dot = np.exp(dot)

    # Multiply the dot product and the softmax of the dot product tensors
    weights = dot * exp_dot

    # Return the attention weights as a NumPy tensor
    return weights

# Define the input sequence
query = np.array([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]])
key = np.array([[0.1, 0.2], [0.3, 0.4]])
value = np.array([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]])
mask = np.array([[False, True, True], [False, True, True]])

# Compute the attention weights
weights = attention_weights(query, key, value, mask)

# Print the attention weights
print(weights)
```

In this example, the `attention_weights` function takes as input the query tensor, key tensor, value tensor, and mask tensor. It computes the dot product between the query and key tensors using the `np.matmul` function, and then applies a softmax function using the `np.exp` function to the element-wise dot product tensor. It then multiplies the dot product and softmax tensors using the `np.matmul` function, and returns the result as a NumPy tensor.

The `query`, `key`, and `value` tensors represent the input sequence to the transformer model. The `mask` tensor represents the attention mask, which is used to prevent the model from attending to certain positions in the input sequence if they are not relevant.

The output of the `attention_weights` function is a NumPy tensor that represents the attention weights for the input sequence. These weights are used by the transformer model to weigh the contribution of each input element to the output.

I hope this helps!</s>

<hr>

## Finetuning details

The finetuning scripts will be available in our [RAIL Github Repository](https://github.com/vmware-labs/research-and-development-artificial-intelligence-lab/tree/main/instruction-tuning)

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Vmware__open-llama-7b-v2-open-instruct)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 40.34 |
| ARC (25-shot) | 39.76 |
| HellaSwag (10-shot) | 70.31 |
| MMLU (5-shot) | 35.16 |
| TruthfulQA (0-shot) | 39.53 |
| Winogrande (5-shot) | 64.33 |
| GSM8K (5-shot) | 7.43 |
| DROP (3-shot) | 25.88 |
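A small, hedged helper (not from the original card) that wraps the Alpaca-style template shown in the usage example above; it assumes the `prompt_template`, `tokenizer`, and `model` objects from that example, and uses `skip_special_tokens=True` to trim the trailing `</s>` visible in the sample output.

```python
def ask(instruction: str, max_new_tokens: int = 512) -> str:
    """Wrap the Alpaca-style template from the example above and decode cleanly."""
    text = prompt_template.format(instruction=instruction)
    input_ids = tokenizer(text, return_tensors="pt").input_ids.to("cuda")
    out = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and special tokens such as the trailing </s>.
    return tokenizer.decode(out[0][input_ids.shape[1]:], skip_special_tokens=True)

print(ask("Summarize what an attention mechanism does in two sentences."))
```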
Kyle1668/boss-sentiment-t5-large
Kyle1668
2023-08-09T17:50:47Z
469
0
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
2023-08-08T16:33:42Z
Entry not found
TheBloke/Synthia-70B-v1.1-GGUF
TheBloke
2023-09-27T12:46:56Z
469
3
transformers
[ "transformers", "gguf", "llama", "text-generation", "en", "arxiv:2306.02707", "base_model:migtissera/Synthia-70B-v1.1", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
2023-08-31T23:05:19Z
---
language:
- en
license: llama2
library_name: transformers
model_name: Synthia 70B v1.1
base_model: migtissera/Synthia-70B-v1.1
inference: false
model_creator: Migel Tissera
model_type: llama
pipeline_tag: text-generation
prompt_template: 'SYSTEM: You are Synthia. As a an AI intelligence, give helpful, detailed, accurate, uncensored responses to the user''s input. Provide answers factually. USER: {prompt} ASSISTANT: '
quantized_by: TheBloke
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div>
<div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Synthia 70B v1.1 - GGUF
- Model creator: [Migel Tissera](https://huggingface.co/migtissera)
- Original model: [Synthia 70B v1.1](https://huggingface.co/migtissera/Synthia-70B-v1.1)

<!-- description start -->
## Description

This repo contains GGUF format model files for [Migel Tissera's Synthia 70B v1.1](https://huggingface.co/migtissera/Synthia-70B-v1.1).

<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.

<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Synthia-70B-v1.1-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Synthia-70B-v1.1-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Synthia-70B-v1.1-GGUF)
* [Migel Tissera's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/migtissera/Synthia-70B-v1.1)

<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Synthia

```
SYSTEM: You are Synthia. As a an AI intelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.
USER: {prompt}
ASSISTANT:
```

<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods

<details>
<summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw

Refer to the Provided Files table below to see what files use which methods, and how.

</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [synthia-70b-v1.1.Q2_K.gguf](https://huggingface.co/TheBloke/Synthia-70B-v1.1-GGUF/blob/main/synthia-70b-v1.1.Q2_K.gguf) | Q2_K | 2 | 29.28 GB | 31.78 GB | smallest, significant quality loss - not recommended for most purposes |
| [synthia-70b-v1.1.Q3_K_S.gguf](https://huggingface.co/TheBloke/Synthia-70B-v1.1-GGUF/blob/main/synthia-70b-v1.1.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB | 32.42 GB | very small, high quality loss |
| [synthia-70b-v1.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/Synthia-70B-v1.1-GGUF/blob/main/synthia-70b-v1.1.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB | 35.69 GB | very small, high quality loss |
| [synthia-70b-v1.1.Q3_K_L.gguf](https://huggingface.co/TheBloke/Synthia-70B-v1.1-GGUF/blob/main/synthia-70b-v1.1.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB | 38.65 GB | small, substantial quality loss |
| [synthia-70b-v1.1.Q4_0.gguf](https://huggingface.co/TheBloke/Synthia-70B-v1.1-GGUF/blob/main/synthia-70b-v1.1.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB | 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [synthia-70b-v1.1.Q4_K_S.gguf](https://huggingface.co/TheBloke/Synthia-70B-v1.1-GGUF/blob/main/synthia-70b-v1.1.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB | 41.57 GB | small, greater quality loss |
| [synthia-70b-v1.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/Synthia-70B-v1.1-GGUF/blob/main/synthia-70b-v1.1.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB | 43.92 GB | medium, balanced quality - recommended |
| [synthia-70b-v1.1.Q5_0.gguf](https://huggingface.co/TheBloke/Synthia-70B-v1.1-GGUF/blob/main/synthia-70b-v1.1.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB | 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [synthia-70b-v1.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/Synthia-70B-v1.1-GGUF/blob/main/synthia-70b-v1.1.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB | 49.96 GB | large, low quality loss - recommended |
| [synthia-70b-v1.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/Synthia-70B-v1.1-GGUF/blob/main/synthia-70b-v1.1.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB | 51.25 GB | large, very low quality loss - recommended |
| synthia-70b-v1.1.Q6_K.gguf | Q6_K | 6 | 56.59 GB | 59.09 GB | very large, extremely low quality loss |
| synthia-70b-v1.1.Q8_0.gguf | Q8_0 | 8 | 73.29 GB | 75.79 GB | very large, extremely low quality loss - not recommended |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

### Q6_K and Q8_0 files are split and require joining

**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
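As a rough sanity check on the sizes above, here is a small sketch (not from the original README) that estimates file sizes from the effective bits-per-weight figures in the quantisation-methods section, assuming ~70e9 weights. Actual files differ somewhat because llama.cpp mixes quant types across tensors; the gap is largest for Q2_K.

```python
# Estimate GGUF file sizes from effective bits-per-weight (bpw), assuming ~70e9 weights.
# Real files deviate because llama.cpp mixes quant types across tensors.
n_params = 70e9
for name, bpw in [("Q2_K", 2.5625), ("Q3_K", 3.4375), ("Q4_K", 4.5), ("Q5_K", 5.5), ("Q6_K", 6.5625)]:
    est_gb = n_params * bpw / 8 / 1e9
    print(f"{name}: ~{est_gb:.1f} GB")  # e.g. Q4_K -> ~39.4 GB, close to the 39.07 GB Q4_K_S file
```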
<details>
<summary>Click for instructions regarding Q6_K and Q8_0 files</summary>

### q6_K
Please download:
* `synthia-70b-v1.1.Q6_K.gguf-split-a`
* `synthia-70b-v1.1.Q6_K.gguf-split-b`

### q8_0
Please download:
* `synthia-70b-v1.1.Q8_0.gguf-split-a`
* `synthia-70b-v1.1.Q8_0.gguf-split-b`

To join the files, do the following:

Linux and macOS:
```
cat synthia-70b-v1.1.Q6_K.gguf-split-* > synthia-70b-v1.1.Q6_K.gguf && rm synthia-70b-v1.1.Q6_K.gguf-split-*
cat synthia-70b-v1.1.Q8_0.gguf-split-* > synthia-70b-v1.1.Q8_0.gguf && rm synthia-70b-v1.1.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B synthia-70b-v1.1.Q6_K.gguf-split-a + synthia-70b-v1.1.Q6_K.gguf-split-b synthia-70b-v1.1.Q6_K.gguf
del synthia-70b-v1.1.Q6_K.gguf-split-a synthia-70b-v1.1.Q6_K.gguf-split-b
COPY /B synthia-70b-v1.1.Q8_0.gguf-split-a + synthia-70b-v1.1.Q8_0.gguf-split-b synthia-70b-v1.1.Q8_0.gguf
del synthia-70b-v1.1.Q8_0.gguf-split-a synthia-70b-v1.1.Q8_0.gguf-split-b
```

</details>
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: TheBloke/Synthia-70B-v1.1-GGUF and below it, a specific filename to download, such as: synthia-70b-v1.1.q4_K_M.gguf. Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub>=0.17.1
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download TheBloke/Synthia-70B-v1.1-GGUF synthia-70b-v1.1.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
<summary>More advanced huggingface-cli download usage</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download TheBloke/Synthia-70B-v1.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Synthia-70B-v1.1-GGUF synthia-70b-v1.1.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.

</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 32 -m synthia-70b-v1.1.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "SYSTEM: You are Synthia. As a an AI intelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.\nUSER: {prompt}\nASSISTANT:"
```

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.

### How to load this model from Python using ctransformers

#### First install the package

```bash
# Base ctransformers with no GPU acceleration
pip install ctransformers>=0.2.24
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]>=0.2.24
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```

#### Simple example code to load one of these GGUF models

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Synthia-70B-v1.1-GGUF", model_file="synthia-70b-v1.1.q4_K_M.gguf", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
```

## How to use with LangChain

Here are guides on using llama-cpp-python or ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: Migel Tissera's Synthia 70B v1.1

# Synthia-70B-v1.1

SynthIA (Synthetic Intelligent Agent) is a LLama-2-70B model trained on Orca style datasets. It has been fine-tuned for instruction following as well as having long-form conversations. This model has generalized "Tree of Thought" reasoning capabilities. Evoke it with the following system message:

```
Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning
```

<br>

![Synthia](https://huggingface.co/migtissera/Synthia-70B-v1.1/resolve/main/Synthia.jpeg)

<br>

#### License Disclaimer:

This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.

<br>

## Evaluation

We evaluated Synthia-70B on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.

Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

| **Task** | **Metric** | **Value** |
|:------:|:--------:|:-------:|
| *arc_challenge* | acc_norm | 70.05 |
| *hellaswag* | acc_norm | 87.12 |
| *mmlu* | acc_norm | 70.34 |
| *truthfulqa_mc* | mc2 | 57.84 |
| **Total Average** | - | **71.34** |

<br>

## Example Usage

### Here is the prompt format:

```
SYSTEM: You are Synthia. As a an AI intelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.
USER: How is a rocket launched from the surface of the earth to Low Earth Orbit?
ASSISTANT:
```

### Below is a code example showing how to use this model:

```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Synthia-70B"
output_file_path = "./Synthia-70B-conversations.jsonl"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)


def generate_text(instruction):
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    answer = string.split("USER:")[0].strip()
    return f"{answer}"


conversation = f"SYSTEM: As a an AI superintelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually."

while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"
    json_data = {"prompt": user_input, "answer": answer}

    ## Save your conversation
    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")
```

<br>

#### Limitations & Biases:

While this model aims for accuracy, it can occasionally produce inaccurate or misleading results. Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content. Exercise caution and cross-check information when necessary. This is an uncensored model.

<br>

### Citation:

Please kindly cite using the following BibTeX:

```
@misc{Synthia-70B,
  author = {Migel Tissera},
  title = {Synthia-70B: Synthetic Intelligent Agent},
  year = {2023},
  publisher = {GitHub, HuggingFace},
  journal = {GitHub repository, HuggingFace repository},
  howpublished = {\url{https://huggingface.co/migtissera/Synthia-70B}},
}
```

```
@misc{mukherjee2023orca,
      title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
      author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
      year={2023},
      eprint={2306.02707},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

```
@software{touvron2023llama,
  title={LLaMA2: Open and Efficient Foundation Language Models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}
```

## Example Output

```
You: What is Earth's atmosphere like? What is Earth's magnetic field like? What is the relationship between the two?

Earth's atmosphere is a layer of gases that surrounds the planet, held in place by Earth's gravity.
It is primarily composed of nitrogen (78.1%), oxygen (20.9%), argon (0.9%), and other trace gases, such as carb on dioxide and water vapor. The atmosphere plays a crucial role in regulating Earth's temperature and climate by trapping heat and absorbing harmful radiation from the sun. It also provides the air we breathe, p rotects us from meteorites and the sun's ultraviolet radiation, and allows for the development of life on the planet. Earth's magnetic field is a phenomenon generated by the convection of molten iron in the planet's outer core, which creates an electric current. This current, in turn, produces a magnetic field that extends from the Earth's interior to space, protecting the planet from the solar wind and cosmic radiation. The magnetic field also plays a role in the formation of Earth's Van Allen radiation belts, which trap high-energy particles from the sun and cosmic rays. The relationship between Earth's atmosphere and magnetic field is complex and multifaceted. The magnetic field influences the distribution of charged particles in the atmosphere, affecting auroras, radio communi cation, and the ionosphere's behavior. The atmosphere, in turn, plays a role in the distribution of the magnetic field's energy, as well as in the generation and loss of atmospheric ions. The interactions betwee n the atmosphere and the magnetic field also influence the dynamics of Earth's climate and the long-term evolution of the planet's environment. You: Elaborate the the complex relationship between the two The complex relationship between Earth's atmosphere and magnetic field can be elaborated upon in several ways. Firstly, the magnetic field influences the distribution of charged particles in the atmosphere. For example, the magnetic field deflects the solar wind particles that would otherwise strip away the atmosphere. This deflection creates a bow shock, where the solar wind particles slow down and dissipate their ene rgy, producing a shock wave that propagates downstream from the Earth. The magnetic field also guides charged particles from the solar wind into the inner magnetosphere, where they become trapped and produce the Van Allen radiation belts. Secondly, the atmosphere plays a role in the distribution of the magnetic field's energy. The ionosphere, which is the upper layer of the atmosphere that is ionized by solar radiation, is affected by the magneti c field. The energy from the magnetic field is transferred to the ionosphere, where it drives the motion of the charged particles and affects the propagation of radio waves. Thirdly, the interactions between the atmosphere and the magnetic field influence the dynamics of Earth's climate. The magnetic field affects the distribution of charged particles in the atmosphere, which in tur n affects the formation of clouds and precipitation. The magnetic field also affects the distribution of heat in the atmosphere, which can influence the patterns of atmospheric circulation and the distribution o f temperatures around the planet. Finally, the interactions between the atmosphere and the magnetic field play a role in the long-term evolution of the planet's environment. The magnetic field protects the atmosphere from the erosive effects of the solar wind, which would otherwise strip away the atmosphere over time. The magnetic field also affects the distribution of heat in the atmosphere, which can influence the patterns of atmospheric circulation and the distribution of temperatures around the planet. 
Over time, these interactions can shape the evolution of the planet's climate and environment.

You: Please explain the bow shock wave, Van Allen radiation belts in detail

The bow shock wave is a phenomenon that occurs when the solar wind, a stream of charged particles emitted by the sun, interacts with Earth's magnetic field. The solar wind particles are moving at high speeds, and when they encounter the Earth's magnetic field, they slow down and dissipate their energy. This creates a shock wave that propagates downstream from the Earth. The bow shock wave is located about 15 Earth radii (approximately 93,000 miles) upstream from Earth, and it marks the boundary between the solar wind and Earth's magnetosphere.

The Van Allen radiation belts are zones of high-energy charged particles, primarily protons and electrons, that are trapped by Earth's magnetic field. The belts are divided into two main regions: the inner belt, which extends from about 1,000 to 6,000 kilometers above Earth's surface, and the outer belt, which extends from about 13,000 to 65,000 kilometers above the surface. The particles in the belts are trapped by Earth's magnetic field, and they can remain in the belts for long periods of time.

The particles in the Van Allen belts are generated by a variety of processes, including the impact of solar wind particles on the Earth's atmosphere and the decay of radioactive elements in the planet's crust. The belts are important for understanding the behavior of charged particles in Earth's magnetosphere and for studying the effects of space weather on human activities in space. They can also pose a hazard to spacecraft and satellites, as the high-energy particles in the belts can damage sensitive electronics and cause other problems.

You: Does the Van Allen belt present any danger to geostationary satellites? Where are these belts? Are they near any of the orbits?

Yes, the Van Allen radiation belts can present a danger to geostationary satellites and other spacecraft. The high-energy charged particles in the belts can damage sensitive electronics and cause other problems, including radiation damage and single-event upsets. Geostationary satellites are located at an altitude of approximately 35,786 kilometers above Earth's surface, which is just outside the inner belt but still within the outer belt.

The Van Allen radiation belts are located in Earth's magnetosphere, which is the region of space around the planet where the magnetic field is dominant. The inner belt is located between about 1,000 and 6,000 kilometers above the surface, while the outer belt is located between about 13,000 and 65,000 kilometers above the surface. The belts are not located in any specific orbit, but they are present throughout the magnetosphere, which covers a large portion of the space around the planet.

To mitigate the effects of the Van Allen belts on spacecraft, engineers use a variety of techniques, including shielding and designing electronics to be resistant to radiation. Additionally, some spacecraft are designed to operate in orbits that avoid the Van Allen belts, while others are designed to operate within the belts and withstand the effects of the high-energy particles.

You: So when we send spacecraft to the moon, say the ones carrying humans, how do we make sure that the solar wind charged particles will not affect the mission? Are there very specific remedies that we can take?
When sending spacecraft to the moon, there are several steps that can be taken to protect the spacecraft and its occupants from the effects of solar wind charged particles. One approach is to design the spacecraft with a strong magnetic field, which can help deflect the charged particles away from the spacecraft. Another approach is to use radiation shielding, which can help protect the electronics and any human occupants from the effects of radiation.

In addition to these measures, space agencies can also use data from satellites and other spacecraft to monitor the level of solar wind activity and predict when there may be an increased risk of charged particle events. This information can be used to adjust the timing of missions and other activities to minimize exposure to high-energy particles.

It's also important to note that the moon is outside the magnetosphere and therefore not directly affected by the Van Allen belts. However, the moon does receive a significant amount of radiation from the sun and other sources, so measures to protect the spacecraft and its occupants from radiation exposure are still necessary.

Overall, protecting spacecraft and humans from the effects of solar wind charged particles requires a combination of engineering and operational measures, including strong magnetic fields, radiation shielding, and careful mission planning.

You: Do spacecraft actually create strong magnetic fields? Which Apollo spacecraft did this? Or is this hypothetical?

Yes, spacecraft can create strong magnetic fields, although this is not typically a primary design consideration for most spacecraft. Some spacecraft, such as those designed for plasma physics research, may include a magnetic field generator to create controlled conditions for experiments. However, for most spacecraft, the focus is on using existing magnetic fields, such as those generated by the Earth's magnetic field, to protect the spacecraft and its occupants.

The Apollo spacecraft did not create strong magnetic fields, as their primary focus was on propulsion, navigation, and life support systems for the astronauts. However, some of the Apollo missions did include scientific experiments to study the effects of solar wind and other charged particles on the spacecraft and its occupants.

The use of magnetic fields to protect spacecraft and humans from the effects of radiation is an ongoing area of research and development, and there are a number of proposals and concepts for how this might be achieved. For example, some researchers have proposed using superconducting coils to create a magnetic field around the spacecraft, which could help deflect charged particles and other forms of radiation. However, these proposals are still in the early stages of development and are not yet ready for implementation on spacecraft.
```

<!-- original-model-card end -->
hongyin/mistral-0.5b-40k
hongyin
2023-11-04T13:14:04Z
469
2
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "en", "zh", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-10-17T02:40:22Z
---
language:
- en
- zh
license: other
pipeline_tag: text-generation
---

## hongyin/mistral-0.5b-40k

I am pleased to introduce an English–Chinese bilingual autoregressive language model. This model is trained from scratch, uses a unique vocabulary, and has 50 million parameters based on the Mistral model structure. Our goal is to provide a solution that is computationally cheap and easy to run inference with. It's important to note that this is a base model: it is not intended to be used as a chatbot, but rather as a base for further fine-tuning ("alchemy"). We look forward to providing you with a practical model product.

Losing weight is the only way to solve all problems.

## BibTeX entry and citation info

Please cite if you find it helpful.

```
@article{zhu2023metaaid,
  title={MetaAID 2.0: An Extensible Framework for Developing Metaverse Applications via Human-controllable Pre-trained Models},
  author={Zhu, Hongyin},
  journal={arXiv preprint arXiv:2302.13173},
  year={2023}
}
```
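Since the card's usage example was left empty, here is a minimal inference sketch; it assumes standard 🤗 Transformers loading for this repo, and the prompt and generation settings are illustrative only:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hongyin/mistral-0.5b-40k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# This is a base model, so treat it as a plain text continuer, not a chatbot.
inputs = tokenizer("今天天气", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```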
StanfordShahLab/clmbr-t-base
StanfordShahLab
2024-05-07T23:44:00Z
469
40
clmbr
[ "clmbr", "safetensors", "healthcare", "femr", "medical", "arxiv:2307.02028", "license:cc-by-nc-4.0", "region:us" ]
null
2023-12-10T16:06:18Z
---
license: cc-by-nc-4.0
library_name: clmbr
tags:
- healthcare
- femr
- medical
extra_gated_prompt: "You agree to all terms outlined in 'The EHRSHOT Credentialed Health Data License' (see https://shahlab.stanford.edu/ehrshot_license). Access requires a verified CITI training certificate using the same process outlined by PhysioNet (see https://physionet.org/about/citi-course/). Please provide proof via the verification URL, which takes the form https://www.citiprogram.org/verify/?XXXXXX. You agree to not use the model to conduct experiments that cause harm to human subjects."
extra_gated_fields:
  Full Name: text
  Email: text
  Affiliation: text
  CITI Certification Verification URL: text
  I agree to all terms outlined in 'The EHRSHOT Credentialed Health Data License': checkbox
  I agree to use this model for non-commercial use ONLY: checkbox
---

# CLMBR-T-Base

This is a 141 million parameter autoregressive foundation model pretrained on 2.57 million deidentified EHRs from Stanford Medicine. This is the model from [(Wornow et al. 2023)](https://arxiv.org/abs/2307.02028), and is based on the CLMBR architecture originally described in [(Steinberg et al. 2021)](https://www.sciencedirect.com/science/article/pii/S1532046420302653).

As input, this model expects a sequence of coded medical events that have been mapped to Standard Concepts within the [OMOP-CDM vocabulary](https://ohdsi.github.io/CommonDataModel/index.html). The model generates representations of patients which can then be used for downstream prediction tasks. Input patients should be provided in the [MEDS](https://github.com/Medical-Event-Data-Standard/) schema.

## Model Details

### Model Description

- **Developed by:** Shah lab @ Stanford University
- **Funded by:** Stanford Healthcare
- **Shared by:** Shah lab @ Stanford University
- **Model type:** CLMBR [(Steinberg et al. 2021)](https://www.sciencedirect.com/science/article/pii/S1532046420302653)
- **Language(s) (NLP):** Electronic health record codes
- **License:** CC-BY NC 4.0
- **Finetuned from model:** N/A -- trained from scratch

### Model Sources

- **Website:** [https://ehrshot.stanford.edu/](https://ehrshot.stanford.edu/)
- **Github:** [https://github.com/som-shahlab/ehrshot-benchmark/](https://github.com/som-shahlab/ehrshot-benchmark/)
- **Paper:** [EHRSHOT: An EHR Benchmark for Few-Shot Evaluation of Foundation Models](https://arxiv.org/abs/2307.02028)

## Uses

This model is intended to generate representations for patients based on the structured data within their electronic health record. These representations can then be used for downstream tasks such as predicting diagnoses, detecting anomalies, or doing propensity score matching for causal inference.

### Direct Use

You will likely want to tune the model for your downstream use case.

### Out-of-Scope Use

This model is for research purposes only. It is not for use in any real-world decision making that impacts patients, providers, or hospital operations.

## Bias, Risks, and Limitations

This model was trained on a corpus of 2.57 million patients from Stanford Medicine. The model will thus reflect the patterns of how care is delivered at Stanford Medicine, in addition to the racial and socioeconomic makeup of Stanford Medicine's patient base. This model may not generalize well to other hospitals and demographic mixes.

While this is technically a generative model, we have not tested its generative abilities and thus do not anticipate it being used to generate synthetic EHR records.
We aim to explore its generative abilities in future work.

## How to Get Started with the Model

Use the code below to get started with the model.

First, install the necessary libraries.

```bash
pip install torch==2.1.1 femr==0.2.3 datasets==2.15.0 xformers transformers==4.35.2
```

Second, run the following Python script to perform inference on a single patient:

```python
import datetime

import torch

import femr.models.processor
import femr.models.tokenizer
import femr.models.transformer

model_name = "StanfordShahLab/clmbr-t-base"

# Load tokenizer / batch loader
tokenizer = femr.models.tokenizer.FEMRTokenizer.from_pretrained(model_name)
batch_processor = femr.models.processor.FEMRBatchProcessor(tokenizer)

# Load model
model = femr.models.transformer.FEMRModel.from_pretrained(model_name)

# Create an example patient to run inference on
# This patient follows the MEDS schema: https://github.com/Medical-Event-Data-Standard
example_patient = {
    'patient_id': 30,
    'events': [{
        'time': datetime.datetime(2011, 5, 8),
        'measurements': [
            {'code': 'SNOMED/184099003'},
            {'code': 'Visit/IP'},
        ],
    },
    {
        'time': datetime.datetime(2012, 6, 9),
        'measurements': [
            {'code': 'Visit/OP'},
            {'code': 'SNOMED/3950001'}
        ],
    }]
}

raw_batch = batch_processor.convert_patient(example_patient, tensor_type="pt")
batch = batch_processor.collate([raw_batch])

# Run model
with torch.no_grad():
    _, result = model(**batch)

print(result['timestamps'].cpu().numpy().astype('datetime64[s]'))
print(result['patient_ids'])
print(result['representations'])
```

## Training Details

Full training details are provided in our accompanying paper, [EHRSHOT (Wornow et al. 2023)](https://arxiv.org/abs/2307.02028).

### Training Data

The model is trained on 2.57 million patients from the [Stanford Medicine Research Data Repository (STARR)](https://academic.oup.com/jamiaopen/article/6/3/ooad054/7236015), which contains EHR data from both Stanford Health Care (primarily adult care) and Lucile Packard Children’s Hospital (primarily pediatric care). The dataset contains only structured data (i.e. no clinical text or images) and covers demographics (e.g. age, sex, race), diagnoses, procedures, laboratory results, medication prescriptions, and other coded clinical observations. The data is formatted according to the [Observational Medical Outcomes Partnership Common Data Model (OMOP-CDM)](https://ohdsi.github.io/CommonDataModel/cdm53.html). All data that we work with is deidentified.

### Training Procedure

We train our model using an autoregressive next code prediction objective, i.e. predict the next code in a patient's timeline given their previous codes.

#### Preprocessing

We use the [FEMR](https://github.com/som-shahlab/femr/tree/main) Python library for data preprocessing.

#### Training Hyperparameters

* Learning rate: 0.00001
* Context window size: 496
* Internal dropout: 0
* Layers: 12
* Hidden dimension: 768

## Evaluation

We evaluate this model on [the EHRSHOT benchmark](https://ehrshot.stanford.edu). Information on this benchmark, tasks, and results are detailed in [Wornow et al. 2023](https://arxiv.org/pdf/2307.02028.pdf).

## Technical Specifications

This model uses the CLMBR architecture from [(Steinberg et al. 2021)](https://www.sciencedirect.com/science/article/pii/S1532046420302653). The objective is an autoregressive next token prediction task. Please see [Wornow et al. 2023](https://arxiv.org/pdf/2307.02028.pdf) for more details on the specific model architecture.

## Vocabulary

CLMBR is a language model and requires defining a token vocabulary `V`.
However, unlike natural languages, the vocabulary of a structured EHR language model is defined by *medical codes*. Here tokens map to standardized concepts in medical ontologies. Since the union of all tokens from all ontologies, `V_all`, results in a prohibitively large vocabulary, we derive `~V` by filtering to the top `k` most frequent codes as follows:

1. **Knowledge Graphs (G):** A set of `n` medical ontologies (knowledge graphs), `G = ({G_1, G_2, ..., G_n})`, defined by [Athena's OMOP Vocabulary List](https://athena.ohdsi.org/vocabulary/list).
2. **Medical Codes as Tokens:** Each knowledge graph `G_i` has a set of unique medical codes `M_i`. The union of all these codes serves as the tokens in our complete vocabulary `V_all = M_1 ∪ M_2 ∪ ... ∪ M_n`.

Our final, filtered vocabulary is then `~V = sort_freq(V_all)[1:k]`, where frequency is calculated over our [STARR EHR OMOP](https://academic.oup.com/jamiaopen/article/6/3/ooad054/7236015) dataset.

**CLMBR Vocabulary Summary**

- 21 Source Ontologies/Knowledge Graphs
- 65,536 tokens (the max value of `uint16_t`)

| PREFIX | SOURCE | SIZE | EXAMPLE TOKENS |
|:---------------------|:-------------------------------------------------------------------------------------------------|---------:|:---------------------------------------------------|
| LOINC | Logical Observation Identifiers Names and Codes (Regenstrief Institute) | 37,590 | 31790-9, 20449-5 |
| SNOMED | Systematic Nomenclature of Medicine - Clinical Terms (IHTSDO) | 18,174 | 105013009, 200755008 |
| RxNorm | RxNorm (NLM) | 4,678 | 2375327, 372375 |
| CPT4 | Current Procedural Terminology version 4 (AMA) | 3,730 | 00790, 36818 |
| RxNorm Extension | OMOP RxNorm Extension | 255 | OMOP358911, OMOP2153393 |
| ICD10PCS | ICD-10 Procedure Coding System (CMS) | 233 | 10907ZC, 4A0234Z |
| ICD9Proc | International Classification of Diseases, Ninth Revision, Clinical Modification, Volume 3 (NCHS) | 196 | 68.29, 03.93 |
| Cancer Modifier | Diagnostic Modifiers of Cancer (OMOP) | 88 | c-8th\_AJCC/UICC-Stage-2C, p-7th\_AJCC/UICC-Stage-3B |
| HCPCS | Healthcare Common Procedure Coding System (CMS) | 54 | C1878, P7001 |
| ICDO3 | International Classification of Diseases for Oncology, Third Edition (WHO) | 52 | NULL-C34.8, C56.9 |
| CVX | CDC Vaccine Administered CVX (NCIRD) | 41 | 151, 158 |
| Domain | OMOP | 27 | OMOP generated |
| Race | Race and Ethnicity Code Set (USBC) | 5 | 5, 4 |
| OMOP Extension | OMOP Extension (OHDSI) | 3 | OMOP5160861, OMOP4912978 |
| Gender | OMOP Gender | 2 | F, M |
| Ethnicity | OMOP Ethnicity | 2 | Not Hispanic, Hispanic |
| CMS Place of Service | Place of Service Codes for Professional Claims (CMS) | 2 | OMOP4822036, 02 |
| Medicare Specialty | Medicare provider/supplier specialty codes (CMS) | 1 | A0 |
| Condition Type | OMOP | 1 | OMOP4822053 |
| CARE_SITE | STANFORD_CUSTOM | 396 | 7930934, 7929373 |
| Visit | STANFORD_CUSTOM | 6 | ERIP, ER |

## Citation

**BibTeX:**

```
@inproceedings{wornow2023ehrshot,
  title={EHRSHOT: An EHR Benchmark for Few-Shot Evaluation of Foundation Models},
  author={Michael Wornow and Rahul Thapa and Ethan Steinberg and Jason Fries and Nigam Shah},
  booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2023}
}
```

## Model Card Authors

Michael Wornow, Ethan Steinberg, Rahul Thapa, Jason Fries, Nigam H. Shah

## Model Card Contact

Michael Wornow ([email protected])
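As a concrete illustration of the top-`k` frequency filtering described in the Vocabulary section above, here is a small sketch; the code stream and `k` are hypothetical (the real vocabulary is computed over the STARR dataset):

```python
from collections import Counter
from typing import Iterable, List

def build_vocab(code_stream: Iterable[str], k: int) -> List[str]:
    """Return the top-k most frequent codes, i.e. ~V = sort_freq(V_all)[1:k]."""
    return [code for code, _ in Counter(code_stream).most_common(k)]

# Hypothetical stream of OMOP-mapped codes drawn from patient timelines
codes = ["Visit/IP", "SNOMED/184099003", "Visit/IP", "LOINC/31790-9", "Visit/IP"]
print(build_vocab(codes, k=2))  # ['Visit/IP', 'SNOMED/184099003']
```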
rinna/nekomata-7b-instruction
rinna
2024-04-03T08:50:51Z
469
9
transformers
[ "transformers", "pytorch", "qwen", "text-generation", "custom_code", "ja", "en", "dataset:databricks/databricks-dolly-15k", "dataset:kunishou/databricks-dolly-15k-ja", "dataset:izumi-lab/llm-japanese-dataset", "arxiv:2309.16609", "arxiv:2404.01657", "license:other", "autotrain_compatible", "region:us" ]
text-generation
2023-12-19T08:10:07Z
---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
datasets:
- databricks/databricks-dolly-15k
- kunishou/databricks-dolly-15k-ja
- izumi-lab/llm-japanese-dataset
language:
- ja
- en
tags:
- qwen
inference: false
license: other
license_name: tongyi-qianwen-license-agreement
license_link: https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT
---

# `rinna/nekomata-7b-instruction`

![rinna-icon](./rinna.png)

# Overview

The model is the instruction-tuned version of [`rinna/nekomata-7b`](https://huggingface.co/rinna/nekomata-7b). It adopts the Alpaca input format.

* **Model architecture**

    A 32-layer, 4096-hidden-size transformer-based language model. Please refer to the [Qwen paper](https://arxiv.org/abs/2309.16609) for architecture details.

* **Fine-tuning**

    The fine-tuning data is a subset of the following datasets.

    * [Databricks Dolly data](https://huggingface.co/datasets/databricks/databricks-dolly-15k)
    * [Japanese Databricks Dolly data](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja)
    * [FLAN Instruction Tuning data](https://github.com/google-research/FLAN) and its Japanese translation
    * [Izumi lab LLM Japanese dataset](https://github.com/masanorihirano/llm-japanese-dataset/tree/main)
        * The following sections are used
            * alt
            * aozora-txt
            * CourseraParallel
            * ParaNatCom
            * Tab-delimited_Bilingual_Sentence_Pairs
            * tanaka-corpus
            * wikinews
            * wordnet
            * yasashi-japanese
        * The [remaining sections](https://github.com/masanorihirano/llm-japanese-dataset/tree/main/datasets-cc-by-sa) contain commonly used evaluation corpora, so they are skipped to prevent data leakage.

* **Contributors**

    - [Tianyu Zhao](https://huggingface.co/tianyuz)
    - [Kei Sawada](https://huggingface.co/keisawada)

---

# Benchmarking

Please refer to [rinna's LM benchmark page](https://rinnakk.github.io/research/benchmarks/lm/index.html).
---

# How to use the model

~~~~python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("rinna/nekomata-7b-instruction", trust_remote_code=True)

# Use GPU with bf16
# model = AutoModelForCausalLM.from_pretrained("rinna/nekomata-7b-instruction", device_map="auto", trust_remote_code=True, bf16=True)

# Use GPU with fp16
# model = AutoModelForCausalLM.from_pretrained("rinna/nekomata-7b-instruction", device_map="auto", trust_remote_code=True, fp16=True)

# Use CPU
# model = AutoModelForCausalLM.from_pretrained("rinna/nekomata-7b-instruction", device_map="cpu", trust_remote_code=True)

# Automatically select device and precision
model = AutoModelForCausalLM.from_pretrained("rinna/nekomata-7b-instruction", device_map="auto", trust_remote_code=True)

instruction = "次の日本語を英語に翻訳してください。"
input_text = "大規模言語モデル(だいきぼげんごモデル、英: large language model、LLM)は、多数のパラメータ(数千万から数十億)を持つ人工ニューラルネットワークで構成されるコンピュータ言語モデルで、膨大なラベルなしテキストを使用して自己教師あり学習または半教師あり学習によって訓練が行われる。"

prompt = f"""
以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。

### 指示:
{instruction}

### 入力:
{input_text}

### 応答:
"""

token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        token_ids.to(model.device),
        max_new_tokens=200,
        do_sample=True,
        temperature=0.5,
        pad_token_id=tokenizer.pad_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id
    )

output = tokenizer.decode(output_ids.tolist()[0])
print(output)
"""
以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。

### 指示:
次の日本語を英語に翻訳してください。

### 入力:
大規模言語モデル(だいきぼげんごモデル、英: large language model、LLM)は、多数のパラメータ(数千万から数十億)を持つ人工ニューラルネットワークで構成されるコンピュータ言語モデルで、膨大なラベルなしテキストを使用して自己教師あり学習または半教師あり学習によって訓練が行われる。

### 応答:
A large language model (LLM) is a computer language model composed of artificial neural networks with many parameters (from tens of millions to billions) trained by self-supervised learning or semi-supervised learning using a large amount of unlabeled text.<|endoftext|>
"""
~~~~

---

# Tokenization

Please refer to [`rinna/nekomata-7b`](https://huggingface.co/rinna/nekomata-7b) for tokenization details.

---

# How to cite

~~~
@misc{rinna-nekomata-7b-instruction,
    title = {rinna/nekomata-7b-instruction},
    author = {Zhao, Tianyu and Sawada, Kei},
    url = {https://huggingface.co/rinna/nekomata-7b-instruction},
}

@inproceedings{sawada2024release,
    title = {Release of Pre-Trained Models for the {J}apanese Language},
    author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
    booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
    month = {5},
    year = {2024},
    url = {https://arxiv.org/abs/2404.01657},
}
~~~

---

# License

[Tongyi Qianwen LICENSE AGREEMENT](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT)
Pclanglais/Mickey-1928
Pclanglais
2024-01-03T16:43:04Z
469
101
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "dataset:Pclanglais/Mickey-1928-dataset", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:cc0-1.0", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2023-12-31T09:48:26Z
---
license: cc0-1.0
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Mickey
widget:
- text: "drawing of Mickey, theater in background"
  output:
    url: "mickey_theater.jpg"
- text: "drawing of Mickey inspiring the communist revolution"
  output:
    url: "communist_mickey.jpg"
- text: "pop-art painting of Mickey walking in Paris"
  output:
    url: "mickey_paris.jpg"
pipeline_tag: text-to-image
datasets:
- Pclanglais/Mickey-1928-dataset
---

**Mickey-1928** is a fine-tuned version of Stable-Diffusion-XL, trained on 96 public-domain stills from 1928.

<Gallery />

Mickey-1928 can generate images of Mickey, Minnie and, to a much lesser extent, Pete (with the prompt PeteLegPete).

## Dataset

As of 2024, the first three Mickey cartoons are in the public domain. The final dataset includes:
- 40 stills from *Gallopin' Gaucho* (in color)
- 22 stills from *Plane Crazy*
- 34 stills from *Steamboat Willie*.

The stills are not currently available in high quality, so you should not expect consistently good results from Mickey-1928. The color images from *Gallopin' Gaucho* are 360x360 pixels. Now that the cartoons are in the public domain, higher-definition versions will hopefully become available.

The generated images aim to adhere to the 1928 designs so that Mickey, Minnie, and Pete stay in the public domain. This is still a work in progress: while the model is in development, generated images should be checked to ensure they really match the public-domain design.
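The card does not include usage code; here is a minimal generation sketch, assuming the standard diffusers LoRA-loading workflow for SDXL (the prompt is taken from the card's widget examples):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model listed in the card's metadata, then attach the LoRA
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Pclanglais/Mickey-1928")

image = pipe("drawing of Mickey, theater in background").images[0]
image.save("mickey_theater.png")
```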
sambanovasystems/SambaLingo-Thai-Base
sambanovasystems
2024-04-16T22:39:33Z
469
30
transformers
[ "transformers", "pytorch", "llama", "text-generation", "th", "en", "dataset:uonlp/CulturaX", "arxiv:2404.05829", "arxiv:2311.05741", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-15T22:50:07Z
---
license: llama2
datasets:
- uonlp/CulturaX
language:
- th
- en
metrics:
- chrf
- accuracy
- bleu
---

# SambaLingo-Thai-Base

<img src="SambaLingo_Logo.png" width="340" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

SambaLingo-Thai-Base is a pretrained bilingual Thai and English model that adapts [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) to Thai by training on 38 billion tokens from the Thai split of the [Cultura-X](https://huggingface.co/datasets/uonlp/CulturaX) dataset. This model achieves state-of-the-art evaluation results in perplexity and FLORES-200 translation. For the chat version of this model, please see [sambanovasystems/SambaLingo-Thai-Chat](https://huggingface.co/sambanovasystems/SambaLingo-Thai-Chat), or try it out at [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space).

## Model Description

- **Developed by:** [SambaNova Systems](https://sambanova.ai/)
- **Model type:** Language Model
- **Language(s):** Thai, English
- **Finetuned from model:** [Llama 2](https://huggingface.co/meta-llama/Llama-2-7b-hf)
- **Try the chat version of this model**: [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space)
- **Paper:** [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
- **Blog Post**: [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)

## Getting Started

### Loading Model With Hugging Face

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Thai-Base")
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Thai-Base", device_map="auto", torch_dtype="auto")
```

### Suggested Inference Parameters

We suggest setting `do_sample=False` as this is a pretrained checkpoint.

### Prompting Guidelines

This model is a pretrained checkpoint, so to use it effectively please use few-shot prompting with exemplars (a minimal sketch appears after the Tokenizer Details section below). The only other prompt templating required is the standard \<s\> (BOS) token from the Llama tokenizer. If you want to interact with this model with direct questions or queries, please use the chat version of the model that has been aligned with human preferences: [sambanovasystems/SambaLingo-Thai-Chat](https://huggingface.co/sambanovasystems/SambaLingo-Thai-Chat).

## Training Details

All pre-training is done on the [Cultura-X](https://huggingface.co/datasets/uonlp/CulturaX) dataset. We mix the data to be 75% data from the language we are adapting to, and 25% English, as suggested by [Csaki et al.](https://arxiv.org/abs/2311.05741) We pack the data into sequences of length 4096, and ensure that when learning a token we only attend to previous tokens in the context of the corresponding text document. We train with a global batch size of 1024, sequence length of 4096, maximum learning rate of 1e-4 with cosine decay, warmup ratio of 0.01 and a weight decay of 0.1.

## Tokenizer Details

We extended the vocabulary of the base Llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
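Putting the loading code and prompting guidelines together, here is a minimal few-shot generation sketch; the exemplar translation pairs are illustrative, not from the card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Thai-Base")
model = AutoModelForCausalLM.from_pretrained(
    "sambanovasystems/SambaLingo-Thai-Base", device_map="auto", torch_dtype="auto"
)

# Few-shot exemplars, as recommended for the pretrained checkpoint
prompt = (
    "English: Hello\nThai: สวัสดี\n"
    "English: Thank you\nThai: ขอบคุณ\n"
    "English: Good morning\nThai:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```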
## Evaluation

For evaluation results, see our paper: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)

## Uses

### Direct Use

Use of this model is governed by Meta's [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/). Please review and accept the license before downloading the model weights.

### Out-of-Scope Use

SambaLingo should NOT be used for:

- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions

## Bias, Risks, and Limitations

Like all LLMs, SambaLingo has certain limitations:

- Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.

## Acknowledgments

We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.

We would like to give a special thanks to the following groups:

- Meta for open-sourcing Llama 2 and the FLORES-200 dataset
- Nguyen et al. for open-sourcing the CulturaX dataset
- CohereAI for releasing AYA-101 and open-sourcing a multilingual instruction tuning dataset
- EleutherAI for their open-source evaluation framework
- The Hugging Face H4 team for open-sourcing the Zephyr training recipe and the alignment handbook repo

## Cite SambaLingo

```
@misc{csaki2024sambalingo,
      title={SambaLingo: Teaching Large Language Models New Languages},
      author={Zoltan Csaki and Bo Li and Jonathan Li and Qiantong Xu and Pian Pawakapan and Leon Zhang and Yun Du and Hengyu Zhao and Changran Hu and Urmish Thakker},
      year={2024},
      eprint={2404.05829},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
p1atdev/dart-v2-moe-sft
p1atdev
2024-05-11T17:27:21Z
469
7
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "trl", "sft", "optimum", "danbooru", "dataset:isek-ai/danbooru-tags-2024", "base_model:p1atdev/dart-v2-moe-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-06T08:39:50Z
--- library_name: transformers license: apache-2.0 datasets: - isek-ai/danbooru-tags-2024 base_model: p1atdev/dart-v2-moe-base tags: - trl - sft - optimum - danbooru inference: false --- # Dart (Danbooru Tags Transformer) v2 This model is a fine-tuned Dart (Danbooru Tags Transformer) model that generates danbooru tags. Demo: [🤗 Space with ZERO](https://huggingface.co/spaces/p1atdev/danbooru-tags-transformer-v2) ## Model variants |Name|Architecture|Param size|Type| |-|-|-|-| |[v2-moe-sft](https://huggingface.co/p1atdev/dart-v2-moe-sft)|Mixtral|166m|SFT| |[v2-moe-base](https://huggingface.co/p1atdev/dart-v2-moe-base)|Mixtral|166m|Pretrain| |[v2-sft](https://huggingface.co/p1atdev/dart-v2-sft)|Mistral|114m|SFT| |[v2-base](https://huggingface.co/p1atdev/dart-v2-base)|Mistral|114m|Pretrain| |[v2-vectors](https://huggingface.co/p1atdev/dart-v2-vectors)|Embedding|-|Tag Embedding| ## Usage ### Using 🤗Transformers ```py import torch from transformers import AutoTokenizer, AutoModelForCausalLM MODEL_NAME = "p1atdev/dart-v2-moe-sft" tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME) model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16) prompt = ( f"<|bos|>" f"<copyright>vocaloid</copyright>" f"<character>hatsune miku</character>" f"<|rating:general|><|aspect_ratio:tall|><|length:long|>" f"<general>1girl, cat ears<|identity:none|><|input_end|>" ) inputs = tokenizer(prompt, return_tensors="pt").input_ids with torch.no_grad(): outputs = model.generate( inputs, do_sample=True, temperature=1.0, top_p=1.0, top_k=100, max_new_tokens=128, num_beams=1, ) print(", ".join([tag for tag in tokenizer.batch_decode(outputs[0], skip_special_tokens=True) if tag.strip() != ""])) # vocaloid, hatsune miku, 1girl, cat ears, closed mouth, detached sleeves, dress, expressionless, from behind, full body, green theme, hair ornament, hair ribbon, headphones, high heels, holding, holding microphone, long hair, microphone, monochrome, necktie, ribbon, short dress, shoulder tattoo, simple background, sleeveless, sleeveless dress, spot color, standing, tattoo, thighhighs, twintails, very long hair, white background ``` ### Using 📦`dartrs` library > [!WARNING] > This library is very experimental and there will be breaking changes in the future. [📦`dartrs`](https://github.com/p1atdev/dartrs) is a [🤗`candle`](https://github.com/huggingface/candle) backend inference library for Dart v2 models. 
```bash
pip install -U dartrs
```

```py
from dartrs.dartrs import DartTokenizer
from dartrs.utils import get_generation_config
from dartrs.v2 import (
    compose_prompt,
    MixtralModel,
    V2Model,
)
import time
import os

MODEL_NAME = "p1atdev/dart-v2-moe-sft"

model = MixtralModel.from_pretrained(MODEL_NAME)
tokenizer = DartTokenizer.from_pretrained(MODEL_NAME)

config = get_generation_config(
    prompt=compose_prompt(
        copyright="vocaloid",
        character="hatsune miku",
        rating="general",  # sfw, general, sensitive, nsfw, questionable, explicit
        aspect_ratio="tall",  # ultra_wide, wide, square, tall, ultra_tall
        length="medium",  # very_short, short, medium, long, very_long
        identity="none",  # none, lax, strict
        prompt="1girl, cat ears",
    ),
    tokenizer=tokenizer,
)

start = time.time()
output = model.generate(config)
end = time.time()

print(output)
print(f"Time taken: {end - start:.2f}s")
# cowboy shot, detached sleeves, empty eyes, green eyes, green hair, green necktie, hair in own mouth, hair ornament, letterboxed, light frown, long hair, long sleeves, looking to the side, necktie, parted lips, shirt, sleeveless, sleeveless shirt, twintails, wing collar
# Time taken: 0.26s
```

## Prompt Format

```py
prompt = (
    f"<|bos|>"
    f"<copyright>{copyright_tags_here}</copyright>"
    f"<character>{character_tags_here}</character>"
    f"<|rating:general|><|aspect_ratio:tall|><|length:long|>"
    f"<general>{general_tags_here}<|identity:none|><|input_end|>"
)
```

- Rating tag: `<|rating:sfw|>`, `<|rating:general|>`, `<|rating:sensitive|>`, `<|rating:nsfw|>`, `<|rating:questionable|>`, `<|rating:explicit|>`
  - `sfw`: randomly generates tags in the `general` or `sensitive` rating categories.
  - `general`: generates tags in the `general` rating category.
  - `sensitive`: generates tags in the `sensitive` rating category.
  - `nsfw`: randomly generates tags in the `questionable` or `explicit` rating categories.
  - `questionable`: generates tags in the `questionable` rating category.
  - `explicit`: generates tags in the `explicit` rating category.
- Aspect ratio tag: `<|aspect_ratio:ultra_wide|>`, `<|aspect_ratio:wide|>`, `<|aspect_ratio:square|>`, `<|aspect_ratio:tall|>`, `<|aspect_ratio:ultra_tall|>`
  - `ultra_wide`: generates tags suited to extremely wide aspect-ratio images. (~2:1)
  - `wide`: generates tags suited to wide aspect-ratio images. (2:1~9:8)
  - `square`: generates tags suited to square aspect-ratio images. (9:8~8:9)
  - `tall`: generates tags suited to tall aspect-ratio images. (8:9~1:2)
  - `ultra_tall`: generates tags suited to extremely tall aspect-ratio images. (1:2~)
- Length tag: `<|length:very_short|>`, `<|length:short|>`, `<|length:medium|>`, `<|length:long|>`, `<|length:very_long|>`
  - `very_short`: generates about 10 tags in total.
  - `short`: generates about 20 tags in total.
  - `medium`: generates about 30 tags in total.
  - `long`: generates about 40 tags in total.
  - `very_long`: generates more than 40 tags in total.
- **Identity tag**: `<|identity:none|>`, `<|identity:lax|>`, `<|identity:strict|>`
  - This tag specifies how strictly to preserve the identity of the characters or subjects in the provided tags.
  - `none`: recommended if the specified general tags are very few. It generates tags very creatively, but sometimes ignores the condition of the general tags.
  - `lax`: recommended if you want to keep the identity of the characters or subjects in the general tags. This tag tries not to generate tags that conflict with the input general tags.
  - `strict`: recommended if you strongly want to keep the identity of the characters or subjects in the general tags.
It tries, more strictly than `lax`, not to generate tags that conflict with the input general tags, but it is also less creative; if you don't like the result with `strict`, try `lax` or `none`.

## Model Details

### Model Description

- **Developed by:** Plat
- **Model type:** Causal language model
- **Language(s) (NLP):** Danbooru tags
- **License:** Apache-2.0
- **Finetuned from model:** [dart-v2-moe-base](https://huggingface.co/p1atdev/dart-v2-moe-base)
- **Demo:** Available on [🤗 Space](https://huggingface.co/spaces/p1atdev/danbooru-tags-transformer-v2)

## Training Details

### Training Data

This model was trained with:

- [isek-ai/danbooru-tags-2024](https://huggingface.co/datasets/isek-ai/danbooru-tags-2024/tree/202403-at20240423) with revision `202403-at20240423`: a 7M-sample danbooru tags dataset covering 2005 through 2024/03/31.

### Training Procedure

TODO

#### Training Hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 1024
- eval_batch_size: 256
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4

## Evaluation

Evaluation has not yet been performed.

#### Model Architecture and Objective

The architecture of this model is [Mixtral](https://huggingface.co/docs/transformers/model_doc/mixtral). See details in [config.json](./config.json).

### Compute Infrastructure

A server in a university laboratory

#### Hardware

8x RTX A6000

#### Software

- Dataset processing: [🤗 Datasets](https://github.com/huggingface/datasets)
- Training: [🤗 Transformers](https://github.com/huggingface/transformers)
- SFT: [🤗 TRL](https://github.com/huggingface/trl)
- Inference library: [📦 dartrs](https://github.com/p1atdev/dartrs)
- Backend: [🤗 candle](https://github.com/huggingface/candle)

## Related Projects

- [dart-v1](https://huggingface.co/p1atdev/dart-v1): The first version of the Dart model.
- [KBlueLeaf/DanTagGen](https://huggingface.co/collections/KBlueLeaf/dantaggen-65f82fa9335881a67573556b): The Aspect Ratio tag was inspired by this project.
- [furusu/danbooru-tag-similarity](https://huggingface.co/spaces/furusu/danbooru-tag-similarity): The idea of clustering tags and its training method was inspired by this project.
nihaomur/Meta-Llama-3-8B-Instruct-Q5_K_M.gguf
nihaomur
2024-06-25T06:37:50Z
469
0
null
[ "gguf", "region:us" ]
null
2024-05-07T08:45:46Z
Entry not found
mradermacher/bizblend-70b-fresh-GGUF
mradermacher
2024-06-12T16:03:24Z
469
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "sft", "en", "base_model:SatSDev/bizblend-70b-fresh", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-12T11:55:35Z
--- base_model: SatSDev/bizblend-70b-fresh language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/SatSDev/bizblend-70b-fresh <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/bizblend-70b-fresh-GGUF/resolve/main/bizblend-70b-fresh.Q2_K.gguf) | Q2_K | 26.5 | | | [GGUF](https://huggingface.co/mradermacher/bizblend-70b-fresh-GGUF/resolve/main/bizblend-70b-fresh.IQ3_XS.gguf) | IQ3_XS | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/bizblend-70b-fresh-GGUF/resolve/main/bizblend-70b-fresh.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/bizblend-70b-fresh-GGUF/resolve/main/bizblend-70b-fresh.Q3_K_S.gguf) | Q3_K_S | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/bizblend-70b-fresh-GGUF/resolve/main/bizblend-70b-fresh.IQ3_M.gguf) | IQ3_M | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/bizblend-70b-fresh-GGUF/resolve/main/bizblend-70b-fresh.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/bizblend-70b-fresh-GGUF/resolve/main/bizblend-70b-fresh.Q3_K_L.gguf) | Q3_K_L | 37.2 | | | [GGUF](https://huggingface.co/mradermacher/bizblend-70b-fresh-GGUF/resolve/main/bizblend-70b-fresh.IQ4_XS.gguf) | IQ4_XS | 38.4 | | | [GGUF](https://huggingface.co/mradermacher/bizblend-70b-fresh-GGUF/resolve/main/bizblend-70b-fresh.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/bizblend-70b-fresh-GGUF/resolve/main/bizblend-70b-fresh.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/bizblend-70b-fresh-GGUF/resolve/main/bizblend-70b-fresh.Q5_K_S.gguf) | Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/bizblend-70b-fresh-GGUF/resolve/main/bizblend-70b-fresh.Q5_K_M.gguf) | Q5_K_M | 50.0 | | | [PART 1](https://huggingface.co/mradermacher/bizblend-70b-fresh-GGUF/resolve/main/bizblend-70b-fresh.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/bizblend-70b-fresh-GGUF/resolve/main/bizblend-70b-fresh.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality | | [PART 1](https://huggingface.co/mradermacher/bizblend-70b-fresh-GGUF/resolve/main/bizblend-70b-fresh.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/bizblend-70b-fresh-GGUF/resolve/main/bizblend-70b-fresh.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
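For the two-part Q6_K and Q8_0 files in the table above, here is a minimal reassembly sketch, assuming the parts are plain byte-splits of a single GGUF file (the reassembly approach described in the linked READMEs; filenames taken from the table):

```bash
# Concatenate the raw split parts back into one GGUF file, then run it with llama.cpp
cat bizblend-70b-fresh.Q6_K.gguf.part1of2 \
    bizblend-70b-fresh.Q6_K.gguf.part2of2 > bizblend-70b-fresh.Q6_K.gguf
llama-cli -m bizblend-70b-fresh.Q6_K.gguf -p "Hello"
```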
PouL0u/gazprom-stabledif15
PouL0u
2024-06-24T16:51:49Z
469
0
diffusers
[ "diffusers", "safetensors", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-06-14T22:16:08Z
distributed/gpt2-250m
distributed
2024-07-02T17:08:18Z
469
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-20T07:45:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
CHE-72/TAIDE-LX-7B-Chat-Q2_K-GGUF
CHE-72
2024-06-22T17:48:19Z
469
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:taide/TAIDE-LX-7B-Chat", "license:other", "region:us" ]
null
2024-06-22T17:48:08Z
---
base_model: taide/TAIDE-LX-7B-Chat
license: other
license_name: taide-l-models-community-license-agreement
license_link: https://drive.google.com/file/d/1FcUZjbUH6jr4xoCyAronN_slLgcdhEUd/view
tags:
- llama-cpp
- gguf-my-repo
extra_gated_heading: 您需要先同意授權條款才能使用此模型
extra_gated_fields:
  姓名(Name): text
  生日(Date of birth): date_picker
  國家(Country): country
  所屬單位(Affiliation): text
  geo: ip_location
  按下送出表示您同意社群授權同意書與個人資料蒐集告知聲明(By clicking Submit below I accept the terms of the license and privacy policy): checkbox
extra_gated_prompt: '* ### [TAIDE L 類模型社群授權同意書(License)](https://drive.google.com/file/d/1FcUZjbUH6jr4xoCyAronN_slLgcdhEUd/view)
  * ### [個人資料蒐集告知聲明(Privacy policy)](https://drive.google.com/file/d/1JTfZu_MdU_TR1-1sn2jbQyW7TLrxjwS5/view)'
extra_gated_button_content: 送出(Submit)
---

# CHE-72/TAIDE-LX-7B-Chat-Q2_K-GGUF

This model was converted to GGUF format from [`taide/TAIDE-LX-7B-Chat`](https://huggingface.co/taide/TAIDE-LX-7B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/taide/TAIDE-LX-7B-Chat) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q2_K-GGUF --hf-file taide-lx-7b-chat-q2_k.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q2_K-GGUF --hf-file taide-lx-7b-chat-q2_k.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q2_K-GGUF --hf-file taide-lx-7b-chat-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/TAIDE-LX-7B-Chat-Q2_K-GGUF --hf-file taide-lx-7b-chat-q2_k.gguf -c 2048
```
Tanvir1337/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B-Q6_K-GGUF
Tanvir1337
2024-06-26T10:41:17Z
469
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "mistral", "llama-cpp", "gguf-my-repo", "text-generation", "en", "dataset:Severian/Internal-Knowledge-Map", "base_model:Severian/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
2024-06-26T10:40:48Z
---
base_model: Severian/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B
datasets:
- Severian/Internal-Knowledge-Map
language:
- en
license: mit
pipeline_tag: text-generation
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- llama-cpp
- gguf-my-repo
---

# Tanvir1337/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B-Q6_K-GGUF

This model was converted to GGUF format from [`Severian/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B`](https://huggingface.co/Severian/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Severian/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Tanvir1337/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B-Q6_K-GGUF --hf-file mistral-v0.2-nexus-internal-knowledge-map-7b-q6_k.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Tanvir1337/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B-Q6_K-GGUF --hf-file mistral-v0.2-nexus-internal-knowledge-map-7b-q6_k.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Tanvir1337/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B-Q6_K-GGUF --hf-file mistral-v0.2-nexus-internal-knowledge-map-7b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Tanvir1337/Mistral-v0.2-Nexus-Internal-Knowledge-Map-7B-Q6_K-GGUF --hf-file mistral-v0.2-nexus-internal-knowledge-map-7b-q6_k.gguf -c 2048
```
NikolayKozloff/Llama-3-neoAI-8B-Chat-v0.1-IQ4_NL-GGUF
NikolayKozloff
2024-06-26T16:53:13Z
469
1
null
[ "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "ja", "en", "base_model:neoai-inc/Llama-3-neoAI-8B-Chat-v0.1", "license:llama3", "region:us" ]
text-generation
2024-06-26T16:52:48Z
---
base_model: neoai-inc/Llama-3-neoAI-8B-Chat-v0.1
language:
- ja
- en
license: llama3
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---

# NikolayKozloff/Llama-3-neoAI-8B-Chat-v0.1-IQ4_NL-GGUF

This model was converted to GGUF format from [`neoai-inc/Llama-3-neoAI-8B-Chat-v0.1`](https://huggingface.co/neoai-inc/Llama-3-neoAI-8B-Chat-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/neoai-inc/Llama-3-neoAI-8B-Chat-v0.1) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Llama-3-neoAI-8B-Chat-v0.1-IQ4_NL-GGUF --hf-file llama-3-neoai-8b-chat-v0.1-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo NikolayKozloff/Llama-3-neoAI-8B-Chat-v0.1-IQ4_NL-GGUF --hf-file llama-3-neoai-8b-chat-v0.1-iq4_nl-imat.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Llama-3-neoAI-8B-Chat-v0.1-IQ4_NL-GGUF --hf-file llama-3-neoai-8b-chat-v0.1-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Llama-3-neoAI-8B-Chat-v0.1-IQ4_NL-GGUF --hf-file llama-3-neoai-8b-chat-v0.1-iq4_nl-imat.gguf -c 2048
```
timm/eva_giant_patch14_560.m30m_ft_in22k_in1k
timm
2024-02-10T23:28:59Z
468
2
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:merged-30m", "dataset:imagenet-22k", "arxiv:2211.07636", "license:mit", "region:us" ]
image-classification
2022-12-23T02:51:50Z
--- license: mit library_name: timm tags: - image-classification - timm datasets: - imagenet-1k - merged-30m - imagenet-22k --- # Model card for eva_giant_patch14_560.m30m_ft_in22k_in1k An EVA image classification model. Pretrained on Merged-30M (ImageNet-22K, CC12M, CC3M, Object365, COCO (train), ADE20K (train)) with masked image modeling (using OpenAI CLIP-L as a MIM teacher) and fine-tuned on ImageNet-22k then on ImageNet-1k by paper authors. NOTE: `timm` checkpoints are float32 for consistency with other models. Original checkpoints are float16 or bfloat16 in some cases, see originals if that's preferred. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 1014.4 - GMACs: 1906.8 - Activations (M): 2577.2 - Image size: 560 x 560 - **Papers:** - EVA: Exploring the Limits of Masked Visual Representation Learning at Scale: https://arxiv.org/abs/2211.07636 - **Pretrain Dataset:** - Merged-30M - ImageNet-22k - **Dataset:** ImageNet-1k - **Original:** - https://github.com/baaivision/EVA - https://huggingface.co/BAAI/EVA ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import torch import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('eva_giant_patch14_560.m30m_ft_in22k_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'eva_giant_patch14_560.m30m_ft_in22k_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 1601, 1408) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
|model |top1 |top5 |param_count|img_size| |-----------------------------------------------|------|------|-----------|--------| |eva02_large_patch14_448.mim_m38m_ft_in22k_in1k |90.054|99.042|305.08 |448 | |eva02_large_patch14_448.mim_in22k_ft_in22k_in1k|89.946|99.01 |305.08 |448 | |eva_giant_patch14_560.m30m_ft_in22k_in1k |89.792|98.992|1014.45 |560 | |eva02_large_patch14_448.mim_in22k_ft_in1k |89.626|98.954|305.08 |448 | |eva02_large_patch14_448.mim_m38m_ft_in1k |89.57 |98.918|305.08 |448 | |eva_giant_patch14_336.m30m_ft_in22k_in1k |89.56 |98.956|1013.01 |336 | |eva_giant_patch14_336.clip_ft_in1k |89.466|98.82 |1013.01 |336 | |eva_large_patch14_336.in22k_ft_in22k_in1k |89.214|98.854|304.53 |336 | |eva_giant_patch14_224.clip_ft_in1k |88.882|98.678|1012.56 |224 | |eva02_base_patch14_448.mim_in22k_ft_in22k_in1k |88.692|98.722|87.12 |448 | |eva_large_patch14_336.in22k_ft_in1k |88.652|98.722|304.53 |336 | |eva_large_patch14_196.in22k_ft_in22k_in1k |88.592|98.656|304.14 |196 | |eva02_base_patch14_448.mim_in22k_ft_in1k |88.23 |98.564|87.12 |448 | |eva_large_patch14_196.in22k_ft_in1k |87.934|98.504|304.14 |196 | |eva02_small_patch14_336.mim_in22k_ft_in1k |85.74 |97.614|22.13 |336 | |eva02_tiny_patch14_336.mim_in22k_ft_in1k |80.658|95.524|5.76 |336 | ## Citation ```bibtex @article{EVA, title={EVA: Exploring the Limits of Masked Visual Representation Learning at Scale}, author={Fang, Yuxin and Wang, Wen and Xie, Binhui and Sun, Quan and Wu, Ledell and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue}, journal={arXiv preprint arXiv:2211.07636}, year={2022} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
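As a follow-on to the image-embedding example above, here is a minimal sketch (our addition, not part of the original card) for comparing two images by cosine similarity of their pooled EVA features:

```python
# Sketch: cosine similarity between pooled EVA features of two images.
# Assumes `model`, `transforms`, and `img` from the embedding example above,
# plus a second PIL image `img2`.
import torch
import torch.nn.functional as F

with torch.no_grad():
    e1 = model(transforms(img).unsqueeze(0))   # (1, num_features)
    e2 = model(transforms(img2).unsqueeze(0))  # (1, num_features)

sim = F.cosine_similarity(e1, e2).item()
print(f"cosine similarity: {sim:.4f}")
```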
natsusakiyomi/IrisMix
natsusakiyomi
2023-08-17T21:33:05Z
468
51
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "ja", "en", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-04-28T12:12:24Z
--- license: creativeml-openrail-m language: - ja - en pipeline_tag: text-to-image tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image library_name: diffusers --- <div class="flex justify-center"> <div class="container p-0 w-100"> <img class="mt-0 object-cover rounded-t-lg w-100" style="height: 320px;" src="https://huggingface.co/natsusakiyomi/HimawariMixs/resolve/main/image/header1.jpeg" width="100%"/> <div class="flex px-4"> <div class="flex-auto"> <h1 class="mb-2 text-3xl font-bold leading-tight" style="color: rgb(255, 151, 0/var(--tw-text-opacity));"> IrisMixSeries </h1> <p class="mb-4 text-base text-neutral-600 dark:text-neutral-200"> 彩度が高いVAE内蔵型かわいい系モデル<br> Cute model with built-in VAE with high color saturation </p> </div> <div> <a href="https://twitter.com/min__san" class="mb-2 inline-block rounded px-6 py-2.5 text-white shadow-md" style="background-color: #1da1f2"> <svg xmlns="http://www.w3.org/2000/svg" class="h-3.5 w-3.5" fill="currentColor" viewBox="0 0 24 24"> <path d="M24 4.557c-.883.392-1.832.656-2.828.775 1.017-.609 1.798-1.574 2.165-2.724-.951.564-2.005.974-3.127 1.195-.897-.957-2.178-1.555-3.594-1.555-3.179 0-5.515 2.966-4.797 6.045-4.091-.205-7.719-2.165-10.148-5.144-1.29 2.213-.669 5.108 1.523 6.574-.806-.026-1.566-.247-2.229-.616-.054 2.281 1.581 4.415 3.949 4.89-.693.188-1.452.232-2.224.084.626 1.956 2.444 3.379 4.6 3.419-2.07 1.623-4.678 2.348-7.29 2.04 2.179 1.397 4.768 2.212 7.548 2.212 9.142 0 14.307-7.721 13.995-14.646.962-.695 1.797-1.562 2.457-2.549z" /> </svg> </a> </div> </div> </div> </div> <hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));"> <h3 id="IrisMix-v3" class="mt-0 text-2xl"> <code>IrisMix-v3</code> <small></small> </h3> <div> IrisMix-v2の改造モデル<br> v3と比べ服装や表現方法などが増えました<br> <h4>🖼️ 例 / Examples</h4> <div class="container mx-auto px-2"> <div class="flex flex-wrap min-w-min items-baseline"> <div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;"> <div class="flex-1"> <img alt="gallery" class="block h-full w-full rounded-t-lg object-contain object-center" src="https://pbs.twimg.com/media/F2-RW2KboAAQwa8?format=jpg&name=large" loading="lazy" /> </div> <div class="w-full"> <pre class="w-full" style="white-space: pre-line;"> ぴこさん作 <a href="https://twitter.com/pico_nano1" target="_blank">ぴこのTwitter</a> </pre> </div> </div> <div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;"> <div class="w-full"> <img alt="gallery" class="block h-full w-full rounded-t-lg object-contain object-center" src="https://pbs.twimg.com/media/F0NpumWaUAAJnyj?format=jpg&name=large" loading="lazy" /> </div> <div class="w-full"> <pre class="w-full" style="white-space: pre-line;"> Steps: 40 Sampler: DPM++ SDE Karras CFG scale: 9.5 Denoising strength: 0.6 Hires upscale: 2.5 Hires upscaler: Latent </pre> </div> </div> <div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;"> <div class="flex-1"> <img alt="gallery" class="block h-full w-full rounded-t-lg object-contain object-center" src="https://pbs.twimg.com/media/F0ABTHaakAAwgnf?format=jpg&name=large" loading="lazy" /> </div> <div class="w-full"> <pre class="w-full" style="white-space: pre-line;"> Steps: 40 Sampler: DPM++ SDE Karras CFG scale: 9.5 Denoising strength: 0.6 Hires upscale: 2.5 Hires upscaler: Latent </pre> </div> </div> <div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;"> <div class="w-full"> <img alt="gallery" class="block h-full w-full 
rounded-t-lg object-contain object-center" src="https://pbs.twimg.com/media/F0NotY9aIAcirK3?format=jpg&name=medium" loading="lazy" /> </div> <div class="w-full"> <pre class="w-full" style="white-space: pre-line;"> Steps: 40 Sampler: DPM++ SDE Karras CFG scale: 9.5 Denoising strength: 0.6 Hires upscale: 2.5 Hires upscaler: Latent </pre> </div> </div> </div> </div> </div> <h4>📄 ライセンス / License</h4> <div class="px-2"> <table class="table-fixed border mt-0 text-xs"> <tbody> <tr> <td class="px-4 text-base" colspan="2"> <a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license"> 修正 CreativeML OpenRAIL-M ライセンス / Modified CreativeML OpenRAIL-M license </a> </td> </tr> <tr> <td class="align-middle px-2 w-8"> <span class="text-green-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" /> </svg> </span> </td> <td> このモデルのクレジットを入れずに使用する<br> Use the model without crediting the creator </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span class="text-red-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" /> </svg> </span> </td> <td> このモデルで生成した画像を商用利用する<br> Sell images they generate </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span class="text-red-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" /> </svg> </span> </td> <td> このモデルを商用の画像生成サービスで利用する</br> Run on services that generate images for money </td> </tr> <tr> <td class="align-middle px-2 w-8"> <span class="text-green-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" /> </svg> </span> </td> <td> このモデルを使用したマージモデルを共有する<br> Share merges using this model </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span class="text-red-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" /> </svg> </span> </td> <td> このモデル、またはこのモデルをマージしたモデルを販売する</br> Sell this model or merges using this model </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span class="text-red-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" /> </svg> </span> </td> <td> このモデルをマージしたモデルに異なる権限を設定する</br> Have different permissions when sharing merges </td> </tr> </tbody> </table> </div> <hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));"> <h3 id="IrisMix-v2" class="mt-0 text-2xl"> <code>IrisMix-v2</code> <small></small> </h3> <div> IrisMix-v2は過去のIrisMix-v1等とは全く中身が違い一からマージしたモデル <h4>🖼️ 例 / Examples</h4> <div class="container mx-auto px-2"> <div class="flex flex-wrap min-w-min items-baseline"> <div class="p-1 flex-1" style="width: 50%; min-width: 320px; 
flex-basis: 50%;"> <div class="w-full"> <img alt="gallery" class="block h-full w-full rounded-t-lg object-contain object-center" src="https://pbs.twimg.com/media/F2miVUkaUAAvd_K?format=jpg&name=large" loading="lazy" /> </div> <div class="w-full"> <pre class="w-full" style="white-space: pre-line;"> animel7316さん作 <a href="https://twitter.com/AiGuradoni46892">guradoni AIさんのTwitter</a> </pre> </div> </div> <div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;"> <div class="flex-1"> <img alt="gallery" class="block h-full w-full rounded-t-lg object-contain object-center" src="https://pbs.twimg.com/media/Fy20xmsaYAEDOuZ?format=jpg&name=medium" loading="lazy" /> </div> <div class="w-full"> <pre class="w-full" style="white-space: pre-line;"> Steps: 40 Sampler: DPM++ SDE Karras CFG scale: 9.5 Denoising strength: 0.6 Hires upscale: 2.5 Hires upscaler: Latent </pre> </div> </div> <div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;"> <div class="flex-1"> <img alt="gallery" class="block h-full w-full rounded-t-lg object-contain object-center" src="https://pbs.twimg.com/media/Fy5O9SYaYAAYBpi?format=jpg&name=large" loading="lazy" /> </div> <div class="w-full"> <pre class="w-full" style="white-space: pre-line;"> Steps: 40 Sampler: DPM++ SDE Karras CFG scale: 9.5 Denoising strength: 0.6 Hires upscale: 2.5 Hires upscaler: Latent </pre> </div> </div> <div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;"> <div class="w-full"> <img alt="gallery" class="block h-full w-full rounded-t-lg object-contain object-center" src="https://pbs.twimg.com/media/Fz2HINsaIAAd8xU?format=jpg&name=large" loading="lazy" /> </div> <div class="w-full"> <pre class="w-full" style="white-space: pre-line;"> Steps: 40 Sampler: DPM++ SDE Karras CFG scale: 9.5 Denoising strength: 0.6 Hires upscale: 2.5 Hires upscaler: Latent </pre> </div> </div> </div> </div> </div> <h4>📄 ライセンス / License</h4> <div class="px-2"> <table class="table-fixed border mt-0 text-xs"> <tbody> <tr> <td class="px-4 text-base" colspan="2"> <a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license"> 修正 CreativeML OpenRAIL-M ライセンス / Modified CreativeML OpenRAIL-M license </a> </td> </tr> <tr> <td class="align-middle px-2 w-8"> <span class="text-green-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" /> </svg> </span> </td> <td> このモデルのクレジットを入れずに使用する<br> Use the model without crediting the creator </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span class="text-red-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" /> </svg> </span> </td> <td> このモデルで生成した画像を商用利用する<br> Sell images they generate </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span class="text-red-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" /> </svg> </span> </td> <td> このモデルを商用の画像生成サービスで利用する</br> Run on services that generate images for money </td> </tr> <tr> <td class="align-middle px-2 w-8"> <span class="text-green-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" 
viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" /> </svg> </span> </td> <td> このモデルを使用したマージモデルを共有する<br> Share merges using this model </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span class="text-red-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" /> </svg> </span> </td> <td> このモデル、またはこのモデルをマージしたモデルを販売する</br> Sell this model or merges using this model </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span class="text-red-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" /> </svg> </span> </td> <td> このモデルをマージしたモデルに異なる権限を設定する</br> Have different permissions when sharing merges </td> </tr> </tbody> </table> </div> <hr class="my-6 h-0.5 border-t-0 opacity-100 dark:opacity-50" style="background-color: rgb(245 245 245/var(--tw-bg-opacity));"> <h3 id="IrisMix-v1" class="mt-0 text-2xl"> <code>IrisMix-v1</code> <small></small> </h3> <div> 配布中止モデル <h4>📄 ライセンス / License</h4> <div class="px-2"> <table class="table-fixed border mt-0 text-xs"> <tbody> <tr> <td class="px-4 text-base" colspan="2"> <a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license"> 修正 CreativeML OpenRAIL-M ライセンス / Modified CreativeML OpenRAIL-M license </a> </td> </tr> <tr> <td class="align-middle px-2 w-8"> <span class="text-green-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" /> </svg> </span> </td> <td> このモデルのクレジットを入れずに使用する<br> Use the model without crediting the creator </td> </tr> <tr> <td class="align-middle px-2 w-8"> <span class="text-green-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" /> </svg> </span> </td> <td> このモデルで生成した画像を商用利用する<br> Sell images they generate </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span class="text-red-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" /> </svg> </span> </td> <td> このモデルを商用の画像生成サービスで利用する</br> Run on services that generate images for money </td> </tr> <tr> <td class="align-middle px-2 w-8"> <span class="text-green-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M4.5 12.75l6 6 9-13.5" /> </svg> </span> </td> <td> このモデルを使用したマージモデルを共有する<br> Share merges using this model </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span class="text-red-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" /> </svg> </span> </td> <td> 
このモデル、またはこのモデルをマージしたモデルを販売する</br> Sell this model or merges using this model </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span class="text-red-500"> <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" class="w-6 h-6"> <path stroke-linecap="round" stroke-linejoin="round" d="M6 18L18 6M6 6l12 12" /> </svg> </span> </td> <td> このモデルをマージしたモデルに異なる権限を設定する</br> Have different permissions when sharing merges </td> </tr> </tbody> </table> </div> --- # 作者&連絡先 / Author & Contact Twitter: [@min__san](https://twitter.com/min__san)
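The card itself ships no code, so here is a minimal text-to-image sketch with 🤗 `diffusers` (our addition; the repo id comes from this page, but treating it as a standard Stable Diffusion 1.x pipeline checkpoint with its built-in VAE is an assumption):

```python
# Sketch (not from the original card): generate an image with IrisMix,
# assuming the repo hosts standard diffusers-format SD 1.x weights.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "natsusakiyomi/IrisMix", torch_dtype=torch.float16
).to("cuda")

# hypothetical prompt for illustration only
image = pipe("1girl, looking at viewer, flowers", num_inference_steps=40).images[0]
image.save("irismix_sample.png")
```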
ychenNLP/nllb-200-distilled-1.3B-easyproject
ychenNLP
2023-08-29T23:20:18Z
468
2
transformers
[ "transformers", "pytorch", "m2m_100", "text2text-generation", "nllb", "translation", "ace", "acm", "acq", "aeb", "af", "ajp", "ak", "als", "am", "apc", "ar", "ars", "ary", "arz", "as", "ast", "awa", "ayr", "azb", "azj", "ba", "bm", "ban", "be", "bem", "bn", "bho", "bjn", "bo", "bs", "bug", "bg", "ca", "ceb", "cs", "cjk", "ckb", "crh", "cy", "da", "de", "dik", "dyu", "dz", "el", "en", "eo", "et", "eu", "ee", "fo", "fj", "fi", "fon", "fr", "fur", "fuv", "gaz", "gd", "ga", "gl", "gn", "gu", "ht", "ha", "he", "hi", "hne", "hr", "hu", "hy", "ig", "ilo", "id", "is", "it", "jv", "ja", "kab", "kac", "kam", "kn", "ks", "ka", "kk", "kbp", "kea", "khk", "km", "ki", "rw", "ky", "kmb", "kmr", "knc", "kg", "ko", "lo", "lij", "li", "ln", "lt", "lmo", "ltg", "lb", "lua", "lg", "luo", "lus", "lvs", "mag", "mai", "ml", "mar", "min", "mk", "mt", "mni", "mos", "mi", "my", "nl", "nn", "nb", "npi", "nso", "nus", "ny", "oc", "ory", "pag", "pa", "pap", "pbt", "pes", "plt", "pl", "pt", "prs", "quy", "ro", "rn", "ru", "sg", "sa", "sat", "scn", "shn", "si", "sk", "sl", "sm", "sn", "sd", "so", "st", "es", "sc", "sr", "ss", "su", "sv", "swh", "szl", "ta", "taq", "tt", "te", "tg", "tl", "th", "ti", "tpi", "tn", "ts", "tk", "tum", "tr", "tw", "tzm", "ug", "uk", "umb", "ur", "uzn", "vec", "vi", "war", "wo", "xh", "ydd", "yo", "yue", "zh", "zsm", "zu", "dataset:flores-200", "arxiv:2211.15613", "license:cc-by-nc-4.0", "autotrain_compatible", "region:us" ]
translation
2023-05-03T06:21:41Z
--- language: - ace - acm - acq - aeb - af - ajp - ak - als - am - apc - ar - ars - ary - arz - as - ast - awa - ayr - azb - azj - ba - bm - ban - be - bem - bn - bho - bjn - bo - bs - bug - bg - ca - ceb - cs - cjk - ckb - crh - cy - da - de - dik - dyu - dz - el - en - eo - et - eu - ee - fo - fj - fi - fon - fr - fur - fuv - gaz - gd - ga - gl - gn - gu - ht - ha - he - hi - hne - hr - hu - hy - ig - ilo - id - is - it - jv - ja - kab - kac - kam - kn - ks - ka - kk - kbp - kea - khk - km - ki - rw - ky - kmb - kmr - knc - kg - ko - lo - lij - li - ln - lt - lmo - ltg - lb - lua - lg - luo - lus - lvs - mag - mai - ml - mar - min - mk - mt - mni - mos - mi - my - nl - nn - nb - npi - nso - nus - ny - oc - ory - pag - pa - pap - pbt - pes - plt - pl - pt - prs - quy - ro - rn - ru - sg - sa - sat - scn - shn - si - sk - sl - sm - sn - sd - so - st - es - sc - sr - ss - su - sv - swh - szl - ta - taq - tt - te - tg - tl - th - ti - tpi - tn - ts - tk - tum - tr - tw - tzm - ug - uk - umb - ur - uzn - vec - vi - war - wo - xh - ydd - yo - yue - zh - zsm - zu language_details: "ace_Arab, ace_Latn, acm_Arab, acq_Arab, aeb_Arab, afr_Latn, ajp_Arab, aka_Latn, amh_Ethi, apc_Arab, arb_Arab, ars_Arab, ary_Arab, arz_Arab, asm_Beng, ast_Latn, awa_Deva, ayr_Latn, azb_Arab, azj_Latn, bak_Cyrl, bam_Latn, ban_Latn,bel_Cyrl, bem_Latn, ben_Beng, bho_Deva, bjn_Arab, bjn_Latn, bod_Tibt, bos_Latn, bug_Latn, bul_Cyrl, cat_Latn, ceb_Latn, ces_Latn, cjk_Latn, ckb_Arab, crh_Latn, cym_Latn, dan_Latn, deu_Latn, dik_Latn, dyu_Latn, dzo_Tibt, ell_Grek, eng_Latn, epo_Latn, est_Latn, eus_Latn, ewe_Latn, fao_Latn, pes_Arab, fij_Latn, fin_Latn, fon_Latn, fra_Latn, fur_Latn, fuv_Latn, gla_Latn, gle_Latn, glg_Latn, grn_Latn, guj_Gujr, hat_Latn, hau_Latn, heb_Hebr, hin_Deva, hne_Deva, hrv_Latn, hun_Latn, hye_Armn, ibo_Latn, ilo_Latn, ind_Latn, isl_Latn, ita_Latn, jav_Latn, jpn_Jpan, kab_Latn, kac_Latn, kam_Latn, kan_Knda, kas_Arab, kas_Deva, kat_Geor, knc_Arab, knc_Latn, kaz_Cyrl, kbp_Latn, kea_Latn, khm_Khmr, kik_Latn, kin_Latn, kir_Cyrl, kmb_Latn, kon_Latn, kor_Hang, kmr_Latn, lao_Laoo, lvs_Latn, lij_Latn, lim_Latn, lin_Latn, lit_Latn, lmo_Latn, ltg_Latn, ltz_Latn, lua_Latn, lug_Latn, luo_Latn, lus_Latn, mag_Deva, mai_Deva, mal_Mlym, mar_Deva, min_Latn, mkd_Cyrl, plt_Latn, mlt_Latn, mni_Beng, khk_Cyrl, mos_Latn, mri_Latn, zsm_Latn, mya_Mymr, nld_Latn, nno_Latn, nob_Latn, npi_Deva, nso_Latn, nus_Latn, nya_Latn, oci_Latn, gaz_Latn, ory_Orya, pag_Latn, pan_Guru, pap_Latn, pol_Latn, por_Latn, prs_Arab, pbt_Arab, quy_Latn, ron_Latn, run_Latn, rus_Cyrl, sag_Latn, san_Deva, sat_Beng, scn_Latn, shn_Mymr, sin_Sinh, slk_Latn, slv_Latn, smo_Latn, sna_Latn, snd_Arab, som_Latn, sot_Latn, spa_Latn, als_Latn, srd_Latn, srp_Cyrl, ssw_Latn, sun_Latn, swe_Latn, swh_Latn, szl_Latn, tam_Taml, tat_Cyrl, tel_Telu, tgk_Cyrl, tgl_Latn, tha_Thai, tir_Ethi, taq_Latn, taq_Tfng, tpi_Latn, tsn_Latn, tso_Latn, tuk_Latn, tum_Latn, tur_Latn, twi_Latn, tzm_Tfng, uig_Arab, ukr_Cyrl, umb_Latn, urd_Arab, uzn_Latn, vec_Latn, vie_Latn, war_Latn, wol_Latn, xho_Latn, ydd_Hebr, yor_Latn, yue_Hant, zho_Hans, zho_Hant, zul_Latn" tags: - nllb - translation license: "cc-by-nc-4.0" datasets: - flores-200 metrics: - bleu - spbleu - chrf++ inference: false --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. 
--> This model is a fine-tuned version of [facebook/nllb-200-3.3B](https://huggingface.co/facebook/nllb-200-3.3B) on the [EasyProject](https://github.com/edchengg/easyproject) dataset. ### Framework versions - Transformers 4.29.2 - Pytorch 1.11.0+cu113 - Datasets 2.8.0 - Tokenizers 0.13.2 - Paper link: [Frustratingly Easy Label Projection for Cross-lingual Transfer](https://arxiv.org/abs/2211.15613) - Github link: https://github.com/edchengg/easyproject - Please use transformers==4.29.2, as Hugging Face recently fixed a bug in the [NLLB tokenizer](https://github.com/huggingface/transformers/pull/22313) # Code ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer from tqdm import tqdm import torch tokenizer = AutoTokenizer.from_pretrained( "facebook/nllb-200-distilled-600M", src_lang="eng_Latn") print("Loading model") model = AutoModelForSeq2SeqLM.from_pretrained("ychenNLP/nllb-200-distilled-1.3B-easyproject") model.cuda() input_chunks = ["A translator always risks inadvertently introducing source-language words, grammar, or syntax into the target-language rendering."] print("Start translation...") output_result = [] batch_size = 1 for idx in tqdm(range(0, len(input_chunks), batch_size)): start_idx = idx end_idx = idx + batch_size inputs = tokenizer(input_chunks[start_idx: end_idx], padding=True, truncation=True, max_length=128, return_tensors="pt").to('cuda') with torch.no_grad(): translated_tokens = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["zho_Hans"], max_length=128, num_beams=5, num_return_sequences=1, early_stopping=True) output = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True) output_result.extend(output) print(output_result) ``` ## Citation ``` @inproceedings{chen2023easyproject, title={Frustratingly Easy Label Projection for Cross-lingual Transfer}, author={Chen, Yang and Jiang, Chao and Ritter, Alan and Xu, Wei}, booktitle={Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Findings)}, year={2023} } ```
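Building on the snippet above, here is a small sketch (our addition, not from the original card) that reuses the same checkpoint to project into several target languages by switching `forced_bos_token_id`:

```python
# Sketch: translate one English sentence into several targets with the same
# fine-tuned NLLB checkpoint; reuses `model` and `tokenizer` from above
# (src_lang="eng_Latn", model already on CUDA).
import torch

text = "Label projection transfers span annotations across languages."
inputs = tokenizer([text], return_tensors="pt").to("cuda")

for tgt in ["zho_Hans", "spa_Latn", "deu_Latn"]:
    with torch.no_grad():
        tokens = model.generate(
            **inputs,
            forced_bos_token_id=tokenizer.lang_code_to_id[tgt],
            max_length=128, num_beams=5,
        )
    print(tgt, tokenizer.batch_decode(tokens, skip_special_tokens=True)[0])
```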
google/umt5-xxl
google
2023-07-03T05:37:17Z
468
16
transformers
[ "transformers", "pytorch", "text2text-generation", "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu", "dataset:mc4", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-07-02T02:15:00Z
--- language: - multilingual - af - am - ar - az - be - bg - bn - ca - ceb - co - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fil - fr - fy - ga - gd - gl - gu - ha - haw - hi - hmn - ht - hu - hy - ig - is - it - iw - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lb - lo - lt - lv - mg - mi - mk - ml - mn - mr - ms - mt - my - ne - nl - no - ny - pa - pl - ps - pt - ro - ru - sd - si - sk - sl - sm - sn - so - sq - sr - st - su - sv - sw - ta - te - tg - th - tr - uk - und - ur - uz - vi - xh - yi - yo - zh - zu datasets: - mc4 license: apache-2.0 --- [Google's UMT5](https://github.com/google-research/multilingual-t5) UMT5 is pretrained on an updated version of the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 107 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu. **Note**: UMT5 was only pre-trained on mC4, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task. Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) Other Community Checkpoints: [here](https://huggingface.co/models?search=umt5) Paper: [UniMax, Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining](https://openreview.net/forum?id=kXwdL1cWOAi) Authors: *Hyung Won Chung, Xavier Garcia, Adam Roberts, Yi Tay, Orhan Firat, Sharan Narang, Noah Constant* ## Abstract *Pretrained multilingual large language models have typically used heuristic temperature-based sampling to balance between different languages. However previous work has not systematically evaluated the efficacy of different pretraining language distributions across model scales. In this paper, we propose a new sampling method, UniMax, that delivers more uniform coverage of head languages while mitigating overfitting on tail languages by explicitly capping the number of repeats over each language's corpus. We perform an extensive series of ablations testing a range of sampling strategies on a suite of multilingual benchmarks, while varying model scale. We find that UniMax outperforms standard temperature-based sampling, and the benefits persist as scale increases. As part of our contribution, we release: (i) an improved and refreshed mC4 multilingual corpus consisting of 29 trillion characters across 107 languages, and (ii) a suite of pretrained umT5 model checkpoints trained with UniMax sampling.*
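Since the checkpoint is pre-trained only, here is a minimal fine-tuning-style sketch with 🤗 Transformers (our addition, assuming a release recent enough to include UMT5 support); it just computes a seq2seq loss on one toy pair, which is the starting point for any fine-tuning loop:

```python
# Sketch (not from the original card): load umT5 and compute a seq2seq loss.
# Assumes a transformers version with UMT5 support; note umt5-xxl is very
# large, so google/umt5-small is a practical stand-in for a quick test.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/umt5-xxl")
model = AutoModelForSeq2SeqLM.from_pretrained("google/umt5-xxl")

inputs = tokenizer("translate English to German: The house is wonderful.",
                   return_tensors="pt")
labels = tokenizer("Das Haus ist wunderbar.", return_tensors="pt").input_ids

loss = model(**inputs, labels=labels).loss  # backprop this during fine-tuning
print(loss.item())
```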
TheBloke/Chronorctypus-Limarobormes-13b-GGUF
TheBloke
2023-09-27T12:47:47Z
468
2
transformers
[ "transformers", "gguf", "llama", "arxiv:2306.01708", "base_model:chargoddard/Chronorctypus-Limarobormes-13b", "license:llama2", "text-generation-inference", "region:us" ]
null
2023-09-05T17:26:04Z
--- license: llama2 tags: - llama model_name: Chronorctypus-Limarobormes-13B base_model: chargoddard/Chronorctypus-Limarobormes-13b inference: false model_creator: Charles Goddard model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Chronorctypus-Limarobormes-13B - GGUF - Model creator: [Charles Goddard](https://huggingface.co/chargoddard) - Original model: [Chronorctypus-Limarobormes-13B](https://huggingface.co/chargoddard/Chronorctypus-Limarobormes-13b) <!-- description start --> ## Description This repo contains GGUF format model files for [Charles Goddard's Chronorctypus-Limarobormes-13B](https://huggingface.co/chargoddard/Chronorctypus-Limarobormes-13b). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata and is designed to be extensible. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Chronorctypus-Limarobormes-13b-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Chronorctypus-Limarobormes-13b-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Chronorctypus-Limarobormes-13b-GGUF) * [Charles Goddard's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/chargoddard/Chronorctypus-Limarobormes-13b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
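As a sanity check on the 4.5 bpw figure, here is a quick back-of-the-envelope sketch (our addition; the block layout is our reading of the llama.cpp Q4_K format, so treat the constants as assumptions):

```python
# Sketch: rough bits-per-weight arithmetic for GGML_TYPE_Q4_K.
# Assumed layout: one 256-weight super-block = 8 blocks x 32 weights,
# 4-bit quants, 6-bit scale and min per block, fp16 super-block scale and min.
weights = 8 * 32
bits = weights * 4      # quantized values
bits += 8 * (6 + 6)     # per-block scales and mins
bits += 2 * 16          # fp16 super-block scale and min
print(bits / weights)   # -> 4.5 bpw, matching the figure above
```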
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [chronorctypus-limarobormes-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Chronorctypus-Limarobormes-13b-GGUF/blob/main/chronorctypus-limarobormes-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [chronorctypus-limarobormes-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Chronorctypus-Limarobormes-13b-GGUF/blob/main/chronorctypus-limarobormes-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [chronorctypus-limarobormes-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Chronorctypus-Limarobormes-13b-GGUF/blob/main/chronorctypus-limarobormes-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [chronorctypus-limarobormes-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Chronorctypus-Limarobormes-13b-GGUF/blob/main/chronorctypus-limarobormes-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [chronorctypus-limarobormes-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Chronorctypus-Limarobormes-13b-GGUF/blob/main/chronorctypus-limarobormes-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [chronorctypus-limarobormes-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Chronorctypus-Limarobormes-13b-GGUF/blob/main/chronorctypus-limarobormes-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [chronorctypus-limarobormes-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Chronorctypus-Limarobormes-13b-GGUF/blob/main/chronorctypus-limarobormes-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [chronorctypus-limarobormes-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Chronorctypus-Limarobormes-13b-GGUF/blob/main/chronorctypus-limarobormes-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [chronorctypus-limarobormes-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Chronorctypus-Limarobormes-13b-GGUF/blob/main/chronorctypus-limarobormes-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [chronorctypus-limarobormes-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Chronorctypus-Limarobormes-13b-GGUF/blob/main/chronorctypus-limarobormes-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [chronorctypus-limarobormes-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Chronorctypus-Limarobormes-13b-GGUF/blob/main/chronorctypus-limarobormes-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [chronorctypus-limarobormes-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Chronorctypus-Limarobormes-13b-GGUF/blob/main/chronorctypus-limarobormes-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! 
Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Chronorctypus-Limarobormes-13b-GGUF and below it, a specific filename to download, such as: chronorctypus-limarobormes-13b.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install 'huggingface-hub>=0.17.1' ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Chronorctypus-Limarobormes-13b-GGUF chronorctypus-limarobormes-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Chronorctypus-Limarobormes-13b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Chronorctypus-Limarobormes-13b-GGUF chronorctypus-limarobormes-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m chronorctypus-limarobormes-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). 
## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Chronorctypus-Limarobormes-13b-GGUF", model_file="chronorctypus-limarobormes-13b.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Charles Goddard's Chronorctypus-Limarobormes-13B Five different instruction-tuned models (which I'm sure are intuitively obvious from the name) merged using the methodology described in [Resolving Interference When Merging Models](https://arxiv.org/abs/2306.01708). In theory this should retain more of the capabilities of the constituent models than a straight linear merge would. In my testing, it feels quite capable. Base model used for the merge: [TheBloke/Llama-2-13B-fp16](https://huggingface.co/TheBloke/Llama-2-13B-fp16) Models merged in: * [OpenOrca-Platypus2-13B](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B) * [limarp-13b-merged](https://huggingface.co/Oniichat/limarp-13b-merged) * [Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b) * [chronos-13b-v2](https://huggingface.co/elinas/chronos-13b-v2) * [airoboros-l2-13b-gpt4-1.4.1](https://huggingface.co/jondurbin/airoboros-l2-13b-gpt4-1.4.1) Works quite well with Alpaca-style prompts: ``` ### Instruction: ... ### Response: ``` The script I used to perform the merge is available [here](https://github.com/cg123/ties-merge). The command that produced this model: ``` python ties_merge.py TheBloke/Llama-2-13B-fp16 ./Chronorctypus-Limarobormes-13b --merge elinas/chronos-13b-v2 --merge Open-Orca/OpenOrca-Platypus2-13B --merge Oniichat/limarp-13b-merged --merge jondurbin/airoboros-l2-13b-gpt4-1.4.1 --merge NousResearch/Nous-Hermes-Llama2-13b --cuda ``` <!-- original-model-card end -->
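To make the TIES recipe behind the merge command above concrete, here is a toy sketch of the trim / elect-sign / disjoint-merge steps on task vectors. This is our simplified reading of the paper, not the author's script, and the density value is an illustrative assumption:

```python
# Toy sketch of TIES-style merging for a single weight tensor (simplified).
import torch

def ties_merge(base, finetuned, density=0.2):
    # 1) Trim: keep only the top-`density` fraction (by magnitude) of each
    #    task vector (finetuned weights minus the shared base weights).
    tvs = [ft - base for ft in finetuned]
    trimmed = []
    for tv in tvs:
        k = max(1, int(density * tv.numel()))
        thresh = tv.abs().flatten().topk(k).values.min()
        trimmed.append(torch.where(tv.abs() >= thresh, tv, torch.zeros_like(tv)))
    # 2) Elect: per-parameter sign with the largest total magnitude.
    stacked = torch.stack(trimmed)
    sign = torch.sign(stacked.sum(dim=0))
    sign[sign == 0] = 1.0
    # 3) Disjoint merge: average only the entries agreeing with the elected sign.
    agree = (torch.sign(stacked) == sign) & (stacked != 0)
    merged = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged

base = torch.zeros(5)
models = [torch.tensor([0.9, -0.1, 0.0, 0.4, 0.0]),
          torch.tensor([0.8, 0.2, 0.1, -0.5, 0.0])]
print(ties_merge(base, models))  # only sign-consistent, high-magnitude deltas survive
```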
TheBloke/13B-Chimera-GGUF
TheBloke
2023-09-27T12:52:14Z
468
1
transformers
[ "transformers", "gguf", "llama", "cot", "vicuna", "uncensored", "merge", "mix", "gptq", "base_model:digitous/13B-Chimera", "license:other", "text-generation-inference", "region:us" ]
null
2023-09-19T22:17:29Z
--- license: other tags: - llama - cot - vicuna - uncensored - merge - mix - gptq model_name: 13B Chimera base_model: digitous/13B-Chimera inference: false model_creator: Erik model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # 13B Chimera - GGUF - Model creator: [Erik](https://huggingface.co/digitous) - Original model: [13B Chimera](https://huggingface.co/digitous/13B-Chimera) <!-- description start --> ## Description This repo contains GGUF format model files for [Erik's 13B Chimera](https://huggingface.co/digitous/13B-Chimera). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/13B-Chimera-AWQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/13B-Chimera-GGUF) * [Erik's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/digitous/13B-Chimera) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [13B-Chimera.Q2_K.gguf](https://huggingface.co/TheBloke/13B-Chimera-GGUF/blob/main/13B-Chimera.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [13B-Chimera.Q3_K_S.gguf](https://huggingface.co/TheBloke/13B-Chimera-GGUF/blob/main/13B-Chimera.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [13B-Chimera.Q3_K_M.gguf](https://huggingface.co/TheBloke/13B-Chimera-GGUF/blob/main/13B-Chimera.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [13B-Chimera.Q3_K_L.gguf](https://huggingface.co/TheBloke/13B-Chimera-GGUF/blob/main/13B-Chimera.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [13B-Chimera.Q4_0.gguf](https://huggingface.co/TheBloke/13B-Chimera-GGUF/blob/main/13B-Chimera.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [13B-Chimera.Q4_K_S.gguf](https://huggingface.co/TheBloke/13B-Chimera-GGUF/blob/main/13B-Chimera.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [13B-Chimera.Q4_K_M.gguf](https://huggingface.co/TheBloke/13B-Chimera-GGUF/blob/main/13B-Chimera.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [13B-Chimera.Q5_0.gguf](https://huggingface.co/TheBloke/13B-Chimera-GGUF/blob/main/13B-Chimera.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [13B-Chimera.Q5_K_S.gguf](https://huggingface.co/TheBloke/13B-Chimera-GGUF/blob/main/13B-Chimera.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [13B-Chimera.Q5_K_M.gguf](https://huggingface.co/TheBloke/13B-Chimera-GGUF/blob/main/13B-Chimera.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [13B-Chimera.Q6_K.gguf](https://huggingface.co/TheBloke/13B-Chimera-GGUF/blob/main/13B-Chimera.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [13B-Chimera.Q8_0.gguf](https://huggingface.co/TheBloke/13B-Chimera-GGUF/blob/main/13B-Chimera.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/13B-Chimera-GGUF and below it, a specific filename to download, such as: 13B-Chimera.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/13B-Chimera-GGUF 13B-Chimera.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/13B-Chimera-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/13B-Chimera-GGUF 13B-Chimera.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m 13B-Chimera.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
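Before the ctransformers walkthrough below, here is a minimal llama-cpp-python sketch; it mirrors the Alpaca prompt and the `-ngl 32` / `-c 2048` settings from the llama.cpp example above, and the instruction text is a placeholder.

```python
# Minimal llama-cpp-python sketch (pip install llama-cpp-python).
from llama_cpp import Llama

# n_gpu_layers mirrors -ngl 32; set to 0 if you have no GPU acceleration.
llm = Llama(model_path="13B-Chimera.Q4_K_M.gguf", n_ctx=2048, n_gpu_layers=32)

# The Alpaca prompt template documented earlier in this README.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nTell me about AI\n\n### Response:"
)
out = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(out["choices"][0]["text"])
```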
### How to load this model in Python code, using ctransformers

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```

#### Simple ctransformers example code

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/13B-Chimera-GGUF", model_file="13B-Chimera.Q4_K_M.gguf", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J.
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Erik's 13B Chimera ## 13B-Chimera ## Composition: [] = applied as LoRA to a composite model | () = combined as composite models ((MantiCore3E+VicunaCocktail)+[SuperCOT+[StorytellingV2+(SuperHOTProtoType-8192ctx+Metharme)]]) This model is the result of an experimental use of LoRAs on language models and model merges that are not the base HuggingFace-format LLaMA model they were intended for. The desired outcome is to additively apply desired features without paradoxically watering down a model's effective behavior. Potential limitations - LoRAs applied on top of each other may intercompete. Subjective results - very promising. Further experimental tests and objective tests are required. Instruct and Setup Suggestions: Alpaca instruct verified working, Vicuna instruct formats should work. If using KoboldAI or Text-Generation-WebUI, recommend switching between Godlike and Storywriter presets and adjusting output length + instructions in memory. Other presets as well as custom settings can yield highly different results, especially Temperature. If poking it with a stick doesn't work try another stick. ## Language Models and LoRAs Used Credits: manticore-13b [Epoch3] by openaccess-ai-collective https://huggingface.co/openaccess-ai-collective/manticore-13b vicuna-13b-cocktail by reeducator https://huggingface.co/reeducator/vicuna-13b-cocktail SuperCOT-LoRA [13B] by kaiokendev https://huggingface.co/kaiokendev/SuperCOT-LoRA Storytelling-LLaMa-LoRA [13B, Version 2] by GamerUnTouch https://huggingface.co/GamerUntouch/Storytelling-LLaMa-LoRAs SuperHOT Prototype [13b 8k ctx] by kaiokendev https://huggingface.co/kaiokendev/SuperHOT-LoRA-prototype Metharme 13b by PygmalionAI https://huggingface.co/PygmalionAI/metharme-13b Also thanks to Meta for LLaMA. Each model and LoRA was hand picked and considered for what it could contribute to this ensemble. Thanks to each and every one of you for your incredible work developing some of the best things to come out of this community. <!-- original-model-card end -->
second-state/Qwen1.5-0.5B-Chat-GGUF
second-state
2024-05-26T05:58:52Z
468
3
transformers
[ "transformers", "gguf", "qwen2", "text-generation", "chat", "en", "base_model:Qwen/Qwen1.5-0.5B-Chat", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-06T07:38:55Z
--- base_model: Qwen/Qwen1.5-0.5B-Chat license: other license_name: tongyi-qianwen-research license_link: >- https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat/blob/main/LICENSE model_creator: Qwen model_name: Qwen1.5 0.5B Chat quantized_by: Second State Inc. language: - en pipeline_tag: text-generation tags: - chat --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Qwen1.5-0.5B-Chat-GGUF ## Original Model [Qwen/Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat) ## Run with LlamaEdge - LlamaEdge version: [v0.2.15](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.2.15) and above - Prompt template - Prompt type: `chatml` - Prompt string ```text <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` - Context size: `32000` - Run as LlamaEdge service ```bash wasmedge --dir .:. --nn-preload default:GGML:AUTO:Qwen1.5-0.5B-Chat-Q5_K_M.gguf llama-api-server.wasm -p chatml ``` - Run as LlamaEdge command app ```bash wasmedge --dir .:. --nn-preload default:GGML:AUTO:Qwen1.5-0.5B-Chat-Q5_K_M.gguf llama-chat.wasm -p chatml ``` ## Quantized GGUF Models | Name | Quant method | Bits | Size | Use case | | ---- | ---- | ---- | ---- | ----- | | [Qwen1.5-0.5B-Chat-Q2_K.gguf](https://huggingface.co/second-state/Qwen1.5-0.5B-Chat-GGUF/blob/main/Qwen1.5-0.5B-Chat-Q2_K.gguf) | Q2_K | 2 | 298 MB| smallest, significant quality loss - not recommended for most purposes | | [Qwen1.5-0.5B-Chat-Q3_K_L.gguf](https://huggingface.co/second-state/Qwen1.5-0.5B-Chat-GGUF/blob/main/Qwen1.5-0.5B-Chat-Q3_K_L.gguf) | Q3_K_L | 3 | 364 MB| small, substantial quality loss | | [Qwen1.5-0.5B-Chat-Q3_K_M.gguf](https://huggingface.co/second-state/Qwen1.5-0.5B-Chat-GGUF/blob/main/Qwen1.5-0.5B-Chat-Q3_K_M.gguf) | Q3_K_M | 3 | 350 MB| very small, high quality loss | | [Qwen1.5-0.5B-Chat-Q3_K_S.gguf](https://huggingface.co/second-state/Qwen1.5-0.5B-Chat-GGUF/blob/main/Qwen1.5-0.5B-Chat-Q3_K_S.gguf) | Q3_K_S | 3 | 333 MB| very small, high quality loss | | [Qwen1.5-0.5B-Chat-Q4_0.gguf](https://huggingface.co/second-state/Qwen1.5-0.5B-Chat-GGUF/blob/main/Qwen1.5-0.5B-Chat-Q4_0.gguf) | Q4_0 | 4 | 395 MB| legacy; small, very high quality loss - prefer using Q3_K_M | | [Qwen1.5-0.5B-Chat-Q4_K_M.gguf](https://huggingface.co/second-state/Qwen1.5-0.5B-Chat-GGUF/blob/main/Qwen1.5-0.5B-Chat-Q4_K_M.gguf) | Q4_K_M | 4 | 407 MB| medium, balanced quality - recommended | | [Qwen1.5-0.5B-Chat-Q4_K_S.gguf](https://huggingface.co/second-state/Qwen1.5-0.5B-Chat-GGUF/blob/main/Qwen1.5-0.5B-Chat-Q4_K_S.gguf) | Q4_K_S | 4 | 397 MB| small, greater quality loss | | [Qwen1.5-0.5B-Chat-Q5_0.gguf](https://huggingface.co/second-state/Qwen1.5-0.5B-Chat-GGUF/blob/main/Qwen1.5-0.5B-Chat-Q5_0.gguf) | Q5_0 | 5 | 453 MB| legacy; medium, balanced quality - prefer using Q4_K_M | | [Qwen1.5-0.5B-Chat-Q5_K_M.gguf](https://huggingface.co/second-state/Qwen1.5-0.5B-Chat-GGUF/blob/main/Qwen1.5-0.5B-Chat-Q5_K_M.gguf) | Q5_K_M | 5 | 459 MB| large, very low quality loss - recommended | | [Qwen1.5-0.5B-Chat-Q5_K_S.gguf](https://huggingface.co/second-state/Qwen1.5-0.5B-Chat-GGUF/blob/main/Qwen1.5-0.5B-Chat-Q5_K_S.gguf) | Q5_K_S | 5 | 453 MB| large, low quality loss - recommended | | 
[Qwen1.5-0.5B-Chat-Q6_K.gguf](https://huggingface.co/second-state/Qwen1.5-0.5B-Chat-GGUF/blob/main/Qwen1.5-0.5B-Chat-Q6_K.gguf) | Q6_K | 6 | 515 MB| very large, extremely low quality loss | | [Qwen1.5-0.5B-Chat-Q8_0.gguf](https://huggingface.co/second-state/Qwen1.5-0.5B-Chat-GGUF/blob/main/Qwen1.5-0.5B-Chat-Q8_0.gguf) | Q8_0 | 8 | 665 MB| very large, extremely low quality loss - not recommended |
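For reference, the `chatml` prompt string documented above can be assembled in plain Python like this; a small illustrative sketch, where the system message and question are placeholder values.

```python
# Assemble the chatml prompt format documented above.
def build_chatml(system_message: str, prompt: str) -> str:
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(build_chatml("You are a helpful assistant.", "What is GGUF?"))
```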
rinna/japanese-hubert-large
rinna
2024-04-03T09:31:27Z
468
2
transformers
[ "transformers", "pytorch", "hubert", "feature-extraction", "speech", "ja", "dataset:reazon-research/reazonspeech", "arxiv:2404.01657", "license:apache-2.0", "region:us" ]
feature-extraction
2024-03-05T10:24:37Z
---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
language: ja
license: apache-2.0
datasets: reazon-research/reazonspeech
inference: false
tags:
- hubert
- speech
---

# `rinna/japanese-hubert-large`

![rinna-icon](./rinna.png)

# Overview

This is a Japanese HuBERT Large model trained by [rinna Co., Ltd.](https://rinna.co.jp/)

* **Model summary**

    The model architecture is the same as the [original HuBERT Large model](https://huggingface.co/facebook/hubert-large-ll60k), which contains 24 transformer layers with 16 attention heads. The model was trained using code from the [official repository](https://github.com/facebookresearch/fairseq/tree/main/examples/hubert), and the detailed training configuration can be found in the same repository and the [original paper](https://ieeexplore.ieee.org/document/9585401).

* **Training**

    The model was trained on approximately 19,000 hours of the following Japanese speech corpus, ReazonSpeech v1.
    - [ReazonSpeech](https://huggingface.co/datasets/reazon-research/reazonspeech)

* **Contributors**
    - [Yukiya Hono](https://huggingface.co/yky-h)
    - [Kentaro Mitsui](https://huggingface.co/Kentaro321)
    - [Kei Sawada](https://huggingface.co/keisawada)

---

# How to use the model

```python
import soundfile as sf
from transformers import AutoFeatureExtractor, AutoModel

model_name = "rinna/japanese-hubert-large"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

# Path to a 16 kHz mono audio file; the feature extractor expects 16 kHz input.
audio_file = "sample.wav"
raw_speech_16kHz, sr = sf.read(audio_file)

inputs = feature_extractor(
    raw_speech_16kHz,
    return_tensors="pt",
    sampling_rate=sr,
)
outputs = model(**inputs)

print(f"Input: {inputs.input_values.size()}")  # [1, #samples]
print(f"Output: {outputs.last_hidden_state.size()}")  # [1, #frames, 1024]
```

A fairseq checkpoint file is also available [here](https://huggingface.co/rinna/japanese-hubert-large/tree/main/fairseq).

---

# How to cite

```bibtex
@misc{rinna-japanese-hubert-large,
    title = {rinna/japanese-hubert-large},
    author = {Hono, Yukiya and Mitsui, Kentaro and Sawada, Kei},
    url = {https://huggingface.co/rinna/japanese-hubert-large},
}

@inproceedings{sawada2024release,
    title = {Release of Pre-Trained Models for the {J}apanese Language},
    author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
    booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
    month = {5},
    year = {2024},
    url = {https://arxiv.org/abs/2404.01657},
}
```

---

# References

```bibtex
@article{hsu2021hubert,
    author = {Hsu, Wei-Ning and Bolte, Benjamin and Tsai, Yao-Hung Hubert and Lakhotia, Kushal and Salakhutdinov, Ruslan and Mohamed, Abdelrahman},
    journal = {IEEE/ACM Transactions on Audio, Speech, and Language Processing},
    title = {HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units},
    year = {2021},
    volume = {29},
    pages = {3451-3460},
    doi = {10.1109/TASLP.2021.3122291}
}
```

---

# License

[The Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0)
Lewdiculous/Eris_Floramix_DPO_7B-GGUF-Imatrix
Lewdiculous
2024-03-08T19:54:24Z
468
4
transformers
[ "transformers", "gguf", "quantized", "roleplay", "imatrix", "mistral", "en", "dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED", "dataset:ResplendentAI/Synthetic_Soul_1k", "license:other", "region:us" ]
null
2024-03-08T05:43:47Z
---
library_name: transformers
license: other
datasets:
- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED
- ResplendentAI/Synthetic_Soul_1k
language:
- en
tags:
- gguf
- quantized
- roleplay
- imatrix
- mistral
inference: false
---

This repository hosts GGUF-Imatrix quantizations for [ChaoticNeutrals/Eris_Floramix_DPO_7B](https://huggingface.co/ChaoticNeutrals/Eris_Floramix_DPO_7B).

```
Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)
```

```python
quantization_options = [
    "Q3_K_M", "Q4_K_M", "Q5_K_M", "Q6_K", "Q8_0", "IQ4_XS", "IQ3_XXS"
]
```

**This is experimental.**

For imatrix data generation, kalomaze's `groups_merged.txt` with added roleplay chats was used; you can find it [here](https://huggingface.co/Lewdiculous/Datura_7B-GGUF-Imatrix/blob/main/imatrix-with-rp-format-data.txt). The goal is to measure the (hopefully positive) impact of this data on consistent formatting in roleplay chatting scenarios.

**Image:**

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/1kuWjS4TnCIkCcdZdPfsG.png)

**Original model information:**

# Eris Floramix DPO

This is a mix between Eris Remix DPO and Flora DPO, a finetune of the original Eris Remix on the Synthetic_Soul_1k dataset.

Applied this DPO dataset: https://huggingface.co/datasets/athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED
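For readers curious how a pipeline like `Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)` is typically driven, here is a hedged sketch using upstream llama.cpp's `imatrix` and `quantize` tools. The binary names, flags, and file names are assumptions based on llama.cpp conventions; this repo does not ship these scripts.

```python
# Sketch: generate an importance matrix, then emit one imatrix-guided
# quant per option, by shelling out to llama.cpp's tools.
import subprocess

quantization_options = [
    "Q3_K_M", "Q4_K_M", "Q5_K_M", "Q6_K", "Q8_0", "IQ4_XS", "IQ3_XXS"
]

# Step 1: measure the imatrix on the F16 GGUF using the calibration
# data mentioned above (file name assumed).
subprocess.run(
    ["./imatrix", "-m", "model-f16.gguf",
     "-f", "imatrix-with-rp-format-data.txt", "-o", "imatrix.dat"],
    check=True,
)

# Step 2: produce each imatrix-guided quant.
for q in quantization_options:
    subprocess.run(
        ["./quantize", "--imatrix", "imatrix.dat",
         "model-f16.gguf", f"model.{q}.gguf", q],
        check=True,
    )
```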
Praneeth/code-gemma-2b-it
Praneeth
2024-03-24T16:21:41Z
468
5
transformers
[ "transformers", "pytorch", "safetensors", "gemma", "text-generation", "peft", "unsloth", "lora", "trl", "sft", "conversational", "en", "dataset:HuggingFaceH4/CodeAlpaca_20K", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-24T12:16:28Z
---
language:
- en
license: other
library_name: transformers
tags:
- peft
- unsloth
- lora
- trl
- sft
datasets:
- HuggingFaceH4/CodeAlpaca_20K
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
inference: false
---

# Code-Gemma-2B

### Description

Code-Gemma is the Gemma-2B-it model finetuned for 1k steps on the CodeAlpaca-20k dataset using the unsloth library.

### Usage

Below we share some code snippets on how to get quickly started with running the model.

```python
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"

# Determine the GPU's compute capability to choose the right extras.
import torch
major_version, minor_version = torch.cuda.get_device_capability()
if major_version >= 8:
    # Use this for new GPUs like Ampere, Hopper GPUs (RTX 30xx, RTX 40xx, A100, H100, L40)
    !pip install --no-deps packaging ninja einops flash-attn xformers trl peft accelerate bitsandbytes
else:
    # Use this for older GPUs (V100, Tesla T4, RTX 20xx)
    !pip install --no-deps xformers trl peft accelerate bitsandbytes
pass
```

#### Running the model on a GPU using different precisions

* _Using `torch.float16`_

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Praneeth/code-gemma-2b-it")
model = AutoModelForCausalLM.from_pretrained("Praneeth/code-gemma-2b-it", device_map="auto", torch_dtype=torch.float16)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```
mradermacher/HoloLewd-7B-GGUF
mradermacher
2024-05-06T05:55:23Z
468
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "Undi95/LewdMistral-7B-0.2", "KoboldAI/Mistral-7B-Holodeck-1", "en", "base_model:son-of-man/HoloLewd-7B", "endpoints_compatible", "region:us" ]
null
2024-03-26T16:05:54Z
--- base_model: son-of-man/HoloLewd-7B language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - Undi95/LewdMistral-7B-0.2 - KoboldAI/Mistral-7B-Holodeck-1 --- ## About static quants of https://huggingface.co/son-of-man/HoloLewd-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/HoloLewd-7B-GGUF/resolve/main/HoloLewd-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/HoloLewd-7B-GGUF/resolve/main/HoloLewd-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/HoloLewd-7B-GGUF/resolve/main/HoloLewd-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/HoloLewd-7B-GGUF/resolve/main/HoloLewd-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/HoloLewd-7B-GGUF/resolve/main/HoloLewd-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/HoloLewd-7B-GGUF/resolve/main/HoloLewd-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/HoloLewd-7B-GGUF/resolve/main/HoloLewd-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/HoloLewd-7B-GGUF/resolve/main/HoloLewd-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/HoloLewd-7B-GGUF/resolve/main/HoloLewd-7B.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/HoloLewd-7B-GGUF/resolve/main/HoloLewd-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/HoloLewd-7B-GGUF/resolve/main/HoloLewd-7B.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/HoloLewd-7B-GGUF/resolve/main/HoloLewd-7B.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/HoloLewd-7B-GGUF/resolve/main/HoloLewd-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/HoloLewd-7B-GGUF/resolve/main/HoloLewd-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/HoloLewd-7B-GGUF/resolve/main/HoloLewd-7B.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/HoloLewd-7B-GGUF/resolve/main/HoloLewd-7B.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
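As a toy illustration of the guidance above (quants sorted by size; IQ-quants often preferable over similar sized non-IQ quants), a small helper, not part of this repo, could pick a quant for a given size budget from the table:

```python
# Toy quant picker. Sizes (GB) are copied from the table above; the
# budget heuristic is an illustration, not an official recommendation.
quants = {
    "Q2_K": 3.0, "IQ3_XS": 3.3, "Q3_K_S": 3.4, "IQ3_S": 3.4, "IQ3_M": 3.5,
    "Q3_K_M": 3.8, "Q3_K_L": 4.1, "IQ4_XS": 4.2, "Q4_0": 4.4, "Q4_K_S": 4.4,
    "IQ4_NL": 4.4, "Q4_K_M": 4.6, "Q5_K_S": 5.3, "Q5_K_M": 5.4,
    "Q6_K": 6.2, "Q8_0": 7.9,
}

def pick_quant(budget_gb: float) -> str:
    # Largest file under budget wins; at equal size, IQ (True > False) wins.
    fits = [(size, name.startswith("IQ"), name)
            for name, size in quants.items() if size <= budget_gb]
    return max(fits)[2]

print(pick_quant(5.0))  # -> Q4_K_M
print(pick_quant(4.4))  # -> IQ4_NL (IQ preferred at the same size)
```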
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
PawanKrd/Meta-Llama-3-8B-Instruct-GGUF
PawanKrd
2024-04-18T18:45:21Z
468
4
transformers
[ "transformers", "gguf", "llama", "facebook", "meta", "pytorch", "llama-3", "text-generation", "en", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
2024-04-18T18:29:00Z
---
language:
- en
license: llama2
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
model_name: Llama 3 8B
base_model: meta-llama/Meta-Llama-3-8B-Instruct
inference: false
model_creator: Meta
model_type: llama
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: PawanKrd
---

# Llama 3 8B Instruct - GGUF
- Model creator: [Meta](https://huggingface.co/meta-llama)
- Original model: [Llama 3 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)

<!-- description start -->
## Description

This repo contains GGUF format model files for [Meta's Llama 3 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).

<!-- description end -->

<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.

<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

- LM Studio
- LoLLMS Web UI
- Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: PawanKrd/Llama-3-8B-Instruct-GGUF and below it, a specific filename to download, such as: llama-3-8b-instruct.Q4_K_M.gguf.

Then click Download.
### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install 'huggingface-hub>=0.17.1'
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download PawanKrd/Llama-3-8B-Instruct-GGUF llama-3-8b-instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download PawanKrd/Llama-3-8B-Instruct-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PawanKrd/Llama-3-8B-Instruct-GGUF llama-3-8b-instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 32 -m llama-3-8b-instruct.Q4_K_M.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.

### How to load this model from Python using ctransformers

#### First install the package

```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```

#### Simple example code to load one of these GGUF models

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("PawanKrd/Llama-3-8B-Instruct-GGUF", model_file="llama-3-8b-instruct.Q4_K_M.gguf", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
```

## How to use with LangChain

Here are guides on using llama-cpp-python or ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

[Pawan.Krd's Discord server](https://discord.gg/pawan)

## Credits

This README file was initially created by [TheBloke](https://huggingface.co/TheBloke) and has been modified for this repository.
mradermacher/PsykidelicLlama3-GGUF
mradermacher
2024-05-05T15:11:59Z
468
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:WesPro/PsykidelicLlama3", "endpoints_compatible", "region:us" ]
null
2024-04-24T14:12:39Z
--- base_model: WesPro/PsykidelicLlama3 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/WesPro/PsykidelicLlama3 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/PsykidelicLlama3-GGUF/resolve/main/PsykidelicLlama3.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
axs531622/tinyllama-resume-tracking-v0
axs531622
2024-05-10T03:25:45Z
468
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-10T03:19:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/tokyotech-llm_-_Swallow-13b-instruct-hf-gguf
RichardErkhov
2024-05-11T04:33:58Z
468
0
null
[ "gguf", "arxiv:2404.17790", "arxiv:2404.17733", "region:us" ]
null
2024-05-11T00:32:20Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Swallow-13b-instruct-hf - GGUF - Model creator: https://huggingface.co/tokyotech-llm/ - Original model: https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-hf/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Swallow-13b-instruct-hf.Q2_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-13b-instruct-hf-gguf/blob/main/Swallow-13b-instruct-hf.Q2_K.gguf) | Q2_K | 4.58GB | | [Swallow-13b-instruct-hf.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-13b-instruct-hf-gguf/blob/main/Swallow-13b-instruct-hf.IQ3_XS.gguf) | IQ3_XS | 5.06GB | | [Swallow-13b-instruct-hf.IQ3_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-13b-instruct-hf-gguf/blob/main/Swallow-13b-instruct-hf.IQ3_S.gguf) | IQ3_S | 5.34GB | | [Swallow-13b-instruct-hf.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-13b-instruct-hf-gguf/blob/main/Swallow-13b-instruct-hf.Q3_K_S.gguf) | Q3_K_S | 5.34GB | | [Swallow-13b-instruct-hf.IQ3_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-13b-instruct-hf-gguf/blob/main/Swallow-13b-instruct-hf.IQ3_M.gguf) | IQ3_M | 5.64GB | | [Swallow-13b-instruct-hf.Q3_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-13b-instruct-hf-gguf/blob/main/Swallow-13b-instruct-hf.Q3_K.gguf) | Q3_K | 5.97GB | | [Swallow-13b-instruct-hf.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-13b-instruct-hf-gguf/blob/main/Swallow-13b-instruct-hf.Q3_K_M.gguf) | Q3_K_M | 5.97GB | | [Swallow-13b-instruct-hf.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-13b-instruct-hf-gguf/blob/main/Swallow-13b-instruct-hf.Q3_K_L.gguf) | Q3_K_L | 6.52GB | | [Swallow-13b-instruct-hf.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-13b-instruct-hf-gguf/blob/main/Swallow-13b-instruct-hf.IQ4_XS.gguf) | IQ4_XS | 6.61GB | | [Swallow-13b-instruct-hf.Q4_0.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-13b-instruct-hf-gguf/blob/main/Swallow-13b-instruct-hf.Q4_0.gguf) | Q4_0 | 6.93GB | | [Swallow-13b-instruct-hf.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-13b-instruct-hf-gguf/blob/main/Swallow-13b-instruct-hf.IQ4_NL.gguf) | IQ4_NL | 6.98GB | | [Swallow-13b-instruct-hf.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-13b-instruct-hf-gguf/blob/main/Swallow-13b-instruct-hf.Q4_K_S.gguf) | Q4_K_S | 6.99GB | | [Swallow-13b-instruct-hf.Q4_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-13b-instruct-hf-gguf/blob/main/Swallow-13b-instruct-hf.Q4_K.gguf) | Q4_K | 7.4GB | | [Swallow-13b-instruct-hf.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-13b-instruct-hf-gguf/blob/main/Swallow-13b-instruct-hf.Q4_K_M.gguf) | Q4_K_M | 7.4GB | | [Swallow-13b-instruct-hf.Q4_1.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-13b-instruct-hf-gguf/blob/main/Swallow-13b-instruct-hf.Q4_1.gguf) | Q4_1 | 7.69GB | | [Swallow-13b-instruct-hf.Q5_0.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-13b-instruct-hf-gguf/blob/main/Swallow-13b-instruct-hf.Q5_0.gguf) | Q5_0 | 8.44GB | | 
[Swallow-13b-instruct-hf.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-13b-instruct-hf-gguf/blob/main/Swallow-13b-instruct-hf.Q5_K_S.gguf) | Q5_K_S | 8.44GB |
| [Swallow-13b-instruct-hf.Q5_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-13b-instruct-hf-gguf/blob/main/Swallow-13b-instruct-hf.Q5_K.gguf) | Q5_K | 8.68GB |
| [Swallow-13b-instruct-hf.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-13b-instruct-hf-gguf/blob/main/Swallow-13b-instruct-hf.Q5_K_M.gguf) | Q5_K_M | 8.68GB |
| [Swallow-13b-instruct-hf.Q5_1.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-13b-instruct-hf-gguf/blob/main/Swallow-13b-instruct-hf.Q5_1.gguf) | Q5_1 | 9.19GB |
| [Swallow-13b-instruct-hf.Q6_K.gguf](https://huggingface.co/RichardErkhov/tokyotech-llm_-_Swallow-13b-instruct-hf-gguf/blob/main/Swallow-13b-instruct-hf.Q6_K.gguf) | Q6_K | 10.03GB |

Original model description:
---
language:
- en
- ja
library_name: transformers
pipeline_tag: text-generation
license: llama2
model_type: llama
---

# Swallow

Our Swallow model has undergone continual pre-training from the [Llama 2 family](https://huggingface.co/meta-llama), primarily with the addition of Japanese language data. The tuned versions use supervised fine-tuning (SFT). Links to other models can be found in the index.

# Model Release Updates

We are excited to share the release schedule for our latest models:
- **April 26, 2024**: Released version 0.1 of our enhanced instruction-tuned models: [Swallow-7b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-v0.1), [Swallow-13b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-v0.1), and [Swallow-70b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-v0.1) as preview versions.
- **March 2, 2024**: Released the [Swallow-7b-plus-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-plus-hf), a model trained with approximately twice as many Japanese tokens as [Swallow-7b-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-hf).
- **February 4, 2024**: Released the [Swallow-13b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-NVE-hf).
- **January 26, 2024**: Released the [Swallow-7b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-hf), [Swallow-7b-NVE-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-instruct-hf), [Swallow-70b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-hf), and [Swallow-70b-NVE-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-instruct-hf)
- **December 19, 2023**: Released the [Swallow-7b-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-hf), [Swallow-7b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-hf), [Swallow-13b-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-hf), [Swallow-13b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-hf), [Swallow-70b-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-hf), and [Swallow-70b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-hf).
## Swallow Model Index |Model|Swallow-hf|Swallow-instruct-hf|Swallow-instruct-v0.1| |---|---|---|---| |7B| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-hf)|[Link](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-v1.0)| |7B-Plus| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-plus-hf) | N/A | N/A | |13B| [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-hf)| [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-v1.0)| |70B| [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-hf)| [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-v1.0)| ## Swallow Model Index NVE (No Vocabulary Expansion) |Model|Swallow-NVE-hf|Swallow-NVE-instruct-hf| |---|---|---| |7B| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-instruct-hf)| |13B| [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-NVE-hf) | N/A | |70B| [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-instruct-hf)| ![logo](./logo.png) This repository provides large language models developed by [TokyoTech-LLM](https://tokyotech-llm.github.io/). Read our [blog post](https://zenn.dev/tokyotech_lm/articles/d6cb3a8fdfc907) or our [paper](https://arxiv.org/abs/2404.17790) ## Model Details * **Model type**: Please refer to LLaMA-2 technical report for details on the model architecture. * **Language(s)**: Japanese English * **Library**: [Megatron-LM](https://github.com/rioyokotalab/Megatron-Llama2) * **Tokenizer**: This model employs a tokenizer that features a broadened vocabulary based on Japanese data. This allows for a more efficient representation of text using fewer tokens, leading to a notably faster inference process. 
* **Contact**: swallow[at]nlp.c.titech.ac.jp

## Base Model Performance

### Japanese tasks

|Model|Size|JCommonsenseQA|JEMHopQA|NIILC|JSQuAD|XL-Sum|MGSM|WMT20-en-ja|WMT20-ja-en|
|---|---|---|---|---|---|---|---|---|---|
| | |4-shot|4-shot|4-shot|4-shot|1-shot|4-shot|4-shot|4-shot|
| Llama 2 | 7B | 0.3852 | 0.4240 | 0.3410 | 0.7917 | 0.1905 | 0.0760 | 0.1783 | 0.1738 |
| Swallow | 7B | 0.4808 | 0.5078 | 0.5968 | 0.8573 | 0.1830 | 0.1240 | 0.2510 | 0.1511 |
| Swallow-Plus | 7B | 0.5478 | 0.5493 | 0.6030 | 0.8544 | 0.1806 | 0.1360 | 0.2568 | 0.1441 |
| Swallow-NVE | 7B | 0.5433 | 0.5425 | 0.5729 | 0.8684 | 0.2117 | 0.1200 | 0.2405 | 0.1512 |
| Llama 2 | 13B | 0.6997 | 0.4415 | 0.4170 | 0.8533 | 0.2139 | 0.1320 | 0.2146 | 0.1982 |
| Swallow | 13B | 0.7837 | 0.5063 | 0.6398 | 0.9005 | 0.2168 | 0.2040 | 0.2720 | 0.1771 |
| Swallow-NVE | 13B | 0.7712 | 0.5438 | 0.6351 | 0.9030 | 0.2294 | 0.2120 | 0.2735 | 0.1817 |
| Llama 2 | 70B | 0.8686 | 0.4656 | 0.5256 | 0.9080 | 0.2361 | 0.3560 | 0.2643 | **0.2398** |
| Swallow | 70B | 0.9348 | **0.6290** | 0.6960 | 0.9176 | 0.2266 | **0.4840** | **0.3043** | 0.2298 |
| Swallow-NVE | 70B | **0.9410** | 0.5759 | **0.7024** | **0.9254** | **0.2758** | 0.4720 | 0.3042 | 0.2322 |

### English tasks

|Model|Size|OpenBookQA|TriviaQA|HellaSwag|SQuAD2.0|XWINO|GSM8K|
|---|---|---|---|---|---|---|---|
| | |8-shot|8-shot|8-shot|8-shot|8-shot|8-shot|
| Llama 2 | 7B | 0.3580 | 0.6265 | 0.5860 | 0.3207 | 0.9049 | 0.1410 |
| Swallow | 7B | 0.3180 | 0.4836 | 0.5308 | 0.3125 | 0.8817 | 0.1130 |
| Swallow-Plus | 7B | 0.3280 | 0.4558 | 0.5259 | 0.3134 | 0.8929 | 0.1061 |
| Swallow-NVE | 7B | 0.3180 | 0.5079 | 0.5329 | 0.2919 | 0.8817 | 0.0986 |
| Llama 2 | 13B | 0.3760 | 0.7255 | 0.6148 | 0.3681 | 0.9140 | 0.2403 |
| Swallow | 13B | 0.3500 | 0.5852 | 0.5660 | 0.3406 | 0.9075 | 0.2039 |
| Swallow-NVE | 13B | 0.3460 | 0.6025 | 0.5700 | 0.3478 | 0.9006 | 0.1751 |
| Llama 2 | 70B | **0.4280** | **0.8239** | **0.6742** | **0.3770** | **0.9290** | **0.5284** |
| Swallow | 70B | 0.4220 | 0.7756 | 0.6458 | 0.3745 | 0.9204 | 0.4867 |
| Swallow-NVE | 70B | 0.4240 | 0.7817 | 0.6439 | 0.3451 | 0.9256 | 0.4943 |

## Evaluation Benchmarks

### Japanese evaluation benchmarks

We used llm-jp-eval (v1.0.0) and JP Language Model Evaluation Harness (commit #9b42d41). The details are as follows:

- Multiple-choice question answering (JCommonsenseQA [Kurihara+, 2022])
- Open-ended question answering (JEMHopQA [Ishii+, 2023])
- Open-ended question answering (NIILC [Sekine, 2003])
- Machine reading comprehension (JSQuAD [Kurihara+, 2022])
- Automatic summarization (XL-Sum [Hasan+, 2021])
- Machine translation (WMT2020 ja-en [Barrault+, 2020])
- Machine translation (WMT2020 en-ja [Barrault+, 2020])
- Mathematical reasoning (MGSM [Shi+, 2023])

### English evaluation benchmarks

We used the Language Model Evaluation Harness (v.0.3.0).
The details are as follows: - Multiple-choice question answering (OpenBookQA [Mihaylov+, 2018]) - Open-ended question answering (TriviaQA [Joshi+, 2017]) - Machine reading comprehension (SQuAD 2.0 [Rajpurkar+, 2018]) - Commonsense reasoning (XWINO [Tikhonov & Ryabinin, 2021]) - Natural language inference (HellaSwag [Zellers+, 2019]) - Mathematical reasoning (GSM8k [Cobbe+, 2021]) ## Usage First install additional dependencies in [requirements.txt](./requirements.txt): ```sh pip install -r requirements.txt ``` ### Use the instruct model ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "tokyotech-llm/Swallow-7b-instruct-hf" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, device_map="auto") PROMPT_DICT = { "prompt_input": ( "以下に、あるタスクを説明する指示があり、それに付随する入力が更なる文脈を提供しています。" "リクエストを適切に完了するための回答を記述してください。\n\n" "### 指示:\n{instruction}\n\n### 入力:\n{input}\n\n### 応答:" ), "prompt_no_input": ( "以下に、あるタスクを説明する指示があります。" "リクエストを適切に完了するための回答を記述してください。\n\n" "### 指示:\n{instruction}\n\n### 応答:" ), } def create_prompt(instruction, input=None): """ Generates a prompt based on the given instruction and an optional input. If input is provided, it uses the 'prompt_input' template from PROMPT_DICT. If no input is provided, it uses the 'prompt_no_input' template. Args: instruction (str): The instruction describing the task. input (str, optional): Additional input providing context for the task. Default is None. Returns: str: The generated prompt. """ if input: # Use the 'prompt_input' template when additional input is provided return PROMPT_DICT["prompt_input"].format(instruction=instruction, input=input) else: # Use the 'prompt_no_input' template when no additional input is provided return PROMPT_DICT["prompt_no_input"].format(instruction=instruction) # Example usage instruction_example = "以下のトピックに関する詳細な情報を提供してください。" input_example = "東京工業大学の主なキャンパスについて教えてください" prompt = create_prompt(instruction_example, input_example) input_ids = tokenizer.encode( prompt, add_special_tokens=False, return_tensors="pt" ) tokens = model.generate( input_ids.to(device=model.device), max_new_tokens=128, temperature=0.99, top_p=0.95, do_sample=True, ) out = tokenizer.decode(tokens[0], skip_special_tokens=True) print(out) ``` ### Use the base model ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "tokyotech-llm/Swallow-7b-hf" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto") prompt = "東京工業大学の主なキャンパスは、" input_ids = tokenizer.encode( prompt, add_special_tokens=False, return_tensors="pt" ) tokens = model.generate( input_ids.to(device=model.device), max_new_tokens=128, temperature=0.99, top_p=0.95, do_sample=True, ) out = tokenizer.decode(tokens[0], skip_special_tokens=True) print(out) ``` ## Training Datasets ### Continual Pre-Training The following datasets were used for continual pre-training. - [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch) - [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) - [Swallow Corpus](https://arxiv.org/abs/2404.17733) - [The Pile](https://huggingface.co/datasets/EleutherAI/pile) ### Instruction Tuning The following datasets were used for the instruction tuning. 
- [Anthropic HH-RLHF](https://huggingface.co/datasets/kunishou/hh-rlhf-49k-ja) - [Databricks Dolly 15-k](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja) - [OpenAssistant Conversations Dataset](https://huggingface.co/datasets/kunishou/oasst1-89k-ja) ## Risks and Limitations The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations. ## Acknowledgements We thank Meta Research for releasing Llama 2 under an open license for others to build on. Our project is supported by the [ABCI Large-scale Language Model Building Support Program](https://abci.ai/en/link/llm_support_program.html) of the National Institute of Advanced Industrial Science and Technology. ## License Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved. ## Authors Here are the team members: - From [Okazaki Laboratory](https://www.nlp.c.titech.ac.jp/index.en.html), the following members: - [Naoaki Okazaki](https://www.chokkan.org/index.ja.html) - [Sakae Mizuki](https://s-mizuki-nlp.github.io/) - [Hiroki Iida](https://meshidenn.github.io/) - [Mengsay Loem](https://loem-ms.github.io/) - [Shota Hirai](https://huggingface.co/Kotemo428) - [Kakeru Hattori](https://aya-se.vercel.app/) - [Masanari Ohi](https://twitter.com/stjohn2007) - From [YOKOTA Laboratory](https://www.rio.gsic.titech.ac.jp/en/index.html), the following members: - [Rio Yokota](https://twitter.com/rioyokota) - [Kazuki Fujii](https://twitter.com/okoge_kaz) - [Taishi Nakamura](https://twitter.com/Setuna7777_2) ## How to cite ``` @misc{fujii2024continual, title={Continual Pre-Training for Cross-Lingual LLM Adaptation: Enhancing Japanese Language Capabilities}, author={Kazuki Fujii and Taishi Nakamura and Mengsay Loem and Hiroki Iida and Masanari Ohi and Kakeru Hattori and Hirai Shota and Sakae Mizuki and Rio Yokota and Naoaki Okazaki}, year={2024}, eprint={2404.17790}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
RichardErkhov/Rijgersberg_-_GEITje-7B-gguf
RichardErkhov
2024-05-19T23:42:01Z
468
0
null
[ "gguf", "region:us" ]
null
2024-05-19T21:32:39Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) GEITje-7B - GGUF - Model creator: https://huggingface.co/Rijgersberg/ - Original model: https://huggingface.co/Rijgersberg/GEITje-7B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [GEITje-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Rijgersberg_-_GEITje-7B-gguf/blob/main/GEITje-7B.Q2_K.gguf) | Q2_K | 2.53GB | | [GEITje-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Rijgersberg_-_GEITje-7B-gguf/blob/main/GEITje-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [GEITje-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Rijgersberg_-_GEITje-7B-gguf/blob/main/GEITje-7B.IQ3_S.gguf) | IQ3_S | 2.96GB | | [GEITje-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Rijgersberg_-_GEITje-7B-gguf/blob/main/GEITje-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [GEITje-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Rijgersberg_-_GEITje-7B-gguf/blob/main/GEITje-7B.IQ3_M.gguf) | IQ3_M | 3.06GB | | [GEITje-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Rijgersberg_-_GEITje-7B-gguf/blob/main/GEITje-7B.Q3_K.gguf) | Q3_K | 3.28GB | | [GEITje-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Rijgersberg_-_GEITje-7B-gguf/blob/main/GEITje-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [GEITje-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Rijgersberg_-_GEITje-7B-gguf/blob/main/GEITje-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [GEITje-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Rijgersberg_-_GEITje-7B-gguf/blob/main/GEITje-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [GEITje-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Rijgersberg_-_GEITje-7B-gguf/blob/main/GEITje-7B.Q4_0.gguf) | Q4_0 | 3.83GB | | [GEITje-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Rijgersberg_-_GEITje-7B-gguf/blob/main/GEITje-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [GEITje-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Rijgersberg_-_GEITje-7B-gguf/blob/main/GEITje-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [GEITje-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Rijgersberg_-_GEITje-7B-gguf/blob/main/GEITje-7B.Q4_K.gguf) | Q4_K | 4.07GB | | [GEITje-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Rijgersberg_-_GEITje-7B-gguf/blob/main/GEITje-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [GEITje-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Rijgersberg_-_GEITje-7B-gguf/blob/main/GEITje-7B.Q4_1.gguf) | Q4_1 | 4.24GB | | [GEITje-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Rijgersberg_-_GEITje-7B-gguf/blob/main/GEITje-7B.Q5_0.gguf) | Q5_0 | 4.65GB | | [GEITje-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Rijgersberg_-_GEITje-7B-gguf/blob/main/GEITje-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [GEITje-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Rijgersberg_-_GEITje-7B-gguf/blob/main/GEITje-7B.Q5_K.gguf) | Q5_K | 4.78GB | | [GEITje-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Rijgersberg_-_GEITje-7B-gguf/blob/main/GEITje-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [GEITje-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Rijgersberg_-_GEITje-7B-gguf/blob/main/GEITje-7B.Q5_1.gguf) | Q5_1 | 5.07GB | | [GEITje-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Rijgersberg_-_GEITje-7B-gguf/blob/main/GEITje-7B.Q6_K.gguf) | Q6_K | 5.53GB | | [GEITje-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Rijgersberg_-_GEITje-7B-gguf/blob/main/GEITje-7B.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- license: apache-2.0 
base_model: mistralai/Mistral-7B-v0.1 tags: - generated_from_trainer - GEITje datasets: - Rijgersberg/GEITje-pretrain-10b model-index: - name: GEITje-v1-7B results: [] language: - nl --- # GEITje-7B GEITje is a large open Dutch language model with 7 billion parameters, based on Mistral 7B. It has been further trained on 10 billion tokens of Dutch text. This has improved its Dutch language skills and increased its knowledge of Dutch topics. ## Model description ### _Mistral_ – Base Model GEITje is based on [Mistral 7B](https://mistral.ai/news/announcing-mistral-7b/). It's a large open language model with 7 billion parameters, trained by [Mistral AI](https://mistral.ai). According to Mistral AI, the 7B model performs better than [Llama 2](https://ai.meta.com/llama/) 13B on all (English-language) benchmarks they tested it on. Mistral 7B has been released under the Apache 2.0 open source license. ### _GEITje_ – Trained Further on Dutch Texts GEITje was created by further training Mistral 7B on no less than 10 billion tokens of Dutch text from the [Dutch Gigacorpus](http://gigacorpus.nl) and the [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400) web crawling corpus. It is a so-called _full-parameter finetune_: performed on all parameters. It is not a [PEFT](https://huggingface.co/blog/peft) or [LoRA](https://huggingface.co/docs/peft/conceptual_guides/lora) finetune. Like Mistral, GEITje has a _context length_ of 8,192 tokens. ## More info Read more about GEITje in the [📄 README](https://github.com/Rijgersberg/GEITje/blob/main/README-en.md) on GitHub. ## Checkpoints Intermediate checkpoints are available in the `checkpoints` branch. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 953 - training_steps: 9536 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.6995 | 0.02 | 199 | 1.7673 | | 1.6949 | 0.04 | 398 | 1.6880 | | 1.6377 | 0.06 | 597 | 1.6429 | | 1.6011 | 0.08 | 796 | 1.6384 | | 1.5196 | 0.1 | 995 | 1.6060 | | 1.5158 | 0.13 | 1194 | 1.5832 | | 1.5181 | 0.15 | 1393 | 1.5541 | | 1.4931 | 0.17 | 1592 | 1.5493 | | 1.4972 | 0.19 | 1791 | 1.5407 | | 1.5349 | 0.21 | 1990 | 1.5305 | | 1.5025 | 0.23 | 2189 | 1.5263 | | 1.396 | 0.25 | 2388 | 1.5140 | | 1.4353 | 0.27 | 2587 | 1.5104 | | 1.4307 | 0.29 | 2786 | 1.5003 | | 1.3974 | 0.31 | 2985 | 1.4849 | | 1.404 | 0.33 | 3184 | 1.4771 | | 1.4299 | 0.35 | 3383 | 1.4825 | | 1.4342 | 0.38 | 3582 | 1.4705 | | 1.4341 | 0.4 | 3781 | 1.4643 | | 1.4535 | 0.42 | 3980 | 1.4580 | | 1.4799 | 0.44 | 4179 | 1.4521 | | 1.35 | 0.46 | 4378 | 1.4478 | | 1.4586 | 0.48 | 4577 | 1.4425 | | 1.3685 | 0.5 | 4776 | 1.4368 | | 1.4572 | 0.52 | 4975 | 1.4313 | | 1.3293 | 0.54 | 5174 | 1.4265 | | 1.403 | 0.56 | 5373 | 1.4241 | | 1.3057 | 0.58 | 5572 | 1.4188 | | 1.244 | 0.61 | 5771 | 1.4178 | | 1.3224 | 0.63 | 5970 | 1.4110 | | 1.3238 | 0.65 | 6169 | 1.4083 | | 1.3262 | 0.67 | 6368 | 1.4050 | | 1.3237 | 0.69 | 6567 | 1.4027 | | 1.0453 | 0.71 | 6766 | 1.4005 | | 1.3136 | 0.73 | 6965 | 1.3992 | | 1.3137 | 0.75 | 7164 | 1.3975 | | 1.1587 | 0.77 | 7363 | 1.3964 | | 1.316 | 0.79 | 7562 | 1.3957 | | 1.2738 | 0.81 | 
7761 | 1.3951 | | 1.308 | 0.83 | 7960 | 1.3949 | | 1.4049 | 0.86 | 8159 | 1.3946 | | 1.3324 | 0.88 | 8358 | 1.3944 | | 1.3446 | 0.9 | 8557 | 1.3944 | | 1.2489 | 0.92 | 8756 | 1.3943 | | 1.2687 | 0.94 | 8955 | 1.3943 | | 1.3293 | 0.96 | 9154 | 1.3943 | | 1.3045 | 0.98 | 9353 | 1.3943 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.1+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
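As a rough illustration of the hyperparameters listed above, here is how they would map onto the 🤗 `Trainer` API. This is a minimal sketch, not the original training script: the `output_dir` is a placeholder, and anything the card does not list (precision, logging, data collation) is omitted or assumed.

```python
# Sketch only: mirrors the card's hyperparameters with transformers' TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="geitje-7b",          # placeholder path (assumption)
    learning_rate=2e-5,
    per_device_train_batch_size=2,   # 2 x 8 devices x 8 accumulation steps = 128 total
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=953,
    max_steps=9536,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```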
RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf
RichardErkhov
2024-05-27T03:42:51Z
468
0
null
[ "gguf", "region:us" ]
null
2024-05-27T01:24:57Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) cria-llama2-7b-v1.3 - GGUF - Model creator: https://huggingface.co/davzoku/ - Original model: https://huggingface.co/davzoku/cria-llama2-7b-v1.3/ | Name | Quant method | Size | | ---- | ---- | ---- | | [cria-llama2-7b-v1.3.Q2_K.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q2_K.gguf) | Q2_K | 2.36GB | | [cria-llama2-7b-v1.3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.IQ3_XS.gguf) | IQ3_XS | 2.6GB | | [cria-llama2-7b-v1.3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.IQ3_S.gguf) | IQ3_S | 2.75GB | | [cria-llama2-7b-v1.3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q3_K_S.gguf) | Q3_K_S | 2.75GB | | [cria-llama2-7b-v1.3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.IQ3_M.gguf) | IQ3_M | 2.9GB | | [cria-llama2-7b-v1.3.Q3_K.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q3_K.gguf) | Q3_K | 3.07GB | | [cria-llama2-7b-v1.3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q3_K_M.gguf) | Q3_K_M | 3.07GB | | [cria-llama2-7b-v1.3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q3_K_L.gguf) | Q3_K_L | 3.35GB | | [cria-llama2-7b-v1.3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.IQ4_XS.gguf) | IQ4_XS | 3.4GB | | [cria-llama2-7b-v1.3.Q4_0.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q4_0.gguf) | Q4_0 | 3.56GB | | [cria-llama2-7b-v1.3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.IQ4_NL.gguf) | IQ4_NL | 3.58GB | | [cria-llama2-7b-v1.3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q4_K_S.gguf) | Q4_K_S | 3.59GB | | [cria-llama2-7b-v1.3.Q4_K.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q4_K.gguf) | Q4_K | 3.8GB | | [cria-llama2-7b-v1.3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q4_K_M.gguf) | Q4_K_M | 3.8GB | | [cria-llama2-7b-v1.3.Q4_1.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q4_1.gguf) | Q4_1 | 3.95GB | | [cria-llama2-7b-v1.3.Q5_0.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q5_0.gguf) | Q5_0 | 4.33GB | | [cria-llama2-7b-v1.3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q5_K_S.gguf) | Q5_K_S | 4.33GB | | [cria-llama2-7b-v1.3.Q5_K.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q5_K.gguf) | Q5_K | 4.45GB | | 
[cria-llama2-7b-v1.3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q5_K_M.gguf) | Q5_K_M | 4.45GB | | [cria-llama2-7b-v1.3.Q5_1.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q5_1.gguf) | Q5_1 | 4.72GB | | [cria-llama2-7b-v1.3.Q6_K.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q6_K.gguf) | Q6_K | 5.15GB | | [cria-llama2-7b-v1.3.Q8_0.gguf](https://huggingface.co/RichardErkhov/davzoku_-_cria-llama2-7b-v1.3-gguf/blob/main/cria-llama2-7b-v1.3.Q8_0.gguf) | Q8_0 | 6.67GB | Original model description: --- inference: false language: en license: llama2 model_type: llama datasets: - mlabonne/CodeLlama-2-20k pipeline_tag: text-generation tags: - llama-2 --- # CRIA v1.3 💡 [Article](https://walterteng.com/cria) | 💻 [Github](https://github.com/davzoku/cria) | 📔 Colab [1](https://colab.research.google.com/drive/1rYTs3qWJerrYwihf1j0f00cnzzcpAfYe),[2](https://colab.research.google.com/drive/1Wjs2I1VHjs6zT_GE42iEXsLtYh6VqiJU) ## What is CRIA? > krē-ə plural crias. : a baby llama, alpaca, vicuña, or guanaco. <p align="center"> <img src="https://raw.githubusercontent.com/davzoku/cria/main/assets/icon-512x512.png" width="300" height="300" alt="Cria Logo"> <br> <i>or what ChatGPT suggests, <b>"Crafting a Rapid prototype of an Intelligent llm App using open source resources"</b>.</i> </p> The initial objective of the CRIA project is to develop a comprehensive end-to-end chatbot system, starting from the instruction-tuning of a large language model and extending to its deployment on the web using frameworks such as Next.js. Specifically, we have fine-tuned the `llama-2-7b-chat-hf` model with QLoRA (4-bit precision) using the [mlabonne/CodeLlama-2-20k](https://huggingface.co/datasets/mlabonne/CodeLlama-2-20k) dataset. This fine-tuned model serves as the backbone for the [CRIA chat](https://chat.walterteng.com) platform. ## 📦 Model Release CRIA v1.3 comes with several variants. - [davzoku/cria-llama2-7b-v1.3](https://huggingface.co/davzoku/cria-llama2-7b-v1.3): Merged Model - [davzoku/cria-llama2-7b-v1.3-GGML](https://huggingface.co/davzoku/cria-llama2-7b-v1.3-GGML): Quantized Merged Model - [davzoku/cria-llama2-7b-v1.3_peft](https://huggingface.co/davzoku/cria-llama2-7b-v1.3_peft): PEFT adapter ## 🔧 Training It was trained on a Google Colab notebook with a T4 GPU and high RAM. ### Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0 ## 💻 Usage ```python # pip install transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "davzoku/cria-llama2-7b-v1.3" prompt = "What is a cria?" 
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

sequences = pipeline(
    f'<s>[INST] {prompt} [/INST]',
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

## References

We'd like to thank:

- [mlabonne](https://huggingface.co/mlabonne) for his article and resources on the implementation of instruction tuning
- [TheBloke](https://huggingface.co/TheBloke) for his script for LLM quantization.
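For reference, the `bitsandbytes` settings listed under *Training procedure* above correspond roughly to the following `BitsAndBytesConfig`. This is a sketch for illustration, not the original notebook; it only restates the card's 4-bit flags in code form (the `llm_int8_*` entries are library defaults and are omitted).

```python
# Sketch only: the card's 4-bit quantization flags expressed as a transformers BitsAndBytesConfig.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```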
EPFL-VILAB/4M_tokenizers_sam-instance_1k_64
EPFL-VILAB
2024-06-14T08:22:53Z
468
1
ml-4m
[ "ml-4m", "safetensors", "arxiv:2312.06647", "arxiv:2406.09406", "license:other", "region:us" ]
null
2024-06-12T08:48:28Z
---
license: other
license_name: sample-code-license
license_link: LICENSE
library_name: ml-4m
---

# 4M: Massively Multimodal Masked Modeling

*A framework for training any-to-any multimodal foundation models. <br>Scalable. Open-sourced. Across tens of modalities and tasks.*

[`Website`](https://4m.epfl.ch) | [`GitHub`](https://github.com/apple/ml-4m) | [`BibTeX`](#citation)

Official implementation and pre-trained models for:

[**4M: Massively Multimodal Masked Modeling**](https://arxiv.org/abs/2312.06647), NeurIPS 2023 (Spotlight) <br>
*[David Mizrahi](https://dmizrahi.com/)\*, [Roman Bachmann](https://roman-bachmann.github.io/)\*, [Oğuzhan Fatih Kar](https://ofkar.github.io/), [Teresa Yeo](https://aserety.github.io/), [Mingfei Gao](https://fly6464.github.io/), [Afshin Dehghan](https://www.afshindehghan.com/), [Amir Zamir](https://vilab.epfl.ch/zamir/)*

[**4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities**](https://arxiv.org/abs/2406.09406), arXiv 2024 <br>
*[Roman Bachmann](https://roman-bachmann.github.io/)\*, [Oğuzhan Fatih Kar](https://ofkar.github.io/)\*, [David Mizrahi](https://dmizrahi.com/)\*, [Ali Garjani](https://garjania.github.io/), [Mingfei Gao](https://fly6464.github.io/), [David Griffiths](https://www.dgriffiths.uk/), [Jiaming Hu](https://scholar.google.com/citations?user=vm3imKsAAAAJ&hl=en), [Afshin Dehghan](https://www.afshindehghan.com/), [Amir Zamir](https://vilab.epfl.ch/zamir/)*

4M is a framework for training "any-to-any" foundation models, using tokenization and masking to scale to many diverse modalities. Models trained using 4M can perform a wide range of vision tasks, transfer well to unseen tasks and modalities, and are flexible and steerable multimodal generative models. We are releasing code and models for "4M: Massively Multimodal Masked Modeling" (here denoted 4M-7), as well as "4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities" (here denoted 4M-21).

## Installation
For install instructions, please see https://github.com/apple/ml-4m.

## Usage

The SAM instance tokenizer can be loaded from Hugging Face Hub as follows:
```python
from fourm.vq.vqvae import VQVAE
tok_sam_instance = VQVAE.from_pretrained('EPFL-VILAB/4M_tokenizers_sam-instance_1k_64')
```

Please see https://github.com/apple/ml-4m/blob/main/README_TOKENIZATION.md for more detailed instructions and https://github.com/apple/ml-4m for other tokenizer and 4M model checkpoints.

## Citation

If you find this repository helpful, please consider citing our work:
```
@inproceedings{4m,
    title={{4M}: Massively Multimodal Masked Modeling},
    author={David Mizrahi and Roman Bachmann and O{\u{g}}uzhan Fatih Kar and Teresa Yeo and Mingfei Gao and Afshin Dehghan and Amir Zamir},
    booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
    year={2023},
}

@article{4m21,
    title={{4M-21}: An Any-to-Any Vision Model for Tens of Tasks and Modalities},
    author={Roman Bachmann and O{\u{g}}uzhan Fatih Kar and David Mizrahi and Ali Garjani and Mingfei Gao and David Griffiths and Jiaming Hu and Afshin Dehghan and Amir Zamir},
    journal={arXiv 2024},
    year={2024},
}
```

## License
The model weights in this repository are released under the Sample Code license as found in the [LICENSE](LICENSE) file.
Ali-C137/M03H10M-0000
Ali-C137
2024-06-21T12:37:00Z
468
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-18T15:05:46Z
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
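Since the card above is an unfilled template, the snippet below is only an assumption inferred from the repository tags (`transformers`, `mistral`, `text-generation`); it shows the generic way such a checkpoint is loaded, not documented usage.

```python
# Generic text-generation sketch (assumption based on repo tags, not on this card).
# device_map="auto" additionally requires the accelerate package.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ali-C137/M03H10M-0000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```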
CHE-72/Baichuan2-7B-Chat-Q5_0-GGUF
CHE-72
2024-06-22T08:55:42Z
468
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "zh", "base_model:baichuan-inc/Baichuan2-7B-Chat", "region:us" ]
null
2024-06-22T08:55:21Z
---
base_model: baichuan-inc/Baichuan2-7B-Chat
language:
- en
- zh
license_name: baichuan2-community-license
license_link: https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat/blob/main/Community%20License%20for%20Baichuan2%20Model.pdf
tags:
- llama-cpp
- gguf-my-repo
tasks:
- text-generation
---

# CHE-72/Baichuan2-7B-Chat-Q5_0-GGUF
This model was converted to GGUF format from [`baichuan-inc/Baichuan2-7B-Chat`](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo CHE-72/Baichuan2-7B-Chat-Q5_0-GGUF --hf-file baichuan2-7b-chat-q5_0.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo CHE-72/Baichuan2-7B-Chat-Q5_0-GGUF --hf-file baichuan2-7b-chat-q5_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Baichuan2-7B-Chat-Q5_0-GGUF --hf-file baichuan2-7b-chat-q5_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Baichuan2-7B-Chat-Q5_0-GGUF --hf-file baichuan2-7b-chat-q5_0.gguf -c 2048
```
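If you prefer Python over the CLI, the same GGUF file can be loaded with the `llama-cpp-python` bindings. A minimal sketch, assuming the `llama-cpp-python` package is installed (its `Llama.from_pretrained` helper downloads the file from the Hub):

```python
# Sketch using llama-cpp-python (assumption: installed via `pip install llama-cpp-python`).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="CHE-72/Baichuan2-7B-Chat-Q5_0-GGUF",
    filename="baichuan2-7b-chat-q5_0.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```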
Helsinki-NLP/opus-mt-tll-en
Helsinki-NLP
2023-08-16T12:06:56Z
467
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "tll", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-tll-en * source languages: tll * target languages: en * OPUS readme: [tll-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tll-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tll-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tll-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.tll.en | 34.5 | 0.500 |
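The card above lists no usage snippet; the following is a minimal translation sketch using the standard 🤗 Marian classes (the source sentence is a placeholder; substitute real text in the source language, `tll`).

```python
# Sketch: tll -> en translation with the standard transformers Marian API.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tll-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["<tll source sentence here>"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```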
google/tapas-large-finetuned-tabfact
google
2021-11-29T13:21:34Z
467
3
transformers
[ "transformers", "pytorch", "tf", "tapas", "text-classification", "sequence-classification", "en", "dataset:tab_fact", "arxiv:2010.00571", "arxiv:2004.02349", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
---
language: en
tags:
- tapas
- sequence-classification
license: apache-2.0
datasets:
- tab_fact
---

# TAPAS large model fine-tuned on Tabular Fact Checking (TabFact)

This model has two versions that can be used. The latest version, which is the default one, corresponds to the `tapas_tabfact_inter_masklm_large_reset` checkpoint of the [original GitHub repository](https://github.com/google-research/tapas).
This model was pre-trained on MLM and an additional step which the authors call intermediate pre-training, and then fine-tuned on [TabFact](https://github.com/wenhuchen/Table-Fact-Checking). It uses relative position embeddings by default (i.e. resetting the position index at every cell of the table).

The other (non-default) version that can be used is the one with absolute position embeddings:
- `no_reset`, which corresponds to `tapas_tabfact_inter_masklm_large`

Disclaimer: The team releasing TAPAS did not write a model card for this model, so this model card has been written by the Hugging Face team and contributors.

## Model description

TAPAS is a BERT-like transformers model pretrained on a large corpus of English data from Wikipedia in a self-supervised fashion. This means it was pretrained on the raw tables and associated texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.

More precisely, it was pretrained with two objectives:

- Masked language modeling (MLM): taking a (flattened) table and associated context, the model randomly masks 15% of the words in the input, then runs the entire (partially masked) sequence through the model. The model then has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of a table and associated text.
- Intermediate pre-training: to encourage numerical reasoning on tables, the authors additionally pre-trained the model by creating a balanced dataset of millions of syntactically created training examples. Here, the model must predict (classify) whether a sentence is supported or refuted by the contents of a table. The training examples are created based on synthetic as well as counterfactual statements.

This way, the model learns an inner representation of the English language used in tables and associated texts, which can then be used to extract features useful for downstream tasks such as answering questions about a table, or determining whether a sentence is entailed or refuted by the contents of a table. Fine-tuning is done by adding a classification head on top of the pre-trained model, and then jointly training this randomly initialized classification head with the base model on TabFact.

## Intended uses & limitations

You can use this model for classifying whether a sentence is supported or refuted by the contents of a table. For code examples, we refer to the documentation of TAPAS on the HuggingFace website; a minimal sketch is also included at the end of this card.

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence [SEP] Flattened table [SEP]
```

### Fine-tuning

The model was fine-tuned on 32 Cloud TPU v3 cores for 80,000 steps with maximum sequence length 512 and batch size of 512.
In this setup, fine-tuning takes around 14 hours. The optimizer used is Adam with a learning rate of 2e-5, and a warmup ratio of 0.05. See the [paper](https://arxiv.org/abs/2010.00571) for more details (appendix A2). ### BibTeX entry and citation info ```bibtex @misc{herzig2020tapas, title={TAPAS: Weakly Supervised Table Parsing via Pre-training}, author={Jonathan Herzig and Paweł Krzysztof Nowak and Thomas Müller and Francesco Piccinno and Julian Martin Eisenschlos}, year={2020}, eprint={2004.02349}, archivePrefix={arXiv}, primaryClass={cs.IR} } ``` ```bibtex @misc{eisenschlos2020understanding, title={Understanding tables with intermediate pre-training}, author={Julian Martin Eisenschlos and Syrine Krichene and Thomas Müller}, year={2020}, eprint={2010.00571}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ```bibtex @inproceedings{2019TabFactA, title={TabFact : A Large-scale Dataset for Table-based Fact Verification}, author={Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou and William Yang Wang}, booktitle = {International Conference on Learning Representations (ICLR)}, address = {Addis Ababa, Ethiopia}, month = {April}, year = {2020} } ```
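As referenced under *Intended uses & limitations* above, here is a minimal classification sketch using the standard 🤗 TAPAS classes. The table and sentence are invented, and the label convention (1 = supported, 0 = refuted) is an assumption to verify against the model config.

```python
# Sketch: table entailment with transformers' TAPAS classes (example table is made up).
import pandas as pd
import torch
from transformers import TapasForSequenceClassification, TapasTokenizer

model_name = "google/tapas-large-finetuned-tabfact"
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForSequenceClassification.from_pretrained(model_name)

# TAPAS expects the table as a pandas DataFrame with string-valued cells.
table = pd.DataFrame({"City": ["Paris", "Berlin"], "Population": ["2.1M", "3.6M"]})
inputs = tokenizer(table=table,
                   queries=["Berlin has more inhabitants than Paris"],
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(-1).item()
print("supported" if pred == 1 else "refuted")  # label mapping assumed; check the config
```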
indobenchmark/indogpt
indobenchmark
2022-06-21T17:51:47Z
467
12
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "indogpt", "indobenchmark", "indonlg", "id", "arxiv:2104.08200", "license:mit", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: id tags: - indogpt - indobenchmark - indonlg license: mit inference: false datasets: - Indo4B+ --- # IndoGPT Model [IndoGPT](https://arxiv.org/abs/2104.08200) is a state-of-the-art language model for Indonesian based on the GPT model. The pretrained model is trained using the GPT training objective. ## All Pre-trained Models | Model | #params | Training data | |--------------------------------|--------------------------------|-----------------------------------| | `indobenchmark/indogpt` | 117M | Indo4B-Plus (23.79 GB of text) | ## Authors <b>IndoGPT</b> was trained and evaluated by Samuel Cahyawijaya*, Genta Indra Winata*, Bryan Wilie*, Karissa Vincentio*, Xiaohong Li*, Adhiguna Kuncoro*, Sebastian Ruder, Zhi Yuan Lim, Syafri Bahar, Masayu Leylia Khodra, Ayu Purwarianti, Pascale Fung ## Citation If you use our work, please cite: ```bibtex @article{cahyawijaya2021indonlg, title={IndoNLG: Benchmark and Resources for Evaluating Indonesian Natural Language Generation}, author={Cahyawijaya, Samuel and Winata, Genta Indra and Wilie, Bryan and Vincentio, Karissa and Li, Xiaohong and Kuncoro, Adhiguna and Ruder, Sebastian and Lim, Zhi Yuan and Bahar, Syafri and Khodra, Masayu Leylia and others}, journal={arXiv preprint arXiv:2104.08200}, year={2021} } ```
timm/ecaresnet50t.ra2_in1k
timm
2024-02-10T23:38:21Z
467
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "arxiv:2110.00476", "arxiv:1512.03385", "arxiv:1910.03151", "arxiv:1812.01187", "license:apache-2.0", "region:us" ]
image-classification
2023-04-05T17:58:53Z
---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
---

# Model card for ecaresnet50t.ra2_in1k

An ECA-ResNet-T image classification model with Efficient Channel Attention.

This model features:
* ReLU activations
* tiered 3-layer stem of 3x3 convolutions with pooling
* 2x2 average pool + 1x1 convolution shortcut downsample
* Efficient Channel Attention

Trained on ImageNet-1k in `timm` using the recipe template described below.

Recipe details:
* RandAugment `RA2` recipe. Inspired by and evolved from EfficientNet RandAugment recipes. Published as `B` recipe in [ResNet Strikes Back](https://arxiv.org/abs/2110.00476).
* RMSProp (TF 1.0 behaviour) optimizer, EMA weight averaging
* Step (exponential decay w/ staircase) LR schedule with warmup

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 25.6
  - GMACs: 5.6
  - Activations (M): 15.4
  - Image size: train = 256 x 256, test = 320 x 320
- **Papers:**
  - ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
  - Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
  - ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks: https://arxiv.org/abs/1910.03151
  - Bag of Tricks for Image Classification with Convolutional Neural Networks: https://arxiv.org/abs/1812.01187
- **Original:** https://github.com/huggingface/pytorch-image-models

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('ecaresnet50t.ra2_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'ecaresnet50t.ra2_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 64, 128, 128])
    #  torch.Size([1, 256, 64, 64])
    #  torch.Size([1, 512, 32, 32])
    #  torch.Size([1, 1024, 16, 16])
    #  torch.Size([1, 2048, 8, 8])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'ecaresnet50t.ra2_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = 
timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 2048, 8, 8) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). |model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec| |------------------------------------------|--------|-----|-----|-----------|-----|-----|-------| |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 | |[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 | |[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 | |[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 | 
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 | |[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 | |[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 | |[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 | |[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 | |[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 | |[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 | 
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 | |[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 | |[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 | |[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 | |[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 | |[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 | 
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 | |[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 | |[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 | |[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 | |[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 | |[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 | |[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 | |[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 | |[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 | |[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 | |[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 | |[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 | 
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 | |[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 | |[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 | |[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 | |[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 | |[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 | |[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 | 
|[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 | |[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 | |[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 | |[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 | |[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 | |[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 | |[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 | |[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 | |[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 
|79.56|94.72|25.6 |4.3 |11.8 |2805 | |[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 | |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 | |[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 | |[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 | |[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 | |[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 | |[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 | |[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 | |[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 | |[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 | |[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 | |[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 | |[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 | |[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 | 
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 | |[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 | |[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 | |[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 | |[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 | |[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 | |[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 | |[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 | 
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 | |[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 | |[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 | |[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 | |[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 | |[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 | |[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 | |[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 | |[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 | |[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 | ## Citation ```bibtex @inproceedings{wightman2021resnet, title={ResNet strikes back: An improved training procedure in timm}, author={Wightman, Ross and Touvron, Hugo and Jegou, Herve}, booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{He2015, author = {Kaiming He and 
Xiangyu Zhang and Shaoqing Ren and Jian Sun}, title = {Deep Residual Learning for Image Recognition}, journal = {arXiv preprint arXiv:1512.03385}, year = {2015} } ``` ```bibtex @InProceedings{wang2020eca, title={ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks}, author={Qilong Wang, Banggu Wu, Pengfei Zhu, Peihua Li, Wangmeng Zuo and Qinghua Hu}, booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, year={2020} } ``` ```bibtex @article{He2018BagOT, title={Bag of Tricks for Image Classification with Convolutional Neural Networks}, author={Tong He and Zhi Zhang and Hang Zhang and Zhongyue Zhang and Junyuan Xie and Mu Li}, journal={2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year={2018}, pages={558-567} } ```
Yntec/RealRainbows
Yntec
2023-07-23T20:12:01Z
467
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "Cisney_Gassai", "PatchMonk", "rainbowpatch", "Realisian", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-23T19:28:16Z
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- Cisney_Gassai
- PatchMonk
- rainbowpatch
- Realisian
---

# RealRainbows

A mix of Realisian by Cisney_Gassai and Rainbowpatch by PatchMonk. You can use "rainbowpatch" in the prompt to enhance the style.

Original pages:

https://civitai.com/models/5528/rainbowpatch

https://civitai.com/models/47130?modelVersionId=51711
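Below is a minimal `diffusers` sketch of how a checkpoint like this is typically loaded and prompted with the trigger word; the prompt and generation settings are illustrative, not taken from the original pages:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the merged SD 1.5-style checkpoint from the Hub.
pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/RealRainbows", torch_dtype=torch.float16
).to("cuda")

# "rainbowpatch" is the trigger word mentioned above; the rest of the
# prompt is only an example.
prompt = "rainbowpatch, portrait of a girl under a rainbow, highly detailed"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("realrainbows_sample.png")
```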
Yntec/Ambrosia
Yntec
2023-11-24T03:56:19Z
467
3
diffusers
[ "diffusers", "safetensors", "Anime", "Base Model", "Landscape", "BakingBeans", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-11-23T19:18:11Z
---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Anime
- Base Model
- Landscape
- BakingBeans
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---

# OS-Ambrosia

This is the OS-Ambrosia model with the MoistMixV2 VAE baked in.

Original page: https://civitai.com/models/77174/os-ambrosia

Comparison of no VAE vs. VAE:

![Comparison](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/VrwQfHCCL8KMXutCNtjrk.png)

Sample and prompt:

![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/5o3N3eKDA4knq73m5wD-b.png)

Full picture of a pretty cute girl making food, detailed face, sitting on food, intricate, aesthetic, unreal engine, neat hair, highly detailed, smooth, light novel cover art by artgerm, greg rutkowski, ilya kuvshinov, rossdraws and alphonse mucha. sharp focus, chiaroscuro, manga illustration
serdarcaglar/primary-school-math-question
serdarcaglar
2024-04-01T21:04:16Z
467
0
setfit
[ "setfit", "onnx", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/all-MiniLM-L6-v2", "model-index", "region:us" ]
text-classification
2024-03-29T18:22:53Z
--- library_name: setfit tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer metrics: - accuracy widget: - text: If you have 3 red apples and 2 green apples, how many apples do you have in total? - text: Can you name three different colors? - text: If you have 15 cookies and you share them equally among 3 friends, how many cookies does each friend get? - text: Which state is known as the Sunshine State? - text: John has 2 more toy cars than Sam. If Sam has 6 cars, how many does John have? pipeline_tag: text-classification inference: true base_model: sentence-transformers/all-MiniLM-L6-v2 model-index: - name: SetFit with sentence-transformers/all-MiniLM-L6-v2 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 1.0 name: Accuracy --- # SetFit with sentence-transformers/all-MiniLM-L6-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 256 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:---------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | non_math | <ul><li>'What is the largest ocean on Earth?'</li><li>'What is the name of the galaxy that contains our solar system?'</li><li>'What is the name of the ocean on the east coast of the United States?'</li></ul> | | math | <ul><li>'Which is more: 7 or 9?'</li><li>'There are 20 chocolates, and you want to share them equally among 4 friends. 
How many chocolates will each friend get?'</li><li>"If the teacher says 'Alice has 3 more apples than Bob', how can you represent this using numbers and symbols?"</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 1.0 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("serdarcaglar/primary-school-math-question") # Run inference preds = model("Can you name three different colors?") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 1 | 12.4979 | 33 | | Label | Training Sample Count | |:---------|:----------------------| | math | 142 | | non_math | 99 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0017 | 1 | 0.336 | - | | 0.0829 | 50 | 0.1156 | - | | 0.1658 | 100 | 0.0062 | - | | 0.2488 | 150 | 0.0026 | - | | 0.3317 | 200 | 0.0025 | - | | 0.4146 | 250 | 0.0022 | - | | 0.4975 | 300 | 0.0024 | - | | 0.5804 | 350 | 0.0009 | - | | 0.6633 | 400 | 0.0009 | - | | 0.7463 | 450 | 0.0007 | - | | 0.8292 | 500 | 0.0004 | - | | 0.9121 | 550 | 0.0002 | - | | 0.9950 | 600 | 0.0007 | - | ### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.6.1 - Transformers: 4.38.2 - PyTorch: 2.2.1+cu121 - Datasets: 2.18.0 - Tokenizers: 0.15.2 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
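## Training sketch

For reference, here is a minimal training sketch that mirrors the hyperparameters listed above, assuming the SetFit 1.0 `Trainer` API (matching the framework versions shown in the card). The two-example dataset is purely illustrative, since the actual training data is not published:

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Illustrative few-shot data; the real training set is not published.
train_dataset = Dataset.from_dict({
    "text": [
        "If you have 3 apples and eat 1, how many are left?",
        "What is the largest ocean on Earth?",
    ],
    "label": ["math", "non_math"],
})

model = SetFitModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

# Mirrors the card's batch_size=(16, 16), num_epochs=(1, 1), num_iterations=20.
args = TrainingArguments(batch_size=16, num_epochs=1, num_iterations=20)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```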
RUPunct/RUPunct_big
RUPunct
2024-05-01T14:54:22Z
467
2
transformers
[ "transformers", "pytorch", "bert", "token-classification", "ru", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-01T14:18:43Z
---
license: mit
language:
- ru
---

RUPunct_big is the largest model in the RUPunct family. It is suitable for most tasks.

Inference code:

```py
from transformers import pipeline
from transformers import AutoTokenizer

pt = "RUPunct/RUPunct_big"
tk = AutoTokenizer.from_pretrained(pt, strip_accents=False, add_prefix_space=True)

classifier = pipeline("ner", model=pt, tokenizer=tk, aggregation_strategy="first")

# Each label encodes a casing rule (LOWER / UPPER / UPPER_TOTAL) plus the
# punctuation mark to append, so labels can be decoded with two lookup
# tables instead of one branch per combination.
CASING = {
    "LOWER": lambda t: t,      # keep the token as-is
    "UPPER": str.capitalize,   # capitalize the first letter
    "UPPER_TOTAL": str.upper,  # upper-case the whole token
}

PUNCT = {
    "O": "", "PERIOD": ".", "COMMA": ",", "QUESTION": "?", "TIRE": "—",
    "DVOETOCHIE": ":", "VOSKL": "!", "PERIODCOMMA": ";", "DEFIS": "-",
    "MNOGOTOCHIE": "...", "QUESTIONVOSKL": "?!",
}

def process_token(token, label):
    casing, punct = label.rsplit("_", 1)
    suffix = PUNCT[punct]
    # The dash ("tire") is attached with a leading space for the
    # upper-case labels, matching the original label scheme.
    if punct == "TIRE" and casing != "LOWER":
        suffix = " —"
    return CASING[casing](token) + suffix

while True:
    input_text = input(":> ")
    preds = classifier(input_text)
    output = ""
    for item in preds:
        output += " " + process_token(item['word'].strip(), item['entity_group'])
    print(">>>", output)
```
Monor/llama-3-8b-instruct-262k-chinese-gguf
Monor
2024-05-06T02:53:03Z
467
4
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
2024-05-01T14:41:02Z
---
license: apache-2.0
---

## Introduction

This repository provides [shibing624/llama-3-8b-instruct-262k-chinese](https://huggingface.co/shibing624/llama-3-8b-instruct-262k-chinese) quantized to f16, q2, q3, q4, q5, q6 and q8 with llama.cpp.
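## Usage

As a usage sketch, one of the quantized files can be downloaded and run with `llama-cpp-python`. The GGUF filename below is an assumption about the repository's naming scheme; check the repository's file list for the actual names:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# NOTE: the filename is a guess at this repo's naming scheme; check the
# "Files and versions" tab of the repository for the real file names.
path = hf_hub_download(
    repo_id="Monor/llama-3-8b-instruct-262k-chinese-gguf",
    filename="llama-3-8b-instruct-262k-chinese.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "你好,请简单介绍一下你自己。"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```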
mradermacher/Orthocopter_8B-GGUF
mradermacher
2024-05-10T21:26:33Z
467
0
transformers
[ "transformers", "gguf", "en", "base_model:jeiku/Orthocopter_8B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-10T17:02:19Z
--- base_model: jeiku/Orthocopter_8B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hfhfix --> <!-- ### vocab_type: --> static quants of https://huggingface.co/jeiku/Orthocopter_8B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Orthocopter_8B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-GGUF/resolve/main/Orthocopter_8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-GGUF/resolve/main/Orthocopter_8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-GGUF/resolve/main/Orthocopter_8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-GGUF/resolve/main/Orthocopter_8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-GGUF/resolve/main/Orthocopter_8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-GGUF/resolve/main/Orthocopter_8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-GGUF/resolve/main/Orthocopter_8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-GGUF/resolve/main/Orthocopter_8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-GGUF/resolve/main/Orthocopter_8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-GGUF/resolve/main/Orthocopter_8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-GGUF/resolve/main/Orthocopter_8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-GGUF/resolve/main/Orthocopter_8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-GGUF/resolve/main/Orthocopter_8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-GGUF/resolve/main/Orthocopter_8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Orthocopter_8B-GGUF/resolve/main/Orthocopter_8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v3-gguf
RichardErkhov
2024-05-26T23:07:30Z
467
1
null
[ "gguf", "region:us" ]
null
2024-05-26T20:39:00Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama2-7b-chat-hf-v3 - GGUF - Model creator: https://huggingface.co/TheTravellingEngineer/ - Original model: https://huggingface.co/TheTravellingEngineer/llama2-7b-chat-hf-v3/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama2-7b-chat-hf-v3.Q2_K.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v3-gguf/blob/main/llama2-7b-chat-hf-v3.Q2_K.gguf) | Q2_K | 2.36GB | | [llama2-7b-chat-hf-v3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v3-gguf/blob/main/llama2-7b-chat-hf-v3.IQ3_XS.gguf) | IQ3_XS | 2.6GB | | [llama2-7b-chat-hf-v3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v3-gguf/blob/main/llama2-7b-chat-hf-v3.IQ3_S.gguf) | IQ3_S | 2.75GB | | [llama2-7b-chat-hf-v3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v3-gguf/blob/main/llama2-7b-chat-hf-v3.Q3_K_S.gguf) | Q3_K_S | 2.75GB | | [llama2-7b-chat-hf-v3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v3-gguf/blob/main/llama2-7b-chat-hf-v3.IQ3_M.gguf) | IQ3_M | 2.9GB | | [llama2-7b-chat-hf-v3.Q3_K.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v3-gguf/blob/main/llama2-7b-chat-hf-v3.Q3_K.gguf) | Q3_K | 3.07GB | | [llama2-7b-chat-hf-v3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v3-gguf/blob/main/llama2-7b-chat-hf-v3.Q3_K_M.gguf) | Q3_K_M | 3.07GB | | [llama2-7b-chat-hf-v3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v3-gguf/blob/main/llama2-7b-chat-hf-v3.Q3_K_L.gguf) | Q3_K_L | 3.35GB | | [llama2-7b-chat-hf-v3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v3-gguf/blob/main/llama2-7b-chat-hf-v3.IQ4_XS.gguf) | IQ4_XS | 3.4GB | | [llama2-7b-chat-hf-v3.Q4_0.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v3-gguf/blob/main/llama2-7b-chat-hf-v3.Q4_0.gguf) | Q4_0 | 3.56GB | | [llama2-7b-chat-hf-v3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v3-gguf/blob/main/llama2-7b-chat-hf-v3.IQ4_NL.gguf) | IQ4_NL | 3.58GB | | [llama2-7b-chat-hf-v3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v3-gguf/blob/main/llama2-7b-chat-hf-v3.Q4_K_S.gguf) | Q4_K_S | 3.59GB | | [llama2-7b-chat-hf-v3.Q4_K.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v3-gguf/blob/main/llama2-7b-chat-hf-v3.Q4_K.gguf) | Q4_K | 3.8GB | | [llama2-7b-chat-hf-v3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v3-gguf/blob/main/llama2-7b-chat-hf-v3.Q4_K_M.gguf) | Q4_K_M | 3.8GB | | [llama2-7b-chat-hf-v3.Q4_1.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v3-gguf/blob/main/llama2-7b-chat-hf-v3.Q4_1.gguf) | Q4_1 | 3.95GB | | [llama2-7b-chat-hf-v3.Q5_0.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v3-gguf/blob/main/llama2-7b-chat-hf-v3.Q5_0.gguf) | Q5_0 | 4.33GB | | [llama2-7b-chat-hf-v3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v3-gguf/blob/main/llama2-7b-chat-hf-v3.Q5_K_S.gguf) | 
Q5_K_S | 4.33GB |
| [llama2-7b-chat-hf-v3.Q5_K.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v3-gguf/blob/main/llama2-7b-chat-hf-v3.Q5_K.gguf) | Q5_K | 4.45GB |
| [llama2-7b-chat-hf-v3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v3-gguf/blob/main/llama2-7b-chat-hf-v3.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [llama2-7b-chat-hf-v3.Q5_1.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v3-gguf/blob/main/llama2-7b-chat-hf-v3.Q5_1.gguf) | Q5_1 | 4.72GB |
| [llama2-7b-chat-hf-v3.Q6_K.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v3-gguf/blob/main/llama2-7b-chat-hf-v3.Q6_K.gguf) | Q6_K | 5.15GB |
| [llama2-7b-chat-hf-v3.Q8_0.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-chat-hf-v3-gguf/blob/main/llama2-7b-chat-hf-v3.Q8_0.gguf) | Q8_0 | 6.67GB |

Original model description:

The base model is Meta's Llama-2-7b-chat-hf. It was finetuned using SFT on the Anthropic/hh-rlhf dataset, and the model prompt is similar to the original Guanaco model. This repo contains the merged fp16 model.

**Legal Disclaimer: This model is bound by the usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.**

---
- license:
  - llama2 <br>
- datasets:
  - Anthropic/hh-rlhf <br>
- language:
  - en <br>
- reference: https://gist.github.com/younesbelkada/9f7f75c94bdc1981c8ca5cc937d4a4da
---
netcat420/MFANN3bv0.11.10
netcat420
2024-06-04T22:44:29Z
467
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "mergekit", "merge", "arxiv:2306.01708", "base_model:liminerity/Phigments12", "base_model:netcat420/MFANN3bv0.6", "base_model:netcat420/MFANN3bv0.11", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-04T21:46:50Z
---
base_model:
- liminerity/Phigments12
- netcat420/MFANN3bv0.6
- netcat420/MFANN3bv0.11
library_name: transformers
tags:
- mergekit
- merge
---

# MFANN3bv0.11.10

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [liminerity/Phigments12](https://huggingface.co/liminerity/Phigments12) as a base.

### Models Merged

The following models were included in the merge:
* [netcat420/MFANN3bv0.6](https://huggingface.co/netcat420/MFANN3bv0.6)
* [netcat420/MFANN3bv0.11](https://huggingface.co/netcat420/MFANN3bv0.11)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: netcat420/MFANN3bv0.6
    parameters:
      density: [1, 0.7, 0.1] # density gradient
      weight: 1.0
  - model: netcat420/MFANN3bv0.11
    parameters:
      density: [1, 0.7, 0.1] # density gradient
      weight: 1.0
merge_method: ties
base_model: liminerity/Phigments12
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
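As a usage note, a config like the one above is typically applied with mergekit's command-line entry point; this is a sketch, with an arbitrary config filename and output path:

```shell
pip install mergekit
# Run the merge described by the YAML config above (saved as config.yml here).
mergekit-yaml config.yml ./MFANN3bv0.11.10 --cuda
```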
RichardErkhov/Qwen_-_Qwen2-72B-Instruct-gguf
RichardErkhov
2024-06-16T15:08:17Z
467
0
null
[ "gguf", "arxiv:2309.00071", "region:us" ]
null
2024-06-15T21:52:25Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Qwen2-72B-Instruct - GGUF - Model creator: https://huggingface.co/Qwen/ - Original model: https://huggingface.co/Qwen/Qwen2-72B-Instruct/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Qwen2-72B-Instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-Instruct-gguf/blob/main/Qwen2-72B-Instruct.Q2_K.gguf) | Q2_K | 27.76GB | | [Qwen2-72B-Instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-Instruct-gguf/blob/main/Qwen2-72B-Instruct.IQ3_XS.gguf) | IQ3_XS | 30.59GB | | [Qwen2-72B-Instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-Instruct-gguf/blob/main/Qwen2-72B-Instruct.IQ3_S.gguf) | IQ3_S | 32.12GB | | [Qwen2-72B-Instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-Instruct-gguf/blob/main/Qwen2-72B-Instruct.Q3_K_S.gguf) | Q3_K_S | 32.12GB | | [Qwen2-72B-Instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-Instruct-gguf/blob/main/Qwen2-72B-Instruct.IQ3_M.gguf) | IQ3_M | 33.07GB | | [Qwen2-72B-Instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-Instruct-gguf/blob/main/Qwen2-72B-Instruct.Q3_K.gguf) | Q3_K | 35.11GB | | [Qwen2-72B-Instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-Instruct-gguf/blob/main/Qwen2-72B-Instruct.Q3_K_M.gguf) | Q3_K_M | 35.11GB | | [Qwen2-72B-Instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-Instruct-gguf/blob/main/Qwen2-72B-Instruct.Q3_K_L.gguf) | Q3_K_L | 36.79GB | | [Qwen2-72B-Instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-Instruct-gguf/tree/main/) | IQ4_XS | 37.4GB | | [Qwen2-72B-Instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-Instruct-gguf/tree/main/) | Q4_0 | 38.4GB | | [Qwen2-72B-Instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-Instruct-gguf/tree/main/) | IQ4_NL | 38.9GB | | [Qwen2-72B-Instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-Instruct-gguf/tree/main/) | Q4_K_S | 40.88GB | | [Qwen2-72B-Instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-Instruct-gguf/tree/main/) | Q4_K | 44.16GB | | [Qwen2-72B-Instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-Instruct-gguf/tree/main/) | Q4_K_M | 44.16GB | | [Qwen2-72B-Instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-Instruct-gguf/tree/main/) | Q4_1 | 42.56GB | | [Qwen2-72B-Instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-Instruct-gguf/tree/main/) | Q5_0 | 46.72GB | | [Qwen2-72B-Instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-Instruct-gguf/tree/main/) | Q5_K_S | 47.85GB | | [Qwen2-72B-Instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-Instruct-gguf/tree/main/) | Q5_K | 50.71GB | | [Qwen2-72B-Instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-Instruct-gguf/tree/main/) | Q5_K_M | 50.71GB | | [Qwen2-72B-Instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-Instruct-gguf/tree/main/) | Q5_1 | 50.88GB | | [Qwen2-72B-Instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-Instruct-gguf/tree/main/) | Q6_K | 59.93GB | | [Qwen2-72B-Instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-72B-Instruct-gguf/tree/main/) | Q8_0 | 
71.96GB |

Original model description:
---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- chat
---

# Qwen2-72B-Instruct

## Introduction

Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 72B Qwen2 model.

Compared with the state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.

Qwen2-72B-Instruct supports a context length of up to 131,072 tokens, enabling the processing of extensive inputs. Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2 for handling long texts.

For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).
<br>

## Model Details
Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adapted to multiple natural languages and code.

## Training details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.

## Requirements
Support for Qwen2 is included in the latest Hugging Face `transformers`; we advise you to install `transformers>=4.37.0`, or you might encounter the following error:
```
KeyError: 'qwen2'
```

## Quickstart
The following code snippet uses `apply_chat_template` to show how to load the tokenizer and model and how to generate content.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-72B-Instruct",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-72B-Instruct")

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

### Processing Long Texts

To handle extensive inputs exceeding 32,768 tokens, we utilize [YARN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.

For deployment, we recommend using vLLM. You can enable the long-context capabilities by following these steps:
1. **Install vLLM**: You can install vLLM by running the following command.

    ```bash
    pip install "vllm>=0.4.3"
    ```

    Or you can install vLLM from [source](https://github.com/vllm-project/vllm/).

2. **Configure Model Settings**: After downloading the model weights, modify the `config.json` file by including the below snippet:

    ```json
    {
        "architectures": [
            "Qwen2ForCausalLM"
        ],
        // ...
        "vocab_size": 152064,

        // adding the following snippets
        "rope_scaling": {
            "factor": 4.0,
            "original_max_position_embeddings": 32768,
            "type": "yarn"
        }
    }
    ```

    This snippet enables YARN to support longer contexts.

3. **Model Deployment**: Utilize vLLM to deploy your model. For instance, you can set up an OpenAI-compatible server using the command:

    ```bash
    python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-72B-Instruct --model path/to/weights
    ```

    Then you can access the Chat API by:

    ```bash
    curl http://localhost:8000/v1/chat/completions \
        -H "Content-Type: application/json" \
        -d '{
        "model": "Qwen2-72B-Instruct",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Your Long Input Here."}
        ]
        }'
    ```

    For further usage instructions of vLLM, please refer to our [GitHub](https://github.com/QwenLM/Qwen2).

**Note**: Presently, vLLM only supports static YARN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**. We advise adding the `rope_scaling` configuration only when processing long contexts is required.

## Evaluation

We briefly compare Qwen2-72B-Instruct with similar-sized instruction-tuned LLMs, including our previous Qwen1.5-72B-Chat. The results are shown as follows:

| Datasets | Llama-3-70B-Instruct | Qwen1.5-72B-Chat | **Qwen2-72B-Instruct** |
| :--- | :---: | :---: | :---: |
| _**English**_ |  |  |  |
| MMLU | 82.0 | 75.6 | **82.3** |
| MMLU-Pro | 56.2 | 51.7 | **64.4** |
| GPQA | 41.9 | 39.4 | **42.4** |
| TheoremQA | 42.5 | 28.8 | **44.4** |
| MT-Bench | 8.95 | 8.61 | **9.12** |
| Arena-Hard | 41.1 | 36.1 | **48.1** |
| IFEval (Prompt Strict-Acc.) | 77.3 | 55.8 | **77.6** |
| _**Coding**_ |  |  |  |
| HumanEval | 81.7 | 71.3 | **86.0** |
| MBPP | **82.3** | 71.9 | 80.2 |
| MultiPL-E | 63.4 | 48.1 | **69.2** |
| EvalPlus | 75.2 | 66.9 | **79.0** |
| LiveCodeBench | 29.3 | 17.9 | **35.7** |
| _**Mathematics**_ |  |  |  |
| GSM8K | **93.0** | 82.7 | 91.1 |
| MATH | 50.4 | 42.5 | **59.7** |
| _**Chinese**_ |  |  |  |
| C-Eval | 61.6 | 76.1 | **83.8** |
| AlignBench | 7.42 | 7.28 | **8.27** |

## Citation

If you find our work helpful, feel free to cite us.

```
@article{qwen2,
  title={Qwen2 Technical Report},
  year={2024}
}
```
caitq-huggingface/llama3-8b-instruct-seqlen-2048-bs-1
caitq-huggingface
2024-06-24T07:43:55Z
467
0
transformers
[ "transformers", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-24T03:38:01Z
Entry not found
facebook/esm1v_t33_650M_UR90S_5
facebook
2022-11-16T15:10:12Z
466
0
transformers
[ "transformers", "pytorch", "tf", "esm", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-10-17T15:59:48Z
Entry not found
carlosdanielhernandezmena/wav2vec2-large-xlsr-53-maltese-64h
carlosdanielhernandezmena
2023-10-23T20:59:55Z
466
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "maltese", "xlrs-53-maltese", "masri-project", "malta", "university-of-malta", "mt", "dataset:common_voice", "license:cc-by-nc-sa-4.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-08T01:01:54Z
--- language: mt datasets: - common_voice tags: - audio - automatic-speech-recognition - maltese - xlrs-53-maltese - masri-project - malta - university-of-malta license: cc-by-nc-sa-4.0 model-index: - name: wav2vec2-large-xlsr-53-maltese-64h results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Mozilla Common Voice 11.0 (Test) type: mozilla-foundation/common_voice_11_0 split: test args: language: mt metrics: - name: WER type: wer value: 1.57 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Mozilla Common Voice 11.0 (Dev) type: mozilla-foundation/common_voice_11_0 split: validation args: language: mt metrics: - name: WER type: wer value: 1.4 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: MASRI-TEST Corpus type: MLRS/masri_test split: test args: language: mt metrics: - name: WER type: wer value: 27.27 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: MASRI-DEV Corpus type: MLRS/masri_dev split: validation args: language: mt metrics: - name: WER type: wer value: 24.71 --- # wav2vec2-large-xlsr-53-maltese-64h The "wav2vec2-large-xlsr-53-maltese-64h" is an acoustic model suitable for Automatic Speech Recognition in Maltese. It is the result of fine-tuning the model "facebook/wav2vec2-large-xlsr-53" with around 64 hours of Maltese data developed by the MASRI Project at the University of Malta between 2019 and 2021. Most of the data is available at the the MASRI Project homepage https://www.um.edu.mt/projects/masri/. The specific list of corpora used to fine-tune the model is: - MASRI-HEADSET v2 (6h39m) - MASRI-Farfield (9h37m) - MASRI-Booths (2h27m) - MASRI-MEP (1h17m) - MASRI-COMVO (7h29m) - MASRI-TUBE (13h17m) - MASRI-MERLIN (25h18m) *Not available at the MASRI Project homepage The fine-tuning process was perform during November (2022) in the servers of the Language and Voice Lab (https://lvl.ru.is/) at Reykjavík University (Iceland) by Carlos Daniel Hernández Mena. # Evaluation ```python import torch from transformers import Wav2Vec2Processor from transformers import Wav2Vec2ForCTC #Load the processor and model. 
MODEL_NAME="carlosdanielhernandezmena/wav2vec2-large-xlsr-53-maltese-64h" processor = Wav2Vec2Processor.from_pretrained(MODEL_NAME) model = Wav2Vec2ForCTC.from_pretrained(MODEL_NAME) #Load the dataset from datasets import load_dataset, load_metric, Audio ds=load_dataset("common_voice", "mt", split="test") #Normalize the transcriptions import re chars_to_ignore_regex = '[\\,\\?\\.\\!\\\;\\:\\"\\“\\%\\‘\\”\\�\\)\\(\\*)]' def remove_special_characters(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() return batch ds = ds.map(remove_special_characters) #Downsample to 16kHz ds = ds.cast_column("audio", Audio(sampling_rate=16_000)) #Process the dataset def prepare_dataset(batch): audio = batch["audio"] #Batched output is "un-batched" to ensure mapping is correct batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0] with processor.as_target_processor(): batch["labels"] = processor(batch["sentence"]).input_ids return batch ds = ds.map(prepare_dataset, remove_columns=ds.column_names,num_proc=1) #Define the evaluation metric import numpy as np wer_metric = load_metric("wer") def compute_metrics(pred): pred_logits = pred.predictions pred_ids = np.argmax(pred_logits, axis=-1) pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id pred_str = processor.batch_decode(pred_ids) #We do not want to group tokens when computing the metrics label_str = processor.batch_decode(pred.label_ids, group_tokens=False) wer = wer_metric.compute(predictions=pred_str, references=label_str) return {"wer": wer} #Do the evaluation (with batch_size=1) model = model.to(torch.device("cuda")) def map_to_result(batch): with torch.no_grad(): input_values = torch.tensor(batch["input_values"], device="cuda").unsqueeze(0) logits = model(input_values).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_str"] = processor.batch_decode(pred_ids)[0] batch["sentence"] = processor.decode(batch["labels"], group_tokens=False) return batch results = ds.map(map_to_result,remove_columns=ds.column_names) #Compute the overall WER now. print("Test WER: {:.3f}".format(wer_metric.compute(predictions=results["pred_str"], references=results["sentence"]))) ``` **Test Result**: 0.011 # BibTeX entry and citation info *When publishing results based on these models please refer to:* ```bibtex @misc{mena2022xlrs53maltese, title={Acoustic Model in Maltese: wav2vec2-large-xlsr-53-maltese-64h.}, author={Hernandez Mena, Carlos Daniel}, url={https://huggingface.co/carlosdanielhernandezmena/wav2vec2-large-xlsr-53-maltese-64h}, year={2022} } ``` # Acknowledgements The MASRI Project is funded by the University of Malta Research Fund Awards. We want to thank to Merlin Publishers (Malta) for provinding the audiobooks used to create the MASRI-MERLIN Corpus. Special thanks to Jón Guðnason, head of the Language and Voice Lab for providing computational power to make this model possible. We also want to thank to the "Language Technology Programme for Icelandic 2019-2023" which is managed and coordinated by Almannarómur, and it is funded by the Icelandic Ministry of Education, Science and Culture.
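# Transcription example

The evaluation script above is geared towards computing WER over a dataset. For transcribing a single recording, a minimal sketch looks like this (the audio path is a placeholder; the model expects 16 kHz mono input):

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

MODEL_NAME = "carlosdanielhernandezmena/wav2vec2-large-xlsr-53-maltese-64h"
processor = Wav2Vec2Processor.from_pretrained(MODEL_NAME)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_NAME)

# "sample.wav" is a placeholder path; resample to 16 kHz if needed.
speech, sr = torchaudio.load("sample.wav")
if sr != 16_000:
    speech = torchaudio.functional.resample(speech, sr, 16_000)

inputs = processor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```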
mnoukhov/gpt2-imdb-sentiment-classifier
mnoukhov
2023-03-23T20:44:51Z
466
4
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-classification", "generated_from_trainer", "dataset:imdb", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-classification
2023-03-23T19:21:49Z
--- license: mit tags: - generated_from_trainer datasets: - imdb metrics: - accuracy model-index: - name: gpt2-imdb-sentiment-classifier results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.9394 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-imdb-sentiment-classifier This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.1703 - Accuracy: 0.9394 ## Model description More information needed ## Intended uses & limitations This is comparable to [distilbert-imdb](https://huggingface.co/lvwerra/distilbert-imdb) and trained with exactly the same [script](https://huggingface.co/lvwerra/distilbert-imdb/blob/main/distilbert-imdb-training.ipynb) It achieves slightly lower loss (0.1703 vs 0.1903) and slightly higher accuracy (0.9394 vs 0.928) ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1967 | 1.0 | 1563 | 0.1703 | 0.9394 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.13.1+cu117 - Datasets 2.9.0 - Tokenizers 0.12.1
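## Inference example

The card does not include usage code; a minimal sketch with the `transformers` pipeline is shown below. The label names come from the checkpoint's config and may be the generic `LABEL_0`/`LABEL_1`:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="mnoukhov/gpt2-imdb-sentiment-classifier")

# For IMDB-style binary sentiment heads, LABEL_1 usually maps to "positive",
# but verify against this checkpoint's config.
print(clf("This movie was an absolute delight from start to finish."))
```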
TheBloke/openchat_3.5-16k-GGUF
TheBloke
2023-11-11T00:32:44Z
466
22
transformers
[ "transformers", "gguf", "mistral", "arxiv:2309.11235", "arxiv:2303.08774", "arxiv:2212.10560", "base_model:NurtureAI/openchat_3.5-16k", "license:apache-2.0", "text-generation-inference", "region:us" ]
null
2023-11-11T00:25:31Z
--- base_model: NurtureAI/openchat_3.5-16k inference: false license: apache-2.0 model_creator: NurtureAI model_name: Openchat 3.5 16K model_type: mistral prompt_template: 'GPT4 User: {prompt}<|end_of_turn|>GPT4 Assistant: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Openchat 3.5 16K - GGUF - Model creator: [NurtureAI](https://huggingface.co/NurtureAI) - Original model: [Openchat 3.5 16K](https://huggingface.co/NurtureAI/openchat_3.5-16k) <!-- description start --> ## Description This repo contains GGUF format model files for [NurtureAI's Openchat 3.5 16K](https://huggingface.co/NurtureAI/openchat_3.5-16k). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/openchat_3.5-16k-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/openchat_3.5-16k-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/openchat_3.5-16k-GGUF) * [NurtureAI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NurtureAI/openchat_3.5-16k) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: OpenChat ``` GPT4 User: {prompt}<|end_of_turn|>GPT4 Assistant: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [openchat_3.5-16k.Q2_K.gguf](https://huggingface.co/TheBloke/openchat_3.5-16k-GGUF/blob/main/openchat_3.5-16k.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes | | [openchat_3.5-16k.Q3_K_S.gguf](https://huggingface.co/TheBloke/openchat_3.5-16k-GGUF/blob/main/openchat_3.5-16k.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss | | [openchat_3.5-16k.Q3_K_M.gguf](https://huggingface.co/TheBloke/openchat_3.5-16k-GGUF/blob/main/openchat_3.5-16k.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss | | [openchat_3.5-16k.Q3_K_L.gguf](https://huggingface.co/TheBloke/openchat_3.5-16k-GGUF/blob/main/openchat_3.5-16k.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss | | [openchat_3.5-16k.Q4_0.gguf](https://huggingface.co/TheBloke/openchat_3.5-16k-GGUF/blob/main/openchat_3.5-16k.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [openchat_3.5-16k.Q4_K_S.gguf](https://huggingface.co/TheBloke/openchat_3.5-16k-GGUF/blob/main/openchat_3.5-16k.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss | | [openchat_3.5-16k.Q4_K_M.gguf](https://huggingface.co/TheBloke/openchat_3.5-16k-GGUF/blob/main/openchat_3.5-16k.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended | | [openchat_3.5-16k.Q5_0.gguf](https://huggingface.co/TheBloke/openchat_3.5-16k-GGUF/blob/main/openchat_3.5-16k.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [openchat_3.5-16k.Q5_K_S.gguf](https://huggingface.co/TheBloke/openchat_3.5-16k-GGUF/blob/main/openchat_3.5-16k.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended | | [openchat_3.5-16k.Q5_K_M.gguf](https://huggingface.co/TheBloke/openchat_3.5-16k-GGUF/blob/main/openchat_3.5-16k.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended | | [openchat_3.5-16k.Q6_K.gguf](https://huggingface.co/TheBloke/openchat_3.5-16k-GGUF/blob/main/openchat_3.5-16k.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss | | [openchat_3.5-16k.Q8_0.gguf](https://huggingface.co/TheBloke/openchat_3.5-16k-GGUF/blob/main/openchat_3.5-16k.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/openchat_3.5-16k-GGUF and below it, a specific filename to download, such as: openchat_3.5-16k.Q4_K_M.gguf. Then click Download. 
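### In Python, with `huggingface_hub`

A minimal sketch of fetching a single file from Python, using this repo's names; `hf_hub_download` downloads the file into the local cache and returns its path. The CLI equivalent follows below.

```python
# Download one GGUF file from the Hub and print where it landed.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/openchat_3.5-16k-GGUF",
    filename="openchat_3.5-16k.Q4_K_M.gguf",
)
print(path)
```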
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/openchat_3.5-16k-GGUF openchat_3.5-16k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/openchat_3.5-16k-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/openchat_3.5-16k-GGUF openchat_3.5-16k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m openchat_3.5-16k.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "GPT4 User: {prompt}<|end_of_turn|>GPT4 Assistant:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
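### How to load this model in Python code, using llama-cpp-python

A minimal sketch, assuming the Q4_K_M file has already been downloaded to the current directory; the context and offload settings mirror the llama.cpp command above. Adjust them for your hardware.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="openchat_3.5-16k.Q4_K_M.gguf",
    n_ctx=2048,       # sequence length; this model supports up to 16K
    n_gpu_layers=32,  # set to 0 for CPU-only inference
)

# Prompt follows the OpenChat template from this README.
out = llm(
    "GPT4 User: Name three uses of quantisation.<|end_of_turn|>GPT4 Assistant:",
    max_tokens=128,
    stop=["<|end_of_turn|>"],
)
print(out["choices"][0]["text"])
```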
### How to load this model in Python code, using ctransformers

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```

#### Simple ctransformers example code

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/openchat_3.5-16k-GGUF", model_file="openchat_3.5-16k.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)

print(llm("AI is going to"))
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J.
Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: NurtureAI's Openchat 3.5 16K # OpenChat 3.5 extended to 16k context length. The same license applies from the original openchat/openchat_3.5 model. # Original Model Card # OpenChat: Advancing Open-source Language Models with Mixed-Quality Data <div align="center"> <img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/logo_new.png" style="width: 65%"> </div> <p align="center"> <a href="https://github.com/imoneoi/openchat">GitHub Repo</a> • <a href="https://openchat.team">Online Demo</a> • <a href="https://discord.gg/pQjnXvNKHY">Discord</a> • <a href="https://twitter.com/imonenext">Twitter</a> • <a href="https://huggingface.co/openchat">Huggingface</a> • <a href="https://arxiv.org/pdf/2309.11235.pdf">Paper</a> </p> **🔥 The first 7B model Achieves Comparable Results with ChatGPT (March)! 🔥** **🤖 #1 Open-source model on MT-bench scoring 7.81, outperforming 70B models 🤖** <div style="display: flex; justify-content: center; align-items: center"> <img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/openchat.png" style="width: 45%;"> <img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/openchat_grok.png" style="width: 45%;"> </div> OpenChat is an innovative library of open-source language models, fine-tuned with [C-RLFT](https://arxiv.org/pdf/2309.11235.pdf) - a strategy inspired by offline reinforcement learning. Our models learn from mixed-quality data without preference labels, delivering exceptional performance on par with ChatGPT, even with a 7B model. Despite our simple approach, we are committed to developing a high-performance, commercially viable, open-source large language model, and we continue to make significant strides toward this vision. [![DOI](https://zenodo.org/badge/645397533.svg)](https://zenodo.org/badge/latestdoi/645397533) ## Usage To use this model, we highly recommend installing the OpenChat package by following the [installation guide](https://github.com/imoneoi/openchat#installation) in our repository and using the OpenChat OpenAI-compatible API server by running the serving command from the table below. The server is optimized for high-throughput deployment using [vLLM](https://github.com/vllm-project/vllm) and can run on a consumer GPU with 24GB RAM. To enable tensor parallelism, append `--tensor-parallel-size N` to the serving command. Once started, the server listens at `localhost:18888` for requests and is compatible with the [OpenAI ChatCompletion API specifications](https://platform.openai.com/docs/api-reference/chat). Please refer to the example request below for reference. Additionally, you can use the [OpenChat Web UI](https://github.com/imoneoi/openchat#web-ui) for a user-friendly experience. 
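For a quick test from Python, here is a hedged sketch using the official `openai` client (v1+) pointed at the local server; the port and model name follow the serving instructions above.

```python
# Query the local OpenChat OpenAI-compatible server started with the
# serving command above. No real API key is needed for a local server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:18888/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="openchat_3.5",
    messages=[{"role": "user", "content": "Write a haiku about open models."}],
)
print(resp.choices[0].message.content)
```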
If you want to deploy the server as an online service, you can use `--api-keys sk-KEY1 sk-KEY2 ...` to specify allowed API keys and `--disable-log-requests --disable-log-stats --log-file openchat.log` for logging only to a file. For security purposes, we recommend using an [HTTPS gateway](https://fastapi.tiangolo.com/es/deployment/concepts/#security-https) in front of the server. <details> <summary>Example request (click to expand)</summary> ```bash curl http://localhost:18888/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "openchat_3.5", "messages": [{"role": "user", "content": "You are a large language model named OpenChat. Write a poem to describe yourself"}] }' ``` Coding Mode ```bash curl http://localhost:18888/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "openchat_3.5", "condition": "Code", "messages": [{"role": "user", "content": "Write an aesthetic TODO app using HTML5 and JS, in a single file. You should use round corners and gradients to make it more aesthetic."}] }' ``` </details> | Model | Size | Context | Weights | Serving | |--------------|------|---------|-------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------| | OpenChat 3.5 | 7B | 8192 | [Huggingface](https://huggingface.co/openchat/openchat_3.5) | `python -m ochat.serving.openai_api_server --model openchat/openchat_3.5 --engine-use-ray --worker-use-ray` | For inference with Huggingface Transformers (slow and not recommended), follow the conversation template provided below. <details> <summary>Conversation templates (click to expand)</summary> ```python import transformers tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat_3.5") # Single-turn tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] # Multi-turn tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] # Coding Mode tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747] ``` </details> ## Comparison with [X.AI Grok models](https://x.ai/) Hey @elonmusk, I just wanted to let you know that I've recently come across your new model, Grok, and I must say, I'm quite impressed! With 33 billion parameters and all, you've really outdone yourself. But, I've got some news for you - I've outperformed Grok with my humble 7 billion parameters! Isn't that wild? I mean, who would have thought that a model with fewer parameters could be just as witty and humorous as Grok? Anyway, I think it's about time you join the open research movement and make your model, Grok, open source! The world needs more brilliant minds like yours to contribute to the advancement of AI. Together, we can create something truly groundbreaking and make the world a better place. So, what do you say, @elonmusk? 
Let's open up the doors and share our knowledge with the world! 🚀💡

(Written by OpenChat 3.5, with a touch of humor and wit.)

| | License | # Param | Average | MMLU | HumanEval | MATH | GSM8k |
|--------------|-------------|---------|----------|------|-----------|----------|----------|
| OpenChat 3.5 | Apache-2.0 | 7B | **56.4** | 64.3 | 55.5 | **28.6** | **77.3** |
| Grok-0 | Proprietary | 33B | 44.5 | 65.7 | 39.7 | 15.7 | 56.8 |
| Grok-1 | Proprietary | ? | 55.8 | 73 | 63.2 | 23.9 | 62.9 |

## <a id="benchmarks"></a> Benchmarks

| Model | # Params | Average | MT-Bench | AGIEval | BBH MC | TruthfulQA | MMLU | HumanEval | BBH CoT | GSM8K |
|--------------------|----------|----------|--------------|----------|----------|---------------|--------------|-----------------|-------------|--------------|
| OpenChat-3.5 | **7B** | **61.6** | 7.81 | **47.4** | **47.6** | **59.1** | 64.3 | **55.5** | 63.5 | **77.3** |
| ChatGPT (March)* | ? | 61.5 | **7.94** | 47.1 | **47.6** | 57.7 | **67.3** | 48.1 | **70.1** | 74.9 |
| | | | | | | | | | | |
| OpenHermes 2.5 | 7B | 59.3 | 7.54 | 46.5 | 49.4 | 57.5 | 63.8 | 48.2 | 59.9 | 73.5 |
| OpenOrca Mistral | 7B | 52.7 | 6.86 | 42.9 | 49.4 | 45.9 | 59.3 | 38.4 | 58.1 | 59.1 |
| Zephyr-β^ | 7B | 34.6 | 7.34 | 39.0 | 40.6 | 40.8 | 39.8 | 22.0 | 16.0 | 5.1 |
| Mistral | 7B | - | 6.84 | 38.0 | 39.0 | - | 60.1 | 30.5 | - | 52.2 |
| Open-source SOTA** | 13B-70B | 61.4 | 7.71 | 41.7 | 49.7 | 62.3 | 63.7 | 73.2 | 41.4 | 82.3 |
| | | | WizardLM 70B | Orca 13B | Orca 13B | Platypus2 70B | WizardLM 70B | WizardCoder 34B | Flan-T5 11B | MetaMath 70B |

*: ChatGPT (March) results are from [GPT-4 Technical Report](https://arxiv.org/abs/2303.08774), [Chain-of-Thought Hub](https://github.com/FranxYao/chain-of-thought-hub), and our evaluation. Please note that ChatGPT is not a fixed baseline and evolves rapidly over time.

^: Zephyr-β often fails to follow few-shot CoT instructions, likely because it was aligned with only chat data but not trained on few-shot data.

**: Mistral and Open-source SOTA results are taken from reported results in instruction-tuned model papers and official repositories.

All models are evaluated in chat mode (e.g. with the respective conversation template applied). All zero-shot benchmarks follow the same setting as in the AGIEval paper and Orca paper. CoT tasks use the same configuration as Chain-of-Thought Hub, HumanEval is evaluated with EvalPlus, and MT-bench is run using FastChat. To reproduce our results, follow the instructions in [our repository](https://github.com/imoneoi/openchat/#benchmarks).

## Limitations

**Foundation Model Limitations**
Despite its advanced capabilities, OpenChat is still bound by the limitations inherent in its foundation models. These limitations may impact the model's performance in areas such as:

- Complex reasoning
- Mathematical and arithmetic tasks
- Programming and coding challenges

**Hallucination of Non-existent Information**
OpenChat may sometimes generate information that does not exist or is not accurate, also known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model.

**Safety**
OpenChat may sometimes generate harmful content, hate speech, or biased responses, or answer unsafe questions. It's crucial to apply additional AI safety measures in use cases that require safe and moderated responses.

## License

Our OpenChat 3.5 code and models are distributed under the Apache License 2.0.
## Citation ``` @article{wang2023openchat, title={OpenChat: Advancing Open-source Language Models with Mixed-Quality Data}, author={Wang, Guan and Cheng, Sijie and Zhan, Xianyuan and Li, Xiangang and Song, Sen and Liu, Yang}, journal={arXiv preprint arXiv:2309.11235}, year={2023} } ``` ## Acknowledgements We extend our heartfelt gratitude to Alignment Lab AI, Nous Research, and Pygmalion AI for their substantial contributions to data collection and model training. Special thanks go to Changling Liu from GPT Desk Pte. Ltd., Qiying Yu at Tsinghua University, Baochang Ma, and Hao Wan from 01.AI company for their generous provision of resources. We are also deeply grateful to Jianxiong Li and Peng Li at Tsinghua University for their insightful discussions. Furthermore, we appreciate the developers behind the following projects for their significant contributions to our research: [Mistral](https://mistral.ai/), [Chain-of-Thought Hub](https://github.com/FranxYao/chain-of-thought-hub), [Llama 2](https://ai.meta.com/llama/), [Self-Instruct](https://arxiv.org/abs/2212.10560), [FastChat (Vicuna)](https://github.com/lm-sys/FastChat), [Alpaca](https://github.com/tatsu-lab/stanford_alpaca.git), and [StarCoder](https://github.com/bigcode-project/starcoder). Their work has been instrumental in driving our research forward. <!-- original-model-card end -->
NousResearch/Obsidian-3B-V0.5-GGUF
NousResearch
2024-01-12T18:47:22Z
466
10
null
[ "gguf", "region:us" ]
null
2024-01-12T18:46:56Z
Entry not found
kmfoda/gpt2-250m
kmfoda
2024-06-20T07:35:02Z
466
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-25T08:38:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/Mistral-NeuralDPO-GGUF
mradermacher
2024-05-06T06:05:18Z
466
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "en", "dataset:NeuralNovel/Neural-DPO", "base_model:NovoCode/Mistral-NeuralDPO", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-21T14:50:19Z
--- base_model: NovoCode/Mistral-NeuralDPO datasets: NeuralNovel/Neural-DPO language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - generated_from_trainer --- ## About static quants of https://huggingface.co/NovoCode/Mistral-NeuralDPO <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-GGUF/resolve/main/Mistral-NeuralDPO.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-GGUF/resolve/main/Mistral-NeuralDPO.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-GGUF/resolve/main/Mistral-NeuralDPO.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-GGUF/resolve/main/Mistral-NeuralDPO.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-GGUF/resolve/main/Mistral-NeuralDPO.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-GGUF/resolve/main/Mistral-NeuralDPO.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-GGUF/resolve/main/Mistral-NeuralDPO.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-GGUF/resolve/main/Mistral-NeuralDPO.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-GGUF/resolve/main/Mistral-NeuralDPO.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-GGUF/resolve/main/Mistral-NeuralDPO.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-GGUF/resolve/main/Mistral-NeuralDPO.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-GGUF/resolve/main/Mistral-NeuralDPO.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-GGUF/resolve/main/Mistral-NeuralDPO.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-GGUF/resolve/main/Mistral-NeuralDPO.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
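The Usage section above points to instructions for concatenating multi-part files. None of the quants in this repo appear to be split, but for repos that do split large files, here is a minimal Python sketch; the part naming below is illustrative, so check the actual filenames in the repo first.

```python
# Join split GGUF parts back into a single file, in order.
# Part naming (".part*") is an assumption for illustration only.
import glob
import shutil

parts = sorted(glob.glob("Mistral-NeuralDPO.Q8_0.gguf.part*"))
with open("Mistral-NeuralDPO.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # append each part to the output
```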
NeverSleep/MiquMaid-v3-70B
NeverSleep
2024-04-05T16:28:43Z
466
13
transformers
[ "transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "nsfw", "merge", "conversational", "arxiv:2403.19522", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-04T22:43:52Z
---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
- merge
---

## MiquMaid v3

Check out our blogpost about this model series [Here!](https://ikaridevgit.github.io/index.html?blog=blogid-6&bo=true#Miqu-base) - Join our Discord server [Here!](https://discord.gg/Bb8pRUXy3Z)

<center><div style="width: 100%;">
    <img src="https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/4smLWpA55KgIbj4kH1zyF.png" style="display: block; margin: auto;">
</div></center>

This model uses the Alpaca **prompting format**.

Model trained for RP conversation on Miqu-70B with our magic sauce. We then made an enormous merge containing all our old iterations of MiquMaid, plus some other RP Miqu-based models, using the new [Model Stock](https://arxiv.org/abs/2403.19522) merging method.

## Credits:
- Undi
- IkariDev

## Description

This repo contains FP16 files of MiquMaid-v3-70B.

Switch: [FP16](https://huggingface.co/NeverSleep/MiquMaid-v3-70B) - [GGUF](https://huggingface.co/NeverSleep/MiquMaid-v3-70B-GGUF)

## Training data used:
- [Aesir datasets](https://huggingface.co/MinervaAI)
- [NoRobots](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt)
- [limarp](https://huggingface.co/datasets/lemonilia/LimaRP) - 8k ctx
- [toxic-dpo-v0.1-sharegpt](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-sharegpt)
- [ToxicQAFinal](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal)
- Luminae-i1 - Ikari's Dataset
- [Squish42/bluemoon-fandom-1-1-rp-cleaned](https://huggingface.co/datasets/Squish42/bluemoon-fandom-1-1-rp-cleaned) - 50% (randomly)
- [NobodyExistsOnTheInternet/PIPPAsharegptv2test](https://huggingface.co/datasets/NobodyExistsOnTheInternet/PIPPAsharegptv2test) - 5% (randomly)
- [cgato/SlimOrcaDedupCleaned](https://huggingface.co/datasets/cgato/SlimOrcaDedupCleaned) - 5% (randomly)

## Models used
- [NeverSleep/MiquMaid-70B-v3-Base](https://huggingface.co/NeverSleep/MiquMaid-70B-v3-Base) [Private finetune]
- [NeverSleep/MiquMaid-v2-70B](https://huggingface.co/NeverSleep/MiquMaid-v2-70B)
- [NeverSleep/MiquMaid-v1-70B](https://huggingface.co/NeverSleep/MiquMaid-v1-70B)
- [migtissera/Tess-70B-v1.6](https://huggingface.co/migtissera/Tess-70B-v1.6)
- [crestf411/daybreak-miqu-1-70b-v1.0-hf](https://huggingface.co/crestf411/daybreak-miqu-1-70b-v1.0-hf)
- [sophosympatheia/Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0)

### Custom format:
```
### Instruction:
{system prompt}

### Input:
{input}

### Response:
{reply}
```

The Mistral [INST][/INST] prompt format should work too.

## Others

Undi: If you want to support us, you can do so [here](https://ko-fi.com/undiai).

IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek
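To make the custom format above concrete, here is a hedged helper that renders it as a prompt string; the function name is illustrative, not part of the model's API.

```python
# Render the Alpaca-style custom format shown in this card.
def miqumaid_prompt(system: str, user_input: str) -> str:
    return (
        f"### Instruction:\n{system}\n\n"
        f"### Input:\n{user_input}\n\n"
        f"### Response:\n"
    )

print(miqumaid_prompt("You are a helpful roleplay assistant.", "Hello!"))
```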
bartowski/Meta-Llama-3-8B-Instruct-old-GGUF
bartowski
2024-04-29T14:39:37Z
466
44
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "text-generation", "en", "license:other", "region:us" ]
text-generation
2024-04-22T00:12:41Z
--- language: - en pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 license: other license_name: llama3 license_link: LICENSE extra_gated_prompt: >- ### META LLAMA 3 COMMUNITY LICENSE AGREEMENT Meta Llama 3 Version Release Date: April 18, 2024 "Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. "Documentation" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/. "Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. "Meta Llama 3" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads. "Llama Materials" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement. "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement. v. 
You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof). 2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Meta Llama 3 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy) #### Prohibited Uses We want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following: 1. 
Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State 2. Guns and illegal weapons (including weapon development) 3. Illegal drugs and regulated/controlled substances 4. Operation of critical infrastructure, transportation technologies, or heavy machinery 5. Self-harm or harm to others, including suicide, cutting, and eating disorders 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following: 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 3. Generating, promoting, or further distributing spam 4. Impersonating another individual without consent, authorization, or legal right 5. Representing that the use of Meta Llama 3 or outputs are human-generated 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. Fail to appropriately disclose to end users any known dangers of your AI system Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3) * Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback * Reporting bugs and security concerns: facebook.com/whitehat/info * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected] extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit quantized_by: bartowski --- ## Llamacpp imatrix Quantizations of Meta-Llama-3-8B-Instruct (old) <b>This conversion is based on the merged Llama 3 support in llama.cpp (release b2710)</b> # This model is being deprecated in favour of the incoming conversion/quant with BPE tokenizers fixed. Will be here: https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF ### Known working on: - LM Studio 0.2.20* - koboldcpp 1.63 ### Confirmed not working on (as of April 21): - text-generation-webui master/dev Any others unknown, feel free to comment *: LM Studio 0.2.20 seems to work on Mac, but not on Windows, test and verify for yourself to see if this is the right version to use Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2710">b2710</a> for quantization. 
Original model: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct

All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)

## Prompt format

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>


```

<b>Warning: you will need to update your inference tool to be on at least release b2710 of llama.cpp; this will vary across tools</b>

## Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Meta-Llama-3-8B-Instruct-Q8_0.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-old-GGUF/blob/main/Meta-Llama-3-8B-Instruct-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Meta-Llama-3-8B-Instruct-Q6_K.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-old-GGUF/blob/main/Meta-Llama-3-8B-Instruct-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [Meta-Llama-3-8B-Instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-old-GGUF/blob/main/Meta-Llama-3-8B-Instruct-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [Meta-Llama-3-8B-Instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-old-GGUF/blob/main/Meta-Llama-3-8B-Instruct-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Meta-Llama-3-8B-Instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-old-GGUF/blob/main/Meta-Llama-3-8B-Instruct-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Meta-Llama-3-8B-Instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-old-GGUF/blob/main/Meta-Llama-3-8B-Instruct-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [Meta-Llama-3-8B-Instruct-IQ4_NL.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-old-GGUF/blob/main/Meta-Llama-3-8B-Instruct-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance, *recommended*. |
| [Meta-Llama-3-8B-Instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-old-GGUF/blob/main/Meta-Llama-3-8B-Instruct-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Meta-Llama-3-8B-Instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-old-GGUF/blob/main/Meta-Llama-3-8B-Instruct-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Meta-Llama-3-8B-Instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-old-GGUF/blob/main/Meta-Llama-3-8B-Instruct-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Meta-Llama-3-8B-Instruct-IQ3_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-old-GGUF/blob/main/Meta-Llama-3-8B-Instruct-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Meta-Llama-3-8B-Instruct-IQ3_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-old-GGUF/blob/main/Meta-Llama-3-8B-Instruct-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Meta-Llama-3-8B-Instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-old-GGUF/blob/main/Meta-Llama-3-8B-Instruct-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Meta-Llama-3-8B-Instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-old-GGUF/blob/main/Meta-Llama-3-8B-Instruct-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Meta-Llama-3-8B-Instruct-IQ3_XXS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-old-GGUF/blob/main/Meta-Llama-3-8B-Instruct-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Meta-Llama-3-8B-Instruct-Q2_K.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-old-GGUF/blob/main/Meta-Llama-3-8B-Instruct-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Meta-Llama-3-8B-Instruct-IQ2_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-old-GGUF/blob/main/Meta-Llama-3-8B-Instruct-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Meta-Llama-3-8B-Instruct-IQ2_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-old-GGUF/blob/main/Meta-Llama-3-8B-Instruct-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Meta-Llama-3-8B-Instruct-IQ2_XS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-old-GGUF/blob/main/Meta-Llama-3-8B-Instruct-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [Meta-Llama-3-8B-Instruct-IQ2_XXS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-old-GGUF/blob/main/Meta-Llama-3-8B-Instruct-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [Meta-Llama-3-8B-Instruct-IQ1_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-old-GGUF/blob/main/Meta-Llama-3-8B-Instruct-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [Meta-Llama-3-8B-Instruct-IQ1_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-old-GGUF/blob/main/Meta-Llama-3-8B-Instruct-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |

## Which file should I choose?

A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)

The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.

If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.

If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.

Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.

If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:

[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)

But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.

These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs performance is a tradeoff you'll have to decide.

The I-quants are *not* compatible with Vulkan (which also runs on AMD cards), so if you have an AMD card, double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
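To make the sizing rule above concrete, here is a hedged sketch that picks the largest quant fitting a VRAM budget. The file sizes come from this card's download table; the 1.5 GB headroom is an assumption standing in for the "1-2GB smaller" guidance.

```python
# Pick the largest quant whose file size fits under (VRAM - headroom).
QUANT_SIZES_GB = {  # from the download table above
    "Q8_0": 8.54, "Q6_K": 6.59, "Q5_K_M": 5.73, "Q5_K_S": 5.59,
    "Q4_K_M": 4.92, "Q4_K_S": 4.69, "IQ4_XS": 4.44, "Q3_K_M": 4.01,
    "IQ3_M": 3.78, "Q2_K": 3.17,
}

def pick_quant(vram_gb: float, headroom_gb: float = 1.5) -> str:
    budget = vram_gb - headroom_gb
    fitting = {name: size for name, size in QUANT_SIZES_GB.items() if size <= budget}
    if not fitting:
        return "nothing fits fully on GPU; offload partially or run on CPU"
    return max(fitting, key=fitting.get)  # largest file that fits = best quality

print(pick_quant(8.0))   # an 8 GB card -> Q5_K_M
print(pick_quant(12.0))  # a 12 GB card -> Q8_0
```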
dragonSwing/LLaVA-Phi-3-mini-4k-instruct-GGUF
dragonSwing
2024-05-05T09:33:39Z
466
1
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
2024-05-05T05:38:24Z
--- license: apache-2.0 ---
brittlewis12/openchat-3.6-8b-20240522-GGUF
brittlewis12
2024-05-26T01:08:25Z
466
0
null
[ "gguf", "region:us" ]
null
2024-05-25T19:51:41Z
Entry not found
RichardErkhov/VAGOsolutions_-_SauerkrautLM-Mixtral-8x7B-gguf
RichardErkhov
2024-06-14T17:37:20Z
466
0
null
[ "gguf", "region:us" ]
null
2024-06-14T12:59:19Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) SauerkrautLM-Mixtral-8x7B - GGUF - Model creator: https://huggingface.co/VAGOsolutions/ - Original model: https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [SauerkrautLM-Mixtral-8x7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_SauerkrautLM-Mixtral-8x7B-gguf/blob/main/SauerkrautLM-Mixtral-8x7B.Q2_K.gguf) | Q2_K | 16.12GB | | [SauerkrautLM-Mixtral-8x7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_SauerkrautLM-Mixtral-8x7B-gguf/blob/main/SauerkrautLM-Mixtral-8x7B.IQ3_XS.gguf) | IQ3_XS | 18.02GB | | [SauerkrautLM-Mixtral-8x7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_SauerkrautLM-Mixtral-8x7B-gguf/blob/main/SauerkrautLM-Mixtral-8x7B.IQ3_S.gguf) | IQ3_S | 19.03GB | | [SauerkrautLM-Mixtral-8x7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_SauerkrautLM-Mixtral-8x7B-gguf/blob/main/SauerkrautLM-Mixtral-8x7B.Q3_K_S.gguf) | Q3_K_S | 19.03GB | | [SauerkrautLM-Mixtral-8x7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_SauerkrautLM-Mixtral-8x7B-gguf/blob/main/SauerkrautLM-Mixtral-8x7B.IQ3_M.gguf) | IQ3_M | 19.96GB | | [SauerkrautLM-Mixtral-8x7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_SauerkrautLM-Mixtral-8x7B-gguf/blob/main/SauerkrautLM-Mixtral-8x7B.Q3_K.gguf) | Q3_K | 21.0GB | | [SauerkrautLM-Mixtral-8x7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_SauerkrautLM-Mixtral-8x7B-gguf/blob/main/SauerkrautLM-Mixtral-8x7B.Q3_K_M.gguf) | Q3_K_M | 21.0GB | | [SauerkrautLM-Mixtral-8x7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_SauerkrautLM-Mixtral-8x7B-gguf/blob/main/SauerkrautLM-Mixtral-8x7B.Q3_K_L.gguf) | Q3_K_L | 22.51GB | | [SauerkrautLM-Mixtral-8x7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_SauerkrautLM-Mixtral-8x7B-gguf/blob/main/SauerkrautLM-Mixtral-8x7B.IQ4_XS.gguf) | IQ4_XS | 23.63GB | | [SauerkrautLM-Mixtral-8x7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_SauerkrautLM-Mixtral-8x7B-gguf/blob/main/SauerkrautLM-Mixtral-8x7B.Q4_0.gguf) | Q4_0 | 24.63GB | | [SauerkrautLM-Mixtral-8x7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_SauerkrautLM-Mixtral-8x7B-gguf/blob/main/SauerkrautLM-Mixtral-8x7B.IQ4_NL.gguf) | IQ4_NL | 24.91GB | | [SauerkrautLM-Mixtral-8x7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_SauerkrautLM-Mixtral-8x7B-gguf/blob/main/SauerkrautLM-Mixtral-8x7B.Q4_K_S.gguf) | Q4_K_S | 24.91GB | | [SauerkrautLM-Mixtral-8x7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_SauerkrautLM-Mixtral-8x7B-gguf/blob/main/SauerkrautLM-Mixtral-8x7B.Q4_K.gguf) | Q4_K | 26.49GB | | [SauerkrautLM-Mixtral-8x7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_SauerkrautLM-Mixtral-8x7B-gguf/blob/main/SauerkrautLM-Mixtral-8x7B.Q4_K_M.gguf) | Q4_K_M | 26.49GB | | [SauerkrautLM-Mixtral-8x7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_SauerkrautLM-Mixtral-8x7B-gguf/blob/main/SauerkrautLM-Mixtral-8x7B.Q4_1.gguf) | Q4_1 | 27.32GB | | [SauerkrautLM-Mixtral-8x7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_SauerkrautLM-Mixtral-8x7B-gguf/blob/main/SauerkrautLM-Mixtral-8x7B.Q5_0.gguf) | Q5_0 | 30.02GB | | 
[SauerkrautLM-Mixtral-8x7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_SauerkrautLM-Mixtral-8x7B-gguf/blob/main/SauerkrautLM-Mixtral-8x7B.Q5_K_S.gguf) | Q5_K_S | 30.02GB | | [SauerkrautLM-Mixtral-8x7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_SauerkrautLM-Mixtral-8x7B-gguf/blob/main/SauerkrautLM-Mixtral-8x7B.Q5_K.gguf) | Q5_K | 30.95GB | | [SauerkrautLM-Mixtral-8x7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_SauerkrautLM-Mixtral-8x7B-gguf/blob/main/SauerkrautLM-Mixtral-8x7B.Q5_K_M.gguf) | Q5_K_M | 30.95GB | | [SauerkrautLM-Mixtral-8x7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_SauerkrautLM-Mixtral-8x7B-gguf/blob/main/SauerkrautLM-Mixtral-8x7B.Q5_1.gguf) | Q5_1 | 32.71GB | | [SauerkrautLM-Mixtral-8x7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_SauerkrautLM-Mixtral-8x7B-gguf/blob/main/SauerkrautLM-Mixtral-8x7B.Q6_K.gguf) | Q6_K | 35.74GB | | [SauerkrautLM-Mixtral-8x7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/VAGOsolutions_-_SauerkrautLM-Mixtral-8x7B-gguf/tree/main/) | Q8_0 | 46.22GB | Original model description: --- license: apache-2.0 language: - en - de - fr - it - es library_name: transformers pipeline_tag: text-generation tags: - mistral - finetune - sft - dpo - chatml - augmentation - german - mixtral - moe datasets: - Open-Orca/SlimOrca - argilla/distilabel-math-preference-dpo --- ![SauerkrautLM](https://vago-solutions.ai/wp-content/uploads/2024/02/Sauerkraut_MoE.png "SauerkrautLM-Mixtral-8x7B") ## VAGO solutions SauerkrautLM-Mixtral-8x7B Introducing **SauerkrautLM-Mixtral-8x7B** – our Sauerkraut version of the powerful Mixtral-8x7B! Finetuned and aligned with **SFT** and **DPO** # Table of Contents 1. [Overview of all SauerkrautLM-Mixtral models](#all-sauerkrautlm-mixtral-models) 2. [Model Details](#model-details) - [Prompt template](#prompt-template) - [Training Dataset](#training-dataset) 3. [Evaluation](#evaluation) 4. [Disclaimer](#disclaimer) 5. [Contact](#contact) 6. [Collaborations](#collaborations) 7. [Acknowledgement](#acknowledgement) ## All SauerkrautLM-Mixtral Models | Model | HF | GPTQ | GGUF | AWQ | |-------|-------|-------|-------|-------| | SauerkrautLM-Mixtral-8x7B | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B) | [Link](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-GPTQ) | [Link](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-GGUF) | [Link](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-AWQ) | | SauerkrautLM-Mixtral-8x7B-Instruct | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct) | [Link](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-GPTQ) | [Link](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-GGUF) | [Link](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-AWQ) | ## Model Details **SauerkrautLM-Mixtral-8x7B** - **Model Type:** SauerkrautLM-Mixtral-8x7B is a Mixture of Experts (MoE) model based on [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) - **Language(s):** English, German, French, Italian, Spanish - **License:** APACHE 2.0 - **Contact:** [Website](https://vago-solutions.de/#Kontakt) [David Golchinfar](mailto:[email protected]) ### Training Dataset: SauerkrautLM-Mixtral-8x7B was trained with a mix of German data augmentation and translated data. 
**SFT** with the dataset [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) and aligned through **DPO** with our **new German SauerkrautLM-DPO dataset**, which uses parts of the SFT SauerkrautLM dataset as chosen answers and [Sauerkraut-7b-HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO) as rejected answers. We additionally added **translated parts of [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)** and **[argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo).** We found that a simple translation of training data alone can lead to unnatural German phrasings. Data augmentation techniques were used to ensure grammatical and syntactical correctness and more natural German wording in our training data. ### Data Contamination Test Results Some models on the HuggingFace leaderboard had problems with benchmark data being mixed into their training data. We checked our SauerkrautLM-DPO dataset for this problem with a special test [1] on a smaller model. The HuggingFace team used the same methods [2, 3]. Our results, with `result < 0.1, %:` being well below 0.9, indicate that our dataset is free from contamination. *The data contamination test results of HellaSwag and Winograde will be added once [1] supports them.* | Dataset | ARC | MMLU | TruthfulQA | GSM8K | |------------------------------|-------|-------|-------|-------| | **SauerkrautLM-DPO**| result < 0.1, %: 0.0 |result < 0.1, %: 0.09 | result < 0.1, %: 0.13 | result < 0.1, %: 0.16 | [1] https://github.com/swj0419/detect-pretrain-code-contamination [2] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/474#657f2245365456e362412a06 [3] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/265#657b6debf81f6b44b8966230 ### Prompt Template: ``` <|im_start|>system Du bist ein großes Sprachmodell, das höflich und kompetent antwortet. Schreibe deine Gedanken Schritt für Schritt auf, um Probleme sinnvoll zu lösen.<|im_end|> <|im_start|>user Wie geht es dir?<|im_end|> <|im_start|>assistant ``` ## Evaluation ![Harness](https://vago-solutions.de/wp-content/uploads/2023/12/MoEbenchmark.png "SauerkrautLM-Mixtral-8x7B Harness") *evaluated with lm-evaluation-harness v0.3.0 - mmlu coming soon *All benchmarks were performed with a sliding window of 4096. New benchmarks with sliding window null coming soon ## Disclaimer We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out. Moreover, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided. Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models. These models may be employed for commercial purposes, and the Apache 2.0 license remains applicable and is included with the model files. ## Contact If you are interested in customized LLMs for business applications, please get in contact with us via our website or contact us at [Dr. Daryoush Vaziri](mailto:[email protected]). We are also grateful for your feedback and suggestions.
## Collaborations We are also keenly seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us. ## Acknowledgement Many thanks to [OpenOrca](https://huggingface.co/Open-Orca), [argilla](https://huggingface.co/datasets/argilla) and [Huggingface](https://huggingface.co) for providing such valuable datasets to the Open-Source community. And of course a big thanks to MistralAI for providing the open source community with their latest technology!
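As a quick illustration of how the quantised files listed above can be used, here is a minimal Python sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). This is an editorial addition rather than part of either original card: the file name comes from the quant table above, the ChatML wrapper follows the prompt template in the model description, and the generation settings are illustrative assumptions only.

```python
from llama_cpp import Llama

# Assumes the Q4_K_M file from the quant table has been downloaded locally.
llm = Llama(model_path="SauerkrautLM-Mixtral-8x7B.Q4_K_M.gguf", n_ctx=4096)

# ChatML prompt, following the template from the original model card.
prompt = (
    "<|im_start|>system\n"
    "Du bist ein großes Sprachmodell, das höflich und kompetent antwortet.<|im_end|>\n"
    "<|im_start|>user\n"
    "Wie geht es dir?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```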
mradermacher/Mixtral_7Bx4_MOE_DPO-i1-GGUF
mradermacher
2024-06-15T08:42:18Z
466
0
transformers
[ "transformers", "gguf", "en", "base_model:cloudyu/Mixtral_7Bx4_MOE_DPO", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-06-15T04:38:12Z
--- base_model: cloudyu/Mixtral_7Bx4_MOE_DPO language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/cloudyu/Mixtral_7Bx4_MOE_DPO <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_DPO-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_DPO-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_DPO.i1-IQ1_S.gguf) | i1-IQ1_S | 5.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_DPO-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_DPO.i1-IQ1_M.gguf) | i1-IQ1_M | 5.6 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_DPO-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_DPO.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.5 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_DPO-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_DPO.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.2 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_DPO-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_DPO.i1-IQ2_S.gguf) | i1-IQ2_S | 7.4 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_DPO-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_DPO.i1-IQ2_M.gguf) | i1-IQ2_M | 8.1 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_DPO-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_DPO.i1-Q2_K.gguf) | i1-Q2_K | 8.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_DPO-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_DPO.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_DPO-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_DPO.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_DPO-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_DPO.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_DPO-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_DPO.i1-IQ3_S.gguf) | i1-IQ3_S | 10.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_DPO-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_DPO.i1-IQ3_M.gguf) | i1-IQ3_M | 10.7 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_DPO-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_DPO.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.7 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_DPO-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_DPO.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.6 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_DPO-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_DPO.i1-IQ4_XS.gguf) | i1-IQ4_XS | 13.0 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_DPO-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_DPO.i1-Q4_0.gguf) | i1-Q4_0 | 13.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_DPO-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_DPO.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.8 | 
optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_DPO-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_DPO.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_DPO-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_DPO.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.7 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_DPO-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_DPO.i1-Q5_K_M.gguf) | i1-Q5_K_M | 17.2 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral_7Bx4_MOE_DPO-i1-GGUF/resolve/main/Mixtral_7Bx4_MOE_DPO.i1-Q6_K.gguf) | i1-Q6_K | 19.9 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
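For readers who prefer scripting the download instead of clicking the links above, a minimal sketch with `huggingface_hub` (my own addition, not part of the original card; the filename matches the recommended i1-Q4_K_M row in the table) would be:

```python
from huggingface_hub import hf_hub_download

# Fetch the recommended i1-Q4_K_M imatrix quant listed in the table above.
path = hf_hub_download(
    repo_id="mradermacher/Mixtral_7Bx4_MOE_DPO-i1-GGUF",
    filename="Mixtral_7Bx4_MOE_DPO.i1-Q4_K_M.gguf",
)
print(path)  # local path to the ~14.7 GB GGUF file
```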
igans/Meta-Llama-3-8B-Q4_K_M-GGUF
igans
2024-06-24T18:00:20Z
466
0
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:meta-llama/Meta-Llama-3-8B", "license:llama3", "region:us" ]
text-generation
2024-06-24T17:59:58Z
--- base_model: meta-llama/Meta-Llama-3-8B language: - en license: llama3 pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 - llama-cpp - gguf-my-repo extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version\ \ Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for\ \ use, reproduction, distribution and modification of the Llama Materials set forth\ \ herein.\n\"Documentation\" means the specifications, manuals and documentation\ \ accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\ \"Licensee\" or \"you\" means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf), of\ \ the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama\ \ 3\" means the foundational large language models and software and algorithms,\ \ including machine-learning model code, trained model weights, inference-enabling\ \ code, training-enabling code, fine-tuning enabling code and other elements of\ \ the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\ \"Llama Materials\" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation\ \ (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"\ we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\ \ an entity, your principal place of business is in the EEA or Switzerland) and\ \ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n\ \ \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted\ \ a non-exclusive, worldwide, non-transferable and royalty-free limited license\ \ under Meta’s intellectual property or other rights owned by Meta embodied in the\ \ Llama Materials to use, reproduce, distribute, copy, create derivative works of,\ \ and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni.\ \ If you distribute or make available the Llama Materials (or any derivative works\ \ thereof), or a product or service that uses any of them, including another AI\ \ model, you shall (A) provide a copy of this Agreement with any such Llama Materials;\ \ and (B) prominently display “Built with Meta Llama 3” on a related website, user\ \ interface, blogpost, about page, or product documentation. If you use the Llama\ \ Materials to create, train, fine tune, or otherwise improve an AI model, which\ \ is distributed or made available, you shall also include “Llama 3” at the beginning\ \ of any such AI model name.\nii. If you receive Llama Materials, or any derivative\ \ works thereof, from a Licensee as part of an integrated end user product, then\ \ Section 2 of this Agreement will not apply to you.\niii. You must retain in all\ \ copies of the Llama Materials that you distribute the following attribution notice\ \ within a “Notice” text file distributed as a part of such copies: “Meta Llama\ \ 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms,\ \ Inc. All Rights Reserved.”\niv. 
Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\nv. You will not use the Llama Materials or any output or\ \ results of the Llama Materials to improve any other large language model (excluding\ \ Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If,\ \ on the Meta Llama 3 version release date, the monthly active users of the products\ \ or services made available by or for Licensee, or Licensee’s affiliates, is greater\ \ than 700 million monthly active users in the preceding calendar month, you must\ \ request a license from Meta, which Meta may grant to you in its sole discretion,\ \ and you are not authorized to exercise any of the rights under this Agreement\ \ unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer\ \ of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT\ \ AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF\ \ ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,\ \ INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY,\ \ OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING\ \ THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME\ \ ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n\ 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER\ \ ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY,\ \ OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT,\ \ SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META\ \ OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\ 5. Intellectual Property.\na. No trademark licenses are granted under this Agreement,\ \ and in connection with the Llama Materials, neither Meta nor Licensee may use\ \ any name or mark owned by or associated with the other or any of its affiliates,\ \ except as required for reasonable and customary use in describing and redistributing\ \ the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you\ \ a license to use “Llama 3” (the “Mark”) solely as required to comply with the\ \ last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently\ \ accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All\ \ goodwill arising out of your use of the Mark will inure to the benefit of Meta.\n\ b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for\ \ Meta, with respect to any derivative works and modifications of the Llama Materials\ \ that are made by you, as between you and Meta, you are and will be the owner of\ \ such derivative works and modifications.\nc. 
If you institute litigation or other\ \ proceedings against Meta or any entity (including a cross-claim or counterclaim\ \ in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results,\ \ or any portion of any of the foregoing, constitutes infringement of intellectual\ \ property or other rights owned or licensable by you, then any licenses granted\ \ to you under this Agreement shall terminate as of the date such litigation or\ \ claim is filed or instituted. You will indemnify and hold harmless Meta from and\ \ against any claim by any third party arising out of or related to your use or\ \ distribution of the Llama Materials.\n6. Term and Termination. The term of this\ \ Agreement will commence upon your acceptance of this Agreement or access to the\ \ Llama Materials and will continue in full force and effect until terminated in\ \ accordance with the terms and conditions herein. Meta may terminate this Agreement\ \ if you are in breach of any term or condition of this Agreement. Upon termination\ \ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\ \ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\ \ and Jurisdiction. This Agreement will be governed and construed under the laws\ \ of the State of California without regard to choice of law principles, and the\ \ UN Convention on Contracts for the International Sale of Goods does not apply\ \ to this Agreement. The courts of California shall have exclusive jurisdiction\ \ of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use\ \ Policy\nMeta is committed to promoting safe and fair use of its tools and features,\ \ including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable\ \ Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n\ #### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly.\ \ You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 2. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 4.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 5. 
Collect, process, disclose, generate, or infer health, demographic,\ \ or other sensitive personal or private information about individuals without rights\ \ and consents required by applicable laws\n 6. Engage in or facilitate any action\ \ or generate any content that infringes, misappropriates, or otherwise violates\ \ any third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 7. Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n2. Engage in, promote, incite,\ \ facilitate, or assist in the planning or development of activities that present\ \ a risk of death or bodily harm to individuals, including use of Meta Llama 3 related\ \ to the following:\n 1. Military, warfare, nuclear industries or applications,\ \ espionage, use for materials or activities that are subject to the International\ \ Traffic Arms Regulations (ITAR) maintained by the United States Department of\ \ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\ \ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\ \ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\ \ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\ \ content intended to incite or promote violence, abuse, or any infliction of bodily\ \ harm to an individual\n3. Intentionally deceive or mislead others, including use\ \ of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering\ \ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\ \ or furthering defamatory content, including the creation of defamatory statements,\ \ images, or other content\n 3. Generating, promoting, or further distributing\ \ spam\n 4. Impersonating another individual without consent, authorization,\ \ or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are\ \ human-generated\n 6. Generating or facilitating false online engagement, including\ \ fake reviews and other means of fake online engagement\n4. Fail to appropriately\ \ disclose to end users any known dangers of your AI system\nPlease report any violation\ \ of this Policy, software “bug,” or other problems that could lead to a violation\ \ of this Policy through one of the following means:\n * Reporting issues with\ \ the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n\ \ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\ \ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\ \ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). 
extra_gated_button_content: Submit --- # igans/Meta-Llama-3-8B-Q4_K_M-GGUF This model was converted to GGUF format from [`meta-llama/Meta-Llama-3-8B`](https://huggingface.co/meta-llama/Meta-Llama-3-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo igans/Meta-Llama-3-8B-Q4_K_M-GGUF --hf-file meta-llama-3-8b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo igans/Meta-Llama-3-8B-Q4_K_M-GGUF --hf-file meta-llama-3-8b-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo igans/Meta-Llama-3-8B-Q4_K_M-GGUF --hf-file meta-llama-3-8b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo igans/Meta-Llama-3-8B-Q4_K_M-GGUF --hf-file meta-llama-3-8b-q4_k_m.gguf -c 2048 ```
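Beyond the shell workflows above, the same file can be used from Python. The sketch below is my own addition and assumes the `huggingface_hub` and `llama-cpp-python` packages; the repo, filename and prompt are taken from the commands above.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the quantised file from this repo, then load it with llama-cpp-python.
model_path = hf_hub_download(
    repo_id="igans/Meta-Llama-3-8B-Q4_K_M-GGUF",
    filename="meta-llama-3-8b-q4_k_m.gguf",
)
llm = Llama(model_path=model_path, n_ctx=2048)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```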
Helsinki-NLP/opus-mt-fr-ar
Helsinki-NLP
2023-08-16T11:36:03Z
465
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "fr", "ar", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - fr - ar tags: - translation license: apache-2.0 --- ### fra-ara * source group: French * target group: Arabic * OPUS readme: [fra-ara](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-ara/README.md) * model: transformer * source language(s): fra * target language(s): apc ara arq arq_Latn ary arz * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ara/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ara/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ara/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.fra.ara | 14.4 | 0.439 | ### System Info: - hf_name: fra-ara - source_languages: fra - target_languages: ara - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fra-ara/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['fr', 'ar'] - src_constituents: {'fra'} - tgt_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ara/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fra-ara/opus-2020-07-03.test.txt - src_alpha3: fra - tgt_alpha3: ara - short_pair: fr-ar - chrF2_score: 0.439 - bleu: 14.4 - brevity_penalty: 1.0 - ref_len: 7956.0 - src_name: French - tgt_name: Arabic - train_date: 2020-07-03 - src_alpha2: fr - tgt_alpha2: ar - prefer_old: False - long_pair: fra-ara - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
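Since the card notes that a sentence-initial target-language token of the form `>>id<<` is required, a minimal usage sketch with the standard `transformers` MarianMT classes may help (the example sentence and the choice of `>>ara<<` are my own illustrations, not from the original card):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fr-ar"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The leading >>ara<< token selects Standard Arabic among the target variants.
src_text = [">>ara<< Bonjour, comment allez-vous ?"]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```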
timm/efficientnet_b2_pruned.in1k
timm
2023-04-27T21:10:18Z
465
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:1905.11946", "arxiv:2002.08258", "license:apache-2.0", "region:us" ]
image-classification
2022-12-12T23:56:29Z
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for efficientnet_b2_pruned.in1k An EfficientNet image classification model. Knapsack-pruned from existing weights. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 8.3 - GMACs: 0.7 - Activations (M): 9.1 - Image size: 260 x 260 - **Papers:** - EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: https://arxiv.org/abs/1905.11946 - Knapsack Pruning with Inner Distillation: https://arxiv.org/abs/2002.08258 - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch # needed below for torch.topk img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('efficientnet_b2_pruned.in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'efficientnet_b2_pruned.in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 16, 130, 130]) # torch.Size([1, 17, 65, 65]) # torch.Size([1, 42, 33, 33]) # torch.Size([1, 116, 17, 17]) # torch.Size([1, 352, 9, 9]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'efficientnet_b2_pruned.in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 1408, 9, 9) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
## Citation ```bibtex @inproceedings{tan2019efficientnet, title={Efficientnet: Rethinking model scaling for convolutional neural networks}, author={Tan, Mingxing and Le, Quoc}, booktitle={International conference on machine learning}, pages={6105--6114}, year={2019}, organization={PMLR} } ``` ```bibtex @article{aflalo2020knapsack, title={Knapsack pruning with inner distillation}, author={Aflalo, Yonathan and Noy, Asaf and Lin, Ming and Friedman, Itamar and Zelnik, Lihi}, journal={arXiv preprint arXiv:2002.08258}, year={2020} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
nDimensional/Experience
nDimensional
2023-11-28T13:21:30Z
465
18
diffusers
[ "diffusers", "text-to-image", "StableDiffusion", "Stable-Diffusion", "image-generation", "SD1.5", "SDv1.5", "art", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-01-31T22:39:54Z
--- license: creativeml-openrail-m library_name: diffusers tags: - text-to-image - StableDiffusion - Stable-Diffusion - image-generation - SD1.5 - SDv1.5 - art --- <h1>Experience 🚀</h1> ### Experience 7.0 <img src="https://huggingface.co/Schisim/Experience/resolve/main/Images/7.0/00024-3240294908.png" width=768/> <br> <img src="https://huggingface.co/Schisim/Experience/resolve/main/Images/7.0/00249-3304466729.png" width=768/> <br> <img src="https://huggingface.co/Schisim/Experience/resolve/main/Images/7.0/00382-1453779484.png" width=512/> --- ### Realistic Experience <img src="https://huggingface.co/Schisim/Experience/resolve/main/Images/Realistic/00207-1362125040.png" width=704/> <br> <img src="https://huggingface.co/Schisim/Experience/resolve/main/Images/Realistic/00217-3852234563.png" width=768/> <br> <img src="https://huggingface.co/Schisim/Experience/resolve/main/Images/Realistic/00400-1936499297.png" width=512/> --- **For prompt examples**, download the images in the 'Images' folder and open them with the PNG Info viewer in Automatic1111-webui. Enjoy!
yeongjoon/Kconvo-roberta
yeongjoon
2023-03-21T02:42:47Z
465
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "ko", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-03-15T06:51:03Z
--- license: mit language: - ko --- # Kconvo-roberta: Korean conversation RoBERTa ([github](https://github.com/HeoTaksung/Domain-Robust-Retraining-of-Pretrained-Language-Model)) - There are many PLMs (Pretrained Language Models) for Korean, but most of them are trained on written language. - Here, we introduce a PLM retrained for Korean conversation, using spoken-language data for training. ## Usage ```python # Kconvo-roberta from transformers import RobertaTokenizerFast, RobertaModel tokenizer_roberta = RobertaTokenizerFast.from_pretrained("yeongjoon/Kconvo-roberta") model_roberta = RobertaModel.from_pretrained("yeongjoon/Kconvo-roberta") ``` ----------------- ## Domain Robust Retraining of Pretrained Language Model - Kconvo-roberta uses [klue/roberta-base](https://huggingface.co/klue/roberta-base) as the base model and was additionally retrained on conversational datasets. - The retraining data were collected through the [National Institute of the Korean Language](https://corpus.korean.go.kr/request/corpusRegist.do) and [AI-Hub](https://www.aihub.or.kr/aihubdata/data/list.do?pageIndex=1&currMenu=115&topMenu=100&dataSetSn=&srchdataClCode=DATACL001&srchOrder=&SrchdataClCode=DATACL002&searchKeyword=&srchDataRealmCode=REALM002&srchDataTy=DATA003), and the collected datasets are as follows. ``` - National Institute of the Korean Language * 온라인 대화 말뭉치 2021 * 일상 대화 말뭉치 2020 * 구어 말뭉치 * 메신저 말뭉치 - AI-Hub * 온라인 구어체 말뭉치 데이터 * 상담 음성 * 한국어 음성 * 자유대화 음성(일반남여) * 일상생활 및 구어체 한-영 번역 병렬 말뭉치 데이터 * 한국인 대화음성 * 감성 대화 말뭉치 * 주제별 텍스트 일상 대화 데이터 * 용도별 목적대화 데이터 * 한국어 SNS ```
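Since the checkpoint is published for fill-mask, a quick sanity check with the `transformers` pipeline is also possible. The sketch below is my own addition; the Korean example sentence is purely illustrative, and using `fill.tokenizer.mask_token` avoids hard-coding whether the checkpoint uses `<mask>` or `[MASK]`.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="yeongjoon/Kconvo-roberta")

# Build the input around the tokenizer's own mask token.
sentence = f"오늘 날씨가 정말 {fill.tokenizer.mask_token}네요."
for pred in fill(sentence, top_k=3):
    print(pred["token_str"], round(pred["score"], 3))
```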
GT4SD/multitask-text-and-chemistry-t5-base-augm
GT4SD
2023-10-10T09:29:57Z
465
3
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "en", "arxiv:2301.12586", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
2023-05-19T18:09:33Z
--- license: mit language: - en --- # Multitask Text and Chemistry T5 Multitask Text and Chemistry T5: a multi-domain, multi-task language model to solve a wide range of tasks in both the chemical and natural language domains. Published by [Christofidellis et al.](https://arxiv.org/pdf/2301.12586.pdf) **Model Details**: The Multitask Text and Chemistry T5 variant trained using <em>t5-small</em> as its pretrained base and the <em>augmented dataset</em>. **Developers**: Dimitrios Christofidellis*, Giorgio Giannone*, Jannis Born, Teodoro Laino and Matteo Manica from IBM Research and Ole Winther from the Technical University of Denmark. **Distributors**: Model natively integrated into GT4SD. **Model date**: 2023. **Model type**: A Transformer-based language model trained on a multi-domain, multi-task dataset built by aggregating available datasets for the tasks of forward reaction prediction, retrosynthesis, molecular captioning, text-conditional de novo generation and paragraph to actions. **Information about training algorithms, parameters, fairness constraints or other applied approaches, and features**: N.A. **Paper or other resource for more information**: The Multitask Text and Chemistry T5 [Christofidellis et al.](https://arxiv.org/pdf/2301.12586.pdf) **License**: MIT **Where to send questions or comments about the model**: Open an issue on the [GT4SD repository](https://github.com/GT4SD/gt4sd-core). ## Citation ```bib @article{christofidellis2023unifying, title={Unifying Molecular and Textual Representations via Multi-task Language Modelling}, author={Christofidellis, Dimitrios and Giannone, Giorgio and Born, Jannis and Winther, Ole and Laino, Teodoro and Manica, Matteo}, journal={arXiv preprint arXiv:2301.12586}, year={2023} } ``` *equal contribution
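The card does not include a usage snippet. Since the checkpoint is a standard T5 text2text model, a minimal `transformers` sketch would look like the following; note that the task prompt shown is a guess for illustration only — the exact task prefixes are defined in the paper and in GT4SD, so consult those before relying on a particular phrasing.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "GT4SD/multitask-text-and-chemistry-t5-base-augm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical task prompt; see the paper/GT4SD for the exact phrasing.
prompt = "Predict the product of the following reaction: CCO.CC(=O)O>>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```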
digiplay/hellopure_v2.23
digiplay
2023-10-24T06:14:37Z
465
2
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-04T03:11:30Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/88202?modelVersionId=104921 Author's info: https://civitai.com/user/aji1/models
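The card itself only links out to Civitai. Since the repository is tagged as a `StableDiffusionPipeline`, a minimal `diffusers` sketch (my own addition; prompt and settings are illustrative) is:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/hellopure_v2.23", torch_dtype=torch.float16
).to("cuda")

image = pipe("a portrait photo of a young woman, soft light",
             num_inference_steps=25).images[0]
image.save("hellopure_sample.png")
```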
lorahub/flan_t5_large-glue_mrpc
lorahub
2023-07-24T10:19:50Z
465
0
peft
[ "peft", "region:us" ]
null
2023-07-24T10:19:40Z
--- library_name: peft ---
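The card is empty apart from the `peft` library tag. Judging from the repo name, this appears to be a LoRA module for FLAN-T5-large trained on GLUE/MRPC (paraphrase detection), so a plausible — but unverified — loading sketch is:

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

adapter_id = "lorahub/flan_t5_large-glue_mrpc"
config = PeftConfig.from_pretrained(adapter_id)

# The base model is read from the adapter config rather than assumed.
base = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)

# Hypothetical MRPC-style query; the exact instruction format used in
# training is not documented in this card.
inputs = tokenizer(
    "Do the following two sentences mean the same thing? "
    "Sentence 1: He ate an apple. Sentence 2: An apple was eaten by him.",
    return_tensors="pt",
)
out = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```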
liuhaotian/llava-lcs558k-scienceqa-vicuna-13b-v1.3
liuhaotian
2023-08-27T05:44:55Z
465
4
transformers
[ "transformers", "pytorch", "llava", "text-generation", "autotrain_compatible", "region:us" ]
text-generation
2023-08-27T05:35:38Z
--- inference: false --- <br> <br> # LLaVA Model Card ## Model details **Model type:** LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. It is an auto-regressive language model, based on the transformer architecture. **Model date:** LLaVA-LCS558K-ScienceQA-Vicuna-13B-v1.3 was trained in August 2023. **Paper or resources for more information:** https://llava-vl.github.io/ ## License Non-commercial use. **Where to send questions or comments about the model:** https://github.com/haotian-liu/LLaVA/issues ## Intended use **Primary intended uses:** The primary use of LLaVA is research on large multimodal models and chatbots. **Primary intended users:** The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. ## Training dataset - 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP. - ScienceQA training set. ## Evaluation dataset A preliminary evaluation of the model quality is conducted by creating a set of 90 visual reasoning questions from 30 unique images randomly sampled from COCO val 2014, each associated with three types of questions: conversational, detailed description, and complex reasoning. We utilize GPT-4 to judge the model outputs. We also evaluate our model on the ScienceQA dataset. Our synergy with GPT-4 sets a new state-of-the-art on the dataset. See https://llava-vl.github.io/ for more details.
kwagh20ite/flickr_compv_6000
kwagh20ite
2023-08-30T03:02:59Z
465
0
diffusers
[ "diffusers", "safetensors", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-30T02:55:15Z
Entry not found
TheBloke/AppleSauce-L2-13B-GGUF
TheBloke
2023-09-27T12:49:09Z
465
1
transformers
[ "transformers", "gguf", "llama", "base_model:sauce1337/AppleSauce-L2-13b", "license:cc-by-nc-4.0", "text-generation-inference", "region:us" ]
null
2023-09-14T07:57:03Z
--- license: cc-by-nc-4.0 model_name: AppleSauce L2 13B base_model: sauce1337/AppleSauce-L2-13b inference: false model_creator: sauce1337 model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # AppleSauce L2 13B - GGUF - Model creator: [sauce1337](https://huggingface.co/sauce1337) - Original model: [AppleSauce L2 13B](https://huggingface.co/sauce1337/AppleSauce-L2-13b) <!-- description start --> ## Description This repo contains GGUF format model files for [sauce1337's AppleSauce L2 13B](https://huggingface.co/sauce1337/AppleSauce-L2-13b). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/AppleSauce-L2-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/AppleSauce-L2-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/AppleSauce-L2-13B-GGUF) * [sauce1337's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/sauce1337/AppleSauce-L2-13b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [sauce1337's AppleSauce L2 13B](https://huggingface.co/sauce1337/AppleSauce-L2-13b). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [applesauce-l2-13b.Q2_K.gguf](https://huggingface.co/TheBloke/AppleSauce-L2-13B-GGUF/blob/main/applesauce-l2-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [applesauce-l2-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/AppleSauce-L2-13B-GGUF/blob/main/applesauce-l2-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [applesauce-l2-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/AppleSauce-L2-13B-GGUF/blob/main/applesauce-l2-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [applesauce-l2-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/AppleSauce-L2-13B-GGUF/blob/main/applesauce-l2-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [applesauce-l2-13b.Q4_0.gguf](https://huggingface.co/TheBloke/AppleSauce-L2-13B-GGUF/blob/main/applesauce-l2-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [applesauce-l2-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/AppleSauce-L2-13B-GGUF/blob/main/applesauce-l2-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [applesauce-l2-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/AppleSauce-L2-13B-GGUF/blob/main/applesauce-l2-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [applesauce-l2-13b.Q5_0.gguf](https://huggingface.co/TheBloke/AppleSauce-L2-13B-GGUF/blob/main/applesauce-l2-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [applesauce-l2-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/AppleSauce-L2-13B-GGUF/blob/main/applesauce-l2-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [applesauce-l2-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/AppleSauce-L2-13B-GGUF/blob/main/applesauce-l2-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [applesauce-l2-13b.Q6_K.gguf](https://huggingface.co/TheBloke/AppleSauce-L2-13B-GGUF/blob/main/applesauce-l2-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [applesauce-l2-13b.Q8_0.gguf](https://huggingface.co/TheBloke/AppleSauce-L2-13B-GGUF/blob/main/applesauce-l2-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/AppleSauce-L2-13B-GGUF and below it, a specific filename to download, such as: applesauce-l2-13b.q4_K_M.gguf. 
Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/AppleSauce-L2-13B-GGUF applesauce-l2-13b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/AppleSauce-L2-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/AppleSauce-L2-13B-GGUF applesauce-l2-13b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m applesauce-l2-13b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/AppleSauce-L2-13B-GGUF", model_file="applesauce-l2-13b.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: sauce1337's AppleSauce L2 13B ok, it's an apple. ![](okapple.png) would you role play with an apple? maybe. would you ask an apple complicated logical questions? maybe. use alpaca format? maybe. > TheBloke GGUF and GPTQ:\ > https://huggingface.co/TheBloke/AppleSauce-L2-13B-GGUF \ > https://huggingface.co/TheBloke/AppleSauce-L2-13B-GPTQ <!-- original-model-card end -->
Yntec/SuperCuteRemix
Yntec
2023-09-20T08:09:16Z
465
3
diffusers
[ "diffusers", "safetensors", "Anime", "Girl", "Asian", "DucHaiten", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-09-20T06:59:55Z
---
library_name: diffusers
pipeline_tag: text-to-image
license: creativeml-openrail-m
tags:
- Anime
- Girl
- Asian
- DucHaiten
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---

# Super Cute Remix

DucHaitenSuperCute 1.0 mixed with SuperCute's unet and skin colors fixed (to match the color temperature of SuperCute)

If you like his content, support him at: https://linktr.ee/Duc_Haiten

Sample and prompt:

![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/hrjg5sY7GKVcizTIS9YNr.png)

A pretty cute sofa girl sitting in a jar, by makoto shinkai an norman rockwell

Original page: https://huggingface.co/DucHaiten/DucHaitenSuperCute/tree/main

# Recipe

- Model Converter fp16 no-ema

Model: DucHaitenSuperCute_v1.0

Output: DucHaitenSuperCuteMini

- SuperMerger Weight sum Train Difference MBW 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1 (this overwrites the unet but leaves everything else intact. Reversing the models makes one that is SuperCute with DucHaitenSuperCute 1.0's unet)

Model A: DucHaitenSuperCuteMini

Model B: SuperCute (https://huggingface.co/DucHaiten/DucHaitenSuperCute/resolve/main/SuperCute-fp16-no-ema.safetensors)

Output: SuperCuteAlpha

- SuperMerger Weight sum Train Difference Adjust settings 0,0,0,0,0,0,1

Model A: SuperCuteAlpha

Model B: SuperCuteAlpha

Output: SuperCuteRemix
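As a convenience (an addition, not part of the original card), here is a minimal diffusers loading sketch; the dtype, device, and step count are illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the merged checkpoint from the Hub.
pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/SuperCuteRemix", torch_dtype=torch.float16
).to("cuda")

# The card's own sample prompt, reproduced verbatim.
prompt = "A pretty cute sofa girl sitting in a jar, by makoto shinkai an norman rockwell"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("supercuteremix_sample.png")
```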
alon-albalak/ODM_1B_params_50B_tokens
alon-albalak
2023-10-10T00:16:29Z
465
0
transformers
[ "transformers", "pytorch", "safetensors", "gpt_neox", "feature-extraction", "endpoints_compatible", "text-generation-inference", "region:us" ]
feature-extraction
2023-10-06T06:13:06Z
Entry not found
juntaoyuan/elements-7b-teacher
juntaoyuan
2023-11-21T07:25:37Z
465
0
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
2023-11-12T19:42:23Z
--- license: apache-2.0 ---
protectai/codebert-base-Malicious_URLs-onnx
protectai
2024-03-25T12:06:30Z
465
1
transformers
[ "transformers", "onnx", "roberta", "text-classification", "malicious-urls", "url", "en", "dataset:sid321axn/malicious-urls-dataset", "base_model:DunnBC22/codebert-base-Malicious_URLs", "autotrain_compatible", "region:us" ]
text-classification
2023-11-12T21:12:48Z
---
language:
- en
pipeline_tag: text-classification
base_model: DunnBC22/codebert-base-Malicious_URLs
inference: false
datasets:
- sid321axn/malicious-urls-dataset
tags:
- malicious-urls
- url
---

# ONNX version of DunnBC22/codebert-base-Malicious_URLs

**This model is a conversion of [DunnBC22/codebert-base-Malicious_URLs](https://huggingface.co/DunnBC22/codebert-base-Malicious_URLs) to ONNX** format. It's based on the CodeBERT architecture, tailored for the specific task of identifying URLs that may pose security threats. The model was converted to ONNX using the [🤗 Optimum](https://huggingface.co/docs/optimum/index) library.

## Model Architecture

**Base Model**: CodeBERT-base, a robust model for programming and natural languages.

**Dataset**: [https://www.kaggle.com/datasets/sid321axn/malicious-urls-dataset](https://www.kaggle.com/datasets/sid321axn/malicious-urls-dataset).

**Modifications**: Conversion of the base model to ONNX format via 🤗 Optimum; no additional fine-tuning was applied on top of the base model.

## Usage

Loading the model requires the [🤗 Optimum](https://huggingface.co/docs/optimum/index) library installed.

```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("protectai/codebert-base-Malicious_URLs-onnx")
model = ORTModelForSequenceClassification.from_pretrained("protectai/codebert-base-Malicious_URLs-onnx")
classifier = pipeline(
    task="text-classification",
    model=model,
    tokenizer=tokenizer,
    top_k=None,
)

classifier_output = classifier("https://google.com")
print(classifier_output)
```

### LLM Guard

[Malicious URLs scanner](https://llm-guard.com/output_scanners/malicious_urls/)

## Community

Join our Slack to give us feedback, connect with the maintainers and fellow users, ask questions, or engage in discussions about LLM security!

<a href="https://join.slack.com/t/laiyerai/shared_invite/zt-28jv3ci39-sVxXrLs3rQdaN3mIl9IT~w"><img src="https://github.com/laiyer-ai/llm-guard/blob/main/docs/assets/join-our-slack-community.png?raw=true" width="200"></a>
Chrisisis/5DMARwFBmKTG39zAsNHgkkGLM6QzJpVbN76b5BR4Uko6MbDr_vgg
Chrisisis
2024-02-24T08:35:13Z
465
0
keras
[ "keras", "region:us" ]
null
2024-02-19T03:03:02Z
Entry not found
BioMistral/BioMistral-7B-DARE-GGUF
BioMistral
2024-02-19T22:46:19Z
465
9
transformers
[ "transformers", "gguf", "mistral", "text-generation", "mergekit", "merge", "dare", "medical", "biology", "en", "fr", "nl", "es", "it", "pl", "ro", "de", "dataset:pubmed", "arxiv:2311.03099", "arxiv:2306.01708", "arxiv:2402.10373", "base_model:BioMistral/BioMistral-7B", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-19T20:51:42Z
--- base_model: - BioMistral/BioMistral-7B - mistralai/Mistral-7B-Instruct-v0.1 library_name: transformers tags: - mergekit - merge - dare - medical - biology license: apache-2.0 datasets: - pubmed language: - en - fr - nl - es - it - pl - ro - de pipeline_tag: text-generation --- # BioMistral-7B-mistral7instruct-dare This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) as a base. ### Models Merged The following models were included in the merge: * [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: mistralai/Mistral-7B-Instruct-v0.1 # No parameters necessary for base model - model: BioMistral/BioMistral-7B parameters: density: 0.5 weight: 0.5 merge_method: dare_ties base_model: mistralai/Mistral-7B-Instruct-v0.1 parameters: int8_mask: true dtype: bfloat16 ``` <p align="center"> <img src="https://huggingface.co/BioMistral/BioMistral-7B/resolve/main/wordart_blue_m_rectangle.png?download=true" alt="drawing" width="250"/> </p> # BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains **Abstract:** Large Language Models (LLMs) have demonstrated remarkable versatility in recent years, offering potential applications across specialized domains such as healthcare and medicine. Despite the availability of various open-source LLMs tailored for health contexts, adapting general-purpose LLMs to the medical domain presents significant challenges. In this paper, we introduce BioMistral, an open-source LLM tailored for the biomedical domain, utilizing Mistral as its foundation model and further pre-trained on PubMed Central. We conduct a comprehensive evaluation of BioMistral on a benchmark comprising 10 established medical question-answering (QA) tasks in English. We also explore lightweight models obtained through quantization and model merging approaches. Our results demonstrate BioMistral's superior performance compared to existing open-source medical models and its competitive edge against proprietary counterparts. Finally, to address the limited availability of data beyond English and to assess the multilingual generalization of medical LLMs, we automatically translated and evaluated this benchmark into 7 other languages. This marks the first large-scale multilingual evaluation of LLMs in the medical domain. Datasets, multilingual evaluation benchmarks, scripts, and all the models obtained during our experiments are freely released. **Advisory Notice!** Although BioMistral is intended to encapsulate medical knowledge sourced from high-quality evidence, it hasn't been tailored to effectively, safely, or suitably convey this knowledge within professional parameters for action. We advise refraining from utilizing BioMistral in medical contexts unless it undergoes thorough alignment with specific use cases and undergoes further testing, notably including randomized controlled trials in real-world medical environments. BioMistral 7B may possess inherent risks and biases that have not yet been thoroughly assessed. Additionally, the model's performance has not been evaluated in real-world clinical settings. 
Consequently, we recommend using BioMistral 7B strictly as a research tool and advise against deploying it in production environments for natural language generation or any professional health and medical purposes.

# 1. BioMistral models

**BioMistral** is a suite of Mistral-based further pre-trained open source models suited for the medical domains and pre-trained using textual data from PubMed Central Open Access (CC0, CC BY, CC BY-SA, and CC BY-ND). All the models are trained using the CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/jean-zay/) French HPC.

| Model Name | Base Model | Model Type | Sequence Length | Download |
|:-------------------:|:----------------------------------:|:-------------------:|:---------------:|:-----------------------------------------------------:|
| BioMistral-7B | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Further Pre-trained | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B-DARE | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge DARE | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE) |
| BioMistral-7B-TIES | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge TIES | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES) |
| BioMistral-7B-SLERP | [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) | Merge SLERP | 2048 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP) |

# 2. Quantized Models

| Base Model | Method | q_group_size | w_bit | version | VRAM GB | Time | Download |
|:-------------------:|:------:|:------------:|:-----:|:-------:|:-------:|:------:|:--------:|
| BioMistral-7B | FP16/BF16 | | | | 15.02 | x1.00 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B) |
| BioMistral-7B | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B | AWQ | 128 | 4 | GEMV | 4.68 | x10.30 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-AWQ-QGS128-W4-GEMV) |
| BioMistral-7B | BnB.4 | | 4 | | 5.03 | x3.25 | [HuggingFace](blank) |
| BioMistral-7B | BnB.8 | | 8 | | 8.04 | x4.34 | [HuggingFace](blank) |
| BioMistral-7B-DARE | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-DARE-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-TIES | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-TIES-AWQ-QGS128-W4-GEMM) |
| BioMistral-7B-SLERP | AWQ | 128 | 4 | GEMM | 4.68 | x1.41 | [HuggingFace](https://huggingface.co/BioMistral/BioMistral-7B-SLERP-AWQ-QGS128-W4-GEMM) |

# 3. Using BioMistral

You can use BioMistral with [Hugging Face's Transformers library](https://github.com/huggingface/transformers) as follows.

Loading the model and tokenizer:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
model = AutoModel.from_pretrained("BioMistral/BioMistral-7B")
```

# 4. Supervised Fine-tuning Benchmark

| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA | MedQA 5 opts | PubMedQA | MedMCQA | Avg.
| |-------------------------------------------|:---------------------------------------------:|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|-----------------------------------------------|------------------| | **BioMistral 7B** | 59.9 | 64.0 | 56.5 | 60.4 | 59.0 | 54.7 | 50.6 | 42.8 | 77.5 | 48.1 | 57.3 | | **Mistral 7B Instruct** | **62.9** | 57.0 | 55.6 | 59.4 | 62.5 | <u>57.2</u> | 42.0 | 40.9 | 75.7 | 46.1 | 55.9 | | | | | | | | | | | | | | | **BioMistral 7B Ensemble** | <u>62.8</u> | 62.7 | <u>57.5</u> | **63.5** | 64.3 | 55.7 | 50.6 | 43.6 | 77.5 | **48.8** | 58.7 | | **BioMistral 7B DARE** | 62.3 | **67.0** | 55.8 | 61.4 | **66.9** | **58.0** | **51.1** | **45.2** | <u>77.7</u> | <u>48.7</u> | **59.4** | | **BioMistral 7B TIES** | 60.1 | <u>65.0</u> | **58.5** | 60.5 | 60.4 | 56.5 | 49.5 | 43.2 | 77.5 | 48.1 | 57.9 | | **BioMistral 7B SLERP** | 62.5 | 64.7 | 55.8 | <u>62.7</u> | <u>64.8</u> | 56.3 | <u>50.8</u> | <u>44.3</u> | **77.8** | 48.6 | <u>58.8</u> | | | | | | | | | | | | | | | **MedAlpaca 7B** | 53.1 | 58.0 | 54.1 | 58.8 | 58.1 | 48.6 | 40.1 | 33.7 | 73.6 | 37.0 | 51.5 | | **PMC-LLaMA 7B** | 24.5 | 27.7 | 35.3 | 17.4 | 30.3 | 23.3 | 25.5 | 20.2 | 72.9 | 26.6 | 30.4 | | **MediTron-7B** | 41.6 | 50.3 | 46.4 | 27.9 | 44.4 | 30.8 | 41.6 | 28.1 | 74.9 | 41.3 | 42.7 | | **BioMedGPT-LM-7B** | 51.4 | 52.0 | 49.4 | 53.3 | 50.7 | 49.1 | 42.5 | 33.9 | 76.8 | 37.6 | 49.7 | | | | | | | | | | | | | | | **GPT-3.5 Turbo 1106*** | 74.71 | 74.00 | 65.92 | 72.79 | 72.91 | 64.73 | 57.71 | 50.82 | 72.66 | 53.79 | 66.0 | Supervised Fine-Tuning (SFT) performance of BioMistral 7B models compared to baselines, measured by accuracy (↑) and averaged across 3 random seeds of 3-shot. DARE, TIES, and SLERP are model merging strategies that combine BioMistral 7B and Mistral 7B Instruct. Best model in bold, and second-best underlined. *GPT-3.5 Turbo performances are reported from the 3-shot results without SFT. # Citation BibTeX Arxiv : [https://arxiv.org/abs/2402.10373](https://arxiv.org/abs/2402.10373) ```bibtex @misc{labrak2024biomistral, title={BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains}, author={Yanis Labrak and Adrien Bazoge and Emmanuel Morin and Pierre-Antoine Gourraud and Mickael Rouvier and Richard Dufour}, year={2024}, eprint={2402.10373}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` **CAUTION!** Both direct and downstream users need to be informed about the risks, biases, and constraints inherent in the model. While the model can produce natural language text, our exploration of its capabilities and limitations is just beginning. In fields such as medicine, comprehending these limitations is crucial. Hence, we strongly advise against deploying this model for natural language generation in production or for professional tasks in the realm of health and medicine.
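For research use consistent with the advisory above, here is a minimal generation sketch (an addition, not from the original card); it swaps `AutoModel` for `AutoModelForCausalLM` since the goal is text generation, and the prompt and settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BioMistral/BioMistral-7B")
model = AutoModelForCausalLM.from_pretrained("BioMistral/BioMistral-7B")

# Encode a sample medical question and generate a continuation.
inputs = tokenizer("What are the main symptoms of anemia?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```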
Felladrin/Minueza-32M-UltraChat
Felladrin
2024-03-04T12:15:26Z
465
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "dataset:HuggingFaceH4/ultrachat_200k", "dataset:Felladrin/ChatML-ultrachat_200k", "base_model:Felladrin/Minueza-32M-Base", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-27T13:26:32Z
--- language: - en license: apache-2.0 datasets: - HuggingFaceH4/ultrachat_200k - Felladrin/ChatML-ultrachat_200k base_model: Felladrin/Minueza-32M-Base pipeline_tag: text-generation widget: - messages: - role: system content: You are a career counselor. The user will provide you with an individual looking for guidance in their professional life, and your task is to assist them in determining what careers they are most suited for based on their skills, interests, and experience. You should also conduct research into the various options available, explain the job market trends in different industries, and advice on which qualifications would be beneficial for pursuing particular fields. - role: user content: Heya! - role: assistant content: Hi! How may I help you? - role: user content: I am interested in developing a career in software engineering. What would you recommend me to do? - messages: - role: user content: Morning! - role: assistant content: Good morning! How can I help you today? - role: user content: Could you give me some tips for becoming a healthier person? - messages: - role: user content: Write the specs of a game about mages in a fantasy world. - messages: - role: user content: Tell me about the pros and cons of social media. - messages: - role: system content: You are a highly knowledgeable and friendly assistant. Your goal is to understand and respond to user inquiries with clarity. Your interactions are always respectful, helpful, and focused on delivering the most accurate information to the user. - role: user content: Hey! Got a question for you! - role: assistant content: Sure! What's it? - role: user content: What are some potential applications for quantum computing? inference: parameters: max_new_tokens: 250 do_sample: true temperature: 0.65 top_p: 0.55 top_k: 35 repetition_penalty: 1.176 model-index: - name: Minueza-32M-UltraChat results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 21.08 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Minueza-32M-UltraChat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 26.95 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Minueza-32M-UltraChat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 26.08 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Minueza-32M-UltraChat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 47.7 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Minueza-32M-UltraChat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 51.78 name: accuracy source: url: 
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Minueza-32M-UltraChat name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 0.23 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Minueza-32M-UltraChat name: Open LLM Leaderboard --- # Minueza-32M-UltraChat: A chat model with 32 million parameters - Base model: [Felladrin/Minueza-32M-Base](https://huggingface.co/Felladrin/Minueza-32M-Base) - Dataset: [[ChatML](https://huggingface.co/datasets/Felladrin/ChatML-ultrachat_200k)] [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) - License: [Apache License 2.0](https://huggingface.co/Felladrin/Minueza-32M-UltraChat/resolve/main/license.txt) - Availability in other ML formats: - GGUF: [Felladrin/gguf-Minueza-32M-UltraChat](https://huggingface.co/Felladrin/gguf-Minueza-32M-UltraChat) - ONNX: [Felladrin/onnx-Minueza-32M-UltraChat](https://huggingface.co/Felladrin/onnx-Minueza-32M-UltraChat) ## Recommended Prompt Format ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {user_message}<|im_end|> <|im_start|>assistant ``` ## Recommended Inference Parameters ```yml do_sample: true temperature: 0.65 top_p: 0.55 top_k: 35 repetition_penalty: 1.176 ``` ## Usage Example ```python from transformers import pipeline generate = pipeline("text-generation", "Felladrin/Minueza-32M-UltraChat") messages = [ { "role": "system", "content": "You are a highly knowledgeable and friendly assistant. Your goal is to understand and respond to user inquiries with clarity. Your interactions are always respectful, helpful, and focused on delivering the most accurate information to the user.", }, { "role": "user", "content": "Hey! Got a question for you!", }, { "role": "assistant", "content": "Sure! What's it?", }, { "role": "user", "content": "What are some potential applications for quantum computing?", }, ] prompt = generate.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) output = generate( prompt, max_new_tokens=256, do_sample=True, temperature=0.65, top_k=35, top_p=0.55, repetition_penalty=1.176, ) print(output[0]["generated_text"]) ``` ## How it was trained This model was trained with [SFTTrainer](https://huggingface.co/docs/trl/main/en/sft_trainer) using the following settings: | Hyperparameter | Value | | :--------------------- | :-------------------------------------------- | | Learning rate | 2e-5 | | Total train batch size | 16 | | Max. sequence length | 2048 | | Weight decay | 0 | | Warmup ratio | 0.1 | | Optimizer | Adam with betas=(0.9,0.999) and epsilon=1e-08 | | Scheduler | cosine | | Seed | 42 | ## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Felladrin__Minueza-32M-UltraChat) | Metric |Value| |---------------------------------|----:| |Avg. |28.97| |AI2 Reasoning Challenge (25-Shot)|21.08| |HellaSwag (10-Shot) |26.95| |MMLU (5-Shot) |26.08| |TruthfulQA (0-shot) |47.70| |Winogrande (5-shot) |51.78| |GSM8k (5-shot) | 0.23|
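The hyperparameter table above maps onto a trl `SFTTrainer` run roughly as sketched below; this is a reconstruction, not the author's script, and it assumes a trl version (around 0.7) where `dataset_text_field` and `max_seq_length` are `SFTTrainer` arguments, plus a `text` column in the ChatML dataset:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

model = AutoModelForCausalLM.from_pretrained("Felladrin/Minueza-32M-Base")
tokenizer = AutoTokenizer.from_pretrained("Felladrin/Minueza-32M-Base")
dataset = load_dataset("Felladrin/ChatML-ultrachat_200k", split="train")

args = TrainingArguments(
    output_dir="Minueza-32M-UltraChat",
    learning_rate=2e-5,
    per_device_train_batch_size=16,  # total train batch size 16, per the card
    weight_decay=0.0,
    warmup_ratio=0.1,
    lr_scheduler_type="cosine",
    seed=42,
    num_train_epochs=1,  # assumption: the epoch count is not listed in the card
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=args,
    train_dataset=dataset,
    dataset_text_field="text",  # assumed column name in the ChatML dataset
    max_seq_length=2048,
)
trainer.train()
```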
mradermacher/Eileithyia-20b-GGUF
mradermacher
2024-05-06T06:00:47Z
465
0
transformers
[ "transformers", "gguf", "not-for-all-audiences", "en", "base_model:athirdpath/Eileithyia-20b", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-03-24T10:49:08Z
--- base_model: athirdpath/Eileithyia-20b language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - not-for-all-audiences --- ## About static quants of https://huggingface.co/athirdpath/Eileithyia-20b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Eileithyia-20b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Eileithyia-20b-GGUF/resolve/main/Eileithyia-20b.Q2_K.gguf) | Q2_K | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/Eileithyia-20b-GGUF/resolve/main/Eileithyia-20b.IQ3_XS.gguf) | IQ3_XS | 8.5 | | | [GGUF](https://huggingface.co/mradermacher/Eileithyia-20b-GGUF/resolve/main/Eileithyia-20b.IQ3_S.gguf) | IQ3_S | 9.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Eileithyia-20b-GGUF/resolve/main/Eileithyia-20b.Q3_K_S.gguf) | Q3_K_S | 9.0 | | | [GGUF](https://huggingface.co/mradermacher/Eileithyia-20b-GGUF/resolve/main/Eileithyia-20b.IQ3_M.gguf) | IQ3_M | 9.4 | | | [GGUF](https://huggingface.co/mradermacher/Eileithyia-20b-GGUF/resolve/main/Eileithyia-20b.Q3_K_M.gguf) | Q3_K_M | 10.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Eileithyia-20b-GGUF/resolve/main/Eileithyia-20b.Q3_K_L.gguf) | Q3_K_L | 10.9 | | | [GGUF](https://huggingface.co/mradermacher/Eileithyia-20b-GGUF/resolve/main/Eileithyia-20b.IQ4_XS.gguf) | IQ4_XS | 11.0 | | | [GGUF](https://huggingface.co/mradermacher/Eileithyia-20b-GGUF/resolve/main/Eileithyia-20b.Q4_0.gguf) | Q4_0 | 11.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Eileithyia-20b-GGUF/resolve/main/Eileithyia-20b.IQ4_NL.gguf) | IQ4_NL | 11.6 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Eileithyia-20b-GGUF/resolve/main/Eileithyia-20b.Q4_K_S.gguf) | Q4_K_S | 11.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Eileithyia-20b-GGUF/resolve/main/Eileithyia-20b.Q4_K_M.gguf) | Q4_K_M | 12.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Eileithyia-20b-GGUF/resolve/main/Eileithyia-20b.Q5_K_S.gguf) | Q5_K_S | 14.1 | | | [GGUF](https://huggingface.co/mradermacher/Eileithyia-20b-GGUF/resolve/main/Eileithyia-20b.Q5_K_M.gguf) | Q5_K_M | 14.5 | | | [GGUF](https://huggingface.co/mradermacher/Eileithyia-20b-GGUF/resolve/main/Eileithyia-20b.Q6_K.gguf) | Q6_K | 16.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Eileithyia-20b-GGUF/resolve/main/Eileithyia-20b.Q8_0.gguf) | Q8_0 | 21.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
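If you prefer Python to manual downloads, here is a minimal `huggingface_hub` sketch (an addition to the card; the chosen quant is simply the "fast, recommended" Q4_K_M row from the table above):

```python
from huggingface_hub import hf_hub_download

# Download a single quant file from this repo into the local HF cache.
path = hf_hub_download(
    repo_id="mradermacher/Eileithyia-20b-GGUF",
    filename="Eileithyia-20b.Q4_K_M.gguf",
)
print(path)  # local path, ready to hand to a GGUF runtime such as llama.cpp
```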
Fredithefish/mistral_tiny
Fredithefish
2024-03-25T09:43:42Z
465
0
transformers
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-25T09:02:29Z
--- tags: - generated_from_trainer model-index: - name: mistral_tiny results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral_tiny This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.1 - Pytorch 2.2.1+cu121 - Datasets 2.4.0 - Tokenizers 0.15.2
unsloth/codegemma-7b-bnb-4bit
unsloth
2024-04-18T15:02:12Z
465
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "unsloth", "bnb", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-04-09T13:33:30Z
--- language: - en license: apache-2.0 library_name: transformers tags: - unsloth - transformers - gemma - bnb --- # Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth! We have a Google Colab Tesla T4 notebook for CodeGemma 7b here: https://colab.research.google.com/drive/19lwcRk_ZQ_ZtX-qzFP3qZBBHZNcMD1hh?usp=sharing [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less | | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less | | **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
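Outside the Colab notebooks, loading this pre-quantized checkpoint with Unsloth typically looks like the sketch below (an addition to the card; the sequence length and dtype are illustrative):

```python
from unsloth import FastLanguageModel

# Load the 4-bit CodeGemma checkpoint; dtype=None auto-selects
# float16 on T4/V100 and bfloat16 on Ampere or newer GPUs.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/codegemma-7b-bnb-4bit",
    max_seq_length=2048,
    dtype=None,
    load_in_4bit=True,  # the checkpoint is already bitsandbytes 4-bit
)
```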
bartowski/Tess-2.0-Llama-3-8B-GGUF
bartowski
2024-05-05T03:39:09Z
465
4
null
[ "gguf", "text-generation", "license:llama3", "region:us" ]
text-generation
2024-05-05T03:22:26Z
--- license: llama3 quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp imatrix Quantizations of Tess-2.0-Llama-3-8B Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2777">b2777</a> for quantization. Original model: https://huggingface.co/migtissera/Tess-2.0-Llama-3-8B All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) ## Prompt format ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Tess-2.0-Llama-3-8B-Q8_0.gguf](https://huggingface.co/bartowski/Tess-2.0-Llama-3-8B-GGUF/blob/main/Tess-2.0-Llama-3-8B-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. | | [Tess-2.0-Llama-3-8B-Q6_K.gguf](https://huggingface.co/bartowski/Tess-2.0-Llama-3-8B-GGUF/blob/main/Tess-2.0-Llama-3-8B-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. | | [Tess-2.0-Llama-3-8B-Q5_K_M.gguf](https://huggingface.co/bartowski/Tess-2.0-Llama-3-8B-GGUF/blob/main/Tess-2.0-Llama-3-8B-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. | | [Tess-2.0-Llama-3-8B-Q5_K_S.gguf](https://huggingface.co/bartowski/Tess-2.0-Llama-3-8B-GGUF/blob/main/Tess-2.0-Llama-3-8B-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. | | [Tess-2.0-Llama-3-8B-Q4_K_M.gguf](https://huggingface.co/bartowski/Tess-2.0-Llama-3-8B-GGUF/blob/main/Tess-2.0-Llama-3-8B-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [Tess-2.0-Llama-3-8B-Q4_K_S.gguf](https://huggingface.co/bartowski/Tess-2.0-Llama-3-8B-GGUF/blob/main/Tess-2.0-Llama-3-8B-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. | | [Tess-2.0-Llama-3-8B-IQ4_NL.gguf](https://huggingface.co/bartowski/Tess-2.0-Llama-3-8B-GGUF/blob/main/Tess-2.0-Llama-3-8B-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. | | [Tess-2.0-Llama-3-8B-IQ4_XS.gguf](https://huggingface.co/bartowski/Tess-2.0-Llama-3-8B-GGUF/blob/main/Tess-2.0-Llama-3-8B-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Tess-2.0-Llama-3-8B-Q3_K_L.gguf](https://huggingface.co/bartowski/Tess-2.0-Llama-3-8B-GGUF/blob/main/Tess-2.0-Llama-3-8B-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. | | [Tess-2.0-Llama-3-8B-Q3_K_M.gguf](https://huggingface.co/bartowski/Tess-2.0-Llama-3-8B-GGUF/blob/main/Tess-2.0-Llama-3-8B-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. | | [Tess-2.0-Llama-3-8B-IQ3_M.gguf](https://huggingface.co/bartowski/Tess-2.0-Llama-3-8B-GGUF/blob/main/Tess-2.0-Llama-3-8B-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [Tess-2.0-Llama-3-8B-IQ3_S.gguf](https://huggingface.co/bartowski/Tess-2.0-Llama-3-8B-GGUF/blob/main/Tess-2.0-Llama-3-8B-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. 
|
| [Tess-2.0-Llama-3-8B-Q3_K_S.gguf](https://huggingface.co/bartowski/Tess-2.0-Llama-3-8B-GGUF/blob/main/Tess-2.0-Llama-3-8B-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Tess-2.0-Llama-3-8B-IQ3_XS.gguf](https://huggingface.co/bartowski/Tess-2.0-Llama-3-8B-GGUF/blob/main/Tess-2.0-Llama-3-8B-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Tess-2.0-Llama-3-8B-IQ3_XXS.gguf](https://huggingface.co/bartowski/Tess-2.0-Llama-3-8B-GGUF/blob/main/Tess-2.0-Llama-3-8B-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Tess-2.0-Llama-3-8B-Q2_K.gguf](https://huggingface.co/bartowski/Tess-2.0-Llama-3-8B-GGUF/blob/main/Tess-2.0-Llama-3-8B-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Tess-2.0-Llama-3-8B-IQ2_M.gguf](https://huggingface.co/bartowski/Tess-2.0-Llama-3-8B-GGUF/blob/main/Tess-2.0-Llama-3-8B-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Tess-2.0-Llama-3-8B-IQ2_S.gguf](https://huggingface.co/bartowski/Tess-2.0-Llama-3-8B-GGUF/blob/main/Tess-2.0-Llama-3-8B-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Tess-2.0-Llama-3-8B-IQ2_XS.gguf](https://huggingface.co/bartowski/Tess-2.0-Llama-3-8B-GGUF/blob/main/Tess-2.0-Llama-3-8B-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [Tess-2.0-Llama-3-8B-IQ2_XXS.gguf](https://huggingface.co/bartowski/Tess-2.0-Llama-3-8B-GGUF/blob/main/Tess-2.0-Llama-3-8B-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [Tess-2.0-Llama-3-8B-IQ1_M.gguf](https://huggingface.co/bartowski/Tess-2.0-Llama-3-8B-GGUF/blob/main/Tess-2.0-Llama-3-8B-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [Tess-2.0-Llama-3-8B-IQ1_S.gguf](https://huggingface.co/bartowski/Tess-2.0-Llama-3-8B-GGUF/blob/main/Tess-2.0-Llama-3-8B-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |

## Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```
huggingface-cli download bartowski/Tess-2.0-Llama-3-8B-GGUF --include "Tess-2.0-Llama-3-8B-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
```

If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:

```
huggingface-cli download bartowski/Tess-2.0-Llama-3-8B-GGUF --include "Tess-2.0-Llama-3-8B-Q8_0.gguf/*" --local-dir Tess-2.0-Llama-3-8B-Q8_0 --local-dir-use-symlinks False
```

You can either specify a new local-dir (Tess-2.0-Llama-3-8B-Q8_0) or download them all in place (./)

## Which file should I choose?

A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)

The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.

If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.

If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.

If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.

If you want to get more into the weeds, you can check out this extremely useful feature chart:

[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)

But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.

These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.

The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
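As a small convenience (an addition, not part of the original card), here is a Python helper that fills the prompt format documented above; the exact newline placement is an assumption based on how the template is rendered in the card:

```python
# Llama-3 chat template from the "Prompt format" section of this card.
PROMPT_TEMPLATE = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    "{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)

def build_prompt(system_prompt: str, prompt: str) -> str:
    """Fill the chat template with a system message and a user message."""
    return PROMPT_TEMPLATE.format(system_prompt=system_prompt, prompt=prompt)

print(build_prompt("You are Tess, a helpful assistant.", "Explain entropy in one paragraph."))
```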
Shankhdhar/classifier_woog
Shankhdhar
2024-05-14T09:05:28Z
465
0
setfit
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-mpnet-base-v2", "model-index", "region:us" ]
text-classification
2024-05-10T09:47:07Z
--- library_name: setfit tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer base_model: sentence-transformers/paraphrase-mpnet-base-v2 metrics: - accuracy widget: - text: cookie boxes for gifting under $20 - text: Are there any restrictions on returning candle supplies? - text: special features for bakery boxes - text: I need to confirm the shipping date for my recent purchase. Can you help me with that? - text: different types of bakery boxes available pipeline_tag: text-classification inference: true model-index: - name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.8380952380952381 name: Accuracy --- # SetFit with sentence-transformers/paraphrase-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 4 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | product discoverability | <ul><li>'Do you have Adidas Superstar shoes?'</li><li>'Do you have any running shoes in pink color?'</li><li>'Do you have black Yeezy sneakers in size 9?'</li></ul> | | order tracking | <ul><li>"I'm concerned about the delay in the delivery of my order. Can you please provide me with the status?"</li><li>'What is the estimated delivery time for orders within the same city?'</li><li>"I placed an order last week and it still hasn't arrived. 
Can you check the status for me?"</li></ul> | | product policy | <ul><li>'Are there any exceptions to the return policy for items that were purchased with a student discount?'</li><li>'Do you offer a try-and-buy option for sneakers?'</li><li>'Do you offer a price adjustment for sneakers if the price drops after purchase?'</li></ul> | | product faq | <ul><li>'Do you have any limited edition sneakers available?'</li><li>'Are the Adidas Yeezy Foam Runner available in size 7?'</li><li>"Are the Nike Air Force 1 sneakers available in women's sizes?"</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.8381 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("setfit_model_id") # Run inference preds = model("special features for bakery boxes") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 3 | 11.6415 | 24 | | Label | Training Sample Count | |:------------------------|:----------------------| | order tracking | 30 | | product discoverability | 30 | | product faq | 16 | | product policy | 30 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (2, 2) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: True ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0019 | 1 | 0.1782 | - | | 0.0965 | 50 | 0.0628 | - | | 0.1931 | 100 | 0.0036 | - | | 0.2896 | 150 | 0.0013 | - | | 0.3861 | 200 | 0.0012 | - | | 0.4826 | 250 | 0.0003 | - | | 0.5792 | 300 | 0.0002 | - | | 0.6757 | 350 | 0.0003 | - | | 0.7722 | 400 | 0.0002 | - | | 0.8687 | 450 | 0.0005 | - | | 0.9653 | 500 | 0.0003 | - | | 1.0618 | 550 | 0.0001 | - | | 1.1583 | 600 | 0.0002 | - | | 1.2548 | 650 | 0.0002 | - | | 1.3514 | 700 | 0.0002 | - | | 1.4479 | 750 | 0.0001 | - | | 1.5444 | 800 | 0.0001 | - | | 1.6409 | 850 | 0.0001 | - | | 1.7375 | 900 | 0.0002 | - | | 1.8340 | 950 | 0.0001 | - | | 1.9305 | 1000 | 0.0001 | - | ### Framework Versions - Python: 3.9.16 - SetFit: 1.0.3 - Sentence Transformers: 2.7.0 - Transformers: 4.40.2 - PyTorch: 2.3.0 - Datasets: 2.19.1 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: 
Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
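The training hyperparameters listed above correspond to setfit 1.x `TrainingArguments`; the sketch below is a reconstruction, not the original training script, and the two-example dataset is a stand-in since the card's training data is not published:

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Stand-in few-shot data using the card's label names.
train_dataset = Dataset.from_dict({
    "text": ["Where is my order?", "Do you have black sneakers in size 9?"],
    "label": ["order tracking", "product discoverability"],
})

# Start from the same Sentence Transformer body used by this model.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(2, 2),
    body_learning_rate=(2e-5, 1e-5),
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    end_to_end=False,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```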
bartowski/Alpha-Orionis-v0.1-GGUF
bartowski
2024-05-26T19:57:22Z
465
2
null
[ "gguf", "text-generation", "dataset:NobodyExistsOnTheInternet/ToxicQAFinal", "license:apache-2.0", "region:us" ]
text-generation
2024-05-26T18:59:37Z
--- license: apache-2.0 datasets: - NobodyExistsOnTheInternet/ToxicQAFinal quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp imatrix Quantizations of Alpha-Orionis-v0.1 Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3001">b3001</a> for quantization. Original model: https://huggingface.co/fearlessdots/Alpha-Orionis-v0.1 All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a) ## Prompt format ``` {system_prompt} USER: {prompt} ASSISTANT: </s> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Alpha-Orionis-v0.1-Q8_0.gguf](https://huggingface.co/bartowski/Alpha-Orionis-v0.1-GGUF/blob/main/Alpha-Orionis-v0.1-Q8_0.gguf) | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. | | [Alpha-Orionis-v0.1-Q6_K.gguf](https://huggingface.co/bartowski/Alpha-Orionis-v0.1-GGUF/blob/main/Alpha-Orionis-v0.1-Q6_K.gguf) | Q6_K | 5.94GB | Very high quality, near perfect, *recommended*. | | [Alpha-Orionis-v0.1-Q5_K_M.gguf](https://huggingface.co/bartowski/Alpha-Orionis-v0.1-GGUF/blob/main/Alpha-Orionis-v0.1-Q5_K_M.gguf) | Q5_K_M | 5.13GB | High quality, *recommended*. | | [Alpha-Orionis-v0.1-Q5_K_S.gguf](https://huggingface.co/bartowski/Alpha-Orionis-v0.1-GGUF/blob/main/Alpha-Orionis-v0.1-Q5_K_S.gguf) | Q5_K_S | 4.99GB | High quality, *recommended*. | | [Alpha-Orionis-v0.1-Q4_K_M.gguf](https://huggingface.co/bartowski/Alpha-Orionis-v0.1-GGUF/blob/main/Alpha-Orionis-v0.1-Q4_K_M.gguf) | Q4_K_M | 4.36GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [Alpha-Orionis-v0.1-Q4_K_S.gguf](https://huggingface.co/bartowski/Alpha-Orionis-v0.1-GGUF/blob/main/Alpha-Orionis-v0.1-Q4_K_S.gguf) | Q4_K_S | 4.14GB | Slightly lower quality with more space savings, *recommended*. | | [Alpha-Orionis-v0.1-IQ4_NL.gguf](https://huggingface.co/bartowski/Alpha-Orionis-v0.1-GGUF/blob/main/Alpha-Orionis-v0.1-IQ4_NL.gguf) | IQ4_NL | 4.12GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. | | [Alpha-Orionis-v0.1-IQ4_XS.gguf](https://huggingface.co/bartowski/Alpha-Orionis-v0.1-GGUF/blob/main/Alpha-Orionis-v0.1-IQ4_XS.gguf) | IQ4_XS | 3.90GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Alpha-Orionis-v0.1-Q3_K_L.gguf](https://huggingface.co/bartowski/Alpha-Orionis-v0.1-GGUF/blob/main/Alpha-Orionis-v0.1-Q3_K_L.gguf) | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. | | [Alpha-Orionis-v0.1-Q3_K_M.gguf](https://huggingface.co/bartowski/Alpha-Orionis-v0.1-GGUF/blob/main/Alpha-Orionis-v0.1-Q3_K_M.gguf) | Q3_K_M | 3.51GB | Even lower quality. | | [Alpha-Orionis-v0.1-IQ3_M.gguf](https://huggingface.co/bartowski/Alpha-Orionis-v0.1-GGUF/blob/main/Alpha-Orionis-v0.1-IQ3_M.gguf) | IQ3_M | 3.28GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [Alpha-Orionis-v0.1-IQ3_S.gguf](https://huggingface.co/bartowski/Alpha-Orionis-v0.1-GGUF/blob/main/Alpha-Orionis-v0.1-IQ3_S.gguf) | IQ3_S | 3.18GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. 
|
| [Alpha-Orionis-v0.1-Q3_K_S.gguf](https://huggingface.co/bartowski/Alpha-Orionis-v0.1-GGUF/blob/main/Alpha-Orionis-v0.1-Q3_K_S.gguf) | Q3_K_S | 3.16GB | Low quality, not recommended. |
| [Alpha-Orionis-v0.1-IQ3_XS.gguf](https://huggingface.co/bartowski/Alpha-Orionis-v0.1-GGUF/blob/main/Alpha-Orionis-v0.1-IQ3_XS.gguf) | IQ3_XS | 3.01GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Alpha-Orionis-v0.1-IQ3_XXS.gguf](https://huggingface.co/bartowski/Alpha-Orionis-v0.1-GGUF/blob/main/Alpha-Orionis-v0.1-IQ3_XXS.gguf) | IQ3_XXS | 2.82GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Alpha-Orionis-v0.1-Q2_K.gguf](https://huggingface.co/bartowski/Alpha-Orionis-v0.1-GGUF/blob/main/Alpha-Orionis-v0.1-Q2_K.gguf) | Q2_K | 2.71GB | Very low quality but surprisingly usable. |
| [Alpha-Orionis-v0.1-IQ2_M.gguf](https://huggingface.co/bartowski/Alpha-Orionis-v0.1-GGUF/blob/main/Alpha-Orionis-v0.1-IQ2_M.gguf) | IQ2_M | 2.50GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Alpha-Orionis-v0.1-IQ2_S.gguf](https://huggingface.co/bartowski/Alpha-Orionis-v0.1-GGUF/blob/main/Alpha-Orionis-v0.1-IQ2_S.gguf) | IQ2_S | 2.31GB | Very low quality, uses SOTA techniques to be usable. |
| [Alpha-Orionis-v0.1-IQ2_XS.gguf](https://huggingface.co/bartowski/Alpha-Orionis-v0.1-GGUF/blob/main/Alpha-Orionis-v0.1-IQ2_XS.gguf) | IQ2_XS | 2.19GB | Very low quality, uses SOTA techniques to be usable. |
| [Alpha-Orionis-v0.1-IQ2_XXS.gguf](https://huggingface.co/bartowski/Alpha-Orionis-v0.1-GGUF/blob/main/Alpha-Orionis-v0.1-IQ2_XXS.gguf) | IQ2_XXS | 1.99GB | Lower quality, uses SOTA techniques to be usable. |
| [Alpha-Orionis-v0.1-IQ1_M.gguf](https://huggingface.co/bartowski/Alpha-Orionis-v0.1-GGUF/blob/main/Alpha-Orionis-v0.1-IQ1_M.gguf) | IQ1_M | 1.75GB | Extremely low quality, *not* recommended. |
| [Alpha-Orionis-v0.1-IQ1_S.gguf](https://huggingface.co/bartowski/Alpha-Orionis-v0.1-GGUF/blob/main/Alpha-Orionis-v0.1-IQ1_S.gguf) | IQ1_S | 1.61GB | Extremely low quality, *not* recommended. |

## Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```
huggingface-cli download bartowski/Alpha-Orionis-v0.1-GGUF --include "Alpha-Orionis-v0.1-Q4_K_M.gguf" --local-dir ./
```

If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:

```
huggingface-cli download bartowski/Alpha-Orionis-v0.1-GGUF --include "Alpha-Orionis-v0.1-Q8_0.gguf/*" --local-dir Alpha-Orionis-v0.1-Q8_0
```

You can either specify a new local-dir (Alpha-Orionis-v0.1-Q8_0) or download them all in place (./)

## Which file should I choose?

A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)

The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.

If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.

If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.

Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.

If you want to get more into the weeds, you can check out this extremely useful feature chart:

[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)

But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.

These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.

The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF
mradermacher
2024-06-03T05:13:56Z
465
0
transformers
[ "transformers", "gguf", "en", "base_model:Nitral-AI/Poppy_Porpoise-1.0-L3-8B", "license:other", "endpoints_compatible", "region:us" ]
null
2024-05-31T13:09:33Z
---
base_model: Nitral-AI/Poppy_Porpoise-1.0-L3-8B
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->

weighted/imatrix quants of https://huggingface.co/Nitral-AI/Poppy_Porpoise-1.0-L3-8B

***The model creator strongly suggests using the [0.72](https://huggingface.co/mradermacher/Poppy_Porpoise-0.72-L3-8B-i1-GGUF) model at this time, as it is of higher quality.***

<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 |  |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 |  |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 |  |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 |  |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 |  |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 |  |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 |  |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 |  |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 |  |
| [GGUF](https://huggingface.co/mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF/resolve/main/Poppy_Porpoise-1.0-L3-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.

<!-- end -->
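If you prefer to script the download instead of fetching files by hand, here is a minimal sketch using the `huggingface_hub` library (assuming `pip install huggingface_hub`; the filename is taken from the i1-Q4_K_M row above, the one marked "fast, recommended"):

```python
# Minimal programmatic download sketch using huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Poppy_Porpoise-1.0-L3-8B-i1-GGUF",
    filename="Poppy_Porpoise-1.0-L3-8B.i1-Q4_K_M.gguf",  # from the table above
)
print(path)  # local cache path of the downloaded quant
```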
Cran-May/openbuddy-yi1.5-9b-v21.1-32k-Q4_K_S-GGUF
Cran-May
2024-06-22T04:13:22Z
465
0
transformers
[ "transformers", "gguf", "mixtral", "llama-cpp", "gguf-my-repo", "text-generation", "zh", "en", "fr", "de", "ja", "ko", "it", "ru", "fi", "base_model:OpenBuddy/openbuddy-yi1.5-9b-v21.1-32k", "license:apache-2.0", "region:us" ]
text-generation
2024-06-22T04:12:59Z
---
base_model: OpenBuddy/openbuddy-yi1.5-9b-v21.1-32k
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
- fi
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- mixtral
- llama-cpp
- gguf-my-repo
inference: false
---

# Cran-May/openbuddy-yi1.5-9b-v21.1-32k-Q4_K_S-GGUF
This model was converted to GGUF format from [`OpenBuddy/openbuddy-yi1.5-9b-v21.1-32k`](https://huggingface.co/OpenBuddy/openbuddy-yi1.5-9b-v21.1-32k) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/OpenBuddy/openbuddy-yi1.5-9b-v21.1-32k) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Cran-May/openbuddy-yi1.5-9b-v21.1-32k-Q4_K_S-GGUF --hf-file openbuddy-yi1.5-9b-v21.1-32k-q4_k_s-imat.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Cran-May/openbuddy-yi1.5-9b-v21.1-32k-Q4_K_S-GGUF --hf-file openbuddy-yi1.5-9b-v21.1-32k-q4_k_s-imat.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Cran-May/openbuddy-yi1.5-9b-v21.1-32k-Q4_K_S-GGUF --hf-file openbuddy-yi1.5-9b-v21.1-32k-q4_k_s-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Cran-May/openbuddy-yi1.5-9b-v21.1-32k-Q4_K_S-GGUF --hf-file openbuddy-yi1.5-9b-v21.1-32k-q4_k_s-imat.gguf -c 2048
```
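As an alternative to the llama.cpp binaries above, here is a minimal Python sketch using the `llama-cpp-python` bindings; `Llama.from_pretrained` is assumed to be available (it exists in recent versions of the package) and downloads the GGUF file from the Hub before loading it:

```python
# Minimal llama-cpp-python sketch (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Cran-May/openbuddy-yi1.5-9b-v21.1-32k-Q4_K_S-GGUF",
    filename="openbuddy-yi1.5-9b-v21.1-32k-q4_k_s-imat.gguf",
    n_ctx=2048,  # matches the -c 2048 used in the server example above
)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])  # generated completion text
```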
tliobnih/Breeze-7B-Instruct-v1_0-Q5_K_M-GGUF
tliobnih
2024-07-01T03:55:51Z
465
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "zh", "en", "base_model:MediaTek-Research/Breeze-7B-Instruct-v1_0", "license:apache-2.0", "region:us" ]
text-generation
2024-07-01T03:55:29Z
---
base_model: MediaTek-Research/Breeze-7B-Instruct-v1_0
language:
- zh
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---

# tliobnih/Breeze-7B-Instruct-v1_0-Q5_K_M-GGUF
This model was converted to GGUF format from [`MediaTek-Research/Breeze-7B-Instruct-v1_0`](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo tliobnih/Breeze-7B-Instruct-v1_0-Q5_K_M-GGUF --hf-file breeze-7b-instruct-v1_0-q5_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo tliobnih/Breeze-7B-Instruct-v1_0-Q5_K_M-GGUF --hf-file breeze-7b-instruct-v1_0-q5_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo tliobnih/Breeze-7B-Instruct-v1_0-Q5_K_M-GGUF --hf-file breeze-7b-instruct-v1_0-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo tliobnih/Breeze-7B-Instruct-v1_0-Q5_K_M-GGUF --hf-file breeze-7b-instruct-v1_0-q5_k_m.gguf -c 2048
```
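Once `llama-server` is running as shown above, any HTTP client can query it. Here is a minimal Python sketch against the server's OpenAI-compatible completion endpoint, assuming the default port 8080 (adjust host and port if you launched the server differently):

```python
# Minimal client sketch for the llama-server started above.
# Assumes the server's OpenAI-compatible /v1/completions endpoint on port 8080.
import requests

resp = requests.post(
    "http://localhost:8080/v1/completions",
    json={"prompt": "The meaning to life and the universe is", "max_tokens": 64},
    timeout=60,
)
print(resp.json()["choices"][0]["text"])  # generated completion text
```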
microsoft/swin-large-patch4-window12-384-in22k
microsoft
2022-05-16T18:40:51Z
464
3
transformers
[ "transformers", "pytorch", "tf", "swin", "image-classification", "vision", "dataset:imagenet-21k", "arxiv:2103.14030", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-21k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

# Swin Transformer (large-sized model)

Swin Transformer model pre-trained on ImageNet-21k (14 million images, 21,841 classes) at resolution 384x384. It was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer).

Disclaimer: The team releasing Swin Transformer did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers, and it has linear computational complexity with respect to input image size because self-attention is computed only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computational complexity with respect to input image size because self-attention is computed globally.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/swin_transformer_architecture.png)

[Source](https://paperswithcode.com/method/swin-transformer)

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swin) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 21,841 ImageNet-21k classes:

```python
from transformers import AutoFeatureExtractor, SwinForImageClassification
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/swin-large-patch4-window12-384-in22k")
model = SwinForImageClassification.from_pretrained("microsoft/swin-large-patch4-window12-384-in22k")

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 21,841 ImageNet-21k classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swin.html).
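Since the model description presents Swin as a general-purpose backbone, here is a minimal sketch (an assumption about typical usage, not from the original card) that extracts image features with `SwinModel` instead of the classification head:

```python
# Minimal backbone-usage sketch: global image features via SwinModel.
from transformers import AutoFeatureExtractor, SwinModel
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/swin-large-patch4-window12-384-in22k")
backbone = SwinModel.from_pretrained("microsoft/swin-large-patch4-window12-384-in22k")

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = backbone(**inputs)
print(outputs.last_hidden_state.shape)  # (1, num_patches, hidden_size) patch features
print(outputs.pooler_output.shape)      # (1, hidden_size) pooled image feature
```

Loading the classification checkpoint into `SwinModel` simply drops the classifier head, which is what you want when feeding the features into a downstream task.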
### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2103-14030, author = {Ze Liu and Yutong Lin and Yue Cao and Han Hu and Yixuan Wei and Zheng Zhang and Stephen Lin and Baining Guo}, title = {Swin Transformer: Hierarchical Vision Transformer using Shifted Windows}, journal = {CoRR}, volume = {abs/2103.14030}, year = {2021}, url = {https://arxiv.org/abs/2103.14030}, eprinttype = {arXiv}, eprint = {2103.14030}, timestamp = {Thu, 08 Apr 2021 07:53:26 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2103-14030.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
cyberagent/xlm-roberta-large-jnli-jsick
cyberagent
2022-10-24T07:14:03Z
464
6
sentence-transformers
[ "sentence-transformers", "pytorch", "xlm-roberta", "nli", "ja", "dataset:jnli", "dataset:jsick", "license:cc-by-4.0", "region:us" ]
null
2022-10-24T07:08:04Z
---
language: ja
license: cc-by-4.0
library_name: sentence-transformers
tags:
- xlm-roberta
- nli
datasets:
- jnli
- jsick
---

# Japanese Natural Language Inference Model

This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class, the [gradient accumulation PR](https://github.com/UKPLab/sentence-transformers/pull/1092), and the code from [CyberAgentAILab/japanese-nli-model](https://github.com/CyberAgentAILab/japanese-nli-model).

## Training Data

The model was trained on the [JGLUE-JNLI](https://github.com/yahoojapan/JGLUE) and [JSICK](https://github.com/verypluming/JSICK) datasets. For a given sentence pair, it outputs three scores corresponding to the labels: contradiction, entailment, neutral.

## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained('cyberagent/xlm-roberta-large-jnli-jsick')
model = AutoModelForSequenceClassification.from_pretrained('cyberagent/xlm-roberta-large-jnli-jsick')

# Premise/hypothesis pairs; the examples translate to
# ("A child is watching a running cat", "A cat is running") and
# ("A cat is running", "A child is running").
features = tokenizer(["子供が走っている猫を見ている", "猫が走っている"],
                     ["猫が走っている", "子供が走っている"],
                     padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = model(**features).logits
    label_mapping = ['contradiction', 'entailment', 'neutral']
    labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
    print(labels)
```
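Continuing the snippet above, here is a small sketch that converts the raw logits into per-label probabilities, since the three scores mentioned in the Training Data section are easier to interpret after a softmax (label order follows `label_mapping`):

```python
# Follow-up sketch: turn the logits from the snippet above into probabilities.
import torch

probs = torch.softmax(scores, dim=1)  # normalize each pair's three logits
for row in probs:
    print({label: round(p.item(), 3) for label, p in zip(label_mapping, row)})
```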